I could use a little help making some changes to our Galaxy server. I'm in the process of setting up and testing a production Galaxy server for our research computing center. The server is set up with an Apache proxy and LDAP authentication, and jobs run as the logged-in user. The machine running Galaxy has 24 cores. While most submitted jobs will go to either our HPC or the Condor cluster, some jobs are small and should run on the Galaxy server itself.

I was planning to set up a single web handler and 23 job handlers. I don't expect the web server to get bogged down, especially with Apache serving as a proxy. I expect a small number of users (<100) submitting many jobs. I only chose 23 handlers because the server has 24 cores; perhaps there is a better way to determine the optimum number of job handlers.

I'd like jobs submitted to Galaxy to go to our HPC, to Condor, or to one of the 23 local workers. Can Galaxy effectively load balance itself in this way? Does the configuration below accomplish this?

universe_wsgi.ini:

[server:handler1]
use = egg:Paste#http
port = 8081
host = 127.0.0.1
use_threadpool = true
threadpool_workers = 10
.
.
[server:handler23]

job_conf.xml:

<job_conf>
    <plugins workers="4">
        <plugin id="local" type="runner" load="galaxy.jobs.runners.local:LocalJobRunner" />
        <plugin id="drmaa" type="runner" load="galaxy.jobs.runners.drmaa:DRMAAJobRunner" />
    </plugins>
    <handlers>
        <handler id="handler1"/>
        ...
        <handler id="handler23"/>
    </handlers>
    <destinations default="dynamic">
        <destination id="local" runner="local" tags="head"/>
        <destination id="dynamic" runner="dynamic">
            <param id="function">default</param>
            <param id="type">python</param>
        </destination>
        <destination id="moab" runner="drmaa" />
    </destinations>
</job_conf>

(I removed a second, duplicate <destination id="local"> entry that was in my first draft of the file.)

In my destinations.py script I point tools to the appropriate destination:

if tool_id.startswith('upload1'):
    return JobDestination(id="local", runner="local")
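
For reference, here is a rough sketch of what I have in mind for the full "default" rule in destinations.py. The tool list and routing choices are just placeholders, and the destination ids ("local", "moab") have to match the <destination> entries in job_conf.xml above; routing anything to Condor would likewise need a condor plugin and destination defined there first.

# destinations.py -- rough sketch of the "default" dynamic rule.
# The tool ids below are placeholders; the destination ids and runner
# names must match what is defined in job_conf.xml.
from galaxy.jobs import JobDestination

# Tools assumed cheap enough to run on the Galaxy server itself (placeholder list).
LOCAL_TOOLS = set(['upload1', 'cat1'])

def default(tool_id):
    # Small utility tools stay on the Galaxy host's local runner.
    if tool_id in LOCAL_TOOLS or tool_id.startswith('upload'):
        return JobDestination(id="local", runner="local")
    # Everything else goes to the HPC through the DRMAA plugin.
    return JobDestination(id="moab", runner="drmaa")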