On Jul 22, 2014, at 11:25 AM, Kandalaft, Iyad <Iyad.Kandalaft@agr.gc.ca> wrote:
Your overall thought process seems correct. I assume you still have a single web process for Galaxy, and that it is the only process being proxied by Apache?
Make sure your universe_wsgi.ini has the option set to manage jobs in the database (required for multiple handlers).
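If I recall correctly, that option is track_jobs_in_database under [app:main] (treat the exact name as something to verify against the sample universe_wsgi.ini rather than gospel):

[app:main]
track_jobs_in_database = True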
I would start with 8 handlers and work my way up (despite the Python GIL issue). I suspect that 8 handlers with 4 worker threads each would easily saturate your 24-core server.
I believe you need to set the default attribute and the tags attribute (I could be mistaken).
Here's what I have, and it seems to work as expected (please correct it if it's wrong):
<?xml version="1.0"?>
<job_conf>
    <plugins>
        <plugin id="drmaa" type="runner" load="galaxy.jobs.runners.drmaa:DRMAAJobRunner"/>
        <plugin id="local" type="runner" load="galaxy.jobs.runners.local:LocalJobRunner" workers="4"/>
    </plugins>
    <destinations default="biocluster">
        <!-- Executes jobs on the biocluster | this is the default -->
        <destination id="biocluster" runner="drmaa">
            <param id="nativeSpecification">-q all.q -S /bin/bash</param>
        </destination>
        <!-- Executes jobs locally on the galaxy host server -->
        <destination id="local" runner="local"/>
    </destinations>
    <handlers default="handlers">
        <handler id="handler0" tags="handlers"/>
        <handler id="handler1" tags="handlers"/>
        <handler id="handler2" tags="handlers"/>
    </handlers>
    <tools>
        <tool id="upload1" destination="local"/>
    </tools>
</job_conf>
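One more note, based on an assumption about how you start your processes: each handler id above should have a matching [server:...] section in universe_wsgi.ini so the process actually exists, e.g. (the port is just an example)

[server:handler0]
use = egg:Paste#http
port = 8090
host = 127.0.0.1

and likewise [server:handler1] and [server:handler2] on their own ports.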
Hi Donnie,
I'd second Iyad's recommendation that you do not need 23 handlers. If you don't configure any limits, the number of concurrent jobs in the local runner is the number of workers you start for the local runner plugin multiplied by the number of handlers configured to accept jobs for destinations that use the local plugin (e.g. 4 workers x 23 handlers = 92 concurrent local jobs).
A much better way to control the number of concurrent jobs on local destinations is to use the destination_total_concurrent_jobs limit, as explained in the advanced job config sample:
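Roughly, that means a <limits> section like the following (a minimal sketch patterned on job_conf.xml.sample_advanced; the value of 8 is just an example):

<limits>
    <limit type="destination_total_concurrent_jobs" id="local">8</limit>
</limits>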
You don't need to use a dynamic plugin to statically assign tools to specific destinations. This can be done under the <tools> section of job_conf.xml:
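For example (upload1 and the destination ids come from the configs in this thread; bwa is just a placeholder tool id):

<tools>
    <tool id="upload1" destination="local"/>
    <tool id="bwa" destination="moab"/>
</tools>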
As far as load balancing goes, there is no way to do this using Galaxy's config mechanisms. You'd need to write a dynamic job running plugin that is able to inspect the available job destinations and choose the desired one based on whatever criteria about the job are appropriate.
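As a very rough sketch of what such a rule might look like (the function name, the size threshold, and the idea of keying on input size are my assumptions, not an established recipe; the destination ids match your job_conf.xml):

# Hypothetical dynamic rule: route jobs with small inputs to the local
# destination, everything else to the DRMAA (moab) destination.
SMALL_INPUT_BYTES = 50 * 1024 * 1024  # arbitrary 50 MB cutoff

def balance_by_input_size(job):
    # Sum the sizes of the job's input datasets.
    total = sum(da.dataset.get_size() for da in job.input_datasets if da.dataset)
    if total < SMALL_INPUT_BYTES:
        return "local"   # destination id defined in job_conf.xml
    return "moab"        # DRMAA destination id defined in job_conf.xml

You would then point the <param id="function"> of your dynamic destination at whatever you name the rule.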
--nate
Iyad Kandalaft
Microbial Biodiversity Bioinformatics
Agriculture and Agri-Food Canada | Agriculture et Agroalimentaire Canada
960 Carling Ave.| 960 Ave. Carling
Ottawa, ON| Ottawa (ON) K1A 0C6
E-mail Address / Adresse courriel Iyad.Kandalaft@agr.gc.ca
Telephone | Téléphone 613-759-1228
Facsimile | Télécopieur 613-759-1701
Teletypewriter | Téléimprimeur 613-773-2600
Government of Canada | Gouvernement du Canada
-----Original Message-----
From: galaxy-dev-bounces@lists.bx.psu.edu [mailto:galaxy-dev-bounces@lists.bx.psu.edu] On Behalf Of Shrum, Donald C
Sent: Tuesday, July 22, 2014 10:24 AM
To: galaxy-dev@lists.bx.psu.edu
Subject: [galaxy-dev] handlers
I could use a little bit of help making some changes to our Galaxy server.
I'm in the process of setting up and testing a production Galaxy server for our research computing center. The server is set up with an Apache proxy and LDAP authentication, and jobs will run as the logged-in user.
The server running Galaxy has 24 cores. While most of the submitted jobs will go to either our HPC or the Condor cluster, there are some small jobs that should run on the Galaxy server itself.
I was planning to set up a single web handler and 23 job handlers. I don't expect the web server to get bogged down, especially since I have Apache serving as a proxy.
I expect a smaller (<100) number of users submitting many jobs.
I went with 23 handlers for no reason other than that the server has 24 cores. Perhaps there is a better way to determine the optimal number of job handlers.
I'd like jobs submitted to Galaxy to go to our HPC cluster, to Condor, or to one of the 23 local workers. Can Galaxy effectively load-balance itself in this way?
Does the configuration below accomplish this?
universe_wsgi.ini:
[server:handler1]
use = egg:Paste#http
port = 8081
host = 127.0.0.1
use_threadpool = true
threadpool_workers = 10
.
.
[server:handler23]
job_conf.xml:
<job_conf>
    <plugins workers="4">
        <plugin id="local" type="runner" load="galaxy.jobs.runners.local:LocalJobRunner" />
        <plugin id="drmaa" type="runner" load="galaxy.jobs.runners.drmaa:DRMAAJobRunner" />
    </plugins>
    <handlers>
        <handler id="handler1"/>
        ...
        <handler id="handler23"/>
    </handlers>
    <destinations default="dynamic">
        <destination id="local" runner="local" tags="head"/>
        <destination id="dynamic" runner="dynamic">
            <param id="function">default</param>
            <param id="type">python</param>
        </destination>
        <destination id="moab" runner="drmaa" />
    </destinations>
</job_conf>
In my destinations.py script I point tools to the appropriate destination:
if tool_id.startswith('upload1'):
    return JobDestination(id="local", runner="local")