I think there are some problems with your job_conf.xml setup (for instance, the real-user functionality is not supported with the PBS runner at this time). My advice would be to simplify things first and then build the complexity back in, so you can see which features are causing it to fail: try without tags, real-user destinations, dynamic destinations, etc. Can you try this:

<job_conf>
    <plugins workers="4">
        <plugin id="local" type="runner" load="galaxy.jobs.runners.local:LocalJobRunner" workers="2"/>
        <plugin id="pbs" type="runner" load="galaxy.jobs.runners.pbs:PBSJobRunner" workers="6"/>
    </plugins>
    <handlers>
        <handler id="handler0"/>
    </handlers>
    <destinations default="regularjobs">
        <destination id="local" runner="local"/>
        <destination id="regularjobs" runner="pbs"/>
    </destinations>
</job_conf>

This is a very simple configuration that should submit all jobs to your cluster. Can you try it out and, if it doesn't work, post the relevant bits from the handler's log file?

-John

On Mon, Oct 21, 2013 at 3:55 AM, Jurgens de Bruin <debruinjj@gmail.com> wrote:
Hi,
I am struggling to get Galaxy to submit jobs to a cluster; I am using Torque and Maui.
This is my universe_wsgi.ini
# ----- Handler Definitions -------------------------------------------------

[server:web0]
use = egg:Paste#http
port = 8181
host = 127.0.0.1
use_threadpool = true
threadpool_workers = 7

[server:web1]
use = egg:Paste#http
port = 8182
host = 127.0.0.1
use_threadpool = true
threadpool_workers = 7

[server:handler0]
use = egg:Paste#http
port = 8090
host = 127.0.0.1
use_threadpool = true
threadpool_workers = 5

[server:handler1]
use = egg:Paste#http
port = 8091
host = 127.0.0.1
use_threadpool = true
threadpool_workers = 5

# ------ Handler Definitions -------------------------------------------------
This is job_conf.xml
<?xml version="1.0"?>
<!-- A sample job config that explicitly configures job running the way it is
     configured by default (if there is no explicit config). -->
<job_conf>
    <plugins workers="4">
        <plugin id="local" type="runner" load="galaxy.jobs.runners.local:LocalJobRunner" workers="2"/>
        <plugin id="pbs" type="runner" load="galaxy.jobs.runners.pbs:PBSJobRunner" workers="6"/>
    </plugins>
    <handlers default="handlers">
        <handler id="handler0" tags="handlers"/>
        <handler id="handler1" tags="handlers"/>
    </handlers>
    <destinations default="regularjobs">
        <destination id="local" runner="local"/>
        <destination id="regularjobs" runner="pbs" tags="cluster">
            <param id="Resource_List">walltime=24:00:00</param>
            <param id="galaxy_external_runjob_script">scripts/drmaa_external_runner.py</param>
            <param id="galaxy_external_killjob_script">scripts/drmaa_external_killer.py</param>
            <param id="galaxy_external_chown_script">scripts/external_chown_script.py</param>
        </destination>
        <destination id="real_user_cluster" runner="pbs">
            <param id="galaxy_external_runjob_script">scripts/drmaa_external_runner.py</param>
            <param id="galaxy_external_killjob_script">scripts/drmaa_external_killer.py</param>
            <param id="galaxy_external_chown_script">scripts/external_chown_script.py</param>
        </destination>
        <destination id="dynamic" runner="dynamic">
            <!-- A destination that represents a method in the dynamic runner. -->
            <param id="type">python</param>
            <param id="function">interactiveOrCluster</param>
        </destination>
    </destinations>
</job_conf>
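The dynamic destination above refers to a rule function named interactiveOrCluster. For context, such a rule is an ordinary Python function that Galaxy loads from a file under lib/galaxy/jobs/rules/ and calls with whichever of its argument names (e.g. app, tool, job, user_email) Galaxy can supply; it returns the id of one of the destinations in job_conf.xml. A rough sketch of what a function like that could look like (the body and the tool ids below are placeholders, not my actual rule):

# Hypothetical sketch of a dynamic rule; the real file would live in
# lib/galaxy/jobs/rules/ (any .py file there is scanned for rule functions).
# The function name must match the "function" param of the dynamic destination.

def interactiveOrCluster(tool):
    # Keep quick, interactive-style tools on the local runner; send
    # everything else to the PBS destination defined in job_conf.xml.
    # The tool ids below are placeholders, not a real policy.
    quick_tools = ("upload1",)
    if tool.id in quick_tools:
        return "local"          # "local" destination id from job_conf.xml
    return "regularjobs"        # PBS destination id from job_conf.xml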
I can submit jobs to the cluster manually, so I know that part of the setup is working. Any help would be appreciated.
-- Regards/Groete/Mit freundlichen Grüßen/recuerdos/meilleures salutations/ distinti saluti/siong/duì yú/привет
Jurgens de Bruin