[Apologies for my previous email: finger slippage meant I sent it before completing it.]

Dear dev,

I would be grateful for some assistance in setting up our Galaxy server to submit jobs to our Torque server. We are using release_2013.04.01. I have tried going through the install documentation, but clearly I've misunderstood something, and unfortunately most examples come from earlier versions.

Here is my job_conf.xml file - this is just a temporary file while I set things up. My intention is to run clustalw on our cluster using the Torque queue WitsLong, assuming our Torque server is galaxy.bioinf.wits.ac.za. Below that is the output I get in the log. From the Torque log it complains that there is no default queue specified.

Any help would be appreciated.

Thanks,
Scott

<?xml version="1.0"?>
<!-- A sample job config that explicitly configures job running the way it is
     configured by default (if there is no explicit config). -->
<job_conf>
    <plugins>
        <plugin id="local" type="runner" load="galaxy.jobs.runners.local:LocalJobRunner"/>
        <plugin id="pbs" type="runner" load="galaxy.jobs.runners.pbs:PBSJobRunner"/>
    </plugins>
    <handlers default="handlers">
        <handler id="main" tags="handlers,pbs"/>
    </handlers>
    <destinations default="WitsLong">
        <destination id="local" runner="local"/>
        <destination id="WitsLong" runner="pbs">
            <param id="Resource_List">walltime=72:00:00,nodes=1:ppn=8</param>
            <param id="-p">128</param>
        </destination>
    </destinations>
    <tools>
        <tool id="foo" handler="trackster_handler">
            <param id="source">trackster</param>
        </tool>
        <tool id="clustalw" destination="WitsLong"/>
    </tools>
</job_conf>

galaxy.jobs DEBUG 2013-04-30 15:28:13,481 (33) Working directory for job is: /home/galaxy/galaxy-dist/database/job_working_directory/000/33
galaxy.jobs.handler DEBUG 2013-04-30 15:28:13,527 (33) Dispatching to pbs runner
146.141.75.218, 146.141.1.81 - - [30/Apr/2013:15:28:12 +0200] "POST /galaxy/tool_runner/index HTTP/1.1" 200 - "http://galaxy.bioinf.wits.ac.za/galaxy/tool_runner/rerun?id=9752b387803d3e1e" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7; rv:20.0) Gecko/20100101 Firefox/20.0"
galaxy.jobs DEBUG 2013-04-30 15:28:14,213 (33) Persisting job destination (destination id: WitsLong)
Exception AssertionError: AssertionError('State <sqlalchemy.orm.state.InstanceState object at 0x7f79682e5150> is not present in this identity map',) in <bound method InstanceState._cleanup of <sqlalchemy.orm.state.InstanceState object at 0x7f79682e5150>> ignored
galaxy.jobs.handler INFO 2013-04-30 15:28:14,522 (33) Job dispatched
galaxy.tools DEBUG 2013-04-30 15:28:15,189 Building dependency shell command for dependency 'clustalw2'
galaxy.tools WARNING 2013-04-30 15:28:15,192 Failed to resolve dependency on 'clustalw2', ignoring
galaxy.jobs.runners.pbs DEBUG 2013-04-30 15:28:15,957 (33) submitting file /home/galaxy/galaxy-dist/database/pbs/33.sh
galaxy.jobs.runners.pbs DEBUG 2013-04-30 15:28:15,958 (33) command is: python /home/galaxy/galaxy-dist/tools/rgenetics/rgClustalw.py -i "/home/galaxy/galaxy-dist/database/files/000/dataset_2.dat" -o "/home/galaxy/galaxy-dist/database/files/000/dataset_60.dat" -s "ALIGNED" -l "/home/galaxy/galaxy-dist/database/files/000/dataset_61.dat" -t "Clustal_run" -d "DNA" -f "CLUSTAL"
galaxy.jobs.runners.pbs WARNING 2013-04-30 15:28:15,963 (33) pbs_submit failed (try 1/5), PBS error 15039: Route rejected by all destinations
galaxy.jobs DEBUG 2013-04-30 15:28:16,400 (34) Working directory for job is: /home/galaxy/galaxy-dist/database/job_working_directory/000/34
galaxy.jobs.handler DEBUG 2013-04-30 15:28:16,416 (34) Dispatching to pbs runner
146.141.75.218, 146.141.1.81 - - [30/Apr/2013:15:28:13 +0200] "POST /galaxy/tool_runner/index HTTP/1.1" 200 - "http://galaxy.bioinf.wits.ac.za/galaxy/tool_runner/rerun?id=9752b387803d3e1e" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7; rv:20.0) Gecko/20100101 Firefox/20.0"
galaxy.jobs DEBUG 2013-04-30 15:28:16,668 (34) Persisting job destination (destination id: WitsLong)
galaxy.jobs.handler INFO 2013-04-30 15:28:16,689 (34) Job dispatched
galaxy.tools DEBUG 2013-04-30 15:28:16,937 Building dependency shell command for dependency 'clustalw2'
galaxy.tools WARNING 2013-04-30 15:28:16,938 Failed to resolve dependency on 'clustalw2', ignoring
galaxy.jobs.runners.pbs DEBUG 2013-04-30 15:28:17,160 (34) submitting file /home/galaxy/galaxy-dist/database/pbs/34.sh
galaxy.jobs.runners.pbs DEBUG 2013-04-30 15:28:17,161 (34) command is: python /home/galaxy/galaxy-dist/tools/rgenetics/rgClustalw.py -i "/home/galaxy/galaxy-dist/database/files/000/dataset_2.dat" -o "/home/galaxy/galaxy-dist/database/files/000/dataset_62.dat" -s "ALIGNED" -l "/home/galaxy/galaxy-dist/database/files/000/dataset_63.dat" -t "Clustal_run" -d "DNA" -f "CLUSTAL"
galaxy.jobs.runners.pbs WARNING 2013-04-30 15:28:17,162 (34) pbs_submit failed (try 1/5), PBS error 15039: Route rejected by all destinations
galaxy.jobs.runners.pbs WARNING 2013-04-30 15:28:17,971 (33) pbs_submit failed (try 2/5), PBS error 15039: Route rejected by all destinations
galaxy.jobs.runners.pbs WARNING 2013-04-30 15:28:19,166 (34) pbs_submit failed (try 2/5), PBS error 15039: Route rejected by all destinations
galaxy.jobs.runners.pbs WARNING 2013-04-30 15:28:19,975 (33) pbs_submit failed (try 3/5), PBS error 15039: Route rejected by all destinations
galaxy.jobs.runners.pbs WARNING 2013-04-30 15:28:21,170 (34) pbs_submit failed (try 3/5), PBS error 15039: Route rejected by all destinations
galaxy.jobs.runners.pbs WARNING 2013-04-30 15:28:21,978 (33) pbs_submit failed (try 4/5), PBS error 15039: Route rejected by all destinations
galaxy.jobs.runners.pbs WARNING 2013-04-30 15:28:23,173 (34) pbs_submit failed (try 4/5), PBS error 15039: Route rejected by all destinations
galaxy.jobs.runners.pbs WARNING 2013-04-30 15:28:23,982 (33) pbs_submit failed (try 5/5), PBS error 15039: Route rejected by all destinations
galaxy.jobs.runners.pbs WARNING 2013-04-30 15:28:25,176 (34) pbs_submit failed (try 5/5), PBS error 15039: Route rejected by all destinations
galaxy.jobs.runners.pbs ERROR 2013-04-30 15:28:25,984 (33) All attempts to submit job failed
galaxy.jobs.runners.pbs ERROR 2013-04-30 15:28:27,179 (34) All attempts to submit job failed
On Tue, Apr 30, 2013 at 9:46 AM, Scott Hazelhurst <Scott.Hazelhurst@wits.ac.za> wrote:
Below that is the output I get in the log. From the torque log it complains that there is no default queue specified.
Hi Scott,

Today I was hit by what seems to be the same issue. In my case it was a problem with my Torque server: I had failed to set a default queue on the server, which meant I always needed to use something like '-q batch' with 'qsub'.

Modifying your 'job_conf.xml' in this way will probably fix it:

<destinations default="WitsLong">
    <destination id="local" runner="local"/>
    <destination id="WitsLong" runner="pbs">
        <param id="Resource_List">walltime=72:00:00,nodes=1:ppn=8</param>
        <param id="-p">128</param>
        <param id="-q">batch</param>
    </destination>
</destinations>

If you have access to the Torque server configuration, this should do it and is probably the better option:

# qmgr -c 'set server default_queue = batch'

Hopefully this can still help you or anybody else running into this issue and finding this thread.

Best,
Carlos
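P.S. If you want to double-check whether a default queue is already set on the Torque side, something like the following should show it (assuming the standard 'qmgr' and 'qstat' client tools are available on the server):

# print the server configuration and look for a default_queue line
# qmgr -c 'print server' | grep default_queue
# list the queues the server knows about
# qstat -Q

If 'default_queue' is absent and a job is submitted without an explicit '-q', Torque has no destination to route it to, which is consistent with the 'PBS error 15039: Route rejected by all destinations' messages in your log. It can also help to try a manual submission as the galaxy user, e.g. something like 'echo hostname | qsub -q WitsLong', to confirm that the queue accepts jobs independently of Galaxy.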
participants (2)
- Carlos Borroto
- Scott Hazelhurst