I am having an issue with pbs_python loading, but not submitting jobs to the cluster.

The first issue is that when I ran 'LIBTORQUE_DIR=/usr/local/lib/ python scripts/scramble.py -e pbs_python', it failed to generate the pbs_python egg in the /galaxy-dist/eggs/ directory.

Secondly, I copied the pbs_python-4.1.0-py2.6-linux-x86_64-ucs4.egg from my production Galaxy server and restarted my test server. It appears to start, but when I attempt to run anything it uses the local runner and fails. The runner0.log shows that the PBSJobRunner is loaded:

galaxy.jobs.handler DEBUG 2012-07-06 10:08:45,681 Loaded job runner: galaxy.jobs.runners.pbs:PBSJobRunner
galaxy.jobs.handler INFO 2012-07-06 10:08:45,682 job handler stop queue started
galaxy.jobs.manager INFO 2012-07-06 10:08:45,687 job manager queue started
galaxy.jobs.manager INFO 2012-07-06 10:08:45,688 job manager stop queue started
galaxy.jobs.handler INFO 2012-07-06 10:08:45,692 job handler queue started

I can't seem to get past this point since I upgraded. Can someone point me in the right direction?

Thanks,
Iry

The information in this email, including attachments, may be confidential and is intended solely for the addressee(s). If you believe you received this email by mistake, please notify the sender by return email as soon as possible.
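[Editor's note: a minimal sketch of the egg build described above, assuming TORQUE's libtorque.so is installed under /usr/local/lib; the galaxy-dist path is illustrative, and the actual error printed by scramble.py is what identifies the failure.]

```shell
# Check that libtorque is where LIBTORQUE_DIR will point (an assumption here).
ls /usr/local/lib/libtorque.so*

# Build the pbs_python egg from the galaxy-dist root; on success, scramble
# writes the egg into the eggs/ directory.
cd /path/to/galaxy-dist
LIBTORQUE_DIR=/usr/local/lib/ python scripts/scramble.py -e pbs_python

# Confirm the egg matches this host's interpreter: the py2.6 / ucs4 parts
# of the filename must match the running Python's version and unicode build.
ls eggs/ | grep pbs_python
python -c "import sys; print(sys.maxunicode)"  # 1114111 indicates a ucs4 build
```

An egg copied from another machine only works if the Python version, architecture, and unicode width (ucs2 vs ucs4) all match the target host, which is why building locally with scramble.py is preferred.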
On Jul 6, 2012, at 12:32 PM, Iry Witham wrote:
Hi Iry,

Did you set default_cluster_job_runner in universe_wsgi.ini?

--nate
___________________________________________________________
Please keep all replies on the list by using "reply all" in your mail client. To manage your subscriptions to this and other Galaxy lists, please use the interface at:
Hi Nate,

I have set this as shown below:

# The URL for the default runner to use when a tool doesn't explicitly define a
# runner below.
default_cluster_job_runner = pbs:////-l nodes=1:ppn=1,walltime=20:00:00

Iry

On 7/12/12 4:29 PM, "Nate Coraor" <nate@bx.psu.edu> wrote:
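[Editor's note: a hedged sketch of how a pbs runner URL like the one above is commonly read (empty fields fall back to defaults; verify the exact field order against the Galaxy cluster documentation, and the server/queue names below are purely illustrative):]

```ini
# pbs://<pbs server>/<queue>/<native qsub options>
# Empty server and queue fields, then native options, gives:
default_cluster_job_runner = pbs:////-l nodes=1:ppn=1,walltime=20:00:00

# With an explicit server and queue it might look like:
# default_cluster_job_runner = pbs://my.pbs.server/batch/-l nodes=1:ppn=1
```

Note that this setting only governs jobs dispatched by a job handler; if jobs keep landing on the local runner anyway, the handler configuration itself is usually the place to look.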
On Jul 13, 2012, at 9:05 AM, Iry Witham wrote:
This should be okay. Could you send along the server output from startup to job completion?

--nate
Will the runner0.log or web0.log be sufficient? If not, what would you like?

Iry

On 7/13/12 2:23 PM, "Nate Coraor" <nate@bx.psu.edu> wrote:
On Jul 13, 2012, at 2:52 PM, Iry Witham wrote:
If you have updated to a relatively recent version of Galaxy, be sure that you have also moved from the old web/runner configuration to the web/manager/handler configuration:

http://wiki.g2.bx.psu.edu/Admin/Config/Performance/Web%20Application%20Scali...

Once you've done this, if there are still problems, send the manager/handler logs.

--nate
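[Editor's note: a sketch of the shape of that web/manager/handler configuration in universe_wsgi.ini; section names, ports, and handler count here are illustrative assumptions, and the wiki page above is authoritative. A job running under the local runner despite a pbs default often means these sections, or the matching start commands, are missing.]

```ini
# universe_wsgi.ini -- one Galaxy process per [server:*] section.
# The manager and handler processes do the job dispatch.
[server:web0]
use = egg:Paste#http
port = 8080

[server:manager]
use = egg:Paste#http
port = 8079

[server:handler0]
use = egg:Paste#http
port = 8090

[app:main]
# ... existing settings ...
default_cluster_job_runner = pbs:////-l nodes=1:ppn=1,walltime=20:00:00
```

Each process is then started under its own server name (e.g. run.sh --server-name=handler0), so the handler logs are the ones that show PBS submission activity.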
participants (2)
- Iry Witham
- Nate Coraor