Hello Will,
The second command returns the container.hpp inside the directory of the Torque source which I used to install it:

find / -name container.hpp
/home/user/torque-5.1.1.2-1_18e4a5f1/src/include/container.hpp
I noticed there is also a log.h file there.
The first command doesn't return anything (I've searched manually too; there is nothing inside /usr/local/include/).
How do I fix this? Should I change the export command for PBS_PYTHON_INCLUDEDIR?
Thank you, Makis
Date: Thu, 24 Sep 2015 08:31:07 -0700
Subject: Re: [galaxy-dev] pbs_python can't be installed via scramble
From: wholtz@lygos.com
To: makis4ever@hotmail.com
CC: galaxy-dev@lists.galaxyproject.org
And does find /usr/local/include -name container.hpp or find / -name container.hpp return torque/src/include/container.hpp? My guess is you installed binaries for Torque, but the scrambling requires the Torque source (or at least the header files from Torque).

-Will

On Thu, Sep 24, 2015 at 1:16 AM, Makis Ladoukakis makis4ever@hotmail.com wrote:
Hello Will,
Thanks again for your reply. Yes, I had already tried these, but I got the same container.hpp error.
The only thing different in my case is that the directory names are a bit different:

export PBS_PYTHON_INCLUDEDIR=/usr/local/include/
export PBSCONFIG=/usr/local/bin/pbs-config
export LIBTORQUE_DIR=/usr/local/lib/libtorque.so
python scripts/scramble.py -e pbs_python
Any ideas?
Kind regards, Makis
Date: Wed, 23 Sep 2015 07:06:32 -0700
Subject: Re: [galaxy-dev] pbs_python can't be installed via scramble
From: wholtz@lygos.com
To: makis4ever@hotmail.com
CC: galaxy-dev@lists.galaxyproject.org
Hi Makis,

The thread I posted does have the exact same container.hpp error and proposes a solution. Did you try setting your PBS_PYTHON_INCLUDEDIR to point to your torque/include directory, as mentioned in that thread? From the thread:

export PBS_PYTHON_INCLUDEDIR=/usr/local/torque/include/
export PBSCONFIG=/usr/local/torque/bin/pbs-config
export LIBTORQUE_DIR=/usr/local/torque/lib/libtorque.so
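A quick way to check whether the directory those exports point at actually contains the headers the build asks for, before re-running scramble (the path is the one from the quoted commands; adjust it to your install):

```shell
# Pre-flight check: do the headers the build complained about exist
# in the directory we are about to export? Path is illustrative.
INCLUDEDIR=/usr/local/torque/include
for hdr in log.h container.hpp; do
  if [ -e "$INCLUDEDIR/$hdr" ]; then
    echo "found:   $INCLUDEDIR/$hdr"
  else
    echo "missing: $INCLUDEDIR/$hdr"
  fi
done
```

If either header comes back "missing", the exported directory is not the one the compiler needs.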
-Will
On Wed, Sep 23, 2015 at 5:49 AM, Makis Ladoukakis makis4ever@hotmail.com wrote:
Hello Will,
Thank you for your reply. I had already checked this thread while browsing the mailing list for "pbs_python" issues. It doesn't have the same errors (mine can't locate log.h, and when I provide that, it asks for container.hpp), and I have already followed the "rescrambling" of pbs_python up to a point.
Anyone else had any experience with that?
Thank you, Makis
Date: Tue, 22 Sep 2015 09:35:11 -0700
Subject: Re: [galaxy-dev] pbs_python can't be installed via scramble
From: wholtz@lygos.com
To: makis4ever@hotmail.com
CC: galaxy-dev@lists.galaxyproject.org
Here is an old thread that looks rather similar to your problems: https://www.mail-archive.com/galaxy-dev@lists.galaxyproject.org/msg00078.htm...
-Will
On Tue, Sep 22, 2015 at 2:43 AM, Makis Ladoukakis makis4ever@hotmail.com wrote:
Hello everyone,
I'm trying to set up a Galaxy instance on a multi-core server in my university so according to the instructions here:
https://wiki.galaxyproject.org/Admin/Config/Performance/Cluster#PBS
I installed Torque, changed the eggs.ini file by adding version 4.4.0 of pbs_python,
and tried to setup pbs_python via scramble:
LIBTORQUE_DIR=/usr/local/lib/ python scripts/scramble.py -e pbs_python
When I did that I got the error:
Failed to find log.h in inlcude dir /usr/include/torque. (Set incude dir via PBS_PYTHON_INCLUDEDIR variable)
Traceback (most recent call last):
  File "scripts/scramble.py", line 50, in <module>
    egg.scramble()
  File "/home/galaxy/galaxy/lib/galaxy/eggs/scramble.py", line 57, in scramble
    self.run_scramble_script()
  File "/home/galaxy/galaxy/lib/galaxy/eggs/scramble.py", line 210, in run_scramble_script
    raise ScrambleFailure( self, "%s(): Egg build failed for %s %s" % ( sys._getframe().f_code.co_name, self.name, self.version ) )
galaxy.eggs.scramble.ScrambleFailure: run_scramble_script(): Egg build failed for pbs_python 4.4.0
so I did some digging around and found that the file I need is in
/scripts/scramble/build/py2.7-linux-x86_64-ucs4/pbs_python/src/C++/
(please correct me if I am wrong)
So I tried again using:
PBS_PYTHON_INCLUDEDIR=/home/galaxy/galaxy/scripts/scramble/build/py2.7-linux-x86_64-ucs4/pbs_python/src/C++/ LIBTORQUE_DIR=/usr/local/lib/ python scripts/scramble.py -e pbs_python
but then I got the error:
In file included from src/C++/pbs_ifl.h:90:0,
                 from src/C++/pbs_wrap.cxx:2978:
/usr/local/include/u_hash_map_structs.h:82:25: fatal error: container.hpp: No such file or directory
 #include "container.hpp"
Can someone help me please?
Kind regards, Makis
___________________________________________________________
Please keep all replies on the list by using "reply all"
in your mail client. To manage your subscriptions to this
and other Galaxy lists, please use the interface at:
https://lists.galaxyproject.org/
To search Galaxy mailing lists use the unified search at:
http://galaxyproject.org/search/mailinglists/
Hello again,
So I figured out a workaround. I had to downgrade Torque to version 4.2.10 (the latest version of Torque that doesn't have compatibility issues with pbs_python 4.4.0).
Then I used:
export PBS_PYTHON_INCLUDEDIR=/home/user/torque-4.2.10/src/include/
to include... well... the include directory
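Putting the working sequence together in one place (the include path is the Torque 4.2.10 source tree from this message; the other two exports are the ones used earlier in the thread, so adjust all three to your install):

```shell
# Build the pbs_python egg against the Torque *source* headers,
# not the installed binary tree. All paths are from this thread.
export PBS_PYTHON_INCLUDEDIR=/home/user/torque-4.2.10/src/include/
export PBSCONFIG=/usr/local/bin/pbs-config
export LIBTORQUE_DIR=/usr/local/lib/libtorque.so
# sanity check: log.h must exist where we point the build
[ -e "${PBS_PYTHON_INCLUDEDIR}log.h" ] || echo "log.h not found -- wrong include dir?"
python scripts/scramble.py -e pbs_python
```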
So the pbs_python egg for 4.4.0 was created successfully.
But then, after running Galaxy:
GALAXY_RUN_ALL=1 sh run.sh --daemon
I got another error:
galaxy.jobs.runners DEBUG 2015-09-25 15:08:22,030 Loading DRMAARunner with params: {'invalidjobexception_retries': '0', 'invalidjobexception_state': 'ok', 'internalexception_retries': '0', 'internalexception_state': 'ok'}
galaxy.jobs.runners.state_handler_factory DEBUG 2015-09-25 15:08:22,030 Loaded 'failure' state handler from module galaxy.jobs.runners.state_handlers.resubmit
Traceback (most recent call last):
  File "/home/galaxy/galaxy/lib/galaxy/webapps/galaxy/buildapp.py", line 51, in app_factory
    app = galaxy.app.UniverseApplication( global_conf=global_conf, **kwargs )
  File "/home/galaxy/galaxy/lib/galaxy/app.py", line 142, in __init__
    self.job_manager = manager.JobManager( self )
  File "/home/galaxy/galaxy/lib/galaxy/jobs/manager.py", line 23, in __init__
    self.job_handler = handler.JobHandler( app )
  File "/home/galaxy/galaxy/lib/galaxy/jobs/handler.py", line 32, in __init__
    self.dispatcher = DefaultJobDispatcher( app )
  File "/home/galaxy/galaxy/lib/galaxy/jobs/handler.py", line 722, in __init__
    self.job_runners = self.app.job_config.get_job_runner_plugins( self.app.config.server_name )
  File "/home/galaxy/galaxy/lib/galaxy/jobs/__init__.py", line 667, in get_job_runner_plugins
    rval[id] = runner_class( self.app, runner[ 'workers' ], **runner.get( 'kwds', {} ) )
  File "/home/galaxy/galaxy/lib/galaxy/jobs/runners/drmaa.py", line 62, in __init__
    drmaa = __import__( "drmaa" )
  File "/home/galaxy/galaxy/eggs/drmaa-0.7.6-py2.7.egg/drmaa/__init__.py", line 48, in <module>
ZipImportError: bad local file header in /home/galaxy/galaxy/eggs/drmaa-0.7.6-py2.7.egg
Removing PID file handler0.pid
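A "bad local file header" from zipimport usually means the egg archive on disk is corrupt (for example a truncated download). One way to check the file directly (the path is the one from the traceback above):

```shell
# Verify the drmaa egg is a readable zip archive.
EGG=/home/galaxy/galaxy/eggs/drmaa-0.7.6-py2.7.egg
if [ -f "$EGG" ] && python -c "import sys, zipfile; sys.exit(0 if zipfile.is_zipfile(sys.argv[1]) else 1)" "$EGG"; then
  echo "egg looks like a valid zip archive"
else
  echo "egg is missing or corrupt -- delete it and let Galaxy fetch a clean copy"
fi
```

Deleting a corrupt egg and re-running the fetch/scramble step should make Galaxy download a fresh copy; that re-fetch behaviour is my assumption about the old eggs system, so verify it against your Galaxy version.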
Has anyone ever encountered that?
Thank you, Makis
From: makis4ever@hotmail.com
To: wholtz@lygos.com
Date: Fri, 25 Sep 2015 12:07:02 +0300
CC: galaxy-dev@lists.galaxyproject.org
Subject: Re: [galaxy-dev] pbs_python can't be installed via scramble
Hello everyone,
I am trying to set up a local Galaxy instance that is publicly available and configured to use PBS/Torque, in order to fully exploit a multi-CPU server.
So I changed my job_conf.xml and galaxy.ini files (described at the end of this e-mail) according to the instructions on the Galaxy site, but when I tried to run the run.sh script:
GALAXY_RUN_ALL=1 sh run.sh --daemon
all the handlers start on 127.0.0.1 instead of 0.0.0.0.
Has anyone encountered that issue before? Am I missing something?
Thank you, Makis
The changes in the files:
galaxy.ini
[uwsgi]
processes = 12
stats = 0.0.0.0:9191    # i had this at 127.0.0.1:9191 but with no difference
socket = 0.0.0.0:4001   # I had this at 127.0.0.1:4001 but with no difference
pythonpath = lib
threads = 4
logto = /home/galaxy/uwsgi.log
master = True
[server:handler0]
use = egg:Paste#http
port = 8090
use_threadpool = True
threadpool_workers = 10

[server:handler1]
use = egg:Paste#http
port = 8091
use_threadpool = True
threadpool_workers = 10

[server:handler2]
use = egg:Paste#http
port = 8092
use_threadpool = True
threadpool_workers = 10

[server:handler3]
use = egg:Paste#http
port = 8093
use_threadpool = True
threadpool_workers = 10

[server:handler4]
use = egg:Paste#http
port = 8094
use_threadpool = True
threadpool_workers = 10

[server:handler5]
use = egg:Paste#http
port = 8095
use_threadpool = True
threadpool_workers = 10
# The port on which to listen.
#port = 8090

# The address on which to listen. By default, only listen to localhost
# (Galaxy will not be accessible over the network). Use '0.0.0.0' to
# listen on all available network interfaces.
host = 0.0.0.0
job_conf.xml

<?xml version="1.0"?>
<job_conf>
    <plugins workers="16">
        <!-- "workers" is the number of threads for the runner's work queue.
             The default from <plugins> is used if not defined for a <plugin>. -->
        <plugin id="pbs" type="runner" load="galaxy.jobs.runners.pbs:PBSJobRunner" workers="16"/>
    </plugins>
    <handlers default="handlers">
        <!-- Additional job handlers - the id should match the name of a
             [server:<id>] in galaxy.ini. -->
        <handler id="handler0" tags="handlers"/>
        <handler id="handler1" tags="handlers"/>
        <handler id="handler2" tags="handlers"/>
        <handler id="handler3" tags="handlers"/>
        <handler id="handler4" tags="handlers"/>
        <handler id="handler5" tags="handlers"/>
    </handlers>
    <destinations default="torque">
        <!-- Destinations define details about remote resources and how jobs
             should be executed on those remote resources. -->
        <destination id="torque" runner="pbs"/>
        <destination id="other_cluster" runner="pbs">
            <param id="destination">@other.cluster</param>
        </destination>
        <destination id="long_jobs" runner="pbs">
            <param id="Resource_List">walltime=72:00:00, nodes=1:ppn=8</param>
            <param id="-p">128</param>
        </destination>
    </destinations>
</job_conf>
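One likely cause for the 127.0.0.1 binding, noted here as an assumption about Paste rather than a confirmed fact: egg:Paste#http only listens on the address given by a host setting in its own [server:...] section (or an inherited default), and the handler sections above set only port. A per-handler host line would look like:

```ini
[server:handler0]
use = egg:Paste#http
# without an explicit host here, Paste binds its default (localhost)
host = 0.0.0.0
port = 8090
use_threadpool = True
threadpool_workers = 10
```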
Actually, I just fixed that by adding the "host" setting in each handler, but I get the same error as before in the browser when I try to access the webpage:

Bad Gateway
The proxy server received an invalid response from an upstream server.
My apache configuration settings are the following:
RewriteEngine on
#RewriteRule ^(.*) http://localhost:8080$1 [P]
RewriteRule ^/galaxy$ /galaxy/ [R]
RewriteRule ^/galaxy/static/style/(.*) /home/galaxy/galaxy/static/june_2007_sty$
RewriteRule ^/galaxy/static/scripts/(.*) /home/galaxy/galaxy/static/scripts/pac$
RewriteRule ^/galaxy/static/(.*) /home/galaxy/galaxy/static/$1 [L]
RewriteRule ^/galaxy/favicon.ico /home/galaxy/galaxy/static/favicon.ico [L]
RewriteRule ^/galaxy/robots.txt /home/galaxy/galaxy/static/robots.txt [L]

LoadModule proxy_uwsgi_module /usr/lib64/httpd/modules/mod_proxy_uwsgi.so
LoadModule uwsgi_module /usr/lib64/httpd/modules/mod_uwsgi.so

<Location "/galaxy">
    SetHandler uwsgi-handler
    uWSGISocket 0.0.0.0:4001
    uWSGImaxVars 512
</Location>
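"Bad Gateway" generally means Apache got no usable answer on the socket it proxies to, so it is worth confirming that uWSGI is really listening on port 4001 and that only one uwsgi module handles the request (loading both mod_uwsgi and mod_proxy_uwsgi for the same location can conflict). With mod_proxy_uwsgi, the usual form is a ProxyPass rather than a SetHandler block; this is a sketch that assumes uWSGI on 127.0.0.1:4001 as configured in the galaxy.ini above:

```apache
# Alternative stanza using mod_proxy_uwsgi only (sketch).
LoadModule proxy_module /usr/lib64/httpd/modules/mod_proxy.so
LoadModule proxy_uwsgi_module /usr/lib64/httpd/modules/mod_proxy_uwsgi.so

ProxyPass /galaxy uwsgi://127.0.0.1:4001/
```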
From: makis4ever@hotmail.com
To: galaxy-dev@lists.galaxyproject.org
Date: Wed, 14 Oct 2015 12:17:05 +0300
Subject: [galaxy-dev] Galaxy in cluster mode won't start from 0.0.0.0.