Hi Nate,

Yeah, looks like I have no choice... I'll try to negotiate with the IT guys to run the runner on machine B. Thanks for all the information!

BTW, out of curiosity, is submitting jobs across machines possible with TORQUE?

Best, L-A

On 30/01/2012 18:20, Nate Coraor wrote:
On Jan 30, 2012, at 12:07 PM, Louise-Amélie Schmitt wrote:
Hi Nate,
Thanks for the leads!
But setting DRMAA_LIBRARY_PATH means I'm in trouble, since the libraries are on machine B, which is maintained by our IT dept. I cannot access them from machine A.
Is it a desperate situation? Will it work if I have a copy of those libs somewhere? :/

Hi L-A,
The Galaxy server will need to be a submission host, so I believe it will have to have PBS Pro installed. If it has this, then the FedStage DRMAA library should be installable on the same host. It may be possible to copy the libraries, although I don't know whether you'd be able to configure the server address without access to the directories in which the library will look for its configuration.
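A minimal sketch of that environment setup on the Galaxy host, assuming hypothetical install locations for the PBS Pro client configuration and the FedStage libdrmaa (both paths are examples, not values known from this thread):

```shell
# Hypothetical paths -- the PBS Pro client config and the FedStage
# libdrmaa must actually exist at these locations on the Galaxy host.
export PBS_CONF_FILE=/etc/pbs.conf
export DRMAA_LIBRARY_PATH=/opt/fedstage/drmaa_pbspro/lib/libdrmaa.so

# Then start Galaxy from the same shell so the drmaa runner
# inherits both variables:
# sh run.sh
```

The point of DRMAA_LIBRARY_PATH is that Galaxy's drmaa runner dlopens whatever library it names, so it must point at the PBS Pro build of libdrmaa, not a TORQUE one.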
--nate
Best, L-A
On 30/01/2012 17:18, Nate Coraor wrote:
On Jan 30, 2012, at 7:37 AM, Louise-Amélie Schmitt wrote:
Hi Nate,
Thanks for the info!
I'm trying to understand how the URL for DRMAA works but I don't understand how we can set it so it uses a different machine. Our Galaxy runner runs on machine A and the cluster is on machine B, where do I put B in the URL?
In the wiki there is this example:

drmaa://[native_options]/

I'm a bit confused; I would have expected something like:

drmaa://[machine]/[native_options]/

like for TORQUE. Did I miss something?

Hi L-A,
Hrm, I've only used it with SGE, which uses an environment variable to define the cell location, and with LSF, whose setup I don't remember; I assume it used the default. I think if you configure the PBS Pro client on the submission host and point DRMAA_LIBRARY_PATH at the correct libdrmaa, it will use your client configuration. There are other PBS Pro users on the list who can hopefully chime in with more details.
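For reference, the universe_wsgi.ini side of that might look like the following sketch. The -q clng_new native option is an assumption for illustration, since the wiki only documents the drmaa://[native_options]/ form:

```ini
# Sketch only: native_options are passed through to the batch system,
# so a queue could be selected with -q. The target server itself comes
# from the PBS Pro client configuration on the submission host, not
# from the URL.
start_job_runners = drmaa
default_cluster_job_runner = drmaa://-q clng_new/
```

This matches the behavior Nate describes: the machine is never named in the drmaa URL; it is resolved by the locally configured PBS Pro client that libdrmaa talks to.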
--nate
Best, L-A
On 19/01/2012 19:43, Nate Coraor wrote:
On Jan 16, 2012, at 5:22 AM, Louise-Amélie Schmitt wrote:
Hello,
We want to move Galaxy's jobs from our small TORQUE local install to a big cluster running PBS Pro.
In universe_wsgi.ini, I changed the cluster address from:

default_cluster_job_runner = pbs:///

to:

default_cluster_job_runner = pbs://sub-master/clng_new/

where sub-master is the name of the machine and clng_new is the queue.
However, I get an error when trying to run any job:
galaxy.jobs.runners.pbs ERROR 2012-01-16 11:10:00,894 Connection to PBS server for submit failed: 111: Could not find a text for this error, uhhh
This corresponds to qsub error 111 (Cannot connect to specified server host), which is, for some reason, caught by pbs_python as an error of its own (111 does not correspond to any pbs_python error code, hence the face-plant message).
Our guess is that we might need to re-scramble the pbs_python egg with PBS Pro's libraries, is that correct? If so, what do we have to set as LIBTORQUE_DIR?

Hi L-A,
pbs_python is designed only for TORQUE; I don't think it is compatible with the PBS Pro API. For that, you need to use the drmaa runner, which uses the FedStage libdrmaa for PBS Pro.
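Concretely, that means switching runners in universe_wsgi.ini, roughly along these lines (a sketch only; any native options in the drmaa URL depend on your site):

```ini
# Before (pbs_python, works with TORQUE only):
#default_cluster_job_runner = pbs:///

# After (drmaa runner; requires DRMAA_LIBRARY_PATH in Galaxy's
# environment, pointing at the FedStage libdrmaa for PBS Pro):
start_job_runners = drmaa
default_cluster_job_runner = drmaa:///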
--nate
Thanks, L-A

___________________________________________________________
Please keep all replies on the list by using "reply all" in your mail client. To manage your subscriptions to this and other Galaxy lists, please use the interface at: