Hi,
I would like to run the Galaxy framework on server A, while performing all of the jobs on server B using ssh.
Looking at the documentation here: https://wiki.galaxyproject.org/Admin/Config/Performance/Cluster#CLI, this seems like it should be possible. However, the documentation states that the cli runner requires, at a minimum, two parameters: one for the shell (for which I'm selecting SecureShell) and one for a job plugin. I'm not sure what the job plugin should be, since the only ones available are Torque, Slurm, and SlurmTorque, and I'm not running any of these. Can anyone give me any hints? My current job_conf.xml looks like this:
<?xml version="1.0"?>
<job_conf>
    <plugins>
        <!-- <plugin id="local" type="runner" load="galaxy.jobs.runners.local:LocalJobRunner" workers="4"/> -->
        <plugin id="cli" type="runner" load="galaxy.jobs.runners.cli:ShellJobRunner"/>
    </plugins>
    <handlers>
        <handler id="main"/>
    </handlers>
    <destinations default="cli_default">
        <destination id="cli_default" runner="cli">
            <param id="shell_plugin">SecureShell</param>
            <param id="job_plugin">cli</param>
            <param id="shell_hostname">computehost</param>
            <param id="shell_username">galaxy</param>
        </destination>
    </destinations>
</job_conf>
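For reference, the destination example in that documentation sets the job plugin to a DRM name, e.g. (assuming Torque were running on the remote host):

<destination id="ssh_torque" runner="cli">
    <param id="shell_plugin">SecureShell</param>
    <param id="job_plugin">Torque</param>
    <param id="shell_username">galaxy</param>
    <param id="shell_hostname">computehost</param>
</destination>

so it's not clear to me what to put there when the remote side has no DRM at all.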
As an alternative, in a tool XML I tried sshing directly between the <command> tags, e.g.:
<command>ssh galaxy@serverB 'sh $__tool_directory__/runTool.sh --verbose > $output 2>&1'</command>
This appears to work. Is it possible to "inject" this ssh command into tool execution in Galaxy?
I was looking to do this to avoid having to install a scheduler like SGE or set up Pulsar. Any suggestions would be greatly appreciated.
Thanks!
Even if you just have two servers, I would strongly recommend you set up a cluster distributed resource manager (DRM) like SLURM, PBS, or Condor and ensure there is a shared file system between Galaxy and the node running the jobs. You wouldn't even need to use the CLI job runner - if you configured one of these, in most cases you could just use the DRMAA job runner directly, which is what most people use to run Galaxy jobs across machines on a cluster.
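A minimal job_conf.xml for the DRMAA runner would look something like the sketch below (untested; the destination id and the "-q batch" native specification are assumptions you'd adapt to your DRM and queue names):

<?xml version="1.0"?>
<job_conf>
    <plugins>
        <plugin id="drmaa" type="runner" load="galaxy.jobs.runners.drmaa:DRMAAJobRunner"/>
    </plugins>
    <handlers>
        <handler id="main"/>
    </handlers>
    <destinations default="cluster_default">
        <destination id="cluster_default" runner="drmaa">
            <!-- Passed through to the DRM at submit time; "-q batch" assumes a queue named "batch". -->
            <param id="nativeSpecification">-q batch</param>
        </destination>
    </destinations>
</job_conf>

You'd also need the drmaa Python library available and $DRMAA_LIBRARY_PATH pointing at your DRM's libdrmaa.so.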
If you wish to send Galaxy jobs to a single machine without setting up a DRM, or if a shared file system is impossible, you can use Pulsar (http://pulsar.readthedocs.org/en/latest/) for most kinds of jobs (some jobs, such as data source and upload jobs, should remain on the Galaxy host in such a configuration).
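With Pulsar, the destination just points Galaxy at a Pulsar server running on the remote machine, along these lines (a sketch; "serverB" stands in for your second host, and 8913 is Pulsar's default port):

<?xml version="1.0"?>
<job_conf>
    <plugins>
        <plugin id="pulsar" type="runner" load="galaxy.jobs.runners.pulsar:PulsarRESTJobRunner"/>
    </plugins>
    <handlers>
        <handler id="main"/>
    </handlers>
    <destinations default="remote_pulsar">
        <destination id="remote_pulsar" runner="pulsar">
            <!-- Assumes a Pulsar server listening on serverB at the default port. -->
            <param id="url">http://serverB:8913/</param>
        </destination>
    </destinations>
</job_conf>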
-John
Hi John,
Thanks for the clarification. I've now installed PBS/Torque using the instructions found here: http://www.discngine.com/blog/2014/6/27/install-torque-on-a-single-node-cent...
I can correctly submit a job so all is good there.
However, when I follow the instructions on https://wiki.galaxyproject.org/Admin/Config/Performance/Cluster regarding scrambling the eggs, e.g.:
galaxy_user@galaxy_server% LIBTORQUE_DIR=/path/to/libtorque python scripts/scramble.py -e pbs_python
I get the following:
Traceback (most recent call last):
  File "scripts/scramble.py", line 22, in <module>
    from galaxy.eggs.scramble import ScrambleCrate, ScrambleFailure, EggNotFetchable
ImportError: cannot import name EggNotFetchable
Also, what should my LIBTORQUE_DIR be? I installed this as root in /root/torque-6.0.0.1-1449528029_21cc3d8
Thanks!
I guess LIBTORQUE_DIR should be /root/torque-6.0.0.1-1449528029_21cc3d8 then, but the default permissions on /root may be too restrictive; you may have to install this to a more traditional location such as /usr/local/lib or /opt.
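Something along these lines (untested; the paths are assumptions) should find the library and point scramble at it:

# Locate libtorque.so on the system; adjust if you know where Torque
# installed its libraries.
find / -name 'libtorque*' 2>/dev/null

# Re-run scramble with LIBTORQUE_DIR set to the directory that actually
# contains libtorque.so (for example /usr/local/lib).
LIBTORQUE_DIR=/usr/local/lib python scripts/scramble.py -e pbs_python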
-John