Hi, I would like to run the Galaxy framework on server A while executing all of the jobs on server B over ssh. According to the documentation here: https://wiki.galaxyproject.org/Admin/Config/Performance/Cluster#CLI, this should be possible. However, the documentation states that the cli runner requires, at a minimum, two parameters: a shell plugin (I'm selecting SecureShell) and a job plugin. I'm not sure what the job plugin should be, since the ones available are Torque, Slurm, and SlurmTorque, and I'm not running any of those schedulers. Can anyone give me any hints? My current job_conf.xml looks like this:

    <?xml version="1.0"?>
    <job_conf>
        <plugins>
            <!-- <plugin id="local" type="runner" load="galaxy.jobs.runners.local:LocalJobRunner" workers="4"/> -->
            <plugin id="cli" type="runner" load="galaxy.jobs.runners.cli:ShellJobRunner"/>
        </plugins>
        <handlers>
            <handler id="main"/>
        </handlers>
        <destinations default="cli_default">
            <destination id="cli_default" runner="cli">
                <param id="shell_plugin">SecureShell</param>
                <param id="job_plugin">cli</param>
                <param id="shell_hostname">computehost</param>
                <param id="shell_username">galaxy</param>
            </destination>
        </destinations>
    </job_conf>

As an alternative, I tried sshing directly between the <command> tags of a tool XML, e.g.:

    <command>ssh galaxy@serverB 'sh $__tool_directory__/runTool.sh --verbose $output 2>&1'</command>
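To avoid repeating the quoting logic in every tool XML, the ssh hop could be factored into a small wrapper script. This is only a sketch of the idea, not anything Galaxy provides: `build_remote_cmd` and the `galaxy@serverB` address are my own illustrative names, and it assumes both servers share the Galaxy file system so dataset paths like $output resolve identically on either machine.

```shell
# Hypothetical helper: compose a safely-quoted ssh command line for running
# a tool on a remote host. Single-quoting each argument (and escaping any
# embedded single quotes) keeps paths with spaces or shell metacharacters
# intact across the extra shell evaluation that ssh introduces.
build_remote_cmd() {
    remote="$1"; shift
    cmd="ssh $remote"
    for arg in "$@"; do
        # Turn each embedded ' into '\'' so the argument survives quoting.
        q=$(printf "%s" "$arg" | sed "s/'/'\\\\''/g")
        cmd="$cmd '$q'"
    done
    printf "%s\n" "$cmd"
}

# Example (paths and host are illustrative):
# build_remote_cmd galaxy@serverB sh /galaxy/tools/runTool.sh --verbose out.dat
```

A tool's `<command>` could then call `eval "$(build_remote_cmd galaxy@serverB sh ...)"`, though whether Galaxy's job script environment makes this cleaner than the inline ssh is exactly what I'm unsure about.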
This appears to work. Is it possible to "inject" this ssh command into tool execution in Galaxy? I'm looking to do this to avoid installing a scheduler like SGE or setting up Pulsar. Any suggestions would be greatly appreciated. Thanks!