Connecting Galaxy to multiple Remote Clusters?
Hello,

I've read through past emails that connecting Galaxy to a remote cluster is possible, but I wanted to ask whether it is possible to set up Galaxy as a standalone web server and have it use multiple remote clusters (multiple head nodes, each with its own compute nodes), and if so, what the best strategy is to implement this.

Also, on one of these clusters we are not able to mount an NFS directory onto the compute nodes, since we do not own/manage it.

Your help is greatly appreciated.

Thanks,
Joe
On Mar 11, 2011, at 2:28 PM, Joe Cruz wrote:
Hello,
I've read through past emails that connecting Galaxy to a remote cluster is possible, but I wanted to ask whether it is possible to set up Galaxy as a standalone web server and have it use multiple remote clusters (multiple head nodes, each with its own compute nodes), and if so, what the best strategy is to implement this.
Also, on one of these clusters we are not able to mount an NFS directory onto the compute nodes, since we do not own/manage it.
Your help is greatly appreciated.
We run Galaxy on a VM, and it offloads jobs to two different clusters; in our case, however, the Galaxy filesystem is NFS-mounted on both clusters. For both clusters we needed to set up the VM as a "submit node" (with TORQUE: qmgr -c "set server submit_hosts += galaxyvm"), and we also set up a TORQUE "$usecp" directive for all of our pbs_moms so they would do a local copy of the stderr/stdout files rather than try to scp them back to galaxy:/path/to/galaxy/database/pbs.

--
Glen L. Beane
Senior Software Engineer
The Jackson Laboratory
(207) 288-6153
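The two TORQUE changes described above might look roughly like the following sketch. The hostname `galaxyvm` and the mom config location are taken from the reply and from common TORQUE defaults; substitute your own hostname and paths, and note that the exact mom_priv path varies between installations.

```shell
# On each cluster's TORQUE server host: allow the Galaxy VM to submit jobs.
# "galaxyvm" is the Galaxy server's hostname (assumption: yours will differ).
qmgr -c "set server submit_hosts += galaxyvm"

# In each compute node's pbs_mom config (commonly
# /var/spool/torque/mom_priv/config, though the path varies by install),
# add a $usecp directive so stderr/stdout files for the Galaxy host are
# copied locally instead of scp'd back to the Galaxy server.
# Syntax: $usecp <hostname>:<remote-path-prefix> <local-path-prefix>
$usecp galaxyvm:/path/to/galaxy/database/pbs /path/to/galaxy/database/pbs
```

This only works as written when the Galaxy database directory is NFS-mounted at the same path on the compute nodes, which matches the NFS setup described above; restart pbs_mom after changing the config so the directive takes effect.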
participants (2)
- Glen Beane
- Joe Cruz