I've "inherited" a galaxy setup from someone (details: a contractor set it up but left it unfinished so I'm completing the job while discovering and documenting what was done). The catch: it's setup as a twin galaxy instance with one as a front-end web-server and the other as the job runner. Admittedly the documentation is lighter on this sort of setup, but I've been able to puzzle it out until getting to the issue of toolsheds. We have a large number of custom tools we want to install from our own toolshed and the main Galaxy one. The problem: I'm not clear on how this supposed to work. Once I started installing tools, of course they all appeared in the front-end toolshed and are running on the front-end. Understandable - but not what I wanted. Does this mean that for every custom tool I install, I have to go to every Galaxy instance (frontend and job-runners) and install the tools into their toolshed? It would seem so, but this seems ... awkward. tl;dr: Toolsheds and job-ruuners - how is that supposed to work? Is there a convenient way to handle this? Any pointers to documentation or experience? -- Paul Agapow (pma@agapow.net)
Hi Paul,

In case you are not aware of it, the Galaxy tool shed wiki explains the tool shed: http://wiki.g2.bx.psu.edu/Tool%20Shed

On Sep 4, 2012, at 6:25 AM, Paul-Michael Agapow wrote:
I've "inherited" a galaxy setup from someone (details: a contractor set it up but left it unfinished so I'm completing the job while discovering and documenting what was done). The catch: it's setup as a twin galaxy instance with one as a front-end web-server and the other as the job runner.
Just to confirm, you have set up a single Galaxy instance (with a single database on the back-end) with multiple web front-ends - is this correct?
Admittedly the documentation is lighter on this sort of setup, but I've been able to puzzle it out until getting to the issue of toolsheds. We have a large number of custom tools we want to install from our own toolshed and the main Galaxy one. The problem: I'm not clear on how this is supposed to work. Once I started installing tools, of course they all appeared in the front-end toolshed and are running on the front-end. Understandable - but not what I wanted. Does this mean that for every custom tool I install, I have to go to every Galaxy instance (front-end and job-runners) and install the tools into their toolshed? It would seem so, but this seems ... awkward.
The Galaxy tool shed is a separate application that has no dependencies on a Galaxy instance (and vice-versa), although a Galaxy instance and a tool shed instance can each communicate with the other, so your question is a bit confusing. A single Galaxy tool shed can be used by any number of Galaxy instances, so if you create a repository in a tool shed instance and upload tools to it, any number of Galaxy instances will be able to access that tool shed and install the repositories from it.

Keep in mind that if you are, in fact, running a single Galaxy instance with multiple web front-ends, automatically installing a tool into that Galaxy instance from the tool shed will result in displaying the tool only in the web front-end in which it was installed. Your Galaxy server will need to be stopped and restarted in order for the installed tools to be available in all of the web front-ends you've configured in your Galaxy instance. Again, this has nothing to do with any tool shed instances that you may have configured in your Galaxy instance.
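For reference, the tool sheds a given Galaxy instance can install from are listed in that instance's tool_sheds_conf.xml (pointed to by the tool_sheds_config_file setting in universe_wsgi.ini). A minimal sketch - the "Local tool shed" name and URL below are placeholders for your own shed, not real values:

<?xml version="1.0"?>
<tool_sheds>
    <!-- the public Galaxy main tool shed -->
    <tool_shed name="Galaxy main tool shed" url="http://toolshed.g2.bx.psu.edu/"/>
    <!-- a hypothetical local shed; point this at your own tool shed instance -->
    <tool_shed name="Local tool shed" url="http://toolshed.example.org/"/>
</tool_sheds>

Any number of Galaxy instances can list the same shed here and install repositories from it; nothing has to be configured on the shed side per Galaxy instance.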
tl;dr: Toolsheds and job-runners - how is that supposed to work? Is there a convenient way to handle this? Any pointers to documentation or experience?
-- Paul Agapow (pma@agapow.net)
Thanks Greg,

In case you are not aware of it, the Galaxy tool shed wiki explains the tool shed: http://wiki.g2.bx.psu.edu/Tool%20Shed

I am aware, thanks, although I won't exclude the possibility that I've misunderstood something ...

I've "inherited" a Galaxy setup from someone (details: a contractor set it up but left it unfinished, so I'm completing the job while discovering and documenting what was done). The catch: it's set up as a twin Galaxy instance, with one as a front-end web server and the other as the job runner.

Just to confirm, you have set up a single Galaxy instance (with a single database on the back-end) with multiple web front-ends - is this correct?

Right - a single database with one web front-end and one job-runner. And here's a detail that I forgot and may be important ... they're on different machines. They share the db and a shared data space but are on different VMs.

The Galaxy tool shed is a separate application that has no dependencies on a Galaxy instance (and vice-versa), although a Galaxy instance and a tool shed instance can each communicate with the other, so your question is a bit confusing. A single Galaxy tool shed can be used by any number of Galaxy instances, so if you create a repository in a tool shed instance and upload tools to it, any number of Galaxy instances will be able to access that tool shed and install the repositories from it.

Right - but setting up the toolshed is not the confusing point for me. It's how the job-runner gets to use a tool installed from a toolshed when it's on a different machine to the front-end. The tool has to be installed into the job-runner as well, right?

As said, I inherited this system, so I'm discovering how it's been set up.

cheers

p

-- Paul Agapow (pma@agapow.net)
Hi Paul,

Since you are using one config file/one database, you just need to manually restart the job runner after installing tools (assuming your servers are configured correctly). The old way of having two config files, one for the web front-ends and one for the SINGLE job runner, is no longer supported - now you have one config file. If you want to have one web process and one job "runner", you just configure one process as a "Web Server" and one as both the "Job Manager" and "Job Handler", as explained in the documentation here: http://wiki.g2.bx.psu.edu/Admin/Config/Performance/Web%20Application%20Scali... There is a configuration sketch after the quoted message below.

This system-level stuff is out of my area of expertise, so others may need to chime in if you're still having issues.

Greg Von Kuster

On Sep 5, 2012, at 5:22 AM, Paul-Michael Agapow wrote:
Thanks Greg,
In case you are not aware of it, the Galaxy tool shed wiki explains the tool shed: http://wiki.g2.bx.psu.edu/Tool%20Shed
I am aware, thanks, although I won't exclude the possibility that I've misunderstood something ...
I've "inherited" a galaxy setup from someone (details: a contractor set it up but left it unfinished so I'm completing the job while discovering and documenting what was done). The catch: it's setup as a twin galaxy instance with one as a front-end web-server and the other as the job runner.
Just to confirm, you have set up a single Galaxy instance (with a single database on the back-end) with multiple web front-ends - is this correct?
Right - a single database with one web front-end and one job-runner. And here's a detail that I forgot and may be important ... they're on different machines. They share the db and a shared data space but are on different VMs.
The Galaxy tool shed is a separate application that has no dependencies on a Galaxy instance (and vice-versa), although a Galaxy instance and a tool shed instance can each communicate with the other, so your question is a bit confusing. A single Galaxy tool shed can be used by any number of Galaxy instances, so if you create a repository in a tool shed instance and upload tools to it, any number of Galaxy instances will be able to access that tool shed and install the repositories from it.
Right - but setting up the toolshed is not the confusing point for me. It's how the job-runner gets to use a tool installed from a toolshed when it's on a different machine to the front-end. The tool has to be installed into the job-runner as well, right?
As said, I inherited this system, so I'm discovering how it's been set up.
cheers
p -- Paul Agapow (pma@agapow.net)
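To make the configuration described above concrete, here is a sketch of the relevant universe_wsgi.ini pieces for one web process plus one combined job manager/handler process. The section names, ports, and paths are illustrative placeholders, not values from an actual setup:

# one Paste server section per Galaxy process; each process is started
# with its own --server-name so it picks up the matching section
[server:web0]
use = egg:Paste#http
port = 8080
host = 0.0.0.0
use_threadpool = true

[server:manager]
use = egg:Paste#http
port = 8079
host = 127.0.0.1
use_threadpool = true

[app:main]
# the web0 process only serves the UI; the manager process both
# dispatches jobs and runs them as the single handler
job_manager = manager
job_handlers = manager

# both processes read the same tool configuration, so tools installed
# from a tool shed (tracked in shed_tool_conf.xml) and their dependencies
# should live on the filesystem that both VMs share
tool_config_file = tool_conf.xml,shed_tool_conf.xml
tool_dependency_dir = /shared/galaxy/tool_dependencies

Since both processes read the same config file and the same shed_tool_conf.xml, a tool installed from a tool shed through the web process should become available to the job handler once the handler is restarted - assuming the installed tool files and dependency directory sit on the data space both VMs mount, there is nothing extra to install on the job-runner VM.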