Hi Ben,

The job running code is in lib/galaxy/jobs/. Galaxy jobs get a "wrapper" which includes the start/finish methods; that's in __init__.py. handler.py is what dispatches jobs out to the various runner plugins, finds new jobs to run, and generally controls the operation. runners/*.py are the individual DRM plugins.

This is an interesting solution and I'd like to see the implementation.

--nate
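P.S. If it helps, the rough shape of a runner plugin looks like the sketch below. It's untested: the class name is made up, and you should check BaseJobRunner in runners/__init__.py for the real method signatures before relying on them.

    # Untested sketch of a DRM runner plugin; the class and runner_name are
    # hypothetical, and the method names mirror the existing plugins in
    # lib/galaxy/jobs/runners/.
    from galaxy.jobs.runners import BaseJobRunner

    class OneNodeJobRunner(BaseJobRunner):
        runner_name = "OneNodeRunner"

        def queue_job(self, job_wrapper):
            # prepare_job() builds the command line and working directory;
            # node placement logic would slot in before the DRM handoff.
            if not self.prepare_job(job_wrapper):
                return
            # ... submit to the cluster here ...

        def stop_job(self, job):
            # Terminate the job on its node.
            pass

        def recover(self, job, job_wrapper):
            # Reattach to jobs that were running when Galaxy restarted.
            pass

On Thu, Dec 19, 2013 at 6:33 PM, Ben Gift <corn8bit2@gmail.com> wrote: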
You've been extremely helpful; I appreciate it.
We went ahead and decided that we need this feature. We're planning to have a lot of people running huge pipelines that work best on a single node, and there's no reason to write all the intermediate step data to a shared file system when it can live in /tmp/ on that node instead. So I've been working on that.
So far I've made the checkbox for running on one node (in run.mako). In workflow.py I catch this and, if the checkbox is checked, set a new variable called use_one_node on each step of the workflow.
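Roughly, the tagging logic looks like this (a simplified sketch, not my exact code; only the use_one_node name matches what I described above):

    # Simplified sketch: the kwargs key matches the run.mako checkbox name;
    # the surrounding controller code is paraphrased.
    use_one_node = bool(kwargs.get('use_one_node', False))
    for step in workflow.steps:
        # Mark each step so the job dispatch code can see the request later.
        step.use_one_node = use_one_node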
Now I'm trying to find where jobs are run, so that I can put in the logic for picking a node and setting it as a variable on each step. Could you point me to the files/classes associated with running a history's jobs, and with getting nodes (or sending jobs to Condor)?
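The rough idea I have for the Condor side is to pick one execute node up front and pin every step's job to it, something like this (the helper name is mine, though the Requirements/Machine syntax is standard Condor):

    # Hypothetical helper: build a Condor submit description that pins a
    # job to one chosen execute node via a Requirements expression.
    def submit_description(executable, node):
        return "\n".join([
            "universe = vanilla",
            "executable = %s" % executable,
            # Machine equality restricts matchmaking to the chosen node.
            'requirements = (Machine == "%s")' % node,
            "queue",
        ])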
Thanks, and I'll be sure to push this upstream after it's done if you'd like it. Maybe as something you can turn on from universe_wsgi.ini.