Thanks John, I will try to find some time and courage to implement this in my Galaxy and let you know what happens...

Thon

On Jun 05, 2012, at 06:58 PM, John Chilton <chilton@msi.umn.edu> wrote:

I have produced an updated version of my changes to be compatible with newer versions of Galaxy, but my local Galaxy instances are too far behind to test the changes. Once I get some time to test this out more I will update the pull request, but since you are interested now, the following commands should pull down the updated changeset and apply it in isolation, for the minimum chance of breaking other things:

wget https://bitbucket.org/jmchilton/umn-galaxy-central/changeset/66395a9d870f/ra... -O dynamic_jobs.patch
hg import dynamic_jobs.patch

The original pull request was well tested; these changes aren't, and I am not sure how my changes will play with other changes that have been made to Galaxy in the meantime. If you choose to try this out, good luck and let me know what goes wrong.

-John

------------------------------------------------
John Chilton
Senior Software Developer
University of Minnesota Supercomputing Institute
Office: 612-625-0917
Cell: 612-226-9223

On Fri, Jun 1, 2012 at 1:32 PM, Anthonius deBoer <thondeboer@me.com> wrote:
I would be very interested in this piece of code...
Would I be wrong in assuming that if I pulled that version I would be messing up my Galaxy system and would need to resolve it manually?
I would not want to re-introduce many of the bugs, etc., so could I not just pull in whatever code you changed? Could you indicate which Python modules you changed so I can pull in just those?
Thanks
Thon
On Jun 01, 2012, at 11:18 AM, John Chilton <chil0060@umn.edu> wrote:
I posted a message about this half a year ago and issued a pull request (which is now conflicted). I developed some code which was not a turnkey solution, but I thought it was a good first step and it was very expressive. It would allow you to do things like inspect input sizes or parameter values and dynamically change queues or queue parameters based on these (using raw Python to express these rules).
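Purely for illustration, a rule in that spirit might look something like the sketch below; the function name, its argument, and the runner URLs it returns are assumptions on my part, not the actual API from the pull request:

--
# Hypothetical illustration of a raw-Python routing rule; the function name,
# its argument, and the returned runner URLs are assumptions, not the API
# defined by the actual pull request.
def dynamic_runner( job ):
    # Sum the sizes of the job's input datasets.
    total_input_size = sum(
        assoc.dataset.get_size() for assoc in job.input_datasets if assoc.dataset
    )
    # Route small jobs to a (hypothetical) fast queue, everything else to a slow one.
    if total_input_size < 1024 ** 3:  # under ~1 GB of input data
        return "drmaa://-q fast/"
    return "drmaa://-q slow/"
--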
http://www.mail-archive.com/galaxy-dev@lists.bx.psu.edu/msg03010.html
https://bitbucket.org/galaxy/galaxy-central/pull-request/12/dynamic-job-runn...
You are not the first person to follow up with requests like this; I would be happy to update the pull request if it were going to be acted on.
-John
On Fri, Jun 1, 2012 at 1:02 PM, Anthonius deBoer <thondeboer@me.com> wrote:
Could you give me a pointer where in the codebase I should start looking to implement this, please? I'd be happy to share this with the community if it is useful for others....
Thon
On Jun 01, 2012, at 10:53 AM, Jeremy Goecks <jeremy.goecks@emory.edu> wrote:
No, this is not configurable via XML; you'd have to extend the Galaxy codebase to implement this behavior.
Best, J.
On Jun 1, 2012, at 1:50 PM, Anthonius deBoer wrote:
Hi Jeremy,
Thanks for the info...But it's not clear from your message whether I could use this key/value approach right now to distinguish whether a job should be run in the fast or the slow queue.
I would like to add a parameter to every tool that determines whether it should run in the fast queue or the slow queue... It would be checked for interactive jobs, and if someone created a workflow with this tool, they could turn it off and it would run in the slow/high-memory queue.... Could I add this today, and what would the XML look like, or are you saying it only works for the Trackster example you gave...
Thanks
Thon
On Jun 01, 2012, at 05:44 AM, Jeremy Goecks <jeremy.goecks@emory.edu> wrote:
Is there a way for a tool to sometimes be placed in the fast queue and sometimes in the long queue?
Not through Galaxy as far as I know.
Yes, this is possible using job parameterization. From universe_wsgi.ini.sample:
--
# Per-tool job handler and runner overrides. Parameters can be included to define multiple
# runners per tool. E.g. to run Cufflinks jobs initiated from Trackster
# differently than standard Cufflinks jobs:
#
# cufflinks = local:///
# cufflinks[source@trackster] = local:///
--
This approach is definitely a beta feature, but the idea is that any set of key@value parameters should be able to be used to direct jobs to different queues as needed.
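As a purely hypothetical example of what that could look like for the fast/slow split discussed above (the tool id, queue names, DRMAA runner URLs, and the "queue" parameter name are all invented here, not tested configuration):

--
# Hypothetical: a tool whose jobs default to a slow queue, but run in a fast
# queue when the job carries a queue=fast job parameter.
bwa = drmaa://-q slow/
bwa[queue@fast] = drmaa://-q fast/
--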
Job parameterization is done in only one place right now: the tracks.py controller, in rerun_tool. The idea is that jobs run via Trackster are short, so they get a different queue:
--
subset_job, subset_job_outputs = tool.execute( trans, incoming=tool_params,
                                                history=target_history,
                                                job_params={ "source" : "trackster" } )
--
Right now I'd like to be able to allocate jobs to different queues based on the input data size (and thus the expected compute time and resources needed), but that is rather complicated, e.g. if you have a low-memory queue and a high-memory queue.
To make this work, you'd want to modify the execute() method in the DefaultToolAction class (/lib/galaxy/tools/actions/__init__.py) to add job parameters based on tool parameters and/or input dataset size.
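As a rough sketch of that idea (this is not code from the Galaxy codebase; the 1 GB cutoff and the "queue" job parameter name are made up for illustration, and it assumes the dict of collected input datasets that execute() builds is available at this point):

--
# Hypothetical sketch: inside DefaultToolAction.execute(), after the input
# datasets for the job have been gathered, tag large jobs for a slow queue.
total_input_size = sum( data.get_size() for data in inp_data.values() if data )
if job_params is None:
    job_params = {}
if total_input_size > 1024 ** 3:  # over ~1 GB of input data
    job_params[ "queue" ] = "slow"
else:
    job_params[ "queue" ] = "fast"
--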
You might even want different queues according to the user (e.g. one group might have paid for part of the cluster and get priority access).
This could also be done in the same location, as trans.user will give you the user running the tool/job.
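Again only as a hypothetical sketch (the priority-user list and the "queue" job parameter are invented for this example):

--
# Hypothetical sketch: pick a queue based on the submitting user.
PRIORITY_USERS = set( [ "alice@example.org", "bob@example.org" ] )
if trans.user and trans.user.email in PRIORITY_USERS:
    job_params = dict( job_params or {}, queue="priority" )
--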
Best, J.