Thanks John, now it works!
You seem like a Galaxy professional. Maybe you can help me again? I've opened
another topic about an FTP setting for uploading large files. I've looked at the
Galaxy wiki tutorials but I don't understand much of it ...
Pat
Date: Wed, 25 Jun 2014 07:49:36 -0500
Subject: Re: [galaxy-dev] Run a tool's workflow one by one ?
From: jmchilton(a)gmail.com
To: leonardsquall15(a)hotmail.com
CC: galaxy-dev(a)lists.bx.psu.edu
There was a problem with the config I sent you - it defines two
destinations for jobs but doesn't specify a default. I have updated
the gist (and actually tried loading it in Galaxy this time):
https://gist.github.com/jmchilton/ff186b01d51d401623be. Hope this
helps you make progress on this issue.
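
The short version of the fix, in case it helps to see its shape: the
<destinations> element needs a 'default' attribute naming one of the
destination ids defined inside it. Roughly like this (the ids here are
placeholders, not necessarily what the gist uses):

    <destinations default="local_regular">
        <destination id="local_regular" runner="local_regular"/>
        <destination id="local_serial" runner="local_serial"/>
    </destinations>
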
-John
On Wed, Jun 25, 2014 at 2:09 AM, Pat-74100 <leonardsquall15(a)hotmail.com> wrote:
> Hi John
>
> Thanks for your reply.
> I've copied your job_conf.xml.
>
> Unfortunately, I got an error when I ran run.sh:
>
> galaxy.jobs DEBUG 2014-06-25 09:06:17,610 Loading job configuration from ./job_conf.xml
> galaxy.jobs DEBUG 2014-06-25 09:06:17,610 Read definition for handler 'main'
> galaxy.jobs INFO 2014-06-25 09:06:17,610 Setting <handlers> default to child with id 'main'
> Traceback (most recent call last):
>   File "my_repertory/galaxy-python/galaxy-dist/lib/galaxy/webapps/galaxy/buildapp.py", line 39, in app_factory
>     app = UniverseApplication( global_conf = global_conf, **kwargs )
>   File "my_repertory/galaxy-python/galaxy-dist/lib/galaxy/app.py", line 64, in __init__
>     self.job_config = jobs.JobConfiguration(self)
>   File "my_repertory/galaxy-python/galaxy-dist/lib/galaxy/jobs/__init__.py", line 107, in __init__
>     self.__parse_job_conf_xml(tree)
>   File "my_repertory/galaxy-python/galaxy-dist/lib/galaxy/jobs/__init__.py", line 177, in __parse_job_conf_xml
>     self.default_destination_id = self.__get_default(destinations, self.destinations.keys())
>   File "my_repertory/galaxy-python/galaxy-dist/lib/galaxy/jobs/__init__.py", line 300, in __get_default
>     raise Exception("No <%s> default specified, please specify a valid id or tag with the 'default' attribute" % parent.tag)
> Exception: No <destinations> default specified, please specify a valid id or tag with the 'default' attribute
>
>
> I don't understand where the problem is.
>
> Pat
>
>> Date: Mon, 23 Jun 2014 22:19:27 -0500
>> Subject: Re: [galaxy-dev] Run a tool's workflow one by one ?
>> From: jmchilton(a)gmail.com
>> To: leonardsquall15(a)hotmail.com
>> CC: galaxy-dev(a)lists.bx.psu.edu
>
>>
>> It looks like you were going to post an error message but didn't.
>> That might help debug the problem.
>>
>> There is currently no way at the workflow level to force one job to
>> wait for another to complete (other than assigning an explicit
>> input/output relationship between the steps). There is a Trello card
>> for this here: https://trello.com/c/h5qZlgU8.
>>
>> I am not sure that Trello card is really the best approach for this
>> problem though. If it really is the case that these jobs can run
>> simultaneously and they are not implicitly dependent on each other in
>> some way not represented in the workflow - then it is likely they are
>> running on a machine that just doesn't have enough resources (likely
>> memory) to run them properly. The correct solution for this, I think,
>> is to properly configure a job_conf.xml file so that Galaxy tools
>> cannot over-consume memory.
>>
>> By default Galaxy will run 4 jobs simultaneously - any job of any type
>> - regardless of memory consumption, threads used, etc.... This gist
>> (https://gist.github.com/jmchilton/ff186b01d51d401623be) contains a
>> job_conf.xml that you can stick in your Galaxy root directory to
>> ensure a handful of tools (I used hilbert, fft, and slm as example ids,
>> but you should replace these with the actual ids of your tools) can
>> only run one job at a time. All other jobs will continue to run
>> concurrently, two at a time, beside these.
>>
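>> Very roughly, the idea looks something like this (a sketch only - the
>> gist is the real version, and the plugin, destination, and tool ids
>> here are placeholders):
>>
>>     <job_conf>
>>         <plugins>
>>             <!-- regular jobs: two worker threads, so two at a time -->
>>             <plugin id="local_regular" type="runner" load="galaxy.jobs.runners.local:LocalJobRunner" workers="2"/>
>>             <!-- heavy tools: a single worker, so one at a time -->
>>             <plugin id="local_serial" type="runner" load="galaxy.jobs.runners.local:LocalJobRunner" workers="1"/>
>>         </plugins>
>>         <handlers default="main">
>>             <handler id="main"/>
>>         </handlers>
>>         <destinations default="local_regular">
>>             <destination id="local_regular" runner="local_regular"/>
>>             <destination id="local_serial" runner="local_serial"/>
>>         </destinations>
>>         <tools>
>>             <!-- send the memory-hungry tools to the single-worker destination -->
>>             <tool id="hilbert" destination="local_serial"/>
>>             <tool id="fft" destination="local_serial"/>
>>             <tool id="slm" destination="local_serial"/>
>>         </tools>
>>     </job_conf>
>>
>> The single-worker plugin serializes whatever is sent to its destination,
>> while everything else keeps running on the regular plugin.
>>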
>> If you are using a distributed resource manager (like Sun Grid Engine,
>> SLURM, Condor, etc...) then the solution is a little different. You
>> should assign these tools to a job destination that consumes a whole
>> node - you would need to provide more information about the cluster
>> hardware and software configuration for me to provide an example of this.
>>
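>> Just to sketch the general shape (the runner load path below is the
>> standard DRMAA one, but the nativeSpecification value is an invented
>> SGE-style placeholder - the right value depends entirely on your
>> scheduler and node sizes):
>>
>>     <plugins>
>>         <plugin id="drmaa" type="runner" load="galaxy.jobs.runners.drmaa:DRMAAJobRunner"/>
>>     </plugins>
>>
>>     <destination id="whole_node" runner="drmaa">
>>         <!-- e.g. ask for all 16 slots on a node; adjust for your DRM -->
>>         <param id="nativeSpecification">-pe smp 16</param>
>>     </destination>
>>
>> and then point the heavy tools at that destination in the <tools>
>> section, as in the local example above.
>>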
>> Beyond that, the common advice about scaling up Galaxy holds - you
>> should configure PostgreSQL instead of SQLite, set up a proxy (nginx or
>> Apache), disable debug in universe_wsgi.ini, etc.... See
>> https://wiki.galaxyproject.org/Admin/Config/Performance/ProductionServer
>> for details. All of these things can help in situations like this.
>>
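>> For example, in universe_wsgi.ini (illustrative values only - adapt the
>> connection string to your own Postgres setup):
>>
>>     # use a real database server instead of the default SQLite file
>>     database_connection = postgresql://galaxy:password@localhost:5432/galaxy
>>     # never run a production server with debug enabled
>>     debug = False
>>     use_interactive = False
>>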
>> -John
>>
>>
>>
>>
>> On Mon, Jun 23, 2014 at 11:22 AM, Pat-74100 <leonardsquall15(a)hotmail.com>
>> wrote:
>> > Hi !
>> >
>> > I have a big workflow, and sometimes when I launch it I get an error
>> > message for some of my tools: unable to finish job.
>> > I think it's maybe because Galaxy ran multiple jobs at once, so I got
>> > this error message.
>> >
>> > I'm looking to run my workflow "step by step".
>> >
>> > For example, this workflow:
>> >
>> > http://hcsvlab.org.au/wp-content/uploads/2014/02/PsySoundTest1.png
>> >
>> > I'm looking to run Hilbert THEN FFT THEN SLM, and not Hilbert, FFT, and
>> > SLM at the same time.
>> >
>> > Is it possible to make a workflow that waits for one job to finish before
>> > running another?
>> >
>> > Thanks
>> >
>> > ___________________________________________________________
>> > Please keep all replies on the list by using "reply all"
>> > in your mail client. To manage your subscriptions to this
>> > and other Galaxy lists, please use the interface at:
>> >   http://lists.bx.psu.edu/
>> >
>> > To search Galaxy mailing lists use the unified search at:
>> >   http://galaxyproject.org/search/mailinglists/