Hi!
I have a big workflow, and sometimes when I launch it I get an error message for some of my tools: "unable to finish job". I think it may be because Galaxy runs multiple jobs at the same time, and that is why I get this error message.
I'm looking to run my workflow "step by step".
For example this workflow:
http://hcsvlab.org.au/wp-content/uploads/2014/02/PsySoundTest1.png
I would like to run Hilbert THEN FFT THEN SLM, not Hilbert, FFT and SLM at the same time.
Is it possible to make a workflow that waits for one job to finish before running the next one?
Thanks
It looks like you were going to post an error message but didn't. That might help debug the problem.
There is no way currently at the workflow level to force one job to wait for another to complete (other than assigning an explicit input/output relationship between the steps). There is a Trello card for this here: https://trello.com/c/h5qZlgU8.
I am not sure that Trello card is really the best approach for this problem though. If it really is the case that these jobs can run simultaneously and they are not implicitly dependent on each other in some way not represented in the workflow, then it is likely they are running on a machine that just doesn't have enough resources (likely memory) to run them properly. The correct solution for this, I think, is to properly configure a job_conf.xml file so that Galaxy tools do not over-consume memory.
By default Galaxy will run 4 jobs simultaneously - any job of any type - regardless of memory consumption, threads used, etc. This gist (https://gist.github.com/jmchilton/ff186b01d51d401623be) contains a job_conf.xml that you can stick in your Galaxy root directory to ensure a handful of tools (I used hilbert, fft and slm as example ids, but you should replace these with the actual ids of your tools) can only run one job at a time. All other jobs will continue to run concurrently, two at a time, beside these.
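As an illustration (not necessarily identical to the gist), a job_conf.xml along those lines, assuming the plain local job runner and the placeholder tool ids above, could look like this:

<?xml version="1.0"?>
<job_conf>
    <plugins>
        <!-- Default local runner: up to two jobs run at a time -->
        <plugin id="local" type="runner" load="galaxy.jobs.runners.local:LocalJobRunner" workers="2"/>
        <!-- Second local runner with a single worker: jobs routed here run one at a time -->
        <plugin id="local_serial" type="runner" load="galaxy.jobs.runners.local:LocalJobRunner" workers="1"/>
    </plugins>
    <handlers>
        <handler id="main"/>
    </handlers>
    <!-- With more than one destination, the 'default' attribute is required -->
    <destinations default="local">
        <destination id="local" runner="local"/>
        <destination id="serial" runner="local_serial"/>
    </destinations>
    <tools>
        <!-- Replace these example ids with the real ids of your tools -->
        <tool id="hilbert" destination="serial"/>
        <tool id="fft" destination="serial"/>
        <tool id="slm" destination="serial"/>
    </tools>
</job_conf>

The idea is simply that anything routed to the single-worker runner queues up and runs one job at a time, while everything else uses the two-worker default.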
If you are using a distributed resource manager (like Sun Grid Engine, SLURM, Condor, etc.) then the solution is a little different. You should assign these tools to a job destination that consumes a whole node - you would need to provide more information about the cluster hardware and software configuration for me to provide an example of this.
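Just to sketch the idea (this assumes SLURM reached through a DRMAA runner plugin; the exact native specification depends on your cluster and DRMAA library), such a whole-node destination might look like:

<destination id="whole_node" runner="drmaa">
    <!-- ask SLURM for one node for exclusive use so nothing else competes for its memory -->
    <param id="nativeSpecification">--nodes=1 --exclusive</param>
</destination>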
Beyond that, the common advice about scaling up Galaxy holds - you should configure PostgreSQL instead of SQLite, set up a proxy (nginx or Apache), disable debug in universe_wsgi.ini, etc. See https://wiki.galaxyproject.org/Admin/Config/Performance/ProductionServer for details. All of these things can help in situations like this.
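For example, the relevant settings in the [app:main] section of universe_wsgi.ini look roughly like this (the database name, user and password below are placeholders):

# Use PostgreSQL instead of the default SQLite database
database_connection = postgresql://galaxy_user:PASSWORD@localhost:5432/galaxy
# Turn the debugging middleware off for production use
debug = False
use_interactive = False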
-John
Hi John
Thanks for your reply. I've copied your job_conf.xml.
Unfortunately, I got an error when I ran run.sh:
galaxy.jobs DEBUG 2014-06-25 09:06:17,610 Loading job configuration from ./job_conf.xml
galaxy.jobs DEBUG 2014-06-25 09:06:17,610 Read definition for handler 'main'
galaxy.jobs INFO 2014-06-25 09:06:17,610 Setting <handlers> default to child with id 'main'
Traceback (most recent call last):
  File "my_repertory/galaxy-python/galaxy-dist/lib/galaxy/webapps/galaxy/buildapp.py", line 39, in app_factory
    app = UniverseApplication( global_conf = global_conf, **kwargs )
  File "my_repertory/galaxy-python/galaxy-dist/lib/galaxy/app.py", line 64, in __init__
    self.job_config = jobs.JobConfiguration(self)
  File "my_repertory/galaxy-python/galaxy-dist/lib/galaxy/jobs/__init__.py", line 107, in __init__
    self.__parse_job_conf_xml(tree)
  File "my_repertory/galaxy-python/galaxy-dist/lib/galaxy/jobs/__init__.py", line 177, in __parse_job_conf_xml
    self.default_destination_id = self.__get_default(destinations, self.destinations.keys())
  File "my_repertory/galaxy-python/galaxy-dist/lib/galaxy/jobs/__init__.py", line 300, in __get_default
    raise Exception("No <%s> default specified, please specify a valid id or tag with the 'default' attribute" % parent.tag)
Exception: No <destinations> default specified, please specify a valid id or tag with the 'default' attribute
I don't understand where the problem is.
Pat
Dear Galaxy developers
I'm trying to allow users to upload files via FTP.
I've been to the tutorial page, but as a beginner I understand very little of it...
https://wiki.galaxyproject.org/Admin/Config/UploadviaFTP?action=show&red...
I've configured the universe_wsgi.ini file with:
ftp_upload_dir = galaxy_dist/database/files/
ftp_upload_site = (any address)
I use FileZilla for the FTP connection, but it doesn't work and it asks me for a port... Can someone provide me with a simple tutorial to enable FTP upload?
Thanks
Pat
Hi Pat,
Briefly, to get Galaxy FTP uploads working, you need to do the following:
1. Install an FTP server. The FTP server can be on the same or a different host than Galaxy, but it must be able to access the "ftp_upload_dir".
2. Configure the FTP server to authenticate users. This needs to be the same user ID as the Galaxy user ID. For example: user id -> iyad.kandalaft@agr.gc.ca
3. Configure the FTP server to create a folder in "ftp_upload_dir" that has the same name as the user that logged into the FTP server. For example: user id -> iyad.kandalaft@agr.gc.ca, directory -> galaxy_dist/database/files/iyad.kandalaft@agr.gc.ca/
The resulting layout is sketched below.
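The end result is a directory layout like this (the file name is purely illustrative):

galaxy_dist/database/files/                    <- ftp_upload_dir from universe_wsgi.ini
    iyad.kandalaft@agr.gc.ca/                  <- one subfolder per Galaxy user, named after the login email
        my_reads.fastq                         <- files uploaded here appear in Galaxy's FTP upload list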
The link you posted briefly describes how to configure ProFTPd, which is an FTP server, to work with Galaxy. Unfortunately, the page is somewhat uneven, as it describes the steps for some operating systems in detail but leaves out the details for other operating systems.
On Red Hat/CentOS operating systems, you install proftpd using:
sudo yum install proftpd
On Debian/Ubuntu:
sudo apt-get install proftpd
The configuration file that the tutorial refers to is /etc/proftpd.conf
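As a rough, untested sketch of the proftpd.conf pieces those steps describe (user and group names are placeholders; the full setup, including authenticating FTP logins against Galaxy's user database, is on the wiki page):

# Run the FTP server as the same user Galaxy runs as, so Galaxy can read the uploads
User                galaxy
Group               galaxy
Umask               077
# Galaxy users are virtual users without a shell on the system
RequireValidShell   off
# Chroot each user into their home directory, which should resolve to
# <ftp_upload_dir>/<galaxy login email>
DefaultRoot         ~
AllowOverwrite      on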
I hope this helps get you started.
Regards,
Iyad Kandalaft Bioinformatics Programmer Microbial Biodiversity Bioinformatics Science & Technology Branch Agriculture & Agri-Food Canada Iyad.Kandalaft@agr.gc.ca | (613) 759-1228
Hi Pat,
At the ENS we use a Samba share as an alternative solution. Our /opt/galaxy-dist/database/upload folder contains a subfolder per user (username@ens-lyon.fr).
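For illustration only (share name, path and account names are placeholders, not the actual ENS configuration), such a share can be declared in smb.conf roughly like this:

[galaxy-upload]
    # %U expands to the connecting user's name, giving each user their own subfolder
    path = /opt/galaxy-dist/database/upload/%U
    valid users = %U
    writable = yes
    # make uploaded files owned by the account Galaxy runs as
    force user = galaxy
    force group = galaxy
    create mask = 0660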
Regards,
Thomas
Hi Thomas,
How do you ensure that files are created with Galaxy ownership using the Samba share?
Regards,
Iyad Kandalaft Bioinformatics Programmer Microbial Biodiversity Bioinformatics Science & Technology Branch Agriculture & Agri-Food Canada Iyad.Kandalaft@agr.gc.ca | (613) 759-1228
Dear Galaxy developers.
Running tools produces dataset_x.dat files in /database/files/. Looking at this link https://wiki.galaxyproject.org/Admin/Config/Performance/Purge%20Histories%20... , I know we can purge and delete datasets.
But a new analysis produces dataset_(x+1).
Is it possible to reset the dataset names, that is, to restart from dataset_1.dat?
Thanks. Pat
No, since the integer X relates the file on disk to the Dataset object in the database (for example, dataset_42.dat corresponds to the dataset row with id 42, so the file cannot be renamed without also updating the database). Technically, you could probably pull it off if you wanted to perform a bunch of database surgery, but I really wouldn't recommend it.
-Dannon
There was a problem with the config I sent you - it defines two destinations for jobs but doesn't specify a default. I have updated the gist (and actually tried loading it in Galaxy this time): https://gist.github.com/jmchilton/ff186b01d51d401623be. Hope this helps you make progress on this issue.
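Concretely, the missing piece was a default attribute on <destinations> naming one of the child destinations, e.g. (the ids here follow the sketch earlier in the thread; use whatever ids your job_conf.xml defines):

<destinations default="local">
    <destination id="local" runner="local"/>
    <destination id="serial" runner="local_serial"/>
</destinations>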
-John
Thanks John, now it works!
You seem like a Galaxy professional. Maybe you can help me again? I've started another topic about setting up FTP to upload large files. I've been through the Galaxy wiki tutorials but I don't understand much of them...
Pat