Hi all,
I have a local Galaxy instance with a basic configuration, and automated
workflow submission to Galaxy using a script written with BioBlend.
Steps within the script:
- step1: data upload
- step2: dataset_collection creation
- step3: workflow submission
- step4: every 5 minutes, poll galaxy for the history status; if state
== "ok", workflow execution was successful
- step5: download the datasets marked available.
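For context, the script is essentially the sketch below (the URL, API key, paths, and workflow ID are placeholders, and the collection-building details are omitted):

```python
import time

POLL_SECONDS = 300  # step 4: poll every 5 minutes


def is_finished(status):
    """My current "done" test: the dict returned by
    HistoryClient.get_status() has a summary 'state' field,
    and I treat state == "ok" as successful completion."""
    return status["state"] == "ok"


def main():
    # bioblend imported inside main() so the helper above can be
    # used without a live Galaxy server
    from bioblend.galaxy import GalaxyInstance

    # placeholders: real URL and API key go here
    gi = GalaxyInstance(url="http://localhost:8080", key="MY-API-KEY")
    history = gi.histories.create_history(name="run-1")

    # step 1: data upload (placeholder path)
    gi.tools.upload_file("/path/to/input.fastq", history["id"])

    # step 2: dataset collection creation (details omitted)

    # step 3: workflow submission (dataset_map omitted for brevity)
    gi.workflows.run_workflow("WORKFLOW-ID", history_id=history["id"])

    # step 4: poll Galaxy for the history status
    while not is_finished(gi.histories.get_status(history["id"])):
        time.sleep(POLL_SECONDS)

    # step 5: download the datasets marked available (omitted)
```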
In my case, when I run workflow submissions in parallel with different
datasets, the status is set to "ok" for a few of them immediately after
the dataset upload, and control returns (even though Galaxy shows the
execution as "queued" when checked from the UI). So for these submissions,
although nothing failed or raised an error, the status is misleading.
I didn't find anything strange in the Galaxy logs either.
My questions, specifically, are:
1. Am I correct in using the "*state*" from the following call to say that
the execution is done? It has worked for the successfully completed
submissions.
> historyClient.get_status(history_id)['state']
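Or should I instead be looking at the "state_details" dict that
get_status() also returns, i.e. treat the history as done only when no
dataset is still in an active state? Something like this sketch (the set
of active states is my assumption from what I see on my instance):

```python
# Assumption: these states mean a dataset is still moving through Galaxy.
ACTIVE_STATES = ("new", "upload", "queued", "running", "setting_metadata")


def history_really_done(status):
    """True only when the summary 'state' is terminal AND the
    'state_details' counts show no dataset still in an active state.

    `status` is the dict returned by HistoryClient.get_status(),
    which carries 'state' plus a state -> dataset-count mapping
    in 'state_details'.
    """
    if status["state"] not in ("ok", "error"):
        return False
    details = status.get("state_details", {})
    return all(details.get(s, 0) == 0 for s in ACTIVE_STATES)
```

With this check, a history whose summary state flips to "ok" right after
upload would still show queued datasets in "state_details", so the script
would keep polling instead of exiting early.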
2. Should I use a production setup? Strangely, none of these jobs fails,
and I can see them finishing in the UI; it's just that their "*state*" is
set to "ok" prematurely, and hence the script ends.
I'd appreciate your thoughts on this.
Galaxy version 15.03.
Thanks for your help!
Best regards,
Aarthi