Dear Galaxy devs
(Apologies for reposting my message from last week; the correction is that the problem
actually appears in the current Galaxy release, 16.07.)
I've recently updated our local Galaxy instances to use an in-house job runner
(as part of a migration to a new VM platform).
In all versions of Galaxy before 16.07, the runner executes without any problems. However,
in 16.07 there is a minor bug in the UI: jobs never update automatically from the grey
"queued" state to the yellow "running" state. The jobs themselves run
fine in all versions.
Does anyone know of changes in 16.07 which might have caused this? I couldn't see
anything relevant in the change log, or by comparing my code with the runners
distributed with Galaxy.
(Our runner code can be seen here:
https://github.com/pjbriggs/ansible-palfinder-galaxy/blob/jse-drop-add-pr...
I suspect the implementation of the 'check_watched_item' method is the likely
culprit.)
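For reference, my understanding (from reading the drmaa runner and the
AsynchronousJobRunner base class in lib/galaxy/jobs/runners/) is that the method
should look roughly like the sketch below. The _read_job_status helper here is
hypothetical shorthand for our file-based status check, not the actual code; the
change_state, mark_as_finished and mark_as_failed calls come from the job wrapper
and the base class:

    from galaxy import model

    def check_watched_item(self, job_state):
        # Sketch only: _read_job_status() is a hypothetical helper
        # standing in for our file-based status check.
        status = self._read_job_status(job_state)
        if status == 'running' and not job_state.running:
            # This is the transition that should turn the dataset
            # yellow: flag the state as running and push the change
            # through the job wrapper so it reaches the database
            # (and hence the UI).
            job_state.running = True
            job_state.job_wrapper.change_state(model.Job.states.RUNNING)
        elif status == 'done':
            self.mark_as_finished(job_state)
            return None  # drop the job from the watched list
        elif status == 'failed':
            self.mark_as_failed(job_state)
            return None
        # Still queued or running: keep watching on the next cycle
        return job_state

As far as I can tell my runner makes the equivalent change_state call, so I'm
unsure what else the 16.07 UI might be waiting for.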
Any suggestions immensely appreciated!
Best wishes
Peter
________________________________________
From: galaxy-dev [galaxy-dev-bounces@lists.galaxyproject.org] on behalf of Peter Briggs
[peter.briggs@manchester.ac.uk]
Sent: Friday, December 02, 2016 2:58 PM
To: galaxy-dev@lists.galaxyproject.org
Subject: [galaxy-dev] Datasets no longer automatically update to yellow
"running" state using novel job runner
Hello all
I've recently updated our local Galaxy instances to use an in-house job runner as
part of a migration to a new VM platform.
As far as I can determine, the runner runs jobs without any issues on all
versions of Galaxy that I've tried (v15.10/v16.01/v16.04/release_16.10).
However, when moving to release_16.10 I've noticed a minor issue with the UI: the
job state in the history no longer automatically switches from the grey
"queued" state to the yellow "running" state, even though the logs
indicate that the job status has changed internally.
Refreshing the history updates the job to the correct yellow "running"
state (so it would seem that internally Galaxy knows that the job status has changed), and
transitions between other states happen automatically.
Comparing my runner code with that of e.g. the drmaa runner, I can't see what
might be missing from mine that would cause this issue. I had wondered if out-of-date
JavaScript in the UI was somehow failing to detect the job status change from
"queued" to "running", but the "local" job runner
doesn't exhibit this behaviour, which suggests the UI side is working.
(The source code for the runner can be found here:
https://github.com/pjbriggs/ansible-palfinder-galaxy/blob/jse-drop-add-pr...
It submits jobs to the cluster by "dropping" the job scripts into a shared
directory; these are then collected and executed by a separate process running on the
cluster, with all communication done via files.)
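To give a concrete picture of the file-based protocol (simplified and hypothetical,
not verbatim from the repository: the drop_dir attribute and ".status" file naming
are illustrative), the cluster-side process writes a one-word status file alongside
each dropped script, and the runner polls it along these lines:

    import os

    def _read_job_status(self, job_state):
        # Hypothetical sketch: the cluster-side process writes a
        # one-word status file alongside the dropped job script.
        status_file = os.path.join(self.drop_dir,
                                   "%s.status" % job_state.job_id)
        if not os.path.exists(status_file):
            # Not yet picked up by the cluster-side process
            return 'queued'
        with open(status_file) as f:
            return f.read().strip()  # 'running', 'done' or 'failed'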
Can anyone suggest what might be happening to cause this issue?
Many thanks
Peter
--
Peter Briggs peter.briggs@manchester.ac.uk
Bioinformatics Core Facility University of Manchester
B.1083 Michael Smith Bldg Tel: (0161) 2751482