Hello,
On a local Galaxy server, I've run into a strange situation:
several jobs are marked as "new", but none of them are starting.
I've stopped and restarted the server, and got the following messages:
-----
galaxy.jobs.runners.local DEBUG 2009-01-26 19:29:00,829 5 workers ready
galaxy.jobs.schedulingpolicy.roundrobin INFO 2009-01-26 19:29:00,829
RoundRobin policy: initialized
galaxy.jobs INFO 2009-01-26 19:29:00,829 job scheduler policy is
galaxy.jobs.schedulingpolicy.roundrobin:UserRoundRobin
galaxy.jobs INFO 2009-01-26 19:29:00,829 job manager started
galaxy.jobs DEBUG 2009-01-26 19:29:00,952 no runner: 7886 is still in
new state, adding to the jobs queue
galaxy.jobs DEBUG 2009-01-26 19:29:00,952 no runner: 7893 is still in
new state, adding to the jobs queue
galaxy.jobs DEBUG 2009-01-26 19:29:00,952 no runner: 7896 is still in
new state, adding to the jobs queue
galaxy.jobs DEBUG 2009-01-26 19:29:00,952 no runner: 7902 is still in
new state, adding to the jobs queue
galaxy.jobs DEBUG 2009-01-26 19:29:00,952 no runner: 7904 is still in
new state, adding to the jobs queue
galaxy.jobs DEBUG 2009-01-26 19:29:00,952 no runner: 7905 is still in
new state, adding to the jobs queue
galaxy.jobs DEBUG 2009-01-26 19:29:00,952 no runner: 7906 is still in
new state, adding to the jobs queue
galaxy.jobs DEBUG 2009-01-26 19:29:00,952 no runner: 7907 is still in
new state, adding to the jobs queue
galaxy.jobs DEBUG 2009-01-26 19:29:00,952 no runner: 7908 is still in
new state, adding to the jobs queue
galaxy.jobs INFO 2009-01-26 19:29:00,971 job stopper started
-----
But even after the server restart, no jobs are starting (I waited
about a minute after the restart).
Is there a configuration setting that would cause these jobs to start
after a restart? (Or a way to delete the 'stale' jobs?)
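If there is no such setting, would it be safe to clear the stale jobs directly in the database? As a sketch of what I have in mind (the table name `job` and column `state` are my assumptions, not checked against the actual Galaxy schema; the demo below runs against a throwaway in-memory SQLite database, not a real instance):

```python
import sqlite3

def clear_stale_jobs(conn):
    """Mark every job still in the 'new' state as 'deleted'; return the count.

    Assumes a table named 'job' with a text 'state' column -- an
    unverified guess at the schema, shown for illustration only.
    """
    cur = conn.execute("UPDATE job SET state = 'deleted' WHERE state = 'new'")
    conn.commit()
    return cur.rowcount

# Demo with fake data in an in-memory database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE job (id INTEGER PRIMARY KEY, state TEXT)")
conn.executemany(
    "INSERT INTO job (id, state) VALUES (?, ?)",
    [(7886, "new"), (7893, "new"), (7896, "ok")],
)
print(clear_stale_jobs(conn))  # number of stale jobs cleared
```

I'd obviously stop the server first and back up the database before trying anything like this for real.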
Thanks,
Gordon.