Hello Shaun,

Brute force method: if the job is running locally (or are you using a cluster?), you can simply kill its process. After killing the process, you should manually update the 'state' column in the "job" table for that job id, changing it from 'running' to 'error'. To find the job id, get the dataset_id from the job's command line (by looking at the running process), then look up that dataset id in the "job_to_output_dataset" table.

Greg Von Kuster
Galaxy Development Team

SHAUN WEBB wrote:
It appears that I have a "stuck" job on our local galaxy server that is causing problems. Galaxy seems unable to upload new files or run tools. I have checked the manage jobs tab in the admin section and there is a job (an upload) that has not been updated for an hour. I have tried to stop this job using the button on this page but that hasn't worked.
Is there any other way I can stop or delete this?
Thanks,
Shaun
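For reference, the manual cleanup Greg describes (look up the job via the "job_to_output_dataset" table, then reset its state in the "job" table) can be sketched roughly as below. This is an illustrative sketch against a toy in-memory SQLite database, not Galaxy's actual schema or client; the column names (`id`, `state`, `job_id`, `dataset_id`) are assumptions for illustration, and on a real instance you would run the equivalent SQL against Galaxy's configured database, with the server stopped or the job process already killed.

```python
import sqlite3

# Toy in-memory database mimicking the two Galaxy tables named above.
# All column names here are assumptions for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE job (id INTEGER PRIMARY KEY, state TEXT)")
conn.execute("CREATE TABLE job_to_output_dataset (job_id INTEGER, dataset_id INTEGER)")
conn.execute("INSERT INTO job VALUES (42, 'running')")
conn.execute("INSERT INTO job_to_output_dataset VALUES (42, 7)")

def mark_job_errored(conn, dataset_id):
    """Find the job that produced dataset_id and set its state to 'error'."""
    row = conn.execute(
        "SELECT job_id FROM job_to_output_dataset WHERE dataset_id = ?",
        (dataset_id,),
    ).fetchone()
    if row is None:
        return None  # no job recorded for this dataset
    job_id = row[0]
    conn.execute("UPDATE job SET state = 'error' WHERE id = ?", (job_id,))
    conn.commit()
    return job_id

# Example: the stuck upload produced dataset 7, so job 42 gets flagged.
stuck_job = mark_job_errored(conn, 7)
```

After the update, Galaxy should no longer treat the job as running when it next reads the table.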