On Wed, Aug 25, 2010 at 10:27:37AM -0400, Nate Coraor wrote:
> Hi Ry4an,
>
> Could you find these jobs in the database and see what the value of
> their 'state' column is?  If no job action was ever logged in the
> server log from which you can determine the job id, you can check
> job_to_output_library_dataset (unless you have added a lot of library
> datasets since these two became stuck).
I can't find entries in the job table corresponding to the loads.  Indeed,
the most recent create_time in there is 2010-08-20 (this system is not yet
in use) and I kicked these imports off yesterday.  This aligns with the
lack of entries in the admin job view -- everything is in 'ok' or 'error'.
The job_to_output_library_dataset table is empty.

In the 'dataset' table I do see entries that correspond to the additions,
and their state is 'queued'.  And so are 12,000 other entries from an
ill-conceived load that I tried to cancel on the 20th:

    galaxy=# select count(state) from dataset where state = 'queued';
     count
    -------
     12187

All those datasets were in a library I deleted after I realized that 1.2TB
of whatever a researcher happens to have on their external drive shouldn't
be bulk imported as a single library.

Is there a chance that whatever dequeues import jobs is still trying to
chew through those 12,000 entries despite the library having been deleted
(the datasets aren't showing up as deleted)?  There are no errors in the
logs and zero system load, but 12K is a lot of imports to wait in line
behind...

Could I safely clear out those queued datasets, or am I playing with
internal-referential-integrity fire at that point?

Thanks,

-- 
Ry4an Brase                                           612-626-6575
University of Minnesota Supercomputing Institute
for Advanced Computational Research   http://www.msi.umn.edu
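For what it's worth, here's the sort of query I'd use to confirm those
queued datasets really do trace back to the deleted library before touching
anything.  The join through library_dataset_dataset_association is my guess
at the relevant Galaxy schema for this version; the table and column names
may need adjusting, so treat this as a sketch rather than something to run
blindly:

```sql
-- Sketch only: count queued datasets that are associated with a
-- library dataset, to see whether the 12K stragglers are library
-- imports.  Schema names (library_dataset_dataset_association,
-- dataset_id) are assumptions and may differ in your Galaxy release.
SELECT count(*)
FROM dataset d
JOIN library_dataset_dataset_association ldda
  ON ldda.dataset_id = d.id
WHERE d.state = 'queued';
```

If that count comes back near 12,187, it would support the theory that the
dequeuer is still grinding through the deleted library's entries.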