On Jul 5, 2012, at 8:02 PM, Anthonius deBoer wrote:
I am running Galaxy on a system that normally runs all jobs on the cluster, but some jobs need to be run locally... When I restart Galaxy while those local jobs are running, they fail (and I am fine with that)...
But when I check the logs, I see a lot of errors like the one below, and the word CRITICAL scares me a little...
It complains about certain directories not being empty (although they look empty to me when I look at them). But if it needs to clean up, shouldn't it expect stuff to be in those working directories? And why is this a critical error, and why don't those directories get removed anyway? It clutters my database, and I fear this was the reason for the weird behaviour I saw earlier, since it may re-use some of these directories after a while?
galaxy.objectstore CRITICAL 2012-07-05 16:57:24,964 /mnt/ngs/analysis/tdeboer/galaxy-data/database2/job_working_directory/003/3350 delete error [Errno 39] Directory not empty: '/mnt/ngs/analysis/tdeboer/galaxy-data/database2/job_working_directory/003/3350'
Hi Thon,

This is the job's working directory. It's possible that the sequence of removal steps runs out of order, or that removing the files from the directory in the preceding steps takes too long to finish. It could also be stale file handles (such as .nfs* files) on a shared filesystem. If you want to see what's in there, I'd suggest logging the contents of the directory right before the call that attempts to remove it in lib/galaxy/jobs/__init__.py. Otherwise, this is harmless, and you can use any method you prefer to clean up the extra job working directories at a later time.

--nate
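For anyone wanting to follow Nate's suggestion, here is a minimal sketch of the kind of debug logging he describes. The helper name is made up, and the exact call site (the cleanup code in lib/galaxy/jobs/__init__.py that removes the job working directory) varies between Galaxy versions, so treat this as an illustration rather than a patch:

```python
import logging
import os

log = logging.getLogger(__name__)


def log_leftover_contents(path):
    """Hypothetical debug helper: list whatever is still inside a job
    working directory.  Call it immediately before the line in
    lib/galaxy/jobs/__init__.py that tries to remove the directory,
    so the log shows what blocked the removal (e.g. stale .nfs* files
    left behind by an NFS client that still holds a file open)."""
    try:
        for root, dirs, files in os.walk(path):
            for name in files:
                log.debug("leftover file: %s", os.path.join(root, name))
            for name in dirs:
                log.debug("leftover dir:  %s", os.path.join(root, name))
    except OSError as exc:
        # The directory may already be gone or unreadable; just note it.
        log.debug("could not inspect %s: %s", path, exc)
```

With this in place, the entries logged right before the "delete error [Errno 39] Directory not empty" message should reveal whether the culprit is a slow delete, an out-of-order cleanup step, or lingering .nfs* handles.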
Thanks
Thon