Hi all,

In some workflow testing I was getting "Unable to cleanup job" errors, which were due to the cleanup method trying to remove non-existent directories. The following patch appears to solve this particular error (although I am still looking into why my workflow was failing).

Peter

Example error (before the patch):

galaxy.jobs ERROR 2010-11-19 13:58:04,806 Unable to cleanup job 150
Traceback (most recent call last):
  File "/home/pjcock/repositories/galaxy-central/lib/galaxy/jobs/__init__.py", line 614, in cleanup
    shutil.rmtree( self.working_directory )
  File "/usr/local/lib/python2.6/shutil.py", line 204, in rmtree
    onerror(os.listdir, path, sys.exc_info())
  File "/usr/local/lib/python2.6/shutil.py", line 202, in rmtree
    names = os.listdir(path)
OSError: [Errno 2] No such file or directory: './database/job_working_directory/150'

Proposed patch:

diff -r 730b89c4da26 lib/galaxy/jobs/__init__.py
--- a/lib/galaxy/jobs/__init__.py	Wed Nov 17 18:26:33 2010 +0000
+++ b/lib/galaxy/jobs/__init__.py	Fri Nov 19 14:13:45 2010 +0000
@@ -610,7 +610,7 @@
         try:
             for fname in self.extra_filenames:
                 os.remove( fname )
-            if self.working_directory is not None:
+            if self.working_directory and os.path.isdir( self.working_directory ):
                 shutil.rmtree( self.working_directory )
             if self.app.config.set_metadata_externally:
                 self.external_output_metadata.cleanup_external_metadata( self.sa_session )
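For illustration, the guarded cleanup can be sketched outside of Galaxy. The `safe_cleanup` function below is a hypothetical stand-alone analogue of the patched method, not Galaxy's actual API; it shows why checking `os.path.isdir` first avoids the OSError in the traceback:

```python
import os
import shutil
import tempfile

def safe_cleanup(working_directory, extra_filenames=()):
    """Remove scratch files and then the job's working directory,
    tolerating a directory that was never created or is already gone."""
    for fname in extra_filenames:
        if os.path.exists(fname):
            os.remove(fname)
    # The patched guard: skip rmtree for None/empty paths and for
    # directories that do not exist, instead of letting rmtree raise.
    if working_directory and os.path.isdir(working_directory):
        shutil.rmtree(working_directory)

# With the guard, calling cleanup twice is harmless (idempotent):
d = tempfile.mkdtemp()
safe_cleanup(d)
safe_cleanup(d)  # second call is a no-op instead of raising OSError
print(os.path.isdir(d))  # prints False
```

A terser alternative would be `shutil.rmtree(working_directory, ignore_errors=True)`, but that also silences unrelated failures (e.g. permission errors), so the explicit existence check is the more conservative fix.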
On Fri, Nov 19, 2010 at 2:19 PM, Peter Cock <p.j.a.cock@googlemail.com> wrote:
Hi all, In some workflow testing I was getting errors "Unable to cleanup job" [...]
As a changeset: http://bitbucket.org/peterjc/galaxy-central/changeset/1a2b4aacef76

I worked out what was wrong: an early step in my saved workflow was missing a parameter. However, the output from this step was being hidden, so I was not able to read the error message.

Peter
Peter,

I want to follow up on the hidden dataset issue. You have a step in your workflow with a Hide Dataset Action, and even though the job fails, the dataset still gets hidden? If this is what you're seeing, it is a bug and I'll fix it. This is not the intended behavior.

Lastly, and you might have already found this, you can toggle the display of hidden datasets in the history panel through the Options menu at the top.

Dannon
I worked out what was wrong - an early step in my saved workflow was missing a parameter. However, the output from this step was being hidden -- so I was not able to read the error message.
On Fri, Nov 19, 2010 at 2:57 PM, Dannon Baker <dannonbaker@me.com> wrote:
Peter, I want to follow up on the hidden dataset issue. You have a step in your workflow with a Hide Dataset Action and even though the job fails, the dataset still gets hidden?
If this is what you're seeing, it is a bug and I'll fix it. This is not the intended behavior.
I didn't think it was the intended behaviour. Can you reproduce this?
Lastly, and you might have already found this, you can toggle the display of hidden datasets in the history panel through the Options menu at the top.
I hadn't seen that - thanks for the tip.

See also: http://lists.bx.psu.edu/pipermail/galaxy-user/2010-November/001615.html

Cheers,

Peter
Peter Cock wrote:
Hi all,
In some workflow testing I was getting errors "Unable to cleanup job" which was due to the cleanup method trying to remove non-existent directories. The following patch appears to solve this particular error (although I am still looking into why my workflow was failing).
Thanks Peter, I've committed this. --nate
participants (3)

- Dannon Baker
- Nate Coraor
- Peter Cock