When I run a Tophat job for over an hour, I get the following message and the job stops.
Does anyone know why this happens? Also, is there a way to stop Galaxy from cleaning up
the job working directory so that I can check the error log files?
Dwayne
galaxy.jobs.runners.local DEBUG 2015-07-14 16:54:10,478 execution finished: /data/galaxy/galaxy_prod/database/job_working_directory/000/26/galaxy_26.sh
galaxy.jobs.output_checker INFO 2015-07-14 16:54:10,595 Job 26: Log: tool progress
galaxy.jobs.output_checker INFO 2015-07-14 16:54:10,595 Job 26: Log: tool progress
galaxy.datatypes.metadata DEBUG 2015-07-14 16:54:10,675 setting metadata externally failed for HistoryDatasetAssociation 32: [Errno 2] No such file or directory
galaxy.jobs.runners.local ERROR 2015-07-14 16:54:10,754 Job wrapper finish method failed
Traceback (most recent call last):
  File "/data/galaxy/galaxy_prod/lib/galaxy/jobs/runners/local.py", line 129, in queue_job
    job_wrapper.finish( stdout, stderr, exit_code )
  File "/data/galaxy/galaxy_prod/lib/galaxy/jobs/__init__.py", line 1126, in finish
    dataset.datatype.set_meta( dataset, overwrite=False )  # call datatype.set_meta directly for the initial set_meta call during dataset creation
  File "/data/galaxy/galaxy_prod/lib/galaxy/datatypes/binary.py", line 268, in set_meta
    proc = subprocess.Popen( args=command, stderr=open( stderr_name, 'wb' ) )
  File "/opt/python/2.7.9/lib/python2.7/subprocess.py", line 710, in __init__
    errread, errwrite)
  File "/opt/python/2.7.9/lib/python2.7/subprocess.py", line 1335, in _execute_child
    raise child_exception
OSError: [Errno 2] No such file or directory
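For context on the working-directory question: Galaxy exposes a `cleanup_job` option in its main configuration file that controls when job working directories are deleted. The snippet below is a hedged sketch, not taken from the poster's instance — the file name (galaxy.ini, or universe_wsgi.ini on older releases) and the exact accepted values should be verified against the sample config shipped with your Galaxy version.

```ini
; Sketch of the relevant setting in galaxy.ini, under the [app:main] section.
; Assumed values (check your release's galaxy.ini.sample):
;   always    - delete the job working directory regardless of outcome
;   onsuccess - delete it only when the job succeeds
;   never     - keep every job working directory for inspection
[app:main]
cleanup_job = never
```

With `cleanup_job = never`, the directory shown in the log (.../job_working_directory/000/26/) should survive the failed run, so its stderr files can be examined. Note this can consume disk space quickly, so it is usually only enabled while debugging.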