Hello,

We have almost succeeded in using drmaa-0.4b3 with Galaxy and PBS Pro: a job launched from Galaxy runs on our cluster, but when the job status changes to "finished", the drmaa Python egg raises an error. Here is the server log:

galaxy.jobs.runners.drmaa ERROR 2010-11-26 17:11:10,857 (21/516559.service0.ice.ifremer.fr) Unable to check job status
Traceback (most recent call last):
  File "/home12/caparmor/bioinfo/galaxy_dist/lib/galaxy/jobs/runners/drmaa.py", line 252, in check_watched_items
    state = self.ds.jobStatus( job_id )
  File "/usr/lib/python2.5/site-packages/drmaa-0.4b3-py2.5.egg/drmaa/__init__.py", line 522, in jobStatus
  File "/usr/lib/python2.5/site-packages/drmaa-0.4b3-py2.5.egg/drmaa/helpers.py", line 213, in c
    return f(*(args + (error_buffer, sizeof(error_buffer))))
  File "/usr/lib/python2.5/site-packages/drmaa-0.4b3-py2.5.egg/drmaa/errors.py", line 90, in error_check
    raise _ERRORS[code-1]("code %s: %s" % (code, error_buffer.value))
InternalException: code 1: pbs_statjob: Job %s has finished
galaxy.jobs.runners.drmaa WARNING 2010-11-26 17:11:10,861 (21/516559.service0.ice.ifremer.fr) job will now be errored
galaxy.jobs.runners.drmaa DEBUG 2010-11-26 17:11:10,986 (21/516559.service0.ice.ifremer.fr) User killed running job, but error encountered removing from DRM queue: code 1: pbs_deljob: Job %s has finished

Any ideas?

Thanks a lot,
Laure
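For what it's worth, here is a minimal, self-contained sketch of the workaround we are considering: treating an InternalException whose message says the job "has finished" as normal completion instead of erroring the job. The class and function names below are stand-ins, not the real drmaa or Galaxy API; it only illustrates the idea.

```python
class InternalException(Exception):
    """Stand-in for drmaa.errors.InternalException (hypothetical)."""


def job_status(job_id, backend):
    """Stand-in for Session.jobStatus() backed by pbs_statjob.

    PBS Pro appears to report an error once the job has left the queue,
    which the drmaa egg surfaces as InternalException.
    """
    if backend.get(job_id) == "finished":
        raise InternalException(
            "code 1: pbs_statjob: Job %s has finished" % job_id)
    return backend[job_id]


def check_watched_item(job_id, backend):
    """Poll one job, mapping the 'has finished' error to completion."""
    try:
        return job_status(job_id, backend)
    except InternalException as e:
        if "has finished" in str(e):
            # The job is gone from the DRM queue: treat it as done
            # rather than marking it errored, as the log above shows.
            return "done"
        raise


backend = {"516559.service0.ice.ifremer.fr": "finished"}
print(check_watched_item("516559.service0.ice.ifremer.fr", backend))
```

Running this prints "done" instead of propagating the exception, which is the behaviour we would like the runner's check_watched_items loop to have.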