I want to add that this issue doesn't show up if I turn off 'use_tasked_jobs' for job splitting. I realize this feature is marked as not ready for production, so I will stay away from it for the time being.
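For reference, this is the setting I toggled in universe_wsgi.ini (assuming a stock galaxy-dist config, where it is commented out by default):

    # Enable job splitting into tasks (marked as not ready for production)
    use_tasked_jobs = False

Thanks, Carlos

On Fri, Nov 16, 2012 at 5:13 PM, Carlos Borroto <carlos.borroto@gmail.com> wrote: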
Hi all,
I ran into this error while uploading a ~300MB file. I'm using the latest galaxy-dist, an Apache proxy, and Torque/Maui through DRMAA. After this error appears in the log, the dataset stays grey. If I restart the Galaxy daemon, the job gets submitted correctly.
galaxy.jobs.runners.drmaa ERROR 2012-11-16 16:29:49,833 Uncaught exception queueing job
Traceback (most recent call last):
  File "/local/opt/galaxy/galaxy-dist/lib/galaxy/jobs/runners/drmaa.py", line 145, in run_next
    self.queue_job( obj )
  File "/local/opt/galaxy/galaxy-dist/lib/galaxy/jobs/runners/drmaa.py", line 233, in queue_job
    job_id = self.ds.runJob(jt)
  File "build/bdist.linux-x86_64/egg/drmaa/__init__.py", line 331, in runJob
    _h.c(_w.drmaa_run_job, jid, _ct.sizeof(jid), jobTemplate)
  File "build/bdist.linux-x86_64/egg/drmaa/helpers.py", line 213, in c
    return f(*(args + (error_buffer, sizeof(error_buffer))))
  File "build/bdist.linux-x86_64/egg/drmaa/errors.py", line 90, in error_check
    raise _ERRORS[code-1]("code %s: %s" % (code, error_buffer.value))
InternalException: code 1: pbs-drm: unknown error
The same error happens even if I don't upload the file through the browser but instead transfer it with something like scp. Another file of only a few KB doesn't show this behavior. I have the feeling this might be a memory issue, as the box running Galaxy has only 2GB of RAM.
Please let me know if there is anything else I could do to help figure out this issue.
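For instance, I could try submitting a trivial job through the drmaa egg directly, outside of Galaxy, to see whether the DRMAA layer itself can talk to Torque. A minimal sketch (untested; /bin/true is just a placeholder command):

    import drmaa

    # Open a DRMAA session against the same Torque/Maui setup Galaxy uses
    s = drmaa.Session()
    s.initialize()

    # Trivial job template; /bin/true is only a placeholder command
    jt = s.createJobTemplate()
    jt.remoteCommand = '/bin/true'

    # runJob() is the same call that raises InternalException in the Galaxy runner
    job_id = s.runJob(jt)
    print('submitted job ' + job_id)

    s.deleteJobTemplate(jt)
    s.exit()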
Thanks, Carlos