I ran into a problem running edgeR. My test environment is Mac OS X, R
Fatal error: Execution halted
Error: package ‘limma’ was built for x86_64-apple-darwin14.5.0
Execution halted
Has anyone had the same problem?
Thanks in advance.
Are there any Galaxy forks using Singularity images instead of Docker?
Is it trivial enough to replace "docker run" with "singularity exec"?
Greek Research and Technology Network (GRNET)
☎ +30 210 7471127
I was thinking about implementing the job resubmission feature for drmaa.
I hope that I can simplify the job configuration for our installation
(and probably others as well) by escalating through different queues (or
resource limits). Thereby I hope to reduce the number of special cases
that I need to take care of.
I was wondering if there are others
- who are also interested in this feature and want to join? I will try
to give this project a head start in the next week.
- who may have started to work on this feature, or just started to think
about it, and want to share code/experience.
I have two questions for all DRMAA users. Here is the first one.
I was checking how our queuing system (Univa Grid Engine) and Galaxy
react when submitted jobs exceed run-time or memory limits.
I found out that the python drmaa library cannot query the job status
after the job is finished (for both successful and unsuccessful jobs).
In lib/galaxy/jobs/runners/drmaa.py the call
self.ds.job_status(external_job_id)
raises an exception.
Is this always the case? Or might this be a problem with our Grid Engine?
I have attached some code for testing. Here the first call to
s.jobStatus(jobid) works, but the second one, after s.wait(...), fails
with "drmaa.errors.InvalidJobException: code 18: The job specified
by the 'jobid' does not exist."
The same error pops up in the Galaxy logs. The consequence is that jobs
that reached the limits are shown as completed successfully in Galaxy.
Interestingly, quite a bit of information can be obtained from the
return value of s.wait. I was wondering if this can be used to
differentiate successful from failed jobs. In particular hasExited,
hasSignal, and terminateSignal are different in the two cases.
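The idea of classifying jobs from the wait() return value can be sketched as below. This is a minimal illustration, not the Galaxy code: the `JobInfo` stand-in mimics the tuple that python-drmaa's `s.wait()` returns (field names such as `hasExited`, `hasSignal`, `terminatedSignal` follow that library, but the exact set may vary by version), and the `classify` helper and the example signal values are my own assumptions.

```python
from collections import namedtuple

# Stand-in for the JobInfo tuple returned by python-drmaa's s.wait();
# field names follow that library, but the exact set may vary by version.
JobInfo = namedtuple(
    "JobInfo",
    "jobId hasExited exitStatus hasSignal terminatedSignal wasAborted",
)

def classify(info):
    """Classify a finished job from the value returned by s.wait().

    A job that ran to completion has hasExited=True and exitStatus=0;
    a job killed for exceeding limits typically has hasSignal=True,
    with terminatedSignal naming the signal (which signal is sent
    depends on the DRM configuration).
    """
    if info.wasAborted:
        return "aborted"
    if info.hasSignal:
        return "killed by signal %s (likely hit a limit)" % info.terminatedSignal
    if info.hasExited and info.exitStatus == 0:
        return "finished successfully"
    return "failed with exit status %s" % info.exitStatus

# Two hypothetical outcomes: a clean finish and a job killed at a limit.
ok = JobInfo("42", True, 0, False, None, False)
oom = JobInfo("43", False, 0, True, "SIGKILL", False)
print(classify(ok))   # finished successfully
print(classify(oom))  # killed by signal SIGKILL (likely hit a limit)
```

In a real runner the same branching would sit right after the `s.wait(jobid, ...)` call, replacing the failing `job_status` query for finished jobs.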
Please always keep the mailing list in the loop, so that others can help or benefit from the answers.
On 06/08/2017 07:27 PM, John Chen wrote:
> This is the error I get when I start the Galaxy server.
> xml.etree.ElementTree.ParseError: mismatched tag: line 8, column 105
This is very informative. Looking at line 8 in your file:
the element "a" is not terminated.
What happens if you try just
Has anyone seen a similar error like this before? We are unsure whether
Galaxy is causing the issue or it is being caused by Slurm, as it seems
Galaxy is prematurely deleting a file.
galaxy.jobs.runners DEBUG 2017-06-14 19:36:45,719 (3261/143577) Unable
[Errno 2] No such file or directory:
galaxy.jobs.output_checker DEBUG 2017-06-14 19:36:45,725 Tool produced
standard error failing job - [slurmstepd: get_exit_code task 0 died by