We've encountered a problem trying to run a workflow on our local Galaxy instance, which is resulting in an "uncaught exception in exposed API method" message displayed in the Galaxy window.
The most trivial workflow exhibiting the problem consists of a single 'cut' step, whose output is configured to change the datatype to "interval" and to explicitly assign the chrom, start and end columns to 1, 2 and 3. Attempting to run this workflow triggers the API exception.
If the workflow is updated so that the column assignments are no longer attempted then the workflow is able to start without apparent problems.
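For reference, these per-output settings live in the workflow's post-job actions. Below is a sketch of roughly what the relevant fragment of an exported .ga file looks like; the output name "out_file1" is an assumption on my part, and the exact key names may differ between releases:

```json
"post_job_actions": {
    "ChangeDatatypeActionout_file1": {
        "action_type": "ChangeDatatypeAction",
        "output_name": "out_file1",
        "action_arguments": {"newtype": "interval"}
    },
    "ColumnSetActionout_file1": {
        "action_type": "ColumnSetAction",
        "output_name": "out_file1",
        "action_arguments": {"chromCol": "1", "startCol": "2", "endCol": "3"}
    }
}
```

Deleting the ColumnSetAction entry corresponds to removing the column assignments in the workflow editor.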
Our local Galaxy instance is a bit out of date (18.09), but I have been able to reproduce this error on Galaxy main, so it looks like it also affects later versions. The failing workflow can be accessed on main via:
The same workflow with the column assignment removed is accessible via:
(Main offers two variants of the 'cut' tool - as far as I can tell this problem happens with both.)
Apologies if this has already been reported elsewhere - any suggestions for a workaround appreciated.
Peter Briggs peter.briggs(a)manchester.ac.uk
Bioinformatics Core Facility University of Manchester
B.1083 Michael Smith Bldg Tel: (0161) 2751482
I have a question about the samtools_slice_bam tool in the samtools suite.
It seems the 'references' metadata variable remains empty after re-detecting metadata.
I have now tried sorting the BAM both in Galaxy and manually with samtools, but no references come up.
I also set a breakpoint in binary.Bam.set_meta, but it doesn't seem to be hit.
Any ideas? Thx.
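For context, the 'references' metadata corresponds to the @SQ lines of the BAM header (one per reference sequence, with the name in the SN: field). A minimal, purely illustrative sketch of extracting them from the text that `samtools view -H` prints, useful for checking whether the header itself is intact:

```python
def references_from_header(header_text):
    """Pull reference (chromosome) names out of SAM header text,
    i.e. the SN: field of each @SQ line."""
    refs = []
    for line in header_text.splitlines():
        if line.startswith("@SQ"):
            for field in line.split("\t"):
                if field.startswith("SN:"):
                    refs.append(field[3:])
    return refs

header = "@HD\tVN:1.6\tSO:coordinate\n@SQ\tSN:chr1\tLN:248956422\n@SQ\tSN:chr2\tLN:242193529\n"
print(references_from_header(header))  # ['chr1', 'chr2']
```

If this comes back empty for your file, the problem is the BAM itself rather than Galaxy's metadata detection.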
today we found out that some of the files in our Galaxy instance are newly appearing in "database/files/000/", which is the first folder of the database files. Could someone please explain how Galaxy generates new file names, and how this is possible?
Thanks in advance Jochen
Institute of Agricultural Sciences
Postal address: Universitätstrasse 2 / LFW B 58.1
8092 Zurich, Switzerland
Office: Eschikon 27
8315 Lindau, Switzerland
Phone +41 52 354 92 06
I don't know if it's expected, but at least I can tell that the same is happening here. And I can add: if the 000 folder contains 000-999, the pattern repeats in 001.
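Not an authoritative answer, but as far as I can tell Galaxy derives the folder from the dataset id, with roughly 1000 datasets per directory. A sketch of the scheme (modeled loosely on galaxy.util.directory_hash_id; details may vary between releases):

```python
def directory_hash_id(dataset_id):
    """Roughly how Galaxy maps a dataset id to a directory under
    database/files/ (about 1000 datasets per directory)."""
    s = str(dataset_id)
    if len(s) < 4:
        # ids 0-999 all live under 000/
        return ["000"]
    # pad to a multiple of three digits, drop the last three digits,
    # then split what remains into three-digit path components
    padded = "0" * ((3 - len(s) % 3) % 3) + s
    padded = padded[:-3]
    return [padded[i:i + 3] for i in range(0, len(padded), 3)]

print(directory_hash_id(42))       # ['000']
print(directory_hash_id(12345))    # ['012']
```

So new files showing up in 000/ would simply mean new datasets whose ids fall into a range that hashes there; archiving old datasets would not change the numbering.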
On 21.05.19 17:38, galaxy-dev-request(a)lists.galaxyproject.org wrote:
> Coming back to my question:
> this is how an ls -ls looks in my database folder...
> 2. Apr 14:40 000/
> 9. Nov 2015 005/
> 19. Nov 2018 007/
> 19. Nov 2018 008/
> 3. Dec 2015 010/
> 10. Jan 2017 012/
> 10. Jan 2017 013/
> 29. Jun 2017 014/
> 4. Jul 2017 015/
> 9. Jan 2018 018/
> 29. Jun 2017 020/
> 21. Sep 2016 021/
> 20. Mar 2018 032/
> 29. Jan 09:38 033/
> 29. Jan 10:32 034/
> 16. Jul 2018 035/
> 30. Oct 2018 036/
> 30. Oct 2018 037/
> 30. Oct 2018 038/
> 30. Oct 2018 039/
> 29. Aug 2018 070/
> 29. Aug 2018 071/
> 29. Aug 2018 072/
> 29. Aug 2018 073/
> 29. Aug 2018 074/
> 29. Aug 2018 075/
> 29. Aug 2018 076/
> 29. Aug 2018 077/
> 29. Aug 2018 078/
> 29. Aug 2018 079/
> 29. Aug 2018 080/
> 29. Aug 2018 081/
> 29. Aug 2018 082/
> 29. Aug 2018 083/
> 29. Aug 2018 084/
> 29. Aug 2018 085/
> 29. Aug 2018 086/
> 29. Aug 2018 087/
> 29. Aug 2018 088/
> 29. Aug 2018 089/
> 29. Aug 2018 090/
> 29. Aug 2018 091/
> 29. Aug 2018 092/
> 29. Aug 2018 093/
> 29. Aug 2018 094/
> 29. Aug 2018 095/
> 31. Aug 2018 096/
> 30. Aug 2018 097/
> 1. Sep 2018 098/
> 24. Oct 2018 099/
> if you look into the 000 folder, it restarts:
> 10. Dec 19:36 100/
> 30. Nov 10:45 101/
> 10. Dec 19:39 102/
> 10. Dec 19:37 103/
> 11. Jan 15:13 104/
> 11. Jan 11:31 105/
> 22. Jan 14:33 106/
> 8. Feb 10:46 107/
> 14. Feb 15:47 108/
> 21. Mar 11:00 109/
> 2. Apr 15:22 110/
> 21. May 17:17 111/
> is this the expected behavior?
> The only thing we did was archive some of the very early generated
> datasets. Could that be the problem?
> Cheers Jochen
> On 14.05.19 15:32, Jochen Bick wrote:
>> today we found out that some of our files in your galaxy instance are
>> new in "database/files/000/" which is the first folder of your database
>> files. Could some please explain me how galaxy generates new file names?
>> and how this is possible?
>> Thanks in advance Jochen
Molekulare Systembiologie (MOLSYB)
Helmholtz-Zentrum für Umweltforschung GmbH - UFZ/
Helmholtz Centre for Environmental Research GmbH - UFZ
Permoserstraße 15, 04318 Leipzig, Germany
Phone +49 341 235 482296
I configured a URL prefix in Galaxy 18.09. It has problems resolving some
of the links, including:
visualize -> create visualization
shared data -> workflows
shared data -> visualizations
I will use 'workflow' as an example to illustrate what happens. When the
mouse cursor hovers over 'workflow', the URL shown at the bottom left of the
browser looks correct. But after I click 'workflow', the actual URL shown in
the address bar has the prefix duplicated. Of course there is no such page,
and the browser shows a blank page. If I manually remove one 'galaxy' from
the URL, the page is shown correctly.
Besides the four mentioned above, the other links work fine. If Galaxy is
configured without a prefix, there is no problem.
I also tested v18.05. With all the configuration the same, that version
does not have the prefix problem that v18.09 does. So I think this is a bug
in v18.09's URL prefix handling.
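For anyone hitting the same thing: to the best of my knowledge, this is the prefix setup the 18.x documentation describes for the built-in uWSGI server. It is worth double-checking that `manage-script-name` is set, since without it the prefix can end up applied twice. The `/galaxy` prefix below is just an example:

```yaml
uwsgi:
  # serve the application under the /galaxy prefix
  mount: /galaxy=galaxy.webapps.galaxy.buildapp:uwsgi_app
  # strip the prefix before requests reach the app, so generated
  # routes do not duplicate it
  manage-script-name: true
```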
I am trying to tag datasets with hashtags (so they propagate), using tags like #key:value. The idea is to transport information through the workflow, to be used at the end of the WF by a custom tool that renames and organises datasets depending on their tags.
I can do this in the GUI: I tag datasets manually with "#key:value" tags, create a collection with all of them and run a WF => the tags propagate properly.
However, when I try to do the same using the collection tool "Tag elements from file", it does not work properly: the tags are visible in the collection produced by "Tag elements from file" but are not propagated. I assume this means that "Tag elements from file" escapes the "#" somehow, so my tags end up as simple tags, not hashtags.
Note that my input file looks like:
element_identifier1 #key1:value1 #key2:value1
element_identifier2 #key1:value2 #key2:value2
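For clarity, this is how I understand the expected input: tab-separated columns, identifier first, then one tag per column. A tiny illustrative parser for that shape (assuming tabs; the real tool's parsing may well differ, which could be where the "#" gets lost):

```python
def parse_tag_file(text):
    """Parse a tag file of the form shown above, assuming tab-separated
    columns: element identifier first, then one or more tags."""
    mapping = {}
    for line in text.strip().splitlines():
        fields = line.rstrip("\n").split("\t")
        # tags beginning with '#' are meant to be propagating hash tags
        mapping[fields[0]] = fields[1:]
    return mapping

example = "element_identifier1\t#key1:value1\t#key2:value1\nelement_identifier2\t#key1:value2\t#key2:value2"
print(parse_tag_file(example)["element_identifier1"])  # ['#key1:value1', '#key2:value1']
```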
Now the questions:
- Is this a bug or intended?
- Where can I find the file "tag_collection_from_file.xml"? I can't locate it :-/
- Any suggestion for a fix?
Thx a bunch
I want to integrate a Docker container as a tool in Galaxy. This works great. However,
I need to forward a volume to the container (at least that's how it works outside of Galaxy).
In particular, this would be the BLAST database.
This means I would want to share the location specified in blastdb.loc and forward it to the
container. Is there a way to do this?
I see that there is an option to expose volumes to containers by specifying it in job_conf.xml.
However, I guess this will then be done for all containers? And how do I integrate blastdb.loc into that command?
For that, my tool config looks like this:
<param name="database" type="select" display="radio" label="NCBI-nt database">
    <options from_file="blastdb.loc">
        <column name="name" index="1"/>
        <column name="value" index="2"/>
    </options>
</param>
I simply want to access the path inside of the container, and therefore need to share it with the container.
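One approach that might work, based on the `docker_volumes` destination parameter in job_conf.xml (the blastdb path below is an example and should be the directory your blastdb.loc entries point into):

```xml
<destination id="docker_blast" runner="local">
    <param id="docker_enabled">true</param>
    <!-- $defaults keeps Galaxy's standard mounts; the extra entry mounts
         the BLAST database directory read-only into the container -->
    <param id="docker_volumes">$defaults,/data/blastdb:ro</param>
</destination>
```

Since volumes are configured per destination, you could map only the BLAST tools to this destination via a `<tools>` section, rather than enabling the extra mount for all containers.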
Thanks for any help
The latest GCC2019 news:
- A (final) reminder that early registration for the 2019 Galaxy
Community Conference ENDS THIS FRIDAY NIGHT, 17 May, at
midnight Freiburg time (CEST / GMT +2). After that registration
goes up by ~ 60%. So, register this week!
- All accepted speakers have been notified and talks are being
added to the schedule as presenters confirm. There were far
more talk submissions this year than in any previous year and that
is reflected in the breadth and quality of the accepted talks.
- Poster and demo submission is still open! However, space *will*
fill up and when it's gone, it's gone. Got something to present?
Submit it *soon*.
We hope to see you in Freiburg!
Dave C on behalf of the GCC2019 Organizers
2019 Galaxy Community Conference (GCC2019)
1-8 July, Freiburg, Germany
The 2019 Galaxy Community Conference (GCC2019) will be held 1-8 July
in Freiburg, Germany. GCC brings together hundreds of faculty,
clinicians, researchers, and students, all working in and supporting
data intensive science that is accessible, sharable and reproducible.
GCC2019 features oral presentations, lightning talks, posters, demos,
birds-of-a-feather gatherings (BoFs), training, a CollaborationFest,
and plenty of opportunities for networking.
Presentations will cover the full spectrum of Galaxy applications,
enhancements and deployments. If you are working in data intensive
science then GCC2019 is an ideal conference for sharing your work,
learning from others, and finding new collaborators.
Early registration starts at €49/day for students and postdocs, and
€79 / day for other academics and non-profit researchers. Childcare
is available and travel fellowships are available as well. Early
registration ends 17 May, when rates go up by 60%. So, register early.
Galaxy (https://galaxyproject.org/) is a platform for data integration
and analysis in the life sciences. It enables researchers to build,
run, share, and repeat their own complex computational analyses using
only a web browser and without having to first learn system
administration and command line interfaces.
The Galaxy Project is driven by a vibrant community who publish
workflows and analyses, wrap new tools, maintain and enhance the
source code, provide support, and write documentation and training
materials. Galaxy is open-source and freely available, and is deployed
in hundreds of organizations, running on everything from laptops
through supercomputers to public and private clouds. Over 150 of these
platforms are publicly available and can be used with little or no
setup. Thousands of tools have been ported to Galaxy ("wrapped") and
are deployable from the Galaxy Tool Shed. Galaxy was developed to
support life science research, but the software is domain agnostic and
is now used in domains as diverse as natural language processing,
constructive solid geometry, and social science.
I’m trying to run Galaxy (version 18.05) jobs with Apache MESOS (version 1.5.0). I configured an NFS share between the Galaxy and Mesos cluster nodes, sharing the directories /home/galaxy and /path/to/galaxy/database (with its subdirectories citations, compiled_templates, files, ftp, home, job_working_directory, object_store_cache and tmp).
Then I have changed the job_conf.xml accordingly: ( https://gist.github.com/pmandreoli/6ffba03193717393a2322586686f9aed ).
It works fine with my very simple test wrapper here: https://gist.github.com/pmandreoli/ce120612afd0ac9ee80ce70c90e7d324
Then I enabled mulled containers in the galaxy.yml file in order to test the configuration using FastQC (version 0.72, owner devteam).
In this case the job was correctly executed on the Mesos node (using the container quay.io/biocontainers/fastqc:0.11.8--1), but the results were not linked in the history (please see the attached fig1).
[fig1. history panel screenshot for fastQC job on chronos destination]
You can have a look at the output here:
I checked the docker run command on Chronos; the working directory is located at /root/working.
To check whether the problem is related to my job_conf.xml configuration rather than to the mulled containers, I changed job_conf.xml to run jobs locally ( https://gist.github.com/pmandreoli/484566b2c548d39d8bddb5aa54461ecc )
and ran the same tool (FastQC 0.72). In this case everything was fine.
Is my job_conf.xml configuration correct?
I would also like to ask whether it is possible to add more than one volume to the Docker container run on MESOS, in order to add the location of the reference data, which sits on my Mesos slave node under /cvmfs. I tried to modify the job_conf.xml block in this way,
but the job failed. The docker run command sent to Chronos was indeed wrong: "-v /export/galaxy/database/,/cvmfs/:/export/galaxy/database/,/cvmfs/:rw".
This is probably due to the definition of the "volumes" field in the chronos.py runner: https://github.com/galaxyproject/galaxy/blob/3b3b52f013ac8c6b5bf8a4765f9f...
which, if I understand correctly, allows only one path. Did I miss something?
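If the runner were extended to accept several volumes, the comma-separated spec would need to be split into the list-of-objects shape the Chronos API expects under `container.volumes` (each entry with `hostPath`, `containerPath` and `mode`). A rough sketch of the idea; the function name and the assumption that host and container paths are identical are mine:

```python
def parse_docker_volumes(spec):
    """Split a comma-separated volume spec such as
    '/export/galaxy/database/:rw,/cvmfs/:ro' into the list of dicts
    Chronos expects in container.volumes."""
    volumes = []
    for entry in spec.split(","):
        entry = entry.strip()
        if not entry:
            continue
        parts = entry.split(":")
        host_path = parts[0]
        # default to read-write when no mode suffix is given
        mode = parts[1].upper() if len(parts) > 1 else "RW"
        volumes.append({"hostPath": host_path,
                        "containerPath": host_path,
                        "mode": mode})
    return volumes

print(parse_docker_volumes("/cvmfs/:ro"))
# [{'hostPath': '/cvmfs/', 'containerPath': '/cvmfs/', 'mode': 'RO'}]
```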
Any suggestion and correction is, of course, more than welcome.
After updating our Galaxy instance (18.05 -> 19.01) we have a strange problem.
When we run or re-run a job from a history, the job fails randomly.
We see no errors in the log file (log_level DEBUG), but in the dataset we see "cannot import name find_instance_nested".