I remember there previously being an additional tool that ran on a
different port from Galaxy and allowed monitoring of performance and
running jobs. Is this tool still packaged with Galaxy, and if so, how
can it be activated?
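If this refers to the Galaxy Reports application — a separate web app that summarizes jobs, users, and disk usage — it has historically shipped with Galaxy and just needs to be started alongside the main server. A sketch, assuming a stock clone of that era (file name and port taken from the sample config, so verify against your checkout):

```ini
; config/reports_wsgi.ini (copied from config/reports_wsgi.ini.sample)
[server:main]
use = egg:Paste#http
port = 9001
host = 127.0.0.1
```

It is then started from the Galaxy root with `sh run_reports.sh`.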
I’m installing via GalaxyKickStart…
I’m getting the following error:
galaxy.jobs.runners ERROR 2017-11-06 19:14:05,263 (19) Failure preparing job
Traceback (most recent call last):
File "/home/exacloud/lustre1/galaxydev/galaxyuser/lib/galaxy/jobs/runners/__init__.py", line 175, in prepare_job
File "/home/exacloud/lustre1/galaxydev/galaxyuser/lib/galaxy/jobs/runners/__init__.py", line 209, in build_command_line
File "/home/exacloud/lustre1/galaxydev/galaxyuser/lib/galaxy/jobs/command_factory.py", line 84, in build_command
externalized_commands = __externalize_commands(job_wrapper, external_command_shell, commands_builder, remote_command_params)
File "/home/exacloud/lustre1/galaxydev/galaxyuser/lib/galaxy/jobs/command_factory.py", line 143, in __externalize_commands
write_script(local_container_script, script_contents, config)
File "/home/exacloud/lustre1/galaxydev/galaxyuser/lib/galaxy/jobs/runners/util/job_script/__init__.py", line 112, in write_script
File "/home/exacloud/lustre1/galaxydev/galaxyuser/lib/galaxy/jobs/runners/util/job_script/__init__.py", line 147, in _handle_script_integrity
raise Exception("Failed to write job script, could not verify job script integrity.")
Exception: Failed to write job script, could not verify job script integrity.
galaxy.model.metadata DEBUG 2017-11-06 19:14:05,541 Cleaning up external metadata files
galaxy.model.metadata DEBUG 2017-11-06 19:14:05,576 Failed to cleanup MetadataTempFile temp files from /home/exacloud/lustre1/galaxydev/galaxyuser/database/jobs/000/19/metadata_out_HistoryDatasetAssociation_16_I8bhLX: No JSON object could be decoded
I would like to understand further what it means to fail to verify the integrity of a job script. Does this just mean there is a permissions error, or that ownership doesn't match up?
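For context, the integrity check in `write_script` essentially writes the script to disk and then reads it back to confirm the bytes are visible before the runner executes it; on a networked filesystem like Lustre the read-back can lag behind the write, so a failure here usually means filesystem latency rather than permissions or ownership. A minimal sketch of that write-then-verify pattern (a hypothetical helper illustrating the idea, not Galaxy's actual code):

```python
import os
import tempfile
import time


def write_script_verified(path, contents, retries=5, sleep=0.1):
    """Write a script and verify the bytes can be read back.

    Mirrors the idea behind Galaxy's job-script integrity check: a
    mismatch typically means the filesystem (e.g. Lustre/NFS) has not
    yet made the write visible, not that permissions are wrong.
    """
    with open(path, "w") as fh:
        fh.write(contents)
        fh.flush()
        os.fsync(fh.fileno())          # force the write out to disk
    os.chmod(path, 0o755)              # job scripts must be executable
    for _ in range(retries):
        with open(path) as fh:
            if fh.read() == contents:  # read-back matches: integrity OK
                return True
        time.sleep(sleep)              # give a slow filesystem time to catch up
    raise Exception("could not verify job script integrity")


# usage
tmp = tempfile.mkdtemp()
script = os.path.join(tmp, "job_script.sh")
write_script_verified(script, "#!/bin/sh\necho ok\n")
```

If it is indeed latency, the `check_job_script_integrity` setting in galaxy.ini (and, if I recall correctly, companion settings for the retry count and sleep interval) are the knobs to look at.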
I’m receiving the following error trying to access tools autoinstalled via conda:
Fatal error: Exit code 126 ()
~/galaxy/database/jobs_directory/000/38/tool_script.sh: line 41: ~/galaxy/database/dependencies/_conda/envs/mulled-v1-bb83f93efd111e042823e53ddfa74c32d81ba74cceca9445dfddfc9e940ff738/bin/samtools: Text file busy
So something is happening too fast, I guess: a process is attempting to execute the file while it is still being created. Any ideas on how I can diagnose this? I'm not seeing any other errors in the logs. Maybe we are dealing with a Lustre file-locking issue?
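Exit code 126 with "Text file busy" is ETXTBSY: the kernel refuses to execute a binary that some process still holds open for writing, which is consistent with the conda environment install (or a lagging Lustre client) not having closed the samtools binary by the time the job runs. A hedged sketch of a retry wrapper around such a call (a hypothetical helper, not Galaxy code):

```python
import errno
import subprocess
import sys
import time


def run_with_etxtbsy_retry(argv, retries=5, sleep=1.0):
    """Retry a command while its executable is still open for writing.

    exec() raises ETXTBSY ("Text file busy") when the binary is held
    open by a writer, e.g. an in-progress conda install.
    """
    for attempt in range(retries):
        try:
            return subprocess.call(argv)
        except OSError as exc:
            if exc.errno != errno.ETXTBSY or attempt == retries - 1:
                raise
            time.sleep(sleep)  # wait for the writer to close the file


# usage: any command; the current interpreter is always available
rc = run_with_etxtbsy_retry([sys.executable, "-c", "pass"])
```

To diagnose, running `lsof` against the samtools binary at the moment of failure would show which process still holds it open.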
I tried to install velvetoptimizer (for the assembly tutorial in the GTN).
It lists velvetoptimiser as a requirement in the main XML and in the
tool_dependencies.xml (which lists the installation steps). Now there
seems to be a conda package, perl-velvetoptimiser.
What would be the steps to update the tool (I would happily create a
PR)? Is the tool_dependencies.xml intended to be kept, so that I just
need to update the names of the requirements (I guess the name in the
tool_dependencies.xml does not matter)?
Furthermore, the tool_dependencies.xml includes the package_velvet,
which is listed as a missing dependency on the management pages.
Shouldn't this be ignored if there is a conda package?
Last question: is it possible to update an installed package by editing
the XML files? It seemed to me that changes have no effect.
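For what it's worth, the usual modernization is to drop tool_dependencies.xml entirely and have the `<requirements>` block in the tool XML name the conda packages directly, since requirement names are resolved against the configured conda channels (bioconda here). A sketch of what that might look like — the version numbers are assumptions, so check the current bioconda packages before opening a PR:

```xml
<requirements>
    <requirement type="package" version="2.2.6">perl-velvetoptimiser</requirement>
    <requirement type="package" version="1.2.10">velvet</requirement>
</requirements>
```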
Molekulare Systembiologie (MOLSYB)
Helmholtz-Zentrum für Umweltforschung GmbH - UFZ/
Helmholtz Centre for Environmental Research GmbH - UFZ
Permoserstraße 15, 04318 Leipzig, Germany
Phone +49 341 235 482296,
Dear Mohamed, Karim, galaxy-dev list,
A quick e-mail to inform you that we plan to work on this task in the upcoming weeks with Thimothée and Valentin, copied. We have had some exchanges with Eric (Rasche) and Björn (Grüning), and it seems that using the Galaxy "interactive environment" functionality is a good way to proceed. Don't hesitate to give us more information so we can collaborate on it…
Wishing you a good weekend.
-------- Original message --------
From: "Md. Rezaul Karim" <rezaul.karim(a)insight-centre.org>
Date: 26/05/2017 15:45 (GMT+01:00)
To: Mohamed Kassam <k.mamoud(a)gmail.com>
Cc: Galaxy Dev List <galaxy-dev(a)lists.galaxyproject.org>
Subject: Re: [galaxy-dev] Shiny in Galaxy
On May 26, 2017 2:44 PM, "Mohamed Kassam" <k.mamoud(a)gmail.com> wrote:
I have a Shiny application working in my RStudio, but I would like to integrate it into Galaxy so that users can call my application via Galaxy.
Thanks in Advance,
Please keep all replies on the list by using "reply all"
in your mail client. To manage your subscriptions to this
and other Galaxy lists, please use the interface at:
To search Galaxy mailing lists use the unified search at:
I am not quite sure if I am in the right mailing list to ask this question.
To this email, I have attached my RepeatExplorer workflow. I downloaded the
"Archive with clustering results from dataset 2" output from RepeatExplorer
using the FTP server. I followed the instructions provided by RepeatExplorer
to transfer the data to the FTP server (also attached to this email). The
data was transferred successfully (see attached screenshot). The downloaded
file seems to be in text format. I put the file into Zipeg to extract it,
but that is not working. I changed the extension of the file to .gz and
then .zip, but I am still unable to extract it. I am wondering if I
could get some insight on how to unzip that file.
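Rather than guessing at extensions, the file's magic bytes reveal what the archive actually is. A small sketch that distinguishes zip, gzip, and plain tar before extracting (the file paths are up to you):

```python
import tarfile
import zipfile


def archive_type(path):
    """Identify an archive by its magic bytes rather than its extension."""
    with open(path, "rb") as fh:
        magic = fh.read(4)
    if magic[:2] == b"\x1f\x8b":
        return "gzip"            # .gz / .tar.gz
    if magic == b"PK\x03\x04":
        return "zip"
    if tarfile.is_tarfile(path):
        return "tar"
    return "unknown"             # possibly a corrupted/ASCII-mode transfer


def extract(path, dest):
    """Extract an archive into dest based on its detected type."""
    kind = archive_type(path)
    if kind == "zip":
        zipfile.ZipFile(path).extractall(dest)
    elif kind in ("gzip", "tar"):
        tarfile.open(path).extractall(dest)  # tarfile autodetects compression
    else:
        raise ValueError("unrecognized archive format")
```

If it reports "unknown", a likely culprit is the FTP transfer having been done in ASCII mode, which corrupts binary archives; re-transferring in binary mode would be the first thing to try.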
We are experiencing a job-dispatching issue on our public Galaxy instance (
https://galaxy.inf.ethz.ch/). On the compute machine, I am not able to
start the workers. The following messages are from the worker log file:
migrate.versioning.repository DEBUG 2018-01-30 12:23:41,816 Repository
lib/galaxy/model/migrate loaded successfully
migrate.versioning.repository DEBUG 2018-01-30 12:23:41,816 Config:
OrderedDict([('db_settings', OrderedDict([('__name__', 'db_settings'),
('repository_id', 'Galaxy'), ('version_table', 'migrate_version'),
After this line, it restarts the process again and stops at the same point.
I have checked the status of the workers through supervisor a few times:
-bash-4.2$ supervisorctl status
galaxy:worker0 BACKOFF Exited too quickly (process log
may have details)
galaxy:worker0 FATAL Exited too quickly (process log
may have details)
use = egg:Paste#http
port = 9999
host = COMPUTE.MACHINE.IP
use_threadpool = true
threadpool_workers = 5
<plugin id="condor" type="runner"
<handler id="worker0" tags="handlers"/>
<destination id="worker0" runner="condor">
command = /tools/galaxy_ratschlab/.venv/bin/python
./lib/galaxy/main.py -c /tools/galaxy_ratschlab/config/galaxy.ini
directory = /tools/galaxy_ratschlab
process_name = worker%(process_num)s
numprocs = 2
umask = 022
autostart = true
autorestart = true
startsecs = 20
user = galaxy
I tried to restart, but it didn't help. Is there anything wrong with my configuration?
Please let me know what you think and need.
Thanks in advance!
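For what it's worth, two things stand out in the configuration above. First, supervisor's BACKOFF/FATAL states simply mean the process exited before `startsecs = 20`, so the real error is in the worker itself; `supervisorctl tail galaxy:worker0 stderr` should show the traceback that follows the migrate lines. Second, the `command` passes no server name, so with `numprocs = 2` both processes may come up as `[server:main]` and collide on port 9999. A sketch of the flag — the paths and names are taken from the config above, the flag itself from standard Galaxy deployments, so verify against your Galaxy version:

```ini
[program:worker]
command = /tools/galaxy_ratschlab/.venv/bin/python ./lib/galaxy/main.py -c /tools/galaxy_ratschlab/config/galaxy.ini --server-name=worker%(process_num)s
```

This would require matching `[server:worker0]` and `[server:worker1]` sections (with distinct ports) in galaxy.ini.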
I have been trying to finish up a production cluster Galaxy installation, and am having trouble with the error below. In the past, when seeing something along these lines, I could usually adjust environment variables either in startup scripts or by including a script for Galaxy to source before it sends out a job. I have tried all of these methods, but I can't seem to get rid of this error message in any tool invocation. I currently have "embed_metadata_in_job" set to False in my job_conf.xml file. This removes a "No module named galaxy_ext.metadata.set_metadata" error, but the hashlib error remains. If I could understand a little more about the steps that are taken when sending out a job, perhaps I could better diagnose this?
Could not find platform dependent libraries
Consider setting $PYTHONHOME to [:]
Traceback (most recent call last):
File "~/galaxydev/galaxy/tools/data_source/upload.py", line 14, in
File "/usr/lib64/python2.7/tempfile.py", line 35, in
from random import Random as _Random
File "/usr/lib64/python2.7/random.py", line 49, in
import hashlib as _hashlib
File "/usr/lib64/python2.7/hashlib.py", line 116, in
ImportError: No module named _hashlib
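"No module named _hashlib" together with "Could not find platform dependent libraries" means the Python on the compute node finds its pure-Python standard library but not its compiled C extensions — typically because PYTHONHOME or PATH in the job environment points at a different interpreter than the one that owns those shared objects. A quick hedged probe to run on a compute node (pass the path of whichever python the job environment selects):

```python
import subprocess
import sys


def probe(python):
    """Return the exit code of importing _hashlib under an interpreter.

    A non-zero code reproduces the cluster-side ImportError and confirms
    the job environment, not Galaxy itself, is at fault.
    """
    return subprocess.call([python, "-c", "import _hashlib, hashlib"])


# probing the interpreter running this script should succeed (rc == 0);
# probe the cluster's python path the same way inside a submitted job
rc = probe(sys.executable)
```

Comparing the output of `which python` inside a job script against the interpreter Galaxy's virtualenv uses usually pinpoints where the mismatch is introduced.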
Dear Dev Mailing List,
is there any functionality built into Galaxy to "archive" histories?
By "archive" I mean not deleting them, but hiding them from the normal views.
Many thanks in advance!
Kind regards
GERMAN SEED ALLIANCE GmbH
c/o Norddeutsche Pflanzenzucht
Hans-Georg Lembke KG
Hohenlieth, 24363 Holtsee
Tel.: +49 (0)4351/ 736-189
Fax: + 49 (0)4351/ 736-271
Mobil: +49 (0)151/ 14247360
From: galaxy-dev [mailto:email@example.com] On behalf of galaxy-dev-request(a)lists.galaxyproject.org
Sent: Tuesday, 23 January 2018 18:00
Subject: galaxy-dev Digest, Vol 139, Issue 12
1. GCCBOSC 2018 Training Topic Voting is now open (Dave Clements)
Date: Mon, 22 Jan 2018 11:06:01 -0800
From: Dave Clements <clements(a)galaxyproject.org>
To: Galaxy Dev List <galaxy-dev(a)lists.galaxyproject.org>, Galaxy
Announcements List <galaxy-announce(a)lists.galaxyproject.org>
Cc: GCCBOSC 2018 Organizers <gccbosc2018-org(a)lists.galaxyproject.org>
Subject: [galaxy-dev] GCCBOSC 2018 Training Topic Voting is now open
Voting is now open <http://bit.ly/gccbosc2018-vote> on which training topics <https://galaxyproject.org/events/gccbosc2018/training/> will be offered at GCCBOSC 2018.
Your vote <http://bit.ly/gccbosc2018-vote> will determine the topics that are offered, which topics should be offered more than once, and which ones should not be scheduled at the same time. Your vote matters.
*Topic voting closes January 31.* The Training schedule, including instructors, will be published before early registration opens.
GCCBOSC 2018 <https://gccbosc2018.sched.com/> will be held 25-30 June in Portland, Oregon, United States. It will feature two days of training, the second of which is multi-track and will feature content for both the BOSC and Galaxy communities. Workshops will be hands-on, and participants are strongly encouraged to bring a laptop and follow along. If you work in data-intensive biomedical research, there is no better place than GCCBOSC 2018 to present your work and to learn from others.
Thanks, and see you in Portland!
The GCCBOSC 2018 Organizers