I remember there previously being an additional tool that ran on a
different port from Galaxy and allowed monitoring of performance and
running jobs. Is this tool still packaged with Galaxy, and if so, how
can it be activated?
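This sounds like the Galaxy Reports application, which historically shipped in the Galaxy codebase and ran as a separate web app on its own port (9001 by default). A command sketch, assuming a release-era checkout where a sample config is provided (file names may differ by release; check your tree):

```shell
# from the Galaxy root: create the reports config from the sample,
# then start the reports app, which listens on its own port
cp config/reports.ini.sample config/reports.ini
sh run_reports.sh
```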
I’m installing via GalaxyKickStart…
I’m getting the following error:
galaxy.jobs.runners ERROR 2017-11-06 19:14:05,263 (19) Failure preparing job
Traceback (most recent call last):
File "/home/exacloud/lustre1/galaxydev/galaxyuser/lib/galaxy/jobs/runners/__init__.py", line 175, in prepare_job
File "/home/exacloud/lustre1/galaxydev/galaxyuser/lib/galaxy/jobs/runners/__init__.py", line 209, in build_command_line
File "/home/exacloud/lustre1/galaxydev/galaxyuser/lib/galaxy/jobs/command_factory.py", line 84, in build_command
externalized_commands = __externalize_commands(job_wrapper, external_command_shell, commands_builder, remote_command_params)
File "/home/exacloud/lustre1/galaxydev/galaxyuser/lib/galaxy/jobs/command_factory.py", line 143, in __externalize_commands
write_script(local_container_script, script_contents, config)
File "/home/exacloud/lustre1/galaxydev/galaxyuser/lib/galaxy/jobs/runners/util/job_script/__init__.py", line 112, in write_script
File "/home/exacloud/lustre1/galaxydev/galaxyuser/lib/galaxy/jobs/runners/util/job_script/__init__.py", line 147, in _handle_script_integrity
raise Exception("Failed to write job script, could not verify job script integrity.")
Exception: Failed to write job script, could not verify job script integrity.
galaxy.model.metadata DEBUG 2017-11-06 19:14:05,541 Cleaning up external metadata files
galaxy.model.metadata DEBUG 2017-11-06 19:14:05,576 Failed to cleanup MetadataTempFile temp files from /home/exacloud/lustre1/galaxydev/galaxyuser/database/jobs/000/19/metadata_out_HistoryDatasetAssociation_16_I8bhLX: No JSON object could be decoded
I would like to understand further what it means to fail to verify the integrity of a job script. Does this just mean there is a permissions error, or that ownership doesn't match up?
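Roughly speaking, after writing the job script to disk Galaxy re-reads it to confirm the contents (and executable bit) actually landed before the cluster tries to run it; on network filesystems such as Lustre or NFS the read-back can lag the write, so the check can fail without any permissions problem. A loose sketch of the idea, not Galaxy's actual code:

```python
import os
import stat
import tempfile


def write_and_verify(path, contents, tries=3):
    """Write a script, mark it executable, then read it back and
    compare, retrying a few times to ride out filesystem lag."""
    with open(path, "w") as fh:
        fh.write(contents)
    os.chmod(path, os.stat(path).st_mode | stat.S_IXUSR)
    for _ in range(tries):
        with open(path) as fh:
            if fh.read() == contents:
                return True
    return False


script = os.path.join(tempfile.mkdtemp(), "job_script.sh")
print(write_and_verify(script, "#!/bin/sh\necho ok\n"))  # True on a healthy filesystem
```

If a sketch like this returned False on your cluster's job directory, that would point at filesystem propagation rather than ownership.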
I tried to install velvetoptimizer (for the assembly tutorial in the GTN).
It lists velvetoptimiser as a requirement in the main XML and in
tool_dependencies.xml (which lists the installation steps). Now there
seems to be a conda package, perl-velvetoptimiser.
What would be the steps to update the tool (I would happily create a
PR)? Is tool_dependencies.xml intended to be kept, so that I just need to
update the names of the requirements (I guess the name in
tool_dependencies.xml does not matter)?
Furthermore, tool_dependencies.xml includes package_velvet, which is
listed as a missing dependency on the management pages. Shouldn't this
be ignored if there is a conda package?
One last question: is it possible to update an installed package by editing
the XML files? It seemed to me that changes have no effect.
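For the conda route, the `<requirements>` block in the tool XML would name the conda package directly; a hypothetical fragment (the version number here is illustrative, taken from what bioconda appeared to ship at the time):

```xml
<requirements>
    <requirement type="package" version="2.2.6">perl-velvetoptimiser</requirement>
</requirements>
```

With conda resolution enabled, a requirement like this is matched against the conda channel by name and version, independently of whatever tool_dependencies.xml says.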
Molecular Systems Biology (MOLSYB)
Helmholtz Centre for Environmental Research GmbH - UFZ
Permoserstraße 15, 04318 Leipzig, Germany
Phone +49 341 235 482296
Registered Office: Leipzig; Registration Office: Amtsgericht Leipzig; Trade Register No.: B 4703
Chairman of the Supervisory Board: MinDirig Wilfried Kraus
Scientific Managing Director: Prof. Dr. Dr. h.c. Georg Teutsch
Administrative Managing Director: Prof. Dr. Heike Graßmann
Dear Mohamed, Karim, galaxy-dev list,
A quick e-mail to inform you that we plan to work on this task in the coming weeks with Thimothée and Valentin, who are copied. We have had some exchanges with Eric (Rasche) and Björn (Grüning), and it seems that using the Galaxy "interactive environment" functionality is a good way to proceed. Don't hesitate to give us more information so we can collaborate on it.
Wishing you a good weekend.
-------- Original Message --------
From: "Md. Rezaul Karim" <rezaul.karim(a)insight-centre.org>
Date: 26/05/2017 15:45 (GMT+01:00)
To: Mohamed Kassam <k.mamoud(a)gmail.com>
Cc: Galaxy Dev List <galaxy-dev(a)lists.galaxyproject.org>
Subject: Re: [galaxy-dev] Shiny in Galaxy
On May 26, 2017 2:44 PM, "Mohamed Kassam" <k.mamoud(a)gmail.com> wrote:
I have a Shiny application working in my RStudio, but I would like to integrate it into Galaxy so that users can call my application via Galaxy.
Thanks in Advance,
Is it possible to execute a script within a tool XML config to retrieve some data and append it to the UI depending on the requested data? I.e., if I wanted to get a list of files and provide them as select options before hitting the Execute button.
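For the select-options case, Galaxy tool XML can populate a select param from a server-side data table without a custom script; a hypothetical fragment (`all_fasta` is the conventional table name for reference FASTA files; substitute your own table):

```xml
<param name="reference" type="select" label="Reference file">
    <options from_data_table="all_fasta">
        <filter type="sort_by" column="name"/>
    </options>
</param>
```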
Below is an error obtained from our Galaxy install. We have custom
directories set for the files directory and the jobs directory. Galaxy
appears to move the files but then looks for them at the original path.
I have tried unsetting these options, but the problem persists.
galaxy.jobs DEBUG 2017-11-24 17:53:19,057 finish(): Moved
galaxy.model.metadata DEBUG 2017-11-24 17:53:19,128 setting metadata
externally failed for HistoryDatasetAssociation 431: [Errno 2] No such
file or directory
galaxy.jobs.runners ERROR 2017-11-24 17:53:19,319 (320/179084) Job
wrapper finish method failed
Traceback (most recent call last):
line 630, in finish_job
job_state.job_wrapper.finish( stdout, stderr, exit_code )
File "/var/web_services/galaxy/lib/galaxy/jobs/__init__.py", line
1266, in finish
dataset.datatype.set_meta( dataset, overwrite=False )
File "/var/web_services/galaxy/lib/galaxy/datatypes/binary.py", line
391, in set_meta
exit_code = subprocess.call( args=command, stderr=open(
stderr_name, 'wb' ) )
File "/usr/lib64/python2.7/subprocess.py", line 524, in call
return Popen(*popenargs, **kwargs).wait()
File "/usr/lib64/python2.7/subprocess.py", line 711, in __init__
File "/usr/lib64/python2.7/subprocess.py", line 1327, in _execute_child
OSError: [Errno 2] No such file or directory
galaxy.jobs ERROR 2017-11-24 17:53:19,337 fail(): Missing output file in
working directory: [Errno 2] No such file or directory:
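For what it's worth, an `OSError: [Errno 2]` raised from inside `subprocess` usually means the program being launched (here, presumably the samtools-style helper that `set_meta` shells out to for BAM metadata) was not found on the PATH of the Galaxy process, rather than that a data file is missing. A small illustration:

```python
import subprocess

# Errno 2 (ENOENT) from Popen means the *executable itself* could not
# be found, not that one of its input files is missing.
try:
    subprocess.call(["definitely-not-a-real-command"])
except OSError as err:  # FileNotFoundError on Python 3
    print(err.errno)  # 2
```

So alongside the custom-directory settings, it may be worth checking which binaries are visible to the user and environment Galaxy runs under.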
I would like to shut down my cluster, but when I click "delete permanently" I see the cluster is still active. Could you help me shut down the cluster?
Appliance: Galaxy CloudMan, Version: 17.05
I'm working on data visualization / display in Galaxy.
I'm wondering if it's possible to save a resource edited from the viewer
plugin or an external display into the current history.
In my opinion, the easiest and most proper way is to use an external display
to show and edit the resources, and then upload the edited files to the
current history using the Galaxy API.
Has anyone tried something similar?
This January, on the Wednesday and Thursday before PAG (January 10-11)
in San Diego, we are planning a GMOD hackathon. We expect that
participants will be interested in solving problems/creating solutions
related to Tripal, JBrowse, Apollo, and Galaxy, but if you're interested in
another GMOD project, by all means let us know! We expect this hackathon
to overlap with the Tripal hackathon that is on January 11 (I'm pretty
sure; right, Stephen?).
If you are interested in attending this hackathon, please let me know so I
can be sure we have an appropriately sized space. And if you're coming for
the pre-PAG hackathon, consider staying for PAG, since there is always a
lot of GMOD-related content at the meeting!
Scott Cain, Ph. D. scott at scottcain dot
GMOD Coordinator (http://gmod.org/) 216-392-3087
Ontario Institute for Cancer Research
If I execute the following command:
python import_with_dbkey.py *** file.bam hg19sub
the BAM file is added to the user's history.
the code (import_with_dbkey.py) is as follows:
import sys

from bioblend.galaxy import GalaxyInstance
from bioblend.galaxy.tools import ToolClient

url = "http://galaxy-dev.fmi.ch"
key = "***"
history_id = sys.argv[1]
file_path = sys.argv[2]
dbkey = sys.argv[3]

gi = GalaxyInstance(url=url, key=key)
toolClient = ToolClient(gi)
uploadedFile = toolClient.upload_file(file_path, history_id, dbkey=dbkey)
Now, if I use the virtual environment from Galaxy, I get:
(.venv)-bash-4.2$ python import_with_dbkey.py *** file.bam hg19sub
Traceback (most recent call last):
File "/***/galaxy/tools/fmi__api_helpers/import_with_dbkey.py", line
40, in <module>
uploadedFile = toolClient.upload_file(file_path, history_id,
line 148, in upload_file
return self._tool_post(payload, files_attached=True)
line 218, in _tool_post
return Client._post(self, payload, files_attached=files_attached)
line 171, in _post
131, in make_post_request
bioblend.galaxy.client.ConnectionError: Unexpected response from galaxy:
400: <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<title>400 Bad Request</title>
<p>Your browser sent a request that this server could not understand.<br />
...and the file does not get uploaded.
I will continue investigating the difference between the Python modules
used, but maybe someone has a clue and could help me.
Thank you very much!