Groovy tool for galaxy
by Anshul Tak
Hi,
I have a question regarding Galaxy. Can I execute Groovy scripts using
Galaxy? I installed Groovy and all of its dependencies in the Galaxy
Docker image. However, when I try to execute a tool that requires the
Groovy engine, it says
"/export/galaxy-central/database/job_working_directory/000/2/tool_script.sh:
line 9: groovy: command not found"
So I understand from the error that Galaxy is not able to find groovy, but
ideally it should pick up the Groovy engine, since it is installed.
Can you give me some pointers to solve this issue?
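Errors like this usually mean the interpreter is installed but not on the PATH of the shell that runs the job script. A minimal sketch of the failure mode and fix (the /tmp/fakebin path is purely illustrative, standing in for wherever groovy was actually installed inside the image):

```shell
# Illustrative only: /tmp/fakebin stands in for the real groovy
# install prefix inside the Docker image.
mkdir -p /tmp/fakebin
printf '#!/bin/sh\necho Groovy running\n' > /tmp/fakebin/groovy
chmod +x /tmp/fakebin/groovy

# If the directory is not on PATH, the lookup fails just like in
# tool_script.sh above:
command -v groovy >/dev/null 2>&1 || echo "groovy: command not found"

# Prepending the install directory to PATH -- e.g. via an <env> entry
# in job_conf.xml, or in the job user's shell profile -- fixes it:
export PATH="/tmp/fakebin:$PATH"
groovy
```

The key point is that "installed in the image" is not enough; the PATH of the non-interactive shell that executes tool_script.sh is what matters.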
--
With Regards,
Anshul Tak
3 years, 5 months
DeepTools installation error
by Timo Janßen
Hi all,
when trying to install the deeptools_suite from the Tool Shed, I only get
the following error:
"500 Internal Server Error
nginx/1.11.10"
This happens as well for the dependencies, e.g. deeptools_bigwig_compare.
I am currently using Galaxy 16.10 with an nginx proxy.
Also the installation of the dependency
package_python_2_7_matplotlib_1_4 fails because of a missing dependency:
"Error installing tool dependency python version 2.7.10: Unable to
locate required tool shed repository named package_python_2_7_10 owned
by iuc with revision 80931fb5f433."
Has anyone seen these problems before and knows a solution?
Best regards,
Timo Janßen
--
------------------------------------------------------------------------
Timo Janßen
Student Assistant (Wissenschaftliche Hilfskraft)
Working Group "Anwendungs- und Informationssysteme" (Application and Information Systems)
Tel.: +49(0)551/201-1791
E-Mail: timo.janssen(a)gwdg.de
------------------------------------------------------------------------
Gesellschaft für wissenschaftliche Datenverarbeitung mbH Göttingen
(GWDG)
Am Faßberg 11, 37077 Göttingen, URL: http://www.gwdg.de
Tel.: +49 551 201-1510, Fax: +49 551 201-2150, E-Mail: gwdg(a)gwdg.de
Service-Hotline: Tel.: +49 551 201-1523, E-Mail: support(a)gwdg.de
Managing Director: Prof. Dr. Ramin Yahyapour
Chairman of the Supervisory Board: Prof. Dr. Norbert Lassau
Registered office: Göttingen
Register court: Göttingen, Commercial Register No. B 598
------------------------------------------------------------------------
Certified according to ISO 9001
------------------------------------------------------------------------
3 years, 6 months
cluster access
by Matthias Bernt
Dear list,
I'm just starting to get jobs submitted to our cluster as the real system
user. I read in the documentation that the name of the system user is
determined by "the Galaxy user's email address (with the @domain
stripped off)". But this does not work on our system: here the username
stored in the galaxy_user table is the actual account name (it is
correctly set by the LDAP login).
The job runner script seems to get the user id, so I guess I need to dig
deeper. So my question is: where could I change this behavior?
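To illustrate the two behaviors in question (a sketch only; the function names are mine, not Galaxy's, and where Galaxy actually derives the system user in its job-runner code is exactly the open question):

```python
def system_user_from_email(email: str) -> str:
    # Documented default: the Galaxy user's email address with the
    # @domain stripped off.
    return email.split("@", 1)[0]

def system_user_from_username(username: str) -> str:
    # Desired behavior on an LDAP-backed instance: use the account
    # name stored in the galaxy_user table directly.
    return username

print(system_user_from_email("m.bernt@ufz.de"))  # -> m.bernt
```

On a site where the local part of the email does not match the LDAP account name, only the second variant yields a valid system user.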
Thanks a lot.
Cheers,
Matthias
--
-------------------------------------------
Matthias Bernt
Bioinformatics Service
Molekulare Systembiologie (MOLSYB)
Helmholtz-Zentrum für Umweltforschung GmbH - UFZ/
Helmholtz Centre for Environmental Research GmbH - UFZ
Permoserstraße 15, 04318 Leipzig, Germany
Phone +49 341 235 482296,
m.bernt(a)ufz.de, www.ufz.de
Sitz der Gesellschaft/Registered Office: Leipzig
Registergericht/Registration Office: Amtsgericht Leipzig
Handelsregister Nr./Trade Register Nr.: B 4703
Vorsitzender des Aufsichtsrats/Chairman of the Supervisory Board:
MinDirig Wilfried Kraus
Wissenschaftlicher Geschäftsführer/Scientific Managing Director:
Prof. Dr. Dr. h.c. Georg Teutsch
Administrative Geschäftsführerin/ Administrative Managing Director:
Prof. Dr. Heike Graßmann
-------------------------------------------
3 years, 8 months
Pulsar - running as real DRMAA user problems
by Poole, Richard
Hey John,
So I’ve been happily using Pulsar to send all my Galaxy server jobs to our cluster here at UCL for several months now (I love it!). I am now exploring the ‘run-as-real-user’ option for DRMAA submissions and have run into a problem. The files are correctly staged, correctly chowned, and successfully submitted to the queue, and the job runs. However, at job end, collection fails with the following error message in Pulsar:
Exception happened during processing of request from ('*.*.*.*', 54321)
Traceback (most recent call last):
File "/opt/rocks/lib/python2.6/site-packages/Paste-2.0.1-py2.6.egg/paste/httpserver.py", line 1072, in process_request_in_thread
self.finish_request(request, client_address)
File "/opt/rocks/lib/python2.6/SocketServer.py", line 322, in finish_request
self.RequestHandlerClass(request, client_address, self)
File "/opt/rocks/lib/python2.6/SocketServer.py", line 617, in __init__
self.handle()
File "/opt/rocks/lib/python2.6/site-packages/Paste-2.0.1-py2.6.egg/paste/httpserver.py", line 446, in handle
BaseHTTPRequestHandler.handle(self)
File "/opt/rocks/lib/python2.6/BaseHTTPServer.py", line 329, in handle
self.handle_one_request()
File "/opt/rocks/lib/python2.6/site-packages/Paste-2.0.1-py2.6.egg/paste/httpserver.py", line 441, in handle_one_request
self.wsgi_execute()
File "/opt/rocks/lib/python2.6/site-packages/Paste-2.0.1-py2.6.egg/paste/httpserver.py", line 291, in wsgi_execute
self.wsgi_start_response)
File "/cluster/galaxy/pulsar/pulsar/web/framework.py", line 39, in __call__
return controller(environ, start_response, **request_args)
File "/cluster/galaxy/pulsar/pulsar/web/framework.py", line 144, in controller_replacement
result = self.__execute_request(func, args, req, environ)
File "/cluster/galaxy/pulsar/pulsar/web/framework.py", line 124, in __execute_request
result = func(**args)
File "/cluster/galaxy/pulsar/pulsar/web/routes.py", line 82, in status
return status_dict(manager, job_id)
File "/cluster/galaxy/pulsar/pulsar/manager_endpoint_util.py", line 12, in status_dict
job_status = manager.get_status(job_id)
File "/cluster/galaxy/pulsar/pulsar/managers/stateful.py", line 95, in get_status
proxy_status, state_change = self.__proxy_status(job_directory, job_id)
File "/cluster/galaxy/pulsar/pulsar/managers/stateful.py", line 115, in __proxy_status
proxy_status = self._proxied_manager.get_status(job_id)
File "/cluster/galaxy/pulsar/pulsar/managers/queued_external_drmaa_original.py", line 62, in get_status
external_status = super(ExternalDrmaaQueueManager, self)._get_status_external(external_id)
File "/cluster/galaxy/pulsar/pulsar/managers/base/base_drmaa.py", line 31, in _get_status_external
drmaa_state = self.drmaa_session.job_status(external_id)
File "/cluster/galaxy/pulsar/pulsar/managers/util/drmaa/__init__.py", line 50, in job_status
return self.session.jobStatus(str(external_job_id))
File "build/bdist.linux-x86_64/egg/drmaa/session.py", line 518, in jobStatus
c(drmaa_job_ps, jobId, byref(status))
File "build/bdist.linux-x86_64/egg/drmaa/helpers.py", line 299, in c
return f(*(args + (error_buffer, sizeof(error_buffer))))
File "build/bdist.linux-x86_64/egg/drmaa/errors.py", line 151, in error_check
raise _ERRORS[code - 1](error_string)
InvalidJobException: code 18: The job specified by the 'jobid' does not exist.
With this corresponding error from my Galaxy server:
galaxy.tools.actions INFO 2016-10-13 18:47:51,851 Handled output (279.421 ms)
galaxy.tools.actions INFO 2016-10-13 18:47:52,093 Verified access to datasets (5.271 ms)
galaxy.tools.execute DEBUG 2016-10-13 18:47:52,118 Tool [toolshed.g2.bx.psu.edu/repos/devteam/sam_to_bam/sam_to_bam/1.1.4] created job [25008] (560.404 ms)
galaxy.jobs DEBUG 2016-10-13 18:47:52,579 (25008) Working directory for job is: /Users/galaxy/galaxy-dist/database/job_working_directory/025/25008
galaxy.jobs.handler DEBUG 2016-10-13 18:47:52,591 (25008) Dispatching to pulsar runner
galaxy.jobs DEBUG 2016-10-13 18:47:52,677 (25008) Persisting job destination (destination id: hpc_low)
galaxy.jobs.runners DEBUG 2016-10-13 18:47:52,681 Job [25008] queued (90.231 ms)
galaxy.jobs.handler INFO 2016-10-13 18:47:52,699 (25008) Job dispatched
galaxy.tools.deps DEBUG 2016-10-13 18:47:53,138 Building dependency shell command for dependency 'samtools'
galaxy.jobs.runners.pulsar INFO 2016-10-13 18:47:53,233 Pulsar job submitted with job_id 25008
galaxy.jobs DEBUG 2016-10-13 18:47:53,257 (25008) Persisting job destination (destination id: hpc_low)
galaxy.datatypes.metadata DEBUG 2016-10-13 18:51:03,922 Cleaning up external metadata files
galaxy.jobs.runners.pulsar ERROR 2016-10-13 18:51:03,945 failure finishing job 25008
Traceback (most recent call last):
File "/Users/galaxy/galaxy-dist/lib/galaxy/jobs/runners/pulsar.py", line 386, in finish_job
run_results = client.full_status()
File "/Users/galaxy/galaxy-dist/lib/pulsar/client/client.py", line 132, in full_status
return self.raw_check_complete()
File "/Users/galaxy/galaxy-dist/lib/pulsar/client/decorators.py", line 28, in replacement
return func(*args, **kwargs)
File "/Users/galaxy/galaxy-dist/lib/pulsar/client/decorators.py", line 13, in replacement
response = func(*args, **kwargs)
File "/Users/galaxy/galaxy-dist/lib/pulsar/client/client.py", line 146, in raw_check_complete
check_complete_response = self._raw_execute("status", {"job_id": self.job_id})
File "/Users/galaxy/galaxy-dist/lib/pulsar/client/client.py", line 215, in _raw_execute
return self.job_manager_interface.execute(command, args, data, input_path, output_path)
File "/Users/galaxy/galaxy-dist/lib/pulsar/client/interface.py", line 96, in execute
response = self.transport.execute(url, method=method, data=data, input_path=input_path, output_path=output_path)
File "/Users/galaxy/galaxy-dist/lib/pulsar/client/transport/standard.py", line 34, in execute
response = self._url_open(request, data)
File "/Users/galaxy/galaxy-dist/lib/pulsar/client/transport/standard.py", line 20, in _url_open
return urlopen(request, data)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib2.py", line 154, in urlopen
return opener.open(url, data, timeout)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib2.py", line 437, in open
response = meth(req, response)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib2.py", line 550, in http_response
'http', request, response, code, msg, hdrs)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib2.py", line 475, in error
return self._call_chain(*args)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib2.py", line 409, in _call_chain
result = func(*args)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib2.py", line 558, in http_error_default
raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
HTTPError: HTTP Error 500: Internal Server Error
I am running 15.10 and Python 2.7.10 on my iMac for the server, and the cluster submission node is running Pulsar 0.5.0 and Python 2.7.12.
For these tests I run Pulsar in an interactive window, so I have not set up the sudoers file but instead enter the sudo password when requested by Pulsar (at the first step of chowning the staging directory). I also have rewrites set up in Galaxy’s pulsar_actions.yml, and I am using remote_scp for the file transfers rather than http - although I have also tried switching back to http (as I noticed caching, which I am also testing, does not work with scp transfers) but get an identical set of error messages.
As I say, I have no trouble using a regular queued_drmaa manager in Pulsar. Any ideas what the problem may be?
Cheers,
Rich
Richard J Poole PhD
Wellcome Trust Fellow
Department of Cell and Developmental Biology
University College London
518 Rockefeller
21 University Street, London WC1E 6DE
Office (518 Rockefeller): +44 20 7679 6577 (int. 46577)
Lab (529 Rockefeller): +44 20 7679 6133 (int. 46133)
https://www.ucl.ac.uk/cdb/academics/poole
3 years, 8 months
Combine different resolvers for one tool
by Ignacio EGUINOA
Hi all,
I'm setting up a Galaxy server which consists of a Galaxy head node and a Docker engine (swarm) running on top of an OpenNebula cloud. So, ideally I would like to run everything using this Docker engine.
For tools that only have one dependency, the Docker resolver works perfectly. But in the case of tools with more than one dependency, if I activate mulled containers (enable_beta_mulled_containers = True), it will just try to run the tool script using the container corresponding to the first dependency found, which of course fails.
What is the correct approach for resolving several dependencies using Docker?
I tried activating involucro, which (as far as I understood) builds the Docker container on the fly from the conda environment that results from merging all the dependencies. I know it's a beta feature, but still, I couldn't make it work... Besides setting the path and auto_init to True, what am I supposed to put in the containers_resolvers_config_file?
Besides this, would it be possible to combine the dependencies in another way? Is it possible in the current version to load the conda environment inside a running container?
I know it may sound redundant and unnecessary in terms of dependency resolution, but it makes sense in cases like the one I described, where you have the Docker swarm already running.
I tried this with a few tools, using one of the containers as a base and loading the rest of the dependencies via a conda environment. Would it be possible to do this in an automatic way, using a general base container?
Anyway, the general question is where things are going regarding dependency resolution, so that I can get an idea of what to expect in the future and how to collaborate on the development of these features. I found a thread on GitHub about this (https://github.com/galaxyproject/galaxy/issues/3299) but don't know what decisions were taken about it (if any).
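For reference, a minimal container resolvers file of the sort the option expects might look like the sketch below. The resolver type names match those found in Galaxy's container_resolvers code around that era, but treat this as a starting point to adapt, not a known-good configuration:

```yaml
# containers_resolvers_config_file, e.g. config/container_resolvers_conf.yml
- type: explicit        # honor explicit <container> tags in the tool XML
- type: cached_mulled   # reuse multi-package images already built locally
- type: mulled          # pull published mulled images from the registry
- type: build_mulled    # build an image via involucro when none exists
```

Resolvers are tried in order, so putting build_mulled last means involucro is only invoked when no pre-built image can be found.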
Thanks in advance for the help.
Ignacio
Ignacio Eguinoa - Predoctoral fellow
Applied Bioinformatics And Biostatistics
VIB-UGent Center for Plant Systems Biology
Ghent University
Technologiepark 927 - 9052 Ghent - Belgium
Tel. +32(0)9 331 36 95
www.psb.ugent.be
3 years, 8 months
Running jobs as real user and extra_file_path
by Louise-Amélie Schmitt
Hi everyone,
I just wanted to ask how the extra_file_path is handled in the case of jobs
running as the real user, since the file_path is only writable by the
galaxy user. Any clue?
Thanks,
L-A
3 years, 8 months
Environmental Variable in <version_command>
by D K
Is it possible to use environment variables within <version_command>? I'm
trying to use git commit information as the version ID in Galaxy, where the
path $GITREPO can change:
e.g.
<version_command>git --git-dir="$GITREPO/project.git" log -n 1
origin/master | grep -P "commit|Author|Date"</version_command>
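Outside Galaxy, the command itself is valid in any shell where $GITREPO is set; whether <version_command> runs through a shell that expands variables is the open question. A standalone sketch (the /tmp path and repo layout are illustrative, grep -E replaces the less portable -P, and plain log -n 1 stands in for log -n 1 origin/master since the throwaway bare repo has no remote):

```shell
# Build a throwaway bare repo standing in for $GITREPO/project.git.
export GITREPO=/tmp/gitrepo-demo
rm -rf "$GITREPO" && mkdir -p "$GITREPO/work"
cd "$GITREPO/work" && git init -q .
git -c user.name=demo -c user.email=demo@example.org \
    commit -q --allow-empty -m "initial commit"
git clone -q --bare . "$GITREPO/project.git"

# The version command from the question, with -E instead of -P:
git --git-dir="$GITREPO/project.git" log -n 1 \
    | grep -E "commit|Author|Date"
```

If the variable is not expanded in Galaxy's context, one workaround is to wrap the command in a small script that resolves the path itself.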
Thanks!
3 years, 9 months
Additional Information about users in the API
by Milen Kouylekov
Dear Team,
Hi, my name is Milen Kouylekov and I am working on a Galaxy portal
customized for natural language technology applications:
http://lap.clarino.uio.no.
I wrote a reporting script that uses the Galaxy API to summarize user
activity. The idea of the script is to send regular reports with the
information available to the reporting server, as well as additional
information such as queries that failed to execute. Using the API, I have
managed to retrieve most of the relevant information except the
following:
1) User creation date.
2) User last access date.
I have inspected the user object and can see that this information is
part of it. My question is:
Is there a way to access this information through the API? If not, can I
make a feature request? This information is useful for the administration
of a Galaxy instance.
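A sketch of the kind of check such a script could do today: fetch GET /api/users/<id> and keep whatever timestamp fields the server actually returns. Which fields appear (and the names below, beyond create_time, are guesses) depends on the Galaxy version and on admin privileges:

```python
import json

def timestamp_fields(user_info: dict) -> dict:
    """Pick out whichever timestamp-like fields the API response
    contains; fields the server omits simply stay absent."""
    wanted = ("create_time", "update_time", "last_login")
    return {k: user_info[k] for k in wanted if k in user_info}

# Example with a response shaped like GET /api/users/<id>
# (values invented for illustration):
sample = {"username": "milen", "id": "f2db41e1fa331b3e",
          "create_time": "2016-01-15T10:00:00"}
print(json.dumps(timestamp_fields(sample)))
```

Filtering defensively like this keeps the reporting script working across Galaxy versions that expose different subsets of the user object.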
Cheers Milen
3 years, 9 months
Galaxy User IDs and Group IDs
by Lemley, Rob
Hi everyone,
This may seem like a silly question, but are user IDs and group IDs expected to be the "same"?
For example:
>>> gi.users.get_users()
[{u'username': u'user1', u'model_class': u'User', u'id': u'f2db41e1fa331b3e', u'email': u'XXXXX'},
{u'username': u'user2', u'model_class': u'User', u'id': u'f597429621d6eb2b', u'email': u'XXXXX'}]
>>> gi.groups.get_groups()
[{u'url': u'/api/groups/f2db41e1fa331b3e', u'model_class': u'Group', u'id': u'f2db41e1fa331b3e', u'name': u'group1'},
{u'url': u'/api/groups/f597429621d6eb2b', u'model_class': u'Group', u'id': u'f597429621d6eb2b', u'name': u'group2'}]
This is just a dev instance I set up really quickly, so these are the first two users/groups I've created. If this is expected behavior I'll carry on; it just seems odd to me.
Thanks!
-Rob
--
Rob Lemley
Systems Administrator - CIRC
University of Rochester
3 years, 9 months
conda path problem
by Matthias Bernt
Dear list,
now that I got conda working, I get a new error when installing tools
(both when installing via the web interface and via the galaxy-tools
ansible role). There seems to be a failed write to a nonexistent file
...../_condarc
I guess an important point is that I needed to redefine the conda path
in galaxy.ini (since I got warnings that the path name was too long).
Maybe there are still some hard-coded paths around?
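The traceback below suggests Galaxy derives the _condarc location from its dependency directory rather than from the conda path alone. A hedged galaxy.ini sketch of the options involved (option names as in Galaxy sample configs of that era; paths are illustrative, and the key point is that the parent of the _condarc path must be writable by the Galaxy user):

```ini
# galaxy.ini -- check the sample galaxy.ini of your release for the
# exact option names
tool_dependency_dir = /gpfs1/short/deps    ; _condarc is written next to this
conda_prefix = /gpfs1/short/deps/_conda    ; the relocated, short conda path
conda_auto_init = True
```

If only conda_prefix was moved while tool_dependency_dir kept pointing at the long (or unwritable) default, that would explain a failed write to database/dependencies/_condarc.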
Best,
Matthias
Uncaught exception in exposed API method:
Traceback (most recent call last):
  File "/gpfs1/data/galaxy_server/galaxy-dev/lib/galaxy/web/framework/decorators.py", line 282, in decorator
    rval = func( self, trans, *args, **kwargs )
  File "/gpfs1/data/galaxy_server/galaxy-dev/lib/galaxy/webapps/galaxy/api/tool_shed_repositories.py", line 502, in install_repository_revision
    payload )
  File "/gpfs1/data/galaxy_server/galaxy-dev/lib/tool_shed/galaxy_install/install_manager.py", line 706, in install
    install_options
  File "/gpfs1/data/galaxy_server/galaxy-dev/lib/tool_shed/galaxy_install/install_manager.py", line 805, in __initiate_and_install_repositories
    return self.install_repositories(tsr_ids, decoded_kwd, reinstalling=False)
  File "/gpfs1/data/galaxy_server/galaxy-dev/lib/tool_shed/galaxy_install/install_manager.py", line 852, in install_repositories
    tool_panel_section_mapping=tool_panel_section_mapping )
  File "/gpfs1/data/galaxy_server/galaxy-dev/lib/tool_shed/galaxy_install/install_manager.py", line 910, in install_tool_shed_repository
    [self._view.install_dependencies(r) for r in new_requirements]
  File "/gpfs1/data/galaxy_server/galaxy-dev/lib/galaxy/tools/deps/views.py", line 59, in install_dependencies
    return self._dependency_manager._requirements_to_dependencies_dict(requirements, **{'install': True})
  File "/gpfs1/data/galaxy_server/galaxy-dev/lib/galaxy/tools/deps/__init__.py", line 148, in _requirements_to_dependencies_dict
    dependencies = resolver.resolve_all(resolvable_requirements, **kwds)
  File "/gpfs1/data/galaxy_server/galaxy-dev/lib/galaxy/tools/deps/resolvers/conda.py", line 171, in resolve_all
    is_installed = self.install_all(conda_targets)
  File "/gpfs1/data/galaxy_server/galaxy-dev/lib/galaxy/tools/deps/resolvers/conda.py", line 126, in install_all
    return_code = install_conda_targets(conda_targets, env, conda_context=self.conda_context)
  File "/gpfs1/data/galaxy_server/galaxy-dev/lib/galaxy/tools/deps/conda_util.py", line 405, in install_conda_targets
    conda_context.ensure_channels_configured()
  File "/gpfs1/data/galaxy_server/galaxy-dev/lib/galaxy/tools/deps/conda_util.py", line 136, in ensure_channels_configured
    self.save_condarc(conda_conf)
  File "/gpfs1/data/galaxy_server/galaxy-dev/lib/galaxy/tools/deps/conda_util.py", line 198, in save_condarc
    raise Exception(message)
Exception: Failed to update write to path [/gpfs1/data/galaxy_server/galaxy-dev/database/dependencies/_condarc] while attempting to update conda configuration, please update the configuration to override the condarc location or grant this application write to the parent directory.
3 years, 9 months