Groovy tool for galaxy
by Anshul Tak
Hi,
I have a question regarding Galaxy. Can I execute Groovy scripts using
Galaxy? I installed Groovy and all its dependencies in the Galaxy Docker
image. However, when I try to execute a tool that requires the Groovy
engine, it fails with
"/export/galaxy-central/database/job_working_directory/000/2/tool_script.sh:
line 9: groovy: command not found"
So I understand the error: Galaxy cannot find Groovy. But ideally it
should pick up the Groovy engine, since it is installed.
Can you give me some pointers to solve this issue?
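For what it's worth, the bare `groovy` call in tool_script.sh only resolves if groovy is on PATH for the user and environment that actually run the job, which is not necessarily the shell you installed it from. A minimal sketch of the lookup (the `groovy-sim` stub and the temp directory are made up for illustration; substitute wherever groovy really lives in your image):

```shell
# Stand-in install dir; in a real image this might be e.g. /opt/groovy/bin
# (an assumption -- adjust to the actual install location).
install_dir=$(mktemp -d)
printf '#!/bin/sh\necho groovy-sim\n' > "$install_dir/groovy"
chmod +x "$install_dir/groovy"

# This mirrors the failing line in tool_script.sh: a bare command name is
# only found if its directory is on PATH when the job script runs.
PATH="$install_dir:$PATH"
command -v groovy    # now resolves to $install_dir/groovy
groovy               # runs the stub
```

In Galaxy the equivalent fix is to make PATH include the groovy directory for the job itself, e.g. via an `<env>` entry on the job destination in job_conf.xml or via tool dependency resolution, rather than only in your interactive shell.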
--
With Regards,
Anshul Tak
5 years, 5 months
Pulsar - running as real DRMAA user problems
by Poole, Richard
Hey John,
So I’ve been happily using Pulsar to send all my Galaxy server jobs to our cluster here at UCL for several months now (I love it!). I am now exploring the ‘run-as-real-user’ option for DRMAA submissions and have run into a problem. The files are correctly staged, correctly chowned, and successfully submitted to the queue, and the job runs. However, at job end (collection?), it fails with the following error message in Pulsar:
Exception happened during processing of request from ('*.*.*.*', 54321)
Traceback (most recent call last):
File "/opt/rocks/lib/python2.6/site-packages/Paste-2.0.1-py2.6.egg/paste/httpserver.py", line 1072, in process_request_in_thread
self.finish_request(request, client_address)
File "/opt/rocks/lib/python2.6/SocketServer.py", line 322, in finish_request
self.RequestHandlerClass(request, client_address, self)
File "/opt/rocks/lib/python2.6/SocketServer.py", line 617, in __init__
self.handle()
File "/opt/rocks/lib/python2.6/site-packages/Paste-2.0.1-py2.6.egg/paste/httpserver.py", line 446, in handle
BaseHTTPRequestHandler.handle(self)
File "/opt/rocks/lib/python2.6/BaseHTTPServer.py", line 329, in handle
self.handle_one_request()
File "/opt/rocks/lib/python2.6/site-packages/Paste-2.0.1-py2.6.egg/paste/httpserver.py", line 441, in handle_one_request
self.wsgi_execute()
File "/opt/rocks/lib/python2.6/site-packages/Paste-2.0.1-py2.6.egg/paste/httpserver.py", line 291, in wsgi_execute
self.wsgi_start_response)
File "/cluster/galaxy/pulsar/pulsar/web/framework.py", line 39, in __call__
return controller(environ, start_response, **request_args)
File "/cluster/galaxy/pulsar/pulsar/web/framework.py", line 144, in controller_replacement
result = self.__execute_request(func, args, req, environ)
File "/cluster/galaxy/pulsar/pulsar/web/framework.py", line 124, in __execute_request
result = func(**args)
File "/cluster/galaxy/pulsar/pulsar/web/routes.py", line 82, in status
return status_dict(manager, job_id)
File "/cluster/galaxy/pulsar/pulsar/manager_endpoint_util.py", line 12, in status_dict
job_status = manager.get_status(job_id)
File "/cluster/galaxy/pulsar/pulsar/managers/stateful.py", line 95, in get_status
proxy_status, state_change = self.__proxy_status(job_directory, job_id)
File "/cluster/galaxy/pulsar/pulsar/managers/stateful.py", line 115, in __proxy_status
proxy_status = self._proxied_manager.get_status(job_id)
File "/cluster/galaxy/pulsar/pulsar/managers/queued_external_drmaa_original.py", line 62, in get_status
external_status = super(ExternalDrmaaQueueManager, self)._get_status_external(external_id)
File "/cluster/galaxy/pulsar/pulsar/managers/base/base_drmaa.py", line 31, in _get_status_external
drmaa_state = self.drmaa_session.job_status(external_id)
File "/cluster/galaxy/pulsar/pulsar/managers/util/drmaa/__init__.py", line 50, in job_status
return self.session.jobStatus(str(external_job_id))
File "build/bdist.linux-x86_64/egg/drmaa/session.py", line 518, in jobStatus
c(drmaa_job_ps, jobId, byref(status))
File "build/bdist.linux-x86_64/egg/drmaa/helpers.py", line 299, in c
return f(*(args + (error_buffer, sizeof(error_buffer))))
File "build/bdist.linux-x86_64/egg/drmaa/errors.py", line 151, in error_check
raise _ERRORS[code - 1](error_string)
InvalidJobException: code 18: The job specified by the 'jobid' does not exist.
With this corresponding error from my Galaxy server:
galaxy.tools.actions INFO 2016-10-13 18:47:51,851 Handled output (279.421 ms)
galaxy.tools.actions INFO 2016-10-13 18:47:52,093 Verified access to datasets (5.271 ms)
galaxy.tools.execute DEBUG 2016-10-13 18:47:52,118 Tool [toolshed.g2.bx.psu.edu/repos/devteam/sam_to_bam/sam_to_bam/1.1.4] created job [25008] (560.404 ms)
galaxy.jobs DEBUG 2016-10-13 18:47:52,579 (25008) Working directory for job is: /Users/galaxy/galaxy-dist/database/job_working_directory/025/25008
galaxy.jobs.handler DEBUG 2016-10-13 18:47:52,591 (25008) Dispatching to pulsar runner
galaxy.jobs DEBUG 2016-10-13 18:47:52,677 (25008) Persisting job destination (destination id: hpc_low)
galaxy.jobs.runners DEBUG 2016-10-13 18:47:52,681 Job [25008] queued (90.231 ms)
galaxy.jobs.handler INFO 2016-10-13 18:47:52,699 (25008) Job dispatched
galaxy.tools.deps DEBUG 2016-10-13 18:47:53,138 Building dependency shell command for dependency 'samtools'
galaxy.jobs.runners.pulsar INFO 2016-10-13 18:47:53,233 Pulsar job submitted with job_id 25008
galaxy.jobs DEBUG 2016-10-13 18:47:53,257 (25008) Persisting job destination (destination id: hpc_low)
galaxy.datatypes.metadata DEBUG 2016-10-13 18:51:03,922 Cleaning up external metadata files
galaxy.jobs.runners.pulsar ERROR 2016-10-13 18:51:03,945 failure finishing job 25008
Traceback (most recent call last):
File "/Users/galaxy/galaxy-dist/lib/galaxy/jobs/runners/pulsar.py", line 386, in finish_job
run_results = client.full_status()
File "/Users/galaxy/galaxy-dist/lib/pulsar/client/client.py", line 132, in full_status
return self.raw_check_complete()
File "/Users/galaxy/galaxy-dist/lib/pulsar/client/decorators.py", line 28, in replacement
return func(*args, **kwargs)
File "/Users/galaxy/galaxy-dist/lib/pulsar/client/decorators.py", line 13, in replacement
response = func(*args, **kwargs)
File "/Users/galaxy/galaxy-dist/lib/pulsar/client/client.py", line 146, in raw_check_complete
check_complete_response = self._raw_execute("status", {"job_id": self.job_id})
File "/Users/galaxy/galaxy-dist/lib/pulsar/client/client.py", line 215, in _raw_execute
return self.job_manager_interface.execute(command, args, data, input_path, output_path)
File "/Users/galaxy/galaxy-dist/lib/pulsar/client/interface.py", line 96, in execute
response = self.transport.execute(url, method=method, data=data, input_path=input_path, output_path=output_path)
File "/Users/galaxy/galaxy-dist/lib/pulsar/client/transport/standard.py", line 34, in execute
response = self._url_open(request, data)
File "/Users/galaxy/galaxy-dist/lib/pulsar/client/transport/standard.py", line 20, in _url_open
return urlopen(request, data)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib2.py", line 154, in urlopen
return opener.open(url, data, timeout)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib2.py", line 437, in open
response = meth(req, response)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib2.py", line 550, in http_response
'http', request, response, code, msg, hdrs)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib2.py", line 475, in error
return self._call_chain(*args)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib2.py", line 409, in _call_chain
result = func(*args)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib2.py", line 558, in http_error_default
raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
HTTPError: HTTP Error 500: Internal Server Error
I am running Galaxy 15.10 and Python 2.7.10 on my iMac for the server; the cluster submission node is running Pulsar 0.5.0 and Python 2.7.12.
For these tests I run Pulsar in an interactive window, so I have not set up the sudoers file but instead enter the sudo password when Pulsar requests it (at the first step of chowning the staging directory). I also have rewrites set up in Galaxy’s pulsar_actions.yml, and I am using remote_scp for the file transfers rather than http. I have also tried switching back to http (as I noticed that caching, which I am also testing, does not work with scp transfers) but get an identical set of error messages.
As I say, I have no trouble using a regular queued_drmaa manager in Pulsar. Any ideas what the problem may be?
Cheers,
Rich
Richard J Poole PhD
Wellcome Trust Fellow
Department of Cell and Developmental Biology
University College London
518 Rockefeller
21 University Street, London WC1E 6DE
Office (518 Rockefeller): +44 20 7679 6577 (int. 46577)
Lab (529 Rockefeller): +44 20 7679 6133 (int. 46133)
https://www.ucl.ac.uk/cdb/academics/poole
5 years, 8 months
Running jobs as real user and extra_file_path
by Louise-Amélie Schmitt
Hi everyone,
I just wanted to ask how the extra_file_path is handled when jobs run
as the real user, since the file_path is only writable by the galaxy
user. Any clue?
Thanks,
L-A
5 years, 9 months
Testing updated NCBI BLAST+ wrappers for version 2.5.0
by Peter Cock
Hello all,
I have updated the NCBI BLAST+ wrappers on the Test Tool Shed,
the wrapper is now at v0.2.00:
https://testtoolshed.g2.bx.psu.edu/view/devteam/ncbi_blast_plus/
The main change is that the wrapper now depends on BLAST+ 2.5.0, which
is available via either BioConda or the Tool Shed:
https://toolshed.g2.bx.psu.edu/view/iuc/package_blast_plus_2_5_0/
https://testtoolshed.g2.bx.psu.edu/view/iuc/package_blast_plus_2_5_0/
In order for the dependency to work smoothly with both BioConda
and the Tool Shed system, we have changed the package name
from "blast+" to just "blast". Given that NCBI stopped updating the
original "legacy" BLAST some time ago, the name combined with the
version number is no longer ambiguous.
Jumping from BLAST+ 2.2.31 to BLAST+ 2.5.0 required updating many
of the test files for NCBI changes, including dropping the GI numbers
in many outputs, expanding the percentage identity field from 2dp to
3dp, and changing how -parse_deflines works with tabular output.
The wrappers (deliberately) do not yet offer any of the new
functionality added in the recent NCBI BLAST+ updates; in particular,
BLAST XML v2 is not yet available as an output with a datatype in Galaxy.
At this point I would welcome feedback from those of you using the
BLAST+ wrappers - including whether you were able to install this with
the dependencies from BioConda or the traditional Tool Shed packages.
Once I'm confident that this is all OK, I will update the main Tool Shed
(and think about adding new functionality in 2017).
Thank you all,
Peter
5 years, 9 months
installation error on venv dependencies.xml
by Linda Bakker
Hi All,
I am having some problems with my tool dependencies for setting up a Python venv.
Dependencies.xml looks like this:
When I install my tool from the Tool Shed, it fails to install the venv
dependency with this error message:
I am not sure what exactly the problem is. os.py should be part of a
default Python installation, right? Or do I need to specify it in the
dependencies file?
Can someone point me in the right direction as to what the problem might be.
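As a sanity check on the os.py question (a minimal sketch only; since the dependencies.xml and the error text didn't come through, this addresses just the stdlib point): `os` ships with the interpreter itself and is importable from any freshly created venv without being declared as a dependency anywhere.

```shell
# Throwaway venv in a temp dir, purely for illustration.
venv_dir=$(mktemp -d)/venv
python3 -m venv "$venv_dir"

# os (and the rest of the stdlib) needs no entry in a dependencies file:
"$venv_dir/bin/python" -c 'import os; print(os.path.basename("/a/b"))'
```

So if the installer is failing around os.py, the cause is more likely the Python interpreter used to build the venv than a missing package - but without the error text that is only a guess.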
Thanks!
With regards,
Linda Bakker
--
Linda Bakker
Bioinformatics Research Assistant
Plant Research International, Wageningen UR
Building 107 (Radix), room W1.Fa.054
Droevendaalsesteeg 1, 6708 PB, Wageningen, The Netherlands
tel: +31-317-480756
email: linda.bakker(a)wur.nl
5 years, 10 months
Issue when using xsendfile with galaxy release_16.07
by SAPET, Frederic
Hello
I'm trying the latest Galaxy (git status gives me: # On branch release_16.07).
I've set the xsendfile option in galaxy.ini (in order to run Galaxy behind a web server proxy) and applied all the methods described here: https://wiki.galaxyproject.org/Admin/Config/ApacheProxy
apache_xsendfile = True
HTML, gff3 and tabular files are just fine, but the preview of some other files (txt, csv) is broken in the interface.
When I want to preview a txt file :
Not Found
The requested URL /datasets/fd1df6a9052810d5/display/ was not found on this server.
And apache says :
[Thu Aug 18 15:33:48 2016] [error] [client 192.168.29.12] (13)Permission denied: xsendfile: cannot open file: /softs/bioinfo/galaxy-prod/database/files/000/dataset_19.dat, referer: http:
But it looks like this is not really a problem related to access permissions.
Indeed, if I take the same file and switch it from txt to tabular (edit attributes), the preview works.
Are you able to reproduce this behavior?
The same problem occurs on release_16.01.
It's fine with release v15.05, which is what my production server runs right now.
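In case it helps to rule things out: with mod_xsendfile, a "(13)Permission denied: xsendfile: cannot open file" usually means either the file lies outside every configured XSendFilePath or the apache user lacks read permission on it. A sketch of the relevant directives (path taken from the error above; check it against your actual layout):

```apache
XSendFile on
# Must cover Galaxy's file_path so apache is allowed to serve the datasets:
XSendFilePath /softs/bioinfo/galaxy-prod/database/files
```

That said, since the tabular preview of the very same dataset works, the 16.07 difference may lie in which URL or headers Galaxy emits for txt/csv previews, so this is only the first thing to check.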
Fred
5 years, 11 months
Question about Data Managers
by Carlos Lijeron
Everyone,
I’m trying to set up various data managers to automatically download and index reference data. However, I can’t figure out where to change the default location where these are saved. As you can see in the attached image, the default location for the indexes is … /tool-data/dbkey/index_name. However, I would like to save them in /projects/reference_datasets, which is a shared location with a lot of space available.
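In case it's useful: Galaxy has a dedicated config option for where data managers write their output, separate from the static tool-data tables (a hedged sketch; option name as of the 16.x-era galaxy.ini, double-check against your version):

```ini
# galaxy.ini
# Base path under which data managers store downloaded/built reference data.
galaxy_data_manager_data_path = /projects/reference_datasets
```

If this is unset, it falls back to tool_data_path, which would explain why everything lands under tool-data/ by default.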
Please advise.
Thank you all !
Carlos.
5 years, 11 months
display at UCSC does not work
by Hans-Rudolf Hotz
Hi
I am trying to set up the possibility to display data sets in the UCSC
genome browser for our new Galaxy installation.
After clicking on "display at UCSC main", I get the UCSC genome browser,
opening in the right assembly and at the correct location, but I get
the following error:
Warning/Error(s):
redirected to non-http(s): /login?redirect=%2Froot%2Fdisplay_as
In galaxy.ini, I did uncomment the line:
display_servers = hgw1.cse.ucsc.edu,hgw2.cse.ucsc.edu,hgw3.cse. ...
We use the 'new' authentication method to connect to our LDAP server,
with the following changes in galaxy.ini:
require_login = True
auth_config_file = config/auth_conf.xml
Connection from UCSC to our Galaxy server is working: when I switch to
require_login = False
it works, independent of whether I am logged in or not.
Is there a way to get it to work, even if I have "require_login = True"?
Thank you very much for your help
Regards, Hans-Rudolf
--
Hans-Rudolf Hotz, PhD
Bioinformatics Support
Friedrich Miescher Institute for Biomedical Research
Maulbeerstrasse 66
4058 Basel/Switzerland
6 years
Fwd: Fwd: Usegalaxy.org Data Rsync Connection refused
by Christopher Previti
Dear all,
I was wondering whether anybody has been able to get the rsync server
datacache.g2.bx.psu.edu to work? Have there been undocumented changes?
We still get the following messages:
rsync -avzP rsync://datacache.g2.bx.psu.edu/indexes/phiX .
rsync: failed to connect to datacache.g2.bx.psu.edu (128.118.250.10):
Connection refused (111)
rsync error: error in socket IO (code 10) at clientserver.c(128)
[Receiver=3.1.1]
My colleague Georgios Nikolis and I are working on a local Galaxy
installation at the DKFZ and would really like to use the pre-built indices.
Cheers,
Christopher Previti
-------- Forwarded Message --------
Subject: Fwd: [galaxy-dev] Usegalaxy.org Data Rsync Connection refused
Date: Tue, 13 Dec 2016 13:46:48 +0100
From: Georgios Nikolis <g.nikolis(a)dkfz-heidelberg.de>
To: Christopher Previti <christopher.previti(a)dkfz-heidelberg.de>
*Georgios Nikolis*
Genomics and Proteomics Core Facility
High Throughput Sequencing (W190)
IT Management Genomics Proteomics
German Cancer Research Center (DKFZ)
Foundation under Public Law
Im Neuenheimer Feld 580
69120 Heidelberg
Germany
phone: +49 6221 42-4659
g.nikolis(a)dkfz.de
www.dkfz.de
Management Board: Prof. Dr. Michael Baumann, Prof. Dr. Josef Puchta
VAT-ID No.: DE143293537
-------- Forwarded Message --------
Subject: [galaxy-dev] Usegalaxy.org Data Rsync Connection refused
Date: Fri, 28 Oct 2016 15:23:14 +0200
From: Georgios Nikolis <g.nikolis(a)dkfz-heidelberg.de>
To: galaxy-dev(a)lists.galaxyproject.org
<galaxy-dev(a)lists.galaxyproject.org>
Dear all,
we are currently working on setting up a new local Galaxy installation,
and I would like to obtain the genome reference data available on the
public Galaxy instance. I am following the instructions on the wiki page
(https://wiki.galaxyproject.org/Admin/UseGalaxyRsync), but I can't seem
to access datacache.g2.bx.psu.edu via rsync.
rsync -avzP rsync://datacache.g2.bx.psu.edu/indexes/phiX .
rsync: failed to connect to datacache.g2.bx.psu.edu (128.118.250.10):
Connection refused (111)
rsync error: error in socket IO (code 10) at clientserver.c(128)
[Receiver=3.1.1]
Does anyone know if the URL has changed in the meantime? Is there any
alternative way to access this data?
Thanks in advance!
Best regards,
Georgios
--
*Georgios Nikolis*
Genomics and Proteomics Core Facility
High Throughput Sequencing (W190)
IT Management Genomics Proteomics
German Cancer Research Center (DKFZ)
Foundation under Public Law
Im Neuenheimer Feld 580
69120 Heidelberg
Germany
phone: +49 6221 42-4659
g.nikolis(a)dkfz.de
www.dkfz.de
Management Board: Prof. Dr. Michael Boutros (interim), Prof. Dr. Josef
Puchta
VAT-ID No.: DE143293537
6 years
Python script can't import BioPython modules in Galaxy 16.01
by Makis Ladoukakis
Hello everyone,
I've recently upgraded to Galaxy 16.01 and I have some issues with a Python wrapper for a tool. When I run it via Galaxy, I get the following error:
Traceback (most recent call last):
File "/home/galaxy_user/galaxy/tools/myTools/know_parser/know_parser.py", line 12, in <module>
from Bio import SeqIO
ImportError: No module named Bio
However, when I click on the "view details" icon, copy the command from the tool, and run it on my command line, the tool works perfectly. Moreover, this issue occurs even though I have already installed BioPython:
when I start python and type "from Bio import SeqIO", I get no error.
Can this be an issue related to the virtual environment that is initialized each time Galaxy starts?
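Very likely, yes: Galaxy 16.x runs tools with the python from its own venv, which does not see packages installed only for your login shell's interpreter. A minimal sketch of the effect (here "mymod" stands in for Bio, and the paths are throwaway temp dirs, not Galaxy's real layout):

```shell
# A module reachable by one interpreter environment but not another.
work=$(mktemp -d)
printf 'VALUE = 42\n' > "$work/mymod.py"

python3 -m venv "$work/venv"

# From a shell where the module happens to be on the search path, it imports:
PYTHONPATH="$work" python3 -c 'import mymod; print(mymod.VALUE)'

# Galaxy runs tools with its venv's python and a clean environment, so the
# very same import fails there:
"$work/venv/bin/python" -c 'import mymod' 2>/dev/null || echo "import fails inside venv"
```

So the likely fix is to install BioPython with the pip inside Galaxy's venv (the one activated by run.sh), not the system pip - but which venv Galaxy actually uses on your box is worth confirming first.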
Thank you in advance,
Makis Ladoukakis
6 years