My attempt to 'package' pycurl on the testtoolshed fails with the following error:

src/pycurl.c: In function ‘do_multi_info_read’:
src/pycurl.c:3549:15: warning: call to ‘_curl_easy_getinfo_err_string’ declared with attribute warning: curl_easy_getinfo expects a pointer to char * for this info [enabled by default]
src/pycurl.c: In function ‘do_curl_getinfo’:
src/pycurl.c:2888:19: warning: call to ‘_curl_easy_getinfo_err_curl_slist’ declared with attribute warning: curl_easy_getinfo expects a pointer to struct curl_slist * for this info [enabled by default]
error: could not create '/usr/local/share/doc': Permission denied
A local install worked fine. Is this because libcurl-devel is missing on the toolshed instance?
I would like to use bioblend to export the results of a Galaxy workflow. To do that, we want to use a tool we implemented called export_data, which copies a result file from the Galaxy directory tree to a local directory belonging to the user running the analysis and the export.
We tried to launch this tool using the run_tool function, but it fails with an error message which is very long and attached to this email.
Please help me with this; it is the final of several steps using bioblend.
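For reference, here is roughly how we call the tool via bioblend (the tool id "export_data", the parameter names "input" and "export_dir", and the URL/API key are from our setup, not from bioblend itself):

```python
# Sketch of invoking our export_data tool through bioblend's ToolClient.
from bioblend.galaxy import GalaxyInstance

def export_result(gi, history_id, dataset_id, export_dir):
    # The keys in tool_inputs must match the <param> names declared
    # in the export_data tool's XML wrapper (ours, hypothetical here).
    tool_inputs = {
        "input": {"src": "hda", "id": dataset_id},
        "export_dir": export_dir,
    }
    return gi.tools.run_tool(history_id=history_id,
                             tool_id="export_data",
                             tool_inputs=tool_inputs)

gi = GalaxyInstance(url="https://galaxy.example.org", key="OUR_API_KEY")
# result = export_result(gi, "history_id_here", "dataset_id_here",
#                        "/home/user/results")
```

The run_tool call itself is commented out above because it requires a live server; the error we attach is produced by that call.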
Have a nice day,
Olivia Doppelt-Azeroual, PhD
Tel: 92 15
CIB - Institut Pasteur
Thanks for the major update of the Freebayes wrapper, excellent!
I've run into two issues, however.
1) When using "set allelic scope" I get the following error:
Fatal error: Exit code 1 ()
freebayes: unrecognized option `--min-repeat-length'
did you mean --min-repeat-size ?
2) When using a vcf file as input I get the following error:
Fatal error: Exit code 1 ()
open: No such file or directory
[bgzf_check_bgzf] failed to open the file: input_variant_vcf.vcf.gz
[tabix++] was bgzip used to compress this file? input_variant_vcf.vcf.gz
I'd be happy to help resolve these. Is there a bitbucket repo somewhere
to submit pull requests?
Lance Parsons - Scientific Programmer
134 Carl C. Icahn Laboratory
Lewis-Sigler Institute for Integrative Genomics
Thank you for your answer and for trying to help. This is greatly appreciated.
I didn't really make any progress in tracking down this error, and hopefully this weird behaviour will not happen anymore with the November 4th distribution.
But here are my answers to your questions, in case they ring a bell:
Has this behaviour been reported with any other workflow?
It has been reported with 2 different workflows as of now. These 2 workflows don't have anything in common, except that they are huge (one of them has 37 steps, producing a total of about 110 datasets).
Are you running Galaxy as a single process or multiple processes? If
multiple processes, how many web, handler and manager processes do you
have and are they all on the same machine?
We are running Galaxy in multiple processes with 5 web servers, 3 job
handlers and no manager (I believe the manager was rendered obsolete in
one of the latest Galaxy distributions). All these processes are run on
the same machine.
Have you made any modifications to Galaxy that could result in this behaviour?
What is the value of track_jobs_in_database in your universe_wsgi.ini configuration file?
We never touched this part of the configuration file and the line still reads: "#track_jobs_in_database = None".
After reading your answer, I've decided to modify this line to:
"track_jobs_in_database = True"
Unfortunately, after running one of the faulty workflows several times (5x), I noticed that one of the runs still showed this strange behaviour where some jobs were executed before their inputs were ready.
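For completeness, here is the relevant shape of our universe_wsgi.ini (the section names web0/handler0 follow the standard multi-process scaling layout; the port numbers are just ours):

```ini
; one of the 5 web front-ends
[server:web0]
use = egg:Paste#http
port = 8080
use_threadpool = True

; one of the 3 job handlers
[server:handler0]
use = egg:Paste#http
port = 8090
use_threadpool = True

[app:main]
; changed from the commented-out default, as described above
track_jobs_in_database = True
```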
Do you think this issue could be related to the fact that we are using
Galaxy with the multiple processes configuration? We implemented this
configuration some time ago because some of our users were complaining
about the slow responsiveness of the web interface.
Would you recommend using Galaxy without the multiple-processes configuration (let's say, if updating to the November 4th distribution doesn't fix this issue)?
I guess you are probably using the multiple-processes configuration on Galaxy main as well?
Thanks again for your help!
Posted by John Chilton on Nov 09, 2013; 2:50pm
Have you made any progress tracking down this error? This appears very
serious, but to tell you the truth I have no clue what could cause it. The
distribution you are using is pretty old at this point; I feel like if it was a bug that exhibited under relatively standard parameter combinations, someone else would have reported it by now.
Can you tell me some things: has this been reported with any other
workflows? Is there anything special about this workflow? Can you rebuild
the workflow and see if the error occurs again?
Additional questions if the problem is not restricted to the workflow:
are you running Galaxy as a single process or multiple processes? If
multiple processes, how many web, handler, and manager processes do you
have? Are they all on the same machine? Have you made any modifications to
Galaxy that could result in this behavior? What is the value of
track_jobs_in_database in your universe_wsgi.ini configuration file?
On Thu, Nov 7, 2013 at 10:34 AM, Jean-Francois Payotte <[hidden email]> wrote:
Dear Galaxy mailing-list,
Once again I come seeking for your help. I hope someone already had this
issue or will have an idea on where to look to solve it. :)
One of our users reported having workflows failing because some steps were executed before all their inputs were ready.
You can find a screenshot attached, where we can see that step (42) "Sort on data 39" has been executed while step (39) is still waiting to run.
This behaviour has been reproduced with at least two different Galaxy
tools (one custom, and the sort tool which comes standard with Galaxy).
This behaviour seems to be a little bit random: running a workflow where this issue occurs twice, only one of the two runs had steps executed in the wrong order.
I could be wrong, but I don't think this issue is grid-related as, from my
understanding, Galaxy is not using SGE job dependencies functionality.
I believe all jobs stay in internal queues (within Galaxy) until all their input files are ready, and only then is a job submitted to the cluster.
Any help or any hint on what to look at to solve this issue would be greatly appreciated.
We updated our Galaxy instance to the August 12th distribution on October 1st, and I believe we never experienced this issue before the update.
Many thanks for your help,
I have a big workflow, and sometimes when I launch it I get an error message for some of my tools: unable to finish job.
I think it may be because Galaxy runs multiple jobs at once, and that is why I get this error message.
I'm looking to run my workflow "step by step".
For example, in this workflow I would like to run Hilbert, THEN FFT, THEN SLM, and not Hilbert, FFT and SLM at the same time.
Is it possible to make a workflow wait for a job to finish before running another job?
Hi All --
I'm a novice when it comes to all things programming, but I recently
installed my own instance of Galaxy because I'm working with very large
datasets. Everything seems to be running OK, but when I attempt to run
Cuffdiff (w/ ref. annotation and bias correction from UCSC genome) on
output from Tophat2, I get the following error: "*Error: sort order of
reads in BAMs must be the same*." I'm perplexed, because I downloaded this
output from Tophat2 that I ran on the public server, before putting it
through Cuffdiff on my local instance. When I tried running the same files
(prior to download) in Cuffdiff on the public server, they ran just fine.
So, why do the files run properly when I run them publicly, but not
locally? Am I missing some metadata? Is my install not complete? Does the
ordering of bam files change when you download them? I've had a hard time
finding solutions, so any tips are greatly appreciated.
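In case it helps with diagnosis: one way to compare what Cuffdiff sees is to read the SO: field of each BAM's @HD header line (this assumes samtools is on your PATH; the filenames below are placeholders for the public-server and local Tophat2 outputs):

```shell
# Print the sort order ("coordinate", "queryname", or "unsorted")
# recorded in a BAM's @HD header line.
sort_order() {
    samtools view -H "$1" | sed -n 's/^@HD.*SO:\([^[:space:]]*\).*/\1/p'
}

sort_order tophat_public.bam   # file downloaded from the public server
sort_order tophat_local.bam    # file produced on the local instance

# If the two differ, re-sorting one by coordinate may reconcile them:
# samtools sort -o tophat_public.sorted.bam tophat_public.bam
```

If the downloaded BAM's header no longer claims the same sort order as the locally produced one, that would explain the Cuffdiff error.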
A few days ago, I asked how Galaxy maintains its knowledge of the user,
even if the user hasn't logged in, such that two different anonymous users
see their respective histories (and not each other's) when they return to
Galaxy sometime later. I got the response that I was guessing I might get:
session cookies. Next question: how can I access those cookies / that
information from within my Galaxy tool?
I remember seeing somewhere that there is an internal variable that gives
me the username, if the user is logged in. What is that variable, and what
is its value if the user has not logged in?
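To make the question concrete, in the tool XML I'd like to write something like the following (I'm assuming the half-remembered variable is one of Galaxy's reserved Cheetah variables such as $__user_email__; my_script.py and its options are placeholders of mine):

```xml
<command>
    my_script.py --user '$__user_email__' --input '$input' --output '$output'
</command>
```

What I'd need to know is what $__user_email__ (or its correct name) expands to when the user is anonymous.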
I am writing a tool that should be configurable by the server admin. I am
considering adding a configuration file, but where should such a file be
placed? Is the tool-data directory the right place? Is there another
standard way for per-tool configuration?
There is a missing parameter in the unified genotyper config from the iuc repository.
The following line should be added around line 204 (just before the advanced analysis options section):
<param name="sample_ploidy" type="integer" value="2" label="Sample
Ploidy (number of alleles). For tumour, set to 2x the number of expected
subclones" help="-ploidy,--sample_ploidy" />
Without this line, activating the advanced analysis options causes the
job preparation to fail because there is a reference to it in the
gatk2 wrapper used:
version : 8bcc13094767
date: Wed Feb 19 04:39:38 2014 -0500
Geert Vandeweyer, Ph.D.
Department of Medical Genetics
University of Antwerp
Prins Boudewijnlaan 43
Tel: +32 (0)3 275 97 56
In a tool_dependency.xml file I want to install python package lxml in a
virtual environment, as a tool I'm building needs it. The python lxml
package requires the libxml2 tool dependency. I have added a
set_environment_for_install action that refers to the libxml2 repository,
but when python/pip tries to install lxml it fails, apparently because it
can't find the required headers. This appears to be because the
setup_virtualenv action does not include install environment variables.
It seems to me that install environment variables should be sourced for every following action that can do nontrivial things, not just shell_command actions.
Alternatively, am I trying to install lxml the wrong way? Is there a better way? (I'm running on Python 2.6.)
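For reference, here is roughly the relevant part of my tool_dependency.xml (the libxml2 repository name, owner, and the version numbers are from my setup and may differ on your toolshed):

```xml
<package name="lxml" version="3.2.3">
    <install version="1.0">
        <actions>
            <!-- pull in libxml2's install-time environment -->
            <action type="set_environment_for_install">
                <repository name="package_libxml2_2_9" owner="iuc">
                    <package name="libxml2" version="2.9.1" />
                </repository>
            </action>
            <!-- pip install into a virtualenv; this is where the
                 missing-headers failure occurs -->
            <action type="setup_virtualenv">lxml==3.2.3</action>
        </actions>
    </install>
</package>
```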