I have a list of genomic regions with some variants and would like to study
the correlation between these variants and epigenomic marks such as
histone modifications.
From the ENCODE download page, I got some files corresponding to peaks of these
histone modifications and would like to know if there is a way to create a
pipeline in Galaxy to map my variants, depending on genomic regions, to
the information I have from the histone modification peaks.
Can someone point me to a step-by-step guide to get started with Galaxy?
I'm using TORQUE with Galaxy, and we noticed that if a tool is
multithreaded, the number of needed cores is not communicated to pbs,
leading to job crashes if the required resources are not available when
the job is submitted.
Therefore I modified the code slightly, as follows:
    # define PBS job options
    attrs.append( dict( name = pbs.ATTR_N, value = str( "%s_%s_%s" % ( job_wrapper.job_id, job_wrapper.tool.id, job_wrapper.user ) ) ) )
    mt_file = open( 'tool-data/multithreading.csv', 'r' )
    for line in mt_file:
        fields = line.strip().split( '\t' )
        # match the current tool against the multithreading table
        if fields and fields[0] == job_wrapper.tool.id:
            # request the listed number of cores and amount of memory
            attrs.append( dict( name = pbs.ATTR_l, resource = 'nodes', value = '1:ppn=' + fields[1] ) )
            attrs.append( dict( name = pbs.ATTR_l, resource = 'mem', value = fields[2] ) )
    mt_file.close()
    job_attrs = pbs.new_attropl( len( attrs ) + len( pbs_options ) )
The CSV file contains a list of the multithreaded tools, one per line:
<tool id>\t<number of threads>\t<memory needed>\n
And it works fine: the jobs wait for their turn properly, but the
information is duplicated. Perhaps something similar could be included
in Galaxy's original code (if it is not already the case; I may not be
up to date) without duplicating data.
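As a side note, one way to avoid re-reading the file on every job submission would be to load the table once into a dict keyed by tool id. A minimal sketch, assuming the three-column tab-separated layout described above (the function name is mine):

```python
import csv

def load_tool_resources(path):
    """Parse a tab-separated file of <tool id>, <threads>, <memory>
    into a dict keyed by tool id."""
    resources = {}
    with open(path) as handle:
        for row in csv.reader(handle, delimiter='\t'):
            if len(row) == 3:
                tool_id, threads, mem = row
                resources[tool_id] = (int(threads), mem)
    return resources

# Looking up a tool then becomes a single dict access:
# threads, mem = resources[job_wrapper.tool.id]
```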
I hope that helps :)
I'm trying to run Mosaik on our Galaxy instance on Illumina paired reads. However, when I select "paired reads" and Illumina as input options, I can still only select one of the two FASTQ files as input. No second file selector appears as with bwa, bowtie, etc.
Can anybody tell me what is going on - is this a known issue?
Dear Galaxy staff,
I have recently started using your tool and it has been really helpful.
When using Human Genome Variation, aaChanges, I would like to keep some
extra lines in the output file from either of the input files. In the tool
description it says I should be able to keep them:
"...chromosome, start, and end position as well as the SNP. The SNP can be
given using ambiguous-nucleotide symbols or a list of two to four alleles
separated by '/'. *Any other columns in the first input file will not be
used but will be kept for the output*. The second input file contains..."
However, I haven't found a way to actually get them in the output file.
What am I missing/doing incorrectly?
What I've been trying to keep, by the way, are rs IDs or Ensembl gene IDs.
Thank you in advance for your answer.
I filed an enhancement report, since the workflow conditional facility
does not appear to exist in Galaxy:
-------- Original Message --------
Subject: Workflows with conditional statements
Date: Wed, 18 May 2011 10:31:21 +1000
From: Florent Angly <florent.angly(a)gmail.com>
To: galaxy-user(a)lists.bx.psu.edu <galaxy-user(a)lists.bx.psu.edu>
I was wondering if there is a way to put conditional statements in a workflow.
This would be useful, for example, in the case of a workflow that has an
optional advanced option that the user can click. This advanced option
would add some extra steps to the data processing.
Another example of how this could be useful is if inside a workflow, the
data needs to be processed differently based on the results of previous
workflow steps. Say you have a workflow that takes some sequences and
calculates their average length. Using a conditional statement, the
workflow would put the data through a de Bruijn assembler if the reads are
short, but through a traditional overlap-layout-consensus assembler if
the reads are long.
Are conditional statements possible in Galaxy workflows and I just don't
know how to use them?
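In the meantime, one workaround is to do the branching in a small driver script outside Galaxy and launch the appropriate workflow from it. A sketch of the read-length example above (the 100 bp threshold and the strategy names are made up for illustration):

```python
def choose_assembler(reads, threshold=100):
    """Pick an assembly strategy from the mean read length."""
    mean_len = sum(len(r) for r in reads) / float(len(reads))
    if mean_len < threshold:
        return 'de_bruijn'                 # short reads: k-mer graph assembler
    return 'overlap_layout_consensus'      # long reads: OLC assembler

# The driver would then invoke the corresponding Galaxy workflow,
# e.g. via the API scripts shipped in scripts/api/.
```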
I posted to the seqanswers forum, but have not received any feedback. I am working with RNA-seq Illumina data files in Galaxy (http://main.g2.bx.psu.edu/). The two files are 100bp paired-end reads, multiplexed with barcoding to distinguish samples. The barcodes are the first four bases of the sequences in the s_7_1_sequence.txt file.
Would the following Galaxy workflow be correct?
1. Upload both s_7_1_sequence.txt and s_7_2_sequence.txt to Galaxy with the reference genome selected
2. Run NGS: QC and manipulation --> FASTQ Groomer on each file to convert to Sanger FASTQ
3. Run NGS: QC and manipulation --> FASTQ joiner to combine the data from the two files
4. Run FASTX-TOOLKIT FOR FASTQ DATA --> Barcode Splitter to generate separate FASTQ files for each barcode group
5. Run NGS: RNA Analysis --> Tophat to map the reads from each group to the reference genome
The problem I am having is that if I select paired-end for the library in Tophat, it requests two FASTQ files. Would I have to use FASTQ Splitter to separate the joined FASTQ files? If there is a more standard way to handle these types of barcoded files, I would appreciate hearing about this workflow.
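For reference, the core of step 4, grouping reads by a leading 4-base barcode, can be sketched as follows (a toy illustration only, not the FASTX-Toolkit implementation; the record layout as (name, sequence, quality) tuples is an assumption):

```python
def split_by_barcode(records, barcodes, bc_len=4):
    """Group FASTQ records by their leading barcode.

    records  -- iterable of (name, sequence, quality) tuples
    barcodes -- set of expected barcode strings
    Returns a dict mapping each barcode (or 'unmatched') to records,
    with the barcode trimmed off the sequence and quality strings.
    """
    groups = {bc: [] for bc in barcodes}
    groups['unmatched'] = []
    for name, seq, qual in records:
        bc = seq[:bc_len]
        if bc in barcodes:
            groups[bc].append((name, seq[bc_len:], qual[bc_len:]))
        else:
            groups['unmatched'].append((name, seq, qual))
    return groups
```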
Thanks very much in advance,
P.S. Galaxy is an incredibly useful resource. Thanks!
Hi again, Galaxy team.
I'm attempting to use the new workflow API, which I understand is still in
development. I created a test workflow with a single input, and was able to
use 'example_watch_folder.py' to successfully execute it as a history and
get the output to a specified location.
My question is how I can modify the script to accept multiple inputs (i.e.
how do I define which files in the input folder I want to be each input) and
if there's a way to specify runtime parameters. For instance, the workflow I
want to execute has a filter step on a tabular input item as one of the
later steps which needs to be defined at runtime. How would I specify this
in the 'watch_folder.py' parameters? Or is this not possible yet?
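For the multiple-inputs part: example_watch_folder.py builds a JSON payload for POST /api/workflows in which each input step gets its own entry in the ds_map dict, so extending it to several inputs is mostly a matter of adding entries. A sketch of the payload shape (the IDs here are placeholders, real ones come from the API's workflow and library listings, and whether runtime tool parameters can be passed this way is a separate question):

```python
def build_workflow_payload(workflow_id, history_name, inputs):
    """Assemble the JSON payload for POST /api/workflows.

    inputs -- dict mapping input step id -> library dataset id
    """
    return {
        'workflow_id': workflow_id,
        'history': history_name,
        'ds_map': {
            step_id: {'src': 'ld', 'id': dataset_id}
            for step_id, dataset_id in inputs.items()
        },
    }

# Two inputs instead of one is just two entries in ds_map:
payload = build_workflow_payload(
    'f2db41e1fa331b3e',          # placeholder workflow id
    'Test history',
    {'100': 'dataset_a', '101': 'dataset_b'},
)
```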
FYI, I know perl but not python, so ultimately I want to wrap the python
scripts into a larger perl script to execute recursive workflows.
Thanks for all your help. Galaxy is an amazing tool and the workflow API is
a fantastic improvement.
O'Connor Lab, WNPRC
555 Science Dr. Madison WI
When I attempt to run Cufflinks on the .sam output from Bowtie, I get the
following error:

An error occurred running this job: cufflinks v1.0.1
cufflinks -q --no-update-check -I 300000 -F 0.050000 -j 0.050000 -p 8 -b
Error running cufflinks. [Errno 2] No such file or directory:

What can I do to get around this problem and run cufflinks?
My workflow is on http://main.g2.bx.psu.edu and can be found here (I ran it
using a .fastq file):
Thanks in advance for your help!
Lewis-Sigler Institute for Integrative Genomics
Carl Icahn Laboratory
First of all, I would like to thank you guys for developing and maintaining
such a useful tool as Galaxy!
We have our own installation of Galaxy (galaxy-dist version) with
ncbi-blast-plus. Calling blastn results in the following error message:
An error occurred running this job:

Traceback (most recent call last):
  File "/home/galaxy/galaxy-dist/tools/ncbi_blast_plus/hide_stderr.py", line 29, in <module>
    sys.stderr.write("Error invoking command:\n%s\n\n%s\n" % (cmd, err))
NameError: name 'cmd' is not defined
Could you please explain what this error means?
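For context, a NameError like this usually means a variable was referenced in the error-reporting path before it was ever assigned. A minimal reproduction of the pattern (not the actual hide_stderr.py source; the usual fix is to assign cmd before entering the try block):

```python
import sys

def run_command(make_cmd):
    """Reproduces the failure pattern: cmd is bound inside the try,
    so if make_cmd() raises, the except block references a name that
    was never assigned and triggers a NameError of its own
    (UnboundLocalError, a NameError subclass)."""
    try:
        cmd = make_cmd()          # if this raises, cmd never exists
        return 'ran: ' + cmd
    except Exception as err:
        # BUG: cmd may be undefined here; assigning it *before*
        # the try (e.g. cmd = None) is the usual fix
        sys.stderr.write("Error invoking command:\n%s\n\n%s\n" % (cmd, err))
```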
The link to the shared history is:
Thank you in advance!
I have a question for you guys regarding quality filtering.
I have a data set of double MID tagged 454 amplicons, from which I wish to
select high quality sequences above Q20.
The 454 quality filtering system seems to work differently from that given
for Illumina sequencing, i.e. 454 filtering extracts high-quality segments,
while the Illumina (FASTQ) tools can select high-quality full reads based on
certain criteria.
OK, so I know that the total length of my amplicon, including primers and
barcodes is around 260bp. If I then set the 454 quality filtering tool to
extract contiguous high quality sequence of >260, it gives me back around
45% of my raw data as hitting this criterion, i.e. all 260bp are above Q20. I
don't necessarily need this high stringency as most bases may not be
But if I convert my 454 data to FASTQ format and then run the Illumina
filtering system which also allows me to set the number of bases allowed to
deviate from the Q20 criteria, I get back over 90% of my data (allowing 10bp
to deviate from Q20).
I then need to go ahead and convert back to 454 format.
Can you tell me if this is OK?
Will I lose/confuse information somewhere along these conversions?
It seems that if I do this, my barcodes are removed, as amplicons do not
sort properly when I parse them through my barcode filtering program.
Does anyone know of a program to filter 454 data based on average sequence
quality score which doesn't involve Linux and the Roche off-instrument
software? (I have no experience with Linux!)
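For what it's worth, once the data is in Sanger FASTQ, filtering on average quality is only a few lines of Python. A sketch assuming Phred+33 encoded quality strings and (name, sequence, quality) record tuples (Biopython or Galaxy's own FASTQ tools would be more robust):

```python
def mean_phred(qual_string, offset=33):
    """Average Phred score of a Sanger-encoded quality string."""
    scores = [ord(c) - offset for c in qual_string]
    return sum(scores) / float(len(scores))

def filter_fastq(records, min_mean_q=20):
    """Keep only (name, seq, qual) records whose mean quality
    meets the threshold."""
    return [r for r in records if mean_phred(r[2]) >= min_mean_q]
```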
Department of Biology,
Halifax, NS, B3H 4J1