I am attempting to use Galaxy to calculate the mean sequence read
length and identify the range of read lengths for my 454 data. The
data has already been organized and sorted by species. The format of
the data is as follows:
etc...for each species
I have attempted to use the "Summary Statistics" tool; however, it appears
to work only on numerical data and not sequence data. Is this correct, or is
there another way to get these statistics?
User name: dac330
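For what it's worth, these statistics are easy to compute outside Galaxy as well. A minimal pure-Python sketch, assuming the reads are in FASTA format (the filename in the usage note is a placeholder):

```python
def read_lengths(fasta_lines):
    """Yield the length of each sequence in a stream of FASTA lines."""
    length, in_record = 0, False
    for line in fasta_lines:
        line = line.strip()
        if line.startswith(">"):
            if in_record:
                yield length          # finish the previous record
            in_record, length = True, 0
        elif line:
            length += len(line)       # sequence may span several lines
    if in_record:
        yield length                  # last record

def length_stats(fasta_lines):
    """Return (mean length, min length, max length) over all reads."""
    lengths = list(read_lengths(fasta_lines))
    return sum(lengths) / len(lengths), min(lengths), max(lengths)
```

Usage on a real file would be `with open("reads.fasta") as fh: mean, lo, hi = length_stats(fh)`.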
Dear Galaxy users,
We have done deep sequencing on some known genomic loci using
Hiseq2000. I have already mapped the reads to the reference sequences by
using Galaxy. In the next step, I want to find SNPs and calculate the SNP
percentage within the reads. There are 500,000 to 1,000,000 reads per
biological sample. Can I do this with Galaxy? If not, are there other programs
available on Windows? Considering that I am not very familiar with
University of Florida
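The "SNP percentage" described above is just the non-reference fraction of the bases covering each position, which can be derived from per-position base counts (obtainable outside Galaxy with, e.g., `samtools mpileup`). A pure-Python sketch; the positions, counts, and the 20% threshold are made-up illustrative values:

```python
def snp_percentage(ref_base, base_counts):
    """Fraction of reads at a position whose base differs from the reference."""
    total = sum(base_counts.values())
    if total == 0:
        return 0.0
    return (total - base_counts.get(ref_base, 0)) / total

def call_snps(pileup, min_fraction=0.2):
    """Return (position, fraction) for positions above the threshold."""
    calls = []
    for pos, ref_base, base_counts in pileup:
        frac = snp_percentage(ref_base, base_counts)
        if frac >= min_fraction:
            calls.append((pos, frac))
    return calls

# Example pileup: (position, reference base, counts of observed bases)
example = [
    (101, "A", {"A": 95, "G": 5}),   # 5% non-reference -> below threshold
    (102, "C", {"C": 40, "T": 60}),  # 60% non-reference -> candidate SNP
]
```

With 500,000 to 1,000,000 reads per sample, the per-position counting itself is the heavy step; the fraction calculation is trivial once counts exist.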
I would like to know whether I can post the supplementary data for an article on the Galaxy server.
I have .docx and .xlsx files. I am planning to upload these files to a folder, share the link at first, and publish it later. When I tried uploading an .xlsx file, it was automatically converted to .xml.
Please let me know if it is possible to provide supplementary information through Galaxy that includes Word documents and Excel sheets.
Thank you so much for your help
Dear Galaxy managers,
I would like to ask about the queuing rules for workflows before they are
processed on the server.
I use my customized Galaxy workflow, which contains 22 consecutive steps
(basically filtering, trimming and format conversion of NGS data). I guess the
server has been very busy during the last weeks/months (?), so my job was
waiting about 24 hours in the queue (which would not be a problem), and then
the first step of the workflow was processed, but the following 21 are
again/still waiting (already for another couple of hours...). This makes me
wonder about the queuing rules, because I expected the whole workflow to be
queued as one job. So my question is: once submitted, is the whole workflow
listed in the queue, or does each step queue only after the previous step has
finished (which would mean waiting through the whole queue for every step of
the workflow)?
I routinely used these workflows before (months ago) without any problems.
I tried to search for a similar question in the archive before I posted this one.
Thanks a lot for your answer,
Zuzana Musilova, PhD.
University of Basel
Vesalgasse 1, CH-4051 Basel
Switzerland - Europe
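The behaviour described above (step 1 runs, the later steps keep waiting) is what one would observe if each workflow step is queued as its own job that only becomes runnable once its upstream outputs exist. A toy model of that scheme, as an illustration only, not Galaxy's actual scheduler code:

```python
from collections import deque

def run_workflow(steps):
    """steps: dict mapping step name -> list of prerequisite step names.
    Returns the order in which the steps get executed."""
    done, order = set(), []
    pending = deque(steps)
    while pending:
        step = pending.popleft()
        if all(dep in done for dep in steps[step]):
            done.add(step)        # prerequisites met: "run" the step
            order.append(step)
        else:
            pending.append(step)  # still waiting on an upstream output

    return order

# A 3-step chain like "filter -> trim -> convert":
chain = {"filter": [], "trim": ["filter"], "convert": ["trim"]}
```

Under this model a 22-step linear workflow re-enters the queue 22 times, which matches the observed waiting between steps.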
I have a problem in Galaxy getting the host/domain name on two different pages.
The first is during tool installation from the Tool Shed, where I get the error below:
The requested URL /admin_toolshed/prepare_for_install was not found on this server.
The second is in the saved histories. When I click the buttons in the saved histories, I get a similar error:
The requested URL /history/list was not found on this server.
I haven't seen these errors on any other pages yet.
My installation runs with LDAP authentication behind a proxy, and I could not find a place to set the domain or host name for these two pages so that they can actually find the requested URLs.
The paster.log file doesn't report any errors when I install a tool or open another history.
Thanks for your help,
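Errors like these often come from a front-end proxy that forwards only specific paths to the Galaxy application, so URLs such as /admin_toolshed/prepare_for_install or /history/list never reach it. Galaxy's Apache proxy setups of that era used a catch-all mod_rewrite rule; a hypothetical fragment (the port and the absence of a URL prefix are assumptions about the setup):

```apache
# Forward every path to the Galaxy paste server so no URL is missed.
# Requires mod_rewrite and mod_proxy; adjust host/port to your install.
RewriteEngine on
RewriteRule ^(.*) http://localhost:8080$1 [P]
```

If only selected paths are rewritten instead of a catch-all, every newly used controller path has to be added to the proxy config by hand, which would produce exactly the two missing-URL errors described.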
I am interested in Galaxy and want to set up a local Galaxy instance. Could you tell me Galaxy's page views (PV) per day?
Email: liubiao(a)genomics.cn | M. +86 186 7677 1486
China National GeneBank http://nationalgenebank.org/en/index.html
Building No.11 | Beishan Industrial Zone | Yantian | Shenzhen China
Dear Galaxy Gurus,
I am a student and still new to Galaxy. I am encountering an error when
trying to run the Picard Paired Read Mate Fixer with default settings on
Galaxy. The job runs for only a few seconds before failing and displaying
the error message below.
## exit code=1; stdout=
[Mon Aug 19 16:46:15 EDT 2013] net.sf.picard.sam.FixMateInformation
VALIDATION_STRINGENCY=LENIENT VERBOSITY=INFO QUIET=false
COMPRESSION_LEVEL=5 MAX_RECORDS_IN_RAM=500000 CREATE_INDEX=false
[Mon Aug 19 16:46:15 EDT 2013] Executing as root@biolinux-pbcn6 on Linux
3.2.0-43-generic amd64; OpenJDK 64-Bit Server VM 1.6.0_27-b27; Picard
[Mon Aug 19 16:46:15 EDT 2013] net.sf.picard.sam.FixMateInformation done.
Elapsed time: 0.00 minutes.
To get help, see http://picard.sourceforge.net/index.shtml#GettingHelp
Exception in thread "main" java.lang.NoClassDefFoundError:
Caused by: java.lang.ClassNotFoundException:
at java.security.AccessController.doPrivileged(Native Method)
... 4 more
For input I am using data that I have successfully mapped with BWA for
Illumina. I currently have Python version 2.7.3 and Java version 1.6.0_27,
and am working on a machine running Ubuntu 12.04.2 LTS.
Any help is greatly appreciated!
I want to know how to split the tool panel into several groups, just like on
the public Galaxy, which classifies the tools into an NGS Tools box and
RECENTS. I cannot find the tag that goes above the <section> tag in the wiki
documentation. Can anyone tell me how to configure the tool grouping?
Thank you for all your help
The University of HongKong
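The grouping on the public server is configured in the tool panel configuration file (tool_conf.xml in a local install): <section> elements define the collapsible tool groups, and <label> elements placed between them create the bold headings. A minimal sketch, where the ids, names, and tool file paths are placeholders to adapt to your instance:

```xml
<toolbox>
  <label id="ngs_heading" text="NGS TOOLS" />
  <section id="ngs_mapping" name="NGS: Mapping">
    <tool file="sr_mapping/bwa_wrapper.xml" />
  </section>
  <section id="ngs_qc" name="NGS: QC and manipulation">
    <tool file="fastq/fastq_trimmer.xml" />
  </section>
</toolbox>
```

After editing the file, the Galaxy server needs to be restarted for the new panel layout to appear.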
I have run several analyses with TopHat 2 on my local instance of Galaxy, and I get this error for all of them:
segment-based junction search failed with err = 1 or -9
Here is an example of full error report:
Error in tophat:
[2013-08-23 11:56:58] Beginning TopHat run (v2.0.6)
[2013-08-23 11:56:58] Checking for Bowtie
Bowtie version: 188.8.131.52
[2013-08-23 11:56:58] Checking for Samtools
Samtools version: 0.1.18.0
[2013-08-23 11:56:58] Checking for Bowtie index files
[2013-08-23 11:56:58] Checking for reference FASTA file
[2013-08-23 11:56:58] Generating SAM header for /usr/local/data/bowtie2/hg19/hg19
quality scale: phred33 (default)
[2013-08-23 11:58:04] Preparing reads
left reads: min. length=50, max. length=50, 145339247 kept reads (34946 discarded)
right reads: min. length=50, max. length=50, 145340153 kept reads (34040 discarded)
[2013-08-23 14:16:21] Mapping left_kept_reads to genome hg19 with Bowtie2
[2013-08-24 01:04:37] Mapping left_kept_reads_seg1 to genome hg19 with Bowtie2 (1/2)
[2013-08-24 03:38:22] Mapping left_kept_reads_seg2 to genome hg19 with Bowtie2 (2/2)
[2013-08-24 05:29:58] Mapping right_kept_reads to genome hg19 with Bowtie2
[2013-08-24 19:50:22] Mapping right_kept_reads_seg1 to genome hg19 with Bowtie2 (1/2)
[2013-08-24 22:36:38] Mapping right_kept_reads_seg2 to genome hg19 with Bowtie2 (2/2)
[2013-08-25 01:40:37] Searching for junctions via segment mapping
Coverage-search algorithm is turned on, making this step very slow
Please try running TopHat again with the option (--no-coverage-search) if this step takes too much time or memory.
Error: segment-based junction search failed with err =-9
Collecting potential splice sites in islands
cp: cannot stat `/home/galaxy/galaxy-dist/database/job_working_directory/000/515/tophat_out/deletions.bed': No such file or directory
cp: cannot stat `/home/galaxy/galaxy-dist/database/job_working_directory/000/515/tophat_out/insertions.bed': No such file or directory
I did some research on the internet and it seems to be a memory problem to me. Is there any solution other than rerunning these jobs on a more powerful machine?
And why did Bowtie/TopHat discard different numbers of reads from the left and right files? What will the impact be? Does it mean that even if the paired-end inputs don't match exactly, it is still possible to run the job?
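On the differing read counts: the "Preparing reads" step in the log above filters each FASTQ file independently (e.g. dropping low-quality reads), so the left and right files keep slightly different sets of reads. A quick pure-Python sketch for finding which read IDs survive in only one file of the pair; opening the (possibly gzipped) files is left to the caller, and the /1-/2 suffix handling is an assumption about old-style read naming:

```python
def strip_pair_suffix(name):
    """Drop an old-style /1 or /2 mate suffix, if present."""
    return name[:-2] if name.endswith(("/1", "/2")) else name

def fastq_ids(lines):
    """Yield the read ID (first word of each header line, without '@')."""
    for i, line in enumerate(lines):
        if i % 4 == 0:  # every FASTQ record is 4 lines; line 0 is the header
            yield strip_pair_suffix(line.strip().split()[0].lstrip("@"))

def unpaired_ids(left_lines, right_lines):
    """Return (ids only in the left file, ids only in the right file)."""
    left, right = set(fastq_ids(left_lines)), set(fastq_ids(right_lines))
    return left - right, right - left
```

For 145 million reads the sets would be large but still feasible on a machine with enough RAM; the point is only to quantify how asymmetric the two filtered files are.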