Hello,
I have run several analyses with TopHat 2 on my local
instance of Galaxy, and all of them fail with this error:
segment-based junction search failed with err = 1 or -9
Here is an example of a full error report:
Error in tophat:
[2013-08-23 11:56:58] Beginning TopHat run (v2.0.6)
-----------------------------------------------
[2013-08-23 11:56:58] Checking for Bowtie
Bowtie version: 2.0.2.0
[2013-08-23 11:56:58] Checking for Samtools
Samtools version: 0.1.18.0
[2013-08-23 11:56:58] Checking for Bowtie index files
[2013-08-23 11:56:58] Checking for reference FASTA file
[2013-08-23 11:56:58] Generating SAM header for /usr/local/data/bowtie2/hg19/hg19
format: fastq
quality scale: phred33 (default)
[2013-08-23 11:58:04] Preparing reads
left reads: min. length=50, max. length=50, 145339247 kept reads (34946 discarded)
right reads: min. length=50, max. length=50, 145340153 kept reads (34040 discarded)
[2013-08-23 14:16:21] Mapping left_kept_reads to genome hg19 with Bowtie2
[2013-08-24 01:04:37] Mapping left_kept_reads_seg1 to genome hg19 with Bowtie2 (1/2)
[2013-08-24 03:38:22] Mapping left_kept_reads_seg2 to genome hg19 with Bowtie2 (2/2)
[2013-08-24 05:29:58] Mapping right_kept_reads to genome hg19 with Bowtie2
[2013-08-24 19:50:22] Mapping right_kept_reads_seg1 to genome hg19 with Bowtie2 (1/2)
[2013-08-24 22:36:38] Mapping right_kept_reads_seg2 to genome hg19 with Bowtie2 (2/2)
[2013-08-25 01:40:37] Searching for junctions via segment mapping
Coverage-search algorithm is turned on, making this step very slow
Please try running TopHat again with the option (--no-coverage-search) if this step takes too much time or memory.
[FAILED]
Error: segment-based junction search failed with err =-9
Collecting potential splice sites in islands
cp: cannot stat `/home/galaxy/galaxy-dist/database/job_working_directory/000/515/tophat_out/deletions.bed': No such file or directory
cp: cannot stat `/home/galaxy/galaxy-dist/database/job_working_directory/000/515/tophat_out/insertions.bed': No such file or directory
I did some research on the internet, and it looks like a
memory problem to me. Is there any solution other than rerunning
these jobs on a more powerful machine?
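For reference, here is roughly how I would rerun one of these samples on the command line with coverage search disabled, as the log itself suggests (the thread count and FASTQ file names below are just placeholders for my setup; the index path is the one from the log):

    tophat2 -p 8 --no-coverage-search -o tophat_out \
        /usr/local/data/bowtie2/hg19/hg19 \
        sample_R1.fastq sample_R2.fastq

I would prefer to keep these runs inside Galaxy if possible, though, so any pointers on how to handle this there would be appreciated.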