galaxy-commits
December 2013: 1 participant, 207 discussions
commit/galaxy-central: jmchilton: Re-tweak config.py imports for legacy installs.
by commits-noreply@bitbucket.org 17 Dec '13
1 new commit in galaxy-central:
https://bitbucket.org/galaxy/galaxy-central/commits/a2bad69f03f4/
Changeset: a2bad69f03f4
User: jmchilton
Date: 2013-12-17 17:10:41
Summary: Re-tweak config.py imports for legacy installs.
Affected #: 1 file
diff -r 374783ca6a480fbb571bfe9eb46052667b9d0479 -r a2bad69f03f4786309ba80ff305ebdd7b68f93fa lib/galaxy/config.py
--- a/lib/galaxy/config.py
+++ b/lib/galaxy/config.py
@@ -1,6 +1,8 @@
"""
Universe configuration builder.
"""
+# absolute_import needed for tool_shed package.
+from __future__ import absolute_import
import sys, os, tempfile, re
import logging, logging.config
Repository URL: https://bitbucket.org/galaxy/galaxy-central/
--
This is a commit notification from bitbucket.org. You are receiving
this because you have the service enabled, addressing the recipient of
this email.
9 new commits in galaxy-central:
https://bitbucket.org/galaxy/galaxy-central/commits/0084ac72aa7b/
Changeset: 0084ac72aa7b
User: jmchilton
Date: 2013-11-13 05:02:58
Summary: FASTQ Opt: In convert_read_to_format, adjust when new_encoding logic is calculated.
Doesn't change behavior or optimize anything; this is just done to simplify subsequent commits.
Baseline:
I took the first 100 megabytes of a 26 gigabyte example of a FASTQ file, filtered with the FASTQ filter tool, that I found in an "Important Galaxy User"'s history on main. I ran the same command line on my dev box against the start of this file, using Python's -m profiling to record function times and total runtime, to serve as a baseline as I optimized the FASTQ filter code. Here is the start of the output:
Kept 135848 of 250000 reads (54.34%).
200015991 function calls (199765991 primitive calls) in 136.934 seconds
Extrapolating this out, that 26 gigabyte file would take roughly 10 hours to process on my laptop - slightly longer than what it took on main - indicating to me this is likely not disk bound, since my SSD would probably outperform main's storage.
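The baseline numbers above come from Python's built-in function profiler. A minimal sketch of collecting the same kind of "N function calls in X seconds" report in-process (the filter function here is a stand-in, not the actual tool code):

```python
import cProfile
import io
import pstats

def filter_reads(reads):
    # Stand-in for the FASTQ filter loop being profiled.
    return [r for r in reads if len(r) > 3]

profiler = cProfile.Profile()
profiler.enable()
kept = filter_reads(["ACGT", "AC", "ACGTA"])
profiler.disable()

# Summarize call counts and cumulative time, like the listings in this thread.
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
report = stream.getvalue()
```

The `(ncalls|tottime|percall|cumtime|percall|filename:lineno(function))` rows quoted throughout these commit messages are exactly this `pstats` column layout.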
Affected #: 1 file
diff -r d61de8f362929483c2a6a6cb8e7b7be77fa5cfa4 -r 0084ac72aa7bdd253454027edf1bcbb3cdeb496d lib/galaxy_utils/sequence/fastq.py
--- a/lib/galaxy_utils/sequence/fastq.py
+++ b/lib/galaxy_utils/sequence/fastq.py
@@ -89,7 +89,6 @@
score_list = self.convert_score_solexa_to_phred( self.get_decimal_quality_scores() )
else:
score_list = self.get_decimal_quality_scores()
- new_read.quality = "%s " % " ".join( map( str, new_class.restrict_scores_to_valid_range( score_list ) ) ) #need trailing space to be valid decimal fastq
if force_quality_encoding is None:
if self.is_ascii_encoded():
new_encoding = 'ascii'
@@ -97,6 +96,7 @@
new_encoding = 'decimal'
else:
new_encoding = force_quality_encoding
+ new_read.quality = "%s " % " ".join( map( str, new_class.restrict_scores_to_valid_range( score_list ) ) ) #need trailing space to be valid decimal fastq
if new_encoding == 'ascii':
new_read.quality = "".join( new_read.get_ascii_quality_scores() )
return new_read
https://bitbucket.org/galaxy/galaxy-central/commits/046c608abfa3/
Changeset: 046c608abfa3
User: jmchilton
Date: 2013-11-13 05:02:58
Summary: FASTQ Opt: Restructure fastq_filter to only call execfile once and reuse the same function.
After this optimization:
Kept 135848 of 250000 reads (54.34%).
200015991 function calls (199765991 primitive calls) in 85.375 seconds
Down from 136.934 seconds on previous changeset. Main Difference:
(ncalls|tottime|percall|cumtime|percall|filename:lineno(function))
250000 45.130 0.000 67.811 0.000 {execfile}
- became -
1 0.000 0.000 0.000 0.000 {execfile}
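The restructuring amounts to evaluating the generated filter script once and reusing the function it defines, instead of re-executing the script for every read. A rough sketch of the before/after shapes, assuming a hypothetical generated script (`execfile` is Python 2 only, so this sketch uses `exec`):

```python
# Stand-in for the tool-generated filter script normally read from script_filename.
script = """
def fastq_read_pass_filter(fastq_read):
    return len(fastq_read) >= 4
"""

reads = ["ACGT", "AC", "ACGTA"]

# Old shape: execute the whole script once per read, in a fresh namespace.
kept_old = []
for read in reads:
    local = {"fastq_read": read, "ret_val": False}
    exec(script + "\nret_val = fastq_read_pass_filter(fastq_read)\n", {}, local)
    if local["ret_val"]:
        kept_old.append(read)

# New shape: execute the script once, then just call the defined function per read.
namespace = {}
exec(script, namespace)
kept_new = [r for r in reads if namespace["fastq_read_pass_filter"](r)]
```

Both shapes keep the same reads; the second avoids re-parsing and re-executing the script body 250,000 times, which is where the 45 seconds of `{execfile}` time above went.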
Affected #: 2 files
diff -r 0084ac72aa7bdd253454027edf1bcbb3cdeb496d -r 046c608abfa31091f3e030db3979f848666fa602 tools/fastq/fastq_filter.py
--- a/tools/fastq/fastq_filter.py
+++ b/tools/fastq/fastq_filter.py
@@ -18,10 +18,10 @@
i = None
reads_kept = 0
+ execfile(script_filename, globals())
for i, fastq_read in enumerate( fastqReader( open( input_filename ), format = input_type ) ):
- local = {'fastq_read':fastq_read, 'ret_val':False}
- execfile( script_filename, {}, local )
- if local['ret_val']:
+ ret_val = fastq_read_pass_filter( fastq_read ) ## fastq_read_pass_filter defined in script_filename
+ if ret_val:
out.write( fastq_read )
reads_kept += 1
out.close()
diff -r 0084ac72aa7bdd253454027edf1bcbb3cdeb496d -r 046c608abfa31091f3e030db3979f848666fa602 tools/fastq/fastq_filter.xml
--- a/tools/fastq/fastq_filter.xml
+++ b/tools/fastq/fastq_filter.xml
@@ -96,7 +96,6 @@
return False
#end for
return True
-ret_val = fastq_read_pass_filter( fastq_read )
</configfile></configfiles><outputs>
https://bitbucket.org/galaxy/galaxy-central/commits/04c47872b79e/
Changeset: 04c47872b79e
User: jmchilton
Date: 2013-11-13 05:02:58
Summary: FASTQ Opt: Utilize optimized in place alternatives to restrict_scores_to_valid_range.
Separate out the ascii vs. decimal encoding branches of convert_read_to_format and use these new transform_ alternatives that operate "in place" (they don't produce new lists or allocate memory). The ascii version transforms to ascii in place instead of requiring another call and creating another array.
Runtime Result:
Kept 135848 of 250000 reads (54.34%).
117148711 function calls (116898711 primitive calls) in 48.272 seconds
Down from 85.375 seconds on previous changeset. Main Difference:
(ncalls|tottime|percall|cumtime|percall|filename:lineno(function))
135848 1.185 0.000 59.856 0.000 fastq.py:71(convert_read_to_format)
-became-
135848 0.502 0.000 22.979 0.000 fastq.py:99(convert_read_to_format)
About a third of the time is spent converting reads to the correct format. This is a variant of the core optimization I made when optimizing the FASTQ groomer for MSI - it likewise has a substantial impact on the performance of that tool.
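The in-place idea can be illustrated in isolation: the original helper clamps scores into a new list, while the optimized variant mutates the existing list and stringifies in the same pass. A simplified sketch (the 0..93 bounds are assumed Sanger-style values, not taken from the diff):

```python
QUALITY_MIN, QUALITY_MAX = 0, 93  # assumed bounds for illustration

def restrict_scores_to_valid_range(scores):
    # Original style: allocate a brand-new list of clamped scores.
    return [max(min(s, QUALITY_MAX), QUALITY_MIN) for s in scores]

def transform_scores_to_valid_range(scores):
    # Optimized style: clamp and stringify in place; no new list is created.
    for i in range(len(scores)):
        s = scores[i]
        if s > QUALITY_MAX:
            s = QUALITY_MAX
        elif s < QUALITY_MIN:
            s = QUALITY_MIN
        scores[i] = str(s)

scores = [-5, 10, 200]
transform_scores_to_valid_range(scores)
```

Per read this saves one list allocation plus a separate `map(str, ...)` pass, which adds up at hundreds of thousands of reads.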
Affected #: 1 file
diff -r 046c608abfa31091f3e030db3979f848666fa602 -r 04c47872b79eea8dd0b745aed3e7324513653d78 lib/galaxy_utils/sequence/fastq.py
--- a/lib/galaxy_utils/sequence/fastq.py
+++ b/lib/galaxy_utils/sequence/fastq.py
@@ -35,6 +35,34 @@
return max( min( score, cls.quality_max ), cls.quality_min )
return map( restrict_score, decimal_score_list )
@classmethod
+ def transform_scores_to_valid_range( cls, decimal_score_list):
+ cls_quality_max = cls.quality_max
+ cls_quality_min = cls.quality_min
+ for i in range( len( decimal_score_list ) ):
+ score = decimal_score_list[i]
+ if(score > cls_quality_max):
+ transformed_score = cls_quality_max
+ elif( score < cls_quality_min ):
+ transformed_score = cls_quality_min
+ else:
+ transformed_score = score
+ decimal_score_list[i] = str(transformed_score)
+ @classmethod
+ def transform_scores_to_valid_range_ascii( cls, decimal_score_list ):
+ cls_quality_max = cls.quality_max
+ cls_quality_min = cls.quality_min
+ to_quality = cls.ascii_min - cls.quality_min
+ for i in range( len( decimal_score_list ) ):
+ score = decimal_score_list[i]
+ if(score > cls_quality_max):
+ transformed_score = cls_quality_max
+ elif( score < cls_quality_min ):
+ transformed_score = cls_quality_min
+ else:
+ transformed_score = score
+ transformed_score = chr(transformed_score + to_quality)
+ decimal_score_list[i] = transformed_score
+ @classmethod
def convert_base_to_color_space( cls, sequence ):
return cls.color_space_converter.to_color_space( sequence )
@classmethod
@@ -96,9 +124,14 @@
new_encoding = 'decimal'
else:
new_encoding = force_quality_encoding
- new_read.quality = "%s " % " ".join( map( str, new_class.restrict_scores_to_valid_range( score_list ) ) ) #need trailing space to be valid decimal fastq
if new_encoding == 'ascii':
- new_read.quality = "".join( new_read.get_ascii_quality_scores() )
+ new_class.transform_scores_to_valid_range_ascii( score_list )
+ restricted_scores = map( str, score_list )
+ new_read.quality = "".join( restricted_scores )
+ else: # decimal
+ new_class.transform_scores_to_valid_range( score_list )
+ restricted_scores = map( str, score_list )
+ new_read.quality = "%s " % " ".join( restricted_scores ) #need trailing space to be valid decimal fastq
return new_read
def get_sequence( self ):
return self.sequence
https://bitbucket.org/galaxy/galaxy-central/commits/f54589589c1b/
Changeset: f54589589c1b
User: jmchilton
Date: 2013-11-13 05:02:58
Summary: FASTQ Opt: No need to map(str) over chr and join, can just join chr's.
Prevents an extra array creation and a chr->str map per base. Shaves 10% off the remaining run time of the filter on SSD.
Kept 135848 of 250000 reads (54.34%).
117012863 function calls (116762863 primitive calls) in 46.897 seconds
Down from 48.272 seconds on previous changeset. Main Difference:
(ncalls|tottime|percall|cumtime|percall|filename:lineno(function))
135848 0.502 0.000 22.979 0.000 fastq.py:99(convert_read_to_format)
-became-
135848 0.448 0.000 21.159 0.000 fastq.py:99(convert_read_to_format)
Not huge, but a consistent improvement.
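The point of this changeset in miniature: `chr` already yields one-character strings, so mapping them through `str` before joining is a no-op that still costs a list and a function call per base. A small sketch:

```python
scores = [33, 34, 35]  # already-clamped decimal scores, for illustration

# Before: build a chr list, map each element through str, then join.
quality_old = "".join(map(str, [chr(s) for s in scores]))

# After: chr already returns a 1-character string, so join directly.
quality_new = "".join(chr(s) for s in scores)
```

Both produce identical quality strings; the second form skips one intermediate list and len(scores) redundant `str` calls per read.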
Affected #: 1 file
diff -r 04c47872b79eea8dd0b745aed3e7324513653d78 -r f54589589c1b3882127aa96cc1d62b7eba9bef61 lib/galaxy_utils/sequence/fastq.py
--- a/lib/galaxy_utils/sequence/fastq.py
+++ b/lib/galaxy_utils/sequence/fastq.py
@@ -1,4 +1,4 @@
-#Dan Blankenberg
+##Dan Blankenberg
import math
import string
import transform
@@ -126,12 +126,10 @@
new_encoding = force_quality_encoding
if new_encoding == 'ascii':
new_class.transform_scores_to_valid_range_ascii( score_list )
- restricted_scores = map( str, score_list )
- new_read.quality = "".join( restricted_scores )
+ new_read.quality = "".join( score_list )
else: # decimal
new_class.transform_scores_to_valid_range( score_list )
- restricted_scores = map( str, score_list )
- new_read.quality = "%s " % " ".join( restricted_scores ) #need trailing space to be valid decimal fastq
+ new_read.quality = "%s " % " ".join( score_list ) #need trailing space to be valid decimal fastq
return new_read
def get_sequence( self ):
return self.sequence
https://bitbucket.org/galaxy/galaxy-central/commits/6d723e3bf2eb/
Changeset: 6d723e3bf2eb
User: jmchilton
Date: 2013-11-13 05:02:58
Summary: FASTQ Opt: Precompute the ascii -> decimal difference.
Kept 135848 of 250000 reads (54.34%).
117012863 function calls (116762863 primitive calls) in 39.416 seconds
Down from 46.897 seconds on previous changeset. Main Difference:
(ncalls|tottime|percall|cumtime|percall|filename:lineno(function))
385848 18.215 0.000 22.866 0.000 fastq.py:90(get_decimal_quality_scores)
-became-
385848 11.415 0.000 15.546 0.000 fastq.py:91(get_decimal_quality_scores)
Looks like a math optimization, but the actual win comes from reading a local variable instead of dereferencing an object attribute twice per base.
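That local-variable effect can be shown with a stripped-down read class (hypothetical, not the actual fastq.py class): both methods compute the same offsets, but the fast one hoists the attribute lookups out of the comprehension.

```python
class Read:
    ascii_min = 33
    quality_min = 0

    def __init__(self, quality):
        self.quality = quality

    def decimal_scores_slow(self):
        # Two attribute dereferences (self.ascii_min, self.quality_min) per character.
        return [ord(c) - self.ascii_min + self.quality_min for c in self.quality]

    def decimal_scores_fast(self):
        # Precompute the offset once; the loop body reads one local name instead.
        to_quality = self.quality_min - self.ascii_min
        return [ord(c) + to_quality for c in self.quality]

read = Read("!I5")
```

In CPython, a local-name load is a cheap array index while an attribute load walks the instance and class dictionaries, so hoisting pays off when the loop body runs once per base of every read.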
Affected #: 1 file
diff -r f54589589c1b3882127aa96cc1d62b7eba9bef61 -r 6d723e3bf2ebddf0730e7b38761ca2b511d129c3 lib/galaxy_utils/sequence/fastq.py
--- a/lib/galaxy_utils/sequence/fastq.py
+++ b/lib/galaxy_utils/sequence/fastq.py
@@ -82,14 +82,16 @@
quality = self.quality.rstrip() #decimal scores should have a trailing space
if quality:
try:
- return [ chr( int( val ) + self.ascii_min - self.quality_min ) for val in quality.split() ]
+ to_quality = self.ascii_min - self.quality_min
+ return [ chr( int( val ) + to_quality ) for val in quality.split() ]
except ValueError, e:
raise ValueError( 'Error Parsing quality String. ASCII quality strings cannot contain spaces (%s): %s' % ( self.quality, e ) )
else:
return []
def get_decimal_quality_scores( self ):
if self.is_ascii_encoded():
- return [ ord( val ) - self.ascii_min + self.quality_min for val in self.quality ]
+ to_quality = self.quality_min - self.ascii_min
+ return [ ord( val ) + to_quality for val in self.quality ]
else:
quality = self.quality.rstrip() #decimal scores should have a trailing space
if quality:
https://bitbucket.org/galaxy/galaxy-central/commits/6265e769718e/
Changeset: 6265e769718e
User: jmchilton
Date: 2013-11-13 05:02:58
Summary: FASTQ Opt: Do not generate arrays just to check length.
... just compute what the length would be.
Kept 135848 of 250000 reads (54.34%).
117012863 function calls (116762863 primitive calls) in 38.412 seconds
Down from 39.416 seconds on previous changeset. Main Difference:
(ncalls|tottime|percall|cumtime|percall|filename:lineno(function))
500000 0.518 0.000 1.889 0.000 fastq.py:171(insufficient_quality_length)
250000 0.322 0.000 1.103 0.000 fastq.py:173(assert_sequence_quality_lengths)
- became -
500000 0.306 0.000 0.932 0.000 fastq.py:187(insufficient_quality_length)
250000 0.164 0.000 0.478 0.000 fastq.py:189(assert_sequence_quality_lengths)
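The essence of `get_ascii_quality_scores_len` is that the score count is recoverable from the raw quality string without building the per-score list at all. A simplified sketch (free function rather than a method, for brevity):

```python
def ascii_quality_len(quality, ascii_encoded):
    # The length is available without materializing the quality-score array:
    # ascii encoding is one score per character; decimal encoding is one
    # score per whitespace-separated token.
    if ascii_encoded:
        return len(quality)
    return len(quality.rstrip().split())

quality_ascii = "IIII"          # ascii-encoded: 4 scores
quality_decimal = "40 40 40 40 "  # decimal-encoded with trailing space: 4 scores
```

Since `insufficient_quality_length` and `assert_sequence_quality_lengths` run for every read, replacing list construction with a length computation roughly halves their cost in the profile above.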
Affected #: 1 file
diff -r 6d723e3bf2ebddf0730e7b38761ca2b511d129c3 -r 6265e769718e2487b4089c181948769cf67869ce lib/galaxy_utils/sequence/fastq.py
--- a/lib/galaxy_utils/sequence/fastq.py
+++ b/lib/galaxy_utils/sequence/fastq.py
@@ -88,6 +88,22 @@
raise ValueError( 'Error Parsing quality String. ASCII quality strings cannot contain spaces (%s): %s' % ( self.quality, e ) )
else:
return []
+ def get_ascii_quality_scores_len( self ):
+ """
+ Compute ascii quality score length, without generating relatively
+ expensive qualty score array.
+ """
+ if self.is_ascii_encoded():
+ return len( self.quality )
+ else:
+ quality = self.quality.rstrip()
+ if quality:
+ try:
+ return len( quality.split() )
+ except ValueError, e:
+ raise ValueError( 'Error Parsing quality String. ASCII quality strings cannot contain spaces (%s): %s' % ( self.quality, e ) )
+ else:
+ return 0
def get_decimal_quality_scores( self ):
if self.is_ascii_encoded():
to_quality = self.quality_min - self.ascii_min
@@ -168,9 +184,9 @@
return False
return True
def insufficient_quality_length( self ):
- return len( self.get_ascii_quality_scores() ) < len( self.sequence )
+ return self.get_ascii_quality_scores_len() < len( self.sequence )
def assert_sequence_quality_lengths( self ):
- qual_len = len( self.get_ascii_quality_scores() )
+ qual_len = self.get_ascii_quality_scores_len()
seq_len = len( self.sequence )
assert qual_len == seq_len, "Invalid FASTQ file: quality score length (%i) does not match sequence length (%i)" % ( qual_len, seq_len )
def reverse( self, clone = True ):
@@ -235,11 +251,11 @@
return False
def insufficient_quality_length( self ):
if self.has_adapter_base():
- return len( self.get_ascii_quality_scores() ) + 1 < len( self.sequence )
+ return self.get_ascii_quality_scores_len() + 1 < len( self.sequence )
return fastqSequencingRead.insufficient_quality_length( self )
def assert_sequence_quality_lengths( self ):
if self.has_adapter_base():
- qual_len = len( self.get_ascii_quality_scores() )
+ qual_len = self.get_ascii_quality_scores_len()
seq_len = len( self.sequence )
assert ( qual_len + 1 == seq_len ) or ( qual_len == seq_len ), "Invalid FASTQ file: quality score length (%i) does not match sequence length (%i with adapter base)" % ( qual_len, seq_len ) #SRA adds FAKE/DUMMY quality scores to the adapter base, we'll allow the reading of the Improper score here, but remove it in the Reader when "apply_galaxy_conventions" is set to True
else:
https://bitbucket.org/galaxy/galaxy-central/commits/b5970d24670f/
Changeset: b5970d24670f
User: jmchilton
Date: 2013-11-13 05:02:58
Summary: FASTQ Opt: Eliminate a few extra calls to is_ascii_encoded.
Kept 135848 of 250000 reads (54.34%).
117127015 function calls (116877015 primitive calls) in 38.632 seconds
Down from 39.416 seconds on previous changeset. Main Difference:
(ncalls|tottime|percall|cumtime|percall|filename:lineno(function))
385848 11.289 0.000 15.499 0.000 fastq.py:91(get_decimal_quality_scores)
1271696 0.716 0.000 0.716 0.000 fastq.py:71(is_ascii_encoded)
- became -
250000 0.185 0.000 10.434 0.000 fastq.py:107(get_decimal_quality_scores)
385848 11.380 0.000 15.368 0.000 fastq.py:109(__get_decimal_quality_scores)
1135848 0.649 0.000 0.649 0.000 fastq.py:71(is_ascii_encoded)
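The pattern here is generic: evaluate a per-read predicate once and thread the result through, rather than re-calling it at every decision point. A toy sketch with an instrumented stand-in class (the call counter is only there to make the saving visible):

```python
class Read:
    def __init__(self, quality):
        self.quality = quality
        self.checks = 0  # counts is_ascii_encoded calls, for illustration only

    def is_ascii_encoded(self):
        self.checks += 1
        return " " not in self.quality.strip()

    def _decimal_scores(self, ascii_encoded):
        # Takes the already-computed predicate instead of re-testing it.
        if ascii_encoded:
            return [ord(c) - 33 for c in self.quality]
        return [int(v) for v in self.quality.split()]

    def convert(self):
        # Compute the predicate once, then reuse it at both decision points.
        is_ascii = self.is_ascii_encoded()
        scores = self._decimal_scores(is_ascii)
        encoding = "ascii" if is_ascii else "decimal"
        return scores, encoding

read = Read("II")
scores, encoding = read.convert()
```

Without the caching, the same conversion would call the predicate at least twice per read, which at 1.2 million calls is the 0.716 seconds visible in the before-profile above.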
Affected #: 1 file
diff -r 6265e769718e2487b4089c181948769cf67869ce -r b5970d24670fefc0bf028bcab681f81ef2dd62e9 lib/galaxy_utils/sequence/fastq.py
--- a/lib/galaxy_utils/sequence/fastq.py
+++ b/lib/galaxy_utils/sequence/fastq.py
@@ -105,7 +105,9 @@
else:
return 0
def get_decimal_quality_scores( self ):
- if self.is_ascii_encoded():
+ return self.__get_decimal_quality_scores(self.is_ascii_encoded())
+ def __get_decimal_quality_scores( self, ascii ):
+ if ascii:
to_quality = self.quality_min - self.ascii_min
return [ ord( val ) + to_quality for val in self.quality ]
else:
@@ -128,15 +130,16 @@
else:
new_read.sequence = self.convert_color_to_base_space( self.sequence )
new_read.description = self.description
+ is_ascii = self.is_ascii_encoded()
if self.score_system != new_read.score_system:
if self.score_system == 'phred':
- score_list = self.convert_score_phred_to_solexa( self.get_decimal_quality_scores() )
+ score_list = self.convert_score_phred_to_solexa( self.__get_decimal_quality_scores(is_ascii) )
else:
- score_list = self.convert_score_solexa_to_phred( self.get_decimal_quality_scores() )
+ score_list = self.convert_score_solexa_to_phred( self.__get_decimal_quality_scores(is_ascii) )
else:
- score_list = self.get_decimal_quality_scores()
+ score_list = self.__get_decimal_quality_scores(is_ascii)
if force_quality_encoding is None:
- if self.is_ascii_encoded():
+ if is_ascii:
new_encoding = 'ascii'
else:
new_encoding = 'decimal'
https://bitbucket.org/galaxy/galaxy-central/commits/bb6c21a42274/
Changeset: bb6c21a42274
User: jmchilton
Date: 2013-11-13 05:02:58
Summary: FASTQ Opt: Disable formatting output, reading is already enforcing it?
This would be a huge optimization for the filter at this point - it cuts the runtime of my filter test to half of what remains. This changeset just leaves a comment, because I am not certain the results are the same - though I strongly suspect they would be - and I want some confirmation before pulling the trigger. None of the test cases fail as a result of this or any of these changesets.
Kept 135848 of 250000 reads (54.34%).
60407639 function calls in 18.837 seconds
If the change were made, this would be down from 38.632 seconds on the previous changeset. Main Difference:
(ncalls|tottime|percall|cumtime|percall|filename:lineno(function))
135848 0.357 0.000 19.710 0.000 fastq.py:631(write)
- to -
135848 0.219 0.000 0.609 0.000 fastq.py:631(write)
Affected #: 1 file
diff -r b5970d24670fefc0bf028bcab681f81ef2dd62e9 -r bb6c21a422741b32ab78f7a4fe6dae571019d532 tools/fastq/fastq_filter.py
--- a/tools/fastq/fastq_filter.py
+++ b/tools/fastq/fastq_filter.py
@@ -14,6 +14,8 @@
os.mkdir( additional_files_path )
shutil.copy( script_filename, os.path.join( additional_files_path, 'debug.txt' ) )
+ ## Dan, Others: Can we simply drop the "format=input_type" here since it is specified in reader.
+ ## This optimization would cut runtime roughly in half (for my test case anyway). -John
out = fastqWriter( open( output_filename, 'wb' ), format = input_type )
i = None
https://bitbucket.org/galaxy/galaxy-central/commits/374783ca6a48/
Changeset: 374783ca6a48
User: dannon
Date: 2013-12-17 16:42:49
Summary: Merged in jmchilton/galaxy-central-fork-1 (pull request #259)
FASTQ Optimizations (Round 1 - Filtering)
Affected #: 3 files
diff -r 31179a01a1ec4176884926907b212259e4148319 -r 374783ca6a480fbb571bfe9eb46052667b9d0479 lib/galaxy_utils/sequence/fastq.py
--- a/lib/galaxy_utils/sequence/fastq.py
+++ b/lib/galaxy_utils/sequence/fastq.py
@@ -1,4 +1,4 @@
-#Dan Blankenberg
+##Dan Blankenberg
import math
import string
import transform
@@ -35,6 +35,34 @@
return max( min( score, cls.quality_max ), cls.quality_min )
return map( restrict_score, decimal_score_list )
@classmethod
+ def transform_scores_to_valid_range( cls, decimal_score_list):
+ cls_quality_max = cls.quality_max
+ cls_quality_min = cls.quality_min
+ for i in range( len( decimal_score_list ) ):
+ score = decimal_score_list[i]
+ if(score > cls_quality_max):
+ transformed_score = cls_quality_max
+ elif( score < cls_quality_min ):
+ transformed_score = cls_quality_min
+ else:
+ transformed_score = score
+ decimal_score_list[i] = str(transformed_score)
+ @classmethod
+ def transform_scores_to_valid_range_ascii( cls, decimal_score_list ):
+ cls_quality_max = cls.quality_max
+ cls_quality_min = cls.quality_min
+ to_quality = cls.ascii_min - cls.quality_min
+ for i in range( len( decimal_score_list ) ):
+ score = decimal_score_list[i]
+ if(score > cls_quality_max):
+ transformed_score = cls_quality_max
+ elif( score < cls_quality_min ):
+ transformed_score = cls_quality_min
+ else:
+ transformed_score = score
+ transformed_score = chr(transformed_score + to_quality)
+ decimal_score_list[i] = transformed_score
+ @classmethod
def convert_base_to_color_space( cls, sequence ):
return cls.color_space_converter.to_color_space( sequence )
@classmethod
@@ -54,14 +82,34 @@
quality = self.quality.rstrip() #decimal scores should have a trailing space
if quality:
try:
- return [ chr( int( val ) + self.ascii_min - self.quality_min ) for val in quality.split() ]
+ to_quality = self.ascii_min - self.quality_min
+ return [ chr( int( val ) + to_quality ) for val in quality.split() ]
except ValueError, e:
raise ValueError( 'Error Parsing quality String. ASCII quality strings cannot contain spaces (%s): %s' % ( self.quality, e ) )
else:
return []
+ def get_ascii_quality_scores_len( self ):
+ """
+ Compute ascii quality score length, without generating relatively
+ expensive qualty score array.
+ """
+ if self.is_ascii_encoded():
+ return len( self.quality )
+ else:
+ quality = self.quality.rstrip()
+ if quality:
+ try:
+ return len( quality.split() )
+ except ValueError, e:
+ raise ValueError( 'Error Parsing quality String. ASCII quality strings cannot contain spaces (%s): %s' % ( self.quality, e ) )
+ else:
+ return 0
def get_decimal_quality_scores( self ):
- if self.is_ascii_encoded():
- return [ ord( val ) - self.ascii_min + self.quality_min for val in self.quality ]
+ return self.__get_decimal_quality_scores(self.is_ascii_encoded())
+ def __get_decimal_quality_scores( self, ascii ):
+ if ascii:
+ to_quality = self.quality_min - self.ascii_min
+ return [ ord( val ) + to_quality for val in self.quality ]
else:
quality = self.quality.rstrip() #decimal scores should have a trailing space
if quality:
@@ -82,23 +130,27 @@
else:
new_read.sequence = self.convert_color_to_base_space( self.sequence )
new_read.description = self.description
+ is_ascii = self.is_ascii_encoded()
if self.score_system != new_read.score_system:
if self.score_system == 'phred':
- score_list = self.convert_score_phred_to_solexa( self.get_decimal_quality_scores() )
+ score_list = self.convert_score_phred_to_solexa( self.__get_decimal_quality_scores(is_ascii) )
else:
- score_list = self.convert_score_solexa_to_phred( self.get_decimal_quality_scores() )
+ score_list = self.convert_score_solexa_to_phred( self.__get_decimal_quality_scores(is_ascii) )
else:
- score_list = self.get_decimal_quality_scores()
- new_read.quality = "%s " % " ".join( map( str, new_class.restrict_scores_to_valid_range( score_list ) ) ) #need trailing space to be valid decimal fastq
+ score_list = self.__get_decimal_quality_scores(is_ascii)
if force_quality_encoding is None:
- if self.is_ascii_encoded():
+ if is_ascii:
new_encoding = 'ascii'
else:
new_encoding = 'decimal'
else:
new_encoding = force_quality_encoding
if new_encoding == 'ascii':
- new_read.quality = "".join( new_read.get_ascii_quality_scores() )
+ new_class.transform_scores_to_valid_range_ascii( score_list )
+ new_read.quality = "".join( score_list )
+ else: # decimal
+ new_class.transform_scores_to_valid_range( score_list )
+ new_read.quality = "%s " % " ".join( score_list ) #need trailing space to be valid decimal fastq
return new_read
def get_sequence( self ):
return self.sequence
@@ -135,9 +187,9 @@
return False
return True
def insufficient_quality_length( self ):
- return len( self.get_ascii_quality_scores() ) < len( self.sequence )
+ return self.get_ascii_quality_scores_len() < len( self.sequence )
def assert_sequence_quality_lengths( self ):
- qual_len = len( self.get_ascii_quality_scores() )
+ qual_len = self.get_ascii_quality_scores_len()
seq_len = len( self.sequence )
assert qual_len == seq_len, "Invalid FASTQ file: quality score length (%i) does not match sequence length (%i)" % ( qual_len, seq_len )
def reverse( self, clone = True ):
@@ -202,11 +254,11 @@
return False
def insufficient_quality_length( self ):
if self.has_adapter_base():
- return len( self.get_ascii_quality_scores() ) + 1 < len( self.sequence )
+ return self.get_ascii_quality_scores_len() + 1 < len( self.sequence )
return fastqSequencingRead.insufficient_quality_length( self )
def assert_sequence_quality_lengths( self ):
if self.has_adapter_base():
- qual_len = len( self.get_ascii_quality_scores() )
+ qual_len = self.get_ascii_quality_scores_len()
seq_len = len( self.sequence )
assert ( qual_len + 1 == seq_len ) or ( qual_len == seq_len ), "Invalid FASTQ file: quality score length (%i) does not match sequence length (%i with adapter base)" % ( qual_len, seq_len ) #SRA adds FAKE/DUMMY quality scores to the adapter base, we'll allow the reading of the Improper score here, but remove it in the Reader when "apply_galaxy_conventions" is set to True
else:
diff -r 31179a01a1ec4176884926907b212259e4148319 -r 374783ca6a480fbb571bfe9eb46052667b9d0479 tools/fastq/fastq_filter.py
--- a/tools/fastq/fastq_filter.py
+++ b/tools/fastq/fastq_filter.py
@@ -14,14 +14,16 @@
os.mkdir( additional_files_path )
shutil.copy( script_filename, os.path.join( additional_files_path, 'debug.txt' ) )
+ ## Dan, Others: Can we simply drop the "format=input_type" here since it is specified in reader.
+ ## This optimization would cut runtime roughly in half (for my test case anyway). -John
out = fastqWriter( open( output_filename, 'wb' ), format = input_type )
i = None
reads_kept = 0
+ execfile(script_filename, globals())
for i, fastq_read in enumerate( fastqReader( open( input_filename ), format = input_type ) ):
- local = {'fastq_read':fastq_read, 'ret_val':False}
- execfile( script_filename, {}, local )
- if local['ret_val']:
+ ret_val = fastq_read_pass_filter( fastq_read ) ## fastq_read_pass_filter defined in script_filename
+ if ret_val:
out.write( fastq_read )
reads_kept += 1
out.close()
diff -r 31179a01a1ec4176884926907b212259e4148319 -r 374783ca6a480fbb571bfe9eb46052667b9d0479 tools/fastq/fastq_filter.xml
--- a/tools/fastq/fastq_filter.xml
+++ b/tools/fastq/fastq_filter.xml
@@ -96,7 +96,6 @@
return False
#end for
return True
-ret_val = fastq_read_pass_filter( fastq_read )
</configfile></configfiles><outputs>
Repository URL: https://bitbucket.org/galaxy/galaxy-central/
2 new commits in galaxy-central:
https://bitbucket.org/galaxy/galaxy-central/commits/f6723d1ddaf5/
Changeset: f6723d1ddaf5
Branch: yhoogstrate/some-minor-select-field-changes-includin-1387270048132
User: yhoogstrate
Date: 2013-12-17 09:49:21
Summary: Some minor select field changes including Illumina 1.8+
Affected #: 1 file
diff -r af87cdcf36084831fce7d99e621b1efc943d2aff -r f6723d1ddaf5c15e194aa5bde2fc78937b9c5423 tools/fastq/fastq_groomer.xml
--- a/tools/fastq/fastq_groomer.xml
+++ b/tools/fastq/fastq_groomer.xml
@@ -17,7 +17,7 @@
<param name="input_type" type="select" label="Input FASTQ quality scores type"><option value="solexa">Solexa</option><option value="illumina">Illumina 1.3-1.7</option>
- <option value="sanger" selected="True">Sanger</option>
+ <option value="sanger" selected="True">Sanger & Illumina 1.8+</option><option value="cssanger">Color Space Sanger</option></param><conditional name="options_type">
@@ -31,7 +31,7 @@
<when value="advanced"><param name="output_type" type="select" label="Output FASTQ quality scores type" help="Galaxy tools are designed to work with the Sanger Quality score format."><option value="solexa">Solexa</option>
- <option value="illumina">Illumina 1.3+</option>
+ <option value="illumina">Illumina 1.3-1.7</option><option value="sanger" selected="True">Sanger (recommended)</option><option value="cssanger">Color Space Sanger</option></param>
https://bitbucket.org/galaxy/galaxy-central/commits/31179a01a1ec/
Changeset: 31179a01a1ec
User: dannon
Date: 2013-12-17 16:18:22
Summary: Merged in yhoogstrate/galaxy-central/yhoogstrate/some-minor-select-field-changes-includin-1387270048132 (pull request #276)
Some minor select field changes including Illumina 1.8+
Affected #: 1 file
diff -r 029cc5c14f1a8fb81e4bbdc3872757b297e1efa0 -r 31179a01a1ec4176884926907b212259e4148319 tools/fastq/fastq_groomer.xml
--- a/tools/fastq/fastq_groomer.xml
+++ b/tools/fastq/fastq_groomer.xml
@@ -17,7 +17,7 @@
<param name="input_type" type="select" label="Input FASTQ quality scores type"><option value="solexa">Solexa</option><option value="illumina">Illumina 1.3-1.7</option>
- <option value="sanger" selected="True">Sanger</option>
+ <option value="sanger" selected="True">Sanger & Illumina 1.8+</option><option value="cssanger">Color Space Sanger</option></param><conditional name="options_type">
@@ -31,7 +31,7 @@
<when value="advanced"><param name="output_type" type="select" label="Output FASTQ quality scores type" help="Galaxy tools are designed to work with the Sanger Quality score format."><option value="solexa">Solexa</option>
- <option value="illumina">Illumina 1.3+</option>
+ <option value="illumina">Illumina 1.3-1.7</option><option value="sanger" selected="True">Sanger (recommended)</option><option value="cssanger">Color Space Sanger</option></param>
Repository URL: https://bitbucket.org/galaxy/galaxy-central/
commit/galaxy-central: greg: Handle strangely defined tool tests when setting metadata in tool shed repositories.
by commits-noreply@bitbucket.org 17 Dec '13
1 new commit in galaxy-central:
https://bitbucket.org/galaxy/galaxy-central/commits/029cc5c14f1a/
Changeset: 029cc5c14f1a
User: greg
Date: 2013-12-17 16:08:03
Summary: Handle strangely defined tool tests when setting metadata in tool shed repositories.
Affected #: 2 files
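The diff below hardens metadata generation in two ways: requirement fields are coerced to plain strings (with a `None` version preserved), and test parameters defined without a value, like `<param name="force_se" />` in the samtools_rmdup example, no longer crash the `len( values )` check. A minimal sketch of that normalization logic, using plain dicts in place of Galaxy's tool objects (the function names here are illustrative, not Galaxy API):

```python
def normalize_requirements(requirements):
    # Coerce name/type to str; keep version as None when absent,
    # mirroring the tool_requirements loop in the diff.
    normalized = []
    for req in requirements:
        normalized.append(dict(
            name=str(req["name"]),
            type=str(req["type"]),
            version=str(req["version"]) if req.get("version") else None,
        ))
    return normalized

def normalize_test_inputs(inputs):
    # Mirror the strange-test handling: skip unnamed params, tolerate
    # params with no values, unwrap single-element value lists.
    pairs = []
    for param_name, values in inputs.items():
        if param_name is None:
            continue
        if values is None:
            pairs.append((param_name, None))       # e.g. <param name="force_se" />
        elif len(values) == 1:
            pairs.append((param_name, values[0]))  # unwrap singletons
        else:
            pairs.append((param_name, values))
    return pairs
```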
diff -r 97c69fcfeaac1756a7ad5d92c8ade74b2d611561 -r 029cc5c14f1a8fb81e4bbdc3872757b297e1efa0 lib/tool_shed/util/metadata_util.py
--- a/lib/tool_shed/util/metadata_util.py
+++ b/lib/tool_shed/util/metadata_util.py
@@ -927,10 +927,13 @@
guid = suc.generate_tool_guid( repository_clone_url, tool )
# Handle tool.requirements.
tool_requirements = []
- for tr in tool.requirements:
- requirement_dict = dict( name=tr.name,
- type=tr.type,
- version=tr.version )
+ for tool_requirement in tool.requirements:
+ name = str( tool_requirement.name )
+ type = str( tool_requirement.type )
+ version = str( tool_requirement.version ) if tool_requirement.version else None
+ requirement_dict = dict( name=name,
+ type=type,
+ version=version )
tool_requirements.append( requirement_dict )
# Handle tool.tests.
tool_tests = []
@@ -941,18 +944,31 @@
value, extra = required_file
required_files.append( ( value ) )
inputs = []
- for input_field, values in ttb.inputs.iteritems():
- if len( values ) == 1:
- inputs.append( ( input_field, values[0] ) )
- else:
- inputs.append( ( input_field, values ) )
+ for param_name, values in ttb.inputs.iteritems():
+ # Handle improperly defined or strange test parameters and values.
+ if param_name is not None:
+ if values is None:
+ # An example is the 3rd test in http://testtoolshed.g2.bx.psu.edu/view/devteam/samtools_rmdup
+ # which is defined as:
+ # <test>
+ # <param name="input1" value="1.bam" ftype="bam" />
+ # <param name="bam_paired_end_type_selector" value="PE" />
+ # <param name="force_se" />
+ # <output name="output1" file="1.bam" ftype="bam" sort="True" />
+ # </test>
+ inputs.append( ( param_name, values ) )
+ else:
+ if len( values ) == 1:
+ inputs.append( ( param_name, values[ 0 ] ) )
+ else:
+ inputs.append( ( param_name, values ) )
outputs = []
for output in ttb.outputs:
name, file_name, extra = output
outputs.append( ( name, suc.strip_path( file_name ) if file_name else None ) )
if file_name not in required_files and file_name is not None:
required_files.append( file_name )
- test_dict = dict( name=ttb.name,
+ test_dict = dict( name=str( ttb.name ),
required_files=required_files,
inputs=inputs,
outputs=outputs )
diff -r 97c69fcfeaac1756a7ad5d92c8ade74b2d611561 -r 029cc5c14f1a8fb81e4bbdc3872757b297e1efa0 lib/tool_shed/util/shed_util_common.py
--- a/lib/tool_shed/util/shed_util_common.py
+++ b/lib/tool_shed/util/shed_util_common.py
@@ -254,11 +254,8 @@
else:
deleted = False
uninstalled = False
- tool_shed_repository = get_tool_shed_repository_by_shed_name_owner_installed_changeset_revision( app,
- tool_shed,
- name,
- owner,
- installed_changeset_revision )
+ tool_shed_repository = \
+ get_tool_shed_repository_by_shed_name_owner_installed_changeset_revision( app, tool_shed, name, owner, installed_changeset_revision )
if tool_shed_repository:
log.debug( "Updating an existing row for repository '%s' in the tool_shed_repository table, status set to '%s'." % ( str( name ), str( status ) ) )
tool_shed_repository.description = description
@@ -271,19 +268,20 @@
tool_shed_repository.status = status
else:
log.debug( "Adding new row for repository '%s' in the tool_shed_repository table, status set to '%s'." % ( str( name ), str( status ) ) )
- tool_shed_repository = app.install_model.ToolShedRepository( tool_shed=tool_shed,
- name=name,
- description=description,
- owner=owner,
- installed_changeset_revision=installed_changeset_revision,
- changeset_revision=current_changeset_revision,
- ctx_rev=ctx_rev,
- metadata=metadata_dict,
- includes_datatypes=includes_datatypes,
- dist_to_shed=dist_to_shed,
- deleted=deleted,
- uninstalled=uninstalled,
- status=status )
+ tool_shed_repository = \
+ app.install_model.ToolShedRepository( tool_shed=tool_shed,
+ name=name,
+ description=description,
+ owner=owner,
+ installed_changeset_revision=installed_changeset_revision,
+ changeset_revision=current_changeset_revision,
+ ctx_rev=ctx_rev,
+ metadata=metadata_dict,
+ includes_datatypes=includes_datatypes,
+ dist_to_shed=dist_to_shed,
+ deleted=deleted,
+ uninstalled=uninstalled,
+ status=status )
context.add( tool_shed_repository )
context.flush()
return tool_shed_repository
3 new commits in galaxy-central:
https://bitbucket.org/galaxy/galaxy-central/commits/08320798463e/
Changeset: 08320798463e
User: nsoranzo
Date: 2013-12-06 11:17:25
Summary: Specify mosaik v. 1.1.0021 (last compatible with the wrapper). Add requirement for samtools.
Affected #: 1 file
diff -r a29d54619813d5da992b897557162a360b8d610c -r 08320798463e173f61f0b53fbd2db43419d373e2 tools/sr_mapping/mosaik.xml
--- a/tools/sr_mapping/mosaik.xml
+++ b/tools/sr_mapping/mosaik.xml
@@ -1,7 +1,10 @@
<?xml version="1.0"?><tool id="mosaik_wrapper" name="Map with Mosaik" version="1.1.1"><description/>
- <requirements><requirement type="package">mosaik</requirement></requirements>
+ <requirements>
+ <requirement type="package" version="1.1.0021">mosaik</requirement>
+ <requirement type="package" version="0.1.18">samtools</requirement>
+ </requirements><command>
#set $processors = '-p ${GALAXY_SLOTS:-4}'
#set $lm = ''
https://bitbucket.org/galaxy/galaxy-central/commits/5fbc02eff708/
Changeset: 5fbc02eff708
User: nsoranzo
Date: 2013-12-06 11:20:56
Summary: Fix for failure preparing job: "NotFound: cannot find 'mfl'" with paired-end library.
Add mate FASTQ file to inputs.
'ls' param was not used.
Add missing '-mfl ' to MosaikBuild command.
Remove always empty $lm variable.
Add <version_command>.
Affected #: 1 file
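The fix adds the mate file and the previously missing `-mfl` flag to the MosaikBuild command for paired-end libraries, and only emits `-ls` in that case. A rough Python rendering of the Cheetah branching in the fixed wrapper (function name and dict shape are illustrative):

```python
def mosaik_build_args(reads, paired=None):
    # Single-end: just the reads file, no -ls string.
    # Paired-end: add mate file, median fragment length, and the -ls flag
    # that is later passed to MosaikAligner.
    args = ["MosaikBuild", "-q", reads]
    ls_string = ""
    if paired is not None:
        args += ["-q2", paired["reads2"], "-mfl", str(paired["mfl"])]
        ls_string = "-ls %s" % paired["ls"]
    return args, ls_string
```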
diff -r 08320798463e173f61f0b53fbd2db43419d373e2 -r 5fbc02eff708cc86afd60a4fa7588c9f72e877b5 tools/sr_mapping/mosaik.xml
--- a/tools/sr_mapping/mosaik.xml
+++ b/tools/sr_mapping/mosaik.xml
@@ -1,28 +1,30 @@
<?xml version="1.0"?>
-<tool id="mosaik_wrapper" name="Map with Mosaik" version="1.1.1">
+<tool id="mosaik_wrapper" name="Map with Mosaik" version="1.1.2"><description/><requirements><requirement type="package" version="1.1.0021">mosaik</requirement><requirement type="package" version="0.1.18">samtools</requirement></requirements>
+ <version_command>MosaikAligner | sed -e 's/\x1b\[[[:digit:]]\{1,2\}\(;[[:digit:]]\{1,2\}\)\{0,1\}m//g' | grep -o 'MosaikAligner [[:digit:].]\{1,\}'</version_command><command>
#set $processors = '-p ${GALAXY_SLOTS:-4}'
- #set $lm = ''
- #if $paired.kind == 'single':
- #set $mfl = ''
- #set $ls = ''
- #else:
- #set $ls = '-ls $mfl'
- #end if
MosaikBuild -fr
#if $genomeSource.refGenomeSource == 'indexed':
${genomeSource.indexReference.fields.path}
#else:
- $genomeSource.historyReference
+ ${genomeSource.historyReference}
#end if
- -oa mosaik_ref_file;
- MosaikBuild -q $reads $mfl -st $st -out mosaik_reads_file;
- MosaikAligner -ia mosaik_ref_file -in mosaik_reads_file -out mosaik_aligned_file $ls -mm $mm -mhp $mhp -act $act -bw $bw $processors $lm -hs 15;
+ -oa mosaik_ref_file;
+ MosaikBuild -q $reads
+ #if $paired.kind == 'single'
+ #set $ls_string = ''
+ #else
+ -q2 ${paired.reads2}
+ -mfl ${paired.mfl}
+ #set $ls_string = '-ls %s' % $paired.ls
+ #end if
+ -st $st -out mosaik_reads_file;
+ MosaikAligner -ia mosaik_ref_file -in mosaik_reads_file -out mosaik_aligned_file $ls_string -mm $mm -mhp $mhp -act $act -bw $bw $processors -hs 15;
MosaikText -in mosaik_aligned_file -$outFormat sam_bam_file;
#if str($outFormat) == 'bam':
samtools sort sam_bam_file sorted_bam;
@@ -50,12 +52,12 @@
<param format="fasta" name="historyReference" type="data" metadata_name="dbkey" label="Select a reference from history"/></when></conditional>
- <param format="fastq" name="reads" type="data" label="Fastq Reads File"/>
- <param name="outFormat" type="select" label="Output Format">
- <option value="sam">Sam</option>
- <option value="bam">Bam</option>
+ <param format="fastq" name="reads" type="data" label="FASTQ reads file" />
+ <param name="outFormat" type="select" label="Output format">
+ <option value="sam">SAM</option>
+ <option value="bam">BAM</option></param>
- <param name="st" type="select" label="Sequencing Technology Used">
+ <param name="st" type="select" label="Sequencing technology used"><option value="454">454</option><option value="illumina">Illumina</option><option value="solid">Solid</option>
@@ -69,14 +71,15 @@
</param><when value="single"/><when value="paired">
- <param name="mfl" type="integer" value="200" label="Insert Size" help="the length between the paired reads"/>
- <param name="ls" type="integer" value="50" label="Realignment Window" help="Window size to realign mate pairs that are out of position. Large values slow down performance"/>
+ <param format="fastq" name="reads2" type="data" label="FASTQ 2nd mate" />
+ <param name="mfl" type="integer" value="200" label="Median fragment length" />
+ <param name="ls" type="integer" min="0" value="50" label="Local alignment search radius to rescue mates" help="A large value slows down performances" /></when></conditional>
- <param name="mm" size="5" type="integer" value="6" label="Mismatches allowed" help="mismatches allowed per sequence"/>
- <param name="act" size="5" type="integer" value="35" label="Alignment Candidate Threshold" help="determines which hash regions will be aligned with Smith Waterman"/>
- <param name="bw" size="5" type="integer" value="19" label="Smith-Waterman band width"/>
- <param name="mhp" size="5" type="integer" value="100" label="Maximum # Of Positions Stored Per Seed" help="number of places in the reference the aligner will try to place a particular hash"/>
+ <param name="mm" type="integer" value="6" label="Number of mismatches allowed per sequence" />
+ <param name="act" type="integer" value="35" label="Alignment candidate threshold" help="Determines which hash regions will be aligned with Smith-Waterman" />
+ <param name="bw" type="integer" value="9" label="Smith-Waterman band width" />
+ <param name="mhp" type="integer" value="100" label="Maximum number of positions stored per seed" help="Number of places in the reference the aligner will try to place a particular hash" /></inputs><outputs><data format="sam" name="output">
https://bitbucket.org/galaxy/galaxy-central/commits/97c69fcfeaac/
Changeset: 97c69fcfeaac
User: dannon
Date: 2013-12-17 15:00:44
Summary: Merged in nsoranzo/galaxy-central (pull request #271)
Fixes for MOSAIK tool
Affected #: 1 file
diff -r 3eb13c85bf4140448865328b9c6feaed96d8682f -r 97c69fcfeaac1756a7ad5d92c8ade74b2d611561 tools/sr_mapping/mosaik.xml
--- a/tools/sr_mapping/mosaik.xml
+++ b/tools/sr_mapping/mosaik.xml
@@ -1,25 +1,30 @@
<?xml version="1.0"?>
-<tool id="mosaik_wrapper" name="Map with Mosaik" version="1.1.1">
+<tool id="mosaik_wrapper" name="Map with Mosaik" version="1.1.2"><description/>
- <requirements><requirement type="package">mosaik</requirement></requirements>
+ <requirements>
+ <requirement type="package" version="1.1.0021">mosaik</requirement>
+ <requirement type="package" version="0.1.18">samtools</requirement>
+ </requirements>
+ <version_command>MosaikAligner | sed -e 's/\x1b\[[[:digit:]]\{1,2\}\(;[[:digit:]]\{1,2\}\)\{0,1\}m//g' | grep -o 'MosaikAligner [[:digit:].]\{1,\}'</version_command><command>
#set $processors = '-p ${GALAXY_SLOTS:-4}'
- #set $lm = ''
- #if $paired.kind == 'single':
- #set $mfl = ''
- #set $ls = ''
- #else:
- #set $ls = '-ls $mfl'
- #end if
MosaikBuild -fr
#if $genomeSource.refGenomeSource == 'indexed':
${genomeSource.indexReference.fields.path}
#else:
- $genomeSource.historyReference
+ ${genomeSource.historyReference}
#end if
- -oa mosaik_ref_file;
- MosaikBuild -q $reads $mfl -st $st -out mosaik_reads_file;
- MosaikAligner -ia mosaik_ref_file -in mosaik_reads_file -out mosaik_aligned_file $ls -mm $mm -mhp $mhp -act $act -bw $bw $processors $lm -hs 15;
+ -oa mosaik_ref_file;
+ MosaikBuild -q $reads
+ #if $paired.kind == 'single'
+ #set $ls_string = ''
+ #else
+ -q2 ${paired.reads2}
+ -mfl ${paired.mfl}
+ #set $ls_string = '-ls %s' % $paired.ls
+ #end if
+ -st $st -out mosaik_reads_file;
+ MosaikAligner -ia mosaik_ref_file -in mosaik_reads_file -out mosaik_aligned_file $ls_string -mm $mm -mhp $mhp -act $act -bw $bw $processors -hs 15;
MosaikText -in mosaik_aligned_file -$outFormat sam_bam_file;
#if str($outFormat) == 'bam':
samtools sort sam_bam_file sorted_bam;
@@ -47,12 +52,12 @@
<param format="fasta" name="historyReference" type="data" metadata_name="dbkey" label="Select a reference from history"/></when></conditional>
- <param format="fastq" name="reads" type="data" label="Fastq Reads File"/>
- <param name="outFormat" type="select" label="Output Format">
- <option value="sam">Sam</option>
- <option value="bam">Bam</option>
+ <param format="fastq" name="reads" type="data" label="FASTQ reads file" />
+ <param name="outFormat" type="select" label="Output format">
+ <option value="sam">SAM</option>
+ <option value="bam">BAM</option></param>
- <param name="st" type="select" label="Sequencing Technology Used">
+ <param name="st" type="select" label="Sequencing technology used"><option value="454">454</option><option value="illumina">Illumina</option><option value="solid">Solid</option>
@@ -66,14 +71,15 @@
</param><when value="single"/><when value="paired">
- <param name="mfl" type="integer" value="200" label="Insert Size" help="the length between the paired reads"/>
- <param name="ls" type="integer" value="50" label="Realignment Window" help="Window size to realign mate pairs that are out of position. Large values slow down performance"/>
+ <param format="fastq" name="reads2" type="data" label="FASTQ 2nd mate" />
+ <param name="mfl" type="integer" value="200" label="Median fragment length" />
+ <param name="ls" type="integer" min="0" value="50" label="Local alignment search radius to rescue mates" help="A large value slows down performances" /></when></conditional>
- <param name="mm" size="5" type="integer" value="6" label="Mismatches allowed" help="mismatches allowed per sequence"/>
- <param name="act" size="5" type="integer" value="35" label="Alignment Candidate Threshold" help="determines which hash regions will be aligned with Smith Waterman"/>
- <param name="bw" size="5" type="integer" value="19" label="Smith-Waterman band width"/>
- <param name="mhp" size="5" type="integer" value="100" label="Maximum # Of Positions Stored Per Seed" help="number of places in the reference the aligner will try to place a particular hash"/>
+ <param name="mm" type="integer" value="6" label="Number of mismatches allowed per sequence" />
+ <param name="act" type="integer" value="35" label="Alignment candidate threshold" help="Determines which hash regions will be aligned with Smith-Waterman" />
+ <param name="bw" type="integer" value="9" label="Smith-Waterman band width" />
+ <param name="mhp" type="integer" value="100" label="Maximum number of positions stored per seed" help="Number of places in the reference the aligner will try to place a particular hash" /></inputs><outputs><data format="sam" name="output">
2 new commits in galaxy-central:
https://bitbucket.org/galaxy/galaxy-central/commits/b92f61b482d9/
Changeset: b92f61b482d9
User: BjoernGruening
Date: 2013-11-11 10:09:54
Summary: add dependency of type package to the ccat wrapper
Affected #: 1 file
diff -r 8e448f2ac955e6d434476b18c59b2cffcfc55404 -r b92f61b482d9b5897cb78ac260ae1dcec3260045 tools/peak_calling/ccat_wrapper.xml
--- a/tools/peak_calling/ccat_wrapper.xml
+++ b/tools/peak_calling/ccat_wrapper.xml
@@ -10,6 +10,7 @@
'$output_peak_file' '$output_region_file' '$output_top_file' '$output_log_file'</command><requirements><requirement type="binary" version="3.0">CCAT</requirement>
+ <requirement type="package" version="3.0">CCAT</requirement></requirements><inputs><param name="input_tag_file" type="data" format="bed" label="ChIP-Seq Tag File" >
https://bitbucket.org/galaxy/galaxy-central/commits/3eb13c85bf41/
Changeset: 3eb13c85bf41
User: BjoernGruening
Date: 2013-11-11 15:35:45
Summary: remove binary dependency
Affected #: 1 file
diff -r b92f61b482d9b5897cb78ac260ae1dcec3260045 -r 3eb13c85bf4140448865328b9c6feaed96d8682f tools/peak_calling/ccat_wrapper.xml
--- a/tools/peak_calling/ccat_wrapper.xml
+++ b/tools/peak_calling/ccat_wrapper.xml
@@ -9,7 +9,6 @@
'CCAT in Galaxy'
'$output_peak_file' '$output_region_file' '$output_top_file' '$output_log_file'</command><requirements>
- <requirement type="binary" version="3.0">CCAT</requirement><requirement type="package" version="3.0">CCAT</requirement></requirements><inputs>
2 new commits in galaxy-central:
https://bitbucket.org/galaxy/galaxy-central/commits/714f5b1ac790/
Changeset: 714f5b1ac790
User: jmchilton
Date: 2013-12-15 23:16:34
Summary: Move history contents filtering logic into model.
Filtering for deleted, visible, and ids with the ORM should prevent loading unneeded objects into memory (a history with 1000 items of which 5 are visible will now cause only 5 items to be loaded, instead of all 1000 just to filter down to 5). This simplifies the history_contents API logic somewhat, allows easier 'unit' testing (included), and provides a clearer entry point for 'showing' additional content types (read: dataset collections).
Affected #: 3 files
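The core of the change is the `contents_iter` decision sketched in the diff: small id sets become a SQL `IN` clause, while sets past `MAX_IN_FILTER_LENGTH` fall back to filtering in Python, since databases cap the size of `IN` lists. A minimal sketch of that strategy, with a plain list standing in for the SQLAlchemy query:

```python
MAX_IN_FILTER_LENGTH = 100

def filter_contents(items, deleted=None, visible=None, ids=None,
                    max_in_filter_length=MAX_IN_FILTER_LENGTH):
    # items: dicts with 'id', 'deleted', 'visible' keys, standing in for HDAs.
    results = items
    if deleted is not None:
        results = [i for i in results if i["deleted"] == deleted]
    if visible is not None:
        results = [i for i in results if i["visible"] == visible]
    if ids is not None:
        if len(ids) < max_in_filter_length:
            # real code: query.filter( HistoryDatasetAssociation.id.in_( ids ) )
            results = [i for i in results if i["id"] in ids]
        else:
            # too many ids for one IN clause: filter lazily in Python instead
            results = filter(lambda i: i["id"] in set(ids), results)
    return list(results)
```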
diff -r 4fe46fd3181bfb63e8093200d71801cb507b586d -r 714f5b1ac7902cc5b79d5fdf6390ea70b7e17afb lib/galaxy/model/__init__.py
--- a/lib/galaxy/model/__init__.py
+++ b/lib/galaxy/model/__init__.py
@@ -19,6 +19,7 @@
import socket
import time
from string import Template
+from itertools import ifilter
import galaxy.datatypes
import galaxy.datatypes.registry
@@ -43,6 +44,11 @@
# Default Value Required for unit tests
datatypes_registry.load_datatypes()
+# When constructing filters with in for a fixed set of ids, maximum
+# number of items to place in the IN statement. Different databases
+# are going to have different limits so it is likely best to not let
+# this be unlimited - filter in Python if over this limit.
+MAX_IN_FILTER_LENGTH = 100
class NoConverterException(Exception):
def __init__(self, value):
@@ -893,6 +899,32 @@
rval = galaxy.datatypes.data.nice_size( rval )
return rval
+ def contents_iter( self, **kwds ):
+ """
+ Fetch filtered list of contents of history.
+ """
+ python_filter = None
+ db_session = object_session( self )
+ assert db_session != None
+ query = db_session.query( HistoryDatasetAssociation ).filter( HistoryDatasetAssociation.table.c.history_id == self.id )
+ deleted = galaxy.util.string_as_bool_or_none( kwds.get( 'deleted', None ) )
+ if deleted is not None:
+ query = query.filter( HistoryDatasetAssociation.deleted == bool( kwds['deleted'] ) )
+ visible = galaxy.util.string_as_bool_or_none( kwds.get( 'visible', None ) )
+ if visible is not None:
+ query = query.filter( HistoryDatasetAssociation.visible == bool( kwds['visible'] ) )
+ if 'ids' in kwds:
+ ids = kwds['ids']
+ max_in_filter_length = kwds.get('max_in_filter_length', MAX_IN_FILTER_LENGTH)
+ if len(ids) < max_in_filter_length:
+ query = query.filter( HistoryDatasetAssociation.id.in_(ids) )
+ else:
+ python_filter = lambda hda: hda.id in ids
+ if python_filter:
+ return ifilter(python_filter, query)
+ else:
+ return query
+
def copy_tags_from(self,target_user,source_history):
for src_shta in source_history.tags:
new_shta = src_shta.copy()
diff -r 4fe46fd3181bfb63e8093200d71801cb507b586d -r 714f5b1ac7902cc5b79d5fdf6390ea70b7e17afb lib/galaxy/webapps/galaxy/api/history_contents.py
--- a/lib/galaxy/webapps/galaxy/api/history_contents.py
+++ b/lib/galaxy/webapps/galaxy/api/history_contents.py
@@ -51,47 +51,28 @@
else:
history = self.get_history( trans, history_id, check_ownership=True, check_accessible=True )
- # if ids, return _FULL_ data (as show) for each id passed
+ contents_kwds = {}
if ids:
- ids = ids.split( ',' )
- for index, hda in enumerate( history.datasets ):
- encoded_hda_id = trans.security.encode_id( hda.id )
- if encoded_hda_id in ids:
- #TODO: share code with show
- rval.append( self._detailed_hda_dict( trans, hda ) )
-
- # if no ids passed, return a _SUMMARY_ of _all_ datasets in the history
+ ids = map( lambda id: trans.security.decode_id( id ), ids.split( ',' ) )
+ contents_kwds[ 'ids' ] = ids
+ # If explicit ids given, always used detailed result.
+ details = 'all'
else:
+ contents_kwds[ 'deleted' ] = kwd.get( 'deleted', None )
+ contents_kwds[ 'visible' ] = kwd.get( 'visible', None )
# details param allows a mixed set of summary and detailed hdas
#TODO: this is getting convoluted due to backwards compat
details = kwd.get( 'details', None ) or []
if details and details != 'all':
details = util.listify( details )
- # by default return all datasets - even if deleted or hidden (defaulting the next switches to None)
- # if specified return those datasets that match the setting
- # backwards compat
- return_deleted = util.string_as_bool_or_none( kwd.get( 'deleted', None ) )
- return_visible = util.string_as_bool_or_none( kwd.get( 'visible', None ) )
-
- for hda in history.datasets:
- # if either return_ setting has been requested (!= None), skip hdas that don't match the request
- if return_deleted is not None:
- if( ( return_deleted and not hda.deleted )
- or ( not return_deleted and hda.deleted ) ):
- continue
- if return_visible is not None:
- if( ( return_visible and not hda.visible )
- or ( not return_visible and hda.visible ) ):
- continue
-
- encoded_hda_id = trans.security.encode_id( hda.id )
- if( ( encoded_hda_id in details )
- or ( details == 'all' ) ):
- rval.append( self._detailed_hda_dict( trans, hda ) )
- else:
- rval.append( self._summary_hda_dict( trans, history_id, hda ) )
-
+ for hda in history.contents_iter( **contents_kwds ):
+ encoded_hda_id = trans.security.encode_id( hda.id )
+ detailed = details == 'all' or ( encoded_hda_id in details )
+ if detailed:
+ rval.append( self._detailed_hda_dict( trans, hda ) )
+ else:
+ rval.append( self._summary_hda_dict( trans, history_id, hda ) )
except Exception, e:
# for errors that are not specific to one hda (history lookup or summary list)
rval = "Error in history API at listing contents: " + str( e )
diff -r 4fe46fd3181bfb63e8093200d71801cb507b586d -r 714f5b1ac7902cc5b79d5fdf6390ea70b7e17afb test/unit/test_galaxy_mapping.py
--- a/test/unit/test_galaxy_mapping.py
+++ b/test/unit/test_galaxy_mapping.py
@@ -180,6 +180,39 @@
assert hist1.name == "History 2b"
# gvk TODO need to ad test for GalaxySessions, but not yet sure what they should look like.
+ def test_history_contents( self ):
+ model = self.model
+ u = model.User( email="contents(a)foo.bar.baz", password="password" )
+ # gs = model.GalaxySession()
+ h1 = model.History( name="HistoryContentsHistory1", user=u)
+
+ self.persist( u, h1, expunge=True )
+
+ d1 = self.new_hda( h1, name="1" )
+ d2 = self.new_hda( h1, name="2", visible=False )
+ d3 = self.new_hda( h1, name="3", deleted=True )
+ d4 = self.new_hda( h1, name="4", visible=False, deleted=True )
+
+ def contents_iter_names(**kwds):
+ history = model.context.query( model.History ).filter(
+ model.History.name == "HistoryContentsHistory1"
+ ).first()
+ return set( map( lambda hda: hda.name, history.contents_iter( **kwds ) ) )
+
+ assert contents_iter_names() == set( [ "1", "2", "3", "4" ] )
+ assert contents_iter_names( deleted=False ) == set( [ "1", "2" ] )
+ assert contents_iter_names( visible=True ) == set( [ "1", "3" ] )
+ assert contents_iter_names( visible=False ) == set( [ "2", "4" ] )
+ assert contents_iter_names( deleted=True, visible=False ) == set( [ "4" ] )
+
+ assert contents_iter_names( ids=[ d1.id, d2.id, d3.id, d4.id ] ) == set( [ "1", "2", "3", "4" ] )
+ assert contents_iter_names( ids=[ d1.id, d2.id, d3.id, d4.id ], max_in_filter_length=1 ) == set( [ "1", "2", "3", "4" ] )
+
+ assert contents_iter_names( ids=[ d1.id, d3.id ] ) == set( [ "1", "3" ] )
+
+ def new_hda( self, history, **kwds ):
+ return self.persist( self.model.HistoryDatasetAssociation( history=history, create_dataset=True, sa_session=self.model.session, **kwds ), flush=True )
+
@classmethod
def setUpClass(cls):
# Start the database and connect the mapping
@@ -191,10 +224,16 @@
return cls.model.session.query( type )
@classmethod
- def persist(cls, *args):
+ def persist(cls, *args, **kwargs):
+ session = cls.model.session
+ flush = kwargs.get('flush', True)
for arg in args:
- cls.model.session.add( arg )
- cls.expunge()
+ session.add( arg )
+ if flush:
+ session.flush()
+ if kwargs.get('expunge', not flush):
+ cls.expunge()
+ return arg # Return last or only arg.
@classmethod
def expunge(cls):
https://bitbucket.org/galaxy/galaxy-central/commits/8e448f2ac955/
Changeset: 8e448f2ac955
User: jmchilton
Date: 2013-12-17 14:31:52
Summary: Merged in jmchilton/galaxy-central-fork-1 (pull request #275)
Move history contents filtering logic into model.
Affected #: 3 files
diff -r 90a639389de4a4348a32e21d87a13de4b1a72f56 -r 8e448f2ac955e6d434476b18c59b2cffcfc55404 lib/galaxy/model/__init__.py
--- a/lib/galaxy/model/__init__.py
+++ b/lib/galaxy/model/__init__.py
@@ -19,6 +19,7 @@
import socket
import time
from string import Template
+from itertools import ifilter
import galaxy.datatypes
import galaxy.datatypes.registry
@@ -43,6 +44,11 @@
# Default Value Required for unit tests
datatypes_registry.load_datatypes()
+# When constructing filters with in for a fixed set of ids, maximum
+# number of items to place in the IN statement. Different databases
+# are going to have different limits so it is likely best to not let
+# this be unlimited - filter in Python if over this limit.
+MAX_IN_FILTER_LENGTH = 100
class NoConverterException(Exception):
def __init__(self, value):
@@ -893,6 +899,32 @@
rval = galaxy.datatypes.data.nice_size( rval )
return rval
+ def contents_iter( self, **kwds ):
+ """
+ Fetch filtered list of contents of history.
+ """
+ python_filter = None
+ db_session = object_session( self )
+ assert db_session != None
+ query = db_session.query( HistoryDatasetAssociation ).filter( HistoryDatasetAssociation.table.c.history_id == self.id )
+ deleted = galaxy.util.string_as_bool_or_none( kwds.get( 'deleted', None ) )
+ if deleted is not None:
+ query = query.filter( HistoryDatasetAssociation.deleted == bool( kwds['deleted'] ) )
+ visible = galaxy.util.string_as_bool_or_none( kwds.get( 'visible', None ) )
+ if visible is not None:
+ query = query.filter( HistoryDatasetAssociation.visible == bool( kwds['visible'] ) )
+ if 'ids' in kwds:
+ ids = kwds['ids']
+ max_in_filter_length = kwds.get('max_in_filter_length', MAX_IN_FILTER_LENGTH)
+ if len(ids) < max_in_filter_length:
+ query = query.filter( HistoryDatasetAssociation.id.in_(ids) )
+ else:
+ python_filter = lambda hda: hda.id in ids
+ if python_filter:
+ return ifilter(python_filter, query)
+ else:
+ return query
+
def copy_tags_from(self,target_user,source_history):
for src_shta in source_history.tags:
new_shta = src_shta.copy()
diff -r 90a639389de4a4348a32e21d87a13de4b1a72f56 -r 8e448f2ac955e6d434476b18c59b2cffcfc55404 lib/galaxy/webapps/galaxy/api/history_contents.py
--- a/lib/galaxy/webapps/galaxy/api/history_contents.py
+++ b/lib/galaxy/webapps/galaxy/api/history_contents.py
@@ -51,47 +51,28 @@
else:
history = self.get_history( trans, history_id, check_ownership=True, check_accessible=True )
- # if ids, return _FULL_ data (as show) for each id passed
+ contents_kwds = {}
if ids:
- ids = ids.split( ',' )
- for index, hda in enumerate( history.datasets ):
- encoded_hda_id = trans.security.encode_id( hda.id )
- if encoded_hda_id in ids:
- #TODO: share code with show
- rval.append( self._detailed_hda_dict( trans, hda ) )
-
- # if no ids passed, return a _SUMMARY_ of _all_ datasets in the history
+ ids = map( lambda id: trans.security.decode_id( id ), ids.split( ',' ) )
+ contents_kwds[ 'ids' ] = ids
+ # If explicit ids given, always used detailed result.
+ details = 'all'
else:
+ contents_kwds[ 'deleted' ] = kwd.get( 'deleted', None )
+ contents_kwds[ 'visible' ] = kwd.get( 'visible', None )
# details param allows a mixed set of summary and detailed hdas
#TODO: this is getting convoluted due to backwards compat
details = kwd.get( 'details', None ) or []
if details and details != 'all':
details = util.listify( details )
- # by default return all datasets - even if deleted or hidden (defaulting the next switches to None)
- # if specified return those datasets that match the setting
- # backwards compat
- return_deleted = util.string_as_bool_or_none( kwd.get( 'deleted', None ) )
- return_visible = util.string_as_bool_or_none( kwd.get( 'visible', None ) )
-
- for hda in history.datasets:
- # if either return_ setting has been requested (!= None), skip hdas that don't match the request
- if return_deleted is not None:
- if( ( return_deleted and not hda.deleted )
- or ( not return_deleted and hda.deleted ) ):
- continue
- if return_visible is not None:
- if( ( return_visible and not hda.visible )
- or ( not return_visible and hda.visible ) ):
- continue
-
- encoded_hda_id = trans.security.encode_id( hda.id )
- if( ( encoded_hda_id in details )
- or ( details == 'all' ) ):
- rval.append( self._detailed_hda_dict( trans, hda ) )
- else:
- rval.append( self._summary_hda_dict( trans, history_id, hda ) )
-
+ for hda in history.contents_iter( **contents_kwds ):
+ encoded_hda_id = trans.security.encode_id( hda.id )
+ detailed = details == 'all' or ( encoded_hda_id in details )
+ if detailed:
+ rval.append( self._detailed_hda_dict( trans, hda ) )
+ else:
+ rval.append( self._summary_hda_dict( trans, history_id, hda ) )
except Exception, e:
# for errors that are not specific to one hda (history lookup or summary list)
rval = "Error in history API at listing contents: " + str( e )
diff -r 90a639389de4a4348a32e21d87a13de4b1a72f56 -r 8e448f2ac955e6d434476b18c59b2cffcfc55404 test/unit/test_galaxy_mapping.py
--- a/test/unit/test_galaxy_mapping.py
+++ b/test/unit/test_galaxy_mapping.py
@@ -180,6 +180,39 @@
assert hist1.name == "History 2b"
# gvk TODO need to ad test for GalaxySessions, but not yet sure what they should look like.
+ def test_history_contents( self ):
+ model = self.model
+ u = model.User( email="contents(a)foo.bar.baz", password="password" )
+ # gs = model.GalaxySession()
+ h1 = model.History( name="HistoryContentsHistory1", user=u)
+
+ self.persist( u, h1, expunge=True )
+
+ d1 = self.new_hda( h1, name="1" )
+ d2 = self.new_hda( h1, name="2", visible=False )
+ d3 = self.new_hda( h1, name="3", deleted=True )
+ d4 = self.new_hda( h1, name="4", visible=False, deleted=True )
+
+ def contents_iter_names(**kwds):
+ history = model.context.query( model.History ).filter(
+ model.History.name == "HistoryContentsHistory1"
+ ).first()
+ return set( map( lambda hda: hda.name, history.contents_iter( **kwds ) ) )
+
+ assert contents_iter_names() == set( [ "1", "2", "3", "4" ] )
+ assert contents_iter_names( deleted=False ) == set( [ "1", "2" ] )
+ assert contents_iter_names( visible=True ) == set( [ "1", "3" ] )
+ assert contents_iter_names( visible=False ) == set( [ "2", "4" ] )
+ assert contents_iter_names( deleted=True, visible=False ) == set( [ "4" ] )
+
+ assert contents_iter_names( ids=[ d1.id, d2.id, d3.id, d4.id ] ) == set( [ "1", "2", "3", "4" ] )
+ assert contents_iter_names( ids=[ d1.id, d2.id, d3.id, d4.id ], max_in_filter_length=1 ) == set( [ "1", "2", "3", "4" ] )
+
+ assert contents_iter_names( ids=[ d1.id, d3.id ] ) == set( [ "1", "3" ] )
+
+ def new_hda( self, history, **kwds ):
+ return self.persist( self.model.HistoryDatasetAssociation( history=history, create_dataset=True, sa_session=self.model.session, **kwds ), flush=True )
+
@classmethod
def setUpClass(cls):
# Start the database and connect the mapping
@@ -191,10 +224,16 @@
return cls.model.session.query( type )
@classmethod
- def persist(cls, *args):
+ def persist(cls, *args, **kwargs):
+ session = cls.model.session
+ flush = kwargs.get('flush', True)
for arg in args:
- cls.model.session.add( arg )
- cls.expunge()
+ session.add( arg )
+ if flush:
+ session.flush()
+ if kwargs.get('expunge', not flush):
+ cls.expunge()
+ return arg # Return last or only arg.
@classmethod
def expunge(cls):
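The reworked `persist` helper in the hunk above adds `flush`/`expunge` keywords so callers can control when objects hit the database. A minimal standalone sketch of the same pattern, using a hypothetical `FakeSession` stand-in rather than Galaxy's actual SQLAlchemy session:

```python
class FakeSession:
    """Minimal stand-in for a SQLAlchemy session (illustration only)."""
    def __init__(self):
        self.added = []
        self.flushed = False
        self.expunged = False

    def add(self, obj):
        self.added.append(obj)

    def flush(self):
        self.flushed = True

    def expunge_all(self):
        self.expunged = True


def persist(session, *args, **kwargs):
    # Mirrors the commit's logic: add each object, flush by default,
    # and expunge only when requested (or by default when flush was skipped).
    flush = kwargs.get('flush', True)
    for arg in args:
        session.add(arg)
    if flush:
        session.flush()
    if kwargs.get('expunge', not flush):
        session.expunge_all()
    return arg  # Return last or only arg, as in the commit.
```

Note the defaulting: `expunge` defaults to `not flush`, so a plain `persist(session, obj)` flushes without expunging, while `persist(session, obj, flush=False)` expunges instead.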
Repository URL: https://bitbucket.org/galaxy/galaxy-central/
--
This is a commit notification from bitbucket.org. You are receiving
this because you have the service enabled, addressing the recipient of
this email.
1
0
2 new commits in galaxy-central:
https://bitbucket.org/galaxy/galaxy-central/commits/ee365d4406e6/
Changeset: ee365d4406e6
User: jmchilton
Date: 2013-12-13 06:25:25
Summary: casperjs: Fix user login, registration, logout for recent masthead changes...
They are generated in JavaScript instead of HTML now, so I don't think they are available when the page loads. My fix is to wait for the User menu to appear before attempting these operations.
Affected #: 4 files
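The fix below is a wait-then-act pattern: poll for a piece of client-rendered UI before interacting with it. Sketched outside CasperJS as a generic polling helper (the names and the simulated page here are illustrative, not part of SpaceGhost):

```python
import time


def wait_for(predicate, timeout=5.0, interval=0.05):
    """Poll until predicate() is truthy or the timeout elapses.

    Returns True on success, False on timeout -- analogous to waiting
    for masthead text to appear before clicking its menu entries.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if predicate():
            return True
        time.sleep(interval)
    return False


# Simulate a page whose 'User' menu is rendered by client-side JS
# some time after the initial load:
page_text = []
page_text.append('User')  # the masthead menu "appears"
assert wait_for(lambda: 'User' in page_text)
```

The key design point, same as in the commit, is that the click is issued from inside the wait callback, so it can never race the masthead render.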
diff -r b3eeedb81a1e0db8d643af0cbba68f55eb49cdb0 -r ee365d4406e604b573c3011e5b642dd7075f561d test/casperjs/api-configuration-tests.js
--- a/test/casperjs/api-configuration-tests.js
+++ b/test/casperjs/api-configuration-tests.js
@@ -102,18 +102,19 @@
//}, function failedLoginRegister(){
// this.info( 'Admin level configuration API tests not run: no admin account available' );
spaceghost.thenOpen( spaceghost.baseUrl ).then( function(){
+ spaceghost.waitForMasthead( function() {
+ if( spaceghost.user.userIsAdmin() ){
+ this.test.comment( 'index should get a (full) list of configuration settings '
+ + 'when requested by an admin user' );
+ configIndex = this.api.configuration.index();
+ this.debug( this.jsonStr( configIndex ) );
+ this.test.assert( utils.isObject( configIndex ), "index returned an object" );
+ this.test.assert( hasKeys( configIndex, adminKeys ), 'Has the proper keys' );
- if( spaceghost.user.userIsAdmin() ){
- this.test.comment( 'index should get a (full) list of configuration settings '
- + 'when requested by an admin user' );
- configIndex = this.api.configuration.index();
- this.debug( this.jsonStr( configIndex ) );
- this.test.assert( utils.isObject( configIndex ), "index returned an object" );
- this.test.assert( hasKeys( configIndex, adminKeys ), 'Has the proper keys' );
-
- } else {
- this.info( 'Admin level configuration API tests not run: no admin account available' );
- }
+ } else {
+ this.info( 'Admin level configuration API tests not run: no admin account available' );
+ }
+ });
});
// ===================================================================
diff -r b3eeedb81a1e0db8d643af0cbba68f55eb49cdb0 -r ee365d4406e604b573c3011e5b642dd7075f561d test/casperjs/modules/user.js
--- a/test/casperjs/modules/user.js
+++ b/test/casperjs/modules/user.js
@@ -44,17 +44,18 @@
};
spaceghost.thenOpen( spaceghost.baseUrl, function(){
- this.clickLabel( spaceghost.data.labels.masthead.menus.user );
- this.clickLabel( spaceghost.data.labels.masthead.userMenu.register );
+ this.waitForMasthead( function() {
+ this.clickLabel( spaceghost.data.labels.masthead.menus.user );
+ this.clickLabel( spaceghost.data.labels.masthead.userMenu.register );
- this.withMainPanel( function mainBeforeRegister(){
- spaceghost.debug( '(' + spaceghost.getCurrentUrl() + ') registering user:\n'
- + spaceghost.jsonStr( userInfo ) );
- this.fill( spaceghost.data.selectors.registrationPage.form, userInfo, false );
- // need manual submit (not a normal html form)
- this.click( xpath( spaceghost.data.selectors.registrationPage.submit_xpath ) );
+ this.withMainPanel( function mainBeforeRegister(){
+ spaceghost.debug( '(' + spaceghost.getCurrentUrl() + ') registering user:\n'
+ + spaceghost.jsonStr( userInfo ) );
+ this.fill( spaceghost.data.selectors.registrationPage.form, userInfo, false );
+ // need manual submit (not a normal html form)
+ this.click( xpath( spaceghost.data.selectors.registrationPage.submit_xpath ) );
+ });
});
-
//// debugging
//spaceghost.withFrame( spaceghost.data.selectors.frames.main, function mainAfterRegister(){
// var messageInfo = spaceghost.getElementInfo( spaceghost.data.selectors.messages.all );
@@ -79,24 +80,26 @@
};
spaceghost.thenOpen( spaceghost.baseUrl, function(){
- spaceghost.clickLabel( spaceghost.data.labels.masthead.menus.user );
- spaceghost.clickLabel( spaceghost.data.labels.masthead.userMenu.login );
+ spaceghost.waitForMasthead( function() {
+ spaceghost.clickLabel( spaceghost.data.labels.masthead.menus.user );
+ spaceghost.clickLabel( spaceghost.data.labels.masthead.userMenu.login );
- spaceghost.withMainPanel( function mainBeforeLogin(){
- spaceghost.debug( '(' + spaceghost.getCurrentUrl() + ') logging in user:\n'
- + spaceghost.jsonStr( loginInfo ) );
- spaceghost.fill( spaceghost.data.selectors.loginPage.form, loginInfo, false );
- spaceghost.click( xpath( spaceghost.data.selectors.loginPage.submit_xpath ) );
+ spaceghost.withMainPanel( function mainBeforeLogin(){
+ spaceghost.debug( '(' + spaceghost.getCurrentUrl() + ') logging in user:\n'
+ + spaceghost.jsonStr( loginInfo ) );
+ spaceghost.fill( spaceghost.data.selectors.loginPage.form, loginInfo, false );
+ spaceghost.click( xpath( spaceghost.data.selectors.loginPage.submit_xpath ) );
+ });
+
+ //// debugging
+ //spaceghost.withFrame( spaceghost.data.selectors.frames.main, function mainAfterLogin(){
+ // //TODO: prob. could use a more generalized form of this for url breakdown/checking
+ // if( spaceghost.getCurrentUrl().search( spaceghost.data.selectors.loginPage.url_regex ) != -1 ){
+ // var messageInfo = spaceghost.getElementInfo( spaceghost.data.selectors.messages.all );
+ // spaceghost.debug( 'post login message:\n' + spaceghost.jsonStr( messageInfo ) );
+ // }
+ //});
});
-
- //// debugging
- //spaceghost.withFrame( spaceghost.data.selectors.frames.main, function mainAfterLogin(){
- // //TODO: prob. could use a more generalized form of this for url breakdown/checking
- // if( spaceghost.getCurrentUrl().search( spaceghost.data.selectors.loginPage.url_regex ) != -1 ){
- // var messageInfo = spaceghost.getElementInfo( spaceghost.data.selectors.messages.all );
- // spaceghost.debug( 'post login message:\n' + spaceghost.jsonStr( messageInfo ) );
- // }
- //});
});
};
@@ -133,17 +136,19 @@
var spaceghost = this.spaceghost;
this._submitLogin( email, password );
- spaceghost.withMainPanel( function mainAfterLogin(){
- if( spaceghost.getCurrentUrl().search( spaceghost.data.selectors.loginPage.url_regex ) !== -1 ){
- var messageInfo = spaceghost.getElementInfo( spaceghost.data.selectors.messages.all );
- if( messageInfo && messageInfo.attributes[ 'class' ] === 'errormessage' ){
- this.warning( 'Login failed: ' + messageInfo.text );
- throw new spaceghost.GalaxyError( 'LoginError: ' + messageInfo.text );
+ spaceghost.waitForMasthead( function() {
+ spaceghost.withMainPanel( function mainAfterLogin(){
+ if( spaceghost.getCurrentUrl().search( spaceghost.data.selectors.loginPage.url_regex ) !== -1 ){
+ var messageInfo = spaceghost.getElementInfo( spaceghost.data.selectors.messages.all );
+ if( messageInfo && messageInfo.attributes[ 'class' ] === 'errormessage' ){
+ this.warning( 'Login failed: ' + messageInfo.text );
+ throw new spaceghost.GalaxyError( 'LoginError: ' + messageInfo.text );
+ }
}
- }
- if( spaceghost.user.loggedInAs() === email ){
- spaceghost.info( 'logged in as ' + email );
- }
+ if( spaceghost.user.loggedInAs() === email ){
+ spaceghost.info( 'logged in as ' + email );
+ }
+ });
});
return spaceghost;
};
@@ -173,8 +178,10 @@
spaceghost.thenOpen( spaceghost.baseUrl, function(){
this.info( 'user logging out' );
//TODO: handle already logged out
- spaceghost.clickLabel( spaceghost.data.labels.masthead.menus.user );
- spaceghost.clickLabel( spaceghost.data.labels.masthead.userMenu.logout );
+ spaceghost.waitForMasthead( function _logout() {
+ spaceghost.clickLabel( spaceghost.data.labels.masthead.menus.user );
+ spaceghost.clickLabel( spaceghost.data.labels.masthead.userMenu.logout );
+ });
});
return spaceghost;
};
diff -r b3eeedb81a1e0db8d643af0cbba68f55eb49cdb0 -r ee365d4406e604b573c3011e5b642dd7075f561d test/casperjs/registration-tests.js
--- a/test/casperjs/registration-tests.js
+++ b/test/casperjs/registration-tests.js
@@ -62,18 +62,24 @@
this.user._submitRegistration( email, password, username, confirm );
});
spaceghost.thenOpen( spaceghost.baseUrl, function(){
- this.clickLabel( 'User' );
- this.test.assertSelectorHasText( 'a #user-email', email, '#user-email === ' + email );
+ this.waitForMasthead( function() {
+ this.clickLabel( 'User' );
+ this.test.assertSelectorHasText( 'a #user-email', email, '#user-email === ' + email );
+ });
});
// ------------------------------------------------------------------- log out that user
spaceghost.then( function(){
- this.test.comment( 'logging out user: ' + email );
- this.user.logout();
+ this.waitForMasthead( function() {
+ this.test.comment( 'logging out user: ' + email );
+ this.user.logout();
+ });
});
spaceghost.then( function(){
- this.debug( 'email:' + this.getElementInfo( 'a #user-email' ).html );
- this.test.assert( !this.getElementInfo( 'a #user-email' ).html, '#user-email is empty' );
+ this.waitForMasthead( function() {
+ this.debug( 'email:' + this.getElementInfo( 'a #user-email' ).html );
+ this.test.assert( !this.getElementInfo( 'a #user-email' ).html, '#user-email is empty' );
+ });
});
// ------------------------------------------------------------------- bad user registrations
diff -r b3eeedb81a1e0db8d643af0cbba68f55eb49cdb0 -r ee365d4406e604b573c3011e5b642dd7075f561d test/casperjs/spaceghost.js
--- a/test/casperjs/spaceghost.js
+++ b/test/casperjs/spaceghost.js
@@ -1103,6 +1103,10 @@
return match[2];
};
+SpaceGhost.prototype.waitForMasthead = function wait( then ) {
+ return this.waitForText( this.data.labels.masthead.menus.user, then );
+}
+
// =================================================================== TEST DATA
/** General use selectors, labels, and text. Kept here to allow a centralized location.
https://bitbucket.org/galaxy/galaxy-central/commits/90a639389de4/
Changeset: 90a639389de4
User: jmchilton
Date: 2013-12-17 14:30:24
Summary: Merged in jmchilton/galaxy-central-fork-1 (pull request #274)
casperjs: Fix user login, registration, logout for recent masthead changes...
Affected #: 4 files
diff -r af87cdcf36084831fce7d99e621b1efc943d2aff -r 90a639389de4a4348a32e21d87a13de4b1a72f56 test/casperjs/api-configuration-tests.js
--- a/test/casperjs/api-configuration-tests.js
+++ b/test/casperjs/api-configuration-tests.js
@@ -102,18 +102,19 @@
//}, function failedLoginRegister(){
// this.info( 'Admin level configuration API tests not run: no admin account available' );
spaceghost.thenOpen( spaceghost.baseUrl ).then( function(){
+ spaceghost.waitForMasthead( function() {
+ if( spaceghost.user.userIsAdmin() ){
+ this.test.comment( 'index should get a (full) list of configuration settings '
+ + 'when requested by an admin user' );
+ configIndex = this.api.configuration.index();
+ this.debug( this.jsonStr( configIndex ) );
+ this.test.assert( utils.isObject( configIndex ), "index returned an object" );
+ this.test.assert( hasKeys( configIndex, adminKeys ), 'Has the proper keys' );
- if( spaceghost.user.userIsAdmin() ){
- this.test.comment( 'index should get a (full) list of configuration settings '
- + 'when requested by an admin user' );
- configIndex = this.api.configuration.index();
- this.debug( this.jsonStr( configIndex ) );
- this.test.assert( utils.isObject( configIndex ), "index returned an object" );
- this.test.assert( hasKeys( configIndex, adminKeys ), 'Has the proper keys' );
-
- } else {
- this.info( 'Admin level configuration API tests not run: no admin account available' );
- }
+ } else {
+ this.info( 'Admin level configuration API tests not run: no admin account available' );
+ }
+ });
});
// ===================================================================
diff -r af87cdcf36084831fce7d99e621b1efc943d2aff -r 90a639389de4a4348a32e21d87a13de4b1a72f56 test/casperjs/modules/user.js
--- a/test/casperjs/modules/user.js
+++ b/test/casperjs/modules/user.js
@@ -44,17 +44,18 @@
};
spaceghost.thenOpen( spaceghost.baseUrl, function(){
- this.clickLabel( spaceghost.data.labels.masthead.menus.user );
- this.clickLabel( spaceghost.data.labels.masthead.userMenu.register );
+ this.waitForMasthead( function() {
+ this.clickLabel( spaceghost.data.labels.masthead.menus.user );
+ this.clickLabel( spaceghost.data.labels.masthead.userMenu.register );
- this.withMainPanel( function mainBeforeRegister(){
- spaceghost.debug( '(' + spaceghost.getCurrentUrl() + ') registering user:\n'
- + spaceghost.jsonStr( userInfo ) );
- this.fill( spaceghost.data.selectors.registrationPage.form, userInfo, false );
- // need manual submit (not a normal html form)
- this.click( xpath( spaceghost.data.selectors.registrationPage.submit_xpath ) );
+ this.withMainPanel( function mainBeforeRegister(){
+ spaceghost.debug( '(' + spaceghost.getCurrentUrl() + ') registering user:\n'
+ + spaceghost.jsonStr( userInfo ) );
+ this.fill( spaceghost.data.selectors.registrationPage.form, userInfo, false );
+ // need manual submit (not a normal html form)
+ this.click( xpath( spaceghost.data.selectors.registrationPage.submit_xpath ) );
+ });
});
-
//// debugging
//spaceghost.withFrame( spaceghost.data.selectors.frames.main, function mainAfterRegister(){
// var messageInfo = spaceghost.getElementInfo( spaceghost.data.selectors.messages.all );
@@ -79,24 +80,26 @@
};
spaceghost.thenOpen( spaceghost.baseUrl, function(){
- spaceghost.clickLabel( spaceghost.data.labels.masthead.menus.user );
- spaceghost.clickLabel( spaceghost.data.labels.masthead.userMenu.login );
+ spaceghost.waitForMasthead( function() {
+ spaceghost.clickLabel( spaceghost.data.labels.masthead.menus.user );
+ spaceghost.clickLabel( spaceghost.data.labels.masthead.userMenu.login );
- spaceghost.withMainPanel( function mainBeforeLogin(){
- spaceghost.debug( '(' + spaceghost.getCurrentUrl() + ') logging in user:\n'
- + spaceghost.jsonStr( loginInfo ) );
- spaceghost.fill( spaceghost.data.selectors.loginPage.form, loginInfo, false );
- spaceghost.click( xpath( spaceghost.data.selectors.loginPage.submit_xpath ) );
+ spaceghost.withMainPanel( function mainBeforeLogin(){
+ spaceghost.debug( '(' + spaceghost.getCurrentUrl() + ') logging in user:\n'
+ + spaceghost.jsonStr( loginInfo ) );
+ spaceghost.fill( spaceghost.data.selectors.loginPage.form, loginInfo, false );
+ spaceghost.click( xpath( spaceghost.data.selectors.loginPage.submit_xpath ) );
+ });
+
+ //// debugging
+ //spaceghost.withFrame( spaceghost.data.selectors.frames.main, function mainAfterLogin(){
+ // //TODO: prob. could use a more generalized form of this for url breakdown/checking
+ // if( spaceghost.getCurrentUrl().search( spaceghost.data.selectors.loginPage.url_regex ) != -1 ){
+ // var messageInfo = spaceghost.getElementInfo( spaceghost.data.selectors.messages.all );
+ // spaceghost.debug( 'post login message:\n' + spaceghost.jsonStr( messageInfo ) );
+ // }
+ //});
});
-
- //// debugging
- //spaceghost.withFrame( spaceghost.data.selectors.frames.main, function mainAfterLogin(){
- // //TODO: prob. could use a more generalized form of this for url breakdown/checking
- // if( spaceghost.getCurrentUrl().search( spaceghost.data.selectors.loginPage.url_regex ) != -1 ){
- // var messageInfo = spaceghost.getElementInfo( spaceghost.data.selectors.messages.all );
- // spaceghost.debug( 'post login message:\n' + spaceghost.jsonStr( messageInfo ) );
- // }
- //});
});
};
@@ -133,17 +136,19 @@
var spaceghost = this.spaceghost;
this._submitLogin( email, password );
- spaceghost.withMainPanel( function mainAfterLogin(){
- if( spaceghost.getCurrentUrl().search( spaceghost.data.selectors.loginPage.url_regex ) !== -1 ){
- var messageInfo = spaceghost.getElementInfo( spaceghost.data.selectors.messages.all );
- if( messageInfo && messageInfo.attributes[ 'class' ] === 'errormessage' ){
- this.warning( 'Login failed: ' + messageInfo.text );
- throw new spaceghost.GalaxyError( 'LoginError: ' + messageInfo.text );
+ spaceghost.waitForMasthead( function() {
+ spaceghost.withMainPanel( function mainAfterLogin(){
+ if( spaceghost.getCurrentUrl().search( spaceghost.data.selectors.loginPage.url_regex ) !== -1 ){
+ var messageInfo = spaceghost.getElementInfo( spaceghost.data.selectors.messages.all );
+ if( messageInfo && messageInfo.attributes[ 'class' ] === 'errormessage' ){
+ this.warning( 'Login failed: ' + messageInfo.text );
+ throw new spaceghost.GalaxyError( 'LoginError: ' + messageInfo.text );
+ }
}
- }
- if( spaceghost.user.loggedInAs() === email ){
- spaceghost.info( 'logged in as ' + email );
- }
+ if( spaceghost.user.loggedInAs() === email ){
+ spaceghost.info( 'logged in as ' + email );
+ }
+ });
});
return spaceghost;
};
@@ -173,8 +178,10 @@
spaceghost.thenOpen( spaceghost.baseUrl, function(){
this.info( 'user logging out' );
//TODO: handle already logged out
- spaceghost.clickLabel( spaceghost.data.labels.masthead.menus.user );
- spaceghost.clickLabel( spaceghost.data.labels.masthead.userMenu.logout );
+ spaceghost.waitForMasthead( function _logout() {
+ spaceghost.clickLabel( spaceghost.data.labels.masthead.menus.user );
+ spaceghost.clickLabel( spaceghost.data.labels.masthead.userMenu.logout );
+ });
});
return spaceghost;
};
diff -r af87cdcf36084831fce7d99e621b1efc943d2aff -r 90a639389de4a4348a32e21d87a13de4b1a72f56 test/casperjs/registration-tests.js
--- a/test/casperjs/registration-tests.js
+++ b/test/casperjs/registration-tests.js
@@ -62,18 +62,24 @@
this.user._submitRegistration( email, password, username, confirm );
});
spaceghost.thenOpen( spaceghost.baseUrl, function(){
- this.clickLabel( 'User' );
- this.test.assertSelectorHasText( 'a #user-email', email, '#user-email === ' + email );
+ this.waitForMasthead( function() {
+ this.clickLabel( 'User' );
+ this.test.assertSelectorHasText( 'a #user-email', email, '#user-email === ' + email );
+ });
});
// ------------------------------------------------------------------- log out that user
spaceghost.then( function(){
- this.test.comment( 'logging out user: ' + email );
- this.user.logout();
+ this.waitForMasthead( function() {
+ this.test.comment( 'logging out user: ' + email );
+ this.user.logout();
+ });
});
spaceghost.then( function(){
- this.debug( 'email:' + this.getElementInfo( 'a #user-email' ).html );
- this.test.assert( !this.getElementInfo( 'a #user-email' ).html, '#user-email is empty' );
+ this.waitForMasthead( function() {
+ this.debug( 'email:' + this.getElementInfo( 'a #user-email' ).html );
+ this.test.assert( !this.getElementInfo( 'a #user-email' ).html, '#user-email is empty' );
+ });
});
// ------------------------------------------------------------------- bad user registrations
diff -r af87cdcf36084831fce7d99e621b1efc943d2aff -r 90a639389de4a4348a32e21d87a13de4b1a72f56 test/casperjs/spaceghost.js
--- a/test/casperjs/spaceghost.js
+++ b/test/casperjs/spaceghost.js
@@ -1103,6 +1103,10 @@
return match[2];
};
+SpaceGhost.prototype.waitForMasthead = function wait( then ) {
+ return this.waitForText( this.data.labels.masthead.menus.user, then );
+}
+
// =================================================================== TEST DATA
/** General use selectors, labels, and text. Kept here to allow a centralized location.
commit/galaxy-central: greg: Eliminate deprecated repositories from the tool shed's install and test framework.
by commits-noreply@bitbucket.org 16 Dec '13
1 new commit in galaxy-central:
https://bitbucket.org/galaxy/galaxy-central/commits/af87cdcf3608/
Changeset: af87cdcf3608
User: greg
Date: 2013-12-17 04:06:16
Summary: Eliminate deprecated repositories from the tool shed's install and test framework.
Affected #: 1 file
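The filtering this commit adds amounts to a predicate over repository dicts: skip deprecated repositories, skip empty ones, and optionally restrict to the latest downloadable revision. A sketch of that predicate (the `asbool` helper and the `INITIAL_CHANGELOG_HASH` value are simplified stand-ins for Galaxy's `galaxy.util.asbool` and `suc.INITIAL_CHANGELOG_HASH`):

```python
def asbool(value):
    # Simplified stand-in for galaxy.util.asbool.
    return str(value).strip().lower() in ('true', 'yes', 'on', '1')


# Sentinel marking an empty repository (illustrative value).
INITIAL_CHANGELOG_HASH = '000000000000'


def should_test(repository_dict, changeset_revision, latest_revision_only=False):
    """Decide whether a repository revision belongs in the test run."""
    # Deprecated repositories are skipped: testing them matters only if
    # reproducibility is guaranteed, and it currently is not.
    if asbool(repository_dict.get('deprecated', False)):
        return False
    # Empty repositories are skipped.
    if changeset_revision == INITIAL_CHANGELOG_HASH:
        return False
    # Optionally test only the latest downloadable revision.
    if latest_revision_only:
        return changeset_revision == repository_dict.get('latest_revision')
    return True
```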
diff -r 422f9675cb5025e20b402ad92aa591a87ca3a883 -r af87cdcf36084831fce7d99e621b1efc943d2aff test/install_and_test_tool_shed_repositories/functional_tests.py
--- a/test/install_and_test_tool_shed_repositories/functional_tests.py
+++ b/test/install_and_test_tool_shed_repositories/functional_tests.py
@@ -176,14 +176,14 @@
else:
galaxy_encode_secret = os.environ[ 'GALAXY_INSTALL_TEST_SECRET' ]
-testing_single_repository = {}
+testing_single_repository_dict = {}
if 'repository_name' in os.environ and 'repository_owner' in os.environ:
- testing_single_repository[ 'name' ] = os.environ[ 'repository_name' ]
- testing_single_repository[ 'owner' ] = os.environ[ 'repository_owner' ]
+ testing_single_repository_dict[ 'name' ] = str( os.environ[ 'repository_name' ] )
+ testing_single_repository_dict[ 'owner' ] = str( os.environ[ 'repository_owner' ] )
if 'repository_revision' in os.environ:
- testing_single_repository[ 'changeset_revision' ] = os.environ[ 'repository_revision' ]
+ testing_single_repository_dict[ 'changeset_revision' ] = str( os.environ[ 'repository_revision' ] )
else:
- testing_single_repository[ 'changeset_revision' ] = None
+ testing_single_repository_dict[ 'changeset_revision' ] = None
class ReportResults( Plugin ):
@@ -364,6 +364,8 @@
log.debug( 'Testing is restricted to the latest downloadable revision in this test run.' )
repository_dicts = []
parts = [ 'repository_revisions' ]
+ # We'll filter out deprecated repositories from testing since testing them is necessary only if reproducibility
+ # is guaranteed and we currently do not guarantee reproducibility.
params = dict( do_not_test='false',
downloadable='true',
includes_tools='true',
@@ -381,21 +383,25 @@
if error_message:
log.debug( 'Error getting additional details from the API: %s' % str( error_message ) )
else:
- # Don't test empty repositories.
- changeset_revision = baseline_repository_dict[ 'changeset_revision' ]
- if changeset_revision != suc.INITIAL_CHANGELOG_HASH:
- # Merge the dictionary returned from /api/repository_revisions with the detailed repository_dict and
- # append it to the list of repository_dicts to install and test.
- if latest_revision_only:
- latest_revision = repository_dict[ 'latest_revision' ]
- if changeset_revision == latest_revision:
+ # Don't test deprecated repositories since testing them is necessary only if reproducibility is guaranteed
+ # and we are not currently guaranteeing reproducibility.
+ deprecated = asbool( repository_dict.get( 'deprecated', False ) )
+ if not deprecated:
+ # Don't test empty repositories.
+ changeset_revision = baseline_repository_dict[ 'changeset_revision' ]
+ if changeset_revision != suc.INITIAL_CHANGELOG_HASH:
+ # Merge the dictionary returned from /api/repository_revisions with the detailed repository_dict and
+ # append it to the list of repository_dicts to install and test.
+ if latest_revision_only:
+ latest_revision = repository_dict[ 'latest_revision' ]
+ if changeset_revision == latest_revision:
+ repository_dicts.append( dict( repository_dict.items() + baseline_repository_dict.items() ) )
+ else:
repository_dicts.append( dict( repository_dict.items() + baseline_repository_dict.items() ) )
- else:
- repository_dicts.append( dict( repository_dict.items() + baseline_repository_dict.items() ) )
- if testing_single_repository:
- tsr_name = testing_single_repository[ 'name' ]
- tsr_owner = testing_single_repository[ 'owner' ]
- tsr_changeset_revision = testing_single_repository[ 'changeset_revision' ]
+ if testing_single_repository_dict:
+ tsr_name = testing_single_repository_dict[ 'name' ]
+ tsr_owner = testing_single_repository_dict[ 'owner' ]
+ tsr_changeset_revision = testing_single_repository_dict[ 'changeset_revision' ]
log.debug( 'Testing single repository with name %s and owner %s.' % ( str( tsr_name ), str( tsr_owner ) ) )
for repository_to_install in repository_dicts:
rti_name = repository_to_install[ 'name' ]
commit/galaxy-central: greg: Handle exceptions when uninstalling repositories in the tool shed's install and test framework.
by commits-noreply@bitbucket.org 16 Dec '13
1 new commit in galaxy-central:
https://bitbucket.org/galaxy/galaxy-central/commits/422f9675cb50/
Changeset: 422f9675cb50
User: greg
Date: 2013-12-17 01:27:28
Summary: Handle exceptions when uninstalling repositories in the tool shed's install and test framework.
Affected #: 1 file
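The change below boils down to wrapping each uninstall attempt in try/except so that one failure is logged and the loop continues over the remaining repositories. A minimal sketch of that shape (the `uninstall` callable here is hypothetical, standing in for the real nose-driven test run):

```python
import logging

log = logging.getLogger(__name__)


def uninstall_all(repositories, uninstall):
    """Attempt every uninstall; log failures instead of aborting the run."""
    succeeded = []
    for repo in repositories:
        try:
            uninstall(repo)
            succeeded.append(repo)
        except Exception:
            # Keep going: uninstall whatever we can, as in the commit.
            log.exception('Uninstallation of repository %s failed.', repo)
    return succeeded
```

This matches the commit's intent of not letting a single broken uninstall leave the rest of the framework's cleanup undone. (The original is Python 2, hence `except Exception, e`; the sketch uses Python 3 syntax.)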
diff -r 46521f5722c621a8b84d2a8101af87d82cc5eba8 -r 422f9675cb5025e20b402ad92aa591a87ca3a883 test/install_and_test_tool_shed_repositories/functional_tests.py
--- a/test/install_and_test_tool_shed_repositories/functional_tests.py
+++ b/test/install_and_test_tool_shed_repositories/functional_tests.py
@@ -1220,9 +1220,9 @@
for owner, grouped_repository_dicts in repository_dicts_by_owner.items():
print "# "
for repository_dict in grouped_repository_dicts:
- name = repository_dict[ 'name' ]
- owner = repository_dict[ 'owner' ]
- changeset_revision = repository_dict[ 'changeset_revision' ]
+ name = str( repository_dict[ 'name' ] )
+ owner = str( repository_dict[ 'owner' ] )
+ changeset_revision = str( repository_dict[ 'changeset_revision' ] )
print "# Revision %s of repository %s owned by %s" % ( changeset_revision, name, owner )
def test_repository_tools( app, repository, repository_dict, tool_test_results_dicts, tool_test_results_dict, results_dict ):
@@ -1328,15 +1328,20 @@
owner=owner,
changeset_revision=changeset_revision )
log.debug( 'Revision %s of repository %s owned by %s selected for uninstallation.' % ( changeset_revision, name, owner ) )
- test_install_repositories.generate_uninstall_method( uninstall_repository_dict )
- # Set up nose to run the generated uninstall method as a functional test.
- test_config = nose.config.Config( env=os.environ, plugins=nose.plugins.manager.DefaultPluginManager() )
- test_config.configure( sys.argv )
- # Run the uninstall method. This method uses the Galaxy web interface to uninstall the previously installed
- # repository and all of its repository dependencies, deleting each of them from disk.
- result, _ = run_tests( test_config )
- success = result.wasSuccessful()
- if success:
+ try:
+ test_install_repositories.generate_uninstall_method( uninstall_repository_dict )
+ # Set up nose to run the generated uninstall method as a functional test.
+ test_config = nose.config.Config( env=os.environ, plugins=nose.plugins.manager.DefaultPluginManager() )
+ test_config.configure( sys.argv )
+ # Run the uninstall method. This method uses the Galaxy web interface to uninstall the previously installed
+ # repository and all of its repository dependencies, deleting each of them from disk.
+ result, _ = run_tests( test_config )
+ repository_uninstall_successful = result.wasSuccessful()
+ except Exception, e:
+ repository_uninstall_successful = False
+ log.exception( 'Uninstallation of revision %s of repository %s owned by %s failed: %s.' % \
+ ( rd_changeset_revision, rd_name, rd_owner, str( e ) ) )
+ if repository_uninstall_successful:
# Now that the repository is uninstalled we can attempt to uninstall each of its repository dependencies.
# We have to do this through Twill in order to maintain app.toolbox and shed_tool_conf.xml in a state that
# is valid for future tests. Since some of the repository's repository dependencies may require other of
@@ -1356,15 +1361,19 @@
( rd_changeset_revision, rd_name, rd_owner ) )
# Generate a test method to uninstall the repository dependency through the embedded Galaxy application's
# web interface.
- test_install_repositories.generate_uninstall_method( uninstall_repository_dict )
- # Set up nose to run the generated uninstall method as a functional test.
- test_config = nose.config.Config( env=os.environ, plugins=nose.plugins.manager.DefaultPluginManager() )
- test_config.configure( sys.argv )
- # Run the uninstall method.
- result, _ = run_tests( test_config )
- success = result.wasSuccessful()
- if not success:
- log.debug( 'Uninstallation of revision %s of repository %s owned by %s failed: %s' % \
+ try:
+ test_install_repositories.generate_uninstall_method( uninstall_repository_dict )
+ # Set up nose to run the generated uninstall method as a functional test.
+ test_config = nose.config.Config( env=os.environ, plugins=nose.plugins.manager.DefaultPluginManager() )
+ test_config.configure( sys.argv )
+ # Run the uninstall method.
+ result, _ = run_tests( test_config )
+ if not result.wasSuccessful():
+ # We won't set ok here because we'll continue to uninstall whatever we can.
+ log.debug( 'Uninstallation of revision %s of repository %s owned by %s failed.' % \
+ ( rd_changeset_revision, rd_name, rd_owner ) )
+ except Exception, e:
+ log.exception( 'Uninstallation of revision %s of repository %s owned by %s failed: %s.' % \
( rd_changeset_revision, rd_name, rd_owner, str( e ) ) )
else:
log.debug( 'Uninstallation of revision %s of repository %s owned by %s failed.' % ( changeset_revision, name, owner ) )