galaxy-commits
August 2012
commit/galaxy-central: dannon: Fix chunk-serving logic for tabular files.
by Bitbucket 17 Aug '12
1 new commit in galaxy-central:
https://bitbucket.org/galaxy/galaxy-central/changeset/3840c86e34b7/
changeset: 3840c86e34b7
user: dannon
date: 2012-08-17 19:55:37
summary: Fix chunk-serving logic for tabular files.
affected #: 1 file
diff -r 9e9e104ad5c93a2268bb00cfae09b57a31ece13c -r 3840c86e34b77bbd0d7c2da0c4ff3e5004dc0fe4 lib/galaxy/datatypes/tabular.py
--- a/lib/galaxy/datatypes/tabular.py
+++ b/lib/galaxy/datatypes/tabular.py
@@ -264,10 +264,10 @@
def display_data(self, trans, dataset, preview=False, filename=None, to_ext=None, chunk=None):
#TODO Prevent failure when displaying extremely long > 50kb lines.
+ if chunk:
+ return self.get_chunk(trans, dataset, chunk)
if to_ext or not preview:
return self._serve_raw(trans, dataset, to_ext)
- if chunk:
- return self.get_chunk(trans, dataset, chunk)
else:
column_names = 'null'
if dataset.metadata.column_names:
@@ -644,4 +644,5 @@
"""
file_ext='fli'
MetadataElement( name="columns", default=2, desc="Number of columns", readonly=True, visible=False )
- MetadataElement( name="column_types", default=['str', 'str'], param=metadata.ColumnTypesParameter, desc="Column types", readonly=True, visible=False, no_value=[] )
\ No newline at end of file
+ MetadataElement( name="column_types", default=['str', 'str'], param=metadata.ColumnTypesParameter, desc="Column types", readonly=True, visible=False, no_value=[] )
+
Repository URL: https://bitbucket.org/galaxy/galaxy-central/
--
This is a commit notification from bitbucket.org. You are receiving
this because you have the service enabled, addressing the recipient of
this email.
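
The reordering above is easier to see in isolation. Below is a minimal, self-contained sketch (illustrative stand-ins, not Galaxy's actual helpers) of the fixed dispatch: chunk requests are answered before the raw-download branch, which previously swallowed them whenever to_ext was set or preview was off.

def get_chunk(dataset, chunk):
    return "chunk %s of %s" % (chunk, dataset)

def serve_raw(dataset, to_ext):
    return "raw %s as %s" % (dataset, to_ext)

def render_preview(dataset):
    return "preview of %s" % dataset

def display_data(dataset, preview=False, to_ext=None, chunk=None):
    # Chunked requests are checked first; before this commit they were
    # evaluated after the raw-download branch and never reached when
    # to_ext was set or preview was False.
    if chunk:
        return get_chunk(dataset, chunk)
    if to_ext or not preview:
        return serve_raw(dataset, to_ext)
    return render_preview(dataset)

print(display_data("table.tsv", preview=False, chunk=3))   # -> chunk 3 of table.tsv
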
commit/galaxy-central: jgoecks: Add feature/attribute name indexing framework to converters. Provide full text indexing for GFF attributes.
by Bitbucket 16 Aug '12
1 new commit in galaxy-central:
https://bitbucket.org/galaxy/galaxy-central/changeset/9e9e104ad5c9/
changeset: 9e9e104ad5c9
user: jgoecks
date: 2012-08-16 22:32:00
summary: Add feature/attribute name indexing framework to converters. Provide full text indexing for GFF attributes.
affected #: 7 files
diff -r 0bfd0f25956e3be0f04f68c9e05c1d07f2948f98 -r 9e9e104ad5c93a2268bb00cfae09b57a31ece13c datatypes_conf.xml.sample
--- a/datatypes_conf.xml.sample
+++ b/datatypes_conf.xml.sample
@@ -4,6 +4,7 @@
<datatype extension="ab1" type="galaxy.datatypes.binary:Ab1" mimetype="application/octet-stream" display_in_upload="true"/><datatype extension="afg" type="galaxy.datatypes.assembly:Amos" display_in_upload="false"/><datatype extension="axt" type="galaxy.datatypes.sequence:Axt" display_in_upload="true"/>
+ <datatype extension="fli" type="galaxy.datatypes.tabular:FeatureLocationIndex" display_in_upload="false"/><datatype extension="bam" type="galaxy.datatypes.binary:Bam" mimetype="application/octet-stream" display_in_upload="true"><converter file="bam_to_bai.xml" target_datatype="bai"/><converter file="bam_to_summary_tree_converter.xml" target_datatype="summary_tree" depends_on="bai"/>
@@ -79,6 +80,7 @@
<converter file="gff_to_bed_converter.xml" target_datatype="bed"/><converter file="gff_to_interval_index_converter.xml" target_datatype="interval_index"/><converter file="gff_to_summary_tree_converter.xml" target_datatype="summary_tree"/>
+ <converter file="gff_to_fli_converter.xml" target_datatype="fli"/><display file="ensembl/ensembl_gff.xml" inherit="True"/><!-- <display file="gbrowse/gbrowse_gff.xml" inherit="True" /> --></datatype>
diff -r 0bfd0f25956e3be0f04f68c9e05c1d07f2948f98 -r 9e9e104ad5c93a2268bb00cfae09b57a31ece13c lib/galaxy/datatypes/converters/gff_to_fli.py
--- /dev/null
+++ b/lib/galaxy/datatypes/converters/gff_to_fli.py
@@ -0,0 +1,53 @@
+'''
+Creates a feature location index for a given GFF file.
+'''
+
+import sys
+from galaxy import eggs
+from galaxy.datatypes.util.gff_util import read_unordered_gtf, convert_gff_coords_to_bed
+
+# Process arguments.
+in_fname = sys.argv[1]
+out_fname = sys.argv[2]
+
+# Create dict of name-location pairings.
+name_loc_dict = {}
+for feature in read_unordered_gtf( open( in_fname, 'r' ) ):
+ for name in feature.attributes:
+ val = feature.attributes[ name ]
+ try:
+ float( val )
+ continue
+ except:
+ convert_gff_coords_to_bed( feature )
+ # Value is not a number, so it can be indexed.
+ if val not in name_loc_dict:
+ # Value is not in dictionary.
+ name_loc_dict[ val ] = {
+ 'contig': feature.chrom,
+ 'start': feature.start,
+ 'end': feature.end
+ }
+ else:
+ # Value already in dictionary, so update dictionary.
+ loc = name_loc_dict[ val ]
+ if feature.start < loc[ 'start' ]:
+ loc[ 'start' ] = feature.start
+ if feature.end > loc[ 'end' ]:
+ loc[ 'end' ] = feature.end
+
+# Print name, loc in sorted order.
+out = open( out_fname, 'w' )
+max_len = 0
+entries = []
+for name in sorted( name_loc_dict.iterkeys() ):
+ loc = name_loc_dict[ name ]
+ entry = '%s\t%s' % ( name, '%s:%i-%i' % ( loc[ 'contig' ], loc[ 'start' ], loc[ 'end' ] ) )
+ if len( entry ) > max_len:
+ max_len = len( entry )
+ entries.append( entry )
+
+out.write( str( max_len + 1 ).ljust( max_len ) + '\n' )
+for entry in entries:
+ out.write( entry.ljust( max_len ) + '\n' )
+out.close()
\ No newline at end of file
diff -r 0bfd0f25956e3be0f04f68c9e05c1d07f2948f98 -r 9e9e104ad5c93a2268bb00cfae09b57a31ece13c lib/galaxy/datatypes/converters/gff_to_fli_converter.xml
--- /dev/null
+++ b/lib/galaxy/datatypes/converters/gff_to_fli_converter.xml
@@ -0,0 +1,13 @@
+<tool id="CONVERTER_gff_to_fli_0" name="Convert GFF to Feature Location Index">
+ <!-- <description>__NOT_USED_CURRENTLY_FOR_CONVERTERS__</description> -->
+ <!-- Used on the metadata edit page. -->
+ <command interpreter="python">gff_to_fli.py $input1 $output1</command>
+ <inputs>
+ <param format="gff" name="input1" type="data" label="Choose GFF file"/>
+ </inputs>
+ <outputs>
+ <data format="fli" name="output1"/>
+ </outputs>
+ <help>
+ </help>
+</tool>
diff -r 0bfd0f25956e3be0f04f68c9e05c1d07f2948f98 -r 9e9e104ad5c93a2268bb00cfae09b57a31ece13c lib/galaxy/datatypes/tabular.py
--- a/lib/galaxy/datatypes/tabular.py
+++ b/lib/galaxy/datatypes/tabular.py
@@ -638,3 +638,10 @@
dataset.metadata.reads = reads.keys()
+class FeatureLocationIndex( Tabular ):
+ """
+ An index that stores feature locations in tabular format.
+ """
+ file_ext='fli'
+ MetadataElement( name="columns", default=2, desc="Number of columns", readonly=True, visible=False )
+ MetadataElement( name="column_types", default=['str', 'str'], param=metadata.ColumnTypesParameter, desc="Column types", readonly=True, visible=False, no_value=[] )
\ No newline at end of file
diff -r 0bfd0f25956e3be0f04f68c9e05c1d07f2948f98 -r 9e9e104ad5c93a2268bb00cfae09b57a31ece13c lib/galaxy/visualization/tracks/data_providers.py
--- a/lib/galaxy/visualization/tracks/data_providers.py
+++ b/lib/galaxy/visualization/tracks/data_providers.py
@@ -2,7 +2,7 @@
Data providers for tracks visualizations.
"""
-import sys
+import os, sys
from math import ceil, log
import pkg_resources
pkg_resources.require( "bx-python" )
@@ -59,6 +59,51 @@
def _chrom_naming_matches( chrom1, chrom2 ):
return ( chrom1.startswith( 'chr' ) and chrom2.startswith( 'chr' ) ) or ( not chrom1.startswith( 'chr' ) and not chrom2.startswith( 'chr' ) )
+
+class FeatureLocationIndexDataProvider( object ):
+ '''
+
+ '''
+
+ def __init__( self, converted_dataset ):
+ self.converted_dataset = converted_dataset
+
+ def get_data( self, query ):
+ # Init.
+ textloc_file = open( self.converted_dataset.file_name, 'r' )
+ line_len = int( textloc_file.readline() )
+ file_len = os.path.getsize( self.converted_dataset.file_name )
+
+ # Find query in file using binary search.
+ low = 0
+ high = file_len / line_len
+ while low < high:
+ mid = ( low + high ) // 2
+ position = mid * line_len
+ textloc_file.seek( position )
+
+ # Compare line with query and update low, high.
+ line = textloc_file.readline()
+ print '--', mid, line
+ if line < query:
+ low = mid + 1
+ else:
+ high = mid
+
+ position = low * line_len
+
+ # At right point in file, generate hits.
+ result = [ ]
+ while True:
+ line = textloc_file.readline()
+ if not line.startswith( query ):
+ break
+ if line[ -1: ] == '\n':
+ line = line[ :-1 ]
+ result.append( line.split() )
+
+ textloc_file.close()
+ return result
class TracksDataProvider( object ):
""" Base class for tracks data providers. """
diff -r 0bfd0f25956e3be0f04f68c9e05c1d07f2948f98 -r 9e9e104ad5c93a2268bb00cfae09b57a31ece13c lib/galaxy/web/controllers/tracks.py
--- a/lib/galaxy/web/controllers/tracks.py
+++ b/lib/galaxy/web/controllers/tracks.py
@@ -345,6 +345,20 @@
# Have data if we get here
return { "status": messages.DATA, "valid_chroms": valid_chroms }
+
+ @web.json
+ def feature_loc( self, trans, hda_ldda, dataset_id, query ):
+ """
+ Returns features, locations in dataset that match query. Format is a
+ list of features; each feature is a list itself: [name, location]
+ """
+ dataset = self.get_hda_or_ldda( trans, hda_ldda, dataset_id )
+ converted_dataset = dataset.get_converted_dataset( trans, "fli" )
+ data_provider = FeatureLocationIndexDataProvider( converted_dataset=converted_dataset )
+ if data_provider:
+ return data_provider.get_data( query )
+ else:
+ return 'None'
@web.json
def data( self, trans, hda_ldda, dataset_id, chrom, low, high, start_val=0, max_vals=None, **kwargs ):
diff -r 0bfd0f25956e3be0f04f68c9e05c1d07f2948f98 -r 9e9e104ad5c93a2268bb00cfae09b57a31ece13c tools/filters/gff/sort_gtf.py
--- a/tools/filters/gff/sort_gtf.py
+++ b/tools/filters/gff/sort_gtf.py
@@ -24,5 +24,6 @@
# Print feature.
for interval in feature.intervals:
out.write( "\t".join(interval.fields) )
+out.close()
# TODO: print status information: how many lines processed and features found.
\ No newline at end of file
Repository URL: https://bitbucket.org/galaxy/galaxy-central/
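
The converter and data provider above cooperate through a simple fixed-width text index: the first line stores the padded line width (newline included), and the remaining lines are sorted "name<TAB>contig:start-end" entries padded to that width, so a reader can seek straight to line i at offset i * line_len and binary-search by name. The following standalone sketch (not Galaxy code; names and file paths are illustrative) writes and queries such an index.

import os

def write_fli(entries, path):
    # entries: dict mapping feature name -> "contig:start-end"
    lines = sorted('%s\t%s' % (name, loc) for name, loc in entries.items())
    width = max(len(line) for line in lines) + 1       # bytes per line, newline included
    with open(path, 'w', newline='\n') as out:
        out.write(str(width).ljust(width - 1) + '\n')  # header: the fixed line width
        for line in lines:
            out.write(line.ljust(width - 1) + '\n')

def query_fli(path, query):
    with open(path, 'rb') as f:
        line_len = int(f.readline())                   # fixed width from the header
        file_len = os.path.getsize(path)
        query = query.encode()
        low, high = 0, file_len // line_len
        while low < high:                              # binary search on the sorted names
            mid = (low + high) // 2
            f.seek(mid * line_len)
            if f.readline() < query:
                low = mid + 1
            else:
                high = mid
        f.seek(low * line_len)
        hits = []
        while True:
            line = f.readline()
            if not line.startswith(query):
                break
            hits.append(line.decode().split())
        return hits

write_fli({'geneA': 'chr1:100-200', 'geneB': 'chr2:5-50'}, 'test.fli')
print(query_fli('test.fli', 'geneA'))                  # [['geneA', 'chr1:100-200']]
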
commit/galaxy-central: inithello: Improved error handling for failed indexing jobs.
by Bitbucket 16 Aug '12
1 new commit in galaxy-central:
https://bitbucket.org/galaxy/galaxy-central/changeset/0bfd0f25956e/
changeset: 0bfd0f25956e
user: inithello
date: 2012-08-16 21:43:02
summary: Improved error handling for failed indexing jobs.
affected #: 2 files
diff -r 35d80be976914c49759e17068a2ba65b752eb3b1 -r 0bfd0f25956e3be0f04f68c9e05c1d07f2948f98 lib/galaxy/tools/genome_index/__init__.py
--- a/lib/galaxy/tools/genome_index/__init__.py
+++ b/lib/galaxy/tools/genome_index/__init__.py
@@ -18,13 +18,16 @@
<tool id="__GENOME_INDEX__" name="Index Genome" version="0.1" tool_type="genome_index">
<type class="GenomeIndexTool" module="galaxy.tools"/>
<action module="galaxy.tools.actions.index_genome" class="GenomeIndexToolAction"/>
- <command>$__GENOME_INDEX_COMMAND__ $output_file $output_file.files_path $__app__.config.rsync_url "$__app__.config.tool_data_path"</command>
+ <command>$__GENOME_INDEX_COMMAND__ $output_file $output_file.files_path "$__app__.config.rsync_url" "$__app__.config.tool_data_path"</command>
<inputs>
<param name="__GENOME_INDEX_COMMAND__" type="hidden"/>
</inputs>
<outputs>
<data format="txt" name="output_file"/>
</outputs>
+ <stdio>
+ <exit_code range="1:" err_level="fatal" />
+ </stdio>
</tool>
"""
@@ -64,6 +67,18 @@
if gitd:
+ fp = open( gitd.dataset.get_file_name(), 'r' )
+ deferred = sa_session.query( model.DeferredJob ).filter_by( id=gitd.deferred_job_id ).first()
+ try:
+ logloc = json.load( fp )
+ except ValueError:
+ deferred.state = app.model.DeferredJob.states.ERROR
+ sa_session.add( deferred )
+ sa_session.flush()
+ log.debug( 'Indexing job failed, setting deferred job state to error.' )
+ return False
+ finally:
+ fp.close()
destination = None
tdtman = ToolDataTableManager( app.config.tool_data_path )
xmltree = tdtman.load_from_config_file( app.config.tool_data_table_config_path, app.config.tool_data_path )
@@ -72,16 +87,12 @@
location = node.findall('file')[0].get('path')
self.locations[table] = os.path.abspath( location )
locbase = os.path.abspath( os.path.split( self.locations['all_fasta'] )[0] )
- deferred = sa_session.query( model.DeferredJob ).filter_by( id=gitd.deferred_job_id ).first()
params = deferred.params
dbkey = params[ 'dbkey' ]
basepath = os.path.join( os.path.abspath( app.config.genome_data_path ), dbkey )
intname = params[ 'intname' ]
indexer = gitd.indexer
workingdir = os.path.abspath( gitd.dataset.extra_files_path )
- fp = open( gitd.dataset.get_file_name(), 'r' )
- logloc = json.load( fp )
- fp.close()
location = []
indexdata = gitd.dataset.extra_files_path
if indexer == '2bit':
diff -r 35d80be976914c49759e17068a2ba65b752eb3b1 -r 0bfd0f25956e3be0f04f68c9e05c1d07f2948f98 lib/galaxy/tools/genome_index/index_genome.py
--- a/lib/galaxy/tools/genome_index/index_genome.py
+++ b/lib/galaxy/tools/genome_index/index_genome.py
@@ -40,15 +40,16 @@
self.genome = os.path.splitext( self.fafile )[0]
with WithChDir( self.basedir ):
if indexer not in self.indexers:
- raise KeyError, 'The requested indexing function does not exist'
+ sys.stderr.write( 'The requested indexing function does not exist' )
+ exit(127)
else:
with WithChDir( self.workingdir ):
self._log( 'Running indexer %s.' % indexer )
result = getattr( self, self.indexers[ indexer ] )()
if result in [ None, False ]:
- self._log( 'Error running indexer %s, %s' % ( indexer, result ) )
+ sys.stderr.write( 'Error running indexer %s, %s' % ( indexer, result ) )
self._flush_files()
- return True
+ exit(1)
else:
self._log( self.locations )
self._log( 'Indexer %s completed successfully.' % indexer )
@@ -309,5 +310,7 @@
# Create archive.
idxobj = ManagedIndexer( outfile, infile, working_dir, rsync_url, tooldata )
- idxobj.run_indexer( indexer )
-
\ No newline at end of file
+ returncode = idxobj.run_indexer( indexer )
+ if not returncode:
+ exit(1)
+ exit(0)
\ No newline at end of file
Repository URL: https://bitbucket.org/galaxy/galaxy-central/
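
The core of the error handling added above is parsing the indexer's JSON log up front and flipping the deferred job into an error state when the log is unparseable. A condensed sketch, with stub objects standing in for Galaxy's model and SQLAlchemy session:

import json

class DeferredJobStub(object):
    def __init__(self):
        self.state = 'running'

class SessionStub(object):
    def add(self, obj): pass
    def flush(self): pass

def load_index_log(log_path, deferred, sa_session):
    fp = open(log_path, 'r')
    try:
        return json.load(fp)          # the indexer is expected to write JSON
    except ValueError:
        deferred.state = 'error'      # mark the deferred job as failed
        sa_session.add(deferred)
        sa_session.flush()            # persist the error state immediately
        return None
    finally:
        fp.close()                    # close the handle on every path

with open('bad.log', 'w') as fh:
    fh.write('not json')
job, session = DeferredJobStub(), SessionStub()
print(load_index_log('bad.log', job, session), job.state)   # None error
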
commit/galaxy-central: jmchilton: Dynamic job runner bug fixes.
by Bitbucket 16 Aug '12
1 new commit in galaxy-central:
https://bitbucket.org/galaxy/galaxy-central/changeset/35d80be97691/
changeset: 35d80be97691
user: jmchilton
date: 2012-08-16 17:58:01
summary: Dynamic job runner bug fixes.
affected #: 2 files
diff -r a42eae47ae877073a077527463beb32e251d7035 -r 35d80be976914c49759e17068a2ba65b752eb3b1 lib/galaxy/jobs/handler.py
--- a/lib/galaxy/jobs/handler.py
+++ b/lib/galaxy/jobs/handler.py
@@ -360,7 +360,7 @@
def __init__( self, app ):
self.app = app
self.job_runners = {}
- start_job_runners = ["local", "lwr", "dynamic"]
+ start_job_runners = ["local", "lwr"]
if app.config.start_job_runners is not None:
start_job_runners.extend( [ x.strip() for x in util.listify( app.config.start_job_runners ) ] )
if app.config.use_tasked_jobs:
diff -r a42eae47ae877073a077527463beb32e251d7035 -r 35d80be976914c49759e17068a2ba65b752eb3b1 lib/galaxy/jobs/mapper.py
--- a/lib/galaxy/jobs/mapper.py
+++ b/lib/galaxy/jobs/mapper.py
@@ -111,7 +111,7 @@
expand_function = self.__get_expand_function( expand_function_name )
return self.__invoke_expand_function( expand_function )
else:
- raise Exception( "Unhandled dynamic job runner type specified - %s" % calculation_type )
+ raise Exception( "Unhandled dynamic job runner type specified - %s" % expand_type )
def __cache_job_runner_url( self, params ):
raw_job_runner_url = self.job_wrapper.tool.get_job_runner_url( params )
Repository URL: https://bitbucket.org/galaxy/galaxy-central/
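
One of the fixes above corrects the exception message in mapper.py to report expand_type, the value the dispatcher is actually switching on (the old message interpolated calculation_type instead). A small illustrative sketch of the corrected dispatch, with stand-in names rather than Galaxy's real mapper:

def expand_runner_url(expand_type, expand_function_name, expand_functions):
    # Dispatch on the dynamic runner type; the error message interpolates
    # expand_type, the variable that is actually in scope here.
    if expand_type == 'python':
        return expand_functions[expand_function_name]()
    raise Exception("Unhandled dynamic job runner type specified - %s" % expand_type)

print(expand_runner_url('python', 'pick_runner', {'pick_runner': lambda: 'local:///'}))
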
3 new commits in galaxy-central:
https://bitbucket.org/galaxy/galaxy-central/changeset/7980b6390e48/
changeset: 7980b6390e48
user: afgane
date: 2012-08-14 02:04:25
summary: Add the ability for Galaxy's ObjectStore to use OpenStack's SWIFT object store as the backend data storage
affected #: 3 files
diff -r be81990d148a81bc2896c3c543d2ff70ad9c3c67 -r 7980b6390e48db3bb0ae2f385d259a994e64b3f0 lib/galaxy/config.py
--- a/lib/galaxy/config.py
+++ b/lib/galaxy/config.py
@@ -169,10 +169,14 @@
if self.nginx_upload_store:
self.nginx_upload_store = os.path.abspath( self.nginx_upload_store )
self.object_store = kwargs.get( 'object_store', 'disk' )
- self.aws_access_key = kwargs.get( 'aws_access_key', None )
- self.aws_secret_key = kwargs.get( 'aws_secret_key', None )
- self.s3_bucket = kwargs.get( 's3_bucket', None)
- self.use_reduced_redundancy = kwargs.get( 'use_reduced_redundancy', False )
+ self.os_access_key = kwargs.get( 'os_access_key', None )
+ self.os_secret_key = kwargs.get( 'os_secret_key', None )
+ self.os_bucket_name = kwargs.get( 'os_bucket_name', None )
+ self.os_host = kwargs.get( 'os_host', None )
+ self.os_port = kwargs.get( 'os_port', None )
+ self.os_is_secure = string_as_bool( kwargs.get( 'os_is_secure', True ) )
+ self.os_conn_path = kwargs.get( 'os_conn_path', '/' )
+ self.os_use_reduced_redundancy = kwargs.get( 'os_use_reduced_redundancy', False )
self.object_store_cache_size = float(kwargs.get( 'object_store_cache_size', -1 ))
self.distributed_object_store_config_file = kwargs.get( 'distributed_object_store_config_file', None )
# Parse global_conf and save the parser
diff -r be81990d148a81bc2896c3c543d2ff70ad9c3c67 -r 7980b6390e48db3bb0ae2f385d259a994e64b3f0 lib/galaxy/objectstore/__init__.py
--- a/lib/galaxy/objectstore/__init__.py
+++ b/lib/galaxy/objectstore/__init__.py
@@ -25,6 +25,7 @@
if sys.version_info >= (2, 6):
import multiprocessing
from galaxy.objectstore.s3_multipart_upload import multipart_upload
+ import boto
from boto.s3.key import Key
from boto.s3.connection import S3Connection
from boto.exception import S3ResponseError
@@ -377,9 +378,9 @@
super(S3ObjectStore, self).__init__()
self.config = config
self.staging_path = self.config.file_path
- self.s3_conn = S3Connection()
- self.bucket = self._get_bucket(self.config.s3_bucket)
- self.use_rr = self.config.use_reduced_redundancy
+ self.s3_conn = get_OS_connection(self.config)
+ self.bucket = self._get_bucket(self.config.os_bucket_name)
+ self.use_rr = self.config.os_use_reduced_redundancy
self.cache_size = self.config.object_store_cache_size
self.transfer_progress = 0
# Clean cache only if value is set in universe_wsgi.ini
@@ -468,7 +469,7 @@
for i in range(5):
try:
bucket = self.s3_conn.get_bucket(bucket_name)
- log.debug("Using S3 object store; got bucket '%s'" % bucket.name)
+ log.debug("Using cloud object store with bucket '%s'" % bucket.name)
return bucket
except S3ResponseError:
log.debug("Could not get bucket '%s', attempt %s/5" % (bucket_name, i+1))
@@ -843,7 +844,6 @@
def get_store_usage_percent(self):
return 0.0
-
class DistributedObjectStore(ObjectStore):
"""
ObjectStore that defers to a list of backends, for getting objects the
@@ -1009,14 +1009,14 @@
store = config.object_store
if store == 'disk':
return DiskObjectStore(config=config)
- elif store == 's3':
- os.environ['AWS_ACCESS_KEY_ID'] = config.aws_access_key
- os.environ['AWS_SECRET_ACCESS_KEY'] = config.aws_secret_key
+ elif store == 's3' or store == 'swift':
return S3ObjectStore(config=config)
elif store == 'distributed':
return DistributedObjectStore(config=config)
elif store == 'hierarchical':
return HierarchicalObjectStore()
+ else:
+ log.error("Unrecognized object store definition: {0}".format(store))
def convert_bytes(bytes):
""" A helper function used for pretty printing disk usage """
@@ -1039,3 +1039,26 @@
else:
size = '%.2fb' % bytes
return size
+
+def get_OS_connection(config):
+ """
+ Get a connection object for a cloud Object Store specified in the config.
+ Currently, this is a ``boto`` connection object.
+ """
+ log.debug("Getting a connection object for '{0}' object store".format(config.object_store))
+ a_key = config.os_access_key
+ s_key = config.os_secret_key
+ if config.object_store == 's3':
+ return S3Connection(a_key, s_key)
+ else:
+ # Establish the connection now
+ calling_format = boto.s3.connection.OrdinaryCallingFormat()
+ s3_conn = boto.connect_s3(aws_access_key_id=a_key,
+ aws_secret_access_key=s_key,
+ is_secure=config.os_is_secure,
+ host=config.os_host,
+ port=int(config.os_port),
+ calling_format=calling_format,
+ path=config.os_conn_path)
+ return s3_conn
+
diff -r be81990d148a81bc2896c3c543d2ff70ad9c3c67 -r 7980b6390e48db3bb0ae2f385d259a994e64b3f0 universe_wsgi.ini.sample
--- a/universe_wsgi.ini.sample
+++ b/universe_wsgi.ini.sample
@@ -481,16 +481,22 @@
# -- Beta features
-# Object store mode (valid options are: disk, s3, distributed, hierarchical)
+# Object store mode (valid options are: disk, s3, swift, distributed, hierarchical)
#object_store = disk
-#aws_access_key = <AWS access key>
-#aws_secret_key = <AWS secret key>
-#s3_bucket = <name of an existing S3 bucket>
-#use_reduced_redundancy = True
-
+#os_access_key = <your cloud object store access key>
+#os_secret_key = <your cloud object store secret key>
+#os_bucket_name = <name of an existing object store bucket or container>
+# If using 'swift' object store, you must specify the following connection properties
+#os_host = swift.rc.nectar.org.au
+#os_port = 8888
+#os_is_secure = False
+#os_conn_path = /
+# Reduced redundancy can be used only with the 's3' object store
+#os_use_reduced_redundancy = False
# Size (in GB) that the cache used by object store should be limited to.
-# If the value is not specified, the cache size will be limited only by the file
-# system size.
+# If the value is not specified, the cache size will be limited only by the
+# file system size. The file system location of the cache is considered the
+# configuration of the ``file_path`` directive defined above.
#object_store_cache_size = 100
# Configuration file for the distributed object store, if object_store =
https://bitbucket.org/galaxy/galaxy-central/changeset/7387d61dfd59/
changeset: 7387d61dfd59
user: afgane
date: 2012-08-16 01:50:00
summary: Handle AWS-specific config options for backward compatibility
affected #: 1 file
diff -r 7980b6390e48db3bb0ae2f385d259a994e64b3f0 -r 7387d61dfd59ca588bb84a57272beed1bb930abb lib/galaxy/config.py
--- a/lib/galaxy/config.py
+++ b/lib/galaxy/config.py
@@ -169,14 +169,21 @@
if self.nginx_upload_store:
self.nginx_upload_store = os.path.abspath( self.nginx_upload_store )
self.object_store = kwargs.get( 'object_store', 'disk' )
- self.os_access_key = kwargs.get( 'os_access_key', None )
- self.os_secret_key = kwargs.get( 'os_secret_key', None )
- self.os_bucket_name = kwargs.get( 'os_bucket_name', None )
+ # Handle AWS-specific config options for backward compatibility
+ if kwargs.get( 'aws_access_key', None) is not None:
+ self.os_access_key= kwargs.get( 'aws_access_key', None )
+ self.os_secret_key= kwargs.get( 'aws_secret_key', None )
+ self.os_bucket_name= kwargs.get( 's3_bucket', None )
+ self.os_use_reduced_redundancy = kwargs.get( 'use_reduced_redundancy', False )
+ else:
+ self.os_access_key = kwargs.get( 'os_access_key', None )
+ self.os_secret_key = kwargs.get( 'os_secret_key', None )
+ self.os_bucket_name = kwargs.get( 'os_bucket_name', None )
+ self.os_use_reduced_redundancy = kwargs.get( 'os_use_reduced_redundancy', False )
self.os_host = kwargs.get( 'os_host', None )
self.os_port = kwargs.get( 'os_port', None )
self.os_is_secure = string_as_bool( kwargs.get( 'os_is_secure', True ) )
self.os_conn_path = kwargs.get( 'os_conn_path', '/' )
- self.os_use_reduced_redundancy = kwargs.get( 'os_use_reduced_redundancy', False )
self.object_store_cache_size = float(kwargs.get( 'object_store_cache_size', -1 ))
self.distributed_object_store_config_file = kwargs.get( 'distributed_object_store_config_file', None )
# Parse global_conf and save the parser
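
The backward-compatibility rule above boils down to: if the legacy aws_access_key option is present, the aws_*/s3_* values are mapped onto the new os_* options; otherwise the os_* keys are read directly from the parsed universe_wsgi.ini options. A minimal sketch with stand-in names (not Galaxy's Config class):

def resolve_object_store_options(kwargs):
    # kwargs stands in for the parsed universe_wsgi.ini options.
    if kwargs.get('aws_access_key') is not None:
        # Legacy S3 configuration wins for backward compatibility.
        return {
            'os_access_key': kwargs.get('aws_access_key'),
            'os_secret_key': kwargs.get('aws_secret_key'),
            'os_bucket_name': kwargs.get('s3_bucket'),
            'os_use_reduced_redundancy': kwargs.get('use_reduced_redundancy', False),
        }
    return {
        'os_access_key': kwargs.get('os_access_key'),
        'os_secret_key': kwargs.get('os_secret_key'),
        'os_bucket_name': kwargs.get('os_bucket_name'),
        'os_use_reduced_redundancy': kwargs.get('os_use_reduced_redundancy', False),
    }

legacy = {'aws_access_key': 'AKIA...', 'aws_secret_key': 'secret', 's3_bucket': 'galaxy-data'}
print(resolve_object_store_options(legacy)['os_bucket_name'])   # galaxy-data
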
https://bitbucket.org/galaxy/galaxy-central/changeset/a42eae47ae87/
changeset: a42eae47ae87
user: afgane
date: 2012-08-16 01:57:05
summary: Merge
affected #: 24 files
diff -r 7387d61dfd59ca588bb84a57272beed1bb930abb -r a42eae47ae877073a077527463beb32e251d7035 lib/galaxy/jobs/__init__.py
--- a/lib/galaxy/jobs/__init__.py
+++ b/lib/galaxy/jobs/__init__.py
@@ -309,9 +309,10 @@
return self.fail( job.info )
# Check the tool's stdout, stderr, and exit code for errors, but only
- # if the job has not already been marked as having an error.
+ # if the job has not already been marked as having an error.
+ # The job's stdout and stderr will be set accordingly.
if job.states.ERROR != job.state:
- if ( self.check_tool_output( stdout, stderr, tool_exit_code ) ):
+ if ( self.check_tool_output( stdout, stderr, tool_exit_code, job )):
job.state = job.states.OK
else:
job.state = job.states.ERROR
@@ -335,7 +336,7 @@
log.warning( "finish(): %s not found, but %s is not empty, so it will be used instead" % ( dataset_path.false_path, dataset_path.real_path ) )
else:
return self.fail( "Job %s's output dataset(s) could not be read" % job.id )
- job_context = ExpressionContext( dict( stdout = stdout, stderr = stderr ) )
+ job_context = ExpressionContext( dict( stdout = job.stdout, stderr = job.stderr ) )
job_tool = self.app.toolbox.tools_by_id.get( job.tool_id, None )
@@ -430,12 +431,12 @@
# will now be seen by the user.
self.sa_session.flush()
# Save stdout and stderr
- if len( stdout ) > 32768:
+ if len( job.stdout ) > 32768:
log.error( "stdout for job %d is greater than 32K, only first part will be logged to database" % job.id )
- job.stdout = stdout[:32768]
- if len( stderr ) > 32768:
+ job.stdout = job.stdout[:32768]
+ if len( job.stderr ) > 32768:
log.error( "stderr for job %d is greater than 32K, only first part will be logged to database" % job.id )
- job.stderr = stderr[:32768]
+ job.stderr = job.stderr[:32768]
# custom post process setup
inp_data = dict( [ ( da.name, da.dataset ) for da in job.input_datasets ] )
out_data = dict( [ ( da.name, da.dataset ) for da in job.output_datasets ] )
@@ -457,7 +458,7 @@
# Call 'exec_after_process' hook
self.tool.call_hook( 'exec_after_process', self.queue.app, inp_data=inp_data,
out_data=out_data, param_dict=param_dict,
- tool=self.tool, stdout=stdout, stderr=stderr )
+ tool=self.tool, stdout=job.stdout, stderr=job.stderr )
job.command_line = self.command_line
bytes = 0
@@ -477,7 +478,7 @@
if self.app.config.cleanup_job == 'always' or ( not stderr and self.app.config.cleanup_job == 'onsuccess' ):
self.cleanup()
- def check_tool_output( self, stdout, stderr, tool_exit_code ):
+ def check_tool_output( self, stdout, stderr, tool_exit_code, job ):
"""
Check the output of a tool - given the stdout, stderr, and the tool's
exit code, return True if the tool exited succesfully and False
@@ -487,8 +488,8 @@
Note that, if the tool did not define any exit code handling or
any stdio/stderr handling, then it reverts back to previous behavior:
if stderr contains anything, then False is returned.
+ Note that the job id is just for messages.
"""
- job = self.get_job()
err_msg = ""
# By default, the tool succeeded. This covers the case where the code
# has a bug but the tool was ok, and it lets a workflow continue.
@@ -497,10 +498,14 @@
try:
# Check exit codes and match regular expressions against stdout and
# stderr if this tool was configured to do so.
+ # If there is a regular expression for scanning stdout/stderr,
+ # then we assume that the tool writer overwrote the default
+ # behavior of just setting an error if there is *anything* on
+ # stderr.
if ( len( self.tool.stdio_regexes ) > 0 or
len( self.tool.stdio_exit_codes ) > 0 ):
- # We will check the exit code ranges in the order in which
- # they were specified. Each exit_code is a ToolStdioExitCode
+ # Check the exit code ranges in the order in which
+ # they were specified. Each exit_code is a StdioExitCode
# that includes an applicable range. If the exit code was in
# that range, then apply the error level and add in a message.
# If we've reached a fatal error rule, then stop.
@@ -508,24 +513,33 @@
for stdio_exit_code in self.tool.stdio_exit_codes:
if ( tool_exit_code >= stdio_exit_code.range_start and
tool_exit_code <= stdio_exit_code.range_end ):
- if None != stdio_exit_code.desc:
- err_msg += stdio_exit_code.desc
- # TODO: Find somewhere to stick the err_msg - possibly to
- # the source (stderr/stdout), possibly in a new db column.
+ # Tack on a generic description of the code
+ # plus a specific code description. For example,
+ # this might append "Job 42: Warning: Out of Memory\n".
+ # TODO: Find somewhere to stick the err_msg -
+ # possibly to the source (stderr/stdout), possibly
+ # in a new db column.
+ code_desc = stdio_exit_code.desc
+ if ( None == code_desc ):
+ code_desc = ""
+ tool_msg = ( "Job %s: %s: Exit code %d: %s" % (
+ job.get_id_tag(),
+ galaxy.tools.StdioErrorLevel.desc( tool_exit_code ),
+ tool_exit_code,
+ code_desc ) )
+ log.info( tool_msg )
+ stderr = err_msg + stderr
max_error_level = max( max_error_level,
stdio_exit_code.error_level )
- if max_error_level >= galaxy.tools.StdioErrorLevel.FATAL:
+ if ( max_error_level >=
+ galaxy.tools.StdioErrorLevel.FATAL ):
break
- # If there is a regular expression for scanning stdout/stderr,
- # then we assume that the tool writer overwrote the default
- # behavior of just setting an error if there is *anything* on
- # stderr.
if max_error_level < galaxy.tools.StdioErrorLevel.FATAL:
# We'll examine every regex. Each regex specifies whether
# it is to be run on stdout, stderr, or both. (It is
# possible for neither stdout nor stderr to be scanned,
- # but those won't be scanned.) We record the highest
+ # but those regexes won't be used.) We record the highest
# error level, which are currently "warning" and "fatal".
# If fatal, then we set the job's state to ERROR.
# If warning, then we still set the job's state to OK
@@ -539,19 +553,32 @@
# Repeat the stdout stuff for stderr.
# TODO: Collapse this into a single function.
if ( regex.stdout_match ):
- regex_match = re.search( regex.match, stdout )
+ regex_match = re.search( regex.match, stdout,
+ re.IGNORECASE )
if ( regex_match ):
- err_msg += self.regex_err_msg( regex_match, regex )
- max_error_level = max( max_error_level, regex.error_level )
- if max_error_level >= galaxy.tools.StdioErrorLevel.FATAL:
+ rexmsg = self.regex_err_msg( regex_match, regex)
+ log.info( "Job %s: %s"
+ % ( job.get_id_tag(), rexmsg ) )
+ stdout = rexmsg + "\n" + stdout
+ max_error_level = max( max_error_level,
+ regex.error_level )
+ if ( max_error_level >=
+ galaxy.tools.StdioErrorLevel.FATAL ):
break
- if ( regex.stderr_match ):
- regex_match = re.search( regex.match, stderr )
+
+ if ( regex.stderr_match ):
+ regex_match = re.search( regex.match, stderr,
+ re.IGNORECASE )
if ( regex_match ):
- err_msg += self.regex_err_msg( regex_match, regex )
- max_error_level = max( max_error_level,
+ rexmsg = self.regex_err_msg( regex_match, regex)
+ # DELETEME
+ log.info( "Job %s: %s"
+ % ( job.get_id_tag(), rexmsg ) )
+ stderr = rexmsg + "\n" + stderr
+ max_error_level = max( max_error_level,
regex.error_level )
- if max_error_level >= galaxy.tools.StdioErrorLevel.FATAL:
+ if ( max_error_level >=
+ galaxy.tools.StdioErrorLevel.FATAL ):
break
# If we encountered a fatal error, then we'll need to set the
@@ -565,17 +592,26 @@
# default to the previous behavior: when there's anything on stderr
# the job has an error, and the job is ok otherwise.
else:
- log.debug( "The tool did not define exit code or stdio handling; "
+ # TODO: Add in the tool and job id:
+ log.debug( "Tool did not define exit code or stdio handling; "
+ "checking stderr for success" )
if stderr:
success = False
else:
success = True
+
# On any exception, return True.
except:
+ tb = traceback.format_exc()
log.warning( "Tool check encountered unexpected exception; "
- + "assuming tool was successful" )
+ + "assuming tool was successful: " + tb )
success = True
+
+ # Store the modified stdout and stderr in the job:
+ if None != job:
+ job.stdout = stdout
+ job.stderr = stderr
+
return success
def regex_err_msg( self, match, regex ):
diff -r 7387d61dfd59ca588bb84a57272beed1bb930abb -r a42eae47ae877073a077527463beb32e251d7035 lib/galaxy/jobs/handler.py
--- a/lib/galaxy/jobs/handler.py
+++ b/lib/galaxy/jobs/handler.py
@@ -360,7 +360,7 @@
def __init__( self, app ):
self.app = app
self.job_runners = {}
- start_job_runners = ["local"]
+ start_job_runners = ["local", "lwr", "dynamic"]
if app.config.start_job_runners is not None:
start_job_runners.extend( [ x.strip() for x in util.listify( app.config.start_job_runners ) ] )
if app.config.use_tasked_jobs:
diff -r 7387d61dfd59ca588bb84a57272beed1bb930abb -r a42eae47ae877073a077527463beb32e251d7035 lib/galaxy/jobs/runners/drmaa.py
--- a/lib/galaxy/jobs/runners/drmaa.py
+++ b/lib/galaxy/jobs/runners/drmaa.py
@@ -339,6 +339,8 @@
# By default, the exit code is 0, which typically indicates success.
exit_code = 0
exit_code_str = "0"
+ # To ensure that files below are readable, ownership must be reclaimed first
+ drm_job_state.job_wrapper.reclaim_ownership()
while which_try < (self.app.config.retry_job_output_collection + 1):
try:
ofh = file(ofile, "r")
diff -r 7387d61dfd59ca588bb84a57272beed1bb930abb -r a42eae47ae877073a077527463beb32e251d7035 lib/galaxy/tool_shed/install_manager.py
--- a/lib/galaxy/tool_shed/install_manager.py
+++ b/lib/galaxy/tool_shed/install_manager.py
@@ -136,27 +136,22 @@
repository_clone_url=repository_clone_url,
relative_install_dir=relative_install_dir,
repository_files_dir=None,
- resetting_all_metadata_on_repository=False )
+ resetting_all_metadata_on_repository=False,
+ webapp='galaxy' )
tool_shed_repository.metadata = metadata_dict
self.app.sa_session.add( tool_shed_repository )
self.app.sa_session.flush()
if 'tool_dependencies' in metadata_dict:
# All tool_dependency objects must be created before the tools are processed even if no tool dependencies will be installed.
- tool_dependencies = create_tool_dependency_objects( self.app, tool_shed_repository, tool_shed_repository.installed_changeset_revision, set_status=True )
+ tool_dependencies = create_tool_dependency_objects( self.app, tool_shed_repository, relative_install_dir, set_status=True )
else:
tool_dependencies = None
if 'tools' in metadata_dict:
- work_dir = tempfile.mkdtemp()
repository_tools_tups = get_repository_tools_tups( self.app, metadata_dict )
if repository_tools_tups:
sample_files = metadata_dict.get( 'sample_files', [] )
# Handle missing data table entries for tool parameters that are dynamically generated select lists.
- repository_tools_tups = handle_missing_data_table_entry( self.app,
- tool_shed_repository,
- tool_shed_repository.installed_changeset_revision,
- self.tool_path,
- repository_tools_tups,
- work_dir )
+ repository_tools_tups = handle_missing_data_table_entry( self.app, relative_install_dir, self.tool_path, repository_tools_tups )
# Handle missing index files for tool parameters that are dynamically generated select lists.
repository_tools_tups, sample_files_copied = handle_missing_index_file( self.app, self.tool_path, sample_files, repository_tools_tups )
# Copy remaining sample files included in the repository to the ~/tool-data directory of the local Galaxy instance.
@@ -166,12 +161,8 @@
update_tool_shed_repository_status( self.app,
tool_shed_repository,
self.app.model.ToolShedRepository.installation_status.INSTALLING_TOOL_DEPENDENCIES )
- # Get the tool_dependencies.xml file from the repository.
- tool_dependencies_config = get_config_from_repository( self.app,
- 'tool_dependencies.xml',
- tool_shed_repository,
- tool_shed_repository.installed_changeset_revision,
- work_dir )
+ # Get the tool_dependencies.xml file from disk.
+ tool_dependencies_config = get_config_from_disk( 'tool_dependencies.xml', relative_install_dir )
installed_tool_dependencies = handle_tool_dependencies( app=self.app,
tool_shed_repository=tool_shed_repository,
tool_dependencies_config=tool_dependencies_config,
@@ -189,10 +180,6 @@
self.migrated_tools_config,
tool_panel_dict=tool_panel_dict_for_display,
new_install=True )
- try:
- shutil.rmtree( work_dir )
- except:
- pass
if 'datatypes' in metadata_dict:
tool_shed_repository.status = self.app.model.ToolShedRepository.installation_status.LOADING_PROPRIETARY_DATATYPES
if not tool_shed_repository.includes_datatypes:
@@ -200,11 +187,7 @@
self.app.sa_session.add( tool_shed_repository )
self.app.sa_session.flush()
work_dir = tempfile.mkdtemp()
- datatypes_config = get_config_from_repository( self.app,
- 'datatypes_conf.xml',
- tool_shed_repository,
- tool_shed_repository.installed_changeset_revision,
- work_dir )
+ datatypes_config = get_config_from_disk( 'datatypes_conf.xml', relative_install_dir )
# Load proprietary data types required by tools. The value of override is not important here since the Galaxy server will be started
# after this installation completes.
converter_path, display_path = alter_config_and_load_prorietary_datatypes( self.app, datatypes_config, relative_install_dir, override=False )
diff -r 7387d61dfd59ca588bb84a57272beed1bb930abb -r a42eae47ae877073a077527463beb32e251d7035 lib/galaxy/tools/__init__.py
--- a/lib/galaxy/tools/__init__.py
+++ b/lib/galaxy/tools/__init__.py
@@ -922,16 +922,11 @@
if self_id in self.app.config.tool_handlers:
self.job_handlers = self.app.config.tool_handlers[ self_id ]
# Set job runner(s). Each runner is a dict with 'url' and, optionally, 'params'.
- if self.app.config.start_job_runners is None:
- # Jobs are always local regardless of tool config if no additional
- # runners are started
- self.job_runners = [ { "url" : "local:///" } ]
- else:
- # Set job runner to the cluster default
- self.job_runners = [ { "url" : self.app.config.default_cluster_job_runner } ]
- # Set custom runner(s) if they're defined.
- if self_id in self.app.config.tool_runners:
- self.job_runners = self.app.config.tool_runners[ self_id ]
+ # Set job runner to the cluster default
+ self.job_runners = [ { "url" : self.app.config.default_cluster_job_runner } ]
+ # Set custom runner(s) if they're defined.
+ if self_id in self.app.config.tool_runners:
+ self.job_runners = self.app.config.tool_runners[ self_id ]
# Is this a 'hidden' tool (hidden in tool menu)
self.hidden = util.xml_text(root, "hidden")
if self.hidden: self.hidden = util.string_as_bool(self.hidden)
@@ -1316,6 +1311,8 @@
return_level = StdioErrorLevel.WARNING
elif ( re.search( "fatal", err_level, re.IGNORECASE ) ):
return_level = StdioErrorLevel.FATAL
+ else:
+ log.debug( "Error level %s did not match warning/fatal" % err_level )
except Exception, e:
log.error( "Exception in parse_error_level "
+ str(sys.exc_info() ) )
diff -r 7387d61dfd59ca588bb84a57272beed1bb930abb -r a42eae47ae877073a077527463beb32e251d7035 lib/galaxy/util/shed_util.py
--- a/lib/galaxy/util/shed_util.py
+++ b/lib/galaxy/util/shed_util.py
@@ -248,7 +248,7 @@
except:
pass
return converter_path, display_path
-def check_tool_input_params( app, repo_dir, tool_config_name, tool, sample_files ):
+def check_tool_input_params( app, repo_dir, tool_config_name, tool, sample_files, webapp='galaxy' ):
"""
Check all of the tool's input parameters, looking for any that are dynamically generated using external data files to make
sure the files exist.
@@ -291,8 +291,9 @@
correction_msg = "This file refers to a file named <b>%s</b>. " % str( index_file )
correction_msg += "Upload a file named <b>%s.sample</b> to the repository to correct this error." % str( index_file_name )
invalid_files_and_errors_tups.append( ( tool_config_name, correction_msg ) )
- # Reset the tool_data_tables by loading the empty tool_data_table_conf.xml file.
- reset_tool_data_tables( app )
+ if webapp == 'community':
+ # Reset the tool_data_tables by loading the empty tool_data_table_conf.xml file.
+ reset_tool_data_tables( app )
return invalid_files_and_errors_tups
def config_elems_to_xml_file( app, config_elems, config_filename, tool_path ):
# Persist the current in-memory list of config_elems to a file named by the value of config_filename.
@@ -427,16 +428,11 @@
sa_session.add( tool_shed_repository )
sa_session.flush()
return tool_shed_repository
-def create_tool_dependency_objects( app, tool_shed_repository, current_changeset_revision, set_status=True ):
+def create_tool_dependency_objects( app, tool_shed_repository, relative_install_dir, set_status=True ):
# Create or update a ToolDependency for each entry in tool_dependencies_config. This method is called when installing a new tool_shed_repository.
tool_dependency_objects = []
- work_dir = tempfile.mkdtemp()
# Get the tool_dependencies.xml file from the repository.
- tool_dependencies_config = get_config_from_repository( app,
- 'tool_dependencies.xml',
- tool_shed_repository,
- current_changeset_revision,
- work_dir )
+ tool_dependencies_config = get_config_from_disk( 'tool_dependencies.xml', relative_install_dir )
tree = ElementTree.parse( tool_dependencies_config )
root = tree.getroot()
ElementInclude.include( root )
@@ -454,10 +450,6 @@
status=app.model.ToolDependency.installation_status.NEVER_INSTALLED,
set_status=set_status )
tool_dependency_objects.append( tool_dependency )
- try:
- shutil.rmtree( work_dir )
- except:
- pass
return tool_dependency_objects
def generate_clone_url( trans, repository ):
"""Generate the URL for cloning a repository."""
@@ -541,14 +533,15 @@
if req_name==tool_dependency_name and req_version==tool_dependency_version and req_type==tool_dependency_type:
can_generate_dependency_metadata = True
break
- if not can_generate_dependency_metadata:
+ if requirements and not can_generate_dependency_metadata:
# We've discovered at least 1 combination of name, version and type that is not defined in the <requirement>
# tag for any tool in the repository.
break
if not can_generate_dependency_metadata:
break
return can_generate_dependency_metadata
-def generate_metadata_for_changeset_revision( app, repository_clone_url, relative_install_dir=None, repository_files_dir=None, resetting_all_metadata_on_repository=False ):
+def generate_metadata_for_changeset_revision( app, repository_clone_url, relative_install_dir=None, repository_files_dir=None,
+ resetting_all_metadata_on_repository=False, webapp='galaxy' ):
"""
Generate metadata for a repository using it's files on disk. To generate metadata for changeset revisions older than the repository tip,
the repository will have been cloned to a temporary location and updated to a specified changeset revision to access that changeset revision's
@@ -629,7 +622,7 @@
invalid_tool_configs.append( name )
invalid_file_tups.append( ( name, str( e ) ) )
if tool is not None:
- invalid_files_and_errors_tups = check_tool_input_params( app, files_dir, name, tool, sample_files )
+ invalid_files_and_errors_tups = check_tool_input_params( app, files_dir, name, tool, sample_files, webapp=webapp )
can_set_metadata = True
for tup in invalid_files_and_errors_tups:
if name in tup:
@@ -664,7 +657,7 @@
metadata_dict = generate_tool_dependency_metadata( tool_dependencies_config, metadata_dict )
if invalid_tool_configs:
metadata_dict [ 'invalid_tools' ] = invalid_tool_configs
- if resetting_all_metadata_on_repository:
+ if webapp == 'community' and resetting_all_metadata_on_repository:
# Reset the tool_data_tables by loading the empty tool_data_table_conf.xml file.
reset_tool_data_tables( app )
# Reset the value of the app's tool_data_path to it's original value.
@@ -1255,7 +1248,7 @@
return shed_url
# The tool shed from which the repository was originally installed must no longer be configured in tool_sheds_conf.xml.
return None
-def handle_missing_data_table_entry( app, repository, changeset_revision, tool_path, repository_tools_tups, dir ):
+def handle_missing_data_table_entry( app, relative_install_dir, tool_path, repository_tools_tups ):
"""
Inspect each tool to see if any have input parameters that are dynamically generated select lists that require entries in the
tool_data_table_conf.xml file. This method is called only from Galaxy (not the tool shed) when a repository is being installed
@@ -1269,7 +1262,7 @@
break
if missing_data_table_entry:
# The repository must contain a tool_data_table_conf.xml.sample file that includes all required entries for all tools in the repository.
- sample_tool_data_table_conf = get_config_from_repository( app, 'tool_data_table_conf.xml.sample', repository, changeset_revision, dir )
+ sample_tool_data_table_conf = get_config_from_disk( 'tool_data_table_conf.xml.sample', relative_install_dir )
# Add entries to the ToolDataTableManager's in-memory data_tables dictionary as well as the list of data_table_elems and the list of
# data_table_elem_names.
error, correction_msg = handle_sample_tool_data_table_conf_file( app, sample_tool_data_table_conf, persist=True )
@@ -1394,14 +1387,8 @@
def load_installed_datatypes( app, repository, relative_install_dir, deactivate=False ):
# Load proprietary datatypes and return information needed for loading proprietary datatypes converters and display applications later.
metadata = repository.metadata
- work_dir = tempfile.mkdtemp()
repository_dict = None
- datatypes_config = get_config_from_repository( app,
- 'datatypes_conf.xml',
- repository,
- repository.changeset_revision,
- work_dir,
- install_dir=relative_install_dir )
+ datatypes_config = get_config_from_disk( 'datatypes_conf.xml', relative_install_dir )
if datatypes_config:
converter_path, display_path = alter_config_and_load_prorietary_datatypes( app, datatypes_config, relative_install_dir, deactivate=deactivate )
if converter_path or display_path:
@@ -1413,10 +1400,6 @@
tool_dicts=metadata.get( 'tools', [] ),
converter_path=converter_path,
display_path=display_path )
- try:
- shutil.rmtree( work_dir )
- except:
- pass
return repository_dict
def load_installed_display_applications( app, installed_repository_dict, deactivate=False ):
# Load or deactivate proprietary datatype display applications
diff -r 7387d61dfd59ca588bb84a57272beed1bb930abb -r a42eae47ae877073a077527463beb32e251d7035 lib/galaxy/visualization/tracks/data_providers.py
--- a/lib/galaxy/visualization/tracks/data_providers.py
+++ b/lib/galaxy/visualization/tracks/data_providers.py
@@ -947,55 +947,72 @@
return dict( data=dict( min=summary.min_val[0], max=summary.max_val[0], mean=mean, sd=sd ) )
- # The following seems not to work very well, for example it will only return one
- # data point if the tile is 1280px wide. Not sure what the intent is.
+ # Sample from region using approximately this many samples.
+ N = 1000
- # The first zoom level for BBI files is 640. If too much is requested, it will look at each block instead
- # of summaries. The calculation done is: zoom <> (end-start)/num_points/2.
- # Thus, the optimal number of points is (end-start)/num_points/2 = 640
- # num_points = (end-start) / 1280
- #num_points = (end-start) / 1280
- #if num_points < 1:
- # num_points = end - start
- #else:
- # num_points = min(num_points, 500)
+ def summarize_region( bbi, chrom, start, end, num_points ):
+ '''
+ Returns results from summarizing a region using num_points.
+ NOTE: num_points cannot be greater than end - start or BBI
+ will return None for all positions.s
+ '''
+ result = []
- # For now, we'll do 1000 data points by default. However, the summaries
- # don't seem to work when a summary pixel corresponds to less than one
- # datapoint, so we prevent that.
+ # Get summary; this samples at intervals of length
+ # (end - start)/num_points -- i.e. drops any fractional component
+ # of interval length.
+ summary = bbi.summarize( chrom, start, end, num_points )
+ if summary:
+ #mean = summary.sum_data / summary.valid_count
+
+ ## Standard deviation by bin, not yet used
+ ## var = summary.sum_squares - mean
+ ## var /= minimum( valid_count - 1, 1 )
+ ## sd = sqrt( var )
+
+ pos = start
+ step_size = (end - start) / num_points
- # FIXME: need to choose the number of points to maximize coverage of the area.
- # It appears that BBI calculates points using intervals of
- # floor( num_points / end - start )
- # In some cases, this prevents sampling near the end of the interval,
- # especially when (a) the total interval is small ( < 20-30Kb) and (b) the
- # computed interval size has a large fraction, e.g. 14.7 or 35.8
- num_points = min( 1000, end - start )
+ for i in range( num_points ):
+ result.append( (pos, float_nan( summary.sum_data[i] / summary.valid_count[i] ) ) )
+ pos += step_size
- # HACK to address the FIXME above; should generalize.
- if end - start <= 2000:
- num_points = end - start
+ return result
- summary = bbi.summarize( chrom, start, end, num_points )
+ # Approach is different depending on region size.
+ if end - start < N:
+ # Get values for individual bases in region, including start and end.
+ # To do this, need to increase end to next base and request number of points.
+ num_points = end - start + 1
+ end += 1
+ else:
+ #
+ # The goal is to sample the region between start and end uniformly
+ # using ~N data points. The challenge is that the size of sampled
+ # intervals rarely is full bases, so sampling using N points will
+ # leave the end of the region unsampled due to remainders for each
+ # interval. To recitify this, a new N is calculated based on the
+ # step size that covers as much of the region as possible.
+ #
+ # However, this still leaves some of the region unsampled. This
+ # could be addressed by repeatedly sampling remainder using a
+ # smaller and smaller step_size, but that would require iteratively
+ # going to BBI, which could be time consuming.
+ #
+
+ # Start with N samples.
+ num_points = N
+ step_size = ( end - start ) / num_points
+ # Add additional points to sample in the remainder not covered by
+ # the initial N samples.
+ remainder_start = start + step_size * num_points
+ additional_points = ( end - remainder_start ) / step_size
+ num_points += additional_points
+
+ result = summarize_region( bbi, chrom, start, end, num_points )
+
+ # Cleanup and return.
f.close()
-
- result = []
-
- if summary:
- #mean = summary.sum_data / summary.valid_count
-
- ## Standard deviation by bin, not yet used
- ## var = summary.sum_squares - mean
- ## var /= minimum( valid_count - 1, 1 )
- ## sd = sqrt( var )
-
- pos = start
- step_size = (end - start) / num_points
-
- for i in range( num_points ):
- result.append( (pos, float_nan( summary.sum_data[i] / summary.valid_count[i] ) ) )
- pos += step_size
-
return { 'data': result }
class BigBedDataProvider( BBIDataProvider ):
diff -r 7387d61dfd59ca588bb84a57272beed1bb930abb -r a42eae47ae877073a077527463beb32e251d7035 lib/galaxy/web/controllers/admin_toolshed.py
--- a/lib/galaxy/web/controllers/admin_toolshed.py
+++ b/lib/galaxy/web/controllers/admin_toolshed.py
@@ -525,13 +525,8 @@
# Get the tool_shed_repository from one of the tool_dependencies.
message = ''
tool_shed_repository = tool_dependencies[ 0 ].tool_shed_repository
- work_dir = tempfile.mkdtemp()
# Get the tool_dependencies.xml file from the repository.
- tool_dependencies_config = get_config_from_repository( trans.app,
- 'tool_dependencies.xml',
- tool_shed_repository,
- tool_shed_repository.changeset_revision,
- work_dir )
+ tool_dependencies_config = get_config_from_disk( 'tool_dependencies.xml', tool_shed_repository.repo_path( trans.app ) )
installed_tool_dependencies = handle_tool_dependencies( app=trans.app,
tool_shed_repository=tool_shed_repository,
tool_dependencies_config=tool_dependencies_config,
@@ -539,10 +534,6 @@
for installed_tool_dependency in installed_tool_dependencies:
if installed_tool_dependency.status == trans.app.model.ToolDependency.installation_status.ERROR:
message += ' %s' % installed_tool_dependency.error_message
- try:
- shutil.rmtree( work_dir )
- except:
- pass
tool_dependency_ids = [ trans.security.encode_id( td.id ) for td in tool_dependencies ]
if message:
status = 'error'
@@ -663,11 +654,7 @@
tool_shed_repository,
trans.model.ToolShedRepository.installation_status.INSTALLING_TOOL_DEPENDENCIES )
# Get the tool_dependencies.xml file from the repository.
- tool_dependencies_config = get_config_from_repository( trans.app,
- 'tool_dependencies.xml',
- tool_shed_repository,
- tool_shed_repository.installed_changeset_revision,
- work_dir )
+ tool_dependencies_config = get_config_from_disk( 'tool_dependencies.xml', relative_install_dir )
installed_tool_dependencies = handle_tool_dependencies( app=trans.app,
tool_shed_repository=tool_shed_repository,
tool_dependencies_config=tool_dependencies_config,
@@ -691,24 +678,19 @@
repository_clone_url=repository_clone_url,
relative_install_dir=relative_install_dir,
repository_files_dir=None,
- resetting_all_metadata_on_repository=False )
+ resetting_all_metadata_on_repository=False,
+ webapp='galaxy' )
tool_shed_repository.metadata = metadata_dict
trans.sa_session.add( tool_shed_repository )
trans.sa_session.flush()
if 'tool_dependencies' in metadata_dict and not reinstalling:
- tool_dependencies = create_tool_dependency_objects( trans.app, tool_shed_repository, tool_shed_repository.installed_changeset_revision, set_status=True )
+ tool_dependencies = create_tool_dependency_objects( trans.app, tool_shed_repository, relative_install_dir, set_status=True )
if 'tools' in metadata_dict:
tool_panel_dict = generate_tool_panel_dict_for_new_install( metadata_dict[ 'tools' ], tool_section )
repository_tools_tups = get_repository_tools_tups( trans.app, metadata_dict )
if repository_tools_tups:
# Handle missing data table entries for tool parameters that are dynamically generated select lists.
- work_dir = tempfile.mkdtemp()
- repository_tools_tups = handle_missing_data_table_entry( trans.app,
- tool_shed_repository,
- tool_shed_repository.changeset_revision,
- tool_path,
- repository_tools_tups,
- work_dir )
+ repository_tools_tups = handle_missing_data_table_entry( trans.app, relative_install_dir, tool_path, repository_tools_tups )
# Handle missing index files for tool parameters that are dynamically generated select lists.
sample_files = metadata_dict.get( 'sample_files', [] )
repository_tools_tups, sample_files_copied = handle_missing_index_file( trans.app, tool_path, sample_files, repository_tools_tups )
@@ -723,22 +705,13 @@
shed_tool_conf=shed_tool_conf,
tool_panel_dict=tool_panel_dict,
new_install=True )
- try:
- shutil.rmtree( work_dir )
- except:
- pass
if 'datatypes' in metadata_dict:
tool_shed_repository.status = trans.model.ToolShedRepository.installation_status.LOADING_PROPRIETARY_DATATYPES
if not tool_shed_repository.includes_datatypes:
tool_shed_repository.includes_datatypes = True
trans.sa_session.add( tool_shed_repository )
trans.sa_session.flush()
- work_dir = tempfile.mkdtemp()
- datatypes_config = get_config_from_repository( trans.app,
- 'datatypes_conf.xml',
- tool_shed_repository,
- tool_shed_repository.changeset_revision,
- work_dir )
+ datatypes_config = get_config_from_disk( 'datatypes_conf.xml', relative_install_dir )
# Load data types required by tools.
converter_path, display_path = alter_config_and_load_prorietary_datatypes( trans.app, datatypes_config, relative_install_dir, override=False )
if converter_path or display_path:
@@ -756,10 +729,6 @@
if display_path:
# Load proprietary datatype display applications
trans.app.datatypes_registry.load_display_applications( installed_repository_dict=repository_dict )
- try:
- shutil.rmtree( work_dir )
- except:
- pass
@web.expose
@web.require_admin
def manage_repository( self, trans, **kwd ):
@@ -1507,7 +1476,7 @@
trans.sa_session.flush()
# Create tool_dependency records if necessary.
if 'tool_dependencies' in metadata_dict:
- tool_dependencies = create_tool_dependency_objects( trans.app, repository, repository.changeset_revision, set_status=False )
+ tool_dependencies = create_tool_dependency_objects( trans.app, repository, relative_install_dir, set_status=False )
message = "The installed repository named '%s' has been updated to change set revision '%s'. " % ( name, latest_changeset_revision )
# See if any tool dependencies can be installed.
shed_tool_conf, tool_path, relative_install_dir = get_tool_panel_config_tool_path_install_dir( trans.app, repository )
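Note on the hunks above: configs such as tool_dependencies.xml and datatypes_conf.xml are now read straight from the installed repository files via get_config_from_disk( name, relative_install_dir ) instead of being pulled from a temporary clone with get_config_from_repository(), which is why the tempfile.mkdtemp() / shutil.rmtree( work_dir ) bookkeeping disappears. A rough sketch of what get_config_from_disk is assumed to do (the real helper is not shown in this diff and may differ in detail):

import os

def get_config_from_disk( config_filename, relative_install_dir ):
    # Walk the installed repository directory, skipping Mercurial metadata,
    # and return the absolute path of the first file matching config_filename.
    for root, dirs, files in os.walk( relative_install_dir ):
        if root.find( '.hg' ) < 0:
            for name in files:
                if name == config_filename:
                    return os.path.abspath( os.path.join( root, name ) )
    return None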
diff -r 7387d61dfd59ca588bb84a57272beed1bb930abb -r a42eae47ae877073a077527463beb32e251d7035 lib/galaxy/webapps/community/config.py
--- a/lib/galaxy/webapps/community/config.py
+++ b/lib/galaxy/webapps/community/config.py
@@ -87,6 +87,7 @@
self.server_name = ''
self.job_manager = ''
self.default_job_handlers = []
+ self.default_cluster_job_runner = 'local:///'
self.job_handlers = []
self.tool_handlers = []
self.tool_runners = []
diff -r 7387d61dfd59ca588bb84a57272beed1bb930abb -r a42eae47ae877073a077527463beb32e251d7035 lib/galaxy/webapps/community/controllers/admin.py
--- a/lib/galaxy/webapps/community/controllers/admin.py
+++ b/lib/galaxy/webapps/community/controllers/admin.py
@@ -696,9 +696,14 @@
owner = repository_name_owner_list[ 1 ]
repository = get_repository_by_name_and_owner( trans, name, owner )
try:
- reset_all_metadata_on_repository( trans, trans.security.encode_id( repository.id ) )
- log.debug( "Successfully reset metadata on repository %s" % repository.name )
- successful_count += 1
+ invalid_file_tups = reset_all_metadata_on_repository( trans, trans.security.encode_id( repository.id ) )
+ if invalid_file_tups:
+ message = generate_message_for_invalid_tools( invalid_file_tups, repository, None, as_html=False )
+ log.debug( message )
+ unsuccessful_count += 1
+ else:
+ log.debug( "Successfully reset metadata on repository %s" % repository.name )
+ successful_count += 1
except Exception, e:
log.debug( "Error attempting to reset metadata on repository '%s': %s" % ( repository.name, str( e ) ) )
unsuccessful_count += 1
diff -r 7387d61dfd59ca588bb84a57272beed1bb930abb -r a42eae47ae877073a077527463beb32e251d7035 lib/galaxy/webapps/community/controllers/common.py
--- a/lib/galaxy/webapps/community/controllers/common.py
+++ b/lib/galaxy/webapps/community/controllers/common.py
@@ -277,6 +277,41 @@
return '%s://%s%s/repos/%s/%s' % ( protocol, username, base, repository.user.username, repository.name )
else:
return '%s/repos/%s/%s' % ( base_url, repository.user.username, repository.name )
+def generate_message_for_invalid_tools( invalid_file_tups, repository, metadata_dict, as_html=True ):
+ if as_html:
+ new_line = '<br/>'
+ bold_start = '<b>'
+ bold_end = '</b>'
+ else:
+ new_line = '\n'
+ bold_start = ''
+ bold_end = ''
+ message = ''
+ if metadata_dict:
+ message += "Metadata was defined for some items in revision '%s'. " % str( repository.tip )
+ message += "Correct the following problems if necessary and reset metadata.%s" % new_line
+ else:
+ message += "Metadata cannot be defined for revision '%s' so this revision cannot be automatically " % str( repository.tip )
+ message += "installed into a local Galaxy instance. Correct the following problems and reset metadata.%s" % new_line
+ for itc_tup in invalid_file_tups:
+ tool_file, exception_msg = itc_tup
+ if exception_msg.find( 'No such file or directory' ) >= 0:
+ exception_items = exception_msg.split()
+ missing_file_items = exception_items[ 7 ].split( '/' )
+ missing_file = missing_file_items[ -1 ].rstrip( '\'' )
+ if missing_file.endswith( '.loc' ):
+ sample_ext = '%s.sample' % missing_file
+ else:
+ sample_ext = missing_file
+ correction_msg = "This file refers to a missing file %s%s%s. " % ( bold_start, str( missing_file ), bold_end )
+ correction_msg += "Upload a file named %s%s%s to the repository to correct this error." % ( bold_start, sample_ext, bold_end )
+ else:
+ if as_html:
+ correction_msg = exception_msg
+ else:
+ correction_msg = exception_msg.replace( '<br/>', new_line ).replace( '<b>', bold_start ).replace( '</b>', bold_end )
+ message += "%s%s%s - %s%s" % ( bold_start, tool_file, bold_end, correction_msg, new_line )
+ return message
def generate_tool_guid( trans, repository, tool ):
"""
Generate a guid for the received tool. The form of the guid is
@@ -795,7 +830,8 @@
repository_clone_url=repository_clone_url,
relative_install_dir=repo_dir,
repository_files_dir=work_dir,
- resetting_all_metadata_on_repository=True )
+ resetting_all_metadata_on_repository=True,
+ webapp='community' )
if current_metadata_dict:
if not metadata_changeset_revision and not metadata_dict:
# We're at the first change set in the change log.
@@ -853,6 +889,7 @@
clean_repository_metadata( trans, id, changeset_revisions )
# Set tool version information for all downloadable changeset revisions. Get the list of changeset revisions from the changelog.
reset_all_tool_versions( trans, id, repo )
+ return invalid_file_tups
def set_repository_metadata( trans, repository, content_alert_str='', **kwd ):
"""
Set metadata using the repository's current disk files, returning specific error messages (if any) to alert the repository owner that the changeset
@@ -885,7 +922,8 @@
repository_clone_url=repository_clone_url,
relative_install_dir=repo_dir,
repository_files_dir=None,
- resetting_all_metadata_on_repository=False )
+ resetting_all_metadata_on_repository=False,
+ webapp='community' )
if metadata_dict:
downloadable = is_downloadable( metadata_dict )
repository_metadata = None
@@ -929,27 +967,7 @@
message += "be defined so this revision cannot be automatically installed into a local Galaxy instance."
status = "error"
if invalid_file_tups:
- if metadata_dict:
- message += "Metadata was defined for some items in revision '%s'. " % str( repository.tip )
- message += "Correct the following problems if necessary and reset metadata.<br/>"
- else:
- message += "Metadata cannot be defined for revision '%s' so this revision cannot be automatically " % str( repository.tip )
- message += "installed into a local Galaxy instance. Correct the following problems and reset metadata.<br/>"
- for itc_tup in invalid_file_tups:
- tool_file, exception_msg = itc_tup
- if exception_msg.find( 'No such file or directory' ) >= 0:
- exception_items = exception_msg.split()
- missing_file_items = exception_items[ 7 ].split( '/' )
- missing_file = missing_file_items[ -1 ].rstrip( '\'' )
- if missing_file.endswith( '.loc' ):
- sample_ext = '%s.sample' % missing_file
- else:
- sample_ext = missing_file
- correction_msg = "This file refers to a missing file <b>%s</b>. " % str( missing_file )
- correction_msg += "Upload a file named <b>%s</b> to the repository to correct this error." % sample_ext
- else:
- correction_msg = exception_msg
- message += "<b>%s</b> - %s<br/>" % ( tool_file, correction_msg )
+ message = generate_message_for_invalid_tools( invalid_file_tups, repository, metadata_dict )
status = 'error'
return message, status
def set_repository_metadata_due_to_new_tip( trans, repository, content_alert_str=None, **kwd ):
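The new generate_message_for_invalid_tools() above is called with the default as_html=True when the message is rendered in the web UI and with as_html=False when it is only written to the log, as in the admin.py hunk earlier in this changeset. A small illustration of the two calls; the tuple contents and file paths are made up, and the repository object is assumed to be in scope:

# Hypothetical invalid tool entry: ( tool config name, exception message ).
invalid_file_tups = [ ( 'bwa_wrapper.xml',
                        "[Errno 2] No such file or directory: '/tmp/shed_tools/tool-data/bwa_index.loc'" ) ]

# Default HTML form, as used for the web UI message after reset_all_metadata().
message = generate_message_for_invalid_tools( invalid_file_tups, repository, None )

# Plain-text form for log.debug(), as in the admin controller hunk above.
log_message = generate_message_for_invalid_tools( invalid_file_tups, repository, None, as_html=False )

For the missing .loc file in this example, the generated message ends by suggesting that a bwa_index.loc.sample file be uploaded to the repository.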
diff -r 7387d61dfd59ca588bb84a57272beed1bb930abb -r a42eae47ae877073a077527463beb32e251d7035 lib/galaxy/webapps/community/controllers/repository.py
--- a/lib/galaxy/webapps/community/controllers/repository.py
+++ b/lib/galaxy/webapps/community/controllers/repository.py
@@ -458,7 +458,10 @@
metadata = downloadable_revision.metadata
invalid_tools = metadata.get( 'invalid_tools', [] )
for invalid_tool_config in invalid_tools:
- invalid_tools_dict[ invalid_tool_config ] = ( repository.id, repository.name, downloadable_revision.changeset_revision )
+ invalid_tools_dict[ invalid_tool_config ] = ( repository.id,
+ repository.name,
+ repository.user.username,
+ downloadable_revision.changeset_revision )
else:
for repository in trans.sa_session.query( trans.model.Repository ) \
.filter( and_( trans.model.Repository.table.c.deleted == False,
@@ -468,7 +471,10 @@
metadata = downloadable_revision.metadata
invalid_tools = metadata.get( 'invalid_tools', [] )
for invalid_tool_config in invalid_tools:
- invalid_tools_dict[ invalid_tool_config ] = ( repository.id, repository.name, downloadable_revision.changeset_revision )
+ invalid_tools_dict[ invalid_tool_config ] = ( repository.id,
+ repository.name,
+ repository.user.username,
+ downloadable_revision.changeset_revision )
return trans.fill_template( '/webapps/community/repository/browse_invalid_tools.mako',
cntrller=cntrller,
invalid_tools_dict=invalid_tools_dict,
@@ -1373,6 +1379,7 @@
return trans.response.send_redirect( url )
@web.expose
def load_invalid_tool( self, trans, repository_id, tool_config, changeset_revision, **kwd ):
+ # FIXME: loading an invalid tool should display an appropriate message as to why the tool is invalid. This worked until recently.
params = util.Params( kwd )
message = util.restore_text( params.get( 'message', '' ) )
status = params.get( 'status', 'error' )
@@ -1752,9 +1759,14 @@
status=status )
@web.expose
def reset_all_metadata( self, trans, id, **kwd ):
- reset_all_metadata_on_repository( trans, id, **kwd )
- message = "All repository metadata has been reset."
- status = 'done'
+ invalid_file_tups = reset_all_metadata_on_repository( trans, id, **kwd )
+ if invalid_file_tups:
+ repository = get_repository( trans, id )
+ message = generate_message_for_invalid_tools( invalid_file_tups, repository, None )
+ status = 'error'
+ else:
+ message = "All repository metadata has been reset."
+ status = 'done'
return trans.response.send_redirect( web.url_for( controller='repository',
action='manage_repository',
id=id,
diff -r 7387d61dfd59ca588bb84a57272beed1bb930abb -r a42eae47ae877073a077527463beb32e251d7035 run.sh
--- a/run.sh
+++ b/run.sh
@@ -36,6 +36,10 @@
fi
done
+if [ -n "$GALAXY_UNIVERSE_CONFIG_DIR" ]; then
+ python ./scripts/build_universe_config.py "$GALAXY_UNIVERSE_CONFIG_DIR"
+fi
+
# explicitly attempt to fetch eggs before running
FETCH_EGGS=1
for arg in "$@"; do
diff -r 7387d61dfd59ca588bb84a57272beed1bb930abb -r a42eae47ae877073a077527463beb32e251d7035 scripts/build_universe_config.py
--- /dev/null
+++ b/scripts/build_universe_config.py
@@ -0,0 +1,26 @@
+from ConfigParser import ConfigParser
+from os import listdir
+from os.path import join
+from re import match
+from sys import argv
+
+
+def merge():
+ "Merges all .ini files in a specified directory into ./universe.ini"
+ if len(argv) < 2:
+ message = "%s: Must specify directory to merge configuration files from." % argv[0]
+ raise Exception(message)
+ conf_directory = argv[1]
+ conf_files = [f for f in listdir(conf_directory) if match(r'.*\.ini', f)]
+ conf_files.sort()
+
+ parser = ConfigParser()
+ for conf_file in conf_files:
+ parser.read([join(conf_directory, conf_file)])
+ ## TODO: Expand enviroment variables here, that would
+ ## also make Galaxy much easier to configure.
+
+ parser.write(open("universe_wsgi.ini", 'w'))
+
+if __name__ == '__main__':
+ merge()
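With the run.sh hook above, setting GALAXY_UNIVERSE_CONFIG_DIR to a directory of .ini fragments merges them, in sorted filename order, into universe_wsgi.ini before startup. Because ConfigParser.read() processes files in the order given, a fragment read later silently overrides any option an earlier fragment already set in the same section. A minimal, self-contained sketch of that override behaviour (the fragment names and values are invented):

from ConfigParser import ConfigParser

# Two hypothetical fragments, written out only for the sake of the demo.
open( '00_base.ini', 'w' ).write( '[app:main]\ndatabase_connection = sqlite:///./database/universe.sqlite\n' )
open( '10_site.ini', 'w' ).write( '[app:main]\ndatabase_connection = postgresql://galaxy@localhost/galaxy\n' )

parser = ConfigParser()
parser.read( [ '00_base.ini', '10_site.ini' ] )   # same order merge() uses after sorting
# Prints the postgresql URL: the fragment read last wins for a duplicated option.
print parser.get( 'app:main', 'database_connection' )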
diff -r 7387d61dfd59ca588bb84a57272beed1bb930abb -r a42eae47ae877073a077527463beb32e251d7035 scripts/db_shell.py
--- /dev/null
+++ b/scripts/db_shell.py
@@ -0,0 +1,85 @@
+# This script allows easy access to Galaxy's database layer via the
+# Galaxy models. For example:q
+# % python -i scripts/db_shel.py
+# >>> new_user = User("admin(a)gmail.com")
+# >>> new_user.set_password
+# >>> sa_session.add(new_user)
+# >>> sa_session.commit()
+# >>> sa_session.query(User).all()
+#
+# You can also use this script as a library, for instance see https://gist.github.com/1979583q
+# TODO: This script overlaps alot wth manage_db.py and create_db.py,
+# these should maybe be refactored to remove duplication.
+import sys, os.path, logging
+
+new_path = [ os.path.join( os.getcwd(), "lib" ) ]
+new_path.extend( sys.path[1:] ) # remove scripts/ from the path
+sys.path = new_path
+
+from galaxy import eggs
+
+import pkg_resources
+pkg_resources.require( "sqlalchemy-migrate" )
+pkg_resources.require( "SQLAlchemy" )
+
+from ConfigParser import SafeConfigParser
+
+log = logging.getLogger( __name__ )
+
+if sys.argv[-1] in [ 'community' ]:
+ # Need to pop the last arg so the command line args will be correct
+ # for sqlalchemy-migrate
+ webapp = sys.argv.pop()
+ config_file = 'community_wsgi.ini'
+ repo = 'lib/galaxy/webapps/community/model/migrate'
+else:
+ # Poor man's optparse
+ config_file = 'universe_wsgi.ini'
+ if '-c' in sys.argv:
+ pos = sys.argv.index( '-c' )
+ sys.argv.pop(pos)
+ config_file = sys.argv.pop( pos )
+ if not os.path.exists( config_file ):
+ print "Galaxy config file does not exist (hint: use '-c config.ini' for non-standard locations): %s" % config_file
+ sys.exit( 1 )
+ repo = 'lib/galaxy/model/migrate'
+
+cp = SafeConfigParser()
+cp.read( config_file )
+
+if cp.has_option( "app:main", "database_connection" ):
+ db_url = cp.get( "app:main", "database_connection" )
+elif cp.has_option( "app:main", "database_file" ):
+ db_url = "sqlite:///%s?isolation_level=IMMEDIATE" % cp.get( "app:main", "database_file" )
+else:
+ db_url = "sqlite:///./database/universe.sqlite?isolation_level=IMMEDIATE"
+
+dialect_to_egg = {
+ "sqlite" : "pysqlite>=2",
+ "postgres" : "psycopg2",
+ "mysql" : "MySQL_python"
+}
+dialect = ( db_url.split( ':', 1 ) )[0]
+try:
+ egg = dialect_to_egg[dialect]
+ try:
+ pkg_resources.require( egg )
+ log.debug( "%s egg successfully loaded for %s dialect" % ( egg, dialect ) )
+ except:
+ # If the module is in the path elsewhere (i.e. non-egg), it'll still load.
+ log.warning( "%s egg not found, but an attempt will be made to use %s anyway" % ( egg, dialect ) )
+except KeyError:
+ # Let this go, it could possibly work with db's we don't support
+ log.error( "database_connection contains an unknown SQLAlchemy database dialect: %s" % dialect )
+
+# Setup DB scripting environment
+from sqlalchemy import *
+from sqlalchemy.orm import *
+from sqlalchemy.exc import *
+
+engine = create_engine(db_url, echo=True)
+db_session = scoped_session( sessionmaker( bind = engine ) )
+from galaxy.model.mapping import context as sa_session
+sa_session.bind = engine
+from galaxy.model import *
+
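For reference, a short interactive session against the new scripts/db_shell.py, in the same style as the script's header comment; the e-mail address is invented and the model attribute names (User.histories, HistoryDatasetAssociation) are assumed from galaxy.model:

# % python -i scripts/db_shell.py
# >>> user = sa_session.query( User ).filter_by( email="admin@example.org" ).first()
# >>> [ h.name for h in user.histories ]
# >>> sa_session.query( HistoryDatasetAssociation ).count()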
diff -r 7387d61dfd59ca588bb84a57272beed1bb930abb -r a42eae47ae877073a077527463beb32e251d7035 static/scripts/mvc/history.js
--- /dev/null
+++ b/static/scripts/mvc/history.js
@@ -0,0 +1,145 @@
+/*
+TODO:
+ as always: where does the model end and the view begin?
+ HistoryPanel
+ HistoryCollection: (collection of histories: 'Saved Histories')
+
+CASES:
+ logged-in/NOT
+*/
+//==============================================================================
+var HistoryItem = BaseModel.extend({
+ // a single history structure
+ // from: http://localhost:8080/api/histories/f2db41e1fa331b3e/contents/f2db41e1fa331…
+ /*
+ {
+ "data_type": "fastq",
+ "deleted": false,
+ "download_url": "/datasets/f2db41e1fa331b3e/display?to_ext=fastq",
+ "file_size": 226297533,
+ "genome_build": "?",
+ "id": "f2db41e1fa331b3e",
+ "metadata_data_lines": null,
+ "metadata_dbkey": "?",
+ "metadata_sequences": null,
+ "misc_blurb": "215.8 MB",
+ "misc_info": "uploaded fastq file",
+ "model_class": "HistoryDatasetAssociation",
+ "name": "LTCF-2-19_GTGAAA_L001_R1_001.fastq",
+ "state": "ok",
+ "visible": true
+ }
+ */
+
+ display : function(){},
+ edit_attr : function(){},
+ delete : function(){},
+ download : function(){},
+ details : function(){},
+ rerun : function(){},
+ tags : function(){},
+ annotations : function(){},
+ peek : function(){},
+});
+
+//..............................................................................
+var HistoryItemView = BaseView.extend({
+ // view for History model used in HistoryPanelView
+ tagName : "div",
+ className : "historyItemContainer",
+
+ icons : {
+ display : 'path to icon',
+ edit_attr : 'path to icon',
+ delete : 'path to icon',
+ download : 'path to icon',
+ details : 'path to icon',
+ rerun : 'path to icon',
+ tags : 'path to icon',
+ annotations : 'path to icon',
+ },
+
+ render : function(){
+ this.$el.append( 'div' )
+ },
+
+});
+
+
+
+//==============================================================================
+var History = Backbone.Collection.extend({
+ // a collection of HistoryItems
+
+ // from: http://localhost:8080/api/histories/f2db41e1fa331b3e
+ /*
+ {
+ "contents_url": "/api/histories/f2db41e1fa331b3e/contents",
+ "id": "f2db41e1fa331b3e",
+ "name": "one",
+ "state": "ok",
+ "state_details": {
+ "discarded": 0,
+ "empty": 0,
+ "error": 0,
+ "failed_metadata": 0,
+ "new": 0,
+ "ok": 4,
+ "queued": 0,
+ "running": 0,
+ "setting_metadata": 0,
+ "upload": 0
+ }
+ }
+ */
+
+ // from: http://localhost:8080/api/histories/f2db41e1fa331b3e/contents
+ // (most are replicated in HistoryItem)
+ /*
+ [
+ {
+ "id": "f2db41e1fa331b3e",
+ "name": "LTCF-2-19_GTGAAA_L001_R1_001.fastq",
+ "type": "file",
+ "url": "/api/histories/f2db41e1fa331b3e/contents/f2db41e1fa331b3e"
+ },
+ {
+ "id": "f597429621d6eb2b",
+ "name": "LTCF-2-19_GTGAAA_L001_R2_001.fastq",
+ "type": "file",
+ "url": "/api/histories/f2db41e1fa331b3e/contents/f597429621d6eb2b"
+ },
+ {
+ "id": "1cd8e2f6b131e891",
+ "name": "FASTQ Groomer on data 1",
+ "type": "file",
+ "url": "/api/histories/f2db41e1fa331b3e/contents/1cd8e2f6b131e891"
+ },
+ {
+ "id": "ebfb8f50c6abde6d",
+ "name": "FASTQ Groomer on data 2",
+ "type": "file",
+ "url": "/api/histories/f2db41e1fa331b3e/contents/ebfb8f50c6abde6d"
+ },
+ {
+ "id": "33b43b4e7093c91f",
+ "name": "Sa.04-02981.fasta",
+ "type": "file",
+ "url": "/api/histories/f2db41e1fa331b3e/contents/33b43b4e7093c91f"
+ }
+ ]
+ */
+});
+
+//..............................................................................
+var HistoryCollectionView = BaseView.extend({
+ // view for the HistoryCollection (as per current right hand panel)
+ tagName : "body",
+ className : "historyCollection",
+
+ render : function(){
+
+ },
+
+});
+
diff -r 7387d61dfd59ca588bb84a57272beed1bb930abb -r a42eae47ae877073a077527463beb32e251d7035 templates/base_panels.mako
--- a/templates/base_panels.mako
+++ b/templates/base_panels.mako
@@ -54,13 +54,17 @@
var galaxy_paths = new GalaxyPaths({
root_path: '${h.url_for( "/" )}',
image_path: '${h.url_for( "/static/images" )}',
+
tool_url: '${h.url_for( controller="/api/tools" )}',
+ history_url: '${h.url_for( controller="/api/histories" )}',
+
data_url: '${h.url_for( controller="/tracks", action="data" )}',
raw_data_url: '${h.url_for( controller="/tracks", action="raw_data" )}',
converted_datasets_state_url: '${h.url_for( controller="/tracks", action="converted_datasets_state" )}',
dataset_state_url: '${h.url_for( controller="/tracks", action="dataset_state" )}',
+ sweepster_url: '${h.url_for( controller="/tracks", action="sweepster" )}',
+
visualization_url: '${h.url_for( controller="/visualization", action="save" )}',
- sweepster_url: '${h.url_for( controller="/tracks", action="sweepster" )}'
});
</script></%def>
diff -r 7387d61dfd59ca588bb84a57272beed1bb930abb -r a42eae47ae877073a077527463beb32e251d7035 templates/webapps/community/repository/browse_invalid_tools.mako
--- a/templates/webapps/community/repository/browse_invalid_tools.mako
+++ b/templates/webapps/community/repository/browse_invalid_tools.mako
@@ -13,10 +13,11 @@
<tr><th>Tool config</th><th>Repository name</th>
+ <th>Repository owner</th><th>Changeset revision</th></tr>
%for invalid_tool_config, repository_tup in invalid_tools_dict.items():
- <% repository_id, repository_name, changeset_revision = repository_tup %>
+ <% repository_id, repository_name, repository_owner, changeset_revision = repository_tup %><tr><td><a class="view-info" href="${h.url_for( controller='repository', action='load_invalid_tool', repository_id=trans.security.encode_id( repository_id ), tool_config=invalid_tool_config, changeset_revision=changeset_revision, webapp=webapp )}">
@@ -24,6 +25,7 @@
</a></td><td>${repository_name}</td>
+ <td>${repository_owner}</td><td>${changeset_revision}</td></tr>
%endfor
diff -r 7387d61dfd59ca588bb84a57272beed1bb930abb -r a42eae47ae877073a077527463beb32e251d7035 test-data/bwa_wrapper_out2.sam
--- a/test-data/bwa_wrapper_out2.sam
+++ b/test-data/bwa_wrapper_out2.sam
@@ -1,30 +1,30 @@
-seq1 16 phiX174 322 25 36M * 0 0 GATATTTTAAAGGAGCGTGGATTACTATCTGAGTCC B&&I13A$G$*%$IIIIIII9(.+5$IIIIIII#II XT:A:U NM:i:2 X0:i:1 XM:i:2 XO:i:0 XG:i:0 MD:Z:2C8A24
-seq10 0 phiX174 4149 37 17M1D19M * 0 0 ATTCTTTCTTTTCGTATCAGGGCGTTGAGTTCGATA IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII XT:A:U NM:i:2 X0:i:1 XM:i:1 XO:i:1 XG:i:1 MD:Z:5G11^G19
-seq11 0 phiX174 4072 37 18M1D18M * 0 0 GCATTTCTACTCCTTCTCATCCCCAATGCTTGGCTT II#IIIIIII$5+.(9IIIIIII$%*$G$A31I&&B XT:A:U NM:i:2 X0:i:1 XM:i:1 XO:i:1 XG:i:1 MD:Z:12T5^A18
+seq1 16 phiX174 322 25 36M * 0 0 GATATTTTAAAGGAGCGTGGATTACTATCTGAGTCC B&&I13A$G$*%$IIIIIII9(.+5$IIIIIII#II XT:A:U NM:i:2 X0:i:1 X1:i:0 XM:i:2 XO:i:0 XG:i:0 MD:Z:2C8A24
+seq2 0 phiX174 141 37 36M * 0 0 ATTCGACCTATCCTTGCGCAGCTCGAGAAGCTCTTA IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII XT:A:U NM:i:0 X0:i:1 X1:i:0 XM:i:0 XO:i:0 XG:i:0 MD:Z:36
+seq3 0 phiX174 505 37 36M * 0 0 GTAACAAAGTTTGGATTGCTACTGACCGCTCTCGTG IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII XT:A:U NM:i:0 X0:i:1 X1:i:0 XM:i:0 XO:i:0 XG:i:0 MD:Z:36
+seq4 4 * 0 0 * * 0 0 AGCCGCTCGTCTTTTATGTAGGTGGTCAACCATTTT IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII
+seq5 0 phiX174 4985 25 36M * 0 0 CAGTTATATGGCTTTTGGTTTCTATGTGGCTTAATA IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII XT:A:U NM:i:2 X0:i:1 X1:i:0 XM:i:2 XO:i:0 XG:i:0 MD:Z:13G17A4
+seq6 0 phiX174 925 37 11M1D25M * 0 0 AGGCGCTCGTCTTGGTATGTAGGTGGTCAACAATTT IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII XT:A:U NM:i:1 X0:i:1 X1:i:0 XM:i:0 XO:i:1 XG:i:1 MD:Z:11^T25
+seq7 0 phiX174 943 37 13M1I22M * 0 0 TGTAGGTGGTCAACCAATTTTAATTGCAGGGGCTTC IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII XT:A:U NM:i:1 X0:i:1 X1:i:0 XM:i:0 XO:i:1 XG:i:1 MD:Z:35
+seq8 4 * 0 0 * * 0 0 ACACCCGTCCTTTACGTCATGCGCTCTATTCTCTGG IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII
+seq9 0 phiX174 2596 37 16M1I19M * 0 0 GCCGCTATTCAGGTTGTTTTCTGTTGGTGCTGATAT IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII XT:A:U NM:i:2 X0:i:1 X1:i:0 XM:i:1 XO:i:1 XG:i:1 MD:Z:7A27
+seq10 0 phiX174 4149 37 17M1D19M * 0 0 ATTCTTTCTTTTCGTATCAGGGCGTTGAGTTCGATA IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII XT:A:U NM:i:2 X0:i:1 X1:i:0 XM:i:1 XO:i:1 XG:i:1 MD:Z:5G11^G19
+seq11 0 phiX174 4072 37 18M1D18M * 0 0 GCATTTCTACTCCTTCTCATCCCCAATGCTTGGCTT II#IIIIIII$5+.(9IIIIIII$%*$G$A31I&&B XT:A:U NM:i:2 X0:i:1 X1:i:0 XM:i:1 XO:i:1 XG:i:1 MD:Z:12T5^A18
seq12 4 * 0 0 * * 0 0 CGCGCTTCGATAAAAATGGGATTGGCGTTTCCAACC IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII
seq13 4 * 0 0 * * 0 0 ATTTCTACTCTTTCTCATCCCCAATCCTTGCCTTCC IIIIIIIIIIIIIIIIIIIIIAAIIIIIIIIIIIII
-seq14 0 phiX174 3998 37 21M1D15M * 0 0 CCCTTTTGAATGTCACGCTGATATTTTGACTTTGAG IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII XT:A:U NM:i:2 X0:i:1 XM:i:1 XO:i:1 XG:i:1 MD:Z:5C15^T15
+seq14 0 phiX174 3998 37 21M1D15M * 0 0 CCCTTTTGAATGTCACGCTGATATTTTGACTTTGAG IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII XT:A:U NM:i:2 X0:i:1 X1:i:0 XM:i:1 XO:i:1 XG:i:1 MD:Z:5C15^T15
seq15 4 * 0 0 * * 0 0 CCAACTTACCAAGGTGGGTTACGAAACGCGACGCCG IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII
seq16 4 * 0 0 * * 0 0 TCAGGGTATTAAAAGAGATTATTTTTCTCCAGCCAC IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII
-seq17 0 phiX174 3034 37 19M1D17M * 0 0 GTGATGTGCTTGCTACCGAAACAATACTTTAGGCAT IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII XT:A:U NM:i:2 X0:i:1 XM:i:1 XO:i:1 XG:i:1 MD:Z:19^T9G7
+seq17 0 phiX174 3034 37 19M1D17M * 0 0 GTGATGTGCTTGCTACCGAAACAATACTTTAGGCAT IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII XT:A:U NM:i:2 X0:i:1 X1:i:0 XM:i:1 XO:i:1 XG:i:1 MD:Z:19^T9G7
seq18 4 * 0 0 * * 0 0 TCAATCCCCCATGCTTGGCCGTTCCATAAGCAGATG IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII
seq19 4 * 0 0 * * 0 0 TTCCTGCGCTTAATGCTTGAGCGTCCTGGTGCTGAT IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII
-seq2 0 phiX174 141 37 36M * 0 0 ATTCGACCTATCCTTGCGCAGCTCGAGAAGCTCTTA IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII XT:A:U NM:i:0 X0:i:1 XM:i:0 XO:i:0 XG:i:0 MD:Z:36
-seq20 0 phiX174 1082 37 36M * 0 0 CTTATTACCATTTCAACTACTCCGGTTATCGCTGGC IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII XT:A:U NM:i:0 X0:i:1 XM:i:0 XO:i:0 XG:i:0 MD:Z:36
-seq21 0 phiX174 1344 37 15M1D21M * 0 0 CTGATACCAATAAAACCCTAAGCATTTGGTTCAGGG II#IIIIIII$5+.(9IIIIIII$%*$G$A31I&&B XT:A:U NM:i:2 X0:i:1 XM:i:1 XO:i:1 XG:i:1 MD:Z:15^T13T7
+seq20 0 phiX174 1082 37 36M * 0 0 CTTATTACCATTTCAACTACTCCGGTTATCGCTGGC IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII XT:A:U NM:i:0 X0:i:1 X1:i:0 XM:i:0 XO:i:0 XG:i:0 MD:Z:36
+seq21 0 phiX174 1344 37 15M1D21M * 0 0 CTGATACCAATAAAACCCTAAGCATTTGGTTCAGGG II#IIIIIII$5+.(9IIIIIII$%*$G$A31I&&B XT:A:U NM:i:2 X0:i:1 X1:i:0 XM:i:1 XO:i:1 XG:i:1 MD:Z:15^T13T7
seq22 4 * 0 0 * * 0 0 AATCAAACTTACCAAGGGGTTACGACGCGACGCCGT IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII
seq23 4 * 0 0 * * 0 0 TGTGCTTCCCCAACTTGATTTAATAACCCTATAGAC IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII
-seq24 0 phiX174 4084 37 17M1I18M * 0 0 TTTCTCAATCCCCAATGCCTTGGCTTCCCTAAGCAG IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII XT:A:U NM:i:2 X0:i:1 XM:i:1 XO:i:1 XG:i:1 MD:Z:27A7
-seq25 0 phiX174 520 37 16M1I19M * 0 0 TTGCTACTGACCGCTCTTCGTGCTCGTTGCTGCGTT IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII XT:A:U NM:i:2 X0:i:1 XM:i:1 XO:i:1 XG:i:1 MD:Z:26C8
-seq26 0 phiX174 1976 37 36M * 0 0 CCGCGTGAAATTTCTATGAAGGATGTTTTCCGTTCT IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII XT:A:U NM:i:0 X0:i:1 XM:i:0 XO:i:0 XG:i:0 MD:Z:36
-seq27 0 phiX174 2598 37 20M1I15M * 0 0 CGCTAATCAAGTTGTTTCTGTTTGGTGCTGATATTG IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII XT:A:U NM:i:2 X0:i:1 XM:i:1 XO:i:1 XG:i:1 MD:Z:9G25
+seq24 0 phiX174 4084 37 17M1I18M * 0 0 TTTCTCAATCCCCAATGCCTTGGCTTCCCTAAGCAG IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII XT:A:U NM:i:2 X0:i:1 X1:i:0 XM:i:1 XO:i:1 XG:i:1 MD:Z:27A7
+seq25 0 phiX174 520 37 16M1I19M * 0 0 TTGCTACTGACCGCTCTTCGTGCTCGTTGCTGCGTT IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII XT:A:U NM:i:2 X0:i:1 X1:i:0 XM:i:1 XO:i:1 XG:i:1 MD:Z:26C8
+seq26 0 phiX174 1976 37 36M * 0 0 CCGCGTGAAATTTCTATGAAGGATGTTTTCCGTTCT IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII XT:A:U NM:i:0 X0:i:1 X1:i:0 XM:i:0 XO:i:0 XG:i:0 MD:Z:36
+seq27 0 phiX174 2598 37 20M1I15M * 0 0 CGCTAATCAAGTTGTTTCTGTTTGGTGCTGATATTG IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII XT:A:U NM:i:2 X0:i:1 X1:i:0 XM:i:1 XO:i:1 XG:i:1 MD:Z:9G25
seq28 4 * 0 0 * * 0 0 AAAGAGATTATTTGTCGGTCCAGCCACTAAAGTGAG IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII
seq29 4 * 0 0 * * 0 0 CAAATTAATGCGCGCTTCGATAATGATTGGGGTATC IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII
-seq3 0 phiX174 505 37 36M * 0 0 GTAACAAAGTTTGGATTGCTACTGACCGCTCTCGTG IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII XT:A:U NM:i:0 X0:i:1 XM:i:0 XO:i:0 XG:i:0 MD:Z:36
-seq30 0 phiX174 4091 37 18M1I17M * 0 0 ATCCCCTATGCTTGGCTTACCATAAGCAGATGGATA IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII XT:A:U NM:i:2 X0:i:1 XM:i:1 XO:i:1 XG:i:1 MD:Z:6A28
-seq4 4 * 0 0 * * 0 0 AGCCGCTCGTCTTTTATGTAGGTGGTCAACCATTTT IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII
-seq5 0 phiX174 4985 25 36M * 0 0 CAGTTATATGGCTTTTGGTTTCTATGTGGCTTAATA IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII XT:A:U NM:i:2 X0:i:1 XM:i:2 XO:i:0 XG:i:0 MD:Z:13G17A4
-seq6 0 phiX174 925 37 11M1D25M * 0 0 AGGCGCTCGTCTTGGTATGTAGGTGGTCAACAATTT IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII XT:A:U NM:i:1 X0:i:1 XM:i:0 XO:i:1 XG:i:1 MD:Z:11^T25
-seq7 0 phiX174 943 37 13M1I22M * 0 0 TGTAGGTGGTCAACCAATTTTAATTGCAGGGGCTTC IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII XT:A:U NM:i:1 X0:i:1 XM:i:0 XO:i:1 XG:i:1 MD:Z:35
-seq8 4 * 0 0 * * 0 0 ACACCCGTCCTTTACGTCATGCGCTCTATTCTCTGG IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII
-seq9 0 phiX174 2596 37 16M1I19M * 0 0 GCCGCTATTCAGGTTGTTTTCTGTTGGTGCTGATAT IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII XT:A:U NM:i:2 X0:i:1 XM:i:1 XO:i:1 XG:i:1 MD:Z:7A27
+seq30 0 phiX174 4091 37 18M1I17M * 0 0 ATCCCCTATGCTTGGCTTACCATAAGCAGATGGATA IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII XT:A:U NM:i:2 X0:i:1 X1:i:0 XM:i:1 XO:i:1 XG:i:1 MD:Z:6A28
diff -r 7387d61dfd59ca588bb84a57272beed1bb930abb -r a42eae47ae877073a077527463beb32e251d7035 test-data/variant_detection/freebayes/freebayes_out_1.output_trace
--- a/test-data/variant_detection/freebayes/freebayes_out_1.output_trace
+++ /dev/null
@@ -1,8 +0,0 @@
-phiX174,1411,allele,phiX174,phiX174,A,60,100
-phiX174,1412,allele,phiX174,phiX174,G,60,100
-phiX174,1413,allele,phiX174,phiX174,C,60,100
-phiX174,1414,allele,phiX174,phiX174,G,60,100
-phiX174,1415,allele,phiX174,phiX174,C,60,100
-phiX174,1416,allele,phiX174,phiX174,C,60,100
-phiX174,1417,allele,phiX174,phiX174,G,60,100
-phiX174,1418,allele,phiX174,phiX174,T,60,100
diff -r 7387d61dfd59ca588bb84a57272beed1bb930abb -r a42eae47ae877073a077527463beb32e251d7035 test-data/variant_detection/freebayes/freebayes_out_1.vcf.contains
--- a/test-data/variant_detection/freebayes/freebayes_out_1.vcf.contains
+++ /dev/null
@@ -1,2 +0,0 @@
-#CHROM POS ID REF ALT QUAL FILTER INFO FORMAT A
-
diff -r 7387d61dfd59ca588bb84a57272beed1bb930abb -r a42eae47ae877073a077527463beb32e251d7035 test/base/twilltestcase.py
--- a/test/base/twilltestcase.py
+++ b/test/base/twilltestcase.py
@@ -694,11 +694,19 @@
if filename is not None:
local_name = self.get_filename( filename, shed_tool_id=shed_tool_id )
temp_name = self.makeTfname(fname = filename)
- file( temp_name, 'wb' ).write(data)
- if self.keepOutdir > '':
- ofn = os.path.join(self.keepOutdir,os.path.basename(local_name))
- shutil.copy(temp_name,ofn)
- log.debug('## GALAXY_TEST_SAVE=%s. saved %s' % (self.keepOutdir,ofn))
+ file( temp_name, 'wb' ).write( data )
+
+ # if the server's env has GALAXY_TEST_SAVE, save the output file to that dir
+ if self.keepOutdir:
+ ofn = os.path.join( self.keepOutdir, os.path.basename( local_name ) )
+ try:
+ shutil.copy( temp_name, ofn )
+ except Exception, exc:
+ error_log_msg = ( 'TwillTestCase could not save output file %s to %s: ' % ( temp_name, ofn ) )
+ error_log_msg += str( e )
+ else:
+ log.debug('## GALAXY_TEST_SAVE=%s. saved %s' % ( self.keepOutdir, ofn ) )
+
try:
# have to nest try-except in try-finally to handle 2.4
try:
@@ -729,6 +737,7 @@
self.verify_extra_files_content( extra_files, elem.get( 'id' ), shed_tool_id=shed_tool_id )
except AssertionError, err:
errmsg = 'History item %s different than expected, difference (using %s):\n' % ( hid, compare )
+ errmsg += "( %s v. %s )\n" % ( local_name, temp_name )
errmsg += str( err )
raise AssertionError( errmsg )
finally:
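A note on the GALAXY_TEST_SAVE hunk above (the same change lands on its own in changeset 01ed2f462dd7 further down): the except clause binds the exception to exc, but the message is built from str( e ), so a failed copy would raise a NameError instead of being reported, and error_log_msg is never actually handed to the logger. A minimal sketch of the presumably intended behaviour, assuming the surrounding TwillTestCase context (self, log, shutil, temp_name, local_name):

if self.keepOutdir:
    ofn = os.path.join( self.keepOutdir, os.path.basename( local_name ) )
    try:
        shutil.copy( temp_name, ofn )
    except Exception, exc:
        # Report the exception that was actually caught.
        log.error( 'TwillTestCase could not save output file %s to %s: %s' % ( temp_name, ofn, str( exc ) ) )
    else:
        log.debug( '## GALAXY_TEST_SAVE=%s. saved %s' % ( self.keepOutdir, ofn ) )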
diff -r 7387d61dfd59ca588bb84a57272beed1bb930abb -r a42eae47ae877073a077527463beb32e251d7035 tools/ngs_rna/tophat2_wrapper.xml
--- a/tools/ngs_rna/tophat2_wrapper.xml
+++ b/tools/ngs_rna/tophat2_wrapper.xml
@@ -296,6 +296,9 @@
</inputs><outputs>
+ <data format="tabular" name="fusions" label="${tool.name} on ${on_string}: fusions" from_work_dir="tophat_out/fusions.out">
+ <filter>(params['fusion_search']['do_search'] == 'Yes')</filter>
+ </data><data format="bed" name="insertions" label="${tool.name} on ${on_string}: insertions" from_work_dir="tophat_out/insertions.bed"><actions><conditional name="refGenomeSource.genomeSource">
Repository URL: https://bitbucket.org/galaxy/galaxy-central/
--
This is a commit notification from bitbucket.org. You are receiving
this because you have the service enabled, addressing the recipient of
this email.
commit/galaxy-central: jgoecks: Cleanup for previous commit, 565476ce4f03, mainly to further comment and simplify code and avoid going to index multiple times.
by Bitbucket 15 Aug '12
15 Aug '12
1 new commit in galaxy-central:
https://bitbucket.org/galaxy/galaxy-central/changeset/24c6e925acaf/
changeset: 24c6e925acaf
user: jgoecks
date: 2012-08-16 00:09:48
summary: Cleanup for previous commit, 565476ce4f03, mainly to further comment and simplify code and avoid going to index multiple times.
affected #: 1 file
diff -r 565476ce4f0301d23538d87aeef805edc099badf -r 24c6e925acaf6b0dc8b52126ed295dcec35de2a6 lib/galaxy/visualization/tracks/data_providers.py
--- a/lib/galaxy/visualization/tracks/data_providers.py
+++ b/lib/galaxy/visualization/tracks/data_providers.py
@@ -985,28 +985,31 @@
# To do this, need to increase end to next base and request number of points.
num_points = end - start + 1
end += 1
-
- result = summarize_region( bbi, chrom, start, end, num_points )
else:
#
# The goal is to sample the region between start and end uniformly
- # using N data points. The challenge is that the size of sampled
+ # using ~N data points. The challenge is that the size of sampled
# intervals rarely is full bases, so sampling using N points will
- # leave the end of the region unsampled. To recitify this, samples
- # beyond N are taken at the end of the interval.
+ # leave the end of the region unsampled due to remainders for each
+ # interval. To recitify this, a new N is calculated based on the
+ # step size that covers as much of the region as possible.
+ #
+ # However, this still leaves some of the region unsampled. This
+ # could be addressed by repeatedly sampling remainder using a
+ # smaller and smaller step_size, but that would require iteratively
+ # going to BBI, which could be time consuming.
#
- # Do initial summary.
+ # Start with N samples.
num_points = N
- result = summarize_region( bbi, chrom, start, end, num_points )
-
- # Do summary of remaining part of region.
step_size = ( end - start ) / num_points
- new_start = start + step_size * num_points
- new_num_points = min( ( end - new_start ) / step_size, end - start )
- if new_num_points is not 0:
- result.extend( summarize_region( bbi, chrom, new_start, end, new_num_points ) )
- #TODO: progressively reduce step_size to generate more datapoints.
+ # Add additional points to sample in the remainder not covered by
+ # the initial N samples.
+ remainder_start = start + step_size * num_points
+ additional_points = ( end - remainder_start ) / step_size
+ num_points += additional_points
+
+ result = summarize_region( bbi, chrom, start, end, num_points )
# Cleanup and return.
f.close()
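The new comments above describe the sampling arithmetic in prose; a small worked example, with made-up numbers, of how the extra points are derived when the region length is not an exact multiple of the step size:

# Hypothetical region of 10,500 bases sampled with N = 1000 points (integer division).
N = 1000
start, end = 0, 10500

num_points = N
step_size = ( end - start ) / num_points             # 10500 / 1000 = 10
remainder_start = start + step_size * num_points      # 10000: first base the initial pass misses
additional_points = ( end - remainder_start ) / step_size   # 500 / 10 = 50
num_points += additional_points                        # 1050 points, sampling 0, 10, ..., 10490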
Repository URL: https://bitbucket.org/galaxy/galaxy-central/
--
This is a commit notification from bitbucket.org. You are receiving
this because you have the service enabled, addressing the recipient of
this email.
commit/galaxy-central: jgoecks: Rewrite sampling code for BBI data provider to handle (a) boundary cases during base-level resolution and (b) remainder of region not sampled during first pass.
by Bitbucket 15 Aug '12
15 Aug '12
1 new commit in galaxy-central:
https://bitbucket.org/galaxy/galaxy-central/changeset/565476ce4f03/
changeset: 565476ce4f03
user: jgoecks
date: 2012-08-15 23:49:39
summary: Rewrite sampling code for BBI data provider to handle (a) boundary cases during base-level resolution and (b) remainder of region not sampled during first pass.
affected #: 1 file
diff -r 2531e085f2625b60135a6b4972f125e22a4fd354 -r 565476ce4f0301d23538d87aeef805edc099badf lib/galaxy/visualization/tracks/data_providers.py
--- a/lib/galaxy/visualization/tracks/data_providers.py
+++ b/lib/galaxy/visualization/tracks/data_providers.py
@@ -947,55 +947,69 @@
return dict( data=dict( min=summary.min_val[0], max=summary.max_val[0], mean=mean, sd=sd ) )
- # The following seems not to work very well, for example it will only return one
- # data point if the tile is 1280px wide. Not sure what the intent is.
+ # Sample from region using approximately this many samples.
+ N = 1000
- # The first zoom level for BBI files is 640. If too much is requested, it will look at each block instead
- # of summaries. The calculation done is: zoom <> (end-start)/num_points/2.
- # Thus, the optimal number of points is (end-start)/num_points/2 = 640
- # num_points = (end-start) / 1280
- #num_points = (end-start) / 1280
- #if num_points < 1:
- # num_points = end - start
- #else:
- # num_points = min(num_points, 500)
+ def summarize_region( bbi, chrom, start, end, num_points ):
+ '''
+ Returns results from summarizing a region using num_points.
+ NOTE: num_points cannot be greater than end - start or BBI
+ will return None for all positions.s
+ '''
+ result = []
- # For now, we'll do 1000 data points by default. However, the summaries
- # don't seem to work when a summary pixel corresponds to less than one
- # datapoint, so we prevent that.
+ # Get summary; this samples at intervals of length
+ # (end - start)/num_points -- i.e. drops any fractional component
+ # of interval length.
+ summary = bbi.summarize( chrom, start, end, num_points )
+ if summary:
+ #mean = summary.sum_data / summary.valid_count
+
+ ## Standard deviation by bin, not yet used
+ ## var = summary.sum_squares - mean
+ ## var /= minimum( valid_count - 1, 1 )
+ ## sd = sqrt( var )
+
+ pos = start
+ step_size = (end - start) / num_points
- # FIXME: need to choose the number of points to maximize coverage of the area.
- # It appears that BBI calculates points using intervals of
- # floor( num_points / end - start )
- # In some cases, this prevents sampling near the end of the interval,
- # especially when (a) the total interval is small ( < 20-30Kb) and (b) the
- # computed interval size has a large fraction, e.g. 14.7 or 35.8
- num_points = min( 1000, end - start )
+ for i in range( num_points ):
+ result.append( (pos, float_nan( summary.sum_data[i] / summary.valid_count[i] ) ) )
+ pos += step_size
- # HACK to address the FIXME above; should generalize.
- if end - start <= 2000:
- num_points = end - start
+ return result
- summary = bbi.summarize( chrom, start, end, num_points )
+ # Approach is different depending on region size.
+ if end - start < N:
+ # Get values for individual bases in region, including start and end.
+ # To do this, need to increase end to next base and request number of points.
+ num_points = end - start + 1
+ end += 1
+
+ result = summarize_region( bbi, chrom, start, end, num_points )
+ else:
+ #
+ # The goal is to sample the region between start and end uniformly
+ # using N data points. The challenge is that the size of sampled
+ # intervals rarely is full bases, so sampling using N points will
+ # leave the end of the region unsampled. To recitify this, samples
+ # beyond N are taken at the end of the interval.
+ #
+
+ # Do initial summary.
+ num_points = N
+ result = summarize_region( bbi, chrom, start, end, num_points )
+
+ # Do summary of remaining part of region.
+ step_size = ( end - start ) / num_points
+ new_start = start + step_size * num_points
+ new_num_points = min( ( end - new_start ) / step_size, end - start )
+ if new_num_points is not 0:
+ result.extend( summarize_region( bbi, chrom, new_start, end, new_num_points ) )
+ #TODO: progressively reduce step_size to generate more datapoints.
+
+ # Cleanup and return.
f.close()
-
- result = []
-
- if summary:
- #mean = summary.sum_data / summary.valid_count
-
- ## Standard deviation by bin, not yet used
- ## var = summary.sum_squares - mean
- ## var /= minimum( valid_count - 1, 1 )
- ## sd = sqrt( var )
-
- pos = start
- step_size = (end - start) / num_points
-
- for i in range( num_points ):
- result.append( (pos, float_nan( summary.sum_data[i] / summary.valid_count[i] ) ) )
- pos += step_size
-
return { 'data': result }
class BigBedDataProvider( BBIDataProvider ):
Repository URL: https://bitbucket.org/galaxy/galaxy-central/
--
This is a commit notification from bitbucket.org. You are receiving
this because you have the service enabled, addressing the recipient of
this email.
commit/galaxy-central: greg: Improve error message handling when setting metadata on tool shed repositories. Display the repository owner in the table grid when displaying invalid tools in the tool shed.
by Bitbucket 15 Aug '12
15 Aug '12
1 new commit in galaxy-central:
https://bitbucket.org/galaxy/galaxy-central/changeset/2531e085f262/
changeset: 2531e085f262
user: greg
date: 2012-08-15 20:50:38
summary: Improve error message handling when setting metadata on tool shed repositories. Display the repository owner in the table grid when displaying invalid tools in the tool shed.
affected #: 4 files
diff -r b99cd3b7670169d6b2723aff95c458b5448af34d -r 2531e085f2625b60135a6b4972f125e22a4fd354 lib/galaxy/webapps/community/controllers/admin.py
--- a/lib/galaxy/webapps/community/controllers/admin.py
+++ b/lib/galaxy/webapps/community/controllers/admin.py
@@ -696,9 +696,14 @@
owner = repository_name_owner_list[ 1 ]
repository = get_repository_by_name_and_owner( trans, name, owner )
try:
- reset_all_metadata_on_repository( trans, trans.security.encode_id( repository.id ) )
- log.debug( "Successfully reset metadata on repository %s" % repository.name )
- successful_count += 1
+ invalid_file_tups = reset_all_metadata_on_repository( trans, trans.security.encode_id( repository.id ) )
+ if invalid_file_tups:
+ message = generate_message_for_invalid_tools( invalid_file_tups, repository, None, as_html=False )
+ log.debug( message )
+ unsuccessful_count += 1
+ else:
+ log.debug( "Successfully reset metadata on repository %s" % repository.name )
+ successful_count += 1
except Exception, e:
log.debug( "Error attempting to reset metadata on repository '%s': %s" % ( repository.name, str( e ) ) )
unsuccessful_count += 1
diff -r b99cd3b7670169d6b2723aff95c458b5448af34d -r 2531e085f2625b60135a6b4972f125e22a4fd354 lib/galaxy/webapps/community/controllers/common.py
--- a/lib/galaxy/webapps/community/controllers/common.py
+++ b/lib/galaxy/webapps/community/controllers/common.py
@@ -277,6 +277,41 @@
return '%s://%s%s/repos/%s/%s' % ( protocol, username, base, repository.user.username, repository.name )
else:
return '%s/repos/%s/%s' % ( base_url, repository.user.username, repository.name )
+def generate_message_for_invalid_tools( invalid_file_tups, repository, metadata_dict, as_html=True ):
+ if as_html:
+ new_line = '<br/>'
+ bold_start = '<b>'
+ bold_end = '</b>'
+ else:
+ new_line = '\n'
+ bold_start = ''
+ bold_end = ''
+ message = ''
+ if metadata_dict:
+ message += "Metadata was defined for some items in revision '%s'. " % str( repository.tip )
+ message += "Correct the following problems if necessary and reset metadata.%s" % new_line
+ else:
+ message += "Metadata cannot be defined for revision '%s' so this revision cannot be automatically " % str( repository.tip )
+ message += "installed into a local Galaxy instance. Correct the following problems and reset metadata.%s" % new_line
+ for itc_tup in invalid_file_tups:
+ tool_file, exception_msg = itc_tup
+ if exception_msg.find( 'No such file or directory' ) >= 0:
+ exception_items = exception_msg.split()
+ missing_file_items = exception_items[ 7 ].split( '/' )
+ missing_file = missing_file_items[ -1 ].rstrip( '\'' )
+ if missing_file.endswith( '.loc' ):
+ sample_ext = '%s.sample' % missing_file
+ else:
+ sample_ext = missing_file
+ correction_msg = "This file refers to a missing file %s%s%s. " % ( bold_start, str( missing_file ), bold_end )
+ correction_msg += "Upload a file named %s%s%s to the repository to correct this error." % ( bold_start, sample_ext, bold_end )
+ else:
+ if as_html:
+ correction_msg = exception_msg
+ else:
+ correction_msg = exception_msg.replace( '<br/>', new_line ).replace( '<b>', bold_start ).replace( '</b>', bold_end )
+ message += "%s%s%s - %s%s" % ( bold_start, tool_file, bold_end, correction_msg, new_line )
+ return message
def generate_tool_guid( trans, repository, tool ):
"""
Generate a guid for the received tool. The form of the guid is
@@ -854,6 +889,7 @@
clean_repository_metadata( trans, id, changeset_revisions )
# Set tool version information for all downloadable changeset revisions. Get the list of changeset revisions from the changelog.
reset_all_tool_versions( trans, id, repo )
+ return invalid_file_tups
def set_repository_metadata( trans, repository, content_alert_str='', **kwd ):
"""
Set metadata using the repository's current disk files, returning specific error messages (if any) to alert the repository owner that the changeset
@@ -931,27 +967,7 @@
message += "be defined so this revision cannot be automatically installed into a local Galaxy instance."
status = "error"
if invalid_file_tups:
- if metadata_dict:
- message += "Metadata was defined for some items in revision '%s'. " % str( repository.tip )
- message += "Correct the following problems if necessary and reset metadata.<br/>"
- else:
- message += "Metadata cannot be defined for revision '%s' so this revision cannot be automatically " % str( repository.tip )
- message += "installed into a local Galaxy instance. Correct the following problems and reset metadata.<br/>"
- for itc_tup in invalid_file_tups:
- tool_file, exception_msg = itc_tup
- if exception_msg.find( 'No such file or directory' ) >= 0:
- exception_items = exception_msg.split()
- missing_file_items = exception_items[ 7 ].split( '/' )
- missing_file = missing_file_items[ -1 ].rstrip( '\'' )
- if missing_file.endswith( '.loc' ):
- sample_ext = '%s.sample' % missing_file
- else:
- sample_ext = missing_file
- correction_msg = "This file refers to a missing file <b>%s</b>. " % str( missing_file )
- correction_msg += "Upload a file named <b>%s</b> to the repository to correct this error." % sample_ext
- else:
- correction_msg = exception_msg
- message += "<b>%s</b> - %s<br/>" % ( tool_file, correction_msg )
+ message = generate_message_for_invalid_tools( invalid_file_tups, repository, metadata_dict )
status = 'error'
return message, status
def set_repository_metadata_due_to_new_tip( trans, repository, content_alert_str=None, **kwd ):
diff -r b99cd3b7670169d6b2723aff95c458b5448af34d -r 2531e085f2625b60135a6b4972f125e22a4fd354 lib/galaxy/webapps/community/controllers/repository.py
--- a/lib/galaxy/webapps/community/controllers/repository.py
+++ b/lib/galaxy/webapps/community/controllers/repository.py
@@ -458,7 +458,10 @@
metadata = downloadable_revision.metadata
invalid_tools = metadata.get( 'invalid_tools', [] )
for invalid_tool_config in invalid_tools:
- invalid_tools_dict[ invalid_tool_config ] = ( repository.id, repository.name, downloadable_revision.changeset_revision )
+ invalid_tools_dict[ invalid_tool_config ] = ( repository.id,
+ repository.name,
+ repository.user.username,
+ downloadable_revision.changeset_revision )
else:
for repository in trans.sa_session.query( trans.model.Repository ) \
.filter( and_( trans.model.Repository.table.c.deleted == False,
@@ -468,7 +471,10 @@
metadata = downloadable_revision.metadata
invalid_tools = metadata.get( 'invalid_tools', [] )
for invalid_tool_config in invalid_tools:
- invalid_tools_dict[ invalid_tool_config ] = ( repository.id, repository.name, downloadable_revision.changeset_revision )
+ invalid_tools_dict[ invalid_tool_config ] = ( repository.id,
+ repository.name,
+ repository.user.username,
+ downloadable_revision.changeset_revision )
return trans.fill_template( '/webapps/community/repository/browse_invalid_tools.mako',
cntrller=cntrller,
invalid_tools_dict=invalid_tools_dict,
@@ -1373,6 +1379,7 @@
return trans.response.send_redirect( url )
@web.expose
def load_invalid_tool( self, trans, repository_id, tool_config, changeset_revision, **kwd ):
+ # FIXME: loading an invalid tool should display an appropriate message as to why the tool is invalid. This worked until recently.
params = util.Params( kwd )
message = util.restore_text( params.get( 'message', '' ) )
status = params.get( 'status', 'error' )
@@ -1752,9 +1759,14 @@
status=status )
@web.expose
def reset_all_metadata( self, trans, id, **kwd ):
- reset_all_metadata_on_repository( trans, id, **kwd )
- message = "All repository metadata has been reset."
- status = 'done'
+ invalid_file_tups = reset_all_metadata_on_repository( trans, id, **kwd )
+ if invalid_file_tups:
+ repository = get_repository( trans, id )
+ message = generate_message_for_invalid_tools( invalid_file_tups, repository, None )
+ status = 'error'
+ else:
+ message = "All repository metadata has been reset."
+ status = 'done'
return trans.response.send_redirect( web.url_for( controller='repository',
action='manage_repository',
id=id,
diff -r b99cd3b7670169d6b2723aff95c458b5448af34d -r 2531e085f2625b60135a6b4972f125e22a4fd354 templates/webapps/community/repository/browse_invalid_tools.mako
--- a/templates/webapps/community/repository/browse_invalid_tools.mako
+++ b/templates/webapps/community/repository/browse_invalid_tools.mako
@@ -13,10 +13,11 @@
<tr><th>Tool config</th><th>Repository name</th>
+ <th>Repository owner</th><th>Changeset revision</th></tr>
%for invalid_tool_config, repository_tup in invalid_tools_dict.items():
- <% repository_id, repository_name, changeset_revision = repository_tup %>
+ <% repository_id, repository_name, repository_owner, changeset_revision = repository_tup %><tr><td><a class="view-info" href="${h.url_for( controller='repository', action='load_invalid_tool', repository_id=trans.security.encode_id( repository_id ), tool_config=invalid_tool_config, changeset_revision=changeset_revision, webapp=webapp )}">
@@ -24,6 +25,7 @@
</a></td><td>${repository_name}</td>
+ <td>${repository_owner}</td><td>${changeset_revision}</td></tr>
%endfor
Repository URL: https://bitbucket.org/galaxy/galaxy-central/
--
This is a commit notification from bitbucket.org. You are receiving
this because you have the service enabled, addressing the recipient of
this email.
commit/galaxy-central: greg: Fix for setting tool dependency metadata where at least one tool in the repository does not include a <requirements> tag set.
by Bitbucket 15 Aug '12
15 Aug '12
1 new commit in galaxy-central:
https://bitbucket.org/galaxy/galaxy-central/changeset/b99cd3b76701/
changeset: b99cd3b76701
user: greg
date: 2012-08-15 17:55:17
summary: Fix for setting tool dependency metadata where at least one tool in the repository does not include a <requirements> tag set.
affected #: 2 files
diff -r 01ed2f462dd7709876458b031d786d277d1f72f3 -r b99cd3b7670169d6b2723aff95c458b5448af34d lib/galaxy/util/shed_util.py
--- a/lib/galaxy/util/shed_util.py
+++ b/lib/galaxy/util/shed_util.py
@@ -533,7 +533,7 @@
if req_name==tool_dependency_name and req_version==tool_dependency_version and req_type==tool_dependency_type:
can_generate_dependency_metadata = True
break
- if not can_generate_dependency_metadata:
+ if requirements and not can_generate_dependency_metadata:
# We've discovered at least 1 combination of name, version and type that is not defined in the <requirement>
# tag for any tool in the repository.
break
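The added guard means a tool that defines no <requirements> tag set no longer forces the loop to give up: with an empty requirements list the inner comparison never runs, the flag stays False, and the old test broke out immediately. A simplified sketch of that control flow (plain dicts stand in for the real tool metadata structures):

def can_generate( tools, dependency ):
    # dependency is a ( name, version, type ) tuple from tool_dependencies.xml.
    can_generate_dependency_metadata = False
    for tool in tools:
        can_generate_dependency_metadata = False
        requirements = tool.get( 'requirements', [] )
        for req in requirements:
            if ( req[ 'name' ], req[ 'version' ], req[ 'type' ] ) == dependency:
                can_generate_dependency_metadata = True
                break
        if requirements and not can_generate_dependency_metadata:
            # A tool declares requirements but none of them match: stop looking.
            break
    return can_generate_dependency_metadata

tools = [ { 'requirements': [] },      # tool without a <requirements> tag set
          { 'requirements': [ { 'name': 'bwa', 'version': '0.5.9', 'type': 'package' } ] } ]
# True with the guard; without 'requirements and', the first tool would end the search.
print can_generate( tools, ( 'bwa', '0.5.9', 'package' ) )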
diff -r 01ed2f462dd7709876458b031d786d277d1f72f3 -r b99cd3b7670169d6b2723aff95c458b5448af34d lib/galaxy/webapps/community/config.py
--- a/lib/galaxy/webapps/community/config.py
+++ b/lib/galaxy/webapps/community/config.py
@@ -87,6 +87,7 @@
self.server_name = ''
self.job_manager = ''
self.default_job_handlers = []
+ self.default_cluster_job_runner = 'local:///'
self.job_handlers = []
self.tool_handlers = []
self.tool_runners = []
Repository URL: https://bitbucket.org/galaxy/galaxy-central/
--
This is a commit notification from bitbucket.org. You are receiving
this because you have the service enabled, addressing the recipient of
this email.
commit/galaxy-central: carlfeberhard: small fix to twilltestcase when GALAXY_TEST_SAVE fails
by Bitbucket 15 Aug '12
15 Aug '12
1 new commit in galaxy-central:
https://bitbucket.org/galaxy/galaxy-central/changeset/01ed2f462dd7/
changeset: 01ed2f462dd7
user: carlfeberhard
date: 2012-08-15 17:13:13
summary: small fix to twilltestcase when GALAXY_TEST_SAVE fails
affected #: 1 file
diff -r 7566c91c1ec65f90c52f173ff1a5442789028f93 -r 01ed2f462dd7709876458b031d786d277d1f72f3 test/base/twilltestcase.py
--- a/test/base/twilltestcase.py
+++ b/test/base/twilltestcase.py
@@ -694,11 +694,19 @@
if filename is not None:
local_name = self.get_filename( filename, shed_tool_id=shed_tool_id )
temp_name = self.makeTfname(fname = filename)
- file( temp_name, 'wb' ).write(data)
- if self.keepOutdir > '':
- ofn = os.path.join(self.keepOutdir,os.path.basename(local_name))
- shutil.copy(temp_name,ofn)
- log.debug('## GALAXY_TEST_SAVE=%s. saved %s' % (self.keepOutdir,ofn))
+ file( temp_name, 'wb' ).write( data )
+
+ # if the server's env has GALAXY_TEST_SAVE, save the output file to that dir
+ if self.keepOutdir:
+ ofn = os.path.join( self.keepOutdir, os.path.basename( local_name ) )
+ try:
+ shutil.copy( temp_name, ofn )
+ except Exception, exc:
+ error_log_msg = ( 'TwillTestCase could not save output file %s to %s: ' % ( temp_name, ofn ) )
+ error_log_msg += str( e )
+ else:
+ log.debug('## GALAXY_TEST_SAVE=%s. saved %s' % ( self.keepOutdir, ofn ) )
+
try:
# have to nest try-except in try-finally to handle 2.4
try:
Repository URL: https://bitbucket.org/galaxy/galaxy-central/
--
This is a commit notification from bitbucket.org. You are receiving
this because you have the service enabled, addressing the recipient of
this email.