galaxy-commits
January 2014: 1 participant, 280 discussions
1 new commit in galaxy-central:
https://bitbucket.org/galaxy/galaxy-central/commits/4e6b003daaa4/
Changeset: 4e6b003daaa4
User: guerler
Date: 2014-01-09 20:06:19
Summary: Merge
Affected #: 5 files
diff -r b3a603f522174fa1fc4623580cce39e4069b189c -r 4e6b003daaa44a06ac13c74c5ce8c75fa7bbb7ee scripts/cleanup_datasets/pgcleanup.py
--- a/scripts/cleanup_datasets/pgcleanup.py
+++ b/scripts/cleanup_datasets/pgcleanup.py
@@ -105,7 +105,7 @@
url = make_url(self.config.database_connection)
log.info('Connecting to database with URL: %s' % url)
- args = url.translate_connect_args()
+ args = url.translate_connect_args( username='user' )
args.update(url.query)
assert url.get_dialect().name == 'postgresql', 'This script can only be used with PostgreSQL.'
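For context, SQLAlchemy's translate_connect_args() converts the parsed URL's components into keyword arguments for the DB-API connect() call; the username='user' override in the change above maps the URL's username component to the 'user' key that psycopg2 expects. A standalone sketch of that renaming (illustrative only, not SQLAlchemy's implementation):

```python
def translate_connect_args(url_components, **name_overrides):
    """Map URL component names to DB-API connect() keyword names.

    name_overrides renames a component, e.g. username='user' emits the
    value under 'user', which is the keyword psycopg2's connect() takes.
    Components with no value are omitted.
    """
    args = {}
    for component, value in url_components.items():
        if value is not None:
            args[name_overrides.get(component, component)] = value
    return args

# Hypothetical parsed URL components for a Galaxy PostgreSQL database.
components = {"username": "galaxy", "password": None,
              "host": "localhost", "database": "galaxy"}
```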
Repository URL: https://bitbucket.org/galaxy/galaxy-central/
--
This is a commit notification from bitbucket.org. You are receiving
this because you have the service enabled, addressing the recipient of
this email.
commit/galaxy-central: natefoo: Log native spec on drmaa job submission.
by commits-noreply@bitbucket.org 09 Jan '14
1 new commit in galaxy-central:
https://bitbucket.org/galaxy/galaxy-central/commits/b3a603f52217/
Changeset: b3a603f52217
User: natefoo
Date: 2014-01-09 19:27:47
Summary: Log native spec on drmaa job submission.
Affected #: 1 file
diff -r b0cb955a9b284bea1423e613be7365ef5df8a071 -r b3a603f522174fa1fc4623580cce39e4069b189c lib/galaxy/jobs/runners/drmaa.py
--- a/lib/galaxy/jobs/runners/drmaa.py
+++ b/lib/galaxy/jobs/runners/drmaa.py
@@ -163,8 +163,10 @@
job_wrapper.cleanup()
return
- log.debug( "(%s) submitting file %s" % ( galaxy_id_tag, ajs.job_file ) )
- log.debug( "(%s) command is: %s" % ( galaxy_id_tag, command_line ) )
+ log.debug( "(%s) submitting file %s", galaxy_id_tag, ajs.job_file )
+ log.debug( "(%s) command is: %s", galaxy_id_tag, command_line )
+ if native_spec:
+ log.debug( "(%s) native specification is: %s", galaxy_id_tag, native_spec )
# runJob will raise if there's a submit problem
if self.external_runJob_script is None:
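Beyond adding the native-spec line, the change above switches from eager %-interpolation to logging's lazy argument style, where the message is only formatted if a handler actually emits the record. A self-contained illustration:

```python
import logging

log = logging.getLogger("drmaa.demo")
log.setLevel(logging.DEBUG)

# Collect formatted messages so the result is observable.
captured = []

class ListHandler(logging.Handler):
    def emit(self, record):
        # getMessage() performs the deferred %-interpolation.
        captured.append(record.getMessage())

log.addHandler(ListHandler())

galaxy_id_tag, native_spec = "123", "-q main -l h_vmem=8G"

# Old style: "%" builds the string eagerly, before logging decides
# whether the record will be emitted at all.
log.debug("(%s) command is: %s" % (galaxy_id_tag, "cat input.dat"))

# New style: arguments are passed separately and interpolated lazily,
# only when a handler actually handles the record.
log.debug("(%s) native specification is: %s", galaxy_id_tag, native_spec)
```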
Repository URL: https://bitbucket.org/galaxy/galaxy-central/
commit/galaxy-central: natefoo: Merge 26da838b778c from stable.
by commits-noreply@bitbucket.org 09 Jan '14
1 new commit in galaxy-central:
https://bitbucket.org/galaxy/galaxy-central/commits/b0cb955a9b28/
Changeset: b0cb955a9b28
User: natefoo
Date: 2014-01-09 19:09:52
Summary: Merge 26da838b778c from stable.
Affected #: 1 file
Repository URL: https://bitbucket.org/galaxy/galaxy-central/
commit/galaxy-central: inithello: Merged in 11734:e059c6607f34 from stable.
by commits-noreply@bitbucket.org 09 Jan '14
1 new commit in galaxy-central:
https://bitbucket.org/galaxy/galaxy-central/commits/d8f02450982c/
Changeset: d8f02450982c
User: inithello
Date: 2014-01-09 18:58:55
Summary: Merged in 11734:e059c6607f34 from stable.
Affected #: 3 files
Repository URL: https://bitbucket.org/galaxy/galaxy-central/
6 new commits in galaxy-central:
https://bitbucket.org/galaxy/galaxy-central/commits/229064ed6add/
Changeset: 229064ed6add
User: natefoo
Date: 2014-01-07 21:13:01
Summary: Allow handlers to control which runner plugins they will load.
Affected #: 2 files
diff -r 968389346b3491bf1a90cf359ff258075816b1e8 -r 229064ed6addac469460190415360e553ae91a0f lib/galaxy/jobs/__init__.py
--- a/lib/galaxy/jobs/__init__.py
+++ b/lib/galaxy/jobs/__init__.py
@@ -86,6 +86,7 @@
self.app = app
self.runner_plugins = []
self.handlers = {}
+ self.handler_runner_plugins = {}
self.default_handler_id = None
self.destinations = {}
self.destination_tags = {}
@@ -138,6 +139,10 @@
else:
log.debug("Read definition for handler '%s'" % id)
self.handlers[id] = (id,)
+ for plugin in handler.findall('plugin'):
+ if id not in self.handler_runner_plugins:
+ self.handler_runner_plugins[id] = []
+ self.handler_runner_plugins[id].append( plugin.get('id') )
if handler.get('tags', None) is not None:
for tag in [ x.strip() for x in handler.get('tags').split(',') ]:
if tag in self.handlers:
@@ -420,13 +425,19 @@
"""
return self.destinations.get(id_or_tag, None)
- def get_job_runner_plugins(self):
+ def get_job_runner_plugins(self, handler_id):
"""Load all configured job runner plugins
:returns: list of job runner plugins
"""
rval = {}
- for runner in self.runner_plugins:
+ if handler_id in self.handler_runner_plugins:
+ plugins_to_load = [ rp for rp in self.runner_plugins if rp['id'] in self.handler_runner_plugins[handler_id] ]
+ log.info( "Handler '%s' will load specified runner plugins: %s", handler_id, ', '.join( [ rp['id'] for rp in plugins_to_load ] ) )
+ else:
+ plugins_to_load = self.runner_plugins
+ log.info( "Handler '%s' will load all configured runner plugins", handler_id )
+ for runner in plugins_to_load:
class_names = []
module = None
id = runner['id']
@@ -477,7 +488,7 @@
try:
rval[id] = runner_class( self.app, runner[ 'workers' ], **runner.get( 'kwds', {} ) )
except TypeError:
- log.warning( "Job runner '%s:%s' has not been converted to a new-style runner" % ( module_name, class_name ) )
+ log.exception( "Job runner '%s:%s' has not been converted to a new-style runner or encountered TypeError on load" % ( module_name, class_name ) )
rval[id] = runner_class( self.app )
log.debug( "Loaded job runner '%s:%s' as '%s'" % ( module_name, class_name, id ) )
return rval
diff -r 968389346b3491bf1a90cf359ff258075816b1e8 -r 229064ed6addac469460190415360e553ae91a0f lib/galaxy/jobs/handler.py
--- a/lib/galaxy/jobs/handler.py
+++ b/lib/galaxy/jobs/handler.py
@@ -565,7 +565,7 @@
class DefaultJobDispatcher( object ):
def __init__( self, app ):
self.app = app
- self.job_runners = self.app.job_config.get_job_runner_plugins()
+ self.job_runners = self.app.job_config.get_job_runner_plugins( self.app.config.server_name )
# Once plugins are loaded, all job destinations that were created from
# URLs can have their URL params converted to the destination's param
# dict by the plugin.
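The handler-side filtering that get_job_runner_plugins() performs reduces to selecting configured plugins by id; a minimal sketch with illustrative data (not Galaxy's real config objects):

```python
def plugins_for_handler(runner_plugins, handler_runner_plugins, handler_id):
    """Return the subset of runner plugins a handler should load.

    runner_plugins: list of dicts, each carrying an 'id' key.
    handler_runner_plugins: maps a handler id to the plugin ids it may load.
    Handlers without an entry load every configured plugin (the default).
    """
    if handler_id in handler_runner_plugins:
        allowed = handler_runner_plugins[handler_id]
        return [rp for rp in runner_plugins if rp["id"] in allowed]
    return runner_plugins

# Illustrative configuration: three plugins, one restricted handler.
plugins = [{"id": "drmaa"}, {"id": "sge"}, {"id": "local"}]
mapping = {"sge_handler": ["sge"]}
```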
https://bitbucket.org/galaxy/galaxy-central/commits/681643c22fed/
Changeset: 681643c22fed
User: natefoo
Date: 2014-01-07 21:18:11
Summary: Allow runners plugins to accept params, provide some simple load-time validation logic for params.
Affected #: 1 file
diff -r 229064ed6addac469460190415360e553ae91a0f -r 681643c22fedeac632567ae6bcf204616d871f07 lib/galaxy/jobs/runners/__init__.py
--- a/lib/galaxy/jobs/runners/__init__.py
+++ b/lib/galaxy/jobs/runners/__init__.py
@@ -22,13 +22,39 @@
STOP_SIGNAL = object()
+
+class RunnerParams( object ):
+
+ def __init__( self, specs = None, params = None ):
+ self.specs = specs or dict()
+ self.params = params or dict()
+ for name, value in self.params.items():
+ assert name in self.specs, 'Invalid job runner parameter for this plugin: %s' % name
+ if 'map' in self.specs[ name ]:
+ try:
+ self.params[ name ] = self.specs[ name ][ 'map' ]( value )
+ except Exception, e:
+ raise Exception( 'Job runner parameter "%s" value "%s" could not be converted to the correct type: %s' % ( name, value, e ) )
+ if 'valid' in self.specs[ name ]:
+ assert self.specs[ name ][ 'valid' ]( value ), 'Job runner parameter %s failed validation' % name
+
+ def __getattr__( self, name ):
+ return self.params.get( name, self.specs[ name ][ 'default' ] )
+
+
class BaseJobRunner( object ):
- def __init__( self, app, nworkers ):
+ def __init__( self, app, nworkers, **kwargs ):
"""Start the job runner
"""
self.app = app
self.sa_session = app.model.context
self.nworkers = nworkers
+ runner_param_specs = dict( recheck_missing_job_retries = dict( map = int, valid = lambda x: x >= 0, default = 0 ) )
+ if 'runner_param_specs' in kwargs:
+ runner_param_specs.update( kwargs.pop( 'runner_param_specs' ) )
+ if kwargs:
+ log.debug( 'Loading %s with params: %s', self.runner_name, kwargs )
+ self.runner_params = RunnerParams( specs = runner_param_specs, params = kwargs )
def _init_worker_threads(self):
"""Start ``nworkers`` worker threads.
@@ -115,7 +141,7 @@
job_wrapper.cleanup()
return False
elif job_state != model.Job.states.QUEUED:
- log.info( "(%d) Job is in state %s, skipping execution" % ( job_id, job_state ) )
+ log.info( "(%s) Job is in state %s, skipping execution" % ( job_id, job_state ) )
# cleanup may not be safe in all states
return False
@@ -287,8 +313,8 @@
to the correct methods (queue, finish, cleanup) at appropriate times..
"""
- def __init__( self, app, nworkers ):
- super( AsynchronousJobRunner, self ).__init__( app, nworkers )
+ def __init__( self, app, nworkers, **kwargs ):
+ super( AsynchronousJobRunner, self ).__init__( app, nworkers, **kwargs )
# 'watched' and 'queue' are both used to keep track of jobs to watch.
# 'queue' is used to add new watched jobs, and can be called from
# any thread (usually by the 'queue_job' method). 'watched' must only
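The RunnerParams pattern introduced here (a spec dict whose entries may carry a "map" coercion, a "valid" predicate, and a "default", with __getattr__ falling back to the default) can be sketched in Python 3 syntax. Unlike the diff above, this sketch validates the coerced value rather than the raw string:

```python
class RunnerParams:
    """Validate plugin params against specs; unset params use spec defaults."""

    def __init__(self, specs=None, params=None):
        self.specs = specs or {}
        self.params = params or {}
        for name, value in self.params.items():
            assert name in self.specs, "Invalid job runner parameter: %s" % name
            spec = self.specs[name]
            if "map" in spec:
                # Coerce the raw (usually string) config value to its type.
                self.params[name] = spec["map"](value)
            if "valid" in spec:
                assert spec["valid"](self.params[name]), \
                    "Job runner parameter %s failed validation" % name

    def __getattr__(self, name):
        # Invoked only when normal attribute lookup fails, so accessing
        # self.params and self.specs here cannot recurse.
        return self.params.get(name, self.specs[name]["default"])

specs = {"recheck_missing_job_retries":
         {"map": int, "valid": lambda x: x >= 0, "default": 0}}
rp = RunnerParams(specs=specs, params={"recheck_missing_job_retries": "3"})
```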
https://bitbucket.org/galaxy/galaxy-central/commits/e67776961e48/
Changeset: e67776961e48
User: natefoo
Date: 2014-01-07 21:21:50
Summary: Make job terminal state logic somewhat configurable via runner params, update
DRMAA runner plugin changes for configurable terminal state handling. Also
allow the drmaa plugin to read the drmaa library path from job_conf.xml rather
than the $DRMAA_LIBRARY_PATH environment variable.
Affected #: 2 files
diff -r 681643c22fedeac632567ae6bcf204616d871f07 -r e67776961e48dca45850c95b0f053178fd0e61b6 lib/galaxy/jobs/runners/__init__.py
--- a/lib/galaxy/jobs/runners/__init__.py
+++ b/lib/galaxy/jobs/runners/__init__.py
@@ -252,6 +252,10 @@
options.update(**kwds)
return job_script(**options)
+ def _complete_terminal_job( self, ajs, **kwargs ):
+ if ajs.job_wrapper.get_state() != model.Job.states.DELETED:
+ self.work_queue.put( ( self.finish_job, ajs ) )
+
class AsynchronousJobState( object ):
"""
diff -r 681643c22fedeac632567ae6bcf204616d871f07 -r e67776961e48dca45850c95b0f053178fd0e61b6 lib/galaxy/jobs/runners/drmaa.py
--- a/lib/galaxy/jobs/runners/drmaa.py
+++ b/lib/galaxy/jobs/runners/drmaa.py
@@ -16,27 +16,12 @@
from galaxy.jobs.runners import AsynchronousJobState, AsynchronousJobRunner
eggs.require( "drmaa" )
-# We foolishly named this file the same as the name exported by the drmaa
-# library... 'import drmaa' imports itself.
-drmaa = __import__( "drmaa" )
log = logging.getLogger( __name__ )
__all__ = [ 'DRMAAJobRunner' ]
-drmaa_state = {
- drmaa.JobState.UNDETERMINED: 'process status cannot be determined',
- drmaa.JobState.QUEUED_ACTIVE: 'job is queued and active',
- drmaa.JobState.SYSTEM_ON_HOLD: 'job is queued and in system hold',
- drmaa.JobState.USER_ON_HOLD: 'job is queued and in user hold',
- drmaa.JobState.USER_SYSTEM_ON_HOLD: 'job is queued and in user and system hold',
- drmaa.JobState.RUNNING: 'job is running',
- drmaa.JobState.SYSTEM_SUSPENDED: 'job is system suspended',
- drmaa.JobState.USER_SUSPENDED: 'job is user suspended',
- drmaa.JobState.DONE: 'job finished normally',
- drmaa.JobState.FAILED: 'job finished, but failed',
-}
-
+drmaa = None
DRMAA_jobTemplate_attributes = [ 'args', 'remoteCommand', 'outputPath', 'errorPath', 'nativeSpecification',
'jobName', 'email', 'project' ]
@@ -48,8 +33,50 @@
"""
runner_name = "DRMAARunner"
- def __init__( self, app, nworkers ):
+ def __init__( self, app, nworkers, **kwargs ):
"""Start the job runner"""
+
+ global drmaa
+
+ runner_param_specs = dict(
+ drmaa_library_path = dict( map = str, default = os.environ.get( 'DRMAA_LIBRARY_PATH', None ) ),
+ invalidjobexception_state = dict( map = str, valid = lambda x: x in ( model.Job.states.OK, model.Job.states.ERROR ), default = model.Job.states.OK ),
+ invalidjobexception_retries = dict( map = int, valid = lambda x: int >= 0, default = 0 ),
+ internalexception_state = dict( map = str, valid = lambda x: x in ( model.Job.states.OK, model.Job.states.ERROR ), default = model.Job.states.OK ),
+ internalexception_retries = dict( map = int, valid = lambda x: int >= 0, default = 0 ) )
+
+ if 'runner_param_specs' not in kwargs:
+ kwargs[ 'runner_param_specs' ] = dict()
+ kwargs[ 'runner_param_specs' ].update( runner_param_specs )
+
+ super( DRMAAJobRunner, self ).__init__( app, nworkers, **kwargs )
+
+ # This allows multiple drmaa runners (although only one per handler) in the same job config file
+ if 'drmaa_library_path' in kwargs:
+ log.info( 'Overriding DRMAA_LIBRARY_PATH due to runner plugin parameter: %s', self.runner_params.drmaa_library_path )
+ os.environ['DRMAA_LIBRARY_PATH'] = self.runner_params.drmaa_library_path
+
+ # We foolishly named this file the same as the name exported by the drmaa
+ # library... 'import drmaa' imports itself.
+ drmaa = __import__( "drmaa" )
+
+ # Subclasses may need access to state constants
+ self.drmaa_job_states = drmaa.JobState
+
+ # Descriptive state strings pulled from the drmaa lib itself
+ self.drmaa_job_state_strings = {
+ drmaa.JobState.UNDETERMINED: 'process status cannot be determined',
+ drmaa.JobState.QUEUED_ACTIVE: 'job is queued and active',
+ drmaa.JobState.SYSTEM_ON_HOLD: 'job is queued and in system hold',
+ drmaa.JobState.USER_ON_HOLD: 'job is queued and in user hold',
+ drmaa.JobState.USER_SYSTEM_ON_HOLD: 'job is queued and in user and system hold',
+ drmaa.JobState.RUNNING: 'job is running',
+ drmaa.JobState.SYSTEM_SUSPENDED: 'job is system suspended',
+ drmaa.JobState.USER_SUSPENDED: 'job is user suspended',
+ drmaa.JobState.DONE: 'job finished normally',
+ drmaa.JobState.FAILED: 'job finished, but failed',
+ }
+
self.ds = drmaa.Session()
self.ds.initialize()
@@ -58,7 +85,6 @@
self.external_killJob_script = app.config.drmaa_external_killjob_script
self.userid = None
- super( DRMAAJobRunner, self ).__init__( app, nworkers )
self._init_monitor_thread()
self._init_worker_threads()
@@ -175,6 +201,20 @@
# Add to our 'queue' of jobs to monitor
self.monitor_queue.put( ajs )
+ def _complete_terminal_job( self, ajs, drmaa_state, **kwargs ):
+ """
+ Handle a job upon its termination in the DRM. This method is meant to
+ be overridden by subclasses to improve post-mortem and reporting of
+ failures.
+ """
+ if drmaa_state == drmaa.JobState.FAILED:
+ if ajs.job_wrapper.get_state() != model.Job.states.DELETED:
+ ajs.stop_job = False
+ ajs.fail_message = "The cluster DRM system terminated this job"
+ self.work_queue.put( ( self.fail_job, ajs ) )
+ elif drmaa_state == drmaa.JobState.DONE:
+ super( DRMAAJobRunner, self )._complete_terminal_job( ajs )
+
def check_watched_items( self ):
"""
Called by the monitor thread to look at each watched job and deal
@@ -188,16 +228,27 @@
try:
assert external_job_id not in ( None, 'None' ), '(%s/%s) Invalid job id' % ( galaxy_id_tag, external_job_id )
state = self.ds.jobStatus( external_job_id )
- # InternalException was reported to be necessary on some DRMs, but
- # this could cause failures to be detected as completion! Please
- # report if you experience problems with this.
- except ( drmaa.InvalidJobException, drmaa.InternalException ), e:
- # we should only get here if an orphaned job was put into the queue at app startup
- log.info( "(%s/%s) job left DRM queue with following message: %s" % ( galaxy_id_tag, external_job_id, e ) )
- self.work_queue.put( ( self.finish_job, ajs ) )
+ except ( drmaa.InternalException, drmaa.InvalidJobException ), e:
+ ecn = e.__class__.__name__
+ retry_param = ecn.lower() + '_retries'
+ state_param = ecn.lower() + '_state'
+ retries = getattr( ajs, retry_param, 0 )
+ if self.runner_params[ retry_param ] > 0:
+ if retries < self.runner_params[ retry_param ]:
+ # will retry check on next iteration
+ setattr( ajs, retry_param, retries + 1 )
+ continue
+ if self.runner_params[ state_param ] == model.Job.states.OK:
+ log.info( "(%s/%s) job left DRM queue with following message: %s", galaxy_id_tag, external_job_id, e )
+ self.work_queue.put( ( self.finish_job, ajs ) )
+ elif self.runner_params[ state_param ] == model.Job.states.ERROR:
+ log.info( "(%s/%s) job check resulted in %s after %s tries: %s", galaxy_id_tag, external_job_id, ecn, retries, e )
+ self.work_queue.put( ( self.fail_job, ajs ) )
+ else:
+ raise Exception( "%s is set to an invalid value (%s), this should not be possible. See galaxy.jobs.drmaa.__init__()", state_param, self.runner_params[ state_param ] )
continue
except drmaa.DrmCommunicationException, e:
- log.warning( "(%s/%s) unable to communicate with DRM: %s" % ( galaxy_id_tag, external_job_id, e ))
+ log.warning( "(%s/%s) unable to communicate with DRM: %s", galaxy_id_tag, external_job_id, e )
new_watched.append( ajs )
continue
except Exception, e:
@@ -208,19 +259,12 @@
self.work_queue.put( ( self.fail_job, ajs ) )
continue
if state != old_state:
- log.debug( "(%s/%s) state change: %s" % ( galaxy_id_tag, external_job_id, drmaa_state[state] ) )
+ log.debug( "(%s/%s) state change: %s" % ( galaxy_id_tag, external_job_id, self.drmaa_job_state_strings[state] ) )
if state == drmaa.JobState.RUNNING and not ajs.running:
ajs.running = True
ajs.job_wrapper.change_state( model.Job.states.RUNNING )
- if state == drmaa.JobState.FAILED:
- if ajs.job_wrapper.get_state() != model.Job.states.DELETED:
- ajs.stop_job = False
- ajs.fail_message = "The cluster DRM system terminated this job"
- self.work_queue.put( ( self.fail_job, ajs ) )
- continue
- if state == drmaa.JobState.DONE:
- if ajs.job_wrapper.get_state() != model.Job.states.DELETED:
- self.work_queue.put( ( self.finish_job, ajs ) )
+ if state in ( drmaa.JobState.FAILED, drmaa.JobState.DONE ):
+ self._complete_terminal_job( ajs, drmaa_state = state )
continue
ajs.old_state = state
new_watched.append( ajs )
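The reworked exception handler derives its parameter names from the exception class itself: InvalidJobException is looked up as invalidjobexception_retries and invalidjobexception_state. A toy sketch of that dispatch, with made-up parameter values:

```python
class InvalidJobException(Exception):
    pass

class InternalException(Exception):
    pass

# Illustrative runner params: retry InvalidJobException once and then
# treat it as a normal completion; fail immediately on InternalException.
runner_params = {
    "invalidjobexception_state": "ok",
    "invalidjobexception_retries": 1,
    "internalexception_state": "error",
    "internalexception_retries": 0,
}

def classify(exc, retries_so_far):
    """Decide what to do with a job whose status check raised exc.

    Returns 'retry' (check again next iteration), 'finish' (treat the
    job as having completed normally), or 'fail'.
    """
    ecn = exc.__class__.__name__
    retry_param = ecn.lower() + "_retries"
    state_param = ecn.lower() + "_state"
    if runner_params[retry_param] > 0 and retries_so_far < runner_params[retry_param]:
        return "retry"
    return "finish" if runner_params[state_param] == "ok" else "fail"
```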
https://bitbucket.org/galaxy/galaxy-central/commits/d3df5b5e91f3/
Changeset: d3df5b5e91f3
User: natefoo
Date: 2014-01-07 21:23:11
Summary: Add a Slurm runner (subclassed from the drmaa runner), adds some logic for
determining/reporting the reason of job failure and requeuing jobs that failed
due to node failure.
Affected #: 1 file
diff -r e67776961e48dca45850c95b0f053178fd0e61b6 -r d3df5b5e91f3c42cffdba83f0ea9300b5caa172a lib/galaxy/jobs/runners/slurm.py
--- /dev/null
+++ b/lib/galaxy/jobs/runners/slurm.py
@@ -0,0 +1,57 @@
+"""
+SLURM job control via the DRMAA API.
+"""
+
+import time
+import logging
+import subprocess
+
+from galaxy import model
+from galaxy.jobs.runners.drmaa import DRMAAJobRunner
+
+log = logging.getLogger( __name__ )
+
+__all__ = [ 'SlurmJobRunner' ]
+
+
+class SlurmJobRunner( DRMAAJobRunner ):
+ runner_name = "SlurmRunner"
+
+ def _complete_terminal_job( self, ajs, drmaa_state, **kwargs ):
+ def __get_jobinfo():
+ scontrol_out = subprocess.check_output( ( 'scontrol', '-o', 'show', 'job', ajs.job_id ) )
+ return dict( [ out_param.split( '=', 1 ) for out_param in scontrol_out.split() ] )
+ if drmaa_state == self.drmaa_job_states.FAILED:
+ try:
+ job_info = __get_jobinfo()
+ sleep = 1
+ while job_info['JobState'] == 'COMPLETING':
+ log.debug( '(%s/%s) Waiting %s seconds for failed job to exit COMPLETING state for post-mortem', ajs.job_wrapper.get_id_tag(), ajs.job_id, sleep )
+ time.sleep( sleep )
+ sleep *= 2
+ if sleep > 64:
+ ajs.fail_message = "This job failed and the system timed out while trying to determine the cause of the failure."
+ break
+ job_info = __get_jobinfo()
+ if job_info['JobState'] == 'TIMEOUT':
+ ajs.fail_message = "This job was terminated because it ran longer than the maximum allowed job run time."
+ elif job_info['JobState'] == 'NODE_FAIL':
+ log.warning( '(%s/%s) Job failed due to node failure, attempting resubmission', ajs.job_wrapper.get_id_tag(), ajs.job_id )
+ ajs.job_wrapper.change_state( model.Job.states.QUEUED, info = 'Job was resubmitted due to node failure' )
+ try:
+ self.queue_job( ajs.job_wrapper )
+ return
+ except:
+ ajs.fail_message = "This job failed due to a cluster node failure, and an attempt to resubmit the job failed."
+ elif job_info['JobState'] == 'CANCELLED':
+ ajs.fail_message = "This job failed because it was cancelled by an administrator."
+ else:
+ ajs.fail_message = "This job failed for reasons that could not be determined."
+ ajs.fail_message += '\nPlease click the bug icon to report this problem if you need help.'
+ ajs.stop_job = False
+ self.work_queue.put( ( self.fail_job, ajs ) )
+ except Exception, e:
+ log.exception( '(%s/%s) Unable to inspect failed slurm job using scontrol, job will be unconditionally failed: %s', ajs.job_wrapper.get_id_tag(), ajs.job_id, e )
+ super( SlurmJobRunner, self )._complete_terminal_job( ajs, drmaa_state = drmaa_state )
+ elif drmaa_state == self.drmaa_job_states.DONE:
+ super( SlurmJobRunner, self )._complete_terminal_job( ajs, drmaa_state = drmaa_state )
https://bitbucket.org/galaxy/galaxy-central/commits/aac6bd9c589b/
Changeset: aac6bd9c589b
User: natefoo
Date: 2014-01-07 21:23:38
Summary: Sample job_conf.xml for new features introduced.
Affected #: 1 file
diff -r d3df5b5e91f3c42cffdba83f0ea9300b5caa172a -r aac6bd9c589bf43ec5835c901911704336932d1d job_conf.xml.sample_advanced
--- a/job_conf.xml.sample_advanced
+++ b/job_conf.xml.sample_advanced
@@ -6,7 +6,19 @@
-->
<plugin id="local" type="runner" load="galaxy.jobs.runners.local:LocalJobRunner"/>
<plugin id="pbs" type="runner" load="galaxy.jobs.runners.pbs:PBSJobRunner" workers="2"/>
- <plugin id="drmaa" type="runner" load="galaxy.jobs.runners.drmaa:DRMAAJobRunner"/>
+ <plugin id="drmaa" type="runner" load="galaxy.jobs.runners.drmaa:DRMAAJobRunner">
+ <!-- Different DRMs handle successfully completed jobs differently,
+ these options can be changed to handle such differences and
+ are explained in detail on the Galaxy wiki. Defaults are shown -->
+ <param id="invalidjobexception_state">ok</param>
+ <param id="invalidjobexception_retries">0</param>
+ <param id="internalexception_state">ok</param>
+ <param id="internalexception_retries">0</param>
+ </plugin>
+ <plugin id="sge" type="runner" load="galaxy.jobs.runners.drmaa:DRMAAJobRunner">
+ <!-- Override the $DRMAA_LIBRARY_PATH environment variable -->
+ <param id="drmaa_library_path">/sge/lib/libdrmaa.so</param>
+ </plugin>
<plugin id="lwr" type="runner" load="galaxy.jobs.runners.lwr:LwrJobRunner">
<!-- More information on LWR can be found at https://lwr.readthedocs.org -->
<!-- Uncomment following line to use libcurl to perform HTTP calls (defaults to urllib) -->
@@ -14,6 +26,7 @@
</plugin>
<plugin id="cli" type="runner" load="galaxy.jobs.runners.cli:ShellJobRunner" />
<plugin id="condor" type="runner" load="galaxy.jobs.runners.condor:CondorJobRunner" />
+ <plugin id="slurm" type="runner" load="galaxy.jobs.runners.slurm:SlurmJobRunner" />
</plugins>
<handlers default="handlers">
<!-- Additional job handlers - the id should match the name of a
@@ -21,6 +34,15 @@
-->
<handler id="handler0" tags="handlers"/>
<handler id="handler1" tags="handlers"/>
+ <!-- Handlers will load all plugins defined in the <plugins> collection
+ above by default, but can be limited to a subset using <plugin>
+ tags. This is useful for heterogenous environments where the DRMAA
+ plugin would need to be loaded more than once with different
+ configs.
+ -->
+ <handler id="sge_handler">
+ <plugin id="sge"/>
+ </handler>
<handler id="special_handler0" tags="special_handlers"/>
<handler id="special_handler1" tags="special_handlers"/>
<handler id="trackster_handler"/>
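The handler stanza with nested plugin tags shown above is what the handler.findall('plugin') loop in lib/galaxy/jobs/__init__.py consumes; a self-contained sketch of that parse using only the standard library:

```python
import xml.etree.ElementTree as ET

# A trimmed-down job_conf.xml fragment matching the sample above.
job_conf = """
<job_conf>
  <handlers default="handlers">
    <handler id="handler0" tags="handlers"/>
    <handler id="sge_handler">
      <plugin id="sge"/>
    </handler>
  </handlers>
</job_conf>
"""

# Build the handler id -> allowed plugin ids map, as the diff does.
handler_runner_plugins = {}
for handler in ET.fromstring(job_conf).find("handlers").findall("handler"):
    hid = handler.get("id")
    for plugin in handler.findall("plugin"):
        handler_runner_plugins.setdefault(hid, []).append(plugin.get("id"))
```

Handlers without nested plugin tags (handler0 here) get no entry, so they fall through to loading every configured plugin.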
https://bitbucket.org/galaxy/galaxy-central/commits/2d525856226f/
Changeset: 2d525856226f
User: natefoo
Date: 2014-01-09 18:01:19
Summary: Merged in natefoo/galaxy-central (pull request #290)
Configurable terminal job state, Slurm enhancements
Affected #: 6 files
diff -r 10ac38a448152b9a927c34f36334724dfd173bfc -r 2d525856226f0115a5f8dea5e7fedfae2fcbcf8a job_conf.xml.sample_advanced
--- a/job_conf.xml.sample_advanced
+++ b/job_conf.xml.sample_advanced
@@ -6,7 +6,19 @@
-->
<plugin id="local" type="runner" load="galaxy.jobs.runners.local:LocalJobRunner"/>
<plugin id="pbs" type="runner" load="galaxy.jobs.runners.pbs:PBSJobRunner" workers="2"/>
- <plugin id="drmaa" type="runner" load="galaxy.jobs.runners.drmaa:DRMAAJobRunner"/>
+ <plugin id="drmaa" type="runner" load="galaxy.jobs.runners.drmaa:DRMAAJobRunner">
+ <!-- Different DRMs handle successfully completed jobs differently,
+ these options can be changed to handle such differences and
+ are explained in detail on the Galaxy wiki. Defaults are shown -->
+ <param id="invalidjobexception_state">ok</param>
+ <param id="invalidjobexception_retries">0</param>
+ <param id="internalexception_state">ok</param>
+ <param id="internalexception_retries">0</param>
+ </plugin>
+ <plugin id="sge" type="runner" load="galaxy.jobs.runners.drmaa:DRMAAJobRunner">
+ <!-- Override the $DRMAA_LIBRARY_PATH environment variable -->
+ <param id="drmaa_library_path">/sge/lib/libdrmaa.so</param>
+ </plugin>
<plugin id="lwr" type="runner" load="galaxy.jobs.runners.lwr:LwrJobRunner">
<!-- More information on LWR can be found at https://lwr.readthedocs.org -->
<!-- Uncomment following line to use libcurl to perform HTTP calls (defaults to urllib) -->
@@ -14,6 +26,7 @@
</plugin>
<plugin id="cli" type="runner" load="galaxy.jobs.runners.cli:ShellJobRunner" />
<plugin id="condor" type="runner" load="galaxy.jobs.runners.condor:CondorJobRunner" />
+ <plugin id="slurm" type="runner" load="galaxy.jobs.runners.slurm:SlurmJobRunner" />
</plugins>
<handlers default="handlers">
<!-- Additional job handlers - the id should match the name of a
@@ -21,6 +34,15 @@
-->
<handler id="handler0" tags="handlers"/>
<handler id="handler1" tags="handlers"/>
+ <!-- Handlers will load all plugins defined in the <plugins> collection
+ above by default, but can be limited to a subset using <plugin>
+ tags. This is useful for heterogenous environments where the DRMAA
+ plugin would need to be loaded more than once with different
+ configs.
+ -->
+ <handler id="sge_handler">
+ <plugin id="sge"/>
+ </handler>
<handler id="special_handler0" tags="special_handlers"/>
<handler id="special_handler1" tags="special_handlers"/>
<handler id="trackster_handler"/>
diff -r 10ac38a448152b9a927c34f36334724dfd173bfc -r 2d525856226f0115a5f8dea5e7fedfae2fcbcf8a lib/galaxy/jobs/__init__.py
--- a/lib/galaxy/jobs/__init__.py
+++ b/lib/galaxy/jobs/__init__.py
@@ -86,6 +86,7 @@
self.app = app
self.runner_plugins = []
self.handlers = {}
+ self.handler_runner_plugins = {}
self.default_handler_id = None
self.destinations = {}
self.destination_tags = {}
@@ -138,6 +139,10 @@
else:
log.debug("Read definition for handler '%s'" % id)
self.handlers[id] = (id,)
+ for plugin in handler.findall('plugin'):
+ if id not in self.handler_runner_plugins:
+ self.handler_runner_plugins[id] = []
+ self.handler_runner_plugins[id].append( plugin.get('id') )
if handler.get('tags', None) is not None:
for tag in [ x.strip() for x in handler.get('tags').split(',') ]:
if tag in self.handlers:
@@ -420,13 +425,19 @@
"""
return self.destinations.get(id_or_tag, None)
- def get_job_runner_plugins(self):
+ def get_job_runner_plugins(self, handler_id):
"""Load all configured job runner plugins
:returns: list of job runner plugins
"""
rval = {}
- for runner in self.runner_plugins:
+ if handler_id in self.handler_runner_plugins:
+ plugins_to_load = [ rp for rp in self.runner_plugins if rp['id'] in self.handler_runner_plugins[handler_id] ]
+ log.info( "Handler '%s' will load specified runner plugins: %s", handler_id, ', '.join( [ rp['id'] for rp in plugins_to_load ] ) )
+ else:
+ plugins_to_load = self.runner_plugins
+ log.info( "Handler '%s' will load all configured runner plugins", handler_id )
+ for runner in plugins_to_load:
class_names = []
module = None
id = runner['id']
@@ -477,7 +488,7 @@
try:
rval[id] = runner_class( self.app, runner[ 'workers' ], **runner.get( 'kwds', {} ) )
except TypeError:
- log.warning( "Job runner '%s:%s' has not been converted to a new-style runner" % ( module_name, class_name ) )
+ log.exception( "Job runner '%s:%s' has not been converted to a new-style runner or encountered TypeError on load" % ( module_name, class_name ) )
rval[id] = runner_class( self.app )
log.debug( "Loaded job runner '%s:%s' as '%s'" % ( module_name, class_name, id ) )
return rval
diff -r 10ac38a448152b9a927c34f36334724dfd173bfc -r 2d525856226f0115a5f8dea5e7fedfae2fcbcf8a lib/galaxy/jobs/handler.py
--- a/lib/galaxy/jobs/handler.py
+++ b/lib/galaxy/jobs/handler.py
@@ -565,7 +565,7 @@
class DefaultJobDispatcher( object ):
def __init__( self, app ):
self.app = app
- self.job_runners = self.app.job_config.get_job_runner_plugins()
+ self.job_runners = self.app.job_config.get_job_runner_plugins( self.app.config.server_name )
# Once plugins are loaded, all job destinations that were created from
# URLs can have their URL params converted to the destination's param
# dict by the plugin.
diff -r 10ac38a448152b9a927c34f36334724dfd173bfc -r 2d525856226f0115a5f8dea5e7fedfae2fcbcf8a lib/galaxy/jobs/runners/__init__.py
--- a/lib/galaxy/jobs/runners/__init__.py
+++ b/lib/galaxy/jobs/runners/__init__.py
@@ -22,13 +22,39 @@
STOP_SIGNAL = object()
+
+class RunnerParams( object ):
+
+ def __init__( self, specs = None, params = None ):
+ self.specs = specs or dict()
+ self.params = params or dict()
+ for name, value in self.params.items():
+ assert name in self.specs, 'Invalid job runner parameter for this plugin: %s' % name
+ if 'map' in self.specs[ name ]:
+ try:
+ self.params[ name ] = self.specs[ name ][ 'map' ]( value )
+ except Exception, e:
+ raise Exception( 'Job runner parameter "%s" value "%s" could not be converted to the correct type: %s' % ( name, value, e ) )
+ if 'valid' in self.specs[ name ]:
+ assert self.specs[ name ][ 'valid' ]( value ), 'Job runner parameter %s failed validation' % name
+
+ def __getattr__( self, name ):
+ return self.params.get( name, self.specs[ name ][ 'default' ] )
+
+
class BaseJobRunner( object ):
- def __init__( self, app, nworkers ):
+ def __init__( self, app, nworkers, **kwargs ):
"""Start the job runner
"""
self.app = app
self.sa_session = app.model.context
self.nworkers = nworkers
+ runner_param_specs = dict( recheck_missing_job_retries = dict( map = int, valid = lambda x: x >= 0, default = 0 ) )
+ if 'runner_param_specs' in kwargs:
+ runner_param_specs.update( kwargs.pop( 'runner_param_specs' ) )
+ if kwargs:
+ log.debug( 'Loading %s with params: %s', self.runner_name, kwargs )
+ self.runner_params = RunnerParams( specs = runner_param_specs, params = kwargs )
def _init_worker_threads(self):
"""Start ``nworkers`` worker threads.
@@ -115,7 +141,7 @@
job_wrapper.cleanup()
return False
elif job_state != model.Job.states.QUEUED:
- log.info( "(%d) Job is in state %s, skipping execution" % ( job_id, job_state ) )
+ log.info( "(%s) Job is in state %s, skipping execution" % ( job_id, job_state ) )
# cleanup may not be safe in all states
return False
@@ -226,6 +252,10 @@
options.update(**kwds)
return job_script(**options)
+ def _complete_terminal_job( self, ajs, **kwargs ):
+ if ajs.job_wrapper.get_state() != model.Job.states.DELETED:
+ self.work_queue.put( ( self.finish_job, ajs ) )
+
class AsynchronousJobState( object ):
"""
@@ -287,8 +317,8 @@
to the correct methods (queue, finish, cleanup) at appropriate times.
"""
- def __init__( self, app, nworkers ):
- super( AsynchronousJobRunner, self ).__init__( app, nworkers )
+ def __init__( self, app, nworkers, **kwargs ):
+ super( AsynchronousJobRunner, self ).__init__( app, nworkers, **kwargs )
# 'watched' and 'queue' are both used to keep track of jobs to watch.
# 'queue' is used to add new watched jobs, and can be called from
# any thread (usually by the 'queue_job' method). 'watched' must only
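The RunnerParams class added above centers on a small convention: each parameter spec is a dict that may carry a `map` callable (type conversion), a `valid` predicate, and a `default`. A standalone Python 3 reconstruction of that convention (not Galaxy's actual class; unlike the diff, this sketch validates the converted value rather than the raw string):

```python
class RunnerParams(object):
    """Validate and convert plugin parameters against per-parameter specs.

    Each spec entry may define:
      map     - callable used to convert the raw (usually string) value
      valid   - predicate the converted value must satisfy
      default - value returned when the parameter was not supplied
    """

    def __init__(self, specs=None, params=None):
        self.specs = specs or {}
        self.params = params or {}
        for name, value in self.params.items():
            assert name in self.specs, 'Invalid job runner parameter: %s' % name
            if 'map' in self.specs[name]:
                self.params[name] = self.specs[name]['map'](value)
            if 'valid' in self.specs[name]:
                assert self.specs[name]['valid'](self.params[name]), \
                    'Job runner parameter %s failed validation' % name

    def __getattr__(self, name):
        # Fall back to the spec's default for parameters never supplied.
        return self.params.get(name, self.specs[name]['default'])


specs = dict(recheck_missing_job_retries=dict(map=int, valid=lambda x: x >= 0, default=0))
rp = RunnerParams(specs=specs, params={'recheck_missing_job_retries': '3'})
print(rp.recheck_missing_job_retries)  # prints: 3 (the string '3' was mapped to int)
```

Because lookups go through `__getattr__`, a runner can simply read `self.runner_params.recheck_missing_job_retries` and get either the converted value or the spec default.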
diff -r 10ac38a448152b9a927c34f36334724dfd173bfc -r 2d525856226f0115a5f8dea5e7fedfae2fcbcf8a lib/galaxy/jobs/runners/drmaa.py
--- a/lib/galaxy/jobs/runners/drmaa.py
+++ b/lib/galaxy/jobs/runners/drmaa.py
@@ -16,27 +16,12 @@
from galaxy.jobs.runners import AsynchronousJobState, AsynchronousJobRunner
eggs.require( "drmaa" )
-# We foolishly named this file the same as the name exported by the drmaa
-# library... 'import drmaa' imports itself.
-drmaa = __import__( "drmaa" )
log = logging.getLogger( __name__ )
__all__ = [ 'DRMAAJobRunner' ]
-drmaa_state = {
- drmaa.JobState.UNDETERMINED: 'process status cannot be determined',
- drmaa.JobState.QUEUED_ACTIVE: 'job is queued and active',
- drmaa.JobState.SYSTEM_ON_HOLD: 'job is queued and in system hold',
- drmaa.JobState.USER_ON_HOLD: 'job is queued and in user hold',
- drmaa.JobState.USER_SYSTEM_ON_HOLD: 'job is queued and in user and system hold',
- drmaa.JobState.RUNNING: 'job is running',
- drmaa.JobState.SYSTEM_SUSPENDED: 'job is system suspended',
- drmaa.JobState.USER_SUSPENDED: 'job is user suspended',
- drmaa.JobState.DONE: 'job finished normally',
- drmaa.JobState.FAILED: 'job finished, but failed',
-}
-
+drmaa = None
DRMAA_jobTemplate_attributes = [ 'args', 'remoteCommand', 'outputPath', 'errorPath', 'nativeSpecification',
'jobName', 'email', 'project' ]
@@ -48,8 +33,50 @@
"""
runner_name = "DRMAARunner"
- def __init__( self, app, nworkers ):
+ def __init__( self, app, nworkers, **kwargs ):
"""Start the job runner"""
+
+ global drmaa
+
+ runner_param_specs = dict(
+ drmaa_library_path = dict( map = str, default = os.environ.get( 'DRMAA_LIBRARY_PATH', None ) ),
+ invalidjobexception_state = dict( map = str, valid = lambda x: x in ( model.Job.states.OK, model.Job.states.ERROR ), default = model.Job.states.OK ),
+ invalidjobexception_retries = dict( map = int, valid = lambda x: x >= 0, default = 0 ),
+ internalexception_state = dict( map = str, valid = lambda x: x in ( model.Job.states.OK, model.Job.states.ERROR ), default = model.Job.states.OK ),
+ internalexception_retries = dict( map = int, valid = lambda x: x >= 0, default = 0 ) )
+
+ if 'runner_param_specs' not in kwargs:
+ kwargs[ 'runner_param_specs' ] = dict()
+ kwargs[ 'runner_param_specs' ].update( runner_param_specs )
+
+ super( DRMAAJobRunner, self ).__init__( app, nworkers, **kwargs )
+
+ # This allows multiple drmaa runners (although only one per handler) in the same job config file
+ if 'drmaa_library_path' in kwargs:
+ log.info( 'Overriding DRMAA_LIBRARY_PATH due to runner plugin parameter: %s', self.runner_params.drmaa_library_path )
+ os.environ['DRMAA_LIBRARY_PATH'] = self.runner_params.drmaa_library_path
+
+ # We foolishly named this file the same as the name exported by the drmaa
+ # library... 'import drmaa' imports itself.
+ drmaa = __import__( "drmaa" )
+
+ # Subclasses may need access to state constants
+ self.drmaa_job_states = drmaa.JobState
+
+ # Descriptive state strings pulled from the drmaa lib itself
+ self.drmaa_job_state_strings = {
+ drmaa.JobState.UNDETERMINED: 'process status cannot be determined',
+ drmaa.JobState.QUEUED_ACTIVE: 'job is queued and active',
+ drmaa.JobState.SYSTEM_ON_HOLD: 'job is queued and in system hold',
+ drmaa.JobState.USER_ON_HOLD: 'job is queued and in user hold',
+ drmaa.JobState.USER_SYSTEM_ON_HOLD: 'job is queued and in user and system hold',
+ drmaa.JobState.RUNNING: 'job is running',
+ drmaa.JobState.SYSTEM_SUSPENDED: 'job is system suspended',
+ drmaa.JobState.USER_SUSPENDED: 'job is user suspended',
+ drmaa.JobState.DONE: 'job finished normally',
+ drmaa.JobState.FAILED: 'job finished, but failed',
+ }
+
self.ds = drmaa.Session()
self.ds.initialize()
@@ -58,7 +85,6 @@
self.external_killJob_script = app.config.drmaa_external_killjob_script
self.userid = None
- super( DRMAAJobRunner, self ).__init__( app, nworkers )
self._init_monitor_thread()
self._init_worker_threads()
@@ -175,6 +201,20 @@
# Add to our 'queue' of jobs to monitor
self.monitor_queue.put( ajs )
+ def _complete_terminal_job( self, ajs, drmaa_state, **kwargs ):
+ """
+ Handle a job upon its termination in the DRM. This method is meant to
+ be overridden by subclasses to improve post-mortem and reporting of
+ failures.
+ """
+ if drmaa_state == drmaa.JobState.FAILED:
+ if ajs.job_wrapper.get_state() != model.Job.states.DELETED:
+ ajs.stop_job = False
+ ajs.fail_message = "The cluster DRM system terminated this job"
+ self.work_queue.put( ( self.fail_job, ajs ) )
+ elif drmaa_state == drmaa.JobState.DONE:
+ super( DRMAAJobRunner, self )._complete_terminal_job( ajs )
+
def check_watched_items( self ):
"""
Called by the monitor thread to look at each watched job and deal
@@ -188,16 +228,27 @@
try:
assert external_job_id not in ( None, 'None' ), '(%s/%s) Invalid job id' % ( galaxy_id_tag, external_job_id )
state = self.ds.jobStatus( external_job_id )
- # InternalException was reported to be necessary on some DRMs, but
- # this could cause failures to be detected as completion! Please
- # report if you experience problems with this.
- except ( drmaa.InvalidJobException, drmaa.InternalException ), e:
- # we should only get here if an orphaned job was put into the queue at app startup
- log.info( "(%s/%s) job left DRM queue with following message: %s" % ( galaxy_id_tag, external_job_id, e ) )
- self.work_queue.put( ( self.finish_job, ajs ) )
+ except ( drmaa.InternalException, drmaa.InvalidJobException ), e:
+ ecn = e.__class__.__name__
+ retry_param = ecn.lower() + '_retries'
+ state_param = ecn.lower() + '_state'
+ retries = getattr( ajs, retry_param, 0 )
+ if self.runner_params[ retry_param ] > 0:
+ if retries < self.runner_params[ retry_param ]:
+ # will retry check on next iteration
+ setattr( ajs, retry_param, retries + 1 )
+ continue
+ if self.runner_params[ state_param ] == model.Job.states.OK:
+ log.info( "(%s/%s) job left DRM queue with following message: %s", galaxy_id_tag, external_job_id, e )
+ self.work_queue.put( ( self.finish_job, ajs ) )
+ elif self.runner_params[ state_param ] == model.Job.states.ERROR:
+ log.info( "(%s/%s) job check resulted in %s after %s tries: %s", galaxy_id_tag, external_job_id, ecn, retries, e )
+ self.work_queue.put( ( self.fail_job, ajs ) )
+ else:
+ raise Exception( "%s is set to an invalid value (%s), this should not be possible. See galaxy.jobs.drmaa.__init__()" % ( state_param, self.runner_params[ state_param ] ) )
continue
except drmaa.DrmCommunicationException, e:
- log.warning( "(%s/%s) unable to communicate with DRM: %s" % ( galaxy_id_tag, external_job_id, e ))
+ log.warning( "(%s/%s) unable to communicate with DRM: %s", galaxy_id_tag, external_job_id, e )
new_watched.append( ajs )
continue
except Exception, e:
@@ -208,19 +259,12 @@
self.work_queue.put( ( self.fail_job, ajs ) )
continue
if state != old_state:
- log.debug( "(%s/%s) state change: %s" % ( galaxy_id_tag, external_job_id, drmaa_state[state] ) )
+ log.debug( "(%s/%s) state change: %s" % ( galaxy_id_tag, external_job_id, self.drmaa_job_state_strings[state] ) )
if state == drmaa.JobState.RUNNING and not ajs.running:
ajs.running = True
ajs.job_wrapper.change_state( model.Job.states.RUNNING )
- if state == drmaa.JobState.FAILED:
- if ajs.job_wrapper.get_state() != model.Job.states.DELETED:
- ajs.stop_job = False
- ajs.fail_message = "The cluster DRM system terminated this job"
- self.work_queue.put( ( self.fail_job, ajs ) )
- continue
- if state == drmaa.JobState.DONE:
- if ajs.job_wrapper.get_state() != model.Job.states.DELETED:
- self.work_queue.put( ( self.finish_job, ajs ) )
+ if state in ( drmaa.JobState.FAILED, drmaa.JobState.DONE ):
+ self._complete_terminal_job( ajs, drmaa_state = state )
continue
ajs.old_state = state
new_watched.append( ajs )
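The new exception handling in `check_watched_items` derives the runner parameter names from the exception class name (`InvalidJobException` becomes `invalidjobexception_retries` / `invalidjobexception_state`) and keeps a per-job retry counter on the job state object. A minimal sketch of that decision logic, with plain stand-ins for the runner and `AsynchronousJobState`:

```python
# Stand-ins: in the runner, 'ajs' is an AsynchronousJobState and the chosen
# action is dispatched via work_queue; here we just return the action name.
class FakeJobState(object):
    pass


def decide_on_drm_exception(exc_class_name, ajs, runner_params):
    """Map a DRM status exception to 'retry', 'finish', or 'fail'.

    Parameter names are derived from the exception class name, e.g.
    InvalidJobException -> invalidjobexception_retries / _state.
    """
    retry_param = exc_class_name.lower() + '_retries'
    state_param = exc_class_name.lower() + '_state'
    retries = getattr(ajs, retry_param, 0)
    if runner_params[retry_param] > 0 and retries < runner_params[retry_param]:
        setattr(ajs, retry_param, retries + 1)  # re-check on the next iteration
        return 'retry'
    if runner_params[state_param] == 'ok':
        return 'finish'  # treat the vanished job as having completed
    return 'fail'


ajs = FakeJobState()
params = {'invalidjobexception_retries': 2, 'invalidjobexception_state': 'error'}
print(decide_on_drm_exception('InvalidJobException', ajs, params))  # prints: retry
```

After the configured number of retries is exhausted, the configured terminal state (`ok` or `error`) decides whether the job finishes or fails.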
diff -r 10ac38a448152b9a927c34f36334724dfd173bfc -r 2d525856226f0115a5f8dea5e7fedfae2fcbcf8a lib/galaxy/jobs/runners/slurm.py
--- /dev/null
+++ b/lib/galaxy/jobs/runners/slurm.py
@@ -0,0 +1,57 @@
+"""
+SLURM job control via the DRMAA API.
+"""
+
+import time
+import logging
+import subprocess
+
+from galaxy import model
+from galaxy.jobs.runners.drmaa import DRMAAJobRunner
+
+log = logging.getLogger( __name__ )
+
+__all__ = [ 'SlurmJobRunner' ]
+
+
+class SlurmJobRunner( DRMAAJobRunner ):
+ runner_name = "SlurmRunner"
+
+ def _complete_terminal_job( self, ajs, drmaa_state, **kwargs ):
+ def __get_jobinfo():
+ scontrol_out = subprocess.check_output( ( 'scontrol', '-o', 'show', 'job', ajs.job_id ) )
+ return dict( [ out_param.split( '=', 1 ) for out_param in scontrol_out.split() ] )
+ if drmaa_state == self.drmaa_job_states.FAILED:
+ try:
+ job_info = __get_jobinfo()
+ sleep = 1
+ while job_info['JobState'] == 'COMPLETING':
+ log.debug( '(%s/%s) Waiting %s seconds for failed job to exit COMPLETING state for post-mortem', ajs.job_wrapper.get_id_tag(), ajs.job_id, sleep )
+ time.sleep( sleep )
+ sleep *= 2
+ if sleep > 64:
+ ajs.fail_message = "This job failed and the system timed out while trying to determine the cause of the failure."
+ break
+ job_info = __get_jobinfo()
+ if job_info['JobState'] == 'TIMEOUT':
+ ajs.fail_message = "This job was terminated because it ran longer than the maximum allowed job run time."
+ elif job_info['JobState'] == 'NODE_FAIL':
+ log.warning( '(%s/%s) Job failed due to node failure, attempting resubmission', ajs.job_wrapper.get_id_tag(), ajs.job_id )
+ ajs.job_wrapper.change_state( model.Job.states.QUEUED, info = 'Job was resubmitted due to node failure' )
+ try:
+ self.queue_job( ajs.job_wrapper )
+ return
+ except:
+ ajs.fail_message = "This job failed due to a cluster node failure, and an attempt to resubmit the job failed."
+ elif job_info['JobState'] == 'CANCELLED':
+ ajs.fail_message = "This job failed because it was cancelled by an administrator."
+ else:
+ ajs.fail_message = "This job failed for reasons that could not be determined."
+ ajs.fail_message += '\nPlease click the bug icon to report this problem if you need help.'
+ ajs.stop_job = False
+ self.work_queue.put( ( self.fail_job, ajs ) )
+ except Exception, e:
+ log.exception( '(%s/%s) Unable to inspect failed slurm job using scontrol, job will be unconditionally failed: %s', ajs.job_wrapper.get_id_tag(), ajs.job_id, e )
+ super( SlurmJobRunner, self )._complete_terminal_job( ajs, drmaa_state = drmaa_state )
+ elif drmaa_state == self.drmaa_job_states.DONE:
+ super( SlurmJobRunner, self )._complete_terminal_job( ajs, drmaa_state = drmaa_state )
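`__get_jobinfo` relies on the one-line format of `scontrol -o show job <id>`: whitespace-separated `Key=Value` pairs. The parsing step in isolation (the sample line is illustrative, not captured from a real cluster; values containing spaces would defeat the naive split, a limitation the committed code shares):

```python
def parse_scontrol_line(scontrol_out):
    """Parse the single-line output of 'scontrol -o show job <id>':
    whitespace-separated Key=Value pairs -> dict."""
    return dict(pair.split('=', 1) for pair in scontrol_out.split())


# Illustrative sample, not real scontrol output.
sample = 'JobId=1234 JobName=galaxy_job JobState=TIMEOUT Reason=TimeLimit ExitCode=0:0'
info = parse_scontrol_line(sample)
print(info['JobState'])  # prints: TIMEOUT
```

The runner then branches on `info['JobState']` (COMPLETING, TIMEOUT, NODE_FAIL, CANCELLED) to pick a failure message or resubmit.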
Repository URL: https://bitbucket.org/galaxy/galaxy-central/
--
This is a commit notification from bitbucket.org. You are receiving
this because you have the service enabled, addressing the recipient of
this email.
2 new commits in galaxy-central:
https://bitbucket.org/galaxy/galaxy-central/commits/cfdf54c48ecd/
Changeset: cfdf54c48ecd
Branch: stable
User: dan
Date: 2013-12-02 20:48:30
Summary: Fix for missing eggs.require in lib/galaxy/exceptions/__init__.py.
Affected #: 1 file
diff -r a0f7e0e5f05af4112ded852cf61c138582c99c04 -r cfdf54c48ecdcbe5bad4a9af14822a88d7fd6082 lib/galaxy/exceptions/__init__.py
--- a/lib/galaxy/exceptions/__init__.py
+++ b/lib/galaxy/exceptions/__init__.py
@@ -1,6 +1,10 @@
"""
Custom exceptions for Galaxy
"""
+
+from galaxy import eggs
+eggs.require( "Paste" )
+
from paste import httpexceptions
class MessageException( Exception ):
https://bitbucket.org/galaxy/galaxy-central/commits/3e76aceca9d0/
Changeset: 3e76aceca9d0
Branch: stable
User: natefoo
Date: 2014-01-09 16:46:01
Summary: Fix username argument to PostgreSQL connection in pgcleanup.py
Affected #: 1 file
diff -r cfdf54c48ecdcbe5bad4a9af14822a88d7fd6082 -r 3e76aceca9d0ab563e285d630dd7575140236b0d scripts/cleanup_datasets/pgcleanup.py
--- a/scripts/cleanup_datasets/pgcleanup.py
+++ b/scripts/cleanup_datasets/pgcleanup.py
@@ -105,7 +105,7 @@
url = make_url(self.config.database_connection)
log.info('Connecting to database with URL: %s' % url)
- args = url.translate_connect_args()
+ args = url.translate_connect_args( username='user' )
args.update(url.query)
assert url.get_dialect().name == 'postgresql', 'This script can only be used with PostgreSQL.'
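The fix works because SQLAlchemy URL objects store the login under `username`, while psycopg2's `connect()` expects the keyword `user`; `translate_connect_args( username='user' )` renames the field during translation. A stdlib-only sketch of the same idea (not SQLAlchemy's implementation):

```python
from urllib.parse import urlsplit


def translate_connect_args(db_url, username_key='user'):
    """Turn a database URL into driver connect args, emitting the login
    under the keyword the driver expects (psycopg2 wants 'user').
    Sketch only; SQLAlchemy's make_url()/translate_connect_args() do more."""
    parts = urlsplit(db_url)
    args = {}
    if parts.username:
        args[username_key] = parts.username
    if parts.password:
        args['password'] = parts.password
    if parts.hostname:
        args['host'] = parts.hostname
    if parts.port:
        args['port'] = parts.port
    if parts.path.lstrip('/'):
        args['database'] = parts.path.lstrip('/')
    return args


args = translate_connect_args('postgresql://galaxy:secret@dbhost:5432/galaxydb')
print(args['user'], args['database'])  # prints: galaxy galaxydb
```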
commit/galaxy-central: dan: Special case genomespace file browser import tool's handling of GenomeSpace format identifier 'http://www.genomespace.org/datamanager/dataformat/unknown' to cause datatype auto-detection via sniffing with a fallback to using the filename extension.
by commits-noreply@bitbucket.org 09 Jan '14
1 new commit in galaxy-central:
https://bitbucket.org/galaxy/galaxy-central/commits/10ac38a44815/
Changeset: 10ac38a44815
User: dan
Date: 2014-01-09 16:54:58
Summary: Special case genomespace file browser import tool's handling of GenomeSpace format identifier 'http://www.genomespace.org/datamanager/dataformat/unknown' to cause datatype auto-detection via sniffing with a fallback to using the filename extension.
Affected #: 1 file
diff -r 110a0aabd293fd1393d1296cb220a0b153d69123 -r 10ac38a448152b9a927c34f36334724dfd173bfc tools/genomespace/genomespace_file_browser.py
--- a/tools/genomespace/genomespace_file_browser.py
+++ b/tools/genomespace/genomespace_file_browser.py
@@ -8,11 +8,16 @@
pkg_resources.require( "simplejson" )
import simplejson
+import galaxy.model # need to import model before sniff to resolve a circular import dependency
+from galaxy.datatypes import sniff
+from galaxy.datatypes.registry import Registry
+
GENOMESPACE_API_VERSION_STRING = "v1.0"
GENOMESPACE_SERVER_URL_PROPERTIES = "https://dm.genomespace.org/config/%s/serverurl.properties" % ( GENOMESPACE_API_VERSION_STRING )
CHUNK_SIZE = 2**20 #1mb
+AUTO_GALAXY_EXT = "auto"
DEFAULT_GALAXY_EXT = "data"
#genomespace format identifier is the URL
@@ -39,6 +44,9 @@
'gmt': 'gmt',
'gct': 'gct'}
+GENOMESPACE_UNKNOWN_FORMAT_KEY = 'unknown'
+GENOMESPACE_FORMAT_IDENTIFIER_UNKNOWN = None
+
VALID_CHARS = '.-()[]0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ '
def chunk_write( source_stream, target_stream, source_method = "read", target_method="write" ):
@@ -67,7 +75,7 @@
ext = GENOMESPACE_EXT_TO_GALAXY_EXT.get( ext, None )
if ext is None:
#could check content type, etc here
- ext = DEFAULT_GALAXY_EXT
+ ext = AUTO_GALAXY_EXT
return ext
def get_genomespace_site_urls():
@@ -90,6 +98,8 @@
genomespace_formats = simplejson.loads( opened_gs_request.read() )
for format in genomespace_formats:
GENOMESPACE_FORMAT_IDENTIFIER_TO_GENOMESPACE_EXT[ format['url'] ] = format['name']
+ global GENOMESPACE_FORMAT_IDENTIFIER_UNKNOWN
+ GENOMESPACE_FORMAT_IDENTIFIER_UNKNOWN = dict( map( lambda x: ( x[1], x[0] ) , GENOMESPACE_FORMAT_IDENTIFIER_TO_GENOMESPACE_EXT.iteritems() ) ).get( GENOMESPACE_UNKNOWN_FORMAT_KEY, GENOMESPACE_FORMAT_IDENTIFIER_UNKNOWN )
def download_from_genomespace_file_browser( json_parameter_file, genomespace_site ):
json_params = simplejson.loads( open( json_parameter_file, 'r' ).read() )
@@ -108,6 +118,11 @@
file_url_prefix = "fileUrl"
file_type_prefix = "fileFormat"
metadata_parameter_file = open( json_params['job_config']['TOOL_PROVIDED_JOB_METADATA_FILE'], 'wb' )
+
+ #setup datatypes registry for sniffing
+ datatypes_registry = Registry()
+ datatypes_registry.load_datatypes( root_dir = json_params[ 'job_config' ][ 'GALAXY_ROOT_DIR' ], config = json_params[ 'job_config' ][ 'GALAXY_DATATYPES_CONF_FILE' ] )
+
file_numbers = []
for name in datasource_params.keys():
if name.startswith( file_url_prefix ):
@@ -143,28 +158,47 @@
filename = urllib.unquote_plus( parsed_url[2].split( '/' )[-1] )
if not filename:
filename = download_url
+ metadata_dict = None
+ original_filename = filename
if output_filename is None:
- original_filename = filename
filename = ''.join( c in VALID_CHARS and c or '-' for c in filename )
while filename in used_filenames:
filename = "-%s" % filename
used_filenames.append( filename )
output_filename = os.path.join( datasource_params['__new_file_path__'], 'primary_%i_%s_visible_%s' % ( hda_id, filename, galaxy_ext ) )
- metadata_parameter_file.write( "%s\n" % simplejson.dumps( dict( type = 'new_primary_dataset',
- base_dataset_id = dataset_id,
- ext = galaxy_ext,
- filename = output_filename,
- name = "GenomeSpace import on %s" % ( original_filename ) ) ) )
+
+ metadata_dict = dict( type = 'new_primary_dataset',
+ base_dataset_id = dataset_id,
+ ext = galaxy_ext,
+ filename = output_filename,
+ name = "GenomeSpace import on %s" % ( original_filename ) )
else:
if dataset_id is not None:
- metadata_parameter_file.write( "%s\n" % simplejson.dumps( dict( type = 'dataset',
- dataset_id = dataset_id,
- ext = galaxy_ext,
- name = "GenomeSpace import on %s" % ( filename ) ) ) )
+ metadata_dict = dict( type = 'dataset',
+ dataset_id = dataset_id,
+ ext = galaxy_ext,
+ name = "GenomeSpace import on %s" % ( filename ) )
output_file = open( output_filename, 'wb' )
chunk_write( target_download_url, output_file )
output_file.close()
+
+ if ( galaxy_ext == AUTO_GALAXY_EXT or filetype_url == GENOMESPACE_FORMAT_IDENTIFIER_UNKNOWN ) and metadata_dict:
+ #try to sniff datatype
+ try:
+ galaxy_ext = sniff.handle_uploaded_dataset_file( output_filename, datatypes_registry )
+ except:
+ #sniff failed
+ galaxy_ext = original_filename.rsplit( '.', 1 )[-1]
+ if galaxy_ext not in datatypes_registry.datatypes_by_extension:
+ galaxy_ext = DEFAULT_GALAXY_EXT
+ metadata_dict[ 'ext' ] = galaxy_ext
+
output_filename = None #only have one filename available
+
+ #write out metadata info
+ if metadata_dict:
+ metadata_parameter_file.write( "%s\n" % simplejson.dumps( metadata_dict ) )
+
metadata_parameter_file.close()
return True
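The fallback chain this commit introduces — sniff the downloaded file's content, else try the filename extension, else fall back to the generic 'data' type — can be sketched independently of Galaxy's sniffer and datatypes registry (both are stand-ins here):

```python
DEFAULT_GALAXY_EXT = 'data'


def resolve_ext(output_filename, original_filename, known_exts, sniffer):
    """Choose a datatype extension for a downloaded file:
    1. sniff the content; 2. on failure, try the filename extension;
    3. fall back to the generic 'data' type."""
    try:
        return sniffer(output_filename)
    except Exception:
        ext = original_filename.rsplit('.', 1)[-1]
        return ext if ext in known_exts else DEFAULT_GALAXY_EXT


def failing_sniffer(path):
    # Stand-in for sniff.handle_uploaded_dataset_file() raising on
    # unrecognized content.
    raise ValueError('unrecognized content')


print(resolve_ext('/tmp/x', 'reads.bed', {'bed', 'gct'}, failing_sniffer))  # prints: bed
print(resolve_ext('/tmp/x', 'blob.xyz', {'bed', 'gct'}, failing_sniffer))  # prints: data
```

Note the extension fallback only accepts extensions the registry actually knows, which is what the `galaxy_ext not in datatypes_registry.datatypes_by_extension` check enforces in the diff.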
commit/galaxy-central: inithello: Use separate methods to deactivate and uninstall repositories.
by commits-noreply@bitbucket.org 09 Jan '14
1 new commit in galaxy-central:
https://bitbucket.org/galaxy/galaxy-central/commits/110a0aabd293/
Changeset: 110a0aabd293
User: inithello
Date: 2014-01-09 14:49:22
Summary: Use separate methods to deactivate and uninstall repositories.
Affected #: 18 files
diff -r ca11e10aa1a5b7318d3727816bf0c2fe6137c522 -r 110a0aabd293fd1393d1296cb220a0b153d69123 test/tool_shed/base/twilltestcase.py
--- a/test/tool_shed/base/twilltestcase.py
+++ b/test/tool_shed/base/twilltestcase.py
@@ -383,7 +383,17 @@
except:
pass
return previously_created, username_taken, invalid_username
-
+
+ def deactivate_repository( self, installed_repository, strings_displayed=[], strings_not_displayed=[] ):
+ url = '/admin_toolshed/deactivate_or_uninstall_repository?id=%s' % self.security.encode_id( installed_repository.id )
+ self.visit_galaxy_url( url )
+ self.check_for_strings( strings_displayed, strings_not_displayed )
+ form = tc.browser.get_form( 'deactivate_or_uninstall_repository' )
+ kwd = self.set_form_value( form, {}, 'remove_from_disk', False )
+ tc.submit( 'deactivate_or_uninstall_repository_button' )
+ strings_displayed = [ 'The repository named', 'has been deactivated' ]
+ self.check_for_strings( strings_displayed, strings_not_displayed=[] )
+
def delete_files_from_repository( self, repository, filenames=[], strings_displayed=[ 'were deleted from the repository' ], strings_not_displayed=[] ):
files_to_delete = []
basepath = self.get_repo_path( repository )
@@ -1204,18 +1214,14 @@
self.visit_url( url )
self.check_for_strings( strings_displayed, strings_not_displayed )
- def uninstall_repository( self, installed_repository, remove_from_disk=True, is_required=False, strings_displayed=[], strings_not_displayed=[] ):
+ def uninstall_repository( self, installed_repository, strings_displayed=[], strings_not_displayed=[] ):
url = '/admin_toolshed/deactivate_or_uninstall_repository?id=%s' % self.security.encode_id( installed_repository.id )
self.visit_galaxy_url( url )
self.check_for_strings( strings_displayed, strings_not_displayed )
form = tc.browser.get_form( 'deactivate_or_uninstall_repository' )
- kwd = self.set_form_value( form, {}, 'remove_from_disk', remove_from_disk )
+ kwd = self.set_form_value( form, {}, 'remove_from_disk', True )
tc.submit( 'deactivate_or_uninstall_repository_button' )
- strings_displayed = [ 'The repository named' ]
- if remove_from_disk and not is_required:
- strings_displayed.append( 'has been uninstalled' )
- else:
- strings_displayed.append( 'has been deactivated' )
+ strings_displayed = [ 'The repository named', 'has been uninstalled' ]
self.check_for_strings( strings_displayed, strings_not_displayed=[] )
def update_installed_repository( self, installed_repository, strings_displayed=[], strings_not_displayed=[] ):
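The refactoring in this commit replaces a single `uninstall_repository( ..., remove_from_disk=... )` method, whose flag also decided which confirmation string to expect, with two intention-revealing methods. A compact sketch of that split (class and method bodies here are illustrative, not the real twilltestcase API):

```python
class RepositoryActions(object):
    """Illustrative split of one flag-driven method into two explicit ones."""

    def _submit_deactivate_or_uninstall(self, repository_name, remove_from_disk):
        # Stand-in for visiting the deactivate_or_uninstall form and
        # submitting it via twill in the real test case.
        action = 'uninstalled' if remove_from_disk else 'deactivated'
        return 'The repository named %s has been %s' % (repository_name, action)

    def deactivate_repository(self, repository_name):
        return self._submit_deactivate_or_uninstall(repository_name, remove_from_disk=False)

    def uninstall_repository(self, repository_name):
        return self._submit_deactivate_or_uninstall(repository_name, remove_from_disk=True)


actions = RepositoryActions()
print(actions.deactivate_repository('emboss_datatypes'))
```

Each caller in the functional tests then states its intent directly, and the expected confirmation string can no longer drift out of sync with the flag.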
diff -r ca11e10aa1a5b7318d3727816bf0c2fe6137c522 -r 110a0aabd293fd1393d1296cb220a0b153d69123 test/tool_shed/functional/test_0490_export_import_repositories.py
--- a/test/tool_shed/functional/test_0490_export_import_repositories.py
+++ b/test/tool_shed/functional/test_0490_export_import_repositories.py
@@ -65,7 +65,7 @@
global capsule_filepath
repository = self.test_db_util.get_repository_by_name_and_owner( repository_name, common.test_user_1_name )
capsule_filepath = self.export_capsule( repository )
- log.debug( os.path.exists( capsule_filepath ) )
+ assert os.path.exists( capsule_filepath ), 'Exported repository capsule file path %s not found.' % capsule_filepath
def test_0015_verify_exported_capsule( self ):
'''Verify the exported capsule contents.'''
diff -r ca11e10aa1a5b7318d3727816bf0c2fe6137c522 -r 110a0aabd293fd1393d1296cb220a0b153d69123 test/tool_shed/functional/test_1020_install_repository_with_repository_dependencies.py
--- a/test/tool_shed/functional/test_1020_install_repository_with_repository_dependencies.py
+++ b/test/tool_shed/functional/test_1020_install_repository_with_repository_dependencies.py
@@ -121,7 +121,7 @@
global base_datatypes_count
installed_repository = self.test_db_util.get_installed_repository_by_name_owner( datatypes_repository_name, common.test_user_1_name )
old_datatypes_count = int( self.get_datatypes_count() )
- self.uninstall_repository( installed_repository, remove_from_disk=False )
+ self.deactivate_repository( installed_repository )
current_datatypes = int( self.get_datatypes_count() )
assert current_datatypes < old_datatypes_count, 'Uninstalling emboss did not remove datatypes.'
diff -r ca11e10aa1a5b7318d3727816bf0c2fe6137c522 -r 110a0aabd293fd1393d1296cb220a0b153d69123 test/tool_shed/functional/test_1040_install_repository_basic_circular_dependencies.py
--- a/test/tool_shed/functional/test_1040_install_repository_basic_circular_dependencies.py
+++ b/test/tool_shed/functional/test_1040_install_repository_basic_circular_dependencies.py
@@ -109,7 +109,7 @@
installed_filtering_repository = self.test_db_util.get_installed_repository_by_name_owner( filtering_repository_name, common.test_user_1_name )
self.display_installed_repository_manage_page( installed_freebayes_repository )
self.display_installed_repository_manage_page( installed_filtering_repository )
- self.uninstall_repository( installed_filtering_repository, remove_from_disk=False )
+ self.deactivate_repository( installed_filtering_repository )
self.test_db_util.ga_refresh( installed_filtering_repository )
self.check_galaxy_repository_tool_panel_section( installed_filtering_repository, '' )
strings_displayed = [ 'Missing repository', 'filtering', 'freebayes_0040', 'user1', "Galaxy's freebayes tool for test 0040" ]
@@ -134,7 +134,7 @@
installed_filtering_repository = self.test_db_util.get_installed_repository_by_name_owner( filtering_repository_name, common.test_user_1_name )
self.display_installed_repository_manage_page( installed_freebayes_repository )
self.display_installed_repository_manage_page( installed_filtering_repository )
- self.uninstall_repository( installed_freebayes_repository, remove_from_disk=False )
+ self.deactivate_repository( installed_freebayes_repository )
self.test_db_util.ga_refresh( installed_freebayes_repository )
self.check_galaxy_repository_tool_panel_section( installed_freebayes_repository, '' )
strings_displayed = [ 'Missing repository', 'freebayes', 'filtering_0040', 'user1', "Galaxy's filtering tool for test 0040" ]
@@ -149,7 +149,7 @@
installed_freebayes_repository = self.test_db_util.get_installed_repository_by_name_owner( freebayes_repository_name, common.test_user_1_name )
self.display_installed_repository_manage_page( installed_freebayes_repository )
self.display_installed_repository_manage_page( installed_filtering_repository )
- self.uninstall_repository( installed_filtering_repository, remove_from_disk=False )
+ self.deactivate_repository( installed_filtering_repository )
self.test_db_util.ga_refresh( installed_filtering_repository )
self.check_galaxy_repository_tool_panel_section( installed_filtering_repository, '' )
strings_displayed = [ 'Missing repository', 'filtering', 'freebayes_0040', 'user1', "Galaxy's freebayes tool for test 0040" ]
diff -r ca11e10aa1a5b7318d3727816bf0c2fe6137c522 -r 110a0aabd293fd1393d1296cb220a0b153d69123 test/tool_shed/functional/test_1050_circular_dependencies_4_levels.py
--- a/test/tool_shed/functional/test_1050_circular_dependencies_4_levels.py
+++ b/test/tool_shed/functional/test_1050_circular_dependencies_4_levels.py
@@ -315,7 +315,7 @@
'''Deactivate emboss_datatypes and verify that the datatypes count is reduced.'''
original_datatypes = self.get_datatypes_count()
repository = self.test_db_util.get_installed_repository_by_name_owner( emboss_datatypes_repository_name, common.test_user_1_name )
- self.uninstall_repository( repository, remove_from_disk=False )
+ self.deactivate_repository( repository )
assert original_datatypes > self.get_datatypes_count(), 'Deactivating emboss_datatypes did not remove datatypes.'
# Now we have emboss, bismark, column_maker, and convert_chars installed, filtering and freebayes never installed, and emboss_datatypes deactivated.
installed_repositories = [ ( column_repository_name, common.test_user_1_name ),
@@ -330,7 +330,7 @@
def test_0070_uninstall_emboss_repository( self ):
'''Uninstall the emboss_5 repository.'''
repository = self.test_db_util.get_installed_repository_by_name_owner( emboss_repository_name, common.test_user_1_name )
- self.uninstall_repository( repository, remove_from_disk=True )
+ self.uninstall_repository( repository )
strings_not_displayed = [ repository.name, repository.installed_changeset_revision ]
self.display_galaxy_browse_repositories_page( strings_not_displayed=strings_not_displayed )
self.test_db_util.ga_refresh( repository )
diff -r ca11e10aa1a5b7318d3727816bf0c2fe6137c522 -r 110a0aabd293fd1393d1296cb220a0b153d69123 test/tool_shed/functional/test_1080_advanced_circular_dependency_installation.py
--- a/test/tool_shed/functional/test_1080_advanced_circular_dependency_installation.py
+++ b/test/tool_shed/functional/test_1080_advanced_circular_dependency_installation.py
@@ -162,7 +162,7 @@
common.test_user_1_name )
installed_column_repository = self.test_db_util.get_installed_repository_by_name_owner( column_repository_name,
common.test_user_1_name )
- self.uninstall_repository( installed_convert_repository, remove_from_disk=False )
+ self.deactivate_repository( installed_convert_repository )
strings_displayed = [ 'column_maker_0080',
'Add column',
self.url.replace( 'http://', '' ),
@@ -199,7 +199,7 @@
common.test_user_1_name )
installed_column_repository = self.test_db_util.get_installed_repository_by_name_owner( column_repository_name,
common.test_user_1_name )
- self.uninstall_repository( installed_column_repository, remove_from_disk=False )
+ self.deactivate_repository( installed_column_repository )
strings_displayed = [ 'convert_chars_0080',
'Convert delimiters',
self.url.replace( 'http://', '' ),
@@ -217,7 +217,7 @@
common.test_user_1_name )
installed_column_repository = self.test_db_util.get_installed_repository_by_name_owner( column_repository_name,
common.test_user_1_name )
- self.uninstall_repository( installed_convert_repository, remove_from_disk=False )
+ self.deactivate_repository( installed_convert_repository )
strings_not_displayed = [ 'column_maker_0080',
installed_column_repository.installed_changeset_revision,
'convert_chars_0080',
@@ -274,7 +274,7 @@
common.test_user_1_name )
installed_column_repository = self.test_db_util.get_installed_repository_by_name_owner( column_repository_name,
common.test_user_1_name )
- self.uninstall_repository( installed_column_repository, remove_from_disk=True )
+ self.uninstall_repository( installed_column_repository )
strings_displayed = [ 'convert_chars_0080',
'Convert delimiters',
self.url.replace( 'http://', '' ),
@@ -320,7 +320,7 @@
common.test_user_1_name )
installed_column_repository = self.test_db_util.get_installed_repository_by_name_owner( column_repository_name,
common.test_user_1_name )
- self.uninstall_repository( installed_convert_repository, remove_from_disk=False )
+ self.deactivate_repository( installed_convert_repository )
strings_displayed = [ 'column_maker_0080',
'Add column',
self.url.replace( 'http://', '' ),
@@ -340,7 +340,7 @@
common.test_user_1_name )
installed_column_repository = self.test_db_util.get_installed_repository_by_name_owner( column_repository_name,
common.test_user_1_name )
- self.uninstall_repository( installed_column_repository, remove_from_disk=False )
+ self.deactivate_repository( installed_column_repository )
strings_displayed = [ 'convert_chars_0080',
'Convert delimiters',
self.url.replace( 'http://', '' ),
@@ -387,8 +387,8 @@
common.test_user_1_name )
installed_column_repository = self.test_db_util.get_installed_repository_by_name_owner( column_repository_name,
common.test_user_1_name )
- self.uninstall_repository( installed_column_repository, remove_from_disk=False )
- self.uninstall_repository( installed_convert_repository, remove_from_disk=False )
+ self.deactivate_repository( installed_column_repository )
+ self.deactivate_repository( installed_convert_repository )
self.test_db_util.install_session.refresh( installed_column_repository )
self.test_db_util.install_session.refresh( installed_convert_repository )
self.check_galaxy_repository_tool_panel_section( installed_column_repository, '' )
diff -r ca11e10aa1a5b7318d3727816bf0c2fe6137c522 -r 110a0aabd293fd1393d1296cb220a0b153d69123 test/tool_shed/functional/test_1085_repository_dependency_handling.py
--- a/test/tool_shed/functional/test_1085_repository_dependency_handling.py
+++ b/test/tool_shed/functional/test_1085_repository_dependency_handling.py
@@ -124,14 +124,14 @@
'''uninstall column_maker, verify same section'''
installed_column_repository = self.test_db_util.get_installed_repository_by_name_owner( column_repository_name,
common.test_user_1_name )
- self.uninstall_repository( installed_column_repository, remove_from_disk=True )
+ self.uninstall_repository( installed_column_repository )
self.test_db_util.ga_refresh( installed_column_repository )
self.check_galaxy_repository_tool_panel_section( installed_column_repository, 'column_maker' )
def test_0030_uninstall_convert_repository( self ):
installed_convert_repository = self.test_db_util.get_installed_repository_by_name_owner( convert_repository_name,
common.test_user_1_name )
- self.uninstall_repository( installed_convert_repository, remove_from_disk=True )
+ self.uninstall_repository( installed_convert_repository )
self.test_db_util.ga_refresh( installed_convert_repository )
self.check_galaxy_repository_tool_panel_section( installed_convert_repository, 'column_maker' )
@@ -173,8 +173,8 @@
common.test_user_1_name )
installed_column_repository = self.test_db_util.get_installed_repository_by_name_owner( column_repository_name,
common.test_user_1_name )
- self.uninstall_repository( installed_convert_repository, remove_from_disk=True )
- self.uninstall_repository( installed_column_repository, remove_from_disk=True )
+ self.uninstall_repository( installed_convert_repository )
+ self.uninstall_repository( installed_column_repository )
self.test_db_util.ga_refresh( installed_convert_repository )
self.test_db_util.ga_refresh( installed_column_repository )
self.check_galaxy_repository_tool_panel_section( installed_column_repository, 'new_column_maker' )
diff -r ca11e10aa1a5b7318d3727816bf0c2fe6137c522 -r 110a0aabd293fd1393d1296cb220a0b153d69123 test/tool_shed/functional/test_1087_install_updated_repository_dependencies.py
--- a/test/tool_shed/functional/test_1087_install_updated_repository_dependencies.py
+++ b/test/tool_shed/functional/test_1087_install_updated_repository_dependencies.py
@@ -89,7 +89,7 @@
new_tool_panel_section_label='column_maker',
strings_not_displayed=[ 'install_repository_dependencies' ] )
installed_column_repository = self.test_db_util.get_installed_repository_by_name_owner( column_repository_name, common.test_user_1_name )
- self.uninstall_repository( installed_column_repository, remove_from_disk=True )
+ self.uninstall_repository( installed_column_repository )
def test_0020_upload_dependency_xml( self ):
'''Upload a repository_dependencies.xml file to column_maker that specifies convert_chars.'''
diff -r ca11e10aa1a5b7318d3727816bf0c2fe6137c522 -r 110a0aabd293fd1393d1296cb220a0b153d69123 test/tool_shed/functional/test_1090_install_tool_from_tool_search.py
--- a/test/tool_shed/functional/test_1090_install_tool_from_tool_search.py
+++ b/test/tool_shed/functional/test_1090_install_tool_from_tool_search.py
@@ -251,7 +251,7 @@
self.display_galaxy_browse_repositories_page( strings_displayed=strings_displayed, strings_not_displayed=strings_not_displayed )
self.verify_installed_repositories( installed_repositories )
filtering_repository = self.test_db_util.get_installed_repository_by_name_owner( filtering_repository_name, common.test_user_1_name )
- self.uninstall_repository( filtering_repository, remove_from_disk=False )
+ self.deactivate_repository( filtering_repository )
strings_displayed = [ 'freebayes_0090' ]
strings_not_displayed = [ 'filtering_0090', 'emboss_0090', 'emboss_datatypes_0090', 'bwa_color_0090', 'bwa_base_0090' ]
self.display_galaxy_browse_repositories_page( strings_displayed=strings_displayed, strings_not_displayed=strings_not_displayed )
@@ -278,7 +278,7 @@
self.display_galaxy_browse_repositories_page( strings_displayed=strings_displayed, strings_not_displayed=strings_not_displayed )
self.verify_installed_repositories( installed_repositories )
datatypes_repository = self.test_db_util.get_installed_repository_by_name_owner( emboss_datatypes_repository_name, common.test_user_1_name )
- self.uninstall_repository( datatypes_repository, remove_from_disk=True )
+ self.uninstall_repository( datatypes_repository )
strings_displayed = [ 'freebayes_0090' ]
strings_not_displayed = [ 'emboss_datatypes_0090', 'filtering_0090', 'emboss_0090', 'bwa_color_0090', 'bwa_base_0090' ]
self.display_galaxy_browse_repositories_page( strings_displayed=strings_displayed, strings_not_displayed=strings_not_displayed )
diff -r ca11e10aa1a5b7318d3727816bf0c2fe6137c522 -r 110a0aabd293fd1393d1296cb220a0b153d69123 test/tool_shed/functional/test_1100_install_repository_with_complex_dependencies.py
--- a/test/tool_shed/functional/test_1100_install_repository_with_complex_dependencies.py
+++ b/test/tool_shed/functional/test_1100_install_repository_with_complex_dependencies.py
@@ -278,7 +278,7 @@
'''
base_repository = self.test_db_util.get_installed_repository_by_name_owner( bwa_base_repository_name, common.test_user_1_name )
tool_repository = self.test_db_util.get_installed_repository_by_name_owner( bwa_package_repository_name, common.test_user_1_name )
- self.uninstall_repository( tool_repository, remove_from_disk=False )
+ self.deactivate_repository( tool_repository )
env_sh_path = os.path.join( self.galaxy_tool_dependency_dir,
'bwa',
'0.5.9',
diff -r ca11e10aa1a5b7318d3727816bf0c2fe6137c522 -r 110a0aabd293fd1393d1296cb220a0b153d69123 test/tool_shed/functional/test_1130_datatype_converters.py
--- a/test/tool_shed/functional/test_1130_datatype_converters.py
+++ b/test/tool_shed/functional/test_1130_datatype_converters.py
@@ -87,5 +87,5 @@
so we have to uninstall it first, then verify that it was not assigned a tool panel section.
'''
repository = self.test_db_util.get_installed_repository_by_name_owner( repository_name, common.test_user_1_name )
- self.uninstall_repository( repository, remove_from_disk=True )
+ self.uninstall_repository( repository )
self.verify_installed_repository_no_tool_panel_section( repository )
diff -r ca11e10aa1a5b7318d3727816bf0c2fe6137c522 -r 110a0aabd293fd1393d1296cb220a0b153d69123 test/tool_shed/functional/test_1160_circular_prior_installation_required.py
--- a/test/tool_shed/functional/test_1160_circular_prior_installation_required.py
+++ b/test/tool_shed/functional/test_1160_circular_prior_installation_required.py
@@ -217,18 +217,16 @@
assert filter_repository.update_time > convert_repository.update_time, 'Error: convert_chars_0160 shows a later update time than filtering_0160'
assert filter_repository.update_time > column_repository.update_time, 'Error: column_maker_0160 shows a later update time than filtering_0160'
- def test_0040_uninstall_all_repositories( self ):
+ def test_0040_deactivate_all_repositories( self ):
'''Uninstall convert_chars_0160, column_maker_0160, and filtering_0160.'''
filter_repository = self.test_db_util.get_installed_repository_by_name_owner( filter_repository_name, common.test_user_1_name )
column_repository = self.test_db_util.get_installed_repository_by_name_owner( column_repository_name, common.test_user_1_name )
convert_repository = self.test_db_util.get_installed_repository_by_name_owner( convert_repository_name, common.test_user_1_name )
- # TODO: Tweak the uninstall_repository method by pulling out the stuff that deactivates a repository
- # instead of uninstalling it and create a new deactivate_repository method.
- self.uninstall_repository( filter_repository, remove_from_disk=False )
- self.uninstall_repository( column_repository, remove_from_disk=False )
- self.uninstall_repository( convert_repository, remove_from_disk=False )
+ self.deactivate_repository( filter_repository )
+ self.deactivate_repository( column_repository )
+ self.deactivate_repository( convert_repository )
- def test_0045_reactivatel_filter_repository( self ):
+ def test_0045_reactivate_filter_repository( self ):
'''Reinstall the filtering_0160 repository.'''
filter_repository = self.test_db_util.get_installed_repository_by_name_owner( filter_repository_name, common.test_user_1_name )
self.reactivate_repository( filter_repository )
diff -r ca11e10aa1a5b7318d3727816bf0c2fe6137c522 -r 110a0aabd293fd1393d1296cb220a0b153d69123 test/tool_shed/functional/test_1200_uninstall_and_reinstall_basic_repository.py
--- a/test/tool_shed/functional/test_1200_uninstall_and_reinstall_basic_repository.py
+++ b/test/tool_shed/functional/test_1200_uninstall_and_reinstall_basic_repository.py
@@ -1,7 +1,9 @@
from tool_shed.base.twilltestcase import ShedTwillTestCase, common, os
+
class UninstallingAndReinstallingRepositories( ShedTwillTestCase ):
'''Test uninstalling and reinstalling a basic repository.'''
+
def test_0000_initiate_users( self ):
"""Create necessary user accounts."""
self.logout()
@@ -19,6 +21,7 @@
galaxy_admin_user = self.test_db_util.get_galaxy_user( common.admin_email )
assert galaxy_admin_user is not None, 'Problem retrieving user with email %s from the database' % common.admin_email
galaxy_admin_user_private_role = self.test_db_util.get_galaxy_private_role( galaxy_admin_user )
+
def test_0005_ensure_repositories_and_categories_exist( self ):
'''Create the 0000 category and upload the filtering repository to the tool shed, if necessary.'''
category = self.create_category( name='Test 0000 Basic Repository Features 1', description='Test 0000 Basic Repository Features 1' )
@@ -67,6 +70,7 @@
commit_message='Uploaded readme for 2.2.0',
strings_displayed=[],
strings_not_displayed=[] )
+
def test_0010_install_filtering_repository( self ):
'''Install the filtering repository into the Galaxy instance.'''
self.galaxy_logout()
@@ -82,14 +86,16 @@
self.url.replace( 'http://', '' ),
installed_repository.installed_changeset_revision ]
self.display_galaxy_browse_repositories_page( strings_displayed=strings_displayed )
+
def test_0015_uninstall_filtering_repository( self ):
'''Uninstall the filtering repository.'''
installed_repository = self.test_db_util.get_installed_repository_by_name_owner( 'filtering_0000', common.test_user_1_name )
- self.uninstall_repository( installed_repository, remove_from_disk=True )
+ self.uninstall_repository( installed_repository )
strings_not_displayed = [ 'filtering_0000',
"Galaxy's filtering tool for test 0000",
installed_repository.installed_changeset_revision ]
self.display_galaxy_browse_repositories_page( strings_not_displayed=strings_not_displayed )
+
def test_0020_reinstall_filtering_repository( self ):
'''Reinstall the filtering repository.'''
installed_repository = self.test_db_util.get_installed_repository_by_name_owner( 'filtering_0000', common.test_user_1_name )
@@ -103,14 +109,16 @@
strings_displayed.extend( [ 'Installed tool shed repository', 'Valid tools', 'Filter1' ] )
self.display_installed_repository_manage_page( installed_repository, strings_displayed=strings_displayed )
self.verify_tool_metadata_for_installed_repository( installed_repository )
+
def test_0025_deactivate_filtering_repository( self ):
'''Deactivate the filtering repository without removing it from disk.'''
installed_repository = self.test_db_util.get_installed_repository_by_name_owner( 'filtering_0000', common.test_user_1_name )
- self.uninstall_repository( installed_repository, remove_from_disk=False )
+ self.deactivate_repository( installed_repository )
strings_not_displayed = [ 'filtering_0000',
"Galaxy's filtering tool for test 0000",
installed_repository.installed_changeset_revision ]
self.display_galaxy_browse_repositories_page( strings_not_displayed=strings_not_displayed )
+
def test_0030_reactivate_filtering_repository( self ):
'''Reactivate the filtering repository and verify that it now shows up in the list of installed repositories.'''
installed_repository = self.test_db_util.get_installed_repository_by_name_owner( 'filtering_0000', common.test_user_1_name )
diff -r ca11e10aa1a5b7318d3727816bf0c2fe6137c522 -r 110a0aabd293fd1393d1296cb220a0b153d69123 test/tool_shed/functional/test_1210_uninstall_reinstall_repository_with_tool_dependencies.py
--- a/test/tool_shed/functional/test_1210_uninstall_reinstall_repository_with_tool_dependencies.py
+++ b/test/tool_shed/functional/test_1210_uninstall_reinstall_repository_with_tool_dependencies.py
@@ -1,7 +1,9 @@
from tool_shed.base.twilltestcase import ShedTwillTestCase, common, os
+
class UninstallingAndReinstallingRepositories( ShedTwillTestCase ):
'''Test uninstalling and reinstalling a repository with tool dependencies.'''
+
def test_0000_initiate_users( self ):
"""Create necessary user accounts."""
self.galaxy_logout()
@@ -19,6 +21,7 @@
admin_user = self.test_db_util.get_user( common.admin_email )
assert admin_user is not None, 'Problem retrieving user with email %s from the database' % common.admin_email
admin_user_private_role = self.test_db_util.get_private_role( admin_user )
+
def test_0005_ensure_repositories_and_categories_exist( self ):
'''Create the 0010 category and upload the freebayes repository to the tool shed, if necessary.'''
category = self.create_category( name='Test 0010 Repository With Tool Dependencies', description='Tests for a repository with tool dependencies.' )
@@ -84,6 +87,7 @@
commit_message='Uploaded valid tool dependency XML.',
strings_displayed=[],
strings_not_displayed=[] )
+
def test_0010_install_freebayes_repository( self ):
'''Install the freebayes repository into the Galaxy instance.'''
self.galaxy_logout()
@@ -101,12 +105,14 @@
self.url.replace( 'http://', '' ),
installed_repository.installed_changeset_revision ]
self.display_galaxy_browse_repositories_page( strings_displayed=strings_displayed )
+
def test_0015_uninstall_freebayes_repository( self ):
'''Uninstall the freebayes repository.'''
installed_repository = self.test_db_util.get_installed_repository_by_name_owner( 'freebayes_0010', common.test_user_1_name )
- self.uninstall_repository( installed_repository, remove_from_disk=True )
+ self.uninstall_repository( installed_repository )
strings_not_displayed = [ installed_repository.name, installed_repository.installed_changeset_revision ]
self.display_galaxy_browse_repositories_page( strings_not_displayed=strings_not_displayed )
+
def test_0020_reinstall_freebayes_repository( self ):
'''Reinstall the freebayes repository.'''
installed_repository = self.test_db_util.get_installed_repository_by_name_owner( 'freebayes_0010', common.test_user_1_name )
@@ -120,12 +126,14 @@
strings_displayed.extend( [ 'Installed tool shed repository', 'Valid tools', 'FreeBayes' ] )
self.display_installed_repository_manage_page( installed_repository, strings_displayed=strings_displayed )
self.verify_tool_metadata_for_installed_repository( installed_repository )
+
def test_0025_deactivate_freebayes_repository( self ):
'''Deactivate the freebayes repository without removing it from disk.'''
installed_repository = self.test_db_util.get_installed_repository_by_name_owner( 'freebayes_0010', common.test_user_1_name )
- self.uninstall_repository( installed_repository, remove_from_disk=False )
+ self.deactivate_repository( installed_repository )
strings_not_displayed = [ installed_repository.name, installed_repository.installed_changeset_revision ]
self.display_galaxy_browse_repositories_page( strings_not_displayed=strings_not_displayed )
+
def test_0030_reactivate_freebayes_repository( self ):
'''Reactivate the freebayes repository and verify that it now shows up in the list of installed repositories.'''
installed_repository = self.test_db_util.get_installed_repository_by_name_owner( 'freebayes_0010', common.test_user_1_name )
diff -r ca11e10aa1a5b7318d3727816bf0c2fe6137c522 -r 110a0aabd293fd1393d1296cb220a0b153d69123 test/tool_shed/functional/test_1220_uninstall_reinstall_repository_with_repository_dependencies.py
--- a/test/tool_shed/functional/test_1220_uninstall_reinstall_repository_with_repository_dependencies.py
+++ b/test/tool_shed/functional/test_1220_uninstall_reinstall_repository_with_repository_dependencies.py
@@ -108,7 +108,7 @@
def test_0015_uninstall_emboss_repository( self ):
'''Uninstall the emboss repository.'''
installed_repository = self.test_db_util.get_installed_repository_by_name_owner( emboss_repository_name, common.test_user_1_name )
- self.uninstall_repository( installed_repository, remove_from_disk=True )
+ self.uninstall_repository( installed_repository )
strings_not_displayed = [ installed_repository.name, installed_repository.installed_changeset_revision ]
self.display_galaxy_browse_repositories_page( strings_not_displayed=strings_not_displayed )
@@ -129,7 +129,7 @@
def test_0025_deactivate_emboss_repository( self ):
'''Deactivate the emboss repository without removing it from disk.'''
installed_repository = self.test_db_util.get_installed_repository_by_name_owner( emboss_repository_name, common.test_user_1_name )
- self.uninstall_repository( installed_repository, remove_from_disk=False )
+ self.deactivate_repository( installed_repository )
strings_not_displayed = [ 'emboss_0020',
installed_repository.installed_changeset_revision ]
self.display_galaxy_browse_repositories_page( strings_not_displayed=strings_not_displayed )
diff -r ca11e10aa1a5b7318d3727816bf0c2fe6137c522 -r 110a0aabd293fd1393d1296cb220a0b153d69123 test/tool_shed/functional/test_1230_uninstall_reinstall_repository_with_dependency_revisions.py
--- a/test/tool_shed/functional/test_1230_uninstall_reinstall_repository_with_dependency_revisions.py
+++ b/test/tool_shed/functional/test_1230_uninstall_reinstall_repository_with_dependency_revisions.py
@@ -144,7 +144,7 @@
def test_0015_uninstall_emboss_repository( self ):
'''Uninstall the emboss repository.'''
installed_repository = self.test_db_util.get_installed_repository_by_name_owner( emboss_repository_name, common.test_user_1_name )
- self.uninstall_repository( installed_repository, remove_from_disk=True )
+ self.uninstall_repository( installed_repository )
strings_not_displayed = [ installed_repository.installed_changeset_revision ]
self.display_galaxy_browse_repositories_page( strings_not_displayed=strings_not_displayed )
@@ -165,7 +165,7 @@
def test_0025_deactivate_emboss_repository( self ):
'''Deactivate the emboss repository without removing it from disk.'''
installed_repository = self.test_db_util.get_installed_repository_by_name_owner( emboss_repository_name, common.test_user_1_name )
- self.uninstall_repository( installed_repository, remove_from_disk=False )
+ self.deactivate_repository( installed_repository )
strings_not_displayed = [ installed_repository.installed_changeset_revision ]
self.display_galaxy_browse_repositories_page( strings_not_displayed=strings_not_displayed )
diff -r ca11e10aa1a5b7318d3727816bf0c2fe6137c522 -r 110a0aabd293fd1393d1296cb220a0b153d69123 test/tool_shed/functional/test_1430_repair_installed_repository.py
--- a/test/tool_shed/functional/test_1430_repair_installed_repository.py
+++ b/test/tool_shed/functional/test_1430_repair_installed_repository.py
@@ -146,10 +146,9 @@
installed_repository = self.test_db_util.get_installed_repository_by_name_owner( 'filter_1430', common.test_user_1_name )
strings_displayed = [ 'This repository cannot be uninstalled' ]
strings_not_displayed = [ 'Check to uninstall' ]
- self.uninstall_repository( installed_repository,
- remove_from_disk=False,
- strings_displayed=strings_displayed,
- strings_not_displayed=strings_not_displayed )
+ self.deactivate_repository( installed_repository,
+ strings_displayed=strings_displayed,
+ strings_not_displayed=strings_not_displayed )
strings_not_displayed = [ 'filter_1430',
"Galaxy's filter tool for test 1430",
installed_repository.installed_changeset_revision ]
diff -r ca11e10aa1a5b7318d3727816bf0c2fe6137c522 -r 110a0aabd293fd1393d1296cb220a0b153d69123 test/tool_shed/functional/test_1450_installing_datatypes_sniffers.py
--- a/test/tool_shed/functional/test_1450_installing_datatypes_sniffers.py
+++ b/test/tool_shed/functional/test_1450_installing_datatypes_sniffers.py
@@ -119,7 +119,7 @@
global repository_datatypes_count
global base_datatypes_count
global base_sniffers_count
- self.uninstall_repository( repository, remove_from_disk=False )
+ self.deactivate_repository( repository )
new_datatypes_count = self.get_datatypes_count()
assert new_datatypes_count == base_datatypes_count, 'Expected %d datatypes, got %d' % ( base_datatypes_count, new_datatypes_count )
current_sniffers = self.get_sniffers_count()
@@ -157,7 +157,7 @@
repository = self.test_db_util.get_installed_repository_by_name_owner( repository_name, common.test_user_1_name )
global repository_datatypes_count
global base_datatypes_count
- self.uninstall_repository( repository, remove_from_disk=True )
+ self.uninstall_repository( repository )
new_datatypes_count = self.get_datatypes_count()
assert new_datatypes_count == base_datatypes_count, 'Expected %d datatypes, got %d' % ( base_datatypes_count, new_datatypes_count )
current_sniffers = self.get_sniffers_count()
Repository URL: https://bitbucket.org/galaxy/galaxy-central/
--
This is a commit notification from bitbucket.org. You are receiving
this because you have the service enabled, addressing the recipient of
this email.
commit/galaxy-central: greg: Handle invalid information from the tool shed API in the tool shed's install and test framework.
by commits-noreply@bitbucket.org 08 Jan '14
1 new commit in galaxy-central:
https://bitbucket.org/galaxy/galaxy-central/commits/ca11e10aa1a5/
Changeset: ca11e10aa1a5
User: greg
Date: 2014-01-09 04:29:33
Summary: Handle invalid information from the tool shed API in the tool shed's install and test framework.
Affected #: 1 file
diff -r ce0d87c3afb0269307640dc64daee3b47095dedf -r ca11e10aa1a5b7318d3727816bf0c2fe6137c522 test/install_and_test_tool_shed_repositories/base/util.py
--- a/test/install_and_test_tool_shed_repositories/base/util.py
+++ b/test/install_and_test_tool_shed_repositories/base/util.py
@@ -889,9 +889,19 @@
log.debug( 'due to the following error getting repository_dependencies_dicts:\n%s' % str( error_message ) )
else:
for repository_dependencies_dict in repository_dependencies_dicts:
- name = str( repository_dependencies_dict[ 'name' ] )
- owner = str( repository_dependencies_dict[ 'owner' ] )
- changeset_revision = str( repository_dependencies_dict[ 'changeset_revision' ] )
+ if not isinstance( repository_dependencies_dict, dict ):
+ log.debug( 'Skipping invalid repository_dependencies_dict: %s' % str( repository_dependencies_dict ) )
+ continue
+ name = repository_dependencies_dict.get( 'name', None )
+ owner = repository_dependencies_dict.get( 'owner', None )
+ changeset_revision = repository_dependencies_dict.get( 'changeset_revision', None )
+ if name is None or owner is None or changeset_revision is None:
+ log.debug( 'Skipping invalid repository_dependencies_dict due to missing name,owner or changeset_revision: %s' % \
+ str( repository_dependencies_dict ) )
+ continue
+ name = str( name )
+ owner = str( owner )
+ changeset_revision = str( changeset_revision )
log.debug( 'Checking installation containers for revision %s of repository dependency %s owned by %s' % \
( changeset_revision, name, owner ) )
required_repository_metadata_id = repository_dependencies_dict[ 'id' ]
@@ -907,21 +917,21 @@
# Check the required repository's time_last_tested value to see if its tool_test_results column
# has been updated within the past 12 hours. The RepositoryMetadata class's to_dict() method
# returns the value of time_last_tested in datetime.isoformat().
- twelve_hours_ago = ( datetime.utcnow() - timedelta( hours=12 ) ).isoformat()
+ twenty_hours_ago = ( datetime.utcnow() - timedelta( hours=20 ) ).isoformat()
time_last_tested, error_message = get_time_last_tested( galaxy_tool_shed_url, repository_metadata_id )
- if time_last_tested is not None and time_last_tested < twelve_hours_ago:
+ if time_last_tested is not None and time_last_tested < twenty_hours_ago:
log.debug( 'The install containers for version %s of repository dependency %s owned by %s have been ' % \
( changeset_revision, name, owner ) )
- log.debug( 'populated within the past 12 hours (likely in this test run), so skipping this check.' )
+ log.debug( 'populated within the past 20 hours (likely in this test run), so skipping this check.' )
continue
elif time_last_tested is None:
log.debug( 'The time_last_tested column value is None for version %s of repository dependency %s owned by %s.' % \
( changeset_revision, name, owner ) )
- elif time_last_tested < twelve_hours_ago:
- log.debug( 'Version %s of repository dependency %s owned by %s was last tested less than 12 hours ago.' % \
+ elif time_last_tested < twenty_hours_ago:
+ log.debug( 'Version %s of repository dependency %s owned by %s was last tested less than 20 hours ago.' % \
( changeset_revision, name, owner ) )
else:
- log.debug( 'Version %s of repository dependency %s owned by %s was last tested more than 12 hours ago.' % \
+ log.debug( 'Version %s of repository dependency %s owned by %s was last tested more than 20 hours ago.' % \
( changeset_revision, name, owner ) )
# Inspect the tool_test_results_dict for the last test run to see if it has not yet been populated.
if len( tool_test_results_dicts ) == 0:
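The new guard clauses in `util.py` skip malformed API responses instead of raising `KeyError`/`TypeError` on direct indexing. The same defensive pattern in isolation (the record shapes and the helper name `valid_dependency_dicts` are illustrative, not from the changeset):

```python
# Filter repository-dependency records down to the valid ones, skipping
# anything that is not a dict or is missing a required key -- the same
# defensive pattern the changeset above applies in the test framework.

def valid_dependency_dicts(repository_dependencies_dicts):
    valid = []
    for d in repository_dependencies_dicts:
        if not isinstance(d, dict):
            continue  # e.g. an error string returned by the API
        name = d.get('name', None)
        owner = d.get('owner', None)
        changeset_revision = d.get('changeset_revision', None)
        if name is None or owner is None or changeset_revision is None:
            continue  # a required field is missing
        valid.append(dict(name=str(name), owner=str(owner),
                          changeset_revision=str(changeset_revision)))
    return valid

# Illustrative inputs: one good record, one non-dict, one missing a key.
records = [
    {'name': 'column_maker', 'owner': 'test', 'changeset_revision': 'abc123'},
    'unexpected error message',
    {'name': 'convert_chars', 'owner': 'test'},
]
print(valid_dependency_dicts(records))
```

Using `dict.get` with a `None` default lets one code path handle both missing keys and explicit nulls before the values are coerced with `str()`.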
commit/galaxy-central: greg: Eliminate unnecessary RepositoryMetadata.to_dict() method in the tool shed.
by commits-noreply@bitbucket.org 08 Jan '14
1 new commit in galaxy-central:
https://bitbucket.org/galaxy/galaxy-central/commits/ce0d87c3afb0/
Changeset: ce0d87c3afb0
User: greg
Date: 2014-01-09 03:25:19
Summary: Eliminate unnecessary RepositoryMetadata.to_dict() method in the tool shed.
Affected #: 1 file
diff -r 61721f9eda65b1e7444264249815ad993e61fb27 -r ce0d87c3afb0269307640dc64daee3b47095dedf lib/galaxy/webapps/tool_shed/model/__init__.py
--- a/lib/galaxy/webapps/tool_shed/model/__init__.py
+++ b/lib/galaxy/webapps/tool_shed/model/__init__.py
@@ -271,14 +271,6 @@
return True
return False
- def to_dict( self, view='collection', value_mapper=None ):
- rval = super( RepositoryMetadata, self ).to_dict( view=view, value_mapper=value_mapper )
- if 'time_last_tested' in rval:
- time_last_tested = rval[ 'time_last_tested' ]
- if time_last_tested is not None:
- rval[ 'time_last_tested' ] = time_last_tested.isoformat()
- return rval
-
class SkipToolTest( object, Dictifiable ):
dict_collection_visible_keys = ( 'id', 'repository_metadata_id', 'initial_changeset_revision' )
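The removed `to_dict()` override existed only to convert the `time_last_tested` datetime to its ISO-8601 string during serialization. Where that conversion is still wanted, a Dictifiable-style `to_dict` can take it through a `value_mapper` instead. A sketch of that idea, assuming a plain-dict record (the standalone `to_dict` function here is illustrative, not Galaxy's actual Dictifiable implementation):

```python
from datetime import datetime

# Sketch: convert a datetime field to isoformat via a per-key value_mapper
# rather than overriding to_dict() on the model class.

def to_dict(record, value_mapper=None):
    value_mapper = value_mapper or {}
    # Apply the mapped converter where one exists, identity otherwise.
    return {key: value_mapper.get(key, lambda v: v)(value)
            for key, value in record.items()}

record = {'id': 7, 'time_last_tested': datetime(2014, 1, 9, 4, 29, 33)}
rval = to_dict(record, value_mapper={'time_last_tested': lambda v: v.isoformat()})
print(rval['time_last_tested'])  # '2014-01-09T04:29:33'
```

One side effect worth noting: ISO-8601 strings order lexicographically the same way the underlying datetimes order chronologically, which is what makes comparisons like `time_last_tested < twenty_hours_ago` in the previous changeset valid on strings.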