galaxy-commits
Branch: refs/heads/master
Home: https://github.com/galaxyproject/usegalaxy-playbook
Commit: 03967ae4c825381d22c31bcad2621b686e3a85d1
https://github.com/galaxyproject/usegalaxy-playbook/commit/03967ae4c825381d…
Author: Nate Coraor <nate@bx.psu.edu>
Date: 2015-03-17 (Tue, 17 Mar 2015)
Changed paths:
M production/group_vars/all.yml
Log Message:
-----------
Update Main to v15.03.1
Commit: 5420148bb4e6e6a24ec6d1e379f4b8450109b0df
https://github.com/galaxyproject/usegalaxy-playbook/commit/5420148bb4e6e6a2…
Author: Nate Coraor <nate@bx.psu.edu>
Date: 2015-03-17 (Tue, 17 Mar 2015)
Changed paths:
M production/group_vars/galaxyservers.yml
Log Message:
-----------
Fix Main installer app (thanks to @dannon for fixing the `enable_static` conflict under uWSGI).
Compare: https://github.com/galaxyproject/usegalaxy-playbook/compare/62da611fdbb0...…
5 new commits in galaxy-central:
https://bitbucket.org/galaxy/galaxy-central/commits/9f40ff3e3fe1/
Changeset: 9f40ff3e3fe1
Branch: stable
User: natefoo
Date: 2015-03-17 02:34:24+00:00
Summary: Merge release_15.03
Affected #: 26 files
diff -r db72e1534441c03d2d60616bacfee5bf4ebc66cc -r 9f40ff3e3fe1f7fed7e055d4269fe83199cf5ed2 client/galaxy/scripts/mvc/history/history-model.js
--- a/client/galaxy/scripts/mvc/history/history-model.js
+++ b/client/galaxy/scripts/mvc/history/history-model.js
@@ -208,7 +208,7 @@
/** purge this history, _Mark_ing it as purged and removing all dataset data from the server */
purge : function( options ){
if( this.get( 'purged' ) ){ return jQuery.when(); }
- return this.save( { purged: true }, options );
+ return this.save( { deleted: true, purged: true }, options );
},
/** save this history, _Mark_ing it as undeleted */
undelete : function( options ){
diff -r db72e1534441c03d2d60616bacfee5bf4ebc66cc -r 9f40ff3e3fe1f7fed7e055d4269fe83199cf5ed2 lib/galaxy/datatypes/metadata.py
--- a/lib/galaxy/datatypes/metadata.py
+++ b/lib/galaxy/datatypes/metadata.py
@@ -667,11 +667,20 @@
# need to make different keys for them, since ids can overlap
return "%s_%d" % ( dataset.__class__.__name__, dataset.id )
- def setup_external_metadata( self, datasets, sa_session, exec_dir=None, tmp_dir=None, dataset_files_path=None,
- output_fnames=None, config_root=None, config_file=None, datatypes_config=None, job_metadata=None, compute_tmp_dir=None, kwds=None ):
+ def setup_external_metadata( self, datasets, sa_session, exec_dir=None,
+ tmp_dir=None, dataset_files_path=None,
+ output_fnames=None, config_root=None,
+ config_file=None, datatypes_config=None,
+ job_metadata=None, compute_tmp_dir=None,
+ include_command=True, kwds=None ):
kwds = kwds or {}
if tmp_dir is None:
tmp_dir = MetadataTempFile.tmp_dir
+ else:
+ MetadataTempFile.tmp_dir = tmp_dir
+
+ if not os.path.exists(tmp_dir):
+ os.makedirs(tmp_dir)
# path is calculated for Galaxy, may be different on compute - rewrite
# for the compute server.
@@ -687,8 +696,8 @@
def __get_filename_override():
if output_fnames:
for dataset_path in output_fnames:
- if dataset_path.false_path and dataset_path.real_path == metadata_files.dataset.file_name:
- return dataset_path.false_path
+ if dataset_path.real_path == metadata_files.dataset.file_name:
+ return dataset_path.false_path or dataset_path.real_path
return ""
line = "%s,%s,%s,%s,%s,%s" % (
metadata_path_on_compute(metadata_files.filename_in),
@@ -761,8 +770,18 @@
sa_session.add( metadata_files )
sa_session.flush()
metadata_files_list.append( metadata_files )
- #return command required to build
- return "%s %s %s %s %s %s %s %s" % ( os.path.join( exec_dir, 'set_metadata.sh' ), dataset_files_path, compute_tmp_dir or tmp_dir, config_root, config_file, datatypes_config, job_metadata, " ".join( map( __metadata_files_list_to_cmd_line, metadata_files_list ) ) )
+ args = "%s %s %s" % ( datatypes_config,
+ job_metadata,
+ " ".join( map( __metadata_files_list_to_cmd_line, metadata_files_list ) ) )
+ if include_command:
+ #return command required to build
+ fd, fp = tempfile.mkstemp( suffix='.py', dir = tmp_dir, prefix = "set_metadata_" )
+ metadata_script_file = abspath( fp )
+ os.fdopen( fd, 'w' ).write( 'from galaxy_ext.metadata.set_metadata import set_metadata; set_metadata()' )
+ return "python %s %s" % ( metadata_path_on_compute(metadata_script_file), args )
+ else:
+ # return args to galaxy_ext.metadata.set_metadata required to build
+ return args
def external_metadata_set_successfully( self, dataset, sa_session ):
metadata_files = self.get_output_filenames_by_dataset( dataset, sa_session )
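For orientation, a minimal standalone sketch of the two return modes introduced above, assuming a hypothetical helper name (this is not Galaxy's actual function): with include_command a throwaway wrapper script is written to tmp_dir and a full "python <wrapper> <args>" command line is returned; without it, only the argument string is returned for a tool that supplies its own entry point via a <configfile>.

    import os
    import tempfile

    def build_metadata_command(args, tmp_dir, include_command=True):
        # Hypothetical helper mirroring the logic of setup_external_metadata() above.
        if include_command:
            # Write a temporary wrapper script that calls the packaged entry point.
            fd, script = tempfile.mkstemp(suffix='.py', dir=tmp_dir, prefix='set_metadata_')
            with os.fdopen(fd, 'w') as fh:
                fh.write('from galaxy_ext.metadata.set_metadata import set_metadata; set_metadata()')
            return 'python %s %s' % (script, args)
        # Caller (e.g. the SetMetadata tool's <configfile>) provides the wrapper itself.
        return args

    print(build_metadata_command('datatypes_conf.xml galaxy.json', tempfile.gettempdir()))
    print(build_metadata_command('datatypes_conf.xml galaxy.json', tempfile.gettempdir(), include_command=False))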
diff -r db72e1534441c03d2d60616bacfee5bf4ebc66cc -r 9f40ff3e3fe1f7fed7e055d4269fe83199cf5ed2 lib/galaxy/datatypes/registry.py
--- a/lib/galaxy/datatypes/registry.py
+++ b/lib/galaxy/datatypes/registry.py
@@ -653,12 +653,15 @@
<requirement type="package">samtools</requirement>
</requirements>
<action module="galaxy.tools.actions.metadata" class="SetMetadataToolAction"/>
- <command>$__SET_EXTERNAL_METADATA_COMMAND_LINE__</command>
+ <command>python $set_metadata $__SET_EXTERNAL_METADATA_COMMAND_LINE__</command>
<inputs>
<param format="data" name="input1" type="data" label="File to set metadata on."/>
<param name="__ORIGINAL_DATASET_STATE__" type="hidden" value=""/>
<param name="__SET_EXTERNAL_METADATA_COMMAND_LINE__" type="hidden" value=""/>
</inputs>
+ <configfiles>
+ <configfile name="set_metadata">from galaxy_ext.metadata.set_metadata import set_metadata; set_metadata()</configfile>
+ </configfiles>
</tool>
"""
tmp_name = tempfile.NamedTemporaryFile()
diff -r db72e1534441c03d2d60616bacfee5bf4ebc66cc -r 9f40ff3e3fe1f7fed7e055d4269fe83199cf5ed2 lib/galaxy/jobs/__init__.py
--- a/lib/galaxy/jobs/__init__.py
+++ b/lib/galaxy/jobs/__init__.py
@@ -831,8 +831,7 @@
self.command_line, self.extra_filenames = tool_evaluator.build()
# FIXME: for now, tools get Galaxy's lib dir in their path
- if self.command_line and self.command_line.startswith( 'python' ):
- self.galaxy_lib_dir = os.path.abspath( "lib" ) # cwd = galaxy root
+ self.galaxy_lib_dir = os.path.abspath( "lib" ) # cwd = galaxy root
# Shell fragment to inject dependencies
self.dependency_shell_commands = self.tool.build_dependency_shell_commands()
# We need command_line persisted to the db in order for Galaxy to re-queue the job
diff -r db72e1534441c03d2d60616bacfee5bf4ebc66cc -r 9f40ff3e3fe1f7fed7e055d4269fe83199cf5ed2 lib/galaxy/jobs/actions/post.py
--- a/lib/galaxy/jobs/actions/post.py
+++ b/lib/galaxy/jobs/actions/post.py
@@ -156,7 +156,7 @@
# "replace" option so you can replace a portion of the name,
# support multiple #{name} in one rename action...
- if new_name.find("#{") > -1:
+ while new_name.find("#{") > -1:
to_be_replaced = ""
# This assumes a single instance of #{variable} will exist
start_pos = new_name.find("#{") + 2
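A small self-contained sketch (with a hypothetical function name) of what switching the `if` to a `while` buys: every `#{name}` token in a rename pattern is now expanded, not just the first one.

    def expand_rename(new_name, values):
        # Loop until no "#{...}" tokens remain, mirroring the while loop above.
        while new_name.find("#{") > -1:
            start_pos = new_name.find("#{") + 2
            end_pos = new_name.find("}", start_pos)
            to_be_replaced = new_name[start_pos:end_pos]
            new_name = new_name[:start_pos - 2] + values.get(to_be_replaced, '') + new_name[end_pos + 1:]
        return new_name

    # Both tokens are replaced; the old "if" version would only have handled the first.
    print(expand_rename("#{input1} filtered against #{input2}", {"input1": "tumor", "input2": "normal"}))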
diff -r db72e1534441c03d2d60616bacfee5bf4ebc66cc -r 9f40ff3e3fe1f7fed7e055d4269fe83199cf5ed2 lib/galaxy/jobs/command_factory.py
--- a/lib/galaxy/jobs/command_factory.py
+++ b/lib/galaxy/jobs/command_factory.py
@@ -145,7 +145,7 @@
metadata_command = metadata_command.strip()
if metadata_command:
commands_builder.capture_return_code()
- commands_builder.append_command("cd %s; %s" % (exec_dir, metadata_command))
+ commands_builder.append_command(metadata_command)
def __copy_if_exists_command(work_dir_output):
diff -r db72e1534441c03d2d60616bacfee5bf4ebc66cc -r 9f40ff3e3fe1f7fed7e055d4269fe83199cf5ed2 lib/galaxy/jobs/handler.py
--- a/lib/galaxy/jobs/handler.py
+++ b/lib/galaxy/jobs/handler.py
@@ -680,6 +680,9 @@
except Empty:
pass
for job, error_msg in jobs_to_check:
+ if job.state != job.states.DELETED_NEW and job.finished:
+ log.debug('Job %s already finished, not deleting or stopping', job.id)
+ continue
final_state = job.states.DELETED
if error_msg is not None:
final_state = job.states.ERROR
diff -r db72e1534441c03d2d60616bacfee5bf4ebc66cc -r 9f40ff3e3fe1f7fed7e055d4269fe83199cf5ed2 lib/galaxy/managers/hdas.py
--- a/lib/galaxy/managers/hdas.py
+++ b/lib/galaxy/managers/hdas.py
@@ -156,11 +156,12 @@
self.dataset_manager.error_unless_dataset_purge_allowed( trans, hda )
super( HDAManager, self ).purge( trans, hda, flush=flush )
- # signal to stop the creating job?
if hda.creating_job_associations:
job = hda.creating_job_associations[0].job
- job.mark_deleted( self.app.config.track_jobs_in_database )
- self.app.job_manager.job_stop_queue.put( job.id )
+ if not job.finished:
+ # signal to stop the creating job
+ job.mark_deleted( self.app.config.track_jobs_in_database )
+ self.app.job_manager.job_stop_queue.put( job.id )
# more importantly, purge dataset as well
if hda.dataset.user_can_purge:
diff -r db72e1534441c03d2d60616bacfee5bf4ebc66cc -r 9f40ff3e3fe1f7fed7e055d4269fe83199cf5ed2 lib/galaxy/metadata
--- /dev/null
+++ b/lib/galaxy/metadata
@@ -0,0 +1,1 @@
+../galaxy_ext/metadata
\ No newline at end of file
diff -r db72e1534441c03d2d60616bacfee5bf4ebc66cc -r 9f40ff3e3fe1f7fed7e055d4269fe83199cf5ed2 lib/galaxy/model/__init__.py
--- a/lib/galaxy/model/__init__.py
+++ b/lib/galaxy/model/__init__.py
@@ -517,6 +517,9 @@
"""
Mark this job as deleted, and mark any output datasets as discarded.
"""
+ if self.finished:
+ # Do not modify the state/outputs of jobs that are already terminal
+ return
if track_jobs_in_database:
self.state = Job.states.DELETED_NEW
else:
diff -r db72e1534441c03d2d60616bacfee5bf4ebc66cc -r 9f40ff3e3fe1f7fed7e055d4269fe83199cf5ed2 lib/galaxy/objectstore/__init__.py
--- a/lib/galaxy/objectstore/__init__.py
+++ b/lib/galaxy/objectstore/__init__.py
@@ -392,39 +392,39 @@
super(NestedObjectStore, self).shutdown()
def exists(self, obj, **kwargs):
- return self.__call_method('exists', obj, False, False, **kwargs)
+ return self._call_method('exists', obj, False, False, **kwargs)
def file_ready(self, obj, **kwargs):
- return self.__call_method('file_ready', obj, False, False, **kwargs)
+ return self._call_method('file_ready', obj, False, False, **kwargs)
def create(self, obj, **kwargs):
random.choice(self.backends.values()).create(obj, **kwargs)
def empty(self, obj, **kwargs):
- return self.__call_method('empty', obj, True, False, **kwargs)
+ return self._call_method('empty', obj, True, False, **kwargs)
def size(self, obj, **kwargs):
- return self.__call_method('size', obj, 0, False, **kwargs)
+ return self._call_method('size', obj, 0, False, **kwargs)
def delete(self, obj, **kwargs):
- return self.__call_method('delete', obj, False, False, **kwargs)
+ return self._call_method('delete', obj, False, False, **kwargs)
def get_data(self, obj, **kwargs):
- return self.__call_method('get_data', obj, ObjectNotFound, True, **kwargs)
+ return self._call_method('get_data', obj, ObjectNotFound, True, **kwargs)
def get_filename(self, obj, **kwargs):
- return self.__call_method('get_filename', obj, ObjectNotFound, True, **kwargs)
+ return self._call_method('get_filename', obj, ObjectNotFound, True, **kwargs)
def update_from_file(self, obj, **kwargs):
if kwargs.get('create', False):
self.create(obj, **kwargs)
kwargs['create'] = False
- return self.__call_method('update_from_file', obj, ObjectNotFound, True, **kwargs)
+ return self._call_method('update_from_file', obj, ObjectNotFound, True, **kwargs)
def get_object_url(self, obj, **kwargs):
- return self.__call_method('get_object_url', obj, None, False, **kwargs)
+ return self._call_method('get_object_url', obj, None, False, **kwargs)
- def __call_method(self, method, obj, default, default_is_exception, **kwargs):
+ def _call_method(self, method, obj, default, default_is_exception, **kwargs):
"""
Check all children object stores for the first one with the dataset
"""
@@ -432,7 +432,7 @@
if store.exists(obj, **kwargs):
return store.__getattribute__(method)(obj, **kwargs)
if default_is_exception:
- raise default( 'objectstore, __call_method failed: %s on %s, kwargs: %s'
+ raise default( 'objectstore, _call_method failed: %s on %s, kwargs: %s'
% ( method, str( obj ), str( kwargs ) ) )
else:
return default
@@ -535,12 +535,12 @@
log.debug("Using preferred backend '%s' for creation of %s %s" % (obj.object_store_id, obj.__class__.__name__, obj.id))
self.backends[obj.object_store_id].create(obj, **kwargs)
- def __call_method(self, method, obj, default, default_is_exception, **kwargs):
+ def _call_method(self, method, obj, default, default_is_exception, **kwargs):
object_store_id = self.__get_store_id_for(obj, **kwargs)
if object_store_id is not None:
return self.backends[object_store_id].__getattribute__(method)(obj, **kwargs)
if default_is_exception:
- raise default( 'objectstore, __call_method failed: %s on %s, kwargs: %s'
+ raise default( 'objectstore, _call_method failed: %s on %s, kwargs: %s'
% ( method, str( obj ), str( kwargs ) ) )
else:
return default
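The rename from __call_method to _call_method presumably matters because double-underscore names are class-private in Python (name mangling), so a subclass override is never reached through methods inherited from the parent class; a rough illustration with made-up class names:

    class Nested(object):
        def exists(self, obj):
            # Inside this class body, self.__dispatch is mangled to self._Nested__dispatch,
            # so a subclass defining its own __dispatch is never consulted here.
            return self.__dispatch(obj)

        def __dispatch(self, obj):
            return 'nested'

    class Distributed(Nested):
        def __dispatch(self, obj):  # mangled to _Distributed__dispatch; ignored by Nested.exists
            return 'distributed'

    print(Distributed().exists(None))  # prints 'nested' -- the override is bypassed

With a single leading underscore the override is resolved normally, which is what the replacement _call_method relies on.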
diff -r db72e1534441c03d2d60616bacfee5bf4ebc66cc -r 9f40ff3e3fe1f7fed7e055d4269fe83199cf5ed2 lib/galaxy/tools/__init__.py
--- a/lib/galaxy/tools/__init__.py
+++ b/lib/galaxy/tools/__init__.py
@@ -2587,11 +2587,9 @@
hda_source_dict = {} # Mapping from HDA in history to source HDAs.
for hda in history.datasets:
source_hda = hda.copied_from_history_dataset_association
- while source_hda:#should this check library datasets as well?
- #FIXME: could be multiple copies of a hda in a single history, this does a better job of matching on cloned histories,
- #but is still less than perfect when eg individual datasets are copied between histories
- if source_hda not in hda_source_dict or source_hda.hid == hda.hid:
- hda_source_dict[ source_hda ] = hda
+ while source_hda:
+ if source_hda.dataset.id not in hda_source_dict or source_hda.hid == hda.hid:
+ hda_source_dict[ source_hda.dataset.id ] = hda
source_hda = source_hda.copied_from_history_dataset_association
# Ditto for dataset collections.
@@ -2599,8 +2597,8 @@
for hdca in history.dataset_collections:
source_hdca = hdca.copied_from_history_dataset_collection_association
while source_hdca:
- if source_hdca not in hdca_source_dict or source_hdca.hid == hdca.hid:
- hdca_source_dict[ source_hdca ] = hdca
+ if source_hdca.collection.id not in hdca_source_dict or source_hdca.hid == hdca.hid:
+ hdca_source_dict[ source_hdca.collection.id ] = hdca
source_hdca = source_hdca.copied_from_history_dataset_collection_association
# Unpack unvalidated values to strings, they'll be validated when the
@@ -2619,17 +2617,21 @@
if isinstance(value,list):
values = []
for val in value:
- if is_hashable( val ):
- if val in history.datasets:
+ if isinstance(val, trans.app.model.HistoryDatasetAssociation):
+ if val.dataset.id in hda_source_dict:
+ values.append( hda_source_dict[ val.dataset.id ] )
+ else:
values.append( val )
- elif val in hda_source_dict:
- values.append( hda_source_dict[ val ])
return values
- if is_hashable( value ) and value not in history.datasets and value in hda_source_dict:
- return hda_source_dict[ value ]
+ if isinstance(value, trans.app.model.HistoryDatasetAssociation):
+ if value.dataset.id in hda_source_dict:
+ return hda_source_dict[ value.dataset.id ]
+ if isinstance(value, trans.app.model.HistoryDatasetCollectionAssociation):
+ if value.collection.id in hdca_source_dict:
+ return hdca_source_dict[ value.collection.id ]
elif isinstance( input, DataCollectionToolParameter ):
- if is_hashable( value ) and value not in history.dataset_collections and value in hdca_source_dict:
- return hdca_source_dict[ value ]
+ if value.collection.id in hdca_source_dict:
+ return hdca_source_dict[ value.collection.id ]
visit_input_values( tool_inputs, params, rerun_callback )
def _compare_tool_version( self, trans, job ):
diff -r db72e1534441c03d2d60616bacfee5bf4ebc66cc -r 9f40ff3e3fe1f7fed7e055d4269fe83199cf5ed2 lib/galaxy/tools/actions/metadata.py
--- a/lib/galaxy/tools/actions/metadata.py
+++ b/lib/galaxy/tools/actions/metadata.py
@@ -1,11 +1,15 @@
+import logging
+
from __init__ import ToolAction
from galaxy.datatypes.metadata import JobExternalOutputMetadataWrapper
from galaxy.util.odict import odict
from galaxy.util.json import dumps
+from galaxy.jobs.datasets import DatasetPath
-import logging
+
log = logging.getLogger( __name__ )
+
class SetMetadataToolAction( ToolAction ):
"""Tool action used for setting external metadata on an existing dataset"""
@@ -65,17 +69,21 @@
#add parameters to job_parameter table
# Store original dataset state, so we can restore it. A separate table might be better (no chance of 'losing' the original state)?
incoming[ '__ORIGINAL_DATASET_STATE__' ] = dataset.state
+ input_paths = [DatasetPath( dataset.id, real_path=dataset.file_name, mutable=False )]
+ app.object_store.create(job, base_dir='job_work', dir_only=True, extra_dir=str(job.id))
+ job_working_dir = app.object_store.get_filename(job, base_dir='job_work', dir_only=True, extra_dir=str(job.id))
external_metadata_wrapper = JobExternalOutputMetadataWrapper( job )
cmd_line = external_metadata_wrapper.setup_external_metadata( dataset,
sa_session,
exec_dir = None,
- tmp_dir = app.config.new_file_path,
+ tmp_dir = job_working_dir,
dataset_files_path = app.model.Dataset.file_path,
- output_fnames = None,
+ output_fnames = input_paths,
config_root = app.config.root,
config_file = app.config.config_file,
datatypes_config = app.datatypes_registry.integrated_datatypes_configs,
job_metadata = None,
+ include_command = False,
kwds = { 'overwrite' : overwrite } )
incoming[ '__SET_EXTERNAL_METADATA_COMMAND_LINE__' ] = cmd_line
for name, value in tool.params_to_strings( incoming, app ).iteritems():
diff -r db72e1534441c03d2d60616bacfee5bf4ebc66cc -r 9f40ff3e3fe1f7fed7e055d4269fe83199cf5ed2 lib/galaxy/tools/imp_exp/__init__.py
--- a/lib/galaxy/tools/imp_exp/__init__.py
+++ b/lib/galaxy/tools/imp_exp/__init__.py
@@ -17,12 +17,15 @@
<tool id="__EXPORT_HISTORY__" name="Export History" version="0.1" tool_type="export_history">
<type class="ExportHistoryTool" module="galaxy.tools"/>
<action module="galaxy.tools.actions.history_imp_exp" class="ExportHistoryToolAction"/>
- <command>$__EXPORT_HISTORY_COMMAND_INPUTS_OPTIONS__ $output_file</command>
+ <command>python $export_history $__EXPORT_HISTORY_COMMAND_INPUTS_OPTIONS__ $output_file</command>
<inputs>
<param name="__HISTORY_TO_EXPORT__" type="hidden"/>
<param name="compress" type="boolean"/>
<param name="__EXPORT_HISTORY_COMMAND_INPUTS_OPTIONS__" type="hidden"/>
</inputs>
+ <configfiles>
+ <configfile name="export_history">from galaxy.tools.imp_exp.export_history import main; main()</configfile>
</configfiles>
<outputs>
<data format="gzip" name="output_file"/>
</outputs>
@@ -530,11 +533,9 @@
options = ""
if jeha.compressed:
options = "-G"
- return "python %s %s %s %s %s" % ( os.path.join( os.path.abspath( os.getcwd() ),
- "lib/galaxy/tools/imp_exp/export_history.py" ),
- options, history_attrs_filename,
- datasets_attrs_filename,
- jobs_attrs_filename )
+ return "%s %s %s %s" % ( options, history_attrs_filename,
+ datasets_attrs_filename,
+ jobs_attrs_filename )
def cleanup_after_job( self, db_session ):
""" Remove temporary directory and attribute files generated during setup for this job. """
diff -r db72e1534441c03d2d60616bacfee5bf4ebc66cc -r 9f40ff3e3fe1f7fed7e055d4269fe83199cf5ed2 lib/galaxy/tools/imp_exp/export_history.py
--- a/lib/galaxy/tools/imp_exp/export_history.py
+++ b/lib/galaxy/tools/imp_exp/export_history.py
@@ -70,7 +70,7 @@
except Exception, e:
return 'Error creating history archive: %s' % str( e ), sys.stderr
-if __name__ == "__main__":
+def main():
# Parse command line.
parser = optparse.OptionParser()
parser.add_option( '-G', '--gzip', dest='gzip', action="store_true", help='Compress archive using gzip.' )
@@ -81,3 +81,6 @@
# Create archive.
status = create_archive( history_attrs, dataset_attrs, job_attrs, out_file, gzip )
print status
+
+if __name__ == "__main__":
+ main()
diff -r db72e1534441c03d2d60616bacfee5bf4ebc66cc -r 9f40ff3e3fe1f7fed7e055d4269fe83199cf5ed2 lib/galaxy/version.py
--- a/lib/galaxy/version.py
+++ b/lib/galaxy/version.py
@@ -1,1 +1,3 @@
VERSION_MAJOR = "15.03"
+VERSION_MINOR = "1"
+VERSION = VERSION_MAJOR + ('.' + VERSION_MINOR if VERSION_MINOR else '')
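A quick standalone check of the version string the new lines produce (just the expression from the diff evaluated on its own), matching the v15.03.1 tag added in the next changeset:

    VERSION_MAJOR = "15.03"
    VERSION_MINOR = "1"
    VERSION = VERSION_MAJOR + ('.' + VERSION_MINOR if VERSION_MINOR else '')
    assert VERSION == "15.03.1"
    print(VERSION)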
diff -r db72e1534441c03d2d60616bacfee5bf4ebc66cc -r 9f40ff3e3fe1f7fed7e055d4269fe83199cf5ed2 lib/galaxy/webapps/galaxy/api/users.py
--- a/lib/galaxy/webapps/galaxy/api/users.py
+++ b/lib/galaxy/webapps/galaxy/api/users.py
@@ -105,7 +105,7 @@
POST /api/users
Creates a new Galaxy user.
"""
- if not trans.app.config.allow_user_creation:
+ if not trans.app.config.allow_user_creation and not trans.user_is_admin():
raise exceptions.ConfigDoesNotAllowException( 'User creation is not allowed in this Galaxy instance' )
if trans.app.config.use_remote_user and trans.user_is_admin():
user = trans.get_or_create_remote_user( remote_user_email=payload['remote_user_email'] )
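A tiny sketch (hypothetical helper name) of the adjusted guard above: user creation via the API stays closed when allow_user_creation is off, except for administrators.

    def creation_allowed(allow_user_creation, user_is_admin):
        # Mirrors the condition above: reject only when creation is disabled AND the caller is not an admin.
        return allow_user_creation or user_is_admin

    assert creation_allowed(False, True) is True    # admins may still create accounts via the API
    assert creation_allowed(False, False) is False  # everyone else is still rejected
    assert creation_allowed(True, False) is True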
diff -r db72e1534441c03d2d60616bacfee5bf4ebc66cc -r 9f40ff3e3fe1f7fed7e055d4269fe83199cf5ed2 lib/galaxy/webapps/galaxy/controllers/history.py
--- a/lib/galaxy/webapps/galaxy/controllers/history.py
+++ b/lib/galaxy/webapps/galaxy/controllers/history.py
@@ -359,6 +359,15 @@
history.purged = True
self.sa_session.add( history )
self.sa_session.flush()
+ for hda in history.datasets:
+ # Not all datasets have jobs associated with them (e.g., datasets imported from libraries).
+ if hda.creating_job_associations:
+ # HDA has associated job, so try marking it deleted.
+ job = hda.creating_job_associations[0].job
+ if job.history_id == history.id and not job.finished:
+ # No need to check other outputs since the job's parent history is this history
+ job.mark_deleted( trans.app.config.track_jobs_in_database )
+ trans.app.job_manager.job_stop_queue.put( job.id )
trans.sa_session.flush()
if n_deleted:
part = "Deleted %d %s" % ( n_deleted, iff( n_deleted != 1, "histories", "history" ) )
@@ -1125,7 +1134,7 @@
if hda.creating_job_associations:
# HDA has associated job, so try marking it deleted.
job = hda.creating_job_associations[0].job
- if job.history_id == history.id and job.state in [ trans.app.model.Job.states.QUEUED, trans.app.model.Job.states.RUNNING, trans.app.model.Job.states.NEW ]:
+ if job.history_id == history.id and not job.finished:
# No need to check other outputs since the job's parent history is this history
job.mark_deleted( trans.app.config.track_jobs_in_database )
trans.app.job_manager.job_stop_queue.put( job.id )
@@ -1224,33 +1233,28 @@
# Use current history.
history = trans.history
id = trans.security.encode_id( history.id )
-
if not history:
return trans.show_error_message( "This history does not exist or you cannot export this history." )
-
- #
# If history has already been exported and it has not changed since export, stream it.
- #
jeha = history.latest_export
if jeha and jeha.up_to_date:
if jeha.ready:
if preview:
url = url_for( controller='history', action="export_archive", id=id, qualified=True )
- return trans.show_message( "History Ready: '%(n)s'. Use this link to download \
- the archive or import it to another Galaxy server: \
- <a href='%(u)s'>%(u)s</a>" % ( { 'n' : history.name, 'u' : url } ) )
+ return trans.show_message( "History Ready: '%(n)s'. Use this link to download "
+ "the archive or import it to another Galaxy server: "
+ "<a href='%(u)s'>%(u)s</a>" % ( { 'n': history.name, 'u': url } ) )
else:
return self.serve_ready_history_export( trans, jeha )
elif jeha.preparing:
- return trans.show_message( "Still exporting history %(n)s; please check back soon. Link: <a href='%(s)s'>%(s)s</a>" \
- % ( { 'n' : history.name, 's' : url_for( controller='history', action="export_archive", id=id, qualified=True ) } ) )
-
+ return trans.show_message( "Still exporting history %(n)s; please check back soon. Link: <a href='%(s)s'>%(s)s</a>"
+ % ( { 'n': history.name, 's': url_for( controller='history', action="export_archive", id=id, qualified=True ) } ) )
self.queue_history_export( trans, history, gzip=gzip, include_hidden=include_hidden, include_deleted=include_deleted )
url = url_for( controller='history', action="export_archive", id=id, qualified=True )
- return trans.show_message( "Exporting History '%(n)s'. Use this link to download \
- the archive or import it to another Galaxy server: \
- <a href='%(u)s'>%(u)s</a>" % ( { 'n' : history.name, 'u' : url } ) )
- #TODO: used in this file and index.mako
+ return trans.show_message( "Exporting History '%(n)s'. You will need to <a href='%(share)s'>make this history 'accessible'</a> in order to import this to another galaxy sever. <br/>"
+ "Use this link to download the archive or import it to another Galaxy server: "
+ "<a href='%(u)s'>%(u)s</a>" % ( { 'share': url_for(controller='history', action='sharing'), 'n': history.name, 'u': url } ) )
+ # TODO: used in this file and index.mako
@web.expose
@web.json
diff -r db72e1534441c03d2d60616bacfee5bf4ebc66cc -r 9f40ff3e3fe1f7fed7e055d4269fe83199cf5ed2 lib/galaxy/webapps/galaxy/controllers/workflow.py
--- a/lib/galaxy/webapps/galaxy/controllers/workflow.py
+++ b/lib/galaxy/webapps/galaxy/controllers/workflow.py
@@ -1,6 +1,7 @@
import base64
import httplib
import json
+import logging
import os
import sgmllib
import urllib2
@@ -41,6 +42,8 @@
from galaxy.workflow.render import WorkflowCanvas, MARGIN, LINE_SPACING
from markupsafe import escape
+log = logging.getLogger( __name__ )
+
class StoredWorkflowListGrid( grids.Grid ):
@@ -895,8 +898,9 @@
data = json.loads( workflow_data )
except Exception, e:
data = None
- message = "The data content does not appear to be a Galaxy workflow.<br/>Exception: %s" % str( e )
+ message = "The data content does not appear to be a Galaxy workflow."
status = 'error'
+ log.exception("Error importing workflow.")
if data:
# Create workflow if possible. If a required tool is not available in the local
# Galaxy instance, the tool information will be available in the step_dict.
diff -r db72e1534441c03d2d60616bacfee5bf4ebc66cc -r 9f40ff3e3fe1f7fed7e055d4269fe83199cf5ed2 lib/galaxy_ext/__init__.py
--- /dev/null
+++ b/lib/galaxy_ext/__init__.py
@@ -0,0 +1,4 @@
+""" Package for things which are loaded from outside Galaxy and which we can be
+sure will not conflict with the `galaxy` namespace, which may be provided by
+other packages (e.g. galaxy_ops in the toolshed).
+"""
diff -r db72e1534441c03d2d60616bacfee5bf4ebc66cc -r 9f40ff3e3fe1f7fed7e055d4269fe83199cf5ed2 lib/galaxy_ext/metadata/__init__.py
--- /dev/null
+++ b/lib/galaxy_ext/metadata/__init__.py
@@ -0,0 +1,2 @@
+""" Work with Galaxy metadata
+"""
diff -r db72e1534441c03d2d60616bacfee5bf4ebc66cc -r 9f40ff3e3fe1f7fed7e055d4269fe83199cf5ed2 lib/galaxy_ext/metadata/set_metadata.py
--- /dev/null
+++ b/lib/galaxy_ext/metadata/set_metadata.py
@@ -0,0 +1,125 @@
+"""
+Execute an external process to set_meta() on a provided list of pickled datasets.
+
+This was formerly scripts/set_metadata.py and expects these arguments:
+
+ %prog datatypes_conf.xml job_metadata_file metadata_in,metadata_kwds,metadata_out,metadata_results_code,output_filename_override,metadata_override...
+
+Galaxy should be importable on sys.path and output_filename_override should be
+set to the path of the dataset on which metadata is being set
+(output_filename_override could previously be left empty and the path would be
+constructed automatically).
+"""
+
+import logging
+logging.basicConfig()
+log = logging.getLogger( __name__ )
+
+import cPickle
+import json
+import os
+import sys
+
+# ensure supported version
+assert sys.version_info[:2] >= ( 2, 6 ) and sys.version_info[:2] <= ( 2, 7 ), 'Python version must be 2.6 or 2.7, this is: %s' % sys.version
+
+# insert *this* galaxy before all others on sys.path
+new_path = os.path.abspath( os.path.join( os.path.dirname( __file__ ), os.pardir, os.pardir ) )
+sys.path.insert( 0, new_path )
+
+from galaxy import eggs
+import pkg_resources
+import galaxy.model.mapping # need to load this before we unpickle, in order to setup properties assigned by the mappers
+galaxy.model.Job() # this looks REAL stupid, but it is REQUIRED in order for SA to insert parameters into the classes defined by the mappers --> it appears that instantiating ANY mapper'ed class would suffice here
+from galaxy.util import stringify_dictionary_keys
+from sqlalchemy.orm import clear_mappers
+
+
+def set_meta_with_tool_provided( dataset_instance, file_dict, set_meta_kwds ):
+ # This method is somewhat odd, in that we set the metadata attributes from tool,
+ # then call set_meta, then set metadata attributes from tool again.
+ # This is intentional due to interplay of overwrite kwd, the fact that some metadata
+ # parameters may rely on the values of others, and that we are accepting the
+ # values provided by the tool as Truth.
+ for metadata_name, metadata_value in file_dict.get( 'metadata', {} ).iteritems():
+ setattr( dataset_instance.metadata, metadata_name, metadata_value )
+ dataset_instance.datatype.set_meta( dataset_instance, **set_meta_kwds )
+ for metadata_name, metadata_value in file_dict.get( 'metadata', {} ).iteritems():
+ setattr( dataset_instance.metadata, metadata_name, metadata_value )
+
+def set_metadata():
+ # locate galaxy_root for loading datatypes
+ galaxy_root = os.path.abspath(os.path.join(os.path.dirname(__file__), os.pardir, os.pardir, os.pardir))
+ galaxy.datatypes.metadata.MetadataTempFile.tmp_dir = tool_job_working_directory = os.path.abspath(os.getcwd())
+
+ # Set up datatypes registry
+ datatypes_config = sys.argv.pop( 1 )
+ datatypes_registry = galaxy.datatypes.registry.Registry()
+ datatypes_registry.load_datatypes( root_dir=galaxy_root, config=datatypes_config )
+ galaxy.model.set_datatypes_registry( datatypes_registry )
+
+ job_metadata = sys.argv.pop( 1 )
+ existing_job_metadata_dict = {}
+ new_job_metadata_dict = {}
+ if job_metadata != "None" and os.path.exists( job_metadata ):
+ for line in open( job_metadata, 'r' ):
+ try:
+ line = stringify_dictionary_keys( json.loads( line ) )
+ if line['type'] == 'dataset':
+ existing_job_metadata_dict[ line['dataset_id'] ] = line
+ elif line['type'] == 'new_primary_dataset':
+ new_job_metadata_dict[ line[ 'filename' ] ] = line
+ except:
+ continue
+
+ for filenames in sys.argv[1:]:
+ fields = filenames.split( ',' )
+ filename_in = fields.pop( 0 )
+ filename_kwds = fields.pop( 0 )
+ filename_out = fields.pop( 0 )
+ filename_results_code = fields.pop( 0 )
+ dataset_filename_override = fields.pop( 0 )
+ # Need to be careful with the way that these parameters are populated from the filename splitting,
+ # because if a job is running when the server is updated, any existing external metadata command-lines
+ #will not have info about the newly added override_metadata file
+ if fields:
+ override_metadata = fields.pop( 0 )
+ else:
+ override_metadata = None
+ set_meta_kwds = stringify_dictionary_keys( json.load( open( filename_kwds ) ) ) # load kwds; need to ensure our keywords are not unicode
+ try:
+ dataset = cPickle.load( open( filename_in ) ) # load DatasetInstance
+ dataset.dataset.external_filename = dataset_filename_override
+ files_path = os.path.abspath(os.path.join( tool_job_working_directory, "dataset_%s_files" % (dataset.dataset.id) ))
+ dataset.dataset.external_extra_files_path = files_path
+ if dataset.dataset.id in existing_job_metadata_dict:
+ dataset.extension = existing_job_metadata_dict[ dataset.dataset.id ].get( 'ext', dataset.extension )
+ # Metadata FileParameter types may not be writable on a cluster node, and are therefore temporarily substituted with MetadataTempFiles
+ if override_metadata:
+ override_metadata = json.load( open( override_metadata ) )
+ for metadata_name, metadata_file_override in override_metadata:
+ if galaxy.datatypes.metadata.MetadataTempFile.is_JSONified_value( metadata_file_override ):
+ metadata_file_override = galaxy.datatypes.metadata.MetadataTempFile.from_JSON( metadata_file_override )
+ setattr( dataset.metadata, metadata_name, metadata_file_override )
+ file_dict = existing_job_metadata_dict.get( dataset.dataset.id, {} )
+ set_meta_with_tool_provided( dataset, file_dict, set_meta_kwds )
+ dataset.metadata.to_JSON_dict( filename_out ) # write out results of set_meta
+ json.dump( ( True, 'Metadata has been set successfully' ), open( filename_results_code, 'wb+' ) ) # setting metadata has succeeded
+ except Exception, e:
+ json.dump( ( False, str( e ) ), open( filename_results_code, 'wb+' ) ) # setting metadata has failed somehow
+
+ for i, ( filename, file_dict ) in enumerate( new_job_metadata_dict.iteritems(), start=1 ):
+ new_dataset = galaxy.model.Dataset( id=-i, external_filename=os.path.join( tool_job_working_directory, file_dict[ 'filename' ] ) )
+ extra_files = file_dict.get( 'extra_files', None )
+ if extra_files is not None:
+ new_dataset._extra_files_path = os.path.join( tool_job_working_directory, extra_files )
+ new_dataset.state = new_dataset.states.OK
+ new_dataset_instance = galaxy.model.HistoryDatasetAssociation( id=-i, dataset=new_dataset, extension=file_dict.get( 'ext', 'data' ) )
+ set_meta_with_tool_provided( new_dataset_instance, file_dict, set_meta_kwds )
+ file_dict[ 'metadata' ] = json.loads( new_dataset_instance.metadata.to_JSON_dict() ) #storing metadata in external form, need to turn back into dict, then later jsonify
+ if existing_job_metadata_dict or new_job_metadata_dict:
+ with open( job_metadata, 'wb' ) as job_metadata_fh:
+ for value in existing_job_metadata_dict.values() + new_job_metadata_dict.values():
+ job_metadata_fh.write( "%s\n" % ( json.dumps( value ) ) )
+
+ clear_mappers()
diff -r db72e1534441c03d2d60616bacfee5bf4ebc66cc -r 9f40ff3e3fe1f7fed7e055d4269fe83199cf5ed2 static/scripts/mvc/history/history-model.js
--- a/static/scripts/mvc/history/history-model.js
+++ b/static/scripts/mvc/history/history-model.js
@@ -208,7 +208,7 @@
/** purge this history, _Mark_ing it as purged and removing all dataset data from the server */
purge : function( options ){
if( this.get( 'purged' ) ){ return jQuery.when(); }
- return this.save( { purged: true }, options );
+ return this.save( { deleted: true, purged: true }, options );
},
/** save this history, _Mark_ing it as undeleted */
undelete : function( options ){
diff -r db72e1534441c03d2d60616bacfee5bf4ebc66cc -r 9f40ff3e3fe1f7fed7e055d4269fe83199cf5ed2 static/scripts/packed/mvc/history/history-model.js
--- a/static/scripts/packed/mvc/history/history-model.js
+++ b/static/scripts/packed/mvc/history/history-model.js
@@ -1,1 +1,1 @@
-define(["mvc/history/history-contents","mvc/base-mvc","utils/localization"],function(h,i,d){var e=Backbone.Model.extend(i.LoggableMixin).extend(i.mixin(i.SearchableModelMixin,{defaults:{model_class:"History",id:null,name:"Unnamed History",state:"new",diskSize:0,deleted:false},urlRoot:galaxy_config.root+"api/histories",initialize:function(k,l,j){j=j||{};this.logger=j.logger||null;this.log(this+".initialize:",k,l,j);this.log("creating history contents:",l);this.contents=new h.HistoryContents(l||[],{historyId:this.get("id")});this._setUpListeners();this.updateTimeoutId=null},_setUpListeners:function(){this.on("error",function(k,n,j,m,l){this.errorHandler(k,n,j,m,l)});if(this.contents){this.listenTo(this.contents,"error",function(){this.trigger.apply(this,["error:contents"].concat(jQuery.makeArray(arguments)))})}this.on("change:id",function(k,j){if(this.contents){this.contents.historyId=j}},this)},errorHandler:function(k,n,j,m,l){this.clearUpdateTimeout()},ownedByCurrUser:function(){if(!Galaxy||!Galaxy.currUser){return false}if(Galaxy.currUser.isAnonymous()||Galaxy.currUser.id!==this.get("user_id")){return false}return true},contentsCount:function(){return _.reduce(_.values(this.get("state_details")),function(j,k){return j+k},0)},searchAttributes:["name","annotation","tags"],searchAliases:{title:"name",tag:"tags"},checkForUpdates:function(j){if(this.contents.running().length){this.setUpdateTimeout()}else{this.trigger("ready");if(_.isFunction(j)){j.call(this)}}return this},setUpdateTimeout:function(j){j=j||e.UPDATE_DELAY;var k=this;this.clearUpdateTimeout();this.updateTimeoutId=setTimeout(function(){k.refresh()},j);return this.updateTimeoutId},clearUpdateTimeout:function(){if(this.updateTimeoutId){clearTimeout(this.updateTimeoutId);this.updateTimeoutId=null}},refresh:function(k,j){k=k||[];j=j||{};var l=this;j.data=j.data||{};if(k.length){j.data.details=k.join(",")}var m=this.contents.fetch(j);m.done(function(n){l.checkForUpdates(function(){this.fetch()})});return m},_delete:function(j){if(this.get("deleted")){return jQuery.when()}return this.save({deleted:true},j)},purge:function(j){if(this.get("purged")){return jQuery.when()}return this.save({purged:true},j)},undelete:function(j){if(!this.get("deleted")){return jQuery.when()}return this.save({deleted:false},j)},copy:function(m,k){m=(m!==undefined)?(m):(true);if(!this.id){throw new Error("You must set the history ID before copying it.")}var j={history_id:this.id};if(m){j.current=true}if(k){j.name=k}var l=this,n=jQuery.post(this.urlRoot,j);if(m){return n.then(function(o){var p=new e(o);return p.setAsCurrent().done(function(){l.trigger("copied",l,o)})})}return n.done(function(o){l.trigger("copied",l,o)})},setAsCurrent:function(){var j=this,k=jQuery.getJSON("/history/set_as_current?id="+this.id);k.done(function(){j.trigger("set-as-current",j)});return k},toString:function(){return"History("+this.get("id")+","+this.get("name")+")"}}));e.UPDATE_DELAY=4000;e.getHistoryData=function c(j,v){v=v||{};var r=v.detailIdsFn||[];var m=v.hdcaDetailIds||[];var s=jQuery.Deferred(),p=null;function k(w){if(j==="current"){return jQuery.getJSON(galaxy_config.root+"history/current_history_json")}return jQuery.ajax(galaxy_config.root+"api/histories/"+j)}function o(w){return w&&w.empty}function q(x){if(o(x)){return[]}if(_.isFunction(r)){r=r(x)}if(_.isFunction(m)){m=m(x)}var w={};if(r.length){w.dataset_details=r.join(",")}if(m.length){w.dataset_collection_details=m.join(",")}return jQuery.ajax(galaxy_config.root+"api/histories/"+x.id+"/contents",{data:w})}var 
t=v.historyFn||k,u=v.contentsFn||q;var n=t(j);n.done(function(w){p=w;s.notify({status:"history data retrieved",historyJSON:p})});n.fail(function(y,w,x){s.reject(y,"loading the history")});var l=n.then(u);l.then(function(w){s.notify({status:"contents data retrieved",historyJSON:p,contentsJSON:w});s.resolve(p,w)});l.fail(function(y,w,x){s.reject(y,"loading the contents",{history:p})});return s};var f=Backbone.Collection.extend(i.LoggableMixin).extend({model:e,urlRoot:(window.galaxy_config?galaxy_config.root:"/")+"api/histories",initialize:function(k,j){j=j||{};this.log("HistoryCollection.initialize",arguments);this.includeDeleted=j.includeDeleted||false;this.setUpListeners()},setUpListeners:function a(){var j=this;this.on("change:deleted",function(k){this.debug("change:deleted",j.includeDeleted,k.get("deleted"));if(!j.includeDeleted&&k.get("deleted")){j.remove(k)}});this.on("copied",function(k,l){this.unshift(new e(l,[]))})},create:function g(m,k,j,l){var o=this,n=jQuery.getJSON(galaxy_config.root+"history/create_new_current");return n.done(function(p){var q=new e(p,[],j||{});o.unshift(q);o.trigger("new-current")})},toString:function b(){return"HistoryCollection("+this.length+")"}});return{History:e,HistoryCollection:f}});
\ No newline at end of file
+define(["mvc/history/history-contents","mvc/base-mvc","utils/localization"],function(h,i,d){var e=Backbone.Model.extend(i.LoggableMixin).extend(i.mixin(i.SearchableModelMixin,{defaults:{model_class:"History",id:null,name:"Unnamed History",state:"new",diskSize:0,deleted:false},urlRoot:galaxy_config.root+"api/histories",initialize:function(k,l,j){j=j||{};this.logger=j.logger||null;this.log(this+".initialize:",k,l,j);this.log("creating history contents:",l);this.contents=new h.HistoryContents(l||[],{historyId:this.get("id")});this._setUpListeners();this.updateTimeoutId=null},_setUpListeners:function(){this.on("error",function(k,n,j,m,l){this.errorHandler(k,n,j,m,l)});if(this.contents){this.listenTo(this.contents,"error",function(){this.trigger.apply(this,["error:contents"].concat(jQuery.makeArray(arguments)))})}this.on("change:id",function(k,j){if(this.contents){this.contents.historyId=j}},this)},errorHandler:function(k,n,j,m,l){this.clearUpdateTimeout()},ownedByCurrUser:function(){if(!Galaxy||!Galaxy.currUser){return false}if(Galaxy.currUser.isAnonymous()||Galaxy.currUser.id!==this.get("user_id")){return false}return true},contentsCount:function(){return _.reduce(_.values(this.get("state_details")),function(j,k){return j+k},0)},searchAttributes:["name","annotation","tags"],searchAliases:{title:"name",tag:"tags"},checkForUpdates:function(j){if(this.contents.running().length){this.setUpdateTimeout()}else{this.trigger("ready");if(_.isFunction(j)){j.call(this)}}return this},setUpdateTimeout:function(j){j=j||e.UPDATE_DELAY;var k=this;this.clearUpdateTimeout();this.updateTimeoutId=setTimeout(function(){k.refresh()},j);return this.updateTimeoutId},clearUpdateTimeout:function(){if(this.updateTimeoutId){clearTimeout(this.updateTimeoutId);this.updateTimeoutId=null}},refresh:function(k,j){k=k||[];j=j||{};var l=this;j.data=j.data||{};if(k.length){j.data.details=k.join(",")}var m=this.contents.fetch(j);m.done(function(n){l.checkForUpdates(function(){this.fetch()})});return m},_delete:function(j){if(this.get("deleted")){return jQuery.when()}return this.save({deleted:true},j)},purge:function(j){if(this.get("purged")){return jQuery.when()}return this.save({deleted:true,purged:true},j)},undelete:function(j){if(!this.get("deleted")){return jQuery.when()}return this.save({deleted:false},j)},copy:function(m,k){m=(m!==undefined)?(m):(true);if(!this.id){throw new Error("You must set the history ID before copying it.")}var j={history_id:this.id};if(m){j.current=true}if(k){j.name=k}var l=this,n=jQuery.post(this.urlRoot,j);if(m){return n.then(function(o){var p=new e(o);return p.setAsCurrent().done(function(){l.trigger("copied",l,o)})})}return n.done(function(o){l.trigger("copied",l,o)})},setAsCurrent:function(){var j=this,k=jQuery.getJSON("/history/set_as_current?id="+this.id);k.done(function(){j.trigger("set-as-current",j)});return k},toString:function(){return"History("+this.get("id")+","+this.get("name")+")"}}));e.UPDATE_DELAY=4000;e.getHistoryData=function c(j,v){v=v||{};var r=v.detailIdsFn||[];var m=v.hdcaDetailIds||[];var s=jQuery.Deferred(),p=null;function k(w){if(j==="current"){return jQuery.getJSON(galaxy_config.root+"history/current_history_json")}return jQuery.ajax(galaxy_config.root+"api/histories/"+j)}function o(w){return w&&w.empty}function q(x){if(o(x)){return[]}if(_.isFunction(r)){r=r(x)}if(_.isFunction(m)){m=m(x)}var w={};if(r.length){w.dataset_details=r.join(",")}if(m.length){w.dataset_collection_details=m.join(",")}return 
jQuery.ajax(galaxy_config.root+"api/histories/"+x.id+"/contents",{data:w})}var t=v.historyFn||k,u=v.contentsFn||q;var n=t(j);n.done(function(w){p=w;s.notify({status:"history data retrieved",historyJSON:p})});n.fail(function(y,w,x){s.reject(y,"loading the history")});var l=n.then(u);l.then(function(w){s.notify({status:"contents data retrieved",historyJSON:p,contentsJSON:w});s.resolve(p,w)});l.fail(function(y,w,x){s.reject(y,"loading the contents",{history:p})});return s};var f=Backbone.Collection.extend(i.LoggableMixin).extend({model:e,urlRoot:(window.galaxy_config?galaxy_config.root:"/")+"api/histories",initialize:function(k,j){j=j||{};this.log("HistoryCollection.initialize",arguments);this.includeDeleted=j.includeDeleted||false;this.setUpListeners()},setUpListeners:function a(){var j=this;this.on("change:deleted",function(k){this.debug("change:deleted",j.includeDeleted,k.get("deleted"));if(!j.includeDeleted&&k.get("deleted")){j.remove(k)}});this.on("copied",function(k,l){this.unshift(new e(l,[]))})},create:function g(m,k,j,l){var o=this,n=jQuery.getJSON(galaxy_config.root+"history/create_new_current");return n.done(function(p){var q=new e(p,[],j||{});o.unshift(q);o.trigger("new-current")})},toString:function b(){return"HistoryCollection("+this.length+")"}});return{History:e,HistoryCollection:f}});
\ No newline at end of file
diff -r db72e1534441c03d2d60616bacfee5bf4ebc66cc -r 9f40ff3e3fe1f7fed7e055d4269fe83199cf5ed2 templates/webapps/galaxy/workflow/run.mako
--- a/templates/webapps/galaxy/workflow/run.mako
+++ b/templates/webapps/galaxy/workflow/run.mako
@@ -513,7 +513,7 @@
<div class="metadataFormTitle">Workflow Parameters</div>
<div class="metadataFormBody">
%for parm in wf_parms:
- <div class='form-row'><label style='width:100px;'>${parm}<input style="border:2px solid ${wf_parms[parm]};border-left-width:8px;" type="text" class='wf_parm_input ptag_${parm}' name="wf_parm|${parm}" value=""/></label></div>
+ <div class='form-row'><label>${parm}<br/><input size=40 style="border:2px solid ${wf_parms[parm]};border-left-width:8px;" type="text" class='wf_parm_input ptag_${parm}' name="wf_parm|${parm}" value=""/></label></div>
%endfor
</div>
</div>
diff -r db72e1534441c03d2d60616bacfee5bf4ebc66cc -r 9f40ff3e3fe1f7fed7e055d4269fe83199cf5ed2 test/unit/jobs/test_command_factory.py
--- a/test/unit/jobs/test_command_factory.py
+++ b/test/unit/jobs/test_command_factory.py
@@ -72,7 +72,7 @@
self.include_metadata = True
self.include_work_dir_outputs = False
self.job_wrapper.metadata_line = TEST_METADATA_LINE
- expected_command = '%s; return_code=$?; cd %s; %s; sh -c "exit $return_code"' % (MOCK_COMMAND_LINE, getcwd(), TEST_METADATA_LINE)
+ expected_command = '%s; return_code=$?; %s; sh -c "exit $return_code"' % (MOCK_COMMAND_LINE, TEST_METADATA_LINE)
self.__assert_command_is( expected_command )
def test_empty_metadata(self):
https://bitbucket.org/galaxy/galaxy-central/commits/128373c02e90/
Changeset: 128373c02e90
Branch: stable
User: natefoo
Date: 2015-03-17 02:35:28+00:00
Summary: Added tag v15.03.1 for changeset 9f40ff3e3fe1
Affected #: 1 file
diff -r 9f40ff3e3fe1f7fed7e055d4269fe83199cf5ed2 -r 128373c02e907d1256299d40e184fabb6dfb4bae .hgtags
--- a/.hgtags
+++ b/.hgtags
@@ -25,3 +25,4 @@
5cdf5dc395f16f47e88a5567775dcd21b4906f08 latest_2015.01.13
738f4ad0eb62d5e4bcf0d22a2722ee6c570382b9 v15.03
18e0d31267acda451743fcf1a09ac04fe8681d8f latest_15.03
+9f40ff3e3fe1f7fed7e055d4269fe83199cf5ed2 v15.03.1
https://bitbucket.org/galaxy/galaxy-central/commits/28113eeee212/
Changeset: 28113eeee212
Branch: stable
User: natefoo
Date: 2015-03-17 02:38:27+00:00
Summary: Update tag latest_15.03 for changeset 9f40ff3e3fe1
Affected #: 1 file
diff -r 128373c02e907d1256299d40e184fabb6dfb4bae -r 28113eeee212db2eef9cf4304abe9c3b2a92abcd .hgtags
--- a/.hgtags
+++ b/.hgtags
@@ -24,5 +24,5 @@
2e8dd2949dd3eee0f56f9a3a5ebf1b2baca24aee release_2015.01.13
5cdf5dc395f16f47e88a5567775dcd21b4906f08 latest_2015.01.13
738f4ad0eb62d5e4bcf0d22a2722ee6c570382b9 v15.03
-18e0d31267acda451743fcf1a09ac04fe8681d8f latest_15.03
9f40ff3e3fe1f7fed7e055d4269fe83199cf5ed2 v15.03.1
+9f40ff3e3fe1f7fed7e055d4269fe83199cf5ed2 latest_15.03
https://bitbucket.org/galaxy/galaxy-central/commits/708cab19708d/
Changeset: 708cab19708d
User: natefoo
Date: 2015-03-17 02:49:15+00:00
Summary: Merge v15.03 to default
Affected #: 200 files
diff -r da996e89dbfb8249deeeafd30489a62970f8af7d -r 708cab19708d8158044eb5b1048740734cb2bc2a .hgtags
--- a/.hgtags
+++ b/.hgtags
@@ -22,4 +22,4 @@
2092948937ac30ef82f71463a235c66d34987088 release_2014.10.06
782fa60fc65488aea0c618d723e9a63d42caf865 latest_2014.10.06
2e8dd2949dd3eee0f56f9a3a5ebf1b2baca24aee release_2015.01.13
-d677cb314dccedd8743eba26f1d446cdf97ebf16 latest_2015.01.13
+5cdf5dc395f16f47e88a5567775dcd21b4906f08 latest_2015.01.13
diff -r da996e89dbfb8249deeeafd30489a62970f8af7d -r 708cab19708d8158044eb5b1048740734cb2bc2a CONTRIBUTING.md
--- a/CONTRIBUTING.md
+++ b/CONTRIBUTING.md
@@ -25,24 +25,27 @@
## How to Contribute
-* If you are new to Git - please check out the very useful tutorials on the [GitHub training hub](https://training.github.com/kit/)
+* If you are new to Mercurial - please check out this [official
+ tutorial](http://mercurial.selenic.com/wiki/Tutorial)
-* Make sure you have a free [GitHub account](https://github.com/)
+* Make sure you have a free [Bitbucket
+ account](https://bitbucket.org/account/signup/)
-* Fork the galaxy repository on
- [GitHub](https://github.com/galaxyproject/galaxy) to make your changes.
- (While many Galaxy instances track
- [galaxy-dist](https://bitbucket.org/galaxy/galaxy-dist), active development
- happens in the galaxy GitHub repository and this is where pull requests
- should be made).
+* Fork the galaxy-central repository on
+ [Bitbucket](https://bitbucket.org/galaxy/galaxy-central/fork) to
+ make your changes. (Many Galaxy instances target
+ [galaxy-dist](https://bitbucket.org/galaxy/galaxy-dist) - but active
+ development happens on galaxy-central and this is where pull
+ requests should be made).
-* Choose the correct branch to develop your changes against.
+* Choose the correct Mercurial branch to develop your changes against.
- * Additions to the code base should be pushed to the `dev` branch (`git
- checkout dev`).
+ * Additions to the code base should be pushed to the `default`
+ branch (`hg checkout default`).
- * Most bug fixes to previously release components (things in galaxy-dist)
- should be made against the `master` branch (`git checkout master`).
+ * Most bug fixes to previously release components (things in
+ galaxy-dist) should be pushed to the `stable` branch (`hg checkout
+ stable`).
* Serious security problems should not be fixed via pull request -
please responsibly disclose these by e-mailing them (with or
@@ -73,9 +76,9 @@
with these changes. You pull request message should include:
* A description of why the change should be made.
-
+
* A description of implementation of the change.
-
+
* A description of how to test the change.
## Ideas
diff -r da996e89dbfb8249deeeafd30489a62970f8af7d -r 708cab19708d8158044eb5b1048740734cb2bc2a README.rst
--- a/README.rst
+++ /dev/null
@@ -1,80 +0,0 @@
-.. figure:: https://wiki.galaxyproject.org/Images/Logos?action=AttachFile&do=get&target…
- :alt: Galaxy Logo
-
-The latest information about Galaxy is always available via the Galaxy
-website: `http://galaxyproject.org/ <http://galaxyproject.org/>`__
-
-Galaxy Quickstart
-=================
-
-Galaxy requires Python 2.6 or 2.7. To check your python version, run:
-
-.. code:: console
-
- $ python -V
- Python 2.7.3
-
-Start Galaxy:
-
-.. code:: console
-
- $ sh run.sh
-
-Once Galaxy completes startup, you should be able to view Galaxy in your
-browser at:
-
-http://localhost:8080
-
-You may wish to make changes from the default configuration. This can be
-done in the ``config/galaxy.ini`` file. Tools can be either installed
-from the Tool Shed or added manually. For details please see the Galaxy
-wiki:
-
-https://wiki.galaxyproject.org/Admin/Tools/AddToolFromToolShedTutorial
-
-Not all dependencies are included for the tools provided in the sample
-``tool_conf.xml``. A full list of external dependencies is available at:
-
-https://wiki.galaxyproject.org/Admin/Tools/ToolDependencies
-
-Issues
-======
-
-Issues can be submitted to trello via the `galaxyproject
-website <http://galaxyproject.org/trello/>`__ and viewed on the `Galaxy
-Trello Board <https://trello.com/b/75c1kASa/galaxy-development>`__
-
-
-Galaxy Development
-==================
-
-Galaxy welcomes new development! There is extensive documentation on developing
-with Galaxy on the `wiki <https://wiki.galaxyproject.org/Develop>`__.
-
-Source Repository
------------------
-
-Galaxy development has moved to `Github
-<https://github.com/galaxyproject/galaxy>`__.
-
-**Developers** should develop against the ``dev`` branch in the Github
-repository.
-
-**Deployers** can continue to pull Galaxy changes from Bitbucket without
-interruption. Alternatively, deployers can track the ``master`` branch in
-Github.
-
-Syncing a Fork
---------------
-
-If you fork Galaxy to work on it, you may be interested in keeping your copy
-up to date with respect to the main repository. Github has `good documentation
-<https://help.github.com/articles/syncing-a-fork/>`__ on this.
-
-.. code:: console
-
- $ git remote add upstream https://github.com/galaxyproject/galaxy
- $ git fetch upstream
- $ git checkout dev
- $ git merge upstream/dev
-
diff -r da996e89dbfb8249deeeafd30489a62970f8af7d -r 708cab19708d8158044eb5b1048740734cb2bc2a README.txt
--- /dev/null
+++ b/README.txt
@@ -0,0 +1,34 @@
+GALAXY
+======
+http://galaxyproject.org/
+
+The latest information about Galaxy is always available via the Galaxy
+website above.
+
+HOW TO START
+============
+Galaxy requires Python 2.6 or 2.7. To check your python version, run:
+
+% python -V
+Python 2.7.3
+
+Start Galaxy:
+
+% sh run.sh
+
+Once Galaxy completes startup, you should be able to view Galaxy in your
+browser at:
+
+http://localhost:8080
+
+You may wish to make changes from the default configuration. This can be done
+in the config/galaxy.ini file. Tools can be either installed from the Tool Shed
+or added manually. For details please see the Galaxy wiki:
+
+https://wiki.galaxyproject.org/Admin/Tools/AddToolFromToolShedTutorial
+
+
+Not all dependencies are included for the tools provided in the sample
+tool_conf.xml. A full list of external dependencies is available at:
+
+https://wiki.galaxyproject.org/Admin/Tools/ToolDependencies
diff -r da996e89dbfb8249deeeafd30489a62970f8af7d -r 708cab19708d8158044eb5b1048740734cb2bc2a client/galaxy/scripts/mvc/base-mvc.js
--- a/client/galaxy/scripts/mvc/base-mvc.js
+++ b/client/galaxy/scripts/mvc/base-mvc.js
@@ -94,7 +94,7 @@
if( !options.silent ){
model.trigger( 'request', model, {}, options );
}
- var returned = {};
+ var returned;
switch( method ){
case 'create' : returned = this._create( model ); break;
case 'read' : returned = this._read( model ); break;
@@ -111,19 +111,9 @@
/** set storage to the stringified item */
_create : function( model ){
- try {
- var json = model.toJSON(),
- set = sessionStorage.setItem( model.id, JSON.stringify( json ) );
- return ( set === null )?( set ):( json );
- // DOMException is thrown in Safari if in private browsing mode and sessionStorage is attempted:
- // http://stackoverflow.com/questions/14555347
- // TODO: this could probably use a more general soln - like detecting priv. mode + safari => non-ajaxing Model
- } catch( err ){
- if( !( ( err instanceof DOMException ) && ( navigator.userAgent.indexOf("Safari") > -1 ) ) ){
- throw err;
- }
- }
- return null;
+ var json = model.toJSON(),
+ set = sessionStorage.setItem( model.id, JSON.stringify( json ) );
+ return ( set === null )?( set ):( json );
},
/** read and parse json from storage */
diff -r da996e89dbfb8249deeeafd30489a62970f8af7d -r 708cab19708d8158044eb5b1048740734cb2bc2a client/galaxy/scripts/mvc/tools.js
--- a/client/galaxy/scripts/mvc/tools.js
+++ b/client/galaxy/scripts/mvc/tools.js
@@ -386,14 +386,12 @@
$("#search-spinner").show();
var self = this;
this.timer = setTimeout(function () {
- // log the search to analytics
- ga( 'send', 'pageview', galaxy_config.root + '?q=' + q );
$.get(self.attributes.search_url, { query: q }, function (data) {
self.set("results", data);
$("#search-spinner").hide();
$("#search-clear-btn").show();
}, "json" );
- }, 400 );
+ }, 200 );
},
clear_search: function() {
diff -r da996e89dbfb8249deeeafd30489a62970f8af7d -r 708cab19708d8158044eb5b1048740734cb2bc2a client/galaxy/scripts/mvc/tools/tools-content.js
--- /dev/null
+++ b/client/galaxy/scripts/mvc/tools/tools-content.js
@@ -0,0 +1,55 @@
+define(['utils/utils'], function(Utils){
+ return Backbone.Model.extend({
+ // initialize
+ initialize: function(app) {
+ this.summary = {};
+ },
+
+ /** Add new content elements
+ */
+ add: function(content_list) {
+ // add/update content in summary list
+ for (var i in content_list) {
+ for (var j in content_list[i]) {
+ var c = content_list[i][j];
+ this.summary[c.id + '_' + c.src] = c;
+ }
+ }
+ },
+
+ /** Returns matched content from summary.
+ */
+ get: function(options) {
+ return _.findWhere(this.summary, options) || {};
+ },
+
+ /** Get details of a content by id.
+ */
+ getDetails: function(options) {
+ // check id
+ if (!options.id || options.id === 'null') {
+ options.success && options.success();
+ return;
+ }
+
+ // create url
+ var api_url = this.base_url + '/datasets/' + options.id;
+ if (options.src == 'hdca') {
+ api_url = this.base_url + '/dataset_collections/' + options.id;
+ }
+
+ // request details
+ Utils.get({
+ url : api_url,
+ success : function(response) {
+ options.success && options.success(response);
+ },
+ error : function(response) {
+ options.success && options.success();
+ console.debug('tools-content::getDetails() - Ajax request for content failed.');
+ console.debug(response);
+ }
+ });
+ }
+ });
+});
\ No newline at end of file
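A minimal usage sketch for the new tools-content.js model follows (illustrative only, not part of the changeset; the ids and names are made up, and the element shape mirrors what tools-select-content.js passes to app.content.add() further down):

    // 'form' stands in for the tool form view that owns the model
    var content = new ToolContent(form);
    // add() takes an object of element lists keyed by source type (hda/hdca);
    // each element is stored in the summary under the key id + '_' + src
    content.add({
        hda  : [ { id: 'abc123', src: 'hda',  hid: 1, name: 'input.bed' } ],
        hdca : [ { id: 'def456', src: 'hdca', hid: 2, name: 'paired reads' } ]
    });
    // get() matches on any subset of fields via _.findWhere and falls back to {}
    content.get({ id: 'abc123', src: 'hda' });   // -> the stored hda element
    content.get({ id: 'nope',   src: 'hda' });   // -> {}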
diff -r da996e89dbfb8249deeeafd30489a62970f8af7d -r 708cab19708d8158044eb5b1048740734cb2bc2a client/galaxy/scripts/mvc/tools/tools-form-base.js
--- a/client/galaxy/scripts/mvc/tools/tools-form-base.js
+++ b/client/galaxy/scripts/mvc/tools/tools-form-base.js
@@ -3,9 +3,9 @@
*/
define(['utils/utils', 'utils/deferred', 'mvc/ui/ui-portlet', 'mvc/ui/ui-misc',
'mvc/citation/citation-model', 'mvc/citation/citation-view',
- 'mvc/tools', 'mvc/tools/tools-template', 'mvc/tools/tools-section', 'mvc/tools/tools-tree'],
+ 'mvc/tools', 'mvc/tools/tools-template', 'mvc/tools/tools-content', 'mvc/tools/tools-section', 'mvc/tools/tools-tree'],
function(Utils, Deferred, Portlet, Ui, CitationModel, CitationView,
- Tools, ToolTemplate, ToolSection, ToolTree) {
+ Tools, ToolTemplate, ToolContent, ToolSection, ToolTree) {
// create form view
return Backbone.View.extend({
@@ -83,8 +83,8 @@
// creates a tree/json data structure from the input form
this.tree = new ToolTree(this);
- // keeps track of history items
- this.history = {};
+ // request history content and build form
+ this.content = new ToolContent(this);
// update model data
self.options.inputs = options && options.inputs;
diff -r da996e89dbfb8249deeeafd30489a62970f8af7d -r 708cab19708d8158044eb5b1048740734cb2bc2a client/galaxy/scripts/mvc/tools/tools-jobs.js
--- a/client/galaxy/scripts/mvc/tools/tools-jobs.js
+++ b/client/galaxy/scripts/mvc/tools/tools-jobs.js
@@ -157,4 +157,4 @@
}
});
-});
+});
\ No newline at end of file
diff -r da996e89dbfb8249deeeafd30489a62970f8af7d -r 708cab19708d8158044eb5b1048740734cb2bc2a client/galaxy/scripts/mvc/tools/tools-select-content.js
--- a/client/galaxy/scripts/mvc/tools/tools-select-content.js
+++ b/client/galaxy/scripts/mvc/tools/tools-select-content.js
@@ -179,7 +179,6 @@
/** Update content selector */
update: function(options) {
// update a particular select field
- var self = this;
function _update(field, options) {
if (field) {
// identify available options
@@ -190,8 +189,6 @@
label: item.hid + ': ' + item.name,
value: item.id
});
- // backup to local history
- self.app.history[item.id + '_' + item.src] = item;
}
// update field
field.update(select_options);
@@ -202,6 +199,9 @@
_update(this.select_single, options.hda);
_update(this.select_multiple, options.hda);
_update(this.select_collection, options.hdca);
+
+ // add to content list
+ this.app.content.add(options);
},
/** Return the currently selected dataset values */
@@ -266,7 +266,7 @@
// append to dataset ids
for (var i in id_list) {
- var details = _.findWhere(this.app.history, {
+ var details = this.app.content.get({
id : id_list[i],
src : this.list[this.current].type
});
@@ -312,7 +312,7 @@
/** Assists in identifying the batch mode */
_batch: function() {
if (this.current == 'collection') {
- var hdca = _.findWhere(this.app.history, {
+ var hdca = this.app.content.get({
id : this._select().value(),
src : 'hdca'
});
diff -r da996e89dbfb8249deeeafd30489a62970f8af7d -r 708cab19708d8158044eb5b1048740734cb2bc2a client/galaxy/scripts/templates/compiled/panel_section.js
--- a/client/galaxy/scripts/templates/compiled/panel_section.js
+++ b/client/galaxy/scripts/templates/compiled/panel_section.js
@@ -1,14 +1,13 @@
(function() {
var template = Handlebars.template, templates = Handlebars.templates = Handlebars.templates || {};
templates['panel_section'] = template({"compiler":[6,">= 2.0.0-beta.1"],"main":function(depth0,helpers,partials,data) {
- var helper, alias1=helpers.helperMissing, alias2="function", alias3=this.escapeExpression;
-
+ var helper, functionType="function", helperMissing=helpers.helperMissing, escapeExpression=this.escapeExpression;
return "<div class=\"toolSectionTitle\" id=\"title_"
- + alias3(((helper = (helper = helpers.id || (depth0 != null ? depth0.id : depth0)) != null ? helper : alias1),(typeof helper === alias2 ? helper.call(depth0,{"name":"id","hash":{},"data":data}) : helper)))
+ + escapeExpression(((helper = (helper = helpers.id || (depth0 != null ? depth0.id : depth0)) != null ? helper : helperMissing),(typeof helper === functionType ? helper.call(depth0, {"name":"id","hash":{},"data":data}) : helper)))
+ "\">\n <a href=\"javascript:void(0)\"><span>"
- + alias3(((helper = (helper = helpers.name || (depth0 != null ? depth0.name : depth0)) != null ? helper : alias1),(typeof helper === alias2 ? helper.call(depth0,{"name":"name","hash":{},"data":data}) : helper)))
+ + escapeExpression(((helper = (helper = helpers.name || (depth0 != null ? depth0.name : depth0)) != null ? helper : helperMissing),(typeof helper === functionType ? helper.call(depth0, {"name":"name","hash":{},"data":data}) : helper)))
+ "</span></a>\n</div>\n<div id=\""
- + alias3(((helper = (helper = helpers.id || (depth0 != null ? depth0.id : depth0)) != null ? helper : alias1),(typeof helper === alias2 ? helper.call(depth0,{"name":"id","hash":{},"data":data}) : helper)))
+ + escapeExpression(((helper = (helper = helpers.id || (depth0 != null ? depth0.id : depth0)) != null ? helper : helperMissing),(typeof helper === functionType ? helper.call(depth0, {"name":"id","hash":{},"data":data}) : helper)))
+ "\" class=\"toolSectionBody\" style=\"display: none; \">\n <div class=\"toolSectionBg\"></div>\n<div>";
},"useData":true});
})();
\ No newline at end of file
diff -r da996e89dbfb8249deeeafd30489a62970f8af7d -r 708cab19708d8158044eb5b1048740734cb2bc2a client/galaxy/scripts/templates/compiled/tool_form.js
--- a/client/galaxy/scripts/templates/compiled/tool_form.js
+++ b/client/galaxy/scripts/templates/compiled/tool_form.js
@@ -1,28 +1,26 @@
(function() {
var template = Handlebars.template, templates = Handlebars.templates = Handlebars.templates || {};
templates['tool_form'] = template({"1":function(depth0,helpers,partials,data) {
- var stack1, helper, alias1=helpers.helperMissing, alias2="function", alias3=this.escapeExpression;
-
- return " <div class=\"form-row\">\n <label for=\""
- + alias3(((helper = (helper = helpers.name || (depth0 != null ? depth0.name : depth0)) != null ? helper : alias1),(typeof helper === alias2 ? helper.call(depth0,{"name":"name","hash":{},"data":data}) : helper)))
+ var stack1, helper, functionType="function", helperMissing=helpers.helperMissing, escapeExpression=this.escapeExpression, buffer = " <div class=\"form-row\">\n <label for=\""
+ + escapeExpression(((helper = (helper = helpers.name || (depth0 != null ? depth0.name : depth0)) != null ? helper : helperMissing),(typeof helper === functionType ? helper.call(depth0, {"name":"name","hash":{},"data":data}) : helper)))
+ "\">"
- + alias3(((helper = (helper = helpers.label || (depth0 != null ? depth0.label : depth0)) != null ? helper : alias1),(typeof helper === alias2 ? helper.call(depth0,{"name":"label","hash":{},"data":data}) : helper)))
- + ":</label>\n <div class=\"form-row-input\">\n "
- + ((stack1 = ((helper = (helper = helpers.html || (depth0 != null ? depth0.html : depth0)) != null ? helper : alias1),(typeof helper === alias2 ? helper.call(depth0,{"name":"html","hash":{},"data":data}) : helper))) != null ? stack1 : "")
- + "\n </div>\n <div class=\"toolParamHelp\" style=\"clear: both;\">\n "
- + alias3(((helper = (helper = helpers.help || (depth0 != null ? depth0.help : depth0)) != null ? helper : alias1),(typeof helper === alias2 ? helper.call(depth0,{"name":"help","hash":{},"data":data}) : helper)))
+ + escapeExpression(((helper = (helper = helpers.label || (depth0 != null ? depth0.label : depth0)) != null ? helper : helperMissing),(typeof helper === functionType ? helper.call(depth0, {"name":"label","hash":{},"data":data}) : helper)))
+ + ":</label>\n <div class=\"form-row-input\">\n ";
+ stack1 = ((helper = (helper = helpers.html || (depth0 != null ? depth0.html : depth0)) != null ? helper : helperMissing),(typeof helper === functionType ? helper.call(depth0, {"name":"html","hash":{},"data":data}) : helper));
+ if (stack1 != null) { buffer += stack1; }
+ return buffer + "\n </div>\n <div class=\"toolParamHelp\" style=\"clear: both;\">\n "
+ + escapeExpression(((helper = (helper = helpers.help || (depth0 != null ? depth0.help : depth0)) != null ? helper : helperMissing),(typeof helper === functionType ? helper.call(depth0, {"name":"help","hash":{},"data":data}) : helper)))
+ "\n </div>\n <div style=\"clear: both;\"></div>\n </div>\n";
},"compiler":[6,">= 2.0.0-beta.1"],"main":function(depth0,helpers,partials,data) {
- var stack1, helper, alias1=helpers.helperMissing, alias2="function", alias3=this.escapeExpression;
-
- return "<div class=\"toolFormTitle\">"
- + alias3(((helper = (helper = helpers.name || (depth0 != null ? depth0.name : depth0)) != null ? helper : alias1),(typeof helper === alias2 ? helper.call(depth0,{"name":"name","hash":{},"data":data}) : helper)))
+ var stack1, helper, functionType="function", helperMissing=helpers.helperMissing, escapeExpression=this.escapeExpression, buffer = "<div class=\"toolFormTitle\">"
+ + escapeExpression(((helper = (helper = helpers.name || (depth0 != null ? depth0.name : depth0)) != null ? helper : helperMissing),(typeof helper === functionType ? helper.call(depth0, {"name":"name","hash":{},"data":data}) : helper)))
+ " (version "
- + alias3(((helper = (helper = helpers.version || (depth0 != null ? depth0.version : depth0)) != null ? helper : alias1),(typeof helper === alias2 ? helper.call(depth0,{"name":"version","hash":{},"data":data}) : helper)))
- + ")</div>\n <div class=\"toolFormBody\">\n"
- + ((stack1 = helpers.each.call(depth0,(depth0 != null ? depth0.inputs : depth0),{"name":"each","hash":{},"fn":this.program(1, data, 0),"inverse":this.noop,"data":data})) != null ? stack1 : "")
- + " </div>\n <div class=\"form-row form-actions\">\n <input type=\"submit\" class=\"btn btn-primary\" name=\"runtool_btn\" value=\"Execute\">\n</div>\n<div class=\"toolHelp\">\n <div class=\"toolHelpBody\">"
- + alias3(((helper = (helper = helpers.help || (depth0 != null ? depth0.help : depth0)) != null ? helper : alias1),(typeof helper === alias2 ? helper.call(depth0,{"name":"help","hash":{},"data":data}) : helper)))
+ + escapeExpression(((helper = (helper = helpers.version || (depth0 != null ? depth0.version : depth0)) != null ? helper : helperMissing),(typeof helper === functionType ? helper.call(depth0, {"name":"version","hash":{},"data":data}) : helper)))
+ + ")</div>\n <div class=\"toolFormBody\">\n";
+ stack1 = helpers.each.call(depth0, (depth0 != null ? depth0.inputs : depth0), {"name":"each","hash":{},"fn":this.program(1, data),"inverse":this.noop,"data":data});
+ if (stack1 != null) { buffer += stack1; }
+ return buffer + " </div>\n <div class=\"form-row form-actions\">\n <input type=\"submit\" class=\"btn btn-primary\" name=\"runtool_btn\" value=\"Execute\">\n</div>\n<div class=\"toolHelp\">\n <div class=\"toolHelpBody\">"
+ + escapeExpression(((helper = (helper = helpers.help || (depth0 != null ? depth0.help : depth0)) != null ? helper : helperMissing),(typeof helper === functionType ? helper.call(depth0, {"name":"help","hash":{},"data":data}) : helper)))
+ "</div>\n</div>";
},"useData":true});
})();
\ No newline at end of file
diff -r da996e89dbfb8249deeeafd30489a62970f8af7d -r 708cab19708d8158044eb5b1048740734cb2bc2a client/galaxy/scripts/templates/compiled/tool_link.js
--- a/client/galaxy/scripts/templates/compiled/tool_link.js
+++ b/client/galaxy/scripts/templates/compiled/tool_link.js
@@ -1,19 +1,18 @@
(function() {
var template = Handlebars.template, templates = Handlebars.templates = Handlebars.templates || {};
templates['tool_link'] = template({"compiler":[6,">= 2.0.0-beta.1"],"main":function(depth0,helpers,partials,data) {
- var helper, alias1=helpers.helperMissing, alias2="function", alias3=this.escapeExpression;
-
+ var helper, functionType="function", helperMissing=helpers.helperMissing, escapeExpression=this.escapeExpression;
return "<a class=\""
- + alias3(((helper = (helper = helpers.id || (depth0 != null ? depth0.id : depth0)) != null ? helper : alias1),(typeof helper === alias2 ? helper.call(depth0,{"name":"id","hash":{},"data":data}) : helper)))
+ + escapeExpression(((helper = (helper = helpers.id || (depth0 != null ? depth0.id : depth0)) != null ? helper : helperMissing),(typeof helper === functionType ? helper.call(depth0, {"name":"id","hash":{},"data":data}) : helper)))
+ " tool-link\" href=\""
- + alias3(((helper = (helper = helpers.link || (depth0 != null ? depth0.link : depth0)) != null ? helper : alias1),(typeof helper === alias2 ? helper.call(depth0,{"name":"link","hash":{},"data":data}) : helper)))
+ + escapeExpression(((helper = (helper = helpers.link || (depth0 != null ? depth0.link : depth0)) != null ? helper : helperMissing),(typeof helper === functionType ? helper.call(depth0, {"name":"link","hash":{},"data":data}) : helper)))
+ "\" target=\""
- + alias3(((helper = (helper = helpers.target || (depth0 != null ? depth0.target : depth0)) != null ? helper : alias1),(typeof helper === alias2 ? helper.call(depth0,{"name":"target","hash":{},"data":data}) : helper)))
+ + escapeExpression(((helper = (helper = helpers.target || (depth0 != null ? depth0.target : depth0)) != null ? helper : helperMissing),(typeof helper === functionType ? helper.call(depth0, {"name":"target","hash":{},"data":data}) : helper)))
+ "\" minsizehint=\""
- + alias3(((helper = (helper = helpers.min_width || (depth0 != null ? depth0.min_width : depth0)) != null ? helper : alias1),(typeof helper === alias2 ? helper.call(depth0,{"name":"min_width","hash":{},"data":data}) : helper)))
+ + escapeExpression(((helper = (helper = helpers.min_width || (depth0 != null ? depth0.min_width : depth0)) != null ? helper : helperMissing),(typeof helper === functionType ? helper.call(depth0, {"name":"min_width","hash":{},"data":data}) : helper)))
+ "\">"
- + alias3(((helper = (helper = helpers.name || (depth0 != null ? depth0.name : depth0)) != null ? helper : alias1),(typeof helper === alias2 ? helper.call(depth0,{"name":"name","hash":{},"data":data}) : helper)))
+ + escapeExpression(((helper = (helper = helpers.name || (depth0 != null ? depth0.name : depth0)) != null ? helper : helperMissing),(typeof helper === functionType ? helper.call(depth0, {"name":"name","hash":{},"data":data}) : helper)))
+ "</a> "
- + alias3(((helper = (helper = helpers.description || (depth0 != null ? depth0.description : depth0)) != null ? helper : alias1),(typeof helper === alias2 ? helper.call(depth0,{"name":"description","hash":{},"data":data}) : helper)));
+ + escapeExpression(((helper = (helper = helpers.description || (depth0 != null ? depth0.description : depth0)) != null ? helper : helperMissing),(typeof helper === functionType ? helper.call(depth0, {"name":"description","hash":{},"data":data}) : helper)));
},"useData":true});
})();
\ No newline at end of file
diff -r da996e89dbfb8249deeeafd30489a62970f8af7d -r 708cab19708d8158044eb5b1048740734cb2bc2a client/galaxy/scripts/templates/compiled/tool_search.js
--- a/client/galaxy/scripts/templates/compiled/tool_search.js
+++ b/client/galaxy/scripts/templates/compiled/tool_search.js
@@ -1,12 +1,11 @@
(function() {
var template = Handlebars.template, templates = Handlebars.templates = Handlebars.templates || {};
templates['tool_search'] = template({"compiler":[6,">= 2.0.0-beta.1"],"main":function(depth0,helpers,partials,data) {
- var helper, alias1=helpers.helperMissing, alias2="function", alias3=this.escapeExpression;
-
- return "<input type=\"text\" name=\"query\" placeholder=\""
- + alias3(((helper = (helper = helpers.search_hint_string || (depth0 != null ? depth0.search_hint_string : depth0)) != null ? helper : alias1),(typeof helper === alias2 ? helper.call(depth0,{"name":"search_hint_string","hash":{},"data":data}) : helper)))
+ var helper, functionType="function", helperMissing=helpers.helperMissing, escapeExpression=this.escapeExpression;
+ return "<input type=\"text\" name=\"query\" value=\""
+ + escapeExpression(((helper = (helper = helpers.search_hint_string || (depth0 != null ? depth0.search_hint_string : depth0)) != null ? helper : helperMissing),(typeof helper === functionType ? helper.call(depth0, {"name":"search_hint_string","hash":{},"data":data}) : helper)))
+ "\" id=\"tool-search-query\" autocomplete=\"off\" class=\"search-query parent-width\" />\n<a id=\"search-clear-btn\" title=\"clear search (esc)\"></a>\n<img src=\""
- + alias3(((helper = (helper = helpers.spinner_url || (depth0 != null ? depth0.spinner_url : depth0)) != null ? helper : alias1),(typeof helper === alias2 ? helper.call(depth0,{"name":"spinner_url","hash":{},"data":data}) : helper)))
- + "\" id=\"search-spinner\" class=\"search-spinner\"/>\n";
+ + escapeExpression(((helper = (helper = helpers.spinner_url || (depth0 != null ? depth0.spinner_url : depth0)) != null ? helper : helperMissing),(typeof helper === functionType ? helper.call(depth0, {"name":"spinner_url","hash":{},"data":data}) : helper)))
+ + "\" id=\"search-spinner\" class=\"search-spinner\"/>";
},"useData":true});
})();
\ No newline at end of file
diff -r da996e89dbfb8249deeeafd30489a62970f8af7d -r 708cab19708d8158044eb5b1048740734cb2bc2a client/galaxy/scripts/templates/tool_search.handlebars
--- a/client/galaxy/scripts/templates/tool_search.handlebars
+++ b/client/galaxy/scripts/templates/tool_search.handlebars
@@ -1,3 +1,3 @@
-<input type="text" name="query" placeholder="{{search_hint_string}}" id="tool-search-query" autocomplete="off" class="search-query parent-width" />
+<input type="text" name="query" value="{{search_hint_string}}" id="tool-search-query" autocomplete="off" class="search-query parent-width" /><a id="search-clear-btn" title="clear search (esc)"></a>
-<img src="{{spinner_url}}" id="search-spinner" class="search-spinner"/>
+<img src="{{spinner_url}}" id="search-spinner" class="search-spinner"/>
\ No newline at end of file
diff -r da996e89dbfb8249deeeafd30489a62970f8af7d -r 708cab19708d8158044eb5b1048740734cb2bc2a config/auth_conf.xml.sample
--- a/config/auth_conf.xml.sample
+++ /dev/null
@@ -1,9 +0,0 @@
-<?xml version="1.0"?>
-<auth>
- <authenticator>
- <type>localdb</type>
- <options>
- <allow-password-change>true</allow-password-change>
- </options>
- </authenticator>
-</auth>
diff -r da996e89dbfb8249deeeafd30489a62970f8af7d -r 708cab19708d8158044eb5b1048740734cb2bc2a config/demo_sequencer_wsgi.ini.sample
--- /dev/null
+++ b/config/demo_sequencer_wsgi.ini.sample
@@ -0,0 +1,66 @@
+# ---- HTTP Server ----------------------------------------------------------
+
+[server:main]
+
+use = egg:Paste#http
+port = 9011
+host = 0.0.0.0
+use_threadpool = true
+threadpool_workers = 10
+
+# ---- Galaxy Demo Sequencer Emulator Interface -------------------------------------------------
+
+[app:main]
+
+# Specifies the factory for the universe WSGI application
+paste.app_factory = galaxy.webapps.demo_sequencer.buildapp:app_factory
+log_level = DEBUG
+
+# Where dataset files are saved
+file_path = database/demo_sequencer_files
+# Temporary storage for additional datasets, this should be shared through the cluster
+new_file_path = database/tmp
+
+# Sequencer emulator actions
+sequencer_actions_config_file = %(here)s/lib/galaxy/webapps/demo_sequencer/sequencer_actions.xml
+
+# Session support (beaker)
+use_beaker_session = True
+session_type = memory
+session_data_dir = %(here)s/database/beaker_sessions
+session_key = galaxysessions
+session_secret = changethisinproduction
+
+# Galaxy session security
+id_secret = changethisinproductiontoo
+
+# Configuration for debugging middleware
+debug = true
+use_lint = false
+
+# NEVER enable this on a public site (even test or QA)
+# use_interactive = true
+
+# this should be a comma-separated list of valid Galaxy users
+#admin_users = test(a)bx.psu.edu
+
+# Force everyone to log in (disable anonymous access)
+require_login = False
+
+# Write thread status periodically to 'heartbeat.log' (careful, uses disk space rapidly!)
+## use_heartbeat = True
+
+# Profiling middleware (cProfile based)
+## use_profile = True
+
+# Use the new iframe / javascript based layout
+use_new_layout = true
+
+# Serving static files (needed if running standalone)
+static_enabled = True
+static_cache_time = 360
+static_dir = %(here)s/static/
+static_images_dir = %(here)s/static/images
+static_favicon_dir = %(here)s/static/favicon.ico
+static_scripts_dir = %(here)s/static/scripts/
+static_style_dir = %(here)s/static/june_2007_style/blue
diff -r da996e89dbfb8249deeeafd30489a62970f8af7d -r 708cab19708d8158044eb5b1048740734cb2bc2a config/galaxy.ini.sample
--- a/config/galaxy.ini.sample
+++ b/config/galaxy.ini.sample
@@ -176,7 +176,7 @@
# install from in the admin interface (.sample used if default does not exist).
#tool_sheds_config_file = config/tool_sheds_conf.xml
-# Set to True to enable monitoring of tools and tool directories
+# Set to True to Enable monitoring of tools and tool directories
# listed in any tool config file specified in tool_config_file option.
# If changes are found, tools are automatically reloaded. Watchdog (
# https://pypi.python.org/pypi/watchdog ) must be installed and
@@ -184,7 +184,7 @@
# which will attempt to watch tools if the watchdog library is available
# but won't fail to load Galaxy if it is not and 'polling' which will use
# a less efficient monitoring scheme that may work in wider range of scenarios
-# than the watchdog default.
+# then the watchdog default montiory.
#watch_tools = False
# Enable automatic polling of relative tool sheds to see if any updates
@@ -844,13 +844,8 @@
#openid_config_file = config/openid_conf.xml
#openid_consumer_cache_path = database/openid_consumer_cache
-# XML config file that allows the use of different authentication providers
-# (e.g. Active Directory) instead or in addition to local authentication
-# (.sample is used if default does not exist).
-#auth_config_file = config/auth_conf.xml
-
# Optional list of email addresses of API users who can make calls on behalf of
-# other users.
+# other users
#api_allow_run_as = None
# Master key that allows many API admin actions to be used without actually
diff -r da996e89dbfb8249deeeafd30489a62970f8af7d -r 708cab19708d8158044eb5b1048740734cb2bc2a config/job_conf.xml.sample_advanced
--- a/config/job_conf.xml.sample_advanced
+++ b/config/job_conf.xml.sample_advanced
@@ -427,15 +427,6 @@
<param id="nativeSpecification">--mem-per-cpu=512</param><resubmit condition="memory_limit_reached" destination="bigmem" /></destination>
- <!-- Any tag param in this file can be set using an environment variable or using
- values from galaxy.ini using the from_environ and from_config attributes
- repectively. The text of the param will still be used if that environment variable
- or config value isn't set.
- -->
- <destination id="params_from_environment" runner="slurm">
- <param id="nativeSpecification" from_environ="NATIVE_SPECIFICATION">--time=00:05:00 --nodes=1</param>
- <param id="docker_enabled" from_config="use_docker">false</param>
- </destination>
</destinations>
<resources default="default">
diff -r da996e89dbfb8249deeeafd30489a62970f8af7d -r 708cab19708d8158044eb5b1048740734cb2bc2a config/reports_wsgi.ini.sample
--- a/config/reports_wsgi.ini.sample
+++ b/config/reports_wsgi.ini.sample
@@ -8,37 +8,10 @@
use_threadpool = true
threadpool_workers = 10
-# ---- Filters --------------------------------------------------------------
-
-# Filters sit between Galaxy and the HTTP server.
-
-# These filters are disabled by default. They can be enabled with
-# 'filter-with' in the [app:main] section below.
-
-# Define the proxy-prefix filter.
-[filter:proxy-prefix]
-use = egg:PasteDeploy#prefix
-prefix = /reports
-
# ---- Galaxy Webapps Report Interface -------------------------------------------------
[app:main]
-# -- Application and filtering
-
-# If running behind a proxy server and Galaxy is served from a subdirectory,
-# enable the proxy-prefix filter and set the prefix in the
-# [filter:proxy-prefix] section above.
-#filter-with = proxy-prefix
-
-# If proxy-prefix is enabled and you're running more than one Galaxy instance
-# behind one hostname, you will want to set this to the same path as the prefix
-# in the filter above. This value becomes the "path" attribute set in the
-# cookie so the cookies from each instance will not clobber each other.
-#cookie_path = None
-
-# -- Report
-
# Specifies the factory for the universe WSGI application
paste.app_factory = galaxy.webapps.reports.buildapp:app_factory
log_level = DEBUG
diff -r da996e89dbfb8249deeeafd30489a62970f8af7d -r 708cab19708d8158044eb5b1048740734cb2bc2a cron/parse_builds.py
--- a/cron/parse_builds.py
+++ b/cron/parse_builds.py
@@ -2,7 +2,7 @@
"""
Connects to the URL specified and outputs builds available at that
-DSN in tabular format. UCSC Main gateway is used as default.
+DSN in tabular format. USCS Test gateway is used as default.
build description
"""
@@ -10,10 +10,7 @@
import urllib
import xml.etree.ElementTree as ElementTree
-try:
- URL = sys.argv[1]
-except IndexError:
- URL = "http://genome.cse.ucsc.edu/cgi-bin/das/dsn"
+URL = "http://genome.cse.ucsc.edu/cgi-bin/das/dsn"
def getbuilds(url):
try:
@@ -31,7 +28,7 @@
print "?\tunspecified (?)"
sys.exit(1)
- print "#Harvested from " + URL
+ print "#Harvested from http://genome.cse.ucsc.edu/cgi-bin/das/dsn"
print "?\tunspecified (?)"
for dsn in tree:
build = dsn.find("SOURCE").attrib['id']
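For reference, after this change parse_builds.py takes no command-line arguments and writes the harvested build list to stdout as tab-separated "build description" lines (preceded by the "#Harvested from ..." header and the "? unspecified (?)" fallback row), so a typical invocation from the Galaxy root simply redirects the output. The target filename here is only an example:

% python cron/parse_builds.py > builds.txt.new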
diff -r da996e89dbfb8249deeeafd30489a62970f8af7d -r 708cab19708d8158044eb5b1048740734cb2bc2a doc/source/lib/galaxy.webapps.demo_sequencer.controllers.rst
--- /dev/null
+++ b/doc/source/lib/galaxy.webapps.demo_sequencer.controllers.rst
@@ -0,0 +1,19 @@
+controllers Package
+===================
+
+:mod:`controllers` Package
+--------------------------
+
+.. automodule:: galaxy.webapps.demo_sequencer.controllers
+ :members:
+ :undoc-members:
+ :show-inheritance:
+
+:mod:`common` Module
+--------------------
+
+.. automodule:: galaxy.webapps.demo_sequencer.controllers.common
+ :members:
+ :undoc-members:
+ :show-inheritance:
+
diff -r da996e89dbfb8249deeeafd30489a62970f8af7d -r 708cab19708d8158044eb5b1048740734cb2bc2a doc/source/lib/galaxy.webapps.demo_sequencer.framework.rst
--- /dev/null
+++ b/doc/source/lib/galaxy.webapps.demo_sequencer.framework.rst
@@ -0,0 +1,11 @@
+framework Package
+=================
+
+:mod:`framework` Package
+------------------------
+
+.. automodule:: galaxy.webapps.demo_sequencer.framework
+ :members:
+ :undoc-members:
+ :show-inheritance:
+
diff -r da996e89dbfb8249deeeafd30489a62970f8af7d -r 708cab19708d8158044eb5b1048740734cb2bc2a doc/source/lib/galaxy.webapps.demo_sequencer.rst
--- /dev/null
+++ b/doc/source/lib/galaxy.webapps.demo_sequencer.rst
@@ -0,0 +1,51 @@
+demo_sequencer Package
+======================
+
+:mod:`demo_sequencer` Package
+-----------------------------
+
+.. automodule:: galaxy.webapps.demo_sequencer
+ :members:
+ :undoc-members:
+ :show-inheritance:
+
+:mod:`app` Module
+-----------------
+
+.. automodule:: galaxy.webapps.demo_sequencer.app
+ :members:
+ :undoc-members:
+ :show-inheritance:
+
+:mod:`buildapp` Module
+----------------------
+
+.. automodule:: galaxy.webapps.demo_sequencer.buildapp
+ :members:
+ :undoc-members:
+ :show-inheritance:
+
+:mod:`config` Module
+--------------------
+
+.. automodule:: galaxy.webapps.demo_sequencer.config
+ :members:
+ :undoc-members:
+ :show-inheritance:
+
+:mod:`registry` Module
+----------------------
+
+.. automodule:: galaxy.webapps.demo_sequencer.registry
+ :members:
+ :undoc-members:
+ :show-inheritance:
+
+Subpackages
+-----------
+
+.. toctree::
+
+ galaxy.webapps.demo_sequencer.controllers
+ galaxy.webapps.demo_sequencer.framework
+
diff -r da996e89dbfb8249deeeafd30489a62970f8af7d -r 708cab19708d8158044eb5b1048740734cb2bc2a doc/source/lib/galaxy.webapps.rst
--- a/doc/source/lib/galaxy.webapps.rst
+++ b/doc/source/lib/galaxy.webapps.rst
@@ -15,6 +15,7 @@
.. toctree::
galaxy.webapps.community
+ galaxy.webapps.demo_sequencer
galaxy.webapps.galaxy
galaxy.webapps.reports
diff -r da996e89dbfb8249deeeafd30489a62970f8af7d -r 708cab19708d8158044eb5b1048740734cb2bc2a lib/galaxy/app.py
--- a/lib/galaxy/app.py
+++ b/lib/galaxy/app.py
@@ -120,8 +120,6 @@
else:
self.openid_providers = OpenIDProviders()
# Start the heartbeat process if configured and available
- from galaxy import auth
- self.auth_manager = auth.AuthManager( self )
if self.config.use_heartbeat:
from galaxy.util import heartbeat
if heartbeat.Heartbeat:
@@ -166,11 +164,8 @@
if self.heartbeat:
self.heartbeat.shutdown()
self.update_repository_manager.shutdown()
- try:
+ if self.control_worker:
self.control_worker.shutdown()
- except AttributeError:
- # There is no control_worker
- pass
try:
# If the datatypes registry was persisted, attempt to
# remove the temporary file in which it was written.
diff -r da996e89dbfb8249deeeafd30489a62970f8af7d -r 708cab19708d8158044eb5b1048740734cb2bc2a lib/galaxy/auth/__init__.py
--- a/lib/galaxy/auth/__init__.py
+++ /dev/null
@@ -1,196 +0,0 @@
-"""
-Contains implementations of the authentication logic.
-"""
-
-from collections import namedtuple
-import xml.etree.ElementTree
-
-from galaxy.security.validate_user_input import validate_publicname
-
-from galaxy.util import (
- string_as_bool,
- string_as_bool_or_none,
-)
-
-from galaxy.util import plugin_config
-
-
-import logging
-log = logging.getLogger(__name__)
-
-# <auth>
-# <authenticator>
-# <type>activedirectory</type>
-# <filter>'[username]'.endswith('@students.latrobe.edu.au')</filter>
-# <options>
-# <auto-register>True</auto-register>
-# <server>ldap://STUDENTS.ltu.edu.au</server>
-# [<search-filter>(&(objectClass=user)(mail={username}))</search-filter>
-# <search-base>dc=STUDENTS,dc=ltu,dc=edu,dc=au</search-base>
-# <search-user>jsmith</search-user>
-# <search-password>mysecret</search-password>
-# <search-fields>sAMAccountName</search-fields>]
-# <bind-user>{sAMAccountName}(a)STUDENTS.ltu.edu.au</bind-user>
-# <bind-password>{password}</bind-password>
-# <auto-register-username>{sAMAccountName}</auto-register-username>
-# </options>
-# </authenticator>
-# ...
-# </auth>
-
-
-class AuthManager(object):
-
- def __init__(self, app):
- self.__app = app
- import galaxy.auth.providers
- self.__plugins_dict = plugin_config.plugins_dict( galaxy.auth.providers, 'plugin_type' )
- auth_config_file = app.config.auth_config_file
- self.__init_authenticators(auth_config_file)
-
- def __init_authenticators(self, auth_config_file):
- # parse XML
- ct = xml.etree.ElementTree.parse(auth_config_file)
- conf_root = ct.getroot()
-
- authenticators = []
- # process authenticators
- for auth_elem in conf_root.getchildren():
- type_elem = auth_elem.find('type')
- plugin = self.__plugins_dict.get(type_elem.text)()
-
- # check filterelem
- filter_elem = auth_elem.find('filter')
- if filter_elem is not None:
- filter_template = str(filter_elem.text)
- else:
- filter_template = None
-
- # extract options
- options_elem = auth_elem.find('options')
- options = {}
- if options_elem is not None:
- for opt in options_elem:
- options[opt.tag] = opt.text
- authenticator = Authenticator(
- plugin=plugin,
- filter_template=filter_template,
- options=options,
- )
- authenticators.append(authenticator)
- self.authenticators = authenticators
-
- def check_registration_allowed(self, email, password):
- """Checks if the provided email is allowed to register."""
- message = ''
- status = 'done'
- for provider, options in self.active_authenticators(email, password):
- allow_reg = _get_tri_state(options, 'allow-register', True)
- if allow_reg is None: # i.e. challenge
- auth_result, msg = provider.authenticate(email, password, options)
- if auth_result is True:
- break
- if auth_result is None:
- message = 'Invalid email address or password'
- status = 'error'
- break
- elif allow_reg is True:
- break
- elif allow_reg is False:
- message = 'Account registration not required for your account. Please simply login.'
- status = 'error'
- break
- return message, status
-
- def check_auto_registration(self, trans, email, password):
- """
- Checks the email/password using auth providers in order. If a match is
- found, returns the 'auto-register' option for that provider.
- """
- for provider, options in self.active_authenticators(email, password):
- if provider is None:
- log.debug( "Unable to find module: %s" % options )
- else:
- auth_result, auto_username = provider.authenticate(email, password, options)
- auto_username = str(auto_username).lower()
- if auth_result is True:
- # make username unique
- if validate_publicname( trans, auto_username ) != '':
- i = 1
- while i <= 10: # stop after 10 tries
- if validate_publicname( trans, "%s-%i" % (auto_username, i) ) == '':
- auto_username = "%s-%i" % (auto_username, i)
- break
- i += 1
- else:
- break # end for loop if we can't make a unique username
- log.debug( "Email: %s, auto-register with username: %s" % (email, auto_username) )
- return (_get_bool(options, 'auto-register', False), auto_username)
- elif auth_result is None:
- log.debug( "Email: %s, stopping due to failed non-continue" % (email) )
- break # end authentication (skip rest)
- return (False, '')
-
- def check_password(self, user, password):
- """Checks the email/password using auth providers."""
- for provider, options in self.active_authenticators(user.email, password):
- if provider is None:
- log.debug( "Unable to find module: %s" % options )
- else:
- auth_result = provider.authenticate_user(user, password, options)
- if auth_result is True:
- return True # accept user
- elif auth_result is None:
- break # end authentication (skip rest)
- return False
-
- def check_change_password(self, user, current_password):
- """Checks that auth provider allows password changes and current_password
- matches.
- """
- for provider, options in self.active_authenticators(user.email, current_password):
- if provider is None:
- log.debug( "Unable to find module: %s" % options )
- else:
- if _get_bool(options, "allow-password-change", False):
- auth_result = provider.authenticate_user(user, current_password, options)
- if auth_result is True:
- return (True, '') # accept user
- elif auth_result is None:
- break # end authentication (skip rest)
- else:
- return (False, 'Password change not supported')
- return (False, 'Invalid current password')
-
- def active_authenticators(self, username, password):
- """Yields AuthProvider instances for the provided configfile that match the
- filters.
- """
- try:
- for authenticator in self.authenticators:
- filter_template = authenticator.filter_template
- if filter_template:
- filter_str = filter_template.format(username=username, password=password)
- passed_filter = eval(filter_str, {"__builtins__": None}, {'str': str})
- if not passed_filter:
- continue # skip to next
- yield authenticator.plugin, authenticator.options
- except Exception:
- log.exception( "Active Authenticators Failure" )
- raise
-
-Authenticator = namedtuple('Authenticator', ['plugin', 'filter_template', 'options'])
-
-
-def _get_bool(d, k, o):
- if k in d:
- return string_as_bool(d[k])
- else:
- return o
-
-
-def _get_tri_state(d, k, o):
- if k in d:
- return string_as_bool_or_none(d[k])
- else:
- return o
diff -r da996e89dbfb8249deeeafd30489a62970f8af7d -r 708cab19708d8158044eb5b1048740734cb2bc2a lib/galaxy/auth/providers/__init__.py
--- a/lib/galaxy/auth/providers/__init__.py
+++ /dev/null
@@ -1,56 +0,0 @@
-"""
-Created on 15/07/2014
-
-@author: Andrew Robinson
-"""
-
-import abc
-
-
-class AuthProvider(object):
- """A base class for all Auth Providers."""
- __metaclass__ = abc.ABCMeta
-
- @abc.abstractproperty
- def plugin_type(self):
- """ Short string providing labelling this plugin """
-
- @abc.abstractmethod
- def authenticate(self, username, password, options):
- """
- Check that the username and password are correct.
-
- NOTE: Used within auto-registration to check it is ok to register this
- user.
-
- :param username: the users email address
- :type username: str
- :param password: the plain text password they typed
- :type password: str
- :param options: options provided in auth_config_file
- :type options: dict
- :returns: True: accept user, False: reject user and None: reject user
- and don't try any other providers. str is the username to register
- with if accepting
- :rtype: (bool, str)
- """
-
- @abc.abstractmethod
- def authenticate_user(self, user, password, options):
- """
- Same as authenticate() method, except an User object is provided instead
- of a username.
-
- NOTE: used on normal login to check authentication and update user
- details if required.
-
- :param username: the users email address
- :type username: str
- :param password: the plain text password they typed
- :type password: str
- :param options: options provided in auth_config_file
- :type options: dict
- :returns: True: accept user, False: reject user and None: reject user
- and don't try any other providers
- :rtype: bool
- """
diff -r da996e89dbfb8249deeeafd30489a62970f8af7d -r 708cab19708d8158044eb5b1048740734cb2bc2a lib/galaxy/auth/providers/activedirectory.py
--- a/lib/galaxy/auth/providers/activedirectory.py
+++ /dev/null
@@ -1,97 +0,0 @@
-"""
-Created on 15/07/2014
-
-@author: Andrew Robinson
-"""
-
-from ..providers import AuthProvider
-
-import logging
-log = logging.getLogger(__name__)
-
-
-def _get_subs(d, k, vars, default=''):
- if k in d:
- return str(d[k]).format(**vars)
- return str(default).format(**vars)
-
-
-class ActiveDirectory(AuthProvider):
- """
- Attempts to authenticate users against an Active Directory server.
-
- If options include search-fields then it will attempt to search the AD for
- those fields first. After that it will bind to the AD with the username
- (formatted as specified).
- """
- plugin_type = 'activedirectory'
-
- def authenticate(self, username, password, options):
- """
- See abstract method documentation.
- """
- log.debug("Username: %s" % username)
- log.debug("Options: %s" % options)
-
- failure_mode = False # reject but continue
- if options.get('continue-on-failure', 'False') == 'False':
- failure_mode = None # reject and do not continue
-
- try:
- import ldap
- except:
- log.debug("User: %s, ACTIVEDIRECTORY: False (no ldap)" % (username))
- return (failure_mode, '')
-
- # do AD search (if required)
- vars = {'username': username, 'password': password}
- if 'search-fields' in options:
- try:
- # setup connection
- ldap.set_option(ldap.OPT_REFERRALS, 0)
- l = ldap.initialize(_get_subs(options, 'server', vars))
- l.protocol_version = 3
- l.simple_bind_s(_get_subs(options, 'search-user', vars), _get_subs(options, 'search-password', vars))
- scope = ldap.SCOPE_SUBTREE
-
- # setup search
- attributes = map(lambda s: s.strip().format(**vars), options['search-fields'].split(','))
- result = l.search(_get_subs(options, 'search-base', vars), scope, _get_subs(options, 'search-filter', vars), attributes)
-
- # parse results
- _, suser = l.result(result, 60)
- _, attrs = suser[0]
- log.debug(("AD Search attributes: %s" % attrs))
- if hasattr(attrs, 'has_key'):
- for attr in attributes:
- if attr in attrs:
- vars[attr] = str(attrs[attr][0])
- else:
- vars[attr] = ""
- except Exception:
- log.exception('ACTIVEDIRECTORY Search Exception for User: %s' % username)
- return (failure_mode, '')
- # end search
-
- # bind as user to check their credentials
- try:
- # setup connection
- ldap.set_option(ldap.OPT_REFERRALS, 0)
- l = ldap.initialize(_get_subs(options, 'server', vars))
- l.protocol_version = 3
- l.simple_bind_s(_get_subs(options, 'bind-user', vars), _get_subs(options, 'bind-password', vars))
- except Exception:
- log.exception('ACTIVEDIRECTORY Authenticate Exception for User %s' % username)
- return (failure_mode, '')
-
- log.debug("User: %s, ACTIVEDIRECTORY: True" % (username))
- return (True, _get_subs(options, 'auto-register-username', vars))
-
- def authenticate_user(self, user, password, options):
- """
- See abstract method documentation.
- """
- return self.authenticate(user.email, password, options)[0]
-
-
-__all__ = ['ActiveDirectory']
diff -r da996e89dbfb8249deeeafd30489a62970f8af7d -r 708cab19708d8158044eb5b1048740734cb2bc2a lib/galaxy/auth/providers/alwaysreject.py
--- a/lib/galaxy/auth/providers/alwaysreject.py
+++ /dev/null
@@ -1,33 +0,0 @@
-"""
-Created on 16/07/2014
-
-@author: Andrew Robinson
-"""
-
-from ..providers import AuthProvider
-
-import logging
-log = logging.getLogger(__name__)
-
-
-class AlwaysReject(AuthProvider):
- """A simple authenticator that just accepts users (does not care about their
- password).
- """
- plugin_type = 'alwaysreject'
-
- def authenticate(self, username, password, options):
- """
- See abstract method documentation.
- """
- return (None, '')
-
- def authenticate_user(self, user, password, options):
- """
- See abstract method documentation.
- """
- log.debug("User: %s, ALWAYSREJECT: None" % (user.email))
- return None
-
-
-__all__ = ['AlwaysReject']
diff -r da996e89dbfb8249deeeafd30489a62970f8af7d -r 708cab19708d8158044eb5b1048740734cb2bc2a lib/galaxy/auth/providers/localdb.py
--- a/lib/galaxy/auth/providers/localdb.py
+++ /dev/null
@@ -1,31 +0,0 @@
-"""
-Created on 16/07/2014
-
-@author: Andrew Robinson
-"""
-
-from ..providers import AuthProvider
-import logging
-log = logging.getLogger(__name__)
-
-
-class LocalDB(AuthProvider):
- """Authenticate users against the local Galaxy database (as per usual)."""
- plugin_type = 'localdb'
-
- def authenticate(self, username, password, options):
- """
- See abstract method documentation.
- """
- return (False, '') # it can never auto-create based of localdb (chicken-egg)
-
- def authenticate_user(self, user, password, options):
- """
- See abstract method documentation.
- """
- user_ok = user.check_password(password)
- log.debug("User: %s, LOCALDB: %s" % (user.email, user_ok))
- return user_ok
-
-
-__all__ = ['LocalDB']
diff -r da996e89dbfb8249deeeafd30489a62970f8af7d -r 708cab19708d8158044eb5b1048740734cb2bc2a lib/galaxy/config.py
--- a/lib/galaxy/config.py
+++ b/lib/galaxy/config.py
@@ -462,7 +462,6 @@
Backwards compatibility for config files moved to the config/ dir.
"""
defaults = dict(
- auth_config_file=[ 'config/auth_conf.xml', 'config/auth_conf.xml.sample' ],
data_manager_config_file=[ 'config/data_manager_conf.xml', 'data_manager_conf.xml', 'config/data_manager_conf.xml.sample' ],
datatypes_config_file=[ 'config/datatypes_conf.xml', 'datatypes_conf.xml', 'config/datatypes_conf.xml.sample' ],
external_service_type_config_file=[ 'config/external_service_types_conf.xml', 'external_service_types_conf.xml', 'config/external_service_types_conf.xml.sample' ],
diff -r da996e89dbfb8249deeeafd30489a62970f8af7d -r 708cab19708d8158044eb5b1048740734cb2bc2a lib/galaxy/datatypes/converters/lped_to_pbed_converter.py
--- a/lib/galaxy/datatypes/converters/lped_to_pbed_converter.py
+++ b/lib/galaxy/datatypes/converters/lped_to_pbed_converter.py
@@ -52,7 +52,7 @@
return missval
if not missval:
missval = 'N' # punt
- f.close()
+ close(f)
return missval
def rgConv(inpedfilepath,outhtmlname,outfilepath,plink):
diff -r da996e89dbfb8249deeeafd30489a62970f8af7d -r 708cab19708d8158044eb5b1048740734cb2bc2a lib/galaxy/datatypes/converters/pbed_to_lped_converter.py
--- a/lib/galaxy/datatypes/converters/pbed_to_lped_converter.py
+++ b/lib/galaxy/datatypes/converters/pbed_to_lped_converter.py
@@ -4,10 +4,7 @@
# eg lped/eigen/fbat/snpmatrix all to pbed
# and pbed to lped/eigen/fbat/snpmatrix ?
# that's a lot of converters
-import os
-import subprocess
-import sys
-import time
+import sys,os,time,subprocess
prog = os.path.split(sys.argv[0])[-1]
@@ -26,21 +23,23 @@
<div class="document">
"""
-
def timenow():
"""return current time as a string
"""
return time.strftime('%d/%m/%Y %H:%M:%S', time.localtime(time.time()))
-def rgConv(inpedfilepath, outhtmlname, outfilepath, plink):
+def rgConv(inpedfilepath,outhtmlname,outfilepath,plink):
"""
"""
- basename = os.path.split(inpedfilepath)[-1] # get basename
- outroot = os.path.join(outfilepath, basename)
- cl = '%s --noweb --bfile %s --recode --out %s ' % (plink, inpedfilepath, outroot)
- p = subprocess.Popen(cl, shell=True, cwd=outfilepath)
- p.wait() # run plink
+
+ basename = os.path.split(inpedfilepath)[-1] # get basename
+ outroot = os.path.join(outfilepath,basename)
+ cl = '%s --noweb --bfile %s --recode --out %s ' % (plink,inpedfilepath,outroot)
+ p = subprocess.Popen(cl,shell=True,cwd=outfilepath)
+ retval = p.wait() # run plink
+
+
def main():
@@ -53,7 +52,7 @@
"""
nparm = 4
if len(sys.argv) < nparm:
- sys.stderr.write('PBED to LPED converter called with %s - needs %d parameters \n' % (sys.argv, nparm))
+ sys.stderr.write('## %s called with %s - needs %d parameters \n' % (myname,sys.argv,nparm))
sys.exit(1)
inpedfilepath = sys.argv[1]
outhtmlname = sys.argv[2]
@@ -63,18 +62,19 @@
except:
pass
plink = sys.argv[4]
- rgConv(inpedfilepath, outhtmlname, outfilepath, plink)
- f = file(outhtmlname, 'w')
+ rgConv(inpedfilepath,outhtmlname,outfilepath,plink)
+ f = file(outhtmlname,'w')
f.write(galhtmlprefix % prog)
flist = os.listdir(outfilepath)
- s = '## Rgenetics: http://bitbucket.org/rgalaxy Galaxy Tools %s %s' % (prog, timenow()) # becomes info
+ s = '## Rgenetics: http://bitbucket.org/rgalaxy Galaxy Tools %s %s' % (prog,timenow()) # becomes info
print s
f.write('<div>%s\n<ol>' % (s))
for i, data in enumerate( flist ):
- f.write('<li><a href="%s">%s</a></li>\n' % (os.path.split(data)[-1], os.path.split(data)[-1]))
+ f.write('<li><a href="%s">%s</a></li>\n' % (os.path.split(data)[-1],os.path.split(data)[-1]))
f.write("</div></body></html>")
f.close()
+
if __name__ == "__main__":
- main()
+ main()
diff -r da996e89dbfb8249deeeafd30489a62970f8af7d -r 708cab19708d8158044eb5b1048740734cb2bc2a lib/galaxy/datatypes/dataproviders/dataset.py
--- a/lib/galaxy/datatypes/dataproviders/dataset.py
+++ b/lib/galaxy/datatypes/dataproviders/dataset.py
@@ -465,8 +465,8 @@
def __iter__( self ):
for id_ in self.ids:
yield {
- 'id': id_,
- 'seq': self.source[ id_ ]
+ 'id' : id_,
+ 'seq' : self.source[ name ]
}
@@ -690,7 +690,7 @@
def __init__( self, dataset, **kwargs ):
#TODO: as samtools
raise NotImplementedError()
- super( BcftoolsDataProvider, self ).__init__( dataset, **kwargs )
+ super( BCFDataProvider, self ).__init__( dataset, **kwargs )
class BGzipTabixDataProvider( base.DataProvider ):
diff -r da996e89dbfb8249deeeafd30489a62970f8af7d -r 708cab19708d8158044eb5b1048740734cb2bc2a lib/galaxy/datatypes/display_applications/application.py
--- a/lib/galaxy/datatypes/display_applications/application.py
+++ b/lib/galaxy/datatypes/display_applications/application.py
@@ -244,12 +244,12 @@
link = DisplayApplicationLink.from_elem( link_elem, self )
if link:
self.links[ link.id ] = link
- try:
- for dynamic_links in elem.findall( 'dynamic_links' ):
+ for dynamic_links in elem.findall( 'dynamic_links' ):
+ try:
for link in DynamicDisplayApplicationBuilder( dynamic_links, self, self.app.datatypes_registry.build_sites ):
self.links[ link.id ] = link
- except Exception, e:
- log.error( "Error loading a set of Dynamic Display Application links: %s", e )
+ except Exception, e:
+ log.error( "Error loading a set of Dynamic Display Application links: %s", e )
def get_link( self, link_name, data, dataset_hash, user_hash, trans, app_kwds ):
#returns a link object with data knowledge to generate links
self._check_and_reload()
diff -r da996e89dbfb8249deeeafd30489a62970f8af7d -r 708cab19708d8158044eb5b1048740734cb2bc2a lib/galaxy/datatypes/metadata.py
--- a/lib/galaxy/datatypes/metadata.py
+++ b/lib/galaxy/datatypes/metadata.py
@@ -30,8 +30,7 @@
import logging
log = logging.getLogger(__name__)
-STATEMENTS = "__galaxy_statements__" # this is the name of the property in a Datatype class where new metadata spec element Statements are stored
-
+STATEMENTS = "__galaxy_statements__" #this is the name of the property in a Datatype class where new metadata spec element Statements are stored
class Statement( object ):
"""
@@ -44,17 +43,14 @@
self.target = target
def __call__( self, *args, **kwargs ):
- # get the locals dictionary of the frame object one down in the call stack (i.e. the Datatype class calling MetadataElement)
- class_locals = sys._getframe( 1 ).f_locals
- # get and set '__galaxy_statments__' to an empty list if not in locals dict
- statements = class_locals.setdefault( STATEMENTS, [] )
- # add Statement containing info to populate a MetadataElementSpec
- statements.append( ( self, args, kwargs ) )
+ class_locals = sys._getframe( 1 ).f_locals #get the locals dictionary of the frame object one down in the call stack (i.e. the Datatype class calling MetadataElement)
+ statements = class_locals.setdefault( STATEMENTS, [] ) #get and set '__galaxy_statments__' to an empty list if not in locals dict
+ statements.append( ( self, args, kwargs ) ) #add Statement containing info to populate a MetadataElementSpec
@classmethod
def process( cls, element ):
for statement, args, kwargs in getattr( element, STATEMENTS, [] ):
- statement.target( element, *args, **kwargs ) # statement.target is MetadataElementSpec, element is a Datatype class
+ statement.target( element, *args, **kwargs ) #statement.target is MetadataElementSpec, element is a Datatype class
class MetadataCollection( object ):
@@ -66,7 +62,7 @@
"""
def __init__(self, parent ):
self.parent = parent
- # initialize dict if needed
+ #initialize dict if needed
if self.parent._metadata is None:
self.parent._metadata = {}
@@ -76,10 +72,7 @@
return None
def set_parent( self, parent ):
- # use weakref to prevent a circular reference interfering with garbage
- # collection: hda/lda (parent) <--> MetadataCollection (self) ; needs to be
- # hashable, so cannot use proxy.
- self.__dict__["_parent"] = weakref.ref( parent )
+ self.__dict__["_parent"] = weakref.ref( parent ) # use weakref to prevent a circular reference interfering with garbage collection: hda/lda (parent) <--> MetadataCollection (self) ; needs to be hashable, so cannot use proxy.
parent = property( get_parent, set_parent )
@property
@@ -151,7 +144,7 @@
elif isinstance( json_dict, dict ):
JSONified_dict = json_dict
else:
- raise ValueError( "json_dict must be either a dictionary or a string, got %s." % ( type( json_dict ) ) )
+ raise ValueError( "json_dict must be either a dictionary or a string, got %s." % ( type( json_dict ) ) )
else:
raise ValueError( "You must provide either a filename or a json_dict" )
for name, spec in self.spec.items():
@@ -163,12 +156,12 @@
from_ext_kwds[ 'path_rewriter' ] = path_rewriter
dataset._metadata[ name ] = param.from_external_value( external_value, dataset, **from_ext_kwds )
elif name in dataset._metadata:
- # if the metadata value is not found in our externally set metadata but it has a value in the 'old'
- # metadata associated with our dataset, we'll delete it from our dataset's metadata dict
+ #if the metadata value is not found in our externally set metadata but it has a value in the 'old'
+ #metadata associated with our dataset, we'll delete it from our dataset's metadata dict
del dataset._metadata[ name ]
def to_JSON_dict( self, filename=None ):
- # galaxy.model.customtypes.json_encoder.encode()
+ #galaxy.model.customtypes.json_encoder.encode()
meta_dict = {}
dataset_meta_dict = self.parent._metadata
for name, spec in self.spec.items():
@@ -179,9 +172,7 @@
json.dump( meta_dict, open( filename, 'wb+' ) )
def __getstate__( self ):
- # cannot pickle a weakref item (self._parent), when
- # data._metadata_collection is None, it will be recreated on demand
- return None
+ return None #cannot pickle a weakref item (self._parent), when data._metadata_collection is None, it will be recreated on demand
class MetadataSpecCollection( odict ):
@@ -191,8 +182,8 @@
list. append() is also implemented for simplicity and does not
"append".
"""
- def __init__( self, dict=None ):
- odict.__init__( self, dict=None )
+ def __init__( self, dict = None ):
+ odict.__init__( self, dict = None )
def append( self, item ):
self[item.name] = item
@@ -233,8 +224,7 @@
return value
if self.spec.get("optional"):
checked = False
- if value:
- checked = "true"
+ if value: checked = "true"
checkbox = form_builder.CheckboxField( "is_" + self.spec.name, checked=checked )
return checkbox.get_html() + self.get_html_field( value=value, context=context, other_values=other_values, **kwd ).get_html()
else:
@@ -246,11 +236,11 @@
def to_safe_string( self, value ):
return sanitize_lists_to_string( self.to_string( value ) )
- def make_copy( self, value, target_context=None, source_context=None ):
+ def make_copy( self, value, target_context = None, source_context = None ):
return copy.deepcopy( value )
@classmethod
- def marshal( cls, value ):
+ def marshal ( cls, value ):
"""
This method should/can be overridden to convert the incoming
value to whatever type it is supposed to be.
@@ -295,9 +285,9 @@
Defines a metadata element and adds it to the metadata_spec (which
is a MetadataSpecCollection) of datatype.
"""
- def __init__( self, datatype, name=None, desc=None,
- param=MetadataParameter, default=None, no_value=None,
- visible=True, set_in_upload=False, **kwargs ):
+ def __init__( self, datatype,
+ name=None, desc=None, param=MetadataParameter, default=None, no_value = None,
+ visible=True, set_in_upload = False, **kwargs ):
self.name = name
self.desc = desc or name
self.default = default
@@ -327,7 +317,7 @@
return self.param.unwrap( value )
def __str__( self ):
- # TODO??: assuming param is the class of this MetadataElementSpec - add the plain class name for that
+ #TODO??: assuming param is the class of this MetadataElementSpec - add the plain class name for that
spec_dict = dict( param_class=self.param.__class__.__name__ )
spec_dict.update( self.__dict__ )
return ( "{name} ({param_class}): {desc}, defaults to '{default}'".format( **spec_dict ) )
@@ -341,7 +331,6 @@
MetadataParameter sub-classes.
"""
-
class SelectParameter( MetadataParameter ):
def __init__( self, spec ):
MetadataParameter.__init__( self, spec )
@@ -389,24 +378,18 @@
return MetadataParameter.get_html( self, value, context=context, other_values=other_values, values=values, **kwd )
def wrap( self, value, session ):
- # do we really need this (wasteful)? - yes because we are not sure that
- # all existing selects have been stored previously as lists. Also this
- # will handle the case where defaults/no_values are specified and are
- # single non-list values.
- value = self.marshal( value )
+ value = self.marshal( value ) #do we really need this (wasteful)? - yes because we are not sure that all existing selects have been stored previously as lists. Also this will handle the case where defaults/no_values are specified and are single non-list values.
if self.multiple:
return value
elif value:
- return value[0] # single select, only return the first value
+ return value[0] #single select, only return the first value
return None
@classmethod
def marshal( cls, value ):
# Store select as list, even if single item
- if value is None:
- return []
- if not isinstance( value, list ):
- return [value]
+ if value is None: return []
+ if not isinstance( value, list ): return [value]
return value
@@ -499,7 +482,7 @@
class DictParameter( MetadataParameter ):
def to_string( self, value ):
- return json.dumps( value )
+ return json.dumps( value )
def to_safe_string( self, value ):
# We do not sanitize json dicts
@@ -560,7 +543,7 @@
def make_copy( self, value, target_context, source_context ):
value = self.wrap( value, object_session( target_context.parent ) )
if value:
- new_value = galaxy.model.MetadataFile( dataset=target_context.parent, name=self.spec.name )
+ new_value = galaxy.model.MetadataFile( dataset = target_context.parent, name = self.spec.name )
object_session( target_context.parent ).add( new_value )
object_session( target_context.parent ).flush()
shutil.copy( value.file_name, new_value.file_name )
@@ -582,18 +565,14 @@
if isinstance( value, MetadataTempFile ):
mf = parent.metadata.get( self.spec.name, None)
if mf is None:
- mf = self.new_file( dataset=parent, **value.kwds )
+ mf = self.new_file( dataset = parent, **value.kwds )
# Ensure the metadata file gets updated with content
file_name = value.file_name
if path_rewriter:
# Job may have run with a different (non-local) tmp/working
# directory. Correct.
file_name = path_rewriter( file_name )
- parent.dataset.object_store.update_from_file( mf,
- file_name=file_name,
- extra_dir='_metadata_files',
- extra_dir_at_root=True,
- alt_name=os.path.basename(mf.file_name) )
+ parent.dataset.object_store.update_from_file( mf, file_name=file_name, extra_dir='_metadata_files', extra_dir_at_root=True, alt_name=os.path.basename(mf.file_name) )
os.unlink( file_name )
value = mf.id
return value
@@ -608,22 +587,22 @@
value = MetadataTempFile.to_JSON( value )
return value
- def new_file( self, dataset=None, **kwds ):
+ def new_file( self, dataset = None, **kwds ):
if object_session( dataset ):
- mf = galaxy.model.MetadataFile( name=self.spec.name, dataset=dataset, **kwds )
+ mf = galaxy.model.MetadataFile( name = self.spec.name, dataset = dataset, **kwds )
object_session( dataset ).add( mf )
- object_session( dataset ).flush() # flush to assign id
+ object_session( dataset ).flush() #flush to assign id
return mf
else:
- # we need to make a tmp file that is accessable to the head node,
- # we will be copying its contents into the MetadataFile objects filename after restoring from JSON
- # we do not include 'dataset' in the kwds passed, as from_JSON_value() will handle this for us
+ #we need to make a tmp file that is accessable to the head node,
+ #we will be copying its contents into the MetadataFile objects filename after restoring from JSON
+ #we do not include 'dataset' in the kwds passed, as from_JSON_value() will handle this for us
return MetadataTempFile( **kwds )
-# This class is used when a database file connection is not available
+#This class is used when a database file connection is not available
class MetadataTempFile( object ):
- tmp_dir = 'database/tmp' # this should be overwritten as necessary in calling scripts
+ tmp_dir = 'database/tmp' #this should be overwritten as necessary in calling scripts
def __init__( self, **kwds ):
self.kwds = kwds
@@ -632,19 +611,17 @@
@property
def file_name( self ):
if self._filename is None:
- # we need to create a tmp file, accessable across all nodes/heads, save the name, and return it
- self._filename = abspath( tempfile.NamedTemporaryFile( dir=self.tmp_dir, prefix="metadata_temp_file_" ).name )
- open( self._filename, 'wb+' ) # create an empty file, so it can't be reused using tempfile
+ #we need to create a tmp file, accessable across all nodes/heads, save the name, and return it
+ self._filename = abspath( tempfile.NamedTemporaryFile( dir = self.tmp_dir, prefix = "metadata_temp_file_" ).name )
+ open( self._filename, 'wb+' ) #create an empty file, so it can't be reused using tempfile
return self._filename
def to_JSON( self ):
- return { '__class__': self.__class__.__name__,
- 'filename': self.file_name,
- 'kwds': self.kwds }
+ return { '__class__':self.__class__.__name__, 'filename':self.file_name, 'kwds':self.kwds }
@classmethod
def from_JSON( cls, json_dict ):
- # need to ensure our keywords are not unicode
+ #need to ensure our keywords are not unicode
rval = cls( **stringify_dictionary_keys( json_dict['kwds'] ) )
rval._filename = json_dict['filename']
return rval
@@ -666,16 +643,10 @@
log.debug( 'Failed to cleanup MetadataTempFile temp files from %s: %s' % ( filename, e ) )
+#Class with methods allowing set_meta() to be called externally to the Galaxy head
class JobExternalOutputMetadataWrapper( object ):
- """
- Class with methods allowing set_meta() to be called externally to the
- Galaxy head.
- This class allows access to external metadata filenames for all outputs
- associated with a job.
- We will use JSON as the medium of exchange of information, except for the
- DatasetInstance object which will use pickle (in the future this could be
- JSONified as well)
- """
+ #this class allows access to external metadata filenames for all outputs associated with a job
+ #We will use JSON as the medium of exchange of information, except for the DatasetInstance object which will use pickle (in the future this could be JSONified as well)
def __init__( self, job ):
self.job_id = job.id
@@ -683,12 +654,12 @@
def get_output_filenames_by_dataset( self, dataset, sa_session ):
if isinstance( dataset, galaxy.model.HistoryDatasetAssociation ):
return sa_session.query( galaxy.model.JobExternalOutputMetadata ) \
- .filter_by( job_id=self.job_id, history_dataset_association_id=dataset.id ) \
- .first() # there should only be one or None
+ .filter_by( job_id = self.job_id, history_dataset_association_id = dataset.id ) \
+ .first() #there should only be one or None
elif isinstance( dataset, galaxy.model.LibraryDatasetDatasetAssociation ):
return sa_session.query( galaxy.model.JobExternalOutputMetadata ) \
- .filter_by( job_id=self.job_id, library_dataset_dataset_association_id=dataset.id ) \
- .first() # there should only be one or None
+ .filter_by( job_id = self.job_id, library_dataset_dataset_association_id = dataset.id ) \
+ .first() #there should only be one or None
return None
def get_dataset_metadata_key( self, dataset ):
@@ -696,12 +667,8 @@
# need to make different keys for them, since ids can overlap
return "%s_%d" % ( dataset.__class__.__name__, dataset.id )
- def setup_external_metadata( self, datasets, sa_session, exec_dir=None,
- tmp_dir=None, dataset_files_path=None,
- output_fnames=None, config_root=None,
- config_file=None, datatypes_config=None,
- job_metadata=None, compute_tmp_dir=None,
- include_command=True, kwds=None ):
+ def setup_external_metadata( self, datasets, sa_session, exec_dir=None, tmp_dir=None, dataset_files_path=None,
+ output_fnames=None, config_root=None, config_file=None, datatypes_config=None, job_metadata=None, compute_tmp_dir=None, kwds=None ):
kwds = kwds or {}
if tmp_dir is None:
tmp_dir = MetadataTempFile.tmp_dir
@@ -715,13 +682,13 @@
compute_path = os.path.join(compute_tmp_dir, path_relative)
return compute_path
- # fill in metadata_files_dict and return the command with args required to set metadata
+ #fill in metadata_files_dict and return the command with args required to set metadata
def __metadata_files_list_to_cmd_line( metadata_files ):
def __get_filename_override():
if output_fnames:
for dataset_path in output_fnames:
- if dataset_path.real_path == metadata_files.dataset.file_name:
- return dataset_path.false_path or dataset_path.real_path
+ if dataset_path.false_path and dataset_path.real_path == metadata_files.dataset.file_name:
+ return dataset_path.false_path
return ""
line = "%s,%s,%s,%s,%s,%s" % (
metadata_path_on_compute(metadata_files.filename_in),
@@ -746,44 +713,43 @@
metadata_files_list = []
for dataset in datasets:
key = self.get_dataset_metadata_key( dataset )
- # future note:
- # wonkiness in job execution causes build command line to be called more than once
- # when setting metadata externally, via 'auto-detect' button in edit attributes, etc.,
- # we don't want to overwrite (losing the ability to cleanup) our existing dataset keys and files,
- # so we will only populate the dictionary once
+ #future note:
+ #wonkiness in job execution causes build command line to be called more than once
+ #when setting metadata externally, via 'auto-detect' button in edit attributes, etc.,
+ #we don't want to overwrite (losing the ability to cleanup) our existing dataset keys and files,
+ #so we will only populate the dictionary once
metadata_files = self.get_output_filenames_by_dataset( dataset, sa_session )
if not metadata_files:
- metadata_files = galaxy.model.JobExternalOutputMetadata( dataset=dataset)
+ metadata_files = galaxy.model.JobExternalOutputMetadata( dataset = dataset)
metadata_files.job_id = self.job_id
- # we are using tempfile to create unique filenames, tempfile always returns an absolute path
- # we will use pathnames relative to the galaxy root, to accommodate instances where the galaxy root
- # is located differently, i.e. on a cluster node with a different filesystem structure
+ #we are using tempfile to create unique filenames, tempfile always returns an absolute path
+ #we will use pathnames relative to the galaxy root, to accommodate instances where the galaxy root
+ #is located differently, i.e. on a cluster node with a different filesystem structure
- # file to store existing dataset
- metadata_files.filename_in = abspath( tempfile.NamedTemporaryFile( dir=tmp_dir, prefix="metadata_in_%s_" % key ).name )
+ #file to store existing dataset
+ metadata_files.filename_in = abspath( tempfile.NamedTemporaryFile( dir = tmp_dir, prefix = "metadata_in_%s_" % key ).name )
- # FIXME: HACK
- # sqlalchemy introduced 'expire_on_commit' flag for sessionmaker at version 0.5x
- # This may be causing the dataset attribute of the dataset_association object to no-longer be loaded into memory when needed for pickling.
- # For now, we'll simply 'touch' dataset_association.dataset to force it back into memory.
- dataset.dataset # force dataset_association.dataset to be loaded before pickling
- # A better fix could be setting 'expire_on_commit=False' on the session, or modifying where commits occur, or ?
+ #FIXME: HACK
+ #sqlalchemy introduced 'expire_on_commit' flag for sessionmaker at version 0.5x
+ #This may be causing the dataset attribute of the dataset_association object to no-longer be loaded into memory when needed for pickling.
+ #For now, we'll simply 'touch' dataset_association.dataset to force it back into memory.
+ dataset.dataset #force dataset_association.dataset to be loaded before pickling
+ #A better fix could be setting 'expire_on_commit=False' on the session, or modifying where commits occur, or ?
cPickle.dump( dataset, open( metadata_files.filename_in, 'wb+' ) )
- # file to store metadata results of set_meta()
- metadata_files.filename_out = abspath( tempfile.NamedTemporaryFile( dir=tmp_dir, prefix="metadata_out_%s_" % key ).name )
- open( metadata_files.filename_out, 'wb+' ) # create the file on disk, so it cannot be reused by tempfile (unlikely, but possible)
- # file to store a 'return code' indicating the results of the set_meta() call
- # results code is like (True/False - if setting metadata was successful/failed , exception or string of reason of success/failure )
- metadata_files.filename_results_code = abspath( tempfile.NamedTemporaryFile( dir=tmp_dir, prefix="metadata_results_%s_" % key ).name )
- # create the file on disk, so it cannot be reused by tempfile (unlikely, but possible)
- json.dump( ( False, 'External set_meta() not called' ), open( metadata_files.filename_results_code, 'wb+' ) )
- # file to store kwds passed to set_meta()
- metadata_files.filename_kwds = abspath( tempfile.NamedTemporaryFile( dir=tmp_dir, prefix="metadata_kwds_%s_" % key ).name )
+ #file to store metadata results of set_meta()
+ metadata_files.filename_out = abspath( tempfile.NamedTemporaryFile( dir = tmp_dir, prefix = "metadata_out_%s_" % key ).name )
+ open( metadata_files.filename_out, 'wb+' ) # create the file on disk, so it cannot be reused by tempfile (unlikely, but possible)
+ #file to store a 'return code' indicating the results of the set_meta() call
+ #results code is like (True/False - if setting metadata was successful/failed , exception or string of reason of success/failure )
+ metadata_files.filename_results_code = abspath( tempfile.NamedTemporaryFile( dir = tmp_dir, prefix = "metadata_results_%s_" % key ).name )
+ json.dump( ( False, 'External set_meta() not called' ), open( metadata_files.filename_results_code, 'wb+' ) ) # create the file on disk, so it cannot be reused by tempfile (unlikely, but possible)
+ #file to store kwds passed to set_meta()
+ metadata_files.filename_kwds = abspath( tempfile.NamedTemporaryFile( dir = tmp_dir, prefix = "metadata_kwds_%s_" % key ).name )
json.dump( kwds, open( metadata_files.filename_kwds, 'wb+' ), ensure_ascii=True )
- # existing metadata file parameters need to be overridden with cluster-writable file locations
- metadata_files.filename_override_metadata = abspath( tempfile.NamedTemporaryFile( dir=tmp_dir, prefix="metadata_override_%s_" % key ).name )
- open( metadata_files.filename_override_metadata, 'wb+' ) # create the file on disk, so it cannot be reused by tempfile (unlikely, but possible)
+ #existing metadata file parameters need to be overridden with cluster-writable file locations
+ metadata_files.filename_override_metadata = abspath( tempfile.NamedTemporaryFile( dir = tmp_dir, prefix = "metadata_override_%s_" % key ).name )
+ open( metadata_files.filename_override_metadata, 'wb+' ) # create the file on disk, so it cannot be reused by tempfile (unlikely, but possible)
override_metadata = []
for meta_key, spec_value in dataset.metadata.spec.iteritems():
if isinstance( spec_value.param, FileParameter ) and dataset.metadata.get( meta_key, None ) is not None:
@@ -791,27 +757,17 @@
shutil.copy( dataset.metadata.get( meta_key, None ).file_name, metadata_temp.file_name )
override_metadata.append( ( meta_key, metadata_temp.to_JSON() ) )
json.dump( override_metadata, open( metadata_files.filename_override_metadata, 'wb+' ) )
- # add to session and flush
+ #add to session and flush
sa_session.add( metadata_files )
sa_session.flush()
metadata_files_list.append( metadata_files )
- args = "%s %s %s" % ( datatypes_config,
- job_metadata,
- " ".join( map( __metadata_files_list_to_cmd_line, metadata_files_list ) ) )
- if include_command:
- # return command required to build
- fd, fp = tempfile.mkstemp( suffix='.py', dir=tmp_dir, prefix="set_metadata_" )
- metadata_script_file = abspath( fp )
- os.fdopen( fd, 'w' ).write( 'from galaxy.metadata.set_metadata import set_metadata; set_metadata()' )
- return "python %s %s" % ( metadata_script_file, args )
- else:
- # return args to galaxy.metadata.set_metadata required to build
- return args
+ #return command required to build
+ return "%s %s %s %s %s %s %s %s" % ( os.path.join( exec_dir, 'set_metadata.sh' ), dataset_files_path, compute_tmp_dir or tmp_dir, config_root, config_file, datatypes_config, job_metadata, " ".join( map( __metadata_files_list_to_cmd_line, metadata_files_list ) ) )
def external_metadata_set_successfully( self, dataset, sa_session ):
metadata_files = self.get_output_filenames_by_dataset( dataset, sa_session )
if not metadata_files:
- return False # this file doesn't exist
+ return False # this file doesn't exist
rval, rstring = json.load( open( metadata_files.filename_results_code ) )
if not rval:
log.debug( 'setting metadata externally failed for %s %s: %s' % ( dataset.__class__.__name__, dataset.id, rstring ) )
@@ -820,15 +776,11 @@
def cleanup_external_metadata( self, sa_session ):
log.debug( 'Cleaning up external metadata files' )
for metadata_files in sa_session.query( galaxy.model.Job ).get( self.job_id ).external_output_metadata:
- # we need to confirm that any MetadataTempFile files were removed, if not we need to remove them
- # can occur if the job was stopped before completion, but a MetadataTempFile is used in the set_meta
+ #we need to confirm that any MetadataTempFile files were removed, if not we need to remove them
+ #can occur if the job was stopped before completion, but a MetadataTempFile is used in the set_meta
MetadataTempFile.cleanup_from_JSON_dict_filename( metadata_files.filename_out )
dataset_key = self.get_dataset_metadata_key( metadata_files.dataset )
- for key, fname in [ ( 'filename_in', metadata_files.filename_in ),
- ( 'filename_out', metadata_files.filename_out ),
- ( 'filename_results_code', metadata_files.filename_results_code ),
- ( 'filename_kwds', metadata_files.filename_kwds ),
- ( 'filename_override_metadata', metadata_files.filename_override_metadata ) ]:
+ for key, fname in [ ( 'filename_in', metadata_files.filename_in ), ( 'filename_out', metadata_files.filename_out ), ( 'filename_results_code', metadata_files.filename_results_code ), ( 'filename_kwds', metadata_files.filename_kwds ), ( 'filename_override_metadata', metadata_files.filename_override_metadata ) ]:
try:
os.remove( fname )
except Exception, e:
diff -r da996e89dbfb8249deeeafd30489a62970f8af7d -r 708cab19708d8158044eb5b1048740734cb2bc2a lib/galaxy/datatypes/registry.py
--- a/lib/galaxy/datatypes/registry.py
+++ b/lib/galaxy/datatypes/registry.py
@@ -482,6 +482,14 @@
data.init_meta( copy_from=data )
return data
+ def old_change_datatype( self, data, ext ):
+ """Creates and returns a new datatype based on an existing data and an extension"""
+ newdata = factory( ext )( id=data.id )
+ for key, value in data.__dict__.items():
+ setattr( newdata, key, value )
+ newdata.ext = ext
+ return newdata
+
def load_datatype_converters( self, toolbox, installed_repository_dict=None, deactivate=False ):
"""
If deactivate is False, add datatype converters from self.converters or self.proprietary_converters
@@ -645,15 +653,12 @@
<requirement type="package">samtools</requirement></requirements><action module="galaxy.tools.actions.metadata" class="SetMetadataToolAction"/>
- <command>python $set_metadata $__SET_EXTERNAL_METADATA_COMMAND_LINE__</command>
+ <command>$__SET_EXTERNAL_METADATA_COMMAND_LINE__</command><inputs><param format="data" name="input1" type="data" label="File to set metadata on."/><param name="__ORIGINAL_DATASET_STATE__" type="hidden" value=""/><param name="__SET_EXTERNAL_METADATA_COMMAND_LINE__" type="hidden" value=""/></inputs>
- <configfiles>
- <configfile name="set_metadata">from galaxy.metadata.set_metadata import set_metadata; set_metadata()</configfile>
- </configfiles></tool>
"""
tmp_name = tempfile.NamedTemporaryFile()
This diff is so big that we needed to truncate the remainder.
https://bitbucket.org/galaxy/galaxy-central/commits/6395e7035143/
Changeset: 6395e7035143
User: natefoo
Date: 2015-03-17 02:50:21+00:00
Summary: Merge v15.03.1 to default
Affected #: 27 files
diff -r 708cab19708d8158044eb5b1048740734cb2bc2a -r 6395e70351435911f7ab75863c1b48b54e20a919 .hgtags
--- a/.hgtags
+++ b/.hgtags
@@ -23,3 +23,6 @@
782fa60fc65488aea0c618d723e9a63d42caf865 latest_2014.10.06
2e8dd2949dd3eee0f56f9a3a5ebf1b2baca24aee release_2015.01.13
5cdf5dc395f16f47e88a5567775dcd21b4906f08 latest_2015.01.13
+738f4ad0eb62d5e4bcf0d22a2722ee6c570382b9 v15.03
+9f40ff3e3fe1f7fed7e055d4269fe83199cf5ed2 v15.03.1
+9f40ff3e3fe1f7fed7e055d4269fe83199cf5ed2 latest_15.03
diff -r 708cab19708d8158044eb5b1048740734cb2bc2a -r 6395e70351435911f7ab75863c1b48b54e20a919 client/galaxy/scripts/mvc/history/history-model.js
--- a/client/galaxy/scripts/mvc/history/history-model.js
+++ b/client/galaxy/scripts/mvc/history/history-model.js
@@ -208,7 +208,7 @@
/** purge this history, _Mark_ing it as purged and removing all dataset data from the server */
purge : function( options ){
if( this.get( 'purged' ) ){ return jQuery.when(); }
- return this.save( { purged: true }, options );
+ return this.save( { deleted: true, purged: true }, options );
},
/** save this history, _Mark_ing it as undeleted */
undelete : function( options ){
diff -r 708cab19708d8158044eb5b1048740734cb2bc2a -r 6395e70351435911f7ab75863c1b48b54e20a919 lib/galaxy/datatypes/metadata.py
--- a/lib/galaxy/datatypes/metadata.py
+++ b/lib/galaxy/datatypes/metadata.py
@@ -667,11 +667,20 @@
# need to make different keys for them, since ids can overlap
return "%s_%d" % ( dataset.__class__.__name__, dataset.id )
- def setup_external_metadata( self, datasets, sa_session, exec_dir=None, tmp_dir=None, dataset_files_path=None,
- output_fnames=None, config_root=None, config_file=None, datatypes_config=None, job_metadata=None, compute_tmp_dir=None, kwds=None ):
+ def setup_external_metadata( self, datasets, sa_session, exec_dir=None,
+ tmp_dir=None, dataset_files_path=None,
+ output_fnames=None, config_root=None,
+ config_file=None, datatypes_config=None,
+ job_metadata=None, compute_tmp_dir=None,
+ include_command=True, kwds=None ):
kwds = kwds or {}
if tmp_dir is None:
tmp_dir = MetadataTempFile.tmp_dir
+ else:
+ MetadataTempFile.tmp_dir = tmp_dir
+
+ if not os.path.exists(tmp_dir):
+ os.makedirs(tmp_dir)
# path is calculated for Galaxy, may be different on compute - rewrite
# for the compute server.
@@ -687,8 +696,8 @@
def __get_filename_override():
if output_fnames:
for dataset_path in output_fnames:
- if dataset_path.false_path and dataset_path.real_path == metadata_files.dataset.file_name:
- return dataset_path.false_path
+ if dataset_path.real_path == metadata_files.dataset.file_name:
+ return dataset_path.false_path or dataset_path.real_path
return ""
line = "%s,%s,%s,%s,%s,%s" % (
metadata_path_on_compute(metadata_files.filename_in),
@@ -761,8 +770,18 @@
sa_session.add( metadata_files )
sa_session.flush()
metadata_files_list.append( metadata_files )
- #return command required to build
- return "%s %s %s %s %s %s %s %s" % ( os.path.join( exec_dir, 'set_metadata.sh' ), dataset_files_path, compute_tmp_dir or tmp_dir, config_root, config_file, datatypes_config, job_metadata, " ".join( map( __metadata_files_list_to_cmd_line, metadata_files_list ) ) )
+ args = "%s %s %s" % ( datatypes_config,
+ job_metadata,
+ " ".join( map( __metadata_files_list_to_cmd_line, metadata_files_list ) ) )
+ if include_command:
+ #return command required to build
+ fd, fp = tempfile.mkstemp( suffix='.py', dir = tmp_dir, prefix = "set_metadata_" )
+ metadata_script_file = abspath( fp )
+ os.fdopen( fd, 'w' ).write( 'from galaxy_ext.metadata.set_metadata import set_metadata; set_metadata()' )
+ return "python %s %s" % ( metadata_path_on_compute(metadata_script_file), args )
+ else:
+ # return args to galaxy_ext.metadata.set_metadata required to build
+ return args
def external_metadata_set_successfully( self, dataset, sa_session ):
metadata_files = self.get_output_filenames_by_dataset( dataset, sa_session )
diff -r 708cab19708d8158044eb5b1048740734cb2bc2a -r 6395e70351435911f7ab75863c1b48b54e20a919 lib/galaxy/datatypes/registry.py
--- a/lib/galaxy/datatypes/registry.py
+++ b/lib/galaxy/datatypes/registry.py
@@ -653,12 +653,15 @@
<requirement type="package">samtools</requirement></requirements><action module="galaxy.tools.actions.metadata" class="SetMetadataToolAction"/>
- <command>$__SET_EXTERNAL_METADATA_COMMAND_LINE__</command>
+ <command>python $set_metadata $__SET_EXTERNAL_METADATA_COMMAND_LINE__</command><inputs><param format="data" name="input1" type="data" label="File to set metadata on."/><param name="__ORIGINAL_DATASET_STATE__" type="hidden" value=""/><param name="__SET_EXTERNAL_METADATA_COMMAND_LINE__" type="hidden" value=""/></inputs>
+ <configfiles>
+ <configfile name="set_metadata">from galaxy_ext.metadata.set_metadata import set_metadata; set_metadata()</configfile>
+ </configfiles></tool>
"""
tmp_name = tempfile.NamedTemporaryFile()
diff -r 708cab19708d8158044eb5b1048740734cb2bc2a -r 6395e70351435911f7ab75863c1b48b54e20a919 lib/galaxy/jobs/__init__.py
--- a/lib/galaxy/jobs/__init__.py
+++ b/lib/galaxy/jobs/__init__.py
@@ -831,8 +831,7 @@
self.command_line, self.extra_filenames = tool_evaluator.build()
# FIXME: for now, tools get Galaxy's lib dir in their path
- if self.command_line and self.command_line.startswith( 'python' ):
- self.galaxy_lib_dir = os.path.abspath( "lib" ) # cwd = galaxy root
+ self.galaxy_lib_dir = os.path.abspath( "lib" ) # cwd = galaxy root
# Shell fragment to inject dependencies
self.dependency_shell_commands = self.tool.build_dependency_shell_commands()
# We need command_line persisted to the db in order for Galaxy to re-queue the job
diff -r 708cab19708d8158044eb5b1048740734cb2bc2a -r 6395e70351435911f7ab75863c1b48b54e20a919 lib/galaxy/jobs/actions/post.py
--- a/lib/galaxy/jobs/actions/post.py
+++ b/lib/galaxy/jobs/actions/post.py
@@ -156,7 +156,7 @@
# "replace" option so you can replace a portion of the name,
# support multiple #{name} in one rename action...
- if new_name.find("#{") > -1:
+ while new_name.find("#{") > -1:
to_be_replaced = ""
# This assumes a single instance of #{variable} will exist
start_pos = new_name.find("#{") + 2
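The if-to-while change above makes the rename post-job action substitute every #{name} token in the new name rather than only the first. A minimal sketch of the looping substitution (standalone, not Galaxy's implementation; the replacement lookup is illustrative):

def substitute_tokens( new_name, replacements ):
    # Loop until no "#{...}" tokens remain, replacing each with its value
    while new_name.find( "#{" ) > -1:
        start_pos = new_name.find( "#{" ) + 2
        end_pos = new_name.find( "}", start_pos )
        token = new_name[ start_pos:end_pos ]
        new_name = new_name.replace( "#{%s}" % token, replacements.get( token, "" ) )
    return new_name

print( substitute_tokens( "#{input1} mapped to #{input2}", { "input1": "reads", "input2": "hg19" } ) )
# reads mapped to hg19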
diff -r 708cab19708d8158044eb5b1048740734cb2bc2a -r 6395e70351435911f7ab75863c1b48b54e20a919 lib/galaxy/jobs/command_factory.py
--- a/lib/galaxy/jobs/command_factory.py
+++ b/lib/galaxy/jobs/command_factory.py
@@ -145,7 +145,7 @@
metadata_command = metadata_command.strip()
if metadata_command:
commands_builder.capture_return_code()
- commands_builder.append_command("cd %s; %s" % (exec_dir, metadata_command))
+ commands_builder.append_command(metadata_command)
def __copy_if_exists_command(work_dir_output):
diff -r 708cab19708d8158044eb5b1048740734cb2bc2a -r 6395e70351435911f7ab75863c1b48b54e20a919 lib/galaxy/jobs/handler.py
--- a/lib/galaxy/jobs/handler.py
+++ b/lib/galaxy/jobs/handler.py
@@ -680,6 +680,9 @@
except Empty:
pass
for job, error_msg in jobs_to_check:
+ if job.state != job.states.DELETED_NEW and job.finished:
+ log.debug('Job %s already finished, not deleting or stopping', job.id)
+ continue
final_state = job.states.DELETED
if error_msg is not None:
final_state = job.states.ERROR
diff -r 708cab19708d8158044eb5b1048740734cb2bc2a -r 6395e70351435911f7ab75863c1b48b54e20a919 lib/galaxy/managers/hdas.py
--- a/lib/galaxy/managers/hdas.py
+++ b/lib/galaxy/managers/hdas.py
@@ -156,11 +156,12 @@
self.dataset_manager.error_unless_dataset_purge_allowed( trans, hda )
super( HDAManager, self ).purge( trans, hda, flush=flush )
- # signal to stop the creating job?
if hda.creating_job_associations:
job = hda.creating_job_associations[0].job
- job.mark_deleted( self.app.config.track_jobs_in_database )
- self.app.job_manager.job_stop_queue.put( job.id )
+ if not job.finished:
+ # signal to stop the creating job
+ job.mark_deleted( self.app.config.track_jobs_in_database )
+ self.app.job_manager.job_stop_queue.put( job.id )
# more importantly, purge dataset as well
if hda.dataset.user_can_purge:
diff -r 708cab19708d8158044eb5b1048740734cb2bc2a -r 6395e70351435911f7ab75863c1b48b54e20a919 lib/galaxy/metadata
--- /dev/null
+++ b/lib/galaxy/metadata
@@ -0,0 +1,1 @@
+../galaxy_ext/metadata
\ No newline at end of file
diff -r 708cab19708d8158044eb5b1048740734cb2bc2a -r 6395e70351435911f7ab75863c1b48b54e20a919 lib/galaxy/model/__init__.py
--- a/lib/galaxy/model/__init__.py
+++ b/lib/galaxy/model/__init__.py
@@ -517,6 +517,9 @@
"""
Mark this job as deleted, and mark any output datasets as discarded.
"""
+ if self.finished:
+ # Do not modify the state/outputs of jobs that are already terminal
+ return
if track_jobs_in_database:
self.state = Job.states.DELETED_NEW
else:
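Several hunks in this changeset (handler.py, hdas.py, model/__init__.py, the history controller) add the same guard: a job that has already reached a terminal state is left alone instead of being re-marked deleted or pushed onto the stop queue. A hedged sketch of the pattern; the terminal-state set below is illustrative, not Galaxy's exact list:

TERMINAL_STATES = frozenset( [ 'ok', 'error', 'deleted' ] )  # illustrative subset

class FakeJob( object ):
    def __init__( self, state ):
        self.state = state

    @property
    def finished( self ):
        # assumed semantics: True once the job is in a terminal state
        return self.state in TERMINAL_STATES

    def mark_deleted( self ):
        if self.finished:
            # do not modify the state/outputs of jobs that are already terminal
            return
        self.state = 'deleted'

job = FakeJob( 'ok' )
job.mark_deleted()
print( job.state )  # still 'ok'; a queued or running job would have become 'deleted'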
diff -r 708cab19708d8158044eb5b1048740734cb2bc2a -r 6395e70351435911f7ab75863c1b48b54e20a919 lib/galaxy/objectstore/__init__.py
--- a/lib/galaxy/objectstore/__init__.py
+++ b/lib/galaxy/objectstore/__init__.py
@@ -392,39 +392,39 @@
super(NestedObjectStore, self).shutdown()
def exists(self, obj, **kwargs):
- return self.__call_method('exists', obj, False, False, **kwargs)
+ return self._call_method('exists', obj, False, False, **kwargs)
def file_ready(self, obj, **kwargs):
- return self.__call_method('file_ready', obj, False, False, **kwargs)
+ return self._call_method('file_ready', obj, False, False, **kwargs)
def create(self, obj, **kwargs):
random.choice(self.backends.values()).create(obj, **kwargs)
def empty(self, obj, **kwargs):
- return self.__call_method('empty', obj, True, False, **kwargs)
+ return self._call_method('empty', obj, True, False, **kwargs)
def size(self, obj, **kwargs):
- return self.__call_method('size', obj, 0, False, **kwargs)
+ return self._call_method('size', obj, 0, False, **kwargs)
def delete(self, obj, **kwargs):
- return self.__call_method('delete', obj, False, False, **kwargs)
+ return self._call_method('delete', obj, False, False, **kwargs)
def get_data(self, obj, **kwargs):
- return self.__call_method('get_data', obj, ObjectNotFound, True, **kwargs)
+ return self._call_method('get_data', obj, ObjectNotFound, True, **kwargs)
def get_filename(self, obj, **kwargs):
- return self.__call_method('get_filename', obj, ObjectNotFound, True, **kwargs)
+ return self._call_method('get_filename', obj, ObjectNotFound, True, **kwargs)
def update_from_file(self, obj, **kwargs):
if kwargs.get('create', False):
self.create(obj, **kwargs)
kwargs['create'] = False
- return self.__call_method('update_from_file', obj, ObjectNotFound, True, **kwargs)
+ return self._call_method('update_from_file', obj, ObjectNotFound, True, **kwargs)
def get_object_url(self, obj, **kwargs):
- return self.__call_method('get_object_url', obj, None, False, **kwargs)
+ return self._call_method('get_object_url', obj, None, False, **kwargs)
- def __call_method(self, method, obj, default, default_is_exception, **kwargs):
+ def _call_method(self, method, obj, default, default_is_exception, **kwargs):
"""
Check all children object stores for the first one with the dataset
"""
@@ -432,7 +432,7 @@
if store.exists(obj, **kwargs):
return store.__getattribute__(method)(obj, **kwargs)
if default_is_exception:
- raise default( 'objectstore, __call_method failed: %s on %s, kwargs: %s'
+ raise default( 'objectstore, _call_method failed: %s on %s, kwargs: %s'
% ( method, str( obj ), str( kwargs ) ) )
else:
return default
@@ -535,12 +535,12 @@
log.debug("Using preferred backend '%s' for creation of %s %s" % (obj.object_store_id, obj.__class__.__name__, obj.id))
self.backends[obj.object_store_id].create(obj, **kwargs)
- def __call_method(self, method, obj, default, default_is_exception, **kwargs):
+ def _call_method(self, method, obj, default, default_is_exception, **kwargs):
object_store_id = self.__get_store_id_for(obj, **kwargs)
if object_store_id is not None:
return self.backends[object_store_id].__getattribute__(method)(obj, **kwargs)
if default_is_exception:
- raise default( 'objectstore, __call_method failed: %s on %s, kwargs: %s'
+ raise default( 'objectstore, _call_method failed: %s on %s, kwargs: %s'
% ( method, str( obj ), str( kwargs ) ) )
else:
return default
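The `__call_method` to `_call_method` rename above is more than style: with a double leading underscore, Python name-mangles the attribute per defining class, so the subclass's `__call_method` never overrides the version that the parent's public methods actually call. A minimal demonstration of the mangling pitfall (class and method names are stand-ins):

class Nested( object ):
    def exists( self, obj ):
        return self.__call_method( 'exists', obj )   # compiled as self._Nested__call_method

    def __call_method( self, method, obj ):
        return 'Nested handled %s' % method

class Distributed( Nested ):
    def __call_method( self, method, obj ):          # mangled to _Distributed__call_method
        return 'Distributed handled %s' % method

print( Distributed().exists( None ) )
# prints 'Nested handled exists' -- the subclass override is silently bypassed.
# A single leading underscore restores normal attribute lookup and dispatches
# to the subclass, which is what the rename accomplishes.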
diff -r 708cab19708d8158044eb5b1048740734cb2bc2a -r 6395e70351435911f7ab75863c1b48b54e20a919 lib/galaxy/tools/__init__.py
--- a/lib/galaxy/tools/__init__.py
+++ b/lib/galaxy/tools/__init__.py
@@ -2587,11 +2587,9 @@
hda_source_dict = {} # Mapping from HDA in history to source HDAs.
for hda in history.datasets:
source_hda = hda.copied_from_history_dataset_association
- while source_hda:#should this check library datasets as well?
- #FIXME: could be multiple copies of a hda in a single history, this does a better job of matching on cloned histories,
- #but is still less than perfect when eg individual datasets are copied between histories
- if source_hda not in hda_source_dict or source_hda.hid == hda.hid:
- hda_source_dict[ source_hda ] = hda
+ while source_hda:
+ if source_hda.dataset.id not in hda_source_dict or source_hda.hid == hda.hid:
+ hda_source_dict[ source_hda.dataset.id ] = hda
source_hda = source_hda.copied_from_history_dataset_association
# Ditto for dataset collections.
@@ -2599,8 +2597,8 @@
for hdca in history.dataset_collections:
source_hdca = hdca.copied_from_history_dataset_collection_association
while source_hdca:
- if source_hdca not in hdca_source_dict or source_hdca.hid == hdca.hid:
- hdca_source_dict[ source_hdca ] = hdca
+ if source_hdca.collection.id not in hdca_source_dict or source_hdca.hid == hdca.hid:
+ hdca_source_dict[ source_hdca.collection.id ] = hdca
source_hdca = source_hdca.copied_from_history_dataset_collection_association
# Unpack unvalidated values to strings, they'll be validated when the
@@ -2619,17 +2617,21 @@
if isinstance(value,list):
values = []
for val in value:
- if is_hashable( val ):
- if val in history.datasets:
+ if isinstance(val, trans.app.model.HistoryDatasetAssociation):
+ if val.dataset.id in hda_source_dict:
+ values.append( hda_source_dict[ val.dataset.id ] )
+ else:
values.append( val )
- elif val in hda_source_dict:
- values.append( hda_source_dict[ val ])
return values
- if is_hashable( value ) and value not in history.datasets and value in hda_source_dict:
- return hda_source_dict[ value ]
+ if isinstance(value, trans.app.model.HistoryDatasetAssociation):
+ if value.dataset.id in hda_source_dict:
+ return hda_source_dict[ value.dataset.id ]
+ if isinstance(value, trans.app.model.HistoryDatasetCollectionAssociation):
+ if value.collection.id in hdca_source_dict:
+ return hdca_source_dict[ value.collection.id ]
elif isinstance( input, DataCollectionToolParameter ):
- if is_hashable( value ) and value not in history.dataset_collections and value in hdca_source_dict:
- return hdca_source_dict[ value ]
+ if value.collection.id in hdca_source_dict:
+ return hdca_source_dict[ value.collection.id ]
visit_input_values( tool_inputs, params, rerun_callback )
def _compare_tool_version( self, trans, job ):
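The rerun-remapping hunk above switches the source map from keying on HDA objects to keying on the underlying `dataset.id`, which copies of a dataset share across histories. A simplified, hedged sketch of the lookup; the class below is a stand-in, not Galaxy's model:

class FakeHDA( object ):
    def __init__( self, hid, dataset_id ):
        self.hid = hid
        self.dataset_id = dataset_id

original_hda = FakeHDA( hid=1, dataset_id=42 )   # HDA the old job referenced
copied_hda = FakeHDA( hid=1, dataset_id=42 )     # its copy in the current history

hda_source_dict = { copied_hda.dataset_id: copied_hda }

value = original_hda
if value.dataset_id in hda_source_dict:
    value = hda_source_dict[ value.dataset_id ]

print( value is copied_hda )  # True -- the parameter now points at the copy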
diff -r 708cab19708d8158044eb5b1048740734cb2bc2a -r 6395e70351435911f7ab75863c1b48b54e20a919 lib/galaxy/tools/actions/metadata.py
--- a/lib/galaxy/tools/actions/metadata.py
+++ b/lib/galaxy/tools/actions/metadata.py
@@ -1,11 +1,15 @@
+import logging
+
from __init__ import ToolAction
from galaxy.datatypes.metadata import JobExternalOutputMetadataWrapper
from galaxy.util.odict import odict
from galaxy.util.json import dumps
+from galaxy.jobs.datasets import DatasetPath
-import logging
+
log = logging.getLogger( __name__ )
+
class SetMetadataToolAction( ToolAction ):
"""Tool action used for setting external metadata on an existing dataset"""
@@ -65,17 +69,21 @@
#add parameters to job_parameter table
# Store original dataset state, so we can restore it. A separate table might be better (no chance of 'losing' the original state)?
incoming[ '__ORIGINAL_DATASET_STATE__' ] = dataset.state
+ input_paths = [DatasetPath( dataset.id, real_path=dataset.file_name, mutable=False )]
+ app.object_store.create(job, base_dir='job_work', dir_only=True, extra_dir=str(job.id))
+ job_working_dir = app.object_store.get_filename(job, base_dir='job_work', dir_only=True, extra_dir=str(job.id))
external_metadata_wrapper = JobExternalOutputMetadataWrapper( job )
cmd_line = external_metadata_wrapper.setup_external_metadata( dataset,
sa_session,
exec_dir = None,
- tmp_dir = app.config.new_file_path,
+ tmp_dir = job_working_dir,
dataset_files_path = app.model.Dataset.file_path,
- output_fnames = None,
+ output_fnames = input_paths,
config_root = app.config.root,
config_file = app.config.config_file,
datatypes_config = app.datatypes_registry.integrated_datatypes_configs,
job_metadata = None,
+ include_command = False,
kwds = { 'overwrite' : overwrite } )
incoming[ '__SET_EXTERNAL_METADATA_COMMAND_LINE__' ] = cmd_line
for name, value in tool.params_to_strings( incoming, app ).iteritems():
diff -r 708cab19708d8158044eb5b1048740734cb2bc2a -r 6395e70351435911f7ab75863c1b48b54e20a919 lib/galaxy/tools/imp_exp/__init__.py
--- a/lib/galaxy/tools/imp_exp/__init__.py
+++ b/lib/galaxy/tools/imp_exp/__init__.py
@@ -17,12 +17,15 @@
<tool id="__EXPORT_HISTORY__" name="Export History" version="0.1" tool_type="export_history"><type class="ExportHistoryTool" module="galaxy.tools"/><action module="galaxy.tools.actions.history_imp_exp" class="ExportHistoryToolAction"/>
- <command>$__EXPORT_HISTORY_COMMAND_INPUTS_OPTIONS__ $output_file</command>
+ <command>python $export_history $__EXPORT_HISTORY_COMMAND_INPUTS_OPTIONS__ $output_file</command><inputs><param name="__HISTORY_TO_EXPORT__" type="hidden"/><param name="compress" type="boolean"/><param name="__EXPORT_HISTORY_COMMAND_INPUTS_OPTIONS__" type="hidden"/></inputs>
+ <configfiles>
+ <configfile name="export_history">from galaxy.tools.imp_exp.export_history import main; main()</configfile>
+ </configfiles><outputs><data format="gzip" name="output_file"/></outputs>
@@ -530,11 +533,9 @@
options = ""
if jeha.compressed:
options = "-G"
- return "python %s %s %s %s %s" % ( os.path.join( os.path.abspath( os.getcwd() ),
- "lib/galaxy/tools/imp_exp/export_history.py" ),
- options, history_attrs_filename,
- datasets_attrs_filename,
- jobs_attrs_filename )
+ return "%s %s %s %s" % ( options, history_attrs_filename,
+ datasets_attrs_filename,
+ jobs_attrs_filename )
def cleanup_after_job( self, db_session ):
""" Remove temporary directory and attribute files generated during setup for this job. """
diff -r 708cab19708d8158044eb5b1048740734cb2bc2a -r 6395e70351435911f7ab75863c1b48b54e20a919 lib/galaxy/tools/imp_exp/export_history.py
--- a/lib/galaxy/tools/imp_exp/export_history.py
+++ b/lib/galaxy/tools/imp_exp/export_history.py
@@ -70,7 +70,7 @@
except Exception, e:
return 'Error creating history archive: %s' % str( e ), sys.stderr
-if __name__ == "__main__":
+def main():
# Parse command line.
parser = optparse.OptionParser()
parser.add_option( '-G', '--gzip', dest='gzip', action="store_true", help='Compress archive using gzip.' )
@@ -81,3 +81,6 @@
# Create archive.
status = create_archive( history_attrs, dataset_attrs, job_attrs, out_file, gzip )
print status
+
+if __name__ == "__main__":
+ main()
diff -r 708cab19708d8158044eb5b1048740734cb2bc2a -r 6395e70351435911f7ab75863c1b48b54e20a919 lib/galaxy/version.py
--- a/lib/galaxy/version.py
+++ b/lib/galaxy/version.py
@@ -1,1 +1,3 @@
VERSION_MAJOR = "15.03"
+VERSION_MINOR = "1"
+VERSION = VERSION_MAJOR + ('.' + VERSION_MINOR if VERSION_MINOR else '')
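For reference, the version string this produces (mirroring the committed logic exactly):

VERSION_MAJOR = "15.03"
VERSION_MINOR = "1"
VERSION = VERSION_MAJOR + ( '.' + VERSION_MINOR if VERSION_MINOR else '' )
print( VERSION )  # 15.03.1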
diff -r 708cab19708d8158044eb5b1048740734cb2bc2a -r 6395e70351435911f7ab75863c1b48b54e20a919 lib/galaxy/webapps/galaxy/api/users.py
--- a/lib/galaxy/webapps/galaxy/api/users.py
+++ b/lib/galaxy/webapps/galaxy/api/users.py
@@ -105,7 +105,7 @@
POST /api/users
Creates a new Galaxy user.
"""
- if not trans.app.config.allow_user_creation:
+ if not trans.app.config.allow_user_creation and not trans.user_is_admin():
raise exceptions.ConfigDoesNotAllowException( 'User creation is not allowed in this Galaxy instance' )
if trans.app.config.use_remote_user and trans.user_is_admin():
user = trans.get_or_create_remote_user( remote_user_email=payload['remote_user_email'] )
diff -r 708cab19708d8158044eb5b1048740734cb2bc2a -r 6395e70351435911f7ab75863c1b48b54e20a919 lib/galaxy/webapps/galaxy/controllers/history.py
--- a/lib/galaxy/webapps/galaxy/controllers/history.py
+++ b/lib/galaxy/webapps/galaxy/controllers/history.py
@@ -359,6 +359,15 @@
history.purged = True
self.sa_session.add( history )
self.sa_session.flush()
+ for hda in history.datasets:
+ # Not all datasets have jobs associated with them (e.g., datasets imported from libraries).
+ if hda.creating_job_associations:
+ # HDA has associated job, so try marking it deleted.
+ job = hda.creating_job_associations[0].job
+ if job.history_id == history.id and not job.finished:
+ # No need to check other outputs since the job's parent history is this history
+ job.mark_deleted( trans.app.config.track_jobs_in_database )
+ trans.app.job_manager.job_stop_queue.put( job.id )
trans.sa_session.flush()
if n_deleted:
part = "Deleted %d %s" % ( n_deleted, iff( n_deleted != 1, "histories", "history" ) )
@@ -1125,7 +1134,7 @@
if hda.creating_job_associations:
# HDA has associated job, so try marking it deleted.
job = hda.creating_job_associations[0].job
- if job.history_id == history.id and job.state in [ trans.app.model.Job.states.QUEUED, trans.app.model.Job.states.RUNNING, trans.app.model.Job.states.NEW ]:
+ if job.history_id == history.id and not job.finished:
# No need to check other outputs since the job's parent history is this history
job.mark_deleted( trans.app.config.track_jobs_in_database )
trans.app.job_manager.job_stop_queue.put( job.id )
@@ -1224,33 +1233,28 @@
# Use current history.
history = trans.history
id = trans.security.encode_id( history.id )
-
if not history:
return trans.show_error_message( "This history does not exist or you cannot export this history." )
-
- #
# If history has already been exported and it has not changed since export, stream it.
- #
jeha = history.latest_export
if jeha and jeha.up_to_date:
if jeha.ready:
if preview:
url = url_for( controller='history', action="export_archive", id=id, qualified=True )
- return trans.show_message( "History Ready: '%(n)s'. Use this link to download \
- the archive or import it to another Galaxy server: \
- <a href='%(u)s'>%(u)s</a>" % ( { 'n' : history.name, 'u' : url } ) )
+ return trans.show_message( "History Ready: '%(n)s'. Use this link to download "
+ "the archive or import it to another Galaxy server: "
+ "<a href='%(u)s'>%(u)s</a>" % ( { 'n': history.name, 'u': url } ) )
else:
return self.serve_ready_history_export( trans, jeha )
elif jeha.preparing:
- return trans.show_message( "Still exporting history %(n)s; please check back soon. Link: <a href='%(s)s'>%(s)s</a>" \
- % ( { 'n' : history.name, 's' : url_for( controller='history', action="export_archive", id=id, qualified=True ) } ) )
-
+ return trans.show_message( "Still exporting history %(n)s; please check back soon. Link: <a href='%(s)s'>%(s)s</a>"
+ % ( { 'n': history.name, 's': url_for( controller='history', action="export_archive", id=id, qualified=True ) } ) )
self.queue_history_export( trans, history, gzip=gzip, include_hidden=include_hidden, include_deleted=include_deleted )
url = url_for( controller='history', action="export_archive", id=id, qualified=True )
- return trans.show_message( "Exporting History '%(n)s'. Use this link to download \
- the archive or import it to another Galaxy server: \
- <a href='%(u)s'>%(u)s</a>" % ( { 'n' : history.name, 'u' : url } ) )
- #TODO: used in this file and index.mako
+ return trans.show_message( "Exporting History '%(n)s'. You will need to <a href='%(share)s'>make this history 'accessible'</a> in order to import this to another galaxy sever. <br/>"
+ "Use this link to download the archive or import it to another Galaxy server: "
+ "<a href='%(u)s'>%(u)s</a>" % ( { 'share': url_for(controller='history', action='sharing'), 'n': history.name, 'u': url } ) )
+ # TODO: used in this file and index.mako
@web.expose
@web.json
diff -r 708cab19708d8158044eb5b1048740734cb2bc2a -r 6395e70351435911f7ab75863c1b48b54e20a919 lib/galaxy/webapps/galaxy/controllers/workflow.py
--- a/lib/galaxy/webapps/galaxy/controllers/workflow.py
+++ b/lib/galaxy/webapps/galaxy/controllers/workflow.py
@@ -1,6 +1,7 @@
import base64
import httplib
import json
+import logging
import os
import sgmllib
import urllib2
@@ -41,6 +42,8 @@
from galaxy.workflow.render import WorkflowCanvas, MARGIN, LINE_SPACING
from markupsafe import escape
+log = logging.getLogger( __name__ )
+
class StoredWorkflowListGrid( grids.Grid ):
@@ -895,8 +898,9 @@
data = json.loads( workflow_data )
except Exception, e:
data = None
- message = "The data content does not appear to be a Galaxy workflow.<br/>Exception: %s" % str( e )
+ message = "The data content does not appear to be a Galaxy workflow."
status = 'error'
+ log.exception("Error importing workflow.")
if data:
# Create workflow if possible. If a required tool is not available in the local
# Galaxy instance, the tool information will be available in the step_dict.
diff -r 708cab19708d8158044eb5b1048740734cb2bc2a -r 6395e70351435911f7ab75863c1b48b54e20a919 lib/galaxy_ext/__init__.py
--- /dev/null
+++ b/lib/galaxy_ext/__init__.py
@@ -0,0 +1,4 @@
+""" Package for things which are loaded from outside Galaxy and which we can be
+sure will not conflict with the `galaxy` namespace, which may be provided by
+other packages (e.g. galaxy_ops in the toolshed).
+"""
diff -r 708cab19708d8158044eb5b1048740734cb2bc2a -r 6395e70351435911f7ab75863c1b48b54e20a919 lib/galaxy_ext/metadata/__init__.py
--- /dev/null
+++ b/lib/galaxy_ext/metadata/__init__.py
@@ -0,0 +1,2 @@
+""" Work with Galaxy metadata
+"""
diff -r 708cab19708d8158044eb5b1048740734cb2bc2a -r 6395e70351435911f7ab75863c1b48b54e20a919 lib/galaxy_ext/metadata/set_metadata.py
--- /dev/null
+++ b/lib/galaxy_ext/metadata/set_metadata.py
@@ -0,0 +1,125 @@
+"""
+Execute an external process to set_meta() on a provided list of pickled datasets.
+
+This was formerly scripts/set_metadata.py and expects these arguments:
+
+ %prog datatypes_conf.xml job_metadata_file metadata_in,metadata_kwds,metadata_out,metadata_results_code,output_filename_override,metadata_override...
+
+Galaxy should be importable on sys.path and output_filename_override should be
+set to the path of the dataset on which metadata is being set
+(output_filename_override could previously be left empty and the path would be
+constructed automatically).
+"""
+
+import logging
+logging.basicConfig()
+log = logging.getLogger( __name__ )
+
+import cPickle
+import json
+import os
+import sys
+
+# ensure supported version
+assert sys.version_info[:2] >= ( 2, 6 ) and sys.version_info[:2] <= ( 2, 7 ), 'Python version must be 2.6 or 2.7, this is: %s' % sys.version
+
+# insert *this* galaxy before all others on sys.path
+new_path = os.path.abspath( os.path.join( os.path.dirname( __file__ ), os.pardir, os.pardir ) )
+sys.path.insert( 0, new_path )
+
+from galaxy import eggs
+import pkg_resources
+import galaxy.model.mapping # need to load this before we unpickle, in order to setup properties assigned by the mappers
+galaxy.model.Job() # this looks REAL stupid, but it is REQUIRED in order for SA to insert parameters into the classes defined by the mappers --> it appears that instantiating ANY mapper'ed class would suffice here
+from galaxy.util import stringify_dictionary_keys
+from sqlalchemy.orm import clear_mappers
+
+
+def set_meta_with_tool_provided( dataset_instance, file_dict, set_meta_kwds ):
+ # This method is somewhat odd, in that we set the metadata attributes from tool,
+ # then call set_meta, then set metadata attributes from tool again.
+ # This is intentional due to interplay of overwrite kwd, the fact that some metadata
+ # parameters may rely on the values of others, and that we are accepting the
+ # values provided by the tool as Truth.
+ for metadata_name, metadata_value in file_dict.get( 'metadata', {} ).iteritems():
+ setattr( dataset_instance.metadata, metadata_name, metadata_value )
+ dataset_instance.datatype.set_meta( dataset_instance, **set_meta_kwds )
+ for metadata_name, metadata_value in file_dict.get( 'metadata', {} ).iteritems():
+ setattr( dataset_instance.metadata, metadata_name, metadata_value )
+
+def set_metadata():
+ # locate galaxy_root for loading datatypes
+ galaxy_root = os.path.abspath(os.path.join(os.path.dirname(__file__), os.pardir, os.pardir, os.pardir))
+ galaxy.datatypes.metadata.MetadataTempFile.tmp_dir = tool_job_working_directory = os.path.abspath(os.getcwd())
+
+ # Set up datatypes registry
+ datatypes_config = sys.argv.pop( 1 )
+ datatypes_registry = galaxy.datatypes.registry.Registry()
+ datatypes_registry.load_datatypes( root_dir=galaxy_root, config=datatypes_config )
+ galaxy.model.set_datatypes_registry( datatypes_registry )
+
+ job_metadata = sys.argv.pop( 1 )
+ existing_job_metadata_dict = {}
+ new_job_metadata_dict = {}
+ if job_metadata != "None" and os.path.exists( job_metadata ):
+ for line in open( job_metadata, 'r' ):
+ try:
+ line = stringify_dictionary_keys( json.loads( line ) )
+ if line['type'] == 'dataset':
+ existing_job_metadata_dict[ line['dataset_id'] ] = line
+ elif line['type'] == 'new_primary_dataset':
+ new_job_metadata_dict[ line[ 'filename' ] ] = line
+ except:
+ continue
+
+ for filenames in sys.argv[1:]:
+ fields = filenames.split( ',' )
+ filename_in = fields.pop( 0 )
+ filename_kwds = fields.pop( 0 )
+ filename_out = fields.pop( 0 )
+ filename_results_code = fields.pop( 0 )
+ dataset_filename_override = fields.pop( 0 )
+ # Need to be careful with the way that these parameters are populated from the filename splitting,
+ # because if a job is running when the server is updated, any existing external metadata command-lines
+ #will not have info about the newly added override_metadata file
+ if fields:
+ override_metadata = fields.pop( 0 )
+ else:
+ override_metadata = None
+ set_meta_kwds = stringify_dictionary_keys( json.load( open( filename_kwds ) ) ) # load kwds; need to ensure our keywords are not unicode
+ try:
+ dataset = cPickle.load( open( filename_in ) ) # load DatasetInstance
+ dataset.dataset.external_filename = dataset_filename_override
+ files_path = os.path.abspath(os.path.join( tool_job_working_directory, "dataset_%s_files" % (dataset.dataset.id) ))
+ dataset.dataset.external_extra_files_path = files_path
+ if dataset.dataset.id in existing_job_metadata_dict:
+ dataset.extension = existing_job_metadata_dict[ dataset.dataset.id ].get( 'ext', dataset.extension )
+ # Metadata FileParameter types may not be writable on a cluster node, and are therefore temporarily substituted with MetadataTempFiles
+ if override_metadata:
+ override_metadata = json.load( open( override_metadata ) )
+ for metadata_name, metadata_file_override in override_metadata:
+ if galaxy.datatypes.metadata.MetadataTempFile.is_JSONified_value( metadata_file_override ):
+ metadata_file_override = galaxy.datatypes.metadata.MetadataTempFile.from_JSON( metadata_file_override )
+ setattr( dataset.metadata, metadata_name, metadata_file_override )
+ file_dict = existing_job_metadata_dict.get( dataset.dataset.id, {} )
+ set_meta_with_tool_provided( dataset, file_dict, set_meta_kwds )
+ dataset.metadata.to_JSON_dict( filename_out ) # write out results of set_meta
+ json.dump( ( True, 'Metadata has been set successfully' ), open( filename_results_code, 'wb+' ) ) # setting metadata has succeeded
+ except Exception, e:
+ json.dump( ( False, str( e ) ), open( filename_results_code, 'wb+' ) ) # setting metadata has failed somehow
+
+ for i, ( filename, file_dict ) in enumerate( new_job_metadata_dict.iteritems(), start=1 ):
+ new_dataset = galaxy.model.Dataset( id=-i, external_filename=os.path.join( tool_job_working_directory, file_dict[ 'filename' ] ) )
+ extra_files = file_dict.get( 'extra_files', None )
+ if extra_files is not None:
+ new_dataset._extra_files_path = os.path.join( tool_job_working_directory, extra_files )
+ new_dataset.state = new_dataset.states.OK
+ new_dataset_instance = galaxy.model.HistoryDatasetAssociation( id=-i, dataset=new_dataset, extension=file_dict.get( 'ext', 'data' ) )
+ set_meta_with_tool_provided( new_dataset_instance, file_dict, set_meta_kwds )
+ file_dict[ 'metadata' ] = json.loads( new_dataset_instance.metadata.to_JSON_dict() ) #storing metadata in external form, need to turn back into dict, then later jsonify
+ if existing_job_metadata_dict or new_job_metadata_dict:
+ with open( job_metadata, 'wb' ) as job_metadata_fh:
+ for value in existing_job_metadata_dict.values() + new_job_metadata_dict.values():
+ job_metadata_fh.write( "%s\n" % ( json.dumps( value ) ) )
+
+ clear_mappers()
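Putting the new module together with the wrapper written by `setup_external_metadata()`: the wrapper is the one-line script `from galaxy_ext.metadata.set_metadata import set_metadata; set_metadata()`, and the arguments it receives follow the module docstring above. A hedged sketch of assembling such a command line; every path here is a placeholder, the real values are the per-dataset tempfiles created by `setup_external_metadata()`:

per_dataset_fields = ",".join( [
    "metadata_in_HistoryDatasetAssociation_1_XXXX",       # pickled DatasetInstance
    "metadata_kwds_HistoryDatasetAssociation_1_XXXX",     # JSON kwds for set_meta()
    "metadata_out_HistoryDatasetAssociation_1_XXXX",      # where the resulting metadata is written
    "metadata_results_HistoryDatasetAssociation_1_XXXX",  # (success, message) JSON
    "/galaxy/database/files/000/dataset_1.dat",            # output_filename_override
    "metadata_override_HistoryDatasetAssociation_1_XXXX",  # cluster-writable metadata file overrides
] )
command = "python %s %s %s %s" % (
    "set_metadata_XXXX.py",   # the one-line wrapper described above
    "datatypes_conf.xml",     # datatypes_config
    "None",                   # job_metadata file, or a path such as the job's galaxy.json
    per_dataset_fields,
)
print( command )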
diff -r 708cab19708d8158044eb5b1048740734cb2bc2a -r 6395e70351435911f7ab75863c1b48b54e20a919 static/scripts/mvc/history/history-model.js
--- a/static/scripts/mvc/history/history-model.js
+++ b/static/scripts/mvc/history/history-model.js
@@ -208,7 +208,7 @@
/** purge this history, _Mark_ing it as purged and removing all dataset data from the server */
purge : function( options ){
if( this.get( 'purged' ) ){ return jQuery.when(); }
- return this.save( { purged: true }, options );
+ return this.save( { deleted: true, purged: true }, options );
},
/** save this history, _Mark_ing it as undeleted */
undelete : function( options ){
diff -r 708cab19708d8158044eb5b1048740734cb2bc2a -r 6395e70351435911f7ab75863c1b48b54e20a919 static/scripts/packed/mvc/history/history-model.js
--- a/static/scripts/packed/mvc/history/history-model.js
+++ b/static/scripts/packed/mvc/history/history-model.js
@@ -1,1 +1,1 @@
-define(["mvc/history/history-contents","mvc/base-mvc","utils/localization"],function(h,i,d){var e=Backbone.Model.extend(i.LoggableMixin).extend(i.mixin(i.SearchableModelMixin,{defaults:{model_class:"History",id:null,name:"Unnamed History",state:"new",diskSize:0,deleted:false},urlRoot:galaxy_config.root+"api/histories",initialize:function(k,l,j){j=j||{};this.logger=j.logger||null;this.log(this+".initialize:",k,l,j);this.log("creating history contents:",l);this.contents=new h.HistoryContents(l||[],{historyId:this.get("id")});this._setUpListeners();this.updateTimeoutId=null},_setUpListeners:function(){this.on("error",function(k,n,j,m,l){this.errorHandler(k,n,j,m,l)});if(this.contents){this.listenTo(this.contents,"error",function(){this.trigger.apply(this,["error:contents"].concat(jQuery.makeArray(arguments)))})}this.on("change:id",function(k,j){if(this.contents){this.contents.historyId=j}},this)},errorHandler:function(k,n,j,m,l){this.clearUpdateTimeout()},ownedByCurrUser:function(){if(!Galaxy||!Galaxy.currUser){return false}if(Galaxy.currUser.isAnonymous()||Galaxy.currUser.id!==this.get("user_id")){return false}return true},contentsCount:function(){return _.reduce(_.values(this.get("state_details")),function(j,k){return j+k},0)},searchAttributes:["name","annotation","tags"],searchAliases:{title:"name",tag:"tags"},checkForUpdates:function(j){if(this.contents.running().length){this.setUpdateTimeout()}else{this.trigger("ready");if(_.isFunction(j)){j.call(this)}}return this},setUpdateTimeout:function(j){j=j||e.UPDATE_DELAY;var k=this;this.clearUpdateTimeout();this.updateTimeoutId=setTimeout(function(){k.refresh()},j);return this.updateTimeoutId},clearUpdateTimeout:function(){if(this.updateTimeoutId){clearTimeout(this.updateTimeoutId);this.updateTimeoutId=null}},refresh:function(k,j){k=k||[];j=j||{};var l=this;j.data=j.data||{};if(k.length){j.data.details=k.join(",")}var m=this.contents.fetch(j);m.done(function(n){l.checkForUpdates(function(){this.fetch()})});return m},_delete:function(j){if(this.get("deleted")){return jQuery.when()}return this.save({deleted:true},j)},purge:function(j){if(this.get("purged")){return jQuery.when()}return this.save({purged:true},j)},undelete:function(j){if(!this.get("deleted")){return jQuery.when()}return this.save({deleted:false},j)},copy:function(m,k){m=(m!==undefined)?(m):(true);if(!this.id){throw new Error("You must set the history ID before copying it.")}var j={history_id:this.id};if(m){j.current=true}if(k){j.name=k}var l=this,n=jQuery.post(this.urlRoot,j);if(m){return n.then(function(o){var p=new e(o);return p.setAsCurrent().done(function(){l.trigger("copied",l,o)})})}return n.done(function(o){l.trigger("copied",l,o)})},setAsCurrent:function(){var j=this,k=jQuery.getJSON("/history/set_as_current?id="+this.id);k.done(function(){j.trigger("set-as-current",j)});return k},toString:function(){return"History("+this.get("id")+","+this.get("name")+")"}}));e.UPDATE_DELAY=4000;e.getHistoryData=function c(j,v){v=v||{};var r=v.detailIdsFn||[];var m=v.hdcaDetailIds||[];var s=jQuery.Deferred(),p=null;function k(w){if(j==="current"){return jQuery.getJSON(galaxy_config.root+"history/current_history_json")}return jQuery.ajax(galaxy_config.root+"api/histories/"+j)}function o(w){return w&&w.empty}function q(x){if(o(x)){return[]}if(_.isFunction(r)){r=r(x)}if(_.isFunction(m)){m=m(x)}var w={};if(r.length){w.dataset_details=r.join(",")}if(m.length){w.dataset_collection_details=m.join(",")}return jQuery.ajax(galaxy_config.root+"api/histories/"+x.id+"/contents",{data:w})}var 
t=v.historyFn||k,u=v.contentsFn||q;var n=t(j);n.done(function(w){p=w;s.notify({status:"history data retrieved",historyJSON:p})});n.fail(function(y,w,x){s.reject(y,"loading the history")});var l=n.then(u);l.then(function(w){s.notify({status:"contents data retrieved",historyJSON:p,contentsJSON:w});s.resolve(p,w)});l.fail(function(y,w,x){s.reject(y,"loading the contents",{history:p})});return s};var f=Backbone.Collection.extend(i.LoggableMixin).extend({model:e,urlRoot:(window.galaxy_config?galaxy_config.root:"/")+"api/histories",initialize:function(k,j){j=j||{};this.log("HistoryCollection.initialize",arguments);this.includeDeleted=j.includeDeleted||false;this.setUpListeners()},setUpListeners:function a(){var j=this;this.on("change:deleted",function(k){this.debug("change:deleted",j.includeDeleted,k.get("deleted"));if(!j.includeDeleted&&k.get("deleted")){j.remove(k)}});this.on("copied",function(k,l){this.unshift(new e(l,[]))})},create:function g(m,k,j,l){var o=this,n=jQuery.getJSON(galaxy_config.root+"history/create_new_current");return n.done(function(p){var q=new e(p,[],j||{});o.unshift(q);o.trigger("new-current")})},toString:function b(){return"HistoryCollection("+this.length+")"}});return{History:e,HistoryCollection:f}});
\ No newline at end of file
+define(["mvc/history/history-contents","mvc/base-mvc","utils/localization"],function(h,i,d){var e=Backbone.Model.extend(i.LoggableMixin).extend(i.mixin(i.SearchableModelMixin,{defaults:{model_class:"History",id:null,name:"Unnamed History",state:"new",diskSize:0,deleted:false},urlRoot:galaxy_config.root+"api/histories",initialize:function(k,l,j){j=j||{};this.logger=j.logger||null;this.log(this+".initialize:",k,l,j);this.log("creating history contents:",l);this.contents=new h.HistoryContents(l||[],{historyId:this.get("id")});this._setUpListeners();this.updateTimeoutId=null},_setUpListeners:function(){this.on("error",function(k,n,j,m,l){this.errorHandler(k,n,j,m,l)});if(this.contents){this.listenTo(this.contents,"error",function(){this.trigger.apply(this,["error:contents"].concat(jQuery.makeArray(arguments)))})}this.on("change:id",function(k,j){if(this.contents){this.contents.historyId=j}},this)},errorHandler:function(k,n,j,m,l){this.clearUpdateTimeout()},ownedByCurrUser:function(){if(!Galaxy||!Galaxy.currUser){return false}if(Galaxy.currUser.isAnonymous()||Galaxy.currUser.id!==this.get("user_id")){return false}return true},contentsCount:function(){return _.reduce(_.values(this.get("state_details")),function(j,k){return j+k},0)},searchAttributes:["name","annotation","tags"],searchAliases:{title:"name",tag:"tags"},checkForUpdates:function(j){if(this.contents.running().length){this.setUpdateTimeout()}else{this.trigger("ready");if(_.isFunction(j)){j.call(this)}}return this},setUpdateTimeout:function(j){j=j||e.UPDATE_DELAY;var k=this;this.clearUpdateTimeout();this.updateTimeoutId=setTimeout(function(){k.refresh()},j);return this.updateTimeoutId},clearUpdateTimeout:function(){if(this.updateTimeoutId){clearTimeout(this.updateTimeoutId);this.updateTimeoutId=null}},refresh:function(k,j){k=k||[];j=j||{};var l=this;j.data=j.data||{};if(k.length){j.data.details=k.join(",")}var m=this.contents.fetch(j);m.done(function(n){l.checkForUpdates(function(){this.fetch()})});return m},_delete:function(j){if(this.get("deleted")){return jQuery.when()}return this.save({deleted:true},j)},purge:function(j){if(this.get("purged")){return jQuery.when()}return this.save({deleted:true,purged:true},j)},undelete:function(j){if(!this.get("deleted")){return jQuery.when()}return this.save({deleted:false},j)},copy:function(m,k){m=(m!==undefined)?(m):(true);if(!this.id){throw new Error("You must set the history ID before copying it.")}var j={history_id:this.id};if(m){j.current=true}if(k){j.name=k}var l=this,n=jQuery.post(this.urlRoot,j);if(m){return n.then(function(o){var p=new e(o);return p.setAsCurrent().done(function(){l.trigger("copied",l,o)})})}return n.done(function(o){l.trigger("copied",l,o)})},setAsCurrent:function(){var j=this,k=jQuery.getJSON("/history/set_as_current?id="+this.id);k.done(function(){j.trigger("set-as-current",j)});return k},toString:function(){return"History("+this.get("id")+","+this.get("name")+")"}}));e.UPDATE_DELAY=4000;e.getHistoryData=function c(j,v){v=v||{};var r=v.detailIdsFn||[];var m=v.hdcaDetailIds||[];var s=jQuery.Deferred(),p=null;function k(w){if(j==="current"){return jQuery.getJSON(galaxy_config.root+"history/current_history_json")}return jQuery.ajax(galaxy_config.root+"api/histories/"+j)}function o(w){return w&&w.empty}function q(x){if(o(x)){return[]}if(_.isFunction(r)){r=r(x)}if(_.isFunction(m)){m=m(x)}var w={};if(r.length){w.dataset_details=r.join(",")}if(m.length){w.dataset_collection_details=m.join(",")}return 
jQuery.ajax(galaxy_config.root+"api/histories/"+x.id+"/contents",{data:w})}var t=v.historyFn||k,u=v.contentsFn||q;var n=t(j);n.done(function(w){p=w;s.notify({status:"history data retrieved",historyJSON:p})});n.fail(function(y,w,x){s.reject(y,"loading the history")});var l=n.then(u);l.then(function(w){s.notify({status:"contents data retrieved",historyJSON:p,contentsJSON:w});s.resolve(p,w)});l.fail(function(y,w,x){s.reject(y,"loading the contents",{history:p})});return s};var f=Backbone.Collection.extend(i.LoggableMixin).extend({model:e,urlRoot:(window.galaxy_config?galaxy_config.root:"/")+"api/histories",initialize:function(k,j){j=j||{};this.log("HistoryCollection.initialize",arguments);this.includeDeleted=j.includeDeleted||false;this.setUpListeners()},setUpListeners:function a(){var j=this;this.on("change:deleted",function(k){this.debug("change:deleted",j.includeDeleted,k.get("deleted"));if(!j.includeDeleted&&k.get("deleted")){j.remove(k)}});this.on("copied",function(k,l){this.unshift(new e(l,[]))})},create:function g(m,k,j,l){var o=this,n=jQuery.getJSON(galaxy_config.root+"history/create_new_current");return n.done(function(p){var q=new e(p,[],j||{});o.unshift(q);o.trigger("new-current")})},toString:function b(){return"HistoryCollection("+this.length+")"}});return{History:e,HistoryCollection:f}});
\ No newline at end of file
diff -r 708cab19708d8158044eb5b1048740734cb2bc2a -r 6395e70351435911f7ab75863c1b48b54e20a919 templates/webapps/galaxy/workflow/run.mako
--- a/templates/webapps/galaxy/workflow/run.mako
+++ b/templates/webapps/galaxy/workflow/run.mako
@@ -513,7 +513,7 @@
<div class="metadataFormTitle">Workflow Parameters</div><div class="metadataFormBody">
%for parm in wf_parms:
- <div class='form-row'><label style='width:100px;'>${parm}<input style="border:2px solid ${wf_parms[parm]};border-left-width:8px;" type="text" class='wf_parm_input ptag_${parm}' name="wf_parm|${parm}" value=""/></label></div>
+ <div class='form-row'><label>${parm}<br/><input size=40 style="border:2px solid ${wf_parms[parm]};border-left-width:8px;" type="text" class='wf_parm_input ptag_${parm}' name="wf_parm|${parm}" value=""/></label></div>
%endfor
</div></div>
diff -r 708cab19708d8158044eb5b1048740734cb2bc2a -r 6395e70351435911f7ab75863c1b48b54e20a919 test/unit/jobs/test_command_factory.py
--- a/test/unit/jobs/test_command_factory.py
+++ b/test/unit/jobs/test_command_factory.py
@@ -72,7 +72,7 @@
self.include_metadata = True
self.include_work_dir_outputs = False
self.job_wrapper.metadata_line = TEST_METADATA_LINE
- expected_command = '%s; return_code=$?; cd %s; %s; sh -c "exit $return_code"' % (MOCK_COMMAND_LINE, getcwd(), TEST_METADATA_LINE)
+ expected_command = '%s; return_code=$?; %s; sh -c "exit $return_code"' % (MOCK_COMMAND_LINE, TEST_METADATA_LINE)
self.__assert_command_is( expected_command )
def test_empty_metadata(self):
Repository URL: https://bitbucket.org/galaxy/galaxy-central/
--
This is a commit notification from bitbucket.org. You are receiving
this because you have the service enabled, addressing the recipient of
this email.
27 new commits in galaxy-central:
https://bitbucket.org/galaxy/galaxy-central/commits/ae321a0084b1/
Changeset: ae321a0084b1
Branch: release_15.03
User: dannon
Date: 2015-03-10 21:14:30+00:00
Summary: Allow admin to create users via API when config.allow_user_creation is False
Affected #: 1 file
diff -r 070468c42fbfed6b28c389e7df4a38dae9bf921b -r ae321a0084b1e88b7cf0cebbe8ba465329bce565 lib/galaxy/webapps/galaxy/api/users.py
--- a/lib/galaxy/webapps/galaxy/api/users.py
+++ b/lib/galaxy/webapps/galaxy/api/users.py
@@ -105,7 +105,7 @@
POST /api/users
Creates a new Galaxy user.
"""
- if not trans.app.config.allow_user_creation:
+ if not trans.app.config.allow_user_creation and not trans.user_is_admin():
raise exceptions.ConfigDoesNotAllowException( 'User creation is not allowed in this Galaxy instance' )
if trans.app.config.use_remote_user and trans.user_is_admin():
user = trans.get_or_create_remote_user( remote_user_email=payload['remote_user_email'] )
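As a rough illustration of what this change enables from the client side: an admin's API key can now create a user through the API even when user creation is disabled in the config. The URL, API key, and payload fields below are placeholders, not part of the commit; this is a sketch, not the canonical way to drive the users API.

    import json
    import urllib2

    GALAXY_URL = "https://galaxy.example.org"      # placeholder
    ADMIN_API_KEY = "0123456789abcdef"              # placeholder admin key

    payload = {
        "username": "newuser",
        "email": "newuser@example.org",
        "password": "change-me",
    }
    request = urllib2.Request(
        "%s/api/users?key=%s" % (GALAXY_URL, ADMIN_API_KEY),
        data=json.dumps(payload),
        headers={"Content-Type": "application/json"},
    )
    # With this change the request succeeds for admin keys even when
    # config.allow_user_creation is False; non-admin keys still get
    # ConfigDoesNotAllowException.
    print urllib2.urlopen(request).read()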
https://bitbucket.org/galaxy/galaxy-central/commits/d3d63bcfa52c/
Changeset: d3d63bcfa52c
Branch: release_15.03
User: dannon
Date: 2015-03-11 15:12:41+00:00
Summary: No longer mark job as deleted when purging hdas. I can't find a reason we would want to do this anyway, and doing so cascades deletion of copied hdas, which is bad.
Affected #: 1 file
diff -r ae321a0084b1e88b7cf0cebbe8ba465329bce565 -r d3d63bcfa52ce1778046f1017074e5c6959dda2d lib/galaxy/managers/hdas.py
--- a/lib/galaxy/managers/hdas.py
+++ b/lib/galaxy/managers/hdas.py
@@ -159,7 +159,6 @@
# signal to stop the creating job?
if hda.creating_job_associations:
job = hda.creating_job_associations[0].job
- job.mark_deleted( self.app.config.track_jobs_in_database )
self.app.job_manager.job_stop_queue.put( job.id )
# more importantly, purge dataset as well
https://bitbucket.org/galaxy/galaxy-central/commits/bc2a337d8f30/
Changeset: bc2a337d8f30
Branch: release_15.03
User: dannon
Date: 2015-03-10 19:56:55+00:00
Summary: Cherry-pick over non-terminal-only job stop fix.
Affected #: 1 file
diff -r d3d63bcfa52ce1778046f1017074e5c6959dda2d -r bc2a337d8f30bdbca4e31e9ad56f03963c97cfff lib/galaxy/managers/hdas.py
--- a/lib/galaxy/managers/hdas.py
+++ b/lib/galaxy/managers/hdas.py
@@ -156,10 +156,11 @@
self.dataset_manager.error_unless_dataset_purge_allowed( trans, hda )
super( HDAManager, self ).purge( trans, hda, flush=flush )
- # signal to stop the creating job?
if hda.creating_job_associations:
job = hda.creating_job_associations[0].job
- self.app.job_manager.job_stop_queue.put( job.id )
+ if not job.finished:
+ # signal to stop the creating job
+ self.app.job_manager.job_stop_queue.put( job.id )
# more importantly, purge dataset as well
if hda.dataset.user_can_purge:
https://bitbucket.org/galaxy/galaxy-central/commits/5ac767b3b060/
Changeset: 5ac767b3b060
Branch: release_15.03
User: carlfeberhard
Date: 2015-03-11 17:51:54+00:00
Summary: UI, history: (in the client) when calling purge on the history model, mark the history as deleted as well
Affected #: 3 files
diff -r bc2a337d8f30bdbca4e31e9ad56f03963c97cfff -r 5ac767b3b060356c22c100936d804a77ebee9d59 client/galaxy/scripts/mvc/history/history-model.js
--- a/client/galaxy/scripts/mvc/history/history-model.js
+++ b/client/galaxy/scripts/mvc/history/history-model.js
@@ -208,7 +208,7 @@
/** purge this history, _Mark_ing it as purged and removing all dataset data from the server */
purge : function( options ){
if( this.get( 'purged' ) ){ return jQuery.when(); }
- return this.save( { purged: true }, options );
+ return this.save( { deleted: true, purged: true }, options );
},
/** save this history, _Mark_ing it as undeleted */
undelete : function( options ){
diff -r bc2a337d8f30bdbca4e31e9ad56f03963c97cfff -r 5ac767b3b060356c22c100936d804a77ebee9d59 static/scripts/mvc/history/history-model.js
--- a/static/scripts/mvc/history/history-model.js
+++ b/static/scripts/mvc/history/history-model.js
@@ -208,7 +208,7 @@
/** purge this history, _Mark_ing it as purged and removing all dataset data from the server */
purge : function( options ){
if( this.get( 'purged' ) ){ return jQuery.when(); }
- return this.save( { purged: true }, options );
+ return this.save( { deleted: true, purged: true }, options );
},
/** save this history, _Mark_ing it as undeleted */
undelete : function( options ){
diff -r bc2a337d8f30bdbca4e31e9ad56f03963c97cfff -r 5ac767b3b060356c22c100936d804a77ebee9d59 static/scripts/packed/mvc/history/history-model.js
--- a/static/scripts/packed/mvc/history/history-model.js
+++ b/static/scripts/packed/mvc/history/history-model.js
@@ -1,1 +1,1 @@
-define(["mvc/history/history-contents","mvc/base-mvc","utils/localization"],function(h,i,d){var e=Backbone.Model.extend(i.LoggableMixin).extend(i.mixin(i.SearchableModelMixin,{defaults:{model_class:"History",id:null,name:"Unnamed History",state:"new",diskSize:0,deleted:false},urlRoot:galaxy_config.root+"api/histories",initialize:function(k,l,j){j=j||{};this.logger=j.logger||null;this.log(this+".initialize:",k,l,j);this.log("creating history contents:",l);this.contents=new h.HistoryContents(l||[],{historyId:this.get("id")});this._setUpListeners();this.updateTimeoutId=null},_setUpListeners:function(){this.on("error",function(k,n,j,m,l){this.errorHandler(k,n,j,m,l)});if(this.contents){this.listenTo(this.contents,"error",function(){this.trigger.apply(this,["error:contents"].concat(jQuery.makeArray(arguments)))})}this.on("change:id",function(k,j){if(this.contents){this.contents.historyId=j}},this)},errorHandler:function(k,n,j,m,l){this.clearUpdateTimeout()},ownedByCurrUser:function(){if(!Galaxy||!Galaxy.currUser){return false}if(Galaxy.currUser.isAnonymous()||Galaxy.currUser.id!==this.get("user_id")){return false}return true},contentsCount:function(){return _.reduce(_.values(this.get("state_details")),function(j,k){return j+k},0)},searchAttributes:["name","annotation","tags"],searchAliases:{title:"name",tag:"tags"},checkForUpdates:function(j){if(this.contents.running().length){this.setUpdateTimeout()}else{this.trigger("ready");if(_.isFunction(j)){j.call(this)}}return this},setUpdateTimeout:function(j){j=j||e.UPDATE_DELAY;var k=this;this.clearUpdateTimeout();this.updateTimeoutId=setTimeout(function(){k.refresh()},j);return this.updateTimeoutId},clearUpdateTimeout:function(){if(this.updateTimeoutId){clearTimeout(this.updateTimeoutId);this.updateTimeoutId=null}},refresh:function(k,j){k=k||[];j=j||{};var l=this;j.data=j.data||{};if(k.length){j.data.details=k.join(",")}var m=this.contents.fetch(j);m.done(function(n){l.checkForUpdates(function(){this.fetch()})});return m},_delete:function(j){if(this.get("deleted")){return jQuery.when()}return this.save({deleted:true},j)},purge:function(j){if(this.get("purged")){return jQuery.when()}return this.save({purged:true},j)},undelete:function(j){if(!this.get("deleted")){return jQuery.when()}return this.save({deleted:false},j)},copy:function(m,k){m=(m!==undefined)?(m):(true);if(!this.id){throw new Error("You must set the history ID before copying it.")}var j={history_id:this.id};if(m){j.current=true}if(k){j.name=k}var l=this,n=jQuery.post(this.urlRoot,j);if(m){return n.then(function(o){var p=new e(o);return p.setAsCurrent().done(function(){l.trigger("copied",l,o)})})}return n.done(function(o){l.trigger("copied",l,o)})},setAsCurrent:function(){var j=this,k=jQuery.getJSON("/history/set_as_current?id="+this.id);k.done(function(){j.trigger("set-as-current",j)});return k},toString:function(){return"History("+this.get("id")+","+this.get("name")+")"}}));e.UPDATE_DELAY=4000;e.getHistoryData=function c(j,v){v=v||{};var r=v.detailIdsFn||[];var m=v.hdcaDetailIds||[];var s=jQuery.Deferred(),p=null;function k(w){if(j==="current"){return jQuery.getJSON(galaxy_config.root+"history/current_history_json")}return jQuery.ajax(galaxy_config.root+"api/histories/"+j)}function o(w){return w&&w.empty}function q(x){if(o(x)){return[]}if(_.isFunction(r)){r=r(x)}if(_.isFunction(m)){m=m(x)}var w={};if(r.length){w.dataset_details=r.join(",")}if(m.length){w.dataset_collection_details=m.join(",")}return jQuery.ajax(galaxy_config.root+"api/histories/"+x.id+"/contents",{data:w})}var 
t=v.historyFn||k,u=v.contentsFn||q;var n=t(j);n.done(function(w){p=w;s.notify({status:"history data retrieved",historyJSON:p})});n.fail(function(y,w,x){s.reject(y,"loading the history")});var l=n.then(u);l.then(function(w){s.notify({status:"contents data retrieved",historyJSON:p,contentsJSON:w});s.resolve(p,w)});l.fail(function(y,w,x){s.reject(y,"loading the contents",{history:p})});return s};var f=Backbone.Collection.extend(i.LoggableMixin).extend({model:e,urlRoot:(window.galaxy_config?galaxy_config.root:"/")+"api/histories",initialize:function(k,j){j=j||{};this.log("HistoryCollection.initialize",arguments);this.includeDeleted=j.includeDeleted||false;this.setUpListeners()},setUpListeners:function a(){var j=this;this.on("change:deleted",function(k){this.debug("change:deleted",j.includeDeleted,k.get("deleted"));if(!j.includeDeleted&&k.get("deleted")){j.remove(k)}});this.on("copied",function(k,l){this.unshift(new e(l,[]))})},create:function g(m,k,j,l){var o=this,n=jQuery.getJSON(galaxy_config.root+"history/create_new_current");return n.done(function(p){var q=new e(p,[],j||{});o.unshift(q);o.trigger("new-current")})},toString:function b(){return"HistoryCollection("+this.length+")"}});return{History:e,HistoryCollection:f}});
\ No newline at end of file
+define(["mvc/history/history-contents","mvc/base-mvc","utils/localization"],function(h,i,d){var e=Backbone.Model.extend(i.LoggableMixin).extend(i.mixin(i.SearchableModelMixin,{defaults:{model_class:"History",id:null,name:"Unnamed History",state:"new",diskSize:0,deleted:false},urlRoot:galaxy_config.root+"api/histories",initialize:function(k,l,j){j=j||{};this.logger=j.logger||null;this.log(this+".initialize:",k,l,j);this.log("creating history contents:",l);this.contents=new h.HistoryContents(l||[],{historyId:this.get("id")});this._setUpListeners();this.updateTimeoutId=null},_setUpListeners:function(){this.on("error",function(k,n,j,m,l){this.errorHandler(k,n,j,m,l)});if(this.contents){this.listenTo(this.contents,"error",function(){this.trigger.apply(this,["error:contents"].concat(jQuery.makeArray(arguments)))})}this.on("change:id",function(k,j){if(this.contents){this.contents.historyId=j}},this)},errorHandler:function(k,n,j,m,l){this.clearUpdateTimeout()},ownedByCurrUser:function(){if(!Galaxy||!Galaxy.currUser){return false}if(Galaxy.currUser.isAnonymous()||Galaxy.currUser.id!==this.get("user_id")){return false}return true},contentsCount:function(){return _.reduce(_.values(this.get("state_details")),function(j,k){return j+k},0)},searchAttributes:["name","annotation","tags"],searchAliases:{title:"name",tag:"tags"},checkForUpdates:function(j){if(this.contents.running().length){this.setUpdateTimeout()}else{this.trigger("ready");if(_.isFunction(j)){j.call(this)}}return this},setUpdateTimeout:function(j){j=j||e.UPDATE_DELAY;var k=this;this.clearUpdateTimeout();this.updateTimeoutId=setTimeout(function(){k.refresh()},j);return this.updateTimeoutId},clearUpdateTimeout:function(){if(this.updateTimeoutId){clearTimeout(this.updateTimeoutId);this.updateTimeoutId=null}},refresh:function(k,j){k=k||[];j=j||{};var l=this;j.data=j.data||{};if(k.length){j.data.details=k.join(",")}var m=this.contents.fetch(j);m.done(function(n){l.checkForUpdates(function(){this.fetch()})});return m},_delete:function(j){if(this.get("deleted")){return jQuery.when()}return this.save({deleted:true},j)},purge:function(j){if(this.get("purged")){return jQuery.when()}return this.save({deleted:true,purged:true},j)},undelete:function(j){if(!this.get("deleted")){return jQuery.when()}return this.save({deleted:false},j)},copy:function(m,k){m=(m!==undefined)?(m):(true);if(!this.id){throw new Error("You must set the history ID before copying it.")}var j={history_id:this.id};if(m){j.current=true}if(k){j.name=k}var l=this,n=jQuery.post(this.urlRoot,j);if(m){return n.then(function(o){var p=new e(o);return p.setAsCurrent().done(function(){l.trigger("copied",l,o)})})}return n.done(function(o){l.trigger("copied",l,o)})},setAsCurrent:function(){var j=this,k=jQuery.getJSON("/history/set_as_current?id="+this.id);k.done(function(){j.trigger("set-as-current",j)});return k},toString:function(){return"History("+this.get("id")+","+this.get("name")+")"}}));e.UPDATE_DELAY=4000;e.getHistoryData=function c(j,v){v=v||{};var r=v.detailIdsFn||[];var m=v.hdcaDetailIds||[];var s=jQuery.Deferred(),p=null;function k(w){if(j==="current"){return jQuery.getJSON(galaxy_config.root+"history/current_history_json")}return jQuery.ajax(galaxy_config.root+"api/histories/"+j)}function o(w){return w&&w.empty}function q(x){if(o(x)){return[]}if(_.isFunction(r)){r=r(x)}if(_.isFunction(m)){m=m(x)}var w={};if(r.length){w.dataset_details=r.join(",")}if(m.length){w.dataset_collection_details=m.join(",")}return 
jQuery.ajax(galaxy_config.root+"api/histories/"+x.id+"/contents",{data:w})}var t=v.historyFn||k,u=v.contentsFn||q;var n=t(j);n.done(function(w){p=w;s.notify({status:"history data retrieved",historyJSON:p})});n.fail(function(y,w,x){s.reject(y,"loading the history")});var l=n.then(u);l.then(function(w){s.notify({status:"contents data retrieved",historyJSON:p,contentsJSON:w});s.resolve(p,w)});l.fail(function(y,w,x){s.reject(y,"loading the contents",{history:p})});return s};var f=Backbone.Collection.extend(i.LoggableMixin).extend({model:e,urlRoot:(window.galaxy_config?galaxy_config.root:"/")+"api/histories",initialize:function(k,j){j=j||{};this.log("HistoryCollection.initialize",arguments);this.includeDeleted=j.includeDeleted||false;this.setUpListeners()},setUpListeners:function a(){var j=this;this.on("change:deleted",function(k){this.debug("change:deleted",j.includeDeleted,k.get("deleted"));if(!j.includeDeleted&&k.get("deleted")){j.remove(k)}});this.on("copied",function(k,l){this.unshift(new e(l,[]))})},create:function g(m,k,j,l){var o=this,n=jQuery.getJSON(galaxy_config.root+"history/create_new_current");return n.done(function(p){var q=new e(p,[],j||{});o.unshift(q);o.trigger("new-current")})},toString:function b(){return"HistoryCollection("+this.length+")"}});return{History:e,HistoryCollection:f}});
\ No newline at end of file
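In effect, purging a history from the client now sends a single update marking the history both deleted and purged. A rough equivalent as a direct API call is sketched below; the URL, key, and history id are placeholders, and it assumes the instance permits purging at all.

    import json
    import urllib2

    GALAXY_URL = "https://galaxy.example.org"   # placeholder
    API_KEY = "0123456789abcdef"                 # placeholder
    HISTORY_ID = "f2db41e1fa331b3e"              # placeholder encoded id

    request = urllib2.Request(
        "%s/api/histories/%s?key=%s" % (GALAXY_URL, HISTORY_ID, API_KEY),
        data=json.dumps({"deleted": True, "purged": True}),
        headers={"Content-Type": "application/json"},
    )
    # Backbone's model.save() issues a PUT for an existing model
    request.get_method = lambda: "PUT"
    print urllib2.urlopen(request).read()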
https://bitbucket.org/galaxy/galaxy-central/commits/58de410169f1/
Changeset: 58de410169f1
Branch: release_15.03
User: guerler
Date: 2015-03-11 21:53:22+00:00
Summary: Map imported datasets and collections to current history without hash_util
Affected #: 1 file
diff -r 5ac767b3b060356c22c100936d804a77ebee9d59 -r 58de410169f168df15f7081478fc7c8d7c8f663b lib/galaxy/tools/__init__.py
--- a/lib/galaxy/tools/__init__.py
+++ b/lib/galaxy/tools/__init__.py
@@ -2587,11 +2587,9 @@
hda_source_dict = {} # Mapping from HDA in history to source HDAs.
for hda in history.datasets:
source_hda = hda.copied_from_history_dataset_association
- while source_hda:#should this check library datasets as well?
- #FIXME: could be multiple copies of a hda in a single history, this does a better job of matching on cloned histories,
- #but is still less than perfect when eg individual datasets are copied between histories
- if source_hda not in hda_source_dict or source_hda.hid == hda.hid:
- hda_source_dict[ source_hda ] = hda
+ while source_hda:
+ if source_hda.dataset.id not in hda_source_dict or source_hda.hid == hda.hid:
+ hda_source_dict[ source_hda.dataset.id ] = hda
source_hda = source_hda.copied_from_history_dataset_association
# Ditto for dataset collections.
@@ -2599,8 +2597,8 @@
for hdca in history.dataset_collections:
source_hdca = hdca.copied_from_history_dataset_collection_association
while source_hdca:
- if source_hdca not in hdca_source_dict or source_hdca.hid == hdca.hid:
- hdca_source_dict[ source_hdca ] = hdca
+ if source_hdca.collection.id not in hdca_source_dict or source_hdca.hid == hdca.hid:
+ hdca_source_dict[ source_hdca.collection.id ] = hdca
source_hdca = source_hdca.copied_from_history_dataset_collection_association
# Unpack unvalidated values to strings, they'll be validated when the
@@ -2619,17 +2617,21 @@
if isinstance(value,list):
values = []
for val in value:
- if is_hashable( val ):
- if val in history.datasets:
+ if isinstance(val, trans.app.model.HistoryDatasetAssociation):
+ if val.dataset.id in hda_source_dict:
+ values.append( hda_source_dict[ val.dataset.id ] )
+ else:
values.append( val )
- elif val in hda_source_dict:
- values.append( hda_source_dict[ val ])
return values
- if is_hashable( value ) and value not in history.datasets and value in hda_source_dict:
- return hda_source_dict[ value ]
+ if isinstance(value, trans.app.model.HistoryDatasetAssociation):
+ if value.dataset.id in hda_source_dict:
+ return hda_source_dict[ value.dataset.id ]
+ if isinstance(value, trans.app.model.HistoryDatasetCollectionAssociation):
+ if value.collection.id in hdca_source_dict:
+ return hdca_source_dict[ value.collection.id ]
elif isinstance( input, DataCollectionToolParameter ):
- if is_hashable( value ) and value not in history.dataset_collections and value in hdca_source_dict:
- return hdca_source_dict[ value ]
+ if value.collection.id in hdca_source_dict:
+ return hdca_source_dict[ value.collection.id ]
visit_input_values( tool_inputs, params, rerun_callback )
def _compare_tool_version( self, trans, job ):
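The substance of this change is that, when re-running a job in a copied history, source HDAs and HDCAs are matched by their underlying dataset/collection ids rather than by hashing the association objects themselves. A stripped-down illustration of that matching (the class below is a stand-in, not a Galaxy model object):

    class StubHDA(object):
        def __init__(self, dataset_id, hid, copied_from=None):
            self.dataset_id = dataset_id
            self.hid = hid
            self.copied_from_history_dataset_association = copied_from

    # the current history holds a copy of an HDA from another history
    source_hda = StubHDA(dataset_id=42, hid=1)
    current_hda = StubHDA(dataset_id=42, hid=1, copied_from=source_hda)

    # build the source map the way the commit does: keyed on dataset id
    hda_source_dict = {}
    ancestor = current_hda.copied_from_history_dataset_association
    while ancestor:
        if ancestor.dataset_id not in hda_source_dict or ancestor.hid == current_hda.hid:
            hda_source_dict[ancestor.dataset_id] = current_hda
        ancestor = ancestor.copied_from_history_dataset_association

    # a rerun parameter that still references the original HDA now resolves
    # to the copy in the current history via its dataset id
    assert hda_source_dict[source_hda.dataset_id] is current_hda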
https://bitbucket.org/galaxy/galaxy-central/commits/1b71c96a9008/
Changeset: 1b71c96a9008
Branch: release_15.03
User: dannon
Date: 2015-03-12 15:34:01+00:00
Summary: Sanely size the workflow parameter input boxes (at runtime), tweak formatting a bit.
Affected #: 1 file
diff -r 58de410169f168df15f7081478fc7c8d7c8f663b -r 1b71c96a9008e1e920affc4b65838ff50d13d537 templates/webapps/galaxy/workflow/run.mako
--- a/templates/webapps/galaxy/workflow/run.mako
+++ b/templates/webapps/galaxy/workflow/run.mako
@@ -513,7 +513,7 @@
<div class="metadataFormTitle">Workflow Parameters</div><div class="metadataFormBody">
%for parm in wf_parms:
- <div class='form-row'><label style='width:100px;'>${parm}<input style="border:2px solid ${wf_parms[parm]};border-left-width:8px;" type="text" class='wf_parm_input ptag_${parm}' name="wf_parm|${parm}" value=""/></label></div>
+ <div class='form-row'><label>${parm}<br/><input size=40 style="border:2px solid ${wf_parms[parm]};border-left-width:8px;" type="text" class='wf_parm_input ptag_${parm}' name="wf_parm|${parm}" value=""/></label></div>
%endfor
</div></div>
https://bitbucket.org/galaxy/galaxy-central/commits/3d809a9312d0/
Changeset: 3d809a9312d0
Branch: release_15.03
User: natefoo
Date: 2015-03-12 16:16:35+00:00
Summary: Prevent the deletion of jobs that are already terminal, and prevent
their outputs from being discarded.
Affected #: 2 files
diff -r 1b71c96a9008e1e920affc4b65838ff50d13d537 -r 3d809a9312d0a37908aef77c62fbccfbf0e41486 lib/galaxy/jobs/handler.py
--- a/lib/galaxy/jobs/handler.py
+++ b/lib/galaxy/jobs/handler.py
@@ -680,6 +680,9 @@
except Empty:
pass
for job, error_msg in jobs_to_check:
+ if job.finished:
+ log.debug('Job %s already finished, not deleting or stopping', job.id)
+ continue
final_state = job.states.DELETED
if error_msg is not None:
final_state = job.states.ERROR
diff -r 1b71c96a9008e1e920affc4b65838ff50d13d537 -r 3d809a9312d0a37908aef77c62fbccfbf0e41486 lib/galaxy/model/__init__.py
--- a/lib/galaxy/model/__init__.py
+++ b/lib/galaxy/model/__init__.py
@@ -517,6 +517,9 @@
"""
Mark this job as deleted, and mark any output datasets as discarded.
"""
+ if self.finished:
+ # Do not modify the state/outputs of jobs that are already terminal
+ return
if track_jobs_in_database:
self.state = Job.states.DELETED_NEW
else:
https://bitbucket.org/galaxy/galaxy-central/commits/ac807cb6c89b/
Changeset: ac807cb6c89b
Branch: release_15.03
User: natefoo
Date: 2015-03-12 16:22:59+00:00
Summary: Restore job deletion when purging a history and tracking jobs in the
database.
Affected #: 1 file
diff -r 3d809a9312d0a37908aef77c62fbccfbf0e41486 -r ac807cb6c89bdd7507648cd43f3aac67aaf2f6fc lib/galaxy/managers/hdas.py
--- a/lib/galaxy/managers/hdas.py
+++ b/lib/galaxy/managers/hdas.py
@@ -160,6 +160,7 @@
job = hda.creating_job_associations[0].job
if not job.finished:
# signal to stop the creating job
+ job.mark_deleted( self.app.config.track_jobs_in_database )
self.app.job_manager.job_stop_queue.put( job.id )
# more importantly, purge dataset as well
https://bitbucket.org/galaxy/galaxy-central/commits/83f798b67265/
Changeset: 83f798b67265
Branch: release_15.03
User: natefoo
Date: 2015-03-12 16:38:58+00:00
Summary: Make history purge behavior consistent across all UI paths (always
attempt to stop jobs if the creating job is in the purged history);
previously this was not implemented in the grid history view.
Affected #: 1 file
diff -r ac807cb6c89bdd7507648cd43f3aac67aaf2f6fc -r 83f798b67265729eb0f7dc9902bfcbf2219fd295 lib/galaxy/webapps/galaxy/controllers/history.py
--- a/lib/galaxy/webapps/galaxy/controllers/history.py
+++ b/lib/galaxy/webapps/galaxy/controllers/history.py
@@ -359,6 +359,15 @@
history.purged = True
self.sa_session.add( history )
self.sa_session.flush()
+ for hda in history.datasets:
+ # Not all datasets have jobs associated with them (e.g., datasets imported from libraries).
+ if hda.creating_job_associations:
+ # HDA has associated job, so try marking it deleted.
+ job = hda.creating_job_associations[0].job
+ if job.history_id == history.id and not job.finished:
+ # No need to check other outputs since the job's parent history is this history
+ job.mark_deleted( trans.app.config.track_jobs_in_database )
+ trans.app.job_manager.job_stop_queue.put( job.id )
trans.sa_session.flush()
if n_deleted:
part = "Deleted %d %s" % ( n_deleted, iff( n_deleted != 1, "histories", "history" ) )
@@ -1125,7 +1134,7 @@
if hda.creating_job_associations:
# HDA has associated job, so try marking it deleted.
job = hda.creating_job_associations[0].job
- if job.history_id == history.id and job.state in [ trans.app.model.Job.states.QUEUED, trans.app.model.Job.states.RUNNING, trans.app.model.Job.states.NEW ]:
+ if job.history_id == history.id and not job.finished:
# No need to check other outputs since the job's parent history is this history
job.mark_deleted( trans.app.config.track_jobs_in_database )
trans.app.job_manager.job_stop_queue.put( job.id )
https://bitbucket.org/galaxy/galaxy-central/commits/868e124734d4/
Changeset: 868e124734d4
Branch: release_15.03
User: dannon
Date: 2015-03-12 17:36:43+00:00
Summary: Process *all* replacements in Rename PJA, instead of just the first one
Affected #: 1 file
diff -r 83f798b67265729eb0f7dc9902bfcbf2219fd295 -r 868e124734d4577d67290f0a8468dfc53b8e4f12 lib/galaxy/jobs/actions/post.py
--- a/lib/galaxy/jobs/actions/post.py
+++ b/lib/galaxy/jobs/actions/post.py
@@ -156,7 +156,7 @@
# "replace" option so you can replace a portion of the name,
# support multiple #{name} in one rename action...
- if new_name.find("#{") > -1:
+ while new_name.find("#{") > -1:
to_be_replaced = ""
# This assumes a single instance of #{variable} will exist
start_pos = new_name.find("#{") + 2
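A self-contained sketch of the behavioral difference: with `while` in place of `if`, every distinct `#{name}` token in the new name gets substituted rather than only the first. The helper below simplifies the replacement logic in post.py and uses made-up parameter names.

    def rename_all(new_name, values):
        # keep substituting until no "#{...}" tokens remain
        while new_name.find("#{") > -1:
            start_pos = new_name.find("#{") + 2
            end_pos = new_name.find("}", start_pos)
            token = new_name[start_pos:end_pos]
            new_name = new_name.replace("#{%s}" % token, values.get(token, ""))
        return new_name

    print rename_all("#{input1} vs #{input2}", {"input1": "tumor", "input2": "normal"})
    # -> "tumor vs normal"; the previous `if` version stopped after handling "#{input1}"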
https://bitbucket.org/galaxy/galaxy-central/commits/733ec31a4cae/
Changeset: 733ec31a4cae
Branch: release_15.03
User: natefoo
Date: 2015-03-02 19:58:49+00:00
Summary: Relocate the external set_metadata script to the galaxy_utils package.
A temporary script in the job working directory will be created to
import and call it (trusting that `$PYTHONPATH` in a job script is
always set to `galaxy/lib`). This is so the auto-detect button can defer
command line generation until job preparation time for the case that
handlers and web processes run from different paths.
Related Trello card: https://trello.com/c/v2eCOYZi
(cherry picked from commit b043d43a355e7679f9aad76ab6271f34d12a90ff)
Move set_metadata script from galaxy_utils to galaxy, remove unused
imports.
(cherry picked from commit e093f58d636fab68707d1abfff4383dd0cd086d1)
Restore old set_metadata files to prevent failure of jobs running at
upgrade time.
(cherry picked from commit 4443b64eaed14f36753198c50c7013cbfcb788c8)
Fix unit test broken by set_metadata changes.
(cherry picked from commit d2422861871e0714fd3e448c68025e12a96bd9e8)
Use the job working directory for creating MetadataFiles in external
set_metadata, rather than new_files_path.
(cherry picked from commit 8116d2c9173a0ce4fc69caf9312d36df55062da7)
Affected #: 8 files
diff -r 868e124734d4577d67290f0a8468dfc53b8e4f12 -r 733ec31a4cae0797f045a3f37f89402593c2bdc7 lib/galaxy/datatypes/metadata.py
--- a/lib/galaxy/datatypes/metadata.py
+++ b/lib/galaxy/datatypes/metadata.py
@@ -667,8 +667,12 @@
# need to make different keys for them, since ids can overlap
return "%s_%d" % ( dataset.__class__.__name__, dataset.id )
- def setup_external_metadata( self, datasets, sa_session, exec_dir=None, tmp_dir=None, dataset_files_path=None,
- output_fnames=None, config_root=None, config_file=None, datatypes_config=None, job_metadata=None, compute_tmp_dir=None, kwds=None ):
+ def setup_external_metadata( self, datasets, sa_session, exec_dir=None,
+ tmp_dir=None, dataset_files_path=None,
+ output_fnames=None, config_root=None,
+ config_file=None, datatypes_config=None,
+ job_metadata=None, compute_tmp_dir=None,
+ include_command=True, kwds=None ):
kwds = kwds or {}
if tmp_dir is None:
tmp_dir = MetadataTempFile.tmp_dir
@@ -761,8 +765,22 @@
sa_session.add( metadata_files )
sa_session.flush()
metadata_files_list.append( metadata_files )
- #return command required to build
- return "%s %s %s %s %s %s %s %s" % ( os.path.join( exec_dir, 'set_metadata.sh' ), dataset_files_path, compute_tmp_dir or tmp_dir, config_root, config_file, datatypes_config, job_metadata, " ".join( map( __metadata_files_list_to_cmd_line, metadata_files_list ) ) )
+ args = "%s %s %s %s %s %s %s" % ( dataset_files_path,
+ compute_tmp_dir or tmp_dir,
+ config_root,
+ config_file,
+ datatypes_config,
+ job_metadata,
+ " ".join( map( __metadata_files_list_to_cmd_line, metadata_files_list ) ) )
+ if include_command:
+ #return command required to build
+ fd, fp = tempfile.mkstemp( suffix='.py', dir = tmp_dir, prefix = "set_metadata_" )
+ metadata_script_file = abspath( fp )
+ os.fdopen( fd, 'w' ).write( 'from galaxy.metadata.set_metadata import set_metadata; set_metadata()' )
+ return "python %s %s" % ( metadata_script_file, args )
+ else:
+ # return args to galaxy.metadata.set_metadata required to build
+ return args
def external_metadata_set_successfully( self, dataset, sa_session ):
metadata_files = self.get_output_filenames_by_dataset( dataset, sa_session )
diff -r 868e124734d4577d67290f0a8468dfc53b8e4f12 -r 733ec31a4cae0797f045a3f37f89402593c2bdc7 lib/galaxy/datatypes/registry.py
--- a/lib/galaxy/datatypes/registry.py
+++ b/lib/galaxy/datatypes/registry.py
@@ -653,12 +653,15 @@
<requirement type="package">samtools</requirement></requirements><action module="galaxy.tools.actions.metadata" class="SetMetadataToolAction"/>
- <command>$__SET_EXTERNAL_METADATA_COMMAND_LINE__</command>
+ <command>python $set_metadata $__SET_EXTERNAL_METADATA_COMMAND_LINE__</command><inputs><param format="data" name="input1" type="data" label="File to set metadata on."/><param name="__ORIGINAL_DATASET_STATE__" type="hidden" value=""/><param name="__SET_EXTERNAL_METADATA_COMMAND_LINE__" type="hidden" value=""/></inputs>
+ <configfiles>
+ <configfile name="set_metadata">from galaxy.metadata.set_metadata import set_metadata; set_metadata()</configfile>
+ </configfiles></tool>
"""
tmp_name = tempfile.NamedTemporaryFile()
diff -r 868e124734d4577d67290f0a8468dfc53b8e4f12 -r 733ec31a4cae0797f045a3f37f89402593c2bdc7 lib/galaxy/jobs/__init__.py
--- a/lib/galaxy/jobs/__init__.py
+++ b/lib/galaxy/jobs/__init__.py
@@ -831,8 +831,7 @@
self.command_line, self.extra_filenames = tool_evaluator.build()
# FIXME: for now, tools get Galaxy's lib dir in their path
- if self.command_line and self.command_line.startswith( 'python' ):
- self.galaxy_lib_dir = os.path.abspath( "lib" ) # cwd = galaxy root
+ self.galaxy_lib_dir = os.path.abspath( "lib" ) # cwd = galaxy root
# Shell fragment to inject dependencies
self.dependency_shell_commands = self.tool.build_dependency_shell_commands()
# We need command_line persisted to the db in order for Galaxy to re-queue the job
diff -r 868e124734d4577d67290f0a8468dfc53b8e4f12 -r 733ec31a4cae0797f045a3f37f89402593c2bdc7 lib/galaxy/jobs/command_factory.py
--- a/lib/galaxy/jobs/command_factory.py
+++ b/lib/galaxy/jobs/command_factory.py
@@ -145,7 +145,7 @@
metadata_command = metadata_command.strip()
if metadata_command:
commands_builder.capture_return_code()
- commands_builder.append_command("cd %s; %s" % (exec_dir, metadata_command))
+ commands_builder.append_command(metadata_command)
def __copy_if_exists_command(work_dir_output):
diff -r 868e124734d4577d67290f0a8468dfc53b8e4f12 -r 733ec31a4cae0797f045a3f37f89402593c2bdc7 lib/galaxy/metadata/__init__.py
--- /dev/null
+++ b/lib/galaxy/metadata/__init__.py
@@ -0,0 +1,2 @@
+""" Work with Galaxy metadata
+"""
diff -r 868e124734d4577d67290f0a8468dfc53b8e4f12 -r 733ec31a4cae0797f045a3f37f89402593c2bdc7 lib/galaxy/metadata/set_metadata.py
--- /dev/null
+++ b/lib/galaxy/metadata/set_metadata.py
@@ -0,0 +1,122 @@
+"""
+Execute an external process to set_meta() on a provided list of pickled datasets.
+
+This was formerly scripts/set_metadata.py and expects the same arguments as
+that script.
+"""
+
+import logging
+logging.basicConfig()
+log = logging.getLogger( __name__ )
+
+import cPickle
+import json
+import os
+import sys
+
+# ensure supported version
+assert sys.version_info[:2] >= ( 2, 6 ) and sys.version_info[:2] <= ( 2, 7 ), 'Python version must be 2.6 or 2.7, this is: %s' % sys.version
+
+new_path = [ os.path.join( os.getcwd(), "lib" ) ]
+new_path.extend( sys.path[ 1: ] ) # remove scripts/ from the path
+sys.path = new_path
+
+from galaxy import eggs
+import pkg_resources
+import galaxy.model.mapping # need to load this before we unpickle, in order to setup properties assigned by the mappers
+galaxy.model.Job() # this looks REAL stupid, but it is REQUIRED in order for SA to insert parameters into the classes defined by the mappers --> it appears that instantiating ANY mapper'ed class would suffice here
+from galaxy.util import stringify_dictionary_keys
+from sqlalchemy.orm import clear_mappers
+
+
+def set_meta_with_tool_provided( dataset_instance, file_dict, set_meta_kwds ):
+ # This method is somewhat odd, in that we set the metadata attributes from tool,
+ # then call set_meta, then set metadata attributes from tool again.
+ # This is intentional due to interplay of overwrite kwd, the fact that some metadata
+ # parameters may rely on the values of others, and that we are accepting the
+ # values provided by the tool as Truth.
+ for metadata_name, metadata_value in file_dict.get( 'metadata', {} ).iteritems():
+ setattr( dataset_instance.metadata, metadata_name, metadata_value )
+ dataset_instance.datatype.set_meta( dataset_instance, **set_meta_kwds )
+ for metadata_name, metadata_value in file_dict.get( 'metadata', {} ).iteritems():
+ setattr( dataset_instance.metadata, metadata_name, metadata_value )
+
+def set_metadata():
+ # locate galaxy_root for loading datatypes
+ galaxy_root = os.path.abspath(os.path.join(os.path.dirname(__file__), os.pardir, os.pardir, os.pardir))
+ galaxy.datatypes.metadata.MetadataTempFile.tmp_dir = tool_job_working_directory = os.path.abspath(os.getcwd())
+
+ # Set up datatypes registry
+ datatypes_config = sys.argv.pop( 1 )
+ datatypes_registry = galaxy.datatypes.registry.Registry()
+ datatypes_registry.load_datatypes( root_dir=config_root, config=datatypes_config )
+ galaxy.model.set_datatypes_registry( datatypes_registry )
+
+ job_metadata = sys.argv.pop( 1 )
+ existing_job_metadata_dict = {}
+ new_job_metadata_dict = {}
+ if job_metadata != "None" and os.path.exists( job_metadata ):
+ for line in open( job_metadata, 'r' ):
+ try:
+ line = stringify_dictionary_keys( json.loads( line ) )
+ if line['type'] == 'dataset':
+ existing_job_metadata_dict[ line['dataset_id'] ] = line
+ elif line['type'] == 'new_primary_dataset':
+ new_job_metadata_dict[ line[ 'filename' ] ] = line
+ except:
+ continue
+
+ for filenames in sys.argv[1:]:
+ fields = filenames.split( ',' )
+ filename_in = fields.pop( 0 )
+ filename_kwds = fields.pop( 0 )
+ filename_out = fields.pop( 0 )
+ filename_results_code = fields.pop( 0 )
+ dataset_filename_override = fields.pop( 0 )
+ # Need to be careful with the way that these parameters are populated from the filename splitting,
+ # because if a job is running when the server is updated, any existing external metadata command-lines
+ #will not have info about the newly added override_metadata file
+ if fields:
+ override_metadata = fields.pop( 0 )
+ else:
+ override_metadata = None
+ set_meta_kwds = stringify_dictionary_keys( json.load( open( filename_kwds ) ) ) # load kwds; need to ensure our keywords are not unicode
+ try:
+ dataset = cPickle.load( open( filename_in ) ) # load DatasetInstance
+ if dataset_filename_override:
+ dataset.dataset.external_filename = dataset_filename_override
+ files_path = os.path.abspath(os.path.join( tool_job_working_directory, "dataset_%s_files" % (dataset.dataset.id) ))
+ dataset.dataset.external_extra_files_path = files_path
+ if dataset.dataset.id in existing_job_metadata_dict:
+ dataset.extension = existing_job_metadata_dict[ dataset.dataset.id ].get( 'ext', dataset.extension )
+ # Metadata FileParameter types may not be writable on a cluster node, and are therefore temporarily substituted with MetadataTempFiles
+ if override_metadata:
+ override_metadata = json.load( open( override_metadata ) )
+ for metadata_name, metadata_file_override in override_metadata:
+ if galaxy.datatypes.metadata.MetadataTempFile.is_JSONified_value( metadata_file_override ):
+ metadata_file_override = galaxy.datatypes.metadata.MetadataTempFile.from_JSON( metadata_file_override )
+ setattr( dataset.metadata, metadata_name, metadata_file_override )
+ file_dict = existing_job_metadata_dict.get( dataset.dataset.id, {} )
+ set_meta_with_tool_provided( dataset, file_dict, set_meta_kwds )
+ dataset.metadata.to_JSON_dict( filename_out ) # write out results of set_meta
+ json.dump( ( True, 'Metadata has been set successfully' ), open( filename_results_code, 'wb+' ) ) # setting metadata has succeeded
+ except Exception, e:
+ json.dump( ( False, str( e ) ), open( filename_results_code, 'wb+' ) ) # setting metadata has failed somehow
+
+ for i, ( filename, file_dict ) in enumerate( new_job_metadata_dict.iteritems(), start=1 ):
+ new_dataset = galaxy.model.Dataset( id=-i, external_filename=os.path.join( tool_job_working_directory, file_dict[ 'filename' ] ) )
+ extra_files = file_dict.get( 'extra_files', None )
+ if extra_files is not None:
+ new_dataset._extra_files_path = os.path.join( tool_job_working_directory, extra_files )
+ new_dataset.state = new_dataset.states.OK
+ new_dataset_instance = galaxy.model.HistoryDatasetAssociation( id=-i, dataset=new_dataset, extension=file_dict.get( 'ext', 'data' ) )
+ set_meta_with_tool_provided( new_dataset_instance, file_dict, set_meta_kwds )
+ file_dict[ 'metadata' ] = json.loads( new_dataset_instance.metadata.to_JSON_dict() ) #storing metadata in external form, need to turn back into dict, then later jsonify
+ if existing_job_metadata_dict or new_job_metadata_dict:
+ with open( job_metadata, 'wb' ) as job_metadata_fh:
+ for value in existing_job_metadata_dict.values() + new_job_metadata_dict.values():
+ job_metadata_fh.write( "%s\n" % ( json.dumps( value ) ) )
+
+ clear_mappers()
+ # Shut down any additional threads that might have been created via the ObjectStore
+ object_store.shutdown()
diff -r 868e124734d4577d67290f0a8468dfc53b8e4f12 -r 733ec31a4cae0797f045a3f37f89402593c2bdc7 lib/galaxy/tools/actions/metadata.py
--- a/lib/galaxy/tools/actions/metadata.py
+++ b/lib/galaxy/tools/actions/metadata.py
@@ -76,6 +76,7 @@
config_file = app.config.config_file,
datatypes_config = app.datatypes_registry.integrated_datatypes_configs,
job_metadata = None,
+ include_command = False,
kwds = { 'overwrite' : overwrite } )
incoming[ '__SET_EXTERNAL_METADATA_COMMAND_LINE__' ] = cmd_line
for name, value in tool.params_to_strings( incoming, app ).iteritems():
diff -r 868e124734d4577d67290f0a8468dfc53b8e4f12 -r 733ec31a4cae0797f045a3f37f89402593c2bdc7 test/unit/jobs/test_command_factory.py
--- a/test/unit/jobs/test_command_factory.py
+++ b/test/unit/jobs/test_command_factory.py
@@ -72,7 +72,7 @@
self.include_metadata = True
self.include_work_dir_outputs = False
self.job_wrapper.metadata_line = TEST_METADATA_LINE
- expected_command = '%s; return_code=$?; cd %s; %s; sh -c "exit $return_code"' % (MOCK_COMMAND_LINE, getcwd(), TEST_METADATA_LINE)
+ expected_command = '%s; return_code=$?; %s; sh -c "exit $return_code"' % (MOCK_COMMAND_LINE, TEST_METADATA_LINE)
self.__assert_command_is( expected_command )
def test_empty_metadata(self):
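To recap the mechanism introduced here: rather than baking the absolute path of set_metadata.sh (and hence the web process's Galaxy root) into the job command line, Galaxy now writes a tiny wrapper script into the job's temp directory and runs it with plain `python`, trusting that PYTHONPATH points at galaxy/lib on the compute side. A hedged sketch of what that wrapper amounts to, with an illustrative directory and argument list:

    import os
    import tempfile

    tmp_dir = tempfile.mkdtemp(prefix="job_working_")   # stand-in for the job working dir
    fd, script_path = tempfile.mkstemp(suffix=".py", dir=tmp_dir, prefix="set_metadata_")
    os.fdopen(fd, "w").write(
        "from galaxy.metadata.set_metadata import set_metadata; set_metadata()"
    )

    # the metadata portion of the job command line then looks roughly like:
    #   python <script_path> <dataset_files_path> <tmp_dir> <config_root> \
    #       <config_file> <datatypes_config> <job_metadata> <metadata file lists...>
    print script_path

Later commits in this same push trim the argument list and move the module to galaxy_ext, but the wrapper-script approach stays the same.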
https://bitbucket.org/galaxy/galaxy-central/commits/56ea9cfb4394/
Changeset: 56ea9cfb4394
Branch: release_15.03
User: natefoo
Date: 2015-03-04 18:44:17+00:00
Summary: Run history export jobs in the same manner as metadata auto-detection
(so the absolute path of the web galaxy dir is not part of the command
line and can be computed at runtime).
(cherry picked from commit de5d6d11503f931dd58f7f09094c45c39578e494)
Affected #: 2 files
diff -r 733ec31a4cae0797f045a3f37f89402593c2bdc7 -r 56ea9cfb4394efbd3e56dd0a0965602be085b6a7 lib/galaxy/tools/imp_exp/__init__.py
--- a/lib/galaxy/tools/imp_exp/__init__.py
+++ b/lib/galaxy/tools/imp_exp/__init__.py
@@ -17,12 +17,15 @@
<tool id="__EXPORT_HISTORY__" name="Export History" version="0.1" tool_type="export_history"><type class="ExportHistoryTool" module="galaxy.tools"/><action module="galaxy.tools.actions.history_imp_exp" class="ExportHistoryToolAction"/>
- <command>$__EXPORT_HISTORY_COMMAND_INPUTS_OPTIONS__ $output_file</command>
+ <command>python $export_history $__EXPORT_HISTORY_COMMAND_INPUTS_OPTIONS__ $output_file</command><inputs><param name="__HISTORY_TO_EXPORT__" type="hidden"/><param name="compress" type="boolean"/><param name="__EXPORT_HISTORY_COMMAND_INPUTS_OPTIONS__" type="hidden"/></inputs>
+ <configfiles>
+ <configfile name="export_history">from galaxy.tools.imp_exp.export_history import main; main()</configfile>
+ </configfiles><outputs><data format="gzip" name="output_file"/></outputs>
@@ -530,11 +533,9 @@
options = ""
if jeha.compressed:
options = "-G"
- return "python %s %s %s %s %s" % ( os.path.join( os.path.abspath( os.getcwd() ),
- "lib/galaxy/tools/imp_exp/export_history.py" ),
- options, history_attrs_filename,
- datasets_attrs_filename,
- jobs_attrs_filename )
+ return "%s %s %s %s" % ( options, history_attrs_filename,
+ datasets_attrs_filename,
+ jobs_attrs_filename )
def cleanup_after_job( self, db_session ):
""" Remove temporary directory and attribute files generated during setup for this job. """
diff -r 733ec31a4cae0797f045a3f37f89402593c2bdc7 -r 56ea9cfb4394efbd3e56dd0a0965602be085b6a7 lib/galaxy/tools/imp_exp/export_history.py
--- a/lib/galaxy/tools/imp_exp/export_history.py
+++ b/lib/galaxy/tools/imp_exp/export_history.py
@@ -70,7 +70,7 @@
except Exception, e:
return 'Error creating history archive: %s' % str( e ), sys.stderr
-if __name__ == "__main__":
+def main():
# Parse command line.
parser = optparse.OptionParser()
parser.add_option( '-G', '--gzip', dest='gzip', action="store_true", help='Compress archive using gzip.' )
@@ -81,3 +81,6 @@
# Create archive.
status = create_archive( history_attrs, dataset_attrs, job_attrs, out_file, gzip )
print status
+
+if __name__ == "__main__":
+ main()
https://bitbucket.org/galaxy/galaxy-central/commits/ace41a078a44/
Changeset: ace41a078a44
Branch: release_15.03
User: natefoo
Date: 2015-03-13 02:11:39+00:00
Summary: Add minor and combined major+minor version to version.py.
Affected #: 1 file
diff -r 56ea9cfb4394efbd3e56dd0a0965602be085b6a7 -r ace41a078a44cb3fe32c6bcdc9e051ba72278d3f lib/galaxy/version.py
--- a/lib/galaxy/version.py
+++ b/lib/galaxy/version.py
@@ -1,1 +1,3 @@
VERSION_MAJOR = "15.03"
+VERSION_MINOR = None
+VERSION = VERSION_MAJOR + ('.' + VERSION_MINOR if VERSION_MINOR else '')
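For illustration, once a point release sets the minor version the combined string works out as below; the "1" is only an example value, the commit itself leaves VERSION_MINOR as None (yielding plain "15.03").

    VERSION_MAJOR = "15.03"
    VERSION_MINOR = "1"    # example; None in this commit
    VERSION = VERSION_MAJOR + ('.' + VERSION_MINOR if VERSION_MINOR else '')
    print VERSION          # -> "15.03.1"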
https://bitbucket.org/galaxy/galaxy-central/commits/b194d6db2f76/
Changeset: b194d6db2f76
Branch: release_15.03
User: natefoo
Date: 2015-03-13 02:31:30+00:00
Summary: Fix things that I somehow missed in the rebase for PR #22.
Affected #: 2 files
diff -r ace41a078a44cb3fe32c6bcdc9e051ba72278d3f -r b194d6db2f765daabe5bfab74b9f3bc58df267b7 lib/galaxy/datatypes/metadata.py
--- a/lib/galaxy/datatypes/metadata.py
+++ b/lib/galaxy/datatypes/metadata.py
@@ -765,13 +765,9 @@
sa_session.add( metadata_files )
sa_session.flush()
metadata_files_list.append( metadata_files )
- args = "%s %s %s %s %s %s %s" % ( dataset_files_path,
- compute_tmp_dir or tmp_dir,
- config_root,
- config_file,
- datatypes_config,
- job_metadata,
- " ".join( map( __metadata_files_list_to_cmd_line, metadata_files_list ) ) )
+ args = "%s %s %s" % ( datatypes_config,
+ job_metadata,
+ " ".join( map( __metadata_files_list_to_cmd_line, metadata_files_list ) ) )
if include_command:
#return command required to build
fd, fp = tempfile.mkstemp( suffix='.py', dir = tmp_dir, prefix = "set_metadata_" )
diff -r ace41a078a44cb3fe32c6bcdc9e051ba72278d3f -r b194d6db2f765daabe5bfab74b9f3bc58df267b7 lib/galaxy/metadata/set_metadata.py
--- a/lib/galaxy/metadata/set_metadata.py
+++ b/lib/galaxy/metadata/set_metadata.py
@@ -1,8 +1,14 @@
"""
Execute an external process to set_meta() on a provided list of pickled datasets.
-This was formerly scripts/set_metadata.py and expects the same arguments as
-that script.
+This was formerly scripts/set_metadata.py and expects these arguments:
+
+ %prog datatypes_conf.xml job_metadata_file metadata_in,metadata_kwds,metadata_out,metadata_results_code,output_filename_override,metadata_override...
+
+Galaxy should be importable on sys.path and output_filename_override should be
+set to the path of the dataset on which metadata is being set
+(output_filename_override could previously be left empty and the path would be
+constructed automatically).
"""
import logging
@@ -49,7 +55,7 @@
# Set up datatypes registry
datatypes_config = sys.argv.pop( 1 )
datatypes_registry = galaxy.datatypes.registry.Registry()
- datatypes_registry.load_datatypes( root_dir=config_root, config=datatypes_config )
+ datatypes_registry.load_datatypes( root_dir=galaxy_root, config=datatypes_config )
galaxy.model.set_datatypes_registry( datatypes_registry )
job_metadata = sys.argv.pop( 1 )
@@ -83,8 +89,7 @@
set_meta_kwds = stringify_dictionary_keys( json.load( open( filename_kwds ) ) ) # load kwds; need to ensure our keywords are not unicode
try:
dataset = cPickle.load( open( filename_in ) ) # load DatasetInstance
- if dataset_filename_override:
- dataset.dataset.external_filename = dataset_filename_override
+ dataset.dataset.external_filename = dataset_filename_override
files_path = os.path.abspath(os.path.join( tool_job_working_directory, "dataset_%s_files" % (dataset.dataset.id) ))
dataset.dataset.external_extra_files_path = files_path
if dataset.dataset.id in existing_job_metadata_dict:
@@ -118,5 +123,3 @@
job_metadata_fh.write( "%s\n" % ( json.dumps( value ) ) )
clear_mappers()
- # Shut down any additional threads that might have been created via the ObjectStore
- object_store.shutdown()
https://bitbucket.org/galaxy/galaxy-central/commits/3d1aa30fbe57/
Changeset: 3d1aa30fbe57
Branch: release_15.03
User: natefoo
Date: 2015-03-13 03:45:05+00:00
Summary: Another change missed in the rebase/commit.
Affected #: 1 file
diff -r b194d6db2f765daabe5bfab74b9f3bc58df267b7 -r 3d1aa30fbe57f94e65963f1a45c472f9762d34de lib/galaxy/datatypes/metadata.py
--- a/lib/galaxy/datatypes/metadata.py
+++ b/lib/galaxy/datatypes/metadata.py
@@ -691,8 +691,8 @@
def __get_filename_override():
if output_fnames:
for dataset_path in output_fnames:
- if dataset_path.false_path and dataset_path.real_path == metadata_files.dataset.file_name:
- return dataset_path.false_path
+ if dataset_path.real_path == metadata_files.dataset.file_name:
+ return dataset_path.false_path or dataset_path.real_path
return ""
line = "%s,%s,%s,%s,%s,%s" % (
metadata_path_on_compute(metadata_files.filename_in),
https://bitbucket.org/galaxy/galaxy-central/commits/5bf564a741ca/
Changeset: 5bf564a741ca
Branch: release_15.03
User: natefoo
Date: 2015-03-13 03:56:21+00:00
Summary: DELETED_NEW is part of the Job.finished property...
Affected #: 1 file
diff -r 3d1aa30fbe57f94e65963f1a45c472f9762d34de -r 5bf564a741ca8b8d87f27776e3de99a7287f7ed9 lib/galaxy/jobs/handler.py
--- a/lib/galaxy/jobs/handler.py
+++ b/lib/galaxy/jobs/handler.py
@@ -680,7 +680,7 @@
except Empty:
pass
for job, error_msg in jobs_to_check:
- if job.finished:
+ if job.state != job.states.DELETED_NEW and job.finished:
log.debug('Job %s already finished, not deleting or stopping', job.id)
continue
final_state = job.states.DELETED
https://bitbucket.org/galaxy/galaxy-central/commits/4efc589120a6/
Changeset: 4efc589120a6
Branch: release_15.03
User: natefoo
Date: 2015-03-13 04:36:25+00:00
Summary: One more change missed in my rebase.
Affected #: 1 file
diff -r 5bf564a741ca8b8d87f27776e3de99a7287f7ed9 -r 4efc589120a6f1273382ea1bd31fbd6b6486dcbd lib/galaxy/tools/actions/metadata.py
--- a/lib/galaxy/tools/actions/metadata.py
+++ b/lib/galaxy/tools/actions/metadata.py
@@ -2,6 +2,7 @@
from galaxy.datatypes.metadata import JobExternalOutputMetadataWrapper
from galaxy.util.odict import odict
from galaxy.util.json import dumps
+from galaxy.jobs.datasets import DatasetPath
import logging
log = logging.getLogger( __name__ )
@@ -65,13 +66,14 @@
#add parameters to job_parameter table
# Store original dataset state, so we can restore it. A separate table might be better (no chance of 'losing' the original state)?
incoming[ '__ORIGINAL_DATASET_STATE__' ] = dataset.state
+ input_paths = [DatasetPath( dataset.id, real_path=dataset.file_name, mutable=False )]
external_metadata_wrapper = JobExternalOutputMetadataWrapper( job )
cmd_line = external_metadata_wrapper.setup_external_metadata( dataset,
sa_session,
exec_dir = None,
tmp_dir = app.config.new_file_path,
dataset_files_path = app.model.Dataset.file_path,
- output_fnames = None,
+ output_fnames = input_paths,
config_root = app.config.root,
config_file = app.config.config_file,
datatypes_config = app.datatypes_registry.integrated_datatypes_configs,
https://bitbucket.org/galaxy/galaxy-central/commits/84e6ea54c10c/
Changeset: 84e6ea54c10c
Branch: release_15.03
User: natefoo
Date: 2015-03-13 16:30:57+00:00
Summary: Use job working directory for MetadataFiles created with auto-detect.
Affected #: 2 files
diff -r 4efc589120a6f1273382ea1bd31fbd6b6486dcbd -r 84e6ea54c10c58ade9ce36c824571f43024dfcb5 lib/galaxy/datatypes/metadata.py
--- a/lib/galaxy/datatypes/metadata.py
+++ b/lib/galaxy/datatypes/metadata.py
@@ -676,6 +676,11 @@
kwds = kwds or {}
if tmp_dir is None:
tmp_dir = MetadataTempFile.tmp_dir
+ else:
+ MetadataTempFile.tmp_dir = tmp_dir
+
+ if not os.path.exists(tmp_dir):
+ os.makedirs(tmp_dir)
# path is calculated for Galaxy, may be different on compute - rewrite
# for the compute server.
diff -r 4efc589120a6f1273382ea1bd31fbd6b6486dcbd -r 84e6ea54c10c58ade9ce36c824571f43024dfcb5 lib/galaxy/tools/actions/metadata.py
--- a/lib/galaxy/tools/actions/metadata.py
+++ b/lib/galaxy/tools/actions/metadata.py
@@ -1,12 +1,15 @@
+import logging
+
from __init__ import ToolAction
from galaxy.datatypes.metadata import JobExternalOutputMetadataWrapper
from galaxy.util.odict import odict
from galaxy.util.json import dumps
from galaxy.jobs.datasets import DatasetPath
-import logging
+
log = logging.getLogger( __name__ )
+
class SetMetadataToolAction( ToolAction ):
"""Tool action used for setting external metadata on an existing dataset"""
@@ -67,11 +70,12 @@
# Store original dataset state, so we can restore it. A separate table might be better (no chance of 'losing' the original state)?
incoming[ '__ORIGINAL_DATASET_STATE__' ] = dataset.state
input_paths = [DatasetPath( dataset.id, real_path=dataset.file_name, mutable=False )]
+ job_working_dir = app.object_store.get_filename(job, base_dir='job_work', dir_only=True, extra_dir=str(job.id))
external_metadata_wrapper = JobExternalOutputMetadataWrapper( job )
cmd_line = external_metadata_wrapper.setup_external_metadata( dataset,
sa_session,
exec_dir = None,
- tmp_dir = app.config.new_file_path,
+ tmp_dir = job_working_dir,
dataset_files_path = app.model.Dataset.file_path,
output_fnames = input_paths,
config_root = app.config.root,
https://bitbucket.org/galaxy/galaxy-central/commits/bb2f159e899d/
Changeset: bb2f159e899d
Branch: release_15.03
User: natefoo
Date: 2015-03-13 17:17:07+00:00
Summary: Use the correct path to the set_metadata temp script on remote compute
resources.
Affected #: 1 file
diff -r 84e6ea54c10c58ade9ce36c824571f43024dfcb5 -r bb2f159e899d88cdcc5eaec45c9550203ac6d0d7 lib/galaxy/datatypes/metadata.py
--- a/lib/galaxy/datatypes/metadata.py
+++ b/lib/galaxy/datatypes/metadata.py
@@ -778,7 +778,7 @@
fd, fp = tempfile.mkstemp( suffix='.py', dir = tmp_dir, prefix = "set_metadata_" )
metadata_script_file = abspath( fp )
os.fdopen( fd, 'w' ).write( 'from galaxy.metadata.set_metadata import set_metadata; set_metadata()' )
- return "python %s %s" % ( metadata_script_file, args )
+ return "python %s %s" % ( metadata_path_on_compute(metadata_script_file), args )
else:
# return args to galaxy.metadata.set_metadata required to build
return args
https://bitbucket.org/galaxy/galaxy-central/commits/03f569f9eb69/
Changeset: 03f569f9eb69
Branch: release_15.03
User: natefoo
Date: 2015-03-13 18:31:58+00:00
Summary: Move set_metadata (once again) to galaxy_ext so that the `galaxy`
namespace does not conflict with others (e.g. the galaxy-ops package in
the toolshed).
Affected #: 7 files
diff -r bb2f159e899d88cdcc5eaec45c9550203ac6d0d7 -r 03f569f9eb693b41c14e8e6e5b0fa30576905c7c lib/galaxy/datatypes/metadata.py
--- a/lib/galaxy/datatypes/metadata.py
+++ b/lib/galaxy/datatypes/metadata.py
@@ -777,10 +777,10 @@
#return command required to build
fd, fp = tempfile.mkstemp( suffix='.py', dir = tmp_dir, prefix = "set_metadata_" )
metadata_script_file = abspath( fp )
- os.fdopen( fd, 'w' ).write( 'from galaxy.metadata.set_metadata import set_metadata; set_metadata()' )
+ os.fdopen( fd, 'w' ).write( 'from galaxy_ext.metadata.set_metadata import set_metadata; set_metadata()' )
return "python %s %s" % ( metadata_path_on_compute(metadata_script_file), args )
else:
- # return args to galaxy.metadata.set_metadata required to build
+ # return args to galaxy_ext.metadata.set_metadata required to build
return args
def external_metadata_set_successfully( self, dataset, sa_session ):
diff -r bb2f159e899d88cdcc5eaec45c9550203ac6d0d7 -r 03f569f9eb693b41c14e8e6e5b0fa30576905c7c lib/galaxy/datatypes/registry.py
--- a/lib/galaxy/datatypes/registry.py
+++ b/lib/galaxy/datatypes/registry.py
@@ -660,7 +660,7 @@
<param name="__SET_EXTERNAL_METADATA_COMMAND_LINE__" type="hidden" value=""/></inputs><configfiles>
- <configfile name="set_metadata">from galaxy.metadata.set_metadata import set_metadata; set_metadata()</configfile>
+ <configfile name="set_metadata">from galaxy_ext.metadata.set_metadata import set_metadata; set_metadata()</configfile></configfiles></tool>
"""
diff -r bb2f159e899d88cdcc5eaec45c9550203ac6d0d7 -r 03f569f9eb693b41c14e8e6e5b0fa30576905c7c lib/galaxy/metadata/__init__.py
--- a/lib/galaxy/metadata/__init__.py
+++ /dev/null
@@ -1,2 +0,0 @@
-""" Work with Galaxy metadata
-"""
diff -r bb2f159e899d88cdcc5eaec45c9550203ac6d0d7 -r 03f569f9eb693b41c14e8e6e5b0fa30576905c7c lib/galaxy/metadata/set_metadata.py
--- a/lib/galaxy/metadata/set_metadata.py
+++ /dev/null
@@ -1,125 +0,0 @@
-"""
-Execute an external process to set_meta() on a provided list of pickled datasets.
-
-This was formerly scripts/set_metadata.py and expects these arguments:
-
- %prog datatypes_conf.xml job_metadata_file metadata_in,metadata_kwds,metadata_out,metadata_results_code,output_filename_override,metadata_override...
-
-Galaxy should be importable on sys.path and output_filename_override should be
-set to the path of the dataset on which metadata is being set
-(output_filename_override could previously be left empty and the path would be
-constructed automatically).
-"""
-
-import logging
-logging.basicConfig()
-log = logging.getLogger( __name__ )
-
-import cPickle
-import json
-import os
-import sys
-
-# ensure supported version
-assert sys.version_info[:2] >= ( 2, 6 ) and sys.version_info[:2] <= ( 2, 7 ), 'Python version must be 2.6 or 2.7, this is: %s' % sys.version
-
-new_path = [ os.path.join( os.getcwd(), "lib" ) ]
-new_path.extend( sys.path[ 1: ] ) # remove scripts/ from the path
-sys.path = new_path
-
-from galaxy import eggs
-import pkg_resources
-import galaxy.model.mapping # need to load this before we unpickle, in order to setup properties assigned by the mappers
-galaxy.model.Job() # this looks REAL stupid, but it is REQUIRED in order for SA to insert parameters into the classes defined by the mappers --> it appears that instantiating ANY mapper'ed class would suffice here
-from galaxy.util import stringify_dictionary_keys
-from sqlalchemy.orm import clear_mappers
-
-
-def set_meta_with_tool_provided( dataset_instance, file_dict, set_meta_kwds ):
- # This method is somewhat odd, in that we set the metadata attributes from tool,
- # then call set_meta, then set metadata attributes from tool again.
- # This is intentional due to interplay of overwrite kwd, the fact that some metadata
- # parameters may rely on the values of others, and that we are accepting the
- # values provided by the tool as Truth.
- for metadata_name, metadata_value in file_dict.get( 'metadata', {} ).iteritems():
- setattr( dataset_instance.metadata, metadata_name, metadata_value )
- dataset_instance.datatype.set_meta( dataset_instance, **set_meta_kwds )
- for metadata_name, metadata_value in file_dict.get( 'metadata', {} ).iteritems():
- setattr( dataset_instance.metadata, metadata_name, metadata_value )
-
-def set_metadata():
- # locate galaxy_root for loading datatypes
- galaxy_root = os.path.abspath(os.path.join(os.path.dirname(__file__), os.pardir, os.pardir, os.pardir))
- galaxy.datatypes.metadata.MetadataTempFile.tmp_dir = tool_job_working_directory = os.path.abspath(os.getcwd())
-
- # Set up datatypes registry
- datatypes_config = sys.argv.pop( 1 )
- datatypes_registry = galaxy.datatypes.registry.Registry()
- datatypes_registry.load_datatypes( root_dir=galaxy_root, config=datatypes_config )
- galaxy.model.set_datatypes_registry( datatypes_registry )
-
- job_metadata = sys.argv.pop( 1 )
- existing_job_metadata_dict = {}
- new_job_metadata_dict = {}
- if job_metadata != "None" and os.path.exists( job_metadata ):
- for line in open( job_metadata, 'r' ):
- try:
- line = stringify_dictionary_keys( json.loads( line ) )
- if line['type'] == 'dataset':
- existing_job_metadata_dict[ line['dataset_id'] ] = line
- elif line['type'] == 'new_primary_dataset':
- new_job_metadata_dict[ line[ 'filename' ] ] = line
- except:
- continue
-
- for filenames in sys.argv[1:]:
- fields = filenames.split( ',' )
- filename_in = fields.pop( 0 )
- filename_kwds = fields.pop( 0 )
- filename_out = fields.pop( 0 )
- filename_results_code = fields.pop( 0 )
- dataset_filename_override = fields.pop( 0 )
- # Need to be careful with the way that these parameters are populated from the filename splitting,
- # because if a job is running when the server is updated, any existing external metadata command-lines
- #will not have info about the newly added override_metadata file
- if fields:
- override_metadata = fields.pop( 0 )
- else:
- override_metadata = None
- set_meta_kwds = stringify_dictionary_keys( json.load( open( filename_kwds ) ) ) # load kwds; need to ensure our keywords are not unicode
- try:
- dataset = cPickle.load( open( filename_in ) ) # load DatasetInstance
- dataset.dataset.external_filename = dataset_filename_override
- files_path = os.path.abspath(os.path.join( tool_job_working_directory, "dataset_%s_files" % (dataset.dataset.id) ))
- dataset.dataset.external_extra_files_path = files_path
- if dataset.dataset.id in existing_job_metadata_dict:
- dataset.extension = existing_job_metadata_dict[ dataset.dataset.id ].get( 'ext', dataset.extension )
- # Metadata FileParameter types may not be writable on a cluster node, and are therefore temporarily substituted with MetadataTempFiles
- if override_metadata:
- override_metadata = json.load( open( override_metadata ) )
- for metadata_name, metadata_file_override in override_metadata:
- if galaxy.datatypes.metadata.MetadataTempFile.is_JSONified_value( metadata_file_override ):
- metadata_file_override = galaxy.datatypes.metadata.MetadataTempFile.from_JSON( metadata_file_override )
- setattr( dataset.metadata, metadata_name, metadata_file_override )
- file_dict = existing_job_metadata_dict.get( dataset.dataset.id, {} )
- set_meta_with_tool_provided( dataset, file_dict, set_meta_kwds )
- dataset.metadata.to_JSON_dict( filename_out ) # write out results of set_meta
- json.dump( ( True, 'Metadata has been set successfully' ), open( filename_results_code, 'wb+' ) ) # setting metadata has succeeded
- except Exception, e:
- json.dump( ( False, str( e ) ), open( filename_results_code, 'wb+' ) ) # setting metadata has failed somehow
-
- for i, ( filename, file_dict ) in enumerate( new_job_metadata_dict.iteritems(), start=1 ):
- new_dataset = galaxy.model.Dataset( id=-i, external_filename=os.path.join( tool_job_working_directory, file_dict[ 'filename' ] ) )
- extra_files = file_dict.get( 'extra_files', None )
- if extra_files is not None:
- new_dataset._extra_files_path = os.path.join( tool_job_working_directory, extra_files )
- new_dataset.state = new_dataset.states.OK
- new_dataset_instance = galaxy.model.HistoryDatasetAssociation( id=-i, dataset=new_dataset, extension=file_dict.get( 'ext', 'data' ) )
- set_meta_with_tool_provided( new_dataset_instance, file_dict, set_meta_kwds )
- file_dict[ 'metadata' ] = json.loads( new_dataset_instance.metadata.to_JSON_dict() ) #storing metadata in external form, need to turn back into dict, then later jsonify
- if existing_job_metadata_dict or new_job_metadata_dict:
- with open( job_metadata, 'wb' ) as job_metadata_fh:
- for value in existing_job_metadata_dict.values() + new_job_metadata_dict.values():
- job_metadata_fh.write( "%s\n" % ( json.dumps( value ) ) )
-
- clear_mappers()
diff -r bb2f159e899d88cdcc5eaec45c9550203ac6d0d7 -r 03f569f9eb693b41c14e8e6e5b0fa30576905c7c lib/galaxy_ext/__init__.py
--- /dev/null
+++ b/lib/galaxy_ext/__init__.py
@@ -0,0 +1,4 @@
+""" Package for things which are loaded from outside Galaxy and which we can be
+sure will not conflict with the `galaxy` namespace, which may be provided by
+other packages (e.g. galaxy_ops in the toolshed).
+"""
diff -r bb2f159e899d88cdcc5eaec45c9550203ac6d0d7 -r 03f569f9eb693b41c14e8e6e5b0fa30576905c7c lib/galaxy_ext/metadata/__init__.py
--- /dev/null
+++ b/lib/galaxy_ext/metadata/__init__.py
@@ -0,0 +1,2 @@
+""" Work with Galaxy metadata
+"""
diff -r bb2f159e899d88cdcc5eaec45c9550203ac6d0d7 -r 03f569f9eb693b41c14e8e6e5b0fa30576905c7c lib/galaxy_ext/metadata/set_metadata.py
--- /dev/null
+++ b/lib/galaxy_ext/metadata/set_metadata.py
@@ -0,0 +1,125 @@
+"""
+Execute an external process to set_meta() on a provided list of pickled datasets.
+
+This was formerly scripts/set_metadata.py and expects these arguments:
+
+ %prog datatypes_conf.xml job_metadata_file metadata_in,metadata_kwds,metadata_out,metadata_results_code,output_filename_override,metadata_override...
+
+Galaxy should be importable on sys.path and output_filename_override should be
+set to the path of the dataset on which metadata is being set
+(output_filename_override could previously be left empty and the path would be
+constructed automatically).
+"""
+
+import logging
+logging.basicConfig()
+log = logging.getLogger( __name__ )
+
+import cPickle
+import json
+import os
+import sys
+
+# ensure supported version
+assert sys.version_info[:2] >= ( 2, 6 ) and sys.version_info[:2] <= ( 2, 7 ), 'Python version must be 2.6 or 2.7, this is: %s' % sys.version
+
+new_path = [ os.path.join( os.getcwd(), "lib" ) ]
+new_path.extend( sys.path[ 1: ] ) # remove scripts/ from the path
+sys.path = new_path
+
+from galaxy import eggs
+import pkg_resources
+import galaxy.model.mapping # need to load this before we unpickle, in order to setup properties assigned by the mappers
+galaxy.model.Job() # this looks REAL stupid, but it is REQUIRED in order for SA to insert parameters into the classes defined by the mappers --> it appears that instantiating ANY mapper'ed class would suffice here
+from galaxy.util import stringify_dictionary_keys
+from sqlalchemy.orm import clear_mappers
+
+
+def set_meta_with_tool_provided( dataset_instance, file_dict, set_meta_kwds ):
+ # This method is somewhat odd, in that we set the metadata attributes from tool,
+ # then call set_meta, then set metadata attributes from tool again.
+ # This is intentional due to interplay of overwrite kwd, the fact that some metadata
+ # parameters may rely on the values of others, and that we are accepting the
+ # values provided by the tool as Truth.
+ for metadata_name, metadata_value in file_dict.get( 'metadata', {} ).iteritems():
+ setattr( dataset_instance.metadata, metadata_name, metadata_value )
+ dataset_instance.datatype.set_meta( dataset_instance, **set_meta_kwds )
+ for metadata_name, metadata_value in file_dict.get( 'metadata', {} ).iteritems():
+ setattr( dataset_instance.metadata, metadata_name, metadata_value )
+
+def set_metadata():
+ # locate galaxy_root for loading datatypes
+ galaxy_root = os.path.abspath(os.path.join(os.path.dirname(__file__), os.pardir, os.pardir, os.pardir))
+ galaxy.datatypes.metadata.MetadataTempFile.tmp_dir = tool_job_working_directory = os.path.abspath(os.getcwd())
+
+ # Set up datatypes registry
+ datatypes_config = sys.argv.pop( 1 )
+ datatypes_registry = galaxy.datatypes.registry.Registry()
+ datatypes_registry.load_datatypes( root_dir=galaxy_root, config=datatypes_config )
+ galaxy.model.set_datatypes_registry( datatypes_registry )
+
+ job_metadata = sys.argv.pop( 1 )
+ existing_job_metadata_dict = {}
+ new_job_metadata_dict = {}
+ if job_metadata != "None" and os.path.exists( job_metadata ):
+ for line in open( job_metadata, 'r' ):
+ try:
+ line = stringify_dictionary_keys( json.loads( line ) )
+ if line['type'] == 'dataset':
+ existing_job_metadata_dict[ line['dataset_id'] ] = line
+ elif line['type'] == 'new_primary_dataset':
+ new_job_metadata_dict[ line[ 'filename' ] ] = line
+ except:
+ continue
+
+ for filenames in sys.argv[1:]:
+ fields = filenames.split( ',' )
+ filename_in = fields.pop( 0 )
+ filename_kwds = fields.pop( 0 )
+ filename_out = fields.pop( 0 )
+ filename_results_code = fields.pop( 0 )
+ dataset_filename_override = fields.pop( 0 )
+ # Need to be careful with the way that these parameters are populated from the filename splitting,
+ # because if a job is running when the server is updated, any existing external metadata command-lines
+ #will not have info about the newly added override_metadata file
+ if fields:
+ override_metadata = fields.pop( 0 )
+ else:
+ override_metadata = None
+ set_meta_kwds = stringify_dictionary_keys( json.load( open( filename_kwds ) ) ) # load kwds; need to ensure our keywords are not unicode
+ try:
+ dataset = cPickle.load( open( filename_in ) ) # load DatasetInstance
+ dataset.dataset.external_filename = dataset_filename_override
+ files_path = os.path.abspath(os.path.join( tool_job_working_directory, "dataset_%s_files" % (dataset.dataset.id) ))
+ dataset.dataset.external_extra_files_path = files_path
+ if dataset.dataset.id in existing_job_metadata_dict:
+ dataset.extension = existing_job_metadata_dict[ dataset.dataset.id ].get( 'ext', dataset.extension )
+ # Metadata FileParameter types may not be writable on a cluster node, and are therefore temporarily substituted with MetadataTempFiles
+ if override_metadata:
+ override_metadata = json.load( open( override_metadata ) )
+ for metadata_name, metadata_file_override in override_metadata:
+ if galaxy.datatypes.metadata.MetadataTempFile.is_JSONified_value( metadata_file_override ):
+ metadata_file_override = galaxy.datatypes.metadata.MetadataTempFile.from_JSON( metadata_file_override )
+ setattr( dataset.metadata, metadata_name, metadata_file_override )
+ file_dict = existing_job_metadata_dict.get( dataset.dataset.id, {} )
+ set_meta_with_tool_provided( dataset, file_dict, set_meta_kwds )
+ dataset.metadata.to_JSON_dict( filename_out ) # write out results of set_meta
+ json.dump( ( True, 'Metadata has been set successfully' ), open( filename_results_code, 'wb+' ) ) # setting metadata has succeeded
+ except Exception, e:
+ json.dump( ( False, str( e ) ), open( filename_results_code, 'wb+' ) ) # setting metadata has failed somehow
+
+ for i, ( filename, file_dict ) in enumerate( new_job_metadata_dict.iteritems(), start=1 ):
+ new_dataset = galaxy.model.Dataset( id=-i, external_filename=os.path.join( tool_job_working_directory, file_dict[ 'filename' ] ) )
+ extra_files = file_dict.get( 'extra_files', None )
+ if extra_files is not None:
+ new_dataset._extra_files_path = os.path.join( tool_job_working_directory, extra_files )
+ new_dataset.state = new_dataset.states.OK
+ new_dataset_instance = galaxy.model.HistoryDatasetAssociation( id=-i, dataset=new_dataset, extension=file_dict.get( 'ext', 'data' ) )
+ set_meta_with_tool_provided( new_dataset_instance, file_dict, set_meta_kwds )
+ file_dict[ 'metadata' ] = json.loads( new_dataset_instance.metadata.to_JSON_dict() ) #storing metadata in external form, need to turn back into dict, then later jsonify
+ if existing_job_metadata_dict or new_job_metadata_dict:
+ with open( job_metadata, 'wb' ) as job_metadata_fh:
+ for value in existing_job_metadata_dict.values() + new_job_metadata_dict.values():
+ job_metadata_fh.write( "%s\n" % ( json.dumps( value ) ) )
+
+ clear_mappers()
https://bitbucket.org/galaxy/galaxy-central/commits/439b5c72d17f/
Changeset: 439b5c72d17f
Branch: release_15.03
User: natefoo
Date: 2015-03-13 18:32:54+00:00
Summary: Ensure that set_metadata always loads the `galaxy` module from the same
version of Galaxy it is in.
Affected #: 1 file
diff -r 03f569f9eb693b41c14e8e6e5b0fa30576905c7c -r 439b5c72d17f6572629ca87f6b199120cb2f74ca lib/galaxy_ext/metadata/set_metadata.py
--- a/lib/galaxy_ext/metadata/set_metadata.py
+++ b/lib/galaxy_ext/metadata/set_metadata.py
@@ -23,9 +23,9 @@
# ensure supported version
assert sys.version_info[:2] >= ( 2, 6 ) and sys.version_info[:2] <= ( 2, 7 ), 'Python version must be 2.6 or 2.7, this is: %s' % sys.version
-new_path = [ os.path.join( os.getcwd(), "lib" ) ]
-new_path.extend( sys.path[ 1: ] ) # remove scripts/ from the path
-sys.path = new_path
+# insert *this* galaxy before all others on sys.path
+new_path = os.path.abspath( os.path.join( os.path.dirname( __file__ ), os.pardir, os.pardir ) )
+sys.path.insert( 0, new_path )
from galaxy import eggs
import pkg_resources
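The point of computing the path from __file__ rather than os.getcwd() is that the script always prepends the lib directory of the checkout it physically lives in, so `import galaxy` resolves to that same version even when another Galaxy is already on sys.path. A minimal sketch of the same idea, with an illustrative sanity check that is not part of the commit:

import os
import sys

# lib/galaxy_ext/metadata/set_metadata.py -> .../lib (two levels up from this module)
lib_dir = os.path.abspath(os.path.join(os.path.dirname(__file__), os.pardir, os.pardir))
sys.path.insert(0, lib_dir)  # position 0: this checkout's `galaxy` package wins

import galaxy
assert os.path.abspath(galaxy.__file__).startswith(lib_dir), 'loaded `galaxy` from a different checkout'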
https://bitbucket.org/galaxy/galaxy-central/commits/3341160d0919/
Changeset: 3341160d0919
Branch: release_15.03
User: natefoo
Date: 2015-03-13 20:56:10+00:00
Summary: galaxy.metadata symlink so running jobs do not fail.
Affected #: 1 file
diff -r 439b5c72d17f6572629ca87f6b199120cb2f74ca -r 3341160d09195487382b2613b7e309eb5aaf08b7 lib/galaxy/metadata
--- /dev/null
+++ b/lib/galaxy/metadata
@@ -0,0 +1,1 @@
+../galaxy_ext/metadata
\ No newline at end of file
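The one-line file above is how the repository records a symlink: lib/galaxy/metadata now points at ../galaxy_ext/metadata, so jobs queued before the rename that still run `from galaxy.metadata.set_metadata import ...` keep working. Recreating the link by hand from a checkout root would look roughly like this (a sketch; the commit itself just adds the link):

import os

link_path = os.path.join('lib', 'galaxy', 'metadata')
target = os.path.join(os.pardir, 'galaxy_ext', 'metadata')  # relative target, as in the commit
if not os.path.lexists(link_path):
    os.symlink(target, link_path)  # old galaxy.metadata imports resolve to galaxy_ext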
https://bitbucket.org/galaxy/galaxy-central/commits/728bc8fe323b/
Changeset: 728bc8fe323b
Branch: release_15.03
User: natefoo
Date: 2015-03-16 15:05:37+00:00
Summary: In the metadata tool action, ensure job_working_directory exists before
trying to get its path. Otherwise, hierarchical/distributed object
stores will be broken.
Affected #: 1 file
diff -r 3341160d09195487382b2613b7e309eb5aaf08b7 -r 728bc8fe323b879c542ea87e313b2cb9a263b79c lib/galaxy/tools/actions/metadata.py
--- a/lib/galaxy/tools/actions/metadata.py
+++ b/lib/galaxy/tools/actions/metadata.py
@@ -70,6 +70,7 @@
# Store original dataset state, so we can restore it. A separate table might be better (no chance of 'losing' the original state)?
incoming[ '__ORIGINAL_DATASET_STATE__' ] = dataset.state
input_paths = [DatasetPath( dataset.id, real_path=dataset.file_name, mutable=False )]
+ app.object_store.create(job, base_dir='job_work', dir_only=True, extra_dir=str(job.id))
job_working_dir = app.object_store.get_filename(job, base_dir='job_work', dir_only=True, extra_dir=str(job.id))
external_metadata_wrapper = JobExternalOutputMetadataWrapper( job )
cmd_line = external_metadata_wrapper.setup_external_metadata( dataset,
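Why the added create() call matters: in the nested object stores, get_filename() only returns a path from a backend that already contains the object and otherwise fails, while create() is what registers the job working directory with a backend in the first place. A toy illustration of that control flow, heavily simplified and not the real classes:

class ObjectNotFound(Exception):
    pass


class ToyNestedStore(object):
    def __init__(self, backends):
        # e.g. {'files1': set(), 'files2': set()} standing in for real backends
        self.backends = backends

    def create(self, obj, **kwargs):
        # pick a backend and record the object there (the real stores choose by weight/id)
        backend = sorted(self.backends)[0]
        self.backends[backend].add(obj)

    def get_filename(self, obj, **kwargs):
        # mirrors _call_method: only backends that already contain the object answer
        for name, contents in self.backends.items():
            if obj in contents:
                return '/%s/%s' % (name, obj)
        raise ObjectNotFound('no backend contains %s' % obj)


store = ToyNestedStore({'files1': set(), 'files2': set()})
# store.get_filename('job_work/123')  # would raise ObjectNotFound before create()
store.create('job_work/123')
print(store.get_filename('job_work/123'))  # -> /files1/job_work/123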
https://bitbucket.org/galaxy/galaxy-central/commits/4733f50fccca/
Changeset: 4733f50fccca
Branch: release_15.03
User: natefoo
Date: 2015-03-16 19:18:53+00:00
Summary: Because __call_method was private, DistributedObjectStore's
__call_method was never being called, only HierarchicalObjectStore's.
Make it pseudo-protected by renaming to _call_method.
Affected #: 1 file
diff -r 728bc8fe323b879c542ea87e313b2cb9a263b79c -r 4733f50fccca4328eb544f44721ce3ec6315ec09 lib/galaxy/objectstore/__init__.py
--- a/lib/galaxy/objectstore/__init__.py
+++ b/lib/galaxy/objectstore/__init__.py
@@ -392,39 +392,39 @@
super(NestedObjectStore, self).shutdown()
def exists(self, obj, **kwargs):
- return self.__call_method('exists', obj, False, False, **kwargs)
+ return self._call_method('exists', obj, False, False, **kwargs)
def file_ready(self, obj, **kwargs):
- return self.__call_method('file_ready', obj, False, False, **kwargs)
+ return self._call_method('file_ready', obj, False, False, **kwargs)
def create(self, obj, **kwargs):
random.choice(self.backends.values()).create(obj, **kwargs)
def empty(self, obj, **kwargs):
- return self.__call_method('empty', obj, True, False, **kwargs)
+ return self._call_method('empty', obj, True, False, **kwargs)
def size(self, obj, **kwargs):
- return self.__call_method('size', obj, 0, False, **kwargs)
+ return self._call_method('size', obj, 0, False, **kwargs)
def delete(self, obj, **kwargs):
- return self.__call_method('delete', obj, False, False, **kwargs)
+ return self._call_method('delete', obj, False, False, **kwargs)
def get_data(self, obj, **kwargs):
- return self.__call_method('get_data', obj, ObjectNotFound, True, **kwargs)
+ return self._call_method('get_data', obj, ObjectNotFound, True, **kwargs)
def get_filename(self, obj, **kwargs):
- return self.__call_method('get_filename', obj, ObjectNotFound, True, **kwargs)
+ return self._call_method('get_filename', obj, ObjectNotFound, True, **kwargs)
def update_from_file(self, obj, **kwargs):
if kwargs.get('create', False):
self.create(obj, **kwargs)
kwargs['create'] = False
- return self.__call_method('update_from_file', obj, ObjectNotFound, True, **kwargs)
+ return self._call_method('update_from_file', obj, ObjectNotFound, True, **kwargs)
def get_object_url(self, obj, **kwargs):
- return self.__call_method('get_object_url', obj, None, False, **kwargs)
+ return self._call_method('get_object_url', obj, None, False, **kwargs)
- def __call_method(self, method, obj, default, default_is_exception, **kwargs):
+ def _call_method(self, method, obj, default, default_is_exception, **kwargs):
"""
Check all children object stores for the first one with the dataset
"""
@@ -432,7 +432,7 @@
if store.exists(obj, **kwargs):
return store.__getattribute__(method)(obj, **kwargs)
if default_is_exception:
- raise default( 'objectstore, __call_method failed: %s on %s, kwargs: %s'
+ raise default( 'objectstore, _call_method failed: %s on %s, kwargs: %s'
% ( method, str( obj ), str( kwargs ) ) )
else:
return default
@@ -535,12 +535,12 @@
log.debug("Using preferred backend '%s' for creation of %s %s" % (obj.object_store_id, obj.__class__.__name__, obj.id))
self.backends[obj.object_store_id].create(obj, **kwargs)
- def __call_method(self, method, obj, default, default_is_exception, **kwargs):
+ def _call_method(self, method, obj, default, default_is_exception, **kwargs):
object_store_id = self.__get_store_id_for(obj, **kwargs)
if object_store_id is not None:
return self.backends[object_store_id].__getattribute__(method)(obj, **kwargs)
if default_is_exception:
- raise default( 'objectstore, __call_method failed: %s on %s, kwargs: %s'
+ raise default( 'objectstore, _call_method failed: %s on %s, kwargs: %s'
% ( method, str( obj ), str( kwargs ) ) )
else:
return default
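The bug is a consequence of Python name mangling: inside a class body, a double-underscore attribute such as __call_method is rewritten to _ClassName__call_method, so a subclass defining its own __call_method never overrides the parent's, whereas a single leading underscore behaves as a normal, overridable method. A self-contained illustration with made-up class names:

class MangledBase(object):
    def exists(self):
        # rewritten to self._MangledBase__call_method(); the subclass version is never reached
        return self.__call_method()

    def __call_method(self):
        return 'MangledBase'


class MangledChild(MangledBase):
    def __call_method(self):  # mangled to _MangledChild__call_method: does not override
        return 'MangledChild'


class ProtectedBase(object):
    def exists(self):
        return self._call_method()  # ordinary lookup, so overriding works

    def _call_method(self):
        return 'ProtectedBase'


class ProtectedChild(ProtectedBase):
    def _call_method(self):
        return 'ProtectedChild'


print(MangledChild().exists())    # -> MangledBase (the override is ignored)
print(ProtectedChild().exists())  # -> ProtectedChild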
https://bitbucket.org/galaxy/galaxy-central/commits/589af3c3c437/
Changeset: 589af3c3c437
Branch: release_15.03
User: dannon
Date: 2015-03-16 20:28:20+00:00
Summary: Don't log exception to errormsg on workflow import failure.
Affected #: 1 file
diff -r 4733f50fccca4328eb544f44721ce3ec6315ec09 -r 589af3c3c43703a0c6b3f3d69542658c3b958b10 lib/galaxy/webapps/galaxy/controllers/workflow.py
--- a/lib/galaxy/webapps/galaxy/controllers/workflow.py
+++ b/lib/galaxy/webapps/galaxy/controllers/workflow.py
@@ -1,6 +1,7 @@
import base64
import httplib
import json
+import logging
import os
import sgmllib
import urllib2
@@ -41,6 +42,8 @@
from galaxy.workflow.render import WorkflowCanvas, MARGIN, LINE_SPACING
from markupsafe import escape
+log = logging.getLogger( __name__ )
+
class StoredWorkflowListGrid( grids.Grid ):
@@ -895,8 +898,9 @@
data = json.loads( workflow_data )
except Exception, e:
data = None
- message = "The data content does not appear to be a Galaxy workflow.<br/>Exception: %s" % str( e )
+ message = "The data content does not appear to be a Galaxy workflow."
status = 'error'
+ log.exception("Error importing workflow.")
if data:
# Create workflow if possible. If a required tool is not available in the local
# Galaxy instance, the tool information will be available in the step_dict.
https://bitbucket.org/galaxy/galaxy-central/commits/274bc3b9763c/
Changeset: 274bc3b9763c
Branch: release_15.03
User: dannon
Date: 2015-03-16 22:21:08+00:00
Summary: Add information about making history accessible to initial export archive message.
Affected #: 1 file
diff -r 589af3c3c43703a0c6b3f3d69542658c3b958b10 -r 274bc3b9763cb3956af678cf5043561641e27200 lib/galaxy/webapps/galaxy/controllers/history.py
--- a/lib/galaxy/webapps/galaxy/controllers/history.py
+++ b/lib/galaxy/webapps/galaxy/controllers/history.py
@@ -1233,33 +1233,28 @@
# Use current history.
history = trans.history
id = trans.security.encode_id( history.id )
-
if not history:
return trans.show_error_message( "This history does not exist or you cannot export this history." )
-
- #
# If history has already been exported and it has not changed since export, stream it.
- #
jeha = history.latest_export
if jeha and jeha.up_to_date:
if jeha.ready:
if preview:
url = url_for( controller='history', action="export_archive", id=id, qualified=True )
- return trans.show_message( "History Ready: '%(n)s'. Use this link to download \
- the archive or import it to another Galaxy server: \
- <a href='%(u)s'>%(u)s</a>" % ( { 'n' : history.name, 'u' : url } ) )
+ return trans.show_message( "History Ready: '%(n)s'. Use this link to download "
+ "the archive or import it to another Galaxy server: "
+ "<a href='%(u)s'>%(u)s</a>" % ( { 'n': history.name, 'u': url } ) )
else:
return self.serve_ready_history_export( trans, jeha )
elif jeha.preparing:
- return trans.show_message( "Still exporting history %(n)s; please check back soon. Link: <a href='%(s)s'>%(s)s</a>" \
- % ( { 'n' : history.name, 's' : url_for( controller='history', action="export_archive", id=id, qualified=True ) } ) )
-
+ return trans.show_message( "Still exporting history %(n)s; please check back soon. Link: <a href='%(s)s'>%(s)s</a>"
+ % ( { 'n': history.name, 's': url_for( controller='history', action="export_archive", id=id, qualified=True ) } ) )
self.queue_history_export( trans, history, gzip=gzip, include_hidden=include_hidden, include_deleted=include_deleted )
url = url_for( controller='history', action="export_archive", id=id, qualified=True )
- return trans.show_message( "Exporting History '%(n)s'. Use this link to download \
- the archive or import it to another Galaxy server: \
- <a href='%(u)s'>%(u)s</a>" % ( { 'n' : history.name, 'u' : url } ) )
- #TODO: used in this file and index.mako
+ return trans.show_message( "Exporting History '%(n)s'. You will need to <a href='%(share)s'>make this history 'accessible'</a> in order to import this to another galaxy sever. <br/>"
+ "Use this link to download the archive or import it to another Galaxy server: "
+ "<a href='%(u)s'>%(u)s</a>" % ( { 'share': url_for(controller='history', action='sharing'), 'n': history.name, 'u': url } ) )
+ # TODO: used in this file and index.mako
@web.expose
@web.json
https://bitbucket.org/galaxy/galaxy-central/commits/01412241542a/
Changeset: 01412241542a
Branch: release_15.03
User: natefoo
Date: 2015-03-17 03:24:47+00:00
Summary: Bump version to 15.03.1
Affected #: 1 file
diff -r 274bc3b9763cb3956af678cf5043561641e27200 -r 01412241542ab4f3b0fa67139394b30277d5be49 lib/galaxy/version.py
--- a/lib/galaxy/version.py
+++ b/lib/galaxy/version.py
@@ -1,3 +1,3 @@
VERSION_MAJOR = "15.03"
-VERSION_MINOR = None
+VERSION_MINOR = "1"
VERSION = VERSION_MAJOR + ('.' + VERSION_MINOR if VERSION_MINOR else '')
Repository URL: https://bitbucket.org/galaxy/galaxy-central/
--
This is a commit notification from bitbucket.org. You are receiving
this because you have the service enabled, addressing the recipient of
this email.
Branch: refs/heads/master
Home: https://github.com/galaxyproject/usegalaxy-playbook
Commit: 62da611fdbb069af5ce635aa0106783d96d8be8a
https://github.com/galaxyproject/usegalaxy-playbook/commit/62da611fdbb069af…
Author: Nate Coraor <nate(a)bx.psu.edu>
Date: 2015-03-16 (Mon, 16 Mar 2015)
Changed paths:
M files/galaxy/test.galaxyproject.org/var/integrated_tool_panel.xml
M files/galaxy/test.galaxyproject.org/var/shed_tool_conf.xml
Log Message:
-----------
Test mutable configs
Branch: refs/heads/master
Home: https://github.com/galaxyproject/usegalaxy-playbook
Commit: cf46e993118b36edc373ff9325d1076af06dd40c
https://github.com/galaxyproject/usegalaxy-playbook/commit/cf46e993118b36ed…
Author: Nate Coraor <nate(a)bx.psu.edu>
Date: 2015-03-16 (Mon, 16 Mar 2015)
Changed paths:
M stage/group_vars/all.yml
Log Message:
-----------
Update Test
Commit: dec6ea707eba96be53b45a127d90d52152b83580
https://github.com/galaxyproject/usegalaxy-playbook/commit/dec6ea707eba96be…
Author: Nate Coraor <nate(a)bx.psu.edu>
Date: 2015-03-16 (Mon, 16 Mar 2015)
Changed paths:
M production/group_vars/all.yml
Log Message:
-----------
Update Main
Commit: f59bdf180f74335e777a71c3433687ed196ca9e0
https://github.com/galaxyproject/usegalaxy-playbook/commit/f59bdf180f74335e…
Author: Nate Coraor <nate(a)bx.psu.edu>
Date: 2015-03-16 (Mon, 16 Mar 2015)
Changed paths:
M stage/group_vars/all.yml
Log Message:
-----------
Update Test
Commit: b1d340bb9db3cfe1d4e74104a5afda9fdbd9ff4c
https://github.com/galaxyproject/usegalaxy-playbook/commit/b1d340bb9db3cfe1…
Author: Nate Coraor <nate(a)bx.psu.edu>
Date: 2015-03-16 (Mon, 16 Mar 2015)
Changed paths:
M stage/group_vars/pulsarservers.yml
Log Message:
-----------
Update Test Pulsar
Compare: https://github.com/galaxyproject/usegalaxy-playbook/compare/466dd27ea9e3...…
[galaxyproject/usegalaxy-playbook] 559c89: Switch Main from hierarchical to distributed objec...
by GitHub 16 Mar '15
Branch: refs/heads/master
Home: https://github.com/galaxyproject/usegalaxy-playbook
Commit: 559c8960e3d8418ceea14dbf8248d996d266d588
https://github.com/galaxyproject/usegalaxy-playbook/commit/559c8960e3d8418c…
Author: Nate Coraor <nate(a)bx.psu.edu>
Date: 2015-03-16 (Mon, 16 Mar 2015)
Changed paths:
M files/galaxy/usegalaxy.org/config/object_store_conf.xml
Log Message:
-----------
Switch Main from hierarchical to distributed object store; this should improve performance due to reduced filesystem hits.
Commit: 52da15cf843fac7ef4cdb60398831585adb12467
https://github.com/galaxyproject/usegalaxy-playbook/commit/52da15cf843fac7e…
Author: Nate Coraor <nate(a)bx.psu.edu>
Date: 2015-03-16 (Mon, 16 Mar 2015)
Changed paths:
M stage/group_vars/all.yml
Log Message:
-----------
Update Test
Commit: 466dd27ea9e38a15b64e8335d059b9beab75bde6
https://github.com/galaxyproject/usegalaxy-playbook/commit/466dd27ea9e38a15…
Author: Nate Coraor <nate(a)bx.psu.edu>
Date: 2015-03-16 (Mon, 16 Mar 2015)
Changed paths:
M production/group_vars/all.yml
Log Message:
-----------
Update Main.
Compare: https://github.com/galaxyproject/usegalaxy-playbook/compare/0a8d3816bf0c...…
Branch: refs/heads/master
Home: https://github.com/galaxyproject/ansible-galaxy
Commit: f8674f21471549facb2654786e6faaaf9e8d0d92
https://github.com/galaxyproject/ansible-galaxy/commit/f8674f21471549facb26…
Author: Simon Belluzzo <simon(a)belluzzo.id.au>
Date: 2015-03-15 (Sun, 15 Mar 2015)
Changed paths:
M README.md
Log Message:
-----------
Fix typo in config variable in README.
Commit: e2613391783b516b8c764d59ce0d2ae23f326afd
https://github.com/galaxyproject/ansible-galaxy/commit/e2613391783b516b8c76…
Author: Dannon Baker <dannon.baker(a)gmail.com>
Date: 2015-03-15 (Sun, 15 Mar 2015)
Changed paths:
M README.md
Log Message:
-----------
Merge pull request #5 from simonalpha/master
Fix typo in config variable in README.
Compare: https://github.com/galaxyproject/ansible-galaxy/compare/da2036faddca...e261…
[galaxyproject/ansible-galaxy] 94526a: Allow using role with different args in same play
by GitHub 14 Mar '15
Branch: refs/heads/master
Home: https://github.com/galaxyproject/ansible-galaxy
Commit: 94526aee4588bad242e61f07e81e663021dfd198
https://github.com/galaxyproject/ansible-galaxy/commit/94526aee4588bad242e6…
Author: Simon Belluzzo <simon(a)belluzzo.id.au>
Date: 2015-03-14 (Sat, 14 Mar 2015)
Changed paths:
M meta/main.yml
Log Message:
-----------
Allow using role with different args in same play
By default, Ansible will only run a role once, no matter the number
of times it occurs as a dependency. This makes it difficult to package
the example play in this repo for more complex use. This is resolved as
detailed in http://docs.ansible.com/playbooks_roles.html#role-dependencies
Commit: da2036faddcae92e0b49ba2324ee1212057e587e
https://github.com/galaxyproject/ansible-galaxy/commit/da2036faddcae92e0b49…
Author: Enis Afgan <afgane(a)gmail.com>
Date: 2015-03-14 (Sat, 14 Mar 2015)
Changed paths:
M meta/main.yml
Log Message:
-----------
Merge pull request #4 from simonalpha/master
Add allow_duplicates to role metadata
Compare: https://github.com/galaxyproject/ansible-galaxy/compare/91a40eef3e77...da20…
Branch: refs/heads/master
Home: https://github.com/galaxyproject/usegalaxy-playbook
Commit: 0a8d3816bf0c43cb35726467647aa2e47f91a0cb
https://github.com/galaxyproject/usegalaxy-playbook/commit/0a8d3816bf0c43cb…
Author: Nate Coraor <nate(a)bx.psu.edu>
Date: 2015-03-13 (Fri, 13 Mar 2015)
Changed paths:
M production/group_vars/all.yml
Log Message:
-----------
Update Main
Branch: refs/heads/master
Home: https://github.com/galaxyproject/usegalaxy-playbook
Commit: 069dfa4a18be36b397f098b13080624f5fc8ce17
https://github.com/galaxyproject/usegalaxy-playbook/commit/069dfa4a18be36b3…
Author: Nate Coraor <nate(a)bx.psu.edu>
Date: 2015-03-13 (Fri, 13 Mar 2015)
Changed paths:
M stage/group_vars/all.yml
Log Message:
-----------
Update Test
Commit: 2892cd70f4666c0b1eb905b6a74813f296b9f039
https://github.com/galaxyproject/usegalaxy-playbook/commit/2892cd70f4666c0b…
Author: Nate Coraor <nate(a)bx.psu.edu>
Date: 2015-03-13 (Fri, 13 Mar 2015)
Changed paths:
M files/galaxy/test.galaxyproject.org/var/integrated_tool_panel.xml
M files/galaxy/test.galaxyproject.org/var/shed_tool_conf.xml
Log Message:
-----------
Mutable configs on Test. The installed_changeset_revision values for a number of gops tools were invalid.
Compare: https://github.com/galaxyproject/usegalaxy-playbook/compare/94b5caf42739...…