galaxy-commits
May 2014: 1 participant, 242 discussions

commit/galaxy-central: greg: Import fix in ~/tool_shed/galaxy_install/migrate/common.py - thanks Nicola Soranzo
by commits-noreply@bitbucket.org 06 May '14
1 new commit in galaxy-central:
https://bitbucket.org/galaxy/galaxy-central/commits/6d6449af7d62/
Changeset: 6d6449af7d62
User: greg
Date: 2014-05-06 18:00:54
Summary: Import fix in ~/tool_shed/galaxy_install/migrate/common.py - thanks Nicola Soranzo
Affected #: 1 file
diff -r c78a4a3d51cdcd99754822d479798ed324ed9655 -r 6d6449af7d623b9b4a5bb0118750ad49f725b49a lib/tool_shed/galaxy_install/migrate/common.py
--- a/lib/tool_shed/galaxy_install/migrate/common.py
+++ b/lib/tool_shed/galaxy_install/migrate/common.py
@@ -2,7 +2,7 @@
import os
import sys
import galaxy.config
-from tool_shed.galaxy_install import install_manager, installed_repository_manager
+from tool_shed.galaxy_install import tool_migration_manager, installed_repository_manager
class MigrateToolsApplication( object, galaxy.config.ConfiguresGalaxyMixin ):
Repository URL: https://bitbucket.org/galaxy/galaxy-central/
--
This is a commit notification from bitbucket.org. You are receiving
this because you have the service enabled, addressing the recipient of
this email.
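
The practical effect of this one-line fix: anything bootstrapping the migration machinery must import the renamed tool_migration_manager module, because install_manager no longer holds that code after the rename commit further down this page. The following is a minimal sketch only, assuming a Galaxy checkout that contains the renamed module; the build_manager helper and its placeholder arguments are hypothetical, while the import path, class name and keyword names are taken from the migrate/common.py diffs in this digest.

from tool_shed.galaxy_install import tool_migration_manager


def build_manager( app, latest_migration_script_number, tool_shed_install_config,
                   migrated_tools_config, install_dependencies=False ):
    # Hypothetical helper, not part of the commit: it only shows the corrected
    # import in use. Keyword names mirror the call site in migrate/common.py.
    return tool_migration_manager.ToolMigrationManager(
        app=app,
        latest_migration_script_number=latest_migration_script_number,
        tool_shed_install_config=tool_shed_install_config,
        migrated_tools_config=migrated_tools_config,
        install_dependencies=install_dependencies )
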

commit/galaxy-central: jmchilton: Merged in jmchilton/galaxy-central-fork-1 (pull request #379)
by commits-noreply@bitbucket.org 06 May '14
1 new commit in galaxy-central:
https://bitbucket.org/galaxy/galaxy-central/commits/c78a4a3d51cd/
Changeset: c78a4a3d51cd
User: jmchilton
Date: 2014-05-06 17:08:39
Summary: Merged in jmchilton/galaxy-central-fork-1 (pull request #379)
Disable select2 for all multi-select inputs.
Affected #: 1 file
diff -r 09f80fcdfd85bf1b009b9c831e51fa92a6ff9306 -r c78a4a3d51cdcd99754822d479798ed324ed9655 static/scripts/galaxy.base.js
--- a/static/scripts/galaxy.base.js
+++ b/static/scripts/galaxy.base.js
@@ -261,7 +261,7 @@
select_elts = select_elts || $('select');
select_elts.each( function() {
- var select_elt = $(this);
+ var select_elt = $(this).not('[multiple]');
// Make sure that options is within range.
var num_options = select_elt.find('option').length;
if ( (num_options < min_length) || (num_options > max_length) ) {
Repository URL: https://bitbucket.org/galaxy/galaxy-central/
--
This is a commit notification from bitbucket.org. You are receiving
this because you have the service enabled, addressing the recipient of
this email.
2 new commits in galaxy-central:
https://bitbucket.org/galaxy/galaxy-central/commits/0f854a6a519c/
Changeset: 0f854a6a519c
User: jmchilton
Date: 2014-04-25 20:00:09
Summary: Disable select2 for all multi-select inputs.
These have been nothing but trouble (they have been disabled some places) and are awkward in those places that have not been changed.
Is there any place in Galaxy where multi-select inputs + select2 are being used to create a desirable UX? If yes, can we just create a class for these particular widgets and target it with a boolean expression on the same line?
Affected #: 1 file
diff -r 666a75d94ee91091d94b65ffbfe700224ad58f15 -r 0f854a6a519ce3a86e04701f01de42af93ed3d5d static/scripts/galaxy.base.js
--- a/static/scripts/galaxy.base.js
+++ b/static/scripts/galaxy.base.js
@@ -261,7 +261,7 @@
select_elts = select_elts || $('select');
select_elts.each( function() {
- var select_elt = $(this);
+ var select_elt = $(this).not('[multiple]');
// Make sure that options is within range.
var num_options = select_elt.find('option').length;
if ( (num_options < min_length) || (num_options > max_length) ) {
https://bitbucket.org/galaxy/galaxy-central/commits/c78a4a3d51cd/
Changeset: c78a4a3d51cd
User: jmchilton
Date: 2014-05-06 17:08:39
Summary: Merged in jmchilton/galaxy-central-fork-1 (pull request #379)
Disable select2 for all multi-select inputs.
Affected #: 1 file
diff -r 09f80fcdfd85bf1b009b9c831e51fa92a6ff9306 -r c78a4a3d51cdcd99754822d479798ed324ed9655 static/scripts/galaxy.base.js
--- a/static/scripts/galaxy.base.js
+++ b/static/scripts/galaxy.base.js
@@ -261,7 +261,7 @@
select_elts = select_elts || $('select');
select_elts.each( function() {
- var select_elt = $(this);
+ var select_elt = $(this).not('[multiple]');
// Make sure that options is within range.
var num_options = select_elt.find('option').length;
if ( (num_options < min_length) || (num_options > max_length) ) {
Repository URL: https://bitbucket.org/galaxy/galaxy-central/
--
This is a commit notification from bitbucket.org. You are receiving
this because you have the service enabled, addressing the recipient of
this email.

commit/galaxy-central: greg: Change the name of the current InstallManager to the more appropriate ToolMigrationManager since it handles only tool migrations. This allows for the InstallManager to be used for more general Tool Shed install processes.
by commits-noreply@bitbucket.org 06 May '14
1 new commit in galaxy-central:
https://bitbucket.org/galaxy/galaxy-central/commits/09f80fcdfd85/
Changeset: 09f80fcdfd85
User: greg
Date: 2014-05-06 16:34:28
Summary: Change the name of the current InstallManager to the more appropriate ToolMigrationManager since it handles only tool migrations. This allows for the InstallManager to be used for more general Tool Shed install processes.
Affected #: 11 files
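
Besides moving the module, this commit renames the from_install_manager keyword that is threaded through the tool dependency installation code. A minimal sketch of the caller-side change, assuming the handle_tool_dependencies signature shown in the repository_util.py and install_util.py hunks below; the wrapper function itself and its arguments are hypothetical.

from tool_shed.util import common_install_util


def install_repository_dependencies( app, tool_shed_repository,
                                     tool_dependencies_config, tool_dependencies ):
    # Hypothetical wrapper illustrating the renamed flag. Passing False keeps the
    # normal two-stage install behavior; True is reserved for the tool migration
    # (formerly InstallManager) code path, as the hunks below explain.
    return common_install_util.handle_tool_dependencies(
        app=app,
        tool_shed_repository=tool_shed_repository,
        tool_dependencies_config=tool_dependencies_config,
        tool_dependencies=tool_dependencies,
        from_tool_migration_manager=False )  # was: from_install_manager=False
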
diff -r 912ba22e939f505a33ef83d3bfbdbe26aa9c6a9f -r 09f80fcdfd85bf1b009b9c831e51fa92a6ff9306 lib/galaxy/webapps/galaxy/controllers/admin.py
--- a/lib/galaxy/webapps/galaxy/controllers/admin.py
+++ b/lib/galaxy/webapps/galaxy/controllers/admin.py
@@ -821,7 +821,7 @@
tool_dependencies_dict = {}
repository_name = elem.get( 'name' )
changeset_revision = elem.get( 'changeset_revision' )
- url = '%s/repository/get_tool_dependencies?name=%s&owner=devteam&changeset_revision=%s&from_install_manager=True' % \
+ url = '%s/repository/get_tool_dependencies?name=%s&owner=devteam&changeset_revision=%s' % \
( tool_shed_url, repository_name, changeset_revision )
text = common_util.tool_shed_get( trans.app, tool_shed_url, url )
if text:
diff -r 912ba22e939f505a33ef83d3bfbdbe26aa9c6a9f -r 09f80fcdfd85bf1b009b9c831e51fa92a6ff9306 lib/galaxy/webapps/galaxy/controllers/admin_toolshed.py
--- a/lib/galaxy/webapps/galaxy/controllers/admin_toolshed.py
+++ b/lib/galaxy/webapps/galaxy/controllers/admin_toolshed.py
@@ -457,7 +457,7 @@
tool_shed_repository=tool_shed_repository,
tool_dependencies_config=tool_dependencies_config,
tool_dependencies=tool_dependencies,
- from_install_manager=False )
+ from_tool_migration_manager=False )
for installed_tool_dependency in installed_tool_dependencies:
if installed_tool_dependency.status == trans.app.install_model.ToolDependency.installation_status.ERROR:
text = util.unicodify( installed_tool_dependency.error_message )
diff -r 912ba22e939f505a33ef83d3bfbdbe26aa9c6a9f -r 09f80fcdfd85bf1b009b9c831e51fa92a6ff9306 lib/tool_shed/galaxy_install/install_manager.py
--- a/lib/tool_shed/galaxy_install/install_manager.py
+++ b/lib/tool_shed/galaxy_install/install_manager.py
@@ -1,643 +1,8 @@
-"""
-Manage automatic installation of tools configured in the xxx.xml files in ~/scripts/migrate_tools (e.g., 0002_tools.xml).
-All of the tools were at some point included in the Galaxy distribution, but are now hosted in the main Galaxy tool shed.
-"""
-import json
+import logging
import os
-import shutil
-import tempfile
-import threading
-import logging
-from galaxy import util
-from galaxy.tools import ToolSection
-import tool_shed.util.shed_util_common as suc
-from tool_shed.util import common_install_util
-from tool_shed.util import common_util
-from tool_shed.util import datatype_util
-from tool_shed.util import hg_util
-from tool_shed.util import metadata_util
-from tool_shed.util import tool_dependency_util
-from tool_shed.util import tool_util
-from tool_shed.util import xml_util
-from galaxy.util.odict import odict
log = logging.getLogger( __name__ )
class InstallManager( object ):
-
- def __init__( self, app, latest_migration_script_number, tool_shed_install_config, migrated_tools_config, install_dependencies ):
- """
- Check tool settings in tool_shed_install_config and install all repositories that are not already installed. The tool
- panel configuration file is the received migrated_tools_config, which is the reserved file named migrated_tools_conf.xml.
- """
- self.app = app
- self.toolbox = self.app.toolbox
- self.migrated_tools_config = migrated_tools_config
- # If install_dependencies is True but tool_dependency_dir is not set, do not attempt to install but print informative error message.
- if install_dependencies and app.config.tool_dependency_dir is None:
- message = 'You are attempting to install tool dependencies but do not have a value for "tool_dependency_dir" set in your universe_wsgi.ini '
- message += 'file. Set this location value to the path where you want tool dependencies installed and rerun the migration script.'
- raise Exception( message )
- # Get the local non-shed related tool panel configs (there can be more than one, and the default name is tool_conf.xml).
- self.proprietary_tool_confs = self.non_shed_tool_panel_configs
- self.proprietary_tool_panel_elems = self.get_proprietary_tool_panel_elems( latest_migration_script_number )
- # Set the location where the repositories will be installed by retrieving the tool_path setting from migrated_tools_config.
- tree, error_message = xml_util.parse_xml( migrated_tools_config )
- if tree is None:
- print error_message
- else:
- root = tree.getroot()
- self.tool_path = root.get( 'tool_path' )
- print "Repositories will be installed into configured tool_path location ", str( self.tool_path )
- # Parse tool_shed_install_config to check each of the tools.
- self.tool_shed_install_config = tool_shed_install_config
- tree, error_message = xml_util.parse_xml( tool_shed_install_config )
- if tree is None:
- print error_message
- else:
- root = tree.getroot()
- defined_tool_shed_url = root.get( 'name' )
- self.tool_shed_url = common_util.get_tool_shed_url_from_tool_shed_registry( self.app, defined_tool_shed_url )
- self.tool_shed = common_util.remove_protocol_and_port_from_tool_shed_url( self.tool_shed_url )
- self.repository_owner = common_util.REPOSITORY_OWNER
- index, self.shed_config_dict = suc.get_shed_tool_conf_dict( app, self.migrated_tools_config )
- # Since tool migration scripts can be executed any number of times, we need to make sure the appropriate tools are defined in
- # tool_conf.xml. If no tools associated with the migration stage are defined, no repositories will be installed on disk.
- # The default behavior is that the tool shed is down.
- tool_shed_accessible = False
- tool_panel_configs = common_util.get_non_shed_tool_panel_configs( app )
- if tool_panel_configs:
- # The missing_tool_configs_dict contents are something like:
- # {'emboss_antigenic.xml': [('emboss', '5.0.0', 'package', '\nreadme blah blah blah\n')]}
- tool_shed_accessible, missing_tool_configs_dict = common_util.check_for_missing_tools( app,
- tool_panel_configs,
- latest_migration_script_number )
- else:
- # It doesn't matter if the tool shed is accessible since there are no migrated tools defined in the local Galaxy instance, but
- # we have to set the value of tool_shed_accessible to True so that the value of migrate_tools.version can be correctly set in
- # the database.
- tool_shed_accessible = True
- missing_tool_configs_dict = odict()
- if tool_shed_accessible:
- if len( self.proprietary_tool_confs ) == 1:
- plural = ''
- file_names = self.proprietary_tool_confs[ 0 ]
- else:
- plural = 's'
- file_names = ', '.join( self.proprietary_tool_confs )
- if missing_tool_configs_dict:
- for proprietary_tool_conf in self.proprietary_tool_confs:
- # Create a backup of the tool configuration in the un-migrated state.
- shutil.copy( proprietary_tool_conf, '%s-pre-stage-%04d' % ( proprietary_tool_conf, latest_migration_script_number ) )
- for repository_elem in root:
- # Make sure we have a valid repository tag.
- if self.__is_valid_repository_tag( repository_elem ):
- # Get all repository dependencies for the repository defined by the current repository_elem. Repository dependency
- # definitions contained in tool shed repositories with migrated tools must never define a relationship to a repository
- # dependency that contains a tool. The repository dependency can only contain items that are not loaded into the Galaxy
- # tool panel (e.g., tool dependency definitions, custom datatypes, etc). This restriction must be followed down the
- # entire dependency hierarchy.
- name = repository_elem.get( 'name' )
- changeset_revision = repository_elem.get( 'changeset_revision' )
- tool_shed_accessible, repository_dependencies_dict = \
- common_util.get_repository_dependencies( app,
- self.tool_shed_url,
- name,
- self.repository_owner,
- changeset_revision )
- # Make sure all repository dependency records exist (as tool_shed_repository table rows) in the Galaxy database.
- created_tool_shed_repositories = self.create_or_update_tool_shed_repository_records( name,
- changeset_revision,
- repository_dependencies_dict )
- # Order the repositories for proper installation. This process is similar to the process used when installing tool
- # shed repositories (i.e., the order_components_for_installation() method in ~/lib/tool_shed/galaxy_install/
- # repository_util), but does not handle managing tool panel sections and other components since repository dependency
- # definitions contained in tool shed repositories with migrated tools must never define a relationship to a repository
- # dependency that contains a tool.
- ordered_tool_shed_repositories = self.order_repositories_for_installation( created_tool_shed_repositories,
- repository_dependencies_dict )
-
- for tool_shed_repository in ordered_tool_shed_repositories:
- is_repository_dependency = self.__is_repository_dependency( name, changeset_revision, tool_shed_repository )
- self.install_repository( repository_elem,
- tool_shed_repository,
- install_dependencies,
- is_repository_dependency=is_repository_dependency )
- else:
- message = "\nNo tools associated with migration stage %s are defined in your " % str( latest_migration_script_number )
- message += "file%s named %s,\nso no repositories will be installed on disk.\n" % ( plural, file_names )
- print message
- else:
- message = "\nThe main Galaxy tool shed is not currently available, so skipped migration stage %s.\n" % str( latest_migration_script_number )
- message += "Try again later.\n"
- print message
-
- def create_or_update_tool_shed_repository_record( self, name, owner, changeset_revision, description=None ):
-
- # Install path is of the form: <tool path>/<tool shed>/repos/<repository owner>/<repository name>/<installed changeset revision>
- relative_clone_dir = os.path.join( self.tool_shed, 'repos', owner, name, changeset_revision )
- clone_dir = os.path.join( self.tool_path, relative_clone_dir )
- if not self.__iscloned( clone_dir ):
- repository_clone_url = os.path.join( self.tool_shed_url, 'repos', owner, name )
- relative_install_dir = os.path.join( relative_clone_dir, name )
- install_dir = os.path.join( clone_dir, name )
- ctx_rev = suc.get_ctx_rev( self.app, self.tool_shed_url, name, owner, changeset_revision )
- tool_shed_repository = suc.create_or_update_tool_shed_repository( app=self.app,
- name=name,
- description=description,
- installed_changeset_revision=changeset_revision,
- ctx_rev=ctx_rev,
- repository_clone_url=repository_clone_url,
- metadata_dict={},
- status=self.app.install_model.ToolShedRepository.installation_status.NEW,
- current_changeset_revision=None,
- owner=self.repository_owner,
- dist_to_shed=True )
- return tool_shed_repository
- return None
-
- def create_or_update_tool_shed_repository_records( self, name, changeset_revision, repository_dependencies_dict ):
- """
- Make sure the repository defined by name and changeset_revision and all of its repository dependencies have
- associated tool_shed_repository table rows in the Galaxy database.
- """
- created_tool_shed_repositories = []
- description = repository_dependencies_dict.get( 'description', None )
- tool_shed_repository = self.create_or_update_tool_shed_repository_record( name,
- self.repository_owner,
- changeset_revision,
- description=description )
- if tool_shed_repository:
- created_tool_shed_repositories.append( tool_shed_repository )
- for rd_key, rd_tups in repository_dependencies_dict.items():
- if rd_key in [ 'root_key', 'description' ]:
- continue
- for rd_tup in rd_tups:
- parsed_rd_tup = common_util.parse_repository_dependency_tuple( rd_tup )
- rd_tool_shed, rd_name, rd_owner, rd_changeset_revision = parsed_rd_tup[ 0:4 ]
- # TODO: Make sure the repository description is applied to the new repository record during installation.
- tool_shed_repository = self.create_or_update_tool_shed_repository_record( rd_name,
- rd_owner,
- rd_changeset_revision,
- description=None )
- if tool_shed_repository:
- created_tool_shed_repositories.append( tool_shed_repository )
- return created_tool_shed_repositories
-
- def filter_and_persist_proprietary_tool_panel_configs( self, tool_configs_to_filter ):
- """Eliminate all entries in all non-shed-related tool panel configs for all tool config file names in the received tool_configs_to_filter."""
- for proprietary_tool_conf in self.proprietary_tool_confs:
- persist_required = False
- tree, error_message = xml_util.parse_xml( proprietary_tool_conf )
- if tree:
- root = tree.getroot()
- for elem in root:
- if elem.tag == 'tool':
- # Tools outside of sections.
- file_path = elem.get( 'file', None )
- if file_path:
- if file_path in tool_configs_to_filter:
- root.remove( elem )
- persist_required = True
- elif elem.tag == 'section':
- # Tools contained in a section.
- for section_elem in elem:
- if section_elem.tag == 'tool':
- file_path = section_elem.get( 'file', None )
- if file_path:
- if file_path in tool_configs_to_filter:
- elem.remove( section_elem )
- persist_required = True
- if persist_required:
- fh = tempfile.NamedTemporaryFile( 'wb', prefix="tmp-toolshed-fapptpc" )
- tmp_filename = fh.name
- fh.close()
- fh = open( tmp_filename, 'wb' )
- tree.write( tmp_filename, encoding='utf-8', xml_declaration=True )
- fh.close()
- shutil.move( tmp_filename, os.path.abspath( proprietary_tool_conf ) )
- os.chmod( proprietary_tool_conf, 0644 )
-
- def get_containing_tool_sections( self, tool_config ):
- """
- If tool_config is defined somewhere in self.proprietary_tool_panel_elems, return True and a list of ToolSections in which the
- tool is displayed. If the tool is displayed outside of any sections, None is appended to the list.
- """
- tool_sections = []
- is_displayed = False
- for proprietary_tool_panel_elem in self.proprietary_tool_panel_elems:
- if proprietary_tool_panel_elem.tag == 'tool':
- # The proprietary_tool_panel_elem looks something like <tool file="emboss_5/emboss_antigenic.xml" />.
- proprietary_tool_config = proprietary_tool_panel_elem.get( 'file' )
- if tool_config == proprietary_tool_config:
- # The tool is loaded outside of any sections.
- tool_sections.append( None )
- if not is_displayed:
- is_displayed = True
- if proprietary_tool_panel_elem.tag == 'section':
- # The proprietary_tool_panel_elem looks something like <section name="EMBOSS" id="EMBOSSLite">.
- for section_elem in proprietary_tool_panel_elem:
- if section_elem.tag == 'tool':
- # The section_elem looks something like <tool file="emboss_5/emboss_antigenic.xml" />.
- proprietary_tool_config = section_elem.get( 'file' )
- if tool_config == proprietary_tool_config:
- # The tool is loaded inside of the section_elem.
- tool_sections.append( ToolSection( proprietary_tool_panel_elem ) )
- if not is_displayed:
- is_displayed = True
- return is_displayed, tool_sections
-
- def get_guid( self, repository_clone_url, relative_install_dir, tool_config ):
- if self.shed_config_dict.get( 'tool_path' ):
- relative_install_dir = os.path.join( self.shed_config_dict[ 'tool_path' ], relative_install_dir )
- tool_config_filename = suc.strip_path( tool_config )
- for root, dirs, files in os.walk( relative_install_dir ):
- if root.find( '.hg' ) < 0 and root.find( 'hgrc' ) < 0:
- if '.hg' in dirs:
- dirs.remove( '.hg' )
- for name in files:
- filename = suc.strip_path( name )
- if filename == tool_config_filename:
- full_path = str( os.path.abspath( os.path.join( root, name ) ) )
- tool = self.toolbox.load_tool( full_path )
- return suc.generate_tool_guid( repository_clone_url, tool )
- # Not quite sure what should happen here, throw an exception or what?
- return None
-
- def get_prior_install_required_dict( self, tool_shed_repositories, repository_dependencies_dict ):
- """
- Return a dictionary whose keys are the received tsr_ids and whose values are a list of tsr_ids, each of which is contained in the received
- list of tsr_ids and whose associated repository must be installed prior to the repository associated with the tsr_id key.
- """
- # Initialize the dictionary.
- prior_install_required_dict = {}
- tsr_ids = [ tool_shed_repository.id for tool_shed_repository in tool_shed_repositories ]
- for tsr_id in tsr_ids:
- prior_install_required_dict[ tsr_id ] = []
- # Inspect the repository dependencies about to be installed and populate the dictionary.
- for rd_key, rd_tups in repository_dependencies_dict.items():
- if rd_key in [ 'root_key', 'description' ]:
- continue
- for rd_tup in rd_tups:
- prior_install_ids = []
- tool_shed, name, owner, changeset_revision, prior_installation_required, only_if_compiling_contained_td = \
- common_util.parse_repository_dependency_tuple( rd_tup )
- if util.asbool( prior_installation_required ):
- for tsr in tool_shed_repositories:
- if tsr.name == name and tsr.owner == owner and tsr.changeset_revision == changeset_revision:
- prior_install_ids.append( tsr.id )
- prior_install_required_dict[ tsr.id ] = prior_install_ids
- return prior_install_required_dict
-
- def get_proprietary_tool_panel_elems( self, latest_tool_migration_script_number ):
- """
- Parse each config in self.proprietary_tool_confs (the default is tool_conf.xml) and generate a list of Elements that are
- either ToolSection elements or Tool elements. These will be used to generate new entries in the migrated_tools_conf.xml
- file for the installed tools.
- """
- tools_xml_file_path = os.path.abspath( os.path.join( 'scripts', 'migrate_tools', '%04d_tools.xml' % latest_tool_migration_script_number ) )
- # Parse the XML and load the file attributes for later checking against the integrated elements from self.proprietary_tool_confs.
- migrated_tool_configs = []
- tree, error_message = xml_util.parse_xml( tools_xml_file_path )
- if tree is None:
- return []
- root = tree.getroot()
- for elem in root:
- if elem.tag == 'repository':
- for tool_elem in elem:
- migrated_tool_configs.append( tool_elem.get( 'file' ) )
- # Parse each file in self.proprietary_tool_confs and generate the integrated list of tool panel Elements that contain them.
- tool_panel_elems = []
- for proprietary_tool_conf in self.proprietary_tool_confs:
- tree, error_message = xml_util.parse_xml( proprietary_tool_conf )
- if tree is None:
- return []
- root = tree.getroot()
- for elem in root:
- if elem.tag == 'tool':
- # Tools outside of sections.
- file_path = elem.get( 'file', None )
- if file_path:
- if file_path in migrated_tool_configs:
- if elem not in tool_panel_elems:
- tool_panel_elems.append( elem )
- elif elem.tag == 'section':
- # Tools contained in a section.
- for section_elem in elem:
- if section_elem.tag == 'tool':
- file_path = section_elem.get( 'file', None )
- if file_path:
- if file_path in migrated_tool_configs:
- # Append the section, not the tool.
- if elem not in tool_panel_elems:
- tool_panel_elems.append( elem )
- return tool_panel_elems
-
- def handle_repository_contents( self, tool_shed_repository, repository_clone_url, relative_install_dir, repository_elem,
- install_dependencies, is_repository_dependency=False ):
- """
- Generate the metadata for the installed tool shed repository, among other things. If the installed tool_shed_repository
- contains tools that are loaded into the Galaxy tool panel, this method will automatically eliminate all entries for each
- of the tools defined in the received repository_elem from all non-shed-related tool panel configuration files since the
- entries are automatically added to the reserved migrated_tools_conf.xml file as part of the migration process.
- """
- tool_configs_to_filter = []
- tool_panel_dict_for_display = odict()
- if self.tool_path:
- repo_install_dir = os.path.join( self.tool_path, relative_install_dir )
- else:
- repo_install_dir = relative_install_dir
- if not is_repository_dependency:
- for tool_elem in repository_elem:
- # The tool_elem looks something like this: <tool id="EMBOSS: antigenic1" version="5.0.0" file="emboss_antigenic.xml" />
- tool_config = tool_elem.get( 'file' )
- guid = self.get_guid( repository_clone_url, relative_install_dir, tool_config )
- # See if tool_config is defined inside of a section in self.proprietary_tool_panel_elems.
- is_displayed, tool_sections = self.get_containing_tool_sections( tool_config )
- if is_displayed:
- tool_panel_dict_for_tool_config = \
- tool_util.generate_tool_panel_dict_for_tool_config( guid, tool_config, tool_sections=tool_sections )
- # The tool-panel_dict has the following structure.
- # {<Tool guid> : [{ tool_config : <tool_config_file>, id: <ToolSection id>, version : <ToolSection version>,
- # name : <TooSection name>}]}
- for k, v in tool_panel_dict_for_tool_config.items():
- tool_panel_dict_for_display[ k ] = v
- for tool_panel_dict in v:
- # Keep track of tool config file names associated with entries that have been made to the
- # migrated_tools_conf.xml file so they can be eliminated from all non-shed-related tool panel configs.
- if tool_config not in tool_configs_to_filter:
- tool_configs_to_filter.append( tool_config )
- else:
- print 'The tool "%s" (%s) has not been enabled because it is not defined in a proprietary tool config (%s).' \
- % ( guid, tool_config, ", ".join( self.proprietary_tool_confs or [] ) )
- if tool_configs_to_filter:
- lock = threading.Lock()
- lock.acquire( True )
- try:
- self.filter_and_persist_proprietary_tool_panel_configs( tool_configs_to_filter )
- except Exception, e:
- log.exception( "Exception attempting to filter and persist non-shed-related tool panel configs:\n%s" % str( e ) )
- finally:
- lock.release()
- metadata_dict, invalid_file_tups = \
- metadata_util.generate_metadata_for_changeset_revision( app=self.app,
- repository=tool_shed_repository,
- changeset_revision=tool_shed_repository.changeset_revision,
- repository_clone_url=repository_clone_url,
- shed_config_dict = self.shed_config_dict,
- relative_install_dir=relative_install_dir,
- repository_files_dir=None,
- resetting_all_metadata_on_repository=False,
- updating_installed_repository=False,
- persist=True )
- tool_shed_repository.metadata = metadata_dict
- self.app.install_model.context.add( tool_shed_repository )
- self.app.install_model.context.flush()
- has_tool_dependencies = self.__has_tool_dependencies( metadata_dict )
- if has_tool_dependencies:
- # All tool_dependency objects must be created before the tools are processed even if no
- # tool dependencies will be installed.
- tool_dependencies = tool_dependency_util.create_tool_dependency_objects( self.app,
- tool_shed_repository,
- relative_install_dir,
- set_status=True )
- else:
- tool_dependencies = None
- if 'tools' in metadata_dict:
- sample_files = metadata_dict.get( 'sample_files', [] )
- sample_files = [ str( s ) for s in sample_files ]
- tool_index_sample_files = tool_util.get_tool_index_sample_files( sample_files )
- tool_util.copy_sample_files( self.app, tool_index_sample_files, tool_path=self.tool_path )
- sample_files_copied = [ s for s in tool_index_sample_files ]
- repository_tools_tups = suc.get_repository_tools_tups( self.app, metadata_dict )
- if repository_tools_tups:
- # Handle missing data table entries for tool parameters that are dynamically generated select lists.
- repository_tools_tups = tool_util.handle_missing_data_table_entry( self.app, relative_install_dir, self.tool_path, repository_tools_tups )
- # Handle missing index files for tool parameters that are dynamically generated select lists.
- repository_tools_tups, sample_files_copied = tool_util.handle_missing_index_file( self.app,
- self.tool_path,
- sample_files,
- repository_tools_tups,
- sample_files_copied )
- # Copy remaining sample files included in the repository to the ~/tool-data directory of the local Galaxy instance.
- tool_util.copy_sample_files( self.app, sample_files, tool_path=self.tool_path, sample_files_copied=sample_files_copied )
- if not is_repository_dependency:
- tool_util.add_to_tool_panel( self.app,
- tool_shed_repository.name,
- repository_clone_url,
- tool_shed_repository.installed_changeset_revision,
- repository_tools_tups,
- self.repository_owner,
- self.migrated_tools_config,
- tool_panel_dict=tool_panel_dict_for_display,
- new_install=True )
- if install_dependencies and tool_dependencies and has_tool_dependencies:
- # Install tool dependencies.
- suc.update_tool_shed_repository_status( self.app,
- tool_shed_repository,
- self.app.install_model.ToolShedRepository.installation_status.INSTALLING_TOOL_DEPENDENCIES )
- # Get the tool_dependencies.xml file from disk.
- tool_dependencies_config = suc.get_config_from_disk( 'tool_dependencies.xml', repo_install_dir )
- installed_tool_dependencies = common_install_util.handle_tool_dependencies( app=self.app,
- tool_shed_repository=tool_shed_repository,
- tool_dependencies_config=tool_dependencies_config,
- tool_dependencies=tool_dependencies,
- from_install_manager=True )
- for installed_tool_dependency in installed_tool_dependencies:
- if installed_tool_dependency.status == self.app.install_model.ToolDependency.installation_status.ERROR:
- print '\nThe following error occurred from the InstallManager while installing tool dependency ', installed_tool_dependency.name, ':'
- print installed_tool_dependency.error_message, '\n\n'
- if 'datatypes' in metadata_dict:
- tool_shed_repository.status = self.app.install_model.ToolShedRepository.installation_status.LOADING_PROPRIETARY_DATATYPES
- if not tool_shed_repository.includes_datatypes:
- tool_shed_repository.includes_datatypes = True
- self.app.install_model.context.add( tool_shed_repository )
- self.app.install_model.context.flush()
- work_dir = tempfile.mkdtemp( prefix="tmp-toolshed-hrc" )
- datatypes_config = suc.get_config_from_disk( suc.DATATYPES_CONFIG_FILENAME, repo_install_dir )
- # Load proprietary data types required by tools. The value of override is not important here since the Galaxy server will be started
- # after this installation completes.
- converter_path, display_path = datatype_util.alter_config_and_load_prorietary_datatypes( self.app, datatypes_config, repo_install_dir, override=False ) #repo_install_dir was relative_install_dir
- if converter_path or display_path:
- # Create a dictionary of tool shed repository related information.
- repository_dict = datatype_util.create_repository_dict_for_proprietary_datatypes( tool_shed=self.tool_shed_url,
- name=tool_shed_repository.name,
- owner=self.repository_owner,
- installed_changeset_revision=tool_shed_repository.installed_changeset_revision,
- tool_dicts=metadata_dict.get( 'tools', [] ),
- converter_path=converter_path,
- display_path=display_path )
- if converter_path:
- # Load proprietary datatype converters
- self.app.datatypes_registry.load_datatype_converters( self.toolbox, installed_repository_dict=repository_dict )
- if display_path:
- # Load proprietary datatype display applications
- self.app.datatypes_registry.load_display_applications( installed_repository_dict=repository_dict )
- suc.remove_dir( work_dir )
-
- def install_repository( self, repository_elem, tool_shed_repository, install_dependencies, is_repository_dependency=False ):
- """Install a single repository, loading contained tools into the tool panel."""
- # Install path is of the form: <tool path>/<tool shed>/repos/<repository owner>/<repository name>/<installed changeset revision>
- relative_clone_dir = os.path.join( tool_shed_repository.tool_shed,
- 'repos',
- tool_shed_repository.owner,
- tool_shed_repository.name,
- tool_shed_repository.installed_changeset_revision )
- clone_dir = os.path.join( self.tool_path, relative_clone_dir )
- cloned_ok = self.__iscloned( clone_dir )
- is_installed = False
- # Any of the following states should count as installed in this context.
- if tool_shed_repository.status in [ self.app.install_model.ToolShedRepository.installation_status.INSTALLED,
- self.app.install_model.ToolShedRepository.installation_status.ERROR,
- self.app.install_model.ToolShedRepository.installation_status.UNINSTALLED,
- self.app.install_model.ToolShedRepository.installation_status.DEACTIVATED ]:
- is_installed = True
- if cloned_ok and is_installed:
- print "Skipping automatic install of repository '", tool_shed_repository.name, "' because it has already been installed in location ", clone_dir
- else:
- repository_clone_url = os.path.join( self.tool_shed_url, 'repos', tool_shed_repository.owner, tool_shed_repository.name )
- relative_install_dir = os.path.join( relative_clone_dir, tool_shed_repository.name )
- install_dir = os.path.join( clone_dir, tool_shed_repository.name )
- ctx_rev = suc.get_ctx_rev( self.app,
- self.tool_shed_url,
- tool_shed_repository.name,
- tool_shed_repository.owner,
- tool_shed_repository.installed_changeset_revision )
- if not cloned_ok:
- suc.update_tool_shed_repository_status( self.app,
- tool_shed_repository,
- self.app.install_model.ToolShedRepository.installation_status.CLONING )
- cloned_ok, error_message = hg_util.clone_repository( repository_clone_url, os.path.abspath( install_dir ), ctx_rev )
- if cloned_ok and not is_installed:
- self.handle_repository_contents( tool_shed_repository=tool_shed_repository,
- repository_clone_url=repository_clone_url,
- relative_install_dir=relative_install_dir,
- repository_elem=repository_elem,
- install_dependencies=install_dependencies,
- is_repository_dependency=is_repository_dependency )
- self.app.install_model.context.refresh( tool_shed_repository )
- metadata_dict = tool_shed_repository.metadata
- if 'tools' in metadata_dict:
- suc.update_tool_shed_repository_status( self.app,
- tool_shed_repository,
- self.app.install_model.ToolShedRepository.installation_status.SETTING_TOOL_VERSIONS )
- # Get the tool_versions from the tool shed for each tool in the installed change set.
- url = '%s/repository/get_tool_versions?name=%s&owner=%s&changeset_revision=%s' % \
- ( self.tool_shed_url, tool_shed_repository.name, self.repository_owner, tool_shed_repository.installed_changeset_revision )
- text = common_util.tool_shed_get( self.app, self.tool_shed_url, url )
- if text:
- tool_version_dicts = json.loads( text )
- tool_util.handle_tool_versions( self.app, tool_version_dicts, tool_shed_repository )
- else:
- # Set the tool versions since they seem to be missing for this repository in the tool shed.
- # CRITICAL NOTE: These default settings may not properly handle all parent/child associations.
- for tool_dict in metadata_dict[ 'tools' ]:
- flush_needed = False
- tool_id = tool_dict[ 'guid' ]
- old_tool_id = tool_dict[ 'id' ]
- tool_version = tool_dict[ 'version' ]
- tool_version_using_old_id = tool_util.get_tool_version( self.app, old_tool_id )
- tool_version_using_guid = tool_util.get_tool_version( self.app, tool_id )
- if not tool_version_using_old_id:
- tool_version_using_old_id = self.app.install_model.ToolVersion( tool_id=old_tool_id,
- tool_shed_repository=tool_shed_repository )
- self.app.install_model.context.add( tool_version_using_old_id )
- self.app.install_model.context.flush()
- if not tool_version_using_guid:
- tool_version_using_guid = self.app.install_model.ToolVersion( tool_id=tool_id,
- tool_shed_repository=tool_shed_repository )
- self.app.install_model.context.add( tool_version_using_guid )
- self.app.install_model.context.flush()
- # Associate the two versions as parent / child.
- tool_version_association = tool_util.get_tool_version_association( self.app,
- tool_version_using_old_id,
- tool_version_using_guid )
- if not tool_version_association:
- tool_version_association = self.app.install_model.ToolVersionAssociation( tool_id=tool_version_using_guid.id,
- parent_id=tool_version_using_old_id.id )
- self.app.install_model.context.add( tool_version_association )
- self.app.install_model.context.flush()
- suc.update_tool_shed_repository_status( self.app, tool_shed_repository, self.app.install_model.ToolShedRepository.installation_status.INSTALLED )
- else:
- print 'Error attempting to clone repository %s: %s' % ( str( tool_shed_repository.name ), str( error_message ) )
- suc.update_tool_shed_repository_status( self.app,
- tool_shed_repository,
- self.app.install_model.ToolShedRepository.installation_status.ERROR,
- error_message=error_message )
-
- @property
- def non_shed_tool_panel_configs( self ):
- return common_util.get_non_shed_tool_panel_configs( self.app )
-
- def order_repositories_for_installation( self, tool_shed_repositories, repository_dependencies_dict ):
- """
- Some repositories may have repository dependencies that are required to be installed before the dependent
- repository. This method will inspect the list of repositories about to be installed and make sure to order
- them appropriately. For each repository about to be installed, if required repositories are not contained
- in the list of repositories about to be installed, then they are not considered. Repository dependency
- definitions that contain circular dependencies should not result in an infinite loop, but obviously prior
- installation will not be handled for one or more of the repositories that require prior installation. This
- process is similar to the process used when installing tool shed repositories (i.e., the
- order_components_for_installation() method in ~/lib/tool_shed/galaxy_install/repository_util), but does not
- handle managing tool panel sections and other components since repository dependency definitions contained
- in tool shed repositories with migrated tools must never define a relationship to a repository dependency
- that contains a tool.
- """
- ordered_tool_shed_repositories = []
- ordered_tsr_ids = []
- processed_tsr_ids = []
- prior_install_required_dict = self.get_prior_install_required_dict( tool_shed_repositories, repository_dependencies_dict )
- tsr_ids = [ tool_shed_repository.id for tool_shed_repository in tool_shed_repositories ]
- while len( processed_tsr_ids ) != len( prior_install_required_dict.keys() ):
- tsr_id = suc.get_next_prior_import_or_install_required_dict_entry( prior_install_required_dict, processed_tsr_ids )
- processed_tsr_ids.append( tsr_id )
- # Create the ordered_tsr_ids, the ordered_repo_info_dicts and the ordered_tool_panel_section_keys lists.
- if tsr_id not in ordered_tsr_ids:
- prior_install_required_ids = prior_install_required_dict[ tsr_id ]
- for prior_install_required_id in prior_install_required_ids:
- if prior_install_required_id not in ordered_tsr_ids:
- # Install the associated repository dependency first.
- ordered_tsr_ids.append( prior_install_required_id )
- ordered_tsr_ids.append( tsr_id )
- for ordered_tsr_id in ordered_tsr_ids:
- for tool_shed_repository in tool_shed_repositories:
- if tool_shed_repository.id == ordered_tsr_id:
- ordered_tool_shed_repositories.append( tool_shed_repository )
- break
- return ordered_tool_shed_repositories
-
- def __has_tool_dependencies( self, metadata_dict ):
- '''Determine if the provided metadata_dict specifies tool dependencies.'''
- # The use of the orphan_tool_dependencies category in metadata has been deprecated, but we still need to check in case
- # the metadata is out of date.
- if 'tool_dependencies' in metadata_dict or 'orphan_tool_dependencies' in metadata_dict:
- return True
- return False
-
- def __iscloned( self, clone_dir ):
- full_path = os.path.abspath( clone_dir )
- if os.path.exists( full_path ):
- for root, dirs, files in os.walk( full_path ):
- if '.hg' in dirs:
- # Assume that the repository has been installed if we find a .hg directory.
- return True
- return False
-
- def __is_repository_dependency( self, name, changeset_revision, tool_shed_repository ):
- '''Determine if the provided tool shed repository is a repository dependency.'''
- if str( tool_shed_repository.name ) != str( name ) or \
- str( tool_shed_repository.owner ) != str( self.repository_owner ) or \
- str( tool_shed_repository.changeset_revision ) != str( changeset_revision ):
- return True
- return False
-
- def __is_valid_repository_tag( self, elem ):
- # <repository name="emboss_datatypes" description="Datatypes for Emboss tools" changeset_revision="a89163f31369" />
- if elem.tag != 'repository':
- return False
- if not elem.get( 'name' ):
- return False
- if not elem.get( 'changeset_revision' ):
- return False
- return True
+ pass
\ No newline at end of file
diff -r 912ba22e939f505a33ef83d3bfbdbe26aa9c6a9f -r 09f80fcdfd85bf1b009b9c831e51fa92a6ff9306 lib/tool_shed/galaxy_install/migrate/common.py
--- a/lib/tool_shed/galaxy_install/migrate/common.py
+++ b/lib/tool_shed/galaxy_install/migrate/common.py
@@ -46,17 +46,19 @@
# Get the latest tool migration script number to send to the Install manager.
latest_migration_script_number = int( tools_migration_config.split( '_' )[ 0 ] )
- # The value of migrated_tools_config is migrated_tools_conf.xml, and is reserved for containing only those tools that have been
- # eliminated from the distribution and moved to the tool shed. A side-effect of instantiating the InstallManager is the automatic
+ # The value of migrated_tools_config is migrated_tools_conf.xml, and is reserved for
+ # containing only those tools that have been eliminated from the distribution and moved
+ # to the tool shed. A side-effect of instantiating the ToolMigrationlManager is the automatic
# installation of all appropriate tool shed repositories.
- self.install_manager = install_manager.InstallManager( app=self,
- latest_migration_script_number=latest_migration_script_number,
- tool_shed_install_config=os.path.join( self.config.root,
- 'scripts',
- 'migrate_tools',
- tools_migration_config ),
- migrated_tools_config=self.config.migrated_tools_config,
- install_dependencies=install_dependencies )
+ self.tool_migration_manager = \
+ tool_migration_manager.ToolMigrationManager( app=self,
+ latest_migration_script_number=latest_migration_script_number,
+ tool_shed_install_config=os.path.join( self.config.root,
+ 'scripts',
+ 'migrate_tools',
+ tools_migration_config ),
+ migrated_tools_config=self.config.migrated_tools_config,
+ install_dependencies=install_dependencies )
@property
def sa_session( self ):
diff -r 912ba22e939f505a33ef83d3bfbdbe26aa9c6a9f -r 09f80fcdfd85bf1b009b9c831e51fa92a6ff9306 lib/tool_shed/galaxy_install/repository_util.py
--- a/lib/tool_shed/galaxy_install/repository_util.py
+++ b/lib/tool_shed/galaxy_install/repository_util.py
@@ -625,7 +625,7 @@
tool_shed_repository=tool_shed_repository,
tool_dependencies_config=tool_dependencies_config,
tool_dependencies=tool_shed_repository.tool_dependencies,
- from_install_manager=False )
+ from_tool_migration_manager=False )
suc.remove_dir( work_dir )
suc.update_tool_shed_repository_status( trans.app,
tool_shed_repository,
@@ -881,7 +881,7 @@
tool_shed_repository=repository,
tool_dependencies_config=tool_dependencies_config,
tool_dependencies=repository.tool_dependencies,
- from_install_manager=False )
+ from_tool_migration_manager=False )
for installed_tool_dependency in installed_tool_dependencies:
if installed_tool_dependency.status in [ trans.install_model.ToolDependency.installation_status.ERROR ]:
repair_dict = add_repair_dict_entry( repository.name, installed_tool_dependency.error_message )
diff -r 912ba22e939f505a33ef83d3bfbdbe26aa9c6a9f -r 09f80fcdfd85bf1b009b9c831e51fa92a6ff9306 lib/tool_shed/galaxy_install/tool_dependencies/install_util.py
--- a/lib/tool_shed/galaxy_install/tool_dependencies/install_util.py
+++ b/lib/tool_shed/galaxy_install/tool_dependencies/install_util.py
@@ -175,7 +175,7 @@
return text
-def handle_complex_repository_dependency_for_package( app, elem, package_name, package_version, tool_shed_repository, from_install_manager=False ):
+def handle_complex_repository_dependency_for_package( app, elem, package_name, package_version, tool_shed_repository, from_tool_migration_manager=False ):
"""
Inspect the repository defined by a complex repository dependency definition and take certain steps to enable installation
of the received package name and version to proceed. The received elem is the <repository> tag set which defines the complex
@@ -224,7 +224,7 @@
# framework which installs repositories in 2 stages, those of type tool_dependency_definition
# followed by those containing valid tools and tool functional test components. Neither of these
# scenarios apply when the install manager is running.
- if from_install_manager:
+ if from_tool_migration_manager:
can_install_tool_dependency = True
else:
# Notice that we'll throw away the following tool_dependency if it can be installed.
@@ -306,7 +306,7 @@
tool_dependency = tool_dependency_util.mark_tool_dependency_installed( app, tool_dependency )
return tool_dependency
-def install_package( app, elem, tool_shed_repository, tool_dependencies=None, from_install_manager=False ):
+def install_package( app, elem, tool_shed_repository, tool_dependencies=None, from_tool_migration_manager=False ):
"""
Install a tool dependency package defined by the XML element elem. The value of tool_dependencies is
a partial or full list of ToolDependency records associated with the tool_shed_repository.
@@ -325,7 +325,7 @@
package_name,
package_version,
tool_shed_repository,
- from_install_manager=from_install_manager )
+ from_tool_migration_manager=from_tool_migration_manager )
for rd_tool_dependency in rd_tool_dependencies:
if rd_tool_dependency.status == app.install_model.ToolDependency.installation_status.ERROR:
# We'll log the error here, but continue installing packages since some may not require this dependency.
@@ -348,7 +348,7 @@
# framework which installs repositories in 2 stages, those of type tool_dependency_definition
# followed by those containing valid tools and tool functional test components. Neither of these
# scenarios apply when the install manager is running.
- if from_install_manager:
+ if from_tool_migration_manager:
can_install_tool_dependency = True
else:
# Notice that we'll throw away the following tool_dependency if it can be installed.
diff -r 912ba22e939f505a33ef83d3bfbdbe26aa9c6a9f -r 09f80fcdfd85bf1b009b9c831e51fa92a6ff9306 lib/tool_shed/galaxy_install/tool_migration_manager.py
--- /dev/null
+++ b/lib/tool_shed/galaxy_install/tool_migration_manager.py
@@ -0,0 +1,643 @@
+"""
+Manage automatic installation of tools configured in the xxx.xml files in ~/scripts/migrate_tools (e.g., 0002_tools.xml).
+All of the tools were at some point included in the Galaxy distribution, but are now hosted in the main Galaxy tool shed.
+"""
+import json
+import os
+import shutil
+import tempfile
+import threading
+import logging
+from galaxy import util
+from galaxy.tools import ToolSection
+import tool_shed.util.shed_util_common as suc
+from tool_shed.util import common_install_util
+from tool_shed.util import common_util
+from tool_shed.util import datatype_util
+from tool_shed.util import hg_util
+from tool_shed.util import metadata_util
+from tool_shed.util import tool_dependency_util
+from tool_shed.util import tool_util
+from tool_shed.util import xml_util
+from galaxy.util.odict import odict
+
+log = logging.getLogger( __name__ )
+
+
+class ToolMigrationlManager( object ):
+
+ def __init__( self, app, latest_migration_script_number, tool_shed_install_config, migrated_tools_config, install_dependencies ):
+ """
+ Check tool settings in tool_shed_install_config and install all repositories that are not already installed. The tool
+ panel configuration file is the received migrated_tools_config, which is the reserved file named migrated_tools_conf.xml.
+ """
+ self.app = app
+ self.toolbox = self.app.toolbox
+ self.migrated_tools_config = migrated_tools_config
+ # If install_dependencies is True but tool_dependency_dir is not set, do not attempt to install but print informative error message.
+ if install_dependencies and app.config.tool_dependency_dir is None:
+ message = 'You are attempting to install tool dependencies but do not have a value for "tool_dependency_dir" set in your universe_wsgi.ini '
+ message += 'file. Set this location value to the path where you want tool dependencies installed and rerun the migration script.'
+ raise Exception( message )
+ # Get the local non-shed related tool panel configs (there can be more than one, and the default name is tool_conf.xml).
+ self.proprietary_tool_confs = self.non_shed_tool_panel_configs
+ self.proprietary_tool_panel_elems = self.get_proprietary_tool_panel_elems( latest_migration_script_number )
+ # Set the location where the repositories will be installed by retrieving the tool_path setting from migrated_tools_config.
+ tree, error_message = xml_util.parse_xml( migrated_tools_config )
+ if tree is None:
+ print error_message
+ else:
+ root = tree.getroot()
+ self.tool_path = root.get( 'tool_path' )
+ print "Repositories will be installed into configured tool_path location ", str( self.tool_path )
+ # Parse tool_shed_install_config to check each of the tools.
+ self.tool_shed_install_config = tool_shed_install_config
+ tree, error_message = xml_util.parse_xml( tool_shed_install_config )
+ if tree is None:
+ print error_message
+ else:
+ root = tree.getroot()
+ defined_tool_shed_url = root.get( 'name' )
+ self.tool_shed_url = common_util.get_tool_shed_url_from_tool_shed_registry( self.app, defined_tool_shed_url )
+ self.tool_shed = common_util.remove_protocol_and_port_from_tool_shed_url( self.tool_shed_url )
+ self.repository_owner = common_util.REPOSITORY_OWNER
+ index, self.shed_config_dict = suc.get_shed_tool_conf_dict( app, self.migrated_tools_config )
+ # Since tool migration scripts can be executed any number of times, we need to make sure the appropriate tools are defined in
+ # tool_conf.xml. If no tools associated with the migration stage are defined, no repositories will be installed on disk.
+ # The default behavior is that the tool shed is down.
+ tool_shed_accessible = False
+ tool_panel_configs = common_util.get_non_shed_tool_panel_configs( app )
+ if tool_panel_configs:
+ # The missing_tool_configs_dict contents are something like:
+ # {'emboss_antigenic.xml': [('emboss', '5.0.0', 'package', '\nreadme blah blah blah\n')]}
+ tool_shed_accessible, missing_tool_configs_dict = common_util.check_for_missing_tools( app,
+ tool_panel_configs,
+ latest_migration_script_number )
+ else:
+ # It doesn't matter if the tool shed is accessible since there are no migrated tools defined in the local Galaxy instance, but
+ # we have to set the value of tool_shed_accessible to True so that the value of migrate_tools.version can be correctly set in
+ # the database.
+ tool_shed_accessible = True
+ missing_tool_configs_dict = odict()
+ if tool_shed_accessible:
+ if len( self.proprietary_tool_confs ) == 1:
+ plural = ''
+ file_names = self.proprietary_tool_confs[ 0 ]
+ else:
+ plural = 's'
+ file_names = ', '.join( self.proprietary_tool_confs )
+ if missing_tool_configs_dict:
+ for proprietary_tool_conf in self.proprietary_tool_confs:
+ # Create a backup of the tool configuration in the un-migrated state.
+ shutil.copy( proprietary_tool_conf, '%s-pre-stage-%04d' % ( proprietary_tool_conf, latest_migration_script_number ) )
+ for repository_elem in root:
+ # Make sure we have a valid repository tag.
+ if self.__is_valid_repository_tag( repository_elem ):
+ # Get all repository dependencies for the repository defined by the current repository_elem. Repository dependency
+ # definitions contained in tool shed repositories with migrated tools must never define a relationship to a repository
+ # dependency that contains a tool. The repository dependency can only contain items that are not loaded into the Galaxy
+ # tool panel (e.g., tool dependency definitions, custom datatypes, etc). This restriction must be followed down the
+ # entire dependency hierarchy.
+ name = repository_elem.get( 'name' )
+ changeset_revision = repository_elem.get( 'changeset_revision' )
+ tool_shed_accessible, repository_dependencies_dict = \
+ common_util.get_repository_dependencies( app,
+ self.tool_shed_url,
+ name,
+ self.repository_owner,
+ changeset_revision )
+ # Make sure all repository dependency records exist (as tool_shed_repository table rows) in the Galaxy database.
+ created_tool_shed_repositories = self.create_or_update_tool_shed_repository_records( name,
+ changeset_revision,
+ repository_dependencies_dict )
+ # Order the repositories for proper installation. This process is similar to the process used when installing tool
+ # shed repositories (i.e., the order_components_for_installation() method in ~/lib/tool_shed/galaxy_install/
+ # repository_util), but does not handle managing tool panel sections and other components since repository dependency
+ # definitions contained in tool shed repositories with migrated tools must never define a relationship to a repository
+ # dependency that contains a tool.
+ ordered_tool_shed_repositories = self.order_repositories_for_installation( created_tool_shed_repositories,
+ repository_dependencies_dict )
+
+ for tool_shed_repository in ordered_tool_shed_repositories:
+ is_repository_dependency = self.__is_repository_dependency( name, changeset_revision, tool_shed_repository )
+ self.install_repository( repository_elem,
+ tool_shed_repository,
+ install_dependencies,
+ is_repository_dependency=is_repository_dependency )
+ else:
+ message = "\nNo tools associated with migration stage %s are defined in your " % str( latest_migration_script_number )
+ message += "file%s named %s,\nso no repositories will be installed on disk.\n" % ( plural, file_names )
+ print message
+ else:
+ message = "\nThe main Galaxy tool shed is not currently available, so skipped migration stage %s.\n" % str( latest_migration_script_number )
+ message += "Try again later.\n"
+ print message
+
+ def create_or_update_tool_shed_repository_record( self, name, owner, changeset_revision, description=None ):
+
+ # Install path is of the form: <tool path>/<tool shed>/repos/<repository owner>/<repository name>/<installed changeset revision>
+ relative_clone_dir = os.path.join( self.tool_shed, 'repos', owner, name, changeset_revision )
+ clone_dir = os.path.join( self.tool_path, relative_clone_dir )
+ if not self.__iscloned( clone_dir ):
+ repository_clone_url = os.path.join( self.tool_shed_url, 'repos', owner, name )
+ relative_install_dir = os.path.join( relative_clone_dir, name )
+ install_dir = os.path.join( clone_dir, name )
+ ctx_rev = suc.get_ctx_rev( self.app, self.tool_shed_url, name, owner, changeset_revision )
+ tool_shed_repository = suc.create_or_update_tool_shed_repository( app=self.app,
+ name=name,
+ description=description,
+ installed_changeset_revision=changeset_revision,
+ ctx_rev=ctx_rev,
+ repository_clone_url=repository_clone_url,
+ metadata_dict={},
+ status=self.app.install_model.ToolShedRepository.installation_status.NEW,
+ current_changeset_revision=None,
+ owner=self.repository_owner,
+ dist_to_shed=True )
+ return tool_shed_repository
+ return None
+
+ def create_or_update_tool_shed_repository_records( self, name, changeset_revision, repository_dependencies_dict ):
+ """
+ Make sure the repository defined by name and changeset_revision and all of its repository dependencies have
+ associated tool_shed_repository table rows in the Galaxy database.
+ """
+ created_tool_shed_repositories = []
+ description = repository_dependencies_dict.get( 'description', None )
+ tool_shed_repository = self.create_or_update_tool_shed_repository_record( name,
+ self.repository_owner,
+ changeset_revision,
+ description=description )
+ if tool_shed_repository:
+ created_tool_shed_repositories.append( tool_shed_repository )
+ for rd_key, rd_tups in repository_dependencies_dict.items():
+ if rd_key in [ 'root_key', 'description' ]:
+ continue
+ for rd_tup in rd_tups:
+ parsed_rd_tup = common_util.parse_repository_dependency_tuple( rd_tup )
+ rd_tool_shed, rd_name, rd_owner, rd_changeset_revision = parsed_rd_tup[ 0:4 ]
+ # TODO: Make sure the repository description is applied to the new repository record during installation.
+ tool_shed_repository = self.create_or_update_tool_shed_repository_record( rd_name,
+ rd_owner,
+ rd_changeset_revision,
+ description=None )
+ if tool_shed_repository:
+ created_tool_shed_repositories.append( tool_shed_repository )
+ return created_tool_shed_repositories
+
+ def filter_and_persist_proprietary_tool_panel_configs( self, tool_configs_to_filter ):
+ """Eliminate all entries in all non-shed-related tool panel configs for all tool config file names in the received tool_configs_to_filter."""
+ for proprietary_tool_conf in self.proprietary_tool_confs:
+ persist_required = False
+ tree, error_message = xml_util.parse_xml( proprietary_tool_conf )
+ if tree:
+ root = tree.getroot()
+ for elem in root:
+ if elem.tag == 'tool':
+ # Tools outside of sections.
+ file_path = elem.get( 'file', None )
+ if file_path:
+ if file_path in tool_configs_to_filter:
+ root.remove( elem )
+ persist_required = True
+ elif elem.tag == 'section':
+ # Tools contained in a section.
+ for section_elem in elem:
+ if section_elem.tag == 'tool':
+ file_path = section_elem.get( 'file', None )
+ if file_path:
+ if file_path in tool_configs_to_filter:
+ elem.remove( section_elem )
+ persist_required = True
+ if persist_required:
+ fh = tempfile.NamedTemporaryFile( 'wb', prefix="tmp-toolshed-fapptpc" )
+ tmp_filename = fh.name
+ fh.close()
+ fh = open( tmp_filename, 'wb' )
+ tree.write( tmp_filename, encoding='utf-8', xml_declaration=True )
+ fh.close()
+ shutil.move( tmp_filename, os.path.abspath( proprietary_tool_conf ) )
+ os.chmod( proprietary_tool_conf, 0644 )
+
+ def get_containing_tool_sections( self, tool_config ):
+ """
+ If tool_config is defined somewhere in self.proprietary_tool_panel_elems, return True and a list of ToolSections in which the
+ tool is displayed. If the tool is displayed outside of any sections, None is appended to the list.
+ """
+ tool_sections = []
+ is_displayed = False
+ for proprietary_tool_panel_elem in self.proprietary_tool_panel_elems:
+ if proprietary_tool_panel_elem.tag == 'tool':
+ # The proprietary_tool_panel_elem looks something like <tool file="emboss_5/emboss_antigenic.xml" />.
+ proprietary_tool_config = proprietary_tool_panel_elem.get( 'file' )
+ if tool_config == proprietary_tool_config:
+ # The tool is loaded outside of any sections.
+ tool_sections.append( None )
+ if not is_displayed:
+ is_displayed = True
+ if proprietary_tool_panel_elem.tag == 'section':
+ # The proprietary_tool_panel_elem looks something like <section name="EMBOSS" id="EMBOSSLite">.
+ for section_elem in proprietary_tool_panel_elem:
+ if section_elem.tag == 'tool':
+ # The section_elem looks something like <tool file="emboss_5/emboss_antigenic.xml" />.
+ proprietary_tool_config = section_elem.get( 'file' )
+ if tool_config == proprietary_tool_config:
+ # The tool is loaded inside of the section_elem.
+ tool_sections.append( ToolSection( proprietary_tool_panel_elem ) )
+ if not is_displayed:
+ is_displayed = True
+ return is_displayed, tool_sections
+
+ def get_guid( self, repository_clone_url, relative_install_dir, tool_config ):
+ if self.shed_config_dict.get( 'tool_path' ):
+ relative_install_dir = os.path.join( self.shed_config_dict[ 'tool_path' ], relative_install_dir )
+ tool_config_filename = suc.strip_path( tool_config )
+ for root, dirs, files in os.walk( relative_install_dir ):
+ if root.find( '.hg' ) < 0 and root.find( 'hgrc' ) < 0:
+ if '.hg' in dirs:
+ dirs.remove( '.hg' )
+ for name in files:
+ filename = suc.strip_path( name )
+ if filename == tool_config_filename:
+ full_path = str( os.path.abspath( os.path.join( root, name ) ) )
+ tool = self.toolbox.load_tool( full_path )
+ return suc.generate_tool_guid( repository_clone_url, tool )
+ # Not quite sure what should happen here, throw an exception or what?
+ return None
+
+ def get_prior_install_required_dict( self, tool_shed_repositories, repository_dependencies_dict ):
+ """
+ Return a dictionary whose keys are the received tsr_ids and whose values are a list of tsr_ids, each of which is contained in the received
+ list of tsr_ids and whose associated repository must be installed prior to the repository associated with the tsr_id key.
+ """
+ # Initialize the dictionary.
+ prior_install_required_dict = {}
+ tsr_ids = [ tool_shed_repository.id for tool_shed_repository in tool_shed_repositories ]
+ for tsr_id in tsr_ids:
+ prior_install_required_dict[ tsr_id ] = []
+ # Inspect the repository dependencies about to be installed and populate the dictionary.
+ for rd_key, rd_tups in repository_dependencies_dict.items():
+ if rd_key in [ 'root_key', 'description' ]:
+ continue
+ for rd_tup in rd_tups:
+ prior_install_ids = []
+ tool_shed, name, owner, changeset_revision, prior_installation_required, only_if_compiling_contained_td = \
+ common_util.parse_repository_dependency_tuple( rd_tup )
+ if util.asbool( prior_installation_required ):
+ for tsr in tool_shed_repositories:
+ if tsr.name == name and tsr.owner == owner and tsr.changeset_revision == changeset_revision:
+ prior_install_ids.append( tsr.id )
+ prior_install_required_dict[ tsr.id ] = prior_install_ids
+ return prior_install_required_dict
+
+ def get_proprietary_tool_panel_elems( self, latest_tool_migration_script_number ):
+ """
+ Parse each config in self.proprietary_tool_confs (the default is tool_conf.xml) and generate a list of Elements that are
+ either ToolSection elements or Tool elements. These will be used to generate new entries in the migrated_tools_conf.xml
+ file for the installed tools.
+ """
+ tools_xml_file_path = os.path.abspath( os.path.join( 'scripts', 'migrate_tools', '%04d_tools.xml' % latest_tool_migration_script_number ) )
+ # Parse the XML and load the file attributes for later checking against the integrated elements from self.proprietary_tool_confs.
+ migrated_tool_configs = []
+ tree, error_message = xml_util.parse_xml( tools_xml_file_path )
+ if tree is None:
+ return []
+ root = tree.getroot()
+ for elem in root:
+ if elem.tag == 'repository':
+ for tool_elem in elem:
+ migrated_tool_configs.append( tool_elem.get( 'file' ) )
+ # Parse each file in self.proprietary_tool_confs and generate the integrated list of tool panel Elements that contain them.
+ tool_panel_elems = []
+ for proprietary_tool_conf in self.proprietary_tool_confs:
+ tree, error_message = xml_util.parse_xml( proprietary_tool_conf )
+ if tree is None:
+ return []
+ root = tree.getroot()
+ for elem in root:
+ if elem.tag == 'tool':
+ # Tools outside of sections.
+ file_path = elem.get( 'file', None )
+ if file_path:
+ if file_path in migrated_tool_configs:
+ if elem not in tool_panel_elems:
+ tool_panel_elems.append( elem )
+ elif elem.tag == 'section':
+ # Tools contained in a section.
+ for section_elem in elem:
+ if section_elem.tag == 'tool':
+ file_path = section_elem.get( 'file', None )
+ if file_path:
+ if file_path in migrated_tool_configs:
+ # Append the section, not the tool.
+ if elem not in tool_panel_elems:
+ tool_panel_elems.append( elem )
+ return tool_panel_elems
+
+ def handle_repository_contents( self, tool_shed_repository, repository_clone_url, relative_install_dir, repository_elem,
+ install_dependencies, is_repository_dependency=False ):
+ """
+ Generate the metadata for the installed tool shed repository, among other things. If the installed tool_shed_repository
+ contains tools that are loaded into the Galaxy tool panel, this method will automatically eliminate all entries for each
+ of the tools defined in the received repository_elem from all non-shed-related tool panel configuration files since the
+ entries are automatically added to the reserved migrated_tools_conf.xml file as part of the migration process.
+ """
+ tool_configs_to_filter = []
+ tool_panel_dict_for_display = odict()
+ if self.tool_path:
+ repo_install_dir = os.path.join( self.tool_path, relative_install_dir )
+ else:
+ repo_install_dir = relative_install_dir
+ if not is_repository_dependency:
+ for tool_elem in repository_elem:
+ # The tool_elem looks something like this: <tool id="EMBOSS: antigenic1" version="5.0.0" file="emboss_antigenic.xml" />
+ tool_config = tool_elem.get( 'file' )
+ guid = self.get_guid( repository_clone_url, relative_install_dir, tool_config )
+ # See if tool_config is defined inside of a section in self.proprietary_tool_panel_elems.
+ is_displayed, tool_sections = self.get_containing_tool_sections( tool_config )
+ if is_displayed:
+ tool_panel_dict_for_tool_config = \
+ tool_util.generate_tool_panel_dict_for_tool_config( guid, tool_config, tool_sections=tool_sections )
+ # The tool-panel_dict has the following structure.
+ # {<Tool guid> : [{ tool_config : <tool_config_file>, id: <ToolSection id>, version : <ToolSection version>,
+ # name : <ToolSection name>}]}
+ for k, v in tool_panel_dict_for_tool_config.items():
+ tool_panel_dict_for_display[ k ] = v
+ for tool_panel_dict in v:
+ # Keep track of tool config file names associated with entries that have been made to the
+ # migrated_tools_conf.xml file so they can be eliminated from all non-shed-related tool panel configs.
+ if tool_config not in tool_configs_to_filter:
+ tool_configs_to_filter.append( tool_config )
+ else:
+ print 'The tool "%s" (%s) has not been enabled because it is not defined in a proprietary tool config (%s).' \
+ % ( guid, tool_config, ", ".join( self.proprietary_tool_confs or [] ) )
+ if tool_configs_to_filter:
+ lock = threading.Lock()
+ lock.acquire( True )
+ try:
+ self.filter_and_persist_proprietary_tool_panel_configs( tool_configs_to_filter )
+ except Exception, e:
+ log.exception( "Exception attempting to filter and persist non-shed-related tool panel configs:\n%s" % str( e ) )
+ finally:
+ lock.release()
+ metadata_dict, invalid_file_tups = \
+ metadata_util.generate_metadata_for_changeset_revision( app=self.app,
+ repository=tool_shed_repository,
+ changeset_revision=tool_shed_repository.changeset_revision,
+ repository_clone_url=repository_clone_url,
+ shed_config_dict = self.shed_config_dict,
+ relative_install_dir=relative_install_dir,
+ repository_files_dir=None,
+ resetting_all_metadata_on_repository=False,
+ updating_installed_repository=False,
+ persist=True )
+ tool_shed_repository.metadata = metadata_dict
+ self.app.install_model.context.add( tool_shed_repository )
+ self.app.install_model.context.flush()
+ has_tool_dependencies = self.__has_tool_dependencies( metadata_dict )
+ if has_tool_dependencies:
+ # All tool_dependency objects must be created before the tools are processed even if no
+ # tool dependencies will be installed.
+ tool_dependencies = tool_dependency_util.create_tool_dependency_objects( self.app,
+ tool_shed_repository,
+ relative_install_dir,
+ set_status=True )
+ else:
+ tool_dependencies = None
+ if 'tools' in metadata_dict:
+ sample_files = metadata_dict.get( 'sample_files', [] )
+ sample_files = [ str( s ) for s in sample_files ]
+ tool_index_sample_files = tool_util.get_tool_index_sample_files( sample_files )
+ tool_util.copy_sample_files( self.app, tool_index_sample_files, tool_path=self.tool_path )
+ sample_files_copied = [ s for s in tool_index_sample_files ]
+ repository_tools_tups = suc.get_repository_tools_tups( self.app, metadata_dict )
+ if repository_tools_tups:
+ # Handle missing data table entries for tool parameters that are dynamically generated select lists.
+ repository_tools_tups = tool_util.handle_missing_data_table_entry( self.app, relative_install_dir, self.tool_path, repository_tools_tups )
+ # Handle missing index files for tool parameters that are dynamically generated select lists.
+ repository_tools_tups, sample_files_copied = tool_util.handle_missing_index_file( self.app,
+ self.tool_path,
+ sample_files,
+ repository_tools_tups,
+ sample_files_copied )
+ # Copy remaining sample files included in the repository to the ~/tool-data directory of the local Galaxy instance.
+ tool_util.copy_sample_files( self.app, sample_files, tool_path=self.tool_path, sample_files_copied=sample_files_copied )
+ if not is_repository_dependency:
+ tool_util.add_to_tool_panel( self.app,
+ tool_shed_repository.name,
+ repository_clone_url,
+ tool_shed_repository.installed_changeset_revision,
+ repository_tools_tups,
+ self.repository_owner,
+ self.migrated_tools_config,
+ tool_panel_dict=tool_panel_dict_for_display,
+ new_install=True )
+ if install_dependencies and tool_dependencies and has_tool_dependencies:
+ # Install tool dependencies.
+ suc.update_tool_shed_repository_status( self.app,
+ tool_shed_repository,
+ self.app.install_model.ToolShedRepository.installation_status.INSTALLING_TOOL_DEPENDENCIES )
+ # Get the tool_dependencies.xml file from disk.
+ tool_dependencies_config = suc.get_config_from_disk( 'tool_dependencies.xml', repo_install_dir )
+ installed_tool_dependencies = common_install_util.handle_tool_dependencies( app=self.app,
+ tool_shed_repository=tool_shed_repository,
+ tool_dependencies_config=tool_dependencies_config,
+ tool_dependencies=tool_dependencies,
+ from_tool_migration_manager=True )
+ for installed_tool_dependency in installed_tool_dependencies:
+ if installed_tool_dependency.status == self.app.install_model.ToolDependency.installation_status.ERROR:
+ print '\nThe following error occurred from the InstallManager while installing tool dependency ', installed_tool_dependency.name, ':'
+ print installed_tool_dependency.error_message, '\n\n'
+ if 'datatypes' in metadata_dict:
+ tool_shed_repository.status = self.app.install_model.ToolShedRepository.installation_status.LOADING_PROPRIETARY_DATATYPES
+ if not tool_shed_repository.includes_datatypes:
+ tool_shed_repository.includes_datatypes = True
+ self.app.install_model.context.add( tool_shed_repository )
+ self.app.install_model.context.flush()
+ work_dir = tempfile.mkdtemp( prefix="tmp-toolshed-hrc" )
+ datatypes_config = suc.get_config_from_disk( suc.DATATYPES_CONFIG_FILENAME, repo_install_dir )
+ # Load proprietary data types required by tools. The value of override is not important here since the Galaxy server will be started
+ # after this installation completes.
+ converter_path, display_path = datatype_util.alter_config_and_load_prorietary_datatypes( self.app, datatypes_config, repo_install_dir, override=False ) #repo_install_dir was relative_install_dir
+ if converter_path or display_path:
+ # Create a dictionary of tool shed repository related information.
+ repository_dict = datatype_util.create_repository_dict_for_proprietary_datatypes( tool_shed=self.tool_shed_url,
+ name=tool_shed_repository.name,
+ owner=self.repository_owner,
+ installed_changeset_revision=tool_shed_repository.installed_changeset_revision,
+ tool_dicts=metadata_dict.get( 'tools', [] ),
+ converter_path=converter_path,
+ display_path=display_path )
+ if converter_path:
+ # Load proprietary datatype converters
+ self.app.datatypes_registry.load_datatype_converters( self.toolbox, installed_repository_dict=repository_dict )
+ if display_path:
+ # Load proprietary datatype display applications
+ self.app.datatypes_registry.load_display_applications( installed_repository_dict=repository_dict )
+ suc.remove_dir( work_dir )
+
+ def install_repository( self, repository_elem, tool_shed_repository, install_dependencies, is_repository_dependency=False ):
+ """Install a single repository, loading contained tools into the tool panel."""
+ # Install path is of the form: <tool path>/<tool shed>/repos/<repository owner>/<repository name>/<installed changeset revision>
+ relative_clone_dir = os.path.join( tool_shed_repository.tool_shed,
+ 'repos',
+ tool_shed_repository.owner,
+ tool_shed_repository.name,
+ tool_shed_repository.installed_changeset_revision )
+ clone_dir = os.path.join( self.tool_path, relative_clone_dir )
+ cloned_ok = self.__iscloned( clone_dir )
+ is_installed = False
+ # Any of the following states should count as installed in this context.
+ if tool_shed_repository.status in [ self.app.install_model.ToolShedRepository.installation_status.INSTALLED,
+ self.app.install_model.ToolShedRepository.installation_status.ERROR,
+ self.app.install_model.ToolShedRepository.installation_status.UNINSTALLED,
+ self.app.install_model.ToolShedRepository.installation_status.DEACTIVATED ]:
+ is_installed = True
+ if cloned_ok and is_installed:
+ print "Skipping automatic install of repository '", tool_shed_repository.name, "' because it has already been installed in location ", clone_dir
+ else:
+ repository_clone_url = os.path.join( self.tool_shed_url, 'repos', tool_shed_repository.owner, tool_shed_repository.name )
+ relative_install_dir = os.path.join( relative_clone_dir, tool_shed_repository.name )
+ install_dir = os.path.join( clone_dir, tool_shed_repository.name )
+ ctx_rev = suc.get_ctx_rev( self.app,
+ self.tool_shed_url,
+ tool_shed_repository.name,
+ tool_shed_repository.owner,
+ tool_shed_repository.installed_changeset_revision )
+ if not cloned_ok:
+ suc.update_tool_shed_repository_status( self.app,
+ tool_shed_repository,
+ self.app.install_model.ToolShedRepository.installation_status.CLONING )
+ cloned_ok, error_message = hg_util.clone_repository( repository_clone_url, os.path.abspath( install_dir ), ctx_rev )
+ if cloned_ok and not is_installed:
+ self.handle_repository_contents( tool_shed_repository=tool_shed_repository,
+ repository_clone_url=repository_clone_url,
+ relative_install_dir=relative_install_dir,
+ repository_elem=repository_elem,
+ install_dependencies=install_dependencies,
+ is_repository_dependency=is_repository_dependency )
+ self.app.install_model.context.refresh( tool_shed_repository )
+ metadata_dict = tool_shed_repository.metadata
+ if 'tools' in metadata_dict:
+ suc.update_tool_shed_repository_status( self.app,
+ tool_shed_repository,
+ self.app.install_model.ToolShedRepository.installation_status.SETTING_TOOL_VERSIONS )
+ # Get the tool_versions from the tool shed for each tool in the installed change set.
+ url = '%s/repository/get_tool_versions?name=%s&owner=%s&changeset_revision=%s' % \
+ ( self.tool_shed_url, tool_shed_repository.name, self.repository_owner, tool_shed_repository.installed_changeset_revision )
+ text = common_util.tool_shed_get( self.app, self.tool_shed_url, url )
+ if text:
+ tool_version_dicts = json.loads( text )
+ tool_util.handle_tool_versions( self.app, tool_version_dicts, tool_shed_repository )
+ else:
+ # Set the tool versions since they seem to be missing for this repository in the tool shed.
+ # CRITICAL NOTE: These default settings may not properly handle all parent/child associations.
+ for tool_dict in metadata_dict[ 'tools' ]:
+ flush_needed = False
+ tool_id = tool_dict[ 'guid' ]
+ old_tool_id = tool_dict[ 'id' ]
+ tool_version = tool_dict[ 'version' ]
+ tool_version_using_old_id = tool_util.get_tool_version( self.app, old_tool_id )
+ tool_version_using_guid = tool_util.get_tool_version( self.app, tool_id )
+ if not tool_version_using_old_id:
+ tool_version_using_old_id = self.app.install_model.ToolVersion( tool_id=old_tool_id,
+ tool_shed_repository=tool_shed_repository )
+ self.app.install_model.context.add( tool_version_using_old_id )
+ self.app.install_model.context.flush()
+ if not tool_version_using_guid:
+ tool_version_using_guid = self.app.install_model.ToolVersion( tool_id=tool_id,
+ tool_shed_repository=tool_shed_repository )
+ self.app.install_model.context.add( tool_version_using_guid )
+ self.app.install_model.context.flush()
+ # Associate the two versions as parent / child.
+ tool_version_association = tool_util.get_tool_version_association( self.app,
+ tool_version_using_old_id,
+ tool_version_using_guid )
+ if not tool_version_association:
+ tool_version_association = self.app.install_model.ToolVersionAssociation( tool_id=tool_version_using_guid.id,
+ parent_id=tool_version_using_old_id.id )
+ self.app.install_model.context.add( tool_version_association )
+ self.app.install_model.context.flush()
+ suc.update_tool_shed_repository_status( self.app, tool_shed_repository, self.app.install_model.ToolShedRepository.installation_status.INSTALLED )
+ else:
+ print 'Error attempting to clone repository %s: %s' % ( str( tool_shed_repository.name ), str( error_message ) )
+ suc.update_tool_shed_repository_status( self.app,
+ tool_shed_repository,
+ self.app.install_model.ToolShedRepository.installation_status.ERROR,
+ error_message=error_message )
+
+ @property
+ def non_shed_tool_panel_configs( self ):
+ return common_util.get_non_shed_tool_panel_configs( self.app )
+
+ def order_repositories_for_installation( self, tool_shed_repositories, repository_dependencies_dict ):
+ """
+ Some repositories may have repository dependencies that are required to be installed before the dependent
+ repository. This method will inspect the list of repositories about to be installed and make sure to order
+ them appropriately. For each repository about to be installed, if required repositories are not contained
+ in the list of repositories about to be installed, then they are not considered. Repository dependency
+ definitions that contain circular dependencies should not result in an infinite loop, but obviously prior
+ installation will not be handled for one or more of the repositories that require prior installation. This
+ process is similar to the process used when installing tool shed repositories (i.e., the
+ order_components_for_installation() method in ~/lib/tool_shed/galaxy_install/repository_util), but does not
+ handle managing tool panel sections and other components since repository dependency definitions contained
+ in tool shed repositories with migrated tools must never define a relationship to a repository dependency
+ that contains a tool.
+ """
+ ordered_tool_shed_repositories = []
+ ordered_tsr_ids = []
+ processed_tsr_ids = []
+ prior_install_required_dict = self.get_prior_install_required_dict( tool_shed_repositories, repository_dependencies_dict )
+ tsr_ids = [ tool_shed_repository.id for tool_shed_repository in tool_shed_repositories ]
+ while len( processed_tsr_ids ) != len( prior_install_required_dict.keys() ):
+ tsr_id = suc.get_next_prior_import_or_install_required_dict_entry( prior_install_required_dict, processed_tsr_ids )
+ processed_tsr_ids.append( tsr_id )
+ # Create the ordered_tsr_ids, the ordered_repo_info_dicts and the ordered_tool_panel_section_keys lists.
+ if tsr_id not in ordered_tsr_ids:
+ prior_install_required_ids = prior_install_required_dict[ tsr_id ]
+ for prior_install_required_id in prior_install_required_ids:
+ if prior_install_required_id not in ordered_tsr_ids:
+ # Install the associated repository dependency first.
+ ordered_tsr_ids.append( prior_install_required_id )
+ ordered_tsr_ids.append( tsr_id )
+ for ordered_tsr_id in ordered_tsr_ids:
+ for tool_shed_repository in tool_shed_repositories:
+ if tool_shed_repository.id == ordered_tsr_id:
+ ordered_tool_shed_repositories.append( tool_shed_repository )
+ break
+ return ordered_tool_shed_repositories
+
+ def __has_tool_dependencies( self, metadata_dict ):
+ '''Determine if the provided metadata_dict specifies tool dependencies.'''
+ # The use of the orphan_tool_dependencies category in metadata has been deprecated, but we still need to check in case
+ # the metadata is out of date.
+ if 'tool_dependencies' in metadata_dict or 'orphan_tool_dependencies' in metadata_dict:
+ return True
+ return False
+
+ def __iscloned( self, clone_dir ):
+ full_path = os.path.abspath( clone_dir )
+ if os.path.exists( full_path ):
+ for root, dirs, files in os.walk( full_path ):
+ if '.hg' in dirs:
+ # Assume that the repository has been installed if we find a .hg directory.
+ return True
+ return False
+
+ def __is_repository_dependency( self, name, changeset_revision, tool_shed_repository ):
+ '''Determine if the provided tool shed repository is a repository dependency.'''
+ if str( tool_shed_repository.name ) != str( name ) or \
+ str( tool_shed_repository.owner ) != str( self.repository_owner ) or \
+ str( tool_shed_repository.changeset_revision ) != str( changeset_revision ):
+ return True
+ return False
+
+ def __is_valid_repository_tag( self, elem ):
+ # <repository name="emboss_datatypes" description="Datatypes for Emboss tools" changeset_revision="a89163f31369" />
+ if elem.tag != 'repository':
+ return False
+ if not elem.get( 'name' ):
+ return False
+ if not elem.get( 'changeset_revision' ):
+ return False
+ return True
diff -r 912ba22e939f505a33ef83d3bfbdbe26aa9c6a9f -r 09f80fcdfd85bf1b009b9c831e51fa92a6ff9306 lib/tool_shed/util/common_install_util.py
--- a/lib/tool_shed/util/common_install_util.py
+++ b/lib/tool_shed/util/common_install_util.py
@@ -483,7 +483,7 @@
all_required_repo_info_dict[ 'all_repo_info_dicts' ] = all_repo_info_dicts
return all_required_repo_info_dict
-def handle_tool_dependencies( app, tool_shed_repository, tool_dependencies_config, tool_dependencies, from_install_manager=False ):
+def handle_tool_dependencies( app, tool_shed_repository, tool_dependencies_config, tool_dependencies, from_tool_migration_manager=False ):
"""
Install and build tool dependencies defined in the tool_dependencies_config. This config's tag sets can currently refer to installation
methods in Galaxy's tool_dependencies module. In the future, proprietary fabric scripts contained in the repository will be supported.
@@ -542,7 +542,7 @@
elem,
tool_shed_repository,
tool_dependencies=tool_dependencies,
- from_install_manager=from_install_manager )
+ from_tool_migration_manager=from_tool_migration_manager )
except Exception, e:
error_message = "Error installing tool dependency package %s version %s: %s" % ( str( package_name ), str( package_version ), str( e ) )
log.exception( error_message )
diff -r 912ba22e939f505a33ef83d3bfbdbe26aa9c6a9f -r 09f80fcdfd85bf1b009b9c831e51fa92a6ff9306 lib/tool_shed/util/common_util.py
--- a/lib/tool_shed/util/common_util.py
+++ b/lib/tool_shed/util/common_util.py
@@ -163,7 +163,7 @@
def get_tool_dependencies( app, tool_shed_url, repository_name, repository_owner, changeset_revision ):
tool_dependencies = []
tool_shed_accessible = True
- url = '%s/repository/get_tool_dependencies?name=%s&owner=%s&changeset_revision=%s&from_install_manager=True' % \
+ url = '%s/repository/get_tool_dependencies?name=%s&owner=%s&changeset_revision=%s' % \
( tool_shed_url, repository_name, repository_owner, changeset_revision )
try:
text = tool_shed_get( app, tool_shed_url, url )
diff -r 912ba22e939f505a33ef83d3bfbdbe26aa9c6a9f -r 09f80fcdfd85bf1b009b9c831e51fa92a6ff9306 lib/tool_shed/util/shed_util_common.py
--- a/lib/tool_shed/util/shed_util_common.py
+++ b/lib/tool_shed/util/shed_util_common.py
@@ -182,7 +182,7 @@
If a record defined by the received tool shed, repository name and owner does not exist, create
a new record with the received information.
"""
- # The received value for dist_to_shed will be True if the InstallManager is installing a repository
+ # The received value for dist_to_shed will be True if the ToolMigrationManager is installing a repository
# that contains tools or datatypes that used to be in the Galaxy distribution, but have been moved
# to the main Galaxy tool shed.
if current_changeset_revision is None:
diff -r 912ba22e939f505a33ef83d3bfbdbe26aa9c6a9f -r 09f80fcdfd85bf1b009b9c831e51fa92a6ff9306 scripts/migrate_tools/migrate_tools.py
--- a/scripts/migrate_tools/migrate_tools.py
+++ b/scripts/migrate_tools/migrate_tools.py
@@ -1,8 +1,8 @@
"""
-This script will start up its own web application which includes an InstallManager (~/lib/galaxy/tool_shed/install_manager.py).
+This script will start up its own web application which includes a ToolMigrationManager (~/lib/galaxy/tool_shed/tool_migration_manager.py).
For each tool discovered missing, the tool shed repository that contains it will be installed on disk and a new entry will be
created for it in the migrated_tools_conf.xml file. These entries will be made so that the tool panel will be displayed the same
-as it was before the tools were eliminated from the Galaxy distribution. The InstallManager will properly handle entries in
+as it was before the tools were eliminated from the Galaxy distribution. The ToolMigrationManager will properly handle entries in
migrated_tools_conf.xml for tools outside tool panel sections as well as tools inside tool panel sections, depending upon the
layout of the local tool_conf.xml file. Entries will not be created in migrated_tools_conf.xml for tools included in the tool
shed repository but not defined in tool_conf.xml.
@@ -19,7 +19,7 @@
from tool_shed.galaxy_install.migrate.common import MigrateToolsApplication
app = MigrateToolsApplication( sys.argv[ 1 ] )
-non_shed_tool_confs = app.install_manager.proprietary_tool_confs
+non_shed_tool_confs = app.tool_migration_manager.proprietary_tool_confs
if len( non_shed_tool_confs ) == 1:
plural = ''
file_names = non_shed_tool_confs[ 0 ]
Repository URL: https://bitbucket.org/galaxy/galaxy-central/
44 new commits in galaxy-central:
https://bitbucket.org/galaxy/galaxy-central/commits/a153472ba81a/
Changeset: a153472ba81a
User: jmchilton
Date: 2014-05-06 15:54:30
Summary: Record history content type with output of history contents API.
Downstream in the dataset collections branch this isn't just 'dataset'; including this field will allow clients to distinguish between different kinds of content.
Affected #: 2 files
diff -r 9e0362d990f88502b77211d2d3f7f5459123f52c -r a153472ba81ab8a8eebf55bdc3358a83d447c6cc lib/galaxy/model/__init__.py
--- a/lib/galaxy/model/__init__.py
+++ b/lib/galaxy/model/__init__.py
@@ -1932,6 +1932,7 @@
purged = hda.purged,
visible = hda.visible,
state = hda.state,
+ history_content_type=hda.history_content_type,
file_size = int( hda.get_size() ),
update_time = hda.update_time.isoformat(),
data_type = hda.ext,
@@ -2001,6 +2002,10 @@
return changed
+ @property
+ def history_content_type( self ):
+ return "dataset"
+
class HistoryDatasetAssociationDisplayAtAuthorization( object ):
def __init__( self, hda=None, user=None, site=None ):
diff -r 9e0362d990f88502b77211d2d3f7f5459123f52c -r a153472ba81ab8a8eebf55bdc3358a83d447c6cc lib/galaxy/webapps/galaxy/api/history_contents.py
--- a/lib/galaxy/webapps/galaxy/api/history_contents.py
+++ b/lib/galaxy/webapps/galaxy/api/history_contents.py
@@ -138,6 +138,7 @@
'visible': hda.visible,
'purged': hda.purged,
'hid' : hda.hid,
+ 'history_content_type' : hda.history_content_type,
'url' : url_for( 'history_content', history_id=encoded_history_id, id=encoded_id, ),
}
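As a rough illustration of how a client might consume the new field (this sketch is not part of the changeset above), the following fetches a history's contents and groups them by history_content_type. The history id is a placeholder, API authentication is omitted, and jQuery, underscore, and galaxy_config.root are assumed to be available as they are elsewhere in the Galaxy client:

// Rough sketch only: group one history's contents by the new history_content_type field.
// 'historyId' is a placeholder, not a real id.
var historyId = 'PLACEHOLDER_HISTORY_ID';
jQuery.getJSON( galaxy_config.root + 'api/histories/' + historyId + '/contents', function( contents ){
    var byType = _.groupBy( contents, function( item ){
        // servers without this changeset won't send the field, so fall back to 'dataset'
        return item.history_content_type || 'dataset';
    });
    _.each( byType, function( items, type ){
        console.log( type + ': ' + items.length + ' item(s)' );
    });
});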
https://bitbucket.org/galaxy/galaxy-central/commits/9fc72733a583/
Changeset: 9fc72733a583
User: jmchilton
Date: 2014-05-06 15:54:30
Summary: Refactor backbone models to allow different kinds of history contents.
In particular, add a new backbone parent model (HistoryContent) refactored out of HistoryDatasetAssociation. This makes room for a forthcoming HistoryDatasetCollectionAssociation model that will be emitted from the history contents API.
There are likely better ways to share common functionality between dataset and dataset collection contents; more JS-savvy developers should feel free to refactor this into smaller mixins, for instance. For now, HistoryContent is simply the overlap in functionality between the downstream dataset and collection representations and is a quick and dirty way to share said functionality.
Affected #: 1 file
diff -r a153472ba81ab8a8eebf55bdc3358a83d447c6cc -r 9fc72733a583de0b40c36507eea7568fd5acbdfe static/scripts/mvc/dataset/hda-model.js
--- a/static/scripts/mvc/dataset/hda-model.js
+++ b/static/scripts/mvc/dataset/hda-model.js
@@ -3,16 +3,156 @@
"utils/localization"
], function( baseMVC, _l ){
//==============================================================================
-/** @class (HDA) model for a Galaxy dataset
- * related to a history.
- * @name HistoryDatasetAssociation
+/** @class model for contents related to a history.
+ * @name HistoryContent
*
* @augments Backbone.Model
* @borrows LoggableMixin#logger as #logger
* @borrows LoggableMixin#log as #log
* @constructs
*/
-var HistoryDatasetAssociation = Backbone.Model.extend( baseMVC.LoggableMixin ).extend(
+var HistoryContent = Backbone.Model.extend( baseMVC.LoggableMixin ).extend( {
+
+ /** fetch location of this HDA's history in the api */
+ urlRoot: galaxy_config.root + 'api/histories/',
+ /** full url spec. for this HDA */
+ url : function(){
+ return this.urlRoot + this.get( 'history_id' ) + '/contents/' + this.get('history_content_type') + 's/' + this.get( 'id' );
+ },
+
+ /** the more common alias of visible */
+ hidden : function(){
+ return !this.get( 'visible' );
+ },
+
+ // ........................................................................ ajax
+
+ /** save this HDA, _Mark_ing it as deleted (just a flag) */
+ 'delete' : function _delete( options ){
+ if( this.get( 'deleted' ) ){ return jQuery.when(); }
+ return this.save( { deleted: true }, options );
+ },
+ /** save this HDA, _Mark_ing it as undeleted */
+ undelete : function _undelete( options ){
+ if( !this.get( 'deleted' ) || this.get( 'purged' ) ){ return jQuery.when(); }
+ return this.save( { deleted: false }, options );
+ },
+
+ /** save this HDA as not visible */
+ hide : function _hide( options ){
+ if( !this.get( 'visible' ) ){ return jQuery.when(); }
+ return this.save( { visible: false }, options );
+ },
+ /** save this HDA as visible */
+ unhide : function _uhide( options ){
+ if( this.get( 'visible' ) ){ return jQuery.when(); }
+ return this.save( { visible: true }, options );
+ },
+
+ /** based on show_deleted, show_hidden (gen. from the container control),
+ * would this ds show in the list of ds's?
+ * @param {Boolean} show_deleted are we showing deleted hdas?
+ * @param {Boolean} show_hidden are we showing hidden hdas?
+ */
+ isVisible : function( show_deleted, show_hidden ){
+ var isVisible = true;
+ if( ( !show_deleted )
+ && ( this.get( 'deleted' ) || this.get( 'purged' ) ) ){
+ isVisible = false;
+ }
+ if( ( !show_hidden )
+ && ( !this.get( 'visible' ) ) ){
+ isVisible = false;
+ }
+ return isVisible;
+ },
+
+ /** search the attribute with key attrKey for the string searchFor; T/F if found */
+ searchAttribute : function( attrKey, searchFor ){
+ var attrVal = this.get( attrKey );
+ //console.debug( 'searchAttribute', attrKey, attrVal, searchFor );
+ // bail if empty searchFor or unsearchable values
+ if( !searchFor
+ || ( attrVal === undefined || attrVal === null ) ){
+ return false;
+ }
+ // pass to sep. fn for deep search of array attributes
+ if( _.isArray( attrVal ) ){ return this._searchArrayAttribute( attrVal, searchFor ); }
+ return ( attrVal.toString().toLowerCase().indexOf( searchFor.toLowerCase() ) !== -1 );
+ },
+
+ /** deep(er) search for array attributes; T/F if found */
+ _searchArrayAttribute : function( array, searchFor ){
+ //console.debug( '_searchArrayAttribute', array, searchFor );
+ searchFor = searchFor.toLowerCase();
+ //precondition: searchFor has already been validated as non-empty string
+ //precondition: assumes only 1 level array
+ //TODO: could possibly break up searchFor more (CSV...)
+ return _.any( array, function( elem ){
+ return ( elem.toString().toLowerCase().indexOf( searchFor.toLowerCase() ) !== -1 );
+ });
+ },
+
+ /** search all searchAttributes for the string searchFor,
+ * returning a list of keys of attributes that contain searchFor
+ */
+ search : function( searchFor ){
+ var model = this;
+ return _.filter( this.searchAttributes, function( key ){
+ return model.searchAttribute( key, searchFor );
+ });
+ },
+
+ /** alias of search, but returns a boolean; accepts attribute specifiers where
+ * the attributes searched can be narrowed to a single attribute using
+ * the form: matches( 'genome_build=hg19' )
+ * (the attribute keys allowed can also be aliases to the true attribute key;
+ * see searchAliases above)
+ * @param {String} term plain text or ATTR_SPECIFIER sep. key=val pair
+ * @returns {Boolean} was term found in (any) attribute(s)
+ */
+ matches : function( term ){
+ var ATTR_SPECIFIER = '=',
+ split = term.split( ATTR_SPECIFIER );
+ // attribute is specified - search only that
+ if( split.length >= 2 ){
+ var attrKey = split[0];
+ attrKey = this.searchAliases[ attrKey ] || attrKey;
+ return this.searchAttribute( attrKey, split[1] );
+ }
+ // no attribute is specified - search all attributes in searchAttributes
+ return !!this.search( term ).length;
+ },
+
+ /** an implicit AND search for all terms; IOW, an hda must match all terms given
+ * where terms is a whitespace separated value string.
+ * e.g. given terms of: 'blah bler database=hg19'
+ * an HDA would have to have attributes containing blah AND bler AND a genome_build == hg19
+ * To include whitespace in terms: wrap the term in double quotations.
+ */
+ matchesAll : function( terms ){
+ var model = this;
+ // break the terms up by whitespace and filter out the empty strings
+ terms = terms.match( /(".*"|\w*=".*"|\S*)/g ).filter( function( s ){ return !!s; });
+ return _.all( terms, function( term ){
+ term = term.replace( /"/g, '' );
+ return model.matches( term );
+ });
+ }
+
+} );
+
+//==============================================================================
+/** @class (HDA) model for a Galaxy dataset
+ * related to a history.
+ * @name HistoryDatasetAssociation
+ *
+ * @augments HistoryContent
+ * @borrows LoggableMixin#logger as #logger
+ * @borrows LoggableMixin#log as #log
+ * @constructs
+ */
+var HistoryDatasetAssociation = HistoryContent.extend(
/** @lends HistoryDatasetAssociation.prototype */{
///** logger used to record this.log messages, commonly set to console */
@@ -27,6 +167,7 @@
history_id : null,
// often used with tagging
model_class : 'HistoryDatasetAssociation',
+ history_content_type : 'dataset',
hid : 0,
// ---whereas these are Dataset related/inherited
@@ -58,13 +199,6 @@
annotation : ''
},
- /** fetch location of this HDA's history in the api */
- urlRoot: galaxy_config.root + 'api/histories/',
- /** full url spec. for this HDA */
- url : function(){
- return this.urlRoot + this.get( 'history_id' ) + '/contents/' + this.get( 'id' );
- },
-
/** controller urls assoc. with this HDA */
urls : function(){
var id = this.get( 'id' );
@@ -122,29 +256,6 @@
return ( this.get( 'deleted' ) || this.get( 'purged' ) );
},
- /** based on show_deleted, show_hidden (gen. from the container control),
- * would this ds show in the list of ds's?
- * @param {Boolean} show_deleted are we showing deleted hdas?
- * @param {Boolean} show_hidden are we showing hidden hdas?
- */
- isVisible : function( show_deleted, show_hidden ){
- var isVisible = true;
- if( ( !show_deleted )
- && ( this.get( 'deleted' ) || this.get( 'purged' ) ) ){
- isVisible = false;
- }
- if( ( !show_hidden )
- && ( !this.get( 'visible' ) ) ){
- isVisible = false;
- }
- return isVisible;
- },
-
- /** the more common alias of visible */
- hidden : function(){
- return !this.get( 'visible' );
- },
-
/** Is this HDA in a 'ready' state; where 'Ready' states are states where no
* processing (for the ds) is left to do on the server.
*/
@@ -257,79 +368,6 @@
tag : 'tags'
},
- /** search the attribute with key attrKey for the string searchFor; T/F if found */
- searchAttribute : function( attrKey, searchFor ){
- var attrVal = this.get( attrKey );
- //console.debug( 'searchAttribute', attrKey, attrVal, searchFor );
- // bail if empty searchFor or unsearchable values
- if( !searchFor
- || ( attrVal === undefined || attrVal === null ) ){
- return false;
- }
- // pass to sep. fn for deep search of array attributes
- if( _.isArray( attrVal ) ){ return this._searchArrayAttribute( attrVal, searchFor ); }
- return ( attrVal.toString().toLowerCase().indexOf( searchFor.toLowerCase() ) !== -1 );
- },
-
- /** deep(er) search for array attributes; T/F if found */
- _searchArrayAttribute : function( array, searchFor ){
- //console.debug( '_searchArrayAttribute', array, searchFor );
- searchFor = searchFor.toLowerCase();
- //precondition: searchFor has already been validated as non-empty string
- //precondition: assumes only 1 level array
- //TODO: could possibly break up searchFor more (CSV...)
- return _.any( array, function( elem ){
- return ( elem.toString().toLowerCase().indexOf( searchFor.toLowerCase() ) !== -1 );
- });
- },
-
- /** search all searchAttributes for the string searchFor,
- * returning a list of keys of attributes that contain searchFor
- */
- search : function( searchFor ){
- var model = this;
- return _.filter( this.searchAttributes, function( key ){
- return model.searchAttribute( key, searchFor );
- });
- },
-
- /** alias of search, but returns a boolean; accepts attribute specifiers where
- * the attributes searched can be narrowed to a single attribute using
- * the form: matches( 'genome_build=hg19' )
- * (the attribute keys allowed can also be aliases to the true attribute key;
- * see searchAliases above)
- * @param {String} term plain text or ATTR_SPECIFIER sep. key=val pair
- * @returns {Boolean} was term found in (any) attribute(s)
- */
- matches : function( term ){
- var ATTR_SPECIFIER = '=',
- split = term.split( ATTR_SPECIFIER );
- // attribute is specified - search only that
- if( split.length >= 2 ){
- var attrKey = split[0];
- attrKey = this.searchAliases[ attrKey ] || attrKey;
- return this.searchAttribute( attrKey, split[1] );
- }
- // no attribute is specified - search all attributes in searchAttributes
- return !!this.search( term ).length;
- },
-
- /** an implicit AND search for all terms; IOW, an hda must match all terms given
- * where terms is a whitespace separated value string.
- * e.g. given terms of: 'blah bler database=hg19'
- * an HDA would have to have attributes containing blah AND bler AND a genome_build == hg19
- * To include whitespace in terms: wrap the term in double quotations.
- */
- matchesAll : function( terms ){
- var model = this;
- // break the terms up by whitespace and filter out the empty strings
- terms = terms.match( /(".*"|\w*=".*"|\S*)/g ).filter( function( s ){ return !!s; });
- return _.all( terms, function( term ){
- term = term.replace( /"/g, '' );
- return model.matches( term );
- });
- },
-
// ........................................................................ misc
/** String representation */
toString : function(){
@@ -398,6 +436,7 @@
//==============================================================================
/** @class Backbone collection of (HDA) models
+ * TODO: Rename HistoryContentCollection
*
* @borrows LoggableMixin#logger as #logger
* @borrows LoggableMixin#log as #log
@@ -405,7 +444,13 @@
*/
var HDACollection = Backbone.Collection.extend( baseMVC.LoggableMixin ).extend(
/** @lends HDACollection.prototype */{
- model : HistoryDatasetAssociation,
+ model : function( attrs, options ) {
+ if( attrs.history_content_type == "dataset" ) {
+ return new HistoryDatasetAssociation( attrs, options );
+ } else {
+ // TODO: Handle unknown history_content_type...
+ }
+ },
///** logger used to record this.log messages, commonly set to console */
//// comment this out to suppress log output
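The collection's model-as-factory function is what allows HDACollection to build a different model class per history_content_type: as the changeset relies on, Backbone adds whatever instance the function returns. Purely as a sketch, the dispatch might grow as below once a collection model lands; HistoryDatasetCollectionAssociation is still hypothetical here, and the fallback branch is this sketch's own choice:

// Sketch of what HDACollection's 'model' function could grow into. Only
// HistoryDatasetAssociation and HistoryContent exist in this changeset;
// HistoryDatasetCollectionAssociation is a hypothetical future class.
var contentModelFactory = function( attrs, options ){
    switch( attrs.history_content_type ){
        case 'dataset':
            return new HistoryDatasetAssociation( attrs, options );
        case 'dataset_collection':
            return new HistoryDatasetCollectionAssociation( attrs, options );
        default:
            // fall back to the shared base model rather than silently dropping unknown content
            return new HistoryContent( attrs, options );
    }
};
// usage: var HDACollection = Backbone.Collection.extend({ model : contentModelFactory });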
https://bitbucket.org/galaxy/galaxy-central/commits/8d58146fdd4c/
Changeset: 8d58146fdd4c
User: jmchilton
Date: 2014-05-06 15:54:30
Summary: Refactor backbone views to allow different kinds of history contents.
This makes room for a forthcoming HistoryDatasetCollectionAssociation model that will be emitted from the history contents API. The same comment as in the previous changeset applies: this is a potentially crude way to share duplicated functionality, and a better design would be most welcome.
Affected #: 4 files
diff -r 9fc72733a583de0b40c36507eea7568fd5acbdfe -r 8d58146fdd4c9cd46e63804b64959a150554ff30 static/scripts/mvc/dataset/hda-base.js
--- a/static/scripts/mvc/dataset/hda-base.js
+++ b/static/scripts/mvc/dataset/hda-base.js
@@ -4,28 +4,167 @@
"utils/localization"
], function( hdaModel, baseMVC, _l ){
/* global Backbone */
-//==============================================================================
-/** @class Read only view for HistoryDatasetAssociation.
- * @name HDABaseView
+
+/** @class Read only view for history content views to extend.
+ * @name HistoryContentBaseView
*
* @augments Backbone.View
* @borrows LoggableMixin#logger as #logger
* @borrows LoggableMixin#log as #log
* @constructs
*/
-var HDABaseView = Backbone.View.extend( baseMVC.LoggableMixin ).extend(
+var HistoryContentBaseView = Backbone.View.extend( baseMVC.LoggableMixin ).extend(
+/** @lends HistoryContentBaseView.prototype */{
+ tagName : "div",
+ fxSpeed : 'fast',
+
+ _queueNewRender : function( $newRender, fade ) {
+ fade = ( fade === undefined )?( true ):( fade );
+ var view = this;
+
+ // fade the old render out (if desired)
+ if( fade ){
+ $( view ).queue( function( next ){ this.$el.fadeOut( view.fxSpeed, next ); });
+ }
+ // empty the old render, update to any new HDA state, swap in the new render contents, handle multi-select
+ $( view ).queue( function( next ){
+ this.$el.empty()
+ .attr( 'class', view.className ).addClass( 'state-' + view.model.get( 'state' ) )
+ .append( $newRender.children() );
+ if( this.selectable ){ this.showSelector( 0 ); }
+ next();
+ });
+ // fade the new in
+ if( fade ){
+ $( view ).queue( function( next ){ this.$el.fadeIn( view.fxSpeed, next ); });
+ }
+ // trigger an event to know we're ready
+ $( view ).queue( function( next ){
+ this.trigger( 'rendered', view );
+ if( this.model.inReadyState() ){
+ this.trigger( 'rendered:ready', view );
+ }
+ if( this.draggable ){ this.draggableOn(); }
+ next();
+ });
+ },
+
+ /** Show or hide the body/details of history content.
+ * note: if the model does not have detailed data, fetch that data before showing the body
+ * @param {Event} event the event that triggered this (@link HDABaseView#events)
+ * @param {Boolean} expanded if true, expand; if false, collapse
+ * @fires body-expanded when a body has been expanded
+ * @fires body-collapsed when a body has been collapsed
+ */
+ toggleBodyVisibility : function( event, expand ){
+ // bail (with propagation) if keydown and not space or enter
+ var KEYCODE_SPACE = 32, KEYCODE_RETURN = 13;
+ if( event && ( event.type === 'keydown' )
+ && !( event.keyCode === KEYCODE_SPACE || event.keyCode === KEYCODE_RETURN ) ){
+ return true;
+ }
+
+ var $body = this.$el.find( '.dataset-body' );
+ expand = ( expand === undefined )?( !$body.is( ':visible' ) ):( expand );
+ if( expand ){
+ this.expandBody();
+ } else {
+ this.collapseBody();
+ }
+ return false;
+ },
+
+ // ......................................................................... selection
+ /** display a (fa-icon) checkbox on the left of the hda that fires events when checked
+ * Note: this also hides the primary actions
+ */
+ showSelector : function(){
+ // make sure selected state is represented properly
+ if( this.selected ){
+ this.select( null, true );
+ }
+
+ this.selectable = true;
+ this.trigger( 'selectable', true, this );
+
+ this.$( '.dataset-primary-actions' ).hide();
+ this.$( '.dataset-selector' ).show();
+ },
+
+ /** remove the selection checkbox */
+ hideSelector : function(){
+ // reverse the process from showSelect
+ this.selectable = false;
+ this.trigger( 'selectable', false, this );
+
+ this.$( '.dataset-selector' ).hide();
+ this.$( '.dataset-primary-actions' ).show();
+ },
+
+ toggleSelector : function(){
+ if( !this.$el.find( '.dataset-selector' ).is( ':visible' ) ){
+ this.showSelector();
+ } else {
+ this.hideSelector();
+ }
+ },
+
+ /** event handler for selection (also programmatic selection)
+ */
+ select : function( event ){
+ // switch icon, set selected, and trigger event
+ this.$el.find( '.dataset-selector span' )
+ .removeClass( 'fa-square-o' ).addClass( 'fa-check-square-o' );
+ if( !this.selected ){
+ this.trigger( 'selected', this );
+ this.selected = true;
+ }
+ return false;
+ },
+
+ /** event handler for clearing selection (also programmatic deselection)
+ */
+ deselect : function( event ){
+ // switch icon, set selected, and trigger event
+ this.$el.find( '.dataset-selector span' )
+ .removeClass( 'fa-check-square-o' ).addClass( 'fa-square-o' );
+ if( this.selected ){
+ this.trigger( 'de-selected', this );
+ this.selected = false;
+ }
+ return false;
+ },
+
+ toggleSelect : function( event ){
+ if( this.selected ){
+ this.deselect( event );
+ } else {
+ this.select( event );
+ }
+ },
+
+});
+
+
+//==============================================================================
+/** @class Read only view for HistoryDatasetAssociation.
+ * @name HDABaseView
+ *
+ * @augments HistoryContentBaseView
+ * @borrows LoggableMixin#logger as #logger
+ * @borrows LoggableMixin#log as #log
+ * @constructs
+ */
+var HDABaseView = HistoryContentBaseView.extend(
/** @lends HDABaseView.prototype */{
///** logger used to record this.log messages, commonly set to console */
//// comment this out to suppress log output
//logger : console,
- tagName : "div",
className : "dataset hda history-panel-hda",
id : function(){ return 'hda-' + this.model.get( 'id' ); },
- fxSpeed : 'fast',
-
// ......................................................................... set up
/** Set up the view, cache url templates, bind listeners
* @param {Object} attributes
@@ -90,9 +229,6 @@
* @returns {Object} this HDABaseView
*/
render : function( fade ){
- fade = ( fade === undefined )?( true ):( fade );
- var view = this;
-
//HACK: hover exit doesn't seem to be called on prev. tooltips when RE-rendering - so: no tooltip hide
// handle that here by removing previous view's tooltips
this.$el.find("[title]").tooltip( "destroy" );
@@ -101,32 +237,9 @@
this.urls = this.model.urls();
var $newRender = this._buildNewRender();
- // fade the old render out (if desired)
- if( fade ){
- $( view ).queue( function( next ){ this.$el.fadeOut( view.fxSpeed, next ); });
- }
- // empty the old render, update to any new HDA state, swap in the new render contents, handle multi-select
- $( view ).queue( function( next ){
- this.$el.empty()
- .attr( 'class', view.className ).addClass( 'state-' + view.model.get( 'state' ) )
- .append( $newRender.children() );
- if( this.selectable ){ this.showSelector( 0 ); }
- next();
- });
- // fade the new in
- if( fade ){
- $( view ).queue( function( next ){ this.$el.fadeIn( view.fxSpeed, next ); });
- }
- // trigger an event to know we're ready
- $( view ).queue( function( next ){
- this.trigger( 'rendered', view );
- if( this.model.inReadyState() ){
- this.trigger( 'rendered:ready', view );
- }
- if( this.draggable ){ this.draggableOn(); }
- next();
- });
+ this._queueNewRender( $newRender, fade );
return this;
+
},
_buildNewRender : function(){
@@ -434,31 +547,6 @@
'click .dataset-selector' : 'toggleSelect'
},
- /** Show or hide the body/details of an HDA.
- * note: if the model does not have detailed data, fetch that data before showing the body
- * @param {Event} event the event that triggered this (@link HDABaseView#events)
- * @param {Boolean} expanded if true, expand; if false, collapse
- * @fires body-expanded when a body has been expanded
- * @fires body-collapsed when a body has been collapsed
- */
- toggleBodyVisibility : function( event, expand ){
- // bail (with propagation) if keydown and not space or enter
- var KEYCODE_SPACE = 32, KEYCODE_RETURN = 13;
- if( event && ( event.type === 'keydown' )
- && !( event.keyCode === KEYCODE_SPACE || event.keyCode === KEYCODE_RETURN ) ){
- return true;
- }
-
- var $body = this.$el.find( '.dataset-body' );
- expand = ( expand === undefined )?( !$body.is( ':visible' ) ):( expand );
- if( expand ){
- this.expandBody();
- } else {
- this.collapseBody();
- }
- return false;
- },
-
/** Render and show the full, detailed body of this view including extra data and controls.
* @fires body-expanded when a body has been expanded
*/
@@ -500,75 +588,6 @@
});
},
- // ......................................................................... selection
- /** display a (fa-icon) checkbox on the left of the hda that fires events when checked
- * Note: this also hides the primary actions
- */
- showSelector : function(){
- // make sure selected state is represented properly
- if( this.selected ){
- this.select( null, true );
- }
-
- this.selectable = true;
- this.trigger( 'selectable', true, this );
-
- this.$( '.dataset-primary-actions' ).hide();
- this.$( '.dataset-selector' ).show();
- },
-
- /** remove the selection checkbox */
- hideSelector : function(){
- // reverse the process from showSelect
- this.selectable = false;
- this.trigger( 'selectable', false, this );
-
- this.$( '.dataset-selector' ).hide();
- this.$( '.dataset-primary-actions' ).show();
- },
-
- toggleSelector : function(){
- if( !this.$el.find( '.dataset-selector' ).is( ':visible' ) ){
- this.showSelector();
- } else {
- this.hideSelector();
- }
- },
-
- /** event handler for selection (also programmatic selection)
- */
- select : function( event ){
- // switch icon, set selected, and trigger event
- this.$el.find( '.dataset-selector span' )
- .removeClass( 'fa-square-o' ).addClass( 'fa-check-square-o' );
- if( !this.selected ){
- this.trigger( 'selected', this );
- this.selected = true;
- }
- return false;
- },
-
- /** event handler for clearing selection (also programmatic deselection)
- */
- deselect : function( event ){
- // switch icon, set selected, and trigger event
- this.$el.find( '.dataset-selector span' )
- .removeClass( 'fa-check-square-o' ).addClass( 'fa-square-o' );
- if( this.selected ){
- this.trigger( 'de-selected', this );
- this.selected = false;
- }
- return false;
- },
-
- toggleSelect : function( event ){
- if( this.selected ){
- this.deselect( event );
- } else {
- this.select( event );
- }
- },
-
// ......................................................................... drag/drop
draggableOn : function(){
this.draggable = true;
@@ -793,5 +812,6 @@
//==============================================================================
return {
+ HistoryContentBaseView : HistoryContentBaseView,
HDABaseView : HDABaseView
};});
diff -r 9fc72733a583de0b40c36507eea7568fd5acbdfe -r 8d58146fdd4c9cd46e63804b64959a150554ff30 static/scripts/mvc/history/annotated-history-panel.js
--- a/static/scripts/mvc/history/annotated-history-panel.js
+++ b/static/scripts/mvc/history/annotated-history-panel.js
@@ -88,7 +88,7 @@
/** attach an hdaView to the panel
* In this override, wrap the hdaView in a table row and cell, adding a 2nd cell for the hda annotation
*/
- attachHdaView : function( hdaView, $whereTo ){
+ attachContentView : function( hdaView, $whereTo ){
$whereTo = $whereTo || this.$el;
// build a row around the dataset with the std hdaView in the first cell and the annotation in the next
var stateClass = _.find( hdaView.el.classList, function( c ){ return ( /^state\-/ ).test( c ); }),
diff -r 9fc72733a583de0b40c36507eea7568fd5acbdfe -r 8d58146fdd4c9cd46e63804b64959a150554ff30 static/scripts/mvc/history/history-panel.js
--- a/static/scripts/mvc/history/history-panel.js
+++ b/static/scripts/mvc/history/history-panel.js
@@ -256,10 +256,10 @@
} // otherwise, the hdaView rendering should handle it
},
- /** Create an HDA view for the given HDA (but leave attachment for addHdaView above)
+ /** Create an HDA view for the given HDA (but leave attachment for addContentView above)
* @param {HistoryDatasetAssociation} hda
*/
- _createHdaView : function( hda ){
+ _createContentView : function( hda ){
var hdaId = hda.get( 'id' ),
hdaView = new this.HDAViewClass({
model : hda,
diff -r 9fc72733a583de0b40c36507eea7568fd5acbdfe -r 8d58146fdd4c9cd46e63804b64959a150554ff30 static/scripts/mvc/history/readonly-history-panel.js
--- a/static/scripts/mvc/history/readonly-history-panel.js
+++ b/static/scripts/mvc/history/readonly-history-panel.js
@@ -419,7 +419,7 @@
// bind events from the model's hda collection
// note: don't listen to the hdas for errors, history will pass that to us
//this.model.hdas.on( 'reset', this.addAll, this );
- this.model.hdas.on( 'add', this.addHdaView, this );
+ this.model.hdas.on( 'add', this.addContentView, this );
// on a model error - bounce it up to the panel and remove it from the model
this.model.on( 'error error:hdas', function( model, xhr, options, msg ){
@@ -591,13 +591,13 @@
visibleHdas.each( function( hda ){
// render it (NOTE: reverse order, newest on top (prepend))
var hdaId = hda.get( 'id' ),
- hdaView = panel._createHdaView( hda );
+ hdaView = panel._createContentView( hda );
newHdaViews[ hdaId ] = hdaView;
// persist selection
if( _.contains( panel.selectedHdaIds, hdaId ) ){
hdaView.selected = true;
}
- panel.attachHdaView( hdaView.render(), $whereTo );
+ panel.attachContentView( hdaView.render(), $whereTo );
});
}
this.hdaViews = newHdaViews;
@@ -608,7 +608,7 @@
/** Create an HDA view for the given HDA and set up listeners (but leave attachment for addHdaView)
* @param {HistoryDatasetAssociation} hda
*/
- _createHdaView : function( hda ){
+ _createContentView : function( hda ){
var hdaId = hda.get( 'id' ),
hdaView = new this.HDAViewClass({
model : hda,
@@ -642,7 +642,7 @@
},
/** attach an hdaView to the panel */
- attachHdaView : function( hdaView, $whereTo ){
+ attachContentView : function( hdaView, $whereTo ){
$whereTo = $whereTo || this.$el;
var $datasetsList = this.$datasetsList( $whereTo );
$datasetsList.prepend( hdaView.$el );
@@ -652,7 +652,7 @@
/** Add an hda view to the panel for the given hda
* @param {HistoryDatasetAssociation} hda
*/
- addHdaView : function( hda ){
+ addContentView : function( hda ){
this.log( 'add.' + this, hda );
var panel = this;
@@ -672,11 +672,11 @@
}
},
function createAndPrepend( next ){
- var hdaView = panel._createHdaView( hda );
+ var hdaView = panel._createContentView( hda );
panel.hdaViews[ hda.id ] = hdaView;
hdaView.render().$el.hide();
panel.scrollToTop();
- panel.attachHdaView( hdaView );
+ panel.attachContentView( hdaView );
hdaView.$el.slideDown( panel.fxSpeed );
}
]);
https://bitbucket.org/galaxy/galaxy-central/commits/14deb9452f87/
Changeset: 14deb9452f87
User: jmchilton
Date: 2014-05-06 15:54:30
Summary: Rename historyPanel.refreshHdas to refreshContents.
Affected #: 5 files
diff -r 8d58146fdd4c9cd46e63804b64959a150554ff30 -r 14deb9452f8796079005009fd4a652285a036d32 static/scripts/mvc/history/readonly-history-panel.js
--- a/static/scripts/mvc/history/readonly-history-panel.js
+++ b/static/scripts/mvc/history/readonly-history-panel.js
@@ -686,7 +686,7 @@
//TODO: removeHdaView?
/** convenience alias to the model. Updates the hda list only (not the history) */
- refreshHdas : function( detailIds, options ){
+ refreshContents : function( detailIds, options ){
if( this.model ){
return this.model.refresh( detailIds, options );
}
diff -r 8d58146fdd4c9cd46e63804b64959a150554ff30 -r 14deb9452f8796079005009fd4a652285a036d32 static/scripts/mvc/upload/upload-view.js
--- a/static/scripts/mvc/upload/upload-view.js
+++ b/static/scripts/mvc/upload/upload-view.js
@@ -378,7 +378,7 @@
this._updateScreen();
// update galaxy history
- Galaxy.currHistoryPanel.refreshHdas();
+ Galaxy.currHistoryPanel.refreshContents();
},
// error
diff -r 8d58146fdd4c9cd46e63804b64959a150554ff30 -r 14deb9452f8796079005009fd4a652285a036d32 templates/base/base_panels.mako
--- a/templates/base/base_panels.mako
+++ b/templates/base/base_panels.mako
@@ -208,7 +208,7 @@
} );
// show the dataset we created above in the history panel
- Galaxy && Galaxy.currHistoryPanel && Galaxy.currHistoryPanel.refreshHdas();
+ Galaxy && Galaxy.currHistoryPanel && Galaxy.currHistoryPanel.refreshContents();
if (upload_error == true) {
return false;
diff -r 8d58146fdd4c9cd46e63804b64959a150554ff30 -r 14deb9452f8796079005009fd4a652285a036d32 templates/webapps/galaxy/tool_executed.mako
--- a/templates/webapps/galaxy/tool_executed.mako
+++ b/templates/webapps/galaxy/tool_executed.mako
@@ -10,7 +10,7 @@
// refresh the history panel to include any new datasets created by the tool
if( top.Galaxy && top.Galaxy.currHistoryPanel ){
- top.Galaxy.currHistoryPanel.refreshHdas();
+ top.Galaxy.currHistoryPanel.refreshContents();
}
%if trans.user:
diff -r 8d58146fdd4c9cd46e63804b64959a150554ff30 -r 14deb9452f8796079005009fd4a652285a036d32 templates/webapps/galaxy/workflow/run_complete.mako
--- a/templates/webapps/galaxy/workflow/run_complete.mako
+++ b/templates/webapps/galaxy/workflow/run_complete.mako
@@ -25,6 +25,6 @@
<script type="text/javascript">
if( top.Galaxy && top.Galaxy.currHistoryPanel ){
- top.Galaxy.currHistoryPanel.refreshHdas();
+ top.Galaxy.currHistoryPanel.refreshContents();
}
</script>
https://bitbucket.org/galaxy/galaxy-central/commits/430df6281c5e/
Changeset: 430df6281c5e
User: jmchilton
Date: 2014-05-06 15:54:30
Summary: Add simple module versions of the validation helpers in the base controller.
Also delegate the controller's base methods to the module to eliminate code duplication.
Affected #: 2 files
diff -r 14deb9452f8796079005009fd4a652285a036d32 -r 430df6281c5eee2303176d7ee40649dec0f1f37a lib/galaxy/util/validation.py
--- /dev/null
+++ b/lib/galaxy/util/validation.py
@@ -0,0 +1,36 @@
+""" Module for validation of incoming inputs.
+
+TODO: Refactor BaseController references to similar methods to use this module.
+"""
+from galaxy import exceptions
+from galaxy.util.sanitize_html import sanitize_html
+
+
+def validate_and_sanitize_basestring( key, val ):
+ if not isinstance( val, basestring ):
+ raise exceptions.RequestParameterInvalidException( '%s must be a string or unicode: %s'
+ % ( key, str( type( val ) ) ) )
+ return unicode( sanitize_html( val, 'utf-8', 'text/html' ), 'utf-8' )
+
+
+def validate_and_sanitize_basestring_list( key, val ):
+ try:
+ assert isinstance( val, list )
+ return [ unicode( sanitize_html( t, 'utf-8', 'text/html' ), 'utf-8' ) for t in val ]
+ except ( AssertionError, TypeError ):
+ raise exceptions.RequestParameterInvalidException( '%s must be a list of strings: %s'
+ % ( key, str( type( val ) ) ) )
+
+
+def validate_boolean( key, val ):
+ if not isinstance( val, bool ):
+ raise exceptions.RequestParameterInvalidException( '%s must be a boolean: %s'
+ % ( key, str( type( val ) ) ) )
+ return val
+
+
+#TODO:
+#def validate_integer( self, key, val, min, max ):
+#def validate_float( self, key, val, min, max ):
+#def validate_number( self, key, val, min, max ):
+#def validate_genome_build( self, key, val ):
diff -r 14deb9452f8796079005009fd4a652285a036d32 -r 430df6281c5eee2303176d7ee40649dec0f1f37a lib/galaxy/web/base/controller.py
--- a/lib/galaxy/web/base/controller.py
+++ b/lib/galaxy/web/base/controller.py
@@ -43,6 +43,7 @@
from galaxy.util.json import to_json_string
from galaxy.workflow.modules import ToolModule
from galaxy.workflow.steps import attach_ordered_steps
+from galaxy.util import validation
log = logging.getLogger( __name__ )
@@ -183,30 +184,14 @@
# incoming param validation
# should probably be in sep. serializer class/object _used_ by controller
def validate_and_sanitize_basestring( self, key, val ):
- if not isinstance( val, basestring ):
- raise exceptions.RequestParameterInvalidException( '%s must be a string or unicode: %s'
- %( key, str( type( val ) ) ) )
- return unicode( sanitize_html( val, 'utf-8', 'text/html' ), 'utf-8' )
+ return validation.validate_and_sanitize_basestring( key, val )
def validate_and_sanitize_basestring_list( self, key, val ):
- try:
- assert isinstance( val, list )
- return [ unicode( sanitize_html( t, 'utf-8', 'text/html' ), 'utf-8' ) for t in val ]
- except ( AssertionError, TypeError ), err:
- raise exceptions.RequestParameterInvalidException( '%s must be a list of strings: %s'
- %( key, str( type( val ) ) ) )
+ return validation.validate_and_sanitize_basestring_list( key, val )
def validate_boolean( self, key, val ):
- if not isinstance( val, bool ):
- raise exceptions.RequestParameterInvalidException( '%s must be a boolean: %s'
- %( key, str( type( val ) ) ) )
- return val
+ return validation.validate_boolean( key, val )
- #TODO:
- #def validate_integer( self, key, val, min, max ):
- #def validate_float( self, key, val, min, max ):
- #def validate_number( self, key, val, min, max ):
- #def validate_genome_build( self, key, val ):
Root = BaseController
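
For orientation, an illustrative sketch (not part of this changeset) of how controller or API code might call the new galaxy.util.validation helpers directly, assuming a Galaxy environment where the module added above is importable; update_history_flags() is a hypothetical wrapper invented for this example:

    # Illustrative only -- assumes the galaxy.util.validation module from the diff
    # above is importable; update_history_flags() is a hypothetical helper.
    from galaxy.util import validation

    def update_history_flags( payload ):
        # Each helper returns the validated (and, for strings, sanitized) value
        # or raises galaxy.exceptions.RequestParameterInvalidException.
        changed = {}
        if 'name' in payload:
            changed[ 'name' ] = validation.validate_and_sanitize_basestring( 'name', payload[ 'name' ] )
        if 'tags' in payload:
            changed[ 'tags' ] = validation.validate_and_sanitize_basestring_list( 'tags', payload[ 'tags' ] )
        if 'deleted' in payload:
            changed[ 'deleted' ] = validation.validate_boolean( 'deleted', payload[ 'deleted' ] )
        return changed

The controller's existing validate_* methods remain as thin wrappers (see the controller.py hunk above), so current callers keep working while new code can import the module directly.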
https://bitbucket.org/galaxy/galaxy-central/commits/228599d3714a/
Changeset: 228599d3714a
User: jmchilton
Date: 2014-05-06 15:54:30
Summary: Extract base class from DataToolParameter to be shared with DataCollectionToolParameter.
Affected #: 1 file
diff -r 430df6281c5eee2303176d7ee40649dec0f1f37a -r 228599d3714a70c3c165dca0d010b5c0dbca7d40 lib/galaxy/tools/parameters/basic.py
--- a/lib/galaxy/tools/parameters/basic.py
+++ b/lib/galaxy/tools/parameters/basic.py
@@ -1548,25 +1548,33 @@
pass
-class DataToolParameter( ToolParameter ):
- # TODO, Nate: Make sure the following unit tests appropriately test the dataset security
- # components. Add as many additional tests as necessary.
- """
- Parameter that takes on one (or many) or a specific set of values.
+class BaseDataToolParameter( ToolParameter ):
- TODO: There should be an alternate display that allows single selects to be
- displayed as radio buttons and multiple selects as a set of checkboxes
+ def __init__( self, tool, elem, trans ):
+ super(BaseDataToolParameter, self).__init__( tool, elem )
- TODO: The following must be fixed to test correctly for the new security_check tag in
- the DataToolParameter ( the last test below is broken ) Nate's next pass at the dataset
- security stuff will dramatically alter this anyway.
- """
+ def _get_history( self, trans, history=None ):
+ class_name = self.__class__.__name__
+ assert trans is not None, "%s requires a trans" % class_name
+ if history is None:
+ history = trans.get_history()
+ assert history is not None, "%s requires a history" % class_name
+ return history
- def __init__( self, tool, elem, trans=None):
- ToolParameter.__init__( self, tool, elem )
- # Add metadata validator
- if not string_as_bool( elem.get( 'no_validation', False ) ):
- self.validators.append( validation.MetadataValidator() )
+ def _ensure_selection( self, field ):
+ set_selected = field.get_selected( return_label=True, return_value=True, multi=False ) is not None
+ # Ensure that an item is always selected
+ if self.optional:
+ if set_selected:
+ field.add_option( "Selection is Optional", 'None', False )
+ else:
+ field.add_option( "Selection is Optional", 'None', True )
+ elif not set_selected and bool( field.options ):
+ # Select the last item
+ a, b, c = field.options[-1]
+ field.options[-1] = a, b, True
+
+ def _datatypes_registery( self, trans, tool ):
# Find datatypes_registry
if tool is None:
if trans:
@@ -1580,6 +1588,11 @@
datatypes_registry.load_datatypes()
else:
datatypes_registry = tool.app.datatypes_registry
+ return datatypes_registry
+
+ def _parse_formats( self, trans, tool, elem ):
+ datatypes_registry = self._datatypes_registery( trans, tool )
+
# Build tuple of classes for supported data formats
formats = []
self.extensions = elem.get( 'format', 'data' ).split( "," )
@@ -1587,7 +1600,8 @@
for extension in normalized_extensions:
formats.append( datatypes_registry.get_datatype_by_extension( extension ) )
self.formats = formats
- self.multiple = string_as_bool( elem.get( 'multiple', False ) )
+
+ def _parse_options( self, elem ):
# TODO: Enhance dynamic options for DataToolParameters. Currently,
# only the special case key='build' of type='data_meta' is
# a valid filter
@@ -1603,6 +1617,30 @@
#this behavior needs to be entirely reworked (in a backwards compatible manner)
self.options_filter_attribute = options.get( 'options_filter_attribute', None )
self.is_dynamic = self.options is not None
+
+
+class DataToolParameter( BaseDataToolParameter ):
+ # TODO, Nate: Make sure the following unit tests appropriately test the dataset security
+ # components. Add as many additional tests as necessary.
+ """
+ Parameter that takes on one (or many) or a specific set of values.
+
+ TODO: There should be an alternate display that allows single selects to be
+ displayed as radio buttons and multiple selects as a set of checkboxes
+
+ TODO: The following must be fixed to test correctly for the new security_check tag in
+ the DataToolParameter ( the last test below is broken ) Nate's next pass at the dataset
+ security stuff will dramatically alter this anyway.
+ """
+
+ def __init__( self, tool, elem, trans=None):
+ super(DataToolParameter, self).__init__( tool, elem, trans )
+ # Add metadata validator
+ if not string_as_bool( elem.get( 'no_validation', False ) ):
+ self.validators.append( validation.MetadataValidator() )
+ self._parse_formats( trans, tool, elem )
+ self.multiple = string_as_bool( elem.get( 'multiple', False ) )
+ self._parse_options( elem )
# Load conversions required for the dataset input
self.conversions = []
for conv_elem in elem.findall( "conversion" ):
@@ -1824,27 +1862,6 @@
ref = ref()
return ref
- def _get_history( self, trans, history=None ):
- class_name = self.__class__.__name__
- assert trans is not None, "%s requires a trans" % class_name
- if history is None:
- history = trans.get_history()
- assert history is not None, "%s requires a history" % class_name
- return history
-
- def _ensure_selection( self, field ):
- set_selected = field.get_selected( return_label=True, return_value=True, multi=False ) is not None
- # Ensure than an item is always selected
- if self.optional:
- if set_selected:
- field.add_option( "Selection is Optional", 'None', False )
- else:
- field.add_option( "Selection is Optional", 'None', True )
- elif not set_selected and bool( field.options ):
- # Select the last item
- a, b, c = field.options[-1]
- field.options[-1] = a, b, True
-
class HiddenDataToolParameter( HiddenToolParameter, DataToolParameter ):
"""
https://bitbucket.org/galaxy/galaxy-central/commits/5c7c36503c60/
Changeset: 5c7c36503c60
User: jmchilton
Date: 2014-05-06 15:54:30
Summary: Dataset collections - models - basics.
Mapping, model definition, database migration, and model unit tests to support dataset collections.
Affected #: 5 files
diff -r 228599d3714a70c3c165dca0d010b5c0dbca7d40 -r 5c7c36503c604dff6c6530d452cd89c845fda55e lib/galaxy/model/__init__.py
--- a/lib/galaxy/model/__init__.py
+++ b/lib/galaxy/model/__init__.py
@@ -315,6 +315,8 @@
self.parameters = []
self.input_datasets = []
self.output_datasets = []
+ self.input_dataset_collections = []
+ self.output_dataset_collections = []
self.input_library_datasets = []
self.output_library_datasets = []
self.state = Job.states.NEW
@@ -454,6 +456,10 @@
self.input_datasets.append( JobToInputDatasetAssociation( name, dataset ) )
def add_output_dataset( self, name, dataset ):
self.output_datasets.append( JobToOutputDatasetAssociation( name, dataset ) )
+ def add_input_dataset_collection( self, name, dataset ):
+ self.input_dataset_collections.append( JobToInputDatasetCollectionAssociation( name, dataset ) )
+ def add_output_dataset_collection( self, name, dataset ):
+ self.output_dataset_collections.append( JobToOutputDatasetCollectionAssociation( name, dataset ) )
def add_input_library_dataset( self, name, dataset ):
self.input_library_datasets.append( JobToInputLibraryDatasetAssociation( name, dataset ) )
def add_output_library_dataset( self, name, dataset ):
@@ -696,6 +702,19 @@
self.name = name
self.dataset = dataset
+
+class JobToInputDatasetCollectionAssociation( object ):
+ def __init__( self, name, dataset ):
+ self.name = name
+ self.dataset = dataset
+
+
+class JobToOutputDatasetCollectionAssociation( object ):
+ def __init__( self, name, dataset_collection ):
+ self.name = name
+ self.dataset_collection = dataset_collection
+
+
class JobToInputLibraryDatasetAssociation( object ):
def __init__( self, name, dataset ):
self.name = name
@@ -706,6 +725,13 @@
self.name = name
self.dataset = dataset
+
+class ImplicitlyCreatedDatasetCollectionInput( object ):
+ def __init__( self, name, input_dataset_collection ):
+ self.name = name
+ self.input_dataset_collection = input_dataset_collection
+
+
class PostJobAction( object ):
def __init__( self, action_type, workflow_step, output_name = None, action_arguments = None):
self.action_type = action_type
@@ -907,6 +933,14 @@
self.datasets.append( dataset )
return dataset
+ def add_dataset_collection( self, history_dataset_collection, set_hid=True ):
+ if set_hid:
+ history_dataset_collection.hid = self._next_hid()
+ history_dataset_collection.history = self
+ # TODO: quota?
+ self.dataset_collections.append( history_dataset_collection )
+ return history_dataset_collection
+
def copy( self, name=None, target_user=None, activatable=False, all_datasets=False ):
"""
Return a copy of this history using the given `name` and `target_user`.
@@ -2521,6 +2555,276 @@
try: os.unlink( self.file_name )
except Exception, e: print "Failed to purge associated file (%s) from disk: %s" % ( self.file_name, e )
+
+DEFAULT_COLLECTION_NAME = "Unnamed Collection"
+
+
+class DatasetCollection( object, Dictifiable, UsesAnnotations ):
+ """
+ """
+ dict_collection_visible_keys = ( 'id', 'name', 'collection_type' )
+ dict_element_visible_keys = ( 'id', 'name', 'collection_type' )
+
+ def __init__(
+ self,
+ id=None,
+ collection_type=None,
+ ):
+ self.id = id
+ self.collection_type = collection_type
+
+ @property
+ def dataset_instances( self ):
+ instances = []
+ for element in self.elements:
+ if element.is_collection:
+ instances.extend( element.child_collection.dataset_instances )
+ else:
+ instance = element.dataset_instance
+ instances.append( instance )
+ return instances
+
+ @property
+ def state( self ):
+ # TODO: DatasetCollection state handling...
+ return 'ok'
+
+ def validate( self ):
+ if self.collection_type is None:
+ raise Exception("Each dataset collection must define a collection type.")
+
+ def __getitem__( self, key ):
+ get_by_attribute = "element_index" if isinstance( key, int ) else "element_identifier"
+ for element in self.elements:
+ if getattr( element, get_by_attribute ) == key:
+ return element
+ error_message = "Dataset collection has no %s with key %s." % ( get_by_attribute, key )
+ raise KeyError( error_message )
+
+ def set_from_dict( self, new_data ):
+ editable_keys = ( 'name', )
+ changed = {}
+
+ # unknown keys are ignored here
+ for key in [ k for k in new_data.keys() if k in editable_keys ]:
+ new_val = new_data[ key ]
+ old_val = self.__getattribute__( key )
+ if new_val == old_val:
+ continue
+
+ self.__setattr__( key, new_val )
+ changed[ key ] = new_val
+
+ return changed
+
+
+class DatasetCollectionInstance( object, HasName ):
+ """
+ """
+ def __init__(
+ self,
+ collection=None,
+ deleted=False,
+ ):
+ # Relationships
+ self.collection = collection
+ # Since deleted property is shared between history and dataset collections,
+ # it could be on either table - some places in the code however it is convient
+ # it is on instance instead of collection.
+ self.deleted = deleted
+
+ @property
+ def state( self ):
+ return self.collection.state
+
+ def display_name( self ):
+ return self.get_display_name()
+
+ def _base_to_dict( self, view ):
+ return dict(
+ id=self.id,
+ name=self.name,
+ collection_type=self.collection.collection_type,
+ type="collection", # contents type (distinguished from file or folder (in case of library))
+ )
+
+ def set_from_dict( self, new_data ):
+ """
+ Set object attributes to the values in dictionary new_data limiting
+ to only those keys in dict_element_visible_keys.
+
+ Returns a dictionary of the keys, values that have been changed.
+ """
+ # precondition: keys are proper, values are parsed and validated
+ changed = self.collection.set_from_dict( new_data )
+
+ # unknown keys are ignored here
+ for key in [ k for k in new_data.keys() if k in self.editable_keys ]:
+ new_val = new_data[ key ]
+ old_val = self.__getattribute__( key )
+ if new_val == old_val:
+ continue
+
+ self.__setattr__( key, new_val )
+ changed[ key ] = new_val
+
+ return changed
+
+
+class HistoryDatasetCollectionAssociation( DatasetCollectionInstance, Dictifiable ):
+ """ Associates a DatasetCollection with a History. """
+ editable_keys = ( 'name', 'deleted', 'visible' )
+
+ def __init__(
+ self,
+ id=None,
+ hid=None,
+ collection=None,
+ history=None,
+ name=None,
+ deleted=False,
+ visible=True,
+ copied_from_history_dataset_collection_association=None,
+ implicit_output_name=None,
+ implicit_input_collections=[],
+ ):
+ super( HistoryDatasetCollectionAssociation, self ).__init__(
+ collection=collection,
+ deleted=deleted,
+ )
+ self.id = id
+ self.hid = hid
+ self.history = history
+ self.name = name
+ self.visible = visible
+ self.copied_from_history_dataset_collection_association = copied_from_history_dataset_collection_association
+ self.implicit_output_name = implicit_output_name
+ self.implicit_input_collections = implicit_input_collections
+
+ @property
+ def history_content_type( self ):
+ return "dataset_collection"
+
+ def to_dict( self, view='collection' ):
+ dict_value = dict(
+ hid=self.hid,
+ history_id=self.history.id,
+ history_content_type=self.history_content_type,
+ visible=self.visible,
+ deleted=self.deleted,
+ **self._base_to_dict(view=view)
+ )
+ return dict_value
+
+ def add_implicit_input_collection( self, name, history_dataset_collection ):
+ self.implicit_input_collections.append( ImplicitlyCreatedDatasetCollectionInput( name, history_dataset_collection) )
+
+ def find_implicit_input_collection( self, name ):
+ matching_collection = None
+ for implicit_input_collection in self.implicit_input_collections:
+ if implicit_input_collection.name == name:
+ matching_collection = implicit_input_collection.input_dataset_collection
+ break
+ return matching_collection
+
+
+class LibraryDatasetCollectionAssociation( DatasetCollectionInstance, Dictifiable ):
+ """ Associates a DatasetCollection with a library folder. """
+ editable_keys = ( 'name', 'deleted' )
+
+ def __init__(
+ self,
+ id=None,
+ collection=None,
+ name=None,
+ deleted=False,
+ folder=None,
+ ):
+ super(LibraryDatasetCollectionAssociation, self).__init__(
+ collection=collection,
+ deleted=deleted,
+ )
+ self.id = id
+ self.folder = folder
+ self.name = name
+
+ def to_dict( self, view='collection' ):
+ dict_value = dict(
+ folder_id=self.folder.id,
+ **self._base_to_dict(view=view)
+ )
+ return dict_value
+
+
+class DatasetCollectionElement( object, Dictifiable ):
+ """ Associates a DatasetInstance (hda or ldda) with a DatasetCollection. """
+ # actionable dataset id needs to be available via API...
+ dict_collection_visible_keys = ( 'id', 'element_type', 'element_index', 'element_identifier' )
+ dict_element_visible_keys = ( 'id', 'element_type', 'element_index', 'element_identifier' )
+
+ def __init__(
+ self,
+ id=None,
+ collection=None,
+ element=None,
+ element_index=None,
+ element_identifier=None,
+ ):
+ if isinstance(element, HistoryDatasetAssociation):
+ self.hda = element
+ #self.instance_type = 'hda'
+ elif isinstance(element, LibraryDatasetDatasetAssociation):
+ self.ldda = element
+ #self.instance_type = 'ldda'
+ elif isinstance( element, DatasetCollection ):
+ self.child_collection = element
+ else:
+ raise AttributeError( 'Unknown element type provided: %s' % type( element ) )
+
+ self.id = id
+ self.collection = collection
+ self.element_index = element_index
+ self.element_identifier = element_identifier or str(element_index)
+
+ @property
+ def element_type( self ):
+ if self.hda:
+ return "hda"
+ elif self.ldda:
+ return "ldda"
+ elif self.child_collection:
+ #TODO: Rename element_type to element_type.
+ return "dataset_collection"
+ else:
+ raise Exception( "Unknown element instance type" )
+
+ @property
+ def is_collection( self ):
+ return self.element_type == "dataset_collection"
+
+ @property
+ def element_object( self ):
+ if self.hda:
+ return self.hda
+ elif self.ldda:
+ return self.ldda
+ elif self.child_collection:
+ return self.child_collection
+ else:
+ raise Exception( "Unknown element instance type" )
+
+ @property
+ def dataset_instance( self ):
+ element_object = self.element_object
+ if isinstance( element_object, DatasetCollection ):
+ raise AttributeError( "Nested collection has no associated dataset_instance." )
+ return element_object
+
+ @property
+ def dataset( self ):
+ return self.dataset_instance.dataset
+
+
class Event( object ):
def __init__( self, message=None, history=None, user=None, galaxy_session=None ):
self.history = history
@@ -3525,6 +3829,15 @@
class VisualizationTagAssociation ( ItemTagAssociation ):
pass
+
+class HistoryDatasetCollectionTagAssociation( ItemTagAssociation ):
+ pass
+
+
+class LibraryDatasetCollectionTagAssociation( ItemTagAssociation ):
+ pass
+
+
class ToolTagAssociation( ItemTagAssociation ):
def __init__( self, id=None, user=None, tool_id=None, tag_id=None, user_tname=None, value=None ):
self.id = id
@@ -3555,6 +3868,15 @@
class VisualizationAnnotationAssociation( object ):
pass
+
+class HistoryDatasetCollectionAnnotationAssociation( object ):
+ pass
+
+
+class LibraryDatasetCollectionAnnotationAssociation( object ):
+ pass
+
+
# Item rating classes.
class ItemRatingAssociation( object ):
@@ -3588,6 +3910,17 @@
def set_item( self, visualization ):
self.visualization = visualization
+
+class HistoryDatasetCollectionRatingAssociation( ItemRatingAssociation ):
+ def set_item( self, dataset_collection ):
+ self.dataset_collection = dataset_collection
+
+
+class LibraryDatasetCollectionRatingAssociation( ItemRatingAssociation ):
+ def set_item( self, dataset_collection ):
+ self.dataset_collection = dataset_collection
+
+
#Data Manager Classes
class DataManagerHistoryAssociation( object ):
def __init__( self, id=None, history=None, user=None ):
diff -r 228599d3714a70c3c165dca0d010b5c0dbca7d40 -r 5c7c36503c604dff6c6530d452cd89c845fda55e lib/galaxy/model/mapping.py
--- a/lib/galaxy/model/mapping.py
+++ b/lib/galaxy/model/mapping.py
@@ -428,6 +428,18 @@
Column( "dataset_id", Integer, ForeignKey( "history_dataset_association.id" ), index=True ),
Column( "name", String(255) ) )
+model.JobToInputDatasetCollectionAssociation.table = Table( "job_to_input_dataset_collection", metadata,
+ Column( "id", Integer, primary_key=True ),
+ Column( "job_id", Integer, ForeignKey( "job.id" ), index=True ),
+ Column( "dataset_collection_id", Integer, ForeignKey( "history_dataset_collection_association.id" ), index=True ),
+ Column( "name", Unicode(255) ) )
+
+model.JobToOutputDatasetCollectionAssociation.table = Table( "job_to_output_dataset_collection", metadata,
+ Column( "id", Integer, primary_key=True ),
+ Column( "job_id", Integer, ForeignKey( "job.id" ), index=True ),
+ Column( "dataset_collection_id", Integer, ForeignKey( "history_dataset_collection_association.id" ), index=True ),
+ Column( "name", Unicode(255) ) )
+
model.JobToInputLibraryDatasetAssociation.table = Table( "job_to_input_library_dataset", metadata,
Column( "id", Integer, primary_key=True ),
Column( "job_id", Integer, ForeignKey( "job.id" ), index=True ),
@@ -440,6 +452,12 @@
Column( "ldda_id", Integer, ForeignKey( "library_dataset_dataset_association.id" ), index=True ),
Column( "name", String(255) ) )
+model.ImplicitlyCreatedDatasetCollectionInput.table = Table( "implicitly_created_dataset_collection_inputs", metadata,
+ Column( "id", Integer, primary_key=True ),
+ Column( "dataset_collection_id", Integer, ForeignKey( "history_dataset_collection_association.id" ), index=True ),
+ Column( "input_dataset_collection_id", Integer, ForeignKey( "history_dataset_collection_association.id" ), index=True ),
+ Column( "name", Unicode(255) ) )
+
model.JobExternalOutputMetadata.table = Table( "job_external_output_metadata", metadata,
Column( "id", Integer, primary_key=True ),
Column( "job_id", Integer, ForeignKey( "job.id" ), index=True ),
@@ -580,6 +598,46 @@
Column( "socket", Integer ),
Column( "params", JSONType ) )
+model.DatasetCollection.table = Table( "dataset_collection", metadata,
+ Column( "id", Integer, primary_key=True ),
+ Column( "collection_type", Unicode(255), nullable=False ),
+ Column( "create_time", DateTime, default=now ),
+ Column( "update_time", DateTime, default=now, onupdate=now ),
+)
+
+model.HistoryDatasetCollectionAssociation.table = Table( "history_dataset_collection_association", metadata,
+ Column( "id", Integer, primary_key=True ),
+ Column( "collection_id", Integer, ForeignKey( "dataset_collection.id" ), index=True ),
+ Column( "history_id", Integer, ForeignKey( "history.id" ), index=True ),
+ Column( "name", TrimmedString( 255 ) ),
+ Column( "hid", Integer ),
+ Column( "visible", Boolean ),
+ Column( "deleted", Boolean, default=False ),
+ Column( "copied_from_history_dataset_collection_association_id", Integer, ForeignKey( "history_dataset_collection_association.id" ), nullable=True ),
+ Column( "implicit_output_name", Unicode(255), nullable=True ),
+)
+
+model.LibraryDatasetCollectionAssociation.table = Table( "library_dataset_collection_association", metadata,
+ Column( "id", Integer, primary_key=True ),
+ Column( "collection_id", Integer, ForeignKey( "dataset_collection.id" ), index=True ),
+ Column( "folder_id", Integer, ForeignKey( "library_folder.id" ), index=True ),
+ Column( "name", TrimmedString( 255 ) ),
+ Column( "deleted", Boolean, default=False ),
+)
+
+model.DatasetCollectionElement.table = Table( "dataset_collection_element", metadata,
+ Column( "id", Integer, primary_key=True ),
+ # Parent collection id describing what collection this element belongs to.
+ Column( "dataset_collection_id", Integer, ForeignKey( "dataset_collection.id" ), index=True, nullable=False ),
+ # Child defined by this association - HDA, LDDA, or another dataset association...
+ Column( "hda_id", Integer, ForeignKey( "history_dataset_association.id" ), index=True, nullable=True ),
+ Column( "ldda_id", Integer, ForeignKey( "library_dataset_dataset_association.id" ), index=True, nullable=True ),
+ Column( "child_collection_id", Integer, ForeignKey( "dataset_collection.id" ), index=True, nullable=True ),
+ # Element index and identifier to define this parent-child relationship.
+ Column( "element_index", Integer ),
+ Column( "element_identifier", Unicode(255), ),
+)
+
model.Event.table = Table( "event", metadata,
Column( "id", Integer, primary_key=True ),
Column( "create_time", DateTime, default=now ),
@@ -998,6 +1056,24 @@
Column( "value", TrimmedString(255), index=True),
Column( "user_value", TrimmedString(255), index=True) )
+model.HistoryDatasetCollectionTagAssociation.table = Table( "history_dataset_collection_tag_association", metadata,
+ Column( "id", Integer, primary_key=True ),
+ Column( "history_dataset_collection_id", Integer, ForeignKey( "history_dataset_collection_association.id" ), index=True ),
+ Column( "tag_id", Integer, ForeignKey( "tag.id" ), index=True ),
+ Column( "user_id", Integer, ForeignKey( "galaxy_user.id" ), index=True ),
+ Column( "user_tname", TrimmedString(255), index=True),
+ Column( "value", TrimmedString(255), index=True),
+ Column( "user_value", TrimmedString(255), index=True) )
+
+model.LibraryDatasetCollectionTagAssociation.table = Table( "library_dataset_collection_tag_association", metadata,
+ Column( "id", Integer, primary_key=True ),
+ Column( "library_dataset_collection_id", Integer, ForeignKey( "library_dataset_collection_association.id" ), index=True ),
+ Column( "tag_id", Integer, ForeignKey( "tag.id" ), index=True ),
+ Column( "user_id", Integer, ForeignKey( "galaxy_user.id" ), index=True ),
+ Column( "user_tname", TrimmedString(255), index=True),
+ Column( "value", TrimmedString(255), index=True),
+ Column( "user_value", TrimmedString(255), index=True) )
+
model.ToolTagAssociation.table = Table( "tool_tag_association", metadata,
Column( "id", Integer, primary_key=True ),
Column( "tool_id", TrimmedString(255), index=True ),
@@ -1045,6 +1121,18 @@
Column( "user_id", Integer, ForeignKey( "galaxy_user.id" ), index=True ),
Column( "annotation", TEXT, index=True) )
+model.HistoryDatasetCollectionAnnotationAssociation.table = Table( "history_dataset_collection_annotation_association", metadata,
+ Column( "id", Integer, primary_key=True ),
+ Column( "history_dataset_collection_id", Integer, ForeignKey( "history_dataset_collection_association.id" ), index=True ),
+ Column( "user_id", Integer, ForeignKey( "galaxy_user.id" ), index=True ),
+ Column( "annotation", TEXT, index=True) )
+
+model.LibraryDatasetCollectionAnnotationAssociation.table = Table( "library_dataset_collection_annotation_association", metadata,
+ Column( "id", Integer, primary_key=True ),
+ Column( "library_dataset_collection_id", Integer, ForeignKey( "library_dataset_collection_association.id" ), index=True ),
+ Column( "user_id", Integer, ForeignKey( "galaxy_user.id" ), index=True ),
+ Column( "annotation", TEXT, index=True) )
+
# Ratings tables.
model.HistoryRatingAssociation.table = Table( "history_rating_association", metadata,
Column( "id", Integer, primary_key=True ),
@@ -1076,6 +1164,18 @@
Column( "user_id", Integer, ForeignKey( "galaxy_user.id" ), index=True ),
Column( "rating", Integer, index=True) )
+model.HistoryDatasetCollectionRatingAssociation.table = Table( "history_dataset_collection_rating_association", metadata,
+ Column( "id", Integer, primary_key=True ),
+ Column( "history_dataset_collection_id", Integer, ForeignKey( "history_dataset_collection_association.id" ), index=True ),
+ Column( "user_id", Integer, ForeignKey( "galaxy_user.id" ), index=True ),
+ Column( "rating", Integer, index=True) )
+
+model.LibraryDatasetCollectionRatingAssociation.table = Table( "library_dataset_collection_rating_association", metadata,
+ Column( "id", Integer, primary_key=True ),
+ Column( "library_dataset_collection_id", Integer, ForeignKey( "library_dataset_collection_association.id" ), index=True ),
+ Column( "user_id", Integer, ForeignKey( "galaxy_user.id" ), index=True ),
+ Column( "rating", Integer, index=True) )
+
# User tables.
model.UserPreference.table = Table( "user_preference", metadata,
@@ -1342,9 +1442,33 @@
properties=dict( galaxy_sessions=relation( model.GalaxySessionToHistoryAssociation ),
datasets=relation( model.HistoryDatasetAssociation, backref="history", order_by=asc(model.HistoryDatasetAssociation.table.c.hid) ),
exports=relation( model.JobExportHistoryArchive, primaryjoin=( model.JobExportHistoryArchive.table.c.history_id == model.History.table.c.id ), order_by=desc( model.JobExportHistoryArchive.table.c.id ) ),
- active_datasets=relation( model.HistoryDatasetAssociation, primaryjoin=( ( model.HistoryDatasetAssociation.table.c.history_id == model.History.table.c.id ) & not_( model.HistoryDatasetAssociation.table.c.deleted ) ), order_by=asc( model.HistoryDatasetAssociation.table.c.hid ), viewonly=True ),
- visible_datasets=relation( model.HistoryDatasetAssociation, primaryjoin=( ( model.HistoryDatasetAssociation.table.c.history_id == model.History.table.c.id ) & not_( model.HistoryDatasetAssociation.table.c.deleted ) & model.HistoryDatasetAssociation.table.c.visible ),
- order_by=asc( model.HistoryDatasetAssociation.table.c.hid ), viewonly=True ),
+ active_datasets=relation(
+ model.HistoryDatasetAssociation,
+ primaryjoin=(
+ ( model.HistoryDatasetAssociation.table.c.history_id == model.History.table.c.id ) & not_( model.HistoryDatasetAssociation.table.c.deleted )
+ ),
+ order_by=asc( model.HistoryDatasetAssociation.table.c.hid ),
+ viewonly=True
+ ),
+ active_dataset_collections=relation(
+ model.HistoryDatasetCollectionAssociation,
+ primaryjoin=(
+ ( model.HistoryDatasetCollectionAssociation.table.c.history_id == model.History.table.c.id ) & not_( model.HistoryDatasetCollectionAssociation.table.c.deleted )
+ ),
+ order_by=asc( model.HistoryDatasetCollectionAssociation.table.c.id ),
+ viewonly=True,
+ ), # TODO:orderbyhid
+ visible_datasets=relation(
+ model.HistoryDatasetAssociation,
+ primaryjoin=( ( model.HistoryDatasetAssociation.table.c.history_id == model.History.table.c.id ) & not_( model.HistoryDatasetAssociation.table.c.deleted ) & model.HistoryDatasetAssociation.table.c.visible ),
+ order_by=asc( model.HistoryDatasetAssociation.table.c.hid ),
+ viewonly=True,
+ ),
+ visible_dataset_collections=relation(
+ model.HistoryDatasetCollectionAssociation,
+ primaryjoin=( ( model.HistoryDatasetCollectionAssociation.table.c.history_id == model.History.table.c.id ) & not_( model.HistoryDatasetCollectionAssociation.table.c.deleted ) & model.HistoryDatasetCollectionAssociation.table.c.visible ),
+ order_by=asc( model.HistoryDatasetCollectionAssociation.table.c.id ),
+ viewonly=True,
+ ),
tags=relation( model.HistoryTagAssociation, order_by=model.HistoryTagAssociation.table.c.id, backref="histories" ),
annotations=relation( model.HistoryAnnotationAssociation, order_by=model.HistoryAnnotationAssociation.table.c.id, backref="histories" ),
ratings=relation( model.HistoryRatingAssociation, order_by=model.HistoryRatingAssociation.table.c.id, backref="histories" ) )
@@ -1604,6 +1728,17 @@
job=relation( model.Job ), dataset=relation(
model.HistoryDatasetAssociation, lazy=False ) ) )
+mapper( model.JobToInputDatasetCollectionAssociation,
+ model.JobToInputDatasetCollectionAssociation.table, properties=dict(
+ job=relation( model.Job ), dataset_collection=relation(
+ model.HistoryDatasetCollectionAssociation, lazy=False,
+ backref="dependent_jobs" ) ) )
+
+mapper( model.JobToOutputDatasetCollectionAssociation,
+ model.JobToOutputDatasetCollectionAssociation.table, properties=dict(
+ job=relation( model.Job ), dataset_collection=relation(
+ model.HistoryDatasetCollectionAssociation, lazy=False ) ) )
+
mapper( model.JobToInputLibraryDatasetAssociation,
model.JobToInputLibraryDatasetAssociation.table, properties=dict(
job=relation( model.Job ), dataset=relation(
@@ -1635,6 +1770,15 @@
task=relation( model.Task, backref="numeric_metrics" ),
)
+simple_mapping(
+ model.ImplicitlyCreatedDatasetCollectionInput,
+ input_dataset_collection=relation(
+ model.HistoryDatasetCollectionAssociation,
+ primaryjoin=( ( model.HistoryDatasetCollectionAssociation.table.c.id == model.ImplicitlyCreatedDatasetCollectionInput.table.c.input_dataset_collection_id ) ),
+ # backref="implicitly_created_dataset_collections",
+ ),
+)
+
mapper( model.JobParameter, model.JobParameter.table )
mapper( model.JobExternalOutputMetadata, model.JobExternalOutputMetadata.table,
@@ -1688,6 +1832,61 @@
mapper( model.TransferJob, model.TransferJob.table,
properties = {} )
+
+simple_mapping( model.DatasetCollection,
+ elements=relation(
+ model.DatasetCollectionElement,
+ primaryjoin=( model.DatasetCollection.table.c.id == model.DatasetCollectionElement.table.c.dataset_collection_id ),
+ remote_side=[ model.DatasetCollectionElement.table.c.dataset_collection_id ],
+ backref="collection",
+ order_by=model.DatasetCollectionElement.table.c.element_index,
+ ),
+)
+
+simple_mapping( model.HistoryDatasetCollectionAssociation,
+ collection=relation( model.DatasetCollection ),
+ history=relation( model.History, backref='dataset_collections' ),
+ copied_from_history_dataset_collection_association=relation(
+ model.HistoryDatasetCollectionAssociation,
+ primaryjoin=( model.HistoryDatasetCollectionAssociation.table.c.copied_from_history_dataset_collection_association_id == model.HistoryDatasetCollectionAssociation.table.c.id ),
+ remote_side=[model.HistoryDatasetCollectionAssociation.table.c.id],
+ uselist=False ),
+ copied_to_history_dataset_collection_associations=relation(
+ model.HistoryDatasetCollectionAssociation,
+ primaryjoin=( model.HistoryDatasetCollectionAssociation.table.c.copied_from_history_dataset_collection_association_id == model.HistoryDatasetCollectionAssociation.table.c.id ) ),
+ implicit_input_collections=relation(
+ model.ImplicitlyCreatedDatasetCollectionInput,
+ primaryjoin=( ( model.HistoryDatasetCollectionAssociation.table.c.id == model.ImplicitlyCreatedDatasetCollectionInput.table.c.dataset_collection_id ) ),
+ backref="dataset_collection",
+ ),
+ tags=relation( model.HistoryDatasetCollectionTagAssociation, order_by=model.HistoryDatasetCollectionTagAssociation.table.c.id, backref='dataset_collections' ),
+ annotations=relation( model.HistoryDatasetCollectionAnnotationAssociation, order_by=model.HistoryDatasetCollectionAnnotationAssociation.table.c.id, backref="dataset_collections" ),
+ ratings=relation( model.HistoryDatasetCollectionRatingAssociation, order_by=model.HistoryDatasetCollectionRatingAssociation.table.c.id, backref="dataset_collections" ),
+)
+
+simple_mapping( model.LibraryDatasetCollectionAssociation,
+ collection=relation( model.DatasetCollection ),
+ folder=relation( model.LibraryFolder, backref='dataset_collections' ),
+ tags=relation( model.LibraryDatasetCollectionTagAssociation, order_by=model.LibraryDatasetCollectionTagAssociation.table.c.id, backref='dataset_collections' ),
+ annotations=relation( model.LibraryDatasetCollectionAnnotationAssociation, order_by=model.LibraryDatasetCollectionAnnotationAssociation.table.c.id, backref="dataset_collections" ),
+ ratings=relation( model.LibraryDatasetCollectionRatingAssociation, order_by=model.LibraryDatasetCollectionRatingAssociation.table.c.id, backref="dataset_collections" ),
+)
+
+simple_mapping( model.DatasetCollectionElement,
+ hda=relation(
+ model.HistoryDatasetAssociation,
+ primaryjoin=( model.DatasetCollectionElement.table.c.hda_id == model.HistoryDatasetAssociation.table.c.id )
+ ),
+ ldda=relation(
+ model.LibraryDatasetDatasetAssociation,
+ primaryjoin=( model.DatasetCollectionElement.table.c.ldda_id == model.LibraryDatasetDatasetAssociation.table.c.id )
+ ),
+ child_collection=relation(
+ model.DatasetCollection,
+ primaryjoin=( model.DatasetCollectionElement.table.c.child_collection_id == model.DatasetCollection.table.c.id ),
+ ),
+)
+
mapper( model.Event, model.Event.table,
properties=dict( history=relation( model.History ),
galaxy_session=relation( model.GalaxySession ),
@@ -1848,6 +2047,10 @@
tag_mapping( model.VisualizationTagAssociation, "tagged_visualizations" )
+tag_mapping( model.HistoryDatasetCollectionTagAssociation, "tagged_history_dataset_collections" )
+
+tag_mapping( model.LibraryDatasetCollectionTagAssociation, "tagged_library_dataset_collections" )
+
tag_mapping( model.ToolTagAssociation, "tagged_tools" )
@@ -1868,6 +2071,10 @@
annotation_mapping( model.VisualizationAnnotationAssociation, visualization=model.Visualization )
+annotation_mapping( model.HistoryDatasetCollectionAnnotationAssociation, history_dataset_collection=model.HistoryDatasetCollectionAssociation )
+
+annotation_mapping( model.LibraryDatasetCollectionAnnotationAssociation, library_dataset_collection=model.LibraryDatasetCollectionAssociation )
+
# Rating tables.
def rating_mapping( rating_class, **kwds ):
@@ -1884,6 +2091,10 @@
rating_mapping( model.VisualizationRatingAssociation, visualizaiton=model.Visualization )
+rating_mapping( model.HistoryDatasetCollectionRatingAssociation, history_dataset_collection=model.HistoryDatasetCollectionAssociation )
+
+rating_mapping( model.LibraryDatasetCollectionRatingAssociation, libary_dataset_collection=model.LibraryDatasetCollectionAssociation )
+
#Data Manager tables
mapper( model.DataManagerHistoryAssociation, model.DataManagerHistoryAssociation.table,
properties=dict( history=relation( model.History ),
@@ -1915,6 +2126,8 @@
class_mapper(model.HistoryDatasetAssociation).add_property( "creating_job_associations", relation( model.JobToOutputDatasetAssociation ) )
class_mapper(model.LibraryDatasetDatasetAssociation).add_property( "creating_job_associations", relation( model.JobToOutputLibraryDatasetAssociation ) )
+class_mapper(model.HistoryDatasetCollectionAssociation).add_property( "creating_job_associations", relation( model.JobToOutputDatasetCollectionAssociation ) )
+
# Helper methods.
def db_next_hid( self ):
diff -r 228599d3714a70c3c165dca0d010b5c0dbca7d40 -r 5c7c36503c604dff6c6530d452cd89c845fda55e lib/galaxy/model/migrate/versions/0120_dataset_collections.py
--- /dev/null
+++ b/lib/galaxy/model/migrate/versions/0120_dataset_collections.py
@@ -0,0 +1,171 @@
+"""
+Migration script for tables related to dataset collections.
+"""
+
+from sqlalchemy import *
+from sqlalchemy.orm import *
+from migrate import *
+from migrate.changeset import *
+from galaxy.model.custom_types import *
+
+import datetime
+now = datetime.datetime.utcnow
+
+import logging
+log = logging.getLogger( __name__ )
+
+metadata = MetaData()
+
+DatasetCollection_table = Table( "dataset_collection", metadata,
+ Column( "id", Integer, primary_key=True ),
+ Column( "collection_type", Unicode(255), nullable=False, ),
+ Column( "create_time", DateTime, default=now ),
+ Column( "update_time", DateTime, default=now, onupdate=now ),
+)
+
+HistoryDatasetCollectionAssociation_table = Table( "history_dataset_collection_association", metadata,
+ Column( "id", Integer, primary_key=True ),
+ Column( "collection_id", Integer, ForeignKey( "dataset_collection.id" ), index=True ),
+ Column( "history_id", Integer, ForeignKey( "history.id" ), index=True ),
+ Column( "hid", Integer ),
+ Column( "name", TrimmedString( 255 ) ),
+ Column( "deleted", Boolean, default=False ),
+ Column( "visible", Boolean, default=True ),
+ Column( "copied_from_history_dataset_collection_association_id", Integer, ForeignKey( "history_dataset_collection_association.id" ), nullable=True ),
+ Column( "implicit_output_name", Unicode(255), nullable=True ),
+)
+
+LibraryDatasetCollectionAssociation_table = Table( "library_dataset_collection_association", metadata,
+ Column( "id", Integer, primary_key=True ),
+ Column( "collection_id", Integer, ForeignKey( "dataset_collection.id" ), index=True ),
+ Column( "name", TrimmedString( 255 ) ),
+ Column( "deleted", Boolean, default=False ),
+ Column( "folder_id", Integer, ForeignKey( "library_folder.id" ), index=True ),
+)
+
+DatasetCollectionElement_table = Table( "dataset_collection_element", metadata,
+ Column( "id", Integer, primary_key=True ),
+ Column( "dataset_collection_id", Integer, ForeignKey( "dataset_collection.id" ), index=True, nullable=False ),
+ Column( "hda_id", Integer, ForeignKey( "history_dataset_association.id" ), index=True, nullable=True ),
+ Column( "ldda_id", Integer, ForeignKey( "library_dataset_dataset_association.id" ), index=True, nullable=True ),
+ Column( "child_collection_id", Integer, ForeignKey( "dataset_collection.id" ), index=True, nullable=True ),
+ Column( "element_index", Integer, nullable=False ),
+ Column( "element_identifier", Unicode(255), nullable=False ),
+)
+
+HistoryDatasetCollectionAnnotationAssociation_table = Table( "history_dataset_collection_annotation_association", metadata,
+ Column( "id", Integer, primary_key=True ),
+ Column( "history_dataset_collection_id", Integer, ForeignKey( "history_dataset_collection_association.id" ), index=True ),
+ Column( "user_id", Integer, ForeignKey( "galaxy_user.id" ), index=True ),
+ Column( "annotation", TEXT )
+)
+
+LibraryDatasetCollectionAnnotationAssociation_table = Table( "library_dataset_collection_annotation_association", metadata,
+ Column( "id", Integer, primary_key=True ),
+ Column( "library_dataset_collection_id", Integer, ForeignKey( "library_dataset_collection_association.id" ), index=True ),
+ Column( "user_id", Integer, ForeignKey( "galaxy_user.id" ), index=True ),
+ Column( "annotation", TEXT )
+)
+
+HistoryDatasetCollectionRatingAssociation_table = Table( "history_dataset_collection_rating_association", metadata,
+ Column( "id", Integer, primary_key=True ),
+ Column( "history_dataset_collection_id", Integer, ForeignKey( "history_dataset_collection_association.id" ), index=True ),
+ Column( "user_id", Integer, ForeignKey( "galaxy_user.id" ), index=True ),
+ Column( "rating", Integer, index=True)
+)
+
+LibraryDatasetCollectionRatingAssociation_table = Table( "library_dataset_collection_rating_association", metadata,
+ Column( "id", Integer, primary_key=True ),
+ Column( "library_dataset_collection_id", Integer, ForeignKey( "library_dataset_collection_association.id" ), index=True ),
+ Column( "user_id", Integer, ForeignKey( "galaxy_user.id" ), index=True ),
+ Column( "rating", Integer, index=True)
+)
+
+HistoryDatasetCollectionTagAssociation_table = Table( "history_dataset_collection_tag_association", metadata,
+ Column( "id", Integer, primary_key=True ),
+ Column( "history_dataset_collection_id", Integer, ForeignKey( "history_dataset_collection_association.id" ), index=True ),
+ Column( "tag_id", Integer, ForeignKey( "tag.id" ), index=True ),
+ Column( "user_id", Integer, ForeignKey( "galaxy_user.id" ), index=True ),
+ Column( "user_tname", Unicode(255), index=True),
+ Column( "value", Unicode(255), index=True),
+ Column( "user_value", Unicode(255), index=True)
+)
+
+LibraryDatasetCollectionTagAssociation_table = Table( "library_dataset_collection_tag_association", metadata,
+ Column( "id", Integer, primary_key=True ),
+ Column( "library_dataset_collection_id", Integer, ForeignKey( "library_dataset_collection_association.id" ), index=True ),
+ Column( "tag_id", Integer, ForeignKey( "tag.id" ), index=True ),
+ Column( "user_id", Integer, ForeignKey( "galaxy_user.id" ), index=True ),
+ Column( "user_tname", Unicode(255), index=True),
+ Column( "value", Unicode(255), index=True),
+ Column( "user_value", Unicode(255), index=True)
+)
+
+JobToInputDatasetCollectionAssociation_table = Table( "job_to_input_dataset_collection", metadata,
+ Column( "id", Integer, primary_key=True ),
+ Column( "job_id", Integer, ForeignKey( "job.id" ), index=True ),
+ Column( "dataset_collection_id", Integer, ForeignKey( "history_dataset_collection_association.id" ), index=True ),
+ Column( "name", Unicode(255) )
+)
+
+JobToOutputDatasetCollectionAssociation_table = Table( "job_to_output_dataset_collection", metadata,
+ Column( "id", Integer, primary_key=True ),
+ Column( "job_id", Integer, ForeignKey( "job.id" ), index=True ),
+ Column( "dataset_collection_id", Integer, ForeignKey( "history_dataset_collection_association.id" ), index=True ),
+ Column( "name", Unicode(255) )
+)
+
+ImplicitlyCreatedDatasetCollectionInput_table = Table( "implicitly_created_dataset_collection_inputs", metadata,
+ Column( "id", Integer, primary_key=True ),
+ Column( "dataset_collection_id", Integer, ForeignKey( "history_dataset_collection_association.id" ), index=True ),
+ Column( "input_dataset_collection_id", Integer, ForeignKey( "history_dataset_collection_association.id" ), index=True ),
+ Column( "name", Unicode(255) ) )
+
+TABLES = [
+ DatasetCollection_table,
+ HistoryDatasetCollectionAssociation_table,
+ LibraryDatasetCollectionAssociation_table,
+ DatasetCollectionElement_table,
+ JobToInputDatasetCollectionAssociation_table,
+ JobToOutputDatasetCollectionAssociation_table,
+ ImplicitlyCreatedDatasetCollectionInput_table,
+ HistoryDatasetCollectionAnnotationAssociation_table,
+ HistoryDatasetCollectionRatingAssociation_table,
+ HistoryDatasetCollectionTagAssociation_table,
+ LibraryDatasetCollectionAnnotationAssociation_table,
+ LibraryDatasetCollectionRatingAssociation_table,
+ LibraryDatasetCollectionTagAssociation_table,
+]
+
+
+def upgrade(migrate_engine):
+ metadata.bind = migrate_engine
+ print __doc__
+ metadata.reflect()
+
+ for table in TABLES:
+ __create(table)
+
+
+def downgrade(migrate_engine):
+ metadata.bind = migrate_engine
+ metadata.reflect()
+
+ for table in TABLES:
+ __drop(table)
+
+
+def __create(table):
+ try:
+ table.create()
+ except Exception as e:
+ print str(e)
+ log.exception("Creating %s table failed: %s" % (table.name, str( e ) ) )
+
+
+def __drop(table):
+ try:
+ table.drop()
+ except Exception as e:
+ print str(e)
+ log.exception("Dropping %s table failed: %s" % (table.name, str( e ) ) )
diff -r 228599d3714a70c3c165dca0d010b5c0dbca7d40 -r 5c7c36503c604dff6c6530d452cd89c845fda55e lib/galaxy/webapps/galaxy/api/history_contents.py
--- a/lib/galaxy/webapps/galaxy/api/history_contents.py
+++ b/lib/galaxy/webapps/galaxy/api/history_contents.py
@@ -139,7 +139,7 @@
'purged': hda.purged,
'hid' : hda.hid,
'history_content_type' : hda.history_content_type,
- 'url' : url_for( 'history_content', history_id=encoded_history_id, id=encoded_id, ),
+ 'url' : url_for( 'history_content', history_id=encoded_history_id, id=encoded_id, type="dataset" ),
}
def _detailed_hda_dict( self, trans, hda ):
diff -r 228599d3714a70c3c165dca0d010b5c0dbca7d40 -r 5c7c36503c604dff6c6530d452cd89c845fda55e test/unit/test_galaxy_mapping.py
--- a/test/unit/test_galaxy_mapping.py
+++ b/test/unit/test_galaxy_mapping.py
@@ -55,6 +55,15 @@
self.persist( visualization )
persist_and_check_annotation( model.VisualizationAnnotationAssociation, visualization=visualization )
+ dataset_collection = model.DatasetCollection( collection_type="paired" )
+ history_dataset_collection = model.HistoryDatasetCollectionAssociation( collection=dataset_collection )
+ self.persist( history_dataset_collection )
+ persist_and_check_annotation( model.HistoryDatasetCollectionAnnotationAssociation, history_dataset_collection=history_dataset_collection )
+
+ library_dataset_collection = model.LibraryDatasetCollectionAssociation( collection=dataset_collection )
+ self.persist( library_dataset_collection )
+ persist_and_check_annotation( model.LibraryDatasetCollectionAnnotationAssociation, library_dataset_collection=library_dataset_collection )
+
def test_ratings( self ):
model = self.model
@@ -96,6 +105,15 @@
self.persist( visualization )
persist_and_check_rating( model.VisualizationRatingAssociation, visualization=visualization )
+ dataset_collection = model.DatasetCollection( collection_type="paired" )
+ history_dataset_collection = model.HistoryDatasetCollectionAssociation( collection=dataset_collection )
+ self.persist( history_dataset_collection )
+ persist_and_check_rating( model.HistoryDatasetCollectionRatingAssociation, history_dataset_collection=history_dataset_collection )
+
+ library_dataset_collection = model.LibraryDatasetCollectionAssociation( collection=dataset_collection )
+ self.persist( library_dataset_collection )
+ persist_and_check_rating( model.LibraryDatasetCollectionRatingAssociation, library_dataset_collection=library_dataset_collection )
+
def test_display_name( self ):
def assert_display_name_converts_to_unicode( item, name ):
@@ -161,6 +179,59 @@
visualization.user = u
tag_and_test( visualization, model.VisualizationTagAssociation, "tagged_visualizations" )
+ dataset_collection = model.DatasetCollection( collection_type="paired" )
+ history_dataset_collection = model.HistoryDatasetCollectionAssociation( collection=dataset_collection )
+ tag_and_test( history_dataset_collection, model.HistoryDatasetCollectionTagAssociation, "tagged_history_dataset_collections" )
+
+ library_dataset_collection = model.LibraryDatasetCollectionAssociation( collection=dataset_collection )
+ tag_and_test( library_dataset_collection, model.LibraryDatasetCollectionTagAssociation, "tagged_library_dataset_collections" )
+
+ def test_collections_in_histories(self):
+ model = self.model
+
+ u = model.User( email="mary(a)example.com", password="password" )
+ h1 = model.History( name="History 1", user=u)
+ d1 = model.HistoryDatasetAssociation( extension="txt", history=h1, create_dataset=True, sa_session=model.session )
+ d2 = model.HistoryDatasetAssociation( extension="txt", history=h1, create_dataset=True, sa_session=model.session )
+
+ c1 = model.DatasetCollection(collection_type="pair")
+ hc1 = model.HistoryDatasetCollectionAssociation(history=h1, collection=c1, name="HistoryCollectionTest1")
+
+ dce1 = model.DatasetCollectionElement(collection=c1, element=d1, element_identifier="left")
+ dce2 = model.DatasetCollectionElement(collection=c1, element=d2, element_identifier="right")
+
+ self.persist( u, h1, d1, d2, c1, hc1, dce1, dce2 )
+
+ loaded_dataset_collection = self.query( model.HistoryDatasetCollectionAssociation ).filter( model.HistoryDatasetCollectionAssociation.name == "HistoryCollectionTest1" ).first().collection
+ self.assertEquals(len(loaded_dataset_collection.elements), 2)
+ assert loaded_dataset_collection.collection_type == "pair"
+ assert loaded_dataset_collection[ "left" ] == dce1
+ assert loaded_dataset_collection[ "right" ] == dce2
+
+ def test_collections_in_library_folders(self):
+ model = self.model
+
+ u = model.User( email="mary2(a)example.com", password="password" )
+ lf = model.LibraryFolder( name="RootFolder" )
+ l = model.Library( name="Library1", root_folder=lf )
+ ld1 = model.LibraryDataset( )
+ ld2 = model.LibraryDataset( )
+ #self.persist( u, l, lf, ld1, ld2, expunge=False )
+
+ ldda1 = model.LibraryDatasetDatasetAssociation( extension="txt", library_dataset=ld1 )
+ ldda2 = model.LibraryDatasetDatasetAssociation( extension="txt", library_dataset=ld1 )
+ #self.persist( ld1, ld2, ldda1, ldda2, expunge=False )
+
+ c1 = model.DatasetCollection(collection_type="pair")
+ dce1 = model.DatasetCollectionElement(collection=c1, element=ldda1)
+ dce2 = model.DatasetCollectionElement(collection=c1, element=ldda2)
+ self.persist( u, l, lf, ld1, ld2, c1, ldda1, ldda2, dce1, dce2 )
+
+ # TODO:
+ #loaded_dataset_collection = self.query( model.DatasetCollection ).filter( model.DatasetCollection.name == "LibraryCollectionTest1" ).first()
+ #self.assertEquals(len(loaded_dataset_collection.datasets), 2)
+ #assert loaded_dataset_collection.collection_type == "pair"
+
def test_basic( self ):
model = self.model
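For reference, a condensed, standalone sketch of the model-level pattern the new tests above exercise; `model` is galaxy.model, and `sa_session`, `history`, `hda_left`, and `hda_right` are assumed to already exist (as they do via the test base class). This is illustrative only and not part of the changeset.

# Illustrative sketch only -- not part of the changeset above.
collection = model.DatasetCollection( collection_type="paired" )
left = model.DatasetCollectionElement( collection=collection, element=hda_left, element_identifier="left" )
right = model.DatasetCollectionElement( collection=collection, element=hda_right, element_identifier="right" )
hdca = model.HistoryDatasetCollectionAssociation( history=history, collection=collection, name="ExamplePair" )
sa_session.add_all( [ collection, left, right, hdca ] )
sa_session.flush()
# Elements are addressable by identifier on the underlying collection:
assert hdca.collection[ "left" ] is left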
https://bitbucket.org/galaxy/galaxy-central/commits/d54db279b2d6/
Changeset: d54db279b2d6
User: jmchilton
Date: 2014-05-06 15:54:30
Summary: Dataset collections - models - mapping to allow HDCA to hide HDAs under it.
This should help keep the history panel manageable, at least initially.
Affected #: 2 files
diff -r 5c7c36503c604dff6c6530d452cd89c845fda55e -r d54db279b2d693cb9c454c166ac97afd049980ce lib/galaxy/model/mapping.py
--- a/lib/galaxy/model/mapping.py
+++ b/lib/galaxy/model/mapping.py
@@ -111,6 +111,7 @@
Column( "deleted", Boolean, index=True, default=False ),
Column( "purged", Boolean, index=True, default=False ),
Column( "visible", Boolean ),
+ Column( "hidden_beneath_collection_instance_id", ForeignKey( "history_dataset_collection_association.id" ), nullable=True ),
Column( "extended_metadata_id", Integer,
ForeignKey( "extended_metadata.id" ), index=True )
)
@@ -1391,7 +1392,13 @@
extended_metadata=relation(
model.ExtendedMetadata,
primaryjoin=( ( model.HistoryDatasetAssociation.table.c.extended_metadata_id == model.ExtendedMetadata.table.c.id ) )
- )
+ ),
+ hidden_beneath_collection_instance=relation(
+ model.HistoryDatasetCollectionAssociation,
+ primaryjoin=( ( model.HistoryDatasetAssociation.table.c.hidden_beneath_collection_instance_id == model.HistoryDatasetCollectionAssociation.table.c.id ) ),
+ uselist=False,
+ backref="hidden_dataset_instances",
+ )
)
simple_mapping( model.Dataset,
diff -r 5c7c36503c604dff6c6530d452cd89c845fda55e -r d54db279b2d693cb9c454c166ac97afd049980ce lib/galaxy/model/migrate/versions/0120_dataset_collections.py
--- a/lib/galaxy/model/migrate/versions/0120_dataset_collections.py
+++ b/lib/galaxy/model/migrate/versions/0120_dataset_collections.py
@@ -121,6 +121,10 @@
Column( "input_dataset_collection_id", Integer, ForeignKey( "history_dataset_collection_association.id" ), index=True ),
Column( "name", Unicode(255) ) )
+# TODO: Find a better name for this column...
+HiddenBeneathCollection_column = Column( "hidden_beneath_collection_instance_id", Integer, ForeignKey( "history_dataset_collection_association.id" ), nullable=True )
+
+
TABLES = [
DatasetCollection_table,
HistoryDatasetCollectionAssociation_table,
@@ -146,6 +150,13 @@
for table in TABLES:
__create(table)
+ try:
+ hda_table = Table( "history_dataset_association", metadata, autoload=True )
+ HiddenBeneathCollection_column.create( hda_table )
+ except Exception as e:
+ print str(e)
+ log.exception( "Creating HDA column failed." )
+
def downgrade(migrate_engine):
metadata.bind = migrate_engine
@@ -154,6 +165,14 @@
for table in TABLES:
__drop(table)
+ try:
+ hda_table = Table( "history_dataset_association", metadata, autoload=True )
+ hidden_beneath_collection_instance_id_col = hda_table.c.hidden_beneath_collection_instance_id
+ hidden_beneath_collection_instance_id_col.drop()
+ except Exception as e:
+ print str(e)
+ log.exception( "Dropping HDA column failed." )
+
def __create(table):
try:
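A minimal sketch (not part of the changeset) of how the new column and relation might be used from application code. The helper name is invented; `sa_session`, `hda`, and `hdca` are assumed to be a bound SQLAlchemy session, an HDA, and the HistoryDatasetCollectionAssociation that wraps it.

def hide_beneath_collection( sa_session, hda, hdca ):
    # Mark the HDA as owned, for display purposes, by the collection instance.
    hda.hidden_beneath_collection_instance = hdca
    hda.visible = False
    sa_session.add( hda )
    sa_session.flush()

# The backref declared in the mapping gives the reverse lookup:
#   hdca.hidden_dataset_instances  ->  list of hidden HDAs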
https://bitbucket.org/galaxy/galaxy-central/commits/70d10977f488/
Changeset: 70d10977f488
User: jmchilton
Date: 2014-05-06 15:54:30
Summary: Dataset collections - models - add history.active_contents.
Get all active datasets and collections ordered together by hid. Used downstream, for instance, when extracting workflows from histories.
Affected #: 1 file
diff -r d54db279b2d693cb9c454c166ac97afd049980ce -r 70d10977f488488da9c004260845546f708cc564 lib/galaxy/model/__init__.py
--- a/lib/galaxy/model/__init__.py
+++ b/lib/galaxy/model/__init__.py
@@ -1079,6 +1079,12 @@
self._active_datasets_children_and_roles = query.all()
return self._active_datasets_children_and_roles
+ @property
+ def active_contents( self ):
+ """ Return all active contents ordered by hid.
+ """
+ return self.contents_iter( types=[ "dataset", "dataset_collection" ], deleted=False, visible=True )
+
def contents_iter( self, **kwds ):
"""
Fetch filtered list of contents of history.
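A short sketch of consuming the new property; `history` is assumed to be a loaded model.History and `model` is galaxy.model. Not part of the changeset.

# Illustrative only: datasets and dataset collections interleaved by hid.
for content in history.active_contents:
    if isinstance( content, model.HistoryDatasetCollectionAssociation ):
        print "%d\tcollection\t%s" % ( content.hid, content.name )
    else:
        print "%d\tdataset\t%s" % ( content.hid, content.name )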
https://bitbucket.org/galaxy/galaxy-central/commits/6e3ba71820e1/
Changeset: 6e3ba71820e1
User: jmchilton
Date: 2014-05-06 15:54:30
Summary: Dataset collections - models - history copying.
Copy history dataset collection associations when histories copied.
Affected #: 1 file
diff -r 70d10977f488488da9c004260845546f708cc564 -r 6e3ba71820e1b48644769d53399f9114af345716 lib/galaxy/model/__init__.py
--- a/lib/galaxy/model/__init__.py
+++ b/lib/galaxy/model/__init__.py
@@ -981,6 +981,19 @@
db_session.flush()
# Copy annotation.
self.copy_item_annotation( db_session, self.user, hda, target_user, new_hda )
+ # Copy history dataset collections
+ if all_datasets:
+ hdcas = self.dataset_collections
+ else:
+ hdcas = self.active_dataset_collections
+ for hdca in hdcas:
+ new_hdca = hdca.copy( )
+ new_history.add_dataset_collection( new_hdca, set_hid=False )
+ db_session.add( new_hdca )
+ db_session.flush()
+ # Copy annotation.
+ self.copy_item_annotation( db_session, self.user, hdca, target_user, new_hdca )
+
new_history.hid_counter = self.hid_counter
db_session.add( new_history )
db_session.flush()
@@ -2607,6 +2620,16 @@
error_message = "Dataset collection has no %s with key %s." % ( get_by_attribute, key )
raise KeyError( error_message )
+ def copy( self ):
+ new_collection = DatasetCollection(
+ collection_type=self.collection_type,
+ )
+ for element in self.elements:
+ element.copy_to_collection( new_collection )
+ object_session( self ).add( new_collection )
+ object_session( self ).flush()
+ return new_collection
+
def set_from_dict( self, new_data ):
+        editable_keys = ( 'name', )
changed = {}
@@ -2733,6 +2756,24 @@
break
return matching_collection
+ def copy( self ):
+ """
+ Create a copy of this history dataset collection association. Copy
+ underlying collection.
+ """
+ hdca = HistoryDatasetCollectionAssociation(
+ hid=self.hid,
+ collection=self.collection.copy(),
+ visible=self.visible,
+ deleted=self.deleted,
+ name=self.name,
+ copied_from_history_dataset_collection_association=self,
+ )
+
+ object_session( self ).add( hdca )
+ object_session( self ).flush()
+ return hdca
+
class LibraryDatasetCollectionAssociation( DatasetCollectionInstance, Dictifiable ):
""" Associates a DatasetCollection with a library folder. """
@@ -2830,6 +2871,15 @@
def dataset( self ):
return self.dataset_instance.dataset
+ def copy_to_collection( self, collection ):
+ new_element = DatasetCollectionElement(
+ element=self.element_object,
+ collection=collection,
+ element_index=self.element_index,
+ element_identifier=self.element_identifier,
+ )
+ return new_element
+
class Event( object ):
def __init__( self, message=None, history=None, user=None, galaxy_session=None ):
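A sketch of the new copy behaviour in isolation; `hdca` is assumed to be a persisted HistoryDatasetCollectionAssociation. Illustrative only.

# copy() clones the instance and its underlying collection, flushing both,
# and records the provenance link back to the original.
new_hdca = hdca.copy()
assert new_hdca.collection is not hdca.collection
assert len( new_hdca.collection.elements ) == len( hdca.collection.elements )
assert new_hdca.copied_from_history_dataset_collection_association is hdca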
https://bitbucket.org/galaxy/galaxy-central/commits/66ed5342e855/
Changeset: 66ed5342e855
User: jmchilton
Date: 2014-05-06 15:54:30
Summary: Dataset collections - infrastructure glue.
Add an API and service layer for dataset collections.
Affected #: 17 files
diff -r 6e3ba71820e1b48644769d53399f9114af345716 -r 66ed5342e8555f41621a391c1ca014a6b8bedd91 lib/galaxy/app.py
--- a/lib/galaxy/app.py
+++ b/lib/galaxy/app.py
@@ -5,6 +5,7 @@
from galaxy import config, jobs
import galaxy.model
import galaxy.security
+from galaxy import dataset_collections
import galaxy.quota
from galaxy.tags.tag_handler import GalaxyTagHandler
from galaxy.visualization.genomes import Genomes
@@ -54,6 +55,8 @@
self._configure_security()
# Tag handler
self.tag_handler = GalaxyTagHandler()
+ # Dataset Collection Plugins
+ self.dataset_collections_service = dataset_collections.DatasetCollectionsService(self)
# Genomes
self.genomes = Genomes( self )
# Data providers registry.
diff -r 6e3ba71820e1b48644769d53399f9114af345716 -r 66ed5342e8555f41621a391c1ca014a6b8bedd91 lib/galaxy/dataset_collections/__init__.py
--- /dev/null
+++ b/lib/galaxy/dataset_collections/__init__.py
@@ -0,0 +1,266 @@
+from .registry import DatasetCollectionTypesRegistry
+from .structure import get_structure
+
+from galaxy import model
+from galaxy.exceptions import MessageException
+from galaxy.exceptions import ItemAccessibilityException
+from galaxy.exceptions import RequestParameterInvalidException
+from galaxy.web.base.controller import (
+ UsesHistoryDatasetAssociationMixin,
+ UsesLibraryMixinItems,
+ UsesTagsMixin,
+)
+from galaxy.managers import hdas # TODO: Refactor all mixin use into managers.
+
+from galaxy.util import validation
+from galaxy.util import odict
+
+import logging
+log = logging.getLogger( __name__ )
+
+
+ERROR_INVALID_ELEMENTS_SPECIFICATION = "Create called with invalid parameters, must specify element identifiers."
+ERROR_NO_COLLECTION_TYPE = "Create called without specifying a collection type."
+
+
+class DatasetCollectionsService(
+ UsesHistoryDatasetAssociationMixin,
+ UsesLibraryMixinItems,
+ UsesTagsMixin,
+):
+ """
+    Abstraction for interfacing with dataset collection instances - ideally abstracts
+ out model and plugin details.
+ """
+
+ def __init__( self, app ):
+ self.type_registry = DatasetCollectionTypesRegistry( app )
+ self.model = app.model
+ self.security = app.security
+ self.hda_manager = hdas.HDAManager()
+
+ def create(
+ self,
+ trans,
+ parent, # PRECONDITION: security checks on ability to add to parent occurred during load.
+ name,
+ collection_type,
+ element_identifiers=None,
+ elements=None,
+ implicit_collection_info=None,
+ ):
+ """
+ """
+ dataset_collection = self.__create_dataset_collection(
+ trans=trans,
+ collection_type=collection_type,
+ element_identifiers=element_identifiers,
+ elements=elements,
+ )
+ if isinstance( parent, model.History ):
+ dataset_collection_instance = self.model.HistoryDatasetCollectionAssociation(
+ collection=dataset_collection,
+ name=name,
+ )
+ if implicit_collection_info:
+ for input_name, input_collection in implicit_collection_info[ "implicit_inputs" ]:
+ dataset_collection_instance.add_implicit_input_collection( input_name, input_collection )
+ dataset_collection_instance.implicit_output_name = implicit_collection_info[ "implicit_output_name" ]
+ # Handle setting hid
+ parent.add_dataset_collection( dataset_collection_instance )
+ elif isinstance( parent, model.LibraryFolder ):
+ dataset_collection_instance = self.model.LibraryDatasetCollectionAssociation(
+ collection=dataset_collection,
+ folder=parent,
+ name=name,
+ )
+ else:
+ message = "Internal logic error - create called with unknown parent type %s" % type( parent )
+ log.exception( message )
+ raise MessageException( message )
+
+ return self.__persist( dataset_collection_instance )
+
+ def __create_dataset_collection(
+ self,
+ trans,
+ collection_type,
+ element_identifiers=None,
+ elements=None,
+ ):
+ if element_identifiers is None and elements is None:
+ raise RequestParameterInvalidException( ERROR_INVALID_ELEMENTS_SPECIFICATION )
+ if not collection_type:
+ raise RequestParameterInvalidException( ERROR_NO_COLLECTION_TYPE )
+ rank_collection_type = collection_type.split( ":" )[ 0 ]
+ if elements is None:
+ if rank_collection_type != collection_type:
+ # Nested collection - recursively create collections and update identifiers.
+ self.__recursively_create_collections( trans, element_identifiers )
+ elements = self.__load_elements( trans, element_identifiers )
+ # else if elements is set, it better be an ordered dict!
+
+ type_plugin = self.__type_plugin( rank_collection_type )
+ dataset_collection = type_plugin.build_collection( elements )
+ dataset_collection.collection_type = collection_type
+ return dataset_collection
+
+ def delete( self, trans, instance_type, id ):
+ dataset_collection_instance = self.get_dataset_collection_instance( trans, instance_type, id, check_ownership=True )
+ dataset_collection_instance.deleted = True
+ trans.sa_session.add( dataset_collection_instance )
+ trans.sa_session.flush( )
+
+ def update( self, trans, instance_type, id, payload ):
+ dataset_collection_instance = self.get_dataset_collection_instance( trans, instance_type, id, check_ownership=True )
+ if trans.user is None:
+ anon_allowed_payload = {}
+ if 'deleted' in payload:
+ anon_allowed_payload[ 'deleted' ] = payload[ 'deleted' ]
+ if 'visible' in payload:
+ anon_allowed_payload[ 'visible' ] = payload[ 'visible' ]
+ payload = self._validate_and_parse_update_payload( anon_allowed_payload )
+ else:
+ payload = self._validate_and_parse_update_payload( payload )
+ changed = self._set_from_dict( trans, dataset_collection_instance, payload )
+ return changed
+
+ def _set_from_dict( self, trans, dataset_collection_instance, new_data ):
+ # Blatantly stolen from UsesHistoryDatasetAssociationMixin.set_hda_from_dict.
+
+ # send what we can down into the model
+ changed = dataset_collection_instance.set_from_dict( new_data )
+ # the rest (often involving the trans) - do here
+ if 'annotation' in new_data.keys() and trans.get_user():
+ dataset_collection_instance.add_item_annotation( trans.sa_session, trans.get_user(), dataset_collection_instance.collection, new_data[ 'annotation' ] )
+ changed[ 'annotation' ] = new_data[ 'annotation' ]
+ if 'tags' in new_data.keys() and trans.get_user():
+ self.set_tags_from_list( trans, dataset_collection_instance.collection, new_data[ 'tags' ], user=trans.user )
+
+ if changed.keys():
+ trans.sa_session.flush()
+
+ return changed
+
+ def _validate_and_parse_update_payload( self, payload ):
+ validated_payload = {}
+ for key, val in payload.items():
+ if val is None:
+ continue
+            if key in ( 'name', ):
+ val = validation.validate_and_sanitize_basestring( key, val )
+ validated_payload[ key ] = val
+ if key in ( 'deleted', 'visible' ):
+ validated_payload[ key ] = validation.validate_boolean( key, val )
+ elif key == 'tags':
+ validated_payload[ key ] = validation.validate_and_sanitize_basestring_list( key, val )
+ return validated_payload
+
+ def history_dataset_collections(self, history, query):
+ collections = history.dataset_collections
+ collection_type = query.get( "collection_type", None )
+ if collection_type:
+ collections = filter( lambda c: c.collection.collection_type == collection_type, collections )
+ return collections
+
+ def __persist( self, dataset_collection_instance ):
+ context = self.model.context
+ context.add( dataset_collection_instance )
+ context.flush()
+ return dataset_collection_instance
+
+ def __recursively_create_collections( self, trans, element_identifiers ):
+ # TODO: Optimize - don't recheck parent, reload created model, just use as is.
+ for index, element_identifier in enumerate( element_identifiers ):
+ try:
+ if not element_identifier[ "src" ] == "new_collection":
+ # not a new collection, keep moving...
+ continue
+ except KeyError:
+ # Not a dictionary, just an id of an HDA - move along.
+ continue
+
+ # element identifier is a dict with src new_collection...
+ collection_type = element_identifier.get( "collection_type", None )
+ if not collection_type:
+                raise RequestParameterInvalidException( "No collection_type defined for nested collection." )
+ collection = self.__create_dataset_collection(
+ trans=trans,
+ collection_type=collection_type,
+ element_identifiers=element_identifier[ "element_identifiers" ],
+ )
+ self.__persist( collection )
+ element_identifier[ "src" ] = "dc"
+ element_identifier[ "id" ] = trans.security.encode_id( collection.id )
+
+ return element_identifiers
+
+ def __load_elements( self, trans, element_identifiers ):
+ elements = odict.odict()
+ for element_identifier in element_identifiers:
+ elements[ element_identifier[ "name" ] ] = self.__load_element( trans, element_identifier )
+ return elements
+
+ def __load_element( self, trans, element_identifier ):
+ #if not isinstance( element_identifier, dict ):
+ # # Is allowing this to just be the id of an hda too clever? Somewhat
+ # # consistent with other API methods though.
+ # element_identifier = dict( src='hda', id=str( element_identifier ) )
+
+ # dateset_identifier is dict {src=hda|ldda, id=<encoded_id>}
+ try:
+ src_type = element_identifier.get( 'src', 'hda' )
+ except AttributeError:
+ raise MessageException( "Dataset collection element definition (%s) not dictionary-like." % element_identifier )
+ encoded_id = element_identifier.get( 'id', None )
+ if not src_type or not encoded_id:
+ raise RequestParameterInvalidException( "Problem decoding element identifier %s" % element_identifier )
+
+ if src_type == 'hda':
+ decoded_id = int( trans.app.security.decode_id( encoded_id ) )
+ element = self.hda_manager.get( trans, decoded_id, check_ownership=False )
+ elif src_type == 'ldda':
+ element = self.get_library_dataset_dataset_association( trans, encoded_id )
+ elif src_type == 'hdca':
+ # TODO: Option to copy? Force copy? Copy or allow if not owned?
+ element = self.__get_history_collection_instance( trans, encoded_id ).collection
+ # TODO: ldca.
+ elif src_type == "dc":
+ # TODO: Force only used internally during nested creation so no
+ # need to recheck security.
+ element = self.get_dataset_collection( trans, encoded_id )
+ else:
+ raise RequestParameterInvalidException( "Unknown src_type parameter supplied '%s'." % src_type )
+ return element
+
+ def __type_plugin( self, collection_type ):
+ return self.type_registry.get( collection_type )
+
+ def get_dataset_collection_instance( self, trans, instance_type, id, **kwds ):
+ """
+ """
+ if instance_type == "history":
+ return self.__get_history_collection_instance( trans, id, **kwds )
+ elif instance_type == "library":
+ return self.__get_library_collection_instance( trans, id, **kwds )
+
+ def get_dataset_collection( self, trans, encoded_id ):
+ collection_id = int( trans.app.security.decode_id( encoded_id ) )
+ collection = trans.sa_session.query( trans.app.model.DatasetCollection ).get( collection_id )
+ return collection
+
+ def __get_history_collection_instance( self, trans, id, check_ownership=False, check_accessible=True ):
+ instance_id = int( trans.app.security.decode_id( id ) )
+ collection_instance = trans.sa_session.query( trans.app.model.HistoryDatasetCollectionAssociation ).get( instance_id )
+ self.security_check( trans, collection_instance.history, check_ownership=check_ownership, check_accessible=check_accessible )
+ return collection_instance
+
+ def __get_library_collection_instance( self, trans, id, check_ownership=False, check_accessible=True ):
+ if check_ownership:
+            raise NotImplementedError( "Functionality (getting library dataset collection with ownership check) unimplemented." )
+ instance_id = int( trans.security.decode_id( id ) )
+ collection_instance = trans.sa_session.query( trans.app.model.LibraryDatasetCollectionAssociation ).get( instance_id )
+ if check_accessible:
+ if not trans.app.security_agent.can_access_library_item( trans.get_current_user_roles(), collection_instance, trans.user ):
+ raise ItemAccessibilityException( "LibraryDatasetCollectionAssociation is not accessible to the current user", type='error' )
+ return collection_instance
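A sketch of driving the new service layer directly; `trans`, `history`, `hda_left`, and `hda_right` are assumed to come from the usual request/model machinery and are not defined by this changeset.

# Illustrative only.  Create a paired collection in a history via the service.
service = trans.app.dataset_collections_service
element_identifiers = [
    dict( name="left", src="hda", id=trans.security.encode_id( hda_left.id ) ),
    dict( name="right", src="hda", id=trans.security.encode_id( hda_right.id ) ),
]
hdca = service.create(
    trans=trans,
    parent=history,          # security check on the parent is the caller's job
    name="example pair",
    collection_type="paired",
    element_identifiers=element_identifiers,
)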
diff -r 6e3ba71820e1b48644769d53399f9114af345716 -r 66ed5342e8555f41621a391c1ca014a6b8bedd91 lib/galaxy/dataset_collections/registry.py
--- /dev/null
+++ b/lib/galaxy/dataset_collections/registry.py
@@ -0,0 +1,14 @@
+from .types import list
+from .types import paired
+
+
+PLUGIN_CLASSES = [list.ListDatasetCollectionType, paired.PairedDatasetCollectionType]
+
+
+class DatasetCollectionTypesRegistry(object):
+
+ def __init__(self, app):
+ self.__plugins = dict( [ ( p.collection_type, p() ) for p in PLUGIN_CLASSES ] )
+
+ def get( self, plugin_type ):
+ return self.__plugins[ plugin_type ]
diff -r 6e3ba71820e1b48644769d53399f9114af345716 -r 66ed5342e8555f41621a391c1ca014a6b8bedd91 lib/galaxy/dataset_collections/structure.py
--- /dev/null
+++ b/lib/galaxy/dataset_collections/structure.py
@@ -0,0 +1,81 @@
+""" Module for reasoning about structure of and matching hierarchical collections of data.
+"""
+import logging
+log = logging.getLogger( __name__ )
+
+
+class Leaf( object ):
+
+ def __len__( self ):
+ return 1
+
+ @property
+ def is_leaf( self ):
+ return True
+
+leaf = Leaf()
+
+
+class Tree( object ):
+
+ def __init__( self, dataset_collection ):
+ self.collection_type = dataset_collection.collection_type
+ children = []
+ for element in dataset_collection.elements:
+ child_collection = element.child_collection
+ if child_collection:
+ children.append( ( element.element_identifier, Tree( child_collection ) ) )
+ elif element.hda:
+ children.append( ( element.element_identifier, leaf ) )
+
+ self.children = children
+
+ @property
+ def is_leaf( self ):
+ return False
+
+ def can_match( self, other_structure ):
+ if self.collection_type != other_structure.collection_type:
+ # TODO: generalize
+ return False
+
+ if len( self.children ) != len( other_structure.children ):
+ return False
+
+ for my_child, other_child in zip( self.children, other_structure.children ):
+ if my_child[ 0 ] != other_child[ 0 ]: # Different identifiers, TODO: generalize
+ return False
+
+ # At least one is nested collection...
+ if my_child[ 1 ].is_leaf != other_child[ 1 ].is_leaf:
+ return False
+
+ if not my_child[ 1 ].is_leaf and not my_child[ 1 ].can_match( other_child[ 1 ]):
+ return False
+
+ return True
+
+ def __len__( self ):
+ return sum( [ len( c[ 1 ] ) for c in self.children ] )
+
+ def element_identifiers_for_datasets( self, trans, datasets ):
+ element_identifiers = []
+ for identifier, child in self.children:
+ if isinstance( child, Tree ):
+ child_identifiers = child.element_identifiers_for_datasets( trans, datasets[ 0:len( child ) ] )
+ child_identifiers[ "name" ] = identifier
+ element_identifiers.append( child_identifiers )
+ else:
+ element_identifiers.append( dict( name=identifier, src="hda", id=trans.security.encode_id( datasets[ 0 ].id ) ) )
+
+ datasets = datasets[ len( child ): ]
+
+ return dict(
+ src="new_collection",
+ collection_type=self.collection_type,
+ element_identifiers=element_identifiers,
+ )
+
+
+def get_structure( dataset_collection_instance ):
+ return Tree( dataset_collection_instance.collection )
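A brief sketch of the structure helpers; `hdca_a` and `hdca_b` are assumed to be two HistoryDatasetCollectionAssociations. Illustrative only.

# Trees describe collection shape (type, identifiers, nesting) and can be
# compared before matching elements pairwise.
structure_a = get_structure( hdca_a )
structure_b = get_structure( hdca_b )
if structure_a.can_match( structure_b ):
    # Same collection_type, same element identifiers, same nesting.
    print "collections are structurally compatible (%d leaves)" % len( structure_a )
else:
    print "collections cannot be matched"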
diff -r 6e3ba71820e1b48644769d53399f9114af345716 -r 66ed5342e8555f41621a391c1ca014a6b8bedd91 lib/galaxy/dataset_collections/types/__init__.py
--- /dev/null
+++ b/lib/galaxy/dataset_collections/types/__init__.py
@@ -0,0 +1,34 @@
+from galaxy import exceptions
+from abc import ABCMeta
+from abc import abstractmethod
+
+from galaxy import model
+
+import logging
+log = logging.getLogger( __name__ )
+
+
+class DatasetCollectionType(object):
+ __metaclass__ = ABCMeta
+
+ @abstractmethod
+ def build_collection( self, dataset_instances ):
+ """
+ Build DatasetCollection with populated DatasetcollectionElement objects
+ corresponding to the supplied dataset instances or throw exception if
+ this is not a valid collection of the specified type.
+ """
+
+
+class BaseDatasetCollectionType( DatasetCollectionType ):
+
+ def _validation_failed( self, message ):
+ raise exceptions.ObjectAttributeInvalidException( message )
+
+ def _new_collection_for_elements( self, elements ):
+ dataset_collection = model.DatasetCollection( )
+ for index, element in enumerate( elements ):
+ element.element_index = index
+ element.collection = dataset_collection
+ dataset_collection.elements = elements
+ return dataset_collection
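A hypothetical example of a third plugin built on the BaseDatasetCollectionType contract above; the "triplet" type and its validation rule are invented for illustration and are not part of this changeset. A real plugin would also need to be added to PLUGIN_CLASSES in registry.py.

from galaxy.dataset_collections.types import BaseDatasetCollectionType
from galaxy.model import DatasetCollectionElement


class TripletDatasetCollectionType( BaseDatasetCollectionType ):
    """ Invented example: a collection of exactly three named elements. """
    collection_type = "triplet"

    def build_collection( self, elements ):
        if len( elements ) != 3:
            self._validation_failed( "Triplet instances must define exactly three datasets." )
        associations = []
        for identifier, element in elements.items():
            associations.append( DatasetCollectionElement(
                element=element,
                element_identifier=identifier,
            ) )
        return self._new_collection_for_elements( associations )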
diff -r 6e3ba71820e1b48644769d53399f9114af345716 -r 66ed5342e8555f41621a391c1ca014a6b8bedd91 lib/galaxy/dataset_collections/types/list.py
--- /dev/null
+++ b/lib/galaxy/dataset_collections/types/list.py
@@ -0,0 +1,23 @@
+from ..types import BaseDatasetCollectionType
+
+from galaxy.model import DatasetCollectionElement
+
+
+class ListDatasetCollectionType( BaseDatasetCollectionType ):
+ """ A flat list of named elements.
+ """
+ collection_type = "list"
+
+ def __init__( self ):
+ pass
+
+ def build_collection( self, elements ):
+ associations = []
+ for identifier, element in elements.iteritems():
+ association = DatasetCollectionElement(
+ element=element,
+ element_identifier=identifier,
+ )
+ associations.append( association )
+
+ return self._new_collection_for_elements( associations )
diff -r 6e3ba71820e1b48644769d53399f9114af345716 -r 66ed5342e8555f41621a391c1ca014a6b8bedd91 lib/galaxy/dataset_collections/types/paired.py
--- /dev/null
+++ b/lib/galaxy/dataset_collections/types/paired.py
@@ -0,0 +1,31 @@
+from ..types import BaseDatasetCollectionType
+
+from galaxy.model import DatasetCollectionElement
+
+LEFT_IDENTIFIER = "left"
+RIGHT_IDENTIFIER = "right"
+
+
+class PairedDatasetCollectionType( BaseDatasetCollectionType ):
+ """
+ Paired (left/right) datasets.
+ """
+ collection_type = "paired"
+
+ def __init__( self ):
+ pass
+
+ def build_collection( self, elements ):
+ left_dataset = elements.get("left", None)
+ right_dataset = elements.get("right", None)
+ if not left_dataset or not right_dataset:
+            self._validation_failed("Paired instance must define 'left' and 'right' datasets.")
+ left_association = DatasetCollectionElement(
+ element=left_dataset,
+ element_identifier=LEFT_IDENTIFIER,
+ )
+ right_association = DatasetCollectionElement(
+ element=right_dataset,
+ element_identifier=RIGHT_IDENTIFIER,
+ )
+ return self._new_collection_for_elements([left_association, right_association])
diff -r 6e3ba71820e1b48644769d53399f9114af345716 -r 66ed5342e8555f41621a391c1ca014a6b8bedd91 lib/galaxy/dataset_collections/util.py
--- /dev/null
+++ b/lib/galaxy/dataset_collections/util.py
@@ -0,0 +1,52 @@
+from galaxy import exceptions
+from galaxy import web
+from galaxy import model
+
+
+def api_payload_to_create_params( payload ):
+ """
+ Cleanup API payload to pass into dataset_collections.
+ """
+ required_parameters = [ "collection_type", "element_identifiers" ]
+ missing_parameters = [ p for p in required_parameters if p not in payload ]
+ if missing_parameters:
+ message = "Missing required parameters %s" % missing_parameters
+ raise exceptions.ObjectAttributeMissingException( message )
+
+ params = dict(
+ collection_type=payload.get( "collection_type" ),
+ element_identifiers=payload.get( "element_identifiers" ),
+ name=payload.get( "name", None ),
+ )
+
+ return params
+
+
+def dictify_dataset_collection_instance( dataset_colleciton_instance, parent, security, view="element" ):
+ dict_value = dataset_colleciton_instance.to_dict( view=view )
+ encoded_id = security.encode_id( dataset_colleciton_instance.id )
+ if isinstance( parent, model.History ):
+ encoded_history_id = security.encode_id( parent.id )
+ dict_value[ 'url' ] = web.url_for( 'history_content', history_id=encoded_history_id, id=encoded_id, type="dataset_collection" )
+ elif isinstance( parent, model.LibraryFolder ):
+ encoded_library_id = security.encode_id( parent.library.id )
+ encoded_folder_id = security.encode_id( parent.id )
+ # TODO: Work in progress - this end-point is not right yet...
+ dict_value[ 'url' ] = web.url_for( 'library_content', library_id=encoded_library_id, id=encoded_id, folder_id=encoded_folder_id )
+ if view == "element":
+ dict_value[ 'elements' ] = map( dictify_element, dataset_colleciton_instance.collection.elements )
+    security.encode_dict_ids( dict_value ) # TODO: Use Kyle's recursive formulation of this.
+ return dict_value
+
+
+def dictify_element( element ):
+ dictified = element.to_dict( view="element" )
+ object_detials = element.element_object.to_dict()
+ if element.child_collection:
+ # Recursively yield elements for each nested collection...
+ object_detials[ "elements" ] = map( dictify_element, element.child_collection.elements )
+
+ dictified[ "object" ] = object_detials
+ return dictified
+
+__all__ = [ api_payload_to_create_params, dictify_dataset_collection_instance ]
diff -r 6e3ba71820e1b48644769d53399f9114af345716 -r 66ed5342e8555f41621a391c1ca014a6b8bedd91 lib/galaxy/model/__init__.py
--- a/lib/galaxy/model/__init__.py
+++ b/lib/galaxy/model/__init__.py
@@ -1109,6 +1109,8 @@
iters = []
if 'dataset' in types:
iters.append( self.__dataset_contents_iter( **kwds ) )
+ if 'dataset_collection' in types:
+ iters.append( self.__collection_contents_iter( **kwds ) )
return galaxy.util.merge_sorted_iterables( operator.attrgetter( "hid" ), *iters )
def __dataset_contents_iter(self, **kwds):
@@ -1138,6 +1140,9 @@
else:
return query
+ def __collection_contents_iter( self, **kwds ):
+ return self.__filter_contents( HistoryDatasetCollectionAssociation, **kwds )
+
def copy_tags_from(self,target_user,source_history):
for src_shta in source_history.tags:
new_shta = src_shta.copy()
diff -r 6e3ba71820e1b48644769d53399f9114af345716 -r 66ed5342e8555f41621a391c1ca014a6b8bedd91 lib/galaxy/security/__init__.py
--- a/lib/galaxy/security/__init__.py
+++ b/lib/galaxy/security/__init__.py
@@ -563,6 +563,8 @@
return self.can_access_library( roles, item.folder.parent_library ) and self.can_access_dataset( roles, item.library_dataset_dataset_association.dataset )
elif type( item ) == self.model.LibraryDatasetDatasetAssociation:
return self.can_access_library( roles, item.library_dataset.folder.parent_library ) and self.can_access_dataset( roles, item.dataset )
+ elif type( item ) == self.model.LibraryDatasetCollectionAssociation:
+ return self.can_access_library( roles, item.folder.parent_library )
else:
log.warning( 'Unknown library item type: %s' % type( item ) )
return False
diff -r 6e3ba71820e1b48644769d53399f9114af345716 -r 66ed5342e8555f41621a391c1ca014a6b8bedd91 lib/galaxy/web/base/controller.py
--- a/lib/galaxy/web/base/controller.py
+++ b/lib/galaxy/web/base/controller.py
@@ -983,6 +983,27 @@
return ( ( trans.user_is_admin() )
or ( trans.app.security_agent.can_add_library_item( trans.get_current_user_roles(), item ) ) )
+ def check_user_can_add_to_library_item( self, trans, item, check_accessible=True ):
+ """
+ Raise exception if user cannot add to the specified library item (i.e.
+ Folder). Can set check_accessible to False if folder was loaded with
+ this check.
+ """
+ if not trans.user:
+ return False
+
+ current_user_roles = trans.get_current_user_roles()
+ if trans.user_is_admin():
+ return True
+
+ if check_accessible:
+ if not trans.app.security_agent.can_access_library_item( current_user_roles, item, trans.user ):
+ raise ItemAccessibilityException( )
+
+ if not trans.app.security_agent.can_add_library_item( trans.get_current_user_roles(), item ):
+ # Slight misuse of ItemOwnershipException?
+ raise ItemOwnershipException( "User cannot add to library item." )
+
def copy_hda_to_library_folder( self, trans, hda, library_folder, roles=None, ldda_message='' ):
#PRECONDITION: permissions for this action on hda and library_folder have been checked
roles = roles or []
diff -r 6e3ba71820e1b48644769d53399f9114af345716 -r 66ed5342e8555f41621a391c1ca014a6b8bedd91 lib/galaxy/webapps/galaxy/api/dataset_collections.py
--- /dev/null
+++ b/lib/galaxy/webapps/galaxy/api/dataset_collections.py
@@ -0,0 +1,75 @@
+from galaxy.web import _future_expose_api as expose_api
+
+from galaxy.web.base.controller import BaseAPIController
+from galaxy.web.base.controller import UsesHistoryMixin
+from galaxy.web.base.controller import UsesLibraryMixinItems
+
+from galaxy.dataset_collections.util import api_payload_to_create_params
+from galaxy.dataset_collections.util import dictify_dataset_collection_instance
+
+from logging import getLogger
+log = getLogger( __name__ )
+
+
+class DatasetCollectionsController(
+ BaseAPIController,
+ UsesHistoryMixin,
+ UsesLibraryMixinItems,
+):
+
+ @expose_api
+ def index( self, trans, **kwd ):
+ trans.response.status = 501
+ return 'not implemented'
+
+ @expose_api
+ def create( self, trans, payload, **kwd ):
+ """
+ * POST /api/dataset_collections:
+ create a new dataset collection instance.
+
+ :type payload: dict
+ :param payload: (optional) dictionary structure containing:
+        * collection_type: dataset collection type to create.
+ * instance_type: Instance type - 'history' or 'library'.
+ * name: the new dataset collections's name
+ * datasets: object describing datasets for collection
+ :rtype: dict
+ :returns: element view of new dataset collection
+ """
+ # TODO: Error handling...
+ create_params = api_payload_to_create_params( payload )
+ instance_type = payload.pop( "instance_type", "history" )
+ if instance_type == "history":
+ history_id = payload.get( 'history_id' )
+ history = self.get_history( trans, history_id, check_ownership=True, check_accessible=False )
+ create_params[ "parent" ] = history
+ elif instance_type == "library":
+ folder_id = payload.get( 'folder_id' )
+ library_folder = self.get_library_folder( trans, folder_id, check_accessible=True )
+ self.check_user_can_add_to_library_item( trans, library_folder, check_accessible=False )
+ create_params[ "parent" ] = library_folder
+ else:
+ trans.status = 501
+ return
+ dataset_collection_instance = self.__service( trans ).create( trans=trans, **create_params )
+ return dictify_dataset_collection_instance( dataset_collection_instance, security=trans.security, parent=create_params[ "parent" ] )
+
+ @expose_api
+ def show( self, trans, instance_type, id, **kwds ):
+        dataset_collection_instance = self.__service( trans ).get_dataset_collection_instance(
+            trans,
+            id=id,
+            instance_type=instance_type,
+        )
+ if instance_type == 'history':
+ parent = dataset_collection_instance.history
+ elif instance_type == 'library':
+ parent = dataset_collection_instance.folder
+ else:
+ trans.status = 501
+ return
+        return dictify_dataset_collection_instance( dataset_collection_instance, security=trans.security, parent=parent )
+
+ def __service( self, trans ):
+ service = trans.app.dataset_collections_service
+ return service
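A sketch of exercising the new endpoint over plain HTTP; GALAXY_URL, API_KEY, history_id, left_id, and right_id are placeholders, and the requests library is used purely for illustration. Not part of the changeset.

# Illustrative client call only.
import json
import requests

payload = {
    "instance_type": "history",
    "history_id": history_id,
    "collection_type": "paired",
    "name": "example pair",
    "element_identifiers": json.dumps( [
        { "name": "left", "src": "hda", "id": left_id },
        { "name": "right", "src": "hda", "id": right_id },
    ] ),
    "key": API_KEY,
}
response = requests.post( "%s/api/dataset_collections" % GALAXY_URL, data=payload )
print response.json()[ "elements" ]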
diff -r 6e3ba71820e1b48644769d53399f9114af345716 -r 66ed5342e8555f41621a391c1ca014a6b8bedd91 lib/galaxy/webapps/galaxy/api/history_contents.py
--- a/lib/galaxy/webapps/galaxy/api/history_contents.py
+++ b/lib/galaxy/webapps/galaxy/api/history_contents.py
@@ -15,6 +15,9 @@
from galaxy.web.base.controller import UsesLibraryMixinItems
from galaxy.web.base.controller import UsesTagsMixin
+from galaxy.dataset_collections.util import api_payload_to_create_params
+from galaxy.dataset_collections.util import dictify_dataset_collection_instance
+
from galaxy.web.base.controller import url_for
from galaxy.managers import histories
@@ -87,7 +90,7 @@
if types:
types = util.listify(types)
else:
- types = [ 'dataset' ]
+ types = [ 'dataset', "dataset_collection" ]
contents_kwds = {'types': types}
if ids:
@@ -112,7 +115,8 @@
rval.append( self._detailed_hda_dict( trans, content ) )
else:
rval.append( self._summary_hda_dict( trans, history_id, content ) )
-
+ elif isinstance(content, trans.app.model.HistoryDatasetCollectionAssociation):
+ rval.append( self.__collection_dict( trans, content ) )
return rval
#TODO: move to model or Mixin
@@ -142,6 +146,9 @@
'url' : url_for( 'history_content', history_id=encoded_history_id, id=encoded_id, type="dataset" ),
}
+ def __collection_dict( self, trans, dataset_collection_instance, view="collection" ):
+ return dictify_dataset_collection_instance( dataset_collection_instance, security=trans.security, parent=dataset_collection_instance.history, view=view )
+
def _detailed_hda_dict( self, trans, hda ):
"""
Detailed dictionary of hda values.
@@ -177,9 +184,26 @@
contents_type = kwd.get('type', 'dataset')
if contents_type == 'dataset':
return self.__show_dataset( trans, id, **kwd )
+ elif contents_type == 'dataset_collection':
+ return self.__show_dataset_collection( trans, id, history_id, **kwd )
else:
return self.__handle_unknown_contents_type( trans, contents_type )
+ def __show_dataset_collection( self, trans, id, history_id, **kwd ):
+ try:
+ service = trans.app.dataset_collections_service
+ dataset_collection_instance = service.get_dataset_collection_instance(
+ trans=trans,
+ instance_type='history',
+ id=id,
+ )
+ return self.__collection_dict( trans, dataset_collection_instance, view="element" )
+ except Exception, e:
+ msg = "Error in history API at listing dataset collection: %s" % ( str(e) )
+ log.error( msg, exc_info=True )
+ trans.response.status = 500
+ return msg
+
def __show_dataset( self, trans, id, **kwd ):
hda = self.mgrs.hdas.get( trans, self._decode_id( trans, id ), check_ownership=False, check_accessible=True )
#if hda.history.id != self._decode_id( trans, history_id ):
@@ -214,13 +238,20 @@
:rtype: dict
:returns: dictionary containing detailed information for the new HDA
"""
- #TODO: convert existing, accessible hda - model.DatasetInstance(or hda.datatype).get_converter_types
- history = self.mgrs.histories.get( trans, self._decode_id( trans, history_id ),
- check_ownership=True, check_accessible=False )
+ # get the history, if anon user and requesting current history - allow it
+ if( ( trans.user == None )
+ and ( history_id == trans.security.encode_id( trans.history.id ) ) ):
+ history = trans.history
+ # otherwise, check permissions for the history first
+ else:
+ history = self.mgrs.histories.get( trans, self._decode_id( trans, history_id ),
+ check_ownership=True, check_accessible=True )
type = payload.get('type', 'dataset')
if type == 'dataset':
return self.__create_dataset( trans, history, payload, **kwd )
+ elif type == 'dataset_collection':
+ return self.__create_dataset_collection( trans, history, payload, **kwd )
else:
return self.__handle_unknown_contents_type( trans, type )
@@ -260,6 +291,12 @@
hda_dict[ 'display_apps' ] = self.get_display_apps( trans, hda )
return hda_dict
+ def __create_dataset_collection( self, trans, history, payload, **kwd ):
+ create_params = api_payload_to_create_params( payload )
+ service = trans.app.dataset_collections_service
+ dataset_collection_instance = service.create( trans, parent=history, **create_params )
+ return self.__collection_dict( trans, dataset_collection_instance )
+
@expose_api_anonymous
def update( self, trans, history_id, id, payload, **kwd ):
"""
@@ -286,6 +323,8 @@
contents_type = kwd.get('type', 'dataset')
if contents_type == "dataset":
return self.__update_dataset( trans, history_id, id, payload, **kwd )
+ elif contents_type == "dataset_collection":
+ return self.__update_dataset_collection( trans, history_id, id, payload, **kwd )
else:
return self.__handle_unknown_contents_type( trans, contents_type )
@@ -325,6 +364,9 @@
return changed
+ def __update_dataset_collection( self, trans, history_id, id, payload, **kwd ):
+ return trans.app.dataset_collections_service.update( trans, "history", id, payload )
+
#TODO: allow anonymous del/purge and test security on this
@expose_api
def delete( self, trans, history_id, id, purge=False, **kwd ):
@@ -356,6 +398,9 @@
contents_type = kwd.get('type', 'dataset')
if contents_type == "dataset":
return self.__delete_dataset( trans, history_id, id, purge=purge, **kwd )
+ elif contents_type == "dataset_collection":
+ trans.app.dataset_collections_service.delete( trans, "history", id )
+ return { 'id' : id, "deleted": True }
else:
return self.__handle_unknown_contents_type( trans, contents_type )
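The same kind of payload can be sent through the history contents route added above; a sketch with the same placeholder names as the previous example, plus hdca_id for the created collection's encoded id. Illustrative only.

# Create via /api/histories/<id>/contents, then show and delete through the
# typed dataset_collections sub-route.
import json
import requests

payload = {
    "type": "dataset_collection",
    "collection_type": "paired",
    "element_identifiers": json.dumps( [
        { "name": "left", "src": "hda", "id": left_id },
        { "name": "right", "src": "hda", "id": right_id },
    ] ),
    "key": API_KEY,
}
requests.post( "%s/api/histories/%s/contents" % ( GALAXY_URL, history_id ), data=payload )

collection_url = "%s/api/histories/%s/contents/dataset_collections/%s?key=%s" % ( GALAXY_URL, history_id, hdca_id, API_KEY )
requests.get( collection_url )     # element view of the collection
requests.delete( collection_url )  # marks the collection instance deleted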
diff -r 6e3ba71820e1b48644769d53399f9114af345716 -r 66ed5342e8555f41621a391c1ca014a6b8bedd91 lib/galaxy/webapps/galaxy/api/library_contents.py
--- a/lib/galaxy/webapps/galaxy/api/library_contents.py
+++ b/lib/galaxy/webapps/galaxy/api/library_contents.py
@@ -5,7 +5,8 @@
from galaxy import web
from galaxy import exceptions
from galaxy.web import _future_expose_api as expose_api
-from galaxy.web import _future_expose_api_anonymous as expose_api_anonymous
+from galaxy.dataset_collections.util import api_payload_to_create_params
+from galaxy.dataset_collections.util import dictify_dataset_collection_instance
from galaxy.web.base.controller import BaseAPIController, UsesLibraryMixin, UsesLibraryMixinItems
from galaxy.web.base.controller import UsesHistoryDatasetAssociationMixin
from galaxy.web.base.controller import HTTPBadRequest, url_for
@@ -167,7 +168,7 @@
return "Missing required 'create_type' parameter."
else:
create_type = payload.pop( 'create_type' )
- if create_type not in ( 'file', 'folder' ):
+ if create_type not in ( 'file', 'folder', 'collection' ):
trans.response.status = 400
return "Invalid value for 'create_type' parameter ( %s ) specified." % create_type
@@ -202,6 +203,15 @@
status, output = trans.webapp.controllers['library_common'].upload_library_dataset( trans, 'api', library_id, real_folder_id, **payload )
elif create_type == 'folder':
status, output = trans.webapp.controllers['library_common'].create_folder( trans, 'api', real_folder_id, library_id, **payload )
+ elif create_type == 'collection':
+ # Not delegating to library_common, so need to check access to parent
+ # folder here.
+ self.check_user_can_add_to_library_item( trans, parent, check_accessible=True )
+ create_params = api_payload_to_create_params( payload )
+ create_params[ 'parent' ] = parent
+ service = trans.app.dataset_collections_service
+            dataset_collection_instance = service.create( trans, **create_params )
+ return [ dictify_dataset_collection_instance( dataset_collection_instance, security=trans.security, parent=parent ) ]
if status != 200:
trans.response.status = status
return output
@@ -280,6 +290,8 @@
library = self.get_library( trans, library_id, check_accessible=True )
folder = self.get_library_folder( trans, folder_id, check_accessible=True )
+    # TODO: refactor to use check_user_can_add_to_library_item, eliminate boolean
+ # can_current_user_add_to_library_item.
if not self.can_current_user_add_to_library_item( trans, folder ):
trans.response.status = 403
return { 'error' : 'user has no permission to add to library folder (%s)' %( folder_id ) }
diff -r 6e3ba71820e1b48644769d53399f9114af345716 -r 66ed5342e8555f41621a391c1ca014a6b8bedd91 lib/galaxy/webapps/galaxy/buildapp.py
--- a/lib/galaxy/webapps/galaxy/buildapp.py
+++ b/lib/galaxy/webapps/galaxy/buildapp.py
@@ -78,6 +78,7 @@
valid_history_contents_types = [
'dataset',
+ 'dataset_collection',
]
# This must come before history contents below.
# Accesss HDA details via histories/:history_id/contents/datasets/:hda_id
@@ -135,6 +136,7 @@
path_prefix='/api/histories/:history_id/contents/:history_content_id' )
webapp.mapper.resource( 'dataset', 'datasets', path_prefix='/api' )
+ webapp.mapper.resource( 'dataset_collection', 'dataset_collections', path_prefix='/api/')
webapp.mapper.resource( 'sample', 'samples', path_prefix='/api' )
webapp.mapper.resource( 'request', 'requests', path_prefix='/api' )
webapp.mapper.resource( 'form', 'forms', path_prefix='/api' )
diff -r 6e3ba71820e1b48644769d53399f9114af345716 -r 66ed5342e8555f41621a391c1ca014a6b8bedd91 test/api/test_history_contents.py
--- a/test/api/test_history_contents.py
+++ b/test/api/test_history_contents.py
@@ -3,6 +3,7 @@
from .helpers import TestsDatasets
from .helpers import LibraryPopulator
+from .test_dataset_collections import DatasetCollectionPopulator
from base.interactor import (
put_request,
delete_request,
@@ -15,6 +16,7 @@
def setUp( self ):
super( HistoryContentsApiTestCase, self ).setUp()
self.history_id = self._new_history()
+ self.dataset_collection_populator = DatasetCollectionPopulator( self.galaxy_interactor )
def test_index_hda_summary( self ):
hda1 = self._new_dataset( self.history_id )
@@ -84,6 +86,48 @@
assert delete_response.status_code < 300 # Something in the 200s :).
assert str( self.__show( hda1 ).json()[ "deleted" ] ).lower() == "true"
+ def test_dataset_collections( self ):
+ payload = self.dataset_collection_populator.create_pair_payload(
+ self.history_id,
+ type="dataset_collection"
+ )
+ pre_collection_count = self.__count_contents( type="dataset_collection" )
+ pre_dataset_count = self.__count_contents( type="dataset" )
+ pre_combined_count = self.__count_contents( type="dataset,dataset_collection" )
+
+ dataset_collection_response = self._post( "histories/%s/contents" % self.history_id, payload )
+
+ self._assert_status_code_is( dataset_collection_response, 200 )
+ dataset_collection = dataset_collection_response.json()
+ self._assert_has_keys( dataset_collection, "url", "name", "deleted" )
+
+ post_collection_count = self.__count_contents( type="dataset_collection" )
+ post_dataset_count = self.__count_contents( type="dataset" )
+ post_combined_count = self.__count_contents( type="dataset,dataset_collection" )
+
+ # Test filtering types with index.
+ assert pre_collection_count == 0
+ assert post_collection_count == 1
+ assert post_combined_count == pre_dataset_count + 1
+ assert post_combined_count == pre_combined_count + 1
+ assert pre_dataset_count == post_dataset_count
+
+        # Test show dataset collection.
+ collection_url = "histories/%s/contents/dataset_collections/%s" % ( self.history_id, dataset_collection[ "id" ] )
+ show_response = self._get( collection_url )
+ self._assert_status_code_is( show_response, 200 )
+ dataset_collection = show_response.json()
+ self._assert_has_keys( dataset_collection, "url", "name", "deleted" )
+
+ assert not dataset_collection[ "deleted" ]
+
+ delete_response = delete_request( self._api_url( collection_url, use_key=True ) )
+ self._assert_status_code_is( delete_response, 200 )
+
+ show_response = self._get( collection_url )
+ dataset_collection = show_response.json()
+ assert dataset_collection[ "deleted" ]
+
def __show( self, hda ):
show_response = self._get( "histories/%s/contents/%s" % ( self.history_id, hda[ "id" ] ) )
return show_response
diff -r 6e3ba71820e1b48644769d53399f9114af345716 -r 66ed5342e8555f41621a391c1ca014a6b8bedd91 test/functional/api/test_dataset_collections.py
--- /dev/null
+++ b/test/functional/api/test_dataset_collections.py
@@ -0,0 +1,189 @@
+from base import api
+import json
+from .helpers import DatasetPopulator
+
+
+# TODO: Move into helpers with rest of populators
+class DatasetCollectionPopulator( object ):
+
+ def __init__( self, galaxy_interactor ):
+ self.galaxy_interactor = galaxy_interactor
+ self.dataset_populator = DatasetPopulator( galaxy_interactor )
+
+ def create_pair_in_history( self, history_id, **kwds ):
+ payload = self.create_pair_payload(
+ history_id,
+ instance_type="history",
+ **kwds
+ )
+ return self.__create( payload )
+
+ def create_list_in_history( self, history_id, **kwds ):
+ payload = self.create_list_payload(
+ history_id,
+ instance_type="history",
+ **kwds
+ )
+ return self.__create( payload )
+
+ def create_list_payload( self, history_id, **kwds ):
+ return self.__create_payload( history_id, identifiers_func=self.list_identifiers, collection_type="list", **kwds )
+
+ def create_pair_payload( self, history_id, **kwds ):
+ return self.__create_payload( history_id, identifiers_func=self.pair_identifiers, collection_type="paired", **kwds )
+
+ def __create_payload( self, history_id, identifiers_func, collection_type, **kwds ):
+ contents = None
+ if "contents" in kwds:
+ contents = kwds[ "contents" ]
+ del kwds[ "contents" ]
+
+ if "element_identifiers" not in kwds:
+ kwds[ "element_identifiers" ] = json.dumps( identifiers_func( history_id, contents=contents ) )
+
+ payload = dict(
+ history_id=history_id,
+ collection_type=collection_type,
+ **kwds
+ )
+ return payload
+
+ def pair_identifiers( self, history_id, contents=None ):
+ hda1, hda2 = self.__datasets( history_id, count=2, contents=contents )
+
+ element_identifiers = [
+ dict( name="left", src="hda", id=hda1[ "id" ] ),
+ dict( name="right", src="hda", id=hda2[ "id" ] ),
+ ]
+ return element_identifiers
+
+ def list_identifiers( self, history_id, contents=None ):
+ hda1, hda2, hda3 = self.__datasets( history_id, count=3, contents=contents )
+ element_identifiers = [
+ dict( name="data1", src="hda", id=hda1[ "id" ] ),
+ dict( name="data2", src="hda", id=hda2[ "id" ] ),
+ dict( name="data3", src="hda", id=hda3[ "id" ] ),
+ ]
+ return element_identifiers
+
+ def __create( self, payload ):
+ create_response = self.galaxy_interactor.post( "dataset_collections", data=payload )
+ return create_response
+
+ def __datasets( self, history_id, count, contents=None ):
+ datasets = []
+ for i in xrange( count ):
+ new_kwds = {}
+ if contents:
+ new_kwds[ "content" ] = contents[ i ]
+ datasets.append( self.dataset_populator.new_dataset( history_id, **new_kwds ) )
+ return datasets
+
+
+class DatasetCollectionApiTestCase( api.ApiTestCase ):
+
+ def setUp( self ):
+ super( DatasetCollectionApiTestCase, self ).setUp()
+ self.dataset_populator = DatasetPopulator( self.galaxy_interactor )
+ self.dataset_collection_populator = DatasetCollectionPopulator( self.galaxy_interactor )
+ self.history_id = self.dataset_populator.new_history()
+
+ def test_create_pair_from_history( self ):
+ payload = self.dataset_collection_populator.create_pair_payload(
+ self.history_id,
+ instance_type="history",
+ )
+ create_response = self._post( "dataset_collections", payload )
+ dataset_collection = self._check_create_response( create_response )
+ returned_datasets = dataset_collection[ "elements" ]
+ assert len( returned_datasets ) == 2, dataset_collection
+
+ def test_create_list_from_history( self ):
+ element_identifiers = self.dataset_collection_populator.list_identifiers( self.history_id )
+
+ payload = dict(
+ instance_type="history",
+ history_id=self.history_id,
+ element_identifiers=json.dumps(element_identifiers),
+ collection_type="list",
+ )
+
+ create_response = self._post( "dataset_collections", payload )
+ dataset_collection = self._check_create_response( create_response )
+ returned_datasets = dataset_collection[ "elements" ]
+ assert len( returned_datasets ) == 3, dataset_collection
+
+ def test_create_list_of_existing_pairs( self ):
+ pair_payload = self.dataset_collection_populator.create_pair_payload(
+ self.history_id,
+ instance_type="history",
+ )
+ pair_create_response = self._post( "dataset_collections", pair_payload )
+ dataset_collection = self._check_create_response( pair_create_response )
+ hdca_id = dataset_collection[ "id" ]
+
+ element_identifiers = [
+ dict( name="test1", src="hdca", id=hdca_id )
+ ]
+
+ payload = dict(
+ instance_type="history",
+ history_id=self.history_id,
+ element_identifiers=json.dumps(element_identifiers),
+ collection_type="list",
+ )
+ create_response = self._post( "dataset_collections", payload )
+ dataset_collection = self._check_create_response( create_response )
+ returned_collections = dataset_collection[ "elements" ]
+ assert len( returned_collections ) == 1, dataset_collection
+
+ def test_create_list_of_new_pairs( self ):
+ pair_identifiers = self.dataset_collection_populator.pair_identifiers( self.history_id )
+ element_identifiers = [ dict(
+ src="new_collection",
+ name="test_pair",
+ collection_type="paired",
+ element_identifiers=pair_identifiers,
+ ) ]
+ payload = dict(
+ collection_type="list:paired",
+ instance_type="history",
+ history_id=self.history_id,
+ name="nested_collecion",
+ element_identifiers=json.dumps( element_identifiers ),
+ )
+ create_response = self._post( "dataset_collections", payload )
+ dataset_collection = self._check_create_response( create_response )
+ assert dataset_collection[ "collection_type" ] == "list:paired"
+ returned_collections = dataset_collection[ "elements" ]
+ assert len( returned_collections ) == 1, dataset_collection
+ pair_1_element = returned_collections[ 0 ]
+ self._assert_has_keys( pair_1_element, "element_index" )
+ pair_1_object = pair_1_element[ "object" ]
+ self._assert_has_keys( pair_1_object, "collection_type", "elements" )
+ self.assertEquals( pair_1_object[ "collection_type" ], "paired" )
+ pair_elements = pair_1_object[ "elements" ]
+ assert len( pair_elements ) == 2
+ pair_1_element_1 = pair_elements[ 0 ]
+ assert pair_1_element_1[ "element_index" ] == 0
+
+ def test_hda_security( self ):
+ element_identifiers = self.dataset_collection_populator.pair_identifiers( self.history_id )
+
+ with self._different_user( ):
+ history_id = self.dataset_populator.new_history()
+ payload = dict(
+ instance_type="history",
+ history_id=history_id,
+ element_identifiers=json.dumps(element_identifiers),
+ collection_type="paired",
+ )
+
+ create_response = self._post( "dataset_collections", payload )
+ self._assert_status_code_is( create_response, 403 )
+
+ def _check_create_response( self, create_response ):
+ self._assert_status_code_is( create_response, 200 )
+ dataset_collection = create_response.json()
+ self._assert_has_keys( dataset_collection, "elements", "url", "name", "collection_type" )
+ return dataset_collection
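For test authors, the populator defined above can be reused from other API test cases; a hedged sketch of typical usage, where `self` is assumed to be an api.ApiTestCase with `galaxy_interactor` and `history_id` set up as in the class above.

# Illustrative only.
populator = DatasetCollectionPopulator( self.galaxy_interactor )
create_response = populator.create_pair_in_history( self.history_id )
self._assert_status_code_is( create_response, 200 )
hdca = create_response.json()
assert hdca[ "collection_type" ] == "paired"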
https://bitbucket.org/galaxy/galaxy-central/commits/9146cb42529f/
Changeset: 9146cb42529f
User: jmchilton
Date: 2014-05-06 15:54:30
Summary: Dataset collections - tool parameters - basics.
Add a parameter, wrapper, etc. to explicitly use dataset collections from tools. These require modifying existing tools, or creating new ones, to take in "data_collection" parameters. Downstream changesets will provide a number of infrastructure enhancements that allow dataset collections to be "mapped" and "reduced" with existing, unmodified tools.
Affected #: 9 files
diff -r 66ed5342e8555f41621a391c1ca014a6b8bedd91 -r 9146cb42529fa53be7870d1174e77d2aecddd99c lib/galaxy/dataset_collections/__init__.py
--- a/lib/galaxy/dataset_collections/__init__.py
+++ b/lib/galaxy/dataset_collections/__init__.py
@@ -158,9 +158,7 @@
def history_dataset_collections(self, history, query):
collections = history.dataset_collections
- collection_type = query.get( "collection_type", None )
- if collection_type:
- collections = filter( lambda c: c.collection.collection_type == collection_type, collections )
+ collections = filter( query.direct_match, collections )
return collections
def __persist( self, dataset_collection_instance ):
diff -r 66ed5342e8555f41621a391c1ca014a6b8bedd91 -r 9146cb42529fa53be7870d1174e77d2aecddd99c lib/galaxy/tools/__init__.py
--- a/lib/galaxy/tools/__init__.py
+++ b/lib/galaxy/tools/__init__.py
@@ -75,6 +75,7 @@
SelectToolParameterWrapper,
DatasetFilenameWrapper,
DatasetListWrapper,
+ DatasetCollectionWrapper,
)
diff -r 66ed5342e8555f41621a391c1ca014a6b8bedd91 -r 9146cb42529fa53be7870d1174e77d2aecddd99c lib/galaxy/tools/actions/__init__.py
--- a/lib/galaxy/tools/actions/__init__.py
+++ b/lib/galaxy/tools/actions/__init__.py
@@ -3,12 +3,14 @@
from galaxy.exceptions import ObjectInvalid
from galaxy.model import LibraryDatasetDatasetAssociation
from galaxy.tools.parameters import DataToolParameter
+from galaxy.tools.parameters import DataCollectionToolParameter
from galaxy.tools.parameters.wrapped import WrappedParameters
from galaxy.util.json import from_json_string
from galaxy.util.json import to_json_string
from galaxy.util.none_like import NoneDataset
from galaxy.util.odict import odict
from galaxy.util.template import fill_template
+from galaxy.util import listify
from galaxy.web import url_for
import logging
@@ -109,6 +111,21 @@
for conversion_name, conversion_data in conversions:
#allow explicit conversion to be stored in job_parameter table
target_dict[ conversion_name ] = conversion_data.id # a more robust way to determine JSONable value is desired
+ elif isinstance( input, DataCollectionToolParameter ):
+ for i, v in enumerate( value.collection.datasets ):
+ data = v
+ current_user_roles = trans.get_current_user_roles()
+ if not trans.app.security_agent.can_access_dataset( current_user_roles, data.dataset ):
+ raise Exception( "User does not have permission to use a dataset (%s) provided for input." % data.id )
+ # Skipping implicit conversion stuff for now, revisit at
+ # some point and figure out if implicitly converting a
+                    # dataset collection makes sense.
+
+ #if i == 0:
+ # # Allow copying metadata to output, first item will be source.
+ # input_datasets[ prefix + input.name ] = data.dataset_instance
+ input_datasets[ prefix + input.name + str( i + 1 ) ] = data
+
tool.visit_inputs( param_values, visitor )
return input_datasets
diff -r 66ed5342e8555f41621a391c1ca014a6b8bedd91 -r 9146cb42529fa53be7870d1174e77d2aecddd99c lib/galaxy/tools/evaluation.py
--- a/lib/galaxy/tools/evaluation.py
+++ b/lib/galaxy/tools/evaluation.py
@@ -8,6 +8,7 @@
from galaxy.tools.wrappers import (
DatasetFilenameWrapper,
DatasetListWrapper,
+ DatasetCollectionWrapper,
LibraryDatasetValueWrapper,
SelectToolParameterWrapper,
InputValueWrapper,
@@ -15,6 +16,7 @@
)
from galaxy.tools.parameters.basic import (
DataToolParameter,
+ DataCollectionToolParameter,
LibraryDatasetToolParameter,
SelectToolParameter,
)
@@ -188,10 +190,22 @@
wrapper_kwds[ "dataset_path" ] = input_dataset_paths[ real_path ]
input_values[ input.name ] = \
DatasetFilenameWrapper( dataset, **wrapper_kwds )
+ elif isinstance( input, DataCollectionToolParameter ):
+ dataset_collection = input_values[ input.name ]
+ wrapper_kwds = dict(
+ datatypes_registry=self.app.datatypes_registry,
+ dataset_paths=input_dataset_paths,
+ tool=self,
+ name=input.name
+ )
+ wrapper = DatasetCollectionWrapper(
+ dataset_collection,
+ **wrapper_kwds
+ )
+ input_values[ input.name ] = wrapper
elif isinstance( input, SelectToolParameter ):
input_values[ input.name ] = SelectToolParameterWrapper(
input, input_values[ input.name ], self.app, other_values=param_dict, path_rewriter=self.unstructured_path_rewriter )
-
elif isinstance( input, LibraryDatasetToolParameter ):
# TODO: Handle input rewrites in here? How to test LibraryDatasetToolParameters?
input_values[ input.name ] = LibraryDatasetValueWrapper(
@@ -207,6 +221,8 @@
self.__walk_inputs( self.tool.inputs, param_dict, wrap_input )
def __populate_input_dataset_wrappers(self, param_dict, input_datasets, input_dataset_paths):
+ # TODO: Update this method for dataset collections? Need to test. -John.
+
## FIXME: when self.check_values==True, input datasets are being wrapped
## twice (above and below, creating 2 separate
## DatasetFilenameWrapper objects - first is overwritten by
diff -r 66ed5342e8555f41621a391c1ca014a6b8bedd91 -r 9146cb42529fa53be7870d1174e77d2aecddd99c lib/galaxy/tools/parameters/basic.py
--- a/lib/galaxy/tools/parameters/basic.py
+++ b/lib/galaxy/tools/parameters/basic.py
@@ -16,7 +16,9 @@
from sanitize import ToolParameterSanitizer
import validation
import dynamic_options
+from ..parameters import history_query
from .dataset_matcher import DatasetMatcher
+from .dataset_matcher import DatasetCollectionMatcher
# For BaseURLToolParameter
from galaxy.web import url_for
from galaxy.model.item_attrs import Dictifiable
@@ -1863,6 +1865,88 @@
return ref
+class DataCollectionToolParameter( BaseDataToolParameter ):
+ """
+ """
+
+ def __init__( self, tool, elem, trans=None ):
+ super(DataCollectionToolParameter, self).__init__( tool, elem, trans )
+ self.history_query = history_query.HistoryQuery.from_parameter_elem( elem )
+ self._parse_formats( trans, tool, elem )
+ self.multiple = False # Accessed on DataToolParameter a lot, may want in future
+ self._parse_options( elem ) # TODO: Review and test.
+
+ def get_html_field( self, trans=None, value=None, other_values={} ):
+ # dropped refresh values, may be needed..
+ field = form_builder.SelectField( self.name, self.multiple, None, self.refresh_on_change, refresh_on_change_values=self.refresh_on_change_values )
+ history = self._get_history( trans )
+ dataset_collections = trans.app.dataset_collections_service.history_dataset_collections( history, self.history_query )
+ dataset_matcher = DatasetMatcher( trans, self, value, other_values )
+ dataset_collection_matcher = DatasetCollectionMatcher( dataset_matcher )
+
+ for dataset_collection_instance in dataset_collections:
+ log.info("Processing dataset collection instance....")
+ if not dataset_collection_matcher.hdca_match( dataset_collection_instance ):
+ continue
+ instance_id = dataset_collection_instance.hid
+ instance_name = dataset_collection_instance.name
+ selected = ( value and ( dataset_collection_instance == value ) )
+ if dataset_collection_instance.visible:
+ hidden_text = ""
+ else:
+ hidden_text = " (hidden)"
+ field.add_option( "%s:%s %s" % ( instance_id, hidden_text, instance_name ), dataset_collection_instance.id, selected )
+ self._ensure_selection( field )
+ return field
+
+ def from_html( self, value, trans, other_values={} ):
+ if not value and not self.optional:
+ raise ValueError( "History does not include a dataset of the required format / build" )
+ if value in [None, "None"]:
+ return None
+ if isinstance( value, str ) and value.find( "," ) > 0:
+ value = [ int( value_part ) for value_part in value.split( "," ) ]
+ elif isinstance( value, trans.app.model.HistoryDatasetCollectionAssociation ):
+ rval = value
+ elif isinstance( value, dict ) and 'src' in value and 'id' in value:
+ if value['src'] == 'hdca':
+ rval = trans.sa_session.query( trans.app.model.HistoryDatasetCollectionAssociation ).get( trans.app.security.decode_id(value['id']) )
+ else:
+ rval = trans.sa_session.query( trans.app.model.HistoryDatasetCollectionAssociation ).get( value )
+ if rval:
+ if rval.deleted:
+ raise ValueError( "The previously selected dataset collection has been deleted" )
+ # TODO: Handle error states, implement error states ...
+ return rval
+
+ def to_string( self, value, app ):
+ if value is None or isinstance( value, basestring ):
+ return value
+ elif isinstance( value, int ):
+ return str( value )
+ try:
+ return value.id
+ except:
+ return str( value )
+
+ def to_python( self, value, app ):
+ # Both of these values indicate that no dataset is selected. However, 'None'
+ # indicates that the dataset is optional, while '' indicates that it is not.
+ if value is None or value == '' or value == 'None':
+ return value
+ return app.model.context.query( app.model.HistoryDatasetCollectionAssociation ).get( int( value ) )
+
+ def value_to_display_text( self, value, app ):
+ try:
+ display_text = "%s: %s" % ( value.hid, value.name )
+ except AttributeError:
+ display_text = "No dataset collection."
+ return display_text
+
+ def validate( self, value, history=None ):
+ return True # TODO
+
+
class HiddenDataToolParameter( HiddenToolParameter, DataToolParameter ):
"""
Hidden parameter that behaves as a DataToolParameter. As with all hidden
@@ -1980,6 +2064,7 @@
file=FileToolParameter,
ftpfile=FTPFileToolParameter,
data=DataToolParameter,
+ data_collection=DataCollectionToolParameter,
library_data=LibraryDatasetToolParameter,
drill_down=DrillDownSelectToolParameter
)
diff -r 66ed5342e8555f41621a391c1ca014a6b8bedd91 -r 9146cb42529fa53be7870d1174e77d2aecddd99c lib/galaxy/tools/parameters/dataset_matcher.py
--- a/lib/galaxy/tools/parameters/dataset_matcher.py
+++ b/lib/galaxy/tools/parameters/dataset_matcher.py
@@ -60,13 +60,13 @@
return HdaImplicitMatch( hda, target_ext )
return False
- def hda_match( self, hda, check_implicit_conversions=True ):
+ def hda_match( self, hda, check_implicit_conversions=True, ensure_visible=True ):
""" If HDA is accessible, return information about whether it could
match this parameter and if so how. See valid_hda_match for more
information.
"""
accessible = self.hda_accessible( hda )
- if accessible and ( hda.visible or ( self.selected( hda ) and not hda.implicitly_converted_parent_datasets ) ):
+ if accessible and ( not ensure_visible or hda.visible or ( self.selected( hda ) and not hda.implicitly_converted_parent_datasets ) ):
# If we are sending data to an external application, then we need to make sure there are no roles
# associated with the dataset that restrict its access from "public".
require_public = self.tool and self.tool.tool_type == 'data_destination'
@@ -123,4 +123,28 @@
return True
-__all__ = [ DatasetMatcher ]
+class DatasetCollectionMatcher( object ):
+
+ def __init__( self, dataset_matcher ):
+ self.dataset_matcher = dataset_matcher
+
+ def __valid_element( self, element ):
+ # Simplify things for now and assume these are hdas and not implicit
+ # converts. One could imagine handling both of those cases down the
+ # road.
+ hda = element.hda
+ if not hda:
+ return False
+ hda_match = self.dataset_matcher.hda_match( hda, ensure_visible=False )
+ return hda_match and not hda_match.implicit_conversion
+
+ def hdca_match( self, history_dataset_collection_association ):
+ valid = True
+ for element in history_dataset_collection_association.collection.datasets:
+ if not self.__valid_element( element ):
+ valid = False
+ break
+ return valid
+
+
+__all__ = [ DatasetMatcher, DatasetCollectionMatcher ]
diff -r 66ed5342e8555f41621a391c1ca014a6b8bedd91 -r 9146cb42529fa53be7870d1174e77d2aecddd99c lib/galaxy/tools/parameters/history_query.py
--- /dev/null
+++ b/lib/galaxy/tools/parameters/history_query.py
@@ -0,0 +1,24 @@
+import logging
+log = logging.getLogger( __name__ )
+
+
+class HistoryQuery( object ):
+ """ An object for describing the collections to pull out of a history,
+ used by DataCollectionToolParameter.
+ """
+
+ def __init__( self, **kwargs ):
+ self.collection_type = kwargs.get( "collection_type", None )
+
+ @staticmethod
+ def from_parameter_elem( elem ):
+ """ Take in a tool parameter element.
+ """
+ kwargs = dict( collection_type=elem.get( "collection_type", None ) )
+ return HistoryQuery( **kwargs )
+
+ def direct_match( self, hdca ):
+ if self.collection_type and hdca.collection.collection_type != self.collection_type:
+ return False
+
+ return True
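A minimal sketch of how this query object is consumed (mirroring the filter change in lib/galaxy/dataset_collections/__init__.py above); history stands in for a Galaxy History object and the "paired" value is just an example:
    query = HistoryQuery( collection_type="paired" )                    # history: assumed History object
    paired = filter( query.direct_match, history.dataset_collections )
    # Collections whose collection_type differs are dropped; with no
    # collection_type configured, direct_match accepts everything.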
diff -r 66ed5342e8555f41621a391c1ca014a6b8bedd91 -r 9146cb42529fa53be7870d1174e77d2aecddd99c lib/galaxy/tools/parameters/wrapped.py
--- a/lib/galaxy/tools/parameters/wrapped.py
+++ b/lib/galaxy/tools/parameters/wrapped.py
@@ -2,6 +2,7 @@
from galaxy.tools.parameters.basic import (
DataToolParameter,
+ DataCollectionToolParameter,
SelectToolParameter,
)
from galaxy.tools.parameters.grouping import (
@@ -57,6 +58,13 @@
name=input.name )
elif isinstance( input, SelectToolParameter ):
input_values[ input.name ] = galaxy.tools.SelectToolParameterWrapper( input, input_values[ input.name ], tool.app, other_values=incoming )
+ elif isinstance( input, DataCollectionToolParameter ):
+ input_values[ input.name ] = galaxy.tools.DatasetCollectionWrapper(
+ input_values[ input.name ],
+ datatypes_registry=trans.app.datatypes_registry,
+ tool=tool,
+ name=input.name,
+ )
else:
input_values[ input.name ] = galaxy.tools.InputValueWrapper( input, input_values[ input.name ], incoming )
diff -r 66ed5342e8555f41621a391c1ca014a6b8bedd91 -r 9146cb42529fa53be7870d1174e77d2aecddd99c lib/galaxy/tools/wrappers.py
--- a/lib/galaxy/tools/wrappers.py
+++ b/lib/galaxy/tools/wrappers.py
@@ -1,5 +1,9 @@
import pipes
from galaxy.util.none_like import NoneDataset
+from galaxy.util import odict
+
+from logging import getLogger
+log = getLogger( __name__ )
class ToolParameterValueWrapper( object ):
@@ -180,6 +184,10 @@
self.false_path = getattr( dataset_path, "false_path", None )
self.false_extra_files_path = getattr( dataset_path, "false_extra_files_path", None )
+ @property
+ def is_collection( self ):
+ return False
+
def __str__( self ):
if self.false_path is not None:
return self.false_path
@@ -198,7 +206,18 @@
return bool( self.dataset )
-class DatasetListWrapper( list ):
+class HasDatasets:
+
+ def _dataset_wrapper( self, dataset, dataset_paths, **kwargs ):
+ wrapper_kwds = kwargs.copy()
+ if dataset:
+ real_path = dataset.file_name
+ if real_path in dataset_paths:
+ wrapper_kwds[ "dataset_path" ] = dataset_paths[ real_path ]
+ return DatasetFilenameWrapper( dataset, **wrapper_kwds )
+
+
+class DatasetListWrapper( list, HasDatasets ):
"""
"""
def __init__( self, datasets, dataset_paths=[], **kwargs ):
@@ -206,14 +225,55 @@
datasets = [datasets]
def to_wrapper( dataset ):
- wrapper_kwds = kwargs.copy()
- if dataset:
- #A None dataset does not have a filename
- real_path = dataset.file_name
- if real_path in dataset_paths:
- wrapper_kwds[ "dataset_path" ] = dataset_paths[ real_path ]
- return DatasetFilenameWrapper( dataset, **wrapper_kwds )
+ return self._dataset_wrapper( dataset, dataset_paths, **kwargs )
list.__init__( self, map( to_wrapper, datasets ) )
def __str__( self ):
return ','.join( map( str, self ) )
+
+
+class DatasetCollectionWrapper( object, HasDatasets ):
+
+ def __init__( self, has_collection, dataset_paths=[], **kwargs ):
+ super(DatasetCollectionWrapper, self).__init__()
+
+ if hasattr( has_collection, "name" ):
+ # It is a HistoryDatasetCollectionAssociation
+ collection = has_collection.collection
+ self.name = has_collection.name
+ else:
+ # It is a DatasetCollectionElement instance referencing another collection
+ collection = has_collection.child_collection
+ self.name = has_collection.element_identifier
+
+ elements = collection.elements
+ element_instances = odict.odict()
+
+ element_instance_list = []
+ for dataset_collection_element in elements:
+ element_object = dataset_collection_element.element_object
+ element_identifier = dataset_collection_element.element_identifier
+
+ if dataset_collection_element.is_collection:
+ element_wrapper = DatasetCollectionWrapper( dataset_collection_element, dataset_paths, **kwargs )
+ else:
+ element_wrapper = self._dataset_wrapper( element_object, dataset_paths, **kwargs)
+
+ element_instances[element_identifier] = element_wrapper
+ element_instance_list.append( element_wrapper )
+
+ self.element_instances = element_instances
+ self.element_instance_list = element_instance_list
+
+ def keys( self ):
+ return self.element_instances.keys()
+
+ @property
+ def is_collection( self ):
+ return True
+
+ def __getattr__( self, key ):
+ return self.element_instances[ key ]
+
+ def __iter__( self ):
+ return self.element_instance_list.__iter__()
https://bitbucket.org/galaxy/galaxy-central/commits/6125ff499e43/
Changeset: 6125ff499e43
User: jmchilton
Date: 2014-05-06 15:54:30
Summary: Dataset collections - tool parameters - enhanced rerun.
Rerunning tools in the same history will work without this, but getting rerun to work on imported histories requires this hack (mirroring the existing hack for HDAs in the same file).
Affected #: 1 file
diff -r 9146cb42529fa53be7870d1174e77d2aecddd99c -r 6125ff499e4320bd128f02a30dbd23bcfd0c39d1 lib/galaxy/webapps/galaxy/controllers/tool_runner.py
--- a/lib/galaxy/webapps/galaxy/controllers/tool_runner.py
+++ b/lib/galaxy/webapps/galaxy/controllers/tool_runner.py
@@ -11,6 +11,7 @@
from galaxy.tools.parameters import params_to_incoming
from galaxy.tools.parameters import visit_input_values
from galaxy.tools.parameters.basic import DataToolParameter
+from galaxy.tools.parameters.basic import DataCollectionToolParameter
from galaxy.tools.parameters.basic import UnvalidatedValue
from galaxy.util.bunch import Bunch
from galaxy.util.hash_util import is_hashable
@@ -186,6 +187,15 @@
if source_hda not in hda_source_dict or source_hda.hid == hda.hid:
hda_source_dict[ source_hda ] = hda
source_hda = source_hda.copied_from_history_dataset_association
+ # Ditto for dataset collections.
+ hdca_source_dict = {}
+ for hdca in history.dataset_collections:
+ source_hdca = hdca.copied_from_history_dataset_collection_association
+ while source_hdca:
+ if source_hdca not in hdca_source_dict or source_hdca.hid == hdca.hid:
+ hdca_source_dict[ source_hdca ] = hdca
+ source_hdca = source_hdca.copied_from_history_dataset_collection_association
+
# Unpack unvalidated values to strings, they'll be validated when the
# form is submitted (this happens when re-running a job that was
# initially run by a workflow)
@@ -210,6 +220,10 @@
return values
if is_hashable( value ) and value not in history.datasets and value in hda_source_dict:
return hda_source_dict[ value ]
+ elif isinstance( input, DataCollectionToolParameter ):
+ if is_hashable( value ) and value not in history.dataset_collections and value in hdca_source_dict:
+ return hdca_source_dict[ value ]
+
visit_input_values( tool.inputs, params_objects, rerun_callback )
# Create a fake tool_state for the tool, with the parameters values
state = tool.new_state( trans )
https://bitbucket.org/galaxy/galaxy-central/commits/3fd5d1b51669/
Changeset: 3fd5d1b51669
User: jmchilton
Date: 2014-05-06 15:54:30
Summary: Dataset collections - tool parameters - allow tool tests.
Includes example tool tests demonstrating some of the basic features available to Cheetah for data_collection parameters.
Affected #: 5 files
diff -r 6125ff499e4320bd128f02a30dbd23bcfd0c39d1 -r 3fd5d1b5166944bae469cc0b451d74a6789c5a16 lib/galaxy/tools/test.py
--- a/lib/galaxy/tools/test.py
+++ b/lib/galaxy/tools/test.py
@@ -189,6 +189,12 @@
if not isinstance(param_value, list):
param_value = [ param_value ]
processed_value = [ self.__add_uploaded_dataset( context.for_state(), v, param_extra, value ) for v in param_value ]
+ if isinstance( value, basic.DataCollectionToolParameter ):
+ assert 'collection' in param_extra
+ collection_def = param_extra[ 'collection' ]
+ for ( name, value, extra ) in collection_def.collect_inputs():
+ require_file( name, value, extra, self.required_files )
+ processed_value = collection_def
else:
processed_value = param_value
expanded_inputs[ context.for_state() ] = processed_value
@@ -274,6 +280,8 @@
attrib['metadata'].append( child )
elif child.tag == 'edit_attributes':
attrib['edit_attributes'].append( child )
+ elif child.tag == 'collection':
+ attrib[ 'collection' ] = TestCollectionDef( child )
if composite_data_name:
# Composite datasets need implicit renaming;
# inserted at front of list so explicit declarations
@@ -442,6 +450,33 @@
return 0
+class TestCollectionDef( object ):
+
+ def __init__( self, elem ):
+ self.elements = []
+ attrib = dict( elem.attrib )
+ self.collection_type = attrib[ "type" ]
+ self.name = attrib.get( "name", "Unnamed Collection" )
+ for element in elem.findall( "element" ):
+ element_attrib = dict( element.attrib )
+ element_identifier = element_attrib[ "name" ]
+ nested_collection_elem = element.find( "collection" )
+ if nested_collection_elem:
+ self.elements.append( ( element_identifier, TestCollectionDef( nested_collection_elem ) ) )
+ else:
+ self.elements.append( ( element_identifier, parse_param_elem( element ) ) )
+
+ def collect_inputs( self ):
+ inputs = []
+ for element in self.elements:
+ value = element[ 1 ]
+ if isinstance( value, TestCollectionDef ):
+ inputs.extend( value.collect_inputs() )
+ else:
+ inputs.append( value )
+ return inputs
+
+
def expand_input_elems( root_elem, prefix="" ):
__append_prefix_to_params( root_elem, prefix )
diff -r 6125ff499e4320bd128f02a30dbd23bcfd0c39d1 -r 3fd5d1b5166944bae469cc0b451d74a6789c5a16 test/base/interactor.py
--- a/test/base/interactor.py
+++ b/test/base/interactor.py
@@ -1,6 +1,7 @@
import os
from StringIO import StringIO
from galaxy.tools.parameters import grouping
+from galaxy.tools import test
from galaxy import eggs
eggs.require( "requests" )
from galaxy import util
@@ -177,7 +178,10 @@
values = [value] if not isinstance(value, list) else value
new_values = []
for value in values:
- if value in self.uploads:
+ if isinstance( value, test.TestCollectionDef ):
+ hdca_id = self._create_collection( history_id, value )
+ new_values = [ dict( src="hdca", id=hdca_id ) ]
+ elif value in self.uploads:
new_values.append( self.uploads[ value ] )
else:
new_values.append( value )
@@ -195,6 +199,33 @@
except KeyError:
raise Exception( datasets_object[ 'message' ] )
+ def _create_collection( self, history_id, collection_def ):
+ create_payload = dict(
+ name=collection_def.name,
+ element_identifiers=dumps( self._element_identifiers( collection_def ) ),
+ collection_type=collection_def.collection_type,
+ history_id=history_id,
+ )
+ return self._post( "dataset_collections", data=create_payload ).json()[ "id" ]
+
+ def _element_identifiers( self, collection_def ):
+ element_identifiers = []
+ for ( element_identifier, element ) in collection_def.elements:
+ if isinstance( element, test.TestCollectionDef ):
+ subelement_identifiers = self._element_identifiers( element )
+ element = dict(
+ name=element_identifier,
+ src="new_collection",
+ collection_type=element.collection_type,
+ element_identifiers=subelement_identifiers
+ )
+ else:
+ element_name = element[ 0 ]
+ element = self.uploads[ element[ 1 ] ].copy()
+ element[ "name" ] = element_name
+ element_identifiers.append( element )
+ return element_identifiers
+
def __dictify_outputs( self, datasets_object ):
## Convert outputs list to a dictionary that can be accessed by
## output_name so can be more flexiable about ordering of outputs
diff -r 6125ff499e4320bd128f02a30dbd23bcfd0c39d1 -r 3fd5d1b5166944bae469cc0b451d74a6789c5a16 test/functional/tools/collection_nested_test.xml
--- /dev/null
+++ b/test/functional/tools/collection_nested_test.xml
@@ -0,0 +1,51 @@
+<tool id="collection_nested_test" name="collection_nested_test" version="0.1.0">
+ <command>
+ echo #for $f in $f1# ${f.is_collection} #end for# >> $out1;
+ cat #for $f in $f1# #if $f.is_collection# #for $inner in $f# ${inner} #end for# #else# $f # #end if# #end for# >> $out2
+ </command>
+ <inputs>
+ <param name="f1" type="data_collection" />
+ </inputs>
+ <outputs>
+ <data format="txt" name="out1" />
+ <data format="txt" name="out2" />
+ </outputs>
+ <tests>
+ <test>
+ <param name="f1">
+ <collection type="list:paired">
+ <element name="i1">
+ <collection type="paired">
+ <element name="left" value="simple_line.txt" />
+ <element name="right" value="simple_line_alternative.txt" />
+ </collection>
+ </element>
+ </collection>
+ </param>
+ <output name="out1">
+ <assert_contents>
+ <has_line line="True" />
+ </assert_contents>
+ </output>
+ <output name="out2">
+ <assert_contents>
+ <has_line line="This is a line of text." />
+ <has_line line="This is a different line of text." />
+ </assert_contents>
+ </output>
+ </test>
+ <test>
+ <param name="f1">
+ <collection type="paired">
+ <element name="left" value="simple_line.txt" />
+ <element name="right" value="simple_line_alternative.txt" />
+ </collection>
+ </param>
+ <output name="out1">
+ <assert_contents>
+ <has_line line="False False" />
+ </assert_contents>
+ </output>
+ </test>
+ </tests>
+</tool>
diff -r 6125ff499e4320bd128f02a30dbd23bcfd0c39d1 -r 3fd5d1b5166944bae469cc0b451d74a6789c5a16 test/functional/tools/collection_paired_test.xml
--- /dev/null
+++ b/test/functional/tools/collection_paired_test.xml
@@ -0,0 +1,27 @@
+<tool id="collection_paired_test" name="paired_test" version="0.1.0">
+ <command>
+ cat #for $f in $f1# ${f} #end for# >> $out1;
+ </command>
+ <inputs>
+ <param name="f1" type="data_collection" />
+ </inputs>
+ <outputs>
+ <data format="txt" name="out1" />
+ </outputs>
+ <tests>
+ <test>
+ <param name="f1">
+ <collection type="paired">
+ <element name="left" value="simple_line.txt" />
+ <element name="right" value="simple_line_alternative.txt" />
+ </collection>
+ </param>
+ <output name="out1">
+ <assert_contents>
+ <has_line line="This is a line of text." />
+ <has_line line="This is a different line of text." />
+ </assert_contents>
+ </output>
+ </test>
+ </tests>
+</tool>
diff -r 6125ff499e4320bd128f02a30dbd23bcfd0c39d1 -r 3fd5d1b5166944bae469cc0b451d74a6789c5a16 test/functional/tools/samples_tool_conf.xml
--- a/test/functional/tools/samples_tool_conf.xml
+++ b/test/functional/tools/samples_tool_conf.xml
@@ -18,4 +18,6 @@
<tool file="parallelism_optional.xml" /><tool file="implicit_default_conds.xml" /><tool file="multi_data_param.xml" />
+ <tool file="collection_paired_test.xml" />
+ <tool file="collection_nested_test.xml" /></toolbox>
\ No newline at end of file
https://bitbucket.org/galaxy/galaxy-central/commits/513acdcaca4d/
Changeset: 513acdcaca4d
User: jmchilton
Date: 2014-05-06 15:54:30
Summary: Rework logic related to computing selected datasets.
Allow dataset IDs to be used. This will make it possible to implement multi-running of tools without expanding values out to HDAs. check_params will still be called on each permutation, so this shouldn't result in security problems.
Affected #: 1 file
diff -r 3fd5d1b5166944bae469cc0b451d74a6789c5a16 -r 513acdcaca4d112f64b8cbe8fa2d62f3c69b6589 lib/galaxy/tools/parameters/dataset_matcher.py
--- a/lib/galaxy/tools/parameters/dataset_matcher.py
+++ b/lib/galaxy/tools/parameters/dataset_matcher.py
@@ -80,7 +80,10 @@
""" Given value for DataToolParameter, is this HDA "selected".
"""
value = self.value
- return value and hda in value
+ if value and str( value[ 0 ] ).isdigit():
+ return hda.id in map(int, value)
+ else:
+ return value and hda in value
def filter( self, hda ):
""" Filter out this value based on other values for job (if
https://bitbucket.org/galaxy/galaxy-central/commits/d9d4e2c0c9db/
Changeset: d9d4e2c0c9db
User: jmchilton
Date: 2014-05-06 15:54:30
Summary: Refactor Tool.handle_input result to return list of pairs instead of odict.
This added generality (allowing multiple outputs with the same output name) will let handle_input execute multiple jobs in subsequent changesets.
Affected #: 3 files
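A short illustration of why the return type changes; hda_a and hda_b are placeholder values standing in for output datasets:
    from galaxy.util.odict import odict

    hda_a, hda_b = "output of job 1", "output of job 2"    # placeholders
    out_data = odict()
    out_data[ "out_file1" ] = hda_a
    out_data[ "out_file1" ] = hda_b                        # clobbers the first entry
    outputs = [ ( "out_file1", hda_a ),
                ( "out_file1", hda_b ) ]                   # both jobs' outputs preserved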
diff -r 513acdcaca4d112f64b8cbe8fa2d62f3c69b6589 -r d9d4e2c0c9db2cee3789cfbb6c69ac58a2aea0d8 lib/galaxy/tools/__init__.py
--- a/lib/galaxy/tools/__init__.py
+++ b/lib/galaxy/tools/__init__.py
@@ -1965,7 +1965,7 @@
message = 'Error executing tool: %s' % str(e)
return False, message
if isinstance( out_data, odict ):
- return True, out_data
+ return True, out_data.items()
else:
if isinstance( out_data, str ):
message = out_data
diff -r 513acdcaca4d112f64b8cbe8fa2d62f3c69b6589 -r d9d4e2c0c9db2cee3789cfbb6c69ac58a2aea0d8 lib/galaxy/webapps/galaxy/api/tools.py
--- a/lib/galaxy/webapps/galaxy/api/tools.py
+++ b/lib/galaxy/webapps/galaxy/api/tools.py
@@ -133,7 +133,7 @@
return { "message": { "type": "error", "data" : vars[ 'errors' ] } }
# TODO: check for errors and ensure that output dataset(s) are available.
- output_datasets = vars.get( 'out_data', {} ).iteritems()
+ output_datasets = vars.get( 'out_data', [] )
rval = {
"outputs": []
}
diff -r 513acdcaca4d112f64b8cbe8fa2d62f3c69b6589 -r d9d4e2c0c9db2cee3789cfbb6c69ac58a2aea0d8 templates/webapps/galaxy/tool_executed.mako
--- a/templates/webapps/galaxy/tool_executed.mako
+++ b/templates/webapps/galaxy/tool_executed.mako
@@ -48,7 +48,7 @@
<p>The following job has been successfully added to the queue:</p>
-%for data in out_data.values():
+%for _, data in out_data:
<div style="padding: 10px"><b> ${data.hid}: ${data.name}</b></div>
%endfor
https://bitbucket.org/galaxy/galaxy-central/commits/2787ab3eb163/
Changeset: 2787ab3eb163
User: jmchilton
Date: 2014-05-06 15:54:30
Summary: Allow running tools on multiple datasets simultaneously (backend+API).
Each data tool parameter can be specified via a similarly named parameter with the suffix |__multirun__. This second parameter variant should be a list of datasets - one job will be created for each such dataset. In addition to various unit tests, functional tests demonstrating this functionality can be found in `test/functional/api/test_tools.py`.
Affected #: 9 files
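A sketch of the resulting API convention, modeled on test_multirun_cat1 below; the encoded ids are placeholders and the {"src": "hda", ...} reference format is assumed from the dataset_to_param helper used in the tests:
    encoded_id_1 = "placeholder-encoded-hda-id-1"          # assumed placeholder
    encoded_id_2 = "placeholder-encoded-hda-id-2"          # assumed placeholder
    inputs = {
        "input1|__multirun__": [
            { "src": "hda", "id": encoded_id_1 },
            { "src": "hda", "id": encoded_id_2 },
        ],
    }
    # Posting these inputs to the tool API creates one cat1 job per listed dataset.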
diff -r d9d4e2c0c9db2cee3789cfbb6c69ac58a2aea0d8 -r 2787ab3eb1633fb55f5a88340835c13e3ad92d75 lib/galaxy/tools/__init__.py
--- a/lib/galaxy/tools/__init__.py
+++ b/lib/galaxy/tools/__init__.py
@@ -54,6 +54,7 @@
from galaxy.tools.filters import FilterFactory
from galaxy.tools.test import parse_tests_elem
from galaxy.util import listify, parse_xml, rst_to_html, string_as_bool, string_to_object, xml_text, xml_to_string
+from galaxy.tools.parameters.meta import expand_meta_parameters
from galaxy.util.bunch import Bunch
from galaxy.util.expressions import ExpressionContext
from galaxy.util.hash_util import hmac_new
@@ -67,6 +68,7 @@
from tool_shed.util import common_util
from tool_shed.util import shed_util_common as suc
from .loader import load_tool, template_macro_params
+from .execute import execute as execute_job
from .wrappers import (
ToolParameterValueWrapper,
RawObjectWrapper,
@@ -1907,7 +1909,21 @@
message = 'Failure executing tool (attempting to rerun invalid job).'
return 'message.mako', dict( status='error', message=message, refresh_frames=[] )
- state, state_new = self.__fetch_state( trans, incoming, history, all_pages=all_pages )
+ # Fixed set of input parameters may correspond to any number of jobs.
+ # Expand these out to individual parameters for given jobs (tool
+ # executions).
+ expanded_incomings = expand_meta_parameters( trans, incoming, self.inputs )
+
+ # Remapping a single job to many jobs doesn't make sense, so disable
+ # remap if multi-runs of tools are being used.
+ if rerun_remap_job_id and len( expanded_incomings ) > 1:
+ message = 'Failure executing tool (cannot create multiple jobs when remapping existing job).'
+ return 'message.mako', dict( status='error', message=message, refresh_frames=[] )
+
+ all_states = []
+ for expanded_incoming in expanded_incomings:
+ state, state_new = self.__fetch_state( trans, expanded_incoming, history, all_pages=all_pages )
+ all_states.append( state )
if state_new:
# This feels a bit like a hack. It allows forcing full processing
# of inputs even when there is no state in the incoming dictionary
@@ -1921,7 +1937,13 @@
self.update_state( trans, self.inputs_by_page[state.page], state.inputs, incoming, old_errors=old_errors or {}, source=source )
return "tool_form.mako", dict( errors={}, tool_state=state, param_values={}, incoming={} )
- errors, params = self.__check_param_values( trans, incoming, state, old_errors, process_state, history=history, source=source )
+ all_errors = []
+ all_params = []
+ for expanded_incoming, expanded_state in zip(expanded_incomings, all_states):
+ errors, params = self.__check_param_values( trans, expanded_incoming, expanded_state, old_errors, process_state, history=history, source=source )
+ all_errors.append( errors )
+ all_params.append( params )
+
if self.__should_refresh_state( incoming ):
template, template_vars = self.__handle_state_refresh( trans, state, errors )
else:
@@ -1929,19 +1951,19 @@
# If there were errors, we stay on the same page and display
# error messages
- if errors:
+ if any( all_errors ):
error_message = "One or more errors were found in the input you provided. The specific errors are marked below."
template = "tool_form.mako"
template_vars = dict( errors=errors, tool_state=state, incoming=incoming, error_message=error_message )
# If we've completed the last page we can execute the tool
elif all_pages or state.page == self.last_page:
- tool_executed, result = self.__handle_tool_execute( trans, rerun_remap_job_id, params, history )
- if tool_executed:
+ execution_tracker = execute_job( trans, self, all_params, history=history, rerun_remap_job_id=rerun_remap_job_id )
+ if execution_tracker.successful_jobs:
template = 'tool_executed.mako'
- template_vars = dict( out_data=result )
+ template_vars = dict( out_data=execution_tracker.output_datasets, num_jobs=len( execution_tracker.successful_jobs ), job_errors=execution_tracker.execution_errors )
else:
template = 'message.mako'
- template_vars = dict( status='error', message=result, refresh_frames=[] )
+ template_vars = dict( status='error', message=execution_tracker.execution_errors[0], refresh_frames=[] )
# Otherwise move on to the next page
else:
template, template_vars = self.__handle_page_advance( trans, state, errors )
@@ -1950,13 +1972,14 @@
def __should_refresh_state( self, incoming ):
return not( 'runtool_btn' in incoming or 'URL' in incoming or 'ajax_upload' in incoming )
- def __handle_tool_execute( self, trans, rerun_remap_job_id, params, history ):
+ def handle_single_execution( self, trans, rerun_remap_job_id, params, history ):
"""
Return a pair with whether execution is successful as well as either
resulting output data or an error message indicating the problem.
"""
try:
- _, out_data = self.execute( trans, incoming=params, history=history, rerun_remap_job_id=rerun_remap_job_id )
+ params = self.__remove_meta_properties( params )
+ job, out_data = self.execute( trans, incoming=params, history=history, rerun_remap_job_id=rerun_remap_job_id )
except httpexceptions.HTTPFound, e:
#if it's a paste redirect exception, pass it up the stack
raise e
@@ -1965,7 +1988,7 @@
message = 'Error executing tool: %s' % str(e)
return False, message
if isinstance( out_data, odict ):
- return True, out_data.items()
+ return job, out_data.items()
else:
if isinstance( out_data, str ):
message = out_data
@@ -2414,8 +2437,27 @@
if error:
errors[ input.name ] = error
state[ input.name ] = value
+ state.update( self.__meta_properties_for_state( key, incoming, incoming_value, value ) )
return errors
+ def __remove_meta_properties( self, incoming ):
+ result = incoming.copy()
+ meta_property_suffixes = [
+ "__multirun__",
+ ]
+ for key, value in incoming.iteritems():
+ if any( map( lambda s: key.endswith(s), meta_property_suffixes ) ):
+ del result[ key ]
+ return result
+
+ def __meta_properties_for_state( self, key, incoming, incoming_val, state_val ):
+ meta_properties = {}
+ multirun_key = "%s|__multirun__" % key
+ if multirun_key in incoming:
+ multi_value = incoming[ multirun_key ]
+ meta_properties[ multirun_key ] = multi_value
+ return meta_properties
+
@property
def params_with_missing_data_table_entry( self ):
"""
diff -r d9d4e2c0c9db2cee3789cfbb6c69ac58a2aea0d8 -r 2787ab3eb1633fb55f5a88340835c13e3ad92d75 lib/galaxy/tools/evaluation.py
--- a/lib/galaxy/tools/evaluation.py
+++ b/lib/galaxy/tools/evaluation.py
@@ -23,6 +23,9 @@
from galaxy.tools.parameters.grouping import Conditional, Repeat
from galaxy.jobs.datasets import dataset_path_rewrites
+import logging
+log = logging.getLogger( __name__ )
+
class ToolEvaluator( object ):
""" An abstraction linking together a tool and a job runtime to evaluate
diff -r d9d4e2c0c9db2cee3789cfbb6c69ac58a2aea0d8 -r 2787ab3eb1633fb55f5a88340835c13e3ad92d75 lib/galaxy/tools/execute.py
--- /dev/null
+++ b/lib/galaxy/tools/execute.py
@@ -0,0 +1,47 @@
+"""
+Once state information has been calculated, handle actually executing tools
+from various states and tracking results.
+
+Later will also create implicit dataset collections from these executions.
+"""
+
+import logging
+log = logging.getLogger( __name__ )
+
+
+def execute( trans, tool, param_combinations, history, rerun_remap_job_id=None ):
+ """
+ Execute a tool and return object containing summary (output data, number of
+ failures, etc...).
+ """
+ execution_tracker = ToolExecutionTracker( tool, param_combinations )
+ for params in execution_tracker.param_combinations:
+ job, result = tool.handle_single_execution( trans, rerun_remap_job_id, params, history )
+ if job:
+ execution_tracker.record_success( job, result )
+ else:
+ execution_tracker.record_error( result )
+ return execution_tracker
+
+
+class ToolExecutionTracker( object ):
+ """
+ """
+
+ def __init__( self, tool, param_combinations ):
+ self.tool = tool
+ self.param_combinations = param_combinations
+ self.successful_jobs = []
+ self.failed_jobs = 0
+ self.execution_errors = []
+ self.output_datasets = []
+
+ def record_success( self, job, outputs ):
+ self.successful_jobs.append( job )
+ self.output_datasets.extend( outputs )
+
+ def record_error( self, error ):
+ self.failed_jobs += 1
+ self.execution_errors.append( error )
+
+__all__ = [ execute ]
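For context, a rough sketch of how the tool form code in lib/galaxy/tools/__init__.py (above) drives this module:
    execution_tracker = execute( trans, tool, all_params, history=history,
                                 rerun_remap_job_id=rerun_remap_job_id )
    if execution_tracker.successful_jobs:
        out_data = execution_tracker.output_datasets
    else:
        error_message = execution_tracker.execution_errors[ 0 ]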
diff -r d9d4e2c0c9db2cee3789cfbb6c69ac58a2aea0d8 -r 2787ab3eb1633fb55f5a88340835c13e3ad92d75 lib/galaxy/tools/parameters/meta.py
--- /dev/null
+++ b/lib/galaxy/tools/parameters/meta.py
@@ -0,0 +1,38 @@
+from galaxy.util import permutations
+from galaxy import util
+
+
+def expand_meta_parameters( trans, incoming, inputs ):
+ """
+ Take in a dictionary of raw incoming parameters and expand to a list
+ of expanded incoming parameters (one set of parameters per tool
+ execution).
+ """
+
+ def classifier( input_key ):
+ multirun_key = "%s|__multirun__" % input_key
+ if multirun_key in incoming:
+ multi_value = util.listify( incoming[ multirun_key ] )
+ if len( multi_value ) > 1:
+ return permutations.input_classification.MULTIPLIED, multi_value
+ else:
+ if len( multi_value ) == 0:
+ multi_value = None
+ return permutations.input_classification.SINGLE, multi_value[ 0 ]
+ else:
+ return permutations.input_classification.SINGLE, incoming[ input_key ]
+ # Stick in an unexpanded version of multirun keys so they can be replaced
+ # by expand_multi_inputs.
+ incoming_template = incoming.copy()
+
+ # Will reuse this in subsequent work, so design this way now...
+ def try_replace_key( key, suffix ):
+ found = key.endswith( suffix )
+ if found:
+ simple_key = key[ 0:-len( suffix ) ]
+ if simple_key not in incoming_template:
+ incoming_template[ simple_key ] = None
+ return found
+ for key, value in incoming.iteritems():
+ try_replace_key( key, "|__multirun__" )
+ return permutations.expand_multi_inputs( incoming_template, classifier )
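A rough illustration of the expansion with hypothetical values; note that the |__multirun__ key itself is carried along in each expanded dictionary and is only stripped later by __remove_meta_properties:
    incoming = { "input1|__multirun__": [ 1, 2 ], "queries_0|input2": 3 }
    # expand_meta_parameters( trans, incoming, inputs ) yields one parameter
    # dictionary per job, roughly:
    #   { "input1": 1, "queries_0|input2": 3, "input1|__multirun__": [ 1, 2 ] }
    #   { "input1": 2, "queries_0|input2": 3, "input1|__multirun__": [ 1, 2 ] }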
diff -r d9d4e2c0c9db2cee3789cfbb6c69ac58a2aea0d8 -r 2787ab3eb1633fb55f5a88340835c13e3ad92d75 lib/galaxy/util/permutations.py
--- /dev/null
+++ b/lib/galaxy/util/permutations.py
@@ -0,0 +1,114 @@
+""" There is some shared logic between matching/multiplying inputs in workflows
+and tools. This module is meant to capture some general permutation logic that
+can be applicable for both cases but will only be used in the newer tools case
+first.
+
+Maybe this doesn't make sense and maybe much of this stuff could be replaced
+with itertools product and permutations. These are open questions.
+"""
+from galaxy.util.bunch import Bunch
+from galaxy.exceptions import MessageException
+
+input_classification = Bunch(
+ SINGLE="single",
+ MATCHED="matched",
+ MULTIPLIED="multiplied",
+)
+
+
+class InputMatchedException( MessageException ):
+ """ Indicates problem matching inputs while building up inputs
+ permutations. """
+
+
+def expand_multi_inputs( inputs, classifier, key_filter=None ):
+ key_filter = key_filter or ( lambda x: True )
+
+ single_inputs, matched_multi_inputs, multiplied_multi_inputs = __split_inputs(
+ inputs,
+ classifier,
+ key_filter
+ )
+
+ # Build up every combination of inputs to be run together.
+ input_combos = __extend_with_matched_combos( single_inputs, matched_multi_inputs )
+ input_combos = __extend_with_multiplied_combos( input_combos, multiplied_multi_inputs )
+
+ return input_combos
+
+
+def __split_inputs( inputs, classifier, key_filter ):
+ key_filter = key_filter or ( lambda x: True )
+ input_keys = filter( key_filter, inputs )
+
+ single_inputs = {}
+ matched_multi_inputs = {}
+ multiplied_multi_inputs = {}
+
+ for input_key in input_keys:
+ input_type, expanded_val = classifier( input_key )
+ if input_type == input_classification.SINGLE:
+ single_inputs[ input_key ] = expanded_val
+ elif input_type == input_classification.MATCHED:
+ matched_multi_inputs[ input_key ] = expanded_val
+ elif input_type == input_classification.MULTIPLIED:
+ multiplied_multi_inputs[ input_key ] = expanded_val
+
+ return ( single_inputs, matched_multi_inputs, multiplied_multi_inputs )
+
+
+def __extend_with_matched_combos( single_inputs, multi_inputs ):
+ """
+
+ {a => 1, b => 2} and {c => {3, 4}, d => {5, 6}}
+
+ Becomes
+
+ [ {a => 1, b => 2, c => 3, d => 5}, {a => 1, b => 2, c => 4, d => 6}, ]
+
+ """
+
+ if len( multi_inputs ) == 0:
+ return [ single_inputs ]
+
+ matched_multi_inputs = []
+
+ first_multi_input_key = multi_inputs.keys()[ 0 ]
+ first_multi_value = multi_inputs.get(first_multi_input_key)
+
+ for value in first_multi_value:
+ new_inputs = __copy_and_extend_inputs( single_inputs, first_multi_input_key, value )
+ matched_multi_inputs.append( new_inputs )
+
+ for multi_input_key, multi_input_values in multi_inputs.iteritems():
+ if multi_input_key == first_multi_input_key:
+ continue
+ if len( multi_input_values ) != len( first_multi_value ):
+ raise InputMatchedException()
+
+ for index, value in enumerate( multi_input_values ):
+ matched_multi_inputs[ index ][ multi_input_key ] = value
+
+ return matched_multi_inputs
+
+
+def __extend_with_multiplied_combos( input_combos, multi_inputs ):
+ combos = input_combos
+
+ for multi_input_key, multi_input_value in multi_inputs.iteritems():
+ iter_combos = []
+
+ for combo in combos:
+ for input_value in multi_input_value:
+ iter_combo = __copy_and_extend_inputs( combo, multi_input_key, input_value )
+ iter_combos.append( iter_combo )
+
+ combos = iter_combos
+
+ return combos
+
+
+def __copy_and_extend_inputs( inputs, key, value ):
+ new_inputs = dict( inputs )
+ new_inputs[ key ] = value
+ return new_inputs
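A self-contained example of the helpers above using a hypothetical classifier (the tools layer supplies its own classifier in lib/galaxy/tools/parameters/meta.py):
    from galaxy.util.permutations import expand_multi_inputs, input_classification

    inputs = { "a": 1, "c": [ 3, 4 ] }

    def classifier( key ):
        # Treat list values as "multiplied" inputs, everything else as single.
        value = inputs[ key ]
        if isinstance( value, list ):
            return input_classification.MULTIPLIED, value
        return input_classification.SINGLE, value

    expand_multi_inputs( inputs, classifier )
    # -> [ { "a": 1, "c": 3 }, { "a": 1, "c": 4 } ]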
diff -r d9d4e2c0c9db2cee3789cfbb6c69ac58a2aea0d8 -r 2787ab3eb1633fb55f5a88340835c13e3ad92d75 lib/galaxy/webapps/galaxy/controllers/async.py
--- a/lib/galaxy/webapps/galaxy/controllers/async.py
+++ b/lib/galaxy/webapps/galaxy/controllers/async.py
@@ -135,4 +135,4 @@
trans.sa_session.flush()
- return trans.fill_template( 'tool_executed.mako', history=history, toolbox=toolbox, tool=tool, util=util, out_data={} )
+ return trans.fill_template( 'tool_executed.mako', history=history, toolbox=toolbox, tool=tool, util=util, out_data={}, num_jobs=1, job_errors=[] )
diff -r d9d4e2c0c9db2cee3789cfbb6c69ac58a2aea0d8 -r 2787ab3eb1633fb55f5a88340835c13e3ad92d75 templates/webapps/galaxy/tool_executed.mako
--- a/templates/webapps/galaxy/tool_executed.mako
+++ b/templates/webapps/galaxy/tool_executed.mako
@@ -46,8 +46,20 @@
<div class="donemessagelarge">
-<p>The following job has been successfully added to the queue:</p>
+%if num_jobs > 1:
+ <% jobs_str = "%d jobs have" % num_jobs %>
+%else:
+ <% jobs_str = "A job has" %>
+%endif
+%if len(out_data) == 1:
+ <% datasets_str = "dataset" %>
+%else:
+ <% datasets_str = "datasets" %>
+%endif
+<p>
+ ${jobs_str} been successfully added to the queue - resulting in the following ${datasets_str}:
+</p>
%for _, data in out_data:
<div style="padding: 10px"><b> ${data.hid}: ${data.name}</b></div>
%endfor
@@ -65,6 +77,18 @@
</div>
+%if job_errors:
+<div class="errormessagelarge">
+ There were errors setting up ${len(job_errors)} submitted job(s):
+ <ul>
+ <!-- Styling on this list is a little flat. Consider indenting these error messages. -->
+ %for job_error in job_errors:
+ <li><b>${job_error}</b></li>
+ %endfor
+ </ul>
+</div>
+%endif
+
</body></html>
diff -r d9d4e2c0c9db2cee3789cfbb6c69ac58a2aea0d8 -r 2787ab3eb1633fb55f5a88340835c13e3ad92d75 test/api/test_tools.py
--- a/test/api/test_tools.py
+++ b/test/api/test_tools.py
@@ -81,6 +81,76 @@
output1_content = self._get_content( history_id, dataset=output1 )
self.assertEqual( output1_content.strip(), "Cat1Test\nCat2Test" )
+ def test_multirun_cat1( self ):
+ self.__skip_without_tool( "cat1" )
+ history_id = self.dataset_populator.new_history()
+ new_dataset1 = self.dataset_populator.new_dataset( history_id, content='123' )
+ new_dataset2 = self.dataset_populator.new_dataset( history_id, content='456' )
+ inputs = {
+ "input1|__multirun__": [
+ dataset_to_param( new_dataset1 ),
+ dataset_to_param( new_dataset2 ),
+ ],
+ }
+ outputs = self._cat1_outputs( history_id, inputs=inputs )
+ self.assertEquals( len( outputs ), 2 )
+ self.dataset_populator.wait_for_history( history_id, assert_ok=True )
+ output1 = outputs[ 0 ]
+ output2 = outputs[ 1 ]
+ output1_content = self._get_content( history_id, dataset=output1 )
+ output2_content = self._get_content( history_id, dataset=output2 )
+ self.assertEquals( output1_content.strip(), "123" )
+ self.assertEquals( output2_content.strip(), "456" )
+
+ def test_multirun_in_repeat( self ):
+ self.__skip_without_tool( "cat1" )
+ history_id = self.dataset_populator.new_history()
+ new_dataset1 = self.dataset_populator.new_dataset( history_id, content='123' )
+ new_dataset2 = self.dataset_populator.new_dataset( history_id, content='456' )
+ common_dataset = self.dataset_populator.new_dataset( history_id, content='Common' )
+ inputs = {
+ "input1": dataset_to_param( common_dataset ),
+ 'queries_0|input2|__multirun__': [
+ dataset_to_param( new_dataset1 ),
+ dataset_to_param( new_dataset2 ),
+ ],
+ }
+ outputs = self._cat1_outputs( history_id, inputs=inputs )
+ self.assertEquals( len( outputs ), 2 )
+ self.dataset_populator.wait_for_history( history_id, assert_ok=True, timeout=10 )
+ output1 = outputs[ 0 ]
+ output2 = outputs[ 1 ]
+ output1_content = self._get_content( history_id, dataset=output1 )
+ output2_content = self._get_content( history_id, dataset=output2 )
+ self.assertEquals( output1_content.strip(), "Common\n123" )
+ self.assertEquals( output2_content.strip(), "Common\n456" )
+
+ def test_multirun_on_multiple_inputs( self ):
+ self.__skip_without_tool( "cat1" )
+ history_id = self.dataset_populator.new_history()
+ new_dataset1 = self.dataset_populator.new_dataset( history_id, content='123' )
+ new_dataset2 = self.dataset_populator.new_dataset( history_id, content='456' )
+ new_dataset3 = self.dataset_populator.new_dataset( history_id, content='789' )
+ new_dataset4 = self.dataset_populator.new_dataset( history_id, content='0ab' )
+ inputs = {
+ "input1|__multirun__": [
+ dataset_to_param( new_dataset1 ),
+ dataset_to_param( new_dataset2 ),
+ ],
+ 'queries_0|input2|__multirun__': [
+ dataset_to_param( new_dataset3 ),
+ dataset_to_param( new_dataset4 ),
+ ],
+ }
+ outputs = self._cat1_outputs( history_id, inputs=inputs )
+ self.assertEquals( len( outputs ), 4 )
+ self.dataset_populator.wait_for_history( history_id, assert_ok=True )
+ outputs_contents = [ self._get_content( history_id, dataset=o ).strip() for o in outputs ]
+ assert "123\n789" in outputs_contents
+ assert "456\n789" in outputs_contents
+ assert "123\n0ab" in outputs_contents
+ assert "456\n0ab" in outputs_contents
+
def _cat1_outputs( self, history_id, inputs ):
return self._run_outputs( self._run_cat1( history_id, inputs ) )
diff -r d9d4e2c0c9db2cee3789cfbb6c69ac58a2aea0d8 -r 2787ab3eb1633fb55f5a88340835c13e3ad92d75 test/unit/tools/test_execution.py
--- a/test/unit/tools/test_execution.py
+++ b/test/unit/tools/test_execution.py
@@ -171,6 +171,82 @@
state = self.__assert_rerenders_tool_without_errors( template, template_vars )
assert hda == state.inputs[ "param1" ]
+ def test_simple_multirun_state_update( self ):
+ hda1, hda2 = self.__setup_multirun_job()
+ template, template_vars = self.__handle_with_incoming( **{
+ "param1|__multirun__": [ 1, 2 ],
+ } )
+ state = self.__assert_rerenders_tool_without_errors( template, template_vars )
+ self.__assert_state_serializable( state )
+ self.assertEquals( state.inputs[ "param1|__multirun__" ], [ 1, 2 ] )
+
+ def test_simple_multirun_execution( self ):
+ hda1, hda2 = self.__setup_multirun_job()
+ template, template_vars = self.__handle_with_incoming( **{
+ "param1|__multirun__": [ 1, 2 ],
+ "runtool_btn": "dummy",
+ } )
+ self.__assert_exeuted( template, template_vars )
+ # Tool 'executed' twice, with param1 as hda1 and hda2 respectively.
+ assert len( self.tool_action.execution_call_args ) == 2
+ self.assertEquals( self.tool_action.execution_call_args[ 0 ][ "incoming" ][ "param1" ], hda1 )
+ self.assertEquals( self.tool_action.execution_call_args[ 1 ][ "incoming" ][ "param1" ], hda2 )
+
+ def test_cannot_multirun_and_remap( self ):
+ hda1, hda2 = self.__setup_multirun_job()
+ template, template_vars = self.__handle_with_incoming( **{
+ "param1|__multirun__": [ 1, 2 ],
+ "rerun_remap_job_id": self.app.security.encode_id(123), # Not encoded
+ "runtool_btn": "dummy",
+ } )
+ self.assertEquals( template, "message.mako" )
+ assert template_vars[ "status" ] == "error"
+ assert "multiple jobs" in template_vars[ "message" ]
+
+ def test_multirun_with_state_updates( self ):
+ hda1, hda2 = self.__setup_multirun_job()
+
+ # Fresh state contains no repeat elements
+ template, template_vars = self.__handle_with_incoming()
+ state = self.__assert_rerenders_tool_without_errors( template, template_vars )
+ assert len( state.inputs[ "repeat1" ] ) == 0
+ self.__assert_state_serializable( state )
+
+ # Hitting add button adds repeat element
+ template, template_vars = self.__handle_with_incoming( **{
+ "param1|__multirun__": [ 1, 2 ],
+ "repeat1_add": "dummy",
+ } )
+ state = self.__assert_rerenders_tool_without_errors( template, template_vars )
+ assert len( state.inputs[ "repeat1" ] ) == 1
+ self.assertEquals( state.inputs[ "param1|__multirun__" ], [ 1, 2 ] )
+ self.__assert_state_serializable( state )
+
+ # Hitting add button again adds another repeat element
+ template, template_vars = self.__handle_with_incoming( state, **{
+ "repeat1_add": "dummy",
+ "repeat1_0|param2": 1,
+ } )
+ state = self.__assert_rerenders_tool_without_errors( template, template_vars )
+ self.assertEquals( state.inputs[ "param1|__multirun__" ], [ 1, 2 ] )
+ assert len( state.inputs[ "repeat1" ] ) == 2
+ assert state.inputs[ "repeat1" ][ 0 ][ "param2" ] == hda1
+ self.__assert_state_serializable( state )
+
+ # Hitting remove drops a repeat element
+ template, template_vars = self.__handle_with_incoming( state, repeat1_1_remove="dummy" )
+ state = self.__assert_rerenders_tool_without_errors( template, template_vars )
+ assert len( state.inputs[ "repeat1" ] ) == 1
+ self.__assert_state_serializable( state )
+
+ def __assert_state_serializable( self, state ):
+ self.__state_to_string( state ) # Will throw an exception if there is a problem...
+
+ def __setup_multirun_job( self ):
+ self._init_tool( tools_support.SIMPLE_CAT_TOOL_CONTENTS )
+ hda1, hda2 = self.__add_dataset( 1 ), self.__add_dataset( 2 )
+ return hda1, hda2
+
def __handle_with_incoming( self, previous_state=None, **kwds ):
""" Execute tool.handle_input with incoming specified by kwds
(optionally extending a previous state).
@@ -187,6 +263,10 @@
def __to_incoming( self, state, **kwds ):
new_incoming = {}
params_to_incoming( new_incoming, self.tool.inputs, state.inputs, self.app )
+ # Copy meta parameters over lost by params_to_incoming...
+ for key, value in state.inputs.iteritems():
+ if key.endswith( "|__multirun__" ):
+ new_incoming[ key ] = value
new_incoming[ "tool_state" ] = self.__state_to_string( state )
new_incoming.update( kwds )
return new_incoming
https://bitbucket.org/galaxy/galaxy-central/commits/2c2c599a65d7/
Changeset: 2c2c599a65d7
User: jmchilton
Date: 2014-05-06 15:54:30
Summary: Improvements to test_execution.py
Affected #: 1 file
diff -r 2787ab3eb1633fb55f5a88340835c13e3ad92d75 -r 2c2c599a65d734ba6386f8bd76f213ece7e207b0 test/unit/tools/test_execution.py
--- a/test/unit/tools/test_execution.py
+++ b/test/unit/tools/test_execution.py
@@ -290,7 +290,11 @@
def __assert_exeuted( self, template, template_vars ):
if template == "tool_form.mako":
self.__assert_no_errors( template_vars )
- self.assertEquals(template, "tool_executed.mako")
+ self.assertEquals(
+ template,
+ "tool_executed.mako",
+ "Expected tools_execute template - got template %s with vars %s" % ( template, template_vars)
+ )
def __assert_no_errors( self, template_vars ):
assert "errors" in template_vars, "tool_form.mako rendered without errors defintion."
@@ -338,7 +342,7 @@
if num_calls > self.error_message_after_excution:
return None, "Test Error Message"
- return None, odict(dict(out1="1"))
+ return galaxy.model.Job(), odict(dict(out1="1"))
def raise_exception( self, after_execution=0 ):
self.exception_after_exection = after_execution
https://bitbucket.org/galaxy/galaxy-central/commits/0afc6080e504/
Changeset: 0afc6080e504
User: jmchilton
Date: 2014-05-06 15:54:30
Summary: Allow running tools on multiple datasets simultaneously (UI).
Update the tool form. This commit needs further work to ensure the change only affects tool_form.mako.
Affected #: 5 files
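Roughly, the Python side composes the existing dataset selects into the new SwitchingSelectField; in this sketch single_select and multirun_select are assumed to be already-built SelectField instances:
    fields = odict()
    fields[ "select_single" ] = single_select        # assumed existing SelectField
    fields[ "select_multiple" ] = multirun_select    # assumed existing SelectField
    field = SwitchingSelectField( fields, default_field="select_single" )
    # get_html() renders the primary field plus the script that lets the user
    # toggle between single-run and multi-run selection in the tool form.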
diff -r 2c2c599a65d734ba6386f8bd76f213ece7e207b0 -r 0afc6080e504e42cee1a30785cb0ebf6bffe4ca0 lib/galaxy/tools/parameters/basic.py
--- a/lib/galaxy/tools/parameters/basic.py
+++ b/lib/galaxy/tools/parameters/basic.py
@@ -13,6 +13,8 @@
from galaxy.web import form_builder
from galaxy.util.bunch import Bunch
from galaxy.util import string_as_bool, sanitize_param, unicodify
+from galaxy.util import listify
+from galaxy.util.odict import odict
from sanitize import ToolParameterSanitizer
import validation
import dynamic_options
@@ -1620,6 +1622,13 @@
self.options_filter_attribute = options.get( 'options_filter_attribute', None )
self.is_dynamic = self.options is not None
+ def _switch_fields( self, fields, default_field ):
+ if len(fields) > 1:
+ field = form_builder.SwitchingSelectField( fields, default_field=default_field )
+ else:
+ field = fields.values()[0]
+ return field
+
class DataToolParameter( BaseDataToolParameter ):
# TODO, Nate: Make sure the following unit tests appropriately test the dataset security
@@ -1654,12 +1663,41 @@
self.conversions.append( ( name, conv_extensions, conv_types ) )
def get_html_field( self, trans=None, value=None, other_values={} ):
- history = self._get_history( trans )
if value is not None:
if type( value ) != list:
value = [ value ]
+
+ history = self._get_history( trans )
dataset_matcher = DatasetMatcher( trans, self, value, other_values )
- field = form_builder.SelectField( self.name, self.multiple, None, self.refresh_on_change, refresh_on_change_values=self.refresh_on_change_values )
+ multiple = self.multiple
+ fields = odict()
+ if multiple:
+ # Select one dataset, run one job.
+ default_field = "multiselect_single"
+ multi_select = self._get_select_dataset_field( history, dataset_matcher, multiple=True )
+ fields[ "multiselect_single" ] = multi_select
+ else:
+ # Select one dataset, run one job.
+ default_field = "select_single"
+ single_select = self._get_select_dataset_field( history, dataset_matcher, multiple=False )
+ fields[ "select_single" ] = single_select
+
+ if self.__display_multirun_option():
+ # Select multiple datasets, run multiple jobs.
+ multirun_key = "%s|__multirun__" % self.name
+ if multirun_key in (other_values or {}):
+ multirun_value = listify( other_values[ multirun_key ] )
+ if multirun_value and len( multirun_value ) > 1:
+ default_field = "select_multiple"
+ else:
+ multirun_value = value
+ multi_dataset_matcher = DatasetMatcher( trans, self, multirun_value, other_values )
+ multi_select = self._get_select_dataset_field( history, multi_dataset_matcher, multiple=True, suffix="|__multirun__" )
+ fields[ "select_multiple" ] = multi_select
+
+ return self._switch_fields( fields, default_field=default_field )
+
+ def _get_select_dataset_field( self, history, dataset_matcher, multiple=False, suffix="" ):
# CRUCIAL: the dataset_collector function needs to be local to DataToolParameter.get_html_field()
def dataset_collector( hdas, parent_hid ):
@@ -1686,6 +1724,10 @@
field.add_option( "%s: (as %s) %s" % ( hid, target_ext, hda_name ), hda.id, selected )
# Also collect children via association object
dataset_collector( hda.children, hid )
+
+ field_name = "%s%s" % ( self.name, suffix )
+ field = form_builder.SelectField( field_name, multiple, None, self.refresh_on_change, refresh_on_change_values=self.refresh_on_change_values )
+
dataset_collector( history.active_datasets_children_and_roles, None )
self._ensure_selection( field )
return field
@@ -1845,6 +1887,16 @@
self.tool.visit_inputs( other_values, visitor )
return False not in converter_safe
+ def __display_multirun_option( self ):
+ """ Certain parameters may not make sense to allow multi-run variants
+ of, for instance if other parameters are filtered or constrained based on
+ this one. TODO: Figure out if these exist and how to detect them (
+ for instance should I just be checking dynamic options).
+ """
+ allow = True
+ # TODO: allow should be false in some cases...
+ return allow
+
def _options_filter_attribute( self, value ):
#HACK to get around current hardcoded limitation of when a set of dynamic options is defined for a DataToolParameter
#it always causes available datasets to be filtered by dbkey
diff -r 2c2c599a65d734ba6386f8bd76f213ece7e207b0 -r 0afc6080e504e42cee1a30785cb0ebf6bffe4ca0 lib/galaxy/web/form_builder.py
--- a/lib/galaxy/web/form_builder.py
+++ b/lib/galaxy/web/form_builder.py
@@ -7,6 +7,7 @@
from operator import itemgetter
from cgi import escape
from galaxy.util import restore_text, relpath, nice_size, unicodify
+from galaxy.util.json import to_json_string
from galaxy.web import url_for
from binascii import hexlify
@@ -406,6 +407,14 @@
return selected_options
return None
+ def to_dict( self ):
+ return dict(
+ name=self.name,
+ multiple=self.multiple,
+ options=self.options
+ )
+
+
class DrillDownField( BaseField ):
"""
A hierarchical select field, which allows users to 'drill down' a tree-like set of options.
@@ -535,6 +544,42 @@
rval.append( '</div>' )
return unicodify( '\n'.join( rval ) )
+
+class SwitchingSelectField(BaseField):
+
+ def __init__( self, delegate_fields, default_field=None ):
+ self.delegate_fields = delegate_fields
+ self.default_field = default_field or delegate_fields.keys()[ 0 ]
+
+ @property
+ def primary_field( self ):
+ primary_field = self.delegate_fields[ self.delegate_fields.keys()[ 0 ] ]
+ return primary_field
+
+ def get_html( self, prefix="", disabled=False ):
+ primary_field = self.primary_field
+ html = '<div class="switch-option">'
+ html += primary_field.get_html( prefix=prefix, disabled=disabled )
+ html += '<input name="__switch_default__" type="hidden" value="%s" />' % self.default_field
+ options = []
+ for name, delegate_field in self.delegate_fields.items():
+ field = to_json_string( delegate_field.to_dict() )
+ option = " '%s': %s" % ( name, field )
+ options.append( option )
+ html += '<script>$(document).ready( function() {\nvar switchOptions = {\n'
+ html += ','.join( options )
+ html += '};\n'
+ html += 'if ( window.enhanced_galaxy_tools ) {\n'
+ html += 'require( [ "galaxy.tools" ], function( mod_tools ) { new mod_tools.SwitchSelectView({\n'
+ html += 'el: $(\'[name="%s%s"]\').closest( "div.switch-option" ),' % ( prefix, primary_field.name )
+ html += 'default_option: "%s",\n' % self.default_field
+ html += 'prefix: "%s",\n' % prefix
+ html += 'switch_options: switchOptions }); } )\n'
+ html += "}"
+ html += '});\n</script></div>'
+ return html
+
+
class AddressField(BaseField):
@staticmethod
def fields():
diff -r 2c2c599a65d734ba6386f8bd76f213ece7e207b0 -r 0afc6080e504e42cee1a30785cb0ebf6bffe4ca0 static/scripts/galaxy.tools.js
--- a/static/scripts/galaxy.tools.js
+++ b/static/scripts/galaxy.tools.js
@@ -1,5 +1,5 @@
// dependencies
-define([ "mvc/tools" ], function( Tools ) {
+define([ "libs/underscore", "mvc/tools" ], function( _, Tools ) {
var checkUncheckAll = function( name, check ) {
$("input[name='" + name + "'][type='checkbox']").attr('checked', !!check);
@@ -17,4 +17,110 @@
$(this).append(select_link).append(" ").append(unselect_link);
});
+ var SELECTION_TYPE = {
+ 'select_single': {
+ 'icon_class': 'fa-file-o',
+ 'select_by': 'Run tool on single input',
+ 'allow_remap': true
+ },
+ 'select_multiple': {
+ 'icon_class': 'fa-files-o',
+ 'select_by': 'Run tool in parallel across multiple datasets',
+ 'allow_remap': false,
+ 'min_option_count': 2 // Don't show multiple select switch if only
+ // one dataset available.
+ },
+ 'select_collection': { // NOT YET IMPLEMENTED
+ 'icon_class': 'fa-folder-o',
+ 'select_by': 'Run tool in parallel across dataset collection',
+ 'allow_remap': false
+ },
+ 'multiselect_single': {
+ 'icon_class': 'fa-list-alt',
+ 'select_by': 'Run tool over multiple datasets',
+ 'allow_remap': true
+ }
+ };
+
+ var SwitchSelectView = Backbone.View.extend({
+ initialize: function( data ) {
+ var defaultOption = data.default_option;
+ var defaultIndex = null;
+ var switchOptions = data.switch_options;
+ this.switchOptions = switchOptions;
+ this.prefix = data.prefix;
+ var el = this.$el;
+ var view = this;
+
+ var index = 0;
+ _.each( this.switchOptions, function( option, onValue ) {
+ var numValues = _.size( option.options );
+ var selectionType = SELECTION_TYPE[ onValue ];
+ var iIndex = index++;
+ var hidden = false;
+ if( defaultOption == onValue ) {
+ defaultIndex = iIndex;
+ } else if( numValues < ( selectionType.min_option_count || 1 ) ) {
+ hidden = true;
+ }
+ if( ! hidden ) {
+ var button = $('<i class="fa ' + selectionType['icon_class'] + '" style="padding-left: 5px; padding-right: 2px;"></i>').click(function() {
+ view.enableSelectBy( iIndex, onValue );
+ }).attr(
+ 'title',
+ selectionType['select_by']
+ );
+ view.formRow().find( "label" ).append( button );
+ }
+ });
+ if( defaultIndex != null) {
+ view.enableSelectBy( defaultIndex, defaultOption );
+ }
+ },
+
+ formRow: function() {
+ return this.$el.closest( ".form-row" );
+ },
+
+ render: function() {
+ },
+
+ enableSelectBy: function( enableIndex, onValue ) {
+ var selectionType = SELECTION_TYPE[onValue];
+ if(selectionType["allow_remap"]) {
+ $("div#remap-row").css("display", "none");
+ } else {
+ $("div#remap-row").css("display", "inherit");
+ }
+ this.formRow().find( "i" ).each(function(index, iElement) {
+ if(index == enableIndex) {
+ $(iElement).css('color', 'black');
+ } else {
+ $(iElement).css('color', 'Gray');
+ }
+ });
+ var $select = this.$( "select" );
+ var options = this.switchOptions[ onValue ];
+ $select.attr( "name", this.prefix + options.name );
+ $select.attr( "multiple", options.multiple );
+ // Replace options regardless.
+ var select2ed = this.$(".select2-container").length > 0;
+ $select.html(""); // clear out select list
+ _.each( options.options, function( option ) {
+ var text = option[0];
+ var value = option[1];
+ var selected = option[2];
+ $select.append($("<option />", {text: text, val: value, selected: selected}));
+ });
+ if( select2ed ) {
+ // Without this select2 does not update options.
+ $select.select2();
+ }
+ }
+ });
+
+ return {
+ SwitchSelectView: SwitchSelectView
+ };
+
});
diff -r 2c2c599a65d734ba6386f8bd76f213ece7e207b0 -r 0afc6080e504e42cee1a30785cb0ebf6bffe4ca0 templates/webapps/galaxy/tool_form.mako
--- a/templates/webapps/galaxy/tool_form.mako
+++ b/templates/webapps/galaxy/tool_form.mako
@@ -16,6 +16,7 @@
${h.js( "galaxy.panels", "libs/jquery/jstorage" )}
<script type="text/javascript">
require( [ "galaxy.tools" ] );
+ window.enhanced_galaxy_tools = true;
</script><script type="text/javascript">
$(function() {
diff -r 2c2c599a65d734ba6386f8bd76f213ece7e207b0 -r 0afc6080e504e42cee1a30785cb0ebf6bffe4ca0 test/unit/tools/test_data_parameters.py
--- a/test/unit/tools/test_data_parameters.py
+++ b/test/unit/tools/test_data_parameters.py
@@ -39,7 +39,7 @@
hda1 = MockHistoryDatasetAssociation( name="hda1", id=1 )
hda2 = MockHistoryDatasetAssociation( name="hda2", id=2 )
self.stub_active_datasets( hda1, hda2 )
- field = self.param.get_html_field( trans=self.trans )
+ field = self._simple_field()
assert len( field.options ) == 2
assert field.options[ 0 ][ 0 ] == "1: hda1"
assert field.options[ 1 ][ 0 ] == "2: hda2"
@@ -48,7 +48,7 @@
assert not field.options[ 0 ][ 2 ] # Others not selected
hda2.datatype_matches = False
- field = self.param.get_html_field( trans=self.trans )
+ field = self._simple_field()
assert len( field.options ) == 1
assert field.options[ 0 ][ 2 ] is True # Last one selected
@@ -58,8 +58,8 @@
self.stub_active_datasets( hda1, hda2 )
hda1.visible = False
hda2.visible = False
- field = self.param.get_html_field( trans=self.trans, value=hda2 )
- assert len( field.options ) == 1 # hda1 not an option, not visible or selected
+ field = self._simple_field( value=hda2 )
+ self.assertEquals( len( field.options ), 1 ) # hda1 not an option, not visible or selected
assert field.options[ 0 ][ 0 ] == "2: (hidden) hda2"
def test_field_implicit_conversion_new( self ):
@@ -67,7 +67,7 @@
hda1.datatype_matches = False
hda1.conversion_destination = ( "tabular", None )
self.stub_active_datasets( hda1 )
- field = self.param.get_html_field( trans=self.trans )
+ field = self._simple_field()
assert len( field.options ) == 1
assert field.options[ 0 ][ 0 ] == "1: (as tabular) hda1"
assert field.options[ 0 ][ 1 ] == 1
@@ -77,7 +77,7 @@
hda1.datatype_matches = False
hda1.conversion_destination = ( "tabular", MockHistoryDatasetAssociation( name="hda1converted", id=2 ) )
self.stub_active_datasets( hda1 )
- field = self.param.get_html_field( trans=self.trans )
+ field = self._simple_field()
assert len( field.options ) == 1
assert field.options[ 0 ][ 0 ] == "1: (as tabular) hda1"
# This is difference with previous test, value is existing
@@ -86,16 +86,16 @@
def test_field_multiple( self ):
self.multiple = True
- field = self.param.get_html_field( trans=self.trans )
+ field = self._simple_field()
assert field.multiple
def test_field_empty_selection( self ):
- field = self.param.get_html_field( trans=self.trans )
+ field = self._simple_field()
assert len( field.options ) == 0
def test_field_empty_selection_optional( self ):
self.optional = True
- field = self.param.get_html_field( trans=self.trans )
+ field = self._simple_field()
assert len( field.options ) == 1
option = field.options[ 0 ]
assert option[ 0 ] == "Selection is Optional"
@@ -171,6 +171,12 @@
def stub_active_datasets( self, *hdas ):
self.test_history._active_datasets_children_and_roles = hdas
+ def _simple_field( self, **kwds ):
+ field = self.param.get_html_field( trans=self.trans, **kwds )
+ if hasattr( field, "primary_field" ):
+ field = field.primary_field
+ return field
+
@property
def param( self ):
if not self._param:
https://bitbucket.org/galaxy/galaxy-central/commits/0251eeedd884/
Changeset: 0251eeedd884
User: jmchilton
Date: 2014-05-06 15:54:30
Summary: Rework dataset collection parameter to use switching field (like data parameter).
Will allow interesting map/reduce junk over nested collections downstream.
Affected #: 1 file
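For orientation, here is a minimal standalone sketch of the switching-field pattern this changeset reuses (this is not Galaxy code; the field names and dict shapes below are illustrative only): each way of selecting an input gets its own named delegate field, and one of them is marked as the default the client-side widget shows first.

    from collections import OrderedDict

    def switch_fields(fields, default_field=None):
        # Mirrors the role of _switch_fields(): keep every delegate field and
        # remember which one should be active by default.
        default_field = default_field or list(fields.keys())[0]
        return {"delegates": fields, "default": default_field}

    # For a data parameter the delegates might be the plain select and the
    # multi-run select; this changeset gives the collection parameter the
    # same treatment, starting with a single "single_collection" delegate.
    fields = OrderedDict()
    fields["select_single"] = {"name": "input1", "multiple": False}
    fields["select_multiple"] = {"name": "input1|__multirun__", "multiple": True}
    field = switch_fields(fields, default_field="select_single")
    print("default: %s, delegates: %s" % (field["default"], list(field["delegates"])))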
diff -r 0afc6080e504e42cee1a30785cb0ebf6bffe4ca0 -r 0251eeedd88431e4a0762db4a3c905136f81e22d lib/galaxy/tools/parameters/basic.py
--- a/lib/galaxy/tools/parameters/basic.py
+++ b/lib/galaxy/tools/parameters/basic.py
@@ -1930,14 +1930,20 @@
def get_html_field( self, trans=None, value=None, other_values={} ):
# dropped refresh values, may be needed..
+ default_field = "single_collection"
+ fields = {}
+
+ history = self._get_history( trans )
+ fields[ "single_collection" ] = self._get_single_collection_field( trans, history=history, value=value, other_values=other_values )
+ return self._switch_fields( fields, default_field=default_field )
+
+ def _get_single_collection_field( self, trans, history, value, other_values ):
field = form_builder.SelectField( self.name, self.multiple, None, self.refresh_on_change, refresh_on_change_values=self.refresh_on_change_values )
- history = self._get_history( trans )
dataset_collections = trans.app.dataset_collections_service.history_dataset_collections( history, self.history_query )
dataset_matcher = DatasetMatcher( trans, self, value, other_values )
dataset_collection_matcher = DatasetCollectionMatcher( dataset_matcher )
for dataset_collection_instance in dataset_collections:
- log.info("Processing dataset collection instance....")
if not dataset_collection_matcher.hdca_match( dataset_collection_instance ):
continue
instance_id = dataset_collection_instance.hid
https://bitbucket.org/galaxy/galaxy-central/commits/8f6412298498/
Changeset: 8f6412298498
User: jmchilton
Date: 2014-05-06 15:54:30
Summary: Add tool meta parameter exception.
Affected #: 2 files
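As a rough standalone sketch of the contract this adds (not the actual Galaxy classes): the new exception carries an HTTP status of 400 plus the numeric code registered below as USER_TOOL_META_PARAMETER_PROBLEM (400011), so API callers can tell bad meta-parameter combinations apart from other 400 errors.

    class MessageExceptionSketch(Exception):
        # Stand-in for galaxy.exceptions.MessageException.
        status_code = 400
        err_code = None

    class ToolMetaParameterExceptionSketch(MessageExceptionSketch):
        status_code = 400
        err_code = 400011  # USER_TOOL_META_PARAMETER_PROBLEM

    try:
        raise ToolMetaParameterExceptionSketch(
            "Supplied incorrect or incompatible tool meta parameters.")
    except ToolMetaParameterExceptionSketch as e:
        print("%s %s: %s" % (e.status_code, e.err_code, e))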
diff -r 0251eeedd88431e4a0762db4a3c905136f81e22d -r 8f64122984989bc13b5029b4d7abc9c95d2724f5 lib/galaxy/exceptions/__init__.py
--- a/lib/galaxy/exceptions/__init__.py
+++ b/lib/galaxy/exceptions/__init__.py
@@ -68,6 +68,10 @@
status_code = 400
err_code = error_codes.USER_REQUEST_MISSING_PARAMETER
+class ToolMetaParameterException( MessageException ):
+ status_code = 400
+ err_code = error_codes.USER_TOOL_META_PARAMETER_PROBLEM
+
class RequestParameterInvalidException( MessageException ):
status_code = 400
err_code = error_codes.USER_REQUEST_INVALID_PARAMETER
diff -r 0251eeedd88431e4a0762db4a3c905136f81e22d -r 8f64122984989bc13b5029b4d7abc9c95d2724f5 lib/galaxy/exceptions/error_codes.json
--- a/lib/galaxy/exceptions/error_codes.json
+++ b/lib/galaxy/exceptions/error_codes.json
@@ -55,6 +55,11 @@
"message": "The request contains unknown type of contents."
},
{
+ "name": "USER_TOOL_META_PARAMETER_PROBLEM",
+ "code": 400011,
+ "message": "Supplied incorrect or incompatible tool meta parameters."
+ },
+ {
"name": "USER_NO_API_KEY",
"code": 403001,
"message": "API authentication required for this request"
https://bitbucket.org/galaxy/galaxy-central/commits/5c15644414d5/
Changeset: 5c15644414d5
User: jmchilton
Date: 2014-05-06 15:54:30
Summary: Dataset collections - "map" with existing tools (backend).
Allow replacing data parameter inputs with collections - this will cause the tool to produce multiple jobs for the submission, one for each combination of input parameters after the collections have been matched up (linked). In addition to various unit tests, functional tests demonstrate the API usage in `test/api/test_tools.py`.
Affected #: 6 files
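The core idea, sketched standalone below (illustrative, not Galaxy code), is the expansion step: a submission whose data input carries the "|__collection_multirun__" meta parameter is expanded into one parameter set per collection element, with all other inputs held fixed - roughly what expand_meta_parameters / expand_multi_inputs do in the diff.

    def expand_over_collection(incoming, mapped_key, elements):
        # Build one job's worth of parameters per element of the mapped collection.
        expanded = []
        for element in elements:
            params = dict(incoming)
            params[mapped_key] = element
            expanded.append(params)
        return expanded

    incoming = {"input1": None, "queries_0|input2": "hda_7"}
    jobs = expand_over_collection(incoming, "input1", ["pair element: left", "pair element: right"])
    for params in jobs:
        print(params)  # two jobs for a mapped pair, other parameters unchanged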
diff -r 8f64122984989bc13b5029b4d7abc9c95d2724f5 -r 5c15644414d58b65a6c72891ee4cfbea7ba18bab lib/galaxy/tools/__init__.py
--- a/lib/galaxy/tools/__init__.py
+++ b/lib/galaxy/tools/__init__.py
@@ -35,6 +35,7 @@
from galaxy import jobs, model
from galaxy.jobs.error_level import StdioErrorLevel
from galaxy.datatypes.metadata import JobExternalOutputMetadataWrapper
+from galaxy import exceptions
from galaxy.jobs import ParallelismInfo
from galaxy.tools.actions import DefaultToolAction
from galaxy.tools.actions.data_source import DataSourceToolAction
@@ -1914,6 +1915,9 @@
# executions).
expanded_incomings = expand_meta_parameters( trans, incoming, self.inputs )
+ if not expanded_incomings:
+ raise exceptions.MessageException( "Tool execution failed, trying to run a tool over an empty collection." )
+
# Remapping a single job to many jobs doesn't make sense, so disable
# remap if multi-runs of tools are being used.
if rerun_remap_job_id and len( expanded_incomings ) > 1:
@@ -2444,6 +2448,7 @@
result = incoming.copy()
meta_property_suffixes = [
"__multirun__",
+ "__collection_multirun__",
]
for key, value in incoming.iteritems():
if any( map( lambda s: key.endswith(s), meta_property_suffixes ) ):
diff -r 8f64122984989bc13b5029b4d7abc9c95d2724f5 -r 5c15644414d58b65a6c72891ee4cfbea7ba18bab lib/galaxy/tools/actions/__init__.py
--- a/lib/galaxy/tools/actions/__init__.py
+++ b/lib/galaxy/tools/actions/__init__.py
@@ -112,7 +112,7 @@
#allow explicit conversion to be stored in job_parameter table
target_dict[ conversion_name ] = conversion_data.id # a more robust way to determine JSONable value is desired
elif isinstance( input, DataCollectionToolParameter ):
- for i, v in enumerate( value.collection.datasets ):
+ for i, v in enumerate( value.collection.dataset_instances ):
data = v
current_user_roles = trans.get_current_user_roles()
if not trans.app.security_agent.can_access_dataset( current_user_roles, data.dataset ):
diff -r 8f64122984989bc13b5029b4d7abc9c95d2724f5 -r 5c15644414d58b65a6c72891ee4cfbea7ba18bab lib/galaxy/tools/parameters/dataset_matcher.py
--- a/lib/galaxy/tools/parameters/dataset_matcher.py
+++ b/lib/galaxy/tools/parameters/dataset_matcher.py
@@ -135,6 +135,13 @@
# Simplify things for now and assume these are hdas and not implicit
# converts. One could imagine handling both of those cases down the
# road.
+ if element.ldda:
+ return False
+
+ child_collection = element.child_collection
+ if child_collection:
+ return self.dataset_collection_match( child_collection )
+
hda = element.hda
if not hda:
return False
@@ -142,12 +149,14 @@
return hda_match and not hda_match.implicit_conversion
def hdca_match( self, history_dataset_collection_association ):
+ return self.dataset_collection_match( history_dataset_collection_association.collection )
+
+ def dataset_collection_match( self, dataset_collection ):
valid = True
- for element in history_dataset_collection_association.collection.datasets:
+ for element in dataset_collection.elements:
if not self.__valid_element( element ):
valid = False
break
return valid
-
__all__ = [ DatasetMatcher, DatasetCollectionMatcher ]
diff -r 8f64122984989bc13b5029b4d7abc9c95d2724f5 -r 5c15644414d58b65a6c72891ee4cfbea7ba18bab lib/galaxy/tools/parameters/meta.py
--- a/lib/galaxy/tools/parameters/meta.py
+++ b/lib/galaxy/tools/parameters/meta.py
@@ -1,5 +1,7 @@
from galaxy.util import permutations
+from galaxy import model
from galaxy import util
+from galaxy import exceptions
def expand_meta_parameters( trans, incoming, inputs ):
@@ -21,6 +23,18 @@
return permutations.input_classification.SINGLE, multi_value[ 0 ]
else:
return permutations.input_classification.SINGLE, incoming[ input_key ]
+
+ def collection_classifier( input_key ):
+ multirun_key = "%s|__collection_multirun__" % input_key
+ if multirun_key in incoming:
+ encoded_hdc_id = incoming[ multirun_key ]
+ hdc_id = trans.app.security.decode_id( encoded_hdc_id )
+ hdc = trans.sa_session.query( model.HistoryDatasetCollectionAssociation ).get( hdc_id )
+ hdas = hdc.collection.dataset_instances
+ return permutations.input_classification.MATCHED, hdas
+ else:
+ return permutations.input_classification.SINGLE, incoming[ input_key ]
+
# Stick an unexpanded version of multirun keys so they can be replaced,
# by expand_mult_inputs.
incoming_template = incoming.copy()
@@ -33,6 +47,19 @@
if simple_key not in incoming_template:
incoming_template[ simple_key ] = None
return found
+
+ multirun_found = False
+ collection_multirun_found = False
for key, value in incoming.iteritems():
- try_replace_key( key, "|__multirun__" )
- return permutations.expand_multi_inputs( incoming_template, classifier )
+ multirun_found = try_replace_key( key, "|__multirun__" ) or multirun_found
+ collection_multirun_found = try_replace_key( key, "|__collection_multirun__" )
+
+ if multirun_found and collection_multirun_found:
+ # In theory doable, but too complicated for a first pass.
+ message = "Cannot specify parallel execution across both multiple datasets and dataset collections."
+ raise exceptions.ToolMetaParameterException( message )
+
+ if multirun_found:
+ return permutations.expand_multi_inputs( incoming_template, classifier )
+ else:
+ return permutations.expand_multi_inputs( incoming_template, collection_classifier )
diff -r 8f64122984989bc13b5029b4d7abc9c95d2724f5 -r 5c15644414d58b65a6c72891ee4cfbea7ba18bab test/api/test_tools.py
--- a/test/api/test_tools.py
+++ b/test/api/test_tools.py
@@ -2,6 +2,7 @@
from base import api
from operator import itemgetter
from .helpers import DatasetPopulator
+from .test_dataset_collections import DatasetCollectionPopulator
class ToolsTestCase( api.ApiTestCase ):
@@ -9,6 +10,7 @@
def setUp( self ):
super( ToolsTestCase, self ).setUp( )
self.dataset_populator = DatasetPopulator( self.galaxy_interactor )
+ self.dataset_collection_populator = DatasetCollectionPopulator( self.galaxy_interactor )
def test_index( self ):
tool_ids = self.__tool_ids()
@@ -151,6 +153,55 @@
assert "123\n0ab" in outputs_contents
assert "456\n0ab" in outputs_contents
+ def test_map_over_collection( self ):
+ self.__skip_without_tool( "cat1" )
+ history_id = self.dataset_populator.new_history()
+ hdca_id = self.__build_pair( history_id, [ "123", "456" ] )
+ inputs = {
+ "input1|__collection_multirun__": hdca_id,
+ }
+ outputs = self._cat1_outputs( history_id, inputs=inputs )
+ self.assertEquals( len( outputs ), 2 )
+ self.dataset_populator.wait_for_history( history_id, assert_ok=True )
+ output1 = outputs[ 0 ]
+ output2 = outputs[ 1 ]
+ output1_content = self._get_content( history_id, dataset=output1 )
+ output2_content = self._get_content( history_id, dataset=output2 )
+ self.assertEquals( output1_content.strip(), "123" )
+ self.assertEquals( output2_content.strip(), "456" )
+
+ def test_map_over_two_collections( self ):
+ history_id = self.dataset_populator.new_history()
+ hdca1_id = self.__build_pair( history_id, [ "123", "456" ] )
+ hdca2_id = self.__build_pair( history_id, [ "789", "0ab" ] )
+ inputs = {
+ "input1|__collection_multirun__": hdca1_id,
+ "queries_0|input2|__collection_multirun__": hdca2_id,
+ }
+ outputs = self._cat1_outputs( history_id, inputs=inputs )
+ self.assertEquals( len( outputs ), 2 )
+ self.dataset_populator.wait_for_history( history_id, assert_ok=True )
+ output1 = outputs[ 0 ]
+ output2 = outputs[ 1 ]
+ output1_content = self._get_content( history_id, dataset=output1 )
+ output2_content = self._get_content( history_id, dataset=output2 )
+ self.assertEquals( output1_content.strip(), "123\n789" )
+ self.assertEquals( output2_content.strip(), "456\n0ab" )
+
+ def test_cannot_map_over_incompatible_collections( self ):
+ self.__skip_without_tool( "cat1" )
+ history_id = self.dataset_populator.new_history()
+ hdca1_id = self.__build_pair( history_id, [ "123", "456" ] )
+ hdca2_id = self.dataset_collection_populator.create_list_in_history( history_id ).json()[ "id" ]
+ inputs = {
+ "input1|__collection_multirun__": hdca1_id,
+ "queries_0|input2|__collection_multirun__": hdca2_id,
+ }
+ run_response = self._run_cat1( history_id, inputs )
+ # TODO: Fix this error checking once switch over to new API decorator
+ # on server.
+ assert run_response.status_code >= 400
+
def _cat1_outputs( self, history_id, inputs ):
return self._run_outputs( self._run_cat1( history_id, inputs ) )
@@ -209,6 +260,11 @@
if tool_id not in self.__tool_ids( ):
raise SkipTest( )
+ def __build_pair( self, history_id, contents ):
+ create_response = self.dataset_collection_populator.create_pair_in_history( history_id, contents=contents )
+ hdca_id = create_response.json()[ "id" ]
+ return hdca_id
+
def dataset_to_param( dataset ):
return dict(
diff -r 8f64122984989bc13b5029b4d7abc9c95d2724f5 -r 5c15644414d58b65a6c72891ee4cfbea7ba18bab test/unit/tools/test_execution.py
--- a/test/unit/tools/test_execution.py
+++ b/test/unit/tools/test_execution.py
@@ -239,6 +239,31 @@
assert len( state.inputs[ "repeat1" ] ) == 1
self.__assert_state_serializable( state )
+ def test_collection_multirun_with_state_updates( self ):
+ hda1, hda2 = self.__setup_multirun_job()
+ collection = self.__history_dataset_collection_for( [ hda1, hda2 ] )
+ collection_id = self.app.security.encode_id( collection.id )
+ template, template_vars = self.__handle_with_incoming( **{
+ "param1|__collection_multirun__": collection_id,
+ "runtool_btn": "dummy",
+ } )
+ self.__assert_exeuted( template, template_vars )
+
+ def __history_dataset_collection_for( self, hdas, id=1234 ):
+ collection = galaxy.model.DatasetCollection()
+ to_element = lambda hda: galaxy.model.DatasetCollectionElement(
+ collection=collection,
+ element=hda,
+ )
+ collection.datasets = map(to_element, hdas)
+ history_dataset_collection_association = galaxy.model.HistoryDatasetCollectionAssociation(
+ id=id,
+ collection=collection,
+ )
+ hdcas = self.trans.sa_session.model_objects[ galaxy.model.HistoryDatasetCollectionAssociation ]
+ hdcas[ id ] = history_dataset_collection_association
+ return history_dataset_collection_association
+
def __assert_state_serializable( self, state ):
self.__state_to_string( state ) # Will thrown exception if there is a problem...
https://bitbucket.org/galaxy/galaxy-central/commits/91b65da31341/
Changeset: 91b65da31341
User: jmchilton
Date: 2014-05-06 15:54:30
Summary: Dataset collections - "map" with existing tools (UI).
Add UI components needed for previous commit.
Affected #: 1 file
diff -r 5c15644414d58b65a6c72891ee4cfbea7ba18bab -r 91b65da313410e85bda795241f50a5e9f9417804 lib/galaxy/tools/parameters/basic.py
--- a/lib/galaxy/tools/parameters/basic.py
+++ b/lib/galaxy/tools/parameters/basic.py
@@ -1694,9 +1694,30 @@
multi_dataset_matcher = DatasetMatcher( trans, self, multirun_value, other_values )
multi_select = self._get_select_dataset_field( history, multi_dataset_matcher, multiple=True, suffix="|__multirun__" )
fields[ "select_multiple" ] = multi_select
+ collection_field = self._get_select_dataset_collection_fields( dataset_matcher, multiple=False )
+ fields[ "select_collection" ] = collection_field
return self._switch_fields( fields, default_field=default_field )
+ def _get_select_dataset_collection_fields( self, history, dataset_matcher, multiple=False ):
+ value = dataset_matcher.value
+ if value is not None:
+ if type( value ) != list:
+ value = [ value ]
+
+ field_name = "%s|__collection_multirun__" % self.name
+ field = form_builder.SelectField( field_name, multiple, None, self.refresh_on_change, refresh_on_change_values=self.refresh_on_change_values )
+
+ dataset_collection_matcher = DatasetCollectionMatcher( dataset_matcher )
+
+ for history_dataset_collection in history.dataset_collections:
+ if dataset_collection_matcher.hdca_match( history_dataset_collection ):
+ name = history_dataset_collection.name
+ id = dataset_matcher.trans.security.encode_id( history_dataset_collection.id )
+ field.add_option( name, id, False )
+
+ return field
+
def _get_select_dataset_field( self, history, dataset_matcher, multiple=False, suffix="" ):
# CRUCIAL: the dataset_collector function needs to be local to DataToolParameter.get_html_field()
https://bitbucket.org/galaxy/galaxy-central/commits/dca2a89f9806/
Changeset: dca2a89f9806
User: jmchilton
Date: 2014-05-06 15:54:30
Summary: Dataset collections - implicit "map" creates collections.
When running normal tools with normal data inputs across dataset collections in parallel, a collection will be created for each output with a "structure" matching that of the input (for instance, pairs will create pairs, lists will create lists with the same identifiers, etc...). A previous changeset added the ability to run the tool in parallel - this changeset extends that functionality to create analogous collections from these parallel runs.
For example, if one filters a pair of FASTQ files, a pair is created out of the result. Likewise, if one filters a list of FASTQ files, a list collection with the same cardinality is built from the results.
There is a lot left TODO here - for one, much of this logic should be moved into the dataset_collection module. The matching also needs to be exact right now - not a problem for pairs (every 'element' is named 'left' or 'right'), but list element names currently have to match exactly. A 'list' like samp1_l, samp2_l, samp3_l should be able to match against samp1_r, samp2_r, samp3_r and create a new list samp1, samp2, samp3; even if there are no matching prefixes, a new 'unlabeled' list should still be able to be created.
Affected #: 8 files
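A standalone sketch of the regrouping step described above (illustrative only, not Galaxy code): the outputs of the per-element jobs are wrapped back into a collection whose element identifiers mirror the input collection's structure, so mapping over a pair yields a pair and mapping over a list yields a list with the same identifiers.

    def build_output_elements(input_identifiers, output_dataset_ids):
        # The real code derives identifiers from the matched input structure;
        # here we just zip them up and refuse mismatched cardinalities.
        if len(input_identifiers) != len(output_dataset_ids):
            raise ValueError("output structure does not match input structure")
        return [
            {"name": identifier, "src": "hda", "id": dataset_id}
            for identifier, dataset_id in zip(input_identifiers, output_dataset_ids)
        ]

    elements = build_output_elements(["left", "right"], ["out_left_id", "out_right_id"])
    print(elements)  # pair-shaped identifiers preserved on the mapped outputs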
diff -r 91b65da313410e85bda795241f50a5e9f9417804 -r dca2a89f9806429d2e23be42387d35d0f097680f lib/galaxy/dataset_collections/__init__.py
--- a/lib/galaxy/dataset_collections/__init__.py
+++ b/lib/galaxy/dataset_collections/__init__.py
@@ -1,5 +1,5 @@
from .registry import DatasetCollectionTypesRegistry
-from .structure import get_structure
+from .matching import MatchingCollections
from galaxy import model
from galaxy.exceptions import MessageException
@@ -234,6 +234,13 @@
def __type_plugin( self, collection_type ):
return self.type_registry.get( collection_type )
+ def match_collections( self, collections_to_match ):
+ """
+ It may seem odd to place this here, but the plan is to grow in
+ sophistication and get plugin types involved, so it will likely make
+ sense in the future.
+ """
+ return MatchingCollections.for_collections( collections_to_match )
+
def get_dataset_collection_instance( self, trans, instance_type, id, **kwds ):
"""
"""
diff -r 91b65da313410e85bda795241f50a5e9f9417804 -r dca2a89f9806429d2e23be42387d35d0f097680f lib/galaxy/dataset_collections/matching.py
--- /dev/null
+++ b/lib/galaxy/dataset_collections/matching.py
@@ -0,0 +1,62 @@
+from galaxy.util import bunch
+from galaxy import exceptions
+from .structure import get_structure
+
+CANNOT_MATCH_ERROR_MESSAGE = "Cannot match collection types."
+
+
+class CollectionsToMatch( object ):
+ """ Structure representing a set of collections that need to be matched up
+ when running tools (possibly workflows in the future as well).
+ """
+
+ def __init__( self ):
+ self.collections = {}
+
+ def add( self, input_name, hdca ):
+ self.collections[ input_name ] = bunch.Bunch(
+ hdca=hdca,
+ )
+
+ def has_collections( self ):
+ return len( self.collections ) > 0
+
+ def iteritems( self ):
+ return self.collections.iteritems()
+
+
+class MatchingCollections( object ):
+ """ Structure holding the result of matching a list of collections
+ together. This class being different from the class above and being
+ created in the dataset_collections_service layer may seem like
+ overkill, but I suspect that in the future plugins will be subtypable,
+ for instance, so matching collections will need to make heavy use of
+ the dataset collection type registry managed by the dataset collections
+ service - hence the complexity now.
+ """
+
+ def __init__( self ):
+ self.structure = None
+ self.collections = {}
+
+ def __attempt_add_to_match( self, input_name, hdca ):
+ structure = get_structure( hdca )
+ if not self.structure:
+ self.structure = structure
+ self.collections[ input_name ] = hdca
+ else:
+ if not self.structure.can_match( structure ):
+ raise exceptions.MessageException( CANNOT_MATCH_ERROR_MESSAGE )
+ self.collections[ input_name ] = hdca
+
+ @staticmethod
+ def for_collections( collections_to_match ):
+ if not collections_to_match.has_collections():
+ return None
+
+ matching_collections = MatchingCollections()
+ for input_key, to_match in collections_to_match.iteritems():
+ hdca = to_match.hdca
+ matching_collections.__attempt_add_to_match( input_key, hdca )
+
+ return matching_collections
diff -r 91b65da313410e85bda795241f50a5e9f9417804 -r dca2a89f9806429d2e23be42387d35d0f097680f lib/galaxy/tools/__init__.py
--- a/lib/galaxy/tools/__init__.py
+++ b/lib/galaxy/tools/__init__.py
@@ -1913,7 +1913,7 @@
# Fixed set of input parameters may correspond to any number of jobs.
# Expand these out to individual parameters for given jobs (tool
# executions).
- expanded_incomings = expand_meta_parameters( trans, incoming, self.inputs )
+ expanded_incomings, collection_info = expand_meta_parameters( trans, incoming, self.inputs )
if not expanded_incomings:
raise exceptions.MessageException( "Tool execution failed, trying to run a tool over an empty collection." )
@@ -1961,7 +1961,7 @@
template_vars = dict( errors=errors, tool_state=state, incoming=incoming, error_message=error_message )
# If we've completed the last page we can execute the tool
elif all_pages or state.page == self.last_page:
- execution_tracker = execute_job( trans, self, all_params, history=history, rerun_remap_job_id=rerun_remap_job_id )
+ execution_tracker = execute_job( trans, self, all_params, history=history, rerun_remap_job_id=rerun_remap_job_id, collection_info=collection_info )
if execution_tracker.successful_jobs:
template = 'tool_executed.mako'
template_vars = dict( out_data=execution_tracker.output_datasets, num_jobs=len( execution_tracker.successful_jobs ), job_errors=execution_tracker.execution_errors )
diff -r 91b65da313410e85bda795241f50a5e9f9417804 -r dca2a89f9806429d2e23be42387d35d0f097680f lib/galaxy/tools/execute.py
--- a/lib/galaxy/tools/execute.py
+++ b/lib/galaxy/tools/execute.py
@@ -1,47 +1,117 @@
"""
Once state information has been calculated, handle actually executing tools
-from various states and tracking results.
-
-Later will also create implicit dataset collections from these executions.
+from various states, tracking results, and building implicit dataset
+collections from matched collections.
"""
+import collections
+from galaxy.tools.actions import on_text_for_names
import logging
log = logging.getLogger( __name__ )
-def execute( trans, tool, param_combinations, history, rerun_remap_job_id=None ):
+def execute( trans, tool, param_combinations, history, rerun_remap_job_id=None, collection_info=None ):
"""
Execute a tool and return object containing summary (output data, number of
failures, etc...).
"""
- execution_tracker = ToolExecutionTracker( tool, param_combinations )
+ execution_tracker = ToolExecutionTracker( tool, param_combinations, collection_info )
for params in execution_tracker.param_combinations:
job, result = tool.handle_single_execution( trans, rerun_remap_job_id, params, history )
if job:
execution_tracker.record_success( job, result )
else:
execution_tracker.record_error( result )
+
+ if collection_info:
+ history = history or tool.get_default_history_by_trans( trans )
+ execution_tracker.create_output_collections( trans, history, params )
+
return execution_tracker
class ToolExecutionTracker( object ):
- """
- """
- def __init__( self, tool, param_combinations ):
+ def __init__( self, tool, param_combinations, collection_info ):
self.tool = tool
self.param_combinations = param_combinations
+ self.collection_info = collection_info
self.successful_jobs = []
self.failed_jobs = 0
self.execution_errors = []
self.output_datasets = []
+ self.output_datasets_by_output_name = collections.defaultdict(list)
+ self.created_collections = []
def record_success( self, job, outputs ):
self.successful_jobs.append( job )
self.output_datasets.extend( outputs )
+ for output_name, output_dataset in outputs:
+ self.output_datasets_by_output_name[ output_name ].append( output_dataset )
def record_error( self, error ):
self.failed_jobs += 1
self.execution_errors.append( error )
+ def create_output_collections( self, trans, history, params ):
+ # TODO: Move this function - it doesn't belong here but it does need
+ # the information in this class and potential extensions.
+ if self.failed_jobs > 0:
+ return []
+
+ structure = self.collection_info.structure
+ collections = self.collection_info.collections.values()
+
+ # params is just one sample tool param execution with parallelized
+ # collection replaced with a specific dataset. Need to replace this
+ # with the collection and wrap everything up so can evaluate output
+ # label.
+ # Replace datasets with source collections for labelling outputs.
+ params.update( self.collection_info.collections )
+
+ collection_names = map( lambda c: "collection %d" % c.hid, collections )
+ on_text = on_text_for_names( collection_names )
+
+ collections = {}
+
+ implicit_inputs = list(self.collection_info.collections.iteritems())
+ for output_name, outputs_datasets in self.output_datasets_by_output_name.iteritems():
+ if not len( structure ) == len( outputs_datasets ):
+ # Output does not have the same structure, if all jobs were
+ # successfully submitted this shouldn't have happened.
+ log.warn( "Problem matching up datasets while attempting to create implicit dataset collections")
+ continue
+ output = self.tool.outputs[ output_name ]
+ element_identifiers_for_datasets = structure.element_identifiers_for_datasets( trans, outputs_datasets )
+
+ implicit_collection_info = dict(
+ implicit_inputs=implicit_inputs,
+ implicit_output_name=output_name
+ )
+ try:
+ output_collection_name = self.tool.tool_action.get_output_name(
+ output,
+ dataset=None,
+ tool=self.tool,
+ on_text=on_text,
+ trans=trans,
+ params=params,
+ incoming=None,
+ job_params=None,
+ )
+ except Exception:
+ output_collection_name = "%s across %s" % ( self.tool.name, on_text )
+
+ collection = trans.app.dataset_collections_service.create(
+ trans=trans,
+ parent=history,
+ name=output_collection_name,
+ element_identifiers=element_identifiers_for_datasets[ "element_identifiers" ],
+ collection_type=structure.collection_type,
+ implicit_collection_info=implicit_collection_info,
+ )
+ collections[ output_name ] = collection
+ self.created_collections = collections
+
__all__ = [ execute ]
diff -r 91b65da313410e85bda795241f50a5e9f9417804 -r dca2a89f9806429d2e23be42387d35d0f097680f lib/galaxy/tools/parameters/basic.py
--- a/lib/galaxy/tools/parameters/basic.py
+++ b/lib/galaxy/tools/parameters/basic.py
@@ -1694,7 +1694,7 @@
multi_dataset_matcher = DatasetMatcher( trans, self, multirun_value, other_values )
multi_select = self._get_select_dataset_field( history, multi_dataset_matcher, multiple=True, suffix="|__multirun__" )
fields[ "select_multiple" ] = multi_select
- collection_field = self._get_select_dataset_collection_fields( dataset_matcher, multiple=False )
+ collection_field = self._get_select_dataset_collection_fields( history, dataset_matcher, multiple=False )
fields[ "select_collection" ] = collection_field
return self._switch_fields( fields, default_field=default_field )
diff -r 91b65da313410e85bda795241f50a5e9f9417804 -r dca2a89f9806429d2e23be42387d35d0f097680f lib/galaxy/tools/parameters/meta.py
--- a/lib/galaxy/tools/parameters/meta.py
+++ b/lib/galaxy/tools/parameters/meta.py
@@ -2,6 +2,10 @@
from galaxy import model
from galaxy import util
from galaxy import exceptions
+from galaxy.dataset_collections import matching
+
+import logging
+log = logging.getLogger( __name__ )
def expand_meta_parameters( trans, incoming, inputs ):
@@ -24,12 +28,15 @@
else:
return permutations.input_classification.SINGLE, incoming[ input_key ]
+ collections_to_match = matching.CollectionsToMatch()
+
def collection_classifier( input_key ):
multirun_key = "%s|__collection_multirun__" % input_key
if multirun_key in incoming:
encoded_hdc_id = incoming[ multirun_key ]
hdc_id = trans.app.security.decode_id( encoded_hdc_id )
hdc = trans.sa_session.query( model.HistoryDatasetCollectionAssociation ).get( hdc_id )
+ collections_to_match.add( input_key, hdc )
hdas = hdc.collection.dataset_instances
return permutations.input_classification.MATCHED, hdas
else:
@@ -52,7 +59,7 @@
collection_multirun_found = False
for key, value in incoming.iteritems():
multirun_found = try_replace_key( key, "|__multirun__" ) or multirun_found
- collection_multirun_found = try_replace_key( key, "|__collection_multirun__" )
+ collection_multirun_found = try_replace_key( key, "|__collection_multirun__" ) or collection_multirun_found
if multirun_found and collection_multirun_found:
# In theory doable, but too complicated for a first pass.
@@ -60,6 +67,11 @@
raise exceptions.ToolMetaParameterException( message )
if multirun_found:
- return permutations.expand_multi_inputs( incoming_template, classifier )
+ return permutations.expand_multi_inputs( incoming_template, classifier ), None
else:
- return permutations.expand_multi_inputs( incoming_template, collection_classifier )
+ expanded_incomings = permutations.expand_multi_inputs( incoming_template, collection_classifier )
+ if collections_to_match.has_collections():
+ collection_info = trans.app.dataset_collections_service.match_collections( collections_to_match )
+ else:
+ collection_info = None
+ return expanded_incomings, collection_info
diff -r 91b65da313410e85bda795241f50a5e9f9417804 -r dca2a89f9806429d2e23be42387d35d0f097680f test/unit/tools/test_execution.py
--- a/test/unit/tools/test_execution.py
+++ b/test/unit/tools/test_execution.py
@@ -243,6 +243,9 @@
hda1, hda2 = self.__setup_multirun_job()
collection = self.__history_dataset_collection_for( [ hda1, hda2 ] )
collection_id = self.app.security.encode_id( collection.id )
+ self.app.dataset_collections_service = Bunch(
+ match_collections=lambda collections: None
+ )
template, template_vars = self.__handle_with_incoming( **{
"param1|__collection_multirun__": collection_id,
"runtool_btn": "dummy",
diff -r 91b65da313410e85bda795241f50a5e9f9417804 -r dca2a89f9806429d2e23be42387d35d0f097680f test/unit/tools_support.py
--- a/test/unit/tools_support.py
+++ b/test/unit/tools_support.py
@@ -122,6 +122,7 @@
self.job_queue = NoopQueue()
self.security_agent = GalaxyRBACAgent( self.model )
self.tool_data_tables = {}
+ self.dataset_collections_service = None
class MockContext(object):
https://bitbucket.org/galaxy/galaxy-central/commits/8e9dc1d384c7/
Changeset: 8e9dc1d384c7
User: jmchilton
Date: 2014-05-06 15:54:30
Summary: Dataset collections - "reduce" with existing tools.
Allow users to select dataset collections in place of individual datasets for data tool parameters with multiple="true" enabled (if all elements of the collection would be valid as input to this parameter).
Restrict collection reductions to flat collections. If a user wants to reduce a nested collection, they probably want to map over the subcollections, reducing each and building a collection of the reductions. TODO: provide a concrete example of this.
A functional test demonstrating these reductions is included.
Affected #: 7 files
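For reference, a small standalone sketch of the encoding this changeset introduces (the ids below are made-up placeholders): a multiple="true" data parameter reduced over a collection is submitted as "__collection_reduce__|&lt;encoded hdca id&gt;", and the server strips the prefix to recover the collection, as in the basic.py hunk below.

    REDUCE_PREFIX = "__collection_reduce__|"

    def parse_reduce_value(value):
        # Returns the encoded HistoryDatasetCollectionAssociation id, or None
        # if the submitted value is not a collection reduction.
        if value.startswith(REDUCE_PREFIX):
            return value[len(REDUCE_PREFIX):]
        return None

    print(parse_reduce_value("__collection_reduce__|<encoded hdca id>"))
    print(parse_reduce_value("7"))  # plain dataset id, so not a reduction -> None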
diff -r dca2a89f9806429d2e23be42387d35d0f097680f -r 8e9dc1d384c73f3c505d14d0ddb461e965f58df0 lib/galaxy/tools/actions/__init__.py
--- a/lib/galaxy/tools/actions/__init__.py
+++ b/lib/galaxy/tools/actions/__init__.py
@@ -2,6 +2,7 @@
from galaxy.exceptions import ObjectInvalid
from galaxy.model import LibraryDatasetDatasetAssociation
+from galaxy import model
from galaxy.tools.parameters import DataToolParameter
from galaxy.tools.parameters import DataCollectionToolParameter
from galaxy.tools.parameters.wrapped import WrappedParameters
@@ -11,6 +12,7 @@
from galaxy.util.odict import odict
from galaxy.util.template import fill_template
from galaxy.util import listify
+from galaxy.util.json import to_json_string
from galaxy.web import url_for
import logging
@@ -129,6 +131,27 @@
tool.visit_inputs( param_values, visitor )
return input_datasets
+ def collect_input_dataset_collections( self, tool, param_values, trans ):
+ input_dataset_collections = dict()
+
+ def visitor( prefix, input, value, parent=None ):
+ if isinstance( input, DataToolParameter ):
+ if isinstance( value, model.HistoryDatasetCollectionAssociation ):
+ input_dataset_collections[ prefix + input.name ] = ( value, True )
+ target_dict = parent
+ if not target_dict:
+ target_dict = param_values
+ # This is just a DataToolParameter, so replace this
+ # collection with individual datasets. The database will still
+ # record the collection, which should be enough for workflow
+ # extraction and tool rerun.
+ target_dict[ input.name ] = value.collection.dataset_instances[:] # shallow copy
+ elif isinstance( input, DataCollectionToolParameter ):
+ input_dataset_collections[ prefix + input.name ] = ( value, False )
+
+ tool.visit_inputs( param_values, visitor )
+ return input_dataset_collections
+
def execute(self, tool, trans, incoming={}, return_job=False, set_output_hid=True, set_output_history=True, history=None, job_params=None, rerun_remap_job_id=None):
"""
Executes a tool, creating job and tool outputs, associating them, and
@@ -140,6 +163,9 @@
history = tool.get_default_history_by_trans( trans, create=True )
out_data = odict()
+ # Track input dataset collections - but replace with simply lists so collect
+ # input datasets can process these normally.
+ inp_dataset_collections = self.collect_input_dataset_collections( tool, incoming, trans )
# Collect any input datasets from the incoming parameters
inp_data = self.collect_input_datasets( tool, incoming, trans )
@@ -345,6 +371,13 @@
# FIXME: Don't need all of incoming here, just the defined parameters
# from the tool. We need to deal with tools that pass all post
# parameters to the command as a special case.
+ for name, ( dataset_collection, reduced ) in inp_dataset_collections.iteritems():
+ # TODO: Does this work if nested in repeat/conditional?
+ if reduced:
+ incoming[ name ] = "__collection_reduce__|%s" % dataset_collection.id
+ # Should verify security? We check security of individual
+ # datasets below?
+ job.add_input_dataset_collection( name, dataset_collection )
for name, value in tool.params_to_strings( incoming, trans.app ).iteritems():
job.add_parameter( name, value )
current_user_roles = trans.get_current_user_roles()
diff -r dca2a89f9806429d2e23be42387d35d0f097680f -r 8e9dc1d384c73f3c505d14d0ddb461e965f58df0 lib/galaxy/tools/evaluation.py
--- a/lib/galaxy/tools/evaluation.py
+++ b/lib/galaxy/tools/evaluation.py
@@ -142,8 +142,11 @@
def wrap_input( input_values, input ):
if isinstance( input, DataToolParameter ) and input.multiple:
+ dataset_instances = input_values[ input.name ]
+ if isinstance( dataset_instances, model.HistoryDatasetCollectionAssociation ):
+ dataset_instances = dataset_instances.collection.dataset_instances[:]
input_values[ input.name ] = \
- DatasetListWrapper( input_values[ input.name ],
+ DatasetListWrapper( dataset_instances,
dataset_paths=input_dataset_paths,
datatypes_registry=self.app.datatypes_registry,
tool=self.tool,
diff -r dca2a89f9806429d2e23be42387d35d0f097680f -r 8e9dc1d384c73f3c505d14d0ddb461e965f58df0 lib/galaxy/tools/parameters/basic.py
--- a/lib/galaxy/tools/parameters/basic.py
+++ b/lib/galaxy/tools/parameters/basic.py
@@ -1676,6 +1676,14 @@
default_field = "multiselect_single"
multi_select = self._get_select_dataset_field( history, dataset_matcher, multiple=True )
fields[ "multiselect_single" ] = multi_select
+
+ if self.__display_multirun_option():
+ collection_select = self._get_select_dataset_collection_fields( history, dataset_matcher, suffix="", reduction=True )
+ if collection_select.get_selected(return_value=True):
+ default_field = "multiselect_collection"
+ fields[ "multiselect_collection" ] = collection_select
+ self._ensure_selection( collection_select )
+
else:
# Select one dataset, run one job.
default_field = "select_single"
@@ -1694,27 +1702,36 @@
multi_dataset_matcher = DatasetMatcher( trans, self, multirun_value, other_values )
multi_select = self._get_select_dataset_field( history, multi_dataset_matcher, multiple=True, suffix="|__multirun__" )
fields[ "select_multiple" ] = multi_select
- collection_field = self._get_select_dataset_collection_fields( history, dataset_matcher, multiple=False )
+ collection_field = self._get_select_dataset_collection_fields( history, dataset_matcher, multiple=False, reduction=False )
fields[ "select_collection" ] = collection_field
return self._switch_fields( fields, default_field=default_field )
- def _get_select_dataset_collection_fields( self, history, dataset_matcher, multiple=False ):
+ def _get_select_dataset_collection_fields( self, history, dataset_matcher, multiple=False, suffix="|__collection_multirun__", reduction=False ):
+ if not reduction:
+ value_modifier = lambda x: x
+ else:
+ value_modifier = lambda value: "__collection_reduce__|%s" % value
+
value = dataset_matcher.value
if value is not None:
if type( value ) != list:
value = [ value ]
- field_name = "%s|__collection_multirun__" % self.name
+ field_name = "%s%s" % ( self.name, suffix )
field = form_builder.SelectField( field_name, multiple, None, self.refresh_on_change, refresh_on_change_values=self.refresh_on_change_values )
dataset_collection_matcher = DatasetCollectionMatcher( dataset_matcher )
for history_dataset_collection in history.dataset_collections:
- if dataset_collection_matcher.hdca_match( history_dataset_collection ):
+ if dataset_collection_matcher.hdca_match( history_dataset_collection, reduction=reduction ):
name = history_dataset_collection.name
- id = dataset_matcher.trans.security.encode_id( history_dataset_collection.id )
- field.add_option( name, id, False )
+ hid = str( history_dataset_collection.hid )
+ hidden_text = "" # TODO
+ id = value_modifier( dataset_matcher.trans.security.encode_id( history_dataset_collection.id ) )
+ selected = value and history_dataset_collection in value
+ text = "%s:%s %s" % ( hid, hidden_text, name )
+ field.add_option( text, id, selected )
return field
@@ -1815,6 +1832,10 @@
elif isinstance( value, dict ) and 'src' in value and 'id' in value:
if value['src'] == 'hda':
rval = trans.sa_session.query( trans.app.model.HistoryDatasetAssociation ).get( trans.app.security.decode_id(value['id']) )
+ elif str( value ).startswith( "__collection_reduce__|" ):
+ encoded_id = str( value )[ len( "__collection_reduce__|" ): ]
+ decoded_id = trans.app.security.decode_id( encoded_id )
+ rval = trans.sa_session.query( trans.app.model.HistoryDatasetCollectionAssociation ).get( decoded_id )
else:
rval = trans.sa_session.query( trans.app.model.HistoryDatasetAssociation ).get( value )
if isinstance( rval, list ):
@@ -1825,7 +1846,7 @@
if v:
if v.deleted:
raise ValueError( "The previously selected dataset has been previously deleted" )
- if v.dataset.state in [ galaxy.model.Dataset.states.ERROR, galaxy.model.Dataset.states.DISCARDED ]:
+ if hasattr( v, "dataset" ) and v.dataset.state in [ galaxy.model.Dataset.states.ERROR, galaxy.model.Dataset.states.DISCARDED ]:
raise ValueError( "The previously selected dataset has entered an unusable state" )
return rval
@@ -1840,6 +1861,8 @@
return None
elif isinstance( value, list ):
return ",".join( [ str( self.to_string( val, app ) ) for val in value ] )
+ elif isinstance( value, app.model.HistoryDatasetCollectionAssociation ):
+ return "__collection_reduce__|%s" % app.security.encode_id( value.id )
try:
return value.id
except:
@@ -1855,6 +1878,12 @@
values = value.split(",")
# TODO: Optimize. -John
return [ app.model.context.query( app.model.HistoryDatasetAssociation ).get( int( val ) ) for val in values if val not in none_values ]
+ # Not sure if the following case is needed; if it is, deduplicate with the code above.
+ elif str( value ).startswith( "__collection_reduce__|" ):
+ # When coming from HTML this id would be encoded; in the database,
+ # however, it really should be decoded.
+ decoded_id = str( value )[ len( "__collection_reduce__|" ): ]
+ return app.model.context.query( app.model.HistoryDatasetCollectionAssociation ).get( decoded_id )
return app.model.context.query( app.model.HistoryDatasetAssociation ).get( int( value ) )
def to_param_dict_string( self, value, other_values={} ):
@@ -1874,9 +1903,13 @@
def validate( self, value, history=None ):
for validator in self.validators:
- if value and self.multiple and isinstance(value, list):
- for v in value:
- validator.validate( v, history )
+ if value and self.multiple:
+ if isinstance(value, list):
+ for v in value:
+ validator.validate( v, history )
+ elif isinstance(value, galaxy.model.HistoryDatasetCollectionAssociation):
+ for v in value.collection.dataset_instances:
+ validator.validate( v, history )
else:
validator.validate( value, history )
diff -r dca2a89f9806429d2e23be42387d35d0f097680f -r 8e9dc1d384c73f3c505d14d0ddb461e965f58df0 lib/galaxy/tools/parameters/dataset_matcher.py
--- a/lib/galaxy/tools/parameters/dataset_matcher.py
+++ b/lib/galaxy/tools/parameters/dataset_matcher.py
@@ -148,8 +148,12 @@
hda_match = self.dataset_matcher.hda_match( hda, ensure_visible=False )
return hda_match and not hda_match.implicit_conversion
- def hdca_match( self, history_dataset_collection_association ):
- return self.dataset_collection_match( history_dataset_collection_association.collection )
+ def hdca_match( self, history_dataset_collection_association, reduction=False ):
+ dataset_collection = history_dataset_collection_association.collection
+ if reduction and dataset_collection.collection_type.find( ":" ) > 0:
+ return False
+ else:
+ return self.dataset_collection_match( dataset_collection )
def dataset_collection_match( self, dataset_collection ):
valid = True
diff -r dca2a89f9806429d2e23be42387d35d0f097680f -r 8e9dc1d384c73f3c505d14d0ddb461e965f58df0 static/scripts/galaxy.tools.js
--- a/static/scripts/galaxy.tools.js
+++ b/static/scripts/galaxy.tools.js
@@ -39,6 +39,11 @@
'icon_class': 'fa-list-alt',
'select_by': 'Run tool over multiple datasets',
'allow_remap': true
+ },
+ 'multiselect_collection': {
+ 'icon_class': 'fa-folder-o',
+ 'select_by': 'Run tool over dataset collection',
+ 'allow_remap': false
}
};
diff -r dca2a89f9806429d2e23be42387d35d0f097680f -r 8e9dc1d384c73f3c505d14d0ddb461e965f58df0 test/api/test_tools.py
--- a/test/api/test_tools.py
+++ b/test/api/test_tools.py
@@ -202,6 +202,25 @@
# on server.
assert run_response.status_code >= 400
+ def test_reduce_collections( self ):
+ self.__skip_without_tool( "multi_data_param" )
+ history_id = self.dataset_populator.new_history()
+ hdca1_id = self.__build_pair( history_id, [ "123", "456" ] )
+ hdca2_id = self.dataset_collection_populator.create_list_in_history( history_id ).json()[ "id" ]
+ inputs = {
+ "f1": "__collection_reduce__|%s" % hdca1_id,
+ "f2": "__collection_reduce__|%s" % hdca2_id,
+ }
+ outputs = self._run_and_get_outputs( "multi_data_param", history_id, inputs )
+ assert len( outputs ) == 2
+ self.dataset_populator.wait_for_history( history_id, assert_ok=True )
+ output1 = outputs[ 0 ]
+ output2 = outputs[ 1 ]
+ output1_content = self._get_content( history_id, dataset=output1 )
+ output2_content = self._get_content( history_id, dataset=output2 )
+ assert output1_content.strip() == "123\n456"
+ assert len( output2_content.strip().split("\n") ) == 3, output2_content
+
def _cat1_outputs( self, history_id, inputs ):
return self._run_outputs( self._run_cat1( history_id, inputs ) )
diff -r dca2a89f9806429d2e23be42387d35d0f097680f -r 8e9dc1d384c73f3c505d14d0ddb461e965f58df0 test/functional/tools/collection_paired_test.xml
--- a/test/functional/tools/collection_paired_test.xml
+++ b/test/functional/tools/collection_paired_test.xml
@@ -3,7 +3,7 @@
cat #for $f in $f1# ${f} #end for# >> $out1;
</command><inputs>
- <param name="f1" type="data_collection" />
+ <param name="f1" type="data_collection" collection_type="paired" /></inputs><outputs><data format="txt" name="out1" />
https://bitbucket.org/galaxy/galaxy-central/commits/76856c3a746f/
Changeset: 76856c3a746f
User: jmchilton
Date: 2014-05-06 15:54:30
Summary: Dataset collections - allow mapping collection params over subcollections.
Affected #: 9 files
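A quick standalone sketch of the option-value format added for subcollection mapping (values are placeholders): the UI encodes "&lt;encoded hdca id&gt;|&lt;subcollection type&gt;", for example a list:paired collection mapped over its "paired" subcollections, and the server splits it back apart.

    def parse_subcollection_multirun(value):
        # "<encoded hdca id>|<subcollection type>", as built by
        # _get_select_dataset_collection_field in the diff below.
        encoded_hdca_id, subcollection_type = value.split("|", 1)
        return encoded_hdca_id, subcollection_type

    print(parse_subcollection_multirun("<encoded hdca id>|paired"))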
diff -r 8e9dc1d384c73f3c505d14d0ddb461e965f58df0 -r 76856c3a746f63aa2fe0b124fa94553e80fc8adf lib/galaxy/dataset_collections/matching.py
--- a/lib/galaxy/dataset_collections/matching.py
+++ b/lib/galaxy/dataset_collections/matching.py
@@ -13,9 +13,10 @@
def __init__( self ):
self.collections = {}
- def add( self, input_name, hdca ):
+ def add( self, input_name, hdca, subcollection_type=None ):
self.collections[ input_name ] = bunch.Bunch(
hdca=hdca,
+ subcollection_type=subcollection_type,
)
def has_collections( self ):
@@ -39,8 +40,8 @@
self.structure = None
self.collections = {}
- def __attempt_add_to_match( self, input_name, hdca ):
- structure = get_structure( hdca )
+ def __attempt_add_to_match( self, input_name, hdca, subcollection_type ):
+ structure = get_structure( hdca, subcollection_type=subcollection_type )
if not self.structure:
self.structure = structure
self.collections[ input_name ] = hdca
@@ -57,6 +58,7 @@
matching_collections = MatchingCollections()
for input_key, to_match in collections_to_match.iteritems():
hdca = to_match.hdca
- matching_collections.__attempt_add_to_match( input_key, hdca )
+ subcollection_type = to_match.subcollection_type
+ matching_collections.__attempt_add_to_match( input_key, hdca, subcollection_type=subcollection_type )
return matching_collections
diff -r 8e9dc1d384c73f3c505d14d0ddb461e965f58df0 -r 76856c3a746f63aa2fe0b124fa94553e80fc8adf lib/galaxy/dataset_collections/structure.py
--- a/lib/galaxy/dataset_collections/structure.py
+++ b/lib/galaxy/dataset_collections/structure.py
@@ -18,13 +18,17 @@
class Tree( object ):
- def __init__( self, dataset_collection ):
+ def __init__( self, dataset_collection, subcollection_type ):
self.collection_type = dataset_collection.collection_type
+ self.subcollection_type = subcollection_type
children = []
for element in dataset_collection.elements:
child_collection = element.child_collection
if child_collection:
- children.append( ( element.element_identifier, Tree( child_collection ) ) )
+ if child_collection.collection_type == subcollection_type:
+ children.append( ( element.element_identifier, leaf ) )
+ else:
+ children.append( ( element.element_identifier, Tree( child_collection, subcollection_type=subcollection_type ) ) )
elif element.hda:
children.append( ( element.element_identifier, leaf ) )
@@ -77,5 +81,5 @@
)
-def get_structure( dataset_collection_instance ):
- return Tree( dataset_collection_instance.collection )
+def get_structure( dataset_collection_instance, subcollection_type=None ):
+ return Tree( dataset_collection_instance.collection, subcollection_type=subcollection_type )
diff -r 8e9dc1d384c73f3c505d14d0ddb461e965f58df0 -r 76856c3a746f63aa2fe0b124fa94553e80fc8adf lib/galaxy/dataset_collections/subcollections.py
--- /dev/null
+++ b/lib/galaxy/dataset_collections/subcollections.py
@@ -0,0 +1,25 @@
+from galaxy import exceptions
+
+
+def split_dataset_collection_instance( dataset_collection_instance, collection_type ):
+ """ Split a dataset collection instance into its subcollections of the given collection_type.
+ """
+ return _split_dataset_collection( dataset_collection_instance.collection, collection_type )
+
+
+def _split_dataset_collection( dataset_collection, collection_type ):
+ this_collection_type = dataset_collection.collection_type
+ if not this_collection_type.endswith( collection_type ) or this_collection_type == collection_type:
+ raise exceptions.MessageException( "Cannot split collection in desired fashion." )
+
+ split_elements = []
+ for element in dataset_collection.elements:
+ child_collection = element.child_collection
+ if child_collection is None:
+ raise exceptions.MessageException( "Cannot split collection in desired fashion." )
+ if child_collection.collection_type == collection_type:
+ split_elements.append( element )
+ else:
+ split_elements.extend( _split_dataset_collection( element.child_collection, collection_type ) )
+
+ return split_elements
diff -r 8e9dc1d384c73f3c505d14d0ddb461e965f58df0 -r 76856c3a746f63aa2fe0b124fa94553e80fc8adf lib/galaxy/tools/parameters/basic.py
--- a/lib/galaxy/tools/parameters/basic.py
+++ b/lib/galaxy/tools/parameters/basic.py
@@ -1984,11 +1984,13 @@
def get_html_field( self, trans=None, value=None, other_values={} ):
# dropped refresh values, may be needed..
- default_field = "single_collection"
- fields = {}
+ default_field = "select_single_collection"
+ fields = odict()
history = self._get_history( trans )
- fields[ "single_collection" ] = self._get_single_collection_field( trans, history=history, value=value, other_values=other_values )
+ fields[ "select_single_collection" ] = self._get_single_collection_field( trans=trans, history=history, value=value, other_values=other_values )
+ fields[ "select_map_over_collections" ] = self._get_select_dataset_collection_field( trans=trans, history=history, value=value, other_values=other_values )
+
return self._switch_fields( fields, default_field=default_field )
def _get_single_collection_field( self, trans, history, value, other_values ):
@@ -2011,6 +2013,29 @@
self._ensure_selection( field )
return field
+ def _get_select_dataset_collection_field( self, trans, history, multiple=False, suffix="|__subcollection_multirun__", value=None, other_values=None ):
+ field_name = "%s%s" % ( self.name, suffix )
+ field = form_builder.SelectField( field_name, multiple, None, self.refresh_on_change, refresh_on_change_values=self.refresh_on_change_values )
+ dataset_matcher = DatasetMatcher( trans, self, value, other_values )
+ dataset_collection_matcher = DatasetCollectionMatcher( dataset_matcher )
+
+ for history_dataset_collection in history.dataset_collections:
+ if not self.history_query.can_map_over( history_dataset_collection ):
+ continue
+
+ datasets_match = dataset_collection_matcher.hdca_match( history_dataset_collection )
+ if datasets_match:
+ name = history_dataset_collection.name
+ hid = str( history_dataset_collection.hid )
+ hidden_text = "" # TODO
+ subcollection_type = self.history_query.collection_type
+ id = "%s|%s" % ( dataset_matcher.trans.security.encode_id( history_dataset_collection.id ), subcollection_type )
+ text = "%s:%s %s" % ( hid, hidden_text, name )
+
+ field.add_option( text, id, False )
+
+ return field
+
def from_html( self, value, trans, other_values={} ):
if not value and not self.optional:
raise ValueError( "History does not include a dataset of the required format / build" )
@@ -2020,12 +2045,20 @@
value = [ int( value_part ) for value_part in value.split( "," ) ]
elif isinstance( value, trans.app.model.HistoryDatasetCollectionAssociation ):
rval = value
+ elif isinstance( value, trans.app.model.DatasetCollectionElement ):
+ # When mapping over nested collection - this paramter will recieve
+ # a DatasetCollectionElement instead of a
+ # HistoryDatasetCollectionAssociation.
+ rval = value
elif isinstance( value, dict ) and 'src' in value and 'id' in value:
if value['src'] == 'hdca':
rval = trans.sa_session.query( trans.app.model.HistoryDatasetCollectionAssociation ).get( trans.app.security.decode_id(value['id']) )
- else:
- rval = trans.sa_session.query( trans.app.model.HistoryDatasetCollectionAssociation ).get( value )
- if rval:
+ elif isinstance( value, basestring ):
+ if value.startswith( "dce:" ):
+ rval = trans.sa_session.query( trans.app.model.DatasetCollectionElement ).get( value[ len( "dce:"): ] )
+ else:
+ rval = trans.sa_session.query( trans.app.model.HistoryDatasetCollectionAssociation ).get( value )
+ if rval and isinstance( rval, trans.app.model.HistoryDatasetCollectionAssociation ):
if rval.deleted:
raise ValueError( "The previously selected dataset collection has been deleted" )
# TODO: Handle error states, implement error states ...
@@ -2034,11 +2067,13 @@
def to_string( self, value, app ):
if value is None or isinstance( value, basestring ):
return value
- elif isinstance( value, int ):
- return str( value )
try:
- return value.id
- except:
+ if isinstance( value, galaxy.model.DatasetCollectionElement ):
+ return "dce:%s" % value.id
+ else:
+ return "hdca:%s" % value.id
+ except Exception:
+ # This is not good...
return str( value )
def to_python( self, value, app ):
@@ -2046,11 +2081,25 @@
# indicates that the dataset is optional, while '' indicates that it is not.
if value is None or value == '' or value == 'None':
return value
- return app.model.context.query( app.model.HistoryDatasetCollectionAssociation ).get( int( value ) )
+
+ if not isinstance( value, basestring ):
+ raise ValueError( "Can not convert data collection parameter value to python object - %s" % value )
+
+ if value.startswith( "dce:" ):
+ dce = app.model.context.query( app.model.DatasetCollectionElement ).get( int( value[ len( "dce:" ): ] ) )
+ return dce
+ elif value.startswith( "hdca:" ):
+ hdca = app.model.context.query( app.model.HistoryDatasetCollectionAssociation ).get( int( value[ len( "hdca:" ): ] ) )
+ return hdca
+ else:
+ raise ValueError( "Can not convert data collection parameter value to python object - %s" % value )
def value_to_display_text( self, value, app ):
try:
- display_text = "%s: %s" % ( value.hid, value.name )
+ if isinstance( value, galaxy.model.HistoryDatasetCollectionAssociation ):
+ display_text = "%s: %s" % ( value.hid, value.name )
+ else:
+ display_text = "Element %d:%s" % ( value.identifier_index, value.identifier_name )
except AttributeError:
display_text = "No dataset collection."
return display_text
diff -r 8e9dc1d384c73f3c505d14d0ddb461e965f58df0 -r 76856c3a746f63aa2fe0b124fa94553e80fc8adf lib/galaxy/tools/parameters/history_query.py
--- a/lib/galaxy/tools/parameters/history_query.py
+++ b/lib/galaxy/tools/parameters/history_query.py
@@ -22,3 +22,12 @@
return False
return True
+
+ def can_map_over( self, hdca ):
+ if not self.collection_type:
+ return False
+
+ # Can map a list:pair repeatedly over a pair parameter
+ hdca_collection_type = hdca.collection.collection_type
+ can = hdca_collection_type.endswith( self.collection_type ) and hdca_collection_type != self.collection_type
+ return can
diff -r 8e9dc1d384c73f3c505d14d0ddb461e965f58df0 -r 76856c3a746f63aa2fe0b124fa94553e80fc8adf lib/galaxy/tools/parameters/meta.py
--- a/lib/galaxy/tools/parameters/meta.py
+++ b/lib/galaxy/tools/parameters/meta.py
@@ -3,6 +3,7 @@
from galaxy import util
from galaxy import exceptions
from galaxy.dataset_collections import matching
+from galaxy.dataset_collections import subcollections
import logging
log = logging.getLogger( __name__ )
@@ -42,6 +43,20 @@
else:
return permutations.input_classification.SINGLE, incoming[ input_key ]
+ def subcollection_classifier( input_key ):
+ multirun_key = "%s|__subcollection_multirun__" % input_key
+ if multirun_key in incoming:
+ incoming_val = incoming[ multirun_key ]
+ # value will be "hdca_id|subcollection_type"
+ encoded_hdc_id, subcollection_type = incoming_val.split( "|", 1 )
+ hdc_id = trans.app.security.decode_id( encoded_hdc_id )
+ hdc = trans.sa_session.query( model.HistoryDatasetCollectionAssociation ).get( hdc_id )
+ collections_to_match.add( input_key, hdc, subcollection_type=subcollection_type )
+ subcollection_elements = subcollections.split_dataset_collection_instance( hdc, subcollection_type )
+ return permutations.input_classification.MATCHED, subcollection_elements
+ else:
+ return permutations.input_classification.SINGLE, incoming[ input_key ]
+
# Stick in an unexpanded version of multirun keys so they can be replaced
# by expand_multi_inputs.
incoming_template = incoming.copy()
@@ -57,17 +72,26 @@
multirun_found = False
collection_multirun_found = False
+ subcollection_multirun_found = False
for key, value in incoming.iteritems():
multirun_found = try_replace_key( key, "|__multirun__" ) or multirun_found
collection_multirun_found = try_replace_key( key, "|__collection_multirun__" ) or collection_multirun_found
+ subcollection_multirun_found = try_replace_key( key, "|__subcollection_multirun__" ) or subcollection_multirun_found
- if multirun_found and collection_multirun_found:
+ if sum( [ 1 if f else 0 for f in [ multirun_found, collection_multirun_found, subcollection_multirun_found ] ] ) > 1:
# In theory doable, but too complicated for a first pass.
message = "Cannot specify parallel execution across both multiple datasets and dataset collections."
raise exceptions.ToolMetaParameterException( message )
if multirun_found:
return permutations.expand_multi_inputs( incoming_template, classifier ), None
+ elif subcollection_multirun_found:
+ expanded_incomings = permutations.expand_multi_inputs( incoming_template, subcollection_classifier )
+ if collections_to_match.has_collections():
+ collection_info = trans.app.dataset_collections_service.match_collections( collections_to_match )
+ else:
+ collection_info = None
+ return expanded_incomings, collection_info
else:
expanded_incomings = permutations.expand_multi_inputs( incoming_template, collection_classifier )
if collections_to_match.has_collections():
diff -r 8e9dc1d384c73f3c505d14d0ddb461e965f58df0 -r 76856c3a746f63aa2fe0b124fa94553e80fc8adf static/scripts/galaxy.tools.js
--- a/static/scripts/galaxy.tools.js
+++ b/static/scripts/galaxy.tools.js
@@ -44,6 +44,16 @@
'icon_class': 'fa-folder-o',
'select_by': 'Run tool over dataset collection',
'allow_remap': false,
+ },
+ 'select_single_collection': {
+ 'icon_class': 'fa-file-o',
+ 'select_by': 'Run tool on single collection',
+ 'allow_remap': true
+ },
+ 'select_map_over_collections': {
+ 'icon_class': 'fa-folder-o',
+ 'select_by': 'Map tool over components of nested collection',
+ 'allow_remap': false,
}
};
diff -r 8e9dc1d384c73f3c505d14d0ddb461e965f58df0 -r 76856c3a746f63aa2fe0b124fa94553e80fc8adf test/api/test_tools.py
--- a/test/api/test_tools.py
+++ b/test/api/test_tools.py
@@ -221,6 +221,31 @@
assert output1_content.strip() == "123\n456"
assert len( output2_content.strip().split("\n") ) == 3, output2_content
+ def test_subcollection_mapping( self ):
+ self.__skip_without_tool( "collection_paired_test" )
+ history_id = self.dataset_populator.new_history()
+ hdca1_id = self.__build_pair( history_id, [ "123", "456" ] )
+ hdca2_id = self.__build_pair( history_id, [ "789", "0ab" ] )
+
+ response = self.dataset_collection_populator.create_list_from_pairs( history_id, [ hdca1_id, hdca2_id ] )
+ self._assert_status_code_is( response, 200 )
+ hdca_list_id = response.json()[ "id" ]
+ inputs = {
+ "f1|__subcollection_multirun__": "%s|paired" % hdca_list_id
+ }
+ # The following wait is not strictly needed - it just avoids frequent
+ # "database is locked" errors with sqlite.
+ self.dataset_populator.wait_for_history( history_id, assert_ok=True )
+ outputs = self._run_and_get_outputs( "collection_paired_test", history_id, inputs )
+ assert len( outputs ) == 2
+ self.dataset_populator.wait_for_history( history_id, assert_ok=True )
+ output1 = outputs[ 0 ]
+ output2 = outputs[ 1 ]
+ output1_content = self._get_content( history_id, dataset=output1 )
+ output2_content = self._get_content( history_id, dataset=output2 )
+ assert output1_content.strip() == "123\n456", output1_content
+ assert output2_content.strip() == "789\n0ab", output2_content
+
def _cat1_outputs( self, history_id, inputs ):
return self._run_outputs( self._run_cat1( history_id, inputs ) )
diff -r 8e9dc1d384c73f3c505d14d0ddb461e965f58df0 -r 76856c3a746f63aa2fe0b124fa94553e80fc8adf test/functional/api/test_dataset_collections.py
--- a/test/functional/api/test_dataset_collections.py
+++ b/test/functional/api/test_dataset_collections.py
@@ -10,6 +10,23 @@
self.galaxy_interactor = galaxy_interactor
self.dataset_populator = DatasetPopulator( galaxy_interactor )
+ def create_list_from_pairs( self, history_id, pairs ):
+ element_identifiers = []
+ for i, pair in enumerate( pairs ):
+ element_identifiers.append( dict(
+ name="test%d" % i,
+ src="hdca",
+ id=pair
+ ) )
+
+ payload = dict(
+ instance_type="history",
+ history_id=history_id,
+ element_identifiers=json.dumps(element_identifiers),
+ collection_type="list:paired",
+ )
+ return self.__create( payload )
+
def create_pair_in_history( self, history_id, **kwds ):
payload = self.create_pair_payload(
history_id,
https://bitbucket.org/galaxy/galaxy-central/commits/31442f29f469/
Changeset: 31442f29f469
User: jmchilton
Date: 2014-05-06 15:54:30
Summary: Refactor workflow module to allow multiple kinds of inputs.
Affected #: 1 file
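With this refactoring, a new kind of workflow input only needs to declare a few class-level attributes on top of InputModule. A minimal sketch (this particular subclass is hypothetical; the real collection input module arrives in the next changeset):
# Hypothetical subclass, shown only to illustrate the class-attribute pattern.
class InputExampleModule( InputModule ):
    type = "example_input"
    name = "Input example"
    default_name = "Input Example"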
diff -r 76856c3a746f63aa2fe0b124fa94553e80fc8adf -r 31442f29f4694d7edcc6e54e68db7d9baeaf64db lib/galaxy/workflow/modules.py
--- a/lib/galaxy/workflow/modules.py
+++ b/lib/galaxy/workflow/modules.py
@@ -111,21 +111,19 @@
raise TypeError( "Abstract method" )
-class InputDataModule( WorkflowModule ):
- type = "data_input"
- name = "Input dataset"
+class InputModule( WorkflowModule ):
@classmethod
def new( Class, trans, tool_id=None ):
module = Class( trans )
- module.state = dict( name="Input Dataset" )
+ module.state = dict( name=Class.default_name )
return module
@classmethod
def from_dict( Class, trans, d, secure=True ):
module = Class( trans )
state = from_json_string( d["tool_state"] )
- module.state = dict( name=state.get( "name", "Input Dataset" ) )
+ module.state = dict( name=state.get( "name", Class.default_name ) )
return module
@classmethod
@@ -191,6 +189,12 @@
return None, dict( output=state.inputs['input'])
+class InputDataModule( InputModule ):
+ type = "data_input"
+ name = "Input dataset"
+ default_name = "Input Dataset"
+
+
class ToolModule( WorkflowModule ):
type = "tool"
https://bitbucket.org/galaxy/galaxy-central/commits/f897110a03c0/
Changeset: f897110a03c0
User: jmchilton
Date: 2014-05-06 15:54:30
Summary: Dataset collections - workflows - allow data_collection params.
Extend workflow running, extraction, viewing to allow tools with data_collection parameters.
This is brutally ugly because "isinstance( input, DataToolParameter )", which was already not very Pythonic, has been updated to "isinstance( input, DataToolParameter ) or isinstance( input, DataCollectionToolParameter )". This should all be reworked with an "is_data_parameter" attribute on the base ToolParameter.
Affected #: 12 files
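A minimal sketch of the suggested "is_data_parameter" cleanup; this is not part of the changeset, only an illustration of the idea mentioned above:
# Hypothetical flag on the base class - not implemented in this changeset.
class ToolParameter( object ):
    is_data_parameter = False

class DataToolParameter( ToolParameter ):
    is_data_parameter = True

class DataCollectionToolParameter( ToolParameter ):
    is_data_parameter = True

# Call sites could then be written as
#     if input.is_data_parameter:
#         ...
# instead of chaining isinstance() checks against both parameter classes.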
diff -r 31442f29f4694d7edcc6e54e68db7d9baeaf64db -r f897110a03c0190b367c4450e5fc9c3c8f911b82 lib/galaxy/tools/parameters/basic.py
--- a/lib/galaxy/tools/parameters/basic.py
+++ b/lib/galaxy/tools/parameters/basic.py
@@ -1982,6 +1982,10 @@
self.multiple = False # Accessed on DataToolParameter a lot, may want in future
self._parse_options( elem ) # TODO: Review and test.
+ @property
+ def collection_type( self ):
+ return self.elem.get( "collection_type", None )
+
def get_html_field( self, trans=None, value=None, other_values={} ):
# dropped refresh values, may be needed..
default_field = "select_single_collection"
diff -r 31442f29f4694d7edcc6e54e68db7d9baeaf64db -r f897110a03c0190b367c4450e5fc9c3c8f911b82 lib/galaxy/web/base/controller.py
--- a/lib/galaxy/web/base/controller.py
+++ b/lib/galaxy/web/base/controller.py
@@ -40,6 +40,7 @@
from galaxy.datatypes.metadata import FileParameter
from galaxy.tools.parameters import RuntimeValue, visit_input_values
from galaxy.tools.parameters.basic import DataToolParameter
+from galaxy.tools.parameters.basic import DataCollectionToolParameter
from galaxy.util.json import to_json_string
from galaxy.workflow.modules import ToolModule
from galaxy.workflow.steps import attach_ordered_steps
@@ -1834,6 +1835,9 @@
# Get input dataset name; default to 'Input Dataset'
name = module.state.get( 'name', 'Input Dataset')
step_dict['inputs'].append( { "name" : name, "description" : annotation_str } )
+ elif module.type == "data_collection_input":
+ name = module.state.get( 'name', 'Input Dataset Collection' )
+ step_dict['inputs'].append( { "name" : name, "description" : annotation_str } )
else:
# Step is a tool and may have runtime inputs.
for name, val in module.state.inputs.items():
@@ -1870,10 +1874,10 @@
if step.type is None or step.type == 'tool':
# Determine full (prefixed) names of valid input datasets
data_input_names = {}
+
def callback( input, value, prefixed_name, prefixed_label ):
- if isinstance( input, DataToolParameter ):
+ if isinstance( input, DataToolParameter ) or isinstance( input, DataCollectionToolParameter ):
data_input_names[ prefixed_name ] = True
-
# FIXME: this updates modules silently right now; messages from updates should be provided.
module.check_and_update_state()
visit_input_values( module.tool.inputs, module.state.inputs, callback )
diff -r 31442f29f4694d7edcc6e54e68db7d9baeaf64db -r f897110a03c0190b367c4450e5fc9c3c8f911b82 lib/galaxy/webapps/galaxy/api/workflows.py
--- a/lib/galaxy/webapps/galaxy/api/workflows.py
+++ b/lib/galaxy/webapps/galaxy/api/workflows.py
@@ -172,6 +172,7 @@
history = self.get_history( trans, from_history_id, check_ownership=False, check_accessible=True )
job_ids = map( trans.security.decode_id, payload.get( "job_ids", [] ) )
dataset_ids = map( trans.security.decode_id, payload.get( "dataset_ids", [] ) )
+ dataset_collection_ids = map( trans.security.decode_id, payload.get( "dataset_collection_ids", [] ) )
workflow_name = payload[ "workflow_name" ]
stored_workflow = extract_workflow(
trans=trans,
@@ -179,6 +180,7 @@
history=history,
job_ids=job_ids,
dataset_ids=dataset_ids,
+ dataset_collection_ids=dataset_collection_ids,
workflow_name=workflow_name,
)
item = stored_workflow.to_dict( value_mapper={ "id": trans.security.encode_id } )
diff -r 31442f29f4694d7edcc6e54e68db7d9baeaf64db -r f897110a03c0190b367c4450e5fc9c3c8f911b82 lib/galaxy/webapps/galaxy/controllers/workflow.py
--- a/lib/galaxy/webapps/galaxy/controllers/workflow.py
+++ b/lib/galaxy/webapps/galaxy/controllers/workflow.py
@@ -20,6 +20,7 @@
from galaxy.model.item_attrs import UsesItemRatings
from galaxy.model.mapping import desc
from galaxy.tools.parameters.basic import DataToolParameter
+from galaxy.tools.parameters.basic import DataCollectionToolParameter
from galaxy.tools.parameters import visit_input_values
from galaxy.util.sanitize_html import sanitize_html
from galaxy.web import error, url_for
@@ -757,15 +758,20 @@
}
# Connections
input_connections = step.input_connections
+ input_connections_type = {}
multiple_input = {} # Boolean value indicating if this can be multiple
if step.type is None or step.type == 'tool':
# Determine full (prefixed) names of valid input datasets
data_input_names = {}
def callback( input, value, prefixed_name, prefixed_label ):
- if isinstance( input, DataToolParameter ):
+ if isinstance( input, DataToolParameter ) or isinstance( input, DataCollectionToolParameter ):
data_input_names[ prefixed_name ] = True
multiple_input[ prefixed_name ] = input.multiple
+ if isinstance( input, DataToolParameter ):
+ input_connections_type[ input.name ] = "dataset"
+ if isinstance( input, DataCollectionToolParameter ):
+ input_connections_type[ input.name ] = "dataset_collection"
visit_input_values( module.tool.inputs, module.state.inputs, callback )
# Filter
# FIXME: this removes connection without displaying a message currently!
@@ -787,7 +793,10 @@
# Encode input connections as dictionary
input_conn_dict = {}
for conn in input_connections:
- conn_dict = dict( id=conn.output_step.order_index, output_name=conn.output_name )
+ input_type = "dataset"
+ if conn.input_name in input_connections_type:
+ input_type = input_connections_type[ conn.input_name ]
+ conn_dict = dict( id=conn.output_step.order_index, output_name=conn.output_name, input_type=input_type )
if conn.input_name in multiple_input:
if conn.input_name in input_conn_dict:
input_conn_dict[ conn.input_name ].append( conn_dict )
@@ -826,7 +835,8 @@
steps_by_external_id = {}
errors = []
for key, step_dict in data['steps'].iteritems():
- if step_dict['type'] != 'data_input' and step_dict['tool_id'] not in trans.app.toolbox.tools_by_id:
+ is_input = step_dict[ 'type' ] in [ 'data_input', 'data_collection_input' ]
+ if not is_input and step_dict['tool_id'] not in trans.app.toolbox.tools_by_id:
errors.append("Step %s requires tool '%s'." % (step_dict['id'], step_dict['tool_id']))
if errors:
return dict( name=workflow.name,
@@ -1188,7 +1198,7 @@
return dict( ext_to_class_name=ext_to_class_name, class_to_classes=class_to_classes )
@web.expose
- def build_from_current_history( self, trans, job_ids=None, dataset_ids=None, workflow_name=None ):
+ def build_from_current_history( self, trans, job_ids=None, dataset_ids=None, dataset_collection_ids=None, workflow_name=None ):
user = trans.get_user()
history = trans.get_history()
if not user:
@@ -1208,6 +1218,7 @@
user=user,
job_ids=job_ids,
dataset_ids=dataset_ids,
+ dataset_collection_ids=dataset_collection_ids,
workflow_name=workflow_name
)
# Index page with message
@@ -1633,7 +1644,6 @@
return canvas
-## ---- Utility methods -------------------------------------------------------
def _build_workflow_on_str(instance_ds_names):
# Returns suffix for new histories based on multi input iteration
num_multi_inputs = len(instance_ds_names)
diff -r 31442f29f4694d7edcc6e54e68db7d9baeaf64db -r f897110a03c0190b367c4450e5fc9c3c8f911b82 lib/galaxy/workflow/extract.py
--- a/lib/galaxy/workflow/extract.py
+++ b/lib/galaxy/workflow/extract.py
@@ -21,8 +21,8 @@
WARNING_SOME_DATASETS_NOT_READY = "Some datasets still queued or running were ignored"
-def extract_workflow( trans, user, history=None, job_ids=None, dataset_ids=None, workflow_name=None ):
- steps = extract_steps( trans, history=history, job_ids=job_ids, dataset_ids=dataset_ids )
+def extract_workflow( trans, user, history=None, job_ids=None, dataset_ids=None, dataset_collection_ids=None, workflow_name=None ):
+ steps = extract_steps( trans, history=history, job_ids=job_ids, dataset_ids=dataset_ids, dataset_collection_ids=dataset_collection_ids )
# Workflow to populate
workflow = model.Workflow()
workflow.name = workflow_name
@@ -48,7 +48,7 @@
return stored
-def extract_steps( trans, history=None, job_ids=None, dataset_ids=None ):
+def extract_steps( trans, history=None, job_ids=None, dataset_ids=None, dataset_collection_ids=None ):
# Ensure job_ids and dataset_ids are lists (possibly empty)
if job_ids is None:
job_ids = []
@@ -58,9 +58,14 @@
dataset_ids = []
elif type( dataset_ids ) is not list:
dataset_ids = [ dataset_ids ]
+ if dataset_collection_ids is None:
+ dataset_collection_ids = []
+ elif type( dataset_collection_ids) is not list:
+ dataset_collection_ids = [ dataset_collection_ids ]
# Convert both sets of ids to integers
job_ids = [ int( id ) for id in job_ids ]
dataset_ids = [ int( id ) for id in dataset_ids ]
+ dataset_collection_ids = [ int( id ) for id in dataset_collection_ids ]
# Find each job, for security we (implicitly) check that they are
# associated with a job in the current history.
jobs, warnings = summarize( trans, history=history )
@@ -75,6 +80,12 @@
step.tool_inputs = dict( name="Input Dataset" )
hid_to_output_pair[ hid ] = ( step, 'output' )
steps.append( step )
+ for hid in dataset_collection_ids:
+ step = model.WorkflowStep()
+ step.type = 'data_collection_input'
+ step.tool_inputs = dict( name="Input Dataset Collection" )
+ hid_to_output_pair[ hid ] = ( step, 'output' )
+ steps.append( step )
# Tool steps
for job_id in job_ids:
assert job_id in jobs_by_id, "Attempt to create workflow with job not connected to current history"
@@ -114,6 +125,20 @@
self.id = "fake_%s" % dataset.id
+class DatasetCollectionCreationJob( object ):
+
+ def __init__( self, dataset_collection ):
+ self.is_fake = True
+ self.id = "fake_%s" % dataset_collection.id
+ self.from_jobs = None
+ self.name = "Dataset Collection Creation"
+ self.disabled_why = "Dataset collection created in a way not compatible with workflows"
+
+ def set_jobs( self, jobs ):
+ assert jobs is not None
+ self.from_jobs = jobs
+
+
def summarize( trans, history=None ):
""" Return mapping of job description to datasets for active items in
supplied history - needed for building workflow from a history.
@@ -126,11 +151,12 @@
# Get the jobs that created the datasets
warnings = set()
jobs = odict()
- for dataset in history.active_datasets:
+
+ def append_dataset( dataset ):
# FIXME: Create "Dataset.is_finished"
if dataset.state in ( 'new', 'running', 'queued' ):
warnings.add( WARNING_SOME_DATASETS_NOT_READY )
- continue
+ return
# if this hda was copied from another, we need to find the job that created the original hda
job_hda = dataset
@@ -147,6 +173,13 @@
else:
jobs[ job ] = [ ( assoc.name, dataset ) ]
+ for content in history.active_contents:
+ if content.history_content_type == "dataset_collection":
+ job = DatasetCollectionCreationJob( content )
+ jobs[ job ] = [ ( None, content ) ]
+ collection_jobs[ content ] = job
+ else:
+ append_dataset( content )
return jobs, warnings
diff -r 31442f29f4694d7edcc6e54e68db7d9baeaf64db -r f897110a03c0190b367c4450e5fc9c3c8f911b82 lib/galaxy/workflow/modules.py
--- a/lib/galaxy/workflow/modules.py
+++ b/lib/galaxy/workflow/modules.py
@@ -12,6 +12,7 @@
from galaxy.jobs.actions.post import ActionBox
from galaxy.model import PostJobAction
from galaxy.tools.parameters import check_param, DataToolParameter, DummyDataset, RuntimeValue, visit_input_values
+from galaxy.tools.parameters import DataCollectionToolParameter
from galaxy.util.bunch import Bunch
from galaxy.util.json import from_json_string, to_json_string
@@ -195,6 +196,65 @@
default_name = "Input Dataset"
+class InputDataCollectionModule( InputModule ):
+ default_name = "Input Dataset Collection"
+ default_collection_type = "list"
+ type = "data_collection_input"
+ name = "Input dataset collection"
+ collection_type = default_collection_type
+
+ @classmethod
+ def new( Class, trans, tool_id=None ):
+ module = Class( trans )
+ module.state = dict( name=Class.default_name, collection_type=Class.default_collection_type )
+ return module
+
+ @classmethod
+ def from_dict( Class, trans, d, secure=True ):
+ module = Class( trans )
+ state = from_json_string( d["tool_state"] )
+ module.state = dict(
+ name=state.get( "name", Class.default_name ),
+ collection_type=state.get( "collection_type", Class.default_collection_type )
+ )
+ return module
+
+ @classmethod
+ def from_workflow_step( Class, trans, step ):
+ module = Class( trans )
+ module.state = dict(
+ name=Class.default_name,
+ collection_type=Class.default_collection_type
+ )
+ for key in [ "name", "collection_type" ]:
+ if step.tool_inputs and key in step.tool_inputs:
+ module.state[ key ] = step.tool_inputs[ key ]
+ return module
+
+ def get_runtime_inputs( self, filter_set=['data'] ):
+ label = self.state.get( "name", self.default_name )
+ input_element = Element( "param", name="input", label=label, type="data_collection")
+ return dict( input=DataCollectionToolParameter( None, input_element, self.trans ) )
+
+ def get_config_form( self ):
+ form = web.FormBuilder(
+ title=self.name
+ ).add_text(
+ "name", "Name", value=self.state['name']
+ ).add_text(
+ "collection_type", "Collection Type", value=self.state[ "collection_type" ]
+ )
+ return self.trans.fill_template( "workflow/editor_generic_form.mako",
+ module=self, form=form )
+
+ def update_state( self, incoming ):
+ self.state[ 'name' ] = incoming.get( 'name', self.default_name )
+ self.state[ 'collection_type' ] = incoming.get( 'collection_type', self.collection_type )
+
+ def get_data_outputs( self ):
+ return [ dict( name='output', extensions=['input_collection'], collection_type=self.state[ 'collection_type' ] ) ]
+
+
class ToolModule( WorkflowModule ):
type = "tool"
@@ -324,7 +384,16 @@
name=prefixed_name,
label=prefixed_label,
multiple=input.multiple,
- extensions=input.extensions ) )
+ extensions=input.extensions,
+ input_type="dataset", ) )
+ if isinstance( input, DataCollectionToolParameter ):
+ data_inputs.append( dict(
+ name=prefixed_name,
+ label=prefixed_label,
+ multiple=input.multiple,
+ input_type="dataset_collection",
+ collection_type=input.collection_type,
+ ) )
visit_input_values( self.tool.inputs, self.state.inputs, callback )
return data_inputs
@@ -374,7 +443,7 @@
def item_callback( trans, key, input, value, error, old_value, context ):
# Dummy value for Data parameters
- if isinstance( input, DataToolParameter ):
+ if isinstance( input, DataToolParameter ) or isinstance( input, DataCollectionToolParameter ):
return DummyDataset(), None
# Deal with build/runtime (does not apply to Data parameters)
if key == make_buildtime_key:
@@ -414,6 +483,9 @@
replacement = [] if not connections else [DummyDataset() for conn in connections]
else:
replacement = DummyDataset()
+ elif isinstance( input, DataCollectionToolParameter ):
+ if connections is None or prefixed_name in input_connections_by_name:
+ replacement = DummyDataset()
return replacement
visit_input_values( self.tool.inputs, self.state.inputs, callback )
@@ -447,4 +519,4 @@
type = step.type
return self.module_types[type].from_workflow_step( trans, step )
-module_factory = WorkflowModuleFactory( dict( data_input=InputDataModule, tool=ToolModule ) )
+module_factory = WorkflowModuleFactory( dict( data_input=InputDataModule, data_collection_input=InputDataCollectionModule, tool=ToolModule ) )
diff -r 31442f29f4694d7edcc6e54e68db7d9baeaf64db -r f897110a03c0190b367c4450e5fc9c3c8f911b82 lib/galaxy/workflow/run.py
--- a/lib/galaxy/workflow/run.py
+++ b/lib/galaxy/workflow/run.py
@@ -4,6 +4,7 @@
from galaxy.jobs.actions.post import ActionBox
from galaxy.tools.parameters.basic import DataToolParameter
+from galaxy.tools.parameters.basic import DataCollectionToolParameter
from galaxy.tools.parameters import visit_input_values
from galaxy.util.odict import odict
from galaxy.workflow import modules
@@ -107,7 +108,7 @@
# Connect up
def callback( input, value, prefixed_name, prefixed_label ):
replacement = None
- if isinstance( input, DataToolParameter ):
+ if isinstance( input, DataToolParameter ) or isinstance( input, DataCollectionToolParameter ):
replacement = self._replacement_for_input( input, prefixed_name, step )
return replacement
try:
diff -r 31442f29f4694d7edcc6e54e68db7d9baeaf64db -r f897110a03c0190b367c4450e5fc9c3c8f911b82 templates/webapps/galaxy/workflow/build_from_current_history.mako
--- a/templates/webapps/galaxy/workflow/build_from_current_history.mako
+++ b/templates/webapps/galaxy/workflow/build_from_current_history.mako
@@ -60,7 +60,7 @@
</table>
%if disabled:
<hr>
- <div><input type="checkbox" name="dataset_ids" value="${data.hid}" checked="true" />${_('Treat as input dataset')}</div>
+ <div><input type="checkbox" name="${data.history_content_type}_ids" value="${data.hid}" checked="true" />${_('Treat as input dataset')}</div>
%endif
</div></%def>
@@ -103,6 +103,7 @@
if hasattr( job, 'is_fake' ) and job.is_fake:
cls += " toolFormDisabled"
disabled = True
+ tool_name = getattr( job, 'name', tool_name )
else:
tool = app.toolbox.get_tool( job.tool_id )
if tool:
@@ -116,6 +117,8 @@
tool_version_warning = 'Dataset was created with tool version "%s", but workflow extraction will use version "%s".' % ( job.tool_version, tool.version )
else:
tool_version_warning = ''
+ if disabled:
+ disabled_why = getattr( job, 'disabled_why', "This tool cannot be used in workflows" )
%><tr>
@@ -125,7 +128,7 @@
<div class="toolFormTitle">${tool_name}</div><div class="toolFormBody">
%if disabled:
- <div style="font-style: italic; color: gray">This tool cannot be used in workflows</div>
+ <div style="font-style: italic; color: gray">${disabled_why}</div>
%else:
<div><input type="checkbox" name="job_ids" value="${job.id}" checked="true" />Include "${tool_name}" in workflow</div>
%if tool_version_warning:
diff -r 31442f29f4694d7edcc6e54e68db7d9baeaf64db -r f897110a03c0190b367c4450e5fc9c3c8f911b82 templates/webapps/galaxy/workflow/display.mako
--- a/templates/webapps/galaxy/workflow/display.mako
+++ b/templates/webapps/galaxy/workflow/display.mako
@@ -3,6 +3,7 @@
<%!
from galaxy.tools.parameters import DataToolParameter, RuntimeValue
+ from galaxy.tools.parameters import DataCollectionToolParameter
from galaxy.web import form_builder
%>
@@ -41,7 +42,7 @@
<div class="${cls}"><label>${param.get_label()}</label><div>
- %if isinstance( param, DataToolParameter ):
+ %if isinstance( param, DataToolParameter ) or isinstance( param, DataCollectionToolParameter ):
%if ( prefix + param.name ) in step.input_connections_by_name:
<%
conns = step.input_connections_by_name[ prefix + param.name ]
diff -r 31442f29f4694d7edcc6e54e68db7d9baeaf64db -r f897110a03c0190b367c4450e5fc9c3c8f911b82 templates/webapps/galaxy/workflow/editor.mako
--- a/templates/webapps/galaxy/workflow/editor.mako
+++ b/templates/webapps/galaxy/workflow/editor.mako
@@ -363,6 +363,9 @@
<div class="toolTitle"><a href="#" onclick="add_node_for_module( 'data_input', 'Input Dataset' )">Input dataset</a></div>
+ <div class="toolTitle">
+ <a href="#" onclick="add_node_for_module( 'data_collection_input', 'Input Dataset Collection' )">Input dataset collection</a>
+ </div></div></div></div>
diff -r 31442f29f4694d7edcc6e54e68db7d9baeaf64db -r f897110a03c0190b367c4450e5fc9c3c8f911b82 templates/webapps/galaxy/workflow/editor_tool_form.mako
--- a/templates/webapps/galaxy/workflow/editor_tool_form.mako
+++ b/templates/webapps/galaxy/workflow/editor_tool_form.mako
@@ -1,5 +1,6 @@
<%
from galaxy.tools.parameters import DataToolParameter, RuntimeValue
+from galaxy.tools.parameters import DataCollectionToolParameter
from galaxy.util.expressions import ExpressionContext
%>
@@ -60,6 +61,13 @@
<div>
Data input '${param.name}' (${" or ".join( param.extensions )})
</div>
+ %elif type( param ) is DataCollectionToolParameter:
+ <label>
+ ${param.get_label()}
+ </label>
+ <div>
+ Data collection input '${param.name}'
+ </div>
%else:
%if isinstance( value, RuntimeValue ):
<label>
diff -r 31442f29f4694d7edcc6e54e68db7d9baeaf64db -r f897110a03c0190b367c4450e5fc9c3c8f911b82 templates/webapps/galaxy/workflow/run.mako
--- a/templates/webapps/galaxy/workflow/run.mako
+++ b/templates/webapps/galaxy/workflow/run.mako
@@ -290,6 +290,7 @@
<%
from galaxy.tools.parameters import DataToolParameter, RuntimeValue
+from galaxy.tools.parameters import DataCollectionToolParameter
from galaxy.jobs.actions.post import ActionBox
import re
import colorsys
@@ -361,7 +362,7 @@
<div class="${cls}"><label>${param.get_label()}</label><div>
- %if isinstance( param, DataToolParameter ):
+ %if isinstance( param, DataToolParameter ) or isinstance( param, DataCollectionToolParameter ):
%if ( prefix + param.name ) in step.input_connections_by_name:
<%
conns = step.input_connections_by_name[ prefix + param.name ]
@@ -378,7 +379,7 @@
if not enable_unique_defaults:
del already_used[:]
%>
- %if step.type == 'data_input':
+ %if step.type in [ 'data_input', 'data_collection_input' ]:
##Input Dataset Step, wrap for multiinput.
<span class="runtime-form-row"><span class='multiinput_wrap'>
https://bitbucket.org/galaxy/galaxy-central/commits/05b0fa593120/
Changeset: 05b0fa593120
User: jmchilton
Date: 2014-05-06 15:54:30
Summary: Dataset collections - workflows - allow implicit mapping steps.
While running a workflow, if a data tool parameter is encountered with a dataset collection as input, run N jobs and create a dataset collection for each of the step's outputs (similar functionality was added for tools in a previous changeset). These collections are then used when connecting up subsequent steps, so the parallelization continues, as does the creation of dataset collections.
Much more remains to be done and tested - chief among these concerns is that in order to test this I needed to hand-modify a workflow JSON file. The ability to use this feature should be incorporated into the editor and into the interface/controller for extracting a workflow from a history.
Affected #: 4 files
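Condensed sketch of the expansion added to _execute_tool_step in lib/galaxy/workflow/run.py (the full change is in the diff below); the gist is one copied tool state per collection element, all submitted through the shared execute() helper:
# Condensed from the change below; error handling and the visit_input_values
# callback that swaps in the per-element dataset are omitted.
collections_to_match = self._find_collections_to_match( tool, step )
collection_info = None
if collections_to_match.has_collections():
    collection_info = self.trans.app.dataset_collections_service.match_collections( collections_to_match )

param_combinations = []
iteration_elements_iter = collection_info.slice_collections() if collection_info else [ None ]
for iteration_elements in iteration_elements_iter:
    execution_state = tool_state.copy()
    execution_state.inputs = make_dict_copy( execution_state.inputs )
    param_combinations.append( execution_state.inputs )

execution_tracker = execute( trans=self.trans, tool=tool,
                             param_combinations=param_combinations,
                             history=self.target_history,
                             collection_info=collection_info )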
diff -r f897110a03c0190b367c4450e5fc9c3c8f911b82 -r 05b0fa593120b66ec963508e4ca5416d6589fe0a lib/galaxy/dataset_collections/matching.py
--- a/lib/galaxy/dataset_collections/matching.py
+++ b/lib/galaxy/dataset_collections/matching.py
@@ -50,6 +50,9 @@
raise exceptions.MessageException( CANNOT_MATCH_ERROR_MESSAGE )
self.collections[ input_name ] = hdca
+ def slice_collections( self ):
+ return self.structure.walk_collections( self.collections )
+
@staticmethod
def for_collections( collections_to_match ):
if not collections_to_match.has_collections():
diff -r f897110a03c0190b367c4450e5fc9c3c8f911b82 -r 05b0fa593120b66ec963508e4ca5416d6589fe0a lib/galaxy/dataset_collections/structure.py
--- a/lib/galaxy/dataset_collections/structure.py
+++ b/lib/galaxy/dataset_collections/structure.py
@@ -34,6 +34,21 @@
self.children = children
+ def walk_collections( self, hdca_dict ):
+ return self._walk_collections( dict_map( lambda hdca: hdca.collection, hdca_dict ) )
+
+ def _walk_collections( self, collection_dict ):
+ for ( identifier, substructure ) in self.children:
+ def element( collection ):
+ return collection[ identifier ]
+
+ if substructure.is_leaf:
+ yield dict_map( element, collection_dict )
+ else:
+ sub_collections = dict_map( lambda collection: element( collection ).child_collection )
+ for element in substructure._walk_collections( sub_collections ):
+ yield element
+
@property
def is_leaf( self ):
return False
@@ -81,5 +96,9 @@
)
+def dict_map( func, input_dict ):
+ return dict( [ ( k, func(v) ) for k, v in input_dict.iteritems() ] )
+
+
def get_structure( dataset_collection_instance, subcollection_type=None ):
return Tree( dataset_collection_instance.collection, subcollection_type=subcollection_type )
diff -r f897110a03c0190b367c4450e5fc9c3c8f911b82 -r 05b0fa593120b66ec963508e4ca5416d6589fe0a lib/galaxy/tools/__init__.py
--- a/lib/galaxy/tools/__init__.py
+++ b/lib/galaxy/tools/__init__.py
@@ -957,6 +957,18 @@
self.rerun_remap_job_id = None
self.inputs = params_from_strings( tool.inputs, values, app, ignore_errors=True )
+ def copy( self ):
+ """
+ WARNING! This makes a shallow copy; it *SHOULD* be reworked to make a deep
+ copy.
+ """
+ new_state = DefaultToolState()
+ new_state.page = self.page
+ new_state.rerun_remap_job_id = self.rerun_remap_job_id
+ # This needs to be copied.
+ new_state.inputs = self.inputs
+ return new_state
+
class ToolOutput( object, Dictifiable ):
"""
diff -r f897110a03c0190b367c4450e5fc9c3c8f911b82 -r 05b0fa593120b66ec963508e4ca5416d6589fe0a lib/galaxy/workflow/run.py
--- a/lib/galaxy/workflow/run.py
+++ b/lib/galaxy/workflow/run.py
@@ -1,14 +1,22 @@
from galaxy import model
from galaxy import exceptions
+from galaxy import util
+
+from galaxy.dataset_collections import matching
from galaxy.jobs.actions.post import ActionBox
from galaxy.tools.parameters.basic import DataToolParameter
from galaxy.tools.parameters.basic import DataCollectionToolParameter
from galaxy.tools.parameters import visit_input_values
+from galaxy.tools.parameters.wrapped import make_dict_copy
+from galaxy.tools.execute import execute
from galaxy.util.odict import odict
from galaxy.workflow import modules
+import logging
+log = logging.getLogger( __name__ )
+
class WorkflowRunConfig( object ):
""" Wrapper around all the ways a workflow execution can be parameterized.
@@ -76,13 +84,13 @@
self._populate_state( )
for step in self.workflow.steps:
- job = None
- job = self._invoke_step( step )
- # Record invocation
- workflow_invocation_step = model.WorkflowInvocationStep()
- workflow_invocation_step.workflow_invocation = workflow_invocation
- workflow_invocation_step.workflow_step = step
- workflow_invocation_step.job = job
+ jobs = self._invoke_step( step )
+ for job in util.listify( jobs ):
+ # Record invocation
+ workflow_invocation_step = model.WorkflowInvocationStep()
+ workflow_invocation_step.workflow_invocation = workflow_invocation
+ workflow_invocation_step.workflow_step = step
+ workflow_invocation_step.job = job
# All jobs ran successfully, so we can save now
self.trans.sa_session.add( workflow_invocation )
@@ -93,35 +101,85 @@
def _invoke_step( self, step ):
if step.type == 'tool' or step.type is None:
- job = self._execute_tool_step( step )
+ jobs = self._execute_tool_step( step )
else:
- job = self._execute_input_step( step )
+ jobs = self._execute_input_step( step )
- return job
+ return jobs
def _execute_tool_step( self, step ):
trans = self.trans
outputs = self.outputs
tool = trans.app.toolbox.get_tool( step.tool_id )
+ tool_state = step.state
- # Connect up
+ collections_to_match = self._find_collections_to_match( tool, step )
+ # Have implicit collections...
+ if collections_to_match.has_collections():
+ collection_info = self.trans.app.dataset_collections_service.match_collections( collections_to_match )
+ else:
+ collection_info = None
+
+ param_combinations = []
+ if collection_info:
+ iteration_elements_iter = collection_info.slice_collections()
+ else:
+ iteration_elements_iter = [ None ]
+
+ for iteration_elements in iteration_elements_iter:
+ execution_state = tool_state.copy()
+ # TODO: Move next step into copy()
+ execution_state.inputs = make_dict_copy( execution_state.inputs )
+
+ # Connect up
+ def callback( input, value, prefixed_name, prefixed_label ):
+ replacement = None
+ if isinstance( input, DataToolParameter ) or isinstance( input, DataCollectionToolParameter ):
+ # TODO: Handle multiple differently...
+ if iteration_elements and isinstance( input, DataToolParameter ) and prefixed_name in iteration_elements:
+ replacement = iteration_elements[ prefixed_name ].dataset_instance
+ else:
+ replacement = self._replacement_for_input( input, prefixed_name, step )
+ return replacement
+ try:
+ # Replace DummyDatasets with historydatasetassociations
+ visit_input_values( tool.inputs, execution_state.inputs, callback )
+ except KeyError, k:
+ message_template = "Error due to input mapping of '%s' in '%s'. A common cause of this is conditional outputs that cannot be determined until runtime, please review your workflow."
+ message = message_template % (tool.name, k.message)
+ raise exceptions.MessageException( message )
+ param_combinations.append( execution_state.inputs )
+
+ execution_tracker = execute(
+ trans=self.trans,
+ tool=tool,
+ param_combinations=param_combinations,
+ history=self.target_history,
+ collection_info=collection_info,
+ )
+ if collection_info:
+ outputs[ step.id ] = execution_tracker.created_collections
+ else:
+ outputs[ step.id ] = dict( execution_tracker.output_datasets )
+
+ jobs = execution_tracker.successful_jobs
+ for job in jobs:
+ self._handle_post_job_actions( step, job )
+ return jobs
+
+ def _find_collections_to_match( self, tool, step ):
+ collections_to_match = matching.CollectionsToMatch()
+
def callback( input, value, prefixed_name, prefixed_label ):
- replacement = None
- if isinstance( input, DataToolParameter ) or isinstance( input, DataCollectionToolParameter ):
- replacement = self._replacement_for_input( input, prefixed_name, step )
- return replacement
- try:
- # Replace DummyDatasets with historydatasetassociations
- visit_input_values( tool.inputs, step.state.inputs, callback )
- except KeyError, k:
- raise exceptions.MessageException( "Error due to input mapping of '%s' in '%s'. A common cause of this is conditional outputs that cannot be determined until runtime, please review your workflow." % (tool.name, k.message))
- # Execute it
- job, out_data = tool.execute( trans, step.state.inputs, history=self.target_history )
- outputs[ step.id ] = out_data
+ is_data_param = isinstance( input, DataToolParameter )
+ if is_data_param and not input.multiple:
+ data = self._replacement_for_input( input, prefixed_name, step )
+ if isinstance( data, model.HistoryDatasetCollectionAssociation ):
+ collections_to_match.add( prefixed_name, data )
- self._handle_post_job_actions( step, job )
- return job
+ visit_input_values( tool.inputs, step.state.inputs, callback )
+ return collections_to_match
def _execute_input_step( self, step ):
trans = self.trans
https://bitbucket.org/galaxy/galaxy-central/commits/7ad563d3b213/
Changeset: 7ad563d3b213
User: jmchilton
Date: 2014-05-06 15:54:30
Summary: Dataset collections - workflows - implicit "reduce" steps.
Affected #: 1 file
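In practice the "reduce" means that a tool input marked multiple="true" which is wired to a single collection output now receives the collection itself rather than a one-element list, so one job consumes all of the collection's datasets. Condensed sketch of the check added below:
# Condensed from the change to _replacement_for_input in the diff below.
if input.multiple:
    replacement = [ outputs[ c.output_step.id ][ c.output_name ] for c in connection ]
    # A lone collection connected to a multi-input is handed over whole so the
    # tool framework can extract its member datasets (an implicit "reduce").
    if len( replacement ) == 1 and isinstance( replacement[ 0 ], model.HistoryDatasetCollectionAssociation ):
        replacement = replacement[ 0 ]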
diff -r 05b0fa593120b66ec963508e4ca5416d6589fe0a -r 7ad563d3b2134b053b2e2727f5a3d5835ca163b7 lib/galaxy/workflow/run.py
--- a/lib/galaxy/workflow/run.py
+++ b/lib/galaxy/workflow/run.py
@@ -220,6 +220,12 @@
connection = step.input_connections_by_name[ prefixed_name ]
if input.multiple:
replacement = [ outputs[ c.output_step.id ][ c.output_name ] for c in connection ]
+ # If replacement is just one dataset collection, replace tool
+ # input with dataset collection - tool framework will extract
+ # datasets properly.
+ if len( replacement ) == 1:
+ if isinstance( replacement[ 0 ], model.HistoryDatasetCollectionAssociation ):
+ replacement = replacement[ 0 ]
else:
replacement = outputs[ connection[ 0 ].output_step.id ][ connection[ 0 ].output_name ]
return replacement
https://bitbucket.org/galaxy/galaxy-central/commits/18b12144a710/
Changeset: 18b12144a710
User: jmchilton
Date: 2014-05-06 15:54:31
Summary: Dataset collections - abstraction for collection types.
Perhaps over-engineering, but this centralizes the logic for reasoning about which kinds of collections are subcollections of other types, etc.
Affected #: 8 files
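For orientation, the new abstraction can be exercised on its own; this mirrors the doctests added in type_description.py below:
from galaxy.dataset_collections.type_description import CollectionTypeDescriptionFactory

factory = CollectionTypeDescriptionFactory( None )  # the registry argument is unused so far
list_paired = factory.for_collection_type( "list:paired" )
assert list_paired.has_subcollections()
assert list_paired.has_subcollections_of_type( "paired" )
assert not list_paired.has_subcollections_of_type( "list:paired" )
assert list_paired.rank_collection_type() == "list"
assert list_paired.subcollection_type_description().collection_type == "paired"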
diff -r 7ad563d3b2134b053b2e2727f5a3d5835ca163b7 -r 18b12144a7108d39e630ad5a7fbd4cfcf1dbc255 lib/galaxy/dataset_collections/__init__.py
--- a/lib/galaxy/dataset_collections/__init__.py
+++ b/lib/galaxy/dataset_collections/__init__.py
@@ -1,5 +1,6 @@
from .registry import DatasetCollectionTypesRegistry
from .matching import MatchingCollections
+from .type_description import CollectionTypeDescriptionFactory
from galaxy import model
from galaxy.exceptions import MessageException
@@ -35,6 +36,7 @@
def __init__( self, app ):
self.type_registry = DatasetCollectionTypesRegistry( app )
+ self.collection_type_descriptions = CollectionTypeDescriptionFactory( self.type_registry )
self.model = app.model
self.security = app.security
self.hda_manager = hdas.HDAManager()
@@ -92,15 +94,15 @@
raise RequestParameterInvalidException( ERROR_INVALID_ELEMENTS_SPECIFICATION )
if not collection_type:
raise RequestParameterInvalidException( ERROR_NO_COLLECTION_TYPE )
- rank_collection_type = collection_type.split( ":" )[ 0 ]
+ collection_type_description = self.collection_type_descriptions.for_collection_type( collection_type )
if elements is None:
- if rank_collection_type != collection_type:
+ if collection_type_description.has_subcollections( ):
# Nested collection - recursively create collections and update identifiers.
self.__recursively_create_collections( trans, element_identifiers )
elements = self.__load_elements( trans, element_identifiers )
# else if elements is set, it better be an ordered dict!
- type_plugin = self.__type_plugin( rank_collection_type )
+ type_plugin = collection_type_description.rank_type_plugin()
dataset_collection = type_plugin.build_collection( elements )
dataset_collection.collection_type = collection_type
return dataset_collection
@@ -231,15 +233,12 @@
raise RequestParameterInvalidException( "Unknown src_type parameter supplied '%s'." % src_type )
return element
- def __type_plugin( self, collection_type ):
- return self.type_registry.get( collection_type )
-
def match_collections( self, collections_to_match ):
"""
May seem odd to place it here, but planning to grow sophistication and
get plugin types involved so it will likely make sense in the future.
"""
- return MatchingCollections.for_collections( collections_to_match )
+ return MatchingCollections.for_collections( collections_to_match, self.collection_type_descriptions )
def get_dataset_collection_instance( self, trans, instance_type, id, **kwds ):
"""
diff -r 7ad563d3b2134b053b2e2727f5a3d5835ca163b7 -r 18b12144a7108d39e630ad5a7fbd4cfcf1dbc255 lib/galaxy/dataset_collections/matching.py
--- a/lib/galaxy/dataset_collections/matching.py
+++ b/lib/galaxy/dataset_collections/matching.py
@@ -40,8 +40,8 @@
self.structure = None
self.collections = {}
- def __attempt_add_to_match( self, input_name, hdca, subcollection_type ):
- structure = get_structure( hdca, subcollection_type=subcollection_type )
+ def __attempt_add_to_match( self, input_name, hdca, collection_type_description, subcollection_type ):
+ structure = get_structure( hdca, collection_type_description, leaf_subcollection_type=subcollection_type )
if not self.structure:
self.structure = structure
self.collections[ input_name ] = hdca
@@ -54,7 +54,7 @@
return self.structure.walk_collections( self.collections )
@staticmethod
- def for_collections( collections_to_match ):
+ def for_collections( collections_to_match, collection_type_descriptions ):
if not collections_to_match.has_collections():
return None
@@ -62,6 +62,7 @@
for input_key, to_match in collections_to_match.iteritems():
hdca = to_match.hdca
subcollection_type = to_match.subcollection_type
- matching_collections.__attempt_add_to_match( input_key, hdca, subcollection_type=subcollection_type )
+ collection_type_description = collection_type_descriptions.for_collection_type( hdca.collection.collection_type )
+ matching_collections.__attempt_add_to_match( input_key, hdca, collection_type_description, subcollection_type )
return matching_collections
diff -r 7ad563d3b2134b053b2e2727f5a3d5835ca163b7 -r 18b12144a7108d39e630ad5a7fbd4cfcf1dbc255 lib/galaxy/dataset_collections/registry.py
--- a/lib/galaxy/dataset_collections/registry.py
+++ b/lib/galaxy/dataset_collections/registry.py
@@ -7,7 +7,7 @@
class DatasetCollectionTypesRegistry(object):
- def __init__(self, app):
+ def __init__( self, app ):
self.__plugins = dict( [ ( p.collection_type, p() ) for p in PLUGIN_CLASSES ] )
def get( self, plugin_type ):
diff -r 7ad563d3b2134b053b2e2727f5a3d5835ca163b7 -r 18b12144a7108d39e630ad5a7fbd4cfcf1dbc255 lib/galaxy/dataset_collections/structure.py
--- a/lib/galaxy/dataset_collections/structure.py
+++ b/lib/galaxy/dataset_collections/structure.py
@@ -18,17 +18,18 @@
class Tree( object ):
- def __init__( self, dataset_collection, subcollection_type ):
- self.collection_type = dataset_collection.collection_type
- self.subcollection_type = subcollection_type
+ def __init__( self, dataset_collection, collection_type_description, leaf_subcollection_type ):
+ self.collection_type_description = collection_type_description
+ self.leaf_subcollection_type = leaf_subcollection_type # collection_type to trim tree at...
children = []
for element in dataset_collection.elements:
child_collection = element.child_collection
if child_collection:
- if child_collection.collection_type == subcollection_type:
+ subcollection_type_description = collection_type_description.subcollection_type_description() # Type description of children
+ if subcollection_type_description.can_match_type( leaf_subcollection_type ):
children.append( ( element.element_identifier, leaf ) )
else:
- children.append( ( element.element_identifier, Tree( child_collection, subcollection_type=subcollection_type ) ) )
+ children.append( ( element.element_identifier, Tree( child_collection, collection_type_description=subcollection_type_description, leaf_subcollection_type=leaf_subcollection_type ) ) )
elif element.hda:
children.append( ( element.element_identifier, leaf ) )
@@ -54,7 +55,7 @@
return False
def can_match( self, other_structure ):
- if self.collection_type != other_structure.collection_type:
+ if not self.collection_type_description.can_match_type( other_structure.collection_type_description ):
# TODO: generalize
return False
@@ -91,7 +92,7 @@
return dict(
src="new_collection",
- collection_type=self.collection_type,
+ collection_type=self.collection_type_description.collection_type,
element_identifiers=element_identifiers,
)
@@ -100,5 +101,5 @@
return dict( [ ( k, func(v) ) for k, v in input_dict.iteritems() ] )
-def get_structure( dataset_collection_instance, subcollection_type=None ):
- return Tree( dataset_collection_instance.collection, subcollection_type=subcollection_type )
+def get_structure( dataset_collection_instance, collection_type_description, leaf_subcollection_type=None ):
+ return Tree( dataset_collection_instance.collection, collection_type_description, leaf_subcollection_type=leaf_subcollection_type )
diff -r 7ad563d3b2134b053b2e2727f5a3d5835ca163b7 -r 18b12144a7108d39e630ad5a7fbd4cfcf1dbc255 lib/galaxy/dataset_collections/type_description.py
--- /dev/null
+++ b/lib/galaxy/dataset_collections/type_description.py
@@ -0,0 +1,89 @@
+
+
+class CollectionTypeDescriptionFactory( object ):
+
+ def __init__( self, type_registry ):
+ # Taking in type_registry even though it is not used yet, because it
+ # likely will be someday.
+ self.type_registry = type_registry
+
+ def for_collection_type( self, collection_type ):
+ return CollectionTypeDescription( collection_type, self )
+
+
+class CollectionTypeDescription( object ):
+ """ Abstraction over dataset collection type that ties together string
+ representation in the database/model with the type registry.
+
+
+ >>> nested_type_description = CollectionTypeDescription( "list:paired", None )
+ >>> paired_type_description = CollectionTypeDescription( "paired", None )
+ >>> nested_type_description.has_subcollections_of_type( "list" )
+ False
+ >>> nested_type_description.has_subcollections_of_type( "list:paired" )
+ False
+ >>> nested_type_description.has_subcollections_of_type( "paired" )
+ True
+ >>> nested_type_description.has_subcollections_of_type( paired_type_description )
+ True
+ >>> nested_type_description.has_subcollections( )
+ True
+ >>> paired_type_description.has_subcollections( )
+ False
+ >>> paired_type_description.rank_collection_type()
+ 'paired'
+ >>> nested_type_description.rank_collection_type()
+ 'list'
+ """
+
+ def __init__( self, collection_type, collection_type_description_factory ):
+ self.collection_type = collection_type
+ self.collection_type_description_factory = collection_type_description_factory
+ self.__has_subcollections = self.collection_type.find( ":" ) > 0
+
+ def has_subcollections_of_type( self, other_collection_type ):
+ """ Take in another type (either flat string or another
+ CollectionTypeDescription) and determine if this collection contains
+ subcollections matching that type.
+
+ The way this is used in map/reduce it seems to make the most sense
+ for this to return True if these subtypes are proper (i.e. a type
+ is not considered to have subcollections of its own type).
+ """
+ if hasattr( other_collection_type, 'collection_type' ):
+ other_collection_type = other_collection_type.collection_type
+ collection_type = self.collection_type
+ return collection_type.endswith( other_collection_type ) and collection_type != other_collection_type
+
+ def is_subcollection_of_type( self, other_collection_type ):
+ if not hasattr( other_collection_type, 'collection_type' ):
+ other_collection_type = self.collection_type_description_factory.for_collection_type( other_collection_type )
+ return other_collection_type.has_subcollections_of_type( self )
+
+ def can_match_type( self, other_collection_type ):
+ if hasattr( other_collection_type, 'collection_type' ):
+ other_collection_type = other_collection_type.collection_type
+ collection_type = self.collection_type
+ return other_collection_type == collection_type
+
+ def subcollection_type_description( self ):
+ if not self.__has_subcollections:
+ raise ValueError( "Cannot generate subcollection type description for flat type %s" % self.collection_type )
+ subcollection_type = self.collection_type.split( ":", 1 )[ 1 ]
+ return self.collection_type_description_factory.for_collection_type( subcollection_type )
+
+ def has_subcollections( self ):
+ return self.__has_subcollections
+
+ def rank_collection_type( self ):
+ """ Return the top-level collection type corresponding to this
+ collection type. For instance the "rank" type of a list of paired
+ data ("list:paired") is "list".
+ """
+ return self.collection_type.split( ":" )[ 0 ]
+
+ def rank_type_plugin( self ):
+ return self.collection_type_description_factory.type_registry.get( self.rank_collection_type() )
+
+ def __str__( self ):
+ return "CollectionTypeDescription[%s]" % self.collection_type
diff -r 7ad563d3b2134b053b2e2727f5a3d5835ca163b7 -r 18b12144a7108d39e630ad5a7fbd4cfcf1dbc255 lib/galaxy/tools/execute.py
--- a/lib/galaxy/tools/execute.py
+++ b/lib/galaxy/tools/execute.py
@@ -108,7 +108,7 @@
parent=history,
name=output_collection_name,
element_identifiers=element_identifiers_for_datasets[ "element_identifiers" ],
- collection_type=structure.collection_type,
+ collection_type=structure.collection_type_description.collection_type,
implicit_collection_info=implicit_collection_info,
)
collections[ output_name ] = collection
diff -r 7ad563d3b2134b053b2e2727f5a3d5835ca163b7 -r 18b12144a7108d39e630ad5a7fbd4cfcf1dbc255 lib/galaxy/tools/parameters/basic.py
--- a/lib/galaxy/tools/parameters/basic.py
+++ b/lib/galaxy/tools/parameters/basic.py
@@ -1977,7 +1977,7 @@
def __init__( self, tool, elem, trans=None ):
super(DataCollectionToolParameter, self).__init__( tool, elem, trans )
- self.history_query = history_query.HistoryQuery.from_parameter_elem( elem )
+ self.elem = elem
self._parse_formats( trans, tool, elem )
self.multiple = False # Accessed on DataToolParameter a lot, may want in future
self._parse_options( elem ) # TODO: Review and test.
@@ -1986,6 +1986,10 @@
def collection_type( self ):
return self.elem.get( "collection_type", None )
+ def _history_query( self, trans ):
+ dataset_collection_type_descriptions = trans.app.dataset_collections_service.collection_type_descriptions
+ return history_query.HistoryQuery.from_parameter_elem( self.elem, dataset_collection_type_descriptions )
+
def get_html_field( self, trans=None, value=None, other_values={} ):
# dropped refresh values, may be needed..
default_field = "select_single_collection"
@@ -1999,7 +2003,7 @@
def _get_single_collection_field( self, trans, history, value, other_values ):
field = form_builder.SelectField( self.name, self.multiple, None, self.refresh_on_change, refresh_on_change_values=self.refresh_on_change_values )
- dataset_collections = trans.app.dataset_collections_service.history_dataset_collections( history, self.history_query )
+ dataset_collections = trans.app.dataset_collections_service.history_dataset_collections( history, self._history_query( trans ) )
dataset_matcher = DatasetMatcher( trans, self, value, other_values )
dataset_collection_matcher = DatasetCollectionMatcher( dataset_matcher )
@@ -2024,7 +2028,7 @@
dataset_collection_matcher = DatasetCollectionMatcher( dataset_matcher )
for history_dataset_collection in history.dataset_collections:
- if not self.history_query.can_map_over( history_dataset_collection ):
+ if not self._history_query( trans ).can_map_over( history_dataset_collection ):
continue
datasets_match = dataset_collection_matcher.hdca_match( history_dataset_collection )
@@ -2032,7 +2036,7 @@
name = history_dataset_collection.name
hid = str( history_dataset_collection.hid )
hidden_text = "" # TODO
- subcollection_type = self.history_query.collection_type
+ subcollection_type = self._history_query( trans ).collection_type_description.collection_type
id = "%s|%s" % ( dataset_matcher.trans.security.encode_id( history_dataset_collection.id ), subcollection_type )
text = "%s:%s %s" % ( hid, hidden_text, name )
diff -r 7ad563d3b2134b053b2e2727f5a3d5835ca163b7 -r 18b12144a7108d39e630ad5a7fbd4cfcf1dbc255 lib/galaxy/tools/parameters/history_query.py
--- a/lib/galaxy/tools/parameters/history_query.py
+++ b/lib/galaxy/tools/parameters/history_query.py
@@ -8,26 +8,31 @@
"""
def __init__( self, **kwargs ):
- self.collection_type = kwargs.get( "collection_type", None )
+ self.collection_type_description = kwargs.get( "collection_type_description", None )
@staticmethod
- def from_parameter_elem( elem ):
+ def from_parameter_elem( elem, collection_type_descriptions ):
""" Take in a tool parameter element.
"""
- kwargs = dict( collection_type=elem.get( "collection_type", None ) )
+ collection_type = elem.get( "collection_type", None )
+ if collection_type:
+ collection_type_description = collection_type_descriptions.for_collection_type( collection_type )
+ else:
+ collection_type_description = None
+ kwargs = dict( collection_type_description=collection_type_description )
return HistoryQuery( **kwargs )
def direct_match( self, hdca ):
- if self.collection_type and hdca.collection.collection_type != self.collection_type:
+ collection_type_description = self.collection_type_description
+ if collection_type_description and not collection_type_description.can_match_type( hdca.collection.collection_type ):
return False
return True
def can_map_over( self, hdca ):
- if not self.collection_type:
+ collection_type_description = self.collection_type_description
+ if not collection_type_description:
return False
- # Can map a list:pair repeatedly over a pair parameter
hdca_collection_type = hdca.collection.collection_type
- can = hdca_collection_type.endswith( self.collection_type ) and hdca_collection_type != self.collection_type
- return can
+ return collection_type_description.is_subcollection_of_type( hdca_collection_type )
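As an aside on the type strings the new CollectionTypeDescription wraps: below is a standalone sketch of the colon-separated naming scheme ("list", "paired", "list:paired") it reasons about. This is illustrative only, not Galaxy code, and the helper names are made up.

    def rank_collection_type( collection_type ):
        # Top-level structure: the "rank" type of "list:paired" is "list".
        return collection_type.split( ":" )[ 0 ]

    def subcollection_type( collection_type ):
        # Peel one level of nesting: the subcollection type of "list:paired" is "paired".
        if ":" not in collection_type:
            raise ValueError( "Cannot take subcollections of flat type %s" % collection_type )
        return collection_type.split( ":", 1 )[ 1 ]

    def has_subcollections( collection_type ):
        return ":" in collection_type

    assert rank_collection_type( "list:paired" ) == "list"
    assert subcollection_type( "list:paired" ) == "paired"
    assert not has_subcollections( "paired" )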
https://bitbucket.org/galaxy/galaxy-central/commits/34d1dd36a82b/
Changeset: 34d1dd36a82b
User: jmchilton
Date: 2014-05-06 15:54:31
Summary: Dataset collections - hide HDAs behind implicitly mapped HDCAs.
Affected #: 3 files
diff -r 18b12144a7108d39e630ad5a7fbd4cfcf1dbc255 -r 34d1dd36a82b211b8aa06c0158eb6e66f2c79b4b lib/galaxy/dataset_collections/__init__.py
--- a/lib/galaxy/dataset_collections/__init__.py
+++ b/lib/galaxy/dataset_collections/__init__.py
@@ -67,6 +67,10 @@
if implicit_collection_info:
for input_name, input_collection in implicit_collection_info[ "implicit_inputs" ]:
dataset_collection_instance.add_implicit_input_collection( input_name, input_collection )
+ for output_dataset in implicit_collection_info.get( "outputs_datasets" ):
+ output_dataset.hidden_beneath_collection_instance = dataset_collection_instance
+ trans.sa_session.add( output_dataset )
+
dataset_collection_instance.implicit_output_name = implicit_collection_info[ "implicit_output_name" ]
# Handle setting hid
parent.add_dataset_collection( dataset_collection_instance )
diff -r 18b12144a7108d39e630ad5a7fbd4cfcf1dbc255 -r 34d1dd36a82b211b8aa06c0158eb6e66f2c79b4b lib/galaxy/jobs/__init__.py
--- a/lib/galaxy/jobs/__init__.py
+++ b/lib/galaxy/jobs/__init__.py
@@ -964,6 +964,8 @@
trynum += 1
log.warning( 'Error accessing %s, will retry: %s', dataset.dataset.file_name, e )
time.sleep( 2 )
+ if dataset.hidden_beneath_collection_instance:
+ dataset.visible = False
dataset.blurb = 'done'
dataset.peek = 'no peek'
dataset.info = (dataset.info or '')
diff -r 18b12144a7108d39e630ad5a7fbd4cfcf1dbc255 -r 34d1dd36a82b211b8aa06c0158eb6e66f2c79b4b lib/galaxy/tools/execute.py
--- a/lib/galaxy/tools/execute.py
+++ b/lib/galaxy/tools/execute.py
@@ -87,7 +87,8 @@
implicit_collection_info = dict(
implicit_inputs=implicit_inputs,
- implicit_output_name=output_name
+ implicit_output_name=output_name,
+ outputs_datasets=outputs_datasets
)
try:
output_collection_name = self.tool_action.get_output_name(
@@ -112,6 +113,7 @@
implicit_collection_info=implicit_collection_info,
)
collections[ output_name ] = collection
+
self.created_collections = collections
__all__ = [ execute ]
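A rough, self-contained sketch of the hiding flow wired up above: output datasets are tagged with the collection instance when it is created and flipped to invisible when their job finishes. The classes below are throwaway stand-ins, not Galaxy models.

    class FakeDataset( object ):
        def __init__( self ):
            self.visible = True
            self.hidden_beneath_collection_instance = None

    def create_implicit_collection( implicit_collection_info ):
        instance = object()  # stands in for the new collection instance
        # Mirrors dataset_collections/__init__.py: remember which collection each output belongs to.
        for output_dataset in implicit_collection_info.get( "outputs_datasets" ):
            output_dataset.hidden_beneath_collection_instance = instance
        return instance

    def finish_output( dataset ):
        # Mirrors jobs/__init__.py: hide the dataset once its job wraps up.
        if dataset.hidden_beneath_collection_instance:
            dataset.visible = False

    outputs = [ FakeDataset(), FakeDataset() ]
    create_implicit_collection( dict( outputs_datasets=outputs ) )
    for dataset in outputs:
        finish_output( dataset )
    assert not any( dataset.visible for dataset in outputs )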
https://bitbucket.org/galaxy/galaxy-central/commits/bf60fa79bff1/
Changeset: bf60fa79bff1
User: jmchilton
Date: 2014-05-06 15:54:31
Summary: Dataset collections - workflows - argument extraction for implicit mapping steps.
Second try at this... I am not saying this is good, but it is better than my previous attempt.
Affected #: 1 file
diff -r 34d1dd36a82b211b8aa06c0158eb6e66f2c79b4b -r bf60fa79bff110f52a2ad0fa73485ee64ac66320 lib/galaxy/workflow/extract.py
--- a/lib/galaxy/workflow/extract.py
+++ b/lib/galaxy/workflow/extract.py
@@ -18,6 +18,9 @@
order_workflow_steps_with_levels
)
+import logging
+log = logging.getLogger( __name__ )
+
WARNING_SOME_DATASETS_NOT_READY = "Some datasets still queued or running were ignored"
@@ -68,7 +71,8 @@
dataset_collection_ids = [ int( id ) for id in dataset_collection_ids ]
# Find each job; for security we (implicitly) check that they are
# associated with a job in the current history.
- jobs, warnings = summarize( trans, history=history )
+ summary = WorkflowSummary( trans, history )
+ jobs = summary.jobs
jobs_by_id = dict( ( job.id, job ) for job in jobs.keys() )
steps = []
steps_by_job_id = {}
@@ -99,6 +103,11 @@
# an earlier job can be used as an input to a later
# job.
for other_hid, input_name in associations:
+ if job in summary.implicit_map_jobs:
+ an_implicit_output_collection = jobs[ job ][ 0 ][ 1 ]
+ input_collection = an_implicit_output_collection.find_implicit_input_collection( input_name )
+ if input_collection:
+ other_hid = input_collection.hid
if other_hid in hid_to_output_pair:
other_step, other_name = hid_to_output_pair[ other_hid ]
conn = model.WorkflowStepConnection()
@@ -145,42 +154,90 @@
Formerly called get_job_dict in the workflow web controller.
"""
- if not history:
- history = trans.get_history()
+ summary = WorkflowSummary( trans, history )
+ return summary.jobs, summary.warnings
- # Get the jobs that created the datasets
- warnings = set()
- jobs = odict()
- def append_dataset( dataset ):
- # FIXME: Create "Dataset.is_finished"
- if dataset.state in ( 'new', 'running', 'queued' ):
- warnings.add( WARNING_SOME_DATASETS_NOT_READY )
+class WorkflowSummary( object ):
+
+ def __init__( self, trans, history ):
+ if not history:
+ history = trans.get_history()
+ self.history = history
+ self.warnings = set()
+ self.jobs = odict()
+ self.implicit_map_jobs = []
+ self.__summarize()
+
+ def __summarize( self ):
+ # Make a first pass to handle all singleton jobs, input datasets, and dataset collections;
+ # just grab the implicitly mapped jobs and handle them in a second pass. The second pass is
+ # needed because we cannot allow selection of individual datasets from an implicit
+ # mapping during extraction - you get the collection or nothing.
+ implicit_outputs = []
+ for content in self.history.active_contents:
+ if content.history_content_type == "dataset_collection":
+ if not content.implicit_output_name:
+ job = DatasetCollectionCreationJob( content )
+ self.jobs[ job ] = [ ( None, content ) ]
+ else:
+ implicit_outputs.append( content )
+ else:
+ self.__append_dataset( content )
+
+ for dataset_collection in implicit_outputs:
+ # TODO: Optimize db call
+ # TODO: Ensure this is deterministic, must get same job
+ # for each dataset collection.
+ dataset_instance = dataset_collection.collection.dataset_instances[ 0 ]
+ if not self.__check_state( dataset_instance ):
+ # Just checking the state of one instance; we don't need more, but it
+ # makes me wonder if we even need this check at all.
+ continue
+
+ job_hda = self.__original_hda( dataset_instance )
+ if not job_hda.creating_job_associations:
+ log.warn( "An implicitly created output dataset collection doesn't have a creating_job_association; this should not happen!" )
+ job = DatasetCollectionCreationJob( dataset_collection )
+ self.jobs[ job ] = [ ( None, dataset_collection ) ]
+
+ for assoc in job_hda.creating_job_associations:
+ job = assoc.job
+ if job not in self.jobs or self.jobs[ job ][ 0 ][ 1 ].history_content_type == "dataset":
+ self.jobs[ job ] = [ ( assoc.name, dataset_collection ) ]
+ self.implicit_map_jobs.append( job )
+ else:
+ self.jobs[ job ].append( ( assoc.name, dataset_collection ) )
+
+ def __append_dataset( self, dataset ):
+ if not self.__check_state( dataset ):
return
- #if this hda was copied from another, we need to find the job that created the origial hda
- job_hda = dataset
- while job_hda.copied_from_history_dataset_association:
- job_hda = job_hda.copied_from_history_dataset_association
+ job_hda = self.__original_hda( dataset )
if not job_hda.creating_job_associations:
- jobs[ FakeJob( dataset ) ] = [ ( None, dataset ) ]
+ self.jobs[ FakeJob( dataset ) ] = [ ( None, dataset ) ]
for assoc in job_hda.creating_job_associations:
job = assoc.job
- if job in jobs:
- jobs[ job ].append( ( assoc.name, dataset ) )
+ if job in self.jobs:
+ self.jobs[ job ].append( ( assoc.name, dataset ) )
else:
- jobs[ job ] = [ ( assoc.name, dataset ) ]
+ self.jobs[ job ] = [ ( assoc.name, dataset ) ]
- for content in history.active_contents:
- if content.history_content_type == "dataset_collection":
- job = DatasetCollectionCreationJob( content )
- jobs[ job ] = [ ( None, content ) ]
- collection_jobs[ content ] = job
- else:
- append_dataset( content )
- return jobs, warnings
+ def __original_hda( self, hda ):
+ # If this hda was copied from another, we need to find the job that created the original hda.
+ job_hda = hda
+ while job_hda.copied_from_history_dataset_association:
+ job_hda = job_hda.copied_from_history_dataset_association
+ return job_hda
+
+ def __check_state( self, hda ):
+ # FIXME: Create "Dataset.is_finished"
+ if hda.state in ( 'new', 'running', 'queued' ):
+ self.warnings.add( WARNING_SOME_DATASETS_NOT_READY )
+ return
+ return hda
def step_inputs( trans, job ):
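A much-simplified sketch of the two-pass grouping WorkflowSummary performs above, using plain dicts in place of history contents; the real class also checks dataset state and walks creating_job_associations, which is omitted here.

    def summarize_contents( history_contents ):
        jobs = {}                  # job id -> list of ( output name, content )
        implicit_map_jobs = set()
        implicit_outputs = []
        # First pass: singleton datasets and explicitly created collections.
        for content in history_contents:
            if content[ "type" ] == "dataset_collection" and content.get( "implicit_output_name" ):
                implicit_outputs.append( content )  # defer to second pass
            else:
                jobs.setdefault( content[ "job" ], [] ).append( ( content.get( "name" ), content ) )
        # Second pass: collapse implicitly mapped outputs onto their creating job so
        # extraction offers the whole collection instead of its individual datasets.
        for collection in implicit_outputs:
            job = collection[ "job" ]
            jobs[ job ] = [ ( collection[ "implicit_output_name" ], collection ) ]
            implicit_map_jobs.add( job )
        return jobs, implicit_map_jobs

    contents = [
        dict( type="dataset", job=1, name="out_file1" ),
        dict( type="dataset_collection", job=2, implicit_output_name="out_file1" ),
    ]
    jobs, implicit_map_jobs = summarize_contents( contents )
    assert 2 in implicit_map_jobs and len( jobs[ 2 ] ) == 1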
https://bitbucket.org/galaxy/galaxy-central/commits/cb84a4609f1b/
Changeset: cb84a4609f1b
User: jmchilton
Date: 2014-05-06 15:54:31
Summary: Dataset collections - augment workflow extraction for copied/imported dataset collections.
Affected #: 1 file
diff -r bf60fa79bff110f52a2ad0fa73485ee64ac66320 -r cb84a4609f1bffbac38234d67580a9812619be89 lib/galaxy/workflow/extract.py
--- a/lib/galaxy/workflow/extract.py
+++ b/lib/galaxy/workflow/extract.py
@@ -177,6 +177,7 @@
implicit_outputs = []
for content in self.history.active_contents:
if content.history_content_type == "dataset_collection":
+ content = self.__original_hdca( content )
if not content.implicit_output_name:
job = DatasetCollectionCreationJob( content )
self.jobs[ job ] = [ ( None, content ) ]
@@ -225,6 +226,11 @@
else:
self.jobs[ job ] = [ ( assoc.name, dataset ) ]
+ def __original_hdca( self, hdca ):
+ while hdca.copied_from_history_dataset_collection_association:
+ hdca = hdca.copied_from_history_dataset_collection_association
+ return hdca
+
def __original_hda( self, hda ):
# If this hda was copied from another, we need to find the job that created the original hda.
job_hda = hda
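The new __original_hdca helper is the same copied_from walk already used for HDAs; a tiny standalone equivalent follows (class and attribute names here are placeholders).

    class CopiedItem( object ):
        def __init__( self, copied_from=None ):
            self.copied_from = copied_from

    def original_item( item ):
        # Follow copied_from links back to the item a job actually created.
        while item.copied_from is not None:
            item = item.copied_from
        return item

    original = CopiedItem()
    imported_copy = CopiedItem( copied_from=CopiedItem( copied_from=original ) )
    assert original_item( imported_copy ) is original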
https://bitbucket.org/galaxy/galaxy-central/commits/543496aa3518/
Changeset: 543496aa3518
User: jmchilton
Date: 2014-05-06 15:54:31
Summary: Dataset collections - workflows - allow nested subcollection mapping during execution.
Affected #: 2 files
diff -r cb84a4609f1bffbac38234d67580a9812619be89 -r 543496aa3518de307cda6c38d6c85450f6a9089c lib/galaxy/workflow/modules.py
--- a/lib/galaxy/workflow/modules.py
+++ b/lib/galaxy/workflow/modules.py
@@ -233,7 +233,8 @@
def get_runtime_inputs( self, filter_set=['data'] ):
label = self.state.get( "name", self.default_name )
- input_element = Element( "param", name="input", label=label, type="data_collection")
+ collection_type = self.state.get( "collection_type", self.default_collection_type )
+ input_element = Element( "param", name="input", label=label, type="data_collection", collection_type=collection_type )
return dict( input=DataCollectionToolParameter( None, input_element, self.trans ) )
def get_config_form( self ):
@@ -242,6 +243,7 @@
).add_text(
"name", "Name", value=self.state['name']
).add_text(
+ # TODO: clean this up...
"collection_type", "Collection Type", value=self.state[ "collection_type" ]
)
return self.trans.fill_template( "workflow/editor_generic_form.mako",
diff -r cb84a4609f1bffbac38234d67580a9812619be89 -r 543496aa3518de307cda6c38d6c85450f6a9089c lib/galaxy/workflow/run.py
--- a/lib/galaxy/workflow/run.py
+++ b/lib/galaxy/workflow/run.py
@@ -136,9 +136,13 @@
def callback( input, value, prefixed_name, prefixed_label ):
replacement = None
if isinstance( input, DataToolParameter ) or isinstance( input, DataCollectionToolParameter ):
- # TODO: Handle multiple differently...
- if iteration_elements and isinstance( input, DataToolParameter ) and prefixed_name in iteration_elements:
- replacement = iteration_elements[ prefixed_name ].dataset_instance
+ if iteration_elements and prefixed_name in iteration_elements:
+ if isinstance( input, DataToolParameter ):
+ # Pull out dataset instance from element.
+ replacement = iteration_elements[ prefixed_name ].dataset_instance
+ else:
+ # If collection - just use element model object.
+ replacement = iteration_elements[ prefixed_name ]
else:
replacement = self._replacement_for_input( input, prefixed_name, step )
return replacement
@@ -159,7 +163,7 @@
collection_info=collection_info,
)
if collection_info:
- outputs[ step.id ] = execution_tracker.created_collections
+ outputs[ step.id ] = dict( execution_tracker.created_collections )
else:
outputs[ step.id ] = dict( execution_tracker.output_datasets )
@@ -178,6 +182,13 @@
if isinstance( data, model.HistoryDatasetCollectionAssociation ):
collections_to_match.add( prefixed_name, data )
+ is_data_collection_param = isinstance( input, DataCollectionToolParameter )
+ if is_data_collection_param and not input.multiple:
+ data = self._replacement_for_input( input, prefixed_name, step )
+ history_query = input._history_query( self.trans )
+ if history_query.can_map_over( data ):
+ collections_to_match.add( prefixed_name, data, subcollection_type=input.collection_type )
+
visit_input_values( tool.inputs, step.state.inputs, callback )
return collections_to_match
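The run-time change above boils down to a per-element replacement rule: plain data parameters receive the element's dataset instance, while collection parameters receive the subcollection element itself. A hedged sketch of just that branch (names are illustrative):

    def replacement_for_element( is_collection_param, iteration_element ):
        if is_collection_param:
            # Nested mapping: hand the subcollection element straight to the tool.
            return iteration_element
        # Ordinary data parameter: unwrap the dataset instance from the element.
        return iteration_element[ "dataset_instance" ]

    element = dict( dataset_instance="hda 12", identifier="sample1" )
    assert replacement_for_element( False, element ) == "hda 12"
    assert replacement_for_element( True, element ) is element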
https://bitbucket.org/galaxy/galaxy-central/commits/b980e1f4a9aa/
Changeset: b980e1f4a9aa
User: jmchilton
Date: 2014-05-06 15:54:31
Summary: Dataset collections - create lists and pairs via HDA multi-select.
Affected #: 2 files
diff -r 543496aa3518de307cda6c38d6c85450f6a9089c -r b980e1f4a9aa35ec51fc5aa1e2f96d746c7f42e0 static/scripts/mvc/dataset/hda-model.js
--- a/static/scripts/mvc/dataset/hda-model.js
+++ b/static/scripts/mvc/dataset/hda-model.js
@@ -624,6 +624,73 @@
Backbone.Collection.prototype.set.call( this, models, options );
},
+ /** Convert this ad-hoc collection of HDAs to a formal collection tracked
+ by the server.
+ **/
+ promoteToHistoryDatasetCollection : function _promote( history, collection_type, options ){
+ options = options || {};
+ options.url = this.url();
+ options.type = "POST";
+ var full_collection_type = collection_type;
+ var element_identifiers = [],
+ name = null;
+
+ // This mechanism is rough - no error handling, allows invalid selections, no way
+ // for user to pick/override element identifiers. This is only really meant
+ if( collection_type == "list" ) {
+ this.chain().each( function( hda ) {
+ // TODO: Handle duplicate names.
+ var name = hda.attributes.name;
+ var id = hda.id;
+ var content_type = hda.attributes.history_content_type;
+ if( content_type == "dataset" ) {
+ if( full_collection_type != "list" ) {
+ console.log( "Invalid collection type" );
+ }
+ element_identifiers.push( { name: name, src: "hda", id: id } );
+ } else {
+ if( full_collection_type == "list" ) {
+ full_collection_type = "list:" + hda.attributes.collection_type;
+ } else {
+ if( full_collection_type != "list:" + hda.attributes.collection_type ) {
+ console.log( "Invalid collection type" );
+ }
+ }
+ element_identifiers.push( { name: name, src: "hdca", id: id } );
+ }
+ });
+ name = "New Dataset List";
+ } else if( collection_type == "paired" ) {
+ var ids = this.ids();
+ if( ids.length != 2 ){
+ // TODO: Do something...
+ }
+ element_identifiers.push( { name: "left", src: "hda", id: ids[ 0 ] } );
+ element_identifiers.push( { name: "right", src: "hda", id: ids[ 1 ] } );
+ name = "New Dataset Pair";
+ }
+ options.data = {type: "dataset_collection",
+ name: name,
+ collection_type: full_collection_type,
+ element_identifiers: JSON.stringify(element_identifiers),
+ };
+
+ var xhr = jQuery.ajax( options );
+ xhr.done( function( message, status, responseObj ){
+ history.refresh( );
+ });
+ xhr.fail( function( xhr, status, message ){
+ if( xhr.responseJSON && xhr.responseJSON.error ){
+ error = xhr.responseJSON.error;
+ } else {
+ error = xhr.responseJSON;
+ }
+ xhr.responseText = error;
+ // Do something?
+ });
+ return xhr;
+ },
+
/** String representation. */
toString : function(){
return ([ 'HDACollection(', [ this.historyId, this.length ].join(), ')' ].join( '' ));
diff -r 543496aa3518de307cda6c38d6c85450f6a9089c -r b980e1f4a9aa35ec51fc5aa1e2f96d746c7f42e0 static/scripts/mvc/history/history-panel.js
--- a/static/scripts/mvc/history/history-panel.js
+++ b/static/scripts/mvc/history/history-panel.js
@@ -221,8 +221,7 @@
var action = hdaModel.HistoryDatasetAssociation.prototype.undelete;
panel.getSelectedHdaCollection().ajaxQueue( action );
}
- }
- ];
+ } ];
if( panel.purgeAllowed ){
actions.push({
html: _l( 'Permanently delete datasets' ), func: function(){
@@ -233,6 +232,17 @@
}
});
}
+ actions.push( {
+ html: _l( 'Build Dataset List (Experimental)' ), func: function() {
+ panel.getSelectedHdaCollection().promoteToHistoryDatasetCollection( panel.model, "list" );
+ }
+ } );
+ actions.push( {
+ // TODO: Only show quick pair if two things selected.
+ html: _l( 'Build Dataset Pair (Experimental)' ), func: function() {
+ panel.getSelectedHdaCollection().promoteToHistoryDatasetCollection( panel.model, "paired" );
+ }
+ } );
return new PopupMenu( $where.find( '.history-dataset-action-popup-btn' ), actions );
},
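Server-side, the promotion above is a single POST of element identifiers against the history contents API. Below is a sketch of the body the JavaScript builds for the "paired" case; the payload keys mirror the diff, but treat the exact endpoint and ids as illustrative.

    import json

    def paired_promotion_payload( left_hda_id, right_hda_id ):
        element_identifiers = [
            dict( name="left", src="hda", id=left_hda_id ),
            dict( name="right", src="hda", id=right_hda_id ),
        ]
        return dict(
            type="dataset_collection",
            name="New Dataset Pair",
            collection_type="paired",
            # The client JSON-encodes the identifier list before posting.
            element_identifiers=json.dumps( element_identifiers ),
        )

    print( paired_promotion_payload( "abc123", "def456" ) )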
https://bitbucket.org/galaxy/galaxy-central/commits/a3a2f96a68f2/
Changeset: a3a2f96a68f2
User: jmchilton
Date: 2014-05-06 15:54:31
Summary: Dataset collections - backbone model for history content.
Affected #: 1 file
diff -r b980e1f4a9aa35ec51fc5aa1e2f96d746c7f42e0 -r a3a2f96a68f2f3c97d31b69fa3ac717f892cbd63 static/scripts/mvc/dataset/hda-model.js
--- a/static/scripts/mvc/dataset/hda-model.js
+++ b/static/scripts/mvc/dataset/hda-model.js
@@ -447,6 +447,8 @@
model : function( attrs, options ) {
if( attrs.history_content_type == "dataset" ) {
return new HistoryDatasetAssociation( attrs, options );
+ } else if( attrs.history_content_type == "dataset_collection" ) {
+ return new HistoryDatasetCollectionAssociation( attrs, options );
} else {
// TODO: Handle unknown history_content_type...
}
@@ -697,6 +699,57 @@
}
});
+var HistoryDatasetCollectionAssociation = HistoryContent.extend(
+/** @lends HistoryDatasetCollectionAssociation.prototype */{
+ /** default attributes for a model */
+ defaults : {
+ // parent (containing) history
+ history_id : null,
+ // often used with tagging
+ model_class : 'HistoryDatasetCollectionAssociation',
+ history_content_type : 'dataset_collection',
+ hid : 0,
+
+ id : null,
+ name : '(unnamed dataset collection)',
+ // one of HistoryDatasetAssociation.STATES, calling them all 'ok' for now.
+ state : 'ok',
+
+ accessible : true,
+ deleted : false,
+ visible : true,
+
+ purged : false, // Purged doesn't make sense for collections - at least right now.
+
+ tags : [],
+ annotation : ''
+ },
+ urls : function(){
+ },
+
+ inReadyState : function(){
+ return true; // TODO
+ },
+
+ // ........................................................................ search
+ /** what attributes of an collection will be used in a text search */
+ searchAttributes : [
+ 'name'
+ ],
+
+ /** our attr keys don't often match the labels we display to the user - so, when using
+ * attribute specifiers ('name="bler"') in a term, allow passing in aliases for the
+ * following attr keys.
+ */
+ searchAliases : {
+ title : 'name'
+ // TODO: Add tag...
+ },
+
+});
+
+
+
//==============================================================================
return {
HistoryDatasetAssociation : HistoryDatasetAssociation,
https://bitbucket.org/galaxy/galaxy-central/commits/1adf9edf112b/
Changeset: 1adf9edf112b
User: jmchilton
Date: 2014-05-06 15:54:31
Summary: More reworking of backbone views to allow multiple content types.
Affected #: 2 files
diff -r a3a2f96a68f2f3c97d31b69fa3ac717f892cbd63 -r 1adf9edf112b8fda48fcea30779ddb265ce3dfec static/scripts/mvc/history/history-panel.js
--- a/static/scripts/mvc/history/history-panel.js
+++ b/static/scripts/mvc/history/history-panel.js
@@ -271,6 +271,10 @@
*/
_createContentView : function( hda ){
var hdaId = hda.get( 'id' ),
+ historyContentType = hda.get( 'history_content_type' ),
+ hdaView = null;
+
+ if( historyContentType == "dataset" ) {
hdaView = new this.HDAViewClass({
model : hda,
linkTarget : this.linkTarget,
@@ -283,6 +287,7 @@
tagsEditorShown : ( this.tagsEditor && !this.tagsEditor.hidden ),
annotationEditorShown : ( this.annotationEditor && !this.annotationEditor.hidden )
});
+ }
this._setUpHdaListeners( hdaView );
return hdaView;
},
diff -r a3a2f96a68f2f3c97d31b69fa3ac717f892cbd63 -r 1adf9edf112b8fda48fcea30779ddb265ce3dfec static/scripts/mvc/history/readonly-history-panel.js
--- a/static/scripts/mvc/history/readonly-history-panel.js
+++ b/static/scripts/mvc/history/readonly-history-panel.js
@@ -610,6 +610,9 @@
*/
_createContentView : function( hda ){
var hdaId = hda.get( 'id' ),
+ historyContentType = hda.get( "history_content_type" ),
+ hdaView = null;
+ if( historyContentType == "dataset" ) {
hdaView = new this.HDAViewClass({
model : hda,
linkTarget : this.linkTarget,
@@ -618,6 +621,7 @@
hasUser : this.model.ownedByCurrUser(),
logger : this.logger
});
+ }
this._setUpHdaListeners( hdaView );
return hdaView;
},
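The model factory in the previous changeset and _createContentView here share one idea: dispatch on history_content_type. The same pattern written as a plain Python factory, with placeholder view classes:

    class HDAView( object ):
        pass

    class HDCAView( object ):
        pass

    VIEW_CLASSES = dict( dataset=HDAView, dataset_collection=HDCAView )

    def create_content_view( content ):
        # Pick a view class based on the content type; unknown types raise here,
        # where the JavaScript currently just leaves a TODO.
        view_class = VIEW_CLASSES.get( content[ "history_content_type" ] )
        if view_class is None:
            raise ValueError( "Unknown history_content_type %r" % content[ "history_content_type" ] )
        return view_class()

    assert isinstance( create_content_view( dict( history_content_type="dataset_collection" ) ), HDCAView )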
https://bitbucket.org/galaxy/galaxy-central/commits/4b68fd381c21/
Changeset: 4b68fd381c21
User: jmchilton
Date: 2014-05-06 15:54:31
Summary: Dataset collections - backbone view for history content.
Affected #: 4 files
diff -r 1adf9edf112b8fda48fcea30779ddb265ce3dfec -r 4b68fd381c2128220176797108febcbe679a39fe static/scripts/mvc/collection/dataset-collection-base.js
--- /dev/null
+++ b/static/scripts/mvc/collection/dataset-collection-base.js
@@ -0,0 +1,191 @@
+define([
+ "mvc/dataset/hda-model",
+ "mvc/dataset/hda-base"
+], function( hdaModel, hdaBase ){
+/* global Backbone, LoggableMixin */
+//==============================================================================
+/** @class Read only view for HistoryDatasetCollectionAssociation.
+ * @name HDABaseView
+ *
+ * @augments Backbone.View
+ * @borrows LoggableMixin#logger as #logger
+ * @borrows LoggableMixin#log as #log
+ * @constructs
+ */
+var DatasetCollectionBaseView = hdaBase.HistoryContentBaseView.extend({
+ className : "dataset hda history-panel-hda",
+ id : function(){ return 'hdca-' + this.model.get( 'id' ); },
+
+ initialize : function( attributes ){
+ if( attributes.logger ){ this.logger = this.model.logger = attributes.logger; }
+ this.log( this + '.initialize:', attributes );
+ /** is the view currently in selection mode? */
+ this.selectable = attributes.selectable || false;
+ //this.log( '\t selectable:', this.selectable );
+ /** is the view currently selected? */
+ this.selected = attributes.selected || false;
+ /** is the body of this collection view expanded/not? */
+ this.expanded = attributes.expanded || false;
+ },
+
+ render : function( fade ){
+ var $newRender = this._buildNewRender();
+
+ this._queueNewRender( $newRender, fade );
+ return this;
+ },
+
+ // main template for folder browsing
+ templateSkeleton : function (){
+ return [
+ '<div class="dataset hda">',
+ ' <div class="dataset-warnings">',
+ '<% if ( deleted ) { %>',
+ ' <div class="dataset-deleted-msg warningmessagesmall"><strong>',
+ ' This dataset has been deleted.', // Localize?
+ ' </div>',
+ '<% } %>',
+ '<% if ( ! visible ) { %>',
+ ' <div class="dataset-hidden-msg warningmessagesmall"><strong>',
+ ' This dataset has been hidden.', // Localize?
+ ' </div>',
+ '<% } %>',
+ ' </div>',
+ ' <div class="dataset-selector"><span class="fa fa-2x fa-square-o"></span></div>',
+ ' <div class="dataset-primary-actions"></div>',
+ ' <div class="dataset-title-bar clear" tabindex="0">',
+ ' <span class="dataset-state-icon state-icon"></span>',
+ ' <div class="dataset-title">',
+ ' <span class="hda-hid"><%= hid %></span>',
+ ' <span class="dataset-name"><%= name %></span>',
+ ' </div>',
+ ' </div>',
+ ' <div class="dataset-body"></div>',
+ '</div>',
+ ].join( '' );
+ },
+
+ templateBody : function() {
+ return [
+ '<div class="dataset-body">',
+ ' <div class="dataset-summary">',
+ ' A dataset collection.',
+ ' </div>',
+
+ ].join( '' );
+
+ },
+
+ _buildNewRender : function(){
+ var $newRender = $( _.template(this.templateSkeleton(), this.model.toJSON() ) );
+ $newRender.find( '.dataset-primary-actions' ).append( this._render_titleButtons() );
+ $newRender.children( '.dataset-body' ).replaceWith( this._render_body() );
+ this._setUpBehaviors( $newRender );
+ return $newRender;
+ },
+
+ // ................................................................................ titlebar buttons
+ /** Render icon-button group for the common, most easily accessed actions.
+ * @returns {jQuery} rendered DOM
+ */
+ _render_titleButtons : function(){
+ // render just the display for read-only
+ return [ ];
+ },
+
+ // ......................................................................... state body renderers
+ /** Render the enclosing div of the collection body and, if expanded, the html in the body
+ * @returns {jQuery} rendered DOM
+ */
+ _render_body : function(){
+ var $body = $( '<div>Error: unknown state "' + this.model.get( 'state' ) + '".</div>' ),
+ // cheesy: get function by assumed matching name
+ renderFn = this[ '_render_body_' + this.model.get( 'state' ) ];
+ if( _.isFunction( renderFn ) ){
+ $body = renderFn.call( this );
+ }
+ this._setUpBehaviors( $body );
+
+ // only render the body html if it's being shown
+ if( this.expanded ){
+ $body.show();
+ }
+ return $body;
+ },
+
+ /** set up js behaviors, event handlers for elements within the given container
+ * @param {jQuery} $container jq object that contains the elements to process (defaults to this.$el)
+ */
+ _setUpBehaviors : function( $container ){
+ $container = $container || this.$el;
+ // set up canned behavior on children (bootstrap, popupmenus, editable_text, etc.)
+ make_popup_menus( $container );
+ $container.find( '[title]' ).tooltip({ placement : 'bottom' });
+ },
+
+ // TODO: Eliminate duplication between following event map and one for HDAs.
+
+ // ......................................................................... events
+ /** event map */
+ events : {
+ // expand the body when the title is clicked or when in focus and space or enter is pressed
+ 'click .dataset-title-bar' : 'toggleBodyVisibility',
+ 'keydown .dataset-title-bar' : 'toggleBodyVisibility',
+
+ // toggle selected state
+ 'click .dataset-selector' : 'toggleSelect'
+ },
+
+ /** Render and show the full, detailed body of this view including extra data and controls.
+ * @fires body-expanded when a body has been expanded
+ */
+ expandBody : function(){
+ var contentView = this;
+
+ function _renderBodyAndExpand(){
+ contentView.$el.children( '.dataset-body' ).replaceWith( contentView._render_body() );
+ contentView.$el.children( '.dataset-body' ).slideDown( contentView.fxSpeed, function(){
+ contentView.expanded = true;
+ contentView.trigger( 'body-expanded', contentView.model.get( 'id' ) );
+ });
+ }
+ // TODO: Fetch more details like HDA view...
+ _renderBodyAndExpand();
+ },
+
+ /** Hide the body/details of an HDA.
+ * @fires body-collapsed when a body has been collapsed
+ */
+ collapseBody : function(){
+ var hdaView = this;
+ this.$el.children( '.dataset-body' ).slideUp( hdaView.fxSpeed, function(){
+ hdaView.expanded = false;
+ hdaView.trigger( 'body-collapsed', hdaView.model.get( 'id' ) );
+ });
+ },
+
+
+ /** Render an 'ok' collection.
+ * @param {jQuery} parent DOM to which to append this body
+ */
+ _render_body_ok : function(){
+ // most common state renderer and the most complicated
+ var view = this,
+ $body = $( _.template(this.templateBody(), this.model.toJSON() ) )
+
+ // return shortened form if del'd (no display apps or peek?)
+ if( this.model.get( 'deleted' ) ){
+ return $body;
+ }
+
+ return $body;
+ }
+
+});
+
+//==============================================================================
+return {
+ DatasetCollectionBaseView : DatasetCollectionBaseView
+};
+
+});
\ No newline at end of file
diff -r 1adf9edf112b8fda48fcea30779ddb265ce3dfec -r 4b68fd381c2128220176797108febcbe679a39fe static/scripts/mvc/collection/dataset-collection-edit.js
--- /dev/null
+++ b/static/scripts/mvc/collection/dataset-collection-edit.js
@@ -0,0 +1,69 @@
+define([
+ "mvc/dataset/hda-model",
+ "mvc/collection/dataset-collection-base",
+], function( hdaModel, datasetCollectionBase ){
+//==============================================================================
+/** @class Editing view for HistoryDatasetCollectionAssociation.
+ * @name DatasetCollectionEditView
+ *
+ * @augments DatasetCollectionBaseView
+ * @constructs
+ */
+var DatasetCollectionEditView = datasetCollectionBase.DatasetCollectionBaseView.extend( {
+
+ initialize : function( attributes ){
+ datasetCollectionBase.DatasetCollectionBaseView.prototype.initialize.call( this, attributes );
+ },
+
+ // ......................................................................... edit attr, delete
+ /** Render icon-button group for the common, most easily accessed actions.
+ * Overrides _render_titleButtons to include editing-related buttons.
+ * @see DatasetCollectionBaseView#_render_titleButtons
+ * @returns {jQuery} rendered DOM
+ */
+ _render_titleButtons : function(){
+ // render the display, edit attr and delete icon-buttons
+ return datasetCollectionBase.DatasetCollectionBaseView.prototype._render_titleButtons.call( this ).concat([
+ this._render_deleteButton()
+ ]);
+ },
+
+ /** Render icon-button to delete this hda.
+ * @returns {jQuery} rendered DOM
+ */
+ _render_deleteButton : function(){
+ // don't show delete if...
+ if( ( this.model.get( 'state' ) === hdaModel.HistoryDatasetAssociation.STATES.NEW )
+ || ( this.model.get( 'state' ) === hdaModel.HistoryDatasetAssociation.STATES.NOT_VIEWABLE )
+ || ( !this.model.get( 'accessible' ) ) ){
+ return null;
+ }
+
+ var self = this,
+ deleteBtnData = {
+ title : _l( 'Delete' ),
+ classes : 'dataset-delete',
+ onclick : function() {
+ // ...bler... tooltips being left behind in DOM (hover out never called on deletion)
+ self.$el.find( '.icon-btn.dataset-delete' ).trigger( 'mouseout' );
+ self.model[ 'delete' ]();
+ }
+ };
+ if( this.model.get( 'deleted' ) ){
+ deleteBtnData = {
+ title : _l( 'Dataset collection is already deleted' ),
+ disabled : true
+ };
+ }
+ deleteBtnData.faIcon = 'fa-times';
+ return faIconButton( deleteBtnData );
+ },
+
+});
+
+//==============================================================================
+return {
+ DatasetCollectionEditView : DatasetCollectionEditView
+};
+
+});
\ No newline at end of file
diff -r 1adf9edf112b8fda48fcea30779ddb265ce3dfec -r 4b68fd381c2128220176797108febcbe679a39fe static/scripts/mvc/history/history-panel.js
--- a/static/scripts/mvc/history/history-panel.js
+++ b/static/scripts/mvc/history/history-panel.js
@@ -1,11 +1,12 @@
define([
"mvc/dataset/hda-model",
"mvc/dataset/hda-edit",
+ "mvc/collection/dataset-collection-edit",
"mvc/history/readonly-history-panel",
"mvc/tags",
"mvc/annotations",
"utils/localization"
-], function( hdaModel, hdaEdit, readonlyPanel, tagsMod, annotationsMod, _l ){
+], function( hdaModel, hdaEdit, datasetCollectionEdit, readonlyPanel, tagsMod, annotationsMod, _l ){
/* =============================================================================
TODO:
@@ -287,6 +288,15 @@
tagsEditorShown : ( this.tagsEditor && !this.tagsEditor.hidden ),
annotationEditorShown : ( this.annotationEditor && !this.annotationEditor.hidden )
});
+ } else if ( historyContentType == "dataset_collection" ) {
+ hdaView = new datasetCollectionEdit.DatasetCollectionEditView({
+ model : hda,
+ linkTarget : this.linkTarget,
+ expanded : this.storage.get( 'expandedHdas' )[ hdaId ],
+ //draggable : true,
+ hasUser : this.model.ownedByCurrUser(),
+ logger : this.logger
+ });
}
this._setUpHdaListeners( hdaView );
return hdaView;
diff -r 1adf9edf112b8fda48fcea30779ddb265ce3dfec -r 4b68fd381c2128220176797108febcbe679a39fe static/scripts/mvc/history/readonly-history-panel.js
--- a/static/scripts/mvc/history/readonly-history-panel.js
+++ b/static/scripts/mvc/history/readonly-history-panel.js
@@ -1,10 +1,11 @@
define([
"mvc/history/history-model",
+ "mvc/collection/dataset-collection-base",
"mvc/dataset/hda-base",
"mvc/user/user-model",
"mvc/base-mvc",
"utils/localization"
-], function( historyModel, hdaBase, userModel, baseMVC, _l ){
+], function( historyModel, datasetCollectionBase, hdaBase, userModel, baseMVC, _l ){
// ============================================================================
/** session storage for individual history preferences */
var HistoryPrefs = baseMVC.SessionStorageModel.extend({
@@ -621,6 +622,15 @@
hasUser : this.model.ownedByCurrUser(),
logger : this.logger
});
+ } else {
+ hdaView = new datasetCollectionBase.DatasetCollectionBaseView({
+ model : hda,
+ linkTarget : this.linkTarget,
+ expanded : this.storage.get( 'expandedHdas' )[ hdaId ],
+ //draggable : true,
+ hasUser : this.model.ownedByCurrUser(),
+ logger : this.logger
+ });
}
this._setUpHdaListeners( hdaView );
return hdaView;
https://bitbucket.org/galaxy/galaxy-central/commits/cf447e711854/
Changeset: cf447e711854
User: jmchilton
Date: 2014-05-06 15:54:31
Summary: Add helper destroyInvalidConnections to Terminal.
Used in the current workflow code and reused in a different context by the collection code.
Affected #: 1 file
diff -r 4b68fd381c2128220176797108febcbe679a39fe -r cf447e71185434d0b476cec08c6fbc3fb901acba static/scripts/galaxy.workflow_editor.canvas.js
--- a/static/scripts/galaxy.workflow_editor.canvas.js
+++ b/static/scripts/galaxy.workflow_editor.canvas.js
@@ -24,6 +24,14 @@
$.each( this.connectors.slice(), function( _, c ) {
c.destroy();
});
+ },
+ destroyInvalidConnections: function( ) {
+ var terminal = this;
+ _.each( terminal.connectors, function( connector ) {
+ if( connector.handle1 && ! terminal.attachable( connector.handle1 ) ) {
+ connector.destroy();
+ }
+ } );
}
} );
@@ -802,11 +810,7 @@
} else {
var terminal = terminalView.el.terminal;
terminal.update( input );
- _.each( terminal.connectors, function( connector ) {
- if( connector.handle1 && ! terminal.attachable( connector.handle1 ) ) {
- connector.destroy();
- }
- } );
+ terminal.destroyInvalidConnections();
}
this.terminalViews[ input.name ] = terminalView;
var terminalElement = terminalView.el;
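A minimal Python sketch of what destroyInvalidConnections does: sweep the terminal's connectors and drop any whose output handle is no longer attachable. The classes and the attachable rule below are hypothetical.

    class Connector( object ):
        def __init__( self, handle1 ):
            self.handle1 = handle1      # the output terminal feeding this connection
            self.destroyed = False
        def destroy( self ):
            self.destroyed = True

    class Terminal( object ):
        def __init__( self, accepted_types ):
            self.accepted_types = set( accepted_types )
            self.connectors = []
        def attachable( self, output ):
            return output in self.accepted_types
        def destroy_invalid_connections( self ):
            for connector in list( self.connectors ):
                if connector.handle1 is not None and not self.attachable( connector.handle1 ):
                    connector.destroy()
            self.connectors = [ c for c in self.connectors if not c.destroyed ]

    terminal = Terminal( accepted_types=[ "tabular" ] )
    terminal.connectors = [ Connector( "tabular" ), Connector( "fastq" ) ]
    terminal.destroy_invalid_connections()
    assert len( terminal.connectors ) == 1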
https://bitbucket.org/galaxy/galaxy-central/commits/94c09b3a8146/
Changeset: 94c09b3a8146
User: jmchilton
Date: 2014-05-06 15:54:31
Summary: Dataset collections - update workflow editor for collection inputs, mapping, and reduction.
This commit enables the workflow editor to deal with input collection data parameter types and inputs (easy) and, much more complicatedly, to reason about mappings and reductions over inputs and collection inputs. Not sure I can really describe the new interface in a changeset message - essentially it is now more complicated to determine whether an input can be connected to by an output, and that connection affects which other inputs are valid and what the outputs are.
Affected #: 7 files
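To make the "connection affects what else is valid" point concrete before the diff: connecting, say, a list:paired output to a paired collection input implies mapping over an outer list, and that leftover mapping then constrains the node's other terminals. A hedged sketch of that string arithmetic (pure illustration, not the Backbone code that follows):

    def can_map_over( outer_type, inner_type ):
        # An output of outer_type can map over an input expecting inner_type when
        # inner_type is a proper suffix of the colon-separated outer_type.
        return outer_type != inner_type and outer_type.endswith( ":" + inner_type )

    def effective_map_over( outer_type, inner_type ):
        # The collection structure left over after consuming inner_type, e.g.
        # "list:paired" mapped over a "paired" input leaves a "list" mapping.
        assert can_map_over( outer_type, inner_type )
        return outer_type[ : -( len( inner_type ) + 1 ) ]

    assert effective_map_over( "list:paired", "paired" ) == "list"
    assert effective_map_over( "list:list:paired", "paired" ) == "list:list"
    assert not can_map_over( "paired", "paired" )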
diff -r cf447e71185434d0b476cec08c6fbc3fb901acba -r 94c09b3a81468710e75ea020397b9af5e197f3d6 lib/galaxy/jobs/__init__.py
--- a/lib/galaxy/jobs/__init__.py
+++ b/lib/galaxy/jobs/__init__.py
@@ -964,7 +964,7 @@
trynum += 1
log.warning( 'Error accessing %s, will retry: %s', dataset.dataset.file_name, e )
time.sleep( 2 )
- if dataset.hidden_beneath_collection_instance:
+ if getattr( dataset, "hidden_beneath_collection_instance", None ):
dataset.visible = False
dataset.blurb = 'done'
dataset.peek = 'no peek'
diff -r cf447e71185434d0b476cec08c6fbc3fb901acba -r 94c09b3a81468710e75ea020397b9af5e197f3d6 lib/galaxy/web/framework/__init__.py
--- a/lib/galaxy/web/framework/__init__.py
+++ b/lib/galaxy/web/framework/__init__.py
@@ -1324,7 +1324,7 @@
"""
Simple class describing a form input element
"""
- def __init__( self, type, name, label, value=None, error=None, help=None, use_label=True ):
+ def __init__( self, type, name, label, value=None, error=None, help=None, use_label=True, extra_attributes={}, **kwargs ):
self.type = type
self.name = name
self.label = label
@@ -1332,6 +1332,22 @@
self.error = error
self.help = help
self.use_label = use_label
+ self.extra_attributes = extra_attributes
+
+
+class DatalistInput( FormInput ):
+ """ Data list input """
+
+ def __init__( self, name, *args, **kwargs ):
+ if 'extra_attributes' not in kwargs:
+ kwargs[ 'extra_attributes' ] = {}
+ kwargs[ 'extra_attributes' ][ 'list' ] = name
+ FormInput.__init__( self, None, name, *args, **kwargs )
+ self.options = kwargs.get( 'options', {} )
+
+ def body_html( self ):
+ options = "".join( [ "<option value='%s'>%s</option>" % ( key, value ) for key, value in self.options.iteritems() ] )
+ return """<datalist id="%s">%s</datalist>""" % ( self.name, options )
class SelectInput( FormInput ):
diff -r cf447e71185434d0b476cec08c6fbc3fb901acba -r 94c09b3a81468710e75ea020397b9af5e197f3d6 lib/galaxy/workflow/modules.py
--- a/lib/galaxy/workflow/modules.py
+++ b/lib/galaxy/workflow/modules.py
@@ -14,6 +14,7 @@
from galaxy.tools.parameters import check_param, DataToolParameter, DummyDataset, RuntimeValue, visit_input_values
from galaxy.tools.parameters import DataCollectionToolParameter
from galaxy.util.bunch import Bunch
+from galaxy.util import odict
from galaxy.util.json import from_json_string, to_json_string
log = logging.getLogger( __name__ )
@@ -238,14 +239,24 @@
return dict( input=DataCollectionToolParameter( None, input_element, self.trans ) )
def get_config_form( self ):
+ type_hints = odict.odict()
+ type_hints[ "list" ] = "List of Datasets"
+ type_hints[ "paired" ] = "Dataset Pair"
+ type_hints[ "list:paired" ] = "List of Dataset Pairs"
+
+ type_input = web.framework.DatalistInput(
+ name="collection_type",
+ label="Collection Type",
+ value=self.state[ "collection_type" ],
+ extra_attributes=dict(refresh_on_change='true'),
+ options=type_hints
+ )
form = web.FormBuilder(
title=self.name
).add_text(
"name", "Name", value=self.state['name']
- ).add_text(
- # TODO: clean this up...
- "collection_type", "Collection Type", value=self.state[ "collection_type" ]
)
+ form.inputs.append( type_input )
return self.trans.fill_template( "workflow/editor_generic_form.mako",
module=self, form=form )
@@ -395,6 +406,7 @@
multiple=input.multiple,
input_type="dataset_collection",
collection_type=input.collection_type,
+ extensions=input.extensions,
) )
visit_input_values( self.tool.inputs, self.state.inputs, callback )
diff -r cf447e71185434d0b476cec08c6fbc3fb901acba -r 94c09b3a81468710e75ea020397b9af5e197f3d6 static/scripts/galaxy.workflow_editor.canvas.js
--- a/static/scripts/galaxy.workflow_editor.canvas.js
+++ b/static/scripts/galaxy.workflow_editor.canvas.js
@@ -1,3 +1,161 @@
+function CollectionTypeDescription( collectionType ) {
+ this.collectionType = collectionType;
+ this.isCollection = true;
+ this.rank = collectionType.split(":").length;
+}
+
+$.extend( CollectionTypeDescription.prototype, {
+ append: function( otherCollectionTypeDescription ) {
+ if( otherCollectionTypeDescription === NULL_COLLECTION_TYPE_DESCRIPTION ) {
+ return this;
+ }
+ if( otherCollectionTypeDescription === ANY_COLLECTION_TYPE_DESCRIPTION ) {
+ return otherCollectionTypeDescription;
+ }
+ return new CollectionTypeDescription( this.collectionType + ":" + otherCollectionTypeDescription.collectionType );
+ },
+ canMatch: function( otherCollectionTypeDescription ) {
+ if( otherCollectionTypeDescription === NULL_COLLECTION_TYPE_DESCRIPTION ) {
+ return false;
+ }
+ if( otherCollectionTypeDescription === ANY_COLLECTION_TYPE_DESCRIPTION ) {
+ return true;
+ }
+ return otherCollectionTypeDescription.collectionType == this.collectionType;
+ },
+ canMapOver: function( otherCollectionTypeDescription ) {
+ if( otherCollectionTypeDescription === NULL_COLLECTION_TYPE_DESCRIPTION ) {
+ return false;
+ }
+ if( otherCollectionTypeDescription === ANY_COLLECTION_TYPE_DESCRIPTION ) {
+ return false;
+ }
+ if( this.rank <= otherCollectionTypeDescription.rank ) {
+ // Cannot map over self...
+ return false;
+ }
+ var requiredSuffix = otherCollectionTypeDescription.collectionType;
+ return this._endsWith( this.collectionType, requiredSuffix );
+ },
+ effectiveMapOver: function( otherCollectionTypeDescription ) {
+ var otherCollectionType = otherCollectionTypeDescription.collectionType;
+ var effectiveCollectionType = this.collectionType.substring( 0, this.collectionType.length - otherCollectionType.length - 1 );
+ return new CollectionTypeDescription( effectiveCollectionType );
+ },
+ equal: function( otherCollectionTypeDescription ) {
+ return otherCollectionTypeDescription.collectionType == this.collectionType;
+ },
+ toString: function() {
+ return "CollectionType[" + this.collectionType + "]";
+ },
+ _endsWith: function( str, suffix ) {
+ return str.indexOf(suffix, str.length - suffix.length) !== -1;
+ }
+} );
+
+NULL_COLLECTION_TYPE_DESCRIPTION = {
+ isCollection: false,
+ canMatch: function( other ) { return false; },
+ canMapOver: function( other ) {
+ return false;
+ },
+ toString: function() {
+ return "NullCollectionType[]";
+ },
+ append: function( otherCollectionType ) {
+ return otherCollectionType;
+ },
+ equal: function( other ) {
+ return other === this;
+ }
+};
+
+ANY_COLLECTION_TYPE_DESCRIPTION = {
+ isCollection: true,
+ canMatch: function( other ) { return NULL_COLLECTION_TYPE_DESCRIPTION !== other; },
+ canMapOver: function( other ) {
+ return false;
+ },
+ toString: function() {
+ return "AnyCollectionType[]";
+ },
+ append: function( otherCollectionType ) {
+ throw "Cannot append to ANY_COLLECTION_TYPE_DESCRIPTION";
+ },
+ equal: function( other ) {
+ return other === this;
+ }
+};
+
+var TerminalMapping = Backbone.Model.extend( {
+ initialize: function( attr ) {
+ this.mapOver = attr.mapOver || NULL_COLLECTION_TYPE_DESCRIPTION;
+ this.terminal = attr.terminal;
+ this.terminal.terminalMapping = this;
+ },
+ disableMapOver: function() {
+ this.setMapOver( NULL_COLLECTION_TYPE_DESCRIPTION );
+ },
+ setMapOver: function( collectionTypeDescription ) {
+ // TODO: Can I use "attributes" or something to auto trigger "change"
+ // event?
+ this.mapOver = collectionTypeDescription;
+ this.trigger("change");
+ }
+} );
+
+var TerminalMappingView = Backbone.View.extend( {
+ tagName: "div",
+ className: "fa-icon-button fa fa-folder-o",
+
+ initialize: function( options ) {
+ var mapText = "Run tool in parallel over collection";
+ this.$el.tooltip( {delay: 500, title: mapText } );
+ this.model.bind( "change", _.bind( this.render, this ) );
+ },
+
+ render: function() {
+ if( this.model.mapOver.isCollection ) {
+ this.$el.show();
+ } else {
+ this.$el.hide();
+ }
+ },
+
+} );
+
+var InputTerminalMappingView = TerminalMappingView.extend( {
+ events: {
+ "click": "onClick",
+ "mouseenter": "onMouseEnter",
+ "mouseleave": "onMouseLeave",
+ },
+ onMouseEnter: function( e ) {
+ var model = this.model;
+ if( ! model.terminal.connected() && model.mapOver.isCollection ) {
+ this.$el.color( "red" );
+ }
+ },
+ onMouseLeave: function( e ) {
+ this.$el.color( "blue" );
+ },
+ onClick: function( e ) {
+ var model = this.model;
+ if( ! model.terminal.connected() && model.mapOver.isCollection ) {
+ // TODO: Consider prompting...
+ model.terminal.resetMapping();
+ }
+ },
+} );
+
+var InputTerminalMapping = TerminalMapping;
+var InputCollectionTerminalMapping = TerminalMapping;
+var OutputTerminalMapping = TerminalMapping;
+var OutputTerminalMappingView = TerminalMappingView;
+var InputCollectionTerminalMappingView = InputTerminalMappingView;
+var OutputCollectionTerminalMapping = TerminalMapping;
+var OutputCollectionTerminalMappingView = TerminalMappingView;
+
var Terminal = Backbone.Model.extend( {
initialize: function( attr ) {
this.element = attr.element;
@@ -13,6 +171,7 @@
this.connectors.splice( $.inArray( connector, this.connectors ), 1 );
if ( this.node ) {
this.node.markChanged();
+ this.resetMappingIfNeeded();
}
},
redraw: function () {
@@ -26,31 +185,73 @@
});
},
destroyInvalidConnections: function( ) {
- var terminal = this;
- _.each( terminal.connectors, function( connector ) {
- if( connector.handle1 && ! terminal.attachable( connector.handle1 ) ) {
- connector.destroy();
- }
+ _.each( this.connectors, function( connector ) {
+ connector.destroyIfInvalid();
} );
- }
+ },
+ setMapOver : function( val ) {
+ if( this.multiple ) {
+ return; // Cannot set this to be multirun...
+ }
+
+ if( ! this.mapOver().equal( val ) ) {
+ this.terminalMapping.setMapOver( val );
+ _.each( this.node.output_terminals, function( outputTerminal ) {
+ outputTerminal.setMapOver( val );
+ } );
+ }
+ },
+ mapOver: function( ) {
+ if ( ! this.terminalMapping ) {
+ return NULL_COLLECTION_TYPE_DESCRIPTION;
+ } else {
+ return this.terminalMapping.mapOver;
+ }
+ },
+ isMappedOver: function( ) {
+ return this.terminalMapping && this.terminalMapping.mapOver.isCollection;
+ },
+ resetMapping: function() {
+ this.terminalMapping.disableMapOver();
+ },
+
+ resetMappingIfNeeded: function( ) {}, // Subclasses should override this...
+
} );
var OutputTerminal = Terminal.extend( {
initialize: function( attr ) {
Terminal.prototype.initialize.call( this, attr );
this.datatypes = attr.datatypes;
+ },
+
+ resetMappingIfNeeded: function( ) {
+ if( ! this.node.hasMappedOverInputTerminals() ) {
+ this.resetMapping();
+ }
+ },
+
+ resetMapping: function() {
+ this.terminalMapping.disableMapOver();
+ _.each( this.connectors, function( connector ) {
+ var connectedInput = connector.handle2;
+ if( connectedInput ) {
+ // Not exactly right because this is still connected.
+ // Either rewrite resetMappingIfNeeded or disconnect
+ // and reconnect if valid.
+ connectedInput.resetMappingIfNeeded();
+ connector.destroyIfInvalid();
+ }
+ } );
}
+
} );
-var InputTerminal = Terminal.extend( {
+var BaseInputTerminal = Terminal.extend( {
initialize: function( attr ) {
Terminal.prototype.initialize.call( this, attr );
- this.update( attr.input );
- },
- update: function( input ) {
- this.datatypes = input.extensions;
- this.multiple = input.multiple;
+ this.update( attr.input ); // subclasses should implement this...
},
canAccept: function ( other ) {
if( this._inputFilled() ) {
@@ -59,10 +260,83 @@
return this.attachable( other );
}
},
- _inputFilled: function( ) {
- return ! ( this.connectors.length < 1 || this.multiple );
+ resetMappingIfNeeded: function( ) {
+ var mapOver = this.mapOver();
+ if( ! mapOver.isCollection ) {
+ return;
+ }
+ // No output terminals are counting on this being mapped
+ // over if connected inputs are still mapped over or if none
+ // of the outputs are connected...
+ var reset = this.node.hasConnectedMappedInputTerminals() ||
+ ( ! this.node.hasConnectedOutputTerminals() );
+ if( reset ) {
+ this.resetMapping();
+ }
},
- attachable: function( other ) {
+ resetMapping: function() {
+ this.terminalMapping.disableMapOver();
+ if( ! this.node.hasMappedOverInputTerminals() ) {
+ _.each( this.node.output_terminals, function( terminal) {
+ // This shouldn't be called if there are mapped over
+ // outputs.
+ terminal.resetMapping();
+ } );
+ }
+ },
+ connected: function() {
+ return this.connectors.length !== 0;
+ },
+ _inputFilled: function() {
+ var inputFilled;
+ if( ! this.connected() ) {
+ inputFilled = false;
+ } else {
+ if( this.multiple ) {
+ if( ! this.connected() ) {
+ inputFilled = false;
+ } else {
+ var firstOutput = this.connectors[ 0 ].handle1;
+ if( firstOutput === null ){
+ inputFilled = false;
+ } else {
+ if( firstOutput.isDataCollectionInput || firstOutput.isMappedOver() || firstOutput.datatypes.indexOf( "input_collection" ) > 0 ) {
+ inputFilled = true;
+ } else {
+ inputFilled = false;
+ }
+ }
+ }
+ } else {
+ inputFilled = true;
+ }
+ }
+ return inputFilled;
+ },
+ _mappingConstraints: function( ) {
+ // If this is a connected terminal, return list of collection types
+ // other terminals connected to node are constraining mapping to.
+ if( ! this.node ) {
+ return []; // No node - completely unconstrained
+ }
+ var mapOver = this.mapOver();
+ if( mapOver.isCollection ) {
+ return [ mapOver ];
+ }
+
+ var constraints = [];
+ if( ! this.node.hasConnectedOutputTerminals() ) {
+ _.each( this.node.connectedMappedInputTerminals(), function( inputTerminal ) {
+ constraints.push( inputTerminal.mapOver() );
+ } );
+ } else {
+ // All outputs should have same mapOver status - least specific.
+ constraints.push( _.first( _.values( this.node.output_terminals ) ).mapOver() );
+ }
+ return constraints;
+ },
+ _producesAcceptableDatatype: function( other ) {
+ // other is a non-collection output...
for ( var t in this.datatypes ) {
var cat_outputs = new Array();
cat_outputs = cat_outputs.concat(other.datatypes);
@@ -76,16 +350,156 @@
}
// FIXME: No idea what to do about case when datatype is 'input'
for ( var other_datatype_i in cat_outputs ) {
- if ( cat_outputs[other_datatype_i] == "input" || issubtype( cat_outputs[other_datatype_i], this.datatypes[t] ) ) {
+ var other_datatype = cat_outputs[other_datatype_i];
+ if ( other_datatype == "input" || other_datatype == "input_collection" || issubtype( cat_outputs[other_datatype_i], this.datatypes[t] ) ) {
return true;
}
}
}
return false;
+ },
+ _otherCollectionType: function( other ) {
+ var otherCollectionType = NULL_COLLECTION_TYPE_DESCRIPTION;
+ if( other.isDataCollectionInput ) {
+ otherCollectionType = other.collectionType;
+ } else {
+ var otherMapOver = other.mapOver();
+ if( otherMapOver.isCollection ) {
+ otherCollectionType = otherMapOver;
+ }
+ }
+ return otherCollectionType;
+ },
+} );
+
+
+
+
+
+var InputTerminal = BaseInputTerminal.extend( {
+ update: function( input ) {
+ this.datatypes = input.extensions;
+ this.multiple = input.multiple;
+ this.collection = false;
+ },
+ connect: function( connector ) {
+ BaseInputTerminal.prototype.connect.call( this, connector );
+ var other_output = connector.handle1;
+ if( ! other_output ) {
+ return;
+ }
+ var otherCollectionType = this._otherCollectionType( other_output );
+ if( otherCollectionType.isCollection ) {
+ this.setMapOver( otherCollectionType );
+ }
+ },
+ attachable: function( other ) {
+ var otherCollectionType = this._otherCollectionType( other );
+ var thisMapOver = this.mapOver();
+ if( otherCollectionType.isCollection ) {
+ // TODO: Handle if this multiple....
+ if( thisMapOver.isCollection && thisMapOver.canMatch( otherCollectionType ) ) {
+ return this._producesAcceptableDatatype( other );
+ } else {
+ // Need to check if this would break constraints...
+ var mappingConstraints = this._mappingConstraints();
+ if( mappingConstraints.every( _.bind( otherCollectionType.canMatch, otherCollectionType ) ) ) {
+ return this._producesAcceptableDatatype( other );
+ } else {
+ return false;
+ }
+ }
+ } else if( thisMapOver.isCollection ) {
+ // Attempting to match a non-collection output to an
+ // explicitly collection input.
+ return false;
+ }
+ return this._producesAcceptableDatatype( other );
+ }
+
+});
+
+var InputCollectionTerminal = BaseInputTerminal.extend( {
+ update: function( input ) {
+ this.multiple = false;
+ this.collection = true;
+ this.datatypes = input.extensions;
+ if( input.collection_type ) {
+ this.collectionType = new CollectionTypeDescription( input.collection_type );
+ } else {
+ this.collectionType = ANY_COLLECTION_TYPE_DESCRIPTION;
+ }
+ },
+ connect: function( connector ) {
+ BaseInputTerminal.prototype.connect.call( this, connector );
+ var other = connector.handle1;
+ if( ! other ) {
+ return;
+ }
+
+ var effectiveMapOver = this._effectiveMapOver( other );
+ this.setMapOver( effectiveMapOver );
+ },
+ _effectiveMapOver: function( other ) {
+ var collectionType = this.collectionType;
+ var otherCollectionType = this._otherCollectionType( other );
+ if( ! collectionType.canMatch( otherCollectionType ) ) {
+ return otherCollectionType.effectiveMapOver( collectionType );
+ } else {
+ return NULL_COLLECTION_TYPE_DESCRIPTION;
+ }
+ },
+ _effectiveCollectionType: function( ) {
+ var collectionType = this.collectionType;
+ var thisMapOver = this.mapOver();
+ return thisMapOver.append( collectionType );
+ },
+ attachable: function ( other ) {
+ var otherCollectionType = this._otherCollectionType( other );
+ if( otherCollectionType.isCollection ) {
+ var effectiveCollectionType = this._effectiveCollectionType( );
+ var thisMapOver = this.mapOver();
+ if( effectiveCollectionType.canMatch( otherCollectionType ) ) {
+ // Only way a direct match...
+ return this._producesAcceptableDatatype( other );
+ // Otherwise we need to mapOver
+ } else if( thisMapOver.isCollection ) {
+ // In this case, mapOver already set and we didn't match skipping...
+ return false;
+ } else if( otherCollectionType.canMapOver( this.collectionType ) ) {
+ var effectiveMapOver = this._effectiveMapOver( other );
+ if( ! effectiveMapOver.isCollection ) {
+ return false;
+ }
+ // Need to check if this would break constraints...
+ var mappingConstraints = this._mappingConstraints();
+ if( mappingConstraints.every( _.bind( effectiveMapOver.canMatch, effectiveMapOver ) ) ) {
+ return this._producesAcceptableDatatype( other );
+ }
+ }
+ }
+ return false;
}
});
-
+var OutputCollectionTerminal = Terminal.extend( {
+ initialize: function( attr ) {
+ Terminal.prototype.initialize.call( this, attr );
+ this.datatypes = attr.datatypes;
+ this.collectionType = new CollectionTypeDescription( attr.collection_type );
+ this.isDataCollectionInput = true;
+ },
+ update: function( output ) {
+ var newCollectionType = new CollectionTypeDescription( output.collection_type );
+ if( newCollectionType.collectionType != this.collectionType.collectionType ) {
+ _.each( this.connectors, function( connector ) {
+ // TODO: consider checking if connection valid before removing...
+ connector.destroy();
+ } );
+ }
+ this.collectionType = newCollectionType;
+ }
+} );
function Connector( handle1, handle2 ) {
this.canvas = null;
@@ -116,6 +530,11 @@
}
$(this.canvas).remove();
},
+ destroyIfInvalid: function() {
+ if( this.handle1 && this.handle2 && ! this.handle2.attachable( this.handle1 ) ) {
+ this.destroy();
+ }
+ },
redraw : function () {
var canvas_container = $("#canvas-container");
if ( ! this.canvas ) {
@@ -165,20 +584,52 @@
end_x -= canvas_left;
end_y -= canvas_top;
// Draw the line
+
+ var c = this.canvas.getContext("2d"),
+ start_offsets = null,
+ end_offsets = null;
+ var num_offsets = 1;
+ if ( this.handle1 && this.handle1.isMappedOver() ) {
+ var start_offsets = [ -6, -3, 0, 3, 6 ];
+ num_offsets = 5;
+ } else {
+ var start_offsets = [ 0 ];
+ }
+ if ( this.handle2 && this.handle2.isMappedOver() ) {
+ var end_offsets = [ -6, -3, 0, 3, 6 ];
+ num_offsets = 5;
+ } else {
+ var end_offsets = [ 0 ];
+ }
+ var connector = this;
+ for( var i = 0; i < num_offsets; i++ ) {
+ var inner_width = 5,
+ outer_width = 7;
+ if( start_offsets.length > 1 || end_offsets.length > 1 ) {
+ // We have a multi-run, using many lines, make them small.
+ inner_width = 1;
+ outer_width = 3;
+ }
+ connector.draw_outlined_curve( start_x, start_y, end_x, end_y, cp_shift, inner_width, outer_width, start_offsets[ i % start_offsets.length ], end_offsets[ i % end_offsets.length ] );
+ }
+ },
+ draw_outlined_curve : function( start_x, start_y, end_x, end_y, cp_shift, inner_width, outer_width, offset_start, offset_end ) {
+ var offset_start = offset_start || 0;
+ var offset_end = offset_end || 0;
var c = this.canvas.getContext("2d");
c.lineCap = "round";
c.strokeStyle = this.outer_color;
- c.lineWidth = 7;
+ c.lineWidth = outer_width;
c.beginPath();
- c.moveTo( start_x, start_y );
- c.bezierCurveTo( start_x + cp_shift, start_y, end_x - cp_shift, end_y, end_x, end_y );
+ c.moveTo( start_x, start_y + offset_start );
+ c.bezierCurveTo( start_x + cp_shift, start_y + offset_start, end_x - cp_shift, end_y + offset_end, end_x, end_y + offset_end);
c.stroke();
// Inner line
c.strokeStyle = this.inner_color;
- c.lineWidth = 5;
+ c.lineWidth = inner_width;
c.beginPath();
- c.moveTo( start_x, start_y );
- c.bezierCurveTo( start_x + cp_shift, start_y, end_x - cp_shift, end_y, end_x, end_y );
+ c.moveTo( start_x, start_y + offset_start );
+ c.bezierCurveTo( start_x + cp_shift, start_y + offset_start, end_x - cp_shift, end_y + offset_end, end_x, end_y + offset_end );
c.stroke();
}
} );
@@ -191,6 +642,66 @@
this.output_terminals = {};
this.tool_errors = {};
},
+ connectedOutputTerminals: function() {
+ var connected_outputs = [];
+ $.each( this.output_terminals, function( _, t ) {
+ if( t.connectors.length > 0 ) {
+ connected_outputs.push( t );
+ }
+ } );
+ return connected_outputs;
+ },
+ hasConnectedOutputTerminals: function() {
+ // return this.connectedOutputTerminals().length > 0; <- optimized this
+ var outputTerminals = this.output_terminals;
+ for( var outputName in outputTerminals ) {
+ if( outputTerminals[ outputName ].connectors.length > 0 ) {
+ return true;
+ }
+ }
+ return false;
+ },
+ connectedMappedInputTerminals: function() {
+ return this._connectedMappedTerminals( this.input_terminals );
+ },
+ hasConnectedMappedInputTerminals: function() {
+ // return this.connectedMappedInputTerminals().length > 0; <- optimized this
+ var inputTerminals = this.input_terminals;
+ for( var inputName in inputTerminals ) {
+ var inputTerminal = inputTerminals[ inputName ];
+ if( inputTerminal.connectors.length > 0 && inputTerminal.isMappedOver() ) {
+ return true;
+ }
+ }
+ return false;
+ },
+ _connectedMappedTerminals: function( all_terminals ) {
+ var mapped_outputs = [];
+ $.each( all_terminals, function( _, t ) {
+ var mapOver = t.mapOver();
+ if( mapOver.isCollection ) {
+ if( t.connectors.length > 0 ) {
+ mapped_outputs.push( t );
+ }
+ }
+ });
+ return mapped_outputs;
+ },
+ hasMappedOverInputTerminals: function() {
+ var found = false;
+ _.each( this.input_terminals, function( t ) {
+ var mapOver = t.mapOver();
+ if( mapOver.isCollection ) {
+ found = true;
+ }
+ } );
+ return found;
+ },
+ forceDisconnectOutputs: function() {
+ _.each( this.output_terminals, function( terminal ) {
+ terminal.disconnectAll();
+ } );
+ },
redraw : function () {
$.each( this.input_terminals, function( _, t ) {
t.redraw();
@@ -259,8 +770,11 @@
this.form_html = data.form_html;
this.tool_errors = data.tool_errors;
this.annotation = data['annotation'];
- var pja_in = $.parseJSON(data.post_job_actions);
- this.post_job_actions = pja_in ? pja_in : {};
+ if( "post_job_actions" in data ) {
+ // Won't be present in response for data inputs
+ var pja_in = $.parseJSON(data.post_job_actions);
+ this.post_job_actions = pja_in ? pja_in : {};
+ }
node.nodeView.renderToolErrors();
// Update input rows
var old_body = nodeView.$( "div.inputs" );
@@ -275,6 +789,14 @@
unusedView.el.terminal.destroy();
} );
nodeView.terminalViews = newTerminalViews;
+ // In general the workflow editor assumes tool outputs don't change in number
+ // or type (not really a valid assumption, right?), but add special logic here
+ // for data collection input parameters whose collection type can change.
+ if( data.data_outputs.length == 1 && "collection_type" in data.data_outputs[ 0 ] ) {
+ nodeView.updateDataOutput( data.data_outputs[ 0 ] );
+ }
+
old_body.replaceWith( new_body );
// If active, reactivate with new form_html
this.markChanged();
@@ -766,6 +1288,7 @@
this.tool_body.find( "div" ).remove();
this.newInputsDiv().appendTo( this.tool_body );
this.terminalViews = {};
+ this.outputTerminlViews = {};
},
render : function() {
@@ -803,7 +1326,8 @@
}
var terminalView = this.terminalViews[ input.name ];
if( ! terminalView ) {
- terminalView = new InputTerminalView( {
+ var terminalViewClass = ( input.input_type == "dataset_collection" ) ? InputCollectionTerminalView : InputTerminalView;
+ terminalView = new terminalViewClass( {
node: this.node,
input: input
} );
@@ -822,13 +1346,14 @@
} );
var ib = inputView.$el;
- var terminalElement = inputView.terminalElement;
- body.append( ib.prepend( terminalElement ) );
+ // Append to new body
+ body.append( ib.prepend( terminalView.terminalElements() ) );
return terminalView;
},
addDataOutput: function( output ) {
- var terminalView = new OutputTerminalView( {
+ var terminalViewClass = ( output.collection_type ) ? OutputCollectionTerminalView : OutputTerminalView;
+ var terminalView = new terminalViewClass( {
node: this.node,
output: output
} );
@@ -837,7 +1362,12 @@
"terminalElement": terminalView.el,
"nodeView": this,
} );
- this.tool_body.append( outputView.$el.append( outputView.terminalElement ) );
+ this.tool_body.append( outputView.$el.append( terminalView.terminalElements() ) );
+ },
+
+ updateDataOutput: function( output ) {
+ var outputTerminal = this.node.output_terminals[ output.name ];
+ outputTerminal.update( output );
}
} );
@@ -938,7 +1468,9 @@
var output = this.output;
var label = output.name;
var node = this.nodeView.node;
- if ( output.extensions.indexOf( 'input' ) < 0 ) {
+
+ var isInput = output.extensions.indexOf( 'input' ) >= 0 || output.extensions.indexOf( 'input_collection' ) >= 0;
+ if ( ! isInput ) {
label = label + " (" + output.extensions.join(", ") + ")";
}
this.$el.html( label )
@@ -968,16 +1500,39 @@
} );
+var TerminalView = Backbone.View.extend( {
-var InputTerminalView = Backbone.View.extend( {
+ setupMappingView: function( terminal ) {
+ var terminalMapping = new this.terminalMappingClass( { terminal: terminal } );
+ var terminalMappingView = new this.terminalMappingViewClass( { model: terminalMapping } );
+ terminalMappingView.render();
+ terminal.terminalMappingView = terminalMappingView;
+ this.terminalMappingView = terminalMappingView;
+ },
+
+ terminalElements: function() {
+ if( this.terminalMappingView ) {
+ return [ this.terminalMappingView.el, this.el ];
+ } else{
+ return [ this.el ];
+ }
+ }
+
+} );
+
+
+var BaseInputTerminalView = TerminalView.extend( {
className: "terminal input-terminal",
initialize: function( options ) {
var node = options.node;
var input = options.input;
-
var name = input.name;
- var terminal = this.el.terminal = new InputTerminal( { element: this.el, input: input } );
+ var terminal = this.terminalForInput( input );
+ if( ! terminal.multiple ) {
+ this.setupMappingView( terminal );
+ }
+ this.el.terminal = terminal;
terminal.node = node;
terminal.name = name;
node.input_terminals[name] = terminal;
@@ -1049,20 +1604,37 @@
} );
+var InputTerminalView = BaseInputTerminalView.extend( {
+ terminalMappingClass: InputTerminalMapping,
+ terminalMappingViewClass: InputTerminalMappingView,
+ terminalForInput: function( input ) {
+ return new InputTerminal( { element: this.el, input: input } );
+ },
-var OutputTerminalView = Backbone.View.extend( {
+} );
+
+var InputCollectionTerminalView = BaseInputTerminalView.extend( {
+ terminalMappingClass: InputCollectionTerminalMapping,
+ terminalMappingViewClass: InputCollectionTerminalMappingView,
+
+ terminalForInput: function( input ) {
+ return new InputCollectionTerminal( { element: this.el, input: input } );
+ },
+
+} );
+
+var BaseOutputTerminalView = TerminalView.extend( {
className: "terminal output-terminal",
initialize: function( options ) {
var node = options.node;
var output = options.output;
+
var name = output.name;
- var type = output.extensions;
-
- var element = this.el;
- var terminal_element = element;
- var terminal = element.terminal = new OutputTerminal( {element: element, datatypes: type } );
+ var terminal = this.terminalForOutput( output );
+ this.setupMappingView( terminal );
+ this.el.terminal = terminal;
terminal.node = node;
terminal.name = name;
node.output_terminals[name] = terminal;
@@ -1112,6 +1684,29 @@
} );
+var OutputTerminalView = BaseOutputTerminalView.extend( {
+ terminalMappingClass: OutputTerminalMapping,
+ terminalMappingViewClass: OutputTerminalMappingView,
+
+ terminalForOutput: function( output ) {
+ var type = output.extensions;
+ var terminal = new OutputTerminal( { element: this.el, datatypes: type } );
+ return terminal;
+ },
+
+} );
+
+var OutputCollectionTerminalView = BaseOutputTerminalView.extend( {
+ terminalMappingClass: OutputCollectionTerminalMapping,
+ terminalMappingViewClass: OutputCollectionTerminalMappingView,
+
+ terminalForOutput: function( output ) {
+ var collection_type = output.collection_type;
+ var terminal = new OutputCollectionTerminal( { element: this.el, collection_type: collection_type, datatypes: output.extensions } );
+ return terminal;
+ },
+
+} );
////////////
diff -r cf447e71185434d0b476cec08c6fbc3fb901acba -r 94c09b3a81468710e75ea020397b9af5e197f3d6 static/scripts/galaxy.workflows.js
--- a/static/scripts/galaxy.workflows.js
+++ b/static/scripts/galaxy.workflows.js
@@ -496,6 +496,9 @@
$(this).find( "select[refresh_on_change='true']").change( function() {
$(form).submit();
});
+ $(this).find( "input[refresh_on_change='true']").change( function() {
+ $(form).submit();
+ });
$(this).find( ".popupmenu" ).each( function() {
var id = $(this).parents( "div.form-row" ).attr( 'id' );
var b = $('<a class="popup-arrow" id="popup-arrow-for-' + id + '">▼</a>');
diff -r cf447e71185434d0b476cec08c6fbc3fb901acba -r 94c09b3a81468710e75ea020397b9af5e197f3d6 templates/webapps/galaxy/workflow/editor_generic_form.mako
--- a/templates/webapps/galaxy/workflow/editor_generic_form.mako
+++ b/templates/webapps/galaxy/workflow/editor_generic_form.mako
@@ -9,13 +9,24 @@
cls = "form-row"
if input.error:
cls += " form-row-error"
+ extra_attributes = ""
+ for key, value in getattr( input, "extra_attributes", {} ).iteritems():
+ extra_attributes += " %s=\"%s\"" % ( key, value )
+ type_attribute = ""
+ if input.type:
+ type_attribute = "type=\"input.type\""
%><div class="${cls}"><label>
${input.label}:
</label><div style="float: left; width: 250px; margin-right: 10px;">
- <input type="${input.type}" name="${input.name | h}" value="${input.value | h}" size="30">
+
+ <input ${type_attribute} name="${input.name | h}" value="${input.value | h}" size="30" ${extra_attributes}>
+ %if hasattr( input, "body_html" ):
+ ${input.body_html()}
+ </input>
+ %endif
</div>
%if input.error:
<div style="float: left; color: red; font-weight: bold; padding-top: 1px; padding-bottom: 3px;">
diff -r cf447e71185434d0b476cec08c6fbc3fb901acba -r 94c09b3a81468710e75ea020397b9af5e197f3d6 test/qunit/tests/workflow_editor_tests.js
--- a/test/qunit/tests/workflow_editor_tests.js
+++ b/test/qunit/tests/workflow_editor_tests.js
@@ -3,10 +3,12 @@
define([
"galaxy.workflow_editor.canvas",
"jquery",
+ "libs/bootstrap", // Required by galaxy.workflow_editor.canvas
"sinon-qunit"
], function(
workflowEditor,
$,
+ bootstrap,
sinon
){
"use strict";
@@ -89,6 +91,7 @@
},
test_accept: function( other ) {
other = other || { node: {}, datatypes: [ "txt" ] };
+ other.mapOver = function() { return NULL_COLLECTION_TYPE_DESCRIPTION; };
return this.input_terminal.canAccept( other );
},
pja_change_datatype_node: function( output_name, newtype ) {
@@ -221,6 +224,12 @@
} );
} );
+ test( "can accept list collection for multiple input parameters if datatypes match", function() {
+ var self = this;
+ this.multiple();
+ ok( self.test_accept() );
+ } );
+
module( "Connector test", {
} );
@@ -262,8 +271,8 @@
test( "initial redraw", function() {
with_canvas_container( function( canvas_container ) {
- var input = { connect: sinon.spy(), element: $("<div>") };
- var output = { connect: sinon.spy(), element: $("<div>") };
+ var input = { connect: sinon.spy(), element: $("<div>"), isMappedOver: function() { return false; } };
+ var output = { connect: sinon.spy(), element: $("<div>"), isMappedOver: function() { return false; } };
var connector = new Connector( input, output );
connector.redraw();
@@ -457,7 +466,7 @@
/* global NodeView */
module( "Node view ", {
setup: function() {
- this.set_for_node( { input_terminals: {}, output_terminals: {}, markChanged: function() {} } );
+ this.set_for_node( { input_terminals: {}, output_terminals: {}, markChanged: function() {}, terminalMapping: { disableMapOver: function() {} } } );
},
set_for_node: function( node ) {
var element = $("<div><div class='toolFormBody'></div></div>");
@@ -468,7 +477,8 @@
var terminal = this.view.node.input_terminals[ "TestName" ];
var outputTerminal = new OutputTerminal( { name: "TestOuptut", datatypes: [ outputType ] } );
- outputTerminal.node = { markChanged: function() {}, post_job_actions: [] };
+ outputTerminal.node = { markChanged: function() {}, post_job_actions: [], hasMappedOverInputTerminals: function() { return false; } };
+ outputTerminal.terminalMapping = { disableMapOver: function() {}, mapOver: NULL_COLLECTION_TYPE_DESCRIPTION };
var c = new Connector( outputTerminal, terminal );
return c;
@@ -570,4 +580,357 @@
// TODO: Test bindings
+ module( "CollectionTypeDescription", {
+ listType: function() {
+ return new CollectionTypeDescription( "list" );
+ },
+ pairedType: function() {
+ return new CollectionTypeDescription( "paired" );
+ },
+ pairedListType: function() {
+ return new CollectionTypeDescription( "list:paired" );
+ }
+ } );
+
+ test( "canMatch", function() {
+ ok( this.listType().canMatch( this.listType() ) );
+ ok( ! this.listType().canMatch( this.pairedType() ) );
+ ok( ! this.listType().canMatch( this.pairedListType() ) );
+ } );
+
+ test( "canMatch special types", function() {
+ ok( this.listType().canMatch( ANY_COLLECTION_TYPE_DESCRIPTION ) );
+ ok( ANY_COLLECTION_TYPE_DESCRIPTION.canMatch( this.pairedListType() ) );
+
+ ok( ! this.listType().canMatch( NULL_COLLECTION_TYPE_DESCRIPTION ) );
+ ok( ! NULL_COLLECTION_TYPE_DESCRIPTION.canMatch( this.pairedListType() ) );
+ } );
+
+ test( "canMapOver", function() {
+ ok( ! this.listType().canMapOver( this.listType() ) );
+ ok( ! this.listType().canMapOver( this.pairedType() ) );
+ ok( this.pairedListType().canMapOver( this.pairedType() ) );
+ ok( ! this.listType().canMapOver( this.pairedListType() ) );
+ } );
+
+ test( "canMapOver special types", function() {
+ ok( ! this.listType().canMapOver( NULL_COLLECTION_TYPE_DESCRIPTION ) );
+ ok( ! NULL_COLLECTION_TYPE_DESCRIPTION.canMapOver( this.pairedListType() ) );
+
+ // Following two should be able to be relaxed someday maybe - but the
+ // tracking gets tricky I think. For now mapping only works for explicitly
+ // defined collection types.
+ ok( ! this.listType().canMapOver( ANY_COLLECTION_TYPE_DESCRIPTION ) );
+ ok( ! ANY_COLLECTION_TYPE_DESCRIPTION.canMapOver( this.pairedListType() ) );
+ } );
+
+ test( "append", function( other ) {
+ var appendedType = this.listType().append( this.pairedType() );
+ equal( appendedType.collectionType, "list:paired" );
+ } );
+
+ test( "isCollection", function() {
+ ok( this.listType().isCollection );
+ ok( ANY_COLLECTION_TYPE_DESCRIPTION.isCollection );
+ ok( ! NULL_COLLECTION_TYPE_DESCRIPTION.isCollection );
+ } );
+
+ test( "equal", function() {
+ ok( ! this.listType().equal( this.pairedType() ) );
+ ok( this.listType().equal( this.listType() ) );
+
+ ok( ANY_COLLECTION_TYPE_DESCRIPTION.equal( ANY_COLLECTION_TYPE_DESCRIPTION ) );
+ ok( ! ANY_COLLECTION_TYPE_DESCRIPTION.equal( NULL_COLLECTION_TYPE_DESCRIPTION ) );
+ ok( ! ANY_COLLECTION_TYPE_DESCRIPTION.equal( this.pairedType() ) );
+ ok( ! this.pairedType().equal( ANY_COLLECTION_TYPE_DESCRIPTION ) );
+
+ ok( NULL_COLLECTION_TYPE_DESCRIPTION.equal( NULL_COLLECTION_TYPE_DESCRIPTION ) );
+ ok( ! NULL_COLLECTION_TYPE_DESCRIPTION.equal( ANY_COLLECTION_TYPE_DESCRIPTION ) );
+ ok( ! NULL_COLLECTION_TYPE_DESCRIPTION.equal( this.listType() ) );
+ ok( ! this.listType().equal( NULL_COLLECTION_TYPE_DESCRIPTION ) );
+
+ } );
+
+ module( "TerminalMapping", {
+
+ } );
+
+ test( "default constructor", function() {
+ var terminal = {};
+ var mapping = new TerminalMapping( { terminal: terminal } );
+ ok( terminal.terminalMapping === mapping );
+ ok( mapping.mapOver === NULL_COLLECTION_TYPE_DESCRIPTION );
+ } );
+
+ test( "constructing with mapOver", function() {
+ var terminal = {};
+ var mapping = new TerminalMapping( { terminal: terminal, mapOver: new CollectionTypeDescription( "list" ) } );
+ ok( mapping.mapOver.collectionType == "list" );
+ } );
+
+ test( "disableMapOver", function() {
+ var terminal = {};
+ var mapping = new TerminalMapping( { terminal: terminal, mapOver: new CollectionTypeDescription( "list" ) } );
+ var changeSpy = sinon.spy();
+ mapping.bind( "change", changeSpy );
+ mapping.disableMapOver();
+ ok( mapping.mapOver === NULL_COLLECTION_TYPE_DESCRIPTION );
+ ok( changeSpy.called );
+ } );
+
+ module( "terminal mapping logic", {
+ newInputTerminal: function( mapOver, input, node ) {
+ input = input || {};
+ node = node || this.newNode();
+ if( ! ( 'extensions' in input ) ) {
+ input[ 'extensions'] = [ 'data' ];
+ }
+ var inputEl = $("<div>")[ 0 ];
+ var inputTerminal = new InputTerminal( { element: inputEl, input: input } );
+ var inputTerminalMapping = new InputTerminalMapping( { terminal: inputTerminal } );
+ inputTerminal.node = node;
+ if( mapOver ) {
+ inputTerminal.setMapOver( new CollectionTypeDescription( mapOver ) );
+ }
+ return inputTerminal;
+ },
+ newInputCollectionTerminal: function( input, node ) {
+ input = input || {};
+ node = node || this.newNode();
+ if( ! ( 'extensions' in input ) ) {
+ input[ 'extensions'] = [ 'data' ];
+ }
+ var inputEl = $("<div>")[ 0 ];
+ var inputTerminal = new InputCollectionTerminal( { element: inputEl, input: input } );
+ var inputTerminalMapping = new InputCollectionTerminalMapping( { terminal: inputTerminal } );
+ inputTerminal.node = node;
+ return inputTerminal;
+ },
+ newOutputTerminal: function( mapOver, output, node ) {
+ output = output || {};
+ node = node || this.newNode();
+ if( ! ( 'extensions' in output ) ) {
+ output[ 'extensions'] = [ 'data' ];
+ }
+ var outputEl = $("<div>")[ 0 ];
+ var outputTerminal = new OutputTerminal( { element: outputEl, datatypes: output.extensions } );
+ var outputTerminalMapping = new OutputTerminalMapping( { terminal: outputTerminal } );
+ outputTerminal.node = node;
+ if( mapOver ) {
+ outputTerminal.setMapOver( new CollectionTypeDescription( mapOver ) );
+ }
+ return outputTerminal;
+ },
+ newNode: function( ) {
+ var nodeEl = $("<div>")[ 0 ];
+ var node = new Node( { element: nodeEl } );
+ return node;
+ },
+ addOutput: function( terminal, connected ) {
+ var connectedOutput = this.newOutputTerminal();
+ var inputTerminal = this.newInputTerminal();
+ var node = terminal.node;
+ if( connected ) {
+ with_workflow_global( function() {
+ new Connector( inputTerminal, connectedOutput );
+ } );
+ }
+ this._addTerminalTo( connectedOutput, node.output_terminals );
+ return connectedOutput;
+ },
+ addConnectedOutput: function( terminal ) {
+ return this.addOutput( terminal, true );
+ },
+ addConnectedInput: function( terminal ) {
+ var connectedInput = this.newInputTerminal();
+ var outputTerminal = this.newOutputTerminal();
+ var node = terminal.node;
+ with_workflow_global( function() {
+ new Connector( connectedInput, outputTerminal );
+ } );
+ this._addTerminalTo( connectedInput, node.input_terminals );
+ return connectedInput;
+ },
+ _addTerminalTo: function( terminal, terminals ) {
+ var name = "other";
+ while( name in terminals ) {
+ name += "_";
+ }
+ terminals[ name ] = terminal;
+ },
+ verifyNotAttachable: function( inputTerminal, output ) {
+ var outputTerminal;
+ if( typeof( output ) == "string" ) {
+ // Just given a collection type... create terminal out of it.
+ outputTerminal = this.newOutputTerminal( output );
+ } else {
+ outputTerminal = output;
+ }
+
+ ok( ! inputTerminal.attachable( outputTerminal ) );
+ },
+ verifyAttachable: function( inputTerminal, output ) {
+ var outputTerminal;
+ if( typeof( output ) == "string" ) {
+ // Just given a collection type... create terminal out of it.
+ outputTerminal = this.newOutputTerminal( output );
+ } else {
+ outputTerminal = output;
+ }
+
+ ok( inputTerminal.attachable( outputTerminal ) );
+
+ // Go further... make sure datatypes are being enforced
+ inputTerminal.datatypes = [ "bam" ];
+ outputTerminal.datatypes = [ "txt" ];
+ ok( ! inputTerminal.attachable( outputTerminal ) );
+ },
+ verifyMappedOver: function( terminal ) {
+ ok( terminal.terminalMapping.mapOver.isCollection );
+ },
+ verifyNotMappedOver: function( terminal ) {
+ ok( ! terminal.terminalMapping.mapOver.isCollection );
+ },
+ } );
+
+ test( "unconstrained input can be mapped over", function() {
+ var inputTerminal1 = this.newInputTerminal();
+ this.verifyAttachable( inputTerminal1, "list" );
+ } );
+
+ test( "unmapped input can be mapped over if matching connected input terminals map type", function() {
+ var inputTerminal1 = this.newInputTerminal();
+ var connectedInput1 = this.addConnectedInput( inputTerminal1 );
+ var connectedInput2 = this.addConnectedInput( inputTerminal1 );
+ connectedInput2.setMapOver( new CollectionTypeDescription( "list") );
+ this.verifyAttachable( inputTerminal1, "list" );
+ } );
+
+ test( "unmapped input cannot be mapped over if not matching connected input terminals map type", function() {
+ var inputTerminal1 = this.newInputTerminal();
+ var connectedInput = this.addConnectedInput( inputTerminal1 );
+ connectedInput.setMapOver( new CollectionTypeDescription( "paired" ) );
+ this.verifyNotAttachable( inputTerminal1, "list" );
+ } );
+
+ test( "unmapped input with unmapped, connected outputs cannot be mapped over", function() {
+ // It would invalidate the connections - someday maybe we could try to
+ // recursively map over everything down the DAG - it would be expensive
+ // to check that though.
+ var inputTerminal1 = this.newInputTerminal();
+ this.addConnectedOutput( inputTerminal1 );
+ this.verifyNotAttachable( inputTerminal1, "list" );
+ } );
+
+ test( "unmapped input with connected mapped outputs can be mapped over if matching", function() {
+ // It would invalidate the connections - someday maybe we could try to
+ // recursively map over everything down the DAG - it would be expensive
+ // to check that though.
+ var inputTerminal1 = this.newInputTerminal();
+ var connectedOutput = this.addConnectedOutput( inputTerminal1 );
+ connectedOutput.setMapOver( new CollectionTypeDescription( "list" ) );
+ this.verifyAttachable( inputTerminal1, "list" );
+ } );
+
+ test( "unmapped input with connected mapped outputs cannot be mapped over if mapover not matching", function() {
+ // It would invalidate the connections - someday maybe we could try to
+ // recursively map over everything down the DAG - it would be expensive
+ // to check that though.
+ var inputTerminal1 = this.newInputTerminal();
+ var connectedOutput = this.addConnectedOutput( inputTerminal1 );
+ connectedOutput.setMapOver( new CollectionTypeDescription( "paired" ) );
+ this.verifyNotAttachable( inputTerminal1, "list" );
+ } );
+
+ test( "explicitly constrained input can not be mapped over by incompatible collection type", function() {
+ var inputTerminal1 = this.newInputTerminal();
+ inputTerminal1.setMapOver( new CollectionTypeDescription( "paired" ) );
+ this.verifyNotAttachable( inputTerminal1, "list" );
+ } );
+
+ test( "explicitly constrained input can be mapped over by compatible collection type", function() {
+ var inputTerminal1 = this.newInputTerminal();
+ inputTerminal1.setMapOver( new CollectionTypeDescription( "list" ) );
+ this.verifyAttachable( inputTerminal1, "list" );
+ } );
+
+ test( "unconstrained collection input can be mapped over", function() {
+ var inputTerminal1 = this.newInputCollectionTerminal( { collection_type: "paired" } );
+ this.verifyAttachable( inputTerminal1, "list:paired" );
+ } );
+
+ test( "unconstrained collection input cannot be mapped over by incompatible type", function() {
+ var inputTerminal1 = this.newInputCollectionTerminal( { collection_type: "list" } ); // Would need to be paired...
+ this.verifyNotAttachable( inputTerminal1, "list:paired" );
+ } );
+
+ test( "explicitly mapped over collection input can be attached by explicit mapping", function() {
+ var inputTerminal1 = this.newInputCollectionTerminal( { collection_type: "paired" } );
+ inputTerminal1.setMapOver( new CollectionTypeDescription( "list" ) );
+ this.verifyAttachable( inputTerminal1, "list:paired" );
+ } );
+
+ test( "explicitly mapped over collection input can be attached by explicit mapping", function() {
+ var inputTerminal1 = this.newInputCollectionTerminal( { collection_type: "list:paired" } );
+ inputTerminal1.setMapOver( new CollectionTypeDescription( "list" ) );
+ // effectively input is list:list:paired so shouldn't be able to attach
+ this.verifyNotAttachable( inputTerminal1, "list:paired" );
+ } );
+
+ test( "resetMappingIfNeeded does nothing if not mapped", function() {
+ var inputTerminal1 = this.newInputTerminal();
+ inputTerminal1.resetMappingIfNeeded();
+ this.verifyNotMappedOver( inputTerminal1 );
+ } );
+
+ test( "resetMappingIfNeeded resets unconstrained input", function() {
+ var inputTerminal1 = this.newInputTerminal( "list" );
+ this.verifyMappedOver( inputTerminal1 );
+ inputTerminal1.resetMappingIfNeeded();
+ this.verifyNotMappedOver( inputTerminal1 );
+ } );
+
+ test( "resetMappingIfNeeded does not reset if connected output depends on being mapped", function() {
+ var inputTerminal1 = this.newInputTerminal( "list" );
+ var connectedOutput = this.addConnectedOutput( inputTerminal1 );
+ connectedOutput.setMapOver( new CollectionTypeDescription( "list" ) );
+ inputTerminal1.resetMappingIfNeeded();
+ this.verifyMappedOver( inputTerminal1 );
+ } );
+
+ test( "resetMappingIfNeeded resets if node outputs are not connected to anything", function() {
+ var inputTerminal1 = this.newInputTerminal( "list" );
+ var output = this.addOutput( inputTerminal1 );
+ output.setMapOver( new CollectionTypeDescription( "list" ) );
+ inputTerminal1.resetMappingIfNeeded();
+ this.verifyNotMappedOver( inputTerminal1 );
+ } );
+
+ test( "resetMappingIfNeeded an input resets node outputs if they not connected to anything", function() {
+ var inputTerminal1 = this.newInputTerminal( "list" );
+ var output = this.addOutput( inputTerminal1 );
+ output.setMapOver( new CollectionTypeDescription( "list" ) );
+ inputTerminal1.resetMappingIfNeeded();
+ this.verifyNotMappedOver( output );
+ } );
+
+ test( "resetMappingIfNeeded resets if not last mapped over input", function() {
+ // Idea here is that other nodes are forcing output to still be mapped
+ // over so don't need to disconnect output nodes.
+ var inputTerminal1 = this.newInputTerminal( "list" );
+ var connectedInput1 = this.addConnectedInput( inputTerminal1 );
+ connectedInput1.setMapOver( new CollectionTypeDescription( "list" ) );
+ var connectedOutput = this.addConnectedOutput( inputTerminal1 );
+ connectedOutput.setMapOver( new CollectionTypeDescription( "list" ) );
+ inputTerminal1.resetMappingIfNeeded();
+ // inputTerminal1 can be reset because connectedInput1
+ // is still forcing connectedOutput to be mapped over,
+ // so verify inputTerminal1 is reset and connectedInput1
+ // and connectedOutput are untouched.
+ this.verifyNotMappedOver( inputTerminal1 );
+ this.verifyMappedOver( connectedInput1 );
+ this.verifyMappedOver( connectedOutput );
+ } );
+
});
\ No newline at end of file
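
As a rough aid to reading the CollectionTypeDescription logic and the qunit tests in the diff above, the map-over rules they exercise can be sketched in a few lines of Python. The class and method names below are illustrative only (this is not the Galaxy JavaScript API), and the special ANY/NULL sentinel types are ignored for brevity.

class CollectionTypeDescriptionSketch( object ):
    def __init__( self, collection_type ):
        self.collection_type = collection_type  # e.g. "list", "paired", "list:paired"

    def can_match( self, other ):
        # A direct match requires identical collection types.
        return self.collection_type == other.collection_type

    def can_map_over( self, other ):
        # "list:paired" can map over "paired": the outer "list" ranks become the map-over.
        return self.collection_type.endswith( ":" + other.collection_type )

    def effective_map_over( self, other ):
        # Whatever is left after stripping the inner type, e.g. "list:paired" over "paired" -> "list".
        suffix = ":" + other.collection_type
        return CollectionTypeDescriptionSketch( self.collection_type[ : -len( suffix ) ] )

    def append( self, other ):
        return CollectionTypeDescriptionSketch( self.collection_type + ":" + other.collection_type )

list_paired = CollectionTypeDescriptionSketch( "list:paired" )
paired = CollectionTypeDescriptionSketch( "paired" )
assert list_paired.can_map_over( paired )
assert list_paired.effective_map_over( paired ).collection_type == "list"
assert CollectionTypeDescriptionSketch( "list" ).append( paired ).collection_type == "list:paired"
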
https://bitbucket.org/galaxy/galaxy-central/commits/2bc485799a18/
Changeset: 2bc485799a18
User: jmchilton
Date: 2014-05-06 15:54:31
Summary: Dataset collections - tools API returns implicitly created collections.
Use improvements to write better functional tests, ensure element identifiers preserved.
Affected #: 5 files
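For orientation before the diff: pieced together from the API and functional test changes below, the JSON returned by POST /api/tools when a tool is mapped over a collection now looks roughly like the following sketch. All values are placeholders, not actual API output, and the full set of fields on each entry is not spelled out here.

# Illustrative only - approximate response shape assembled from the diff below.
example_response = {
    "outputs": [
        # one encoded dict per created dataset, with 'output_name' added back in
        { "id": "<encoded hda id>", "output_name": "out_file1" },
    ],
    "jobs": [
        # one dict per executed job, serialized with the 'collection' view
        { "id": "<encoded job id>" },
    ],
    "implicit_collections": [
        # dataset collections implicitly created by mapping the tool over its input collection
        { "id": "<encoded hdca id>", "output_name": "out_file1",
          "collection_type": "list:paired", "elements": [ "..." ] },
    ],
    # "errors": [ ... ]  # present only when some, but not all, jobs failed
}
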
diff -r 94c09b3a81468710e75ea020397b9af5e197f3d6 -r 2bc485799a188719cdf163c3a4d075cff4341fee lib/galaxy/tools/__init__.py
--- a/lib/galaxy/tools/__init__.py
+++ b/lib/galaxy/tools/__init__.py
@@ -1976,7 +1976,13 @@
execution_tracker = execute_job( trans, self, all_params, history=history, rerun_remap_job_id=rerun_remap_job_id, collection_info=collection_info )
if execution_tracker.successful_jobs:
template = 'tool_executed.mako'
- template_vars = dict( out_data=execution_tracker.output_datasets, num_jobs=len( execution_tracker.successful_jobs ), job_errors=execution_tracker.execution_errors )
+ template_vars = dict(
+ out_data=execution_tracker.output_datasets,
+ num_jobs=len( execution_tracker.successful_jobs ),
+ job_errors=execution_tracker.execution_errors,
+ jobs=execution_tracker.successful_jobs,
+ implicit_collections=execution_tracker.created_collections,
+ )
else:
template = 'message.mako'
template_vars = dict( status='error', message=execution_tracker.execution_errors[0], refresh_frames=[] )
diff -r 94c09b3a81468710e75ea020397b9af5e197f3d6 -r 2bc485799a188719cdf163c3a4d075cff4341fee lib/galaxy/tools/execute.py
--- a/lib/galaxy/tools/execute.py
+++ b/lib/galaxy/tools/execute.py
@@ -41,7 +41,7 @@
self.execution_errors = []
self.output_datasets = []
self.output_datasets_by_output_name = collections.defaultdict(list)
- self.created_collections = []
+ self.created_collections = {}
def record_success( self, job, outputs ):
self.successful_jobs.append( job )
diff -r 94c09b3a81468710e75ea020397b9af5e197f3d6 -r 2bc485799a188719cdf163c3a4d075cff4341fee lib/galaxy/webapps/galaxy/api/tools.py
--- a/lib/galaxy/webapps/galaxy/api/tools.py
+++ b/lib/galaxy/webapps/galaxy/api/tools.py
@@ -7,6 +7,7 @@
from galaxy.visualization.genomes import GenomeRegion
from galaxy.util.json import to_json_string
from galaxy.visualization.data_providers.genome import *
+from galaxy.dataset_collections.util import dictify_dataset_collection_instance
import logging
log = logging.getLogger( __name__ )
@@ -135,8 +136,16 @@
# TODO: check for errors and ensure that output dataset(s) are available.
output_datasets = vars.get( 'out_data', [] )
rval = {
- "outputs": []
+ "outputs": [],
+ "jobs": [],
+ "implicit_collections": [],
}
+
+ job_errors = vars.get( 'job_errors', [] )
+ if job_errors:
+ # If we are here - some jobs were successfully executed but some failed.
+ rval[ "errors" ] = job_errors
+
outputs = rval[ "outputs" ]
#TODO:?? poss. only return ids?
for output_name, output in output_datasets:
@@ -144,8 +153,18 @@
#add the output name back into the output data structure
#so it's possible to figure out which newly created elements
#correspond with which tool file outputs
- output_dict['output_name'] = output_name
+ output_dict[ 'output_name' ] = output_name
outputs.append( trans.security.encode_dict_ids( output_dict ) )
+
+ for job in vars[ 'jobs' ]:
+ rval[ 'jobs' ].append( self.encode_all_ids( trans, job.to_dict( view='collection' ), recursive=True ) )
+
+ for output_name, collection_instance in vars.get( 'implicit_collections', {} ).iteritems():
+ history = target_history or trans.history
+ output_dict = dictify_dataset_collection_instance( collection_instance, security=trans.security, parent=history )
+ output_dict[ 'output_name' ] = output_name
+ rval[ 'implicit_collections' ].append( output_dict )
+
return rval
#
diff -r 94c09b3a81468710e75ea020397b9af5e197f3d6 -r 2bc485799a188719cdf163c3a4d075cff4341fee test/api/test_tools.py
--- a/test/api/test_tools.py
+++ b/test/api/test_tools.py
@@ -160,8 +160,14 @@
inputs = {
"input1|__collection_multirun__": hdca_id,
}
- outputs = self._cat1_outputs( history_id, inputs=inputs )
+ create = self._run_cat1( history_id, inputs=inputs, assert_ok=True )
+ outputs = create[ 'outputs' ]
+ jobs = create[ 'jobs' ]
+ implicit_collections = create[ 'implicit_collections' ]
+ self.assertEquals( len( jobs ), 2 )
self.assertEquals( len( outputs ), 2 )
+ self.assertEquals( len( implicit_collections ), 1 )
+
self.dataset_populator.wait_for_history( history_id, assert_ok=True )
output1 = outputs[ 0 ]
output2 = outputs[ 1 ]
@@ -170,6 +176,34 @@
self.assertEquals( output1_content.strip(), "123" )
self.assertEquals( output2_content.strip(), "456" )
+ def test_map_over_nested_collections( self ):
+ self.__skip_without_tool( "cat1" )
+ history_id = self.dataset_populator.new_history()
+ hdca_id = self.__build_nested_list( history_id )
+ inputs = {
+ "input1|__collection_multirun__": hdca_id,
+ }
+ create = self._run_cat1( history_id, inputs=inputs, assert_ok=True )
+ outputs = create[ 'outputs' ]
+ jobs = create[ 'jobs' ]
+ implicit_collections = create[ 'implicit_collections' ]
+ self.assertEquals( len( jobs ), 4 )
+ self.assertEquals( len( outputs ), 4 )
+ self.assertEquals( len( implicit_collections ), 1 )
+ implicit_collection = implicit_collections[ 0 ]
+ self._assert_has_keys( implicit_collection, "collection_type", "elements" )
+ assert implicit_collection[ "collection_type" ] == "list:paired"
+ assert len( implicit_collection[ "elements" ] ) == 2
+ first_element, second_element = implicit_collection[ "elements" ]
+ assert first_element[ "element_identifier" ] == "test0"
+ assert second_element[ "element_identifier" ] == "test1"
+
+ first_object = first_element[ "object" ]
+ assert first_object[ "collection_type" ] == "paired"
+ assert len( first_object[ "elements" ] ) == 2
+ first_object_left_element = first_object[ "elements" ][ 0 ]
+ self.assertEquals( outputs[ 0 ][ "id" ], first_object_left_element[ "object" ][ "id" ] )
+
def test_map_over_two_collections( self ):
history_id = self.dataset_populator.new_history()
hdca1_id = self.__build_pair( history_id, [ "123", "456" ] )
@@ -211,7 +245,10 @@
"f1": "__collection_reduce__|%s" % hdca1_id,
"f2": "__collection_reduce__|%s" % hdca2_id,
}
- outputs = self._run_and_get_outputs( "multi_data_param", history_id, inputs )
+ create = self._run( "multi_data_param", history_id, inputs, assert_ok=True )
+ outputs = create[ 'outputs' ]
+ jobs = create[ 'jobs' ]
+ assert len( jobs ) == 1
assert len( outputs ) == 2
self.dataset_populator.wait_for_history( history_id, assert_ok=True )
output1 = outputs[ 0 ]
@@ -224,12 +261,7 @@
def test_subcollection_mapping( self ):
self.__skip_without_tool( "collection_paired_test" )
history_id = self.dataset_populator.new_history()
- hdca1_id = self.__build_pair( history_id, [ "123", "456" ] )
- hdca2_id = self.__build_pair( history_id, [ "789", "0ab" ] )
-
- response = self.dataset_collection_populator.create_list_from_pairs( history_id, [ hdca1_id, hdca2_id ] )
- self._assert_status_code_is( response, 200 )
- hdca_list_id = response.json()[ "id" ]
+ hdca_list_id = self.__build_nested_list( history_id )
inputs = {
"f1|__subcollection_multirun__": "%s|paired" % hdca_list_id
}
@@ -253,22 +285,25 @@
return self._run_outputs( self._run( tool_id, history_id, inputs ) )
def _run_outputs( self, create_response ):
- self._assert_status_code_is( create_response, 200 )
- create = create_response.json()
- self._assert_has_keys( create, 'outputs' )
- return create[ 'outputs' ]
+ self._assert_status_code_is( create_response, 200, assert_ok=True )[ 'outputs' ]
- def _run_cat1( self, history_id, inputs ):
- return self._run( 'cat1', history_id, inputs )
+ def _run_cat1( self, history_id, inputs, assert_ok=False ):
+ return self._run( 'cat1', history_id, inputs, assert_ok=assert_ok )
- def _run( self, tool_id, history_id, inputs ):
+ def _run( self, tool_id, history_id, inputs, assert_ok=False ):
payload = self.dataset_populator.run_tool_payload(
tool_id=tool_id,
inputs=inputs,
history_id=history_id,
)
create_response = self._post( "tools", data=payload )
- return create_response
+ if assert_ok:
+ self._assert_status_code_is( create_response, 200 )
+ create = create_response.json()
+ self._assert_has_keys( create, 'outputs' )
+ return create
+ else:
+ return create_response
def _upload_and_get_content( self, content, **upload_kwds ):
history_id = self.dataset_populator.new_history()
@@ -304,6 +339,15 @@
if tool_id not in self.__tool_ids( ):
raise SkipTest( )
+ def __build_nested_list( self, history_id ):
+ hdca1_id = self.__build_pair( history_id, [ "123", "456" ] )
+ hdca2_id = self.__build_pair( history_id, [ "789", "0ab" ] )
+
+ response = self.dataset_collection_populator.create_list_from_pairs( history_id, [ hdca1_id, hdca2_id ] )
+ self._assert_status_code_is( response, 200 )
+ hdca_list_id = response.json()[ "id" ]
+ return hdca_list_id
+
def __build_pair( self, history_id, contents ):
create_response = self.dataset_collection_populator.create_pair_in_history( history_id, contents=contents )
hdca_id = create_response.json()[ "id" ]
diff -r 94c09b3a81468710e75ea020397b9af5e197f3d6 -r 2bc485799a188719cdf163c3a4d075cff4341fee test/unit/tools/test_execution.py
--- a/test/unit/tools/test_execution.py
+++ b/test/unit/tools/test_execution.py
@@ -191,6 +191,7 @@
assert len( self.tool_action.execution_call_args ) == 2
self.assertEquals( self.tool_action.execution_call_args[ 0 ][ "incoming" ][ "param1" ], hda1 )
self.assertEquals( self.tool_action.execution_call_args[ 1 ][ "incoming" ][ "param1" ], hda2 )
+ self.assertEquals( len( template_vars[ "jobs" ] ), 2 )
def test_cannot_multirun_and_remap( self ):
hda1, hda2 = self.__setup_multirun_job()
https://bitbucket.org/galaxy/galaxy-central/commits/db98ed916ec4/
Changeset: db98ed916ec4
User: jmchilton
Date: 2014-05-06 15:54:31
Summary: Rework encode_all_ids in BaseController so it can be used outside of controllers.
Moved into SecurityHelper. Also fixed up style, eliminated unconditional exception handling, and added unit tests.
Affected #: 3 files
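A minimal sketch of the behaviour the relocated helper provides, based on the unit tests in the diff below. The id_secret keyword used to construct the helper is an assumption about SecurityHelper's constructor and is not shown in this changeset.

from galaxy.web.security import SecurityHelper

security = SecurityHelper( id_secret="change-this-in-production" )  # constructor keyword assumed
raw = dict( id=1, history_id=2, tool_id="cat1", objects=dict( job_ids=[ 3, 4 ] ) )
encoded = security.encode_all_ids( raw, recursive=True )
# 'id' and '*_id' keys are encoded, '*_ids' lists are encoded element by element,
# and with recursive=True nested dicts are processed the same way.
assert encoded[ "tool_id" ] == "cat1"  # tool_id is deliberately left untouched
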
diff -r 2bc485799a188719cdf163c3a4d075cff4341fee -r db98ed916ec46c2fe7a2b491875e69274b3c1e6b lib/galaxy/web/base/controller.py
--- a/lib/galaxy/web/base/controller.py
+++ b/lib/galaxy/web/base/controller.py
@@ -161,26 +161,7 @@
It might be useful to turn this in to a decorator
"""
- if type( rval ) != dict:
- return rval
- for k, v in rval.items():
- if (k == 'id' or k.endswith( '_id' )) and v is not None and k not in ['tool_id']:
- try:
- rval[k] = trans.security.encode_id( v )
- except:
- pass # probably already encoded
- if (k.endswith("_ids") and type(v) == list):
- try:
- o = []
- for i in v:
- o.append(trans.security.encode_id( i ))
- rval[k] = o
- except:
- pass
- else:
- if recursive and type(v) == dict:
- rval[k] = self.encode_all_ids(trans, v, recursive)
- return rval
+ return trans.security.encode_all_ids( rval, recursive=recursive )
# incoming param validation
# should probably be in sep. serializer class/object _used_ by controller
diff -r 2bc485799a188719cdf163c3a4d075cff4341fee -r db98ed916ec46c2fe7a2b491875e69274b3c1e6b lib/galaxy/web/security/__init__.py
--- a/lib/galaxy/web/security/__init__.py
+++ b/lib/galaxy/web/security/__init__.py
@@ -64,6 +64,33 @@
return a_dict
+ def encode_all_ids( self, rval, recursive=False ):
+ """
+ Encodes all integer values in the dict rval whose keys are 'id' or end
+ with '_id', excluding `tool_id`, which is consumed and produced as-is
+ by the API.
+ """
+ if not isinstance( rval, dict ):
+ return rval
+ for k, v in rval.items():
+ if ( k == 'id' or k.endswith( '_id' ) ) and v is not None and k not in [ 'tool_id' ]:
+ try:
+ rval[ k ] = self.encode_id( v )
+ except Exception:
+ pass # probably already encoded
+ if ( k.endswith( "_ids" ) and isinstance( v, list ) ):
+ try:
+ o = []
+ for i in v:
+ o.append( self.encode_id( i ) )
+ rval[ k ] = o
+ except Exception:
+ pass
+ else:
+ if recursive and isinstance( v, dict ):
+ rval[ k ] = self.encode_all_ids( v, recursive )
+ return rval
+
def decode_id( self, obj_id, kind=None ):
id_cipher = self.__id_cipher( kind )
return int( id_cipher.decrypt( obj_id.decode( 'hex' ) ).lstrip( "!" ) )
diff -r 2bc485799a188719cdf163c3a4d075cff4341fee -r db98ed916ec46c2fe7a2b491875e69274b3c1e6b test/unit/test_security_helper.py
--- a/test/unit/test_security_helper.py
+++ b/test/unit/test_security_helper.py
@@ -12,6 +12,26 @@
assert 1 == test_helper_1.decode_id( test_helper_1.encode_id( 1 ) )
+def test_nested_encoding():
+ # Does nothing if not a dict
+ assert test_helper_1.encode_all_ids( 1 ) == 1
+
+ # Encodes top-level things ending in _id
+ assert test_helper_1.encode_all_ids( dict( history_id=1 ) )[ "history_id" ] == test_helper_1.encode_id( 1 )
+ # ..except tool_id
+ assert test_helper_1.encode_all_ids( dict( tool_id=1 ) )[ "tool_id" ] == 1
+
+ # Encodes lists at the top level if they end in _ids
+ expected_ids = [ test_helper_1.encode_id( 1 ), test_helper_1.encode_id( 2 ) ]
+ assert test_helper_1.encode_all_ids( dict( history_ids=[ 1, 2 ] ) )[ "history_ids" ] == expected_ids
+
+ # Encodes nested stuff if and only if recursive is set to True.
+ nested_dict = dict( objects=dict( history_ids=[ 1, 2 ] ) )
+ assert test_helper_1.encode_all_ids( nested_dict )[ "objects" ][ "history_ids" ] == [ 1, 2 ]
+ assert test_helper_1.encode_all_ids( nested_dict, recursive=False )[ "objects" ][ "history_ids" ] == [ 1, 2 ]
+ assert test_helper_1.encode_all_ids( nested_dict, recursive=True )[ "objects" ][ "history_ids" ] == expected_ids
+
+
def test_per_kind_encode_deocde():
# Different ids are encoded differently
assert test_helper_1.encode_id( 1, kind="k1" ) != test_helper_1.encode_id( 2, kind="k1" )
https://bitbucket.org/galaxy/galaxy-central/commits/912ba22e939f/
Changeset: 912ba22e939f
User: jmchilton
Date: 2014-05-06 15:54:31
Summary: Dataset collections - fix to ensure element ids from API are always encoded.
Affected #: 2 files
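A minimal sketch of what the added list handling enables (same assumption as the earlier sketch about SecurityHelper's id_secret keyword; the dict is a made-up stand-in for a serialized collection, not real API output):

from galaxy.web.security import SecurityHelper

security = SecurityHelper( id_secret="change-this-in-production" )  # constructor keyword assumed
hdca_dict = dict( id=1, elements=[ dict( id=2 ), dict( id=3 ) ] )
encoded = security.encode_all_ids( hdca_dict, recursive=True )
# Previously recursion only descended into nested dicts; with this fix each dict
# inside a list (such as a collection's 'elements') has its ids encoded as well.
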
diff -r db98ed916ec46c2fe7a2b491875e69274b3c1e6b -r 912ba22e939f505a33ef83d3bfbdbe26aa9c6a9f lib/galaxy/dataset_collections/util.py
--- a/lib/galaxy/dataset_collections/util.py
+++ b/lib/galaxy/dataset_collections/util.py
@@ -35,7 +35,7 @@
dict_value[ 'url' ] = web.url_for( 'library_content', library_id=encoded_library_id, id=encoded_id, folder_id=encoded_folder_id )
if view == "element":
dict_value[ 'elements' ] = map( dictify_element, dataset_colleciton_instance.collection.elements )
- security.encode_dict_ids( dict_value ) # TODO: Use Kyle's recursive formulation of this.
+ security.encode_all_ids( dict_value, recursive=True ) # TODO: Use Kyle's recursive formulation of this.
return dict_value
diff -r db98ed916ec46c2fe7a2b491875e69274b3c1e6b -r 912ba22e939f505a33ef83d3bfbdbe26aa9c6a9f lib/galaxy/web/security/__init__.py
--- a/lib/galaxy/web/security/__init__.py
+++ b/lib/galaxy/web/security/__init__.py
@@ -89,6 +89,8 @@
else:
if recursive and isinstance( v, dict ):
rval[ k ] = self.encode_all_ids( v, recursive )
+ elif recursive and isinstance( v, list ):
+ rval[ k ] = map( lambda el: self.encode_all_ids( el, True), v )
return rval
def decode_id( self, obj_id, kind=None ):
Repository URL: https://bitbucket.org/galaxy/galaxy-central/
--
This is a commit notification from bitbucket.org. You are receiving
this because you have the service enabled, addressing the recipient of
this email.
commit/galaxy-central: greg: Back out the changes in 41bb600 that resulted in changes to the way the Tool Shed's package installation framework handled dependencies that are contained in the same repository as tools that require them.
by commits-noreply@bitbucket.org 06 May '14
1 new commit in galaxy-central:
https://bitbucket.org/galaxy/galaxy-central/commits/9e0362d990f8/
Changeset: 9e0362d990f8
User: greg
Date: 2014-05-06 15:24:36
Summary: Back out the changes in 41bb600 that resulted in changes to the way the Tool Shed's package installation framework handled dependencies that are contained in the same repository as tools that require them.
Affected #: 2 files
diff -r 8e6cda4c1b3d1685a8ce07412dc542f34eb8b44b -r 9e0362d990f88502b77211d2d3f7f5459123f52c lib/tool_shed/galaxy_install/tool_dependencies/install_util.py
--- a/lib/tool_shed/galaxy_install/tool_dependencies/install_util.py
+++ b/lib/tool_shed/galaxy_install/tool_dependencies/install_util.py
@@ -289,55 +289,6 @@
raise Exception( message )
return handled_tool_dependencies
-def handle_env_vars_for_set_environment_tool_dependency( app, tool_shed_repository, tool_shed_repository_install_dir ):
- env_var_name = 'PATH'
- install_dir = \
- tool_dependency_util.get_tool_dependency_install_dir( app=app,
- repository_name=tool_shed_repository.name,
- repository_owner=tool_shed_repository.owner,
- repository_changeset_revision=tool_shed_repository.installed_changeset_revision,
- tool_dependency_type='set_environment',
- tool_dependency_name=env_var_name,
- tool_dependency_version=None )
- env_var_dict = dict( name=env_var_name, action='prepend_to', value=tool_shed_repository_install_dir )
- if not os.path.exists( install_dir ):
- os.makedirs( install_dir )
- status = app.install_model.ToolDependency.installation_status.INSTALLING
- tool_dependency = \
- tool_dependency_util.create_or_update_tool_dependency( app=app,
- tool_shed_repository=tool_shed_repository,
- name=env_var_name,
- version=None,
- type='set_environment',
- status=status,
- set_status=True )
- env_file_builder = EnvFileBuilder( install_dir )
- return_code = env_file_builder.append_line( make_executable=True, **env_var_dict )
- if return_code:
- error_message = 'Error creating env.sh file for tool dependency %s, return_code: %s' % \
- ( str( tool_dependency.name ), str( return_code ) )
- log.debug( error_message )
- status = app.install_model.ToolDependency.installation_status.ERROR
- tool_dependency = \
- tool_dependency_util.set_tool_dependency_attributes( app,
- tool_dependency=tool_dependency,
- status=status,
- error_message=error_message,
- remove_from_disk=False )
- else:
- if tool_dependency.status not in [ app.install_model.ToolDependency.installation_status.ERROR,
- app.install_model.ToolDependency.installation_status.INSTALLED ]:
- status = app.install_model.ToolDependency.installation_status.INSTALLED
- tool_dependency = \
- tool_dependency_util.set_tool_dependency_attributes( app,
- tool_dependency=tool_dependency,
- status=status,
- error_message=None,
- remove_from_disk=False )
- log.debug( 'Environment variable %s set in %s for tool dependency %s.' % \
- ( str( env_var_name ), str( install_dir ), str( tool_dependency.name ) ) )
- return tool_dependency
-
def install_and_build_package_via_fabric( app, tool_shed_repository, tool_dependency, actions_dict ):
sa_session = app.install_model.context
try:
@@ -700,7 +651,6 @@
tool_dependencies = []
env_var_version = elem.get( 'version', '1.0' )
tool_shed_repository_install_dir = fabric_util.get_tool_shed_repository_install_dir( app, tool_shed_repository )
- tool_shed_repository_install_dir_added_to_path = False
for env_var_elem in elem:
# Althoug we're in a loop here, this method will always return only a single ToolDependency or None.
env_var_name = env_var_elem.get( 'name', None )
@@ -771,16 +721,6 @@
status=status,
error_message=error_message,
remove_from_disk=False )
- if tool_dependency.status != app.install_model.ToolDependency.installation_status.ERROR:
- if env_var_dict[ 'name' ] == 'PATH' and \
- env_var_dict[ 'action' ] in [ 'prepend_to', 'set_to', 'append_to' ] and \
- env_var_dict[ 'value' ] == install_environment.tool_shed_repository_install_dir:
- tool_shed_repository_install_dir_added_to_path = True
- tool_dependencies.append( tool_dependency )
- if not tool_shed_repository_install_dir_added_to_path:
- tool_dependency = handle_env_vars_for_set_environment_tool_dependency( app,
- tool_shed_repository,
- tool_shed_repository_install_dir )
tool_dependencies.append( tool_dependency )
return tool_dependencies
diff -r 8e6cda4c1b3d1685a8ce07412dc542f34eb8b44b -r 9e0362d990f88502b77211d2d3f7f5459123f52c lib/tool_shed/util/metadata_util.py
--- a/lib/tool_shed/util/metadata_util.py
+++ b/lib/tool_shed/util/metadata_util.py
@@ -23,7 +23,6 @@
from tool_shed.galaxy_install.tool_dependencies import install_util
from tool_shed.galaxy_install.tool_dependencies import td_common_util
import tool_shed.repository_types.util as rt_util
-from xml.etree import ElementTree as XmlET
eggs.require( 'mercurial' )
@@ -922,28 +921,8 @@
invalid_tool_dependencies_dict = {}
valid_repository_dependency_tups = []
invalid_repository_dependency_tups = []
- needs_set_environment_tool_dependency_for_path = False
tools_metadata = metadata_dict.get( 'tools', None )
- if tools_metadata is not None:
- for tools_dict in tools_metadata:
- requirements = tools_dict.get( 'requirements', None )
- if requirements is not None:
- for requirements_dict in requirements:
- if requirements_dict[ 'type' ] == 'set_environment' and requirements_dict[ 'name' ] == 'PATH':
- needs_set_environment_tool_dependency_for_path = True
- break
description = root.get( 'description' )
- if needs_set_environment_tool_dependency_for_path:
- # Add this to the in-memory XML tree that is parsed to determine the database tool dependency records. This will not
- # modify the on-disk tool dependency definitions, but is needed in order for the tool to correctly source the env.sh
- # file that was generated for the PATH variable.
- # <set_environment version="1.0">
- # <environment_variable action="prepend_to" name="PATH">$REPOSITORY_INSTALL_DIR</environment_variable>
- # </set_environment>
- env_var_elem_attributes = dict( name='PATH', action='prepend_to' )
- set_environment_elem = xml_util.create_element( 'set_environment', attributes=dict( version='1.0' ) )
- XmlET.SubElement( set_environment_elem, 'environment_variable', attrib=env_var_elem_attributes )
- root.append( set_environment_elem )
for elem in root:
if elem.tag == 'package':
valid_tool_dependencies_dict, invalid_tool_dependencies_dict, repository_dependency_tup, repository_dependency_is_valid, message = \
Repository URL: https://bitbucket.org/galaxy/galaxy-central/
--
This is a commit notification from bitbucket.org. You are receiving
this because you have the service enabled, addressing the recipient of
this email.
commit/galaxy-central: jmchilton: Merged in jmchilton/galaxy-central-fork-1 (pull request #356)
by commits-noreply@bitbucket.org 06 May '14
1 new commit in galaxy-central:
https://bitbucket.org/galaxy/galaxy-central/commits/8e6cda4c1b3d/
Changeset: 8e6cda4c1b3d
User: jmchilton
Date: 2014-05-06 15:13:29
Summary: Merged in jmchilton/galaxy-central-fork-1 (pull request #356)
Enhancements for Runtime Discovered (Collected Primary) Datasets
Affected #: 13 files
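One piece of context that helps when reading the large diff below: runtime-discovered "primary" dataset filenames are matched against a regex (DEFAULT_EXTRA_FILENAME_PATTERN in the new output_collect.py), with the DATASET_ID token substituted by the output dataset's id. A small, self-contained illustration of how such a filename is parsed follows; the filename and the id 42 are made up for the example.

import re

# Default pattern with DATASET_ID already substituted for an output dataset with id 42.
pattern = r"primary_42_(?P<designation>[^_]+)_(?P<visible>[^_]+)_(?P<ext>[^_]+)(_(?P<dbkey>[^_]+))?"
match = re.match( pattern, "primary_42_reverse_visible_fastqsanger_hg19" )
assert match.group( "designation" ) == "reverse"   # becomes part of the dataset name
assert match.group( "visible" ) == "visible"       # controls whether the dataset is hidden
assert match.group( "ext" ) == "fastqsanger"       # datatype extension
assert match.group( "dbkey" ) == "hg19"            # optional genome build
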
diff -r 74b6e23ed7882f3c091d2b66ce85025241372017 -r 8e6cda4c1b3d1685a8ce07412dc542f34eb8b44b lib/galaxy/tools/__init__.py
--- a/lib/galaxy/tools/__init__.py
+++ b/lib/galaxy/tools/__init__.py
@@ -42,6 +42,7 @@
from galaxy.tools.deps import build_dependency_manager
from galaxy.tools.deps.requirements import parse_requirements_from_xml
from galaxy.tools.parameters import check_param, params_from_strings, params_to_strings
+from galaxy.tools.parameters import output_collect
from galaxy.tools.parameters.basic import (BaseURLToolParameter,
DataToolParameter, HiddenToolParameter, LibraryDatasetToolParameter,
SelectToolParameter, ToolParameter, UnvalidatedValue,
@@ -1452,6 +1453,7 @@
output.hidden = string_as_bool( data_elem.get("hidden", "") )
output.tool = self
output.actions = ToolOutputActionGroup( output, data_elem.find( 'actions' ) )
+ output.dataset_collectors = output_collect.dataset_collectors_from_elem( data_elem )
self.outputs[ output.name ] = output
# TODO: Include the tool's name in any parsing warnings.
@@ -2800,94 +2802,7 @@
Find any additional datasets generated by a tool and attach (for
cases where number of outputs is not known in advance).
"""
- new_primary_datasets = {}
- try:
- json_file = open( os.path.join( job_working_directory, jobs.TOOL_PROVIDED_JOB_METADATA_FILE ), 'r' )
- for line in json_file:
- line = json.loads( line )
- if line.get( 'type' ) == 'new_primary_dataset':
- new_primary_datasets[ os.path.split( line.get( 'filename' ) )[-1] ] = line
- except Exception:
- # This should not be considered an error or warning condition, this file is optional
- pass
- # Loop through output file names, looking for generated primary
- # datasets in form of:
- # 'primary_associatedWithDatasetID_designation_visibility_extension(_DBKEY)'
- primary_datasets = {}
- for name, outdata in output.items():
- filenames = []
- if 'new_file_path' in self.app.config.collect_outputs_from:
- filenames.extend( glob.glob(os.path.join(self.app.config.new_file_path, "primary_%i_*" % outdata.id) ) )
- if 'job_working_directory' in self.app.config.collect_outputs_from:
- filenames.extend( glob.glob(os.path.join(job_working_directory, "primary_%i_*" % outdata.id) ) )
- for filename in filenames:
- if not name in primary_datasets:
- primary_datasets[name] = {}
- fields = os.path.basename(filename).split("_")
- fields.pop(0)
- parent_id = int(fields.pop(0))
- designation = fields.pop(0)
- visible = fields.pop(0).lower()
- if visible == "visible":
- visible = True
- else:
- visible = False
- ext = fields.pop(0).lower()
- dbkey = outdata.dbkey
- if fields:
- dbkey = fields[ 0 ]
- # Create new primary dataset
- primary_data = self.app.model.HistoryDatasetAssociation( extension=ext,
- designation=designation,
- visible=visible,
- dbkey=dbkey,
- create_dataset=True,
- sa_session=self.sa_session )
- self.app.security_agent.copy_dataset_permissions( outdata.dataset, primary_data.dataset )
- self.sa_session.add( primary_data )
- self.sa_session.flush()
- # Move data from temp location to dataset location
- self.app.object_store.update_from_file(primary_data.dataset, file_name=filename, create=True)
- primary_data.set_size()
- primary_data.name = "%s (%s)" % ( outdata.name, designation )
- primary_data.info = outdata.info
- primary_data.init_meta( copy_from=outdata )
- primary_data.dbkey = dbkey
- # Associate new dataset with job
- job = None
- for assoc in outdata.creating_job_associations:
- job = assoc.job
- break
- if job:
- assoc = self.app.model.JobToOutputDatasetAssociation( '__new_primary_file_%s|%s__' % ( name, designation ), primary_data )
- assoc.job = job
- self.sa_session.add( assoc )
- self.sa_session.flush()
- primary_data.state = outdata.state
- #add tool/metadata provided information
- new_primary_datasets_attributes = new_primary_datasets.get( os.path.split( filename )[-1] )
- if new_primary_datasets_attributes:
- dataset_att_by_name = dict( ext='extension' )
- for att_set in [ 'name', 'info', 'ext', 'dbkey' ]:
- dataset_att_name = dataset_att_by_name.get( att_set, att_set )
- setattr( primary_data, dataset_att_name, new_primary_datasets_attributes.get( att_set, getattr( primary_data, dataset_att_name ) ) )
- primary_data.set_meta()
- primary_data.set_peek()
- self.sa_session.add( primary_data )
- self.sa_session.flush()
- outdata.history.add_dataset( primary_data )
- # Add dataset to return dict
- primary_datasets[name][designation] = primary_data
- # Need to update all associated output hdas, i.e. history was
- # shared with job running
- for dataset in outdata.dataset.history_associations:
- if outdata == dataset:
- continue
- new_data = primary_data.copy()
- dataset.history.add_dataset( new_data )
- self.sa_session.add( new_data )
- self.sa_session.flush()
- return primary_datasets
+ return output_collect.collect_primary_datatasets( self, output, job_working_directory )
def to_dict( self, trans, link_details=False, io_details=False ):
""" Returns dict of tool. """
diff -r 74b6e23ed7882f3c091d2b66ce85025241372017 -r 8e6cda4c1b3d1685a8ce07412dc542f34eb8b44b lib/galaxy/tools/parameters/output_collect.py
--- /dev/null
+++ b/lib/galaxy/tools/parameters/output_collect.py
@@ -0,0 +1,219 @@
+""" Code allowing tools to define extra files associated with an output datset.
+"""
+import os
+import re
+import glob
+import json
+
+
+from galaxy import jobs
+from galaxy import util
+from galaxy.util import odict
+
+DATASET_ID_TOKEN = "DATASET_ID"
+DEFAULT_EXTRA_FILENAME_PATTERN = r"primary_DATASET_ID_(?P<designation>[^_]+)_(?P<visible>[^_]+)_(?P<ext>[^_]+)(_(?P<dbkey>[^_]+))?"
+
+
+def collect_primary_datatasets( tool, output, job_working_directory ):
+ app = tool.app
+ sa_session = tool.sa_session
+ new_primary_datasets = {}
+ try:
+ json_file = open( os.path.join( job_working_directory, jobs.TOOL_PROVIDED_JOB_METADATA_FILE ), 'r' )
+ for line in json_file:
+ line = json.loads( line )
+ if line.get( 'type' ) == 'new_primary_dataset':
+ new_primary_datasets[ os.path.split( line.get( 'filename' ) )[-1] ] = line
+ except Exception:
+ # This should not be considered an error or warning condition, this file is optional
+ pass
+ # Loop through output file names, looking for generated primary
+ # datasets in form of:
+ # 'primary_associatedWithDatasetID_designation_visibility_extension(_DBKEY)'
+ primary_datasets = {}
+ for name, outdata in output.items():
+ dataset_collectors = tool.outputs[ name ].dataset_collectors if name in tool.outputs else [ DEFAULT_DATASET_COLLECTOR ]
+ filenames = odict.odict()
+ if 'new_file_path' in app.config.collect_outputs_from:
+ if DEFAULT_DATASET_COLLECTOR in dataset_collectors:
+ # 'new_file_path' collection should be considered deprecated,
+ # only use old-style matching (glob instead of regex and only
+ # using default collector - if enabled).
+ for filename in glob.glob(os.path.join(app.config.new_file_path, "primary_%i_*" % outdata.id) ):
+ filenames[ filename ] = DEFAULT_DATASET_COLLECTOR
+ if 'job_working_directory' in app.config.collect_outputs_from:
+ for extra_file_collector in dataset_collectors:
+ directory = job_working_directory
+ if extra_file_collector.directory:
+ directory = os.path.join( directory, extra_file_collector.directory )
+ if not util.in_directory( directory, job_working_directory ):
+ raise Exception( "Problem with tool configuration, attempting to pull in datasets from outside working directory." )
+ if not os.path.isdir( directory ):
+ continue
+ for filename in os.listdir( directory ):
+ path = os.path.join( directory, filename )
+ if not os.path.isfile( path ):
+ continue
+ if extra_file_collector.match( outdata, filename ):
+ filenames[ path ] = extra_file_collector
+ for filename, extra_file_collector in filenames.iteritems():
+ if not name in primary_datasets:
+ primary_datasets[name] = {}
+ fields_match = extra_file_collector.match( outdata, os.path.basename( filename ) )
+ if not fields_match:
+ # Before I guess pop() would just have thrown an IndexError
+ raise Exception( "Problem parsing metadata fields for file %s" % filename )
+ designation = fields_match.designation
+ visible = fields_match.visible
+ ext = fields_match.ext
+ dbkey = fields_match.dbkey
+ # Create new primary dataset
+ primary_data = app.model.HistoryDatasetAssociation( extension=ext,
+ designation=designation,
+ visible=visible,
+ dbkey=dbkey,
+ create_dataset=True,
+ sa_session=sa_session )
+ app.security_agent.copy_dataset_permissions( outdata.dataset, primary_data.dataset )
+ sa_session.add( primary_data )
+ sa_session.flush()
+ # Move data from temp location to dataset location
+ app.object_store.update_from_file(primary_data.dataset, file_name=filename, create=True)
+ primary_data.set_size()
+ # If match specified a name use otherwise generate one from
+ # designation.
+ primary_data.name = fields_match.name or "%s (%s)" % ( outdata.name, designation )
+ primary_data.info = outdata.info
+ primary_data.init_meta( copy_from=outdata )
+ primary_data.dbkey = dbkey
+ # Associate new dataset with job
+ job = None
+ for assoc in outdata.creating_job_associations:
+ job = assoc.job
+ break
+ if job:
+ assoc = app.model.JobToOutputDatasetAssociation( '__new_primary_file_%s|%s__' % ( name, designation ), primary_data )
+ assoc.job = job
+ sa_session.add( assoc )
+ sa_session.flush()
+ primary_data.state = outdata.state
+ #add tool/metadata provided information
+ new_primary_datasets_attributes = new_primary_datasets.get( os.path.split( filename )[-1] )
+ if new_primary_datasets_attributes:
+ dataset_att_by_name = dict( ext='extension' )
+ for att_set in [ 'name', 'info', 'ext', 'dbkey' ]:
+ dataset_att_name = dataset_att_by_name.get( att_set, att_set )
+ setattr( primary_data, dataset_att_name, new_primary_datasets_attributes.get( att_set, getattr( primary_data, dataset_att_name ) ) )
+ primary_data.set_meta()
+ primary_data.set_peek()
+ sa_session.add( primary_data )
+ sa_session.flush()
+ outdata.history.add_dataset( primary_data )
+ # Add dataset to return dict
+ primary_datasets[name][designation] = primary_data
+ # Need to update all associated output hdas, i.e. history was
+ # shared with job running
+ for dataset in outdata.dataset.history_associations:
+ if outdata == dataset:
+ continue
+ new_data = primary_data.copy()
+ dataset.history.add_dataset( new_data )
+ sa_session.add( new_data )
+ sa_session.flush()
+ return primary_datasets
+
+
+# XML can describe custom patterns, but these literals describe named
+# patterns that will be replaced.
+NAMED_PATTERNS = {
+ "__default__": DEFAULT_EXTRA_FILENAME_PATTERN,
+ "__name__": r"(?P<name>.*)",
+ "__designation__": r"(?P<designation>.*)",
+ "__name_and_ext__": r"(?P<name>.*)\.(?P<ext>[^\.]+)?",
+ "__designation_and_ext__": r"(?P<designation>.*)\.(?P<ext>[^\._]+)?",
+}
+
+
+def dataset_collectors_from_elem( elem ):
+ primary_dataset_elems = elem.findall( "discover_datasets" )
+ if not primary_dataset_elems:
+ return [ DEFAULT_DATASET_COLLECTOR ]
+ else:
+ return map( lambda elem: DatasetCollector( **elem.attrib ), primary_dataset_elems )
+
+
+class DatasetCollector( object ):
+
+ def __init__( self, **kwargs ):
+ pattern = kwargs.get( "pattern", "__default__" )
+ if pattern in NAMED_PATTERNS:
+ pattern = NAMED_PATTERNS.get( pattern )
+ self.pattern = pattern
+ self.default_dbkey = kwargs.get( "dbkey", None )
+ self.default_ext = kwargs.get( "ext", None )
+ self.default_visible = util.asbool( kwargs.get( "visible", None ) )
+ self.directory = kwargs.get( "directory", None )
+
+ def pattern_for_dataset( self, dataset_instance=None ):
+ token_replacement = r'\d+'
+ if dataset_instance:
+ token_replacement = str( dataset_instance.id )
+ return self.pattern.replace( DATASET_ID_TOKEN, token_replacement )
+
+ def match( self, dataset_instance, filename ):
+ re_match = re.match( self.pattern_for_dataset( dataset_instance ), filename )
+ match_object = None
+ if re_match:
+ match_object = CollectedDatasetMatch( re_match, self )
+ return match_object
+
+
+class CollectedDatasetMatch( object ):
+
+ def __init__( self, re_match, collector ):
+ self.re_match = re_match
+ self.collector = collector
+
+ @property
+ def designation( self ):
+ re_match = self.re_match
+ if "designation" in re_match.groupdict():
+ return re_match.group( "designation" )
+ elif "name" in re_match.groupdict():
+ return re_match.group( "name" )
+ else:
+ return None
+
+ @property
+ def name( self ):
+ """ Return name or None if not defined by the discovery pattern.
+ """
+ re_match = self.re_match
+ name = None
+ if "name" in re_match.groupdict():
+ name = re_match.group( "name" )
+ return name
+
+ @property
+ def dbkey( self ):
+ try:
+ return self.re_match.group( "dbkey" )
+ except IndexError:
+ return self.collector.default_dbkey
+
+ @property
+ def ext( self ):
+ try:
+ return self.re_match.group( "ext" )
+ except IndexError:
+ return self.collector.default_ext
+
+ @property
+ def visible( self ):
+ try:
+ return self.re_match.group( "visible" ).lower() == "visible"
+ except IndexError:
+ return self.collector.default_visible
+
+
+DEFAULT_DATASET_COLLECTOR = DatasetCollector()
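As a usage note, the collectors defined in the new module above can be exercised directly; the pattern, directory, and filename below are invented for illustration, and the snippet assumes a Galaxy checkout on the Python path:

    from galaxy.tools.parameters import output_collect

    # '__name_and_ext__' resolves to r"(?P<name>.*)\.(?P<ext>[^\.]+)?" via NAMED_PATTERNS.
    collector = output_collect.DatasetCollector(pattern="__name_and_ext__", directory="subdir")
    match = collector.match(None, "sample1.tabular")  # no dataset needed: pattern has no DATASET_ID token
    if match:
        print(match.name)         # 'sample1'
        print(match.designation)  # falls back to the name group: 'sample1'
        print(match.ext)          # 'tabular'
        print(match.dbkey)        # None - no dbkey group and no default configured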
diff -r 74b6e23ed7882f3c091d2b66ce85025241372017 -r 8e6cda4c1b3d1685a8ce07412dc542f34eb8b44b lib/galaxy/tools/test.py
--- a/lib/galaxy/tools/test.py
+++ b/lib/galaxy/tools/test.py
@@ -297,6 +297,19 @@
if name is None:
raise Exception( "Test output does not have a 'name'" )
+ file, attributes = __parse_test_attributes( output_elem, attrib )
+ primary_datasets = {}
+ for primary_elem in ( output_elem.findall( "discovered_dataset" ) or [] ):
+ primary_attrib = dict( primary_elem.attrib )
+ designation = primary_attrib.pop( 'designation', None )
+ if designation is None:
+ raise Exception( "Test primary dataset does not have a 'designation'" )
+ primary_datasets[ designation ] = __parse_test_attributes( primary_elem, primary_attrib )
+ attributes[ "primary_datasets" ] = primary_datasets
+ return name, file, attributes
+
+
+def __parse_test_attributes( output_elem, attrib ):
assert_list = __parse_assert_list( output_elem )
file = attrib.pop( 'file', None )
# File no longer required if an list of assertions was present.
@@ -321,7 +334,7 @@
attributes['assert_list'] = assert_list
attributes['extra_files'] = extra_files
attributes['metadata'] = metadata
- return name, file, attributes
+ return file, attributes
def __parse_assert_list( output_elem ):
diff -r 74b6e23ed7882f3c091d2b66ce85025241372017 -r 8e6cda4c1b3d1685a8ce07412dc542f34eb8b44b lib/galaxy/webapps/galaxy/api/jobs.py
--- a/lib/galaxy/webapps/galaxy/api/jobs.py
+++ b/lib/galaxy/webapps/galaxy/api/jobs.py
@@ -13,6 +13,7 @@
from galaxy.web.base.controller import UsesLibraryMixinItems
from galaxy import exceptions
from galaxy import util
+from galaxy import model
import logging
log = logging.getLogger( __name__ )
@@ -91,10 +92,62 @@
:rtype: dictionary
:returns: dictionary containing full description of job data
"""
+ job = self.__get_job( trans, id )
+ return self.encode_all_ids( trans, job.to_dict( 'element' ), True )
+
+ @expose_api
+ def inputs( self, trans, id, **kwd ):
+ """
+ show( trans, id )
+ * GET /api/jobs/{job_id}/inputs
+ returns input datasets created by job
+
+ :type id: string
+ :param id: Encoded job id
+
+ :rtype: dictionary
+ :returns: dictionary containing input dataset associations
+ """
+ job = self.__get_job( trans, id )
+ return self.__dictify_associations( trans, job.input_datasets, job.input_library_datasets )
+
+ @expose_api
+ def outputs( self, trans, id, **kwd ):
+ """
+ show( trans, id )
+ * GET /api/jobs/{job_id}/outputs
+ returns output datasets created by job
+
+ :type id: string
+ :param id: Encoded job id
+
+ :rtype: dictionary
+ :returns: dictionary containing output dataset associations
+ """
+ job = self.__get_job( trans, id )
+ return self.__dictify_associations( trans, job.output_datasets, job.output_library_datasets )
+
+ def __dictify_associations( self, trans, *association_lists ):
+ rval = []
+ for association_list in association_lists:
+ rval.extend( map( lambda a: self.__dictify_association( trans, a ), association_list ) )
+ return rval
+
+ def __dictify_association( self, trans, job_dataset_association ):
+ dataset_dict = None
+ dataset = job_dataset_association.dataset
+ if dataset:
+ if isinstance( dataset, model.HistoryDatasetAssociation ):
+ dataset_dict = dict( src="hda", id=trans.security.encode_id( dataset.id ) )
+ else:
+ dataset_dict = dict( src="ldda", id=trans.security.encode_id( dataset.id ) )
+ return dict( name=job_dataset_association.name, dataset=dataset_dict )
+
+ def __get_job( self, trans, id ):
try:
- decoded_job_id = trans.security.decode_id(id)
- except:
- raise exceptions.ObjectAttributeInvalidException()
+ decoded_job_id = trans.security.decode_id( id )
+ except Exception:
+ raise exceptions.MalformedId()
query = trans.sa_session.query( trans.app.model.Job ).filter(
trans.app.model.Job.user == trans.user,
trans.app.model.Job.id == decoded_job_id
@@ -102,7 +155,7 @@
job = query.first()
if job is None:
raise exceptions.ObjectNotFound()
- return self.encode_all_ids( trans, job.to_dict( 'element' ), True )
+ return job
@expose_api
def create( self, trans, payload, **kwd ):
diff -r 74b6e23ed7882f3c091d2b66ce85025241372017 -r 8e6cda4c1b3d1685a8ce07412dc542f34eb8b44b lib/galaxy/webapps/galaxy/api/provenance.py
--- a/lib/galaxy/webapps/galaxy/api/provenance.py
+++ b/lib/galaxy/webapps/galaxy/api/provenance.py
@@ -46,6 +46,7 @@
return {
"id": trans.security.encode_id(item.id),
"uuid": ( lambda uuid: str( uuid ) if uuid else None )( item.dataset.uuid),
+ "job_id": trans.security.encode_id( job.id ),
"tool_id": job.tool_id,
"parameters": self._get_job_record(trans, job, follow),
"stderr": job.stderr,
diff -r 74b6e23ed7882f3c091d2b66ce85025241372017 -r 8e6cda4c1b3d1685a8ce07412dc542f34eb8b44b lib/galaxy/webapps/galaxy/buildapp.py
--- a/lib/galaxy/webapps/galaxy/buildapp.py
+++ b/lib/galaxy/webapps/galaxy/buildapp.py
@@ -281,6 +281,8 @@
'jobs',
path_prefix='/api' )
webapp.mapper.connect( 'job_search', '/api/jobs/search', controller='jobs', action='search', conditions=dict( method=['POST'] ) )
+ webapp.mapper.connect( 'job_inputs', '/api/jobs/{id}/inputs', controller='jobs', action='inputs', conditions=dict( method=['GET'] ) )
+ webapp.mapper.connect( 'job_outputs', '/api/jobs/{id}/outputs', controller='jobs', action='outputs', conditions=dict( method=['GET'] ) )
# Job files controllers. Only for consumption by remote job runners.
webapp.mapper.resource( 'file',
diff -r 74b6e23ed7882f3c091d2b66ce85025241372017 -r 8e6cda4c1b3d1685a8ce07412dc542f34eb8b44b test/base/interactor.py
--- a/test/base/interactor.py
+++ b/test/base/interactor.py
@@ -44,16 +44,44 @@
self.api_key = self.__get_user_key( twill_test_case.user_api_key, twill_test_case.master_api_key, test_user=test_user )
self.uploads = {}
- def verify_output( self, history_id, output_data, outfile, attributes, shed_tool_id, maxseconds ):
+ def verify_output( self, history_id, output_data, output_testdef, shed_tool_id, maxseconds ):
+ outfile = output_testdef.outfile
+ attributes = output_testdef.attributes
+ name = output_testdef.name
self.wait_for_history( history_id, maxseconds )
hid = self.__output_id( output_data )
fetcher = self.__dataset_fetcher( history_id )
## TODO: Twill version verifys dataset is 'ok' in here.
self.twill_test_case.verify_hid( outfile, hda_id=hid, attributes=attributes, dataset_fetcher=fetcher, shed_tool_id=shed_tool_id )
+
+ primary_datasets = attributes.get( 'primary_datasets', {} )
+ if primary_datasets:
+ job_id = self._dataset_provenance( history_id, hid )[ "job_id" ]
+ outputs = self._get( "jobs/%s/outputs" % ( job_id ) ).json()
+
+ for designation, ( primary_outfile, primary_attributes ) in primary_datasets.iteritems():
+ primary_output = None
+ for output in outputs:
+ if output[ "name" ] == '__new_primary_file_%s|%s__' % ( name, designation ):
+ primary_output = output
+ break
+
+ if not primary_output:
+ msg_template = "Failed to find primary dataset with designation [%s] for output with name [%s]"
+ msg_args = ( designation, name )
+ raise Exception( msg_template % msg_args )
+
+ primary_hda_id = primary_output[ "dataset" ][ "id" ]
+ self.twill_test_case.verify_hid( primary_outfile, hda_id=primary_hda_id, attributes=primary_attributes, dataset_fetcher=fetcher, shed_tool_id=shed_tool_id )
+ self._verify_metadata( history_id, primary_hda_id, primary_attributes )
+
+ self._verify_metadata( history_id, hid, attributes )
+
+ def _verify_metadata( self, history_id, hid, attributes ):
metadata = attributes.get( 'metadata', {} ).copy()
for key, value in metadata.copy().iteritems():
new_key = "metadata_%s" % key
- metadata[ new_key ] = metadata[ key ]
+ metadata[ new_key ] = metadata[ key ]
del metadata[ key ]
expected_file_type = attributes.get( 'ftype', None )
if expected_file_type:
@@ -319,7 +347,10 @@
def __init__( self, twill_test_case ):
self.twill_test_case = twill_test_case
- def verify_output( self, history, output_data, outfile, attributes, shed_tool_id, maxseconds ):
+ def verify_output( self, history, output_data, output_testdef, shed_tool_id, maxseconds ):
+ outfile = output_testdef.outfile
+ attributes = output_testdef.attributes
+
hid = output_data.get( 'hid' )
self.twill_test_case.verify_dataset_correctness( outfile, hid=hid, attributes=attributes, shed_tool_id=shed_tool_id, maxseconds=maxseconds )
diff -r 74b6e23ed7882f3c091d2b66ce85025241372017 -r 8e6cda4c1b3d1685a8ce07412dc542f34eb8b44b test/functional/test_toolbox.py
--- a/test/functional/test_toolbox.py
+++ b/test/functional/test_toolbox.py
@@ -3,6 +3,7 @@
from base.twilltestcase import TwillTestCase
from base.interactor import build_interactor, stage_data_in_history
from galaxy.tools import DataManagerTool
+from galaxy.util import bunch
import logging
log = logging.getLogger( __name__ )
@@ -52,6 +53,7 @@
for output_index, output_tuple in enumerate(testdef.outputs):
# Get the correct hid
name, outfile, attributes = output_tuple
+ output_testdef = bunch.Bunch( name=name, outfile=outfile, attributes=attributes )
try:
output_data = data_list[ name ]
except (TypeError, KeyError):
@@ -64,7 +66,7 @@
output_data = data_list[ len(data_list) - len(testdef.outputs) + output_index ]
self.assertTrue( output_data is not None )
try:
- galaxy_interactor.verify_output( history, output_data, outfile, attributes=attributes, shed_tool_id=shed_tool_id, maxseconds=maxseconds )
+ galaxy_interactor.verify_output( history, output_data, output_testdef=output_testdef, shed_tool_id=shed_tool_id, maxseconds=maxseconds )
except Exception:
for stream in ['stdout', 'stderr']:
stream_output = galaxy_interactor.get_job_stream( history, output_data, stream=stream )
diff -r 74b6e23ed7882f3c091d2b66ce85025241372017 -r 8e6cda4c1b3d1685a8ce07412dc542f34eb8b44b test/functional/tools/multi_output.xml
--- a/test/functional/tools/multi_output.xml
+++ b/test/functional/tools/multi_output.xml
@@ -1,7 +1,7 @@
<tool id="multi_output" name="Multi_Output" description="multi_output" force_history_refresh="True" version="0.1.0"><command>
echo "Hello" > $report;
- echo "World" > '${__new_file_path__}/primary_${report.id}_moo_visible_?'
+ echo "World Contents" > '${__new_file_path__}/primary_${report.id}_world_visible_?'
</command><inputs><param name="input" type="integer" value="7" />
@@ -16,6 +16,11 @@
<assert_contents><has_line line="Hello" /></assert_contents>
+ <discovered_dataset designation="world">
+ <assert_contents>
+ <has_line line="World Contents" />
+ </assert_contents>
+ </discovered_dataset></output></test></tests>
diff -r 74b6e23ed7882f3c091d2b66ce85025241372017 -r 8e6cda4c1b3d1685a8ce07412dc542f34eb8b44b test/functional/tools/multi_output_configured.xml
--- /dev/null
+++ b/test/functional/tools/multi_output_configured.xml
@@ -0,0 +1,43 @@
+<tool id="multi_output_configured" name="Multi_Output_Configured" description="multi_output_configured" force_history_refresh="True" version="0.1.0">
+ <command>
+ echo "Hello" > $report;
+ mkdir subdir1;
+ echo "This" > subdir1/this.txt;
+ echo "That" > subdir1/that.txt;
+ mkdir subdir2;
+ echo "1" > subdir2/CUSTOM_1.txt;
+ echo "2" > subdir2/CUSTOM_2.tabular;
+ echo "3" > subdir2/CUSTOM_3.txt;
+ </command>
+ <inputs>
+ <param name="input" type="integer" value="7" />
+ </inputs>
+ <outputs>
+ <data format="txt" name="report">
+ <discover_datasets pattern="__designation_and_ext__" directory="subdir1" />
+ <discover_datasets pattern="CUSTOM_(?P<designation>.+)\.(?P<ext>.+)" directory="subdir2" />
+ </data>
+ </outputs>
+ <tests>
+ <test>
+ <param name="input" value="7" />
+ <output name="report">
+ <assert_contents>
+ <has_line line="Hello" />
+ </assert_contents>
+ <discovered_dataset designation="this" ftype="txt">
+ <assert_contents><has_line line="This" /></assert_contents>
+ </discovered_dataset>
+ <discovered_dataset designation="that" ftype="txt">
+ <assert_contents><has_line line="That" /></assert_contents>
+ </discovered_dataset>
+ <discovered_dataset designation="1" ftype="txt">
+ <assert_contents><has_line line="1" /></assert_contents>
+ </discovered_dataset>
+ <discovered_dataset designation="2" ftype="tabular">
+ <assert_contents><has_line line="2" /></assert_contents>
+ </discovered_dataset>
+ </output>
+ </test>
+ </tests>
+</tool>
diff -r 74b6e23ed7882f3c091d2b66ce85025241372017 -r 8e6cda4c1b3d1685a8ce07412dc542f34eb8b44b test/functional/tools/samples_tool_conf.xml
--- a/test/functional/tools/samples_tool_conf.xml
+++ b/test/functional/tools/samples_tool_conf.xml
@@ -8,6 +8,7 @@
<tool file="multi_page.xml"/><tool file="multi_select.xml" /><tool file="multi_output.xml" />
+ <tool file="multi_output_configured.xml" /><tool file="composite_output.xml" /><tool file="metadata.xml" /><tool file="output_order.xml" />
diff -r 74b6e23ed7882f3c091d2b66ce85025241372017 -r 8e6cda4c1b3d1685a8ce07412dc542f34eb8b44b test/functional/workflow.py
--- a/test/functional/workflow.py
+++ b/test/functional/workflow.py
@@ -4,6 +4,7 @@
from base.interactor import GalaxyInteractorApi, stage_data_in_history
from galaxy.util import parse_xml
+from galaxy.util import bunch
from galaxy.tools.test import parse_param_elem, require_file, test_data_iter, parse_output_elems
from json import load, dumps
@@ -66,10 +67,11 @@
for expected_output_def in workflow_test.outputs:
# Get the correct hid
name, outfile, attributes = expected_output_def
+ output_testdef = bunch.Bunch( name=name, outfile=outfile, attributes=attributes )
output_data = outputs[ int( name ) ]
try:
- galaxy_interactor.verify_output( test_history, output_data, outfile, attributes=attributes, shed_tool_id=None, maxseconds=maxseconds )
+ galaxy_interactor.verify_output( test_history, output_data, output_testdef=output_testdef, shed_tool_id=None, maxseconds=maxseconds )
except Exception:
for stream in ['stdout', 'stderr']:
stream_output = galaxy_interactor.get_job_stream( test_history, output_data, stream=stream )
diff -r 74b6e23ed7882f3c091d2b66ce85025241372017 -r 8e6cda4c1b3d1685a8ce07412dc542f34eb8b44b test/unit/tools/test_collect_primary_datasets.py
--- a/test/unit/tools/test_collect_primary_datasets.py
+++ b/test/unit/tools/test_collect_primary_datasets.py
@@ -5,6 +5,8 @@
import tools_support
from galaxy import model
+from galaxy import util
+from galaxy.tools.parameters import output_collect
DEFAULT_TOOL_OUTPUT = "out1"
DEFAULT_EXTRA_NAME = "test1"
@@ -114,6 +116,75 @@
extra_job_assoc = filter( lambda job_assoc: job_assoc.name.startswith( "__" ), self.job.output_datasets )[ 0 ]
assert extra_job_assoc.name == "__new_primary_file_out1|test1__"
+ def test_pattern_override_designation( self ):
+ self._replace_output_collectors( '''<output><discover_datasets pattern="__designation__" directory="subdir" ext="txt" /></output>''' )
+ self._setup_extra_file( subdir="subdir", filename="foo.txt" )
+ primary_outputs = self._collect( )[ DEFAULT_TOOL_OUTPUT ]
+ assert len( primary_outputs ) == 1
+ created_hda = primary_outputs.values()[ 0 ]
+ assert "foo.txt" in created_hda.name
+ assert created_hda.ext == "txt"
+
+ def test_name_and_ext_pattern( self ):
+ self._replace_output_collectors( '''<output><discover_datasets pattern="__name_and_ext__" directory="subdir" /></output>''' )
+ self._setup_extra_file( subdir="subdir", filename="foo1.txt" )
+ self._setup_extra_file( subdir="subdir", filename="foo2.tabular" )
+ primary_outputs = self._collect( )[ DEFAULT_TOOL_OUTPUT ]
+ assert len( primary_outputs ) == 2
+ assert primary_outputs[ "foo1" ].ext == "txt"
+ assert primary_outputs[ "foo2" ].ext == "tabular"
+
+ def test_custom_pattern( self ):
+ # Hypothetical oral metagenomic classifier that populates a directory
+ # of files based on name and genome. Use custom regex pattern to grab
+ # and classify these files.
+ self._replace_output_collectors( '''<output><discover_datasets pattern="(?P<designation>.*)__(?P<dbkey>.*).fasta" directory="genome_breakdown" ext="fasta" /></output>''' )
+ self._setup_extra_file( subdir="genome_breakdown", filename="samp1__hg19.fasta" )
+ self._setup_extra_file( subdir="genome_breakdown", filename="samp2__lactLact.fasta" )
+ self._setup_extra_file( subdir="genome_breakdown", filename="samp3__hg19.fasta" )
+ self._setup_extra_file( subdir="genome_breakdown", filename="samp4__lactPlan.fasta" )
+ self._setup_extra_file( subdir="genome_breakdown", filename="samp5__fusoNucl.fasta" )
+
+ # Put a file in directory we don't care about, just to make sure
+ # it doesn't get picked up by pattern.
+ self._setup_extra_file( subdir="genome_breakdown", filename="overview.txt" )
+
+ primary_outputs = self._collect( )[ DEFAULT_TOOL_OUTPUT ]
+ assert len( primary_outputs ) == 5
+ genomes = dict( samp1="hg19", samp2="lactLact", samp3="hg19", samp4="lactPlan", samp5="fusoNucl" )
+ for key, hda in primary_outputs.iteritems():
+ assert hda.dbkey == genomes[ key ]
+
+ def test_name_versus_designation( self ):
+ """ This test demonstrates the difference between name and desgination
+ in grouping patterns and named patterns such as __designation__,
+ __name__, __designation_and_ext__, and __name_and_ext__.
+ """
+ self._replace_output_collectors( '''<output>
+ <discover_datasets pattern="__name_and_ext__" directory="subdir_for_name_discovery" />
+ <discover_datasets pattern="__designation_and_ext__" directory="subdir_for_designation_discovery" />
+ </output>''')
+ self._setup_extra_file( subdir="subdir_for_name_discovery", filename="example1.txt" )
+ self._setup_extra_file( subdir="subdir_for_designation_discovery", filename="example2.txt" )
+ primary_outputs = self._collect( )[ DEFAULT_TOOL_OUTPUT ]
+ name_output = primary_outputs[ "example1" ]
+ designation_output = primary_outputs[ "example2" ]
+ # While name is also used for designation, designation is not the name -
+ # it is used in the calculation of the name however...
+ assert name_output.name == "example1"
+ assert designation_output.name == "%s (%s)" % ( self.hda.name, "example2" )
+
+ def test_cannot_read_files_outside_job_directory( self ):
+ self._replace_output_collectors( '''<output>
+ <discover_datasets pattern="__name_and_ext__" directory="../../secrets" />
+ </output>''')
+ exception_thrown = False
+ try:
+ self._collect( )
+ except Exception:
+ exception_thrown = True
+ assert exception_thrown
+
def _collect_default_extra( self, **kwargs ):
return self._collect( **kwargs )[ DEFAULT_TOOL_OUTPUT ][ DEFAULT_EXTRA_NAME ]
@@ -122,6 +193,12 @@
job_working_directory = self.test_directory
return self.tool.collect_primary_datasets( self.outputs, job_working_directory )
+ def _replace_output_collectors( self, xml_str ):
+ # Rewrite tool as if it had been created with output containing
+ # supplied dataset_collector elem.
+ elem = util.parse_xml_string( xml_str )
+ self.tool.outputs[ DEFAULT_TOOL_OUTPUT ].dataset_collectors = output_collect.dataset_collectors_from_elem( elem )
+
def _append_job_json( self, object, output_path=None, line_type="new_primary_dataset" ):
object[ "type" ] = line_type
if output_path:
@@ -133,7 +210,8 @@
def _setup_extra_file( self, **kwargs ):
path = kwargs.get( "path", None )
- if not path:
+ filename = kwargs.get( "filename", None )
+ if not path and not filename:
name = kwargs.get( "name", DEFAULT_EXTRA_NAME )
visible = kwargs.get( "visible", "visible" )
ext = kwargs.get( "ext", "data" )
@@ -142,6 +220,13 @@
path = os.path.join( directory, "primary_%s_%s_%s_%s" % template_args )
if "dbkey" in kwargs:
path = "%s_%s" % ( path, kwargs[ "dbkey" ] )
+ if not path:
+ assert filename
+ subdir = kwargs.get( "subdir", "." )
+ path = os.path.join( self.test_directory, subdir, filename )
+ directory = os.path.dirname( path )
+ if not os.path.exists( directory ):
+ os.makedirs( directory )
contents = kwargs.get( "contents", "test contents" )
open( path, "w" ).write( contents )
return path
Repository URL: https://bitbucket.org/galaxy/galaxy-central/
--
This is a commit notification from bitbucket.org. You are receiving
this because you have the service enabled, addressing the recipient of
this email.
7 new commits in galaxy-central:
https://bitbucket.org/galaxy/galaxy-central/commits/92c23f534d2f/
Changeset: 92c23f534d2f
User: jmchilton
Date: 2014-03-29 23:11:17
Summary: Include job id in dataset provenance API.
This makes it possible to link provenance records with the information produced by the jobs API.
Affected #: 1 file
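A minimal client sketch of reading the new job_id field, assuming a local Galaxy at http://localhost:8080, a valid API key, the requests library, and placeholder history/dataset ids; the provenance route used here is also an assumption, since it is not shown in this diff:

    import requests

    GALAXY_URL = "http://localhost:8080"      # assumed local Galaxy instance
    API_KEY = "your-api-key"                  # assumed API key
    history_id = "f2db41e1fa331b3e"           # placeholder encoded history id
    hda_id = "ebfb8f50c6abde6d"               # placeholder encoded dataset id
    params = {"key": API_KEY}

    # Assumed provenance route; job_id is the field added by this changeset.
    prov = requests.get("%s/api/histories/%s/contents/%s/provenance"
                        % (GALAXY_URL, history_id, hda_id), params=params).json()
    job = requests.get("%s/api/jobs/%s" % (GALAXY_URL, prov["job_id"]),
                       params=params).json()
    print(job["tool_id"])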
diff -r f83f4d9965283e8dc8640698262da52080081496 -r 92c23f534d2ff322476e988c998d14672d558e31 lib/galaxy/webapps/galaxy/api/provenance.py
--- a/lib/galaxy/webapps/galaxy/api/provenance.py
+++ b/lib/galaxy/webapps/galaxy/api/provenance.py
@@ -46,6 +46,7 @@
return {
"id": trans.security.encode_id(item.id),
"uuid": ( lambda uuid: str( uuid ) if uuid else None )( item.dataset.uuid),
+ "job_id": trans.security.encode_id( job.id ),
"tool_id": job.tool_id,
"parameters": self._get_job_record(trans, job, follow),
"stderr": job.stderr,
https://bitbucket.org/galaxy/galaxy-central/commits/839a099d99a3/
Changeset: 839a099d99a3
User: jmchilton
Date: 2014-03-29 23:11:17
Summary: Allow fetching input/output dataset association information through jobs API.
This will allow the tool test framework to discover runtime-discovered datasets in subsequent changesets.
Affected #: 2 files
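A minimal client sketch of the two new endpoints, assuming a local Galaxy, a valid API key, the requests library, and a placeholder encoded job id; the response shape follows __dictify_association in the diff below:

    import requests

    GALAXY_URL = "http://localhost:8080"   # assumed local Galaxy instance
    API_KEY = "your-api-key"               # assumed API key
    job_id = "ebfb8f50c6abde6d"            # placeholder encoded job id
    params = {"key": API_KEY}

    outputs = requests.get("%s/api/jobs/%s/outputs" % (GALAXY_URL, job_id),
                           params=params).json()
    # Each entry: {"name": <association name>, "dataset": {"src": "hda" or "ldda", "id": <encoded id>}}
    for assoc in outputs:
        dataset = assoc["dataset"]
        if dataset:  # dataset may be None when the association has no dataset attached
            print("%s -> %s %s" % (assoc["name"], dataset["src"], dataset["id"]))

    inputs = requests.get("%s/api/jobs/%s/inputs" % (GALAXY_URL, job_id),
                          params=params).json()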
diff -r 92c23f534d2ff322476e988c998d14672d558e31 -r 839a099d99a3d1f35c35c76740087655eacfe46f lib/galaxy/webapps/galaxy/api/jobs.py
--- a/lib/galaxy/webapps/galaxy/api/jobs.py
+++ b/lib/galaxy/webapps/galaxy/api/jobs.py
@@ -13,6 +13,7 @@
from galaxy.web.base.controller import UsesLibraryMixinItems
from galaxy import exceptions
from galaxy import util
+from galaxy import model
import logging
log = logging.getLogger( __name__ )
@@ -91,10 +92,62 @@
:rtype: dictionary
:returns: dictionary containing full description of job data
"""
+ job = self.__get_job( trans, id )
+ return self.encode_all_ids( trans, job.to_dict( 'element' ), True )
+
+ @expose_api
+ def inputs( self, trans, id, **kwd ):
+ """
+ show( trans, id )
+ * GET /api/jobs/{job_id}/inputs
+ returns input datasets created by job
+
+ :type id: string
+ :param id: Encoded job id
+
+ :rtype: dictionary
+ :returns: dictionary containing input dataset associations
+ """
+ job = self.__get_job( trans, id )
+ return self.__dictify_associations( trans, job.input_datasets, job.input_library_datasets )
+
+ @expose_api
+ def outputs( self, trans, id, **kwd ):
+ """
+ show( trans, id )
+ * GET /api/jobs/{job_id}/outputs
+ returns output datasets created by job
+
+ :type id: string
+ :param id: Encoded job id
+
+ :rtype: dictionary
+ :returns: dictionary containing output dataset associations
+ """
+ job = self.__get_job( trans, id )
+ return self.__dictify_associations( trans, job.output_datasets, job.output_library_datasets )
+
+ def __dictify_associations( self, trans, *association_lists ):
+ rval = []
+ for association_list in association_lists:
+ rval.extend( map( lambda a: self.__dictify_association( trans, a ), association_list ) )
+ return rval
+
+ def __dictify_association( self, trans, job_dataset_association ):
+ dataset_dict = None
+ dataset = job_dataset_association.dataset
+ if dataset:
+ if isinstance( dataset, model.HistoryDatasetAssociation ):
+ dataset_dict = dict( src="hda", id=trans.security.encode_id( dataset.id ) )
+ else:
+ dataset_dict = dict( src="ldda", id=trans.security.encode_id( dataset.id ) )
+ return dict( name=job_dataset_association.name, dataset=dataset_dict )
+
+ def __get_job( self, trans, id ):
try:
- decoded_job_id = trans.security.decode_id(id)
- except:
- raise exceptions.ObjectAttributeInvalidException()
+ decoded_job_id = trans.security.decode_id( id )
+ except Exception:
+ raise exceptions.MalformedId()
query = trans.sa_session.query( trans.app.model.Job ).filter(
trans.app.model.Job.user == trans.user,
trans.app.model.Job.id == decoded_job_id
@@ -102,7 +155,7 @@
job = query.first()
if job is None:
raise exceptions.ObjectNotFound()
- return self.encode_all_ids( trans, job.to_dict( 'element' ), True )
+ return job
@expose_api
def create( self, trans, payload, **kwd ):
diff -r 92c23f534d2ff322476e988c998d14672d558e31 -r 839a099d99a3d1f35c35c76740087655eacfe46f lib/galaxy/webapps/galaxy/buildapp.py
--- a/lib/galaxy/webapps/galaxy/buildapp.py
+++ b/lib/galaxy/webapps/galaxy/buildapp.py
@@ -261,6 +261,8 @@
'jobs',
path_prefix='/api' )
webapp.mapper.connect( 'job_search', '/api/jobs/search', controller='jobs', action='search', conditions=dict( method=['POST'] ) )
+ webapp.mapper.connect( 'job_inputs', '/api/jobs/{id}/inputs', controller='jobs', action='inputs', conditions=dict( method=['GET'] ) )
+ webapp.mapper.connect( 'job_outputs', '/api/jobs/{id}/outputs', controller='jobs', action='outputs', conditions=dict( method=['GET'] ) )
# Job files controllers. Only for consumption by remote job runners.
webapp.mapper.resource( 'file',
https://bitbucket.org/galaxy/galaxy-central/commits/0ce5e9eb984b/
Changeset: 0ce5e9eb984b
User: jmchilton
Date: 2014-03-29 23:11:17
Summary: Allow tools to test runtime discovered datasets.
Affected #: 5 files
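A rough sketch of the structure the new test-parsing code builds for the multi_output test below; the literal values are illustrative placeholders, and real attribute dicts carry additional keys beyond the ones shown:

    # Attributes for the 'report' output after parsing its <discovered_dataset> child;
    # primary_datasets maps designation -> (file, attributes), as returned by
    # __parse_test_attributes.
    attributes = {
        "assert_list": ["<parsed has_line 'Hello' assertion>"],   # placeholder
        "extra_files": [],
        "metadata": {},
        "primary_datasets": {
            "world": (
                None,   # no 'file' attribute on the discovered_dataset element
                {
                    "assert_list": ["<parsed has_line 'World Contents' assertion>"],  # placeholder
                    "extra_files": [],
                    "metadata": {},
                },
            ),
        },
    }
    # The API interactor then locates the created HDA via the job output named
    # '__new_primary_file_%s|%s__' % ("report", "world") and verifies it like any other output.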
diff -r 839a099d99a3d1f35c35c76740087655eacfe46f -r 0ce5e9eb984bc70f1f1c338a3f1d21ed9d6ddee4 lib/galaxy/tools/test.py
--- a/lib/galaxy/tools/test.py
+++ b/lib/galaxy/tools/test.py
@@ -297,6 +297,19 @@
if name is None:
raise Exception( "Test output does not have a 'name'" )
+ file, attributes = __parse_test_attributes( output_elem, attrib )
+ primary_datasets = {}
+ for primary_elem in ( output_elem.findall( "discovered_dataset" ) or [] ):
+ primary_attrib = dict( primary_elem.attrib )
+ designation = primary_attrib.pop( 'designation', None )
+ if designation is None:
+ raise Exception( "Test primary dataset does not have a 'designation'" )
+ primary_datasets[ designation ] = __parse_test_attributes( primary_elem, primary_attrib )
+ attributes[ "primary_datasets" ] = primary_datasets
+ return name, file, attributes
+
+
+def __parse_test_attributes( output_elem, attrib ):
assert_list = __parse_assert_list( output_elem )
file = attrib.pop( 'file', None )
# File no longer required if an list of assertions was present.
@@ -321,7 +334,7 @@
attributes['assert_list'] = assert_list
attributes['extra_files'] = extra_files
attributes['metadata'] = metadata
- return name, file, attributes
+ return file, attributes
def __parse_assert_list( output_elem ):
diff -r 839a099d99a3d1f35c35c76740087655eacfe46f -r 0ce5e9eb984bc70f1f1c338a3f1d21ed9d6ddee4 test/base/interactor.py
--- a/test/base/interactor.py
+++ b/test/base/interactor.py
@@ -41,16 +41,44 @@
self.api_key = self.__get_user_key( twill_test_case.user_api_key, twill_test_case.master_api_key, test_user=test_user )
self.uploads = {}
- def verify_output( self, history_id, output_data, outfile, attributes, shed_tool_id, maxseconds ):
+ def verify_output( self, history_id, output_data, output_testdef, shed_tool_id, maxseconds ):
+ outfile = output_testdef.outfile
+ attributes = output_testdef.attributes
+ name = output_testdef.name
self.wait_for_history( history_id, maxseconds )
hid = self.__output_id( output_data )
fetcher = self.__dataset_fetcher( history_id )
## TODO: Twill version verifys dataset is 'ok' in here.
self.twill_test_case.verify_hid( outfile, hda_id=hid, attributes=attributes, dataset_fetcher=fetcher, shed_tool_id=shed_tool_id )
+
+ primary_datasets = attributes.get( 'primary_datasets', {} )
+ if primary_datasets:
+ job_id = self._dataset_provenance( history_id, hid )[ "job_id" ]
+ outputs = self._get( "jobs/%s/outputs" % ( job_id ) ).json()
+
+ for designation, ( primary_outfile, primary_attributes ) in primary_datasets.iteritems():
+ primary_output = None
+ for output in outputs:
+ if output[ "name" ] == '__new_primary_file_%s|%s__' % ( name, designation ):
+ primary_output = output
+ break
+
+ if not primary_output:
+ msg_template = "Failed to find primary dataset with designation [%s] for output with name [%s]"
+ msg_args = ( designation, name )
+ raise Exception( msg_template % msg_args )
+
+ primary_hda_id = primary_output[ "dataset" ][ "id" ]
+ self.twill_test_case.verify_hid( primary_outfile, hda_id=primary_hda_id, attributes=primary_attributes, dataset_fetcher=fetcher, shed_tool_id=shed_tool_id )
+ self._verify_metadata( history_id, primary_hda_id, primary_attributes )
+
+ self._verify_metadata( history_id, hid, attributes )
+
+ def _verify_metadata( self, history_id, hid, attributes ):
metadata = attributes.get( 'metadata', {} ).copy()
for key, value in metadata.copy().iteritems():
new_key = "metadata_%s" % key
- metadata[ new_key ] = metadata[ key ]
+ metadata[ new_key ] = metadata[ key ]
del metadata[ key ]
expected_file_type = attributes.get( 'ftype', None )
if expected_file_type:
@@ -316,7 +344,10 @@
def __init__( self, twill_test_case ):
self.twill_test_case = twill_test_case
- def verify_output( self, history, output_data, outfile, attributes, shed_tool_id, maxseconds ):
+ def verify_output( self, history, output_data, output_testdef, shed_tool_id, maxseconds ):
+ outfile = output_testdef.outfile
+ attributes = output_testdef.attributes
+
hid = output_data.get( 'hid' )
self.twill_test_case.verify_dataset_correctness( outfile, hid=hid, attributes=attributes, shed_tool_id=shed_tool_id, maxseconds=maxseconds )
diff -r 839a099d99a3d1f35c35c76740087655eacfe46f -r 0ce5e9eb984bc70f1f1c338a3f1d21ed9d6ddee4 test/functional/test_toolbox.py
--- a/test/functional/test_toolbox.py
+++ b/test/functional/test_toolbox.py
@@ -3,6 +3,7 @@
from base.twilltestcase import TwillTestCase
from base.interactor import build_interactor, stage_data_in_history
from galaxy.tools import DataManagerTool
+from galaxy.util import bunch
import logging
log = logging.getLogger( __name__ )
@@ -52,6 +53,7 @@
for output_index, output_tuple in enumerate(testdef.outputs):
# Get the correct hid
name, outfile, attributes = output_tuple
+ output_testdef = bunch.Bunch( name=name, outfile=outfile, attributes=attributes )
try:
output_data = data_list[ name ]
except (TypeError, KeyError):
@@ -64,7 +66,7 @@
output_data = data_list[ len(data_list) - len(testdef.outputs) + output_index ]
self.assertTrue( output_data is not None )
try:
- galaxy_interactor.verify_output( history, output_data, outfile, attributes=attributes, shed_tool_id=shed_tool_id, maxseconds=maxseconds )
+ galaxy_interactor.verify_output( history, output_data, output_testdef=output_testdef, shed_tool_id=shed_tool_id, maxseconds=maxseconds )
except Exception:
for stream in ['stdout', 'stderr']:
stream_output = galaxy_interactor.get_job_stream( history, output_data, stream=stream )
diff -r 839a099d99a3d1f35c35c76740087655eacfe46f -r 0ce5e9eb984bc70f1f1c338a3f1d21ed9d6ddee4 test/functional/tools/multi_output.xml
--- a/test/functional/tools/multi_output.xml
+++ b/test/functional/tools/multi_output.xml
@@ -1,7 +1,7 @@
<tool id="multi_output" name="Multi_Output" description="multi_output" force_history_refresh="True" version="0.1.0"><command>
echo "Hello" > $report;
- echo "World" > '${__new_file_path__}/primary_${report.id}_moo_visible_?'
+ echo "World Contents" > '${__new_file_path__}/primary_${report.id}_world_visible_?'
</command><inputs><param name="input" type="integer" value="7" />
@@ -16,6 +16,11 @@
<assert_contents><has_line line="Hello" /></assert_contents>
+ <discovered_dataset designation="world">
+ <assert_contents>
+ <has_line line="World Contents" />
+ </assert_contents>
+ </discovered_dataset></output></test></tests>
diff -r 839a099d99a3d1f35c35c76740087655eacfe46f -r 0ce5e9eb984bc70f1f1c338a3f1d21ed9d6ddee4 test/functional/workflow.py
--- a/test/functional/workflow.py
+++ b/test/functional/workflow.py
@@ -4,6 +4,7 @@
from base.interactor import GalaxyInteractorApi, stage_data_in_history
from galaxy.util import parse_xml
+from galaxy.util import bunch
from galaxy.tools.test import parse_param_elem, require_file, test_data_iter, parse_output_elems
from json import load, dumps
@@ -66,10 +67,11 @@
for expected_output_def in workflow_test.outputs:
# Get the correct hid
name, outfile, attributes = expected_output_def
+ output_testdef = bunch.Bunch( name=name, outfile=outfile, attributes=attributes )
output_data = outputs[ int( name ) ]
try:
- galaxy_interactor.verify_output( test_history, output_data, outfile, attributes=attributes, shed_tool_id=None, maxseconds=maxseconds )
+ galaxy_interactor.verify_output( test_history, output_data, output_testdef=output_testdef, shed_tool_id=None, maxseconds=maxseconds )
except Exception:
for stream in ['stdout', 'stderr']:
stream_output = galaxy_interactor.get_job_stream( test_history, output_data, stream=stream )
https://bitbucket.org/galaxy/galaxy-central/commits/681f2cc6dcfa/
Changeset: 681f2cc6dcfa
User: jmchilton
Date: 2014-03-29 23:11:17
Summary: Refactor Tool.collect_primary_datasets into its own module.
Want to break it down into smaller pieces and make the process more configurable. This will be easier if it lives outside of the monolithic tool class.
Affected #: 2 files
diff -r 0ce5e9eb984bc70f1f1c338a3f1d21ed9d6ddee4 -r 681f2cc6dcfa2a6674875d10fa8c7699be63dba4 lib/galaxy/tools/__init__.py
--- a/lib/galaxy/tools/__init__.py
+++ b/lib/galaxy/tools/__init__.py
@@ -42,6 +42,7 @@
from galaxy.tools.deps import build_dependency_manager
from galaxy.tools.deps.requirements import parse_requirements_from_xml
from galaxy.tools.parameters import check_param, params_from_strings, params_to_strings
+from galaxy.tools.parameters import output_collect
from galaxy.tools.parameters.basic import (BaseURLToolParameter,
DataToolParameter, HiddenToolParameter, LibraryDatasetToolParameter,
SelectToolParameter, ToolParameter, UnvalidatedValue,
@@ -2757,94 +2758,7 @@
Find any additional datasets generated by a tool and attach (for
cases where number of outputs is not known in advance).
"""
- new_primary_datasets = {}
- try:
- json_file = open( os.path.join( job_working_directory, jobs.TOOL_PROVIDED_JOB_METADATA_FILE ), 'r' )
- for line in json_file:
- line = json.loads( line )
- if line.get( 'type' ) == 'new_primary_dataset':
- new_primary_datasets[ os.path.split( line.get( 'filename' ) )[-1] ] = line
- except Exception:
- # This should not be considered an error or warning condition, this file is optional
- pass
- # Loop through output file names, looking for generated primary
- # datasets in form of:
- # 'primary_associatedWithDatasetID_designation_visibility_extension(_DBKEY)'
- primary_datasets = {}
- for name, outdata in output.items():
- filenames = []
- if 'new_file_path' in self.app.config.collect_outputs_from:
- filenames.extend( glob.glob(os.path.join(self.app.config.new_file_path, "primary_%i_*" % outdata.id) ) )
- if 'job_working_directory' in self.app.config.collect_outputs_from:
- filenames.extend( glob.glob(os.path.join(job_working_directory, "primary_%i_*" % outdata.id) ) )
- for filename in filenames:
- if not name in primary_datasets:
- primary_datasets[name] = {}
- fields = os.path.basename(filename).split("_")
- fields.pop(0)
- parent_id = int(fields.pop(0))
- designation = fields.pop(0)
- visible = fields.pop(0).lower()
- if visible == "visible":
- visible = True
- else:
- visible = False
- ext = fields.pop(0).lower()
- dbkey = outdata.dbkey
- if fields:
- dbkey = fields[ 0 ]
- # Create new primary dataset
- primary_data = self.app.model.HistoryDatasetAssociation( extension=ext,
- designation=designation,
- visible=visible,
- dbkey=dbkey,
- create_dataset=True,
- sa_session=self.sa_session )
- self.app.security_agent.copy_dataset_permissions( outdata.dataset, primary_data.dataset )
- self.sa_session.add( primary_data )
- self.sa_session.flush()
- # Move data from temp location to dataset location
- self.app.object_store.update_from_file(primary_data.dataset, file_name=filename, create=True)
- primary_data.set_size()
- primary_data.name = "%s (%s)" % ( outdata.name, designation )
- primary_data.info = outdata.info
- primary_data.init_meta( copy_from=outdata )
- primary_data.dbkey = dbkey
- # Associate new dataset with job
- job = None
- for assoc in outdata.creating_job_associations:
- job = assoc.job
- break
- if job:
- assoc = self.app.model.JobToOutputDatasetAssociation( '__new_primary_file_%s|%s__' % ( name, designation ), primary_data )
- assoc.job = job
- self.sa_session.add( assoc )
- self.sa_session.flush()
- primary_data.state = outdata.state
- #add tool/metadata provided information
- new_primary_datasets_attributes = new_primary_datasets.get( os.path.split( filename )[-1] )
- if new_primary_datasets_attributes:
- dataset_att_by_name = dict( ext='extension' )
- for att_set in [ 'name', 'info', 'ext', 'dbkey' ]:
- dataset_att_name = dataset_att_by_name.get( att_set, att_set )
- setattr( primary_data, dataset_att_name, new_primary_datasets_attributes.get( att_set, getattr( primary_data, dataset_att_name ) ) )
- primary_data.set_meta()
- primary_data.set_peek()
- self.sa_session.add( primary_data )
- self.sa_session.flush()
- outdata.history.add_dataset( primary_data )
- # Add dataset to return dict
- primary_datasets[name][designation] = primary_data
- # Need to update all associated output hdas, i.e. history was
- # shared with job running
- for dataset in outdata.dataset.history_associations:
- if outdata == dataset:
- continue
- new_data = primary_data.copy()
- dataset.history.add_dataset( new_data )
- self.sa_session.add( new_data )
- self.sa_session.flush()
- return primary_datasets
+ return output_collect.collect_primary_datatasets( self, output, job_working_directory )
def to_dict( self, trans, link_details=False, io_details=False ):
""" Returns dict of tool. """
diff -r 0ce5e9eb984bc70f1f1c338a3f1d21ed9d6ddee4 -r 681f2cc6dcfa2a6674875d10fa8c7699be63dba4 lib/galaxy/tools/parameters/output_collect.py
--- /dev/null
+++ b/lib/galaxy/tools/parameters/output_collect.py
@@ -0,0 +1,101 @@
+""" Code allowing tools to define extra files associated with an output datset.
+"""
+import os
+import glob
+import json
+
+
+from galaxy import jobs
+
+
+def collect_primary_datatasets( tool, output, job_working_directory ):
+ app = tool.app
+ sa_session = tool.sa_session
+ new_primary_datasets = {}
+ try:
+ json_file = open( os.path.join( job_working_directory, jobs.TOOL_PROVIDED_JOB_METADATA_FILE ), 'r' )
+ for line in json_file:
+ line = json.loads( line )
+ if line.get( 'type' ) == 'new_primary_dataset':
+ new_primary_datasets[ os.path.split( line.get( 'filename' ) )[-1] ] = line
+ except Exception:
+ # This should not be considered an error or warning condition, this file is optional
+ pass
+ # Loop through output file names, looking for generated primary
+ # datasets in form of:
+ # 'primary_associatedWithDatasetID_designation_visibility_extension(_DBKEY)'
+ primary_datasets = {}
+ for name, outdata in output.items():
+ filenames = []
+ if 'new_file_path' in app.config.collect_outputs_from:
+ filenames.extend( glob.glob(os.path.join(app.config.new_file_path, "primary_%i_*" % outdata.id) ) )
+ if 'job_working_directory' in app.config.collect_outputs_from:
+ filenames.extend( glob.glob(os.path.join(job_working_directory, "primary_%i_*" % outdata.id) ) )
+ for filename in filenames:
+ if not name in primary_datasets:
+ primary_datasets[name] = {}
+ fields = os.path.basename(filename).split("_")
+ fields.pop(0)
+ parent_id = int(fields.pop(0))
+ designation = fields.pop(0)
+ visible = fields.pop(0).lower()
+ if visible == "visible":
+ visible = True
+ else:
+ visible = False
+ ext = fields.pop(0).lower()
+ dbkey = outdata.dbkey
+ if fields:
+ dbkey = fields[ 0 ]
+ # Create new primary dataset
+ primary_data = app.model.HistoryDatasetAssociation( extension=ext,
+ designation=designation,
+ visible=visible,
+ dbkey=dbkey,
+ create_dataset=True,
+ sa_session=sa_session )
+ app.security_agent.copy_dataset_permissions( outdata.dataset, primary_data.dataset )
+ sa_session.add( primary_data )
+ sa_session.flush()
+ # Move data from temp location to dataset location
+ app.object_store.update_from_file(primary_data.dataset, file_name=filename, create=True)
+ primary_data.set_size()
+ primary_data.name = "%s (%s)" % ( outdata.name, designation )
+ primary_data.info = outdata.info
+ primary_data.init_meta( copy_from=outdata )
+ primary_data.dbkey = dbkey
+ # Associate new dataset with job
+ job = None
+ for assoc in outdata.creating_job_associations:
+ job = assoc.job
+ break
+ if job:
+ assoc = app.model.JobToOutputDatasetAssociation( '__new_primary_file_%s|%s__' % ( name, designation ), primary_data )
+ assoc.job = job
+ sa_session.add( assoc )
+ sa_session.flush()
+ primary_data.state = outdata.state
+ #add tool/metadata provided information
+ new_primary_datasets_attributes = new_primary_datasets.get( os.path.split( filename )[-1] )
+ if new_primary_datasets_attributes:
+ dataset_att_by_name = dict( ext='extension' )
+ for att_set in [ 'name', 'info', 'ext', 'dbkey' ]:
+ dataset_att_name = dataset_att_by_name.get( att_set, att_set )
+ setattr( primary_data, dataset_att_name, new_primary_datasets_attributes.get( att_set, getattr( primary_data, dataset_att_name ) ) )
+ primary_data.set_meta()
+ primary_data.set_peek()
+ sa_session.add( primary_data )
+ sa_session.flush()
+ outdata.history.add_dataset( primary_data )
+ # Add dataset to return dict
+ primary_datasets[name][designation] = primary_data
+ # Need to update all associated output hdas, i.e. history was
+ # shared with job running
+ for dataset in outdata.dataset.history_associations:
+ if outdata == dataset:
+ continue
+ new_data = primary_data.copy()
+ dataset.history.add_dataset( new_data )
+ sa_session.add( new_data )
+ sa_session.flush()
+ return primary_datasets
https://bitbucket.org/galaxy/galaxy-central/commits/7b8a02bb183b/
Changeset: 7b8a02bb183b
User: jmchilton
Date: 2014-03-29 23:11:17
Summary: Rework parsing of metadata from file names of runtime discovered datasets.
The newer method uses a regex with named groups instead of split() for a more extensible approach to parsing metadata fields while collecting primary datasets. This will make the parsing easier to override and configure in subsequent changesets.
Affected #: 1 file
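For illustration, a small sketch of what the named-group parsing yields for an invented filename matching the new default pattern:

    import re

    # Pattern introduced in this changeset.
    DEFAULT_EXTRA_FILENAME_PATTERN = re.compile(
        r"primary_(?P<id>\d+)_(?P<designation>[^_]+)_(?P<visible>[^_]+)_(?P<ext>[^_]+)(_(?P<dbkey>[^_]+))?"
    )

    fields_match = DEFAULT_EXTRA_FILENAME_PATTERN.match("primary_42_sample1_visible_txt_hg19")
    print(fields_match.groupdict())
    # {'id': '42', 'designation': 'sample1', 'visible': 'visible', 'ext': 'txt', 'dbkey': 'hg19'}
    visible = fields_match.group("visible").lower() == "visible"   # True
    dbkey = fields_match.group("dbkey") or "?"                     # real code falls back to outdata.dbkey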
diff -r 681f2cc6dcfa2a6674875d10fa8c7699be63dba4 -r 7b8a02bb183bcd7edf6ee9997ea73fe2b6097d5b lib/galaxy/tools/parameters/output_collect.py
--- a/lib/galaxy/tools/parameters/output_collect.py
+++ b/lib/galaxy/tools/parameters/output_collect.py
@@ -1,12 +1,15 @@
""" Code allowing tools to define extra files associated with an output datset.
"""
import os
+import re
import glob
import json
from galaxy import jobs
+DEFAULT_EXTRA_FILENAME_PATTERN = re.compile(r"primary_(?P<id>\d+)_(?P<designation>[^_]+)_(?P<visible>[^_]+)_(?P<ext>[^_]+)(_(?P<dbkey>[^_]+))?")
+
def collect_primary_datatasets( tool, output, job_working_directory ):
app = tool.app
@@ -34,19 +37,14 @@
for filename in filenames:
if not name in primary_datasets:
primary_datasets[name] = {}
- fields = os.path.basename(filename).split("_")
- fields.pop(0)
- parent_id = int(fields.pop(0))
- designation = fields.pop(0)
- visible = fields.pop(0).lower()
- if visible == "visible":
- visible = True
- else:
- visible = False
- ext = fields.pop(0).lower()
- dbkey = outdata.dbkey
- if fields:
- dbkey = fields[ 0 ]
+ fields_match = DEFAULT_EXTRA_FILENAME_PATTERN.match( os.path.basename(filename) )
+ if not fields_match:
+ # Before I guess pop() would just have thrown an IndexError
+ raise Exception( "Problem parsing metadata fields for file %s" % filename )
+ designation = fields_match.group( "designation" )
+ visible = fields_match.group( "visible" ).lower() == "visible"
+ ext = fields_match.group( "ext" ).lower()
+ dbkey = fields_match.group( "dbkey" ) or outdata.dbkey
# Create new primary dataset
primary_data = app.model.HistoryDatasetAssociation( extension=ext,
designation=designation,
https://bitbucket.org/galaxy/galaxy-central/commits/56e9203f0200/
Changeset: 56e9203f0200
User: jmchilton
Date: 2014-03-29 23:11:17
Summary: Allow tool outputs to configure runtime dataset discovery.
Output "data" tags in tool XML may contain any number of child "discover_datasets" elements describing how Galaxy should discover datasets. This new method only works for job_working_directory collection; new_file_path based discovery should be considered deprecated.
Example unit and functional tests describe this new configurability in detail.
Affected #: 5 files
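A minimal sketch, mirroring the unit tests in this changeset, of turning a configured output element into collectors; it assumes a Galaxy checkout on the Python path and uses only helpers introduced here:

    from galaxy import util
    from galaxy.tools.parameters import output_collect

    elem = util.parse_xml_string(
        '<output><discover_datasets pattern="__designation_and_ext__" directory="subdir1" /></output>'
    )
    collectors = output_collect.dataset_collectors_from_elem(elem)
    for collector in collectors:
        # For the named pattern above this prints:
        # subdir1 -> (?P<designation>.*)\.(?P<ext>[^\._]+)?
        print("%s -> %s" % (collector.directory, collector.pattern))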
diff -r 7b8a02bb183bcd7edf6ee9997ea73fe2b6097d5b -r 56e9203f020027c8c7a294555f6286ce654f79d2 lib/galaxy/tools/__init__.py
--- a/lib/galaxy/tools/__init__.py
+++ b/lib/galaxy/tools/__init__.py
@@ -1413,6 +1413,7 @@
output.hidden = string_as_bool( data_elem.get("hidden", "") )
output.tool = self
output.actions = ToolOutputActionGroup( output, data_elem.find( 'actions' ) )
+ output.dataset_collectors = output_collect.dataset_collectors_from_elem( data_elem )
self.outputs[ output.name ] = output
# TODO: Include the tool's name in any parsing warnings.
diff -r 7b8a02bb183bcd7edf6ee9997ea73fe2b6097d5b -r 56e9203f020027c8c7a294555f6286ce654f79d2 lib/galaxy/tools/parameters/output_collect.py
--- a/lib/galaxy/tools/parameters/output_collect.py
+++ b/lib/galaxy/tools/parameters/output_collect.py
@@ -7,8 +7,11 @@
from galaxy import jobs
+from galaxy import util
+from galaxy.util import odict
-DEFAULT_EXTRA_FILENAME_PATTERN = re.compile(r"primary_(?P<id>\d+)_(?P<designation>[^_]+)_(?P<visible>[^_]+)_(?P<ext>[^_]+)(_(?P<dbkey>[^_]+))?")
+DATASET_ID_TOKEN = "DATASET_ID"
+DEFAULT_EXTRA_FILENAME_PATTERN = r"primary_DATASET_ID_(?P<designation>[^_]+)_(?P<visible>[^_]+)_(?P<ext>[^_]+)(_(?P<dbkey>[^_]+))?"
def collect_primary_datatasets( tool, output, job_working_directory ):
@@ -29,22 +32,41 @@
# 'primary_associatedWithDatasetID_designation_visibility_extension(_DBKEY)'
primary_datasets = {}
for name, outdata in output.items():
- filenames = []
+ dataset_collectors = tool.outputs[ name ].dataset_collectors if name in tool.outputs else [ DEFAULT_DATASET_COLLECTOR ]
+ filenames = odict.odict()
if 'new_file_path' in app.config.collect_outputs_from:
- filenames.extend( glob.glob(os.path.join(app.config.new_file_path, "primary_%i_*" % outdata.id) ) )
+ if DEFAULT_DATASET_COLLECTOR in dataset_collectors:
+ # 'new_file_path' collection should be considered deprecated,
+ # only use old-style matching (glob instead of regex and only
+ # using default collector - if enabled).
+ for filename in glob.glob(os.path.join(app.config.new_file_path, "primary_%i_*" % outdata.id) ):
+ filenames[ filename ] = DEFAULT_DATASET_COLLECTOR
if 'job_working_directory' in app.config.collect_outputs_from:
- filenames.extend( glob.glob(os.path.join(job_working_directory, "primary_%i_*" % outdata.id) ) )
- for filename in filenames:
+ for extra_file_collector in dataset_collectors:
+ directory = job_working_directory
+ if extra_file_collector.directory:
+ directory = os.path.join( directory, extra_file_collector.directory )
+ if not util.in_directory( directory, job_working_directory ):
+ raise Exception( "Problem with tool configuration, attempting to pull in datasets from outside working directory." )
+ if not os.path.isdir( directory ):
+ continue
+ for filename in os.listdir( directory ):
+ path = os.path.join( directory, filename )
+ if not os.path.isfile( path ):
+ continue
+ if extra_file_collector.match( outdata, filename ):
+ filenames[ path ] = extra_file_collector
+ for filename, extra_file_collector in filenames.iteritems():
if not name in primary_datasets:
primary_datasets[name] = {}
- fields_match = DEFAULT_EXTRA_FILENAME_PATTERN.match( os.path.basename(filename) )
+ fields_match = extra_file_collector.match( outdata, os.path.basename( filename ) )
if not fields_match:
# Before I guess pop() would just have thrown an IndexError
raise Exception( "Problem parsing metadata fields for file %s" % filename )
- designation = fields_match.group( "designation" )
- visible = fields_match.group( "visible" ).lower() == "visible"
- ext = fields_match.group( "ext" ).lower()
- dbkey = fields_match.group( "dbkey" ) or outdata.dbkey
+ designation = fields_match.designation
+ visible = fields_match.visible
+ ext = fields_match.ext
+ dbkey = fields_match.dbkey
# Create new primary dataset
primary_data = app.model.HistoryDatasetAssociation( extension=ext,
designation=designation,
@@ -58,7 +80,9 @@
# Move data from temp location to dataset location
app.object_store.update_from_file(primary_data.dataset, file_name=filename, create=True)
primary_data.set_size()
- primary_data.name = "%s (%s)" % ( outdata.name, designation )
+ # If match specified a name use otherwise generate one from
+ # designation.
+ primary_data.name = fields_match.name or "%s (%s)" % ( outdata.name, designation )
primary_data.info = outdata.info
primary_data.init_meta( copy_from=outdata )
primary_data.dbkey = dbkey
@@ -97,3 +121,99 @@
sa_session.add( new_data )
sa_session.flush()
return primary_datasets
+
+
+# XML can describe custom patterns, but these literals describe named
+# patterns that will be replaced.
+NAMED_PATTERNS = {
+ "__default__": DEFAULT_EXTRA_FILENAME_PATTERN,
+ "__name__": r"(?P<name>.*)",
+ "__designation__": r"(?P<designation>.*)",
+ "__name_and_ext__": r"(?P<name>.*)\.(?P<ext>[^\.]+)?",
+ "__designation_and_ext__": r"(?P<designation>.*)\.(?P<ext>[^\._]+)?",
+}
+
+
+def dataset_collectors_from_elem( elem ):
+ primary_dataset_elems = elem.findall( "discover_datasets" )
+ if not primary_dataset_elems:
+ return [ DEFAULT_DATASET_COLLECTOR ]
+ else:
+ return map( lambda elem: DatasetCollector( **elem.attrib ), primary_dataset_elems )
+
+
+class DatasetCollector( object ):
+
+ def __init__( self, **kwargs ):
+ pattern = kwargs.get( "pattern", "__default__" )
+ if pattern in NAMED_PATTERNS:
+ pattern = NAMED_PATTERNS.get( pattern )
+ self.pattern = pattern
+ self.default_dbkey = kwargs.get( "dbkey", None )
+ self.default_ext = kwargs.get( "ext", None )
+ self.default_visible = util.asbool( kwargs.get( "visible", None ) )
+ self.directory = kwargs.get( "directory", None )
+
+ def pattern_for_dataset( self, dataset_instance=None ):
+ token_replacement = r'\d+'
+ if dataset_instance:
+ token_replacement = str( dataset_instance.id )
+ return self.pattern.replace( DATASET_ID_TOKEN, token_replacement )
+
+ def match( self, dataset_instance, filename ):
+ re_match = re.match( self.pattern_for_dataset( dataset_instance ), filename )
+ match_object = None
+ if re_match:
+ match_object = CollectedDatasetMatch( re_match, self )
+ return match_object
+
+
+class CollectedDatasetMatch( object ):
+
+ def __init__( self, re_match, collector ):
+ self.re_match = re_match
+ self.collector = collector
+
+ @property
+ def designation( self ):
+ re_match = self.re_match
+ if "designation" in re_match.groupdict():
+ return re_match.group( "designation" )
+ elif "name" in re_match.groupdict():
+ return re_match.group( "name" )
+ else:
+ return None
+
+ @property
+ def name( self ):
+ """ Return name or None if not defined by the discovery pattern.
+ """
+ re_match = self.re_match
+ name = None
+ if "name" in re_match.groupdict():
+ name = re_match.group( "name" )
+ return name
+
+ @property
+ def dbkey( self ):
+ try:
+ return self.re_match.group( "dbkey" )
+ except IndexError:
+ return self.collector.default_dbkey
+
+ @property
+ def ext( self ):
+ try:
+ return self.re_match.group( "ext" )
+ except IndexError:
+ return self.collector.default_ext
+
+ @property
+ def visible( self ):
+ try:
+ return self.re_match.group( "visible" ).lower() == "visible"
+ except IndexError:
+ return self.collector.default_visible
+
+
+DEFAULT_DATASET_COLLECTOR = DatasetCollector()
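For reference, a minimal standalone sketch of the matching these collectors perform. The dataset id (42) and the filenames below are made up for illustration; the regexes are copied from DEFAULT_EXTRA_FILENAME_PATTERN and NAMED_PATTERNS above.

    import re

    # Old-style default pattern with DATASET_ID already substituted (hypothetical id 42).
    default_pattern = r"primary_42_(?P<designation>[^_]+)_(?P<visible>[^_]+)_(?P<ext>[^_]+)(_(?P<dbkey>[^_]+))?"
    m = re.match(default_pattern, "primary_42_world_visible_txt_hg19")
    assert m.group("designation") == "world"
    assert m.group("visible") == "visible"
    assert m.group("ext") == "txt"
    assert m.group("dbkey") == "hg19"

    # A named pattern such as __name_and_ext__ only captures name/ext; groups that are
    # absent fall back to the collector's ext=/dbkey=/visible= defaults shown above.
    name_and_ext = r"(?P<name>.*)\.(?P<ext>[^\.]+)?"
    assert re.match(name_and_ext, "foo1.txt").groupdict() == {"name": "foo1", "ext": "txt"}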
diff -r 7b8a02bb183bcd7edf6ee9997ea73fe2b6097d5b -r 56e9203f020027c8c7a294555f6286ce654f79d2 test/functional/tools/multi_output_configured.xml
--- /dev/null
+++ b/test/functional/tools/multi_output_configured.xml
@@ -0,0 +1,43 @@
+<tool id="multi_output_configured" name="Multi_Output_Configured" description="multi_output_configured" force_history_refresh="True" version="0.1.0">
+ <command>
+ echo "Hello" > $report;
+ mkdir subdir1;
+ echo "This" > subdir1/this.txt;
+ echo "That" > subdir1/that.txt;
+ mkdir subdir2;
+ echo "1" > subdir2/CUSTOM_1.txt;
+ echo "2" > subdir2/CUSTOM_2.tabular;
+ echo "3" > subdir2/CUSTOM_3.txt;
+ </command>
+ <inputs>
+ <param name="input" type="integer" value="7" />
+ </inputs>
+ <outputs>
+ <data format="txt" name="report">
+ <discover_datasets pattern="__designation_and_ext__" directory="subdir1" />
+ <discover_datasets pattern="CUSTOM_(?P<designation>.+)\.(?P<ext>.+)" directory="subdir2" />
+ </data>
+ </outputs>
+ <tests>
+ <test>
+ <param name="input" value="7" />
+ <output name="report">
+ <assert_contents>
+ <has_line line="Hello" />
+ </assert_contents>
+ <discovered_dataset designation="this" ftype="txt">
+ <assert_contents><has_line line="This" /></assert_contents>
+ </discovered_dataset>
+ <discovered_dataset designation="that" ftype="txt">
+ <assert_contents><has_line line="That" /></assert_contents>
+ </discovered_dataset>
+ <discovered_dataset designation="1" ftype="txt">
+ <assert_contents><has_line line="1" /></assert_contents>
+ </discovered_dataset>
+ <discovered_dataset designation="2" ftype="tabular">
+ <assert_contents><has_line line="2" /></assert_contents>
+ </discovered_dataset>
+ </output>
+ </test>
+ </tests>
+</tool>
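Run against the files the command above creates, the two discover_datasets directives would resolve designations and extensions roughly as follows. This is illustrative regex matching only, not the collector implementation; the patterns are copied from NAMED_PATTERNS and from the tool XML above.

    import re

    # __designation_and_ext__ expands to this regex (see NAMED_PATTERNS in output_collect.py).
    designation_and_ext = r"(?P<designation>.*)\.(?P<ext>[^\._]+)?"
    # Custom pattern copied from the subdir2 discover_datasets element above.
    custom = r"CUSTOM_(?P<designation>.+)\.(?P<ext>.+)"

    assert re.match(designation_and_ext, "this.txt").groupdict() == {"designation": "this", "ext": "txt"}
    assert re.match(designation_and_ext, "that.txt").groupdict() == {"designation": "that", "ext": "txt"}
    assert re.match(custom, "CUSTOM_2.tabular").groupdict() == {"designation": "2", "ext": "tabular"}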
diff -r 7b8a02bb183bcd7edf6ee9997ea73fe2b6097d5b -r 56e9203f020027c8c7a294555f6286ce654f79d2 test/functional/tools/samples_tool_conf.xml
--- a/test/functional/tools/samples_tool_conf.xml
+++ b/test/functional/tools/samples_tool_conf.xml
@@ -8,6 +8,7 @@
    <tool file="multi_page.xml"/>
    <tool file="multi_select.xml" />
    <tool file="multi_output.xml" />
+   <tool file="multi_output_configured.xml" />
    <tool file="composite_output.xml" />
    <tool file="metadata.xml" />
    <tool file="output_order.xml" />
diff -r 7b8a02bb183bcd7edf6ee9997ea73fe2b6097d5b -r 56e9203f020027c8c7a294555f6286ce654f79d2 test/unit/tools/test_collect_primary_datasets.py
--- a/test/unit/tools/test_collect_primary_datasets.py
+++ b/test/unit/tools/test_collect_primary_datasets.py
@@ -5,6 +5,8 @@
import tools_support
from galaxy import model
+from galaxy import util
+from galaxy.tools.parameters import output_collect
DEFAULT_TOOL_OUTPUT = "out1"
DEFAULT_EXTRA_NAME = "test1"
@@ -114,6 +116,75 @@
extra_job_assoc = filter( lambda job_assoc: job_assoc.name.startswith( "__" ), self.job.output_datasets )[ 0 ]
assert extra_job_assoc.name == "__new_primary_file_out1|test1__"
+ def test_pattern_override_designation( self ):
+ self._replace_output_collectors( '''<output><discover_datasets pattern="__designation__" directory="subdir" ext="txt" /></output>''' )
+ self._setup_extra_file( subdir="subdir", filename="foo.txt" )
+ primary_outputs = self._collect( )[ DEFAULT_TOOL_OUTPUT ]
+ assert len( primary_outputs ) == 1
+ created_hda = primary_outputs.values()[ 0 ]
+ assert "foo.txt" in created_hda.name
+ assert created_hda.ext == "txt"
+
+ def test_name_and_ext_pattern( self ):
+ self._replace_output_collectors( '''<output><discover_datasets pattern="__name_and_ext__" directory="subdir" /></output>''' )
+ self._setup_extra_file( subdir="subdir", filename="foo1.txt" )
+ self._setup_extra_file( subdir="subdir", filename="foo2.tabular" )
+ primary_outputs = self._collect( )[ DEFAULT_TOOL_OUTPUT ]
+ assert len( primary_outputs ) == 2
+ assert primary_outputs[ "foo1" ].ext == "txt"
+ assert primary_outputs[ "foo2" ].ext == "tabular"
+
+ def test_custom_pattern( self ):
+ # Hypothetical oral metagenomic classifier that populates a directory
+ # of files based on name and genome. Use custom regex pattern to grab
+ # and classify these files.
+ self._replace_output_collectors( '''<output><discover_datasets pattern="(?P<designation>.*)__(?P<dbkey>.*).fasta" directory="genome_breakdown" ext="fasta" /></output>''' )
+ self._setup_extra_file( subdir="genome_breakdown", filename="samp1__hg19.fasta" )
+ self._setup_extra_file( subdir="genome_breakdown", filename="samp2__lactLact.fasta" )
+ self._setup_extra_file( subdir="genome_breakdown", filename="samp3__hg19.fasta" )
+ self._setup_extra_file( subdir="genome_breakdown", filename="samp4__lactPlan.fasta" )
+ self._setup_extra_file( subdir="genome_breakdown", filename="samp5__fusoNucl.fasta" )
+
+ # Put a file in directory we don't care about, just to make sure
+ # it doesn't get picked up by pattern.
+ self._setup_extra_file( subdir="genome_breakdown", filename="overview.txt" )
+
+ primary_outputs = self._collect( )[ DEFAULT_TOOL_OUTPUT ]
+ assert len( primary_outputs ) == 5
+ genomes = dict( samp1="hg19", samp2="lactLact", samp3="hg19", samp4="lactPlan", samp5="fusoNucl" )
+ for key, hda in primary_outputs.iteritems():
+ assert hda.dbkey == genomes[ key ]
+
+ def test_name_versus_designation( self ):
+ """ This test demonstrates the difference between name and designation
+ in grouping patterns and named patterns such as __designation__,
+ __name__, __designation_and_ext__, and __name_and_ext__.
+ """
+ self._replace_output_collectors( '''<output>
+ <discover_datasets pattern="__name_and_ext__" directory="subdir_for_name_discovery" />
+ <discover_datasets pattern="__designation_and_ext__" directory="subdir_for_designation_discovery" />
+ </output>''')
+ self._setup_extra_file( subdir="subdir_for_name_discovery", filename="example1.txt" )
+ self._setup_extra_file( subdir="subdir_for_designation_discovery", filename="example2.txt" )
+ primary_outputs = self._collect( )[ DEFAULT_TOOL_OUTPUT ]
+ name_output = primary_outputs[ "example1" ]
+ designation_output = primary_outputs[ "example2" ]
+ # While the name is also used as the designation, the designation is not
+ # the name itself; it only feeds into how the name is calculated.
+ assert name_output.name == "example1"
+ assert designation_output.name == "%s (%s)" % ( self.hda.name, "example2" )
+
+ def test_cannot_read_files_outside_job_directory( self ):
+ self._replace_output_collectors( '''<output>
+ <discover_datasets pattern="__name_and_ext__" directory="../../secrets" />
+ </output>''')
+ exception_thrown = False
+ try:
+ self._collect( )
+ except Exception:
+ exception_thrown = True
+ assert exception_thrown
+
def _collect_default_extra( self, **kwargs ):
return self._collect( **kwargs )[ DEFAULT_TOOL_OUTPUT ][ DEFAULT_EXTRA_NAME ]
@@ -122,6 +193,12 @@
job_working_directory = self.test_directory
return self.tool.collect_primary_datasets( self.outputs, job_working_directory )
+ def _replace_output_collectors( self, xml_str ):
+ # Rewrite tool as if it had been created with output containing
+ # supplied dataset_collector elem.
+ elem = util.parse_xml_string( xml_str )
+ self.tool.outputs[ DEFAULT_TOOL_OUTPUT ].dataset_collectors = output_collect.dataset_collectors_from_elem( elem )
+
def _append_job_json( self, object, output_path=None, line_type="new_primary_dataset" ):
object[ "type" ] = line_type
if output_path:
@@ -133,7 +210,8 @@
def _setup_extra_file( self, **kwargs ):
path = kwargs.get( "path", None )
- if not path:
+ filename = kwargs.get( "filename", None )
+ if not path and not filename:
name = kwargs.get( "name", DEFAULT_EXTRA_NAME )
visible = kwargs.get( "visible", "visible" )
ext = kwargs.get( "ext", "data" )
@@ -142,6 +220,13 @@
path = os.path.join( directory, "primary_%s_%s_%s_%s" % template_args )
if "dbkey" in kwargs:
path = "%s_%s" % ( path, kwargs[ "dbkey" ] )
+ if not path:
+ assert filename
+ subdir = kwargs.get( "subdir", "." )
+ path = os.path.join( self.test_directory, subdir, filename )
+ directory = os.path.dirname( path )
+ if not os.path.exists( directory ):
+ os.makedirs( directory )
contents = kwargs.get( "contents", "test contents" )
open( path, "w" ).write( contents )
return path
https://bitbucket.org/galaxy/galaxy-central/commits/8e6cda4c1b3d/
Changeset: 8e6cda4c1b3d
User: jmchilton
Date: 2014-05-06 15:13:29
Summary: Merged in jmchilton/galaxy-central-fork-1 (pull request #356)
Enhancements for Runtime Discovered (Collected Primary) Datasets
Affected #: 13 files
diff -r 74b6e23ed7882f3c091d2b66ce85025241372017 -r 8e6cda4c1b3d1685a8ce07412dc542f34eb8b44b lib/galaxy/tools/__init__.py
--- a/lib/galaxy/tools/__init__.py
+++ b/lib/galaxy/tools/__init__.py
@@ -42,6 +42,7 @@
from galaxy.tools.deps import build_dependency_manager
from galaxy.tools.deps.requirements import parse_requirements_from_xml
from galaxy.tools.parameters import check_param, params_from_strings, params_to_strings
+from galaxy.tools.parameters import output_collect
from galaxy.tools.parameters.basic import (BaseURLToolParameter,
DataToolParameter, HiddenToolParameter, LibraryDatasetToolParameter,
SelectToolParameter, ToolParameter, UnvalidatedValue,
@@ -1452,6 +1453,7 @@
output.hidden = string_as_bool( data_elem.get("hidden", "") )
output.tool = self
output.actions = ToolOutputActionGroup( output, data_elem.find( 'actions' ) )
+ output.dataset_collectors = output_collect.dataset_collectors_from_elem( data_elem )
self.outputs[ output.name ] = output
# TODO: Include the tool's name in any parsing warnings.
@@ -2800,94 +2802,7 @@
Find any additional datasets generated by a tool and attach (for
cases where number of outputs is not known in advance).
"""
- new_primary_datasets = {}
- try:
- json_file = open( os.path.join( job_working_directory, jobs.TOOL_PROVIDED_JOB_METADATA_FILE ), 'r' )
- for line in json_file:
- line = json.loads( line )
- if line.get( 'type' ) == 'new_primary_dataset':
- new_primary_datasets[ os.path.split( line.get( 'filename' ) )[-1] ] = line
- except Exception:
- # This should not be considered an error or warning condition, this file is optional
- pass
- # Loop through output file names, looking for generated primary
- # datasets in form of:
- # 'primary_associatedWithDatasetID_designation_visibility_extension(_DBKEY)'
- primary_datasets = {}
- for name, outdata in output.items():
- filenames = []
- if 'new_file_path' in self.app.config.collect_outputs_from:
- filenames.extend( glob.glob(os.path.join(self.app.config.new_file_path, "primary_%i_*" % outdata.id) ) )
- if 'job_working_directory' in self.app.config.collect_outputs_from:
- filenames.extend( glob.glob(os.path.join(job_working_directory, "primary_%i_*" % outdata.id) ) )
- for filename in filenames:
- if not name in primary_datasets:
- primary_datasets[name] = {}
- fields = os.path.basename(filename).split("_")
- fields.pop(0)
- parent_id = int(fields.pop(0))
- designation = fields.pop(0)
- visible = fields.pop(0).lower()
- if visible == "visible":
- visible = True
- else:
- visible = False
- ext = fields.pop(0).lower()
- dbkey = outdata.dbkey
- if fields:
- dbkey = fields[ 0 ]
- # Create new primary dataset
- primary_data = self.app.model.HistoryDatasetAssociation( extension=ext,
- designation=designation,
- visible=visible,
- dbkey=dbkey,
- create_dataset=True,
- sa_session=self.sa_session )
- self.app.security_agent.copy_dataset_permissions( outdata.dataset, primary_data.dataset )
- self.sa_session.add( primary_data )
- self.sa_session.flush()
- # Move data from temp location to dataset location
- self.app.object_store.update_from_file(primary_data.dataset, file_name=filename, create=True)
- primary_data.set_size()
- primary_data.name = "%s (%s)" % ( outdata.name, designation )
- primary_data.info = outdata.info
- primary_data.init_meta( copy_from=outdata )
- primary_data.dbkey = dbkey
- # Associate new dataset with job
- job = None
- for assoc in outdata.creating_job_associations:
- job = assoc.job
- break
- if job:
- assoc = self.app.model.JobToOutputDatasetAssociation( '__new_primary_file_%s|%s__' % ( name, designation ), primary_data )
- assoc.job = job
- self.sa_session.add( assoc )
- self.sa_session.flush()
- primary_data.state = outdata.state
- #add tool/metadata provided information
- new_primary_datasets_attributes = new_primary_datasets.get( os.path.split( filename )[-1] )
- if new_primary_datasets_attributes:
- dataset_att_by_name = dict( ext='extension' )
- for att_set in [ 'name', 'info', 'ext', 'dbkey' ]:
- dataset_att_name = dataset_att_by_name.get( att_set, att_set )
- setattr( primary_data, dataset_att_name, new_primary_datasets_attributes.get( att_set, getattr( primary_data, dataset_att_name ) ) )
- primary_data.set_meta()
- primary_data.set_peek()
- self.sa_session.add( primary_data )
- self.sa_session.flush()
- outdata.history.add_dataset( primary_data )
- # Add dataset to return dict
- primary_datasets[name][designation] = primary_data
- # Need to update all associated output hdas, i.e. history was
- # shared with job running
- for dataset in outdata.dataset.history_associations:
- if outdata == dataset:
- continue
- new_data = primary_data.copy()
- dataset.history.add_dataset( new_data )
- self.sa_session.add( new_data )
- self.sa_session.flush()
- return primary_datasets
+ return output_collect.collect_primary_datatasets( self, output, job_working_directory )
def to_dict( self, trans, link_details=False, io_details=False ):
""" Returns dict of tool. """
diff -r 74b6e23ed7882f3c091d2b66ce85025241372017 -r 8e6cda4c1b3d1685a8ce07412dc542f34eb8b44b lib/galaxy/tools/parameters/output_collect.py
--- /dev/null
+++ b/lib/galaxy/tools/parameters/output_collect.py
@@ -0,0 +1,219 @@
+""" Code allowing tools to define extra files associated with an output dataset.
+"""
+import os
+import re
+import glob
+import json
+
+
+from galaxy import jobs
+from galaxy import util
+from galaxy.util import odict
+
+DATASET_ID_TOKEN = "DATASET_ID"
+DEFAULT_EXTRA_FILENAME_PATTERN = r"primary_DATASET_ID_(?P<designation>[^_]+)_(?P<visible>[^_]+)_(?P<ext>[^_]+)(_(?P<dbkey>[^_]+))?"
+
+
+def collect_primary_datatasets( tool, output, job_working_directory ):
+ app = tool.app
+ sa_session = tool.sa_session
+ new_primary_datasets = {}
+ try:
+ json_file = open( os.path.join( job_working_directory, jobs.TOOL_PROVIDED_JOB_METADATA_FILE ), 'r' )
+ for line in json_file:
+ line = json.loads( line )
+ if line.get( 'type' ) == 'new_primary_dataset':
+ new_primary_datasets[ os.path.split( line.get( 'filename' ) )[-1] ] = line
+ except Exception:
+ # This should not be considered an error or warning condition, this file is optional
+ pass
+ # Loop through output file names, looking for generated primary
+ # datasets in form of:
+ # 'primary_associatedWithDatasetID_designation_visibility_extension(_DBKEY)'
+ primary_datasets = {}
+ for name, outdata in output.items():
+ dataset_collectors = tool.outputs[ name ].dataset_collectors if name in tool.outputs else [ DEFAULT_DATASET_COLLECTOR ]
+ filenames = odict.odict()
+ if 'new_file_path' in app.config.collect_outputs_from:
+ if DEFAULT_DATASET_COLLECTOR in dataset_collectors:
+ # 'new_file_path' collection should be considered deprecated,
+ # only use old-style matching (glob instead of regex and only
+ # using default collector - if enabled).
+ for filename in glob.glob(os.path.join(app.config.new_file_path, "primary_%i_*" % outdata.id) ):
+ filenames[ filename ] = DEFAULT_DATASET_COLLECTOR
+ if 'job_working_directory' in app.config.collect_outputs_from:
+ for extra_file_collector in dataset_collectors:
+ directory = job_working_directory
+ if extra_file_collector.directory:
+ directory = os.path.join( directory, extra_file_collector.directory )
+ if not util.in_directory( directory, job_working_directory ):
+ raise Exception( "Problem with tool configuration, attempting to pull in datasets from outside working directory." )
+ if not os.path.isdir( directory ):
+ continue
+ for filename in os.listdir( directory ):
+ path = os.path.join( directory, filename )
+ if not os.path.isfile( path ):
+ continue
+ if extra_file_collector.match( outdata, filename ):
+ filenames[ path ] = extra_file_collector
+ for filename, extra_file_collector in filenames.iteritems():
+ if not name in primary_datasets:
+ primary_datasets[name] = {}
+ fields_match = extra_file_collector.match( outdata, os.path.basename( filename ) )
+ if not fields_match:
+ # Previously this case would just have surfaced as an IndexError from pop()
+ raise Exception( "Problem parsing metadata fields for file %s" % filename )
+ designation = fields_match.designation
+ visible = fields_match.visible
+ ext = fields_match.ext
+ dbkey = fields_match.dbkey
+ # Create new primary dataset
+ primary_data = app.model.HistoryDatasetAssociation( extension=ext,
+ designation=designation,
+ visible=visible,
+ dbkey=dbkey,
+ create_dataset=True,
+ sa_session=sa_session )
+ app.security_agent.copy_dataset_permissions( outdata.dataset, primary_data.dataset )
+ sa_session.add( primary_data )
+ sa_session.flush()
+ # Move data from temp location to dataset location
+ app.object_store.update_from_file(primary_data.dataset, file_name=filename, create=True)
+ primary_data.set_size()
+ # If the match specified a name, use it; otherwise generate one from
+ # the designation.
+ primary_data.name = fields_match.name or "%s (%s)" % ( outdata.name, designation )
+ primary_data.info = outdata.info
+ primary_data.init_meta( copy_from=outdata )
+ primary_data.dbkey = dbkey
+ # Associate new dataset with job
+ job = None
+ for assoc in outdata.creating_job_associations:
+ job = assoc.job
+ break
+ if job:
+ assoc = app.model.JobToOutputDatasetAssociation( '__new_primary_file_%s|%s__' % ( name, designation ), primary_data )
+ assoc.job = job
+ sa_session.add( assoc )
+ sa_session.flush()
+ primary_data.state = outdata.state
+ #add tool/metadata provided information
+ new_primary_datasets_attributes = new_primary_datasets.get( os.path.split( filename )[-1] )
+ if new_primary_datasets_attributes:
+ dataset_att_by_name = dict( ext='extension' )
+ for att_set in [ 'name', 'info', 'ext', 'dbkey' ]:
+ dataset_att_name = dataset_att_by_name.get( att_set, att_set )
+ setattr( primary_data, dataset_att_name, new_primary_datasets_attributes.get( att_set, getattr( primary_data, dataset_att_name ) ) )
+ primary_data.set_meta()
+ primary_data.set_peek()
+ sa_session.add( primary_data )
+ sa_session.flush()
+ outdata.history.add_dataset( primary_data )
+ # Add dataset to return dict
+ primary_datasets[name][designation] = primary_data
+ # Need to update all associated output hdas, i.e. history was
+ # shared with job running
+ for dataset in outdata.dataset.history_associations:
+ if outdata == dataset:
+ continue
+ new_data = primary_data.copy()
+ dataset.history.add_dataset( new_data )
+ sa_session.add( new_data )
+ sa_session.flush()
+ return primary_datasets
+
+
+# XML can describe custom patterns, but these literals describe named
+# patterns that will be replaced.
+NAMED_PATTERNS = {
+ "__default__": DEFAULT_EXTRA_FILENAME_PATTERN,
+ "__name__": r"(?P<name>.*)",
+ "__designation__": r"(?P<designation>.*)",
+ "__name_and_ext__": r"(?P<name>.*)\.(?P<ext>[^\.]+)?",
+ "__designation_and_ext__": r"(?P<designation>.*)\.(?P<ext>[^\._]+)?",
+}
+
+
+def dataset_collectors_from_elem( elem ):
+ primary_dataset_elems = elem.findall( "discover_datasets" )
+ if not primary_dataset_elems:
+ return [ DEFAULT_DATASET_COLLECTOR ]
+ else:
+ return map( lambda elem: DatasetCollector( **elem.attrib ), primary_dataset_elems )
+
+
+class DatasetCollector( object ):
+
+ def __init__( self, **kwargs ):
+ pattern = kwargs.get( "pattern", "__default__" )
+ if pattern in NAMED_PATTERNS:
+ pattern = NAMED_PATTERNS.get( pattern )
+ self.pattern = pattern
+ self.default_dbkey = kwargs.get( "dbkey", None )
+ self.default_ext = kwargs.get( "ext", None )
+ self.default_visible = util.asbool( kwargs.get( "visible", None ) )
+ self.directory = kwargs.get( "directory", None )
+
+ def pattern_for_dataset( self, dataset_instance=None ):
+ token_replacement = r'\d+'
+ if dataset_instance:
+ token_replacement = str( dataset_instance.id )
+ return self.pattern.replace( DATASET_ID_TOKEN, token_replacement )
+
+ def match( self, dataset_instance, filename ):
+ re_match = re.match( self.pattern_for_dataset( dataset_instance ), filename )
+ match_object = None
+ if re_match:
+ match_object = CollectedDatasetMatch( re_match, self )
+ return match_object
+
+
+class CollectedDatasetMatch( object ):
+
+ def __init__( self, re_match, collector ):
+ self.re_match = re_match
+ self.collector = collector
+
+ @property
+ def designation( self ):
+ re_match = self.re_match
+ if "designation" in re_match.groupdict():
+ return re_match.group( "designation" )
+ elif "name" in re_match.groupdict():
+ return re_match.group( "name" )
+ else:
+ return None
+
+ @property
+ def name( self ):
+ """ Return name or None if not defined by the discovery pattern.
+ """
+ re_match = self.re_match
+ name = None
+ if "name" in re_match.groupdict():
+ name = re_match.group( "name" )
+ return name
+
+ @property
+ def dbkey( self ):
+ try:
+ return self.re_match.group( "dbkey" )
+ except IndexError:
+ return self.collector.default_dbkey
+
+ @property
+ def ext( self ):
+ try:
+ return self.re_match.group( "ext" )
+ except IndexError:
+ return self.collector.default_ext
+
+ @property
+ def visible( self ):
+ try:
+ return self.re_match.group( "visible" ).lower() == "visible"
+ except IndexError:
+ return self.collector.default_visible
+
+
+DEFAULT_DATASET_COLLECTOR = DatasetCollector()
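The unit tests further below exercise this module through dataset_collectors_from_elem; a minimal sketch of that usage, assuming a galaxy-central checkout on PYTHONPATH. The XML string and the filename are made up for illustration.

    from galaxy import util
    from galaxy.tools.parameters import output_collect

    elem = util.parse_xml_string(
        '<output><discover_datasets pattern="__name_and_ext__" directory="subdir" /></output>'
    )
    collectors = output_collect.dataset_collectors_from_elem(elem)
    # The dataset instance only matters for patterns containing DATASET_ID, so None is fine here.
    match = collectors[0].match(None, "foo1.txt")
    assert (match.name, match.ext) == ("foo1", "txt")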
diff -r 74b6e23ed7882f3c091d2b66ce85025241372017 -r 8e6cda4c1b3d1685a8ce07412dc542f34eb8b44b lib/galaxy/tools/test.py
--- a/lib/galaxy/tools/test.py
+++ b/lib/galaxy/tools/test.py
@@ -297,6 +297,19 @@
if name is None:
raise Exception( "Test output does not have a 'name'" )
+ file, attributes = __parse_test_attributes( output_elem, attrib )
+ primary_datasets = {}
+ for primary_elem in ( output_elem.findall( "discovered_dataset" ) or [] ):
+ primary_attrib = dict( primary_elem.attrib )
+ designation = primary_attrib.pop( 'designation', None )
+ if designation is None:
+ raise Exception( "Test primary dataset does not have a 'designation'" )
+ primary_datasets[ designation ] = __parse_test_attributes( primary_elem, primary_attrib )
+ attributes[ "primary_datasets" ] = primary_datasets
+ return name, file, attributes
+
+
+def __parse_test_attributes( output_elem, attrib ):
assert_list = __parse_assert_list( output_elem )
file = attrib.pop( 'file', None )
# File no longer required if an list of assertions was present.
@@ -321,7 +334,7 @@
attributes['assert_list'] = assert_list
attributes['extra_files'] = extra_files
attributes['metadata'] = metadata
- return name, file, attributes
+ return file, attributes
def __parse_assert_list( output_elem ):
diff -r 74b6e23ed7882f3c091d2b66ce85025241372017 -r 8e6cda4c1b3d1685a8ce07412dc542f34eb8b44b lib/galaxy/webapps/galaxy/api/jobs.py
--- a/lib/galaxy/webapps/galaxy/api/jobs.py
+++ b/lib/galaxy/webapps/galaxy/api/jobs.py
@@ -13,6 +13,7 @@
from galaxy.web.base.controller import UsesLibraryMixinItems
from galaxy import exceptions
from galaxy import util
+from galaxy import model
import logging
log = logging.getLogger( __name__ )
@@ -91,10 +92,62 @@
:rtype: dictionary
:returns: dictionary containing full description of job data
"""
+ job = self.__get_job( trans, id )
+ return self.encode_all_ids( trans, job.to_dict( 'element' ), True )
+
+ @expose_api
+ def inputs( self, trans, id, **kwd ):
+ """
+ show( trans, id )
+ * GET /api/jobs/{job_id}/inputs
+ returns input datasets created by job
+
+ :type id: string
+ :param id: Encoded job id
+
+ :rtype: dictionary
+ :returns: dictionary containing input dataset associations
+ """
+ job = self.__get_job( trans, id )
+ return self.__dictify_associations( trans, job.input_datasets, job.input_library_datasets )
+
+ @expose_api
+ def outputs( self, trans, id, **kwd ):
+ """
+ show( trans, id )
+ * GET /api/jobs/{job_id}/outputs
+ returns output datasets created by job
+
+ :type id: string
+ :param id: Encoded job id
+
+ :rtype: dictionary
+ :returns: dictionary containing output dataset associations
+ """
+ job = self.__get_job( trans, id )
+ return self.__dictify_associations( trans, job.output_datasets, job.output_library_datasets )
+
+ def __dictify_associations( self, trans, *association_lists ):
+ rval = []
+ for association_list in association_lists:
+ rval.extend( map( lambda a: self.__dictify_association( trans, a ), association_list ) )
+ return rval
+
+ def __dictify_association( self, trans, job_dataset_association ):
+ dataset_dict = None
+ dataset = job_dataset_association.dataset
+ if dataset:
+ if isinstance( dataset, model.HistoryDatasetAssociation ):
+ dataset_dict = dict( src="hda", id=trans.security.encode_id( dataset.id ) )
+ else:
+ dataset_dict = dict( src="ldda", id=trans.security.encode_id( dataset.id ) )
+ return dict( name=job_dataset_association.name, dataset=dataset_dict )
+
+ def __get_job( self, trans, id ):
try:
- decoded_job_id = trans.security.decode_id(id)
- except:
- raise exceptions.ObjectAttributeInvalidException()
+ decoded_job_id = trans.security.decode_id( id )
+ except Exception:
+ raise exceptions.MalformedId()
query = trans.sa_session.query( trans.app.model.Job ).filter(
trans.app.model.Job.user == trans.user,
trans.app.model.Job.id == decoded_job_id
@@ -102,7 +155,7 @@
job = query.first()
if job is None:
raise exceptions.ObjectNotFound()
- return self.encode_all_ids( trans, job.to_dict( 'element' ), True )
+ return job
@expose_api
def create( self, trans, payload, **kwd ):
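A minimal sketch of calling the two new job endpoints from outside Galaxy using the requests library. The base URL, API key, and encoded job id below are placeholders; the key is passed as the usual Galaxy API key parameter.

    import requests

    GALAXY_URL = "http://localhost:8080"    # placeholder Galaxy instance
    API_KEY = "YOUR_API_KEY"                # placeholder API key
    JOB_ID = "1cd8e2f6b131e891"             # placeholder encoded job id

    for action in ("inputs", "outputs"):
        response = requests.get(
            "%s/api/jobs/%s/%s" % (GALAXY_URL, JOB_ID, action),
            params={"key": API_KEY},
        )
        response.raise_for_status()
        # Each entry has the shape {"name": ..., "dataset": {"src": "hda" or "ldda", "id": ...}}.
        associations = response.json()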
diff -r 74b6e23ed7882f3c091d2b66ce85025241372017 -r 8e6cda4c1b3d1685a8ce07412dc542f34eb8b44b lib/galaxy/webapps/galaxy/api/provenance.py
--- a/lib/galaxy/webapps/galaxy/api/provenance.py
+++ b/lib/galaxy/webapps/galaxy/api/provenance.py
@@ -46,6 +46,7 @@
return {
"id": trans.security.encode_id(item.id),
"uuid": ( lambda uuid: str( uuid ) if uuid else None )( item.dataset.uuid),
+ "job_id": trans.security.encode_id( job.id ),
"tool_id": job.tool_id,
"parameters": self._get_job_record(trans, job, follow),
"stderr": job.stderr,
diff -r 74b6e23ed7882f3c091d2b66ce85025241372017 -r 8e6cda4c1b3d1685a8ce07412dc542f34eb8b44b lib/galaxy/webapps/galaxy/buildapp.py
--- a/lib/galaxy/webapps/galaxy/buildapp.py
+++ b/lib/galaxy/webapps/galaxy/buildapp.py
@@ -281,6 +281,8 @@
'jobs',
path_prefix='/api' )
webapp.mapper.connect( 'job_search', '/api/jobs/search', controller='jobs', action='search', conditions=dict( method=['POST'] ) )
+ webapp.mapper.connect( 'job_inputs', '/api/jobs/{id}/inputs', controller='jobs', action='inputs', conditions=dict( method=['GET'] ) )
+ webapp.mapper.connect( 'job_outputs', '/api/jobs/{id}/outputs', controller='jobs', action='outputs', conditions=dict( method=['GET'] ) )
# Job files controllers. Only for consumption by remote job runners.
webapp.mapper.resource( 'file',
diff -r 74b6e23ed7882f3c091d2b66ce85025241372017 -r 8e6cda4c1b3d1685a8ce07412dc542f34eb8b44b test/base/interactor.py
--- a/test/base/interactor.py
+++ b/test/base/interactor.py
@@ -44,16 +44,44 @@
self.api_key = self.__get_user_key( twill_test_case.user_api_key, twill_test_case.master_api_key, test_user=test_user )
self.uploads = {}
- def verify_output( self, history_id, output_data, outfile, attributes, shed_tool_id, maxseconds ):
+ def verify_output( self, history_id, output_data, output_testdef, shed_tool_id, maxseconds ):
+ outfile = output_testdef.outfile
+ attributes = output_testdef.attributes
+ name = output_testdef.name
self.wait_for_history( history_id, maxseconds )
hid = self.__output_id( output_data )
fetcher = self.__dataset_fetcher( history_id )
## TODO: Twill version verifies dataset is 'ok' in here.
self.twill_test_case.verify_hid( outfile, hda_id=hid, attributes=attributes, dataset_fetcher=fetcher, shed_tool_id=shed_tool_id )
+
+ primary_datasets = attributes.get( 'primary_datasets', {} )
+ if primary_datasets:
+ job_id = self._dataset_provenance( history_id, hid )[ "job_id" ]
+ outputs = self._get( "jobs/%s/outputs" % ( job_id ) ).json()
+
+ for designation, ( primary_outfile, primary_attributes ) in primary_datasets.iteritems():
+ primary_output = None
+ for output in outputs:
+ if output[ "name" ] == '__new_primary_file_%s|%s__' % ( name, designation ):
+ primary_output = output
+ break
+
+ if not primary_output:
+ msg_template = "Failed to find primary dataset with designation [%s] for output with name [%s]"
+ msg_args = ( designation, name )
+ raise Exception( msg_template % msg_args )
+
+ primary_hda_id = primary_output[ "dataset" ][ "id" ]
+ self.twill_test_case.verify_hid( primary_outfile, hda_id=primary_hda_id, attributes=primary_attributes, dataset_fetcher=fetcher, shed_tool_id=shed_tool_id )
+ self._verify_metadata( history_id, primary_hda_id, primary_attributes )
+
+ self._verify_metadata( history_id, hid, attributes )
+
+ def _verify_metadata( self, history_id, hid, attributes ):
metadata = attributes.get( 'metadata', {} ).copy()
for key, value in metadata.copy().iteritems():
new_key = "metadata_%s" % key
- metadata[ new_key ] = metadata[ key ]
+ metadata[ new_key ] = metadata[ key ]
del metadata[ key ]
expected_file_type = attributes.get( 'ftype', None )
if expected_file_type:
@@ -319,7 +347,10 @@
def __init__( self, twill_test_case ):
self.twill_test_case = twill_test_case
- def verify_output( self, history, output_data, outfile, attributes, shed_tool_id, maxseconds ):
+ def verify_output( self, history, output_data, output_testdef, shed_tool_id, maxseconds ):
+ outfile = output_testdef.outfile
+ attributes = output_testdef.attributes
+
hid = output_data.get( 'hid' )
self.twill_test_case.verify_dataset_correctness( outfile, hid=hid, attributes=attributes, shed_tool_id=shed_tool_id, maxseconds=maxseconds )
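The lookup the API interactor performs above can be summarised in a few lines; the names, ids, and output list here are placeholders, and the association name format comes from collect_primary_datasets.

    # Discovered datasets are attached to the job as '__new_primary_file_<output>|<designation>__'.
    name, designation = "report", "world"          # placeholders
    outputs = [                                    # shape of GET /api/jobs/{id}/outputs
        {"name": "report", "dataset": {"src": "hda", "id": "abc123"}},
        {"name": "__new_primary_file_report|world__", "dataset": {"src": "hda", "id": "def456"}},
    ]
    wanted = "__new_primary_file_%s|%s__" % (name, designation)
    primary = next((o for o in outputs if o["name"] == wanted), None)
    assert primary is not None
    primary_hda_id = primary["dataset"]["id"]      # then verified like any other HDA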
diff -r 74b6e23ed7882f3c091d2b66ce85025241372017 -r 8e6cda4c1b3d1685a8ce07412dc542f34eb8b44b test/functional/test_toolbox.py
--- a/test/functional/test_toolbox.py
+++ b/test/functional/test_toolbox.py
@@ -3,6 +3,7 @@
from base.twilltestcase import TwillTestCase
from base.interactor import build_interactor, stage_data_in_history
from galaxy.tools import DataManagerTool
+from galaxy.util import bunch
import logging
log = logging.getLogger( __name__ )
@@ -52,6 +53,7 @@
for output_index, output_tuple in enumerate(testdef.outputs):
# Get the correct hid
name, outfile, attributes = output_tuple
+ output_testdef = bunch.Bunch( name=name, outfile=outfile, attributes=attributes )
try:
output_data = data_list[ name ]
except (TypeError, KeyError):
@@ -64,7 +66,7 @@
output_data = data_list[ len(data_list) - len(testdef.outputs) + output_index ]
self.assertTrue( output_data is not None )
try:
- galaxy_interactor.verify_output( history, output_data, outfile, attributes=attributes, shed_tool_id=shed_tool_id, maxseconds=maxseconds )
+ galaxy_interactor.verify_output( history, output_data, output_testdef=output_testdef, shed_tool_id=shed_tool_id, maxseconds=maxseconds )
except Exception:
for stream in ['stdout', 'stderr']:
stream_output = galaxy_interactor.get_job_stream( history, output_data, stream=stream )
diff -r 74b6e23ed7882f3c091d2b66ce85025241372017 -r 8e6cda4c1b3d1685a8ce07412dc542f34eb8b44b test/functional/tools/multi_output.xml
--- a/test/functional/tools/multi_output.xml
+++ b/test/functional/tools/multi_output.xml
@@ -1,7 +1,7 @@
<tool id="multi_output" name="Multi_Output" description="multi_output" force_history_refresh="True" version="0.1.0">
    <command>
echo "Hello" > $report;
- echo "World" > '${__new_file_path__}/primary_${report.id}_moo_visible_?'
+ echo "World Contents" > '${__new_file_path__}/primary_${report.id}_world_visible_?'
    </command>
    <inputs>
        <param name="input" type="integer" value="7" />
@@ -16,6 +16,11 @@
            <assert_contents>
                <has_line line="Hello" />
            </assert_contents>
+ <discovered_dataset designation="world">
+ <assert_contents>
+ <has_line line="World Contents" />
+ </assert_contents>
+            </discovered_dataset>
        </output>
    </test>
</tests>
diff -r 74b6e23ed7882f3c091d2b66ce85025241372017 -r 8e6cda4c1b3d1685a8ce07412dc542f34eb8b44b test/functional/tools/multi_output_configured.xml
--- /dev/null
+++ b/test/functional/tools/multi_output_configured.xml
@@ -0,0 +1,43 @@
+<tool id="multi_output_configured" name="Multi_Output_Configured" description="multi_output_configured" force_history_refresh="True" version="0.1.0">
+ <command>
+ echo "Hello" > $report;
+ mkdir subdir1;
+ echo "This" > subdir1/this.txt;
+ echo "That" > subdir1/that.txt;
+ mkdir subdir2;
+ echo "1" > subdir2/CUSTOM_1.txt;
+ echo "2" > subdir2/CUSTOM_2.tabular;
+ echo "3" > subdir2/CUSTOM_3.txt;
+ </command>
+ <inputs>
+ <param name="input" type="integer" value="7" />
+ </inputs>
+ <outputs>
+ <data format="txt" name="report">
+ <discover_datasets pattern="__designation_and_ext__" directory="subdir1" />
+ <discover_datasets pattern="CUSTOM_(?P<designation>.+)\.(?P<ext>.+)" directory="subdir2" />
+ </data>
+ </outputs>
+ <tests>
+ <test>
+ <param name="input" value="7" />
+ <output name="report">
+ <assert_contents>
+ <has_line line="Hello" />
+ </assert_contents>
+ <discovered_dataset designation="this" ftype="txt">
+ <assert_contents><has_line line="This" /></assert_contents>
+ </discovered_dataset>
+ <discovered_dataset designation="that" ftype="txt">
+ <assert_contents><has_line line="That" /></assert_contents>
+ </discovered_dataset>
+ <discovered_dataset designation="1" ftype="txt">
+ <assert_contents><has_line line="1" /></assert_contents>
+ </discovered_dataset>
+ <discovered_dataset designation="2" ftype="tabular">
+ <assert_contents><has_line line="2" /></assert_contents>
+ </discovered_dataset>
+ </output>
+ </test>
+ </tests>
+</tool>
diff -r 74b6e23ed7882f3c091d2b66ce85025241372017 -r 8e6cda4c1b3d1685a8ce07412dc542f34eb8b44b test/functional/tools/samples_tool_conf.xml
--- a/test/functional/tools/samples_tool_conf.xml
+++ b/test/functional/tools/samples_tool_conf.xml
@@ -8,6 +8,7 @@
    <tool file="multi_page.xml"/>
    <tool file="multi_select.xml" />
    <tool file="multi_output.xml" />
+   <tool file="multi_output_configured.xml" />
    <tool file="composite_output.xml" />
    <tool file="metadata.xml" />
    <tool file="output_order.xml" />
diff -r 74b6e23ed7882f3c091d2b66ce85025241372017 -r 8e6cda4c1b3d1685a8ce07412dc542f34eb8b44b test/functional/workflow.py
--- a/test/functional/workflow.py
+++ b/test/functional/workflow.py
@@ -4,6 +4,7 @@
from base.interactor import GalaxyInteractorApi, stage_data_in_history
from galaxy.util import parse_xml
+from galaxy.util import bunch
from galaxy.tools.test import parse_param_elem, require_file, test_data_iter, parse_output_elems
from json import load, dumps
@@ -66,10 +67,11 @@
for expected_output_def in workflow_test.outputs:
# Get the correct hid
name, outfile, attributes = expected_output_def
+ output_testdef = bunch.Bunch( name=name, outfile=outfile, attributes=attributes )
output_data = outputs[ int( name ) ]
try:
- galaxy_interactor.verify_output( test_history, output_data, outfile, attributes=attributes, shed_tool_id=None, maxseconds=maxseconds )
+ galaxy_interactor.verify_output( test_history, output_data, output_testdef=output_testdef, shed_tool_id=None, maxseconds=maxseconds )
except Exception:
for stream in ['stdout', 'stderr']:
stream_output = galaxy_interactor.get_job_stream( test_history, output_data, stream=stream )
diff -r 74b6e23ed7882f3c091d2b66ce85025241372017 -r 8e6cda4c1b3d1685a8ce07412dc542f34eb8b44b test/unit/tools/test_collect_primary_datasets.py
--- a/test/unit/tools/test_collect_primary_datasets.py
+++ b/test/unit/tools/test_collect_primary_datasets.py
@@ -5,6 +5,8 @@
import tools_support
from galaxy import model
+from galaxy import util
+from galaxy.tools.parameters import output_collect
DEFAULT_TOOL_OUTPUT = "out1"
DEFAULT_EXTRA_NAME = "test1"
@@ -114,6 +116,75 @@
extra_job_assoc = filter( lambda job_assoc: job_assoc.name.startswith( "__" ), self.job.output_datasets )[ 0 ]
assert extra_job_assoc.name == "__new_primary_file_out1|test1__"
+ def test_pattern_override_designation( self ):
+ self._replace_output_collectors( '''<output><discover_datasets pattern="__designation__" directory="subdir" ext="txt" /></output>''' )
+ self._setup_extra_file( subdir="subdir", filename="foo.txt" )
+ primary_outputs = self._collect( )[ DEFAULT_TOOL_OUTPUT ]
+ assert len( primary_outputs ) == 1
+ created_hda = primary_outputs.values()[ 0 ]
+ assert "foo.txt" in created_hda.name
+ assert created_hda.ext == "txt"
+
+ def test_name_and_ext_pattern( self ):
+ self._replace_output_collectors( '''<output><discover_datasets pattern="__name_and_ext__" directory="subdir" /></output>''' )
+ self._setup_extra_file( subdir="subdir", filename="foo1.txt" )
+ self._setup_extra_file( subdir="subdir", filename="foo2.tabular" )
+ primary_outputs = self._collect( )[ DEFAULT_TOOL_OUTPUT ]
+ assert len( primary_outputs ) == 2
+ assert primary_outputs[ "foo1" ].ext == "txt"
+ assert primary_outputs[ "foo2" ].ext == "tabular"
+
+ def test_custom_pattern( self ):
+ # Hypothetical oral metagenomic classifier that populates a directory
+ # of files based on name and genome. Use custom regex pattern to grab
+ # and classify these files.
+ self._replace_output_collectors( '''<output><discover_datasets pattern="(?P<designation>.*)__(?P<dbkey>.*).fasta" directory="genome_breakdown" ext="fasta" /></output>''' )
+ self._setup_extra_file( subdir="genome_breakdown", filename="samp1__hg19.fasta" )
+ self._setup_extra_file( subdir="genome_breakdown", filename="samp2__lactLact.fasta" )
+ self._setup_extra_file( subdir="genome_breakdown", filename="samp3__hg19.fasta" )
+ self._setup_extra_file( subdir="genome_breakdown", filename="samp4__lactPlan.fasta" )
+ self._setup_extra_file( subdir="genome_breakdown", filename="samp5__fusoNucl.fasta" )
+
+ # Put a file in directory we don't care about, just to make sure
+ # it doesn't get picked up by pattern.
+ self._setup_extra_file( subdir="genome_breakdown", filename="overview.txt" )
+
+ primary_outputs = self._collect( )[ DEFAULT_TOOL_OUTPUT ]
+ assert len( primary_outputs ) == 5
+ genomes = dict( samp1="hg19", samp2="lactLact", samp3="hg19", samp4="lactPlan", samp5="fusoNucl" )
+ for key, hda in primary_outputs.iteritems():
+ assert hda.dbkey == genomes[ key ]
+
+ def test_name_versus_designation( self ):
+ """ This test demonstrates the difference between name and designation
+ in grouping patterns and named patterns such as __designation__,
+ __name__, __designation_and_ext__, and __name_and_ext__.
+ """
+ self._replace_output_collectors( '''<output>
+ <discover_datasets pattern="__name_and_ext__" directory="subdir_for_name_discovery" />
+ <discover_datasets pattern="__designation_and_ext__" directory="subdir_for_designation_discovery" />
+ </output>''')
+ self._setup_extra_file( subdir="subdir_for_name_discovery", filename="example1.txt" )
+ self._setup_extra_file( subdir="subdir_for_designation_discovery", filename="example2.txt" )
+ primary_outputs = self._collect( )[ DEFAULT_TOOL_OUTPUT ]
+ name_output = primary_outputs[ "example1" ]
+ designation_output = primary_outputs[ "example2" ]
+ # While the name is also used as the designation, the designation is not
+ # the name itself; it only feeds into how the name is calculated.
+ assert name_output.name == "example1"
+ assert designation_output.name == "%s (%s)" % ( self.hda.name, "example2" )
+
+ def test_cannot_read_files_outside_job_directory( self ):
+ self._replace_output_collectors( '''<output>
+ <discover_datasets pattern="__name_and_ext__" directory="../../secrets" />
+ </output>''')
+ exception_thrown = False
+ try:
+ self._collect( )
+ except Exception:
+ exception_thrown = True
+ assert exception_thrown
+
def _collect_default_extra( self, **kwargs ):
return self._collect( **kwargs )[ DEFAULT_TOOL_OUTPUT ][ DEFAULT_EXTRA_NAME ]
@@ -122,6 +193,12 @@
job_working_directory = self.test_directory
return self.tool.collect_primary_datasets( self.outputs, job_working_directory )
+ def _replace_output_collectors( self, xml_str ):
+ # Rewrite tool as if it had been created with output containing
+ # supplied dataset_collector elem.
+ elem = util.parse_xml_string( xml_str )
+ self.tool.outputs[ DEFAULT_TOOL_OUTPUT ].dataset_collectors = output_collect.dataset_collectors_from_elem( elem )
+
def _append_job_json( self, object, output_path=None, line_type="new_primary_dataset" ):
object[ "type" ] = line_type
if output_path:
@@ -133,7 +210,8 @@
def _setup_extra_file( self, **kwargs ):
path = kwargs.get( "path", None )
- if not path:
+ filename = kwargs.get( "filename", None )
+ if not path and not filename:
name = kwargs.get( "name", DEFAULT_EXTRA_NAME )
visible = kwargs.get( "visible", "visible" )
ext = kwargs.get( "ext", "data" )
@@ -142,6 +220,13 @@
path = os.path.join( directory, "primary_%s_%s_%s_%s" % template_args )
if "dbkey" in kwargs:
path = "%s_%s" % ( path, kwargs[ "dbkey" ] )
+ if not path:
+ assert filename
+ subdir = kwargs.get( "subdir", "." )
+ path = os.path.join( self.test_directory, subdir, filename )
+ directory = os.path.dirname( path )
+ if not os.path.exists( directory ):
+ os.makedirs( directory )
contents = kwargs.get( "contents", "test contents" )
open( path, "w" ).write( contents )
return path
Repository URL: https://bitbucket.org/galaxy/galaxy-central/
--
This is a commit notification from bitbucket.org. You are receiving
this because you have the service enabled, addressing the recipient of
this email.
commit/galaxy-central: guerler: Charts: Fixes for heatmaps
by commits-noreply@bitbucket.org 05 May '14
05 May '14
1 new commit in galaxy-central:
https://bitbucket.org/galaxy/galaxy-central/commits/74b6e23ed788/
Changeset: 74b6e23ed788
User: guerler
Date: 2014-05-05 22:02:16
Summary: Charts: Fixes for heatmaps
Affected #: 3 files
diff -r c69894bb9855d38560eb9a55eaaedcd150930f10 -r 74b6e23ed7882f3c091d2b66ce85025241372017 config/plugins/visualizations/charts/static/charts/heatmap/config.js
--- a/config/plugins/visualizations/charts/static/charts/heatmap/config.js
+++ b/config/plugins/visualizations/charts/static/charts/heatmap/config.js
@@ -1,10 +1,12 @@
define([], function() {
return {
- title : 'Heatmap',
- library : '',
- tag : 'div',
+ title : 'Heatmap',
+ library : '',
+ tag : 'div',
use_panels : true,
+
+ // columns
columns : {
col_label : {
title : 'Columns',
@@ -19,6 +21,7 @@
},
},
+ // settings
settings: {
color_set : {
title : 'Color scheme',
@@ -107,6 +110,38 @@
value : 'wysiwyg'
}
]
+ },
+
+ sorting : {
+ title : 'Sorting',
+ info : 'How should the columns be clustered?',
+ type : 'select',
+ init : 'hclust',
+ data : [
+ {
+ label : 'Read from dataset',
+ value : 'hclust'
+ },
+ {
+ label : 'Sort column and row labels',
+ value : 'byboth'
+ },
+ {
+ label : 'Sort column labels',
+ value : 'bycolumns'
+ },
+ {
+ label : 'Sort by rows',
+ value : 'byrow'
+ }
+ ]
+ }
+ },
+
+ // menu definition
+ menu : function() {
+ return {
+ color_set : this.settings.color_set
}
}
};
diff -r c69894bb9855d38560eb9a55eaaedcd150930f10 -r 74b6e23ed7882f3c091d2b66ce85025241372017 config/plugins/visualizations/charts/static/charts/heatmap/heatmap-plugin.js
--- a/config/plugins/visualizations/charts/static/charts/heatmap/heatmap-plugin.js
+++ b/config/plugins/visualizations/charts/static/charts/heatmap/heatmap-plugin.js
@@ -180,26 +180,22 @@
//
// add ui elements
//
- // create ui elements
+ // create tooltip
this.$tooltip = $(this._templateTooltip());
- this.$select = $(this._templateSelect());
-
- // append
this.$el.append(this.$tooltip);
- this.$el.append(this.$select);
-
- // add event to select field
- this.$select.on('change', function(){
- self._sortByOrder(this.value);
- });
//
// finally draw the heatmap
//
- this._draw();
+ this._build();
+
+ // catch window resize event
+ $(window).resize(function () {
+ self._build();
+ });
},
- _draw: function() {
+ _build: function() {
// link this
var self = this;
@@ -212,7 +208,7 @@
this.height = this.heightContainer;
// calculate cell size
- this.cellSize = Math.min(((this.height - 50) / (this.rowNumber + this.margin.top + this.margin.bottom)),
+ this.cellSize = Math.min(((this.height) / (this.rowNumber + this.margin.top + this.margin.bottom)),
(this.width / (this.colNumber + this.margin.left + this.margin.right)));
// set font size
@@ -227,22 +223,27 @@
var width = this.width;
var height = this.height;
+ // reset svg
+ if (this.svg !== undefined) {
+ this.$el.find('svg').remove();
+ }
+
// add main group and translate
this.svg = d3.select(this.$el[0]).append('svg')
.append('g')
.attr('transform', 'translate(' + (this.widthContainer - width) / 2 + ',' +
- (this.heightContainer - height) / 2 + ')')
-
+ (this.heightContainer - height) / 2 + ')');
+
// reset sorting
this.rowSortOrder = false;
this.colSortOrder = false;
// build
- this._buildRowLabels();
- this._buildColLabels();
- this._buildHeatMap();
- this._buildLegend();
- this._buildTitle();
+ this.d3RowLabels = this._buildRowLabels();
+ this.d3ColLabels = this._buildColLabels();
+ this.d3HeatMap = this._buildHeatMap();
+ this.d3Legend = this._buildLegend();
+ this.d3Title = this._buildTitle();
},
// build title
@@ -258,7 +259,7 @@
var title = this.options.title;
// add title
- this.svg.append('g')
+ return this.svg.append('g')
.append('text')
.attr('x', width / 2)
.attr('y', height + 3 * cellSize + fontSize + 3)
@@ -323,6 +324,9 @@
})
.attr('y', height + cellSize - 3)
.style('font-size', fontSize + 'px');
+
+ // return
+ return legend;
},
// build column labels
@@ -366,6 +370,9 @@
self._sortByLabel('c', 'row', self.rowNumber, i, self.colSortOrder);
d3.select('#order').property('selectedIndex', 4).node().focus();
});
+
+ // return
+ return colLabels;
},
// build row labels
@@ -409,6 +416,9 @@
self._sortByLabel('r', 'col', self.colNumber, i, self.rowSortOrder);
d3.select('#order').property('selectedIndex', 4).node().focus();
});
+
+ // return
+ return rowLabels;
},
// build heat map
@@ -424,7 +434,7 @@
var colLabel = this.colLabel;
// heat map
- var heatMap = this.svg.append('g').attr('class','g3')
+ var heatmap = this.svg.append('g').attr('class','g3')
.selectAll('.cellg')
.data(self.data, function(d) {
return d.row + ':' + d.col;
@@ -466,6 +476,9 @@
d3.selectAll('.colLabel').classed('text-highlight',false);
d3.select('#heatmap-tooltip').classed('hidden', true);
});
+
+ // return
+ return heatmap;
},
// change ordering of cells
diff -r c69894bb9855d38560eb9a55eaaedcd150930f10 -r 74b6e23ed7882f3c091d2b66ce85025241372017 config/plugins/visualizations/charts/static/views/viewport.js
--- a/config/plugins/visualizations/charts/static/views/viewport.js
+++ b/config/plugins/visualizations/charts/static/views/viewport.js
@@ -6,7 +6,8 @@
return Backbone.View.extend({
// list of canvas elements
- canvas: [],
+ container_list: [],
+ canvas_list: [],
// initialize
initialize: function(app, options) {
@@ -26,7 +27,7 @@
this._fullscreen(this.$el, 80);
// create canvas element
- this._createCanvas('div');
+ this._createContainer('div');
// events
var self = this;
@@ -38,6 +39,7 @@
this.chart.on('set:state', function() {
// get info element
var $info = self.$el.find('#info');
+ var $container = self.$el.find('container');
// get icon
var $icon = $info.find('#icon');
@@ -49,11 +51,15 @@
$info.show();
$info.find('#text').html(self.chart.get('state_info'));
+ // hide containers
+ $container.hide();
+
// check status
var state = self.chart.get('state');
switch (state) {
case 'ok':
$info.hide();
+ $container.show()
break;
case 'failed':
$icon.addClass('fa fa-warning');
@@ -86,29 +92,35 @@
},
// creates n canvas elements
- _createCanvas: function(tag, n) {
+ _createContainer: function(tag, n) {
// check size of requested canvas elements
n = n || 1;
// clear previous canvas elements
- for (var i in this.canvas) {
- this.canvas[i].remove();
- this.canvas.slice(i, 0);
+ for (var i in this.container_list) {
+ this.container_list[i].remove();
}
+ // reset lists
+ this.container_list = [];
+ this.canvas_list = [];
+
// create requested canvas elements
for (var i = 0; i < n; i++) {
// create element
- var canvas_el = $(this._templateCanvas(tag, parseInt(100 / n)));
+ var canvas_el = $(this._templateContainer(tag, parseInt(100 / n)));
// add to view
this.$el.append(canvas_el);
- // find canvas element
+ // add to list
+ this.container_list[i] = canvas_el;
+
+ // add a separate list for canvas elements
if (tag == 'svg') {
- this.canvas[i] = d3.select(canvas_el[0]);
+ this.canvas_list[i] = d3.select(canvas_el.find('#canvas')[0]);
} else {
- this.canvas[i] = canvas_el;
+ this.canvas_list[i] = canvas_el.find('#canvas');
}
}
},
@@ -137,7 +149,7 @@
}
// create canvas element and add to canvas list
- this._createCanvas(this.chart_settings.tag, n_panels);
+ this._createContainer(this.chart_settings.tag, n_panels);
// set chart state
chart.state('wait', 'Please wait...');
@@ -145,6 +157,7 @@
// clean up data if there is any from previous jobs
if (!this.chart_settings.execute ||
(this.chart_settings.execute && chart.get('modified'))) {
+
// reset jobs
this.app.jobs.cleanup(chart);
@@ -156,7 +169,7 @@
var self = this;
require(['plugin/charts/' + chart_type + '/' + chart_type], function(ChartView) {
// create chart
- var view = new ChartView(self.app, {canvas : self.canvas});
+ var view = new ChartView(self.app, {canvas : self.canvas_list});
// request data
if (self.chart_settings.execute) {
@@ -269,8 +282,11 @@
},
// template svg/div element
- _templateCanvas: function(tag, width) {
- return '<' + tag + ' class="canvas" style="float: left; display: block; width:' + width + '%; height: 100%;"/>';
+ _templateContainer: function(tag, width) {
+ return '<div class="container" style="float: left; display: block; width:' + width + '%; height: 100%;">' +
+ '<div id="menu"/>' +
+ '<' + tag + ' id="canvas" class="canvas" style="display: block; width:100%; height: inherit;">' +
+ '</div>';
}
});
Repository URL: https://bitbucket.org/galaxy/galaxy-central/
--
This is a commit notification from bitbucket.org. You are receiving
this because you have the service enabled, addressing the recipient of
this email.
commit/galaxy-central: martenson: data libraries: when importing into history set it as current and provide link to the 'Analyze page' so users can switch quickly
by commits-noreply@bitbucket.org 05 May '14
05 May '14
1 new commit in galaxy-central:
https://bitbucket.org/galaxy/galaxy-central/commits/c69894bb9855/
Changeset: c69894bb9855
User: martenson
Date: 2014-05-05 19:51:13
Summary: data libraries: when importing into history set it as current and provide link to the 'Analyze page' so users can switch quickly
Affected #: 4 files
diff -r 24834f127f4cced95f0a64dd5a7ccf72a3f18134 -r c69894bb9855d38560eb9a55eaaedcd150930f10 static/scripts/mvc/library/library-folderrow-view.js
--- a/static/scripts/mvc/library/library-folderrow-view.js
+++ b/static/scripts/mvc/library/library-folderrow-view.js
@@ -183,10 +183,18 @@
var self = this;
historyItem.url = historyItem.urlRoot + history_id + '/contents';
+ // set the used history as current so the user will see the history they
+ // last imported into in the history panel on the 'analysis' page
+ var set_current_url = '/api/histories/' + history_id + '/set_as_current';
+ $.ajax({
+ url: set_current_url,
+ type: 'PUT'
+ });
+
// save the dataset into selected history
historyItem.save({ content : library_dataset_id, source : 'library' }, {
success : function(){
- mod_toastr.success('Dataset imported');
+ mod_toastr.success('Dataset imported. Click this to start analysing it.', '', {onclick: function() {window.location='/'}});
//enable the buttons
self.modal.enableButton('Import');
self.modal.enableButton('Download');
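The same set_as_current call can also be made directly against the API; a minimal sketch using the requests library, where the base URL, API key, and encoded history id are placeholders.

    import requests

    GALAXY_URL = "http://localhost:8080"     # placeholder Galaxy instance
    API_KEY = "YOUR_API_KEY"                 # placeholder API key
    HISTORY_ID = "f2db41e1fa331b3e"          # placeholder encoded history id

    # Same endpoint the folder row view PUTs to before saving the dataset into the history.
    response = requests.put(
        "%s/api/histories/%s/set_as_current" % (GALAXY_URL, HISTORY_ID),
        params={"key": API_KEY},
    )
    response.raise_for_status()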
diff -r 24834f127f4cced95f0a64dd5a7ccf72a3f18134 -r c69894bb9855d38560eb9a55eaaedcd150930f10 static/scripts/mvc/library/library-foldertoolbar-view.js
--- a/static/scripts/mvc/library/library-foldertoolbar-view.js
+++ b/static/scripts/mvc/library/library-foldertoolbar-view.js
@@ -222,6 +222,14 @@
datasets_to_import.push(historyItem);
}
this.options.chain_call_control.total_number = datasets_to_import.length;
+
+ // set the used history as current so the user will see the history they
+ // last imported into in the history panel on the 'analysis' page
+ var set_current_url = '/api/histories/' + history_id + '/set_as_current';
+ $.ajax({
+ url: set_current_url,
+ type: 'PUT'
+ });
// call the recursive function to call ajax one after each other (request FIFO queue)
this.chainCall(datasets_to_import, history_name);
},
@@ -231,11 +239,11 @@
var popped_item = history_item_set.pop();
if (typeof popped_item === "undefined") {
if (this.options.chain_call_control.failed_number === 0){
- mod_toastr.success('Selected datasets imported into history');
+ mod_toastr.success('Selected datasets imported into history. Click this to start analysing it.', '', {onclick: function() {window.location='/'}});
} else if (this.options.chain_call_control.failed_number === this.options.chain_call_control.total_number){
mod_toastr.error('There was an error and no datasets were imported into history.');
} else if (this.options.chain_call_control.failed_number < this.options.chain_call_control.total_number){
- mod_toastr.warning('Some of the datasets could not be imported into history');
+ mod_toastr.warning('Some of the datasets could not be imported into history. Click this to see what was imported.', '', {onclick: function() {window.location='/'}});
}
Galaxy.modal.hide();
return;
diff -r 24834f127f4cced95f0a64dd5a7ccf72a3f18134 -r c69894bb9855d38560eb9a55eaaedcd150930f10 static/scripts/packed/mvc/library/library-folderrow-view.js
--- a/static/scripts/packed/mvc/library/library-folderrow-view.js
+++ b/static/scripts/packed/mvc/library/library-folderrow-view.js
@@ -1,1 +1,1 @@
-define(["galaxy.masthead","utils/utils","libs/toastr","mvc/library/library-model"],function(b,d,e,c){var a=Backbone.View.extend({lastSelectedHistory:"",events:{"click .undelete_dataset_btn":"undelete_dataset"},options:{type:null},initialize:function(f){this.render(f)},render:function(f){var g=null;if(f.get("type")==="folder"){this.options.type="folder";g=this.templateRowFolder()}else{this.options.type="file";if(f.get("deleted")){g=this.templateRowDeletedFile()}else{g=this.templateRowFile()}}this.setElement(g({content_item:f}));this.$el.show();return this},showDatasetDetails:function(){var i=this.id;var h=new c.Item();var g=new c.GalaxyHistories();h.id=i;var f=this;h.fetch({success:function(j){g.fetch({success:function(k){f.renderModalAfterFetch(j,k)},error:function(l,k){if(typeof k.responseJSON!=="undefined"){e.error(k.responseJSON.err_msg)}else{e.error("An error occured during fetching histories:(")}f.renderModalAfterFetch(j)}})},error:function(k,j){if(typeof j.responseJSON!=="undefined"){e.error(j.responseJSON.err_msg)}else{e.error("An error occured during loading dataset details :(")}}})},renderModalAfterFetch:function(k,h){var i=this.size_to_string(k.get("file_size"));var j=_.template(this.templateDatasetModal(),{item:k,size:i});var g=this;this.modal=Galaxy.modal;this.modal.show({closing_events:true,title:k.get("name"),body:j,buttons:{Import:function(){g.importCurrentIntoHistory()},Download:function(){g.downloadCurrent()},Close:function(){g.modal.hide()}}});$(".peek").html(k.get("peek"));if(typeof history.models!==undefined){var f=_.template(this.templateHistorySelectInModal(),{histories:h.models});$(this.modal.elMain).find(".buttons").prepend(f);if(g.lastSelectedHistory.length>0){$(this.modal.elMain).find("#dataset_import_single").val(g.lastSelectedHistory)}}},size_to_string:function(f){var g="";if(f>=100000000000){f=f/100000000000;g="TB"}else{if(f>=100000000){f=f/100000000;g="GB"}else{if(f>=100000){f=f/100000;g="MB"}else{if(f>=100){f=f/100;g="KB"}else{f=f*10;g="b"}}}}return(Math.round(f)/10)+g},downloadCurrent:function(){this.modal.disableButton("Import");this.modal.disableButton("Download");var f=[];f.push($("#id_row").attr("data-id"));var g="/api/libraries/datasets/download/uncompressed";var h={ldda_ids:f};this.processDownload(g,h);this.modal.enableButton("Import");this.modal.enableButton("Download")},processDownload:function(g,h,i){if(g&&h){h=typeof h=="string"?h:$.param(h);var f="";$.each(h.split("&"),function(){var j=this.split("=");f+='<input type="hidden" name="'+j[0]+'" value="'+j[1]+'" />'});$('<form action="'+g+'" method="'+(i||"post")+'">'+f+"</form>").appendTo("body").submit().remove();e.info("Your download will begin soon")}},importCurrentIntoHistory:function(){this.modal.disableButton("Import");this.modal.disableButton("Download");var h=$(this.modal.elMain).find("select[name=dataset_import_single] option:selected").val();this.lastSelectedHistory=h;var f=$("#id_row").attr("data-id");var i=new c.HistoryItem();var g=this;i.url=i.urlRoot+h+"/contents";i.save({content:f,source:"library"},{success:function(){e.success("Dataset imported");g.modal.enableButton("Import");g.modal.enableButton("Download")},error:function(k,j){if(typeof j.responseJSON!=="undefined"){e.error("Dataset not imported. "+j.responseJSON.err_msg)}else{e.error("An error occured! Dataset not imported. 
Please try again.")}g.modal.enableButton("Import");g.modal.enableButton("Download")}})},undelete_dataset:function(g){$(".tooltip").hide();var f=$(g.target).closest("tr")[0].id;var h=Galaxy.libraries.folderListView.collection.get(f);h.url=h.urlRoot+h.id+"?undelete=true";h.destroy({success:function(j,i){Galaxy.libraries.folderListView.collection.remove(f);var k=new c.Item(i);Galaxy.libraries.folderListView.collection.add(k);e.success("Dataset undeleted")},error:function(j,i){if(typeof i.responseJSON!=="undefined"){e.error("Dataset was not undeleted. "+i.responseJSON.err_msg)}else{e.error("An error occured! Dataset was not undeleted. Please try again.")}}})},templateRowFolder:function(){tmpl_array=[];tmpl_array.push('<tr class="folder_row light" id="<%- content_item.id %>">');tmpl_array.push(" <td>");tmpl_array.push(' <span title="Folder" class="fa fa-folder-o"></span>');tmpl_array.push(" </td>");tmpl_array.push(" <td></td>");tmpl_array.push(" <td>");tmpl_array.push(' <a href="#folders/<%- content_item.id %>"><%- content_item.get("name") %></a>');tmpl_array.push(' <% if (content_item.get("item_count") === 0) { %>');tmpl_array.push(" <span>(empty folder)</span>");tmpl_array.push(" <% } %>");tmpl_array.push(" </td>");tmpl_array.push(" <td>folder</td>");tmpl_array.push(" <td></td>");tmpl_array.push(' <td><%= _.escape(content_item.get("update_time")) %></td>');tmpl_array.push(" <td></td>");tmpl_array.push("</tr>");return _.template(tmpl_array.join(""))},templateRowFile:function(){tmpl_array=[];tmpl_array.push('<tr class="dataset_row light" id="<%- content_item.id %>">');tmpl_array.push(" <td>");tmpl_array.push(' <span title="Dataset" class="fa fa-file-o"></span>');tmpl_array.push(" </td>");tmpl_array.push(' <td style="text-align: center; "><input style="margin: 0;" type="checkbox"></td>');tmpl_array.push(' <td><a href="#folders/<%- content_item.get("folder_id") %>/datasets/<%- content_item.id %>" class="library-dataset"><%- content_item.get("name") %><a></td>');tmpl_array.push(' <td><%= _.escape(content_item.get("data_type")) %></td>');tmpl_array.push(' <td><%= _.escape(content_item.get("readable_size")) %></td>');tmpl_array.push(' <td><%= _.escape(content_item.get("update_time")) %></td>');tmpl_array.push(" <td></td>");tmpl_array.push("</tr>");return _.template(tmpl_array.join(""))},templateRowDeletedFile:function(){tmpl_array=[];tmpl_array.push('<tr class="active deleted_dataset" id="<%- content_item.id %>">');tmpl_array.push(" <td>");tmpl_array.push(' <span title="Dataset" class="fa fa-file-o"></span>');tmpl_array.push(" </td>");tmpl_array.push(' <td><span data-toggle="tooltip" data-placement="top" title="Marked deleted" style="color:grey;" class="fa fa-ban fa-lg"></span></td>');tmpl_array.push(' <td style="color:grey;"><%- content_item.get("name") %></td>');tmpl_array.push(' <td><%= _.escape(content_item.get("data_type")) %></td>');tmpl_array.push(' <td><%= _.escape(content_item.get("readable_size")) %></td>');tmpl_array.push(' <td><%= _.escape(content_item.get("update_time")) %></td>');tmpl_array.push(' <td class="right-center"><button data-toggle="tooltip" data-placement="top" title="Undelete <%- content_item.get("name") %>" class="primary-button btn-xs undelete_dataset_btn show_on_hover" type="button" style="display:none;"><span class="fa fa-unlock"> Undelete</span></button></td>');tmpl_array.push("</tr>");return _.template(tmpl_array.join(""))},templateDatasetModal:function(){var f=[];f.push('<div class="modal_table">');f.push(' <table class="grid table table-striped 
table-condensed">');f.push(" <tr>");f.push(' <th scope="row" id="id_row" data-id="<%= _.escape(item.get("ldda_id")) %>">Name</th>');f.push(' <td><%= _.escape(item.get("name")) %></td>');f.push(" </tr>");f.push(" <tr>");f.push(' <th scope="row">Data type</th>');f.push(' <td><%= _.escape(item.get("data_type")) %></td>');f.push(" </tr>");f.push(" <tr>");f.push(' <th scope="row">Genome build</th>');f.push(' <td><%= _.escape(item.get("genome_build")) %></td>');f.push(" </tr>");f.push(' <th scope="row">Size</th>');f.push(" <td><%= _.escape(size) %></td>");f.push(" </tr>");f.push(" <tr>");f.push(' <th scope="row">Date uploaded (UTC)</th>');f.push(' <td><%= _.escape(item.get("date_uploaded")) %></td>');f.push(" </tr>");f.push(" <tr>");f.push(' <th scope="row">Uploaded by</th>');f.push(' <td><%= _.escape(item.get("uploaded_by")) %></td>');f.push(" </tr>");f.push(' <tr scope="row">');f.push(' <th scope="row">Data Lines</th>');f.push(' <td scope="row"><%= _.escape(item.get("metadata_data_lines")) %></td>');f.push(" </tr>");f.push(' <th scope="row">Comment Lines</th>');f.push(' <% if (item.get("metadata_comment_lines") === "") { %>');f.push(' <td scope="row"><%= _.escape(item.get("metadata_comment_lines")) %></td>');f.push(" <% } else { %>");f.push(' <td scope="row">unknown</td>');f.push(" <% } %>");f.push(" </tr>");f.push(" <tr>");f.push(' <th scope="row">Number of Columns</th>');f.push(' <td scope="row"><%= _.escape(item.get("metadata_columns")) %></td>');f.push(" </tr>");f.push(" <tr>");f.push(' <th scope="row">Column Types</th>');f.push(' <td scope="row"><%= _.escape(item.get("metadata_column_types")) %></td>');f.push(" </tr>");f.push(" <tr>");f.push(' <th scope="row">Miscellaneous information</th>');f.push(' <td scope="row"><%= _.escape(item.get("misc_blurb")) %></td>');f.push(" </tr>");f.push(" </table>");f.push(' <pre class="peek">');f.push(" </pre>");f.push("</div>");return f.join("")},templateHistorySelectInModal:function(){var f=[];f.push('<span id="history_modal_combo" style="width:100%; margin-left: 1em; margin-right: 1em; ">');f.push("Select history: ");f.push('<select id="dataset_import_single" name="dataset_import_single" style="width:40%; margin-bottom: 1em; "> ');f.push(" <% _.each(histories, function(history) { %>");f.push(' <option value="<%= _.escape(history.get("id")) %>"><%= _.escape(history.get("name")) %></option>');f.push(" <% }); %>");f.push("</select>");f.push("</span>");return f.join("")}});return{FolderRowView:a}});
\ No newline at end of file
+define(["galaxy.masthead","utils/utils","libs/toastr","mvc/library/library-model"],function(b,d,e,c){var a=Backbone.View.extend({lastSelectedHistory:"",events:{"click .undelete_dataset_btn":"undelete_dataset"},options:{type:null},initialize:function(f){this.render(f)},render:function(f){var g=null;if(f.get("type")==="folder"){this.options.type="folder";g=this.templateRowFolder()}else{this.options.type="file";if(f.get("deleted")){g=this.templateRowDeletedFile()}else{g=this.templateRowFile()}}this.setElement(g({content_item:f}));this.$el.show();return this},showDatasetDetails:function(){var i=this.id;var h=new c.Item();var g=new c.GalaxyHistories();h.id=i;var f=this;h.fetch({success:function(j){g.fetch({success:function(k){f.renderModalAfterFetch(j,k)},error:function(l,k){if(typeof k.responseJSON!=="undefined"){e.error(k.responseJSON.err_msg)}else{e.error("An error occured during fetching histories:(")}f.renderModalAfterFetch(j)}})},error:function(k,j){if(typeof j.responseJSON!=="undefined"){e.error(j.responseJSON.err_msg)}else{e.error("An error occured during loading dataset details :(")}}})},renderModalAfterFetch:function(k,h){var i=this.size_to_string(k.get("file_size"));var j=_.template(this.templateDatasetModal(),{item:k,size:i});var g=this;this.modal=Galaxy.modal;this.modal.show({closing_events:true,title:k.get("name"),body:j,buttons:{Import:function(){g.importCurrentIntoHistory()},Download:function(){g.downloadCurrent()},Close:function(){g.modal.hide()}}});$(".peek").html(k.get("peek"));if(typeof history.models!==undefined){var f=_.template(this.templateHistorySelectInModal(),{histories:h.models});$(this.modal.elMain).find(".buttons").prepend(f);if(g.lastSelectedHistory.length>0){$(this.modal.elMain).find("#dataset_import_single").val(g.lastSelectedHistory)}}},size_to_string:function(f){var g="";if(f>=100000000000){f=f/100000000000;g="TB"}else{if(f>=100000000){f=f/100000000;g="GB"}else{if(f>=100000){f=f/100000;g="MB"}else{if(f>=100){f=f/100;g="KB"}else{f=f*10;g="b"}}}}return(Math.round(f)/10)+g},downloadCurrent:function(){this.modal.disableButton("Import");this.modal.disableButton("Download");var f=[];f.push($("#id_row").attr("data-id"));var g="/api/libraries/datasets/download/uncompressed";var h={ldda_ids:f};this.processDownload(g,h);this.modal.enableButton("Import");this.modal.enableButton("Download")},processDownload:function(g,h,i){if(g&&h){h=typeof h=="string"?h:$.param(h);var f="";$.each(h.split("&"),function(){var j=this.split("=");f+='<input type="hidden" name="'+j[0]+'" value="'+j[1]+'" />'});$('<form action="'+g+'" method="'+(i||"post")+'">'+f+"</form>").appendTo("body").submit().remove();e.info("Your download will begin soon")}},importCurrentIntoHistory:function(){this.modal.disableButton("Import");this.modal.disableButton("Download");var i=$(this.modal.elMain).find("select[name=dataset_import_single] option:selected").val();this.lastSelectedHistory=i;var g=$("#id_row").attr("data-id");var j=new c.HistoryItem();var h=this;j.url=j.urlRoot+i+"/contents";var f="/api/histories/"+i+"/set_as_current";$.ajax({url:f,type:"PUT"});j.save({content:g,source:"library"},{success:function(){e.success("Dataset imported. Click this to start analysing it.","",{onclick:function(){window.location="/"}});h.modal.enableButton("Import");h.modal.enableButton("Download")},error:function(l,k){if(typeof k.responseJSON!=="undefined"){e.error("Dataset not imported. "+k.responseJSON.err_msg)}else{e.error("An error occured! Dataset not imported. 
Please try again.")}h.modal.enableButton("Import");h.modal.enableButton("Download")}})},undelete_dataset:function(g){$(".tooltip").hide();var f=$(g.target).closest("tr")[0].id;var h=Galaxy.libraries.folderListView.collection.get(f);h.url=h.urlRoot+h.id+"?undelete=true";h.destroy({success:function(j,i){Galaxy.libraries.folderListView.collection.remove(f);var k=new c.Item(i);Galaxy.libraries.folderListView.collection.add(k);e.success("Dataset undeleted")},error:function(j,i){if(typeof i.responseJSON!=="undefined"){e.error("Dataset was not undeleted. "+i.responseJSON.err_msg)}else{e.error("An error occured! Dataset was not undeleted. Please try again.")}}})},templateRowFolder:function(){tmpl_array=[];tmpl_array.push('<tr class="folder_row light" id="<%- content_item.id %>">');tmpl_array.push(" <td>");tmpl_array.push(' <span title="Folder" class="fa fa-folder-o"></span>');tmpl_array.push(" </td>");tmpl_array.push(" <td></td>");tmpl_array.push(" <td>");tmpl_array.push(' <a href="#folders/<%- content_item.id %>"><%- content_item.get("name") %></a>');tmpl_array.push(' <% if (content_item.get("item_count") === 0) { %>');tmpl_array.push(" <span>(empty folder)</span>");tmpl_array.push(" <% } %>");tmpl_array.push(" </td>");tmpl_array.push(" <td>folder</td>");tmpl_array.push(" <td></td>");tmpl_array.push(' <td><%= _.escape(content_item.get("update_time")) %></td>');tmpl_array.push(" <td></td>");tmpl_array.push("</tr>");return _.template(tmpl_array.join(""))},templateRowFile:function(){tmpl_array=[];tmpl_array.push('<tr class="dataset_row light" id="<%- content_item.id %>">');tmpl_array.push(" <td>");tmpl_array.push(' <span title="Dataset" class="fa fa-file-o"></span>');tmpl_array.push(" </td>");tmpl_array.push(' <td style="text-align: center; "><input style="margin: 0;" type="checkbox"></td>');tmpl_array.push(' <td><a href="#folders/<%- content_item.get("folder_id") %>/datasets/<%- content_item.id %>" class="library-dataset"><%- content_item.get("name") %><a></td>');tmpl_array.push(' <td><%= _.escape(content_item.get("data_type")) %></td>');tmpl_array.push(' <td><%= _.escape(content_item.get("readable_size")) %></td>');tmpl_array.push(' <td><%= _.escape(content_item.get("update_time")) %></td>');tmpl_array.push(" <td></td>");tmpl_array.push("</tr>");return _.template(tmpl_array.join(""))},templateRowDeletedFile:function(){tmpl_array=[];tmpl_array.push('<tr class="active deleted_dataset" id="<%- content_item.id %>">');tmpl_array.push(" <td>");tmpl_array.push(' <span title="Dataset" class="fa fa-file-o"></span>');tmpl_array.push(" </td>");tmpl_array.push(' <td><span data-toggle="tooltip" data-placement="top" title="Marked deleted" style="color:grey;" class="fa fa-ban fa-lg"></span></td>');tmpl_array.push(' <td style="color:grey;"><%- content_item.get("name") %></td>');tmpl_array.push(' <td><%= _.escape(content_item.get("data_type")) %></td>');tmpl_array.push(' <td><%= _.escape(content_item.get("readable_size")) %></td>');tmpl_array.push(' <td><%= _.escape(content_item.get("update_time")) %></td>');tmpl_array.push(' <td class="right-center"><button data-toggle="tooltip" data-placement="top" title="Undelete <%- content_item.get("name") %>" class="primary-button btn-xs undelete_dataset_btn show_on_hover" type="button" style="display:none;"><span class="fa fa-unlock"> Undelete</span></button></td>');tmpl_array.push("</tr>");return _.template(tmpl_array.join(""))},templateDatasetModal:function(){var f=[];f.push('<div class="modal_table">');f.push(' <table class="grid table table-striped 
table-condensed">');f.push(" <tr>");f.push(' <th scope="row" id="id_row" data-id="<%= _.escape(item.get("ldda_id")) %>">Name</th>');f.push(' <td><%= _.escape(item.get("name")) %></td>');f.push(" </tr>");f.push(" <tr>");f.push(' <th scope="row">Data type</th>');f.push(' <td><%= _.escape(item.get("data_type")) %></td>');f.push(" </tr>");f.push(" <tr>");f.push(' <th scope="row">Genome build</th>');f.push(' <td><%= _.escape(item.get("genome_build")) %></td>');f.push(" </tr>");f.push(' <th scope="row">Size</th>');f.push(" <td><%= _.escape(size) %></td>");f.push(" </tr>");f.push(" <tr>");f.push(' <th scope="row">Date uploaded (UTC)</th>');f.push(' <td><%= _.escape(item.get("date_uploaded")) %></td>');f.push(" </tr>");f.push(" <tr>");f.push(' <th scope="row">Uploaded by</th>');f.push(' <td><%= _.escape(item.get("uploaded_by")) %></td>');f.push(" </tr>");f.push(' <tr scope="row">');f.push(' <th scope="row">Data Lines</th>');f.push(' <td scope="row"><%= _.escape(item.get("metadata_data_lines")) %></td>');f.push(" </tr>");f.push(' <th scope="row">Comment Lines</th>');f.push(' <% if (item.get("metadata_comment_lines") === "") { %>');f.push(' <td scope="row"><%= _.escape(item.get("metadata_comment_lines")) %></td>');f.push(" <% } else { %>");f.push(' <td scope="row">unknown</td>');f.push(" <% } %>");f.push(" </tr>");f.push(" <tr>");f.push(' <th scope="row">Number of Columns</th>');f.push(' <td scope="row"><%= _.escape(item.get("metadata_columns")) %></td>');f.push(" </tr>");f.push(" <tr>");f.push(' <th scope="row">Column Types</th>');f.push(' <td scope="row"><%= _.escape(item.get("metadata_column_types")) %></td>');f.push(" </tr>");f.push(" <tr>");f.push(' <th scope="row">Miscellaneous information</th>');f.push(' <td scope="row"><%= _.escape(item.get("misc_blurb")) %></td>');f.push(" </tr>");f.push(" </table>");f.push(' <pre class="peek">');f.push(" </pre>");f.push("</div>");return f.join("")},templateHistorySelectInModal:function(){var f=[];f.push('<span id="history_modal_combo" style="width:100%; margin-left: 1em; margin-right: 1em; ">');f.push("Select history: ");f.push('<select id="dataset_import_single" name="dataset_import_single" style="width:40%; margin-bottom: 1em; "> ');f.push(" <% _.each(histories, function(history) { %>");f.push(' <option value="<%= _.escape(history.get("id")) %>"><%= _.escape(history.get("name")) %></option>');f.push(" <% }); %>");f.push("</select>");f.push("</span>");return f.join("")}});return{FolderRowView:a}});
\ No newline at end of file
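For readability, here is a rough unpacked sketch of what this packed change amounts to in FolderRowView.importCurrentIntoHistory: before saving the library dataset into the chosen history, the view now issues a PUT to /api/histories/<history_id>/set_as_current, and the success toast becomes clickable and returns the user to the analysis page. Identifier names below are illustrative (the packed build uses single-letter names); mod_library_model and mod_toastr stand for the mvc/library/library-model and libs/toastr modules from the define() call.

importCurrentIntoHistory : function(){
    this.modal.disableButton( 'Import' );
    this.modal.disableButton( 'Download' );
    var history_id = $( this.modal.elMain ).find( 'select[name=dataset_import_single] option:selected' ).val();
    this.lastSelectedHistory = history_id;
    var dataset_id = $( '#id_row' ).attr( 'data-id' );
    var historyItem = new mod_library_model.HistoryItem();
    var self = this;
    historyItem.url = historyItem.urlRoot + history_id + '/contents';
    // new in this commit: make the chosen history the user's current history
    $.ajax( { url: '/api/histories/' + history_id + '/set_as_current', type: 'PUT' } );
    historyItem.save( { content: dataset_id, source: 'library' }, {
        success: function(){
            // new in this commit: clicking the toast jumps back to the analysis page
            mod_toastr.success( 'Dataset imported. Click this to start analysing it.', '',
                                { onclick: function(){ window.location = '/'; } } );
            self.modal.enableButton( 'Import' );
            self.modal.enableButton( 'Download' );
        },
        error: function( model, response ){
            if ( typeof response.responseJSON !== 'undefined' ){
                mod_toastr.error( 'Dataset not imported. ' + response.responseJSON.err_msg );
            } else {
                mod_toastr.error( 'An error occurred! Dataset not imported. Please try again.' );
            }
            self.modal.enableButton( 'Import' );
            self.modal.enableButton( 'Download' );
        }
    } );
},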
diff -r 24834f127f4cced95f0a64dd5a7ccf72a3f18134 -r c69894bb9855d38560eb9a55eaaedcd150930f10 static/scripts/packed/mvc/library/library-foldertoolbar-view.js
--- a/static/scripts/packed/mvc/library/library-foldertoolbar-view.js
+++ b/static/scripts/packed/mvc/library/library-foldertoolbar-view.js
@@ -1,1 +1,1 @@
-define(["galaxy.masthead","utils/utils","libs/toastr","mvc/library/library-model"],function(b,d,e,c){var a=Backbone.View.extend({el:"#center",events:{"click #toolbtn_create_folder":"createFolderFromModal","click #toolbtn_bulk_import":"modalBulkImport","click .toolbtn_add_files":"addFilesToFolderModal","click #include_deleted_datasets_chk":"checkIncludeDeleted","click #toolbtn_bulk_delete":"deleteSelectedDatasets"},defaults:{can_add_library_item:false,contains_file:false,chain_call_control:{total_number:0,failed_number:0}},modal:null,histories:null,initialize:function(f){this.options=_.defaults(f||{},this.defaults);this.render()},render:function(g){this.options=_.extend(this.options,g);var i=false;var f=true;if(Galaxy.currUser){i=Galaxy.currUser.isAdmin();f=Galaxy.currUser.isAnonymous()}var h=this.templateToolBar();this.$el.html(h({id:this.options.id,admin_user:i,anonym:f}))},configureElements:function(f){this.options=_.extend(this.options,f);if(this.options.can_add_library_item===true){$(".add-library-items").show()}else{$(".add-library-items").hide()}if(this.options.contains_file===true){if(Galaxy.currUser){if(!Galaxy.currUser.isAnonymous()){$(".logged-dataset-manipulation").show();$(".dataset-manipulation").show()}else{$(".dataset-manipulation").show();$(".logged-dataset-manipulation").hide()}}else{$(".logged-dataset-manipulation").hide();$(".dataset-manipulation").hide()}}else{$(".logged-dataset-manipulation").hide();$(".dataset-manipulation").hide()}this.$el.find("[data-toggle]").tooltip()},createFolderFromModal:function(){event.preventDefault();event.stopPropagation();var f=this;var g=this.templateNewFolderInModal();this.modal=Galaxy.modal;this.modal.show({closing_events:true,title:"Create New Folder",body:g(),buttons:{Create:function(){f.create_new_folder_event()},Close:function(){Galaxy.modal.hide()}}})},create_new_folder_event:function(){var f=this.serialize_new_folder();if(this.validate_new_folder(f)){var g=new c.FolderAsModel();url_items=Backbone.history.fragment.split("/");current_folder_id=url_items[url_items.length-1];g.url=g.urlRoot+"/"+current_folder_id;g.save(f,{success:function(h){Galaxy.modal.hide();e.success("Folder created");h.set({type:"folder"});Galaxy.libraries.folderListView.collection.add(h)},error:function(i,h){Galaxy.modal.hide();if(typeof h.responseJSON!=="undefined"){e.error(h.responseJSON.err_msg)}else{e.error("An error ocurred :(")}}})}else{e.error("Folder's name is missing")}return false},serialize_new_folder:function(){return{name:$("input[name='Name']").val(),description:$("input[name='Description']").val()}},validate_new_folder:function(f){return f.name!==""},modalBulkImport:function(){var f=$("#folder_table").find(":checked");if(f.length===0){e.info("You have to select some datasets first")}else{this.refreshUserHistoriesList(function(g){var h=g.templateBulkImportInModal();g.modal=Galaxy.modal;g.modal.show({closing_events:true,title:"Import into History",body:h({histories:g.histories.models}),buttons:{Import:function(){g.importAllIntoHistory()},Close:function(){Galaxy.modal.hide()}}})})}},refreshUserHistoriesList:function(g){var f=this;this.histories=new c.GalaxyHistories();this.histories.fetch({success:function(){g(f)},error:function(i,h){if(typeof h.responseJSON!=="undefined"){e.error(h.responseJSON.err_msg)}else{e.error("An error ocurred :(")}}})},importAllIntoHistory:function(){this.modal.disableButton("Import");this.options.chain_call_control.total_number=0;this.options.chain_call_control.failed_number=0;var j=$("select[name=dataset_import_bulk] 
option:selected").val();this.options.last_used_history_id=j;var m=$("select[name=dataset_import_bulk] option:selected").text();var o=[];$("#folder_table").find(":checked").each(function(){if(this.parentElement.parentElement.id!==""){o.push(this.parentElement.parentElement.id)}});var n=this.templateImportIntoHistoryProgressBar();this.modal.$el.find(".modal-body").html(n({history_name:m}));var k=100/o.length;this.initProgress(k);var f=[];for(var g=o.length-1;g>=0;g--){var h=o[g];var l=new c.HistoryItem();l.url=l.urlRoot+j+"/contents";l.content=h;l.source="library";f.push(l)}this.options.chain_call_control.total_number=f.length;this.chainCall(f,m)},chainCall:function(g,j){var f=this;var h=g.pop();if(typeof h==="undefined"){if(this.options.chain_call_control.failed_number===0){e.success("Selected datasets imported into history")}else{if(this.options.chain_call_control.failed_number===this.options.chain_call_control.total_number){e.error("There was an error and no datasets were imported into history.")}else{if(this.options.chain_call_control.failed_number<this.options.chain_call_control.total_number){e.warning("Some of the datasets could not be imported into history")}}}Galaxy.modal.hide();return}var i=$.when(h.save({content:h.content,source:h.source}));i.done(function(){f.updateProgress();f.chainCall(g,j)}).fail(function(){f.options.chain_call_control.failed_number+=1;f.updateProgress();f.chainCall(g,j)})},initProgress:function(f){this.progress=0;this.progressStep=f},updateProgress:function(){this.progress+=this.progressStep;$(".progress-bar-import").width(Math.round(this.progress)+"%");txt_representation=Math.round(this.progress)+"% Complete";$(".completion_span").text(txt_representation)},download:function(f,j){var h=[];$("#folder_table").find(":checked").each(function(){if(this.parentElement.parentElement.id!==""){h.push(this.parentElement.parentElement.id)}});var g="/api/libraries/datasets/download/"+j;var i={ldda_ids:h};this.processDownload(g,i,"get")},processDownload:function(g,h,i){if(g&&h){h=typeof h==="string"?h:$.param(h);var f="";$.each(h.split("&"),function(){var j=this.split("=");f+='<input type="hidden" name="'+j[0]+'" value="'+j[1]+'" />'});$('<form action="'+g+'" method="'+(i||"post")+'">'+f+"</form>").appendTo("body").submit().remove();e.info("Your download will begin soon")}},addFilesToFolderModal:function(){this.refreshUserHistoriesList(function(f){f.modal=Galaxy.modal;var g=f.templateAddFilesInModal();f.modal.show({closing_events:true,title:"Add datasets from history to "+f.options.folder_name,body:g({histories:f.histories.models}),buttons:{Add:function(){f.addAllDatasetsFromHistory()},Close:function(){Galaxy.modal.hide()}}});if(f.histories.models.length>0){f.fetchAndDisplayHistoryContents(f.histories.models[0].id);$("#dataset_add_bulk").change(function(h){f.fetchAndDisplayHistoryContents(h.target.value)})}else{e.error("An error ocurred :(")}})},fetchAndDisplayHistoryContents:function(h){var g=new c.HistoryContents({id:h});var f=this;g.fetch({success:function(j){var i=f.templateHistoryContents();f.histories.get(h).set({contents:j});f.modal.$el.find("#selected_history_content").html(i({history_contents:j.models.reverse()}))},error:function(){e.error("An error ocurred :(")}})},addAllDatasetsFromHistory:function(){this.modal.disableButton("Add");this.options.chain_call_control.total_number=0;this.options.chain_call_control.failed_number=0;var f=[];this.modal.$el.find("#selected_history_content").find(":checked").each(function(){var 
i=$(this.parentElement).data("id");if(i){f.push(i)}});var l=this.options.folder_name;var k=this.templateAddingDatasetsProgressBar();this.modal.$el.find(".modal-body").html(k({folder_name:l}));this.progressStep=100/f.length;this.progress=0;var j=[];for(var h=f.length-1;h>=0;h--){history_dataset_id=f[h];var g=new c.Item();g.url="/api/folders/"+this.options.id+"/contents";g.set({from_hda_id:history_dataset_id});j.push(g)}this.options.chain_call_control.total_number=j.length;this.chainCallAddingHdas(j)},chainCallAddingHdas:function(g){var f=this;this.added_hdas=new c.Folder();var h=g.pop();if(typeof h==="undefined"){if(this.options.chain_call_control.failed_number===0){e.success("Selected datasets from history added to the folder")}else{if(this.options.chain_call_control.failed_number===this.options.chain_call_control.total_number){e.error("There was an error and no datasets were added to the folder.")}else{if(this.options.chain_call_control.failed_number<this.options.chain_call_control.total_number){e.warning("Some of the datasets could not be added to the folder")}}}Galaxy.modal.hide();return this.added_hdas}var i=$.when(h.save({from_hda_id:h.get("from_hda_id")}));i.done(function(j){Galaxy.libraries.folderListView.collection.add(j);f.updateProgress();f.chainCallAddingHdas(g)}).fail(function(){f.options.chain_call_control.failed_number+=1;f.updateProgress();f.chainCallAddingHdas(g)})},checkIncludeDeleted:function(f){if(f.target.checked){Galaxy.libraries.folderListView.fetchFolder({include_deleted:true})}else{Galaxy.libraries.folderListView.fetchFolder({include_deleted:false})}},deleteSelectedDatasets:function(){var f=$("#folder_table").find(":checked");if(f.length===0){e.info("You have to select some datasets first")}else{var j=this.templateDeletingDatasetsProgressBar();this.modal=Galaxy.modal;this.modal.show({closing_events:true,title:"Deleting selected datasets",body:j({}),buttons:{Close:function(){Galaxy.modal.hide()}}});this.options.chain_call_control.total_number=0;this.options.chain_call_control.failed_number=0;var g=[];f.each(function(){if(this.parentElement.parentElement.id!==""){g.push(this.parentElement.parentElement.id)}});this.progressStep=100/g.length;this.progress=0;var l=[];for(var h=g.length-1;h>=0;h--){var k=new c.Item({id:g[h]});l.push(k)}this.options.chain_call_control.total_number=g.length;this.chainCallDeletingHdas(l)}},chainCallDeletingHdas:function(g){var f=this;this.deleted_lddas=new c.Folder();var h=g.pop();if(typeof h==="undefined"){if(this.options.chain_call_control.failed_number===0){e.success("Selected datasets deleted")}else{if(this.options.chain_call_control.failed_number===this.options.chain_call_control.total_number){e.error("There was an error and no datasets were deleted.")}else{if(this.options.chain_call_control.failed_number<this.options.chain_call_control.total_number){e.warning("Some of the datasets could not be deleted")}}}Galaxy.modal.hide();return this.deleted_lddas}var i=$.when(h.destroy());i.done(function(k){Galaxy.libraries.folderListView.collection.remove(h.id);f.updateProgress();if(Galaxy.libraries.folderListView.options.include_deleted){var j=new c.Item(k);Galaxy.libraries.folderListView.collection.add(j)}f.chainCallDeletingHdas(g)}).fail(function(){f.options.chain_call_control.failed_number+=1;f.updateProgress();f.chainCallDeletingHdas(g)})},templateToolBar:function(){tmpl_array=[];tmpl_array.push('<div class="library_style_container">');tmpl_array.push('<div id="library_folder_toolbar">');tmpl_array.push('<span data-toggle="tooltip" 
data-placement="top" class="logged-dataset-manipulation" title="Include deleted datasets" style="display:none;"><input id="include_deleted_datasets_chk" style="margin: 0;" type="checkbox"><span class="fa fa-trash-o fa-lg"></span></input></span>');tmpl_array.push('<div class="btn-group add-library-items" style="display:none;">');tmpl_array.push(' <button data-toggle="tooltip" data-placement="top" title="Create New Folder" id="toolbtn_create_folder" class="btn btn-default primary-button" type="button"><span class="fa fa-plus"></span><span class="fa fa-folder"></span></button>');tmpl_array.push(' <button data-toggle="tooltip" data-placement="top" title="Add Datasets to Current Folder" id="toolbtn_add_files" class="btn btn-default toolbtn_add_files primary-button" type="button"><span class="fa fa-plus"></span><span class="fa fa-file"></span></span></button>');tmpl_array.push("</div>");tmpl_array.push(' <button data-toggle="tooltip" data-placement="top" title="Import selected datasets into history" id="toolbtn_bulk_import" class="primary-button dataset-manipulation" style="margin-left: 0.5em; display:none;" type="button"><span class="fa fa-book"></span> to history</button>');tmpl_array.push(' <div id="toolbtn_dl" class="btn-group dataset-manipulation" style="margin-left: 0.5em; display:none; ">');tmpl_array.push(' <button title="Download selected datasets as archive" id="drop_toggle" type="button" class="primary-button dropdown-toggle" data-toggle="dropdown">');tmpl_array.push(' <span class="fa fa-download"></span> download <span class="caret"></span>');tmpl_array.push(" </button>");tmpl_array.push(' <ul class="dropdown-menu" role="menu">');tmpl_array.push(' <li id="download_archive"><a href="#/folders/<%= id %>/download/tgz">.tar.gz</a></li>');tmpl_array.push(' <li id="download_archive"><a href="#/folders/<%= id %>/download/tbz">.tar.bz</a></li>');tmpl_array.push(' <li id="download_archive"><a href="#/folders/<%= id %>/download/zip">.zip</a></li>');tmpl_array.push(" </ul>");tmpl_array.push(" </div>");tmpl_array.push(' <button data-toggle="tooltip" data-placement="top" title="Mark selected datasets deleted" id="toolbtn_bulk_delete" class="primary-button logged-dataset-manipulation" style="margin-left: 0.5em; display:none; " type="button"><span class="fa fa-times"></span> delete</button>');tmpl_array.push(" </div>");tmpl_array.push(' <div id="folder_items_element">');tmpl_array.push(" </div>");tmpl_array.push("</div>");return _.template(tmpl_array.join(""))},templateNewFolderInModal:function(){tmpl_array=[];tmpl_array.push('<div id="new_folder_modal">');tmpl_array.push("<form>");tmpl_array.push('<input type="text" name="Name" value="" placeholder="Name">');tmpl_array.push('<input type="text" name="Description" value="" placeholder="Description">');tmpl_array.push("</form>");tmpl_array.push("</div>");return _.template(tmpl_array.join(""))},templateBulkImportInModal:function(){var f=[];f.push('<span id="history_modal_combo_bulk" style="width:90%; margin-left: 1em; margin-right: 1em; ">');f.push("Select history: ");f.push('<select id="dataset_import_bulk" name="dataset_import_bulk" style="width:50%; margin-bottom: 1em; "> ');f.push(" <% _.each(histories, function(history) { %>");f.push(' <option value="<%= _.escape(history.get("id")) %>"><%= _.escape(history.get("name")) %></option>');f.push(" <% }); %>");f.push("</select>");f.push("</span>");return _.template(f.join(""))},templateImportIntoHistoryProgressBar:function(){var f=[];f.push('<div class="import_text">');f.push("Importing selected 
datasets to history <b><%= _.escape(history_name) %></b>");f.push("</div>");f.push('<div class="progress">');f.push(' <div class="progress-bar progress-bar-import" role="progressbar" aria-valuenow="0" aria-valuemin="0" aria-valuemax="100" style="width: 00%;">');f.push(' <span class="completion_span">0% Complete</span>');f.push(" </div>");f.push("</div>");f.push("");return _.template(f.join(""))},templateAddingDatasetsProgressBar:function(){var f=[];f.push('<div class="import_text">');f.push("Adding selected datasets from history to library folder <b><%= _.escape(folder_name) %></b>");f.push("</div>");f.push('<div class="progress">');f.push(' <div class="progress-bar progress-bar-import" role="progressbar" aria-valuenow="0" aria-valuemin="0" aria-valuemax="100" style="width: 00%;">');f.push(' <span class="completion_span">0% Complete</span>');f.push(" </div>");f.push("</div>");f.push("");return _.template(f.join(""))},templateDeletingDatasetsProgressBar:function(){var f=[];f.push('<div class="import_text">');f.push("</div>");f.push('<div class="progress">');f.push(' <div class="progress-bar progress-bar-delete" role="progressbar" aria-valuenow="0" aria-valuemin="0" aria-valuemax="100" style="width: 00%;">');f.push(' <span class="completion_span">0% Complete</span>');f.push(" </div>");f.push("</div>");f.push("");return _.template(f.join(""))},templateAddFilesInModal:function(){var f=[];f.push('<div id="add_files_modal">');f.push('<div id="history_modal_combo_bulk">');f.push("Select history: ");f.push('<select id="dataset_add_bulk" name="dataset_add_bulk" style="width:66%; "> ');f.push(" <% _.each(histories, function(history) { %>");f.push(' <option value="<%= _.escape(history.get("id")) %>"><%= _.escape(history.get("name")) %></option>');f.push(" <% }); %>");f.push("</select>");f.push("</div>");f.push('<div id="selected_history_content">');f.push("</div>");f.push("</div>");return _.template(f.join(""))},templateHistoryContents:function(){var f=[];f.push("Choose the datasets to import:");f.push("<ul>");f.push(" <% _.each(history_contents, function(history_item) { %>");f.push(' <li data-id="<%= _.escape(history_item.get("id")) %>">');f.push(' <input style="margin: 0;" type="checkbox"><%= _.escape(history_item.get("hid")) %>: <%= _.escape(history_item.get("name")) %>');f.push(" </li>");f.push(" <% }); %>");f.push("</ul>");return _.template(f.join(""))}});return{FolderToolbarView:a}});
\ No newline at end of file
+define(["galaxy.masthead","utils/utils","libs/toastr","mvc/library/library-model"],function(b,d,e,c){var a=Backbone.View.extend({el:"#center",events:{"click #toolbtn_create_folder":"createFolderFromModal","click #toolbtn_bulk_import":"modalBulkImport","click .toolbtn_add_files":"addFilesToFolderModal","click #include_deleted_datasets_chk":"checkIncludeDeleted","click #toolbtn_bulk_delete":"deleteSelectedDatasets"},defaults:{can_add_library_item:false,contains_file:false,chain_call_control:{total_number:0,failed_number:0}},modal:null,histories:null,initialize:function(f){this.options=_.defaults(f||{},this.defaults);this.render()},render:function(g){this.options=_.extend(this.options,g);var i=false;var f=true;if(Galaxy.currUser){i=Galaxy.currUser.isAdmin();f=Galaxy.currUser.isAnonymous()}var h=this.templateToolBar();this.$el.html(h({id:this.options.id,admin_user:i,anonym:f}))},configureElements:function(f){this.options=_.extend(this.options,f);if(this.options.can_add_library_item===true){$(".add-library-items").show()}else{$(".add-library-items").hide()}if(this.options.contains_file===true){if(Galaxy.currUser){if(!Galaxy.currUser.isAnonymous()){$(".logged-dataset-manipulation").show();$(".dataset-manipulation").show()}else{$(".dataset-manipulation").show();$(".logged-dataset-manipulation").hide()}}else{$(".logged-dataset-manipulation").hide();$(".dataset-manipulation").hide()}}else{$(".logged-dataset-manipulation").hide();$(".dataset-manipulation").hide()}this.$el.find("[data-toggle]").tooltip()},createFolderFromModal:function(){event.preventDefault();event.stopPropagation();var f=this;var g=this.templateNewFolderInModal();this.modal=Galaxy.modal;this.modal.show({closing_events:true,title:"Create New Folder",body:g(),buttons:{Create:function(){f.create_new_folder_event()},Close:function(){Galaxy.modal.hide()}}})},create_new_folder_event:function(){var f=this.serialize_new_folder();if(this.validate_new_folder(f)){var g=new c.FolderAsModel();url_items=Backbone.history.fragment.split("/");current_folder_id=url_items[url_items.length-1];g.url=g.urlRoot+"/"+current_folder_id;g.save(f,{success:function(h){Galaxy.modal.hide();e.success("Folder created");h.set({type:"folder"});Galaxy.libraries.folderListView.collection.add(h)},error:function(i,h){Galaxy.modal.hide();if(typeof h.responseJSON!=="undefined"){e.error(h.responseJSON.err_msg)}else{e.error("An error ocurred :(")}}})}else{e.error("Folder's name is missing")}return false},serialize_new_folder:function(){return{name:$("input[name='Name']").val(),description:$("input[name='Description']").val()}},validate_new_folder:function(f){return f.name!==""},modalBulkImport:function(){var f=$("#folder_table").find(":checked");if(f.length===0){e.info("You have to select some datasets first")}else{this.refreshUserHistoriesList(function(g){var h=g.templateBulkImportInModal();g.modal=Galaxy.modal;g.modal.show({closing_events:true,title:"Import into History",body:h({histories:g.histories.models}),buttons:{Import:function(){g.importAllIntoHistory()},Close:function(){Galaxy.modal.hide()}}})})}},refreshUserHistoriesList:function(g){var f=this;this.histories=new c.GalaxyHistories();this.histories.fetch({success:function(){g(f)},error:function(i,h){if(typeof h.responseJSON!=="undefined"){e.error(h.responseJSON.err_msg)}else{e.error("An error ocurred :(")}}})},importAllIntoHistory:function(){this.modal.disableButton("Import");this.options.chain_call_control.total_number=0;this.options.chain_call_control.failed_number=0;var k=$("select[name=dataset_import_bulk] 
option:selected").val();this.options.last_used_history_id=k;var n=$("select[name=dataset_import_bulk] option:selected").text();var p=[];$("#folder_table").find(":checked").each(function(){if(this.parentElement.parentElement.id!==""){p.push(this.parentElement.parentElement.id)}});var o=this.templateImportIntoHistoryProgressBar();this.modal.$el.find(".modal-body").html(o({history_name:n}));var l=100/p.length;this.initProgress(l);var f=[];for(var h=p.length-1;h>=0;h--){var j=p[h];var m=new c.HistoryItem();m.url=m.urlRoot+k+"/contents";m.content=j;m.source="library";f.push(m)}this.options.chain_call_control.total_number=f.length;var g="/api/histories/"+k+"/set_as_current";$.ajax({url:g,type:"PUT"});this.chainCall(f,n)},chainCall:function(g,j){var f=this;var h=g.pop();if(typeof h==="undefined"){if(this.options.chain_call_control.failed_number===0){e.success("Selected datasets imported into history. Click this to start analysing it.","",{onclick:function(){window.location="/"}})}else{if(this.options.chain_call_control.failed_number===this.options.chain_call_control.total_number){e.error("There was an error and no datasets were imported into history.")}else{if(this.options.chain_call_control.failed_number<this.options.chain_call_control.total_number){e.warning("Some of the datasets could not be imported into history. Click this to see what was imported.","",{onclick:function(){window.location="/"}})}}}Galaxy.modal.hide();return}var i=$.when(h.save({content:h.content,source:h.source}));i.done(function(){f.updateProgress();f.chainCall(g,j)}).fail(function(){f.options.chain_call_control.failed_number+=1;f.updateProgress();f.chainCall(g,j)})},initProgress:function(f){this.progress=0;this.progressStep=f},updateProgress:function(){this.progress+=this.progressStep;$(".progress-bar-import").width(Math.round(this.progress)+"%");txt_representation=Math.round(this.progress)+"% Complete";$(".completion_span").text(txt_representation)},download:function(f,j){var h=[];$("#folder_table").find(":checked").each(function(){if(this.parentElement.parentElement.id!==""){h.push(this.parentElement.parentElement.id)}});var g="/api/libraries/datasets/download/"+j;var i={ldda_ids:h};this.processDownload(g,i,"get")},processDownload:function(g,h,i){if(g&&h){h=typeof h==="string"?h:$.param(h);var f="";$.each(h.split("&"),function(){var j=this.split("=");f+='<input type="hidden" name="'+j[0]+'" value="'+j[1]+'" />'});$('<form action="'+g+'" method="'+(i||"post")+'">'+f+"</form>").appendTo("body").submit().remove();e.info("Your download will begin soon")}},addFilesToFolderModal:function(){this.refreshUserHistoriesList(function(f){f.modal=Galaxy.modal;var g=f.templateAddFilesInModal();f.modal.show({closing_events:true,title:"Add datasets from history to "+f.options.folder_name,body:g({histories:f.histories.models}),buttons:{Add:function(){f.addAllDatasetsFromHistory()},Close:function(){Galaxy.modal.hide()}}});if(f.histories.models.length>0){f.fetchAndDisplayHistoryContents(f.histories.models[0].id);$("#dataset_add_bulk").change(function(h){f.fetchAndDisplayHistoryContents(h.target.value)})}else{e.error("An error ocurred :(")}})},fetchAndDisplayHistoryContents:function(h){var g=new c.HistoryContents({id:h});var f=this;g.fetch({success:function(j){var i=f.templateHistoryContents();f.histories.get(h).set({contents:j});f.modal.$el.find("#selected_history_content").html(i({history_contents:j.models.reverse()}))},error:function(){e.error("An error ocurred 
:(")}})},addAllDatasetsFromHistory:function(){this.modal.disableButton("Add");this.options.chain_call_control.total_number=0;this.options.chain_call_control.failed_number=0;var f=[];this.modal.$el.find("#selected_history_content").find(":checked").each(function(){var i=$(this.parentElement).data("id");if(i){f.push(i)}});var l=this.options.folder_name;var k=this.templateAddingDatasetsProgressBar();this.modal.$el.find(".modal-body").html(k({folder_name:l}));this.progressStep=100/f.length;this.progress=0;var j=[];for(var h=f.length-1;h>=0;h--){history_dataset_id=f[h];var g=new c.Item();g.url="/api/folders/"+this.options.id+"/contents";g.set({from_hda_id:history_dataset_id});j.push(g)}this.options.chain_call_control.total_number=j.length;this.chainCallAddingHdas(j)},chainCallAddingHdas:function(g){var f=this;this.added_hdas=new c.Folder();var h=g.pop();if(typeof h==="undefined"){if(this.options.chain_call_control.failed_number===0){e.success("Selected datasets from history added to the folder")}else{if(this.options.chain_call_control.failed_number===this.options.chain_call_control.total_number){e.error("There was an error and no datasets were added to the folder.")}else{if(this.options.chain_call_control.failed_number<this.options.chain_call_control.total_number){e.warning("Some of the datasets could not be added to the folder")}}}Galaxy.modal.hide();return this.added_hdas}var i=$.when(h.save({from_hda_id:h.get("from_hda_id")}));i.done(function(j){Galaxy.libraries.folderListView.collection.add(j);f.updateProgress();f.chainCallAddingHdas(g)}).fail(function(){f.options.chain_call_control.failed_number+=1;f.updateProgress();f.chainCallAddingHdas(g)})},checkIncludeDeleted:function(f){if(f.target.checked){Galaxy.libraries.folderListView.fetchFolder({include_deleted:true})}else{Galaxy.libraries.folderListView.fetchFolder({include_deleted:false})}},deleteSelectedDatasets:function(){var f=$("#folder_table").find(":checked");if(f.length===0){e.info("You have to select some datasets first")}else{var j=this.templateDeletingDatasetsProgressBar();this.modal=Galaxy.modal;this.modal.show({closing_events:true,title:"Deleting selected datasets",body:j({}),buttons:{Close:function(){Galaxy.modal.hide()}}});this.options.chain_call_control.total_number=0;this.options.chain_call_control.failed_number=0;var g=[];f.each(function(){if(this.parentElement.parentElement.id!==""){g.push(this.parentElement.parentElement.id)}});this.progressStep=100/g.length;this.progress=0;var l=[];for(var h=g.length-1;h>=0;h--){var k=new c.Item({id:g[h]});l.push(k)}this.options.chain_call_control.total_number=g.length;this.chainCallDeletingHdas(l)}},chainCallDeletingHdas:function(g){var f=this;this.deleted_lddas=new c.Folder();var h=g.pop();if(typeof h==="undefined"){if(this.options.chain_call_control.failed_number===0){e.success("Selected datasets deleted")}else{if(this.options.chain_call_control.failed_number===this.options.chain_call_control.total_number){e.error("There was an error and no datasets were deleted.")}else{if(this.options.chain_call_control.failed_number<this.options.chain_call_control.total_number){e.warning("Some of the datasets could not be deleted")}}}Galaxy.modal.hide();return this.deleted_lddas}var i=$.when(h.destroy());i.done(function(k){Galaxy.libraries.folderListView.collection.remove(h.id);f.updateProgress();if(Galaxy.libraries.folderListView.options.include_deleted){var j=new 
c.Item(k);Galaxy.libraries.folderListView.collection.add(j)}f.chainCallDeletingHdas(g)}).fail(function(){f.options.chain_call_control.failed_number+=1;f.updateProgress();f.chainCallDeletingHdas(g)})},templateToolBar:function(){tmpl_array=[];tmpl_array.push('<div class="library_style_container">');tmpl_array.push('<div id="library_folder_toolbar">');tmpl_array.push('<span data-toggle="tooltip" data-placement="top" class="logged-dataset-manipulation" title="Include deleted datasets" style="display:none;"><input id="include_deleted_datasets_chk" style="margin: 0;" type="checkbox"><span class="fa fa-trash-o fa-lg"></span></input></span>');tmpl_array.push('<div class="btn-group add-library-items" style="display:none;">');tmpl_array.push(' <button data-toggle="tooltip" data-placement="top" title="Create New Folder" id="toolbtn_create_folder" class="btn btn-default primary-button" type="button"><span class="fa fa-plus"></span><span class="fa fa-folder"></span></button>');tmpl_array.push(' <button data-toggle="tooltip" data-placement="top" title="Add Datasets to Current Folder" id="toolbtn_add_files" class="btn btn-default toolbtn_add_files primary-button" type="button"><span class="fa fa-plus"></span><span class="fa fa-file"></span></span></button>');tmpl_array.push("</div>");tmpl_array.push(' <button data-toggle="tooltip" data-placement="top" title="Import selected datasets into history" id="toolbtn_bulk_import" class="primary-button dataset-manipulation" style="margin-left: 0.5em; display:none;" type="button"><span class="fa fa-book"></span> to history</button>');tmpl_array.push(' <div id="toolbtn_dl" class="btn-group dataset-manipulation" style="margin-left: 0.5em; display:none; ">');tmpl_array.push(' <button title="Download selected datasets as archive" id="drop_toggle" type="button" class="primary-button dropdown-toggle" data-toggle="dropdown">');tmpl_array.push(' <span class="fa fa-download"></span> download <span class="caret"></span>');tmpl_array.push(" </button>");tmpl_array.push(' <ul class="dropdown-menu" role="menu">');tmpl_array.push(' <li id="download_archive"><a href="#/folders/<%= id %>/download/tgz">.tar.gz</a></li>');tmpl_array.push(' <li id="download_archive"><a href="#/folders/<%= id %>/download/tbz">.tar.bz</a></li>');tmpl_array.push(' <li id="download_archive"><a href="#/folders/<%= id %>/download/zip">.zip</a></li>');tmpl_array.push(" </ul>");tmpl_array.push(" </div>");tmpl_array.push(' <button data-toggle="tooltip" data-placement="top" title="Mark selected datasets deleted" id="toolbtn_bulk_delete" class="primary-button logged-dataset-manipulation" style="margin-left: 0.5em; display:none; " type="button"><span class="fa fa-times"></span> delete</button>');tmpl_array.push(" </div>");tmpl_array.push(' <div id="folder_items_element">');tmpl_array.push(" </div>");tmpl_array.push("</div>");return _.template(tmpl_array.join(""))},templateNewFolderInModal:function(){tmpl_array=[];tmpl_array.push('<div id="new_folder_modal">');tmpl_array.push("<form>");tmpl_array.push('<input type="text" name="Name" value="" placeholder="Name">');tmpl_array.push('<input type="text" name="Description" value="" placeholder="Description">');tmpl_array.push("</form>");tmpl_array.push("</div>");return _.template(tmpl_array.join(""))},templateBulkImportInModal:function(){var f=[];f.push('<span id="history_modal_combo_bulk" style="width:90%; margin-left: 1em; margin-right: 1em; ">');f.push("Select history: ");f.push('<select id="dataset_import_bulk" name="dataset_import_bulk" style="width:50%; 
margin-bottom: 1em; "> ');f.push(" <% _.each(histories, function(history) { %>");f.push(' <option value="<%= _.escape(history.get("id")) %>"><%= _.escape(history.get("name")) %></option>');f.push(" <% }); %>");f.push("</select>");f.push("</span>");return _.template(f.join(""))},templateImportIntoHistoryProgressBar:function(){var f=[];f.push('<div class="import_text">');f.push("Importing selected datasets to history <b><%= _.escape(history_name) %></b>");f.push("</div>");f.push('<div class="progress">');f.push(' <div class="progress-bar progress-bar-import" role="progressbar" aria-valuenow="0" aria-valuemin="0" aria-valuemax="100" style="width: 00%;">');f.push(' <span class="completion_span">0% Complete</span>');f.push(" </div>");f.push("</div>");f.push("");return _.template(f.join(""))},templateAddingDatasetsProgressBar:function(){var f=[];f.push('<div class="import_text">');f.push("Adding selected datasets from history to library folder <b><%= _.escape(folder_name) %></b>");f.push("</div>");f.push('<div class="progress">');f.push(' <div class="progress-bar progress-bar-import" role="progressbar" aria-valuenow="0" aria-valuemin="0" aria-valuemax="100" style="width: 00%;">');f.push(' <span class="completion_span">0% Complete</span>');f.push(" </div>");f.push("</div>");f.push("");return _.template(f.join(""))},templateDeletingDatasetsProgressBar:function(){var f=[];f.push('<div class="import_text">');f.push("</div>");f.push('<div class="progress">');f.push(' <div class="progress-bar progress-bar-delete" role="progressbar" aria-valuenow="0" aria-valuemin="0" aria-valuemax="100" style="width: 00%;">');f.push(' <span class="completion_span">0% Complete</span>');f.push(" </div>");f.push("</div>");f.push("");return _.template(f.join(""))},templateAddFilesInModal:function(){var f=[];f.push('<div id="add_files_modal">');f.push('<div id="history_modal_combo_bulk">');f.push("Select history: ");f.push('<select id="dataset_add_bulk" name="dataset_add_bulk" style="width:66%; "> ');f.push(" <% _.each(histories, function(history) { %>");f.push(' <option value="<%= _.escape(history.get("id")) %>"><%= _.escape(history.get("name")) %></option>');f.push(" <% }); %>");f.push("</select>");f.push("</div>");f.push('<div id="selected_history_content">');f.push("</div>");f.push("</div>");return _.template(f.join(""))},templateHistoryContents:function(){var f=[];f.push("Choose the datasets to import:");f.push("<ul>");f.push(" <% _.each(history_contents, function(history_item) { %>");f.push(' <li data-id="<%= _.escape(history_item.get("id")) %>">');f.push(' <input style="margin: 0;" type="checkbox"><%= _.escape(history_item.get("hid")) %>: <%= _.escape(history_item.get("name")) %>');f.push(" </li>");f.push(" <% }); %>");f.push("</ul>");return _.template(f.join(""))}});return{FolderToolbarView:a}});
\ No newline at end of file
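Similarly, the packed foldertoolbar change boils down to two pieces in FolderToolbarView: importAllIntoHistory now switches the user's current history to the chosen target before the chained imports start, and the final toasts in chainCall become clickable and lead back to the analysis page. A rough unpacked sketch follows; names such as history_id, items_to_import and history_name are illustrative, and mod_toastr stands for the libs/toastr module from the define() call.

// inside importAllIntoHistory, once the list of HistoryItem saves has been built:
var history_id = $( 'select[name=dataset_import_bulk] option:selected' ).val();
// new in this commit: make the selected history the user's current history before importing
$.ajax( { url: '/api/histories/' + history_id + '/set_as_current', type: 'PUT' } );
this.chainCall( items_to_import, history_name );

// inside chainCall, once the queue of saves is exhausted:
if ( this.options.chain_call_control.failed_number === 0 ){
    // new in this commit: clickable success toast that jumps back to the analysis page
    mod_toastr.success( 'Selected datasets imported into history. Click this to start analysing it.', '',
                        { onclick: function(){ window.location = '/'; } } );
} else if ( this.options.chain_call_control.failed_number === this.options.chain_call_control.total_number ){
    mod_toastr.error( 'There was an error and no datasets were imported into history.' );
} else {
    // new in this commit: the partial-success warning is clickable as well
    mod_toastr.warning( 'Some of the datasets could not be imported into history. Click this to see what was imported.', '',
                        { onclick: function(){ window.location = '/'; } } );
}
Galaxy.modal.hide();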
Repository URL: https://bitbucket.org/galaxy/galaxy-central/