galaxy-commits
June 2014
- 1 participant
- 233 discussions
commit/galaxy-central: jmchilton: Fix spelling error in API test.
by commits-noreply@bitbucket.org 13 Jun '14
1 new commit in galaxy-central:
https://bitbucket.org/galaxy/galaxy-central/commits/05500f5c6710/
Changeset: 05500f5c6710
User: jmchilton
Date: 2014-06-13 15:46:01
Summary: Fix spelling error in API test.
Affected #: 1 file
diff -r 588559638acc7ed88542f8e77987698f2fa0b2fb -r 05500f5c6710816865536335a42eb9e00caf6f03 test/api/test_dataset_collections.py
--- a/test/api/test_dataset_collections.py
+++ b/test/api/test_dataset_collections.py
@@ -73,12 +73,13 @@
collection_type="list:paired",
instance_type="history",
history_id=self.history_id,
- name="nested_collecion",
+ name="a nested collection",
element_identifiers=json.dumps( element_identifiers ),
)
create_response = self._post( "dataset_collections", payload )
dataset_collection = self._check_create_response( create_response )
assert dataset_collection[ "collection_type" ] == "list:paired"
+ assert dataset_collection[ "name" ] == "a nested collection"
returned_collections = dataset_collection[ "elements" ]
assert len( returned_collections ) == 1, dataset_collection
pair_1_element = returned_collections[ 0 ]
Repository URL: https://bitbucket.org/galaxy/galaxy-central/
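For context, the corrected test drives Galaxy's dataset collections API. A minimal standalone sketch of the same request, assuming a local Galaxy instance, a valid API key, and the requests library (all placeholders and assumptions, not part of the commit):

import json
import requests

GALAXY_URL = "http://localhost:8080"  # placeholder instance
API_KEY = "your-api-key"              # placeholder key

def create_nested_collection(history_id, element_identifiers):
    # Mirrors the payload built in test_dataset_collections.py.
    payload = {
        "key": API_KEY,
        "collection_type": "list:paired",
        "instance_type": "history",
        "history_id": history_id,
        "name": "a nested collection",
        "element_identifiers": json.dumps(element_identifiers),
    }
    response = requests.post("%s/api/dataset_collections" % GALAXY_URL, data=payload)
    response.raise_for_status()
    collection = response.json()
    assert collection["collection_type"] == "list:paired"
    assert collection["name"] == "a nested collection"
    return collection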
commit/galaxy-central: jmchilton: Fixup XML formatting in job_conf.xml.sample_advanced.
by commits-noreply@bitbucket.org 13 Jun '14
1 new commit in galaxy-central:
https://bitbucket.org/galaxy/galaxy-central/commits/588559638acc/
Changeset: 588559638acc
User: jmchilton
Date: 2014-06-13 08:02:13
Summary: Fixup XML formatting in job_conf.xml.sample_advanced.
Should fix the remaining failing unit tests.
Affected #: 1 file
diff -r 365e1cef25a5ef79ed64934e0b1c483ae73a7445 -r 588559638acc7ed88542f8e77987698f2fa0b2fb job_conf.xml.sample_advanced
--- a/job_conf.xml.sample_advanced
+++ b/job_conf.xml.sample_advanced
@@ -162,7 +162,7 @@
<!-- Following command can be used to tweak docker command. -->
<!-- <param id="docker_cmd">/usr/local/custom_docker/docker</param> -->
<!-- Following can be used to connect to docker server in different
- ways (translated as -H argument to docker client).
+ ways (translated as -H argument to docker client). -->
<!-- <param id="docker_host">unix:///var/run/docker.sock</param> -->
<!-- <param id="docker_host">:5555</param> -->
Repository URL: https://bitbucket.org/galaxy/galaxy-central/
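The failures both XML fixups address come down to well-formedness: a comment that is never closed (the case above), or one containing "--" (the case fixed in 365e1cef below), makes the whole sample config unparseable, so every unit test that loads job_conf.xml.sample_advanced dies at parse time. A minimal repro using only the standard library:

from xml.etree import ElementTree

unclosed = '<job_conf><!-- comment never closed </job_conf>'
double_hyphen = '<job_conf><!-- sudo --extra_param --></job_conf>'
well_formed = '<job_conf><!-- sudo -extra_param --></job_conf>'

for doc in (unclosed, double_hyphen):
    try:
        ElementTree.fromstring(doc)
    except ElementTree.ParseError as err:
        print('parse fails: %s' % err)  # both are rejected by expat

print(ElementTree.fromstring(well_formed).tag)  # prints: job_conf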
commit/galaxy-central: jmchilton: Fix failing unit tests for 60d8de4.
by commits-noreply@bitbucket.org 13 Jun '14
1 new commit in galaxy-central:
https://bitbucket.org/galaxy/galaxy-central/commits/365e1cef25a5/
Changeset: 365e1cef25a5
User: jmchilton
Date: 2014-06-13 07:58:23
Summary: Fix failing unit tests for 60d8de4.
Affected #: 4 files
diff -r 972bd6fbd63f372cc80096e0c099c5e17d33dfe7 -r 365e1cef25a5ef79ed64934e0b1c483ae73a7445 job_conf.xml.sample_advanced
--- a/job_conf.xml.sample_advanced
+++ b/job_conf.xml.sample_advanced
@@ -153,7 +153,7 @@
<!-- <param id="docker_sudo">false</param> -->
<!-- Following option can be used to tweak sudo command used by
default. -->
- <!-- <param id="docker_sudo_cmd">/usr/bin/sudo --extra_param</param> -->
+ <!-- <param id="docker_sudo_cmd">/usr/bin/sudo -extra_param</param> -->
<!-- By default, docker container will not have any networking
enabled. host networking can be bridged by uncommenting next option
http://docs.docker.io/reference/run/#network-settings
diff -r 972bd6fbd63f372cc80096e0c099c5e17d33dfe7 -r 365e1cef25a5ef79ed64934e0b1c483ae73a7445 lib/galaxy/tools/deps/containers.py
--- a/lib/galaxy/tools/deps/containers.py
+++ b/lib/galaxy/tools/deps/containers.py
@@ -99,6 +99,12 @@
return asbool(destination_info.get("%s_enabled" % container_type, False))
+class NullContainerFinder(object):
+
+ def find_container(self, tool_info, destination_info, job_info):
+ return []
+
+
class ContainerRegistry():
def __init__(self):
diff -r 972bd6fbd63f372cc80096e0c099c5e17d33dfe7 -r 365e1cef25a5ef79ed64934e0b1c483ae73a7445 lib/galaxy/tools/deps/requirements.py
--- a/lib/galaxy/tools/deps/requirements.py
+++ b/lib/galaxy/tools/deps/requirements.py
@@ -53,14 +53,14 @@
... contents_document = '''<tool><requirements>%s</requirements></tool>'''
... root = ElementTree.fromstring( contents_document % contents )
... return parse_requirements_from_xml( root )
- >>> reqs = load_requirements('''<requirement>bwa</requirement>''')
+ >>> reqs, containers = load_requirements('''<requirement>bwa</requirement>''')
>>> reqs[0].name
'bwa'
>>> reqs[0].version is None
True
>>> reqs[0].type
'package'
- >>> reqs = load_requirements('''<requirement type="binary" version="1.3.3">cufflinks</requirement>''')
+ >>> reqs, containers = load_requirements('''<requirement type="binary" version="1.3.3">cufflinks</requirement>''')
>>> reqs[0].name
'cufflinks'
>>> reqs[0].version
diff -r 972bd6fbd63f372cc80096e0c099c5e17d33dfe7 -r 365e1cef25a5ef79ed64934e0b1c483ae73a7445 test/unit/tools_support.py
--- a/test/unit/tools_support.py
+++ b/test/unit/tools_support.py
@@ -16,6 +16,7 @@
from galaxy.util import parse_xml
from galaxy.util.dbkeys import GenomeBuilds
from galaxy.jobs import NoopQueue
+from galaxy.tools.deps.containers import NullContainerFinder
class UsesApp( object ):
@@ -126,6 +127,7 @@
self.security_agent = GalaxyRBACAgent( self.model )
self.tool_data_tables = {}
self.dataset_collections_service = None
+ self.container_finder = NullContainerFinder()
class MockContext(object):
Repository URL: https://bitbucket.org/galaxy/galaxy-central/
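The NullContainerFinder added in containers.py is a null object: the test app gets a finder that always reports no containers, so code that consults app.container_finder needs no None checks. An illustrative sketch of how the mock app uses it (FakeApp is a hypothetical stand-in for the MockApp in tools_support.py):

class NullContainerFinder(object):
    # Same behavior as the class added above: never finds a container.
    def find_container(self, tool_info, destination_info, job_info):
        return []

class FakeApp(object):
    def __init__(self):
        self.container_finder = NullContainerFinder()

app = FakeApp()
# Callers can treat the result uniformly; no special-casing for tests.
assert app.container_finder.find_container(None, {}, None) == []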
commit/galaxy-central: dannon: Re-enable flexible default_bucket for cloudlaunch.
by commits-noreply@bitbucket.org 12 Jun '14
1 new commit in galaxy-central:
https://bitbucket.org/galaxy/galaxy-central/commits/972bd6fbd63f/
Changeset: 972bd6fbd63f
User: dannon
Date: 2014-06-13 00:26:24
Summary: Re-enable flexible default_bucket for cloudlaunch.
Affected #: 1 file
diff -r 9a69f032d6d9a0d5ece25332db78a81b3a18d91c -r 972bd6fbd63f372cc80096e0c099c5e17d33dfe7 lib/galaxy/webapps/galaxy/controllers/cloudlaunch.py
--- a/lib/galaxy/webapps/galaxy/controllers/cloudlaunch.py
+++ b/lib/galaxy/webapps/galaxy/controllers/cloudlaunch.py
@@ -72,7 +72,8 @@
cml = cloudman.launch.CloudManLauncher(key_id, secret)
result = cml.launch(cluster_name, ami, instance_type, password,
cfg.kernel_id, cfg.ramdisk_id, cfg.key_name,
- cfg.security_groups, cfg.placement)
+ cfg.security_groups, cfg.placement,
+ bucket_default=bucket_default)
# result is a dict with sg_names, kp_name, kp_material, rs, and instance_id
if not result['rs']:
trans.response.status = 400
Repository URL: https://bitbucket.org/galaxy/galaxy-central/
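For reference, the restored call path with bucket_default passed through, as a hedged sketch: it assumes the same cloudman library the controller imports, and every value below is a placeholder.

import cloudman.launch

key_id, secret = "AKIA...", "..."  # placeholder AWS credentials
cml = cloudman.launch.CloudManLauncher(key_id, secret)
result = cml.launch("my-cluster",        # cluster_name
                    "ami-00000000",      # ami (placeholder)
                    "m1.large",          # instance_type
                    "password",          # placeholder
                    None, None,          # kernel_id, ramdisk_id
                    "cloudman_key_pair", # key_name (placeholder)
                    ["CloudMan"],        # security_groups (placeholder)
                    "",                  # placement
                    bucket_default="my-default-bucket")  # the re-enabled keyword
# result is a dict with sg_names, kp_name, kp_material, rs, and instance_id
if not result["rs"]:
    print("launch failed")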
commit/galaxy-central: greg: Eliminate the use of trans in most functions in the Tool Shed's shed_util_common utility module.
by commits-noreply@bitbucket.org 12 Jun '14
1 new commit in galaxy-central:
https://bitbucket.org/galaxy/galaxy-central/commits/9a69f032d6d9/
Changeset: 9a69f032d6d9
User: greg
Date: 2014-06-12 21:43:38
Summary: Eliminate the use of trans in most functions in the Tool Shed's shed_util_common utility module.
Affected #: 24 files
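The mechanical pattern running through the diff below: shed_util_common utilities take the application object instead of the per-request trans, so they can be reused from scripts and background installs that have no web transaction, and callers that still hold a trans simply pass trans.app. A hedged before/after sketch; the function bodies and names are illustrative, not the actual shed_util_common code:

# Before: coupled to the web transaction, unusable without a request.
def get_category_old(trans, id):
    return trans.sa_session.query(trans.app.model.Category).get(trans.security.decode_id(id))

# After: only the app is required.
def get_category(app, id):
    sa_session = app.model.context.current
    return sa_session.query(app.model.Category).get(app.security.decode_id(id))

# At a call site that still has a trans:
# category = get_category(trans.app, category_id)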
diff -r 7d6f8afcac4d6c208e864c9b8eb35d14d98725cc -r 9a69f032d6d9a0d5ece25332db78a81b3a18d91c lib/galaxy/web/base/controllers/admin.py
--- a/lib/galaxy/web/base/controllers/admin.py
+++ b/lib/galaxy/web/base/controllers/admin.py
@@ -7,6 +7,8 @@
from string import punctuation as PUNCTUATION
import galaxy.queue_worker
+from tool_shed.util import shed_util_common as suc
+
log = logging.getLogger( __name__ )
@@ -29,7 +31,7 @@
status = kwd.get( 'status', 'done' )
if trans.webapp.name == 'galaxy':
installed_repositories = trans.install_model.context.query( trans.install_model.ToolShedRepository ).first()
- installing_repository_ids = get_ids_of_tool_shed_repositories_being_installed( trans, as_string=True )
+ installing_repository_ids = suc.get_ids_of_tool_shed_repositories_being_installed( trans.app, as_string=True )
return trans.fill_template( '/webapps/galaxy/admin/index.mako',
installed_repositories=installed_repositories,
installing_repository_ids=installing_repository_ids,
@@ -1091,24 +1093,6 @@
## ---- Utility methods -------------------------------------------------------
-def get_ids_of_tool_shed_repositories_being_installed( trans, as_string=False ):
- installing_repository_ids = []
- new_status = trans.install_model.ToolShedRepository.installation_status.NEW
- cloning_status = trans.install_model.ToolShedRepository.installation_status.CLONING
- setting_tool_versions_status = trans.install_model.ToolShedRepository.installation_status.SETTING_TOOL_VERSIONS
- installing_dependencies_status = trans.install_model.ToolShedRepository.installation_status.INSTALLING_TOOL_DEPENDENCIES
- loading_datatypes_status = trans.install_model.ToolShedRepository.installation_status.LOADING_PROPRIETARY_DATATYPES
- for tool_shed_repository in trans.install_model.context.query( trans.install_model.ToolShedRepository ) \
- .filter( or_( trans.install_model.ToolShedRepository.status == new_status,
- trans.install_model.ToolShedRepository.status == cloning_status,
- trans.install_model.ToolShedRepository.status == setting_tool_versions_status,
- trans.install_model.ToolShedRepository.status == installing_dependencies_status,
- trans.install_model.ToolShedRepository.status == loading_datatypes_status ) ):
- installing_repository_ids.append( trans.security.encode_id( tool_shed_repository.id ) )
- if as_string:
- return ','.join( installing_repository_ids )
- return installing_repository_ids
-
def get_user( trans, user_id ):
"""Get a User from the database by id."""
user = trans.sa_session.query( trans.model.User ).get( trans.security.decode_id( user_id ) )
diff -r 7d6f8afcac4d6c208e864c9b8eb35d14d98725cc -r 9a69f032d6d9a0d5ece25332db78a81b3a18d91c lib/galaxy/webapps/galaxy/api/tool_shed_repositories.py
--- a/lib/galaxy/webapps/galaxy/api/tool_shed_repositories.py
+++ b/lib/galaxy/webapps/galaxy/api/tool_shed_repositories.py
@@ -59,7 +59,7 @@
# Since exported workflows are dictionaries with very few attributes that differentiate them from each
# other, we'll build the list based on the following dictionary of those few attributes.
exported_workflows = []
- repository = suc.get_tool_shed_repository_by_id( trans, id )
+ repository = suc.get_tool_shed_repository_by_id( trans.app, id )
metadata = repository.metadata
if metadata:
exported_workflow_tups = metadata.get( 'workflows', [] )
@@ -142,7 +142,7 @@
index = payload.get( 'index', None )
if index is None:
raise HTTPBadRequest( detail="Missing required parameter 'index'." )
- repository = suc.get_tool_shed_repository_by_id( trans, tool_shed_repository_id )
+ repository = suc.get_tool_shed_repository_by_id( trans.app, tool_shed_repository_id )
exported_workflows = json.from_json_string( self.exported_workflows( trans, tool_shed_repository_id ) )
# Since we don't have an in-memory object with an id, we'll identify the exported workflow via its location (i.e., index) in the list.
exported_workflow = exported_workflows[ int( index ) ]
@@ -169,7 +169,7 @@
tool_shed_repository_id = kwd.get( 'id', '' )
if not tool_shed_repository_id:
raise HTTPBadRequest( detail="Missing required parameter 'id'." )
- repository = suc.get_tool_shed_repository_by_id( trans, tool_shed_repository_id )
+ repository = suc.get_tool_shed_repository_by_id( trans.app, tool_shed_repository_id )
exported_workflows = json.from_json_string( self.exported_workflows( trans, tool_shed_repository_id ) )
imported_workflow_dicts = []
for exported_workflow_dict in exported_workflows:
@@ -406,7 +406,7 @@
for repository in query:
repository_id = trans.security.encode_id( repository.id )
try:
- invalid_file_tups, metadata_dict = metadata_util.reset_all_metadata_on_installed_repository( trans, repository_id )
+ invalid_file_tups, metadata_dict = metadata_util.reset_all_metadata_on_installed_repository( trans.app, repository_id )
if invalid_file_tups:
message = tool_util.generate_message_for_invalid_tools( trans,
invalid_file_tups,
@@ -436,7 +436,7 @@
:param id: the encoded id of the ToolShedRepository object
"""
# Example URL: http://localhost:8763/api/tool_shed_repositories/df7a1f0c02a5b08e
- tool_shed_repository = suc.get_tool_shed_repository_by_id( trans, id )
+ tool_shed_repository = suc.get_tool_shed_repository_by_id( trans.app, id )
if tool_shed_repository is None:
log.debug( "Unable to locate tool_shed_repository record for id %s." % ( str( id ) ) )
return {}
diff -r 7d6f8afcac4d6c208e864c9b8eb35d14d98725cc -r 9a69f032d6d9a0d5ece25332db78a81b3a18d91c lib/galaxy/webapps/galaxy/controllers/admin_toolshed.py
--- a/lib/galaxy/webapps/galaxy/controllers/admin_toolshed.py
+++ b/lib/galaxy/webapps/galaxy/controllers/admin_toolshed.py
@@ -326,7 +326,7 @@
repository_id = kwd.get( 'repository_id', None )
relative_path_to_image_file = kwd.get( 'image_file', None )
if repository_id and relative_path_to_image_file:
- repository = suc.get_tool_shed_repository_by_id( trans, repository_id )
+ repository = suc.get_tool_shed_repository_by_id( trans.app, repository_id )
if repository:
repo_files_dir = repository.repo_files_directory( trans.app )
path_to_file = suc.get_absolute_path_to_file_in_repository( repo_files_dir, relative_path_to_image_file )
@@ -433,7 +433,7 @@
status = kwd.get( 'status', 'done' )
if workflow_name:
workflow_name = encoding_util.tool_shed_decode( workflow_name )
- repository = suc.get_tool_shed_repository_by_id( trans, repository_id )
+ repository = suc.get_tool_shed_repository_by_id( trans.app, repository_id )
if repository:
workflow, status, message = workflow_util.import_workflow( trans, repository, workflow_name )
if workflow:
@@ -808,7 +808,7 @@
# The user must be on the manage_repository_tool_dependencies page and clicked the button to either install or uninstall a
# tool dependency, but they didn't check any of the available tool dependencies on which to perform the action.
repository_id = kwd.get( 'repository_id', None )
- tool_shed_repository = suc.get_tool_shed_repository_by_id( trans, repository_id )
+ tool_shed_repository = suc.get_tool_shed_repository_by_id( trans.app, repository_id )
if 'operation' in kwd:
operation = kwd[ 'operation' ].lower()
if not tool_dependency_ids:
@@ -858,12 +858,12 @@
message += ' and restart your Galaxy server to install tool dependencies.'
status = 'error'
installed_tool_dependencies_select_field = \
- suc.build_tool_dependencies_select_field( trans,
+ suc.build_tool_dependencies_select_field( trans.app,
tool_shed_repository=tool_shed_repository,
name='inst_td_ids',
uninstalled_only=False )
uninstalled_tool_dependencies_select_field = \
- suc.build_tool_dependencies_select_field( trans,
+ suc.build_tool_dependencies_select_field( trans.app,
tool_shed_repository=tool_shed_repository,
name='uninstalled_tool_dependency_ids',
uninstalled_only=True )
@@ -890,7 +890,7 @@
else:
# The user must be on the manage_repository_tool_dependencies page and clicked the button to either install or uninstall a
# tool dependency, but they didn't check any of the available tool dependencies on which to perform the action.
- tool_shed_repository = suc.get_tool_shed_repository_by_id( trans, repository_id )
+ tool_shed_repository = suc.get_tool_shed_repository_by_id( trans.app, repository_id )
self.tool_dependency_grid.title = "Tool shed repository '%s' tool dependencies" % tool_shed_repository.name
if 'operation' in kwd:
operation = kwd[ 'operation' ].lower()
@@ -942,7 +942,7 @@
def monitor_repository_installation( self, trans, **kwd ):
tsridslist = repository_util.get_tool_shed_repository_ids( **kwd )
if not tsridslist:
- tsridslist = suc.get_ids_of_tool_shed_repositories_being_installed( trans, as_string=False )
+ tsridslist = suc.get_ids_of_tool_shed_repositories_being_installed( trans.app, as_string=False )
kwd[ 'tool_shed_repository_ids' ] = tsridslist
return self.repository_installation_grid( trans, **kwd )
@@ -952,7 +952,7 @@
# Avoid caching
trans.response.headers['Pragma'] = 'no-cache'
trans.response.headers['Expires'] = '0'
- return suc.open_repository_files_folder( trans, folder_path )
+ return suc.open_repository_files_folder( folder_path )
@web.expose
@web.require_admin
@@ -1008,7 +1008,7 @@
# but the received repository id is from the Galaxy side (the caller is this controller's
# update_to_changeset_revision() method. We need to get the id of the same repository from the
# Tool Shed side.
- repository = suc.get_tool_shed_repository_by_id( trans, updating_repository_id )
+ repository = suc.get_tool_shed_repository_by_id( trans.app, updating_repository_id )
# For backward compatibility to the 12/20/12 Galaxy release.
try:
params = '?name=%s&owner=%s' % ( str( repository.name ), str( repository.owner ) )
@@ -1056,7 +1056,7 @@
encoded_updated_metadata_dict = kwd.get( 'encoded_updated_metadata_dict', None )
updated_changeset_revision = kwd.get( 'updated_changeset_revision', None )
updated_ctx_rev = kwd.get( 'updated_ctx_rev', None )
- repository = suc.get_tool_shed_repository_by_id( trans, updating_repository_id )
+ repository = suc.get_tool_shed_repository_by_id( trans.app, updating_repository_id )
decoded_updated_metadata = encoding_util.tool_shed_decode( encoded_updated_metadata )
# Now that the user has decided whether they will handle dependencies, we can update
# the repository to the latest revision.
@@ -1070,7 +1070,7 @@
install_tool_dependencies = CheckboxField.is_checked( install_tool_dependencies )
else:
install_tool_dependencies = False
- tool_path = suc.get_tool_path_by_shed_tool_conf_filename( trans, shed_tool_conf )
+ tool_path = suc.get_tool_path_by_shed_tool_conf_filename( trans.app, shed_tool_conf )
installation_dict = dict( install_repository_dependencies=install_repository_dependencies,
new_tool_panel_section_label=new_tool_panel_section_label,
no_changes_checked=False,
@@ -1112,7 +1112,7 @@
action='manage_repositories',
**kwd ) )
shed_tool_conf_select_field = tool_util.build_shed_tool_conf_select_field( trans )
- tool_path = suc.get_tool_path_by_shed_tool_conf_filename( trans, shed_tool_conf )
+ tool_path = suc.get_tool_path_by_shed_tool_conf_filename( trans.app, shed_tool_conf )
tool_panel_section_select_field = tool_util.build_tool_panel_section_select_field( trans )
if len( repo_info_dicts ) == 1:
# If we're installing or updating a single repository, see if it contains a readme or
@@ -2053,7 +2053,7 @@
updated_count = 0
for repository in trans.install_model.context.query( trans.install_model.ToolShedRepository ) \
.filter( trans.install_model.ToolShedRepository.table.c.deleted == False ):
- ok, updated = suc.check_or_update_tool_shed_status_for_installed_repository( trans, repository )
+ ok, updated = suc.check_or_update_tool_shed_status_for_installed_repository( trans.app, repository )
if ok:
success_count += 1
else:
@@ -2067,8 +2067,8 @@
message += ", ".join( repository_names_not_updated )
else:
repository_id = kwd.get( 'id', None )
- repository = suc.get_tool_shed_repository_by_id( trans, repository_id )
- ok, updated = suc.check_or_update_tool_shed_status_for_installed_repository( trans, repository )
+ repository = suc.get_tool_shed_repository_by_id( trans.app, repository_id )
+ ok, updated = suc.check_or_update_tool_shed_status_for_installed_repository( trans.app, repository )
if ok:
if updated:
message = "The tool shed status for repository <b>%s</b> has been updated." % str( repository.name )
@@ -2122,7 +2122,7 @@
status = kwd.get( 'status', 'done' )
if workflow_name:
workflow_name = encoding_util.tool_shed_decode( workflow_name )
- repository = suc.get_tool_shed_repository_by_id( trans, repository_id )
+ repository = suc.get_tool_shed_repository_by_id( trans.app, repository_id )
changeset_revision = repository.changeset_revision
metadata = repository.metadata
return trans.fill_template( "/admin/tool_shed_repository/view_workflow.mako",
diff -r 7d6f8afcac4d6c208e864c9b8eb35d14d98725cc -r 9a69f032d6d9a0d5ece25332db78a81b3a18d91c lib/galaxy/webapps/tool_shed/api/categories.py
--- a/lib/galaxy/webapps/tool_shed/api/categories.py
+++ b/lib/galaxy/webapps/tool_shed/api/categories.py
@@ -39,7 +39,7 @@
if not description:
# Default the description to the name.
description = name
- if suc.get_category_by_name( trans, name ):
+ if suc.get_category_by_name( trans.app, name ):
category_dict[ 'message' ] = 'A category with that name already exists'
category_dict[ 'status' ] = 'error'
else:
@@ -90,7 +90,7 @@
:param id: the encoded id of the Repository object
"""
# Example URL: http://localhost:9009/api/categories/f9cad7b01a472135
- category = suc.get_category( trans, id )
+ category = suc.get_category( trans.app, id )
if category is None:
category_dict = dict( message = 'Unable to locate category record for id %s.' % ( str( id ) ),
status = 'error' )
diff -r 7d6f8afcac4d6c208e864c9b8eb35d14d98725cc -r 9a69f032d6d9a0d5ece25332db78a81b3a18d91c lib/galaxy/webapps/tool_shed/api/repositories.py
--- a/lib/galaxy/webapps/tool_shed/api/repositories.py
+++ b/lib/galaxy/webapps/tool_shed/api/repositories.py
@@ -498,7 +498,7 @@
repository_id = payload.get( 'repository_id', None )
if repository_id is not None:
- repository = suc.get_repository_in_tool_shed( trans, repository_id )
+ repository = suc.get_repository_in_tool_shed( trans.app, repository_id )
start_time = strftime( "%Y-%m-%d %H:%M:%S" )
log.debug( "%s...resetting metadata on repository %s" % ( start_time, str( repository.name ) ) )
results = handle_repository( trans, start_time, repository )
@@ -515,7 +515,7 @@
:param id: the encoded id of the Repository object
"""
# Example URL: http://localhost:9009/api/repositories/f9cad7b01a472135
- repository = suc.get_repository_in_tool_shed( trans, id )
+ repository = suc.get_repository_in_tool_shed( trans.app, id )
if repository is None:
log.debug( "Unable to locate repository record for id %s." % ( str( id ) ) )
return {}
diff -r 7d6f8afcac4d6c208e864c9b8eb35d14d98725cc -r 9a69f032d6d9a0d5ece25332db78a81b3a18d91c lib/galaxy/webapps/tool_shed/api/repository_revisions.py
--- a/lib/galaxy/webapps/tool_shed/api/repository_revisions.py
+++ b/lib/galaxy/webapps/tool_shed/api/repository_revisions.py
@@ -141,7 +141,7 @@
"""
# Example URL: http://localhost:9009/api/repository_revisions/repository_dependencies/bb12…
repository_dependencies_dicts = []
- repository_metadata = metadata_util.get_repository_metadata_by_id( trans, id )
+ repository_metadata = metadata_util.get_repository_metadata_by_id( trans.app, id )
if repository_metadata is None:
log.debug( 'Invalid repository_metadata id received: %s' % str( id ) )
return repository_dependencies_dicts
@@ -214,12 +214,12 @@
:param id: the encoded id of the `RepositoryMetadata` object
"""
# Example URL: http://localhost:9009/api/repository_revisions/bb125606ff9ea620
- repository_metadata = metadata_util.get_repository_metadata_by_id( trans, id )
+ repository_metadata = metadata_util.get_repository_metadata_by_id( trans.app, id )
if repository_metadata is None:
log.debug( 'Cannot locate repository_metadata with id %s' % str( id ) )
return {}
encoded_repository_id = trans.security.encode_id( repository_metadata.repository_id )
- repository = suc.get_repository_by_id( trans, encoded_repository_id )
+ repository = suc.get_repository_by_id( trans.app, encoded_repository_id )
repository_metadata_dict = repository_metadata.to_dict( view='element',
value_mapper=self.__get_value_mapper( trans ) )
repository_metadata_dict[ 'url' ] = web.url_for( controller='repositories',
@@ -238,7 +238,7 @@
repository_metadata_id = kwd.get( 'id', None )
if repository_metadata_id is None:
raise HTTPBadRequest( detail="Missing required parameter 'id'." )
- repository_metadata = metadata_util.get_repository_metadata_by_id( trans, repository_metadata_id )
+ repository_metadata = metadata_util.get_repository_metadata_by_id( trans.app, repository_metadata_id )
if repository_metadata is None:
decoded_repository_metadata_id = trans.security.decode_id( repository_metadata_id )
log.debug( 'Cannot locate repository_metadata with id %s' % str( decoded_repository_metadata_id ) )
diff -r 7d6f8afcac4d6c208e864c9b8eb35d14d98725cc -r 9a69f032d6d9a0d5ece25332db78a81b3a18d91c lib/galaxy/webapps/tool_shed/api/users.py
--- a/lib/galaxy/webapps/tool_shed/api/users.py
+++ b/lib/galaxy/webapps/tool_shed/api/users.py
@@ -105,7 +105,7 @@
:param id: the encoded id of the User object.
"""
# Example URL: http://localhost:9009/api/users/f9cad7b01a472135
- user = suc.get_user( trans, id )
+ user = suc.get_user( trans.app, id )
if user is None:
user_dict = dict( message = 'Unable to locate user record for id %s.' % ( str( id ) ),
status = 'error' )
diff -r 7d6f8afcac4d6c208e864c9b8eb35d14d98725cc -r 9a69f032d6d9a0d5ece25332db78a81b3a18d91c lib/galaxy/webapps/tool_shed/controllers/admin.py
--- a/lib/galaxy/webapps/tool_shed/controllers/admin.py
+++ b/lib/galaxy/webapps/tool_shed/controllers/admin.py
@@ -43,14 +43,14 @@
if k.startswith( 'f-' ):
del kwd[ k ]
if 'user_id' in kwd:
- user = suc.get_user( trans, kwd[ 'user_id' ] )
+ user = suc.get_user( trans.app, kwd[ 'user_id' ] )
kwd[ 'f-email' ] = user.email
del kwd[ 'user_id' ]
else:
# The received id is the repository id, so we need to get the id of the user
# that uploaded the repository.
repository_id = kwd.get( 'id', None )
- repository = suc.get_repository_in_tool_shed( trans, repository_id )
+ repository = suc.get_repository_in_tool_shed( trans.app, repository_id )
kwd[ 'f-email' ] = repository.user.email
elif operation == "repositories_by_category":
# Eliminate the current filters if any exist.
@@ -58,7 +58,7 @@
if k.startswith( 'f-' ):
del kwd[ k ]
category_id = kwd.get( 'id', None )
- category = suc.get_category( trans, category_id )
+ category = suc.get_category( trans.app, category_id )
kwd[ 'f-Category.name' ] = category.name
elif operation == "receive email alerts":
if kwd[ 'id' ]:
@@ -82,7 +82,7 @@
changeset_revision_str = 'changeset_revision_'
if k.startswith( changeset_revision_str ):
repository_id = trans.security.encode_id( int( k.lstrip( changeset_revision_str ) ) )
- repository = suc.get_repository_in_tool_shed( trans, repository_id )
+ repository = suc.get_repository_in_tool_shed( trans.app, repository_id )
if repository.tip( trans.app ) != v:
return trans.response.send_redirect( web.url_for( controller='repository',
action='browse_repositories',
@@ -103,7 +103,7 @@
# The received id is a RepositoryMetadata object id, so we need to get the
# associated Repository and redirect to view_or_manage_repository with the
# changeset_revision.
- repository_metadata = metadata_util.get_repository_metadata_by_id( trans, kwd[ 'id' ] )
+ repository_metadata = metadata_util.get_repository_metadata_by_id( trans.app, kwd[ 'id' ] )
repository = repository_metadata.repository
kwd[ 'id' ] = trans.security.encode_id( repository.id )
kwd[ 'changeset_revision' ] = repository_metadata.changeset_revision
@@ -124,7 +124,7 @@
if not name or not description:
message = 'Enter a valid name and a description'
status = 'error'
- elif suc.get_category_by_name( trans, name ):
+ elif suc.get_category_by_name( trans.app, name ):
message = 'A category with that name already exists'
status = 'error'
else:
@@ -158,7 +158,7 @@
count = 0
deleted_repositories = ""
for repository_id in ids:
- repository = suc.get_repository_in_tool_shed( trans, repository_id )
+ repository = suc.get_repository_in_tool_shed( trans.app, repository_id )
if repository:
if not repository.deleted:
# Mark all installable repository_metadata records as not installable.
@@ -199,7 +199,7 @@
ids = util.listify( id )
count = 0
for repository_metadata_id in ids:
- repository_metadata = metadata_util.get_repository_metadata_by_id( trans, repository_metadata_id )
+ repository_metadata = metadata_util.get_repository_metadata_by_id( trans.app, repository_metadata_id )
trans.sa_session.delete( repository_metadata )
trans.sa_session.flush()
count += 1
@@ -225,7 +225,7 @@
action='manage_categories',
message=message,
status='error' ) )
- category = suc.get_category( trans, id )
+ category = suc.get_category( trans.app, id )
original_category_name = str( category.name )
original_category_description = str( category.description )
if kwd.get( 'edit_category_button', False ):
@@ -236,7 +236,7 @@
if not new_name:
message = 'Enter a valid name'
status = 'error'
- elif original_category_name != new_name and suc.get_category_by_name( trans, new_name ):
+ elif original_category_name != new_name and suc.get_category_by_name( trans.app, new_name ):
message = 'A category with that name already exists'
status = 'error'
else:
@@ -363,7 +363,7 @@
count = 0
undeleted_repositories = ""
for repository_id in ids:
- repository = suc.get_repository_in_tool_shed( trans, repository_id )
+ repository = suc.get_repository_in_tool_shed( trans.app, repository_id )
if repository:
if repository.deleted:
# Inspect all repository_metadata records to determine those that are installable, and mark
@@ -412,7 +412,7 @@
ids = util.listify( id )
message = "Deleted %d categories: " % len( ids )
for category_id in ids:
- category = suc.get_category( trans, category_id )
+ category = suc.get_category( trans.app, category_id )
category.deleted = True
trans.sa_session.add( category )
trans.sa_session.flush()
@@ -442,7 +442,7 @@
purged_categories = ""
message = "Purged %d categories: " % len( ids )
for category_id in ids:
- category = suc.get_category( trans, category_id )
+ category = suc.get_category( trans.app, category_id )
if category.deleted:
# Delete RepositoryCategoryAssociations
for rca in category.repositories:
@@ -469,7 +469,7 @@
count = 0
undeleted_categories = ""
for category_id in ids:
- category = suc.get_category( trans, category_id )
+ category = suc.get_category( trans.app, category_id )
if category.deleted:
category.deleted = False
trans.sa_session.add( category )
diff -r 7d6f8afcac4d6c208e864c9b8eb35d14d98725cc -r 9a69f032d6d9a0d5ece25332db78a81b3a18d91c lib/galaxy/webapps/tool_shed/controllers/repository.py
--- a/lib/galaxy/webapps/tool_shed/controllers/repository.py
+++ b/lib/galaxy/webapps/tool_shed/controllers/repository.py
@@ -127,7 +127,7 @@
operation = kwd[ 'operation' ].lower()
# The received id is a RepositoryMetadata id.
repository_metadata_id = kwd[ 'id' ]
- repository_metadata = metadata_util.get_repository_metadata_by_id( trans, repository_metadata_id )
+ repository_metadata = metadata_util.get_repository_metadata_by_id( trans.app, repository_metadata_id )
repository_id = trans.security.encode_id( repository_metadata.repository_id )
changeset_revision = repository_metadata.changeset_revision
new_kwd = dict( id=repository_id,
@@ -146,7 +146,7 @@
return trans.response.send_redirect( web.url_for( controller='repository',
action='view_or_manage_repository',
**kwd ) )
- selected_changeset_revision, repository = suc.get_repository_from_refresh_on_change( trans, **kwd )
+ selected_changeset_revision, repository = suc.get_repository_from_refresh_on_change( trans.app, **kwd )
if repository:
return trans.response.send_redirect( web.url_for( controller='repository',
action='browse_repositories',
@@ -172,7 +172,7 @@
return trans.response.send_redirect( web.url_for( controller='repository',
action='deprecate',
**kwd ) )
- selected_changeset_revision, repository = suc.get_repository_from_refresh_on_change( trans, **kwd )
+ selected_changeset_revision, repository = suc.get_repository_from_refresh_on_change( trans.app, **kwd )
if repository:
return trans.response.send_redirect( web.url_for( controller='repository',
action='browse_repositories',
@@ -397,7 +397,7 @@
kwd[ 'message' ] = 'You must be logged in to set email alerts.'
kwd[ 'status' ] = 'error'
del kwd[ 'operation' ]
- selected_changeset_revision, repository = suc.get_repository_from_refresh_on_change( trans, **kwd )
+ selected_changeset_revision, repository = suc.get_repository_from_refresh_on_change( trans.app, **kwd )
if repository:
return trans.response.send_redirect( web.url_for( controller='repository',
action='browse_repositories',
@@ -428,19 +428,19 @@
# The received id is the repository id, so we need to get the id of the user that owns the repository.
repository_id = kwd.get( 'id', None )
if repository_id:
- repository = suc.get_repository_in_tool_shed( trans, repository_id )
+ repository = suc.get_repository_in_tool_shed( trans.app, repository_id )
user_id = trans.security.encode_id( repository.user.id )
kwd[ 'user_id' ] = user_id
else:
# The user selected a repository revision which results in a refresh_on_change.
- selected_changeset_revision, repository = suc.get_repository_from_refresh_on_change( trans, **kwd )
+ selected_changeset_revision, repository = suc.get_repository_from_refresh_on_change( trans.app, **kwd )
if repository:
return trans.response.send_redirect( web.url_for( controller='repository',
action='view_or_manage_repository',
id=trans.security.encode_id( repository.id ),
changeset_revision=selected_changeset_revision ) )
if user_id:
- user = suc.get_user( trans, user_id )
+ user = suc.get_user( trans.app, user_id )
trailing_string = 'Owned by %s' % str( user.username )
default = 'Repositories Owned by %s' % str( user.username )
else:
@@ -469,7 +469,7 @@
return trans.response.send_redirect( web.url_for( controller='repository',
action='deprecate',
**kwd ) )
- selected_changeset_revision, repository = suc.get_repository_from_refresh_on_change( trans, **kwd )
+ selected_changeset_revision, repository = suc.get_repository_from_refresh_on_change( trans.app, **kwd )
if repository:
return trans.response.send_redirect( web.url_for( controller='repository',
action='browse_repositories',
@@ -495,7 +495,7 @@
return trans.response.send_redirect( web.url_for( controller='repository',
action='deprecate',
**kwd ) )
- selected_changeset_revision, repository = suc.get_repository_from_refresh_on_change( trans, **kwd )
+ selected_changeset_revision, repository = suc.get_repository_from_refresh_on_change( trans.app, **kwd )
if repository:
return trans.response.send_redirect( web.url_for( controller='repository',
action='browse_repositories',
@@ -518,13 +518,13 @@
# The received id is the repository id, so we need to get the id of the user that owns the repository.
repository_id = kwd.get( 'id', None )
if repository_id:
- repository = suc.get_repository_in_tool_shed( trans, repository_id )
+ repository = suc.get_repository_in_tool_shed( trans.app, repository_id )
user_id = trans.security.encode_id( repository.user.id )
- user = suc.get_user( trans, user_id )
+ user = suc.get_user( trans.app, user_id )
self.repositories_by_user_grid.title = "Repositories owned by %s" % user.username
kwd[ 'user_id' ] = user_id
return self.repositories_by_user_grid( trans, **kwd )
- selected_changeset_revision, repository = suc.get_repository_from_refresh_on_change( trans, **kwd )
+ selected_changeset_revision, repository = suc.get_repository_from_refresh_on_change( trans.app, **kwd )
if repository:
# The user selected a repository revision which results in a refresh_on_change.
return trans.response.send_redirect( web.url_for( controller='repository',
@@ -533,7 +533,7 @@
changeset_revision=selected_changeset_revision ) )
category_id = kwd.get( 'id', None )
if category_id:
- category = suc.get_category( trans, category_id )
+ category = suc.get_category( trans.app, category_id )
if category:
trailing_string = 'in Category %s' % str( category.name )
else:
@@ -718,7 +718,7 @@
message = kwd.get( 'message', '' )
status = kwd.get( 'status', 'done' )
commit_message = kwd.get( 'commit_message', 'Deleted selected files' )
- repository = suc.get_repository_in_tool_shed( trans, id )
+ repository = suc.get_repository_in_tool_shed( trans.app, id )
repo = hg_util.get_repo_for_repository( trans.app, repository=repository, repo_path=None, create=False )
# Update repository files for browsing.
hg_util.update_repository( repo )
@@ -740,7 +740,7 @@
operation = kwd[ 'operation' ].lower()
# The received id is a RepositoryMetadata id.
repository_metadata_id = kwd[ 'id' ]
- repository_metadata = metadata_util.get_repository_metadata_by_id( trans, repository_metadata_id )
+ repository_metadata = metadata_util.get_repository_metadata_by_id( trans.app, repository_metadata_id )
repository_id = trans.security.encode_id( repository_metadata.repository_id )
changeset_revision = repository_metadata.changeset_revision
new_kwd = dict( id=repository_id,
@@ -761,7 +761,7 @@
operation = kwd[ 'operation' ].lower()
# The received id is a RepositoryMetadata id.
repository_metadata_id = kwd['id' ]
- repository_metadata = metadata_util.get_repository_metadata_by_id( trans, repository_metadata_id )
+ repository_metadata = metadata_util.get_repository_metadata_by_id( trans.app, repository_metadata_id )
repository_id = trans.security.encode_id( repository_metadata.repository_id )
changeset_revision = repository_metadata.changeset_revision
new_kwd = dict( id=repository_id,
@@ -778,7 +778,7 @@
operation = kwd[ 'operation' ].lower()
# The received id is a RepositoryMetadata id.
repository_metadata_id = kwd[ 'id' ]
- repository_metadata = metadata_util.get_repository_metadata_by_id( trans, repository_metadata_id )
+ repository_metadata = metadata_util.get_repository_metadata_by_id( trans.app, repository_metadata_id )
repository_id = trans.security.encode_id( repository_metadata.repository_id )
changeset_revision = repository_metadata.changeset_revision
new_kwd = dict( id=repository_id,
@@ -839,13 +839,13 @@
if 'f-Category.name' in kwd:
# The user browsed to a category and then entered a search string, so get the category associated with its value.
category_name = kwd[ 'f-Category.name' ]
- category = suc.get_category_by_name( trans, category_name )
+ category = suc.get_category_by_name( trans.app, category_name )
# Set the id value in kwd since it is required by the ValidRepositoryGrid.build_initial_query method.
kwd[ 'id' ] = trans.security.encode_id( category.id )
if 'operation' in kwd:
operation = kwd[ 'operation' ].lower()
if operation == "preview_tools_in_changeset":
- repository = suc.get_repository_in_tool_shed( trans, repository_id )
+ repository = suc.get_repository_in_tool_shed( trans.app, repository_id )
repository_metadata = metadata_util.get_latest_repository_metadata( trans, repository.id, downloadable=True )
latest_installable_changeset_revision = repository_metadata.changeset_revision
return trans.response.send_redirect( web.url_for( controller='repository',
@@ -858,9 +858,9 @@
if k.startswith( 'f-' ):
del kwd[ k ]
category_id = kwd.get( 'id', None )
- category = suc.get_category( trans, category_id )
+ category = suc.get_category( trans.app, category_id )
kwd[ 'f-Category.name' ] = category.name
- selected_changeset_revision, repository = suc.get_repository_from_refresh_on_change( trans, **kwd )
+ selected_changeset_revision, repository = suc.get_repository_from_refresh_on_change( trans.app, **kwd )
if repository:
return trans.response.send_redirect( web.url_for( controller='repository',
action='preview_tools_in_changeset',
@@ -969,7 +969,7 @@
def contact_owner( self, trans, id, **kwd ):
message = kwd.get( 'message', '' )
status = kwd.get( 'status', 'done' )
- repository = suc.get_repository_in_tool_shed( trans, id )
+ repository = suc.get_repository_in_tool_shed( trans.app, id )
metadata = metadata_util.get_repository_metadata_by_repository_id_changeset_revision( trans,
id,
repository.tip( trans.app ),
@@ -1046,7 +1046,7 @@
message = kwd.get( 'message', '' )
status = kwd.get( 'status', 'done' )
repository_id = kwd.get( 'id', None )
- repository = suc.get_repository_in_tool_shed( trans, repository_id )
+ repository = suc.get_repository_in_tool_shed( trans.app, repository_id )
mark_deprecated = util.string_as_bool( kwd.get( 'mark_deprecated', False ) )
repository.deprecated = mark_deprecated
trans.sa_session.add( repository )
@@ -1080,7 +1080,7 @@
repository_id = kwd.get( 'repository_id', None )
relative_path_to_image_file = kwd.get( 'image_file', None )
if repository_id and relative_path_to_image_file:
- repository = suc.get_repository_in_tool_shed( trans, repository_id )
+ repository = suc.get_repository_in_tool_shed( trans.app, repository_id )
if repository:
repo_files_dir = repository.repo_path( trans.app )
path_to_file = suc.get_absolute_path_to_file_in_repository( repo_files_dir, relative_path_to_image_file )
@@ -1142,7 +1142,7 @@
"""Download an archive of the repository files compressed as zip, gz or bz2."""
# FIXME: this will currently only download the repository tip, no matter which installable changeset_revision is being viewed.
# This should be enhanced to use the export method below, which accounts for the currently viewed changeset_revision.
- repository = suc.get_repository_in_tool_shed( trans, repository_id )
+ repository = suc.get_repository_in_tool_shed( trans.app, repository_id )
# Allow hgweb to handle the download. This requires the tool shed
# server account's .hgrc file to include the following setting:
# [web]
@@ -1164,7 +1164,7 @@
message = kwd.get( 'message', '' )
status = kwd.get( 'status', 'done' )
export_repository_dependencies = kwd.get( 'export_repository_dependencies', '' )
- repository = suc.get_repository_in_tool_shed( trans, repository_id )
+ repository = suc.get_repository_in_tool_shed( trans.app, repository_id )
if kwd.get( 'export_repository_button', False ):
# We'll currently support only gzip-compressed tar archives.
file_type = 'gz'
@@ -1252,9 +1252,9 @@
is_admin = trans.user_is_admin()
if operation == "view_or_manage_repository":
# The received id is a RepositoryMetadata id, so we have to get the repository id.
- repository_metadata = metadata_util.get_repository_metadata_by_id( trans, item_id )
+ repository_metadata = metadata_util.get_repository_metadata_by_id( trans.app, item_id )
repository_id = trans.security.encode_id( repository_metadata.repository.id )
- repository = suc.get_repository_in_tool_shed( trans, repository_id )
+ repository = suc.get_repository_in_tool_shed( trans.app, repository_id )
kwd[ 'id' ] = repository_id
kwd[ 'changeset_revision' ] = repository_metadata.changeset_revision
if trans.webapp.name == 'tool_shed' and ( is_admin or repository.user == trans.user ):
@@ -1269,7 +1269,7 @@
encoded_repository_ids = []
changeset_revisions = []
for repository_metadata_id in util.listify( item_id ):
- repository_metadata = metadata_util.get_repository_metadata_by_id( trans, repository_metadata_id )
+ repository_metadata = metadata_util.get_repository_metadata_by_id( trans.app, repository_metadata_id )
encoded_repository_ids.append( trans.security.encode_id( repository_metadata.repository.id ) )
changeset_revisions.append( repository_metadata.changeset_revision )
new_kwd = {}
@@ -1343,9 +1343,9 @@
is_admin = trans.user_is_admin()
if operation == "view_or_manage_repository":
# The received id is a RepositoryMetadata id, so we have to get the repository id.
- repository_metadata = metadata_util.get_repository_metadata_by_id( trans, item_id )
+ repository_metadata = metadata_util.get_repository_metadata_by_id( trans.app, item_id )
repository_id = trans.security.encode_id( repository_metadata.repository.id )
- repository = suc.get_repository_in_tool_shed( trans, repository_id )
+ repository = suc.get_repository_in_tool_shed( trans.app, repository_id )
kwd[ 'id' ] = repository_id
kwd[ 'changeset_revision' ] = repository_metadata.changeset_revision
if trans.webapp.name == 'tool_shed' and ( is_admin or repository.user == trans.user ):
@@ -1360,7 +1360,7 @@
encoded_repository_ids = []
changeset_revisions = []
for repository_metadata_id in util.listify( item_id ):
- repository_metadata = metadata_util.get_repository_metadata_by_id( trans, item_id )
+ repository_metadata = metadata_util.get_repository_metadata_by_id( trans.app, item_id )
encoded_repository_ids.append( trans.security.encode_id( repository_metadata.repository.id ) )
changeset_revisions.append( repository_metadata.changeset_revision )
new_kwd = {}
@@ -2161,7 +2161,7 @@
def manage_repository( self, trans, id, **kwd ):
message = kwd.get( 'message', '' )
status = kwd.get( 'status', 'done' )
- repository = suc.get_repository_in_tool_shed( trans, id )
+ repository = suc.get_repository_in_tool_shed( trans.app, id )
repository_type = kwd.get( 'repository_type', str( repository.type ) )
repo_dir = repository.repo_path( trans.app )
repo = hg_util.get_repo_for_repository( trans.app, repository=None, repo_path=repo_dir, create=False )
@@ -2387,7 +2387,7 @@
repository_type_select_field = rt_util.build_repository_type_select_field( trans, repository=repository )
malicious_check_box = CheckboxField( 'malicious', checked=is_malicious )
skip_tool_tests_check_box = CheckboxField( 'skip_tool_tests', checked=skip_tool_tests_checked )
- categories = suc.get_categories( trans )
+ categories = suc.get_categories( trans.app )
selected_categories = [ rca.category_id for rca in repository.categories ]
containers_dict = container_util.build_repository_containers_for_tool_shed( trans,
repository,
@@ -2433,7 +2433,7 @@
def manage_repository_admins( self, trans, id, **kwd ):
message = kwd.get( 'message', '' )
status = kwd.get( 'status', 'done' )
- repository = suc.get_repository_in_tool_shed( trans, id )
+ repository = suc.get_repository_in_tool_shed( trans.app, id )
changeset_revision = kwd.get( 'changeset_revision', repository.tip( trans.app ) )
metadata = None
if changeset_revision != hg_util.INITIAL_CHANGELOG_HASH:
@@ -2531,13 +2531,13 @@
# Avoid caching
trans.response.headers['Pragma'] = 'no-cache'
trans.response.headers['Expires'] = '0'
- return suc.open_repository_files_folder( trans, folder_path )
+ return suc.open_repository_files_folder( folder_path )
@web.expose
def preview_tools_in_changeset( self, trans, repository_id, **kwd ):
message = kwd.get( 'message', '' )
status = kwd.get( 'status', 'done' )
- repository = suc.get_repository_in_tool_shed( trans, repository_id )
+ repository = suc.get_repository_in_tool_shed( trans.app, repository_id )
repo = hg_util.get_repo_for_repository( trans.app, repository=repository, repo_path=None, create=False )
changeset_revision = kwd.get( 'changeset_revision', repository.tip( trans.app ) )
repository_metadata = suc.get_repository_metadata_by_changeset_revision( trans.app, repository_id, changeset_revision )
@@ -2646,7 +2646,7 @@
action='browse_repositories',
message='Select a repository to rate',
status='error' ) )
- repository = suc.get_repository_in_tool_shed( trans, id )
+ repository = suc.get_repository_in_tool_shed( trans.app, id )
changeset_revision = repository.tip( trans.app )
repo = hg_util.get_repo_for_repository( trans.app, repository=repository, repo_path=None, create=False )
if repository.user == trans.user:
@@ -2685,7 +2685,7 @@
# This method is called only from the ~/templates/webapps/tool_shed/repository/manage_repository.mako template.
invalid_file_tups, metadata_dict = metadata_util.reset_all_metadata_on_repository_in_tool_shed( trans, id, **kwd )
if invalid_file_tups:
- repository = suc.get_repository_in_tool_shed( trans, id )
+ repository = suc.get_repository_in_tool_shed( trans.app, id )
message = tool_util.generate_message_for_invalid_tools( trans, invalid_file_tups, repository, metadata_dict )
status = 'error'
else:
@@ -2715,7 +2715,7 @@
message = kwd.get( 'message', '' )
status = kwd.get( 'status', 'done' )
commit_message = kwd.get( 'commit_message', 'Deleted selected files' )
- repository = suc.get_repository_in_tool_shed( trans, id )
+ repository = suc.get_repository_in_tool_shed( trans.app, id )
repo_dir = repository.repo_path( trans.app )
repo = hg_util.get_repo_for_repository( trans.app, repository=None, repo_path=repo_dir, create=False )
selected_files_to_delete = kwd.get( 'selected_files_to_delete', '' )
@@ -2786,7 +2786,7 @@
@web.expose
def send_to_owner( self, trans, id, message='' ):
- repository = suc.get_repository_in_tool_shed( trans, id )
+ repository = suc.get_repository_in_tool_shed( trans.app, id )
if not message:
message = 'Enter a message'
status = 'error'
@@ -2836,7 +2836,7 @@
total_alerts_removed = 0
flush_needed = False
for repository_id in repository_ids:
- repository = suc.get_repository_in_tool_shed( trans, repository_id )
+ repository = suc.get_repository_in_tool_shed( trans.app, repository_id )
if repository.email_alerts:
email_alerts = json.from_json_string( repository.email_alerts )
else:
@@ -3045,7 +3045,7 @@
owner = kwd.get( 'owner', None )
changeset_revision = kwd.get( 'changeset_revision', None )
if name and owner and changeset_revision:
- return suc.get_updated_changeset_revisions( trans, name, owner, changeset_revision )
+ return suc.get_updated_changeset_revisions( trans.app, name, owner, changeset_revision )
return ''
@web.expose
@@ -3078,7 +3078,7 @@
def view_changelog( self, trans, id, **kwd ):
message = kwd.get( 'message', '' )
status = kwd.get( 'status', 'done' )
- repository = suc.get_repository_in_tool_shed( trans, id )
+ repository = suc.get_repository_in_tool_shed( trans.app, id )
repo = hg_util.get_repo_for_repository( trans.app, repository=repository, repo_path=None, create=False )
changesets = []
for changeset in repo.changelog:
@@ -3113,7 +3113,7 @@
def view_changeset( self, trans, id, ctx_str, **kwd ):
message = kwd.get( 'message', '' )
status = kwd.get( 'status', 'done' )
- repository = suc.get_repository_in_tool_shed( trans, id )
+ repository = suc.get_repository_in_tool_shed( trans.app, id )
repo = hg_util.get_repo_for_repository( trans.app, repository=repository, repo_path=None, create=False )
ctx = hg_util.get_changectx_for_changeset( repo, ctx_str )
if ctx is None:
@@ -3183,7 +3183,7 @@
def view_or_manage_repository( self, trans, **kwd ):
repository_id = kwd.get( 'id', None )
if repository_id:
- repository = suc.get_repository_in_tool_shed( trans, repository_id )
+ repository = suc.get_repository_in_tool_shed( trans.app, repository_id )
user = trans.user
if repository:
if user is not None and ( trans.user_is_admin() or \
@@ -3202,7 +3202,7 @@
def view_repository( self, trans, id, **kwd ):
message = kwd.get( 'message', '' )
status = kwd.get( 'status', 'done' )
- repository = suc.get_repository_in_tool_shed( trans, id )
+ repository = suc.get_repository_in_tool_shed( trans.app, id )
repo = hg_util.get_repo_for_repository( trans.app, repository=repository, repo_path=None, create=False )
avg_rating, num_ratings = self.get_ave_item_rating_data( trans.sa_session, repository, webapp_model=trans.model )
changeset_revision = kwd.get( 'changeset_revision', repository.tip( trans.app ) )
@@ -3295,7 +3295,7 @@
message = kwd.get( 'message', '' )
status = kwd.get( 'status', 'done' )
render_repository_actions_for = kwd.get( 'render_repository_actions_for', 'tool_shed' )
- repository = suc.get_repository_in_tool_shed( trans, repository_id )
+ repository = suc.get_repository_in_tool_shed( trans.app, repository_id )
repo_files_dir = repository.repo_path( trans.app )
repo = hg_util.get_repo_for_repository( trans.app, repository=None, repo_path=repo_files_dir, create=False )
tool_metadata_dict = {}
@@ -3345,7 +3345,10 @@
basic_util.remove_dir( work_dir )
break
if guid:
- tool_lineage = tool_util.get_version_lineage_for_tool( trans, repository_id, repository_metadata, guid )
+ tool_lineage = tool_util.get_version_lineage_for_tool( trans.app,
+ repository_id,
+ repository_metadata,
+ guid )
else:
repository_metadata_id = None
metadata = None
@@ -3377,8 +3380,8 @@
render_repository_actions_for = kwd.get( 'render_repository_actions_for', 'tool_shed' )
if workflow_name:
workflow_name = encoding_util.tool_shed_decode( workflow_name )
- repository_metadata = metadata_util.get_repository_metadata_by_id( trans, repository_metadata_id )
- repository = suc.get_repository_in_tool_shed( trans, trans.security.encode_id( repository_metadata.repository_id ) )
+ repository_metadata = metadata_util.get_repository_metadata_by_id( trans.app, repository_metadata_id )
+ repository = suc.get_repository_in_tool_shed( trans.app, trans.security.encode_id( repository_metadata.repository_id ) )
changeset_revision = repository_metadata.changeset_revision
metadata = repository_metadata.metadata
return trans.fill_template( "/webapps/tool_shed/repository/view_workflow.mako",
diff -r 7d6f8afcac4d6c208e864c9b8eb35d14d98725cc -r 9a69f032d6d9a0d5ece25332db78a81b3a18d91c lib/galaxy/webapps/tool_shed/controllers/repository_review.py
--- a/lib/galaxy/webapps/tool_shed/controllers/repository_review.py
+++ b/lib/galaxy/webapps/tool_shed/controllers/repository_review.py
@@ -136,7 +136,7 @@
if repository_id:
if changeset_revision:
# Make sure there is not already a review of the revision by the user.
- repository = suc.get_repository_in_tool_shed( trans, repository_id )
+ repository = suc.get_repository_in_tool_shed( trans.app, repository_id )
if review_util.get_review_by_repository_id_changeset_revision_user_id( trans=trans,
repository_id=repository_id,
changeset_revision=changeset_revision,
@@ -465,7 +465,7 @@
status = kwd.get( 'status', 'done' )
repository_id = kwd.get( 'id', None )
if repository_id:
- repository = suc.get_repository_in_tool_shed( trans, repository_id )
+ repository = suc.get_repository_in_tool_shed( trans.app, repository_id )
repo = hg_util.get_repo_for_repository( trans.app, repository=repository, repo_path=None, create=False )
metadata_revision_hashes = [ metadata_revision.changeset_revision for metadata_revision in repository.metadata_revisions ]
reviewed_revision_hashes = [ review.changeset_revision for review in repository.reviews ]
@@ -511,7 +511,7 @@
status = kwd.get( 'status', 'done' )
repository_id = kwd.get( 'id', None )
changeset_revision = kwd.get( 'changeset_revision', None )
- repository = suc.get_repository_in_tool_shed( trans, repository_id )
+ repository = suc.get_repository_in_tool_shed( trans.app, repository_id )
repo = hg_util.get_repo_for_repository( trans.app, repository=repository, repo_path=None, create=False )
installable = changeset_revision in [ metadata_revision.changeset_revision for metadata_revision in repository.metadata_revisions ]
rev, changeset_revision_label = hg_util.get_rev_label_from_changeset_revision( repo, changeset_revision )
@@ -547,7 +547,7 @@
action='view_or_manage_repository',
**kwd ) )
# The user may not be the current user. The value of the received id is the encoded user id.
- user = suc.get_user( trans, kwd[ 'id' ] )
+ user = suc.get_user( trans.app, kwd[ 'id' ] )
self.repository_reviews_by_user_grid.title = "All repository revision reviews for user '%s'" % user.username
return self.repository_reviews_by_user_grid( trans, **kwd )
@@ -575,7 +575,7 @@
# The value of the received id is the encoded repository id.
message = kwd.get( 'message', '' )
status = kwd.get( 'status', 'done' )
- repository = suc.get_repository_in_tool_shed( trans, kwd[ 'id' ] )
+ repository = suc.get_repository_in_tool_shed( trans.app, kwd[ 'id' ] )
changeset_revision = kwd.get( 'changeset_revision', None )
repo = hg_util.get_repo_for_repository( trans.app, repository=repository, repo_path=None, create=False )
previous_reviews_dict = review_util.get_previous_repository_reviews( trans, repository, changeset_revision )
@@ -591,7 +591,7 @@
@web.expose
@web.require_login( "view or manage repository" )
def view_or_manage_repository( self, trans, **kwd ):
- repository = suc.get_repository_in_tool_shed( trans, kwd[ 'id' ] )
+ repository = suc.get_repository_in_tool_shed( trans.app, kwd[ 'id' ] )
if trans.user_is_admin() or repository.user == trans.user:
return trans.response.send_redirect( web.url_for( controller='repository',
action='manage_repository',
diff -r 7d6f8afcac4d6c208e864c9b8eb35d14d98725cc -r 9a69f032d6d9a0d5ece25332db78a81b3a18d91c lib/galaxy/webapps/tool_shed/controllers/upload.py
--- a/lib/galaxy/webapps/tool_shed/controllers/upload.py
+++ b/lib/galaxy/webapps/tool_shed/controllers/upload.py
@@ -35,9 +35,9 @@
status = kwd.get( 'status', 'done' )
commit_message = kwd.get( 'commit_message', 'Uploaded' )
category_ids = util.listify( kwd.get( 'category_id', '' ) )
- categories = suc.get_categories( trans )
+ categories = suc.get_categories( trans.app )
repository_id = kwd.get( 'repository_id', '' )
- repository = suc.get_repository_in_tool_shed( trans, repository_id )
+ repository = suc.get_repository_in_tool_shed( trans.app, repository_id )
repo_dir = repository.repo_path( trans.app )
repo = hg_util.get_repo_for_repository( trans.app, repository=None, repo_path=repo_dir, create=False )
uncompress_file = util.string_as_bool( kwd.get( 'uncompress_file', 'true' ) )
diff -r 7d6f8afcac4d6c208e864c9b8eb35d14d98725cc -r 9a69f032d6d9a0d5ece25332db78a81b3a18d91c lib/tool_shed/galaxy_install/grids/admin_toolshed_grids.py
--- a/lib/tool_shed/galaxy_install/grids/admin_toolshed_grids.py
+++ b/lib/tool_shed/galaxy_install/grids/admin_toolshed_grids.py
@@ -103,7 +103,7 @@
class StatusColumn( grids.TextColumn ):
def get_value( self, trans, grid, tool_shed_repository ):
- return suc.get_tool_shed_repository_status_label( trans, tool_shed_repository )
+ return suc.get_tool_shed_repository_status_label( trans.app, tool_shed_repository )
class ToolShedColumn( grids.TextColumn ):
diff -r 7d6f8afcac4d6c208e864c9b8eb35d14d98725cc -r 9a69f032d6d9a0d5ece25332db78a81b3a18d91c lib/tool_shed/galaxy_install/repository_util.py
--- a/lib/tool_shed/galaxy_install/repository_util.py
+++ b/lib/tool_shed/galaxy_install/repository_util.py
@@ -185,7 +185,7 @@
return repair_dict
def get_repo_info_dict( trans, repository_id, changeset_revision ):
- repository = suc.get_repository_in_tool_shed( trans, repository_id )
+ repository = suc.get_repository_in_tool_shed( trans.app, repository_id )
repo = hg_util.get_repo_for_repository( trans.app, repository=repository, repo_path=None, create=False )
repository_clone_url = common_util.generate_clone_url_for_repository_in_tool_shed( trans, repository )
repository_metadata = suc.get_repository_metadata_by_changeset_revision( trans.app,
diff -r 7d6f8afcac4d6c208e864c9b8eb35d14d98725cc -r 9a69f032d6d9a0d5ece25332db78a81b3a18d91c lib/tool_shed/grids/repository_grids.py
--- a/lib/tool_shed/grids/repository_grids.py
+++ b/lib/tool_shed/grids/repository_grids.py
@@ -579,7 +579,7 @@
filter = trans.app.repository_grid_filter_manager.get_filter( trans )
if filter == trans.app.repository_grid_filter_manager.filters.CERTIFIED_LEVEL_ONE:
if category_id:
- category = suc.get_category( trans, category_id )
+ category = suc.get_category( trans.app, category_id )
if category:
return trans.sa_session.query( model.Repository ) \
.join( model.RepositoryMetadata.table ) \
@@ -596,7 +596,7 @@
.outerjoin( model.Category.table )
if filter == trans.app.repository_grid_filter_manager.filters.CERTIFIED_LEVEL_ONE_SUITES:
if category_id:
- category = suc.get_category( trans, category_id )
+ category = suc.get_category( trans.app, category_id )
if category:
return trans.sa_session.query( model.Repository ) \
.filter( model.Repository.type == rt_util.REPOSITORY_SUITE_DEFINITION ) \
@@ -616,7 +616,7 @@
else:
# The value of filter is None.
if category_id:
- category = suc.get_category( trans, category_id )
+ category = suc.get_category( trans.app, category_id )
if category:
return trans.sa_session.query( model.Repository ) \
.filter( and_( model.Repository.table.c.deleted == False,
diff -r 7d6f8afcac4d6c208e864c9b8eb35d14d98725cc -r 9a69f032d6d9a0d5ece25332db78a81b3a18d91c lib/tool_shed/util/container_util.py
--- a/lib/tool_shed/util/container_util.py
+++ b/lib/tool_shed/util/container_util.py
@@ -1580,7 +1580,8 @@
tool_shed_repository_id = None
installation_status = 'unknown'
if tool_shed_repository_id:
- tool_shed_repository = suc.get_tool_shed_repository_by_id( trans, trans.security.encode_id( tool_shed_repository_id ) )
+ tool_shed_repository = suc.get_tool_shed_repository_by_id( trans.app,
+ trans.security.encode_id( tool_shed_repository_id ) )
if tool_shed_repository:
if tool_shed_repository.missing_repository_dependencies:
installation_status = '%s, missing repository dependencies' % installation_status
diff -r 7d6f8afcac4d6c208e864c9b8eb35d14d98725cc -r 9a69f032d6d9a0d5ece25332db78a81b3a18d91c lib/tool_shed/util/export_util.py
--- a/lib/tool_shed/util/export_util.py
+++ b/lib/tool_shed/util/export_util.py
@@ -52,7 +52,7 @@
def export_repository( trans, tool_shed_url, repository_id, repository_name, changeset_revision, file_type,
export_repository_dependencies, api=False ):
- repository = suc.get_repository_in_tool_shed( trans, repository_id )
+ repository = suc.get_repository_in_tool_shed( trans.app, repository_id )
repositories_archive_filename = generate_repository_archive_filename( tool_shed_url,
str( repository.name ),
str( repository.user.username ),
@@ -239,7 +239,7 @@
Return a list of dictionaries defining repositories that are required by the repository associated with the
received repository_id.
"""
- repository = suc.get_repository_in_tool_shed( trans, repository_id )
+ repository = suc.get_repository_in_tool_shed( trans.app, repository_id )
repository_metadata = suc.get_repository_metadata_by_changeset_revision( trans.app, repository_id, changeset_revision )
# Get a dictionary of all repositories upon which the contents of the current repository_metadata record depend.
toolshed_base_url = str( web.url_for( '/', qualified=True ) ).rstrip( '/' )
diff -r 7d6f8afcac4d6c208e864c9b8eb35d14d98725cc -r 9a69f032d6d9a0d5ece25332db78a81b3a18d91c lib/tool_shed/util/import_util.py
--- a/lib/tool_shed/util/import_util.py
+++ b/lib/tool_shed/util/import_util.py
@@ -38,7 +38,7 @@
flush = True
# Do not allow dependent repository revisions to be automatically installed if population
# resulted in errors.
- dependent_downloadable_revisions = suc.get_dependent_downloadable_revisions( trans, repository_metadata )
+ dependent_downloadable_revisions = suc.get_dependent_downloadable_revisions( trans.app, repository_metadata )
for dependent_downloadable_revision in dependent_downloadable_revisions:
if dependent_downloadable_revision.downloadable:
dependent_downloadable_revision.downloadable = False
diff -r 7d6f8afcac4d6c208e864c9b8eb35d14d98725cc -r 9a69f032d6d9a0d5ece25332db78a81b3a18d91c lib/tool_shed/util/metadata_util.py
--- a/lib/tool_shed/util/metadata_util.py
+++ b/lib/tool_shed/util/metadata_util.py
@@ -352,10 +352,10 @@
repo = hg_util.get_repo_for_repository( trans.app, repository=repository, repo_path=None, create=False )
for changeset in repo.changelog:
changeset_hash = str( repo.changectx( changeset ) )
- skip_tool_test = suc.get_skip_tool_test_by_changeset_revision( trans, changeset_hash )
+ skip_tool_test = suc.get_skip_tool_test_by_changeset_revision( trans.app, changeset_hash )
if skip_tool_test:
# We found a skip_tool_test record associated with the changeset_revision, so see if it has a valid repository_revision.
- repository_revision = suc.get_repository_metadata_by_id( trans, trans.security.encode_id( repository_metadata.id ) )
+ repository_revision = get_repository_metadata_by_id( trans.app, trans.security.encode_id( repository_metadata.id ) )
if repository_revision:
# The skip_tool_test record is associated with a valid repository_metadata record, so proceed.
continue
@@ -1117,9 +1117,10 @@
relative_path_to_file = relative_path_to_file[ len( shed_config_dict.get( 'tool_path' ) ) + 1: ]
return relative_path_to_file
-def get_repository_metadata_by_id( trans, id ):
+def get_repository_metadata_by_id( app, id ):
"""Get repository metadata from the database"""
- return trans.sa_session.query( trans.model.RepositoryMetadata ).get( trans.security.decode_id( id ) )
+ sa_session = app.model.context.current
+ return sa_session.query( app.model.RepositoryMetadata ).get( app.security.decode_id( id ) )
def get_repository_metadata_by_repository_id_changeset_revision( trans, id, changeset_revision, metadata_only=False ):
"""Get a specified metadata record for a specified repository in the tool shed."""
@@ -1692,30 +1693,31 @@
workflows=None )
return containers_dict
-def reset_all_metadata_on_installed_repository( trans, id ):
+def reset_all_metadata_on_installed_repository( app, id ):
"""Reset all metadata on a single tool shed repository installed into a Galaxy instance."""
invalid_file_tups = []
metadata_dict = {}
- repository = suc.get_installed_tool_shed_repository( trans.app, id )
- repository_clone_url = common_util.generate_clone_url_for_installed_repository( trans.app, repository )
- tool_path, relative_install_dir = repository.get_tool_relative_path( trans.app )
+ repository = suc.get_installed_tool_shed_repository( app, id )
+ repository_clone_url = common_util.generate_clone_url_for_installed_repository( app, repository )
+ tool_path, relative_install_dir = repository.get_tool_relative_path( app )
if relative_install_dir:
original_metadata_dict = repository.metadata
- metadata_dict, invalid_file_tups = generate_metadata_for_changeset_revision( app=trans.app,
- repository=repository,
- changeset_revision=repository.changeset_revision,
- repository_clone_url=repository_clone_url,
- shed_config_dict = repository.get_shed_config_dict( trans.app ),
- relative_install_dir=relative_install_dir,
- repository_files_dir=None,
- resetting_all_metadata_on_repository=False,
- updating_installed_repository=False,
- persist=False )
+ metadata_dict, invalid_file_tups = \
+ generate_metadata_for_changeset_revision( app=app,
+ repository=repository,
+ changeset_revision=repository.changeset_revision,
+ repository_clone_url=repository_clone_url,
+ shed_config_dict = repository.get_shed_config_dict( app ),
+ relative_install_dir=relative_install_dir,
+ repository_files_dir=None,
+ resetting_all_metadata_on_repository=False,
+ updating_installed_repository=False,
+ persist=False )
repository.metadata = metadata_dict
if metadata_dict != original_metadata_dict:
- suc.update_in_shed_tool_config( trans.app, repository )
- trans.install_model.context.add( repository )
- trans.install_model.context.flush()
+ suc.update_in_shed_tool_config( app, repository )
+ app.install_model.context.add( repository )
+ app.install_model.context.flush()
log.debug( 'Metadata has been reset on repository %s.' % repository.name )
else:
log.debug( 'Metadata did not need to be reset on repository %s.' % repository.name )
@@ -1761,7 +1763,7 @@
repository_metadata.tool_versions = tool_versions_dict
trans.sa_session.add( repository_metadata )
trans.sa_session.flush()
- repository = suc.get_repository_in_tool_shed( trans, id )
+ repository = suc.get_repository_in_tool_shed( trans.app, id )
log.debug( "Resetting all metadata on repository: %s" % repository.name )
repo_dir = repository.repo_path( trans.app )
repo = hg_util.get_repo_for_repository( trans.app, repository=None, repo_path=repo_dir, create=False )
@@ -1881,14 +1883,14 @@
try:
if trans.webapp.name == 'tool_shed':
# We're in the tool shed.
- repository = suc.get_repository_in_tool_shed( trans, repository_id )
+ repository = suc.get_repository_in_tool_shed( trans.app, repository_id )
owner = str( repository.user.username )
invalid_file_tups, metadata_dict = reset_all_metadata_on_repository_in_tool_shed( trans, repository_id )
else:
# We're in Galaxy.
repository = suc.get_installed_tool_shed_repository( trans.app, repository_id )
owner = str( repository.owner )
- invalid_file_tups, metadata_dict = reset_all_metadata_on_installed_repository( trans, repository_id )
+ invalid_file_tups, metadata_dict = reset_all_metadata_on_installed_repository( trans.app, repository_id )
if invalid_file_tups:
message = tool_util.generate_message_for_invalid_tools( trans, invalid_file_tups, repository, None, as_html=False )
log.debug( message )
diff -r 7d6f8afcac4d6c208e864c9b8eb35d14d98725cc -r 9a69f032d6d9a0d5ece25332db78a81b3a18d91c lib/tool_shed/util/repository_dependency_util.py
--- a/lib/tool_shed/util/repository_dependency_util.py
+++ b/lib/tool_shed/util/repository_dependency_util.py
@@ -269,10 +269,10 @@
message += '</table>'
return message
-def get_key_for_repository_changeset_revision( trans, toolshed_base_url, repository, repository_metadata, all_repository_dependencies ):
+def get_key_for_repository_changeset_revision( app, toolshed_base_url, repository, repository_metadata, all_repository_dependencies ):
# The received toolshed_base_url must include the port, but doesn't have to include the protocol.
prior_installation_required, only_if_compiling_contained_td = \
- get_prior_installation_required_and_only_if_compiling_contained_td( trans,
+ get_prior_installation_required_and_only_if_compiling_contained_td( app,
toolshed_base_url,
repository,
repository_metadata,
@@ -286,7 +286,7 @@
only_if_compiling_contained_td=only_if_compiling_contained_td )
return key
-def get_prior_installation_required_and_only_if_compiling_contained_td( trans, toolshed_base_url, repository, repository_metadata,
+def get_prior_installation_required_and_only_if_compiling_contained_td( app, toolshed_base_url, repository, repository_metadata,
all_repository_dependencies ):
"""
This method is called from the tool shed and never from Galaxy. If all_repository_dependencies contains
@@ -312,7 +312,7 @@
metadata = repository_metadata.metadata
current_changeset_revision = str( repository_metadata.changeset_revision )
# Get the changeset revision to which the current value of required_repository_changeset_revision should be updated if it's not current.
- text = suc.get_updated_changeset_revisions( trans,
+ text = suc.get_updated_changeset_revisions( app,
name=str( repository.name ),
owner=str( repository.user.username ),
changeset_revision=current_changeset_revision )
@@ -401,7 +401,7 @@
# The value of the received toolshed_base_url must include the port, but doesn't have
# to include the protocol.
if 'repository_dependencies' in metadata:
- current_repository_key = get_key_for_repository_changeset_revision( trans,
+ current_repository_key = get_key_for_repository_changeset_revision( trans.app,
toolshed_base_url,
repository,
repository_metadata,
diff -r 7d6f8afcac4d6c208e864c9b8eb35d14d98725cc -r 9a69f032d6d9a0d5ece25332db78a81b3a18d91c lib/tool_shed/util/repository_maintenance_util.py
--- a/lib/tool_shed/util/repository_maintenance_util.py
+++ b/lib/tool_shed/util/repository_maintenance_util.py
@@ -152,7 +152,7 @@
category_ids = []
category_names = repository_archive_dict.get( 'category_names', [] )
for category_name in category_names:
- category = suc.get_category_by_name( trans, category_name )
+ category = suc.get_category_by_name( trans.app, category_name )
if category is None:
results_message += 'This Tool Shed does not have the category <b>%s</b> so it ' % str( category_name )
results_message += 'will not be associated with this repository.'
diff -r 7d6f8afcac4d6c208e864c9b8eb35d14d98725cc -r 9a69f032d6d9a0d5ece25332db78a81b3a18d91c lib/tool_shed/util/shed_util_common.py
--- a/lib/tool_shed/util/shed_util_common.py
+++ b/lib/tool_shed/util/shed_util_common.py
@@ -97,32 +97,32 @@
repositories_select_field.add_option( option_label, option_value )
return repositories_select_field
-def build_tool_dependencies_select_field( trans, tool_shed_repository, name, multiple=True, display='checkboxes', uninstalled_only=False ):
+def build_tool_dependencies_select_field( app, tool_shed_repository, name, multiple=True, display='checkboxes', uninstalled_only=False ):
"""Method called from Galaxy to generate the current list of tool dependency ids for an installed tool shed repository."""
tool_dependencies_select_field = SelectField( name=name, multiple=multiple, display=display )
for tool_dependency in tool_shed_repository.tool_dependencies:
if uninstalled_only:
- if tool_dependency.status not in [ trans.install_model.ToolDependency.installation_status.NEVER_INSTALLED,
- trans.install_model.ToolDependency.installation_status.UNINSTALLED ]:
+ if tool_dependency.status not in [ app.install_model.ToolDependency.installation_status.NEVER_INSTALLED,
+ app.install_model.ToolDependency.installation_status.UNINSTALLED ]:
continue
else:
- if tool_dependency.status in [ trans.install_model.ToolDependency.installation_status.NEVER_INSTALLED,
- trans.install_model.ToolDependency.installation_status.UNINSTALLED ]:
+ if tool_dependency.status in [ app.install_model.ToolDependency.installation_status.NEVER_INSTALLED,
+ app.install_model.ToolDependency.installation_status.UNINSTALLED ]:
continue
option_label = '%s version %s' % ( str( tool_dependency.name ), str( tool_dependency.version ) )
- option_value = trans.security.encode_id( tool_dependency.id )
+ option_value = app.security.encode_id( tool_dependency.id )
tool_dependencies_select_field.add_option( option_label, option_value )
return tool_dependencies_select_field
-def check_or_update_tool_shed_status_for_installed_repository( trans, repository ):
+def check_or_update_tool_shed_status_for_installed_repository( app, repository ):
updated = False
- tool_shed_status_dict = get_tool_shed_status_for_installed_repository( trans.app, repository )
+ tool_shed_status_dict = get_tool_shed_status_for_installed_repository( app, repository )
if tool_shed_status_dict:
ok = True
if tool_shed_status_dict != repository.tool_shed_status:
repository.tool_shed_status = tool_shed_status_dict
- trans.install_model.context.add( repository )
- trans.install_model.context.flush()
+ app.install_model.context.add( repository )
+ app.install_model.context.flush()
updated = True
else:
ok = False
@@ -368,21 +368,24 @@
return os.path.abspath( os.path.join( root, name ) )
return file_path
-def get_categories( trans ):
+def get_categories( app ):
"""Get all categories from the database."""
- return trans.sa_session.query( trans.model.Category ) \
- .filter( trans.model.Category.table.c.deleted==False ) \
- .order_by( trans.model.Category.table.c.name ) \
- .all()
+ sa_session = app.model.context.current
+ return sa_session.query( app.model.Category ) \
+ .filter( app.model.Category.table.c.deleted==False ) \
+ .order_by( app.model.Category.table.c.name ) \
+ .all()
-def get_category( trans, id ):
+def get_category( app, id ):
"""Get a category from the database."""
- return trans.sa_session.query( trans.model.Category ).get( trans.security.decode_id( id ) )
+ sa_session = app.model.context.current
+ return sa_session.query( app.model.Category ).get( app.security.decode_id( id ) )
-def get_category_by_name( trans, name ):
+def get_category_by_name( app, name ):
"""Get a category from the database via name."""
+ sa_session = app.model.context.current
try:
- return trans.sa_session.query( trans.model.Category ).filter_by( name=name ).one()
+ return sa_session.query( app.model.Category ).filter_by( name=name ).one()
except sqlalchemy.orm.exc.NoResultFound:
return None
@@ -419,21 +422,22 @@
return repository_metadata
return None
-def get_dependent_downloadable_revisions( trans, repository_metadata ):
+def get_dependent_downloadable_revisions( app, repository_metadata ):
"""
Return all repository_metadata records that are downloadable and that depend upon the received
repository_metadata record.
"""
# This method is called only from the tool shed.
+ sa_session = app.model.context.current
rm_changeset_revision = repository_metadata.changeset_revision
rm_repository = repository_metadata.repository
rm_repository_name = str( rm_repository.name )
rm_repository_owner = str( rm_repository.user.username )
dependent_downloadable_revisions = []
- for repository in trans.sa_session.query( trans.model.Repository ) \
- .filter( and_( trans.model.Repository.table.c.id != rm_repository.id,
- trans.model.Repository.table.c.deleted == False,
- trans.model.Repository.table.c.deprecated == False ) ):
+ for repository in sa_session.query( app.model.Repository ) \
+ .filter( and_( app.model.Repository.table.c.id != rm_repository.id,
+ app.model.Repository.table.c.deleted == False,
+ app.model.Repository.table.c.deprecated == False ) ):
downloadable_revisions = repository.downloadable_revisions
if downloadable_revisions:
for downloadable_revision in downloadable_revisions:
@@ -454,14 +458,14 @@
else:
# Make sure the defined changeset_revision is current.
defined_repository_metadata = \
- trans.sa_session.query( trans.model.RepositoryMetadata ) \
- .filter( trans.model.RepositoryMetadata.table.c.changeset_revision == changeset_revision ) \
- .first()
+ sa_session.query( app.model.RepositoryMetadata ) \
+ .filter( app.model.RepositoryMetadata.table.c.changeset_revision == changeset_revision ) \
+ .first()
if defined_repository_metadata is None:
# The defined changeset_revision is not associated with a repository_metadata
# record, so updates must be necessary.
- defined_repository = get_repository_by_name_and_owner( trans.app, name, owner )
- defined_repo = hg_util.get_repo_for_repository( trans.app,
+ defined_repository = get_repository_by_name_and_owner( app, name, owner )
+ defined_repo = hg_util.get_repo_for_repository( app,
repository=defined_repository,
repo_path=None,
create=False )
@@ -473,20 +477,21 @@
dependent_downloadable_revisions.append( downloadable_revision )
return dependent_downloadable_revisions
-def get_ids_of_tool_shed_repositories_being_installed( trans, as_string=False ):
+def get_ids_of_tool_shed_repositories_being_installed( app, as_string=False ):
installing_repository_ids = []
- new_status = trans.install_model.ToolShedRepository.installation_status.NEW
- cloning_status = trans.install_model.ToolShedRepository.installation_status.CLONING
- setting_tool_versions_status = trans.install_model.ToolShedRepository.installation_status.SETTING_TOOL_VERSIONS
- installing_dependencies_status = trans.install_model.ToolShedRepository.installation_status.INSTALLING_TOOL_DEPENDENCIES
- loading_datatypes_status = trans.install_model.ToolShedRepository.installation_status.LOADING_PROPRIETARY_DATATYPES
- for tool_shed_repository in trans.install_model.context.query( trans.install_model.ToolShedRepository ) \
- .filter( or_( trans.install_model.ToolShedRepository.status == new_status,
- trans.install_model.ToolShedRepository.status == cloning_status,
- trans.install_model.ToolShedRepository.status == setting_tool_versions_status,
- trans.install_model.ToolShedRepository.status == installing_dependencies_status,
- trans.install_model.ToolShedRepository.status == loading_datatypes_status ) ):
- installing_repository_ids.append( trans.security.encode_id( tool_shed_repository.id ) )
+ new_status = app.install_model.ToolShedRepository.installation_status.NEW
+ cloning_status = app.install_model.ToolShedRepository.installation_status.CLONING
+ setting_tool_versions_status = app.install_model.ToolShedRepository.installation_status.SETTING_TOOL_VERSIONS
+ installing_dependencies_status = app.install_model.ToolShedRepository.installation_status.INSTALLING_TOOL_DEPENDENCIES
+ loading_datatypes_status = app.install_model.ToolShedRepository.installation_status.LOADING_PROPRIETARY_DATATYPES
+ for tool_shed_repository in \
+ app.install_model.context.query( app.install_model.ToolShedRepository ) \
+ .filter( or_( app.install_model.ToolShedRepository.status == new_status,
+ app.install_model.ToolShedRepository.status == cloning_status,
+ app.install_model.ToolShedRepository.status == setting_tool_versions_status,
+ app.install_model.ToolShedRepository.status == installing_dependencies_status,
+ app.install_model.ToolShedRepository.status == loading_datatypes_status ) ):
+ installing_repository_ids.append( app.security.encode_id( tool_shed_repository.id ) )
if as_string:
return ','.join( installing_repository_ids )
return installing_repository_ids
@@ -723,12 +728,13 @@
repository = get_repository_by_name_and_owner( app, repository_name, repository_owner )
return repository, repository_dependencies
-def get_repository_by_id( trans, id ):
+def get_repository_by_id( app, id ):
"""Get a repository from the database via id."""
- if is_tool_shed_client( trans.app ):
- return trans.install_model.context.query( trans.install_model.ToolShedRepository ).get( trans.security.decode_id( id ) )
+ if is_tool_shed_client( app ):
+ return app.install_model.context.query( app.install_model.ToolShedRepository ).get( app.security.decode_id( id ) )
else:
- return trans.sa_session.query( trans.model.Repository ).get( trans.security.decode_id( id ) )
+ sa_session = app.model.context.current
+ return sa_session.query( app.model.Repository ).get( app.security.decode_id( id ) )
def get_repository_by_name( app, name ):
"""Get a repository from the database via name."""
@@ -851,7 +857,7 @@
beginning_on_size_error=True )
return safe_str
-def get_repository_files( trans, folder_path ):
+def get_repository_files( folder_path ):
"""Return the file hierarchy of a tool shed repository."""
contents = []
for item in os.listdir( folder_path ):
@@ -866,7 +872,7 @@
contents.sort()
return contents
-def get_repository_from_refresh_on_change( trans, **kwd ):
+def get_repository_from_refresh_on_change( app, **kwd ):
# The changeset_revision_select_field in several grids performs a refresh_on_change which sends in request parameters like
changeset_revision_1, changeset_revision_2, etc. One of the many select fields on the grid performed the refresh_on_change,
# so we loop through all of the received values to see which value is not the repository tip. If we find it, we know the
@@ -876,9 +882,9 @@
for k, v in kwd.items():
changeset_revision_str = 'changeset_revision_'
if k.startswith( changeset_revision_str ):
- repository_id = trans.security.encode_id( int( k.lstrip( changeset_revision_str ) ) )
- repository = get_repository_in_tool_shed( trans, repository_id )
- if repository.tip( trans.app ) != v:
+ repository_id = app.security.encode_id( int( k.lstrip( changeset_revision_str ) ) )
+ repository = get_repository_in_tool_shed( app, repository_id )
+ if repository.tip( app ) != v:
return v, repository
# This should never be reached - raise an exception?
return v, None
@@ -919,9 +925,10 @@
prior_tsr_ids.append( encoded_repository_id )
return prior_tsr_ids
-def get_repository_in_tool_shed( trans, id ):
+def get_repository_in_tool_shed( app, id ):
"""Get a repository on the tool shed side from the database via id."""
- return trans.sa_session.query( trans.model.Repository ).get( trans.security.decode_id( id ) )
+ sa_session = app.model.context.current
+ return sa_session.query( app.model.Repository ).get( app.security.decode_id( id ) )
def get_repository_metadata_by_changeset_revision( app, id, changeset_revision ):
"""Get metadata for a specified repository change set from the database."""
@@ -944,9 +951,6 @@
return all_metadata_records[ 0 ]
return None
-def get_repository_metadata_by_id( trans, id ):
- return trans.sa_session.query( trans.model.RepositoryMetadata ).get( trans.security.decode_id( id ) )
-
def get_repository_owner( cleaned_repository_url ):
"""Gvien a "cleaned" repository clone URL, return the owner of the repository."""
items = cleaned_repository_url.split( '/repos/' )
@@ -996,12 +1000,13 @@
if shed_tool_conf == file_name:
return index, shed_tool_conf_dict
-def get_skip_tool_test_by_changeset_revision( trans, changeset_revision ):
+def get_skip_tool_test_by_changeset_revision( app, changeset_revision ):
"""Return a skip_tool_test record whose initial_changeset_revision is the received changeset_revision."""
# There should only be one, but we'll use first() so callers won't have to handle exceptions.
- return trans.sa_session.query( trans.model.SkipToolTest ) \
- .filter( trans.model.SkipToolTest.table.c.initial_changeset_revision == changeset_revision ) \
- .first()
+ sa_session = app.model.context.current
+ return sa_session.query( app.model.SkipToolTest ) \
+ .filter( app.model.SkipToolTest.table.c.initial_changeset_revision == changeset_revision ) \
+ .first()
def get_tool_panel_config_tool_path_install_dir( app, repository ):
"""
@@ -1026,12 +1031,12 @@
tool_path = shed_config_dict[ 'tool_path' ]
return shed_tool_conf, tool_path, relative_install_dir
-def get_tool_path_by_shed_tool_conf_filename( trans, shed_tool_conf ):
+def get_tool_path_by_shed_tool_conf_filename( app, shed_tool_conf ):
"""
    Return the tool_path config setting for the received shed_tool_conf file by searching the toolbox's in-memory list of shed_tool_confs for the
dictionary whose config_filename key has a value matching the received shed_tool_conf.
"""
- for shed_tool_conf_dict in trans.app.toolbox.shed_tool_confs:
+ for shed_tool_conf_dict in app.toolbox.shed_tool_confs:
config_filename = shed_tool_conf_dict[ 'config_filename' ]
if config_filename == shed_tool_conf:
return shed_tool_conf_dict[ 'tool_path' ]
@@ -1045,12 +1050,12 @@
tmp_url = common_util.remove_protocol_and_user_from_clone_url( repository_clone_url )
return tmp_url.split( '/repos/' )[ 0 ].rstrip( '/' )
-def get_tool_shed_repository_by_id( trans, repository_id ):
+def get_tool_shed_repository_by_id( app, repository_id ):
"""Return a tool shed repository database record defined by the id."""
# This method is used only in Galaxy, not the tool shed.
- return trans.install_model.context.query( trans.install_model.ToolShedRepository ) \
- .filter( trans.install_model.ToolShedRepository.table.c.id == trans.security.decode_id( repository_id ) ) \
- .first()
+ return app.install_model.context.query( app.install_model.ToolShedRepository ) \
+ .filter( app.install_model.ToolShedRepository.table.c.id == app.security.decode_id( repository_id ) ) \
+ .first()
def get_tool_shed_repository_by_shed_name_owner_changeset_revision( app, tool_shed, name, owner, changeset_revision ):
"""
@@ -1123,60 +1128,62 @@
log.exception( "Error attempting to get tool shed status for installed repository %s: %s" % ( str( repository.name ), str( e ) ) )
return {}
-def get_tool_shed_repository_status_label( trans, tool_shed_repository=None, name=None, owner=None, changeset_revision=None, repository_clone_url=None ):
+def get_tool_shed_repository_status_label( app, tool_shed_repository=None, name=None, owner=None, changeset_revision=None, repository_clone_url=None ):
"""Return a color-coded label for the status of the received tool-shed_repository installed into Galaxy."""
if tool_shed_repository is None:
if name is not None and owner is not None and repository_clone_url is not None:
tool_shed = get_tool_shed_from_clone_url( repository_clone_url )
- tool_shed_repository = get_tool_shed_repository_by_shed_name_owner_installed_changeset_revision( trans.app,
+ tool_shed_repository = get_tool_shed_repository_by_shed_name_owner_installed_changeset_revision( app,
tool_shed,
name,
owner,
changeset_revision )
if tool_shed_repository:
status_label = tool_shed_repository.status
- if tool_shed_repository.status in [ trans.install_model.ToolShedRepository.installation_status.CLONING,
- trans.install_model.ToolShedRepository.installation_status.SETTING_TOOL_VERSIONS,
- trans.install_model.ToolShedRepository.installation_status.INSTALLING_REPOSITORY_DEPENDENCIES,
- trans.install_model.ToolShedRepository.installation_status.INSTALLING_TOOL_DEPENDENCIES,
- trans.install_model.ToolShedRepository.installation_status.LOADING_PROPRIETARY_DATATYPES ]:
- bgcolor = trans.install_model.ToolShedRepository.states.INSTALLING
- elif tool_shed_repository.status in [ trans.install_model.ToolShedRepository.installation_status.NEW,
- trans.install_model.ToolShedRepository.installation_status.UNINSTALLED ]:
- bgcolor = trans.install_model.ToolShedRepository.states.UNINSTALLED
- elif tool_shed_repository.status in [ trans.install_model.ToolShedRepository.installation_status.ERROR ]:
- bgcolor = trans.install_model.ToolShedRepository.states.ERROR
- elif tool_shed_repository.status in [ trans.install_model.ToolShedRepository.installation_status.DEACTIVATED ]:
- bgcolor = trans.install_model.ToolShedRepository.states.WARNING
- elif tool_shed_repository.status in [ trans.install_model.ToolShedRepository.installation_status.INSTALLED ]:
+ if tool_shed_repository.status in [ app.install_model.ToolShedRepository.installation_status.CLONING,
+ app.install_model.ToolShedRepository.installation_status.SETTING_TOOL_VERSIONS,
+ app.install_model.ToolShedRepository.installation_status.INSTALLING_REPOSITORY_DEPENDENCIES,
+ app.install_model.ToolShedRepository.installation_status.INSTALLING_TOOL_DEPENDENCIES,
+ app.install_model.ToolShedRepository.installation_status.LOADING_PROPRIETARY_DATATYPES ]:
+ bgcolor = app.install_model.ToolShedRepository.states.INSTALLING
+ elif tool_shed_repository.status in [ app.install_model.ToolShedRepository.installation_status.NEW,
+ app.install_model.ToolShedRepository.installation_status.UNINSTALLED ]:
+ bgcolor = app.install_model.ToolShedRepository.states.UNINSTALLED
+ elif tool_shed_repository.status in [ app.install_model.ToolShedRepository.installation_status.ERROR ]:
+ bgcolor = app.install_model.ToolShedRepository.states.ERROR
+ elif tool_shed_repository.status in [ app.install_model.ToolShedRepository.installation_status.DEACTIVATED ]:
+ bgcolor = app.install_model.ToolShedRepository.states.WARNING
+ elif tool_shed_repository.status in [ app.install_model.ToolShedRepository.installation_status.INSTALLED ]:
if tool_shed_repository.repository_dependencies_being_installed:
- bgcolor = trans.install_model.ToolShedRepository.states.WARNING
- status_label = '%s, %s' % ( status_label, trans.install_model.ToolShedRepository.installation_status.INSTALLING_REPOSITORY_DEPENDENCIES )
+ bgcolor = app.install_model.ToolShedRepository.states.WARNING
+ status_label = '%s, %s' % ( status_label,
+ app.install_model.ToolShedRepository.installation_status.INSTALLING_REPOSITORY_DEPENDENCIES )
elif tool_shed_repository.missing_repository_dependencies:
- bgcolor = trans.install_model.ToolShedRepository.states.WARNING
+ bgcolor = app.install_model.ToolShedRepository.states.WARNING
status_label = '%s, missing repository dependencies' % status_label
elif tool_shed_repository.tool_dependencies_being_installed:
- bgcolor = trans.install_model.ToolShedRepository.states.WARNING
- status_label = '%s, %s' % (status_label, trans.install_model.ToolShedRepository.installation_status.INSTALLING_TOOL_DEPENDENCIES )
+ bgcolor = app.install_model.ToolShedRepository.states.WARNING
+ status_label = '%s, %s' % ( status_label,
+ app.install_model.ToolShedRepository.installation_status.INSTALLING_TOOL_DEPENDENCIES )
elif tool_shed_repository.missing_tool_dependencies:
- bgcolor = trans.install_model.ToolShedRepository.states.WARNING
+ bgcolor = app.install_model.ToolShedRepository.states.WARNING
status_label = '%s, missing tool dependencies' % status_label
else:
- bgcolor = trans.install_model.ToolShedRepository.states.OK
+ bgcolor = app.install_model.ToolShedRepository.states.OK
else:
- bgcolor = trans.install_model.ToolShedRepository.states.ERROR
+ bgcolor = app.install_model.ToolShedRepository.states.ERROR
else:
- bgcolor = trans.install_model.ToolShedRepository.states.WARNING
+ bgcolor = app.install_model.ToolShedRepository.states.WARNING
status_label = 'unknown status'
return '<div class="count-box state-color-%s">%s</div>' % ( bgcolor, status_label )
-def get_updated_changeset_revisions( trans, name, owner, changeset_revision ):
+def get_updated_changeset_revisions( app, name, owner, changeset_revision ):
"""
Return a string of comma-separated changeset revision hashes for all available updates to the received changeset
revision for the repository defined by the received name and owner.
"""
- repository = get_repository_by_name_and_owner( trans.app, name, owner )
- repo = hg_util.get_repo_for_repository( trans.app, repository=repository, repo_path=None, create=False )
+ repository = get_repository_by_name_and_owner( app, name, owner )
+ repo = hg_util.get_repo_for_repository( app, repository=repository, repo_path=None, create=False )
# Get the upper bound changeset revision.
upper_bound_changeset_revision = get_next_downloadable_changeset_revision( repository, repo, changeset_revision )
# Build the list of changeset revision hashes defining each available update up to, but excluding
@@ -1208,9 +1215,10 @@
text = common_util.tool_shed_get( app, tool_shed_url, url )
return text
-def get_user( trans, id ):
+def get_user( app, id ):
"""Get a user from the database by id."""
- return trans.sa_session.query( trans.model.User ).get( trans.security.decode_id( id ) )
+ sa_session = app.model.context.current
+ return sa_session.query( app.model.User ).get( app.security.decode_id( id ) )
def get_user_by_username( app, username ):
"""Get a user from the database by username."""
@@ -1325,7 +1333,6 @@
return True
return False
-
def is_tool_shed_client( app ):
""" The tool shed and clients to the tool (i.e. Galaxy) require a lot
of similar functionality in this file but with small differences. This
@@ -1334,14 +1341,13 @@
"""
return hasattr( app, "install_model" )
-
-def open_repository_files_folder( trans, folder_path ):
+def open_repository_files_folder( folder_path ):
"""
Return a list of dictionaries, each of which contains information for a file or directory contained
within a directory in a repository file hierarchy.
"""
try:
- files_list = get_repository_files( trans, folder_path )
+ files_list = get_repository_files( folder_path )
except OSError, e:
if str( e ).find( 'No such file or directory' ) >= 0:
# We have a repository with no contents.
diff -r 7d6f8afcac4d6c208e864c9b8eb35d14d98725cc -r 9a69f032d6d9a0d5ece25332db78a81b3a18d91c lib/tool_shed/util/tool_util.py
--- a/lib/tool_shed/util/tool_util.py
+++ b/lib/tool_shed/util/tool_util.py
@@ -526,20 +526,20 @@
app.install_model.ToolVersionAssociation.table.c.tool_id == tool_version.id ) ) \
.first()
-def get_version_lineage_for_tool( trans, repository_id, repository_metadata, guid ):
+def get_version_lineage_for_tool( app, repository_id, repository_metadata, guid ):
"""
Return the tool version lineage chain in descendant order for the received guid contained in the received
    repository_metadata.tool_versions.
"""
- repository = suc.get_repository_by_id( trans, repository_id )
- repo = hg_util.get_repo_for_repository( trans.app, repository=repository, repo_path=None, create=False )
+ repository = suc.get_repository_by_id( app, repository_id )
+ repo = hg_util.get_repo_for_repository( app, repository=repository, repo_path=None, create=False )
# Initialize the tool lineage
version_lineage = [ guid ]
# Get all ancestor guids of the received guid.
current_child_guid = guid
for changeset in hg_util.reversed_upper_bounded_changelog( repo, repository_metadata.changeset_revision ):
ctx = repo.changectx( changeset )
- rm = suc.get_repository_metadata_by_changeset_revision( trans.app, repository_id, str( ctx ) )
+ rm = suc.get_repository_metadata_by_changeset_revision( app, repository_id, str( ctx ) )
if rm:
parent_guid = rm.tool_versions.get( current_child_guid, None )
if parent_guid:
@@ -549,9 +549,9 @@
current_parent_guid = guid
for changeset in hg_util.reversed_lower_upper_bounded_changelog( repo,
repository_metadata.changeset_revision,
- repository.tip( trans.app ) ):
+ repository.tip( app ) ):
ctx = repo.changectx( changeset )
- rm = suc.get_repository_metadata_by_changeset_revision( trans.app, repository_id, str( ctx ) )
+ rm = suc.get_repository_metadata_by_changeset_revision( app, repository_id, str( ctx ) )
if rm:
tool_versions = rm.tool_versions
for child_guid, parent_guid in tool_versions.items():
@@ -853,12 +853,14 @@
def load_tool_from_changeset_revision( trans, repository_id, changeset_revision, tool_config_filename ):
"""
- Return a loaded tool whose tool config file name (e.g., filtering.xml) is the value of tool_config_filename. The value of changeset_revision
- is a valid (downloadable) changset revision. The tool config will be located in the repository manifest between the received valid changeset
- revision and the first changeset revision in the repository, searching backwards.
+ Return a loaded tool whose tool config file name (e.g., filtering.xml) is the value
+ of tool_config_filename. The value of changeset_revision is a valid (downloadable)
+    changeset revision. The tool config will be located in the repository manifest between
+ the received valid changeset revision and the first changeset revision in the repository,
+ searching backwards.
"""
original_tool_data_path = trans.app.config.tool_data_path
- repository = suc.get_repository_in_tool_shed( trans, repository_id )
+ repository = suc.get_repository_in_tool_shed( trans.app, repository_id )
repo_files_dir = repository.repo_path( trans.app )
repo = hg_util.get_repo_for_repository( trans.app, repository=None, repo_path=repo_files_dir, create=False )
message = ''
@@ -869,7 +871,11 @@
can_use_disk_file = can_use_tool_config_disk_file( trans, repository, repo, tool_config_filepath, changeset_revision )
if can_use_disk_file:
trans.app.config.tool_data_path = work_dir
- tool, valid, message, sample_files = handle_sample_files_and_load_tool_from_disk( trans, repo_files_dir, repository_id, tool_config_filepath, work_dir )
+ tool, valid, message, sample_files = handle_sample_files_and_load_tool_from_disk( trans,
+ repo_files_dir,
+ repository_id,
+ tool_config_filepath,
+ work_dir )
if tool is not None:
invalid_files_and_errors_tups = check_tool_input_params( trans.app,
repo_files_dir,
@@ -885,7 +891,12 @@
displaying_invalid_tool=True )
message = concat_messages( message, message2 )
else:
- tool, message, sample_files = handle_sample_files_and_load_tool_from_tmp_config( trans, repo, repository_id, changeset_revision, tool_config_filename, work_dir )
+ tool, message, sample_files = handle_sample_files_and_load_tool_from_tmp_config( trans,
+ repo,
+ repository_id,
+ changeset_revision,
+ tool_config_filename,
+ work_dir )
basic_util.remove_dir( work_dir )
trans.app.config.tool_data_path = original_tool_data_path
# Reset the tool_data_tables by loading the empty tool_data_table_conf.xml file.
diff -r 7d6f8afcac4d6c208e864c9b8eb35d14d98725cc -r 9a69f032d6d9a0d5ece25332db78a81b3a18d91c lib/tool_shed/util/workflow_util.py
--- a/lib/tool_shed/util/workflow_util.py
+++ b/lib/tool_shed/util/workflow_util.py
@@ -176,13 +176,13 @@
workflow_name = encoding_util.tool_shed_decode( workflow_name )
if trans.webapp.name == 'tool_shed':
# We're in the tool shed.
- repository_metadata = metadata_util.get_repository_metadata_by_id( trans, repository_metadata_id )
+ repository_metadata = metadata_util.get_repository_metadata_by_id( trans.app, repository_metadata_id )
repository_id = trans.security.encode_id( repository_metadata.repository_id )
changeset_revision = repository_metadata.changeset_revision
metadata = repository_metadata.metadata
else:
# We're in Galaxy.
- repository = suc.get_tool_shed_repository_by_id( trans, repository_id )
+ repository = suc.get_tool_shed_repository_by_id( trans.app, repository_id )
changeset_revision = repository.changeset_revision
metadata = repository.metadata
# metadata[ 'workflows' ] is a list of tuples where each contained tuple is
diff -r 7d6f8afcac4d6c208e864c9b8eb35d14d98725cc -r 9a69f032d6d9a0d5ece25332db78a81b3a18d91c templates/admin/tool_shed_repository/repair_repository.mako
--- a/templates/admin/tool_shed_repository/repair_repository.mako
+++ b/templates/admin/tool_shed_repository/repair_repository.mako
@@ -56,7 +56,7 @@
for name, repo_info_tuple in repo_info_dict.items():
description, repository_clone_url, changeset_revision, ctx_rev, repository_owner, repository_dependencies, tool_dependencies = repo_info_tuple
break
- status_label = get_tool_shed_repository_status_label( trans,
+ status_label = get_tool_shed_repository_status_label( trans.app,
tool_shed_repository=None,
name=name,
owner=repository_owner,
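
The recurring pattern in this changeset decouples the shared utility functions from the request-bound trans object: callers pass trans.app down, the functions obtain a SQLAlchemy session from app.model.context.current, and encoding and decoding of ids move to app.security. A minimal sketch of that pattern under those assumptions; Widget and get_widget_by_id are hypothetical names for illustration, not Galaxy classes:

def get_widget_by_id( app, encoded_id ):
    """Fetch a record using only the app object, with no per-request trans."""
    # The scoped session is reached through the app, as in the hunks above.
    sa_session = app.model.context.current
    # The id security helpers also live on the app object.
    decoded_id = app.security.decode_id( encoded_id )
    return sa_session.query( app.model.Widget ).get( decoded_id )

# A web controller still holds trans and simply passes trans.app down:
#     widget = get_widget_by_id( trans.app, kwd[ 'id' ] )

Presumably this keeps the utilities callable from contexts, such as installation handlers, where no web transaction exists.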
Repository URL: https://bitbucket.org/galaxy/galaxy-central/
commit/galaxy-central: greg: Add the ability to purge a "white ghost" tool shed repository from Galaxy.
by commits-noreply@bitbucket.org 12 Jun '14
1 new commit in galaxy-central:
https://bitbucket.org/galaxy/galaxy-central/commits/7d6f8afcac4d/
Changeset: 7d6f8afcac4d
User: greg
Date: 2014-06-12 20:16:00
Summary: Add the ability to purge a "white ghost" tool shed repository from Galaxy.
Affected #: 6 files
diff -r 08fe9352836ee4d18ac3d446c45d84e5a7db9611 -r 7d6f8afcac4d6c208e864c9b8eb35d14d98725cc lib/galaxy/model/tool_shed_install/__init__.py
--- a/lib/galaxy/model/tool_shed_install/__init__.py
+++ b/lib/galaxy/model/tool_shed_install/__init__.py
@@ -238,7 +238,7 @@
def is_deactivated_or_installed( self ):
return self.status in [ self.installation_status.DEACTIVATED,
self.installation_status.INSTALLED ]
-
+
@property
def is_installed( self ):
return self.status == self.installation_status.INSTALLED
@@ -250,6 +250,10 @@
return False
@property
+ def is_new( self ):
+ return self.status == self.installation_status.NEW
+
+ @property
def missing_repository_dependencies( self ):
"""Return the repository's repository dependencies that are not currently installed, and may not ever have been installed."""
missing_required_repositories = []
@@ -629,11 +633,11 @@
def to_dict( self, view='element' ):
rval = super( ToolVersion, self ).to_dict( view=view )
- rval['tool_name'] = self.tool_id
+ rval[ 'tool_name' ] = self.tool_id
for a in self.parent_tool_association:
- rval['parent_tool_id'] = a.parent_id
+ rval[ 'parent_tool_id' ] = a.parent_id
for a in self.child_tool_association:
- rval['child_tool_id'] = a.tool_id
+ rval[ 'child_tool_id' ] = a.tool_id
return rval
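
The new is_new property gives Galaxy a way to recognize a "white ghost": judging from the Purge grid operation added below, whose condition is lambda item: item.is_new, this appears to be a repository record that never progressed past the NEW installation status. A self-contained sketch of how such a property can gate an operation; InstallStatus here is a simplified stand-in for ToolShedRepository.installation_status, not the actual Galaxy model:

class InstallStatus:
    NEW = 'New'
    INSTALLED = 'Installed'

class ToolShedRepository( object ):
    def __init__( self, status ):
        self.status = status

    @property
    def is_new( self ):
        # A "white ghost" is assumed here to be a record stuck in the NEW state.
        return self.status == InstallStatus.NEW

# The grid can then offer "Purge" only for purgeable rows:
purge_condition = lambda item: item.is_new
assert purge_condition( ToolShedRepository( InstallStatus.NEW ) )
assert not purge_condition( ToolShedRepository( InstallStatus.INSTALLED ) )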
diff -r 08fe9352836ee4d18ac3d446c45d84e5a7db9611 -r 7d6f8afcac4d6c208e864c9b8eb35d14d98725cc lib/galaxy/webapps/galaxy/controllers/admin_toolshed.py
--- a/lib/galaxy/webapps/galaxy/controllers/admin_toolshed.py
+++ b/lib/galaxy/webapps/galaxy/controllers/admin_toolshed.py
@@ -45,7 +45,7 @@
def activate_repository( self, trans, **kwd ):
"""Activate a repository that was deactivated but not uninstalled."""
repository_id = kwd[ 'id' ]
- repository = suc.get_installed_tool_shed_repository( trans, repository_id )
+ repository = suc.get_installed_tool_shed_repository( trans.app, repository_id )
try:
common_install_util.activate_repository( trans.app, repository )
except Exception, e:
@@ -71,7 +71,7 @@
def browse_repository( self, trans, **kwd ):
message = kwd.get( 'message', '' )
status = kwd.get( 'status', 'done' )
- repository = suc.get_installed_tool_shed_repository( trans, kwd[ 'id' ] )
+ repository = suc.get_installed_tool_shed_repository( trans.app, kwd[ 'id' ] )
return trans.fill_template( '/admin/tool_shed_repository/browse_repository.mako',
repository=repository,
message=message,
@@ -99,8 +99,13 @@
return trans.response.send_redirect( web.url_for( controller='admin_toolshed',
action='reset_to_install',
**kwd ) )
+ if operation == "purge":
+ kwd[ 'purge_repository' ] = True
+ return trans.response.send_redirect( web.url_for( controller='admin_toolshed',
+ action='purge_repository',
+ **kwd ) )
if operation == "activate or reinstall":
- repository = suc.get_installed_tool_shed_repository( trans, kwd[ 'id' ] )
+ repository = suc.get_installed_tool_shed_repository( trans.app, kwd[ 'id' ] )
if repository.uninstalled:
# Since we're reinstalling the repository we need to find the latest changeset revision to which it can
# be updated so that we can reset the metadata if necessary. This will ensure that information about
@@ -134,7 +139,7 @@
**kwd ) )
else:
message = "Unable to get latest revision for repository <b>%s</b> from " % str( repository.name )
- message += "the Tool Shed, so repository reinstallation is not possible at this time."
+ message += "the Tool Shed, so repository re-installation is not possible at this time."
status = "error"
return trans.response.send_redirect( web.url_for( controller='admin_toolshed',
action='browse_repositories',
@@ -152,6 +157,10 @@
return trans.response.send_redirect( web.url_for( controller='admin_toolshed',
action='install_latest_repository_revision',
**kwd ) )
+ if operation == 'install':
+ # The user is attempting to install a white ghost.
+ kwd[ 'status' ] = 'error'
+ kwd[ 'message' ] = 'It seems you are attempting to install a "white ghost", which should instead be purged.'
return self.installed_repository_grid( trans, **kwd )
@web.expose
@@ -196,7 +205,7 @@
def check_for_updates( self, trans, **kwd ):
"""Send a request to the relevant tool shed to see if there are any updates."""
repository_id = kwd.get( 'id', None )
- repository = suc.get_installed_tool_shed_repository( trans, repository_id )
+ repository = suc.get_installed_tool_shed_repository( trans.app, repository_id )
tool_shed_url = common_util.get_tool_shed_url_from_tool_shed_registry( trans.app, str( repository.tool_shed ) )
params = '?galaxy_url=%s&name=%s&owner=%s&changeset_revision=%s' % \
( web.url_for( '/', qualified=True ),
@@ -220,7 +229,7 @@
status = kwd.get( 'status', 'done' )
remove_from_disk = kwd.get( 'remove_from_disk', '' )
remove_from_disk_checked = CheckboxField.is_checked( remove_from_disk )
- tool_shed_repository = suc.get_installed_tool_shed_repository( trans, kwd[ 'id' ] )
+ tool_shed_repository = suc.get_installed_tool_shed_repository( trans.app, kwd[ 'id' ] )
shed_tool_conf, tool_path, relative_install_dir = suc.get_tool_panel_config_tool_path_install_dir( trans.app, tool_shed_repository )
if relative_install_dir:
if tool_path:
@@ -375,7 +384,7 @@
of the installed tool shed repository in Galaxy. We need it so that we can derive the tool shed from which
it was installed.
"""
- repository = suc.get_installed_tool_shed_repository( trans, repository_id )
+ repository = suc.get_installed_tool_shed_repository( trans.app, repository_id )
tool_shed_url = common_util.get_tool_shed_url_from_tool_shed_registry( trans.app, str( repository.tool_shed ) )
if tool_shed_url is None or repository_name is None or repository_owner is None or changeset_revision is None:
message = "Unable to retrieve tool dependencies from the Tool Shed because one or more of the following required "
@@ -400,7 +409,7 @@
Send a request to the appropriate tool shed to retrieve the dictionary of information required to reinstall
an updated revision of an uninstalled tool shed repository.
"""
- repository = suc.get_installed_tool_shed_repository( trans, repository_id )
+ repository = suc.get_installed_tool_shed_repository( trans.app, repository_id )
tool_shed_url = common_util.get_tool_shed_url_from_tool_shed_registry( trans.app, str( repository.tool_shed ) )
if tool_shed_url is None or repository_name is None or repository_owner is None or changeset_revision is None:
message = "Unable to retrieve updated repository information from the Tool Shed because one or more of the following "
@@ -495,7 +504,7 @@
status = kwd.get( 'status', 'done' )
repository_id = kwd.get( 'id', None )
if repository_id is not None:
- repository = suc.get_installed_tool_shed_repository( trans, repository_id )
+ repository = suc.get_installed_tool_shed_repository( trans.app, repository_id )
if repository is not None:
tool_shed_url = common_util.get_tool_shed_url_from_tool_shed_registry( trans.app, str( repository.tool_shed ) )
name = str( repository.name )
@@ -553,7 +562,7 @@
dependencies are included in the updated revision.
"""
updating_repository_id = kwd.get( 'updating_repository_id', None )
- repository = suc.get_installed_tool_shed_repository( trans, updating_repository_id )
+ repository = suc.get_installed_tool_shed_repository( trans.app, updating_repository_id )
# All received dependencies need to be installed - confirmed by the caller.
encoded_tool_dependencies_dict = kwd.get( 'encoded_tool_dependencies_dict', None )
if encoded_tool_dependencies_dict is not None:
@@ -730,7 +739,7 @@
if repository_id is None:
return trans.show_error_message( 'Missing required encoded repository id.' )
operation = kwd.get( 'operation', None )
- repository = suc.get_installed_tool_shed_repository( trans, repository_id )
+ repository = suc.get_installed_tool_shed_repository( trans.app, repository_id )
if repository is None:
return trans.show_error_message( 'Invalid repository specified.' )
tool_shed_url = common_util.get_tool_shed_url_from_tool_shed_registry( trans.app, str( repository.tool_shed ) )
@@ -1254,6 +1263,32 @@
@web.expose
@web.require_admin
+ def purge_repository( self, trans, **kwd ):
+ """Purge a "white ghost" repository from the database."""
+ repository_id = kwd.get( 'id', None )
+ new_kwd = {}
+ if repository_id is not None:
+ repository = suc.get_installed_tool_shed_repository( trans.app, repository_id )
+ if repository:
+ if kwd.get( 'purge_repository', False ):
+ purge_status, purge_message = repository_util.purge_repository( trans.app, repository )
+ if purge_status == 'ok':
+ new_kwd[ 'status' ] = "done"
+ else:
+ new_kwd[ 'status' ] = 'error'
+ new_kwd[ 'message' ] = purge_message
+ else:
+ new_kwd[ 'status' ] = 'error'
+ new_kwd[ 'message' ] = 'Cannot locate the database record for the repository with encoded id %s.' % str( repository_id )
+ else:
+ new_kwd[ 'status' ] = 'error'
+ new_kwd[ 'message' ] = 'Invalid repository id value "None" received for repository to be purged.'
+ return trans.response.send_redirect( web.url_for( controller='admin_toolshed',
+ action='browse_repositories',
+ **new_kwd ) )
+
+ @web.expose
+ @web.require_admin
def reinstall_repository( self, trans, **kwd ):
"""
Reinstall a tool shed repository that has been previously uninstalled, making sure to handle all repository
@@ -1262,7 +1297,7 @@
message = kwd.get( 'message', '' )
status = kwd.get( 'status', 'done' )
repository_id = kwd[ 'id' ]
- tool_shed_repository = suc.get_installed_tool_shed_repository( trans, repository_id )
+ tool_shed_repository = suc.get_installed_tool_shed_repository( trans.app, repository_id )
no_changes = kwd.get( 'no_changes', '' )
no_changes_checked = CheckboxField.is_checked( no_changes )
install_repository_dependencies = CheckboxField.is_checked( kwd.get( 'install_repository_dependencies', '' ) )
@@ -1407,7 +1442,7 @@
action='browse_repositories',
message=message,
status=status ) )
- tool_shed_repository = suc.get_installed_tool_shed_repository( trans, repository_id )
+ tool_shed_repository = suc.get_installed_tool_shed_repository( trans.app, repository_id )
if kwd.get( 'repair_repository_button', False ):
encoded_repair_dict = kwd.get( 'repair_dict', None )
if encoded_repair_dict:
@@ -1424,7 +1459,7 @@
repository = trans.install_model.context.query( trans.install_model.ToolShedRepository ).get( trans.security.decode_id( tsr_id ) )
repositories_for_repair.append( repository )
return self.repair_tool_shed_repositories( trans, repositories_for_repair, ordered_repo_info_dicts )
- tool_shed_repository = suc.get_installed_tool_shed_repository( trans, repository_id )
+ tool_shed_repository = suc.get_installed_tool_shed_repository( trans.app, repository_id )
repair_dict = repository_util.get_repair_dict( trans, tool_shed_repository )
encoded_repair_dict = encoding_util.tool_shed_encode( repair_dict )
ordered_tsr_ids = repair_dict.get( 'ordered_tsr_ids', [] )
@@ -1486,7 +1521,7 @@
repository_id = kwd.get( 'id', None )
latest_changeset_revision = kwd.get( 'latest_changeset_revision', None )
latest_ctx_rev = kwd.get( 'latest_ctx_rev', None )
- tool_shed_repository = suc.get_installed_tool_shed_repository( trans, repository_id )
+ tool_shed_repository = suc.get_installed_tool_shed_repository( trans.app, repository_id )
repository_clone_url = common_util.generate_clone_url_for_installed_repository( trans.app, tool_shed_repository )
metadata = tool_shed_repository.metadata
tool_shed_url = common_util.get_tool_shed_url_from_tool_shed_registry( trans.app, str( tool_shed_repository.tool_shed ) )
@@ -1672,7 +1707,7 @@
@web.require_admin
def reset_repository_metadata( self, trans, id ):
"""Reset all metadata on a single installed tool shed repository."""
- repository = suc.get_installed_tool_shed_repository( trans, id )
+ repository = suc.get_installed_tool_shed_repository( trans.app, id )
repository_clone_url = common_util.generate_clone_url_for_installed_repository( trans.app, repository )
tool_path, relative_install_dir = repository.get_tool_relative_path( trans.app )
if relative_install_dir:
@@ -1711,7 +1746,7 @@
@web.require_admin
def reset_to_install( self, trans, **kwd ):
"""An error occurred while cloning the repository, so reset everything necessary to enable another attempt."""
- repository = suc.get_installed_tool_shed_repository( trans, kwd[ 'id' ] )
+ repository = suc.get_installed_tool_shed_repository( trans.app, kwd[ 'id' ] )
if kwd.get( 'reset_repository', False ):
repository_util.set_repository_attributes( trans.app,
repository,
@@ -1721,7 +1756,7 @@
uninstalled=False,
remove_from_disk=True )
new_kwd = {}
- new_kwd[ 'message' ] = "You can now attempt to install the repository named <b>%s</b> again." % repository.name
+ new_kwd[ 'message' ] = "You can now attempt to install the repository named <b>%s</b> again." % str( repository.name )
new_kwd[ 'status' ] = "done"
return trans.response.send_redirect( web.url_for( controller='admin_toolshed',
action='browse_repositories',
@@ -1737,7 +1772,7 @@
Get the tool_versions from the tool shed for each tool in the installed revision of a selected tool shed
repository and update the metadata for the repository's revision in the Galaxy database.
"""
- repository = suc.get_installed_tool_shed_repository( trans, kwd[ 'id' ] )
+ repository = suc.get_installed_tool_shed_repository( trans.app, kwd[ 'id' ] )
tool_shed_url = common_util.get_tool_shed_url_from_tool_shed_registry( trans.app, str( repository.tool_shed ) )
params = '?name=%s&owner=%s&changeset_revision=%s' % ( str( repository.name ),
str( repository.owner ),
@@ -2052,7 +2087,7 @@
def view_tool_metadata( self, trans, repository_id, tool_id, **kwd ):
message = kwd.get( 'message', '' )
status = kwd.get( 'status', 'done' )
- repository = suc.get_installed_tool_shed_repository( trans, repository_id )
+ repository = suc.get_installed_tool_shed_repository( trans.app, repository_id )
repository_metadata = repository.metadata
shed_config_dict = repository.get_shed_config_dict( trans.app )
tool_metadata = {}
diff -r 08fe9352836ee4d18ac3d446c45d84e5a7db9611 -r 7d6f8afcac4d6c208e864c9b8eb35d14d98725cc lib/tool_shed/galaxy_install/grids/admin_toolshed_grids.py
--- a/lib/tool_shed/galaxy_install/grids/admin_toolshed_grids.py
+++ b/lib/tool_shed/galaxy_install/grids/admin_toolshed_grids.py
@@ -214,7 +214,14 @@
target=None,
url_args=dict( controller='admin_toolshed',
action='browse_repositories',
- operation='activate or reinstall' ) ) ]
+ operation='activate or reinstall' ) ),
+ grids.GridOperation( label="Purge",
+ condition=( lambda item: item.is_new ),
+ allow_multiple=False,
+ target=None,
+ url_args=dict( controller='admin_toolshed',
+ action='browse_repositories',
+ operation='purge' ) ) ]
standard_filters = []
default_filter = dict( deleted="False" )
num_rows_per_page = 50
diff -r 08fe9352836ee4d18ac3d446c45d84e5a7db9611 -r 7d6f8afcac4d6c208e864c9b8eb35d14d98725cc lib/tool_shed/galaxy_install/repository_util.py
--- a/lib/tool_shed/galaxy_install/repository_util.py
+++ b/lib/tool_shed/galaxy_install/repository_util.py
@@ -976,6 +976,117 @@
"""Pull changes from a remote repository to a local one."""
commands.pull( hg_util.get_configured_ui(), repo, source=repository_clone_url, rev=[ ctx_rev ] )
+def purge_repository( app, repository ):
+ """Purge a repository with status New (a white ghost) from the database."""
+ sa_session = app.model.context.current
+ status = 'ok'
+ message = ''
+ purged_tool_versions = 0
+ purged_tool_dependencies = 0
+ purged_required_repositories = 0
+ purged_orphan_repository_repository_dependency_association_records = 0
+ purged_orphan_repository_dependency_records = 0
+ if repository.is_new:
+ # Purge this repository's associated tool versions.
+ if repository.tool_versions:
+ for tool_version in repository.tool_versions:
+ try:
+ tool_version_association = tool_version.parent_tool_association
+ sa_session.delete( tool_version_association )
+ sa_session.flush()
+ except Exception, e:
+ status = 'error'
+ message = 'Error attempting to purge tool_versions for the repository named %s with status %s: %s.' % \
+ ( str( repository.name ), str( repository.status ), str( e ) )
+ return status, message
+ try:
+ tool_version_association = tool_version.child_tool_association
+ sa_session.delete( tool_version_association )
+ sa_session.flush()
+ except Exception, e:
+ status = 'error'
+ message = 'Error attempting to purge tool_versions for the repository named %s with status %s: %s.' % \
+ ( str( repository.name ), str( repository.status ), str( e ) )
+ return status, message
+ try:
+ sa_session.delete( tool_version )
+ sa_session.flush()
+ purged_tool_versions += 1
+ except Exception, e:
+ status = 'error'
+ message = 'Error attempting to purge tool_versions for the repository named %s with status %s: %s.' % \
+ ( str( repository.name ), str( repository.status ), str( e ) )
+ return status, message
+ # Purge this repository's associated tool dependencies.
+ if repository.tool_dependencies:
+ for tool_dependency in repository.tool_dependencies:
+ try:
+ sa_session.delete( tool_dependency )
+ sa_session.flush()
+ purged_tool_dependencies += 1
+ except Exception, e:
+ status = 'error'
+ message = 'Error attempting to purge tool_dependencies for the repository named %s with status %s: %s.' % \
+ ( str( repository.name ), str( repository.status ), str( e ) )
+ return status, message
+ # Purge this repository's associated required repositories.
+ if repository.required_repositories:
+ for rrda in repository.required_repositories:
+ try:
+ sa_session.delete( rrda )
+ sa_session.flush()
+ purged_required_repositories += 1
+ except Exception, e:
+ status = 'error'
+ message = 'Error attempting to purge required_repositories for the repository named %s with status %s: %s.' % \
+ ( str( repository.name ), str( repository.status ), str( e ) )
+ return status, message
+ # Purge any "orphan" repository_dependency records associated with the repository, but not with any
+ # repository_repository_dependency_association records.
+ for orphan_repository_dependency in \
+ sa_session.query( app.install_model.RepositoryDependency ) \
+ .filter( app.install_model.RepositoryDependency.table.c.tool_shed_repository_id == repository.id ):
+ # Purge any repository_repository_dependency_association records whose repository_dependency_id is
+ # the id of the orphan repository_dependency record.
+ for orphan_rrda in \
+ sa_session.query( app.install_model.RepositoryRepositoryDependencyAssociation ) \
+ .filter( app.install_model.RepositoryRepositoryDependencyAssociation.table.c.repository_dependency_id == orphan_repository_dependency.id ):
+ try:
+ sa_session.delete( orphan_rrda )
+ sa_session.flush()
+ purged_orphan_repository_repository_dependency_association_records += 1
+ except Exception, e:
+ status = 'error'
+ message = 'Error attempting to purge repository_repository_dependency_association records associated with '
+ message += 'an orphan repository_dependency record for the repository named %s with status %s: %s.' % \
+ ( str( repository.name ), str( repository.status ), str( e ) )
+ return status, message
+ try:
+ sa_session.delete( orphan_repository_dependency )
+ sa_session.flush()
+ purged_orphan_repository_dependency_records += 1
+ except Exception, e:
+ status = 'error'
+ message = 'Error attempting to purge orphan repository_dependency records for the repository named %s with status %s: %s.' % \
+ ( str( repository.name ), str( repository.status ), str( e ) )
+ return status, message
+ # Purge the repository.
+ sa_session.delete( repository )
+ sa_session.flush()
+ message = 'The repository named <b>%s</b> with status <b>%s</b> has been purged.<br/>' % \
+ ( str( repository.name ), str( repository.status ) )
+ message += 'Total associated tool_version records purged: %d<br/>' % purged_tool_versions
+ message += 'Total associated tool_dependency records purged: %d<br/>' % purged_tool_dependencies
+ message += 'Total associated repository_repository_dependency_association records purged: %d<br/>' % purged_required_repositories
+ message += 'Total associated orphan repository_repository_dependency_association records purged: %d<br/>' % \
+ purged_orphan_repository_repository_dependency_association_records
+ message += 'Total associated orphan repository_dependency records purged: %d<br/>' % purged_orphan_repository_dependency_records
+ else:
+ status = 'error'
+ message = 'A repository must have the status <b>New</b> in order to be purged. This repository has '
+ message += 'the status %s.' % str( repository.status )
+ return status, message
+
def repair_tool_shed_repository( trans, repository, repo_info_dict ):
def add_repair_dict_entry( repository_name, error_message ):
diff -r 08fe9352836ee4d18ac3d446c45d84e5a7db9611 -r 7d6f8afcac4d6c208e864c9b8eb35d14d98725cc lib/tool_shed/util/metadata_util.py
--- a/lib/tool_shed/util/metadata_util.py
+++ b/lib/tool_shed/util/metadata_util.py
@@ -1696,7 +1696,7 @@
"""Reset all metadata on a single tool shed repository installed into a Galaxy instance."""
invalid_file_tups = []
metadata_dict = {}
- repository = suc.get_installed_tool_shed_repository( trans, id )
+ repository = suc.get_installed_tool_shed_repository( trans.app, id )
repository_clone_url = common_util.generate_clone_url_for_installed_repository( trans.app, repository )
tool_path, relative_install_dir = repository.get_tool_relative_path( trans.app )
if relative_install_dir:
@@ -1886,7 +1886,7 @@
invalid_file_tups, metadata_dict = reset_all_metadata_on_repository_in_tool_shed( trans, repository_id )
else:
# We're in Galaxy.
- repository = suc.get_installed_tool_shed_repository( trans, repository_id )
+ repository = suc.get_installed_tool_shed_repository( trans.app, repository_id )
owner = str( repository.owner )
invalid_file_tups, metadata_dict = reset_all_metadata_on_installed_repository( trans, repository_id )
if invalid_file_tups:
diff -r 08fe9352836ee4d18ac3d446c45d84e5a7db9611 -r 7d6f8afcac4d6c208e864c9b8eb35d14d98725cc lib/tool_shed/util/shed_util_common.py
--- a/lib/tool_shed/util/shed_util_common.py
+++ b/lib/tool_shed/util/shed_util_common.py
@@ -491,9 +491,9 @@
return ','.join( installing_repository_ids )
return installing_repository_ids
-def get_installed_tool_shed_repository( trans, id ):
+def get_installed_tool_shed_repository( app, id ):
"""Get a tool shed repository record from the Galaxy database defined by the id."""
- return trans.install_model.context.query( trans.install_model.ToolShedRepository ).get( trans.security.decode_id( id ) )
+ return app.install_model.context.query( app.install_model.ToolShedRepository ).get( app.security.decode_id( id ) )
def get_latest_changeset_revision( app, repository, repo ):
repository_tip = repository.tip( app )
Repository URL: https://bitbucket.org/galaxy/galaxy-central/
--
This is a commit notification from bitbucket.org. You are receiving
this because you have the service enabled, addressing the recipient of
this email.
commit/galaxy-central: davebgx: Merge fix for dataset stdio.
by commits-noreply@bitbucket.org 12 Jun '14
1 new commit in galaxy-central:
https://bitbucket.org/galaxy/galaxy-central/commits/08fe9352836e/
Changeset: 08fe9352836e
User: davebgx
Date: 2014-06-12 19:59:49
Summary: Merge fix for dataset stdio.
Affected #: 1 file
diff -r 646b5877871b3b15c6fc6a9ecf111fceb23ea270 -r 08fe9352836ee4d18ac3d446c45d84e5a7db9611 test/base/interactor.py
--- a/test/base/interactor.py
+++ b/test/base/interactor.py
@@ -386,8 +386,7 @@
self.twill_test_case.verify_dataset_correctness( outfile, hid=hid, attributes=attributes, shed_tool_id=shed_tool_id, maxseconds=maxseconds )
def get_job_stream( self, history_id, output_data, stream ):
- encoded_id = self.twill_test_case.security.encode_id( output_data.get( 'id' ) )
- return self.twill_test_case._get_job_stream_output( encoded_id, stream=stream, format=False )
+ return self.twill_test_case._get_job_stream_output( output_data.get( 'id' ), stream=stream, format=False )
def stage_data_async( self, test_data, history, shed_tool_id, async=True ):
name = test_data.get( 'name', None )
Repository URL: https://bitbucket.org/galaxy/galaxy-central/
--
This is a commit notification from bitbucket.org. You are receiving
this because you have the service enabled, addressing the recipient of
this email.
commit/galaxy-central: davebgx: Revert change that ended up breaking display of job stdio in functional tests.
by commits-noreply@bitbucket.org 12 Jun '14
1 new commit in galaxy-central:
https://bitbucket.org/galaxy/galaxy-central/commits/9b78595ec114/
Changeset: 9b78595ec114
Branch: stable
User: davebgx
Date: 2014-06-12 19:40:04
Summary: Revert change that ended up breaking display of job stdio in functional tests.
Affected #: 1 file
diff -r d6b1ea3a6a9ed2e7c3fa41518c5a690ed49c1265 -r 9b78595ec1142d75930dbfffd8baeaa689cb4c6c test/base/interactor.py
--- a/test/base/interactor.py
+++ b/test/base/interactor.py
@@ -386,8 +386,7 @@
self.twill_test_case.verify_dataset_correctness( outfile, hid=hid, attributes=attributes, shed_tool_id=shed_tool_id, maxseconds=maxseconds )
def get_job_stream( self, history_id, output_data, stream ):
- encoded_id = self.twill_test_case.security.encode_id( output_data.get( 'id' ) )
- return self.twill_test_case._get_job_stream_output( encoded_id, stream=stream, format=False )
+ return self.twill_test_case._get_job_stream_output( output_data.get( 'id' ), stream=stream, format=False )
def stage_data_async( self, test_data, history, shed_tool_id, async=True ):
name = test_data.get( 'name', None )
Repository URL: https://bitbucket.org/galaxy/galaxy-central/
--
This is a commit notification from bitbucket.org. You are receiving
this because you have the service enabled, addressing the recipient of
this email.
commit/galaxy-central: dannon: Merged in jmchilton/galaxy-central-fork-1 (pull request #401)
by commits-noreply@bitbucket.org 12 Jun '14
1 new commit in galaxy-central:
https://bitbucket.org/galaxy/galaxy-central/commits/646b5877871b/
Changeset: 646b5877871b
User: dannon
Date: 2014-06-12 17:09:04
Summary: Merged in jmchilton/galaxy-central-fork-1 (pull request #401)
Allow tools and deployers to specify optional Docker-based dependency resolution.
Affected #: 10 files
diff -r a19ef9da5e470d14a31ad8385f14168a14556224 -r 646b5877871b3b15c6fc6a9ecf111fceb23ea270 job_conf.xml.sample_advanced
--- a/job_conf.xml.sample_advanced
+++ b/job_conf.xml.sample_advanced
@@ -97,6 +97,90 @@
elements. This empty block will simply disable job metrics for the
corresponding destination. --></destination>
+ <destination id="docker_local" runner="local">
+ <param id="docker_enabled">true</param>
+ <!-- docker_volumes can be used to configure volumes to expose to docker.
+ For added isolation append :ro to a path to mount it read-only.
+ Galaxy will attempt to infer a reasonable set of defaults for which
+ volumes should be exposed (and how) based on Galaxy's settings and the
+ destination - but be sure to add any library paths or data indices
+ that may be needed read-only.
+ -->
+ <!--
+ <param id="docker_volumes">$defaults,/mnt/galaxyData/libraries:ro,/mnt/galaxyData/indices:ro</param>
+ -->
+ <!-- For a stock Galaxy instance and traditional job runner $defaults will
+ expand out as:
+
+ $galaxy_root:ro,$tool_directory:ro,$working_directory:rw,$default_file_path:rw
+
+ This assumes most of what is needed is available under Galaxy's root directory,
+ the tool directory, and Galaxy's file_path (if using the object store
+ creatively you will definitely need to expand defaults).
+
+ This configuration allows any docker instance to write to any Galaxy
+ file - for greater isolation set outputs_to_working_directory in
+ universe_wsgi.ini. This will cause $defaults to allow writing to much
+ less. It will then expand as follows:
+
+ $galaxy_root:ro,$tool_directory:ro,$working_directory:rw,$default_file_path:ro
+
+ If using the LWR, defaults will be even further restricted because the
+ LWR will (by default) stage all needed inputs into the job's job_directory
+ (so there is no need to allow the docker container to read all the
+ files - let alone write over them). Defaults in this case become:
+
+ $job_directory:ro,$tool_directory:ro,$job_directory/outputs:rw,$working_directory:rw
+
+ Python string.Template is used to expand volumes; the values $defaults,
+ $galaxy_root, $default_file_path, $tool_directory, and $working_directory
+ are available to all jobs, and $job_directory is also available for
+ LWR jobs.
+ -->
+ <!-- Control the memory allocatable by the docker container with the following option:
+ -->
+ <!-- <param id="docker_memory">24G</param> -->
+ <!-- By default Docker will need to be runnable by Galaxy using
+ password-less sudo - this can be configured by adding the
+ following line to the sudoers file of all compute nodes
+ with docker enabled:
+
+ galaxy ALL = (root) NOPASSWD: SETENV: /usr/bin/docker
+
+ The following option can be set to false to disable sudo (docker
+ must likewise be configured to allow this).
+ -->
+ <!-- <param id="docker_sudo">false</param> -->
+ <!-- The following option can be used to tweak the sudo command used by
+ default. -->
+ <!-- <param id="docker_sudo_cmd">/usr/bin/sudo --extra_param</param> -->
+ <!-- By default, the docker container will not have any networking
+ enabled. Bridged networking can be enabled by uncommenting the next option
+ http://docs.docker.io/reference/run/#network-settings
+ -->
+ <!-- <param id="docker_net">bridge</param> -->
+ <!-- The following option can be used to tweak the docker command. -->
+ <!-- <param id="docker_cmd">/usr/local/custom_docker/docker</param> -->
+ <!-- The following can be used to connect to the docker server in
+ different ways (translated as the -H argument to the docker client). -->
+ <!-- <param id="docker_host">unix:///var/run/docker.sock</param> -->
+ <!-- <param id="docker_host">:5555</param> -->
+ <!-- <param id="docker_host">tcp://127.0.0.1:4243</param> -->
+
+ <!-- If the deployer wants to use docker for isolation, but does not
+ trust the tool's specified container - a destination-wide override
+ can be set. This will cause all jobs on this destination to use
+ that docker image. -->
+ <!-- <param id="docker_container_id_override">busybox:ubuntu-14.04</param> -->
+
+ <!-- Likewise, if the deployer wants to use docker for isolation and
+ does trust the tool's specified container - but also wants tools not
+ configured to run in a container to fall back to one - the following
+ option provides that fallback. -->
+ <!-- <param id="docker_default_container_id">busybox:ubuntu-14.04</param> -->
+
+ </destination><destination id="pbs" runner="pbs" tags="mycluster"/><destination id="pbs_longjobs" runner="pbs" tags="mycluster,longjobs"><!-- Define parameters that are native to the job runner plugin. -->
diff -r a19ef9da5e470d14a31ad8385f14168a14556224 -r 646b5877871b3b15c6fc6a9ecf111fceb23ea270 lib/galaxy/config.py
--- a/lib/galaxy/config.py
+++ b/lib/galaxy/config.py
@@ -582,6 +582,16 @@
import galaxy.tools.search
self.toolbox_search = galaxy.tools.search.ToolBoxSearch( self.toolbox )
+ from galaxy.tools.deps import containers
+ galaxy_root_dir = os.path.abspath(self.config.root)
+ file_path = os.path.abspath(getattr(self.config, "file_path"))
+ app_info = containers.AppInfo(
+ galaxy_root_dir,
+ default_file_path=file_path,
+ outputs_to_working_directory=self.config.outputs_to_working_directory
+ )
+ self.container_finder = galaxy.tools.deps.containers.ContainerFinder(app_info)
+
def _configure_tool_data_tables( self, from_shed_config ):
from galaxy.tools.data import ToolDataTableManager
diff -r a19ef9da5e470d14a31ad8385f14168a14556224 -r 646b5877871b3b15c6fc6a9ecf111fceb23ea270 lib/galaxy/jobs/command_factory.py
--- a/lib/galaxy/jobs/command_factory.py
+++ b/lib/galaxy/jobs/command_factory.py
@@ -1,4 +1,6 @@
from os import getcwd
+from os import chmod
+from os.path import join
from os.path import abspath
CAPTURE_RETURN_CODE = "return_code=$?"
@@ -8,7 +10,14 @@
log = getLogger( __name__ )
-def build_command( runner, job_wrapper, include_metadata=False, include_work_dir_outputs=True, remote_command_params={} ):
+def build_command(
+ runner,
+ job_wrapper,
+ container=None,
+ include_metadata=False,
+ include_work_dir_outputs=True,
+ remote_command_params={}
+):
"""
Compose the sequence of commands necessary to execute a job. This will
currently include:
@@ -29,7 +38,35 @@
__handle_version_command(commands_builder, job_wrapper)
__handle_task_splitting(commands_builder, job_wrapper)
- __handle_dependency_resolution(commands_builder, job_wrapper, remote_command_params)
+
+ # One could imagine also allowing dependencies inside of the container but
+ # that is too sophisticated for a first crack at this - build your
+ # containers ready to go!
+ if not container:
+ __handle_dependency_resolution(commands_builder, job_wrapper, remote_command_params)
+
+ if container:
+ # Stop now and build command before handling metadata and copying
+ # working directory files back. These should always happen outside
+ # of docker container - no security implications when generating
+ # metadata and means no need for Galaxy to be available to container
+ # and not copying workdir outputs back means one can be more restrictive
+ # of where container can write to in some circumstances.
+
+ local_container_script = join( job_wrapper.working_directory, "container.sh" )
+ fh = file( local_container_script, "w" )
+ fh.write( "#!/bin/sh\n%s" % commands_builder.build() )
+ fh.close()
+ chmod( local_container_script, 0755 )
+
+ compute_container_script = local_container_script
+ if 'working_directory' in remote_command_params:
+ compute_container_script = "/bin/sh %s" % join(remote_command_params['working_directory'], "container.sh")
+
+ run_in_container_command = container.containerize_command(
+ compute_container_script
+ )
+ commands_builder = CommandsBuilder( run_in_container_command )
if include_work_dir_outputs:
__handle_work_dir_outputs(commands_builder, job_wrapper, runner, remote_command_params)
diff -r a19ef9da5e470d14a31ad8385f14168a14556224 -r 646b5877871b3b15c6fc6a9ecf111fceb23ea270 lib/galaxy/jobs/runners/__init__.py
--- a/lib/galaxy/jobs/runners/__init__.py
+++ b/lib/galaxy/jobs/runners/__init__.py
@@ -177,7 +177,16 @@
raise NotImplementedError()
def build_command_line( self, job_wrapper, include_metadata=False, include_work_dir_outputs=True ):
- return build_command( self, job_wrapper, include_metadata=include_metadata, include_work_dir_outputs=include_work_dir_outputs )
+ # TODO: Eliminate extra kwds no longer used (since LWR skips
+ # abstraction and calls build_command directly).
+ container = self._find_container( job_wrapper )
+ return build_command(
+ self,
+ job_wrapper,
+ include_metadata=include_metadata,
+ include_work_dir_outputs=include_work_dir_outputs,
+ container=container
+ )
def get_work_dir_outputs( self, job_wrapper, job_working_directory=None ):
"""
@@ -276,6 +285,31 @@
if ajs.job_wrapper.get_state() != model.Job.states.DELETED:
self.work_queue.put( ( self.finish_job, ajs ) )
+ def _find_container(
+ self,
+ job_wrapper,
+ compute_working_directory=None,
+ compute_tool_directory=None,
+ compute_job_directory=None
+ ):
+ if not compute_working_directory:
+ compute_working_directory = job_wrapper.working_directory
+
+ if not compute_tool_directory:
+ compute_tool_directory = job_wrapper.tool.tool_dir
+
+ tool = job_wrapper.tool
+ from galaxy.tools.deps import containers
+ tool_info = containers.ToolInfo(tool.containers, tool.requirements)
+ job_info = containers.JobInfo(compute_working_directory, compute_tool_directory, compute_job_directory)
+
+ destination_info = job_wrapper.job_destination.params
+ return self.app.container_finder.find_container(
+ tool_info,
+ destination_info,
+ job_info
+ )
+
class JobState( object ):
"""
diff -r a19ef9da5e470d14a31ad8385f14168a14556224 -r 646b5877871b3b15c6fc6a9ecf111fceb23ea270 lib/galaxy/jobs/runners/lwr.py
--- a/lib/galaxy/jobs/runners/lwr.py
+++ b/lib/galaxy/jobs/runners/lwr.py
@@ -233,9 +233,21 @@
metadata_kwds=metadata_kwds,
dependency_resolution=dependency_resolution,
)
+ remote_working_directory = remote_job_config['working_directory']
+ # TODO: The following definitions work for the LWR (and always have) but
+ # should be calculated at some other level.
+ remote_job_directory = os.path.abspath(os.path.join(remote_working_directory, os.path.pardir))
+ remote_tool_directory = os.path.abspath(os.path.join(remote_job_directory, "tool_files"))
+ container = self._find_container(
+ job_wrapper,
+ compute_working_directory=remote_working_directory,
+ compute_tool_directory=remote_tool_directory,
+ compute_job_directory=remote_job_directory,
+ )
command_line = build_command(
self,
job_wrapper=job_wrapper,
+ container=container,
include_metadata=remote_metadata,
include_work_dir_outputs=False,
remote_command_params=remote_command_params,
diff -r a19ef9da5e470d14a31ad8385f14168a14556224 -r 646b5877871b3b15c6fc6a9ecf111fceb23ea270 lib/galaxy/tools/__init__.py
--- a/lib/galaxy/tools/__init__.py
+++ b/lib/galaxy/tools/__init__.py
@@ -1346,7 +1346,9 @@
self.__tests_populated = False
# Requirements (dependencies)
- self.requirements = parse_requirements_from_xml( root )
+ requirements, containers = parse_requirements_from_xml( root )
+ self.requirements = requirements
+ self.containers = containers
# Determine if this tool can be used in workflows
self.is_workflow_compatible = self.check_workflow_compatible(root)
# Trackster configuration.
diff -r a19ef9da5e470d14a31ad8385f14168a14556224 -r 646b5877871b3b15c6fc6a9ecf111fceb23ea270 lib/galaxy/tools/deps/containers.py
--- /dev/null
+++ b/lib/galaxy/tools/deps/containers.py
@@ -0,0 +1,246 @@
+from abc import (
+ ABCMeta,
+ abstractmethod
+)
+import os
+import string
+
+from galaxy.util import asbool
+from ..deps import docker_util
+
+import logging
+log = logging.getLogger(__name__)
+
+DEFAULT_CONTAINER_TYPE = "docker"
+
+
+class ContainerFinder(object):
+
+ def __init__(self, app_info):
+ self.app_info = app_info
+ self.container_registry = ContainerRegistry()
+
+ def find_container(self, tool_info, destination_info, job_info):
+ def __destination_container(container_description=None, container_id=None, container_type=None):
+ if container_description:
+ container_id = container_description.identifier
+ container_type = container_description.type
+ container = self.__destination_container(
+ container_id,
+ container_type,
+ tool_info,
+ destination_info,
+ job_info
+ )
+ return container
+
+ # If the destination is forcing Galaxy to use a particular container, do it;
+ # this is likely kind of a corner case - for instance, if deployers
+ # do not trust the containers annotated in tools.
+ for container_type in CONTAINER_CLASSES.keys():
+ container_id = self.__overridden_container_id(container_type, destination_info)
+ if container_id:
+ container = __destination_container(container_type=container_type, container_id=container_id)
+ if container:
+ return container
+
+ # Otherwise let's see if we can find a container for the tool.
+
+ # Exact matches first from explicitly listed containers in tools...
+ for container_description in tool_info.container_descriptions:
+ container = __destination_container(container_description)
+ if container:
+ return container
+
+ # Implement the vague concept of looping through all containers
+ # matching requirements. Exact details still need to be worked through,
+ # but hopefully the idea - that this sits below find_container so that
+ # external components to this module don't need to worry about it -
+ # is good enough.
+ container_descriptions = self.container_registry.container_descriptions_for_requirements(tool_info.requirements)
+ for container_description in container_descriptions:
+ container = __destination_container(container_description)
+ if container:
+ return container
+
+ # If we still don't have a container, check to see if any container
+ # types define a default container id and use that.
+ for container_type in CONTAINER_CLASSES.keys():
+ container_id = self.__default_container_id(container_type, destination_info)
+ if container_id:
+ container = __destination_container(container_type=container_type, container_id=container_id)
+ if container:
+ return container
+
+ return NULL_CONTAINER
+
+ def __overridden_container_id(self, container_type, destination_info):
+ if not self.__container_type_enabled(container_type, destination_info):
+ return None
+ return destination_info.get("%s_container_id_override" % container_type)
+
+ def __default_container_id(self, container_type, destination_info):
+ if not self.__container_type_enabled(container_type, destination_info):
+ return None
+ return destination_info.get("%s_default_container_id" % container_type)
+
+ def __destination_container(self, container_id, container_type, tool_info, destination_info, job_info):
+ # TODO: ensure destination_info is dict-like
+ if not self.__container_type_enabled(container_type, destination_info):
+ return NULL_CONTAINER
+
+ # TODO: Right now this assumes all containers are available when a
+ # container type is - there should be more thought put into this:
+ # checking which are available, setting policies for what can be
+ # auto-fetched, etc....
+ return CONTAINER_CLASSES[container_type](container_id, self.app_info, tool_info, destination_info, job_info)
+
+ def __container_type_enabled(self, container_type, destination_info):
+ return asbool(destination_info.get("%s_enabled" % container_type, False))
+
+
+class ContainerRegistry():
+
+ def __init__(self):
+ pass
+
+ def container_descriptions_for_requirements(self, requirements):
+ # Return lists of containers that would match requirements...
+ return []
+
+
+class AppInfo(object):
+
+ def __init__(self, galaxy_root_dir=None, default_file_path=None, outputs_to_working_directory=False):
+ self.galaxy_root_dir = galaxy_root_dir
+ self.default_file_path = default_file_path
+ # TODO: Vary default value for docker_volumes based on this...
+ self.outputs_to_working_directory = outputs_to_working_directory
+
+
+class ToolInfo(object):
+ # TODO: Introduce tool XML syntax to annotate the optional environment
+ # variables they can consume (e.g. JVM options, license keys, etc..)
+ # and add these to env_pass_through
+
+ def __init__(self, container_descriptions=[], requirements=[]):
+ self.container_descriptions = container_descriptions
+ self.requirements = requirements
+ self.env_pass_through = ["GALAXY_SLOTS"]
+
+
+class JobInfo(object):
+
+ def __init__(self, working_directory, tool_directory, job_directory):
+ self.working_directory = working_directory
+ self.job_directory = job_directory
+ # Tool files may be remotely staged - so this is unintuitively a property
+ # of the job, not of the tool.
+ self.tool_directory = tool_directory
+
+
+class Container( object ):
+ __metaclass__ = ABCMeta
+
+ def __init__(self, container_id, app_info, tool_info, destination_info, job_info):
+ self.container_id = container_id
+ self.app_info = app_info
+ self.tool_info = tool_info
+ self.destination_info = destination_info
+ self.job_info = job_info
+
+ @abstractmethod
+ def containerize_command(self, command):
+ """
+ Use destination supplied container configuration parameters,
+ container_id, and command to build a new command that runs
+ input command in container.
+ """
+
+
+class DockerContainer(Container):
+
+ def containerize_command(self, command):
+ def prop(name, default):
+ destination_name = "docker_%s" % name
+ return self.destination_info.get(destination_name, default)
+
+ env_directives = []
+ for pass_through_var in self.tool_info.env_pass_through:
+ env_directives.append('"%s=$%s"' % (pass_through_var, pass_through_var))
+
+ # Allow destinations to explicitly set environment variables just for
+ # the docker container. A better approach, however, is to set these for the
+ # destination and then pass through only what the tool needs. (See todo in ToolInfo.)
+ for key, value in self.destination_info.iteritems():
+ if key.startswith("docker_env_"):
+ env = key[len("docker_env_"):]
+ env_directives.append('"%s=%s"' % (env, value))
+
+ working_directory = self.job_info.working_directory
+ if not working_directory:
+ raise Exception("Cannot containerize command [%s] without defined working directory." % working_directory)
+
+ volumes_raw = self.__expand_str(self.destination_info.get("docker_volumes", "$defaults"))
+ # TODO: Remove redundant volumes...
+ volumes = docker_util.DockerVolume.volumes_from_str(volumes_raw)
+ return docker_util.build_docker_run_command(
+ command,
+ self.container_id,
+ volumes=volumes,
+ env_directives=env_directives,
+ working_directory=working_directory,
+ docker_cmd=prop("cmd", docker_util.DEFAULT_DOCKER_COMMAND),
+ sudo=asbool(prop("sudo", docker_util.DEFAULT_SUDO)),
+ sudo_cmd=prop("sudo_cmd", docker_util.DEFAULT_SUDO_COMMAND),
+ host=prop("host", docker_util.DEFAULT_HOST),
+ net=prop("net", "none") # By default, docker instance has networking disabled
+ )
+
+ def __expand_str(self, value):
+ template = string.Template(value)
+ variables = dict()
+
+ def add_var(name, value):
+ if value:
+ variables[name] = os.path.abspath(value)
+
+ add_var("working_directory", self.job_info.working_directory)
+ add_var("job_directory", self.job_info.job_directory)
+ add_var("tool_directory", self.job_info.tool_directory)
+ add_var("galaxy_root", self.app_info.galaxy_root_dir)
+ add_var("default_file_path", self.app_info.default_file_path)
+
+ if self.job_info.job_directory:
+ # We have a job directory, so everything needed (excluding index
+ # files) should be available in job_directory...
+ defaults = "$job_directory:ro,$tool_directory:ro,$job_directory/outputs:rw,$working_directory:rw"
+ elif self.app_info.outputs_to_working_directory:
+ # Should only need read access to default_file_path (which is a coarse
+ # estimate given object stores anyway).
+ defaults = "$galaxy_root:ro,$tool_directory:ro,$working_directory:rw,$default_file_path:ro"
+ else:
+ defaults = "$galaxy_root:ro,$tool_directory:ro,$working_directory:rw,$default_file_path:rw"
+
+ # Define $defaults that can easily be extended with external library and
+ # index data without deployer worrying about above details.
+ variables["defaults"] = string.Template(defaults).safe_substitute(variables)
+
+ return template.safe_substitute(variables)
+
+
+CONTAINER_CLASSES = dict(
+ docker=DockerContainer,
+)
+
+
+class NullContainer(object):
+
+ def __init__(self):
+ pass
+
+ def __nonzero__(self):
+ return False
+
+
+NULL_CONTAINER = NullContainer()
diff -r a19ef9da5e470d14a31ad8385f14168a14556224 -r 646b5877871b3b15c6fc6a9ecf111fceb23ea270 lib/galaxy/tools/deps/docker_util.py
--- /dev/null
+++ b/lib/galaxy/tools/deps/docker_util.py
@@ -0,0 +1,91 @@
+
+DEFAULT_DOCKER_COMMAND = "docker"
+DEFAULT_SUDO = True
+DEFAULT_SUDO_COMMAND = "sudo"
+DEFAULT_HOST = None
+DEFAULT_VOLUME_MOUNT_TYPE = "rw"
+DEFAULT_WORKING_DIRECTORY = None
+DEFAULT_NET = None
+DEFAULT_MEMORY = None
+
+
+class DockerVolume(object):
+
+ def __init__(self, path, to_path=None, how=DEFAULT_VOLUME_MOUNT_TYPE):
+ self.from_path = path
+ self.to_path = to_path or path
+ if not DockerVolume.__valid_how(how):
+ raise ValueError("Invalid way to specify docker volume %s" % how)
+ self.how = how
+
+ @staticmethod
+ def volumes_from_str(volumes_as_str):
+ if not volumes_as_str:
+ return []
+ volume_strs = [v.strip() for v in volumes_as_str.split(",")]
+ return map(DockerVolume.volume_from_str, volume_strs)
+
+ @staticmethod
+ def volume_from_str(as_str):
+ if not as_str:
+ raise ValueError("Failed to parse docker volume from %s" % as_str)
+ parts = as_str.split(":", 2)
+ kwds = dict(path=parts[0])
+ if len(parts) == 2:
+ if DockerVolume.__valid_how(parts[1]):
+ kwds["how"] = parts[1]
+ else:
+ kwds["to_path"] = parts[1]
+ elif len(parts) == 3:
+ kwds["to_path"] = parts[1]
+ kwds["how"] = parts[2]
+ return DockerVolume(**kwds)
+
+ @staticmethod
+ def __valid_how(how):
+ return how in ["ro", "rw"]
+
+ def __str__(self):
+ return ":".join([self.from_path, self.to_path, self.how])
+
+
+def build_docker_run_command(
+ container_command,
+ image,
+ tag=None,
+ docker_cmd=DEFAULT_DOCKER_COMMAND,
+ volumes=[],
+ memory=DEFAULT_MEMORY,
+ env_directives=[],
+ working_directory=DEFAULT_WORKING_DIRECTORY,
+ sudo=DEFAULT_SUDO,
+ sudo_cmd=DEFAULT_SUDO_COMMAND,
+ name=None,
+ host=DEFAULT_HOST,
+ net=DEFAULT_NET,
+):
+ command_parts = []
+ if sudo:
+ command_parts.append(sudo_cmd)
+ command_parts.append(docker_cmd)
+ if host:
+ command_parts.append(["-H", host])
+ command_parts.append("run")
+ for env_directive in env_directives:
+ command_parts.extend(["-e", env_directive])
+ for volume in volumes:
+ command_parts.extend(["-v", str(volume)])
+ if memory:
+ command_parts.extend(["-m", memory])
+ if name:
+ command_parts.extend(["-name", name])
+ if working_directory:
+ command_parts.extend(["-w", working_directory])
+ if net:
+ command_parts.extend(["--net", net])
+ full_image = image
+ if tag:
+ full_image = "%s:%s" % (full_image, tag)
+ command_parts.append(full_image)
+ command_parts.append(container_command)
+ return " ".join(command_parts)
diff -r a19ef9da5e470d14a31ad8385f14168a14556224 -r 646b5877871b3b15c6fc6a9ecf111fceb23ea270 lib/galaxy/tools/deps/requirements.py
--- a/lib/galaxy/tools/deps/requirements.py
+++ b/lib/galaxy/tools/deps/requirements.py
@@ -26,6 +26,25 @@
return ToolRequirement( name=name, type=type, version=version )
+DEFAULT_CONTAINER_TYPE = "docker"
+
+
+class ContainerDescription( object ):
+
+ def __init__( self, identifier=None, type="docker" ):
+ self.identifier = identifier
+ self.type = type
+
+ def to_dict( self ):
+ return dict(identifier=self.identifier, type=self.type)
+
+ @staticmethod
+ def from_dict( dict ):
+ identifier = dict["identifier"]
+ type = dict.get("type", DEFAULT_CONTAINER_TYPE)
+ return ContainerDescription( identifier=identifier, type=type )
+
+
def parse_requirements_from_xml( xml_root ):
"""
@@ -63,4 +82,15 @@
requirement = ToolRequirement( name=name, type=type, version=version )
requirements.append( requirement )
- return requirements
+ container_elems = []
+ if requirements_elem is not None:
+ container_elems = requirements_elem.findall( 'container' )
+
+ containers = []
+ for container_elem in container_elems:
+ identifier = xml_text( container_elem )
+ type = container_elem.get( "type", DEFAULT_CONTAINER_TYPE )
+ container = ContainerDescription( identifier=identifier, type=type )
+ containers.append( container )
+
+ return requirements, containers
diff -r a19ef9da5e470d14a31ad8385f14168a14556224 -r 646b5877871b3b15c6fc6a9ecf111fceb23ea270 test/functional/tools/catDocker.xml
--- /dev/null
+++ b/test/functional/tools/catDocker.xml
@@ -0,0 +1,28 @@
+<tool id="catdc" name="Concatenate datasets (in docker)">
+ <description>tail-to-head</description>
+ <requirements>
+ <container type="docker">busybox:ubuntu-14.04</container>
+ </requirements>
+ <command>
+ echo "Galaxy slots passed through contain as \$GALAXY_SLOTS";
+ cat $input1
+ #for $q in $queries
+ ${q.input2}
+ #end for
+ > $out_file1;
+ echo "Work dir output" > working_file
+
+ </command>
+ <inputs>
+ <param name="input1" type="data" label="Concatenate Dataset"/>
+ <repeat name="queries" title="Dataset">
+ <param name="input2" type="data" label="Select" />
+ </repeat>
+ </inputs>
+ <outputs>
+ <data name="out_file1" format="input" metadata_source="input1"/>
+ <data name="out_file2" format="txt" from_work_dir="working_file" />
+ </outputs>
+ <help>
+ </help>
+</tool>
Repository URL: https://bitbucket.org/galaxy/galaxy-central/
--
This is a commit notification from bitbucket.org. You are receiving
this because you have the service enabled, addressing the recipient of
this email.
2 new commits in galaxy-central:
https://bitbucket.org/galaxy/galaxy-central/commits/60d8de457dd5/
Changeset: 60d8de457dd5
User: jmchilton
Date: 2014-06-07 08:45:05
Summary: Allow tools and deployers to specify optional Docker-based dependency resolution.
Testing it out:
---------------
- Install [docker](http://docker.io) (tough, but getting easier).
- Copy `test/functional/tools/catDocker.xml` to somewhere in `tools/` and add to `tool_conf.xml`.
- Add `<param id="docker_enabled">true</param>` to your favorite job destination.
- Run the tool.
Description and Configuration:
------------------------------
Works with all stock job runners including remote jobs with the LWR.
Supports file system isolation, allowing the deployer to determine which paths are exposed to the container and optionally allowing these to be read-only. Paths can be overridden or extended, but defaults are provided that attempt to guess what should be read-only and what should be writable based on Galaxy's configuration and the job destination. There are lots of details and discussion in `job_conf.xml.sample_advanced`.
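To make the volume syntax concrete, here is a minimal sketch (not part of the changeset) of parsing a `docker_volumes`-style string with the new `DockerVolume` helper from `lib/galaxy/tools/deps/docker_util.py`. It assumes Galaxy's `lib/` directory is on `sys.path`; the paths are hypothetical, and the `$defaults` token would already have been expanded via `string.Template` before this parsing step runs.
# Hypothetical paths; assumes Galaxy's lib/ directory is on sys.path.
from galaxy.tools.deps.docker_util import DockerVolume
volumes = DockerVolume.volumes_from_str(
    "/mnt/galaxyData/libraries:ro,/mnt/galaxyData/indices:ro,/tmp/job_1:rw" )
for volume in volumes:
    # Each volume renders as from_path:to_path:how - the form handed to docker's -v flag.
    print str( volume )
# -> /mnt/galaxyData/libraries:/mnt/galaxyData/libraries:ro (and so on)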
`$GALAXY_SLOTS` (however it is configured for the given runner) is passed into the container at runtime and will be available.
Tools are allowed to explicitly annotate which container should be used to run the tool. I added hooks to allow a more expansive approach where containers could be linked to requirements and resolved that way. To be clear, this is not implemented at all, but the class ContainerRegistry is instantiated, passed the list of requirements, and given the chance to return a list of potential containers... in case someone wants to implement this someday.
From a reproducibility standpoint it makes sense for tool authors to have control over which container is selected, but there is a security and isolation aspect to these enhancements as well. So there are some more advanced options that allow deployers (instead of tool authors) to decide which containers are selected for jobs. `docker_default_container_id` can be added to a destination to cause that container to be used for all un-mapped tools - which will result in every job on that destination being run in a docker container. If the deployer does not even trust those tools annotated with image ids - they can go a step further and set `docker_container_id_override` instead. This will likewise cause all jobs to run in a container - but the tool details themselves will be ignored and *EVERY* tool will use the specified container.
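The resulting selection order can be summarized with a short sketch. This is a simplified stand-in for `ContainerFinder.find_container`, not the actual code: `destination_info` is a plain dict standing in for the destination params, `tool_container_ids` stands in for the tool's annotated containers, and the `ContainerRegistry` lookup (which currently matches nothing) is elided.
def select_container_id( destination_info, tool_container_ids ):
    # 1. A destination-wide override beats everything, ignoring tool annotations.
    override = destination_info.get( "docker_container_id_override" )
    if override:
        return override
    # 2. Otherwise containers explicitly annotated in the tool XML are used.
    if tool_container_ids:
        return tool_container_ids[ 0 ]
    # 3. Finally fall back to the destination's default - None means no container.
    return destination_info.get( "docker_default_container_id" )
# Un-mapped tools pick up the destination default:
assert select_container_id( { "docker_default_container_id": "busybox:ubuntu-14.04" }, [] ) == "busybox:ubuntu-14.04"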
Additional advanced docker options are available to control memory, enable network access (disabled by default), specify where docker is found, control if and how sudo is used, and so on. These are all documented in `job_conf.xml.sample_advanced`.
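To see how those params translate into an actual command line, here is a hedged sketch using `build_docker_run_command` from the new `docker_util` module (again assuming Galaxy's `lib/` is on `sys.path`; the script path and image are hypothetical).
from galaxy.tools.deps.docker_util import DockerVolume, build_docker_run_command
command = build_docker_run_command(
    "/bin/sh /jobs/1/container.sh",  # hypothetical wrapped job script
    "busybox:ubuntu-14.04",
    volumes=DockerVolume.volumes_from_str( "/galaxy:ro,/jobs/1:rw" ),
    memory="24G",  # maps to docker's -m flag
    net="none",    # networking disabled
)
print command
# sudo docker run -v /galaxy:/galaxy:ro -v /jobs/1:/jobs/1:rw -m 24G --net none busybox:ubuntu-14.04 /bin/sh /jobs/1/container.sh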
Implementation Details:
-----------------------
Metadata is set outside the container - so the container itself only needs to supply the underlying application and doesn't need to be configured with Galaxy, for instance. Likewise - traditional `tool_dependency_dir` based dependency resolution is disabled when a job is run in a container - for now it is assumed the container will supply these dependencies.
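Concretely, the wrapping looks roughly like the following sketch, which mirrors the container branch added to `lib/galaxy/jobs/command_factory.py` in this changeset; `wrap_in_container` is an illustrative name, not a function in the codebase.
from os import chmod
from os.path import join
def wrap_in_container( working_directory, tool_command, container ):
    # Write the raw tool command to a script in the job's working directory;
    # only this script runs inside the container.
    local_container_script = join( working_directory, "container.sh" )
    fh = open( local_container_script, "w" )
    fh.write( "#!/bin/sh\n%s" % tool_command )
    fh.close()
    chmod( local_container_script, 0755 )
    # containerize_command wraps the script in e.g. a "sudo docker run ..."
    # line; metadata and work-dir output handling happen outside it afterwards.
    return container.containerize_command( local_container_script )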
What's Next:
------------
If this implementation is merged, much is left to be discussed and worked through - how to fetch and control what images are fetched (right now the code just assumes that if you have docker enabled all referenced images are available), where to fetch images from, tool shed integration (host a tool shed docker repository? services to build docker images preconfigured with tool shed dependencies?), etc.... This is meant as more of a foundation for the dependency resolution and job runner portions of this.
Affected #: 10 files
diff -r 84a00e4f7d06e9adcec7297d8133ce70524dc242 -r 60d8de457dd57adb71e30725a6451126148edd6d job_conf.xml.sample_advanced
--- a/job_conf.xml.sample_advanced
+++ b/job_conf.xml.sample_advanced
@@ -97,6 +97,90 @@
elements. This empty block will simply disable job metrics for the
corresponding destination. --></destination>
+ <destination id="docker_local" runner="local">
+ <param id="docker_enabled">true</param>
+ <!-- docker_volumes can be used to configure volumes to expose to docker.
+ For added isolation append :ro to a path to mount it read-only.
+ Galaxy will attempt to infer a reasonable set of defaults for which
+ volumes should be exposed (and how) based on Galaxy's settings and the
+ destination - but be sure to add any library paths or data indices
+ that may be needed read-only.
+ -->
+ <!--
+ <param id="docker_volumes">$defaults,/mnt/galaxyData/libraries:ro,/mnt/galaxyData/indices:ro</param>
+ -->
+ <!-- For a stock Galaxy instance and traditional job runner $defaults will
+ expand out as:
+
+ $galaxy_root:ro,$tool_directory:ro,$working_directory:rw,$default_file_path:rw
+
+ This assumes most of what is needed is available under Galaxy's root directory,
+ the tool directory, and Galaxy's file_path (if using the object store
+ creatively you will definitely need to expand defaults).
+
+ This configuration allows any docker instance to write to any Galaxy
+ file - for greater isolation set outputs_to_working_directory in
+ universe_wsgi.ini. This will cause $defaults to allow writing to much
+ less. It will then expand as follows:
+
+ $galaxy_root:ro,$tool_directory:ro,$working_directory:rw,$default_file_path:ro
+
+ If using the LWR, defaults will be even further restricted because the
+ LWR will (by default) stage all needed inputs into the job's job_directory
+ (so there is no need to allow the docker container to read all the
+ files - let alone write over them). Defaults in this case become:
+
+ $job_directory:ro,$tool_directory:ro,$job_directory/outputs:rw,$working_directory:rw
+
+ Python string.Template is used to expand volumes; the values $defaults,
+ $galaxy_root, $default_file_path, $tool_directory, and $working_directory
+ are available to all jobs, and $job_directory is also available for
+ LWR jobs.
+ -->
+ <!-- Control the memory allocatable by the docker container with the following option:
+ -->
+ <!-- <param id="docker_memory">24G</param> -->
+ <!-- By default Docker will need to be runnable by Galaxy using
+ password-less sudo - this can be configured by adding the
+ following line to the sudoers file of all compute nodes
+ with docker enabled:
+
+ galaxy ALL = (root) NOPASSWD: SETENV: /usr/bin/docker
+
+ The following option can be set to false to disable sudo (docker
+ must likewise be configured to allow this).
+ -->
+ <!-- <param id="docker_sudo">false</param> -->
+ <!-- The following option can be used to tweak the sudo command used by
+ default. -->
+ <!-- <param id="docker_sudo_cmd">/usr/bin/sudo --extra_param</param> -->
+ <!-- By default, the docker container will not have any networking
+ enabled. Bridged networking can be enabled by uncommenting the next option
+ http://docs.docker.io/reference/run/#network-settings
+ -->
+ <!-- <param id="docker_net">bridge</param> -->
+ <!-- The following option can be used to tweak the docker command. -->
+ <!-- <param id="docker_cmd">/usr/local/custom_docker/docker</param> -->
+ <!-- The following can be used to connect to the docker server in
+ different ways (translated as the -H argument to the docker client). -->
+ <!-- <param id="docker_host">unix:///var/run/docker.sock</param> -->
+ <!-- <param id="docker_host">:5555</param> -->
+ <!-- <param id="docker_host">tcp://127.0.0.1:4243</param> -->
+
+ <!-- If the deployer wants to use docker for isolation, but does not
+ trust the tool's specified container - a destination-wide override
+ can be set. This will cause all jobs on this destination to use
+ that docker image. -->
+ <!-- <param id="docker_container_id_override">busybox:ubuntu-14.04</param> -->
+
+ <!-- Likewise, if the deployer wants to use docker for isolation and
+ does trust the tool's specified container - but also wants tools not
+ configured to run in a container to fall back to one - the following
+ option provides that fallback. -->
+ <!-- <param id="docker_default_container_id">busybox:ubuntu-14.04</param> -->
+
+ </destination><destination id="pbs" runner="pbs" tags="mycluster"/><destination id="pbs_longjobs" runner="pbs" tags="mycluster,longjobs"><!-- Define parameters that are native to the job runner plugin. -->
diff -r 84a00e4f7d06e9adcec7297d8133ce70524dc242 -r 60d8de457dd57adb71e30725a6451126148edd6d lib/galaxy/config.py
--- a/lib/galaxy/config.py
+++ b/lib/galaxy/config.py
@@ -581,6 +581,16 @@
import galaxy.tools.search
self.toolbox_search = galaxy.tools.search.ToolBoxSearch( self.toolbox )
+ from galaxy.tools.deps import containers
+ galaxy_root_dir = os.path.abspath(self.config.root)
+ file_path = os.path.abspath(getattr(self.config, "file_path"))
+ app_info = containers.AppInfo(
+ galaxy_root_dir,
+ default_file_path=file_path,
+ outputs_to_working_directory=self.config.outputs_to_working_directory
+ )
+ self.container_finder = galaxy.tools.deps.containers.ContainerFinder(app_info)
+
def _configure_tool_data_tables( self, from_shed_config ):
from galaxy.tools.data import ToolDataTableManager
diff -r 84a00e4f7d06e9adcec7297d8133ce70524dc242 -r 60d8de457dd57adb71e30725a6451126148edd6d lib/galaxy/jobs/command_factory.py
--- a/lib/galaxy/jobs/command_factory.py
+++ b/lib/galaxy/jobs/command_factory.py
@@ -1,4 +1,6 @@
from os import getcwd
+from os import chmod
+from os.path import join
from os.path import abspath
CAPTURE_RETURN_CODE = "return_code=$?"
@@ -8,7 +10,14 @@
log = getLogger( __name__ )
-def build_command( runner, job_wrapper, include_metadata=False, include_work_dir_outputs=True, remote_command_params={} ):
+def build_command(
+ runner,
+ job_wrapper,
+ container=None,
+ include_metadata=False,
+ include_work_dir_outputs=True,
+ remote_command_params={}
+):
"""
Compose the sequence of commands necessary to execute a job. This will
currently include:
@@ -29,7 +38,35 @@
__handle_version_command(commands_builder, job_wrapper)
__handle_task_splitting(commands_builder, job_wrapper)
- __handle_dependency_resolution(commands_builder, job_wrapper, remote_command_params)
+
+ # One could imagine also allowing dependencies inside of the container but
+ # that is too sophisticated for a first crack at this - build your
+ # containers ready to go!
+ if not container:
+ __handle_dependency_resolution(commands_builder, job_wrapper, remote_command_params)
+
+ if container:
+ # Stop now and build command before handling metadata and copying
+ # working directory files back. These should always happen outside
+ # of docker container - no security implications when generating
+ # metadata and means no need for Galaxy to be available to container
+ # and not copying workdir outputs back means one can be more restrictive
+ # of where container can write to in some circumstances.
+
+ local_container_script = join( job_wrapper.working_directory, "container.sh" )
+ fh = file( local_container_script, "w" )
+ fh.write( "#!/bin/sh\n%s" % commands_builder.build() )
+ fh.close()
+ chmod( local_container_script, 0755 )
+
+ compute_container_script = local_container_script
+ if 'working_directory' in remote_command_params:
+ compute_container_script = "/bin/sh %s" % join(remote_command_params['working_directory'], "container.sh")
+
+ run_in_container_command = container.containerize_command(
+ compute_container_script
+ )
+ commands_builder = CommandsBuilder( run_in_container_command )
if include_work_dir_outputs:
__handle_work_dir_outputs(commands_builder, job_wrapper, runner, remote_command_params)
diff -r 84a00e4f7d06e9adcec7297d8133ce70524dc242 -r 60d8de457dd57adb71e30725a6451126148edd6d lib/galaxy/jobs/runners/__init__.py
--- a/lib/galaxy/jobs/runners/__init__.py
+++ b/lib/galaxy/jobs/runners/__init__.py
@@ -177,7 +177,16 @@
raise NotImplementedError()
def build_command_line( self, job_wrapper, include_metadata=False, include_work_dir_outputs=True ):
- return build_command( self, job_wrapper, include_metadata=include_metadata, include_work_dir_outputs=include_work_dir_outputs )
+ # TODO: Eliminate extra kwds no longer used (since LWR skips
+ # abstraction and calls build_command directly).
+ container = self._find_container( job_wrapper )
+ return build_command(
+ self,
+ job_wrapper,
+ include_metadata=include_metadata,
+ include_work_dir_outputs=include_work_dir_outputs,
+ container=container
+ )
def get_work_dir_outputs( self, job_wrapper, job_working_directory=None ):
"""
@@ -276,6 +285,31 @@
if ajs.job_wrapper.get_state() != model.Job.states.DELETED:
self.work_queue.put( ( self.finish_job, ajs ) )
+ def _find_container(
+ self,
+ job_wrapper,
+ compute_working_directory=None,
+ compute_tool_directory=None,
+ compute_job_directory=None
+ ):
+ if not compute_working_directory:
+ compute_working_directory = job_wrapper.working_directory
+
+ if not compute_tool_directory:
+ compute_tool_directory = job_wrapper.tool.tool_dir
+
+ tool = job_wrapper.tool
+ from galaxy.tools.deps import containers
+ tool_info = containers.ToolInfo(tool.containers, tool.requirements)
+ job_info = containers.JobInfo(compute_working_directory, compute_tool_directory, compute_job_directory)
+
+ destination_info = job_wrapper.job_destination.params
+ return self.app.container_finder.find_container(
+ tool_info,
+ destination_info,
+ job_info
+ )
+
class JobState( object ):
"""
diff -r 84a00e4f7d06e9adcec7297d8133ce70524dc242 -r 60d8de457dd57adb71e30725a6451126148edd6d lib/galaxy/jobs/runners/lwr.py
--- a/lib/galaxy/jobs/runners/lwr.py
+++ b/lib/galaxy/jobs/runners/lwr.py
@@ -227,9 +227,21 @@
metadata_kwds=metadata_kwds,
dependency_resolution=dependency_resolution,
)
+ remote_working_directory = remote_job_config['working_directory']
+ # TODO: The following definitions work for the LWR (and always have) but
+ # should be calculated at some other level.
+ remote_job_directory = os.path.abspath(os.path.join(remote_working_directory, os.path.pardir))
+ remote_tool_directory = os.path.abspath(os.path.join(remote_job_directory, "tool_files"))
+ container = self._find_container(
+ job_wrapper,
+ compute_working_directory=remote_working_directory,
+ compute_tool_directory=remote_tool_directory,
+ compute_job_directory=remote_job_directory,
+ )
command_line = build_command(
self,
job_wrapper=job_wrapper,
+ container=container,
include_metadata=remote_metadata,
include_work_dir_outputs=False,
remote_command_params=remote_command_params,
diff -r 84a00e4f7d06e9adcec7297d8133ce70524dc242 -r 60d8de457dd57adb71e30725a6451126148edd6d lib/galaxy/tools/__init__.py
--- a/lib/galaxy/tools/__init__.py
+++ b/lib/galaxy/tools/__init__.py
@@ -1346,7 +1346,9 @@
self.__tests_populated = False
# Requirements (dependencies)
- self.requirements = parse_requirements_from_xml( root )
+ requirements, containers = parse_requirements_from_xml( root )
+ self.requirements = requirements
+ self.containers = containers
# Determine if this tool can be used in workflows
self.is_workflow_compatible = self.check_workflow_compatible(root)
# Trackster configuration.
diff -r 84a00e4f7d06e9adcec7297d8133ce70524dc242 -r 60d8de457dd57adb71e30725a6451126148edd6d lib/galaxy/tools/deps/containers.py
--- /dev/null
+++ b/lib/galaxy/tools/deps/containers.py
@@ -0,0 +1,246 @@
+from abc import (
+ ABCMeta,
+ abstractmethod
+)
+import os
+import string
+
+from galaxy.util import asbool
+from ..deps import docker_util
+
+import logging
+log = logging.getLogger(__name__)
+
+DEFAULT_CONTAINER_TYPE = "docker"
+
+
+class ContainerFinder(object):
+
+ def __init__(self, app_info):
+ self.app_info = app_info
+ self.container_registry = ContainerRegistry()
+
+ def find_container(self, tool_info, destination_info, job_info):
+ def __destination_container(container_description=None, container_id=None, container_type=None):
+ if container_description:
+ container_id = container_description.identifier
+ container_type = container_description.type
+ container = self.__destination_container(
+ container_id,
+ container_type,
+ tool_info,
+ destination_info,
+ job_info
+ )
+ return container
+
+ # If the destination is forcing Galaxy to use a particular container, do it;
+ # this is likely kind of a corner case - for instance, if deployers
+ # do not trust the containers annotated in tools.
+ for container_type in CONTAINER_CLASSES.keys():
+ container_id = self.__overridden_container_id(container_type, destination_info)
+ if container_id:
+ container = __destination_container(container_type=container_type, container_id=container_id)
+ if container:
+ return container
+
+ # Otherwise let's see if we can find a container for the tool.
+
+ # Exact matches first from explicitly listed containers in tools...
+ for container_description in tool_info.container_descriptions:
+ container = __destination_container(container_description)
+ if container:
+ return container
+
+ # Implement the vague concept of looping through all containers that
+ # match the requirements. The exact details still need to be worked
+ # through, but since this sits below find_container, external components
+ # need not worry about it.
+ container_descriptions = self.container_registry.container_descriptions_for_requirements(tool_info.requirements)
+ for container_description in container_descriptions:
+ container = __destination_container(container_description)
+ if container:
+ return container
+
+ # If we still don't have a container, check to see if any container
+ # types define a default container id and use that.
+ for container_type in CONTAINER_CLASSES.keys():
+ container_id = self.__default_container_id(container_type, destination_info)
+ if container_id:
+ container = __destination_container(container_type=container_type, container_id=container_id)
+ if container:
+ return container
+
+ return NULL_CONTAINER
+
+ def __overridden_container_id(self, container_type, destination_info):
+ if not self.__container_type_enabled(container_type, destination_info):
+ return None
+ return destination_info.get("%s_container_id_override" % container_type)
+
+ def __default_container_id(self, container_type, destination_info):
+ if not self.__container_type_enabled(container_type, destination_info):
+ return None
+ return destination_info.get("%s_default_container_id" % container_type)
+
+ def __destination_container(self, container_id, container_type, tool_info, destination_info, job_info):
+ # TODO: ensure destination_info is dict-like
+ if not self.__container_type_enabled(container_type, destination_info):
+ return NULL_CONTAINER
+
+ # TODO: Right now this assumes all containers are available whenever a
+ # container type is enabled - there should be more thought put into this:
+ # checking which are available, setting policies for what can be
+ # auto-fetched, etc....
+ return CONTAINER_CLASSES[container_type](container_id, self.app_info, tool_info, destination_info, job_info)
+
+ def __container_type_enabled(self, container_type, destination_info):
+ return asbool(destination_info.get("%s_enabled" % container_type, False))
+
+
+class ContainerRegistry():
+
+ def __init__(self):
+ pass
+
+ def container_descriptions_for_requirements(self, requirements):
+ # Return a list of container descriptions that would match the requirements...
+ return []
+
+
+class AppInfo(object):
+
+ def __init__(self, galaxy_root_dir=None, default_file_path=None, outputs_to_working_directory=False):
+ self.galaxy_root_dir = galaxy_root_dir
+ self.default_file_path = default_file_path
+ # TODO: Vary default value for docker_volumes based on this...
+ self.outputs_to_working_directory = outputs_to_working_directory
+
+
+class ToolInfo(object):
+ # TODO: Introduce tool XML syntax to annotate the optional environment
+ # variables tools can consume (e.g. JVM options, license keys, etc..)
+ # and add these to env_pass_through
+
+ def __init__(self, container_descriptions=[], requirements=[]):
+ self.container_descriptions = container_descriptions
+ self.requirements = requirements
+ self.env_pass_through = ["GALAXY_SLOTS"]
+
+
+class JobInfo(object):
+
+ def __init__(self, working_directory, tool_directory, job_directory):
+ self.working_directory = working_directory
+ self.job_directory = job_directory
+ # Tool files may be staged remotely - so this is, unintuitively, a
+ # property of the job and not of the tool.
+ self.tool_directory = tool_directory
+
+
+class Container( object ):
+ __metaclass__ = ABCMeta
+
+ def __init__(self, container_id, app_info, tool_info, destination_info, job_info):
+ self.container_id = container_id
+ self.app_info = app_info
+ self.tool_info = tool_info
+ self.destination_info = destination_info
+ self.job_info = job_info
+
+ @abstractmethod
+ def containerize_command(self, command):
+ """
+ Use the destination-supplied container configuration parameters,
+ the container_id, and the supplied command to build a new command
+ that runs the input command inside the container.
+ """
+
+
+class DockerContainer(Container):
+
+ def containerize_command(self, command):
+ def prop(name, default):
+ destination_name = "docker_%s" % name
+ return self.destination_info.get(destination_name, default)
+
+ env_directives = []
+ for pass_through_var in self.tool_info.env_pass_through:
+ env_directives.append('"%s=$%s"' % (pass_through_var, pass_through_var))
+
+ # Allow destinations to explicitly set environment variables just for the
+ # docker container. A better approach would be to set them for the
+ # destination and then pass through only what the tool needs. (See TODO
+ # in ToolInfo.)
+ for key, value in self.destination_info.iteritems():
+ if key.startswith("docker_env_"):
+ env = key[len("docker_env_"):]
+ env_directives.append('"%s=%s"' % (env, value))
+
+ working_directory = self.job_info.working_directory
+ if not working_directory:
+ raise Exception("Cannot containerize command [%s] without defined working directory." % working_directory)
+
+ volumes_raw = self.__expand_str(self.destination_info.get("docker_volumes", "$defaults"))
+ # TODO: Remove redundant volumes...
+ volumes = docker_util.DockerVolume.volumes_from_str(volumes_raw)
+ return docker_util.build_docker_run_command(
+ command,
+ self.container_id,
+ volumes=volumes,
+ env_directives=env_directives,
+ working_directory=working_directory,
+ docker_cmd=prop("cmd", docker_util.DEFAULT_DOCKER_COMMAND),
+ sudo=asbool(prop("sudo", docker_util.DEFAULT_SUDO)),
+ sudo_cmd=prop("sudo_cmd", docker_util.DEFAULT_SUDO_COMMAND),
+ host=prop("host", docker_util.DEFAULT_HOST),
+ net=prop("net", "none") # By default, docker instance has networking disabled
+ )
+
+ def __expand_str(self, value):
+ template = string.Template(value)
+ variables = dict()
+
+ def add_var(name, value):
+ if value:
+ variables[name] = os.path.abspath(value)
+
+ add_var("working_directory", self.job_info.working_directory)
+ add_var("job_directory", self.job_info.job_directory)
+ add_var("tool_directory", self.job_info.tool_directory)
+ add_var("galaxy_root", self.app_info.galaxy_root_dir)
+ add_var("default_file_path", self.app_info.default_file_path)
+
+ if self.job_info.job_directory:
+ # We have a job directory, so everything needed (excluding index
+ # files) should be available in job_directory...
+ defaults = "$job_directory:ro,$tool_directory:ro,$job_directory/outputs:rw,$working_directory:rw"
+ elif self.app_info.outputs_to_working_directory:
+ # Should still need default_file_path (which is a coarse estimate
+ # given object stores anyway).
+ defaults = "$galaxy_root:ro,$tool_directory:ro,$working_directory:rw,$default_file_path:ro"
+ else:
+ defaults = "$galaxy_root:ro,$tool_directory:ro,$working_directory:rw,$default_file_path:rw"
+
+ # Define $defaults so it can easily be extended with external library and
+ # index data without the deployer worrying about the above details.
+ variables["defaults"] = string.Template(defaults).safe_substitute(variables)
+
+ return template.safe_substitute(variables)
+
+
+CONTAINER_CLASSES = dict(
+ docker=DockerContainer,
+)
+
+
+class NullContainer(object):
+
+ def __init__(self):
+ pass
+
+ def __nonzero__(self):
+ return False
+
+
+NULL_CONTAINER = NullContainer()
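To illustrate the volume handling above - a minimal sketch, with hypothetical paths, of how DockerContainer.__expand_str resolves $defaults using Python's string.Template (this snippet is not part of the commit):

import string

# Values DockerContainer gathers from AppInfo/JobInfo (hypothetical paths).
variables = {
    "galaxy_root": "/srv/galaxy",
    "tool_directory": "/srv/galaxy/tools/filters",
    "working_directory": "/srv/galaxy/database/job_working_directory/001",
    "default_file_path": "/srv/galaxy/database/files",
}
# The stock default volume string (no job directory, writable file_path).
defaults = "$galaxy_root:ro,$tool_directory:ro,$working_directory:rw,$default_file_path:rw"
variables["defaults"] = string.Template(defaults).safe_substitute(variables)

# A destination-supplied docker_volumes value extends $defaults and expands
# to the concrete list handed to DockerVolume.volumes_from_str:
print string.Template("$defaults,/mnt/indices:ro").safe_substitute(variables)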
diff -r 84a00e4f7d06e9adcec7297d8133ce70524dc242 -r 60d8de457dd57adb71e30725a6451126148edd6d lib/galaxy/tools/deps/docker_util.py
--- /dev/null
+++ b/lib/galaxy/tools/deps/docker_util.py
@@ -0,0 +1,91 @@
+
+DEFAULT_DOCKER_COMMAND = "docker"
+DEFAULT_SUDO = True
+DEFAULT_SUDO_COMMAND = "sudo"
+DEFAULT_HOST = None
+DEFAULT_VOLUME_MOUNT_TYPE = "rw"
+DEFAULT_WORKING_DIRECTORY = None
+DEFAULT_NET = None
+DEFAULT_MEMORY = None
+
+
+class DockerVolume(object):
+
+ def __init__(self, path, to_path=None, how=DEFAULT_VOLUME_MOUNT_TYPE):
+ self.from_path = path
+ self.to_path = to_path or path
+ if not DockerVolume.__valid_how(how):
+ raise ValueError("Invalid way to specify docker volume %s" % how)
+ self.how = how
+
+ @staticmethod
+ def volumes_from_str(volumes_as_str):
+ if not volumes_as_str:
+ return []
+ volume_strs = [v.strip() for v in volumes_as_str.split(",")]
+ return map(DockerVolume.volume_from_str, volume_strs)
+
+ @staticmethod
+ def volume_from_str(as_str):
+ if not as_str:
+ raise ValueError("Failed to parse docker volume from %s" % as_str)
+ parts = as_str.split(":", 2)
+ kwds = dict(path=parts[0])
+ if len(parts) == 2:
+ if DockerVolume.__valid_how(parts[1]):
+ kwds["how"] = parts[1]
+ else:
+ kwds["to_path"] = parts[1]
+ elif len(parts) == 3:
+ kwds["to_path"] = parts[1]
+ kwds["how"] = parts[2]
+ return DockerVolume(**kwds)
+
+ @staticmethod
+ def __valid_how(how):
+ return how in ["ro", "rw"]
+
+ def __str__(self):
+ return ":".join([self.from_path, self.to_path, self.how])
+
+
+def build_docker_run_command(
+ container_command,
+ image,
+ tag=None,
+ docker_cmd=DEFAULT_DOCKER_COMMAND,
+ volumes=[],
+ memory=DEFAULT_MEMORY,
+ env_directives=[],
+ working_directory=DEFAULT_WORKING_DIRECTORY,
+ sudo=DEFAULT_SUDO,
+ sudo_cmd=DEFAULT_SUDO_COMMAND,
+ name=None,
+ host=DEFAULT_HOST,
+ net=DEFAULT_NET,
+):
+ command_parts = []
+ if sudo:
+ command_parts.append(sudo_cmd)
+ command_parts.append(docker_cmd)
+ if host:
+ command_parts.append(["-H", host])
+ command_parts.append("run")
+ for env_directive in env_directives:
+ command_parts.extend(["-e", env_directive])
+ for volume in volumes:
+ command_parts.extend(["-v", str(volume)])
+ if memory:
+ command_parts.extend(["-m", memory])
+ if name:
+ command_parts.extend(["-name", name])
+ if working_directory:
+ command_parts.extend(["-w", working_directory])
+ if net:
+ command_parts.extend(["--net", net])
+ full_image = image
+ if tag:
+ full_image = "%s:%s" % (full_image, tag)
+ command_parts.append(full_image)
+ command_parts.append(container_command)
+ return " ".join(command_parts)
diff -r 84a00e4f7d06e9adcec7297d8133ce70524dc242 -r 60d8de457dd57adb71e30725a6451126148edd6d lib/galaxy/tools/deps/requirements.py
--- a/lib/galaxy/tools/deps/requirements.py
+++ b/lib/galaxy/tools/deps/requirements.py
@@ -26,6 +26,25 @@
return ToolRequirement( name=name, type=type, version=version )
+DEFAULT_CONTAINER_TYPE = "docker"
+
+
+class ContainerDescription( object ):
+
+ def __init__( self, identifier=None, type="docker" ):
+ self.identifier = identifier
+ self.type = type
+
+ def to_dict( self ):
+ return dict(identifier=self.identifier, type=self.type)
+
+ @staticmethod
+ def from_dict( dict ):
+ identifier = dict["identifier"]
+ type = dict.get("type", DEFAULT_CONTAINER_TYPE)
+ return ContainerDescription( identifier=identifier, type=type )
+
+
def parse_requirements_from_xml( xml_root ):
"""
@@ -63,4 +82,15 @@
requirement = ToolRequirement( name=name, type=type, version=version )
requirements.append( requirement )
- return requirements
+ container_elems = []
+ if requirements_elem is not None:
+ container_elems = requirements_elem.findall( 'container' )
+
+ containers = []
+ for container_elem in container_elems:
+ identifier = xml_text( container_elem )
+ type = container_elem.get( "type", DEFAULT_CONTAINER_TYPE )
+ container = ContainerDescription( identifier=identifier, type=type )
+ containers.append( container )
+
+ return requirements, containers
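With this change parse_requirements_from_xml returns a two-tuple. A minimal sketch of the new contract (the tool XML here is illustrative):

from xml.etree import ElementTree
from galaxy.tools.deps.requirements import parse_requirements_from_xml

tool_xml = ElementTree.fromstring("""
<tool>
    <requirements>
        <container type="docker">busybox:ubuntu-14.04</container>
    </requirements>
</tool>
""")
requirements, containers = parse_requirements_from_xml(tool_xml)
assert containers[0].identifier == "busybox:ubuntu-14.04"
assert containers[0].type == "docker"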
diff -r 84a00e4f7d06e9adcec7297d8133ce70524dc242 -r 60d8de457dd57adb71e30725a6451126148edd6d test/functional/tools/catDocker.xml
--- /dev/null
+++ b/test/functional/tools/catDocker.xml
@@ -0,0 +1,28 @@
+<tool id="catdc" name="Concatenate datasets (in docker)">
+ <description>tail-to-head</description>
+ <requirements>
+ <container type="docker">busybox:ubuntu-14.04</container>
+ </requirements>
+ <command>
+ echo "Galaxy slots passed through contain as \$GALAXY_SLOTS";
+ cat $input1
+ #for $q in $queries
+ ${q.input2}
+ #end for
+ > $out_file1;
+ echo "Work dir output" > working_file
+
+ </command>
+ <inputs>
+ <param name="input1" type="data" label="Concatenate Dataset"/>
+ <repeat name="queries" title="Dataset">
+ <param name="input2" type="data" label="Select" />
+ </repeat>
+ </inputs>
+ <outputs>
+ <data name="out_file1" format="input" metadata_source="input1"/>
+ <data name="out_file2" format="txt" from_work_dir="working_file" />
+ </outputs>
+ <help>
+ </help>
+</tool>
https://bitbucket.org/galaxy/galaxy-central/commits/646b5877871b/
Changeset: 646b5877871b
User: dannon
Date: 2014-06-12 17:09:04
Summary: Merged in jmchilton/galaxy-central-fork-1 (pull request #401)
Allow tools and deployers to specify optional Docker-based dependency resolution.
Affected #: 10 files
diff -r a19ef9da5e470d14a31ad8385f14168a14556224 -r 646b5877871b3b15c6fc6a9ecf111fceb23ea270 job_conf.xml.sample_advanced
--- a/job_conf.xml.sample_advanced
+++ b/job_conf.xml.sample_advanced
@@ -97,6 +97,90 @@
elements. This empty block will simply disable job metrics for the
corresponding destination. --></destination>
+ <destination id="docker_local" runner="local">
+ <param id="docker_enabled">true</param>
+ <!-- docker_volumes can be used to configure volumes to expose to docker,
+ For added isolation append :ro to the path to mount it read only.
+ Galaxy will attempt to infer a reasonable set of defaults which
+ volumes should be exposed how based on Galaxy's settings and the
+ destination - but be sure to add any library paths or data incides
+ that may be needed read-only.
+ -->
+ <!--
+ <param id="docker_volumes">$defaults,/mnt/galaxyData/libraries:ro,/mnt/galaxyData/indices:ro</param>
+ -->
+ <!-- For a stock Galaxy instance and traditional job runner $defaults will
+ expand out as:
+
+ $galaxy_root:ro,$tool_directory:ro,$working_directory:rw,$default_file_path:rw
+
+ This assumes most of what is needed is available under Galaxy's root directory,
+ the tool directory, and Galaxy's file_path (if you are using the object
+ store creatively you will definitely need to expand the defaults).
+
+ This configuration allows any docker instance to write to any Galaxy
+ file - for greater isolation set outputs_to_working_directory in
+ universe_wsgi.ini. This will cause $defaults to allow writing to much
+ less. It will then expand as follows:
+
+ $galaxy_root:ro,$tool_directory:ro,$working_directory:rw,$default_file_path:ro
+
+ If using the LWR, defaults will be even further restricted because the
+ LWR will (by default) stage all needed inputs into the job's job_directory
+ (so there is no need to allow the docker container to read all the
+ files - let alone write over them). $defaults in this case becomes:
+
+ $job_directory:ro,$tool_directory:ro,$job_directory/outputs:rw,$working_directory:rw
+
+ Python string.Template is used to expand volumes; the values $defaults,
+ $galaxy_root, $default_file_path, $tool_directory, and $working_directory
+ are available to all jobs, and $job_directory is also available for
+ LWR jobs.
+ -->
+ <!-- Control memory allocatable by docker container with following option:
+ -->
+ <!-- <param id="docker_memory">24G</param> -->
+ <!-- By default Docker will need to runnable by Galaxy using
+ password-less sudo - this can be configured by adding the
+ following line to the sudoers file of all compute nodes
+ with docker enabled:
+
+ galaxy ALL = (root) NOPASSWD: SETENV: /usr/bin/docker
+
+ Set the following option to false to disable sudo (docker
+ must likewise be configured to allow this).
+ -->
+ <!-- <param id="docker_sudo">false</param> -->
+ <!-- Following option can be used to tweak sudo command used by
+ default. -->
+ <!-- <param id="docker_sudo_cmd">/usr/bin/sudo --extra_param</param> -->
+ <!-- By default, docker container will not have any networking
+ enabled. host networking can be bridged by uncommenting next option
+ http://docs.docker.io/reference/run/#network-settings
+ -->
+ <!-- <param id="docker_net">bridge</param> -->
+ <!-- Following command can be used to tweak docker command. -->
+ <!-- <param id="docker_cmd">/usr/local/custom_docker/docker</param> -->
+ <!-- Following can be used to connect to docke server in different
+ ways (translated as -H argument to docker client).
+ <!-- <param id="docker_host">unix:///var/run/docker.sock</param> -->
+ <!-- <param id="docker_host">:5555</param> -->
+ <!-- <param id="docker_host">:5555</param> -->
+ <!-- <param id="docker_host">tcp://127.0.0.1:4243</param> -->
+
+ <!-- If deployer wants to use docker for isolation, but does not
+ trust tool's specified container - a destination wide override
+ can be set. This will cause all jobs on this destination to use
+ that docker image. -->
+ <!-- <param id="docker_container_id_override">busybox:ubuntu-14.04</param> -->
+
+ <!-- Likewise, if deployer wants to use docker for isolation and
+ does trust tool's specified container - but also wants tool's not
+ configured to run in a container the following option can provide
+ a fallback. -->
+ <!-- <param id="dockers_default_container_id">busybox:ubuntu-14.04</param> -->
+
+ </destination><destination id="pbs" runner="pbs" tags="mycluster"/><destination id="pbs_longjobs" runner="pbs" tags="mycluster,longjobs"><!-- Define parameters that are native to the job runner plugin. -->
diff -r a19ef9da5e470d14a31ad8385f14168a14556224 -r 646b5877871b3b15c6fc6a9ecf111fceb23ea270 lib/galaxy/config.py
--- a/lib/galaxy/config.py
+++ b/lib/galaxy/config.py
@@ -582,6 +582,16 @@
import galaxy.tools.search
self.toolbox_search = galaxy.tools.search.ToolBoxSearch( self.toolbox )
+ from galaxy.tools.deps import containers
+ galaxy_root_dir = os.path.abspath(self.config.root)
+ file_path = os.path.abspath(getattr(self.config, "file_path"))
+ app_info = containers.AppInfo(
+ galaxy_root_dir,
+ default_file_path=file_path,
+ outputs_to_working_directory=self.config.outputs_to_working_directory
+ )
+ self.container_finder = containers.ContainerFinder(app_info)
+
def _configure_tool_data_tables( self, from_shed_config ):
from galaxy.tools.data import ToolDataTableManager
diff -r a19ef9da5e470d14a31ad8385f14168a14556224 -r 646b5877871b3b15c6fc6a9ecf111fceb23ea270 lib/galaxy/jobs/command_factory.py
--- a/lib/galaxy/jobs/command_factory.py
+++ b/lib/galaxy/jobs/command_factory.py
@@ -1,4 +1,6 @@
from os import getcwd
+from os import chmod
+from os.path import join
from os.path import abspath
CAPTURE_RETURN_CODE = "return_code=$?"
@@ -8,7 +10,14 @@
log = getLogger( __name__ )
-def build_command( runner, job_wrapper, include_metadata=False, include_work_dir_outputs=True, remote_command_params={} ):
+def build_command(
+ runner,
+ job_wrapper,
+ container=None,
+ include_metadata=False,
+ include_work_dir_outputs=True,
+ remote_command_params={}
+):
"""
Compose the sequence of commands necessary to execute a job. This will
currently include:
@@ -29,7 +38,35 @@
__handle_version_command(commands_builder, job_wrapper)
__handle_task_splitting(commands_builder, job_wrapper)
- __handle_dependency_resolution(commands_builder, job_wrapper, remote_command_params)
+
+ # One could imagine also allowing dependencies inside of the container but
+ # that is too sophisticated for a first crack at this - build your
+ # containers ready to go!
+ if not container:
+ __handle_dependency_resolution(commands_builder, job_wrapper, remote_command_params)
+
+ if container:
+ # Stop now and build the command before handling metadata and copying
+ # working directory files back. These should always happen outside of
+ # the docker container - there are no security implications when
+ # generating metadata, Galaxy need not be available to the container,
+ # and not copying workdir outputs back means one can be more restrictive
+ # about where the container can write in some circumstances.
+
+ local_container_script = join( job_wrapper.working_directory, "container.sh" )
+ fh = file( local_container_script, "w" )
+ fh.write( "#!/bin/sh\n%s" % commands_builder.build() )
+ fh.close()
+ chmod( local_container_script, 0755 )
+
+ compute_container_script = local_container_script
+ if 'working_directory' in remote_command_params:
+ compute_container_script = "/bin/sh %s" % join(remote_command_params['working_directory'], "container.sh")
+
+ run_in_container_command = container.containerize_command(
+ compute_container_script
+ )
+ commands_builder = CommandsBuilder( run_in_container_command )
if include_work_dir_outputs:
__handle_work_dir_outputs(commands_builder, job_wrapper, runner, remote_command_params)
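In effect the layering for a containerized job becomes the following (a hedged sketch with a hypothetical path, not code from the commit): the tool command is written to container.sh in the working directory, and the outer command line is replaced by the containerized invocation, while metadata and work-dir output handling stay outside the container.

# Sketch of the wrapping performed above:
run_in_container_command = container.containerize_command(
    "/bin/sh /jobdir/working/container.sh"  # hypothetical staged script path
)
commands_builder = CommandsBuilder(run_in_container_command)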
diff -r a19ef9da5e470d14a31ad8385f14168a14556224 -r 646b5877871b3b15c6fc6a9ecf111fceb23ea270 lib/galaxy/jobs/runners/__init__.py
--- a/lib/galaxy/jobs/runners/__init__.py
+++ b/lib/galaxy/jobs/runners/__init__.py
@@ -177,7 +177,16 @@
raise NotImplementedError()
def build_command_line( self, job_wrapper, include_metadata=False, include_work_dir_outputs=True ):
- return build_command( self, job_wrapper, include_metadata=include_metadata, include_work_dir_outputs=include_work_dir_outputs )
+ # TODO: Eliminate extra kwds no longer used (since LWR skips
+ # abstraction and calls build_command directly).
+ container = self._find_container( job_wrapper )
+ return build_command(
+ self,
+ job_wrapper,
+ include_metadata=include_metadata,
+ include_work_dir_outputs=include_work_dir_outputs,
+ container=container
+ )
def get_work_dir_outputs( self, job_wrapper, job_working_directory=None ):
"""
@@ -276,6 +285,31 @@
if ajs.job_wrapper.get_state() != model.Job.states.DELETED:
self.work_queue.put( ( self.finish_job, ajs ) )
+ def _find_container(
+ self,
+ job_wrapper,
+ compute_working_directory=None,
+ compute_tool_directory=None,
+ compute_job_directory=None
+ ):
+ if not compute_working_directory:
+ compute_working_directory = job_wrapper.working_directory
+
+ if not compute_tool_directory:
+ compute_tool_directory = job_wrapper.tool.tool_dir
+
+ tool = job_wrapper.tool
+ from galaxy.tools.deps import containers
+ tool_info = containers.ToolInfo(tool.containers, tool.requirements)
+ job_info = containers.JobInfo(compute_working_directory, compute_tool_directory, compute_job_directory)
+
+ destination_info = job_wrapper.job_destination.params
+ return self.app.container_finder.find_container(
+ tool_info,
+ destination_info,
+ job_info
+ )
+
class JobState( object ):
"""
Repository URL: https://bitbucket.org/galaxy/galaxy-central/
--
This is a commit notification from bitbucket.org. You are receiving
this because you have the service enabled, addressing the recipient of
this email.
1
0