galaxy-commits
May 2014 archive
2 new commits in galaxy-central:
https://bitbucket.org/galaxy/galaxy-central/commits/2a7b890c09c1/
Changeset: 2a7b890c09c1
Branch: next-stable
User: dannon
Date: 2014-05-29 23:23:51
Summary: Improve assertion logging for upload_store mismatch
Affected #: 1 file
diff -r b8fef1dfa8e8eb5afdeaa96236a0a9990f7c532b -r 2a7b890c09c1fb677378e099a5f0637c2036b310 lib/galaxy/tools/parameters/basic.py
--- a/lib/galaxy/tools/parameters/basic.py
+++ b/lib/galaxy/tools/parameters/basic.py
@@ -498,7 +498,7 @@
# Check that the file is in the right location
local_filename = os.path.abspath( value['path'] )
assert local_filename.startswith( upload_store ), \
- "Filename provided by nginx is not in correct directory"
+ "Filename provided by nginx (%s) is not in correct directory (%s)" % (local_filename, upload_store)
value = dict(
filename=value["name"],
local_filename=local_filename
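For illustration, a minimal standalone sketch of the pattern this changeset adopts: interpolating both paths into the assertion message makes a mismatch self-explanatory. The upload_store path and input below are hypothetical, not Galaxy's configuration:

import os

# Hypothetical upload directory configured for nginx-based uploads.
upload_store = "/galaxy/database/tmp/upload_store"

def validate_upload_path(path):
    # Resolve the reported path before comparing against the store.
    local_filename = os.path.abspath(path)
    assert local_filename.startswith(upload_store), \
        "Filename provided by nginx (%s) is not in correct directory (%s)" \
        % (local_filename, upload_store)
    return local_filename

try:
    validate_upload_path("/tmp/evil")
except AssertionError as err:
    # Both paths now appear in the failure, easing diagnosis:
    # Filename provided by nginx (/tmp/evil) is not in correct
    # directory (/galaxy/database/tmp/upload_store)
    print(err)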
https://bitbucket.org/galaxy/galaxy-central/commits/3409e283965f/
Changeset: 3409e283965f
User: dannon
Date: 2014-05-29 23:24:03
Summary: Merge.
Affected #: 1 file
diff -r 675be4213255a51c84ff52973bc199fa6d2152a1 -r 3409e283965f58ec4bf96594f980a8e6ee2cb058 lib/galaxy/tools/parameters/basic.py
--- a/lib/galaxy/tools/parameters/basic.py
+++ b/lib/galaxy/tools/parameters/basic.py
@@ -498,7 +498,7 @@
# Check that the file is in the right location
local_filename = os.path.abspath( value['path'] )
assert local_filename.startswith( upload_store ), \
- "Filename provided by nginx is not in correct directory"
+ "Filename provided by nginx (%s) is not in correct directory (%s)" % (local_filename, upload_store)
value = dict(
filename=value["name"],
local_filename=local_filename
2 new commits in galaxy-central:
https://bitbucket.org/galaxy/galaxy-central/commits/b8fef1dfa8e8/
Changeset: b8fef1dfa8e8
Branch: next-stable
User: carlfeberhard
Date: 2014-05-29 22:25:44
Summary: Silence log warnings when class names are not found in the visualizations registry applicability tests
Affected #: 1 file
diff -r 3edca70df09ded514c1e6a46701372638c4596a9 -r b8fef1dfa8e8eb5afdeaa96236a0a9990f7c532b lib/galaxy/visualization/registry.py
--- a/lib/galaxy/visualization/registry.py
+++ b/lib/galaxy/visualization/registry.py
@@ -243,10 +243,11 @@
datatype_class_name = test_result
test_result = trans.app.datatypes_registry.get_datatype_class_by_name( datatype_class_name )
if not test_result:
- # warn if can't find class, but continue (with other tests)
- log.warn( 'visualizations_registry cannot find class (%s)' +
- ' for applicability test on: %s, id: %s', datatype_class_name,
- target_object, getattr( target_object, 'id', '' ) )
+ # but continue (with other tests) if can't find class by that name
+ #if self.debug:
+ # log.warn( 'visualizations_registry cannot find class (%s)' +
+ # ' for applicability test on: %s, id: %s', datatype_class_name,
+ # target_object, getattr( target_object, 'id', '' ) )
continue
#NOTE: tests are OR'd, if any test passes - the visualization can be applied
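The commented-out "if self.debug:" block hints at the eventual direction: gate noisy registry warnings behind a debug flag rather than deleting them outright. A minimal sketch of that pattern; the debug flag, class, and logger name here are assumptions for illustration, not Galaxy's actual API:

import logging

logging.basicConfig(level=logging.WARNING)
log = logging.getLogger("visualizations_registry")

class RegistrySketch(object):
    def __init__(self, debug=False):
        self.debug = debug  # hypothetical flag gating noisy warnings

    def applicable(self, datatype_class_name, target_object):
        test_result = None  # stand-in for a failed datatype class lookup
        if not test_result:
            if self.debug:
                log.warning(
                    "visualizations_registry cannot find class (%s)"
                    " for applicability test on: %s, id: %s",
                    datatype_class_name, target_object,
                    getattr(target_object, "id", ""))
            return False  # the real code continues with other tests
        return True

RegistrySketch(debug=False).applicable("FakeDatatype", object())  # silent
RegistrySketch(debug=True).applicable("FakeDatatype", object())   # warns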
https://bitbucket.org/galaxy/galaxy-central/commits/675be4213255/
Changeset: 675be4213255
User: carlfeberhard
Date: 2014-05-29 22:25:59
Summary: merge
Affected #: 1 file
diff -r 34d179b55c859f9c669c73733bde2684cf9d4a9f -r 675be4213255a51c84ff52973bc199fa6d2152a1 lib/galaxy/visualization/registry.py
--- a/lib/galaxy/visualization/registry.py
+++ b/lib/galaxy/visualization/registry.py
@@ -243,10 +243,11 @@
datatype_class_name = test_result
test_result = trans.app.datatypes_registry.get_datatype_class_by_name( datatype_class_name )
if not test_result:
- # warn if can't find class, but continue (with other tests)
- log.warn( 'visualizations_registry cannot find class (%s)' +
- ' for applicability test on: %s, id: %s', datatype_class_name,
- target_object, getattr( target_object, 'id', '' ) )
+ # but continue (with other tests) if can't find class by that name
+ #if self.debug:
+ # log.warn( 'visualizations_registry cannot find class (%s)' +
+ # ' for applicability test on: %s, id: %s', datatype_class_name,
+ # target_object, getattr( target_object, 'id', '' ) )
continue
#NOTE: tests are OR'd, if any test passes - the visualization can be applied
2 new commits in galaxy-central:
https://bitbucket.org/galaxy/galaxy-central/commits/e69024c183a7/
Changeset: e69024c183a7
User: jmchilton
Date: 2014-05-29 07:15:38
Summary: Unit tests for RunnerParams.
Small changes to make this easier - mostly just defining constants.
Affected #: 2 files
diff -r cc0f9182fcb0b18730dd79c6889de4c8a58d4446 -r e69024c183a7403bae7b0801faa8f0db5fc1e5c0 lib/galaxy/jobs/runners/__init__.py
--- a/lib/galaxy/jobs/runners/__init__.py
+++ b/lib/galaxy/jobs/runners/__init__.py
@@ -24,20 +24,27 @@
STOP_SIGNAL = object()
+JOB_RUNNER_PARAMETER_UNKNOWN_MESSAGE = "Invalid job runner parameter for this plugin: %s"
+JOB_RUNNER_PARAMETER_MAP_PROBLEM_MESSAGE = "Job runner parameter '%s' value '%s' could not be converted to the correct type"
+JOB_RUNNER_PARAMETER_VALIDATION_FAILED_MESSAGE = "Job runner parameter %s failed validation"
+
+
class RunnerParams( object ):
def __init__( self, specs=None, params=None ):
self.specs = specs or dict()
self.params = params or dict()
for name, value in self.params.items():
- assert name in self.specs, 'Invalid job runner parameter for this plugin: %s' % name
+ assert name in self.specs, JOB_RUNNER_PARAMETER_UNKNOWN_MESSAGE % name
if 'map' in self.specs[ name ]:
try:
self.params[ name ] = self.specs[ name ][ 'map' ]( value )
- except Exception, e:
- raise Exception( 'Job runner parameter "%s" value "%s" could not be converted to the correct type: %s' % ( name, value, e ) )
+ except Exception:
+ message = JOB_RUNNER_PARAMETER_MAP_PROBLEM_MESSAGE % ( name, value )
+ log.exception(message)
+ raise Exception( message )
if 'valid' in self.specs[ name ]:
- assert self.specs[ name ][ 'valid' ]( value ), 'Job runner parameter %s failed validation' % name
+ assert self.specs[ name ][ 'valid' ]( value ), JOB_RUNNER_PARAMETER_VALIDATION_FAILED_MESSAGE % name
def __getattr__( self, name ):
return self.params.get( name, self.specs[ name ][ 'default' ] )
@@ -46,13 +53,15 @@
class BaseJobRunner( object ):
+ DEFAULT_SPECS = dict( recheck_missing_job_retries=dict( map=int, valid=lambda x: x >= 0, default=0 ) )
+
def __init__( self, app, nworkers, **kwargs ):
"""Start the job runner
"""
self.app = app
self.sa_session = app.model.context
self.nworkers = nworkers
- runner_param_specs = dict( recheck_missing_job_retries=dict( map=int, valid=lambda x: x >= 0, default=0 ) )
+ runner_param_specs = self.DEFAULT_SPECS.copy()
if 'runner_param_specs' in kwargs:
runner_param_specs.update( kwargs.pop( 'runner_param_specs' ) )
if kwargs:
diff -r cc0f9182fcb0b18730dd79c6889de4c8a58d4446 -r e69024c183a7403bae7b0801faa8f0db5fc1e5c0 test/unit/jobs/test_runner_params.py
--- /dev/null
+++ b/test/unit/jobs/test_runner_params.py
@@ -0,0 +1,48 @@
+from galaxy.jobs import runners
+
+
+def test_default_specs():
+ # recheck_missing_job_retries is integer >= 0
+ params = runners.RunnerParams( specs=runners.BaseJobRunner.DEFAULT_SPECS, params=dict( recheck_missing_job_retries="1" ) )
+ assert params.recheck_missing_job_retries == 1
+ assert params["recheck_missing_job_retries"] == 1
+
+ exception_raised = False
+ try:
+ runners.RunnerParams( specs=runners.BaseJobRunner.DEFAULT_SPECS, params=dict( recheck_missing_job_retries=-1 ) )
+ except Exception:
+ exception_raised = True
+ assert exception_raised
+
+
+def test_missing_parameter():
+ exception = None
+ try:
+ runners.RunnerParams( specs={}, params=dict( foo="bar" ) )
+ except Exception as e:
+ exception = e
+ assert exception.message == runners.JOB_RUNNER_PARAMETER_UNKNOWN_MESSAGE % "foo"
+
+
+def test_invalid_parameter():
+ exception = None
+ try:
+ runners.RunnerParams( specs=dict( foo=dict( valid=lambda x: x != "bar", defualt="baz" ) ), params=dict( foo="bar" ) )
+ except Exception as e:
+ exception = e
+ assert exception.message == runners.JOB_RUNNER_PARAMETER_VALIDATION_FAILED_MESSAGE % "foo"
+
+
+def test_map_problem():
+ exception = None
+ try:
+ runners.RunnerParams( specs=dict( foo=dict( map=lambda x: 1 / 0, default="baz" ) ), params=dict( foo="bar" ) )
+ except Exception as e:
+ exception = e
+ assert exception.message == runners.JOB_RUNNER_PARAMETER_MAP_PROBLEM_MESSAGE % ( "foo", "bar" )
+
+
+def test_param_default():
+ runner_params = runners.RunnerParams( specs=dict( foo=dict( default="baz" ) ), params={} )
+ assert runner_params["foo"] == "baz"
+ assert runner_params.foo == "baz"
https://bitbucket.org/galaxy/galaxy-central/commits/34d179b55c85/
Changeset: 34d179b55c85
User: jmchilton
Date: 2014-05-29 07:15:38
Summary: Generalize RunnerParams.
Create ParamsWithSpecs class in galaxy.util. If we are going to build out specs for all LWR client parameters, it would be nice if they could be reused on the LWR server side where relevant (e.g. AMQP connection parameters).
Slightly modified implementation that extends collections.defaultdict so that operations like iterating over parameters, fetching keys, etc. are available, letting LWR runner param handling code look like destination param handling code.
Affected #: 2 files
diff -r e69024c183a7403bae7b0801faa8f0db5fc1e5c0 -r 34d179b55c859f9c669c73733bde2684cf9d4a9f lib/galaxy/jobs/runners/__init__.py
--- a/lib/galaxy/jobs/runners/__init__.py
+++ b/lib/galaxy/jobs/runners/__init__.py
@@ -16,6 +16,7 @@
from galaxy import model
from galaxy.util import DATABASE_MAX_STRING_SIZE, shrink_stream_by_size
from galaxy.util import in_directory
+from galaxy.util import ParamsWithSpecs
from galaxy.jobs.runners.util.job_script import job_script
from galaxy.jobs.runners.util.env import env_to_statement
@@ -29,27 +30,16 @@
JOB_RUNNER_PARAMETER_VALIDATION_FAILED_MESSAGE = "Job runner parameter %s failed validation"
-class RunnerParams( object ):
+class RunnerParams( ParamsWithSpecs ):
- def __init__( self, specs=None, params=None ):
- self.specs = specs or dict()
- self.params = params or dict()
- for name, value in self.params.items():
- assert name in self.specs, JOB_RUNNER_PARAMETER_UNKNOWN_MESSAGE % name
- if 'map' in self.specs[ name ]:
- try:
- self.params[ name ] = self.specs[ name ][ 'map' ]( value )
- except Exception:
- message = JOB_RUNNER_PARAMETER_MAP_PROBLEM_MESSAGE % ( name, value )
- log.exception(message)
- raise Exception( message )
- if 'valid' in self.specs[ name ]:
- assert self.specs[ name ][ 'valid' ]( value ), JOB_RUNNER_PARAMETER_VALIDATION_FAILED_MESSAGE % name
+ def _param_unknown_error( self, name ):
+ raise Exception( JOB_RUNNER_PARAMETER_UNKNOWN_MESSAGE % name )
- def __getattr__( self, name ):
- return self.params.get( name, self.specs[ name ][ 'default' ] )
+ def _param_map_error( self, name, value ):
+ raise Exception( JOB_RUNNER_PARAMETER_MAP_PROBLEM_MESSAGE % ( name, value ) )
- __getitem__ = __getattr__
+ def _param_vaildation_error( self, name, value ):
+ raise Exception( JOB_RUNNER_PARAMETER_VALIDATION_FAILED_MESSAGE % name )
class BaseJobRunner( object ):
diff -r e69024c183a7403bae7b0801faa8f0db5fc1e5c0 -r 34d179b55c859f9c669c73733bde2684cf9d4a9f lib/galaxy/util/__init__.py
--- a/lib/galaxy/util/__init__.py
+++ b/lib/galaxy/util/__init__.py
@@ -7,6 +7,7 @@
from __future__ import absolute_import
import binascii
+import collections
import errno
import grp
import json
@@ -693,6 +694,44 @@
def string_to_object( s ):
return pickle.loads( binascii.unhexlify( s ) )
+
+class ParamsWithSpecs( collections.defaultdict ):
+ """
+ """
+
+ def __init__( self, specs=None, params=None ):
+ self.specs = specs or dict()
+ self.params = params or dict()
+ for name, value in self.params.items():
+ if name not in self.specs:
+ self._param_unknown_error( name )
+ if 'map' in self.specs[ name ]:
+ try:
+ self.params[ name ] = self.specs[ name ][ 'map' ]( value )
+ except Exception:
+ self._param_map_error( name, value )
+ if 'valid' in self.specs[ name ]:
+ if not self.specs[ name ][ 'valid' ]( value ):
+ self._param_vaildation_error( name, value )
+
+ self.update( self.params )
+
+ def __missing__( self, name ):
+ return self.specs[ name ][ 'default' ]
+
+ def __getattr__( self, name ):
+ return self[ name ]
+
+ def _param_unknown_error( self, name ):
+ raise NotImplementedError()
+
+ def _param_map_error( self, name, value ):
+ raise NotImplementedError()
+
+ def _param_vaildation_error( self, name, value ):
+ raise NotImplementedError()
+
+
def compare_urls( url1, url2, compare_scheme=True, compare_hostname=True, compare_path=True ):
url1 = urlparse( url1 )
url2 = urlparse( url2 )
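Since the commit message emphasizes the defaultdict-based design, here is a minimal self-contained sketch of the same idea: validated params behave like an ordinary dict, while unset keys fall back to spec defaults via __missing__. Class and parameter names are illustrative, not the Galaxy API:

import collections

class ParamsWithSpecsSketch(collections.defaultdict):
    """Params dict whose missing keys fall back to spec defaults."""

    def __init__(self, specs, params):
        super(ParamsWithSpecsSketch, self).__init__()
        self.specs = specs
        for name, value in params.items():
            if name not in specs:
                raise Exception("unknown parameter: %s" % name)
            if "map" in specs[name]:
                value = specs[name]["map"](value)
            if "valid" in specs[name] and not specs[name]["valid"](value):
                raise Exception("parameter %s failed validation" % name)
            self[name] = value

    def __missing__(self, name):
        # Unset parameters resolve to their spec-declared default.
        return self.specs[name]["default"]

    def __getattr__(self, name):
        # Attribute access mirrors item access, as in the changeset.
        return self[name]

specs = dict(retries=dict(map=int, valid=lambda x: x >= 0, default=0))
params = ParamsWithSpecsSketch(specs, dict(retries="3"))
print(params.retries)                            # 3, mapped from "3"
print(sorted(params.items()))                    # iterable like a plain dict
print(ParamsWithSpecsSketch(specs, {}).retries)  # 0, the spec default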
2 new commits in galaxy-central:
https://bitbucket.org/galaxy/galaxy-central/commits/3edca70df09d/
Changeset: 3edca70df09d
Branch: next-stable
User: jmchilton
Date: 2014-05-29 04:14:52
Summary: More MQ-LWR documentation.
Mention pycurl dependency when using LWR client in this fashion.
Affected #: 1 file
diff -r eed45027e219df9c4fc87fcfc788380ecc500b1a -r 3edca70df09ded514c1e6a46701372638c4596a9 job_conf.xml.sample_advanced
--- a/job_conf.xml.sample_advanced
+++ b/job_conf.xml.sample_advanced
@@ -204,6 +204,10 @@
using message queues (in the more traditional mode Galaxy sends
files to and pull files from the LWR - this is obviously less
appropriate when using a message queue).
+
+ The default_file_action currently requires pycurl be available
+ to Galaxy (presumably in its virtualenv). Making this dependency
+ optional is an open task.
--><param id="default_file_action">remote_transfer</param></destination>
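Since the comment warns that remote_transfer needs pycurl importable by Galaxy, a deployment can fail fast if the dependency is missing. A minimal sketch of such a check; it is an assumption for illustration, not something Galaxy ships:

def pycurl_available():
    """Return True if pycurl is importable, as remote_transfer requires."""
    try:
        import pycurl  # noqa: F401 - imported only to probe availability
        return True
    except ImportError:
        return False

if not pycurl_available():
    raise RuntimeError(
        "default_file_action=remote_transfer requires pycurl in"
        " Galaxy's virtualenv (pip install pycurl)")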
https://bitbucket.org/galaxy/galaxy-central/commits/cc0f9182fcb0/
Changeset: cc0f9182fcb0
User: jmchilton
Date: 2014-05-29 04:15:08
Summary: Merge latest next-stable.
Affected #: 1 file
diff -r f415f6c5ecf22720f5bc8d8ae983cbc713831890 -r cc0f9182fcb0b18730dd79c6889de4c8a58d4446 job_conf.xml.sample_advanced
--- a/job_conf.xml.sample_advanced
+++ b/job_conf.xml.sample_advanced
@@ -204,6 +204,10 @@
using message queues (in the more traditional mode Galaxy sends
files to and pull files from the LWR - this is obviously less
appropriate when using a message queue).
+
+ The default_file_action currently requires pycurl be available
+ to Galaxy (presumably in its virtualenv). Making this dependency
+ optional is an open task.
--><param id="default_file_action">remote_transfer</param></destination>
2 new commits in galaxy-central:
https://bitbucket.org/galaxy/galaxy-central/commits/fa67fbfa58e1/
Changeset: fa67fbfa58e1
User: jmchilton
Date: 2014-05-29 03:25:53
Summary: Bugfix for a06c6c9.
With improved testing :).
Affected #: 2 files
diff -r 85aa79e9ab9a87a5b5fea0ed1039aeb9a5c23642 -r fa67fbfa58e16ad2f1a18c9e53fac2b02c7ab7f0 lib/galaxy/dataset_collections/__init__.py
--- a/lib/galaxy/dataset_collections/__init__.py
+++ b/lib/galaxy/dataset_collections/__init__.py
@@ -147,7 +147,10 @@
):
assert source == "hdca" # for now
source_hdca = self.__get_history_collection_instance( trans, encoded_source_id )
- parent.add_dataset_collection( source_hdca.copy() )
+ new_hdca = source_hdca.copy()
+ parent.add_dataset_collection( new_hdca )
+ trans.sa_session.add( new_hdca )
+ trans.sa_session.flush()
return source_hdca
def _set_from_dict( self, trans, dataset_collection_instance, new_data ):
diff -r 85aa79e9ab9a87a5b5fea0ed1039aeb9a5c23642 -r fa67fbfa58e16ad2f1a18c9e53fac2b02c7ab7f0 test/api/test_history_contents.py
--- a/test/api/test_history_contents.py
+++ b/test/api/test_history_contents.py
@@ -150,8 +150,10 @@
source='hdca',
content=hdca_id,
)
+ assert len( self._get( "histories/%s/contents/dataset_collections" % second_history_id ).json() ) == 0
create_response = self._post( "histories/%s/contents/dataset_collections" % second_history_id, create_data )
self.__check_create_collection_response( create_response )
+ assert len( self._get( "histories/%s/contents/dataset_collections" % second_history_id ).json() ) == 1
def __check_create_collection_response( self, response ):
self._assert_status_code_is( response, 200 )
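The underlying issue follows the standard SQLAlchemy pattern: a newly created object must be added to the session and flushed before later queries in the same request can see it. A generic sketch of that pattern, assuming the SQLAlchemy 1.4+ API and a stand-in model rather than Galaxy's:

from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Collection(Base):
    __tablename__ = "collection"
    id = Column(Integer, primary_key=True)
    name = Column(String)

engine = create_engine("sqlite://")  # in-memory database for the sketch
Base.metadata.create_all(engine)
session = Session(engine)

copied = Collection(name="copied hdca")  # stand-in for source_hdca.copy()
session.add(copied)   # without add(), the copy is never persisted
session.flush()       # flush makes it visible to queries in this session
assert session.query(Collection).count() == 1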
https://bitbucket.org/galaxy/galaxy-central/commits/f415f6c5ecf2/
Changeset: f415f6c5ecf2
User: jmchilton
Date: 2014-05-29 03:25:53
Summary: More extract functional tests.
Add test for extracting copied, mapped datasets and copied input dataset collections from history.
Affected #: 2 files
diff -r fa67fbfa58e16ad2f1a18c9e53fac2b02c7ab7f0 -r f415f6c5ecf22720f5bc8d8ae983cbc713831890 lib/galaxy/workflow/extract.py
--- a/lib/galaxy/workflow/extract.py
+++ b/lib/galaxy/workflow/extract.py
@@ -2,6 +2,7 @@
histories.
"""
from galaxy.util.odict import odict
+from galaxy import exceptions
from galaxy import model
from galaxy.tools.parameters.basic import (
DataToolParameter,
@@ -88,6 +89,8 @@
for hid in dataset_collection_ids:
step = model.WorkflowStep()
step.type = 'data_collection_input'
+ if hid not in summary.collection_types:
+ raise exceptions.RequestParameterInvalidException( "hid %s does not appear to be a collection" % hid )
collection_type = summary.collection_types[ hid ]
step.tool_inputs = dict( name="Input Dataset Collection", collection_type=collection_type )
hid_to_output_pair[ hid ] = ( step, 'output' )
diff -r fa67fbfa58e16ad2f1a18c9e53fac2b02c7ab7f0 -r f415f6c5ecf22720f5bc8d8ae983cbc713831890 test/api/test_workflows.py
--- a/test/api/test_workflows.py
+++ b/test/api/test_workflows.py
@@ -106,12 +106,7 @@
offset = 0
old_contents = self._get( "histories/%s/contents" % old_history_id ).json()
for old_dataset in old_contents:
- payload = dict(
- source="hda",
- content=old_dataset["id"]
- )
- response = self._post( "histories/%s/contents/datasets" % history_id, payload )
- self._assert_status_code_is( response, 200 )
+ self.__copy_content_to_history( history_id, old_dataset )
new_contents = self._get( "histories/%s/contents" % history_id ).json()
input_hids = map( lambda c: c[ "hid" ], new_contents[ (offset + 0):(offset + 2) ] )
cat1_job_id = self.__job_id( history_id, new_contents[ (offset + 2) ][ "id" ] )
@@ -186,40 +181,33 @@
@skip_without_tool( "random_lines1" )
def test_extract_mapping_workflow_from_history( self ):
history_id = self.dataset_populator.new_history()
- hdca = self.dataset_collection_populator.create_pair_in_history( history_id, contents=["1 2 3\n4 5 6", "7 8 9\n10 11 10"] ).json()
- hdca_id = hdca[ "id" ]
- inputs1 = {
- "input|__collection_multirun__": hdca_id,
- "num_lines": 2
- }
- implicit_hdca1, job_id1 = self._run_tool_get_collection_and_job_id( history_id, "random_lines1", inputs1 )
- inputs2 = {
- "input|__collection_multirun__": implicit_hdca1[ "id" ],
- "num_lines": 1
- }
- _, job_id2 = self._run_tool_get_collection_and_job_id( history_id, "random_lines1", inputs2 )
+ hdca, job_id1, job_id2 = self.__run_random_lines_mapped_over_pair( history_id )
downloaded_workflow = self._extract_and_download_workflow(
from_history_id=history_id,
dataset_collection_ids=dumps( [ hdca[ "hid" ] ] ),
job_ids=dumps( [ job_id1, job_id2 ] ),
workflow_name="test import from mapping history",
)
- # Assert workflow is input connected to a tool step with one output
- # connected to another tool step.
- assert len( downloaded_workflow[ "steps" ] ) == 3
- collect_step_idx = self._assert_first_step_is_paired_input( downloaded_workflow )
- tool_steps = self._get_steps_of_type( downloaded_workflow, "tool", expected_len=2 )
- tool_step_idxs = []
- tool_input_step_idxs = []
- for tool_step in tool_steps:
- self._assert_has_key( tool_step[ "input_connections" ], "input" )
- input_step_idx = tool_step[ "input_connections" ][ "input" ][ "id" ]
- tool_step_idxs.append( tool_step[ "id" ] )
- tool_input_step_idxs.append( input_step_idx )
+ self.__assert_looks_like_randomlines_mapping_workflow( downloaded_workflow )
- assert collect_step_idx not in tool_step_idxs
- assert tool_input_step_idxs[ 0 ] == collect_step_idx
- assert tool_input_step_idxs[ 1 ] == tool_step_idxs[ 0 ]
+ def test_extract_copied_mapping_from_history( self ):
+ old_history_id = self.dataset_populator.new_history()
+ hdca, job_id1, job_id2 = self.__run_random_lines_mapped_over_pair( old_history_id )
+
+ history_id = self.dataset_populator.new_history()
+ old_contents = self._get( "histories/%s/contents" % old_history_id ).json()
+ for old_content in old_contents:
+ self.__copy_content_to_history( history_id, old_content )
+ # API test is somewhat contrived since there is no good way
+ # to retrieve job_id1, job_id2 like this for copied dataset
+ # collections I don't think.
+ downloaded_workflow = self._extract_and_download_workflow(
+ from_history_id=history_id,
+ dataset_collection_ids=dumps( [ hdca[ "hid" ] ] ),
+ job_ids=dumps( [ job_id1, job_id2 ] ),
+ workflow_name="test import from history",
+ )
+ self.__assert_looks_like_randomlines_mapping_workflow( downloaded_workflow )
@skip_without_tool( "random_lines1" )
@skip_without_tool( "multi_data_param" )
@@ -259,6 +247,56 @@
reduction_step_input = reduction_step[ "input_connections" ][ "f1" ]
assert reduction_step_input[ "id"] == random_lines_map_step[ "id" ]
+ def __copy_content_to_history( self, history_id, content ):
+ if content[ "history_content_type" ] == "dataset":
+ payload = dict(
+ source="hda",
+ content=content["id"]
+ )
+ response = self._post( "histories/%s/contents/datasets" % history_id, payload )
+
+ else:
+ payload = dict(
+ source="hdca",
+ content=content["id"]
+ )
+ response = self._post( "histories/%s/contents/dataset_collections" % history_id, payload )
+ self._assert_status_code_is( response, 200 )
+ return response.json()
+
+ def __run_random_lines_mapped_over_pair( self, history_id ):
+ hdca = self.dataset_collection_populator.create_pair_in_history( history_id, contents=["1 2 3\n4 5 6", "7 8 9\n10 11 10"] ).json()
+ hdca_id = hdca[ "id" ]
+ inputs1 = {
+ "input|__collection_multirun__": hdca_id,
+ "num_lines": 2
+ }
+ implicit_hdca1, job_id1 = self._run_tool_get_collection_and_job_id( history_id, "random_lines1", inputs1 )
+ inputs2 = {
+ "input|__collection_multirun__": implicit_hdca1[ "id" ],
+ "num_lines": 1
+ }
+ _, job_id2 = self._run_tool_get_collection_and_job_id( history_id, "random_lines1", inputs2 )
+ return hdca, job_id1, job_id2
+
+ def __assert_looks_like_randomlines_mapping_workflow( self, downloaded_workflow ):
+ # Assert workflow is input connected to a tool step with one output
+ # connected to another tool step.
+ assert len( downloaded_workflow[ "steps" ] ) == 3
+ collect_step_idx = self._assert_first_step_is_paired_input( downloaded_workflow )
+ tool_steps = self._get_steps_of_type( downloaded_workflow, "tool", expected_len=2 )
+ tool_step_idxs = []
+ tool_input_step_idxs = []
+ for tool_step in tool_steps:
+ self._assert_has_key( tool_step[ "input_connections" ], "input" )
+ input_step_idx = tool_step[ "input_connections" ][ "input" ][ "id" ]
+ tool_step_idxs.append( tool_step[ "id" ] )
+ tool_input_step_idxs.append( input_step_idx )
+
+ assert collect_step_idx not in tool_step_idxs
+ assert tool_input_step_idxs[ 0 ] == collect_step_idx
+ assert tool_input_step_idxs[ 1 ] == tool_step_idxs[ 0 ]
+
def _run_tool_get_collection_and_job_id( self, history_id, tool_id, inputs ):
run_output1 = self.dataset_populator.run_tool(
tool_id=tool_id,
2 new commits in galaxy-central:
https://bitbucket.org/galaxy/galaxy-central/commits/f8c6577665c3/
Changeset: f8c6577665c3
User: jgoecks
Date: 2014-05-28 23:20:15
Summary: Enhance TabularChunkedView to load the first chunk from the server if it's not already available. This enables TabularChunkedView to be loaded dynamically rather than relying on bootstrapped data.
Affected #: 1 file
diff -r 31740dffaf9ae4e345fd2d4de4ab44ed133dd567 -r f8c6577665c383645e934629784ece516b498d9b static/scripts/mvc/data.js
--- a/static/scripts/mvc/data.js
+++ b/static/scripts/mvc/data.js
@@ -145,17 +145,24 @@
header_row.append('<th>' + column_names.join('</th><th>') + '</th>');
}
- // Add first chunk.
- var first_chunk = this.model.get('first_data_chunk');
+ // Render first chunk.
+ var self = this,
+ first_chunk = this.model.get('first_data_chunk');
if (first_chunk) {
+ // First chunk is bootstrapped, so render now.
this._renderChunk(first_chunk);
}
+ else {
+ // No bootstrapping, so get first chunk and then render.
+ $.when(self.model.get_next_chunk()).then(function(result) {
+ self._renderChunk(result);
+ });
+ }
// -- Show new chunks during scrolling. --
- var self = this,
- // Flag to ensure that only one chunk is loaded at a time.
- loading_chunk = false;
+ // Flag to ensure that only one chunk is loaded at a time.
+ var loading_chunk = false;
// Set up chunk loading when scrolling using the scrolling element.
this.scroll_elt.scroll(function() {
https://bitbucket.org/galaxy/galaxy-central/commits/85aa79e9ab9a/
Changeset: 85aa79e9ab9a
User: jgoecks
Date: 2014-05-28 23:21:04
Summary: Automated merge
Affected #: 19 files
diff -r f8c6577665c383645e934629784ece516b498d9b -r 85aa79e9ab9a87a5b5fea0ed1039aeb9a5c23642 lib/galaxy/dataset_collections/__init__.py
--- a/lib/galaxy/dataset_collections/__init__.py
+++ b/lib/galaxy/dataset_collections/__init__.py
@@ -138,6 +138,18 @@
changed = self._set_from_dict( trans, dataset_collection_instance, payload )
return changed
+ def copy(
+ self,
+ trans,
+ parent, # PRECONDITION: security checks on ability to add to parent occurred during load.
+ source,
+ encoded_source_id,
+ ):
+ assert source == "hdca" # for now
+ source_hdca = self.__get_history_collection_instance( trans, encoded_source_id )
+ parent.add_dataset_collection( source_hdca.copy() )
+ return source_hdca
+
def _set_from_dict( self, trans, dataset_collection_instance, new_data ):
# Blatantly stolen from UsesHistoryDatasetAssociationMixin.set_hda_from_dict.
diff -r f8c6577665c383645e934629784ece516b498d9b -r 85aa79e9ab9a87a5b5fea0ed1039aeb9a5c23642 lib/galaxy/webapps/galaxy/api/history_contents.py
--- a/lib/galaxy/webapps/galaxy/api/history_contents.py
+++ b/lib/galaxy/webapps/galaxy/api/history_contents.py
@@ -292,9 +292,28 @@
return hda_dict
def __create_dataset_collection( self, trans, history, payload, **kwd ):
- create_params = api_payload_to_create_params( payload )
+ source = kwd.get("source", "new_collection")
service = trans.app.dataset_collections_service
- dataset_collection_instance = service.create( trans, parent=history, **create_params )
+ if source == "new_collection":
+ create_params = api_payload_to_create_params( payload )
+ dataset_collection_instance = service.create(
+ trans,
+ parent=history,
+ **create_params
+ )
+ elif source == "hdca":
+ content = payload.get( 'content', None )
+ if content is None:
+ raise exceptions.RequestParameterMissingException( "'content' id of target to copy is missing" )
+ dataset_collection_instance = service.copy(
+ trans=trans,
+ parent=history,
+ source="hdca",
+ encoded_source_id=content,
+ )
+ else:
+ message = "Invalid 'source' parameter in request %s" % source
+ raise exceptions.RequestParameterInvalidException(message)
return self.__collection_dict( trans, dataset_collection_instance, view="element" )
@expose_api_anonymous
diff -r f8c6577665c383645e934629784ece516b498d9b -r 85aa79e9ab9a87a5b5fea0ed1039aeb9a5c23642 lib/galaxy/webapps/tool_shed/controllers/repository.py
--- a/lib/galaxy/webapps/tool_shed/controllers/repository.py
+++ b/lib/galaxy/webapps/tool_shed/controllers/repository.py
@@ -3120,12 +3120,12 @@
options_dict = hg_util.get_mercurial_default_options_dict( 'diff' )
# Not quite sure if the following settings make any difference, but with a combination of them and the size check on each
# diff, we don't run out of memory when viewing the changelog of the cisortho2 repository on the test tool shed.
- options_dict[ 'maxfile' ] = suc.MAXDIFFSIZE
- options_dict[ 'maxtotal' ] = suc.MAXDIFFSIZE
+ options_dict[ 'maxfile' ] = basic_util.MAXDIFFSIZE
+ options_dict[ 'maxtotal' ] = basic_util.MAXDIFFSIZE
diffopts = mdiff.diffopts( **options_dict )
for diff in patch.diff( repo, node1=ctx_parent.node(), node2=ctx.node(), opts=diffopts ):
- if len( diff ) > suc.MAXDIFFSIZE:
- diff = util.shrink_string_by_size( diff, suc.MAXDIFFSIZE )
+ if len( diff ) > basic_util.MAXDIFFSIZE:
+ diff = util.shrink_string_by_size( diff, basic_util.MAXDIFFSIZE )
diffs.append( basic_util.to_html_string( diff ) )
modified, added, removed, deleted, unknown, ignored, clean = repo.status( node1=ctx_parent.node(), node2=ctx.node() )
anchors = modified + added + removed + deleted + unknown + ignored + clean
diff -r f8c6577665c383645e934629784ece516b498d9b -r 85aa79e9ab9a87a5b5fea0ed1039aeb9a5c23642 lib/tool_shed/galaxy_install/install_manager.py
--- a/lib/tool_shed/galaxy_install/install_manager.py
+++ b/lib/tool_shed/galaxy_install/install_manager.py
@@ -1,5 +1,7 @@
import logging
import os
+import sys
+import traceback
from galaxy import eggs
@@ -11,7 +13,6 @@
from tool_shed.util import tool_dependency_util
-from tool_shed.galaxy_install.tool_dependencies import td_common_util
from tool_shed.galaxy_install.tool_dependencies.recipe.env_file_builder import EnvFileBuilder
from tool_shed.galaxy_install.tool_dependencies.recipe.install_environment import InstallEnvironment
from tool_shed.galaxy_install.tool_dependencies.recipe.recipe_manager import StepManager
@@ -25,6 +26,10 @@
class InstallManager( object ):
+ def format_traceback( self ):
+ ex_type, ex, tb = sys.exc_info()
+ return ''.join( traceback.format_tb( tb ) )
+
def get_tool_shed_repository_install_dir( self, app, tool_shed_repository ):
return os.path.abspath( tool_shed_repository.repo_files_directory( app ) )
@@ -112,7 +117,7 @@
log.exception( 'Error installing tool dependency %s version %s.', str( tool_dependency.name ), str( tool_dependency.version ) )
# Since there was an installation error, update the tool dependency status to Error. The remove_installation_path option must
# be left False here.
- error_message = '%s\n%s' % ( td_common_util.format_traceback(), str( e ) )
+ error_message = '%s\n%s' % ( self.format_traceback(), str( e ) )
tool_dependency = tool_dependency_util.handle_tool_dependency_installation_error( app,
tool_dependency,
error_message,
diff -r f8c6577665c383645e934629784ece516b498d9b -r 85aa79e9ab9a87a5b5fea0ed1039aeb9a5c23642 lib/tool_shed/galaxy_install/tool_dependencies/env_manager.py
--- /dev/null
+++ b/lib/tool_shed/galaxy_install/tool_dependencies/env_manager.py
@@ -0,0 +1,147 @@
+import logging
+import os
+import sys
+from tool_shed.util import common_util
+import tool_shed.util.shed_util_common as suc
+
+log = logging.getLogger( __name__ )
+
+
+class EnvManager( object ):
+
+ def __init__( self, app ):
+ self.app = app
+
+ def create_env_var_dict( self, elem, install_environment ):
+ env_var_name = elem.get( 'name', 'PATH' )
+ env_var_action = elem.get( 'action', 'prepend_to' )
+ env_var_text = None
+ tool_dependency_install_dir = install_environment.install_dir
+ tool_shed_repository_install_dir = install_environment.tool_shed_repository_install_dir
+ if elem.text and elem.text.find( 'REPOSITORY_INSTALL_DIR' ) >= 0:
+ if tool_shed_repository_install_dir and elem.text.find( '$REPOSITORY_INSTALL_DIR' ) != -1:
+ env_var_text = elem.text.replace( '$REPOSITORY_INSTALL_DIR', tool_shed_repository_install_dir )
+ return dict( name=env_var_name, action=env_var_action, value=env_var_text )
+ else:
+ env_var_text = elem.text.replace( '$REPOSITORY_INSTALL_DIR', tool_dependency_install_dir )
+ return dict( name=env_var_name, action=env_var_action, value=env_var_text )
+ if elem.text and elem.text.find( 'INSTALL_DIR' ) >= 0:
+ if tool_dependency_install_dir:
+ env_var_text = elem.text.replace( '$INSTALL_DIR', tool_dependency_install_dir )
+ return dict( name=env_var_name, action=env_var_action, value=env_var_text )
+ else:
+ env_var_text = elem.text.replace( '$INSTALL_DIR', tool_shed_repository_install_dir )
+ return dict( name=env_var_name, action=env_var_action, value=env_var_text )
+ if elem.text:
+ # Allow for environment variables that contain neither REPOSITORY_INSTALL_DIR nor INSTALL_DIR
+ # since there may be command line parameters that are tuned for a Galaxy instance. Allowing them
+ # to be set in one location rather than being hard coded into each tool config is the best approach.
+ # For example:
+ # <environment_variable name="GATK2_SITE_OPTIONS" action="set_to">
+ # "--num_threads 4 --num_cpu_threads_per_data_thread 3 --phone_home STANDARD"
+ # </environment_variable>
+ return dict( name=env_var_name, action=env_var_action, value=elem.text)
+ return None
+
+ def get_env_shell_file_path( self, installation_directory ):
+ env_shell_file_name = 'env.sh'
+ default_location = os.path.abspath( os.path.join( installation_directory, env_shell_file_name ) )
+ if os.path.exists( default_location ):
+ return default_location
+ for root, dirs, files in os.walk( installation_directory ):
+ for name in files:
+ if name == env_shell_file_name:
+ return os.path.abspath( os.path.join( root, name ) )
+ return None
+
+ def get_env_shell_file_paths( self, elem ):
+ # Currently only the following tag set is supported.
+ # <repository toolshed="http://localhost:9009/" name="package_numpy_1_7" owner="test" changeset_revision="c84c6a8be056">
+ # <package name="numpy" version="1.7.1" />
+ # </repository>
+ env_shell_file_paths = []
+ toolshed = elem.get( 'toolshed', None )
+ repository_name = elem.get( 'name', None )
+ repository_owner = elem.get( 'owner', None )
+ changeset_revision = elem.get( 'changeset_revision', None )
+ if toolshed and repository_name and repository_owner and changeset_revision:
+ # The protocol is not stored, but the port is if it exists.
+ toolshed = common_util.remove_protocol_from_tool_shed_url( toolshed )
+ repository = suc.get_repository_for_dependency_relationship( self.app,
+ toolshed,
+ repository_name,
+ repository_owner,
+ changeset_revision )
+ if repository:
+ for sub_elem in elem:
+ tool_dependency_type = sub_elem.tag
+ tool_dependency_name = sub_elem.get( 'name' )
+ tool_dependency_version = sub_elem.get( 'version' )
+ if tool_dependency_type and tool_dependency_name and tool_dependency_version:
+ # Get the tool_dependency so we can get its installation directory.
+ tool_dependency = None
+ for tool_dependency in repository.tool_dependencies:
+ if tool_dependency.type == tool_dependency_type and \
+ tool_dependency.name == tool_dependency_name and \
+ tool_dependency.version == tool_dependency_version:
+ break
+ if tool_dependency:
+ tool_dependency_key = '%s/%s' % ( tool_dependency_name, tool_dependency_version )
+ installation_directory = tool_dependency.installation_directory( self.app )
+ env_shell_file_path = self.get_env_shell_file_path( installation_directory )
+ if env_shell_file_path:
+ env_shell_file_paths.append( env_shell_file_path )
+ else:
+ error_message = "Skipping tool dependency definition because unable to locate env.sh file for tool dependency "
+ error_message += "type %s, name %s, version %s for repository %s" % \
+ ( str( tool_dependency_type ), str( tool_dependency_name ), str( tool_dependency_version ), str( repository.name ) )
+ log.debug( error_message )
+ continue
+ else:
+ error_message = "Skipping tool dependency definition because unable to locate tool dependency "
+ error_message += "type %s, name %s, version %s for repository %s" % \
+ ( str( tool_dependency_type ), str( tool_dependency_name ), str( tool_dependency_version ), str( repository.name ) )
+ log.debug( error_message )
+ continue
+ else:
+ error_message = "Skipping invalid tool dependency definition: type %s, name %s, version %s." % \
+ ( str( tool_dependency_type ), str( tool_dependency_name ), str( tool_dependency_version ) )
+ log.debug( error_message )
+ continue
+ else:
+ error_message = "Skipping set_environment_for_install definition because unable to locate required installed tool shed repository: "
+ error_message += "toolshed %s, name %s, owner %s, changeset_revision %s." % \
+ ( str( toolshed ), str( repository_name ), str( repository_owner ), str( changeset_revision ) )
+ log.debug( error_message )
+ else:
+ error_message = "Skipping invalid set_environment_for_install definition: toolshed %s, name %s, owner %s, changeset_revision %s." % \
+ ( str( toolshed ), str( repository_name ), str( repository_owner ), str( changeset_revision ) )
+ log.debug( error_message )
+ return env_shell_file_paths
+
+ def get_env_shell_file_paths_from_setup_environment_elem( self, all_env_shell_file_paths, elem, action_dict ):
+ """
+ Parse an XML tag set to discover all child repository dependency tags and define the path to an env.sh file associated
+ with the repository (this requires the repository dependency to be in an installed state). The received action_dict
+ will be updated with these discovered paths and returned to the caller. This method handles tool dependency definition
+ tag sets <setup_r_environment>, <setup_ruby_environment> and <setup_perl_environment>.
+ """
+ # An example elem is:
+ # <action type="setup_perl_environment">
+ # <repository name="package_perl_5_18" owner="iuc">
+ # <package name="perl" version="5.18.1" />
+ # </repository>
+ # <repository name="package_expat_2_1" owner="iuc" prior_installation_required="True">
+ # <package name="expat" version="2.1.0" />
+ # </repository>
+ # <package>http://search.cpan.org/CPAN/authors/id/T/TO/TODDR/XML-Parser-2.41.tar.gz</package>
+ # <package>http://search.cpan.org/CPAN/authors/id/L/LD/LDS/CGI.pm-3.43.tar.gz</package>
+ # </action>
+ for action_elem in elem:
+ if action_elem.tag == 'repository':
+ env_shell_file_paths = self.get_env_shell_file_paths( action_elem )
+ all_env_shell_file_paths.extend( env_shell_file_paths )
+ if all_env_shell_file_paths:
+ action_dict[ 'env_shell_file_paths' ] = all_env_shell_file_paths
+ action_dict[ 'action_shell_file_paths' ] = env_shell_file_paths
+ return action_dict
diff -r f8c6577665c383645e934629784ece516b498d9b -r 85aa79e9ab9a87a5b5fea0ed1039aeb9a5c23642 lib/tool_shed/galaxy_install/tool_dependencies/recipe/install_environment.py
--- a/lib/tool_shed/galaxy_install/tool_dependencies/recipe/install_environment.py
+++ b/lib/tool_shed/galaxy_install/tool_dependencies/recipe/install_environment.py
@@ -24,11 +24,13 @@
from galaxy.util import shrink_string_by_size
from galaxy.util import unicodify
-from tool_shed.galaxy_install.tool_dependencies import td_common_util
from tool_shed.galaxy_install.tool_dependencies.recipe import asynchronous_reader
+from tool_shed.util import basic_util
+
log = logging.getLogger( __name__ )
+
class InstallEnvironment( object ):
"""Object describing the environment built up as part of the process of building and installing a package."""
@@ -44,7 +46,7 @@
self.tool_shed_repository_install_dir = tool_shed_repository_install_dir
def __call__( self ):
- with settings( warn_only=True, **td_common_util.get_env_var_values( self ) ):
+ with settings( warn_only=True, **basic_util.get_env_var_values( self ) ):
with prefix( self.__setup_environment() ):
yield
@@ -125,7 +127,7 @@
context = app.install_model.context
command = str( cmd )
output = self.handle_complex_command( command )
- self.log_results( cmd, output, os.path.join( self.install_dir, td_common_util.INSTALLATION_LOG ) )
+ self.log_results( cmd, output, os.path.join( self.install_dir, basic_util.INSTALLATION_LOG ) )
stdout = output.stdout
stderr = output.stderr
if len( stdout ) > DATABASE_MAX_STRING_SIZE:
@@ -214,18 +216,18 @@
# Sleep a bit before asking the readers again.
time.sleep( .1 )
current_wait_time = time.time() - start_timer
- if stdout_queue.empty() and stderr_queue.empty() and current_wait_time > td_common_util.NO_OUTPUT_TIMEOUT:
+ if stdout_queue.empty() and stderr_queue.empty() and current_wait_time > basic_util.NO_OUTPUT_TIMEOUT:
err_msg = "\nShutting down process id %s because it generated no output for the defined timeout period of %.1f seconds.\n" % \
- ( pid, td_common_util.NO_OUTPUT_TIMEOUT )
+ ( pid, basic_util.NO_OUTPUT_TIMEOUT )
stderr_reader.lines.append( err_msg )
process_handle.kill()
break
thread_lock.release()
# Wait until each of the threads we've started terminate. The following calls will block each thread
# until it terminates either normally, through an unhandled exception, or until the timeout occurs.
- stdio_thread.join( td_common_util.NO_OUTPUT_TIMEOUT )
- stdout_reader.join( td_common_util.NO_OUTPUT_TIMEOUT )
- stderr_reader.join( td_common_util.NO_OUTPUT_TIMEOUT )
+ stdio_thread.join( basic_util.NO_OUTPUT_TIMEOUT )
+ stdout_reader.join( basic_util.NO_OUTPUT_TIMEOUT )
+ stderr_reader.join( basic_util.NO_OUTPUT_TIMEOUT )
# Close subprocess' file descriptors.
error = self.close_file_descriptor( process_handle.stdout )
error = self.close_file_descriptor( process_handle.stderr )
diff -r f8c6577665c383645e934629784ece516b498d9b -r 85aa79e9ab9a87a5b5fea0ed1039aeb9a5c23642 lib/tool_shed/galaxy_install/tool_dependencies/recipe/step_handler.py
--- a/lib/tool_shed/galaxy_install/tool_dependencies/recipe/step_handler.py
+++ b/lib/tool_shed/galaxy_install/tool_dependencies/recipe/step_handler.py
@@ -4,12 +4,17 @@
import stat
from string import Template
import sys
+import tarfile
+import time
+import urllib2
+import zipfile
from galaxy.util import asbool
from galaxy.util.template import fill_template
+from tool_shed.util import basic_util
from tool_shed.util import tool_dependency_util
-from tool_shed.galaxy_install.tool_dependencies import td_common_util
+from tool_shed.galaxy_install.tool_dependencies.env_manager import EnvManager
# TODO: eliminate the use of fabric here.
from galaxy import eggs
@@ -26,6 +31,152 @@
VIRTUALENV_URL = 'https://pypi.python.org/packages/source/v/virtualenv/virtualenv-1.9.1.tar.gz'
+class CompressedFile( object ):
+
+ def __init__( self, file_path, mode='r' ):
+ if tarfile.is_tarfile( file_path ):
+ self.file_type = 'tar'
+ elif zipfile.is_zipfile( file_path ) and not file_path.endswith( '.jar' ):
+ self.file_type = 'zip'
+ self.file_name = os.path.splitext( os.path.basename( file_path ) )[ 0 ]
+ if self.file_name.endswith( '.tar' ):
+ self.file_name = os.path.splitext( self.file_name )[ 0 ]
+ self.type = self.file_type
+ method = 'open_%s' % self.file_type
+ if hasattr( self, method ):
+ self.archive = getattr( self, method )( file_path, mode )
+ else:
+ raise NameError( 'File type %s specified, no open method found.' % self.file_type )
+
+ def extract( self, path ):
+ '''Determine the path to which the archive should be extracted.'''
+ contents = self.getmembers()
+ extraction_path = path
+ if len( contents ) == 1:
+ # The archive contains a single file, return the extraction path.
+ if self.isfile( contents[ 0 ] ):
+ extraction_path = os.path.join( path, self.file_name )
+ if not os.path.exists( extraction_path ):
+ os.makedirs( extraction_path )
+ self.archive.extractall( extraction_path )
+ else:
+ # Get the common prefix for all the files in the archive. If the common prefix ends with a slash,
+ # or self.isdir() returns True, the archive contains a single directory with the desired contents.
+ # Otherwise, it contains multiple files and/or directories at the root of the archive.
+ common_prefix = os.path.commonprefix( [ self.getname( item ) for item in contents ] )
+ if len( common_prefix ) >= 1 and not common_prefix.endswith( os.sep ) and self.isdir( self.getmember( common_prefix ) ):
+ common_prefix += os.sep
+ if common_prefix.endswith( os.sep ):
+ self.archive.extractall( os.path.join( path ) )
+ extraction_path = os.path.join( path, common_prefix )
+ else:
+ extraction_path = os.path.join( path, self.file_name )
+ if not os.path.exists( extraction_path ):
+ os.makedirs( extraction_path )
+ self.archive.extractall( os.path.join( extraction_path ) )
+ return os.path.abspath( extraction_path )
+
+ def getmembers_tar( self ):
+ return self.archive.getmembers()
+
+ def getmembers_zip( self ):
+ return self.archive.infolist()
+
+ def getname_tar( self, item ):
+ return item.name
+
+ def getname_zip( self, item ):
+ return item.filename
+
+ def getmember( self, name ):
+ for member in self.getmembers():
+ if self.getname( member ) == name:
+ return member
+
+ def getmembers( self ):
+ return getattr( self, 'getmembers_%s' % self.type )()
+
+ def getname( self, member ):
+ return getattr( self, 'getname_%s' % self.type )( member )
+
+ def isdir( self, member ):
+ return getattr( self, 'isdir_%s' % self.type )( member )
+
+ def isdir_tar( self, member ):
+ return member.isdir()
+
+ def isdir_zip( self, member ):
+ if member.filename.endswith( os.sep ):
+ return True
+ return False
+
+ def isfile( self, member ):
+ if not self.isdir( member ):
+ return True
+ return False
+
+ def open_tar( self, filepath, mode ):
+ return tarfile.open( filepath, mode, errorlevel=0 )
+
+ def open_zip( self, filepath, mode ):
+ return zipfile.ZipFile( filepath, mode )
+
+ def zipfile_ok( self, path_to_archive ):
+ """
+ This function is a bit pedantic and not functionally necessary. It checks whether there is
+ no file pointing outside of the extraction, because ZipFile.extractall() has some potential
+ security holes. See python zipfile documentation for more details.
+ """
+ basename = os.path.realpath( os.path.dirname( path_to_archive ) )
+ zip_archive = zipfile.ZipFile( path_to_archive )
+ for member in zip_archive.namelist():
+ member_path = os.path.realpath( os.path.join( basename, member ) )
+ if not member_path.startswith( basename ):
+ return False
+ return True
+
+
+class Download( object ):
+
+ def url_download( self, install_dir, downloaded_file_name, download_url, extract=True ):
+ file_path = os.path.join( install_dir, downloaded_file_name )
+ src = None
+ dst = None
+ # Set a timer so we don't sit here forever.
+ start_time = time.time()
+ try:
+ src = urllib2.urlopen( download_url )
+ dst = open( file_path, 'wb' )
+ while True:
+ chunk = src.read( basic_util.CHUNK_SIZE )
+ if chunk:
+ dst.write( chunk )
+ else:
+ break
+ time_taken = time.time() - start_time
+ if time_taken > basic_util.NO_OUTPUT_TIMEOUT:
+ err_msg = 'Downloading from URL %s took longer than the defined timeout period of %.1f seconds.' % \
+ ( str( download_url ), basic_util.NO_OUTPUT_TIMEOUT )
+ raise Exception( err_msg )
+ except Exception, e:
+ err_msg = err_msg = 'Error downloading from URL\n%s:\n%s' % ( str( download_url ), str( e ) )
+ raise Exception( err_msg )
+ finally:
+ if src:
+ src.close()
+ if dst:
+ dst.close()
+ if extract:
+ if tarfile.is_tarfile( file_path ) or ( zipfile.is_zipfile( file_path ) and not file_path.endswith( '.jar' ) ):
+ archive = CompressedFile( file_path )
+ extraction_path = archive.extract( install_dir )
+ else:
+ extraction_path = os.path.abspath( install_dir )
+ else:
+ extraction_path = os.path.abspath( install_dir )
+ return extraction_path
+
+
class RecipeStep( object ):
"""Abstract class that defines a standard format for handling recipe steps when installing packages."""
@@ -42,6 +193,22 @@
def __init__( self ):
self.type = 'assert_directory_executable'
+ def assert_directory_executable( self, full_path ):
+ """
+ Return True if a symbolic link or directory exists and is executable, but if
+ full_path is a file, return False.
+ """
+ if full_path is None:
+ return False
+ if os.path.isfile( full_path ):
+ return False
+ if os.path.isdir( full_path ):
+ # Make sure the owner has execute permission on the directory.
+ # See http://docs.python.org/2/library/stat.html
+ if stat.S_IXUSR & os.stat( full_path )[ stat.ST_MODE ] == 64:
+ return True
+ return False
+
def execute_step( self, app, tool_dependency, package_name, actions, action_dict, filtered_actions, env_file_builder,
install_environment, work_dir, current_dir=None, initial_download=False ):
"""
@@ -53,7 +220,7 @@
full_path = action_dict[ 'full_path' ]
else:
full_path = os.path.join( current_dir, action_dict[ 'full_path' ] )
- if not td_common_util.assert_directory_executable( full_path=full_path ):
+ if not self.assert_directory_executable( full_path=full_path ):
status = app.install_model.ToolDependency.installation_status.ERROR
error_message = 'The path %s is not a directory or is not executable by the owner.' % str( full_path )
tool_dependency = tool_dependency_util.set_tool_dependency_attributes( app,
@@ -66,7 +233,7 @@
def prepare_step( self, app, tool_dependency, action_elem, action_dict, install_environment, is_binary_download ):
# <action type="assert_executable">$INSTALL_DIR/mira/my_file</action>
if action_elem.text:
- action_dict[ 'full_path' ] = td_common_util.evaluate_template( action_elem.text, install_environment )
+ action_dict[ 'full_path' ] = basic_util.evaluate_template( action_elem.text, install_environment )
return action_dict
@@ -75,6 +242,18 @@
def __init__( self ):
self.type = 'assert_directory_exists'
+ def assert_directory_exists( self, full_path ):
+ """
+ Return True if a symbolic link or directory exists, but if full_path is a file,
+ return False. """
+ if full_path is None:
+ return False
+ if os.path.isfile( full_path ):
+ return False
+ if os.path.isdir( full_path ):
+ return True
+ return False
+
def execute_step( self, app, tool_dependency, package_name, actions, action_dict, filtered_actions, env_file_builder,
install_environment, work_dir, current_dir=None, initial_download=False ):
"""
@@ -86,7 +265,7 @@
full_path = action_dict[ 'full_path' ]
else:
full_path = os.path.join( current_dir, action_dict[ 'full_path' ] )
- if not td_common_util.assert_directory_exists( full_path=full_path ):
+ if not self.assert_directory_exists( full_path=full_path ):
status = app.install_model.ToolDependency.installation_status.ERROR
error_message = 'The path %s is not a directory or does not exist.' % str( full_path )
tool_dependency = tool_dependency_util.set_tool_dependency_attributes( app,
@@ -99,7 +278,7 @@
def prepare_step( self, app, tool_dependency, action_elem, action_dict, install_environment, is_binary_download ):
# <action type="make_directory">$INSTALL_DIR/mira</action>
if action_elem.text:
- action_dict[ 'full_path' ] = td_common_util.evaluate_template( action_elem.text, install_environment )
+ action_dict[ 'full_path' ] = basic_util.evaluate_template( action_elem.text, install_environment )
return action_dict
@@ -108,6 +287,22 @@
def __init__( self ):
self.type = 'assert_file_executable'
+ def assert_file_executable( self, full_path ):
+ """
+ Return True if a symbolic link or file exists and is executable, but if full_path
+ is a directory, return False.
+ """
+ if full_path is None:
+ return False
+ if os.path.isdir( full_path ):
+ return False
+ if os.path.exists( full_path ):
+ # Make sure the owner has execute permission on the file.
+ # See http://docs.python.org/2/library/stat.html
+ if stat.S_IXUSR & os.stat( full_path )[ stat.ST_MODE ] == 64:
+ return True
+ return False
+
def execute_step( self, app, tool_dependency, package_name, actions, action_dict, filtered_actions, env_file_builder,
install_environment, work_dir, current_dir=None, initial_download=False ):
"""
@@ -119,7 +314,7 @@
full_path = action_dict[ 'full_path' ]
else:
full_path = os.path.join( current_dir, action_dict[ 'full_path' ] )
- if not td_common_util.assert_file_executable( full_path=full_path ):
+ if not self.assert_file_executable( full_path=full_path ):
status = app.install_model.ToolDependency.installation_status.ERROR
error_message = 'The path %s is not a file or is not executable by the owner.' % str( full_path )
tool_dependency = tool_dependency_util.set_tool_dependency_attributes( app,
@@ -132,7 +327,7 @@
def prepare_step( self, app, tool_dependency, action_elem, action_dict, install_environment, is_binary_download ):
# <action type="assert_executable">$INSTALL_DIR/mira/my_file</action>
if action_elem.text:
- action_dict[ 'full_path' ] = td_common_util.evaluate_template( action_elem.text, install_environment )
+ action_dict[ 'full_path' ] = basic_util.evaluate_template( action_elem.text, install_environment )
return action_dict
@@ -141,6 +336,19 @@
def __init__( self ):
self.type = 'assert_file_exists'
+ def assert_file_exists( self, full_path ):
+ """
+ Return True if a symbolic link or file exists, but if full_path is a directory,
+ return False.
+ """
+ if full_path is None:
+ return False
+ if os.path.isdir( full_path ):
+ return False
+ if os.path.exists( full_path ):
+ return True
+ return False
+
def execute_step( self, app, tool_dependency, package_name, actions, action_dict, filtered_actions, env_file_builder,
install_environment, work_dir, current_dir=None, initial_download=False ):
"""
@@ -152,7 +360,7 @@
full_path = action_dict[ 'full_path' ]
else:
full_path = os.path.join( current_dir, action_dict[ 'full_path' ] )
- if not td_common_util.assert_file_exists( full_path=full_path ):
+ if not self.assert_file_exists( full_path=full_path ):
status = app.install_model.ToolDependency.installation_status.ERROR
error_message = 'The path %s is not a file or does not exist.' % str( full_path )
tool_dependency = tool_dependency_util.set_tool_dependency_attributes( app,
@@ -165,7 +373,7 @@
def prepare_step( self, app, tool_dependency, action_elem, action_dict, install_environment, is_binary_download ):
# <action type="assert_on_path">$INSTALL_DIR/mira/my_file</action>
if action_elem.text:
- action_dict[ 'full_path' ] = td_common_util.evaluate_template( action_elem.text, install_environment )
+ action_dict[ 'full_path' ] = basic_util.evaluate_template( action_elem.text, install_environment )
return action_dict
@@ -187,7 +395,7 @@
pre_cmd = './configure %s && make && make install' % configure_opts
else:
pre_cmd = './configure --prefix=$INSTALL_DIR %s && make && make install' % configure_opts
- cmd = install_environment.build_command( td_common_util.evaluate_template( pre_cmd, install_environment ) )
+ cmd = install_environment.build_command( basic_util.evaluate_template( pre_cmd, install_environment ) )
return_code = install_environment.handle_command( app=app,
tool_dependency=tool_dependency,
cmd=cmd,
@@ -199,7 +407,7 @@
def prepare_step( self, app, tool_dependency, action_elem, action_dict, install_environment, is_binary_download ):
# Handle configure, make and make install allow providing configuration options
if action_elem.text:
- configure_opts = td_common_util.evaluate_template( action_elem.text, install_environment )
+ configure_opts = basic_util.evaluate_template( action_elem.text, install_environment )
action_dict[ 'configure_opts' ] = configure_opts
return action_dict
@@ -274,7 +482,7 @@
received_mode = int( file_elem.get( 'mode', '600' ), base=8 )
# For added security, ensure that the setuid and setgid bits are not set.
mode = received_mode & ~( stat.S_ISUID | stat.S_ISGID )
- file = td_common_util.evaluate_template( file_elem.text, install_environment )
+ file = basic_util.evaluate_template( file_elem.text, install_environment )
chmod_tuple = ( file, mode )
chmod_actions.append( chmod_tuple )
if chmod_actions:
@@ -282,11 +490,17 @@
return action_dict
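The mode handling above parses an octal string from the tag and strips the setuid and setgid bits before any chmod is applied. A worked example of the masking, with a hypothetical mode:

    import stat

    received_mode = int( '4755', base=8 )                      # rwsr-xr-x, setuid set
    mode = received_mode & ~( stat.S_ISUID | stat.S_ISGID )
    assert mode == int( '755', base=8 )                        # rwxr-xr-x, setuid stripped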
-class DownloadBinary( RecipeStep ):
+class DownloadBinary( Download, RecipeStep ):
def __init__( self ):
self.type = 'download_binary'
+ def download_binary( self, url, work_dir ):
+ """Download a pre-compiled binary from the specified URL."""
+ downloaded_filename = os.path.split( url )[ -1 ]
+ dir = self.url_download( work_dir, downloaded_filename, url, extract=False )
+ return downloaded_filename
+
def filter_actions_after_binary_installation( self, actions ):
'''Filter out actions that should not be processed if a binary download succeeded.'''
filtered_actions = []
@@ -311,7 +525,7 @@
log.debug( 'Attempting to download from %s to %s', url, str( target_directory ) )
downloaded_filename = None
try:
- downloaded_filename = td_common_util.download_binary( url, work_dir )
+ downloaded_filename = self.download_binary( url, work_dir )
if initial_download:
# Filter out any actions that are not download_binary, chmod, or set_environment.
filtered_actions = self.filter_actions_after_binary_installation( actions[ 1: ] )
@@ -338,9 +552,9 @@
full_path_to_dir = os.path.abspath( install_environment.install_dir )
else:
full_path_to_dir = os.path.abspath( install_environment.install_dir )
- td_common_util.move_file( current_dir=work_dir,
- source=downloaded_filename,
- destination=full_path_to_dir )
+ basic_util.move_file( current_dir=work_dir,
+ source=downloaded_filename,
+ destination=full_path_to_dir )
# Not sure why dir is ignored in this method, need to investigate...
dir = None
if initial_download:
@@ -368,7 +582,7 @@
return action_dict
-class DownloadByUrl( RecipeStep ):
+class DownloadByUrl( Download, RecipeStep ):
def __init__( self ):
self.type = 'download_by_url'
@@ -394,9 +608,9 @@
downloaded_filename = action_dict[ 'target_filename' ]
else:
downloaded_filename = os.path.split( url )[ -1 ]
- dir = td_common_util.url_download( work_dir, downloaded_filename, url, extract=True )
+ dir = self.url_download( work_dir, downloaded_filename, url, extract=True )
if is_binary:
- log_file = os.path.join( install_environment.install_dir, td_common_util.INSTALLATION_LOG )
+ log_file = os.path.join( install_environment.install_dir, basic_util.INSTALLATION_LOG )
if os.path.exists( log_file ):
logfile = open( log_file, 'ab' )
else:
@@ -422,7 +636,7 @@
return action_dict
-class DownloadFile( RecipeStep ):
+class DownloadFile( Download, RecipeStep ):
def __init__( self ):
self.type = 'download_file'
@@ -447,7 +661,7 @@
filename = action_dict[ 'target_filename' ]
else:
filename = url.split( '/' )[ -1 ]
- td_common_util.url_download( work_dir, filename, url )
+ self.url_download( work_dir, filename, url )
if initial_download:
dir = os.path.curdir
return tool_dependency, filtered_actions, dir
@@ -479,13 +693,17 @@
full_path = action_dict[ 'full_path' ]
else:
full_path = os.path.join( current_dir, action_dict[ 'full_path' ] )
- td_common_util.make_directory( full_path=full_path )
+ self.make_directory( full_path=full_path )
return tool_dependency, None, None
+ def make_directory( self, full_path ):
+ if not os.path.exists( full_path ):
+ os.makedirs( full_path )
+
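make_directory() above uses a check-then-create pair, which is racy if two installations target the same path at once. A variant that tolerates the race by catching EEXIST, shown only as a sketch, not as what the recipe code does:

    import errno
    import os

    def make_directory_safely( full_path ):
        try:
            os.makedirs( full_path )
        except OSError, e:
            # Another process may have created the path first; that is fine.
            if e.errno != errno.EEXIST:
                raise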
def prepare_step( self, app, tool_dependency, action_elem, action_dict, install_environment, is_binary_download ):
# <action type="make_directory">$INSTALL_DIR/lib/python</action>
if action_elem.text:
- action_dict[ 'full_path' ] = td_common_util.evaluate_template( action_elem.text, install_environment )
+ action_dict[ 'full_path' ] = basic_util.evaluate_template( action_elem.text, install_environment )
return action_dict
@@ -515,7 +733,7 @@
def prepare_step( self, app, tool_dependency, action_elem, action_dict, install_environment, is_binary_download ):
# make; make install; allow providing make options
if action_elem.text:
- make_opts = td_common_util.evaluate_template( action_elem.text, install_environment )
+ make_opts = basic_util.evaluate_template( action_elem.text, install_environment )
action_dict[ 'make_opts' ] = make_opts
return action_dict
@@ -531,18 +749,38 @@
Move a directory of files. Since this class is not used in the initial download stage, no recipe step
filtering is performed here, and None values are always returned for filtered_actions and dir.
"""
- td_common_util.move_directory_files( current_dir=current_dir,
- source_dir=os.path.join( action_dict[ 'source_directory' ] ),
- destination_dir=os.path.join( action_dict[ 'destination_directory' ] ) )
+ self.move_directory_files( current_dir=current_dir,
+ source_dir=os.path.join( action_dict[ 'source_directory' ] ),
+ destination_dir=os.path.join( action_dict[ 'destination_directory' ] ) )
return tool_dependency, None, None
+ def move_directory_files( self, current_dir, source_dir, destination_dir ):
+ source_directory = os.path.abspath( os.path.join( current_dir, source_dir ) )
+ destination_directory = os.path.join( destination_dir )
+ if not os.path.isdir( destination_directory ):
+ os.makedirs( destination_directory )
+ symlinks = []
+ regular_files = []
+ for file_name in os.listdir( source_directory ):
+ source_file = os.path.join( source_directory, file_name )
+ destination_file = os.path.join( destination_directory, file_name )
+ files_tuple = ( source_file, destination_file )
+ if os.path.islink( source_file ):
+ symlinks.append( files_tuple )
+ else:
+ regular_files.append( files_tuple )
+ for source_file, destination_file in symlinks:
+ shutil.move( source_file, destination_file )
+ for source_file, destination_file in regular_files:
+ shutil.move( source_file, destination_file )
+
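move_directory_files() above moves every entry found directly under the source directory, symbolic links first and regular files (including whole subdirectories) second; it does not recurse. A condensed sketch of the same behavior, with hypothetical paths:

    import os
    import shutil

    source_directory = os.path.abspath( os.path.join( '/tmp/work', 'bin' ) )
    destination_directory = '/opt/deps/mira/4.0/bin'   # hypothetical
    if not os.path.isdir( destination_directory ):
        os.makedirs( destination_directory )
    for file_name in os.listdir( source_directory ):
        shutil.move( os.path.join( source_directory, file_name ),
                     os.path.join( destination_directory, file_name ) )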
def prepare_step( self, app, tool_dependency, action_elem, action_dict, install_environment, is_binary_download ):
# <action type="move_directory_files">
# <source_directory>bin</source_directory>
# <destination_directory>$INSTALL_DIR/bin</destination_directory>
# </action>
for move_elem in action_elem:
- move_elem_text = td_common_util.evaluate_template( move_elem.text, install_environment )
+ move_elem_text = basic_util.evaluate_template( move_elem.text, install_environment )
if move_elem_text:
action_dict[ move_elem.tag ] = move_elem_text
return action_dict
@@ -559,10 +797,10 @@
Move a file on disk. Since this class is not used in the initial download stage, no recipe step
filtering is performed here, and None values are always returned for filtered_actions and dir.
"""
- td_common_util.move_file( current_dir=current_dir,
- source=os.path.join( action_dict[ 'source' ] ),
- destination=os.path.join( action_dict[ 'destination' ] ),
- rename_to=action_dict[ 'rename_to' ] )
+ basic_util.move_file( current_dir=current_dir,
+ source=os.path.join( action_dict[ 'source' ] ),
+ destination=os.path.join( action_dict[ 'destination' ] ),
+ rename_to=action_dict[ 'rename_to' ] )
return tool_dependency, None, None
def prepare_step( self, app, tool_dependency, action_elem, action_dict, install_environment, is_binary_download ):
@@ -570,8 +808,8 @@
# <source>misc/some_file</source>
# <destination>$INSTALL_DIR/bin</destination>
# </action>
- action_dict[ 'source' ] = td_common_util.evaluate_template( action_elem.find( 'source' ).text, install_environment )
- action_dict[ 'destination' ] = td_common_util.evaluate_template( action_elem.find( 'destination' ).text, install_environment )
+ action_dict[ 'source' ] = basic_util.evaluate_template( action_elem.find( 'source' ).text, install_environment )
+ action_dict[ 'destination' ] = basic_util.evaluate_template( action_elem.find( 'destination' ).text, install_environment )
action_dict[ 'rename_to' ] = action_elem.get( 'rename_to' )
return action_dict
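For reference, a move_file tag set that uses the optional rename_to attribute, and the action_dict the prepare_step above would build for it; the expanded destination path is hypothetical:

    # <action type="move_file" rename_to="my_tool">
    #     <source>misc/some_file</source>
    #     <destination>$INSTALL_DIR/bin</destination>
    # </action>
    #
    # After evaluate_template() expands $INSTALL_DIR this yields, e.g.:
    action_dict = { 'source': 'misc/some_file',
                    'destination': '/opt/deps/my_tool/1.0/bin',   # hypothetical
                    'rename_to': 'my_tool' }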
@@ -717,12 +955,12 @@
# <action type="set_environment">
# <environment_variable name="PATH" action="prepend_to">$INSTALL_DIR</environment_variable>
# </action>
+ env_manager = EnvManager( app )
env_var_dicts = []
for env_elem in action_elem:
if env_elem.tag == 'environment_variable':
- env_var_dict = \
- td_common_util.create_env_var_dict( elem=env_elem,
- install_environment=install_environment )
+ env_var_dict = env_manager.create_env_var_dict( elem=env_elem,
+ install_environment=install_environment )
if env_var_dict:
env_var_dicts.append( env_var_dict )
if env_var_dicts:
@@ -764,16 +1002,17 @@
# the current tool dependency package. See the package_matplotlib_1_2 repository in the test tool
# shed for a real-world example.
all_env_shell_file_paths = []
+ env_manager = EnvManager( app )
for env_elem in action_elem:
if env_elem.tag == 'repository':
- env_shell_file_paths = td_common_util.get_env_shell_file_paths( app, env_elem )
+ env_shell_file_paths = env_manager.get_env_shell_file_paths( env_elem )
if env_shell_file_paths:
all_env_shell_file_paths.extend( env_shell_file_paths )
action_dict[ 'env_shell_file_paths' ] = all_env_shell_file_paths
return action_dict
-class SetupPerlEnvironment( RecipeStep ):
+class SetupPerlEnvironment( Download, RecipeStep ):
def __init__( self ):
self.type = 'setup_perl_environment'
@@ -822,7 +1061,7 @@
# We assume a URL to a Perl package archive.
url = perl_package
perl_package_name = url.split( '/' )[ -1 ]
- dir = td_common_util.url_download( work_dir, perl_package_name, url, extract=True )
+ dir = self.url_download( work_dir, perl_package_name, url, extract=True )
# Search for Build.PL or Makefile.PL (ExtUtils::MakeMaker vs. Module::Build).
tmp_work_dir = os.path.join( work_dir, dir )
if os.path.exists( os.path.join( tmp_work_dir, 'Makefile.PL' ) ):
@@ -836,7 +1075,7 @@
return tool_dependency, filtered_actions, dir
return tool_dependency, None, None
with lcd( tmp_work_dir ):
- cmd = install_environment.build_command( td_common_util.evaluate_template( cmd, install_environment ) )
+ cmd = install_environment.build_command( basic_util.evaluate_template( cmd, install_environment ) )
return_code = install_environment.handle_command( app=app,
tool_dependency=tool_dependency,
cmd=cmd,
@@ -849,7 +1088,7 @@
# perl package from CPAN without version number.
# cpanm should be installed with the parent perl distribution, otherwise this will not work.
cmd += '''cpanm --local-lib=$INSTALL_DIR %s''' % ( perl_package )
- cmd = install_environment.build_command( td_common_util.evaluate_template( cmd, install_environment ) )
+ cmd = install_environment.build_command( basic_util.evaluate_template( cmd, install_environment ) )
return_code = install_environment.handle_command( app=app,
tool_dependency=tool_dependency,
cmd=cmd,
@@ -890,10 +1129,11 @@
# with each repository. This will potentially update the value of the 'env_shell_file_paths' entry
# in action_dict.
all_env_shell_file_paths = []
- action_dict = td_common_util.get_env_shell_file_paths_from_setup_environment_elem( app,
- all_env_shell_file_paths,
- action_elem,
- action_dict )
+ env_manager = EnvManager( app )
+ action_dict = env_manager.get_env_shell_file_paths_from_setup_environment_elem( app,
+ all_env_shell_file_paths,
+ action_elem,
+ action_dict )
perl_packages = []
for env_elem in action_elem:
if env_elem.tag == 'package':
@@ -908,7 +1148,7 @@
return action_dict
-class SetupREnvironment( RecipeStep ):
+class SetupREnvironment( Download, RecipeStep ):
def __init__( self ):
self.type = 'setup_r_environment'
@@ -947,7 +1187,7 @@
for url in action_dict[ 'r_packages' ]:
filename = url.split( '/' )[ -1 ]
tarball_names.append( filename )
- td_common_util.url_download( work_dir, filename, url, extract=False )
+ self.url_download( work_dir, filename, url, extract=False )
dir = os.path.curdir
current_dir = os.path.abspath( os.path.join( work_dir, dir ) )
with lcd( current_dir ):
@@ -958,7 +1198,7 @@
cmd = r'''PATH=$PATH:$R_HOME/bin; export PATH; R_LIBS=$INSTALL_DIR; export R_LIBS;
Rscript -e "install.packages(c('%s'),lib='$INSTALL_DIR', repos=NULL, dependencies=FALSE)"''' % \
( str( tarball_name ) )
- cmd = install_environment.build_command( td_common_util.evaluate_template( cmd, install_environment ) )
+ cmd = install_environment.build_command( basic_util.evaluate_template( cmd, install_environment ) )
return_code = install_environment.handle_command( app=app,
tool_dependency=tool_dependency,
cmd=cmd,
@@ -993,10 +1233,11 @@
# associated with each repository. This will potentially update the value of the
# 'env_shell_file_paths' entry in action_dict.
all_env_shell_file_paths = []
- action_dict = td_common_util.get_env_shell_file_paths_from_setup_environment_elem( app,
- all_env_shell_file_paths,
- action_elem,
- action_dict )
+ env_manager = EnvManager( app )
+ action_dict = env_manager.get_env_shell_file_paths_from_setup_environment_elem( app,
+ all_env_shell_file_paths,
+ action_elem,
+ action_dict )
r_packages = list()
for env_elem in action_elem:
if env_elem.tag == 'package':
@@ -1006,7 +1247,7 @@
return action_dict
-class SetupRubyEnvironment( RecipeStep ):
+class SetupRubyEnvironment( Download, RecipeStep ):
def __init__( self ):
self.type = 'setup_ruby_environment'
@@ -1058,7 +1299,7 @@
# We assume a URL to a gem file.
url = gem
gem_name = url.split( '/' )[ -1 ]
- td_common_util.url_download( work_dir, gem_name, url, extract=False )
+ self.url_download( work_dir, gem_name, url, extract=False )
cmd = '''PATH=$PATH:$RUBY_HOME/bin; export PATH; GEM_HOME=$INSTALL_DIR; export GEM_HOME;
gem install --local %s ''' % ( gem_name )
else:
@@ -1073,7 +1314,7 @@
# no version number given
cmd = '''PATH=$PATH:$RUBY_HOME/bin; export PATH; GEM_HOME=$INSTALL_DIR; export GEM_HOME;
gem install %s''' % ( gem )
- cmd = install_environment.build_command( td_common_util.evaluate_template( cmd, install_environment ) )
+ cmd = install_environment.build_command( basic_util.evaluate_template( cmd, install_environment ) )
return_code = install_environment.handle_command( app=app,
tool_dependency=tool_dependency,
cmd=cmd,
@@ -1114,10 +1355,11 @@
# associated with each repository. This will potentially update the value of the
# 'env_shell_file_paths' entry in action_dict.
all_env_shell_file_paths = []
- action_dict = td_common_util.get_env_shell_file_paths_from_setup_environment_elem( app,
- all_env_shell_file_paths,
- action_elem,
- action_dict )
+ env_manager = EnvManager( app )
+ action_dict = env_manager.get_env_shell_file_paths_from_setup_environment_elem( app,
+ all_env_shell_file_paths,
+ action_elem,
+ action_dict )
ruby_package_tups = []
for env_elem in action_elem:
if env_elem.tag == 'package':
@@ -1140,7 +1382,7 @@
return action_dict
-class SetupVirtualEnv( RecipeStep ):
+class SetupVirtualEnv( Download, RecipeStep ):
def __init__( self ):
self.type = 'setup_virtualenv'
@@ -1228,9 +1470,10 @@
with install_environment.make_tmp_dir() as work_dir:
downloaded_filename = VIRTUALENV_URL.rsplit('/', 1)[-1]
try:
- dir = td_common_util.url_download( work_dir, downloaded_filename, VIRTUALENV_URL )
+ dir = self.url_download( work_dir, downloaded_filename, VIRTUALENV_URL )
except:
- log.error( "Failed to download virtualenv: td_common_util.url_download( '%s', '%s', '%s' ) threw an exception", work_dir, downloaded_filename, VIRTUALENV_URL )
+ log.error( "Failed to download virtualenv: url_download( '%s', '%s', '%s' ) threw an exception",
+ work_dir, downloaded_filename, VIRTUALENV_URL )
return False
full_path_to_dir = os.path.abspath( os.path.join( work_dir, dir ) )
shutil.move( full_path_to_dir, venv_dir )
@@ -1245,7 +1488,7 @@
# lxml==2.3.0</action>
## Manually specify contents of requirements.txt file to create dynamically.
action_dict[ 'use_requirements_file' ] = asbool( action_elem.get( 'use_requirements_file', True ) )
- action_dict[ 'requirements' ] = td_common_util.evaluate_template( action_elem.text or 'requirements.txt', install_environment )
+ action_dict[ 'requirements' ] = basic_util.evaluate_template( action_elem.text or 'requirements.txt', install_environment )
action_dict[ 'python' ] = action_elem.get( 'python', 'python' )
return action_dict
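A setup_virtualenv tag set with inline requirements, and roughly the action_dict the code above would produce for it; whitespace handling is simplified here:

    # <action type="setup_virtualenv">
    #     numpy==1.7.1
    #     lxml==2.3.0
    # </action>
    #
    # With the defaults applied above, roughly:
    action_dict = { 'use_requirements_file': True,
                    'requirements': 'numpy==1.7.1\nlxml==2.3.0',
                    'python': 'python' }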
@@ -1316,7 +1559,7 @@
def prepare_step( self, app, tool_dependency, action_elem, action_dict, install_environment, is_binary_download ):
# <action type="shell_command">make</action>
- action_elem_text = td_common_util.evaluate_template( action_elem.text, install_environment )
+ action_elem_text = basic_util.evaluate_template( action_elem.text, install_environment )
if action_elem_text:
action_dict[ 'command' ] = action_elem_text
return action_dict
@@ -1338,7 +1581,7 @@
env_vars = dict()
env_vars = install_environment.environment_dict()
tool_shed_repository = tool_dependency.tool_shed_repository
- env_vars.update( td_common_util.get_env_var_values( install_environment ) )
+ env_vars.update( basic_util.get_env_var_values( install_environment ) )
language = action_dict[ 'language' ]
with settings( warn_only=True, **env_vars ):
if language == 'cheetah':
diff -r f8c6577665c383645e934629784ece516b498d9b -r 85aa79e9ab9a87a5b5fea0ed1039aeb9a5c23642 lib/tool_shed/galaxy_install/tool_dependencies/recipe/tag_handler.py
--- a/lib/tool_shed/galaxy_install/tool_dependencies/recipe/tag_handler.py
+++ b/lib/tool_shed/galaxy_install/tool_dependencies/recipe/tag_handler.py
@@ -10,7 +10,7 @@
from tool_shed.util import tool_dependency_util
from tool_shed.util import xml_util
-from tool_shed.galaxy_install.tool_dependencies import td_common_util
+from tool_shed.galaxy_install.tool_dependencies.env_manager import EnvManager
from tool_shed.galaxy_install.tool_dependencies.recipe.env_file_builder import EnvFileBuilder
from tool_shed.galaxy_install.tool_dependencies.recipe.install_environment import InstallEnvironment
@@ -85,9 +85,9 @@
platform_info_dict = tool_dependency_util.get_platform_info_dict()
if package_install_version == '1.0':
# Handle tool dependency installation using a fabric method included in the Galaxy framework.
- actions_elem_tuples = td_common_util.parse_package_elem( package_elem,
- platform_info_dict=platform_info_dict,
- include_after_install_actions=True )
+ actions_elem_tuples = tool_dependency_util.parse_package_elem( package_elem,
+ platform_info_dict=platform_info_dict,
+ include_after_install_actions=True )
if not actions_elem_tuples:
proceed_with_install = False
error_message = 'Version %s of the %s package cannot be installed because ' % ( str( package_version ), str( package_name ) )
@@ -491,6 +491,7 @@
# <set_environment version="1.0">
# <repository toolshed="<tool shed>" name="<repository name>" owner="<repository owner>" changeset_revision="<changeset revision>" />
# </set_environment>
+ env_manager = EnvManager( app )
tool_dependencies = []
env_var_version = elem.get( 'version', '1.0' )
tool_shed_repository_install_dir = os.path.abspath( tool_shed_repository.repo_files_directory( app ) )
@@ -514,8 +515,8 @@
tool_dependency_version=None )
install_environment = InstallEnvironment( tool_shed_repository_install_dir=tool_shed_repository_install_dir,
install_dir=install_dir )
- env_var_dict = td_common_util.create_env_var_dict( elem=env_var_elem,
- install_environment=install_environment )
+ env_var_dict = env_manager.create_env_var_dict( elem=env_var_elem,
+ install_environment=install_environment )
if env_var_dict:
if not os.path.exists( install_dir ):
os.makedirs( install_dir )
diff -r f8c6577665c383645e934629784ece516b498d9b -r 85aa79e9ab9a87a5b5fea0ed1039aeb9a5c23642 lib/tool_shed/galaxy_install/tool_dependencies/td_common_util.py
--- a/lib/tool_shed/galaxy_install/tool_dependencies/td_common_util.py
+++ /dev/null
@@ -1,578 +0,0 @@
-import logging
-import os
-import re
-import shutil
-import stat
-import sys
-import tarfile
-import time
-import traceback
-import urllib2
-import zipfile
-from string import Template
-from tool_shed.util import common_util
-import tool_shed.util.shed_util_common as suc
-from galaxy.datatypes import checkers
-
-log = logging.getLogger( __name__ )
-
-# Set no activity timeout to 20 minutes.
-NO_OUTPUT_TIMEOUT = 1200.0
-INSTALLATION_LOG = 'INSTALLATION.log'
-
-
-class CompressedFile( object ):
-
- def __init__( self, file_path, mode='r' ):
- if istar( file_path ):
- self.file_type = 'tar'
- elif iszip( file_path ) and not isjar( file_path ):
- self.file_type = 'zip'
- self.file_name = os.path.splitext( os.path.basename( file_path ) )[ 0 ]
- if self.file_name.endswith( '.tar' ):
- self.file_name = os.path.splitext( self.file_name )[ 0 ]
- self.type = self.file_type
- method = 'open_%s' % self.file_type
- if hasattr( self, method ):
- self.archive = getattr( self, method )( file_path, mode )
- else:
- raise NameError( 'File type %s specified, no open method found.' % self.file_type )
-
- def extract( self, path ):
- '''Determine the path to which the archive should be extracted.'''
- contents = self.getmembers()
- extraction_path = path
- if len( contents ) == 1:
- # The archive contains a single file, return the extraction path.
- if self.isfile( contents[ 0 ] ):
- extraction_path = os.path.join( path, self.file_name )
- if not os.path.exists( extraction_path ):
- os.makedirs( extraction_path )
- self.archive.extractall( extraction_path )
- else:
- # Get the common prefix for all the files in the archive. If the common prefix ends with a slash,
- # or self.isdir() returns True, the archive contains a single directory with the desired contents.
- # Otherwise, it contains multiple files and/or directories at the root of the archive.
- common_prefix = os.path.commonprefix( [ self.getname( item ) for item in contents ] )
- if len( common_prefix ) >= 1 and not common_prefix.endswith( os.sep ) and self.isdir( self.getmember( common_prefix ) ):
- common_prefix += os.sep
- if common_prefix.endswith( os.sep ):
- self.archive.extractall( os.path.join( path ) )
- extraction_path = os.path.join( path, common_prefix )
- else:
- extraction_path = os.path.join( path, self.file_name )
- if not os.path.exists( extraction_path ):
- os.makedirs( extraction_path )
- self.archive.extractall( os.path.join( extraction_path ) )
- return os.path.abspath( extraction_path )
-
- def getmembers_tar( self ):
- return self.archive.getmembers()
-
- def getmembers_zip( self ):
- return self.archive.infolist()
-
- def getname_tar( self, item ):
- return item.name
-
- def getname_zip( self, item ):
- return item.filename
-
- def getmember( self, name ):
- for member in self.getmembers():
- if self.getname( member ) == name:
- return member
-
- def getmembers( self ):
- return getattr( self, 'getmembers_%s' % self.type )()
-
- def getname( self, member ):
- return getattr( self, 'getname_%s' % self.type )( member )
-
- def isdir( self, member ):
- return getattr( self, 'isdir_%s' % self.type )( member )
-
- def isdir_tar( self, member ):
- return member.isdir()
-
- def isdir_zip( self, member ):
- if member.filename.endswith( os.sep ):
- return True
- return False
-
- def isfile( self, member ):
- if not self.isdir( member ):
- return True
- return False
-
- def open_tar( self, filepath, mode ):
- return tarfile.open( filepath, mode, errorlevel=0 )
-
- def open_zip( self, filepath, mode ):
- return zipfile.ZipFile( filepath, mode )
-
-def assert_directory_executable( full_path ):
- """
- Return True if a symbolic link or directory exists and is executable, but if
- full_path is a file, return False.
- """
- if full_path is None:
- return False
- if os.path.isfile( full_path ):
- return False
- if os.path.isdir( full_path ):
- # Make sure the owner has execute permission on the directory.
- # See http://docs.python.org/2/library/stat.html
- if stat.S_IXUSR & os.stat( full_path )[ stat.ST_MODE ] == 64:
- return True
- return False
-
-def assert_directory_exists( full_path ):
- """
- Return True if a symbolic link or directory exists, but if full_path is a file,
- return False. """
- if full_path is None:
- return False
- if os.path.isfile( full_path ):
- return False
- if os.path.isdir( full_path ):
- return True
- return False
-
-def assert_file_executable( full_path ):
- """
- Return True if a symbolic link or file exists and is executable, but if full_path
- is a directory, return False.
- """
- if full_path is None:
- return False
- if os.path.isdir( full_path ):
- return False
- if os.path.exists( full_path ):
- # Make sure the owner has execute permission on the file.
- # See http://docs.python.org/2/library/stat.html
- if stat.S_IXUSR & os.stat( full_path )[ stat.ST_MODE ] == 64:
- return True
- return False
-
-def assert_file_exists( full_path ):
- """
- Return True if a symbolic link or file exists, but if full_path is a directory,
- return False.
- """
- if full_path is None:
- return False
- if os.path.isdir( full_path ):
- return False
- if os.path.exists( full_path ):
- return True
- return False
-
-def create_env_var_dict( elem, install_environment ):
- env_var_name = elem.get( 'name', 'PATH' )
- env_var_action = elem.get( 'action', 'prepend_to' )
- env_var_text = None
- tool_dependency_install_dir = install_environment.install_dir
- tool_shed_repository_install_dir = install_environment.tool_shed_repository_install_dir
- if elem.text and elem.text.find( 'REPOSITORY_INSTALL_DIR' ) >= 0:
- if tool_shed_repository_install_dir and elem.text.find( '$REPOSITORY_INSTALL_DIR' ) != -1:
- env_var_text = elem.text.replace( '$REPOSITORY_INSTALL_DIR', tool_shed_repository_install_dir )
- return dict( name=env_var_name, action=env_var_action, value=env_var_text )
- else:
- env_var_text = elem.text.replace( '$REPOSITORY_INSTALL_DIR', tool_dependency_install_dir )
- return dict( name=env_var_name, action=env_var_action, value=env_var_text )
- if elem.text and elem.text.find( 'INSTALL_DIR' ) >= 0:
- if tool_dependency_install_dir:
- env_var_text = elem.text.replace( '$INSTALL_DIR', tool_dependency_install_dir )
- return dict( name=env_var_name, action=env_var_action, value=env_var_text )
- else:
- env_var_text = elem.text.replace( '$INSTALL_DIR', tool_shed_repository_install_dir )
- return dict( name=env_var_name, action=env_var_action, value=env_var_text )
- if elem.text:
- # Allow for environment variables that contain neither REPOSITORY_INSTALL_DIR nor INSTALL_DIR
- # since there may be command line parameters that are tuned for a Galaxy instance. Allowing them
- # to be set in one location rather than being hard coded into each tool config is the best approach.
- # For example:
- # <environment_variable name="GATK2_SITE_OPTIONS" action="set_to">
- # "--num_threads 4 --num_cpu_threads_per_data_thread 3 --phone_home STANDARD"
- # </environment_variable>
- return dict( name=env_var_name, action=env_var_action, value=elem.text)
- return None
-
-def download_binary( url, work_dir ):
- """Download a pre-compiled binary from the specified URL."""
- downloaded_filename = os.path.split( url )[ -1 ]
- dir = url_download( work_dir, downloaded_filename, url, extract=False )
- return downloaded_filename
-
-def egrep_escape( text ):
- """Escape ``text`` to allow literal matching using egrep."""
- regex = re.escape( text )
- # Seems like double escaping is needed for \
- regex = regex.replace( '\\\\', '\\\\\\' )
- # Triple-escaping seems to be required for $ signs
- regex = regex.replace( r'\$', r'\\\$' )
- # Whereas single quotes should not be escaped
- regex = regex.replace( r"\'", "'" )
- return regex
-
-def evaluate_template( text, install_environment ):
- """
- Substitute variables defined in XML blocks from dependencies file. The value of the received
- repository_install_dir is the root installation directory of the repository that contains the
- tool dependency. The value of the received install_dir is the root installation directory of
- the tool_dependency.
- """
- return Template( text ).safe_substitute( get_env_var_values( install_environment ) )
-
-def format_traceback():
- ex_type, ex, tb = sys.exc_info()
- return ''.join( traceback.format_tb( tb ) )
-
-def get_env_shell_file_path( installation_directory ):
- env_shell_file_name = 'env.sh'
- default_location = os.path.abspath( os.path.join( installation_directory, env_shell_file_name ) )
- if os.path.exists( default_location ):
- return default_location
- for root, dirs, files in os.walk( installation_directory ):
- for name in files:
- if name == env_shell_file_name:
- return os.path.abspath( os.path.join( root, name ) )
- return None
-
-def get_env_shell_file_paths( app, elem ):
- # Currently only the following tag set is supported.
- # <repository toolshed="http://localhost:9009/" name="package_numpy_1_7" owner="test" changeset_revision="c84c6a8be056">
- # <package name="numpy" version="1.7.1" />
- # </repository>
- env_shell_file_paths = []
- toolshed = elem.get( 'toolshed', None )
- repository_name = elem.get( 'name', None )
- repository_owner = elem.get( 'owner', None )
- changeset_revision = elem.get( 'changeset_revision', None )
- if toolshed and repository_name and repository_owner and changeset_revision:
- # The protocol is not stored, but the port is if it exists.
- toolshed = common_util.remove_protocol_from_tool_shed_url( toolshed )
- repository = suc.get_repository_for_dependency_relationship( app, toolshed, repository_name, repository_owner, changeset_revision )
- if repository:
- for sub_elem in elem:
- tool_dependency_type = sub_elem.tag
- tool_dependency_name = sub_elem.get( 'name' )
- tool_dependency_version = sub_elem.get( 'version' )
- if tool_dependency_type and tool_dependency_name and tool_dependency_version:
- # Get the tool_dependency so we can get its installation directory.
- tool_dependency = None
- for tool_dependency in repository.tool_dependencies:
- if tool_dependency.type == tool_dependency_type and \
- tool_dependency.name == tool_dependency_name and \
- tool_dependency.version == tool_dependency_version:
- break
- if tool_dependency:
- tool_dependency_key = '%s/%s' % ( tool_dependency_name, tool_dependency_version )
- installation_directory = tool_dependency.installation_directory( app )
- env_shell_file_path = get_env_shell_file_path( installation_directory )
- if env_shell_file_path:
- env_shell_file_paths.append( env_shell_file_path )
- else:
- error_message = "Skipping tool dependency definition because unable to locate env.sh file for tool dependency "
- error_message += "type %s, name %s, version %s for repository %s" % \
- ( str( tool_dependency_type ), str( tool_dependency_name ), str( tool_dependency_version ), str( repository.name ) )
- log.debug( error_message )
- continue
- else:
- error_message = "Skipping tool dependency definition because unable to locate tool dependency "
- error_message += "type %s, name %s, version %s for repository %s" % \
- ( str( tool_dependency_type ), str( tool_dependency_name ), str( tool_dependency_version ), str( repository.name ) )
- log.debug( error_message )
- continue
- else:
- error_message = "Skipping invalid tool dependency definition: type %s, name %s, version %s." % \
- ( str( tool_dependency_type ), str( tool_dependency_name ), str( tool_dependency_version ) )
- log.debug( error_message )
- continue
- else:
- error_message = "Skipping set_environment_for_install definition because unable to locate required installed tool shed repository: "
- error_message += "toolshed %s, name %s, owner %s, changeset_revision %s." % \
- ( str( toolshed ), str( repository_name ), str( repository_owner ), str( changeset_revision ) )
- log.debug( error_message )
- else:
- error_message = "Skipping invalid set_environment_for_install definition: toolshed %s, name %s, owner %s, changeset_revision %s." % \
- ( str( toolshed ), str( repository_name ), str( repository_owner ), str( changeset_revision ) )
- log.debug( error_message )
- return env_shell_file_paths
-
-def get_env_shell_file_paths_from_setup_environment_elem( app, all_env_shell_file_paths, elem, action_dict ):
- """
- Parse an XML tag set to discover all child repository dependency tags and define the path to an env.sh file associated
- with the repository (this requires the repository dependency to be in an installed state). The received action_dict
- will be updated with these discovered paths and returned to the caller. This method handles tool dependency definition
- tag sets <setup_r_environment>, <setup_ruby_environment> and <setup_perl_environment>.
- """
- # An example elem is:
- # <action type="setup_perl_environment">
- # <repository name="package_perl_5_18" owner="iuc">
- # <package name="perl" version="5.18.1" />
- # </repository>
- # <repository name="package_expat_2_1" owner="iuc" prior_installation_required="True">
- # <package name="expat" version="2.1.0" />
- # </repository>
- # <package>http://search.cpan.org/CPAN/authors/id/T/TO/TODDR/XML-Parser-2.41.tar.gz</package>
- # <package>http://search.cpan.org/CPAN/authors/id/L/LD/LDS/CGI.pm-3.43.tar.gz</package>
- # </action>
- for action_elem in elem:
- if action_elem.tag == 'repository':
- env_shell_file_paths = get_env_shell_file_paths( app, action_elem )
- all_env_shell_file_paths.extend( env_shell_file_paths )
- if all_env_shell_file_paths:
- action_dict[ 'env_shell_file_paths' ] = all_env_shell_file_paths
- action_dict[ 'action_shell_file_paths' ] = env_shell_file_paths
- return action_dict
-
-def get_env_var_values( install_environment ):
- """
- Return a dictionary of values, some of which enable substitution of reserved words for the values.
- The received install_enviroment object has 2 important attributes for reserved word substitution:
- install_environment.tool_shed_repository_install_dir is the root installation directory of the repository
- that contains the tool dependency being installed, and install_environment.install_dir is the root
- installation directory of the tool dependency.
- """
- env_var_dict = {}
- env_var_dict[ 'REPOSITORY_INSTALL_DIR' ] = install_environment.tool_shed_repository_install_dir
- env_var_dict[ 'INSTALL_DIR' ] = install_environment.install_dir
- env_var_dict[ 'system_install' ] = install_environment.install_dir
- # If the Python interpreter is 64bit then we can safely assume that the underlying system is also 64bit.
- env_var_dict[ '__is64bit__' ] = sys.maxsize > 2**32
- return env_var_dict
-
-def isbz2( file_path ):
- return checkers.is_bz2( file_path )
-
-def isgzip( file_path ):
- return checkers.is_gzip( file_path )
-
-def isjar( file_path ):
- return iszip( file_path ) and file_path.endswith( '.jar' )
-
-def istar( file_path ):
- return tarfile.is_tarfile( file_path )
-
-def iszip( file_path ):
- return checkers.check_zip( file_path )
-
-def is_compressed( file_path ):
- if isjar( file_path ):
- return False
- else:
- return iszip( file_path ) or isgzip( file_path ) or istar( file_path ) or isbz2( file_path )
-
-def make_directory( full_path ):
- if not os.path.exists( full_path ):
- os.makedirs( full_path )
-
-def move_directory_files( current_dir, source_dir, destination_dir ):
- source_directory = os.path.abspath( os.path.join( current_dir, source_dir ) )
- destination_directory = os.path.join( destination_dir )
- if not os.path.isdir( destination_directory ):
- os.makedirs( destination_directory )
- symlinks = []
- regular_files = []
- for file_name in os.listdir( source_directory ):
- source_file = os.path.join( source_directory, file_name )
- destination_file = os.path.join( destination_directory, file_name )
- files_tuple = ( source_file, destination_file )
- if os.path.islink( source_file ):
- symlinks.append( files_tuple )
- else:
- regular_files.append( files_tuple )
- for source_file, destination_file in symlinks:
- shutil.move( source_file, destination_file )
- for source_file, destination_file in regular_files:
- shutil.move( source_file, destination_file )
-
-def move_file( current_dir, source, destination, rename_to=None ):
- source_path = os.path.abspath( os.path.join( current_dir, source ) )
- source_file = os.path.basename( source_path )
- if rename_to is not None:
- destination_file = rename_to
- destination_directory = os.path.join( destination )
- destination_path = os.path.join( destination_directory, destination_file )
- else:
- destination_directory = os.path.join( destination )
- destination_path = os.path.join( destination_directory, source_file )
- if not os.path.exists( destination_directory ):
- os.makedirs( destination_directory )
- shutil.move( source_path, destination_path )
-
-def parse_package_elem( package_elem, platform_info_dict=None, include_after_install_actions=True ):
- """
- Parse a <package> element within a tool dependency definition and return a list of action tuples.
- This method is called when setting metadata on a repository that includes a tool_dependencies.xml
- file or when installing a repository that includes a tool_dependencies.xml file. If installing,
- platform_info_dict must be a valid dictionary and include_after_install_actions must be True.
- """
- # The actions_elem_tuples list contains <actions> tag sets (possibly inside of an <actions_group>
- # tag set) to be processed in the order they are defined in the tool_dependencies.xml file.
- actions_elem_tuples = []
- # The tag sets that will go into the actions_elem_list are those that install a compiled binary if
- # the architecture and operating system match its defined attributes. If compiled binary is not
- # installed, the first <actions> tag set [following those that have the os and architecture attributes]
- # that does not have os or architecture attributes will be processed. This tag set must contain the
- # recipe for downloading and compiling source.
- actions_elem_list = []
- for elem in package_elem:
- if elem.tag == 'actions':
- # We have an <actions> tag that should not be matched against a specific combination of
- # architecture and operating system.
- in_actions_group = False
- actions_elem_tuples.append( ( in_actions_group, elem ) )
- elif elem.tag == 'actions_group':
- # We have an actions_group element, and its child <actions> elements should therefore be compared
- # with the current operating system
- # and processor architecture.
- in_actions_group = True
- # Record the number of <actions> elements so we can filter out any <action> elements that precede
- # <actions> elements.
- actions_elem_count = len( elem.findall( 'actions' ) )
- # Record the number of <actions> elements that have both architecture and os specified, in order
- # to filter out any platform-independent <actions> elements that come before platform-specific
- # <actions> elements.
- platform_actions_elements = []
- for actions_elem in elem.findall( 'actions' ):
- if actions_elem.get( 'architecture' ) is not None and actions_elem.get( 'os' ) is not None:
- platform_actions_elements.append( actions_elem )
- platform_actions_element_count = len( platform_actions_elements )
- platform_actions_elements_processed = 0
- actions_elems_processed = 0
- # The tag sets that will go into the after_install_actions list are <action> tags instead of <actions>
- # tags. These will be processed only if they are at the very end of the <actions_group> tag set (after
- # all <actions> tag sets). See below for details.
- after_install_actions = []
- # Inspect the <actions_group> element and build the actions_elem_list and the after_install_actions list.
- for child_element in elem:
- if child_element.tag == 'actions':
- actions_elems_processed += 1
- system = child_element.get( 'os' )
- architecture = child_element.get( 'architecture' )
- # Skip <actions> tags that have only one of architecture or os specified, in order for the
- # count in platform_actions_elements_processed to remain accurate.
- if ( system and not architecture ) or ( architecture and not system ):
- log.debug( 'Error: Both architecture and os attributes must be specified in an <actions> tag.' )
- continue
- # Since we are inside an <actions_group> tag set, compare it with our current platform information
- # and filter the <actions> tag sets that don't match. Require both the os and architecture attributes
- # to be defined in order to find a match.
- if system and architecture:
- platform_actions_elements_processed += 1
- # If either the os or architecture do not match the platform, this <actions> tag will not be
- # considered a match. Skip it and proceed with checking the next one.
- if platform_info_dict:
- if platform_info_dict[ 'os' ] != system or platform_info_dict[ 'architecture' ] != architecture:
- continue
- else:
- # We must not be installing a repository into Galaxy, so determining if we can install a
- # binary is not necessary.
- continue
- else:
- # <actions> tags without both os and architecture attributes are only allowed to be specified
- # after platform-specific <actions> tags. If we find a platform-independent <actions> tag before
- # all platform-specific <actions> tags have been processed.
- if platform_actions_elements_processed < platform_actions_element_count:
- debug_msg = 'Error: <actions> tags without os and architecture attributes are only allowed '
- debug_msg += 'after all <actions> tags with os and architecture attributes have been defined. '
- debug_msg += 'Skipping the <actions> tag set with no os or architecture attributes that has '
- debug_msg += 'been defined between two <actions> tag sets that have these attributes defined. '
- log.debug( debug_msg )
- continue
- # If we reach this point, it means one of two things: 1) The system and architecture attributes are
- # not defined in this <actions> tag, or 2) The system and architecture attributes are defined, and
- # they are an exact match for the current platform. Append the child element to the list of elements
- # to process.
- actions_elem_list.append( child_element )
- elif child_element.tag == 'action':
- # Any <action> tags within an <actions_group> tag set must come after all <actions> tags.
- if actions_elems_processed == actions_elem_count:
- # If all <actions> elements have been processed, then this <action> element can be appended to the
- # list of actions to execute within this group.
- after_install_actions.append( child_element )
- else:
- # If any <actions> elements remain to be processed, then log a message stating that <action>
- # elements are not allowed to precede any <actions> elements within an <actions_group> tag set.
- debug_msg = 'Error: <action> tags are only allowed at the end of an <actions_group> tag set after '
- debug_msg += 'all <actions> tags. Skipping <%s> element with type %s.' % \
- ( child_element.tag, child_element.get( 'type', 'unknown' ) )
- log.debug( debug_msg )
- continue
- if platform_info_dict is None and not include_after_install_actions:
- # We must be setting metadata on a repository.
- if len( actions_elem_list ) >= 1:
- actions_elem_tuples.append( ( in_actions_group, actions_elem_list[ 0 ] ) )
- else:
- # We are processing a recipe that contains only an <actions_group> tag set for installing a binary,
- # but does not include an additional recipe for installing and compiling from source.
- actions_elem_tuples.append( ( in_actions_group, [] ) )
- elif platform_info_dict is not None and include_after_install_actions:
- # We must be installing a repository.
- if after_install_actions:
- actions_elem_list.extend( after_install_actions )
- actions_elem_tuples.append( ( in_actions_group, actions_elem_list ) )
- else:
- # Skip any element that is not <actions> or <actions_group> - this will skip comments, <repository> tags
- # and <readme> tags.
- in_actions_group = False
- continue
- return actions_elem_tuples
-
-def __shellquote( s ):
- """Quote and escape the supplied string for use in shell expressions."""
- return "'" + s.replace( "'", "'\\''" ) + "'"
-
-def url_download( install_dir, downloaded_file_name, download_url, extract=True ):
- file_path = os.path.join( install_dir, downloaded_file_name )
- src = None
- dst = None
- # Set a timer so we don't sit here forever.
- start_time = time.time()
- try:
- src = urllib2.urlopen( download_url )
- dst = open( file_path, 'wb' )
- while True:
- chunk = src.read( suc.CHUNK_SIZE )
- if chunk:
- dst.write( chunk )
- else:
- break
- time_taken = time.time() - start_time
- if time_taken > NO_OUTPUT_TIMEOUT:
- err_msg = 'Downloading from URL %s took longer than the defined timeout period of %.1f seconds.' % \
- ( str( download_url ), NO_OUTPUT_TIMEOUT )
- raise Exception( err_msg )
- except Exception, e:
- err_msg = err_msg = 'Error downloading from URL\n%s:\n%s' % ( str( download_url ), str( e ) )
- raise Exception( err_msg )
- finally:
- if src:
- src.close()
- if dst:
- dst.close()
- if extract:
- if istar( file_path ) or ( iszip( file_path ) and not isjar( file_path ) ):
- archive = CompressedFile( file_path )
- extraction_path = archive.extract( install_dir )
- else:
- extraction_path = os.path.abspath( install_dir )
- else:
- extraction_path = os.path.abspath( install_dir )
- return extraction_path
-
-def zipfile_ok( path_to_archive ):
- """
- This function is a bit pedantic and not functionally necessary. It checks whether there is no file pointing outside of the extraction,
- because ZipFile.extractall() has some potential security holes. See python zipfile documentation for more details.
- """
- basename = os.path.realpath( os.path.dirname( path_to_archive ) )
- zip_archive = zipfile.ZipFile( path_to_archive )
- for member in zip_archive.namelist():
- member_path = os.path.realpath( os.path.join( basename, member ) )
- if not member_path.startswith( basename ):
- return False
- return True
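The url_download() helper removed above, now presumably carried by the Download mix-in that the recipe step classes inherit in this changeset, streams the response in CHUNK_SIZE pieces and aborts once NO_OUTPUT_TIMEOUT is exceeded. A condensed, standalone sketch of that pattern:

    import time
    import urllib2

    CHUNK_SIZE = 2**20           # 1Mb
    NO_OUTPUT_TIMEOUT = 1200.0   # 20 minutes

    def stream_download( download_url, file_path ):
        start_time = time.time()
        src = urllib2.urlopen( download_url )
        try:
            dst = open( file_path, 'wb' )
            try:
                while True:
                    chunk = src.read( CHUNK_SIZE )
                    if not chunk:
                        break
                    dst.write( chunk )
                    # Abort if the transfer has been running too long.
                    if time.time() - start_time > NO_OUTPUT_TIMEOUT:
                        raise Exception( 'Download of %s timed out.' % download_url )
            finally:
                dst.close()
        finally:
            src.close()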
diff -r f8c6577665c383645e934629784ece516b498d9b -r 85aa79e9ab9a87a5b5fea0ed1039aeb9a5c23642 lib/tool_shed/scripts/check_filesystem_for_empty_tool_dependency_installation_paths.py
--- a/lib/tool_shed/scripts/check_filesystem_for_empty_tool_dependency_installation_paths.py
+++ b/lib/tool_shed/scripts/check_filesystem_for_empty_tool_dependency_installation_paths.py
@@ -7,7 +7,7 @@
new_path.extend( sys.path[1:] )
sys.path = new_path
-from tool_shed.galaxy_install.tool_dependencies import td_common_util
+from tool_shed.util.basic_util import INSTALLATION_LOG
def main( args ):
empty_installation_paths = []
@@ -31,13 +31,13 @@
no_files = False
if len( dirs ) == 0:
no_dirs = True
- if len( files ) == 0 or len( files ) == 1 and td_common_util.INSTALLATION_LOG in files:
+ if len( files ) == 0 or len( files ) == 1 and INSTALLATION_LOG in files:
no_files = True
if no_files and no_dirs and root not in empty_installation_paths:
empty_installation_paths.append( root )
if len( empty_installation_paths ) > 0:
print 'The following %d tool dependency installation directories were found to be empty or contain only the file %s.' % \
- ( len( empty_installation_paths ), td_common_util.INSTALLATION_LOG )
+ ( len( empty_installation_paths ), INSTALLATION_LOG )
if args.delete:
for path in empty_installation_paths:
if os.path.exists( path ):
diff -r f8c6577665c383645e934629784ece516b498d9b -r 85aa79e9ab9a87a5b5fea0ed1039aeb9a5c23642 lib/tool_shed/scripts/check_s3_for_empty_tool_dependency_installation_paths.py
--- a/lib/tool_shed/scripts/check_s3_for_empty_tool_dependency_installation_paths.py
+++ b/lib/tool_shed/scripts/check_s3_for_empty_tool_dependency_installation_paths.py
@@ -12,8 +12,7 @@
import boto
-from tool_shed.galaxy_install.tool_dependencies import td_common_util
-
+from tool_shed.util.basic_util import INSTALLATION_LOG
class BucketList( object ):
@@ -77,7 +76,7 @@
# This would not be the case in a Galaxy instance, since the Galaxy admin will need to verify the contents of
# the installation path in order to determine which action should be taken.
elif len( tool_dependency_path_contents ) == 2 and \
- tool_dependency_path_contents[1].name.endswith( td_common_util.INSTALLATION_LOG ):
+ tool_dependency_path_contents[1].name.endswith( INSTALLATION_LOG ):
empty_directories.append( tool_dependency_path_contents[ 0 ] )
return [ item.name for item in empty_directories ]
@@ -106,7 +105,7 @@
print 'No empty installation paths found, exiting.'
return 0
print 'The following %d tool dependency installation paths were found to be empty or contain only the file %s.' % \
- ( len( dependency_cleaner.empty_installation_paths ), td_common_util.INSTALLATION_LOG )
+ ( len( dependency_cleaner.empty_installation_paths ), INSTALLATION_LOG )
if asbool( args.delete ):
dependency_cleaner.delete_empty_installation_paths()
else:
diff -r f8c6577665c383645e934629784ece516b498d9b -r 85aa79e9ab9a87a5b5fea0ed1039aeb9a5c23642 lib/tool_shed/util/basic_util.py
--- a/lib/tool_shed/util/basic_util.py
+++ b/lib/tool_shed/util/basic_util.py
@@ -1,5 +1,8 @@
import logging
import os
+import shutil
+import sys
+from string import Template
from galaxy.util import unicodify
@@ -10,8 +13,52 @@
log = logging.getLogger( __name__ )
+CHUNK_SIZE = 2**20 # 1Mb
+INSTALLATION_LOG = 'INSTALLATION.log'
+# Set no activity timeout to 20 minutes.
+NO_OUTPUT_TIMEOUT = 1200.0
+MAXDIFFSIZE = 8000
MAX_DISPLAY_SIZE = 32768
+def evaluate_template( text, install_environment ):
+ """
+ Substitute variables defined in XML blocks from dependencies file. The value of the received
+ repository_install_dir is the root installation directory of the repository that contains the
+ tool dependency. The value of the received install_dir is the root installation directory of
+ the tool_dependency.
+ """
+ return Template( text ).safe_substitute( get_env_var_values( install_environment ) )
+
+def get_env_var_values( install_environment ):
+ """
+ Return a dictionary of values, some of which enable substitution of reserved words for the values.
+ The received install_environment object has 2 important attributes for reserved word substitution:
+ install_environment.tool_shed_repository_install_dir is the root installation directory of the repository
+ that contains the tool dependency being installed, and install_environment.install_dir is the root
+ installation directory of the tool dependency.
+ """
+ env_var_dict = {}
+ env_var_dict[ 'REPOSITORY_INSTALL_DIR' ] = install_environment.tool_shed_repository_install_dir
+ env_var_dict[ 'INSTALL_DIR' ] = install_environment.install_dir
+ env_var_dict[ 'system_install' ] = install_environment.install_dir
+ # If the Python interpreter is 64bit then we can safely assume that the underlying system is also 64bit.
+ env_var_dict[ '__is64bit__' ] = sys.maxsize > 2**32
+ return env_var_dict
+
+def move_file( current_dir, source, destination, rename_to=None ):
+ source_path = os.path.abspath( os.path.join( current_dir, source ) )
+ source_file = os.path.basename( source_path )
+ if rename_to is not None:
+ destination_file = rename_to
+ destination_directory = os.path.join( destination )
+ destination_path = os.path.join( destination_directory, destination_file )
+ else:
+ destination_directory = os.path.join( destination )
+ destination_path = os.path.join( destination_directory, source_file )
+ if not os.path.exists( destination_directory ):
+ os.makedirs( destination_directory )
+ shutil.move( source_path, destination_path )
+
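evaluate_template() above is a thin wrapper over string.Template.safe_substitute(), which is why recipe text can freely mix shell variables with the reserved words: unrecognized placeholders such as $PATH are left untouched instead of raising an error. A minimal sketch with hypothetical values:

    from string import Template

    text = 'PATH=$PATH:$INSTALL_DIR/bin; export PATH'
    values = { 'INSTALL_DIR': '/opt/deps/samtools/0.1.19',
               'REPOSITORY_INSTALL_DIR': '/repos/iuc/package_samtools' }
    # safe_substitute() replaces the known names and leaves $PATH alone.
    result = Template( text ).safe_substitute( values )
    assert result == 'PATH=$PATH:/opt/deps/samtools/0.1.19/bin; export PATH'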
def remove_dir( dir ):
"""Attempt to remove a directory from disk."""
if dir:
diff -r f8c6577665c383645e934629784ece516b498d9b -r 85aa79e9ab9a87a5b5fea0ed1039aeb9a5c23642 lib/tool_shed/util/commit_util.py
--- a/lib/tool_shed/util/commit_util.py
+++ b/lib/tool_shed/util/commit_util.py
@@ -12,6 +12,7 @@
from galaxy.util.odict import odict
from galaxy.web import url_for
import tool_shed.util.shed_util_common as suc
+from tool_shed.util import basic_util
from tool_shed.util import hg_util
from tool_shed.util import tool_util
from tool_shed.util import xml_util
@@ -126,7 +127,7 @@
bzipped_file = bz2.BZ2File( uploaded_file_name, 'rb' )
while 1:
try:
- chunk = bzipped_file.read( suc.CHUNK_SIZE )
+ chunk = bzipped_file.read( basic_util.CHUNK_SIZE )
except IOError:
os.close( fd )
os.remove( uncompressed )
@@ -239,7 +240,7 @@
gzipped_file = gzip.GzipFile( uploaded_file_name, 'rb' )
while 1:
try:
- chunk = gzipped_file.read( suc.CHUNK_SIZE )
+ chunk = gzipped_file.read( basic_util.CHUNK_SIZE )
except IOError, e:
os.close( fd )
os.remove( uncompressed )
diff -r f8c6577665c383645e934629784ece516b498d9b -r 85aa79e9ab9a87a5b5fea0ed1039aeb9a5c23642 lib/tool_shed/util/hg_util.py
--- a/lib/tool_shed/util/hg_util.py
+++ b/lib/tool_shed/util/hg_util.py
@@ -4,6 +4,7 @@
from datetime import datetime
from time import gmtime
from time import strftime
+import tempfile
from galaxy.util import listify
from galaxy import eggs
diff -r f8c6577665c383645e934629784ece516b498d9b -r 85aa79e9ab9a87a5b5fea0ed1039aeb9a5c23642 lib/tool_shed/util/metadata_util.py
--- a/lib/tool_shed/util/metadata_util.py
+++ b/lib/tool_shed/util/metadata_util.py
@@ -22,11 +22,12 @@
from tool_shed.util import tool_dependency_util
from tool_shed.util import tool_util
from tool_shed.util import xml_util
-from tool_shed.galaxy_install.tool_dependencies import td_common_util
import tool_shed.repository_types.util as rt_util
log = logging.getLogger( __name__ )
+REPOSITORY_DATA_MANAGER_CONFIG_FILENAME = 'data_manager_conf.xml'
+
# Repository metadata comparisons for changeset revisions.
EQUAL = 'equal'
NO_METADATA = 'no metadata'
@@ -37,7 +38,7 @@
NOT_TOOL_CONFIGS = [ suc.DATATYPES_CONFIG_FILENAME,
rt_util.REPOSITORY_DEPENDENCY_DEFINITION_FILENAME,
rt_util.TOOL_DEPENDENCY_DEFINITION_FILENAME,
- suc.REPOSITORY_DATA_MANAGER_CONFIG_FILENAME ]
+ REPOSITORY_DATA_MANAGER_CONFIG_FILENAME ]
def add_tool_versions( trans, id, repository_metadata, changeset_revisions ):
# Build a dictionary of { 'tool id' : 'parent tool id' } pairs for each tool in repository_metadata.
@@ -750,7 +751,7 @@
metadata_dict = generate_data_manager_metadata( app,
repository,
files_dir,
- hg_util.get_config_from_disk( suc.REPOSITORY_DATA_MANAGER_CONFIG_FILENAME, files_dir ),
+ hg_util.get_config_from_disk( REPOSITORY_DATA_MANAGER_CONFIG_FILENAME, files_dir ),
metadata_dict,
shed_config_dict=shed_config_dict )
@@ -809,10 +810,10 @@
if package_install_version == '1.0':
# Complex repository dependencies can be defined within the last <actions> tag set contained in an
# <actions_group> tag set. Comments, <repository> tag sets and <readme> tag sets will be skipped
- # in td_common_util.parse_package_elem().
- actions_elem_tuples = td_common_util.parse_package_elem( sub_elem,
- platform_info_dict=None,
- include_after_install_actions=False )
+ # in tool_dependency_util.parse_package_elem().
+ actions_elem_tuples = tool_dependency_util.parse_package_elem( sub_elem,
+ platform_info_dict=None,
+ include_after_install_actions=False )
if actions_elem_tuples:
# We now have a list of a single tuple that looks something like:
# [(True, <Element 'actions' at 0x104017850>)]
diff -r f8c6577665c383645e934629784ece516b498d9b -r 85aa79e9ab9a87a5b5fea0ed1039aeb9a5c23642 lib/tool_shed/util/shed_util_common.py
--- a/lib/tool_shed/util/shed_util_common.py
+++ b/lib/tool_shed/util/shed_util_common.py
@@ -24,11 +24,8 @@
log = logging.getLogger( __name__ )
-CHUNK_SIZE = 2**20 # 1Mb
MAX_CONTENT_SIZE = 1048576
-MAXDIFFSIZE = 8000
DATATYPES_CONFIG_FILENAME = 'datatypes_conf.xml'
-REPOSITORY_DATA_MANAGER_CONFIG_FILENAME = 'data_manager_conf.xml'
new_repo_email_alert_template = """
Sharable link: ${sharable_link}
This diff is so big that we needed to truncate the remainder.
Repository URL: https://bitbucket.org/galaxy/galaxy-central/
--
This is a commit notification from bitbucket.org. You are receiving
this because you have the service enabled, addressing the recipient of
this email.
commit/galaxy-central: jmchilton: Collections: Add API ability to copy HDCA mirroring HDA operations.
by commits-noreply@bitbucket.org 28 May '14
1 new commit in galaxy-central:
https://bitbucket.org/galaxy/galaxy-central/commits/a06c6c99d74f/
Changeset: a06c6c99d74f
User: jmchilton
Date: 2014-05-28 22:28:55
Summary: Collections: Add API ability to copy HDCA mirroring HDA operations.
Affected #: 3 files
diff -r baeea4ce794df61d9e6531023b8e4c105b26a8bb -r a06c6c99d74f7aa902fb2c23e8c904df32a49791 lib/galaxy/dataset_collections/__init__.py
--- a/lib/galaxy/dataset_collections/__init__.py
+++ b/lib/galaxy/dataset_collections/__init__.py
@@ -138,6 +138,18 @@
changed = self._set_from_dict( trans, dataset_collection_instance, payload )
return changed
+ def copy(
+ self,
+ trans,
+ parent, # PRECONDITION: security checks on ability to add to parent occurred during load.
+ source,
+ encoded_source_id,
+ ):
+ assert source == "hdca" # for now
+ source_hdca = self.__get_history_collection_instance( trans, encoded_source_id )
+ parent.add_dataset_collection( source_hdca.copy() )
+ return source_hdca
+
def _set_from_dict( self, trans, dataset_collection_instance, new_data ):
# Blatantly stolen from UsesHistoryDatasetAssociationMixin.set_hda_from_dict.
diff -r baeea4ce794df61d9e6531023b8e4c105b26a8bb -r a06c6c99d74f7aa902fb2c23e8c904df32a49791 lib/galaxy/webapps/galaxy/api/history_contents.py
--- a/lib/galaxy/webapps/galaxy/api/history_contents.py
+++ b/lib/galaxy/webapps/galaxy/api/history_contents.py
@@ -292,9 +292,28 @@
return hda_dict
def __create_dataset_collection( self, trans, history, payload, **kwd ):
- create_params = api_payload_to_create_params( payload )
+ source = kwd.get("source", "new_collection")
service = trans.app.dataset_collections_service
- dataset_collection_instance = service.create( trans, parent=history, **create_params )
+ if source == "new_collection":
+ create_params = api_payload_to_create_params( payload )
+ dataset_collection_instance = service.create(
+ trans,
+ parent=history,
+ **create_params
+ )
+ elif source == "hdca":
+ content = payload.get( 'content', None )
+ if content is None:
+ raise exceptions.RequestParameterMissingException( "'content' id of target to copy is missing" )
+ dataset_collection_instance = service.copy(
+ trans=trans,
+ parent=history,
+ source="hdca",
+ encoded_source_id=content,
+ )
+ else:
+ message = "Invalid 'source' parameter in request %s" % source
+ raise exceptions.RequestParameterInvalidException(message)
return self.__collection_dict( trans, dataset_collection_instance, view="element" )
@expose_api_anonymous
diff -r baeea4ce794df61d9e6531023b8e4c105b26a8bb -r a06c6c99d74f7aa902fb2c23e8c904df32a49791 test/api/test_history_contents.py
--- a/test/api/test_history_contents.py
+++ b/test/api/test_history_contents.py
@@ -97,9 +97,7 @@
dataset_collection_response = self._post( "histories/%s/contents" % self.history_id, payload )
- self._assert_status_code_is( dataset_collection_response, 200 )
- dataset_collection = dataset_collection_response.json()
- self._assert_has_keys( dataset_collection, "url", "name", "deleted" )
+ dataset_collection = self.__check_create_collection_response( dataset_collection_response )
post_collection_count = self.__count_contents( type="dataset_collection" )
post_dataset_count = self.__count_contents( type="dataset" )
@@ -144,6 +142,23 @@
show_response = self.__show( hdca )
assert str( show_response.json()[ "name" ] ) == "newnameforpair"
+ def test_hdca_copy( self ):
+ hdca = self.dataset_collection_populator.create_pair_in_history( self.history_id ).json()
+ hdca_id = hdca[ "id" ]
+ second_history_id = self._new_history()
+ create_data = dict(
+ source='hdca',
+ content=hdca_id,
+ )
+ create_response = self._post( "histories/%s/contents/dataset_collections" % second_history_id, create_data )
+ self.__check_create_collection_response( create_response )
+
+ def __check_create_collection_response( self, response ):
+ self._assert_status_code_is( response, 200 )
+ dataset_collection = response.json()
+ self._assert_has_keys( dataset_collection, "url", "name", "deleted", "visible", "elements" )
+ return dataset_collection
+
def __show( self, contents ):
show_response = self._get( "histories/%s/contents/%ss/%s" % ( self.history_id, contents["history_content_type"], contents[ "id" ] ) )
return show_response
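In practice the new copy path can be driven over HTTP exactly as test_hdca_copy does above; a sketch against a local Galaxy instance (the host, API key, and ids below are placeholders):

    import json
    import urllib2

    galaxy_url = 'http://localhost:8080'    # placeholder instance
    api_key = 'YOUR_API_KEY'                # placeholder key
    target_history_id = 'HISTORY_ID'        # history to copy into (placeholder)
    hdca_id = 'HDCA_ID'                     # existing collection to copy (placeholder)

    # POST with source='hdca' and the collection id as 'content'; any other
    # 'source' value raises RequestParameterInvalidException.
    payload = json.dumps( dict( source='hdca', content=hdca_id, key=api_key ) )
    request = urllib2.Request( '%s/api/histories/%s/contents/dataset_collections' % ( galaxy_url, target_history_id ),
                               data=payload,
                               headers={ 'Content-Type': 'application/json' } )
    response = json.load( urllib2.urlopen( request ) )
    print response[ 'name' ], len( response[ 'elements' ] )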
Repository URL: https://bitbucket.org/galaxy/galaxy-central/
commit/galaxy-central: greg: Import fix in the Tool Shed's hg_util.py module.
by commits-noreply@bitbucket.org 28 May '14
1 new commit in galaxy-central:
https://bitbucket.org/galaxy/galaxy-central/commits/baeea4ce794d/
Changeset: baeea4ce794d
User: greg
Date: 2014-05-28 22:06:46
Summary: Import fix in the Tool Shed's hg_util.py module.
Affected #: 1 file
diff -r b1b1e5cefca5b0193c8623a9aee7a39cd1dda55c -r baeea4ce794df61d9e6531023b8e4c105b26a8bb lib/tool_shed/util/hg_util.py
--- a/lib/tool_shed/util/hg_util.py
+++ b/lib/tool_shed/util/hg_util.py
@@ -4,6 +4,7 @@
from datetime import datetime
from time import gmtime
from time import strftime
+import tempfile
from galaxy.util import listify
from galaxy import eggs
Repository URL: https://bitbucket.org/galaxy/galaxy-central/
commit/galaxy-central: greg: Eliminate the Tool Shed's td_common_util.py module by moving the functions to appropriate classes.
by commits-noreply@bitbucket.org 28 May '14
1 new commit in galaxy-central:
https://bitbucket.org/galaxy/galaxy-central/commits/b1b1e5cefca5/
Changeset: b1b1e5cefca5
User: greg
Date: 2014-05-28 21:49:02
Summary: Eliminate the Tool Shed's td_common_util.py module by moving the functions to appropriate classes.
Affected #: 15 files
diff -r 31740dffaf9ae4e345fd2d4de4ab44ed133dd567 -r b1b1e5cefca5b0193c8623a9aee7a39cd1dda55c lib/galaxy/webapps/tool_shed/controllers/repository.py
--- a/lib/galaxy/webapps/tool_shed/controllers/repository.py
+++ b/lib/galaxy/webapps/tool_shed/controllers/repository.py
@@ -3120,12 +3120,12 @@
options_dict = hg_util.get_mercurial_default_options_dict( 'diff' )
# Not quite sure if the following settings make any difference, but with a combination of them and the size check on each
# diff, we don't run out of memory when viewing the changelog of the cisortho2 repository on the test tool shed.
- options_dict[ 'maxfile' ] = suc.MAXDIFFSIZE
- options_dict[ 'maxtotal' ] = suc.MAXDIFFSIZE
+ options_dict[ 'maxfile' ] = basic_util.MAXDIFFSIZE
+ options_dict[ 'maxtotal' ] = basic_util.MAXDIFFSIZE
diffopts = mdiff.diffopts( **options_dict )
for diff in patch.diff( repo, node1=ctx_parent.node(), node2=ctx.node(), opts=diffopts ):
- if len( diff ) > suc.MAXDIFFSIZE:
- diff = util.shrink_string_by_size( diff, suc.MAXDIFFSIZE )
+ if len( diff ) > basic_util.MAXDIFFSIZE:
+ diff = util.shrink_string_by_size( diff, basic_util.MAXDIFFSIZE )
diffs.append( basic_util.to_html_string( diff ) )
modified, added, removed, deleted, unknown, ignored, clean = repo.status( node1=ctx_parent.node(), node2=ctx.node() )
anchors = modified + added + removed + deleted + unknown + ignored + clean
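For reference, the size guard middle-truncates each oversized diff before rendering. Below is a self-contained stand-in for galaxy.util's shrink_string_by_size (the real helper's join marker and head/tail split may differ):

    MAXDIFFSIZE = 8000  # moved from shed_util_common (suc) to basic_util in this changeset

    def shrink_string_by_size( value, size, join_by='..' ):
        # Stand-in: keep the head and tail of the string, drop the middle.
        if len( value ) <= size:
            return value
        half = ( size - len( join_by ) ) // 2
        return value[ :half ] + join_by + value[ -half: ]

    diff = 'x' * 20000  # placeholder oversized diff text
    if len( diff ) > MAXDIFFSIZE:
        diff = shrink_string_by_size( diff, MAXDIFFSIZE )
    print len( diff )  # 8000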
diff -r 31740dffaf9ae4e345fd2d4de4ab44ed133dd567 -r b1b1e5cefca5b0193c8623a9aee7a39cd1dda55c lib/tool_shed/galaxy_install/install_manager.py
--- a/lib/tool_shed/galaxy_install/install_manager.py
+++ b/lib/tool_shed/galaxy_install/install_manager.py
@@ -1,5 +1,7 @@
import logging
import os
+import sys
+import traceback
from galaxy import eggs
@@ -11,7 +13,6 @@
from tool_shed.util import tool_dependency_util
-from tool_shed.galaxy_install.tool_dependencies import td_common_util
from tool_shed.galaxy_install.tool_dependencies.recipe.env_file_builder import EnvFileBuilder
from tool_shed.galaxy_install.tool_dependencies.recipe.install_environment import InstallEnvironment
from tool_shed.galaxy_install.tool_dependencies.recipe.recipe_manager import StepManager
@@ -25,6 +26,10 @@
class InstallManager( object ):
+ def format_traceback( self ):
+ ex_type, ex, tb = sys.exc_info()
+ return ''.join( traceback.format_tb( tb ) )
+
def get_tool_shed_repository_install_dir( self, app, tool_shed_repository ):
return os.path.abspath( tool_shed_repository.repo_files_directory( app ) )
@@ -112,7 +117,7 @@
log.exception( 'Error installing tool dependency %s version %s.', str( tool_dependency.name ), str( tool_dependency.version ) )
# Since there was an installation error, update the tool dependency status to Error. The remove_installation_path option must
# be left False here.
- error_message = '%s\n%s' % ( td_common_util.format_traceback(), str( e ) )
+ error_message = '%s\n%s' % ( self.format_traceback(), str( e ) )
tool_dependency = tool_dependency_util.handle_tool_dependency_installation_error( app,
tool_dependency,
error_message,
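The relocated format_traceback() is a thin wrapper over sys.exc_info(); the pattern in isolation:

    import sys
    import traceback

    def format_traceback():
        # Same body as InstallManager.format_traceback() above.
        ex_type, ex, tb = sys.exc_info()
        return ''.join( traceback.format_tb( tb ) )

    try:
        raise RuntimeError( 'simulated tool dependency installation error' )
    except Exception, e:
        # Mirrors the error_message construction in install_manager.py.
        error_message = '%s\n%s' % ( format_traceback(), str( e ) )
        print error_message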
diff -r 31740dffaf9ae4e345fd2d4de4ab44ed133dd567 -r b1b1e5cefca5b0193c8623a9aee7a39cd1dda55c lib/tool_shed/galaxy_install/tool_dependencies/env_manager.py
--- /dev/null
+++ b/lib/tool_shed/galaxy_install/tool_dependencies/env_manager.py
@@ -0,0 +1,147 @@
+import logging
+import os
+import sys
+from tool_shed.util import common_util
+import tool_shed.util.shed_util_common as suc
+
+log = logging.getLogger( __name__ )
+
+
+class EnvManager( object ):
+
+ def __init__( self, app ):
+ self.app = app
+
+ def create_env_var_dict( self, elem, install_environment ):
+ env_var_name = elem.get( 'name', 'PATH' )
+ env_var_action = elem.get( 'action', 'prepend_to' )
+ env_var_text = None
+ tool_dependency_install_dir = install_environment.install_dir
+ tool_shed_repository_install_dir = install_environment.tool_shed_repository_install_dir
+ if elem.text and elem.text.find( 'REPOSITORY_INSTALL_DIR' ) >= 0:
+ if tool_shed_repository_install_dir and elem.text.find( '$REPOSITORY_INSTALL_DIR' ) != -1:
+ env_var_text = elem.text.replace( '$REPOSITORY_INSTALL_DIR', tool_shed_repository_install_dir )
+ return dict( name=env_var_name, action=env_var_action, value=env_var_text )
+ else:
+ env_var_text = elem.text.replace( '$REPOSITORY_INSTALL_DIR', tool_dependency_install_dir )
+ return dict( name=env_var_name, action=env_var_action, value=env_var_text )
+ if elem.text and elem.text.find( 'INSTALL_DIR' ) >= 0:
+ if tool_dependency_install_dir:
+ env_var_text = elem.text.replace( '$INSTALL_DIR', tool_dependency_install_dir )
+ return dict( name=env_var_name, action=env_var_action, value=env_var_text )
+ else:
+ env_var_text = elem.text.replace( '$INSTALL_DIR', tool_shed_repository_install_dir )
+ return dict( name=env_var_name, action=env_var_action, value=env_var_text )
+ if elem.text:
+ # Allow for environment variables that contain neither REPOSITORY_INSTALL_DIR nor INSTALL_DIR
+ # since there may be command line parameters that are tuned for a Galaxy instance. Allowing them
+ # to be set in one location rather than being hard coded into each tool config is the best approach.
+ # For example:
+ # <environment_variable name="GATK2_SITE_OPTIONS" action="set_to">
+ # "--num_threads 4 --num_cpu_threads_per_data_thread 3 --phone_home STANDARD"
+ # </environment_variable>
+ return dict( name=env_var_name, action=env_var_action, value=elem.text)
+ return None
+
+ def get_env_shell_file_path( self, installation_directory ):
+ env_shell_file_name = 'env.sh'
+ default_location = os.path.abspath( os.path.join( installation_directory, env_shell_file_name ) )
+ if os.path.exists( default_location ):
+ return default_location
+ for root, dirs, files in os.walk( installation_directory ):
+ for name in files:
+ if name == env_shell_file_name:
+ return os.path.abspath( os.path.join( root, name ) )
+ return None
+
+ def get_env_shell_file_paths( self, elem ):
+ # Currently only the following tag set is supported.
+ # <repository toolshed="http://localhost:9009/" name="package_numpy_1_7" owner="test" changeset_revision="c84c6a8be056">
+ # <package name="numpy" version="1.7.1" />
+ # </repository>
+ env_shell_file_paths = []
+ toolshed = elem.get( 'toolshed', None )
+ repository_name = elem.get( 'name', None )
+ repository_owner = elem.get( 'owner', None )
+ changeset_revision = elem.get( 'changeset_revision', None )
+ if toolshed and repository_name and repository_owner and changeset_revision:
+ # The protocol is not stored, but the port is if it exists.
+ toolshed = common_util.remove_protocol_from_tool_shed_url( toolshed )
+ repository = suc.get_repository_for_dependency_relationship( self.app,
+ toolshed,
+ repository_name,
+ repository_owner,
+ changeset_revision )
+ if repository:
+ for sub_elem in elem:
+ tool_dependency_type = sub_elem.tag
+ tool_dependency_name = sub_elem.get( 'name' )
+ tool_dependency_version = sub_elem.get( 'version' )
+ if tool_dependency_type and tool_dependency_name and tool_dependency_version:
+ # Get the tool_dependency so we can get its installation directory.
+ tool_dependency = None
+ for tool_dependency in repository.tool_dependencies:
+ if tool_dependency.type == tool_dependency_type and \
+ tool_dependency.name == tool_dependency_name and \
+ tool_dependency.version == tool_dependency_version:
+ break
+ if tool_dependency:
+ tool_dependency_key = '%s/%s' % ( tool_dependency_name, tool_dependency_version )
+ installation_directory = tool_dependency.installation_directory( self.app )
+ env_shell_file_path = self.get_env_shell_file_path( installation_directory )
+ if env_shell_file_path:
+ env_shell_file_paths.append( env_shell_file_path )
+ else:
+ error_message = "Skipping tool dependency definition because unable to locate env.sh file for tool dependency "
+ error_message += "type %s, name %s, version %s for repository %s" % \
+ ( str( tool_dependency_type ), str( tool_dependency_name ), str( tool_dependency_version ), str( repository.name ) )
+ log.debug( error_message )
+ continue
+ else:
+ error_message = "Skipping tool dependency definition because unable to locate tool dependency "
+ error_message += "type %s, name %s, version %s for repository %s" % \
+ ( str( tool_dependency_type ), str( tool_dependency_name ), str( tool_dependency_version ), str( repository.name ) )
+ log.debug( error_message )
+ continue
+ else:
+ error_message = "Skipping invalid tool dependency definition: type %s, name %s, version %s." % \
+ ( str( tool_dependency_type ), str( tool_dependency_name ), str( tool_dependency_version ) )
+ log.debug( error_message )
+ continue
+ else:
+ error_message = "Skipping set_environment_for_install definition because unable to locate required installed tool shed repository: "
+ error_message += "toolshed %s, name %s, owner %s, changeset_revision %s." % \
+ ( str( toolshed ), str( repository_name ), str( repository_owner ), str( changeset_revision ) )
+ log.debug( error_message )
+ else:
+ error_message = "Skipping invalid set_environment_for_install definition: toolshed %s, name %s, owner %s, changeset_revision %s." % \
+ ( str( toolshed ), str( repository_name ), str( repository_owner ), str( changeset_revision ) )
+ log.debug( error_message )
+ return env_shell_file_paths
+
+ def get_env_shell_file_paths_from_setup_environment_elem( self, all_env_shell_file_paths, elem, action_dict ):
+ """
+ Parse an XML tag set to discover all child repository dependency tags and define the path to an env.sh file associated
+ with the repository (this requires the repository dependency to be in an installed state). The received action_dict
+ will be updated with these discovered paths and returned to the caller. This method handles tool dependency definition
+ tag sets <setup_r_environment>, <setup_ruby_environment> and <setup_perl_environment>.
+ """
+ # An example elem is:
+ # <action type="setup_perl_environment">
+ # <repository name="package_perl_5_18" owner="iuc">
+ # <package name="perl" version="5.18.1" />
+ # </repository>
+ # <repository name="package_expat_2_1" owner="iuc" prior_installation_required="True">
+ # <package name="expat" version="2.1.0" />
+ # </repository>
+ # <package>http://search.cpan.org/CPAN/authors/id/T/TO/TODDR/XML-Parser-2.41.tar.gz</package>
+ # <package>http://search.cpan.org/CPAN/authors/id/L/LD/LDS/CGI.pm-3.43.tar.gz</package>
+ # </action>
+ for action_elem in elem:
+ if action_elem.tag == 'repository':
+ env_shell_file_paths = self.get_env_shell_file_paths( action_elem )
+ all_env_shell_file_paths.extend( env_shell_file_paths )
+ if all_env_shell_file_paths:
+ action_dict[ 'env_shell_file_paths' ] = all_env_shell_file_paths
+ action_dict[ 'action_shell_file_paths' ] = env_shell_file_paths
+ return action_dict
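A small sketch of what create_env_var_dict() produces for the common $INSTALL_DIR case; the directory below is a placeholder, and with a real app this would be EnvManager( app ).create_env_var_dict( elem, install_environment ):

    import xml.etree.ElementTree as ElementTree

    elem = ElementTree.fromstring( '<environment_variable name="PATH" action="prepend_to">$INSTALL_DIR/bin</environment_variable>' )
    install_dir = '/deps/example/1.0'  # placeholder install_environment.install_dir

    # Reproduces just the $INSTALL_DIR branch from above; the result is the
    # name/action/value dict consumed downstream by EnvFileBuilder.
    env_var_dict = dict( name=elem.get( 'name', 'PATH' ),
                         action=elem.get( 'action', 'prepend_to' ),
                         value=elem.text.replace( '$INSTALL_DIR', install_dir ) )
    print env_var_dict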
diff -r 31740dffaf9ae4e345fd2d4de4ab44ed133dd567 -r b1b1e5cefca5b0193c8623a9aee7a39cd1dda55c lib/tool_shed/galaxy_install/tool_dependencies/recipe/install_environment.py
--- a/lib/tool_shed/galaxy_install/tool_dependencies/recipe/install_environment.py
+++ b/lib/tool_shed/galaxy_install/tool_dependencies/recipe/install_environment.py
@@ -24,11 +24,13 @@
from galaxy.util import shrink_string_by_size
from galaxy.util import unicodify
-from tool_shed.galaxy_install.tool_dependencies import td_common_util
from tool_shed.galaxy_install.tool_dependencies.recipe import asynchronous_reader
+from tool_shed.util import basic_util
+
log = logging.getLogger( __name__ )
+
class InstallEnvironment( object ):
"""Object describing the environment built up as part of the process of building and installing a package."""
@@ -44,7 +46,7 @@
self.tool_shed_repository_install_dir = tool_shed_repository_install_dir
def __call__( self ):
- with settings( warn_only=True, **td_common_util.get_env_var_values( self ) ):
+ with settings( warn_only=True, **basic_util.get_env_var_values( self ) ):
with prefix( self.__setup_environment() ):
yield
@@ -125,7 +127,7 @@
context = app.install_model.context
command = str( cmd )
output = self.handle_complex_command( command )
- self.log_results( cmd, output, os.path.join( self.install_dir, td_common_util.INSTALLATION_LOG ) )
+ self.log_results( cmd, output, os.path.join( self.install_dir, basic_util.INSTALLATION_LOG ) )
stdout = output.stdout
stderr = output.stderr
if len( stdout ) > DATABASE_MAX_STRING_SIZE:
@@ -214,18 +216,18 @@
# Sleep a bit before asking the readers again.
time.sleep( .1 )
current_wait_time = time.time() - start_timer
- if stdout_queue.empty() and stderr_queue.empty() and current_wait_time > td_common_util.NO_OUTPUT_TIMEOUT:
+ if stdout_queue.empty() and stderr_queue.empty() and current_wait_time > basic_util.NO_OUTPUT_TIMEOUT:
err_msg = "\nShutting down process id %s because it generated no output for the defined timeout period of %.1f seconds.\n" % \
- ( pid, td_common_util.NO_OUTPUT_TIMEOUT )
+ ( pid, basic_util.NO_OUTPUT_TIMEOUT )
stderr_reader.lines.append( err_msg )
process_handle.kill()
break
thread_lock.release()
# Wait until each of the threads we've started terminate. The following calls will block each thread
# until it terminates either normally, through an unhandled exception, or until the timeout occurs.
- stdio_thread.join( td_common_util.NO_OUTPUT_TIMEOUT )
- stdout_reader.join( td_common_util.NO_OUTPUT_TIMEOUT )
- stderr_reader.join( td_common_util.NO_OUTPUT_TIMEOUT )
+ stdio_thread.join( basic_util.NO_OUTPUT_TIMEOUT )
+ stdout_reader.join( basic_util.NO_OUTPUT_TIMEOUT )
+ stderr_reader.join( basic_util.NO_OUTPUT_TIMEOUT )
# Close subprocess' file descriptors.
error = self.close_file_descriptor( process_handle.stdout )
error = self.close_file_descriptor( process_handle.stderr )
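The watchdog logic above kills a process that has produced no output within basic_util.NO_OUTPUT_TIMEOUT; reduced to its essentials (the timeout value and command are placeholders):

    import subprocess
    import time

    NO_OUTPUT_TIMEOUT = 5.0  # placeholder; basic_util uses 1200.0 (20 minutes)

    process_handle = subprocess.Popen( [ 'sleep', '60' ], stdout=subprocess.PIPE, stderr=subprocess.PIPE )
    start_timer = time.time()
    while process_handle.poll() is None:
        # Sleep a bit before checking again, as in handle_complex_command().
        time.sleep( .1 )
        if time.time() - start_timer > NO_OUTPUT_TIMEOUT:
            process_handle.kill()
            break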
diff -r 31740dffaf9ae4e345fd2d4de4ab44ed133dd567 -r b1b1e5cefca5b0193c8623a9aee7a39cd1dda55c lib/tool_shed/galaxy_install/tool_dependencies/recipe/step_handler.py
--- a/lib/tool_shed/galaxy_install/tool_dependencies/recipe/step_handler.py
+++ b/lib/tool_shed/galaxy_install/tool_dependencies/recipe/step_handler.py
@@ -4,12 +4,17 @@
import stat
from string import Template
import sys
+import tarfile
+import time
+import urllib2
+import zipfile
from galaxy.util import asbool
from galaxy.util.template import fill_template
+from tool_shed.util import basic_util
from tool_shed.util import tool_dependency_util
-from tool_shed.galaxy_install.tool_dependencies import td_common_util
+from tool_shed.galaxy_install.tool_dependencies.env_manager import EnvManager
# TODO: eliminate the use of fabric here.
from galaxy import eggs
@@ -26,6 +31,152 @@
VIRTUALENV_URL = 'https://pypi.python.org/packages/source/v/virtualenv/virtualenv-1.9.1.tar.gz'
+class CompressedFile( object ):
+
+ def __init__( self, file_path, mode='r' ):
+ if tarfile.is_tarfile( file_path ):
+ self.file_type = 'tar'
+ elif zipfile.is_zipfile( file_path ) and not file_path.endswith( '.jar' ):
+ self.file_type = 'zip'
+ self.file_name = os.path.splitext( os.path.basename( file_path ) )[ 0 ]
+ if self.file_name.endswith( '.tar' ):
+ self.file_name = os.path.splitext( self.file_name )[ 0 ]
+ self.type = self.file_type
+ method = 'open_%s' % self.file_type
+ if hasattr( self, method ):
+ self.archive = getattr( self, method )( file_path, mode )
+ else:
+ raise NameError( 'File type %s specified, no open method found.' % self.file_type )
+
+ def extract( self, path ):
+ '''Determine the path to which the archive should be extracted.'''
+ contents = self.getmembers()
+ extraction_path = path
+ if len( contents ) == 1:
+ # The archive contains a single file, return the extraction path.
+ if self.isfile( contents[ 0 ] ):
+ extraction_path = os.path.join( path, self.file_name )
+ if not os.path.exists( extraction_path ):
+ os.makedirs( extraction_path )
+ self.archive.extractall( extraction_path )
+ else:
+ # Get the common prefix for all the files in the archive. If the common prefix ends with a slash,
+ # or self.isdir() returns True, the archive contains a single directory with the desired contents.
+ # Otherwise, it contains multiple files and/or directories at the root of the archive.
+ common_prefix = os.path.commonprefix( [ self.getname( item ) for item in contents ] )
+ if len( common_prefix ) >= 1 and not common_prefix.endswith( os.sep ) and self.isdir( self.getmember( common_prefix ) ):
+ common_prefix += os.sep
+ if common_prefix.endswith( os.sep ):
+ self.archive.extractall( os.path.join( path ) )
+ extraction_path = os.path.join( path, common_prefix )
+ else:
+ extraction_path = os.path.join( path, self.file_name )
+ if not os.path.exists( extraction_path ):
+ os.makedirs( extraction_path )
+ self.archive.extractall( os.path.join( extraction_path ) )
+ return os.path.abspath( extraction_path )
+
+ def getmembers_tar( self ):
+ return self.archive.getmembers()
+
+ def getmembers_zip( self ):
+ return self.archive.infolist()
+
+ def getname_tar( self, item ):
+ return item.name
+
+ def getname_zip( self, item ):
+ return item.filename
+
+ def getmember( self, name ):
+ for member in self.getmembers():
+ if self.getname( member ) == name:
+ return member
+
+ def getmembers( self ):
+ return getattr( self, 'getmembers_%s' % self.type )()
+
+ def getname( self, member ):
+ return getattr( self, 'getname_%s' % self.type )( member )
+
+ def isdir( self, member ):
+ return getattr( self, 'isdir_%s' % self.type )( member )
+
+ def isdir_tar( self, member ):
+ return member.isdir()
+
+ def isdir_zip( self, member ):
+ if member.filename.endswith( os.sep ):
+ return True
+ return False
+
+ def isfile( self, member ):
+ if not self.isdir( member ):
+ return True
+ return False
+
+ def open_tar( self, filepath, mode ):
+ return tarfile.open( filepath, mode, errorlevel=0 )
+
+ def open_zip( self, filepath, mode ):
+ return zipfile.ZipFile( filepath, mode )
+
+ def zipfile_ok( self, path_to_archive ):
+ """
+ This function is a bit pedantic and not functionally necessary. It checks whether there is
+ no file pointing outside of the extraction, because ZipFile.extractall() has some potential
+ security holes. See python zipfile documentation for more details.
+ """
+ basename = os.path.realpath( os.path.dirname( path_to_archive ) )
+ zip_archive = zipfile.ZipFile( path_to_archive )
+ for member in zip_archive.namelist():
+ member_path = os.path.realpath( os.path.join( basename, member ) )
+ if not member_path.startswith( basename ):
+ return False
+ return True
+
+
+class Download( object ):
+
+ def url_download( self, install_dir, downloaded_file_name, download_url, extract=True ):
+ file_path = os.path.join( install_dir, downloaded_file_name )
+ src = None
+ dst = None
+ # Set a timer so we don't sit here forever.
+ start_time = time.time()
+ try:
+ src = urllib2.urlopen( download_url )
+ dst = open( file_path, 'wb' )
+ while True:
+ chunk = src.read( basic_util.CHUNK_SIZE )
+ if chunk:
+ dst.write( chunk )
+ else:
+ break
+ time_taken = time.time() - start_time
+ if time_taken > basic_util.NO_OUTPUT_TIMEOUT:
+ err_msg = 'Downloading from URL %s took longer than the defined timeout period of %.1f seconds.' % \
+ ( str( download_url ), basic_util.NO_OUTPUT_TIMEOUT )
+ raise Exception( err_msg )
+ except Exception, e:
+ err_msg = 'Error downloading from URL\n%s:\n%s' % ( str( download_url ), str( e ) )
+ raise Exception( err_msg )
+ finally:
+ if src:
+ src.close()
+ if dst:
+ dst.close()
+ if extract:
+ if tarfile.is_tarfile( file_path ) or ( zipfile.is_zipfile( file_path ) and not file_path.endswith( '.jar' ) ):
+ archive = CompressedFile( file_path )
+ extraction_path = archive.extract( install_dir )
+ else:
+ extraction_path = os.path.abspath( install_dir )
+ else:
+ extraction_path = os.path.abspath( install_dir )
+ return extraction_path
+
+
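A usage sketch for the relocated CompressedFile class; it builds a throwaway tarball so it is self-contained, and assumes the class defined above is in scope:

    import os
    import tarfile
    import tempfile

    work_dir = tempfile.mkdtemp()
    readme = os.path.join( work_dir, 'README' )
    open( readme, 'w' ).write( 'hello' )
    archive_path = os.path.join( work_dir, 'example.tar.gz' )
    tar = tarfile.open( archive_path, 'w:gz' )
    tar.add( readme, arcname='example-1.0/README' )
    tar.close()

    # Single-member archive, so extract() creates and returns <work_dir>/example
    # (the archive name with both extensions stripped).
    archive = CompressedFile( archive_path )
    print archive.extract( work_dir )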
class RecipeStep( object ):
"""Abstract class that defines a standard format for handling recipe steps when installing packages."""
@@ -42,6 +193,22 @@
def __init__( self ):
self.type = 'assert_directory_executable'
+ def assert_directory_executable( self, full_path ):
+ """
+ Return True if a symbolic link or directory exists and is executable, but if
+ full_path is a file, return False.
+ """
+ if full_path is None:
+ return False
+ if os.path.isfile( full_path ):
+ return False
+ if os.path.isdir( full_path ):
+ # Make sure the owner has execute permission on the directory.
+ # See http://docs.python.org/2/library/stat.html
+ if stat.S_IXUSR & os.stat( full_path )[ stat.ST_MODE ] == 64:
+ return True
+ return False
+
def execute_step( self, app, tool_dependency, package_name, actions, action_dict, filtered_actions, env_file_builder,
install_environment, work_dir, current_dir=None, initial_download=False ):
"""
@@ -53,7 +220,7 @@
full_path = action_dict[ 'full_path' ]
else:
full_path = os.path.join( current_dir, action_dict[ 'full_path' ] )
- if not td_common_util.assert_directory_executable( full_path=full_path ):
+ if not self.assert_directory_executable( full_path=full_path ):
status = app.install_model.ToolDependency.installation_status.ERROR
error_message = 'The path %s is not a directory or is not executable by the owner.' % str( full_path )
tool_dependency = tool_dependency_util.set_tool_dependency_attributes( app,
@@ -66,7 +233,7 @@
def prepare_step( self, app, tool_dependency, action_elem, action_dict, install_environment, is_binary_download ):
# <action type="assert_executable">$INSTALL_DIR/mira/my_file</action>
if action_elem.text:
- action_dict[ 'full_path' ] = td_common_util.evaluate_template( action_elem.text, install_environment )
+ action_dict[ 'full_path' ] = basic_util.evaluate_template( action_elem.text, install_environment )
return action_dict
@@ -75,6 +242,18 @@
def __init__( self ):
self.type = 'assert_directory_exists'
+ def assert_directory_exists( self, full_path ):
+ """
+ Return True if a symbolic link or directory exists, but if full_path is a file,
+ return False. """
+ if full_path is None:
+ return False
+ if os.path.isfile( full_path ):
+ return False
+ if os.path.isdir( full_path ):
+ return True
+ return False
+
def execute_step( self, app, tool_dependency, package_name, actions, action_dict, filtered_actions, env_file_builder,
install_environment, work_dir, current_dir=None, initial_download=False ):
"""
@@ -86,7 +265,7 @@
full_path = action_dict[ 'full_path' ]
else:
full_path = os.path.join( current_dir, action_dict[ 'full_path' ] )
- if not td_common_util.assert_directory_exists( full_path=full_path ):
+ if not self.assert_directory_exists( full_path=full_path ):
status = app.install_model.ToolDependency.installation_status.ERROR
error_message = 'The path %s is not a directory or does not exist.' % str( full_path )
tool_dependency = tool_dependency_util.set_tool_dependency_attributes( app,
@@ -99,7 +278,7 @@
def prepare_step( self, app, tool_dependency, action_elem, action_dict, install_environment, is_binary_download ):
# <action type="make_directory">$INSTALL_DIR/mira</action>
if action_elem.text:
- action_dict[ 'full_path' ] = td_common_util.evaluate_template( action_elem.text, install_environment )
+ action_dict[ 'full_path' ] = basic_util.evaluate_template( action_elem.text, install_environment )
return action_dict
@@ -108,6 +287,22 @@
def __init__( self ):
self.type = 'assert_file_executable'
+ def assert_file_executable( self, full_path ):
+ """
+ Return True if a symbolic link or file exists and is executable, but if full_path
+ is a directory, return False.
+ """
+ if full_path is None:
+ return False
+ if os.path.isdir( full_path ):
+ return False
+ if os.path.exists( full_path ):
+ # Make sure the owner has execute permission on the file.
+ # See http://docs.python.org/2/library/stat.html
+ if stat.S_IXUSR & os.stat( full_path )[ stat.ST_MODE ] == 64:
+ return True
+ return False
+
def execute_step( self, app, tool_dependency, package_name, actions, action_dict, filtered_actions, env_file_builder,
install_environment, work_dir, current_dir=None, initial_download=False ):
"""
@@ -119,7 +314,7 @@
full_path = action_dict[ 'full_path' ]
else:
full_path = os.path.join( current_dir, action_dict[ 'full_path' ] )
- if not td_common_util.assert_file_executable( full_path=full_path ):
+ if not self.assert_file_executable( full_path=full_path ):
status = app.install_model.ToolDependency.installation_status.ERROR
error_message = 'The path %s is not a file or is not executable by the owner.' % str( full_path )
tool_dependency = tool_dependency_util.set_tool_dependency_attributes( app,
@@ -132,7 +327,7 @@
def prepare_step( self, app, tool_dependency, action_elem, action_dict, install_environment, is_binary_download ):
# <action type="assert_executable">$INSTALL_DIR/mira/my_file</action>
if action_elem.text:
- action_dict[ 'full_path' ] = td_common_util.evaluate_template( action_elem.text, install_environment )
+ action_dict[ 'full_path' ] = basic_util.evaluate_template( action_elem.text, install_environment )
return action_dict
@@ -141,6 +336,19 @@
def __init__( self ):
self.type = 'assert_file_exists'
+ def assert_file_exists( self, full_path ):
+ """
+ Return True if a symbolic link or file exists, but if full_path is a directory,
+ return False.
+ """
+ if full_path is None:
+ return False
+ if os.path.isdir( full_path ):
+ return False
+ if os.path.exists( full_path ):
+ return True
+ return False
+
def execute_step( self, app, tool_dependency, package_name, actions, action_dict, filtered_actions, env_file_builder,
install_environment, work_dir, current_dir=None, initial_download=False ):
"""
@@ -152,7 +360,7 @@
full_path = action_dict[ 'full_path' ]
else:
full_path = os.path.join( current_dir, action_dict[ 'full_path' ] )
- if not td_common_util.assert_file_exists( full_path=full_path ):
+ if not self.assert_file_exists( full_path=full_path ):
status = app.install_model.ToolDependency.installation_status.ERROR
error_message = 'The path %s is not a file or does not exist.' % str( full_path )
tool_dependency = tool_dependency_util.set_tool_dependency_attributes( app,
@@ -165,7 +373,7 @@
def prepare_step( self, app, tool_dependency, action_elem, action_dict, install_environment, is_binary_download ):
# <action type="assert_on_path">$INSTALL_DIR/mira/my_file</action>
if action_elem.text:
- action_dict[ 'full_path' ] = td_common_util.evaluate_template( action_elem.text, install_environment )
+ action_dict[ 'full_path' ] = basic_util.evaluate_template( action_elem.text, install_environment )
return action_dict
@@ -187,7 +395,7 @@
pre_cmd = './configure %s && make && make install' % configure_opts
else:
pre_cmd = './configure --prefix=$INSTALL_DIR %s && make && make install' % configure_opts
- cmd = install_environment.build_command( td_common_util.evaluate_template( pre_cmd, install_environment ) )
+ cmd = install_environment.build_command( basic_util.evaluate_template( pre_cmd, install_environment ) )
return_code = install_environment.handle_command( app=app,
tool_dependency=tool_dependency,
cmd=cmd,
@@ -199,7 +407,7 @@
def prepare_step( self, app, tool_dependency, action_elem, action_dict, install_environment, is_binary_download ):
# Handle configure, make and make install allow providing configuration options
if action_elem.text:
- configure_opts = td_common_util.evaluate_template( action_elem.text, install_environment )
+ configure_opts = basic_util.evaluate_template( action_elem.text, install_environment )
action_dict[ 'configure_opts' ] = configure_opts
return action_dict
@@ -274,7 +482,7 @@
received_mode = int( file_elem.get( 'mode', 600 ), base=8 )
# For added security, ensure that the setuid and setgid bits are not set.
mode = received_mode & ~( stat.S_ISUID | stat.S_ISGID )
- file = td_common_util.evaluate_template( file_elem.text, install_environment )
+ file = basic_util.evaluate_template( file_elem.text, install_environment )
chmod_tuple = ( file, mode )
chmod_actions.append( chmod_tuple )
if chmod_actions:
@@ -282,11 +490,17 @@
return action_dict
-class DownloadBinary( RecipeStep ):
+class DownloadBinary( Download, RecipeStep ):
def __init__( self ):
self.type = 'download_binary'
+ def download_binary( self, url, work_dir ):
+ """Download a pre-compiled binary from the specified URL."""
+ downloaded_filename = os.path.split( url )[ -1 ]
+ dir = self.url_download( work_dir, downloaded_filename, url, extract=False )
+ return downloaded_filename
+
def filter_actions_after_binary_installation( self, actions ):
'''Filter out actions that should not be processed if a binary download succeeded.'''
filtered_actions = []
@@ -311,7 +525,7 @@
log.debug( 'Attempting to download from %s to %s', url, str( target_directory ) )
downloaded_filename = None
try:
- downloaded_filename = td_common_util.download_binary( url, work_dir )
+ downloaded_filename = self.download_binary( url, work_dir )
if initial_download:
# Filter out any actions that are not download_binary, chmod, or set_environment.
filtered_actions = self.filter_actions_after_binary_installation( actions[ 1: ] )
@@ -338,9 +552,9 @@
full_path_to_dir = os.path.abspath( install_environment.install_dir )
else:
full_path_to_dir = os.path.abspath( install_environment.install_dir )
- td_common_util.move_file( current_dir=work_dir,
- source=downloaded_filename,
- destination=full_path_to_dir )
+ basic_util.move_file( current_dir=work_dir,
+ source=downloaded_filename,
+ destination=full_path_to_dir )
# Not sure why dir is ignored in this method, need to investigate...
dir = None
if initial_download:
@@ -368,7 +582,7 @@
return action_dict
-class DownloadByUrl( RecipeStep ):
+class DownloadByUrl( Download, RecipeStep ):
def __init__( self ):
self.type = 'download_by_url'
@@ -394,9 +608,9 @@
downloaded_filename = action_dict[ 'target_filename' ]
else:
downloaded_filename = os.path.split( url )[ -1 ]
- dir = td_common_util.url_download( work_dir, downloaded_filename, url, extract=True )
+ dir = self.url_download( work_dir, downloaded_filename, url, extract=True )
if is_binary:
- log_file = os.path.join( install_environment.install_dir, td_common_util.INSTALLATION_LOG )
+ log_file = os.path.join( install_environment.install_dir, basic_util.INSTALLATION_LOG )
if os.path.exists( log_file ):
logfile = open( log_file, 'ab' )
else:
@@ -422,7 +636,7 @@
return action_dict
-class DownloadFile( RecipeStep ):
+class DownloadFile( Download, RecipeStep ):
def __init__( self ):
self.type = 'download_file'
@@ -447,7 +661,7 @@
filename = action_dict[ 'target_filename' ]
else:
filename = url.split( '/' )[ -1 ]
- td_common_util.url_download( work_dir, filename, url )
+ self.url_download( work_dir, filename, url )
if initial_download:
dir = os.path.curdir
return tool_dependency, filtered_actions, dir
@@ -479,13 +693,17 @@
full_path = action_dict[ 'full_path' ]
else:
full_path = os.path.join( current_dir, action_dict[ 'full_path' ] )
- td_common_util.make_directory( full_path=full_path )
+ self.make_directory( full_path=full_path )
return tool_dependency, None, None
+ def make_directory( self, full_path ):
+ if not os.path.exists( full_path ):
+ os.makedirs( full_path )
+
def prepare_step( self, app, tool_dependency, action_elem, action_dict, install_environment, is_binary_download ):
# <action type="make_directory">$INSTALL_DIR/lib/python</action>
if action_elem.text:
- action_dict[ 'full_path' ] = td_common_util.evaluate_template( action_elem.text, install_environment )
+ action_dict[ 'full_path' ] = basic_util.evaluate_template( action_elem.text, install_environment )
return action_dict
@@ -515,7 +733,7 @@
def prepare_step( self, app, tool_dependency, action_elem, action_dict, install_environment, is_binary_download ):
# make; make install; allow providing make options
if action_elem.text:
- make_opts = td_common_util.evaluate_template( action_elem.text, install_environment )
+ make_opts = basic_util.evaluate_template( action_elem.text, install_environment )
action_dict[ 'make_opts' ] = make_opts
return action_dict
@@ -531,18 +749,38 @@
Move a directory of files. Since this class is not used in the initial download stage, no recipe step
filtering is performed here, and None values are always returned for filtered_actions and dir.
"""
- td_common_util.move_directory_files( current_dir=current_dir,
- source_dir=os.path.join( action_dict[ 'source_directory' ] ),
- destination_dir=os.path.join( action_dict[ 'destination_directory' ] ) )
+ self.move_directory_files( current_dir=current_dir,
+ source_dir=os.path.join( action_dict[ 'source_directory' ] ),
+ destination_dir=os.path.join( action_dict[ 'destination_directory' ] ) )
return tool_dependency, None, None
+ def move_directory_files( self, current_dir, source_dir, destination_dir ):
+ source_directory = os.path.abspath( os.path.join( current_dir, source_dir ) )
+ destination_directory = os.path.join( destination_dir )
+ if not os.path.isdir( destination_directory ):
+ os.makedirs( destination_directory )
+ symlinks = []
+ regular_files = []
+ for file_name in os.listdir( source_directory ):
+ source_file = os.path.join( source_directory, file_name )
+ destination_file = os.path.join( destination_directory, file_name )
+ files_tuple = ( source_file, destination_file )
+ if os.path.islink( source_file ):
+ symlinks.append( files_tuple )
+ else:
+ regular_files.append( files_tuple )
+ for source_file, destination_file in symlinks:
+ shutil.move( source_file, destination_file )
+ for source_file, destination_file in regular_files:
+ shutil.move( source_file, destination_file )
+
def prepare_step( self, app, tool_dependency, action_elem, action_dict, install_environment, is_binary_download ):
# <action type="move_directory_files">
# <source_directory>bin</source_directory>
# <destination_directory>$INSTALL_DIR/bin</destination_directory>
# </action>
for move_elem in action_elem:
- move_elem_text = td_common_util.evaluate_template( move_elem.text, install_environment )
+ move_elem_text = basic_util.evaluate_template( move_elem.text, install_environment )
if move_elem_text:
action_dict[ move_elem.tag ] = move_elem_text
return action_dict
@@ -559,10 +797,10 @@
Move a file on disk. Since this class is not used in the initial download stage, no recipe step
filtering is performed here, and None values are always returned for filtered_actions and dir.
"""
- td_common_util.move_file( current_dir=current_dir,
- source=os.path.join( action_dict[ 'source' ] ),
- destination=os.path.join( action_dict[ 'destination' ] ),
- rename_to=action_dict[ 'rename_to' ] )
+ basic_util.move_file( current_dir=current_dir,
+ source=os.path.join( action_dict[ 'source' ] ),
+ destination=os.path.join( action_dict[ 'destination' ] ),
+ rename_to=action_dict[ 'rename_to' ] )
return tool_dependency, None, None
def prepare_step( self, app, tool_dependency, action_elem, action_dict, install_environment, is_binary_download ):
@@ -570,8 +808,8 @@
# <source>misc/some_file</source>
# <destination>$INSTALL_DIR/bin</destination>
# </action>
- action_dict[ 'source' ] = td_common_util.evaluate_template( action_elem.find( 'source' ).text, install_environment )
- action_dict[ 'destination' ] = td_common_util.evaluate_template( action_elem.find( 'destination' ).text, install_environment )
+ action_dict[ 'source' ] = basic_util.evaluate_template( action_elem.find( 'source' ).text, install_environment )
+ action_dict[ 'destination' ] = basic_util.evaluate_template( action_elem.find( 'destination' ).text, install_environment )
action_dict[ 'rename_to' ] = action_elem.get( 'rename_to' )
return action_dict
@@ -717,12 +955,12 @@
# <action type="set_environment">
# <environment_variable name="PATH" action="prepend_to">$INSTALL_DIR</environment_variable>
# </action>
+ env_manager = EnvManager( app )
env_var_dicts = []
for env_elem in action_elem:
if env_elem.tag == 'environment_variable':
- env_var_dict = \
- td_common_util.create_env_var_dict( elem=env_elem,
- install_environment=install_environment )
+ env_var_dict = env_manager.create_env_var_dict( elem=env_elem,
+ install_environment=install_environment )
if env_var_dict:
env_var_dicts.append( env_var_dict )
if env_var_dicts:
@@ -764,16 +1002,17 @@
# the current tool dependency package. See the package_matplotlib_1_2 repository in the test tool
# shed for a real-world example.
all_env_shell_file_paths = []
+ env_manager = EnvManager( app )
for env_elem in action_elem:
if env_elem.tag == 'repository':
- env_shell_file_paths = td_common_util.get_env_shell_file_paths( app, env_elem )
+ env_shell_file_paths = env_manager.get_env_shell_file_paths( env_elem )
if env_shell_file_paths:
all_env_shell_file_paths.extend( env_shell_file_paths )
action_dict[ 'env_shell_file_paths' ] = all_env_shell_file_paths
return action_dict
-class SetupPerlEnvironment( RecipeStep ):
+class SetupPerlEnvironment( Download, RecipeStep ):
def __init__( self ):
self.type = 'setup_perl_environment'
@@ -822,7 +1061,7 @@
# We assume a URL to a gem file.
url = perl_package
perl_package_name = url.split( '/' )[ -1 ]
- dir = td_common_util.url_download( work_dir, perl_package_name, url, extract=True )
+ dir = self.url_download( work_dir, perl_package_name, url, extract=True )
# Search for Build.PL or Makefile.PL (ExtUtils::MakeMaker vs. Module::Build).
tmp_work_dir = os.path.join( work_dir, dir )
if os.path.exists( os.path.join( tmp_work_dir, 'Makefile.PL' ) ):
@@ -836,7 +1075,7 @@
return tool_dependency, filtered_actions, dir
return tool_dependency, None, None
with lcd( tmp_work_dir ):
- cmd = install_environment.build_command( td_common_util.evaluate_template( cmd, install_environment ) )
+ cmd = install_environment.build_command( basic_util.evaluate_template( cmd, install_environment ) )
return_code = install_environment.handle_command( app=app,
tool_dependency=tool_dependency,
cmd=cmd,
@@ -849,7 +1088,7 @@
# perl package from CPAN without version number.
# cpanm should be installed with the parent perl distribution, otherwise this will not work.
cmd += '''cpanm --local-lib=$INSTALL_DIR %s''' % ( perl_package )
- cmd = install_environment.build_command( td_common_util.evaluate_template( cmd, install_environment ) )
+ cmd = install_environment.build_command( basic_util.evaluate_template( cmd, install_environment ) )
return_code = install_environment.handle_command( app=app,
tool_dependency=tool_dependency,
cmd=cmd,
@@ -890,10 +1129,11 @@
# with each repository. This will potentially update the value of the 'env_shell_file_paths' entry
# in action_dict.
all_env_shell_file_paths = []
- action_dict = td_common_util.get_env_shell_file_paths_from_setup_environment_elem( app,
- all_env_shell_file_paths,
- action_elem,
- action_dict )
+ env_manager = EnvManager( app )
+ action_dict = env_manager.get_env_shell_file_paths_from_setup_environment_elem( app,
+ all_env_shell_file_paths,
+ action_elem,
+ action_dict )
perl_packages = []
for env_elem in action_elem:
if env_elem.tag == 'package':
@@ -908,7 +1148,7 @@
return action_dict
-class SetupREnvironment( RecipeStep ):
+class SetupREnvironment( Download, RecipeStep ):
def __init__( self ):
self.type = 'setup_r_environment'
@@ -947,7 +1187,7 @@
for url in action_dict[ 'r_packages' ]:
filename = url.split( '/' )[ -1 ]
tarball_names.append( filename )
- td_common_util.url_download( work_dir, filename, url, extract=False )
+ self.url_download( work_dir, filename, url, extract=False )
dir = os.path.curdir
current_dir = os.path.abspath( os.path.join( work_dir, dir ) )
with lcd( current_dir ):
@@ -958,7 +1198,7 @@
cmd = r'''PATH=$PATH:$R_HOME/bin; export PATH; R_LIBS=$INSTALL_DIR; export R_LIBS;
Rscript -e "install.packages(c('%s'),lib='$INSTALL_DIR', repos=NULL, dependencies=FALSE)"''' % \
( str( tarball_name ) )
- cmd = install_environment.build_command( td_common_util.evaluate_template( cmd, install_environment ) )
+ cmd = install_environment.build_command( basic_util.evaluate_template( cmd, install_environment ) )
return_code = install_environment.handle_command( app=app,
tool_dependency=tool_dependency,
cmd=cmd,
@@ -993,10 +1233,11 @@
# associated with each repository. This will potentially update the value of the
# 'env_shell_file_paths' entry in action_dict.
all_env_shell_file_paths = []
- action_dict = td_common_util.get_env_shell_file_paths_from_setup_environment_elem( app,
- all_env_shell_file_paths,
- action_elem,
- action_dict )
+ env_manager = EnvManager( app )
+ action_dict = env_manager.get_env_shell_file_paths_from_setup_environment_elem( app,
+ all_env_shell_file_paths,
+ action_elem,
+ action_dict )
r_packages = list()
for env_elem in action_elem:
if env_elem.tag == 'package':
@@ -1006,7 +1247,7 @@
return action_dict
-class SetupRubyEnvironment( RecipeStep ):
+class SetupRubyEnvironment( Download, RecipeStep ):
def __init__( self ):
self.type = 'setup_ruby_environment'
@@ -1058,7 +1299,7 @@
# We assume a URL to a gem file.
url = gem
gem_name = url.split( '/' )[ -1 ]
- td_common_util.url_download( work_dir, gem_name, url, extract=False )
+ self.url_download( work_dir, gem_name, url, extract=False )
cmd = '''PATH=$PATH:$RUBY_HOME/bin; export PATH; GEM_HOME=$INSTALL_DIR; export GEM_HOME;
gem install --local %s ''' % ( gem_name )
else:
@@ -1073,7 +1314,7 @@
# no version number given
cmd = '''PATH=$PATH:$RUBY_HOME/bin; export PATH; GEM_HOME=$INSTALL_DIR; export GEM_HOME;
gem install %s''' % ( gem )
- cmd = install_environment.build_command( td_common_util.evaluate_template( cmd, install_environment ) )
+ cmd = install_environment.build_command( basic_util.evaluate_template( cmd, install_environment ) )
return_code = install_environment.handle_command( app=app,
tool_dependency=tool_dependency,
cmd=cmd,
@@ -1114,10 +1355,11 @@
# associated with each repository. This will potentially update the value of the
# 'env_shell_file_paths' entry in action_dict.
all_env_shell_file_paths = []
- action_dict = td_common_util.get_env_shell_file_paths_from_setup_environment_elem( app,
- all_env_shell_file_paths,
- action_elem,
- action_dict )
+ env_manager = EnvManager( app )
+ action_dict = env_manager.get_env_shell_file_paths_from_setup_environment_elem( app,
+ all_env_shell_file_paths,
+ action_elem,
+ action_dict )
ruby_package_tups = []
for env_elem in action_elem:
if env_elem.tag == 'package':
@@ -1140,7 +1382,7 @@
return action_dict
-class SetupVirtualEnv( RecipeStep ):
+class SetupVirtualEnv( Download, RecipeStep ):
def __init__( self ):
self.type = 'setup_virtualenv'
@@ -1228,9 +1470,10 @@
with install_environment.make_tmp_dir() as work_dir:
downloaded_filename = VIRTUALENV_URL.rsplit('/', 1)[-1]
try:
- dir = td_common_util.url_download( work_dir, downloaded_filename, VIRTUALENV_URL )
+ dir = self.url_download( work_dir, downloaded_filename, VIRTUALENV_URL )
except:
- log.error( "Failed to download virtualenv: td_common_util.url_download( '%s', '%s', '%s' ) threw an exception", work_dir, downloaded_filename, VIRTUALENV_URL )
+ log.error( "Failed to download virtualenv: url_download( '%s', '%s', '%s' ) threw an exception",
+ work_dir, downloaded_filename, VIRTUALENV_URL )
return False
full_path_to_dir = os.path.abspath( os.path.join( work_dir, dir ) )
shutil.move( full_path_to_dir, venv_dir )
@@ -1245,7 +1488,7 @@
# lxml==2.3.0</action>
## Manually specify contents of requirements.txt file to create dynamically.
action_dict[ 'use_requirements_file' ] = asbool( action_elem.get( 'use_requirements_file', True ) )
- action_dict[ 'requirements' ] = td_common_util.evaluate_template( action_elem.text or 'requirements.txt', install_environment )
+ action_dict[ 'requirements' ] = basic_util.evaluate_template( action_elem.text or 'requirements.txt', install_environment )
action_dict[ 'python' ] = action_elem.get( 'python', 'python' )
return action_dict
@@ -1316,7 +1559,7 @@
def prepare_step( self, app, tool_dependency, action_elem, action_dict, install_environment, is_binary_download ):
# <action type="shell_command">make</action>
- action_elem_text = td_common_util.evaluate_template( action_elem.text, install_environment )
+ action_elem_text = basic_util.evaluate_template( action_elem.text, install_environment )
if action_elem_text:
action_dict[ 'command' ] = action_elem_text
return action_dict
@@ -1338,7 +1581,7 @@
env_vars = dict()
env_vars = install_environment.environment_dict()
tool_shed_repository = tool_dependency.tool_shed_repository
- env_vars.update( td_common_util.get_env_var_values( install_environment ) )
+ env_vars.update( basic_util.get_env_var_values( install_environment ) )
language = action_dict[ 'language' ]
with settings( warn_only=True, **env_vars ):
if language == 'cheetah':
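Nearly every prepare_step() above routes text through basic_util.evaluate_template(), which is a string.Template safe-substitute over the install environment's variables; the mechanism in miniature (the paths below are placeholders):

    from string import Template

    # Stand-in for basic_util.get_env_var_values( install_environment ).
    env_var_values = dict( INSTALL_DIR='/deps/example/1.0',
                           REPOSITORY_INSTALL_DIR='/repos/owner/example' )

    text = './configure --prefix=$INSTALL_DIR && make && make install'
    print Template( text ).safe_substitute( env_var_values )
    # ./configure --prefix=/deps/example/1.0 && make && make install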
diff -r 31740dffaf9ae4e345fd2d4de4ab44ed133dd567 -r b1b1e5cefca5b0193c8623a9aee7a39cd1dda55c lib/tool_shed/galaxy_install/tool_dependencies/recipe/tag_handler.py
--- a/lib/tool_shed/galaxy_install/tool_dependencies/recipe/tag_handler.py
+++ b/lib/tool_shed/galaxy_install/tool_dependencies/recipe/tag_handler.py
@@ -10,7 +10,7 @@
from tool_shed.util import tool_dependency_util
from tool_shed.util import xml_util
-from tool_shed.galaxy_install.tool_dependencies import td_common_util
+from tool_shed.galaxy_install.tool_dependencies.env_manager import EnvManager
from tool_shed.galaxy_install.tool_dependencies.recipe.env_file_builder import EnvFileBuilder
from tool_shed.galaxy_install.tool_dependencies.recipe.install_environment import InstallEnvironment
@@ -85,9 +85,9 @@
platform_info_dict = tool_dependency_util.get_platform_info_dict()
if package_install_version == '1.0':
# Handle tool dependency installation using a fabric method included in the Galaxy framework.
- actions_elem_tuples = td_common_util.parse_package_elem( package_elem,
- platform_info_dict=platform_info_dict,
- include_after_install_actions=True )
+ actions_elem_tuples = tool_dependency_util.parse_package_elem( package_elem,
+ platform_info_dict=platform_info_dict,
+ include_after_install_actions=True )
if not actions_elem_tuples:
proceed_with_install = False
error_message = 'Version %s of the %s package cannot be installed because ' % ( str( package_version ), str( package_name ) )
@@ -491,6 +491,7 @@
# <set_environment version="1.0">
# <repository toolshed="<tool shed>" name="<repository name>" owner="<repository owner>" changeset_revision="<changeset revision>" />
# </set_environment>
+ env_manager = EnvManager( app )
tool_dependencies = []
env_var_version = elem.get( 'version', '1.0' )
tool_shed_repository_install_dir = os.path.abspath( tool_shed_repository.repo_files_directory( app ) )
@@ -514,8 +515,8 @@
tool_dependency_version=None )
install_environment = InstallEnvironment( tool_shed_repository_install_dir=tool_shed_repository_install_dir,
install_dir=install_dir )
- env_var_dict = td_common_util.create_env_var_dict( elem=env_var_elem,
- install_environment=install_environment )
+ env_var_dict = env_manager.create_env_var_dict( elem=env_var_elem,
+ install_environment=install_environment )
if env_var_dict:
if not os.path.exists( install_dir ):
os.makedirs( install_dir )
diff -r 31740dffaf9ae4e345fd2d4de4ab44ed133dd567 -r b1b1e5cefca5b0193c8623a9aee7a39cd1dda55c lib/tool_shed/galaxy_install/tool_dependencies/td_common_util.py
--- a/lib/tool_shed/galaxy_install/tool_dependencies/td_common_util.py
+++ /dev/null
@@ -1,578 +0,0 @@
-import logging
-import os
-import re
-import shutil
-import stat
-import sys
-import tarfile
-import time
-import traceback
-import urllib2
-import zipfile
-from string import Template
-from tool_shed.util import common_util
-import tool_shed.util.shed_util_common as suc
-from galaxy.datatypes import checkers
-
-log = logging.getLogger( __name__ )
-
-# Set no activity timeout to 20 minutes.
-NO_OUTPUT_TIMEOUT = 1200.0
-INSTALLATION_LOG = 'INSTALLATION.log'
-
-
-class CompressedFile( object ):
-
- def __init__( self, file_path, mode='r' ):
- if istar( file_path ):
- self.file_type = 'tar'
- elif iszip( file_path ) and not isjar( file_path ):
- self.file_type = 'zip'
- self.file_name = os.path.splitext( os.path.basename( file_path ) )[ 0 ]
- if self.file_name.endswith( '.tar' ):
- self.file_name = os.path.splitext( self.file_name )[ 0 ]
- self.type = self.file_type
- method = 'open_%s' % self.file_type
- if hasattr( self, method ):
- self.archive = getattr( self, method )( file_path, mode )
- else:
- raise NameError( 'File type %s specified, no open method found.' % self.file_type )
-
- def extract( self, path ):
- '''Determine the path to which the archive should be extracted.'''
- contents = self.getmembers()
- extraction_path = path
- if len( contents ) == 1:
- # The archive contains a single file, return the extraction path.
- if self.isfile( contents[ 0 ] ):
- extraction_path = os.path.join( path, self.file_name )
- if not os.path.exists( extraction_path ):
- os.makedirs( extraction_path )
- self.archive.extractall( extraction_path )
- else:
- # Get the common prefix for all the files in the archive. If the common prefix ends with a slash,
- # or self.isdir() returns True, the archive contains a single directory with the desired contents.
- # Otherwise, it contains multiple files and/or directories at the root of the archive.
- common_prefix = os.path.commonprefix( [ self.getname( item ) for item in contents ] )
- if len( common_prefix ) >= 1 and not common_prefix.endswith( os.sep ) and self.isdir( self.getmember( common_prefix ) ):
- common_prefix += os.sep
- if common_prefix.endswith( os.sep ):
- self.archive.extractall( os.path.join( path ) )
- extraction_path = os.path.join( path, common_prefix )
- else:
- extraction_path = os.path.join( path, self.file_name )
- if not os.path.exists( extraction_path ):
- os.makedirs( extraction_path )
- self.archive.extractall( os.path.join( extraction_path ) )
- return os.path.abspath( extraction_path )
-
- def getmembers_tar( self ):
- return self.archive.getmembers()
-
- def getmembers_zip( self ):
- return self.archive.infolist()
-
- def getname_tar( self, item ):
- return item.name
-
- def getname_zip( self, item ):
- return item.filename
-
- def getmember( self, name ):
- for member in self.getmembers():
- if self.getname( member ) == name:
- return member
-
- def getmembers( self ):
- return getattr( self, 'getmembers_%s' % self.type )()
-
- def getname( self, member ):
- return getattr( self, 'getname_%s' % self.type )( member )
-
- def isdir( self, member ):
- return getattr( self, 'isdir_%s' % self.type )( member )
-
- def isdir_tar( self, member ):
- return member.isdir()
-
- def isdir_zip( self, member ):
- if member.filename.endswith( os.sep ):
- return True
- return False
-
- def isfile( self, member ):
- if not self.isdir( member ):
- return True
- return False
-
- def open_tar( self, filepath, mode ):
- return tarfile.open( filepath, mode, errorlevel=0 )
-
- def open_zip( self, filepath, mode ):
- return zipfile.ZipFile( filepath, mode )
-
-def assert_directory_executable( full_path ):
- """
- Return True if a symbolic link or directory exists and is executable, but if
- full_path is a file, return False.
- """
- if full_path is None:
- return False
- if os.path.isfile( full_path ):
- return False
- if os.path.isdir( full_path ):
- # Make sure the owner has execute permission on the directory.
- # See http://docs.python.org/2/library/stat.html
- if stat.S_IXUSR & os.stat( full_path )[ stat.ST_MODE ] == 64:
- return True
- return False
-
-def assert_directory_exists( full_path ):
- """
- Return True if a symbolic link or directory exists, but if full_path is a file,
- return False. """
- if full_path is None:
- return False
- if os.path.isfile( full_path ):
- return False
- if os.path.isdir( full_path ):
- return True
- return False
-
-def assert_file_executable( full_path ):
- """
- Return True if a symbolic link or file exists and is executable, but if full_path
- is a directory, return False.
- """
- if full_path is None:
- return False
- if os.path.isdir( full_path ):
- return False
- if os.path.exists( full_path ):
- # Make sure the owner has execute permission on the file.
- # See http://docs.python.org/2/library/stat.html
- if stat.S_IXUSR & os.stat( full_path )[ stat.ST_MODE ] == 64:
- return True
- return False
-
-def assert_file_exists( full_path ):
- """
- Return True if a symbolic link or file exists, but if full_path is a directory,
- return False.
- """
- if full_path is None:
- return False
- if os.path.isdir( full_path ):
- return False
- if os.path.exists( full_path ):
- return True
- return False
-
-def create_env_var_dict( elem, install_environment ):
- env_var_name = elem.get( 'name', 'PATH' )
- env_var_action = elem.get( 'action', 'prepend_to' )
- env_var_text = None
- tool_dependency_install_dir = install_environment.install_dir
- tool_shed_repository_install_dir = install_environment.tool_shed_repository_install_dir
- if elem.text and elem.text.find( 'REPOSITORY_INSTALL_DIR' ) >= 0:
- if tool_shed_repository_install_dir and elem.text.find( '$REPOSITORY_INSTALL_DIR' ) != -1:
- env_var_text = elem.text.replace( '$REPOSITORY_INSTALL_DIR', tool_shed_repository_install_dir )
- return dict( name=env_var_name, action=env_var_action, value=env_var_text )
- else:
- env_var_text = elem.text.replace( '$REPOSITORY_INSTALL_DIR', tool_dependency_install_dir )
- return dict( name=env_var_name, action=env_var_action, value=env_var_text )
- if elem.text and elem.text.find( 'INSTALL_DIR' ) >= 0:
- if tool_dependency_install_dir:
- env_var_text = elem.text.replace( '$INSTALL_DIR', tool_dependency_install_dir )
- return dict( name=env_var_name, action=env_var_action, value=env_var_text )
- else:
- env_var_text = elem.text.replace( '$INSTALL_DIR', tool_shed_repository_install_dir )
- return dict( name=env_var_name, action=env_var_action, value=env_var_text )
- if elem.text:
- # Allow for environment variables that contain neither REPOSITORY_INSTALL_DIR nor INSTALL_DIR
- # since there may be command line parameters that are tuned for a Galaxy instance. Allowing them
- # to be set in one location rather than being hard coded into each tool config is the best approach.
- # For example:
- # <environment_variable name="GATK2_SITE_OPTIONS" action="set_to">
- # "--num_threads 4 --num_cpu_threads_per_data_thread 3 --phone_home STANDARD"
- # </environment_variable>
- return dict( name=env_var_name, action=env_var_action, value=elem.text)
- return None
-
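
As a quick illustration of the element handling above, here is a self-contained sketch that parses a hypothetical <environment_variable> tag and performs the $INSTALL_DIR substitution by hand; the install directory is made up:

from xml.etree import ElementTree

elem = ElementTree.fromstring(
    '<environment_variable name="PATH" action="prepend_to">$INSTALL_DIR/bin</environment_variable>' )
env_var_name = elem.get( 'name', 'PATH' )
env_var_action = elem.get( 'action', 'prepend_to' )
# Substitute a hypothetical tool dependency install directory.
env_var_text = elem.text.replace( '$INSTALL_DIR', '/deps/samtools/0.1.19' )
print( dict( name=env_var_name, action=env_var_action, value=env_var_text ) )
# {'name': 'PATH', 'action': 'prepend_to', 'value': '/deps/samtools/0.1.19/bin'}
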
-def download_binary( url, work_dir ):
- """Download a pre-compiled binary from the specified URL."""
- downloaded_filename = os.path.split( url )[ -1 ]
-    url_download( work_dir, downloaded_filename, url, extract=False )
- return downloaded_filename
-
-def egrep_escape( text ):
- """Escape ``text`` to allow literal matching using egrep."""
- regex = re.escape( text )
- # Seems like double escaping is needed for \
- regex = regex.replace( '\\\\', '\\\\\\' )
- # Triple-escaping seems to be required for $ signs
- regex = regex.replace( r'\$', r'\\\$' )
- # Whereas single quotes should not be escaped
- regex = regex.replace( r"\'", "'" )
- return regex
-
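
A short demonstration of the escaping rules above, with a made-up input string (the exact re.escape() output varies slightly across Python versions):

import re

text = 'costs $5'
regex = re.escape( text )
regex = regex.replace( '\\\\', '\\\\\\' )
regex = regex.replace( r'\$', r'\\\$' )
regex = regex.replace( r"\'", "'" )
print( regex )  # e.g. costs\ \\\$5 under Python 2
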
-def evaluate_template( text, install_environment ):
- """
-    Substitute variables defined in XML blocks from the dependencies file. The value of the received
- repository_install_dir is the root installation directory of the repository that contains the
- tool dependency. The value of the received install_dir is the root installation directory of
- the tool_dependency.
- """
- return Template( text ).safe_substitute( get_env_var_values( install_environment ) )
-
-def format_traceback():
- ex_type, ex, tb = sys.exc_info()
- return ''.join( traceback.format_tb( tb ) )
-
-def get_env_shell_file_path( installation_directory ):
- env_shell_file_name = 'env.sh'
- default_location = os.path.abspath( os.path.join( installation_directory, env_shell_file_name ) )
- if os.path.exists( default_location ):
- return default_location
- for root, dirs, files in os.walk( installation_directory ):
- for name in files:
- if name == env_shell_file_name:
- return os.path.abspath( os.path.join( root, name ) )
- return None
-
-def get_env_shell_file_paths( app, elem ):
- # Currently only the following tag set is supported.
- # <repository toolshed="http://localhost:9009/" name="package_numpy_1_7" owner="test" changeset_revision="c84c6a8be056">
- # <package name="numpy" version="1.7.1" />
- # </repository>
- env_shell_file_paths = []
- toolshed = elem.get( 'toolshed', None )
- repository_name = elem.get( 'name', None )
- repository_owner = elem.get( 'owner', None )
- changeset_revision = elem.get( 'changeset_revision', None )
- if toolshed and repository_name and repository_owner and changeset_revision:
- # The protocol is not stored, but the port is if it exists.
- toolshed = common_util.remove_protocol_from_tool_shed_url( toolshed )
- repository = suc.get_repository_for_dependency_relationship( app, toolshed, repository_name, repository_owner, changeset_revision )
- if repository:
- for sub_elem in elem:
- tool_dependency_type = sub_elem.tag
- tool_dependency_name = sub_elem.get( 'name' )
- tool_dependency_version = sub_elem.get( 'version' )
- if tool_dependency_type and tool_dependency_name and tool_dependency_version:
- # Get the tool_dependency so we can get its installation directory.
- tool_dependency = None
- for tool_dependency in repository.tool_dependencies:
- if tool_dependency.type == tool_dependency_type and \
- tool_dependency.name == tool_dependency_name and \
- tool_dependency.version == tool_dependency_version:
- break
- if tool_dependency:
- tool_dependency_key = '%s/%s' % ( tool_dependency_name, tool_dependency_version )
- installation_directory = tool_dependency.installation_directory( app )
- env_shell_file_path = get_env_shell_file_path( installation_directory )
- if env_shell_file_path:
- env_shell_file_paths.append( env_shell_file_path )
- else:
- error_message = "Skipping tool dependency definition because unable to locate env.sh file for tool dependency "
- error_message += "type %s, name %s, version %s for repository %s" % \
- ( str( tool_dependency_type ), str( tool_dependency_name ), str( tool_dependency_version ), str( repository.name ) )
- log.debug( error_message )
- continue
- else:
- error_message = "Skipping tool dependency definition because unable to locate tool dependency "
- error_message += "type %s, name %s, version %s for repository %s" % \
- ( str( tool_dependency_type ), str( tool_dependency_name ), str( tool_dependency_version ), str( repository.name ) )
- log.debug( error_message )
- continue
- else:
- error_message = "Skipping invalid tool dependency definition: type %s, name %s, version %s." % \
- ( str( tool_dependency_type ), str( tool_dependency_name ), str( tool_dependency_version ) )
- log.debug( error_message )
- continue
- else:
- error_message = "Skipping set_environment_for_install definition because unable to locate required installed tool shed repository: "
- error_message += "toolshed %s, name %s, owner %s, changeset_revision %s." % \
- ( str( toolshed ), str( repository_name ), str( repository_owner ), str( changeset_revision ) )
- log.debug( error_message )
- else:
- error_message = "Skipping invalid set_environment_for_install definition: toolshed %s, name %s, owner %s, changeset_revision %s." % \
- ( str( toolshed ), str( repository_name ), str( repository_owner ), str( changeset_revision ) )
- log.debug( error_message )
- return env_shell_file_paths
-
-def get_env_shell_file_paths_from_setup_environment_elem( app, all_env_shell_file_paths, elem, action_dict ):
- """
- Parse an XML tag set to discover all child repository dependency tags and define the path to an env.sh file associated
- with the repository (this requires the repository dependency to be in an installed state). The received action_dict
- will be updated with these discovered paths and returned to the caller. This method handles tool dependency definition
- tag sets <setup_r_environment>, <setup_ruby_environment> and <setup_perl_environment>.
- """
- # An example elem is:
- # <action type="setup_perl_environment">
- # <repository name="package_perl_5_18" owner="iuc">
- # <package name="perl" version="5.18.1" />
- # </repository>
- # <repository name="package_expat_2_1" owner="iuc" prior_installation_required="True">
- # <package name="expat" version="2.1.0" />
- # </repository>
- # <package>http://search.cpan.org/CPAN/authors/id/T/TO/TODDR/XML-Parser-2.41.tar.gz</package>
- # <package>http://search.cpan.org/CPAN/authors/id/L/LD/LDS/CGI.pm-3.43.tar.gz</package>
- # </action>
- for action_elem in elem:
- if action_elem.tag == 'repository':
- env_shell_file_paths = get_env_shell_file_paths( app, action_elem )
- all_env_shell_file_paths.extend( env_shell_file_paths )
- if all_env_shell_file_paths:
- action_dict[ 'env_shell_file_paths' ] = all_env_shell_file_paths
- action_dict[ 'action_shell_file_paths' ] = env_shell_file_paths
- return action_dict
-
-def get_env_var_values( install_environment ):
- """
-    Return a dictionary of values used to substitute reserved words in tool dependency definitions.
-    The received install_environment object has two important attributes for reserved word substitution:
- install_environment.tool_shed_repository_install_dir is the root installation directory of the repository
- that contains the tool dependency being installed, and install_environment.install_dir is the root
- installation directory of the tool dependency.
- """
- env_var_dict = {}
- env_var_dict[ 'REPOSITORY_INSTALL_DIR' ] = install_environment.tool_shed_repository_install_dir
- env_var_dict[ 'INSTALL_DIR' ] = install_environment.install_dir
- env_var_dict[ 'system_install' ] = install_environment.install_dir
-    # If the Python interpreter is 64-bit, then we can safely assume that the underlying system is also 64-bit.
- env_var_dict[ '__is64bit__' ] = sys.maxsize > 2**32
- return env_var_dict
-
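
Combined with evaluate_template() above, the substitution works roughly as in this sketch; FakeEnv is a stand-in for the real install_environment object, and both directories are hypothetical:

import sys
from string import Template

class FakeEnv( object ):
    # Hypothetical stand-in for install_environment.
    install_dir = '/deps/numpy/1.7.1'
    tool_shed_repository_install_dir = '/repos/iuc/package_numpy_1_7'

env_var_dict = { 'REPOSITORY_INSTALL_DIR': FakeEnv.tool_shed_repository_install_dir,
                 'INSTALL_DIR': FakeEnv.install_dir,
                 'system_install': FakeEnv.install_dir,
                 '__is64bit__': sys.maxsize > 2**32 }
# safe_substitute() leaves unknown $placeholders intact instead of raising.
print( Template( 'export PATH=$INSTALL_DIR/bin:$$PATH' ).safe_substitute( env_var_dict ) )
# export PATH=/deps/numpy/1.7.1/bin:$PATH
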
-def isbz2( file_path ):
- return checkers.is_bz2( file_path )
-
-def isgzip( file_path ):
- return checkers.is_gzip( file_path )
-
-def isjar( file_path ):
- return iszip( file_path ) and file_path.endswith( '.jar' )
-
-def istar( file_path ):
- return tarfile.is_tarfile( file_path )
-
-def iszip( file_path ):
- return checkers.check_zip( file_path )
-
-def is_compressed( file_path ):
- if isjar( file_path ):
- return False
- else:
- return iszip( file_path ) or isgzip( file_path ) or istar( file_path ) or isbz2( file_path )
-
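
Note the asymmetry encoded above: a .jar is a zip archive, but is_compressed() deliberately reports it as not compressed so it is kept intact rather than unpacked. A rough stdlib-only equivalent; Galaxy's checkers module, which also sniffs gzip and bz2 magic bytes, is replaced here by zipfile.is_zipfile:

import tarfile
import zipfile

def looks_compressed( file_path ):
    # .jar files are zips, but are treated as opaque binaries.
    if zipfile.is_zipfile( file_path ) and file_path.endswith( '.jar' ):
        return False
    return zipfile.is_zipfile( file_path ) or tarfile.is_tarfile( file_path )
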
-def make_directory( full_path ):
- if not os.path.exists( full_path ):
- os.makedirs( full_path )
-
-def move_directory_files( current_dir, source_dir, destination_dir ):
- source_directory = os.path.abspath( os.path.join( current_dir, source_dir ) )
- destination_directory = os.path.join( destination_dir )
- if not os.path.isdir( destination_directory ):
- os.makedirs( destination_directory )
- symlinks = []
- regular_files = []
- for file_name in os.listdir( source_directory ):
- source_file = os.path.join( source_directory, file_name )
- destination_file = os.path.join( destination_directory, file_name )
- files_tuple = ( source_file, destination_file )
- if os.path.islink( source_file ):
- symlinks.append( files_tuple )
- else:
- regular_files.append( files_tuple )
- for source_file, destination_file in symlinks:
- shutil.move( source_file, destination_file )
- for source_file, destination_file in regular_files:
- shutil.move( source_file, destination_file )
-
-def move_file( current_dir, source, destination, rename_to=None ):
- source_path = os.path.abspath( os.path.join( current_dir, source ) )
- source_file = os.path.basename( source_path )
- if rename_to is not None:
- destination_file = rename_to
- destination_directory = os.path.join( destination )
- destination_path = os.path.join( destination_directory, destination_file )
- else:
- destination_directory = os.path.join( destination )
- destination_path = os.path.join( destination_directory, source_file )
- if not os.path.exists( destination_directory ):
- os.makedirs( destination_directory )
- shutil.move( source_path, destination_path )
-
-def parse_package_elem( package_elem, platform_info_dict=None, include_after_install_actions=True ):
- """
- Parse a <package> element within a tool dependency definition and return a list of action tuples.
- This method is called when setting metadata on a repository that includes a tool_dependencies.xml
- file or when installing a repository that includes a tool_dependencies.xml file. If installing,
- platform_info_dict must be a valid dictionary and include_after_install_actions must be True.
- """
- # The actions_elem_tuples list contains <actions> tag sets (possibly inside of an <actions_group>
- # tag set) to be processed in the order they are defined in the tool_dependencies.xml file.
- actions_elem_tuples = []
-    # The tag sets that will go into the actions_elem_list are those that install a compiled binary if
-    # the architecture and operating system match its defined attributes. If a compiled binary is not
-    # installed, the first <actions> tag set (following those that have the os and architecture attributes)
-    # that does not have os or architecture attributes will be processed. This tag set must contain the
-    # recipe for downloading and compiling from source.
- actions_elem_list = []
- for elem in package_elem:
- if elem.tag == 'actions':
- # We have an <actions> tag that should not be matched against a specific combination of
- # architecture and operating system.
- in_actions_group = False
- actions_elem_tuples.append( ( in_actions_group, elem ) )
- elif elem.tag == 'actions_group':
-            # We have an <actions_group> element, and its child <actions> elements should therefore be
-            # compared with the current operating system and processor architecture.
- in_actions_group = True
- # Record the number of <actions> elements so we can filter out any <action> elements that precede
- # <actions> elements.
- actions_elem_count = len( elem.findall( 'actions' ) )
- # Record the number of <actions> elements that have both architecture and os specified, in order
- # to filter out any platform-independent <actions> elements that come before platform-specific
- # <actions> elements.
- platform_actions_elements = []
- for actions_elem in elem.findall( 'actions' ):
- if actions_elem.get( 'architecture' ) is not None and actions_elem.get( 'os' ) is not None:
- platform_actions_elements.append( actions_elem )
- platform_actions_element_count = len( platform_actions_elements )
- platform_actions_elements_processed = 0
- actions_elems_processed = 0
- # The tag sets that will go into the after_install_actions list are <action> tags instead of <actions>
- # tags. These will be processed only if they are at the very end of the <actions_group> tag set (after
- # all <actions> tag sets). See below for details.
- after_install_actions = []
- # Inspect the <actions_group> element and build the actions_elem_list and the after_install_actions list.
- for child_element in elem:
- if child_element.tag == 'actions':
- actions_elems_processed += 1
- system = child_element.get( 'os' )
- architecture = child_element.get( 'architecture' )
- # Skip <actions> tags that have only one of architecture or os specified, in order for the
- # count in platform_actions_elements_processed to remain accurate.
- if ( system and not architecture ) or ( architecture and not system ):
- log.debug( 'Error: Both architecture and os attributes must be specified in an <actions> tag.' )
- continue
- # Since we are inside an <actions_group> tag set, compare it with our current platform information
- # and filter the <actions> tag sets that don't match. Require both the os and architecture attributes
- # to be defined in order to find a match.
- if system and architecture:
- platform_actions_elements_processed += 1
- # If either the os or architecture do not match the platform, this <actions> tag will not be
- # considered a match. Skip it and proceed with checking the next one.
- if platform_info_dict:
- if platform_info_dict[ 'os' ] != system or platform_info_dict[ 'architecture' ] != architecture:
- continue
- else:
- # We must not be installing a repository into Galaxy, so determining if we can install a
- # binary is not necessary.
- continue
- else:
-                        # <actions> tags without both os and architecture attributes are only allowed to be specified
-                        # after platform-specific <actions> tags. If we find a platform-independent <actions> tag before
-                        # all platform-specific <actions> tags have been processed, log an error and skip it.
- if platform_actions_elements_processed < platform_actions_element_count:
- debug_msg = 'Error: <actions> tags without os and architecture attributes are only allowed '
- debug_msg += 'after all <actions> tags with os and architecture attributes have been defined. '
- debug_msg += 'Skipping the <actions> tag set with no os or architecture attributes that has '
- debug_msg += 'been defined between two <actions> tag sets that have these attributes defined. '
- log.debug( debug_msg )
- continue
- # If we reach this point, it means one of two things: 1) The system and architecture attributes are
- # not defined in this <actions> tag, or 2) The system and architecture attributes are defined, and
- # they are an exact match for the current platform. Append the child element to the list of elements
- # to process.
- actions_elem_list.append( child_element )
- elif child_element.tag == 'action':
- # Any <action> tags within an <actions_group> tag set must come after all <actions> tags.
- if actions_elems_processed == actions_elem_count:
- # If all <actions> elements have been processed, then this <action> element can be appended to the
- # list of actions to execute within this group.
- after_install_actions.append( child_element )
- else:
- # If any <actions> elements remain to be processed, then log a message stating that <action>
- # elements are not allowed to precede any <actions> elements within an <actions_group> tag set.
- debug_msg = 'Error: <action> tags are only allowed at the end of an <actions_group> tag set after '
- debug_msg += 'all <actions> tags. Skipping <%s> element with type %s.' % \
- ( child_element.tag, child_element.get( 'type', 'unknown' ) )
- log.debug( debug_msg )
- continue
- if platform_info_dict is None and not include_after_install_actions:
- # We must be setting metadata on a repository.
- if len( actions_elem_list ) >= 1:
- actions_elem_tuples.append( ( in_actions_group, actions_elem_list[ 0 ] ) )
- else:
- # We are processing a recipe that contains only an <actions_group> tag set for installing a binary,
- # but does not include an additional recipe for installing and compiling from source.
- actions_elem_tuples.append( ( in_actions_group, [] ) )
- elif platform_info_dict is not None and include_after_install_actions:
- # We must be installing a repository.
- if after_install_actions:
- actions_elem_list.extend( after_install_actions )
- actions_elem_tuples.append( ( in_actions_group, actions_elem_list ) )
- else:
- # Skip any element that is not <actions> or <actions_group> - this will skip comments, <repository> tags
- # and <readme> tags.
- in_actions_group = False
- continue
- return actions_elem_tuples
-
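
For context, a hedged sketch of the kind of <package> recipe this parser walks and how it might be invoked during an install; the recipe, URLs and platform values below are illustrative only:

from xml.etree import ElementTree

package_elem = ElementTree.fromstring( '''
<package name="samtools" version="0.1.19">
    <actions_group>
        <actions os="linux" architecture="x86_64">
            <action type="download_binary">http://example.org/samtools-linux-x86_64.tgz</action>
        </actions>
        <actions>
            <action type="download_by_url">http://example.org/samtools-0.1.19.tar.bz2</action>
            <action type="shell_command">make</action>
        </actions>
        <action type="set_environment">
            <environment_variable name="PATH" action="prepend_to">$INSTALL_DIR/bin</environment_variable>
        </action>
    </actions_group>
</package>''' )
# When installing a repository, both arguments are supplied:
# actions_elem_tuples = parse_package_elem( package_elem,
#                                           platform_info_dict={ 'os': 'linux', 'architecture': 'x86_64' },
#                                           include_after_install_actions=True )
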
-def __shellquote( s ):
- """Quote and escape the supplied string for use in shell expressions."""
- return "'" + s.replace( "'", "'\\''" ) + "'"
-
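
The quoting trick above is the standard close-quote / escaped-quote / reopen-quote dance; a one-line demonstration with a made-up string:

s = "it's here"
print( "'" + s.replace( "'", "'\\''" ) + "'" )  # 'it'\''s here'
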
-def url_download( install_dir, downloaded_file_name, download_url, extract=True ):
- file_path = os.path.join( install_dir, downloaded_file_name )
- src = None
- dst = None
- # Set a timer so we don't sit here forever.
- start_time = time.time()
- try:
- src = urllib2.urlopen( download_url )
- dst = open( file_path, 'wb' )
- while True:
- chunk = src.read( suc.CHUNK_SIZE )
- if chunk:
- dst.write( chunk )
- else:
- break
- time_taken = time.time() - start_time
- if time_taken > NO_OUTPUT_TIMEOUT:
- err_msg = 'Downloading from URL %s took longer than the defined timeout period of %.1f seconds.' % \
- ( str( download_url ), NO_OUTPUT_TIMEOUT )
- raise Exception( err_msg )
- except Exception, e:
-        err_msg = 'Error downloading from URL\n%s:\n%s' % ( str( download_url ), str( e ) )
- raise Exception( err_msg )
- finally:
- if src:
- src.close()
- if dst:
- dst.close()
- if extract:
- if istar( file_path ) or ( iszip( file_path ) and not isjar( file_path ) ):
- archive = CompressedFile( file_path )
- extraction_path = archive.extract( install_dir )
- else:
- extraction_path = os.path.abspath( install_dir )
- else:
- extraction_path = os.path.abspath( install_dir )
- return extraction_path
-
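
Reduced to its core, the download loop above is a chunked copy with a wall-clock cutoff. A minimal sketch under the same assumptions; the constants mirror CHUNK_SIZE and NO_OUTPUT_TIMEOUT used above, and the URL in the usage comment is hypothetical:

import time
import urllib2

CHUNK_SIZE = 2**20          # 1 MiB
NO_OUTPUT_TIMEOUT = 1200.0  # 20 minutes

def fetch( url, file_path ):
    # Copy the response body chunk by chunk, bailing out if the transfer
    # takes longer than the timeout.
    start_time = time.time()
    src = urllib2.urlopen( url )
    try:
        dst = open( file_path, 'wb' )
        try:
            while True:
                chunk = src.read( CHUNK_SIZE )
                if not chunk:
                    break
                dst.write( chunk )
                if time.time() - start_time > NO_OUTPUT_TIMEOUT:
                    raise Exception( 'Download of %s exceeded %.1f seconds' % ( url, NO_OUTPUT_TIMEOUT ) )
        finally:
            dst.close()
    finally:
        src.close()

# fetch( 'http://example.org/samtools-0.1.19.tar.bz2', '/tmp/samtools.tar.bz2' )
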
-def zipfile_ok( path_to_archive ):
- """
-    This function is a bit pedantic and not functionally necessary. It checks that no archive member
-    resolves to a path outside of the extraction directory, because ZipFile.extractall() has some
-    potential security holes. See the Python zipfile documentation for more details.
- basename = os.path.realpath( os.path.dirname( path_to_archive ) )
- zip_archive = zipfile.ZipFile( path_to_archive )
- for member in zip_archive.namelist():
- member_path = os.path.realpath( os.path.join( basename, member ) )
- if not member_path.startswith( basename ):
- return False
- return True
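
The containment test above is the usual defense against zip path traversal ("zip slip"): resolve each member against the extraction directory and require the result to stay inside it. A per-member sketch, assuming the base directory contains no symlinks; the paths are made up:

import os

def member_escapes( base_dir, member_name ):
    # Resolve the joined path and check that it is still under base_dir,
    # exactly as zipfile_ok() does for every member of the archive.
    base_dir = os.path.realpath( base_dir )
    member_path = os.path.realpath( os.path.join( base_dir, member_name ) )
    return not member_path.startswith( base_dir )

print( member_escapes( '/tmp/extract', 'data/readme.txt' ) )   # False
print( member_escapes( '/tmp/extract', '../../etc/passwd' ) )  # True
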
diff -r 31740dffaf9ae4e345fd2d4de4ab44ed133dd567 -r b1b1e5cefca5b0193c8623a9aee7a39cd1dda55c lib/tool_shed/scripts/check_filesystem_for_empty_tool_dependency_installation_paths.py
--- a/lib/tool_shed/scripts/check_filesystem_for_empty_tool_dependency_installation_paths.py
+++ b/lib/tool_shed/scripts/check_filesystem_for_empty_tool_dependency_installation_paths.py
@@ -7,7 +7,7 @@
new_path.extend( sys.path[1:] )
sys.path = new_path
-from tool_shed.galaxy_install.tool_dependencies import td_common_util
+from tool_shed.util.basic_util import INSTALLATION_LOG
def main( args ):
empty_installation_paths = []
@@ -31,13 +31,13 @@
no_files = False
if len( dirs ) == 0:
no_dirs = True
- if len( files ) == 0 or len( files ) == 1 and td_common_util.INSTALLATION_LOG in files:
+ if len( files ) == 0 or len( files ) == 1 and INSTALLATION_LOG in files:
no_files = True
if no_files and no_dirs and root not in empty_installation_paths:
empty_installation_paths.append( root )
if len( empty_installation_paths ) > 0:
print 'The following %d tool dependency installation directories were found to be empty or contain only the file %s.' % \
- ( len( empty_installation_paths ), td_common_util.INSTALLATION_LOG )
+ ( len( empty_installation_paths ), INSTALLATION_LOG )
if args.delete:
for path in empty_installation_paths:
if os.path.exists( path ):
diff -r 31740dffaf9ae4e345fd2d4de4ab44ed133dd567 -r b1b1e5cefca5b0193c8623a9aee7a39cd1dda55c lib/tool_shed/scripts/check_s3_for_empty_tool_dependency_installation_paths.py
--- a/lib/tool_shed/scripts/check_s3_for_empty_tool_dependency_installation_paths.py
+++ b/lib/tool_shed/scripts/check_s3_for_empty_tool_dependency_installation_paths.py
@@ -12,8 +12,7 @@
import boto
-from tool_shed.galaxy_install.tool_dependencies import td_common_util
-
+from tool_shed.util.basic_util import INSTALLATION_LOG
class BucketList( object ):
@@ -77,7 +76,7 @@
# This would not be the case in a Galaxy instance, since the Galaxy admin will need to verify the contents of
# the installation path in order to determine which action should be taken.
elif len( tool_dependency_path_contents ) == 2 and \
- tool_dependency_path_contents[1].name.endswith( td_common_util.INSTALLATION_LOG ):
+ tool_dependency_path_contents[1].name.endswith( INSTALLATION_LOG ):
empty_directories.append( tool_dependency_path_contents[ 0 ] )
return [ item.name for item in empty_directories ]
@@ -106,7 +105,7 @@
print 'No empty installation paths found, exiting.'
return 0
print 'The following %d tool dependency installation paths were found to be empty or contain only the file %s.' % \
- ( len( dependency_cleaner.empty_installation_paths ), td_common_util.INSTALLATION_LOG )
+ ( len( dependency_cleaner.empty_installation_paths ), INSTALLATION_LOG )
if asbool( args.delete ):
dependency_cleaner.delete_empty_installation_paths()
else:
diff -r 31740dffaf9ae4e345fd2d4de4ab44ed133dd567 -r b1b1e5cefca5b0193c8623a9aee7a39cd1dda55c lib/tool_shed/util/basic_util.py
--- a/lib/tool_shed/util/basic_util.py
+++ b/lib/tool_shed/util/basic_util.py
@@ -1,5 +1,8 @@
import logging
import os
+import shutil
+import sys
+from string import Template
from galaxy.util import unicodify
@@ -10,8 +13,52 @@
log = logging.getLogger( __name__ )
+CHUNK_SIZE = 2**20 # 1 MiB
+INSTALLATION_LOG = 'INSTALLATION.log'
+# Set no activity timeout to 20 minutes.
+NO_OUTPUT_TIMEOUT = 1200.0
+MAXDIFFSIZE = 8000
MAX_DISPLAY_SIZE = 32768
+def evaluate_template( text, install_environment ):
+ """
+    Substitute variables defined in XML blocks from the dependencies file. The value of the received
+ repository_install_dir is the root installation directory of the repository that contains the
+ tool dependency. The value of the received install_dir is the root installation directory of
+ the tool_dependency.
+ """
+ return Template( text ).safe_substitute( get_env_var_values( install_environment ) )
+
+def get_env_var_values( install_environment ):
+ """
+    Return a dictionary of values used to substitute reserved words in tool dependency definitions.
+    The received install_environment object has two important attributes for reserved word substitution:
+ install_environment.tool_shed_repository_install_dir is the root installation directory of the repository
+ that contains the tool dependency being installed, and install_environment.install_dir is the root
+ installation directory of the tool dependency.
+ """
+ env_var_dict = {}
+ env_var_dict[ 'REPOSITORY_INSTALL_DIR' ] = install_environment.tool_shed_repository_install_dir
+ env_var_dict[ 'INSTALL_DIR' ] = install_environment.install_dir
+ env_var_dict[ 'system_install' ] = install_environment.install_dir
+    # If the Python interpreter is 64-bit, then we can safely assume that the underlying system is also 64-bit.
+ env_var_dict[ '__is64bit__' ] = sys.maxsize > 2**32
+ return env_var_dict
+
+def move_file( current_dir, source, destination, rename_to=None ):
+ source_path = os.path.abspath( os.path.join( current_dir, source ) )
+ source_file = os.path.basename( source_path )
+ if rename_to is not None:
+ destination_file = rename_to
+ destination_directory = os.path.join( destination )
+ destination_path = os.path.join( destination_directory, destination_file )
+ else:
+ destination_directory = os.path.join( destination )
+ destination_path = os.path.join( destination_directory, source_file )
+ if not os.path.exists( destination_directory ):
+ os.makedirs( destination_directory )
+ shutil.move( source_path, destination_path )
+
def remove_dir( dir ):
"""Attempt to remove a directory from disk."""
if dir:
diff -r 31740dffaf9ae4e345fd2d4de4ab44ed133dd567 -r b1b1e5cefca5b0193c8623a9aee7a39cd1dda55c lib/tool_shed/util/commit_util.py
--- a/lib/tool_shed/util/commit_util.py
+++ b/lib/tool_shed/util/commit_util.py
@@ -12,6 +12,7 @@
from galaxy.util.odict import odict
from galaxy.web import url_for
import tool_shed.util.shed_util_common as suc
+from tool_shed.util import basic_util
from tool_shed.util import hg_util
from tool_shed.util import tool_util
from tool_shed.util import xml_util
@@ -126,7 +127,7 @@
bzipped_file = bz2.BZ2File( uploaded_file_name, 'rb' )
while 1:
try:
- chunk = bzipped_file.read( suc.CHUNK_SIZE )
+ chunk = bzipped_file.read( basic_util.CHUNK_SIZE )
except IOError:
os.close( fd )
os.remove( uncompressed )
@@ -239,7 +240,7 @@
gzipped_file = gzip.GzipFile( uploaded_file_name, 'rb' )
while 1:
try:
- chunk = gzipped_file.read( suc.CHUNK_SIZE )
+ chunk = gzipped_file.read( basic_util.CHUNK_SIZE )
except IOError, e:
os.close( fd )
os.remove( uncompressed )
diff -r 31740dffaf9ae4e345fd2d4de4ab44ed133dd567 -r b1b1e5cefca5b0193c8623a9aee7a39cd1dda55c lib/tool_shed/util/metadata_util.py
--- a/lib/tool_shed/util/metadata_util.py
+++ b/lib/tool_shed/util/metadata_util.py
@@ -22,11 +22,12 @@
from tool_shed.util import tool_dependency_util
from tool_shed.util import tool_util
from tool_shed.util import xml_util
-from tool_shed.galaxy_install.tool_dependencies import td_common_util
import tool_shed.repository_types.util as rt_util
log = logging.getLogger( __name__ )
+REPOSITORY_DATA_MANAGER_CONFIG_FILENAME = 'data_manager_conf.xml'
+
# Repository metadata comparisons for changeset revisions.
EQUAL = 'equal'
NO_METADATA = 'no metadata'
@@ -37,7 +38,7 @@
NOT_TOOL_CONFIGS = [ suc.DATATYPES_CONFIG_FILENAME,
rt_util.REPOSITORY_DEPENDENCY_DEFINITION_FILENAME,
rt_util.TOOL_DEPENDENCY_DEFINITION_FILENAME,
- suc.REPOSITORY_DATA_MANAGER_CONFIG_FILENAME ]
+ REPOSITORY_DATA_MANAGER_CONFIG_FILENAME ]
def add_tool_versions( trans, id, repository_metadata, changeset_revisions ):
# Build a dictionary of { 'tool id' : 'parent tool id' } pairs for each tool in repository_metadata.
@@ -750,7 +751,7 @@
metadata_dict = generate_data_manager_metadata( app,
repository,
files_dir,
- hg_util.get_config_from_disk( suc.REPOSITORY_DATA_MANAGER_CONFIG_FILENAME, files_dir ),
+ hg_util.get_config_from_disk( REPOSITORY_DATA_MANAGER_CONFIG_FILENAME, files_dir ),
metadata_dict,
shed_config_dict=shed_config_dict )
@@ -809,10 +810,10 @@
if package_install_version == '1.0':
# Complex repository dependencies can be defined within the last <actions> tag set contained in an
# <actions_group> tag set. Comments, <repository> tag sets and <readme> tag sets will be skipped
- # in td_common_util.parse_package_elem().
- actions_elem_tuples = td_common_util.parse_package_elem( sub_elem,
- platform_info_dict=None,
- include_after_install_actions=False )
+ # in tool_dependency_util.parse_package_elem().
+ actions_elem_tuples = tool_dependency_util.parse_package_elem( sub_elem,
+ platform_info_dict=None,
+ include_after_install_actions=False )
if actions_elem_tuples:
# We now have a list of a single tuple that looks something like:
# [(True, <Element 'actions' at 0x104017850>)]
diff -r 31740dffaf9ae4e345fd2d4de4ab44ed133dd567 -r b1b1e5cefca5b0193c8623a9aee7a39cd1dda55c lib/tool_shed/util/shed_util_common.py
--- a/lib/tool_shed/util/shed_util_common.py
+++ b/lib/tool_shed/util/shed_util_common.py
@@ -24,11 +24,8 @@
log = logging.getLogger( __name__ )
-CHUNK_SIZE = 2**20 # 1Mb
MAX_CONTENT_SIZE = 1048576
-MAXDIFFSIZE = 8000
DATATYPES_CONFIG_FILENAME = 'datatypes_conf.xml'
-REPOSITORY_DATA_MANAGER_CONFIG_FILENAME = 'data_manager_conf.xml'
new_repo_email_alert_template = """
Sharable link: ${sharable_link}
This diff is so big that we needed to truncate the remainder.
Repository URL: https://bitbucket.org/galaxy/galaxy-central/
--
This is a commit notification from bitbucket.org. You are receiving
this because you have the service enabled, addressing the recipient of
this email.
commit/galaxy-central: carlfeberhard: HDA view: properly compile underscore template on first load, override template fn to pass localizer (_l) into template fn
by commits-noreply@bitbucket.org 28 May '14
1 new commit in galaxy-central:
https://bitbucket.org/galaxy/galaxy-central/commits/31740dffaf9a/
Changeset: 31740dffaf9a
User: carlfeberhard
Date: 2014-05-28 19:41:07
Summary: HDA view: properly compile underscore template on first load, override template fn to pass localizer (_l) into template fn
Affected #: 3 files
diff -r 7fd77febf6bb8aa7a6acde9c3a31429f7b1ee99f -r 31740dffaf9ae4e345fd2d4de4ab44ed133dd567 static/scripts/mvc/dataset/hda-base.js
--- a/static/scripts/mvc/dataset/hda-base.js
+++ b/static/scripts/mvc/dataset/hda-base.js
@@ -652,7 +652,8 @@
});
//------------------------------------------------------------------------------ TEMPLATES
-var skeletonTemplate = [
+//TODO: possibly break these out into a sep. module
+var skeletonTemplate = _.template([
'<div class="dataset hda">',
'<div class="dataset-warnings">',
// error during index fetch - show error on dataset
@@ -704,9 +705,9 @@
'<div class="dataset-body"></div>',
'</div>'
-].join( '' );
+].join( '' ));
-var bodyTemplate = [
+var bodyTemplate = _.template([
'<div class="dataset-body">',
'<% if( hda.body ){ %>',
'<div class="dataset-summary">',
@@ -799,14 +800,15 @@
'<% } %>',
// end if body
'</div>'
-].join( '' );
+].join( '' ));
HDABaseView.templates = {
+    // we override here in order to pass the localizer (_l) into the template scope - since we use it as a fn within the templates
skeleton : function( hdaJSON ){
- return _.template( skeletonTemplate, hdaJSON, { variable: 'hda' });
+ return skeletonTemplate({ _l: _l, hda: hdaJSON });
},
body : function( hdaJSON ){
- return _.template( bodyTemplate, hdaJSON, { variable: 'hda' });
+ return bodyTemplate({ _l: _l, hda: hdaJSON });
}
};
diff -r 7fd77febf6bb8aa7a6acde9c3a31429f7b1ee99f -r 31740dffaf9ae4e345fd2d4de4ab44ed133dd567 static/scripts/packed/mvc/dataset/hda-base.js
--- a/static/scripts/packed/mvc/dataset/hda-base.js
+++ b/static/scripts/packed/mvc/dataset/hda-base.js
@@ -1,1 +1,1 @@
-define(["mvc/dataset/hda-model","mvc/base-mvc","utils/localization"],function(e,b,d){var g=Backbone.View.extend(b.LoggableMixin).extend({tagName:"div",fxSpeed:"fast",_queueNewRender:function(i,j){j=(j===undefined)?(true):(j);var h=this;if(j){$(h).queue(function(k){this.$el.fadeOut(h.fxSpeed,k)})}$(h).queue(function(k){this.$el.empty().attr("class",h.className).addClass("state-"+h.model.get("state")).append(i.children());if(this.selectable){this.showSelector(0)}k()});if(j){$(h).queue(function(k){this.$el.fadeIn(h.fxSpeed,k)})}$(h).queue(function(k){this.trigger("rendered",h);if(this.model.inReadyState()){this.trigger("rendered:ready",h)}if(this.draggable){this.draggableOn()}k()})},toggleBodyVisibility:function(k,i){var h=32,j=13;if(k&&(k.type==="keydown")&&!(k.keyCode===h||k.keyCode===j)){return true}var l=this.$el.find(".dataset-body");i=(i===undefined)?(!l.is(":visible")):(i);if(i){this.expandBody()}else{this.collapseBody()}return false},showSelector:function(){if(this.selected){this.select(null,true)}this.selectable=true;this.trigger("selectable",true,this);this.$(".dataset-primary-actions").hide();this.$(".dataset-selector").show()},hideSelector:function(){this.selectable=false;this.trigger("selectable",false,this);this.$(".dataset-selector").hide();this.$(".dataset-primary-actions").show()},toggleSelector:function(){if(!this.$el.find(".dataset-selector").is(":visible")){this.showSelector()}else{this.hideSelector()}},select:function(h){this.$el.find(".dataset-selector span").removeClass("fa-square-o").addClass("fa-check-square-o");if(!this.selected){this.trigger("selected",this);this.selected=true}return false},deselect:function(h){this.$el.find(".dataset-selector span").removeClass("fa-check-square-o").addClass("fa-square-o");if(this.selected){this.trigger("de-selected",this);this.selected=false}return false},toggleSelect:function(h){if(this.selected){this.deselect(h)}else{this.select(h)}},});var c=g.extend({className:"dataset hda history-panel-hda",id:function(){return"hda-"+this.model.get("id")},initialize:function(h){if(h.logger){this.logger=this.model.logger=h.logger}this.log(this+".initialize:",h);this.defaultPrimaryActionButtonRenderers=[this._render_showParamsButton];this.linkTarget=h.linkTarget||"_blank";this.selectable=h.selectable||false;this.selected=h.selected||false;this.expanded=h.expanded||false;this.draggable=h.draggable||false;this._setUpListeners()},_setUpListeners:function(){this.model.on("change",function(i,h){if(this.model.changedAttributes().state&&this.model.inReadyState()&&this.expanded&&!this.model.hasDetails()){this.model.fetch()}else{this.render()}},this)},render:function(i){this.$el.find("[title]").tooltip("destroy");this.urls=this.model.urls();var h=this._buildNewRender();this._queueNewRender(h,i);return this},_buildNewRender:function(){var h=$(c.templates.skeleton(this.model.toJSON()));h.find(".dataset-primary-actions").append(this._render_titleButtons());h.children(".dataset-body").replaceWith(this._render_body());this._setUpBehaviors(h);return h},_setUpBehaviors:function(h){h=h||this.$el;make_popup_menus(h);h.find("[title]").tooltip({placement:"bottom"})},_render_titleButtons:function(){return[this._render_displayButton()]},_render_displayButton:function(){if((this.model.get("state")===e.HistoryDatasetAssociation.STATES.NOT_VIEWABLE)||(this.model.get("state")===e.HistoryDatasetAssociation.STATES.DISCARDED)||(!this.model.get("accessible"))){return null}var 
i={target:this.linkTarget,classes:"dataset-display"};if(this.model.get("purged")){i.disabled=true;i.title=d("Cannot display datasets removed from disk")}else{if(this.model.get("state")===e.HistoryDatasetAssociation.STATES.UPLOAD){i.disabled=true;i.title=d("This dataset must finish uploading before it can be viewed")}else{if(this.model.get("state")===e.HistoryDatasetAssociation.STATES.NEW){i.disabled=true;i.title=d("This dataset is not yet viewable")}else{i.title=d("View data");i.href=this.urls.display;var h=this;i.onclick=function(j){if(Galaxy.frame&&Galaxy.frame.active){Galaxy.frame.add({title:"Data Viewer: "+h.model.get("name"),type:"url",content:h.urls.display});j.preventDefault()}}}}}i.faIcon="fa-eye";return faIconButton(i)},_render_downloadButton:function(){if(this.model.get("purged")||!this.model.hasData()){return null}var i=this.urls,j=this.model.get("meta_files");if(_.isEmpty(j)){return $(['<a href="'+i.download+'" title="'+d("Download")+'" ','class="icon-btn dataset-download-btn">','<span class="fa fa-floppy-o"></span>',"</a>"].join(""))}var k="dataset-"+this.model.get("id")+"-popup",h=['<div popupmenu="'+k+'">','<a href="'+i.download+'">',d("Download dataset"),"</a>","<a>"+d("Additional files")+"</a>",_.map(j,function(l){return['<a class="action-button" href="',i.meta_download+l.file_type,'">',d("Download")," ",l.file_type,"</a>"].join("")}).join("\n"),"</div>",'<div class="icon-btn-group">','<a href="'+i.download+'" title="'+d("Download")+'" ','class="icon-btn dataset-download-btn">','<span class="fa fa-floppy-o"></span>','</a><a class="icon-btn popup" id="'+k+'">','<span class="fa fa-caret-down"></span>',"</a>","</div>"].join("\n");return $(h)},_render_showParamsButton:function(){return faIconButton({title:d("View details"),classes:"dataset-params-btn",href:this.urls.show_params,target:this.linkTarget,faIcon:"fa-info-circle"})},_render_body:function(){var i=$('<div>Error: unknown dataset state "'+this.model.get("state")+'".</div>'),h=this["_render_body_"+this.model.get("state")];if(_.isFunction(h)){i=h.call(this)}this._setUpBehaviors(i);if(this.expanded){i.show()}return i},_render_stateBodyHelper:function(h,k){k=k||[];var i=this,j=$(c.templates.body(_.extend(this.model.toJSON(),{body:h})));j.find(".dataset-actions .left").append(_.map(k,function(l){return l.call(i)}));return j},_render_body_new:function(){return this._render_stateBodyHelper("<div>"+d("This is a new dataset and not all of its data are available yet")+"</div>",this.defaultPrimaryActionButtonRenderers)},_render_body_noPermission:function(){return this._render_stateBodyHelper("<div>"+d("You do not have permission to view this dataset")+"</div>")},_render_body_discarded:function(){return this._render_stateBodyHelper("<div>"+d("The job creating this dataset was cancelled before completion")+"</div>",this.defaultPrimaryActionButtonRenderers)},_render_body_queued:function(){return this._render_stateBodyHelper("<div>"+d("This job is waiting to run")+"</div>",this.defaultPrimaryActionButtonRenderers)},_render_body_upload:function(){return this._render_stateBodyHelper("<div>"+d("This dataset is currently uploading")+"</div>")},_render_body_setting_metadata:function(){return this._render_stateBodyHelper("<div>"+d("Metadata is being auto-detected")+"</div>")},_render_body_running:function(){return this._render_stateBodyHelper("<div>"+d("This job is currently running")+"</div>",this.defaultPrimaryActionButtonRenderers)},_render_body_paused:function(){return this._render_stateBodyHelper("<div>"+d('This job is paused. 
Use the "Resume Paused Jobs" in the history menu to resume')+"</div>",this.defaultPrimaryActionButtonRenderers)},_render_body_error:function(){var h=['<span class="help-text">',d("An error occurred with this dataset"),":</span>",'<div class="job-error-text">',$.trim(this.model.get("misc_info")),"</div>"].join("");if(!this.model.get("purged")){h="<div>"+this.model.get("misc_blurb")+"</div>"+h}return this._render_stateBodyHelper(h,[this._render_downloadButton].concat(this.defaultPrimaryActionButtonRenderers))},_render_body_empty:function(){return this._render_stateBodyHelper("<div>"+d("No data")+": <i>"+this.model.get("misc_blurb")+"</i></div>",this.defaultPrimaryActionButtonRenderers)},_render_body_failed_metadata:function(){var h=$('<div class="warningmessagesmall"></div>').append($("<strong/>").text(d("An error occurred setting the metadata for this dataset"))),i=this._render_body_ok();i.prepend(h);return i},_render_body_ok:function(){var h=this,j=$(c.templates.body(this.model.toJSON())),i=[this._render_downloadButton].concat(this.defaultPrimaryActionButtonRenderers);j.find(".dataset-actions .left").append(_.map(i,function(k){return k.call(h)}));if(this.model.isDeletedOrPurged()){return j}return j},events:{"click .dataset-title-bar":"toggleBodyVisibility","keydown .dataset-title-bar":"toggleBodyVisibility","click .dataset-selector":"toggleSelect"},expandBody:function(){var h=this;function i(){h.$el.children(".dataset-body").replaceWith(h._render_body());h.$el.children(".dataset-body").slideDown(h.fxSpeed,function(){h.expanded=true;h.trigger("body-expanded",h.model.get("id"))})}if(this.model.inReadyState()&&!this.model.hasDetails()){this.model.fetch({silent:true}).always(function(j){h.urls=h.model.urls();i()})}else{i()}},collapseBody:function(){var h=this;this.$el.children(".dataset-body").slideUp(h.fxSpeed,function(){h.expanded=false;h.trigger("body-collapsed",h.model.get("id"))})},draggableOn:function(){this.draggable=true;this.dragStartHandler=_.bind(this._dragStartHandler,this);this.dragEndHandler=_.bind(this._dragEndHandler,this);var h=this.$el.find(".dataset-title-bar").attr("draggable",true).get(0);h.addEventListener("dragstart",this.dragStartHandler,false);h.addEventListener("dragend",this.dragEndHandler,false)},draggableOff:function(){this.draggable=false;var h=this.$el.find(".dataset-title-bar").attr("draggable",false).get(0);h.removeEventListener("dragstart",this.dragStartHandler,false);h.removeEventListener("dragend",this.dragEndHandler,false)},toggleDraggable:function(){if(this.draggable){this.draggableOff()}else{this.draggableOn()}},_dragStartHandler:function(h){this.trigger("dragstart",this);h.dataTransfer.effectAllowed="move";h.dataTransfer.setData("text",JSON.stringify(this.model.toJSON()));return false},_dragEndHandler:function(h){this.trigger("dragend",this);return false},remove:function(i){var h=this;this.$el.fadeOut(h.fxSpeed,function(){h.$el.remove();h.off();if(i){i()}})},toString:function(){var h=(this.model)?(this.model+""):("(no model)");return"HDABaseView("+h+")"}});var a=['<div class="dataset hda">','<div class="dataset-warnings">',"<% if( hda.error ){ %>",'<div class="errormessagesmall">',d("There was an error getting the data for this dataset"),":<%- hda.error %>","</div>","<% } %>","<% if( hda.deleted ){ %>","<% if( hda.purged ){ %>",'<div class="dataset-purged-msg warningmessagesmall"><strong>',d("This dataset has been deleted and removed from disk")+".","</strong></div>","<% } else { %>",'<div class="dataset-deleted-msg warningmessagesmall"><strong>',d("This 
dataset has been deleted")+".","</strong></div>","<% } %>","<% } %>","<% if( !hda.visible ){ %>",'<div class="dataset-hidden-msg warningmessagesmall"><strong>',d("This dataset has been hidden")+".","</strong></div>","<% } %>","</div>",'<div class="dataset-selector">','<span class="fa fa-2x fa-square-o"></span>',"</div>",'<div class="dataset-primary-actions"></div>','<div class="dataset-title-bar clear" tabindex="0">','<span class="dataset-state-icon state-icon"></span>','<div class="dataset-title">','<span class="hda-hid"><%- hda.hid %></span> ','<span class="dataset-name"><%- hda.name %></span>',"</div>","</div>",'<div class="dataset-body"></div>',"</div>"].join("");var f=['<div class="dataset-body">',"<% if( hda.body ){ %>",'<div class="dataset-summary">',"<%= hda.body %>","</div>",'<div class="dataset-actions clear">','<div class="left"></div>','<div class="right"></div>',"</div>","<% } else { %>",'<div class="dataset-summary">',"<% if( hda.misc_blurb ){ %>",'<div class="dataset-blurb">','<span class="value"><%- hda.misc_blurb %></span>',"</div>","<% } %>","<% if( hda.data_type ){ %>",'<div class="dataset-datatype">','<label class="prompt">',d("format"),"</label>",'<span class="value"><%- hda.data_type %></span>',"</div>","<% } %>","<% if( hda.metadata_dbkey ){ %>",'<div class="dataset-dbkey">','<label class="prompt">',d("database"),"</label>",'<span class="value">',"<%- hda.metadata_dbkey %>","</span>","</div>","<% } %>","<% if( hda.misc_info ){ %>",'<div class="dataset-info">','<span class="value"><%- hda.misc_info %></span>',"</div>","<% } %>","</div>",'<div class="dataset-actions clear">','<div class="left"></div>','<div class="right"></div>',"</div>","<% if( !hda.deleted ){ %>",'<div class="tags-display"></div>','<div class="annotation-display"></div>','<div class="dataset-display-applications">',"<% _.each( hda.display_apps, function( app ){ %>",'<div class="display-application">','<span class="display-application-location"><%- app.label %></span> ','<span class="display-application-links">',"<% _.each( app.links, function( link ){ %>",'<a target="<%= link.target %>" href="<%= link.href %>">',"<% print( _l( link.text ) ); %>","</a> ","<% }); %>","</span>","</div>","<% }); %>","<% _.each( hda.display_types, function( app ){ %>",'<div class="display-application">','<span class="display-application-location"><%- app.label %></span> ','<span class="display-application-links">',"<% _.each( app.links, function( link ){ %>",'<a target="<%= link.target %>" href="<%= link.href %>">',"<% print( _l( link.text ) ); %>","</a> ","<% }); %>","</span>","</div>","<% }); %>","</div>",'<div class="dataset-peek">',"<% if( hda.peek ){ %>",'<pre class="peek"><%= hda.peek %></pre>',"<% } %>","</div>","<% } %>","<% } %>","</div>"].join("");c.templates={skeleton:function(h){return _.template(a,h,{variable:"hda"})},body:function(h){return _.template(f,h,{variable:"hda"})}};return{HistoryContentBaseView:g,HDABaseView:c}});
\ No newline at end of file
+define(["mvc/dataset/hda-model","mvc/base-mvc","utils/localization"],function(e,b,d){var g=Backbone.View.extend(b.LoggableMixin).extend({tagName:"div",fxSpeed:"fast",_queueNewRender:function(i,j){j=(j===undefined)?(true):(j);var h=this;if(j){$(h).queue(function(k){this.$el.fadeOut(h.fxSpeed,k)})}$(h).queue(function(k){this.$el.empty().attr("class",h.className).addClass("state-"+h.model.get("state")).append(i.children());if(this.selectable){this.showSelector(0)}k()});if(j){$(h).queue(function(k){this.$el.fadeIn(h.fxSpeed,k)})}$(h).queue(function(k){this.trigger("rendered",h);if(this.model.inReadyState()){this.trigger("rendered:ready",h)}if(this.draggable){this.draggableOn()}k()})},toggleBodyVisibility:function(k,i){var h=32,j=13;if(k&&(k.type==="keydown")&&!(k.keyCode===h||k.keyCode===j)){return true}var l=this.$el.find(".dataset-body");i=(i===undefined)?(!l.is(":visible")):(i);if(i){this.expandBody()}else{this.collapseBody()}return false},showSelector:function(){if(this.selected){this.select(null,true)}this.selectable=true;this.trigger("selectable",true,this);this.$(".dataset-primary-actions").hide();this.$(".dataset-selector").show()},hideSelector:function(){this.selectable=false;this.trigger("selectable",false,this);this.$(".dataset-selector").hide();this.$(".dataset-primary-actions").show()},toggleSelector:function(){if(!this.$el.find(".dataset-selector").is(":visible")){this.showSelector()}else{this.hideSelector()}},select:function(h){this.$el.find(".dataset-selector span").removeClass("fa-square-o").addClass("fa-check-square-o");if(!this.selected){this.trigger("selected",this);this.selected=true}return false},deselect:function(h){this.$el.find(".dataset-selector span").removeClass("fa-check-square-o").addClass("fa-square-o");if(this.selected){this.trigger("de-selected",this);this.selected=false}return false},toggleSelect:function(h){if(this.selected){this.deselect(h)}else{this.select(h)}},});var c=g.extend({className:"dataset hda history-panel-hda",id:function(){return"hda-"+this.model.get("id")},initialize:function(h){if(h.logger){this.logger=this.model.logger=h.logger}this.log(this+".initialize:",h);this.defaultPrimaryActionButtonRenderers=[this._render_showParamsButton];this.linkTarget=h.linkTarget||"_blank";this.selectable=h.selectable||false;this.selected=h.selected||false;this.expanded=h.expanded||false;this.draggable=h.draggable||false;this._setUpListeners()},_setUpListeners:function(){this.model.on("change",function(i,h){if(this.model.changedAttributes().state&&this.model.inReadyState()&&this.expanded&&!this.model.hasDetails()){this.model.fetch()}else{this.render()}},this)},render:function(i){this.$el.find("[title]").tooltip("destroy");this.urls=this.model.urls();var h=this._buildNewRender();this._queueNewRender(h,i);return this},_buildNewRender:function(){var h=$(c.templates.skeleton(this.model.toJSON()));h.find(".dataset-primary-actions").append(this._render_titleButtons());h.children(".dataset-body").replaceWith(this._render_body());this._setUpBehaviors(h);return h},_setUpBehaviors:function(h){h=h||this.$el;make_popup_menus(h);h.find("[title]").tooltip({placement:"bottom"})},_render_titleButtons:function(){return[this._render_displayButton()]},_render_displayButton:function(){if((this.model.get("state")===e.HistoryDatasetAssociation.STATES.NOT_VIEWABLE)||(this.model.get("state")===e.HistoryDatasetAssociation.STATES.DISCARDED)||(!this.model.get("accessible"))){return null}var 
i={target:this.linkTarget,classes:"dataset-display"};if(this.model.get("purged")){i.disabled=true;i.title=d("Cannot display datasets removed from disk")}else{if(this.model.get("state")===e.HistoryDatasetAssociation.STATES.UPLOAD){i.disabled=true;i.title=d("This dataset must finish uploading before it can be viewed")}else{if(this.model.get("state")===e.HistoryDatasetAssociation.STATES.NEW){i.disabled=true;i.title=d("This dataset is not yet viewable")}else{i.title=d("View data");i.href=this.urls.display;var h=this;i.onclick=function(j){if(Galaxy.frame&&Galaxy.frame.active){Galaxy.frame.add({title:"Data Viewer: "+h.model.get("name"),type:"url",content:h.urls.display});j.preventDefault()}}}}}i.faIcon="fa-eye";return faIconButton(i)},_render_downloadButton:function(){if(this.model.get("purged")||!this.model.hasData()){return null}var i=this.urls,j=this.model.get("meta_files");if(_.isEmpty(j)){return $(['<a href="'+i.download+'" title="'+d("Download")+'" ','class="icon-btn dataset-download-btn">','<span class="fa fa-floppy-o"></span>',"</a>"].join(""))}var k="dataset-"+this.model.get("id")+"-popup",h=['<div popupmenu="'+k+'">','<a href="'+i.download+'">',d("Download dataset"),"</a>","<a>"+d("Additional files")+"</a>",_.map(j,function(l){return['<a class="action-button" href="',i.meta_download+l.file_type,'">',d("Download")," ",l.file_type,"</a>"].join("")}).join("\n"),"</div>",'<div class="icon-btn-group">','<a href="'+i.download+'" title="'+d("Download")+'" ','class="icon-btn dataset-download-btn">','<span class="fa fa-floppy-o"></span>','</a><a class="icon-btn popup" id="'+k+'">','<span class="fa fa-caret-down"></span>',"</a>","</div>"].join("\n");return $(h)},_render_showParamsButton:function(){return faIconButton({title:d("View details"),classes:"dataset-params-btn",href:this.urls.show_params,target:this.linkTarget,faIcon:"fa-info-circle"})},_render_body:function(){var i=$('<div>Error: unknown dataset state "'+this.model.get("state")+'".</div>'),h=this["_render_body_"+this.model.get("state")];if(_.isFunction(h)){i=h.call(this)}this._setUpBehaviors(i);if(this.expanded){i.show()}return i},_render_stateBodyHelper:function(h,k){k=k||[];var i=this,j=$(c.templates.body(_.extend(this.model.toJSON(),{body:h})));j.find(".dataset-actions .left").append(_.map(k,function(l){return l.call(i)}));return j},_render_body_new:function(){return this._render_stateBodyHelper("<div>"+d("This is a new dataset and not all of its data are available yet")+"</div>",this.defaultPrimaryActionButtonRenderers)},_render_body_noPermission:function(){return this._render_stateBodyHelper("<div>"+d("You do not have permission to view this dataset")+"</div>")},_render_body_discarded:function(){return this._render_stateBodyHelper("<div>"+d("The job creating this dataset was cancelled before completion")+"</div>",this.defaultPrimaryActionButtonRenderers)},_render_body_queued:function(){return this._render_stateBodyHelper("<div>"+d("This job is waiting to run")+"</div>",this.defaultPrimaryActionButtonRenderers)},_render_body_upload:function(){return this._render_stateBodyHelper("<div>"+d("This dataset is currently uploading")+"</div>")},_render_body_setting_metadata:function(){return this._render_stateBodyHelper("<div>"+d("Metadata is being auto-detected")+"</div>")},_render_body_running:function(){return this._render_stateBodyHelper("<div>"+d("This job is currently running")+"</div>",this.defaultPrimaryActionButtonRenderers)},_render_body_paused:function(){return this._render_stateBodyHelper("<div>"+d('This job is paused. 
Use the "Resume Paused Jobs" in the history menu to resume')+"</div>",this.defaultPrimaryActionButtonRenderers)},_render_body_error:function(){var h=['<span class="help-text">',d("An error occurred with this dataset"),":</span>",'<div class="job-error-text">',$.trim(this.model.get("misc_info")),"</div>"].join("");if(!this.model.get("purged")){h="<div>"+this.model.get("misc_blurb")+"</div>"+h}return this._render_stateBodyHelper(h,[this._render_downloadButton].concat(this.defaultPrimaryActionButtonRenderers))},_render_body_empty:function(){return this._render_stateBodyHelper("<div>"+d("No data")+": <i>"+this.model.get("misc_blurb")+"</i></div>",this.defaultPrimaryActionButtonRenderers)},_render_body_failed_metadata:function(){var h=$('<div class="warningmessagesmall"></div>').append($("<strong/>").text(d("An error occurred setting the metadata for this dataset"))),i=this._render_body_ok();i.prepend(h);return i},_render_body_ok:function(){var h=this,j=$(c.templates.body(this.model.toJSON())),i=[this._render_downloadButton].concat(this.defaultPrimaryActionButtonRenderers);j.find(".dataset-actions .left").append(_.map(i,function(k){return k.call(h)}));if(this.model.isDeletedOrPurged()){return j}return j},events:{"click .dataset-title-bar":"toggleBodyVisibility","keydown .dataset-title-bar":"toggleBodyVisibility","click .dataset-selector":"toggleSelect"},expandBody:function(){var h=this;function i(){h.$el.children(".dataset-body").replaceWith(h._render_body());h.$el.children(".dataset-body").slideDown(h.fxSpeed,function(){h.expanded=true;h.trigger("body-expanded",h.model.get("id"))})}if(this.model.inReadyState()&&!this.model.hasDetails()){this.model.fetch({silent:true}).always(function(j){h.urls=h.model.urls();i()})}else{i()}},collapseBody:function(){var h=this;this.$el.children(".dataset-body").slideUp(h.fxSpeed,function(){h.expanded=false;h.trigger("body-collapsed",h.model.get("id"))})},draggableOn:function(){this.draggable=true;this.dragStartHandler=_.bind(this._dragStartHandler,this);this.dragEndHandler=_.bind(this._dragEndHandler,this);var h=this.$el.find(".dataset-title-bar").attr("draggable",true).get(0);h.addEventListener("dragstart",this.dragStartHandler,false);h.addEventListener("dragend",this.dragEndHandler,false)},draggableOff:function(){this.draggable=false;var h=this.$el.find(".dataset-title-bar").attr("draggable",false).get(0);h.removeEventListener("dragstart",this.dragStartHandler,false);h.removeEventListener("dragend",this.dragEndHandler,false)},toggleDraggable:function(){if(this.draggable){this.draggableOff()}else{this.draggableOn()}},_dragStartHandler:function(h){this.trigger("dragstart",this);h.dataTransfer.effectAllowed="move";h.dataTransfer.setData("text",JSON.stringify(this.model.toJSON()));return false},_dragEndHandler:function(h){this.trigger("dragend",this);return false},remove:function(i){var h=this;this.$el.fadeOut(h.fxSpeed,function(){h.$el.remove();h.off();if(i){i()}})},toString:function(){var h=(this.model)?(this.model+""):("(no model)");return"HDABaseView("+h+")"}});var a=_.template(['<div class="dataset hda">','<div class="dataset-warnings">',"<% if( hda.error ){ %>",'<div class="errormessagesmall">',d("There was an error getting the data for this dataset"),":<%- hda.error %>","</div>","<% } %>","<% if( hda.deleted ){ %>","<% if( hda.purged ){ %>",'<div class="dataset-purged-msg warningmessagesmall"><strong>',d("This dataset has been deleted and removed from disk")+".","</strong></div>","<% } else { %>",'<div class="dataset-deleted-msg 
warningmessagesmall"><strong>',d("This dataset has been deleted")+".","</strong></div>","<% } %>","<% } %>","<% if( !hda.visible ){ %>",'<div class="dataset-hidden-msg warningmessagesmall"><strong>',d("This dataset has been hidden")+".","</strong></div>","<% } %>","</div>",'<div class="dataset-selector">','<span class="fa fa-2x fa-square-o"></span>',"</div>",'<div class="dataset-primary-actions"></div>','<div class="dataset-title-bar clear" tabindex="0">','<span class="dataset-state-icon state-icon"></span>','<div class="dataset-title">','<span class="hda-hid"><%- hda.hid %></span> ','<span class="dataset-name"><%- hda.name %></span>',"</div>","</div>",'<div class="dataset-body"></div>',"</div>"].join(""));var f=_.template(['<div class="dataset-body">',"<% if( hda.body ){ %>",'<div class="dataset-summary">',"<%= hda.body %>","</div>",'<div class="dataset-actions clear">','<div class="left"></div>','<div class="right"></div>',"</div>","<% } else { %>",'<div class="dataset-summary">',"<% if( hda.misc_blurb ){ %>",'<div class="dataset-blurb">','<span class="value"><%- hda.misc_blurb %></span>',"</div>","<% } %>","<% if( hda.data_type ){ %>",'<div class="dataset-datatype">','<label class="prompt">',d("format"),"</label>",'<span class="value"><%- hda.data_type %></span>',"</div>","<% } %>","<% if( hda.metadata_dbkey ){ %>",'<div class="dataset-dbkey">','<label class="prompt">',d("database"),"</label>",'<span class="value">',"<%- hda.metadata_dbkey %>","</span>","</div>","<% } %>","<% if( hda.misc_info ){ %>",'<div class="dataset-info">','<span class="value"><%- hda.misc_info %></span>',"</div>","<% } %>","</div>",'<div class="dataset-actions clear">','<div class="left"></div>','<div class="right"></div>',"</div>","<% if( !hda.deleted ){ %>",'<div class="tags-display"></div>','<div class="annotation-display"></div>','<div class="dataset-display-applications">',"<% _.each( hda.display_apps, function( app ){ %>",'<div class="display-application">','<span class="display-application-location"><%- app.label %></span> ','<span class="display-application-links">',"<% _.each( app.links, function( link ){ %>",'<a target="<%= link.target %>" href="<%= link.href %>">',"<% print( _l( link.text ) ); %>","</a> ","<% }); %>","</span>","</div>","<% }); %>","<% _.each( hda.display_types, function( app ){ %>",'<div class="display-application">','<span class="display-application-location"><%- app.label %></span> ','<span class="display-application-links">',"<% _.each( app.links, function( link ){ %>",'<a target="<%= link.target %>" href="<%= link.href %>">',"<% print( _l( link.text ) ); %>","</a> ","<% }); %>","</span>","</div>","<% }); %>","</div>",'<div class="dataset-peek">',"<% if( hda.peek ){ %>",'<pre class="peek"><%= hda.peek %></pre>',"<% } %>","</div>","<% } %>","<% } %>","</div>"].join(""));c.templates={skeleton:function(h){return a({_l:d,hda:h})},body:function(h){return f({_l:d,hda:h})}};return{HistoryContentBaseView:g,HDABaseView:c}});
\ No newline at end of file
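For readability: the tail of the packed hunk above compiles two Underscore templates (packed names a and f), registers them on the HDA base view class (packed name c, with d standing for the localization helper _l), and exports the views. Below is a sketch of that registration with illustrative names in place of the packed ones; skeletonMarkup and bodyMarkup stand in for the literal template markup arrays and are not names from the source.

    // Readable sketch of how the packed hunk ends; all names here are illustrative.
    // In the packed source: a = skeleton template, f = body template, c = HDABaseView, d = _l.
    var skeletonTemplate = _.template( skeletonMarkup.join( "" ) );  // skeletonMarkup: hypothetical stand-in
    var bodyTemplate = _.template( bodyMarkup.join( "" ) );          // bodyMarkup: hypothetical stand-in
    HDABaseView.templates = {
        skeleton : function( hda ){ return skeletonTemplate({ _l: _l, hda: hda }); },
        body     : function( hda ){ return bodyTemplate({ _l: _l, hda: hda }); }
    };
    // The module then exports both view classes, as in the packed source.
    return { HistoryContentBaseView: HistoryContentBaseView, HDABaseView: HDABaseView };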
diff -r 7fd77febf6bb8aa7a6acde9c3a31429f7b1ee99f -r 31740dffaf9ae4e345fd2d4de4ab44ed133dd567 static/scripts/packed/viz/trackster.js
--- a/static/scripts/packed/viz/trackster.js
+++ b/static/scripts/packed/viz/trackster.js
@@ -1,1 +1,1 @@
-var ui=null;var view=null;var browser_router=null;require(["utils/utils","libs/jquery/jstorage","libs/jquery/jquery.event.drag","libs/jquery/jquery.event.hover","libs/jquery/jquery.mousewheel","libs/jquery/jquery-ui","libs/jquery/select2","libs/farbtastic","libs/jquery/jquery.form","libs/jquery/jquery.rating","mvc/ui"],function(a){a.cssLoadFile("static/style/jquery.rating.css");a.cssLoadFile("static/style/autocomplete_tagging.css");a.cssLoadFile("static/style/jquery-ui/smoothness/jquery-ui.css");a.cssLoadFile("static/style/library.css");a.cssLoadFile("static/style/trackster.css")});define(["libs/backbone/backbone","viz/trackster_ui"],function(b,a){var c=Backbone.View.extend({initialize:function(){ui=new a.TracksterUI(galaxy_config.root);ui.createButtonMenu();ui.buttonMenu.$el.attr("style","float: right");$("#center .unified-panel-header-inner").append(ui.buttonMenu.$el);$("#right .unified-panel-title").append("Bookmarks");$("#right .unified-panel-icons").append("<a id='add-bookmark-button' class='icon-button menu-button plus-button' href='javascript:void(0);' title='Add bookmark'></a>");$("#right-border").click(function(){view.resize_window()});force_right_panel("hide");if(galaxy_config.app.id){this.view_existing()}else{this.view_new()}},view_existing:function(){var d=galaxy_config.app.viz_config;view=ui.create_visualization({container:$("#center .unified-panel-body"),name:d.title,vis_id:d.vis_id,dbkey:d.dbkey},d.viewport,d.tracks,d.bookmarks,true);this.init_editor()},view_new:function(){var d=this;$.ajax({url:galaxy_config.root+"api/genomes?chrom_info=True",data:{},error:function(){alert("Couldn't create new browser.")},success:function(e){Galaxy.modal.show({title:"New Visualization",body:d.template_view_new(e),buttons:{Cancel:function(){window.location=galaxy_config.root+"visualization/list"},Create:function(){d.create_browser($("#new-title").val(),$("#new-dbkey").val());Galaxy.modal.hide()}}});if(galaxy_config.app.default_dbkey){$("#new-dbkey").val(galaxy_config.app.default_dbkey)}$("#new-title").focus();$("select[name='dbkey']").select2();$("#overlay").css("overflow","auto")}})},template_view_new:function(d){var f='<form id="new-browser-form" action="javascript:void(0);" method="post" onsubmit="return false;"><div class="form-row"><label for="new-title">Browser name:</label><div class="form-row-input"><input type="text" name="title" id="new-title" value="Unnamed"></input></div><div style="clear: both;"></div></div><div class="form-row"><label for="new-dbkey">Reference genome build (dbkey): </label><div class="form-row-input"><select name="dbkey" id="new-dbkey">';for(var e=0;e<d.length;e++){f+='<option value="'+d[e][1]+'">'+d[e][0]+"</option>"}f+='</select></div><div style="clear: both;"></div></div><div class="form-row">Is the build not listed here? <a href="'+galaxy_config.root+'user/dbkeys?use_panels=True">Add a Custom Build</a></div></form>';return f},create_browser:function(e,d){$(document).trigger("convert_to_values");view=ui.create_visualization({container:$("#center .unified-panel-body"),name:e,dbkey:d},galaxy_config.app.gene_region);this.init_editor();view.editor=true},init_editor:function(){$("#center .unified-panel-title").text(view.config.get_value("name")+" ("+view.dbkey+")");if(galaxy_config.app.add_dataset){$.ajax({url:galaxy_config.root+"api/datasets/"+galaxy_config.app.add_dataset,data:{hda_ldda:"hda",data_type:"track_config"},dataType:"json",success:function(d){view.add_drawable(a.object_from_template(d,view,view))}})}$("#add-bookmark-button").click(function(){var e=view.chrom+":"+view.low+"-"+view.high,d="Bookmark description";return ui.add_bookmark(e,d,true)});ui.init_keyboard_nav(view)}});return{GalaxyApp:c}});
\ No newline at end of file
+var ui=null;var view=null;var browser_router=null;require(["utils/utils","libs/jquery/jquery.event.drag","libs/jquery/jquery.event.hover","libs/jquery/jquery.mousewheel","libs/jquery/jquery-ui","libs/jquery/select2","libs/farbtastic","libs/jquery/jquery.form","libs/jquery/jquery.rating","mvc/ui"],function(a){a.cssLoadFile("static/style/jquery.rating.css");a.cssLoadFile("static/style/autocomplete_tagging.css");a.cssLoadFile("static/style/jquery-ui/smoothness/jquery-ui.css");a.cssLoadFile("static/style/library.css");a.cssLoadFile("static/style/trackster.css")});define(["libs/backbone/backbone","viz/trackster_ui"],function(b,a){var c=Backbone.View.extend({initialize:function(){ui=new a.TracksterUI(galaxy_config.root);ui.createButtonMenu();ui.buttonMenu.$el.attr("style","float: right");$("#center .unified-panel-header-inner").append(ui.buttonMenu.$el);$("#right .unified-panel-title").append("Bookmarks");$("#right .unified-panel-icons").append("<a id='add-bookmark-button' class='icon-button menu-button plus-button' href='javascript:void(0);' title='Add bookmark'></a>");$("#right-border").click(function(){view.resize_window()});force_right_panel("hide");if(galaxy_config.app.id){this.view_existing()}else{this.view_new()}},view_existing:function(){var d=galaxy_config.app.viz_config;view=ui.create_visualization({container:$("#center .unified-panel-body"),name:d.title,vis_id:d.vis_id,dbkey:d.dbkey},d.viewport,d.tracks,d.bookmarks,true);this.init_editor()},view_new:function(){var d=this;$.ajax({url:galaxy_config.root+"api/genomes?chrom_info=True",data:{},error:function(){alert("Couldn't create new browser.")},success:function(e){Galaxy.modal.show({title:"New Visualization",body:d.template_view_new(e),buttons:{Cancel:function(){window.location=galaxy_config.root+"visualization/list"},Create:function(){d.create_browser($("#new-title").val(),$("#new-dbkey").val());Galaxy.modal.hide()}}});if(galaxy_config.app.default_dbkey){$("#new-dbkey").val(galaxy_config.app.default_dbkey)}$("#new-title").focus();$("select[name='dbkey']").select2();$("#overlay").css("overflow","auto")}})},template_view_new:function(d){var f='<form id="new-browser-form" action="javascript:void(0);" method="post" onsubmit="return false;"><div class="form-row"><label for="new-title">Browser name:</label><div class="form-row-input"><input type="text" name="title" id="new-title" value="Unnamed"></input></div><div style="clear: both;"></div></div><div class="form-row"><label for="new-dbkey">Reference genome build (dbkey): </label><div class="form-row-input"><select name="dbkey" id="new-dbkey">';for(var e=0;e<d.length;e++){f+='<option value="'+d[e][1]+'">'+d[e][0]+"</option>"}f+='</select></div><div style="clear: both;"></div></div><div class="form-row">Is the build not listed here? <a href="'+galaxy_config.root+'user/dbkeys?use_panels=True">Add a Custom Build</a></div></form>';return f},create_browser:function(e,d){$(document).trigger("convert_to_values");view=ui.create_visualization({container:$("#center .unified-panel-body"),name:e,dbkey:d},galaxy_config.app.gene_region);this.init_editor();view.editor=true},init_editor:function(){$("#center .unified-panel-title").text(view.config.get_value("name")+" ("+view.dbkey+")");if(galaxy_config.app.add_dataset){$.ajax({url:galaxy_config.root+"api/datasets/"+galaxy_config.app.add_dataset,data:{hda_ldda:"hda",data_type:"track_config"},dataType:"json",success:function(d){view.add_drawable(a.object_from_template(d,view,view))}})}$("#add-bookmark-button").click(function(){var e=view.chrom+":"+view.low+"-"+view.high,d="Bookmark description";return ui.add_bookmark(e,d,true)});ui.init_keyboard_nav(view)}});return{GalaxyApp:c}});
\ No newline at end of file
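The net change in this trackster.js hunk appears to be the removal of "libs/jquery/jstorage" from the require dependency list; the rest of the packed line is identical on both sides of the diff. Unpacked, the new loading call reads roughly as follows (a sketch only; the packed source names the callback argument a, and utils is an illustrative rename):

    // Sketch of the new require call; trackster.js no longer pulls in the jstorage plugin.
    require([
        "utils/utils",
        // "libs/jquery/jstorage" was removed from this list by the changeset
        "libs/jquery/jquery.event.drag",
        "libs/jquery/jquery.event.hover",
        "libs/jquery/jquery.mousewheel",
        "libs/jquery/jquery-ui",
        "libs/jquery/select2",
        "libs/farbtastic",
        "libs/jquery/jquery.form",
        "libs/jquery/jquery.rating",
        "mvc/ui"
    ], function( utils ){
        // Stylesheets loaded alongside the scripts (unchanged by this commit).
        utils.cssLoadFile( "static/style/jquery.rating.css" );
        utils.cssLoadFile( "static/style/autocomplete_tagging.css" );
        utils.cssLoadFile( "static/style/jquery-ui/smoothness/jquery-ui.css" );
        utils.cssLoadFile( "static/style/library.css" );
        utils.cssLoadFile( "static/style/trackster.css" );
    });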
Repository URL: https://bitbucket.org/galaxy/galaxy-central/
--
This is a commit notification from bitbucket.org. You are receiving
this message because the commit notification service is enabled and
this address is listed as a recipient.