galaxy-commits
April 2014
- 1 participant
- 261 discussions
1 new commit in galaxy-central:
https://bitbucket.org/galaxy/galaxy-central/commits/1d1292f5b4ab/
Changeset: 1d1292f5b4ab
User: dannon
Date: 2014-04-23 15:48:48
Summary: Merge stable.
Affected #: 1 file
diff -r 26575805a77b61e893a6601401499c970f156a0c -r 1d1292f5b4ab6da6863f845ec538389c1e8659a9 lib/galaxy/webapps/galaxy/controllers/workflow.py
--- a/lib/galaxy/webapps/galaxy/controllers/workflow.py
+++ b/lib/galaxy/webapps/galaxy/controllers/workflow.py
@@ -764,7 +764,7 @@
def callback( input, value, prefixed_name, prefixed_label ):
if isinstance( input, DataToolParameter ):
data_input_names[ prefixed_name ] = True
- multiple_input[input.name] = input.multiple
+ multiple_input[ prefixed_name ] = input.multiple
visit_input_values( module.tool.inputs, module.state.inputs, callback )
# Filter
# FIXME: this removes connection without displaying a message currently!
Repository URL: https://bitbucket.org/galaxy/galaxy-central/
--
This is a commit notification from bitbucket.org. You are receiving
this because you have the service enabled, addressing the recipient of
this email.
commit/galaxy-central: dannon: Merged in jmchilton/galaxy-central-fork-1/stable (pull request #373)
by commits-noreply@bitbucket.org 23 Apr '14
1 new commit in galaxy-central:
https://bitbucket.org/galaxy/galaxy-central/commits/3b4272535922/
Changeset: 3b4272535922
Branch: stable
User: dannon
Date: 2014-04-23 15:44:28
Summary: Merged in jmchilton/galaxy-central-fork-1/stable (pull request #373)
[stable] Bugfix for multiple input data parameters.
Affected #: 1 file
diff -r 7a7985a007fbce2bfa7bea5d8d23af1d63fecbb1 -r 3b42725359224832317a066d95dff596f93ab33f lib/galaxy/webapps/galaxy/controllers/workflow.py
--- a/lib/galaxy/webapps/galaxy/controllers/workflow.py
+++ b/lib/galaxy/webapps/galaxy/controllers/workflow.py
@@ -763,7 +763,7 @@
def callback( input, value, prefixed_name, prefixed_label ):
if isinstance( input, DataToolParameter ):
data_input_names[ prefixed_name ] = True
- multiple_input[input.name] = input.multiple
+ multiple_input[ prefixed_name ] = input.multiple
visit_input_values( module.tool.inputs, module.state.inputs, callback )
# Filter
# FIXME: this removes connection without displaying a message currently!
Repository URL: https://bitbucket.org/galaxy/galaxy-central/
2 new commits in galaxy-central:
https://bitbucket.org/galaxy/galaxy-central/commits/078dd25c5fa7/
Changeset: 078dd25c5fa7
Branch: stable
User: jmchilton
Date: 2014-04-19 21:37:02
Summary: Bugfix for multiple input data parameters.
The workflow editor would drop all but one of the connections to a multiple input data parameter when the parameter itself was nested in repeats/conds.
Affected #: 1 file
diff -r e6876f6918548a0805fa0cc5306ba1ced6fb184a -r 078dd25c5fa78199f8c21b60b0bbe6b35f1eb0ad lib/galaxy/webapps/galaxy/controllers/workflow.py
--- a/lib/galaxy/webapps/galaxy/controllers/workflow.py
+++ b/lib/galaxy/webapps/galaxy/controllers/workflow.py
@@ -763,7 +763,7 @@
def callback( input, value, prefixed_name, prefixed_label ):
if isinstance( input, DataToolParameter ):
data_input_names[ prefixed_name ] = True
- multiple_input[input.name] = input.multiple
+ multiple_input[ prefixed_name ] = input.multiple
visit_input_values( module.tool.inputs, module.state.inputs, callback )
# Filter
# FIXME: this removes connection without displaying a message currently!
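The one-line change above replaces `input.name` with `prefixed_name` as the dictionary key. A self-contained sketch (parameter and prefix names invented for illustration) of why the old key collides once the same data parameter appears in several repeat instances:

```python
# Simulated traversal results: two repeat instances each contain a data
# parameter named "input"; a visit_input_values-style walk yields a distinct
# prefixed_name for each, but input.name is identical for both.
visited = [
    ("queries_0|input", "input", True),   # (prefixed_name, input.name, multiple)
    ("queries_1|input", "input", False),
]

by_plain_name = {}
by_prefixed_name = {}
for prefixed_name, plain_name, multiple in visited:
    by_plain_name[plain_name] = multiple      # later entries clobber earlier ones
    by_prefixed_name[prefixed_name] = multiple

assert len(by_plain_name) == 1       # only one entry survives -> connections dropped
assert len(by_prefixed_name) == 2    # each nested parameter keeps its own flag
```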
https://bitbucket.org/galaxy/galaxy-central/commits/3b4272535922/
Changeset: 3b4272535922
Branch: stable
User: dannon
Date: 2014-04-23 15:44:28
Summary: Merged in jmchilton/galaxy-central-fork-1/stable (pull request #373)
[stable] Bugfix for multiple input data parameters.
Affected #: 1 file
diff -r 7a7985a007fbce2bfa7bea5d8d23af1d63fecbb1 -r 3b42725359224832317a066d95dff596f93ab33f lib/galaxy/webapps/galaxy/controllers/workflow.py
--- a/lib/galaxy/webapps/galaxy/controllers/workflow.py
+++ b/lib/galaxy/webapps/galaxy/controllers/workflow.py
@@ -763,7 +763,7 @@
def callback( input, value, prefixed_name, prefixed_label ):
if isinstance( input, DataToolParameter ):
data_input_names[ prefixed_name ] = True
- multiple_input[input.name] = input.multiple
+ multiple_input[ prefixed_name ] = input.multiple
visit_input_values( module.tool.inputs, module.state.inputs, callback )
# Filter
# FIXME: this removes connection without displaying a message currently!
Repository URL: https://bitbucket.org/galaxy/galaxy-central/
11 new commits in galaxy-central:
https://bitbucket.org/galaxy/galaxy-central/commits/a841d01847df/
Changeset: a841d01847df
User: jmchilton
Date: 2014-03-26 17:01:46
Summary: Unit tests to exercise job and task mapping.
Affected #: 1 file
diff -r 09985439d17f9bef026554938b05d0d6eedd06cb -r a841d01847dffac0f554f9730ec45829c38c05e8 test/unit/test_galaxy_mapping.py
--- a/test/unit/test_galaxy_mapping.py
+++ b/test/unit/test_galaxy_mapping.py
@@ -210,6 +210,29 @@
assert hist1.name == "History 2b"
# gvk TODO need to ad test for GalaxySessions, but not yet sure what they should look like.
+ def test_jobs( self ):
+ model = self.model
+        u = model.User( email="jobtest@foo.bar.baz", password="password" )
+ job = model.Job()
+ job.user = u
+ job.tool_id = "cat1"
+
+ self.persist( u, job )
+
+ loaded_job = model.session.query( model.Job ).filter( model.Job.user == u ).first()
+ assert loaded_job.tool_id == "cat1"
+
+ def test_tasks( self ):
+ model = self.model
+        u = model.User( email="jobtest@foo.bar.baz", password="password" )
+ job = model.Job()
+ task = model.Task( job=job, working_directory="/tmp", prepare_files_cmd="split.sh" )
+ job.user = u
+ self.persist( u, job, task )
+
+ loaded_task = model.session.query( model.Task ).filter( model.Task.job == job ).first()
+ assert loaded_task.prepare_input_files_cmd == "split.sh"
+
def test_history_contents( self ):
model = self.model
u = model.User( email="contents@foo.bar.baz", password="password" )
https://bitbucket.org/galaxy/galaxy-central/commits/ad576bf6497e/
Changeset: ad576bf6497e
User: jmchilton
Date: 2014-03-26 17:01:46
Summary: Layout models and mapping for a job metrics plugin framework.
Affected #: 4 files
diff -r a841d01847dffac0f554f9730ec45829c38c05e8 -r ad576bf6497ec40b64d84802bda596bb63af63af lib/galaxy/model/__init__.py
--- a/lib/galaxy/model/__init__.py
+++ b/lib/galaxy/model/__init__.py
@@ -17,6 +17,7 @@
import json
import socket
import time
+import numbers
from uuid import UUID, uuid4
from string import Template
from itertools import ifilter
@@ -87,6 +88,21 @@
return name
+class HasJobMetrics:
+
+ def _init_metrics( self ):
+ self.text_metrics = []
+ self.numeric_metrics = []
+
+ def add_metric( self, plugin, metric_name, metric_value ):
+ if isinstance( metric_value, numbers.Number ):
+ metric = self._numeric_metric( plugin, metric_name, metric_value )
+ self.numeric_metrics.append( metric )
+ else:
+ metric = self._text_metric( plugin, metric_name, metric_value )
+ self.text_metrics.append( metric )
+
+
class User( object, Dictifiable ):
use_pbkdf2 = True
"""
@@ -226,7 +242,31 @@
return Template( in_string ).safe_substitute( environment )
-class Job( object, Dictifiable ):
+class BaseJobMetric( object ):
+
+ def __init__( self, plugin, metric_name, metric_value ):
+ self.plugin = plugin
+ self.metric_name = metric_name
+ self.metric_value = metric_value
+
+
+class JobMetricText( BaseJobMetric ):
+ pass
+
+
+class JobMetricNumeric( BaseJobMetric ):
+ pass
+
+
+class TaskMetricText( BaseJobMetric ):
+ pass
+
+
+class TaskMetricNumeric( BaseJobMetric ):
+ pass
+
+
+class Job( object, HasJobMetrics, Dictifiable ):
dict_collection_visible_keys = [ 'id', 'state', 'exit_code', 'update_time', 'create_time' ]
dict_element_visible_keys = [ 'id', 'state', 'exit_code', 'update_time', 'create_time' ]
@@ -234,6 +274,9 @@
A job represents a request to run a tool given input datasets, tool
parameters, and output datasets.
"""
+ _numeric_metric = JobMetricNumeric
+ _text_metric = JobMetricText
+
states = Bunch( NEW = 'new',
UPLOAD = 'upload',
WAITING = 'waiting',
@@ -267,6 +310,7 @@
self.imported = False
self.handler = None
self.exit_code = None
+ self._init_metrics()
@property
def finished( self ):
@@ -472,10 +516,14 @@
return rval
-class Task( object ):
+
+class Task( object, HasJobMetrics ):
"""
A task represents a single component of a job.
"""
+ _numeric_metric = TaskMetricNumeric
+ _text_metric = TaskMetricText
+
states = Bunch( NEW = 'new',
WAITING = 'waiting',
QUEUED = 'queued',
@@ -498,6 +546,7 @@
self.stderr = ""
self.exit_code = None
self.prepare_input_files_cmd = prepare_files_cmd
+ self._init_metrics()
def get_param_values( self, app ):
"""
@@ -608,6 +657,7 @@
def set_prepare_input_files_cmd( self, prepare_input_files_cmd ):
self.prepare_input_files_cmd = prepare_input_files_cmd
+
class JobParameter( object ):
def __init__( self, name, value ):
self.name = name
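The `HasJobMetrics` mixin added above routes each metric to a numeric or text class via the `_numeric_metric`/`_text_metric` class attributes that `Job` and `Task` set. A standalone sketch of that dispatch, using stand-in classes rather than the real Galaxy models:

```python
import numbers

# Stand-in metric classes; the real ones are the Galaxy model classes
# JobMetricNumeric/JobMetricText (and their Task counterparts).
class BaseMetric(object):
    def __init__(self, plugin, metric_name, metric_value):
        self.plugin = plugin
        self.metric_name = metric_name
        self.metric_value = metric_value

class NumericMetric(BaseMetric):
    pass

class TextMetric(BaseMetric):
    pass

class HasJobMetrics(object):
    # Subclasses (Job, Task) pick their own concrete metric classes.
    _numeric_metric = NumericMetric
    _text_metric = TextMetric

    def _init_metrics(self):
        self.text_metrics = []
        self.numeric_metrics = []

    def add_metric(self, plugin, metric_name, metric_value):
        if isinstance(metric_value, numbers.Number):
            self.numeric_metrics.append(self._numeric_metric(plugin, metric_name, metric_value))
        else:
            self.text_metrics.append(self._text_metric(plugin, metric_name, metric_value))

job = HasJobMetrics()
job._init_metrics()
job.add_metric("core", "galaxy_slots", 5)          # numeric -> numeric_metrics
job.add_metric("core", "hostname", "localhost")    # string  -> text_metrics
```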
diff -r a841d01847dffac0f554f9730ec45829c38c05e8 -r ad576bf6497ec40b64d84802bda596bb63af63af lib/galaxy/model/mapping.py
--- a/lib/galaxy/model/mapping.py
+++ b/lib/galaxy/model/mapping.py
@@ -470,6 +470,52 @@
Column( "archive_dir", TEXT )
)
+
+JOB_METRIC_MAX_LENGTH = 1023
+
+model.JobMetricText.table = Table(
+ "job_metric_text",
+ metadata,
+ Column( "id", Integer, primary_key=True ),
+ Column( "job_id", Integer, ForeignKey( "job.id" ), index=True ),
+ Column( "plugin", Unicode(255), ),
+ Column( "metric_name", Unicode(255), ),
+ Column( "metric_value", Unicode(JOB_METRIC_MAX_LENGTH), ),
+)
+
+model.TaskMetricText.table = Table(
+ "task_metric_text",
+ metadata,
+ Column( "id", Integer, primary_key=True ),
+ Column( "task_id", Integer, ForeignKey( "task.id" ), index=True ),
+ Column( "plugin", Unicode(255), ),
+ Column( "metric_name", Unicode(255), ),
+ Column( "metric_value", Unicode(JOB_METRIC_MAX_LENGTH), ),
+)
+
+
+model.JobMetricNumeric.table = Table(
+ "job_metric_numeric",
+ metadata,
+ Column( "id", Integer, primary_key=True ),
+ Column( "job_id", Integer, ForeignKey( "job.id" ), index=True ),
+ Column( "plugin", Unicode(255), ),
+ Column( "metric_name", Unicode(255), ),
+ Column( "metric_value", Numeric( 22, 7 ), ),
+)
+
+
+model.TaskMetricNumeric.table = Table(
+ "task_metric_numeric",
+ metadata,
+ Column( "id", Integer, primary_key=True ),
+ Column( "task_id", Integer, ForeignKey( "task.id" ), index=True ),
+ Column( "plugin", Unicode(255), ),
+ Column( "metric_name", Unicode(255), ),
+ Column( "metric_value", Numeric( 22, 7 ), ),
+)
+
+
model.GenomeIndexToolData.table = Table( "genome_index_tool_data", metadata,
Column( "id", Integer, primary_key=True ),
Column( "job_id", Integer, ForeignKey( "job.id" ), index=True ),
@@ -1569,6 +1615,26 @@
job=relation( model.Job ), dataset=relation(
model.LibraryDatasetDatasetAssociation, lazy=False ) ) )
+simple_mapping(
+ model.JobMetricText,
+ job=relation( model.Job, backref="text_metrics" ),
+)
+
+simple_mapping(
+ model.TaskMetricText,
+ task=relation( model.Task, backref="text_metrics" ),
+)
+
+simple_mapping(
+ model.JobMetricNumeric,
+ job=relation( model.Job, backref="numeric_metrics" ),
+)
+
+simple_mapping(
+ model.TaskMetricNumeric,
+ task=relation( model.Task, backref="numeric_metrics" ),
+)
+
mapper( model.JobParameter, model.JobParameter.table )
mapper( model.JobExternalOutputMetadata, model.JobExternalOutputMetadata.table,
diff -r a841d01847dffac0f554f9730ec45829c38c05e8 -r ad576bf6497ec40b64d84802bda596bb63af63af lib/galaxy/model/migrate/versions/0119_job_metrics.py
--- /dev/null
+++ b/lib/galaxy/model/migrate/versions/0119_job_metrics.py
@@ -0,0 +1,102 @@
+"""
+Migration script for job metric plugins.
+"""
+
+from sqlalchemy import *
+from sqlalchemy.orm import *
+from migrate import *
+from migrate.changeset import *
+
+import datetime
+now = datetime.datetime.utcnow
+
+import logging
+log = logging.getLogger( __name__ )
+
+metadata = MetaData()
+
+TEXT_METRIC_MAX_LENGTH = 1023
+
+JobMetricText_table = Table(
+ "job_metric_text",
+ metadata,
+ Column( "id", Integer, primary_key=True ),
+ Column( "job_id", Integer, ForeignKey( "job.id" ), index=True ),
+ Column( "plugin", Unicode(255), ),
+ Column( "metric_name", Unicode(255), ),
+ Column( "metric_value", Unicode(TEXT_METRIC_MAX_LENGTH), ),
+)
+
+
+TaskMetricText_table = Table(
+ "task_metric_text",
+ metadata,
+ Column( "id", Integer, primary_key=True ),
+ Column( "task_id", Integer, ForeignKey( "task.id" ), index=True ),
+ Column( "plugin", Unicode(255), ),
+ Column( "metric_name", Unicode(255), ),
+ Column( "metric_value", Unicode(TEXT_METRIC_MAX_LENGTH), ),
+)
+
+
+JobMetricNumeric_table = Table(
+ "job_metric_numeric",
+ metadata,
+ Column( "id", Integer, primary_key=True ),
+ Column( "job_id", Integer, ForeignKey( "job.id" ), index=True ),
+ Column( "plugin", Unicode(255), ),
+ Column( "metric_name", Unicode(255), ),
+ Column( "metric_value", Numeric( 22, 7 ), ),
+)
+
+
+TaskMetricNumeric_table = Table(
+ "task_metric_numeric",
+ metadata,
+ Column( "id", Integer, primary_key=True ),
+ Column( "task_id", Integer, ForeignKey( "task.id" ), index=True ),
+ Column( "plugin", Unicode(255), ),
+ Column( "metric_name", Unicode(255), ),
+ Column( "metric_value", Numeric( 22, 7 ), ),
+)
+
+
+TABLES = [
+ JobMetricText_table,
+ TaskMetricText_table,
+ JobMetricNumeric_table,
+ TaskMetricNumeric_table,
+]
+
+
+def upgrade( migrate_engine ):
+ metadata.bind = migrate_engine
+ print __doc__
+ metadata.reflect()
+
+ for table in TABLES:
+ __create(table)
+
+
+def downgrade(migrate_engine):
+ metadata.bind = migrate_engine
+ metadata.reflect()
+
+ for table in TABLES:
+ __drop(table)
+
+
+def __create(table):
+ try:
+ table.create()
+ except Exception as e:
+ print str(e)
+ log.debug("Creating %s table failed: %s" % (table.name, str( e ) ) )
+
+
+def __drop(table):
+ try:
+ table.drop()
+ except Exception as e:
+ print str(e)
+ log.debug("Dropping %s table failed: %s" % (table.name, str( e ) ) )
diff -r a841d01847dffac0f554f9730ec45829c38c05e8 -r ad576bf6497ec40b64d84802bda596bb63af63af test/unit/test_galaxy_mapping.py
--- a/test/unit/test_galaxy_mapping.py
+++ b/test/unit/test_galaxy_mapping.py
@@ -222,6 +222,18 @@
loaded_job = model.session.query( model.Job ).filter( model.Job.user == u ).first()
assert loaded_job.tool_id == "cat1"
+ def test_job_metrics( self ):
+ model = self.model
+        u = model.User( email="jobtest@foo.bar.baz", password="password" )
+ job = model.Job()
+ job.user = u
+ job.tool_id = "cat1"
+
+ job.add_metric( "gx", "galaxy_slots", 5 )
+ job.add_metric( "system", "system_name", "localhost" )
+
+ self.persist( u, job )
+
def test_tasks( self ):
model = self.model
u = model.User( email="jobtest@foo.bar.baz", password="password" )
https://bitbucket.org/galaxy/galaxy-central/commits/3ffd1ca6ac44/
Changeset: 3ffd1ca6ac44
User: jmchilton
Date: 2014-03-26 17:01:46
Summary: Implement plugin framework for collecting data about runtime job execution.
An example job_metrics_conf.xml.sample is included that describes which plugins are enabled and how they are configured. This will be updated for each new plugin added. By default no instrumentation or data collection occurs, but if a job_metrics.xml file is present it will serve as the default for all job destinations. Additionally, individual job destinations may disable metrics, load a different job metrics file, or define metrics directly in job_conf.xml in an embedded fashion. See the comment at the top of job_metrics_conf.xml for more information.
This commit includes an initial plugin (named 'core') to demonstrate the framework and capture the highest-priority data, namely the number of cores allocated to the job and the runtime of the job on the cluster. These two pieces of information alone should provide a much clearer picture of what Galaxy is actually allocating cluster compute cycles to.
Current limitations - This only works with job runners utilizing the job script module and the LWR (it utilizes the job script module on the remote server), hence it won't yet work with...
- Local job runner - I do have a downstream fork of Galaxy where I have reworked the local job runner to use the common job script template.
https://github.com/jmchilton/galaxy-central/commits/local_job_script
https://github.com/jmchilton/galaxy-central/commit/949db2cd14c7191cedf1febe…
- CLI runner - the CLI runner needs to be reworked to use the job script module anyway so that GALAXY_SLOTS works; the LWR version of the CLI runner already uses the job script module, so this work just needs to be backported to Galaxy.
If a job_metrics_conf.xml is present and some jobs route to the above destinations, the jobs won't fail but annoying errors will appear in the logs. Simply attach metrics="off" to these specific job destinations to disable any attempt to use metrics for these jobs and suppress these errors.
Affected #: 15 files
diff -r ad576bf6497ec40b64d84802bda596bb63af63af -r 3ffd1ca6ac44736cba9e36abfaf3b1e607ddefb6 job_conf.xml.sample_advanced
--- a/job_conf.xml.sample_advanced
+++ b/job_conf.xml.sample_advanced
@@ -62,6 +62,11 @@
<!-- Warning: Local slot count doesn't tie up additional worker threads, to prevent over
allocating machine define a second local runner with different name and fewer workers
to run this destination. -->
+ <job_metrics />
+ <!-- Above element demonstrates embedded job metrics definition - see
+ job_metrics_conf.xml.sample for full documentation on possible nested
+ elements. This empty block will simply disable job metrics for the
corresponding destination. -->
</destination>
<destination id="pbs" runner="pbs" tags="mycluster"/>
<destination id="pbs_longjobs" runner="pbs" tags="mycluster,longjobs">
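Per the commit message, a destination can control instrumentation either with the metrics attribute (off, or a path to a metrics XML file) or with an embedded job_metrics element like the one demonstrated above. A hypothetical job_conf.xml fragment illustrating both forms (destination ids and the file name are invented):

```xml
<destinations default="cluster">
    <!-- disable metrics entirely for this destination -->
    <destination id="quick_local" runner="local" metrics="off"/>
    <!-- load a destination-specific metrics configuration file -->
    <destination id="cluster" runner="pbs" metrics="cluster_metrics_conf.xml"/>
    <!-- define metrics inline, embedded in job_conf.xml -->
    <destination id="embedded_example" runner="local">
        <job_metrics>
            <core/>
        </job_metrics>
    </destination>
</destinations>
```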
diff -r ad576bf6497ec40b64d84802bda596bb63af63af -r 3ffd1ca6ac44736cba9e36abfaf3b1e607ddefb6 job_metrics_conf.xml.sample
--- /dev/null
+++ b/job_metrics_conf.xml.sample
@@ -0,0 +1,18 @@
+<?xml version="1.0"?>
+<!-- If job_metrics.xml exists, this file will define the default job metric
+ plugin used for all jobs. Individual job_conf.xml destinations can
+ disable metric collection by setting metrics="off" on that destination.
+ The metrics attribute on destination definition elements can also be
+ a path - in which case that XML metrics file will be loaded and used for
+ that destination. Finally, the destination element may contain a job_metrics
+ child element (with all options defined below) to define job metrics in an
+ embedded manner directly in the job_conf.xml file.
+-->
+<job_metrics>
+ <!-- Each element in this file corresponds to a job instrumentation plugin
+ used to generate metrics in lib/galaxy/jobs/metrics/instrumenters. -->
+
+ <!-- Core plugin captures Galaxy slots, start and end of job (in seconds
+ since epoch) and computes runtime in seconds. -->
+ <core />
+</job_metrics>
diff -r ad576bf6497ec40b64d84802bda596bb63af63af -r 3ffd1ca6ac44736cba9e36abfaf3b1e607ddefb6 lib/galaxy/app.py
--- a/lib/galaxy/app.py
+++ b/lib/galaxy/app.py
@@ -15,7 +15,7 @@
from galaxy.sample_tracking import external_service_types
from galaxy.openid.providers import OpenIDProviders
from galaxy.tools.data_manager.manager import DataManagers
-
+from galaxy.jobs import metrics as job_metrics
from galaxy.web.base import pluginframework
import logging
@@ -60,6 +60,10 @@
self._configure_tool_data_tables( from_shed_config=False )
+ # Initialize job metrics manager, needs to be in place before
+ # config so per-destination modifications can be made.
+ self.job_metrics = job_metrics.JobMetrics( self.config.job_metrics_config_file, app=self )
+
# Initialize the job management configuration
self.job_config = jobs.JobConfiguration(self)
diff -r ad576bf6497ec40b64d84802bda596bb63af63af -r 3ffd1ca6ac44736cba9e36abfaf3b1e607ddefb6 lib/galaxy/config.py
--- a/lib/galaxy/config.py
+++ b/lib/galaxy/config.py
@@ -137,6 +137,7 @@
self.template_path = resolve_path( kwargs.get( "template_path", "templates" ), self.root )
self.template_cache = resolve_path( kwargs.get( "template_cache_path", "database/compiled_templates" ), self.root )
self.dependency_resolvers_config_file = resolve_path( kwargs.get( 'dependency_resolvers_config_file', 'dependency_resolvers_conf.xml' ), self.root )
+ self.job_metrics_config_file = resolve_path( kwargs.get( 'job_metrics_config_file', 'job_metrics_conf.xml' ), self.root )
self.job_config_file = resolve_path( kwargs.get( 'job_config_file', 'job_conf.xml' ), self.root )
self.local_job_queue_workers = int( kwargs.get( "local_job_queue_workers", "5" ) )
self.cluster_job_queue_workers = int( kwargs.get( "cluster_job_queue_workers", "3" ) )
@@ -453,6 +454,11 @@
admin_users = [ x.strip() for x in self.get( "admin_users", "" ).split( "," ) ]
return ( user is not None and user.email in admin_users )
+ def resolve_path( self, path ):
+ """ Resolve a path relative to Galaxy's root.
+ """
+ return resolve_path( path, self.root )
+
def get_database_engine_options( kwargs, model_prefix='' ):
"""
Allow options for the SQLAlchemy database engine to be passed by using
diff -r ad576bf6497ec40b64d84802bda596bb63af63af -r 3ffd1ca6ac44736cba9e36abfaf3b1e607ddefb6 lib/galaxy/jobs/__init__.py
--- a/lib/galaxy/jobs/__init__.py
+++ b/lib/galaxy/jobs/__init__.py
@@ -162,8 +162,21 @@
# Parse destinations
destinations = root.find('destinations')
+ job_metrics = self.app.job_metrics
for destination in self.__findall_with_required(destinations, 'destination', ('id', 'runner')):
id = destination.get('id')
+ destination_metrics = destination.get( "metrics", None )
+ if destination_metrics:
+ if not util.asbool( destination_metrics ):
+ # disable
+ job_metrics.set_destination_instrumenter( id, None )
+ else:
+ metrics_conf_path = self.app.config.resolve_path( destination_metrics )
+ job_metrics.set_destination_conf_file( id, metrics_conf_path )
+ else:
+ metrics_elements = self.__findall_with_required( destination, 'job_metrics', () )
+ if metrics_elements:
+ job_metrics.set_destination_conf_element( id, metrics_elements[ 0 ] )
job_destination = JobDestination(**dict(destination.items()))
job_destination['params'] = self.__get_params(destination)
self.destinations[id] = (job_destination,)
@@ -1068,8 +1081,10 @@
# Finally set the job state. This should only happen *after* all
# dataset creation, and will allow us to eliminate force_history_refresh.
job.state = final_job_state
+ if not job.tasks:
+            # If job was composed of tasks, don't attempt to recollect statistics
+ self._collect_metrics( job )
self.sa_session.flush()
-
log.debug( 'job %d ended' % self.job_id )
delete_files = self.app.config.cleanup_job == 'always' or ( job.state == job.states.OK and self.app.config.cleanup_job == 'onsuccess' )
self.cleanup( delete_files=delete_files )
@@ -1094,6 +1109,16 @@
except:
log.exception( "Unable to cleanup job %d" % self.job_id )
+ def _collect_metrics( self, has_metrics ):
+ job = has_metrics.get_job()
+ per_plugin_properties = self.app.job_metrics.collect_properties( job.destination_id, self.job_id, self.working_directory )
+ if per_plugin_properties:
+ log.info( "Collecting job metrics for %s" % has_metrics )
+ for plugin, properties in per_plugin_properties.iteritems():
+ for metric_name, metric_value in properties.iteritems():
+ if metric_value is not None:
+ has_metrics.add_metric( plugin, metric_name, metric_value )
+
def get_output_sizes( self ):
sizes = []
output_paths = self.get_output_fnames()
@@ -1508,6 +1533,7 @@
task.stdout = util.shrink_string_by_size( stdout, DATABASE_MAX_STRING_SIZE, join_by="\n..\n", left_larger=True, beginning_on_size_error=True )
if len( stderr ) > DATABASE_MAX_STRING_SIZE:
log.error( "stderr for task %d is greater than %s, only a portion will be logged to database" % ( task.id, DATABASE_MAX_STRING_SIZE_PRETTY ) )
+ self._collect_metrics( task )
task.stderr = util.shrink_string_by_size( stderr, DATABASE_MAX_STRING_SIZE, join_by="\n..\n", left_larger=True, beginning_on_size_error=True )
task.exit_code = tool_exit_code
task.command_line = self.command_line
diff -r ad576bf6497ec40b64d84802bda596bb63af63af -r 3ffd1ca6ac44736cba9e36abfaf3b1e607ddefb6 lib/galaxy/jobs/metrics/__init__.py
--- /dev/null
+++ b/lib/galaxy/jobs/metrics/__init__.py
@@ -0,0 +1,134 @@
+import collections
+import os
+
+from xml.etree import ElementTree
+
+from galaxy.util.submodules import submodules
+from galaxy import util
+
+from ..metrics import formatting
+
+import logging
+log = logging.getLogger( __name__ )
+
+
+DEFAULT_FORMATTER = formatting.JobMetricFormatter()
+
+
+class JobMetrics( object ):
+
+ def __init__( self, conf_file=None, **kwargs ):
+ """
+ """
+ self.plugin_classes = self.__plugins_dict()
+ self.default_job_instrumenter = JobInstrumenter.from_file( self.plugin_classes, conf_file, **kwargs )
+ self.job_instrumenters = collections.defaultdict( lambda: self.default_job_instrumenter )
+
+ def format( self, plugin, key, value ):
+ if plugin in self.plugin_classes:
+ plugin_class = self.plugin_classes[ plugin ]
+ formatter = plugin_class.formatter
+ else:
+ formatter = DEFAULT_FORMATTER
+ return formatter.format( key, value )
+
+ def set_destination_conf_file( self, destination_id, conf_file ):
+ instrumenter = JobInstrumenter.from_file( self.plugin_classes, conf_file )
+ self.set_destination_instrumenter( destination_id, instrumenter )
+
+ def set_destination_conf_element( self, destination_id, element ):
+ instrumenter = JobInstrumenter( self.plugin_classes, element )
+ self.set_destination_instrumenter( destination_id, instrumenter )
+
+ def set_destination_instrumenter( self, destination_id, job_instrumenter=None ):
+ if job_instrumenter is None:
+ job_instrumenter = NULL_JOB_INSTRUMENTER
+ self.job_instrumenters[ destination_id ] = job_instrumenter
+
+ def collect_properties( self, destination_id, job_id, job_directory ):
+ return self.job_instrumenters[ destination_id ].collect_properties( job_id, job_directory )
+
+ def __plugins_dict( self ):
+ plugin_dict = {}
+ for plugin_module in self.__plugin_modules():
+ for clazz in plugin_module.__all__:
+ plugin_type = getattr( clazz, 'plugin_type', None )
+ if plugin_type:
+ plugin_dict[ plugin_type ] = clazz
+ return plugin_dict
+
+ def __plugin_modules( self ):
+ import galaxy.jobs.metrics.instrumenters
+ return submodules( galaxy.jobs.metrics.instrumenters )
+
+
+class NullJobInstrumenter( object ):
+
+ def pre_execute_commands( self, job_directory ):
+ return None
+
+ def post_execute_commands( self, job_directory ):
+ return None
+
+ def collect_properties( self, job_id, job_directory ):
+ return {}
+
+NULL_JOB_INSTRUMENTER = NullJobInstrumenter()
+
+
+class JobInstrumenter( object ):
+
+ def __init__( self, plugin_classes, metrics_element, **kwargs ):
+ self.extra_kwargs = kwargs
+ self.plugin_classes = plugin_classes
+ self.plugins = self.__plugins_for_element( metrics_element )
+
+ def pre_execute_commands( self, job_directory ):
+ commands = []
+ for plugin in self.plugins:
+ try:
+ plugin_commands = plugin.pre_execute_instrument( job_directory )
+ if plugin_commands:
+ commands.extend( util.listify( plugin_commands ) )
+ except Exception:
+ log.exception( "Failed to generate pre-execute commands for plugin %s" % plugin )
+ return "\n".join( [ c for c in commands if c ] )
+
+ def post_execute_commands( self, job_directory ):
+ commands = []
+ for plugin in self.plugins:
+ try:
+ plugin_commands = plugin.post_execute_instrument( job_directory )
+ if plugin_commands:
+ commands.extend( util.listify( plugin_commands ) )
+ except Exception:
+ log.exception( "Failed to generate post-execute commands for plugin %s" % plugin )
+ return "\n".join( [ c for c in commands if c ] )
+
+ def collect_properties( self, job_id, job_directory ):
+ per_plugin_properites = {}
+ for plugin in self.plugins:
+ try:
+ properties = plugin.job_properties( job_id, job_directory )
+ if properties:
+ per_plugin_properites[ plugin.plugin_type ] = properties
+ except Exception:
+ log.exception( "Failed to collect job properties for plugin %s" % plugin )
+ return per_plugin_properites
+
+ def __plugins_for_element( self, plugins_element ):
+ plugins = []
+ for plugin_element in plugins_element.getchildren():
+ plugin_type = plugin_element.tag
+ plugin_kwds = dict( plugin_element.items() )
+ plugin_kwds.update( self.extra_kwargs )
+ plugin = self.plugin_classes[ plugin_type ]( **plugin_kwds )
+ plugins.append( plugin )
+ return plugins
+
+ @staticmethod
+ def from_file( plugin_classes, conf_file, **kwargs ):
+ if not conf_file or not os.path.exists( conf_file ):
+ return NULL_JOB_INSTRUMENTER
+ plugins_element = ElementTree.parse( conf_file ).getroot()
+ return JobInstrumenter( plugin_classes, plugins_element, **kwargs )
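The JobMetrics class above keeps its per-destination instrumenters in a `collections.defaultdict` whose factory closes over the default instrumenter, so any destination never explicitly configured silently falls back to the default. A tiny standalone illustration of that pattern (stand-in string values instead of JobInstrumenter objects):

```python
import collections

# Stand-in values; in Galaxy these are JobInstrumenter instances.
default_instrumenter = "default"
instrumenters = collections.defaultdict(lambda: default_instrumenter)

# Explicit per-destination override, as set_destination_instrumenter does.
instrumenters["pbs"] = "pbs-specific"

assert instrumenters["pbs"] == "pbs-specific"
# Unconfigured destinations resolve to the default without raising KeyError.
assert instrumenters["never_configured"] == "default"
```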
diff -r ad576bf6497ec40b64d84802bda596bb63af63af -r 3ffd1ca6ac44736cba9e36abfaf3b1e607ddefb6 lib/galaxy/jobs/metrics/formatting.py
--- /dev/null
+++ b/lib/galaxy/jobs/metrics/formatting.py
@@ -0,0 +1,18 @@
+
+
+class JobMetricFormatter( object ):
+ """ Format job metric key-value pairs for human consumption in Web UI. """
+
+ def format( self, key, value ):
+ return ( str( key ), str( value ) )
+
+
+## Formatting utilities
+
+def seconds_to_str( value ):
+ if value < 60:
+ return "%s seconds" % value
+ elif value < 3600:
+ return "%s minutes" % ( value / 60 )
+ else:
+        return "%s hours and %s minutes" % ( value / 3600, ( value % 3600 ) / 60 )
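A Python 3 restatement of the formatting helper above, handy for spot-checking the branch thresholds; `//` makes the Python 2 integer division explicit, and the `value / 3600` term yields hours:

```python
# Self-contained restatement of seconds_to_str from the diff above.
def seconds_to_str(value):
    if value < 60:
        return "%s seconds" % value
    elif value < 3600:
        return "%s minutes" % (value // 60)
    else:
        return "%s hours and %s minutes" % (value // 3600, (value % 3600) // 60)

assert seconds_to_str(45) == "45 seconds"
assert seconds_to_str(150) == "2 minutes"
assert seconds_to_str(7320) == "2 hours and 2 minutes"
```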
diff -r ad576bf6497ec40b64d84802bda596bb63af63af -r 3ffd1ca6ac44736cba9e36abfaf3b1e607ddefb6 lib/galaxy/jobs/metrics/instrumenters/__init__.py
--- /dev/null
+++ b/lib/galaxy/jobs/metrics/instrumenters/__init__.py
@@ -0,0 +1,53 @@
+from abc import ABCMeta
+from abc import abstractmethod
+
+import os.path
+
+from ...metrics import formatting
+
+
+INSTRUMENT_FILE_PREFIX = "__instrument"
+
+
+class InstrumentPlugin( object ):
+ """ A plugin describing how to instrument Galaxy jobs and retrieve metrics
+ from this instrumentation.
+ """
+ __metaclass__ = ABCMeta
+ formatter = formatting.JobMetricFormatter()
+
+ @property
+ @abstractmethod
+ def plugin_type( self ):
+ """ Short string labelling this plugin """
+
+ def pre_execute_instrument( self, job_directory ):
+ """ Optionally return one or more commands to instrument job. These
+ commands will be executed on the compute server prior to the job
+ running.
+ """
+ return None
+
+ def post_execute_instrument( self, job_directory ):
+ """ Optionally return one or more commands to instrument job. These
+ commands will be executed on the compute server after the tool-defined
+ command is run.
+ """
+ return None
+
+ @abstractmethod
+ def job_properties( self, job_id, job_directory ):
+ """ Collect properties for this plugin from specified job directory.
+ This method will run on the Galaxy server and can assume files created
+ in job_directory with pre_execute_instrument and
+ post_execute_instrument are available.
+ """
+
+ def _instrument_file_name( self, name ):
+ """ Provide a common pattern for naming files used by instrumentation
+ plugins - to ease their staging out of remote job directories.
+ """
+ return "%s_%s_%s" % ( INSTRUMENT_FILE_PREFIX, self.plugin_type, name )
+
+ def _instrument_file_path( self, job_directory, name ):
+ return os.path.join( job_directory, self._instrument_file_name( name ) )
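The naming convention used by `_instrument_file_name` can be exercised in isolation; a minimal self-contained sketch reproducing the pattern (the prefix constant is copied from the module, the plugin type and paths are hypothetical):

```python
import os.path

INSTRUMENT_FILE_PREFIX = "__instrument"


def instrument_file_name(plugin_type, name):
    # Mirrors InstrumentPlugin._instrument_file_name: a common prefix makes
    # instrumentation outputs easy to stage out of remote job directories.
    return "%s_%s_%s" % (INSTRUMENT_FILE_PREFIX, plugin_type, name)


def instrument_file_path(job_directory, plugin_type, name):
    return os.path.join(job_directory, instrument_file_name(plugin_type, name))
```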
diff -r ad576bf6497ec40b64d84802bda596bb63af63af -r 3ffd1ca6ac44736cba9e36abfaf3b1e607ddefb6 lib/galaxy/jobs/metrics/instrumenters/core.py
--- /dev/null
+++ b/lib/galaxy/jobs/metrics/instrumenters/core.py
@@ -0,0 +1,85 @@
+from ..instrumenters import InstrumentPlugin
+from ...metrics import formatting
+import time
+
+import logging
+log = logging.getLogger( __name__ )
+
+GALAXY_SLOTS_KEY = "galaxy_slots"
+START_EPOCH_KEY = "start_epoch"
+END_EPOCH_KEY = "end_epoch"
+RUNTIME_SECONDS_KEY = "runtime_seconds"
+
+
+class CorePluginFormatter( formatting.JobMetricFormatter ):
+
+ def format( self, key, value ):
+ value = int( value )
+ if key == GALAXY_SLOTS_KEY:
+ return ( "Cores Allocated", "%d" % value )
+ elif key == RUNTIME_SECONDS_KEY:
+ return ( "Job Runtime (Wall Clock)", formatting.seconds_to_str( value ) )
+ else:
+ # TODO: Use localized version of this from universe_wsgi.ini
+ title = "Job Start Time" if key == START_EPOCH_KEY else "Job End Time"
+ return ( title, time.strftime( '%Y-%m-%d %H:%M:%S', time.localtime( value ) ) )
+
+
+class CorePlugin( InstrumentPlugin ):
+ """ Simple plugin that collects data without external dependencies. In
+ particular, it currently collects the value set for Galaxy slots and the
+ job start and end times.
+ """
+ plugin_type = "core"
+ formatter = CorePluginFormatter()
+
+ def __init__( self, **kwargs ):
+ pass
+
+ def pre_execute_instrument( self, job_directory ):
+ commands = []
+ commands.append( self.__record_galaxy_slots_command( job_directory ) )
+ commands.append( self.__record_seconds_since_epoch_to_file( job_directory, "start" ) )
+ return commands
+
+ def post_execute_instrument( self, job_directory ):
+ commands = []
+ commands.append( self.__record_seconds_since_epoch_to_file( job_directory, "end" ) )
+ return commands
+
+ def job_properties( self, job_id, job_directory ):
+ galaxy_slots_file = self.__galaxy_slots_file( job_directory )
+
+ properties = {}
+ properties[ GALAXY_SLOTS_KEY ] = self.__read_integer( galaxy_slots_file )
+ start = self.__read_seconds_since_epoch( job_directory, "start" )
+ end = self.__read_seconds_since_epoch( job_directory, "end" )
+ if start is not None and end is not None:
+ properties[ START_EPOCH_KEY ] = start
+ properties[ END_EPOCH_KEY ] = end
+ properties[ RUNTIME_SECONDS_KEY ] = end - start
+ return properties
+
+ def __record_galaxy_slots_command( self, job_directory ):
+ galaxy_slots_file = self.__galaxy_slots_file( job_directory )
+ return '''echo "$GALAXY_SLOTS" > '%s' ''' % galaxy_slots_file
+
+ def __record_seconds_since_epoch_to_file( self, job_directory, name ):
+ path = self._instrument_file_path( job_directory, "epoch_%s" % name )
+ # %s here is the shell date format (seconds since epoch), not Python
+ # interpolation - hence the string concatenation.
+ return 'date +"%s" > ' + path
+
+ def __read_seconds_since_epoch( self, job_directory, name ):
+ path = self._instrument_file_path( job_directory, "epoch_%s" % name )
+ return self.__read_integer( path )
+
+ def __galaxy_slots_file( self, job_directory ):
+ return self._instrument_file_path( job_directory, "galaxy_slots" )
+
+ def __read_integer( self, path ):
+ value = None
+ try:
+ value = int( open( path, "r" ).read() )
+ except Exception:
+ pass
+ return value
+
+__all__ = [ 'CorePlugin' ]
diff -r ad576bf6497ec40b64d84802bda596bb63af63af -r 3ffd1ca6ac44736cba9e36abfaf3b1e607ddefb6 lib/galaxy/jobs/runners/__init__.py
--- a/lib/galaxy/jobs/runners/__init__.py
+++ b/lib/galaxy/jobs/runners/__init__.py
@@ -251,7 +251,10 @@
log.debug( 'execution of external set_meta for job %d finished' % job_wrapper.job_id )
def get_job_file(self, job_wrapper, **kwds):
+ job_metrics = job_wrapper.app.job_metrics
+ job_instrumenter = job_metrics.job_instrumenters[ job_wrapper.job_destination.id ]
options = dict(
+ job_instrumenter=job_instrumenter,
galaxy_lib=job_wrapper.galaxy_lib_dir,
env_setup_commands=job_wrapper.get_env_setup_clause(),
working_directory=os.path.abspath( job_wrapper.working_directory ),
diff -r ad576bf6497ec40b64d84802bda596bb63af63af -r 3ffd1ca6ac44736cba9e36abfaf3b1e607ddefb6 lib/galaxy/jobs/runners/util/job_script/DEFAULT_JOB_FILE_TEMPLATE.sh
--- a/lib/galaxy/jobs/runners/util/job_script/DEFAULT_JOB_FILE_TEMPLATE.sh
+++ b/lib/galaxy/jobs/runners/util/job_script/DEFAULT_JOB_FILE_TEMPLATE.sh
@@ -12,6 +12,8 @@
export PYTHONPATH
fi
$env_setup_commands
+$instrument_pre_commands
cd $working_directory
$command
echo $? > $exit_code_path
+$instrument_post_commands
diff -r ad576bf6497ec40b64d84802bda596bb63af63af -r 3ffd1ca6ac44736cba9e36abfaf3b1e607ddefb6 lib/galaxy/jobs/runners/util/job_script/__init__.py
--- a/lib/galaxy/jobs/runners/util/job_script/__init__.py
+++ b/lib/galaxy/jobs/runners/util/job_script/__init__.py
@@ -18,6 +18,8 @@
'headers': '',
'env_setup_commands': '',
'slots_statement': SLOTS_STATEMENT_CLUSTER_DEFAULT,
+ 'instrument_pre_commands': '',
+ 'instrument_post_commands': '',
}
@@ -47,6 +49,13 @@
"""
if any([param not in kwds for param in REQUIRED_TEMPLATE_PARAMS]):
raise Exception("Failed to create job_script, a required parameter is missing.")
+ job_instrumenter = kwds.get("job_instrumenter", None)
+ if job_instrumenter:
+ del kwds[ "job_instrumenter" ]
+ working_directory = kwds["working_directory"]
+ kwds["instrument_pre_commands"] = job_instrumenter.pre_execute_commands(working_directory) or ''
+ kwds["instrument_post_commands"] = job_instrumenter.post_execute_commands(working_directory) or ''
+
template_params = OPTIONAL_TEMPLATE_PARAMS.copy()
template_params.update(**kwds)
if not isinstance(template, Template):
diff -r ad576bf6497ec40b64d84802bda596bb63af63af -r 3ffd1ca6ac44736cba9e36abfaf3b1e607ddefb6 lib/galaxy/model/__init__.py
--- a/lib/galaxy/model/__init__.py
+++ b/lib/galaxy/model/__init__.py
@@ -102,6 +102,11 @@
metric = self._text_metric( plugin, metric_name, metric_value )
self.text_metrics.append( metric )
+ @property
+ def metrics( self ):
+ # TODO: Make iterable, concatenate with chain
+ return self.text_metrics + self.numeric_metrics
+
class User( object, Dictifiable ):
use_pbkdf2 = True
@@ -414,6 +419,12 @@
self.info = info
def set_runner_name( self, job_runner_name ):
self.job_runner_name = job_runner_name
+
+ def get_job( self ):
+ # Added so job and task have same interface (.get_job() ) to get at
+ # underlying job object.
+ return self
+
def set_runner_external_id( self, job_runner_external_id ):
self.job_runner_external_id = job_runner_external_id
def set_post_job_actions( self, post_job_actions ):
diff -r ad576bf6497ec40b64d84802bda596bb63af63af -r 3ffd1ca6ac44736cba9e36abfaf3b1e607ddefb6 templates/show_params.mako
--- a/templates/show_params.mako
+++ b/templates/show_params.mako
@@ -126,6 +126,13 @@
%if job and job.command_line and trans.user_is_admin():
<tr><td>Job Command-Line:</td><td>${ job.command_line | h }</td></tr>
%endif
+ %if job and trans.user_is_admin():
+ <% job_metrics = trans.app.job_metrics %>
+ %for metric in job.metrics:
+ <% metric_title, metric_value = job_metrics.format( metric.plugin, metric.metric_name, metric.metric_value ) %>
+ <tr><td>${ metric_title | h }</td><td>${ metric_value | h }</td></tr>
+ %endfor
+ %endif
</table><br />
diff -r ad576bf6497ec40b64d84802bda596bb63af63af -r 3ffd1ca6ac44736cba9e36abfaf3b1e607ddefb6 test/unit/test_galaxy_mapping.py
--- a/test/unit/test_galaxy_mapping.py
+++ b/test/unit/test_galaxy_mapping.py
@@ -234,6 +234,11 @@
self.persist( u, job )
+ task = model.Task( job=job, working_directory="/tmp", prepare_files_cmd="split.sh" )
+ task.add_metric( "gx", "galaxy_slots", 5 )
+ task.add_metric( "system", "system_name", "localhost" )
+ self.persist( task )
+
def test_tasks( self ):
model = self.model
u = model.User( email="jobtest(a)foo.bar.baz", password="password" )
https://bitbucket.org/galaxy/galaxy-central/commits/2a7ff2234880/
Changeset: 2a7ff2234880
User: jmchilton
Date: 2014-03-26 17:01:46
Summary: Implement env job instrumentation plugin.
The 'env' plugin can collect all environment variables set at the time of job execution or just targeted environment variables. Environment variables like PATH or PYTHONPATH may be useful for debugging, and environment variables such as HOSTNAME, PID, and DRM-specific variables like SLURM_JOB_ID could be useful for integrating Galaxy's view of jobs and users with external accounting/metric systems. See job_metrics_conf.xml.sample for how to configure environment variable collection.
Affected #: 2 files
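The variable filtering described above amounts to parsing `env` output and keeping either everything or a requested subset; a self-contained sketch of that parsing (sample data hypothetical, not the plugin's actual code):

```python
def parse_env_dump(lines, variables=None):
    # Each line of `env` output is NAME=value; split on the first '='
    # only, so values containing '=' survive intact. If `variables` is
    # None, keep everything; otherwise keep only the requested names.
    properties = {}
    for line in lines:
        var, value = line.rstrip("\n").split("=", 1)
        if not variables or var in variables:
            properties[var] = value
    return properties
```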
diff -r 3ffd1ca6ac44736cba9e36abfaf3b1e607ddefb6 -r 2a7ff223488017279b3bdaf854821a6500ff688c job_metrics_conf.xml.sample
--- a/job_metrics_conf.xml.sample
+++ b/job_metrics_conf.xml.sample
@@ -15,4 +15,10 @@
<!-- Core plugin captures Galaxy slots, start and end of job (in seconds
since epoch) and computes runtime in seconds. --><core />
+ <!-- Uncomment following to enable plugin dumping complete environment
+ for each job, potentially useful for debugging -->
+ <!-- <env /> -->
+ <!-- env plugin can also record more targeted, obviously useful variables
+ as well. -->
+ <!-- <env variables="HOSTNAME,SLURM_CPUS_ON_NODE,SLURM_JOBID" /> --></job_metrics>
diff -r 3ffd1ca6ac44736cba9e36abfaf3b1e607ddefb6 -r 2a7ff223488017279b3bdaf854821a6500ff688c lib/galaxy/jobs/metrics/instrumenters/env.py
--- /dev/null
+++ b/lib/galaxy/jobs/metrics/instrumenters/env.py
@@ -0,0 +1,54 @@
+from ..instrumenters import InstrumentPlugin
+from ...metrics import formatting
+
+import logging
+log = logging.getLogger( __name__ )
+
+
+class EnvFormatter( formatting.JobMetricFormatter ):
+
+ def format( self, key, value ):
+ return ( "%s (runtime environment variable)" % key, value )
+
+
+class EnvPlugin( InstrumentPlugin ):
+ """ Instrumentation plugin capable of recording all or specific environment
+ variables for a job at runtime.
+ """
+ plugin_type = "env"
+ formatter = EnvFormatter()
+
+ def __init__( self, **kwargs ):
+ variables_str = kwargs.get( "variables", None )
+ if variables_str:
+ variables = [ v.strip() for v in variables_str.split(",") ]
+ else:
+ variables = None
+ self.variables = variables
+
+ def pre_execute_instrument( self, job_directory ):
+ """ Use env to dump all environment variables to a file.
+ """
+ return "env > '%s'" % self.__env_file( job_directory )
+
+ def post_execute_instrument( self, job_directory ):
+ return None
+
+ def job_properties( self, job_id, job_directory ):
+ """ Recover environment variables dumped out on compute server and filter
+ out specific variables if needed.
+ """
+ variables = self.variables
+
+ properties = {}
+ for line in open( self.__env_file( job_directory ) ).readlines():
+ var, value = line.split( "=", 1 )
+ if not variables or var in variables:
+ properties[ var ] = value
+
+ return properties
+
+ def __env_file( self, job_directory ):
+ return self._instrument_file_path( job_directory, "vars" )
+
+__all__ = [ 'EnvPlugin' ]
https://bitbucket.org/galaxy/galaxy-central/commits/de4745c66e9d/
Changeset: de4745c66e9d
User: jmchilton
Date: 2014-03-26 17:01:46
Summary: Implement cpuinfo job instrumentation plugin.
Affected #: 2 files
diff -r 2a7ff223488017279b3bdaf854821a6500ff688c -r de4745c66e9d030a7df3a8eec9586fa9d65aa6c6 job_metrics_conf.xml.sample
--- a/job_metrics_conf.xml.sample
+++ b/job_metrics_conf.xml.sample
@@ -15,6 +15,13 @@
<!-- Core plugin captures Galaxy slots, start and end of job (in seconds
since epoch) and computes runtime in seconds. --><core />
+
+ <!-- Uncomment to dump processor count for each job - linux only. -->
+ <!-- <cpuinfo /> -->
+ <!-- Uncomment to dump information about all processors for each
+ job - this is likely too much data. Linux only. -->
+ <!-- <cpuinfo verbose="true" /> -->
+
<!-- Uncomment following to enable plugin dumping complete environment
+ for each job, potentially useful for debugging --><!-- <env /> -->
diff -r 2a7ff223488017279b3bdaf854821a6500ff688c -r de4745c66e9d030a7df3a8eec9586fa9d65aa6c6 lib/galaxy/jobs/metrics/instrumenters/cpuinfo.py
--- /dev/null
+++ b/lib/galaxy/jobs/metrics/instrumenters/cpuinfo.py
@@ -0,0 +1,62 @@
+import re
+
+from galaxy import util
+
+from ..instrumenters import InstrumentPlugin
+from ...metrics import formatting
+
+import logging
+log = logging.getLogger( __name__ )
+
+PROCESSOR_LINE = re.compile(r"processor\s*\:\s*(\d+)")
+
+
+class CpuInfoFormatter( formatting.JobMetricFormatter ):
+
+ def format( self, key, value ):
+ if key == "processor_count":
+ return "Processor Count", "%s" % int( value )
+ else:
+ return key, value
+
+
+class CpuInfoPlugin( InstrumentPlugin ):
+ """ Gather information about processor configuration from /proc/cpuinfo.
+ Linux only.
+ """
+ plugin_type = "cpuinfo"
+ formatter = CpuInfoFormatter()
+
+ def __init__( self, **kwargs ):
+ self.verbose = util.asbool( kwargs.get( "verbose", False ) )
+
+ def pre_execute_instrument( self, job_directory ):
+ return "cat /proc/cpuinfo > '%s'" % self.__instrument_cpuinfo_path( job_directory )
+
+ def job_properties( self, job_id, job_directory ):
+ properties = {}
+ processor_count = 0
+ with open( self.__instrument_cpuinfo_path( job_directory ) ) as f:
+ current_processor = None
+ for line in f:
+ line = line.strip().lower()
+ if not line: # Skip empty lines
+ continue
+
+ processor_line_match = PROCESSOR_LINE.match( line )
+ if processor_line_match:
+ processor_count += 1
+ current_processor = processor_line_match.group( 1 )
+ elif current_processor and self.verbose:
+ # If verbose, dump information about each processor
+ # into database...
+ key, value = line.split( ":", 1 )
+ key = "processor_%s_%s" % ( current_processor, key.strip() )
+ value = value.strip()
+ properties[ key ] = value
+ properties[ "processor_count" ] = processor_count
+ return properties
+
+ def __instrument_cpuinfo_path( self, job_directory ):
+ return self._instrument_file_path( job_directory, "cpuinfo" )
+
+__all__ = [ 'CpuInfoPlugin' ]
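The processor-counting loop above can be checked against a small /proc/cpuinfo excerpt; a standalone sketch using the same regular expression (the sample data is fabricated):

```python
import re

# Same pattern as the plugin: one "processor : N" line per logical CPU.
PROCESSOR_LINE = re.compile(r"processor\s*\:\s*(\d+)")


def count_processors(cpuinfo_text):
    count = 0
    for line in cpuinfo_text.splitlines():
        if PROCESSOR_LINE.match(line.strip().lower()):
            count += 1
    return count
```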
https://bitbucket.org/galaxy/galaxy-central/commits/2d9d06f5b39e/
Changeset: 2d9d06f5b39e
User: jmchilton
Date: 2014-03-26 17:01:46
Summary: Implement meminfo job instrumentation plugin.
Affected #: 2 files
diff -r de4745c66e9d030a7df3a8eec9586fa9d65aa6c6 -r 2d9d06f5b39eb5d7d2b926dc1d74d7720cf43e2f job_metrics_conf.xml.sample
--- a/job_metrics_conf.xml.sample
+++ b/job_metrics_conf.xml.sample
@@ -22,6 +22,10 @@
job - this is likely too much data. Linux only. --><!-- <cpuinfo verbose="true" /> -->
+ <!-- Uncomment to dump system memory information for each job - linux
+ only. -->
+ <!-- <meminfo /> -->
+
<!-- Uncomment following to enable plugin dumping complete environment
+ for each job, potentially useful for debugging --><!-- <env /> -->
diff -r de4745c66e9d030a7df3a8eec9586fa9d65aa6c6 -r 2d9d06f5b39eb5d7d2b926dc1d74d7720cf43e2f lib/galaxy/jobs/metrics/instrumenters/meminfo.py
--- /dev/null
+++ b/lib/galaxy/jobs/metrics/instrumenters/meminfo.py
@@ -0,0 +1,59 @@
+import re
+
+from galaxy import util
+
+from ..instrumenters import InstrumentPlugin
+from ...metrics import formatting
+
+MEMINFO_LINE = re.compile(r"(\w+)\s*\:\s*(\d+) kB")
+
+# Important (non-verbose) meminfo property titles.
+MEMINFO_TITLES = {
+ "memtotal": "Total System Memory",
+ "swaptotal": "Total System Swap"
+}
+
+
+class MemInfoFormatter( formatting.JobMetricFormatter ):
+
+ def format( self, key, value ):
+ title = MEMINFO_TITLES.get( key, key )
+ return title, util.nice_size( value * 1000 ) # kB = *1000, KB = *1024 - wikipedia
+
+
+class MemInfoPlugin( InstrumentPlugin ):
+ """ Gather information about memory configuration from /proc/meminfo.
+ Linux only.
+ """
+ plugin_type = "meminfo"
+ formatter = MemInfoFormatter()
+
+ def __init__( self, **kwargs ):
+ self.verbose = util.asbool( kwargs.get( "verbose", False ) )
+
+ def pre_execute_instrument( self, job_directory ):
+ return "cat /proc/meminfo > '%s'" % self.__instrument_meminfo_path( job_directory )
+
+ def job_properties( self, job_id, job_directory ):
+ properties = {}
+ with open( self.__instrument_meminfo_path( job_directory ) ) as f:
+ for line in f:
+ line = line.strip()
+ if not line: # Skip empty lines
+ continue
+ line_match = MEMINFO_LINE.match( line )
+ if not line_match:
+ continue
+ key = line_match.group( 1 ).lower()
+ # By default just grab important meminfo properties with titles
+ # defined for formatter. Grab everything in verbose mode for
+ # an arbitrary snapshot of memory at beginning of run.
+ if key in MEMINFO_TITLES or self.verbose:
+ value = long( line_match.group( 2 ) )
+ properties[ key ] = value
+ return properties
+
+ def __instrument_meminfo_path( self, job_directory ):
+ return self._instrument_file_path( job_directory, "meminfo" )
+
+__all__ = [ 'MemInfoPlugin' ]
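The meminfo parsing can likewise be exercised standalone; a sketch reusing the same regular expression and titled-keys table against a fabricated /proc/meminfo excerpt:

```python
import re

MEMINFO_LINE = re.compile(r"(\w+)\s*\:\s*(\d+) kB")

# Only these keys are kept by default; verbose mode keeps everything.
MEMINFO_TITLES = {
    "memtotal": "Total System Memory",
    "swaptotal": "Total System Swap",
}


def parse_meminfo(text, verbose=False):
    properties = {}
    for line in text.splitlines():
        match = MEMINFO_LINE.match(line.strip())
        if not match:
            continue
        key = match.group(1).lower()
        if key in MEMINFO_TITLES or verbose:
            properties[key] = int(match.group(2))
    return properties
```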
https://bitbucket.org/galaxy/galaxy-central/commits/54d890bb498d/
Changeset: 54d890bb498d
User: jmchilton
Date: 2014-03-26 17:01:46
Summary: Implement uname job instrumentation plugin.
Gathers information on the operating system configuration the job runs on - Linux only.
Affected #: 2 files
diff -r 2d9d06f5b39eb5d7d2b926dc1d74d7720cf43e2f -r 54d890bb498dc7bb90a1a61c5b4dc636daf8385a job_metrics_conf.xml.sample
--- a/job_metrics_conf.xml.sample
+++ b/job_metrics_conf.xml.sample
@@ -26,6 +26,10 @@
only. --><!-- <meminfo /> -->
+ <!-- Uncomment to record operating system each job is executed on - linux
+ only. -->
+ <!-- <uname /> -->
+
<!-- Uncomment following to enable plugin dumping complete environment
for each job, potentially useful for debugging --><!-- <env /> -->
diff -r 2d9d06f5b39eb5d7d2b926dc1d74d7720cf43e2f -r 54d890bb498dc7bb90a1a61c5b4dc636daf8385a lib/galaxy/jobs/metrics/instrumenters/uname.py
--- /dev/null
+++ b/lib/galaxy/jobs/metrics/instrumenters/uname.py
@@ -0,0 +1,34 @@
+from ..instrumenters import InstrumentPlugin
+from ...metrics import formatting
+
+
+class UnameFormatter( formatting.JobMetricFormatter ):
+
+ def format( self, key, value ):
+ return "Operating System", value
+
+
+class UnamePlugin( InstrumentPlugin ):
+ """ Use uname to gather operating system information about remote system
+ job is running on. Linux only.
+ """
+ plugin_type = "uname"
+ formatter = UnameFormatter()
+
+ def __init__( self, **kwargs ):
+ self.uname_args = kwargs.get( "args", "-a" )
+
+ def pre_execute_instrument( self, job_directory ):
+ return "uname %s > '%s'" % ( self.uname_args, self.__instrument_uname_path( job_directory ) )
+
+ def job_properties( self, job_id, job_directory ):
+ properties = {}
+ with open( self.__instrument_uname_path( job_directory ) ) as f:
+ properties[ "uname" ] = f.read()
+ return properties
+
+ def __instrument_uname_path( self, job_directory ):
+ return self._instrument_file_path( job_directory, "uname" )
+
+
+__all__ = [ 'UnamePlugin' ]
https://bitbucket.org/galaxy/galaxy-central/commits/8cd359ea420e/
Changeset: 8cd359ea420e
User: jmchilton
Date: 2014-03-26 17:01:46
Summary: Implement collectl job instrumentation plugin.
This changeset documentation is an abridged form of the documentation included in job_metrics_conf.xml.sample.
Collectl (http://collectl.sourceforge.net/) is a powerful monitoring utility capable of gathering numerous system and process level statistics of running applications. The Galaxy collectl job metrics plugin by default will grab a variety of process level metrics aggregated across all processes corresponding to a job; this behavior is highly customizable - both by using the attributes documented below and by simply hacking up the code in lib/galaxy/jobs/metrics.
Warning: In order to use this plugin collectl must be available on the compute server the job runs on and on the local Galaxy server as well (unless 'summarize_process_data' is set to False, in which case it is not needed on the Galaxy server).
Attributes (the following describes attributes that can be used with the collectl job metrics element above to modify its behavior).
'summarize_process_data': Boolean indicating whether to run collectl
in playback mode after jobs complete and gather process level
statistics for the job run. These statistics can be customized
with the 'process_statistics' attribute. (defaults to True)
'saved_logs_path': If set (it is off by default), all collectl logs
will be saved to the specified path after jobs complete. These
logs can later be replayed using collectl offline to generate
full time-series data corresponding to a job run.
'subsystems': Comma separated list of collectl subsystems to collect
data for. Plugin doesn't currently expose all of them or offer
summary data for any of them except 'process' but extensions
would be welcome. May seem pointless to include subsystems besides
process since they won't be processed online by Galaxy - but if
'saved_logs_path' is set these files can be played back at any time.
Available subsystems - 'process', 'cpu', 'memory', 'network',
'disk'. (Default 'process').
'process_statistics': If 'summarize_process_data' is set this attribute
can be specified as a comma separated list to override the statistics
that are gathered. Each statistic is of the form X_Y where X is
one of 'min', 'max', 'count', 'avg', or 'sum' and Y is a value
from 'S', 'VmSize', 'VmLck', 'VmRSS', 'VmData', 'VmStk', 'VmExe',
'VmLib', 'CPU', 'SysT', 'UsrT', 'PCT', 'AccumT', 'WKB', 'RKBC',
'WKBC', 'RSYS', 'WSYS', 'CNCL', 'MajF', 'MinF'. Consult
lib/galaxy/jobs/metrics/collectl/processes.py for more details on
what each of these resource types means.
Defaults to 'max_VmSize,avg_VmSize,max_VmRSS,avg_VmRSS,sum_SysT,sum_UsrT,max_PCT,avg_PCT,max_AccumT,sum_RSYS,sum_WSYS',
a variety of statistics roughly describing CPU and memory usage of
the program and VERY ROUGHLY describing I/O consumption.
Affected #: 7 files
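Putting the attributes above together, a configured element in job_metrics_conf.xml might look like the following (the values and paths are purely illustrative, not recommendations):

```xml
<job_metrics>
    <!-- Summarize process data after each job, keep raw collectl logs
         for offline replay, and poll process data every 10 seconds. -->
    <collectl summarize_process_data="true"
              saved_logs_path="/data/collectl_logs"
              subsystems="process,memory"
              interval="10" />
</job_metrics>
```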
diff -r 54d890bb498dc7bb90a1a61c5b4dc636daf8385a -r 8cd359ea420e3b0c3cbadcbc6fdf99ef614ad28a job_metrics_conf.xml.sample
--- a/job_metrics_conf.xml.sample
+++ b/job_metrics_conf.xml.sample
@@ -36,4 +36,89 @@
<!-- env plugin can also record more targeted, obviously useful variables
as well. --><!-- <env variables="HOSTNAME,SLURM_CPUS_ON_NODE,SLURM_JOBID" /> -->
+
+ <!-- <collectl /> -->
+ <!-- Collectl (http://collectl.sourceforge.net/) is a powerful monitoring
+ utility capable of gathering numerous system and process level
+ statistics of running applications. The Galaxy collectl job metrics
+ plugin by default will grab a variety of process level metrics
+ aggregated across all processes corresponding to a job; this behavior
+ is highly customizable - both by using the attributes documented below
+ and by simply hacking up the code in lib/galaxy/jobs/metrics.
+
+ Warning: In order to use this plugin collectl must be available on the
+ compute server the job runs on and on the local Galaxy server as well
+ (unless 'summarize_process_data' is set to False, in which case it is
+ not needed on the Galaxy server).
+
+ Attributes (the following describes attributes that can be used with
+ the collectl job metrics element above to modify its behavior).
+
+ 'summarize_process_data': Boolean indicating whether to run collectl
+ in playback mode after jobs complete and gather process level
+ statistics for the job run. These statistics can be customized
+ with the 'process_statistics' attribute. (defaults to True)
+
+ 'saved_logs_path': If set (it is off by default), all collectl logs
+ will be saved to the specified path after jobs complete. These
+ logs can later be replayed using collectl offline to generate
+ full time-series data corresponding to a job run.
+
+ 'subsystems': Comma separated list of collectl subsystems to collect
+ data for. Plugin doesn't currently expose all of them or offer
+ summary data for any of them except 'process' but extensions
+ would be welcome. May seem pointless to include subsystems
+ besides process since they won't be processed online by Galaxy -
+ but if 'saved_logs_path' is set these files can be played back at any time.
+
+ Available subsystems - 'process', 'cpu', 'memory', 'network',
+ 'disk'. (Default 'process').
+
+ Warning: If you override this - be sure to include 'process'
+ unless 'summarize_process_data' is set to false.
+
+ 'process_statistics': If 'summarize_process_data' is set this attribute
+ can be specified as a comma separated list to override the
+ statistics that are gathered. Each statistic is of the form X_Y
+ where X is one of 'min', 'max', 'count', 'avg', or 'sum' and Y is
+ a value from 'S', 'VmSize', 'VmLck', 'VmRSS', 'VmData', 'VmStk',
+ 'VmExe', 'VmLib', 'CPU', 'SysT', 'UsrT', 'PCT', 'AccumT', 'WKB',
+ 'RKBC', 'WKBC', 'RSYS', 'WSYS', 'CNCL', 'MajF', 'MinF'. Consult
+ lib/galaxy/jobs/metrics/collectl/processes.py for more details
+ on what each of these resource types means.
+
+ Defaults to 'max_VmSize,avg_VmSize,max_VmRSS,avg_VmRSS,sum_SysT,sum_UsrT,max_PCT,avg_PCT,max_AccumT,sum_RSYS,sum_WSYS',
+ a variety of statistics roughly describing CPU and memory
+ usage of the program and VERY ROUGHLY describing I/O consumption.
+
+ 'procfilt_on': By default Galaxy will tell collectl to only collect
+ 'process' level data for the current user (as identified
+ by 'username', the default) - this can be disabled by setting this
+ to 'none' - the plugin will still only aggregate process level
+ statistics for the job's process tree - but the additional
+ information can still be used offline with 'saved_logs_path'
+ if set. Obscurely, this can also be set to 'uid' to identify
+ the current user to filter on by UID instead of username -
+ this may be needed on some clusters.
+
+ 'interval': The time (in seconds) between data collection points.
+ Collectl uses a variety of different defaults for different
+ subsystems if this is not set, but process information (likely
+ the most pertinent for Galaxy jobs) will be collected every
+ 60 seconds.
+
+ 'flush': Interval (in seconds I think) between when collectl will
+ flush its buffer to disk. Galaxy overrides this to disable
+ flushing by default if not set.
+
+ 'local_collectl_path', 'remote_collectl_path', 'collectl_path':
+ By default, jobs will just assume collectl is on the PATH, but
+ it can be overridden with 'local_collectl_path' and
+ 'remote_collectl_path' (or simply 'collectl_path' if it is not
+ on the path but installed in the same location both locally and
+ remotely).
+
+ There are more, increasingly obscure, options including -
+ log_collectl_program_output, interval2, and interval3. Consult
+ source code for more details.
+ --></job_metrics>
diff -r 54d890bb498dc7bb90a1a61c5b4dc636daf8385a -r 8cd359ea420e3b0c3cbadcbc6fdf99ef614ad28a lib/galaxy/jobs/metrics/collectl/__init__.py
--- /dev/null
+++ b/lib/galaxy/jobs/metrics/collectl/__init__.py
@@ -0,0 +1,5 @@
+""" This module contains helper functions and data structures for interacting
+with collectl and collectl generated data. More information on collectl can be
+found at: http://collectl.sourceforge.net/.
+
+"""
diff -r 54d890bb498dc7bb90a1a61c5b4dc636daf8385a -r 8cd359ea420e3b0c3cbadcbc6fdf99ef614ad28a lib/galaxy/jobs/metrics/collectl/cli.py
--- /dev/null
+++ b/lib/galaxy/jobs/metrics/collectl/cli.py
@@ -0,0 +1,128 @@
+from string import Template
+import subprocess
+
+import logging
+log = logging.getLogger( __name__ )
+
+COMMAND_LINE_TEMPLATE = Template(
+ "$collectl_path $destination_arg $mode_arg $subsystems_arg $interval_arg $procfilt_arg $flush_arg $sep_arg"
+)
+MODE_RECORD = "record"
+MODE_PLAYBACK = "playback"
+
+
+class CollectlCli( object ):
+ """ Abstraction over (some of) the command-line arguments of collectl.
+ Ideally this will be useful for building up command line arguments for
+ remote execution as well as running directly on the local host.
+
+ This is meant to be a fairly generic utility - for interfacing with
+ collectl CLI - logic more directly related to the Galaxy job metrics
+ plugin should be placed in other modules.
+
+ Keyword Arguments:
+ collectl_path: Path to collectl executable (defaults to collectl - i.e.
+ search the PATH).
+
+ playback_path (defaults to None): If this is None collectl will run in
+ record mode, else it will playback specified file.
+
+ Playback Mode Options:
+
+ sep : Separator used in playback mode (set to 9 to produce tsv)
+ (defaults to None).
+
+ Record Mode Options (some of these may work in playback mode also):
+
+ destination_path: Location to write collectl data files to (defaults
+ to None and collectl will just use cwd). Really this is just a
+ prefix - collectl will append hostname and datetime to the file name.
+ interval: Setup polling interval (secs) for most subsystems (defaults
+ to None and when unspecified collectl will use default of 1 second).
+ interval2: Setup polling interval (secs) for process information
+ (defaults to None and when unspecified collectl will use a default
+ of 60 seconds).
+ interval3: Setup polling interval (secs) for environment information
+ (defaults to None and when unspecified collectl will use a default
+ of 300 seconds).
+ procfilt: Optional argument to procfilt. (defaults to None).
+ flush : Optional flush interval (defaults to None).
+ """
+
+ def __init__( self, **kwargs ):
+ command_args = {}
+ command_args[ "collectl_path" ] = kwargs.get( "collectl_path", "collectl" )
+ playback_path = kwargs.get( "playback_path", None )
+ self.mode = MODE_RECORD if not playback_path else MODE_PLAYBACK
+ if self.mode == MODE_RECORD:
+ mode_arg = ""
+ elif self.mode == MODE_PLAYBACK:
+ mode_arg = "-P -p '%s'" % playback_path
+ else:
+ raise Exception( "Invalid mode supplied to CollectlCli - %s" % self.mode )
+ command_args[ "mode_arg" ] = mode_arg
+ command_args[ "interval_arg" ] = self.__interval_arg( kwargs )
+ destination = kwargs.get( "destination_path", None )
+ if destination:
+ destination_arg = "-f '%s'" % destination
+ else:
+ destination_arg = ""
+ command_args[ "destination_arg" ] = destination_arg
+ procfilt = kwargs.get( "procfilt", None )
+ command_args[ "procfilt_arg" ] = "" if not procfilt else "--procfilt %s" % procfilt
+ command_args[ "subsystems_arg" ] = self.__subsystems_arg( kwargs.get( "subsystems", [] ) )
+ flush = kwargs.get( "flush", None )
+ command_args[ "flush_arg"] = "--flush %s" % flush if flush else ""
+ sep = kwargs.get( "sep", None )
+ command_args[ "sep_arg" ] = "--sep=%s" % sep if sep else ""
+
+ self.command_args = command_args
+
+ def __subsystems_arg( self, subsystems ):
+ if subsystems:
+ return "-s%s" % "".join( [ s.command_line_arg for s in subsystems ] )
+ else:
+ return ""
+
+ def __interval_arg( self, kwargs ):
+ if self.mode != MODE_RECORD:
+ return ""
+
+ interval = kwargs.get( "interval", None )
+ if not interval:
+ return ""
+
+ self.__validate_interval_arg( interval )
+ interval_arg = "-i %s" % interval
+ interval2 = kwargs.get( "interval2", None )
+ if not interval2:
+ return interval_arg
+ self.__validate_interval_arg( interval2, multiple_of=int( interval ) )
+ interval_arg = "%s:%s" % ( interval_arg, interval2 )
+
+ interval3 = kwargs.get( "interval3", None )
+ if not interval3:
+ return interval_arg
+ self.__validate_interval_arg( interval3, multiple_of=int( interval ) )
+ interval_arg = "%s:%s" % ( interval_arg, interval3 )
+ return interval_arg
+
+ def __validate_interval_arg( self, value, multiple_of=None ):
+ if value and not str(value).isdigit():
+ raise Exception( "Invalid interval argument supplied, must be integer %s" % value )
+ if multiple_of:
+ if int( value ) % multiple_of != 0:
+ raise Exception( "Invalid interval argument supplied, must be a multiple of %s" % multiple_of )
+
+ def build_command_line( self ):
+ return COMMAND_LINE_TEMPLATE.substitute( **self.command_args )
+
+ def run( self, stdout=subprocess.PIPE, stderr=subprocess.PIPE ):
+ command_line = self.build_command_line()
+ log.info( "Executing %s" % command_line )
+ proc = subprocess.Popen( command_line, shell=True, stdout=stdout, stderr=stderr )
+ return_code = proc.wait()
+ if return_code:
+ raise Exception( "Problem running collectl command." )
+
+__all__ = [ 'CollectlCli' ]
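The argument assembly in `CollectlCli.__init__` above can be sketched standalone. This is a simplified illustration, not the module itself: the template string and the `build_command_line` helper below are assumptions modeled on the flags built in the diff (`-P -p` for playback, `-f` for a destination file, `-s` for subsystems), with the interval, procfilt, flush, and sep arguments omitted for brevity.

```python
# Hypothetical sketch of CollectlCli's command-line assembly; the
# template text is an assumption modeled on the diff, not a verbatim
# copy of the module's COMMAND_LINE_TEMPLATE.
from string import Template

COMMAND_LINE_TEMPLATE = Template(
    "$collectl_path $mode_arg $interval_arg $destination_arg "
    "$procfilt_arg $subsystems_arg $flush_arg $sep_arg"
)


def build_command_line(collectl_path="collectl", playback_path=None,
                       destination_path=None, subsystems=""):
    # Record mode needs no mode flag; playback mode replays a raw log.
    mode_arg = "-P -p '%s'" % playback_path if playback_path else ""
    destination_arg = "-f '%s'" % destination_path if destination_path else ""
    subsystems_arg = "-s%s" % subsystems if subsystems else ""
    line = COMMAND_LINE_TEMPLATE.substitute(
        collectl_path=collectl_path,
        mode_arg=mode_arg,
        interval_arg="",
        destination_arg=destination_arg,
        procfilt_arg="",
        subsystems_arg=subsystems_arg,
        flush_arg="",
        sep_arg="",
    )
    # Collapse the extra spaces left by empty arguments.
    return " ".join(line.split())
```

For example, recording just the process subsystem yields `collectl -sZ`, while a playback invocation carries the `-P -p '<raw log>'` pair.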
diff -r 54d890bb498dc7bb90a1a61c5b4dc636daf8385a -r 8cd359ea420e3b0c3cbadcbc6fdf99ef614ad28a lib/galaxy/jobs/metrics/collectl/processes.py
--- /dev/null
+++ b/lib/galaxy/jobs/metrics/collectl/processes.py
@@ -0,0 +1,252 @@
+""" This module runs collectl in playback mode and collects various process
+statistics for a given pid's process and the other processes in its tree.
+"""
+import collections
+import csv
+import tempfile
+
+from galaxy import util
+
+from ..collectl import stats
+
+import logging
+log = logging.getLogger( __name__ )
+
+# Collectl process information cheat sheet:
+#
+# Record process information for current user.
+# % collectl -sZ -f./__instrument_collectl -i 10:10 --procfilt U$USER
+#
+# TSV Replay of processing information in plottable mode...
+#
+# % collectl -sZ -P --sep=9 -p __instrument_collectl-jlaptop13-20140322-120919.raw.gz
+#
+# Has following columns:
+# Date Time PID User PR PPID THRD S VmSize VmLck VmRSS VmData VmStk VmExe VmLib CPU SysT UsrT PCT AccumT RKB WKB RKBC WKBC RSYS WSYS CNCL MajF MinF Command
+#
+
+# Process data dumped one row per process per interval.
+#http://collectl.sourceforge.net/Data-detail.html
+PROCESS_COLUMNS = [
+ "#Date", # Date of interval - e.g. 20140322
+ "Time", # Time of interval - 12:18:58
+ "PID", # Process pid.
+ "User", # Process user.
+ "PR", # Priority of process.
+ "PPID", # Parent PID of process.
+ "THRD", # Number of threads.
+ "S", # Process state - S - Sleeping, D - Uninterruptible Sleep, R - Running, Z - Zombie or T - Stopped/Traced
+ ## Memory options - http://ewx.livejournal.com/579283.html
+ "VmSize",
+ "VmLck",
+ "VmRSS",
+ "VmData",
+ "VmStk",
+ "VmExe",
+ "VmLib",
+ "CPU", # CPU number of process
+ "SysT", # Amount of system time consumed during interval
+ "UsrT", # Amount of user time consumed during interval
+ "PCT", # Percentage of current interval consumed by task
+ "AccumT", # Total accumulated System and User time since the process began execution
+ # kilobytes read/written - requires I/O level monitoring to be enabled in kernel.
+ "RKB", # kilobytes read by process - requires I/O monitoring in kernel
+ "WKB",
+ "RKBC",
+ "WKBC",
+ "RSYS", # Number of read system calls
+ "WSYS", # Number of write system calls
+ "CNCL",
+ "MajF", # Number of major page faults
+ "MinF", # Number of minor page faults
+ "Command", # Command executed
+]
+
+# Types of statistics this module can summarize
+STATISTIC_TYPES = [ "max", "min", "sum", "count", "avg" ]
+
+COLUMN_INDICES = dict( [ ( col, i ) for i, col in enumerate( PROCESS_COLUMNS ) ] )
+PID_INDEX = COLUMN_INDICES[ "PID" ]
+PARENT_PID_INDEX = COLUMN_INDICES[ "PPID" ]
+
+DEFAULT_STATISTICS = [
+ ("max", "VmSize"),
+ ("avg", "VmSize"),
+ ("max", "VmRSS"),
+ ("avg", "VmRSS"),
+ ("sum", "SysT"),
+ ("sum", "UsrT"),
+ ("max", "PCT"),
+ ("avg", "PCT"),
+ ("max", "AccumT"),
+ ("sum", "RSYS"),
+ ("sum", "WSYS"),
+]
+
+
+def parse_process_statistics( statistics ):
+ """ Turn string or list of strings into list of tuples in format ( stat,
+ resource ) where stat is a value from STATISTIC_TYPES and resource is a
+ value from PROCESS_COLUMNS.
+ """
+ if statistics is None:
+ statistics = DEFAULT_STATISTICS
+
+ statistics = util.listify( statistics )
+ statistics = map( __tuplize_statistic, statistics )
+ # Check for validity...
+ for statistic in statistics:
+ if statistic[ 0 ] not in STATISTIC_TYPES:
+ raise Exception( "Unknown statistic type encountered %s" % statistic[ 0 ] )
+ if statistic[ 1 ] not in PROCESS_COLUMNS:
+ raise Exception( "Unknown process column encountered %s" % statistic[ 1 ] )
+ return statistics
+
+
+def generate_process_statistics( collectl_playback_cli, pid, statistics=DEFAULT_STATISTICS ):
+ """ Playback collectl file and generate summary statistics.
+ """
+ with tempfile.NamedTemporaryFile( ) as tmp_tsv:
+ collectl_playback_cli.run( stdout=tmp_tsv )
+ with open( tmp_tsv.name, "r" ) as tsv_file:
+ return __read_process_statistics( tsv_file, pid, statistics )
+
+
+def __read_process_statistics( tsv_file, pid, statistics ):
+ process_summarizer = CollectlProcessSummarizer( pid, statistics )
+ current_interval = None
+
+ for row in csv.reader( tsv_file, dialect="excel-tab" ):
+ if current_interval is None:
+ # First row, check that it contains the expected header.
+ for header, expected_header in zip( row, PROCESS_COLUMNS ):
+ if header.lower() != expected_header.lower():
+ raise Exception( "Unknown header value encountered while processing collectl playback - %s" % header )
+
+ current_interval = CollectlProcessInterval()
+ continue
+
+ if current_interval.row_is_in( row ):
+ current_interval.add_row( row )
+ else:
+ process_summarizer.handle_interval( current_interval )
+ # Start a new interval and keep the boundary row that triggered it.
+ current_interval = CollectlProcessInterval()
+ current_interval.add_row( row )
+
+ # Do we have unsummarized rows...
+ if current_interval and current_interval.rows:
+ process_summarizer.handle_interval( current_interval )
+
+ return process_summarizer.get_statistics()
+
+
+class CollectlProcessSummarizer( object ):
+
+ def __init__( self, pid, statistics ):
+ self.pid = pid
+ self.statistics = statistics
+ self.columns_of_interest = set( [ s[ 1 ] for s in statistics ] )
+ self.tree_statistics = collections.defaultdict( stats.StatisticsTracker )
+ self.process_accum_statistics = collections.defaultdict( stats.StatisticsTracker )
+ self.interval_count = 0
+
+ def handle_interval( self, interval ):
+ self.interval_count += 1
+ rows = self.__rows_for_process( interval.rows, self.pid )
+ for column_name in self.columns_of_interest:
+ column_index = COLUMN_INDICES[ column_name ]
+
+ if column_name == "AccumT":
+ # Should not sum this across pids each interval, sum max at end...
+ for r in rows:
+ pid_seconds = self.__time_to_seconds( r[ column_index ] )
+ self.process_accum_statistics[ r[ PID_INDEX ] ].track( pid_seconds )
+ else:
+ # All other statistics should be summed across the whole process
+ # tree at each interval I guess.
+ if column_name in [ "SysT", "UsrT", "PCT" ]:
+ to_num = float
+ else:
+ to_num = long
+
+ interval_stat = sum( to_num( r[ column_index ] ) for r in rows )
+ self.tree_statistics[ column_name ].track( interval_stat )
+
+ def get_statistics( self ):
+ if self.interval_count == 0:
+ return []
+
+ computed_statistics = []
+ for statistic in self.statistics:
+ statistic_type, column = statistic
+ if column == "AccumT":
+ # Only max makes sense for AccumT since it accumulates over time.
+ if statistic_type != "max":
+ log.warn( "Only statistic max makes sense for AccumT" )
+ continue
+
+ value = sum( [ v.max for v in self.process_accum_statistics.itervalues() ] )
+ else:
+ statistics_tracker = self.tree_statistics[ column ]
+ value = getattr( statistics_tracker, statistic_type )
+
+ computed_statistic = ( statistic, value )
+ computed_statistics.append( computed_statistic )
+
+ return computed_statistics
+
+ def __rows_for_process( self, rows, pid ):
+ process_rows = []
+ pids = self.__all_child_pids( rows, pid )
+ for row in rows:
+ if row[ PID_INDEX ] in pids:
+ process_rows.append( row )
+ return process_rows
+
+ def __all_child_pids( self, rows, pid ):
+ pids_in_process_tree = set( [ str( self.pid ) ] )
+ added = True
+ while added:
+ added = False
+ for row in rows:
+ pid = row[ PID_INDEX ]
+ parent_pid = row[ PARENT_PID_INDEX ]
+ if parent_pid in pids_in_process_tree and pid not in pids_in_process_tree:
+ pids_in_process_tree.add( pid )
+ added = True
+ return pids_in_process_tree
+
+ def __time_to_seconds( self, minutes_str ):
+ parts = minutes_str.split( ":" )
+ seconds = 0.0
+ for i, val in enumerate( parts ):
+ seconds += float(val) * ( 60 ** ( len( parts ) - ( i + 1 ) ) )
+ return seconds
+
+
+class CollectlProcessInterval( object ):
+ """ Represent all rows in a collectl playback file for a given time slice,
+ with the ability to filter out just the rows belonging to the process
+ tree of a given pid.
+ """
+
+ def __init__( self ):
+ self.rows = []
+
+ def row_is_in( self, row ):
+ if not self.rows: # No rows, this row defines interval.
+ return True
+ first_row = self.rows[ 0 ]
+ return first_row[ 0 ] == row[ 0 ] and first_row[ 1 ] == row[ 1 ]
+
+ def add_row( self, row ):
+ self.rows.append( row )
+
+
+def __tuplize_statistic( statistic ):
+ if not isinstance( statistic, tuple ):
+ statistic_split = statistic.split( "_", 1 )
+ statistic = ( statistic_split[ 0 ].lower(), statistic_split[ 1 ] )
+ return statistic
+
+
+__all__ = [ 'generate_process_statistics' ]
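The `AccumT` handling above relies on converting collectl's clock-style time strings into seconds before aggregating them per pid. A minimal standalone re-implementation of that conversion (mirroring `__time_to_seconds` in the diff; the function name here is unmangled for illustration):

```python
# Sketch of the AccumT parsing used by CollectlProcessSummarizer:
# collectl reports accumulated time as e.g. "MM:SS" or "HH:MM:SS",
# which is converted to plain seconds before tracking.
def time_to_seconds(value):
    parts = value.split(":")
    seconds = 0.0
    for i, part in enumerate(parts):
        # Each position to the left is a successively larger power of 60.
        seconds += float(part) * (60 ** (len(parts) - (i + 1)))
    return seconds
```

So `"02:30"` becomes 150.0 seconds and a bare `"45"` stays 45.0.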
diff -r 54d890bb498dc7bb90a1a61c5b4dc636daf8385a -r 8cd359ea420e3b0c3cbadcbc6fdf99ef614ad28a lib/galaxy/jobs/metrics/collectl/stats.py
--- /dev/null
+++ b/lib/galaxy/jobs/metrics/collectl/stats.py
@@ -0,0 +1,27 @@
+""" Primitive module for tracking running statistics without storing values in
+memory.
+"""
+
+
+class StatisticsTracker( object ):
+
+ def __init__( self ):
+ self.min = None
+ self.max = None
+ self.count = 0
+ self.sum = 0
+
+ def track( self, value ):
+ if self.min is None or value < self.min:
+ self.min = value
+ if self.max is None or value > self.max:
+ self.max = value
+ self.count += 1
+ self.sum += value
+
+ @property
+ def avg( self ):
+ if self.count > 0:
+ return self.sum / self.count
+ else:
+ return None
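Usage of the running-statistics idea in stats.py can be sketched as below; the class body is reproduced from the diff so the example is self-contained, with values folded in one at a time so nothing is kept in memory.

```python
# Self-contained copy of the StatisticsTracker sketched in stats.py,
# shown with a small usage example.
class StatisticsTracker(object):

    def __init__(self):
        self.min = None
        self.max = None
        self.count = 0
        self.sum = 0

    def track(self, value):
        # Fold a single observation into the running min/max/count/sum.
        if self.min is None or value < self.min:
            self.min = value
        if self.max is None or value > self.max:
            self.max = value
        self.count += 1
        self.sum += value

    @property
    def avg(self):
        # Note: with integer inputs this mirrors the original's
        # integer division under Python 2; track floats if fractional
        # averages matter.
        if self.count > 0:
            return self.sum / self.count
        return None
```

Tracking the values 4, 2, 6 yields min 2, max 6, sum 12, and average 4.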
diff -r 54d890bb498dc7bb90a1a61c5b4dc636daf8385a -r 8cd359ea420e3b0c3cbadcbc6fdf99ef614ad28a lib/galaxy/jobs/metrics/collectl/subsystems.py
--- /dev/null
+++ b/lib/galaxy/jobs/metrics/collectl/subsystems.py
@@ -0,0 +1,72 @@
+from abc import ABCMeta
+from abc import abstractmethod
+
+
+class CollectlSubsystem( object ):
+ """ Class providing an abstraction of collectl subsystems.
+ """
+ __metaclass__ = ABCMeta
+
+ @property
+ @abstractmethod
+ def command_line_arg( self ):
+ """ Return single letter command-line argument used by collectl CLI.
+ """
+
+ @property
+ @abstractmethod
+ def name( self ):
+ """ High-level name for subsystem as consumed by this module.
+ """
+
+
+class ProcessesSubsystem( CollectlSubsystem ):
+ command_line_arg = "Z"
+ name = "process"
+
+
+class CpuSubsystem( CollectlSubsystem ):
+ command_line_arg = "C"
+ name = "cpu"
+
+
+class DiskSubsystem( CollectlSubsystem ):
+ command_line_arg = "D"
+ name = "disk"
+
+
+class NetworkSubsystem( CollectlSubsystem ):
+ command_line_arg = "N"
+ name = "network"
+
+
+class EnvironmentSubsystem( CollectlSubsystem ):
+ command_line_arg = "E"
+ name = "environment"
+
+
+class MemorySubsystem( CollectlSubsystem ):
+ command_line_arg = "M"
+ name = "memory"
+
+
+SUBSYSTEMS = [
+ ProcessesSubsystem(),
+ CpuSubsystem(),
+ DiskSubsystem(),
+ NetworkSubsystem(),
+ EnvironmentSubsystem(),
+ MemorySubsystem(),
+]
+SUBSYSTEM_DICT = dict( [ (s.name, s) for s in SUBSYSTEMS ] )
+
+
+def get_subsystem( name ):
+ """
+
+ >>> get_subsystem( "process" ).command_line_arg == "Z"
+ True
+ """
+ return SUBSYSTEM_DICT[ name ]
+
+__all__ = [ 'get_subsystem' ]
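The subsystem classes above each pair a high-level name with collectl's single-letter `-s` flag. A standalone sketch of the same mapping, using a plain dict instead of the class hierarchy (the dict form here is an illustration, not the module's API):

```python
# Hypothetical flattened form of subsystems.py: subsystem names mapped
# to the single-letter flags collectl expects after -s.
SUBSYSTEM_ARGS = {
    "process": "Z",
    "cpu": "C",
    "disk": "D",
    "network": "N",
    "environment": "E",
    "memory": "M",
}


def subsystems_arg(names):
    # e.g. ["process", "cpu"] -> "-sZC"; an empty list yields no flag,
    # letting collectl fall back to its own defaults.
    if not names:
        return ""
    return "-s" + "".join(SUBSYSTEM_ARGS[name] for name in names)
```

This matches how `CollectlCli.__subsystems_arg` joins the `command_line_arg` of each enabled subsystem.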
diff -r 54d890bb498dc7bb90a1a61c5b4dc636daf8385a -r 8cd359ea420e3b0c3cbadcbc6fdf99ef614ad28a lib/galaxy/jobs/metrics/instrumenters/collectl.py
--- /dev/null
+++ b/lib/galaxy/jobs/metrics/instrumenters/collectl.py
@@ -0,0 +1,214 @@
+import os
+import shutil
+
+from ..instrumenters import InstrumentPlugin
+from ...metrics import formatting
+from ..collectl import subsystems
+from ..collectl import cli
+from ..collectl import processes
+
+from galaxy import util
+from galaxy.util import directory_hash
+
+import logging
+log = logging.getLogger( __name__ )
+
+DEFAULT_PROCFILT_ON = "username" # By default, only grab statistics for user
+ # processes (as identified by username).
+DEFAULT_SUBSYSTEMS = "process"
+DEFAULT_FLUSH_INTERVAL = "0" # Set to zero to flush every collection.
+
+FORMATTED_RESOURCE_TITLES = {
+ "PCT": "Percent CPU Usage",
+ "RSYS": "Disk Reads",
+ "WSYS": "Disk Writes",
+}
+
+EMPTY_COLLECTL_FILE_MESSAGE = "Skipping process summary due to empty file... job probably did not run long enough for collectl to gather data."
+
+
+class CollectlFormatter( formatting.JobMetricFormatter ):
+
+ def format( self, key, value ):
+ if key == "pid":
+ return ( "Process ID", int( value ) )
+ elif key == "raw_log_path":
+ return ( "Relative Path of Full Collectl Log", value )
+ elif key == "process_max_AccumT":
+ return ( "Job Runtime (System+User)", formatting.seconds_to_str( float( value ) ) )
+ else:
+ _, stat_type, resource_type = key.split( "_", 2 )
+ if resource_type.startswith( "Vm"):
+ value_str = "%s KB" % int( value )
+ elif resource_type in [ "RSYS", "WSYS" ] and stat_type in [ "count", "max", "sum" ]:
+ value_str = "%d (# system calls)" % int( value )
+ else:
+ value_str = str( value )
+ resource_title = FORMATTED_RESOURCE_TITLES.get( resource_type, resource_type )
+ return ( "%s (%s)" % ( resource_title, stat_type ), value_str )
+
+
+class CollectlPlugin( InstrumentPlugin ):
+ """ Run collectl along with job to capture system and/or process data
+ according to specified collectl subsystems.
+ """
+ plugin_type = "collectl"
+ formatter = CollectlFormatter()
+
+ def __init__( self, **kwargs ):
+ self.__configure_paths( kwargs )
+ self.__configure_subsystems( kwargs )
+ saved_logs_path = kwargs.get( "saved_logs_path", None )
+ if "app" in kwargs:
+ saved_logs_path = kwargs[ "app" ].config.resolve_path( saved_logs_path )
+ self.saved_logs_path = saved_logs_path
+ self.__configure_collectl_recorder_args( kwargs )
+ self.summarize_process_data = util.asbool( kwargs.get( "summarize_process_data", True ) )
+ self.log_collectl_program_output = util.asbool( kwargs.get( "log_collectl_program_output", False ) )
+ if self.summarize_process_data:
+ if subsystems.get_subsystem( "process" ) not in self.subsystems:
+ raise Exception( "Collectl plugin misconfigured - cannot summarize_process_data without process subsystem being enabled." )
+
+ process_statistics = kwargs.get( "process_statistics", None )
+ # None will let processes module use default set of statistics
+ # defined there.
+ self.process_statistics = processes.parse_process_statistics( process_statistics )
+
+ def pre_execute_instrument( self, job_directory ):
+ commands = []
+ # Capture PID of process so we can walk its process tree when
+ # building statistics for the whole job.
+ commands.append( '''echo "$$" > '%s' ''' % self.__pid_file( job_directory ) )
+ # Run collectl in record mode to capture process and system level
+ # statistics according to supplied subsystems.
+ commands.append( self.__collectl_record_command( job_directory ) )
+ return commands
+
+ def post_execute_instrument( self, job_directory ):
+ commands = []
+ # collectl dies when job script completes, perhaps capture pid of
+ # collectl above and check if it is still alive to allow tracking if
+ # collectl ran successfully through the whole job.
+ return commands
+
+ def job_properties( self, job_id, job_directory ):
+ pid = open( self.__pid_file( job_directory ), "r" ).read().strip()
+ contents = os.listdir( job_directory )
+ try:
+ rel_path = filter( self._is_instrumented_collectl_log, contents )[ 0 ]
+ path = os.path.join( job_directory, rel_path )
+ except IndexError:
+ message = "Failed to find collectl log in directory %s, files were %s" % ( job_directory, contents )
+ raise Exception( message )
+
+ properties = dict(
+ pid=int( pid ),
+ )
+
+ if self.saved_logs_path:
+ destination_rel_dir = os.path.join( *directory_hash.directory_hash_id( job_id ) )
+ destination_rel_path = os.path.join( destination_rel_dir, rel_path )
+ destination_path = os.path.join( self.saved_logs_path, destination_rel_path )
+ destination_dir = os.path.dirname( destination_path )
+ if not os.path.isdir( destination_dir ):
+ os.makedirs( destination_dir )
+ shutil.copyfile( path, destination_path )
+ properties[ "raw_log_path" ] = destination_rel_path
+
+ if self.summarize_process_data:
+ # Run collectl in playback and generate statistics of interest
+ summary_statistics = self.__summarize_process_data( pid, path )
+ for statistic, value in summary_statistics:
+ properties[ "process_%s" % "_".join( statistic ) ] = value
+
+ return properties
+
+ def __configure_paths( self, kwargs ):
+ # 95% of time I would expect collectl to just be installed with apt or
+ # yum, but if it is manually installed or not on the PATH, allow
+ # configuration of explicit path - and allow path to be different
+ # between galaxy job handler (local_collectl_path) and compute node
+ # (remote_collectl_path).
+ collectl_path = kwargs.get( "collectl_path", "collectl" )
+ self.remote_collectl_path = kwargs.get( "remote_collectl_path", collectl_path )
+ self.local_collectl_path = kwargs.get( "local_collectl_path", collectl_path )
+
+ def __configure_subsystems( self, kwargs ):
+ raw_subsystems_str = kwargs.get( "subsystems", DEFAULT_SUBSYSTEMS )
+ raw_subsystems = util.listify( raw_subsystems_str, do_strip=True )
+ self.subsystems = map( subsystems.get_subsystem, raw_subsystems )
+
+ def __configure_collectl_recorder_args( self, kwargs ):
+ collectl_recorder_args = kwargs.copy()
+
+ # Allow deployer to configure separate system and process intervals,
+ # but if they specify just one - use it for both. Thinking here is this
+ # plugin's most useful feature is the process level information so
+ # this is likely what the deployer is attempting to configure.
+ if "interval" in kwargs and "interval2" not in kwargs:
+ collectl_recorder_args[ "interval2" ] = kwargs[ "interval"]
+
+ if "flush" not in kwargs:
+ collectl_recorder_args[ "flush" ] = DEFAULT_FLUSH_INTERVAL
+
+ procfilt_on = kwargs.get( "procfilt_on", DEFAULT_PROCFILT_ON ).lower()
+ # Calculate explicit arguments, rest can just be passed through from
+ # constructor arguments.
+ explicit_args = dict(
+ collectl_path=self.remote_collectl_path,
+ procfilt=procfilt_argument( procfilt_on ),
+ subsystems=self.subsystems,
+ )
+ collectl_recorder_args.update( explicit_args )
+ self.collectl_recorder_args = collectl_recorder_args
+
+ def __summarize_process_data( self, pid, collectl_log_path ):
+ playback_cli_args = dict(
+ collectl_path=self.local_collectl_path,
+ playback_path=collectl_log_path,
+ sep="9"
+ )
+ if not os.stat( collectl_log_path ).st_size:
+ log.debug( EMPTY_COLLECTL_FILE_MESSAGE )
+ return [ ]
+
+ playback_cli = cli.CollectlCli( **playback_cli_args )
+ return processes.generate_process_statistics( playback_cli, pid, self.process_statistics )
+
+ def __collectl_recorder_cli( self, job_directory ):
+ cli_args = self.collectl_recorder_args.copy()
+ cli_args[ "destination_path" ] = self._instrument_file_path( job_directory, "log" )
+ return cli.CollectlCli( **cli_args )
+
+ def __collectl_record_command( self, job_directory ):
+ collectl_cli = self.__collectl_recorder_cli( job_directory )
+ if self.log_collectl_program_output:
+ redirect_to = self._instrument_file_path( job_directory, "program_output" )
+ else:
+ redirect_to = "/dev/null"
+ return "%s > %s 2>&1 &" % (
+ collectl_cli.build_command_line(),
+ redirect_to,
+ )
+
+ def __pid_file( self, job_directory ):
+ return self._instrument_file_path( job_directory, "pid" )
+
+ def _is_instrumented_collectl_log( self, filename ):
+ prefix = self._instrument_file_name( "log" )
+ return filename.startswith( prefix ) and filename.endswith( ".raw.gz" )
+
+
+def procfilt_argument( procfilt_on ):
+ if procfilt_on == "username":
+ return "U$USER"
+ elif procfilt_on == "uid":
+ return "u$UID"
+ else:
+ # Ensure it is empty or "none".
+ if procfilt_on and procfilt_on.lower() != "none":
+ raise Exception( "Invalid procfilt_on argument encountered")
+ return ""
+
+
+__all__ = [ 'CollectlPlugin' ]
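The `procfilt_argument` helper above selects collectl's `--procfilt` expression from the `procfilt_on` setting. A standalone sketch with the validation written so that empty, `None`, and `'none'` all disable filtering (the original's `or` in the final check appears to be a typo for `and`, which would make it raise on those values):

```python
# Sketch of procfilt selection: collectl's --procfilt can match
# processes by username (U$USER) or by uid (u$UID); anything falsy or
# the literal "none" disables process filtering entirely.
def procfilt_argument(procfilt_on):
    procfilt_on = (procfilt_on or "none").lower()
    if procfilt_on == "username":
        return "U$USER"
    elif procfilt_on == "uid":
        return "u$UID"
    elif procfilt_on == "none":
        return ""
    raise Exception("Invalid procfilt_on argument encountered")
```

The returned string is interpolated into the recorder command line, so `$USER`/`$UID` are expanded by the shell on the compute node, not by Python.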
https://bitbucket.org/galaxy/galaxy-central/commits/b24137a3a980/
Changeset: b24137a3a980
User: jmchilton
Date: 2014-04-23 04:06:03
Summary: Merge pull request #352.
Affected #: 27 files
diff -r bc3fd881b07f5904ccad6d2d187e7d8e3c506652 -r b24137a3a980cf913ba1b5ae023623fb416b54e6 job_conf.xml.sample_advanced
--- a/job_conf.xml.sample_advanced
+++ b/job_conf.xml.sample_advanced
@@ -62,6 +62,11 @@
<!-- Warning: Local slot count doesn't tie up additional worker threads, to prevent over
allocating machine define a second local runner with different name and fewer workers
to run this destination. -->
+ <job_metrics />
+ <!-- Above element demonstrates embedded job metrics definition - see
+ job_metrics_conf.xml.sample for full documentation on possible nested
+ elements. This empty block will simply disable job metrics for the
+ corresponding destination. --></destination><destination id="pbs" runner="pbs" tags="mycluster"/><destination id="pbs_longjobs" runner="pbs" tags="mycluster,longjobs">
diff -r bc3fd881b07f5904ccad6d2d187e7d8e3c506652 -r b24137a3a980cf913ba1b5ae023623fb416b54e6 job_metrics_conf.xml.sample
--- /dev/null
+++ b/job_metrics_conf.xml.sample
@@ -0,0 +1,124 @@
+<?xml version="1.0"?>
+<!-- If job_metrics_conf.xml exists, this file will define the default job
+ metric plugins used for all jobs. Individual job_conf.xml destinations can
+ disable metric collection by setting metrics="off" on that destination.
+ The metrics attribute on destination definition elements can also be
+ a path - in which case that XML metrics file will be loaded and used for
+ that destination. Finally, the destination element may contain a job_metrics
+ child element (with all options defined below) to define job metrics in an
+ embedded manner directly in the job_conf.xml file.
+-->
+<job_metrics>
+ <!-- Each element in this file corresponds to a job instrumentation plugin
+ used to generate metrics in lib/galaxy/jobs/metrics/instrumenters. -->
+
+ <!-- Core plugin captures Galaxy slots, start and end of job (in seconds
+ since epoch) and computes runtime in seconds. -->
+ <core />
+
+ <!-- Uncomment to dump processor count for each job - linux only. -->
+ <!-- <cpuinfo /> -->
+ <!-- Uncomment to dump information about all processors for each
+ job - this is likely too much data. Linux only. -->
+ <!-- <cpuinfo verbose="true" /> -->
+
+ <!-- Uncomment to dump system memory information for each job - linux
+ only. -->
+ <!-- <meminfo /> -->
+
+ <!-- Uncomment to record operating system each job is executed on - linux
+ only. -->
+ <!-- <uname /> -->
+
+ <!-- Uncomment following to enable plugin dumping complete environment
+ for each job, potentially useful for debugging -->
+ <!-- <env /> -->
+ <!-- env plugin can also record more targeted, obviously useful variables
+ as well. -->
+ <!-- <env variables="HOSTNAME,SLURM_CPUS_ON_NODE,SLURM_JOBID" /> -->
+
+ <!-- <collectl /> -->
+ <!-- Collectl (http://collectl.sourceforge.net/) is a powerful monitoring
+ utility capable of gathering numerous system and process level
+ statistics of running applications. The Galaxy collectl job metrics
+ plugin by default will grab a variety of process level metrics
+ aggregated across all processes corresponding to a job, this behavior
+ is highly customiziable - both using the attributes documented below
+ or simply hacking up the code in lib/galaxy/jobs/metrics.
+
+ Warning: In order to use this plugin collectl must be available on the
+ compute server the job runs on and on the local Galaxy server as well
+ (unless in this latter case summarize_process_data is set to False).
+
+ Attributes (the following describes attributes that can be used with
+ the collectl job metrics element above to modify its behavior).
+
+ 'summarize_process_data': Boolean indicating whether to run collectl
+ in playback mode after jobs complete and gather process level
+ statistics for the job run. These statistics can be customized
+ with the 'process_statistics' attribute. (defaults to True)
+
+ 'saved_logs_path': If set (it is off by default), all collectl logs
+ will be saved to the specified path after jobs complete. These
+ logs can later be replayed using collectl offline to generate
+ full time-series data corresponding to a job run.
+
+ 'subsystems': Comma separated list of collectl subsystems to collect
+ data for. Plugin doesn't currently expose all of them or offer
+ summary data for any of them except 'process' but extensions
+ would be welcome. May seem pointless to include subsystems
+ besides process since they won't be processed online by Galaxy -
+ but if 'saved_logs_path' is set these files can be played back at any time.
+
+ Available subsystems - 'process', 'cpu', 'memory', 'network',
+ 'disk', 'environment'. (Default 'process').
+
+ Warning: If you override this - be sure to include 'process'
+ unless 'summarize_process_data' is set to false.
+
+ 'process_statistics': If 'summarize_process_data' is enabled, this
+ attribute can be specified as a comma separated list to override
+ the statistics that are gathered. Each statistic is of the form
+ X_Y where X is one of 'min', 'max', 'count', 'avg', or 'sum' and
+ Y is a value from 'S', 'VmSize', 'VmLck', 'VmRSS', 'VmData', 'VmStk',
+ 'VmExe', 'VmLib', 'CPU', 'SysT', 'UsrT', 'PCT', 'AccumT', 'WKB',
+ 'RKBC', 'WKBC', 'RSYS', 'WSYS', 'CNCL', 'MajF', 'MinF'. Consult
+ lib/galaxy/jobs/metrics/collectl/processes.py for more details
+ on what each of these resource types means.
+
+ Defaults to 'max_VmSize,avg_VmSize,max_VmRSS,avg_VmRSS,sum_SysT,sum_UsrT,max_PCT,avg_PCT,max_AccumT,sum_RSYS,sum_WSYS'
+ as a variety of statistics roughly describing CPU and memory
+ usage of the program and VERY ROUGHLY describing I/O consumption.
+
+ 'procfilt_on': By default Galaxy will tell collectl to only collect
+ 'process' level data for the current user (as identified
+ by 'username') - this can be disabled by setting this
+ to 'none' - the plugin will still only aggregate process level
+ statistics for the job's process tree - but the additional
+ information can still be used offline with 'saved_logs_path'
+ if set. Obscurely, this can also be set to 'uid' to identify
+ the current user to filter on by UID instead of username -
+ this may be needed on some clusters(?).
+
+ 'interval': The time (in seconds) between data collection points.
+ Collectl uses a variety of different defaults for different
+ subsystems if this is not set, but process information (likely
+ the most pertinent for Galaxy jobs) will be collected every
+ 60 seconds.
+
+ 'flush': Interval (in seconds I think) between when collectl will
+ flush its buffer to disk. If not set, Galaxy overrides this
+ to 0 so collectl flushes after every collection.
+
+ 'local_collectl_path', 'remote_collectl_path', 'collectl_path':
+ By default, jobs will just assume collectl is on the PATH, but
+ it can be overridden with 'local_collectl_path' and
+ 'remote_collectl_path' (or simply 'collectl_path' if it is not
+ on the path but installed in the same location both locally and
+ remotely).
+
+ There are more, increasingly obscure, options including
+ log_collectl_program_output, interval2, and interval3. Consult
+ the source code for more details.
+ -->
+</job_metrics>
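The `process_statistics` format documented above (entries like `max_VmSize`) is parsed by processes.py by splitting on the first underscore. A standalone sketch of that rule (mirroring `__tuplize_statistic`; the unmangled name here is for illustration):

```python
# Sketch of turning a 'process_statistics' entry such as "max_VmSize"
# into a (statistic, resource) pair, as in processes.py's parser.
def tuplize_statistic(statistic):
    # Split on the first underscore only, so the resource name is kept
    # intact; the statistic type is normalized to lower case.
    stat_type, resource = statistic.split("_", 1)
    return (stat_type.lower(), resource)
```

`parse_process_statistics` then validates the pair against `STATISTIC_TYPES` and `PROCESS_COLUMNS`, raising on anything unrecognized.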
diff -r bc3fd881b07f5904ccad6d2d187e7d8e3c506652 -r b24137a3a980cf913ba1b5ae023623fb416b54e6 lib/galaxy/app.py
--- a/lib/galaxy/app.py
+++ b/lib/galaxy/app.py
@@ -15,7 +15,7 @@
from galaxy.sample_tracking import external_service_types
from galaxy.openid.providers import OpenIDProviders
from galaxy.tools.data_manager.manager import DataManagers
-
+from galaxy.jobs import metrics as job_metrics
from galaxy.web.base import pluginframework
import logging
@@ -60,6 +60,10 @@
self._configure_tool_data_tables( from_shed_config=False )
+ # Initialize job metrics manager, needs to be in place before
+ # config so per-destination modifications can be made.
+ self.job_metrics = job_metrics.JobMetrics( self.config.job_metrics_config_file, app=self )
+
# Initialize the job management configuration
self.job_config = jobs.JobConfiguration(self)
diff -r bc3fd881b07f5904ccad6d2d187e7d8e3c506652 -r b24137a3a980cf913ba1b5ae023623fb416b54e6 lib/galaxy/config.py
--- a/lib/galaxy/config.py
+++ b/lib/galaxy/config.py
@@ -137,6 +137,7 @@
self.template_path = resolve_path( kwargs.get( "template_path", "templates" ), self.root )
self.template_cache = resolve_path( kwargs.get( "template_cache_path", "database/compiled_templates" ), self.root )
self.dependency_resolvers_config_file = resolve_path( kwargs.get( 'dependency_resolvers_config_file', 'dependency_resolvers_conf.xml' ), self.root )
+ self.job_metrics_config_file = resolve_path( kwargs.get( 'job_metrics_config_file', 'job_metrics_conf.xml' ), self.root )
self.job_config_file = resolve_path( kwargs.get( 'job_config_file', 'job_conf.xml' ), self.root )
self.local_job_queue_workers = int( kwargs.get( "local_job_queue_workers", "5" ) )
self.cluster_job_queue_workers = int( kwargs.get( "cluster_job_queue_workers", "3" ) )
@@ -455,6 +456,11 @@
admin_users = [ x.strip() for x in self.get( "admin_users", "" ).split( "," ) ]
return ( user is not None and user.email in admin_users )
+ def resolve_path( self, path ):
+ """ Resolve a path relative to Galaxy's root.
+ """
+ return resolve_path( path, self.root )
+
def get_database_engine_options( kwargs, model_prefix='' ):
"""
Allow options for the SQLAlchemy database engine to be passed by using
diff -r bc3fd881b07f5904ccad6d2d187e7d8e3c506652 -r b24137a3a980cf913ba1b5ae023623fb416b54e6 lib/galaxy/jobs/__init__.py
--- a/lib/galaxy/jobs/__init__.py
+++ b/lib/galaxy/jobs/__init__.py
@@ -162,8 +162,21 @@
# Parse destinations
destinations = root.find('destinations')
+ job_metrics = self.app.job_metrics
for destination in self.__findall_with_required(destinations, 'destination', ('id', 'runner')):
id = destination.get('id')
+ destination_metrics = destination.get( "metrics", None )
+ if destination_metrics:
+ if not util.asbool( destination_metrics ):
+ # disable
+ job_metrics.set_destination_instrumenter( id, None )
+ else:
+ metrics_conf_path = self.app.config.resolve_path( destination_metrics )
+ job_metrics.set_destination_conf_file( id, metrics_conf_path )
+ else:
+ metrics_elements = self.__findall_with_required( destination, 'job_metrics', () )
+ if metrics_elements:
+ job_metrics.set_destination_conf_element( id, metrics_elements[ 0 ] )
job_destination = JobDestination(**dict(destination.items()))
job_destination['params'] = self.__get_params(destination)
self.destinations[id] = (job_destination,)
@@ -1068,8 +1081,10 @@
# Finally set the job state. This should only happen *after* all
# dataset creation, and will allow us to eliminate force_history_refresh.
job.state = final_job_state
+ if not job.tasks:
+ # If job was composed of tasks, don't attempt to recollect statistics
+ self._collect_metrics( job )
self.sa_session.flush()
-
log.debug( 'job %d ended' % self.job_id )
delete_files = self.app.config.cleanup_job == 'always' or ( job.state == job.states.OK and self.app.config.cleanup_job == 'onsuccess' )
self.cleanup( delete_files=delete_files )
@@ -1094,6 +1109,16 @@
except:
log.exception( "Unable to cleanup job %d" % self.job_id )
+ def _collect_metrics( self, has_metrics ):
+ job = has_metrics.get_job()
+ per_plugin_properties = self.app.job_metrics.collect_properties( job.destination_id, self.job_id, self.working_directory )
+ if per_plugin_properties:
+ log.info( "Collecting job metrics for %s" % has_metrics )
+ for plugin, properties in per_plugin_properties.iteritems():
+ for metric_name, metric_value in properties.iteritems():
+ if metric_value is not None:
+ has_metrics.add_metric( plugin, metric_name, metric_value )
+
def get_output_sizes( self ):
sizes = []
output_paths = self.get_output_fnames()
@@ -1508,6 +1533,7 @@
task.stdout = util.shrink_string_by_size( stdout, DATABASE_MAX_STRING_SIZE, join_by="\n..\n", left_larger=True, beginning_on_size_error=True )
if len( stderr ) > DATABASE_MAX_STRING_SIZE:
log.error( "stderr for task %d is greater than %s, only a portion will be logged to database" % ( task.id, DATABASE_MAX_STRING_SIZE_PRETTY ) )
+ self._collect_metrics( task )
task.stderr = util.shrink_string_by_size( stderr, DATABASE_MAX_STRING_SIZE, join_by="\n..\n", left_larger=True, beginning_on_size_error=True )
task.exit_code = tool_exit_code
task.command_line = self.command_line
diff -r bc3fd881b07f5904ccad6d2d187e7d8e3c506652 -r b24137a3a980cf913ba1b5ae023623fb416b54e6 lib/galaxy/jobs/metrics/__init__.py
--- /dev/null
+++ b/lib/galaxy/jobs/metrics/__init__.py
@@ -0,0 +1,134 @@
+import collections
+import os
+
+from xml.etree import ElementTree
+
+from galaxy.util.submodules import submodules
+from galaxy import util
+
+from ..metrics import formatting
+
+import logging
+log = logging.getLogger( __name__ )
+
+
+DEFAULT_FORMATTER = formatting.JobMetricFormatter()
+
+
+class JobMetrics( object ):
+
+ def __init__( self, conf_file=None, **kwargs ):
+        """ Load plugin classes and a default job instrumenter from conf_file. """
+ self.plugin_classes = self.__plugins_dict()
+ self.default_job_instrumenter = JobInstrumenter.from_file( self.plugin_classes, conf_file, **kwargs )
+ self.job_instrumenters = collections.defaultdict( lambda: self.default_job_instrumenter )
+
+ def format( self, plugin, key, value ):
+ if plugin in self.plugin_classes:
+ plugin_class = self.plugin_classes[ plugin ]
+ formatter = plugin_class.formatter
+ else:
+ formatter = DEFAULT_FORMATTER
+ return formatter.format( key, value )
+
+ def set_destination_conf_file( self, destination_id, conf_file ):
+ instrumenter = JobInstrumenter.from_file( self.plugin_classes, conf_file )
+ self.set_destination_instrumenter( destination_id, instrumenter )
+
+ def set_destination_conf_element( self, destination_id, element ):
+ instrumenter = JobInstrumenter( self.plugin_classes, element )
+ self.set_destination_instrumenter( destination_id, instrumenter )
+
+ def set_destination_instrumenter( self, destination_id, job_instrumenter=None ):
+ if job_instrumenter is None:
+ job_instrumenter = NULL_JOB_INSTRUMENTER
+ self.job_instrumenters[ destination_id ] = job_instrumenter
+
+ def collect_properties( self, destination_id, job_id, job_directory ):
+ return self.job_instrumenters[ destination_id ].collect_properties( job_id, job_directory )
+
+ def __plugins_dict( self ):
+ plugin_dict = {}
+ for plugin_module in self.__plugin_modules():
+ for clazz in plugin_module.__all__:
+ plugin_type = getattr( clazz, 'plugin_type', None )
+ if plugin_type:
+ plugin_dict[ plugin_type ] = clazz
+ return plugin_dict
+
+ def __plugin_modules( self ):
+ import galaxy.jobs.metrics.instrumenters
+ return submodules( galaxy.jobs.metrics.instrumenters )
+
+
+class NullJobInstrumenter( object ):
+
+ def pre_execute_commands( self, job_directory ):
+ return None
+
+ def post_execute_commands( self, job_directory ):
+ return None
+
+ def collect_properties( self, job_id, job_directory ):
+ return {}
+
+NULL_JOB_INSTRUMENTER = NullJobInstrumenter()
+
+
+class JobInstrumenter( object ):
+
+ def __init__( self, plugin_classes, metrics_element, **kwargs ):
+ self.extra_kwargs = kwargs
+ self.plugin_classes = plugin_classes
+ self.plugins = self.__plugins_for_element( metrics_element )
+
+ def pre_execute_commands( self, job_directory ):
+ commands = []
+ for plugin in self.plugins:
+ try:
+ plugin_commands = plugin.pre_execute_instrument( job_directory )
+ if plugin_commands:
+ commands.extend( util.listify( plugin_commands ) )
+ except Exception:
+ log.exception( "Failed to generate pre-execute commands for plugin %s" % plugin )
+ return "\n".join( [ c for c in commands if c ] )
+
+ def post_execute_commands( self, job_directory ):
+ commands = []
+ for plugin in self.plugins:
+ try:
+ plugin_commands = plugin.post_execute_instrument( job_directory )
+ if plugin_commands:
+ commands.extend( util.listify( plugin_commands ) )
+ except Exception:
+ log.exception( "Failed to generate post-execute commands for plugin %s" % plugin )
+ return "\n".join( [ c for c in commands if c ] )
+
+ def collect_properties( self, job_id, job_directory ):
+        per_plugin_properties = {}
+        for plugin in self.plugins:
+            try:
+                properties = plugin.job_properties( job_id, job_directory )
+                if properties:
+                    per_plugin_properties[ plugin.plugin_type ] = properties
+            except Exception:
+                log.exception( "Failed to collect job properties for plugin %s" % plugin )
+        return per_plugin_properties
+
+ def __plugins_for_element( self, plugins_element ):
+ plugins = []
+ for plugin_element in plugins_element.getchildren():
+ plugin_type = plugin_element.tag
+ plugin_kwds = dict( plugin_element.items() )
+ plugin_kwds.update( self.extra_kwargs )
+ plugin = self.plugin_classes[ plugin_type ]( **plugin_kwds )
+ plugins.append( plugin )
+ return plugins
+
+ @staticmethod
+ def from_file( plugin_classes, conf_file, **kwargs ):
+ if not conf_file or not os.path.exists( conf_file ):
+ return NULL_JOB_INSTRUMENTER
+ plugins_element = ElementTree.parse( conf_file ).getroot()
+ return JobInstrumenter( plugin_classes, plugins_element, **kwargs )
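As a reading aid (not part of the changeset), here is a sketch of how `JobInstrumenter` consumes a metrics configuration element. The XML is hypothetical; the child tags stand in for `plugin_type` values of modules under `galaxy.jobs.metrics.instrumenters` (e.g. `core`, `collectl`):

```python
# Sketch of JobInstrumenter.__plugins_for_element: each child tag names a
# plugin class; its attributes become constructor keyword arguments.
from xml.etree import ElementTree

conf = """
<job_metrics>
    <core />
    <collectl interval="10" flush="0" />
</job_metrics>
"""

plugins_element = ElementTree.fromstring(conf)
parsed = [(el.tag, dict(el.items())) for el in plugins_element]
# parsed -> [("core", {}), ("collectl", {"interval": "10", "flush": "0"})]
```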
diff -r bc3fd881b07f5904ccad6d2d187e7d8e3c506652 -r b24137a3a980cf913ba1b5ae023623fb416b54e6 lib/galaxy/jobs/metrics/collectl/__init__.py
--- /dev/null
+++ b/lib/galaxy/jobs/metrics/collectl/__init__.py
@@ -0,0 +1,5 @@
+""" This module contains helper functions and data structures for interacting
+with collectl and collectl generated data. More information on collectl can be
+found at: http://collectl.sourceforge.net/.
+
+"""
diff -r bc3fd881b07f5904ccad6d2d187e7d8e3c506652 -r b24137a3a980cf913ba1b5ae023623fb416b54e6 lib/galaxy/jobs/metrics/collectl/cli.py
--- /dev/null
+++ b/lib/galaxy/jobs/metrics/collectl/cli.py
@@ -0,0 +1,128 @@
+from string import Template
+import subprocess
+
+import logging
+log = logging.getLogger( __name__ )
+
+COMMAND_LINE_TEMPLATE = Template(
+ "$collectl_path $destination_arg $mode_arg $subsystems_arg $interval_arg $procfilt_arg $flush_arg $sep_arg"
+)
+MODE_RECORD = "record"
+MODE_PLAYBACK = "playback"
+
+
+class CollectlCli( object ):
+ """ Abstraction over (some of) the command-line arguments of collectl.
+ Ideally this will be useful for building up command line arguments for
+    remote execution as well as running directly on the local host.
+
+    This is meant to be a fairly generic utility - for interfacing with the
+    collectl CLI - logic more directly related to the Galaxy job metric
+    plugin should be placed in other modules.
+
+ Keyword Arguments:
+ collectl_path: Path to collectl executable (defaults to collectl - i.e.
+ search the PATH).
+
+ playback_path (defaults to None): If this is None collectl will run in
+ record mode, else it will playback specified file.
+
+ Playback Mode Options:
+
+ sep : Separator used in playback mode (set to 9 to produce tsv)
+ (defaults to None).
+
+ Record Mode Options (some of these may work in playback mode also):
+
+    destination_path: Prefix for the output files to write (defaults to None,
+        in which case collectl just uses the cwd). Collectl appends the
+        hostname and datetime to this prefix.
+ interval: Setup polling interval (secs) for most subsystems (defaults
+ to None and when unspecified collectl will use default of 1 second).
+    interval2: Setup polling interval (secs) for process information
+        (defaults to None and when unspecified collectl will use its
+        default of 60 seconds).
+    interval3: Setup polling interval (secs) for environment information
+        (defaults to None and when unspecified collectl will use its
+        default of 300 seconds).
+ procfilt: Optional argument to procfilt. (defaults to None).
+ flush : Optional flush interval (defaults to None).
+ """
+
+ def __init__( self, **kwargs ):
+ command_args = {}
+ command_args[ "collectl_path" ] = kwargs.get( "collectl_path", "collectl" )
+ playback_path = kwargs.get( "playback_path", None )
+ self.mode = MODE_RECORD if not playback_path else MODE_PLAYBACK
+ if self.mode == MODE_RECORD:
+ mode_arg = ""
+ elif self.mode == MODE_PLAYBACK:
+ mode_arg = "-P -p '%s'" % playback_path
+ else:
+ raise Exception( "Invalid mode supplied to CollectlCli - %s" % self.mode )
+ command_args[ "mode_arg" ] = mode_arg
+ command_args[ "interval_arg" ] = self.__interval_arg( kwargs )
+ destination = kwargs.get( "destination_path", None )
+ if destination:
+ destination_arg = "-f '%s'" % destination
+ else:
+ destination_arg = ""
+ command_args[ "destination_arg" ] = destination_arg
+ procfilt = kwargs.get( "procfilt", None )
+ command_args[ "procfilt_arg" ] = "" if not procfilt else "--procfilt %s" % procfilt
+ command_args[ "subsystems_arg" ] = self.__subsystems_arg( kwargs.get( "subsystems", [] ) )
+ flush = kwargs.get( "flush", None )
+ command_args[ "flush_arg"] = "--flush %s" % flush if flush else ""
+ sep = kwargs.get( "sep", None )
+ command_args[ "sep_arg" ] = "--sep=%s" % sep if sep else ""
+
+ self.command_args = command_args
+
+ def __subsystems_arg( self, subsystems ):
+ if subsystems:
+ return "-s%s" % "".join( [ s.command_line_arg for s in subsystems ] )
+ else:
+ return ""
+
+ def __interval_arg( self, kwargs ):
+ if self.mode != MODE_RECORD:
+ return ""
+
+ interval = kwargs.get( "interval", None )
+ if not interval:
+ return ""
+
+ self.__validate_interval_arg( interval )
+ interval_arg = "-i %s" % interval
+ interval2 = kwargs.get( "interval2", None )
+ if not interval2:
+ return interval_arg
+ self.__validate_interval_arg( interval2, multiple_of=int( interval ) )
+ interval_arg = "%s:%s" % ( interval_arg, interval2 )
+
+ interval3 = kwargs.get( "interval3", None )
+ if not interval3:
+ return interval_arg
+ self.__validate_interval_arg( interval3, multiple_of=int( interval ) )
+ interval_arg = "%s:%s" % ( interval_arg, interval3 )
+ return interval_arg
+
+ def __validate_interval_arg( self, value, multiple_of=None ):
+ if value and not str(value).isdigit():
+            raise Exception( "Invalid interval argument supplied, must be an integer: %s" % value )
+ if multiple_of:
+ if int( value ) % multiple_of != 0:
+                raise Exception( "Invalid interval argument supplied, must be a multiple of %s" % multiple_of )
+
+ def build_command_line( self ):
+ return COMMAND_LINE_TEMPLATE.substitute( **self.command_args )
+
+ def run( self, stdout=subprocess.PIPE, stderr=subprocess.PIPE ):
+ command_line = self.build_command_line()
+ log.info( "Executing %s" % command_line )
+ proc = subprocess.Popen( command_line, shell=True, stdout=stdout, stderr=stderr )
+ return_code = proc.wait()
+ if return_code:
+ raise Exception( "Problem running collectl command." )
+
+__all__ = [ CollectlCli ]
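The colon-separated interval flag assembled by `__interval_arg` can be sketched in isolation (a simplified standalone version that omits the multiple-of validation; not part of the changeset):

```python
# collectl accepts "-i interval[:interval2[:interval3]]"; later values are
# dropped when an earlier one is missing, mirroring __interval_arg above.
def interval_flag(interval=None, interval2=None, interval3=None):
    if not interval:
        return ""
    arg = "-i %s" % interval
    for sub_interval in (interval2, interval3):
        if not sub_interval:
            return arg
        arg = "%s:%s" % (arg, sub_interval)
    return arg
```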
diff -r bc3fd881b07f5904ccad6d2d187e7d8e3c506652 -r b24137a3a980cf913ba1b5ae023623fb416b54e6 lib/galaxy/jobs/metrics/collectl/processes.py
--- /dev/null
+++ b/lib/galaxy/jobs/metrics/collectl/processes.py
@@ -0,0 +1,252 @@
+""" Modules will run collectl in playback mode and collect various process
+statistics for a given pid's process and process ancestors.
+"""
+import collections
+import csv
+import tempfile
+
+from galaxy import util
+
+from ..collectl import stats
+
+import logging
+log = logging.getLogger( __name__ )
+
+# Collectl process information cheat sheet:
+#
+# Record process information for current user.
+# % collectl -sZ -f./__instrument_collectl -i 10:10 --procfilt U$USER
+#
+# TSV Replay of processing information in plottable mode...
+#
+# % collectl -sZ -P --sep=9 -p __instrument_collectl-jlaptop13-20140322-120919.raw.gz
+#
+# Has following columns:
+# Date Time PID User PR PPID THRD S VmSize VmLck VmRSS VmData VmStk VmExe VmLib CPU SysT UsrT PCT AccumT RKB WKB RKBC WKBC RSYS WSYS CNCL MajF MinF Command
+#
+
+# Process data dumped one row per process per interval.
+#http://collectl.sourceforge.net/Data-detail.html
+PROCESS_COLUMNS = [
+ "#Date", # Date of interval - e.g. 20140322
+ "Time", # Time of interval - 12:18:58
+ "PID", # Process pid.
+ "User", # Process user.
+ "PR", # Priority of process.
+ "PPID", # Parent PID of process.
+ "THRD", # Thread???
+ "S", # Process state - S - Sleeping, D - Uninterruptable Sleep, R - Running, Z - Zombie or T - Stopped/Traced
+ ## Memory options - http://ewx.livejournal.com/579283.html
+ "VmSize",
+ "VmLck",
+ "VmRSS",
+ "VmData",
+ "VmStk",
+ "VmExe",
+ "VmLib",
+ "CPU", # CPU number of process
+ "SysT", # Amount of system time consumed during interval
+ "UsrT", # Amount user time consumed during interval
+ "PCT", # Percentage of current interval consumed by task
+ "AccumT", # Total accumulated System and User time since the process began execution
+ # kilobytes read/written - requires I/O level monitoring to be enabled in kernel.
+ "RKB", # kilobytes read by process - requires I/O monitoring in kernel
+ "WKB",
+ "RKBC",
+ "WKBC",
+ "RSYS", # Number of read system calls
+ "WSYS", # Number of write system calls
+ "CNCL",
+ "MajF", # Number of major page faults
+ "MinF", # Number of minor page faults
+ "Command", # Command executed
+]
+
+# Types of statistics this module can summarize
+STATISTIC_TYPES = [ "max", "min", "sum", "count", "avg" ]
+
+COLUMN_INDICES = dict( [ ( col, i ) for i, col in enumerate( PROCESS_COLUMNS ) ] )
+PID_INDEX = COLUMN_INDICES[ "PID" ]
+PARENT_PID_INDEX = COLUMN_INDICES[ "PPID" ]
+
+DEFAULT_STATISTICS = [
+ ("max", "VmSize"),
+ ("avg", "VmSize"),
+ ("max", "VmRSS"),
+ ("avg", "VmRSS"),
+ ("sum", "SysT"),
+ ("sum", "UsrT"),
+ ("max", "PCT"),
+ ("avg", "PCT"),
+ ("max", "AccumT"),
+ ("sum", "RSYS"),
+ ("sum", "WSYS"),
+]
+
+
+def parse_process_statistics( statistics ):
+ """ Turn string or list of strings into list of tuples in format ( stat,
+ resource ) where stat is a value from STATISTIC_TYPES and resource is a
+ value from PROCESS_COLUMNS.
+ """
+ if statistics is None:
+ statistics = DEFAULT_STATISTICS
+
+ statistics = util.listify( statistics )
+ statistics = map( __tuplize_statistic, statistics )
+ # Check for validity...
+ for statistic in statistics:
+ if statistic[ 0 ] not in STATISTIC_TYPES:
+ raise Exception( "Unknown statistic type encountered %s" % statistic[ 0 ] )
+ if statistic[ 1 ] not in PROCESS_COLUMNS:
+ raise Exception( "Unknown process column encountered %s" % statistic[ 1 ] )
+ return statistics
+
+
+def generate_process_statistics( collectl_playback_cli, pid, statistics=DEFAULT_STATISTICS ):
+ """ Playback collectl file and generate summary statistics.
+ """
+ with tempfile.NamedTemporaryFile( ) as tmp_tsv:
+ collectl_playback_cli.run( stdout=tmp_tsv )
+ with open( tmp_tsv.name, "r" ) as tsv_file:
+ return __read_process_statistics( tsv_file, pid, statistics )
+
+
+def __read_process_statistics( tsv_file, pid, statistics ):
+ process_summarizer = CollectlProcessSummarizer( pid, statistics )
+ current_interval = None
+
+ for row in csv.reader( tsv_file, dialect="excel-tab" ):
+ if current_interval is None:
+ for header, expected_header in zip( row, PROCESS_COLUMNS ):
+ if header.lower() != expected_header.lower():
+ raise Exception( "Unknown header value encountered while processing collectl playback - %s" % header )
+
+            # Header row validated; start tracking the first interval.
+ current_interval = CollectlProcessInterval()
+ continue
+
+ if current_interval.row_is_in( row ):
+ current_interval.add_row( row )
+ else:
+ process_summarizer.handle_interval( current_interval )
+ current_interval = CollectlProcessInterval()
+
+ # Do we have unsummarized rows...
+ if current_interval and current_interval.rows:
+ process_summarizer.handle_interval( current_interval )
+
+ return process_summarizer.get_statistics()
+
+
+class CollectlProcessSummarizer( object ):
+
+ def __init__( self, pid, statistics ):
+ self.pid = pid
+ self.statistics = statistics
+ self.columns_of_interest = set( [ s[ 1 ] for s in statistics ] )
+ self.tree_statistics = collections.defaultdict( stats.StatisticsTracker )
+ self.process_accum_statistics = collections.defaultdict( stats.StatisticsTracker )
+ self.interval_count = 0
+
+ def handle_interval( self, interval ):
+ self.interval_count += 1
+ rows = self.__rows_for_process( interval.rows, self.pid )
+ for column_name in self.columns_of_interest:
+ column_index = COLUMN_INDICES[ column_name ]
+
+ if column_name == "AccumT":
+ # Should not sum this across pids each interval, sum max at end...
+ for r in rows:
+ pid_seconds = self.__time_to_seconds( r[ column_index ] )
+ self.process_accum_statistics[ r[ PID_INDEX ] ].track( pid_seconds )
+ else:
+                # All other statistics should be summed across whole process tree
+ # at each interval I guess.
+ if column_name in [ "SysT", "UsrT", "PCT" ]:
+ to_num = float
+ else:
+ to_num = long
+
+ interval_stat = sum( to_num( r[ column_index ] ) for r in rows )
+ self.tree_statistics[ column_name ].track( interval_stat )
+
+ def get_statistics( self ):
+ if self.interval_count == 0:
+ return []
+
+ computed_statistics = []
+ for statistic in self.statistics:
+ statistic_type, column = statistic
+ if column == "AccumT":
+                # Only statistic that makes sense for AccumT is max
+ if statistic_type != "max":
+ log.warn( "Only statistic max makes sense for AccumT" )
+ continue
+
+ value = sum( [ v.max for v in self.process_accum_statistics.itervalues() ] )
+ else:
+ statistics_tracker = self.tree_statistics[ column ]
+ value = getattr( statistics_tracker, statistic_type )
+
+ computed_statistic = ( statistic, value )
+ computed_statistics.append( computed_statistic )
+
+ return computed_statistics
+
+ def __rows_for_process( self, rows, pid ):
+ process_rows = []
+ pids = self.__all_child_pids( rows, pid )
+ for row in rows:
+ if row[ PID_INDEX ] in pids:
+ process_rows.append( row )
+ return process_rows
+
+ def __all_child_pids( self, rows, pid ):
+ pids_in_process_tree = set( [ str( self.pid ) ] )
+ added = True
+ while added:
+ added = False
+ for row in rows:
+ pid = row[ PID_INDEX ]
+ parent_pid = row[ PARENT_PID_INDEX ]
+ if parent_pid in pids_in_process_tree and pid not in pids_in_process_tree:
+ pids_in_process_tree.add( pid )
+ added = True
+ return pids_in_process_tree
+
+ def __time_to_seconds( self, minutes_str ):
+ parts = minutes_str.split( ":" )
+ seconds = 0.0
+ for i, val in enumerate( parts ):
+ seconds += float(val) * ( 60 ** ( len( parts ) - ( i + 1 ) ) )
+ return seconds
+
+
+class CollectlProcessInterval( object ):
+ """ Represent all rows in collectl playback file for given time slice with
+ ability to filter out just rows corresponding to the process tree
+ corresponding to a given pid.
+ """
+
+ def __init__( self ):
+ self.rows = []
+
+ def row_is_in( self, row ):
+ if not self.rows: # No rows, this row defines interval.
+ return True
+ first_row = self.rows[ 0 ]
+ return first_row[ 0 ] == row[ 0 ] and first_row[ 1 ] == row[ 1 ]
+
+ def add_row( self, row ):
+ self.rows.append( row )
+
+
+def __tuplize_statistic( statistic ):
+ if not isinstance( statistic, tuple ):
+ statistic_split = statistic.split( "_", 1 )
+ statistic = ( statistic_split[ 0 ].lower(), statistic_split[ 1 ] )
+ return statistic
+
+
+__all__ = [ generate_process_statistics ]
diff -r bc3fd881b07f5904ccad6d2d187e7d8e3c506652 -r b24137a3a980cf913ba1b5ae023623fb416b54e6 lib/galaxy/jobs/metrics/collectl/stats.py
--- /dev/null
+++ b/lib/galaxy/jobs/metrics/collectl/stats.py
@@ -0,0 +1,27 @@
+""" Primitive module for tracking running statistics without storing values in
+memory.
+"""
+
+
+class StatisticsTracker( object ):
+
+ def __init__( self ):
+ self.min = None
+ self.max = None
+ self.count = 0
+ self.sum = 0
+
+ def track( self, value ):
+ if self.min is None or value < self.min:
+ self.min = value
+ if self.max is None or value > self.max:
+ self.max = value
+ self.count += 1
+ self.sum += value
+
+ @property
+ def avg( self ):
+ if self.count > 0:
+ return self.sum / self.count
+ else:
+ return None
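Usage of the tracker is straightforward; this standalone sketch (an inline copy of `StatisticsTracker`, not part of the changeset) illustrates the running min/max/sum/count bookkeeping:

```python
# Inline copy of StatisticsTracker, for illustration only.
class StatisticsTracker(object):
    def __init__(self):
        self.min = None
        self.max = None
        self.count = 0
        self.sum = 0

    def track(self, value):
        # Update running extrema and totals without storing values.
        if self.min is None or value < self.min:
            self.min = value
        if self.max is None or value > self.max:
            self.max = value
        self.count += 1
        self.sum += value

    @property
    def avg(self):
        return self.sum / self.count if self.count else None

tracker = StatisticsTracker()
for v in (2, 8, 5):
    tracker.track(v)
# tracker.min == 2, tracker.max == 8, tracker.sum == 15
```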
diff -r bc3fd881b07f5904ccad6d2d187e7d8e3c506652 -r b24137a3a980cf913ba1b5ae023623fb416b54e6 lib/galaxy/jobs/metrics/collectl/subsystems.py
--- /dev/null
+++ b/lib/galaxy/jobs/metrics/collectl/subsystems.py
@@ -0,0 +1,72 @@
+from abc import ABCMeta
+from abc import abstractmethod
+
+
+class CollectlSubsystem( object ):
+ """ Class providing an abstraction of collectl subsytems.
+ """
+ __metaclass__ = ABCMeta
+
+ @property
+ @abstractmethod
+ def command_line_arg( self ):
+ """ Return single letter command-line argument used by collectl CLI.
+ """
+
+ @property
+ @abstractmethod
+    def name( self ):
+ """ High-level name for subsystem as consumed by this module.
+ """
+
+
+class ProcessesSubsystem( CollectlSubsystem ):
+ command_line_arg = "Z"
+ name = "process"
+
+
+class CpuSubsystem( CollectlSubsystem ):
+ command_line_arg = "C"
+ name = "cpu"
+
+
+class DiskSubsystem( CollectlSubsystem ):
+ command_line_arg = "D"
+ name = "disk"
+
+
+class NetworkSubsystem( CollectlSubsystem ):
+ command_line_arg = "N"
+ name = "network"
+
+
+class EnvironmentSubsystem( CollectlSubsystem ):
+ command_line_arg = "E"
+ name = "environment"
+
+
+class MemorySubsystem( CollectlSubsystem ):
+ command_line_arg = "M"
+ name = "memory"
+
+
+SUBSYSTEMS = [
+ ProcessesSubsystem(),
+ CpuSubsystem(),
+ DiskSubsystem(),
+ NetworkSubsystem(),
+ EnvironmentSubsystem(),
+ MemorySubsystem(),
+]
+SUBSYSTEM_DICT = dict( [ (s.name, s) for s in SUBSYSTEMS ] )
+
+
+def get_subsystem( name ):
+ """
+
+ >>> get_subsystem( "process" ).command_line_arg == "Z"
+ True
+ """
+ return SUBSYSTEM_DICT[ name ]
+
+__all__ = [ get_subsystem ]
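The `-s` flag built by `CollectlCli.__subsystems_arg` simply concatenates one letter per enabled subsystem; a standalone sketch (the letter mapping is copied from the classes above):

```python
# One command-line letter per collectl subsystem, as defined above.
SUBSYSTEM_LETTERS = {
    "process": "Z", "cpu": "C", "disk": "D",
    "network": "N", "environment": "E", "memory": "M",
}

def subsystems_arg(names):
    # Mirrors CollectlCli.__subsystems_arg: empty string when nothing is
    # enabled, otherwise "-s" plus the concatenated subsystem letters.
    if not names:
        return ""
    return "-s" + "".join(SUBSYSTEM_LETTERS[name] for name in names)
```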
diff -r bc3fd881b07f5904ccad6d2d187e7d8e3c506652 -r b24137a3a980cf913ba1b5ae023623fb416b54e6 lib/galaxy/jobs/metrics/formatting.py
--- /dev/null
+++ b/lib/galaxy/jobs/metrics/formatting.py
@@ -0,0 +1,18 @@
+
+
+class JobMetricFormatter( object ):
+ """ Format job metric key-value pairs for human consumption in Web UI. """
+
+ def format( self, key, value ):
+ return ( str( key ), str( value ) )
+
+
+## Formatting utilities
+
+def seconds_to_str( value ):
+ if value < 60:
+ return "%s seconds" % value
+ elif value < 3600:
+ return "%s minutes" % ( value / 60 )
+ else:
+        return "%s hours and %s minutes" % ( value / 3600, ( value % 3600 ) / 60 )
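A standalone sketch of the duration formatter's branching (using floor division for the integer math; note that `value / 3600` yields hours):

```python
# Illustrative copy of the seconds_to_str branching above.
def seconds_to_str(value):
    if value < 60:
        return "%s seconds" % value
    elif value < 3600:
        return "%s minutes" % (value // 60)
    else:
        return "%s hours and %s minutes" % (value // 3600, (value % 3600) // 60)
```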
diff -r bc3fd881b07f5904ccad6d2d187e7d8e3c506652 -r b24137a3a980cf913ba1b5ae023623fb416b54e6 lib/galaxy/jobs/metrics/instrumenters/__init__.py
--- /dev/null
+++ b/lib/galaxy/jobs/metrics/instrumenters/__init__.py
@@ -0,0 +1,53 @@
+from abc import ABCMeta
+from abc import abstractmethod
+
+import os.path
+
+from ...metrics import formatting
+
+
+INSTRUMENT_FILE_PREFIX = "__instrument"
+
+
+class InstrumentPlugin( object ):
+ """ A plugin describing how to instrument Galaxy jobs and retrieve metrics
+ from this instrumentation.
+ """
+ __metaclass__ = ABCMeta
+ formatter = formatting.JobMetricFormatter()
+
+ @property
+ @abstractmethod
+ def plugin_type( self ):
+        """ Short string labelling this plugin. """
+
+ def pre_execute_instrument( self, job_directory ):
+ """ Optionally return one or more commands to instrument job. These
+ commands will be executed on the compute server prior to the job
+ running.
+ """
+ return None
+
+ def post_execute_instrument( self, job_directory ):
+ """ Optionally return one or more commands to instrument job. These
+        commands will be executed on the compute server after the tool-defined
+        command is run.
+ """
+ return None
+
+ @abstractmethod
+ def job_properties( self, job_id, job_directory ):
+ """ Collect properties for this plugin from specified job directory.
+ This method will run on the Galaxy server and can assume files created
+ in job_directory with pre_execute_instrument and
+ post_execute_instrument are available.
+ """
+
+ def _instrument_file_name( self, name ):
+ """ Provide a common pattern for naming files used by instrumentation
+ plugins - to ease their staging out of remote job directories.
+ """
+ return "%s_%s_%s" % ( INSTRUMENT_FILE_PREFIX, self.plugin_type, name )
+
+ def _instrument_file_path( self, job_directory, name ):
+ return os.path.join( job_directory, self._instrument_file_name( name ) )
diff -r bc3fd881b07f5904ccad6d2d187e7d8e3c506652 -r b24137a3a980cf913ba1b5ae023623fb416b54e6 lib/galaxy/jobs/metrics/instrumenters/collectl.py
--- /dev/null
+++ b/lib/galaxy/jobs/metrics/instrumenters/collectl.py
@@ -0,0 +1,214 @@
+import os
+import shutil
+
+from ..instrumenters import InstrumentPlugin
+from ...metrics import formatting
+from ..collectl import subsystems
+from ..collectl import cli
+from ..collectl import processes
+
+from galaxy import util
+from galaxy.util import directory_hash
+
+import logging
+log = logging.getLogger( __name__ )
+
+DEFAULT_PROCFILT_ON = "username" # By default, only grab statistics for user
+                                  # processes (as identified by username).
+DEFAULT_SUBSYSTEMS = "process"
+DEFAULT_FLUSH_INTERVAL = "0" # Set to zero to flush every collection.
+
+FORMATTED_RESOURCE_TITLES = {
+ "PCT": "Percent CPU Usage",
+ "RSYS": "Disk Reads",
+ "WSYS": "Disk Writes",
+}
+
+EMPTY_COLLECTL_FILE_MESSAGE = "Skipping process summary due to empty file... job probably did not run long enough for collectl to gather data."
+
+
+class CollectlFormatter( formatting.JobMetricFormatter ):
+
+ def format( self, key, value ):
+ if key == "pid":
+ return ( "Process ID", int( value ) )
+ elif key == "raw_log_path":
+ return ( "Relative Path of Full Collectl Log", value )
+ elif key == "process_max_AccumT":
+ return ( "Job Runtime (System+User)", formatting.seconds_to_str( float( value ) ) )
+ else:
+ _, stat_type, resource_type = key.split( "_", 2 )
+ if resource_type.startswith( "Vm"):
+ value_str = "%s KB" % int( value )
+ elif resource_type in [ "RSYS", "WSYS" ] and stat_type in [ "count", "max", "sum" ]:
+ value_str = "%d (# system calls)" % int( value )
+ else:
+ value_str = str( value )
+ resource_title = FORMATTED_RESOURCE_TITLES.get( resource_type, resource_type )
+ return ( "%s (%s)" % ( resource_title, stat_type ), value_str )
+
+
+class CollectlPlugin( InstrumentPlugin ):
+ """ Run collectl along with job to capture system and/or process data
+ according to specified collectl subsystems.
+ """
+ plugin_type = "collectl"
+ formatter = CollectlFormatter()
+
+ def __init__( self, **kwargs ):
+ self.__configure_paths( kwargs )
+ self.__configure_subsystems( kwargs )
+ saved_logs_path = kwargs.get( "saved_logs_path", None )
+ if "app" in kwargs:
+ saved_logs_path = kwargs[ "app" ].config.resolve_path( saved_logs_path )
+ self.saved_logs_path = saved_logs_path
+ self.__configure_collectl_recorder_args( kwargs )
+ self.summarize_process_data = util.asbool( kwargs.get( "summarize_process_data", True ) )
+ self.log_collectl_program_output = util.asbool( kwargs.get( "log_collectl_program_output", False ) )
+ if self.summarize_process_data:
+ if subsystems.get_subsystem( "process" ) not in self.subsystems:
+ raise Exception( "Collectl plugin misconfigured - cannot summarize_process_data without process subsystem being enabled." )
+
+ process_statistics = kwargs.get( "process_statistics", None )
+ # None will let processes module use default set of statistics
+ # defined there.
+ self.process_statistics = processes.parse_process_statistics( process_statistics )
+
+ def pre_execute_instrument( self, job_directory ):
+ commands = []
+ # Capture PID of process so we can walk its ancestors when building
+ # statistics for the whole job.
+ commands.append( '''echo "$$" > '%s' ''' % self.__pid_file( job_directory ) )
+ # Run collectl in record mode to capture process and system level
+ # statistics according to supplied subsystems.
+ commands.append( self.__collectl_record_command( job_directory ) )
+ return commands
+
+ def post_execute_instrument( self, job_directory ):
+ commands = []
+ # collectl dies when job script completes, perhaps capture pid of
+ # collectl above and check if it is still alive to allow tracking if
+ # collectl ran successfully through the whole job.
+ return commands
+
+ def job_properties( self, job_id, job_directory ):
+ pid = open( self.__pid_file( job_directory ), "r" ).read().strip()
+ contents = os.listdir( job_directory )
+ try:
+ rel_path = filter( self._is_instrumented_collectl_log, contents )[ 0 ]
+ path = os.path.join( job_directory, rel_path )
+ except IndexError:
+ message = "Failed to find collectl log in directory %s, files were %s" % ( job_directory, contents )
+ raise Exception( message )
+
+ properties = dict(
+ pid=int( pid ),
+ )
+
+ if self.saved_logs_path:
+ destination_rel_dir = os.path.join( *directory_hash.directory_hash_id( job_id ) )
+ destination_rel_path = os.path.join( destination_rel_dir, rel_path )
+ destination_path = os.path.join( self.saved_logs_path, destination_rel_path )
+ destination_dir = os.path.dirname( destination_path )
+ if not os.path.isdir( destination_dir ):
+ os.makedirs( destination_dir )
+ shutil.copyfile( path, destination_path )
+ properties[ "raw_log_path" ] = destination_rel_path
+
+ if self.summarize_process_data:
+ # Run collectl in playback and generate statistics of interest
+ summary_statistics = self.__summarize_process_data( pid, path )
+ for statistic, value in summary_statistics:
+ properties[ "process_%s" % "_".join( statistic ) ] = value
+
+ return properties
+
+ def __configure_paths( self, kwargs ):
+ # 95% of time I would expect collectl to just be installed with apt or
+        # yum, but if it is manually installed and not on the PATH, allow
+ # configuration of explicit path - and allow path to be different
+ # between galaxy job handler (local_collectl_path) and compute node
+ # (remote_collectl_path).
+ collectl_path = kwargs.get( "collectl_path", "collectl" )
+ self.remote_collectl_path = kwargs.get( "remote_collectl_path", collectl_path )
+ self.local_collectl_path = kwargs.get( "local_collectl_path", collectl_path )
+
+ def __configure_subsystems( self, kwargs ):
+ raw_subsystems_str = kwargs.get( "subsystems", DEFAULT_SUBSYSTEMS )
+ raw_subsystems = util.listify( raw_subsystems_str, do_strip=True )
+ self.subsystems = map( subsystems.get_subsystem, raw_subsystems )
+
+ def __configure_collectl_recorder_args( self, kwargs ):
+ collectl_recorder_args = kwargs.copy()
+
+ # Allow deployer to configure separate system and process intervals,
+ # but if they specify just one - use it for both. Thinking here is this
+ # plugin's most useful feature is the process level information so
+ # this is likely what the deployer is attempting to configure.
+ if "interval" in kwargs and "interval2" not in kwargs:
+ collectl_recorder_args[ "interval2" ] = kwargs[ "interval"]
+
+ if "flush" not in kwargs:
+ collectl_recorder_args[ "flush" ] = DEFAULT_FLUSH_INTERVAL
+
+ procfilt_on = kwargs.get( "procfilt_on", DEFAULT_PROCFILT_ON ).lower()
+ # Calculate explicit arguments, rest can just be passed through from
+ # constructor arguments.
+ explicit_args = dict(
+ collectl_path=self.remote_collectl_path,
+ procfilt=procfilt_argument( procfilt_on ),
+ subsystems=self.subsystems,
+ )
+ collectl_recorder_args.update( explicit_args )
+ self.collectl_recorder_args = collectl_recorder_args
+
+ def __summarize_process_data( self, pid, collectl_log_path ):
+ playback_cli_args = dict(
+ collectl_path=self.local_collectl_path,
+ playback_path=collectl_log_path,
+ sep="9"
+ )
+ if not os.stat( collectl_log_path ).st_size:
+ log.debug( EMPTY_COLLECTL_FILE_MESSAGE )
+ return [ ]
+
+ playback_cli = cli.CollectlCli( **playback_cli_args )
+ return processes.generate_process_statistics( playback_cli, pid, self.process_statistics )
+
+ def __collectl_recorder_cli( self, job_directory ):
+ cli_args = self.collectl_recorder_args.copy()
+ cli_args[ "destination_path" ] = self._instrument_file_path( job_directory, "log" )
+ return cli.CollectlCli( **cli_args )
+
+ def __collectl_record_command( self, job_directory ):
+ collectl_cli = self.__collectl_recorder_cli( job_directory )
+ if self.log_collectl_program_output:
+ redirect_to = self._instrument_file_path( job_directory, "program_output" )
+ else:
+ redirect_to = "/dev/null"
+ return "%s > %s 2>&1 &" % (
+ collectl_cli.build_command_line(),
+ redirect_to,
+ )
+
+ def __pid_file( self, job_directory ):
+ return self._instrument_file_path( job_directory, "pid" )
+
+ def _is_instrumented_collectl_log( self, filename ):
+ prefix = self._instrument_file_name( "log" )
+ return filename.startswith( prefix ) and filename.endswith( ".raw.gz" )
+
+
+def procfilt_argument( procfilt_on ):
+ if procfilt_on == "username":
+ return "U$USER"
+ elif procfilt_on == "uid":
+ return "u$UID"
+ else:
+ # Ensure it is empty or "none".
+ if procfilt_on and procfilt_on.lower() != "none":
+ raise Exception( "Invalid procfilt_on argument encountered" )
+ return ""
+
+
+__all__ = [ "CollectlPlugin" ]
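The `procfilt_argument` helper above maps the `procfilt_on` setting onto collectl's `--procfilt` selector syntax (`U<username>` or `u<uid>`; the `$USER`/`$UID` variables are expanded later by the job script's shell). A standalone sketch of the intended mapping, reimplemented here for illustration rather than taken verbatim from the module:

```python
# Hypothetical standalone sketch of the procfilt_argument mapping above.
def procfilt_argument(procfilt_on):
    procfilt_on = procfilt_on.lower()
    if procfilt_on == "username":
        return "U$USER"       # filter collectl processes by username
    elif procfilt_on == "uid":
        return "u$UID"        # filter collectl processes by uid
    elif procfilt_on in ("", "none"):
        return ""             # disable process filtering entirely
    raise ValueError("Invalid procfilt_on argument: %r" % procfilt_on)
```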
diff -r bc3fd881b07f5904ccad6d2d187e7d8e3c506652 -r b24137a3a980cf913ba1b5ae023623fb416b54e6 lib/galaxy/jobs/metrics/instrumenters/core.py
--- /dev/null
+++ b/lib/galaxy/jobs/metrics/instrumenters/core.py
@@ -0,0 +1,85 @@
+from ..instrumenters import InstrumentPlugin
+from ...metrics import formatting
+import time
+
+import logging
+log = logging.getLogger( __name__ )
+
+GALAXY_SLOTS_KEY = "galaxy_slots"
+START_EPOCH_KEY = "start_epoch"
+END_EPOCH_KEY = "end_epoch"
+RUNTIME_SECONDS_KEY = "runtime_seconds"
+
+
+class CorePluginFormatter( formatting.JobMetricFormatter ):
+
+ def format( self, key, value ):
+ value = int( value )
+ if key == GALAXY_SLOTS_KEY:
+ return ( "Cores Allocated", "%d" % value )
+ elif key == RUNTIME_SECONDS_KEY:
+ return ( "Job Runtime (Wall Clock)", formatting.seconds_to_str( value ) )
+ else:
+ # TODO: Use localized version of this from universe_wsgi.ini
+ title = "Job Start Time" if key == START_EPOCH_KEY else "Job End Time"
+ return (title, time.strftime( '%Y-%m-%d %H:%M:%S', time.localtime( value ) ) )
+
+
+class CorePlugin( InstrumentPlugin ):
+ """ Simple plugin that collects data without external dependencies. In
+ particular it currently collects the value set for Galaxy slots.
+ """
+ plugin_type = "core"
+ formatter = CorePluginFormatter()
+
+ def __init__( self, **kwargs ):
+ pass
+
+ def pre_execute_instrument( self, job_directory ):
+ commands = []
+ commands.append( self.__record_galaxy_slots_command( job_directory ) )
+ commands.append( self.__record_seconds_since_epoch_to_file( job_directory, "start" ) )
+ return commands
+
+ def post_execute_instrument( self, job_directory ):
+ commands = []
+ commands.append( self.__record_seconds_since_epoch_to_file( job_directory, "end" ) )
+ return commands
+
+ def job_properties( self, job_id, job_directory ):
+ galaxy_slots_file = self.__galaxy_slots_file( job_directory )
+
+ properties = {}
+ properties[ GALAXY_SLOTS_KEY ] = self.__read_integer( galaxy_slots_file )
+ start = self.__read_seconds_since_epoch( job_directory, "start" )
+ end = self.__read_seconds_since_epoch( job_directory, "end" )
+ if start is not None and end is not None:
+ properties[ START_EPOCH_KEY ] = start
+ properties[ END_EPOCH_KEY ] = end
+ properties[ RUNTIME_SECONDS_KEY ] = end - start
+ return properties
+
+ def __record_galaxy_slots_command( self, job_directory ):
+ galaxy_slots_file = self.__galaxy_slots_file( job_directory )
+ return '''echo "$GALAXY_SLOTS" > '%s' ''' % galaxy_slots_file
+
+ def __record_seconds_since_epoch_to_file( self, job_directory, name ):
+ path = self._instrument_file_path( job_directory, "epoch_%s" % name )
+ # Note: the %s below is date(1)'s format code for seconds since epoch,
+ # so the path is concatenated rather than %-interpolated by Python.
+ return 'date +"%s" > ' + path
+
+ def __read_seconds_since_epoch( self, job_directory, name ):
+ path = self._instrument_file_path( job_directory, "epoch_%s" % name )
+ return self.__read_integer( path )
+
+ def __galaxy_slots_file( self, job_directory ):
+ return self._instrument_file_path( job_directory, "galaxy_slots" )
+
+ def __read_integer( self, path ):
+ value = None
+ try:
+ value = int( open( path, "r" ).read() )
+ except Exception:
+ pass
+ return value
+
+__all__ = [ "CorePlugin" ]
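The runtime computation in `CorePlugin` above is just a difference of two epoch timestamps: the job script writes `date +"%s"` output to files at start and end, and `job_properties` reads them back and subtracts. A minimal sketch of that round trip, with hypothetical file names standing in for `_instrument_file_path` results:

```python
import os
import tempfile

# Sketch of the start/end epoch round trip used by CorePlugin above.
def record_epoch(path, epoch_seconds):
    # Job-script side: what `date +"%s" > path` would produce.
    with open(path, "w") as f:
        f.write("%d\n" % epoch_seconds)

def read_integer(path):
    # Handler side: mirrors __read_integer, returning None on any failure.
    try:
        with open(path) as f:
            return int(f.read())
    except Exception:
        return None

tmp = tempfile.mkdtemp()
start_path = os.path.join(tmp, "epoch_start")
end_path = os.path.join(tmp, "epoch_end")
record_epoch(start_path, 1000)
record_epoch(end_path, 1042)
runtime = read_integer(end_path) - read_integer(start_path)
```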
diff -r bc3fd881b07f5904ccad6d2d187e7d8e3c506652 -r b24137a3a980cf913ba1b5ae023623fb416b54e6 lib/galaxy/jobs/metrics/instrumenters/cpuinfo.py
--- /dev/null
+++ b/lib/galaxy/jobs/metrics/instrumenters/cpuinfo.py
@@ -0,0 +1,62 @@
+import re
+
+from galaxy import util
+
+from ..instrumenters import InstrumentPlugin
+from ...metrics import formatting
+
+import logging
+log = logging.getLogger( __name__ )
+
+PROCESSOR_LINE = re.compile(r"processor\s*\:\s*(\d+)")
+
+
+class CpuInfoFormatter( formatting.JobMetricFormatter ):
+
+ def format( self, key, value ):
+ if key == "processor_count":
+ return "Processor Count", "%s" % int( value )
+ else:
+ return key, value
+
+
+class CpuInfoPlugin( InstrumentPlugin ):
+ """ Gather information about processor configuration from /proc/cpuinfo.
+ Linux only.
+ """
+ plugin_type = "cpuinfo"
+ formatter = CpuInfoFormatter()
+
+ def __init__( self, **kwargs ):
+ self.verbose = util.asbool( kwargs.get( "verbose", False ) )
+
+ def pre_execute_instrument( self, job_directory ):
+ return "cat /proc/cpuinfo > '%s'" % self.__instrument_cpuinfo_path( job_directory )
+
+ def job_properties( self, job_id, job_directory ):
+ properties = {}
+ processor_count = 0
+ with open( self.__instrument_cpuinfo_path( job_directory ) ) as f:
+ current_processor = None
+ for line in f:
+ line = line.strip().lower()
+ if not line: # Skip empty lines
+ continue
+
+ processor_line_match = PROCESSOR_LINE.match( line )
+ if processor_line_match:
+ processor_count += 1
+ current_processor = processor_line_match.group( 1 )
+ elif current_processor and self.verbose:
+ # If verbose, dump information about each processor
+ # into database...
+ key, value = line.split( ":", 1 )
+ key = "processor_%s_%s" % ( current_processor, key.strip() )
+ properties[ key ] = value.strip()
+ properties[ "processor_count" ] = processor_count
+ return properties
+
+ def __instrument_cpuinfo_path( self, job_directory ):
+ return self._instrument_file_path( job_directory, "cpuinfo" )
+
+__all__ = [ "CpuInfoPlugin" ]
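The `PROCESSOR_LINE` pattern above counts logical processors by matching the `processor : N` stanza headers in `/proc/cpuinfo`. A self-contained illustration of the counting loop against sample (illustrative, not real) cpuinfo text:

```python
import re

# Same pattern CpuInfoPlugin uses to count logical processors.
PROCESSOR_LINE = re.compile(r"processor\s*\:\s*(\d+)")

SAMPLE_CPUINFO = """\
processor : 0
model name : Example CPU
processor : 1
model name : Example CPU
"""

count = 0
for line in SAMPLE_CPUINFO.splitlines():
    # Mirror the plugin's normalization: strip and lowercase each line.
    match = PROCESSOR_LINE.match(line.strip().lower())
    if match:
        count += 1
```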
diff -r bc3fd881b07f5904ccad6d2d187e7d8e3c506652 -r b24137a3a980cf913ba1b5ae023623fb416b54e6 lib/galaxy/jobs/metrics/instrumenters/env.py
--- /dev/null
+++ b/lib/galaxy/jobs/metrics/instrumenters/env.py
@@ -0,0 +1,54 @@
+from ..instrumenters import InstrumentPlugin
+from ...metrics import formatting
+
+import logging
+log = logging.getLogger( __name__ )
+
+
+class EnvFormatter( formatting.JobMetricFormatter ):
+
+ def format( self, key, value ):
+ return ( "%s (runtime environment variable)" % key, value )
+
+
+class EnvPlugin( InstrumentPlugin ):
+ """ Instrumentation plugin capable of recording all or specific environment
+ variables for a job at runtime.
+ """
+ plugin_type = "env"
+ formatter = EnvFormatter()
+
+ def __init__( self, **kwargs ):
+ variables_str = kwargs.get( "variables", None )
+ if variables_str:
+ variables = [ v.strip() for v in variables_str.split(",") ]
+ else:
+ variables = None
+ self.variables = variables
+
+ def pre_execute_instrument( self, job_directory ):
+ """ Use env to dump all environment variables to a file.
+ """
+ return "env > '%s'" % self.__env_file( job_directory )
+
+ def post_execute_instrument( self, job_directory ):
+ return None
+
+ def job_properties( self, job_id, job_directory ):
+ """ Recover environment variables dumped out on compute server and filter
+ out specific variables if needed.
+ """
+ variables = self.variables
+
+ properties = {}
+ for line in open( self.__env_file( job_directory ) ).readlines():
+ var, value = line.split( "=", 1 )
+ if not variables or var in variables:
+ properties[ var ] = value
+
+ return properties
+
+ def __env_file( self, job_directory ):
+ return self._instrument_file_path( job_directory, "vars" )
+
+__all__ = [ "EnvPlugin" ]
diff -r bc3fd881b07f5904ccad6d2d187e7d8e3c506652 -r b24137a3a980cf913ba1b5ae023623fb416b54e6 lib/galaxy/jobs/metrics/instrumenters/meminfo.py
--- /dev/null
+++ b/lib/galaxy/jobs/metrics/instrumenters/meminfo.py
@@ -0,0 +1,59 @@
+import re
+
+from galaxy import util
+
+from ..instrumenters import InstrumentPlugin
+from ...metrics import formatting
+
+MEMINFO_LINE = re.compile(r"(\w+)\s*\:\s*(\d+) kB")
+
+# Important (non-verbose) meminfo property titles.
+MEMINFO_TITLES = {
+ "memtotal": "Total System Memory",
+ "swaptotal": "Total System Swap"
+}
+
+
+class MemInfoFormatter( formatting.JobMetricFormatter ):
+
+ def format( self, key, value ):
+ title = MEMINFO_TITLES.get( key, key )
+ return title, util.nice_size( value * 1000 ) # kB = *1000, KB = *1024 - wikipedia
+
+
+class MemInfoPlugin( InstrumentPlugin ):
+ """ Gather information about memory configuration from /proc/meminfo.
+ Linux only.
+ """
+ plugin_type = "meminfo"
+ formatter = MemInfoFormatter()
+
+ def __init__( self, **kwargs ):
+ self.verbose = util.asbool( kwargs.get( "verbose", False ) )
+
+ def pre_execute_instrument( self, job_directory ):
+ return "cat /proc/meminfo > '%s'" % self.__instrument_meminfo_path( job_directory )
+
+ def job_properties( self, job_id, job_directory ):
+ properties = {}
+ with open( self.__instrument_meminfo_path( job_directory ) ) as f:
+ for line in f:
+ line = line.strip()
+ if not line: # Skip empty lines
+ continue
+ line_match = MEMINFO_LINE.match( line )
+ if not line_match:
+ continue
+ key = line_match.group( 1 ).lower()
+ # By default just grab important meminfo properties with titles
+ # defined for formatter. Grab everything in verbose mode for
+ # an arbitrary snapshot of memory at beginning of run.
+ if key in MEMINFO_TITLES or self.verbose:
+ value = long( line_match.group( 2 ) )
+ properties[ key ] = value
+ return properties
+
+ def __instrument_meminfo_path( self, job_directory ):
+ return self._instrument_file_path( job_directory, "meminfo" )
+
+__all__ = [ "MemInfoPlugin" ]
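The `MEMINFO_LINE` pattern above picks out `Name: NNN kB` entries from `/proc/meminfo`; by default only the keys with titles in `MEMINFO_TITLES` are kept. A sketch of the parse against sample (illustrative) meminfo text:

```python
import re

# Same pattern MemInfoPlugin uses; lines without a "kB" value are skipped.
MEMINFO_LINE = re.compile(r"(\w+)\s*\:\s*(\d+) kB")

SAMPLE_MEMINFO = """\
MemTotal:       16384000 kB
SwapTotal:       8192000 kB
HugePages_Total:       0
"""

properties = {}
for line in SAMPLE_MEMINFO.splitlines():
    match = MEMINFO_LINE.match(line.strip())
    if match:
        # Values are reported in kB (1000-byte units, per the formatter).
        properties[match.group(1).lower()] = int(match.group(2))
```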
diff -r bc3fd881b07f5904ccad6d2d187e7d8e3c506652 -r b24137a3a980cf913ba1b5ae023623fb416b54e6 lib/galaxy/jobs/metrics/instrumenters/uname.py
--- /dev/null
+++ b/lib/galaxy/jobs/metrics/instrumenters/uname.py
@@ -0,0 +1,34 @@
+from ..instrumenters import InstrumentPlugin
+from ...metrics import formatting
+
+
+class UnameFormatter( formatting.JobMetricFormatter ):
+
+ def format( self, key, value ):
+ return "Operating System", value
+
+
+class UnamePlugin( InstrumentPlugin ):
+ """ Use uname to gather operating system information about remote system
+ job is running on. Linux only.
+ """
+ plugin_type = "uname"
+ formatter = UnameFormatter()
+
+ def __init__( self, **kwargs ):
+ self.uname_args = kwargs.get( "args", "-a" )
+
+ def pre_execute_instrument( self, job_directory ):
+ return "uname %s > '%s'" % ( self.uname_args, self.__instrument_uname_path( job_directory ) )
+
+ def job_properties( self, job_id, job_directory ):
+ properties = {}
+ with open( self.__instrument_uname_path( job_directory ) ) as f:
+ properties[ "uname" ] = f.read()
+ return properties
+
+ def __instrument_uname_path( self, job_directory ):
+ return self._instrument_file_path( job_directory, "uname" )
+
+
+__all__ = [ "UnamePlugin" ]
diff -r bc3fd881b07f5904ccad6d2d187e7d8e3c506652 -r b24137a3a980cf913ba1b5ae023623fb416b54e6 lib/galaxy/jobs/runners/__init__.py
--- a/lib/galaxy/jobs/runners/__init__.py
+++ b/lib/galaxy/jobs/runners/__init__.py
@@ -251,7 +251,10 @@
log.debug( 'execution of external set_meta for job %d finished' % job_wrapper.job_id )
def get_job_file(self, job_wrapper, **kwds):
+ job_metrics = job_wrapper.app.job_metrics
+ job_instrumenter = job_metrics.job_instrumenters[ job_wrapper.job_destination.id ]
options = dict(
+ job_instrumenter=job_instrumenter,
galaxy_lib=job_wrapper.galaxy_lib_dir,
env_setup_commands=job_wrapper.get_env_setup_clause(),
working_directory=os.path.abspath( job_wrapper.working_directory ),
diff -r bc3fd881b07f5904ccad6d2d187e7d8e3c506652 -r b24137a3a980cf913ba1b5ae023623fb416b54e6 lib/galaxy/jobs/runners/util/job_script/DEFAULT_JOB_FILE_TEMPLATE.sh
--- a/lib/galaxy/jobs/runners/util/job_script/DEFAULT_JOB_FILE_TEMPLATE.sh
+++ b/lib/galaxy/jobs/runners/util/job_script/DEFAULT_JOB_FILE_TEMPLATE.sh
@@ -12,6 +12,8 @@
export PYTHONPATH
fi
$env_setup_commands
+$instrument_pre_commands
cd $working_directory
$command
echo $? > $exit_code_path
+$instrument_post_commands
diff -r bc3fd881b07f5904ccad6d2d187e7d8e3c506652 -r b24137a3a980cf913ba1b5ae023623fb416b54e6 lib/galaxy/jobs/runners/util/job_script/__init__.py
--- a/lib/galaxy/jobs/runners/util/job_script/__init__.py
+++ b/lib/galaxy/jobs/runners/util/job_script/__init__.py
@@ -18,6 +18,8 @@
'headers': '',
'env_setup_commands': '',
'slots_statement': SLOTS_STATEMENT_CLUSTER_DEFAULT,
+ 'instrument_pre_commands': '',
+ 'instrument_post_commands': '',
}
@@ -47,6 +49,13 @@
"""
if any([param not in kwds for param in REQUIRED_TEMPLATE_PARAMS]):
raise Exception("Failed to create job_script, a required parameter is missing.")
+ job_instrumenter = kwds.get("job_instrumenter", None)
+ if job_instrumenter:
+ del kwds[ "job_instrumenter" ]
+ working_directory = kwds["working_directory"]
+ kwds["instrument_pre_commands"] = job_instrumenter.pre_execute_commands(working_directory) or ''
+ kwds["instrument_post_commands"] = job_instrumenter.post_execute_commands(working_directory) or ''
+
template_params = OPTIONAL_TEMPLATE_PARAMS.copy()
template_params.update(**kwds)
if not isinstance(template, Template):
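The hunk above threads `instrument_pre_commands`/`instrument_post_commands` through the `string.Template`-based job script builder, with empty-string defaults so an uninstrumented destination renders cleanly. A sketch of the substitution step, using an abbreviated illustrative template rather than the real `DEFAULT_JOB_FILE_TEMPLATE.sh`:

```python
from string import Template

# Abbreviated stand-in for the job script template; the real one also
# handles headers, env setup, slots, and the exit-code file.
TEMPLATE = Template("""\
$instrument_pre_commands
cd $working_directory
$command
$instrument_post_commands
""")

script = TEMPLATE.safe_substitute(
    instrument_pre_commands="uname -a > 'uname'",
    working_directory="/tmp/job1",
    command="echo hello",
    instrument_post_commands='date +"%s" > epoch_end',
)
```

`safe_substitute` leaves any unknown `$placeholders` intact instead of raising, which is why missing optional params get explicit `''` defaults upstream.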
diff -r bc3fd881b07f5904ccad6d2d187e7d8e3c506652 -r b24137a3a980cf913ba1b5ae023623fb416b54e6 lib/galaxy/model/__init__.py
--- a/lib/galaxy/model/__init__.py
+++ b/lib/galaxy/model/__init__.py
@@ -17,6 +17,7 @@
import json
import socket
import time
+import numbers
from uuid import UUID, uuid4
from string import Template
from itertools import ifilter
@@ -87,6 +88,26 @@
return name
+class HasJobMetrics:
+
+ def _init_metrics( self ):
+ self.text_metrics = []
+ self.numeric_metrics = []
+
+ def add_metric( self, plugin, metric_name, metric_value ):
+ if isinstance( metric_value, numbers.Number ):
+ metric = self._numeric_metric( plugin, metric_name, metric_value )
+ self.numeric_metrics.append( metric )
+ else:
+ metric = self._text_metric( plugin, metric_name, metric_value )
+ self.text_metrics.append( metric )
+
+ @property
+ def metrics( self ):
+ # TODO: Make iterable, concatenate with chain
+ return self.text_metrics + self.numeric_metrics
+
+
class User( object, Dictifiable ):
use_pbkdf2 = True
"""
@@ -226,7 +247,31 @@
return Template( in_string ).safe_substitute( environment )
-class Job( object, Dictifiable ):
+class BaseJobMetric( object ):
+
+ def __init__( self, plugin, metric_name, metric_value ):
+ self.plugin = plugin
+ self.metric_name = metric_name
+ self.metric_value = metric_value
+
+
+class JobMetricText( BaseJobMetric ):
+ pass
+
+
+class JobMetricNumeric( BaseJobMetric ):
+ pass
+
+
+class TaskMetricText( BaseJobMetric ):
+ pass
+
+
+class TaskMetricNumeric( BaseJobMetric ):
+ pass
+
+
+class Job( object, HasJobMetrics, Dictifiable ):
dict_collection_visible_keys = [ 'id', 'state', 'exit_code', 'update_time', 'create_time' ]
dict_element_visible_keys = [ 'id', 'state', 'exit_code', 'update_time', 'create_time' ]
@@ -234,6 +279,9 @@
A job represents a request to run a tool given input datasets, tool
parameters, and output datasets.
"""
+ _numeric_metric = JobMetricNumeric
+ _text_metric = JobMetricText
+
states = Bunch( NEW = 'new',
UPLOAD = 'upload',
WAITING = 'waiting',
@@ -267,6 +315,7 @@
self.imported = False
self.handler = None
self.exit_code = None
+ self._init_metrics()
@property
def finished( self ):
@@ -370,6 +419,12 @@
self.info = info
def set_runner_name( self, job_runner_name ):
self.job_runner_name = job_runner_name
+
+ def get_job( self ):
+ # Added so job and task have same interface (.get_job() ) to get at
+ # underlying job object.
+ return self
+
def set_runner_external_id( self, job_runner_external_id ):
self.job_runner_external_id = job_runner_external_id
def set_post_job_actions( self, post_job_actions ):
@@ -472,10 +527,14 @@
return rval
-class Task( object ):
+
+class Task( object, HasJobMetrics ):
"""
A task represents a single component of a job.
"""
+ _numeric_metric = TaskMetricNumeric
+ _text_metric = TaskMetricText
+
states = Bunch( NEW = 'new',
WAITING = 'waiting',
QUEUED = 'queued',
@@ -498,6 +557,7 @@
self.stderr = ""
self.exit_code = None
self.prepare_input_files_cmd = prepare_files_cmd
+ self._init_metrics()
def get_param_values( self, app ):
"""
@@ -608,6 +668,7 @@
def set_prepare_input_files_cmd( self, prepare_input_files_cmd ):
self.prepare_input_files_cmd = prepare_input_files_cmd
+
class JobParameter( object ):
def __init__( self, name, value ):
self.name = name
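The `HasJobMetrics` mixin added above dispatches each metric to a numeric or text table based on the value's type, via the `_numeric_metric`/`_text_metric` class attributes set on `Job` and `Task`. A standalone sketch with hypothetical stand-ins for the SQLAlchemy-mapped metric classes:

```python
import numbers

# Hypothetical stand-in for JobMetricNumeric/JobMetricText above.
class _Metric(object):
    def __init__(self, plugin, metric_name, metric_value):
        self.plugin = plugin
        self.metric_name = metric_name
        self.metric_value = metric_value

class FakeJob(object):
    _numeric_metric = _Metric
    _text_metric = _Metric

    def __init__(self):
        self.text_metrics = []
        self.numeric_metrics = []

    def add_metric(self, plugin, metric_name, metric_value):
        # Numbers go to the numeric table, everything else to the text table.
        if isinstance(metric_value, numbers.Number):
            self.numeric_metrics.append(
                self._numeric_metric(plugin, metric_name, metric_value))
        else:
            self.text_metrics.append(
                self._text_metric(plugin, metric_name, metric_value))

job = FakeJob()
job.add_metric("gx", "galaxy_slots", 5)
job.add_metric("system", "system_name", "localhost")
```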
diff -r bc3fd881b07f5904ccad6d2d187e7d8e3c506652 -r b24137a3a980cf913ba1b5ae023623fb416b54e6 lib/galaxy/model/mapping.py
--- a/lib/galaxy/model/mapping.py
+++ b/lib/galaxy/model/mapping.py
@@ -470,6 +470,52 @@
Column( "archive_dir", TEXT )
)
+
+JOB_METRIC_MAX_LENGTH = 1023
+
+model.JobMetricText.table = Table(
+ "job_metric_text",
+ metadata,
+ Column( "id", Integer, primary_key=True ),
+ Column( "job_id", Integer, ForeignKey( "job.id" ), index=True ),
+ Column( "plugin", Unicode(255), ),
+ Column( "metric_name", Unicode(255), ),
+ Column( "metric_value", Unicode(JOB_METRIC_MAX_LENGTH), ),
+)
+
+model.TaskMetricText.table = Table(
+ "task_metric_text",
+ metadata,
+ Column( "id", Integer, primary_key=True ),
+ Column( "task_id", Integer, ForeignKey( "task.id" ), index=True ),
+ Column( "plugin", Unicode(255), ),
+ Column( "metric_name", Unicode(255), ),
+ Column( "metric_value", Unicode(JOB_METRIC_MAX_LENGTH), ),
+)
+
+
+model.JobMetricNumeric.table = Table(
+ "job_metric_numeric",
+ metadata,
+ Column( "id", Integer, primary_key=True ),
+ Column( "job_id", Integer, ForeignKey( "job.id" ), index=True ),
+ Column( "plugin", Unicode(255), ),
+ Column( "metric_name", Unicode(255), ),
+ Column( "metric_value", Numeric( 22, 7 ), ),
+)
+
+
+model.TaskMetricNumeric.table = Table(
+ "task_metric_numeric",
+ metadata,
+ Column( "id", Integer, primary_key=True ),
+ Column( "task_id", Integer, ForeignKey( "task.id" ), index=True ),
+ Column( "plugin", Unicode(255), ),
+ Column( "metric_name", Unicode(255), ),
+ Column( "metric_value", Numeric( 22, 7 ), ),
+)
+
+
model.GenomeIndexToolData.table = Table( "genome_index_tool_data", metadata,
Column( "id", Integer, primary_key=True ),
Column( "job_id", Integer, ForeignKey( "job.id" ), index=True ),
@@ -1569,6 +1615,26 @@
job=relation( model.Job ), dataset=relation(
model.LibraryDatasetDatasetAssociation, lazy=False ) ) )
+simple_mapping(
+ model.JobMetricText,
+ job=relation( model.Job, backref="text_metrics" ),
+)
+
+simple_mapping(
+ model.TaskMetricText,
+ task=relation( model.Task, backref="text_metrics" ),
+)
+
+simple_mapping(
+ model.JobMetricNumeric,
+ job=relation( model.Job, backref="numeric_metrics" ),
+)
+
+simple_mapping(
+ model.TaskMetricNumeric,
+ task=relation( model.Task, backref="numeric_metrics" ),
+)
+
mapper( model.JobParameter, model.JobParameter.table )
mapper( model.JobExternalOutputMetadata, model.JobExternalOutputMetadata.table,
diff -r bc3fd881b07f5904ccad6d2d187e7d8e3c506652 -r b24137a3a980cf913ba1b5ae023623fb416b54e6 lib/galaxy/model/migrate/versions/0119_job_metrics.py
--- /dev/null
+++ b/lib/galaxy/model/migrate/versions/0119_job_metrics.py
@@ -0,0 +1,102 @@
+"""
+Migration script for job metric plugins.
+"""
+
+from sqlalchemy import *
+from sqlalchemy.orm import *
+from migrate import *
+from migrate.changeset import *
+
+import datetime
+now = datetime.datetime.utcnow
+
+import logging
+log = logging.getLogger( __name__ )
+
+metadata = MetaData()
+
+TEXT_METRIC_MAX_LENGTH = 1023
+
+JobMetricText_table = Table(
+ "job_metric_text",
+ metadata,
+ Column( "id", Integer, primary_key=True ),
+ Column( "job_id", Integer, ForeignKey( "job.id" ), index=True ),
+ Column( "plugin", Unicode(255), ),
+ Column( "metric_name", Unicode(255), ),
+ Column( "metric_value", Unicode(TEXT_METRIC_MAX_LENGTH), ),
+)
+
+
+TaskMetricText_table = Table(
+ "task_metric_text",
+ metadata,
+ Column( "id", Integer, primary_key=True ),
+ Column( "task_id", Integer, ForeignKey( "task.id" ), index=True ),
+ Column( "plugin", Unicode(255), ),
+ Column( "metric_name", Unicode(255), ),
+ Column( "metric_value", Unicode(TEXT_METRIC_MAX_LENGTH), ),
+)
+
+
+JobMetricNumeric_table = Table(
+ "job_metric_numeric",
+ metadata,
+ Column( "id", Integer, primary_key=True ),
+ Column( "job_id", Integer, ForeignKey( "job.id" ), index=True ),
+ Column( "plugin", Unicode(255), ),
+ Column( "metric_name", Unicode(255), ),
+ Column( "metric_value", Numeric( 22, 7 ), ),
+)
+
+
+TaskMetricNumeric_table = Table(
+ "task_metric_numeric",
+ metadata,
+ Column( "id", Integer, primary_key=True ),
+ Column( "task_id", Integer, ForeignKey( "task.id" ), index=True ),
+ Column( "plugin", Unicode(255), ),
+ Column( "metric_name", Unicode(255), ),
+ Column( "metric_value", Numeric( 22, 7 ), ),
+)
+
+
+TABLES = [
+ JobMetricText_table,
+ TaskMetricText_table,
+ JobMetricNumeric_table,
+ TaskMetricNumeric_table,
+]
+
+
+def upgrade( migrate_engine ):
+ metadata.bind = migrate_engine
+ print __doc__
+ metadata.reflect()
+
+ for table in TABLES:
+ __create(table)
+
+
+def downgrade(migrate_engine):
+ metadata.bind = migrate_engine
+ metadata.reflect()
+
+ for table in TABLES:
+ __drop(table)
+
+
+def __create(table):
+ try:
+ table.create()
+ except Exception as e:
+ print str(e)
+ log.debug("Creating %s table failed: %s" % (table.name, str( e ) ) )
+
+
+def __drop(table):
+ try:
+ table.drop()
+ except Exception as e:
+ print str(e)
+ log.debug("Dropping %s table failed: %s" % (table.name, str( e ) ) )
diff -r bc3fd881b07f5904ccad6d2d187e7d8e3c506652 -r b24137a3a980cf913ba1b5ae023623fb416b54e6 templates/show_params.mako
--- a/templates/show_params.mako
+++ b/templates/show_params.mako
@@ -126,6 +126,13 @@
%if job and job.command_line and trans.user_is_admin():
<tr><td>Job Command-Line:</td><td>${ job.command_line | h }</td></tr>
%endif
+ %if job and trans.user_is_admin():
+ <% job_metrics = trans.app.job_metrics %>
+ %for metric in job.metrics:
+ <% metric_title, metric_value = job_metrics.format( metric.plugin, metric.metric_name, metric.metric_value ) %>
+ <tr><td>${ metric_title | h }</td><td>${ metric_value | h }</td></tr>
+ %endfor
+ %endif
</table><br />
diff -r bc3fd881b07f5904ccad6d2d187e7d8e3c506652 -r b24137a3a980cf913ba1b5ae023623fb416b54e6 test/unit/test_galaxy_mapping.py
--- a/test/unit/test_galaxy_mapping.py
+++ b/test/unit/test_galaxy_mapping.py
@@ -210,6 +210,46 @@
assert hist1.name == "History 2b"
# gvk TODO need to ad test for GalaxySessions, but not yet sure what they should look like.
+ def test_jobs( self ):
+ model = self.model
+ u = model.User( email="jobtest@foo.bar.baz", password="password" )
+ job = model.Job()
+ job.user = u
+ job.tool_id = "cat1"
+
+ self.persist( u, job )
+
+ loaded_job = model.session.query( model.Job ).filter( model.Job.user == u ).first()
+ assert loaded_job.tool_id == "cat1"
+
+ def test_job_metrics( self ):
+ model = self.model
+ u = model.User( email="jobtest@foo.bar.baz", password="password" )
+ job = model.Job()
+ job.user = u
+ job.tool_id = "cat1"
+
+ job.add_metric( "gx", "galaxy_slots", 5 )
+ job.add_metric( "system", "system_name", "localhost" )
+
+ self.persist( u, job )
+
+ task = model.Task( job=job, working_directory="/tmp", prepare_files_cmd="split.sh" )
+ task.add_metric( "gx", "galaxy_slots", 5 )
+ task.add_metric( "system", "system_name", "localhost" )
+ self.persist( task )
+
+ def test_tasks( self ):
+ model = self.model
+ u = model.User( email="jobtest@foo.bar.baz", password="password" )
+ job = model.Job()
+ task = model.Task( job=job, working_directory="/tmp", prepare_files_cmd="split.sh" )
+ job.user = u
+ self.persist( u, job, task )
+
+ loaded_task = model.session.query( model.Task ).filter( model.Task.job == job ).first()
+ assert loaded_task.prepare_input_files_cmd == "split.sh"
+
def test_history_contents( self ):
model = self.model
u = model.User( email="contents@foo.bar.baz", password="password" )
https://bitbucket.org/galaxy/galaxy-central/commits/e58259b27d22/
Changeset: e58259b27d22
User: jmchilton
Date: 2014-04-23 04:25:04
Summary: Job metrics - ensure all text values are unicode before persisting.
Gets rid of the following warning: "SAWarning: Unicode type received non-unicode bind param value.".
Affected #: 1 file
diff -r b24137a3a980cf913ba1b5ae023623fb416b54e6 -r e58259b27d229b5bdfe3e70772cef94804dda03c lib/galaxy/model/__init__.py
--- a/lib/galaxy/model/__init__.py
+++ b/lib/galaxy/model/__init__.py
@@ -95,10 +95,18 @@
self.numeric_metrics = []
def add_metric( self, plugin, metric_name, metric_value ):
+ if isinstance( plugin, str ):
+ plugin = unicode( plugin, 'utf-8' )
+
+ if isinstance( metric_name, str ):
+ metric_name = unicode( metric_name, 'utf-8' )
+
if isinstance( metric_value, numbers.Number ):
metric = self._numeric_metric( plugin, metric_name, metric_value )
self.numeric_metrics.append( metric )
else:
+ if isinstance( metric_value, str ):
+ metric_value = unicode( metric_value, 'utf-8' )
metric = self._text_metric( plugin, metric_name, metric_value )
self.text_metrics.append( metric )
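The coercion above is Python 2 specific: `str` values (byte strings) are decoded to `unicode` before hitting the Unicode-typed columns, silencing SQLAlchemy's `SAWarning`. In Python 3 terms the equivalent helper decodes `bytes` to `str`; a sketch of that analogue (hypothetical helper name, not from this changeset):

```python
# Python 3 analogue of the str -> unicode coercion in the diff above.
def ensure_text(value, encoding="utf-8"):
    # Decode byte strings; pass every other type through unchanged,
    # mirroring how the original only coerces `str` instances.
    if isinstance(value, bytes):
        return value.decode(encoding)
    return value
```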
https://bitbucket.org/galaxy/galaxy-central/commits/26575805a77b/
Changeset: 26575805a77b
User: jmchilton
Date: 2014-04-23 04:40:08
Summary: Fix test_job_configuration.py for job metrics PR.
Still need to go in and actually write some tests for the nested parsing that can happen.
Affected #: 1 file
diff -r e58259b27d229b5bdfe3e70772cef94804dda03c -r 26575805a77b61e893a6601401499c970f156a0c test/unit/jobs/test_job_configuration.py
--- a/test/unit/jobs/test_job_configuration.py
+++ b/test/unit/jobs/test_job_configuration.py
@@ -22,7 +22,7 @@
use_tasked_jobs=False,
)
self.__write_config_from( SIMPLE_JOB_CONF )
- self.app = bunch.Bunch( config=self.config )
+ self.app = bunch.Bunch( config=self.config, job_metrics=MockJobMetrics() )
self.__job_configuration = None
def tearDown( self ):
@@ -111,6 +111,8 @@
assert limits.concurrent_jobs[ "longjobs" ] == 1
assert limits.walltime_delta == datetime.timedelta( 0, 0, 0, 0, 0, 24 )
+ # TODO: Add job metrics parsing test.
+
@property
def job_config( self ):
if not self.__job_configuration:
@@ -126,3 +128,12 @@
def __write_config( self, contents ):
with open( os.path.join( self.temp_directory, "job_conf.xml" ), "w" ) as f:
f.write( contents )
+
+
+class MockJobMetrics( object ):
+
+ def __init__( self ):
+ pass
+
+ def set_destination_conf_element( self, id, element ):
+ pass
Repository URL: https://bitbucket.org/galaxy/galaxy-central/
--
This is a commit notification from bitbucket.org. You are receiving
this because you have the service enabled, addressing the recipient of
this email.
commit/galaxy-central: jmchilton: Unit tests for job conf parsing.
by commits-noreply@bitbucket.org 22 Apr '14
1 new commit in galaxy-central:
https://bitbucket.org/galaxy/galaxy-central/commits/bc3fd881b07f/
Changeset: bc3fd881b07f
User: jmchilton
Date: 2014-04-23 03:50:18
Summary: Unit tests for job conf parsing.
Update advanced job sample to fix "bugs" discovered during testing limit parsing.
Affected #: 2 files
diff -r 68b49fdc63e2445ee2124e0185373d964bc64bc8 -r bc3fd881b07f5904ccad6d2d187e7d8e3c506652 job_conf.xml.sample_advanced
--- a/job_conf.xml.sample_advanced
+++ b/job_conf.xml.sample_advanced
@@ -159,8 +159,8 @@
<limits><!-- Certain limits can be defined. --><limit type="registered_user_concurrent_jobs">2</limit>
- <limit type="unregistered_user_concurrent_jobs">1</limit>
- <limit type="job_walltime">24:00:00</limit>
+ <limit type="anonymous_user_concurrent_jobs">1</limit>
+ <limit type="walltime">24:00:00</limit><limit type="concurrent_jobs" id="local">1</limit><limit type="concurrent_jobs" tag="mycluster">2</limit><limit type="concurrent_jobs" tag="longjobs">1</limit>
diff -r 68b49fdc63e2445ee2124e0185373d964bc64bc8 -r bc3fd881b07f5904ccad6d2d187e7d8e3c506652 test/unit/jobs/test_job_configuration.py
--- /dev/null
+++ b/test/unit/jobs/test_job_configuration.py
@@ -0,0 +1,128 @@
+import datetime
+import os
+import shutil
+import tempfile
+import unittest
+
+from galaxy.util import bunch
+from galaxy.jobs import JobConfiguration
+
+# File would be slightly more readable if contents were embedded directly, but
+# there are advantages to testing the documentation/examples.
+SIMPLE_JOB_CONF = os.path.join( os.path.dirname( __file__ ), "..", "..", "..", "job_conf.xml.sample_basic" )
+ADVANCED_JOB_CONF = os.path.join( os.path.dirname( __file__ ), "..", "..", "..", "job_conf.xml.sample_advanced" )
+
+
+class JobConfXmlParserTestCase( unittest.TestCase ):
+
+ def setUp( self ):
+ self.temp_directory = tempfile.mkdtemp()
+ self.config = bunch.Bunch(
+ job_config_file=os.path.join( self.temp_directory, "job_conf.xml" ),
+ use_tasked_jobs=False,
+ )
+ self.__write_config_from( SIMPLE_JOB_CONF )
+ self.app = bunch.Bunch( config=self.config )
+ self.__job_configuration = None
+
+ def tearDown( self ):
+ shutil.rmtree( self.temp_directory )
+
+ def test_load_simple_runner( self ):
+ runner_plugin = self.job_config.runner_plugins[ 0 ]
+ assert runner_plugin[ "id" ] == "local"
+ assert runner_plugin[ "load" ] == "galaxy.jobs.runners.local:LocalJobRunner"
+ assert runner_plugin[ "workers" ] == 4
+
+ def test_tasks_disabled( self ):
+ assert len( [ r for r in self.job_config.runner_plugins if r[ "id" ] == "tasks" ] ) == 0
+
+ def test_configuration_of_tasks( self ):
+ self.config.use_tasked_jobs = True
+ self.config.local_task_queue_workers = 5
+ task_runners = [ r for r in self.job_config.runner_plugins if r[ "id" ] == "tasks" ]
+ assert len( task_runners ) == 1
+ assert task_runners[ 0 ][ "workers" ] == 5
+
+ def test_load_simple_handler( self ):
+ main_handler = self.job_config.handlers[ "main" ]
+ assert main_handler[ 0 ] == "main"
+
+ def test_if_one_handler_implicit_default( self ):
+ assert self.job_config.default_handler_id == "main"
+
+ def test_explicit_handler_default( self ):
+ self.__with_advanced_config()
+ assert self.job_config.default_handler_id == "handlers"
+
+ def test_handler_tag_parsing( self ):
+ self.__with_advanced_config()
+ assert "handler0" in self.job_config.handlers[ "handlers" ]
+ assert "handler1" in self.job_config.handlers[ "handlers" ]
+
+ def test_load_simple_destination( self ):
+ local_dest = self.job_config.destinations[ "local" ][ 0 ]
+ assert local_dest.id == "local"
+ assert local_dest.runner == "local"
+
+    def test_load_destination_params( self ):
+        self.__with_advanced_config()
+        pbs_dest = self.job_config.destinations[ "pbs_longjobs" ][ 0 ]
+        assert pbs_dest.id == "pbs_longjobs"
+        assert pbs_dest.runner == "pbs"
+        dest_params = pbs_dest.params
+        assert dest_params[ "Resource_List" ] == "walltime=72:00:00"
+
+    def test_destination_tags( self ):
+        self.__with_advanced_config()
+        longjob_dests = self.job_config.destinations[ "longjobs" ]
+        assert len( longjob_dests ) == 2
+        assert longjob_dests[ 0 ].id == "pbs_longjobs"
+        assert longjob_dests[ 1 ].id == "remote_cluster"
+
+    def test_load_tool( self ):
+        self.__with_advanced_config()
+        baz_tool = self.job_config.tools[ "baz" ][ 0 ]
+        assert baz_tool.id == "baz"
+        assert baz_tool.handler == "special_handlers"
+        assert baz_tool.destination == "bigmem"
+
+    def test_load_tool_params( self ):
+        self.__with_advanced_config()
+        foo_tool = self.job_config.tools[ "foo" ][ 0 ]
+        assert foo_tool.params[ "source" ] == "trackster"
+
+    def test_default_limits( self ):
+        limits = self.job_config.limits
+        assert limits.registered_user_concurrent_jobs is None
+        assert limits.anonymous_user_concurrent_jobs is None
+        assert limits.walltime is None
+        assert limits.walltime_delta is None
+        assert limits.output_size is None
+        assert limits.concurrent_jobs == {}
+
+    def test_limit_overrides( self ):
+        self.__with_advanced_config()
+        limits = self.job_config.limits
+        assert limits.registered_user_concurrent_jobs == 2
+        assert limits.anonymous_user_concurrent_jobs == 1
+        assert limits.concurrent_jobs[ "local" ] == 1
+        assert limits.concurrent_jobs[ "mycluster" ] == 2
+        assert limits.concurrent_jobs[ "longjobs" ] == 1
+        assert limits.walltime_delta == datetime.timedelta( 0, 0, 0, 0, 0, 24 )
+
+    @property
+    def job_config( self ):
+        if not self.__job_configuration:
+            self.__job_configuration = JobConfiguration( self.app )
+        return self.__job_configuration
+
+    def __with_advanced_config( self ):
+        self.__write_config_from( ADVANCED_JOB_CONF )
+
+    def __write_config_from( self, path ):
+        self.__write_config( open( path, "r" ).read() )
+
+    def __write_config( self, contents ):
+        with open( os.path.join( self.temp_directory, "job_conf.xml" ), "w" ) as f:
+            f.write( contents )
Repository URL: https://bitbucket.org/galaxy/galaxy-central/
--
This is a commit notification from bitbucket.org. You are receiving
this because you have the service enabled, addressing the recipient of
this email.
commit/galaxy-central: carlfeberhard: History/HDA UI: pass allow_user_dataset_purge from Galaxy.config to history-panel and hda-edit and respect by not rendering purge controls if false
by commits-noreply@bitbucket.org 22 Apr '14
1 new commit in galaxy-central:
https://bitbucket.org/galaxy/galaxy-central/commits/68b49fdc63e2/
Changeset: 68b49fdc63e2
User: carlfeberhard
Date: 2014-04-22 19:16:50
Summary: History/HDA UI: pass allow_user_dataset_purge from Galaxy.config to history-panel and hda-edit and respect by not rendering purge controls if false
Affected #: 8 files
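The change in the diffs below follows one pattern in both files: build the list of always-available menu actions first, then append the destructive purge action only when the `allow_user_dataset_purge` config flag is enabled. A minimal language-neutral sketch of that gating, in Python (`build_actions` and the action labels are illustrative, not Galaxy's actual API):

```python
# Sketch of feature-flag gating: assemble the base actions, then append
# purge only when the config flag allows it. Names here are hypothetical.

def build_actions( purge_allowed ):
    actions = [ "hide", "unhide", "delete", "undelete" ]
    if purge_allowed:
        # purge is destructive, so its control is only offered when
        # allow_user_dataset_purge is enabled in the server config
        actions.append( "purge" )
    return actions

print( build_actions( purge_allowed=False ) )  # purge control omitted
print( build_actions( purge_allowed=True ) )
```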
diff -r 2342d31a1f5cf906f1aecb0689fe64da384cf940 -r 68b49fdc63e2445ee2124e0185373d964bc64bc8 static/scripts/mvc/dataset/hda-edit.js
--- a/static/scripts/mvc/dataset/hda-edit.js
+++ b/static/scripts/mvc/dataset/hda-edit.js
@@ -36,6 +36,9 @@
             this._render_rerunButton
         ];
+        /** allow user purge of dataset files? */
+        this.purgeAllowed = attributes.purgeAllowed || false;
+
         //TODO: move to HiddenUntilActivatedViewMixin
         /** should the tags editor be shown or hidden initially? */
         this.tagsEditorShown = attributes.tagsEditorShown || false;
@@ -291,10 +294,15 @@
         };
         $newRender.find( '.dataset-deleted-msg' ).append([
-            br, link( _l( 'Undelete it' ), 'dataset-undelete' ), p,
-            br, link( _l( 'Permanently remove it from disk' ), 'dataset-purge' ), p
+            br, link( _l( 'Undelete it' ), 'dataset-undelete' ), p
         ].join( '' ));
+        if( this.purgeAllowed ){
+            $newRender.find( '.dataset-deleted-msg' ).append([
+                br, link( _l( 'Permanently remove it from disk' ), 'dataset-purge' ), p
+            ].join( '' ));
+        }
+
         $newRender.find( '.dataset-hidden-msg' ).append([
             br, link( _l( 'Unhide it' ), 'dataset-unhide' ), p
         ].join( '' ));
diff -r 2342d31a1f5cf906f1aecb0689fe64da384cf940 -r 68b49fdc63e2445ee2124e0185373d964bc64bc8 static/scripts/mvc/history/history-panel.js
--- a/static/scripts/mvc/history/history-panel.js
+++ b/static/scripts/mvc/history/history-panel.js
@@ -50,6 +50,9 @@
         /** editor for annotations - sub-view */
         this.annotationEditor = null;
+        /** allow user purge of dataset files? */
+        this.purgeAllowed = attributes.purgeAllowed || false;
+
         // states/modes the panel can be in
         /** is the panel currently showing the dataset selection controls? */
         this.selecting = attributes.selecting || false;
@@ -197,41 +200,40 @@
     * ajax calls made for multiple datasets are queued
     */
    _setUpDatasetActionsPopup : function( $where ){
-        var panel = this;
-        ( new PopupMenu( $where.find( '.history-dataset-action-popup-btn' ), [
-            {
-                html: _l( 'Hide datasets' ), func: function(){
-                    var action = hdaModel.HistoryDatasetAssociation.prototype.hide;
-                    panel.getSelectedHdaCollection().ajaxQueue( action );
+        var panel = this,
+            actions = [
+                { html: _l( 'Hide datasets' ), func: function(){
+                        var action = hdaModel.HistoryDatasetAssociation.prototype.hide;
+                        panel.getSelectedHdaCollection().ajaxQueue( action );
+                    }
+                },
+                { html: _l( 'Unhide datasets' ), func: function(){
+                        var action = hdaModel.HistoryDatasetAssociation.prototype.unhide;
+                        panel.getSelectedHdaCollection().ajaxQueue( action );
+                    }
+                },
+                { html: _l( 'Delete datasets' ), func: function(){
+                        var action = hdaModel.HistoryDatasetAssociation.prototype['delete'];
+                        panel.getSelectedHdaCollection().ajaxQueue( action );
+                    }
+                },
+                { html: _l( 'Undelete datasets' ), func: function(){
+                        var action = hdaModel.HistoryDatasetAssociation.prototype.undelete;
+                        panel.getSelectedHdaCollection().ajaxQueue( action );
+                    }
+                }
-            },
-            {
-                html: _l( 'Unhide datasets' ), func: function(){
-                    var action = hdaModel.HistoryDatasetAssociation.prototype.unhide;
-                    panel.getSelectedHdaCollection().ajaxQueue( action );
-                }
-            },
-            {
-                html: _l( 'Delete datasets' ), func: function(){
-                    var action = hdaModel.HistoryDatasetAssociation.prototype['delete'];
-                    panel.getSelectedHdaCollection().ajaxQueue( action );
-                }
-            },
-            {
-                html: _l( 'Undelete datasets' ), func: function(){
-                    var action = hdaModel.HistoryDatasetAssociation.prototype.undelete;
-                    panel.getSelectedHdaCollection().ajaxQueue( action );
-                }
-            },
-            {
+            ];
+        if( panel.purgeAllowed ){
+            actions.push({
                 html: _l( 'Permanently delete datasets' ), func: function(){
                     if( confirm( _l( 'This will permanently remove the data in your datasets. Are you sure?' ) ) ){
                         var action = hdaModel.HistoryDatasetAssociation.prototype.purge;
                         panel.getSelectedHdaCollection().ajaxQueue( action );
                     }
                 }
-            }
-        ]));
+            });
+        }
+        return new PopupMenu( $where.find( '.history-dataset-action-popup-btn' ), actions );
     },
// ------------------------------------------------------------------------ hda sub-views
@@ -265,6 +267,7 @@
             expanded : this.storage.get( 'expandedHdas' )[ hdaId ],
             //draggable : true,
             selectable : this.selecting,
+            purgeAllowed : this.purgeAllowed,
             hasUser : this.model.ownedByCurrUser(),
             logger : this.logger,
             tagsEditorShown : ( this.tagsEditor && !this.tagsEditor.hidden ),
diff -r 2342d31a1f5cf906f1aecb0689fe64da384cf940 -r 68b49fdc63e2445ee2124e0185373d964bc64bc8 static/scripts/packed/galaxy.workflow_editor.canvas.js
--- a/static/scripts/packed/galaxy.workflow_editor.canvas.js
+++ b/static/scripts/packed/galaxy.workflow_editor.canvas.js
@@ -1,1 +1,1 @@
-var Terminal=Backbone.Model.extend({initialize:function(a){this.element=a.element;this.connectors=[]},connect:function(a){this.connectors.push(a);if(this.node){this.node.markChanged()}},disconnect:function(a){this.connectors.splice($.inArray(a,this.connectors),1);if(this.node){this.node.markChanged()}},redraw:function(){$.each(this.connectors,function(a,b){b.redraw()})},destroy:function(){$.each(this.connectors.slice(),function(a,b){b.destroy()})}});var OutputTerminal=Terminal.extend({initialize:function(a){Terminal.prototype.initialize.call(this,a);this.datatypes=a.datatypes}});var InputTerminal=Terminal.extend({initialize:function(a){Terminal.prototype.initialize.call(this,a);this.datatypes=a.datatypes;this.multiple=a.multiple},canAccept:function(a){if(this._inputFilled()){return false}else{return this.attachable(a)}},_inputFilled:function(){return !(this.connectors.length<1||this.multiple)},attachable:function(a){for(var c in this.datatypes){var f=new Array();f=f.concat(a.datatypes);if(a.node.post_job_actions){for(var d in a.node.post_job_actions){var g=a.node.post_job_actions[d];if(g.action_type=="ChangeDatatypeAction"&&(g.output_name==""||g.output_name==a.name)&&g.action_arguments){f.push(g.action_arguments.newtype)}}}for(var b in f){if(f[b]=="input"||issubtype(f[b],this.datatypes[c])){return true}}}return false}});function Connector(b,a){this.canvas=null;this.dragging=false;this.inner_color="#FFFFFF";this.outer_color="#D8B365";if(b&&a){this.connect(b,a)}}$.extend(Connector.prototype,{connect:function(b,a){this.handle1=b;if(this.handle1){this.handle1.connect(this)}this.handle2=a;if(this.handle2){this.handle2.connect(this)}},destroy:function(){if(this.handle1){this.handle1.disconnect(this)}if(this.handle2){this.handle2.disconnect(this)}$(this.canvas).remove()},redraw:function(){var 
d=$("#canvas-container");if(!this.canvas){this.canvas=document.createElement("canvas");if(window.G_vmlCanvasManager){G_vmlCanvasManager.initElement(this.canvas)}d.append($(this.canvas));if(this.dragging){this.canvas.style.zIndex="300"}}var n=function(c){return $(c).offset().left-d.offset().left};var i=function(c){return $(c).offset().top-d.offset().top};if(!this.handle1||!this.handle2){return}var h=n(this.handle1.element)+5;var g=i(this.handle1.element)+5;var p=n(this.handle2.element)+5;var m=i(this.handle2.element)+5;var f=100;var k=Math.min(h,p);var a=Math.max(h,p);var j=Math.min(g,m);var t=Math.max(g,m);var b=Math.min(Math.max(Math.abs(t-j)/2,100),300);var o=k-f;var s=j-f;var q=a-k+2*f;var l=t-j+2*f;this.canvas.style.left=o+"px";this.canvas.style.top=s+"px";this.canvas.setAttribute("width",q);this.canvas.setAttribute("height",l);h-=o;g-=s;p-=o;m-=s;var r=this.canvas.getContext("2d");r.lineCap="round";r.strokeStyle=this.outer_color;r.lineWidth=7;r.beginPath();r.moveTo(h,g);r.bezierCurveTo(h+b,g,p-b,m,p,m);r.stroke();r.strokeStyle=this.inner_color;r.lineWidth=5;r.beginPath();r.moveTo(h,g);r.bezierCurveTo(h+b,g,p-b,m,p,m);r.stroke()}});var Node=Backbone.Model.extend({initialize:function(a){this.element=a.element;this.input_terminals={};this.output_terminals={};this.tool_errors={}},redraw:function(){$.each(this.input_terminals,function(a,b){b.redraw()});$.each(this.output_terminals,function(a,b){b.redraw()})},destroy:function(){$.each(this.input_terminals,function(a,b){b.destroy()});$.each(this.output_terminals,function(a,b){b.destroy()});workflow.remove_node(this);$(this.element).remove()},make_active:function(){$(this.element).addClass("toolForm-active")},make_inactive:function(){var 
a=this.element.get(0);(function(b){b.removeChild(a);b.appendChild(a)})(a.parentNode);$(a).removeClass("toolForm-active")},init_field_data:function(b){if(b.type){this.type=b.type}this.name=b.name;this.form_html=b.form_html;this.tool_state=b.tool_state;this.tool_errors=b.tool_errors;this.tooltip=b.tooltip?b.tooltip:"";this.annotation=b.annotation;this.post_job_actions=b.post_job_actions?b.post_job_actions:{};this.workflow_outputs=b.workflow_outputs?b.workflow_outputs:[];var a=this;var c=new NodeView({el:this.element[0],node:a,});a.nodeView=c;$.each(b.data_inputs,function(f,d){c.addDataInput(d)});if((b.data_inputs.length>0)&&(b.data_outputs.length>0)){c.addRule()}$.each(b.data_outputs,function(f,d){c.addDataOutput(d)});c.render();workflow.node_changed(this)},update_field_data:function(c){var b=this;nodeView=b.nodeView;this.tool_state=c.tool_state;this.form_html=c.form_html;this.tool_errors=c.tool_errors;this.annotation=c.annotation;var d=$.parseJSON(c.post_job_actions);this.post_job_actions=d?d:{};b.nodeView.renderToolErrors();var f=nodeView.$("div.inputs");var a=nodeView.newInputsDiv();$.each(c.data_inputs,function(h,g){b.nodeView.replaceDataInput(g,a)});f.replaceWith(a);f.find("div.input-data-row > .terminal").each(function(){this.terminal.destroy()});this.markChanged();this.redraw()},error:function(d){var a=$(this.element).find(".toolFormBody");a.find("div").remove();var c="<div style='color: red; text-style: italic;'>"+d+"</div>";this.form_html=c;a.html(c);workflow.node_changed(this)},markChanged:function(){workflow.node_changed(this)}});function Workflow(a){this.canvas_container=a;this.id_counter=0;this.nodes={};this.name=null;this.has_changes=false;this.active_form_has_changes=false}$.extend(Workflow.prototype,{add_node:function(a){a.id=this.id_counter;a.element.attr("id","wf-node-step-"+a.id);this.id_counter++;this.nodes[a.id]=a;this.has_changes=true;a.workflow=this},remove_node:function(a){if(this.active_node==a){this.clear_active_node()}delete 
this.nodes[a.id];this.has_changes=true},remove_all:function(){wf=this;$.each(this.nodes,function(b,a){a.destroy();wf.remove_node(a)})},rectify_workflow_outputs:function(){var b=false;var a=false;$.each(this.nodes,function(c,d){if(d.workflow_outputs&&d.workflow_outputs.length>0){b=true}$.each(d.post_job_actions,function(g,f){if(f.action_type==="HideDatasetAction"){a=true}})});if(b!==false||a!==false){$.each(this.nodes,function(c,g){if(g.type==="tool"){var f=false;if(g.post_job_actions==null){g.post_job_actions={};f=true}var d=[];$.each(g.post_job_actions,function(i,h){if(h.action_type=="HideDatasetAction"){d.push(i)}});if(d.length>0){$.each(d,function(h,j){f=true;delete g.post_job_actions[j]})}if(b){$.each(g.output_terminals,function(i,j){var h=true;$.each(g.workflow_outputs,function(l,m){if(j.name===m){h=false}});if(h===true){f=true;var k={action_type:"HideDatasetAction",output_name:j.name,action_arguments:{}};g.post_job_actions["HideDatasetAction"+j.name]=null;g.post_job_actions["HideDatasetAction"+j.name]=k}})}if(workflow.active_node==g&&f===true){workflow.reload_active_node()}}})}},to_simple:function(){var a={};$.each(this.nodes,function(c,f){var g={};$.each(f.input_terminals,function(i,j){g[j.name]=null;var h=[];$.each(j.connectors,function(k,l){h[k]={id:l.handle1.node.id,output_name:l.handle1.name};g[j.name]=h})});var b={};if(f.post_job_actions){$.each(f.post_job_actions,function(j,h){var k={action_type:h.action_type,output_name:h.output_name,action_arguments:h.action_arguments};b[h.action_type+h.output_name]=null;b[h.action_type+h.output_name]=k})}if(!f.workflow_outputs){f.workflow_outputs=[]}var d={id:f.id,type:f.type,tool_id:f.tool_id,tool_state:f.tool_state,tool_errors:f.tool_errors,input_connections:g,position:$(f.element).position(),annotation:f.annotation,post_job_actions:f.post_job_actions,workflow_outputs:f.workflow_outputs};a[f.id]=d});return{steps:a}},from_simple:function(b){wf=this;var c=0;wf.name=b.name;var a=false;$.each(b.steps,function(g,f){var 
d=prebuild_node(f.type,f.name,f.tool_id);d.init_field_data(f);if(f.position){d.element.css({top:f.position.top,left:f.position.left})}d.id=f.id;wf.nodes[d.id]=d;c=Math.max(c,parseInt(g));if(!a&&d.type==="tool"){if(d.workflow_outputs.length>0){a=true}else{$.each(d.post_job_actions,function(i,h){if(h.action_type==="HideDatasetAction"){a=true}})}}});wf.id_counter=c+1;$.each(b.steps,function(g,f){var d=wf.nodes[g];$.each(f.input_connections,function(i,h){if(h){if(!$.isArray(h)){h=[h]}$.each(h,function(k,j){var m=wf.nodes[j.id];var n=new Connector();n.connect(m.output_terminals[j.output_name],d.input_terminals[i]);n.redraw()})}});if(a&&d.type==="tool"){$.each(d.output_terminals,function(h,i){if(d.post_job_actions["HideDatasetAction"+i.name]===undefined){d.workflow_outputs.push(i.name);callout=$(d.element).find(".callout."+i.name);callout.find("img").attr("src",galaxy_config.root+"static/images/fugue/asterisk-small.png");workflow.has_changes=true}})}})},check_changes_in_active_form:function(){if(this.active_form_has_changes){this.has_changes=true;$("#right-content").find("form").submit();this.active_form_has_changes=false}},reload_active_node:function(){if(this.active_node){var a=this.active_node;this.clear_active_node();this.activate_node(a)}},clear_active_node:function(){if(this.active_node){this.active_node.make_inactive();this.active_node=null}parent.show_form_for_tool("<div>No node selected</div>")},activate_node:function(a){if(this.active_node!=a){this.check_changes_in_active_form();this.clear_active_node();parent.show_form_for_tool(a.form_html+a.tooltip,a);a.make_active();this.active_node=a}},node_changed:function(a){this.has_changes=true;if(this.active_node==a){this.check_changes_in_active_form();parent.show_form_for_tool(a.form_html+a.tooltip,a)}},layout:function(){this.check_changes_in_active_form();this.has_changes=true;var i={};var 
b={};$.each(this.nodes,function(l,k){if(i[l]===undefined){i[l]=0}if(b[l]===undefined){b[l]=[]}});$.each(this.nodes,function(l,k){$.each(k.input_terminals,function(m,n){$.each(n.connectors,function(p,q){var o=q.handle1.node;i[k.id]+=1;b[o.id].push(k.id)})})});node_ids_by_level=[];while(true){level_parents=[];for(var a in i){if(i[a]==0){level_parents.push(a)}}if(level_parents.length==0){break}node_ids_by_level.push(level_parents);for(var f in level_parents){var j=level_parents[f];delete i[j];for(var g in b[j]){i[b[j][g]]-=1}}}if(i.length){return}var d=this.nodes;var h=80;v_pad=30;var c=h;$.each(node_ids_by_level,function(k,l){l.sort(function(p,o){return $(d[p].element).position().top-$(d[o].element).position().top});var m=0;var n=v_pad;$.each(l,function(o,r){var q=d[r];var p=$(q.element);$(p).css({top:n,left:c});m=Math.max(m,$(p).width());n+=$(p).height()+v_pad});c+=m+h});$.each(d,function(k,l){l.redraw()})},bounds_for_all_nodes:function(){var d=Infinity,b=-Infinity,c=Infinity,a=-Infinity,f;$.each(this.nodes,function(h,g){e=$(g.element);f=e.position();d=Math.min(d,f.left);b=Math.max(b,f.left+e.width());c=Math.min(c,f.top);a=Math.max(a,f.top+e.width())});return{xmin:d,xmax:b,ymin:c,ymax:a}},fit_canvas_to_nodes:function(){var a=this.bounds_for_all_nodes();var f=this.canvas_container.position();var i=this.canvas_container.parent();var d=fix_delta(a.xmin,100);var h=fix_delta(a.ymin,100);d=Math.max(d,f.left);h=Math.max(h,f.top);var c=f.left-d;var g=f.top-h;var b=round_up(a.xmax+100,100)+d;var j=round_up(a.ymax+100,100)+h;b=Math.max(b,-c+i.width());j=Math.max(j,-g+i.height());this.canvas_container.css({left:c,top:g,width:b,height:j});this.canvas_container.children().each(function(){var k=$(this).position();$(this).css("left",k.left+d);$(this).css("top",k.top+h)})}});function fix_delta(a,b){if(a<b||a>3*b){new_pos=(Math.ceil(((a%b))/b)+1)*b;return(-(a-new_pos))}return 0}function round_up(a,b){return Math.ceil(a/b)*b}function prebuild_node(l,j,r){var i=$("<div class='toolForm 
toolFormInCanvas'></div>");var g=new Node({element:i});g.type=l;if(l=="tool"){g.tool_id=r}var n=$("<div class='toolFormTitle unselectable'>"+j+"</div>");i.append(n);i.css("left",$(window).scrollLeft()+20);i.css("top",$(window).scrollTop()+20);var m=$("<div class='toolFormBody'></div>");var h="<div><img height='16' align='middle' src='"+galaxy_config.root+"static/images/loading_small_white_bg.gif'/> loading tool info...</div>";m.append(h);g.form_html=h;i.append(m);var k=$("<div class='buttons' style='float: right;'></div>");k.append($("<div>").addClass("fa-icon-button fa fa-times").click(function(b){g.destroy()}));i.appendTo("#canvas-container");var d=$("#canvas-container").position();var c=$("#canvas-container").parent();var a=i.width();var q=i.height();i.css({left:(-d.left)+(c.width()/2)-(a/2),top:(-d.top)+(c.height()/2)-(q/2)});k.prependTo(n);a+=(k.width()+10);i.css("width",a);$(i).bind("dragstart",function(){workflow.activate_node(g)}).bind("dragend",function(){workflow.node_changed(this);workflow.fit_canvas_to_nodes();canvas_manager.draw_overview()}).bind("dragclickonly",function(){workflow.activate_node(g)}).bind("drag",function(o,p){var f=$(this).offsetParent().offset(),b=p.offsetX-f.left,s=p.offsetY-f.top;$(this).css({left:b,top:s});$(this).find(".terminal").each(function(){this.terminal.redraw()})});return g}function add_node(b,d,a){var c=prebuild_node(b,d,a);workflow.add_node(c);workflow.fit_canvas_to_nodes();canvas_manager.draw_overview();workflow.activate_node(c);return c}var ext_to_type=null;var type_to_type=null;function issubtype(b,a){b=ext_to_type[b];a=ext_to_type[a];return(type_to_type[b])&&(a in type_to_type[b])}function populate_datatype_info(a){ext_to_type=a.ext_to_class_name;type_to_type=a.class_to_classes}var 
NodeView=Backbone.View.extend({initialize:function(a){this.node=a.node;this.output_width=Math.max(150,this.$el.width());this.tool_body=this.$el.find(".toolFormBody");this.tool_body.find("div").remove();this.newInputsDiv().appendTo(this.tool_body)},render:function(){this.renderToolErrors();this.$el.css("width",Math.min(250,Math.max(this.$el.width(),this.output_width)))},renderToolErrors:function(){if(this.node.tool_errors){this.$el.addClass("tool-node-error")}else{this.$el.removeClass("tool-node-error")}},newInputsDiv:function(){return $("<div class='inputs'></div>")},updateMaxWidth:function(a){this.output_width=Math.max(this.output_width,a)},addRule:function(){this.tool_body.append($("<div class='rule'></div>"))},addDataInput:function(c){var d=new InputTerminalView({node:this.node,input:c});var f=d.el;var b=new DataInputView({terminalElement:f,input:c,nodeView:this,});var a=b.$el;var f=b.terminalElement;this.$(".inputs").append(a.prepend(f))},replaceDataInput:function(c,f){var g=new InputTerminalView({node:this.node,input:c});var d=g.el;this.$("div[name='"+c.name+"']").each(function(){$(this).find(".input-terminal").each(function(){var i=this.terminal.connectors[0];if(i){var h=d.terminal;if(i.handle1&&!h.attachable(i.handle1)){i.destroy()}else{h.connectors[0]=i;i.handle2=h}}});$(this).remove()});var b=new DataInputView({terminalElement:d,input:c,nodeView:this,skipResize:true,});var a=b.$el;f.append(a.prepend(d))},addDataOutput:function(a){var c=new OutputTerminalView({node:this.node,output:a});var b=new DataOutputView({output:a,terminalElement:c.el,nodeView:this,});this.tool_body.append(b.$el.append(b.terminalElement))}});var DataInputView=Backbone.View.extend({className:"form-row dataRow 
input-data-row",initialize:function(a){this.input=a.input;this.nodeView=a.nodeView;this.terminalElement=a.terminalElement;this.$el.attr("name",this.input.name).html(this.input.label);if(!a.skipResize){this.$el.css({position:"absolute",left:-1000,top:-1000,display:"none"});$("body").append(this.el);this.nodeView.updateMaxWidth(this.$el.outerWidth());this.$el.css({position:"",left:"",top:"",display:""});this.$el.remove()}},});var OutputCalloutView=Backbone.View.extend({tagName:"div",initialize:function(b){this.label=b.label;this.node=b.node;this.output=b.output;var a=this;this.$el.attr("class","callout "+this.label).css({display:"none"}).append($("<div class='buttons'></div>").append($("<img/>").attr("src",galaxy_config.root+"static/images/fugue/asterisk-small-outline.png").click(function(){if($.inArray(a.output.name,a.node.workflow_outputs)!=-1){a.node.workflow_outputs.splice($.inArray(a.output.name,a.node.workflow_outputs),1);a.$("img").attr("src",galaxy_config.root+"static/images/fugue/asterisk-small-outline.png")}else{a.node.workflow_outputs.push(a.output.name);a.$("img").attr("src",galaxy_config.root+"static/images/fugue/asterisk-small.png")}workflow.has_changes=true;canvas_manager.draw_overview()}))).tooltip({delay:500,title:"Mark dataset as a workflow output. 
All unmarked datasets will be hidden."});this.$el.css({top:"50%",margin:"-8px 0px 0px 0px",right:8});this.$el.show();this.resetImage()},resetImage:function(){if($.inArray(this.output.name,this.node.workflow_outputs)===-1){this.$("img").attr("src",galaxy_config.root+"static/images/fugue/asterisk-small-outline.png")}else{this.$("img").attr("src",galaxy_config.root+"static/images/fugue/asterisk-small.png")}},hoverImage:function(){this.$("img").attr("src",galaxy_config.root+"static/images/fugue/asterisk-small-yellow.png")}});var DataOutputView=Backbone.View.extend({className:"form-row dataRow",initialize:function(c){this.output=c.output;this.terminalElement=c.terminalElement;this.nodeView=c.nodeView;var a=this.output;var b=a.name;var d=this.nodeView.node;if(a.extensions.indexOf("input")<0){b=b+" ("+a.extensions.join(", ")+")"}this.$el.html(b);if(d.type=="tool"){var f=new OutputCalloutView({label:b,output:a,node:d,});this.$el.append(f.el);this.$el.hover(function(){f.hoverImage()},function(){f.resetImage()})}this.$el.css({position:"absolute",left:-1000,top:-1000,display:"none"});$("body").append(this.el);this.nodeView.updateMaxWidth(this.$el.outerWidth()+17);this.$el.css({position:"",left:"",top:"",display:""}).detach()}});var InputTerminalView=Backbone.View.extend({className:"terminal input-terminal",initialize:function(d){var h=d.node;var b=d.input;var c=b.name;var g=b.extensions;var a=b.multiple;var f=this.el.terminal=new InputTerminal({element:this.el,datatypes:g,multiple:a});f.node=h;f.name=c;h.input_terminals[c]=f},events:{dropinit:"onDropInit",dropstart:"onDropStart",dropend:"onDropEnd",drop:"onDrop",hover:"onHover",},onDropInit:function(b,c){var a=this.el.terminal;return $(c.drag).hasClass("output-terminal")&&a.canAccept(c.drag.terminal)},onDropStart:function(a,b){if(b.proxy.terminal){b.proxy.terminal.connectors[0].inner_color="#BBFFBB"}},onDropEnd:function(a,b){if(b.proxy.terminal){b.proxy.terminal.connectors[0].inner_color="#FFFFFF"}},onDrop:function(b,c){var 
a=this.el.terminal;new Connector(c.drag.terminal,a).redraw()},onHover:function(){var c=this.el;var b=c.terminal;if(b.connectors.length>0){var a=$("<div class='callout'></div>").css({display:"none"}).appendTo("body").append($("<div class='button'></div>").append($("<div/>").addClass("fa-icon-button fa fa-times").click(function(){$.each(b.connectors,function(f,d){if(d){d.destroy()}});a.remove()}))).bind("mouseleave",function(){$(this).remove()});a.css({top:$(c).offset().top-2,left:$(c).offset().left-a.width(),"padding-right":$(c).width()}).show()}},});var OutputTerminalView=Backbone.View.extend({className:"terminal output-terminal",initialize:function(c){var i=c.node;var a=c.output;var b=a.name;var h=a.extensions;var g=this.el;var f=g;var d=g.terminal=new OutputTerminal({element:g,datatypes:h});d.node=i;d.name=b;i.output_terminals[b]=d},events:{drag:"onDrag",dragstart:"onDragStart",dragend:"onDragEnd",},onDrag:function(b,c){var a=function(){var f=$(c.proxy).offsetParent().offset(),d=c.offsetX-f.left,g=c.offsetY-f.top;$(c.proxy).css({left:d,top:g});c.proxy.terminal.redraw();canvas_manager.update_viewport_overlay()};a();$("#canvas-container").get(0).scroll_panel.test(b,a)},onDragStart:function(b,f){$(f.available).addClass("input-terminal-active");workflow.check_changes_in_active_form();var a=$('<div class="drag-terminal" style="position: absolute;"></div>').appendTo("#canvas-container").get(0);a.terminal=new OutputTerminal({element:a});var g=new Connector();g.dragging=true;g.connect(this.el.terminal,a.terminal);return a},onDragEnd:function(a,b){b.proxy.terminal.connectors[0].destroy();$(b.proxy).remove();$(b.available).removeClass("input-terminal-active");$("#canvas-container").get(0).scroll_panel.stop()}});function ScrollPanel(a){this.panel=a}$.extend(ScrollPanel.prototype,{test:function(v,d){clearTimeout(this.timeout);var 
k=v.pageX,j=v.pageY,l=$(this.panel),c=l.position(),b=l.width(),i=l.height(),w=l.parent(),s=w.width(),a=w.height(),r=w.offset(),p=r.left,m=r.top,A=p+w.width(),u=m+w.height(),B=-(b-(s/2)),z=-(i-(a/2)),g=(s/2),f=(a/2),h=false,q=5,o=23;if(k-q<p){if(c.left<g){var n=Math.min(o,g-c.left);l.css("left",c.left+n);h=true}}else{if(k+q>A){if(c.left>B){var n=Math.min(o,c.left-B);l.css("left",c.left-n);h=true}}else{if(j-q<m){if(c.top<f){var n=Math.min(o,f-c.top);l.css("top",c.top+n);h=true}}else{if(j+q>u){if(c.top>z){var n=Math.min(o,c.top-B);l.css("top",(c.top-n)+"px");h=true}}}}}if(h){d();var l=this;this.timeout=setTimeout(function(){l.test(v,d)},50)}},stop:function(b,a){clearTimeout(this.timeout)}});function CanvasManager(b,a){this.cv=b;this.cc=this.cv.find("#canvas-container");this.oc=a.find("#overview-canvas");this.ov=a.find("#overview-viewport");this.init_drag()}$.extend(CanvasManager.prototype,{init_drag:function(){var b=this;var a=function(f,g){f=Math.min(f,b.cv.width()/2);f=Math.max(f,-b.cc.width()+b.cv.width()/2);g=Math.min(g,b.cv.height()/2);g=Math.max(g,-b.cc.height()+b.cv.height()/2);b.cc.css({left:f,top:g});b.update_viewport_overlay()};this.cc.each(function(){this.scroll_panel=new ScrollPanel(this)});var d,c;this.cv.bind("dragstart",function(){var g=$(this).offset();var f=b.cc.position();c=f.top-g.top;d=f.left-g.left}).bind("drag",function(f,g){a(g.offsetX+d,g.offsetY+c)}).bind("dragend",function(){workflow.fit_canvas_to_nodes();b.draw_overview()});this.ov.bind("drag",function(k,l){var h=b.cc.width(),n=b.cc.height(),m=b.oc.width(),j=b.oc.height(),f=$(this).offsetParent().offset(),i=l.offsetX-f.left,g=l.offsetY-f.top;a(-(i/m*h),-(g/j*n))}).bind("dragend",function(){workflow.fit_canvas_to_nodes();b.draw_overview()});$("#overview-border").bind("drag",function(g,i){var j=$(this).offsetParent();var h=j.offset();var f=Math.max(j.width()-(i.offsetX-h.left),j.height()-(i.offsetY-h.top));$(this).css({width:f,height:f});b.draw_overview()});$("#overview-border 
div").bind("drag",function(){})},update_viewport_overlay:function(){var b=this.cc,f=this.cv,a=this.oc,c=this.ov,d=b.width(),j=b.height(),i=a.width(),g=a.height(),h=b.position();c.css({left:-(h.left/d*i),top:-(h.top/j*g),width:(f.width()/d*i)-2,height:(f.height()/j*g)-2})},draw_overview:function(){var j=$("#overview-canvas"),m=j.parent().parent().width(),i=j.get(0).getContext("2d"),d=$("#canvas-container").width(),l=$("#canvas-container").height();var g,a,k,f;var h=this.cv.width();var b=this.cv.height();if(d<h&&l<b){k=d/h*m;f=(m-k)/2;g=l/b*m;a=(m-g)/2}else{if(d<l){a=0;g=m;k=Math.ceil(g*d/l);f=(m-k)/2}else{k=m;f=0;g=Math.ceil(k*l/d);a=(m-g)/2}}j.parent().css({left:f,top:a,width:k,height:g});j.attr("width",k);j.attr("height",g);$.each(workflow.nodes,function(t,q){i.fillStyle="#D2C099";i.strokeStyle="#D8B365";i.lineWidth=1;var s=$(q.element),n=s.position(),c=n.left/d*k,r=n.top/l*g,o=s.width()/d*k,p=s.height()/l*g;if(q.tool_errors){i.fillStyle="#FFCCCC";i.strokeStyle="#AA6666"}else{if(q.workflow_outputs!=undefined&&q.workflow_outputs.length>0){i.fillStyle="#E8A92D";i.strokeStyle="#E8A92D"}}i.fillRect(c,r,o,p);i.strokeRect(c,r,o,p)});this.update_viewport_overlay()}});
\ No newline at end of file
+var Terminal=Backbone.Model.extend({initialize:function(a){this.element=a.element;this.connectors=[]},connect:function(a){this.connectors.push(a);if(this.node){this.node.markChanged()}},disconnect:function(a){this.connectors.splice($.inArray(a,this.connectors),1);if(this.node){this.node.markChanged()}},redraw:function(){$.each(this.connectors,function(a,b){b.redraw()})},destroy:function(){$.each(this.connectors.slice(),function(a,b){b.destroy()})}});var OutputTerminal=Terminal.extend({initialize:function(a){Terminal.prototype.initialize.call(this,a);this.datatypes=a.datatypes}});var InputTerminal=Terminal.extend({initialize:function(a){Terminal.prototype.initialize.call(this,a);this.update(a.input)},update:function(a){this.datatypes=a.extensions;this.multiple=a.multiple},canAccept:function(a){if(this._inputFilled()){return false}else{return this.attachable(a)}},_inputFilled:function(){return !(this.connectors.length<1||this.multiple)},attachable:function(a){for(var c in this.datatypes){var f=new Array();f=f.concat(a.datatypes);if(a.node.post_job_actions){for(var d in a.node.post_job_actions){var g=a.node.post_job_actions[d];if(g.action_type=="ChangeDatatypeAction"&&(g.output_name==""||g.output_name==a.name)&&g.action_arguments){f.push(g.action_arguments.newtype)}}}for(var b in f){if(f[b]=="input"||issubtype(f[b],this.datatypes[c])){return true}}}return false}});function Connector(b,a){this.canvas=null;this.dragging=false;this.inner_color="#FFFFFF";this.outer_color="#D8B365";if(b&&a){this.connect(b,a)}}$.extend(Connector.prototype,{connect:function(b,a){this.handle1=b;if(this.handle1){this.handle1.connect(this)}this.handle2=a;if(this.handle2){this.handle2.connect(this)}},destroy:function(){if(this.handle1){this.handle1.disconnect(this)}if(this.handle2){this.handle2.disconnect(this)}$(this.canvas).remove()},redraw:function(){var 
d=$("#canvas-container");if(!this.canvas){this.canvas=document.createElement("canvas");if(window.G_vmlCanvasManager){G_vmlCanvasManager.initElement(this.canvas)}d.append($(this.canvas));if(this.dragging){this.canvas.style.zIndex="300"}}var n=function(c){return $(c).offset().left-d.offset().left};var i=function(c){return $(c).offset().top-d.offset().top};if(!this.handle1||!this.handle2){return}var h=n(this.handle1.element)+5;var g=i(this.handle1.element)+5;var p=n(this.handle2.element)+5;var m=i(this.handle2.element)+5;var f=100;var k=Math.min(h,p);var a=Math.max(h,p);var j=Math.min(g,m);var t=Math.max(g,m);var b=Math.min(Math.max(Math.abs(t-j)/2,100),300);var o=k-f;var s=j-f;var q=a-k+2*f;var l=t-j+2*f;this.canvas.style.left=o+"px";this.canvas.style.top=s+"px";this.canvas.setAttribute("width",q);this.canvas.setAttribute("height",l);h-=o;g-=s;p-=o;m-=s;var r=this.canvas.getContext("2d");r.lineCap="round";r.strokeStyle=this.outer_color;r.lineWidth=7;r.beginPath();r.moveTo(h,g);r.bezierCurveTo(h+b,g,p-b,m,p,m);r.stroke();r.strokeStyle=this.inner_color;r.lineWidth=5;r.beginPath();r.moveTo(h,g);r.bezierCurveTo(h+b,g,p-b,m,p,m);r.stroke()}});var Node=Backbone.Model.extend({initialize:function(a){this.element=a.element;this.input_terminals={};this.output_terminals={};this.tool_errors={}},redraw:function(){$.each(this.input_terminals,function(a,b){b.redraw()});$.each(this.output_terminals,function(a,b){b.redraw()})},destroy:function(){$.each(this.input_terminals,function(a,b){b.destroy()});$.each(this.output_terminals,function(a,b){b.destroy()});workflow.remove_node(this);$(this.element).remove()},make_active:function(){$(this.element).addClass("toolForm-active")},make_inactive:function(){var 
a=this.element.get(0);(function(b){b.removeChild(a);b.appendChild(a)})(a.parentNode);$(a).removeClass("toolForm-active")},init_field_data:function(b){if(b.type){this.type=b.type}this.name=b.name;this.form_html=b.form_html;this.tool_state=b.tool_state;this.tool_errors=b.tool_errors;this.tooltip=b.tooltip?b.tooltip:"";this.annotation=b.annotation;this.post_job_actions=b.post_job_actions?b.post_job_actions:{};this.workflow_outputs=b.workflow_outputs?b.workflow_outputs:[];var a=this;var c=new NodeView({el:this.element[0],node:a,});a.nodeView=c;$.each(b.data_inputs,function(f,d){c.addDataInput(d)});if((b.data_inputs.length>0)&&(b.data_outputs.length>0)){c.addRule()}$.each(b.data_outputs,function(f,d){c.addDataOutput(d)});c.render();workflow.node_changed(this)},update_field_data:function(d){var c=this;nodeView=c.nodeView;this.tool_state=d.tool_state;this.form_html=d.form_html;this.tool_errors=d.tool_errors;this.annotation=d.annotation;var f=$.parseJSON(d.post_job_actions);this.post_job_actions=f?f:{};c.nodeView.renderToolErrors();var g=nodeView.$("div.inputs");var a=nodeView.newInputsDiv();var b={};_.each(d.data_inputs,function(h){var i=c.nodeView.addDataInput(h,a);b[h.name]=i});_.each(_.difference(_.values(nodeView.terminalViews),_.values(b)),function(h){h.el.terminal.destroy()});nodeView.terminalViews=b;g.replaceWith(a);this.markChanged();this.redraw()},error:function(d){var a=$(this.element).find(".toolFormBody");a.find("div").remove();var c="<div style='color: red; text-style: italic;'>"+d+"</div>";this.form_html=c;a.html(c);workflow.node_changed(this)},markChanged:function(){workflow.node_changed(this)}});function 
Workflow(a){this.canvas_container=a;this.id_counter=0;this.nodes={};this.name=null;this.has_changes=false;this.active_form_has_changes=false}$.extend(Workflow.prototype,{add_node:function(a){a.id=this.id_counter;a.element.attr("id","wf-node-step-"+a.id);this.id_counter++;this.nodes[a.id]=a;this.has_changes=true;a.workflow=this},remove_node:function(a){if(this.active_node==a){this.clear_active_node()}delete this.nodes[a.id];this.has_changes=true},remove_all:function(){wf=this;$.each(this.nodes,function(b,a){a.destroy();wf.remove_node(a)})},rectify_workflow_outputs:function(){var b=false;var a=false;$.each(this.nodes,function(c,d){if(d.workflow_outputs&&d.workflow_outputs.length>0){b=true}$.each(d.post_job_actions,function(g,f){if(f.action_type==="HideDatasetAction"){a=true}})});if(b!==false||a!==false){$.each(this.nodes,function(c,g){if(g.type==="tool"){var f=false;if(g.post_job_actions==null){g.post_job_actions={};f=true}var d=[];$.each(g.post_job_actions,function(i,h){if(h.action_type=="HideDatasetAction"){d.push(i)}});if(d.length>0){$.each(d,function(h,j){f=true;delete g.post_job_actions[j]})}if(b){$.each(g.output_terminals,function(i,j){var h=true;$.each(g.workflow_outputs,function(l,m){if(j.name===m){h=false}});if(h===true){f=true;var k={action_type:"HideDatasetAction",output_name:j.name,action_arguments:{}};g.post_job_actions["HideDatasetAction"+j.name]=null;g.post_job_actions["HideDatasetAction"+j.name]=k}})}if(workflow.active_node==g&&f===true){workflow.reload_active_node()}}})}},to_simple:function(){var a={};$.each(this.nodes,function(c,f){var g={};$.each(f.input_terminals,function(i,j){g[j.name]=null;var h=[];$.each(j.connectors,function(k,l){h[k]={id:l.handle1.node.id,output_name:l.handle1.name};g[j.name]=h})});var b={};if(f.post_job_actions){$.each(f.post_job_actions,function(j,h){var 
k={action_type:h.action_type,output_name:h.output_name,action_arguments:h.action_arguments};b[h.action_type+h.output_name]=null;b[h.action_type+h.output_name]=k})}if(!f.workflow_outputs){f.workflow_outputs=[]}var d={id:f.id,type:f.type,tool_id:f.tool_id,tool_state:f.tool_state,tool_errors:f.tool_errors,input_connections:g,position:$(f.element).position(),annotation:f.annotation,post_job_actions:f.post_job_actions,workflow_outputs:f.workflow_outputs};a[f.id]=d});return{steps:a}},from_simple:function(b){wf=this;var c=0;wf.name=b.name;var a=false;$.each(b.steps,function(g,f){var d=prebuild_node(f.type,f.name,f.tool_id);d.init_field_data(f);if(f.position){d.element.css({top:f.position.top,left:f.position.left})}d.id=f.id;wf.nodes[d.id]=d;c=Math.max(c,parseInt(g));if(!a&&d.type==="tool"){if(d.workflow_outputs.length>0){a=true}else{$.each(d.post_job_actions,function(i,h){if(h.action_type==="HideDatasetAction"){a=true}})}}});wf.id_counter=c+1;$.each(b.steps,function(g,f){var d=wf.nodes[g];$.each(f.input_connections,function(i,h){if(h){if(!$.isArray(h)){h=[h]}$.each(h,function(k,j){var m=wf.nodes[j.id];var n=new Connector();n.connect(m.output_terminals[j.output_name],d.input_terminals[i]);n.redraw()})}});if(a&&d.type==="tool"){$.each(d.output_terminals,function(h,i){if(d.post_job_actions["HideDatasetAction"+i.name]===undefined){d.workflow_outputs.push(i.name);callout=$(d.element).find(".callout."+i.name);callout.find("img").attr("src",galaxy_config.root+"static/images/fugue/asterisk-small.png");workflow.has_changes=true}})}})},check_changes_in_active_form:function(){if(this.active_form_has_changes){this.has_changes=true;$("#right-content").find("form").submit();this.active_form_has_changes=false}},reload_active_node:function(){if(this.active_node){var a=this.active_node;this.clear_active_node();this.activate_node(a)}},clear_active_node:function(){if(this.active_node){this.active_node.make_inactive();this.active_node=null}parent.show_form_for_tool("<div>No node 
selected</div>")},activate_node:function(a){if(this.active_node!=a){this.check_changes_in_active_form();this.clear_active_node();parent.show_form_for_tool(a.form_html+a.tooltip,a);a.make_active();this.active_node=a}},node_changed:function(a){this.has_changes=true;if(this.active_node==a){this.check_changes_in_active_form();parent.show_form_for_tool(a.form_html+a.tooltip,a)}},layout:function(){this.check_changes_in_active_form();this.has_changes=true;var i={};var b={};$.each(this.nodes,function(l,k){if(i[l]===undefined){i[l]=0}if(b[l]===undefined){b[l]=[]}});$.each(this.nodes,function(l,k){$.each(k.input_terminals,function(m,n){$.each(n.connectors,function(p,q){var o=q.handle1.node;i[k.id]+=1;b[o.id].push(k.id)})})});node_ids_by_level=[];while(true){level_parents=[];for(var a in i){if(i[a]==0){level_parents.push(a)}}if(level_parents.length==0){break}node_ids_by_level.push(level_parents);for(var f in level_parents){var j=level_parents[f];delete i[j];for(var g in b[j]){i[b[j][g]]-=1}}}if(i.length){return}var d=this.nodes;var h=80;v_pad=30;var c=h;$.each(node_ids_by_level,function(k,l){l.sort(function(p,o){return $(d[p].element).position().top-$(d[o].element).position().top});var m=0;var n=v_pad;$.each(l,function(o,r){var q=d[r];var p=$(q.element);$(p).css({top:n,left:c});m=Math.max(m,$(p).width());n+=$(p).height()+v_pad});c+=m+h});$.each(d,function(k,l){l.redraw()})},bounds_for_all_nodes:function(){var d=Infinity,b=-Infinity,c=Infinity,a=-Infinity,f;$.each(this.nodes,function(h,g){e=$(g.element);f=e.position();d=Math.min(d,f.left);b=Math.max(b,f.left+e.width());c=Math.min(c,f.top);a=Math.max(a,f.top+e.width())});return{xmin:d,xmax:b,ymin:c,ymax:a}},fit_canvas_to_nodes:function(){var a=this.bounds_for_all_nodes();var f=this.canvas_container.position();var i=this.canvas_container.parent();var d=fix_delta(a.xmin,100);var h=fix_delta(a.ymin,100);d=Math.max(d,f.left);h=Math.max(h,f.top);var c=f.left-d;var g=f.top-h;var b=round_up(a.xmax+100,100)+d;var 
j=round_up(a.ymax+100,100)+h;b=Math.max(b,-c+i.width());j=Math.max(j,-g+i.height());this.canvas_container.css({left:c,top:g,width:b,height:j});this.canvas_container.children().each(function(){var k=$(this).position();$(this).css("left",k.left+d);$(this).css("top",k.top+h)})}});function fix_delta(a,b){if(a<b||a>3*b){new_pos=(Math.ceil(((a%b))/b)+1)*b;return(-(a-new_pos))}return 0}function round_up(a,b){return Math.ceil(a/b)*b}function prebuild_node(l,j,r){var i=$("<div class='toolForm toolFormInCanvas'></div>");var g=new Node({element:i});g.type=l;if(l=="tool"){g.tool_id=r}var n=$("<div class='toolFormTitle unselectable'>"+j+"</div>");i.append(n);i.css("left",$(window).scrollLeft()+20);i.css("top",$(window).scrollTop()+20);var m=$("<div class='toolFormBody'></div>");var h="<div><img height='16' align='middle' src='"+galaxy_config.root+"static/images/loading_small_white_bg.gif'/> loading tool info...</div>";m.append(h);g.form_html=h;i.append(m);var k=$("<div class='buttons' style='float: right;'></div>");k.append($("<div>").addClass("fa-icon-button fa fa-times").click(function(b){g.destroy()}));i.appendTo("#canvas-container");var d=$("#canvas-container").position();var c=$("#canvas-container").parent();var a=i.width();var q=i.height();i.css({left:(-d.left)+(c.width()/2)-(a/2),top:(-d.top)+(c.height()/2)-(q/2)});k.prependTo(n);a+=(k.width()+10);i.css("width",a);$(i).bind("dragstart",function(){workflow.activate_node(g)}).bind("dragend",function(){workflow.node_changed(this);workflow.fit_canvas_to_nodes();canvas_manager.draw_overview()}).bind("dragclickonly",function(){workflow.activate_node(g)}).bind("drag",function(o,p){var f=$(this).offsetParent().offset(),b=p.offsetX-f.left,s=p.offsetY-f.top;$(this).css({left:b,top:s});$(this).find(".terminal").each(function(){this.terminal.redraw()})});return g}function add_node(b,d,a){var c=prebuild_node(b,d,a);workflow.add_node(c);workflow.fit_canvas_to_nodes();canvas_manager.draw_overview();workflow.activate_node(c);return 
c}var ext_to_type=null;var type_to_type=null;function issubtype(b,a){b=ext_to_type[b];a=ext_to_type[a];return(type_to_type[b])&&(a in type_to_type[b])}function populate_datatype_info(a){ext_to_type=a.ext_to_class_name;type_to_type=a.class_to_classes}var NodeView=Backbone.View.extend({initialize:function(a){this.node=a.node;this.output_width=Math.max(150,this.$el.width());this.tool_body=this.$el.find(".toolFormBody");this.tool_body.find("div").remove();this.newInputsDiv().appendTo(this.tool_body);this.terminalViews={}},render:function(){this.renderToolErrors();this.$el.css("width",Math.min(250,Math.max(this.$el.width(),this.output_width)))},renderToolErrors:function(){if(this.node.tool_errors){this.$el.addClass("tool-node-error")}else{this.$el.removeClass("tool-node-error")}},newInputsDiv:function(){return $("<div class='inputs'></div>")},updateMaxWidth:function(a){this.output_width=Math.max(this.output_width,a)},addRule:function(){this.tool_body.append($("<div class='rule'></div>"))},addDataInput:function(d,a){var f=true;if(!a){a=this.$(".inputs");f=false}var h=this.terminalViews[d.name];if(!h){h=new InputTerminalView({node:this.node,input:d})}else{var g=h.el.terminal;g.update(d);_.each(g.connectors,function(j){if(j.handle1&&!g.attachable(j.handle1)){j.destroy()}})}this.terminalViews[d.name]=h;var i=h.el;var c=new DataInputView({terminalElement:i,input:d,nodeView:this,skipResize:f});var b=c.$el;var i=c.terminalElement;a.append(b.prepend(i));return h},addDataOutput:function(a){var c=new OutputTerminalView({node:this.node,output:a});var b=new DataOutputView({output:a,terminalElement:c.el,nodeView:this,});this.tool_body.append(b.$el.append(b.terminalElement))}});var DataInputView=Backbone.View.extend({className:"form-row dataRow 
input-data-row",initialize:function(a){this.input=a.input;this.nodeView=a.nodeView;this.terminalElement=a.terminalElement;this.$el.attr("name",this.input.name).html(this.input.label);if(!a.skipResize){this.$el.css({position:"absolute",left:-1000,top:-1000,display:"none"});$("body").append(this.el);this.nodeView.updateMaxWidth(this.$el.outerWidth());this.$el.css({position:"",left:"",top:"",display:""});this.$el.remove()}},});var OutputCalloutView=Backbone.View.extend({tagName:"div",initialize:function(b){this.label=b.label;this.node=b.node;this.output=b.output;var a=this;this.$el.attr("class","callout "+this.label).css({display:"none"}).append($("<div class='buttons'></div>").append($("<img/>").attr("src",galaxy_config.root+"static/images/fugue/asterisk-small-outline.png").click(function(){if($.inArray(a.output.name,a.node.workflow_outputs)!=-1){a.node.workflow_outputs.splice($.inArray(a.output.name,a.node.workflow_outputs),1);a.$("img").attr("src",galaxy_config.root+"static/images/fugue/asterisk-small-outline.png")}else{a.node.workflow_outputs.push(a.output.name);a.$("img").attr("src",galaxy_config.root+"static/images/fugue/asterisk-small.png")}workflow.has_changes=true;canvas_manager.draw_overview()}))).tooltip({delay:500,title:"Mark dataset as a workflow output. 
All unmarked datasets will be hidden."});this.$el.css({top:"50%",margin:"-8px 0px 0px 0px",right:8});this.$el.show();this.resetImage()},resetImage:function(){if($.inArray(this.output.name,this.node.workflow_outputs)===-1){this.$("img").attr("src",galaxy_config.root+"static/images/fugue/asterisk-small-outline.png")}else{this.$("img").attr("src",galaxy_config.root+"static/images/fugue/asterisk-small.png")}},hoverImage:function(){this.$("img").attr("src",galaxy_config.root+"static/images/fugue/asterisk-small-yellow.png")}});var DataOutputView=Backbone.View.extend({className:"form-row dataRow",initialize:function(c){this.output=c.output;this.terminalElement=c.terminalElement;this.nodeView=c.nodeView;var a=this.output;var b=a.name;var d=this.nodeView.node;if(a.extensions.indexOf("input")<0){b=b+" ("+a.extensions.join(", ")+")"}this.$el.html(b);if(d.type=="tool"){var f=new OutputCalloutView({label:b,output:a,node:d,});this.$el.append(f.el);this.$el.hover(function(){f.hoverImage()},function(){f.resetImage()})}this.$el.css({position:"absolute",left:-1000,top:-1000,display:"none"});$("body").append(this.el);this.nodeView.updateMaxWidth(this.$el.outerWidth()+17);this.$el.css({position:"",left:"",top:"",display:""}).detach()}});var InputTerminalView=Backbone.View.extend({className:"terminal input-terminal",initialize:function(c){var f=c.node;var a=c.input;var b=a.name;var d=this.el.terminal=new InputTerminal({element:this.el,input:a});d.node=f;d.name=b;f.input_terminals[b]=d},events:{dropinit:"onDropInit",dropstart:"onDropStart",dropend:"onDropEnd",drop:"onDrop",hover:"onHover",},onDropInit:function(b,c){var a=this.el.terminal;return $(c.drag).hasClass("output-terminal")&&a.canAccept(c.drag.terminal)},onDropStart:function(a,b){if(b.proxy.terminal){b.proxy.terminal.connectors[0].inner_color="#BBFFBB"}},onDropEnd:function(a,b){if(b.proxy.terminal){b.proxy.terminal.connectors[0].inner_color="#FFFFFF"}},onDrop:function(b,c){var a=this.el.terminal;new 
Connector(c.drag.terminal,a).redraw()},onHover:function(){var c=this.el;var b=c.terminal;if(b.connectors.length>0){var a=$("<div class='callout'></div>").css({display:"none"}).appendTo("body").append($("<div class='button'></div>").append($("<div/>").addClass("fa-icon-button fa fa-times").click(function(){$.each(b.connectors,function(f,d){if(d){d.destroy()}});a.remove()}))).bind("mouseleave",function(){$(this).remove()});a.css({top:$(c).offset().top-2,left:$(c).offset().left-a.width(),"padding-right":$(c).width()}).show()}},});var OutputTerminalView=Backbone.View.extend({className:"terminal output-terminal",initialize:function(c){var i=c.node;var a=c.output;var b=a.name;var h=a.extensions;var g=this.el;var f=g;var d=g.terminal=new OutputTerminal({element:g,datatypes:h});d.node=i;d.name=b;i.output_terminals[b]=d},events:{drag:"onDrag",dragstart:"onDragStart",dragend:"onDragEnd",},onDrag:function(b,c){var a=function(){var f=$(c.proxy).offsetParent().offset(),d=c.offsetX-f.left,g=c.offsetY-f.top;$(c.proxy).css({left:d,top:g});c.proxy.terminal.redraw();canvas_manager.update_viewport_overlay()};a();$("#canvas-container").get(0).scroll_panel.test(b,a)},onDragStart:function(b,f){$(f.available).addClass("input-terminal-active");workflow.check_changes_in_active_form();var a=$('<div class="drag-terminal" style="position: absolute;"></div>').appendTo("#canvas-container").get(0);a.terminal=new OutputTerminal({element:a});var g=new Connector();g.dragging=true;g.connect(this.el.terminal,a.terminal);return a},onDragEnd:function(a,b){b.proxy.terminal.connectors[0].destroy();$(b.proxy).remove();$(b.available).removeClass("input-terminal-active");$("#canvas-container").get(0).scroll_panel.stop()}});function ScrollPanel(a){this.panel=a}$.extend(ScrollPanel.prototype,{test:function(v,d){clearTimeout(this.timeout);var 
k=v.pageX,j=v.pageY,l=$(this.panel),c=l.position(),b=l.width(),i=l.height(),w=l.parent(),s=w.width(),a=w.height(),r=w.offset(),p=r.left,m=r.top,A=p+w.width(),u=m+w.height(),B=-(b-(s/2)),z=-(i-(a/2)),g=(s/2),f=(a/2),h=false,q=5,o=23;if(k-q<p){if(c.left<g){var n=Math.min(o,g-c.left);l.css("left",c.left+n);h=true}}else{if(k+q>A){if(c.left>B){var n=Math.min(o,c.left-B);l.css("left",c.left-n);h=true}}else{if(j-q<m){if(c.top<f){var n=Math.min(o,f-c.top);l.css("top",c.top+n);h=true}}else{if(j+q>u){if(c.top>z){var n=Math.min(o,c.top-B);l.css("top",(c.top-n)+"px");h=true}}}}}if(h){d();var l=this;this.timeout=setTimeout(function(){l.test(v,d)},50)}},stop:function(b,a){clearTimeout(this.timeout)}});function CanvasManager(b,a){this.cv=b;this.cc=this.cv.find("#canvas-container");this.oc=a.find("#overview-canvas");this.ov=a.find("#overview-viewport");this.init_drag()}$.extend(CanvasManager.prototype,{init_drag:function(){var b=this;var a=function(f,g){f=Math.min(f,b.cv.width()/2);f=Math.max(f,-b.cc.width()+b.cv.width()/2);g=Math.min(g,b.cv.height()/2);g=Math.max(g,-b.cc.height()+b.cv.height()/2);b.cc.css({left:f,top:g});b.update_viewport_overlay()};this.cc.each(function(){this.scroll_panel=new ScrollPanel(this)});var d,c;this.cv.bind("dragstart",function(){var g=$(this).offset();var f=b.cc.position();c=f.top-g.top;d=f.left-g.left}).bind("drag",function(f,g){a(g.offsetX+d,g.offsetY+c)}).bind("dragend",function(){workflow.fit_canvas_to_nodes();b.draw_overview()});this.ov.bind("drag",function(k,l){var h=b.cc.width(),n=b.cc.height(),m=b.oc.width(),j=b.oc.height(),f=$(this).offsetParent().offset(),i=l.offsetX-f.left,g=l.offsetY-f.top;a(-(i/m*h),-(g/j*n))}).bind("dragend",function(){workflow.fit_canvas_to_nodes();b.draw_overview()});$("#overview-border").bind("drag",function(g,i){var j=$(this).offsetParent();var h=j.offset();var f=Math.max(j.width()-(i.offsetX-h.left),j.height()-(i.offsetY-h.top));$(this).css({width:f,height:f});b.draw_overview()});$("#overview-border 
div").bind("drag",function(){})},update_viewport_overlay:function(){var b=this.cc,f=this.cv,a=this.oc,c=this.ov,d=b.width(),j=b.height(),i=a.width(),g=a.height(),h=b.position();c.css({left:-(h.left/d*i),top:-(h.top/j*g),width:(f.width()/d*i)-2,height:(f.height()/j*g)-2})},draw_overview:function(){var j=$("#overview-canvas"),m=j.parent().parent().width(),i=j.get(0).getContext("2d"),d=$("#canvas-container").width(),l=$("#canvas-container").height();var g,a,k,f;var h=this.cv.width();var b=this.cv.height();if(d<h&&l<b){k=d/h*m;f=(m-k)/2;g=l/b*m;a=(m-g)/2}else{if(d<l){a=0;g=m;k=Math.ceil(g*d/l);f=(m-k)/2}else{k=m;f=0;g=Math.ceil(k*l/d);a=(m-g)/2}}j.parent().css({left:f,top:a,width:k,height:g});j.attr("width",k);j.attr("height",g);$.each(workflow.nodes,function(t,q){i.fillStyle="#D2C099";i.strokeStyle="#D8B365";i.lineWidth=1;var s=$(q.element),n=s.position(),c=n.left/d*k,r=n.top/l*g,o=s.width()/d*k,p=s.height()/l*g;if(q.tool_errors){i.fillStyle="#FFCCCC";i.strokeStyle="#AA6666"}else{if(q.workflow_outputs!=undefined&&q.workflow_outputs.length>0){i.fillStyle="#E8A92D";i.strokeStyle="#E8A92D"}}i.fillRect(c,r,o,p);i.strokeRect(c,r,o,p)});this.update_viewport_overlay()}});
\ No newline at end of file
diff -r 2342d31a1f5cf906f1aecb0689fe64da384cf940 -r 68b49fdc63e2445ee2124e0185373d964bc64bc8 static/scripts/packed/mvc/dataset/hda-edit.js
--- a/static/scripts/packed/mvc/dataset/hda-edit.js
+++ b/static/scripts/packed/mvc/dataset/hda-edit.js
@@ -1,1 +1,1 @@
-define(["mvc/dataset/hda-model","mvc/dataset/hda-base","mvc/tags","mvc/annotations","utils/localization"],function(g,b,a,e,f){var h=b.HDABaseView.extend({initialize:function(i){b.HDABaseView.prototype.initialize.call(this,i);this.hasUser=i.hasUser;this.defaultPrimaryActionButtonRenderers=[this._render_showParamsButton,this._render_rerunButton];this.tagsEditorShown=i.tagsEditorShown||false;this.annotationEditorShown=i.annotationEditorShown||false},_render_titleButtons:function(){return b.HDABaseView.prototype._render_titleButtons.call(this).concat([this._render_editButton(),this._render_deleteButton()])},_render_editButton:function(){if((this.model.get("state")===g.HistoryDatasetAssociation.STATES.DISCARDED)||(this.model.get("state")===g.HistoryDatasetAssociation.STATES.NOT_VIEWABLE)||(!this.model.get("accessible"))){return null}var k=this.model.get("purged"),i=this.model.get("deleted"),j={title:f("Edit attributes"),href:this.urls.edit,target:this.linkTarget,classes:"dataset-edit"};if(i||k){j.disabled=true;if(k){j.title=f("Cannot edit attributes of datasets removed from disk")}else{if(i){j.title=f("Undelete dataset to edit attributes")}}}else{if(this.model.get("state")===g.HistoryDatasetAssociation.STATES.UPLOAD){j.disabled=true;j.title=f("This dataset must finish uploading before it can be edited")}else{if(this.model.get("state")===g.HistoryDatasetAssociation.STATES.NEW){j.disabled=true;j.title=f("This dataset is not yet editable")}}}j.faIcon="fa-pencil";return faIconButton(j)},_render_deleteButton:function(){if((this.model.get("state")===g.HistoryDatasetAssociation.STATES.NOT_VIEWABLE)||(!this.model.get("accessible"))){return null}var i=this,j={title:f("Delete"),classes:"dataset-delete",onclick:function(){i.$el.find(".icon-btn.dataset-delete").trigger("mouseout");i.model["delete"]()}};if(this.model.get("deleted")||this.model.get("purged")){j={title:f("Dataset is already deleted"),disabled:true}}j.faIcon="fa-times";return 
faIconButton(j)},_render_errButton:function(){if(this.model.get("state")!==g.HistoryDatasetAssociation.STATES.ERROR){return null}return faIconButton({title:f("View or report this error"),href:this.urls.report_error,classes:"dataset-report-error-btn",target:this.linkTarget,faIcon:"fa-bug"})},_render_rerunButton:function(){return faIconButton({title:f("Run this job again"),href:this.urls.rerun,classes:"dataset-rerun-btn",target:this.linkTarget,faIcon:"fa-refresh"})},_render_visualizationsButton:function(){var i=this.model.get("visualizations");if((!this.hasUser)||(!this.model.hasData())||(_.isEmpty(i))){return null}if(_.isObject(i[0])){return this._render_visualizationsFrameworkButton(i)}if(!this.urls.visualization){return null}var k=this.model.get("dbkey"),o=this.urls.visualization,l={},p={dataset_id:this.model.get("id"),hda_ldda:"hda"};if(k){p.dbkey=k}var j=faIconButton({title:f("Visualize"),classes:"dataset-visualize-btn",faIcon:"fa-bar-chart-o"});function m(q){if(q==="trackster"){return c(o,p,k)}return function(){Galaxy.frame.add({title:"Visualization",type:"url",content:o+"/"+q+"?"+$.param(p)})}}function n(q){return q.charAt(0).toUpperCase()+q.slice(1)}if(i.length===1){j.attr("data-original-title",f("Visualize in")+" "+f(n(i[0])));j.click(m(i[0]))}else{_.each(i,function(q){l[f(n(q))]=m(q)});make_popupmenu(j,l)}return j},_render_visualizationsFrameworkButton:function(i){if(!(this.model.hasData())||!(i&&!_.isEmpty(i))){return null}var k=faIconButton({title:f("Visualize"),classes:"dataset-visualize-btn",faIcon:"fa-bar-chart-o"});if(i.length===1){var j=i[0];k.attr("data-original-title",f("Visualize in")+" "+j.html);k.attr("href",j.href)}else{var l=[];_.each(i,function(m){m.func=function(n){if(Galaxy.frame&&Galaxy.frame.active){Galaxy.frame.add({title:"Visualization",type:"url",content:m.href});n.preventDefault();return false}return true};l.push(m);return false});PopupMenu.create(k,l)}return k},_buildNewRender:function(){var 
i=b.HDABaseView.prototype._buildNewRender.call(this);var j="<br />",l=".",k=function(m,n){return['<a href="javascript:void(0)" class="',n,'">',m,"</a>"].join("")};i.find(".dataset-deleted-msg").append([j,k(f("Undelete it"),"dataset-undelete"),l,j,k(f("Permanently remove it from disk"),"dataset-purge"),l].join(""));i.find(".dataset-hidden-msg").append([j,k(f("Unhide it"),"dataset-unhide"),l].join(""));return i},_render_body_failed_metadata:function(){var j=$("<a/>").attr({href:this.urls.edit,target:this.linkTarget}).text(f("set it manually or retry auto-detection")),i=$("<span/>").text(". "+f("You may be able to")+" ").append(j),k=b.HDABaseView.prototype._render_body_failed_metadata.call(this);k.find(".warningmessagesmall strong").append(i);return k},_render_body_error:function(){var i=b.HDABaseView.prototype._render_body_error.call(this);i.find(".dataset-actions .left").prepend(this._render_errButton());return i},_render_body_ok:function(){var i=b.HDABaseView.prototype._render_body_ok.call(this);if(this.model.isDeletedOrPurged()){return i}this.makeDbkeyEditLink(i);if(this.hasUser){i.find(".dataset-actions .left").append(this._render_visualizationsButton());this._renderTags(i);this._renderAnnotation(i)}return i},_renderTags:function(i){var j=this;this.tagsEditor=new a.TagsEditor({model:this.model,el:i.find(".tags-display"),onshowFirstTime:function(){this.render()},onshow:function(){j.tagsEditorShown=true},onhide:function(){j.tagsEditorShown=false},$activator:faIconButton({title:f("Edit dataset tags"),classes:"dataset-tag-btn",faIcon:"fa-tags"}).appendTo(i.find(".dataset-actions .right"))});if(this.tagsEditorShown){this.tagsEditor.toggle(true)}},_renderAnnotation:function(i){var j=this;this.annotationEditor=new e.AnnotationEditor({model:this.model,el:i.find(".annotation-display"),onshowFirstTime:function(){this.render()},onshow:function(){j.annotationEditorShown=true},onhide:function(){j.annotationEditorShown=false},$activator:faIconButton({title:f("Edit dataset 
annotation"),classes:"dataset-annotate-btn",faIcon:"fa-comment"}).appendTo(i.find(".dataset-actions .right"))});if(this.annotationEditorShown){this.annotationEditor.toggle(true)}},makeDbkeyEditLink:function(j){if(this.model.get("metadata_dbkey")==="?"&&!this.model.isDeletedOrPurged()){var i=$('<a class="value">?</a>').attr("href",this.urls.edit).attr("target",this.linkTarget);j.find(".dataset-dbkey .value").replaceWith(i)}},events:_.extend(_.clone(b.HDABaseView.prototype.events),{"click .dataset-undelete":function(i){this.model.undelete();return false},"click .dataset-unhide":function(i){this.model.unhide();return false},"click .dataset-purge":"confirmPurge"}),confirmPurge:function d(i){this.model.purge();return false},toString:function(){var i=(this.model)?(this.model+""):("(no model)");return"HDAView("+i+")"}});function c(i,k,j){return function(){var l={};if(j){l["f-dbkey"]=j}$.ajax({url:i+"/list_tracks?"+$.param(l),dataType:"html",error:function(){alert(("Could not add this dataset to browser")+".")},success:function(m){var n=window.parent;n.Galaxy.modal.show({title:"View Data in a New or Saved Visualization",buttons:{Cancel:function(){n.Galaxy.modal.hide()},"View in saved visualization":function(){n.Galaxy.modal.show({title:"Add Data to Saved Visualization",body:m,buttons:{Cancel:function(){n.Galaxy.modal.hide()},"Add to visualization":function(){$(n.document).find("input[name=id]:checked").each(function(){n.Galaxy.modal.hide();var o=$(this).val();k.id=o;n.Galaxy.frame.add({title:"Trackster",type:"url",content:i+"/trackster?"+$.param(k)})})}}})},"View in new visualization":function(){n.Galaxy.modal.hide();var o=i+"/trackster?"+$.param(k);n.Galaxy.frame.add({title:"Trackster",type:"url",content:o})}}})}});return false}}return{HDAEditView:h}});
\ No newline at end of file
+define(["mvc/dataset/hda-model","mvc/dataset/hda-base","mvc/tags","mvc/annotations","utils/localization"],function(g,b,a,e,f){var h=b.HDABaseView.extend({initialize:function(i){b.HDABaseView.prototype.initialize.call(this,i);this.hasUser=i.hasUser;this.defaultPrimaryActionButtonRenderers=[this._render_showParamsButton,this._render_rerunButton];this.purgeAllowed=i.purgeAllowed||false;this.tagsEditorShown=i.tagsEditorShown||false;this.annotationEditorShown=i.annotationEditorShown||false},_render_titleButtons:function(){return b.HDABaseView.prototype._render_titleButtons.call(this).concat([this._render_editButton(),this._render_deleteButton()])},_render_editButton:function(){if((this.model.get("state")===g.HistoryDatasetAssociation.STATES.DISCARDED)||(this.model.get("state")===g.HistoryDatasetAssociation.STATES.NOT_VIEWABLE)||(!this.model.get("accessible"))){return null}var k=this.model.get("purged"),i=this.model.get("deleted"),j={title:f("Edit attributes"),href:this.urls.edit,target:this.linkTarget,classes:"dataset-edit"};if(i||k){j.disabled=true;if(k){j.title=f("Cannot edit attributes of datasets removed from disk")}else{if(i){j.title=f("Undelete dataset to edit attributes")}}}else{if(this.model.get("state")===g.HistoryDatasetAssociation.STATES.UPLOAD){j.disabled=true;j.title=f("This dataset must finish uploading before it can be edited")}else{if(this.model.get("state")===g.HistoryDatasetAssociation.STATES.NEW){j.disabled=true;j.title=f("This dataset is not yet editable")}}}j.faIcon="fa-pencil";return faIconButton(j)},_render_deleteButton:function(){if((this.model.get("state")===g.HistoryDatasetAssociation.STATES.NOT_VIEWABLE)||(!this.model.get("accessible"))){return null}var i=this,j={title:f("Delete"),classes:"dataset-delete",onclick:function(){i.$el.find(".icon-btn.dataset-delete").trigger("mouseout");i.model["delete"]()}};if(this.model.get("deleted")||this.model.get("purged")){j={title:f("Dataset is already deleted"),disabled:true}}j.faIcon="fa-times";return 
faIconButton(j)},_render_errButton:function(){if(this.model.get("state")!==g.HistoryDatasetAssociation.STATES.ERROR){return null}return faIconButton({title:f("View or report this error"),href:this.urls.report_error,classes:"dataset-report-error-btn",target:this.linkTarget,faIcon:"fa-bug"})},_render_rerunButton:function(){return faIconButton({title:f("Run this job again"),href:this.urls.rerun,classes:"dataset-rerun-btn",target:this.linkTarget,faIcon:"fa-refresh"})},_render_visualizationsButton:function(){var i=this.model.get("visualizations");if((!this.hasUser)||(!this.model.hasData())||(_.isEmpty(i))){return null}if(_.isObject(i[0])){return this._render_visualizationsFrameworkButton(i)}if(!this.urls.visualization){return null}var k=this.model.get("dbkey"),o=this.urls.visualization,l={},p={dataset_id:this.model.get("id"),hda_ldda:"hda"};if(k){p.dbkey=k}var j=faIconButton({title:f("Visualize"),classes:"dataset-visualize-btn",faIcon:"fa-bar-chart-o"});function m(q){if(q==="trackster"){return c(o,p,k)}return function(){Galaxy.frame.add({title:"Visualization",type:"url",content:o+"/"+q+"?"+$.param(p)})}}function n(q){return q.charAt(0).toUpperCase()+q.slice(1)}if(i.length===1){j.attr("data-original-title",f("Visualize in")+" "+f(n(i[0])));j.click(m(i[0]))}else{_.each(i,function(q){l[f(n(q))]=m(q)});make_popupmenu(j,l)}return j},_render_visualizationsFrameworkButton:function(i){if(!(this.model.hasData())||!(i&&!_.isEmpty(i))){return null}var k=faIconButton({title:f("Visualize"),classes:"dataset-visualize-btn",faIcon:"fa-bar-chart-o"});if(i.length===1){var j=i[0];k.attr("data-original-title",f("Visualize in")+" "+j.html);k.attr("href",j.href)}else{var l=[];_.each(i,function(m){m.func=function(n){if(Galaxy.frame&&Galaxy.frame.active){Galaxy.frame.add({title:"Visualization",type:"url",content:m.href});n.preventDefault();return false}return true};l.push(m);return false});PopupMenu.create(k,l)}return k},_buildNewRender:function(){var 
i=b.HDABaseView.prototype._buildNewRender.call(this);var j="<br />",l=".",k=function(m,n){return['<a href="javascript:void(0)" class="',n,'">',m,"</a>"].join("")};i.find(".dataset-deleted-msg").append([j,k(f("Undelete it"),"dataset-undelete"),l].join(""));if(this.purgeAllowed){i.find(".dataset-deleted-msg").append([j,k(f("Permanently remove it from disk"),"dataset-purge"),l].join(""))}i.find(".dataset-hidden-msg").append([j,k(f("Unhide it"),"dataset-unhide"),l].join(""));return i},_render_body_failed_metadata:function(){var j=$("<a/>").attr({href:this.urls.edit,target:this.linkTarget}).text(f("set it manually or retry auto-detection")),i=$("<span/>").text(". "+f("You may be able to")+" ").append(j),k=b.HDABaseView.prototype._render_body_failed_metadata.call(this);k.find(".warningmessagesmall strong").append(i);return k},_render_body_error:function(){var i=b.HDABaseView.prototype._render_body_error.call(this);i.find(".dataset-actions .left").prepend(this._render_errButton());return i},_render_body_ok:function(){var i=b.HDABaseView.prototype._render_body_ok.call(this);if(this.model.isDeletedOrPurged()){return i}this.makeDbkeyEditLink(i);if(this.hasUser){i.find(".dataset-actions .left").append(this._render_visualizationsButton());this._renderTags(i);this._renderAnnotation(i)}return i},_renderTags:function(i){var j=this;this.tagsEditor=new a.TagsEditor({model:this.model,el:i.find(".tags-display"),onshowFirstTime:function(){this.render()},onshow:function(){j.tagsEditorShown=true},onhide:function(){j.tagsEditorShown=false},$activator:faIconButton({title:f("Edit dataset tags"),classes:"dataset-tag-btn",faIcon:"fa-tags"}).appendTo(i.find(".dataset-actions .right"))});if(this.tagsEditorShown){this.tagsEditor.toggle(true)}},_renderAnnotation:function(i){var j=this;this.annotationEditor=new 
e.AnnotationEditor({model:this.model,el:i.find(".annotation-display"),onshowFirstTime:function(){this.render()},onshow:function(){j.annotationEditorShown=true},onhide:function(){j.annotationEditorShown=false},$activator:faIconButton({title:f("Edit dataset annotation"),classes:"dataset-annotate-btn",faIcon:"fa-comment"}).appendTo(i.find(".dataset-actions .right"))});if(this.annotationEditorShown){this.annotationEditor.toggle(true)}},makeDbkeyEditLink:function(j){if(this.model.get("metadata_dbkey")==="?"&&!this.model.isDeletedOrPurged()){var i=$('<a class="value">?</a>').attr("href",this.urls.edit).attr("target",this.linkTarget);j.find(".dataset-dbkey .value").replaceWith(i)}},events:_.extend(_.clone(b.HDABaseView.prototype.events),{"click .dataset-undelete":function(i){this.model.undelete();return false},"click .dataset-unhide":function(i){this.model.unhide();return false},"click .dataset-purge":"confirmPurge"}),confirmPurge:function d(i){this.model.purge();return false},toString:function(){var i=(this.model)?(this.model+""):("(no model)");return"HDAView("+i+")"}});function c(i,k,j){return function(){var l={};if(j){l["f-dbkey"]=j}$.ajax({url:i+"/list_tracks?"+$.param(l),dataType:"html",error:function(){alert(("Could not add this dataset to browser")+".")},success:function(m){var n=window.parent;n.Galaxy.modal.show({title:"View Data in a New or Saved Visualization",buttons:{Cancel:function(){n.Galaxy.modal.hide()},"View in saved visualization":function(){n.Galaxy.modal.show({title:"Add Data to Saved Visualization",body:m,buttons:{Cancel:function(){n.Galaxy.modal.hide()},"Add to visualization":function(){$(n.document).find("input[name=id]:checked").each(function(){n.Galaxy.modal.hide();var o=$(this).val();k.id=o;n.Galaxy.frame.add({title:"Trackster",type:"url",content:i+"/trackster?"+$.param(k)})})}}})},"View in new visualization":function(){n.Galaxy.modal.hide();var 
o=i+"/trackster?"+$.param(k);n.Galaxy.frame.add({title:"Trackster",type:"url",content:o})}}})}});return false}}return{HDAEditView:h}});
\ No newline at end of file
diff -r 2342d31a1f5cf906f1aecb0689fe64da384cf940 -r 68b49fdc63e2445ee2124e0185373d964bc64bc8 static/scripts/packed/mvc/history/history-panel.js
--- a/static/scripts/packed/mvc/history/history-panel.js
+++ b/static/scripts/packed/mvc/history/history-panel.js
@@ -1,1 +1,1 @@
-define(["mvc/dataset/hda-model","mvc/dataset/hda-edit","mvc/history/readonly-history-panel","mvc/tags","mvc/annotations","utils/localization"],function(f,b,d,a,c,e){var g=d.ReadOnlyHistoryPanel.extend({HDAViewClass:b.HDAEditView,initialize:function(h){h=h||{};this.selectedHdaIds=[];this.tagsEditor=null;this.annotationEditor=null;this.selecting=h.selecting||false;this.annotationEditorShown=h.annotationEditorShown||false;this.tagsEditorShown=h.tagsEditorShown||false;d.ReadOnlyHistoryPanel.prototype.initialize.call(this,h)},_setUpModelEventHandlers:function(){d.ReadOnlyHistoryPanel.prototype._setUpModelEventHandlers.call(this);this.model.on("change:nice_size",this.updateHistoryDiskSize,this);this.model.hdas.on("change:deleted",this._handleHdaDeletionChange,this);this.model.hdas.on("change:visible",this._handleHdaVisibleChange,this);this.model.hdas.on("change:purged",function(h){this.model.fetch()},this)},renderModel:function(){var h=$("<div/>");h.append(g.templates.historyPanel(this.model.toJSON()));this.$emptyMessage(h).text(this.emptyMsg);if(Galaxy&&Galaxy.currUser&&Galaxy.currUser.id&&Galaxy.currUser.id===this.model.get("user_id")){this._renderTags(h);this._renderAnnotation(h)}h.find(".history-secondary-actions").prepend(this._renderSelectButton());h.find(".history-dataset-actions").toggle(this.selecting);h.find(".history-secondary-actions").prepend(this._renderSearchButton());this._setUpBehaviours(h);this.renderHdas(h);return h},_renderTags:function(h){var i=this;this.tagsEditor=new a.TagsEditor({model:this.model,el:h.find(".history-controls .tags-display"),onshowFirstTime:function(){this.render()},onshow:function(){i.toggleHDATagEditors(true,i.fxSpeed)},onhide:function(){i.toggleHDATagEditors(false,i.fxSpeed)},$activator:faIconButton({title:e("Edit history tags"),classes:"history-tag-btn",faIcon:"fa-tags"}).appendTo(h.find(".history-secondary-actions"))})},_renderAnnotation:function(h){var i=this;this.annotationEditor=new 
c.AnnotationEditor({model:this.model,el:h.find(".history-controls .annotation-display"),onshowFirstTime:function(){this.render()},onshow:function(){i.toggleHDAAnnotationEditors(true,i.fxSpeed)},onhide:function(){i.toggleHDAAnnotationEditors(false,i.fxSpeed)},$activator:faIconButton({title:e("Edit history annotation"),classes:"history-annotate-btn",faIcon:"fa-comment"}).appendTo(h.find(".history-secondary-actions"))})},_renderSelectButton:function(h){return faIconButton({title:e("Operations on multiple datasets"),classes:"history-select-btn",faIcon:"fa-check-square-o"})},_setUpBehaviours:function(h){h=h||this.$el;d.ReadOnlyHistoryPanel.prototype._setUpBehaviours.call(this,h);if(!this.model){return}this._setUpDatasetActionsPopup(h);if((!Galaxy.currUser||Galaxy.currUser.isAnonymous())||(Galaxy.currUser.id!==this.model.get("user_id"))){return}var i=this;h.find(".history-name").attr("title",e("Click to rename history")).tooltip({placement:"bottom"}).make_text_editable({on_finish:function(j){var k=i.model.get("name");if(j&&j!==k){i.$el.find(".history-name").text(j);i.model.save({name:j}).fail(function(){i.$el.find(".history-name").text(i.model.previous("name"))})}else{i.$el.find(".history-name").text(k)}}})},_setUpDatasetActionsPopup:function(h){var i=this;(new PopupMenu(h.find(".history-dataset-action-popup-btn"),[{html:e("Hide datasets"),func:function(){var j=f.HistoryDatasetAssociation.prototype.hide;i.getSelectedHdaCollection().ajaxQueue(j)}},{html:e("Unhide datasets"),func:function(){var j=f.HistoryDatasetAssociation.prototype.unhide;i.getSelectedHdaCollection().ajaxQueue(j)}},{html:e("Delete datasets"),func:function(){var j=f.HistoryDatasetAssociation.prototype["delete"];i.getSelectedHdaCollection().ajaxQueue(j)}},{html:e("Undelete datasets"),func:function(){var j=f.HistoryDatasetAssociation.prototype.undelete;i.getSelectedHdaCollection().ajaxQueue(j)}},{html:e("Permanently delete datasets"),func:function(){if(confirm(e("This will permanently remove the data in 
your datasets. Are you sure?"))){var j=f.HistoryDatasetAssociation.prototype.purge;i.getSelectedHdaCollection().ajaxQueue(j)}}}]))},_handleHdaDeletionChange:function(h){if(h.get("deleted")&&!this.storage.get("show_deleted")){this.removeHdaView(this.hdaViews[h.id])}},_handleHdaVisibleChange:function(h){if(h.hidden()&&!this.storage.get("show_hidden")){this.removeHdaView(this.hdaViews[h.id])}},_createHdaView:function(i){var h=i.get("id"),j=new this.HDAViewClass({model:i,linkTarget:this.linkTarget,expanded:this.storage.get("expandedHdas")[h],selectable:this.selecting,hasUser:this.model.ownedByCurrUser(),logger:this.logger,tagsEditorShown:(this.tagsEditor&&!this.tagsEditor.hidden),annotationEditorShown:(this.annotationEditor&&!this.annotationEditor.hidden)});this._setUpHdaListeners(j);return j},_setUpHdaListeners:function(i){var h=this;d.ReadOnlyHistoryPanel.prototype._setUpHdaListeners.call(this,i);i.on("selected",function(j){var k=j.model.get("id");h.selectedHdaIds=_.union(h.selectedHdaIds,[k])});i.on("de-selected",function(j){var k=j.model.get("id");h.selectedHdaIds=_.without(h.selectedHdaIds,k)})},toggleHDATagEditors:function(h){var i=arguments;_.each(this.hdaViews,function(j){if(j.tagsEditor){j.tagsEditor.toggle.apply(j.tagsEditor,i)}})},toggleHDAAnnotationEditors:function(h){var i=arguments;_.each(this.hdaViews,function(j){if(j.annotationEditor){j.annotationEditor.toggle.apply(j.annotationEditor,i)}})},removeHdaView:function(i){if(!i){return}var h=this;i.$el.fadeOut(h.fxSpeed,function(){i.off();i.remove();delete h.hdaViews[i.model.id];if(_.isEmpty(h.hdaViews)){h.$emptyMessage().fadeIn(h.fxSpeed,function(){h.trigger("empty-history",h)})}})},events:_.extend(_.clone(d.ReadOnlyHistoryPanel.prototype.events),{"click .history-select-btn":"toggleSelectors","click .history-select-all-datasets-btn":"selectAllDatasets","click 
.history-deselect-all-datasets-btn":"deselectAllDatasets"}),updateHistoryDiskSize:function(){this.$el.find(".history-size").text(this.model.get("nice_size"))},showSelectors:function(h){h=(h!==undefined)?(h):(this.fxSpeed);this.selecting=true;this.$(".history-dataset-actions").slideDown(h);_.each(this.hdaViews,function(i){i.showSelector()});this.selectedHdaIds=[]},hideSelectors:function(h){h=(h!==undefined)?(h):(this.fxSpeed);this.selecting=false;this.$(".history-dataset-actions").slideUp(h);_.each(this.hdaViews,function(i){i.hideSelector()});this.selectedHdaIds=[]},toggleSelectors:function(){if(!this.selecting){this.showSelectors()}else{this.hideSelectors()}},selectAllDatasets:function(h){_.each(this.hdaViews,function(i){i.select(h)})},deselectAllDatasets:function(h){_.each(this.hdaViews,function(i){i.deselect(h)})},getSelectedHdaViews:function(){return _.filter(this.hdaViews,function(h){return h.selected})},getSelectedHdaCollection:function(){return new f.HDACollection(_.map(this.getSelectedHdaViews(),function(h){return h.model}),{historyId:this.model.id})},toString:function(){return"HistoryPanel("+((this.model)?(this.model.get("name")):(""))+")"}});return{HistoryPanel:g}});
\ No newline at end of file
+define(["mvc/dataset/hda-model","mvc/dataset/hda-edit","mvc/history/readonly-history-panel","mvc/tags","mvc/annotations","utils/localization"],function(f,b,d,a,c,e){var g=d.ReadOnlyHistoryPanel.extend({HDAViewClass:b.HDAEditView,initialize:function(h){h=h||{};this.selectedHdaIds=[];this.tagsEditor=null;this.annotationEditor=null;this.purgeAllowed=h.purgeAllowed||false;this.selecting=h.selecting||false;this.annotationEditorShown=h.annotationEditorShown||false;this.tagsEditorShown=h.tagsEditorShown||false;d.ReadOnlyHistoryPanel.prototype.initialize.call(this,h)},_setUpModelEventHandlers:function(){d.ReadOnlyHistoryPanel.prototype._setUpModelEventHandlers.call(this);this.model.on("change:nice_size",this.updateHistoryDiskSize,this);this.model.hdas.on("change:deleted",this._handleHdaDeletionChange,this);this.model.hdas.on("change:visible",this._handleHdaVisibleChange,this);this.model.hdas.on("change:purged",function(h){this.model.fetch()},this)},renderModel:function(){var h=$("<div/>");h.append(g.templates.historyPanel(this.model.toJSON()));this.$emptyMessage(h).text(this.emptyMsg);if(Galaxy&&Galaxy.currUser&&Galaxy.currUser.id&&Galaxy.currUser.id===this.model.get("user_id")){this._renderTags(h);this._renderAnnotation(h)}h.find(".history-secondary-actions").prepend(this._renderSelectButton());h.find(".history-dataset-actions").toggle(this.selecting);h.find(".history-secondary-actions").prepend(this._renderSearchButton());this._setUpBehaviours(h);this.renderHdas(h);return h},_renderTags:function(h){var i=this;this.tagsEditor=new a.TagsEditor({model:this.model,el:h.find(".history-controls .tags-display"),onshowFirstTime:function(){this.render()},onshow:function(){i.toggleHDATagEditors(true,i.fxSpeed)},onhide:function(){i.toggleHDATagEditors(false,i.fxSpeed)},$activator:faIconButton({title:e("Edit history tags"),classes:"history-tag-btn",faIcon:"fa-tags"}).appendTo(h.find(".history-secondary-actions"))})},_renderAnnotation:function(h){var i=this;this.annotationEditor=new 
c.AnnotationEditor({model:this.model,el:h.find(".history-controls .annotation-display"),onshowFirstTime:function(){this.render()},onshow:function(){i.toggleHDAAnnotationEditors(true,i.fxSpeed)},onhide:function(){i.toggleHDAAnnotationEditors(false,i.fxSpeed)},$activator:faIconButton({title:e("Edit history annotation"),classes:"history-annotate-btn",faIcon:"fa-comment"}).appendTo(h.find(".history-secondary-actions"))})},_renderSelectButton:function(h){return faIconButton({title:e("Operations on multiple datasets"),classes:"history-select-btn",faIcon:"fa-check-square-o"})},_setUpBehaviours:function(h){h=h||this.$el;d.ReadOnlyHistoryPanel.prototype._setUpBehaviours.call(this,h);if(!this.model){return}this._setUpDatasetActionsPopup(h);if((!Galaxy.currUser||Galaxy.currUser.isAnonymous())||(Galaxy.currUser.id!==this.model.get("user_id"))){return}var i=this;h.find(".history-name").attr("title",e("Click to rename history")).tooltip({placement:"bottom"}).make_text_editable({on_finish:function(j){var k=i.model.get("name");if(j&&j!==k){i.$el.find(".history-name").text(j);i.model.save({name:j}).fail(function(){i.$el.find(".history-name").text(i.model.previous("name"))})}else{i.$el.find(".history-name").text(k)}}})},_setUpDatasetActionsPopup:function(h){var i=this,j=[{html:e("Hide datasets"),func:function(){var k=f.HistoryDatasetAssociation.prototype.hide;i.getSelectedHdaCollection().ajaxQueue(k)}},{html:e("Unhide datasets"),func:function(){var k=f.HistoryDatasetAssociation.prototype.unhide;i.getSelectedHdaCollection().ajaxQueue(k)}},{html:e("Delete datasets"),func:function(){var k=f.HistoryDatasetAssociation.prototype["delete"];i.getSelectedHdaCollection().ajaxQueue(k)}},{html:e("Undelete datasets"),func:function(){var k=f.HistoryDatasetAssociation.prototype.undelete;i.getSelectedHdaCollection().ajaxQueue(k)}}];if(i.purgeAllowed){j.push({html:e("Permanently delete datasets"),func:function(){if(confirm(e("This will permanently remove the data in your datasets. 
Are you sure?"))){var k=f.HistoryDatasetAssociation.prototype.purge;i.getSelectedHdaCollection().ajaxQueue(k)}}})}return new PopupMenu(h.find(".history-dataset-action-popup-btn"),j)},_handleHdaDeletionChange:function(h){if(h.get("deleted")&&!this.storage.get("show_deleted")){this.removeHdaView(this.hdaViews[h.id])}},_handleHdaVisibleChange:function(h){if(h.hidden()&&!this.storage.get("show_hidden")){this.removeHdaView(this.hdaViews[h.id])}},_createHdaView:function(i){var h=i.get("id"),j=new this.HDAViewClass({model:i,linkTarget:this.linkTarget,expanded:this.storage.get("expandedHdas")[h],selectable:this.selecting,purgeAllowed:this.purgeAllowed,hasUser:this.model.ownedByCurrUser(),logger:this.logger,tagsEditorShown:(this.tagsEditor&&!this.tagsEditor.hidden),annotationEditorShown:(this.annotationEditor&&!this.annotationEditor.hidden)});this._setUpHdaListeners(j);return j},_setUpHdaListeners:function(i){var h=this;d.ReadOnlyHistoryPanel.prototype._setUpHdaListeners.call(this,i);i.on("selected",function(j){var k=j.model.get("id");h.selectedHdaIds=_.union(h.selectedHdaIds,[k])});i.on("de-selected",function(j){var k=j.model.get("id");h.selectedHdaIds=_.without(h.selectedHdaIds,k)})},toggleHDATagEditors:function(h){var i=arguments;_.each(this.hdaViews,function(j){if(j.tagsEditor){j.tagsEditor.toggle.apply(j.tagsEditor,i)}})},toggleHDAAnnotationEditors:function(h){var i=arguments;_.each(this.hdaViews,function(j){if(j.annotationEditor){j.annotationEditor.toggle.apply(j.annotationEditor,i)}})},removeHdaView:function(i){if(!i){return}var h=this;i.$el.fadeOut(h.fxSpeed,function(){i.off();i.remove();delete h.hdaViews[i.model.id];if(_.isEmpty(h.hdaViews)){h.$emptyMessage().fadeIn(h.fxSpeed,function(){h.trigger("empty-history",h)})}})},events:_.extend(_.clone(d.ReadOnlyHistoryPanel.prototype.events),{"click .history-select-btn":"toggleSelectors","click .history-select-all-datasets-btn":"selectAllDatasets","click 
.history-deselect-all-datasets-btn":"deselectAllDatasets"}),updateHistoryDiskSize:function(){this.$el.find(".history-size").text(this.model.get("nice_size"))},showSelectors:function(h){h=(h!==undefined)?(h):(this.fxSpeed);this.selecting=true;this.$(".history-dataset-actions").slideDown(h);_.each(this.hdaViews,function(i){i.showSelector()});this.selectedHdaIds=[]},hideSelectors:function(h){h=(h!==undefined)?(h):(this.fxSpeed);this.selecting=false;this.$(".history-dataset-actions").slideUp(h);_.each(this.hdaViews,function(i){i.hideSelector()});this.selectedHdaIds=[]},toggleSelectors:function(){if(!this.selecting){this.showSelectors()}else{this.hideSelectors()}},selectAllDatasets:function(h){_.each(this.hdaViews,function(i){i.select(h)})},deselectAllDatasets:function(h){_.each(this.hdaViews,function(i){i.deselect(h)})},getSelectedHdaViews:function(){return _.filter(this.hdaViews,function(h){return h.selected})},getSelectedHdaCollection:function(){return new f.HDACollection(_.map(this.getSelectedHdaViews(),function(h){return h.model}),{historyId:this.model.id})},toString:function(){return"HistoryPanel("+((this.model)?(this.model.get("name")):(""))+")"}});return{HistoryPanel:g}});
\ No newline at end of file
diff -r 2342d31a1f5cf906f1aecb0689fe64da384cf940 -r 68b49fdc63e2445ee2124e0185373d964bc64bc8 templates/webapps/galaxy/history/view.mako
--- a/templates/webapps/galaxy/history/view.mako
+++ b/templates/webapps/galaxy/history/view.mako
@@ -173,6 +173,7 @@
window.historyPanel = new panelClass({
show_deleted : ${show_deleted_json},
show_hidden : ${show_hidden_json},
+ purgeAllowed : Galaxy.config.allow_user_dataset_purge,
el : $( "#history-" + historyJSON.id ),
model : history,
onready : function(){
diff -r 2342d31a1f5cf906f1aecb0689fe64da384cf940 -r 68b49fdc63e2445ee2124e0185373d964bc64bc8 templates/webapps/galaxy/root/history.mako
--- a/templates/webapps/galaxy/root/history.mako
+++ b/templates/webapps/galaxy/root/history.mako
@@ -25,6 +25,7 @@
window.panel = new historyPanel.CurrentHistoryPanel({
show_deleted : bootstrapped.show_deleted,
show_hidden : bootstrapped.show_hidden,
+ purgeAllowed : Galaxy.config.allow_user_dataset_purge,
el : $( "body" ),
model : new historyModel.History( bootstrapped.history, bootstrapped.hdas ),
onready : function(){
diff -r 2342d31a1f5cf906f1aecb0689fe64da384cf940 -r 68b49fdc63e2445ee2124e0185373d964bc64bc8 templates/webapps/galaxy/root/index.mako
--- a/templates/webapps/galaxy/root/index.mako
+++ b/templates/webapps/galaxy/root/index.mako
@@ -205,6 +205,7 @@
$(function(){
var currPanel = new historyPanel.CurrentHistoryPanel({
el : $( "#current-history-panel" ),
+ purgeAllowed : Galaxy.config.allow_user_dataset_purge,
linkTarget : 'galaxy_main',
onready : function loadAsCurrentHistoryPanel(){
this.connectToQuotaMeter( Galaxy.quotaMeter )
Repository URL: https://bitbucket.org/galaxy/galaxy-central/
--
This is a commit notification from bitbucket.org. You are receiving
this because you have the service enabled, addressing the recipient of
this email.
2 new commits in galaxy-central:
https://bitbucket.org/galaxy/galaxy-central/commits/a0d853e0fe80/
Changeset: a0d853e0fe80
User: davebgx
Date: 2014-04-22 18:58:42
Summary: Fix library template functional tests. Revert change to user info tests that was breaking library template tests.
Affected #: 3 files
diff -r 4a697eff12901cdb715667526bc097948646da86 -r a0d853e0fe804be7b21191957998ec7cdd4d8305 test/base/twilltestcase.py
--- a/test/base/twilltestcase.py
+++ b/test/base/twilltestcase.py
@@ -1698,17 +1698,16 @@
num_fields=1, num_options=0, field_name='1_field_name', strings_displayed=[],
strings_displayed_after_submit=[] ):
"""Create a new form definition."""
- strings_displayed_after_submit.extend( [ name, description, form_type ] )
self.visit_url( "%s/forms/create_form_definition" % self.url )
for check_str in strings_displayed:
self.check_page_for_string( check_str )
- tc.fv( "1", "name", name )
- tc.fv( "1", "description", description )
- tc.fv( "1", "form_type_select_field", form_type )
+ tc.fv( "create_form_definition", "name", name )
+ tc.fv( "create_form_definition", "description", description )
+ tc.fv( "create_form_definition", "form_type_select_field", form_type )
tc.submit( "create_form_button" )
if form_type == "Sequencing Sample Form":
tc.submit( "add_layout_grid" )
- tc.fv( "1", "grid_layout0", form_layout_name )
+ tc.fv( "create_form_definition", "grid_layout0", form_layout_name )
# if not adding any fields at this time, remove the default empty field
if num_fields == 0:
tc.submit( "remove_button" )
@@ -1720,8 +1719,8 @@
field_help_contents = 'Field %i help' % index1
field_default = 'field_default_0'
field_default_contents = '%s default contents' % form_type
- tc.fv( "1", field_label, field_contents )
- tc.fv( "1", field_help_name, field_help_contents )
+ tc.fv( "edit_form_definition", field_label, field_contents )
+ tc.fv( "edit_form_definition", field_help_name, field_help_contents )
if field_type == 'SelectField':
# SelectField field_type requires a refresh_on_change
self.refresh_form( 'field_type_0', field_type )
@@ -1735,11 +1734,11 @@
for index2 in range( num_options ):
option_field_name = 'field_0_option_%i' % index2
option_field_value = 'Option%i' % index2
- tc.fv( "1", option_field_name, option_field_value )
+ tc.fv( "edit_form_definition", option_field_name, option_field_value )
else:
- tc.fv( "1", "field_type_0", field_type )
- tc.fv( "1", 'field_name_0', field_name )
- tc.fv( "1", field_default, field_default_contents )
+ tc.fv( "edit_form_definition", "field_type_0", field_type )
+ tc.fv( "edit_form_definition", 'field_name_0', field_name )
+ tc.fv( "edit_form_definition", field_default, field_default_contents )
# All done... now save
tc.submit( "save_changes_button" )
for check_str in strings_displayed_after_submit:
diff -r 4a697eff12901cdb715667526bc097948646da86 -r a0d853e0fe804be7b21191957998ec7cdd4d8305 test/functional/test_library_templates.py
--- a/test/functional/test_library_templates.py
+++ b/test/functional/test_library_templates.py
@@ -436,7 +436,7 @@
strings_displayed=[ 'SelectField', 'Option1' ] )
# Import a dataset from the current history
filename = '8.bed'
- self.new_history( name='import with SelectField' )
+ self.new_history( name='import+with+SelectField' )
self.upload_file( filename )
hda = get_latest_hda()
self.upload_library_dataset( cntrller='library_admin',
diff -r 4a697eff12901cdb715667526bc097948646da86 -r a0d853e0fe804be7b21191957998ec7cdd4d8305 test/functional/test_user_info.py
--- a/test/functional/test_user_info.py
+++ b/test/functional/test_user_info.py
@@ -38,12 +38,15 @@
"""Testing creating a new user info form and editing it"""
# Logged in as admin_user
# Create a the first form
- self.create_form( name='Student',
- description="This is Student user info form's description",
- form_type=get_user_info_form_definition(),
+ name = "Student"
+ desc = "This is Student user info form's description"
+ form_type = get_user_info_form_definition()
+ self.create_form( name=name,
+ description=desc,
+ form_type=form_type,
num_fields=0,
strings_displayed=[ 'Create a new form definition' ],
- strings_displayed_after_submit=[] )
+ strings_displayed_after_submit=[ name, desc, form_type ] )
tmp_form = get_form( 'Student' )
# Add fields to the form
field_dicts = [ dict( label='Affiliation',
@@ -73,12 +76,14 @@
assert form_one is not None, 'Problem retrieving form named "Student" from the database'
assert len( form_one.fields ) == len( tmp_form.fields ) + len( field_dicts )
# Create the second form
- self.create_form( name='Researcher',
- description="This is Researcher user info form's description",
- form_type=get_user_info_form_definition(),
+ name = "Researcher"
+ desc = "This is Researcher user info form's description"
+ self.create_form( name=name,
+ description=desc,
+ form_type=form_type,
num_fields=0,
strings_displayed=[ 'Create a new form definition' ],
- strings_displayed_after_submit=[] )
+ strings_displayed_after_submit=[ name, desc, form_type ] )
tmp_form = get_form( 'Researcher' )
# Add fields to the form
self.edit_form( id=self.security.encode_id( tmp_form.current.id ),
https://bitbucket.org/galaxy/galaxy-central/commits/2342d31a1f5c/
Changeset: 2342d31a1f5c
User: davebgx
Date: 2014-04-22 18:59:21
Summary: Merge stable changes into default.
Affected #: 0 files
3 new commits in galaxy-central:
https://bitbucket.org/galaxy/galaxy-central/commits/b6c06c9db59b/
Changeset: b6c06c9db59b
User: jmchilton
Date: 2014-04-22 18:13:27
Summary: Workflow Editor: Rework InputTerminal to implement/use an update method (w/tests).
This will enable replacing terminals instead of recreating them and reconnecting in subsequent changesets.
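The pattern introduced by this changeset can be sketched as follows. This is a minimal illustrative stand-in, not the Galaxy source: plain constructor functions replace the Backbone-based `Terminal`, and only the `extensions`/`multiple` fields from the diff are mirrored.

```javascript
// Sketch of the update() pattern: derive all terminal state from an input
// descriptor, so an existing terminal can be refreshed in place later
// instead of being recreated and reconnected.
function InputTerminal( input ) {
    this.connectors = [];
    this.update( input );
}
InputTerminal.prototype.update = function( input ) {
    // Re-derive state from the descriptor; callers can invoke this again
    // with a new descriptor without touching this.connectors.
    this.datatypes = input.extensions;
    this.multiple = input.multiple;
};

var terminal = new InputTerminal( { extensions: [ "txt" ], multiple: false } );
terminal.update( { extensions: [ "bam" ], multiple: true } );
```

Because `initialize` and later refreshes share one code path, the qunit test added in this changeset only has to exercise `update()` directly.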
Affected #: 2 files
diff -r 008c26226230353a2440e6c7208de58575cfebff -r b6c06c9db59b3197a4f1c55327679ae6a6a08263 static/scripts/galaxy.workflow_editor.canvas.js
--- a/static/scripts/galaxy.workflow_editor.canvas.js
+++ b/static/scripts/galaxy.workflow_editor.canvas.js
@@ -38,8 +38,11 @@
var InputTerminal = Terminal.extend( {
initialize: function( attr ) {
Terminal.prototype.initialize.call( this, attr );
- this.datatypes = attr.datatypes;
- this.multiple = attr.multiple;
+ this.update( attr.input );
+ },
+ update: function( input ) {
+ this.datatypes = input.extensions;
+ this.multiple = input.multiple;
},
canAccept: function ( other ) {
if( this._inputFilled() ) {
@@ -981,13 +984,9 @@
var input = options.input;
var name = input.name;
- var types = input.extensions;
- var multiple = input.multiple;
-
- var terminal = this.el.terminal = new InputTerminal( { element: this.el, datatypes: types, multiple: multiple } );
+ var terminal = this.el.terminal = new InputTerminal( { element: this.el, input: input } );
terminal.node = node;
terminal.name = name;
-
node.input_terminals[name] = terminal;
},
diff -r 008c26226230353a2440e6c7208de58575cfebff -r b6c06c9db59b3197a4f1c55327679ae6a6a08263 test/qunit/tests/workflow_editor_tests.js
--- a/test/qunit/tests/workflow_editor_tests.js
+++ b/test/qunit/tests/workflow_editor_tests.js
@@ -63,7 +63,8 @@
setup: function() {
this.node = { };
this.element = $( "<div>" );
- this.input_terminal = new InputTerminal( { element: this.element, datatypes: [ "txt" ] } );
+ var input = { extensions: [ "txt" ], multiple: false };
+ this.input_terminal = new InputTerminal( { element: this.element, input: input } );
this.input_terminal.node = this.node;
},
test_connector: function( attr ) {
@@ -90,6 +91,16 @@
}
} );
+ test( "test update", function() {
+ deepEqual( this.input_terminal.datatypes, [ 'txt' ] );
+ equal( this.input_terminal.multiple, false );
+
+ this.input_terminal.update( { extensions: [ 'bam' ], multiple: true } );
+
+ deepEqual( this.input_terminal.datatypes, [ 'bam' ] );
+ equal( this.input_terminal.multiple, true );
+ } );
+
test( "test connect", function() {
this.node.markChanged = sinon.spy();
https://bitbucket.org/galaxy/galaxy-central/commits/47ff7cab1ab1/
Changeset: 47ff7cab1ab1
User: jmchilton
Date: 2014-04-22 18:13:27
Summary: Workflow Editor: Replace instead of recreate terminal on update.
This fixes at least one subtle bug related to multiple input data parameters (actually probably two bugs) because the old logic assumed there was only one connector per input terminal.
This should be more efficient, allow deletion of some duplicated code in subsequent changesets, and really help with dataset collections, where terminals are much more complex (there are data inputs and collection inputs, and each can be mapped over by collections); this helps manage complexity downstream.
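The replace-in-place logic can be sketched like this. It is an illustrative simplification, not the Galaxy code: `attachability` here is reduced to a datatype-membership check, and the connector objects are tiny stand-ins for Galaxy's `Connector`.

```javascript
// Register a connector on a terminal; destroy() detaches it again.
function makeConnector( terminal, outputDatatype ) {
    var connector = {
        handle1: { datatype: outputDatatype },
        destroy: function() {
            var i = terminal.connectors.indexOf( connector );
            if ( i !== -1 ) { terminal.connectors.splice( i, 1 ); }
        }
    };
    terminal.connectors.push( connector );
    return connector;
}

// Update the terminal in place from a new input descriptor, then destroy
// any connector whose output side is no longer attachable - this is the
// multi-connector-safe version of the old "save the one connection" logic.
function updateTerminal( terminal, input ) {
    terminal.datatypes = input.extensions;
    // Walk a copy: destroy() mutates terminal.connectors while we iterate.
    terminal.connectors.slice().forEach( function( connector ) {
        var stillValid =
            terminal.datatypes.indexOf( connector.handle1.datatype ) !== -1;
        if ( ! stillValid ) { connector.destroy(); }
    } );
}

var terminal = { datatypes: [ "txt" ], connectors: [] };
makeConnector( terminal, "txt" );
updateTerminal( terminal, { extensions: [ "bam" ] } );
```

Iterating over every connector (rather than only `connectors[0]`) is what fixes the bugs mentioned above for multiple input data parameters.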
Affected #: 2 files
diff -r b6c06c9db59b3197a4f1c55327679ae6a6a08263 -r 47ff7cab1ab1b92bcbb0db06d6892bdad856f005 static/scripts/galaxy.workflow_editor.canvas.js
--- a/static/scripts/galaxy.workflow_editor.canvas.js
+++ b/static/scripts/galaxy.workflow_editor.canvas.js
@@ -257,14 +257,17 @@
// Update input rows
var old_body = nodeView.$( "div.inputs" );
var new_body = nodeView.newInputsDiv();
- $.each( data.data_inputs, function( i, input ) {
- node.nodeView.replaceDataInput( input, new_body );
+ var newTerminalViews = {};
+ _.each( data.data_inputs, function( input ) {
+ var terminalView = node.nodeView.addDataInput( input, new_body );
+ newTerminalViews[ input.name ] = terminalView;
});
+ // Cleanup any leftover terminals
+ _.each( _.difference( _.values( nodeView.terminalViews ), _.values( newTerminalViews ) ), function( unusedView ) {
+ unusedView.el.terminal.destroy();
+ } );
+ nodeView.terminalViews = newTerminalViews;
old_body.replaceWith( new_body );
- // Cleanup any leftover terminals
- old_body.find( "div.input-data-row > .terminal" ).each( function() {
- this.terminal.destroy();
- });
// If active, reactivate with new form_html
this.markChanged();
this.redraw();
@@ -754,6 +757,7 @@
this.tool_body = this.$el.find( ".toolFormBody" );
this.tool_body.find( "div" ).remove();
this.newInputsDiv().appendTo( this.tool_body );
+ this.terminalViews = {};
},
render : function() {
@@ -786,6 +790,7 @@
node: this.node,
input: input
} );
+ this.terminalViews[ input.name ] = terminalView;
var terminalElement = terminalView.el;
var inputView = new DataInputView( {
"terminalElement": terminalElement,
@@ -798,29 +803,23 @@
},
replaceDataInput: function( input, new_body ) {
- var terminalView = new InputTerminalView( {
- node: this.node,
- input: input
- } );
+ var terminalView = this.terminalViews[ input.name ];
+ if( ! terminalView ) {
+ terminalView = new InputTerminalView( {
+ node: this.node,
+ input: input
+ } );
+ } else {
+ var terminal = terminalView.el.terminal;
+ terminal.update( input );
+ _.each( terminal.connectors, function( connector ) {
+ if( connector.handle1 && ! terminal.attachable( connector.handle1 ) ) {
+ connector.destroy();
+ }
+ } );
+ }
+ this.terminalViews[ input.name ] = terminalView;
var t = terminalView.el;
-
- // If already connected save old connection
- this.$( "div[name='" + input.name + "']" ).each( function() {
- $(this).find( ".input-terminal" ).each( function() {
- var c = this.terminal.connectors[0];
- if ( c ) {
- var terminal = t.terminal;
- if( c.handle1 && ! terminal.attachable( c.handle1 ) ) {
- // connection no longer valid, destroy it
- c.destroy();
- } else {
- terminal.connectors[0] = c;
- c.handle2 = terminal;
- }
- }
- });
- $(this).remove();
- });
var inputView = new DataInputView( {
"terminalElement": t,
"input": input,
@@ -831,7 +830,7 @@
// Append to new body
new_body.append( ib.prepend( t ) );
-
+ return terminalView;
},
addDataOutput: function( output ) {
diff -r b6c06c9db59b3197a4f1c55327679ae6a6a08263 -r 47ff7cab1ab1b92bcbb0db06d6892bdad856f005 test/qunit/tests/workflow_editor_tests.js
--- a/test/qunit/tests/workflow_editor_tests.js
+++ b/test/qunit/tests/workflow_editor_tests.js
@@ -385,11 +385,14 @@
var old_input_terminal = node.input_terminals.input1;
old_input_terminal.connectors.push( connector );
+ // Update node, make sure connector still the same...
test.update_field_data_with_new_input();
-
var new_input_terminal = node.input_terminals.input1;
- equal( old_input_terminal, old_input_terminal );
- notEqual( old_input_terminal, new_input_terminal );
+ equal( connector, new_input_terminal.connectors[ 0 ] );
+
+ // Update a second time, make sure connector still the same...
+ test.update_field_data_with_new_input();
+ new_input_terminal = node.input_terminals.input1;
equal( connector, new_input_terminal.connectors[ 0 ] );
} );
} );
https://bitbucket.org/galaxy/galaxy-central/commits/4a697eff1290/
Changeset: 4a697eff1290
User: jmchilton
Date: 2014-04-22 18:13:27
Summary: Workflow Editor: Merge addDataInput and replaceDataInput into common method.
Reduces some code duplication here and reduces even more code duplication downstream in dataset collections work.
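The merged-method shape can be sketched as below. This is a hedged simplification under stated assumptions: the view objects are plain-object stand-ins for `InputTerminalView`/`DataInputView`, and `defaultBody` is an array standing in for the node's `.inputs` container; only the optional-`body`/`skipResize` control flow mirrors the diff.

```javascript
// One addDataInput that either creates or refreshes a terminal view, with
// the target container optional (as in the merged method above).
function NodeView() {
    this.terminalViews = {};
    this.defaultBody = [];    // stand-in for the node's ".inputs" container
}
NodeView.prototype.addDataInput = function( input, body ) {
    var skipResize = true;
    if ( ! body ) {
        // Initial addition to the node: render into the node's own inputs
        // container and let the input participate in node-width calculation.
        body = this.defaultBody;
        skipResize = false;
    }
    var terminalView = this.terminalViews[ input.name ];
    if ( ! terminalView ) {
        terminalView = { input: input };   // create a fresh view
    } else {
        terminalView.input = input;        // refresh the existing view in place
    }
    this.terminalViews[ input.name ] = terminalView;
    body.push( { input: input, skipResize: skipResize } );
    return terminalView;
};
```

Callers that previously used `replaceDataInput( input, new_body )` now pass the replacement container explicitly, while initial rendering simply omits it, which is why the qunit tests above swap `replaceDataInput` for `addDataInput` with no other changes.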
Affected #: 2 files
diff -r 47ff7cab1ab1b92bcbb0db06d6892bdad856f005 -r 4a697eff12901cdb715667526bc097948646da86 static/scripts/galaxy.workflow_editor.canvas.js
--- a/static/scripts/galaxy.workflow_editor.canvas.js
+++ b/static/scripts/galaxy.workflow_editor.canvas.js
@@ -785,24 +785,14 @@
this.tool_body.append( $( "<div class='rule'></div>" ) );
},
- addDataInput: function( input ) {
- var terminalView = new InputTerminalView( {
- node: this.node,
- input: input
- } );
- this.terminalViews[ input.name ] = terminalView;
- var terminalElement = terminalView.el;
- var inputView = new DataInputView( {
- "terminalElement": terminalElement,
- "input": input,
- "nodeView": this,
- } );
- var ib = inputView.$el;
- var terminalElement = inputView.terminalElement;
- this.$( ".inputs" ).append( ib.prepend( terminalElement ) );
- },
-
- replaceDataInput: function( input, new_body ) {
+ addDataInput: function( input, body ) {
+ var skipResize = true;
+ if( ! body ) {
+ body = this.$( ".inputs" );
+ // initial addition to node - resize input to help calculate node
+ // width.
+ skipResize = false;
+ }
var terminalView = this.terminalViews[ input.name ];
if( ! terminalView ) {
terminalView = new InputTerminalView( {
@@ -819,17 +809,17 @@
} );
}
this.terminalViews[ input.name ] = terminalView;
- var t = terminalView.el;
+ var terminalElement = terminalView.el;
var inputView = new DataInputView( {
- "terminalElement": t,
- "input": input,
- "nodeView": this,
- "skipResize": true,
+ terminalElement: terminalElement,
+ input: input,
+ nodeView: this,
+ skipResize: skipResize
} );
var ib = inputView.$el;
- // Append to new body
- new_body.append( ib.prepend( t ) );
+ var terminalElement = inputView.terminalElement;
+ body.append( ib.prepend( terminalElement ) );
return terminalView;
},
diff -r 47ff7cab1ab1b92bcbb0db06d6892bdad856f005 -r 4a697eff12901cdb715667526bc097948646da86 test/qunit/tests/workflow_editor_tests.js
--- a/test/qunit/tests/workflow_editor_tests.js
+++ b/test/qunit/tests/workflow_editor_tests.js
@@ -477,7 +477,7 @@
test( "replacing terminal on data input update preserves connections", function() {
var connector = this.connectAttachedTerminal( "txt", "txt" );
var newElement = $("<div class='inputs'></div>");
- this.view.replaceDataInput( { name: "TestName", extensions: ["txt"] }, newElement );
+ this.view.addDataInput( { name: "TestName", extensions: ["txt"] }, newElement );
var terminal = newElement.find(".input-terminal")[ 0 ].terminal;
ok( connector.handle2 === terminal );
} );
@@ -487,7 +487,7 @@
var newElement = $("<div class='inputs'></div>");
var connector_destroy_spy = sinon.spy( connector, "destroy" );
// Replacing input with same name - but now of type bam should destroy connection.
- this.view.replaceDataInput( { name: "TestName", extensions: ["bam"] }, newElement );
+ this.view.addDataInput( { name: "TestName", extensions: ["bam"] }, newElement );
var terminal = newElement.find(".input-terminal")[ 0 ].terminal;
ok( connector_destroy_spy.called );
} );
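The merge in this changeset folds `replaceDataInput` into `addDataInput` by making the target body an optional parameter: with no `body` argument the method appends to the node's own `.inputs` container and allows a resize; with one, it behaves like the old replace path and reuses any existing terminal view. A simplified sketch of that pattern, with stand-in names (`NodeView`, `inputsBody`) instead of the real jQuery-backed classes:

```javascript
// Simplified sketch of the merged addDataInput pattern from the commit
// above; NodeView and inputsBody are illustrative stand-ins.
function NodeView() {
    this.terminalViews = {};
    this.inputsBody = []; // stands in for this.$( ".inputs" )
}

NodeView.prototype.addDataInput = function (input, body) {
    var skipResize = true;
    if (!body) {
        // Initial addition to the node: use the default container and
        // allow a resize to help calculate node width.
        body = this.inputsBody;
        skipResize = false;
    }
    // Reuse the existing terminal view for this input name when present;
    // this is what preserves connections across updates.
    var terminalView = this.terminalViews[input.name];
    if (!terminalView) {
        terminalView = { input: input, connectors: [] };
        this.terminalViews[input.name] = terminalView;
    }
    body.push({ input: input, skipResize: skipResize });
    return terminalView;
};
```

The updated tests exercise exactly this: calling `addDataInput` with an explicit element takes over the role the removed `replaceDataInput` used to fill.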
Repository URL: https://bitbucket.org/galaxy/galaxy-central/
--
This is a commit notification from bitbucket.org. You are receiving
this because you have the service enabled, addressing the recipient of
this email.
commit/galaxy-central: martenson: rename biostar menu link
by commits-noreply@bitbucket.org 22 Apr '14
1 new commit in galaxy-central:
https://bitbucket.org/galaxy/galaxy-central/commits/7a7985a007fb/
Changeset: 7a7985a007fb
Branch: stable
User: martenson
Date: 2014-04-22 16:53:35
Summary: rename biostar menu link
Affected #: 2 files
diff -r 1d06e19aa3aac0c87b21f8482b8fc73c89e277ad -r 7a7985a007fbce2bfa7bea5d8d23af1d63fecbb1 static/scripts/galaxy.menu.js
--- a/static/scripts/galaxy.menu.js
+++ b/static/scripts/galaxy.menu.js
@@ -195,7 +195,7 @@
if (this.options.biostar_url)
{
tab_help.add({
- title : "Galaxy Q&A Site",
+ title : "Galaxy Biostar",
content : this.options.biostar_url_redirect,
target : "_blank"
});
diff -r 1d06e19aa3aac0c87b21f8482b8fc73c89e277ad -r 7a7985a007fbce2bfa7bea5d8d23af1d63fecbb1 static/scripts/packed/galaxy.menu.js
--- a/static/scripts/packed/galaxy.menu.js
+++ b/static/scripts/packed/galaxy.menu.js
@@ -1,1 +1,1 @@
-define(["galaxy.masthead"],function(b){var a=Backbone.Model.extend({options:null,masthead:null,initialize:function(c){this.options=c.config;this.masthead=c.masthead;this.create()},create:function(){var e=new b.GalaxyMastheadTab({id:"analysis",title:"Analyze Data",content:"root/index",title_attribute:"Analysis home view"});this.masthead.append(e);var g={id:"workflow",title:"Workflow",content:"workflow",title_attribute:"Chain tools into workflows"};if(!this.options.user.valid){g.disabled=true}var d=new b.GalaxyMastheadTab(g);this.masthead.append(d);var i=new b.GalaxyMastheadTab({id:"shared",title:"Shared Data",content:"library/index",title_attribute:"Access published resources"});i.add({title:"Data Libraries",content:"library/index"});i.add({title:"Data Libraries Beta",content:"library/list",divider:true});i.add({title:"Published Histories",content:"history/list_published"});i.add({title:"Published Workflows",content:"workflow/list_published"});i.add({title:"Published Visualizations",content:"visualization/list_published"});i.add({title:"Published Pages",content:"page/list_published"});this.masthead.append(i);if(this.options.user.requests){var j=new b.GalaxyMastheadTab({id:"lab",title:"Lab"});j.add({title:"Sequencing Requests",content:"requests/index"});j.add({title:"Find Samples",content:"requests/find_samples_index"});j.add({title:"Help",content:this.options.lims_doc_url});this.masthead.append(j)}var c={id:"visualization",title:"Visualization",content:"visualization/list",title_attribute:"Visualize datasets"};if(!this.options.user.valid){c.disabled=true}var m=new b.GalaxyMastheadTab(c);if(this.options.user.valid){m.add({title:"New Track Browser",content:"visualization/trackster",target:"_frame"});m.add({title:"Saved Visualizations",content:"visualization/list",target:"_frame"})}this.masthead.append(m);if(this.options.enable_cloud_launch){var f=new b.GalaxyMastheadTab({id:"cloud",title:"Cloud",content:"cloudlaunch/index"});f.add({title:"New Cloud 
Cluster",content:"cloudlaunch/index"});this.masthead.append(f)}if(this.options.is_admin_user){var h=new b.GalaxyMastheadTab({id:"admin",title:"Admin",content:"admin/index",extra_class:"admin-only",title_attribute:"Administer this Galaxy"});this.masthead.append(h)}var l=new b.GalaxyMastheadTab({id:"help",title:"Help",title_attribute:"Support, contact, and community hubs"});if(this.options.biostar_url){l.add({title:"Galaxy Q&A Site",content:this.options.biostar_url_redirect,target:"_blank"});l.add({title:"Ask a question",content:"biostar/biostar_question_redirect",target:"_blank"})}l.add({title:"Support",content:this.options.support_url,target:"_blank"});l.add({title:"Search",content:this.options.search_url,target:"_blank"});l.add({title:"Mailing Lists",content:this.options.mailing_lists,target:"_blank"});l.add({title:"Videos",content:this.options.screencasts_url,target:"_blank"});l.add({title:"Wiki",content:this.options.wiki_url,target:"_blank"});l.add({title:"How to Cite Galaxy",content:this.options.citation_url,target:"_blank"});if(this.options.terms_url){l.add({title:"Terms and Conditions",content:this.options.terms_url,target:"_blank"})}this.masthead.append(l);if(!this.options.user.valid){var k=new b.GalaxyMastheadTab({id:"user",title:"User",extra_class:"loggedout-only",title_attribute:"Account registration or login"});k.add({title:"Login",content:"user/login",target:"galaxy_main"});if(this.options.allow_user_creation){k.add({title:"Register",content:"user/create",target:"galaxy_main"})}this.masthead.append(k)}else{var k=new b.GalaxyMastheadTab({id:"user",title:"User",extra_class:"loggedin-only",title_attribute:"Account preferences and saved data"});k.add({title:"Logged in as "+this.options.user.email});if(this.options.use_remote_user&&this.options.remote_user_logout_href){k.add({title:"Logout",content:this.options.remote_user_logout_href,target:"_top"})}else{k.add({title:"Preferences",content:"user?cntrller=user",target:"galaxy_main"});k.add({title:"Custom 
Builds",content:"user/dbkeys",target:"galaxy_main"});k.add({title:"Logout",content:"user/logout",target:"_top",divider:true})}k.add({title:"Saved Histories",content:"history/list",target:"galaxy_main"});k.add({title:"Saved Datasets",content:"dataset/list",target:"galaxy_main"});k.add({title:"Saved Pages",content:"page/list",target:"_top"});k.add({title:"API Keys",content:"user/api_keys?cntrller=user",target:"galaxy_main"});if(this.options.use_remote_user){k.add({title:"Public Name",content:"user/edit_username?cntrller=user",target:"galaxy_main"})}this.masthead.append(k)}if(this.options.active_view){this.masthead.highlight(this.options.active_view)}}});return{GalaxyMenu:a}});
\ No newline at end of file
+define(["galaxy.masthead"],function(b){var a=Backbone.Model.extend({options:null,masthead:null,initialize:function(c){this.options=c.config;this.masthead=c.masthead;this.create()},create:function(){var e=new b.GalaxyMastheadTab({id:"analysis",title:"Analyze Data",content:"root/index",title_attribute:"Analysis home view"});this.masthead.append(e);var g={id:"workflow",title:"Workflow",content:"workflow",title_attribute:"Chain tools into workflows"};if(!this.options.user.valid){g.disabled=true}var d=new b.GalaxyMastheadTab(g);this.masthead.append(d);var i=new b.GalaxyMastheadTab({id:"shared",title:"Shared Data",content:"library/index",title_attribute:"Access published resources"});i.add({title:"Data Libraries",content:"library/index"});i.add({title:"Data Libraries Beta",content:"library/list",divider:true});i.add({title:"Published Histories",content:"history/list_published"});i.add({title:"Published Workflows",content:"workflow/list_published"});i.add({title:"Published Visualizations",content:"visualization/list_published"});i.add({title:"Published Pages",content:"page/list_published"});this.masthead.append(i);if(this.options.user.requests){var j=new b.GalaxyMastheadTab({id:"lab",title:"Lab"});j.add({title:"Sequencing Requests",content:"requests/index"});j.add({title:"Find Samples",content:"requests/find_samples_index"});j.add({title:"Help",content:this.options.lims_doc_url});this.masthead.append(j)}var c={id:"visualization",title:"Visualization",content:"visualization/list",title_attribute:"Visualize datasets"};if(!this.options.user.valid){c.disabled=true}var m=new b.GalaxyMastheadTab(c);if(this.options.user.valid){m.add({title:"New Track Browser",content:"visualization/trackster",target:"_frame"});m.add({title:"Saved Visualizations",content:"visualization/list",target:"_frame"})}this.masthead.append(m);if(this.options.enable_cloud_launch){var f=new b.GalaxyMastheadTab({id:"cloud",title:"Cloud",content:"cloudlaunch/index"});f.add({title:"New Cloud 
Cluster",content:"cloudlaunch/index"});this.masthead.append(f)}if(this.options.is_admin_user){var h=new b.GalaxyMastheadTab({id:"admin",title:"Admin",content:"admin/index",extra_class:"admin-only",title_attribute:"Administer this Galaxy"});this.masthead.append(h)}var l=new b.GalaxyMastheadTab({id:"help",title:"Help",title_attribute:"Support, contact, and community hubs"});if(this.options.biostar_url){l.add({title:"Galaxy Biostar",content:this.options.biostar_url_redirect,target:"_blank"});l.add({title:"Ask a question",content:"biostar/biostar_question_redirect",target:"_blank"})}l.add({title:"Support",content:this.options.support_url,target:"_blank"});l.add({title:"Search",content:this.options.search_url,target:"_blank"});l.add({title:"Mailing Lists",content:this.options.mailing_lists,target:"_blank"});l.add({title:"Videos",content:this.options.screencasts_url,target:"_blank"});l.add({title:"Wiki",content:this.options.wiki_url,target:"_blank"});l.add({title:"How to Cite Galaxy",content:this.options.citation_url,target:"_blank"});if(this.options.terms_url){l.add({title:"Terms and Conditions",content:this.options.terms_url,target:"_blank"})}this.masthead.append(l);if(!this.options.user.valid){var k=new b.GalaxyMastheadTab({id:"user",title:"User",extra_class:"loggedout-only",title_attribute:"Account registration or login"});k.add({title:"Login",content:"user/login",target:"galaxy_main"});if(this.options.allow_user_creation){k.add({title:"Register",content:"user/create",target:"galaxy_main"})}this.masthead.append(k)}else{var k=new b.GalaxyMastheadTab({id:"user",title:"User",extra_class:"loggedin-only",title_attribute:"Account preferences and saved data"});k.add({title:"Logged in as "+this.options.user.email});if(this.options.use_remote_user&&this.options.remote_user_logout_href){k.add({title:"Logout",content:this.options.remote_user_logout_href,target:"_top"})}else{k.add({title:"Preferences",content:"user?cntrller=user",target:"galaxy_main"});k.add({title:"Custom 
Builds",content:"user/dbkeys",target:"galaxy_main"});k.add({title:"Logout",content:"user/logout",target:"_top",divider:true})}k.add({title:"Saved Histories",content:"history/list",target:"galaxy_main"});k.add({title:"Saved Datasets",content:"dataset/list",target:"galaxy_main"});k.add({title:"Saved Pages",content:"page/list",target:"_top"});k.add({title:"API Keys",content:"user/api_keys?cntrller=user",target:"galaxy_main"});if(this.options.use_remote_user){k.add({title:"Public Name",content:"user/edit_username?cntrller=user",target:"galaxy_main"})}this.masthead.append(k)}if(this.options.active_view){this.masthead.highlight(this.options.active_view)}}});return{GalaxyMenu:a}});
\ No newline at end of file
Repository URL: https://bitbucket.org/galaxy/galaxy-central/
commit/galaxy-central: carlfeberhard: UI: rework zh and ja locale data
by commits-noreply@bitbucket.org 22 Apr '14
1 new commit in galaxy-central:
https://bitbucket.org/galaxy/galaxy-central/commits/008c26226230/
Changeset: 008c26226230
User: carlfeberhard
Date: 2014-04-22 17:58:22
Summary: UI: rework zh and ja locale data
Affected #: 21 files
diff -r e7ec387cb3b3b8e6aaceade0c8bdd3da0077ac57 -r 008c26226230353a2440e6c7208de58575cfebff static/scripts/mvc/annotations.js
--- a/static/scripts/mvc/annotations.js
+++ b/static/scripts/mvc/annotations.js
@@ -1,6 +1,7 @@
define([
- "mvc/base-mvc"
-], function( baseMVC ){
+ "mvc/base-mvc",
+ "utils/localization"
+], function( baseMVC, _l ){
// =============================================================================
/** A view on any model that has a 'annotation' attribute
*/
diff -r e7ec387cb3b3b8e6aaceade0c8bdd3da0077ac57 -r 008c26226230353a2440e6c7208de58575cfebff static/scripts/mvc/dataset/hda-base.js
--- a/static/scripts/mvc/dataset/hda-base.js
+++ b/static/scripts/mvc/dataset/hda-base.js
@@ -239,8 +239,8 @@
var menuId = 'dataset-' + this.model.get( 'id' ) + '-popup',
html = [
'<div popupmenu="' + menuId + '">',
- '<a href="' + urls.download + '">', _l( 'Download Dataset' ), '</a>',
- '<a>' + _l( 'Additional Files' ) + '</a>',
+ '<a href="' + urls.download + '">', _l( 'Download dataset' ), '</a>',
+ '<a>' + _l( 'Additional files' ) + '</a>',
_.map( meta_files, function( meta_file ){
return [
@@ -647,13 +647,13 @@
// purged and deleted
'<% if( hda.purged ){ %>',
'<div class="dataset-purged-msg warningmessagesmall"><strong>',
- _l( 'This dataset has been deleted and removed from disk.' ),
+ _l( 'This dataset has been deleted and removed from disk' ) + '.',
'</strong></div>',
// deleted not purged
'<% } else { %>',
'<div class="dataset-deleted-msg warningmessagesmall"><strong>',
- _l( 'This dataset has been deleted.' ),
+ _l( 'This dataset has been deleted' ) + '.',
'</strong></div>',
'<% } %>',
'<% } %>',
@@ -661,7 +661,7 @@
// hidden
'<% if( !hda.visible ){ %>',
'<div class="dataset-hidden-msg warningmessagesmall"><strong>',
- _l( 'This dataset has been hidden.' ),
+ _l( 'This dataset has been hidden' ) + '.',
'</strong></div>',
'<% } %>',
'</div>',
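The edits above consistently move sentence-final punctuation outside the localized key (`_l( 'This dataset has been deleted' ) + '.'`), so one key can serve locales with different punctuation conventions. A minimal sketch of a lookup-based `_l()` of the kind these locale files assume, where the dictionary shape mirrors `static/scripts/nls/<lang>/locale.js`; the data here is one entry taken from the ja diff below:

```javascript
// Minimal sketch of a lookup-based _l(); not Galaxy's actual
// utils/localization implementation.
var localeData = {
    "This dataset has been deleted": "このデータセットは削除されました"
};

function _l(key) {
    // Fall back to the English key when no translation exists,
    // which is why untranslated entries can stay commented out.
    var translated = localeData[key];
    return translated === undefined ? key : translated;
}

// Punctuation stays outside the key, as in the diff above:
var msg = _l("This dataset has been deleted") + ".";
```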
diff -r e7ec387cb3b3b8e6aaceade0c8bdd3da0077ac57 -r 008c26226230353a2440e6c7208de58575cfebff static/scripts/mvc/dataset/hda-edit.js
--- a/static/scripts/mvc/dataset/hda-edit.js
+++ b/static/scripts/mvc/dataset/hda-edit.js
@@ -223,7 +223,7 @@
// No need for popup menu because there's a single visualization.
if( visualizations.length === 1 ){
- $icon.attr( 'data-original-title', _l( 'Visualize in ' ) + _l( titleCase( visualizations[0] ) ) );
+ $icon.attr( 'data-original-title', _l( 'Visualize in' ) + ' ' + _l( titleCase( visualizations[0] ) ) );
$icon.click( create_viz_action( visualizations[0] ) );
// >1: Populate menu dict with visualization fns, make the popupmenu
@@ -253,7 +253,7 @@
// No need for popup menu because there's a single visualization.
if( visualizations.length === 1 ) {
var onlyVisualization = visualizations[0];
- $icon.attr( 'data-original-title', _l( 'Visualize in ' ) + onlyVisualization.html );
+ $icon.attr( 'data-original-title', _l( 'Visualize in' ) + ' ' + onlyVisualization.html );
$icon.attr( 'href', onlyVisualization.href );
// >1: Populate menu dict with visualization fns, make the popupmenu
@@ -284,13 +284,20 @@
_buildNewRender : function(){
var $newRender = hdaBase.HDABaseView.prototype._buildNewRender.call( this );
- //TODO: this won't localize easily
- $newRender.find( '.dataset-deleted-msg' ).append(
- _l( ' Click <a href="javascript:void(0);" class="dataset-undelete">here</a> to undelete it' +
- ' or <a href="javascript:void(0);" class="dataset-purge">here</a> to immediately remove it from disk' ));
+//TODO: this won't localize easily
+ var br = '<br />', p = '.',
+ link = function( t, c ){
+ return [ '<a href="javascript:void(0)" class="', c, '">', t, '</a>' ].join( '' );
+ };
- $newRender.find( '.dataset-hidden-msg' ).append(
- _l( ' Click <a href="javascript:void(0);" class="dataset-unhide">here</a> to unhide it' ));
+ $newRender.find( '.dataset-deleted-msg' ).append([
+ br, link( _l( 'Undelete it' ), 'dataset-undelete' ), p,
+ br, link( _l( 'Permanently remove it from disk' ), 'dataset-purge' ), p
+ ].join( '' ));
+
+ $newRender.find( '.dataset-hidden-msg' ).append([
+ br, link( _l( 'Unhide it' ), 'dataset-unhide' ), p
+ ].join( '' ));
return $newRender;
},
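The `link()` helper introduced in the hunk above replaces hard-coded anchor markup inside a long localized sentence with short, separately localizable labels ("Undelete it", "Unhide it"). Reconstructed as standalone code from the diff:

```javascript
// The link() helper from the hda-edit.js hunk above: wrap a localized
// label in anchor markup so only the label needs translation.
function link(t, c) {
    return ['<a href="javascript:void(0)" class="', c, '">', t, '</a>'].join('');
}
```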
diff -r e7ec387cb3b3b8e6aaceade0c8bdd3da0077ac57 -r 008c26226230353a2440e6c7208de58575cfebff static/scripts/mvc/history/current-history-panel.js
--- a/static/scripts/mvc/history/current-history-panel.js
+++ b/static/scripts/mvc/history/current-history-panel.js
@@ -221,7 +221,7 @@
$emptyMsg.empty();
$emptyMsg.html([
- _l( 'This history is empty. ' ), _l( 'You can ' ),
+ _l( 'This history is empty' ), '. ', _l( 'You can ' ),
'<a class="uploader-link" href="javascript:void(0)">',
_l( 'load your own data' ),
'</a>',
diff -r e7ec387cb3b3b8e6aaceade0c8bdd3da0077ac57 -r 008c26226230353a2440e6c7208de58575cfebff static/scripts/mvc/history/history-panel.js
--- a/static/scripts/mvc/history/history-panel.js
+++ b/static/scripts/mvc/history/history-panel.js
@@ -137,7 +137,7 @@
panel.toggleHDAAnnotationEditors( false, panel.fxSpeed );
},
$activator : faIconButton({
- title : _l( 'Edit history Annotation' ),
+ title : _l( 'Edit history annotation' ),
classes : 'history-annotate-btn',
faIcon : 'fa-comment'
}).appendTo( $where.find( '.history-secondary-actions' ) )
diff -r e7ec387cb3b3b8e6aaceade0c8bdd3da0077ac57 -r 008c26226230353a2440e6c7208de58575cfebff static/scripts/mvc/history/readonly-history-panel.js
--- a/static/scripts/mvc/history/readonly-history-panel.js
+++ b/static/scripts/mvc/history/readonly-history-panel.js
@@ -249,7 +249,7 @@
/** Modify an error message to be fancy and wear a monocle. */
_bePolite : function( msg ){
msg = msg || _l( 'An error occurred while getting updates from the server' );
- return msg + '. ' + _l( 'Please contact a Galaxy administrator if the problem persists.' );
+ return msg + '. ' + _l( 'Please contact a Galaxy administrator if the problem persists' ) + '.';
},
// ------------------------------------------------------------------------ loading history/hda models
@@ -284,6 +284,7 @@
return panel._loadHistoryFromXHR( xhr, attributes )
.fail( function( xhr, where, history ){
// throw an error up for the error handler
+//TODO: difficult to localize - use template
panel.trigger( 'error', panel, xhr, attributes, _l( 'An error was encountered while ' + where ),
{ historyId: historyId, history: history || {} });
})
@@ -1002,8 +1003,8 @@
'</div>',
'<div class="quota-message errormessage">',
- _l( 'You are over your disk quota.' ),
- _l( 'Tool execution is on hold until your disk usage drops below your allocated quota.' ),
+ _l( 'You are over your disk quota' ), '. ',
+ _l( 'Tool execution is on hold until your disk usage drops below your allocated quota' ), '.',
'</div>',
'<div class="tags-display"></div>',
@@ -1024,7 +1025,7 @@
// where the datasets/hdas are added
'<div class="datasets-list"></div>',
'<div class="empty-history-message infomessagesmall">',
- _l( 'Your history is empty. Click \'Get Data\' on the left pane to start' ),
+ _l( 'This history is empty' ),
'</div>'
].join( '' );
diff -r e7ec387cb3b3b8e6aaceade0c8bdd3da0077ac57 -r 008c26226230353a2440e6c7208de58575cfebff static/scripts/mvc/tags.js
--- a/static/scripts/mvc/tags.js
+++ b/static/scripts/mvc/tags.js
@@ -1,6 +1,7 @@
define([
- "mvc/base-mvc"
-], function( baseMVC ){
+ "mvc/base-mvc",
+ "utils/localization"
+], function( baseMVC, _l ){
// =============================================================================
/** A view on any model that has a 'tags' attribute (a list of tag strings)
* Incorporates the select2 jQuery plugin for tags display/editing:
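The `annotations.js` and `tags.js` hunks above add `"utils/localization"` to each module's AMD dependency list so `_l` is injected by the loader. A runnable sketch of that pattern, using a tiny stand-in `define()` shim (the real code relies on RequireJS, and the `_l` here is a pass-through stand-in):

```javascript
// Minimal AMD shim so this sketch runs standalone; illustrative only.
var modules = {
    "mvc/base-mvc": {},
    "utils/localization": function (key) { return key; } // pass-through _l
};
function define(deps, factory) {
    return factory.apply(null, deps.map(function (d) { return modules[d]; }));
}

// The pattern the commit adds: declare "utils/localization" as a
// dependency so _l is available inside the module.
var annotationsModule = define([
    "mvc/base-mvc",
    "utils/localization"
], function (baseMVC, _l) {
    return {
        deletedMessage: function () {
            return _l("This dataset has been deleted") + ".";
        }
    };
});
```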
diff -r e7ec387cb3b3b8e6aaceade0c8bdd3da0077ac57 -r 008c26226230353a2440e6c7208de58575cfebff static/scripts/nls/ja/locale.js
--- a/static/scripts/nls/ja/locale.js
+++ b/static/scripts/nls/ja/locale.js
@@ -1,389 +1,217 @@
/** ja localization */
define({
-// templates/history/options.mako:24
-"Are you sure you want to delete the current history?" :
-"現在のヒストリーを消すことに同意しますか?",
+// ---------------------------------------------------------------------------- histories
+// ---- history-model
+// ---- readonly-history-panel
+"This history is empty" :
+"ヒストリーは空です",
+"No matching datasets found" :
+"一致するデータセットが見つかりませんでした",
+//"An error occurred while getting updates from the server" :
+//false,
+//"Please contact a Galaxy administrator if the problem persists" :
+//false,
+//TODO:
+//"An error was encountered while <% where %>" :
+//false,
+"Search datasets" :
+"データセットを検索する",
+"You are currently viewing a deleted history!" :
+"消去したヒストリーをみています。",
+"You are over your disk quota" :
+"あなたはディスククォータを超えている",
+//"Tool execution is on hold until your disk usage drops below your allocated quota" :
+//false,
+"All" :
+"一式",
+"None" :
+"なし",
+"For all selected" :
+"各項目を",
-// templates/root/history.mako:38
-"collapse all" :
-"すべてをおりたたむ",
+// ---- history-panel
+//"Edit history tags" :
+//false,
+//"Edit history Annotation" :
+//false,
+"Click to rename history" :
+"ヒストリーの名前を変更するにはクリック",
+// multi operations
+"Operations on multiple datasets" :
+"複数のデータセットに対する操作",
+//"Hide datasets" :
+//false,
+//"Unhide datasets" :
+//false,
+//"Delete datasets" :
+//false,
+//"Undelete datasets" :
+//false,
+"Permanently delete datasets" :
+"永久にデータセットを削除",
+"This will permanently remove the data in your datasets. Are you sure?" :
+"これは永久にあなたのデータセット内のデータを削除します。本当に?",
-// templates/dataset/edit_attributes.mako:2
-"History Item Attributes" :
-"ヒストリーアイテム変数",
+// ---- annotated-history-panel
+"Dataset" :
+"データセット",
+//"Annotation" :
+//false,
-// templates/dataset/edit_attributes.mako:19
-"Edit Attributes" :
+// ---- current-history-panel
+"This history is empty. Click 'Get Data' on the left tool menu to start" :
+"ヒストリーは空です。解析をはじめるには、左パネルの 'データ取得' をクリック",
+"No matching datasets found" :
+"一致するデータセットが見つかりませんでした",
+"You must be logged in to create histories" :
+"ヒストリーを作成するためにはログインする必要があります",
+//TODO:
+//"You can <% loadYourOwn %> or <% externalSource %>" :
+//false,
+//"load your own data" :
+//false,
+//"get data from an external source" :
+//false,
+
+// these aren't in zh/ginga.po and the template doesn't localize
+//"Include Deleted Datasets" :
+//false,
+//"Include Hidden Datasets" :
+//false,
+
+
+// ---------------------------------------------------------------------------- datasets
+// ---- hda-model
+//"Unable to purge dataset" :
+//false,
+
+// ---- hda-base
+// display button
+//"Cannot display datasets removed from disk" :
+//false,
+//"This dataset must finish uploading before it can be viewed" :
+//false,
+//"This dataset is not yet viewable" :
+//false,
+"View data" :
+"データを表示",
+// download button
+"Download" :
+"ダウンロード",
+"Download dataset" :
+"データセットをダウンロード",
+//"Additional files" :
+//false,
+// info/show_params
+"View details" :
+"細部を表示",
+
+// dataset states
+// state: new
+//"This is a new dataset and not all of its data are available yet" :
+//false,
+// state: noPermission
+//"You do not have permission to view this dataset" :
+//false,
+// state: discarded
+//"The job creating this dataset was cancelled before completion" :
+//false,
+// state: queued
+"This job is waiting to run" :
+"ジョブは実行待ちです",
+// state: upload
+//"This dataset is currently uploading" :
+//false,
+// state: setting_metadata
+//"Metadata is being auto-detected" :
+//false,
+// state: running
+"This job is currently running" :
+"ジョブは実行中です",
+// state: paused
+//"This job is paused. Use the \"Resume Paused Jobs\" in the history menu to resume" :
+//false,
+// state: error
+"An error occurred with this dataset" :
+"このジョブの実行中に発生したエラー",
+// state: empty
+"No data" :
+"データ無し",
+// state: failed_metadata
+//"An error occurred setting the metadata for this dataset" :
+//false,
+
+// ajax error prefix
+//"There was an error getting the data for this dataset" :
+//false,
+
+// purged'd/del'd msg
+"This dataset has been deleted and removed from disk" :
+"このデータセットは、永続的にディスクから削除されました",
+"This dataset has been deleted" :
+"このデータセットは削除されました",
+"This dataset has been hidden" :
+"このデータセットは、非表示にされた",
+
+"format" :
+"フォーマット",
+"database" :
+"データベース",
+
+// ---- hda-edit
+"Edit attributes" :
"変数を編集する",
+//"Cannot edit attributes of datasets removed from disk" :
+//false,
+//"Undelete dataset to edit attributes" :
+//false,
+//"This dataset must finish uploading before it can be edited" :
+//false,
+//"This dataset is not yet editable" :
+//false,
-// templates/dataset/edit_attributes.mako:64
-"This will inspect the dataset and attempt to correct the above column values if they are not accurate." :
-"これはデータセットを調査して上記のカラムの値を修正することを試みます。",
-
-// templates/dataset/edit_attributes.mako:68
-"Required metadata values are missing. Some of these values may not be editable by the user. Selecting \"Auto-detect\" will attempt to fix these values." :
-"必要なメタデータの値が不明です。それらのいくつかの値はユーザによって編集可能にはなっていません。「自動判定」を選択するとそれらの値をただしくできるかもしれません。",
-
-// templates/dataset/edit_attributes.mako:78
-"Convert to new format" :
-"新しいフォーマットに変換する",
-
-// templates/dataset/edit_attributes.mako:84
-"Convert to" :
-"変換する",
-
-// templates/dataset/edit_attributes.mako:95
-"This will create a new dataset with the contents of this dataset converted to a new format." :
-"新しいフォーマットに変換したデータセットを新規作成します。",
-
-// templates/dataset/edit_attributes.mako:111
-"Change data type" :
-"データタイプを変更する",
-
-// templates/dataset/edit_attributes.mako:117
-"New Type" :
-"新しいタイプ",
-
-// templates/dataset/edit_attributes.mako:124
-"This will change the datatype of the existing dataset but <i>not</i> modify its contents. Use this if Galaxy has incorrectly guessed the type of your dataset." :
-"これは既存のデータセットのデータタイプを変更します。しかしデータセットの中身は変更しません。データセットのタイプの誤判定があったときに使用します。",
-
-// templates/dataset/edit_attributes.mako:137
-"Copy History Item" :
-"ヒストリーアイテムをコピーする",
-
-// templates/history/list.mako:3
-"Your saved histories" :
-"保存したヒストリー",
-
-// templates/history/list.mako:19
-"Stored Histories" :
-"格納してあるヒストリー",
-
-// templates/history/list.mako:21 templates/root/history.mako:239
-"hide deleted" :
-"削除したヒストリーを隠す",
-
-// templates/history/list.mako:23
-"show deleted" :
-"削除したヒストリーを表示する",
-
-// templates/history/list.mako:27
-"Name" :
-"名前",
-
-// templates/history/list.mako:27
-"Size" :
-"サイズ",
-
-// templates/history/list.mako:27
-"Last modified" :
-"最終更新日",
-
-// templates/history/list.mako:27
-"Actions" :
-"操作",
-
-// templates/history/list.mako:45
-"rename" :
-"名称変更する",
-
-// templates/history/list.mako:46
-"switch to" :
-"変更する",
-
-// templates/history/list.mako:47
-"delete" :
-"削除する",
-
-// templates/history/list.mako:49
-"undelete" :
-"削除から戻す",
-
-// templates/history/list.mako:55
-"Action" :
-"操作",
-
-// templates/history/list.mako:56 templates/history/options.mako:21
-"Share" :
-"共有",
-
-// templates/history/list.mako:56 templates/history/options.mako:15
-"Rename" :
-"名称変更する",
-
-// templates/history/list.mako:56 templates/history/options.mako:24
"Delete" :
"削除する",
+//"Dataset is already deleted" :
+//false,
-// templates/history/list.mako:58
-"Undelete" :
-"削除から戻す",
+"View or report this error" :
+"このエラーを届け出る",
-// templates/history/list.mako:65
-"You have no stored histories" :
-"保管してあるヒストリーはありません",
+"Run this job again" :
+"もう一度このジョブを実行する",
-// templates/history/options.mako:5
-"History Options" :
-"ヒストリーオプション",
+"Visualize" :
+"可視化する",
+//"Visualize in" :
+//false,
-// templates/history/options.mako:9
-"You must be " :
-"あなたは",
+"Undelete it" :
+"復元する",
+"Permanently remove it from disk" :
+"永久にディスクから削除",
+"Unhide it" :
+"非表示解除する",
-// templates/history/options.mako:9
-"logged in" :
-"ログイン",
+//"You may be able to" :
+//false,
+//"set it manually or retry auto-detection" :
+//false,
-// templates/history/options.mako:9
-" to store or switch histories." :
-"しないとヒストリーの保管や変更ができません。",
+//"Edit dataset tags" :
+//false,
+//"Edit dataset annotation" :
+//false,
-// templates/history/options.mako:15
-// python-format
-" current history (stored as \"%s\")" :
-" 現在のヒストリー(\"%s\" として保管されています)",
-// templates/history/options.mako:16
-"List" :
-"リストする",
+// ---------------------------------------------------------------------------- misc. MVC
+//"Tags" :
+//false,
+//"Annotation" :
+//false,
+//"Edit annotation" :
+//false,
-// templates/history/options.mako:16
-" previously stored histories" :
-" 以前に保管したヒストリー",
-// templates/history/options.mako:18
-"Create" :
-"作成する",
-
-// templates/history/options.mako:18
-" a new empty history" :
-" 新規ヒストリー",
-
-// templates/history/options.mako:20
-"Construct workflow" :
-"ワークフローを構築する",
-
-// templates/history/options.mako:20
-" from the current history" :
-" 現在のヒストリーから",
-
-// templates/history/options.mako:21 templates/history/options.mako:24
-" current history" :
-" 現在のヒストリー",
-
-// templates/history/options.mako:23
-"Show deleted" :
-"削除したヒストリーを表示する",
-
-// templates/history/options.mako:23
-" datasets in history" :
-" ヒストリーのデータセット",
-
-// templates/history/rename.mako:3 templates/history/rename.mako:6
-"Rename History" :
-"ヒストリーの名称変更をする",
-
-
-"Rename Histories" :
-"名称変更する",
-
-"Perform Action" :
-"操作を実行する",
-
-"Submit" :
-"登録する",
-
-
-
-// templates/history/rename.mako:10
-"Current Name" :
-"現在の名称",
-
-// templates/history/rename.mako:10
-"New Name" :
-"新しい名称",
-
-// templates/history/share.mako:3
-"Share histories" :
-"ヒストリーを共有する",
-
-// templates/history/share.mako:6
-"Share Histories" :
-"ヒストリーを共有する",
-
-// templates/history/share.mako:9
-"History Name:" :
-"ヒストリー名",
-
-// templates/history/share.mako:9
-"Number of Datasets:" :
-"データセット数",
-
-// templates/history/share.mako:9
-"Share Link" :
-"共有リンク",
-
-// templates/history/share.mako:15
-"This history contains no data." :
-"このヒストリーにはデータがありません。",
-
-// templates/history/share.mako:21
-"copy link to share" :
-"共有リンクをコピーする",
-
-// templates/history/share.mako:24
-"Email of User to share with:" :
-"共有したいユーザのEメール:",
-//"つぎのヒストリーを共有するユーザのEメールアドレス:"
-
-// templates/root/history.mako:7
-"Galaxy History" :
-"Galaxy ヒストリー",
-
-// templates/root/history.mako:237
-"refresh" :
-"リフレッシュ",
-
-// templates/root/history.mako:245
-"You are currently viewing a deleted history!" :
-"消去したヒストリーをみています。",
-
-// templates/root/history.mako:289
-"Your history is empty. Click 'Get Data' on the left pane to start" :
-"ヒストリーは空です。解析をはじめるには、左パネルの 'データ取得' をクリック",
-
-// templates/root/history_common.mako:41
-"Job is waiting to run" :
-"ジョブは実行待ちです",
-
-// templates/root/history_common.mako:43
-"Job is currently running" :
-"ジョブは実行中です",
-
-// templates/root/history_common.mako:46
-"An error occurred running this job: " :
-"このジョブの実行中に発生したエラー: ",
-
-// templates/root/history_common.mako:47
-"report this error" :
-"このエラーを報告する",
-
-// templates/root/history_common.mako:54
-"No data: " :
-"データ無し: ",
-
-// templates/root/history_common.mako:58
-"format: " :
-"フォーマット: ",
-
-// templates/root/history_common.mako:59
-"database: " :
-"データベース: ",
-
-// templates/root/history_common.mako:66 templates/root/masthead.mako:20
-"Info: " :
-"情報: ",
-
-// templates/root/history_common.mako:85
-// python-format
-"Error: unknown dataset state \"%s\"." :
-"エラー: 不明なデータ状態 \"%s\"。",
-
-// templates/root/index.mako:32
-"Options" :
-"オプション",
-
-// templates/root/index.mako:34
-"History" :
-"ヒストリー",
-
-// templates/root/masthead.mako:20
-"report bugs" :
-"バグを報告する",
-
-// templates/root/masthead.mako:21
-"wiki" :
-"wiki",
-
-// templates/root/masthead.mako:22
-"screencasts" :
-"スクリーンキャスト",
-
-// templates/root/masthead.mako:23
-"blog" :
-"ブログ",
-
-// templates/root/masthead.mako:31
-// python-format
-"Logged in as %s: " :
-"%s としてログイン中: ",
-
-// templates/root/masthead.mako:31
-"manage" :
-"管理",
-
-// templates/root/masthead.mako:32
-"logout" :
-"ログアウト",
-
-// templates/root/masthead.mako:34
-"Account: " :
-"アカウント: ",
-
-// templates/root/masthead.mako:34
-"create" :
-"作成",
-
-// templates/root/masthead.mako:35
-"login" :
-"ログイン",
-
-// templates/root/tool_menu.mako:52
-"Galaxy Tools" :
-"Galaxy ツール群",
-
-// templates/root/tool_menu.mako:129
-"Workflow" :
-"ワークフロー",
-
-// templates/root/tool_menu.mako:134
-"Manage" :
-"管理",
-
-// templates/root/tool_menu.mako:134
-"workflows" :
-"ワークフロー",
-
-// templates/user/index.mako:2 templates/user/index.mako:4
-"Account settings" :
-"アカウント設定",
-
-// templates/user/index.mako:7
-// python-format
-"You are currently logged in as %s." :
-"%s としてログイン中。",
-
-// templates/user/index.mako:9
-"Change your password" :
-"パスワード変更",
-
-// templates/user/index.mako:10
-"Update your email address" :
-"メールアドレス変更",
-
-// templates/user/index.mako:11
-"Logout" :
-"ログアウト",
-
-// templates/user/index.mako:16
-"Login" :
-"ログイン",
-
-// templates/user/index.mako:17
-"Create new account" :
-"新規アカウントを作成する"
-
-})
+});
diff -r e7ec387cb3b3b8e6aaceade0c8bdd3da0077ac57 -r 008c26226230353a2440e6c7208de58575cfebff static/scripts/nls/locale.js
--- a/static/scripts/nls/locale.js
+++ b/static/scripts/nls/locale.js
@@ -1,514 +1,219 @@
/** en/main localization hash - for use with requirejs' i18n plugin */
define({
root : {
-// ---------------------------------------------------------------------------- localized
-"history" :
+
+// ---------------------------------------------------------------------------- histories
+// ---- history-model
+// ---- readonly-history-panel
+"This history is empty" :
+false,
+"No matching datasets found" :
+false,
+"An error occurred while getting updates from the server" :
+false,
+"Please contact a Galaxy administrator if the problem persists" :
+false,
+//TODO:
+//"An error was encountered while <% where %>" :
+//false,
+"Search datasets" :
+false,
+"You are currently viewing a deleted history!" :
+false,
+"You are over your disk quota" :
+false,
+"Tool execution is on hold until your disk usage drops below your allocated quota" :
+false,
+"All" :
+false,
+"None" :
+false,
+"For all selected" :
false,
-// templates/history/options.mako:24
-"Are you sure you want to delete the current history?" :
+// ---- history-panel
+"Edit history tags" :
+false,
+"Edit history Annotation" :
+false,
+"Click to rename history" :
+false,
+// multi operations
+"Operations on multiple datasets" :
+false,
+"Hide datasets" :
+false,
+"Unhide datasets" :
+false,
+"Delete datasets" :
+false,
+"Undelete datasets" :
+false,
+"Permanently delete datasets" :
+false,
+"This will permanently remove the data in your datasets. Are you sure?" :
false,
-// templates/root/history.mako:38
-"collapse all" :
+// ---- annotated-history-panel
+"Dataset" :
+false,
+"Annotation" :
false,
-// templates/dataset/edit_attributes.mako:2
-"History Item Attributes" :
+// ---- current-history-panel
+"This history is empty. Click 'Get Data' on the left tool menu to start" :
+false,
+"No matching datasets found" :
+false,
+"You must be logged in to create histories" :
+false,
+//TODO:
+//"You can <% loadYourOwn %> or <% externalSource %>" :
+//false,
+//"load your own data" :
+//false,
+//"get data from an external source" :
+//false,
+
+// these aren't in zh/ginga.po and the template doesn't localize
+//"Include Deleted Datasets" :
+//false,
+//"Include Hidden Datasets" :
+//false,
+
+
+// ---------------------------------------------------------------------------- datasets
+// ---- hda-model
+"Unable to purge dataset" :
false,
-// templates/dataset/edit_attributes.mako:19
-"Edit Attributes" :
+// ---- hda-base
+// display button
+"Cannot display datasets removed from disk" :
+false,
+"This dataset must finish uploading before it can be viewed" :
+false,
+"This dataset is not yet viewable" :
+false,
+"View data" :
+false,
+// download button
+"Download" :
+false,
+"Download dataset" :
+false,
+"Additional files" :
+false,
+// info/show_params
+"View details" :
false,
-// templates/dataset/edit_attributes.mako:64
-"This will inspect the dataset and attempt to correct the above column values if they are not accurate." :
+// dataset states
+// state: new
+"This is a new dataset and not all of its data are available yet" :
+false,
+// state: noPermission
+"You do not have permission to view this dataset" :
+false,
+// state: discarded
+"The job creating this dataset was cancelled before completion" :
+false,
+// state: queued
+"This job is waiting to run" :
+false,
+// state: upload
+"This dataset is currently uploading" :
+false,
+// state: setting_metadata
+"Metadata is being auto-detected" :
+false,
+// state: running
+"This job is currently running" :
+false,
+// state: paused
+"This job is paused. Use the \"Resume Paused Jobs\" in the history menu to resume" :
+false,
+// state: error
+"An error occurred with this dataset" :
+false,
+// state: empty
+"No data" :
+false,
+// state: failed_metadata
+"An error occurred setting the metadata for this dataset" :
false,
-// templates/dataset/edit_attributes.mako:68
-"Required metadata values are missing. Some of these values may not be editable by the user. Selecting \"Auto-detect\" will attempt to fix these values." :
-false,
-
-// templates/dataset/edit_attributes.mako:78
-"Convert to new format" :
-false,
-
-// templates/dataset/edit_attributes.mako:84
-"Convert to" :
-false,
-
-// templates/dataset/edit_attributes.mako:95
-"This will create a new dataset with the contents of this dataset converted to a new format." :
-false,
-
-// templates/dataset/edit_attributes.mako:111
-"Change data type" :
-false,
-
-// templates/dataset/edit_attributes.mako:117
-"New Type" :
-false,
-
-// templates/dataset/edit_attributes.mako:124
-"This will change the datatype of the existing dataset but <i>not</i> modify its contents. Use this if Galaxy has incorrectly guessed the type of your dataset." :
-false,
-
-// templates/dataset/edit_attributes.mako:137
-"Copy History Item" :
-false,
-
-// templates/history/list.mako:3
-"Your saved histories" :
-false,
-
-// templates/history/list.mako:19
-"Stored Histories" :
-false,
-
-// templates/history/list.mako:21 templates/root/history.mako:239
-"hide deleted" :
-false,
-
-// templates/history/list.mako:23
-"show deleted" :
-false,
-
-// templates/history/list.mako:27
-"Name" :
-false,
-
-// templates/history/list.mako:27
-"Size" :
-false,
-
-// templates/history/list.mako:27
-"Last modified" :
-false,
-
-// templates/history/list.mako:27
-"Actions" :
-false,
-
-// templates/history/list.mako:45
-"rename" :
-false,
-
-// templates/history/list.mako:46
-"switch to" :
-false,
-
-// templates/history/list.mako:47
-"delete" :
-false,
-
-// templates/history/list.mako:49
-"undelete" :
-false,
-
-// templates/history/list.mako:55
-"Action" :
-false,
-
-// templates/history/list.mako:56 templates/history/options.mako:21
-"Share" :
-false,
-
-// templates/history/list.mako:56 templates/history/options.mako:15
-"Rename" :
-false,
-
-// templates/history/list.mako:56 templates/history/options.mako:24
-"Delete" :
-false,
-
-// templates/history/list.mako:58
-"Undelete" :
-false,
-
-// templates/history/list.mako:65
-"You have no stored histories" :
-false,
-
-// templates/history/options.mako:5
-"History Options" :
-false,
-
-// templates/history/options.mako:9
-"You must be " :
-false,
-
-// templates/history/options.mako:9
-"logged in" :
-false,
-
-// templates/history/options.mako:9
-" to store or switch histories." :
-false,
-
-// templates/history/options.mako:15
-// python-format
-" current history (stored as \"%s\")" :
-false,
-
-// templates/history/options.mako:16
-"List" :
-false,
-
-// templates/history/options.mako:16
-" previously stored histories" :
-false,
-
-// templates/history/options.mako:18
-"Create" :
-false,
-
-// templates/history/options.mako:18
-" a new empty history" :
-false,
-
-// templates/history/options.mako:20
-"Construct workflow" :
-false,
-
-// templates/history/options.mako:20
-" from the current history" :
-false,
-
-// templates/history/options.mako:21 templates/history/options.mako:24
-" current history" :
-false,
-
-// templates/history/options.mako:23
-"Show deleted" :
-false,
-
-// templates/history/options.mako:23
-" datasets in history" :
-false,
-
-// templates/history/rename.mako:3 templates/history/rename.mako:6
-"Rename History" :
-false,
-
-
-"Rename Histories" :
-false,
-
-"Perform Action" :
-false,
-
-"Submit" :
-false,
-
-// templates/history/rename.mako:10
-"Current Name" :
-false,
-
-// templates/history/rename.mako:10
-"New Name" :
-false,
-
-// templates/history/share.mako:3
-"Share histories" :
-false,
-
-// templates/history/share.mako:6
-"Share Histories" :
-false,
-
-// templates/history/share.mako:9
-"History Name:" :
-false,
-
-// templates/history/share.mako:9
-"Number of Datasets:" :
-false,
-
-// templates/history/share.mako:9
-"Share Link" :
-false,
-
-// templates/history/share.mako:15
-"This history contains no data." :
-false,
-
-// templates/history/share.mako:21
-"copy link to share" :
-false,
-
-// templates/history/share.mako:24
-"Email of User to share with:" :
-false,
-
-// templates/root/history.mako:7
-"Galaxy History" :
-false,
-
-// templates/root/history.mako:237
-"refresh" :
-false,
-
-// templates/root/history.mako:245
-"You are currently viewing a deleted history!" :
-false,
-
-// templates/root/history.mako:289
-"Your history is empty. Click 'Get Data' on the left pane to start" :
-false,
-
-// templates/root/history_common.mako:41
-"Job is waiting to run" :
-false,
-
-// templates/root/history_common.mako:43
-"Job is currently running" :
-false,
-
-// templates/root/history_common.mako:46
-"An error occurred running this job: " :
-false,
-
-// templates/root/history_common.mako:47
-"report this error" :
-false,
-
-// templates/root/history_common.mako:54
-"No data: " :
-false,
-
-// templates/root/history_common.mako:58
-"format: " :
-false,
-
-// templates/root/history_common.mako:59
-"database: " :
-false,
-
-// templates/root/history_common.mako:66 templates/root/masthead.mako:20
-"Info: " :
-false,
-
-// templates/root/history_common.mako:85
-// python-format
-"Error: unknown dataset state \"%s\"." :
-false,
-
-// templates/root/index.mako:32
-"Options" :
-false,
-
-// templates/root/index.mako:34
-"History" :
-false,
-
-// templates/root/masthead.mako:20
-"report bugs" :
-false,
-
-// templates/root/masthead.mako:21
-"wiki" :
-false,
-
-// templates/root/masthead.mako:22
-"screencasts" :
-false,
-
-// templates/root/masthead.mako:23
-"blog" :
-false,
-
-// templates/root/masthead.mako:31
-// python-format
-"Logged in as %s: " :
-false,
-
-// templates/root/masthead.mako:31
-"manage" :
-false,
-
-// templates/root/masthead.mako:32
-"logout" :
-false,
-
-// templates/root/masthead.mako:34
-"Account: " :
-false,
-
-// templates/root/masthead.mako:34
-"create" :
-false,
-
-// templates/root/masthead.mako:35
-"login" :
-false,
-
-// templates/root/tool_menu.mako:52
-"Galaxy Tools" :
-false,
-
-// templates/root/tool_menu.mako:129
-"Workflow" :
-false,
-
-// templates/root/tool_menu.mako:134
-"Manage" :
-false,
-
-// templates/root/tool_menu.mako:134
-"workflows" :
-false,
-
-// templates/user/index.mako:2 templates/user/index.mako:4
-"Account settings" :
-false,
-
-// templates/user/index.mako:7
-// python-format
-"You are currently logged in as %s." :
-false,
-
-// templates/user/index.mako:9
-"Change your password" :
-false,
-
-// templates/user/index.mako:10
-"Update your email address" :
-false,
-
-// templates/user/index.mako:11
-"Logout" :
-false,
-
-// templates/user/index.mako:16
-"Login" :
-false,
-
-// templates/user/index.mako:17
-"Create new account" :
-false,
-
-
-// ---------------------------------------------------------------------------- need to be localized
-"anonymous user" :
-false,
-
-"Using " :
-false,
-
+// ajax error prefix
"There was an error getting the data for this dataset" :
false,
-"This dataset has been deleted and removed from disk." :
+// purged'd/del'd msg
+"This dataset has been deleted and removed from disk" :
false,
-
-"This dataset has been deleted." :
+"This dataset has been deleted" :
false,
-
-"This dataset has been hidden." :
+"This dataset has been hidden" :
false,
"format" :
false,
-
"database" :
false,
-"This history is empty" :
+// ---- hda-edit
+"Edit attributes" :
+false,
+"Cannot edit attributes of datasets removed from disk" :
+false,
+"Undelete dataset to edit attributes" :
+false,
+"This dataset must finish uploading before it can be edited" :
+false,
+"This dataset is not yet editable" :
false,
-"No matching datasets found" :
+"Delete" :
+false,
+"Dataset is already deleted" :
false,
-"You are over your disk quota." :
-false,
-
-"Tool execution is on hold until your disk usage drops below your allocated quota." :
-false,
-
-"All" :
-false,
-
-"None" :
-false,
-
-"For all selected" :
-false,
-
-"This history is empty. Click 'Get Data' on the left tool menu to start" :
-false,
-
-"Include Deleted Datasets" :
-false,
-
-"Include Hidden Datasets" :
-false,
-
-"Edit history tags" :
-false,
-
-"Edit history Annotation" :
-false,
-
-"Operations on multiple datasets" :
-false,
-
-"Search datasets" :
-false,
-
-"clear search (esc)" :
-false,
-
-"loading..." :
-false,
-
-"Hide datasets" :
-false,
-
-"Unhide datasets" :
-false,
-
-"Delete datasets" :
-false,
-
-"Undelete datasets" :
-false,
-
-"Permanently delete datasets" :
-false,
-
-"Click to rename history" :
-false,
-
-"View data" :
-false,
-
-"Edit attributes" :
-false,
-
-"View details" :
+"View or report this error" :
false,
"Run this job again" :
false,
+"Visualize" :
+false,
+"Visualize in" :
+false,
+
+"Undelete it" :
+false,
+"Permanently remove it from disk" :
+false,
+"Unhide it" :
+false,
+
+"You may be able to" :
+false,
+"set it manually or retry auto-detection" :
+false,
+
"Edit dataset tags" :
false,
-
"Edit dataset annotation" :
false,
-" Click <a href=\"javascript:void(0);\" class=\"dataset-undelete\">here</a> to undelete it or <a href=\"javascript:void(0);\" class=\"dataset-purge\">here</a> to immediately remove it from disk" :
+
+// ---------------------------------------------------------------------------- misc. MVC
+"Tags" :
false,
-
-" Click <a href=\"javascript:void(0);\" class=\"dataset-unhide\">here</a> to unhide it" :
+"Annotation" :
false,
-
-"Download" :
+"Edit annotation" :
false,
-"Visualize" :
-false
-
// ----------------------------------------------------------------------------
},
This diff is so big that we needed to truncate the remainder.
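For context, the file being rewritten above is a locale hash for requirejs' i18n plugin: the `root` object maps each English message to either a translated string or `false`, where `false` means "no translation, fall back to the English key". The sketch below is a simplified illustration of that lookup convention, not the actual requirejs i18n plugin; the `bundle`, `ja`, and `localize` names are invented for this example.

```javascript
// Minimal sketch (assumption: simplified stand-in for requirejs' i18n plugin)
// of how a locale hash like the one in this commit is resolved.
// A value of `false` means "not localized": fall back to the English key.

const bundle = {
  root: {
    "History": false,   // untranslated in the root bundle
    "Logout": false
  },
  "ja": true            // flag: a ja/ sub-bundle exists
};

// A locale sub-bundle overrides root entries with translated strings.
const ja = {
  "History": "ヒストリー",
  "Logout": "ログアウト"
};

// Resolve a message: use the locale bundle's string if present,
// otherwise fall back to the key itself.
function localize(key, locale) {
  const table = (locale === "ja" && bundle["ja"]) ? ja : bundle.root;
  const value = table[key];
  return typeof value === "string" ? value : key;
}

console.log(localize("History", "ja")); // ヒストリー
console.log(localize("History", "en")); // History
```

This is why the commit above can mass-reset entries to `false`: the UI keeps working in English until a translator fills the values in, as the zh/ginga.po comment in the diff hints.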
Repository URL: https://bitbucket.org/galaxy/galaxy-central/
--
This is a commit notification from bitbucket.org. You are receiving
this because you have the service enabled, addressing the recipient of
this email.