galaxy-commits
July 2013
- 1 participant
- 83 discussions
commit/galaxy-central: carlfeberhard: Visualizations registry: turned off by default in main .ini
by commits-noreply@bitbucket.org 18 Jul '13
1 new commit in galaxy-central:
https://bitbucket.org/galaxy/galaxy-central/commits/d9fe5bea5a43/
Changeset: d9fe5bea5a43
User: carlfeberhard
Date: 2013-07-18 22:27:38
Summary: Visualizations registry: turned off by default in main .ini
Affected #: 1 file
diff -r 6781bf1f3de1514967d62b6cf4fe4910cf9ec0aa -r d9fe5bea5a43c4c4645408a9035ad818d89d012c universe_wsgi.ini.sample
--- a/universe_wsgi.ini.sample
+++ b/universe_wsgi.ini.sample
@@ -177,7 +177,7 @@
# Visualizations config directory, where to look for individual visualization
# xml configuration files. Those files define how visualizations apply to
# particular data and how to pass them the necessary parameters
-visualizations_config_directory = config/visualizations
+#visualizations_config_directory = config/visualizations
# Each job is given a unique empty directory as its current working directory.
# This option defines in what parent directory those directories will be
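The hunk above comments out `visualizations_config_directory`, so the registry is off by default and only enabled when an operator uncomments the setting. A minimal sketch of reading that option the same way the app would, using Python's stdlib `configparser` (the helper name is hypothetical; the `[app:main]` section name follows the PasteDeploy layout of `universe_wsgi.ini.sample`):

```python
from configparser import ConfigParser


def visualizations_dir(ini_path):
    """Return the configured visualizations directory, or None when the
    option is commented out (registry disabled, as in the hunk above)."""
    parser = ConfigParser()
    parser.read(ini_path)
    # universe_wsgi.ini keeps app settings under [app:main];
    # a '#'-prefixed line is a comment, so the option simply isn't present
    if parser.has_option("app:main", "visualizations_config_directory"):
        return parser.get("app:main", "visualizations_config_directory")
    return None
```

Note this is only a sketch: the real sample ini contains interpolation-style values, which a production reader would need to handle.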
Repository URL: https://bitbucket.org/galaxy/galaxy-central/
--
This is a commit notification from bitbucket.org. You are receiving
this because you have the service enabled, addressing the recipient of
this email.
commit/galaxy-central: carlfeberhard: Visualizations registry: move built-in vis config files to sample, copying them over on registry start; add vis xml files to .hgignore; fix test data in scatterplot.xml
by commits-noreply@bitbucket.org 18 Jul '13
1 new commit in galaxy-central:
https://bitbucket.org/galaxy/galaxy-central/commits/6781bf1f3de1/
Changeset: 6781bf1f3de1
User: carlfeberhard
Date: 2013-07-18 22:24:18
Summary: Visualizations registry: move built-in vis config files to sample, copying them over on registry start; add vis xml files to .hgignore; fix test data in scatterplot.xml
Affected #: 13 files
diff -r 91de48b4fd3b5628d5a434f68bd6cf2473ad62c3 -r 6781bf1f3de1514967d62b6cf4fe4910cf9ec0aa .hgignore
--- a/.hgignore
+++ b/.hgignore
@@ -60,7 +60,7 @@
job_conf.xml
data_manager_conf.xml
shed_data_manager_conf.xml
-visualizations_conf.xml
+config/visualizations/*.xml
static/welcome.html.*
static/welcome.html
diff -r 91de48b4fd3b5628d5a434f68bd6cf2473ad62c3 -r 6781bf1f3de1514967d62b6cf4fe4910cf9ec0aa config/visualizations/circster.xml
--- a/config/visualizations/circster.xml
+++ /dev/null
@@ -1,26 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<!DOCTYPE visualization SYSTEM "visualization.dtd">
-<visualization name="circster">
- <data_sources>
- <data_source>
- <model_class>HistoryDatasetAssociation</model_class>
- <test type="isinstance" test_attr="datatype" result_type="datatype">data.Data</test>
- <to_param param_attr="id">dataset_id</to_param>
- <to_param assign="hda">hda_ldda</to_param>
- </data_source>
- <data_source>
- <model_class>LibraryDatasetDatasetAssociation</model_class>
- <test type="isinstance" test_attr="datatype" result_type="datatype">data.Data</test>
- <to_param param_attr="id">dataset_id</to_param>
- <to_param assign="ldda">hda_ldda</to_param>
- </data_source>
- </data_sources>
- <params>
- <param type="visualization">id</param>
- <param type="hda_or_ldda">dataset_id</param>
- <param_modifier type="string" modifies="dataset_id">hda_ldda</param_modifier>
- <param type="dbkey">dbkey</param>
- </params>
- <template>circster.mako</template>
- <render_location>_top</render_location>
-</visualization>
diff -r 91de48b4fd3b5628d5a434f68bd6cf2473ad62c3 -r 6781bf1f3de1514967d62b6cf4fe4910cf9ec0aa config/visualizations/circster.xml.sample
--- /dev/null
+++ b/config/visualizations/circster.xml.sample
@@ -0,0 +1,26 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!DOCTYPE visualization SYSTEM "visualization.dtd">
+<visualization name="circster">
+ <data_sources>
+ <data_source>
+ <model_class>HistoryDatasetAssociation</model_class>
+ <test type="isinstance" test_attr="datatype" result_type="datatype">data.Data</test>
+ <to_param param_attr="id">dataset_id</to_param>
+ <to_param assign="hda">hda_ldda</to_param>
+ </data_source>
+ <data_source>
+ <model_class>LibraryDatasetDatasetAssociation</model_class>
+ <test type="isinstance" test_attr="datatype" result_type="datatype">data.Data</test>
+ <to_param param_attr="id">dataset_id</to_param>
+ <to_param assign="ldda">hda_ldda</to_param>
+ </data_source>
+ </data_sources>
+ <params>
+ <param type="visualization">id</param>
+ <param type="hda_or_ldda">dataset_id</param>
+ <param_modifier type="string" modifies="dataset_id">hda_ldda</param_modifier>
+ <param type="dbkey">dbkey</param>
+ </params>
+ <template>circster.mako</template>
+ <render_location>_top</render_location>
+</visualization>
diff -r 91de48b4fd3b5628d5a434f68bd6cf2473ad62c3 -r 6781bf1f3de1514967d62b6cf4fe4910cf9ec0aa config/visualizations/example.xml.example
--- a/config/visualizations/example.xml.example
+++ /dev/null
@@ -1,30 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<!DOCTYPE visualization SYSTEM "visualization.dtd">
-<visualization name="test">
- <data_sources>
- <data_source>
- <model_class>HistoryDatasetAssociation</model_class>
- <test type="isinstance" test_attr="datatype" result_type="datatype">data.Data</test>
- <to_param param_attr="id">dataset_id</to_param>
- <to_param assign="bler">string</to_param>
- <to_param assign="False">boolean</to_param>
- <to_param assign="-5">integer</to_param>
- <to_param assign="3.14">float</to_param>
- <to_param assign="{}">json</to_param>
- </data_source>
- </data_sources>
- <params>
- <param>default</param>
- <param type="str">string</param>
- <param type="bool">boolean</param>
- <param type="int">integer</param>
- <param type="float">float</param>
- <param type="json">json</param>
- <param type="str" required="true">string</param>
- <param type="visualization" var_name_in_template="visualization">visualization_id</param>
- <param type="dataset" var_name_in_template="dataset">dataset_id</param>
- <param type="hda_or_ldda">dataset_instance</param>
- <param_modifier type="str" modifies="dataset_instance">hda_ldda</param_modifier>
- </params>
- <template>v_fwork_test.mako</template>
-</visualization>
diff -r 91de48b4fd3b5628d5a434f68bd6cf2473ad62c3 -r 6781bf1f3de1514967d62b6cf4fe4910cf9ec0aa config/visualizations/phyloviz.xml
--- a/config/visualizations/phyloviz.xml
+++ /dev/null
@@ -1,18 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<!DOCTYPE visualization SYSTEM "visualization.dtd">
-<visualization name="phyloviz">
- <data_sources>
- <data_source>
- <model_class>HistoryDatasetAssociation</model_class>
- <test type="isinstance" test_attr="datatype" result_type="datatype">data.Newick</test>
- <test type="isinstance" test_attr="datatype" result_type="datatype">data.Nexus</test>
- <to_param param_attr="id">dataset_id</to_param>
- </data_source>
- </data_sources>
- <params>
- <param type="dataset" var_name_in_template="hda" required="true">dataset_id</param>
- <param type="integer" default="0">tree_index</param>
- </params>
- <template>phyloviz.mako</template>
- <render_location>_top</render_location>
-</visualization>
diff -r 91de48b4fd3b5628d5a434f68bd6cf2473ad62c3 -r 6781bf1f3de1514967d62b6cf4fe4910cf9ec0aa config/visualizations/phyloviz.xml.sample
--- /dev/null
+++ b/config/visualizations/phyloviz.xml.sample
@@ -0,0 +1,18 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!DOCTYPE visualization SYSTEM "visualization.dtd">
+<visualization name="phyloviz">
+ <data_sources>
+ <data_source>
+ <model_class>HistoryDatasetAssociation</model_class>
+ <test type="isinstance" test_attr="datatype" result_type="datatype">data.Newick</test>
+ <test type="isinstance" test_attr="datatype" result_type="datatype">data.Nexus</test>
+ <to_param param_attr="id">dataset_id</to_param>
+ </data_source>
+ </data_sources>
+ <params>
+ <param type="dataset" var_name_in_template="hda" required="true">dataset_id</param>
+ <param type="integer" default="0">tree_index</param>
+ </params>
+ <template>phyloviz.mako</template>
+ <render_location>_top</render_location>
+</visualization>
diff -r 91de48b4fd3b5628d5a434f68bd6cf2473ad62c3 -r 6781bf1f3de1514967d62b6cf4fe4910cf9ec0aa config/visualizations/scatterplot.xml
--- a/config/visualizations/scatterplot.xml
+++ /dev/null
@@ -1,16 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<!DOCTYPE visualization SYSTEM "visualization.dtd">
-<visualization name="scatterplot">
- <data_sources>
- <data_source>
- <model_class>HistoryDatasetAssociation</model_class>
- <test type="isinstance" test_attr="datatype" result_type="datatype">tabular.Tabular</test>
- <to_param param_attr="id">dataset_id</to_param>
- </data_source>
- </data_sources>
- <params>
- <param type="dataset" var_name_in_template="hda" required="true">dataset_id</param>
- <param required="true">bler</param>
- </params>
- <template>scatterplot.mako</template>
-</visualization>
diff -r 91de48b4fd3b5628d5a434f68bd6cf2473ad62c3 -r 6781bf1f3de1514967d62b6cf4fe4910cf9ec0aa config/visualizations/scatterplot.xml.sample
--- /dev/null
+++ b/config/visualizations/scatterplot.xml.sample
@@ -0,0 +1,15 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!DOCTYPE visualization SYSTEM "visualization.dtd">
+<visualization name="scatterplot">
+ <data_sources>
+ <data_source>
+ <model_class>HistoryDatasetAssociation</model_class>
+ <test type="isinstance" test_attr="datatype" result_type="datatype">tabular.Tabular</test>
+ <to_param param_attr="id">dataset_id</to_param>
+ </data_source>
+ </data_sources>
+ <params>
+ <param type="dataset" var_name_in_template="hda" required="true">dataset_id</param>
+ </params>
+ <template>scatterplot.mako</template>
+</visualization>
diff -r 91de48b4fd3b5628d5a434f68bd6cf2473ad62c3 -r 6781bf1f3de1514967d62b6cf4fe4910cf9ec0aa config/visualizations/sweepster.xml
--- a/config/visualizations/sweepster.xml
+++ /dev/null
@@ -1,25 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<!DOCTYPE visualization SYSTEM "visualization.dtd">
-<visualization name="sweepster">
- <data_sources>
- <data_source>
- <model_class>HistoryDatasetAssociation</model_class>
- <test type="isinstance" test_attr="datatype" result_type="datatype">data.Data</test>
- <to_param param_attr="id">dataset_id</to_param>
- <to_param assign="hda">hda_ldda</to_param>
- </data_source>
- <data_source>
- <model_class>LibraryDatasetDatasetAssociation</model_class>
- <test type="isinstance" test_attr="datatype" result_type="datatype">data.Data</test>
- <to_param param_attr="id">dataset_id</to_param>
- <to_param assign="ldda">hda_ldda</to_param>
- </data_source>
- </data_sources>
- <params>
- <param type="visualization" var_name_in_template="viz">visualization</param>
- <param type="hda_or_ldda" var_name_in_template="dataset">dataset_id</param>
- <param_modifier type="string" modifies="dataset_id">hda_ldda</param_modifier>
- </params>
- <template>sweepster.mako</template>
- <render_location>_top</render_location>
-</visualization>
diff -r 91de48b4fd3b5628d5a434f68bd6cf2473ad62c3 -r 6781bf1f3de1514967d62b6cf4fe4910cf9ec0aa config/visualizations/sweepster.xml.sample
--- /dev/null
+++ b/config/visualizations/sweepster.xml.sample
@@ -0,0 +1,25 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!DOCTYPE visualization SYSTEM "visualization.dtd">
+<visualization name="sweepster">
+ <data_sources>
+ <data_source>
+ <model_class>HistoryDatasetAssociation</model_class>
+ <test type="isinstance" test_attr="datatype" result_type="datatype">data.Data</test>
+ <to_param param_attr="id">dataset_id</to_param>
+ <to_param assign="hda">hda_ldda</to_param>
+ </data_source>
+ <data_source>
+ <model_class>LibraryDatasetDatasetAssociation</model_class>
+ <test type="isinstance" test_attr="datatype" result_type="datatype">data.Data</test>
+ <to_param param_attr="id">dataset_id</to_param>
+ <to_param assign="ldda">hda_ldda</to_param>
+ </data_source>
+ </data_sources>
+ <params>
+ <param type="visualization" var_name_in_template="viz">visualization</param>
+ <param type="hda_or_ldda" var_name_in_template="dataset">dataset_id</param>
+ <param_modifier type="string" modifies="dataset_id">hda_ldda</param_modifier>
+ </params>
+ <template>sweepster.mako</template>
+ <render_location>_top</render_location>
+</visualization>
diff -r 91de48b4fd3b5628d5a434f68bd6cf2473ad62c3 -r 6781bf1f3de1514967d62b6cf4fe4910cf9ec0aa config/visualizations/trackster.xml
--- a/config/visualizations/trackster.xml
+++ /dev/null
@@ -1,29 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<!DOCTYPE visualization SYSTEM "visualization.dtd">
-<visualization name="trackster">
- <!--not tested yet -->
- <data_sources>
- <data_source>
- <model_class>HistoryDatasetAssociation</model_class>
- <test type="isinstance" test_attr="datatype" result_type="datatype">data.Data</test>
- <to_param param_attr="id">dataset_id</to_param>
- <to_param assign="hda">hda_ldda</to_param>
- <to_param param_attr="dbkey">dbkey</to_param>
- </data_source>
- <data_source>
- <model_class>LibraryDatasetDatasetAssociation</model_class>
- <test type="isinstance" test_attr="datatype" result_type="datatype">data.Data</test>
- <to_param param_attr="id">dataset_id</to_param>
- <to_param assign="ldda">hda_ldda</to_param>
- </data_source>
- </data_sources>
- <params>
- <param type="visualization">id</param>
- <param type="dataset">dataset_id</param>
- <param type="genome_region">genome_region</param>
- <param type="dbkey">dbkey</param>
- </params>
- <template_root>tracks</template_root>
- <template>browser.mako</template>
- <render_location>_top</render_location>
-</visualization>
diff -r 91de48b4fd3b5628d5a434f68bd6cf2473ad62c3 -r 6781bf1f3de1514967d62b6cf4fe4910cf9ec0aa config/visualizations/trackster.xml.sample
--- /dev/null
+++ b/config/visualizations/trackster.xml.sample
@@ -0,0 +1,29 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!DOCTYPE visualization SYSTEM "visualization.dtd">
+<visualization name="trackster">
+ <!--not tested yet -->
+ <data_sources>
+ <data_source>
+ <model_class>HistoryDatasetAssociation</model_class>
+ <test type="isinstance" test_attr="datatype" result_type="datatype">data.Data</test>
+ <to_param param_attr="id">dataset_id</to_param>
+ <to_param assign="hda">hda_ldda</to_param>
+ <to_param param_attr="dbkey">dbkey</to_param>
+ </data_source>
+ <data_source>
+ <model_class>LibraryDatasetDatasetAssociation</model_class>
+ <test type="isinstance" test_attr="datatype" result_type="datatype">data.Data</test>
+ <to_param param_attr="id">dataset_id</to_param>
+ <to_param assign="ldda">hda_ldda</to_param>
+ </data_source>
+ </data_sources>
+ <params>
+ <param type="visualization">id</param>
+ <param type="dataset">dataset_id</param>
+ <param type="genome_region">genome_region</param>
+ <param type="dbkey">dbkey</param>
+ </params>
+ <template_root>tracks</template_root>
+ <template>browser.mako</template>
+ <render_location>_top</render_location>
+</visualization>
diff -r 91de48b4fd3b5628d5a434f68bd6cf2473ad62c3 -r 6781bf1f3de1514967d62b6cf4fe4910cf9ec0aa lib/galaxy/visualization/registry.py
--- a/lib/galaxy/visualization/registry.py
+++ b/lib/galaxy/visualization/registry.py
@@ -67,6 +67,7 @@
# load the registry from the xml files located in configuration_filepath using the given parser
configuration_filepath = os.path.join( galaxy_root, configuration_filepath )
self.configuration_filepath = self.check_conf_filepath( configuration_filepath )
+ self.move_sample_conf_files()
self.load()
# what to use to parse query strings into resources/vars for the template
@@ -82,6 +83,20 @@
raise IOError( 'visualization configuration directory (%s) not found' %( configuration_filepath ) )
return configuration_filepath
+ def move_sample_conf_files( self ):
+ """
+ Copies any `*.xml.sample` files in `configuration_filepath` to
+ `.xml` files of the same names if no file with that name already exists.
+
+ :returns: a list of the files moved
+ """
+ files_moved = []
+ for sample_file in glob.glob( os.path.join( self.configuration_filepath, '*.sample' ) ):
+ new_name = os.path.splitext( sample_file )[0]
+ if not os.path.exists( new_name ):
+ shutil.copy2( sample_file, new_name )
+ files_moved.append( new_name )
+
def load( self ):
"""
Builds the registry by parsing the xml in `self.configuration_filepath`
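The new `move_sample_conf_files` copies each `*.sample` file over to its un-suffixed name only when no such file exists yet. Its docstring promises a list of the files moved, though the hunk as shown never returns `files_moved`. A standalone sketch of the same copy-if-missing pattern (hypothetical function name, stdlib only) with the return included:

```python
import glob
import os
import shutil


def copy_sample_files(conf_dir):
    """Copy every '*.sample' file in conf_dir to a file of the same name
    minus the '.sample' suffix, skipping any target that already exists.
    Returns the list of files created (the contract the docstring states)."""
    files_copied = []
    for sample_file in glob.glob(os.path.join(conf_dir, "*.sample")):
        new_name = os.path.splitext(sample_file)[0]  # strip '.sample'
        if not os.path.exists(new_name):
            shutil.copy2(sample_file, new_name)  # copy2 keeps mtime/perms
            files_copied.append(new_name)
    return files_copied
```

Because existing `.xml` files are never overwritten, local edits to a visualization config survive a registry restart; only missing configs are re-seeded from the samples.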
Repository URL: https://bitbucket.org/galaxy/galaxy-central/
commit/galaxy-central: carlfeberhard: Visualizations registry: break visualizations_conf.xml into one xml file per visualization and place in config/visualizations
by commits-noreply@bitbucket.org 18 Jul '13
1 new commit in galaxy-central:
https://bitbucket.org/galaxy/galaxy-central/commits/91de48b4fd3b/
Changeset: 91de48b4fd3b
User: carlfeberhard
Date: 2013-07-18 21:52:38
Summary: Visualizations registry: break visualizations_conf.xml into one xml file per visualization and place in config/visualizations
Affected #: 12 files
diff -r 3422736c168d6d864ec3c478d3e725b1184e3778 -r 91de48b4fd3b5628d5a434f68bd6cf2473ad62c3 config/visualizations/circster.xml
--- /dev/null
+++ b/config/visualizations/circster.xml
@@ -0,0 +1,26 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!DOCTYPE visualization SYSTEM "visualization.dtd">
+<visualization name="circster">
+ <data_sources>
+ <data_source>
+ <model_class>HistoryDatasetAssociation</model_class>
+ <test type="isinstance" test_attr="datatype" result_type="datatype">data.Data</test>
+ <to_param param_attr="id">dataset_id</to_param>
+ <to_param assign="hda">hda_ldda</to_param>
+ </data_source>
+ <data_source>
+ <model_class>LibraryDatasetDatasetAssociation</model_class>
+ <test type="isinstance" test_attr="datatype" result_type="datatype">data.Data</test>
+ <to_param param_attr="id">dataset_id</to_param>
+ <to_param assign="ldda">hda_ldda</to_param>
+ </data_source>
+ </data_sources>
+ <params>
+ <param type="visualization">id</param>
+ <param type="hda_or_ldda">dataset_id</param>
+ <param_modifier type="string" modifies="dataset_id">hda_ldda</param_modifier>
+ <param type="dbkey">dbkey</param>
+ </params>
+ <template>circster.mako</template>
+ <render_location>_top</render_location>
+</visualization>
diff -r 3422736c168d6d864ec3c478d3e725b1184e3778 -r 91de48b4fd3b5628d5a434f68bd6cf2473ad62c3 config/visualizations/example.xml.example
--- /dev/null
+++ b/config/visualizations/example.xml.example
@@ -0,0 +1,30 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!DOCTYPE visualization SYSTEM "visualization.dtd">
+<visualization name="test">
+ <data_sources>
+ <data_source>
+ <model_class>HistoryDatasetAssociation</model_class>
+ <test type="isinstance" test_attr="datatype" result_type="datatype">data.Data</test>
+ <to_param param_attr="id">dataset_id</to_param>
+ <to_param assign="bler">string</to_param>
+ <to_param assign="False">boolean</to_param>
+ <to_param assign="-5">integer</to_param>
+ <to_param assign="3.14">float</to_param>
+ <to_param assign="{}">json</to_param>
+ </data_source>
+ </data_sources>
+ <params>
+ <param>default</param>
+ <param type="str">string</param>
+ <param type="bool">boolean</param>
+ <param type="int">integer</param>
+ <param type="float">float</param>
+ <param type="json">json</param>
+ <param type="str" required="true">string</param>
+ <param type="visualization" var_name_in_template="visualization">visualization_id</param>
+ <param type="dataset" var_name_in_template="dataset">dataset_id</param>
+ <param type="hda_or_ldda">dataset_instance</param>
+ <param_modifier type="str" modifies="dataset_instance">hda_ldda</param_modifier>
+ </params>
+ <template>v_fwork_test.mako</template>
+</visualization>
diff -r 3422736c168d6d864ec3c478d3e725b1184e3778 -r 91de48b4fd3b5628d5a434f68bd6cf2473ad62c3 config/visualizations/phyloviz.xml
--- /dev/null
+++ b/config/visualizations/phyloviz.xml
@@ -0,0 +1,18 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!DOCTYPE visualization SYSTEM "visualization.dtd">
+<visualization name="phyloviz">
+ <data_sources>
+ <data_source>
+ <model_class>HistoryDatasetAssociation</model_class>
+ <test type="isinstance" test_attr="datatype" result_type="datatype">data.Newick</test>
+ <test type="isinstance" test_attr="datatype" result_type="datatype">data.Nexus</test>
+ <to_param param_attr="id">dataset_id</to_param>
+ </data_source>
+ </data_sources>
+ <params>
+ <param type="dataset" var_name_in_template="hda" required="true">dataset_id</param>
+ <param type="integer" default="0">tree_index</param>
+ </params>
+ <template>phyloviz.mako</template>
+ <render_location>_top</render_location>
+</visualization>
diff -r 3422736c168d6d864ec3c478d3e725b1184e3778 -r 91de48b4fd3b5628d5a434f68bd6cf2473ad62c3 config/visualizations/scatterplot.xml
--- /dev/null
+++ b/config/visualizations/scatterplot.xml
@@ -0,0 +1,16 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!DOCTYPE visualization SYSTEM "visualization.dtd">
+<visualization name="scatterplot">
+ <data_sources>
+ <data_source>
+ <model_class>HistoryDatasetAssociation</model_class>
+ <test type="isinstance" test_attr="datatype" result_type="datatype">tabular.Tabular</test>
+ <to_param param_attr="id">dataset_id</to_param>
+ </data_source>
+ </data_sources>
+ <params>
+ <param type="dataset" var_name_in_template="hda" required="true">dataset_id</param>
+ <param required="true">bler</param>
+ </params>
+ <template>scatterplot.mako</template>
+</visualization>
diff -r 3422736c168d6d864ec3c478d3e725b1184e3778 -r 91de48b4fd3b5628d5a434f68bd6cf2473ad62c3 config/visualizations/sweepster.xml
--- /dev/null
+++ b/config/visualizations/sweepster.xml
@@ -0,0 +1,25 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!DOCTYPE visualization SYSTEM "visualization.dtd">
+<visualization name="sweepster">
+ <data_sources>
+ <data_source>
+ <model_class>HistoryDatasetAssociation</model_class>
+ <test type="isinstance" test_attr="datatype" result_type="datatype">data.Data</test>
+ <to_param param_attr="id">dataset_id</to_param>
+ <to_param assign="hda">hda_ldda</to_param>
+ </data_source>
+ <data_source>
+ <model_class>LibraryDatasetDatasetAssociation</model_class>
+ <test type="isinstance" test_attr="datatype" result_type="datatype">data.Data</test>
+ <to_param param_attr="id">dataset_id</to_param>
+ <to_param assign="ldda">hda_ldda</to_param>
+ </data_source>
+ </data_sources>
+ <params>
+ <param type="visualization" var_name_in_template="viz">visualization</param>
+ <param type="hda_or_ldda" var_name_in_template="dataset">dataset_id</param>
+ <param_modifier type="string" modifies="dataset_id">hda_ldda</param_modifier>
+ </params>
+ <template>sweepster.mako</template>
+ <render_location>_top</render_location>
+</visualization>
diff -r 3422736c168d6d864ec3c478d3e725b1184e3778 -r 91de48b4fd3b5628d5a434f68bd6cf2473ad62c3 config/visualizations/trackster.xml
--- /dev/null
+++ b/config/visualizations/trackster.xml
@@ -0,0 +1,29 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!DOCTYPE visualization SYSTEM "visualization.dtd">
+<visualization name="trackster">
+ <!--not tested yet -->
+ <data_sources>
+ <data_source>
+ <model_class>HistoryDatasetAssociation</model_class>
+ <test type="isinstance" test_attr="datatype" result_type="datatype">data.Data</test>
+ <to_param param_attr="id">dataset_id</to_param>
+ <to_param assign="hda">hda_ldda</to_param>
+ <to_param param_attr="dbkey">dbkey</to_param>
+ </data_source>
+ <data_source>
+ <model_class>LibraryDatasetDatasetAssociation</model_class>
+ <test type="isinstance" test_attr="datatype" result_type="datatype">data.Data</test>
+ <to_param param_attr="id">dataset_id</to_param>
+ <to_param assign="ldda">hda_ldda</to_param>
+ </data_source>
+ </data_sources>
+ <params>
+ <param type="visualization">id</param>
+ <param type="dataset">dataset_id</param>
+ <param type="genome_region">genome_region</param>
+ <param type="dbkey">dbkey</param>
+ </params>
+ <template_root>tracks</template_root>
+ <template>browser.mako</template>
+ <render_location>_top</render_location>
+</visualization>
diff -r 3422736c168d6d864ec3c478d3e725b1184e3778 -r 91de48b4fd3b5628d5a434f68bd6cf2473ad62c3 config/visualizations/visualization.dtd
--- /dev/null
+++ b/config/visualizations/visualization.dtd
@@ -0,0 +1,132 @@
+<!-- runnable on NIX with xmllint -->
+
+<!-- each visualization must have a template (all other elements are optional) -->
+<!ELEMENT visualization (data_sources*,params*,template_root*,template,link_text*,render_location*)>
+<!-- visualization name (e.g. 'trackster', 'scatterplot', etc.) is required -->
+<!ATTLIST visualization
+ name CDATA #REQUIRED
+>
+
+<!ELEMENT data_sources (data_source*)>
+<!-- data sources are elements that describe what objects (HDAs, LDDAs, Job, User, etc.)
+ are applicable to a visualization. Often these are used to fetch applicable links
+ to the visualizations that use them.
+-->
+ <!ELEMENT data_source (model_class,(test|to_param)*)>
+ <!ELEMENT model_class (#PCDATA)>
+ <!-- model_class is currently the class name of the object you want to make a visualization
+ applicable to (e.g. HistoryDatasetAssociation). Currently only classes in galaxy.model
+ can be used.
+ REQUIRED and currently limited to: 'HistoryDatasetAssociation', 'LibraryDatasetDatasetAssociation'
+ -->
+ <!ELEMENT test (#PCDATA)>
+ <!-- tests help define what conditions the visualization can be applied to the model_class/target.
+ Currently, all tests are OR'd and there is no logical grouping. Tests are run in order.
+ (text): the text of this element is what the given target will be compared to (REQUIRED)
+ type: what type of test to run (e.g. when the target is an HDA the test will often be of type 'isinstance'
+ and test whether the HDA's datatype is an instance of a class)
+ DEFAULT: string comparison.
+ test_attr: what attribute of the target object should be used in the test. For instance, 'datatype'
+ will attempt to get the HDA.datatype from a target HDA. If the given object doesn't have
+ that attribute the test will fail (with no error). test_attr can be dot separated attributes,
+ looking up each in turn. For example, if the target was a history, one could access the
+ history.user.email by setting test_attr to 'user.email' (why you would want that, I don't know)
+ DEFAULT: compare the object itself (and not any of its attributes)
+ result_type: if the result (the text of the element mentioned above) needs to be parsed into
+ something other than a string, result_type will tell the registry how to do this. E.g.
+ if result_type is 'datatype' the registry will assume the text is a datatype class name
+ and parse it into the proper class before the test (often 'isinstance') is run.
+ DEFAULT: no parsing (result should be a string)
+ -->
+ <!ATTLIST test
+ type CDATA #IMPLIED
+ test_attr CDATA #IMPLIED
+ result_type CDATA #IMPLIED
+ >
+
+ <!ELEMENT to_param (#PCDATA)>
+ <!-- to_param tells the registry how to parse the data_source into a query string param.
+ For example, HDA data_sources can set the to_param text to 'dataset_id' and param_attr to 'id' and
+ the target HDA (if it passes the tests) will be passed as "dataset_id=HDA.id"
+ (text): the query string param key this source will be parsed into (e.g. dataset_id)
+ REQUIRED
+ param_attr: the attribute of the data_source object to use as the value in the query string param.
+ E.g. param_attr='id' for an HDA data_source would use the (encoded) id.
+ NOTE: a to_param MUST have either a param_attr or assign
+ assign: you can use this to directly assign a value to a query string's param. E.g. if the
+ data_source is a LDDA we can set 'hda_or_ldda=ldda' using assign='ldda'.
+ NOTE: a to_param MUST have either a param_attr or assign
+ -->
+ <!ATTLIST to_param
+ param_attr CDATA #IMPLIED
+ assign CDATA #IMPLIED
+ >
+
+<!ELEMENT params ((param|param_modifier)*)>
+<!-- params describe what data will be sent to a visualization template and
+ how to convert them from a query string in a URL into variables usable in a template.
+ For example,
+ param_modifiers are a special class of parameters that modify other params
+ (e.g. hda_ldda can be 'hda' or 'ldda' and modifies/informs dataset_id to fetch an HDA or LDDA)
+-->
+ <!ELEMENT param (#PCDATA)>
+ <!-- param tells the registry how to parse the query string param back into a resource/data_source.
+ For example, if a query string has "dataset_id=NNN" and the type is 'dataset', the registry
+ will attempt to fetch the hda with id of NNN from the database and pass it to the template.
+ (text): the query string param key this source will be parsed from (e.g. dataset_id)
+ REQUIRED
+ type: the type of the resource.
+ Can be: str (DEFAULT), bool, int, float, json, visualization, dbkey, dataset, or hda_ldda.
+ default: if a param is not passed on the query string (and is not required) OR the given param
+ fails to parse, this value is used instead.
+ DEFAULT: None
+ required: set this to true if the param is required for the template. Rendering will fail with an error
+ if the param hasn't been sent.
+ DEFAULT: false
+ csv: set this to true if the param is a comma separated list. The registry will attempt to
+ parse each value as the given type and send the result as a list to the template.
+ DEFAULT: false
+ constrain_to: (currently unused) constrain a param to a set of values, error if not valid.
+ DEFAULT: don't constrain
+ var_name_in_template: a new name for the resource/variable to use in the template. E.g. an initial
+ query string param key might be 'dataset_id' in the URL, the registry parses it into an HDA,
+ and if var_name_in_template is set to 'hda', the template will be able to access the HDA
+ with the variable name 'hda' (as in hda.title).
+ DEFAULT: keep the original query string name
+ -->
+ <!ATTLIST param
+ type CDATA #IMPLIED
+ default CDATA #IMPLIED
+ required CDATA #IMPLIED
+ csv CDATA #IMPLIED
+ constrain_to CDATA #IMPLIED
+ var_name_in_template CDATA #IMPLIED
+ >
+ <!-- param_modifiers are the same as param but have a REQUIRED 'modifies' attribute.
+ 'modifies' must point to the param name (the text part of param element) that it will modify.
+ E.g. <param_modifier modifies="dataset_id">hda_ldda</param_modifier>
+ -->
+ <!ELEMENT param_modifier (#PCDATA)>
+ <!ATTLIST param_modifier
+ modifies CDATA #REQUIRED
+ type CDATA #IMPLIED
+ default CDATA #IMPLIED
+ required CDATA #IMPLIED
+ csv CDATA #IMPLIED
+ constrain_to CDATA #IMPLIED
+ var_name_in_template CDATA #IMPLIED
+ >
+
+<!-- template_root: the directory to search for the template relative to templates/webapps/galaxy
+ (optional) DEFAULT: visualizations
+-->
+<!ELEMENT template_root (#PCDATA)>
+<!-- template: the template used to render the visualization. REQUIRED -->
+<!ELEMENT template (#PCDATA)>
+<!-- link_text: the text component of an html anchor displayed when the registry builds the link information -->
+<!ELEMENT link_text (#PCDATA)>
+<!-- render_location: used as the target attribute of the link to the visualization.
+ Can be 'galaxy_main', '_top', '_blank'. DEFAULT: 'galaxy_main'
+-->
+<!-- TODO: rename -> render_target -->
+<!ELEMENT render_location (#PCDATA)>
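The param parsing rules documented above (type coercion, defaults, csv splitting, required) can be sketched in a few lines of Python. This is an illustrative sketch only; the names `parse_param` and `spec` are hypothetical and not the registry's actual API:

```python
# Illustrative sketch of the param rules above; parse_param/spec are
# hypothetical names, not Galaxy's actual registry API.
PARSERS = {
    'str': str,
    'int': int,
    'float': float,
    'bool': lambda v: v.lower() in ('true', '1', 'yes'),
}

def parse_param(query_value, spec):
    """Coerce a raw query string value according to a param spec dict."""
    if query_value is None:
        if spec.get('required'):
            raise ValueError('required param not sent')
        # missing, non-required param falls back to the default
        return spec.get('default')
    parse = PARSERS.get(spec.get('type', 'str'), str)
    try:
        if spec.get('csv'):
            # comma separated list: parse each value as the given type
            return [parse(v.strip()) for v in query_value.split(',')]
        return parse(query_value)
    except ValueError:
        # a value that fails to parse also falls back to the default
        return spec.get('default')

parse_param('1,2,3', {'type': 'int', 'csv': True})  # -> [1, 2, 3]
parse_param(None, {'default': 0})                   # -> 0
```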
diff -r 3422736c168d6d864ec3c478d3e725b1184e3778 -r 91de48b4fd3b5628d5a434f68bd6cf2473ad62c3 lib/galaxy/app.py
--- a/lib/galaxy/app.py
+++ b/lib/galaxy/app.py
@@ -123,8 +123,10 @@
# Load genome indexer tool.
load_genome_index_tools( self.toolbox )
# visualizations registry: associates resources with visualizations, controls how to render
- self.visualizations_registry = ( VisualizationsRegistry( self.config.root, self.config.visualizations_conf_path )
- if self.config.visualizations_conf_path else None )
+ self.visualizations_registry = None
+ if self.config.visualizations_config_directory:
+ self.visualizations_registry = VisualizationsRegistry( self.config.root,
+ self.config.visualizations_config_directory )
# Load security policy.
self.security_agent = self.model.security_agent
self.host_security_agent = galaxy.security.HostAgent( model=self.security_agent.model, permitted_actions=self.security_agent.permitted_actions )
diff -r 3422736c168d6d864ec3c478d3e725b1184e3778 -r 91de48b4fd3b5628d5a434f68bd6cf2473ad62c3 lib/galaxy/config.py
--- a/lib/galaxy/config.py
+++ b/lib/galaxy/config.py
@@ -288,8 +288,8 @@
self.fluent_log = string_as_bool( kwargs.get( 'fluent_log', False ) )
self.fluent_host = kwargs.get( 'fluent_host', 'localhost' )
self.fluent_port = int( kwargs.get( 'fluent_port', 24224 ) )
- # visualizations registry config path
- self.visualizations_conf_path = kwargs.get( 'visualizations_conf_path', None )
+ # visualization registries config directory
+ self.visualizations_config_directory = kwargs.get( 'visualizations_config_directory', None )
@property
def sentry_dsn_public( self ):
diff -r 3422736c168d6d864ec3c478d3e725b1184e3778 -r 91de48b4fd3b5628d5a434f68bd6cf2473ad62c3 lib/galaxy/visualization/registry.py
--- a/lib/galaxy/visualization/registry.py
+++ b/lib/galaxy/visualization/registry.py
@@ -6,6 +6,7 @@
"""
import os
import shutil
+import glob
from galaxy import util
import galaxy.model
@@ -26,6 +27,7 @@
some confused vocabulary in docs, var names
tests:
anding, grouping, not
+ has_dataprovider
data_sources:
lists of
add description element to visualization.
@@ -62,10 +64,9 @@
return 'VisualizationsRegistry(%s)' %( listings_keys_str )
def __init__( self, galaxy_root, configuration_filepath ):
- # load the registry from the given xml file using the given parser
+ # load the registry from the xml files located in configuration_filepath using the given parser
configuration_filepath = os.path.join( galaxy_root, configuration_filepath )
- configuration_filepath = self.check_conf_filepath( configuration_filepath )
- self.configuration_filepath = configuration_filepath
+ self.configuration_filepath = self.check_conf_filepath( configuration_filepath )
self.load()
# what to use to parse query strings into resources/vars for the template
@@ -73,20 +74,13 @@
def check_conf_filepath( self, configuration_filepath ):
"""
- If given file at filepath exists, return that filepath. If not,
- see if filepath + '.sample' exists and, if so, copy that into filepath.
-
- If neither original or sample exist, throw an IOError (currently,
- this is a requireed file).
+ Checks for the existence of the given filepath.
+        :param configuration_filepath: full filepath to the visualization config directory
+ :raises IOError: if the given directory doesn't exist
"""
- if os.path.exists( configuration_filepath ):
- return configuration_filepath
- else:
- sample_file = configuration_filepath + '.sample'
- if os.path.exists( sample_file ):
- shutil.copy2( sample_file, configuration_filepath )
- return configuration_filepath
- raise IOError( 'visualization configuration file (%s) not found' %( configuration_filepath ) )
+ if not os.path.exists( configuration_filepath ):
+ raise IOError( 'visualization configuration directory (%s) not found' %( configuration_filepath ) )
+ return configuration_filepath
def load( self ):
"""
@@ -274,11 +268,11 @@
VALID_RENDER_LOCATIONS = [ 'galaxy_main', '_top', '_blank' ]
@classmethod
- def parse( cls, xml_filepath, debug=True ):
+ def parse( cls, config_dir, debug=True ):
"""
- Static class interface
+ Static class interface.
"""
- return cls( debug ).parse_file( xml_filepath )
+ return cls( debug ).parse_files( config_dir )
def __init__( self, debug=False ):
self.debug = debug
@@ -288,38 +282,47 @@
self.param_parser = ParamParser()
self.param_modifier_parser = ParamModifierParser()
- def parse_file( self, xml_filepath ):
+ def parse_files( self, config_dir ):
"""
- Parse the given XML file for visualizations data.
+ Parse each XML file in `config_dir` for visualizations config data.
- If an error occurs while parsing a visualizations entry it is skipped.
+ If an error occurs while parsing a visualizations entry, it is skipped.
+ :returns: registry data in dictionary form
"""
returned = {}
try:
- xml_tree = galaxy.util.parse_xml( xml_filepath )
- for visualization_conf in xml_tree.getroot().findall( 'visualization' ):
- visualization = None
- visualization_name = visualization_conf.get( 'name' )
-
+ for xml_filepath in glob.glob( os.path.join( config_dir, '*.xml' ) ):
try:
- visualization = self.parse_visualization( visualization_conf )
+ visualization_name, visualization = self.parse_file( xml_filepath )
# skip vis' with parsing errors - don't shutdown the startup
except ParsingException, parse_exc:
- log.error( 'Skipped visualization configuration "%s" due to parsing errors: %s',
- visualization_name, str( parse_exc ), exc_info=self.debug )
+ log.error( 'Skipped visualization config "%s" due to parsing errors: %s',
+ xml_filepath, str( parse_exc ), exc_info=self.debug )
if visualization:
returned[ visualization_name ] = visualization
+ log.debug( 'Visualization config loaded for: %s', visualization_name )
except Exception, exc:
- log.error( 'Error parsing visualization configuration file %s: %s',
- xml_filepath, str( exc ), exc_info=( not self.debug ) )
- #TODO: change when this is required
+ log.error( 'Error parsing visualizations configuration directory %s: %s',
+ config_dir, str( exc ), exc_info=( not self.debug ) )
+ #TODO: change when this framework is on by default
if self.debug:
raise
return returned
+ def parse_file( self, xml_filepath ):
+ """
+ Parse the given XML file for visualizations data.
+ :returns: tuple of ( `visualization_name`, `visualization` )
+ """
+ xml_tree = galaxy.util.parse_xml( xml_filepath )
+ visualization_conf = xml_tree.getroot()
+ visualization_name = visualization_conf.get( 'name' )
+ visualization = self.parse_visualization( visualization_conf )
+ return visualization_name, visualization
+
def parse_visualization( self, xml_tree ):
"""
Parse the template, name, and any data_sources and params from the
diff -r 3422736c168d6d864ec3c478d3e725b1184e3778 -r 91de48b4fd3b5628d5a434f68bd6cf2473ad62c3 universe_wsgi.ini.sample
--- a/universe_wsgi.ini.sample
+++ b/universe_wsgi.ini.sample
@@ -174,9 +174,10 @@
# Galaxy.
#datatypes_config_file = datatypes_conf.xml
-# Visualizations config file, defines what visualizations apply to particular data and how to pass them
-# the necessary parameters
-#visualizations_conf_path = visualizations_conf.xml
+# Visualizations config directory, where to look for individual visualization
+# xml configuration files. Those files define how visualizations apply to
+# particular data and how to pass them the necessary parameters
+visualizations_config_directory = config/visualizations
# Each job is given a unique empty directory as its current working directory.
# This option defines in what parent directory those directories will be
diff -r 3422736c168d6d864ec3c478d3e725b1184e3778 -r 91de48b4fd3b5628d5a434f68bd6cf2473ad62c3 visualizations_conf.xml.sample
--- a/visualizations_conf.xml.sample
+++ /dev/null
@@ -1,292 +0,0 @@
-<?xml version="1.0"?>
-<!--
- This is the xml file to edit to add new visualizations to the framework.
- NOTE!: this is a work in progress!
-
- Note: also that visualizations that fail to parse in visualizations/registry will
- produce an error in the server log, but otherwise will be skipped and not available.
- If you can't find your visualization in the UI, check the server logs for errors
- during start up.
--->
-
-<!-- .......................................................................... DTD -->
-<!-- runnable on NIX with xmllint -->
-<!DOCTYPE visualizations [
- <!-- 0 or more visualizations -->
- <!ELEMENT visualizations (visualization*)>
- <!-- each visualization must have a template (all other elements are optional) -->
- <!ELEMENT visualization (data_sources*,params*,template_root*,template,link_text*,render_location*)>
- <!-- visualization name (e.g. 'trackster', 'scatterplot', etc.) is required -->
- <!ATTLIST visualization
- name CDATA #REQUIRED
- >
-
- <!ELEMENT data_sources (data_source*)>
- <!-- data sources are elements that describe what objects (HDAs, LDDAs, Job, User, etc.)
- are applicable to a visualization. Often these are used to fetch applicable links
- to the visualizations that use them.
- -->
- <!ELEMENT data_source (model_class,(test|to_param)*)>
- <!ELEMENT model_class (#PCDATA)>
- <!-- model_class is currently the class name of the object you want to make a visualization
- applicable to (e.g. HistoryDatasetAssociation). Currently only classes in galaxy.model
- can be used.
- REQUIRED and currently limited to: 'HistoryDatasetAssociation', 'LibraryDatasetDatasetAssociation'
- -->
- <!ELEMENT test (#PCDATA)>
- <!-- tests help define what conditions the visualization can be applied to the model_class/target.
- Currently, all tests are OR'd and there is no logical grouping. Tests are run in order.
- (text): the text of this element is what the given target will be compared to (REQUIRED)
- type: what type of test to run (e.g. when the target is an HDA the test will often be of type 'isinstance'
- and test whether the HDA's datatype isinstace of a class)
- DEFAULT: string comparison.
- test_attr: what attribute of the target object should be used in the test. For instance, 'datatype'
- will attempt to get the HDA.datatype from a target HDA. If the given object doesn't have
- that attribute the test will fail (with no error). test_attr can be dot separated attributes,
- looking up each in turn. For example, if the target was a history, one could access the
- history.user.email by setting test_attr to 'user.email' (why you would want that, I don't know)
- DEFAULT: to comparing the object itself (and not any of it's attributes)
- result_type: if the result (the text of the element mentioned above) needs to be parsed into
- something other than a string, result_type will tell the registry how to do this. E.g.
- if result_type is 'datatype' the registry will assume the text is a datatype class name
- and parse it into the proper class before the test (often 'isinstance') is run.
- DEFAULT: no parsing (result should be a string)
- -->
- <!ATTLIST test
- type CDATA #IMPLIED
- test_attr CDATA #IMPLIED
- result_type CDATA #IMPLIED
- >
-
- <!ELEMENT to_param (#PCDATA)>
- <!-- to_param tells the registry how to parse the data_source into a query string param.
- For example, HDA data_sources can set param_to text to 'dataset_id' and param_attr to 'id' and the
- the target HDA (if it passes the tests) will be passed as "dataset_id=HDA.id"
- (text): the query string param key this source will be parsed into (e.g. dataset_id)
- REQUIRED
- param_attr: the attribute of the data_source object to use as the value in the query string param.
- E.g. param_attr='id' for an HDA data_source would use the (encoded) id.
- NOTE: a to_param MUST have either a param_attr or assign
- assign: you can use this to directly assign a value to a query string's param. E.g. if the
- data_source is a LDDA we can set 'hda_or_ldda=ldda' using assign='ldda'.
- NOTE: a to_param MUST have either a param_attr or assign
- -->
- <!ATTLIST to_param
- param_attr CDATA #IMPLIED
- assign CDATA #IMPLIED
- >
-
- <!ELEMENT params ((param|param_modifier)*)>
- <!-- params describe what data will be sent to a visualization template and
- how to convert them from a query string in a URL into variables usable in a template.
- For example,
- param_modifiers are a special class of parameters that modify other params
- (e.g. hda_ldda can be 'hda' or 'ldda' and modifies/informs dataset_id to fetch an HDA or LDDA)
- -->
- <!ELEMENT param (#PCDATA)>
- <!-- param tells the registry how to parse the query string param back into a resource/data_source.
- For example, if a query string has "dataset_id=NNN" and the type is 'dataset', the registry
- will attempt to fetch the hda with id of NNN from the database and pass it to the template.
- (text): the query string param key this source will be parsed from (e.g. dataset_id)
- REQUIRED
- type: the type of the resource.
- Can be: str (DEFAULT), bool, int, float, json, visualization, dbkey, dataset, or hda_ldda.
- default: if a param is not passed on the query string (and is not required) OR the given param
- fails to parse, this value is used instead.
- DEFAULT: None
- required: set this to true if the param is required for the template. Rendering will with an error
- if the param hasn't been sent.
- DEFAULT: false
- csv: set this to true if the param is a comma separated list. The registry will attempt to
- parse each value as the given type and send the result as a list to the template.
- DEFAULT: false
- constrain_to: (currently unused) constain a param to a set of values, error if not valid.
- DEFAULT: don't constrain
- var_name_in_template: a new name for the resource/variable to use in the template. E.g. an initial
- query string param key might be 'dataset_id' in the URL, the registry parses it into an HDA,
- and if var_name_in_template is set to 'hda', the template will be able to access the HDA
- with the variable name 'hda' (as in hda.title).
- DEFAULT: keep the original query string name
- -->
- <!ATTLIST param
- type CDATA #IMPLIED
- default CDATA #IMPLIED
- required CDATA #IMPLIED
- csv CDATA #IMPLIED
- constrain_to CDATA #IMPLIED
- var_name_in_template CDATA #IMPLIED
- >
- <!-- param_modifiers are the same as param but have a REQUIRED 'modifies' attribute.
- 'modifies' must point to the param name (the text part of param element) that it will modify.
- E.g. <param_modifier modifies="dataset_id">hda_ldda</param_modifier>
- -->
- <!ELEMENT param_modifier (#PCDATA)>
- <!ATTLIST param_modifier
- modifies CDATA #REQUIRED
- type CDATA #IMPLIED
- default CDATA #IMPLIED
- required CDATA #IMPLIED
- csv CDATA #IMPLIED
- constrain_to CDATA #IMPLIED
- var_name_in_template CDATA #IMPLIED
- >
-
- <!-- template_root: the directory to search for the template relative to templates/webapps/galaxy
- (optional) DEFAULT: visualizations
- -->
- <!ELEMENT template_root (#PCDATA)>
- <!-- template: the template used to render the visualization. REQUIRED -->
- <!ELEMENT template (#PCDATA)>
- <!-- link_text: the text component of an html anchor displayed when the registry builds the link information -->
- <!ELEMENT link_text (#PCDATA)>
- <!-- render_location: used as the target attribute of the link to the visualization.
- Can be 'galaxy_main', '_top', '_blank'. DEFAULT: 'galaxy_main'
- -->
- <!-- TODO: rename -> render_target -->
- <!ELEMENT render_location (#PCDATA)>
-]>
-
-<!-- .......................................................................... configuration xml -->
-<visualizations>
- <visualization name="trackster">
- <!--not tested yet -->
- <data_sources>
- <data_source>
- <model_class>HistoryDatasetAssociation</model_class>
- <test type="isinstance" test_attr="datatype" result_type="datatype">data.Data</test>
- <to_param param_attr="id">dataset_id</to_param>
- <to_param assign="hda">hda_ldda</to_param>
- <to_param param_attr="dbkey">dbkey</to_param>
- </data_source>
- <data_source>
- <model_class>LibraryDatasetDatasetAssociation</model_class>
- <test type="isinstance" test_attr="datatype" result_type="datatype">data.Data</test>
- <to_param param_attr="id">dataset_id</to_param>
- <to_param assign="ldda">hda_ldda</to_param>
- </data_source>
- </data_sources>
- <params>
- <param type="visualization">id</param>
- <param type="dataset">dataset_id</param>
- <param type="genome_region">genome_region</param>
- <param type="dbkey">dbkey</param>
- </params>
- <template_root>tracks</template_root>
- <template>browser.mako</template>
- <render_location>_top</render_location>
- </visualization>
-
- <visualization name="circster">
- <data_sources>
- <data_source>
- <model_class>HistoryDatasetAssociation</model_class>
- <test type="isinstance" test_attr="datatype" result_type="datatype">data.Data</test>
- <to_param param_attr="id">dataset_id</to_param>
- <to_param assign="hda">hda_ldda</to_param>
- </data_source>
- <data_source>
- <model_class>LibraryDatasetDatasetAssociation</model_class>
- <test type="isinstance" test_attr="datatype" result_type="datatype">data.Data</test>
- <to_param param_attr="id">dataset_id</to_param>
- <to_param assign="ldda">hda_ldda</to_param>
- </data_source>
- </data_sources>
- <params>
- <param type="visualization">id</param>
- <param type="hda_or_ldda">dataset_id</param>
- <param_modifier type="string" modifies="dataset_id">hda_ldda</param_modifier>
- <param type="dbkey">dbkey</param>
- </params>
- <template>circster.mako</template>
- <render_location>_top</render_location>
- </visualization>
-
- <!--
- <visualization name="sweepster">
- <data_sources>
- <data_source>
- <model_class>HistoryDatasetAssociation</model_class>
- <test type="isinstance" test_attr="datatype" result_type="datatype">data.Data</test>
- <to_param param_attr="id">dataset_id</to_param>
- <to_param assign="hda">hda_ldda</to_param>
- </data_source>
- <data_source>
- <model_class>LibraryDatasetDatasetAssociation</model_class>
- <test type="isinstance" test_attr="datatype" result_type="datatype">data.Data</test>
- <to_param param_attr="id">dataset_id</to_param>
- <to_param assign="ldda">hda_ldda</to_param>
- </data_source>
- </data_sources>
- <params>
- <param type="visualization" var_name_in_template="viz">visualization</param>
- <param type="hda_or_ldda" var_name_in_template="dataset">dataset_id</param>
- <param_modifier type="string" modifies="dataset_id">hda_ldda</param_modifier>
- </params>
- <template>sweepster.mako</template>
- <render_location>_top</render_location>
- </visualization>
- -->
-
- <visualization name="phyloviz">
- <data_sources>
- <data_source>
- <model_class>HistoryDatasetAssociation</model_class>
- <test type="isinstance" test_attr="datatype" result_type="datatype">data.Newick</test>
- <test type="isinstance" test_attr="datatype" result_type="datatype">data.Nexus</test>
- <to_param param_attr="id">dataset_id</to_param>
- </data_source>
- </data_sources>
- <params>
- <param type="dataset" var_name_in_template="hda" required="true">dataset_id</param>
- <param type="integer" default="0">tree_index</param>
- </params>
- <template>phyloviz.mako</template>
- <render_location>_top</render_location>
- </visualization>
-
- <visualization name="scatterplot">
- <data_sources>
- <data_source>
- <model_class>HistoryDatasetAssociation</model_class>
- <test type="isinstance" test_attr="datatype" result_type="datatype">tabular.Tabular</test>
- <to_param param_attr="id">dataset_id</to_param>
- </data_source>
- </data_sources>
- <params>
- <param type="dataset" var_name_in_template="hda" required="true">dataset_id</param>
- </params>
- <template>scatterplot.mako</template>
- </visualization>
-
- <!--
- <visualization name="test">
- <data_sources>
- <data_source>
- <model_class>HistoryDatasetAssociation</model_class>
- <test type="isinstance" test_attr="datatype" result_type="datatype">data.Data</test>
- <to_param param_attr="id">dataset_id</to_param>
- <to_param assign="bler">string</to_param>
- <to_param assign="False">boolean</to_param>
- <to_param assign="-5">integer</to_param>
- <to_param assign="3.14">float</to_param>
- <to_param assign="{}">json</to_param>
- </data_source>
- </data_sources>
- <params>
- <param>default</param>
- <param type="str">string</param>
- <param type="bool">boolean</param>
- <param type="int">integer</param>
- <param type="float">float</param>
- <param type="json">json</param>
- <param type="str" required="true">string</param>
- <param type="visualization" var_name_in_template="visualization">visualization_id</param>
- <param type="dataset" var_name_in_template="dataset">dataset_id</param>
- <param type="hda_or_ldda">dataset_instance</param>
- <param_modifier type="str" modifies="dataset_instance">hda_ldda</param_modifier>
- </params>
- <template>v_fwork_test.mako</template>
- </visualization>
- -->
-
-</visualizations>
Repository URL: https://bitbucket.org/galaxy/galaxy-central/
--
This is a commit notification from bitbucket.org. You are receiving
this because you have the service enabled, addressing the recipient of
this email.
commit/galaxy-central: carlfeberhard: Dataproviders: change 'map' providers to 'dict' for better consistency w/ python
by commits-noreply@bitbucket.org 18 Jul '13
1 new commit in galaxy-central:
https://bitbucket.org/galaxy/galaxy-central/commits/3422736c168d/
Changeset: 3422736c168d
User: carlfeberhard
Date: 2013-07-18 18:46:21
Summary: Dataproviders: change 'map' providers to 'dict' for better consistency w/ python
Affected #: 5 files
diff -r 416e08c46ed6b03598581c5d6c6b44587aa9f170 -r 3422736c168d6d864ec3c478d3e725b1184e3778 lib/galaxy/datatypes/binary.py
--- a/lib/galaxy/datatypes/binary.py
+++ b/lib/galaxy/datatypes/binary.py
@@ -285,23 +285,23 @@
settings[ 'comment_char' ] = '@'
return dataproviders.column.ColumnarDataProvider( samtools_source, **settings )
- @dataproviders.decorators.dataprovider_factory( 'map', dataproviders.column.MapDataProvider.settings )
- def map_dataprovider( self, dataset, **settings ):
+ @dataproviders.decorators.dataprovider_factory( 'dict', dataproviders.column.DictDataProvider.settings )
+ def dict_dataprovider( self, dataset, **settings ):
samtools_source = dataproviders.dataset.SamtoolsDataProvider( dataset )
settings[ 'comment_char' ] = '@'
- return dataproviders.column.MapDataProvider( samtools_source, **settings )
+ return dataproviders.column.DictDataProvider( samtools_source, **settings )
- # these can't be used directly - may need BamColumn, BamMap (Bam metadata -> column/map)
+ # these can't be used directly - may need BamColumn, BamDict (Bam metadata -> column/dict)
# OR - see genomic_region_dataprovider
#(a)dataproviders.decorators.dataprovider_factory( 'dataset-column', dataproviders.column.ColumnarDataProvider.settings )
#def dataset_column_dataprovider( self, dataset, **settings ):
# settings[ 'comment_char' ] = '@'
# return super( Sam, self ).dataset_column_dataprovider( dataset, **settings )
- #(a)dataproviders.decorators.dataprovider_factory( 'dataset-map', dataproviders.column.MapDataProvider.settings )
- #def dataset_map_dataprovider( self, dataset, **settings ):
+ #(a)dataproviders.decorators.dataprovider_factory( 'dataset-dict', dataproviders.column.DictDataProvider.settings )
+ #def dataset_dict_dataprovider( self, dataset, **settings ):
# settings[ 'comment_char' ] = '@'
- # return super( Sam, self ).dataset_map_dataprovider( dataset, **settings )
+ # return super( Sam, self ).dataset_dict_dataprovider( dataset, **settings )
@dataproviders.decorators.dataprovider_factory( 'header', dataproviders.line.RegexLineDataProvider.settings )
def header_dataprovider( self, dataset, **settings ):
@@ -309,12 +309,12 @@
samtools_source = dataproviders.dataset.SamtoolsDataProvider( dataset, '-H' )
return dataproviders.line.RegexLineDataProvider( samtools_source, **settings )
- @dataproviders.decorators.dataprovider_factory( 'id-seq-qual', dataproviders.column.MapDataProvider.settings )
+ @dataproviders.decorators.dataprovider_factory( 'id-seq-qual', dataproviders.column.DictDataProvider.settings )
def id_seq_qual_dataprovider( self, dataset, **settings ):
settings[ 'indeces' ] = [ 0, 9, 10 ]
settings[ 'column_types' ] = [ 'str', 'str', 'str' ]
settings[ 'column_names' ] = [ 'id', 'seq', 'qual' ]
- return self.map_dataprovider( dataset, **settings )
+ return self.dict_dataprovider( dataset, **settings )
@dataproviders.decorators.dataprovider_factory( 'genomic-region', dataproviders.column.ColumnarDataProvider.settings )
def genomic_region_dataprovider( self, dataset, **settings ):
@@ -330,12 +330,12 @@
settings[ 'column_types' ] = [ 'str', 'int', 'int' ]
return self.column_dataprovider( dataset, **settings )
- @dataproviders.decorators.dataprovider_factory( 'genomic-region-map', dataproviders.column.MapDataProvider.settings )
- def genomic_region_map_dataprovider( self, dataset, **settings ):
+ @dataproviders.decorators.dataprovider_factory( 'genomic-region-dict', dataproviders.column.DictDataProvider.settings )
+ def genomic_region_dict_dataprovider( self, dataset, **settings ):
settings[ 'indeces' ] = [ 2, 3, 3 ]
settings[ 'column_types' ] = [ 'str', 'int', 'int' ]
settings[ 'column_names' ] = [ 'chrom', 'start', 'end' ]
- return self.map_dataprovider( dataset, **settings )
+ return self.dict_dataprovider( dataset, **settings )
@dataproviders.decorators.dataprovider_factory( 'samtools' )
def samtools_dataprovider( self, dataset, **settings ):
diff -r 416e08c46ed6b03598581c5d6c6b44587aa9f170 -r 3422736c168d6d864ec3c478d3e725b1184e3778 lib/galaxy/datatypes/dataproviders/column.py
--- a/lib/galaxy/datatypes/dataproviders/column.py
+++ b/lib/galaxy/datatypes/dataproviders/column.py
@@ -220,9 +220,9 @@
super( FilteredByColumnDataProvider, self ).__init__( source, **kwargs )
-class MapDataProvider( ColumnarDataProvider ):
+class DictDataProvider( ColumnarDataProvider ):
"""
- Data provider that column_names and columns from the source's contents
+ Data provider that zips column_names and columns from the source's contents
into a dictionary.
A combination use of both `column_names` and `indeces` allows 'picking'
@@ -244,11 +244,11 @@
:type column_names:
"""
#TODO: allow passing in a map instead of name->index { 'name1': index1, ... }
- super( MapDataProvider, self ).__init__( source, **kwargs )
+ super( DictDataProvider, self ).__init__( source, **kwargs )
self.column_names = column_names or []
def __iter__( self ):
- parent_gen = super( MapDataProvider, self ).__iter__()
+ parent_gen = super( DictDataProvider, self ).__iter__()
for column_values in parent_gen:
map = dict( zip( self.column_names, column_values ) )
yield map
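The core behavior of the renamed `DictDataProvider` — zipping `column_names` against each row's values — is `dict(zip(...))` per row. A standalone sketch of just that step, where the `rows` iterable stands in for the columnar parent iterator:

```python
def dict_rows(column_names, rows):
    """Zip column_names with each row's values into a dict, as
    DictDataProvider.__iter__ does over its columnar parent."""
    for column_values in rows:
        yield dict(zip(column_names, column_values))

rows = [['chr1', 10, 100], ['chr2', 20, 200]]
list(dict_rows(['chrom', 'start', 'end'], rows))
# -> [{'chrom': 'chr1', 'start': 10, 'end': 100},
#     {'chrom': 'chr2', 'start': 20, 'end': 200}]
```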
diff -r 416e08c46ed6b03598581c5d6c6b44587aa9f170 -r 3422736c168d6d864ec3c478d3e725b1184e3778 lib/galaxy/datatypes/dataproviders/dataset.py
--- a/lib/galaxy/datatypes/dataproviders/dataset.py
+++ b/lib/galaxy/datatypes/dataproviders/dataset.py
@@ -203,16 +203,16 @@
super( DatasetColumnarDataProvider, self ).__init__( dataset_source, **kwargs )
-class DatasetMapDataProvider( column.MapDataProvider ):
+class DatasetDictDataProvider( column.DictDataProvider ):
"""
Data provider that uses a DatasetDataProvider as its source and the
- dataset's metadata to buuild settings for the MapDataProvider it's
+ dataset's metadata to build settings for the DictDataProvider it's
inherited from.
"""
def __init__( self, dataset, **kwargs ):
"""
- All kwargs are inherited from MapDataProvider.
- .. seealso:: column.MapDataProvider
+ All kwargs are inherited from DictDataProvider.
+ .. seealso:: column.DictDataProvider
If no kwargs are given, this class will attempt to get those kwargs
from the dataset source's metadata.
@@ -255,7 +255,7 @@
if not kwargs.get( 'column_types', None ):
kwargs[ 'column_types' ] = dataset_source.get_metadata_column_types( indeces=indeces )
- super( DatasetMapDataProvider, self ).__init__( dataset_source, **kwargs )
+ super( DatasetDictDataProvider, self ).__init__( dataset_source, **kwargs )
# ----------------------------------------------------------------------------- provides a bio-relevant datum
diff -r 416e08c46ed6b03598581c5d6c6b44587aa9f170 -r 3422736c168d6d864ec3c478d3e725b1184e3778 lib/galaxy/datatypes/interval.py
--- a/lib/galaxy/datatypes/interval.py
+++ b/lib/galaxy/datatypes/interval.py
@@ -339,9 +339,9 @@
def genomic_region_dataprovider( self, dataset, **settings ):
return dataproviders.dataset.GenomicRegionDataProvider( dataset, **settings )
- @dataproviders.decorators.dataprovider_factory( 'genomic-region-map',
+ @dataproviders.decorators.dataprovider_factory( 'genomic-region-dict',
dataproviders.dataset.GenomicRegionDataProvider.settings )
- def genomic_region_map_dataprovider( self, dataset, **settings ):
+ def genomic_region_dict_dataprovider( self, dataset, **settings ):
settings[ 'named_columns' ] = True
return self.genomic_region_dataprovider( dataset, **settings )
@@ -350,9 +350,9 @@
def interval_dataprovider( self, dataset, **settings ):
return dataproviders.dataset.IntervalDataProvider( dataset, **settings )
- @dataproviders.decorators.dataprovider_factory( 'interval-map',
+ @dataproviders.decorators.dataprovider_factory( 'interval-dict',
dataproviders.dataset.IntervalDataProvider.settings )
- def interval_map_dataprovider( self, dataset, **settings ):
+ def interval_dict_dataprovider( self, dataset, **settings ):
settings[ 'named_columns' ] = True
return self.interval_dataprovider( dataset, **settings )
@@ -818,9 +818,9 @@
def genomic_region_dataprovider( self, dataset, **settings ):
return dataproviders.dataset.GenomicRegionDataProvider( dataset, 0, 3, 4, **settings )
- @dataproviders.decorators.dataprovider_factory( 'genomic-region-map',
+ @dataproviders.decorators.dataprovider_factory( 'genomic-region-dict',
dataproviders.dataset.GenomicRegionDataProvider.settings )
- def genomic_region_map_dataprovider( self, dataset, **settings ):
+ def genomic_region_dict_dataprovider( self, dataset, **settings ):
settings[ 'named_columns' ] = True
return self.genomic_region_dataprovider( dataset, **settings )
@@ -829,9 +829,9 @@
def interval_dataprovider( self, dataset, **settings ):
return dataproviders.dataset.IntervalDataProvider( dataset, 0, 3, 4, 6, 2, **settings )
- @dataproviders.decorators.dataprovider_factory( 'interval-map',
+ @dataproviders.decorators.dataprovider_factory( 'interval-dict',
dataproviders.dataset.IntervalDataProvider.settings )
- def interval_map_dataprovider( self, dataset, **settings ):
+ def interval_dict_dataprovider( self, dataset, **settings ):
settings[ 'named_columns' ] = True
return self.interval_dataprovider( dataset, **settings )
@@ -1206,8 +1206,8 @@
dataset_source = dataproviders.dataset.DatasetDataProvider( dataset )
return dataproviders.dataset.WiggleDataProvider( dataset_source, **settings )
- @dataproviders.decorators.dataprovider_factory( 'wiggle-map', dataproviders.dataset.WiggleDataProvider.settings )
- def wiggle_map_dataprovider( self, dataset, **settings ):
+ @dataproviders.decorators.dataprovider_factory( 'wiggle-dict', dataproviders.dataset.WiggleDataProvider.settings )
+ def wiggle_dict_dataprovider( self, dataset, **settings ):
dataset_source = dataproviders.dataset.DatasetDataProvider( dataset )
settings[ 'named_columns' ] = True
return dataproviders.dataset.WiggleDataProvider( dataset_source, **settings )
diff -r 416e08c46ed6b03598581c5d6c6b44587aa9f170 -r 3422736c168d6d864ec3c478d3e725b1184e3778 lib/galaxy/datatypes/tabular.py
--- a/lib/galaxy/datatypes/tabular.py
+++ b/lib/galaxy/datatypes/tabular.py
@@ -357,16 +357,16 @@
"""Attempts to get column settings from dataset.metadata"""
return dataproviders.dataset.DatasetColumnarDataProvider( dataset, **settings )
- @dataproviders.decorators.dataprovider_factory( 'map', dataproviders.column.MapDataProvider.settings )
- def map_dataprovider( self, dataset, **settings ):
+ @dataproviders.decorators.dataprovider_factory( 'dict', dataproviders.column.DictDataProvider.settings )
+ def dict_dataprovider( self, dataset, **settings ):
"""Uses column settings that are passed in"""
dataset_source = dataproviders.dataset.DatasetDataProvider( dataset )
- return dataproviders.column.MapDataProvider( dataset_source, **settings )
+ return dataproviders.column.DictDataProvider( dataset_source, **settings )
- @dataproviders.decorators.dataprovider_factory( 'dataset-map', dataproviders.column.MapDataProvider.settings )
- def dataset_map_dataprovider( self, dataset, **settings ):
+ @dataproviders.decorators.dataprovider_factory( 'dataset-dict', dataproviders.column.DictDataProvider.settings )
+ def dataset_dict_dataprovider( self, dataset, **settings ):
"""Attempts to get column settings from dataset.metadata"""
- return dataproviders.dataset.DatasetMapDataProvider( dataset, **settings )
+ return dataproviders.dataset.DatasetDictDataProvider( dataset, **settings )
class Taxonomy( Tabular ):
@@ -522,15 +522,15 @@
settings[ 'comment_char' ] = '@'
return super( Sam, self ).dataset_column_dataprovider( dataset, **settings )
- @dataproviders.decorators.dataprovider_factory( 'map', dataproviders.column.MapDataProvider.settings )
- def map_dataprovider( self, dataset, **settings ):
+ @dataproviders.decorators.dataprovider_factory( 'dict', dataproviders.column.DictDataProvider.settings )
+ def dict_dataprovider( self, dataset, **settings ):
settings[ 'comment_char' ] = '@'
- return super( Sam, self ).map_dataprovider( dataset, **settings )
+ return super( Sam, self ).dict_dataprovider( dataset, **settings )
- @dataproviders.decorators.dataprovider_factory( 'dataset-map', dataproviders.column.MapDataProvider.settings )
- def dataset_map_dataprovider( self, dataset, **settings ):
+ @dataproviders.decorators.dataprovider_factory( 'dataset-dict', dataproviders.column.DictDataProvider.settings )
+ def dataset_dict_dataprovider( self, dataset, **settings ):
settings[ 'comment_char' ] = '@'
- return super( Sam, self ).dataset_map_dataprovider( dataset, **settings )
+ return super( Sam, self ).dataset_dict_dataprovider( dataset, **settings )
@dataproviders.decorators.dataprovider_factory( 'header', dataproviders.line.RegexLineDataProvider.settings )
def header_dataprovider( self, dataset, **settings ):
@@ -538,12 +538,12 @@
headers_source = dataproviders.line.RegexLineDataProvider( dataset_source, regex_list=[ '^@' ] )
return dataproviders.line.RegexLineDataProvider( headers_source, **settings )
- @dataproviders.decorators.dataprovider_factory( 'id-seq-qual', map_dataprovider.settings )
+ @dataproviders.decorators.dataprovider_factory( 'id-seq-qual', dict_dataprovider.settings )
def id_seq_qual_dataprovider( self, dataset, **settings ):
- # provided as an example of a specified column map (w/o metadata)
+ # provided as an example of a specified column dict (w/o metadata)
settings[ 'indeces' ] = [ 0, 9, 10 ]
settings[ 'column_names' ] = [ 'id', 'seq', 'qual' ]
- return self.map_dataprovider( dataset, **settings )
+ return self.dict_dataprovider( dataset, **settings )
@dataproviders.decorators.dataprovider_factory( 'genomic-region',
dataproviders.dataset.GenomicRegionDataProvider.settings )
@@ -551,9 +551,9 @@
settings[ 'comment_char' ] = '@'
return dataproviders.dataset.GenomicRegionDataProvider( dataset, 2, 3, 3, **settings )
- @dataproviders.decorators.dataprovider_factory( 'genomic-region-map',
+ @dataproviders.decorators.dataprovider_factory( 'genomic-region-dict',
dataproviders.dataset.GenomicRegionDataProvider.settings )
- def genomic_region_map_dataprovider( self, dataset, **settings ):
+ def genomic_region_dict_dataprovider( self, dataset, **settings ):
settings[ 'comment_char' ] = '@'
return dataproviders.dataset.GenomicRegionDataProvider( dataset, 2, 3, 3, True, **settings )
@@ -628,9 +628,9 @@
def genomic_region_dataprovider( self, dataset, **settings ):
return dataproviders.dataset.GenomicRegionDataProvider( dataset, **settings )
- @dataproviders.decorators.dataprovider_factory( 'genomic-region-map',
+ @dataproviders.decorators.dataprovider_factory( 'genomic-region-dict',
dataproviders.dataset.GenomicRegionDataProvider.settings )
- def genomic_region_map_dataprovider( self, dataset, **settings ):
+ def genomic_region_dict_dataprovider( self, dataset, **settings ):
settings[ 'named_columns' ] = True
return self.genomic_region_dataprovider( dataset, **settings )
@@ -677,9 +677,9 @@
def genomic_region_dataprovider( self, dataset, **settings ):
return dataproviders.dataset.GenomicRegionDataProvider( dataset, 0, 1, 1, **settings )
- @dataproviders.decorators.dataprovider_factory( 'genomic-region-map',
+ @dataproviders.decorators.dataprovider_factory( 'genomic-region-dict',
dataproviders.dataset.GenomicRegionDataProvider.settings )
- def genomic_region_map_dataprovider( self, dataset, **settings ):
+ def genomic_region_dict_dataprovider( self, dataset, **settings ):
settings[ 'named_columns' ] = True
return self.genomic_region_dataprovider( dataset, **settings )
Repository URL: https://bitbucket.org/galaxy/galaxy-central/
--
This is a commit notification from bitbucket.org. You are receiving
this because you have the service enabled, addressing the recipient of
this email.
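The commit above renames the `*-map` dataprovider factories to `*-dict`, reflecting that with `named_columns=True` the provider yields one dict per row (keyed by column name) instead of a plain list. A minimal standalone sketch of that behavior, using hypothetical names rather than the actual Galaxy dataprovider classes:

```python
# Sketch (hypothetical, not the Galaxy API) of the list-vs-dict behavior
# behind the 'map' -> 'dict' dataprovider rename: the same column provider
# yields lists by default, or dicts keyed by column name when
# named_columns=True, which is what the renamed *-dict providers return.

def column_dataprovider(rows, column_names=None, named_columns=False):
    """Yield each row as a list, or as a dict when named_columns is set."""
    for row in rows:
        if named_columns and column_names:
            yield dict(zip(column_names, row))
        else:
            yield list(row)

rows = [("chr1", 100, 200), ("chr2", 300, 400)]
names = ["chrom", "start", "end"]

as_lists = list(column_dataprovider(rows))
as_dicts = list(column_dataprovider(rows, column_names=names, named_columns=True))
```

The rename makes the wire format explicit to API consumers: a `*-dict` provider promises JSON objects rather than positional arrays.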
commit/galaxy-central: greg: Fix for discovering types to which a repository can be set.
by commits-noreply@bitbucket.org 18 Jul '13
1 new commit in galaxy-central:
https://bitbucket.org/galaxy/galaxy-central/commits/416e08c46ed6/
Changeset: 416e08c46ed6
User: greg
Date: 2013-07-18 16:57:29
Summary: Fix for discovering types to which a repository can be set.
Affected #: 2 files
diff -r d95be6981cc9a862a458bebb218a5180319ecbd7 -r 416e08c46ed6b03598581c5d6c6b44587aa9f170 lib/galaxy/webapps/tool_shed/model/__init__.py
--- a/lib/galaxy/webapps/tool_shed/model/__init__.py
+++ b/lib/galaxy/webapps/tool_shed/model/__init__.py
@@ -164,7 +164,7 @@
return True
if self.times_downloaded == 0:
return True
- if self.type == rt_util.DEFAULT:
+ if self.type == rt_util.UNRESTRICTED:
return True
return False
diff -r d95be6981cc9a862a458bebb218a5180319ecbd7 -r 416e08c46ed6b03598581c5d6c6b44587aa9f170 lib/tool_shed/util/tool_dependency_util.py
--- a/lib/tool_shed/util/tool_dependency_util.py
+++ b/lib/tool_shed/util/tool_dependency_util.py
@@ -155,7 +155,7 @@
message += "This repository currently contains a single file named <b>%s</b>. If additional files will " % suc.TOOL_DEPENDENCY_DEFINITION_FILENAME
message += "not be added to this repository, then it's type should be set to <b>%s</b>.<br/>" % tool_dependency_definition_type_class.label
else:
- message += "This repository contains no tools, so these tool dependencies are considered orphans within this repository.<br/>"
+ message += "This repository contains no tools, so it's defined tool dependencies are considered orphans within this repository.<br/>"
return message
def get_installed_and_missing_tool_dependencies( trans, repository, all_tool_dependencies ):
Repository URL: https://bitbucket.org/galaxy/galaxy-central/
commit/galaxy-central: greg: Enhance messaging in the tool shed as a result of the introduction of support for repository types.
by commits-noreply@bitbucket.org 18 Jul '13
1 new commit in galaxy-central:
https://bitbucket.org/galaxy/galaxy-central/commits/d95be6981cc9/
Changeset: d95be6981cc9
User: greg
Date: 2013-07-18 16:51:23
Summary: Enhance messaging in the tool shed as a result of the introduction of support for repository types.
Affected #: 7 files
diff -r 3985ecec291f3b1a52997f0ad4fb6579045f16c2 -r d95be6981cc9a862a458bebb218a5180319ecbd7 lib/galaxy/webapps/tool_shed/controllers/repository.py
--- a/lib/galaxy/webapps/tool_shed/controllers/repository.py
+++ b/lib/galaxy/webapps/tool_shed/controllers/repository.py
@@ -2174,11 +2174,12 @@
key_rd_dicts_to_be_processed=None,
all_repository_dependencies=None,
handled_key_rd_dicts=None )
- # Handle messaging for orphan tool dependencies.
- orphan_message = tool_dependency_util.generate_message_for_orphan_tool_dependencies( metadata )
- if orphan_message:
- message += orphan_message
- status = 'warning'
+ if str( repository.type ) != rt_util.TOOL_DEPENDENCY_DEFINITION:
+ # Handle messaging for orphan tool dependencies.
+ orphan_message = tool_dependency_util.generate_message_for_orphan_tool_dependencies( trans, repository, metadata )
+ if orphan_message:
+ message += orphan_message
+ status = 'warning'
if is_malicious:
if trans.app.security_agent.can_push( trans.app, trans.user, repository ):
message += malicious_error_can_push
@@ -2976,11 +2977,6 @@
key_rd_dicts_to_be_processed=None,
all_repository_dependencies=None,
handled_key_rd_dicts=None )
- # Handle messaging for orphan tool dependencies.
- orphan_message = tool_dependency_util.generate_message_for_orphan_tool_dependencies( metadata )
- if orphan_message:
- message += orphan_message
- status = 'warning'
else:
metadata = None
is_malicious = suc.changeset_is_malicious( trans, id, repository.tip( trans.app ) )
diff -r 3985ecec291f3b1a52997f0ad4fb6579045f16c2 -r d95be6981cc9a862a458bebb218a5180319ecbd7 lib/galaxy/webapps/tool_shed/controllers/upload.py
--- a/lib/galaxy/webapps/tool_shed/controllers/upload.py
+++ b/lib/galaxy/webapps/tool_shed/controllers/upload.py
@@ -206,15 +206,16 @@
metadata_dict = repository.metadata_revisions[0].metadata
else:
metadata_dict = {}
- # Provide a warning message if a tool_dependencies.xml file is provided, but tool dependencies weren't loaded due to a requirement tag mismatch
- # or some other problem. Tool dependency definitions can define orphan tool dependencies (no relationship to any tools contained in the repository),
- # so warning messages are important because orphans are always valid. The repository owner must be warned in case they did not intend to define an
- # orphan dependency, but simply provided incorrect information (tool shed, name owner, changeset_revision) for the definition.
- # Handle messaging for orphan tool dependencies.
- orphan_message = tool_dependency_util.generate_message_for_orphan_tool_dependencies( metadata_dict )
- if orphan_message:
- message += orphan_message
- status = 'warning'
+ if str( repository.type ) != rt_util.TOOL_DEPENDENCY_DEFINITION:
+ # Provide a warning message if a tool_dependencies.xml file is provided, but tool dependencies weren't loaded due to a requirement tag mismatch
+ # or some other problem. Tool dependency definitions can define orphan tool dependencies (no relationship to any tools contained in the repository),
+ # so warning messages are important because orphans are always valid. The repository owner must be warned in case they did not intend to define an
+ # orphan dependency, but simply provided incorrect information (tool shed, name owner, changeset_revision) for the definition.
+ # Handle messaging for orphan tool dependencies.
+ orphan_message = tool_dependency_util.generate_message_for_orphan_tool_dependencies( trans, repository, metadata_dict )
+ if orphan_message:
+ message += orphan_message
+ status = 'warning'
# Handle messaging for invalid tool dependencies.
invalid_tool_dependencies_message = tool_dependency_util.generate_message_for_invalid_tool_dependencies( metadata_dict )
if invalid_tool_dependencies_message:
diff -r 3985ecec291f3b1a52997f0ad4fb6579045f16c2 -r d95be6981cc9a862a458bebb218a5180319ecbd7 lib/galaxy/webapps/tool_shed/model/__init__.py
--- a/lib/galaxy/webapps/tool_shed/model/__init__.py
+++ b/lib/galaxy/webapps/tool_shed/model/__init__.py
@@ -171,7 +171,7 @@
def can_change_type_to( self, app, new_type_label ):
if self.can_change_type( app ):
new_type = app.repository_types_registry.get_class_by_label( new_type_label )
- if new_type.is_valid_for_type( self ):
+ if new_type.is_valid_for_type( app, self ):
return True
return False
diff -r 3985ecec291f3b1a52997f0ad4fb6579045f16c2 -r d95be6981cc9a862a458bebb218a5180319ecbd7 lib/tool_shed/repository_types/metadata.py
--- a/lib/tool_shed/repository_types/metadata.py
+++ b/lib/tool_shed/repository_types/metadata.py
@@ -18,7 +18,7 @@
repo = hg.repository( ui.ui(), repository.repo_path( app ) )
return repo.changelog
- def is_valid_for_type( self, repository, revisions_to_check=None ):
+ def is_valid_for_type( self, app, repository, revisions_to_check=None ):
raise "Unimplemented Method"
diff -r 3985ecec291f3b1a52997f0ad4fb6579045f16c2 -r d95be6981cc9a862a458bebb218a5180319ecbd7 lib/tool_shed/util/tool_dependency_util.py
--- a/lib/tool_shed/util/tool_dependency_util.py
+++ b/lib/tool_shed/util/tool_dependency_util.py
@@ -5,6 +5,7 @@
from galaxy import util
from galaxy.model.orm import and_
import tool_shed.util.shed_util_common as suc
+import tool_shed.repository_types.util as rt_util
from tool_shed.util import xml_util
log = logging.getLogger( __name__ )
@@ -115,7 +116,7 @@
message = '%s ' % str( error )
return message
-def generate_message_for_orphan_tool_dependencies( metadata_dict ):
+def generate_message_for_orphan_tool_dependencies( trans, repository, metadata_dict ):
"""
The introduction of the support for orphan tool dependency definitions in tool shed repositories has resulted in the inability
to define an improperly configured tool dependency definition / tool config requirements tag combination as an invalid tool
@@ -127,28 +128,34 @@
if metadata_dict:
orphan_tool_dependencies = metadata_dict.get( 'orphan_tool_dependencies', None )
if orphan_tool_dependencies:
- if 'tools' not in metadata_dict and 'invalid_tools' not in metadata_dict:
+ if 'tools' in metadata_dict or 'invalid_tools' in metadata_dict:
+ for td_key, requirements_dict in orphan_tool_dependencies.items():
+ if td_key == 'set_environment':
+ # "set_environment": [{"name": "R_SCRIPT_PATH", "type": "set_environment"}]
+ message += "The settings for <b>name</b> and <b>type</b> from a contained tool configuration file's <b>requirement</b> tag "
+ message += "does not match the information for the following tool dependency definitions in the <b>tool_dependencies.xml</b> "
+ message += "file, so these tool dependencies have no relationship with any tools within this repository.<br/>"
+ for env_requirements_dict in requirements_dict:
+ name = env_requirements_dict[ 'name' ]
+ type = env_requirements_dict[ 'type' ]
+ message += "<b>* name:</b> %s, <b>type:</b> %s<br/>" % ( str( name ), str( type ) )
+ else:
+ # "R/2.15.1": {"name": "R", "readme": "some string", "type": "package", "version": "2.15.1"}
+ message += "The settings for <b>name</b>, <b>version</b> and <b>type</b> from a contained tool configuration file's "
+ message += "<b>requirement</b> tag does not match the information for the following tool dependency definitions in the "
+ message += "<b>tool_dependencies.xml</b> file, so these tool dependencies have no relationship with any tools within "
+ message += "this repository.<br/>"
+ name = requirements_dict[ 'name' ]
+ type = requirements_dict[ 'type' ]
+ version = requirements_dict[ 'version' ]
+ message += "<b>* name:</b> %s, <b>type:</b> %s, <b>version:</b> %s<br/>" % ( str( name ), str( type ), str( version ) )
+ message += "<br/>"
+ elif repository.can_change_type_to( trans.app, rt_util.TOOL_DEPENDENCY_DEFINITION ):
+ tool_dependency_definition_type_class = trans.app.repository_types_registry.get_class_by_label( rt_util.TOOL_DEPENDENCY_DEFINITION )
+ message += "This repository currently contains a single file named <b>%s</b>. If additional files will " % suc.TOOL_DEPENDENCY_DEFINITION_FILENAME
+ message += "not be added to this repository, then it's type should be set to <b>%s</b>.<br/>" % tool_dependency_definition_type_class.label
+ else:
message += "This repository contains no tools, so these tool dependencies are considered orphans within this repository.<br/>"
- for td_key, requirements_dict in orphan_tool_dependencies.items():
- if td_key == 'set_environment':
- # "set_environment": [{"name": "R_SCRIPT_PATH", "type": "set_environment"}]
- message += "The settings for <b>name</b> and <b>type</b> from a contained tool configuration file's <b>requirement</b> tag "
- message += "does not match the information for the following tool dependency definitions in the <b>tool_dependencies.xml</b> "
- message += "file, so these tool dependencies are considered orphans within this repository.<br/>"
- for env_requirements_dict in requirements_dict:
- name = env_requirements_dict[ 'name' ]
- type = env_requirements_dict[ 'type' ]
- message += "<b>* name:</b> %s, <b>type:</b> %s<br/>" % ( str( name ), str( type ) )
- else:
- # "R/2.15.1": {"name": "R", "readme": "some string", "type": "package", "version": "2.15.1"}
- message += "The settings for <b>name</b>, <b>version</b> and <b>type</b> from a contained tool configuration file's "
- message += "<b>requirement</b> tag does not match the information for the following tool dependency definitions in the "
- message += "<b>tool_dependencies.xml</b> file, so these tool dependencies are considered orphans within this repository.<br/>"
- name = requirements_dict[ 'name' ]
- type = requirements_dict[ 'type' ]
- version = requirements_dict[ 'version' ]
- message += "<b>* name:</b> %s, <b>type:</b> %s, <b>version:</b> %s<br/>" % ( str( name ), str( type ), str( version ) )
- message += "<br/>"
return message
def get_installed_and_missing_tool_dependencies( trans, repository, all_tool_dependencies ):
diff -r 3985ecec291f3b1a52997f0ad4fb6579045f16c2 -r d95be6981cc9a862a458bebb218a5180319ecbd7 test/tool_shed/functional/test_0100_complex_repository_dependencies.py
--- a/test/tool_shed/functional/test_0100_complex_repository_dependencies.py
+++ b/test/tool_shed/functional/test_0100_complex_repository_dependencies.py
@@ -51,7 +51,7 @@
uncompress_file=False,
remove_repo_files_not_in_tar=False,
commit_message='Uploaded tool_dependencies.xml.',
- strings_displayed=[ 'The settings for <b>name</b>, <b>version</b> and <b>type</b> from a contained tool' ],
+ strings_displayed=[ 'This repository currently contains a single file named <b>tool_dependencies.xml</b>' ],
strings_not_displayed=[] )
# Visit the manage repository page for package_bwa_0_5_9_0100.
self.display_manage_repository_page( repository, strings_displayed=[ 'Tool dependencies', 'may not be', 'in this repository' ] )
diff -r 3985ecec291f3b1a52997f0ad4fb6579045f16c2 -r d95be6981cc9a862a458bebb218a5180319ecbd7 test/tool_shed/functional/test_0170_complex_prior_installation_required.py
--- a/test/tool_shed/functional/test_0170_complex_prior_installation_required.py
+++ b/test/tool_shed/functional/test_0170_complex_prior_installation_required.py
@@ -56,7 +56,7 @@
uncompress_file=True,
remove_repo_files_not_in_tar=False,
commit_message='Uploaded matplotlib tool dependency tarball.',
- strings_displayed=['orphan'],
+ strings_displayed=[ 'This repository currently contains a single file named <b>tool_dependencies.xml</b>' ],
strings_not_displayed=[] )
def test_0010_create_numpy_repository( self ):
@@ -80,7 +80,7 @@
uncompress_file=True,
remove_repo_files_not_in_tar=False,
commit_message='Uploaded numpy tool dependency tarball.',
- strings_displayed=['orphan'],
+ strings_displayed=[ 'This repository currently contains a single file named <b>tool_dependencies.xml</b>' ],
strings_not_displayed=[] )
def test_0015_create_complex_repository_dependency( self ):
Repository URL: https://bitbucket.org/galaxy/galaxy-central/
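The diff above gates the orphan-dependency warning on repository type and inverts the tools check. A simplified sketch of that decision logic (hypothetical function and strings, condensed from the HTML messages in the real `generate_message_for_orphan_tool_dependencies`):

```python
# Hypothetical condensation of the gating added in this commit: repositories
# of type tool_dependency_definition get no orphan warning at all, and the
# wording otherwise depends on whether the repository contains any tools.

def orphan_warning(repo_type, metadata):
    if repo_type == "tool_dependency_definition":
        # Definitions are expected to contain no tools; orphans are normal.
        return ""
    if not metadata.get("orphan_tool_dependencies"):
        return ""
    if "tools" in metadata or "invalid_tools" in metadata:
        # Tools exist but their requirement tags match no definition.
        return "dependency definitions do not match any tool requirements"
    return "repository contains no tools, so dependencies are orphans"
```

This mirrors the commit's intent: warnings stay useful for ordinary repositories while definition-type repositories stop producing spurious orphan messages.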
commit/galaxy-central: carlfeberhard: Saved histories (and grids in gen.): fix busted query in grids when using log_actions=True
by commits-noreply@bitbucket.org 17 Jul '13
1 new commit in galaxy-central:
https://bitbucket.org/galaxy/galaxy-central/commits/3985ecec291f/
Changeset: 3985ecec291f
User: carlfeberhard
Date: 2013-07-18 00:46:42
Summary: Saved histories (and grids in gen.): fix busted query in grids when using log_actions=True
Affected #: 1 file
diff -r e008f7f78892ceb35bd1198d9e56528616ddfddb -r 3985ecec291f3b1a52997f0ad4fb6579045f16c2 lib/galaxy/web/framework/helpers/grids.py
--- a/lib/galaxy/web/framework/helpers/grids.py
+++ b/lib/galaxy/web/framework/helpers/grids.py
@@ -225,7 +225,13 @@
params = cur_filter_dict.copy()
params['sort'] = sort_key
params['async'] = ( 'async' in kwargs )
- trans.log_action( trans.get_user(), unicode( "grid.view" ), context, params )
+
+ #TODO:??
+ # commenting this out; when this fn calls session.add( action ) and session.flush the query from this fn
+ # is effectively 'wiped' out. Nate believes it has something to do with our use of session( autocommit=True )
+ # in mapping.py. If you change that to False, the log_action doesn't affect the query
+ # Below, I'm rendering the template first (that uses query), then calling log_action, then returning the page
+ #trans.log_action( trans.get_user(), unicode( "grid.view" ), context, params )
# Render grid.
def url( *args, **kwargs ):
@@ -260,7 +266,7 @@
# utf-8 unicode; however, this would require encoding the object as utf-8 before returning the grid
# results via a controller method, which is require substantial changes. Hence, for now, return grid
# as str.
- return trans.fill_template( iff( async_request, self.async_template, self.template ),
+ page = trans.fill_template( iff( async_request, self.async_template, self.template ),
grid=self,
query=query,
cur_page_num = page_num,
@@ -280,6 +286,9 @@
# Pass back kwargs so that grid template can set and use args without
# grid explicitly having to pass them.
kwargs=kwargs )
+ trans.log_action( trans.get_user(), unicode( "grid.view" ), context, params )
+ return page
+
def get_ids( self, **kwargs ):
id = []
if 'id' in kwargs:
Repository URL: https://bitbucket.org/galaxy/galaxy-central/
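The fix in grids.py works by reordering side effects: because the session runs with `autocommit=True`, flushing the logged action wiped out the pending grid query, so the template (which consumes the query) is now rendered before `log_action` is called. A minimal sketch of that reordering with stand-in callables (not the Galaxy `trans` API):

```python
# Sketch of the render-then-log reordering (hypothetical stand-ins for
# trans.fill_template / trans.log_action). With an autocommitting session,
# flushing the log action invalidated the pending query, so the fix renders
# the page first and only then records the action.

def render_grid(fill_template, log_action, query, context, params):
    # Render the template (which consumes `query`) BEFORE logging, so the
    # log's session flush cannot wipe out the query results.
    page = fill_template(query=query)
    log_action(context, params)
    return page
```

Usage: any instrumentation that flushes shared ORM state should run after the work that depends on that state, as the commit's inline comment explains.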
commit/galaxy-central: carlfeberhard: Saved histories (and grids in gen.): fix busted query in grids when using log_actions=True
by commits-noreply@bitbucket.org 17 Jul '13
1 new commit in galaxy-central:
https://bitbucket.org/galaxy/galaxy-central/commits/9ebd7732e616/
Changeset: 9ebd7732e616
Branch: stable
User: carlfeberhard
Date: 2013-07-18 00:42:46
Summary: Saved histories (and grids in gen.): fix busted query in grids when using log_actions=True
Affected #: 1 file
diff -r fd4113962c32b67ea0623b6cfd37be63e26ef7c6 -r 9ebd7732e616492f4570587496f1b9074cb2eac7 lib/galaxy/web/framework/helpers/grids.py
--- a/lib/galaxy/web/framework/helpers/grids.py
+++ b/lib/galaxy/web/framework/helpers/grids.py
@@ -225,7 +225,13 @@
params = cur_filter_dict.copy()
params['sort'] = sort_key
params['async'] = ( 'async' in kwargs )
- trans.log_action( trans.get_user(), unicode( "grid.view" ), context, params )
+
+ #TODO:??
+ # commenting this out; when this fn calls session.add( action ) and session.flush the query from this fn
+ # is effectively 'wiped' out. Nate believes it has something to do with our use of session( autocommit=True )
+ # in mapping.py. If you change that to False, the log_action doesn't affect the query
+ # Below, I'm rendering the template first (that uses query), then calling log_action, then returning the page
+ #trans.log_action( trans.get_user(), unicode( "grid.view" ), context, params )
# Render grid.
def url( *args, **kwargs ):
@@ -260,7 +266,7 @@
# utf-8 unicode; however, this would require encoding the object as utf-8 before returning the grid
# results via a controller method, which is require substantial changes. Hence, for now, return grid
# as str.
- return trans.fill_template( iff( async_request, self.async_template, self.template ),
+ page = trans.fill_template( iff( async_request, self.async_template, self.template ),
grid=self,
query=query,
cur_page_num = page_num,
@@ -280,6 +286,9 @@
# Pass back kwargs so that grid template can set and use args without
# grid explicitly having to pass them.
kwargs=kwargs )
+ trans.log_action( trans.get_user(), unicode( "grid.view" ), context, params )
+ return page
+
def get_ids( self, **kwargs ):
id = []
if 'id' in kwargs:
Repository URL: https://bitbucket.org/galaxy/galaxy-central/
commit/galaxy-central: guerler: Dataset display eye-icon, fix context menu
by commits-noreply@bitbucket.org 17 Jul '13
1 new commit in galaxy-central:
https://bitbucket.org/galaxy/galaxy-central/commits/e008f7f78892/
Changeset: e008f7f78892
User: guerler
Date: 2013-07-17 22:58:56
Summary: Dataset display eye-icon, fix context menu
Affected #: 1 file
diff -r ff21a344b9e14e0deb896bbe89a45c70ad608331 -r e008f7f78892ceb35bd1198d9e56528616ddfddb static/scripts/mvc/dataset/hda-base.js
--- a/static/scripts/mvc/dataset/hda-base.js
+++ b/static/scripts/mvc/dataset/hda-base.js
@@ -237,7 +237,13 @@
} else {
displayBtnData.title = _l( 'View data' );
- displayBtnData.href = "javascript:parent.frame_manager.frame_new({title: 'Data Viewer', type: 'url', location: 'center', content: '" + this.urls.display + "'});";
+
+ // default link for dataset
+ displayBtnData.href = this.urls.display;
+
+ // add frame manager option onclick event
+ var self = this;
+ displayBtnData.on_click = function() { parent.frame_manager.frame_new({title: "Data Viewer", type: "url", location: "center", content: self.urls.display }); };
}
this.displayButton = new IconButtonView({ model : new IconButton( displayBtnData ) });
Repository URL: https://bitbucket.org/galaxy/galaxy-central/
17 Jul '13
1 new commit in galaxy-central:
https://bitbucket.org/galaxy/galaxy-central/commits/ff21a344b9e1/
Changeset: ff21a344b9e1
User: greg
Date: 2013-07-17 21:34:54
Summary: Fixes when pushing changes to a repository in the tool shed from the command line: (a) only allow changes to a single file named tool_dependencies.xml to be pushed to a repository whose type is "Tool dependency definition" and (b) for any changes that are made to dependency definition files, validate the attributes of any <repository> tags, making sure that all required attributes (toolshed, name, owner, changeset_revision) are defined since automatically populating these attributes is supported only when using the tool shed's upload utility.
Affected #: 2 files
diff -r ef5cc1d272d2805d1c6597b200f9df58a291d863 -r ff21a344b9e14e0deb896bbe89a45c70ad608331 lib/galaxy/webapps/tool_shed/framework/middleware/hg.py
--- a/lib/galaxy/webapps/tool_shed/framework/middleware/hg.py
+++ b/lib/galaxy/webapps/tool_shed/framework/middleware/hg.py
@@ -1,19 +1,28 @@
-"""
-Middleware for handling hg authentication for users pushing change sets to local repositories.
-"""
-import os, logging
+"""Middle-ware for handling hg authentication for users pushing change sets to local repositories."""
+import os
+import logging
import sqlalchemy
+import sys
+import tempfile
from paste.auth.basic import AuthBasicAuthenticator
from paste.httpheaders import AUTH_TYPE
from paste.httpheaders import REMOTE_USER
from galaxy.util import asbool
+from galaxy.util import json
from galaxy.webapps.tool_shed import model
from galaxy.util.hash_util import new_secure_hash
+import tool_shed.util.shed_util_common as suc
+from tool_shed.util import commit_util
+import tool_shed.repository_types.util as rt_util
+
+from galaxy import eggs
+eggs.require( 'mercurial' )
import mercurial.__version__
log = logging.getLogger(__name__)
+CHUNK_SIZE = 65536
class Hg( object ):
@@ -23,10 +32,6 @@
self.config = config
# Authenticate this mercurial request using basic authentication
self.authentication = AuthBasicAuthenticator( 'hgweb in the tool shed', self.__basic_authentication )
- self.remote_address = None
- self.repository = None
- self.username = None
- self.action = None
# Determine the database url
if 'database_connection' in self.config:
self.db_url = self.config[ 'database_connection' ]
@@ -42,26 +47,21 @@
#
# Increment the value of the times_downloaded column in the repository table for the cloned repository.
if 'PATH_INFO' in environ:
- path_info = environ[ 'PATH_INFO' ].lstrip( '/' )
- # An example of path_info is: '/repos/test/column1'
- path_info_components = path_info.split( '/' )
- username = path_info_components[1]
- name = path_info_components[2]
# Instantiate a database connection
engine = sqlalchemy.create_engine( self.db_url )
connection = engine.connect()
- result_set = connection.execute( "select id from galaxy_user where username = '%s'" % username.lower() )
- for row in result_set:
- # Should only be 1 row...
- user_id = row[ 'id' ]
- result_set = connection.execute( "select times_downloaded from repository where user_id = %d and name = '%s'" % ( user_id, name.lower() ) )
+ path_info = environ[ 'PATH_INFO' ].lstrip( '/' )
+ user_id, repository_name = self.__get_user_id_repository_name_from_path_info( connection, path_info )
+ sql_cmd = "SELECT times_downloaded FROM repository WHERE user_id = %d AND name = '%s'" % ( user_id, repository_name.lower() )
+ result_set = connection.execute( sql_cmd )
for row in result_set:
# Should only be 1 row...
times_downloaded = row[ 'times_downloaded' ]
times_downloaded += 1
- connection.execute( "update repository set times_downloaded = %d where user_id = %d and name = '%s'" % ( times_downloaded, user_id, name.lower() ) )
+ sql_cmd = "UPDATE repository SET times_downloaded = %d WHERE user_id = %d AND name = '%s'" % ( times_downloaded, user_id, repository_name.lower() )
+ connection.execute( sql_cmd )
connection.close()
- if cmd in [ 'unbundle', 'pushkey' ]:
+ elif cmd in [ 'unbundle', 'pushkey' ]:
# This is an hg push from the command line. When doing this, the following commands, in order,
# will be retrieved from environ (see the docs at http://mercurial.selenic.com/wiki/WireProtocol)
# # If mercurial version >= '2.2.3': capabilities -> batch -> branchmap -> unbundle -> listkeys -> pushkey -> listkeys
@@ -87,6 +87,71 @@
# If all of these mechanisms fail, Mercurial will fail, printing an error message. In this case, it
# will not let you commit until you set up a username.
result = self.authentication( environ )
+ if not isinstance( result, str ) and cmd == 'unbundle' and 'wsgi.input' in environ:
+ bundle_data_stream = environ[ 'wsgi.input' ]
+ # Convert the incoming mercurial bundle into a json object and persit it to a temporary file for inspection.
+ fh = tempfile.NamedTemporaryFile( 'wb', prefix="tmp-hg-bundle" )
+ tmp_filename = fh.name
+ fh.close()
+ fh = open( tmp_filename, 'wb' )
+ while 1:
+ chunk = bundle_data_stream.read( CHUNK_SIZE )
+ if not chunk:
+ break
+ fh.write( chunk )
+ fh.close()
+ fh = open( tmp_filename, 'rb' )
+ changeset_groups = json.from_json_string( commit_util.bundle_to_json( fh ) )
+ fh.close()
+ try:
+ os.unlink( tmp_filename )
+ except OSError:
+ pass
+ if changeset_groups:
+ # Check the repository type to make sure inappropriate files are not being pushed.
+ if 'PATH_INFO' in environ:
+ # Instantiate a database connection
+ engine = sqlalchemy.create_engine( self.db_url )
+ connection = engine.connect()
+ path_info = environ[ 'PATH_INFO' ].lstrip( '/' )
+ user_id, repository_name = self.__get_user_id_repository_name_from_path_info( connection, path_info )
+ sql_cmd = "SELECT type FROM repository WHERE user_id = %d AND name = '%s'" % ( user_id, repository_name.lower() )
+ result_set = connection.execute( sql_cmd )
+ for row in result_set:
+ # Should only be 1 row...
+ repository_type = str( row[ 'type' ] )
+ if repository_type == rt_util.TOOL_DEPENDENCY_DEFINITION:
+ # Handle repositories of type tool_dependency_definition, which can only contain a single file named tool_dependencies.xml.
+ for entry in changeset_groups:
+ if len( entry ) == 2:
+ # We possibly found an altered file entry.
+ filename, change_list = entry
+ if filename and isinstance( filename, str ):
+ if filename == suc.TOOL_DEPENDENCY_DEFINITION_FILENAME:
+ # Make sure that any complex repository dependency definitions contain valid <repository> tags.
+ is_valid, error_msg = commit_util.repository_tags_are_valid( filename, change_list )
+ if not is_valid:
+ log.debug( error_msg )
+ return self.__display_exception_remotely( start_response, error_msg )
+ else:
+ msg = "Only a single file named tool_dependencies.xml can be pushed to a repository of type 'Tool dependency definition'."
+ log.debug( msg )
+ return self.__display_exception_remotely( start_response, msg )
+ else:
+ # If the changeset includes changes to dependency definition files, make sure tag sets are not missing "toolshed" or
+ # "changeset_revision" attributes since automatically populating them is not supported when pushing from the command line.
+ # These attributes are automatically populated only when using the tool shed upload utility.
+ for entry in changeset_groups:
+ if len( entry ) == 2:
+ # We possibly found an altered file entry.
+ filename, change_list = entry
+ if filename and isinstance( filename, str ):
+ if filename in [ suc.REPOSITORY_DEPENDENCY_DEFINITION_FILENAME, suc.TOOL_DEPENDENCY_DEFINITION_FILENAME ]:
+ # We check both files since tool dependency definitions files can contain complex repository dependency definitions.
+ is_valid, error_msg = commit_util.repository_tags_are_valid( filename, change_list )
+ if not is_valid:
+ log.debug( error_msg )
+ return self.__display_exception_remotely( start_response, error_msg )
if isinstance( result, str ):
# Authentication was successful
AUTH_TYPE.update( environ, 'basic' )
@@ -95,6 +160,13 @@
return result.wsgi_application( environ, start_response )
return self.app( environ, start_response )
+ def __display_exception_remotely( self, start_response, msg ):
+ # Display the exception to the remote user's command line.
+ status = "500 %s" % msg
+ response_headers = [ ("content-type", "text/plain") ]
+ start_response( status, response_headers, sys.exc_info() )
+ return [ msg ]
+
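As a side note on the handler above: the error is surfaced to the remote hg user by emitting a plain-text 500 response through the WSGI start_response callable. A minimal standalone sketch with a stubbed start_response (the names here are illustrative, not from the patch):

```python
def display_error(start_response, msg):
    # Mirrors __display_exception_remotely: a 500 status carrying the message,
    # returned as a plain-text body that the hg client prints on its command line.
    status = "500 %s" % msg
    start_response(status, [("content-type", "text/plain")])
    return [msg]

captured = {}

def stub_start_response(status, response_headers, exc_info=None):
    # Stand-in for the WSGI server's start_response callable.
    captured["status"] = status
    captured["headers"] = response_headers

body = display_error(stub_start_response, "push rejected")
```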
def __get_hg_command( self, **kwd ):
"""Pulls mercurial commands from environ[ 'QUERY_STRING' ] and returns them."""
if 'QUERY_STRING' in kwd:
@@ -103,6 +175,18 @@
return qry.split( '=' )[ -1 ]
return None
+ def __get_user_id_repository_name_from_path_info( self, db_connection, path_info ):
+ # An example of path_info (with the leading '/' already stripped by the caller) is: 'repos/test/column1'
+ path_info_components = path_info.split( '/' )
+ username = path_info_components[ 1 ]
+ repository_name = path_info_components[ 2 ]
+ # Get the id of the current user using hg from the command line.
+ result_set = db_connection.execute( "select id from galaxy_user where username = '%s'" % username.lower() )
+ for row in result_set:
+ # Should only be 1 row...
+ user_id = row[ 'id' ]
+ return user_id, repository_name
+
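The path parsing in the helper above can be shown in isolation; callers strip the leading '/' from PATH_INFO first, so the username and repository name sit at indices 1 and 2 of the split components. A sketch with hypothetical names, omitting the database lookup:

```python
def parse_path_info(path_info):
    # 'repos/<username>/<repository>' -> (username, repository_name),
    # assuming the caller has already stripped the leading '/'.
    components = path_info.split('/')
    return components[1], components[2]

username, repository_name = parse_path_info('repos/test/column1')
```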
def __basic_authentication( self, environ, username, password ):
"""The environ parameter is needed in basic authentication. We also check it if use_remote_user is true."""
if asbool( self.config.get( 'use_remote_user', False ) ):
@@ -140,7 +224,7 @@
result_set = connection.execute( "select email, username, password from galaxy_user where email = '%s'" % ru_email )
for row in result_set:
# Should only be 1 row...
- db_email = row[ 'email' ]
+ db_email = row[ 'email' ]
db_password = row[ 'password' ]
db_username = row[ 'username' ]
connection.close()
diff -r ef5cc1d272d2805d1c6597b200f9df58a291d863 -r ff21a344b9e14e0deb896bbe89a45c70ad608331 lib/tool_shed/util/commit_util.py
--- a/lib/tool_shed/util/commit_util.py
+++ b/lib/tool_shed/util/commit_util.py
@@ -1,7 +1,9 @@
+import cStringIO
import logging
import os
import pkg_resources
import shutil
+import struct
import tempfile
from galaxy import util
from galaxy.datatypes import checkers
@@ -10,19 +12,29 @@
import tool_shed.util.shed_util_common as suc
from tool_shed.util import tool_util
from tool_shed.util import xml_util
-from galaxy import eggs
import tool_shed.repository_types.util as rt_util
+from galaxy import eggs
eggs.require( 'mercurial' )
from mercurial import commands
from mercurial import hg
from mercurial import ui
+from mercurial.changegroup import readbundle
+from mercurial.changegroup import readexactly
+from mercurial.changegroup import writebundle
log = logging.getLogger( __name__ )
UNDESIRABLE_DIRS = [ '.hg', '.svn', '.git', '.cvs' ]
UNDESIRABLE_FILES = [ '.hg_archival.txt', 'hgrc', '.DS_Store' ]
+def bundle_to_json( fh ):
+ """Convert the received HG10xx data stream (a mercurial 1.0 bundle created using hg push from the command line) to a json object."""
+ # See http://www.wstein.org/home/wstein/www/home/was/patches/hg_json
+ hg_unbundle10_obj = readbundle( fh, None )
+ groups = [ group for group in unpack_groups( hg_unbundle10_obj ) ]
+ return json.to_json_string( groups, indent=4 )
+
def check_archive( repository, archive ):
for member in archive.getmembers():
# Allow regular files and directories only
@@ -63,6 +75,23 @@
message = 'The file "%s" contains image content.\n' % str( file_path )
return message
+def get_change_lines_in_file_for_tag( tag, change_dict ):
+ """
+ The received change_dict is the jsonified version of the changes to a file in a changeset being pushed to the tool shed from the command line.
+ This method cleans and returns appropriate lines for inspection.
+ """
+ cleaned_lines = []
+ data_list = change_dict.get( 'data', [] )
+ for data_dict in data_list:
+ block = data_dict.get( 'block', '' )
+ lines = block.split( '\\n' )
+ for line in lines:
+ index = line.find( tag )
+ if index > -1:
+ line = line[ index: ]
+ cleaned_lines.append( line )
+ return cleaned_lines
+
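To make the cleaning step above concrete: the change blocks are string_escape-encoded, so newlines appear as a literal backslash-n sequence, and each matching line is trimmed to start at the tag. A self-contained sketch of the same logic on made-up input:

```python
def change_lines_for_tag(tag, change_dict):
    # Same idea as get_change_lines_in_file_for_tag: split each escaped
    # block on the literal '\n' sequence and keep lines from the tag onward.
    cleaned_lines = []
    for data_dict in change_dict.get('data', []):
        for line in data_dict.get('block', '').split('\\n'):
            index = line.find(tag)
            if index > -1:
                cleaned_lines.append(line[index:])
    return cleaned_lines

# Hypothetical jsonified change: one block holding a diff-prefixed tag line.
change_dict = {'data': [{'block': '+    <repository name="c1" owner="test" />\\ncontext line'}]}
lines = change_lines_for_tag('<repository', change_dict)
```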
def get_upload_point( repository, **kwd ):
upload_point = kwd.get( 'upload_point', None )
if upload_point is not None:
@@ -149,8 +178,8 @@
except OSError, e:
# The directory is not empty.
pass
- # See if any admin users have chosen to receive email alerts when a repository is
- # updated. If so, check every uploaded file to ensure content is appropriate.
+ # See if any admin users have chosen to receive email alerts when a repository is updated. If so, check every uploaded file to ensure
+ # content is appropriate.
check_contents = check_file_contents_for_email_alerts( trans )
for filename_in_archive in filenames_in_archive:
# Check file content to ensure it is appropriate.
@@ -307,6 +336,39 @@
return altered, root
return False, None
+def repository_tag_is_valid( filename, line ):
+ """
+ Checks changes made to <repository> tags in a dependency definition file being pushed to the tool shed from the command line to ensure that
+ all required attributes exist.
+ """
+ required_attributes = [ 'toolshed', 'name', 'owner', 'changeset_revision' ]
+ defined_attributes = line.split()
+ for required_attribute in required_attributes:
+ defined = False
+ for defined_attribute in defined_attributes:
+ if defined_attribute.startswith( required_attribute ):
+ defined = True
+ break
+ if not defined:
+ error_msg = 'The %s file contains a <repository> tag that is missing the required attribute %s. ' % ( filename, required_attribute )
+ error_msg += 'Automatically populating dependency definition attributes occurs only when using the tool shed upload utility. '
+ return False, error_msg
+ return True, ''
+
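The attribute check above reduces to: every required attribute name must prefix at least one whitespace-separated token of the tag line. A compact sketch of that test on sample tags (the attribute values are invented):

```python
REQUIRED_ATTRIBUTES = ['toolshed', 'name', 'owner', 'changeset_revision']

def tag_has_required_attributes(line):
    # Mirrors repository_tag_is_valid: each required attribute must appear
    # as the prefix of some token, e.g. changeset_revision="abc123".
    tokens = line.split()
    return all(any(token.startswith(attr) for token in tokens)
               for attr in REQUIRED_ATTRIBUTES)

complete_tag = '<repository toolshed="http://localhost:9009" name="c1" owner="test" changeset_revision="abc123" />'
partial_tag = '<repository name="c1" owner="test" />'
```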
+def repository_tags_are_valid( filename, change_list ):
+ """
+ Make sure that any complex repository dependency definitions contain valid <repository> tags when pushing changes to the tool shed from the command
+ line.
+ """
+ tag = '<repository'
+ for change_dict in change_list:
+ lines = get_change_lines_in_file_for_tag( tag, change_dict )
+ for line in lines:
+ is_valid, error_msg = repository_tag_is_valid( filename, line )
+ if not is_valid:
+ return False, error_msg
+ return True, ''
+
def uncompress( repository, uploaded_file_name, uploaded_file_filename, isgzip, isbz2 ):
if isgzip:
handle_gzip( repository, uploaded_file_name )
@@ -314,3 +376,60 @@
if isbz2:
handle_bz2( repository, uploaded_file_name )
return uploaded_file_filename.rstrip( '.bz2' )
+
+def unpack_chunks( hg_unbundle10_obj ):
+ """
+ This method provides a generator of parsed chunks of a "group" in a mercurial unbundle10 object. Such an object is created by readbundle when it
+ reads a changeset that was pushed to a tool shed repository using hg push from the command line.
+ """
+ while True:
+ length, = struct.unpack( '>l', readexactly( hg_unbundle10_obj, 4 ) )
+ if length <= 4:
+ # We found a "null chunk", which ends the group.
+ break
+ if length < 84:
+ raise Exception( "invalid chunk length %d" % length )
+ node, p1, p2, cs = struct.unpack( '20s20s20s20s', readexactly( hg_unbundle10_obj, 80 ) )
+ yield { 'node': node.encode( 'hex' ),
+ 'p1': p1.encode( 'hex' ),
+ 'p2': p2.encode( 'hex' ),
+ 'cs': cs.encode( 'hex' ),
+ 'data': [ patch for patch in unpack_patches( hg_unbundle10_obj, length - 84 ) ] }
+
+def unpack_groups( hg_unbundle10_obj ):
+ """
+ This method provides a generator of parsed groups from a mercurial unbundle10 object. Such an object is created by readbundle when it reads a
+ changeset that was pushed to a tool shed repository using hg push from the command line.
+ """
+ # Process the changelog group.
+ yield [ chunk for chunk in unpack_chunks( hg_unbundle10_obj ) ]
+ # Process the manifest group.
+ yield [ chunk for chunk in unpack_chunks( hg_unbundle10_obj ) ]
+ while True:
+ length, = struct.unpack( '>l', readexactly( hg_unbundle10_obj, 4 ) )
+ if length <= 4:
+ # We found a "null meta chunk", which ends the changegroup.
+ break
+ filename = readexactly( hg_unbundle10_obj, length - 4 ).encode( 'string_escape' )
+ # Process the file group.
+ yield ( filename, [ chunk for chunk in unpack_chunks( hg_unbundle10_obj ) ] )
+
+def unpack_patches( hg_unbundle10_obj, remaining ):
+ """
+ This method provides a generator of patches from the data field in a chunk. As there is no delimiter for this data field, a length argument is
+ required.
+ """
+ while remaining >= 12:
+ start, end, blocklen = struct.unpack( '>lll', readexactly( hg_unbundle10_obj, 12 ) )
+ remaining -= 12
+ if blocklen > remaining:
+ raise Exception( "unexpected end of patch stream" )
+ block = readexactly( hg_unbundle10_obj, blocklen )
+ remaining -= blocklen
+ yield { 'start': start,
+ 'end': end,
+ 'blocklen': blocklen,
+ 'block': block.encode( 'string_escape' ) }
+ if remaining > 0:
+ raise Exception( "unexpected end of patch stream (%d bytes remaining)" % remaining )
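The wire framing these three generators decode can be exercised without mercurial itself: each chunk is a signed big-endian 4-byte length that counts itself, then an 80-byte header of four 20-byte hashes, then patch data, with a length <= 4 marking a null chunk. A synthetic round trip, assuming only the layout described above:

```python
import struct

def parse_chunk(buf, offset=0):
    # One changegroup chunk: 4-byte big-endian length (self-inclusive),
    # an 80-byte header (node, p1, p2, cs), then the patch payload.
    (length,) = struct.unpack_from('>l', buf, offset)
    if length <= 4:
        return None  # null chunk: end of the group
    node, p1, p2, cs = struct.unpack_from('20s20s20s20s', buf, offset + 4)
    payload = buf[offset + 84 : offset + length]
    return node, payload

# Build a synthetic chunk with fake 20-byte hashes and a 5-byte payload.
header = b'n' * 20 + b'1' * 20 + b'2' * 20 + b'c' * 20
payload = b'patch'
chunk = struct.pack('>l', 4 + len(header) + len(payload)) + header + payload
null_chunk = struct.pack('>l', 0)
```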
Repository URL: https://bitbucket.org/galaxy/galaxy-central/
--
This is a commit notification from bitbucket.org. You are receiving
this because you have the service enabled, addressing the recipient of
this email.