galaxy-dev
September 2009
- 15 participants
- 140 discussions
Hello,
About the "Galaxy as a subdirectory" section of the documentation, I have a
couple of questions.
Assuming the rewrite rules are changes to the Apache httpd.conf file, should
they be part of a Directory tag, and should the directory specified just be
the path to the galaxy_dist directory in the main server configuration?
Also, to change the port from 8080 to another port, are the only changes
necessary the following rewrite rule: RewriteRule ^/galaxy(.*)
http://localhost:8080$1 [P] and "port=8080" in the [server:main] section of
the universe_wsgi.ini file?
Thanks,
Kimberly
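For context, a sketch of the kind of httpd.conf stanza the question is asking about. The path and the Apache 2.2-style access directives are assumptions for illustration, not a verified configuration:

```apache
# Sketch only: adjust the path to your galaxy_dist checkout.
<Directory "/home/galaxy/galaxy_dist">
    Order allow,deny
    Allow from all
</Directory>

RewriteEngine on
# Proxy /galaxy requests to the Galaxy server. The port here must match
# the "port" setting in the [server:main] section of universe_wsgi.ini,
# so moving Galaxy to another port means changing both places together.
RewriteRule ^/galaxy(.*) http://localhost:8080$1 [P]
```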
28 Sep '09
Matthias Dodt wrote:
> We would like to integrate a new data source into our Galaxy system. We
> want data to be inserted into Galaxy by copying it directly from
> fileservers instead of uploading it through an HTTP connection, maybe
> via some browser-based "explorer" that gives access to certain folders.
> Is there any built-in Galaxy feature which may support something like this?
Hi Matthias,
I have relocated this question to galaxy-dev since it deals with local
implementation.
This is available now via the 'library_import_dir' and
'user_library_import_dir' options in universe_wsgi.ini.
library_import_dir: A directory containing subdirectories, from which
all contents may be uploaded into a data library via the "upload a
directory" version of the upload form. We often symlink files to these
subdirectories to avoid double copies.
user_library_import_dir: A directory containing subdirectories named for
Galaxy user email addresses, which work as above except that regular
users may upload from these directories to libraries (assuming they have
"add to library" permissions).
--nate
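A sketch of how the two options fit together. The paths and the example user directory are illustrative assumptions, not values from the original message:

```ini
# universe_wsgi.ini -- illustrative values only
library_import_dir = /data/galaxy/library_import
user_library_import_dir = /data/galaxy/user_library_import

# Expected layout:
#   /data/galaxy/library_import/<subdir>/...             files an admin can
#                                                        import via "upload a
#                                                        directory"
#   /data/galaxy/user_library_import/user@example.org/   files that user can
#                                                        import themselves
```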
Hi,
some minor suggestion: could you place the config of the tool_runner
(local, sge, etc.) into tool_conf.xml, or even into the tool's XML file?
To check whether a tool is running locally, I have to look up the location
of the tool's XML file in tool_conf.xml, then find the tool's id in that
file, and then look in universe_wsgi.ini to find its tool_runner.
Quite confusing
regards, Andreas
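The three-step lookup Andreas describes can be sketched in Python. The file layout, the `galaxy:tool_runners` section name, and the `local:///` fallback are assumptions for illustration, not the actual Galaxy code:

```python
import xml.etree.ElementTree as ET
from configparser import ConfigParser

def find_tool_runner(tool_id, tool_conf="tool_conf.xml", ini="universe_wsgi.ini"):
    """Resolve a tool id to its runner by walking the three files Andreas lists."""
    # Step 1: tool_conf.xml tells us where each tool's own XML file lives.
    tool_file = None
    for tool in ET.parse(tool_conf).getroot().iter("tool"):
        path = tool.get("file")
        # Step 2: the tool's id is declared inside its own XML file.
        if path and ET.parse(path).getroot().get("id") == tool_id:
            tool_file = path
            break
    if tool_file is None:
        raise KeyError("no tool with id %r in %s" % (tool_id, tool_conf))
    # Step 3: universe_wsgi.ini maps tool ids to runners; anything not
    # listed falls back to the default (assumed local) runner.
    cfg = ConfigParser()
    cfg.read(ini)
    section = "galaxy:tool_runners"
    if cfg.has_section(section) and cfg.has_option(section, tool_id):
        return cfg.get(section, tool_id)
    return "local:///"
```

Three files touched for one question, which is exactly the indirection being complained about.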
25 Sep '09
details: http://www.bx.psu.edu/hg/galaxy/rev/a2849c4b7219
changeset: 2776:a2849c4b7219
user: rc
date: Fri Sep 25 11:02:02 2009 -0400
description:
Fixed a request grids bug. Filters are working now.
2 file(s) affected in this change:
lib/galaxy/web/controllers/requests.py
lib/galaxy/web/controllers/requests_admin.py
diffs (47 lines):
diff -r 200e243a28e0 -r a2849c4b7219 lib/galaxy/web/controllers/requests.py
--- a/lib/galaxy/web/controllers/requests.py Fri Sep 25 10:45:10 2009 -0400
+++ b/lib/galaxy/web/controllers/requests.py Fri Sep 25 11:02:02 2009 -0400
@@ -51,7 +51,11 @@
def get_request_type(self, trans, request):
return request.type.name
def apply_default_filter( self, trans, query, **kwargs ):
- return query.filter_by( user=trans.user )
+ query = query.filter_by( user=trans.user )
+ if self.default_filter:
+ return query.filter_by( **self.default_filter )
+ else:
+ return query
def number_of_samples(self, trans, request):
return str(len(request.samples))
def get_state(self, trans, request):
@@ -475,6 +479,7 @@
params = util.Params( kwd )
lib_id = params.get( 'library_id', 'none' )
# if editing a request
+ selected_lib = None
if request and lib_id == 'none':
if request.library:
lib_id = str(request.library.id)
diff -r 200e243a28e0 -r a2849c4b7219 lib/galaxy/web/controllers/requests_admin.py
--- a/lib/galaxy/web/controllers/requests_admin.py Fri Sep 25 10:45:10 2009 -0400
+++ b/lib/galaxy/web/controllers/requests_admin.py Fri Sep 25 11:02:02 2009 -0400
@@ -54,6 +54,11 @@
return request_type.name
def number_of_samples(self, trans, request):
return str(len(request.samples))
+ def apply_default_filter( self, trans, query, **kwargs ):
+ if self.default_filter:
+ return query.filter_by( **self.default_filter )
+ else:
+ return query
class Requests( BaseController ):
request_grid = RequestsListGrid()
@@ -721,6 +726,7 @@
params = util.Params( kwd )
lib_id = params.get( 'library_id', 'none' )
# if editing a request
+ selected_lib = None
if request and lib_id == 'none':
if request.library:
lib_id = str(request.library.id)
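The pattern in this change, always applying the user restriction first and then layering the grid's optional default_filter on top, can be illustrated with a minimal stand-in for the SQLAlchemy query object. The `Query` class below is a toy, not Galaxy's:

```python
class Query:
    """Toy stand-in for an SQLAlchemy query: filter_by narrows a list of dicts."""
    def __init__(self, rows):
        self.rows = rows
    def filter_by(self, **kw):
        keep = [r for r in self.rows
                if all(r.get(k) == v for k, v in kw.items())]
        return Query(keep)

def apply_default_filter(query, user, default_filter=None):
    # Always restrict to the current user...
    query = query.filter_by(user=user)
    # ...then layer the grid's default filter on top, if one is set.
    # The bug being fixed: the old code returned after the user filter
    # alone, so default_filter was never applied.
    if default_filter:
        return query.filter_by(**default_filter)
    return query
```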
25 Sep '09
details: http://www.bx.psu.edu/hg/galaxy/rev/6a86a558f405
changeset: 2777:6a86a558f405
user: Greg Von Kuster <greg(a)bx.psu.edu>
date: Fri Sep 25 11:32:47 2009 -0400
description:
Fix a bug in the security agent where multiple roles associated with the same permission on the same object were not properly handled.
1 file(s) affected in this change:
lib/galaxy/security/__init__.py
diffs (100 lines):
diff -r a2849c4b7219 -r 6a86a558f405 lib/galaxy/security/__init__.py
--- a/lib/galaxy/security/__init__.py Fri Sep 25 11:02:02 2009 -0400
+++ b/lib/galaxy/security/__init__.py Fri Sep 25 11:32:47 2009 -0400
@@ -33,7 +33,7 @@
def get_actions( self ):
"""Get all permitted actions as a list of Action objects"""
return self.permitted_actions.__dict__.values()
- def get_item_action( self, action, item ):
+ def get_item_actions( self, action, item ):
raise 'No valid method of retrieving action (%s) for item %s.' % ( action, item )
def guess_derived_permissions_for_datasets( self, datasets = [] ):
raise "Unimplemented Method"
@@ -102,10 +102,15 @@
Returns true when user has permission to perform an action on an
instance of Dataset.
"""
- dataset_action = self.get_item_action( action, dataset )
- if dataset_action is None:
+ dataset_actions = self.get_item_actions( action, dataset )
+ if not dataset_actions:
return action.model == 'restrict'
- return dataset_action.role in roles
+ ret_val = False
+ for dataset_action in dataset_actions:
+ if dataset_action.role in roles:
+ ret_val = True
+ break
+ return ret_val
def can_access_dataset( self, roles, dataset ):
return self.allow_dataset_action( roles, self.permitted_actions.DATASET_ACCESS, dataset )
def can_manage_dataset( self, roles, dataset ):
@@ -120,23 +125,25 @@
# All permissions are granted, so non-users cannot have permissions
return False
# Check to see if user has access to any of the roles associated with action
- item_action = self.get_item_action( action, item )
- if item_action is None:
+ item_actions = self.get_item_actions( action, item )
+ if not item_actions:
# All permissions are granted, so item must have action
return False
- return item_action.role in roles
+ ret_val = False
+ for item_action in item_actions:
+ if item_action.role in roles:
+ ret_val = True
+ break
+ return ret_val
def can_add_library_item( self, user, roles, item ):
return self.allow_library_item_action( user, roles, self.permitted_actions.LIBRARY_ADD, item )
def can_modify_library_item( self, user, roles, item ):
return self.allow_library_item_action( user, roles, self.permitted_actions.LIBRARY_MODIFY, item )
def can_manage_library_item( self, user, roles, item ):
return self.allow_library_item_action( user, roles, self.permitted_actions.LIBRARY_MANAGE, item )
- def get_item_action( self, action, item ):
+ def get_item_actions( self, action, item ):
# item must be one of: Dataset, Library, LibraryFolder, LibraryDataset, LibraryDatasetDatasetAssociation
- for permission in item.actions:
- if permission.action == action.action:
- return permission
- return None
+ return [ permission for permission in item.actions if permission.action == action.action ]
def guess_derived_permissions_for_datasets( self, datasets=[] ):
"""Returns a dict of { action : [ role, role, ... ] } for the output dataset based upon provided datasets"""
perms = {}
@@ -427,12 +434,12 @@
on library_item. Otherwise, cycle through all sub-folders in library_item until one is found that meets
this criteria, if it exists. This method does not necessarily scan the entire library as it returns
when it finds the first library_item that allows user to perform any one action in actions_to_check.
- """
+ """
for action in actions_to_check:
if self.allow_library_item_action( user, roles, action, library_item ):
return True, hidden_folder_ids
if isinstance( library_item, self.model.Library ):
- return self.show_library_item( user, roles, library_item.root_folder, actions_to_check, hidden_folder_ids=hidden_folder_ids )
+ return self.show_library_item( user, roles, library_item.root_folder, actions_to_check, hidden_folder_ids='' )
if isinstance( library_item, self.model.LibraryFolder ):
for folder in library_item.active_folders:
can_show, hidden_folder_ids = self.show_library_item( user, roles, folder, actions_to_check, hidden_folder_ids=hidden_folder_ids )
@@ -525,13 +532,14 @@
.options( eagerload_all( "dataset.actions" ) ) \
.all()
for ldda in lddas:
- ldda_access = self.get_item_action( action, ldda.dataset )
- if ldda_access is None:
+ ldda_access_permissions = self.get_item_actions( action, ldda.dataset )
+ if not ldda_access_permissions:
# Dataset is public
return True, hidden_folder_ids
- if ldda_access.role in roles:
- # The current user has access permission on the dataset
- return True, hidden_folder_ids
+ for ldda_access_permission in ldda_access_permissions:
+ if ldda_access_permission.role in roles:
+ # The current user has access permission on the dataset
+ return True, hidden_folder_ids
for sub_folder in folder.active_folders:
can_access, hidden_folder_ids = self.check_folder_contents( user, roles, sub_folder, hidden_folder_ids=hidden_folder_ids )
if can_access:
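The core of this fix is replacing a first-match lookup (`get_item_action`) with a collect-all lookup (`get_item_actions`), then granting access if any of the returned permissions carries a role the user holds. With the old code, a user whose role matched the second of two permission rows was wrongly denied. Reduced to its essentials, with plain tuples standing in for Galaxy's model objects:

```python
def allow_action(user_roles, action, item_permissions, default_allow):
    """Permission rows are (action, role) tuples; any matching role grants access.

    default_allow is what happens when no rows exist for the action at all
    (in Galaxy this depends on whether the action's model is 'restrict').
    """
    # Collect every matching row, not just the first one found.
    matching = [role for act, role in item_permissions if act == action]
    if not matching:
        return default_allow
    return any(role in user_roles for role in matching)
```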
details: http://www.bx.psu.edu/hg/galaxy/rev/30fde8c9b03e
changeset: 2774:30fde8c9b03e
user: jeremy goecks <jeremy.goecks at emory.edu>
date: Fri Sep 25 09:20:26 2009 -0400
description:
Ensure that bool filters work in grids
1 file(s) affected in this change:
lib/galaxy/web/framework/helpers/grids.py
diffs (15 lines):
diff -r 5db45e48f954 -r 30fde8c9b03e lib/galaxy/web/framework/helpers/grids.py
--- a/lib/galaxy/web/framework/helpers/grids.py Fri Sep 25 02:06:03 2009 -0400
+++ b/lib/galaxy/web/framework/helpers/grids.py Fri Sep 25 09:20:26 2009 -0400
@@ -56,7 +56,10 @@
query = column.filter( trans.sa_session, query, column_filter )
# Upate current filter dict.
cur_filter_dict[ column.key ] = column_filter
- # Carry filter along to newly generated urls.
+ # Carry filter along to newly generated urls; make sure filter is a string so
+ # that we can encode to UTF-8 and thus handle user input to filters.
+ if not isinstance( column_filter, basestring ):
+ column_filter = unicode(column_filter)
extra_url_args[ "f-" + column.key ] = column_filter.encode("utf-8")
# Process sort arguments
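A Python 3 analogue of this fix (the original uses Python 2's `basestring`/`unicode`): coerce non-string filter values, such as booleans, to strings before encoding them into the generated URL arguments:

```python
def encode_filter_arg(key, column_filter):
    """Carry a grid filter into generated URLs; non-strings become strings first
    so that the UTF-8 encode below cannot fail on e.g. a bool filter value."""
    if not isinstance(column_filter, str):
        column_filter = str(column_filter)
    return "f-" + key, column_filter.encode("utf-8")
```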
25 Sep '09
details: http://www.bx.psu.edu/hg/galaxy/rev/200e243a28e0
changeset: 2775:200e243a28e0
user: gua110
date: Fri Sep 25 10:45:10 2009 -0400
description:
Fix functional test for fastx_artifacts_filter tool.
1 file(s) affected in this change:
tools/fastx_toolkit/fastx_artifacts_filter.xml
diffs (21 lines):
diff -r 30fde8c9b03e -r 200e243a28e0 tools/fastx_toolkit/fastx_artifacts_filter.xml
--- a/tools/fastx_toolkit/fastx_artifacts_filter.xml Fri Sep 25 09:20:26 2009 -0400
+++ b/tools/fastx_toolkit/fastx_artifacts_filter.xml Fri Sep 25 10:45:10 2009 -0400
@@ -3,7 +3,7 @@
<command>zcat -f '$input' | fastx_artifacts_filter -v -o "$output"</command>
<inputs>
- <param format="fasta,fastqsanger" name="input" type="data" label="Library to filter" />
+ <param format="fasta,fastqsanger,fastqsolexa" name="input" type="data" label="Library to filter" />
</inputs>
@@ -15,7 +15,7 @@
</test>
<test>
<!-- Filter FASTQ file -->
- <param name="input" value="fastx_artifacts2.fastq" />
+ <param name="input" value="fastx_artifacts2.fastq" ftype="fastqsanger" />
<output name="output" file="fastx_artifacts2.out" />
</test>
</tests>
details: http://www.bx.psu.edu/hg/galaxy/rev/5db45e48f954
changeset: 2773:5db45e48f954
user: Kanwei Li <kanwei(a)gmail.com>
date: Fri Sep 25 02:06:03 2009 -0400
description:
fix some typos for tools in folders N-Z
34 file(s) affected in this change:
tools/new_operations/basecoverage.xml
tools/new_operations/cluster.xml
tools/new_operations/complement.xml
tools/new_operations/concat.xml
tools/new_operations/coverage.xml
tools/new_operations/get_flanks.xml
tools/new_operations/intersect.xml
tools/new_operations/join.xml
tools/new_operations/merge.xml
tools/new_operations/subtract.xml
tools/regVariation/microsats_alignment_level.xml
tools/regVariation/quality_filter.xml
tools/regVariation/substitution_rates.xml
tools/samtools/pileup_parser.xml
tools/samtools/sam2interval.xml
tools/samtools/sam_bitwise_flag_filter.xml
tools/samtools/sam_pileup.xml
tools/samtools/sam_to_bam.xml
tools/solid_tools/maq_cs_wrapper.xml
tools/solid_tools/solid_qual_boxplot.xml
tools/sr_mapping/bowtie_wrapper.xml
tools/sr_mapping/bwa_wrapper.xml
tools/sr_mapping/fastq_statistics.xml
tools/sr_mapping/lastz_wrapper.xml
tools/stats/aggregate_binned_scores_in_intervals.xml
tools/stats/filtering.xml
tools/stats/gsummary.xml
tools/stats/wiggle_to_simple.xml
tools/taxonomy/find_diag_hits.xml
tools/taxonomy/gi2taxonomy.xml
tools/taxonomy/t2ps_wrapper.xml
tools/taxonomy/t2t_report.xml
tools/visualization/LAJ.xml
tools/visualization/genetrack.xml
diffs (744 lines):
diff -r 210e048e7ec7 -r 5db45e48f954 tools/new_operations/basecoverage.xml
--- a/tools/new_operations/basecoverage.xml Fri Sep 25 00:39:29 2009 -0400
+++ b/tools/new_operations/basecoverage.xml Fri Sep 25 02:06:03 2009 -0400
@@ -24,7 +24,7 @@
.. class:: infomark
-**TIP:** If your query does not appear in the pulldown menu -> it is not in interval format. Use "edit attributes" to set chromosome, start, end, and strand columns
+**TIP:** If your query does not appear in the pulldown menu, it means that it is not in interval format. Use "edit attributes" to set chromosome, start, end, and strand columns.
This operation counts the total bases covered by a set of intervals. Bases that are covered by more than one interval are **not** counted more than once towards the total.
diff -r 210e048e7ec7 -r 5db45e48f954 tools/new_operations/cluster.xml
--- a/tools/new_operations/cluster.xml Fri Sep 25 00:39:29 2009 -0400
+++ b/tools/new_operations/cluster.xml Fri Sep 25 02:06:03 2009 -0400
@@ -59,7 +59,7 @@
.. class:: infomark
-**TIP:** If your query does not appear in the pulldown menu -> it is not in interval format. Use "edit attributes" to set chromosome, start, end, and strand columns
+**TIP:** If your query does not appear in the pulldown menu, it means that it is not in interval format. Use "edit attributes" to set chromosome, start, end, and strand columns.
-----
diff -r 210e048e7ec7 -r 5db45e48f954 tools/new_operations/complement.xml
--- a/tools/new_operations/complement.xml Fri Sep 25 00:39:29 2009 -0400
+++ b/tools/new_operations/complement.xml Fri Sep 25 02:06:03 2009 -0400
@@ -33,7 +33,7 @@
.. class:: infomark
-**TIP:** If your query does not appear in the pulldown menu -> it is not in interval format. Use "edit attributes" to set chromosome, start, end, and strand columns
+**TIP:** If your query does not appear in the pulldown menu, it means that it is not in interval format. Use "edit attributes" to set chromosome, start, end, and strand columns.
This operation complements the regions of a set of intervals. Regions are returned that represent the empty space in the input interval.
diff -r 210e048e7ec7 -r 5db45e48f954 tools/new_operations/concat.xml
--- a/tools/new_operations/concat.xml Fri Sep 25 00:39:29 2009 -0400
+++ b/tools/new_operations/concat.xml Fri Sep 25 02:06:03 2009 -0400
@@ -27,7 +27,7 @@
.. class:: infomark
-**TIP:** If your query does not appear in the pulldown menu -> it is not in interval format. Use "edit attributes" to set chromosome, start, end, and strand columns
+**TIP:** If your query does not appear in the pulldown menu -> it is not in interval format. Use "edit attributes" to set chromosome, start, end, and strand columns.
-----
diff -r 210e048e7ec7 -r 5db45e48f954 tools/new_operations/coverage.xml
--- a/tools/new_operations/coverage.xml Fri Sep 25 00:39:29 2009 -0400
+++ b/tools/new_operations/coverage.xml Fri Sep 25 02:06:03 2009 -0400
@@ -34,7 +34,7 @@
.. class:: infomark
-**TIP:** If your query does not appear in the pulldown menu -> it is not in interval format. Use "edit attributes" to set chromosome, start, end, and strand columns
+**TIP:** If your query does not appear in the pulldown menu -> it is not in interval format. Use "edit attributes" to set chromosome, start, end, and strand columns.
Find the coverage of intervals in the first query on intervals in the second query. The coverage is added as two columns, the first being bases covered, and the second being the fraction of bases covered by that interval.
diff -r 210e048e7ec7 -r 5db45e48f954 tools/new_operations/get_flanks.xml
--- a/tools/new_operations/get_flanks.xml Fri Sep 25 00:39:29 2009 -0400
+++ b/tools/new_operations/get_flanks.xml Fri Sep 25 02:06:03 2009 -0400
@@ -41,7 +41,7 @@
</tests>
<help>
-This tool finds the upstream and/or downstream flanking region/s of all the selected regions in the input file.
+This tool finds the upstream and/or downstream flanking region(s) of all the selected regions in the input file.
**Note:** Every line should contain at least 3 columns: Chromosome number, Start and Stop co-ordinates. If any of these columns is missing or if start and stop co-ordinates are not numerical, the tool may encounter exceptions and such lines are skipped as invalid. The number of invalid skipped lines is documented in the resulting history item as a "Data issue".
diff -r 210e048e7ec7 -r 5db45e48f954 tools/new_operations/intersect.xml
--- a/tools/new_operations/intersect.xml Fri Sep 25 00:39:29 2009 -0400
+++ b/tools/new_operations/intersect.xml Fri Sep 25 02:06:03 2009 -0400
@@ -21,27 +21,27 @@
<data format="input" name="output" metadata_source="input1" />
</outputs>
<code file="operation_filter.py"/>
- <tests>
- <test>
- <param name="input1" value="1.bed" />
- <param name="input2" value="2.bed" />
- <param name="min" value="1" />
- <param name="returntype" value="" />
- <output name="output" file="gops_intersect_out.bed" />
+ <tests>
+ <test>
+ <param name="input1" value="1.bed" />
+ <param name="input2" value="2.bed" />
+ <param name="min" value="1" />
+ <param name="returntype" value="" />
+ <output name="output" file="gops_intersect_out.bed" />
</test>
<test>
<param name="input1" value="1.bed" />
<param name="input2" value="2_mod.bed" ftype="interval"/>
<param name="min" value="1" />
- <param name="returntype" value="" />
+ <param name="returntype" value="" />
<output name="output" file="gops_intersect_diffCols.bed" />
- </test>
- <test>
- <param name="input1" value="1.bed" />
- <param name="input2" value="2_mod.bed" ftype="interval"/>
- <param name="min" value="1" />
- <param name="returntype" value="Overlapping pieces of Intervals" />
- <output name="output" file="gops_intersect_p_diffCols.bed" />
+ </test>
+ <test>
+ <param name="input1" value="1.bed" />
+ <param name="input2" value="2_mod.bed" ftype="interval"/>
+ <param name="min" value="1" />
+ <param name="returntype" value="Overlapping pieces of Intervals" />
+ <output name="output" file="gops_intersect_p_diffCols.bed" />
</test>
<test>
<param name="input1" value="1.bed" />
@@ -76,7 +76,7 @@
.. class:: infomark
-**TIP:** If your query does not appear in the pulldown menu -> it is not in interval format. Use "edit attributes" to set chromosome, start, end, and strand columns
+**TIP:** If your query does not appear in the pulldown menu, it means that it is not in interval format. Use "edit attributes" to set chromosome, start, end, and strand columns.
-----
diff -r 210e048e7ec7 -r 5db45e48f954 tools/new_operations/join.xml
--- a/tools/new_operations/join.xml Fri Sep 25 00:39:29 2009 -0400
+++ b/tools/new_operations/join.xml Fri Sep 25 02:06:03 2009 -0400
@@ -70,7 +70,7 @@
.. class:: infomark
-**TIP:** If your query does not appear in the pulldown menu -> it is not in interval format. Use "edit attributes" to set chromosome, start, end, and strand columns
+**TIP:** If your query does not appear in the pulldown menu, it means that it is not in interval format. Use "edit attributes" to set chromosome, start, end, and strand columns.
-----
@@ -85,7 +85,7 @@
**Syntax**
- **Where overlap** specifies the minimum overlap between intervals that allows them to be joined.
-- **Return only records that are joined** returns only the records of the first query that join to a recond in the second query. This is analogous to an INNER JOIN.
+- **Return only records that are joined** returns only the records of the first query that join to a record in the second query. This is analogous to an INNER JOIN.
- **Return all records of first query (fill null with ".")** returns all intervals of the first query, and any intervals that do not join an interval from the second query are filled in with a period(.). This is analogous to a LEFT JOIN.
- **Return all records of second query (fill null with ".")** returns all intervals of the second query, and any intervals that do not join an interval from the first query are filled in with a period(.). **Note that this may produce an invalid interval file, since a period(.) is not a valid chrom, start, end or strand.**
- **Return all records of both queries (fill nulls with ".")** returns all records from both queries, and fills on either the right or left with periods. **Note that this may produce an invalid interval file, since a period(.) is not a valid chrom, start, end or strand.**
diff -r 210e048e7ec7 -r 5db45e48f954 tools/new_operations/merge.xml
--- a/tools/new_operations/merge.xml Fri Sep 25 00:39:29 2009 -0400
+++ b/tools/new_operations/merge.xml Fri Sep 25 02:06:03 2009 -0400
@@ -36,7 +36,7 @@
.. class:: infomark
-**TIP:** If your query does not appear in the pulldown menu -> it is not in interval format. Use "edit attributes" to set chromosome, start, end, and strand columns
+**TIP:** If your query does not appear in the pulldown menu, it means that it is not in interval format. Use "edit attributes" to set chromosome, start, end, and strand columns.
-----
@@ -48,7 +48,7 @@
-----
-This operation merges all overlaping intervals into single intervals.
+This operation merges all overlapping intervals into single intervals.
**Example**
diff -r 210e048e7ec7 -r 5db45e48f954 tools/new_operations/subtract.xml
--- a/tools/new_operations/subtract.xml Fri Sep 25 00:39:29 2009 -0400
+++ b/tools/new_operations/subtract.xml Fri Sep 25 02:06:03 2009 -0400
@@ -58,7 +58,7 @@
.. class:: infomark
-**TIP:** If your query does not appear in the pulldown menu -> it is not in interval format. Use "edit attributes" to set chromosome, start, end, and strand columns
+**TIP:** If your query does not appear in the pulldown menu, it means that it is not in interval format. Use "edit attributes" to set chromosome, start, end, and strand columns.
-----
diff -r 210e048e7ec7 -r 5db45e48f954 tools/regVariation/microsats_alignment_level.xml
--- a/tools/regVariation/microsats_alignment_level.xml Fri Sep 25 00:39:29 2009 -0400
+++ b/tools/regVariation/microsats_alignment_level.xml Fri Sep 25 02:06:03 2009 -0400
@@ -6,8 +6,8 @@
<inputs>
<page>
<param format="fasta" name="input1" type="data" label="Select data"/>
- <param name="separation" size="10" type="integer" value="10" label="Minimum basepair distance between adjacent microsatellites"
- help="A value of 10 means: Adjacent microsatellites separated by less than 10 basepairs will be excluded from the output."/>
+ <param name="separation" size="10" type="integer" value="10" label="Minimum base pair distance between adjacent microsatellites"
+ help="A value of 10 means: Adjacent microsatellites separated by less than 10 base pairs will be excluded from the output."/>
<param name="mono_threshold" size="10" type="integer" value="9" label="Minimum Threshold for the number of repeats for mononucleotide microsatellites"
help="A value of 9 means: All mononucleotide microsatellites having fewer than 9 repeats will be excluded from the output."/>
<param name="non_mono_threshold" size="10" type="integer" value="4" label="Minimum Threshold for the number of repeats for non-mononucleotide microsatellites"
diff -r 210e048e7ec7 -r 5db45e48f954 tools/regVariation/quality_filter.xml
--- a/tools/regVariation/quality_filter.xml Fri Sep 25 00:39:29 2009 -0400
+++ b/tools/regVariation/quality_filter.xml Fri Sep 25 02:06:03 2009 -0400
@@ -91,8 +91,8 @@
**Note**
-Any block/s not containing the primary species(species whose quality scores is to be used), will be omitted.
-Also, any primary species whose quality scores are not available in galaxy, will be considered as a non-primary species. This info will appear as a message in the job history panel.
+Any block/s not containing the primary species (species whose quality scores is to be used), will be omitted.
+Also, any primary species whose quality scores are not available in Galaxy will be considered as a non-primary species. This info will appear as a message in the job history panel.
-----
diff -r 210e048e7ec7 -r 5db45e48f954 tools/regVariation/substitution_rates.xml
--- a/tools/regVariation/substitution_rates.xml Fri Sep 25 00:39:29 2009 -0400
+++ b/tools/regVariation/substitution_rates.xml Fri Sep 25 02:06:03 2009 -0400
@@ -43,7 +43,7 @@
**What it does**
-This tool takes a pairwise MAF file as input and estimates substitution rate according to Jukes-Cantor JC69 model. The 3 new columns appended to the output are explanied below:
+This tool takes a pairwise MAF file as input and estimates substitution rate according to Jukes-Cantor JC69 model. The 3 new columns appended to the output are explained below:
- L: number of nucleotides compared
- N: number of different nucleotides
diff -r 210e048e7ec7 -r 5db45e48f954 tools/samtools/pileup_parser.xml
--- a/tools/samtools/pileup_parser.xml Fri Sep 25 00:39:29 2009 -0400
+++ b/tools/samtools/pileup_parser.xml Fri Sep 25 02:06:03 2009 -0400
@@ -99,18 +99,18 @@
</tests>
<help>
-**What is does**
+**What it does**
-Allows to find sequence variants and/or sites covered by specified number of reads with bases above a set quality threshold. The tool works on six and ten column pileup formats produced with *samtools pileup* command. However, it also allows you to specify columns in the input file manually. The tool assumes the following:
+Allows one to find sequence variants and/or sites covered by a specified number of reads with bases above a set quality threshold. The tool works on six and ten column pileup formats produced with *samtools pileup* command. However, it also allows you to specify columns in the input file manually. The tool assumes the following:
- the quality scores follow phred33 convention, where input qualities are ASCII characters equal to the Phred quality plus 33.
-- the pileup dataset was produced by *samtools pileup* command (although you can override this by setting column assignments manually).
+- the pileup dataset was produced by the *samtools pileup* command (although you can override this by setting column assignments manually).
--------
**Types of pileup datasets**
-The description of pileup format below is largely based on information that can be found on SAMTools_ documentation page. The 6- and 10-column variants are described below.
+The descriptions of the following pileup formats are largely based on information that can be found on the SAMTools_ documentation page. The 6- and 10-column variants are described below.
.. _SAMTools: http://samtools.sourceforge.net/pileup.shtml
@@ -136,7 +136,7 @@
**Ten column pileup**
-The `ten-column`__ pileup incoroporates additional consensus information generated with *-c* option of *samtools pileup* command::
+The `ten-column`__ pileup incorporates additional consensus information generated with the *-c* option of the *samtools pileup* command::
1 2 3 4 5 6 7 8 9 10
@@ -187,7 +187,7 @@
chrM 414 C 4 ...a III2
chrM 415 C 4 TTTt III7
-you will get this::
+you will get::
chrM 413 G 4 ..t, IIIH 0 0 0 1 3
chrM 415 C 4 TTTt III7 0 0 0 4 4
@@ -232,26 +232,26 @@
12 Quality adjusted coverage
-Note that in this case coordinates of SNPs were converted to intervals, where the start coordinate is 0-based and the end coordinate in 1-based using the UCSC Table Browser convention.
+Note that in this case the coordinates of SNPs were converted to intervals, where the start coordinate is 0-based and the end coordinate in 1-based using the UCSC Table Browser convention.
-Although three positions have variants in the original file (413, 414, and 415), only 413 and 415 are reported, because the quality values associated with these two SNPs are above threshold of 20. In the case of 414 the **a** allele has quality value of 17 ( ord("2")-33 ), and therefore it is not reported. In each of the reported lines the program added five columns. Let's take a look at this line::
+Although three positions have variants in the original file (413, 414, and 415), only 413 and 415 are reported because the quality values associated with these two SNPs are above the threshold of 20. In the case of 414 the **a** allele has a quality value of 17 ( ord("2")-33 ), and is therefore not reported. Note that five columns have been added to each of the reported lines::
chrM 413 G 4 ..t, IIIH 0 0 0 1 3
-here there is one variant, and it is a **t**. Because the fourth column represents **T** counts, it is incremented by 1. The last column shows that at this position three reads has bases above the quality threshold of 20.
+Here, there is one variant, **t**. Because the fourth column represents **T** counts, it is incremented by 1. The last column shows that at this position, three reads have bases above the quality threshold of 20.
-----
**Example 1**: Just variants
-In this mode the tool outputs only those lines from the input datasets where at least one read contains a sequence variant with quality above the limit set by the **Do not consider read bases with quality lower than** option. For example, suppose one has a pileup dataset like this::
+In this mode, the tool only outputs the lines from the input datasets where at least one read contains a sequence variant with quality above the threshold set by the **Do not consider read bases with quality lower than** option. For example, suppose one has a pileup dataset like the following::
chrM 412 A 2 ., II
chrM 413 G 4 ..t, III2
chrM 414 C 4 ...a III2
chrM 415 C 4 TTTt III7
-to call all variants (with no restriction by coverage) with quality above phred value of 20 we will need to set parameters as follows:
+To call all variants (with no restriction by coverage) with quality above phred value of 20, we will need to set the parameters as follows:
.. image:: ../static/images/pileup_parser_help1.png
@@ -260,13 +260,13 @@
chrM 413 G 4 ..t, IIIH 0 0 0 1 3
chrM 415 C 4 TTTt III7 0 0 0 4 4
-**Note** that position 414 is not reported because the *a* variant has associated quality value of 17 (because ord('2')-33 = 17) in is below the phred threshold 20 set by the **Count variants with quality above this value** parameter.
+**Note** that position 414 is not reported because the *a* variant has an associated quality value of 17 (because ord('2')-33 = 17), which is below the phred threshold of 20 set by the **Count variants with quality above this value** parameter.
-----
**Example 2**: Report everything
-In addition to calling variants it is often useful to know the quality adjusted coverage. Running the tool with these parameters:
+In addition to calling variants, it is often useful to know the quality adjusted coverage. Running the tool with these parameters:
.. image:: ../static/images/pileup_parser_help2.png
@@ -277,10 +277,9 @@
chrM 414 C 4 ...a III2 0 0 0 0 3
chrM 415 C 4 TTTt III7 0 0 0 4 4
-Here, for instance, you can see that although the total coverage at position 414 is 4 (column 4) the quality adjusted coverage is 3 (last column). This is because inly three reads out of four have bases with quality above the set threshold of 20 (the actual qualities are III2 or, after conversion, 40, 40, 40, 17).
+Here, you can see that although the total coverage at position 414 is 4 (column 4), the quality adjusted coverage is 3 (last column). This is because only three out of four reads have bases with quality above the set threshold of 20 (the actual qualities are III2 or, after conversion, 40, 40, 40, 17).
-Now, one can use the last column of this dataset to filter out (using Galaxy's filter tool) positions where quality adjusted coverage (last column) is below a set threshold.
-
+One can use the last column of this dataset to filter out (using Galaxy's **Filter** tool) positions where quality adjusted coverage (last column) is below a set threshold.
</help>
</tool>
diff -r 210e048e7ec7 -r 5db45e48f954 tools/samtools/sam2interval.xml
--- a/tools/samtools/sam2interval.xml Fri Sep 25 00:39:29 2009 -0400
+++ b/tools/samtools/sam2interval.xml Fri Sep 25 02:06:03 2009 -0400
@@ -31,7 +31,7 @@
**What it does**
-Converts positional information from a SAM dataset into interval format with 0-based start and 1-based end. CIGAR string of SAM format is usd to compute the end coordinate.
+Converts positional information from a SAM dataset into interval format with 0-based start and 1-based end. CIGAR string of SAM format is used to compute the end coordinate.
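The end-coordinate computation can be sketched as follows (a simplified illustration with an assumed function name; only reference-consuming CIGAR operations advance the end):

```python
import re

# Reference-consuming operations (M, D, N, and =/X in later SAM specs)
# advance the end coordinate; insertions (I) and clips (S, H) do not.
def cigar_reference_span(cigar):
    return sum(int(n) for n, op in re.findall(r"(\d+)([MIDNSHP=X])", cigar)
               if op in "MDN=X")

pos = 7                                              # 1-based POS from the SAM line
start = pos - 1                                      # 0-based interval start
end = start + cigar_reference_span("8M2I4M1D3M")     # 6 + 16 = 22
```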
-----
diff -r 210e048e7ec7 -r 5db45e48f954 tools/samtools/sam_bitwise_flag_filter.xml
--- a/tools/samtools/sam_bitwise_flag_filter.xml Fri Sep 25 00:39:29 2009 -0400
+++ b/tools/samtools/sam_bitwise_flag_filter.xml Fri Sep 25 02:06:03 2009 -0400
@@ -46,7 +46,7 @@
**What it does**
-Allows parsing SAM datasets using bitwise flag (the second column). The bits in the flag are defined as follows::
+Allows parsing of SAM datasets using bitwise flag (the second column). The bits in the flag are defined as follows::
Bit Info
------ --------------------------------------------------------------------------
@@ -67,7 +67,7 @@
Note the following:
- Flag 0x02, 0x08, 0x20, 0x40 and 0x80 are only meaningful when flag 0x01 is present.
-- If in a read pair the information on which read is the first in the pair is lost in the upstream analysis, flag 0x01 should be present and 0x40 and 0x80 are both zero.
+- If in a read pair the information on which read is the first in the pair is lost in the upstream analysis, flag 0x01 should be set, while 0x40 and 0x80 should both be zero.
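The bit tests described in these notes can be illustrated with a short Python sketch (the constant names are assumed for this example; the hex values follow the table above):

```python
FLAG_PAIRED      = 0x01   # read is paired in sequencing
FLAG_PROPER_PAIR = 0x02   # read mapped in a proper pair
FLAG_FIRST       = 0x40   # first read in the pair
FLAG_SECOND      = 0x80   # second read in the pair

def is_proper_pair(flag):
    # 0x02 is only meaningful when 0x01 is present
    return bool(flag & FLAG_PAIRED and flag & FLAG_PROPER_PAIR)

# Flags from the example reads: 163 and 83 pass, 16 (reverse strand only) does not
[is_proper_pair(f) for f in (163, 83, 16)]   # [True, True, False]
```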
-----
@@ -82,12 +82,12 @@
r003 16 ref 29 30 6H5M * 0 0 TAGGC * NM:i:0
r001 83 ref 37 30 9M = 7 -39 CAGCGCCAT *
-To select properly mapped pairs click the **Add new Flag** button and set *Read mapped in a proper pair* to **Yes**. The following two reads will be returned::
+To select properly mapped pairs, click the **Add new Flag** button and set *Read mapped in a proper pair* to **Yes**. The following two reads will be returned::
r001 163 ref 7 30 8M2I4M1D3M = 37 39 TTAGATAAAGGATACTA *
r001 83 ref 37 30 9M = 7 -39 CAGCGCCAT *
-For more information please consult the `SAM format description`__.
+For more information, please consult the `SAM format description`__.
.. __: http://www.ncbi.nlm.nih.gov/pubmed/19505943
diff -r 210e048e7ec7 -r 5db45e48f954 tools/samtools/sam_pileup.xml
--- a/tools/samtools/sam_pileup.xml Fri Sep 25 00:39:29 2009 -0400
+++ b/tools/samtools/sam_pileup.xml Fri Sep 25 02:06:03 2009 -0400
@@ -77,7 +77,7 @@
**What it does**
-Uses SAMTools_' pileup command to produce a pileup dataset from a provided BAM dataset. It generated two types of pileup datasets depending on chosen options. If *Call consensus according to MAQ model?* option is set to **No**, the tool produces simple pileup. If the option is set to **Yes**, a ten column pileup dataset with consensus is generated. Both types of datasets are briefly summarized below.
+Uses SAMTools_' pileup command to produce a pileup dataset from a provided BAM dataset. It generates two types of pileup datasets depending on the specified options. If the *Call consensus according to MAQ model?* option is set to **No**, the tool produces a simple pileup. If the option is set to **Yes**, a ten-column pileup dataset with consensus is generated. Both types of datasets are briefly summarized below.
.. _SAMTools: http://samtools.sourceforge.net/samtools.shtml
@@ -111,7 +111,7 @@
**Ten column pileup**
-The `ten-column`__ pileup incoroporates additional consensus information generated with *-c* option of *samtools pileup* command::
+The `ten-column`__ pileup incorporates additional consensus information generated with *-c* option of *samtools pileup* command::
1 2 3 4 5 6 7 8 9 10
diff -r 210e048e7ec7 -r 5db45e48f954 tools/samtools/sam_to_bam.xml
--- a/tools/samtools/sam_to_bam.xml Fri Sep 25 00:39:29 2009 -0400
+++ b/tools/samtools/sam_to_bam.xml Fri Sep 25 02:06:03 2009 -0400
@@ -51,7 +51,7 @@
**What it does**
-This tool uses the SAMTools_ toolkit to produce a indexed BAM file based on a sorted input SAM file.
+This tool uses the SAMTools_ toolkit to produce an indexed BAM file based on a sorted input SAM file.
.. _SAMTools: http://samtools.sourceforge.net/samtools.shtml
diff -r 210e048e7ec7 -r 5db45e48f954 tools/solid_tools/maq_cs_wrapper.xml
--- a/tools/solid_tools/maq_cs_wrapper.xml Fri Sep 25 00:39:29 2009 -0400
+++ b/tools/solid_tools/maq_cs_wrapper.xml Fri Sep 25 02:06:03 2009 -0400
@@ -71,7 +71,7 @@
**What it does**
-This tool maps SOLiD colour-space reads against the target genome using MAQ. It produces three output datasets:
+This tool maps SOLiD color-space reads against the target genome using MAQ. It produces three output datasets:
**ALIGNMENT INFO** : contains the read alignment information,
diff -r 210e048e7ec7 -r 5db45e48f954 tools/solid_tools/solid_qual_boxplot.xml
--- a/tools/solid_tools/solid_qual_boxplot.xml Fri Sep 25 00:39:29 2009 -0400
+++ b/tools/solid_tools/solid_qual_boxplot.xml Fri Sep 25 02:06:03 2009 -0400
@@ -26,7 +26,7 @@
* Black horizontal lines are medians
* Rectangular red boxes show the Inter-quartile Range (IQR) (top value is Q3, bottom value is Q1)
-* Whiskers show outlier at max. 1.5*IQR
+* Whiskers show outliers at max. 1.5*IQR
.. image:: ../static/images/solid_qual.png
diff -r 210e048e7ec7 -r 5db45e48f954 tools/sr_mapping/bowtie_wrapper.xml
--- a/tools/sr_mapping/bowtie_wrapper.xml Fri Sep 25 00:39:29 2009 -0400
+++ b/tools/sr_mapping/bowtie_wrapper.xml Fri Sep 25 02:06:03 2009 -0400
@@ -181,7 +181,7 @@
<param name="dcv" type="integer" value="1024" label="The period for the difference-cover sample (--dcv)" />
</when>
</conditional>
- <param name="nodc" type="select" label="Whether or not to disable the use of the difference-cover sample (--nodc)" help="Suffix sorting becomes quadratic-time in the worst case (a very repetetive reference)">
+ <param name="nodc" type="select" label="Whether or not to disable the use of the difference-cover sample (--nodc)" help="Suffix sorting becomes quadratic-time in the worst case (a very repetitive reference)">
<option value="dc">Use difference-cover sample</option>
<option value="nodc">Disable difference-cover sample</option>
</param>
diff -r 210e048e7ec7 -r 5db45e48f954 tools/sr_mapping/bwa_wrapper.xml
--- a/tools/sr_mapping/bwa_wrapper.xml Fri Sep 25 00:39:29 2009 -0400
+++ b/tools/sr_mapping/bwa_wrapper.xml Fri Sep 25 02:06:03 2009 -0400
@@ -141,7 +141,7 @@
<param name="mismatchPenalty" type="integer" value="3" label="Mismatch penalty" help="BWA will not search for suboptimal hits with a score lower than [value]" />
<param name="gapOpenPenalty" type="integer" value="11" label="Gap open penalty" />
<param name="gapExtensPenalty" type="integer" value="4" label="Gap extension penalty" />
- <param name="colorSpaceRev" type="select" label="Reverse query but don't compement it" help="Reverse query for all alignment in color space">
+ <param name="colorSpaceRev" type="select" label="Reverse query but don't complement it" help="Reverse query for all alignment in color space">
<option value="false">Don't reverse query</option>
<option value="true">Reverse query</option>
</param>
@@ -293,7 +293,7 @@
**What it does**
-**BWA** is a high performance sequence aligner that succeeds MAQ. It is based on BWT-SW but uses a completely different algorithm, and it is aimed toward short read alignments. It is fast--it can map the human genome in only 15-25 minutes. Heng Li of the Sanger Institute wrote the majority of the code, with contributions by Chi-Kwong Wong at the University of Hong Kong, Nong Ge at Sun Yat-Sen University, and Yuta Mori.
+**BWA** is a high performance sequence aligner that succeeds MAQ. It is based on BWT-SW but uses a completely different algorithm and is aimed towards short read alignments. It is fast--it can map the human genome in only 15-25 minutes. Heng Li of the Sanger Institute wrote the majority of the code, with contributions by Chi-Kwong Wong at the University of Hong Kong, Nong Ge at Sun Yat-Sen University, and Yuta Mori.
------
diff -r 210e048e7ec7 -r 5db45e48f954 tools/sr_mapping/fastq_statistics.xml
--- a/tools/sr_mapping/fastq_statistics.xml Fri Sep 25 00:39:29 2009 -0400
+++ b/tools/sr_mapping/fastq_statistics.xml Fri Sep 25 02:06:03 2009 -0400
@@ -2,7 +2,7 @@
<description>for Solexa file</description>
<command>cat $input | solexa_quality_statistics -o $output</command>
<inputs>
- <param format="fastqsolexa" name="input" type="data" label="Library to analyse" />
+ <param format="fastqsolexa" name="input" type="data" label="Library to analyze" />
</inputs>
<outputs>
<data format="txt" name="output" />
diff -r 210e048e7ec7 -r 5db45e48f954 tools/sr_mapping/lastz_wrapper.xml
--- a/tools/sr_mapping/lastz_wrapper.xml Fri Sep 25 00:39:29 2009 -0400
+++ b/tools/sr_mapping/lastz_wrapper.xml Fri Sep 25 02:06:03 2009 -0400
@@ -76,20 +76,20 @@
<param name="max_ident" type="integer" size="3" value="100" label="Do not report matches above this identity (%)"/>
<param name="min_cvrg" type="integer" size="3" value="0" label="Do not report matches that cover less than this fraction (%) of each read"/>
</inputs>
- <outputs>
+ <outputs>
<data format="tabular" name="output1">
<change_format>
<when input="out_format" value="maf" format="maf" />
</change_format>
</data>
- <data format="coverage" name="output2" />
+ <data format="coverage" name="output2" />
</outputs>
<requirements>
<requirement type="binary">lastz</requirement>
</requirements>
- <tests>
- <test>
- <param name="input1" value="phiX.fa" ftype="fasta" />
+ <tests>
+ <test>
+ <param name="input1" value="phiX.fa" ftype="fasta" />
<param name="input2" value="B1.fa" ftype="fasta" />
<param name="source_select" value="pre_set" />
<param name="pre_set_options" value="yasra95short" />
@@ -98,10 +98,10 @@
<param name="max_ident" value="100" />
<param name="min_cvrg" value="0" />
<param name="out_format" value="diffs" />
- <output name="output1" file="lastz_diffs.txt" />
+ <output name="output1" file="lastz_diffs.txt" />
</test>
- <test>
- <param name="input1" value="phiX.fa" ftype="fasta" />
+ <test>
+ <param name="input1" value="phiX.fa" ftype="fasta" />
<param name="input2" value="B1.fa" ftype="fasta" />
<param name="source_select" value="pre_set" />
<param name="pre_set_options" value="yasra95short" />
@@ -111,18 +111,18 @@
<param name="max_ident" value="100" />
<param name="min_cvrg" value="0" />
<param name="out_format" value="diffs" />
- <output name="output1" file="lastz_diffs_ref_name.txt" />
+ <output name="output1" file="lastz_diffs_ref_name.txt" />
</test>
- </tests>
+ </tests>
<help>
**What it does**
-**LASTZ** is a high perfomance pairwise sequence aligner derived from BLASTZ. It is written by Bob Harris in Webb Miller's laboratory at Penn State. Special scoring sets were derived to improve the performance, both in runtime and quality. The Galaxy version of LASTZ is geared towards aligning of short (Illumina/Solexa, AB/SOLiD) and medium (Roche/454) reads against a reference sequence.
+**LASTZ** is a high performance pairwise sequence aligner derived from BLASTZ. It is written by Bob Harris in Webb Miller's laboratory at Penn State. Special scoring sets were derived to improve runtime performance and quality. The Galaxy version of LASTZ is geared towards aligning short (Illumina/Solexa, AB/SOLiD) and medium (Roche/454) reads against a reference sequence.
.. class:: warningmark
-At present this tools supports aligning reads against a single reference sequence no longer than 1 Megabase. This limitation will be lifted in the coming months as our short read analysis hardware capacity is expanding.
+This tool presently supports aligning reads against a single reference sequence no longer than 1 Megabase. This limitation will be lifted in the coming months as our short read analysis hardware capacity expands.
------
diff -r 210e048e7ec7 -r 5db45e48f954 tools/stats/aggregate_binned_scores_in_intervals.xml
--- a/tools/stats/aggregate_binned_scores_in_intervals.xml Fri Sep 25 00:39:29 2009 -0400
+++ b/tools/stats/aggregate_binned_scores_in_intervals.xml Fri Sep 25 02:06:03 2009 -0400
@@ -62,7 +62,7 @@
.. class:: warningmark
-This tool currently only has cached data for genome builds hg16, hg17 and hg18. However, you may use your own data point (wiggle) data, such as is available from UCSC. If you are trying to use your own data point file and it is not appearing as an option, make sure that the builds for your history items are the same.
+This tool currently only has cached data for genome builds hg16, hg17 and hg18. However, you may use your own data point (wiggle) data, such as those available from UCSC. If you are trying to use your own data point file and it is not appearing as an option, make sure that the builds for your history items are the same.
.. class:: warningmark
diff -r 210e048e7ec7 -r 5db45e48f954 tools/stats/filtering.xml
--- a/tools/stats/filtering.xml Fri Sep 25 00:39:29 2009 -0400
+++ b/tools/stats/filtering.xml Fri Sep 25 02:06:03 2009 -0400
@@ -42,7 +42,7 @@
**Syntax**
-The filter tool allows you to restrict the datset using simple conditional statements
+The filter tool allows you to restrict the dataset using simple conditional statements.
- Columns are referenced with **c** and a **number**. For example, **c1** refers to the first column of a tab-delimited file
- Make sure that multi-character operators contain no white space ( e.g., **<=** is valid while **< =** is not valid )
diff -r 210e048e7ec7 -r 5db45e48f954 tools/stats/gsummary.xml
--- a/tools/stats/gsummary.xml Fri Sep 25 00:39:29 2009 -0400
+++ b/tools/stats/gsummary.xml Fri Sep 25 02:06:03 2009 -0400
@@ -24,7 +24,7 @@
.. class:: warningmark
-This tool expects input datasets to consist of tab-delimited columns (blank or comment lines beginning with a # character are automatically skipped).
+This tool expects input datasets consisting of tab-delimited columns (blank or comment lines beginning with a # character are automatically skipped).
.. class:: infomark
@@ -48,7 +48,7 @@
- Columns are referenced with **c** and a **number**. For example, **c1** refers to the first column of a tab-delimited file.
-- Examples of expressions:
+- For example:
- **log(c5)** calculates the summary statistics for the natural log of column 5
- **(c5 + c6 + c7) / 3** calculates the summary statistics on the average of columns 5-7
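The column-expression idea above can be pictured with a simplified Python sketch (illustrative only; the actual tool evaluates expressions with its own parser and validation, and the function name here is assumed):

```python
import math

# Bind c1..cN to a row's fields (numeric where possible) and evaluate the
# expression with a restricted eval. Non-numeric fields are kept as strings.
def eval_column_expr(row, expr):
    env = {"log": math.log}
    for i, v in enumerate(row):
        try:
            env["c%d" % (i + 1)] = float(v)
        except ValueError:
            env["c%d" % (i + 1)] = v
    return eval(expr, {"__builtins__": {}}, env)

row = ["chr1", "100", "200", "feat", "3", "6", "9"]
eval_column_expr(row, "(c5 + c6 + c7) / 3")   # 6.0
```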
diff -r 210e048e7ec7 -r 5db45e48f954 tools/stats/wiggle_to_simple.xml
--- a/tools/stats/wiggle_to_simple.xml Fri Sep 25 00:39:29 2009 -0400
+++ b/tools/stats/wiggle_to_simple.xml Fri Sep 25 02:06:03 2009 -0400
@@ -18,7 +18,7 @@
This tool converts wiggle data into interval type.
-- **Wiggle format**: The .wig format is line-oriented. Wiggle data is preceeded by a UCSC track definition line. Following the track definition line is the track data, which can be entered in three different formats described below.
+- **Wiggle format**: The .wig format is line-oriented. Wiggle data is preceded by a UCSC track definition line. Following the track definition line is the track data, which can be entered in three different formats described below.
- **BED format** with no declaration line and four columns of data::
diff -r 210e048e7ec7 -r 5db45e48f954 tools/taxonomy/find_diag_hits.xml
--- a/tools/taxonomy/find_diag_hits.xml Fri Sep 25 00:39:29 2009 -0400
+++ b/tools/taxonomy/find_diag_hits.xml Fri Sep 25 02:06:03 2009 -0400
@@ -67,7 +67,7 @@
* *Select column with sequence id* set to **c1**
* *Select taxonomic ranks* with **order**, and **genus** checked
- * *Output format* set to **Dignostic read list**
+ * *Output format* set to **Diagnostic read list**
will return::
@@ -89,7 +89,7 @@
.. class:: warningmark
-This tool omits "**n**" corresponding to ranks missing from NCBI taxonomy. In the above example *Home sapiens* conatains the order name (Primates) while *Bos taurus* does not.
+This tool omits "**n**" corresponding to ranks missing from NCBI taxonomy. In the above example *Homo sapiens* contains the order name (Primates) while *Bos taurus* does not.
</help>
diff -r 210e048e7ec7 -r 5db45e48f954 tools/taxonomy/gi2taxonomy.xml
--- a/tools/taxonomy/gi2taxonomy.xml Fri Sep 25 00:39:29 2009 -0400
+++ b/tools/taxonomy/gi2taxonomy.xml Fri Sep 25 02:06:03 2009 -0400
@@ -45,7 +45,7 @@
| 1L_EYKX4VC01BXWX1_265 | 1430919 | 90.09 | 212 | 15 | 6 | 252.00 |
+-----------------------+----------+----------+-----------------+------------+------+--------+
-and you want to obtain full taxonomic representation for GIs listed in *targetGI* column. If you set paramenters as shown here:
+and you want to obtain full taxonomic representation for GIs listed in *targetGI* column. If you set parameters as shown here:
.. image:: ../static/images/fetchTax.png
diff -r 210e048e7ec7 -r 5db45e48f954 tools/taxonomy/t2ps_wrapper.xml
--- a/tools/taxonomy/t2ps_wrapper.xml Fri Sep 25 00:39:29 2009 -0400
+++ b/tools/taxonomy/t2ps_wrapper.xml Fri Sep 25 02:06:03 2009 -0400
@@ -108,7 +108,7 @@
**Explanation of phylogenetic tree markup**
-Branches of the tree are colored according to the heatmap below. The "bluer" the branch the lesser the numer of leaves it leads to and vice versa.
+Branches of the tree are colored according to the heatmap below. The "bluer" the branch, the fewer leaves it leads to, and vice versa.
.. image:: ../static/images/t2ps_heatmap.png
diff -r 210e048e7ec7 -r 5db45e48f954 tools/taxonomy/t2t_report.xml
--- a/tools/taxonomy/t2t_report.xml Fri Sep 25 00:39:29 2009 -0400
+++ b/tools/taxonomy/t2t_report.xml Fri Sep 25 02:06:03 2009 -0400
@@ -30,19 +30,39 @@
Suppose the *Taxonomy manipulation->Fetch Taxonomic Ranks* generated the following taxonomy representation::
- 9916 2 root Eukaryota Metazoa n n Chordata Craniata Gnathostomata Mammalia n Laurasiatheria n Ruminantia n Bovidae Bovinae n n Bos n Bos taurus n
9606 12585 root Eukaryota Metazoa n n Chordata Craniata Gnathostomata Mammalia n Euarchontoglires Primates Haplorrhini Hominoidea Hominidae n n n Homo n Homo sapiens n
+ 9916 2 root Eukaryota Metazoa n n Chordata Craniata Gnathostomata Mammalia n Laurasiatheria n Ruminantia n Bovidae Bovinae n n Bos n Bos taurus n
+ 9606 12585 root Eukaryota Metazoa n n Chordata Craniata Gnathostomata Mammalia n Euarchontoglires Primates Haplorrhini Hominoidea Hominidae n n n Homo n Homo sapiens n
Running this tool will generate the following output::
Rank Rank Name Count
-------------------------------------
- root root 2
superkingdom Eukaryota 2
kingdom Metazoa 2
phylum Chordata 2
subphylum Craniata 2
superclass Gnathostomata 2
class Mammalia 2
superorder Euarchontoglires 1
superorder Laurasiatheria 1
order Primates 1
suborder Haplorrhini 1
suborder Ruminantia 1
superfamily Hominoidea 1
family Bovidae 1
family Hominidae 1
subfamily Bovinae 1
genus Bos 1
genus Homo 1
species Bos taurus 1
species Homo sapiens 1
+ root root 2
+ superkingdom Eukaryota 2
+ kingdom Metazoa 2
+ phylum Chordata 2
+ subphylum Craniata 2
+ superclass Gnathostomata 2
+ class Mammalia 2
+ superorder Euarchontoglires 1
+ superorder Laurasiatheria 1
+ order Primates 1
+ suborder Haplorrhini 1
+ suborder Ruminantia 1
+ superfamily Hominoidea 1
+ family Bovidae 1
+ family Hominidae 1
+ subfamily Bovinae 1
+ genus Bos 1
+ genus Homo 1
+ species Bos taurus 1
+ species Homo sapiens 1
-The outoput is sorted on Rank and then on Rank Name.
+The output is sorted on Rank and then on Rank Name.
.. class:: warningmark
-**Note** that this tool omits "**n**" corresponding to ranks missing from NCBI taxonomy. In the above example *Home sapiens* conatains the order name (Primates) while *Bos taurus* does not.
+**Note** that this tool omits "**n**" corresponding to ranks missing from NCBI taxonomy. In the above example *Homo sapiens* contains the order name (Primates) while *Bos taurus* does not.
</help>
diff -r 210e048e7ec7 -r 5db45e48f954 tools/visualization/LAJ.xml
--- a/tools/visualization/LAJ.xml Fri Sep 25 00:39:29 2009 -0400
+++ b/tools/visualization/LAJ.xml Fri Sep 25 02:06:03 2009 -0400
@@ -15,7 +15,7 @@
<data name="out_file1" format="laj"/>
</outputs>
<help>
-You can use this tool to view a set of LAV alignments. You may include FASTA formated sequences for both species.
+You can use this tool to view a set of LAV alignments. You may include FASTA formatted sequences for both species.
For detailed information on LAJ, click here_.
diff -r 210e048e7ec7 -r 5db45e48f954 tools/visualization/genetrack.xml
--- a/tools/visualization/genetrack.xml Fri Sep 25 00:39:29 2009 -0400
+++ b/tools/visualization/genetrack.xml Fri Sep 25 02:06:03 2009 -0400
@@ -48,7 +48,7 @@
</requirements>
<help>
This tool takes the input Fit Data and creates a peak and curve plot
-showing the reads and fitness on each basepair. Features can be
+showing the reads and fitness on each base pair. Features can be
plotted below as tracks. Fit data is coverage output from tools like
the Lastz tool. Features are simply interval datasets that may be
plotted as tracks below the optional fit data. Both the fit data and
@@ -62,7 +62,7 @@
- **Track Label** is the name of the generated track.
- **Fit Data** is the dataset to calculate coverage/reads across
- basepairs and generate a curve. This is optional, and tracks may
+ base pairs and generate a curve. This is optional, and tracks may
be created simply showing features.
- **Features** are datasets (interval format) to be plotted as tracks.
details: http://www.bx.psu.edu/hg/galaxy/rev/210e048e7ec7
changeset: 2772:210e048e7ec7
user: Kanwei Li <kanwei(a)gmail.com>
date: Fri Sep 25 00:39:29 2009 -0400
description:
Style fixes
2 file(s) affected in this change:
static/june_2007_style/panel_layout.css.tmpl
templates/root/index.mako
diffs (25 lines):
diff -r aff67d57fd38 -r 210e048e7ec7 static/june_2007_style/panel_layout.css.tmpl
--- a/static/june_2007_style/panel_layout.css.tmpl Thu Sep 24 21:17:32 2009 -0400
+++ b/static/june_2007_style/panel_layout.css.tmpl Fri Sep 25 00:39:29 2009 -0400
@@ -233,7 +233,8 @@
position:absolute;
top:0;
left:0;
- width:100%;
+ width:100%;
+ min-width:900px;
height:32px;
background: ${masthead_bg};
color:#fff;
diff -r aff67d57fd38 -r 210e048e7ec7 templates/root/index.mako
--- a/templates/root/index.mako Thu Sep 24 21:17:32 2009 -0400
+++ b/templates/root/index.mako Fri Sep 25 00:39:29 2009 -0400
@@ -96,7 +96,7 @@
<div class="unified-panel-header" unselectable="on">
<div class="unified-panel-header-inner">
<div style="float: right">
- <a id="history-options-button" class='panel-header-button' href="${h.url_for( controller='root', action='history_options' )}" target="galaxy_main"><span style="padding:0 20px 0 5px;background:url(/images/dropdownarrow.png) no-repeat right center;">${_('Options')}</span></a>
+ <a id="history-options-button" class='panel-header-button' href="${h.url_for( controller='root', action='history_options' )}" target="galaxy_main"><span style="padding:0 20px 0 5px;background:url(${h.url_for( '/images/dropdownarrow.png')}) no-repeat right center;">${_('Options')}</span></a>
</div>
<div class="panel-header-text">${_('History')}</div>
</div>
25 Sep '09
details: http://www.bx.psu.edu/hg/galaxy/rev/40f8f713cbd8
changeset: 2769:40f8f713cbd8
user: jeremy goecks <jeremy.goecks(a)emory.edu>
date: Thu Sep 24 19:00:44 2009 -0400
description:
Made history grid filterable by tags (and values) and by status.
9 file(s) affected in this change:
lib/galaxy/tags/tag_handler.py
lib/galaxy/web/controllers/history.py
lib/galaxy/web/controllers/tag.py
lib/galaxy/web/framework/helpers/grids.py
static/june_2007_style/autocomplete_tagging.css.tmpl
static/june_2007_style/blue/autocomplete_tagging.css
templates/history/grid.mako
templates/root/history.mako
templates/tagging_common.mako
diffs (837 lines):
diff -r 35dd55a7898e -r 40f8f713cbd8 lib/galaxy/tags/tag_handler.py
--- a/lib/galaxy/tags/tag_handler.py Thu Sep 24 16:52:15 2009 -0400
+++ b/lib/galaxy/tags/tag_handler.py Thu Sep 24 19:00:44 2009 -0400
@@ -21,8 +21,8 @@
def get_tag_assoc_class(self, entity_class):
return self.tag_assoc_classes[entity_class]
- # Remove a tag from an item.
def remove_item_tag(self, item, tag_name):
+ """Remove a tag from an item."""
# Get item tag association.
item_tag_assoc = self._get_item_tag_assoc(item, tag_name)
@@ -35,8 +35,8 @@
return False
- # Delete tags from an item.
def delete_item_tags(self, item):
+ """Delete tags from an item."""
# Delete item-tag associations.
for tag in item.tags:
tag.delete()
@@ -44,8 +44,8 @@
# Delete tags from item.
del item.tags[:]
- # Returns true if item is has a given tag.
def item_has_tag(self, item, tag):
+ """Returns true if item has a given tag."""
# Get tag name.
if isinstance(tag, basestring):
tag_name = tag
@@ -59,22 +59,25 @@
return False
- # Apply tags to an item.
def apply_item_tags(self, db_session, item, tags_str):
+ """Apply tags to an item."""
# Parse tags.
- parsed_tags = self._parse_tags(tags_str)
+ parsed_tags = self.parse_tags(tags_str)
# Apply each tag.
for name, value in parsed_tags.items():
+ # Use lowercase name for searching/creating tag.
+ lc_name = name.lower()
+
# Get or create item-tag association.
- item_tag_assoc = self._get_item_tag_assoc(item, name)
+ item_tag_assoc = self._get_item_tag_assoc(item, lc_name)
if not item_tag_assoc:
#
# Create item-tag association.
#
# Create tag; if None, skip the tag (and log error).
- tag = self._get_or_create_tag(db_session, name)
+ tag = self._get_or_create_tag(db_session, lc_name)
if not tag:
# Log error?
continue
@@ -88,16 +91,15 @@
item_tag_assoc.tag = tag
# Apply attributes to item-tag association. Strip whitespace from user name and tag.
+ lc_value = None
if value:
- trimmed_value = value.strip()
- else:
- trimmed_value = value
- item_tag_assoc.user_tname = name.strip()
- item_tag_assoc.user_value = trimmed_value
- item_tag_assoc.value = self._scrub_tag_value(value)
+ lc_value = value.lower()
+ item_tag_assoc.user_tname = name
+ item_tag_assoc.user_value = value
+ item_tag_assoc.value = lc_value
- # Build a string from an item's tags.
def get_tags_str(self, tags):
+ """Build a string from an item's tags."""
# Return empty string if there are no tags.
if not tags:
return ""
@@ -111,16 +113,18 @@
tags_str_list.append(tag_str)
return ", ".join(tags_str_list)
- # Get a Tag object from a tag id.
def get_tag_by_id(self, db_session, tag_id):
+ """Get a Tag object from a tag id."""
return db_session.query(Tag).filter(Tag.id==tag_id).first()
- # Get a Tag object from a tag name (string).
def get_tag_by_name(self, db_session, tag_name):
- return db_session.query(Tag).filter(Tag.name==tag_name).first()
+ """Get a Tag object from a tag name (string)."""
+ if tag_name:
+ return db_session.query( Tag ).filter( Tag.name==tag_name.lower() ).first()
+ return None
- # Create a Tag object from a tag string.
def _create_tag(self, db_session, tag_str):
+ """Create a Tag object from a tag string."""
tag_hierarchy = tag_str.split(self.__class__.hierarchy_separator)
tag_prefix = ""
parent_tag = None
@@ -139,8 +143,8 @@
tag_prefix = tag.name + self.__class__.hierarchy_separator
return tag
- # Get or create a Tag object from a tag string.
def _get_or_create_tag(self, db_session, tag_str):
+ """Get or create a Tag object from a tag string."""
# Scrub tag; if tag is None after being scrubbed, return None.
scrubbed_tag_str = self._scrub_tag_name(tag_str)
if not scrubbed_tag_str:
@@ -155,18 +159,18 @@
return tag
- # Return ItemTagAssociation object for an item and a tag string; returns None if there is
- # no such tag.
def _get_item_tag_assoc(self, item, tag_name):
+ """Return ItemTagAssociation object for an item and a tag string; returns None if there is
+ no such tag."""
scrubbed_tag_name = self._scrub_tag_name(tag_name)
for item_tag_assoc in item.tags:
if item_tag_assoc.tag.name == scrubbed_tag_name:
return item_tag_assoc
return None
- # Returns a list of raw (tag-name, value) pairs derived from a string; method does not scrub tags.
- # Return value is a dictionary where tag-names are keys.
- def _parse_tags(self, tag_str):
+ def parse_tags(self, tag_str):
+ """Returns raw (tag-name, value) pairs derived from a string; the method scrubs tag names and values as well.
+ Return value is a dictionary where tag-names are keys."""
# Gracefully handle None.
if not tag_str:
return dict()
@@ -179,11 +183,13 @@
name_value_pairs = dict()
for raw_tag in raw_tags:
nv_pair = self._get_name_value_pair(raw_tag)
- name_value_pairs[nv_pair[0]] = nv_pair[1]
+ scrubbed_name = self._scrub_tag_name( nv_pair[0] )
+ scrubbed_value = self._scrub_tag_value( nv_pair[1] )
+ name_value_pairs[scrubbed_name] = scrubbed_value
return name_value_pairs
- # Scrub a tag value.
def _scrub_tag_value(self, value):
+ """Scrub a tag value."""
# Gracefully handle None:
if not value:
return None
@@ -192,11 +198,10 @@
reg_exp = re.compile('\s')
scrubbed_value = re.sub(reg_exp, "", value)
- # Lowercase and return.
- return scrubbed_value.lower()
+ return scrubbed_value
- # Scrub a tag name.
def _scrub_tag_name(self, name):
+ """Scrub a tag name."""
# Gracefully handle None:
if not name:
return None
@@ -213,21 +218,20 @@
if len(scrubbed_name) < 3 or len(scrubbed_name) > 255:
return None
- # Lowercase and return.
- return scrubbed_name.lower()
+ return scrubbed_name
- # Scrub a tag name list.
def _scrub_tag_name_list(self, tag_name_list):
+ """Scrub a tag name list."""
scrubbed_tag_list = list()
for tag in tag_name_list:
- scrubbed_tag_list.append(self._scrub_tag_name(tag))
+ scrubbed_tag_list.append( self._scrub_tag_name(tag) )
return scrubbed_tag_list
- # Get name, value pair from a tag string.
def _get_name_value_pair(self, tag_str):
+ """Get name, value pair from a tag string."""
# Use regular expression to parse name, value.
- reg_exp = re.compile("[" + self.__class__.key_value_separators + "]")
- name_value_pair = reg_exp.split(tag_str)
+ reg_exp = re.compile( "[" + self.__class__.key_value_separators + "]" )
+ name_value_pair = reg_exp.split( tag_str )
# Add empty slot if tag does not have value.
if len(name_value_pair) < 2:
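As a standalone sketch of the parsing behavior this hunk implements: split the raw string on tag separators, split each piece on key/value separators, and scrub whitespace from both halves. The separator sets below are assumptions for illustration; the real values (`key_value_separators` and the tag separators) are class attributes defined elsewhere in `TagHandler`.

```python
import re

# Hypothetical separator sets; the real ones live on TagHandler.
TAG_SEPARATORS = ",;"
KEY_VALUE_SEPARATORS = "=:"

def parse_tags(tag_str):
    """Split a raw tag string into a {name: value} dict, scrubbing whitespace."""
    if not tag_str:
        return {}
    name_value_pairs = {}
    for raw_tag in re.split("[" + TAG_SEPARATORS + "]", tag_str):
        pair = re.split("[" + KEY_VALUE_SEPARATORS + "]", raw_tag, maxsplit=1)
        name = re.sub(r"\s", "", pair[0])                               # scrub name
        value = re.sub(r"\s", "", pair[1]) if len(pair) > 1 else None   # scrub value
        if name:
            name_value_pairs[name] = value
    return name_value_pairs

print(parse_tags("group=galaxy, env:prod"))
```

Note the changeset moves scrubbing into `parse_tags` itself (and drops the old lowercasing), so callers like the grid filter receive already-cleaned names.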
diff -r 35dd55a7898e -r 40f8f713cbd8 lib/galaxy/web/controllers/history.py
--- a/lib/galaxy/web/controllers/history.py Thu Sep 24 16:52:15 2009 -0400
+++ b/lib/galaxy/web/controllers/history.py Thu Sep 24 19:00:44 2009 -0400
@@ -2,8 +2,11 @@
from galaxy.web.framework.helpers import time_ago, iff, grids
from galaxy import util
from galaxy.model.mapping import desc
+from galaxy.model import History
from galaxy.model.orm import *
from galaxy.util.json import *
+from galaxy.tags.tag_handler import TagHandler
+from sqlalchemy.sql.expression import ClauseElement
import webhelpers, logging, operator
from datetime import datetime
from cgi import escape
@@ -39,16 +42,55 @@
return dict( operation="sharing" )
return None
class TagsColumn( grids.GridColumn ):
- def __init__(self, col_name):
- grids.GridColumn.__init__(self, col_name)
+ def __init__(self, col_name, key, filterable):
+ grids.GridColumn.__init__(self, col_name, key=key, filterable=filterable)
+ # Tags cannot be sorted.
+ self.sortable = False
self.tag_elt_id_gen = 0
-
def get_value( self, trans, grid, history ):
self.tag_elt_id_gen += 1
elt_id="tagging-elt" + str( self.tag_elt_id_gen )
div_elt = "<div id=%s></div>" % elt_id
- return div_elt + trans.fill_template( "/tagging_common.mako", trans=trans,
- tagged_item=history, elt_id = elt_id, in_form="true", input_size="20" )
+ return div_elt + trans.fill_template( "/tagging_common.mako", trans=trans, tagged_item=history,
+ elt_id = elt_id, in_form="true", input_size="20", tag_click_fn="add_tag_to_grid_filter" )
+ def filter( self, db_session, query, column_filter ):
+ """ Modify query to include only histories with tags in column_filter. """
+ if column_filter == "All":
+ pass
+ elif column_filter:
+ # Parse filter to extract multiple tags.
+ tag_handler = TagHandler()
+ raw_tags = tag_handler.parse_tags( column_filter.encode("utf-8") )
+ for name, value in raw_tags.items():
+ tag = tag_handler.get_tag_by_name( db_session, name )
+ if tag:
+ query = query.filter( History.tags.any( tag_id=tag.id ) )
+ if value:
+ query = query.filter( History.tags.any( value=value.lower() ) )
+ else:
+ # Tag doesn't exist; apply the criterion anyway so the query returns no
+ # results rather than silently ignoring the filter.
+ query = query.filter( History.tags.any( user_tname=name ) )
+ return query
+ def get_accepted_filters( self ):
+ """ Returns a list of accepted filters for this column. """
+ accepted_filter_labels_and_vals = { "All": "All" }
+ accepted_filters = []
+ for label, val in accepted_filter_labels_and_vals.items():
+ args = { self.key: val }
+ accepted_filters.append( grids.GridColumnFilter( label, args) )
+ return accepted_filters
+
+
+ class DeletedColumn( grids.GridColumn ):
+ def get_accepted_filters( self ):
+ """ Returns a list of accepted filters for this column. """
+ accepted_filter_labels_and_vals = { "Active" : "False", "Deleted" : "True", "All": "All" }
+ accepted_filters = []
+ for label, val in accepted_filter_labels_and_vals.items():
+ args = { self.key: val }
+ accepted_filters.append( grids.GridColumnFilter( label, args) )
+ return accepted_filters
# Grid definition
title = "Stored histories"
@@ -60,12 +102,12 @@
link=( lambda item: iff( item.deleted, None, dict( operation="switch", id=item.id ) ) ),
attach_popup=True ),
DatasetsByStateColumn( "Datasets (by state)", ncells=4 ),
- TagsColumn( "Tags"),
+ TagsColumn( "Tags", key="tags", filterable=True),
StatusColumn( "Status", attach_popup=False ),
grids.GridColumn( "Created", key="create_time", format=time_ago ),
grids.GridColumn( "Last Updated", key="update_time", format=time_ago ),
# Valid for filtering but invisible
- grids.GridColumn( "Deleted", key="deleted", visible=False )
+ DeletedColumn( "Status", key="deleted", visible=False, filterable=True )
]
operations = [
grids.GridOperation( "Switch", allow_multiple=False, condition=( lambda item: not item.deleted ) ),
@@ -80,9 +122,9 @@
standard_filters = [
grids.GridColumnFilter( "Active", args=dict( deleted=False ) ),
grids.GridColumnFilter( "Deleted", args=dict( deleted=True ) ),
- grids.GridColumnFilter( "All", args=dict( deleted='All' ) )
+ grids.GridColumnFilter( "All", args=dict( deleted='All' ) ),
]
- default_filter = dict( deleted=False )
+ default_filter = dict( deleted="False", tags="All" )
def get_current_item( self, trans ):
return trans.get_history()
def apply_default_filter( self, trans, query, **kwargs ):
diff -r 35dd55a7898e -r 40f8f713cbd8 lib/galaxy/web/controllers/tag.py
--- a/lib/galaxy/web/controllers/tag.py Thu Sep 24 16:52:15 2009 -0400
+++ b/lib/galaxy/web/controllers/tag.py Thu Sep 24 19:00:44 2009 -0400
@@ -34,7 +34,7 @@
self._do_security_check(trans, item)
- self.tag_handler.apply_item_tags( trans.sa_session, item, unicode(new_tag).encode('utf-8') )
+ self.tag_handler.apply_item_tags( trans.sa_session, item, new_tag.encode('utf-8') )
trans.sa_session.flush()
@web.expose
@@ -45,7 +45,7 @@
self._do_security_check(trans, item)
- self.tag_handler.remove_item_tag( item, unicode(tag_name).encode('utf-8') )
+ self.tag_handler.remove_item_tag( item, tag_name.encode('utf-8') )
#print tag_name
#print unicode(tag_name)
trans.sa_session.flush()
@@ -60,41 +60,53 @@
self._do_security_check(trans, item)
tag_handler.delete_item_tags(item)
- self.tag_handler.apply_item_tags( trans.sa_session, item, unicode(new_tags).encode('utf-8') )
+ self.tag_handler.apply_item_tags( trans.sa_session, item, new_tags.encode('utf-8') )
trans.sa_session.flush()
@web.expose
@web.require_login( "get autocomplete data for an item's tags" )
- def tag_autocomplete_data(self, trans, id=None, item_class=None, q=None, limit=None, timestamp=None):
+ def tag_autocomplete_data( self, trans, q=None, limit=None, timestamp=None, id=None, item_class=None ):
""" Get autocomplete data for an item's tags. """
-
+
#
# Get item, do security check, and get autocomplete data.
#
- item = self._get_item(trans, item_class, trans.security.decode_id(id))
+ item = None
+ if id is not None:
+ item = self._get_item(trans, item_class, trans.security.decode_id(id))
+ self._do_security_check(trans, item)
+
+ # Get item class. TODO: we should have a mapper that goes from class_name to class object.
+ if item_class == 'History':
+ item_class = History
+ elif item_class == 'HistoryDatasetAssociation':
+ item_class = HistoryDatasetAssociation
- self._do_security_check(trans, item)
-
- q = unicode(q).encode('utf-8')
+ q = q.encode('utf-8')
if q.find(":") == -1:
- return self._get_tag_autocomplete_names(trans, item, q, limit, timestamp)
+ return self._get_tag_autocomplete_names(trans, q, limit, timestamp, item, item_class)
else:
- return self._get_tag_autocomplete_values(trans, item, q, limit, timestamp)
+ return self._get_tag_autocomplete_values(trans, q, limit, timestamp, item, item_class)
- def _get_tag_autocomplete_names(self, trans, item, q, limit, timestamp):
+ def _get_tag_autocomplete_names( self, trans, q, limit, timestamp, item=None, item_class=None ):
"""Returns autocomplete data for tag names ordered from most frequently used to
least frequently used."""
#
# Get user's item tags and usage counts.
#
- # Get item-tag association class.
- item_tag_assoc_class = self.tag_handler.get_tag_assoc_class(item.__class__)
+ # Get item's class object and item-tag association class.
+ if item is None and item_class is None:
+ raise RuntimeError("Both item and item_class cannot be None")
+ elif item is not None:
+ item_class = item.__class__
+
+ item_tag_assoc_class = self.tag_handler.get_tag_assoc_class(item_class)
# Build select statement.
cols_to_select = [ item_tag_assoc_class.table.c.tag_id, func.count('*') ]
- from_obj = item_tag_assoc_class.table.join(item.table).join(Tag)
- where_clause = and_(self._get_column_for_filtering_item_by_user_id(item.__class__)==trans.get_user().id,
+ from_obj = item_tag_assoc_class.table.join(item_class.table).join(Tag)
+ where_clause = and_(self._get_column_for_filtering_item_by_user_id(item_class)==trans.get_user().id,
Tag.table.c.name.like(q + "%"))
order_by = [ func.count("*").desc() ]
group_by = item_tag_assoc_class.table.c.tag_id
@@ -109,18 +121,18 @@
for row in result_set:
tag = self.tag_handler.get_tag_by_id(trans.sa_session, row[0])
- # Exclude tags that are already applied to the history.
- if self.tag_handler.item_has_tag(item, tag):
+ # Exclude tags that are already applied to the item.
+ if ( item is not None ) and ( self.tag_handler.item_has_tag(item, tag) ):
continue
# Add tag to autocomplete data. Use the most frequent name that user
# has employed for the tag.
tag_names = self._get_usernames_for_tag(trans.sa_session, trans.get_user(),
- tag, item.__class__, item_tag_assoc_class)
+ tag, item_class, item_tag_assoc_class)
ac_data += tag_names[0] + "|" + tag_names[0] + "\n"
return ac_data
- def _get_tag_autocomplete_values(self, trans, item, q, limit, timestamp):
+ def _get_tag_autocomplete_values(self, trans, q, limit, timestamp, item=None, item_class=None):
"""Returns autocomplete data for tag values ordered from most frequently used to
least frequently used."""
@@ -132,13 +144,18 @@
if tag is None:
return ""
- # Get item-tag association class.
- item_tag_assoc_class = self.tag_handler.get_tag_assoc_class(item.__class__)
+ # Get item's class object and item-tag association class.
+ if item is None and item_class is None:
+ raise RuntimeError("Both item and item_class cannot be None")
+ elif item is not None:
+ item_class = item.__class__
+
+ item_tag_assoc_class = self.tag_handler.get_tag_assoc_class(item_class)
# Build select statement.
cols_to_select = [ item_tag_assoc_class.table.c.value, func.count('*') ]
- from_obj = item_tag_assoc_class.table.join(item.table).join(Tag)
- where_clause = and_(self._get_column_for_filtering_item_by_user_id(item.__class__)==trans.get_user().id,
+ from_obj = item_tag_assoc_class.table.join(item_class.table).join(Tag)
+ where_clause = and_(self._get_column_for_filtering_item_by_user_id(item_class)==trans.get_user().id,
Tag.table.c.id==tag.id,
item_tag_assoc_class.table.c.value.like(tag_value + "%"))
order_by = [ func.count("*").desc(), item_tag_assoc_class.table.c.value ]
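The autocomplete queries above share one shape: select a tag column plus `count(*)`, group by the tag, and order by the count descending so the most-used tags come first. A toy SQLAlchemy Core sketch of just that shape (the real code additionally joins the item table and `Tag`, and filters by user):

```python
from sqlalchemy import (Column, Integer, MetaData, String, Table,
                        create_engine, func, select)

metadata = MetaData()
assoc = Table("item_tag_association", metadata,
              Column("id", Integer, primary_key=True),
              Column("tag_name", String))

engine = create_engine("sqlite://")
metadata.create_all(engine)
with engine.begin() as conn:
    conn.execute(assoc.insert(), [{"tag_name": n} for n in
                                  ["rna", "rna", "rna", "dna", "dna", "chip"]])
    # Count usages per tag, most frequent first -- same group/order shape
    # as the autocomplete select statements above.
    stmt = (select(assoc.c.tag_name, func.count())
            .group_by(assoc.c.tag_name)
            .order_by(func.count().desc()))
    rows = [tuple(r) for r in conn.execute(stmt)]
    print(rows)
```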
diff -r 35dd55a7898e -r 40f8f713cbd8 lib/galaxy/web/framework/helpers/grids.py
--- a/lib/galaxy/web/framework/helpers/grids.py Thu Sep 24 16:52:15 2009 -0400
+++ b/lib/galaxy/web/framework/helpers/grids.py Thu Sep 24 19:00:44 2009 -0400
@@ -38,19 +38,27 @@
query = self.apply_default_filter( trans, query, **kwargs )
# Maintain sort state in generated urls
extra_url_args = {}
- # Process filtering arguments
- filter_args = {}
- if self.default_filter:
- filter_args.update( self.default_filter )
+ # Process filtering arguments to (a) build a query that actuates the filter and (b) build a
+ # dictionary that denotes the current filter.
+ cur_filter_dict = {}
for column in self.columns:
if column.key:
+ # Look for filter criterion in kwargs; if not found, look in default filter.
+ column_filter = None
if "f-" + column.key in kwargs:
column_filter = kwargs.get( "f-" + column.key )
- query = column.filter( query, column_filter, filter_args )
- # Carry filter along to newly generated urls
- extra_url_args[ "f-" + column.key ] = column_filter
- if filter_args:
- query = query.filter_by( **filter_args )
+ elif ( self.default_filter ) and ( column.key in self.default_filter ):
+ column_filter = self.default_filter.get( column.key )
+
+ # If column filter found, apply it.
+ if column_filter is not None:
+ # Update query.
+ query = column.filter( trans.sa_session, query, column_filter )
+ # Update current filter dict.
+ cur_filter_dict[ column.key ] = column_filter
+ # Carry filter along to newly generated urls.
+ extra_url_args[ "f-" + column.key ] = column_filter.encode("utf-8")
+
# Process sort arguments
sort_key = sort_order = None
if 'sort' in kwargs:
@@ -92,6 +100,7 @@
return trans.fill_template( self.template,
grid=self,
query=query,
+ cur_filter_dict=cur_filter_dict,
sort_key=sort_key,
encoded_sort_key=encoded_sort_key,
sort_order=sort_order,
@@ -125,7 +134,7 @@
return query
class GridColumn( object ):
- def __init__( self, label, key=None, method=None, format=None, link=None, attach_popup=False, visible=True, ncells=1 ):
+ def __init__( self, label, key=None, method=None, format=None, link=None, attach_popup=False, visible=True, ncells=1, filterable=False ):
self.label = label
self.key = key
self.method = method
@@ -134,6 +143,7 @@
self.attach_popup = attach_popup
self.visible = visible
self.ncells = ncells
+ self.filterable = filterable
# Currently can only sort of columns that have a database
# representation, not purely derived.
if self.key:
@@ -154,20 +164,23 @@
if self.link and self.link( item ):
return self.link( item )
return None
- def filter( self, query, column_filter, filter_args ):
- """
- Must modify filter_args for carrying forward, and return query
- (possibly filtered).
- """
+ def filter( self, db_session, query, column_filter ):
+ """ Modify query to reflect the column filter. """
+ if column_filter == "All":
+ pass
if column_filter == "True":
- filter_args[self.key] = True
query = query.filter_by( **{ self.key: True } )
elif column_filter == "False":
- filter_args[self.key] = False
query = query.filter_by( **{ self.key: False } )
- elif column_filter == "All":
- del filter_args[self.key]
return query
+ def get_accepted_filters( self ):
+ """ Returns a list of accepted filters for this column. """
+ accepted_filters_vals = [ "False", "True", "All" ]
+ accepted_filters = []
+ for val in accepted_filters_vals:
+ args = { self.key: val }
+ accepted_filters.append( GridColumnFilter( val, args) )
+ return accepted_filters
class GridOperation( object ):
def __init__( self, label, key=None, condition=None, allow_multiple=True, target=None, url_args=None ):
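The grids.py hunks above replace the old `filter_args` dict with a per-column lookup: a request argument `f-<key>` wins, otherwise the grid's `default_filter` entry applies, and whatever was used is echoed into `cur_filter_dict` for the template. A simplified sketch of that resolution loop (the `Column` class and list-based "query" here are stand-ins, not Galaxy's):

```python
class Column:
    def __init__(self, key):
        self.key = key
    def filter(self, query, column_filter):
        if column_filter == "All":
            return query  # no-op: show everything
        return [row for row in query if str(row.get(self.key)) == column_filter]

def build_filtered_query(query, columns, default_filter, kwargs):
    cur_filter_dict = {}
    for column in columns:
        # Request argument "f-<key>" takes precedence over the default filter.
        column_filter = kwargs.get("f-" + column.key)
        if column_filter is None and column.key in default_filter:
            column_filter = default_filter[column.key]
        if column_filter is not None:
            query = column.filter(query, column_filter)
            cur_filter_dict[column.key] = column_filter  # echoed to the template
    return query, cur_filter_dict

rows = [{"deleted": False, "name": "a"}, {"deleted": True, "name": "b"}]
q, filters = build_filtered_query(rows, [Column("deleted")],
                                  {"deleted": "False"}, {})
names = [r["name"] for r in q]
print(names, filters)
```

Tracking `cur_filter_dict` instead of mutating `filter_args` is what lets the template highlight the active filter and build "add this tag to the filter" URLs.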
diff -r 35dd55a7898e -r 40f8f713cbd8 static/june_2007_style/autocomplete_tagging.css.tmpl
--- a/static/june_2007_style/autocomplete_tagging.css.tmpl Thu Sep 24 16:52:15 2009 -0400
+++ b/static/june_2007_style/autocomplete_tagging.css.tmpl Thu Sep 24 19:00:44 2009 -0400
@@ -76,7 +76,7 @@
.toggle-link
{
- font-weight: bold;
+ font-weight: normal;
padding: 0.3em;
margin-bottom: 1em;
width: 100%;
diff -r 35dd55a7898e -r 40f8f713cbd8 static/june_2007_style/blue/autocomplete_tagging.css
--- a/static/june_2007_style/blue/autocomplete_tagging.css Thu Sep 24 16:52:15 2009 -0400
+++ b/static/june_2007_style/blue/autocomplete_tagging.css Thu Sep 24 19:00:44 2009 -0400
@@ -76,7 +76,7 @@
.toggle-link
{
- font-weight: bold;
+ font-weight: normal;
padding: 0.3em;
margin-bottom: 1em;
width: 100%;
diff -r 35dd55a7898e -r 40f8f713cbd8 templates/history/grid.mako
--- a/templates/history/grid.mako Thu Sep 24 16:52:15 2009 -0400
+++ b/templates/history/grid.mako Thu Sep 24 19:00:44 2009 -0400
@@ -1,3 +1,5 @@
+<%! from galaxy.web.framework.helpers.grids import GridColumnFilter %>
+
<%inherit file="/base.mako"/>
<%def name="title()">${grid.title}</%def>
@@ -25,6 +27,74 @@
});
})
});
+
+ // Set up autocomplete for tag filter input.
+ var t = $("#input-tag-filter");
+ t.keyup( function( e )
+ {
+ if ( e.keyCode == 27 )
+ {
+ // Escape key
+ $(this).trigger( "blur" );
+ } else if (
+ ( e.keyCode == 13 ) || // Return Key
+ ( e.keyCode == 188 ) || // Comma
+ ( e.keyCode == 32 ) // Space
+ )
+ {
+ //
+ // Check input.
+ //
+
+ new_value = this.value;
+
+ // Do nothing if return key was used to autocomplete.
+ if (return_key_pressed_for_autocomplete == true)
+ {
+ return_key_pressed_for_autocomplete = false;
+ return false;
+ }
+
+ // Suppress space after a ":"
+ if ( new_value.indexOf(": ", new_value.length - 2) != -1)
+ {
+ this.value = new_value.substring(0, new_value.length-1);
+ return false;
+ }
+
+ // Remove trigger keys from input.
+ if ( (e.keyCode == 188) || (e.keyCode == 32) )
+ new_value = new_value.substring( 0 , new_value.length - 1 );
+
+ // Trim whitespace.
+ new_value = new_value.replace(/^\s+|\s+$/g,"");
+
+ // Too short?
+ if (new_value.length < 3)
+ return false;
+
+ //
+ // New tag OK.
+ //
+ }
+ });
+
+ // Add autocomplete to input.
+ var format_item_func = function(key, row_position, num_rows, value, search_term)
+ {
+ tag_name_and_value = value.split(":");
+ return (tag_name_and_value.length == 1 ? tag_name_and_value[0] :tag_name_and_value[1]);
+ //var array = new Array(key, value, row_position, num_rows,
+ //search_term ); return "\"" + array.join("*") + "\"";
+ }
+ var autocomplete_options =
+ { selectFirst: false, formatItem : format_item_func, autoFill: false, highlight: false, mustMatch: true };
+
+ t.autocomplete("${h.url_for( controller='tag', action='tag_autocomplete_data', item_class='History' )}", autocomplete_options);
+
+ //t.addClass("tag-input");
+
+ return t;
});
## Can this be moved into base.mako?
%if refresh_frames:
@@ -55,6 +125,25 @@
}
%endif
%endif
+
+ //
+ // Add a tag to the current grid filter; this adds the tag to the filter and then issues a request to refresh the grid.
+ //
+ function add_tag_to_grid_filter(tag_name, tag_value)
+ {
+ // Use tag as a filter: replace TAGNAME with tag_name and issue query.
+ <%
+ url_args = {}
+ if "tags" in cur_filter_dict and cur_filter_dict["tags"] != "All":
+ url_args["f-tags"] = cur_filter_dict["tags"].encode("utf-8") + ", TAGNAME"
+ else:
+ url_args["f-tags"] = "TAGNAME"
+ %>
+ var url_base = "${url( url_args )}";
+ var url = url_base.replace("TAGNAME", tag_name);
+ self.location = url;
+ }
+
</script>
</%def>
@@ -73,19 +162,50 @@
</style>
</%def>
-%if grid.standard_filters:
- <div class="grid-header">
- <h2>${grid.title}</h2>
- <span class="title">Filter:</span>
- %for i, filter in enumerate( grid.standard_filters ):
- %if i > 0:
- <span>|</span>
+<div class="grid-header">
+ <h2>${grid.title}</h2>
+
+ ## Print grid filter.
+ <form name="history_actions" action="javascript:add_tag_to_grid_filter($('#input-tag-filter').attr('value'))" method="get" >
+ <strong>Filter: </strong>
+ %for column in grid.columns:
+ %if column.filterable:
+ <span> by ${column.label.lower()}:</span>
+ ## For now, include special case to handle tags.
+ %if column.key == "tags":
+ %if cur_filter_dict[column.key] != "All":
+ <span class="filter" "style='font-style: italic'">
+ ${cur_filter_dict[column.key]}
+ </span>
+ <span>|</span>
+ %endif
+ <input id="input-tag-filter" name="f-tags" type="text" value="" size="15"/>
+ <span>|</span>
+ %endif
+
+ ## Handle other columns.
+ %for i, filter in enumerate( column.get_accepted_filters() ):
+ %if i > 0:
+ <span>|</span>
+ %endif
+ %if cur_filter_dict[column.key] == filter.args[column.key]:
+ <span class="filter" "style='font-style: italic'">${filter.label}</span>
+ %else:
+ <span class="filter"><a href="${url( filter.get_url_args() )}">${filter.label}</a></span>
+ %endif
+ %endfor
+ <span> </span>
%endif
- <span class="filter"><a href="${url( filter.get_url_args() )}">${filter.label}</a></span>
%endfor
- </div>
-%endif
-
+
+ ## Link to clear all filters.
+ <%
+ args = { "deleted" : "False", "tags" : "All" }
+ no_filter = GridColumnFilter("Clear", args)
+ %>
+ <span><a href="${url( no_filter.get_url_args() )}">${no_filter.label}</a></span>
+ </form>
+</div>
<form name="history_actions" action="${url()}" method="post" >
<table class="grid">
<thead>
diff -r 35dd55a7898e -r 40f8f713cbd8 templates/root/history.mako
--- a/templates/root/history.mako Thu Sep 24 16:52:15 2009 -0400
+++ b/templates/root/history.mako Thu Sep 24 19:00:44 2009 -0400
@@ -242,6 +242,25 @@
}
});
};
+
+ //
+ // Function provides text for tagging toggle link.
+ //
+ var get_toggle_link_text = function(tags)
+ {
+ var text = "";
+ var num_tags = array_length(tags);
+ if (num_tags != 0)
+ {
+ text = num_tags + (num_tags != 1 ? " Tags" : " Tag");
+ }
+ else
+ {
+ // No tags.
+ text = "Add tags to history";
+ }
+ return text;
+ };
</script>
<style>
@@ -289,7 +308,7 @@
%if trans.get_user() is not None:
<div id='history-tag-area' class="tag-element"></div>
- ${render_tagging_element(history, "history-tag-area")}
+ ${render_tagging_element(history, "history-tag-area", get_toggle_link_text_fn='get_toggle_link_text')}
%endif
%if not datasets:
diff -r 35dd55a7898e -r 40f8f713cbd8 templates/tagging_common.mako
--- a/templates/tagging_common.mako Thu Sep 24 16:52:15 2009 -0400
+++ b/templates/tagging_common.mako Thu Sep 24 19:00:44 2009 -0400
@@ -1,12 +1,11 @@
## Render a tagging element if there is a tagged_item.
%if tagged_item is not None and elt_id is not None:
- ${render_tagging_element(tagged_item, elt_id=elt_id, in_form=in_form, input_size=input_size)}
+ ${render_tagging_element(tagged_item, elt_id=elt_id, in_form=in_form, input_size=input_size, tag_click_fn=tag_click_fn)}
%endif
## Render the tags 'tags' as an autocomplete element.
-<%def name="render_tagging_element(tagged_item, elt_id, use_toggle_link='true', in_form='false', input_size='15')">
+<%def name="render_tagging_element(tagged_item, elt_id, use_toggle_link='true', in_form='false', input_size='15', tag_click_fn='default_tag_click_fn', get_toggle_link_text_fn='default_get_toggle_link_text_fn')">
<script type="text/javascript">
-
//
// Set up autocomplete tagger.
//
@@ -39,9 +38,9 @@
};
//
- // Function get text to display on the toggle link.
+ // Default function to get the text displayed on the toggle link.
//
- var get_toggle_link_text = function(tags)
+ var default_get_toggle_link_text_fn = function(tags)
{
var text = "";
var num_tags = array_length(tags);
@@ -73,30 +72,19 @@
else
{
// No tags.
- text = "Add tags to history";
+ text = "Add tags";
}
return text;
};
- //
- // Function to handle a tag click.
- //
- var tag_click_fn = function(tag_name, tag_value)
- {
- /*
- alert(tag_name);
-
- // Do URL request to get histories tag.
- self.location = "http://www.yahoo.com";
- */
- };
+ // Default function to handle a tag click.
+ var default_tag_click_fn = function(tag_name, tag_value) {};
var options =
{
tags : ${h.to_json_string(tag_names_and_values)},
- get_toggle_link_text_fn: get_toggle_link_text,
- tag_click_fn: tag_click_fn,
- ##tag_click_fn: function(name, value) { /* Do nothing. */ },
+ get_toggle_link_text_fn: ${get_toggle_link_text_fn},
+ tag_click_fn: ${tag_click_fn},
<% tagged_item_id = trans.security.encode_id(tagged_item.id) %>
ajax_autocomplete_tag_url: "${h.url_for( controller='tag', action='tag_autocomplete_data', id=tagged_item_id, item_class=tagged_item.__class__.__name__ )}",
ajax_add_tag_url: "${h.url_for( controller='tag', action='add_tag_async', id=tagged_item_id, item_class=tagged_item.__class__.__name__ )}",