galaxy-commits
January 2015
- 2 participants
- 248 discussions
commit/galaxy-central: guerler: Api: Redirect deprecated ftp files api
by commits-noreply@bitbucket.org 29 Jan '15
1 new commit in galaxy-central:
https://bitbucket.org/galaxy/galaxy-central/commits/18aa57e52605/
Changeset: 18aa57e52605
User: guerler
Date: 2015-01-29 17:18:03+00:00
Summary: Api: Redirect deprecated ftp files api
Affected #: 1 file
diff -r 4fcd087ac38c5ff89e715b969de0210fd3c395e0 -r 18aa57e52605ca18e6f0e4d1fcfcdff53b0664fa lib/galaxy/webapps/galaxy/buildapp.py
--- a/lib/galaxy/webapps/galaxy/buildapp.py
+++ b/lib/galaxy/webapps/galaxy/buildapp.py
@@ -179,6 +179,7 @@
webapp.mapper.resource( 'form', 'forms', path_prefix='/api' )
webapp.mapper.resource( 'request_type', 'request_types', path_prefix='/api' )
webapp.mapper.resource( 'role', 'roles', path_prefix='/api' )
+ webapp.mapper.connect( '/api/ftp_files', controller='remote_files' )
webapp.mapper.resource( 'remote_file', 'remote_files', path_prefix='/api' )
webapp.mapper.resource( 'group', 'groups', path_prefix='/api' )
webapp.mapper.resource_with_deleted( 'quota', 'quotas', path_prefix='/api' )
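The added `mapper.connect` line keeps the deprecated `/api/ftp_files` URL alive by pointing it at the renamed `remote_files` controller, so old clients keep working after the rename. A minimal stdlib sketch of the same aliasing idea (the dict-based table and `resolve` helper below are illustrative stand-ins, not Galaxy's Routes mapper):

```python
# Toy route table: both the deprecated path and the current path resolve
# to the same controller name, mirroring the connect/resource pair above.
ROUTES = {
    '/api/ftp_files': 'remote_files',     # deprecated alias kept for old clients
    '/api/remote_files': 'remote_files',  # current resource
}

def resolve(path):
    """Return the controller name for a request path, or None if unmapped."""
    return ROUTES.get(path)
```

The key design point is that the alias is resolved at the routing layer, so no controller code has to know the old name ever existed.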
Repository URL: https://bitbucket.org/galaxy/galaxy-central/
--
This is a commit notification from bitbucket.org. You are receiving
this because you have the service enabled, addressing the recipient of
this email.
commit/galaxy-central: guerler: Api: Rename ftp_files to remote_files
by commits-noreply@bitbucket.org 29 Jan '15
1 new commit in galaxy-central:
https://bitbucket.org/galaxy/galaxy-central/commits/4fcd087ac38c/
Changeset: 4fcd087ac38c
User: guerler
Date: 2015-01-29 16:56:56+00:00
Summary: Api: Rename ftp_files to remote_files
Affected #: 9 files
diff -r be28a5fda9ddbc05df0e545a3a85434630212433 -r 4fcd087ac38c5ff89e715b969de0210fd3c395e0 client/galaxy/scripts/mvc/library/library-model.js
--- a/client/galaxy/scripts/mvc/library/library-model.js
+++ b/client/galaxy/scripts/mvc/library/library-model.js
@@ -199,7 +199,7 @@
*/
var Jstree = Backbone.Model.extend({
- urlRoot: '/api/ftp_files'
+ urlRoot: '/api/remote_files'
});
return {
diff -r be28a5fda9ddbc05df0e545a3a85434630212433 -r 4fcd087ac38c5ff89e715b969de0210fd3c395e0 client/galaxy/scripts/mvc/upload/upload-ftp.js
--- a/client/galaxy/scripts/mvc/upload/upload-ftp.js
+++ b/client/galaxy/scripts/mvc/upload/upload-ftp.js
@@ -26,7 +26,7 @@
// load extension
Utils.get({
- url : galaxy_config.root + 'api/ftp_files',
+ url : galaxy_config.root + 'api/remote_files',
success : function(ftp_files) { self._fill(ftp_files); },
error : function() { self._fill(); }
});
diff -r be28a5fda9ddbc05df0e545a3a85434630212433 -r 4fcd087ac38c5ff89e715b969de0210fd3c395e0 lib/galaxy/webapps/galaxy/api/ftp_files.py
--- a/lib/galaxy/webapps/galaxy/api/ftp_files.py
+++ /dev/null
@@ -1,147 +0,0 @@
-"""
-API operations on remote files.
-"""
-import os
-import time
-import hashlib
-from galaxy import web
-from galaxy import exceptions
-from galaxy.web import _future_expose_api as expose_api
-from galaxy.util import jstree
-from galaxy.web.base.controller import BaseAPIController, url_for
-from operator import itemgetter
-
-import logging
-log = logging.getLogger( __name__ )
-
-class FTPFilesAPIController( BaseAPIController ):
-
- @expose_api
- def index( self, trans, **kwd ):
- """
- GET /api/ftp_files/
-
- Displays remote files.
-
- :param target: target to load available datasets from, defaults to ftp
- possible values: ftp, userdir
- :type target: str
-
- :param format: requested format of data, defaults to flat
- possible values: flat, jstree, ajax
-
- :returns: list of available files
- :rtype: list
- """
- target = kwd.get( 'target', None )
- format = kwd.get( 'format', None )
-
- if target == 'userdir':
- user_login = trans.user.email
- user_base_dir = trans.app.config.user_library_import_dir
- if user_base_dir is None:
- raise exceptions.ConfigDoesNotAllowException( 'The configuration of this Galaxy instance does not allow upload from user directories.' )
- full_import_dir = os.path.join( user_base_dir, user_login )
- if full_import_dir is not None:
- if format == 'jstree':
- disable = kwd.get( 'disable', 'folders')
- try:
- userdir_jstree = self.__create_jstree( full_import_dir, disable )
- response = userdir_jstree.jsonData()
- except Exception, exception:
- log.debug( str( exception ) )
- raise exceptions.InternalServerError( 'Could not create tree representation of the given folder: ' + str( full_import_dir ) )
- elif format == 'ajax':
- raise exceptions.NotImplemented( 'Not implemented yet. Sorry.' )
- else:
- try:
- response = self.__load_all_filenames( full_import_dir )
- except Exception, exception:
- log.error( 'Could not get user import files: %s', str( exception ), exc_info=True )
- raise exceptions.InternalServerError( 'Could not get the files from your user directory folder.' )
- else:
- raise exceptions.InternalServerError( 'Could not get the files from your user directory folder.' )
- elif target == 'importdir':
- base_dir = trans.app.config.library_import_dir
- if base_dir is None:
- raise exceptions.ConfigDoesNotAllowException( 'The configuration of this Galaxy instance does not allow usage of import directory.' )
- if format == 'jstree':
- disable = kwd.get( 'disable', 'folders')
- try:
- importdir_jstree = self.__create_jstree( base_dir, disable )
- response = importdir_jstree.jsonData()
- except Exception, exception:
- log.debug( str( exception ) )
- raise exceptions.InternalServerError( 'Could not create tree representation of the given folder: ' + str( base_dir ) )
- elif format == 'ajax':
- raise exceptions.NotImplemented( 'Not implemented yet. Sorry.' )
- else:
- try:
- response = self.__load_all_filenames( base_dir )
- except Exception, exception:
- log.error( 'Could not get user import files: %s', str( exception ), exc_info=True )
- raise exceptions.InternalServerError( 'Could not get the files from your import directory folder.' )
- else:
- user_ftp_base_dir = trans.app.config.ftp_upload_dir
- if user_ftp_base_dir is None:
- raise exceptions.ConfigDoesNotAllowException( 'The configuration of this Galaxy instance does not allow upload from FTP directories.' )
- try:
- user_ftp_dir = None
- identifier = trans.app.config.ftp_upload_dir_identifier
- user_ftp_dir = os.path.join( user_ftp_base_dir, getattr(trans.user, identifier) )
- if user_ftp_dir is not None:
- response = self.__load_all_filenames( user_ftp_dir )
- else:
- raise exceptions.ConfigDoesNotAllowException( 'You do not have an FTP directory named as your login at this Galaxy instance.' )
- except Exception, exception:
- log.error( 'Could not get ftp files: %s', str( exception ), exc_info=True )
- raise exceptions.InternalServerError( 'Could not get the files from your FTP folder.' )
- return response
-
- def __load_all_filenames( self, directory ):
- """
- Loads recursively all files within the given folder and its
- subfolders and returns a flat list.
- """
- response = []
- if os.path.exists( directory ):
- for ( dirpath, dirnames, filenames ) in os.walk( directory ):
- for filename in filenames:
- path = os.path.relpath( os.path.join( dirpath, filename ), directory )
- statinfo = os.lstat( os.path.join( dirpath, filename ) )
- response.append( dict( path = path,
- size = statinfo.st_size,
- ctime = time.strftime( "%m/%d/%Y %I:%M:%S %p", time.localtime( statinfo.st_ctime ) ) ) )
- else:
- raise exceptions.ConfigDoesNotAllowException( 'The given directory does not exist.' )
- # sort by path
- response = sorted(response, key=itemgetter("path"))
- return response
-
- def __create_jstree( self, directory, disable='folders' ):
- """
- Loads recursively all files and folders within the given folder
- and its subfolders and returns jstree representation
- of its structure.
- """
- userdir_jstree = None
- jstree_paths = []
- if os.path.exists( directory ):
- for ( dirpath, dirnames, filenames ) in os.walk( directory ):
-
- for dirname in dirnames:
- dir_path = os.path.relpath( os.path.join( dirpath, dirname ), directory )
- dir_path_hash = hashlib.sha1( dir_path ).hexdigest()
- disabled = True if disable == 'folders' else False
- jstree_paths.append( jstree.Path( dir_path, dir_path_hash, { 'type': 'folder', 'state': { 'disabled': disabled }, 'li_attr': { 'full_path': dir_path } } ) )
-
- for filename in filenames:
- file_path = os.path.relpath( os.path.join( dirpath, filename ), directory )
- file_path_hash = hashlib.sha1( file_path ).hexdigest()
- disabled = True if disable == 'files' else False
- jstree_paths.append( jstree.Path( file_path, file_path_hash, { 'type': 'file', 'state': { 'disabled': disabled }, 'li_attr': { 'full_path': file_path } } ) )
- else:
- raise exceptions.ConfigDoesNotAllowException( 'The given directory does not exist.' )
-
- userdir_jstree = jstree.JSTree( jstree_paths )
- return userdir_jstree
diff -r be28a5fda9ddbc05df0e545a3a85434630212433 -r 4fcd087ac38c5ff89e715b969de0210fd3c395e0 lib/galaxy/webapps/galaxy/api/remote_files.py
--- /dev/null
+++ b/lib/galaxy/webapps/galaxy/api/remote_files.py
@@ -0,0 +1,147 @@
+"""
+API operations on remote files.
+"""
+import os
+import time
+import hashlib
+from galaxy import web
+from galaxy import exceptions
+from galaxy.web import _future_expose_api as expose_api
+from galaxy.util import jstree
+from galaxy.web.base.controller import BaseAPIController, url_for
+from operator import itemgetter
+
+import logging
+log = logging.getLogger( __name__ )
+
+class RemoteFilesAPIController( BaseAPIController ):
+
+ @expose_api
+ def index( self, trans, **kwd ):
+ """
+ GET /api/remote_files/
+
+ Displays remote files.
+
+ :param target: target to load available datasets from, defaults to ftp
+ possible values: ftp, userdir
+ :type target: str
+
+ :param format: requested format of data, defaults to flat
+ possible values: flat, jstree, ajax
+
+ :returns: list of available files
+ :rtype: list
+ """
+ target = kwd.get( 'target', None )
+ format = kwd.get( 'format', None )
+
+ if target == 'userdir':
+ user_login = trans.user.email
+ user_base_dir = trans.app.config.user_library_import_dir
+ if user_base_dir is None:
+ raise exceptions.ConfigDoesNotAllowException( 'The configuration of this Galaxy instance does not allow upload from user directories.' )
+ full_import_dir = os.path.join( user_base_dir, user_login )
+ if full_import_dir is not None:
+ if format == 'jstree':
+ disable = kwd.get( 'disable', 'folders')
+ try:
+ userdir_jstree = self.__create_jstree( full_import_dir, disable )
+ response = userdir_jstree.jsonData()
+ except Exception, exception:
+ log.debug( str( exception ) )
+ raise exceptions.InternalServerError( 'Could not create tree representation of the given folder: ' + str( full_import_dir ) )
+ elif format == 'ajax':
+ raise exceptions.NotImplemented( 'Not implemented yet. Sorry.' )
+ else:
+ try:
+ response = self.__load_all_filenames( full_import_dir )
+ except Exception, exception:
+ log.error( 'Could not get user import files: %s', str( exception ), exc_info=True )
+ raise exceptions.InternalServerError( 'Could not get the files from your user directory folder.' )
+ else:
+ raise exceptions.InternalServerError( 'Could not get the files from your user directory folder.' )
+ elif target == 'importdir':
+ base_dir = trans.app.config.library_import_dir
+ if base_dir is None:
+ raise exceptions.ConfigDoesNotAllowException( 'The configuration of this Galaxy instance does not allow usage of import directory.' )
+ if format == 'jstree':
+ disable = kwd.get( 'disable', 'folders')
+ try:
+ importdir_jstree = self.__create_jstree( base_dir, disable )
+ response = importdir_jstree.jsonData()
+ except Exception, exception:
+ log.debug( str( exception ) )
+ raise exceptions.InternalServerError( 'Could not create tree representation of the given folder: ' + str( base_dir ) )
+ elif format == 'ajax':
+ raise exceptions.NotImplemented( 'Not implemented yet. Sorry.' )
+ else:
+ try:
+ response = self.__load_all_filenames( base_dir )
+ except Exception, exception:
+ log.error( 'Could not get user import files: %s', str( exception ), exc_info=True )
+ raise exceptions.InternalServerError( 'Could not get the files from your import directory folder.' )
+ else:
+ user_ftp_base_dir = trans.app.config.ftp_upload_dir
+ if user_ftp_base_dir is None:
+ raise exceptions.ConfigDoesNotAllowException( 'The configuration of this Galaxy instance does not allow upload from FTP directories.' )
+ try:
+ user_ftp_dir = None
+ identifier = trans.app.config.ftp_upload_dir_identifier
+ user_ftp_dir = os.path.join( user_ftp_base_dir, getattr(trans.user, identifier) )
+ if user_ftp_dir is not None:
+ response = self.__load_all_filenames( user_ftp_dir )
+ else:
+ raise exceptions.ConfigDoesNotAllowException( 'You do not have an FTP directory named as your login at this Galaxy instance.' )
+ except Exception, exception:
+ log.error( 'Could not get ftp files: %s', str( exception ), exc_info=True )
+ raise exceptions.InternalServerError( 'Could not get the files from your FTP folder.' )
+ return response
+
+ def __load_all_filenames( self, directory ):
+ """
+ Loads recursively all files within the given folder and its
+ subfolders and returns a flat list.
+ """
+ response = []
+ if os.path.exists( directory ):
+ for ( dirpath, dirnames, filenames ) in os.walk( directory ):
+ for filename in filenames:
+ path = os.path.relpath( os.path.join( dirpath, filename ), directory )
+ statinfo = os.lstat( os.path.join( dirpath, filename ) )
+ response.append( dict( path = path,
+ size = statinfo.st_size,
+ ctime = time.strftime( "%m/%d/%Y %I:%M:%S %p", time.localtime( statinfo.st_ctime ) ) ) )
+ else:
+ raise exceptions.ConfigDoesNotAllowException( 'The given directory does not exist.' )
+ # sort by path
+ response = sorted(response, key=itemgetter("path"))
+ return response
+
+ def __create_jstree( self, directory, disable='folders' ):
+ """
+ Loads recursively all files and folders within the given folder
+ and its subfolders and returns jstree representation
+ of its structure.
+ """
+ userdir_jstree = None
+ jstree_paths = []
+ if os.path.exists( directory ):
+ for ( dirpath, dirnames, filenames ) in os.walk( directory ):
+
+ for dirname in dirnames:
+ dir_path = os.path.relpath( os.path.join( dirpath, dirname ), directory )
+ dir_path_hash = hashlib.sha1( dir_path ).hexdigest()
+ disabled = True if disable == 'folders' else False
+ jstree_paths.append( jstree.Path( dir_path, dir_path_hash, { 'type': 'folder', 'state': { 'disabled': disabled }, 'li_attr': { 'full_path': dir_path } } ) )
+
+ for filename in filenames:
+ file_path = os.path.relpath( os.path.join( dirpath, filename ), directory )
+ file_path_hash = hashlib.sha1( file_path ).hexdigest()
+ disabled = True if disable == 'files' else False
+ jstree_paths.append( jstree.Path( file_path, file_path_hash, { 'type': 'file', 'state': { 'disabled': disabled }, 'li_attr': { 'full_path': file_path } } ) )
+ else:
+ raise exceptions.ConfigDoesNotAllowException( 'The given directory does not exist.' )
+
+ userdir_jstree = jstree.JSTree( jstree_paths )
+ return userdir_jstree
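The `__create_jstree` helper above hashes each relative path with SHA-1 to give every jstree node a stable id. The committed code is Python 2, where `hashlib.sha1` accepts a `str` directly; under Python 3 the path must be encoded to bytes first. A small sketch of that idea (the `node_id` name is mine, not the commit's):

```python
import hashlib

def node_id(rel_path):
    # sha1 requires bytes in Python 3; the Python 2 code in the diff
    # hashes the (byte) string directly.
    return hashlib.sha1(rel_path.encode('utf-8')).hexdigest()
```

Because the id is derived from the relative path, the same file always gets the same node id across requests, which is what lets the client-side tree keep selection state stable.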
diff -r be28a5fda9ddbc05df0e545a3a85434630212433 -r 4fcd087ac38c5ff89e715b969de0210fd3c395e0 lib/galaxy/webapps/galaxy/buildapp.py
--- a/lib/galaxy/webapps/galaxy/buildapp.py
+++ b/lib/galaxy/webapps/galaxy/buildapp.py
@@ -179,7 +179,7 @@
webapp.mapper.resource( 'form', 'forms', path_prefix='/api' )
webapp.mapper.resource( 'request_type', 'request_types', path_prefix='/api' )
webapp.mapper.resource( 'role', 'roles', path_prefix='/api' )
- webapp.mapper.resource( 'ftp_file', 'ftp_files', path_prefix='/api' )
+ webapp.mapper.resource( 'remote_file', 'remote_files', path_prefix='/api' )
webapp.mapper.resource( 'group', 'groups', path_prefix='/api' )
webapp.mapper.resource_with_deleted( 'quota', 'quotas', path_prefix='/api' )
webapp.mapper.connect( '/api/tools/{id:.+?}/build', action='build', controller="tools" )
diff -r be28a5fda9ddbc05df0e545a3a85434630212433 -r 4fcd087ac38c5ff89e715b969de0210fd3c395e0 static/scripts/mvc/library/library-model.js
--- a/static/scripts/mvc/library/library-model.js
+++ b/static/scripts/mvc/library/library-model.js
@@ -199,7 +199,7 @@
*/
var Jstree = Backbone.Model.extend({
- urlRoot: '/api/ftp_files'
+ urlRoot: '/api/remote_files'
});
return {
diff -r be28a5fda9ddbc05df0e545a3a85434630212433 -r 4fcd087ac38c5ff89e715b969de0210fd3c395e0 static/scripts/mvc/upload/upload-ftp.js
--- a/static/scripts/mvc/upload/upload-ftp.js
+++ b/static/scripts/mvc/upload/upload-ftp.js
@@ -26,7 +26,7 @@
// load extension
Utils.get({
- url : galaxy_config.root + 'api/ftp_files',
+ url : galaxy_config.root + 'api/remote_files',
success : function(ftp_files) { self._fill(ftp_files); },
error : function() { self._fill(); }
});
diff -r be28a5fda9ddbc05df0e545a3a85434630212433 -r 4fcd087ac38c5ff89e715b969de0210fd3c395e0 static/scripts/packed/mvc/library/library-model.js
--- a/static/scripts/packed/mvc/library/library-model.js
+++ b/static/scripts/packed/mvc/library/library-model.js
@@ -1,1 +1,1 @@
-define([],function(){var f=Backbone.Model.extend({urlRoot:"/api/libraries/",isVisible:function(n){var m=true;if((!n)&&(this.get("deleted"))){m=false}return m}});var i=Backbone.Collection.extend({url:"/api/libraries",model:f,sort_key:"name",sort_order:null,initialize:function(m){m=m||{}},getVisible:function(n,o){o=o||[];var m=new i(this.filter(function(p){return p.isVisible(n)}));return m},sortByNameAsc:function(){this.comparator=function(n,m){if(n.get("name").toLowerCase()>m.get("name").toLowerCase()){return 1}if(m.get("name").toLowerCase()>n.get("name").toLowerCase()){return -1}return 0};this.sort();return this},sortByNameDesc:function(){this.comparator=function(n,m){if(n.get("name").toLowerCase()>m.get("name").toLowerCase()){return -1}if(m.get("name").toLowerCase()>n.get("name").toLowerCase()){return 1}return 0};this.sort();return this}});var g=Backbone.Model.extend({urlRoot:"/api/libraries/datasets/"});var l=Backbone.Model.extend({urlRoot:"/api/libraries/datasets/"});var a=Backbone.Model.extend({urlRoot:"/api/folders"});var c=Backbone.Collection.extend({model:g,sortByNameAsc:function(){this.comparator=function(n,m){if(n.get("type")===m.get("type")){if(n.get("name").toLowerCase()>m.get("name").toLowerCase()){return 1}if(m.get("name").toLowerCase()>n.get("name").toLowerCase()){return -1}return 0}else{if(n.get("type")==="folder"){return -1}else{return 1}}};this.sort();return this},sortByNameDesc:function(){this.comparator=function(n,m){if(n.get("type")===m.get("type")){if(n.get("name").toLowerCase()>m.get("name").toLowerCase()){return -1}if(m.get("name").toLowerCase()>n.get("name").toLowerCase()){return 1}return 0}else{if(n.get("type")==="folder"){return -1}else{return 1}}};this.sort();return this}});var e=Backbone.Model.extend({defaults:{folder:new c(),urlRoot:"/api/folders/",id:"unknown"},parse:function(m){this.get("folder").reset(m.folder_contents);return m}});var b=Backbone.Model.extend({urlRoot:"/api/histories/"});var 
d=Backbone.Collection.extend({urlRoot:"/api/histories/",initialize:function(m){this.id=m.id},url:function(){return this.urlRoot+this.id+"/contents"},model:b});var h=Backbone.Model.extend({urlRoot:"/api/histories/"});var k=Backbone.Collection.extend({url:"/api/histories",model:h});var j=Backbone.Model.extend({urlRoot:"/api/ftp_files"});return{Library:f,FolderAsModel:a,Libraries:i,Item:g,Ldda:l,Folder:c,FolderContainer:e,HistoryItem:b,HistoryContents:d,GalaxyHistory:h,GalaxyHistories:k,Jstree:j}});
\ No newline at end of file
+define([],function(){var f=Backbone.Model.extend({urlRoot:"/api/libraries/",isVisible:function(n){var m=true;if((!n)&&(this.get("deleted"))){m=false}return m}});var i=Backbone.Collection.extend({url:"/api/libraries",model:f,sort_key:"name",sort_order:null,initialize:function(m){m=m||{}},getVisible:function(n,o){o=o||[];var m=new i(this.filter(function(p){return p.isVisible(n)}));return m},sortByNameAsc:function(){this.comparator=function(n,m){if(n.get("name").toLowerCase()>m.get("name").toLowerCase()){return 1}if(m.get("name").toLowerCase()>n.get("name").toLowerCase()){return -1}return 0};this.sort();return this},sortByNameDesc:function(){this.comparator=function(n,m){if(n.get("name").toLowerCase()>m.get("name").toLowerCase()){return -1}if(m.get("name").toLowerCase()>n.get("name").toLowerCase()){return 1}return 0};this.sort();return this}});var g=Backbone.Model.extend({urlRoot:"/api/libraries/datasets/"});var l=Backbone.Model.extend({urlRoot:"/api/libraries/datasets/"});var a=Backbone.Model.extend({urlRoot:"/api/folders"});var c=Backbone.Collection.extend({model:g,sortByNameAsc:function(){this.comparator=function(n,m){if(n.get("type")===m.get("type")){if(n.get("name").toLowerCase()>m.get("name").toLowerCase()){return 1}if(m.get("name").toLowerCase()>n.get("name").toLowerCase()){return -1}return 0}else{if(n.get("type")==="folder"){return -1}else{return 1}}};this.sort();return this},sortByNameDesc:function(){this.comparator=function(n,m){if(n.get("type")===m.get("type")){if(n.get("name").toLowerCase()>m.get("name").toLowerCase()){return -1}if(m.get("name").toLowerCase()>n.get("name").toLowerCase()){return 1}return 0}else{if(n.get("type")==="folder"){return -1}else{return 1}}};this.sort();return this}});var e=Backbone.Model.extend({defaults:{folder:new c(),urlRoot:"/api/folders/",id:"unknown"},parse:function(m){this.get("folder").reset(m.folder_contents);return m}});var b=Backbone.Model.extend({urlRoot:"/api/histories/"});var 
d=Backbone.Collection.extend({urlRoot:"/api/histories/",initialize:function(m){this.id=m.id},url:function(){return this.urlRoot+this.id+"/contents"},model:b});var h=Backbone.Model.extend({urlRoot:"/api/histories/"});var k=Backbone.Collection.extend({url:"/api/histories",model:h});var j=Backbone.Model.extend({urlRoot:"/api/remote_files"});return{Library:f,FolderAsModel:a,Libraries:i,Item:g,Ldda:l,Folder:c,FolderContainer:e,HistoryItem:b,HistoryContents:d,GalaxyHistory:h,GalaxyHistories:k,Jstree:j}});
\ No newline at end of file
diff -r be28a5fda9ddbc05df0e545a3a85434630212433 -r 4fcd087ac38c5ff89e715b969de0210fd3c395e0 static/scripts/packed/mvc/upload/upload-ftp.js
--- a/static/scripts/packed/mvc/upload/upload-ftp.js
+++ b/static/scripts/packed/mvc/upload/upload-ftp.js
@@ -1,1 +1,1 @@
-define(["utils/utils"],function(a){return Backbone.View.extend({options:{class_add:"upload-icon-button fa fa-square-o",class_remove:"upload-icon-button fa fa-check-square-o",class_partial:"upload-icon-button fa fa-minus-square-o"},initialize:function(c){this.app=c;var b=this;this.setElement(this._template());this.rows=[];a.get({url:galaxy_config.root+"api/ftp_files",success:function(d){b._fill(d)},error:function(){b._fill()}})},events:{mousedown:function(b){b.preventDefault()}},_fill:function(d){if(d&&d.length>0){this.$el.find("#upload-ftp-content").html($(this._templateTable()));var c=0;for(key in d){this.rows.push(this._add(d[key]));c+=d[key].size}this.$el.find("#upload-ftp-number").html(d.length+" files");this.$el.find("#upload-ftp-disk").html(a.bytesToString(c,true));this.$select_all=$("#upload-selectall");this.$select_all.addClass(this.options.class_add);var b=this;this.$select_all.on("click",function(){var g=b.$select_all.hasClass(b.options.class_add);for(key in d){var f=d[key];var e=b._find(f);if(!e&&g||e&&!g){b.rows[key].trigger("click")}}});b._refresh()}else{this.$el.find("#upload-ftp-content").html($(this._templateInfo()))}this.$el.find("#upload-ftp-wait").hide()},_add:function(f){var d=this;var e=$(this._templateRow(f));var b=e.find(".icon");$(this.el).find("tbody").append(e);var c="";if(this._find(f)){c=this.options.class_remove}else{c=this.options.class_add}b.addClass(c);e.on("click",function(){var g=d._find(f);b.removeClass();if(!g){d.app.uploadbox.add([{mode:"ftp",name:f.path,size:f.size,path:f.path}]);b.addClass(d.options.class_remove)}else{d.app.collection.remove(g);b.addClass(d.options.class_add)}d._refresh()});return e},_refresh:function(){var 
b=this.app.collection.where({file_mode:"ftp"});this.$select_all.removeClass();if(b.length==0){this.$select_all.addClass(this.options.class_add)}else{if(b.length==this.rows.length){this.$select_all.addClass(this.options.class_remove)}else{this.$select_all.addClass(this.options.class_partial)}}},_find:function(c){var b=this.app.collection.findWhere({file_path:c.path,status:"init",file_mode:"ftp"});if(b){return b.get("id")}return null},_templateRow:function(b){return'<tr class="upload-ftp-row"><td><div class="icon"/></td><td class="label"><p>'+b.path+'</p></td><td class="nonlabel">'+a.bytesToString(b.size)+'</td><td class="nonlabel">'+b.ctime+"</td></tr>"},_templateTable:function(){return'<span style="whitespace: nowrap; float: left;">Available files: </span><span style="whitespace: nowrap; float: right;"><span class="upload-icon fa fa-file-text-o"/><span id="upload-ftp-number"/> <span class="upload-icon fa fa-hdd-o"/><span id="upload-ftp-disk"/></span><table class="grid" style="float: left;"><thead><tr><th><div id="upload-selectall"></th><th>Name</th><th>Size</th><th>Created</th></tr></thead><tbody></tbody></table>'},_templateInfo:function(){return'<div class="upload-ftp-warning warningmessage">Your FTP directory does not contain any files.</div>'},_template:function(){return'<div class="upload-ftp"><div id="upload-ftp-wait" class="upload-ftp-wait fa fa-spinner fa-spin"/><div class="upload-ftp-help">This Galaxy server allows you to upload files via FTP. To upload some files, log in to the FTP server at <strong>'+this.app.options.ftp_upload_site+'</strong> using your Galaxy credentials (email address and password).</div><div id="upload-ftp-content"></div><div>'}})});
\ No newline at end of file
+define(["utils/utils"],function(a){return Backbone.View.extend({options:{class_add:"upload-icon-button fa fa-square-o",class_remove:"upload-icon-button fa fa-check-square-o",class_partial:"upload-icon-button fa fa-minus-square-o"},initialize:function(c){this.app=c;var b=this;this.setElement(this._template());this.rows=[];a.get({url:galaxy_config.root+"api/remote_files",success:function(d){b._fill(d)},error:function(){b._fill()}})},events:{mousedown:function(b){b.preventDefault()}},_fill:function(d){if(d&&d.length>0){this.$el.find("#upload-ftp-content").html($(this._templateTable()));var c=0;for(key in d){this.rows.push(this._add(d[key]));c+=d[key].size}this.$el.find("#upload-ftp-number").html(d.length+" files");this.$el.find("#upload-ftp-disk").html(a.bytesToString(c,true));this.$select_all=$("#upload-selectall");this.$select_all.addClass(this.options.class_add);var b=this;this.$select_all.on("click",function(){var g=b.$select_all.hasClass(b.options.class_add);for(key in d){var f=d[key];var e=b._find(f);if(!e&&g||e&&!g){b.rows[key].trigger("click")}}});b._refresh()}else{this.$el.find("#upload-ftp-content").html($(this._templateInfo()))}this.$el.find("#upload-ftp-wait").hide()},_add:function(f){var d=this;var e=$(this._templateRow(f));var b=e.find(".icon");$(this.el).find("tbody").append(e);var c="";if(this._find(f)){c=this.options.class_remove}else{c=this.options.class_add}b.addClass(c);e.on("click",function(){var g=d._find(f);b.removeClass();if(!g){d.app.uploadbox.add([{mode:"ftp",name:f.path,size:f.size,path:f.path}]);b.addClass(d.options.class_remove)}else{d.app.collection.remove(g);b.addClass(d.options.class_add)}d._refresh()});return e},_refresh:function(){var 
b=this.app.collection.where({file_mode:"ftp"});this.$select_all.removeClass();if(b.length==0){this.$select_all.addClass(this.options.class_add)}else{if(b.length==this.rows.length){this.$select_all.addClass(this.options.class_remove)}else{this.$select_all.addClass(this.options.class_partial)}}},_find:function(c){var b=this.app.collection.findWhere({file_path:c.path,status:"init",file_mode:"ftp"});if(b){return b.get("id")}return null},_templateRow:function(b){return'<tr class="upload-ftp-row"><td><div class="icon"/></td><td class="label"><p>'+b.path+'</p></td><td class="nonlabel">'+a.bytesToString(b.size)+'</td><td class="nonlabel">'+b.ctime+"</td></tr>"},_templateTable:function(){return'<span style="whitespace: nowrap; float: left;">Available files: </span><span style="whitespace: nowrap; float: right;"><span class="upload-icon fa fa-file-text-o"/><span id="upload-ftp-number"/> <span class="upload-icon fa fa-hdd-o"/><span id="upload-ftp-disk"/></span><table class="grid" style="float: left;"><thead><tr><th><div id="upload-selectall"></th><th>Name</th><th>Size</th><th>Created</th></tr></thead><tbody></tbody></table>'},_templateInfo:function(){return'<div class="upload-ftp-warning warningmessage">Your FTP directory does not contain any files.</div>'},_template:function(){return'<div class="upload-ftp"><div id="upload-ftp-wait" class="upload-ftp-wait fa fa-spinner fa-spin"/><div class="upload-ftp-help">This Galaxy server allows you to upload files via FTP. To upload some files, log in to the FTP server at <strong>'+this.app.options.ftp_upload_site+'</strong> using your Galaxy credentials (email address and password).</div><div id="upload-ftp-content"></div><div>'}})});
\ No newline at end of file
commit/galaxy-central: martenson: found a known Whoosh bug, trying to avoid it
by commits-noreply@bitbucket.org 29 Jan '15
1 new commit in galaxy-central:
https://bitbucket.org/galaxy/galaxy-central/commits/be28a5fda9dd/
Changeset: be28a5fda9dd
User: martenson
Date: 2015-01-29 16:43:29+00:00
Summary: found a known Whoosh bug, trying to avoid it
https://bitbucket.org/mchaput/whoosh/issue/386/keyerror-1l
Affected #: 2 files
diff -r e3196615d5faf9c0b713933664150e9cc1ef1e21 -r be28a5fda9ddbc05df0e545a3a85434630212433 lib/galaxy/webapps/tool_shed/search/repo_search.py
--- a/lib/galaxy/webapps/tool_shed/search/repo_search.py
+++ b/lib/galaxy/webapps/tool_shed/search/repo_search.py
@@ -49,7 +49,12 @@
# Arbitrary for now
reasonable_hits = 100.0
- times_downloaded = int( searcher.stored_fields( docnum )[ "times_downloaded" ] )
+
+ stored_times_downloaded = searcher.stored_fields( docnum )[ "times_downloaded" ]
+ if not isinstance( stored_times_downloaded, ( int, long ) ):
+ times_downloaded = int( stored_times_downloaded )
+ else:
+ times_downloaded = stored_times_downloaded
# Add 1 to prevent 0 being divided
if times_downloaded == 0:
times_downloaded = 1
diff -r e3196615d5faf9c0b713933664150e9cc1ef1e21 -r be28a5fda9ddbc05df0e545a3a85434630212433 scripts/tool_shed/build_ts_whoosh_index.py
--- a/scripts/tool_shed/build_ts_whoosh_index.py
+++ b/scripts/tool_shed/build_ts_whoosh_index.py
@@ -93,7 +93,10 @@
long_description = repo.long_description
homepage_url = repo.homepage_url
remote_repository_url = repo.remote_repository_url
+
times_downloaded = repo.times_downloaded
+ if not isinstance( times_downloaded, ( int, long ) ):
+ times_downloaded = 0
repo_owner_username = ''
if repo.user_id is not None:
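The workaround in both hunks is the same defensive coercion: Whoosh may hand back `times_downloaded` as a Python 2 `long` (the linked issue #386) or another non-`int` type, so it is normalized before use. A minimal standalone sketch, assuming Python 3 where `int` subsumes Python 2's `long`; `normalize_times_downloaded` is a hypothetical helper name, not part of the commit:

```python
def normalize_times_downloaded(stored):
    # Coerce whatever Whoosh stored to a plain int, mirroring the
    # commit's isinstance() guard (Python 3: int covers the old long).
    if not isinstance(stored, int):
        stored = int(stored)
    # Mirror the follow-up guard: add-1-to-prevent-0-being-divided.
    if stored == 0:
        stored = 1
    return stored
```

The scoring code can then divide by the result without special-casing the stored type or a zero download count.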
commit/galaxy-central: carlfeberhard: Fix to 68f0ed3: remove old import
by commits-noreply@bitbucket.org 29 Jan '15
1 new commit in galaxy-central:
https://bitbucket.org/galaxy/galaxy-central/commits/e3196615d5fa/
Changeset: e3196615d5fa
User: carlfeberhard
Date: 2015-01-29 16:35:39+00:00
Summary: Fix to 68f0ed3: remove old import
Affected #: 1 file
diff -r 89ea937034b80e7cad24005ffb018f2b1c4f5f65 -r e3196615d5faf9c0b713933664150e9cc1ef1e21 lib/galaxy/managers/base.py
--- a/lib/galaxy/managers/base.py
+++ b/lib/galaxy/managers/base.py
@@ -32,7 +32,6 @@
from galaxy import exceptions
from galaxy import model
from galaxy.model import tool_shed_install
-from galaxy.managers import filters as filter_parser
import logging
log = logging.getLogger( __name__ )
commit/galaxy-central: davebgx: Set time to epoch instead of throwing server error if repository_metadata.time_last_tested is unset.
by commits-noreply@bitbucket.org 29 Jan '15
1 new commit in galaxy-central:
https://bitbucket.org/galaxy/galaxy-central/commits/89ea937034b8/
Changeset: 89ea937034b8
User: davebgx
Date: 2015-01-29 16:23:35+00:00
Summary: Set time to epoch instead of throwing server error if repository_metadata.time_last_tested is unset.
Affected #: 1 file
diff -r c805913c4646238890e727a5f1c203058c6413b6 -r 89ea937034b80e7cad24005ffb018f2b1c4f5f65 lib/galaxy/webapps/tool_shed/controllers/repository.py
--- a/lib/galaxy/webapps/tool_shed/controllers/repository.py
+++ b/lib/galaxy/webapps/tool_shed/controllers/repository.py
@@ -1696,7 +1696,10 @@
description_lines = []
# Per the RSS 2.0 specification, all dates in RSS feeds must be formatted as specified in RFC 822
# section 5.1, e.g. Sat, 07 Sep 2002 00:00:01 UT
- time_tested = repository_metadata.time_last_tested.strftime( '%a, %d %b %Y %H:%M:%S UT' )
+ if repository_metadata.time_last_tested is None:
+ time_tested = 'Thu, 01 Jan 1970 00:00:00 UT'
+ else:
+ time_tested = repository_metadata.time_last_tested.strftime( '%a, %d %b %Y %H:%M:%S UT' )
# Generate a citable URL for this repository with owner and changeset revision.
repository_citable_url = common_util.url_join( tool_shed_url,
'view',
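The fix amounts to formatting a possibly-missing datetime in the RFC 822 shape RSS 2.0 requires, falling back to the Unix epoch instead of raising `AttributeError` on `None`. A minimal sketch of the same logic; `rss_time` is a hypothetical helper name for illustration:

```python
from datetime import datetime

RFC822_FMT = '%a, %d %b %Y %H:%M:%S UT'
EPOCH_RFC822 = 'Thu, 01 Jan 1970 00:00:00 UT'

def rss_time(time_last_tested):
    # None means the repository was never tested; substitute the epoch
    # rather than calling .strftime() on None and raising a server error.
    if time_last_tested is None:
        return EPOCH_RFC822
    return time_last_tested.strftime(RFC822_FMT)
```

Note that `%a`/`%b` are locale-dependent; the RFC 822 output above assumes the default C locale.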
commit/galaxy-central: carlfeberhard: History API: update documentation
by commits-noreply@bitbucket.org 29 Jan '15
1 new commit in galaxy-central:
https://bitbucket.org/galaxy/galaxy-central/commits/c805913c4646/
Changeset: c805913c4646
User: carlfeberhard
Date: 2015-01-29 16:13:28+00:00
Summary: History API: update documentation
Affected #: 2 files
diff -r e0441a0f85307fb9caf854fd10133f71cc3852d3 -r c805913c4646238890e727a5f1c203058c6413b6 lib/galaxy/managers/histories.py
--- a/lib/galaxy/managers/histories.py
+++ b/lib/galaxy/managers/histories.py
@@ -384,12 +384,12 @@
#TODO: I'm not entirely convinced this (or tags) are a good idea for filters since they involve a/the user
self.fn_filter_parsers.update({
#TODO: add this in annotatable mixin
- 'annotation' : { 'op': { 'in' : self.filter_annotation_contains, } },
+ 'annotation' : { 'op': { 'has' : self.filter_annotation_contains, } },
#TODO: add this in taggable mixin
'tag' : {
'op': {
'eq' : self.filter_has_tag,
- 'in' : self.filter_has_partial_tag,
+ 'has': self.filter_has_partial_tag,
}
}
})
diff -r e0441a0f85307fb9caf854fd10133f71cc3852d3 -r c805913c4646238890e727a5f1c203058c6413b6 lib/galaxy/webapps/galaxy/api/histories.py
--- a/lib/galaxy/webapps/galaxy/api/histories.py
+++ b/lib/galaxy/webapps/galaxy/api/histories.py
@@ -56,6 +56,44 @@
:rtype: list
:returns: list of dictionaries containing summary history information
+
+ The following are optional parameters:
+ view: string, one of ('summary','detailed'), defaults to 'summary'
+ controls which set of properties to return
+ keys: comma separated strings, unused by default
+ keys/names of individual properties to return
+
+ If neither keys or views are sent, the default view (set of keys) is returned.
+ If both a view and keys are sent, the key list and the view's keys are
+ combined.
+ If keys are send and no view, only those properties in keys are returned.
+
+ For which properties are available see:
+ galaxy/managers/histories/HistorySerializer
+
+ The list returned can be filtered by using two optional parameters:
+ q: string, generally a property name to filter by followed
+ by an (often optional) hyphen and operator string.
+ qv: string, the value to filter by
+
+ ..example:
+ To filter the list to only those created after 2015-01-29,
+ the query string would look like:
+ '?q=create_time-gt&qv=2015-01-29'
+
+ Multiple filters can be sent in using multiple q/qv pairs:
+ '?q=create_time-gt&qv=2015-01-29&q=tag-has&qv=experiment-1'
+
+ The list returned can be paginated using two optional parameters:
+ limit: integer, defaults to no value and no limit (return all)
+ how many items to return
+ offset: integer, defaults to 0 and starts at the beginning
+ skip the first ( offset - 1 ) items and begin returning
+ at the Nth item
+
+ ..example:
+ limit and offset can be combined. Skip the first two and return five:
+ '?limit=5&offset=3'
"""
serialization_params = self._parse_serialization_params( kwd, 'summary' )
limit, offset = self.parse_limit_offset( kwd )
@@ -126,6 +164,9 @@
:type deleted: boolean
:param deleted: if True, allow information on a deleted history to be shown.
+ :param keys: same as the use of `keys` in the `index` function above
+ :param view: same as the use of `view` in the `index` function above
+
:rtype: dictionary
:returns: detailed history information
"""
@@ -172,6 +213,9 @@
* archive_source: the url that will generate the archive to import
* archive_type: 'url' (default)
+ :param keys: same as the use of `keys` in the `index` function above
+ :param view: same as the use of `view` in the `index` function above
+
:rtype: dict
:returns: element view of new history
"""
@@ -220,6 +264,9 @@
* payload: a dictionary itself containing:
* purge: if True, purge the history and all of its HDAs
+ :param keys: same as the use of `keys` in the `index` function above
+ :param view: same as the use of `view` in the `index` function above
+
:rtype: dict
:returns: an error object if an error occurred or a dictionary containing:
* id: the encoded id of the history,
@@ -250,6 +297,9 @@
:type id: str
:param id: the encoded id of the history to undelete
+ :param keys: same as the use of `keys` in the `index` function above
+ :param view: same as the use of `view` in the `index` function above
+
:rtype: str
:returns: 'OK' if the history was undeleted
"""
@@ -275,6 +325,9 @@
* annotation: an annotation for the history
+ :param keys: same as the use of `keys` in the `index` function above
+ :param view: same as the use of `view` in the `index` function above
+
:rtype: dict
:returns: an error object if an error occurred or a dictionary containing
any values that were different from the original and, therefore, updated
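The q/qv filter pairs and limit/offset pagination described in the docstring can be sketched as a small query-string builder on the client side (a hypothetical helper, not part of the Galaxy codebase); the API matches each `q` with the `qv` that follows it, so the pairs must stay interleaved in order:

```python
from urllib.parse import urlencode

def histories_query(filters, limit=None, offset=None):
    # filters: list of (filter_name, value) pairs, e.g.
    # [('create_time-gt', '2015-01-29'), ('tag-has', 'experiment-1')]
    parts = []
    for name, value in filters:
        parts.append(('q', name))   # property name plus operator suffix
        parts.append(('qv', value)) # the value for that filter
    if limit is not None:
        parts.append(('limit', str(limit)))
    if offset is not None:
        parts.append(('offset', str(offset)))
    # urlencode on a list of tuples preserves the q/qv ordering.
    return urlencode(parts)
```

For example, `histories_query([('create_time-gt', '2015-01-29')], limit=5, offset=3)` reproduces the combined query string shown in the docstring examples.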
commit/galaxy-central: guerler: Workflow: Revise action form
by commits-noreply@bitbucket.org 29 Jan '15
1 new commit in galaxy-central:
https://bitbucket.org/galaxy/galaxy-central/commits/e0441a0f8530/
Changeset: e0441a0f8530
User: guerler
Date: 2015-01-29 15:56:15+00:00
Summary: Workflow: Revise action form
Affected #: 9 files
diff -r 68f0ed37e08942acb1bb84b63911b22d7d4b6f65 -r e0441a0f85307fb9caf854fd10133f71cc3852d3 client/galaxy/scripts/mvc/tools/tools-form-workflow.js
--- a/client/galaxy/scripts/mvc/tools/tools-form-workflow.js
+++ b/client/galaxy/scripts/mvc/tools/tools-form-workflow.js
@@ -9,27 +9,29 @@
initialize: function(options) {
// link with node representation in workflow module
this.node = workflow.active_node;
-
if (!this.node) {
console.debug('FAILED - tools-form-workflow:initialize() - Node not found in workflow.');
return;
}
-
+
+ // link actions
+ this.post_job_actions = this.node.post_job_actions || {};
+
// initialize parameters
this.options = options;
-
+
// set labels
this.options.text_enable = 'In Advance';
this.options.text_disable = 'At Runtime';
this.options.use_defaults = true;
-
+
// declare fields as optional
Utils.deepeach(options.inputs, function(item) {
if (item.type) {
item.optional = (['data', 'data_hidden', 'hidden', 'drill_down', 'repeat', 'conditional']).indexOf(item.type) == -1;
}
});
-
+
// declare conditional fields as not optional
Utils.deepeach(options.inputs, function(item) {
if (item.type) {
@@ -51,32 +53,53 @@
}
});
},
-
+
/** Builds all sub sections
*/
_makeSections: function(inputs){
// for annotation
inputs[Utils.uuid()] = {
- label : 'Edit Step Attributes',
- type : 'section',
- expand : this.node.annotation,
- inputs : [{
- label : 'Annotation / Notes',
- name : 'annotation',
- type : 'text',
- area : true,
- help : 'Add an annotation or notes to this step; annotations are available when a workflow is viewed.',
- value : this.node.annotation
- }]
+ label : 'Annotation / Notes',
+ name : 'annotation',
+ type : 'text',
+ area : true,
+ help : 'Add an annotation or note for this step. It will be shown with the workflow.',
+ value : this.node.annotation
}
-
- // for actions
- this.post_job_actions = this.node.post_job_actions;
- for (var i in this.node.output_terminals) {
- inputs[Utils.uuid()] = this._makeSection(i);
+
+ // get first output id
+ var output_id = this.node.output_terminals && Object.keys(this.node.output_terminals)[0];
+ if (output_id) {
+ // send email on job completion
+ inputs[Utils.uuid()] = {
+ name : 'pja__' + output_id + '__EmailAction',
+ label : 'Email notification',
+ type : 'boolean',
+ value : String(Boolean(this.post_job_actions['EmailAction' + output_id])),
+ ignore : 'false',
+ help : 'An email notification will be send when the job has completed.',
+ payload : {
+ 'host' : window.location.host
+ }
+ };
+
+ // delete non-output files
+ inputs[Utils.uuid()] = {
+ name : 'pja__' + output_id + '__DeleteIntermediatesAction',
+ label : 'Output cleanup',
+ type : 'boolean',
+ value : String(Boolean(this.post_job_actions['DeleteIntermediatesAction' + output_id])),
+ ignore : 'false',
+ help : 'Delete intermediate outputs if they are not used as input for another job.'
+ };
+
+ // add output specific actions
+ for (var i in this.node.output_terminals) {
+ inputs[Utils.uuid()] = this._makeSection(i);
+ }
}
},
-
+
/** Builds sub section with step actions/annotation
*/
_makeSection: function(output_id){
@@ -88,12 +111,12 @@
1 : this.datatypes[key]
});
}
-
+
// sort extensions
extensions.sort(function(a, b) {
return a.label > b.label ? 1 : a.label < b.label ? -1 : 0;
});
-
+
// add additional options
extensions.unshift({
0 : 'Sequences',
@@ -105,12 +128,12 @@
});
extensions.unshift({
0 : 'Leave unchanged',
- 1 : 'None'
+ 1 : ''
});
-
+
// create custom sub section
var input_config = {
- label : 'Edit Step Action: \'' + output_id + '\'',
+ label : 'Add Actions: \'' + output_id + '\'',
type : 'section',
inputs : [{
action : 'RenameDatasetAction',
@@ -125,7 +148,7 @@
argument : 'newtype',
label : 'Change datatype',
type : 'select',
- ignore : 'None',
+ ignore : '',
options : extensions,
help : 'This action will change the datatype of the output to the indicated value.'
},{
@@ -143,59 +166,42 @@
action : 'ColumnSetAction',
argument : 'chromCol',
label : 'Chrom column',
- type : 'text',
+ type : 'integer',
value : '',
ignore : ''
},{
action : 'ColumnSetAction',
argument : 'startCol',
label : 'Start column',
- type : 'text',
+ type : 'integer',
value : '',
ignore : ''
},{
action : 'ColumnSetAction',
argument : 'endCol',
label : 'End column',
- type : 'text',
+ type : 'integer',
value : '',
ignore : ''
},{
action : 'ColumnSetAction',
argument : 'strandCol',
label : 'Strand column',
- type : 'text',
+ type : 'integer',
value : '',
ignore : ''
},{
action : 'ColumnSetAction',
argument : 'nameCol',
label : 'Name column',
- type : 'text',
+ type : 'integer',
value : '',
ignore : ''
}],
help : 'This action will set column assignments in the output dataset. Blank fields are ignored.'
- },{
- action : 'EmailAction',
- label : 'Email notification',
- type : 'boolean',
- value : 'false',
- ignore : 'false',
- help : 'This action will send an email notifying you when the job is done.',
- payload : {
- 'host' : window.location.host
- }
- },{
- action : 'DeleteIntermediatesAction',
- label : 'Delete non-outputs',
- type : 'boolean',
- value : 'false',
- ignore : 'false',
- help : 'All non-output steps of this workflow will have datasets deleted if they are no longer being used as job inputs when the job this action is attached to is finished. You *must* be using workflow outputs (the snowflake) in your workflow for this to have any effect.'
}]
};
-
+
// visit input nodes and enrich by name/value pairs from server data
var self = this;
function visit (head, head_list) {
@@ -209,7 +215,7 @@
if (input.argument) {
input.name += '__' + input.argument;
}
-
+
// modify names of payload arguments
if (input.payload) {
for (var p_id in input.payload) {
@@ -218,7 +224,7 @@
delete p;
}
}
-
+
// access/verify existence of value
var d = self.post_job_actions[input.action + output_id];
if (d) {
@@ -242,11 +248,11 @@
}
}
visit(input_config);
-
+
// return final configuration
return input_config;
},
-
+
/** Builds a new model through api call and recreates the entire form
*/
_buildModel: function() {
@@ -260,24 +266,24 @@
}
});
},
-
+
/** Request a new model for an already created tool form and updates the form inputs
*/
_updateModel: function() {
// create the request dictionary
var self = this;
var current_state = this.tree.finalize();
-
+
// log tool state
console.debug('tools-form-workflow::_refreshForm() - Refreshing states.');
console.debug(current_state);
-
+
// register process
var process_id = this.deferred.register();
// build model url for request
var model_url = galaxy_config.root + 'workflow/editor_form_post?tool_id=' + this.options.id + '&__is_dynamic__=False';
-
+
// post job
Utils.request({
type : 'GET',
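The field-name scheme the `visit()` function above builds, `pja__<output_id>__<Action>` with an optional `__<argument>` suffix (e.g. `pja__out1__ColumnSetAction__chromCol`), can be sketched as a small helper. This is a hypothetical illustration in Python for testability; the form itself constructs these names in JavaScript:

```python
def pja_name(output_id, action, argument=None):
    # Post-job-action form fields: prefix, output terminal id, action
    # class name, and optionally the action argument, joined by '__'.
    name = 'pja__%s__%s' % (output_id, action)
    if argument:
        name += '__' + argument
    return name
```

Payload keys get the same treatment in `visit()`: each payload entry is re-keyed under the field name plus `'__' + key`, so the server can group every submitted value back to its action.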
diff -r 68f0ed37e08942acb1bb84b63911b22d7d4b6f65 -r e0441a0f85307fb9caf854fd10133f71cc3852d3 client/galaxy/scripts/mvc/tools/tools-section.js
--- a/client/galaxy/scripts/mvc/tools/tools-section.js
+++ b/client/galaxy/scripts/mvc/tools/tools-section.js
@@ -275,6 +275,7 @@
}
});
portlet.append(sub_section.$el);
+ portlet.append($('<div/>').addClass('ui-table-form-info').html(input_def.help));
// add event handler visibility button
var visible = false;
diff -r 68f0ed37e08942acb1bb84b63911b22d7d4b6f65 -r e0441a0f85307fb9caf854fd10133f71cc3852d3 client/galaxy/scripts/mvc/ui/ui-slider.js
--- a/client/galaxy/scripts/mvc/ui/ui-slider.js
+++ b/client/galaxy/scripts/mvc/ui/ui-slider.js
@@ -54,7 +54,7 @@
}
// add text field event
- this.$text.on('change', function () {
+ this.$text.on('input', function () {
self.value($(this).val());
});
diff -r 68f0ed37e08942acb1bb84b63911b22d7d4b6f65 -r e0441a0f85307fb9caf854fd10133f71cc3852d3 static/scripts/mvc/tools/tools-form-workflow.js
--- a/static/scripts/mvc/tools/tools-form-workflow.js
+++ b/static/scripts/mvc/tools/tools-form-workflow.js
@@ -9,27 +9,29 @@
initialize: function(options) {
// link with node representation in workflow module
this.node = workflow.active_node;
-
if (!this.node) {
console.debug('FAILED - tools-form-workflow:initialize() - Node not found in workflow.');
return;
}
-
+
+ // link actions
+ this.post_job_actions = this.node.post_job_actions || {};
+
// initialize parameters
this.options = options;
-
+
// set labels
this.options.text_enable = 'In Advance';
this.options.text_disable = 'At Runtime';
this.options.use_defaults = true;
-
+
// declare fields as optional
Utils.deepeach(options.inputs, function(item) {
if (item.type) {
item.optional = (['data', 'data_hidden', 'hidden', 'drill_down', 'repeat', 'conditional']).indexOf(item.type) == -1;
}
});
-
+
// declare conditional fields as not optional
Utils.deepeach(options.inputs, function(item) {
if (item.type) {
@@ -51,32 +53,53 @@
}
});
},
-
+
/** Builds all sub sections
*/
_makeSections: function(inputs){
// for annotation
inputs[Utils.uuid()] = {
- label : 'Edit Step Attributes',
- type : 'section',
- expand : this.node.annotation,
- inputs : [{
- label : 'Annotation / Notes',
- name : 'annotation',
- type : 'text',
- area : true,
- help : 'Add an annotation or notes to this step; annotations are available when a workflow is viewed.',
- value : this.node.annotation
- }]
+ label : 'Annotation / Notes',
+ name : 'annotation',
+ type : 'text',
+ area : true,
+ help : 'Add an annotation or note for this step. It will be shown with the workflow.',
+ value : this.node.annotation
}
-
- // for actions
- this.post_job_actions = this.node.post_job_actions;
- for (var i in this.node.output_terminals) {
- inputs[Utils.uuid()] = this._makeSection(i);
+
+ // get first output id
+ var output_id = this.node.output_terminals && Object.keys(this.node.output_terminals)[0];
+ if (output_id) {
+ // send email on job completion
+ inputs[Utils.uuid()] = {
+ name : 'pja__' + output_id + '__EmailAction',
+ label : 'Email notification',
+ type : 'boolean',
+ value : String(Boolean(this.post_job_actions['EmailAction' + output_id])),
+ ignore : 'false',
+ help : 'An email notification will be send when the job has completed.',
+ payload : {
+ 'host' : window.location.host
+ }
+ };
+
+ // delete non-output files
+ inputs[Utils.uuid()] = {
+ name : 'pja__' + output_id + '__DeleteIntermediatesAction',
+ label : 'Output cleanup',
+ type : 'boolean',
+ value : String(Boolean(this.post_job_actions['DeleteIntermediatesAction' + output_id])),
+ ignore : 'false',
+ help : 'Delete intermediate outputs if they are not used as input for another job.'
+ };
+
+ // add output specific actions
+ for (var i in this.node.output_terminals) {
+ inputs[Utils.uuid()] = this._makeSection(i);
+ }
}
},
-
+
/** Builds sub section with step actions/annotation
*/
_makeSection: function(output_id){
@@ -88,12 +111,12 @@
1 : this.datatypes[key]
});
}
-
+
// sort extensions
extensions.sort(function(a, b) {
return a.label > b.label ? 1 : a.label < b.label ? -1 : 0;
});
-
+
// add additional options
extensions.unshift({
0 : 'Sequences',
@@ -105,12 +128,12 @@
});
extensions.unshift({
0 : 'Leave unchanged',
- 1 : 'None'
+ 1 : ''
});
-
+
// create custom sub section
var input_config = {
- label : 'Edit Step Action: \'' + output_id + '\'',
+ label : 'Add Actions: \'' + output_id + '\'',
type : 'section',
inputs : [{
action : 'RenameDatasetAction',
@@ -125,7 +148,7 @@
argument : 'newtype',
label : 'Change datatype',
type : 'select',
- ignore : 'None',
+ ignore : '',
options : extensions,
help : 'This action will change the datatype of the output to the indicated value.'
},{
@@ -143,59 +166,42 @@
action : 'ColumnSetAction',
argument : 'chromCol',
label : 'Chrom column',
- type : 'text',
+ type : 'integer',
value : '',
ignore : ''
},{
action : 'ColumnSetAction',
argument : 'startCol',
label : 'Start column',
- type : 'text',
+ type : 'integer',
value : '',
ignore : ''
},{
action : 'ColumnSetAction',
argument : 'endCol',
label : 'End column',
- type : 'text',
+ type : 'integer',
value : '',
ignore : ''
},{
action : 'ColumnSetAction',
argument : 'strandCol',
label : 'Strand column',
- type : 'text',
+ type : 'integer',
value : '',
ignore : ''
},{
action : 'ColumnSetAction',
argument : 'nameCol',
label : 'Name column',
- type : 'text',
+ type : 'integer',
value : '',
ignore : ''
}],
help : 'This action will set column assignments in the output dataset. Blank fields are ignored.'
- },{
- action : 'EmailAction',
- label : 'Email notification',
- type : 'boolean',
- value : 'false',
- ignore : 'false',
- help : 'This action will send an email notifying you when the job is done.',
- payload : {
- 'host' : window.location.host
- }
- },{
- action : 'DeleteIntermediatesAction',
- label : 'Delete non-outputs',
- type : 'boolean',
- value : 'false',
- ignore : 'false',
- help : 'All non-output steps of this workflow will have datasets deleted if they are no longer being used as job inputs when the job this action is attached to is finished. You *must* be using workflow outputs (the snowflake) in your workflow for this to have any effect.'
}]
};
-
+
// visit input nodes and enrich by name/value pairs from server data
var self = this;
function visit (head, head_list) {
@@ -209,7 +215,7 @@
if (input.argument) {
input.name += '__' + input.argument;
}
-
+
// modify names of payload arguments
if (input.payload) {
for (var p_id in input.payload) {
@@ -218,7 +224,7 @@
delete p;
}
}
-
+
// access/verify existence of value
var d = self.post_job_actions[input.action + output_id];
if (d) {
@@ -242,11 +248,11 @@
}
}
visit(input_config);
-
+
// return final configuration
return input_config;
},
-
+
/** Builds a new model through api call and recreates the entire form
*/
_buildModel: function() {
@@ -260,24 +266,24 @@
}
});
},
-
+
/** Request a new model for an already created tool form and updates the form inputs
*/
_updateModel: function() {
// create the request dictionary
var self = this;
var current_state = this.tree.finalize();
-
+
// log tool state
console.debug('tools-form-workflow::_refreshForm() - Refreshing states.');
console.debug(current_state);
-
+
// register process
var process_id = this.deferred.register();
// build model url for request
var model_url = galaxy_config.root + 'workflow/editor_form_post?tool_id=' + this.options.id + '&__is_dynamic__=False';
-
+
// post job
Utils.request({
type : 'GET',
diff -r 68f0ed37e08942acb1bb84b63911b22d7d4b6f65 -r e0441a0f85307fb9caf854fd10133f71cc3852d3 static/scripts/mvc/tools/tools-section.js
--- a/static/scripts/mvc/tools/tools-section.js
+++ b/static/scripts/mvc/tools/tools-section.js
@@ -275,6 +275,7 @@
}
});
portlet.append(sub_section.$el);
+ portlet.append($('<div/>').addClass('ui-table-form-info').html(input_def.help));
// add event handler visibility button
var visible = false;
diff -r 68f0ed37e08942acb1bb84b63911b22d7d4b6f65 -r e0441a0f85307fb9caf854fd10133f71cc3852d3 static/scripts/mvc/ui/ui-slider.js
--- a/static/scripts/mvc/ui/ui-slider.js
+++ b/static/scripts/mvc/ui/ui-slider.js
@@ -54,7 +54,7 @@
}
// add text field event
- this.$text.on('change', function () {
+ this.$text.on('input', function () {
self.value($(this).val());
});
diff -r 68f0ed37e08942acb1bb84b63911b22d7d4b6f65 -r e0441a0f85307fb9caf854fd10133f71cc3852d3 static/scripts/packed/mvc/tools/tools-form-workflow.js
--- a/static/scripts/packed/mvc/tools/tools-form-workflow.js
+++ b/static/scripts/packed/mvc/tools/tools-form-workflow.js
@@ -1,1 +1,1 @@
-define(["utils/utils","mvc/tools/tools-form-base"],function(b,a){var c=a.extend({initialize:function(e){this.node=workflow.active_node;if(!this.node){console.debug("FAILED - tools-form-workflow:initialize() - Node not found in workflow.");return}this.options=e;this.options.text_enable="In Advance";this.options.text_disable="At Runtime";this.options.use_defaults=true;b.deepeach(e.inputs,function(f){if(f.type){f.optional=(["data","data_hidden","hidden","drill_down","repeat","conditional"]).indexOf(f.type)==-1}});b.deepeach(e.inputs,function(f){if(f.type){if(f.type=="conditional"){f.test_param.optional=false}}});var d=this;b.get({url:galaxy_config.root+"api/datatypes",cache:true,success:function(f){d.datatypes=f;d._makeSections(e.inputs);a.prototype.initialize.call(d,e)}})},_makeSections:function(d){d[b.uuid()]={label:"Edit Step Attributes",type:"section",expand:this.node.annotation,inputs:[{label:"Annotation / Notes",name:"annotation",type:"text",area:true,help:"Add an annotation or notes to this step; annotations are available when a workflow is viewed.",value:this.node.annotation}]};this.post_job_actions=this.node.post_job_actions;for(var e in this.node.output_terminals){d[b.uuid()]=this._makeSection(e)}},_makeSection:function(h){var g=[];for(key in this.datatypes){g.push({0:this.datatypes[key],1:this.datatypes[key]})}g.sort(function(j,i){return j.label>i.label?1:j.label<i.label?-1:0});g.unshift({0:"Sequences",1:"Sequences"});g.unshift({0:"Roadmaps",1:"Roadmaps"});g.unshift({0:"Leave unchanged",1:"None"});var f={label:"Edit Step Action: '"+h+"'",type:"section",inputs:[{action:"RenameDatasetAction",argument:"newname",label:"Rename dataset",type:"text",value:"",ignore:"",help:'This action will rename the result dataset. 
Click <a href="https://wiki.galaxyproject.org/Learn/AdvancedWorkflow/Variables">here</a> for more information.'},{action:"ChangeDatatypeAction",argument:"newtype",label:"Change datatype",type:"select",ignore:"None",options:g,help:"This action will change the datatype of the output to the indicated value."},{action:"TagDatasetAction",argument:"tags",label:"Tags",type:"text",value:"",ignore:"",help:"This action will set tags for the dataset."},{label:"Assign columns",type:"section",inputs:[{action:"ColumnSetAction",argument:"chromCol",label:"Chrom column",type:"text",value:"",ignore:""},{action:"ColumnSetAction",argument:"startCol",label:"Start column",type:"text",value:"",ignore:""},{action:"ColumnSetAction",argument:"endCol",label:"End column",type:"text",value:"",ignore:""},{action:"ColumnSetAction",argument:"strandCol",label:"Strand column",type:"text",value:"",ignore:""},{action:"ColumnSetAction",argument:"nameCol",label:"Name column",type:"text",value:"",ignore:""}],help:"This action will set column assignments in the output dataset. Blank fields are ignored."},{action:"EmailAction",label:"Email notification",type:"boolean",value:"false",ignore:"false",help:"This action will send an email notifying you when the job is done.",payload:{host:window.location.host}},{action:"DeleteIntermediatesAction",label:"Delete non-outputs",type:"boolean",value:"false",ignore:"false",help:"All non-output steps of this workflow will have datasets deleted if they are no longer being used as job inputs when the job this action is attached to is finished. 
You *must* be using workflow outputs (the snowflake) in your workflow for this to have any effect."}]};var d=this;function e(n,o){o=o||[];o.push(n);for(var m in n.inputs){var k=n.inputs[m];if(k.action){k.name="pja__"+h+"__"+k.action;if(k.argument){k.name+="__"+k.argument}if(k.payload){for(var s in k.payload){var q=k.payload[s];k.payload[k.name+"__"+s]=q;delete q}}var r=d.post_job_actions[k.action+h];if(r){for(var l in o){o[l].expand=true}if(k.argument){k.value=r.action_arguments&&r.action_arguments[k.argument]||k.value}else{k.value="true"}}}if(k.inputs){e(k,o.slice(0))}}}e(f);return f},_buildModel:function(){Galaxy.modal.show({title:"Coming soon...",body:"This feature has not been implemented yet.",buttons:{Close:function(){Galaxy.modal.hide()}}})},_updateModel:function(){var d=this;var e=this.tree.finalize();console.debug("tools-form-workflow::_refreshForm() - Refreshing states.");console.debug(e);var g=this.deferred.register();var f=galaxy_config.root+"workflow/editor_form_post?tool_id="+this.options.id+"&__is_dynamic__=False";b.request({type:"GET",url:f,data:e,success:function(h){d.node.update_field_data(h);d._errors(h&&h.tool_model);d.deferred.done(g);console.debug("tools-form::_refreshForm() - States refreshed.");console.debug(h)},error:function(h){d.deferred.done(g);console.debug("tools-form::_refreshForm() - Refresh request failed.");console.debug(h)}})}});return{View:c}});
\ No newline at end of file
+define(["utils/utils","mvc/tools/tools-form-base"],function(b,a){var c=a.extend({initialize:function(e){this.node=workflow.active_node;if(!this.node){console.debug("FAILED - tools-form-workflow:initialize() - Node not found in workflow.");return}this.post_job_actions=this.node.post_job_actions||{};this.options=e;this.options.text_enable="In Advance";this.options.text_disable="At Runtime";this.options.use_defaults=true;b.deepeach(e.inputs,function(f){if(f.type){f.optional=(["data","data_hidden","hidden","drill_down","repeat","conditional"]).indexOf(f.type)==-1}});b.deepeach(e.inputs,function(f){if(f.type){if(f.type=="conditional"){f.test_param.optional=false}}});var d=this;b.get({url:galaxy_config.root+"api/datatypes",cache:true,success:function(f){d.datatypes=f;d._makeSections(e.inputs);a.prototype.initialize.call(d,e)}})},_makeSections:function(d){d[b.uuid()]={label:"Annotation / Notes",name:"annotation",type:"text",area:true,help:"Add an annotation or note for this step. It will be shown with the workflow.",value:this.node.annotation};var f=this.node.output_terminals&&Object.keys(this.node.output_terminals)[0];if(f){d[b.uuid()]={name:"pja__"+f+"__EmailAction",label:"Email notification",type:"boolean",value:String(Boolean(this.post_job_actions["EmailAction"+f])),ignore:"false",help:"An email notification will be send when the job has completed.",payload:{host:window.location.host}};d[b.uuid()]={name:"pja__"+f+"__DeleteIntermediatesAction",label:"Output cleanup",type:"boolean",value:String(Boolean(this.post_job_actions["DeleteIntermediatesAction"+f])),ignore:"false",help:"Delete intermediate outputs if they are not used as input for another job."};for(var e in this.node.output_terminals){d[b.uuid()]=this._makeSection(e)}}},_makeSection:function(h){var g=[];for(key in this.datatypes){g.push({0:this.datatypes[key],1:this.datatypes[key]})}g.sort(function(j,i){return 
j.label>i.label?1:j.label<i.label?-1:0});g.unshift({0:"Sequences",1:"Sequences"});g.unshift({0:"Roadmaps",1:"Roadmaps"});g.unshift({0:"Leave unchanged",1:""});var f={label:"Add Actions: '"+h+"'",type:"section",inputs:[{action:"RenameDatasetAction",argument:"newname",label:"Rename dataset",type:"text",value:"",ignore:"",help:'This action will rename the result dataset. Click <a href="https://wiki.galaxyproject.org/Learn/AdvancedWorkflow/Variables">here</a> for more information.'},{action:"ChangeDatatypeAction",argument:"newtype",label:"Change datatype",type:"select",ignore:"",options:g,help:"This action will change the datatype of the output to the indicated value."},{action:"TagDatasetAction",argument:"tags",label:"Tags",type:"text",value:"",ignore:"",help:"This action will set tags for the dataset."},{label:"Assign columns",type:"section",inputs:[{action:"ColumnSetAction",argument:"chromCol",label:"Chrom column",type:"integer",value:"",ignore:""},{action:"ColumnSetAction",argument:"startCol",label:"Start column",type:"integer",value:"",ignore:""},{action:"ColumnSetAction",argument:"endCol",label:"End column",type:"integer",value:"",ignore:""},{action:"ColumnSetAction",argument:"strandCol",label:"Strand column",type:"integer",value:"",ignore:""},{action:"ColumnSetAction",argument:"nameCol",label:"Name column",type:"integer",value:"",ignore:""}],help:"This action will set column assignments in the output dataset. 
Blank fields are ignored."}]};var d=this;function e(n,o){o=o||[];o.push(n);for(var m in n.inputs){var k=n.inputs[m];if(k.action){k.name="pja__"+h+"__"+k.action;if(k.argument){k.name+="__"+k.argument}if(k.payload){for(var s in k.payload){var q=k.payload[s];k.payload[k.name+"__"+s]=q;delete q}}var r=d.post_job_actions[k.action+h];if(r){for(var l in o){o[l].expand=true}if(k.argument){k.value=r.action_arguments&&r.action_arguments[k.argument]||k.value}else{k.value="true"}}}if(k.inputs){e(k,o.slice(0))}}}e(f);return f},_buildModel:function(){Galaxy.modal.show({title:"Coming soon...",body:"This feature has not been implemented yet.",buttons:{Close:function(){Galaxy.modal.hide()}}})},_updateModel:function(){var d=this;var e=this.tree.finalize();console.debug("tools-form-workflow::_refreshForm() - Refreshing states.");console.debug(e);var g=this.deferred.register();var f=galaxy_config.root+"workflow/editor_form_post?tool_id="+this.options.id+"&__is_dynamic__=False";b.request({type:"GET",url:f,data:e,success:function(h){d.node.update_field_data(h);d._errors(h&&h.tool_model);d.deferred.done(g);console.debug("tools-form::_refreshForm() - States refreshed.");console.debug(h)},error:function(h){d.deferred.done(g);console.debug("tools-form::_refreshForm() - Refresh request failed.");console.debug(h)}})}});return{View:c}});
\ No newline at end of file
diff -r 68f0ed37e08942acb1bb84b63911b22d7d4b6f65 -r e0441a0f85307fb9caf854fd10133f71cc3852d3 static/scripts/packed/mvc/tools/tools-section.js
--- a/static/scripts/packed/mvc/tools/tools-section.js
+++ b/static/scripts/packed/mvc/tools/tools-section.js
@@ -1,1 +1,1 @@
-define(["utils/utils","mvc/ui/ui-table","mvc/ui/ui-misc","mvc/ui/ui-portlet","mvc/tools/tools-repeat","mvc/tools/tools-select-content","mvc/tools/tools-input"],function(e,b,h,d,c,a,f){var g=Backbone.View.extend({initialize:function(j,i){this.app=j;this.inputs=i.inputs;i.cls="ui-table-plain";i.cls_tr="section-row";this.table=new b.View(i);this.setElement(this.table.$el);this.render()},render:function(){this.table.delAll();for(var j in this.inputs){this.add(this.inputs[j])}},add:function(k){var j=this;var i=jQuery.extend(true,{},k);i.id=k.id=e.uuid();this.app.input_list[i.id]=i;var l=i.type;switch(l){case"conditional":this._addConditional(i);break;case"repeat":this._addRepeat(i);break;case"section":this._addSection(i);break;default:this._addRow(i)}},_addConditional:function(j){var k=this;j.test_param.id=j.id;var n=this._addRow(j.test_param);n.options.onchange=function(w){var v=k.app.tree.matchCase(j,w);for(var u in j.cases){var q=j.cases[u];var t=j.id+"-section-"+u;var p=k.table.get(t);var s=false;for(var r in q.inputs){if(!q.inputs[r].hidden){s=true;break}}if(u==v&&s){p.fadeIn("fast")}else{p.hide()}}k.app.trigger("refresh")};for(var m in j.cases){var l=j.id+"-section-"+m;var o=new g(this.app,{inputs:j.cases[m].inputs});o.$el.addClass("ui-table-form-section");this.table.add(o.$el);this.table.append(l)}n.trigger("change")},_addRepeat:function(p){var s=this;var q=0;function n(i,u){var t=p.id+"-section-"+(q++);var v=null;if(u){v=function(){l.del(t);l.retitle(p.title);s.app.trigger("refresh")}}var w=new g(s.app,{inputs:i});l.add({id:t,title:p.title,$el:w.$el,ondel:v});l.retitle(p.title)}var l=new c.View({title_new:p.title,max:p.max,onnew:function(){n(p.inputs,true);s.app.trigger("refresh")}});var j=p.min;var r=_.size(p.cache);for(var m=0;m<Math.max(r,j);m++){var o=null;if(m<r){o=p.cache[m]}else{o=p.inputs}n(o,m>=j)}var k=new 
f(this.app,{label:p.title,help:p.help,field:l});k.$el.addClass("ui-table-form-section");this.table.add(k.$el);this.table.append(p.id)},_addSection:function(i){var j=this;var n=new g(j.app,{inputs:i.inputs});var m=new h.ButtonIcon({icon:"fa-eye-slash",tooltip:"Show/hide section",cls:"ui-button-icon-plain"});var l=new d.View({title:i.label,cls:"ui-portlet-section",operations:{button_visible:m}});l.append(n.$el);var k=false;l.$content.hide();l.$header.css("cursor","pointer");l.$header.on("click",function(){if(k){k=false;l.$content.hide();m.setIcon("fa-eye-slash")}else{k=true;l.$content.fadeIn("fast");m.setIcon("fa-eye")}});if(i.expand){l.$header.trigger("click")}this.table.add(l.$el);this.table.append(i.id)},_addRow:function(i){var l=i.id;var j=this._createField(i);this.app.field_list[l]=j;var k=new f(this.app,{label:i.label,default_value:i.default_value,optional:i.optional,help:i.help,field:j});this.app.element_list[l]=k;this.table.add(k.$el);this.table.append(l);if(i.hidden){this.table.get(l).hide()}return j},_createField:function(i){var j=null;switch(i.type){case"text":j=this._fieldText(i);break;case"select":j=this._fieldSelect(i);break;case"data":j=this._fieldData(i);break;case"data_collection":j=this._fieldData(i);break;case"data_column":i.error_text="Missing columns in referenced dataset.";j=this._fieldSelect(i);break;case"hidden":j=this._fieldHidden(i);break;case"hidden_data":j=this._fieldHidden(i);break;case"integer":j=this._fieldSlider(i);break;case"float":j=this._fieldSlider(i);break;case"boolean":j=this._fieldBoolean(i);break;case"genomebuild":i.searchable=true;j=this._fieldSelect(i);break;case"drill_down":j=this._fieldDrilldown(i);break;case"baseurl":j=this._fieldHidden(i);break;default:this.app.incompatible=true;if(i.options){j=this._fieldSelect(i)}else{j=this._fieldText(i)}console.debug("tools-form::_addRow() : Auto matched field type ("+i.type+").")}if(i.value!==undefined){j.value(i.value)}return 
j},_fieldData:function(i){if(!this.app.options.is_dynamic){i.info="Data input '"+i.name+"' ("+e.textify(i.extensions.toString())+")";i.value=null;return this._fieldHidden(i)}var j=this;return new a.View(this.app,{id:"field-"+i.id,extensions:i.extensions,optional:i.optional,multiple:i.multiple,type:i.type,data:i.options,onchange:function(){j.app.trigger("refresh")}})},_fieldSelect:function(j){if(!this.app.options.is_dynamic&&j.is_dynamic){return this._fieldText(j)}var l=[];for(var m in j.options){var n=j.options[m];l.push({label:n[0],value:n[1]})}var o=h.Select;switch(j.display){case"checkboxes":o=h.Checkbox;break;case"radio":o=h.Radio;break}var k=this;return new o.View({id:"field-"+j.id,data:l,error_text:j.error_text||"No options available",multiple:j.multiple,searchable:j.searchable,onchange:function(){k.app.trigger("refresh")}})},_fieldDrilldown:function(i){if(!this.app.options.is_dynamic&&i.is_dynamic){return this._fieldText(i)}var j=this;return new h.Drilldown.View({id:"field-"+i.id,data:i.options,display:i.display,onchange:function(){j.app.trigger("refresh")}})},_fieldText:function(i){if(i.options){i.area=i.multiple;if(!e.validate(i.value)){i.value=""}else{if(i.value instanceof Array){i.value=value.toString()}else{i.value=String(i.value).replace(/[\[\]'"\s]/g,"");if(i.multiple){i.value=i.value.replace(/,/g,"\n")}}}}var j=this;return new h.Input({id:"field-"+i.id,area:i.area,onchange:function(){j.app.trigger("refresh")}})},_fieldSlider:function(i){var j=this;return new h.Slider.View({id:"field-"+i.id,precise:i.type=="float",min:i.min,max:i.max,onchange:function(){j.app.trigger("refresh")}})},_fieldHidden:function(i){return new h.Hidden({id:"field-"+i.id,info:i.info})},_fieldBoolean:function(i){var j=this;return new h.RadioButton.View({id:"field-"+i.id,data:[{label:"Yes",value:"true"},{label:"No",value:"false"}],onchange:function(){j.app.trigger("refresh")}})}});return{View:g}});
\ No newline at end of file
+define(["utils/utils","mvc/ui/ui-table","mvc/ui/ui-misc","mvc/ui/ui-portlet","mvc/tools/tools-repeat","mvc/tools/tools-select-content","mvc/tools/tools-input"],function(e,b,h,d,c,a,f){var g=Backbone.View.extend({initialize:function(j,i){this.app=j;this.inputs=i.inputs;i.cls="ui-table-plain";i.cls_tr="section-row";this.table=new b.View(i);this.setElement(this.table.$el);this.render()},render:function(){this.table.delAll();for(var j in this.inputs){this.add(this.inputs[j])}},add:function(k){var j=this;var i=jQuery.extend(true,{},k);i.id=k.id=e.uuid();this.app.input_list[i.id]=i;var l=i.type;switch(l){case"conditional":this._addConditional(i);break;case"repeat":this._addRepeat(i);break;case"section":this._addSection(i);break;default:this._addRow(i)}},_addConditional:function(j){var k=this;j.test_param.id=j.id;var n=this._addRow(j.test_param);n.options.onchange=function(w){var v=k.app.tree.matchCase(j,w);for(var u in j.cases){var q=j.cases[u];var t=j.id+"-section-"+u;var p=k.table.get(t);var s=false;for(var r in q.inputs){if(!q.inputs[r].hidden){s=true;break}}if(u==v&&s){p.fadeIn("fast")}else{p.hide()}}k.app.trigger("refresh")};for(var m in j.cases){var l=j.id+"-section-"+m;var o=new g(this.app,{inputs:j.cases[m].inputs});o.$el.addClass("ui-table-form-section");this.table.add(o.$el);this.table.append(l)}n.trigger("change")},_addRepeat:function(p){var s=this;var q=0;function n(i,u){var t=p.id+"-section-"+(q++);var v=null;if(u){v=function(){l.del(t);l.retitle(p.title);s.app.trigger("refresh")}}var w=new g(s.app,{inputs:i});l.add({id:t,title:p.title,$el:w.$el,ondel:v});l.retitle(p.title)}var l=new c.View({title_new:p.title,max:p.max,onnew:function(){n(p.inputs,true);s.app.trigger("refresh")}});var j=p.min;var r=_.size(p.cache);for(var m=0;m<Math.max(r,j);m++){var o=null;if(m<r){o=p.cache[m]}else{o=p.inputs}n(o,m>=j)}var k=new 
f(this.app,{label:p.title,help:p.help,field:l});k.$el.addClass("ui-table-form-section");this.table.add(k.$el);this.table.append(p.id)},_addSection:function(i){var j=this;var n=new g(j.app,{inputs:i.inputs});var m=new h.ButtonIcon({icon:"fa-eye-slash",tooltip:"Show/hide section",cls:"ui-button-icon-plain"});var l=new d.View({title:i.label,cls:"ui-portlet-section",operations:{button_visible:m}});l.append(n.$el);l.append($("<div/>").addClass("ui-table-form-info").html(i.help));var k=false;l.$content.hide();l.$header.css("cursor","pointer");l.$header.on("click",function(){if(k){k=false;l.$content.hide();m.setIcon("fa-eye-slash")}else{k=true;l.$content.fadeIn("fast");m.setIcon("fa-eye")}});if(i.expand){l.$header.trigger("click")}this.table.add(l.$el);this.table.append(i.id)},_addRow:function(i){var l=i.id;var j=this._createField(i);this.app.field_list[l]=j;var k=new f(this.app,{label:i.label,default_value:i.default_value,optional:i.optional,help:i.help,field:j});this.app.element_list[l]=k;this.table.add(k.$el);this.table.append(l);if(i.hidden){this.table.get(l).hide()}return j},_createField:function(i){var j=null;switch(i.type){case"text":j=this._fieldText(i);break;case"select":j=this._fieldSelect(i);break;case"data":j=this._fieldData(i);break;case"data_collection":j=this._fieldData(i);break;case"data_column":i.error_text="Missing columns in referenced dataset.";j=this._fieldSelect(i);break;case"hidden":j=this._fieldHidden(i);break;case"hidden_data":j=this._fieldHidden(i);break;case"integer":j=this._fieldSlider(i);break;case"float":j=this._fieldSlider(i);break;case"boolean":j=this._fieldBoolean(i);break;case"genomebuild":i.searchable=true;j=this._fieldSelect(i);break;case"drill_down":j=this._fieldDrilldown(i);break;case"baseurl":j=this._fieldHidden(i);break;default:this.app.incompatible=true;if(i.options){j=this._fieldSelect(i)}else{j=this._fieldText(i)}console.debug("tools-form::_addRow() : Auto matched field type 
("+i.type+").")}if(i.value!==undefined){j.value(i.value)}return j},_fieldData:function(i){if(!this.app.options.is_dynamic){i.info="Data input '"+i.name+"' ("+e.textify(i.extensions.toString())+")";i.value=null;return this._fieldHidden(i)}var j=this;return new a.View(this.app,{id:"field-"+i.id,extensions:i.extensions,optional:i.optional,multiple:i.multiple,type:i.type,data:i.options,onchange:function(){j.app.trigger("refresh")}})},_fieldSelect:function(j){if(!this.app.options.is_dynamic&&j.is_dynamic){return this._fieldText(j)}var l=[];for(var m in j.options){var n=j.options[m];l.push({label:n[0],value:n[1]})}var o=h.Select;switch(j.display){case"checkboxes":o=h.Checkbox;break;case"radio":o=h.Radio;break}var k=this;return new o.View({id:"field-"+j.id,data:l,error_text:j.error_text||"No options available",multiple:j.multiple,searchable:j.searchable,onchange:function(){k.app.trigger("refresh")}})},_fieldDrilldown:function(i){if(!this.app.options.is_dynamic&&i.is_dynamic){return this._fieldText(i)}var j=this;return new h.Drilldown.View({id:"field-"+i.id,data:i.options,display:i.display,onchange:function(){j.app.trigger("refresh")}})},_fieldText:function(i){if(i.options){i.area=i.multiple;if(!e.validate(i.value)){i.value=""}else{if(i.value instanceof Array){i.value=value.toString()}else{i.value=String(i.value).replace(/[\[\]'"\s]/g,"");if(i.multiple){i.value=i.value.replace(/,/g,"\n")}}}}var j=this;return new h.Input({id:"field-"+i.id,area:i.area,onchange:function(){j.app.trigger("refresh")}})},_fieldSlider:function(i){var j=this;return new h.Slider.View({id:"field-"+i.id,precise:i.type=="float",min:i.min,max:i.max,onchange:function(){j.app.trigger("refresh")}})},_fieldHidden:function(i){return new h.Hidden({id:"field-"+i.id,info:i.info})},_fieldBoolean:function(i){var j=this;return new h.RadioButton.View({id:"field-"+i.id,data:[{label:"Yes",value:"true"},{label:"No",value:"false"}],onchange:function(){j.app.trigger("refresh")}})}});return{View:g}});
\ No newline at end of file
diff -r 68f0ed37e08942acb1bb84b63911b22d7d4b6f65 -r e0441a0f85307fb9caf854fd10133f71cc3852d3 static/scripts/packed/mvc/ui/ui-slider.js
--- a/static/scripts/packed/mvc/ui/ui-slider.js
+++ b/static/scripts/packed/mvc/ui/ui-slider.js
@@ -1,1 +1,1 @@
-define(["utils/utils"],function(a){var b=Backbone.View.extend({optionsDefault:{min:null,max:null,step:null,precise:false,split:10000},initialize:function(d){var c=this;this.options=a.merge(d,this.optionsDefault);this.setElement(this._template(this.options));this.useslider=this.options.max!==null&&this.options.min!==null&&this.options.max>this.options.min;if(this.options.step===null){this.options.step=1;if(this.options.precise&&this.useslider){this.options.step=(this.options.max-this.options.min)/this.options.split}}if(this.useslider){this.$slider=this.$el.find("#slider");this.$slider.slider(this.options);this.$slider.on("slide",function(f,g){c.value(g.value)})}else{this.$el.find(".ui-form-slider-text").css("width","100%")}this.$text=this.$el.find("#text");if(this.options.value!==undefined){this.value(this.options.value)}this.$text.on("change",function(){c.value($(this).val())});var e=[];this.$text.on("keyup",function(f){e[f.which]=false});this.$text.on("keydown",function(g){var f=g.which;e[f]=true;if(!(f==8||f==9||f==13||f==37||f==39||(f>=48&&f<=57)||(f==190&&$(this).val().indexOf(".")==-1&&c.options.precise)||(f==189&&$(this).val().indexOf("-")==-1)||e[91]||e[17])){event.preventDefault()}})},value:function(c){if(c!==undefined){if(isNaN(c)){c=0}if(this.options.max!==null){c=Math.min(c,this.options.max)}if(this.options.min!==null){c=Math.max(c,this.options.min)}if(this.options.onchange){this.options.onchange(c)}this.$slider&&this.$slider.slider("value",c);this.$text.val(c)}return this.$text.val()},_template:function(c){return'<div id="'+c.id+'" class="ui-form-slider"><input id="text" type="text" class="ui-form-slider-text"/><div id="slider" class="ui-form-slider-element"/></div>'}});return{View:b}});
\ No newline at end of file
+define(["utils/utils"],function(a){var b=Backbone.View.extend({optionsDefault:{min:null,max:null,step:null,precise:false,split:10000},initialize:function(d){var c=this;this.options=a.merge(d,this.optionsDefault);this.setElement(this._template(this.options));this.useslider=this.options.max!==null&&this.options.min!==null&&this.options.max>this.options.min;if(this.options.step===null){this.options.step=1;if(this.options.precise&&this.useslider){this.options.step=(this.options.max-this.options.min)/this.options.split}}if(this.useslider){this.$slider=this.$el.find("#slider");this.$slider.slider(this.options);this.$slider.on("slide",function(f,g){c.value(g.value)})}else{this.$el.find(".ui-form-slider-text").css("width","100%")}this.$text=this.$el.find("#text");if(this.options.value!==undefined){this.value(this.options.value)}this.$text.on("input",function(){c.value($(this).val())});var e=[];this.$text.on("keyup",function(f){e[f.which]=false});this.$text.on("keydown",function(g){var f=g.which;e[f]=true;if(!(f==8||f==9||f==13||f==37||f==39||(f>=48&&f<=57)||(f==190&&$(this).val().indexOf(".")==-1&&c.options.precise)||(f==189&&$(this).val().indexOf("-")==-1)||e[91]||e[17])){event.preventDefault()}})},value:function(c){if(c!==undefined){if(isNaN(c)){c=0}if(this.options.max!==null){c=Math.min(c,this.options.max)}if(this.options.min!==null){c=Math.max(c,this.options.min)}if(this.options.onchange){this.options.onchange(c)}this.$slider&&this.$slider.slider("value",c);this.$text.val(c)}return this.$text.val()},_template:function(c){return'<div id="'+c.id+'" class="ui-form-slider"><input id="text" type="text" class="ui-form-slider-text"/><div id="slider" class="ui-form-slider-element"/></div>'}});return{View:b}});
\ No newline at end of file
Repository URL: https://bitbucket.org/galaxy/galaxy-central/
--
This is a commit notification from bitbucket.org. You are receiving
this because you have the service enabled, addressing the recipient of
this email.
29 Jan '15
1 new commit in galaxy-central:
https://bitbucket.org/galaxy/galaxy-central/commits/68f0ed37e089/
Changeset: 68f0ed37e089
User: carlfeberhard
Date: 2015-01-29 15:42:54+00:00
Summary: History API: allow filtering, limit, and offset in index params; Implement managers/base.FilterParser to configure how models can be filtered; Change ModelManager.list to allow limit, offset, and post-query (functional) filters; Add user filter, limit, and offset parsers to BaseController; Add tests for previous; Remove some UserManager functions; Continue filling out Annotatable mixin
Affected #: 13 files
diff -r 8a234cd86e8a972a43f0e13617c9f09ae1216fee -r 68f0ed37e08942acb1bb84b63911b22d7d4b6f65 lib/galaxy/managers/annotatable.py
--- a/lib/galaxy/managers/annotatable.py
+++ b/lib/galaxy/managers/annotatable.py
@@ -11,6 +11,26 @@
annotation_assoc = None
# TODO: most of this seems to be covered by item_attrs.UsesAnnotations
+ # TODO: use these below (serializer/deserializer)
+ def user_annotation( self, trans, item, user ):
+ return item.get_item_annotation_str( self.app.model.context, user, item )
+
+ def owner_annotation( self, trans, item ):
+ return self.user_annotation( trans, item, item.user )
+
+ def delete_annotation( self, trans, item, user ):
+ return item.delete_item_annotation( self.app.model.context, user, item )
+
+ def annotate( self, trans, item, user, annotation ):
+ if annotation is None:
+ self.delete_annotation( trans, item, user )
+ return None
+
+ annotation_obj = item.add_item_annotation( self.app.model.context, user, item, annotation )
+ return annotation_obj.annotation
+
+ #def by_user( self, trans, user, **kwargs ):
+ # pass
class AnnotatableSerializer( object ):
diff -r 8a234cd86e8a972a43f0e13617c9f09ae1216fee -r 68f0ed37e08942acb1bb84b63911b22d7d4b6f65 lib/galaxy/managers/base.py
--- a/lib/galaxy/managers/base.py
+++ b/lib/galaxy/managers/base.py
@@ -32,6 +32,7 @@
from galaxy import exceptions
from galaxy import model
from galaxy.model import tool_shed_install
+from galaxy.managers import filters as filter_parser
import logging
log = logging.getLogger( __name__ )
@@ -159,12 +160,7 @@
def __init__( self, app ):
self.app = app
- def _default_order_by( self ):
- """
- Returns a tuple of columns for the default order when getting multiple models.
- """
- return ( self.model_class.create_time, )
-
+ # .... query foundation wrapper
def query( self, trans, eagerloads=True, filters=None, order_by=None, limit=None, offset=None, **kwargs ):
"""
Return a basic query from model_class, filters, order_by, and limit and offset.
@@ -172,17 +168,17 @@
Set eagerloads to False to disable them for this query.
"""
query = trans.sa_session.query( self.model_class )
-
# joined table loading
if eagerloads is False:
query = query.enable_eagerloads( False )
- # TODO: if non-orm filters are the only option, here is where they'd go
- query = self._apply_filters( query, filters )
- query = self._apply_order_by_limit_offset( query, order_by, limit, offset )
+ query = self._apply_orm_filters( query, filters )
+ query = self._apply_order_by( query, order_by )
+ query = self._apply_orm_limit_offset( query, limit, offset )
return query
- def _apply_filters( self, query, filters ):
+ # .... filters
+ def _apply_orm_filters( self, query, filters ):
"""
Add any filters to the given query.
"""
@@ -213,13 +209,7 @@
filtersB = [ filtersB ]
return filtersA + filtersB
- def _apply_order_by_limit_offset( self, query, order_by, limit, offset ):
- """
- Return the query after adding the order_by, limit, and offset clauses.
- """
- query = self._apply_order_by( query, order_by )
- return self._apply_limit_offset( query, limit, offset )
-
+ # .... order, limit, and offset
def _apply_order_by( self, query, order_by ):
"""
Return the query after adding the order_by clauses.
@@ -233,7 +223,13 @@
return query.order_by( *order_by )
return query.order_by( order_by )
- def _apply_limit_offset( self, query, limit, offset ):
+ def _default_order_by( self ):
+ """
+ Returns a tuple of columns for the default order when getting multiple models.
+ """
+ return ( self.model_class.create_time, )
+
+ def _apply_orm_limit_offset( self, query, limit, offset ):
"""
Return the query after applying the given limit and offset (if not None).
"""
@@ -243,6 +239,7 @@
query = query.offset( offset )
return query
+ # .... query resolution
def one( self, trans, **kwargs ):
"""
Sends kwargs to build the query and returns one and only one model.
@@ -284,25 +281,94 @@
id_filter = self.model_class.id == id
return self.one( trans, filters=id_filter, **kwargs )
- def list( self, trans, query=None, **kwargs ):
+ # .... multirow queries
+ def _orm_list( self, trans, query=None, **kwargs ):
"""
Sends kwargs to build the query and returns all models found.
"""
query = query or self.query( trans, **kwargs )
return query.all()
- def _query_by_ids( self, trans, ids, filters=None, **kwargs ):
+ #def list( self, trans, query=None, filters=None, order_by=None, limit=None, offset=None, **kwargs ):
+ def list( self, trans, filters=None, order_by=None, limit=None, offset=None, **kwargs ):
"""
- Builds a query to find a list of models with the given list of `ids`.
+ Returns all objects matching the given filters
"""
- ids_filter = self.model_class.id.in_( ids )
- return self.query( trans, filters=self._munge_filters( ids_filter, filters ), **kwargs )
+ orm_filters, fn_filters = self._split_filters( filters )
+ if not fn_filters:
+ # if no fn_filtering required, we can use the 'all orm' version with limit offset
+ return self._orm_list( trans, filters=orm_filters, order_by=order_by, limit=limit, offset=offset, **kwargs )
- def by_ids( self, trans, ids, **kwargs ):
+ # fn filters will change the number of items returnable by limit/offset - remove them here from the orm query
+ query = self.query( trans, filters=orm_filters, order_by=order_by, limit=None, offset=None, **kwargs )
+ items = query.all()
+
+ # apply limit, offset after SQL filtering
+ items = self._apply_fn_filters_gen( items, fn_filters )
+ return list( self._apply_fn_limit_offset_gen( items, limit, offset ) )
+
+ def _split_filters( self, filters ):
+ """
+ Splits `filters` into a tuple of two lists:
+ a list of filters to be added to the SQL query
+ and a list of functional filters to be applied after the SQL query.
+ """
+ orm_filters, fn_filters = ( [], [] )
+ if filters is None:
+ return ( orm_filters, fn_filters )
+ if not isinstance( filters, list ):
+ filters = [ filters ]
+ for filter_ in filters:
+ if self._is_fn_filter( filter_ ):
+ fn_filters.append( filter_ )
+ else:
+ orm_filters.append( filter_ )
+ return ( orm_filters, fn_filters )
+
+ def _is_fn_filter( self, filter_ ):
+ """
+ Returns True if `filter_` is a functional filter to be applied after the SQL query.
+ """
+ return callable( filter_ )
+
+ def _apply_fn_filters_gen( self, items, filters ):
+ """
+ If all the filter functions in `filters` return True for an item in `items`,
+ yield that item.
+ """
+ #cpu-expensive
+ for item in items:
+ filter_results = map( lambda f: f( item ), filters )
+ if all( filter_results ):
+ yield item
+
+ def _apply_fn_limit_offset_gen( self, items, limit, offset ):
+ """
+ Iterate over `items` and begin yielding items after
+ `offset` number of items and stop when we've yielded
+ `limit` number of items.
+ """
+ # change negative limit, offset to None
+ if limit is not None and limit < 0:
+ limit = None
+ if offset is not None and offset < 0:
+ offset = None
+
+ yielded = 0
+ for i, item in enumerate( items ):
+ if offset is not None and i < offset:
+ continue
+ if limit is not None and yielded >= limit:
+ break
+ yield item
+ yielded += 1
+
+ def by_ids( self, trans, ids, filters=None, **kwargs ):
"""
Returns an in-order list of models with the matching ids in `ids`.
"""
- found = self._query_by_ids( trans, ids, **kwargs ).all()
+ ids_filter = self.model_class.id.in_( ids )
+ found = self.list( trans, filters=self._munge_filters( ids_filter, filters ), **kwargs )
# TODO: this does not order by the original 'ids' array
# ...could use get (supposedly since found are in the session, the db won't be hit twice)
@@ -607,6 +673,9 @@
val = self.validate.int_range( key, val, min, max )
return self.default_deserializer( trans, item, key, val )
+ #def deserialize_date( self, trans, item, key, val ):
+ # #TODO: parse isoformat date into date object
+
# ... common deserializers for Galaxy
def deserialize_genome_build( self, trans, item, key, val ):
"""
@@ -691,6 +760,229 @@
# pass
+# ==== Building query filters based on model data
+class FilterParser( object ):
+ """
+ Converts string tuples (partially converted query string params) of
+ attr, op, val into either:
+ - ORM based filters (filters that can be applied by the ORM at the SQL
+ level) or
+ - functional filters (filters that use derived values or values not
+ within the SQL tables)
+ These filters can then be applied to queries.
+
+ This abstraction allows 'smarter' application of limit and offset at either the
+ SQL level or the generator/list level based on the presence of functional
+ filters. In other words, if no functional filters are present, limit and offset
+ may be applied at the SQL level. If functional filters are present, limit and
+ offset need to be applied at the list level.
+
+ These might safely be replaced in the future by creating SQLAlchemy
+ hybrid properties or more thoroughly mapping derived values.
+ """
+ #??: this class kind of 'lives' in both the world of controllers/param-parsing and the world of models/orm
+ # (as the model informs how the filter params are parsed)
+ # I have no great idea where this 'belongs', so it's here for now
+
+ #: model class
+ model_class = None
+
+ def __init__( self, app ):
+ """
+ Set up serializer map, any additional serializable keys, and views here.
+ """
+ self.app = app
+
+ #: dictionary containing parsing data for ORM/SQLAlchemy-based filters
+ #..note: although kind of a pain in the ass and verbose, opt-in/whitelisting allows more control
+ #: over potentially expensive queries
+ self.orm_filter_parsers = {}
+
+ #: dictionary containing parsing data for functional filters - applied after a query is made
+ self.fn_filter_parsers = {}
+
+ # set up both of the above
+ self._add_parsers()
+
+ def _add_parsers( self ):
+ """
+ Set up, extend, or alter `orm_filter_parsers` and `fn_filter_parsers`.
+ """
+ pass
+
+ def parse_filters( self, filter_tuple_list ):
+ """
+ Parse string 3-tuples (attr, op, val) into orm or functional filters.
+ """
+ parsed = []
+ for ( attr, op, val ) in filter_tuple_list:
+ filter_ = self.parse_filter( attr, op, val )
+ parsed.append( filter_ )
+ return parsed
+
+ def parse_filter( self, attr, op, val ):
+ """
+ Attempt to parse filter as a custom/fn filter, then an orm filter, and
+ if neither work - raise an error.
+
+ :raises exceptions.RequestParameterInvalidException: if no functional or orm
+ filter can be parsed.
+ """
+ try:
+ # check for a custom filter
+ fn_filter = self._parse_fn_filter( attr, op, val )
+ if fn_filter is not None:
+ return fn_filter
+
+ # if no custom filter found, try to make an ORM filter
+ #note: have to use explicit is None here, bool( sqlalx.filter ) == False
+ orm_filter = self._parse_orm_filter( attr, op, val )
+ if orm_filter is not None:
+ return orm_filter
+
+ # by convention, assume most val parsers raise ValueError
+ except ValueError, val_err:
+ raise exceptions.RequestParameterInvalidException( 'unparsable value for filter',
+ column=attr, operation=op, value=val )
+
+ # if neither of the above work, raise an error with how-to info
+ #TODO: send back all valid filter keys in exception for added user help
+ raise exceptions.RequestParameterInvalidException( 'bad filter', column=attr, operation=op )
+
+ # ---- fn filters
+ def _parse_fn_filter( self, attr, op, val ):
+ """
+ """
+ # fn_filter_list is a dict: fn_filter_list[ attr ] = { 'opname1' : opfn1, 'opname2' : opfn2, etc. }
+
+ # attr, op is a nested dictionary pointing to the filter fn
+ attr_map = self.fn_filter_parsers.get( attr, None )
+ if not attr_map:
+ return None
+ allowed_ops = attr_map.get( 'op' )
+ # allowed ops is a map here, op => fn
+ filter_fn = allowed_ops.get( op, None )
+ if not filter_fn:
+ return None
+ # parse the val from string using the 'val' parser if present (otherwise, leave as string)
+ val_parser = attr_map.get( 'val', None )
+ if val_parser:
+ val = val_parser( val )
+
+ # curry/partial and fold the val in there now
+ return lambda i: filter_fn( i, val )
+
+ # ---- ORM filters
+ def _parse_orm_filter( self, attr, op, val ):
+ """
+ """
+ # orm_filter_list is a dict: orm_filter_list[ attr ] = <list of allowed ops>
+ # attr must be a whitelisted column
+ column = self.model_class.table.columns.get( attr )
+ column_map = self.orm_filter_parsers.get( attr, None )
+ if column is None or not column_map:
+ return None
+ # op must be whitelisted: contained in the list orm_filter_list[ attr ][ 'op' ]
+ allowed_ops = column_map.get( 'op' )
+ if op not in allowed_ops:
+ return None
+ op = self._convert_op_string_to_fn( column, op )
+ # parse the val from string using the 'val' parser if present (otherwise, leave as string)
+ val_parser = column_map.get( 'val', None )
+ if val_parser:
+ val = val_parser( val )
+
+ orm_filter = op( val )
+ return orm_filter
+
+ #: these are the easier/shorter string equivalents to the python operator fn names that need '__' around them
+ UNDERSCORED_OPS = ( 'lt', 'le', 'eq', 'ne', 'ge', 'gt' )
+ #UNCHANGED_OPS = ( 'like' )
+ def _convert_op_string_to_fn( self, column, op_string ):
+ """
+ """
+ # correct op_string to usable function key
+ fn_name = op_string
+ if op_string in self.UNDERSCORED_OPS:
+ fn_name = '__' + op_string + '__'
+ elif op_string == 'in':
+ fn_name = 'in_'
+
+ # get the column fn using the op_string and error if not a callable attr
+ #TODO: special case 'not in' - or disallow?
+ op_fn = getattr( column, fn_name, None )
+ if not op_fn or not callable( op_fn ):
+ return None
+ return op_fn
+
+ # --- more parsers! yay!
+#TODO: These should go somewhere central - we've got ~6 parser modules/sections now
+ #TODO: to annotatable
+ def _owner_annotation( self, item ):
+ """
+ Get the annotation by the item's owner.
+ """
+ if not item.user:
+ return None
+ for annotation in item.annotations:
+ if annotation.user == item.user:
+ return annotation.annotation
+ return None
+
+ def filter_annotation_contains( self, item, val ):
+ """
+ Test whether `val` is in the owner's annotation.
+ """
+ owner_annotation = self._owner_annotation( item )
+ if owner_annotation is None:
+ return False
+ return val in owner_annotation
+
+ #TODO: to taggable
+ def _filter_tags( self, item, val, fn_name='__eq__' ):
+ """
+ Test whether the string form of any tag matches `val` using the
+ string operation named by `fn_name` ('__eq__' or 'contains').
+ """
+ #TODO: which user is this? all?
+ for tag in item.tags:
+ tag_str = tag.user_tname
+ if tag.value is not None:
+ tag_str += ":" + tag.user_value
+ # note: 'contains' is not a str method - use the `in` operator for it
+ if fn_name == 'contains':
+ if val in tag_str:
+ return True
+ elif getattr( tag_str, fn_name )( val ):
+ return True
+ return False
+
+ def filter_has_partial_tag( self, item, val ):
+ """
+ Return True if any tag partially contains `val`.
+ """
+ return self._filter_tags( item, val, fn_name='contains' )
+
+ def filter_has_tag( self, item, val ):
+ """
+ Return True if any tag exactly equals `val`.
+ """
+ return self._filter_tags( item, val, fn_name='__eq__' )
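
The two tag filters above reduce to string matching on the rendered `name` or `name:value` form of a tag. A minimal sketch, with plain strings standing in for tag objects (an assumption for the demo):

```python
def tag_to_str(tname, value=None):
    # a tag renders as "name" or "name:value"
    return tname if value is None else tname + ':' + value

def filter_tags(tag_strs, val, partial=False):
    # partial=True mirrors filter_has_partial_tag (substring match);
    # otherwise filter_has_tag (exact equality)
    if partial:
        return any(val in t for t in tag_strs)
    return any(t == val for t in tag_strs)

tags = [tag_to_str('rna'), tag_to_str('group', 'lab9')]
assert filter_tags(tags, 'rna')                  # exact tag name
assert filter_tags(tags, 'lab', partial=True)    # substring of "group:lab9"
assert not filter_tags(tags, 'dna')
```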
+
+ def parse_bool( self, bool_string ):
+ """
+ Parse a boolean from a string.
+ """
+ #Be strict here to remove complexity of options.
+ if bool_string in ( 'True', True ):
+ return True
+ if bool_string in ( 'False', False ):
+ return False
+ raise ValueError( 'invalid boolean: ' + str( bool_string ) )
+
+ def parse_id_list( self, id_list_string, sep=',' ):
+ """
+ Split `id_list_string` at `sep`.
+ """
+ return id_list_string.split( sep )
+
+
# ==== Security Mixins
class AccessibleModelInterface( object ):
"""
@@ -794,8 +1086,9 @@
:raises exceptions.ItemAccessibilityException:
"""
+ raise exceptions.NotImplemented( "Abstract Interface Method" )
# just alias to by_user (easier/same thing)
- return self.by_user( trans, user, **kwargs )
+ #return self.by_user( trans, user, **kwargs )
def filter_owned( self, trans, user, **kwargs ):
"""
diff -r 8a234cd86e8a972a43f0e13617c9f09ae1216fee -r 68f0ed37e08942acb1bb84b63911b22d7d4b6f65 lib/galaxy/managers/histories.py
--- a/lib/galaxy/managers/histories.py
+++ b/lib/galaxy/managers/histories.py
@@ -63,7 +63,7 @@
return super( HistoryManager, self ).is_owner( trans, history, user )
#TODO: possibly to sharable
- def most_recent( self, trans, user, **kwargs ):
+ def most_recent( self, trans, user, filters=None, **kwargs ):
"""
Return the most recently updated history for the user.
"""
@@ -71,7 +71,8 @@
if not user:
return None if trans.history.deleted else trans.history
desc_update_time = self.model_class.table.c.update_time
- return self._query_by_user( trans, user, order_by=desc_update_time, limit=1, **kwargs ).first()
+ filters = self._munge_filters( filters, self.model_class.user_id == user.id )
+ return self.query( trans, filters=filters, order_by=desc_update_time, limit=1, **kwargs ).first()
# .... purgable
def purge( self, trans, history, flush=True, **kwargs ):
@@ -109,7 +110,7 @@
return self.set_current( trans, self.by_id( trans, history_id ) )
# .... serialization
- #TODO: move to serializer (i.e. history with contents attr)
+#TODO: move to serializer (i.e. history with contents attr)
def _get_history_data( self, trans, history ):
"""
Returns a dictionary containing ``history`` and ``contents``, serialized
@@ -342,7 +343,6 @@
super( HistoryDeserializer, self ).__init__( app )
self.history_manager = self.manager
- #assumes: incoming from json.loads and sanitized
def add_deserializers( self ):
super( HistoryDeserializer, self ).add_deserializers()
base.PurgableModelDeserializer.add_deserializers( self )
@@ -351,3 +351,45 @@
'name' : self.deserialize_basestring,
'genome_build' : self.deserialize_genome_build,
})
+
+
+class HistoryFilters( base.FilterParser ):
+ model_class = model.History
+
+ def _add_parsers( self ):
+ super( HistoryFilters, self )._add_parsers()
+ self.orm_filter_parsers.update({
+ #TODO: these three are (prob.) applicable to all models
+ 'id' : { 'op': ( 'in', ), 'val': self.parse_id_list },
+ # dates can be directly passed through the orm into a filter (no need to parse into datetime object)
+ 'create_time' : { 'op': ( 'le', 'ge' ) },
+ 'update_time' : { 'op': ( 'le', 'ge' ) },
+
+ # history specific
+ 'name' : { 'op': ( 'eq', 'contains', 'like' ) },
+ 'genome_build' : { 'op': ( 'eq', 'contains', 'like' ) },
+
+ #TODO: purgable
+ 'deleted' : { 'op': ( 'eq', ), 'val': self.parse_bool },
+ 'purged' : { 'op': ( 'eq', ), 'val': self.parse_bool },
+
+ #TODO: sharable
+ 'importable' : { 'op': ( 'eq', ), 'val': self.parse_bool },
+ 'published' : { 'op': ( 'eq', ), 'val': self.parse_bool },
+ 'slug' : { 'op': ( 'eq', 'contains', 'like' ) },
+ # choosing by user should prob. only be available to admins (most often we'll only need trans.user)
+ #'user' : { 'op': ( 'eq' ), 'val': self.parse_id_list },
+ })
+
+ #TODO: I'm not entirely convinced this (or tags) are a good idea for filters since they involve a/the user
+ self.fn_filter_parsers.update({
+ #TODO: add this in annotatable mixin
+ 'annotation' : { 'op': { 'in' : self.filter_annotation_contains, } },
+ #TODO: add this in taggable mixin
+ 'tag' : {
+ 'op': {
+ 'eq' : self.filter_has_tag,
+ 'in' : self.filter_has_partial_tag,
+ }
+ }
+ })
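
Each entry in the parser dicts above whitelists the allowed ops for an attribute and optionally names a value parser. A self-contained sketch of that lookup (note the trailing commas: in Python `( 'eq' )` is just the string `'eq'`, so a single-op whitelist needs `( 'eq', )`):

```python
def parse_bool(bool_string):
    # strict: only the strings 'True'/'False' (or real bools) are accepted
    if bool_string in ('True', True):
        return True
    if bool_string in ('False', False):
        return False
    raise ValueError('invalid boolean: ' + str(bool_string))

# trimmed-down version of HistoryFilters.orm_filter_parsers
ORM_FILTER_PARSERS = {
    'name':    {'op': ('eq', 'contains', 'like')},
    'deleted': {'op': ('eq',), 'val': parse_bool},
}

def parse_orm_filter(attr, op, val):
    """Return (attr, op, parsed_val) if whitelisted, else None."""
    spec = ORM_FILTER_PARSERS.get(attr)
    if spec is None or op not in spec['op']:
        return None
    val_parser = spec.get('val')
    if val_parser:
        val = val_parser(val)
    return (attr, op, val)

assert parse_orm_filter('deleted', 'eq', 'True') == ('deleted', 'eq', True)
assert parse_orm_filter('name', 'lt', 'x') is None     # op not whitelisted
assert parse_orm_filter('user_id', 'eq', '1') is None  # attr not whitelisted
```

In the real class the third element becomes a SQLAlchemy clause rather than a triple; the whitelist and value-parsing behavior is the same.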
diff -r 8a234cd86e8a972a43f0e13617c9f09ae1216fee -r 68f0ed37e08942acb1bb84b63911b22d7d4b6f65 lib/galaxy/managers/ratable.py
--- a/lib/galaxy/managers/ratable.py
+++ b/lib/galaxy/managers/ratable.py
@@ -16,6 +16,9 @@
#TODO: most of this seems to be covered by item_attrs.UsesItemRatings
+ #def by_user( self, trans, user, **kwargs ):
+ # pass
+
class RatableSerializer( object ):
@@ -33,6 +36,7 @@
"""
pass
+
class RatableDeserializer( object ):
def add_deserializers( self ):
diff -r 8a234cd86e8a972a43f0e13617c9f09ae1216fee -r 68f0ed37e08942acb1bb84b63911b22d7d4b6f65 lib/galaxy/managers/sharable.py
--- a/lib/galaxy/managers/sharable.py
+++ b/lib/galaxy/managers/sharable.py
@@ -41,22 +41,14 @@
self.user_manager = users.UserManager( app )
# .... has a user
- def _query_by_user( self, trans, user, filters=None, **kwargs ):
+ def by_user( self, trans, user, filters=None, **kwargs ):
"""
- Return query for all items (of model_class type) associated with the given
Return a list of all items (of model_class type) associated with the given
`user`.
"""
user_filter = self.model_class.user_id == user.id
filters=self._munge_filters( user_filter, filters )
- return self.query( trans, filters=filters, **kwargs )
-
- def by_user( self, trans, user, **kwargs ):
- """
- Return list for all items (of model_class type) associated with the given
- `user`.
- """
- query = self._query_by_user( trans, user, **kwargs )
- return self.list( trans, query=query, **kwargs )
+ return self.list( trans, filters=filters, **kwargs )
# .... owned model interface
def is_owner( self, trans, item, user ):
diff -r 8a234cd86e8a972a43f0e13617c9f09ae1216fee -r 68f0ed37e08942acb1bb84b63911b22d7d4b6f65 lib/galaxy/managers/taggable.py
--- a/lib/galaxy/managers/taggable.py
+++ b/lib/galaxy/managers/taggable.py
@@ -15,6 +15,9 @@
#TODO: most of this can be done by delegating to the TagManager?
+ #def by_user( self, trans, user, **kwargs ):
+ # pass
+
class TaggableSerializer( object ):
diff -r 8a234cd86e8a972a43f0e13617c9f09ae1216fee -r 68f0ed37e08942acb1bb84b63911b22d7d4b6f65 lib/galaxy/managers/users.py
--- a/lib/galaxy/managers/users.py
+++ b/lib/galaxy/managers/users.py
@@ -145,37 +145,9 @@
return user
# ---- current
- def is_current_user( self, trans, user ):
- """
- Return True if this user is the trans' current user.
- """
+ def current_user( self, trans ):
# define here for single point of change and make more readable
- return user == trans.user
-
- def is_current_user_anonymous( self, trans ):
- """
- Return True if the current user is anonymous.
- """
- return self.is_anonymous( trans.user )
-
- def is_current_user_admin( self, trans ):
- """
- Return True if the current user is admin.
- """
- return self.is_admin( trans, trans.user )
-
- # ---- User-created notes/metadata on other models
- #def tags( self, trans, user, **kwargs ):
- # """
- # Return all tags created by this user.
- # """
- # pass
-
- #def annotations( self, trans, user, **kwargs ):
- # """
- # Return all annotations created by this user.
- # """
- # pass
+ return trans.user
# ---- api keys
def create_api_key( self, trans, user ):
diff -r 8a234cd86e8a972a43f0e13617c9f09ae1216fee -r 68f0ed37e08942acb1bb84b63911b22d7d4b6f65 lib/galaxy/web/base/controller.py
--- a/lib/galaxy/web/base/controller.py
+++ b/lib/galaxy/web/base/controller.py
@@ -100,6 +100,7 @@
def get_role( self, trans, id, check_ownership=False, check_accessible=False, deleted=None ):
return self.get_object( trans, id, 'Role', check_ownership=False, check_accessible=False, deleted=deleted )
+ # ---- parsing query params
def decode_id( self, id ):
try:
return self.app.security.decode_id( id )
@@ -115,6 +116,53 @@
"""
return trans.security.encode_all_ids( rval, recursive=recursive )
+ def parse_filter_params( self, qdict, filter_attr_key='q', filter_value_key='qv', attr_op_split_char='-' ):
+ """
+ Parse filter triples of ( attribute, operator, value ) from the query
+ string, e.g. ?q=create_time-lt&qv=2015-01-29 yields
+ [ ( 'create_time', 'lt', '2015-01-29' ) ].
+ """
+ #TODO: import DEFAULT_OP from FilterParser
+ DEFAULT_OP = 'eq'
+ if filter_attr_key not in qdict:
+ return []
+ #precondition: attrs/value pairs are in-order in the qstring
+ attrs = qdict.get( filter_attr_key )
+ if not isinstance( attrs, list ):
+ attrs = [ attrs ]
+ # ops are strings placed after the attr strings and separated by a split char (e.g. 'create_time-lt')
+ # ops are optional and default to 'eq'
+ reparsed_attrs = []
+ ops = []
+ for attr in attrs:
+ op = DEFAULT_OP
+ if attr_op_split_char in attr:
+ #note: only split on the last (e.g. q=community-tags-in&qv=rna yields ( 'community-tags', 'in', 'rna' ))
+ attr, op = attr.rsplit( attr_op_split_char, 1 )
+ ops.append( op )
+ reparsed_attrs.append( attr )
+ attrs = reparsed_attrs
+
+ values = qdict.get( filter_value_key, [] )
+ if not isinstance( values, list ):
+ values = [ values ]
+ #TODO: it may be more helpful to the consumer if we error on incomplete 3-tuples
+ # (instead of relying on zip to shorten)
+ return zip( attrs, ops, values )
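
The q/qv parsing above can be exercised standalone. This sketch reproduces the attr/op split on the last `-` and the in-order zip of attrs and values (Python 3 here, so `list(zip(...))` where the Python 2 original returns a list directly):

```python
def parse_filter_params(qdict, attr_key='q', value_key='qv', split_char='-'):
    DEFAULT_OP = 'eq'
    if attr_key not in qdict:
        return []
    attrs = qdict[attr_key]
    if not isinstance(attrs, list):
        attrs = [attrs]
    parsed_attrs, ops = [], []
    for attr in attrs:
        op = DEFAULT_OP
        if split_char in attr:
            # split only on the last '-': 'community-tags-in' -> ('community-tags', 'in')
            attr, op = attr.rsplit(split_char, 1)
        ops.append(op)
        parsed_attrs.append(attr)
    values = qdict.get(value_key, [])
    if not isinstance(values, list):
        values = [values]
    # zip silently drops incomplete triples, as the TODO above notes
    return list(zip(parsed_attrs, ops, values))

# e.g. ?q=create_time-lt&qv=2015-01-29&q=name&qv=foo
triples = parse_filter_params({'q': ['create_time-lt', 'name'],
                               'qv': ['2015-01-29', 'foo']})
assert triples == [('create_time', 'lt', '2015-01-29'), ('name', 'eq', 'foo')]
```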
+
+ def parse_limit_offset( self, qdict ):
+ """
+ Return a ( limit, offset ) tuple parsed from the query string; each is
+ a non-negative int or None if missing or unparseable.
+ """
+ def _parse_pos_int( i ):
+ try:
+ new_val = int( i )
+ if new_val >= 0:
+ return new_val
+ except ( TypeError, ValueError ):
+ pass
+ return None
+
+ limit = _parse_pos_int( qdict.get( 'limit', None ) )
+ offset = _parse_pos_int( qdict.get( 'offset', None ) )
+ return ( limit, offset )
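
The limit/offset parsing tolerates missing or malformed values by returning None for that slot, which the managers treat as "no limit"/"no offset". A standalone sketch:

```python
def parse_limit_offset(qdict):
    def _parse_pos_int(i):
        # None for anything that is not a non-negative int
        try:
            new_val = int(i)
            if new_val >= 0:
                return new_val
        except (TypeError, ValueError):
            pass
        return None

    return (_parse_pos_int(qdict.get('limit')),
            _parse_pos_int(qdict.get('offset')))

assert parse_limit_offset({'limit': '10', 'offset': '2'}) == (10, 2)
assert parse_limit_offset({'limit': '-1'}) == (None, None)   # negative rejected
assert parse_limit_offset({'limit': 'wot'}) == (None, None)  # unparseable
assert parse_limit_offset({}) == (None, None)                # missing
```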
+
Root = BaseController
diff -r 8a234cd86e8a972a43f0e13617c9f09ae1216fee -r 68f0ed37e08942acb1bb84b63911b22d7d4b6f65 lib/galaxy/webapps/galaxy/api/histories.py
--- a/lib/galaxy/webapps/galaxy/api/histories.py
+++ b/lib/galaxy/webapps/galaxy/api/histories.py
@@ -19,7 +19,7 @@
from galaxy.web.base.controller import ExportsHistoryMixin
from galaxy.web.base.controller import ImportsHistoryMixin
-from galaxy.managers import histories, citations
+from galaxy.managers import histories, citations, users
from galaxy import util
from galaxy.util import string_as_bool
@@ -35,9 +35,11 @@
def __init__( self, app ):
super( HistoriesController, self ).__init__( app )
self.citations_manager = citations.CitationsManager( app )
+ self.user_manager = users.UserManager( app )
self.history_manager = histories.HistoryManager( app )
self.history_serializer = histories.HistorySerializer( app )
self.history_deserializer = histories.HistoryDeserializer( app )
+ self.history_filters = histories.HistoryFilters( app )
@expose_api_anonymous
def index( self, trans, deleted='False', **kwd ):
@@ -55,21 +57,58 @@
:rtype: list
:returns: list of dictionaries containing summary history information
"""
+ serialization_params = self._parse_serialization_params( kwd, 'summary' )
+ limit, offset = self.parse_limit_offset( kwd )
+ filter_params = self.parse_filter_params( kwd )
+
+ # bail early with current history if user is anonymous
+ current_user = self.user_manager.current_user( trans )
+ if self.user_manager.is_anonymous( current_user ):
+ current_history = self.history_manager.get_current( trans )
+ #note: ignores filters, limit, offset
+ return [ self.history_serializer.serialize_to_view( trans, current_history, **serialization_params ) ]
+
+ filters = []
+ # support the old default of not-returning/filtering-out deleted histories
+ filters += self._get_deleted_filter( deleted, filter_params )
+ # users are limited to requesting only their own histories (here)
+ filters += [ self.app.model.History.user == current_user ]
+ # and any sent in from the query string
+ filters += self.history_filters.parse_filters( filter_params )
+
+ #TODO: eventually make order_by a param as well
+ order_by = sqlalchemy.desc( self.app.model.History.create_time )
+ histories = self.history_manager.list( trans, filters=filters, order_by=order_by, limit=limit, offset=offset )
+
rval = []
- serialization_params = self._parse_serialization_params( kwd, 'summary' )
-
- deleted_filter = ( self.app.model.History.deleted == False )
- if string_as_bool( deleted ):
- deleted_filter = ( self.app.model.History.deleted == True )
-
- #TODO: create time? is this right?
- order_by = sqlalchemy.desc( self.app.model.History.create_time )
- histories = self.history_manager.by_user( trans, user=trans.user, filters=deleted_filter, order_by=order_by )
for history in histories:
history_dict = self.history_serializer.serialize_to_view( trans, history, **serialization_params )
rval.append( history_dict )
+ return rval
- return rval
+ def _get_deleted_filter( self, deleted, filter_params ):
+ #TODO: this should all be removed (along with the default) in v2
+ # support the old default of not-returning/filtering-out deleted histories
+ try:
+ # the consumer must explicitly ask for both deleted and non-deleted
+ # but pull it from the parsed params (as the filter system will error on None)
+ deleted_filter_index = filter_params.index( ( 'deleted', 'eq', 'None' ) )
+ filter_params.pop( deleted_filter_index )
+ return []
+ except ValueError:
+ pass
+
+ # the deleted string bool was also used as an 'include deleted' flag
+ if deleted in ( 'True', 'true' ):
+ return [ self.app.model.History.deleted == True ]
+
+ # the third option not handled here is 'return only deleted'
+ # if this is passed in (in the form below), simply return and let the filter system handle it
+ if ( 'deleted', 'eq', 'True' ) in filter_params:
+ return []
+
+ # otherwise, do the default filter of removing the deleted histories
+ return [ self.app.model.History.deleted == False ]
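
The three-way `deleted` handling above can be summarized standalone; string markers stand in for the real ORM clauses (an assumption for illustration):

```python
def get_deleted_filter(deleted, filter_params):
    # an explicit ('deleted', 'eq', 'None') triple means: return both
    # deleted and non-deleted; pop it so the filter system never sees it
    try:
        filter_params.remove(('deleted', 'eq', 'None'))
        return []
    except ValueError:
        pass
    # the old-style string-bool flag: return only deleted histories
    if deleted in ('True', 'true'):
        return ['deleted == True']
    # an explicit deleted=True triple is left for the filter system itself
    if ('deleted', 'eq', 'True') in filter_params:
        return []
    # default: filter deleted histories out
    return ['deleted == False']

assert get_deleted_filter('False', [('deleted', 'eq', 'None')]) == []
assert get_deleted_filter('true', []) == ['deleted == True']
assert get_deleted_filter('False', [('deleted', 'eq', 'True')]) == []
assert get_deleted_filter('False', []) == ['deleted == False']
```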
@expose_api_anonymous
def show( self, trans, id, deleted='False', **kwd ):
diff -r 8a234cd86e8a972a43f0e13617c9f09ae1216fee -r 68f0ed37e08942acb1bb84b63911b22d7d4b6f65 test/unit/managers/mock.py
--- a/test/unit/managers/mock.py
+++ b/test/unit/managers/mock.py
@@ -18,6 +18,9 @@
# =============================================================================
+class OpenObject( object ):
+ pass
+
class MockAppConfig( Bunch ):
def __init__( self, **kwargs ):
Bunch.__init__( self, **kwargs )
diff -r 8a234cd86e8a972a43f0e13617c9f09ae1216fee -r 68f0ed37e08942acb1bb84b63911b22d7d4b6f65 test/unit/managers/test_HistoryManager.py
--- a/test/unit/managers/test_HistoryManager.py
+++ b/test/unit/managers/test_HistoryManager.py
@@ -20,6 +20,7 @@
import mock
from test_ModelManager import BaseTestCase
from galaxy.managers.histories import HistoryManager
+from galaxy.managers.histories import HistoryFilters
# =============================================================================
@@ -92,9 +93,6 @@
user_histories = self.history_mgr.by_user( self.trans, owner )
self.assertEqual( user_histories, [ item1, item2 ] )
- query = self.history_mgr._query_by_user( self.trans, owner )
- self.assertEqual( query.all(), user_histories )
-
def test_ownable( self ):
owner = self.user_mgr.create( self.trans, **user2_data )
non_owner = self.user_mgr.create( self.trans, **user3_data )
@@ -307,6 +305,239 @@
self.assertEqual( self.history_mgr.get_current( self.trans ), history1 )
+ # ---- functional and orm filter splitting and resolution
+ def test_parse_filters( self ):
+ filter_parser = HistoryFilters( self.app )
+ filters = filter_parser.parse_filters([
+ ( 'name', 'eq', 'wot' ),
+ ( 'deleted', 'eq', 'True' ),
+ ( 'annotation', 'in', 'hrrmm' )
+ ])
+ self.log( 'both orm and fn filters should be parsed and returned' )
+ self.assertEqual( len( filters ), 3 )
+
+ self.log( 'values should be parsed' )
+ self.assertEqual( filters[1].right.value, True )
+
+ def test_parse_filters_invalid_filters( self ):
+ filter_parser = HistoryFilters( self.app )
+ self.log( 'should error on non-column attr')
+ self.assertRaises( exceptions.RequestParameterInvalidException, filter_parser.parse_filters, [
+ ( 'merp', 'eq', 'wot' ),
+ ])
+ self.log( 'should error on non-whitelisted attr')
+ self.assertRaises( exceptions.RequestParameterInvalidException, filter_parser.parse_filters, [
+ ( 'user_id', 'eq', 'wot' ),
+ ])
+ self.log( 'should error on non-whitelisted op')
+ self.assertRaises( exceptions.RequestParameterInvalidException, filter_parser.parse_filters, [
+ ( 'name', 'lt', 'wot' ),
+ ])
+ self.log( 'should error on non-listed fn op')
+ self.assertRaises( exceptions.RequestParameterInvalidException, filter_parser.parse_filters, [
+ ( 'annotation', 'like', 'wot' ),
+ ])
+ self.log( 'should error on val parsing error')
+ self.assertRaises( exceptions.RequestParameterInvalidException, filter_parser.parse_filters, [
+ ( 'deleted', 'eq', 'true' ),
+ ])
+
+ def test_orm_filter_parsing( self ):
+ filter_parser = HistoryFilters( self.app )
+ user2 = self.user_mgr.create( self.trans, **user2_data )
+ history1 = self.history_mgr.create( self.trans, name='history1', user=user2 )
+ history2 = self.history_mgr.create( self.trans, name='history2', user=user2 )
+ history3 = self.history_mgr.create( self.trans, name='history3', user=user2 )
+
+ filters = filter_parser.parse_filters([
+ ( 'name', 'like', 'history%' ),
+ ])
+ histories = self.history_mgr.list( self.trans, filters=filters )
+ #for h in histories:
+ # print h.name
+ self.assertEqual( histories, [ history1, history2, history3 ])
+
+ filters = filter_parser.parse_filters([ ( 'name', 'like', '%2' ), ])
+ self.assertEqual( self.history_mgr.list( self.trans, filters=filters ), [ history2 ])
+
+ filters = filter_parser.parse_filters([ ( 'name', 'eq', 'history2' ), ])
+ self.assertEqual( self.history_mgr.list( self.trans, filters=filters ), [ history2 ])
+
+ self.history_mgr.update( self.trans, history1, dict( deleted=True ) )
+ filters = filter_parser.parse_filters([ ( 'deleted', 'eq', 'True' ), ])
+ self.assertEqual( self.history_mgr.list( self.trans, filters=filters ), [ history1 ])
+ filters = filter_parser.parse_filters([ ( 'deleted', 'eq', 'False' ), ])
+ self.assertEqual( self.history_mgr.list( self.trans, filters=filters ), [ history2, history3 ])
+ self.assertEqual( self.history_mgr.list( self.trans ), [ history1, history2, history3 ])
+
+ self.history_mgr.update( self.trans, history3, dict( deleted=True ) )
+ self.history_mgr.update( self.trans, history1, dict( importable=True ) )
+ self.history_mgr.update( self.trans, history2, dict( importable=True ) )
+ filters = filter_parser.parse_filters([
+ ( 'deleted', 'eq', 'True' ),
+ ( 'importable', 'eq', 'True' ),
+ ])
+ self.assertEqual( self.history_mgr.list( self.trans, filters=filters ), [ history1 ])
+ self.assertEqual( self.history_mgr.list( self.trans ), [ history1, history2, history3 ])
+
+ def test_fn_filter_parsing( self ):
+ filter_parser = HistoryFilters( self.app )
+ user2 = self.user_mgr.create( self.trans, **user2_data )
+ history1 = self.history_mgr.create( self.trans, name='history1', user=user2 )
+ history2 = self.history_mgr.create( self.trans, name='history2', user=user2 )
+ history3 = self.history_mgr.create( self.trans, name='history3', user=user2 )
+
+ filters = filter_parser.parse_filters([ ( 'annotation', 'in', 'no play' ), ])
+ anno_filter = filters[0]
+
+ history3.add_item_annotation( self.trans.sa_session, user2, history3, "All work and no play" )
+ self.trans.sa_session.flush()
+
+ self.assertTrue( anno_filter( history3 ) )
+ self.assertFalse( anno_filter( history2 ) )
+
+ self.assertEqual( self.history_mgr.list( self.trans, filters=filters ), [ history3 ])
+
+ self.log( 'should allow combinations of orm and fn filters' )
+ self.history_mgr.update( self.trans, history3, dict( importable=True ) )
+ self.history_mgr.update( self.trans, history2, dict( importable=True ) )
+ history1.add_item_annotation( self.trans.sa_session, user2, history1, "All work and no play" )
+ self.trans.sa_session.flush()
+
+ shining_examples = self.history_mgr.list( self.trans, filters=filter_parser.parse_filters([
+ ( 'importable', 'eq', 'True' ),
+ ( 'annotation', 'in', 'no play' ),
+ ]))
+ self.assertEqual( shining_examples, [ history3 ])
+
+ def test_fn_filter_currying( self ):
+ filter_parser = HistoryFilters( self.app )
+ filter_parser.fn_filter_parsers = {
+ 'name_len' : { 'op': { 'lt' : lambda i, v: len( i.name ) < v }, 'val': int }
+ }
+ self.log( 'should be 1 fn filter parser now' )
+ self.assertEqual( len( filter_parser.fn_filter_parsers ), 1 )
+ filters = filter_parser.parse_filters([
+ ( 'name_len', 'lt', '4' )
+ ])
+ self.log( 'should have parsed out a single filter' )
+ self.assertEqual( len( filters ), 1 )
+
+ filter_ = filters[0]
+ fake = mock.OpenObject()
+ fake.name = '123'
+ self.log( '123 should return true through the filter' )
+ self.assertTrue( filter_( fake ) )
+ fake.name = '1234'
+ self.log( '1234 should return false through the filter' )
+ self.assertFalse( filter_( fake ) )
+
+ def test_list( self ):
+ """
+ Test limit and offset in conjunction with both orm and fn filtering.
+ """
+ filter_parser = HistoryFilters( self.app )
+ user2 = self.user_mgr.create( self.trans, **user2_data )
+ history1 = self.history_mgr.create( self.trans, name='history1', user=user2 )
+ history2 = self.history_mgr.create( self.trans, name='history2', user=user2 )
+ history3 = self.history_mgr.create( self.trans, name='history3', user=user2 )
+ history4 = self.history_mgr.create( self.trans, name='history4', user=user2 )
+
+ self.history_mgr.delete( self.trans, history1 )
+ self.history_mgr.delete( self.trans, history2 )
+ self.history_mgr.delete( self.trans, history3 )
+
+ test_annotation = "testing"
+ history2.add_item_annotation( self.trans.sa_session, user2, history2, test_annotation )
+ self.trans.sa_session.flush()
+ history3.add_item_annotation( self.trans.sa_session, user2, history3, test_annotation )
+ self.trans.sa_session.flush()
+ history4.add_item_annotation( self.trans.sa_session, user2, history4, test_annotation )
+ self.trans.sa_session.flush()
+
+ all_histories = [ history1, history2, history3, history4 ]
+ deleted_and_annotated = [ history2, history3 ]
+
+ self.log( "no offset, no limit should work" )
+ self.assertEqual( self.history_mgr.list( self.trans, offset=None, limit=None ), all_histories )
+ self.assertEqual( self.history_mgr.list( self.trans ), all_histories )
+ self.log( "no offset, limit should work" )
+ self.assertEqual( self.history_mgr.list( self.trans, limit=2 ), [ history1, history2 ] )
+ self.log( "offset, no limit should work" )
+ self.assertEqual( self.history_mgr.list( self.trans, offset=1 ), [ history2, history3, history4 ] )
+ self.log( "offset, limit should work" )
+ self.assertEqual( self.history_mgr.list( self.trans, offset=1, limit=1 ), [ history2 ] )
+
+ self.log( "zero limit should return empty list" )
+ self.assertEqual( self.history_mgr.list( self.trans, limit=0 ), [] )
+ self.log( "past len offset should return empty list" )
+ self.assertEqual( self.history_mgr.list( self.trans, offset=len( all_histories ) ), [] )
+ self.log( "negative limit should return full list" )
+ self.assertEqual( self.history_mgr.list( self.trans, limit=-1 ), all_histories )
+ self.log( "negative offset should return full list" )
+ self.assertEqual( self.history_mgr.list( self.trans, offset=-1 ), all_histories )
+
+ filters = [ model.History.deleted == True ]
+ self.log( "orm filtered, no offset, no limit should work" )
+ found = self.history_mgr.list( self.trans, filters=filters )
+ self.assertEqual( found, [ history1, history2, history3 ] )
+ self.log( "orm filtered, no offset, limit should work" )
+ found = self.history_mgr.list( self.trans, filters=filters, limit=2 )
+ self.assertEqual( found, [ history1, history2 ] )
+ self.log( "orm filtered, offset, no limit should work" )
+ found = self.history_mgr.list( self.trans, filters=filters, offset=1 )
+ self.assertEqual( found, [ history2, history3 ] )
+ self.log( "orm filtered, offset, limit should work" )
+ found = self.history_mgr.list( self.trans, filters=filters, offset=1, limit=1 )
+ self.assertEqual( found, [ history2 ] )
+
+ filters = filter_parser.parse_filters([ ( 'annotation', 'in', test_annotation ) ])
+ self.log( "fn filtered, no offset, no limit should work" )
+ found = self.history_mgr.list( self.trans, filters=filters )
+ self.assertEqual( found, [ history2, history3, history4 ] )
+ self.log( "fn filtered, no offset, limit should work" )
+ found = self.history_mgr.list( self.trans, filters=filters, limit=2 )
+ self.assertEqual( found, [ history2, history3 ] )
+ self.log( "fn filtered, offset, no limit should work" )
+ found = self.history_mgr.list( self.trans, filters=filters, offset=1 )
+ self.assertEqual( found, [ history3, history4 ] )
+ self.log( "fn filtered, offset, limit should work" )
+ found = self.history_mgr.list( self.trans, filters=filters, offset=1, limit=1 )
+ self.assertEqual( found, [ history3 ] )
+
+ filters = filter_parser.parse_filters([
+ ( 'deleted', 'eq', 'True' ),
+ ( 'annotation', 'in', test_annotation )
+ ])
+ self.log( "orm and fn filtered, no offset, no limit should work" )
+ found = self.history_mgr.list( self.trans, filters=filters )
+ self.assertEqual( found, [ history2, history3 ] )
+ self.log( "orm and fn filtered, no offset, limit should work" )
+ found = self.history_mgr.list( self.trans, filters=filters, limit=1 )
+ self.assertEqual( found, [ history2 ] )
+ self.log( "orm and fn filtered, offset, no limit should work" )
+ found = self.history_mgr.list( self.trans, filters=filters, offset=1 )
+ self.assertEqual( found, [ history3 ] )
+ self.log( "orm and fn filtered, offset, limit should work" )
+ found = self.history_mgr.list( self.trans, filters=filters, offset=1, limit=1 )
+ self.assertEqual( found, [ history3 ] )
+
+ self.log( "orm and fn filtered, zero limit should return empty list" )
+ found = self.history_mgr.list( self.trans, filters=filters, limit=0 )
+ self.assertEqual( found, [] )
+ self.log( "orm and fn filtered, past len offset should return empty list" )
+ found = self.history_mgr.list( self.trans, filters=filters, offset=len( deleted_and_annotated ) )
+ self.assertEqual( found, [] )
+ self.log( "orm and fn filtered, negative limit should return full list" )
+ found = self.history_mgr.list( self.trans, filters=filters, limit=-1 )
+ self.assertEqual( found, deleted_and_annotated )
+ self.log( "orm and fn filtered, negative offset should return full list" )
+ found = self.history_mgr.list( self.trans, filters=filters, offset=-1 )
+ self.assertEqual( found, deleted_and_annotated )
+
+
+
+
# =============================================================================
if __name__ == '__main__':
# or more generally, nosetests test_resourcemanagers.py -s -v
diff -r 8a234cd86e8a972a43f0e13617c9f09ae1216fee -r 68f0ed37e08942acb1bb84b63911b22d7d4b6f65 test/unit/managers/test_ModelManager.py
--- a/test/unit/managers/test_ModelManager.py
+++ b/test/unit/managers/test_ModelManager.py
@@ -22,6 +22,7 @@
import mock
from galaxy.managers.users import UserManager
+from galaxy.managers import base
# =============================================================================
admin_email = 'admin(a)admin.admin'
diff -r 8a234cd86e8a972a43f0e13617c9f09ae1216fee -r 68f0ed37e08942acb1bb84b63911b22d7d4b6f65 test/unit/managers/test_UserManager.py
--- a/test/unit/managers/test_UserManager.py
+++ b/test/unit/managers/test_UserManager.py
@@ -110,8 +110,8 @@
user3 = self.user_mgr.create( self.trans, **user3_data )
self.log( "should be able to tell if a user is the current (trans) user" )
- self.assertTrue( self.user_mgr.is_current_user( self.trans, self.admin_user ) )
- self.assertFalse( self.user_mgr.is_current_user( self.trans, user2 ) )
+ self.assertEqual( self.user_mgr.current_user( self.trans ), self.admin_user )
+ self.assertNotEqual( self.user_mgr.current_user( self.trans ), user2 )
def test_api_keys( self ):
user2 = self.user_mgr.create( self.trans, **user2_data )
Repository URL: https://bitbucket.org/galaxy/galaxy-central/
--
This is a commit notification from bitbucket.org. You are receiving
this because you have the service enabled, addressing the recipient of
this email.
3 new commits in galaxy-central:
https://bitbucket.org/galaxy/galaxy-central/commits/156321d8d23b/
Changeset: 156321d8d23b
User: nsoranzo
Date: 2015-01-29 13:52:19+00:00
Summary: Fix INTEGRATED_TOOL_PANEL_DESCRIPTION.
Affected #: 1 file
diff -r 2171bcc223ca38613f50bba79df7739aefeb701b -r 156321d8d23bb777be1b5dde6b89c1bb776b9ddd lib/galaxy/tools/toolbox/base.py
--- a/lib/galaxy/tools/toolbox/base.py
+++ b/lib/galaxy/tools/toolbox/base.py
@@ -38,13 +38,17 @@
INTEGRATED_TOOL_PANEL_DESCRIPTION = """
-This is Galaxy's integrated tool panel and probably should not be modified
-directly. It will be regenerated each time Galaxy starts up. To modify locally
-managed tools (e.g. from tool_conf.xml) modify that file directly and restart
-Galaxy. Whenever possible Tool Shed managed tools (e.g. from shed_tool_conf.xml)
-should be managed from within the Galaxy interface of via is UI - but if changes
-are nessecary (such as to hide a tool or re-assign its section) modify that file
-and restart Galaxy.
+This is Galaxy's integrated tool panel and should be modified directly only for
+reordering tools inside a section. Each time Galaxy starts up, this file is
+synchronized with the various tool config files: tools, sections and labels
+added to one of these files will also be added here in the appropriate place,
+while elements removed from the tool config files will be correspondingly
+deleted from this file.
+To modify locally managed tools (e.g. from tool_conf.xml) modify that file
+directly and restart Galaxy. Whenever possible Tool Shed managed tools (e.g.
+from shed_tool_conf.xml) should be managed from within the Galaxy interface or
+via its API - but if changes are necessary (such as to hide a tool or re-assign
+its section) modify that file and restart Galaxy.
"""
https://bitbucket.org/galaxy/galaxy-central/commits/d55ccd0bd71e/
Changeset: d55ccd0bd71e
User: nsoranzo
Date: 2015-01-29 14:43:59+00:00
Summary: Enhancements for galaxy.ini documentation
Affected #: 1 file
diff -r 156321d8d23bb777be1b5dde6b89c1bb776b9ddd -r d55ccd0bd71e688c69753edf15cbff2f6a2b82ba config/galaxy.ini.sample
--- a/config/galaxy.ini.sample
+++ b/config/galaxy.ini.sample
@@ -42,7 +42,8 @@
# Number of threads in the web server thread pool.
#threadpool_workers = 10
-# Set the number of seconds a thread can work before you should kill it (assuming it will never finish) to 3 hours.
+# Set the number of seconds a thread can work before you should kill it
+# (assuming it will never finish) to 3 hours. Default is 600 (10 minutes).
threadpool_kill_thread_limit = 10800
# ---- Filters --------------------------------------------------------------
@@ -172,10 +173,11 @@
# install from in the admin interface (.sample used if default does not exist).
#tool_sheds_config_file = config/tool_sheds_conf.xml
-# If the following option is set to True - Galaxy will monitor individual tools
-# and tool directories specified in tool_conf.xml for changes and reload these
-# tools. Watchdog must be installed and available to Galaxy to use this option.
-# See https://pypi.python.org/pypi/watchdog.
+# Enable monitoring of tools and tool directories listed in any tool config file
+# specified in tool_config_file option. If changes are found, tools are
+# automatically reloaded.
+# Watchdog ( https://pypi.python.org/pypi/watchdog ) must be installed and
+# available to Galaxy to use this option.
#watch_tools = False
# Enable automatic polling of relative tool sheds to see if any updates
@@ -685,6 +687,8 @@
# used by the algorithm that encodes and decodes these values. It can be any
# string. If left unchanged, anyone could construct a cookie that would grant
# them access to others' sessions.
+# One simple way to generate a value for this is with the shell command:
+# python -c 'import time; print time.time()' | md5sum | cut -f 1 -d ' '
#id_secret = USING THE DEFAULT IS NOT SECURE!
# User authentication can be delegated to an upstream proxy server (usually
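The `id_secret` hunk in the changeset above adds a shell one-liner written in Python 2 syntax (`print time.time()`). For readers on Python 3, a rough equivalent is sketched below; this is an editor's sketch, not part of the commit. Note the shell pipeline feeds `md5sum` the printed float plus its trailing newline, which this sketch mimics; either way the result is a 32-character hex digest suitable as a starting point for `id_secret`.

```python
import hashlib
import time

# Python 3 port of the id_secret one-liner suggested in the diff above.
# The shell version md5sums print's output including the trailing newline,
# so we append "\n" before hashing to stay close to it.
secret = hashlib.md5((str(time.time()) + "\n").encode("ascii")).hexdigest()
print(secret)
print(len(secret))  # 32
```

(For a production server, a cryptographic source such as `secrets.token_hex()` would be a stronger choice than a timestamp hash; the sketch only mirrors what the sample file documents.)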
https://bitbucket.org/galaxy/galaxy-central/commits/8a234cd86e8a/
Changeset: 8a234cd86e8a
User: nsoranzo
Date: 2015-01-29 14:47:47+00:00
Summary: Whitespace fixes. 80 character limit. Small rewordings.
Affected #: 1 file
diff -r d55ccd0bd71e688c69753edf15cbff2f6a2b82ba -r 8a234cd86e8a972a43f0e13617c9f09ae1216fee config/galaxy.ini.sample
--- a/config/galaxy.ini.sample
+++ b/config/galaxy.ini.sample
@@ -8,7 +8,7 @@
# Throughout this sample configuration file, except where stated otherwise,
# uncommented values override the default if left unset, whereas commented
-# values are set to the default value. Relative paths are relative to the root
+# values are set to the default value. Relative paths are relative to the root
# Galaxy directory.
#
# Examples of many of these options are explained in more detail in the wiki:
@@ -118,10 +118,10 @@
#database_query_profiling_proxy = False
# By default, Galaxy will use the same database to track user data and
-# tool shed install data. There are many situtations in which it is
+# tool shed install data. There are many situtations in which it is
# valuable to seperate these - for instance bootstrapping fresh Galaxy
-# instances with pretested installs. The following optin can be used to
-# separate the tool shed install database (all other options listed above
+# instances with pretested installs. The following optin can be used to
+# separate the tool shed install database (all other options listed above
# but prefixed with install_ are also available).
#install_database_connection = sqlite:///./database/universe.sqlite?isolation_level=IMMEDIATE
@@ -139,14 +139,16 @@
# config/tool_conf.xml does not exist).
#tool_config_file = config/tool_conf.xml,config/shed_tool_conf.xml
-# Enable / disable checking if any tools defined in the above non-shed tool_config_files
-# (i.e., tool_conf.xml) have been migrated from the Galaxy code distribution to the Tool
-# Shed. This setting should generally be set to False only for development Galaxy environments
-# that are often rebuilt from scratch where migrated tools do not need to be available in the
-# Galaxy tool panel. If the following setting remains commented, the default setting will be True.
+# Enable / disable checking if any tools defined in the above non-shed
+# tool_config_files (i.e., tool_conf.xml) have been migrated from the Galaxy
+# code distribution to the Tool Shed. This setting should generally be set to
+# False only for development Galaxy environments that are often rebuilt from
+# scratch where migrated tools do not need to be available in the Galaxy tool
+# panel. If the following setting remains commented, the default setting will
+# be True.
#check_migrate_tools = True
-# Tool config maintained by tool migration scripts. If you use the migration
+# Tool config maintained by tool migration scripts. If you use the migration
# scripts to install tools that have been migrated to the tool shed upon a new
# release, they will be added to this tool config file.
#migrated_tools_config = config/migrated_tools_conf.xml
@@ -154,11 +156,12 @@
# File that contains the XML section and tool tags from all tool panel config
# files integrated into a single file that defines the tool panel layout. This
# file can be changed by the Galaxy administrator to alter the layout of the
-# tool panel. If not present, Galaxy will create it.
+# tool panel. If not present, Galaxy will create it.
#integrated_tool_panel_config = integrated_tool_panel.xml
# Default path to the directory containing the tools defined in tool_conf.xml.
-# Other tool config files must include the tool_path as an attribute in the <toolbox> tag.
+# Other tool config files must include the tool_path as an attribute in the
+# <toolbox> tag.
#tool_path = tools
# Path to the directory in which tool dependencies are placed. This is used by
@@ -181,26 +184,28 @@
#watch_tools = False
# Enable automatic polling of relative tool sheds to see if any updates
-# are available for installed repositories. Ideally only one Galaxy
+# are available for installed repositories. Ideally only one Galaxy
# server process should be able to check for repository updates. The
# setting for hours_between_check should be an integer between 1 and 24.
#enable_tool_shed_check = False
#hours_between_check = 12
-# Enable use of an in-memory registry with bi-directional relationships
-# between repositories (i.e., in addition to lists of dependencies for a
-# repository, keep an in-memory registry of dependent items for each repository.
+# Enable use of an in-memory registry with bi-directional relationships between
+# repositories (i.e., in addition to lists of dependencies for a repository,
+# keep an in-memory registry of dependent items for each repository.
#manage_dependency_relationships = False
# XML config file that contains data table entries for the
-# ToolDataTableManager. This file is manually # maintained by the Galaxy
+# ToolDataTableManager. This file is manually # maintained by the Galaxy
# administrator (.sample used if default does not exist).
#tool_data_table_config_path = config/tool_data_table_conf.xml
-# XML config file that contains additional data table entries for the ToolDataTableManager. This file
-# is automatically generated based on the current installed tool shed repositories that contain valid
-# tool_data_table_conf.xml.sample files. At the time of installation, these entries are automatically
-# added to the following file, which is parsed and applied to the ToolDataTableManager at server start up.
+# XML config file that contains additional data table entries for the
+# ToolDataTableManager. This file is automatically generated based on the
+# current installed tool shed repositories that contain valid
+# tool_data_table_conf.xml.sample files. At the time of installation, these
+# entries are automatically added to the following file, which is parsed and
+# applied to the ToolDataTableManager at server start up.
#shed_tool_data_table_config = config/shed_tool_data_table_conf.xml
# Directory where data used by tools is located, see the samples in that
@@ -215,24 +220,26 @@
#len_file_path = tool-data/shared/ucsc/chrom
# Datatypes config file(s), defines what data (file) types are available in
-# Galaxy (.sample is used if default does not exist). If a datatype appears in
-# multiple files - the last definition is used (though the first sniffer is used
+# Galaxy (.sample is used if default does not exist). If a datatype appears in
+# multiple files, the last definition is used (though the first sniffer is used
# so limit sniffer definitions to one file).
#datatypes_config_file = config/datatypes_conf.xml
# Disable the 'Auto-detect' option for file uploads
#datatypes_disable_auto = False
-# Visualizations config directory: where to look for individual visualization plugins.
-# The path is relative to the Galaxy root dir. To use an absolute path begin the path
-# with '/'. This is a comma separated list. Defaults to "config/plugins/visualizations".
+# Visualizations config directory: where to look for individual visualization
+# plugins. The path is relative to the Galaxy root dir. To use an absolute
+# path begin the path with '/'. This is a comma separated list.
+# Defaults to "config/plugins/visualizations".
#visualization_plugins_directory = config/plugins/visualizations
-# Interactive environment plugins root directory: where to look for interactive environment
-# plugins. By default none will be loaded. Set to config/plugins/interactive_environments
-# to load Galaxy's stock plugins (currently just IPython). These will require Docker
-# to be configured and have security considerations so proceed with caution.
-#interactive_environment_plugins_directory =
+# Interactive environment plugins root directory: where to look for interactive
+# environment plugins. By default none will be loaded. Set to
+# config/plugins/interactive_environments to load Galaxy's stock plugins
+# (currently just IPython). These will require Docker to be configured and
+# have security considerations, so proceed with caution.
+#interactive_environment_plugins_directory =
# Each job is given a unique empty directory as its current working directory.
# This option defines in what parent directory those directories will be
@@ -247,18 +254,20 @@
# used for the cache
#template_cache_path = database/compiled_templates
-# Citation related caching. Tool citations information maybe fetched from
-# external sources such as http://dx.doi.org/ by Galaxy - the following parameters
-# can be used to control the caching used to store this information.
+# Citation related caching. Tool citations information maybe fetched from
+# external sources such as http://dx.doi.org/ by Galaxy - the following
+# parameters can be used to control the caching used to store this information.
#citation_cache_type = file
#citation_cache_data_dir = database/citations/data
#citation_cache_lock_dir = database/citations/lock
-# External service types config file, defines what types of external_services configurations
-# are available in Galaxy (.sample is used if default does not exist).
+# External service types config file, defining what types of external_services
+# configurations are available in Galaxy (.sample is used if default does not
+# exist).
#external_service_type_config_file = config/external_service_types_conf.xml
-# Path to the directory containing the external_service_types defined in the config.
+# Path to the directory containing the external_service_types defined in the
+# config.
#external_service_type_path = external_service_types
# Tools with a number of outputs not known until runtime can write these
@@ -316,10 +325,11 @@
# -- Mail and notification
-# Galaxy sends mail for various things: Subscribing users to the mailing list
-# if they request it, emailing password resets, notification from the Galaxy
-# Sample Tracking system, reporting dataset errors, and sending activation emails.
-# To do this, it needs to send mail through an SMTP server, which you may define here (host:port).
+# Galaxy sends mail for various things: subscribing users to the mailing list
+# if they request it, password resets, notifications from the Galaxy Sample
+# Tracking system, reporting dataset errors, and sending activation emails.
+# To do this, it needs to send mail through an SMTP server, which you may
+# define here (host:port).
# Galaxy will automatically try STARTTLS but will continue upon failure.
#smtp_server = None
@@ -337,37 +347,44 @@
#mailing_join_addr = galaxy-announce-join(a)bx.psu.edu
# Datasets in an error state include a link to report the error. Those reports
-# will be sent to this address. Error reports are disabled if no address is set.
-# Also this email is shown as a contact to user in case of Galaxy misconfiguration and other events user may encounter.
+# will be sent to this address. Error reports are disabled if no address is
+# set. Also this email is shown as a contact to user in case of Galaxy
+# misconfiguration and other events user may encounter.
#error_email_to = None
-# Activation email is used as a sender ('from' field) for the account activation mail.
-# We recommend using string in the following format: Galaxy Project <galaxy-no-reply(a)example.com>
+# Activation email is used as a sender ('from' field) for the account
+# activation mail. We recommend using string in the following format:
+# Galaxy Project <galaxy-no-reply(a)example.com>
#activation_email = None
-# URL of the support resource for the galaxy instance. Used in activation emails.
+# URL of the support resource for the galaxy instance. Used in activation
+# emails.
#instance_resource_url = http://wiki.galaxyproject.org/
-# E-mail domains blacklist is used for filtering out users that are using disposable email address
-# during the registration. If their address domain matches any domain in the BL they are refused the registration.
+# E-mail domains blacklist is used for filtering out users that are using
+# disposable email address during the registration. If their address domain
+# matches any domain in the blacklist, they are refused the registration.
#blacklist_file = config/disposable_email_blacklist.conf
-# Registration warning message is used to discourage people from registering multiple accounts. Applies mostly for the main Galaxy instance.
+# Registration warning message is used to discourage people from registering
+# multiple accounts. Applies mostly for the main Galaxy instance.
# If no message specified the warning box will not be shown.
#registration_warning_message = Please register only one account - we provide this service free of charge and have limited computational resources. Multi-accounts are tracked and will be subjected to account termination and data deletion.
# -- Account activation
-# This is user account activation feature global flag. If set to "False" the rest of the Account
-# activation configuration is ignored and user activation is disabled (a.k.a. accounts are active since registration).
-# Note the activation is also not working in case the smtp server is not defined.
+# User account activation feature global flag. If set to "False", the rest of
+# the Account activation configuration is ignored and user activation is
+# disabled (i.e. accounts are active since registration).
+# The activation is also not working in case the SMTP server is not defined.
#user_activation_on = False
-# Activation grace period. Activation is not forced (login is not disabled) until
-# grace period has passed. Users under grace period can't run jobs (see inactivity_box_content).
-# In hours. Default is 3. Enter 0 to disable grace period.
-# Users with OpenID logins have grace period forever.
+# Activation grace period (in hours). Activation is not forced (login is not
+# disabled) until grace period has passed. Users under grace period can't run
+# jobs (see inactivity_box_content). Default is 3. Enter 0 to disable grace
+# period.
+# Users with OpenID logins have grace period forever.
#activation_grace_period = 0
# Used for warning box for inactive accounts (unable to run jobs).
@@ -378,7 +395,7 @@
# -- Analytics
# You can enter tracking code here to track visitor's behavior
-# through your Google Analytics account. Example: UA-XXXXXXXX-Y
+# through your Google Analytics account. Example: UA-XXXXXXXX-Y
#ga_code = None
# -- Display sites
@@ -401,17 +418,19 @@
# them).
#display_servers = hgw1.cse.ucsc.edu,hgw2.cse.ucsc.edu,hgw3.cse.ucsc.edu,hgw4.cse.ucsc.edu,hgw…
-# To disable the old-style display applications that are hardcoded into datatype classes,
-# set enable_old_display_applications = False.
-# This may be desirable due to using the new-style, XML-defined, display applications that
-# have been defined for many of the datatypes that have the old-style.
+# To disable the old-style display applications that are hardcoded into
+# datatype classes, set enable_old_display_applications = False.
+# This may be desirable due to using the new-style, XML-defined, display
+# applications that have been defined for many of the datatypes that have the
+# old-style.
# There is also a potential security concern with the old-style applications,
-# where a malicious party could provide a link that appears to reference the Galaxy server,
-# but contains a redirect to a third-party server, tricking a Galaxy user to access said
-# site.
+# where a malicious party could provide a link that appears to reference the
+# Galaxy server, but contains a redirect to a third-party server, tricking a
+# Galaxy user to access said site.
#enable_old_display_applications = True
-# -- Next gen LIMS interface on top of existing Galaxy Sample/Request management code.
+# -- Next gen LIMS interface on top of existing Galaxy Sample/Request
+# management code.
use_nglims = False
nglims_config_file = tool-data/nglims.yaml
@@ -428,18 +447,20 @@
# Format string used when showing date and time information.
# The string may contain:
-# - the directives used by Python time.strftime() function (see http://docs.python.org/2/library/time.html#time.strftime ),
+# - the directives used by Python time.strftime() function (see
+# http://docs.python.org/2/library/time.html#time.strftime ),
# - $locale (complete format string for the server locale),
-# - $iso8601 (complete format string as specified by ISO 8601 international standard).
+# - $iso8601 (complete format string as specified by ISO 8601 international
+# standard).
# pretty_datetime_format = $locale (UTC)
-# URL (with schema http/https) of the Galaxy instance as accessible within your local
-# network - if specified used as a default by pulsar file staging and IPython docker
-# container for communicating back with Galaxy via the API.
+# URL (with schema http/https) of the Galaxy instance as accessible within your
+# local network - if specified used as a default by pulsar file staging and
+# IPython Docker container for communicating back with Galaxy via the API.
#galaxy_infrastructure_url = http://localhost:8080
-# The URL of the page to display in Galaxy's middle pane when loaded. This can be
-# an absolute or relative URL.
+# The URL of the page to display in Galaxy's middle pane when loaded. This can
+# be an absolute or relative URL.
#welcome_url = /static/welcome.html
# The URL linked by the "Galaxy/brand" text.
@@ -470,7 +491,7 @@
# The URL linked by the "Galaxy Q&A" link in the "Help" menu
# The Galaxy Q&A site is under development; when the site is done, this URL
# will be set and uncommented.
-#qa_url =
+#qa_url =
# Serve static content, which must be enabled if you're not serving it via a
# proxy server. These options should be self explanatory and so are not
@@ -527,9 +548,9 @@
# will cause modern web browsers to not allow Galaxy to be embedded in
# the frames of web applications hosted at other hosts - this can help
# prevent a class of attack called clickjacking
-# (https://www.owasp.org/index.php/Clickjacking) If you configure a
+# (https://www.owasp.org/index.php/Clickjacking) If you configure a
# proxy in front of Galaxy - please ensure this header remains intact
-# to protect your users. Uncomment and leave empty to not set the
+# to protect your users. Uncomment and leave empty to not set the
# `X-Frame-Options` header.
#x_frame_options = SAMEORIGIN
@@ -545,10 +566,10 @@
#nginx_upload_path = False
# Have Galaxy manage dynamic proxy component for routing requests to other
-# services based on Galaxy's session cookie. It will attempt to do this by
+# services based on Galaxy's session cookie. It will attempt to do this by
# default though you do need to install node+npm and do an npm install from
-# `lib/galaxy/web/proxy/js`. It is generally more robust to configure this
-# externally managing it however Galaxy is managed. If True Galaxy will only
+# `lib/galaxy/web/proxy/js`. It is generally more robust to configure this
+# externally managing it however Galaxy is managed. If True Galaxy will only
# launch the proxy if it is actually going to be used (e.g. for IPython).
#dynamic_proxy_manage=True
@@ -564,7 +585,8 @@
# Enable verbose debugging of Galaxy-managed dynamic proxy.
#dynamic_proxy_debug=False
-# The dynamic proxy is proxied by an external proxy (e.g. apache frontend to nodejs to wrap connections in SSL)
+# The dynamic proxy is proxied by an external proxy (e.g. apache frontend to
+# nodejs to wrap connections in SSL).
#dynamic_proxy_external_proxy=False
# -- Logging and Debugging
@@ -582,28 +604,28 @@
# Turn on logging of application events and some user events to the database.
#log_events = True
-# Turn on logging of user actions to the database. Actions currently logged are
-# grid views, tool searches, and use of "recently" used tools menu. The
+# Turn on logging of user actions to the database. Actions currently logged
+# are grid views, tool searches, and use of "recently" used tools menu. The
# log_events and log_actions functionality will eventually be merged.
#log_actions = True
-# Sanitize All HTML Tool Output By default, all tool output served as
-# 'text/html' will be sanitized thoroughly. This can be disabled if
-# you have special tools that require unaltered output. WARNING:
-# Disabling this does make the Galxy instance susceptible to XSS
-# attacks initiated by your users.
+# Sanitize all HTML tool output. By default, all tool output served as
+# 'text/html' will be sanitized thoroughly. This can be disabled if you have
+# special tools that require unaltered output. WARNING: disabling this does
+# make the Galaxy instance susceptible to XSS attacks initiated by your users.
#sanitize_all_html = True
# By default Galaxy will serve non-HTML tool output that may potentially
-# contain browser executable JavaScript content as plain text. This will for
+# contain browser executable JavaScript content as plain text. This will for
# instance cause SVG datasets to not render properly and so may be disabled
# by setting the following option to True.
#serve_xss_vulnerable_mimetypes = False
-# Set the following to True to use ipython nbconvert to build HTML from IPython
-# notebooks in Galaxy histories. This process may allow users to execute arbitrary
-# code or serve arbitrary HTML. If enabled ipython must be available and on Galaxy's
-# PATH, to do this run `pip install jinja2 pygments ipython` in Galaxy's virtualenv.
+# Set the following to True to use IPython nbconvert to build HTML from IPython
+# notebooks in Galaxy histories. This process may allow users to execute
+# arbitrary code or serve arbitrary HTML. If enabled, IPython must be
+# available and on Galaxy's PATH, to do this run
+# `pip install jinja2 pygments ipython` in Galaxy's virtualenv.
#trust_ipython_notebook_conversion = False
# Debug enables access to various config options useful for development and
@@ -659,7 +681,7 @@
# if testing shows that Galaxy is able to create files using these formats.
# Specific formats can be disabled with this option, separate more than one
# format with commas. Available formats are currently 'zip', 'gz', and 'bz2'.
-#disable_library_comptypes =
+#disable_library_comptypes =
# Some sequencer integration features in beta allow you to automatically
# transfer datasets. This is done using a lightweight transfer manager which
@@ -710,10 +732,11 @@
#remote_user_header = HTTP_REMOTE_USER
# If use_remote_user is enabled, anyone who can log in to the Galaxy host may
-# impersonate any other user by simply sending the appropriate header. Thus a
+# impersonate any other user by simply sending the appropriate header. Thus a
# secret shared between the upstream proxy server, and Galaxy is required.
-# If anyone other than the Galaxy user is using the server, then apache/nginx should
-# pass a value in the header 'GX_SECRET' that is identical the one below
+# If anyone other than the Galaxy user is using the server, then apache/nginx
+# should pass a value in the header 'GX_SECRET' that is identical to the one
+# below.
#remote_user_secret = USING THE DEFAULT IS NOT SECURE!
# If use_remote_user is enabled, you can set this to a URL that will log your
@@ -798,8 +821,9 @@
#api_allow_run_as = None
# Master key that allows many API admin actions to be used without actually
-# having a defined admin user in the database/config. Only set this if you need
-# to bootstrap Galaxy, you probably do not want to set this on public servers.
+# having a defined admin user in the database/config. Only set this if you
+# need to bootstrap Galaxy, you probably do not want to set this on public
+# servers.
#master_api_key = changethis
# Enable tool tags (associating tools with tags). This has its own option
@@ -807,7 +831,7 @@
# large servers.
#enable_tool_tags = False
-# Enable a feature when running workflows. When enabled, default datasets
+# Enable a feature when running workflows. When enabled, default datasets
# are selected for "Set at Runtime" inputs from the history such that the
# same input will not be selected twice, unless there are more inputs than
# compatible datasets in the history.
@@ -843,7 +867,7 @@
# File where Data Managers are configured (.sample used if default does not
# exist)
#data_manager_config_file = config/data_manager_conf.xml
-# File where Tool Shed based Data Managers are configured
+# File where Tool Shed based Data Managers are configured
#shed_data_manager_config_file = config/shed_data_manager_conf.xml
# Directory to store Data Manager based tool-data; defaults to tool_data_path
#galaxy_data_manager_data_path = tool-data
@@ -870,7 +894,8 @@
# following option to True or False.
#track_jobs_in_database = None
-# This enables splitting of jobs into tasks, if specified by the particular tool config.
+# This enables splitting of jobs into tasks, if specified by the particular tool
+# config.
# This is a new feature and not recommended for production servers yet.
#use_tasked_jobs = False
#local_task_queue_workers = 2
@@ -921,7 +946,8 @@
# Optional file containing job resource data entry fields definition.
# These fields will be presented to users in the tool forms and allow them to
-# overwrite default job resources such as number of processors, memory, and walltime.
+# overwrite default job resources such as number of processors, memory and
+# walltime.
#job_resource_params_file = config/job_resource_params_conf.xml
# If using job concurrency limits (configured in job_config_file), several
@@ -940,7 +966,7 @@
# the following lines. tool_* filters will be applied for all users
# and can not be changed by them. user_tool_* filters will be shown
# under user preferences and can be toogled on and off by
-# runtime. Example shown below are not real defaults (no custom
+# runtime. Example shown below are not real defaults (no custom
# filters applied by defualt) but can be enabled with by renaming the
# example.py.sample in the filters directory to example.py.
@@ -972,11 +998,12 @@
# ---- Galaxy External Message Queue -------------------------------------------------
-# Galaxy uses AMQ protocol to receive messages from external sources like
-# bar code scanners. Galaxy has been tested against RabbitMQ AMQP implementation.
-# For Galaxy to receive messages from a message queue the RabbitMQ server has
-# to be set up with a user account and other parameters listed below. The 'host'
-# and 'port' fields should point to where the RabbitMQ server is running.
+# Galaxy uses Advanced Message Queuing Protocol (AMQP) to receive messages from
+# external sources like barcode scanners. Galaxy has been tested against
+# RabbitMQ AMQP implementation. For Galaxy to receive messages from a message
+# queue, the RabbitMQ server has to be set up with a user account and other
+# parameters listed below. The 'host' and 'port' fields should point to where
+# the RabbitMQ server is running.
[galaxy_amqp]
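The sample file's preamble (reworded in the first hunk of this changeset) states that commented values document the default while uncommented values override it. That convention can be illustrated with a minimal sketch using Python's `ConfigParser`; this is not Galaxy's actual config loader, and the section/option names below are taken from the hunks above purely for illustration.

```python
from configparser import ConfigParser
from io import StringIO

# Sketch (not Galaxy's real loader) of the galaxy.ini convention:
# '#'-prefixed lines are comments that merely document the default,
# so only uncommented keys override the fallback value.
sample = StringIO(
    "[app:main]\n"
    "#threadpool_workers = 10\n"              # commented: default applies
    "threadpool_kill_thread_limit = 10800\n"  # uncommented: overrides default
)
cfg = ConfigParser()
cfg.read_file(sample)
print(cfg.getint("app:main", "threadpool_workers", fallback=10))            # 10
print(cfg.getint("app:main", "threadpool_kill_thread_limit", fallback=600)) # 10800
```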
Repository URL: https://bitbucket.org/galaxy/galaxy-central/
--
This is a commit notification from bitbucket.org. You are receiving
this because you have the service enabled, addressing the recipient of
this email.
commit/galaxy-central: natefoo: Update tag latest_2015.01.13 for changeset a481d17c0448
by commits-noreply@bitbucket.org 29 Jan '15
1 new commit in galaxy-central:
https://bitbucket.org/galaxy/galaxy-central/commits/d386cf71d4f7/
Changeset: d386cf71d4f7
Branch: stable
User: natefoo
Date: 2015-01-29 14:00:24+00:00
Summary: Update tag latest_2015.01.13 for changeset a481d17c0448
Affected #: 1 file
diff -r a481d17c04482b069e6b2ebdda4fbca01448fa9c -r d386cf71d4f71faeb49ec871b742e81011bee266 .hgtags
--- a/.hgtags
+++ b/.hgtags
@@ -22,4 +22,4 @@
2092948937ac30ef82f71463a235c66d34987088 release_2014.10.06
9bd6f8b5b8153db752f4e61ed62f2b6c01ae4a11 latest_2014.10.06
2e8dd2949dd3eee0f56f9a3a5ebf1b2baca24aee release_2015.01.13
-1b96d3a4ff28da2bd547b65ad86c5ad5a9bbfb90 latest_2015.01.13
+a481d17c04482b069e6b2ebdda4fbca01448fa9c latest_2015.01.13
Repository URL: https://bitbucket.org/galaxy/galaxy-central/