galaxy-commits
commit/galaxy-central: greg: Include tool shed environment information when displaying functional test results in the tool shed.
by commits-noreply@bitbucket.org 12 Mar '13
1 new commit in galaxy-central:
https://bitbucket.org/galaxy/galaxy-central/commits/b89a535240e5/
changeset: b89a535240e5
user: greg
date: 2013-03-12 14:06:52
summary: Include tool shed environment information when displaying functional test results in the tool shed.
affected #: 1 file
diff -r be95601bd33fde791b42dff3e7446d0ccdcf7fd1 -r b89a535240e5cb5fc123264b6294f86a577d02d9 templates/webapps/tool_shed/repository/display_tool_functional_test_results.mako
--- a/templates/webapps/tool_shed/repository/display_tool_functional_test_results.mako
+++ b/templates/webapps/tool_shed/repository/display_tool_functional_test_results.mako
@@ -136,25 +136,30 @@
</div><div class="form-row"><table width="100%">
- <tr bgcolor="#D8D8D8" width="100%"><td><b>Test environment</td></tr>
+ <tr bgcolor="#D8D8D8" width="100%"><td><b>Tool Shed environment</td></tr></table></div><div class="form-row">
- <label>Architecture:</label>
- ${test_environment_dict.get( 'architecture', 'unknown' ) | h}
+ <label>Tool shed version:</label>
+ ${test_environment_dict.get( 'tool_shed_revision', 'unknown' ) | h}
<div style="clear: both"></div></div><div class="form-row">
- <label>Python version:</label>
- ${test_environment_dict.get( 'python_version', 'unknown' ) | h}
+ <label>Tool shed database version:</label>
+ ${test_environment_dict.get( 'tool_shed_database_version', 'unknown' ) | h}
<div style="clear: both"></div></div><div class="form-row">
- <label>Operating system:</label>
- ${test_environment_dict.get( 'system', 'unknown' ) | h}
+ <label>Mercurial version:</label>
+ ${test_environment_dict.get( 'tool_shed_mercurial_version', 'unknown' ) | h}
<div style="clear: both"></div></div><div class="form-row">
+ <table width="100%">
+ <tr bgcolor="#D8D8D8" width="100%"><td><b>Galaxy environment</td></tr>
+ </table>
+ </div>
+ <div class="form-row"><label>Galaxy version:</label>
${test_environment_dict.get( 'galaxy_revision', 'unknown' ) | h}
<div style="clear: both"></div>
@@ -164,6 +169,21 @@
${test_environment_dict.get( 'galaxy_database_version', 'unknown' ) | h}
<div style="clear: both"></div></div>
+ <div class="form-row">
+ <label>Architecture:</label>
+ ${test_environment_dict.get( 'architecture', 'unknown' ) | h}
+ <div style="clear: both"></div>
+ </div>
+ <div class="form-row">
+ <label>Operating system:</label>
+ ${test_environment_dict.get( 'system', 'unknown' ) | h}
+ <div style="clear: both"></div>
+ </div>
+ <div class="form-row">
+ <label>Python version:</label>
+ ${test_environment_dict.get( 'python_version', 'unknown' ) | h}
+ <div style="clear: both"></div>
+ </div>
%if test_errors:
<div class="form-row"><table width="100%">
Repository URL: https://bitbucket.org/galaxy/galaxy-central/
--
This is a commit notification from bitbucket.org. You are receiving
this because you have the service enabled, addressing the recipient of
this email.
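The commit above reorganizes the environment block of the Mako template into a Tool Shed section and a Galaxy section, each field falling back to 'unknown' when absent. A minimal Python sketch of that mapping (the helper and field tables are hypothetical, not Galaxy code; the dict keys are the ones shown in the diff):

```python
# Hypothetical helper mirroring the template's two environment sections.
# Any key missing from test_environment_dict falls back to 'unknown',
# exactly as the Mako template's .get( key, 'unknown' ) calls do.

TOOL_SHED_FIELDS = [
    ('Tool shed version', 'tool_shed_revision'),
    ('Tool shed database version', 'tool_shed_database_version'),
    ('Mercurial version', 'tool_shed_mercurial_version'),
]
GALAXY_FIELDS = [
    ('Galaxy version', 'galaxy_revision'),
    ('Galaxy database version', 'galaxy_database_version'),
    ('Architecture', 'architecture'),
    ('Operating system', 'system'),
    ('Python version', 'python_version'),
]

def render_environment(test_environment_dict):
    """Return (section_title, label, value) rows in display order."""
    rows = []
    for title, fields in (('Tool Shed environment', TOOL_SHED_FIELDS),
                          ('Galaxy environment', GALAXY_FIELDS)):
        for label, key in fields:
            rows.append((title, label, test_environment_dict.get(key, 'unknown')))
    return rows

rows = render_environment({'tool_shed_revision': '9001:abcd1234'})
print(rows[0])  # ('Tool Shed environment', 'Tool shed version', '9001:abcd1234')
```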
commit/galaxy-central: inithello: Fix for installing and testing repositories where all functional tests fail.
by commits-noreply@bitbucket.org 11 Mar '13
1 new commit in galaxy-central:
https://bitbucket.org/galaxy/galaxy-central/commits/5d9328262021/
changeset: 5d9328262021
user: inithello
date: 2013-03-11 19:21:32
summary: Fix for installing and testing repositories where all functional tests fail.
affected #: 1 file
diff -r c07417582dd369a02625d3b3d170718fd69bdd12 -r 5d9328262021b414c9005f79b0c97605acdd4c94 test/install_and_test_tool_shed_repositories/functional_tests.py
--- a/test/install_and_test_tool_shed_repositories/functional_tests.py
+++ b/test/install_and_test_tool_shed_repositories/functional_tests.py
@@ -155,9 +155,11 @@
self.passed[ test_identifier ].append( fixed_test_id )
def getTestStatus( self, test_identifier ):
- tests_passed = self.passed[ test_identifier ]
- del self.passed[ test_identifier ]
- return tests_passed
+ if test_identifier in self.passed:
+ tests_passed = self.passed[ test_identifier ]
+ del self.passed[ test_identifier ]
+ return tests_passed
+ return []
def execute_uninstall_method( repository_dict ):
# Delete any configured tool functional tests from the test_toolbox.__dict__, otherwise nose will find them
Repository URL: https://bitbucket.org/galaxy/galaxy-central/
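The patched getTestStatus guards against a KeyError that occurred when every functional test in a repository failed, so nothing was ever recorded under self.passed for that test identifier. A minimal sketch of the same pattern, assuming a plain dict keyed by test identifier (class and method names here are illustrative, not Galaxy code); dict.pop with a default collapses the three-line guard into one:

```python
class PassedTestTracker:
    """Sketch of the guarded lookup from the commit above."""

    def __init__(self):
        self.passed = {}  # test_identifier -> list of passed test ids

    def add_passed(self, test_identifier, test_id):
        self.passed.setdefault(test_identifier, []).append(test_id)

    def get_test_status(self, test_identifier):
        # Equivalent to the patched getTestStatus: return and clear the
        # recorded passes, or an empty list if nothing ever passed.
        return self.passed.pop(test_identifier, [])

tracker = PassedTestTracker()
tracker.add_passed('repo_a', 'test_tool_000001')
print(tracker.get_test_status('repo_a'))  # ['test_tool_000001']
print(tracker.get_test_status('repo_b'))  # [] -- no KeyError when all tests failed
```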
commit/galaxy-central: inithello: Improve structure of invalid tools dict. Clarify summary and detailed log output. Add check for required environment variables.
by commits-noreply@bitbucket.org 11 Mar '13
1 new commit in galaxy-central:
https://bitbucket.org/galaxy/galaxy-central/commits/c07417582dd3/
changeset: c07417582dd3
user: inithello
date: 2013-03-11 18:49:12
summary: Improve structure of invalid tools dict. Clarify summary and detailed log output. Add check for required environment variables.
affected #: 3 files
diff -r a2f58a2b87876f8db9a8703782633b84f4dc13ef -r c07417582dd369a02625d3b3d170718fd69bdd12 install_and_test_tool_shed_repositories.sh
--- a/install_and_test_tool_shed_repositories.sh
+++ b/install_and_test_tool_shed_repositories.sh
@@ -2,5 +2,42 @@
# A good place to look for nose info: http://somethingaboutorange.com/mrl/projects/nose/
-python test/install_and_test_tool_shed_repositories/functional_tests.py $* -v --with-nosehtml --html-report-file ./test/install_and_test_tool_shed_repositories/run_functional_tests.html test/install_and_test_tool_shed_repositories/functional/test_install_repositories.py test/functional/test_toolbox.py
+# The test/install_and_test_tool_shed_repositories/functional_tests.py can not be executed directly, because it must have certain functional test definitions
+# in sys.argv. Running it through this shell script is the best way to ensure that it has the required definitions.
+# This script requires the following environment variables:
+# GALAXY_INSTALL_TEST_TOOL_SHED_API_KEY - must be set to the API key for the tool shed that is being checked.
+# GALAXY_INSTALL_TEST_TOOL_SHED_URL - must be set to a URL that the tool shed is listening on.
+# If the tool shed url is not specified in tool_sheds_conf.xml, GALAXY_INSTALL_TEST_TOOL_SHEDS_CONF must be set to a tool sheds configuration file
+# that does specify that url, otherwise repository installation will fail.
+
+if [ -z $GALAXY_INSTALL_TEST_TOOL_SHED_API_KEY ] ; then
+ echo "This script requires the GALAXY_INSTALL_TEST_TOOL_SHED_API_KEY environment variable to be set and non-empty."
+ exit 1
+fi
+
+if [ -z $GALAXY_INSTALL_TEST_TOOL_SHED_URL ] ; then
+ echo "This script requires the GALAXY_INSTALL_TEST_TOOL_SHED_URL environment variable to be set and non-empty."
+ exit 1
+fi
+
+if [ -z "$GALAXY_INSTALL_TEST_TOOL_SHEDS_CONF" ] ; then
+ if grep --quiet $GALAXY_INSTALL_TEST_TOOL_SHED_URL tool_sheds_conf.xml; then
+ echo "Tool sheds configuration tool_sheds_conf.xml ok, proceeding."
+ else
+ echo "ERROR: Tool sheds configuration tool_sheds_conf.xml does not have an entry for $GALAXY_INSTALL_TEST_TOOL_SHED_URL."
+ exit 1
+ fi
+else
+ if grep --quiet $GALAXY_INSTALL_TEST_TOOL_SHED_URL $GALAXY_INSTALL_TEST_TOOL_SHEDS_CONF; then
+ echo "Tool sheds configuration $GALAXY_INSTALL_TEST_TOOL_SHEDS_CONF ok, proceeding."
+ else
+ echo "ERROR: Tool sheds configuration $GALAXY_INSTALL_TEST_TOOL_SHEDS_CONF does not have an entry for $GALAXY_INSTALL_TEST_TOOL_SHED_URL"
+ exit 1
+ fi
+fi
+
+python test/install_and_test_tool_shed_repositories/functional_tests.py $* -v --with-nosehtml --html-report-file \
+ test/install_and_test_tool_shed_repositories/run_functional_tests.html \
+ test/install_and_test_tool_shed_repositories/functional/test_install_repositories.py \
+ test/functional/test_toolbox.py
diff -r a2f58a2b87876f8db9a8703782633b84f4dc13ef -r c07417582dd369a02625d3b3d170718fd69bdd12 lib/tool_shed/scripts/check_repositories_for_functional_tests.py
--- a/lib/tool_shed/scripts/check_repositories_for_functional_tests.py
+++ b/lib/tool_shed/scripts/check_repositories_for_functional_tests.py
@@ -55,7 +55,11 @@
type="int", help="Control the amount of detail in the log output. --verbosity=1 is "
"the same as -v")
( options, args ) = parser.parse_args()
- ini_file = args[0]
+ try:
+ ini_file = args[0]
+ except IndexError:
+ print "Usage: python %s <tool shed .ini file> [options]" % sys.argv[ 0 ]
+ exit( 127 )
config_parser = ConfigParser.ConfigParser( {'here':os.getcwd()} )
config_parser.read( ini_file )
config_dict = {}
@@ -161,6 +165,8 @@
has_tests = 0
no_tests = 0
no_tools = 0
+ valid_revisions = 0
+ invalid_revisions = 0
# Get the list of metadata records to check for functional tests and test data. Limit this to records that have not been flagged do_not_test
# or tools_functionally_correct. Also filter out changeset revisions that are not downloadable, because it's redundant to test a revision that
# a user can't install.
@@ -173,6 +179,7 @@
repository_status[ 'invalid_tests' ] = []
metadata_records_to_check = app.sa_session.query( app.model.RepositoryMetadata ) \
.filter( and_( app.model.RepositoryMetadata.table.c.downloadable == True,
+ app.model.RepositoryMetadata.table.c.includes_tools == True,
app.model.RepositoryMetadata.table.c.do_not_test == False,
app.model.RepositoryMetadata.table.c.tools_functionally_correct == False ) ) \
.all()
@@ -185,87 +192,14 @@
repository_status[ 'invalid_tests' ] = []
if metadata_record.repository.id not in checked_repository_ids:
checked_repository_ids.append( metadata_record.repository.id )
- if verbosity >= 1:
- print '# Checking for functional tests in changeset revision %s of %s, owned by %s.' % \
- ( changeset_revision, name, owner )
+ if verbosity >= 1:
+ print '# -------------------------------------------------------------------------------------------'
+ print '# Now checking revision %s of %s, owned by %s.' % ( changeset_revision, name, owner )
# If this changeset revision has no tools, we don't need to do anything here, the install and test script has a filter for returning
# only repositories that contain tools.
if 'tools' not in metadata_record.metadata:
- no_tools += 1
continue
else:
- # Loop through all the tools in this metadata record, checking each one for defined functional tests.
- for tool_metadata in metadata_record.metadata[ 'tools' ]:
- tool_count += 1
- tool_id = tool_metadata[ 'id' ]
- tool_version = tool_metadata[ 'version' ]
- tool_guid = tool_metadata[ 'guid' ]
- if verbosity >= 2:
- print "# Checking tool ID '%s' in changeset revision %s of %s." % \
- ( tool_id, changeset_revision, name )
- # If there are no tests, this tool should not be tested, since the tool functional tests only report failure if the test itself fails,
- # not if it's missing or undefined. Filtering out those repositories at this step will reduce the number of "false negatives" the
- # automated functional test framework produces.
- if 'tests' not in tool_metadata or not tool_metadata[ 'tests' ]:
- if verbosity >= 2:
- print '# No functional tests defined for %s.' % tool_id
- # The repository_metadata.tool_test_errors attribute should always have the following structure:
- # {
- # "test_environment":
- # {
- # "galaxy_revision": "9001:abcd1234",
- # "galaxy_database_version": "114",
- # "tool_shed_revision": "9001:abcd1234",
- # "tool_shed_mercurial_version": "2.3.1",
- # "tool_shed_database_version": "17",
- # "python_version": "2.7.2",
- # "architecture": "x86_64",
- # "system": "Darwin 12.2.0"
- # },
- # "test_errors":
- # [
- # {
- # "test_id": "The test ID, generated by twill",
- # "tool_id": "The tool ID that was tested",
- # "tool_version": "The tool version that was tested",
- # "stderr": "The output of the test, or a more detailed description of what was tested and what the outcome was."
- # "traceback": "The captured traceback."
- # },
- # ]
- # "passed_tests":
- # [
- # {
- # "test_id": "The test ID, generated by twill",
- # "tool_id": "The tool ID that was tested",
- # "tool_version": "The tool version that was tested",
- # "stderr": "The output of the test, or a more detailed description of what was tested and what the outcome was."
- # },
- # ]
- # "invalid_tests":
- # [
- # {
- # "tool_id": "The tool ID that does not have functional tests defined.",
- # "tool_version": "The version of the tool."
- # "tool_guid": "The guid of the tool."
- # "reason_test_is_invalid": "A short explanation of what is invalid.
- # },
- # ]
- # }
- # Optionally, "traceback" may be included in a test_errors dict, if it is relevant. No script should overwrite anything other
- # than the list relevant to what it is testing.
- test_errors = dict( tool_id=tool_id, tool_version=tool_version, tool_guid=tool_guid )
- repository_status[ 'invalid_tests' ].append( test_errors )
- no_tests += 1
- else:
- if verbosity >= 2:
- print "# Tool ID '%s' in changeset revision %s of %s has one or more valid functional tests defined." % \
- ( tool_id, changeset_revision, name )
- has_tests += 1
- if verbosity >= 1:
- if not repository_status[ 'invalid_tests' ]:
- print '# All tools have functional tests in changeset revision %s of repository %s owned by %s.' % ( changeset_revision, name, owner )
- else:
- print '# Some tools missing functional tests in changeset revision %s of repository %s owned by %s.' % ( changeset_revision, name, owner )
has_test_data = False
# Clone the repository up to the changeset revision we're checking.
repo_dir = metadata_record.repository.repo_path( app )
@@ -285,9 +219,47 @@
# Remove the cloned repository path.
if os.path.exists( work_dir ):
shutil.rmtree( work_dir )
- if not has_test_data:
- if verbosity >= 1:
+ if verbosity >= 1:
+ if not has_test_data:
print '# Test data missing in changeset revision %s of repository %s owned by %s.' % ( changeset_revision, name, owner )
+ else:
+ print '# Test data found in changeset revision %s of repository %s owned by %s.' % ( changeset_revision, name, owner )
+ print '# Checking for functional tests in changeset revision %s of %s, owned by %s.' % \
+ ( changeset_revision, name, owner )
+ # Loop through all the tools in this metadata record, checking each one for defined functional tests.
+ for tool_metadata in metadata_record.metadata[ 'tools' ]:
+ tool_count += 1
+ tool_id = tool_metadata[ 'id' ]
+ tool_version = tool_metadata[ 'version' ]
+ tool_guid = tool_metadata[ 'guid' ]
+ if verbosity >= 2:
+ print "# Checking tool ID '%s' in changeset revision %s of %s." % \
+ ( tool_id, changeset_revision, name )
+ # If there are no tests, this tool should not be tested, since the tool functional tests only report failure if the test itself fails,
+ # not if it's missing or undefined. Filtering out those repositories at this step will reduce the number of "false negatives" the
+ # automated functional test framework produces.
+ tool_has_tests = True
+ if 'tests' not in tool_metadata or not tool_metadata[ 'tests' ]:
+ tool_has_tests = False
+ if verbosity >= 2:
+ print '# No functional tests defined for %s.' % tool_id
+ no_tests += 1
+ else:
+ tool_has_tests = True
+ if verbosity >= 2:
+ print "# Tool ID '%s' in changeset revision %s of %s has one or more valid functional tests defined." % \
+ ( tool_id, changeset_revision, name )
+ has_tests += 1
+ failure_reason = ''
+ problem_found = False
+ if not has_test_data:
+ failure_reason += 'Repository does not have a test-data directory. '
+ problem_found = True
+ if not tool_has_tests:
+ failure_reason += 'Functional test definitions missing for %s. ' % tool_id
+ problem_found = True
+ test_errors = dict( tool_id=tool_id, tool_version=tool_version, tool_guid=tool_guid,
+ reason_test_is_invalid=failure_reason )
# The repository_metadata.tool_test_errors attribute should always have the following structure:
# {
# "test_environment":
@@ -323,21 +295,28 @@
# "invalid_tests":
# [
# {
- # "tool_id": "The tool ID that does not have functional tests defined.",
+ # "tool_id": "The ID of the tool that does not have valid tests.",
# "tool_version": "The version of the tool."
# "tool_guid": "The guid of the tool."
- # "reason_test_is_invalid": "A short explanation of what is invalid.
+ # "reason_test_is_invalid": "A short explanation of what is invalid."
# },
# ]
# }
+ #
# Optionally, "traceback" may be included in a test_errors dict, if it is relevant. No script should overwrite anything other
# than the list relevant to what it is testing.
- test_errors = dict( tool_id=None, tool_version=None, tool_guid=None,
- reason_test_is_invalid="Repository %s is missing a test-data directory." % name )
- repository_status[ 'invalid_tests' ].append( test_errors )
+ # Only append this error dict if it hasn't already been added.
+ if problem_found:
+ if test_errors not in repository_status[ 'invalid_tests' ]:
+ repository_status[ 'invalid_tests' ].append( test_errors )
+ if not repository_status[ 'invalid_tests' ]:
+ valid_revisions += 1
+ if verbosity >= 1:
+ print '# All tools have functional tests in changeset revision %s of repository %s owned by %s.' % ( changeset_revision, name, owner )
else:
+ invalid_revisions += 1
if verbosity >= 1:
- print '# Test data found in changeset revision %s of repository %s owned by %s.' % ( changeset_revision, name, owner )
+ print '# Some tools missing functional tests in changeset revision %s of repository %s owned by %s.' % ( changeset_revision, name, owner )
if not info_only:
# If repository_status[ 'test_errors' ] is empty, no issues were found, and we can just update time_last_tested with the platform
# on which this script was run.
@@ -356,8 +335,10 @@
app.sa_session.add( metadata_record )
app.sa_session.flush()
stop = time.time()
+ print '# -------------------------------------------------------------------------------------------'
print '# Checked %d repositories with %d tools in %d changeset revisions.' % ( len( checked_repository_ids ), tool_count, len( metadata_records_to_check ) )
- print '# Skipped %d changeset revisions without tools.' % no_tools
+ print '# %d revisions found with functional tests and test data for all tools.' % valid_revisions
+ print '# %d revisions found with one or more tools missing functional tests and/or test data.' % invalid_revisions
print '# Found %d tools without functional tests.' % no_tests
print '# Found %d tools with functional tests.' % has_tests
if info_only:
diff -r a2f58a2b87876f8db9a8703782633b84f4dc13ef -r c07417582dd369a02625d3b3d170718fd69bdd12 test/install_and_test_tool_shed_repositories/functional_tests.py
--- a/test/install_and_test_tool_shed_repositories/functional_tests.py
+++ b/test/install_and_test_tool_shed_repositories/functional_tests.py
@@ -104,11 +104,20 @@
</tables>
'''
-# Define a default location to find the list of repositories to check.
-galaxy_repository_list = os.environ.get( 'GALAXY_INSTALL_TEST_REPOSITORY_LIST_LOCATIOM', 'repository_list.json' )
-galaxy_tool_shed_url = os.environ.get( 'GALAXY_INSTALL_TEST_TOOL_SHED_URL', 'http://localhost:9009' )
+# The tool shed url and api key must be set for this script to work correctly. Additionally, if the tool shed url does not
+# point to one of the defaults, the GALAXY_INSTALL_TEST_TOOL_SHEDS_CONF needs to point to a tool sheds configuration file
+# that contains a definition for that tool shed.
+
+galaxy_tool_shed_url = os.environ.get( 'GALAXY_INSTALL_TEST_TOOL_SHED_URL', None )
tool_shed_api_key = os.environ.get( 'GALAXY_INSTALL_TEST_TOOL_SHED_API_KEY', None )
-assert tool_shed_api_key is not None, 'Unable to proceed without API key.'
+
+if tool_shed_api_key is None:
+ print "This script requires the GALAXY_INSTALL_TEST_TOOL_SHED_API_KEY environment variable to be set and non-empty."
+ exit( 1 )
+
+if galaxy_tool_shed_url is None:
+ print "This script requires the GALAXY_INSTALL_TEST_TOOL_SHED_URL environment variable to be set and non-empty."
+ exit( 1 )
if 'GALAXY_INSTALL_TEST_SECRET' not in os.environ:
galaxy_encode_secret = 'changethisinproductiontoo'
@@ -462,8 +471,7 @@
# Get a list of repositories to test from the tool shed specified in the GALAXY_INSTALL_TEST_TOOL_SHED_URL environment variable.
log.info( "Retrieving repositories to install from the URL:\n%s\n" % str( galaxy_tool_shed_url ) )
repositories_to_install = get_repositories_to_install( galaxy_tool_shed_url, source='url' )
- log.info( "Retrieved %d repositories to install..." % len( repositories_to_install ) )
- repositories_tested = len( repositories_to_install )
+ log.info( "Retrieved %d repositories from the API." % len( repositories_to_install ) )
for repository_to_install_dict in repositories_to_install:
# We need to get some details from the tool shed API, such as repository name and owner, to pass on to the
# module that will generate the install methods.
@@ -473,13 +481,15 @@
# and therefore do not need to be checked. If they are undeleted, this script will then test them the next time it runs.
if repository_info_dict[ 'deleted' ]:
log.info( "Skipping revision %s of repository id %s (%s/%s) since the repository is deleted..." % \
- ( repository_info_dict[ 'changeset_revision' ],
- repository_info_dict[ 'repository_id' ],
+ ( repository_to_install_dict[ 'changeset_revision' ],
+ repository_to_install_dict[ 'repository_id' ],
repository_info_dict[ 'owner' ],
repository_info_dict[ 'name' ] ) )
continue
# Now merge the dict returned from /api/repository_revisions with the detailed dict we just retrieved.
detailed_repository_list.append( dict( repository_info_dict.items() + repository_to_install_dict.items() ) )
+ repositories_tested = len( detailed_repository_list )
+ log.info( 'After removing deleted repositories from the list, %d remain to be tested.' % repositories_tested )
if '-list_repositories' in sys.argv:
log.info( "The API returned the following repositories, not counting deleted:" )
for repository_info_dict in detailed_repository_list:
@@ -741,15 +751,15 @@
print "# Repository revisions tested: %d" % repositories_tested
if repositories_tested > 0:
if repositories_passed:
- print "# "
+ print '# ----------------------------------------------------------------------------------'
print "# Repositories passed:"
show_summary_output( repositories_passed )
if repositories_failed:
- print "# "
+ print '# ----------------------------------------------------------------------------------'
print "# Repositories failed:"
show_summary_output( repositories_failed )
if repositories_failed_install:
- print "# "
+ print '# ----------------------------------------------------------------------------------'
print "# Repositories not installed correctly:"
show_summary_output( repositories_failed_install )
print "####################################################################################"
Repository URL: https://bitbucket.org/galaxy/galaxy-central/
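Both the shell wrapper and functional_tests.py in this commit fail fast when GALAXY_INSTALL_TEST_TOOL_SHED_API_KEY or GALAXY_INSTALL_TEST_TOOL_SHED_URL is unset or empty. A minimal Python sketch of the same check (the variable names come from the diff; the helper function itself is hypothetical), treating empty strings as missing to match the shell's `-z` test:

```python
import os

REQUIRED_VARS = (
    'GALAXY_INSTALL_TEST_TOOL_SHED_API_KEY',
    'GALAXY_INSTALL_TEST_TOOL_SHED_URL',
)

def missing_required_vars(environ=os.environ):
    # Treat unset *and* empty values as missing, matching [ -z "$VAR" ].
    return [name for name in REQUIRED_VARS if not environ.get(name)]

missing = missing_required_vars({'GALAXY_INSTALL_TEST_TOOL_SHED_URL': 'http://localhost:9009'})
for name in missing:
    print("This script requires the %s environment variable to be set and non-empty." % name)
```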
commit/galaxy-central: greg: Do not display unnecessary invalid test results when displaying functional test results in the tool shed.
by commits-noreply@bitbucket.org 11 Mar '13
1 new commit in galaxy-central:
https://bitbucket.org/galaxy/galaxy-central/commits/a2f58a2b8787/
changeset: a2f58a2b8787
user: greg
date: 2013-03-11 15:49:12
summary: Do not display unnecessary invalid test results when displaying functional test results in the tool shed.
affected #: 1 file
diff -r 15fe17fcdeb7d7a86f605bc93600242c843d897d -r a2f58a2b87876f8db9a8703782633b84f4dc13ef templates/webapps/tool_shed/repository/display_tool_functional_test_results.mako
--- a/templates/webapps/tool_shed/repository/display_tool_functional_test_results.mako
+++ b/templates/webapps/tool_shed/repository/display_tool_functional_test_results.mako
@@ -253,7 +253,7 @@
guid = test_results_dict.get( 'tool_guid', None )
tool_id = test_results_dict.get( 'tool_id', None )
tool_version = test_results_dict.get( 'tool_version', None )
- reason_test_is_invalid = test_results_dict.get( 'reason_test_is_invalid', 'unknown' )
+ reason_test_is_invalid = test_results_dict.get( 'reason_test_is_invalid', None )
if reason_test_is_invalid:
reason_test_is_invalid = to_safe_string( reason_test_is_invalid, to_html=True )
%>
@@ -262,10 +262,12 @@
<td colspan="2" bgcolor="#FFFFCC">Tool id: <b>${tool_id}</b> version: <b>${tool_version}</b></td></tr>
%endif
- <tr>
- <td><b>Reason test is invalid</b></td>
- <td>${reason_test_is_invalid}</td>
- </tr>
+ %if reason_test_is_invalid:
+ <tr>
+ <td><b>Reason test is invalid</b></td>
+ <td>${reason_test_is_invalid}</td>
+ </tr>
+ %endif
%endfor
</table></div>
Repository URL: https://bitbucket.org/galaxy/galaxy-central/
--
This is a commit notification from bitbucket.org. You are receiving
this because you have the service enabled, addressing the recipient of
this email.
commit/galaxy-central: greg: Enhancements for displaying functional tests run against installed tool shed repositories that contain tools.
by commits-noreply@bitbucket.org 11 Mar '13
1 new commit in galaxy-central:
https://bitbucket.org/galaxy/galaxy-central/commits/15fe17fcdeb7/
changeset: 15fe17fcdeb7
user: greg
date: 2013-03-11 15:38:15
summary: Enhancements for displaying functional tests run against installed tool shed repositories that contain tools.
affected #: 1 file
diff -r 0a6b63eca3ff6a72cd874bf36f73a2b5905aff65 -r 15fe17fcdeb7d7a86f605bc93600242c843d897d templates/webapps/tool_shed/repository/display_tool_functional_test_results.mako
--- a/templates/webapps/tool_shed/repository/display_tool_functional_test_results.mako
+++ b/templates/webapps/tool_shed/repository/display_tool_functional_test_results.mako
@@ -28,11 +28,15 @@
# about the test environment even if all tests passed and the repository_metadata.tools_functionally_correct column is set to True.
tool_test_errors = repository_metadata.tool_test_errors
test_environment_dict = tool_test_errors.get( 'test_environment', None )
+ invalid_tests = tool_test_errors.get( 'invalid_tests', [] )
test_errors = tool_test_errors.get( 'test_errors', [] )
+ tests_passed = tool_test_errors.get( 'tests_passed', [] )
else:
tool_test_errors = None
test_environment_dict = {}
+ invalid_tests = []
test_errors = []
+ tests_passed = []
if can_push:
browse_label = 'Browse or delete repository tip files'
@@ -121,7 +125,7 @@
<b>Repository name:</b><br/>
${repository.name}
%endif
-%if tool_test_errors:
+%if invalid_tests or tool_test_errors or tests_passed:
<p/><div class="toolForm"><div class="toolFormTitle">Tool functional test results</div>
@@ -151,61 +155,48 @@
<div style="clear: both"></div></div><div class="form-row">
- <table width="100%">
- <tr bgcolor="#D8D8D8" width="100%"><td><b>Test results</td></tr>
- </table>
+ <label>Galaxy version:</label>
+ ${test_environment_dict.get( 'galaxy_revision', 'unknown' ) | h}
+ <div style="clear: both"></div></div><div class="form-row">
- <table class="grid">
- %for test_results_dict in test_errors:
- <%
- test_id = test_results_dict.get( 'test_id', 'unknown' )
- if test_id != 'unknown':
- # The test_id looks something like:
- # test_tool_000003 (functional.test_toolbox.TestForTool_localhost:9009/repos/test/bwa_mappers/bwa_color_wrapper/1.0.2)
- # Highlight the tool id and version.
- test_id_items = test_id.split( '/' )
- tool_id = test_id_items[ -2 ]
- tool_id = '<b>%s</b>' % tool_id
- test_id_items[ -2 ] = tool_id
- tool_version = test_id_items[ -1 ]
- tool_version = '<b>%s</b>' % ( tool_version.rstrip( ')' ) )
- tool_version = '%s)' % tool_version
- test_id_items[ -1 ] = tool_version
- test_id = '/'.join( test_id_items )
- test_id_items = test_id.split( ' ' )
- test_num = test_id_items[ 0 ]
- test_num = '<b>%s</b>' % test_num
- test_id_items[ 0 ] = test_num
- test_id = ' '.join( test_id_items )
-
- test_status = '<font color="green">Test passed</font>'
- stdout = test_results_dict.get( 'stdout', '' )
- if stdout:
- stdout = to_safe_string( stdout, to_html=True )
+ <label>Galaxy database version:</label>
+ ${test_environment_dict.get( 'galaxy_database_version', 'unknown' ) | h}
+ <div style="clear: both"></div>
+ </div>
+ %if test_errors:
+ <div class="form-row">
+ <table width="100%">
+ <tr bgcolor="#D8D8D8" width="100%"><td><b>Tests that failed</td></tr>
+ </table>
+ </div>
+ <div class="form-row">
+ <table class="grid">
+ %for test_results_dict in test_errors:
+ <%
+ test_id = test_results_dict.get( 'test_id', 'unknown' )
+ tool_id = test_results_dict.get( 'tool_id', 'unknown' )
+ tool_version = test_results_dict.get( 'tool_version', 'unknown' )
+ test_status = '<font color="red">Test failed</font>'
+
stderr = test_results_dict.get( 'stderr', '' )
if stderr:
stderr = to_safe_string( stderr, to_html=True )
- test_status = '<font color="red">Test failed</font>'
traceback = test_results_dict.get( 'traceback', '' )
if traceback:
traceback = to_safe_string( traceback, to_html=True )
- test_status = '<font color="red">Test failed</font>'
- %>
- <tr>
- <td bgcolor="#FFFFCC"><b>Test id</b></td>
- <td bgcolor="#FFFFCC">${test_id}</td>
- </tr>
- <tr>
- <td><b>Status</b></td>
- <td>${test_status}</td>
- </tr>
- %if repository_metadata.tools_functionally_correct:
+ %><tr>
- <td><b>Stdout</b></td>
- <td>${stdout}</td>
+ <td colspan="2" bgcolor="#FFFFCC">Tool id: <b>${tool_id}</b> version: <b>${tool_version}</b></td></tr>
- %else:
+ <tr>
+ <td><b>Test id</b></td>
+ <td>${test_id}</td>
+ </tr>
+ <tr>
+ <td><b>Status</b></td>
+ <td>${test_status}</td>
+ </tr><tr><td><b>Stderr</b></td><td>${stderr}</td>
@@ -214,11 +205,71 @@
<td><b>Traceback</b></td><td>${traceback}</td></tr>
- %endif
- %endfor
- </table>
- <div style="clear: both"></div>
- </div>
+ %endfor
+ </table>
+ <div style="clear: both"></div>
+ </div>
+ %endif
+ %if tests_passed:
+ <div class="form-row">
+ <table width="100%">
+ <tr bgcolor="#D8D8D8" width="100%"><td><b>Tests that passed successfully</b></td></tr>
+ </table>
+ </div>
+ <div class="form-row">
+ <table class="grid">
+ %for test_results_dict in tests_passed:
+ <%
+ test_id = test_results_dict.get( 'test_id', 'unknown' )
+ tool_id = test_results_dict.get( 'tool_id', 'unknown' )
+ tool_version = test_results_dict.get( 'tool_version', 'unknown' )
+ test_status = '<font color="green">Test passed</font>'
+ %>
+ <tr>
+ <td colspan="2" bgcolor="#FFFFCC">Tool id: <b>${tool_id}</b> version: <b>${tool_version}</b></td>
+ </tr>
+ <tr>
+ <td><b>Test id</b></td>
+ <td>${test_id}</td>
+ </tr>
+ <tr>
+ <td><b>Status</b></td>
+ <td>${test_status}</td>
+ </tr>
+ %endfor
+ </table>
+ </div>
+ %endif
+ %if invalid_tests:
+ <div class="form-row">
+ <table width="100%">
+ <tr bgcolor="#D8D8D8" width="100%"><td><b>Invalid tests</b></td></tr>
+ </table>
+ </div>
+ <div class="form-row">
+ <table class="grid">
+ %for test_results_dict in invalid_tests:
+ <%
+ guid = test_results_dict.get( 'tool_guid', None )
+ tool_id = test_results_dict.get( 'tool_id', None )
+ tool_version = test_results_dict.get( 'tool_version', None )
+ reason_test_is_invalid = test_results_dict.get( 'reason_test_is_invalid', 'unknown' )
+ if reason_test_is_invalid:
+ reason_test_is_invalid = to_safe_string( reason_test_is_invalid, to_html=True )
+ %>
+ %if tool_id or tool_version:
+ <tr>
+ <td colspan="2" bgcolor="#FFFFCC">Tool id: <b>${tool_id}</b> version: <b>${tool_version}</b></td>
+ </tr>
+ %endif
+ <tr>
+ <td><b>Reason test is invalid</b></td>
+ <td>${reason_test_is_invalid}</td>
+ </tr>
+ %endfor
+ </table>
+ </div>
+ %endif
</div></div>
%endif
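The template above iterates over the `test_errors`, `tests_passed`, and `invalid_tests` lists in a `repository_status` dict. A minimal sketch (Python 3, with a hypothetical example dict; the key names come from the diffs below) of how a caller could summarize that structure before rendering:

```python
def summarize_repository_status(repository_status):
    """Return (failed, passed, invalid) counts for the display headers."""
    return (
        len(repository_status.get('test_errors', [])),
        len(repository_status.get('tests_passed', [])),
        len(repository_status.get('invalid_tests', [])),
    )

# Hypothetical repository_status, shaped like what the template consumes.
status = {
    'test_environment': {'galaxy_database_version': 'unknown'},
    'test_errors': [
        {'test_id': 'test_0000', 'tool_id': 'my_tool', 'tool_version': '1.0',
         'stderr': 'boom', 'traceback': 'Traceback ...'},
    ],
    'tests_passed': [],
    'invalid_tests': [],
}
```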
Repository URL: https://bitbucket.org/galaxy/galaxy-central/
--
This is a commit notification from bitbucket.org. You are receiving
this because you have the service enabled, addressing the recipient of
this email.
commit/galaxy-central: inithello: Refactor summary output. Improve handling of empty tool_test_errors dict. Refactor repository information retrieval.
by commits-noreply@bitbucket.org 11 Mar '13
1 new commit in galaxy-central:
https://bitbucket.org/galaxy/galaxy-central/commits/0a6b63eca3ff/
changeset: 0a6b63eca3ff
user: inithello
date: 2013-03-11 05:47:47
summary: Refactor summary output. Improve handling of empty tool_test_errors dict. Refactor repository information retrieval.
affected #: 2 files
diff -r 2b4d6af45a04ddb49c2c87a0ead3874503d35832 -r 0a6b63eca3ff6a72cd874bf36f73a2b5905aff65 install_and_test_tool_shed_repositories.sh
--- a/install_and_test_tool_shed_repositories.sh
+++ b/install_and_test_tool_shed_repositories.sh
@@ -1,7 +1,6 @@
#!/bin/sh
# A good place to look for nose info: http://somethingaboutorange.com/mrl/projects/nose/
-#rm -f ./test/tool_shed/run_functional_tests.log
-python test/install_and_test_tool_shed_repositories/functional_tests.py -v --with-nosehtml --html-report-file ./test/install_and_test_tool_shed_repositories/run_functional_tests.html test/install_and_test_tool_shed_repositories/functional/test_install_repositories.py test/functional/test_toolbox.py
+python test/install_and_test_tool_shed_repositories/functional_tests.py $* -v --with-nosehtml --html-report-file ./test/install_and_test_tool_shed_repositories/run_functional_tests.html test/install_and_test_tool_shed_repositories/functional/test_install_repositories.py test/functional/test_toolbox.py
diff -r 2b4d6af45a04ddb49c2c87a0ead3874503d35832 -r 0a6b63eca3ff6a72cd874bf36f73a2b5905aff65 test/install_and_test_tool_shed_repositories/functional_tests.py
--- a/test/install_and_test_tool_shed_repositories/functional_tests.py
+++ b/test/install_and_test_tool_shed_repositories/functional_tests.py
@@ -242,6 +242,8 @@
api_path = metadata_dict[ 'url' ].split( '/' )
api_url = get_api_url( base=tool_shed_url, parts=api_path )
repository_metadata = json_from_url( api_url )
+ if repository_metadata[ 'tool_test_errors' ] is None:
+ return {}
return repository_metadata[ 'tool_test_errors' ]
def json_from_url( url ):
@@ -276,16 +278,22 @@
result = test_runner.run( tests )
return result, test_config.plugins._plugins
+def show_summary_output( repository_info_dicts ):
+ repositories_by_owner = dict()
+ for repository in repository_info_dicts:
+ if repository[ 'owner' ] not in repositories_by_owner:
+ repositories_by_owner[ repository[ 'owner' ] ] = []
+ repositories_by_owner[ repository[ 'owner' ] ].append( repository )
+ for owner in repositories_by_owner:
+ print "# "
+ for repository in repositories_by_owner[ owner ]:
+ print "# %s owned by %s, changeset revision %s" % ( repository[ 'name' ], repository[ 'owner' ], repository[ 'changeset_revision' ] )
+
def main():
# ---- Configuration ------------------------------------------------------
galaxy_test_host = os.environ.get( 'GALAXY_INSTALL_TEST_HOST', default_galaxy_test_host )
galaxy_test_port = os.environ.get( 'GALAXY_INSTALL_TEST_PORT', str( default_galaxy_test_port_max ) )
- # Initialize some variables for the summary that will be printed to stdout.
- repositories_tested = 0
- repositories_passed = []
- repositories_failed = []
-
tool_path = os.environ.get( 'GALAXY_INSTALL_TEST_TOOL_PATH', 'tools' )
if 'HTTP_ACCEPT_LANGUAGE' not in os.environ:
os.environ[ 'HTTP_ACCEPT_LANGUAGE' ] = default_galaxy_locales
@@ -444,38 +452,19 @@
log.info( "The embedded Galaxy application is running on %s:%s" % ( galaxy_test_host, galaxy_test_port ) )
log.info( "Repositories will be installed from the tool shed at %s" % galaxy_tool_shed_url )
success = False
+ # Initialize some variables for the summary that will be printed to stdout.
+ repositories_tested = 0
+ repositories_passed = []
+ repositories_failed = []
+ repositories_failed_install = []
try:
- # Iterate through a list of repository info dicts.
+ detailed_repository_list = []
+ # Get a list of repositories to test from the tool shed specified in the GALAXY_INSTALL_TEST_TOOL_SHED_URL environment variable.
log.info( "Retrieving repositories to install from the URL:\n%s\n" % str( galaxy_tool_shed_url ) )
repositories_to_install = get_repositories_to_install( galaxy_tool_shed_url, source='url' )
log.info( "Retrieved %d repositories to install..." % len( repositories_to_install ) )
repositories_tested = len( repositories_to_install )
- # This loop will iterate through the list of repositories returned by the above method, skipping any that are marked
- # as deleted. For each repository, it will generate a test method that will use Twill to install that repository into the
- # embedded Galaxy application that was started up, selecting to install repository and tool dependencies if they are
- # defined. If the installation completes successfully, it will then generate a test case for each functional test
- # defined for each tool in the repository, and execute the generated test cases. When this completes, it will record
- # the result of the tests, and if any failed, the traceback and captured output of the tool that was run.
- # After all tests have completed, the repository is uninstalled, so that the previous test cases don't interfere with
- # the next repository's functional tests.
for repository_to_install_dict in repositories_to_install:
- """
- Each repository_to_install_dict looks something like:
- {
- "changeset_revision": "13fa22a258b5",
- "downloadable": true,
- "id": "529fd61ab1c6cc36",
- "malicious": false,
- "repository_id": "529fd61ab1c6cc36",
- "url": "/api/repository_revisions/529fd61ab1c6cc36"
- }
- """
- repository_status = dict()
- repository_id = repository_to_install_dict.get( 'repository_id', None )
- changeset_revision = repository_to_install_dict.get( 'changeset_revision', None )
- metadata_revision_id = repository_to_install_dict.get( 'id', None )
- # Add the URL for the tool shed we're installing from, so the automated installation methods go to the right place.
- repository_to_install_dict[ 'tool_shed_url' ] = galaxy_tool_shed_url
# We need to get some details from the tool shed API, such as repository name and owner, to pass on to the
# module that will generate the install methods.
repository_info_dict = get_repository_info_from_api( galaxy_tool_shed_url, repository_to_install_dict )
@@ -483,13 +472,31 @@
# and functionally correct tools that someone has previously installed. Deleted repositories have never been installed,
# and therefore do not need to be checked. If they are undeleted, this script will then test them the next time it runs.
if repository_info_dict[ 'deleted' ]:
- log.info( "Skipping revision %s of repository id %s since the repository is deleted..." % ( str( changeset_revision ), str( repository_id ) ) )
+ log.info( "Skipping revision %s of repository id %s (%s/%s) since the repository is deleted..." % \
+ ( repository_info_dict[ 'changeset_revision' ],
+ repository_info_dict[ 'repository_id' ],
+ repository_info_dict[ 'owner' ],
+ repository_info_dict[ 'name' ] ) )
continue
- log.info( "Installing and testing revision %s of repository id %s..." % ( str( changeset_revision ), str( repository_id ) ) )
- # Add repository details to the basic repository dict.
- repository_dict = dict( repository_info_dict.items() + repository_to_install_dict.items() )
+ # Now merge the dict returned from /api/repository_revisions with the detailed dict we just retrieved.
+ detailed_repository_list.append( dict( repository_info_dict.items() + repository_to_install_dict.items() ) )
+ if '-list_repositories' in sys.argv:
+ log.info( "The API returned the following repositories, not counting deleted:" )
+ for repository_info_dict in detailed_repository_list:
+ log.info( "%s owned by %s changeset revision %s" % ( repository_info_dict.get( 'name', None ),
+ repository_info_dict.get( 'owner', None ),
+ repository_info_dict.get( 'changeset_revision', None ) ) )
+ # This loop will iterate through the list of repositories generated by the above code, having already filtered out any
+ # that were marked as deleted. For each repository, it will generate a test method that will use Twill to install that
+ # repository into the embedded Galaxy application that was started up, selecting to install repository and tool
+ # dependencies if they are defined. If the installation completes successfully, it will then generate a test case for
+ # each functional test defined for each tool in the repository, and execute the generated test cases. When this completes,
+ # it will record the result of the tests, and if any failed, the traceback and captured output of the tool that was run.
+ # After all tests have completed, the repository is uninstalled, so that the previous test cases don't interfere with
+ # the next repository's functional tests.
+ for repository_info_dict in detailed_repository_list:
"""
- After the addition of the repository details, each repository_dict should now contain something like:
+ Each repository_info_dict looks something like:
{
"changeset_revision": "13fa22a258b5",
"contents_url": "/api/repositories/529fd61ab1c6cc36/contents",
@@ -510,11 +517,19 @@
"user_id": "529fd61ab1c6cc36"
}
"""
- name = repository_dict[ 'name' ]
- owner = repository_dict[ 'owner' ]
+ repository_status = dict()
+ repository_id = repository_info_dict.get( 'repository_id', None )
+ changeset_revision = repository_info_dict.get( 'changeset_revision', None )
+ metadata_revision_id = repository_info_dict.get( 'id', None )
+ # Add the URL for the tool shed we're installing from, so the automated installation methods go to the right place.
+ repository_info_dict[ 'tool_shed_url' ] = galaxy_tool_shed_url
+ log.info( "Installing and testing revision %s of repository id %s..." % ( str( changeset_revision ), str( repository_id ) ) )
+ # Get the name and owner out of the repository info dict.
+ name = repository_info_dict[ 'name' ]
+ owner = repository_info_dict[ 'owner' ]
# Use the repository information dict to generate an install method that will install the repository into the embedded
# Galaxy application, with tool dependencies and repository dependencies, if any.
- test_install_repositories.generate_install_method( repository_dict )
+ test_install_repositories.generate_install_method( repository_info_dict )
os.environ[ 'GALAXY_INSTALL_TEST_HOST' ] = galaxy_test_host
# Configure nose to run the install method as a test.
test_config = nose.config.Config( env=os.environ, plugins=nose.plugins.manager.DefaultPluginManager() )
@@ -526,7 +541,7 @@
# If the installation succeeds, configure and run functional tests for this repository. This is equivalent to
# sh run_functional_tests.sh -installed
if success:
- log.debug( 'Installation of %s succeeded, running all defined functional tests.' % repository_dict[ 'name' ] )
+ log.debug( 'Installation of %s succeeded, running all defined functional tests.' % name )
# Generate the shed_tools_dict that specifies the location of test data contained within this repository. If the repository
# does not have a test-data directory, this will return has_test_data = False, and we will set the do_not_test flag to True,
# and the tools_functionally_correct flag to False, as well as updating tool_test_errors.
@@ -574,17 +589,18 @@
# },
# ]
# }
- repository_status = get_tool_test_errors_from_api( galaxy_tool_shed_url, repository_dict )
+ repository_status = get_tool_test_errors_from_api( galaxy_tool_shed_url, repository_info_dict )
+ if 'test_environment' not in repository_status:
+ repository_status[ 'test_environment' ] = {}
test_environment = get_test_environment( repository_status[ 'test_environment' ] )
test_environment[ 'galaxy_database_version' ] = get_database_version( app )
test_environment[ 'galaxy_revision'] = get_repository_current_revision( os.getcwd() )
repository_status[ 'test_environment' ] = test_environment
repository_status[ 'tests_passed' ] = []
repository_status[ 'test_errors' ] = []
+ repository_status[ 'invalid_tests' ] = []
if not has_test_data:
log.error( 'Test data is missing for this repository. Updating repository and skipping functional tests.' )
- repository_status[ 'test_environment' ] = get_test_environment()
- test_id = 'Find functional test data for %s' % repository_dict[ 'name' ]
# Record the lack of test data.
test_errors = dict( tool_id=None, tool_version=None, tool_guid=None,
reason_test_is_invalid="Repository %s is missing a test-data directory." % name )
@@ -593,7 +609,7 @@
register_test_result( galaxy_tool_shed_url, metadata_revision_id, repository_status, tests_passed=False )
# Run the cleanup method. This removes tool functional test methods from the test_toolbox module and uninstalls the
# repository using Twill.
- execute_uninstall_method( repository_dict )
+ execute_uninstall_method( repository_info_dict )
# Set the test_toolbox.toolbox module-level variable to the new app.toolbox.
test_toolbox.toolbox = app.toolbox
repositories_failed.append( dict( name=name, owner=owner, changeset_revision=changeset_revision ) )
@@ -618,9 +634,7 @@
# Run the configured tests.
result, test_plugins = run_tests( test_config )
success = result.wasSuccessful()
- # Record some information about the environment in which this test was run, in case a failure is specific to a certain processor
- # architecture or operating system.
- repository_dict[ 'test_environment' ] = test_environment
+ # Use the ReportResults nose plugin to get a list of tests that passed.
for plugin in test_plugins:
if hasattr( plugin, 'getTestStatus' ):
test_identifier = '%s/%s' % ( owner, name )
@@ -637,10 +651,8 @@
# controller with the status of the test. This also sets the do_not_test and tools_functionally correct flags, and
# updates the time_last_tested field to today's date.
repositories_passed.append( dict( name=name, owner=owner, changeset_revision=changeset_revision ) )
- repository_status[ 'test_environment' ] = test_environment
register_test_result( galaxy_tool_shed_url, metadata_revision_id, repository_status, tests_passed=True )
- log.debug( 'Revision %s of repository %s installed and passed functional tests.' % \
- ( repository_dict[ 'changeset_revision' ], repository_dict[ 'name' ] ) )
+ log.debug( 'Revision %s of repository %s installed and passed functional tests.' % ( changeset_revision, name ) )
else:
# If the functional tests fail, log the output and update the failed changeset revision's metadata record in the tool shed via the API.
for failure in result.failures:
@@ -686,14 +698,15 @@
repositories_failed.append( dict( name=name, owner=owner, changeset_revision=changeset_revision ) )
register_test_result( galaxy_tool_shed_url, metadata_revision_id, repository_status, tests_passed=False )
log.debug( 'Revision %s of repository %s installed successfully, but did not pass functional tests.' % \
- ( repository_dict[ 'changeset_revision' ], repository_dict[ 'name' ] ) )
+ ( changeset_revision, name ) )
# Run the cleanup method. This removes tool functional test methods from the test_toolbox module and uninstalls the
# repository using Twill.
- execute_uninstall_method( repository_dict )
+ execute_uninstall_method( repository_info_dict )
# Set the test_toolbox.toolbox module-level variable to the new app.toolbox.
test_toolbox.toolbox = app.toolbox
else:
- log.debug( 'Repository %s failed to install correctly.' % repository_dict[ 'name' ] )
+ repositories_failed_install.append( dict( name=name, owner=owner, changeset_revision=changeset_revision ) )
+ log.debug( 'Repository %s failed to install correctly.' % name )
except:
log.exception( "Failure running tests" )
@@ -728,29 +741,17 @@
print "# Repository revisions tested: %d" % repositories_tested
if repositories_tested > 0:
if repositories_passed:
- passed_repositories_by_owner = dict()
- for repository in repositories_passed:
- if repository[ 'owner' ] not in passed_repositories_by_owner:
- passed_repositories_by_owner[ repository[ 'owner' ] ] = []
- passed_repositories_by_owner[ repository[ 'owner' ] ].append( repository )
print "# "
print "# Repositories passed:"
- for owner in passed_repositories_by_owner:
- print "# "
- for repository in passed_repositories_by_owner[ owner ]:
- print "# %s owned by %s, changeset revision %s" % ( repository[ 'name' ], repository[ 'owner' ], repository[ 'changeset_revision' ] )
+ show_summary_output( repositories_passed )
if repositories_failed:
- failed_repositories_by_owner = dict()
- for repository in repositories_failed:
- if repository[ 'owner' ] not in failed_repositories_by_owner:
- failed_repositories_by_owner[ repository[ 'owner' ] ] = []
- failed_repositories_by_owner[ repository[ 'owner' ] ].append( repository )
print "# "
print "# Repositories failed:"
- for owner in failed_repositories_by_owner:
- print "# "
- for repository in failed_repositories_by_owner[ owner ]:
- print "# %s owned by %s, changeset revision %s" % ( repository[ 'name' ], repository[ 'owner' ], repository[ 'changeset_revision' ] )
+ show_summary_output( repositories_failed )
+ if repositories_failed_install:
+ print "# "
+ print "# Repositories not installed correctly:"
+ show_summary_output( repositories_failed_install )
print "####################################################################################"
if success:
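The `show_summary_output` refactor in this commit replaces two copies of the group-by-owner loop with one helper. A standalone sketch (Python 3; the original uses Python 2 `print` statements, so this version returns the lines instead):

```python
def group_by_owner(repository_info_dicts):
    """Group repository dicts by their 'owner' key."""
    repositories_by_owner = {}
    for repository in repository_info_dicts:
        repositories_by_owner.setdefault(repository['owner'], []).append(repository)
    return repositories_by_owner

def show_summary_output(repository_info_dicts):
    """Build the per-owner summary lines printed at the end of a test run."""
    lines = []
    for owner, repositories in group_by_owner(repository_info_dicts).items():
        lines.append('# ')
        for repository in repositories:
            lines.append('# %s owned by %s, changeset revision %s'
                         % (repository['name'], repository['owner'],
                            repository['changeset_revision']))
    return lines
```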
Repository URL: https://bitbucket.org/galaxy/galaxy-central/
commit/galaxy-central: inithello: Explicitly clear ReportResults.passed after each set of tests. Group test results by repository owner, and separate with a blank line.
by commits-noreply@bitbucket.org 08 Mar '13
1 new commit in galaxy-central:
https://bitbucket.org/galaxy/galaxy-central/commits/2b4d6af45a04/
changeset: 2b4d6af45a04
user: inithello
date: 2013-03-08 23:03:32
summary: Explicitly clear ReportResults.passed after each set of tests. Group test results by repository owner, and separate with a blank line.
affected #: 1 file
diff -r f25f3fee4da76fc0660ba27a1b26673d7592ddc1 -r 2b4d6af45a04ddb49c2c87a0ead3874503d35832 test/install_and_test_tool_shed_repositories/functional_tests.py
--- a/test/install_and_test_tool_shed_repositories/functional_tests.py
+++ b/test/install_and_test_tool_shed_repositories/functional_tests.py
@@ -120,7 +120,7 @@
class ReportResults( Plugin ):
'''Simple Nose plugin to record the IDs of all tests run, regardless of success.'''
name = "reportresults"
- passed = []
+ passed = dict()
def options( self, parser, env=os.environ ):
super( ReportResults, self ).options( parser, env=env )
@@ -137,10 +137,18 @@
# Rearrange the test ID to match the format that is produced in test_results.failures
test_id_parts = test_id.split( '.' )
fixed_test_id = '%s (%s)' % ( test_id_parts[ -1 ], '.'.join( test_id_parts[ :-1 ] ) )
- self.passed.append( fixed_test_id )
+ test_parts = fixed_test_id.split( '/' )
+ owner = test_parts[ -4 ]
+ name = test_parts[ -3 ]
+ test_identifier = '%s/%s' % ( owner, name )
+ if test_identifier not in self.passed:
+ self.passed[ test_identifier ] = []
+ self.passed[ test_identifier ].append( fixed_test_id )
- def getTestStatus( self ):
- return self.passed
+ def getTestStatus( self, test_identifier ):
+ tests_passed = self.passed[ test_identifier ]
+ del self.passed[ test_identifier ]
+ return tests_passed
def execute_uninstall_method( repository_dict ):
# Delete any configured tool functional tests from the test_toolbox.__dict__, otherwise nose will find them
@@ -615,7 +623,8 @@
repository_dict[ 'test_environment' ] = test_environment
for plugin in test_plugins:
if hasattr( plugin, 'getTestStatus' ):
- tests_passed = plugin.getTestStatus()
+ test_identifier = '%s/%s' % ( owner, name )
+ tests_passed = plugin.getTestStatus( test_identifier )
break
repository_status[ 'tests_passed' ] = []
for test_id in tests_passed:
@@ -670,7 +679,6 @@
if output_type in tmp_output:
test_status[ output_type ] = '\n'.join( tmp_output[ output_type ] )
repository_status[ 'test_errors' ].append( test_status )
- log.debug( to_json_string( repository_status, indent=2, sort_keys=True ) )
# Call the register_test_result method, which executes a PUT request to the repository_revisions API controller with the outcome
# of the tests, and updates tool_test_errors with the relevant log data.
# This also sets the do_not_test and tools_functionally correct flags to the appropriate values, and updates the time_last_tested
@@ -720,13 +728,29 @@
print "# Repository revisions tested: %d" % repositories_tested
if repositories_tested > 0:
if repositories_passed:
+ passed_repositories_by_owner = dict()
+ for repository in repositories_passed:
+ if repository[ 'owner' ] not in passed_repositories_by_owner:
+ passed_repositories_by_owner[ repository[ 'owner' ] ] = []
+ passed_repositories_by_owner[ repository[ 'owner' ] ].append( repository )
+ print "# "
print "# Repositories passed:"
- for repository in repositories_passed:
- print "# %s owned by %s, changeset revision %s" % ( repository[ 'name' ], repository[ 'owner' ], repository[ 'changeset_revision' ] )
+ for owner in passed_repositories_by_owner:
+ print "# "
+ for repository in passed_repositories_by_owner[ owner ]:
+ print "# %s owned by %s, changeset revision %s" % ( repository[ 'name' ], repository[ 'owner' ], repository[ 'changeset_revision' ] )
if repositories_failed:
+ failed_repositories_by_owner = dict()
+ for repository in repositories_failed:
+ if repository[ 'owner' ] not in failed_repositories_by_owner:
+ failed_repositories_by_owner[ repository[ 'owner' ] ] = []
+ failed_repositories_by_owner[ repository[ 'owner' ] ].append( repository )
+ print "# "
print "# Repositories failed:"
- for repository in repositories_failed:
- print "# %s owned by %s, changeset revision %s" % ( repository[ 'name' ], repository[ 'owner' ], repository[ 'changeset_revision' ] )
+ for owner in failed_repositories_by_owner:
+ print "# "
+ for repository in failed_repositories_by_owner[ owner ]:
+ print "# %s owned by %s, changeset revision %s" % ( repository[ 'name' ], repository[ 'owner' ], repository[ 'changeset_revision' ] )
print "####################################################################################"
if success:
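The bookkeeping this commit adds to the `ReportResults` plugin can be sketched on its own (Python 3, outside nose; class and method names here are illustrative): results are keyed by `owner/name`, and reading a repository's results also clears them, so one repository's passed tests never leak into the next report.

```python
class PassedTests:
    """Per-repository record of passed test IDs, cleared on read."""

    def __init__(self):
        self.passed = {}

    def add(self, owner, name, fixed_test_id):
        test_identifier = '%s/%s' % (owner, name)
        self.passed.setdefault(test_identifier, []).append(fixed_test_id)

    def get_test_status(self, test_identifier):
        # dict.pop both returns and removes the entry, matching the
        # explicit del in the plugin's getTestStatus.
        return self.passed.pop(test_identifier, [])
```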
Repository URL: https://bitbucket.org/galaxy/galaxy-central/
commit/galaxy-central: dannon: Remove unused yet exposed 'stdoutX' method of the datasets controller.
by commits-noreply@bitbucket.org 08 Mar '13
1 new commit in galaxy-central:
https://bitbucket.org/galaxy/galaxy-central/commits/f25f3fee4da7/
changeset: f25f3fee4da7
user: dannon
date: 2013-03-08 22:01:25
summary: Remove unused yet exposed 'stdoutX' method of the datasets controller.
affected #: 1 file
diff -r 619995bcb99feda43d9a369a7aa563108a19f3c7 -r f25f3fee4da76fc0660ba27a1b26673d7592ddc1 lib/galaxy/webapps/galaxy/controllers/dataset.py
--- a/lib/galaxy/webapps/galaxy/controllers/dataset.py
+++ b/lib/galaxy/webapps/galaxy/controllers/dataset.py
@@ -180,16 +180,6 @@
if not hda or not self._can_access_dataset( trans, hda ):
return trans.show_error_message( "Either this dataset does not exist or you do not have permission to access it." )
return trans.fill_template( "dataset/errors.mako", hda=hda )
- @web.expose
- def stdoutX( self, trans, dataset_id=None, **kwargs ):
- trans.response.set_content_type( 'text/plain' )
- try:
- hda = trans.sa_session.query( trans.app.model.HistoryDatasetAssociation ).get( trans.security.decode_id( dataset_id ) )
- assert hda and self._can_access_dataset( trans, hda )
- job = hda.creating_job_associations[0].job
- except:
- return "Invalid dataset ID or you are not allowed to access this dataset"
- return job.stdout
@web.expose
def stdout( self, trans, dataset_id=None, **kwargs ):
Repository URL: https://bitbucket.org/galaxy/galaxy-central/
commit/galaxy-central: inithello: Enhance the script that checks repositories for functional tests, adding support for multiple web frontends.
by commits-noreply@bitbucket.org 08 Mar '13
1 new commit in galaxy-central:
https://bitbucket.org/galaxy/galaxy-central/commits/619995bcb99f/
changeset: 619995bcb99f
user: inithello
date: 2013-03-08 20:44:57
summary: Enhance the script that checks repositories for functional tests, adding support for multiple web frontends.
affected #: 1 file
diff -r c937f80188d8026f2074f8a897e2ff2972d8da6d -r 619995bcb99feda43d9a369a7aa563108a19f3c7 lib/tool_shed/scripts/check_repositories_for_functional_tests.py
--- a/lib/tool_shed/scripts/check_repositories_for_functional_tests.py
+++ b/lib/tool_shed/scripts/check_repositories_for_functional_tests.py
@@ -38,6 +38,12 @@
'''Script that checks repositories to see if the tools contained within them have functional tests defined.'''
parser = OptionParser()
parser.add_option( "-i", "--info_only", action="store_true", dest="info_only", help="info about the requested action", default=False )
+ parser.add_option( "-s",
+ "--section",
+ action="store",
+ dest="section",
+ default='server:main',
+ help="which .ini file section to extract the host and port from" )
parser.add_option(
"-v", "--verbose",
action="count", dest="verbosity",
@@ -57,10 +63,11 @@
config_dict[key] = value
config = tool_shed_config.Configuration( **config_dict )
+ config_section = options.section
now = strftime( "%Y-%m-%d %H:%M:%S" )
print "#############################################################################"
print "# %s - Checking repositories for tools with functional tests." % now
- print "# This tool shed is configured to listen on %s:%s." % ( config_parser.get( 'server:main', 'host' ), config_parser.get( 'server:main', 'port' ) )
+ print "# This tool shed is configured to listen on %s:%s." % ( config_parser.get( config_section, 'host' ), config_parser.get( config_section, 'port' ) )
app = FlagRepositoriesApplication( config )
if options.info_only:
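The new `--section` option lets the host/port lookup target any .ini section instead of the hard-coded `server:main`. A sketch of the idea using Python 3's `configparser` (the script itself uses Python 2's `ConfigParser` and `OptionParser`; the section name below is an example):

```python
import configparser
import io

def get_listen_address(ini_text, section='server:main'):
    """Read host and port from the given section of an .ini document."""
    config_parser = configparser.ConfigParser()
    config_parser.read_file(io.StringIO(ini_text))
    return config_parser.get(section, 'host'), config_parser.get(section, 'port')

# Example config with a non-default frontend section.
ini = """
[server:web0]
host = 127.0.0.1
port = 9009
"""
```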
Repository URL: https://bitbucket.org/galaxy/galaxy-central/
4 new commits in galaxy-central:
https://bitbucket.org/galaxy/galaxy-central/commits/8e00d2fd674a/
changeset: 8e00d2fd674a
user: jmchilton
date: 2013-02-10 17:16:26
summary: Breakup big get_path_paste_uploaded_datasets function in library_common.py into smaller more extensible pieces. This is useful in implementing multiple file dataset uploads downstream but is also a good refactoring on its own.
affected #: 1 file
diff -r 506484344db3a370f8ae24096041d38557d1967e -r 8e00d2fd674acafb36e7118afaaf7f739235be2e lib/galaxy/webapps/galaxy/controllers/library_common.py
--- a/lib/galaxy/webapps/galaxy/controllers/library_common.py
+++ b/lib/galaxy/webapps/galaxy/controllers/library_common.py
@@ -1075,6 +1075,7 @@
return output
def make_library_uploaded_dataset( self, trans, cntrller, params, name, path, type, library_bunch, in_folder=None ):
link_data_only = params.get( 'link_data_only', 'copy_files' )
+ file_type = params.file_type
library_bunch.replace_dataset = None # not valid for these types of upload
uploaded_dataset = util.bunch.Bunch()
new_name = name
@@ -1089,7 +1090,7 @@
uploaded_dataset.path = path
uploaded_dataset.type = type
uploaded_dataset.ext = None
- uploaded_dataset.file_type = params.file_type
+ uploaded_dataset.file_type = file_type
uploaded_dataset.dbkey = params.dbkey
uploaded_dataset.space_to_tab = params.space_to_tab
if in_folder:
@@ -1145,44 +1146,57 @@
uploaded_datasets.append( self.make_library_uploaded_dataset( trans, cntrller, params, name, file, 'server_dir', library_bunch ) )
return uploaded_datasets, 200, None
def get_path_paste_uploaded_datasets( self, trans, cntrller, params, library_bunch, response_code, message ):
+ preserve_dirs = util.string_as_bool( params.get( 'preserve_dirs', False ) )
+ uploaded_datasets = []
+ (files_and_folders, _response_code, _message) = self._get_path_files_and_folders(params, preserve_dirs)
+ if _response_code:
+ return (uploaded_datasets, _response_code, _message)
+ for (path, name, folder) in files_and_folders:
+ uploaded_datasets.append( self.make_library_uploaded_dataset( trans, cntrller, params, name, path, 'path_paste', library_bunch, folder ) )
+ return uploaded_datasets, 200, None
+
+ def _get_path_files_and_folders( self, params, preserve_dirs ):
+ problem_response = self._check_path_paste_params( params )
+ if problem_response:
+ return problem_response
+ files_and_folders = []
+ for (line, path) in self._paths_list( params ):
+ line_files_and_folders = self._get_single_path_files_and_folders( line, path, preserve_dirs )
+ files_and_folders.extend( line_files_and_folders )
+ return files_and_folders, None, None
+
+ def _get_single_path_files_and_folders(self, line, path, preserve_dirs):
+ files_and_folders = []
+ if os.path.isfile( path ):
+ name = os.path.basename( path )
+ files_and_folders.append((path, name, None))
+ for basedir, dirs, files in os.walk( line ):
+ for file in files:
+ file_path = os.path.abspath( os.path.join( basedir, file ) )
+ if preserve_dirs:
+ in_folder = os.path.dirname( file_path.replace( path, '', 1 ).lstrip( '/' ) )
+ else:
+ in_folder = None
+ files_and_folders.append((file_path, file, in_folder))
+ return files_and_folders
+ def _paths_list(self, params):
+ return [ (l.strip(), os.path.abspath(l.strip())) for l in params.filesystem_paths.splitlines() if l.strip() ]
+
+ def _check_path_paste_params(self, params):
if params.get( 'filesystem_paths', '' ) == '':
message = "No paths entered in the upload form"
response_code = 400
return None, response_code, message
- preserve_dirs = util.string_as_bool( params.get( 'preserve_dirs', False ) )
- # locate files
bad_paths = []
- uploaded_datasets = []
- for line in [ l.strip() for l in params.filesystem_paths.splitlines() if l.strip() ]:
- path = os.path.abspath( line )
+ for (_, path) in self._paths_list( params ):
if not os.path.exists( path ):
bad_paths.append( path )
- continue
- # don't bother processing if we're just going to return an error
- if not bad_paths:
- if os.path.isfile( path ):
- name = os.path.basename( path )
- uploaded_datasets.append( self.make_library_uploaded_dataset( trans, cntrller, params, name, path, 'path_paste', library_bunch ) )
- for basedir, dirs, files in os.walk( line ):
- for file in files:
- file_path = os.path.abspath( os.path.join( basedir, file ) )
- if preserve_dirs:
- in_folder = os.path.dirname( file_path.replace( path, '', 1 ).lstrip( '/' ) )
- else:
- in_folder = None
- uploaded_datasets.append( self.make_library_uploaded_dataset( trans,
- cntrller,
- params,
- file,
- file_path,
- 'path_paste',
- library_bunch,
- in_folder ) )
if bad_paths:
message = "Invalid paths:<br><ul><li>%s</li></ul>" % "</li><li>".join( bad_paths )
response_code = 400
return None, response_code, message
- return uploaded_datasets, 200, None
+ return None
+
@web.expose
def add_history_datasets_to_library( self, trans, cntrller, library_id, folder_id, hda_ids='', **kwd ):
params = util.Params( kwd )
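The path-paste helpers introduced in the hunk above can be exercised standalone. This is a minimal sketch that mirrors the diff's `_paths_list` and `_get_single_path_files_and_folders` logic outside the controller class; the function names and the `(file_path, name, in_folder)` triple shape follow the diff, but everything Galaxy-specific (trans, params, library_bunch) is stripped out:

```python
import os

def paths_list(filesystem_paths):
    # One (stripped line, absolute path) pair per non-blank pasted line.
    return [(l.strip(), os.path.abspath(l.strip()))
            for l in filesystem_paths.splitlines() if l.strip()]

def single_path_files_and_folders(line, path, preserve_dirs):
    # Collect (file_path, name, in_folder) triples for one pasted path.
    results = []
    if os.path.isfile(path):
        results.append((path, os.path.basename(path), None))
    for basedir, dirs, files in os.walk(line):
        for name in files:
            file_path = os.path.abspath(os.path.join(basedir, name))
            # Preserve the sub-directory layout only when preserve_dirs is set.
            in_folder = (os.path.dirname(file_path.replace(path, '', 1).lstrip('/'))
                         if preserve_dirs else None)
            results.append((file_path, name, in_folder))
    return results
```

With `preserve_dirs` on, a file at `<pasted>/sub/x.txt` yields `in_folder == 'sub'`, which is what lets the upload recreate the directory structure as library folders.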
https://bitbucket.org/galaxy/galaxy-central/commits/c85bf30a5c35/
changeset: c85bf30a5c35
user: jmchilton
date: 2013-02-10 17:16:27
summary: Break up the big get_server_dir_uploaded_datasets function in library_common.py into smaller, more extensible pieces. This is useful for implementing multiple-file dataset uploads downstream, but is also a good refactoring on its own.
affected #: 1 file
diff -r 8e00d2fd674acafb36e7118afaaf7f739235be2e -r c85bf30a5c35a7fab80d9499b0852eeb2e433039 lib/galaxy/webapps/galaxy/controllers/library_common.py
--- a/lib/galaxy/webapps/galaxy/controllers/library_common.py
+++ b/lib/galaxy/webapps/galaxy/controllers/library_common.py
@@ -1106,6 +1106,16 @@
trans.sa_session.flush()
return uploaded_dataset
def get_server_dir_uploaded_datasets( self, trans, cntrller, params, full_dir, import_dir_desc, library_bunch, response_code, message ):
+ dir_response = self._get_server_dir_files(params, full_dir, import_dir_desc)
+ files = dir_response[0]
+ if not files:
+ return dir_response
+ uploaded_datasets = []
+ for file in files:
+ name = os.path.basename( file )
+ uploaded_datasets.append( self.make_library_uploaded_dataset( trans, cntrller, params, name, file, 'server_dir', library_bunch ) )
+ return uploaded_datasets, 200, None
+ def _get_server_dir_files( self, params, full_dir, import_dir_desc ):
files = []
try:
for entry in os.listdir( full_dir ):
@@ -1140,11 +1150,7 @@
message = "The directory '%s' contains no valid files" % full_dir
response_code = 400
return None, response_code, message
- uploaded_datasets = []
- for file in files:
- name = os.path.basename( file )
- uploaded_datasets.append( self.make_library_uploaded_dataset( trans, cntrller, params, name, file, 'server_dir', library_bunch ) )
- return uploaded_datasets, 200, None
+ return files, None, None
def get_path_paste_uploaded_datasets( self, trans, cntrller, params, library_bunch, response_code, message ):
preserve_dirs = util.string_as_bool( params.get( 'preserve_dirs', False ) )
uploaded_datasets = []
https://bitbucket.org/galaxy/galaxy-central/commits/e1e8ddf3401a/
changeset: e1e8ddf3401a
user: dannon
date: 2013-03-08 20:32:34
summary: Merge pull request 108, https://bitbucket.org/galaxy/galaxy-central/pull-request/108/collaborate-on…
affected #: 1 file
diff -r a37fbe9cbd93a4e73a765d50be3aca4f31968f0b -r e1e8ddf3401ab1e028fbebd9fa8f4307b8d3ae03 lib/galaxy/webapps/galaxy/controllers/library_common.py
--- a/lib/galaxy/webapps/galaxy/controllers/library_common.py
+++ b/lib/galaxy/webapps/galaxy/controllers/library_common.py
@@ -1077,6 +1077,7 @@
def make_library_uploaded_dataset( self, trans, cntrller, params, name, path, type, library_bunch, in_folder=None ):
link_data_only = params.get( 'link_data_only', 'copy_files' )
uuid_str = params.get( 'uuid', None )
+ file_type = params.file_type
library_bunch.replace_dataset = None # not valid for these types of upload
uploaded_dataset = util.bunch.Bunch()
new_name = name
@@ -1091,7 +1092,7 @@
uploaded_dataset.path = path
uploaded_dataset.type = type
uploaded_dataset.ext = None
- uploaded_dataset.file_type = params.file_type
+ uploaded_dataset.file_type = file_type
uploaded_dataset.dbkey = params.dbkey
uploaded_dataset.space_to_tab = params.space_to_tab
if in_folder:
@@ -1108,6 +1109,16 @@
trans.sa_session.flush()
return uploaded_dataset
def get_server_dir_uploaded_datasets( self, trans, cntrller, params, full_dir, import_dir_desc, library_bunch, response_code, message ):
+ dir_response = self._get_server_dir_files(params, full_dir, import_dir_desc)
+ files = dir_response[0]
+ if not files:
+ return dir_response
+ uploaded_datasets = []
+ for file in files:
+ name = os.path.basename( file )
+ uploaded_datasets.append( self.make_library_uploaded_dataset( trans, cntrller, params, name, file, 'server_dir', library_bunch ) )
+ return uploaded_datasets, 200, None
+ def _get_server_dir_files( self, params, full_dir, import_dir_desc ):
files = []
try:
for entry in os.listdir( full_dir ):
@@ -1142,50 +1153,59 @@
message = "The directory '%s' contains no valid files" % full_dir
response_code = 400
return None, response_code, message
+ return files, None, None
+ def get_path_paste_uploaded_datasets( self, trans, cntrller, params, library_bunch, response_code, message ):
+ preserve_dirs = util.string_as_bool( params.get( 'preserve_dirs', False ) )
uploaded_datasets = []
- for file in files:
- name = os.path.basename( file )
- uploaded_datasets.append( self.make_library_uploaded_dataset( trans, cntrller, params, name, file, 'server_dir', library_bunch ) )
+ (files_and_folders, _response_code, _message) = self._get_path_files_and_folders(params, preserve_dirs)
+ if _response_code:
+ return (uploaded_datasets, _response_code, _message)
+ for (path, name, folder) in files_and_folders:
+ uploaded_datasets.append( self.make_library_uploaded_dataset( trans, cntrller, params, name, path, 'path_paste', library_bunch, folder ) )
return uploaded_datasets, 200, None
- def get_path_paste_uploaded_datasets( self, trans, cntrller, params, library_bunch, response_code, message ):
+
+ def _get_path_files_and_folders( self, params, preserve_dirs ):
+ problem_response = self._check_path_paste_params( params )
+ if problem_response:
+ return problem_response
+ files_and_folders = []
+ for (line, path) in self._paths_list( params ):
+ line_files_and_folders = self._get_single_path_files_and_folders( line, path, preserve_dirs )
+ files_and_folders.extend( line_files_and_folders )
+ return files_and_folders, None, None
+
+ def _get_single_path_files_and_folders(self, line, path, preserve_dirs):
+ files_and_folders = []
+ if os.path.isfile( path ):
+ name = os.path.basename( path )
+ files_and_folders.append((path, name, None))
+ for basedir, dirs, files in os.walk( line ):
+ for file in files:
+ file_path = os.path.abspath( os.path.join( basedir, file ) )
+ if preserve_dirs:
+ in_folder = os.path.dirname( file_path.replace( path, '', 1 ).lstrip( '/' ) )
+ else:
+ in_folder = None
+ files_and_folders.append((file_path, file, in_folder))
+ return files_and_folders
+ def _paths_list(self, params):
+ return [ (l.strip(), os.path.abspath(l.strip())) for l in params.filesystem_paths.splitlines() if l.strip() ]
+
+ def _check_path_paste_params(self, params):
if params.get( 'filesystem_paths', '' ) == '':
message = "No paths entered in the upload form"
response_code = 400
return None, response_code, message
- preserve_dirs = util.string_as_bool( params.get( 'preserve_dirs', False ) )
- # locate files
bad_paths = []
- uploaded_datasets = []
- for line in [ l.strip() for l in params.filesystem_paths.splitlines() if l.strip() ]:
- path = os.path.abspath( line )
+ for (_, path) in self._paths_list( params ):
if not os.path.exists( path ):
bad_paths.append( path )
- continue
- # don't bother processing if we're just going to return an error
- if not bad_paths:
- if os.path.isfile( path ):
- name = os.path.basename( path )
- uploaded_datasets.append( self.make_library_uploaded_dataset( trans, cntrller, params, name, path, 'path_paste', library_bunch ) )
- for basedir, dirs, files in os.walk( line ):
- for file in files:
- file_path = os.path.abspath( os.path.join( basedir, file ) )
- if preserve_dirs:
- in_folder = os.path.dirname( file_path.replace( path, '', 1 ).lstrip( '/' ) )
- else:
- in_folder = None
- uploaded_datasets.append( self.make_library_uploaded_dataset( trans,
- cntrller,
- params,
- file,
- file_path,
- 'path_paste',
- library_bunch,
- in_folder ) )
if bad_paths:
message = "Invalid paths:<br><ul><li>%s</li></ul>" % "</li><li>".join( bad_paths )
response_code = 400
return None, response_code, message
- return uploaded_datasets, 200, None
+ return None
+
@web.expose
def add_history_datasets_to_library( self, trans, cntrller, library_id, folder_id, hda_ids='', **kwd ):
params = util.Params( kwd )
https://bitbucket.org/galaxy/galaxy-central/commits/c937f80188d8/
changeset: c937f80188d8
user: dannon
date: 2013-03-08 20:41:10
summary: Merge
affected #: 3 files
diff -r e1e8ddf3401ab1e028fbebd9fa8f4307b8d3ae03 -r c937f80188d8026f2074f8a897e2ff2972d8da6d lib/tool_shed/scripts/check_repositories_for_functional_tests.py
--- a/lib/tool_shed/scripts/check_repositories_for_functional_tests.py
+++ b/lib/tool_shed/scripts/check_repositories_for_functional_tests.py
@@ -38,7 +38,16 @@
'''Script that checks repositories to see if the tools contained within them have functional tests defined.'''
parser = OptionParser()
parser.add_option( "-i", "--info_only", action="store_true", dest="info_only", help="info about the requested action", default=False )
- parser.add_option( "-v", "--verbose", action="store_true", dest="verbose", help="verbose mode, print the name, owner, and changeset revision of each repository", default=False )
+ parser.add_option(
+ "-v", "--verbose",
+ action="count", dest="verbosity",
+ default=1,
+ help="Control the amount of detail in the log output.")
+ parser.add_option(
+ "--verbosity", action="store", dest="verbosity",
+ metavar='VERBOSITY',
+ type="int", help="Control the amount of detail in the log output. --verbosity=1 is "
+ "the same as -v")
( options, args ) = parser.parse_args()
ini_file = args[0]
config_parser = ConfigParser.ConfigParser( {'here':os.getcwd()} )
@@ -56,12 +65,12 @@
if options.info_only:
print "# Displaying info only ( --info_only )"
- if options.verbose:
- print "# Displaying extra information ( --verbose )"
+ if options.verbosity:
+ print "# Displaying extra information ( --verbosity = %d )" % options.verbosity
- check_and_flag_repositories( app, info_only=options.info_only, verbose=options.verbose )
+ check_and_flag_repositories( app, info_only=options.info_only, verbosity=options.verbosity )
-def check_and_flag_repositories( app, info_only=False, verbose=False ):
+def check_and_flag_repositories( app, info_only=False, verbosity=1 ):
'''
This method will iterate through all records in the repository_metadata table, checking each one for tool metadata,
then checking the tool metadata for tests.
@@ -97,10 +106,15 @@
and test repositories script to process. If the tested changeset revision does not have a test-data directory, this script will also mark the revision
not to be tested.
- If any error is encountered, the script will update the repository_metadata.tool_test_errors attribute with the following structure:
+ If any error is encountered, the script will update the repository_metadata.tool_test_errors attribute following this structure:
{
"test_environment":
{
+ "galaxy_revision": "9001:abcd1234",
+ "galaxy_database_version": "114",
+ "tool_shed_revision": "9001:abcd1234",
+ "tool_shed_mercurial_version": "2.3.1",
+ "tool_shed_database_version": "17",
"python_version": "2.7.2",
"architecture": "x86_64",
"system": "Darwin 12.2.0"
@@ -108,8 +122,28 @@
"test_errors":
[
{
- "test_id": "Something that will easily identify what the problem is",
- "stdout": "The output of the test, or a more detailed description of what was tested and why it failed."
+ "test_id": "The test ID, generated by twill",
+ "tool_id": "The tool ID that was tested",
+ "tool_version": "The tool version that was tested",
+ "stderr": "The output of the test, or a more detailed description of what was tested and what the error was."
+ "traceback": "The traceback, if any."
+ },
+ ]
+ "passed_tests":
+ [
+ {
+ "test_id": "The test ID, generated by twill",
+ "tool_id": "The tool ID that was tested",
+ "tool_version": "The tool version that was tested",
+ },
+ ]
+ "invalid_tests":
+ [
+ {
+ "tool_id": "The tool ID that does not have functional tests defined.",
+ "tool_version": "The version of the tool."
+ "tool_guid": "The guid of the tool."
+ "reason_test_is_invalid": "A short explanation of what is invalid.
},
]
}
@@ -129,19 +163,24 @@
repository_status[ 'test_environment' ][ 'tool_shed_database_version' ] = get_database_version( app )
repository_status[ 'test_environment' ][ 'tool_shed_mercurial_version' ] = __version__.version
repository_status[ 'test_environment' ][ 'tool_shed_revision' ] = get_repository_current_revision( os.getcwd() )
- repository_status[ 'test_errors' ] = []
+ repository_status[ 'invalid_tests' ] = []
metadata_records_to_check = app.sa_session.query( app.model.RepositoryMetadata ) \
.filter( and_( app.model.RepositoryMetadata.table.c.downloadable == True,
app.model.RepositoryMetadata.table.c.do_not_test == False,
app.model.RepositoryMetadata.table.c.tools_functionally_correct == False ) ) \
.all()
for metadata_record in metadata_records_to_check:
- repository_status[ 'test_errors' ] = []
+ if metadata_record.tool_test_errors:
+ repository_status = metadata_record.tool_test_errors
name = metadata_record.repository.name
owner = metadata_record.repository.user.username
changeset_revision = str( metadata_record.changeset_revision )
+ repository_status[ 'invalid_tests' ] = []
if metadata_record.repository.id not in checked_repository_ids:
checked_repository_ids.append( metadata_record.repository.id )
+ if verbosity >= 1:
+ print '# Checking for functional tests in changeset revision %s of %s, owned by %s.' % \
+ ( changeset_revision, name, owner )
# If this changeset revision has no tools, we don't need to do anything here, the install and test script has a filter for returning
# only repositories that contain tools.
if 'tools' not in metadata_record.metadata:
@@ -152,19 +191,26 @@
for tool_metadata in metadata_record.metadata[ 'tools' ]:
tool_count += 1
tool_id = tool_metadata[ 'id' ]
- if verbose:
- print '# Checking for functional tests in changeset revision %s of %s, tool ID %s.' % \
- ( changeset_revision, name, tool_id )
+ tool_version = tool_metadata[ 'version' ]
+ tool_guid = tool_metadata[ 'guid' ]
+ if verbosity >= 2:
+ print "# Checking tool ID '%s' in changeset revision %s of %s." % \
+ ( tool_id, changeset_revision, name )
# If there are no tests, this tool should not be tested, since the tool functional tests only report failure if the test itself fails,
# not if it's missing or undefined. Filtering out those repositories at this step will reduce the number of "false negatives" the
# automated functional test framework produces.
if 'tests' not in tool_metadata or not tool_metadata[ 'tests' ]:
- if verbose:
+ if verbosity >= 2:
print '# No functional tests defined for %s.' % tool_id
# The repository_metadata.tool_test_errors attribute should always have the following structure:
# {
# "test_environment":
# {
+ # "galaxy_revision": "9001:abcd1234",
+ # "galaxy_database_version": "114",
+ # "tool_shed_revision": "9001:abcd1234",
+ # "tool_shed_mercurial_version": "2.3.1",
+ # "tool_shed_database_version": "17",
# "python_version": "2.7.2",
# "architecture": "x86_64",
# "system": "Darwin 12.2.0"
@@ -172,22 +218,44 @@
# "test_errors":
# [
# {
- # "test_id": "Something that will easily identify what the problem is",
- # "stderr": "The output of the test, or a more detailed description of what was tested and why it failed."
+ # "test_id": "The test ID, generated by twill",
+ # "tool_id": "The tool ID that was tested",
+ # "tool_version": "The tool version that was tested",
+ # "stderr": "The output of the test, or a more detailed description of what was tested and what the outcome was."
+ # "traceback": "The captured traceback."
+ # },
+ # ]
+ # "passed_tests":
+ # [
+ # {
+ # "test_id": "The test ID, generated by twill",
+ # "tool_id": "The tool ID that was tested",
+ # "tool_version": "The tool version that was tested",
+ # "stderr": "The output of the test, or a more detailed description of what was tested and what the outcome was."
+ # },
+ # ]
+ # "invalid_tests":
+ # [
+ # {
+ # "tool_id": "The tool ID that does not have functional tests defined.",
+ # "tool_version": "The version of the tool."
+ # "tool_guid": "The guid of the tool."
+ # "reason_test_is_invalid": "A short explanation of what is invalid.
# },
# ]
# }
- # Optionally, "stdout" and "traceback" may be included in a test_errors dict, if they are relevant.
- test_id = 'Functional tests for %s' % tool_id
- test_errors = dict( stderr='No functional tests defined for tool %s in changeset revision %s of repository %s owned by %s.' % \
- ( tool_id, changeset_revision, name, owner ) )
- repository_status[ 'test_errors' ].append( test_errors )
- repository_status[ 'status' ] = 'failed'
+ # Optionally, "traceback" may be included in a test_errors dict, if it is relevant. No script should overwrite anything other
+ # than the list relevant to what it is testing.
+ test_errors = dict( tool_id=tool_id, tool_version=tool_version, tool_guid=tool_guid )
+ repository_status[ 'invalid_tests' ].append( test_errors )
no_tests += 1
else:
+ if verbosity >= 2:
+ print "# Tool ID '%s' in changeset revision %s of %s has one or more valid functional tests defined." % \
+ ( tool_id, changeset_revision, name )
has_tests += 1
- if verbose:
- if not repository_status[ 'test_errors' ]:
+ if verbosity >= 1:
+ if not repository_status[ 'invalid_tests' ]:
print '# All tools have functional tests in changeset revision %s of repository %s owned by %s.' % ( changeset_revision, name, owner )
else:
print '# Some tools missing functional tests in changeset revision %s of repository %s owned by %s.' % ( changeset_revision, name, owner )
@@ -211,12 +279,17 @@
if os.path.exists( work_dir ):
shutil.rmtree( work_dir )
if not has_test_data:
- if verbose:
+ if verbosity >= 1:
print '# Test data missing in changeset revision %s of repository %s owned by %s.' % ( changeset_revision, name, owner )
# The repository_metadata.tool_test_errors attribute should always have the following structure:
# {
# "test_environment":
# {
+ # "galaxy_revision": "9001:abcd1234",
+ # "galaxy_database_version": "114",
+ # "tool_shed_revision": "9001:abcd1234",
+ # "tool_shed_mercurial_version": "2.3.1",
+ # "tool_shed_database_version": "17",
# "python_version": "2.7.2",
# "architecture": "x86_64",
# "system": "Darwin 12.2.0"
@@ -224,23 +297,44 @@
# "test_errors":
# [
# {
- # "test_id": "Something that will easily identify what the problem is",
- # "stderr": "The output of the test, or a more detailed description of what was tested and why it failed."
+ # "test_id": "The test ID, generated by twill",
+ # "tool_id": "The tool ID that was tested",
+ # "tool_version": "The tool version that was tested",
+ # "stderr": "The output of the test, or a more detailed description of what was tested and what the outcome was."
+ # "traceback": "The captured traceback."
+ # },
+ # ]
+ # "passed_tests":
+ # [
+ # {
+ # "test_id": "The test ID, generated by twill",
+ # "tool_id": "The tool ID that was tested",
+ # "tool_version": "The tool version that was tested",
+ # "stderr": "The output of the test, or a more detailed description of what was tested and what the outcome was."
+ # },
+ # ]
+ # "invalid_tests":
+ # [
+ # {
+ # "tool_id": "The tool ID that does not have functional tests defined.",
+ # "tool_version": "The version of the tool."
+ # "tool_guid": "The guid of the tool."
+ # "reason_test_is_invalid": "A short explanation of what is invalid.
# },
# ]
# }
- # Optionally, "stdout" and "traceback" may be included in a test_errors dict, if they are relevant.
- test_id = 'Find functional test data for %s' % metadata_record.repository.name
- test_errors = dict( stderr='No test data found for changeset revision %s of repository %s owned by %s.' % ( changeset_revision, name, owner ) )
- repository_status[ 'test_errors' ].append( test_errors )
- repository_status[ 'status' ] = 'failed'
+ # Optionally, "traceback" may be included in a test_errors dict, if it is relevant. No script should overwrite anything other
+ # than the list relevant to what it is testing.
+ test_errors = dict( tool_id=None, tool_version=None, tool_guid=None,
+ reason_test_is_invalid="Repository %s is missing a test-data directory." % name )
+ repository_status[ 'invalid_tests' ].append( test_errors )
else:
- if verbose:
+ if verbosity >= 1:
print '# Test data found in changeset revision %s of repository %s owned by %s.' % ( changeset_revision, name, owner )
if not info_only:
# If repository_status[ 'test_errors' ] is empty, no issues were found, and we can just update time_last_tested with the platform
# on which this script was run.
- if repository_status[ 'test_errors' ]:
+ if repository_status[ 'invalid_tests' ]:
# If functional test definitions or test data are missing, set do_not_test = True if and only if:
# a) There are multiple downloadable revisions, and the revision being tested is not the most recent downloadable revision. In this case,
# the revision will never be updated with correct data, and re-testing it would be redundant.
@@ -250,8 +344,6 @@
if should_set_do_not_test_flag( app, metadata_record.repository, changeset_revision ):
metadata_record.do_not_test = True
metadata_record.tools_functionally_correct = False
- else:
- repository_status[ 'status' ] = 'passed'
metadata_record.tool_test_errors = repository_status
metadata_record.time_last_tested = datetime.utcnow()
app.sa_session.add( metadata_record )
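The option handling this diff introduces (a counting `-v` and an explicit `--verbosity=N` writing to the same `dest`) can be tried in isolation. A small sketch of just the optparse pattern:

```python
from optparse import OptionParser

parser = OptionParser()
parser.add_option("-v", "--verbose", action="count", dest="verbosity", default=1,
                  help="Each -v raises the verbosity level by one.")
parser.add_option("--verbosity", action="store", dest="verbosity", type="int",
                  metavar="VERBOSITY",
                  help="Set the verbosity level directly; --verbosity=1 is the same as -v.")
options, args = parser.parse_args(["-v", "-v"])
# With action="count", the default of 1 plus two -v flags yields 3.
print(options.verbosity)
```

Because both options share `dest="verbosity"`, whichever appears last on the command line wins, and the rest of the script only needs `verbosity >= 1` / `verbosity >= 2` checks rather than a boolean flag.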
diff -r e1e8ddf3401ab1e028fbebd9fa8f4307b8d3ae03 -r c937f80188d8026f2074f8a897e2ff2972d8da6d test/base/util.py
--- a/test/base/util.py
+++ b/test/base/util.py
@@ -74,8 +74,8 @@
return None, repository_name, changeset_revision
return last_galaxy_test_file_dir, last_tested_repository_name, last_tested_changeset_revision
-def get_test_environment():
- rval = {}
+def get_test_environment( current_environment={} ):
+ rval = current_environment
rval[ 'python_version' ] = platform.python_version()
rval[ 'architecture' ] = platform.machine()
os, hostname, os_version, uname, arch, processor = platform.uname()
diff -r e1e8ddf3401ab1e028fbebd9fa8f4307b8d3ae03 -r c937f80188d8026f2074f8a897e2ff2972d8da6d test/install_and_test_tool_shed_repositories/functional_tests.py
--- a/test/install_and_test_tool_shed_repositories/functional_tests.py
+++ b/test/install_and_test_tool_shed_repositories/functional_tests.py
@@ -60,6 +60,7 @@
import nose.config
import nose.loader
import nose.plugins.manager
+from nose.plugins import Plugin
from base.util import parse_tool_panel_config, get_database_version, get_test_environment, get_repository_current_revision
@@ -115,9 +116,36 @@
else:
galaxy_encode_secret = os.environ[ 'GALAXY_INSTALL_TEST_SECRET' ]
+
+class ReportResults( Plugin ):
+ '''Simple Nose plugin to record the IDs of all tests run, regardless of success.'''
+ name = "reportresults"
+ passed = []
+
+ def options( self, parser, env=os.environ ):
+ super( ReportResults, self ).options( parser, env=env )
+
+ def configure(self, options, conf):
+ super( ReportResults, self ).configure( options, conf )
+ if not self.enabled:
+ return
+
+ def addSuccess( self, test ):
+ '''Only record test IDs that correspond to tool functional tests.'''
+ if 'TestForTool' in test.id():
+ test_id = test.id()
+ # Rearrange the test ID to match the format that is produced in test_results.failures
+ test_id_parts = test_id.split( '.' )
+ fixed_test_id = '%s (%s)' % ( test_id_parts[ -1 ], '.'.join( test_id_parts[ :-1 ] ) )
+ self.passed.append( fixed_test_id )
+
+ def getTestStatus( self ):
+ return self.passed
+
def execute_uninstall_method( repository_dict ):
# Delete any configured tool functional tests from the test_toolbox.__dict__, otherwise nose will find them
- # and try to re-run the tests after uninstalling the repository.
+ # and try to re-run the tests after uninstalling the repository, which will cause false failure reports,
+ # since the test data has been deleted from disk by now.
tests_to_delete = []
for key in test_toolbox.__dict__:
if key.startswith( 'TestForTool_' ):
@@ -132,7 +160,7 @@
test_config.configure( sys.argv )
# Run the uninstall method. This method uses the Galaxy web interface to uninstall the previously installed
# repository and delete it from disk.
- result = run_tests( test_config )
+ result, _ = run_tests( test_config )
success = result.wasSuccessful()
return success
@@ -191,6 +219,23 @@
else:
raise AssertionError( 'Unknown format %s.' % format )
+def get_tool_info_from_test_id( test_id ):
+ '''
+ Test IDs come in the form test_tool_number (functional.test_toolbox.TestForTool_toolshed_url/repos/owner/repository_name/tool_id/tool_version)
+ We want the tool ID and tool version.
+ '''
+ parts = test_id.replace( ')', '' ).split( '/' )
+ tool_version = parts[ -1 ]
+ tool_id = parts[ -2 ]
+ return tool_id, tool_version
+
+def get_tool_test_errors_from_api( tool_shed_url, metadata_dict ):
+ params = dict()
+ api_path = metadata_dict[ 'url' ].split( '/' )
+ api_url = get_api_url( base=tool_shed_url, parts=api_path )
+ repository_metadata = json_from_url( api_url )
+ return repository_metadata[ 'tool_test_errors' ]
+
def json_from_url( url ):
url_handle = urllib.urlopen( url )
url_contents = url_handle.read()
@@ -201,16 +246,15 @@
if tests_passed:
params[ 'tools_functionally_correct' ] = 'true'
params[ 'do_not_test' ] = 'true'
- test_results_dict[ 'status' ] = 'passed'
else:
params[ 'tools_functionally_correct' ] = 'false'
params[ 'do_not_test' ] = 'true'
- test_results_dict[ 'status' ] = 'failed'
params[ 'tool_test_errors' ] = test_results_dict
return update( tool_shed_api_key, '%s' % ( url_join( galaxy_tool_shed_url, 'api', 'repository_revisions', metadata_id ) ), params, return_formatted=False )
def run_tests( test_config ):
loader = nose.loader.TestLoader( config=test_config )
+ test_config.plugins.addPlugin( ReportResults() )
plug_loader = test_config.plugins.prepareTestLoader( loader )
if plug_loader is not None:
loader = plug_loader
@@ -221,7 +265,8 @@
plug_runner = test_config.plugins.prepareTestRunner( test_runner )
if plug_runner is not None:
test_runner = plug_runner
- return test_runner.run( tests )
+ result = test_runner.run( tests )
+ return result, test_config.plugins._plugins
def main():
# ---- Configuration ------------------------------------------------------
@@ -391,10 +436,6 @@
log.info( "The embedded Galaxy application is running on %s:%s" % ( galaxy_test_host, galaxy_test_port ) )
log.info( "Repositories will be installed from the tool shed at %s" % galaxy_tool_shed_url )
success = False
- repository_status = dict()
- test_environment = get_test_environment()
- test_environment[ 'galaxy_database_version' ] = get_database_version( app )
- test_environment[ 'galaxy_revision'] = get_repository_current_revision( os.getcwd() )
try:
# Iterate through a list of repository info dicts.
log.info( "Retrieving repositories to install from the URL:\n%s\n" % str( galaxy_tool_shed_url ) )
@@ -472,7 +513,7 @@
test_config.configure( sys.argv )
# Run the configured install method as a test. This method uses the embedded Galaxy application's web interface to install the specified
# repository, with tool and repository dependencies also selected for installation.
- result = run_tests( test_config )
+ result, _ = run_tests( test_config )
success = result.wasSuccessful()
# If the installation succeeds, configure and run functional tests for this repository. This is equivalent to
# sh run_functional_tests.sh -installed
@@ -483,14 +524,63 @@
# and the tools_functionally_correct flag to False, as well as updating tool_test_errors.
file( galaxy_shed_tools_dict, 'w' ).write( to_json_string( dict() ) )
has_test_data, shed_tools_dict = parse_tool_panel_config( galaxy_shed_tool_conf_file, from_json_string( file( galaxy_shed_tools_dict, 'r' ).read() ) )
+ # The repository_status dict should always have the following structure:
+ # {
+ # "test_environment":
+ # {
+ # "galaxy_revision": "9001:abcd1234",
+ # "galaxy_database_version": "114",
+ # "tool_shed_revision": "9001:abcd1234",
+ # "tool_shed_mercurial_version": "2.3.1",
+ # "tool_shed_database_version": "17",
+ # "python_version": "2.7.2",
+ # "architecture": "x86_64",
+ # "system": "Darwin 12.2.0"
+ # },
+ # "test_errors":
+ # [
+ # {
+ # "test_id": "The test ID, generated by twill",
+ # "tool_id": "The tool ID that was tested",
+ # "tool_version": "The tool version that was tested",
+ # "stderr": "The output of the test, or a more detailed description of what was tested and what the outcome was."
+ # "traceback": "The captured traceback."
+ # },
+ # ]
+ # "passed_tests":
+ # [
+ # {
+ # "test_id": "The test ID, generated by twill",
+ # "tool_id": "The tool ID that was tested",
+ # "tool_version": "The tool version that was tested",
+ # "stderr": "The output of the test, or a more detailed description of what was tested and what the outcome was."
+ # },
+ # ]
+ # "invalid_tests":
+ # [
+ # {
+ # "tool_id": "The tool ID that does not have functional tests defined.",
+ # "tool_version": "The version of the tool."
+ # "tool_guid": "The guid of the tool."
+ # "reason_test_is_invalid": "A short explanation of what is invalid.
+ # },
+ # ]
+ # }
+ repository_status = get_tool_test_errors_from_api( galaxy_tool_shed_url, repository_dict )
+ test_environment = get_test_environment( repository_status[ 'test_environment' ] )
+ test_environment[ 'galaxy_database_version' ] = get_database_version( app )
+ test_environment[ 'galaxy_revision'] = get_repository_current_revision( os.getcwd() )
+ repository_status[ 'test_environment' ] = test_environment
+ repository_status[ 'tests_passed' ] = []
+ repository_status[ 'test_errors' ] = []
if not has_test_data:
log.error( 'Test data is missing for this repository. Updating repository and skipping functional tests.' )
repository_status[ 'test_environment' ] = get_test_environment()
test_id = 'Find functional test data for %s' % repository_dict[ 'name' ]
- test_errors = dict( test_id=test_id,
- stdout='No test data found for changeset revision %s of repository %s owned by %s.' % \
- ( repository_dict[ 'changeset_revision' ], repository_dict[ 'name' ], repository_dict[ 'owner' ] ) )
- repository_status[ 'test_errors' ] = [ test_errors ]
+ # Record the lack of test data.
+ test_errors = dict( tool_id=None, tool_version=None, tool_guid=None,
+ reason_test_is_invalid="Repository %s is missing a test-data directory." % name )
+ repository_status[ 'invalid_tests' ].append( test_errors )
# Record the status of this repository in the tool shed.
register_test_result( galaxy_tool_shed_url, metadata_revision_id, repository_status, tests_passed=False )
# Run the cleanup method. This removes tool functional test methods from the test_toolbox module and uninstalls the
@@ -518,12 +608,20 @@
test_config = nose.config.Config( env=os.environ, plugins=nose.plugins.manager.DefaultPluginManager() )
test_config.configure( sys.argv )
# Run the configured tests.
- result = run_tests( test_config )
+ result, test_plugins = run_tests( test_config )
success = result.wasSuccessful()
# Record some information about the environment in which this test was run, in case a failure is specific to a certain processor
# architecture or operating system.
repository_dict[ 'test_environment' ] = test_environment
- test_errors = []
+    tests_passed = []
+    for plugin in test_plugins:
+        if hasattr( plugin, 'getTestStatus' ):
+            tests_passed = plugin.getTestStatus()
+            break
+ repository_status[ 'tests_passed' ] = []
+ for test_id in tests_passed:
+ tool_id, tool_version = get_tool_info_from_test_id( test_id )
+ test_result = dict( test_id=test_id, tool_id=tool_id, tool_version=tool_version )
+ repository_status[ 'tests_passed' ].append( test_result )
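The `get_tool_info_from_test_id` helper used above is not shown in this hunk. A plausible implementation, assuming each test id ends with a repository path of the form `.../tool_id/tool_version)` (the id format is an assumption), would be:

```python
def get_tool_info_from_test_id( test_id ):
    # Assumes a test id shaped like:
    #   test_tool_000000 (functional.test_toolbox.TestForTool_<shed>/repos/<owner>/<repo>/<tool_id>/<tool_version>)
    # This format is an assumption; adjust the parsing if the real ids differ.
    parts = test_id.replace( ')', '' ).split( '/' )
    tool_version = parts[ -1 ]
    tool_id = parts[ -2 ]
    return tool_id, tool_version
```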
if success:
# This repository's tools passed all functional tests. Update the repository_metadata table in the tool shed's database
# to reflect that. Call the register_test_result method, which executes a PUT request to the repository_revisions API
@@ -531,15 +629,16 @@
# updates the time_last_tested field to today's date.
repositories_passed.append( dict( name=name, owner=owner, changeset_revision=changeset_revision ) )
repository_status[ 'test_environment' ] = test_environment
- repository_status[ 'test_errors' ] = []
register_test_result( galaxy_tool_shed_url, metadata_revision_id, repository_status, tests_passed=True )
log.debug( 'Revision %s of repository %s installed and passed functional tests.' % \
( repository_dict[ 'changeset_revision' ], repository_dict[ 'name' ] ) )
else:
# If the functional tests fail, log the output and update the failed changeset revision's metadata record in the tool shed via the API.
for failure in result.failures:
- # Record the twill test identifier, so the repository owner can discover which test is failing.
- test_status = dict( test_id=str( failure[0] ) )
+ # Record the twill test identifier and information about the tool, so the repository owner can discover which test is failing.
+ test_id = str( failure[0] )
+ tool_id, tool_version = get_tool_info_from_test_id( test_id )
+ test_status = dict( test_id=test_id, tool_id=tool_id, tool_version=tool_version )
log_output = failure[1].replace( '\\n', '\n' )
# Remove debug output that the reviewer or owner doesn't need.
log_output = re.sub( r'control \d+:.+', r'', log_output )
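The `re.sub` call above strips twill's `control N:` debug lines from the captured output; its effect can be checked in isolation (the sample log text here is made up):

```python
import re

# Made-up sample of captured twill output containing a debug line.
log_output = 'control 12: http://localhost:8080/some/url\nTraceback (most recent call last):\n  ...'
# Same pattern as in the script: blank out any 'control N: ...' line.
cleaned = re.sub( r'control \d+:.+', r'', log_output )
```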
@@ -547,7 +646,7 @@
appending_to = 'output'
tmp_output = {}
output = {}
- # Iterate through the functional test output and extract only the important data. Captured logging is not recorded.
+ # Iterate through the functional test output and extract only the important data. Captured logging and stdout are not recorded.
for line in log_output.split( '\n' ):
if line.startswith( 'Traceback' ):
appending_to = 'traceback'
@@ -567,17 +666,15 @@
if appending_to not in tmp_output:
tmp_output[ appending_to ] = []
tmp_output[ appending_to ].append( line )
- for output_type in [ 'stderr', 'stdout', 'traceback' ]:
+ for output_type in [ 'stderr', 'traceback' ]:
if output_type in tmp_output:
test_status[ output_type ] = '\n'.join( tmp_output[ output_type ] )
- test_errors.append( test_status )
- if test_errors:
- # Only update test_errors for this repository if it's not empty.
- repository_status[ 'test_environment' ] = test_environment
- repository_status[ 'test_errors' ] = test_errors
+ repository_status[ 'test_errors' ].append( test_status )
+ log.debug( to_json_string( repository_status, indent=2, sort_keys=True ) )
# Call the register_test_result method, which executes a PUT request to the repository_revisions API controller with the outcome
- # status of the tests, and updates tool_test_errors with the relevant log data.
- # This also sets the do_not_test and tools_functionally correct flags, and updates the time_last_tested field to today's date.
+ # of the tests, and updates tool_test_errors with the relevant log data.
+ # This also sets the do_not_test and tools_functionally_correct flags to the appropriate values, and updates the time_last_tested
+ # field to today's date.
repositories_failed.append( dict( name=name, owner=owner, changeset_revision=changeset_revision ) )
register_test_result( galaxy_tool_shed_url, metadata_revision_id, repository_status, tests_passed=False )
log.debug( 'Revision %s of repository %s installed successfully, but did not pass functional tests.' % \
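The parsing loop in the hunks above is a small state machine: a line that opens a new section flips `appending_to`, and every other line is appended to the current bucket, after which only the stderr and traceback buckets are joined and kept. A self-contained sketch of that pattern follows; the stderr section marker is simplified relative to the real loop, which recognizes more headers:

```python
def classify_log_output( log_output ):
    # Minimal state machine over captured test output. Only 'Traceback' and a
    # simplified stderr marker are modeled here; the real loop in the diff
    # handles additional section headers.
    appending_to = 'output'
    tmp_output = {}
    for line in log_output.split( '\n' ):
        if line.startswith( 'Traceback' ):
            appending_to = 'traceback'
        elif '>> begin captured stderr <<' in line:
            appending_to = 'stderr'
            continue  # skip the marker line itself
        if appending_to not in tmp_output:
            tmp_output[ appending_to ] = []
        tmp_output[ appending_to ].append( line )
    # As in the diff, captured logging and stdout are not recorded.
    test_status = {}
    for output_type in [ 'stderr', 'traceback' ]:
        if output_type in tmp_output:
            test_status[ output_type ] = '\n'.join( tmp_output[ output_type ] )
    return test_status
```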
Repository URL: https://bitbucket.org/galaxy/galaxy-central/
--
This is a commit notification from bitbucket.org. You are receiving
this because you have the service enabled, addressing the recipient of
this email.