On Wed, Nov 27, 2013 at 9:02 PM, Greg Von Kuster <greg@bx.psu.edu> wrote:
Hello Peter,
I've done a significant amount of re-engineering of the
Tool Shed's install and test framework over the past 2
weeks, so the results you are seeing are probably a
reflection of some broken pieces during the time frame
in which I was making changes. I believe I have most
of the things ironed out in 11545:32205f911e74 which
is now running on the test tool shed.
We will be performing some test runs of this new
framework over the long weekend, so more correct
test results should hopefully be showing up for your
repositories and others.
Hi Greg,
Things look much better today on the Test Tool Shed, with
most of my tools under "Latest revision: all tool tests pass"
again - or, as expected, a few under "Latest revision: missing
tool tests" due to not yet having full test coverage.
I've only spotted one glitch so far, missing test results for:
http://testtoolshed.g2.bx.psu.edu/view/peterjc/ncbi_blast_plus/b2795652d2b4
The above repository was not displaying the missing test components due to a problem in the previous test runs, which I fixed yesterday. That problem produced test result data that could not be displayed. When I introduced the new feature for storing and displaying multiple test runs, I decided not to eliminate the bad data, but simply to let enough test runs proceed that the bad data would eventually be flushed out. I forced new test runs, and now things should be displaying as expected. Here is the display for the above repository for the latest run this morning:
Automated tool test results
Time tested: 2013-11-28 11:29:38
System:
Architecture:
Python version:
Galaxy revision:
Galaxy database version:
Tool shed revision: 11549:5aede225ebef
Tool shed database version: 21
Tool shed mercurial version: 2.2.3
I saw a recent commit on galaxy-central addressing the test
time display, which may explain why yesterday
the Test Tool Shed was claiming the test results were
from four days ago.
Yes, there is a flag that can be set on a repository to exclude it from being tested by the install and test framework. This flag had been inappropriately set on many repositories, including the one you are referring to. That problem has been corrected, so testing should be more stable.
I've also enhanced the "Tool test results" components to store a
configurable number of test runs instead of just 1. The test tool
shed is configured to store 5 test runs. So from here on you'll
begin to see multiple containers in the Tool test results container
- something like this:
That sounds useful when the framework itself is changing
rapidly, or if there is anything stochastic in a test failure.
Presumably this shows the last 5 tests against the current
tool revision?
It displays the last 5 test runs against any installable revision that is tested. We are currently restricting testing to the latest installable revision, so the 5 test runs will be displayed for that revision only.
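For anyone curious how keeping only a fixed number of recent runs per revision might look, here is a minimal, hypothetical Python sketch. It is not the Tool Shed's actual code; the store, function names, and example data are illustrative only. A deque with maxlen=5 discards the oldest run once the limit is reached, which matches the behaviour described above where stale or bad results eventually age out.

    from collections import defaultdict, deque

    # Keep only the most recent runs per revision (the test Tool Shed
    # described above is configured to keep 5).
    NUM_TEST_RUNS_SAVED = 5

    # Hypothetical in-memory store: changeset revision -> recent test runs.
    # The deque silently drops the oldest entry once maxlen is reached.
    test_results = defaultdict(lambda: deque(maxlen=NUM_TEST_RUNS_SAVED))

    def record_test_run(changeset_revision, run_summary):
        """Append one test run's summary for the given revision."""
        test_results[changeset_revision].append(run_summary)

    def latest_runs(changeset_revision):
        """Return the retained runs for a revision, oldest first."""
        return list(test_results[changeset_revision])

    # Illustrative usage with made-up numbers:
    record_test_run("b2795652d2b4",
                    {"time_tested": "2013-11-28 11:29:38",
                     "passed": 1, "failed": 0})
    print(latest_runs("b2795652d2b4"))

With a limit of 5, the sixth run pushes the first one out, so any bad data recorded before a fix disappears after enough new runs, as described earlier in this thread.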
Thanks Greg, and enjoy Thanksgiving,
Thanks Peter, you too!