Peter,

Your point is well taken about the utility of the testing framework in the tool development process. The framework has been modified as of 9520:41d8cdde4729 to flag a changeset revision as not to be tested only if no valid tests have been found in that revision. If one or more tools have functional tests, and there is test data for one or more of those tests, the repository will now be installed and tested. I will be re-running the testing framework on the test tool shed shortly.
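
In case a sketch helps, the new check amounts to roughly the following (illustrative Python with made-up names and structures, not the actual framework code):

    # Illustrative sketch only -- the names and structures are made up,
    # not the actual tool shed framework code.
    def revision_should_be_tested(tools):
        """Flag a revision 'do not test' only when no tool in it defines
        a valid test; one tool with functional tests plus test data is
        enough for the whole repository to be installed and tested."""
        return any(tool["functional_tests"] and tool["has_test_data"]
                   for tool in tools)

    # One valid test in the revision is now sufficient:
    tools = [
        {"id": "tool_a", "functional_tests": False, "has_test_data": False},
        {"id": "tool_b", "functional_tests": True, "has_test_data": True},
    ]
    assert revision_should_be_tested(tools)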

--Dave B.

On 4/25/13 09:58:39.000, Peter Cock wrote:

Hi Dave,
On Thu, Apr 25, 2013 at 2:27 PM, Dave Bouvier <dave@bx.psu.edu> wrote:
Peter,
That is the intended behavior, but I've added a Trello card (https://trello.com/c/O9YmzUT4) for revisiting that decision at some point.
Is that the right Trello card? It seems to be all about multiple tool shed repository revisions rather than what I'm asking about.
We are definitely willing to be flexible about the testing conditions, but the primary goal of the automated testing framework was to verify the functional correctness of an entire repository, in order to simplify the approval process for the "tools contained within the repository" category.
--Dave B.
Well, as you can tell, I disagree with that design choice: if there are tools with tests within a repository, I think those tests should be run. That seems far more pragmatic than the current all-or-nothing approach (for which I can't see any real justification).
This is especially frustrating when there are still obstacles in the test framework itself that block writing tests for all my tools, e.g. http://lists.bx.psu.edu/pipermail/galaxy-dev/2013-April/014280.html
Given that "the primary goal of the automated testing framework was to verify functional correctness of an entire repository", not running any tests at all if just one tool is missing tests seems very bizarre. Right now I can run those tests locally, but I have no way to know if they would work under the Tool Shed's testing framework until I reach full test coverage for the repository. If all the tests provided were actually run, I could fix any failures now, rather than waiting until the issues blocking the missing tests are resolved.
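(For reference, I run them locally with Galaxy's functional test script, along these lines, where my_tool_id is just a stand-in for each tool's id; exact invocation from memory:)

    sh run_functional_tests.sh -id my_tool_id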
I hope this aspect of the design can be revisited sooner rather than later.
Thanks,
Peter