On Wed, May 8, 2013 at 3:28 PM, Greg Von Kuster <greg@bx.psu.edu> wrote:
Hi Peter,
On May 8, 2013, at 6:45 AM, Peter Cock wrote:
On Tue, May 7, 2013 at 7:02 PM, Greg Von Kuster <greg@bx.psu.edu> wrote:
Hi Peter,
Missing test components implies either a tool config that does not define a test (i.e., a missing test definition) or a tool config that defines a test but whose input or output files are missing from the repository.
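(For illustration, a defined test in a tool's XML config looks something like the following, where the file names are purely examples; both files referenced would need to be present in the repository's test-data directory:

    <tests>
        <test>
            <param name="input" value="example_input.fasta" />
            <output name="out_file" file="expected_output.tabular" />
        </test>
    </tests>

A repository can be missing the <tests> block entirely, or have the block but lack one of the referenced files.)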
This seems to be our point of confusion: I don't understand combining these two categories - it seems unhelpful to me.
I feel this is just a difference of opinion. Combining missing tests and missing test data into a single category is certainly justifiable. For any repository that falls into this category, the display clearly states to the owner what is missing, and the owner can easily see that work is needed to prepare the repository contents for testing, whether that work is adding a missing test or adding missing test data.
Speaking as a tool author, these are two rather different categories which should not be merged. I personally would put "tools with defined tests but missing input/output files" under "failing tests" not under "missing tests".
Tools missing a test definition clearly can't be tested - but since we'd like every tool to have tests, having these in an easily viewed listing is useful both for authors and reviewers.
But it is an easily viewed listing. It is currently very easy to determine if a tool is missing a defined test, is missing test data, or both.
No, it isn't easily viewable - it is easy to get a combined listing of repositories with (a) missing tests and/or (b) tests with missing files, but then very tedious to look at these repositories one by one to see which problem applies.
It highlights tools which need some work - or in some cases work on the Galaxy test framework itself. They are neither passing nor failing tests - and it makes sense to list them separately.
Tools with a test definition should be tested
This is where I disagree. ... <snip> Installing and testing repositories that have tools with defined tests but missing test data is potentially costly from a time perspective. <snip>
I wasn't meaning to suggest you do that though - you're already able to shortcut these cases and mark the test as failed. These are the quickest possible tests to run - they fail at the first hurdle.
- if they are missing an input or output file, this is just a special case of a test failure (and can be spotted without actually attempting to run the tool).
Yes, but this is what we are doing now. We are spotting this scenario without installing the repository or running any defined tests (which would require running the tool).
Yes, and that is fine - I'm merely talking about how this information is presented to the Tool Shed viewer.
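To illustrate how cheap such a static check is, here is a minimal sketch (not the actual Tool Shed code; the paths are placeholders, and treating every param value as a file name is a simplification a real check would refine):

    import os
    import xml.etree.ElementTree as ET

    def missing_test_files(tool_xml, test_data_dir):
        """List test data files referenced in the tool's <tests> section
        that are absent from the repository's test-data directory."""
        missing = []
        for test in ET.parse(tool_xml).findall(".//tests/test"):
            # Inputs are referenced via <param value="..."/> and expected
            # outputs via <output file="..."/>; treat both as file names.
            names = [p.get("value") for p in test.findall("param")]
            names += [o.get("file") for o in test.findall("output")]
            for name in names:
                if name and not os.path.isfile(os.path.join(test_data_dir, name)):
                    missing.append(name)
        return missing

Any repository for which that list is non-empty could then be reported as a failed test (rather than a missing one) without ever installing it.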
This is clearly a broken test and the tool author should be able to fix this easily (by uploading the missing test data file)
Yes, but they can already see this clearly without having to install the repository or run any tests.
Indeed, but this is a failing test and should (in my view) be listed under failing tests, not under missing tests. We're just debating where to list such problem tools/repositories in the Tool Shed's test results interface.

Regards,

Peter