Hello Eric,

Submitted by epaniagu@cshl.edu:

Hi all,
I've read the wiki page on Writing Functional Tests (http://wiki.g2.bx.psu.edu/Admin/Tools/Writing%20Tests) and have been looking through test/base and test/functional, and I am left with two questions:
* Is it possible to write a test to validate metadata directly on an (optionally composite) output dataset?
I'm sure this is possible, but it would require enhancements to the current functional test framework.
Everything described on the above page is file oriented. I see that there is TwillTestCase.check_metadata_for_string, but as far as I can tell it is rather nonspecific, since it appears simply to do a text search on the Edit page.
This is correct.
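To illustrate why a page-wide text search is nonspecific, here is a minimal sketch of that style of check. This is illustrative Python only, not Galaxy's actual implementation of check_metadata_for_string:

```python
# Illustrative only: a page-wide substring check in the style of
# TwillTestCase.check_metadata_for_string. This is NOT Galaxy code.

def check_metadata_for_string(page_text, *strings):
    """Pass only if every string appears somewhere in the page text."""
    for s in strings:
        if s not in page_text:
            raise AssertionError("'%s' not found in page" % s)

# The check cannot tell WHICH metadata field a match came from:
edit_page = "Name: chr21.bam  Database: hg19  Info: uploaded bam file"
check_metadata_for_string(edit_page, "hg19")  # passes, but "hg19"
                                              # could have matched any
                                              # field on the Edit page
```

Because the match is against the whole page, a value like "hg19" appearing in the dataset name would satisfy a check intended for the database field.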
I don't yet fully understand the context in which tests run, but is there some way to access a "live" dataset's metadata directly, either as a dictionary or just as attributes? Or even to get the actual dataset object?
Not with the current functional test framework. Doing this would require enhancements to the framework.
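For what such an enhancement might look like, here is a sketch of field-level metadata assertions. The metadata dictionary and field names below are hypothetical stand-ins; no helper like this exists in the current functional test framework:

```python
# Hypothetical sketch of direct, field-level metadata assertions.
# Nothing here exists in the current functional test framework.

def assert_metadata(metadata, expected):
    """Compare selected metadata fields against expected values."""
    for key, want in expected.items():
        got = metadata.get(key)
        if got != want:
            raise AssertionError(
                "metadata[%r] == %r, expected %r" % (key, got, want))

# Example: validate specific fields rather than searching page text.
metadata = {"dbkey": "hg19", "sorted": True, "columns": 6}  # stand-in
assert_metadata(metadata, {"dbkey": "hg19", "columns": 6})
```

The advantage over a text search is that a failure names the exact field and value that mismatched.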
* Does the test harness support retaining output files only for failed tests? Ideally with a cap on how much output data to save. If not, would this be difficult to configure?
I'm not sure what you mean by "output files" in your question. If you mean output datasets that result from running a functional test for a tool, then I believe they are handled the same way whether the test passed or failed.
Thanks, Eric
Greg Von Kuster
Galaxy Development Team
greg@bx.psu.edu