TestToolShed out of date? (not running unit tests)
Hi all,

I'd like to be able to experiment with the new automated testing recently added to the main Tool Shed http://toolshed.g2.bx.psu.edu/ by first trying my updates on the Test Tool Shed (to make sure my tests will work before making the tool update public).

It appears that the testing features are not available on the Test Tool Shed, http://testtoolshed.g2.bx.psu.edu/

Is this correct? Is this intentional?

Thanks,

Peter

P.S. This would be less urgent if I could solve this issue which is blocking my local testing: http://lists.bx.psu.edu/pipermail/galaxy-dev/2013-April/014123.html
Peter,

The automated testing framework is also running against the test tool shed. Could you send me a link to the repository that is behaving oddly?

--Dave B.

On 4/12/13 06:42:04.000, Peter Cock wrote:
Hi all,
I'd like to be able to experiment with the new automated testing recently added to the main Tool Shed http://toolshed.g2.bx.psu.edu/ by first trying my updates on the Test Tool Shed (to make sure my tests will work before making the tool update public).
It appears that the testing features are not available on the Test Tool Shed, http://testtoolshed.g2.bx.psu.edu/
Is this correct? Is this intentional?
Thanks,
Peter
P.S. This would be less urgent if I could solve this issue which is blocking my local testing: http://lists.bx.psu.edu/pipermail/galaxy-dev/2013-April/014123.html
On Mon, Apr 15, 2013 at 2:20 PM, Dave Bouvier <dave@bx.psu.edu> wrote:
Peter,
The automated testing framework is also running against the test tool shed. Could you send me a link to the repository that is behaving oddly?
e.g. http://toolshed.g2.bx.psu.edu/view/peterjc/tmhmm_and_signalp

On the main tool shed, when I am logged in, on this page under the top right menu "Repository actions" the second entry is "View tool functional test results".

However, on the test tool shed, that menu item is missing: http://testtoolshed.g2.bx.psu.edu/view/peterjc/tmhmm_and_signalp

Although the history is different, I believe the same tarball for v0.2.1 of the suite was uploaded to both the main and test tool sheds.

Peter
Peter,

I'm unable to duplicate that behavior; the "View tool functional test results" option shows up on the test tool shed both when I'm logged in and logged out. My suggestion would be to clear your browser's cookies and cache, and see if that makes a difference.

--Dave B.

On 4/15/13 09:35:40.000, Peter Cock wrote:
On Mon, Apr 15, 2013 at 2:20 PM, Dave Bouvier <dave@bx.psu.edu> wrote:
Peter,
The automated testing framework is also running against the test tool shed. Could you send me a link to the repository that is behaving oddly?
e.g. http://toolshed.g2.bx.psu.edu/view/peterjc/tmhmm_and_signalp

On the main tool shed, when I am logged in, on this page under the top right menu "Repository actions" the second entry is "View tool functional test results".
However, on the test tool shed, that menu item is missing: http://testtoolshed.g2.bx.psu.edu/view/peterjc/tmhmm_and_signalp
Although the history is different, I believe the same tarball for v0.2.1 of the suite was uploaded to both the main and test tool sheds.
Peter
On Mon, Apr 15, 2013 at 3:56 PM, Dave Bouvier <dave@bx.psu.edu> wrote:
Peter,
I'm unable to duplicate that behavior; the "View tool functional test results" option shows up on the test tool shed both when I'm logged in and logged out. My suggestion would be to clear your browser's cookies and cache, and see if that makes a difference.
--Dave B.
That did it - thanks :)

I also have a feature request now though ;)

The "View tool functional test results" option clearly lists failed tests, but it is not obvious if there were any successful tests. Could that be indicated somehow (e.g. five tests passed for tool xxx)?

Peter
That does sound like a useful option; I'll be discussing that and other enhancements to the automated testing system with Greg in the near future.

In this specific case, it looks like functional tests were set to not be executed, because two of the tools in the repository were missing functional test definitions. Our rationale for doing so was that if any tool can't be tested due to missing functional tests or test data, then the repository as a whole can't be proven functionally correct.

--Dave B.

On 4/15/13 11:04:51.000, Peter Cock wrote:
On Mon, Apr 15, 2013 at 3:56 PM, Dave Bouvier <dave@bx.psu.edu> wrote:
Peter,
I'm unable to duplicate that behavior; the "View tool functional test results" option shows up on the test tool shed both when I'm logged in and logged out. My suggestion would be to clear your browser's cookies and cache, and see if that makes a difference.
--Dave B.
That did it - thanks :)
I also have a feature request now though ;)
The "View tool functional test results" option clearly lists failed tests, but it is not obvious if there were any successful tests. Could that be indicated somehow (e.g. five tests passed for tool xxx).
Peter
On Mon, Apr 15, 2013 at 4:14 PM, Dave Bouvier <dave@bx.psu.edu> wrote:
That does sound like a useful option; I'll be discussing that and other enhancements to the automated testing system with Greg in the near future.
In this specific case, it looks like functional tests were set to not be executed, because two of the tools in the repository were missing functional test definitions. Our rationale for doing so was that if any tool can't be tested due to missing functional tests or test data, then the repository as a whole can't be proven functionally correct.
--Dave B.
It would still be useful to the tool author and potential tool users to know that tools X and Y in a repository work, even if Z fails a test.

Thanks,

Peter
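For readers unfamiliar with what a "functional test definition" looks like, here is a minimal sketch of the kind of per-tool `<tests>` block the framework requires in a tool's XML wrapper (the tool id, parameter names and file names below are invented for illustration, not taken from the repository under discussion):

```xml
<tool id="example_tool" name="Example tool" version="0.0.1">
    <!-- command, inputs and outputs sections omitted for brevity -->
    <tests>
        <test>
            <!-- value/file attributes refer to files placed in the
                 repository's test-data/ directory -->
            <param name="input_file" value="example_input.fasta" ftype="fasta" />
            <output name="output_file" file="expected_output.tabular" ftype="tabular" />
        </test>
    </tests>
</tool>
```

Under the policy Dave describes, every tool in a repository needs such a block plus the matching files under test-data/; a single tool missing either causes testing to be skipped for the whole repository.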
Hello Peter,

On Apr 15, 2013, at 11:04 AM, Peter Cock wrote:
The "View tool functional test results" option clearly lists failed tests, but it is not obvious if there were any successful tests. Could that be indicated somehow (e.g. five tests passed for tool xxx).
If I understand your request, I believe we already provide it. Functional test results are displayed for tests that passed or failed. For example, see the functional test results for revision 3:aa76c8dd97e6 of jjohnson's pileup_to_vcf repository, which displays the following. Is this what you are asking for?

Thanks!

Tests that failed
Tool id: pileup_to_vcf version: 2.0
Test id: test_tool_000001 (functional.test_toolbox.TestForTool_toolshed.g2.bx.psu.edu/repos/jjohnson/pileup_to_vcf/pileup_to_vcf/2.0)
Status: Test failed
Stderr:
Traceback (most recent call last):
  File "/var/opt/buildbot/slaves/buildbot-install-test-1/buildbot-install-test-main-tool-shed-py27/build/test/functional/test_toolbox.py", line 171, in test_tool
    self.do_it( td, shed_tool_id=shed_tool_id )
  File "/var/opt/buildbot/slaves/buildbot-install-test-1/buildbot-install-test-main-tool-shed-py27/build/test/functional/test_toolbox.py", line 102, in do_it
    self.verify_dataset_correctness( outfile, hid=elem_hid, maxseconds=testdef.maxseconds, attributes=attributes, shed_tool_id=shed_tool_id )
  File "/var/opt/buildbot/slaves/buildbot-install-test-1/buildbot-install-test-main-tool-shed-py27/build/test/base/twilltestcase.py", line 798, in verify_dataset_correctness
    raise AssertionError( errmsg )
AssertionError: History item 2 different than expected
No text matching expression 'chr1\t158571283\t.\tA\tC\t.\tPASS\t.*' was found in output file.

Tests that passed successfully
Tool id: pileup_to_vcf version: 2.0
Test id: test_tool_000000 (functional.test_toolbox.TestForTool_toolshed.g2.bx.psu.edu/repos/jjohnson/pileup_to_vcf/pileup_to_vcf/2.0)
Status: Test passed
On Tue, Apr 16, 2013 at 9:42 PM, Greg Von Kuster <greg@bx.psu.edu> wrote:
Hello Peter,
On Apr 15, 2013, at 11:04 AM, Peter Cock wrote:
The "View tool functional test results" option clearly lists failed tests, but it is not obvious if there were any successful tests. Could that be indicated somehow (e.g. five tests passed for tool xxx).
If I understand your request, I believe we already provide it. Functional test results are displayed for tests that passed or failed. For example, see the functional test results for revision 3:aa76c8dd97e6<http://toolshed.g2.bx.psu.edu/repository/view_changelog?id=9a07f99536a91e29> of jjohnson's pileup_to_vcf repository, which displays the following. Is this what you are asking for?
Thanks!
Tests that failed
Tool id: pileup_to_vcf version: 2.0
Test id: test_tool_000001 (functional.test_toolbox.TestForTool_toolshed.g2.bx.psu.edu/repos/jjohnson/pileup_to_vcf/pileup_to_vcf/2.0)
Status: Test failed
Stderr:
Traceback (most recent call last):
  File "/var/opt/buildbot/slaves/buildbot-install-test-1/buildbot-install-test-main-tool-shed-py27/build/test/functional/test_toolbox.py", line 171, in test_tool
    self.do_it( td, shed_tool_id=shed_tool_id )
  File "/var/opt/buildbot/slaves/buildbot-install-test-1/buildbot-install-test-main-tool-shed-py27/build/test/functional/test_toolbox.py", line 102, in do_it
    self.verify_dataset_correctness( outfile, hid=elem_hid, maxseconds=testdef.maxseconds, attributes=attributes, shed_tool_id=shed_tool_id )
  File "/var/opt/buildbot/slaves/buildbot-install-test-1/buildbot-install-test-main-tool-shed-py27/build/test/base/twilltestcase.py", line 798, in verify_dataset_correctness
    raise AssertionError( errmsg )
AssertionError: History item 2 different than expected
No text matching expression 'chr1\t158571283\t.\tA\tC\t.\tPASS\t.*' was found in output file.

Tests that passed successfully
Tool id: pileup_to_vcf version: 2.0
Test id: test_tool_000000 (functional.test_toolbox.TestForTool_toolshed.g2.bx.psu.edu/repos/jjohnson/pileup_to_vcf/pileup_to_vcf/2.0)
Status: Test passed
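As an aside, the "No text matching expression ... was found in output file" failure quoted above boils down to a regular-expression search over the lines of the tool's output file. A minimal sketch of that style of check (illustrative only; the function name and sample data are invented, this is not the actual Galaxy twilltestcase code):

```python
import re

def output_contains(lines, pattern):
    """Return True if any output line matches the regular expression."""
    regex = re.compile(pattern)
    return any(regex.search(line) for line in lines)

# The failing test above expected a tab-separated VCF-style line matching:
expected = r"chr1\t158571283\t.\tA\tC\t.\tPASS\t.*"

vcf_output = [
    "chr1\t158571283\t.\tA\tC\t.\tPASS\tDP=20",  # matching line present
    "chr2\t12345\t.\tG\tT\t.\tPASS\tDP=11",
]

print(output_contains(vcf_output, expected))      # True: test would pass
print(output_contains(vcf_output[1:], expected))  # False: AssertionError raised
```

The pattern uses `\t` for tab separators, so a single reordered or missing VCF column is enough to make the whole test fail with the message seen above.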
That does look good - maybe I've been unlucky in the examples I've looked at due to missing dependencies and/or true results being hidden by this issue: https://trello.com/card/toolshed-automated-functional-test-bug/506338ce32ae458f6d15e4b3/793

Thanks,

Peter
On Wed, Apr 17, 2013 at 9:44 AM, Peter Cock <p.j.a.cock@googlemail.com> wrote:
On Tue, Apr 16, 2013 at 9:42 PM, Greg Von Kuster <greg@bx.psu.edu> wrote:
Hello Peter,
On Apr 15, 2013, at 11:04 AM, Peter Cock wrote:
The "View tool functional test results" option clearly lists failed tests, but it is not obvious if there were any successful tests. Could that be indicated somehow (e.g. five tests passed for tool xxx).
Here's a problem case, http://toolshed.g2.bx.psu.edu/view/peterjc/blastxml_to_top_descr

This has one unit test defined:

<tests>
    <test>
        <param name="blastxml_file" value="blastp_four_human_vs_rhodopsin.xml" ftype="blastxml" />
        <param name="topN" value="3" />
        <output name="tabular_file" file="blastp_four_human_vs_rhodopsin_top3.tabular" ftype="tabular" />
    </test>
</tests>

However, it is not shown via "View tool functional test results":

Tool functional test results
Time tested: ~ 6 hours ago

Tool Shed environment
Tool shed version: 9252:da2d6ee29f30
Tool shed database version: 17
Mercurial version: 2.2.3

Galaxy environment
Galaxy version: unknown
Galaxy database version: unknown
Architecture: x86_64
Operating system: Linux 3.0.0-1-amd64
Python version: 2.7.2+

i.e. No mention of the fact there was (presumably) one successful test.

Peter
Peter,

I've committed a fix for this in 9480:a80175a97194, and re-run the automated testing framework. However, as you will see, the test still reports an error. This is not due to an issue in your repository, but a problem on the Galaxy side, as described in the following trello card: https://trello.com/c/KdGX3hkh

--Dave B.

On 4/17/13 06:20:39.000, Peter Cock wrote:
On Wed, Apr 17, 2013 at 9:44 AM, Peter Cock <p.j.a.cock@googlemail.com> wrote:
On Tue, Apr 16, 2013 at 9:42 PM, Greg Von Kuster <greg@bx.psu.edu> wrote:
Hello Peter,
On Apr 15, 2013, at 11:04 AM, Peter Cock wrote:
The "View tool functional test results" option clearly lists failed tests, but it is not obvious if there were any successful tests. Could that be indicated somehow (e.g. five tests passed for tool xxx).
Here's a problem case, http://toolshed.g2.bx.psu.edu/view/peterjc/blastxml_to_top_descr
This has one unit test defined,
<tests>
    <test>
        <param name="blastxml_file" value="blastp_four_human_vs_rhodopsin.xml" ftype="blastxml" />
        <param name="topN" value="3" />
        <output name="tabular_file" file="blastp_four_human_vs_rhodopsin_top3.tabular" ftype="tabular" />
    </test>
</tests>
However, it is not shown via "View tool functional test results":
Tool functional test results
Time tested: ~ 6 hours ago

Tool Shed environment
Tool shed version: 9252:da2d6ee29f30
Tool shed database version: 17
Mercurial version: 2.2.3

Galaxy environment
Galaxy version: unknown
Galaxy database version: unknown
Architecture: x86_64
Operating system: Linux 3.0.0-1-amd64
Python version: 2.7.2+
i.e. No mention of the fact there was (presumably) one successful test.
Peter
On Fri, Apr 19, 2013 at 4:46 PM, Dave Bouvier <dave@bx.psu.edu> wrote:
Peter,
I've committed a fix for this in 9480:a80175a97194, and re-run the automated testing framework. However, as you will see, the test still reports an error. This is not due to an issue in your repository, but a problem on the Galaxy side, as described in the following trello card: https://trello.com/c/KdGX3hkh
--Dave B.
Thanks - once this and the best way forward for defining Python modules as dependencies are sorted out, I should be able to get more of my tools in the Tool Shed passing their unit tests :)

Peter
participants (3):
- Dave Bouvier
- Greg Von Kuster
- Peter Cock