Hi Greg & Dave,
I really like the new features on the (Test) Tool Shed for searching my repositories:
* Latest revision missing tool tests
* Latest revision failing tool tests
* Latest revision all tool tests pass
However there are some teething problems. Some of my tools are listed under "Latest revision failing tool tests", but when I go to look at them, no test results are shown (passing or failing):
http://testtoolshed.g2.bx.psu.edu/view/peterjc/blastxml_to_top_descr
http://testtoolshed.g2.bx.psu.edu/view/peterjc/effectivet3
http://testtoolshed.g2.bx.psu.edu/view/peterjc/clinod
http://testtoolshed.g2.bx.psu.edu/view/peterjc/get_orfs_or_cdss
http://testtoolshed.g2.bx.psu.edu/view/peterjc/mira_assembler
http://testtoolshed.g2.bx.psu.edu/view/peterjc/seq_primer_clip
In some cases there is indeed a failing test, for instance this is due to a bug in the test framework: http://testtoolshed.g2.bx.psu.edu/view/peterjc/blast2go https://trello.com/c/KdGX3hkh
And here there are missing dependencies (due to restrictive licensing problems): http://testtoolshed.g2.bx.psu.edu/view/peterjc/tmhmm_and_signalp https://trello.com/card/-/506338ce32ae458f6d15e4b3/770
Peter
Hi Peter,
I created the following Trello card for this and things will be fixed as a priority. Thanks for letting us know!
https://trello.com/card/toolshed-issues-with-functional-test-results/506338c...
On May 1, 2013, at 7:09 AM, Peter Cock wrote:
Hi Greg & Dave,
I really like the new features on the (Test) Tool Shed for searching my repositories:
- Latest revision missing tool tests
- Latest revision failing tool tests
- Latest revision all tool tests pass
However there are some teething problems. Some of my tools are listed under "Latest revision failing tool tests", but when I go to look at them, no test results are shown (passing or failing):
http://testtoolshed.g2.bx.psu.edu/view/peterjc/blastxml_to_top_descr
http://testtoolshed.g2.bx.psu.edu/view/peterjc/effectivet3
http://testtoolshed.g2.bx.psu.edu/view/peterjc/clinod
http://testtoolshed.g2.bx.psu.edu/view/peterjc/get_orfs_or_cdss
http://testtoolshed.g2.bx.psu.edu/view/peterjc/mira_assembler
http://testtoolshed.g2.bx.psu.edu/view/peterjc/seq_primer_clip
In some cases there is indeed a failing test, for instance this is due to a bug in the test framework: http://testtoolshed.g2.bx.psu.edu/view/peterjc/blast2go https://trello.com/c/KdGX3hkh
And here there are missing dependencies (due to restrictive licensing problems): http://testtoolshed.g2.bx.psu.edu/view/peterjc/tmhmm_and_signalp https://trello.com/card/-/506338ce32ae458f6d15e4b3/770
Peter
Peter,
Thanks for the data, I am in the process of tracking down the source of the issue, and I hope to have a fix ready shortly.
--Dave B.
On 2013-05-01 08:52, Greg Von Kuster wrote:
Hi Peter,
I created the following Trello card for this and things will be fixed as a priority. Thanks for letting us know!
https://trello.com/card/toolshed-issues-with-functional-test-results/506338c...
On May 1, 2013, at 7:09 AM, Peter Cock wrote:
Hi Greg & Dave,
I really like the new features on the (Test) Tool Shed for searching my repositories:
- Latest revision missing tool tests
- Latest revision failing tool tests
- Latest revision all tool tests pass
However there are some teething problems. Some of my tools are listed under "Latest revision failing tool tests", but when I go to look at them, no test results are shown (passing or failing):
http://testtoolshed.g2.bx.psu.edu/view/peterjc/blastxml_to_top_descr
http://testtoolshed.g2.bx.psu.edu/view/peterjc/effectivet3
http://testtoolshed.g2.bx.psu.edu/view/peterjc/clinod
http://testtoolshed.g2.bx.psu.edu/view/peterjc/get_orfs_or_cdss
http://testtoolshed.g2.bx.psu.edu/view/peterjc/mira_assembler
http://testtoolshed.g2.bx.psu.edu/view/peterjc/seq_primer_clip
In some cases there is indeed a failing test, for instance this is due to a bug in the test framework: http://testtoolshed.g2.bx.psu.edu/view/peterjc/blast2go https://trello.com/c/KdGX3hkh
And here there are missing dependencies (due to restrictive licensing problems): http://testtoolshed.g2.bx.psu.edu/view/peterjc/tmhmm_and_signalp https://trello.com/card/-/506338ce32ae458f6d15e4b3/770
Peter
On 2013-05-01 08:52, Greg Von Kuster wrote:
Hi Peter,
I created the following Trello card for this and things will be fixed as a priority. Thanks for letting us know!
https://trello.com/card/toolshed-issues-with-functional-test-results/506338c...
On Wed, May 1, 2013 at 11:50 PM, Dave Bouvier dave@bx.psu.edu wrote:
Peter,
Thanks for the data, I am in the process of tracking down the source of the issue, and I hope to have a fix ready shortly.
--Dave B.
Hi Dave,
I see from the Trello card you've made some progress here - but I am still seeing odd results on the Test Tool Shed (including not seeing any test results), which has also been reflected in the RSS feed. e.g. Conflicting RSS entries:
http://testtoolshed.g2.bx.psu.edu/view/peterjc/fastq_paired_unpaired/fe46950... Functional test results for changeset revision fae4084a0bc0 of fastq_paired_unpaired 0 tests passed, 0 tests failed, 5 tests missing test components.
vs
http://testtoolshed.g2.bx.psu.edu/view/peterjc/fastq_paired_unpaired/fe46950... Functional test results for changeset revision fe469503441b of fastq_paired_unpaired 0 tests passed, 0 tests failed, 0 tests missing test components.
Currently the Test Tool Shed shows me no test information at all for this example (and sadly many others).
This seems to be part of a larger issue: "Latest revision: failing tool tests" currently lists only mira_assembler - yet viewing this shows no failure (or any other results): http://testtoolshed.g2.bx.psu.edu/view/peterjc/mira_assembler/da604cc07658
Last week I had numerous failing tests (mostly due to missing tool dependencies, some due to issues with the test framework). Many of the (presumably still problematic) repositories are now listed under "Latest revision: all tool tests pass" yet on viewing them there are no test results shown at all, e.g.
http://testtoolshed.g2.bx.psu.edu/view/peterjc/blast2go/54c3d14e5621
http://testtoolshed.g2.bx.psu.edu/view/peterjc/tmhmm_and_signalp/af317463783...
Something seems to have gone wrong on Friday or over the weekend,
Regards,
Peter
Peter,
Yes, there are a few issues that are causing some repositories not to be tested correctly. I've discovered the problem, but the solution will require a bit of work. My goal is to have this resolved as soon as possible.
--Dave B.
On 5/6/13 05:10:57.000, Peter Cock wrote:
On 2013-05-01 08:52, Greg Von Kuster wrote:
Hi Peter,
I created the following Trello card for this and things will be fixed as a priority. Thanks for letting us know!
https://trello.com/card/toolshed-issues-with-functional-test-results/506338c...
On Wed, May 1, 2013 at 11:50 PM, Dave Bouvier dave@bx.psu.edu wrote:
Peter,
Thanks for the data, I am in the process of tracking down the source of the issue, and I hope to have a fix ready shortly.
--Dave B.
Hi Dave,
I see from the Trello card you've made some progress here - but I am still seeing odd results on the Test Tool Shed (including not seeing any test results), which has also been reflected in the RSS feed. e.g. Conflicting RSS entries:
http://testtoolshed.g2.bx.psu.edu/view/peterjc/fastq_paired_unpaired/fe46950... Functional test results for changeset revision fae4084a0bc0 of fastq_paired_unpaired 0 tests passed, 0 tests failed, 5 tests missing test components.
vs
http://testtoolshed.g2.bx.psu.edu/view/peterjc/fastq_paired_unpaired/fe46950... Functional test results for changeset revision fe469503441b of fastq_paired_unpaired 0 tests passed, 0 tests failed, 0 tests missing test components.
Currently the Test Tool Shed shows me no test information at all for this example (and sadly many others).
This seems to be part of a larger issue: "Latest revision: failing tool tests" currently lists only mira_assembler - yet viewing this shows no failure (or any other results): http://testtoolshed.g2.bx.psu.edu/view/peterjc/mira_assembler/da604cc07658
Last week I had numerous failing tests (mostly due to missing tool dependencies, some due to issues with the test framework). Many of the (presumably still problematic) repositories are now listed under "Latest revision: all tool tests pass" yet on viewing them there are no test results shown at all, e.g.
http://testtoolshed.g2.bx.psu.edu/view/peterjc/blast2go/54c3d14e5621
http://testtoolshed.g2.bx.psu.edu/view/peterjc/tmhmm_and_signalp/af317463783...
Something seems to have gone wrong on Friday or over the weekend,
Regards,
Peter
On Mon, May 6, 2013 at 1:55 PM, Dave Bouvier dave@bx.psu.edu wrote:
Peter,
Yes, there are a few issues that are causing some repositories not to be tested correctly. I've discovered the problem, but the solution will require a bit of work. My goal is to have this resolved as soon as possible.
--Dave B.
Hi Dave,
Things are looking sensible again on the Test Tool Shed - thanks :)
I have a query which might deserve discussion: Currently some of my BLAST+ tools have missing tests and some are failing (due in part to limitations of the test framework), yet the repository is only listed under missing tool tests:
Latest revision: missing tool tests - includes ncbi_blast_plus
Latest revision: failing tool tests - omits ncbi_blast_plus
http://testtoolshed.g2.bx.psu.edu/view/peterjc/ncbi_blast_plus
As the yellow description makes clear, this is deliberate, but I would prefer to have ncbi_blast_plus listed in both places. In this case I know there are missing tests, but would hope there are no failing tests - yet the Tool Shed interface is effectively 'hiding' these failures from the prominent listing.
(Right now the failures are unexpected, it seems my changes to the automated installation were not successful - but that's why I'm testing them here to check this)
What does everyone else think?
Thanks,
Peter
Hi Peter,
This change would be simple to make, but it would result in all repositories with missing test components being included in the list of failing tests. There are currently 2 filters that provide the information to categorize these repositories: missing test components (True/False) and tools functionally correct (True/False). I'm not sure it makes sense to add a 3rd filter, but if we can come up with a sensible one, I would be happy to add it. If not, do you want the list of failing tool tests to include repositories that are missing test components?
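For illustration, this is roughly how those two flags map onto the three listings at the moment - a sketch only, not the actual Tool Shed code, and the function and argument names are made up:

    def shed_listing(missing_test_components, tools_functionally_correct):
        # missing_test_components: True if any tool in the latest installable
        # revision lacks a test definition or a required test data file.
        # tools_functionally_correct: True if all executed tool tests passed.
        if missing_test_components:
            return "Latest revision: missing tool tests"
        if tools_functionally_correct:
            return "Latest revision: all tool tests pass"
        return "Latest revision: failing tool tests"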
Thanks!
Greg
On May 7, 2013, at 4:50 AM, Peter Cock wrote:
On Mon, May 6, 2013 at 1:55 PM, Dave Bouvier dave@bx.psu.edu wrote:
Peter,
Yes, there are a few issues that are causing some repositories not to be tested correctly. I've discovered the problem, but the solution will require a bit of work. My goal is to have this resolved as soon as possible.
--Dave B.
Hi Dave,
Things are looking sensible again on the Test Tool Shed - thanks :)
I have a query which might deserve discussion: Currently some of my BLAST+ tools have missing tests and some are failing (due in part to limitations of the test framework), yet the repository is only listed under missing tool tests:
Latest revision: missing tool tests - includes ncbi_blast_plus
Latest revision: failing tool tests - omits ncbi_blast_plus
http://testtoolshed.g2.bx.psu.edu/view/peterjc/ncbi_blast_plus
As the yellow description makes clear, this is deliberate, but I would prefer to have ncbi_blast_plus listed in both places. In this case I know there are missing tests, but would hope there are no failing tests - yet the Tool Shed interface is effectively 'hiding' these failures from the prominent listing.
(Right now the failures are unexpected, it seems my changes to the automated installation were not successful
- but that's why I'm testing them here to check this)
What does everyone else think?
Thanks,
Peter
On Tue, May 7, 2013 at 10:59 AM, Greg Von Kuster greg@bx.psu.edu wrote:
Hi Peter,
This change would be simple to make, but it would result in all repositories with missing test components being included in the list of failing tests. There are currently 2 filters that provide the information to categorize these repositories: missing test components (True/False) and tools functionally correct (True/False). I'm not sure it makes sense to add a 3rd filter, but if we can come up with a sensible one, I would be happy to add it. If not, do you want the list of failing tool tests to include repositories that are missing test components?
By missing test components do you mean things like missing a test input or output file? If so, that would be fine. If I've written a test but not included the sample data, then to me that is just a special case of a failing test that needs fixing (and it should be a trivial fix).
Currently (according to the yellow notes on the Test Tool Shed),
Latest revision: missing tool tests:
* you are authorized to update them
* the latest installable revision contains at least 1 tool with no defined tests OR:
* the latest installable revision contains at least 1 tool with a test that requires a missing test data file
Latest revision: failing tool tests:
* you are authorized to update them
* the latest installable revision contains at least 1 tool
* the latest installable revision is not missing any tool test components
* the latest installable revision has at least 1 tool test that fails
My suggestion would be to treat missing test data files as a failing test, something like this:
Latest revision: missing tool tests:
* you are authorized to update them
* the latest installable revision contains at least 1 tool with no defined tests
Latest revision: failing tool tests:
* you are authorized to update them
* the latest installable revision has at least 1 tool test that fails OR requires a missing test data file
Under this scheme, right now on the Test Tool Shed my ncbi_blast_plus repository would appear on both lists due to missing tests for tools like ncbi_rpsblast_wrapper, and failing tests like ncbi_blastn_wrapper (due to a glitch in the dependency installation changes I made).
Peter
Hi Peter,
On May 7, 2013, at 9:10 AM, Peter Cock wrote:
On Tue, May 7, 2013 at 10:59 AM, Greg Von Kuster greg@bx.psu.edu wrote:
Hi Peter,
This change would be simple to make, but it would result in all repositories with missing test components being included in the list of failing tests. There are currently 2 filters that provide the information to categorize these repositories: missing test components (True/False) and tools functionally correct (True/False). I'm not sure it makes sense to add a 3rd filter, but if we can come up with a sensible one, I would be happy to add it. If not, do you want the list of failing tool tests to include repositories that are missing test components?
By missing test components do you mean things like missing a test input or output file? If so, that would be fine. If I've written a test but not included the sample data, then to me that is just a special case of a failing test that needs fixing (and it should be a trivial fix).
Missing test components implies a tool config that does not define a test (i.e., a missing test definition) or a tool config that defines a test, but the test's input or output files are missing from the repository.
Currently (according to the yellow notes on the Test Tool Shed),
Latest revision: missing tool tests:
- you are authorized to update them
- the latest installable revision contains at least 1 tool with no
defined tests OR:
- the latest installable revision contains at least 1 tool with a
test that requires a missing test data file
Latest revision: failing tool tests:
- you are authorized to update them
- the latest installable revision contains at least 1 tool
- the latest installable revision is not missing any tool test components
- the latest installable revision has at least 1 tool test that fails
My suggestion would be to treat missing test data files as a failing test, something like this:
Latest revision: missing tool tests:
- you are authorized to update them
- the latest installable revision contains at least 1 tool with no
defined tests
I don't see the benefit of the above where you place tools missing tests into a different category than tools with defined tests, but missing test data. If any of the test components (test definition or required input or output files) are missing, then the test cannot be executed, so defining it as a failing test in either case is a bit misleading. It is actually a tool that is missing test components that are required for execution which will result in a pass / fail status.
It would be much simpler to change the filter for failing tests to include those that are missing test components so that the list of missing test components is a subset of the list of failing tests.
Latest revision: failing tool tests:
- you are authorized to update them
- the latest installable revision has at least 1 tool test that fails
OR requires a missing test data file
Under this scheme, right now on the Test Tool Shed my ncbi_blast_plus repository would appear on both lists due to missing tests for tools like ncbi_rpsblast_wrapper, and failing tests like ncbi_blastn_wrapper (due to a glitch in the dependency installation changes I made).
Peter
On Tue, May 7, 2013 at 7:02 PM, Greg Von Kuster greg@bx.psu.edu wrote:
Hi Peter,
Missing test components implies a tool config that does not define a test (i.e., a missing test definition) or a tool config that defines a test, but the test's input or output files are missing from the repository.
This seems to be our point of confusion: I don't understand combining these two categories - it seems unhelpful to me.
Tools missing a test definition clearly can't be tested - but since we'd like every tool to have tests, having this as an easily viewed listing is useful both for authors and reviewers. It highlights tools which need some work - or in some cases work on the Galaxy test framework itself. They are neither passing nor failing tests - and it makes sense to list them separately.
Tools with a test definition should be tested - if they are missing an input or output file this is just a special case of a test failure (and can be spotted without actually attempting to run the tool). This is clearly a broken test and the tool author should be able to fix this easily (by uploading the missing test data file)
I don't see the benefit of the above where you place tools missing tests into a different category than tools with defined tests, but missing test data. If any of the test components (test definition or required input or output files) are missing, then the test cannot be executed, so defining it as a failing test in either case is a bit misleading. It is actually a tool that is missing test components that are required for execution which will result in a pass / fail status.
It is still a failing test (just for the trivial reason of missing a test data file).
It would be much simpler to change the filter for failing tests to include those that are missing test components so that the list of missing test components is a subset of the list of failing tests.
What I would like is three lists:
Latest revision: missing tool tests - repositories where at least 1 tool has no test defined
[The medium term TO-DO list for the Tool Author]
Latest revision: failing tool tests - repositories where at least 1 tool has a failing test (where I include tests missing their input or output test data files)
[The priority TO-DO list for the Tool Author]
Latest revision: all tool tests pass - repositories where every tool has tests and they all pass
[The good list, Tool Authors should aim to have everything here]
Right now http://testtoolshed.g2.bx.psu.edu/view/peterjc/ncbi_blast_plus would appear under both "missing tool tests" and "failing tool tests", but I hope to fix this and have this under "missing tool tests" only (until my current roadblocks with the Galaxy Test Framework are resolved).
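In code form, the rule I am proposing would look something like the sketch below (the field names are made up for illustration - this is not a patch against the Tool Shed):

    def proposed_listings(tools):
        # tools: one dict per tool in the repository, with hypothetical keys
        # "has_test", "has_test_data" and "passed".
        listings = []
        if any(not t["has_test"] for t in tools):
            listings.append("Latest revision: missing tool tests")
        if any(t["has_test"] and (not t["has_test_data"] or not t["passed"])
               for t in tools):
            listings.append("Latest revision: failing tool tests")
        if not listings:
            listings.append("Latest revision: all tool tests pass")
        return listings

The key difference from the current behaviour is that a missing test data file counts as a failure, so a repository can appear in both of the first two lists at once.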
I hope I've managed a clearer explanation this time,
Thanks,
Peter
Hi Peter,
On May 8, 2013, at 6:45 AM, Peter Cock wrote:
On Tue, May 7, 2013 at 7:02 PM, Greg Von Kuster greg@bx.psu.edu wrote:
Hi Peter,
Missing test components implies a tool config that does not define a test (i.e., a missing test definition) or a tool config that defines a test, but the test's input or output files are missing from the repository.
This seems to be our point of confusion: I don't understand combining these two categories - it seems unhelpful to me.
I feel this is just a difference of opinion. Combining missing tests and missing test data into a single category is certainly justifiable. Any repository that falls into this category clearly states to the owner what is missing, and the owner can easily know that work is needed to prepare the repository contents for testing, whether that work falls into the category of adding a missing test or adding missing test data.
Tools missing a test definition clearly can't be tested - but since we'd like every tool to have tests, having this as an easily viewed listing is useful both for authors and reviewers.
But it is an easily viewed listing. It is currently very easy to determine if a tool is missing a defined test, is missing test data, or both.
It highlights tools which need some work - or in some cases work on the Galaxy test framework itself. They are neither passing nor failing tests - and it makes sense to list them separately.
Tools with a test definition should be tested
This is where I disagree. It currently takes a few seconds for our script that checks repositories for test components to crawl the entire main tool shed and set flags for those repositories missing test components. However, the separate script that crawls the main tool shed and installs and tests repositories that are not missing test components currently takes hours to run, even though less than 10% of the repositories are currently tested (due to missing test components on most of them).
Installing and testing repositories that have tools with defined tests but missing test data is potentially costly from a time perspective. Let's take a simple example:
Repo A has 1 tool that includes a defined test, but is missing required test data from the repository. The tool in repo A defines 2 3rd party tool dependencies that must be installed and compiled. In addition, repo A defines a repository dependency whose ultimate chain of repository installations results in 4 additional repositories with 16 additional 3rd party tool dependencies, with a total installation time of 2 hours. All of this time is taken in order to test the tool in repo A when we already know that it will not succeed because it is missing test data. This is certainly a realistic scenario.
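In pseudocode terms, the cheap metadata pass is what lets us avoid that cost entirely - a simplified sketch only, with illustrative field names rather than the real implementation:

    def worth_installing_and_testing(repo):
        # Decided during the fast crawl, before any installation happens.
        for tool in repo["tools"]:
            if not tool.get("tests"):
                return False  # no test defined for this tool
            if not tool.get("test_data_present"):
                return False  # test defined, but required data files are missing
        return True  # only now is the (potentially hours-long) install worthwhile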
- if they are missing an input or output file this is just a special case of a test failure (and can be spotted without actually attempting to run the tool).
Yes, but this is what we are doing now. We are spotting this scenario without installing the repository or running any defined tests by running the tool.
This is clearly a broken test and the tool author should be able to fix this easily (by uploading the missing test data file)
Yes, but this is already possible for them to clearly see without having to install the repository or run any tests.
I don't see the benefit of the above where you place tools missing tests into a different category than tools with defined tests, but missing test data. If any of the test components (test definition or required input or output files) are missing, then the test cannot be executed, so defining it as a failing test in either case is a bit misleading. It is actually a tool that is missing test components that are required for execution which will result in a pass / fail status.
It is still a failing test (just for the trivial reason of missing a test data file).
It would be much simpler to change the filter for failing tests to include those that are missing test components so that the list of missing test components is a subset of the list of failing tests.
What I would like is three lists:
Latest revision: missing tool tests
- repositories where at least 1 tool has no test defined
[The medium term TO-DO list for the Tool Author]
Latest revision: failing tool tests
- repositories where at least 1 tool has a failing test (where I include
tests missing their input or output test data files)
[The priority TO-DO list for the Tool Author]
Latest revision: all tool tests pass
- repositories where every tool has tests and they all pass
[The good list, Tool Authors should aim to have everything here]
Right now http://testtoolshed.g2.bx.psu.edu/view/peterjc/ncbi_blast_plus would appear under both "missing tool tests" and "failing tool tests", but I hope to fix this and have this under "missing tool tests" only (until my current roadblocks with the Galaxy Test Framework are resolved).
I hope I've managed a clearer explanation this time,
Thanks,
Peter
On Wed, May 8, 2013 at 3:28 PM, Greg Von Kuster greg@bx.psu.edu wrote:
Hi Peter,
On May 8, 2013, at 6:45 AM, Peter Cock wrote:
On Tue, May 7, 2013 at 7:02 PM, Greg Von Kuster greg@bx.psu.edu wrote:
Hi Peter,
Missing test components implies a tool config that does not define a test (i.e., a missing test definition) or a tool config that defines a test, but the test's input or output files are missing from the repository.
This seems to be our point of confusion: I don't understand combining these two categories - it seems unhelpful to me.
I feel this is just a difference of opinion. Combining missing tests and missing test data into a single category is certainly justifiable. Any repository that falls into this category clearly states to the owner what is missing, and the owner can easily know that work is needed to prepare the repository contents for testing, whether that work falls into the category of adding a missing test or adding missing test data.
Speaking as a tool author, these are two rather different categories which should not be merged. I personally would put "tools with defined tests but missing input/output files" under "failing tests" not under "missing tests".
Tools missing a test definition clearly can't be tested - but since we'd like every tool to have tests, having this as an easily viewed listing is useful both for authors and reviewers.
But it is an easily viewed listing. It is currently very easy to determine if a tool is missing a defined test, is missing test data, or both.
No it isn't easily viewable - it is easy to get a combined listing of repositories with (a) missing tests and/or (b) tests with missing files, and then very tedious to look at these repositories one by one to see which it is.
It highlights tools which need some work - or in some cases work on the Galaxy test framework itself. They are neither passing nor failing tests - and it makes sense to list them separately.
Tools with a test definition should be tested
This is where I disagree. ... <snip> Installing and testing repositories that have tools with defined tests but missing test data is potentially costly from a time perspective.
<snip>
I wasn't meaning to suggest you do that though - you're already able to short cut these cases and mark the test as failed. These are the quickest possible tests to run - they fail at the first hurdle.
- if they are missing an input or output file this is just a special case of a test failure (and can be spotted without actually attempting to run the tool).
Yes, but this is what we are doing now. We are spotting this scenario without installing the repository or running any defined tests by running the tool.
Yes, and that is fine - I'm merely talking about how this information is presented to the Tool Shed viewer.
This is clearly a broken test and the tool author should be able to fix this easily (by uploading the missing test data file)
Yes, but this is already possible for them to clearly see without having to install the repository or run any tests.
Indeed, but this is a failing test and should (in my view) be listed under failing tests, not under missing tests.
We're just debating where to list such problem tools/repositories in the Tool Shed's test results interface.
Regards,
Peter
Peter,
As you've already noticed, I've tracked down and fixed the main issue that was causing inaccurate test results. Thank you again for the data you provided, which was of great help narrowing down the cause of the issue.
--Dave B.
On 5/7/13 04:50:23.000, Peter Cock wrote:
On Mon, May 6, 2013 at 1:55 PM, Dave Bouvier dave@bx.psu.edu wrote:
Peter,
Yes, there are a few issues that are causing some repositories not to be tested correctly. I've discovered the problem, but the solution will require a bit of work. My goal is to have this resolved as soon as possible.
--Dave B.
Hi Dave,
Things are looking sensible again on the Test Tool Shed - thanks :)
I have a query which might deserve discussion: Currently some of my BLAST+ tools have missing tests and some are failing (due in part to limitations of the test framework), yet the repository is only listed under missing tool tests:
Latest revision: missing tool tests - includes ncbi_blast_plus
Latest revision: failing tool tests - omits ncbi_blast_plus
http://testtoolshed.g2.bx.psu.edu/view/peterjc/ncbi_blast_plus
As the yellow description makes clear, this is deliberate, but I would prefer to have ncbi_blast_plus listed in both places. In this case I know there are missing tests, but would hope there are no failing tests - yet the Tool Shed interface is effectively 'hiding' these failures from the prominent listing.
(Right now the failures are unexpected, it seems my changes to the automated installation were not successful
- but that's why I'm testing them here to check this)
What does everyone else think?
Thanks,
Peter
On Tue, May 7, 2013 at 2:10 PM, Dave Bouvier dave@bx.psu.edu wrote:
Peter,
As you've already noticed, I've tracked down and fixed the main issue that was causing inaccurate test results. Thank you again for the data you provided, which was of great help narrowing down the cause of the issue.
--Dave B.
I'm glad that's sorted - I should be able to get back into fixing tests and adding a few more now that I'm getting nightly feedback again :)
Thanks!
Peter
P.S. Is the idea of running the tests whenever a tool is updated likely to happen in the near future, or is that still a wish list item?
On Tue, May 7, 2013 at 2:10 PM, Dave Bouvier dave@bx.psu.edu wrote:
Peter,
As you've already noticed, I've tracked down and fixed the main issue that was causing inaccurate test results. Thank you again for the data you provided, which was of great help narrowing down the cause of the issue.
--Dave B.
Hi Dave,
I've got another problem set for you: some repositories are listed under "Latest revision: failing tool tests", yet no test results are shown (positive or negative).
Tool shed revision: 9661:cb0432cfcc8a
My clinod wrapper with one test: http://testtoolshed.g2.bx.psu.edu/view/peterjc/clinod/a66a914c39b5 Revision 3:a66a914c39b5
My effective T3 wrapper with two tests: http://testtoolshed.g2.bx.psu.edu/view/peterjc/effectivet3/4567618bebbd Revision 5:4567618bebbd
My seq_rename tool with one test: http://testtoolshed.g2.bx.psu.edu/view/peterjc/seq_rename/b51633a69a92 Revision 2:b51633a69a92
Regards,
Peter
Peter,
A technical issue prevented the tests from automatically running. I've resolved the issue and started a manual run, you should be seeing test results within 2-3 hours.
--Dave B.
On 5/8/13 07:09:28.000, Peter Cock wrote:
On Tue, May 7, 2013 at 2:10 PM, Dave Bouvier dave@bx.psu.edu wrote:
Peter,
As you've already noticed, I've tracked down and fixed the main issue that was causing inaccurate test results. Thank you again for the data you provided, which was of great help narrowing down the cause of the issue.
--Dave B.
Hi Dave,
I've got another problem set for you: some repositories are listed under "Latest revision: failing tool tests", yet no test results are shown (positive or negative).
Tool shed revision: 9661:cb0432cfcc8a
My clinod wrapper with one test: http://testtoolshed.g2.bx.psu.edu/view/peterjc/clinod/a66a914c39b5 Revision 3:a66a914c39b5
My effective T3 wrapper with two tests: http://testtoolshed.g2.bx.psu.edu/view/peterjc/effectivet3/4567618bebbd Revision 5:4567618bebbd
My seq_rename tool with one test: http://testtoolshed.g2.bx.psu.edu/view/peterjc/seq_rename/b51633a69a92 Revision 2:b51633a69a92
Regards,
Peter
On Wed, May 8, 2013 at 2:09 PM, Dave Bouvier dave@bx.psu.edu wrote:
Peter,
A technical issue prevented the tests from automatically running. I've resolved the issue and started a manual run, you should be seeing test results within 2-3 hours.
--Dave B.
Hi Dave,
As you hoped, these tests now show up on the tool page (as expected they are test failures - apparently my install script isn't quite right yet):
http://testtoolshed.g2.bx.psu.edu/view/peterjc/clinod/a66a914c39b5 Functional test results for changeset revision a66a914c39b5 of clinod 0 tests passed, 1 tests failed, 0 tests missing test components.
http://testtoolshed.g2.bx.psu.edu/view/peterjc/effectivet3/5644c28cf965 Functional test results for changeset revision 5644c28cf965 of effectivet3 0 tests passed, 2 tests failed, 0 tests missing test components.
This one is more interesting, and appears to be a problem with the test framework (both tests pass locally via run_functional_tests.sh):
http://testtoolshed.g2.bx.psu.edu/view/peterjc/seq_rename/16a1a5ae98e9 Functional test results for changeset revision 16a1a5ae98e9 of seq_rename 0 tests passed, 2 tests failed, 0 tests missing test components.
I'll raise that on a new thread,
Peter
(Sorry for resending this Dave, I left off the mailing list the first time)
On Thu, May 9, 2013 at 10:49 AM, Peter Cock p.j.a.cock@googlemail.com wrote:
On Wed, May 8, 2013 at 2:09 PM, Dave Bouvier dave@bx.psu.edu wrote:
Peter,
A technical issue prevented the tests from automatically running. I've resolved the issue and started a manual run, you should be seeing test results within 2-3 hours.
--Dave B.
Hi Dave,
As you hoped, these tests now show up on the tool page (as expected they are test failures - apparently my install script isn't quite right yet):
http://testtoolshed.g2.bx.psu.edu/view/peterjc/clinod/a66a914c39b5 Functional test results for changeset revision a66a914c39b5 of clinod 0 tests passed, 1 tests failed, 0 tests missing test components.
http://testtoolshed.g2.bx.psu.edu/view/peterjc/effectivet3/5644c28cf965 Functional test results for changeset revision 5644c28cf965 of effectivet3 0 tests passed, 2 tests failed, 0 tests missing test components.
This one is more interesting, and appears to be a problem with the test framework (both tests pass locally via run_functional_tests.sh):
http://testtoolshed.g2.bx.psu.edu/view/peterjc/seq_rename/16a1a5ae98e9 Functional test results for changeset revision 16a1a5ae98e9 of seq_rename 0 tests passed, 2 tests failed, 0 tests missing test components.
I'll raise that on a new thread,
Peter
Hi again Dave,
There is something breaking intermittently - my overnight RSS feed:
http://testtoolshed.g2.bx.psu.edu/view/peterjc/clinod/20e44218fa63 Functional test results for changeset revision 20e44218fa63 of clinod 0 tests passed, 0 tests failed, 0 tests missing test components. 7:38 AM (2 hours ago)
--> No test results shown (positive or negative), yet it is listed under "Latest revision: failing tool tests"
http://testtoolshed.g2.bx.psu.edu/view/peterjc/blast2go/6e7694a0ae00 Functional test results for changeset revision 6e7694a0ae00 of blast2go 0 tests passed, 0 tests failed, 0 tests missing test components. 7:38 AM (2 hours ago)
--> Actually one failure due to dependency on main Tool Shed, correctly listed under "Latest revision: failing tool tests"
http://testtoolshed.g2.bx.psu.edu/view/peterjc/effectivet3/c70df461aae5 Functional test results for changeset revision c70df461aae5 of effectivet3 0 tests passed, 0 tests failed, 0 tests missing test components. 7:38 AM (2 hours ago)
--> No test results shown (positive or negative), yet it is listed under "Latest revision: failing tool tests"
These were the only three feed entries overnight from: http://testtoolshed.g2.bx.psu.edu/repository/get_functional_test_rss?owner=p...
Presumably then other tools were not tested - which makes sense to save resources if neither the tool nor the tool shed was updated since the last time a tool was tested.
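As an aside, for checking the feed without a feed reader I use something like the snippet below - a rough sketch assuming the feed is plain RSS 2.0, with a placeholder owner name rather than the real URL above:

    import urllib.request
    import xml.etree.ElementTree as ET

    FEED_URL = ("http://testtoolshed.g2.bx.psu.edu/repository/"
                "get_functional_test_rss?owner=YOUR_OWNER_NAME")  # placeholder

    with urllib.request.urlopen(FEED_URL) as handle:
        tree = ET.parse(handle)

    for item in tree.iter("item"):
        print(item.findtext("title", default=""))
        print("    " + item.findtext("pubDate", default=""))
        print("    " + item.findtext("description", default=""))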
Peter
Peter,
The automated testing framework consists of a series of scripts that run in a certain order, so between one script and the next, there may sometimes be unexpected results displayed. I've had a look at the repositories you listed, and the results all seem to be in order now. The RSS feed also seems to be displaying the same results as the web interface.
A repository is set not to be tested only in a few cases; among those is when there is a more recent installable changeset revision, and the revision currently being tested has missing components or does not pass the tests.
--Dave B.
On 5/10/13 05:34:59.000, Peter Cock wrote:
On Thu, May 9, 2013 at 10:49 AM, Peter Cock p.j.a.cock@googlemail.com wrote:
On Wed, May 8, 2013 at 2:09 PM, Dave Bouvier dave@bx.psu.edu wrote:
Peter,
A technical issue prevented the tests from automatically running. I've resolved the issue and started a manual run, you should be seeing test results within 2-3 hours.
--Dave B.
Hi Dave,
As you hoped, these tests now show up on the tool page (as expected they are test failures - apparently my install script isn't quite right yet):
http://testtoolshed.g2.bx.psu.edu/view/peterjc/clinod/a66a914c39b5 Functional test results for changeset revision a66a914c39b5 of clinod 0 tests passed, 1 tests failed, 0 tests missing test components.
http://testtoolshed.g2.bx.psu.edu/view/peterjc/effectivet3/5644c28cf965 Functional test results for changeset revision 5644c28cf965 of effectivet3 0 tests passed, 2 tests failed, 0 tests missing test components.
This one is more interesting, and appears to be a problem with the test framework (both tests pass locally via run_functional_tests.sh):
http://testtoolshed.g2.bx.psu.edu/view/peterjc/seq_rename/16a1a5ae98e9 Functional test results for changeset revision 16a1a5ae98e9 of seq_rename 0 tests passed, 2 tests failed, 0 tests missing test components.
I'll raise that on a new thread,
Peter
Hi again Dave,
There is something breaking intermittently - my overnight RSS feed:
http://testtoolshed.g2.bx.psu.edu/view/peterjc/clinod/20e44218fa63 Functional test results for changeset revision 20e44218fa63 of clinod 0 tests passed, 0 tests failed, 0 tests missing test components. 7:38 AM (2 hours ago)
--> No test results shown (positive or negative), yet it is listed under "Latest revision: failing tool tests"
http://testtoolshed.g2.bx.psu.edu/view/peterjc/blast2go/6e7694a0ae00 Functional test results for changeset revision 6e7694a0ae00 of blast2go 0 tests passed, 0 tests failed, 0 tests missing test components. 7:38 AM (2 hours ago)
--> Actually one failure due to dependency on main Tool Shed, correctly listed under "Latest revision: failing tool tests"
http://testtoolshed.g2.bx.psu.edu/view/peterjc/effectivet3/c70df461aae5 Functional test results for changeset revision c70df461aae5 of effectivet3 0 tests passed, 0 tests failed, 0 tests missing test components. 7:38 AM (2 hours ago)
--> No test results shown (positive or negative), yet it is listed under "Latest revision: failing tool tests"
These were the only three feed entries overnight from: http://testtoolshed.g2.bx.psu.edu/repository/get_functional_test_rss?owner=p...
Presumably then other tools were not tested - which makes sense to save resources if neither the tool nor the tool shed was updated since the last time a tool was tested.
Peter
On Fri, May 10, 2013 at 1:51 PM, Dave Bouvier dave@bx.psu.edu wrote:
Peter,
The automated testing framework consists of a series of scripts that run in a certain order, so between one script and the next, there may sometimes be unexpected results displayed. I've had a look at the repositories you listed, and the results all seem to be in order now. The RSS feed also seems to be displaying the same results as the web interface.
There seems to be some kind of caching problem for me:
http://testtoolshed.g2.bx.psu.edu/view/peterjc/clinod/20e44218fa63 Functional test results for changeset revision 20e44218fa63 of clinod 0 tests passed, 0 tests failed, 0 tests missing test components. 7:38 AM (2 hours ago)
--> No test results shown (positive or negative), yet it is listed under "Latest revision: failing tool tests"
That URL redirects to here, still no test results shown in my main browser (Firefox v20.0 on Mac): http://testtoolshed.g2.bx.psu.edu/repository?repository_id=27e2bf03e13aae5d&...
However, switching to Safari or Firefox on Windows I see the failing test (Error: Unable to access jarfile ...).
http://testtoolshed.g2.bx.psu.edu/view/peterjc/effectivet3/c70df461aae5 Functional test results for changeset revision c70df461aae5 of effectivet3 0 tests passed, 0 tests failed, 0 tests missing test components. 7:38 AM (2 hours ago)
--> No test results shown (positive or negative), yet it is listed under "Latest revision: failing tool tests"
That URL redirects to here, still no test results in my main browser (Firefox v20.0 on Mac): http://testtoolshed.g2.bx.psu.edu/repository?repository_id=26f26c7aa084d325&...
Again, switching to Safari or Firefox on Windows I see two failing tests (Effective T3 model JAR file not found ...).
This suggests some caching problem - but restarting Firefox didn't solve it, nor did deleting the cache files in Firefox. Puzzling.
Peter
On Fri, May 10, 2013 at 2:30 PM, Peter Cock p.j.a.cock@googlemail.com wrote:
On Fri, May 10, 2013 at 1:51 PM, Dave Bouvier dave@bx.psu.edu wrote:
Peter,
The automated testing framework consists of a series of scripts that run in a certain order, so between one script and the next, there may sometimes be unexpected results displayed. I've had a look at the repositories you listed, and the results all seem to be in order now. The RSS feed also seems to be displaying the same results as the web interface.
There seems to be some kind of caching problem for me:
http://testtoolshed.g2.bx.psu.edu/view/peterjc/clinod/20e44218fa63 Functional test results for changeset revision 20e44218fa63 of clinod 0 tests passed, 0 tests failed, 0 tests missing test components. 7:38 AM (2 hours ago)
--> No test results shown (positive or negative), yet it is listed under "Latest revision: failing tool tests"
That URL redirects to here, still no test results shown in my main browser (Firefox v20.0 on Mac): http://testtoolshed.g2.bx.psu.edu/repository?repository_id=27e2bf03e13aae5d&...
However, switching to Safari or Firefox on Windows I see the failing test (Error: Unable to access jarfile ...).
Today all those browsers show no test results for clinod :(
Tool test results
Automated test environment
Time tested: ~ 4 hours ago
System: Linux 3.0.0-1-amd64
Architecture: x86_64
Python version: 2.7.2+
Galaxy revision:
Galaxy database version:
Tool shed revision: 9701:65a81aead95e
Tool shed database version: 18
Tool shed mercurial version: 2.2.3
http://testtoolshed.g2.bx.psu.edu/view/peterjc/effectivet3/c70df461aae5 Functional test results for changeset revision c70df461aae5 of effectivet3 0 tests passed, 0 tests failed, 0 tests missing test components. 7:38 AM (2 hours ago)
Failures still showing (good), I've not uploaded the suggested changes Dave made on Friday yet: http://lists.bx.psu.edu/pipermail/galaxy-dev/2013-May/014619.html
But in addition to clinod (above), here are some more repositories where the test results are currently missing (tested in three browsers over two machines):
http://testtoolshed.g2.bx.psu.edu/view/peterjc/blastxml_to_top_descr/58359ce...
http://testtoolshed.g2.bx.psu.edu/view/peterjc/get_orfs_or_cdss/88f002677c63
http://testtoolshed.g2.bx.psu.edu/view/peterjc/seq_primer_clip/06e6112091aa
Might there be a timing issue here? Is it possible that when I check in the morning UK time, the test results haven't yet all been recorded in the database, so the Tool Shed only has partial information - and then when you check in the afternoon UK time (morning in the USA) things are working? (Although I don't always see this if there is also a cache problem.)
Thanks,
Peter
On Mon, May 13, 2013 at 10:08 AM, Peter Cock p.j.a.cock@googlemail.com wrote:
http://testtoolshed.g2.bx.psu.edu/view/peterjc/effectivet3/c70df461aae5 Functional test results for changeset revision c70df461aae5 of effectivet3 0 tests passed, 0 tests failed, 0 tests missing test components. 7:38 AM (2 hours ago)
Failures still showing (good), I've not uploaded the suggested changes Dave made on Friday yet: http://lists.bx.psu.edu/pipermail/galaxy-dev/2013-May/014619.html
I uploaded a minor change which should still fail but with a more helpful error when running the tests (which confirmed the install script is failing to create an expected directory):
http://testtoolshed.g2.bx.psu.edu/view/peterjc/effectivet3/b863c4bec840
Using my default browser, Firefox (on two machines), the test results are missing - all I see is:
Tool test results
Automated test environment
Time tested: 28 minutes ago
System: Linux 3.5.0-21-generic
Architecture: x86_64
Python version: 2.7.3
Galaxy revision: 9701:65a81aead95e
Galaxy database version: 115
Tool shed revision:
Tool shed database version:
Tool shed mercurial version:
However, switching to Safari and Chrome I can see the failures here. I remain puzzled why I am seeing seemingly stochastic missing test results.
Regards,
Peter
On Mon, May 13, 2013 at 11:09 AM, Peter Cock p.j.a.cock@googlemail.com wrote:
I remain puzzled why I am seeing seemingly stochastic missing test results.
Hi Dave,
This continues - although I have been making progress with the tests for some of my tools. Currently there are a number of repositories reported as failing but I can't see the error (and so can't do much about fixing it):
http://testtoolshed.g2.bx.psu.edu/view/peterjc/blastxml_to_top_descr
http://testtoolshed.g2.bx.psu.edu/view/peterjc/clinod
http://testtoolshed.g2.bx.psu.edu/view/peterjc/get_orfs_or_cdss
http://testtoolshed.g2.bx.psu.edu/view/peterjc/seq_primer_clip
http://testtoolshed.g2.bx.psu.edu/view/peterjc/ncbi_blast_plus When previously visible this was due to a test framework limitation testing with composite output files: http://lists.bx.psu.edu/pipermail/galaxy-dev/2013-April/014365.html
Given I suspect a cache issue hiding the test failures, it might help if rather than the current text like "Time tested: ~ 8 hours ago" this could also state the date and time of the test (in UTC/GMT to avoid locale confusion).
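Something along these lines is all I mean - a couple of lines of Python as a sketch, not a patch:

    from datetime import datetime, timezone

    tested_at = datetime.now(timezone.utc)  # stand-in for the stored test time
    print(tested_at.strftime("Time tested: %Y-%m-%d %H:%M UTC"))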
Thanks,
Peter
----
Other repositories failing with a clear error message:
http://testtoolshed.g2.bx.psu.edu/view/peterjc/tmhmm_and_signalp Missing dependencies, should have been skipped, see http://lists.bx.psu.edu/pipermail/galaxy-dev/2013-May/014748.html
http://testtoolshed.g2.bx.psu.edu/view/peterjc/mira_assembler Limitation of test framework, Trello issue open for this, http://lists.bx.psu.edu/pipermail/galaxy-dev/2013-April/014366.html
http://testtoolshed.g2.bx.psu.edu/view/peterjc/seq_rename Limitation of test framework, issue raised, http://lists.bx.psu.edu/pipermail/galaxy-dev/2013-May/014701.html
http://testtoolshed.g2.bx.psu.edu/view/peterjc/venn_list Missing an R/Bioconductor dependency, best practise for handling this has yet to be agreed.
Peter,
The most likely cause of this is that there is an issue with some preceding repositories that hang on installation, causing the build process to time out and not test later repositories in the list. I am in the process of working out a solution for that situation.
--Dave B.
On 5/21/13 06:26:56.000, Peter Cock wrote:
On Mon, May 13, 2013 at 11:09 AM, Peter Cock p.j.a.cock@googlemail.com wrote:
I remain puzzled why I am seeing seemingly stochastic missing test results.
Hi Dave,
This continues - although I have been making progress with the tests for some of my tools. Currently there are a number of repositories reported as failing but I can't see the error (and so can't do much about fixing it):
http://testtoolshed.g2.bx.psu.edu/view/peterjc/blastxml_to_top_descr
http://testtoolshed.g2.bx.psu.edu/view/peterjc/clinod
http://testtoolshed.g2.bx.psu.edu/view/peterjc/get_orfs_or_cdss
http://testtoolshed.g2.bx.psu.edu/view/peterjc/seq_primer_clip
http://testtoolshed.g2.bx.psu.edu/view/peterjc/ncbi_blast_plus When previously visible this was due to a test framework limitation testing with composite output files: http://lists.bx.psu.edu/pipermail/galaxy-dev/2013-April/014365.html
Given I suspect a cache issue hiding the test failures, it might help if rather than the current text like "Time tested: ~ 8 hours ago" this could also state the date and time of the test (in UTC/GMT to avoid locale confusion).
Thanks,
Peter
Other repositories failing with a clear error message:
http://testtoolshed.g2.bx.psu.edu/view/peterjc/tmhmm_and_signalp Missing dependencies, should have been skipped, see http://lists.bx.psu.edu/pipermail/galaxy-dev/2013-May/014748.html
http://testtoolshed.g2.bx.psu.edu/view/peterjc/mira_assembler Limitation of test framework, Trello issue open for this, http://lists.bx.psu.edu/pipermail/galaxy-dev/2013-April/014366.html
http://testtoolshed.g2.bx.psu.edu/view/peterjc/seq_rename Limitation of test framework, issue raised, http://lists.bx.psu.edu/pipermail/galaxy-dev/2013-May/014701.html
http://testtoolshed.g2.bx.psu.edu/view/peterjc/venn_list Missing an R/Bioconductor dependency, best practise for handling this has yet to be agreed.
Peter,
I have implemented a system that will explicitly exclude a defined list of repositories from the testing framework, so your repositories' tests should now be running.
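In rough terms the idea is a defined exclusion list that is checked before any installation or testing starts - a simplified sketch only, with the file name and format invented for illustration rather than taken from the actual code:

    def load_exclusions(path="excluded_repositories.txt"):
        # Hypothetical format: one "owner/repository_name" per line.
        with open(path) as handle:
            return {line.strip() for line in handle if line.strip()}

    def repositories_to_test(all_repositories):
        excluded = load_exclusions()
        return [repo for repo in all_repositories
                if "%s/%s" % (repo["owner"], repo["name"]) not in excluded]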
--Dave B.
On 5/21/13 06:26:56.000, Peter Cock wrote:
On Mon, May 13, 2013 at 11:09 AM, Peter Cock p.j.a.cock@googlemail.com wrote:
I remain puzzled why I am seeing seemingly stochastic missing test results.
Hi Dave,
This continues - although I have been making progress with the tests for some of my tools. Currently there are a number of repositories reported as failing but I can't see the error (and so can't do much about fixing it):
http://testtoolshed.g2.bx.psu.edu/view/peterjc/blastxml_to_top_descr
http://testtoolshed.g2.bx.psu.edu/view/peterjc/clinod
http://testtoolshed.g2.bx.psu.edu/view/peterjc/get_orfs_or_cdss
http://testtoolshed.g2.bx.psu.edu/view/peterjc/seq_primer_clip
http://testtoolshed.g2.bx.psu.edu/view/peterjc/ncbi_blast_plus When previously visible this was due to a test framework limitation testing with composite output files: http://lists.bx.psu.edu/pipermail/galaxy-dev/2013-April/014365.html
Given I suspect a cache issue hiding the test failures, it might help if rather than the current text like "Time tested: ~ 8 hours ago" this could also state the date and time of the test (in UTC/GMT to avoid locale confusion).
Thanks,
Peter
Other repositories failing with a clear error message:
http://testtoolshed.g2.bx.psu.edu/view/peterjc/tmhmm_and_signalp Missing dependencies, should have been skipped, see http://lists.bx.psu.edu/pipermail/galaxy-dev/2013-May/014748.html
http://testtoolshed.g2.bx.psu.edu/view/peterjc/mira_assembler Limitation of test framework, Trello issue open for this, http://lists.bx.psu.edu/pipermail/galaxy-dev/2013-April/014366.html
http://testtoolshed.g2.bx.psu.edu/view/peterjc/seq_rename Limitation of test framework, issue raised, http://lists.bx.psu.edu/pipermail/galaxy-dev/2013-May/014701.html
http://testtoolshed.g2.bx.psu.edu/view/peterjc/venn_list Missing an R/Bioconductor dependency, best practise for handling this has yet to be agreed.
On Wed, May 22, 2013 at 6:47 PM, Dave Bouvier dave@bx.psu.edu wrote:
Peter,
I have implemented a system that will explicitly exclude a defined list of repositories from the testing framework, so your repositories' tests should now be running.
--Dave B.
Looks good - and now I have some more errors to sort out :)
Thank you,
Peter
On Wed, May 22, 2013 at 8:05 PM, Peter Cock p.j.a.cock@googlemail.com wrote:
On Wed, May 22, 2013 at 6:47 PM, Dave Bouvier dave@bx.psu.edu wrote:
Peter,
I have implemented a system that will explicitly exclude a defined list of repositories from the testing framework, so your repositories' tests should now be running.
--Dave B.
Looks good - and now I have some more errors to sort out :)
Hi Dave,
I'm guessing you need to black-list a few more repositories as I'm again seeing missing test results, e.g.
------------------
On the test Tool Shed page for this revision I see nothing at all about the tests for clinod:
http://testtoolshed.g2.bx.psu.edu/view/peterjc/clinod/04aed2f80910 Functional test results for changeset revision 04aed2f80910 of clinod 0 tests passed, 0 tests failed, 0 tests missing test components.
The RSS entry is cryptic - either tests didn't run at all (due to something timing out elsewhere in the test suite), or there was an installation error perhaps?
-----------------
This one is different - there is a minimal test result output but with no test results or install failures shown: http://testtoolshed.g2.bx.psu.edu/view/peterjc/blastxml_to_top_descr
<quote>
Automated tool test results
Tool test results
Automated test environment
Time tested: ~ 13 hours ago
System: Linux 3.0.0-1-amd64
Architecture: x86_64
Python version: 2.7.2+
Galaxy revision:
Galaxy database version:
Tool shed revision: 9872:31714646a7b4
Tool shed database version: 19
Tool shed mercurial version: 2.2.3
</quote>
Regards,
Peter
Hi again Dave,
Have you worked out any possible causes for missing test results? I do suspect some sort of caching issue is to blame...
Here is the current list of my failing repositories from the Test Tool Shed,
http://testtoolshed.g2.bx.psu.edu/view/peterjc/blast2go/f3a61c2cf309 - can see test failure, good.
http://testtoolshed.g2.bx.psu.edu/view/peterjc/effectivet3/392279f2e120 - shows one passing test (which is odd, where is the failure?)
http://testtoolshed.g2.bx.psu.edu/view/peterjc/get_orfs_or_cdss/26ce6e046040 - no test results visible
http://testtoolshed.g2.bx.psu.edu/view/peterjc/mira_assembler/da604cc07658 - can see test failure, good.
http://testtoolshed.g2.bx.psu.edu/view/peterjc/nlstradamus/2be36fa7565e - some test results visible, but no failures
http://testtoolshed.g2.bx.psu.edu/view/peterjc/seq_rename/44b2e489e26f - no test results visible
http://testtoolshed.g2.bx.psu.edu/view/peterjc/tmhmm_and_signalp/4cd848c5590... - no test results visible (strange, should be skipping the tests, so why is this listed as failing?)
http://testtoolshed.g2.bx.psu.edu/view/peterjc/venn_list/51fe47a5a803 - can see test failure, good.
Note that for get_orfs_or_cdss and seq_rename, I am trying to use an automatically installed dependency on Biopython - so perhaps there is a silent dependency failure which means the tests are not being run?
Regards,
Peter
Peter,
It turns out that the automated functional testing framework was not actually running. I suggest having a look in the morning, and if there are still any missing test results, I'll get them resolved then.
--Dave B.
On 6/25/13 02:10:10.000, Peter Cock wrote:
Hi again Dave,
Have you worked out any possible causes for missing test results? I do suspect some sort of caching issue is to blame...
Here is the current list of my failing repositories from the Test Tool Shed,
http://testtoolshed.g2.bx.psu.edu/view/peterjc/blast2go/f3a61c2cf309
- can see test failure, good.
http://testtoolshed.g2.bx.psu.edu/view/peterjc/effectivet3/392279f2e120
- shows one passing test (which is odd, where is the failure?)
http://testtoolshed.g2.bx.psu.edu/view/peterjc/get_orfs_or_cdss/26ce6e046040
- no test results visible
http://testtoolshed.g2.bx.psu.edu/view/peterjc/mira_assembler/da604cc07658
- can see test failure, good.
http://testtoolshed.g2.bx.psu.edu/view/peterjc/nlstradamus/2be36fa7565e
- some test results visible, but no failures
http://testtoolshed.g2.bx.psu.edu/view/peterjc/seq_rename/44b2e489e26f
- no test results visible
http://testtoolshed.g2.bx.psu.edu/view/peterjc/tmhmm_and_signalp/4cd848c5590...
- no test results visible (strange, should be skipping the tests, so why is this listed as failing?)
http://testtoolshed.g2.bx.psu.edu/view/peterjc/venn_list/51fe47a5a803
- can see test failure, good.
Note that for get_orfs_or_cdss and seq_rename, I am trying to use an automatically installed dependency on Biopython - so perhaps there is a silent dependency failure which means the tests are not being run?
Regards,
Peter
On Tue, Jun 25, 2013 at 9:43 PM, Dave Bouvier dave@bx.psu.edu wrote:
Peter,
It turns out that the automated functional testing framework was not actually running. I suggest having a look in the morning, and if there are still any missing test results, I'll get them resolved then.
--Dave B.
Thanks Dave,
That has made a difference but there are still a couple of issues here - at least in my current browser (Safari):
Here is the current list of my failing repositories from the Test Tool Shed,
http://testtoolshed.g2.bx.psu.edu/view/peterjc/blast2go/f3a61c2cf309
- can see test failure, good.
No change, good.
http://testtoolshed.g2.bx.psu.edu/view/peterjc/effectivet3/392279f2e120
- shows one passing test (which is odd, where is the failure?)
Improved, says there are test failures, but the arrow section fails to expand with any details. Could this just be a slow connection issue?
http://testtoolshed.g2.bx.psu.edu/view/peterjc/get_orfs_or_cdss/26ce6e046040
- no test results visible
Tests have passed now :)
http://testtoolshed.g2.bx.psu.edu/view/peterjc/mira_assembler/da604cc07658
- can see test failure, good.
No change, good.
http://testtoolshed.g2.bx.psu.edu/view/peterjc/nlstradamus/2be36fa7565e
- some test results visible, but no failures
No change, bad. As above, the arrow/section for the failing test does not expand.
http://testtoolshed.g2.bx.psu.edu/view/peterjc/seq_rename/44b2e489e26f
- no test results visible
Can see test failures, good.
http://testtoolshed.g2.bx.psu.edu/view/peterjc/tmhmm_and_signalp/4cd848c5590...
- no test results visible (strange, should be skipping the tests, so why is this listed as failing?)
No change, bad.
http://testtoolshed.g2.bx.psu.edu/view/peterjc/venn_list/51fe47a5a803
- can see test failure, good.
No change, good.
Regards,
Peter
Peter,
It looks like the failed tests container is being populated, but there is some odd behavior with displaying it. As soon as we've tracked down the issue, I'll update you.
--Dave B.
On 6/26/13 02:00:30.000, Peter Cock wrote:
On Tue, Jun 25, 2013 at 9:43 PM, Dave Bouvier dave@bx.psu.edu wrote:
Peter,
It turns out that the automated functional testing framework was not actually running. I suggest having a look in the morning, and if there are still any missing test results, I'll get them resolved then.
--Dave B.
Thanks Dave,
That has made a difference but there are still a couple of issues here - at least in my current browser (Safari):
Here is the current list of my failing repositories from the Test Tool Shed,
http://testtoolshed.g2.bx.psu.edu/view/peterjc/blast2go/f3a61c2cf309
- can see test failure, good.
No change, good.
http://testtoolshed.g2.bx.psu.edu/view/peterjc/effectivet3/392279f2e120
- shows one passing test (which is odd, where is the failure?)
Improved, says there are test failures, but the arrow section fails to expand with any details. Could this just be a slow connection issue?
http://testtoolshed.g2.bx.psu.edu/view/peterjc/get_orfs_or_cdss/26ce6e046040
- no test results visible
Tests have passed now :)
http://testtoolshed.g2.bx.psu.edu/view/peterjc/mira_assembler/da604cc07658
- can see test failure, good.
No change, good.
http://testtoolshed.g2.bx.psu.edu/view/peterjc/nlstradamus/2be36fa7565e
- some test results visible, but no failures
No change, bad. As above, the arrow/section for the failing test does not expand.
http://testtoolshed.g2.bx.psu.edu/view/peterjc/seq_rename/44b2e489e26f
- no test results visible
Can see test failures, good.
http://testtoolshed.g2.bx.psu.edu/view/peterjc/tmhmm_and_signalp/4cd848c5590...
- no test results visible (strange, should be skipping the tests, so why is this listed as failing?)
No change, bad.
http://testtoolshed.g2.bx.psu.edu/view/peterjc/venn_list/51fe47a5a803
- can see test failure, good.
No change, good.
Regards,
Peter
Peter,
The empty failed tests folder issue has been resolved in 10171:eaa5112fefcd, and the test tool shed has been updated to that revision.
--Dave B.
On 6/26/13 11:34:38.000, Dave Bouvier wrote:
Peter,
It looks like the failed tests container is being populated, but there is some odd behavior with displaying it. As soon as we've tracked down the issue, I'll update you.
--Dave B.
On 6/26/13 02:00:30.000, Peter Cock wrote:
On Tue, Jun 25, 2013 at 9:43 PM, Dave Bouvier dave@bx.psu.edu wrote:
Peter,
It turns out that the automated functional testing framework was not actually running. I suggest having a look in the morning, and if there are still any missing test results, I'll get them resolved then.
--Dave B.
Thanks Dave,
That has made a difference but there are still a couple of issues here - at least in my current browser (Safari):
Here is the current list of my failing repositories from the Test Tool Shed,
http://testtoolshed.g2.bx.psu.edu/view/peterjc/blast2go/f3a61c2cf309
- can see test failure, good.
No change, good.
http://testtoolshed.g2.bx.psu.edu/view/peterjc/effectivet3/392279f2e120
- shows one passing test (which is odd, where is the failure?)
Improved, says there are test failures, but the arrow section fails to expand with any details. Could this just be a slow connection issue?
http://testtoolshed.g2.bx.psu.edu/view/peterjc/get_orfs_or_cdss/26ce6e046040
- no test results visible
Tests have passed now :)
http://testtoolshed.g2.bx.psu.edu/view/peterjc/mira_assembler/da604cc07658
- can see test failure, good.
No change, good.
http://testtoolshed.g2.bx.psu.edu/view/peterjc/nlstradamus/2be36fa7565e
- some test results visible, but no failures
No change, bad. As above, the arrow/section for the failing test does not expand.
http://testtoolshed.g2.bx.psu.edu/view/peterjc/seq_rename/44b2e489e26f
- no test results visible
Can see test failures, good.
http://testtoolshed.g2.bx.psu.edu/view/peterjc/tmhmm_and_signalp/4cd848c5590...
- no test results visible (strange, should be skipping the tests, so why is this listed as failing?)
No change, bad.
http://testtoolshed.g2.bx.psu.edu/view/peterjc/venn_list/51fe47a5a803
- can see test failure, good.
No change, good.
Regards,
Peter
On Wed, Jun 26, 2013 at 10:01 PM, Dave Bouvier dave@bx.psu.edu wrote:
Peter,
The empty failed tests folder issue has been resolved in 10171:eaa5112fefcd, and the test tool shed has been updated to that revision.
--Dave B.
Great - currently all the missing test failures have been resolved :) The bad news is there seems to be a new sniffer bug.
http://testtoolshed.g2.bx.psu.edu/view/peterjc/blast2go/f3a61c2cf309 Can see test failures (tool configuration problem).
http://testtoolshed.g2.bx.psu.edu/view/peterjc/effectivet3/392279f2e120 One test passes, one test fails. The failure is a bit odd and is likely a new sniffer-related bug.
http://testtoolshed.g2.bx.psu.edu/view/peterjc/get_orfs_or_cdss/26ce6e046040 Tests pass.
http://testtoolshed.g2.bx.psu.edu/view/peterjc/mira_assembler/da604cc07658 Can see test failure, good.
http://testtoolshed.g2.bx.psu.edu/view/peterjc/nlstradamus/2be36fa7565e One pass, one failure - same new failure as effectiveT3 above.
http://testtoolshed.g2.bx.psu.edu/view/peterjc/seq_rename/44b2e489e26f Can see a failure, also seems to be sniffer related (but using a different datatype).
http://testtoolshed.g2.bx.psu.edu/view/peterjc/tmhmm_and_signalp/4cd848c5590... As before, no test results as these are marked as skip - yet this is still listed under "Latest revision: failing tool tests". The filter for this needs to exclude repositories marked as skip testing.
http://testtoolshed.g2.bx.psu.edu/view/peterjc/venn_list/51fe47a5a803 Can see test failure, good (missing undeclared dependency)
Thanks,
Peter
I'm not seeing anything other than the test environment report in the failing tests report for http://testtoolshed.g2.bx.psu.edu/view/fubar/htseq_bams_to_count_matrix - if you have time to take a look, Dave? I'm having a hard time figuring out what I've done wrong - it builds, installs and tests fine on my test machine, but I've probably forgotten some dependency that I already have installed on my laptop...
The result for http://testtoolshed.g2.bx.psu.edu/view/fubar/edger_test makes me think that one or more of the test input files is missing, but it's hard to tell. I wonder if, overall, we need a more complete dump of the output data for failing tests to make the task of tracking them down easier?
On Thu, Jun 27, 2013 at 8:58 AM, Peter Cock p.j.a.cock@googlemail.com wrote:
On Wed, Jun 26, 2013 at 10:01 PM, Dave Bouvier dave@bx.psu.edu wrote:
Peter,
The empty failed tests folder issue has been resolved in
10171:eaa5112fefcd,
and the test tool shed has been updated to that revision.
--Dave B.
Great - currently all the missing test failures have been resolved :) The bad news is there seems to be a new sniffer bug.
http://testtoolshed.g2.bx.psu.edu/view/peterjc/blast2go/f3a61c2cf309 Can see test failures (tool configuration problem).
http://testtoolshed.g2.bx.psu.edu/view/peterjc/effectivet3/392279f2e120 One test passes, one test fails. The failure is a bit odd and is likely a new sniffer-related bug.
http://testtoolshed.g2.bx.psu.edu/view/peterjc/get_orfs_or_cdss/26ce6e046040
Tests pass.
http://testtoolshed.g2.bx.psu.edu/view/peterjc/mira_assembler/da604cc07658 Can see test failure, good.
http://testtoolshed.g2.bx.psu.edu/view/peterjc/nlstradamus/2be36fa7565e One pass, one failure - same new failure as effectiveT3 above.
http://testtoolshed.g2.bx.psu.edu/view/peterjc/seq_rename/44b2e489e26f Can see a failure, also seems to be sniffer related (but using a different datatype).
http://testtoolshed.g2.bx.psu.edu/view/peterjc/tmhmm_and_signalp/4cd848c5590...
As before, no test results as these are marked as skip - yet this is still listed under "Latest revision: failing tool tests". The filter for this needs to exclude repositories marked as skip testing.
http://testtoolshed.g2.bx.psu.edu/view/peterjc/venn_list/51fe47a5a803 Can see test failure, good (missing undeclared dependency)
Thanks,
Peter
-- Ross Lazarus MBBS MPH; Head, Medical Bioinformatics, BakerIDI; Tel: +61 385321444 http://scholar.google.com/citations?hl=en&user=UCUuEM4AAAAJ
Hello all,
I'm seeing missing test results again on the Test Tool Shed, e.g. listed under "Latest revision: failing tool tests" (http://toolshed.g2.bx.psu.edu/repository/browse_my_writable_repositories_with_failing_tool_tests) I have:
http://toolshed.g2.bx.psu.edu/view/peterjc/nlstradamus
Tool test results Automated test environment *Time tested:* ~ 11 hours ago *System:* Linux 3.8.0-25-generic *Architecture:* x86_64 *Python version:* 2.7.4 *Galaxy revision:* 10265:e19301cc51b9 *Galaxy database version:* 115 *Tool shed revision:* 10098:adee6fc31991 *Tool shed database version:* 19 *Tool shed mercurial version:* 2.2.3
Tests that passed successfully *Tool id:* nlstradamus *Tool version:* nlstradamus *Test:* test_tool_000000 (functional.test_toolbox.TestForTool_toolshed.g2.bx.psu.edu/repos/peterjc/nlstradamus/nlstradamus/0.0.7)
Tests that failed
i.e. The failed test isn't shown
http://toolshed.g2.bx.psu.edu/view/peterjc/effectivet3 - same problem as above
http://toolshed.g2.bx.psu.edu/view/peterjc/tmhmm_and_signalp - should be skipping tests (as I expect it to fail)
http://toolshed.g2.bx.psu.edu/view/peterjc/seq_rename - test framework regression raised here: http://lists.bx.psu.edu/pipermail/galaxy-dev/2013-May/014701.html http://lists.bx.psu.edu/pipermail/galaxy-dev/2013-July/015710.html
Thanks,
Peter
Peter,
I've created a Trello card where you can track this issue: https://trello.com/c/wHXz9j0v/1018-toolshed-test-result-display-issue
Most of these look like display issues; the test results are correctly recorded in the database. I hope to have that and the seq_rename and tmhmm_and_signalp issues resolved shortly.
--Dave B.
On 7/29/13 09:50:41.000, Peter Cock wrote:
Hello all,
I'm seeing missing test results again on the Test Tool Shed, e.g. listed under "Latest revision: failing tool tests http://toolshed.g2.bx.psu.edu/repository/browse_my_writable_repositories_with_failing_tool_tests" I have:
http://toolshed.g2.bx.psu.edu/view/peterjc/nlstradamus
Tool test results Automated test environment *Time tested:* ~ 11 hours ago *System:* Linux 3.8.0-25-generic *Architecture:* x86_64 *Python version:* 2.7.4 *Galaxy revision:* 10265:e19301cc51b9 *Galaxy database version:* 115 *Tool shed revision:* 10098:adee6fc31991 *Tool shed database version:* 19 *Tool shed mercurial version:* 2.2.3
Tests that passed successfully *Tool id:* nlstradamus *Tool version:* nlstradamus *Test:* test_tool_000000 (functional.test_toolbox.TestForTool_toolshed.g2.bx.psu.edu/repos/peterjc/nlstradamus/nlstradamus/0.0.7)
Tests that failed
i.e. The failed test isn't shown
http://toolshed.g2.bx.psu.edu/view/peterjc/effectivet3 - same problem as above
http://toolshed.g2.bx.psu.edu/view/peterjc/tmhmm_and_signalp - should be skipping tests (as I expect it to fail)
http://toolshed.g2.bx.psu.edu/view/peterjc/seq_rename - test framework regression raised here: http://lists.bx.psu.edu/pipermail/galaxy-dev/2013-May/014701.html http://lists.bx.psu.edu/pipermail/galaxy-dev/2013-July/015710.html
Thanks,
Peter
On Mon, Jul 29, 2013 at 3:10 PM, Dave Bouvier dave@bx.psu.edu wrote:
Peter,
I've created a Trello card where you can track this issue: https://trello.com/c/wHXz9j0v/1018-toolshed-test-result-display-issue
Thanks Dave.
Most of these look like display issues, the test results are correctly recorded in the database. I hope to have that and the seq_rename and tmhmm_and_signalp issues resolved shortly.
--Dave B.
Oh good - that sounds less troublesome than a problem running the tests themselves.
Cheers,
Peter
Peter,
The issue with seq_rename incorrectly failing functional tests has been resolved in 10266:fe04978dadac, and the test results will be corrected the next time the automated testing framework runs. I'm still looking into the other issues you reported.
--Dave B.
On 7/29/13 10:50:58.000, Peter Cock wrote:
On Mon, Jul 29, 2013 at 3:10 PM, Dave Bouvier dave@bx.psu.edu wrote:
Peter,
I've created a Trello card where you can track this issue: https://trello.com/c/wHXz9j0v/1018-toolshed-test-result-display-issue
Thanks Dave.
Most of these look like display issues, the test results are correctly recorded in the database. I hope to have that and the seq_rename and tmhmm_and_signalp issues resolved shortly.
--Dave B.
Oh good - that sounds less troublesome than a problem running the tests themselves.
Cheers,
Peter
Peter,
From what I've been able to determine, the tmhmm_and_signalp test results predate the introduction of the skip tool tests option, and the script to check repositories for the presence of functional tests and test data has been updating the last test date every day, hence the misleading entry for time last tested. I've committed a fix to that script in 10267:5ad8db7ff748, which will be included in the next release, at which point the main tool shed should correctly display no test results for that repository and any others you have specified not to be tested.
--Dave B.
On 7/29/13 12:03:23.000, Dave Bouvier wrote:
Peter,
The issue with seq_rename incorrectly failing functional tests has been resolved in 10266:fe04978dadac, and the test results will be corrected the next time the automated testing framework runs. I'm still looking into the other issues you reported.
--Dave B.
On 7/29/13 10:50:58.000, Peter Cock wrote:
On Mon, Jul 29, 2013 at 3:10 PM, Dave Bouvier dave@bx.psu.edu wrote:
Peter,
I've created a Trello card where you can track this issue: https://trello.com/c/wHXz9j0v/1018-toolshed-test-result-display-issue
Thanks Dave.
Most of these look like display issues, the test results are correctly recorded in the database. I hope to have that and the seq_rename and tmhmm_and_signalp issues resolved shortly.
--Dave B.
Oh good - that sounds less troublesome than a problem running the tests themselves.
Cheers,
Peter
On Monday, July 29, 2013, Dave Bouvier wrote:
Peter,
From what I've been able to determine, the tmhmm_and_signalp test results predate the introduction of the skip tool tests option, and the script to check repositories for the presence of functional tests and test data has been updating the last test date every day, hence the misleading entry for time last tested. I've committed a fix to that script in 10267:5ad8db7ff748, which will be included in the next release, at which point the main tool shed should correctly display no test results for that repository and any others you have specified not to be tested.
--Dave B.
Thanks - that probably makes sense for future repositories with similar issues; I suspect people are going to tick the skip box after getting unresolvable test failures.
Peter
On Mon, Jul 29, 2013 at 5:03 PM, Dave Bouvier dave@bx.psu.edu wrote:
Peter,
The issue with seq_rename incorrectly failing functional tests has been resolved in 10266:fe04978dadac, and the test results will be corrected the next time the automated testing framework runs. I'm still looking into the other issues you reported.
--Dave B.
Hi Dave,
The seq_rename test results (apparently) from last night are still failing, http://testtoolshed.g2.bx.psu.edu/view/peterjc/seq_rename/e440681dd17c
AssertionError: Attempting to set field 'new_column' to value '['2']' in form 'tool_form' threw exception: cannot find value/label "2" in list control control: <SelectControl(new_column=[1])> If the above control is a DataToolparameter whose data type class does not include a sniff() method, make sure to include a proper 'ftype' attribute to the tag for the control within the <test> tag set.
Thanks,
Peter
On Thu, Aug 1, 2013 at 11:43 AM, Peter Cock p.j.a.cock@googlemail.com wrote:
On Mon, Jul 29, 2013 at 5:03 PM, Dave Bouvier dave@bx.psu.edu wrote:
Peter,
The issue with seq_rename incorrectly failing functional tests has been resolved in 10266:fe04978dadac, and the test results will be corrected the next time the automated testing framework runs. I'm still looking into the other issues you reported.
--Dave B.
Hi Dave,
The seq_rename test results (apparently) from last night are still failing, http://testtoolshed.g2.bx.psu.edu/view/peterjc/seq_rename/e440681dd17c
AssertionError: Attempting to set field 'new_column' to value '['2']' in form 'tool_form' threw exception: cannot find value/label "2" in list control control: <SelectControl(new_column=[1])> If the above control is a DataToolparameter whose data type class does not include a sniff() method, make sure to include a proper 'ftype' attribute to the tag for the control within the <test> tag set.
Same issue here?,
http://testtoolshed.g2.bx.psu.edu/view/peterjc/nlstradamus/2be36fa7565e
AssertionError: Attempting to set field 'fasta_file' to value '['empty.fasta']' in form 'tool_form' threw exception: cannot find value/label "empty.fasta" in list control control: <SelectControl(fasta_file=[])> If the above control is a DataToolparameter whose data type class does not include a sniff() method, make sure to include a proper 'ftype' attribute to the tag for the control within the <test> tag set.
Peter
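(Side note on the error text above: the framework's suggestion is to give the test input an explicit ftype so it does not have to sniff the uploaded file's datatype. Whether or not that is the real root cause here - it looks more like a framework regression - a minimal sketch of such a test block is shown below; the parameter, file and output names are made up for illustration.)

    <tests>
        <test>
            <!-- Explicit ftype avoids relying on datatype sniffing for the upload -->
            <param name="fasta_file" value="example.fasta" ftype="fasta" />
            <output name="out_file" file="expected_output.tabular" />
        </test>
    </tests>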
On Mon, Jul 29, 2013 at 3:10 PM, Dave Bouvier dave@bx.psu.edu wrote:
Peter,
I've created a Trello card where you can track this issue: https://trello.com/c/wHXz9j0v/1018-toolshed-test-result-display-issue
Most of these look like display issues, the test results are correctly recorded in the database. I hope to have that and the seq_rename and tmhmm_and_signalp issues resolved shortly.
--Dave B.
Hi Dave,
The good news is that it does seem some of the missing test results I had on Monday have been resolved. However, there are still others showing with no test results :(
No test results here (I expect some to pass and some to fail):
http://testtoolshed.g2.bx.psu.edu/view/peterjc/ncbi_blast_plus/688f3fb09a6a
No test results here (I expect a failure due to a missing R dependency):
http://testtoolshed.g2.bx.psu.edu/view/peterjc/rxlr_venn_workflow/41d8f46e03...
No test results here (I expect these to pass):
http://testtoolshed.g2.bx.psu.edu/view/peterjc/blastxml_to_top_descr/bc4e7c0... http://testtoolshed.g2.bx.psu.edu/view/peterjc/clinod/deb6cb2db22b
Thanks,
Peter
On Thu, Aug 1, 2013 at 12:11 PM, Peter Cock p.j.a.cock@googlemail.com wrote:
Hi Dave,
The good news is that it does seem some of the missing test results I had on Monday have been resolved. However, there are still others showing with no test results :(
No test results here (I expect some to pass and some to fail):
http://testtoolshed.g2.bx.psu.edu/view/peterjc/ncbi_blast_plus/688f3fb09a6a
I've been updating the BLAST+ repository - currently it does show some failures due to missing tests, but there seems to be test information missing here - and also I'm expecting the installation to fail with BOOST issues: http://testtoolshed.g2.bx.psu.edu/view/peterjc/ncbi_blast_plus/5693417851ec
(The other problems from 1 August look OK now)
Thanks,
Peter
Peter,
The automated build prior to this email was executed incorrectly. I've resolved that status, and the current test results look correct.
--Dave B.
On 8/8/13 10:30:37.000, Peter Cock wrote:
On Thu, Aug 1, 2013 at 12:11 PM, Peter Cock p.j.a.cock@googlemail.com wrote:
Hi Dave,
The good news is that it does seem some of the missing test results I had on Monday have been resolved. However, there are still others showing with no test results :(
No test results here (I expect some to pass and some to fail):
http://testtoolshed.g2.bx.psu.edu/view/peterjc/ncbi_blast_plus/688f3fb09a6a
I've been updating the BLAST+ repository - currently it does show some failures due to missing tests, but there seems to be test information missing here - and also I'm expecting the installation to fail with BOOST issues: http://testtoolshed.g2.bx.psu.edu/view/peterjc/ncbi_blast_plus/5693417851ec
(The other problems from 1 August look OK now)
Thanks,
Peter
Hi Dave,
Progress: I can now see the test failures which suggest that the underlying binaries did not get installed properly. I guess this either means:
(a) the dependency install failed but the Tool Shed doesn't say this (my hunch as recently the NCBI BLAST+ tools had stopped compiling on the Test Tool Shed due to a BOOST system library change).
Or,
(b) the dependency install didn't raise any errors but failed in some way such as not setting the path (unlikely, as the tool_dependencies.xml content has not changed - it has just been moved to a dependency package repository)
Or,
(c) there is something amiss with the dependency chain meaning that the BLAST dependencies are not being added to the PATH. This looks OK but it is the only thing I have changed recently.
Or some other mode of failure I've not guessed at yet?
Is this something you can diagnose, or do I need to press ahead with a local Tool Shed to help me work out what is amiss?
Thanks,
Peter
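(For reference, scenarios (b) and (c) above hinge on the package's tool_dependencies.xml exporting PATH and on the tool repository pointing at that package repository. A rough sketch of both pieces follows; the URL, build commands, repository name and owner are illustrative placeholders, not the actual BLAST+ package definition.)

In the package repository, tool_dependencies.xml installs the binaries and prepends them to PATH:

    <?xml version="1.0"?>
    <tool_dependency>
        <package name="blast+" version="2.2.26+">
            <install version="1.0">
                <actions>
                    <!-- Placeholder source URL and build commands -->
                    <action type="download_by_url">http://example.org/ncbi-blast-2.2.26+-src.tar.gz</action>
                    <action type="shell_command">./configure &amp;&amp; make</action>
                    <action type="move_directory_files">
                        <source_directory>bin</source_directory>
                        <destination_directory>$INSTALL_DIR/bin</destination_directory>
                    </action>
                    <action type="set_environment">
                        <!-- If this step is skipped or fails, the tools see "command not found" -->
                        <environment_variable name="PATH" action="prepend_to">$INSTALL_DIR/bin</environment_variable>
                    </action>
                </actions>
            </install>
        </package>
    </tool_dependency>

In the tool repository, tool_dependencies.xml then only references that package repository instead of repeating the install recipe:

    <?xml version="1.0"?>
    <tool_dependency>
        <package name="blast+" version="2.2.26+">
            <repository name="package_blast_plus_2_2_26" owner="example_owner" />
        </package>
    </tool_dependency>

If either the set_environment action or the cross-repository reference is not resolved during the automated install, the symptoms would look like (b) or (c).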
On Fri, Aug 9, 2013 at 1:58 PM, Dave Bouvier dave@bx.psu.edu wrote:
Peter,
The automated build prior to this email was executed incorrectly. I've resolved that status, and the current test results look correct.
--Dave B.
On 8/8/13 10:30:37.000, Peter Cock wrote:
I've been updating the BLAST+ repository - currently it does show some failures due to missing tests, but there seems to be test information missing here - and also I'm expecting the installation to fail with BOOST issues:
http://testtoolshed.g2.bx.psu.edu/view/peterjc/ncbi_blast_plus/5693417851ec
(The other problems from 1 August look OK now)
Thanks,
Peter
Hello again,
I've not had any missing test results for a while, on the main or test Tool Shed, so this was a bit of a surprise:
http://toolshed.g2.bx.psu.edu/view/peterjc/blastxml_to_top_descr/09a68a90d55...
This is listed under "Latest revision: failing tool tests", but I see no test results at all :(
Regards,
Peter
Peter,
Thank you for reporting this. I've been able to determine that there was an installation error that should have been recorded, but was not. I hope to have a fix committed shortly.
--Dave B.
On 09/23/2013 05:59 AM, Peter Cock wrote:
Hello again,
I've not had any missing test results for a while, on the main or test Tool Shed, so this was a bit of a surprise:
http://toolshed.g2.bx.psu.edu/view/peterjc/blastxml_to_top_descr/09a68a90d55...
This is listed under "Latest revision: failing tool tests", but I see no test results at all :(
Regards,
Peter
Thanks Dave,
Looks like the same thing happened again last night, but as this is on the Main Tool Shed your fix wouldn't be included anyway: http://toolshed.g2.bx.psu.edu/view/peterjc/blastxml_to_top_descr/09a68a90d55...
Any clues about the installation failure would be useful, this is working nicely on the Test Tool Shed: http://testtoolshed.g2.bx.psu.edu/view/peterjc/blastxml_to_top_descr/d8de5b6...
Additionally, another repository I updated yesterday is missing test results and this time there are no external dependencies: http://testtoolshed.g2.bx.psu.edu/view/peterjc/predictnls
Regards,
Peter
On Mon, Sep 23, 2013 at 5:27 PM, Dave Bouvier dave@bx.psu.edu wrote:
Peter,
Thank you for reporting this. I've been able to determine that there was an installation error that should have been recorded, but was not. I hope to have a fix committed shortly.
--Dave B.
On 09/23/2013 05:59 AM, Peter Cock wrote:
Hello again,
I've not had any missing test results for a while, on the main or test Tool Shed, so this was a bit of a surprise:
http://toolshed.g2.bx.psu.edu/view/peterjc/blastxml_to_top_descr/09a68a90d55...
This is listed under "Latest revision: failing tool tests", but I see no test results at all :(
Regards,
Peter
Peter,
The one on the main tool shed is due to an issue Greg and I are in the process of resolving. As soon as we've tested the fix, I'll schedule a re-test of that repository and update you on the status.
--Dave B.
On 09/24/2013 05:53 AM, Peter Cock wrote:
Thanks Dave,
Looks like the same thing happened again last night, but as this is on the Main Tool Shed your fix wouldn't be included anyway: http://toolshed.g2.bx.psu.edu/view/peterjc/blastxml_to_top_descr/09a68a90d55...
Any clues about the installation failure would be useful, this is working nicely on the Test Tool Shed: http://testtoolshed.g2.bx.psu.edu/view/peterjc/blastxml_to_top_descr/d8de5b6...
Additionally, another repository I updated yesterday is missing test results and this time there are no external dependencies: http://testtoolshed.g2.bx.psu.edu/view/peterjc/predictnls
Regards,
Peter
On Mon, Sep 23, 2013 at 5:27 PM, Dave Bouvier dave@bx.psu.edu wrote:
Peter,
Thank you for reporting this. I've been able to determine that there was an installation error that should have been recorded, but was not. I hope to have a fix committed shortly.
--Dave B.
On 09/23/2013 05:59 AM, Peter Cock wrote:
Hello again,
I've not had any missing test results for a while, on the main or test Tool Shed, so this was a bit of a surprise:
http://toolshed.g2.bx.psu.edu/view/peterjc/blastxml_to_top_descr/09a68a90d55...
This is listed under "Latest revision: failing tool tests", but I see no test results at all :(
Regards,
Peter
On Tue, Sep 24, 2013 at 1:44 PM, Dave Bouvier dave@bx.psu.edu wrote:
Peter,
The one on the main tool shed is due to an issue Greg and I are in the process of resolving. As soon as we've tested the fix, I'll schedule a re-test of that repository and update you on the status.
--Dave B.
Looks to be spreading, or linked to the blast_datatypes dependency? Listed under "Latest revision: failing tool tests"
http://toolshed.g2.bx.psu.edu/view/peterjc/blast2go/0ac3ef59ea93 - missing test results
http://toolshed.g2.bx.psu.edu/view/peterjc/blastxml_to_top_descr/09a68a90d55... - missing test results
http://toolshed.g2.bx.psu.edu/view/peterjc/seq_primer_clip/8c02a91a8680 - failed as expected (missing Biopython dependency, see separate thread about the ATLAS dependency problem)
http://toolshed.g2.bx.psu.edu/view/peterjc/venn_list/6aae6bc0802d - failing as expected (missing R dependency)
And over on the Test Tool Shed (which was offline but recently came back up), listed under "Latest revision: failing tool tests":
http://testtoolshed.g2.bx.psu.edu/view/peterjc/blast2go/8c462f7b2c8d - missing test results
http://testtoolshed.g2.bx.psu.edu/view/peterjc/predictnls/aabc2852571e - missing test results
Regards,
Peter
+1
I think it's been a problem with some of my repos for a while on the Test Tool Shed, so possibly not related to blast_datatypes - I sent this privately about 2 weeks ago:
Ross ross.lazarus@gmail.com Sep 13 (13 days ago)
to Dave, Greg
Hey Dave - any thoughts on how to fix http://testtoolshed.g2.bx.psu.edu/view/fubar/differential_count_models so the test results appear? AFAIK the test works but there's no test results section in the manage repository sections I can see.
On Thu, Sep 26, 2013 at 8:40 PM, Peter Cock p.j.a.cock@googlemail.com wrote:
On Tue, Sep 24, 2013 at 1:44 PM, Dave Bouvier dave@bx.psu.edu wrote:
Peter,
The one on the main tool shed is due to an issue Greg and I are in the process of resolving. As soon as we've tested the fix, I'll schedule a re-test of that repository and update you on the status.
--Dave B.
Looks to be spreading, or linked to the blast_datatypes dependency? Listed under "Latest revision: failing tool tests"
http://toolshed.g2.bx.psu.edu/view/peterjc/blast2go/0ac3ef59ea93
- missing test results
http://toolshed.g2.bx.psu.edu/view/peterjc/blastxml_to_top_descr/09a68a90d55...
- missing test results
http://toolshed.g2.bx.psu.edu/view/peterjc/seq_primer_clip/8c02a91a8680
- failed as expected (missing Biopython dependency, see separate thread about the ATLAS dependency problem)
http://toolshed.g2.bx.psu.edu/view/peterjc/venn_list/6aae6bc0802d
- failing as expected (missing R dependency)
And over on the Test Tool Shed (which was offline but recently came back up), listed under "Latest revision: failing tool tests":
http://testtoolshed.g2.bx.psu.edu/view/peterjc/blast2go/8c462f7b2c8d
- missing test results
http://testtoolshed.g2.bx.psu.edu/view/peterjc/predictnls/aabc2852571e
- missing test results
Regards,
Peter
There seem to be issues with some repositories which we'll need to track down. However, I fixed some issues yesterday with styles in the tool shed that kept the tool tests from displaying, and now test results are displaying, at least for this repository:
Repository revision: repository tip. Select a revision to inspect and download versions of Galaxy utilities from this repository.
Repository 'differential_count_models'
Sharable link to this repository: http://testtoolshed.g2.bx.psu.edu/view/fubar/differential_count_models
Clone this repository: hg clone http://testtoolshed.g2.bx.psu.edu/repos/fubar/differential_count_models
Name: differential_count_models
Repository type: Unrestricted
Synopsis: edgeR, DESeq2 and voom in one tool
Detailed description: edgeR, DESeq2 and voom for replicate RNA seq data.
1) TURN OFF HTML SANITIZATION in universe_wsgi.xml or your outputs will lack proper style sheets and look awful
2) This package's dependencies include R and Bioconductor, so expect your proxy to time out during the installation - be patient - it takes 10 to 30 minutes to download and compile everything, depending on the velocity of your machine and the interwebs - about 20 minutes on my laptop.
3) Make sure you have a $TMP set in your environment or ghostscript gets unhappy - you'll see a warning every time you run this tool if you've forgotten.
Revision: 31:5b6af671d985
Owner: fubar
Times downloaded: 15
Dependencies of this repository
Repository dependencies - installation of these additional repositories is required
Tool dependencies - repository tools require handling of these dependencies
Contents of this repository
Valid tools - click the name to preview the tool and use the pop-up menu to inspect all metadata
Automated tool test results
Tool test results Automated test environment Tests that failed Tool id: rgDifferentialCount Tool version: rgDifferentialCount Test: test_tool_000000 (functional.test_toolbox.TestForTool_testtoolshed.g2.bx.psu.edu/repos/fubar/differential_count_models/rgDifferentialCount/0.21) Stderr: Traceback: Traceback (most recent call last): File "/var/opt/buildslaves/buildslave-ec2-1/buildbot-install-test-test-tool-shed-py27/build/test/functional/test_toolbox.py", line 171, in test_tool self.do_it( td, shed_tool_id=shed_tool_id ) File "/var/opt/buildslaves/buildslave-ec2-1/buildbot-install-test-test-tool-shed-py27/build/test/functional/test_toolbox.py", line 102, in do_it self.verify_dataset_correctness( outfile, hid=elem_hid, maxseconds=testdef.maxseconds, attributes=attributes, shed_tool_id=shed_tool_id ) File "/var/opt/buildslaves/buildslave-ec2-1/buildbot-install-test-test-tool-shed-py27/build/test/base/twilltestcase.py", line 855, in verify_dataset_correctness raise AssertionError( errmsg ) AssertionError: History item 4 different than expected, difference (using diff): ( /var/opt/buildslaves/buildslave-ec2-1/buildbot-install-test-test-tool-shed-py27/build/test/install_and_test_tool_shed_repositories/tmp/shed_toolsubztFQ/testtoolshed.g2.bx.psu.edu/repos/fubar/differential_count_models/5b6af671d985/differential_count_models/differential_count_models/test-data/edgeRtest1out.xls v. /var/opt/buildslaves/buildslave-ec2-1/buildbot-install-test-test-tool-shed-py27/build/test/install_and_test_tool_shed_repositories/tmp/tmpFGK1x1/tmpCzIKTPedgeRtest1out.xls ) --- local_file +++ history_data @@ -1,1142 +0,0 @@ -ID logFC AveExpr t P.Value adj.P.Val B NReads URL -Mir192 6.94888256843679 14.6763802609023 42.7229535356942 2.30119906424271e-16 2.62566813230094e-13 27.2664713266936 2325567 <a href='http://www.genecards.org/index.php?path=/Search/keyword/Mir192'>Mir192</a> -Mir208a -11.0150177152075 3.93955375669227 -23.2524066836307 1.11893807599952e-12 6.38354172357727e-10 17.2086622097974 4638 <a href='http://www.genecards.org/index.php?path=/Search/keyword/Mir208a'>Mir208a</a> -Mir122a 10.4261254701779 8.16986409392255 21.7229119192922 2.85968233611017e-12 1.08763251516723e-09 17.760171141852 90428 <a href='http://www.genecards.org/index.php?path=/Search/keyword/Mir122a'>Mir122a</a> -Mir149 -7.03046258655617 6.31608073609863 -20.8838348040628 4.91549082404237e-12 1.40214375755809e-09 17.2776088871455 6164 <a href='http://www.genecards.org/index.php?path=/Search/keyword/Mir149'>Mir149</a> -Mir208b -12.4332279840446 4.60762179736006 -19.5924575126382 1.17919871718875e-11 2.69093147262473e-09 15.6836663826186 14756 <a href='http://www.genecards.org/index.php?path=/Search/keyword/Mir208b'>Mir208b</a> -Mir10b -5.1309149063532 12.2628671946242 -18.2420234752943 3.12499057505143e-11 4.96397841614262e-09 16.2215027882858 197340 <a href='http://www.genecards.org/index.php?path=/Search/keyword/Mir10b'>Mir10b</a> -Mir143hg -2.24922058313374 16.2444825488726 -18.0824813146443 3.52173903971276e-11 4.96397841614262e-09 16.0266951625541 1407364 <a href='http://www.genecards.org/index.php?path=/Search/keyword/Mir143hg'>Mir143hg</a> -Mir143 -2.25106712131643 16.235859869169 -18.0814805993441 3.524391092512e-11 4.96397841614262e-09 16.0260836456534 1399819 <a href='http://www.genecards.org/index.php?path=/Search/keyword/Mir143'>Mir143</a> -Mir499 -11.5675289490546 3.78745976580796 -17.9420857279689 3.91549568319751e-11 4.96397841614262e-09 14.8217405828874 6527 <a 
href='http://www.genecards.org/index.php?path=/Search/keyword/Mir499'>Mir499</a> -Mir802 9.15843445824816 2.91576747878654 17.3165224121399 6.33861560587965e-11 7.23236040630868e-09 14.381577240531 1514 <a href='http://www.genecards.org/index.php?path=/Search/keyword/Mir802'>Mir802</a> -Mir3073 8.42054159318439 2.54571889776166 16.7026571721381 1.03306635740721e-10 1.03604453339228e-08 13.9858447292853 904 <a href='http://www.genecards.org/index.php?path=/Search/keyword/Mir3073'>Mir3073</a> -Mir148a 2.63821345578617 15.4435819751152 16.5481882215215 1.17118649515038e-10 1.03604453339228e-08 14.8147917664862 1002397 <a href='http://www.genecards.org/index.php?path=/Search/keyword/Mir148a'>Mir148a</a> -Mir101b 3.76572195114225 10.8508440499081 16.5385659719288 1.1804188373444e-10 1.03604453339228e-08 14.9000274171241 59019 <a href='http://www.genecards.org/index.php?path=/Search/keyword/Mir101b'>Mir101b</a> -Mir490 -8.47437764634465 3.75069567634692 -16.2596504905533 1.48481644820999e-10 1.21012540529114e-08 13.4246171016517 1741 <a href='http://www.genecards.org/index.php?path=/Search/keyword/Mir490'>Mir490</a> -Mir21 2.93853744034991 13.1642916950886 15.3754036511693 3.14833456057776e-10 2.39483315574615e-08 13.8676979022068 229120 <a href='http://www.genecards.org/index.php?path=/Search/keyword/Mir21'>Mir21</a> -Mir181c -3.74256009957124 9.62955774646065 -15.2423608550805 3.53706264458683e-10 2.52236779842098e-08 13.8104046176901 23605 <a href='http://www.genecards.org/index.php?path=/Search/keyword/Mir181c'>Mir181c</a> -Mir204 -7.68442507149438 4.77517348536933 -15.0334839919296 4.2542677795722e-10 2.85536443323052e-08 12.8224274879526 2601 <a href='http://www.genecards.org/index.php?path=/Search/keyword/Mir204'>Mir204</a> -Mir23a -3.16576837850821 8.78965917558611 -14.6311785109623 6.11068192724496e-10 3.87349337721472e-08 13.2691736804205 10118 <a href='http://www.genecards.org/index.php?path=/Search/keyword/Mir23a'>Mir23a</a> -Mir181d -3.63621106402109 6.37132182424908 -14.3170733565449 8.15750840848868e-10 4.89879847057136e-08 12.9563328312209 2139 <a href='http://www.genecards.org/index.php?path=/Search/keyword/Mir181d'>Mir181d</a> -Mir133b -6.49354876170712 1.25448620431148 -13.969968060601 1.12993427319653e-09 6.44627502858619e-08 11.9826837063041 159 <a href='http://www.genecards.org/index.php?path=/Search/keyword/Mir133b'>Mir133b</a> -Mir27a -3.10693537246128 9.92557960348829 -13.8382510839158 1.28101104196848e-09 6.96015999469543e-08 12.5130856443239 21886 <a href='http://www.genecards.org/index.php?path=/Search/keyword/Mir27a'>Mir27a</a> -Mir194-2 5.26413595786074 6.08976151689046 13.0440120203829 2.79288399641768e-09 1.44849119996026e-07 11.7157527118771 3570 <a href='http://www.genecards.org/index.php?path=/Search/keyword/Mir194-2'>Mir194-2</a> -Mir195 -3.21659545049586 7.4509349905835 -12.869478368273 3.33278798407795e-09 1.65335264775345e-07 11.5875227405737 3962 <a href='http://www.genecards.org/index.php?path=/Search/keyword/Mir195'>Mir195</a> -Mir27b -1.97637614533106 15.0957731023791 -11.75603589654 1.08219717999805e-08 5.14494575990741e-07 10.1277185662145 625308 <a href='http://www.genecards.org/index.php?path=/Search/keyword/Mir27b'>Mir27b</a> -Mir378 -3.09739319841142 7.38320489393809 -11.6841625470748 1.17137125863851e-08 5.34613842442616e-07 10.3296922348831 4075 <a href='http://www.genecards.org/index.php?path=/Search/keyword/Mir378'>Mir378</a> -Snord104 2.33737428989677 10.6109023861403 11.4956750870273 1.44448164322638e-08 6.33905213431269e-07 10.0233949189609 
33458 <a href='http://www.genecards.org/index.php?path=/Search/keyword/Snord104'>Snord104</a> -Mir1983 -5.89550024150745 0.993185099223749 -11.4458119994178 1.52754786535047e-08 6.44160462853232e-07 9.74926029381244 101 <a href='http://www.genecards.org/index.php?path=/Search/keyword/Mir1983'>Mir1983</a> -Mir322 -3.29661750880005 8.21534154356388 -11.4153616003567 1.58076187203247e-08 6.44160462853232e-07 10.0084716002011 7074 <a href='http://www.genecards.org/index.php?path=/Search/keyword/Mir322'>Mir322</a> -Mir200a 6.19156065085543 1.79813092499896 11.3221723123067 1.75622912046568e-08 6.9098531946598e-07 9.66229453831667 264 <a href='http://www.genecards.org/index.php?path=/Search/keyword/Mir200a'>Mir200a</a> -Mir215 -3.04587333807051 5.75442336214621 -11.1481336257529 2.14182153707674e-08 8.08245865886245e-07 9.75326755116029 1182 <a href='http://www.genecards.org/index.php?path=/Search/keyword/Mir215'>Mir215</a> -Dnm3os -3.36334357719079 5.86074322417943 -11.0922610835813 2.28395969947309e-08 8.08245865886245e-07 9.68949616901383 1401 <a href='http://www.genecards.org/index.php?path=/Search/keyword/Dnm3os'>Dnm3os</a> -Mir182 4.90399541739044 7.1511683493624 11.0744681203078 2.33130367310143e-08 8.08245865886245e-07 9.65884218207857 7189 <a href='http://www.genecards.org/index.php?path=/Search/keyword/Mir182'>Mir182</a> -Mir181a-2 -3.04829832099813 6.94146510070354 -11.0721276255975 2.33760855164295e-08 8.08245865886245e-07 9.64401697815694 2817 <a href='http://www.genecards.org/index.php?path=/Search/keyword/Mir181a-2'>Mir181a-2</a> -Mir1948 7.19552540631629 4.5513492833967 11.0054920626234 2.52493600829575e-08 8.47338819254543e-07 9.34179361673467 2404 <a href='http://www.genecards.org/index.php?path=/Search/keyword/Mir1948'>Mir1948</a> -Mir214 -3.28087400431203 5.47844506177362 -10.7682572190636 3.3325545851092e-08 1.0864127947456e-06 9.3185039394887 1048 <a href='http://www.genecards.org/index.php?path=/Search/keyword/Mir214'>Mir214</a> -Mir153 -5.9638030672045 1.43863148956561 -10.7270821099311 3.49874201497232e-08 1.09398957489501e-06 9.03556928822473 140 <a href='http://www.genecards.org/index.php?path=/Search/keyword/Mir153'>Mir153</a>
Categories: Statistics
On Sep 26, 2013, at 6:47 AM, Ross ross.lazarus@gmail.com wrote:
+1
I think it's been a problem with some of my repos for a while on the Test Tool Shed, so possibly not related to blast_datatypes - I sent this privately about 2 weeks ago:
Ross ross.lazarus@gmail.com Sep 13 (13 days ago)
to Dave, Greg
Hey Dave - any thoughts on how to fix http://testtoolshed.g2.bx.psu.edu/view/fubar/differential_count_models so the test results appear? AFAIK the test works but there's no test results section in the manage repository sections I can see.
On Thu, Sep 26, 2013 at 8:40 PM, Peter Cock p.j.a.cock@googlemail.com wrote:
On Tue, Sep 24, 2013 at 1:44 PM, Dave Bouvier dave@bx.psu.edu wrote:
Peter,
The one on the main tool shed is due to an issue Greg and I are in the process of resolving. As soon as we've tested the fix, I'll schedule a re-test of that repository and update you on the status.
--Dave B.
Looks to be spreading, or linked to the blast_datatypes dependency? Listed under "Latest revision: failing tool tests"
http://toolshed.g2.bx.psu.edu/view/peterjc/blast2go/0ac3ef59ea93
- missing test results
http://toolshed.g2.bx.psu.edu/view/peterjc/blastxml_to_top_descr/09a68a90d55...
- missing test results
http://toolshed.g2.bx.psu.edu/view/peterjc/seq_primer_clip/8c02a91a8680
- failed as expected (missing Biopython dependency, see separate thread about the ATLAS dependency problem)
http://toolshed.g2.bx.psu.edu/view/peterjc/venn_list/6aae6bc0802d
- failing as expected (missing R dependency)
And over on the Test Tool Shed (which was offline but recently came back up), listed under "Latest revision: failing tool tests":
http://testtoolshed.g2.bx.psu.edu/view/peterjc/blast2go/8c462f7b2c8d
- missing test results
http://testtoolshed.g2.bx.psu.edu/view/peterjc/predictnls/aabc2852571e
- missing test results
Regards,
Peter
Great! Thanks! Don't you ever sleep? It must be some un-allah-ly hour where you are.
On Thu, Sep 26, 2013 at 8:55 PM, Greg Von Kuster greg@bx.psu.edu wrote:
There seems to be issues with some repositories which we'll need to track down. However, I fixed some issues yesterday with styles in the tool shed that kep the tool tests from displaing, and now test results are displaying at least for this repository:
Repository revision 31:5b6af671d985 28:c4ee2e69d691 3:fe5c828dee80 *repository tip* Select a revision to inspect and download versions of Galaxy utilities from this repository.
Repository 'differential_count_models' Sharable link to this repository: http://testtoolshed.g2.bx.psu.edu/view/fubar/differential_count_models Clone this repository:hg clone http://testtoolshed.g2.bx.psu.edu/repos/fubar/differential_count_models Name:differential_count_modelshttp://testtoolshed.g2.bx.psu.edu/repository/browse_repository?id=347ec9acdc3b02e8 Repository type:Unrestricted Synopsis:edgeR, DESeq2 and voom in one tool Detailed description:edgeR, DESeq2 and voom for replicate RNA seq data.
TURN OFF HTML SANITIZATION in universe_wsgi.xml or your outputs will lack proper style sheets and look awful
This packages dependencies include R and bioconductor so expect your proxy to time out during the installation - be patient - it takes 10 to 30 minutes to download and compile everything, depending on the velocity of your machine and the interwebs - about 20 minutes on my laptop.
make sure you have a $TMP set in your environment or ghostscript gets unhappy - you'll see a warning every time you run this tool if you've forgotten.
Revision:31:5b6af671d985http://testtoolshed.g2.bx.psu.edu/repository/view_changelog?id=347ec9acdc3b02e8 Owner:fubar Times downloaded:15 Dependencies of this repository
Repository dependencies* - installation of these additional repositories is required*http://testtoolshed.g2.bx.psu.edu/repository/view_repository?status=done&message=&id=347ec9acdc3b02e8
Tool dependencies* - repository tools require handling of these dependencies*http://testtoolshed.g2.bx.psu.edu/repository/view_repository?status=done&message=&id=347ec9acdc3b02e8
Contents of this repository
Valid tools* - click the name to preview the tool and use the pop-up menu to inspect all metadata*http://testtoolshed.g2.bx.psu.edu/repository/view_repository?status=done&message=&id=347ec9acdc3b02e8
Automated tool test results
Tool test resultshttp://testtoolshed.g2.bx.psu.edu/repository/view_repository?status=done&message=&id=347ec9acdc3b02e8 Automated test environmenthttp://testtoolshed.g2.bx.psu.edu/repository/view_repository?status=done&message=&id=347ec9acdc3b02e8 Tests that failedhttp://testtoolshed.g2.bx.psu.edu/repository/view_repository?status=done&message=&id=347ec9acdc3b02e8 *Tool id:* rgDifferentialCount*Tool version:* rgDifferentialCount*Test:* test_tool_000000 (functional.test_toolbox.TestForTool_testtoolshed.g2. bx.psu.edu/repos/fubar/differential_count_models/rgDifferentialCount/0.21) *Stderr:* *Traceback:* Traceback (most recent call last):
File "/var/opt/buildslaves/buildslave-ec2-1/buildbot-install-test-test-tool-shed-py27/build/test/functional/test_toolbox.py", line 171, in test_tool self.do_it( td, shed_tool_id=shed_tool_id )
File "/var/opt/buildslaves/buildslave-ec2-1/buildbot-install-test-test-tool-shed-py27/build/test/functional/test_toolbox.py", line 102, in do_it
self.verify_dataset_correctness( outfile, hid=elem_hid, maxseconds=testdef.maxseconds, attributes=attributes, shed_tool_id=shed_tool_id )
File "/var/opt/buildslaves/buildslave-ec2-1/buildbot-install-test-test-tool-shed-py27/build/test/base/twilltestcase.py", line 855, in verify_dataset_correctness raise AssertionError( errmsg )
AssertionError: History item 4 different than expected, difference (using diff):
( /var/opt/buildslaves/buildslave-ec2-1/buildbot-install-test-test-tool-shed-py27/build/test/install_and_test_tool_shed_repositories/tmp/shed_toolsubztFQ/testtoolshed.g2. bx.psu.edu/repos/fubar/differential_count_models/5b6af671d985/differential_count_models/differential_count_models/test-data/edgeRtest1out.xls v. /var/opt/buildslaves/buildslave-ec2-1/buildbot-install-test-test-tool-shed-py27/build/test/install_and_test_tool_shed_repositories/tmp/tmpFGK1x1/tmpCzIKTPedgeRtest1out.xls ) --- local_file +++ history_data @@ -1,1142 +0,0 @@ -ID logFC AveExpr t P.Value adj.P.Val B NReads URL -Mir192 6.94888256843679 14.6763802609023 42.7229535356942 2.30119906424271e-16 2.62566813230094e-13 27.2664713266936 2325567 <a href=' http://www.genecards.org/index.php?path=/Search/keyword/Mir192'>Mir192</a> -Mir208a -11.0150177152075 3.93955375669227 -23.2524066836307 1.11893807599952e-12 6.38354172357727e-10 17.2086622097974 4638 <a href=' http://www.genecards.org/index.php?path=/Search/keyword/Mir208a'
Mir208a</a>
-Mir122a 10.4261254701779 8.16986409392255 21.7229119192922 2.85968233611017e-12 1.08763251516723e-09 17.760171141852 90428 <a href=' http://www.genecards.org/index.php?path=/Search/keyword/Mir122a'
Mir122a</a>
-Mir149 -7.03046258655617 6.31608073609863 -20.8838348040628 4.91549082404237e-12 1.40214375755809e-09 17.2776088871455 6164 <a href=' http://www.genecards.org/index.php?path=/Search/keyword/Mir149'>Mir149</a> -Mir208b -12.4332279840446 4.60762179736006 -19.5924575126382 1.17919871718875e-11 2.69093147262473e-09 15.6836663826186 14756 <a href=' http://www.genecards.org/index.php?path=/Search/keyword/Mir208b'
Mir208b</a>
-Mir10b -5.1309149063532 12.2628671946242 -18.2420234752943 3.12499057505143e-11 4.96397841614262e-09 16.2215027882858 197340 <a href=' http://www.genecards.org/index.php?path=/Search/keyword/Mir10b'>Mir10b</a> -Mir143hg -2.24922058313374 16.2444825488726 -18.0824813146443 3.52173903971276e-11 4.96397841614262e-09 16.0266951625541 1407364 <a href=' http://www.genecards.org/index.php?path=/Search/keyword/Mir143hg'
Mir143hg</a>
-Mir143 -2.25106712131643 16.235859869169 -18.0814805993441 3.524391092512e-11 4.96397841614262e-09 16.0260836456534 1399819 <a href=' http://www.genecards.org/index.php?path=/Search/keyword/Mir143'>Mir143</a> -Mir499 -11.5675289490546 3.78745976580796 -17.9420857279689 3.91549568319751e-11 4.96397841614262e-09 14.8217405828874 6527 <a href=' http://www.genecards.org/index.php?path=/Search/keyword/Mir499'>Mir499</a> -Mir802 9.15843445824816 2.91576747878654 17.3165224121399 6.33861560587965e-11 7.23236040630868e-09 14.381577240531 1514 <a href=' http://www.genecards.org/index.php?path=/Search/keyword/Mir802'>Mir802</a> -Mir3073 8.42054159318439 2.54571889776166 16.7026571721381 1.03306635740721e-10 1.03604453339228e-08 13.9858447292853 904 <a href=' http://www.genecards.org/index.php?path=/Search/keyword/Mir3073'
Mir3073</a>
-Mir148a 2.63821345578617 15.4435819751152 16.5481882215215 1.17118649515038e-10 1.03604453339228e-08 14.8147917664862 1002397 <a href=' http://www.genecards.org/index.php?path=/Search/keyword/Mir148a'
Mir148a</a>
-Mir101b 3.76572195114225 10.8508440499081 16.5385659719288 1.1804188373444e-10 1.03604453339228e-08 14.9000274171241 59019 <a href=' http://www.genecards.org/index.php?path=/Search/keyword/Mir101b'
Mir101b</a>
-Mir490 -8.47437764634465 3.75069567634692 -16.2596504905533 1.48481644820999e-10 1.21012540529114e-08 13.4246171016517 1741 <a href=' http://www.genecards.org/index.php?path=/Search/keyword/Mir490'>Mir490</a> -Mir21 2.93853744034991 13.1642916950886 15.3754036511693 3.14833456057776e-10 2.39483315574615e-08 13.8676979022068 229120 <a href=' http://www.genecards.org/index.php?path=/Search/keyword/Mir21'>Mir21</a> -Mir181c -3.74256009957124 9.62955774646065 -15.2423608550805 3.53706264458683e-10 2.52236779842098e-08 13.8104046176901 23605 <a href=' http://www.genecards.org/index.php?path=/Search/keyword/Mir181c'
Mir181c</a>
Mir204 -7.68442507149438 4.77517348536933 -15.0334839919296 4.2542677795722e-10 2.85536443323052e-08 12.8224274879526 2601 http://www.genecards.org/index.php?path=/Search/keyword/Mir204
Mir23a -3.16576837850821 8.78965917558611 -14.6311785109623 6.11068192724496e-10 3.87349337721472e-08 13.2691736804205 10118 http://www.genecards.org/index.php?path=/Search/keyword/Mir23a
Mir181d -3.63621106402109 6.37132182424908 -14.3170733565449 8.15750840848868e-10 4.89879847057136e-08 12.9563328312209 2139 http://www.genecards.org/index.php?path=/Search/keyword/Mir181d
Mir133b -6.49354876170712 1.25448620431148 -13.969968060601 1.12993427319653e-09 6.44627502858619e-08 11.9826837063041 159 http://www.genecards.org/index.php?path=/Search/keyword/Mir133b
Mir27a -3.10693537246128 9.92557960348829 -13.8382510839158 1.28101104196848e-09 6.96015999469543e-08 12.5130856443239 21886 http://www.genecards.org/index.php?path=/Search/keyword/Mir27a
Mir194-2 5.26413595786074 6.08976151689046 13.0440120203829 2.79288399641768e-09 1.44849119996026e-07 11.7157527118771 3570 http://www.genecards.org/index.php?path=/Search/keyword/Mir194-2
Mir195 -3.21659545049586 7.4509349905835 -12.869478368273 3.33278798407795e-09 1.65335264775345e-07 11.5875227405737 3962 http://www.genecards.org/index.php?path=/Search/keyword/Mir195
Mir27b -1.97637614533106 15.0957731023791 -11.75603589654 1.08219717999805e-08 5.14494575990741e-07 10.1277185662145 625308 http://www.genecards.org/index.php?path=/Search/keyword/Mir27b
Mir378 -3.09739319841142 7.38320489393809 -11.6841625470748 1.17137125863851e-08 5.34613842442616e-07 10.3296922348831 4075 http://www.genecards.org/index.php?path=/Search/keyword/Mir378
Snord104 2.33737428989677 10.6109023861403 11.4956750870273 1.44448164322638e-08 6.33905213431269e-07 10.0233949189609 33458 http://www.genecards.org/index.php?path=/Search/keyword/Snord104
Mir1983 -5.89550024150745 0.993185099223749 -11.4458119994178 1.52754786535047e-08 6.44160462853232e-07 9.74926029381244 101 http://www.genecards.org/index.php?path=/Search/keyword/Mir1983
Mir322 -3.29661750880005 8.21534154356388 -11.4153616003567 1.58076187203247e-08 6.44160462853232e-07 10.0084716002011 7074 http://www.genecards.org/index.php?path=/Search/keyword/Mir322
Mir200a 6.19156065085543 1.79813092499896 11.3221723123067 1.75622912046568e-08 6.9098531946598e-07 9.66229453831667 264 http://www.genecards.org/index.php?path=/Search/keyword/Mir200a
Mir215 -3.04587333807051 5.75442336214621 -11.1481336257529 2.14182153707674e-08 8.08245865886245e-07 9.75326755116029 1182 http://www.genecards.org/index.php?path=/Search/keyword/Mir215
Dnm3os -3.36334357719079 5.86074322417943 -11.0922610835813 2.28395969947309e-08 8.08245865886245e-07 9.68949616901383 1401 http://www.genecards.org/index.php?path=/Search/keyword/Dnm3os
Mir182 4.90399541739044 7.1511683493624 11.0744681203078 2.33130367310143e-08 8.08245865886245e-07 9.65884218207857 7189 http://www.genecards.org/index.php?path=/Search/keyword/Mir182
Mir181a-2 -3.04829832099813 6.94146510070354 -11.0721276255975 2.33760855164295e-08 8.08245865886245e-07 9.64401697815694 2817 http://www.genecards.org/index.php?path=/Search/keyword/Mir181a-2
Mir1948 7.19552540631629 4.5513492833967 11.0054920626234 2.52493600829575e-08 8.47338819254543e-07 9.34179361673467 2404 http://www.genecards.org/index.php?path=/Search/keyword/Mir1948
Mir214 -3.28087400431203 5.47844506177362 -10.7682572190636 3.3325545851092e-08 1.0864127947456e-06 9.3185039394887 1048 http://www.genecards.org/index.php?path=/Search/keyword/Mir214
Mir153 -5.9638030672045 1.43863148956561 -10.7270821099311 3.49874201497232e-08 1.09398957489501e-06 9.03556928822473 140 http://www.genecards.org/index.php?path=/Search/keyword/Mir153
On Sep 26, 2013, at 6:47 AM, Ross ross.lazarus@gmail.com wrote:
+1
I think it's been a problem with some of my repos for a while on test so possibly not related to blast_datatypes - I sent this privately about 2 weeks ago:
Ross <ross.lazarus@gmail.com> Sep 13 (13 days ago) to Dave, Greg
Hey Dave - any thoughts on how to fix http://testtoolshed.g2.bx.psu.edu/view/fubar/differential_count_models so the test results appear? AFAIK the test works but there's no test results section in the manage repository sections I can see.
On Thu, Sep 26, 2013 at 8:40 PM, Peter Cock p.j.a.cock@googlemail.com wrote:
On Tue, Sep 24, 2013 at 1:44 PM, Dave Bouvier dave@bx.psu.edu wrote:
Peter,
The one on the main tool shed is due to an issue Greg and I are in the process of resolving. As soon as we've tested the fix, I'll schedule a re-test of that repository and update you on the status.
--Dave B.
Looks to be spreading, or linked to the blast_datatypes dependency? Listed under "Latest revision: failing tool tests"
http://toolshed.g2.bx.psu.edu/view/peterjc/blast2go/0ac3ef59ea93
- missing test results
http://toolshed.g2.bx.psu.edu/view/peterjc/blastxml_to_top_descr/09a68a90d55...
- missing test results
http://toolshed.g2.bx.psu.edu/view/peterjc/seq_primer_clip/8c02a91a8680
- failed as expected (missing Biopython dependency, see separate thread about the ATLAS dependency problem)
http://toolshed.g2.bx.psu.edu/view/peterjc/venn_list/6aae6bc0802d
- failing as expected (missing R dependency)
And over on the Test Tool Shed (which was offline but recently came back up), listed under "Latest revision: failing tool tests":
http://testtoolshed.g2.bx.psu.edu/view/peterjc/blast2go/8c462f7b2c8d
- missing test results
http://testtoolshed.g2.bx.psu.edu/view/peterjc/predictnls/aabc2852571e
- missing test results
Regards,
Peter
Ross, Peter,
Unfortunately, there are some issues with the automated testing framework that I am working on resolving. I've created a Trello card to track my progress:
--Dave B.
On 09/26/2013 06:47 AM, Ross wrote:
+1
I think it's been a problem with some of my repos for a while on test so possibly not related to blast_datatypes - I sent this privately about 2 weeks ago:
Ross <ross.lazarus@gmail.com>
Sep 13 (13 days ago) to Dave, Greg
Hey Dave - any thoughts on how to fix http://testtoolshed.g2.bx.psu.edu/view/fubar/differential_count_models so the test results appear? AFAIK the test works but there's no test results section in the manage repository sections I can see.
On Thu, Sep 26, 2013 at 8:40 PM, Peter Cock <p.j.a.cock@googlemail.com> wrote:
On Tue, Sep 24, 2013 at 1:44 PM, Dave Bouvier <dave@bx.psu.edu> wrote:
> Peter,
>
> The one on the main tool shed is due to an issue Greg and I are in the
> process of resolving. As soon as we've tested the fix, I'll schedule a
> re-test of that repository and update you on the status.
>
> --Dave B.
Looks to be spreading, or linked to the blast_datatypes dependency? Listed under "Latest revision: failing tool tests"
http://toolshed.g2.bx.psu.edu/view/peterjc/blast2go/0ac3ef59ea93
- missing test results
http://toolshed.g2.bx.psu.edu/view/peterjc/blastxml_to_top_descr/09a68a90d552
- missing test results
http://toolshed.g2.bx.psu.edu/view/peterjc/seq_primer_clip/8c02a91a8680
- failed as expected (missing Biopython dependency, see separate thread about the ATLAS dependency problem)
http://toolshed.g2.bx.psu.edu/view/peterjc/venn_list/6aae6bc0802d
- failing as expected (missing R dependency)
And over on the Test Tool Shed (which was offline but recently came back up), listed under "Latest revision: failing tool tests":
http://testtoolshed.g2.bx.psu.edu/view/peterjc/blast2go/8c462f7b2c8d
- missing test results
http://testtoolshed.g2.bx.psu.edu/view/peterjc/predictnls/aabc2852571e
- missing test results
Regards,
Peter
On Thu, Sep 26, 2013 at 3:29 PM, Dave Bouvier dave@bx.psu.edu wrote:
Ross, Peter,
Unfortunately, there are some issues with the automated testing framework that I am working on resolving. I've created a Trello card to track my progress:
--Dave B.
Any joy with that "from fabric.api import env" issue?
Meanwhile the set of tools with missing test results seems to change from day to day; right now the following are listed under "Latest revision: failing tool tests" yet have no test results:
http://testtoolshed.g2.bx.psu.edu/view/peterjc/clinod/0d39248a7a9f (expected to pass)
http://testtoolshed.g2.bx.psu.edu/view/peterjc/effectivet3/1e681e19e35c (expected to pass)
http://testtoolshed.g2.bx.psu.edu/view/peterjc/nlstradamus/81cac85fc286 (expected to pass)
Peter
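For reference, the "from fabric.api import env" line comes from Fabric 1.x, which the install and test framework uses for its shell-level setup. A minimal guarded check along these lines (just a sketch of mine, not Galaxy code) shows whether that dependency is even importable on a given host:

# Sketch only: confirm Fabric 1.x is usable before blaming individual tools.
# "env" is Fabric 1.x's global settings object; if this import fails, anything
# built on fabric.api will fail the same way during a test run.
try:
    from fabric.api import env
except ImportError as err:
    raise SystemExit("Fabric 1.x is not available: %s" % err)
else:
    print("Fabric imported OK, host list is currently: %r" % (env.hosts,))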
On Fri, Oct 4, 2013 at 11:11 AM, Peter Cock p.j.a.cock@googlemail.com wrote:
On Thu, Sep 26, 2013 at 3:29 PM, Dave Bouvier dave@bx.psu.edu wrote:
Ross, Peter,
Unfortunately, there are some issues with the automated testing framework that I am working on resolving. I've created a Trello card to track my progress:
--Dave B.
Any joy with that "from fabric.api import env" issue?
Meanwhile the set of tools with missing test results seems to change from day to day; right now the following are listed under "Latest revision: failing tool tests" yet have no test results:
http://testtoolshed.g2.bx.psu.edu/view/peterjc/clinod/0d39248a7a9f (expected to pass)
http://testtoolshed.g2.bx.psu.edu/view/peterjc/effectivet3/1e681e19e35c (expected to pass)
http://testtoolshed.g2.bx.psu.edu/view/peterjc/nlstradamus/81cac85fc286 (expected to pass)
Peter
There's a similar set on the main Tool Shed, where I should have the same version of those three tools, which may be interesting for tracing the problem(s):
http://toolshed.g2.bx.psu.edu/view/peterjc/clinod (expected to pass)
http://toolshed.g2.bx.psu.edu/view/peterjc/effectivet3 (expected to pass)
http://toolshed.g2.bx.psu.edu/view/peterjc/nlstradamus (expected to pass)
Also missing test results:
http://toolshed.g2.bx.psu.edu/view/peterjc/seq_primer_clip/8c02a91a8680 (this revision is expected to fail if Biopython not present, see other thread about packaging NumPy and its dependencies)
Regards,
Peter
Peter,
Thanks for the information, it uncovered a problem with the testing framework that has been resolved in 10817:6b0c3592dde1. The listed repositories should start showing useful results next time the nightly tests run.
--Dave B.
On 10/04/2013 06:19 AM, Peter Cock wrote:
On Fri, Oct 4, 2013 at 11:11 AM, Peter Cock p.j.a.cock@googlemail.com wrote:
On Thu, Sep 26, 2013 at 3:29 PM, Dave Bouvier dave@bx.psu.edu wrote:
Ross, Peter,
Unfortunately, there are some issues with the automated testing framework that I am working on resolving. I've created a Trello card to track my progress:
--Dave B.
Any joy with that "from fabric.api import env" issue?
Meanwhile the set of tools with missing test results seems to change from day to day; right now the following are listed under "Latest revision: failing tool tests" yet have no test results:
http://testtoolshed.g2.bx.psu.edu/view/peterjc/clinod/0d39248a7a9f (expected to pass)
http://testtoolshed.g2.bx.psu.edu/view/peterjc/effectivet3/1e681e19e35c (expected to pass)
http://testtoolshed.g2.bx.psu.edu/view/peterjc/nlstradamus/81cac85fc286 (expected to pass)
Peter
There's a similar set on the main Tool Shed, where I should have the same version of those three tools, which may be interesting for tracing the problem(s):
http://toolshed.g2.bx.psu.edu/view/peterjc/clinod (expected to pass)
http://toolshed.g2.bx.psu.edu/view/peterjc/effectivet3 (expected to pass)
http://toolshed.g2.bx.psu.edu/view/peterjc/nlstradamus (expected to pass)
Also missing test results:
http://toolshed.g2.bx.psu.edu/view/peterjc/seq_primer_clip/8c02a91a8680 (this revision is expected to fail if Biopython not present, see other thread about packaging NumPy and its dependencies)
Regards,
Peter
On Mon, Oct 7, 2013 at 7:32 PM, Dave Bouvier dave@bx.psu.edu wrote:
Peter,
Thanks for the information, it uncovered a problem with the testing framework that has been resolved in 10817:6b0c3592dde1. The listed repositories should start showing useful results next time the nightly tests run.
--Dave B.
Hi Dave,
I must sound like a scratched record (is there a replacement phrase now that LPs are a cultural artefact?):
Currently listed under "Latest revision: failing tool tests" but missing any test results:
http://testtoolshed.g2.bx.psu.edu/view/peterjc/predictnls
(I've got a lot of fresh failures pointing at a separate issue, email to follow with a new title).
Peter
Hi Peter,
Here is a new Trello card to track the progress for fixing and stabilizing the tool shed install and test framework. This card will evolve over the next few days as items are completed and new issues are uncovered that cause problems in the framework. This work is our top priority until it is finished.
https://trello.com/c/QWgsgll8/1203-toolshed-automated-test-framework-fixes
Greg Von Kuster
On Oct 22, 2013, at 5:43 AM, Peter Cock p.j.a.cock@googlemail.com wrote:
On Mon, Oct 7, 2013 at 7:32 PM, Dave Bouvier dave@bx.psu.edu wrote:
Peter,
Thanks for the information, it uncovered a problem with the testing framework that has been resolved in 10817:6b0c3592dde1. The listed repositories should start showing useful results next time the nightly tests run.
--Dave B.
Hi Dave,
I must sound like a scratched record (is there a replacement phrase now that LPs are a cultural artefact?):
Currently listed under "Latest revision: failing tool tests" but missing any test results:
http://testtoolshed.g2.bx.psu.edu/view/peterjc/predictnls
(I've got a lot of fresh failures pointing at a separate issue, email to follow with a new title).
Peter
On Tue, Oct 22, 2013 at 2:49 PM, Greg Von Kuster greg@bx.psu.edu wrote:
Hi Peter,
Here is a new Trello card to track the progress for fixing and stabilizing the tool shed install and test framework. This card will evolve over the next few days as items are completed and new issues are uncovered that cause problems in the framework. This work is our top priority until it is finished.
https://trello.com/c/QWgsgll8/1203-toolshed-automated-test-framework-fixes
Greg Von Kuster
Hi Greg, Dave,
This is just to comment that there is a (possibly related) oddity in the RSS feeds, with lots of entries showing missing test results:
"0 tests passed, 0 tests failed, 0 tests missing test components."
Strangely this is reporting multiple revisions of each tool (not in commit order), and it seems the older revisions show no tests (possibly true, or simply a sign of no test data?).
Peter
--
e.g.
http://testtoolshed.g2.bx.psu.edu/repository/get_functional_test_rss?owner=p...
Revision 66d1ca92fb38 of seq_filter_by_id 31 October 2013 22:03 0 tests passed, 0 tests failed, 0 tests missing test components.
Revision b7ec1c9db38d of seq_filter_by_id 31 October 2013 21:14 0 tests passed, 0 tests failed, 0 tests missing test components.
Revision 21a065d5f0e2 of seq_filter_by_id 31 October 2013 22:24 0 tests passed, 0 tests failed, 0 tests missing test components.
Revision 8a34c565a473 of seq_filter_by_id 1 November 2013 09:23 1 tests passed, 0 tests failed, 0 tests missing test components.
Revision 44891766cf9b of seq_filter_by_id 18 October 2013 23:29 0 tests passed, 0 tests failed, 1 tests missing test components.
Revision 726dcfa94b2c of seq_select_by_id 31 October 2013 22:17 0 tests passed, 0 tests failed, 0 tests missing test components.
Revision 9e9e3b860aa0 of seq_select_by_id 1 November 2013 09:26 0 tests passed, 1 tests failed, 0 tests missing test components.
Revision 694208ea6c34 of seq_select_by_id 31 October 2013 22:24 0 tests passed, 0 tests failed, 0 tests missing test components.
Revision 2b27279adeff of seq_select_by_id 18 October 2013 23:28 0 tests passed, 0 tests failed, 1 tests missing test components.
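As an aside, the get_functional_test_rss feed is ordinary RSS, so per-revision summaries like the ones above can be pulled out with a few lines of Python. This is only a sketch: it assumes a standard <item> title/description layout and an owner parameter (the real URL above is truncated), so the Tool Shed's actual field names may differ:

# Sketch: list the entries of a Tool Shed functional test RSS feed.
# Python 2 style, matching the framework's Python 2.7 environment.
import urllib2
import xml.etree.ElementTree as ET

url = ("http://testtoolshed.g2.bx.psu.edu/repository/get_functional_test_rss"
       "?owner=peterjc")  # owner value assumed for illustration
tree = ET.parse(urllib2.urlopen(url))
for item in tree.iter("item"):
    title = item.findtext("title", default="").strip()
    summary = item.findtext("description", default="").strip()
    print("%s\n    %s" % (title, summary))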
On Friday, November 1, 2013, Peter Cock wrote:
On Tue, Oct 22, 2013 at 2:49 PM, Greg Von Kuster <greg@bx.psu.edu> wrote:
Hi Peter,
Here is a new Trello card to track the progress for fixing and
stabilizing the tool shed install and test framework. This card will evolve over the next few days as items are completed and new issues are uncovered that cause problems in the framework. This work is our top priority until it is finished.
https://trello.com/c/QWgsgll8/1203-toolshed-automated-test-framework-fixes
Greg Von Kuster
Hi Greg, Dave,
This is just to comment that there is a (possibly related) oddity in the RSS feeds, with lots of entries showing missing test results:
"0 tests passed, 0 tests failed, 0 tests missing test components."
Strangely this is reporting multiple revisions of each tool (not in commit order), and it seems the older revisions show no tests (possibly true, or simply a sign of no test data?).
Peter
Looks like Dave's fixed the multiple RSS entries: https://bitbucket.org/galaxy/galaxy-central/commits/52df185d7552
Thanks,
Peter
On Tue, Oct 22, 2013 at 10:43 AM, Peter Cock p.j.a.cock@googlemail.com wrote:
On Mon, Oct 7, 2013 at 7:32 PM, Dave Bouvier dave@bx.psu.edu wrote:
Peter,
Thanks for the information, it uncovered a problem with the testing framework that has been resolved in 10817:6b0c3592dde1. The listed repositories should start showing useful results next time the nightly tests run.
--Dave B.
Hi Dave,
I must sound like a scratched record (is there a replacement phrase now that LPs are a cultural artefact?):
Currently listed under "Latest revision: failing tool tests" but missing any test results:
http://testtoolshed.g2.bx.psu.edu/view/peterjc/predictnls
(I've got a lot of fresh failures pointing at a separate issue, email to follow with a new title).
Peter
More "missing test results", yet listed under "Latest revision: failing tool tests":
http://testtoolshed.g2.bx.psu.edu/view/peterjc/get_orfs_or_cdss http://testtoolshed.g2.bx.psu.edu/view/peterjc/seq_primer_clip http://testtoolshed.g2.bx.psu.edu/view/peterjc/seq_select_by_id
The common feature is they all depend on Biopython - perhaps that failed to install, and the tests were skipped?
Over on the main tool shed, those three are all showing errors relating to a failed Biopython install.
Peter
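A quick way to confirm that suspicion on the test host, independent of the framework (just a sketch, not something the Tool Shed itself runs), is to check whether the resolved dependency environment can actually import Biopython:

# Sketch: check that the dependency environment provides Biopython. If this
# fails, every tool test that imports Bio would fall over before producing
# any per-test output.
try:
    import Bio
except ImportError as err:
    print("Biopython is missing: %s" % err)
else:
    print("Biopython %s loaded from %s" % (Bio.__version__, Bio.__file__))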
On Mon, Nov 4, 2013 at 10:33 AM, Peter Cock p.j.a.cock@googlemail.com wrote:
More "missing test results", yet listed under "Latest revision: failing tool tests":
http://testtoolshed.g2.bx.psu.edu/view/peterjc/get_orfs_or_cdss http://testtoolshed.g2.bx.psu.edu/view/peterjc/seq_primer_clip http://testtoolshed.g2.bx.psu.edu/view/peterjc/seq_select_by_id
The common feature is they all depend on Biopython - perhaps that failed to install, and the tests were skipped?
I updated all three of these tools (and others) on the Test Tool Shed yesterday to address my misconfigured dependencies (thanks Greg!), e.g. https://github.com/peterjc/pico_galaxy/commit/399111894d1ea9432a4c24ed041369...
(Note that other tools with the same broken dependency setup did not suffer from the "missing test output" problem - they were showing the tool's test failures)
The good news is that seq_select_by_id has gone from an apparent failure with missing test output, to a passing test: http://testtoolshed.g2.bx.psu.edu/view/peterjc/seq_select_by_id/bedfe8f3d3fb
The bad news is there are still 2 apparently failing tests with missing test output:
http://testtoolshed.g2.bx.psu.edu/view/peterjc/get_orfs_or_cdss/09b5b1af9b8f http://testtoolshed.g2.bx.psu.edu/view/peterjc/seq_primer_clip/42bca649ec44
Similarly over on the Main Tool Shed (where I have not yet updated these two tools).
Regards,
Peter
On Thu, Nov 21, 2013 at 10:02 AM, Peter Cock p.j.a.cock@googlemail.com wrote:
Similarly over on the Main Tool Shed (where I have not yet updated these two tools).
I currently have three tool repositories on the Main Tool Shed listed under "Latest revision: failing tool tests" yet with no test results:
* http://toolshed.g2.bx.psu.edu/view/peterjc/get_orfs_or_cdss/5208c15805ec Known dependency mis-configuration, updated version on Test Tool Shed. I would expect this revision to fail (saying Biopython not installed).
* http://toolshed.g2.bx.psu.edu/view/peterjc/seq_primer_clip/ee5acea162a7 Known dependency mis-configuration, updated version on Test Tool Shed. I would expect this revision to fail (saying Biopython not installed).
* http://toolshed.g2.bx.psu.edu/view/peterjc/samtools_idxstats/d4412c04d7b1 Should be identical to the working version on the Test Tool Shed, i.e. http://toolshed.g2.bx.psu.edu/view/peterjc/samtools_idxstats/93b8db68dde4 I would expect this revision to pass.
Regards,
Peter
Peter,
As of this morning, all of the repositories you listed below seem to have the expected results, with the exception of samtools_idxstats on the main tool shed. That repository was not tested due to an unhandled unicode error in the testing framework, which has been corrected, and the main tool shed's test framework is now being re-run with the updated code.
--Dave B.
On 11/21/2013 05:18 AM, Peter Cock wrote:
On Thu, Nov 21, 2013 at 10:02 AM, Peter Cock p.j.a.cock@googlemail.com wrote:
Similarly over on the Main Tool Shed (where I have not yet updated these two tools).
I currently have three tool repositories on the Main Tool Shed listed under "Latest revision: failing tool tests" yet with no test results:
Known dependency mis-configuration, updated version on Test Tool Shed. I would expect this revision to fail (saying Biopython not installed).
Known dependency mis-configuration, updated version on Test Tool Shed. I would expect this revision to fail (saying Biopython not installed).
Should be identical to the working version on the Test Tool Shed, i.e. http://toolshed.g2.bx.psu.edu/view/peterjc/samtools_idxstats/93b8db68dde4 I would expect this revision to pass.
Regards,
Peter
On Thu, Nov 21, 2013 at 2:19 PM, Dave Bouvier dave@bx.psu.edu wrote:
Peter,
As of this morning, all of the repositories you listed below seem to have the expected results, with the exception of samtools_idxstats on the main tool shed. That repository was not tested due to an unhandled unicode error in the testing framework, which has been corrected, and the main tool shed's test framework is now being re-run with the updated code.
--Dave B.
Thanks Dave,
I'll wait for the main Tool Shed to finish the test run, but things look great on the Test Tool Shed now - I have nothing under "Latest revision: installation errors" or "Latest revision: failing tool tests" :) :) :)
One possible glitch though is there are no test results for the BLAST+ suite (yet?), listed under "Latest revision: missing tool tests" as not all its tools have tests yet:
http://testtoolshed.g2.bx.psu.edu/view/peterjc/ncbi_blast_plus/f2478dc77ccb
Says:
Tool test results
Automated test environment
Time tested: ~ 4 hours ago
System: Linux 3.8.0-30-generic
Architecture: x86_64
Python version: 2.7.4
Galaxy revision: 11433:e6e9349e8da4
Galaxy database version: 117
Tool shed revision:
Tool shed database version:
Tool shed mercurial version:
(no actual test output)
Regards,
Peter
Peter,
About three weeks ago, we began placing a higher priority on fixing the nightly functional test runs. Part of this process was to reschedule the nightly tests to finish around 9am ET. At the time you sent this email, the test run had not completed, and you were looking at the previous day's results. The last test run results (from this morning, finishing at 8:54am ET) are pasted inline.
--Dave B.
On 11/21/2013 05:02 AM, Peter Cock wrote:
The bad news is there are still 2 apparently failing tests with missing test output:
http://testtoolshed.g2.bx.psu.edu/view/peterjc/get_orfs_or_cdss/09b5b1af9b8f
Tool test results
Automated test environment
Time tested: ~ 4 hours ago
System: Linux 3.8.0-30-generic
Architecture: x86_64
Python version: 2.7.4
Galaxy revision: 11433:e6e9349e8da4
Galaxy database version: 117
Tool shed revision:
Tool shed database version:
Tool shed mercurial version:
Tests that passed successfully
Tool id: get_orfs_or_cdss  Tool version: get_orfs_or_cdss  Test: test_tool_000000 (functional.test_toolbox.TestForTool_testtoolshed.g2.bx.psu.edu/repos/peterjc/get_orfs_or_cdss/get_orfs_or_cdss/0.0.5)
Tool id: get_orfs_or_cdss  Tool version: get_orfs_or_cdss  Test: test_tool_000001 (functional.test_toolbox.TestForTool_testtoolshed.g2.bx.psu.edu/repos/peterjc/get_orfs_or_cdss/get_orfs_or_cdss/0.0.5)
Tool id: get_orfs_or_cdss  Tool version: get_orfs_or_cdss  Test: test_tool_000002 (functional.test_toolbox.TestForTool_testtoolshed.g2.bx.psu.edu/repos/peterjc/get_orfs_or_cdss/get_orfs_or_cdss/0.0.5)
Tool id: get_orfs_or_cdss  Tool version: get_orfs_or_cdss  Test: test_tool_000003 (functional.test_toolbox.TestForTool_testtoolshed.g2.bx.psu.edu/repos/peterjc/get_orfs_or_cdss/get_orfs_or_cdss/0.0.5)
http://testtoolshed.g2.bx.psu.edu/view/peterjc/seq_primer_clip/42bca649ec44
Tool test results
Automated test environment
Time tested: ~ 4 hours ago
System: Linux 3.8.0-30-generic
Architecture: x86_64
Python version: 2.7.4
Galaxy revision: 11433:e6e9349e8da4
Galaxy database version: 117
Tool shed revision:
Tool shed database version:
Tool shed mercurial version:
Tests that passed successfully
Tool id: seq_primer_clip  Tool version: seq_primer_clip  Test: test_tool_000000 (functional.test_toolbox.TestForTool_testtoolshed.g2.bx.psu.edu/repos/peterjc/seq_primer_clip/seq_primer_clip/0.0.10)
Tool id: seq_primer_clip  Tool version: seq_primer_clip  Test: test_tool_000001 (functional.test_toolbox.TestForTool_testtoolshed.g2.bx.psu.edu/repos/peterjc/seq_primer_clip/seq_primer_clip/0.0.10)
Tool id: seq_primer_clip  Tool version: seq_primer_clip  Test: test_tool_000002 (functional.test_toolbox.TestForTool_testtoolshed.g2.bx.psu.edu/repos/peterjc/seq_primer_clip/seq_primer_clip/0.0.10)
Tool id: seq_primer_clip  Tool version: seq_primer_clip  Test: test_tool_000003 (functional.test_toolbox.TestForTool_testtoolshed.g2.bx.psu.edu/repos/peterjc/seq_primer_clip/seq_primer_clip/0.0.10)
Tool id: seq_primer_clip  Tool version: seq_primer_clip  Test: test_tool_000004 (functional.test_toolbox.TestForTool_testtoolshed.g2.bx.psu.edu/repos/peterjc/seq_primer_clip/seq_primer_clip/0.0.10)
Tool id: seq_primer_clip  Tool version: seq_primer_clip  Test: test_tool_000005 (functional.test_toolbox.TestForTool_testtoolshed.g2.bx.psu.edu/repos/peterjc/seq_primer_clip/seq_primer_clip/0.0.10)
Similarly over on the Main Tool Shed (where I have not yet updated these two tools).
Regards,
Peter
On Thu, Nov 21, 2013 at 2:38 PM, Dave Bouvier dave@bx.psu.edu wrote:
Peter,
About three weeks ago, we began placing a higher priority on fixing the nightly functional test runs.
And we're starting to see the dividends of this time investment now :)
(And with John extending the test framework to handle <repeats> and potentially ambiguous parameter names with conditionals etc, you will soon have even more tools with tests to run....)
Part of this process was to reschedule the nightly tests to finish around 9am ET. At the time you sent this email, the test run had not completed, and you were looking at the previous day's results. The last test run results (from this morning, finishing at 8:54am ET) are pasted inline.
i.e. the tests are now run at 14:00 UTC/GMT, which is mid-afternoon in Europe?
Given you've been working on the test framework a lot, and running extra tests sometimes, I think I'd missed this time shift.
Given we have Galaxy tool developers right around the globe, whenever the tests are scheduled won't suit everyone (unless maybe they could be run twice a day).
Regards,
Peter
On Thu, Nov 21, 2013 at 2:47 PM, Peter Cock p.j.a.cock@googlemail.com wrote:
On Thu, Nov 21, 2013 at 2:38 PM, Dave Bouvier dave@bx.psu.edu wrote:
Peter,
About three weeks ago, we began placing a higher priority on fixing the nightly functional test runs.
And we're starting to see the dividends of this time investment now :)
Hi Dave,
For a while almost all my tools were showing as passing their tests, but sadly something seems to have broken - has the Test Tool Shed really not run any tests for the last three days? Even if so, the display is still missing the last test results.
Test Tool Shed, "Latest revision: installation errors", no individual test results, but says the tests were run ~4 days ago:
http://testtoolshed.g2.bx.psu.edu/view/peterjc/blast2go/8c462f7b2c8d http://testtoolshed.g2.bx.psu.edu/view/peterjc/effectivet3/1e681e19e35c http://testtoolshed.g2.bx.psu.edu/view/peterjc/fastq_paired_unpaired/63eb445...
Test Tool Shed, "Latest revision: installation errors", but no test results section at all:
http://testtoolshed.g2.bx.psu.edu/view/peterjc/get_orfs_or_cdss/09b5b1af9b8f http://testtoolshed.g2.bx.psu.edu/view/peterjc/ncbi_blast_plus/b2795652d2b4 http://testtoolshed.g2.bx.psu.edu/view/peterjc/seq_primer_clip/42bca649ec44
Actual test failure shown, again time tested listed as 4 days ago, but apparently a framework issue:
http://testtoolshed.g2.bx.psu.edu/view/peterjc/nlstradamus/81cac85fc286 http://testtoolshed.g2.bx.psu.edu/view/peterjc/seq_filter_by_id/324893f5751e http://testtoolshed.g2.bx.psu.edu/view/peterjc/seq_rename/61cec46f6be5 http://testtoolshed.g2.bx.psu.edu/view/peterjc/seq_select_by_id/bedfe8f3d3fb (BrowserStateError: cannot go to 'http://None:29009')
Regards,
Peter
Hello Peter,
I've done a significant amount of re-engineering of the Tool Shed's install and test framework over the past 2 weeks, so the results you are seeing are probably a reflection of some broken pieces during the time frame in which I was making changes. I believe I have most of the things ironed out in 11545:32205f911e74 which is now running on the test tool shed.
We will be performing some test runs of this new framework over the long weekend, so more correct test results should hopefully be showing up for your repositories and others.
I've also enhanced the "Tool test results" components to store a configurable number of test runs instead of just 1. The test tool shed is configured to store 5 test runs. So from here on you'll begin to see multiple containers in the Tool test results container - something like this:
Automated tool test results
2013-11-27 15:49:51  Automated test environment  Tests that passed successfully
2013-11-27 15:17:57  Automated test environment  Tests that passed successfully
2013-11-27 14:53:46  Automated test environment  Tests that passed successfully
2013-11-27 14:50:30  Automated test environment  Tests that passed successfully
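Conceptually this is just a bounded history of runs per repository revision. A rough illustration of the idea in Python (my own sketch, not the actual Tool Shed code):

# Sketch: keep only the N most recent test runs for a revision.
from collections import deque

MAX_STORED_RUNS = 5  # the test Tool Shed is configured to keep 5

test_runs = deque(maxlen=MAX_STORED_RUNS)

def record_run(timestamp, environment, summary):
    # Appending beyond maxlen silently drops the oldest stored run.
    test_runs.append({"time": timestamp, "env": environment, "summary": summary})

record_run("2013-11-27 15:49:51", "Automated test environment", "Tests that passed successfully")
for run in reversed(test_runs):  # newest first, as in the display above
    print("%s  %s" % (run["time"], run["summary"]))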
We'll keep on top of things that need fixing as we proceed with this new framework.
Thanks,
Greg Von Kuster
On Nov 27, 2013, at 5:44 AM, Peter Cock p.j.a.cock@googlemail.com wrote:
On Thu, Nov 21, 2013 at 2:47 PM, Peter Cock p.j.a.cock@googlemail.com wrote:
On Thu, Nov 21, 2013 at 2:38 PM, Dave Bouvier dave@bx.psu.edu wrote:
Peter,
About three weeks ago, we began placing a higher priority on fixing the nightly functional test runs.
And we're starting to see the dividends of this time investment now :)
Hi Dave,
For a while almost all my tools were showing as passing their tests, but sadly something seems to have broken - has the Test Tool Shed really not run any tests for the last three days? Even if so, the display is still missing the last test results.
Test Tool Shed, "Latest revision: installation errors", no individual test results, but says the tests were run ~4 days ago:
http://testtoolshed.g2.bx.psu.edu/view/peterjc/blast2go/8c462f7b2c8d http://testtoolshed.g2.bx.psu.edu/view/peterjc/effectivet3/1e681e19e35c http://testtoolshed.g2.bx.psu.edu/view/peterjc/fastq_paired_unpaired/63eb445...
Test Tool Shed, "Latest revision: installation errors", but no test results section at all:
http://testtoolshed.g2.bx.psu.edu/view/peterjc/get_orfs_or_cdss/09b5b1af9b8f http://testtoolshed.g2.bx.psu.edu/view/peterjc/ncbi_blast_plus/b2795652d2b4 http://testtoolshed.g2.bx.psu.edu/view/peterjc/seq_primer_clip/42bca649ec44
Actual test failure shown, again time tested listed as 4 days ago, but apparently a framework issue:
http://testtoolshed.g2.bx.psu.edu/view/peterjc/nlstradamus/81cac85fc286 http://testtoolshed.g2.bx.psu.edu/view/peterjc/seq_filter_by_id/324893f5751e http://testtoolshed.g2.bx.psu.edu/view/peterjc/seq_rename/61cec46f6be5 http://testtoolshed.g2.bx.psu.edu/view/peterjc/seq_select_by_id/bedfe8f3d3fb (BrowserStateError: cannot go to 'http://None:29009')
Regards,
Peter
On Wed, Nov 27, 2013 at 9:02 PM, Greg Von Kuster greg@bx.psu.edu wrote:
Hello Peter,
I've done a significant amount of re-engineering of the Tool Shed's install and test framework over the past 2 weeks, so the results you are seeing are probably a reflection of some broken pieces during the time frame in which I was making changes. I believe I have most of the things ironed out in 11545:32205f911e74 which is now running on the test tool shed.
We will be performing some test runs of this new framework over the long weekend, so more correct test results should hopefully be showing up for your repositories and others.
Hi Greg,
Things look much better today on the Test Tool Shed, with most of my tools under "Latest revision: all tool tests pass" again - or as expected a few under "Latest revision: missing tool tests" due to not yet having full test coverage.
I've only spotted one glitch so far, missing test results for: http://testtoolshed.g2.bx.psu.edu/view/peterjc/ncbi_blast_plus/b2795652d2b4
I saw a recent commit on galaxy-central addressing a test time display - that may have explained why yesterday the Test Tool Shed was claiming the test results were from four days ago.
I've also enhanced the "Tool test results" components to store a configurable number of test runs instead of just 1. The test tool shed is configured to store 5 test runs. So from here on you'll begin to see multiple containers in the Tool test results container
- something like this:
That sounds useful when the framework itself is changing rapidly, or if there is anything stochastic in a test failure.
Presumably this shows the last 5 tests against the current tool revision?
Thanks Greg, and enjoy Thanksgiving,
Peter
Hello Peter,
On Nov 28, 2013, at 4:56 AM, Peter Cock p.j.a.cock@googlemail.com wrote:
On Wed, Nov 27, 2013 at 9:02 PM, Greg Von Kuster greg@bx.psu.edu wrote:
Hello Peter,
I've done a significant amount of re-engineering of the Tool Shed's install and test framework over the past 2 weeks, so the results you are seeing are probably a reflection of some broken pieces during the time frame in which I was making changes. I believe I have most of the things ironed out in 11545:32205f911e74 which is now running on the test tool shed.
We will be performing some test runs of this new framework over the long weekend, so more correct test results should hopefully be showing up for your repositories and others.
Hi Greg,
Things look much better today on the Test Tool Shed, with most of my tools under "Latest revision: all tool tests pass" again - or as expected a few under "Latest revision: missing tool tests" due to not yet having full test coverage.
I've only spotted one glitch so far, missing test results for: http://testtoolshed.g2.bx.psu.edu/view/peterjc/ncbi_blast_plus/b2795652d2b4
The above repository was not displaying the missing test components due to a problem in the previous test runs which I fixed yesterday. The problem resulted in test result data that could not be displayed. When I introduced the new feature for storing and displaying multiple test runs, I decided to not eliminate the bad data, but just to let enough test runs proceed that the bad data would eventually be eliminated. I forced new test runs, and now things should be displaying as expected. Here is the display for the above repository for the latest run this morning:
Automated tool test results
Automated test environment
Time tested: 2013-11-28 11:29:38
System:
Architecture:
Python version:
Galaxy revision:
Galaxy database version:
Tool shed revision: 11549:5aede225ebef
Tool shed database version: 21
Tool shed mercurial version: 2.2.3
Tools missing tests or test data
Tool id: ncbi_rpstblastn_wrapper  Tool version: 0.0.22  Tool guid: testtoolshed.g2.bx.psu.edu/repos/peterjc/ncbi_blast_plus/ncbi_rpstblastn_wrapper/0.0.22  Missing components: Functional test definitions missing for ncbi_rpstblastn_wrapper.
Tool id: ncbi_blastdbcmd_info  Tool version: 0.0.22  Tool guid: testtoolshed.g2.bx.psu.edu/repos/peterjc/ncbi_blast_plus/ncbi_blastdbcmd_info/0.0.22  Missing components: Functional test definitions missing for ncbi_blastdbcmd_info.
Tool id: ncbi_makeblastdb  Tool version: 0.0.22  Tool guid: testtoolshed.g2.bx.psu.edu/repos/peterjc/ncbi_blast_plus/ncbi_makeblastdb/0.0.22  Missing components: One or more test files are missing for tool ncbi_makeblastdb: four_human_proteins.fasta.log
Tool id: ncbi_blastdbcmd_wrapper  Tool version: 0.0.22  Tool guid: testtoolshed.g2.bx.psu.edu/repos/peterjc/ncbi_blast_plus/ncbi_blastdbcmd_wrapper/0.0.22  Missing components: Functional test definitions missing for ncbi_blastdbcmd_wrapper.
Tool id: ncbi_rpsblast_wrapper  Tool version: 0.0.22  Tool guid: testtoolshed.g2.bx.psu.edu/repos/peterjc/ncbi_blast_plus/ncbi_rpsblast_wrapper/0.0.22  Missing components: Functional test definitions missing for ncbi_rpsblast_wrapper.
I saw a recent commit on galaxy-central addressing a test time display - that may have explained why yesterday the Test Tool Shed was claiming the test results were from four days ago.
Yes, there is a flag that can be set on a repository to exclude it from being tested by the install and test framework. This flag had been inappropriately set on many repositories, including the one to which you are referring. That problem has been corrected, so testing should be more stable.
I've also enhanced the "Tool test results" components to store a configurable number of test runs instead of just 1. The test tool shed is configured to store 5 test runs. So from here on you'll begin to see multiple containers in the Tool test results container
- something like this:
That sounds useful when the framework itself is changing rapidly, or if there is anything stochastic in a test failure.
Presumably this shows the last 5 tests against the current tool revision?
It displays the last 5 test runs against any installable revision that is tested. We are currently restricting testing to the latest installable revision, so the 5 test runs will be displayed for that revision only.
Thanks Greg, and enjoy Thanksgiving,
Thanks Peter, you too!
Peter
On Thu, Nov 28, 2013 at 5:16 PM, Greg Von Kuster greg@bx.psu.edu wrote:
Hello Peter,
On Nov 28, 2013, at 4:56 AM, Peter Cock p.j.a.cock@googlemail.com wrote:
Hi Greg,
Things look much better today on the Test Tool Shed, with most of my tools under "Latest revision: all tool tests pass" again - or as expected a few under "Latest revision: missing tool tests" due to not yet having full test coverage.
I've only spotted one glitch so far, missing test results for: http://testtoolshed.g2.bx.psu.edu/view/peterjc/ncbi_blast_plus/b2795652d2b4
The above repository was not displaying the missing test components due to a problem in the previous test runs which I fixed yesterday. The problem resulted in test result data that could not be displayed.
Perhaps a fallback error message would be useful here (e.g. "Test failed but unable to display data"), which would be better than the "missing" test results?
When I introduced the new feature for storing and displaying multiple test runs, I decided to not eliminate the bad data, but just to let enough test runs proceed that the bad data would eventually be eliminated. I forced new test runs, and now things should be displaying as expected. Here is the display for the above repository for the latest run this morning:
Thank you - that alerted me to a missing set of output files for a recently added test, hopefully addressed in this revision: http://testtoolshed.g2.bx.psu.edu/view/peterjc/ncbi_blast_plus/1b8a92016045
But it's not all good news - another test is now listed as failing, but with missing test results: http://testtoolshed.g2.bx.psu.edu/view/peterjc/get_orfs_or_cdss/09b5b1af9b8f
Thanks,
Peter
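The fallback suggested above could be as simple as wrapping the rendering step. A hypothetical sketch, with render_test_results standing in for whatever the Tool Shed actually calls:

# Hypothetical sketch: never show an empty "Tool test results" section just
# because the stored result data cannot be rendered.
def render_test_results(repository_metadata):
    # Stand-in for the real renderer; assume it may raise on malformed data.
    if not repository_metadata.get("tool_test_results"):
        raise ValueError("no displayable test result data")
    return repository_metadata["tool_test_results"]

def show_test_results(repository_metadata):
    try:
        return render_test_results(repository_metadata)
    except Exception as err:
        return "Test failed but unable to display data (%s)" % err

print(show_test_results({"tool_test_results": None}))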
On Thu, Nov 28, 2013 at 5:30 PM, Peter Cock p.j.a.cock@googlemail.com wrote:
Thank you - that alerted me to a missing set of output files for a recently added test, hopefully addressed in this revision: http://testtoolshed.g2.bx.psu.edu/view/peterjc/ncbi_blast_plus/1b8a92016045
But it's not all good news - another test is now listed as failing, but with missing test results: http://testtoolshed.g2.bx.psu.edu/view/peterjc/get_orfs_or_cdss/09b5b1af9b8f
Another little oddity, the three test results for mira_bait.xml are missing (although it correctly tells me mira_4_0_mapping and mira_4_0_de_novo have no tests):
http://testtoolshed.g2.bx.psu.edu/view/peterjc/mira4_assembler/302d13490b23
Peter
Hi Peter,
On Nov 28, 2013, at 12:56 PM, Peter Cock p.j.a.cock@googlemail.com wrote:
On Thu, Nov 28, 2013 at 5:30 PM, Peter Cock p.j.a.cock@googlemail.com wrote:
Thank you - that alerted me to a missing set of output files for a recently added test, hopefully addressed in this revision: http://testtoolshed.g2.bx.psu.edu/view/peterjc/ncbi_blast_plus/1b8a92016045
But it's not all good news - another test is now listed as failing, but with missing test results: http://testtoolshed.g2.bx.psu.edu/view/peterjc/get_orfs_or_cdss/09b5b1af9b8f
Another little oddity, the three test results for mira_bait.xml are missing (although it correctly tells me mira_4_0_mapping and mira_4_0_de_novo have no tests):
http://testtoolshed.g2.bx.psu.edu/view/peterjc/mira4_assembler/302d13490b23
Here is what I am seeing for the missing tests. Is this information correct? If not, what is incorrect?
Automated tool test results
Test runs
2013-11-28 13:02:42
Automated test environment
Time tested: 2013-11-28 13:02:42
System:
Architecture:
Python version:
Galaxy revision:
Galaxy database version:
Tool shed revision: 11549:5aede225ebef
Tool shed database version: 21
Tool shed mercurial version: 2.2.3
Tools missing tests or test data
Tool id: mira_4_0_mapping  Tool version: 0.0.2  Tool guid: testtoolshed.g2.bx.psu.edu/repos/peterjc/mira4_assembler/mira_4_0_mapping/0.0.2  Missing components: Functional test definitions missing for mira_4_0_mapping.
Tool id: mira_4_0_de_novo  Tool version: 0.0.2  Tool guid: testtoolshed.g2.bx.psu.edu/repos/peterjc/mira4_assembler/mira_4_0_de_novo/0.0.2  Missing components: Functional test definitions missing for mira_4_0_de_novo.
2013-11-28 11:29:49
Automated test environment
Time tested: 2013-11-28 11:29:49
System:
Architecture:
Python version:
Galaxy revision:
Galaxy database version:
Tool shed revision: 11549:5aede225ebef
Tool shed database version: 21
Tool shed mercurial version: 2.2.3
Tools missing tests or test data
Tool id: mira_4_0_mapping  Tool version: 0.0.2  Tool guid: testtoolshed.g2.bx.psu.edu/repos/peterjc/mira4_assembler/mira_4_0_mapping/0.0.2  Missing components: Functional test definitions missing for mira_4_0_mapping.
Tool id: mira_4_0_de_novo  Tool version: 0.0.2  Tool guid: testtoolshed.g2.bx.psu.edu/repos/peterjc/mira4_assembler/mira_4_0_de_novo/0.0.2  Missing components: Functional test definitions missing for mira_4_0_de_novo.
Peter
Hi Peter,
On Nov 28, 2013, at 12:30 PM, Peter Cock p.j.a.cock@googlemail.com wrote:
On Thu, Nov 28, 2013 at 5:16 PM, Greg Von Kuster greg@bx.psu.edu wrote:
Hello Peter,
On Nov 28, 2013, at 4:56 AM, Peter Cock p.j.a.cock@googlemail.com wrote:
Hi Greg,
Things look much better today on the Test Tool Shed, with most of my tools under "Latest revision: all tool tests pass" again - or as expected a few under "Latest revision: missing tool tests" due to not yet having full test coverage.
I've only spotted one glitch so far, missing test results for: http://testtoolshed.g2.bx.psu.edu/view/peterjc/ncbi_blast_plus/b2795652d2b4
The above repository was not displaying the missing test components due to a problem in the previous test runs which I fixed yesterday. The problem resulted in test result data that could not be displayed.
Perhaps a fallback error message would be useful here (e.g. "Test failed but unable to display data"), which would be better than the "missing" test results?
When I introduced the new feature for storing and displaying multiple test runs, I decided to not eliminate the bad data, but just to let enough test runs proceed that the bad data would eventually be eliminated. I forced new test runs, and now things should be displaying as expected. Here is the display for the above repository for the latest run this morning:
Thank you - that alerted me to a missing set of output files for a recently added test, hopefully addressed in this revision: http://testtoolshed.g2.bx.psu.edu/view/peterjc/ncbi_blast_plus/1b8a92016045
But it's not all good news - another test is now listed as failing, but with missing test results: http://testtoolshed.g2.bx.psu.edu/view/peterjc/get_orfs_or_cdss/09b5b1af9b8f
The bad data should now be eliminated, and there are 3 test runs displaying for this one. Do these look correct to you? The oldest test run has a date of 'unknown' because I had not implemented the date feature at that point.
Automated tool test results
Test runs
2013-11-28 14:50:44
Automated test environment
Time tested: 2013-11-28 14:50:44
System: Linux 3.8.0-30-generic
Architecture: x86_64
Python version: 2.7.4
Galaxy revision: 11549:5aede225ebef
Galaxy database version: 117
Tool shed revision: 11549:5aede225ebef
Tool shed database version: 21
Tool shed mercurial version: 2.2.3
Tests that passed successfully
Tool id: get_orfs_or_cdss  Tool version: get_orfs_or_cdss  Test: test_tool_000000 (functional.test_toolbox.TestForTool_testtoolshed.g2.bx.psu.edu/repos/peterjc/get_orfs_or_cdss/get_orfs_or_cdss/0.0.5)
Tool id: get_orfs_or_cdss  Tool version: get_orfs_or_cdss  Test: test_tool_000002 (functional.test_toolbox.TestForTool_testtoolshed.g2.bx.psu.edu/repos/peterjc/get_orfs_or_cdss/get_orfs_or_cdss/0.0.5)
Tool id: get_orfs_or_cdss  Tool version: get_orfs_or_cdss  Test: test_tool_000003 (functional.test_toolbox.TestForTool_testtoolshed.g2.bx.psu.edu/repos/peterjc/get_orfs_or_cdss/get_orfs_or_cdss/0.0.5)
Tests that failed
Tool id: get_orfs_or_cdss  Tool version: get_orfs_or_cdss  Test: test_tool_000001 (functional.test_toolbox.TestForTool_testtoolshed.g2.bx.psu.edu/repos/peterjc/get_orfs_or_cdss/get_orfs_or_cdss/0.0.5)
Stderr:
Traceback:
Traceback (most recent call last):
File "/var/opt/buildslaves/buildslave-ec2-2/buildbot-install-test-test-tool-shed-py27/build/test/functional/test_toolbox.py", line 101, in test_tool self.do_it( td )
File "/var/opt/buildslaves/buildslave-ec2-2/buildbot-install-test-test-tool-shed-py27/build/test/functional/test_toolbox.py", line 31, in do_it self.__verify_outputs( testdef, test_history, shed_tool_id, data_list, galaxy_interactor )
File "/var/opt/buildslaves/buildslave-ec2-2/buildbot-install-test-test-tool-shed-py27/build/test/functional/test_toolbox.py", line 64, in __verify_outputs galaxy_interactor.verify_output( history, output_data, outfile, attributes=attributes, shed_tool_id=shed_tool_id, maxseconds=maxseconds )
File "/var/opt/buildslaves/buildslave-ec2-2/buildbot-install-test-test-tool-shed-py27/build/test/base/interactor.py", line 303, in verify_output self.twill_test_case.verify_dataset_correctness( outfile, hid=hid, attributes=attributes, shed_tool_id=shed_tool_id, maxseconds=maxseconds )
File "/var/opt/buildslaves/buildslave-ec2-2/buildbot-install-test-test-tool-shed-py27/build/test/base/twilltestcase.py", line 814, in verify_dataset_correctness self.wait( maxseconds=maxseconds ) # wait for job to finish
File "/var/opt/buildslaves/buildslave-ec2-2/buildbot-install-test-test-tool-shed-py27/build/test/base/twilltestcase.py", line 1425, in wait return self.wait_for(lambda: self.get_running_datasets(), **kwds)
File "/var/opt/buildslaves/buildslave-ec2-2/buildbot-install-test-test-tool-shed-py27/build/test/base/twilltestcase.py", line 1440, in wait_for assert slept < maxseconds
AssertionError:
2013-11-28 09:24:20
Automated test environment
Time tested: 2013-11-28 09:24:20
System: Linux 3.8.0-30-generic
Architecture: x86_64
Python version: 2.7.4
Galaxy revision: 11548:7756b29bea29
Galaxy database version: 117
Tool shed revision: 11544:8b613225f1de
Tool shed database version: 21
Tool shed mercurial version: 2.2.3
Tests that passed successfully
Tool id: get_orfs_or_cdss  Tool version: get_orfs_or_cdss  Test: test_tool_000000 (functional.test_toolbox.TestForTool_testtoolshed.g2.bx.psu.edu/repos/peterjc/get_orfs_or_cdss/get_orfs_or_cdss/0.0.5)
Tool id: get_orfs_or_cdss  Tool version: get_orfs_or_cdss  Test: test_tool_000001 (functional.test_toolbox.TestForTool_testtoolshed.g2.bx.psu.edu/repos/peterjc/get_orfs_or_cdss/get_orfs_or_cdss/0.0.5)
Tool id: get_orfs_or_cdss  Tool version: get_orfs_or_cdss  Test: test_tool_000002 (functional.test_toolbox.TestForTool_testtoolshed.g2.bx.psu.edu/repos/peterjc/get_orfs_or_cdss/get_orfs_or_cdss/0.0.5)
Tool id: get_orfs_or_cdss  Tool version: get_orfs_or_cdss  Test: test_tool_000003 (functional.test_toolbox.TestForTool_testtoolshed.g2.bx.psu.edu/repos/peterjc/get_orfs_or_cdss/get_orfs_or_cdss/0.0.5)
unknown_2
Automated test environment
Time tested: unknown_2
System: Linux 3.8.0-30-generic
Architecture: x86_64
Python version: 2.7.4
Galaxy revision: 11526:c0384bad246d
Galaxy database version: 117
Tool shed revision:
Tool shed database version:
Tool shed mercurial version:
Tests that passed successfully
Tool id: get_orfs_or_cdss  Tool version: get_orfs_or_cdss  Test: test_tool_000000 (functional.test_toolbox.TestForTool_testtoolshed.g2.bx.psu.edu/repos/peterjc/get_orfs_or_cdss/get_orfs_or_cdss/0.0.5)
Tool id: get_orfs_or_cdss  Tool version: get_orfs_or_cdss  Test: test_tool_000001 (functional.test_toolbox.TestForTool_testtoolshed.g2.bx.psu.edu/repos/peterjc/get_orfs_or_cdss/get_orfs_or_cdss/0.0.5)
Tool id: get_orfs_or_cdss  Tool version: get_orfs_or_cdss  Test: test_tool_000002 (functional.test_toolbox.TestForTool_testtoolshed.g2.bx.psu.edu/repos/peterjc/get_orfs_or_cdss/get_orfs_or_cdss/0.0.5)
Tests that failed
Tool id: get_orfs_or_cdss  Tool version: get_orfs_or_cdss  Test: test_tool_000003 (functional.test_toolbox.TestForTool_testtoolshed.g2.bx.psu.edu/repos/peterjc/get_orfs_or_cdss/get_orfs_or_cdss/0.0.5)
Stderr:
Traceback:
Traceback (most recent call last):
File "/var/opt/buildslaves/buildslave-ec2-2/buildbot-install-test-test-tool-shed-py27/build/test/functional/test_toolbox.py", line 101, in test_tool self.do_it( td )
File "/var/opt/buildslaves/buildslave-ec2-2/buildbot-install-test-test-tool-shed-py27/build/test/functional/test_toolbox.py", line 31, in do_it self.__verify_outputs( testdef, test_history, shed_tool_id, data_list, galaxy_interactor )
File "/var/opt/buildslaves/buildslave-ec2-2/buildbot-install-test-test-tool-shed-py27/build/test/functional/test_toolbox.py", line 64, in __verify_outputs galaxy_interactor.verify_output( history, output_data, outfile, attributes=attributes, shed_tool_id=shed_tool_id, maxseconds=maxseconds )
File "/var/opt/buildslaves/buildslave-ec2-2/buildbot-install-test-test-tool-shed-py27/build/test/base/interactor.py", line 303, in verify_output self.twill_test_case.verify_dataset_correctness( outfile, hid=hid, attributes=attributes, shed_tool_id=shed_tool_id, maxseconds=maxseconds )
File "/var/opt/buildslaves/buildslave-ec2-2/buildbot-install-test-test-tool-shed-py27/build/test/base/twilltestcase.py", line 814, in verify_dataset_correctness self.wait( maxseconds=maxseconds ) # wait for job to finish
File "/var/opt/buildslaves/buildslave-ec2-2/buildbot-install-test-test-tool-shed-py27/build/test/base/twilltestcase.py", line 1425, in wait return self.wait_for(lambda: self.get_running_datasets(), **kwds)
File "/var/opt/buildslaves/buildslave-ec2-2/buildbot-install-test-test-tool-shed-py27/build/test/base/twilltestcase.py", line 1440, in wait_for assert slept < maxseconds
AssertionError
Thanks,
Peter
On Thu, Nov 28, 2013 at 7:34 PM, Greg Von Kuster greg@bx.psu.edu wrote:
Hi Peter,
But its not all good news - another test is now listed as failing, but with missing test results: http://testtoolshed.g2.bx.psu.edu/view/peterjc/get_orfs_or_cdss/09b5b1af9b8f
The bad data should now be eliminated, and there are 3 test runs displaying for this one. Do these look correct to you? The oldest test run has a date of 'unknown' because I had not implemented the date feature at that point.
Most of the tests passed (as expected), but a couple of times there was some kind of timeout error, e.g.
Traceback (most recent call last):
File "/var/opt/buildslaves/buildslave-ec2-2/buildbot-install-test-test-tool-shed-py27/build/test/functional/test_toolbox.py", line 101, in test_tool self.do_it( td )
File "/var/opt/buildslaves/buildslave-ec2-2/buildbot-install-test-test-tool-shed-py27/build/test/functional/test_toolbox.py", line 31, in do_it self.__verify_outputs( testdef, test_history, shed_tool_id, data_list, galaxy_interactor )
File "/var/opt/buildslaves/buildslave-ec2-2/buildbot-install-test-test-tool-shed-py27/build/test/functional/test_toolbox.py", line 64, in __verify_outputs galaxy_interactor.verify_output( history, output_data, outfile, attributes=attributes, shed_tool_id=shed_tool_id, maxseconds=maxseconds )
File "/var/opt/buildslaves/buildslave-ec2-2/buildbot-install-test-test-tool-shed-py27/build/test/base/interactor.py", line 303, in verify_output self.twill_test_case.verify_dataset_correctness( outfile, hid=hid, attributes=attributes, shed_tool_id=shed_tool_id, maxseconds=maxseconds )
File "/var/opt/buildslaves/buildslave-ec2-2/buildbot-install-test-test-tool-shed-py27/build/test/base/twilltestcase.py", line 814, in verify_dataset_correctness self.wait( maxseconds=maxseconds ) # wait for job to finish
File "/var/opt/buildslaves/buildslave-ec2-2/buildbot-install-test-test-tool-shed-py27/build/test/base/twilltestcase.py", line 1425, in wait return self.wait_for(lambda: self.get_running_datasets(), **kwds)
File "/var/opt/buildslaves/buildslave-ec2-2/buildbot-install-test-test-tool-shed-py27/build/test/base/twilltestcase.py", line 1440, in wait_for assert slept < maxseconds
AssertionError:
Regards,
Peter
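For anyone reading along: that AssertionError comes from a simple polling loop, i.e. the job was still listed as running when the per-test time limit ran out. A stripped-down version of the pattern (not the actual twilltestcase code) looks like this:

# Sketch of the wait/wait_for pattern behind "assert slept < maxseconds":
# poll until the condition clears or the time budget is exhausted.
import time

def wait_for(still_busy, maxseconds=120, interval=1):
    slept = 0
    while still_busy() and slept < maxseconds:
        time.sleep(interval)
        slept += interval
    # A job still running past the deadline surfaces as a bare AssertionError,
    # which is what shows up in the test output above.
    assert slept < maxseconds

# e.g. wait_for(lambda: get_running_datasets(), maxseconds=600)
# where get_running_datasets() stands for the framework's own check (hypothetical here).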