value_from_basic - KeyError: 'base_name', was: Tests not being run on toolsheds?
Hi Dave,

The following looks like a regression on the Test Tool Shed; the tests pass locally (using a recent revision), and on TravisCI using the current galaxy dev branch on GitHub: https://travis-ci.org/peterjc/pico_galaxy/builds/54870500

Problem tool: https://testtoolshed.g2.bx.psu.edu/view/peterjc/sample_seqs

This may be unrelated, but by chance this is the first time I have uploaded a tool to the Tool Shed which uses the new functionality to test the stdout/stderr strings, and more importantly it includes a test expected to fail via <test expect_failure="true" expect_exit_code="1">

Revision on my development repository: https://github.com/peterjc/pico_galaxy/commit/55ebb308b911b4acef912cc3b03f43...

Test Tool Shed output from last night:

Automated test environment
Time tested: 2015-03-18 02:46:55
System: Linux 3.13.0-36-generic
Architecture: x86_64
Python version: 2.7.6
Galaxy revision: 17050:6395e7035143
Galaxy database version: 128
Tool shed revision: 16867:0468d285f89c
Tool shed database version: 25
Tool shed mercurial version: 3.2.4

Tests that failed
Tool id: sample_seqs
Tool version: sample_seqs
Test: test_tool_000000 (functional.test_toolbox.TestForTool_testtoolshed.g2.bx.psu.edu/repos/peterjc/sample_seqs/sample_seqs/0.2.1)

Stderr:
Traceback:
Traceback (most recent call last):
  File "/tmp/buildslave/buildbot-install-test-test-tool-shed-py27/build/test/functional/test_toolbox.py", line 268, in test_tool
    self.do_it( td )
  File "/tmp/buildslave/buildbot-install-test-test-tool-shed-py27/build/test/functional/test_toolbox.py", line 37, in do_it
    stage_data_in_history( galaxy_interactor, testdef.test_data(), test_history, shed_tool_id )
  File "/tmp/buildslave/buildbot-install-test-test-tool-shed-py27/build/test/base/interactor.py", line 38, in stage_data_in_history
    upload_wait()
  File "/tmp/buildslave/buildbot-install-test-test-tool-shed-py27/build/test/base/interactor.py", line 279, in wait
    while not self.__history_ready( history_id ):
  File "/tmp/buildslave/buildbot-install-test-test-tool-shed-py27/build/test/base/interactor.py", line 297, in __history_ready
    return self._state_ready( state, error_msg="History in error state." )
  File "/tmp/buildslave/buildbot-install-test-test-tool-shed-py27/build/test/base/interactor.py", line 356, in _state_ready
    raise Exception( error_msg )
Exception: History in error state.

Traceback (most recent call last):
  File "/tmp/buildslave/buildbot-install-test-test-tool-shed-py27/build/lib/galaxy/jobs/runners/__init__.py", line 158, in prepare_job
    job_wrapper.prepare()
  File "/tmp/buildslave/buildbot-install-test-test-tool-shed-py27/build/lib/galaxy/jobs/__init__.py", line 828, in prepare
    tool_evaluator.set_compute_environment( compute_environment, get_special=get_special )
  File "/tmp/buildslave/buildbot-install-test-test-tool-shed-py27/build/lib/galaxy/tools/evaluation.py", line 53, in set_compute_environment
    incoming = self.tool.params_from_strings( incoming, self.app )
  File "/tmp/buildslave/buildbot-install-test-test-tool-shed-py27/build/lib/galaxy/tools/__init__.py", line 1873, in params_from_strings
    return params_from_strings( self.inputs, params, app, ignore_errors )
  File "/tmp/buildslave/buildbot-install-test-test-tool-shed-py27/build/lib/galaxy/tools/parameters/__init__.py", line 103, in params_from_strings
    value = params[key].value_from_basic( value, app, ignore_errors )
  File "/tmp/buildslave/buildbot-install-test-test-tool-shed-py27/build/lib/galaxy/tools/parameters/grouping.py", line 512, in value_from_basic
    raise e
KeyError: 'base_name'

requests.packages.urllib3.connectionpool: DEBUG: "GET /api/histories/c33ffb67c9176a6c?key=ae52a59b996b793c4c06a11fc3c33481 HTTP/1.1" 200 None
requests.packages.urllib3.connectionpool: INFO: Starting new HTTP connection (1): 127.0.0.1
galaxy.datatypes.metadata: DEBUG: Cleaning up external metadata files
requests.packages.urllib3.connectionpool: DEBUG: "GET /api/histories/c33ffb67c9176a6c?key=ae52a59b996b793c4c06a11fc3c33481 HTTP/1.1" 200 None
requests.packages.urllib3.connectionpool: INFO: Starting new HTTP connection (1): 127.0.0.1
requests.packages.urllib3.connectionpool: DEBUG: "GET /api/histories/c33ffb67c9176a6c/contents?key=ae52a59b996b793c4c06a11fc3c33481 HTTP/1.1" 200 None
requests.packages.urllib3.connectionpool: INFO: Starting new HTTP connection (1): 127.0.0.1
requests.packages.urllib3.connectionpool: DEBUG: "GET /api/histories/c33ffb67c9176a6c/contents/d80a923b055ee8e9?key=ae52a59b996b793c4c06a11fc3c33481 HTTP/1.1" 200 None
requests.packages.urllib3.connectionpool: INFO: Starting new HTTP connection (1): 127.0.0.1
requests.packages.urllib3.connectionpool: DEBUG: "GET /api/histories/c33ffb67c9176a6c/contents/d80a923b055ee8e9/provenance?key=ae52a59b996b793c4c06a11fc3c33481 HTTP/1.1" 200 None

(All the tests for this tool seem to have failed the same way)

Regards,

Peter

On Wed, Mar 18, 2015 at 1:46 PM, Dave Bouvier <dave@bx.psu.edu> wrote:
Gentlemen,
The issue with the nightly testing was due to a stalled test run blocking subsequent tests. I've cleared out that blockage and a manual test run appears to have completed successfully, as should future automated test runs. As always, feel free to let us know if you encounter any additional inexplicable behavior.
--Dave B.
On 03/18/2015 07:00 AM, Peter Cock wrote:
... I can also confirm that the Test Tool Shed example I gave was tested overnight, although it looks like a novel failure: https://testtoolshed.g2.bx.psu.edu/view/peterjc/sample_seqs
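For reference, the new-style test Peter describes above is written in the tool wrapper's <tests> section along the lines of the sketch below; the parameter names and the asserted stderr text are illustrative placeholders, not taken from the actual sample_seqs wrapper.

<tests>
    <!-- Sketch of an expected-failure test: the tool should exit with
         code 1, and the stderr text is checked instead of an output
         file. Parameter names and message text are placeholders. -->
    <test expect_failure="true" expect_exit_code="1">
        <param name="input_file" value="example.fasta" ftype="fasta"/>
        <param name="every_n" value="0"/>
        <assert_stderr>
            <has_text text="Sampling interval must be at least one"/>
        </assert_stderr>
    </test>
</tests>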
Hi Dave,

Now that you've fixed some of the test backlog, the bad news is that this issue I reported last week appears to be a major regression affecting multiple tools on the Test Tool Shed:

https://testtoolshed.g2.bx.psu.edu/view/peterjc/align_back_trans
https://testtoolshed.g2.bx.psu.edu/view/peterjc/blast2go
https://testtoolshed.g2.bx.psu.edu/view/peterjc/blastxml_to_top_descr
https://testtoolshed.g2.bx.psu.edu/view/peterjc/clinod
...
https://testtoolshed.g2.bx.psu.edu/view/peterjc/nlstradamus
...
https://testtoolshed.g2.bx.psu.edu/view/peterjc/sample_seqs
...
https://testtoolshed.g2.bx.psu.edu/view/peterjc/seq_select_by_id

These are all recent test runs from 2015-03-18. Note that some tool tests from the same date are passing, e.g.

https://testtoolshed.g2.bx.psu.edu/view/peterjc/blast_rbh
https://testtoolshed.g2.bx.psu.edu/view/peterjc/seq_composition
https://testtoolshed.g2.bx.psu.edu/view/peterjc/mummer

I have not yet spotted any pattern in this division. (However, this is clearly not linked to the expect_failure test I'd added to the sample_seqs tool, as I speculated last week.)

Regards,

Peter

On Wed, Mar 18, 2015 at 2:14 PM, Peter Cock <p.j.a.cock@googlemail.com> wrote:
Hi Dave,
The following looks like a regression on the Test Tool Shed, the tests pass locally (using a recent revision), and on TravisCI using the current galaxy dev branch on GitHub: https://travis-ci.org/peterjc/pico_galaxy/builds/54870500
Problem tool: https://testtoolshed.g2.bx.psu.edu/view/peterjc/sample_seqs
This may be unrelated, but by chance this is the first time I have uploaded a tool to the Tool Shed which uses the new functionality to test the stdout/stderr strings, and more importantly it includes a test expected to fail via <test expect_failure="true" expect_exit_code="1">
Revision on my development repository: https://github.com/peterjc/pico_galaxy/commit/55ebb308b911b4acef912cc3b03f43...
Test Tool Shed output from last night:
[...]
(All the tests for this tool seem to have failed the same way)
Regards,
Peter
On Wed, Mar 18, 2015 at 1:46 PM, Dave Bouvier <dave@bx.psu.edu> wrote:
Gentlemen,
The issue with the nightly testing was due to a stalled test run blocking subsequent tests. I've cleared out that blockage and a manual test run appears to have completed successfully, as should future automated test runs. As always, feel free to let us know if you encounter any additional inexplicable behavior.
--Dave B.
On 03/18/2015 07:00 AM, Peter Cock wrote:
... I can also confirm that the Test Tool Shed example I gave was tested overnight, although it looks like a novel failure: https://testtoolshed.g2.bx.psu.edu/view/peterjc/sample_seqs
It seems to be happening during upload - so it is not surprising that this error would affect multiple tools. I have checked out the exact revision registered in the tests and the uploads seem fine to me.

I am having trouble imagining a deployment problem short of an invalid upload tool that might result in this error. Dave B, is it possible that a problematic datatype is being installed by some repository and affecting unrelated tools?

-John

On Mon, Mar 23, 2015 at 8:27 AM, Peter Cock <p.j.a.cock@googlemail.com> wrote:
Hi Dave,
Now that you've fixed some of the test backlog, the bad news is this issue I reported last week appears to be a major regression affecting multiple tools on the Test Tool Shed:
https://testtoolshed.g2.bx.psu.edu/view/peterjc/align_back_trans https://testtoolshed.g2.bx.psu.edu/view/peterjc/blast2go https://testtoolshed.g2.bx.psu.edu/view/peterjc/blastxml_to_top_descr https://testtoolshed.g2.bx.psu.edu/view/peterjc/clinod ... https://testtoolshed.g2.bx.psu.edu/view/peterjc/nlstradamus ... https://testtoolshed.g2.bx.psu.edu/view/peterjc/sample_seqs ... https://testtoolshed.g2.bx.psu.edu/view/peterjc/seq_select_by_id
These are all recent test runs from 2015-03-18. Note that some tool tests from the same date are passing, e.g.
https://testtoolshed.g2.bx.psu.edu/view/peterjc/blast_rbh https://testtoolshed.g2.bx.psu.edu/view/peterjc/seq_composition https://testtoolshed.g2.bx.psu.edu/view/peterjc/mummer
I have not yet spotted any pattern in this division.
(However this is clearly not linked to the expect_failure test I'd added to the sample_seqs tool, as I speculated last week.)
Regards,
Peter
[...]
On Mon, Mar 23, 2015 at 1:07 PM, John Chilton <jmchilton@gmail.com> wrote:
It seems to be happening during upload - so it is not surprising that this error would affect multiple tools. I have checked out the exact revision registered in the tests and the uploads seem fine to me.
You mean during the test-data upload at the start of running a test?
I am having trouble imagining a deployment problem short of an invalid upload tool that might result in this error. Dave B, is it possible that a problematic datatype is being installed by some repository and affecting unrelated tools?
That sounds like a believable scenario. However, if this was a problem in a specific datatype (even one not being used, but referenced via the sniffer configuration), I'm at a loss to explain the division between failures and passes, e.g. these both take just FASTA input and return just tabular output:

Failed: https://testtoolshed.g2.bx.psu.edu/view/peterjc/nlstradamus
Passed: https://testtoolshed.g2.bx.psu.edu/view/peterjc/blast_rbh

I suspect this is specific to select parameters (from the exception), and my hunch is that this is a regression from the recent work on grouping parameters (again, pointed to by the exception).

Peter
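To make the suspected failure mode concrete: the traceback shows the job failing while Galaxy rehydrates the saved tool parameters (params_from_strings calling value_from_basic) before the tool itself ever runs. The sketch below is a deliberately simplified, hypothetical illustration of that pattern - it is not Galaxy's actual grouping.py code, and the class and key names are invented - showing how a stored state dict missing an expected key surfaces as a KeyError during job preparation.

# Simplified, hypothetical sketch (not Galaxy's real grouping.py) of how a
# grouped/repeat tool parameter might rebuild its value from the stored
# ("basic") dict form when a job is prepared, and how a missing key such
# as 'base_name' would surface as a KeyError at that point.

class ToyRepeatParameter(object):
    def __init__(self, name, child_names):
        self.name = name                # e.g. "columns"
        self.child_names = child_names  # parameters expected in each block

    def value_from_basic(self, value, app=None, ignore_errors=False):
        rval = []
        try:
            for stored_block in value:   # one plain dict per repeat block
                block = {}
                for child_name in self.child_names:
                    # If the stored dict was written by a different code
                    # path (or schema) than the one now reading it, an
                    # expected key may be absent, raising KeyError here.
                    block[child_name] = stored_block[child_name]
                rval.append(block)
        except Exception as e:
            if not ignore_errors:
                raise e
        return rval

# A stored state lacking the 'base_name' entry reproduces the symptom:
toy = ToyRepeatParameter("columns", ["base_name", "column"])
try:
    toy.value_from_basic([{"column": "c1"}])
except KeyError as err:
    print("Job preparation failed with KeyError: %s" % err)

In this toy version, passing ignore_errors=True would swallow the problem and return an empty value instead, mirroring the ignore_errors flag visible in the real traceback.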
On Mon, Mar 23, 2015 at 12:27 PM, Peter Cock <p.j.a.cock@googlemail.com> wrote:
[...]
Hi Dave,

These tools do not seem to have been tested on the TestToolShed since 2015-03-18. Does something need resetting/unblocking again?

Regards,

Peter
participants (2)
- John Chilton
- Peter Cock