Peter,

Yes, the functional test suite is a bit on the slow side, and one of my long-term goals has been to improve the performance as best I can.

--Dave B.

On 09/16/2013 07:24 AM, Peter Cock wrote:
Hi all,
I'd like to echo Florent's concern that run_functional_tests.sh is too slow, and that this discourages tool authors from adding more tests to their tools:
https://trello.com/c/wL21d2do/1017-functional-tests-take-too-long
(And encourage people to vote up that issue ;) )
Florent has identified one clear bottleneck: the creation of a fresh SQLite database on each run, upon which the growing number of schema migrations must be performed. Could we not cache an empty but up-to-date copy of this database?
i.e. Something like this:
1. run_functional_tests.sh starts.
2. If empty_test_sqlite_database.db exists, copy it; otherwise create a new test SQLite database as now.
3. Check the schema version.
4. If the temporary SQLite database is out of date, run the migration, then save a copy of the now up-to-date database as empty_test_sqlite_database.db.
5. Run the tests.
And that would be very easy to automate: set the environment variable GALAXY_TEST_DBURI to point at the empty, migrated database, and add a few aliases or scripts to replace the test database file with the "template" file on each run.
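The caching scheme above could be sketched roughly as follows. This is only an illustration of the idea, not real Galaxy code: the file names and the migrate() placeholder (standing in for Galaxy's actual, slow schema-migration step) are assumptions.

```python
# Hypothetical sketch: reuse a pre-migrated SQLite "template" database so the
# expensive schema migrations only run once. File names and migrate() are
# placeholders, not actual Galaxy scripts.
import os
import shutil
import sqlite3

TEMPLATE = "empty_test_sqlite_database.db"
TEST_DB = "test.db"


def migrate(db_path):
    """Placeholder for Galaxy's schema migration step (the slow part)."""
    with sqlite3.connect(db_path) as conn:
        conn.execute("CREATE TABLE IF NOT EXISTS migrate_version (version INTEGER)")
        conn.execute("INSERT INTO migrate_version VALUES (1)")


def prepare_test_db():
    if os.path.exists(TEMPLATE):
        # Fast path: start from the cached, already-migrated template.
        shutil.copyfile(TEMPLATE, TEST_DB)
    else:
        # Slow path: create a fresh database and run all the migrations once,
        # then cache the result for subsequent runs.
        if os.path.exists(TEST_DB):
            os.remove(TEST_DB)
        migrate(TEST_DB)
        shutil.copyfile(TEST_DB, TEMPLATE)
    # Point the functional tests at the prepared database.
    os.environ["GALAXY_TEST_DBURI"] = "sqlite:///" + TEST_DB


prepare_test_db()
print(os.environ["GALAXY_TEST_DBURI"])
```

On the first run this takes the slow path and saves the template; every later run just copies the template file, skipping the migrations entirely.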
Perhaps the empty SQLite database could even be cached in BitBucket too, for a faster first test run with a clean checkout?
I'm not sure about that idea; it would feel a bit like adding a non-essential file to the codebase which existing code can already generate programmatically.
Separately, running the tests themselves seems overly slow - can anything be tweaked here? For example, is there any point executing the external set_meta script in the test environment?
The switch to the external set_meta script was done because setting metadata internally was becoming computationally expensive. As I understand it, switching to internal metadata would actually lead to poorer performance in the functional tests.
Regards,
Peter
P.S. On a related note, my achievement last Friday was to get TravisCI doing continuous integration testing of a GitHub repository of Galaxy tools:
https://travis-ci.org/peterjc/galaxy_blast
https://github.com/peterjc/galaxy_blast
For those not familiar with this system, the idea is that via a special .travis.yml configuration file, each time new commits are pushed to GitHub, the latest code is checked out and tested on a virtual machine. i.e. Continuous in the sense of running the tests after each change, rather than just once a night.
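For anyone curious what such a configuration looks like, a minimal .travis.yml might resemble the sketch below. The install and script commands are placeholders (assumptions), standing in for whatever steps fetch a Galaxy snapshot, install the tools, and invoke the functional tests:

```yaml
# Hypothetical minimal .travis.yml sketch; the install/script lines are
# placeholders, not the actual commands used in the galaxy_blast repository.
language: python
python:
  - "2.7"
install:
  - echo "placeholder: download a Galaxy snapshot and copy the tool files into it"
script:
  - echo "placeholder: run the functional tests for the tools under test"
```

Travis CI then runs the install and script phases on a fresh virtual machine for every push.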
Right now the tests for the BLAST+ suite and associated tool wrappers like Blast2GO take from 15 to 20 minutes, which I feel could be much improved.