I've moved this discussion to the galaxy-dev list since it pertains to a
local installation of Galaxy.
Responses to your questions follow, in-line.
Yan Luo wrote:
> Dear Sir,
> (1) We installed Galaxy, but recently users can't register and get the
> following error; how can we fix it?
> Server error
> An error occurred. See the error logs for more information. (To turn debug on
> to display ...).
Since debug = False in universe_wsgi.ini, you should be able to find a
more detailed error message in the log file. If you start Galaxy with:
% sh run.sh --daemon
the default log file is 'paster.log' in Galaxy's root directory.
> (2) Could you please let me know if there is any command to stop galaxy?
If starting with the --daemon flag (as above), you can use:
% sh run.sh --stop-daemon
If running in the foreground, you can use Ctrl-C to terminate the
process. There is a recent bug whereby Ctrl-C is ineffective on some
platforms under Python 2.6; in that case you will have to kill/pkill
the process manually. We are working on a fix.
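For example, a hedged sketch of killing the process manually (the match pattern assumes Galaxy's default paster invocation; adjust it to your setup):

```shell
# Terminate a daemonised Galaxy when Ctrl-C / --stop-daemon fail.
# "-f" makes pkill match against the full command line; "|| true"
# keeps the command from failing when no matching process exists.
pkill -f "paster.py serve" || true
```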
> (3) If I reset universe_wsgi.ini file and want to set an administrator
> user(I can add a line in the above file), how can I get the password? Should
> I stop galaxy(See question 2) first? then run "./setup.sh" and "./run.sh".
setup.sh was only necessary before running Galaxy for the first
time, and that step has recently been removed entirely. If you are
referencing documentation that still refers to setup.sh, please let us
know so we can update it - I did notice it was still on the
"Production Server" page, so I removed it from there.
You no longer need to run setup.sh at all.
> (4) If I run "setup.sh", will a new file "universe_wsgi.ini" be generated?
> if I want to change this file,should I edit it before "run.sh" and after
> "setup.sh". Is it right?
setup.sh and its replacements in run.sh and the Galaxy application
itself never overwrite files; they only create files from the sample
files if they do not already exist.
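As a simplified sketch of that behaviour (the real run.sh handles several config files this same way):

```shell
# Create universe_wsgi.ini from its sample only if it does not exist;
# an existing (possibly edited) config is never overwritten.
if [ ! -f universe_wsgi.ini ]; then
    cp universe_wsgi.ini.sample universe_wsgi.ini
fi
```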
> (5) I read some of your docs, command "sh setup.sh"(sh run.sh) and
> "./setup.sh"(./run.sh), which one is correct under Linux?
Both are effectively the same: 'sh run.sh' invokes the script with the
sh interpreter explicitly, while './run.sh' additionally requires the
file to have execute permission.
> Looking forward to hearing from you.
> Best Wishes,
> Yan Luo, Ph.D.
> galaxy-user mailing list
Something I raised during GCC2013, and which we also talked about
via Twitter, was a Galaxy environment variable for use within tool
wrappers, setting the number of threads/CPUs to use.
The idea is that you can configure a default value, and then
override this per runner or per tool, etc. James Taylor had
suggested calling this environment variable $GALAXY_CPUS,
which seems fine to me (personally I'd say threads rather than
CPUs, but I don't really mind). e.g.
<command>my_tool --threads "\$GALAXY_CPUS"
--input "$input" --output "$output"</command>
Everyone I spoke to about this seemed positive about the idea.
This would/should be integrated into the various cluster back-ends.
For example, for SGE/OGE the number of threads is already
configurable via the DRMAA settings and available as the
environment variable $NSLOTS for non-MPI jobs, so my guess is
all Galaxy needs to do is something like this:
$ hg diff
diff -r ce0d758bb995 lib/galaxy/jobs/runners/drmaa.py
--- a/lib/galaxy/jobs/runners/drmaa.py Tue Jul 30 12:30:30 2013 +0100
+++ b/lib/galaxy/jobs/runners/drmaa.py Tue Jul 30 16:10:40 2013 +0100
@@ -43,6 +43,7 @@
# - execute the command
# - take the command's exit code ($?) and write it to a file.
drm_template = """#!/bin/sh
if [ "$GALAXY_LIB" != "None" ]; then
if [ -n "$PYTHONPATH" ]; then
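The diff above is only context; as a hedged sketch of the idea (not the actual patch), the generated job script could derive $GALAXY_CPUS from SGE's $NSLOTS with a configurable fallback:

```shell
# Inside the DRMAA job template: use the slot count granted by the
# scheduler when present, otherwise fall back to a default of 1.
GALAXY_CPUS="${NSLOTS:-1}"
export GALAXY_CPUS
echo "running with $GALAXY_CPUS threads"
```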
Is there an open Trello card for this?
Retitling thread to focus on a regression in the (Tool Shed) test framework
to do with input file types and sniffers.
On Thu, Aug 1, 2013 at 11:47 AM, Peter Cock <p.j.a.cock(a)googlemail.com> wrote:
> On Thu, Aug 1, 2013 at 11:43 AM, Peter Cock <p.j.a.cock(a)googlemail.com> wrote:
>> On Mon, Jul 29, 2013 at 5:03 PM, Dave Bouvier <dave(a)bx.psu.edu> wrote:
>>> The issue with seq_rename incorrectly failing functional tests has been
>>> resolved in 10266:fe04978dadac, and the test results will be corrected the
>>> next time the automated testing framework runs. I'm still looking into the
>>> other issues you reported.
>>> --Dave B.
>> Hi Dave,
>> The seq_rename test results (apparently from last night) are still failing:
>> AssertionError: Attempting to set field 'new_column' to value '['2']'
>> in form 'tool_form' threw exception: cannot find value/label "2" in
>> list control control: <SelectControl(new_column=)> If the above
>> control is a DataToolparameter whose data type class does not include
>> a sniff() method, make sure to include a proper 'ftype' attribute to
>> the tag for the control within the <test> tag set.
> Same issue here?
> AssertionError: Attempting to set field 'fasta_file' to value
> '['empty.fasta']' in form 'tool_form' threw exception: cannot find
> value/label "empty.fasta" in list control control:
> <SelectControl(fasta_file=)> If the above control is a
> DataToolparameter whose data type class does not include a sniff()
> method, make sure to include a proper 'ftype' attribute to the tag for
> the control within the <test> tag set.
And another case that looks the same:
AssertionError: Attempting to set field 'fasta_file' to value
'['empty.fasta']' in form 'tool_form' threw exception: cannot find
value/label "empty.fasta" in list control control:
<SelectControl(fasta_file=)> If the above control is a
DataToolparameter whose data type class does not include a sniff()
method, make sure to include a proper 'ftype' attribute to the tag for
the control within the <test> tag set.
Note that for the nlstradamus and effectivet3 tests this is using
an empty FASTA file with no sequences in it, so the sniffer will
not be able to guess that this is a FASTA file - which is why the
tool test gives the ftype explicitly.
I originally used a literally empty FASTA file (0 bytes), but as I
recall the (non-Tool Shed) test framework at the time didn't like
me doing that, so this empty.fasta file just has a couple of blank
lines in it.
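For illustration only (the output name here is hypothetical, not taken from the actual tools), a tool test that states the ftype explicitly looks something like:

```xml
<test>
    <!-- ftype is required because an empty FASTA file gives the
         sniffer nothing to recognise -->
    <param name="fasta_file" value="empty.fasta" ftype="fasta" />
    <output name="out_file" file="expected_output.fasta" />
</test>
```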
I am testing with empty FASTA files deliberately - they can
often occur in gene/protein selection pipelines where an
early part of the pipeline is stringent enough to reject all
candidates.
However, the seq_rename sniffer problem has nothing to do with
an 'empty' FASTA file - it uses a non-empty tabular file.
My guess is that somewhere in the test framework the ftype
attribute is being lost. This all still works locally:
$ ./run_functional_tests.sh -id seq_rename
$ ./run_functional_tests.sh -id nlstradamus
$ ./run_functional_tests.sh -id effectiveT3
The tool I am currently wrapping has built-in data, which may be used by the tool's users (through a relevant <from_data_table> + .loc file configuration).
They are FASTA databases which are rather small and are thus bundled in the tool distribution package.
Thanks to the tool_dependencies.xml file, said distribution package is downloaded at install time, the code is compiled, and, since they are there anyway, the data files are copied to $INSTALL_DIR too, ready to be used.
After that, the user still has to edit tool-data/my_fancy_data_files.loc; but during the install I know where these data files are (since I copied them there), so I would like to save the user the trouble and set up this file automagically.
I would have two questions:
1/ Is it okay to have a tool's built-in data files in $INSTALL_DIR, or would that be considered bad practice?
2/ Is there a way to set up tool-data/my_fancy_data_files.loc during the install? Here are the options I thought of:
* shipping a "real" my_fancy_data_files.loc.sample with the good paths already set up, which will be copied as the .loc file (a rather ugly hack)
* using more <action type="shell_command"> steps during install to create my_fancy_data_files.loc (but deploying this file is not part of the tool dependency install per se)
* a variant of the previous: shipping my_fancy_data_files.loc as part of the tool distribution package and copying it through shell_command (same concern as above).
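As a hedged sketch of the second option (the action types shell_command and move_directory_files do exist, but the file names, and especially the path to the .loc file, are assumptions rather than a supported mechanism):

```xml
<!-- inside tool_dependencies.xml, after the compile step -->
<action type="move_directory_files">
    <source_directory>data</source_directory>
    <destination_directory>$INSTALL_DIR/data</destination_directory>
</action>
<!-- hypothetical: record the install-time path; the .loc location
     and its (tab-separated) column layout are assumptions here -->
<action type="shell_command">
    printf "my_db\tMy database\t%s\n" "$INSTALL_DIR/data/my_db.fasta" >> my_fancy_data_files.loc
</action>
```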
Bonsai Bioinformatics group
I have developed a tool in our local Galaxy and imported it into a local
Tool Shed. I next removed the 'dev' version of the tool from Galaxy and
installed it via the Tool Shed manager.
I got an error that the dependencies could not be installed. I figured
out that I needed to add an extra shell_command in tool_dependencies.xml
(and that the mount point on my system needs to have exec permissions).
So I updated the tool in the Tool Shed. BUT, then I uninstalled the tool
from Galaxy, and reinstalled it from the Tool Shed ... instead of just
updating. Anyway, it has happened.
Now I have the tool listed in my 'Install Tool Shed Repositories', and when
I click on it, it says 'this tool is not installed'. The only action I can
apply to this installed tool shed repository is 'Install' (...). But
clicking that 'Install' button leads me to an error page (HTTP error 500):
Module weberror.evalexception.middleware:364 in respond view
>> app_iter = self.application(environ, detect_start_response)
Module paste.recursive:84 in __call__ view
>> return self.application(environ, start_response)
Module paste.httpexceptions:633 in __call__ view
>> return self.application(environ, start_response)
Module galaxy.web.framework.base:132 in __call__ view
>> return self.handle_request( environ, start_response )
Module galaxy.web.framework.base:190 in handle_request view
>> body = method( trans, **kwargs )
Module galaxy.web.framework:221 in decorator view
>> return func( self, trans, *args, **kwargs )
Module galaxy.webapps.galaxy.controllers.admin_toolshed:825 in
>> raw_text = common_util.tool_shed_get( trans.app, tool_shed_url, url )
Module tool_shed.util.common_util:110 in tool_shed_get view
>> response = urlopener.open( uri )
Module urllib2:397 in open view
>> response = meth(req, response)
Module urllib2:510 in http_response view
>> 'http', request, response, code, msg, hdrs)
Module urllib2:435 in error view
>> return self._call_chain(*args)
Module urllib2:369 in _call_chain view
>> result = func(*args)
Module urllib2:518 in http_error_default view
>> raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
HTTPError: HTTP Error 500: Internal Server Error
The same happens when I go to the Tool Shed, and try to install the tool
from there: Galaxy detects that the tool has been installed before, and
points me to that 500 error page.
Any help is heartily appreciated,
Contact details: http://www.bits.vib.be/index.php/about/80-team
After the latest update of Galaxy I saw that the cuffdiff wrapper is for
version 2.0. I updated $PATH to point to cuffdiff v2.0.2, but when I run
cuffdiff in Galaxy I get the following error:
Error running cuffdiff. cuffdiff: /lib64/libz.so.1: no version information
available (required by cuffdiff)
cuffdiff: unrecognized option '--library-norm-method'
cuffdiff v1.3.0 (3022)
What else do I need to define in order to run the latest version of cuffdiff?
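The "cuffdiff v1.3.0" line in the output suggests Galaxy is still resolving the old binary first; a quick hedged check (the install path below is a placeholder) run in the same environment Galaxy's jobs use:

```shell
# Which cuffdiff does this environment resolve first?
command -v cuffdiff || echo "cuffdiff not on PATH"
cuffdiff --version 2>&1 | head -1 || true
# Ensure the 2.0.2 install directory comes before any older one:
export PATH="/opt/cufflinks-2.0.2/bin:$PATH"
```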
I'm interested in developing a tool that would allow easy interaction
with the Galaxy API. I started coding, and this is what I have so far.
This is a pure "release early, release often" kind of project. In the
current state many of the project's goals are missing and current
implementation might significantly change. You can however already do
something like this:
$ galaxy-api-tools import --link --file-type fasta --library "API
Imports" --folder R_TEST --description "R_TEST with description."
--workflow 2491ee81eec9beee --extension .qual:qual454 --label
QUAL:.qual --label FASTA:.fna /local/data/R_TEST/1.fna
This will import (with '--link', i.e. linked rather than copied; copy is
the default) these two 454 FASTA files with the matching quality files
into data library "API Imports" and folder "R_TEST" (it will create both
if they don't exist); it will then run a workflow which has two inputs
labeled FASTA and QUAL, assigning the correct file to each, and the
results will be available in a newly created history.
I shamelessly copied/adapted code from '$GALAXY_SOURCE/scripts/api'.
I'm releasing this code under the BSD 2-Clause license, which I think is
one of the most permissive licenses I could find in the GitHub dropdown
menu. Hopefully this license won't have any incompatibility with the
Galaxy license, which also seems very open.
I would love to receive comments to help shape this tool. I'm very
open to any kind of suggestions. Also, if there is any worry that the
name of the tool may cause confusion about its source, please let me
know and I'll be happy to change it.
I've been seeing some examples of tool_dependencies.xml come across on
the list, and I'm wondering if there are ways that it can be
simplified. When we were first defining these features, we talked
about having high-level recipes for certain types of installs. This
could greatly simplify things. For example, can this:
<package name="requests" version="1.2.3">
python setup.py install --home $INSTALL_DIR
Be simplified to:
<package name="requests" version="1.2.3">
The assumptions are: when the install version is not provided, it
defaults to 1.0 (we've always maintained compatibility for config files
in the past, so hopefully this never changes), and when installing a
Python package the install directories and environment variables that
need to be set are always the same.
Similar recipes could be:
autoconf: default to configure; make; make install; allow providing
configure options
make_install: just make; make install; allow providing make options
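Purely as a hypothetical sketch (no such shorthand exists in Galaxy today, and the recipe attribute is invented), the full form and the simplified form might look roughly like:

```xml
<!-- today: every step spelled out -->
<package name="requests" version="1.2.3">
    <install version="1.0">
        <actions>
            <action type="shell_command">python setup.py install --home $INSTALL_DIR</action>
        </actions>
    </install>
</package>

<!-- hypothetical shorthand: the python_package recipe would imply the
     standard install command, directories and environment variables -->
<package name="requests" version="1.2.3">
    <install recipe="python_package" />
</package>
```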
Basically, most of the time the steps to install a particular package
are boilerplate; this would remove a ton of duplication in the recipe
files. Also, a likely less popular proposal would be to go one step