I've moved this discussion to the galaxy-dev list since it pertains to a
local installation of Galaxy.
Responses to your questions follow, in-line.
Yan Luo wrote:
> Dear Sir,
> (1) We installed Galaxy, but recently users can't register and get the
> following error. How can we fix it?
> Server error
> An error occurred. See the error logs for more information. (To turn debug on
> to display ...).
Since debug = False in universe_wsgi.ini, you should be able to find a
more detailed error message in the log file. If starting Galaxy with:
% sh run.sh --daemon
The default log file is 'paster.log' in Galaxy's root directory.
> (2) Could you please let me know if there is any command to stop galaxy?
If starting with the --daemon flag (as above), you can use:
% sh run.sh --stop-daemon
If running in the foreground, you can use Ctrl-C to terminate the
process. There is a recent bug whereby Ctrl-C is ineffective on some
platforms under Python 2.6 - in this case you will have to kill/pkill
the process manually. We are working on a fix for the latter.
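If Ctrl-C fails, something along these lines can be used to find and kill the process by hand (a sketch; the match pattern assumes the default paster command line with universe_wsgi.ini on it):

```shell
# Locate the default Galaxy paster process; the [u] bracket trick keeps
# grep from matching its own command line.
GALAXY_PID=$(ps ax -o pid=,command= | grep '[u]niverse_wsgi.ini' | awk '{print $1}')
if [ -n "$GALAXY_PID" ]; then
    kill "$GALAXY_PID"
else
    echo "no Galaxy process found"
fi
```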
> (3) If I reset the universe_wsgi.ini file and want to set an administrator
> user (I can add a line in the above file), how can I get the password? Should
> I stop galaxy (see question 2) first, then run "./setup.sh" and "./run.sh"?
setup.sh was only necessary before running Galaxy for the first
time; however, this step has recently been removed. If you are
referencing documentation that still refers to setup.sh, please let us
know so we can update it - I did notice this was still on the
"Production Server" page, so I removed it from there.
You no longer need to run setup.sh at all.
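As for the administrator password: there is no separate one to obtain. You register the account through the web interface as usual, and then list its email address in the admin_users setting (the address below is a placeholder):

```ini
# universe_wsgi.ini -- the email must match a registered Galaxy account;
# that account's normal login password is used (there is no separate one)
[app:main]
admin_users = admin@example.org
```

Restart Galaxy after changing this setting for it to take effect.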
> (4) If I run "setup.sh", will a new file "universe_wsgi.ini" be generated?
> If I want to change this file, should I edit it after "setup.sh" and before
> "run.sh"? Is that right?
setup.sh and its replacements in run.sh and the Galaxy application
itself never overwrite files; they only create files from sample files
if those files do not already exist.
> (5) I read some of your docs; between the commands "sh setup.sh" ("sh run.sh")
> and "./setup.sh" ("./run.sh"), which one is correct under Linux?
Both are effectively the same in most cases: "sh run.sh" invokes the
script with sh explicitly, while "./run.sh" requires the file to be
executable and runs it via its shebang line.
> Looking forward to hearing from you.
> Best Wishes,
> Yan Luo, Ph.D.
> galaxy-user mailing list
The tool I am currently wrapping has built-in data, which may be used by the tool users (through a relevant <from_data_table> + .LOC file configuration).
They are .fasta databases which are rather small and are thus bundled in the tool distribution package.
Thanks to the tool_dependencies.xml file, said distribution package is downloaded at install time, the code is compiled, and since they are there, the data files are copied to $INSTALL_DIR too, ready to be used.
After that, the user still has to edit tool-data/my_fancy_data_files.loc; but the thing is, during the install I know where these data files are (since I copied them there), so I would like to save the user the trouble and set up this file automagically.
I would have two questions:
1/ Is it okay to have tool built-in data files in $INSTALL_DIR, or would it be considered bad practice?
2/ Is there a way to set up tool-data/my_fancy_data_files.loc during the install? Here are the options I thought of:
* shipping a “real” my_fancy_data_files.loc.sample with the correct paths already set up, which is going to be copied as the .loc file (a rather ugly hack)
* using more <action type="shell_command"> steps during install to create my_fancy_data_files.loc (but deploying this file is not part of the tool dependency install per se)
* a variant of the previous: shipping my_fancy_data_files.loc as part of the tool distribution package and copying it through shell_command (same concern as above).
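For what it's worth, option 2 might look something like the following in tool_dependencies.xml. This is only a sketch: the template file name and the __INSTALL_DIR__ placeholder token are my own assumptions, not a tested recipe.

```xml
<!-- sketch: substitute the install path into a shipped template
     and write the resulting .loc file during dependency install -->
<action type="shell_command">sed "s#__INSTALL_DIR__#$INSTALL_DIR#g" my_fancy_data_files.loc.template > $INSTALL_DIR/my_fancy_data_files.loc</action>
```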
Bonsai Bioinformatics group
I have a question regarding the param tag. I would like to pass on the
user email to an external python script. I tried to use it like this:
<param name="email" type="hidden" value="$__user_email__" />
<param name="experiment" type="select" label="Experiment" help="select
This does not work. Ideally I would like to have something like this:
Has someone done this before?
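One pattern I believe works is to skip the hidden param entirely and interpolate $__user_email__ directly on the command line (the script name and flags below are hypothetical):

```xml
<command>my_script.py --email "$__user_email__" --input "$input" --output "$output"</command>
```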
Center for Information Sciences and Databases (C-ISD)
Department of Biosystems Science & Engineering (D-BSSE)
ETH Zurich, Maulbeerstrasse (1078, 1.02), CH-4058 Basel, +41 61 387 3132
Is there a reason that the samtools BAM-to-SAM tool does not respect the
new_file_path set in the config file? The tmp directory handling by
different tool wrappers seems to be an issue right now on systems with
small system tmp directories.
Ph.D Candidate - BCMB
Pevsner Lab <http://pevsnerlab.kennedykrieger.org/>
Johns Hopkins Medicine
I am getting this error:
Error in tophat:
[2013-02-13 20:46:41] Beginning TopHat run (v2.0.7)
[2013-02-13 20:46:41] Checking for Bowtie
Bowtie version: 188.8.131.52
[2013-02-13 20:46:41] Checking for Samtools
Samtools version: 0.1.18.0
[2013-02-13 20:46:41] Checking for Bowtie index files
[2013-02-13 20:46:41] Checking for reference FASTA file
Warning: Could not find FASTA file
[2013-02-13 20:46:41] Reconstituting reference FASTA file from Bowtie index
[2013-02-13 20:48:51] Generating SAM header for
quality scale: phred33 (default)
[2013-02-13 20:49:23] Preparing reads
left reads: min. length=34, max. length=34, 2 kept reads (0 discarded)
Warning: you have only one segment per read.
If the read length is greater than or equal to 45bp,
we strongly recommend that you decrease --segment-length to about
half the read length because TopHat will work better with multiple segments.
[2013-02-13 20:49:23] Mapping left_kept_reads to genome genome with Bowtie2
[2013-02-13 20:49:56] Searching for junctions via segment mapping
Coverage-search algorithm is turned on, making this step very slow
Please try running TopHat again with the option
(--no-coverage-search) if this step takes too much time or memory.
Warning: junction database is empty!
[2013-02-13 20:51:18] Reporting output tracks
Error running /usr/local/bin/tophat_reports --min-anchor 8
--splice-mismatches 0 --min-report-intron 50 --max-report-intron
500000 --min-isoform-fraction 0.15 --output-dir ./tophat_out/
--max-multihits 20 --max-seg-multihits 40 --segment-length 25
--segment-mismatches 2 --min-closure-exon 100 --min-closure-intron 50
--max-closure-intron 5000 --min-coverage-intron 50
--max-coverage-intron 20000 --min-segment-intron 50
--max-segment-intron 500000 --read-mismatches 2 --read-gap-length 2
--read-edit-dist 2 --read-realign-edit-dist 3 --max-insertion-length 3
--max-deletion-length 3 -z gzip -p4 --no-closure-search
--samtools=/bin/samtools --bowtie2-max-penalty 6 --bowtie2-min-penalty
2 --bowtie2-penalty-for-N 1 --bowtie2-read-gap-open 5
--bowtie2-read-gap-cont 3 --bowtie2-ref-gap-open 5
--bowtie2-ref-gap-cont 3 ./tophat_out/tmp/genome.fa
The recent updates to set_user_disk_usage.py for Postgres users have an
issue with Postgres 8.x. The SQL in the pgcalc method (line 51) leads
to the following error:
sqlalchemy.exc.ProgrammingError: (ProgrammingError) column "d.total_size" must appear in the GROUP BY clause or be used in an aggregate function
LINE 4: FROM ( SELECT d.total_siz...
The problem is that versions of Postgres before 9.x were a bit more
restrictive in the use of GROUP BY. This can be fixed using DISTINCT ON
instead. See this StackOverflow post for more info:
I've included a patch below. Let me know if a pull request would be preferred.
@@ -52,7 +52,7 @@
sql = """
SET disk_usage = (SELECT COALESCE(SUM(total_size), 0)
- FROM ( SELECT d.total_size
+ FROM ( SELECT DISTINCT ON (d.id) d.total_size
JOIN history h ON
h.id = hda.history_id
JOIN dataset d ON
hda.dataset_id = d.id
@@ -62,7 +62,7 @@
AND d.purged = false
AND d.id NOT IN
- GROUP BY d.id) sizes)
+ ) sizes)
WHERE id = :id
Lance Parsons - Scientific Programmer
134 Carl C. Icahn Laboratory
Lewis-Sigler Institute for Integrative Genomics
We have our own galaxy instance and the idea is to have trackster enabled for users to be able to visualize NGS mapping. We were able to configure trackster in our instance and the visualization works fine.
We have two questions regarding trackster:
1) We can't display genomic sequences in trackster. As per the tutorial, we set the location of the .2bit file in the twobit.loc file so that trackster can display the genomic sequence, but for some reason it doesn't display it. The name of the build is the same in all places, i.e. in ucsc/chrom/builds.txt and also in the .loc files. Any ideas on what else should be done?
2) While saving the visualization, there is always an error message saying "could not save visualization" and it doesn't seem to be a web browser issue. How do we then save the visualization?
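On (1), for reference, a twobit.loc entry is expected to be the build key and the file path separated by a single TAB; the path below is hypothetical:

```
# tool-data/twobit.loc  (<dbkey><TAB><path to .2bit>)
hg19	/data/genomes/twobit/hg19.2bit
```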
Thanks in advance,
Hi all -
I've been trying to get the <repeat>...</repeat> tag working with a min attribute for some time now, though without any success. It works in other tools distributed with Galaxy, but when I attempt to use it in one of our custom tools, it dies with an "AttributeError: 'ExpressionContext' object has no attribute 'keys'" exception.
Can anybody offer any insight?
The full traceback is:
⇝ AttributeError: 'ExpressionContext' object has no attribute 'keys'
Module weberror.evalexception.middleware:364 in respond view
>> app_iter = self.application(environ, detect_start_response)
Module paste.debug.prints:98 in __call__ view
>> environ, self.app)
Module paste.wsgilib:539 in intercept_output view
>> app_iter = application(environ, replacement_start_response)
Module paste.recursive:80 in __call__ view
>> return self.application(environ, start_response)
Module paste.httpexceptions:632 in __call__ view
>> return self.application(environ, start_response)
Module galaxy.web.framework.base:160 in __call__ view
>> body = method( trans, **kwargs )
Module galaxy.web.controllers.tool_runner:68 in index view
>> template, vars = tool.handle_input( trans, params.__dict__ )
Module galaxy.tools:1320 in handle_input view
>> state = self.new_state( trans )
Module galaxy.tools:1248 in new_state view
>> self.fill_in_new_state( trans, inputs, state.inputs )
Module galaxy.tools:1257 in fill_in_new_state view
>> state[ input.name ] = input.get_initial_value( trans, context )
Module galaxy.tools.parameters.grouping:100 in get_initial_value view
>> rval_dict[ input.name ] = input.get_initial_value( trans, context )
Module galaxy.tools.parameters.basic:1016 in get_initial_value view
>> return SelectToolParameter.get_initial_value( self, trans, context )
Module galaxy.tools.parameters.basic:785 in get_initial_value view
>> if self.need_late_validation( trans, context ):
Module galaxy.tools.parameters.basic:1022 in need_late_validation view
>> if super( ColumnListParameter, self ).need_late_validation( trans, context ):
Module galaxy.tools.parameters.basic:766 in need_late_validation view
>> for layer in context.itervalues():
Module UserDict:116 in itervalues view
>> for _, v in self.iteritems():
Module UserDict:109 in iteritems view
>> for k in self:
Module UserDict:96 in __iter__ view
>> for k in self.keys():
AttributeError: 'ExpressionContext' object has no attribute 'keys'
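For reference, the kind of declaration being attempted is something like the following sketch (all names are hypothetical; the min attribute is the one used by tools distributed with Galaxy):

```xml
<repeat name="filters" title="Filter" min="1">
    <param name="pattern" type="text" label="Pattern to match"/>
</repeat>
```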
Dear Galaxy mailing-list,
We are currently facing a strange issue with our local Galaxy installation
(distribution from Jan. 11 2013).
I'm going to try to describe the problem as much as I can, and feel free
to ask for more details if it can help in solving this issue.
So basically, we have a situation where (about half the time), when a job
ends in error, the error message (shown when clicking on the bug icon),
doesn't seem to be related to the dataset in error.
As this particular problem seems to be present only when datasets end in
error, I created a simple python script which simply does 2 things:
It outputs the hostname to stdout (in order to investigate whether the
problem is specific to certain nodes).
It outputs the following message "This is an error message printed to
stderr" to stderr. (This is done in order to get only error datasets and
to investigate the before-mentioned issue.)
Here is the code of my python script:

import socket
import sys

if __name__ == '__main__':
    # hostname to stdout (which node ran the job), fixed message to stderr
    sys.stdout.write(socket.gethostname() + '\n')
    sys.stderr.write('This is an error message printed to stderr\n')
I ran this tool about 50 times and I looked at the error messages
displayed both in the preview section of the dataset (in the history), and
in the error report displayed when clicking on the bug icon ("view or report
this error").
About half the time, the results were as expected:
the dataset stdout would read the name of the node where the job was run,
the dataset stderr would read: "This is an error message printed to stderr",
and the error message displayed when clicking on the bug icon would be:
Dataset generation errors
Dataset 54: Test error
Tool execution generated the following error message:
This is an error message printed to stderr
The tool produced the following additional output:
However, the other half of the time, the error message shown in stderr
doesn't correspond to the error message displayed when clicking on the bug icon:
the dataset stdout is still reading the name of the node,
the dataset stderr is still reading "This is an error message printed to stderr",
but the error message displayed when clicking on the bug icon would be:
Dataset generation errors
Dataset 3: chrM.bed
Tool execution did not generate any error messages.
We can clearly see a discrepancy between the error message in stderr and
the error message of the bug report. Actually, the bug report is saying
that there is not any error and it makes reference to some "Dataset 3:
chrM.bed", even when the actual dataset is "Dataset 53: Test error". There
is absolutely no .bed file in my history and the dataset 3 actually reads
"Dataset 3: Test error". Some of the datasets mentioned in the faulty bug
reports seem to be really old datasets (like one year old).
So my question to you is: could this be related to some mix-up between
dataset IDs? And how can I look into this?
I must say that at the moment, I have absolutely no idea how to solve this issue.
Many thanks for your help!
I've looked over the wiki and as far as I can see, 'when' tags used inside
'conditional' tags only work on another input variable's values.
What I would like to be able to do is make a conditional based on the file format
of an input file. For instance, a tool might take XML or tabular files,
and need to show additional parameters for one input type but not the other.
Does this enhancement idea seem useful?
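In the meantime, the usual workaround I know of is to key the conditional off an explicit select rather than the input's datatype (a sketch; all names are hypothetical):

```xml
<conditional name="source">
    <param name="format_selector" type="select" label="Input format">
        <option value="xml">XML</option>
        <option value="tabular">Tabular</option>
    </param>
    <when value="xml">
        <param name="xml_input" type="data" format="xml" label="XML input"/>
    </when>
    <when value="tabular">
        <param name="tab_input" type="data" format="tabular" label="Tabular input"/>
    </when>
</conditional>
```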