I have several jobs that have been stuck in the queue for a long time. I checked
the handler log and found the following message:
Job runner assigned but no external ID recorded, adding to the job handler
Is there a way to clean these orphaned jobs out of the queue?
After installing BWA from the Tool Shed, I found that a new
tool_data_table_conf.xml had been created under the tool-data directory. The
file contains the entries for BWA's .loc files.
I believe the installed BWA still relies on the tool_data_table_conf.xml
under the Galaxy root dir. Can anyone clarify what the one under the tool-data dir is for?
I have managed to get proftpd to work: it can connect to the Galaxy SQL database, and users can log in to upload files to their directory. But there is a problem: when a Galaxy user logs in to the Galaxy web platform, the user can't see their uploaded files, since Galaxy doesn't have the rights to open the directory. How can I change the permissions in proftpd so that Galaxy can open the user directory?
Thanks in advance to all.
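One common fix (a sketch only, assuming a stock proftpd and that the Galaxy process runs as a different Unix user than the FTP users) is to relax the umask proftpd applies to uploaded files and created directories, so they end up readable by the Galaxy user:

```apacheconf
# proftpd.conf fragment (sketch - adjust to your security requirements).
# The first value applies to files, the second to directories:
# 022 leaves new files world-readable and new directories traversable,
# so the Galaxy process can read the uploads.
Umask 022 022
```

Files uploaded before the change will still need a one-off chmod on the FTP upload directories.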
What is the current status in Galaxy for supporting compressed files?
We've talked about this before, for example in addition to FASTQ,
many of us have expressed a wish to work with gzipped FASTQ.
I understand that some have customized their local Galaxy
installations to use gzipped FASTQ as a specific data type - I'm
more interested in a general, file-format-neutral solution.
Also, I'd like to be able to use BGZF (not just GZIP) because it
is better for random access - see for example
- and makes it much easier to break up large data files for sharing
over a cluster (i.e. it could be exploited in the current Galaxy code
for splitting large sequence files).
The 11 May 2012 Galaxy Development News Brief
mentions tabix indexing - that uses bgzip, so is there something
general in place yet to allow tool wrappers to say they accept not just
given file formats, but different compressed versions of file formats?
Ideally I'd like to be able to write an XML tool description saying
a tool produced BGZF compressed tabular data, or GZIP
compressed Sanger FASTQ etc. Similarly, I'd like to specify my
tool accepts FASTA or gzipped FASTA (including BGZF FASTA).
Meanwhile, for older tools that say they accept only uncompressed
FASTA, Galaxy could automatically decompress any compressed
FASTA entries in my history on demand.
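The splitting point can be illustrated with plain Python: BGZF is essentially a series of independent gzip members (with extra size fields and an EOF marker to enable random access), and any spec-compliant gzip reader already decompresses such concatenations transparently. A minimal sketch using only the standard library (the sequence content is made up):

```python
import gzip

# Two chunks of (made-up) FASTA data, compressed independently -
# each becomes its own gzip member, as BGZF does per block.
chunk1 = b">seq1\nACGTACGT\n"
chunk2 = b">seq2\nTTTTCCCC\n"
blob = gzip.compress(chunk1) + gzip.compress(chunk2)

# The concatenation can be split between members for cluster jobs,
# yet a standard gzip reader still sees one continuous file.
assert gzip.decompress(blob) == chunk1 + chunk2
```

BGZF adds the block-size bookkeeping on top of this, which is what makes seeking (and tabix indexing) possible.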
I would like to inquire whether anyone has attempted to implement the
idxstats tool from samtools in Galaxy?
The XML file for idxstats is not present in the Galaxy source code,
which led me to try to implement it myself.
However, the main problem I face is that the idxstats tool silently
relies on an index file being available (within the same directory)
for the BAM file you wish to print the stats for:
samtools idxstats PATH/test.bam
searches for PATH/test.bam.bai and gives an error when this file is not
present. And somehow I cannot model this behavior in Galaxy.
A different solution would of course be to ask the author(s) of samtools
to add an option letting the user directly indicate the
path to the index file.
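One possible workaround is for the wrapper's command line to symlink the BAM dataset into the job's working directory under a predictable name and build the index itself before calling idxstats. A sketch of such a block (the parameter names $input_bam and $output are hypothetical):

```xml
<!-- Sketch: give the dataset a *.bam name, index it, then run idxstats. -->
<command><![CDATA[
ln -s '$input_bam' input.bam &&
samtools index input.bam &&
samtools idxstats input.bam > '$output'
]]></command>
```

Galaxy's bam datatype also keeps an index as metadata, so linking ${input_bam.metadata.bam_index} to input.bam.bai instead of re-indexing may work too, if that metadata has been set.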
PS: I've searched the mailing list archives for this problem but did not
find any matches. Apologies if I somehow missed the answer.
Michiel Van Bel, PhD
Tel:+32 (0)9 331 36 95 fax:+32 (0)9 3313809
VIB Department of Plant Systems Biology, Ghent University
Technologiepark 927, 9052 Gent, BELGIUM
I'd like to be able to write some simple <test> entries for
some of the BLAST+ tools using composite datatypes
as input or output (i.e. small BLAST databases). This
doesn't seem to be mentioned or hinted at on the wiki:
Is it possible to use a composite datatype as a test input?
If so, how? Normal datatypes are loaded into the test history
using the upload tool - does that mean I first need to
extend the relevant datatypes to allow them to be uploaded?
Example: Run blastp using a small query FASTA file and
a small database, then check the output (e.g. tabular).
Is it possible to use a composite datatype as a test output?
If so, how?
Example: Run makeblastdb using a small FASTA file, and
check the output (a small BLAST database).
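For the input side, my understanding is that a composite test input might look something like the sketch below, assuming the test framework's <composite_data> element behaves as I expect (the file names here are hypothetical):

```xml
<!-- Sketch: supply each file of a composite BLAST database as a test input. -->
<test>
    <param name="query" value="small_query.fasta"/>
    <param name="db">
        <composite_data value="small_db.phr"/>
        <composite_data value="small_db.pin"/>
        <composite_data value="small_db.psq"/>
    </param>
    <output name="out_tabular" file="expected_hits.tabular"/>
</test>
```

Whether composite outputs can be checked file-by-file in the same way is less clear to me.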
I've moved this discussion to the galaxy-dev list since it pertains to a
local installation of Galaxy.
Responses to your questions follow, in-line.
Yan Luo wrote:
> Dear Sir,
> (1) We installed Galaxy, but recently users can't register and get the
> following error. How can we fix it?
> Server error
> An error occurred. See the error logs for more information. (To turn debug on
> to display ...)
Since debug = False in universe_wsgi.ini, you should be able to find a
more detailed error message in the log file. If starting Galaxy with:
% sh run.sh --daemon
The default log file is 'paster.log' in Galaxy's root directory.
> (2) Could you please let me know if there is any command to stop galaxy?
If starting with the --daemon flag (as above), you can use:
% sh run.sh --stop-daemon
If running in the foreground, you can use Ctrl-C to terminate the
process. There is a recent bug whereby Ctrl-C is ineffective on some
platforms under Python 2.6 - in this case you will have to kill/pkill
the process manually. We are working on a fix for the latter.
> (3) If I reset the universe_wsgi.ini file and want to set an administrator
> user (I can add a line in the above file), how can I get the password? Should
> I stop Galaxy (see question 2) first, then run "./setup.sh" and "./run.sh"?
setup.sh would have only been necessary prior to running Galaxy the
first time, however, this step has recently been removed. If you are
referencing documentation that still refers to setup.sh, please let us
know so we can update it - I did notice this was still on the
"Production Server" page, so I removed it from there.
You no longer need to run setup.sh at all.
> (4) If I run "setup.sh", will a new "universe_wsgi.ini" file be generated?
> If I want to change this file, should I edit it after "setup.sh" and before
> "run.sh"? Is that right?
setup.sh and its replacements in run.sh and the Galaxy application
itself never overwrite files; they only create files from sample files
if they do not exist.
> (5) I read some of your docs; which command is correct under Linux:
> "sh setup.sh" ("sh run.sh") or "./setup.sh" ("./run.sh")?
Both syntaxes are effectively the same in most cases.
> Looking forward to hearing from you.
> Best Wishes,
> Yan Luo, Ph.D.
The tool I am currently wrapping has built-in data, which may be used by the tool's users (through a relevant <from_data_table> + .loc file configuration).
They are FASTA databases which are rather small and are thus bundled in the tool distribution package.
Thanks to the tool_dependencies.xml file, said distribution package is downloaded at install time, the code is compiled, and, since they are there, the data files are copied to $INSTALL_DIR too, ready to be used.
After that, the user still has to edit tool-data/my_fancy_data_files.loc; but during the install I know where these data files are (since I copied them there), so I would like to save the user the trouble and set up this file automagically.
I have two questions:
1/ Is it okay to have a tool's built-in data files in $INSTALL_DIR, or would that be considered bad practice?
2/ Is there a way to set up tool-data/my_fancy_data_files.loc during the install? Here are the options I thought of:
* shipping a "real" my_fancy_data_files.loc.sample with the right paths already set up, which would be copied as the .loc file (a rather ugly hack)
* using more <action type="shell_command"> steps during install to create my_fancy_data_files.loc (but deploying this file is not part of the tool dependency install per se)
* a variant of the previous: shipping my_fancy_data_files.loc as part of the tool distribution package, and copying it through shell_command (same concern as above).
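For the shell_command route, a sketch of what the tool_dependencies.xml action might look like ($INSTALL_DIR is set by Galaxy during the dependency install, but the relative path from it to tool-data is a guess for illustration and will differ between instances):

```xml
<!-- Sketch: append an entry for the bundled FASTA to the .loc file.
     The ../../... path to tool-data is hypothetical, and the column
     layout of the .loc line must match the <from_data_table> definition. -->
<action type="shell_command">
    printf 'my_db\tMy fancy database\t%s\n' "$INSTALL_DIR/data/my_db.fasta" >> "$INSTALL_DIR/../../../../tool-data/my_fancy_data_files.loc"
</action>
```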
Bonsai Bioinformatics group