Is the main galaxy server down? Or very sloooow? I have a Lastz job (not too demanding and one that has been run several times before) waiting for a long time already.
Boaz Shaanan, Ph.D.
Dept. of Life Sciences
Ben-Gurion University of the Negev
Phone: 972-8-647-2220 Skype: boaz.shaanan
Fax: 972-8-647-2992 or 972-8-646-1710
Is it possible to use ACE or some other type of inline editor to modify
files in Galaxy? For example, if we had a tool that accepted as a parameter a file
written in a homegrown modeling language, is there any way to update that
file without downloading and re-uploading it? Has anyone seen or done anything
like this?
IT Manager - Laboratory of Genetics
University of Wisconsin - Madison
I just set up Galaxy CloudMan on Amazon EC2. I would like to add some
tools from the Tool Shed to it; however, I need to be an admin to do this.
Once I got into Galaxy CloudMan, I had to register a new account and
then log in, but the admin interface (described in the Tool Shed wiki) didn't
appear at all. How can I log in as admin on Galaxy CloudMan? This may be a very
simple question, but it has been bothering me for two days. Looking forward to
your replies! Best wishes,
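For context, on a standard Galaxy instance the Admin menu only appears for
accounts whose registered email is listed under admin_users in
universe_wsgi.ini; a minimal sketch, with a placeholder address (multiple
admins are comma-separated):

    # hypothetical value; use the email address you registered with
    admin_users = you@example.org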
I have been looking for an answer to my problem in all the Galaxy
support resources, but with no success. I am sorry if this topic has been
covered before.
So, I am analyzing MiSeq data on the main Galaxy.
I have Fastq files from 4 paired-end samples. After having checked the
quality with FastQC and groomed them, I have performed a BWA mapping,
filtered the results and converted the SAM to BAM files (for each sample
separately). I have then called SNPs with Freebayes and SAMtools,
encountering problems in both cases.
1) SAMtools: if I run the Generate pileup tool, the Filter pileup tool
then doesn't recognize any valid format in the files in my history, and
I cannot go on with the analysis. Why is that? What can I do?
2) I have performed variant calling with Freebayes on the single BAM files
and on one BAM file merged from all four of my BWA mappings. In all
cases, the last column is "unknown", while it should be the name of my
sample. This is not a big deal for the single VCF files, but with the
merged BAM file I cannot tell which sample each SNP was detected in. I
think the problem is that the BAM files are not properly indexed; Freebayes
also needs an RG tag.
Is there a tool in Galaxy I can use to index BAM files and add the RG tag?
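For reference, outside Galaxy this is typically done with Picard's
AddOrReplaceReadGroups followed by samtools index; a minimal command-line
sketch, where every file name and read-group value is a placeholder:

    # tag all reads in sample1.bam with a read group
    java -jar picard.jar AddOrReplaceReadGroups \
        INPUT=sample1.bam OUTPUT=sample1.rg.bam \
        RGID=sample1 RGLB=lib1 RGPL=illumina RGPU=unit1 RGSM=sample1
    # index the tagged BAM so downstream tools can use it
    samtools index sample1.rg.bam

Freebayes takes the sample column from the RGSM value, so if each per-sample
BAM is tagged before merging, calls from the merged file can be attributed to
their samples. If your instance has the Picard tool set installed, an
equivalent tool may already be available in the tool panel.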
I hope someone can help me!
Thank you very much!
Debora Garzetti, PhD Student
Max von Pettenkofer-Institute, LMU
Phone: +49 (0)89 2180 72915
My understanding is that cleanup_datasets.py under scripts/cleanup_datasets should be compatible with 'older' versions of PostgreSQL. I'm running 8.1 under CentOS 5. When I attempt to run the script, it fails to clean any datasets. In the logs, I'm noticing messages like:
database_connection contains an unknown SQLAlchemy database dialect: postgresql
In my universe_wsgi.ini I've defined the database_connection as:
database_connection = postgresql://galaxy:********@localhost:5432/galaxydb
and I'm confident it's working, since Galaxy runs fine apart from this problem with deleting older datasets.
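Possibly relevant: SQLAlchemy only began accepting the 'postgresql' dialect
name around version 0.6; older releases know it only as 'postgres'. If the
script is importing an older SQLAlchemy, spelling the connection string the
old way may be worth a try; a sketch only, untested:

    # assumes the SQLAlchemy seen by the script predates the
    # postgres -> postgresql dialect rename
    database_connection = postgres://galaxy:********@localhost:5432/galaxydb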
The pgcleanup.py script isn't an option, as it requires PostgreSQL >= 9.1. I noticed yet another version of cleanup_datasets.py under cron which looks like it might be compatible with Postgres, but I'm leery of trying it, as I'm not sure what it means by "1: database directory to clean". I'm not sure which directory to point it at, nor am I confident that simply deleting data from the directory is the safest thing.
Any suggestions on how to proceed?
My question: how can I generate a pileup with an output of more than
8000 hits per base? I was generating pileups using SAM Tools -->
Generate pileup and do not see an option to change the output settings.
In mpileup there is a variable that looks correct, -D (output
per-sample read depth), but I cannot figure out how to adjust it. I
checked the "Output per-sample read depth" box under the advanced
settings, but the log file generated still says the max depth was 8000.
The pileup files I generated look great, but I would like to know what
the true read depth is at the 8000-hit plateaus.
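For comparison, on the command line the cap is controlled by mpileup's
lowercase -d (max per-file read depth); -D only adds a per-sample depth
column to the output. A minimal sketch, with placeholder file names:

    # raise the per-file depth cap well above the 8000-read default
    # so deep positions are no longer truncated
    samtools mpileup -d 100000 -f reference.fa sample.bam > sample.pileup

Whether the Galaxy wrapper exposes -d will depend on the tool version
installed on the instance.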
Any help appreciated. I'm a little out of my depth here.
Lauren M. Oldfield
Dept. of Biological Sciences
University of Pittsburgh
376 Crawford Hall
4249 Fifth Ave
Pittsburgh, PA 15260
I'm trying to start a visualizer with two different files to show a person
where the variants are. Can I start IGV or something like it with multiple
files? I read the docs and didn't find anything about it. I also asked here
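For what it's worth, IGV can load any number of tracks at once from a batch
script run with igv.sh -b; a minimal sketch, assuming an hg19 genome and
placeholder file paths:

    new
    genome hg19
    load /path/to/sample1.vcf
    load /path/to/sample2.bam
    goto chr1:10,000-20,000

The load command can be repeated for as many files as needed.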
I am analyzing some targeted resequencing data. I was trying to use the Variant
Annotation tool under GATK after getting a .VCF file, but the following error
was returned. Would you please help me figure out what happened? Or can this
tool only be used with certain model organisms? Thanks a lot!
"An error occurred with this dataset: Picked up _JAVA_OPTIONS:
-Djava.io.tmpdir=/space/g2main [Wed Sep 25 03:59:41 EDT 2013]