I'd like to launch a Galaxy CloudMan head node which does not itself accept SGE
jobs, so that submitted jobs go to compute nodes (or cause a compute node
to be added when auto-scaling is on).
Primarily, this is because I'd like the head node to be a cheaper instance
which can run long term, and only fire up more expensive compute nodes as
they are actually needed.
How would I enable this?
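One way to do this with plain SGE (outside of anything CloudMan does for you) is to stop the scheduler from placing jobs on the head node. This is only a sketch; the hostname "master" and the queue name "all.q" are assumptions that may differ on your instance:

```shell
# Assumption: the head node's SGE hostname is "master" and the default
# queue is "all.q" -- check with `qconf -sql` and `qconf -shgrpl`.

# Option 1: set the head node's slot count to zero in all.q,
# so no jobs are ever scheduled there:
qconf -mattr queue slots '[master=0]' all.q

# Option 2: remove the head node from the @allhosts host group entirely:
qconf -dattr hostgroup hostlist master @allhosts

# Verify -- the master's queue instance should show 0 slots (or be gone):
qstat -f
```

Note that CloudMan may rewrite SGE configuration when it scales, so a change like this could need to be reapplied or hooked into its configuration templates.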
Dear Galaxy admins and users,
I have generated a BAM file from my RNA-seq data using Bowtie with a custom reference, followed by Filter SAM and SAM-to-BAM conversion. I wish to run Cufflinks, but it just gives me an empty file.
How should I proceed?
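A few quick checks may narrow this down before re-running Cufflinks. This is a sketch; the filenames `aligned.bam` and `genes.gtf` are placeholders for your actual datasets:

```shell
# Confirm the BAM actually contains mapped reads:
samtools flagstat aligned.bam

# Cufflinks expects a coordinate-sorted BAM; check the header's sort order:
samtools view -H aligned.bam | grep '@HD'    # look for SO:coordinate

# Make sure the chromosome names in the BAM match those in the annotation;
# a "chr1" vs "1" mismatch silently yields empty Cufflinks output:
samtools view -H aligned.bam | grep '@SQ' | head
cut -f1 genes.gtf | sort -u | head
```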
I have been trying to get an instance of Galaxy going on EC2. I have no problem going through BioCloudCentral and getting an instance going. I can also successfully load my data from an S3 bucket into Galaxy. The problem occurs when I try to use velveth: the job always says 'job waiting to run'. As a matter of curiosity, I then used SSH to get into CloudBioLinux, which worked. However, when I try to use NX to get the virtual desktop going, I get the message: /usr/bin/nxserver: line 381: echo: write error: No space left on device.
Using the df -h command, I get:
Filesystem      Size  Used  Avail  Use%  Mounted on
/dev/xvda1       20G   19G     0   100%  /
udev            8.4G  4.0K  8.4G     1%  /dev
tmpfs           3.4G  660K  3.4G     1%  /run
none            5.0M     0  5.0M     0%  /run/lock
none            8.4G     0  8.4G     0%  /run/shm
/dev/xvdb       404G  202M  383G     1%  /mnt
/dev/xvdg1      700G  654G   47G    94%  /mnt/galaxyIndices
/dev/xvdg2       10G  1.7G  8.4G    17%  /mnt/galaxyTools
/dev/xvdg3      200G   11G  190G     6%  /mnt/galaxyData
So, I guess my question as a new user is: How do I point Galaxy and CloudBioLinux to all of this unused space? I assume the problem is with the /dev/xvda1 that is 100% full. I am obviously doing something silly and/or missing a really big step. Any help would be greatly appreciated.
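Since / (the 20G /dev/xvda1 root volume) is at 100% while /mnt has hundreds of gigabytes free, the first step is finding what filled the root volume and relocating it. This is a sketch under assumptions: the culprit directory is unknown, and /tmp below is only a hypothetical example:

```shell
# Find the largest directories on the full root filesystem
# (-x stays on one filesystem, so /mnt/* is not counted):
sudo du -xh --max-depth=2 / 2>/dev/null | sort -rh | head -20

# Hypothetical example: if /tmp turns out to be the culprit, relocate it
# onto the large ephemeral disk at /mnt and symlink it back:
sudo mv /tmp /mnt/tmp
sudo ln -s /mnt/tmp /tmp
```

Once / has free space again, NX should start, and stuck jobs that needed scratch space on the root volume may begin to run.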
Many thanks in advance,
I wanted to build a workflow where the first step would be a tool that
can take a variable number of inputs (e.g. a series).
It seems that I need to predefine, a priori, the number of inputs to give
to the tool.
Is there a way to define the number of inputs at runtime,
e.g. a series-like input?
SAM-to-BAM (version 1.1.2) says that it produces an indexed BAM file
based on a sorted input SAM file. When I go from SAM to BAM using
SAMtools on my computer (not using Galaxy), I first convert the SAM to
BAM and then sort the BAM. I was wondering: on Galaxy, do you need to
first sort the SAM file and then use SAM-to-BAM to get the same result?
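For comparison, here is a sketch of both routes on the command line. The filenames are placeholders, and the `samtools sort` invocation uses the old 0.1.x syntax (output prefix rather than -o), which matches the tooling of that era:

```shell
# Route 1 (what you do locally): convert first, then coordinate-sort the BAM.
samtools view -bS input.sam > unsorted.bam
samtools sort unsorted.bam sorted      # writes sorted.bam (0.1.x syntax)
samtools index sorted.bam

# Route 2 (what the Galaxy tool's description implies): sort the SAM by
# coordinate first (reference name, then position), then convert.
grep '^@'  input.sam > sorted.sam                      # keep the header
grep -v '^@' input.sam | sort -k3,3 -k4,4n >> sorted.sam
samtools view -bS sorted.sam > sorted_too.bam
```

Either way the records end up in coordinate order, so the resulting BAMs should be equivalent for downstream tools.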
This is a reminder that there is only a little over two days left to
get in on early registration for the April 2012 GMOD meeting. In
addition to getting a $10 discount on the registration fee, you will
be entered in a drawing to get a GMOD coffee cup or T-shirt. There
are some very good speakers and topics lined up for the meeting; I'm
looking forward to a good one. Please see:
for more information on the meeting, and to:
to register for the meeting. I look forward to seeing you next month.
Scott Cain, Ph. D. scott at scottcain dot net
GMOD Coordinator (http://gmod.org/) 216-392-3087
Ontario Institute for Cancer Research
I've been trying to get Galaxy to recognize this GFF from NCBI ( ftp://ftp.ncbi.nih.gov/genomes/Bacteria/Lactobacillus_reuteri_JCM_1112_ui...) but it failed to recognize the format after I uploaded it. Setting the format manually didn't work either, because it gave me an "unable to set metadata" error as soon as I started a Cufflinks run using that GFF. I have tried to reformat the file several times and even tried using the popular bp_genbank2gff3.pl script to re-parse the records from the original GenBank file.
Would anyone kindly look at the NCBI GFF and guide me to a solution to get this file recognized by Galaxy? I've been stuck for a couple of weeks now and would appreciate some suggestions. Thank you!
Peera Hemarajata, M.D.
Advanced graduate student - Versalovic lab
Department of Molecular Virology and Microbiology - Baylor College of Medicine
Department of Pathology - Texas Children's Hospital
Suite 830, 8th Floor Feigin Center. Tel: 832-824-8245
Hi everyone. I've set up a Galaxy production environment and it seems to be working.
I've identified a couple of issues, though:
If I enable apache_xsendfile, the little "eye" icon ("Display data in browser") on each dataset stops working. I get the following instead:
The requested URL /datasets/7e93e3cf926fb2b3/display/ was not found on this server
My document root is not the same as the galaxy root (as suggested). Everything else seems to work. Is this a bug?
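For what it's worth, a common cause of that 404 with mod_xsendfile is either Apache not proxying /datasets URLs through to Galaxy, or mod_xsendfile not being allowed to read the dataset paths Galaxy hands back. A sketch of the relevant directives, where the Galaxy install path is only a guess:

```apache
# Inside the VirtualHost that proxies Galaxy:
XSendFile On
# mod_xsendfile refuses to serve paths outside an allowed prefix;
# whitelist the Galaxy datasets directory (adjust to your install):
XSendFilePath /home/galaxy/galaxy-dist/database/files
```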
I want to rotate the Galaxy log file (as specified with paster.py serve --log-file). I can't find a way to reopen the logfile in Galaxy without taking down the whole service, or a way to redirect the log to a pipe while using --daemon.
I cannot let the file grow forever.
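Since there is no signal to make the daemon reopen its log, one workaround is logrotate's copytruncate mode, which copies the log and then truncates the original in place, so paster.py keeps writing to the same open file descriptor. A sketch, where the log path is only a guess; e.g. as /etc/logrotate.d/galaxy:

```
/home/galaxy/galaxy-dist/paster.log {
    daily
    rotate 14
    compress
    missingok
    notifempty
    # copy the log then truncate it in place, so the daemon can keep
    # writing to its already-open file descriptor:
    copytruncate
}
```

The trade-off is a small window during the copy in which log lines can be lost, which is usually acceptable for a service log.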
This has been raised a number of times .. one issue is at ...
but seems to have been dropped from Open to On Hold. There was another
similar request in the issues list but I can't find it just now.
I'd be glad to know if anything is happening on the idea of simple progress
indicators. The first things that were raised by my "users" were: 1) I don't
know how long a data upload is going to take, and 2) I don't know what's
happening with my tool execution. (And these were both operations of only
5-10 minutes or so; they'd go crazy with the multi-GB uploads and
hours-long runs I see reported here.)
I've implemented my own email-based scheme on an AWS EC2 CloudMan Galaxy,
and it seems to be working OK. It uses an Ubuntu package, sendEmail, which is
a self-contained, simple SMTP email sender. It sends progress emails out of
the main Perl tool code, via a bash script, every 1 to N minutes
(selectable), to an email address supplied by the user. The emails have
configurable subject and content, with a brief display of Start, Progress
(every N minutes) and Success/Failure termination status.
You have to jump through some hurdles: AWS thinks you're a spammer and
blocks emails after a while! But you can submit a request to get around it.
If anyone's interested, let me know.
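The scheme described above could be sketched roughly as follows. This is not the author's actual script; every value (log path, addresses, SMTP relay, the TOOL_PID variable) is a placeholder:

```shell
#!/bin/bash
# Hypothetical progress mailer: emails the tail of a tool's log every
# N minutes until the tool process exits. TOOL_PID is assumed to be
# exported by the wrapper that launched the tool.
LOG=/mnt/galaxyData/tmp/mytool.log
TO=user@example.org
INTERVAL_MIN=5

while kill -0 "$TOOL_PID" 2>/dev/null; do
    sendEmail -f galaxy@example.org -t "$TO" \
              -u "Tool progress" \
              -m "$(tail -5 "$LOG")" \
              -s smtp.example.org:25
    sleep $(( INTERVAL_MIN * 60 ))
done
```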
Port Jackson Bioinformatics
I am having a problem with the public Galaxy server. When I try to go to the User -> Custom Builds page, I see the following error:
An error occurred. See the error logs for more information. (Turn debug on to display exception reports here)
I had previously created a custom build but when I try to use it, for example in a BWA/Illumina alignment, I get the following error:
Error executing tool: maximum recursion depth exceeded while calling a Python object
I believe this is a continuation of a previous thread that was started before I joined the list:
Thanks for any help.
David A. C. Beck, Ph.D.
University of Washington, Seattle