I am writing a wrapper for makeblastdb. The problem is that Galaxy doesn't support the blastdb data format, so the format has to be added before a makeblastdb wrapper can work properly.
I am having trouble modifying registry.py. Could anyone specify which changes need to be made?
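For what it's worth, in current Galaxy releases registry.py itself usually does not need editing: it reads datatype definitions from datatypes_conf.xml, so a new format can often be registered there instead. A minimal sketch, assuming that mechanism (the "blastdbn" extension and the BlastNucDb class name are illustrative, not an existing implementation):

```xml
<!-- Hypothetical entry inside the <registration> section of datatypes_conf.xml.
     registry.py loads this file at startup, so adding an entry here registers
     the new format. The datatype class itself (e.g. a composite datatype
     holding the .nhr/.nin/.nsq files) would live under lib/galaxy/datatypes/. -->
<registration>
  <datatype extension="blastdbn"
            type="galaxy.datatypes.blast:BlastNucDb"
            display_in_upload="false"/>
</registration>
```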
Jobs that I start on my in-house Galaxy instance now take up to 3-4 minutes to go from Queued to Running, even though there is not much going on on the Galaxy server...
I have been running this instance since June and am using a relatively new version of galaxy-central (last updated 22-Aug, changeset 7535:bf6517b2b336).
I have noticed that my jobs table in the Galaxy Postgres database contains about 60,000 jobs...
Could that be the culprit? Does it scan the complete jobs table to see if there are any jobs that need to run?
Could I purge the jobs and related tables to speed up submissions?
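As a self-contained illustration of why table size can matter here (SQLite stands in for Postgres purely for the demo; this is not Galaxy code): without an index on the job state column, the "find jobs that need to run" query has to scan every row, while an index turns it into a b-tree lookup.

```python
# Sketch: how an index changes the plan for a "find new jobs" query
# on a 60,000-row job table. Table and column names are illustrative.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE job (id INTEGER PRIMARY KEY, state TEXT)")
conn.executemany("INSERT INTO job (state) VALUES (?)", [("ok",)] * 60000)
conn.execute("INSERT INTO job (state) VALUES ('new')")

query = "SELECT id FROM job WHERE state = 'new'"

# Without an index, the planner must scan all ~60,000 rows:
before = conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()

# With an index on state, the same query becomes an index lookup:
conn.execute("CREATE INDEX ix_job_state ON job (state)")
after = conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()
```

So before purging anything, it may be worth checking (with EXPLAIN in Postgres) whether the slow path is really a full scan of the jobs table.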
I've written a MrBayes wrapper that accepts a Nexus file (presumably containing a data block and a command block). In order to make it a bit easier for new users, I was hoping to be able to let users specify some parameters, such as their model, in Galaxy, as opposed to including it in the command block of the Nexus file. Am I correct in thinking this is not possible since MrBayes does not have a way to invoke commands in-line?
To clarify for non-MrBayes people, my limited understanding is that you either invoke the interpreter with "$ mb" and then enter your commands, or directly execute a Nexus file (containing the data and commands) with "$ mb &lt;nexusfilename&gt;".
Please correct me if I'm missing something or thinking about this completely wrong. I'd appreciate any input about MrBayes or a similar kind of tool.
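One workaround I can imagine (a sketch under assumptions, not a statement of how your wrapper must work): have the wrapper generate a second Nexus file whose mrbayes command block is assembled from the Galaxy-supplied parameters and which `execute`s the user's data file, then run "mb" on that generated file non-interactively. The function and parameter names below are illustrative.

```python
# Hypothetical helper for a Galaxy MrBayes wrapper: build a small Nexus
# "driver" file from tool parameters instead of requiring the user to put
# the commands in their own file. lset/mcmc/execute are real MrBayes
# commands; the defaults shown are arbitrary.
import tempfile
import textwrap

def build_command_nexus(data_nexus_path, nst=6, rates="gamma", ngen=10000):
    """Return the path of a Nexus file that loads the user's data file and
    runs MrBayes commands assembled from Galaxy-supplied parameters."""
    block = textwrap.dedent(f"""\
        #NEXUS
        begin mrbayes;
            execute {data_nexus_path};
            lset nst={nst} rates={rates};
            mcmc ngen={ngen};
        end;
        """)
    handle = tempfile.NamedTemporaryFile("w", suffix=".nex", delete=False)
    handle.write(block)
    handle.close()
    return handle.name

# The wrapper would then invoke: mb <returned path>
path = build_command_nexus("aligned.nex", nst=2, rates="invgamma", ngen=50000)
```

This keeps the user's Nexus file untouched while still letting Galaxy expose the model settings as form parameters.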
Agriculture and Agri-Food Canada | Agriculture et Agroalimentaire Canada
KW Neatby Bldg | éd. KW Neatby
960 Carling Ave| 960, avenue Carling
Ottawa, ON | Ottawa (ON) K1A 0C6
E-mail Address / Adresse courriel: ann.gomez(a)agr.gc.ca
Telephone | Téléphone 613-759-6805
Facsimile | Télécopieur 613-759-1701
Government of Canada | Gouvernement du Canada
I've been using Galaxy since June 2012. (Sorry if there's already a fix.)
We've got it "working" in production. We've gotten whole pipelines to run.
However, we occasionally get situations where we upload a file (using the
FTP mechanism), which seems to go fine, but then I can't get to the data.
I went to Saved Histories, and selected Switch, and it outlined the line
in blue and wrote "current history" next to it. But the right pane still
shows "Unnamed history" with no data in it. Then if I go back to Saved
Histories, I get one or two new Unnamed histories, created within the last minute.
I just tried to "View" the history, which worked (in the middle pane) and
clicked "import and start using history". This seemed to work, but I got
three panes inside the middle pane! When I go back (again) to saved
histories, there are 3 histories - one the imported one with 2 steps, two
unnamed histories, all created < 1 minute ago.
We just asked a beta tester to play with things, and he uploaded two
fastqs, but had what sounds like a similar problem.
Any thoughts on what's happening?
Harvard Medical School
Dear galaxy developers--
I'd like to bring your attention to the call for participation included below this message. A "Phylotastic" system is a distributed tree-of-life delivery system composed of web services. Although it is not readily apparent from the presentation linked below, the project includes some work on using Galaxy as a framework for managing a phylotastic workflow. See the demos page (http://phylotastic.org/demos.html) and the "Making Galaxy Phylotastic!" screencast (http://youtu.be/kMME658xOu4) for more.
I would be happy to respond to questions about the project. Please consider applying if you are excited by the project and think that you can contribute. Regards,
Phylogenies! Hacking! Tucson in January!
An implicit promise of the Tree of Life project is that, ultimately, expert knowledge of species phylogeny will be accessible and usable by everybody. In other words, we will all be able to get the species trees we need, in a usable form, when we need them. The Phylotastic project aims to make that vision a reality, by developing a loosely coupled system of components that, in response to a user's query (a list of taxa plus conditions), will rectify names, find suitable source trees, prune and graft to get the right species, estimate branch lengths, and return the results with metadata on sources and methods.
In June 2012, the HIP (Hackathons, Interoperability, Phylogenies) working group of NESCent staged a hackathon (http://en.wikipedia.org/wiki/Hackathon) to prototype components of a Phylotastic system, and implement demos to show their potential. Thanks to 25 participants who responded to an open call (just like this one), that hackathon was an extraordinary success (see http://phylotastic.org/).
Now we are gearing up for a second hackathon, to take another step toward providing computable, convenient, credible access to the Tree of Life. We aim to recruit a diverse group of participants, including students, faculty, postdocs, and staff of both genders and from different backgrounds. Only some of the seats at the hackathon will be filled by expert hackers who spend their time coding. We also need folks who can:
* Provide knowledge of workflows and downstream uses of trees
* Design user interfaces
* Test software
* Make screencasts, develop tutorials, and document user experiences
Applications are now being accepted to Phylotastic 2, which will take place January 28th, 2013 to February 1st, 2013 at the BIO5 facility in warm and sunny Tucson, AZ (http://bio5.arizona.edu/about-bio5). Information on how to apply is provided below. Partial support (and, if funds allow, full support) for travel, food and lodging will be available to each successful applicant who indicates need. We particularly welcome applications from women and members of underrepresented groups.
To understand how you might be able to contribute, review the slideshow (ppt format, http://bit.ly/RWRgIc, or PDF format, http://bit.ly/POaoci), consult the web site (phylotastic.org) or contact a member of the HIP leadership team (listed below).
the HIP (Hackathons, Interoperability and Phylogenies) Leadership Team
HOW TO APPLY
Before midnight EST on 16 November 2012, fill out the online form at http://tinyurl.com/PhyloTastic2. Note that:
* The application form asks you to describe how you can contribute to Phylotastic. Remember that we are not just looking for expert coders. There are many other ways to contribute.
* Support for travel, housing and meals will be available to each successful applicant who indicates need.
* Please understand that your application is not a guarantee of participation in the event. Funds and space are limited: we may not be able to sponsor every individual who is qualified to participate.
Karen Cranston, karen.cranston(a)nescent.org
Brian Sidlauskas, brian.sidlauskas(a)oregonstate.edu
Arlin Stoltzfus, arlin(a)umd.edu
Mike Rosenberg mrosenb(a)asu.edu
Brian O’Meara, bomeara(a)utk.edu
Mark Westneat mwestneat(a)fieldmuseum.org
Enrico Pontelli epontell(a)cs.nmsu.edu
Rutger Vos rutgeraldo(a)gmail.com
Naim Matasci nmatasci(a)iplantcollaborative.org
Arlin Stoltzfus (arlin(a)umd.edu)
Fellow, IBBR; Adj. Assoc. Prof., UMCP; Research Biologist, NIST
IBBR, 9600 Gudelsky Drive, Rockville, MD, 20850
tel: 240 314 6208; web: www.molevol.org
I am a JBrowse dev hoping to add the ability to export data directly from
JBrowse to Galaxy (rather than saving the file locally, opening Galaxy, and
loading the file from the drive). I have looked through the wiki page on Data,
but JBrowse is client-side only. I was wondering if anybody knew of a
different way to do this, or had any pointers.
After reading a message on this mailing list about the job splitter, I began
to investigate what it is and how it is used. Unfortunately, I have been unable
to find any documentation for it on your website.
Am I blind and missing it, or is it yet to be properly documented?
Sorry if this turns out to be a pointless exercise, but it would be
extremely useful for my Galaxy development.
Using a large Amazon instance.
I am trying to do an interval join of SNPs output from pileup (120,000 regions, 5.5 MB) with snp135Common (12,000,000 rows, 425 MB) and get the following errors. The goal is to pick up rs IDs for known SNPs in the list of SNPs.
Is this a memory issue?
I was able to do the operation against chr1 as a test. I thought about chaining the outputs and running the join against a separate file for each chromosome to keep the files smaller, but then I have a mess where the rs IDs end up in different columns.
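A sketch of the per-chromosome idea outside Galaxy (this is not the gops_join.py implementation; the BED-like tuple layout is an assumption): group the dbSNP intervals by chromosome, sort each group by start, and sweep. The rs ID stays attached to its own record, so the column-drift problem goes away, and for 12M rows you could process one chromosome's file at a time with the same join logic.

```python
# Minimal per-chromosome interval join sketch. Intervals are
# (chrom, start, end, label) tuples, 0-based half-open (an assumption,
# not the exact gops format).
from collections import defaultdict

def interval_join(snps, dbsnp):
    """Yield (snp record, overlapping dbsnp record) pairs."""
    by_chrom = defaultdict(list)
    for rec in dbsnp:
        by_chrom[rec[0]].append(rec)
    for chrom in by_chrom:
        by_chrom[chrom].sort(key=lambda r: r[1])  # sort by start
    for snp in snps:
        for rec in by_chrom.get(snp[0], ()):
            if rec[1] >= snp[2]:
                break  # sorted by start: no later record can overlap
            if rec[2] > snp[1]:  # half-open overlap test
                yield snp, rec
```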
71: Join on data 38 and data 36
An error occurred running this job: /opt/sge/default/spool/execd/ip-10-191-53-90/job_scripts/14: line 13: 5517 Killed python /mnt/galaxyTools/galaxy-central/tools/new_operations/gops_join.py /mnt/galaxyData/files/000/dataset_75.dat /mnt/galaxyData/files/000/dataset_77.dat