Tool shed and datatypes
Can we introduce new file types via tools in the tool shed? It seems Galaxy can load them if they are in the datatypes configuration file. Does tool installation automate the editing of that file?

John Duddy Sr. Staff Software Engineer Illumina, Inc. 9885 Towne Centre Drive San Diego, CA 92121 Tel: 858-736-3584 E-mail: jduddy@illumina.com
Hello John, The Galaxy tool shed currently is not enabled to automatically edit the datatypes_conf.xml file, although I could add this feature if the need exists. Can you elaborate on what you are looking to do regarding this? Thanks!
___________________________________________________________ Please keep all replies on the list by using "reply all" in your mail client. To manage your subscriptions to this and other Galaxy lists, please use the interface at: http://lists.bx.psu.edu/
Greg Von Kuster Galaxy Development Team greg@bx.psu.edu
One of the things we're facing is the sheer size of a whole human genome at 30x coverage. An effective way to deal with that is by compressing the FASTQ files. That works for BWA and our ELAND, which can directly read a compressed FASTQ, but other tools crash when reading compressed FASTQ files. One way to address that would be to introduce a new type, for example "CompressedFastQ", with a conversion to FASTQ defined. BWA could take both types as input. This would allow the best of both worlds - efficient storage and use by all existing tools.

Another example would be adding the CASAVA tools to Galaxy. Some of the statistics generation tools use custom file formats. To be able to make the use of those tools optional and configurable, they should be separate from the aligner, but that would require that Galaxy be made aware of the custom file formats - we'd have to add a datatype.

John Duddy
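To make the datatype-plus-converter idea concrete, an entry in Galaxy's datatypes_conf.xml looks roughly like the sketch below. The CompressedFastq class and the converter file named here are hypothetical, purely for illustration; only the element/attribute shape follows the real file:

```xml
<datatype extension="fastqcompressed"
          type="galaxy.datatypes.sequence:CompressedFastq"
          display_in_upload="true">
    <!-- Hypothetical converter: lets tools that only accept fastqsanger
         consume the compressed datatype via implicit conversion. -->
    <converter file="fastqcompressed_to_fastqsanger_converter.xml"
               target_datatype="fastqsanger"/>
</datatype>
```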
On Thu, Oct 6, 2011 at 4:48 AM, Duddy, John <jduddy@illumina.com> wrote:
One of the things we're facing is the sheer size of a whole human genome at 30x coverage. An effective way to deal with that is by compressing the FASTQ files. That works for BWA and our ELAND, which can directly read a compressed FASTQ, but other tools crash when reading compressed FASTQ files. One way to address that would be to introduce a new type, for example "CompressedFastQ", with a conversion to FASTQ defined. BWA could take both types as input. This would allow the best of both worlds - efficient storage and use by all existing tools.
We'd discussed this and a more general approach where any file could be gzipped, but the code to do that doesn't exist yet: http://lists.bx.psu.edu/pipermail/galaxy-dev/2011-September/006745.html

Issue filed: https://bitbucket.org/galaxy/galaxy-central/issue/666/

That seems a better long-term solution than the pragmatic short-term solution of fastqsanger-gzip (or whatever it gets called). Note that it sounded like Edward Kirton might already be using this - you should be consistent.

The other strong idea from that thread was moving from FASTQ to unaligned BAM, which is gzip compressed, and has explicit support for paired-end reads, read groups, etc.

Peter
I'd be up for something like that, although I have other tasking in the short term after I finish my parallelism work. I'd rather not have Galaxy do the compression/decompression, because that will not effectively utilize the distributed nature of many filesystems, such as Isilon, that our customers use. My parallelism work (second phase almost done) handles that by using a block-gzipped format and index files that allow the compute nodes to seek to the blocks they need and extract from there.

Another thing that should probably go along with this is an enhancement to metadata such that it can be fed in from the outside. We upload files by linking to file paths, and at that point, we know everything about the files (index information). So there's no need to decompress a 500GB file and read the whole thing just to count the lines - all you have to do is ask ;-}

John Duddy
On Thu, Oct 6, 2011 at 3:48 PM, Duddy, John <jduddy@illumina.com> wrote:
I'd be up for something like that, although I have other tasking in the short term after I finish my parallelism work. I'd rather not have Galaxy do the compression/decompression, because that will not effectively utilize the distributed nature of many filesystems, such as Isilon, that our customers use.
Is that like a compressed filesystem, where there is probably less benefit to storing the data gzipped?
My parallelism work (second phase almost done) handles that by using a block-gzipped format and index files that allow the compute nodes to seek to the blocks they need and extract from there.
How similar is your block-gzipped approach to BGZF used in BAM?
Another thing that should probably go along with this is an enhancement to metadata such that it can be fed in from the outside. We upload files by linking to file paths, and at that point, we know everything about the files (index information). So there's no need to decompress a 500GB file and read the whole thing just to count the lines - all you have to do is ask ;-}
I can see how that might be useful. Peter
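A minimal sketch of the "metadata fed in from the outside" idea: a hypothetical sidecar file written at link-time records counts so nothing has to be decompressed later. The file layout and function names are assumptions for illustration, not Galaxy's actual metadata mechanism:

```python
import json
import os

# Hypothetical sidecar written when a file is linked into Galaxy, at which
# point the uploader already knows everything about the data.
def write_sidecar(fastq_gz_path, n_sequences):
    meta = {"sequences": n_sequences,
            "compressed_bytes": os.path.getsize(fastq_gz_path)}
    with open(fastq_gz_path + ".meta.json", "w") as fh:
        json.dump(meta, fh)

# Later, metadata collection just asks the sidecar instead of decompressing
# a 500GB file to count lines.
def sequence_count(fastq_gz_path):
    with open(fastq_gz_path + ".meta.json") as fh:
        return json.load(fh)["sequences"]
```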
As I understand it, Isilon is built up from "bricks" that have storage and compute power. They replicate files amongst themselves, so that for every IO request there are multiple systems that could respond. They are interconnected by an ultra-fast fibre backbone. So, depending on your topology, it's possible to get a lot more throughput by working on different sections of the same file from different physical computers.

I haven't delved into BGZF, so I can't comment. My approach to block GZIP was just to concatenate multiple GZIP files and keep a record of the offsets and sequences contained in each. The advantage is compatibility, in that it decompresses just like it was one big chunk, yet you can compose subsets of the data without decompressing/recompressing and (as long as we actually have to write out the file subsets) can reap the reduced IO benefits of smaller writes.

John Duddy
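The block-GZIP layout described above can be sketched as follows: each chunk becomes a complete gzip member, a side index records member offsets, and a subset is composed by copying raw compressed bytes. The function names and index layout are illustrative, not the actual Illumina implementation:

```python
import gzip
import io

def pack(chunks):
    """Concatenate one gzip member per chunk; return the blob and an
    (offset, length) index for every member."""
    out = io.BytesIO()
    index = []
    for chunk in chunks:
        start = out.tell()
        out.write(gzip.compress(chunk))
        index.append((start, out.tell() - start))
    return out.getvalue(), index

def extract(blob, index, i):
    """Cut member i out by copying raw compressed bytes -- no recompression."""
    start, length = index[i]
    return blob[start:start + length]

chunks = [b"@r1\nACGT\n+\nIIII\n", b"@r2\nTTTT\n+\nIIII\n"]
blob, index = pack(chunks)

# The concatenation still decompresses like one big .gz stream...
assert gzip.decompress(blob) == b"".join(chunks)
# ...and any single block is itself a valid standalone .gz file.
assert gzip.decompress(extract(blob, index, 1)) == chunks[1]
```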
On Thu, Oct 6, 2011 at 5:02 PM, Duddy, John <jduddy@illumina.com> wrote:
As I understand it, Isilion is built up from "bricks" that have storage and compute power. They replicate files amongst themselves, so that for every IO request there are multiple systems that could respond. They are interconnected by an ultra fast fibre backbone.
So why not use gzipped files on top of that? Smaller chunks of data to access so should be faster even with the decompression once it gets to the CPU.
So, depending on your topology, it's possible to get a lot more throughput by working on different sections of the same file from different physical computers.
Nice.
I haven't delved into BGZF, so I can't comment. My approach to block GZIP was just to concatenate multiple GZIP files and keep a record of the offsets and sequences contained in each. The advantage is compatibility, in that it decompresses just like it was one big chunk, yet you can compose subsets of the data without decompressing/recompressing and (as long as we actually have to write out the file subsets) can reap the reduced IO benefits of smaller writes.
That sounds VERY similar to BGZF - have a read over the SAM specification which covers this. Basically they stick the block size into the gzip headers, and the BAM index files (BAI) use a 64 bit offset which is split into the BGZF block offset and the offset within that decompressed block. See: http://samtools.sourceforge.net/SAM1.pdf Peter
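The 64-bit BAI offset split mentioned above can be sketched directly from the SAM spec's definition:

```python
# A BGZF "virtual offset" packs two numbers into 64 bits: the upper 48 bits
# are the file offset of the start of a compressed BGZF block (coffset), and
# the lower 16 bits are the position within that block's decompressed data
# (uoffset), which fits because a block inflates to at most 64kb.

def make_virtual_offset(coffset, uoffset):
    assert 0 <= uoffset < 1 << 16
    return (coffset << 16) | uoffset

def split_virtual_offset(voffset):
    return voffset >> 16, voffset & 0xFFFF

# Seeking means: jump to coffset, inflate that one block, skip uoffset bytes.
assert split_virtual_offset(make_virtual_offset(123456, 789)) == (123456, 789)
```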
GZIP files are definitely our plan. I just finished testing the code that distributes the processing of a FASTQ (or pair for PE) to an arbitrary number of tasks, where each subtask extracts just the data it needs without reading any of the file it does not need. It extracts the blocks of GZIPped data into a standalone GZIP file just by copying whole blocks and appending them (if the window is not aligned perfectly, there is additional processing). Since the entire file does not need to be read, it distributes quite nicely.

I'll be preparing a pull request for it soon.

John Duddy
On Thu, Oct 6, 2011 at 5:45 PM, Duddy, John <jduddy@illumina.com> wrote:
GZIP files are definitely our plan. I just finished testing the code that distributes the processing of a FASTQ (or pair for PE) to an arbitrary number of tasks, where each subtask extracts just the data it needs without reading any of the file it does not need. It extracts the blocks of GZIPped data into a standalone GZIP file just by copying whole blocks and appending them (if the window is not aligned perfectly, there is additional processing). Since the entire file does not need to be read, it distributes quite nicely.
I'll be preparing a pull request for it soon.
John Duddy
Hi John,

Is your pull request public yet? I'd like to know more about your GZIP-based plan (and how it differs from BGZF). It would seem silly to reinvent something slightly different if an existing and well-tested mechanism like BGZF (used in BAM files) would work. BGZF is based on GZIP with blocks each up to 64kb, where the block size is recorded in the GZIP block header. This may be more fine-grained than the block sizes you are using, but should serve equally well for distribution of data chunks between machines/cores.

I appreciate that the SAM/BAM specification where BGZF is defined is quite dry reading, and the broad potential of this GZIP variant beyond BAM is not articulated clearly. So I've written a blog post about how BGZF can be used for efficient random access to sequential files (in the sense of one self-contained record after another, e.g. many sequence file formats including FASTA & FASTQ): http://blastedbio.blogspot.com/2011/11/bgzf-blocked-bigger-better-gzip.html

I've also added a reference to BGZF on the open Galaxy feature request for general support of gzipped data types: https://bitbucket.org/galaxy/galaxy-central/issue/666/

Regards, Peter
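The "block size recorded in the GZIP block header" detail can be made concrete: BGZF stores the compressed block length (minus one) in a BC gzip extra subfield, so a reader can hop from block to block without inflating anything. A sketch following the SAM/BAM spec, simplified to assume BC is the only extra subfield:

```python
import struct
import zlib
import gzip

def bgzf_block(data):
    """Build one BGZF block: a conformant gzip member whose extra field
    carries the total block size (BSIZE)."""
    c = zlib.compressobj(6, zlib.DEFLATED, -15)  # raw DEFLATE, no wrapper
    payload = c.compress(data) + c.flush()
    bsize = 18 + len(payload) + 8  # fixed header+extra, payload, CRC+ISIZE
    header = struct.pack(
        "<4BI2BH2BHH",
        0x1F, 0x8B, 0x08, 0x04,  # gzip magic, DEFLATE, FEXTRA flag set
        0, 0, 0xFF,              # mtime, XFL, OS ("unknown")
        6,                       # XLEN: six bytes of extra field
        66, 67, 2,               # subfield id 'B','C', subfield length 2
        bsize - 1,               # BSIZE: total block size minus one
    )
    trailer = struct.pack("<2I", zlib.crc32(data), len(data) & 0xFFFFFFFF)
    return header + payload + trailer

def read_bsize(block):
    """Read a block's total size from its header without inflating it."""
    assert block[12:14] == b"BC"  # assumes BC is the first/only subfield
    return struct.unpack_from("<H", block, 16)[0] + 1

data = b"@r1\nACGT\n+\nIIII\n"
block = bgzf_block(data)
assert gzip.decompress(block) == data   # still a perfectly normal gzip stream
assert read_bsize(block) == len(block)  # block length recoverable from header
```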
It's not public yet, and it involves a little conundrum - we want it so we can support large amounts of data efficiently on a variety of aligners, including our ELAND from CASAVA. However, ELAND does not support unaligned BAM inputs yet, and apparently it would be a lot of work to make it so (and another team's area of responsibility as well). So in the near term, BGZF would not meet our needs. However, work is quite far along on a GZIP-based format that works with ELAND and BWA, since they both read GZIP FASTQ files, and works/will work with a converter to fastq_sanger for other tools. I can put you in touch with the engineer doing the work if you are interested.

John Duddy
On Tue, Nov 8, 2011 at 11:45 PM, Duddy, John <jduddy@illumina.com> wrote:
It's not public yet, and it involves a little conundrum - we want it so we can support large amounts of data efficiently on a variety of aligners, including our ELAND from CASAVA. However, ELAND does not support unaligned BAM inputs yet, and apparently it would be a lot of work to make it so (and another team's area of responsibility as well).
OK, so using (unaligned) BAM isn't about to happen.
So in the near term, BGZF would not meet our needs.
I don't follow you there, BAM != BGZF. We can use BGZF to compress FASTQ, FASTA, GenBank, basically anything. You get compression approaching that of plain GZIP (depending on the characteristics of the data) plus efficient random access.
However, work is quite far along on a GZIP-based one that works with ELAND and BWA, since they both read GZIP FASTQ files, and works/will work with a converter to fastq_sanger for other tools.
I can put you in touch with the engineer doing the work if you are interested.
That might be a good idea, or ask them to post here? Peter
Ahh - sorry. I finally found the format specification for BGZF in the SAM format specification, and it seems that it is 100% GZIP-compatible. There is still the issue of needing an external file index, since all BGZF seems to give you is the size of the compressed block, not anything format-specific, like the number of sequences in the block. In any case, whether it's GZIP or BGZF, it seems the solutions are very similar, and porting my work should be pretty simple - I just used larger blocks and put all the data in the index file and none in the headers.

John Duddy
BTW - the pull request for the GZIP-based splitting is actually integrated - I was referring to the GZIP-based datatype.

John Duddy
John,

I've been following this message thread, and it seems it's gone in a direction that differs from your initial question about the possibility of Galaxy handling automatic editing of the datatypes_conf.xml file when certain Galaxy tool shed tools are automatically installed. There are some complexities to consider in attempting this. One issue is that the work of adding support for a new datatype to Galaxy lies outside the intended function of the tool shed. If new support is added to the Galaxy code base, an entry for that new datatype should be manually added to the table at the same time. There may be benefits to enabling automatic changes to datatype entries that already exist in the file (e.g., adding a new converter for an existing datatype entry), but adding a completely new datatype to the file may not be appropriate. I'll continue to think about this - additional thoughts and feedback are always helpful.

Thanks!

Greg
participants (3)

- Duddy, John
- Greg Von Kuster
- Peter Cock