Dear list,
Our lab's proteomics data frequently comes out as >50 files, each
containing a different protein fraction. The files are stored locally
and are not present on the Galaxy server. We plan to merge these files
somehow (inside Galaxy) and then split the work into tasks so they can
be run on the cluster. We would either merge/split the files by
concatenation, or untar/tar files at every job, depending on file type
and tool. No problems so far.
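To be concrete, the merge/split I have in mind is roughly this (just a
sketch; the directory and member names are made up):

    import tarfile
    from pathlib import Path

    # Hypothetical layout: "fractions/" holds the >50 local fraction files.
    FRACTIONS = sorted(Path("fractions").iterdir())

    def merge_by_cat(out_path: Path) -> None:
        """Merge: concatenate line-oriented fraction files into one dataset."""
        with out_path.open("wb") as out:
            for frac in FRACTIONS:
                out.write(frac.read_bytes())

    def split_by_untar(archive: str, member: str, workdir: str = ".") -> None:
        """Split: a cluster job pulls just its own fraction out of a tar dataset."""
        with tarfile.open(archive, "r") as tar:
            tar.extract(member, path=workdir)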
However, I have been looking around for a way to upload >50 files to
Galaxy simultaneously and have them merged into a single dataset, and
this does not seem to be supported. Before starting on a hack to make
this work, which doesn't look trivial to me, I'd like to know whether I
should use data libraries instead. From what I've seen, libraries are
not treated as datasets in Galaxy but rather contain datasets. If there
were a "tar all sets in a library and import to history" option I'd be
using that, but I've only encountered "tar/zip sets and download
locally", which would be a rather roundabout workaround.
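The simplest hack I can think of on our end is to bundle everything
locally before upload, so one standard single-file upload suffices
(again just a sketch with made-up names), with a matching untar step in
each tool on the Galaxy side:

    import tarfile
    from pathlib import Path

    def bundle_for_upload(src_dir: str, archive: str = "fractions.tar") -> str:
        """Tar the >50 local fraction files so one standard upload suffices."""
        with tarfile.open(archive, "w") as tar:
            for path in sorted(Path(src_dir).iterdir()):
                tar.add(path, arcname=path.name)
        return archive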
I haven't found much on this subject in the mailing list archives; has
this functionality been requested before?
cheers,
jorrit boekel