Hi Rich,

This is a good question! Many people have been asking about this.

To download data from the unix command line, please try wget or curl, for example:

unix% wget 'url_for_the_dataset'

or

unix% wget 'url_for_the_history'

To capture the URL for a dataset, right-click on the disk icon for that dataset and select "copy link location". To capture the URL for an entire history, select "Options -> Export to File"; the middle panel will display a link.

A downloaded history can be loaded into a local Galaxy instance, where the datasets can be managed (copied/renamed) or the histories archived.

Hopefully this helps you and others who are managing larger datasets & histories,

Best,

Jen
Galaxy team
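If you have many datasets to fetch, a short sketch along these lines may help (the file names and dataset_urls.txt are only placeholders for illustration; substitute the links you copy from the Galaxy interface):

unix% wget -O sample1_renamed.tabular 'url_for_the_dataset'   # save a single dataset under a name of your choosing
unix% curl -o sample1_renamed.tabular 'url_for_the_dataset'   # curl equivalent

unix% wget -i dataset_urls.txt   # bulk download: put one copied URL per line in dataset_urls.txt

Using -O (wget) or -o (curl) also lets you give each file a meaningful name at download time, rather than renaming datasets one by one inside Galaxy.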
On 11/7/11 7:50 AM, Richard Mark White wrote:

Hi,

I am using the Galaxy public server. Is there a way to access output files (via ftp, perhaps) so I can bulk download them to my computer? I am over my quota and want to get data off of Galaxy, but I would prefer not to do this one file at a time. Similarly, is there a way to access a directory (via unix, ftp, etc.) to rename files quickly while they are on Galaxy? Renaming each output file within Galaxy (e.g. the multiple files output from cuffdiff) is very inefficient and time consuming.
Thanks.
Rich
--
Jennifer Jackson
http://usegalaxy.org
http://galaxyproject.org/wiki/Support