Hello,

Today I was routinely adding a 27 GB Illumina lane to my Galaxy instance running on a cluster node. It is just a regular cloned-from-hg instance with set_metadata_externally enabled, no further tuning. It took more than 10 minutes to import the dataset into a data library via the "filesystem path" upload method... not copying it into Galaxy, just "linking" it:

galaxy.jobs INFO 2011-09-19 18:05:08,641 job 120 dispatched
(...)
galaxy.jobs DEBUG 2011-09-19 18:16:52,822 job 120 ended
galaxy.datatypes.metadata DEBUG 2011-09-19 18:16:52,824 Cleaning up external metadata files

Since I cannot add datasets to libraries on usegalaxy.org to compare, I was wondering if someone could state an approximate average time for that operation on a *production* Galaxy installation. I would like some empirical numbers to show how a production deployment [1] could speed things up, as opposed to running individual Galaxy instances per user on a cluster (as required by IT policies):

http://blogs.nopcode.org/brainstorm/2011/08/22/galaxy-on-uppmax-simplified/

Thanks in advance!

Roman

[1] http://usegalaxy.org/production
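P.S. For context on why the timing surprised me: a minimal sketch, not Galaxy's actual code path, of the difference between the two costs involved. Creating the link itself is a constant-time filesystem operation, so most of the 10+ minutes is presumably spent reading the file to set datatype metadata (line counts, etc.), which scales with file size. The function names below are hypothetical, for illustration only:

```python
import os
import tempfile
import time


def link_dataset(src, dst):
    """Linking is O(1): only a new directory entry is created,
    no data is read or copied."""
    os.symlink(src, dst)


def naive_set_metadata(path):
    """Metadata detection has to scan the whole file,
    so it is O(file size) -- the plausible bottleneck."""
    lines = 0
    with open(path, "rb") as fh:
        for _ in fh:
            lines += 1
    return {"data_lines": lines}


if __name__ == "__main__":
    with tempfile.TemporaryDirectory() as tmp:
        # Stand-in for the 27 GB lane: a small synthetic FASTQ file.
        src = os.path.join(tmp, "lane.fastq")
        with open(src, "wb") as fh:
            fh.write(b"@read\nACGT\n+\nIIII\n" * 100_000)

        t0 = time.perf_counter()
        link_dataset(src, os.path.join(tmp, "library_link.fastq"))
        link_secs = time.perf_counter() - t0

        t0 = time.perf_counter()
        meta = naive_set_metadata(src)
        scan_secs = time.perf_counter() - t0

        print(f"link: {link_secs:.6f}s  scan: {scan_secs:.6f}s  meta: {meta}")
```

On a 27 GB file the scan step dominates completely, which is why I suspect the import time reflects metadata setting rather than the link operation itself.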