Hello,

Metadata is not regenerated when cloning histories. Values should be deep-copied from the source datasets, with the exception of MetadataFiles, where a new file is created and the contents of the original file are copied into it; this is a shutil.copy() call in the framework, not a set_meta action. That copy could still be slow for certain datatypes (e.g. the bam datatype currently stores its indexes as metadata files). For some datatypes, peeks are regenerated (i.e. copy_safe_peek = False is set for the datatype), but this is uncommon.

It might also be worthwhile to verify that your database is working efficiently. If you can provide additional information about the datatypes, etc. that were in this history, we might be able to track down the issue.
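In rough pseudocode, the copy behavior described above looks something like the sketch below (a minimal illustration only; the MetadataFile stand-in class and the clone_dataset_metadata function are hypothetical names, not the actual Galaxy source):

import shutil
import tempfile
from copy import deepcopy

class MetadataFile:
    """Stand-in for a file-backed metadata value (e.g. a bam index)."""
    def __init__(self):
        # each metadata file gets its own backing file on disk
        self.file_name = tempfile.NamedTemporaryFile(delete=False).name

def clone_dataset_metadata(source, target, datatype):
    """Copy metadata onto a cloned dataset without running set_meta."""
    for name, value in source.metadata.items():
        if isinstance(value, MetadataFile):
            new_file = MetadataFile()                         # allocate a new metadata file
            shutil.copy(value.file_name, new_file.file_name)  # plain file copy, not a set_meta action
            target.metadata[name] = new_file
        else:
            target.metadata[name] = deepcopy(value)           # deep-copy plain metadata values

    # Peeks are only regenerated for the uncommon datatypes that set
    # copy_safe_peek = False; otherwise the stored peek is reused.
    if getattr(datatype, "copy_safe_peek", True):
        target.peek = source.peek
    else:
        datatype.set_peek(target)  # where a datatype would rebuild its peek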
Thanks,
Dan

On Jan 21, 2010, at 2:35 PM, Assaf Gordon wrote:

Hello,
I tried to clone a large history (334 active datasets) with very big files (more than half are FASTQ/FASTA files of 400MB to 1.5GB in size). The Python process was busy for 17 minutes, and of course I got an HTTP 504 "proxy timeout" error. The cloning actually succeeded (after 17 minutes the history appeared in my histories list), but I was wondering if there's a way to make this faster, because it effectively disables one of the two web-service processes that I have (luckily, I now have two, thanks to yesterday's help).
I think I noticed that the external set_metadata process was called for some or all of the cloned datasets. Is it possible to disable set_metadata when cloning, since the datasets already have metadata? (Or I could be mistaken and the external set_metadata run was not related to the cloning.)
Thanks,
-gordon

_______________________________________________
galaxy-dev mailing list
galaxy-dev@lists.bx.psu.edu
http://lists.bx.psu.edu/listinfo/galaxy-dev