Hi all,

I would like to know if anyone could give me some guidance or a hint on how to migrate data 'correctly' once Galaxy has already written files, worked on the database, etc.

The situation: we set up an instance that was already in use and has written files, e.g. into the directory 'galaxy-dist/database/'. The local disk space is quite small, but more storage is available via an NFS share. I already wrote a bash script that sets up the connections (and edits the 'universe_wsgi.ini' file accordingly) for the options 'genome_data_path', 'ftp_upload_dir', 'file_path', 'new_file_path' and 'job_working_directory'. Those should cover the bulkiest data.

Applying that script works fine as long as the Galaxy instance has never been started before. When it is applied to a system that has already 'done something', the existing files, e.g. those in the 'database/' subdirectory, remain where they are and are not transferred. Still, if Galaxy is started up with the new settings in place, no error is reported. I wonder how Galaxy deals with that situation:

* Does it handle data from both locations (read and/or write)?
* Does it automatically move the 'historic' data the next time it is touched?
* Does it crash when those older objects are to be read or edited?
* Does it silently remove them from the database?

=> Is there a procedure/module to be used for migrating the data? Is it sufficient (and appropriate) to simply move all contents of the old folders to the new locations? (A rough sketch of what I have in mind is attached as a P.S. below.) Did I miss any existing documentation on this issue? The answers may well differ depending on which of the five parameters named above each question is addressed to...

Any help appreciated before we blow up our production instance :). Thanks in advance!

Best regards,
Sebastian

--
Sebastian Schaaf, M.Sc. Bioinformatics
Faculty Coordinator NGS Infrastructure
Chair of Biometry and Bioinformatics
Department of Medical Informatics, Biometry and Epidemiology (IBE)
University of Munich
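
P.S.: For concreteness, here is a rough sketch of the move-and-repoint step I am considering. It is untested; the source directories are what I believe to be the galaxy-dist defaults, and the NFS mount point '/mnt/nfs/galaxy' as well as the 'genome_data_path' location are assumptions on my side, so please read it as an illustration rather than a working tool. My understanding is that Galaxy derives dataset file names from 'file_path' plus the dataset ID at runtime rather than storing absolute paths, so repointing the option after copying should keep lookups intact, but I would love confirmation on that.

  #!/bin/bash
  # Stop Galaxy before running anything like this.
  GALAXY_ROOT="$HOME/galaxy-dist"
  NFS_ROOT="/mnt/nfs/galaxy"                 # assumed mount point
  CONFIG="$GALAXY_ROOT/universe_wsgi.ini"

  # Map each config option to its (assumed default) current directory.
  # Requires bash 4+ for associative arrays.
  declare -A DIRS=(
    [file_path]="$GALAXY_ROOT/database/files"
    [new_file_path]="$GALAXY_ROOT/database/tmp"
    [job_working_directory]="$GALAXY_ROOT/database/job_working_directory"
    [ftp_upload_dir]="$GALAXY_ROOT/database/ftp"
    [genome_data_path]="$GALAXY_ROOT/tool-data/genome"   # a guess, check your config
  )

  for option in "${!DIRS[@]}"; do
    old="${DIRS[$option]}"
    new="$NFS_ROOT/$option"
    mkdir -p "$new"
    # Copy rather than move across filesystems; remove the source
    # only after verifying the copy succeeded.
    [ -d "$old" ] && rsync -a "$old/" "$new/"
    # Repoint the (possibly commented-out) option in the config (GNU sed).
    sed -i "s|^#\{0,1\}[[:space:]]*$option[[:space:]]*=.*|$option = $new|" "$CONFIG"
  done

If simply moving the contents turns out to be the sanctioned way, I would of course wrap this in proper checks before letting it anywhere near production.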