I'll have a good look at the data manager approach - a quick glance tells me it is designed to fetch data, so it can get fresh data on demand. We'll need a third-party data import process that operates on its own pre-defined schedule, so that each data file can have its own update cycle - weekly/monthly/yearly. So it seems the Galaxy API could accomplish this by talking with the data manager system?

I was going to look into the http://biomaj.genouest.org/ system as another possibility. But is the data manager approach able to get data in such a way that it can just place it into a user's history as a linked file or set of files? I.e. the solution we need keeps its master databases locally on the server and generates a cached version from the master file - without any networking off of the server.

I'm back from vacation in a week so I'll get to check that out in more detail...

Cheers,

Damion

Hsiao lab, BC Public Health Microbiology & Reference Laboratory,
BC Centre for Disease Control
655 West 12th Avenue, Vancouver, British Columbia, V5Z 4R4 Canada

________________________________________
From: Marius van den Beek [m.vandenbeek@gmail.com]
Sent: Saturday, August 23, 2014 2:25 AM
To: Dooley, Damion
Cc: galaxy-dev@lists.bx.psu.edu; Hsiao, William
Subject: Re: [galaxy-dev] Concept for a Galaxy Versioned Fasta Data Retrieval Tool

I also think that this is a great idea, and as you described it I think it's feasible as a stand-alone Galaxy tool. Would you eventually consider implementing this as a data manager (https://wiki.galaxyproject.org/Admin/Tools/DataManagers)?
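
For illustration, here is a minimal sketch of the "linked file into a user's history" idea through the Galaxy API using BioBlend. This is an assumption about how it could be wired up, not the author's implementation; the server URL, API key, library name, and file path are all hypothetical, and linking server-side paths requires an admin account with the filesystem-path-paste option enabled in the Galaxy config.

    # minimal sketch, assuming BioBlend; names and paths below are illustrative
    from bioblend.galaxy import GalaxyInstance

    gi = GalaxyInstance(url="http://localhost:8080", key="ADMIN_API_KEY")

    # Link a master FASTA file that already lives on the Galaxy server's
    # filesystem into a data library, without copying it or going off-server.
    library = gi.libraries.create_library("Versioned reference FASTA")
    datasets = gi.libraries.upload_from_galaxy_filesystem(
        library_id=library["id"],
        filesystem_paths="/data/master/reference_2014-08.fasta",
        file_type="fasta",
        link_data_only="link_to_files",  # keep the master file where it is
    )

    # Make the linked dataset appear in a user's history.
    history = gi.histories.create_history(name="Reference data")
    gi.histories.upload_dataset_from_library(
        history_id=history["id"],
        lib_dataset_id=datasets[0]["id"],
    )

A scheduled job (e.g. cron, weekly/monthly/yearly per data file) could run a script like this after the local master database is refreshed, which is one possible answer to the scheduling question above.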