Thanks Jennifer. I just got the updated version of Galaxy installed, and I still have the same issue. After looking at the scripts, I am fairly sure it is due to a bug in the -f (force) option of the data cleanup scripts. I'm not certain, but I believe it's similar to the issue I ran into with library datasets: http://gmod.827538.n3.nabble.com/Bug-importing-from-data-library-to-history-...
Unfortunately, I haven't had time to dig up additional details beyond what I posted in that thread.
Jennifer Jackson wrote:
Were you able to work out the data issues (perhaps the datasets/histories were shared)?
If this issue is still open, there are new features in Galaxy to help manage data that may be of interest: http://galaxyproject.org/wiki/Admin/Disk%20Quotas
Jen
Galaxy team
On 8/25/11 9:16 AM, Lance Parsons wrote:
I am running a local instance of Galaxy and I've been trying to sort out some issues with dataset cleanup. For the most part, things work OK when I run the shell scripts in the recommended order:
delete_userless_histories.sh
purge_histories.sh
purge_libraries.sh
purge_folders.sh
delete_datasets.sh
purge_datasets.sh
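As a sketch, the order above can be wrapped in a small driver script. The `scripts/cleanup_datasets/` path and the `galaxy_root` default are assumptions about a typical Galaxy checkout, not something stated in this thread; the `dry_run` flag just prints the commands instead of executing them.

```python
import subprocess

# Recommended execution order for Galaxy's cleanup scripts (from this thread).
CLEANUP_SCRIPTS = [
    "delete_userless_histories.sh",
    "purge_histories.sh",
    "purge_libraries.sh",
    "purge_folders.sh",
    "delete_datasets.sh",
    "purge_datasets.sh",
]

def run_cleanup(galaxy_root="/path/to/galaxy", dry_run=True):
    """Run (or, with dry_run=True, just print) the cleanup scripts in order."""
    commands = []
    for script in CLEANUP_SCRIPTS:
        # Assumed location of the wrappers inside a Galaxy install.
        cmd = ["sh", f"{galaxy_root}/scripts/cleanup_datasets/{script}"]
        commands.append(cmd)
        if dry_run:
            print(" ".join(cmd))
        else:
            subprocess.run(cmd, check=True)  # stop early if any script fails
    return commands

if __name__ == "__main__":
    run_cleanup(dry_run=True)
```

Running each script to completion before the next matters, since the purge steps act on rows the delete steps have flagged.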
I have the number of days set to 10. When I look at the reports webapp, however, it reports: "62 datasets were deleted more than 15 days ago, but have not yet been purged, disk space: 12975717335." These have now stuck around for 45 days (and counting). I have even tried running the scripts with the -f option to force Galaxy to re-evaluate the datasets, to no avail.
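For anyone debugging the same discrepancy, the reports webapp is essentially counting datasets flagged as deleted but not purged past a cutoff date. The following is a self-contained sketch of that kind of query using an in-memory SQLite stand-in; the table and column names approximate Galaxy's model and are not its exact schema.

```python
import sqlite3
from datetime import datetime, timedelta

# In-memory stand-in for Galaxy's dataset table (column names are
# approximations of Galaxy's model, not the exact schema).
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE dataset (
        id INTEGER PRIMARY KEY,
        deleted BOOLEAN,
        purged BOOLEAN,
        file_size INTEGER,
        update_time TIMESTAMP
    )
""")

now = datetime(2011, 8, 25)
rows = [
    # deleted 45 days ago, never purged -> counted by the report
    (1, True, False, 1000, now - timedelta(days=45)),
    # deleted and purged -> not counted
    (2, True, True, 2000, now - timedelta(days=45)),
    # deleted recently (inside the 15-day window) -> not counted
    (3, True, False, 3000, now - timedelta(days=5)),
]
conn.executemany("INSERT INTO dataset VALUES (?, ?, ?, ?, ?)", rows)

cutoff = now - timedelta(days=15)
count, space = conn.execute(
    """SELECT COUNT(*), COALESCE(SUM(file_size), 0)
       FROM dataset
       WHERE deleted AND NOT purged AND update_time < ?""",
    (cutoff,),
).fetchone()
print(count, space)  # 1 dataset, 1000 bytes still on disk
```

Comparing the report's count against a query like this run directly on the Galaxy database can show whether the cleanup scripts are skipping rows or simply never reaching them.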
Any suggestions? Thanks.