
Hi All,

I've noticed an issue a couple of times now where I've added a directory of fastqs from an NFS-mounted filesystem (referencing the files only, rather than copying them into Galaxy) and Galaxy then times out. The load average gets very high, all the RAM is consumed, and sometimes Galaxy crashes. These are the same symptoms I had before with this issue, which was never resolved:
http://dev.list.galaxyproject.org/run-sh-segfault-td4667549.html#a4667553

What I've noticed is that the dataset I'm uploading to Galaxy suddenly contains many duplicates. In the example that has just happened, there are 288 fastq.gz files in the physical folder, but Galaxy has created 6 references to each file, resulting in 1728 datasets in the folder (see attached images).

When this happened before and crashed the Galaxy application, whenever it restarted it would try to resume what it was doing, which created an endless loop of retrying and crashing until the job was removed.

Does anyone know what may be causing this?

Cheers,
Martin

--
Dr. Martin Vickers
Data Manager/HPC Systems Administrator
Institute of Biological, Environmental and Rural Sciences
IBERS New Building
Aberystwyth University

w: http://www.martin-vickers.co.uk/
e: mjv08@aber.ac.uk
t: 01970 62 2807
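
P.S. For anyone wanting to reproduce the duplicate count, here's a rough sketch using BioBlend (the Galaxy URL, API key and library ID below are placeholders, not my actual setup); it lists any library paths that have more than one dataset entry:

from collections import Counter
from bioblend.galaxy import GalaxyInstance

# Placeholders: substitute your own Galaxy URL, API key and data library ID
gi = GalaxyInstance("http://localhost:8080", key="YOUR_API_KEY")

# contents=True returns every folder and dataset entry in the library
contents = gi.libraries.show_library("LIBRARY_ID", contents=True)

# Count how many dataset entries share the same path within the library
counts = Counter(item["name"] for item in contents if item["type"] == "file")
for path, n in sorted(counts.items()):
    if n > 1:
        print("%s is referenced %d times" % (path, n))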