Hi,

We emailed previously about possible memory leaks in our installation of Galaxy here on the HPC at Bristol. Galaxy runs fine on our login node, but when we integrate it into the cluster using the pbs job runner the whole thing falls over, almost certainly due to a memory leak. In essence, every attempt to submit a TopHat job (with 2x5GB paired-end reads against the full human genome) brings the whole thing down, but not when Galaxy is restricted to the login node.

We saw that Nate responded to Todd Oakley about a week ago, saying that there is a memory leak in libtorque or pbs_python when using the pbs job runner. Have there been any developments on this?

Best Wishes,

David.

__________________________________
Dr David A. Matthews
Senior Lecturer in Virology
Room E49
Department of Cellular and Molecular Medicine,
School of Medical Sciences
University Walk,
University of Bristol
Bristol. BS8 1TD
U.K.

Tel. +44 117 3312058
Fax. +44 117 3312091
D.A.Matthews@bristol.ac.uk
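
P.S. In case it helps with reproducing this, a minimal sketch of the pbs runner settings we believe are relevant in universe_wsgi.ini (assuming the default PBS queue; our actual values may differ slightly):

```ini
# universe_wsgi.ini -- job runner section (sketch, not our exact config)

# Start the pbs job runner in addition to the local runner
start_job_runners = pbs

# Send tool jobs to the default PBS queue on the cluster
default_cluster_job_runner = pbs:///
```

With these settings in place, jobs such as the TopHat run described above are dispatched through pbs_python/libtorque rather than executed locally, which is where we see the memory problem.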