Possible Galaxy memory leak?
Hi all--

Our group is running a local instance of Galaxy on a Linux box. We have been noticing a memory issue: if we leave the server running for a few days, all available memory (~8 GB) eventually gets used up, to the point where tools fail because they are unable to allocate new memory.

While we haven't done any rigorous testing, the issue seems to be independent of the specific tools we run, and it appears even if we do not run any tools at all. The "python" process is what takes up all the memory, and the growth is not instantaneous but happens over time. Restarting the server frees the memory immediately.

Is this a known issue? If so, is there a known fix for it? Any insight would be much appreciated, as it is complicating our efforts to automate long workflows.

Thank you so much for your time,

Josh Gross, Western Washington University
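One way to confirm the slow growth described above is to sample the Galaxy server process's resident memory at regular intervals and see whether it keeps climbing even while no tools are running. The following is only a rough sketch, assuming a Linux /proc filesystem and that you pass the PID of the Galaxy python process on the command line; it is not part of Galaxy itself.

import sys
import time

def rss_kb(pid):
    # Resident set size in kB, read from /proc/<pid>/status (Linux only).
    with open("/proc/%d/status" % pid) as status:
        for line in status:
            if line.startswith("VmRSS:"):
                return int(line.split()[1])
    return 0

if __name__ == "__main__":
    pid = int(sys.argv[1])  # PID of the Galaxy python process
    while True:
        print("%s  VmRSS = %d kB" % (time.strftime("%Y-%m-%d %H:%M:%S"), rss_kb(pid)))
        time.sleep(300)     # sample every five minutes

If the logged VmRSS rises steadily while the instance is idle, that supports the idea that the leak is in the server process itself rather than in any particular tool.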
We had intermittent issues like this for an embarrassingly long time at MSI, but since we applied this patch, https://bitbucket.org/galaxy/galaxy-central/pull-request/36/fix-for-dynamic_... , Galaxy has been running smoothly for weeks. It seems unlikely this is the problem in your case (unless you have pulled in tools which make use of dynamic options), but it's something to consider.

-John
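For anyone unsure whether their local tools use dynamic options (the feature touched by the patch above), one quick check is to scan the tool wrapper XML files for the dynamic_options attribute. This is only a rough sketch; the tools/ path is an assumption about a typical Galaxy checkout and may differ in your install.

import os

GALAXY_TOOLS_DIR = "tools"   # assumed location of the tool wrappers

for dirpath, dirnames, filenames in os.walk(GALAXY_TOOLS_DIR):
    for name in filenames:
        if not name.endswith(".xml"):
            continue
        path = os.path.join(dirpath, name)
        with open(path) as xml_file:
            # Report any tool XML that mentions the dynamic_options attribute.
            if "dynamic_options" in xml_file.read():
                print(path)

If nothing is reported, the dynamic-options fix is unlikely to be relevant to your leak.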
There is also a memory leak that occurs when using the PBS job runner. I believe the leak is somewhere in libtorque or pbs_python, but I have not yet tracked it down.

--nate
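When the suspect is a C extension such as pbs_python or libtorque rather than Python objects, one rough way to narrow things down is to compare the process's resident memory with the number of live Python objects over time: if RSS keeps climbing while the object count stays roughly flat, the growth is probably happening in native code. A minimal sketch of that comparison (standard library only, Linux /proc assumed; not part of Galaxy):

import gc

def python_object_count():
    # Number of objects currently tracked by the garbage collector.
    gc.collect()
    return len(gc.get_objects())

def rss_kb():
    # Resident set size of the current process in kB (Linux only).
    with open("/proc/self/status") as status:
        for line in status:
            if line.startswith("VmRSS:"):
                return int(line.split()[1])
    return 0

def snapshot(label=""):
    # Call this periodically from the long-running process and log both numbers.
    print("%s RSS=%d kB, live Python objects=%d" % (label, rss_kb(), python_object_count()))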
We've experienced memory-leak symptoms too, and we also use the PBS job runner here. We did not try to locate the leak; instead, we installed monit as a workaround: http://mmonit.com/monit/

Todd Oakley
--
Todd Oakley, Professor
Ecology, Evolution and Marine Biology
University of California, Santa Barbara
Santa Barbara, CA 93106 USA
Newly updated lab website: http://labs.eemb.ucsb.edu/oakley/todd/
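For those who would rather not run monit, the same restart-on-threshold idea can be approximated with a small watchdog script. The sketch below is only an illustration of the approach, not how monit itself works; the PID file location, memory limit, and restart commands are assumptions about a typical local Galaxy install and will need adjusting.

import subprocess
import time

PID_FILE = "/home/galaxy/galaxy-dist/paster.pid"     # assumed location
RSS_LIMIT_KB = 6 * 1024 * 1024                       # restart above ~6 GB
STOP_CMD = ["sh", "run.sh", "--stop-daemon"]         # assumed restart commands,
START_CMD = ["sh", "run.sh", "--daemon"]             # run from the Galaxy root

def rss_kb(pid):
    # Resident set size in kB from /proc/<pid>/status; 0 if the process is gone.
    try:
        with open("/proc/%d/status" % pid) as status:
            for line in status:
                if line.startswith("VmRSS:"):
                    return int(line.split()[1])
    except IOError:
        pass
    return 0

while True:
    with open(PID_FILE) as pid_file:
        pid = int(pid_file.read().strip())
    if rss_kb(pid) > RSS_LIMIT_KB:
        # Memory has crossed the threshold: restart Galaxy.
        subprocess.call(STOP_CMD)
        subprocess.call(START_CMD)
    time.sleep(600)   # check every ten minutes

Monit does the same kind of check more robustly (alerts, cycle-based thresholds), so the package linked above is probably the better long-term answer.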
participants (4):
- John Chilton
- Joshua Gross
- Nate Coraor
- Todd Oakley