We're seeing an odd issue that seems to be related to Trackster's caching of track information, namely that memory usage in the web server processes keeps increasing. It appears that tracks are being cached in memory per web process. If I create a new visualization (and monitor our web processes via top), we can see the %MEM usage go up, so much so that a process eventually crashes with an out-of-memory error. This seems to happen only when new visualizations are created; when visualizations are shared between users, memory usage stays roughly the same. We can bump up the memory on the VM (we have more at our disposal), and we can monitor and restart the processes if they get too high, but is there a way to determine whether Trackster is the cause?

Below is the top output (note the manager and handlers are still in a normal-ish range):

PID   USER   PR NI VIRT  RES  SHR  S %CPU %MEM TIME+   COMMAND
27777 galaxy 20 0  2637m 1.5g 2700 S 6.3  19.8 7:29.01 python ./scripts/paster.py serve universe_wsgi.ini --server-name=web2 --pid-file=web2.pid --log-file=web2.log --daemon
27759 galaxy 20 0  3624m 2.2g 2716 S 5.3  28.9 9:04.33 python ./scripts/paster.py serve universe_wsgi.ini --server-name=web1 --pid-file=web1.pid --log-file=web1.log --daemon
27749 galaxy 20 0  2621m 1.5g 2724 S 5.0  19.7 7:39.51 python ./scripts/paster.py serve universe_wsgi.ini --server-name=web0 --pid-file=web0.pid --log-file=web0.log --daemon
27808 galaxy 20 0  1616m 160m 2640 S 1.7  2.0  3:00.21 python ./scripts/paster.py serve universe_wsgi.ini --server-name=handler1 --pid-file=handler1.pid --log-file=handler1.log --da
27798 galaxy 20 0  1616m 159m 2652 S 1.0  2.0  2:59.03 python ./scripts/paster.py serve universe_wsgi.ini --server-name=handler0 --pid-file=handler0.pid --log-file=handler0.log --da
27789 galaxy 20 0  944m  88m  2400 S 0.0  1.1  0:36.40 python ./scripts/paster.py serve universe_wsgi.ini --server-name=manager --pid-file=manager.pid --log-file=manager.log --d

chris
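[Editor's note: as a rough way to correlate memory growth with creating a new visualization, one could sample the resident set size (RSS) of the web processes over time and watch whether it jumps each time a visualization is created. The snippet below is a minimal sketch only; it reads /proc on Linux, and the PID list is illustrative, taken from the top output above, not from anything else in this thread.]

import time

WEB_PIDS = [27777, 27759, 27749]  # hypothetical: the web0-web2 PIDs from the top output above

def rss_kb(pid):
    # Read VmRSS (resident set size, in kB) from /proc/<pid>/status on Linux.
    with open("/proc/%d/status" % pid) as f:
        for line in f:
            if line.startswith("VmRSS:"):
                return int(line.split()[1])
    return None

while True:
    stamp = time.strftime("%H:%M:%S")
    for pid in WEB_PIDS:
        try:
            print("%s pid=%d rss=%d kB" % (stamp, pid, rss_kb(pid)))
        except IOError:
            print("%s pid=%d is gone (restarted?)" % (stamp, pid))
    time.sleep(30)  # sample every 30 seconds while creating/opening visualizations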
is there a way to determine whether Trackster is the cause?
The only place where Trackster caches data is in the SummaryTreeDataProvider. In galaxy-central, the relevant line is 709 in lib/galaxy/visualization/genome/data_providers.py:

-- CACHE = LRUCache( 20 ) # Store 20 recently accessed indices for performance --

Try setting the cache size low or to 0 and see if that addresses the memory issue.

Best,
J.
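[Editor's note: for context on why shrinking that number can help, an LRU cache of size N keeps the N most recently used entries alive in each web process, so with the default of 20 every web process can hold up to 20 summary-tree indices in memory at once. The sketch below is a generic, illustrative LRU cache, not Galaxy's actual LRUCache implementation; it only shows how the size limit bounds what is retained and how a size of 0 effectively disables caching.]

from collections import OrderedDict

class SimpleLRUCache(object):
    """Illustrative LRU cache: keeps at most `size` entries, evicting the
    least recently used one. With size=0 nothing is ever retained."""

    def __init__(self, size):
        self.size = size
        self._data = OrderedDict()

    def get(self, key):
        if key not in self._data:
            return None
        # Re-insert to mark this entry as most recently used.
        value = self._data.pop(key)
        self._data[key] = value
        return value

    def put(self, key, value):
        if self.size <= 0:
            return  # caching disabled; the entry is never kept
        if key in self._data:
            self._data.pop(key)
        elif len(self._data) >= self.size:
            # Evict the least recently used entry (the oldest one).
            self._data.popitem(last=False)
        self._data[key] = value

# Example: with size 2, inserting a third index evicts the oldest one, so
# per-process memory is bounded by (cache size) x (size of a cached index).
cache = SimpleLRUCache(2)
cache.put("index_a", "summary tree A")
cache.put("index_b", "summary tree B")
cache.put("index_c", "summary tree C")   # evicts index_a
print(cache.get("index_a"))              # None
print(cache.get("index_c"))              # "summary tree C"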
On Sep 15, 2012, at 11:05 PM, Jeremy Goecks <jeremy.goecks@emory.edu> wrote:
is there a way to determine whether Trackster is the cause?
The only place where Trackster caches data is in the SummaryTreeDataProvider. In galaxy-central, the relevant line is 709 in lib/galaxy/visualization/genome/data_providers.py:
-- CACHE = LRUCache( 20 ) # Store 20 recently accessed indices for performance --
Try setting the cache size low or to 0 and see if that addresses the memory issue.
Best, J.
I'll try this in combination with a bump in the memory allocated to the VM; that should take care of it. Thanks!

chris
participants (2)
- Fields, Christopher J
- Jeremy Goecks