On Apr 8, 2011, at 11:01 AM, Dave Walton wrote:
This is very close to our config, except:
- We run all of this on a 4-core virtual machine running SUSE Linux Enterprise Server 11 (x86_64) with 16 GB of memory.
- Instead of SGE, our HPC cluster uses Torque/Moab for scheduling.
- We've also set up a separate I/O node for uploading data files from the file system and via FTP (correct me if I misspoke, Glen).
- Instead of Apache, we run nginx as our httpd server, since it was easy to get off-loading of file uploads and downloads working with it (a rough sketch follows below).
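For anyone wiring this up, the download half of that off-loading is just nginx's X-Accel-Redirect mechanism. A minimal sketch, with placeholder ports and paths rather than our actual config:

    # nginx.conf (sketch; ports, paths, and names are placeholders)
    http {
        upstream galaxy_app {
            server 127.0.0.1:8080;    # Galaxy web-front process(es)
        }
        server {
            listen 80;
            client_max_body_size 0;   # don't cap large dataset uploads

            location / {
                proxy_pass http://galaxy_app;
                proxy_set_header X-Forwarded-Host $host;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            }

            # Galaxy hands finished datasets back to nginx for download via
            # X-Accel-Redirect (nginx_x_accel_redirect_base in universe_wsgi.ini)
            location /_x_accel_redirect/ {
                internal;
                alias /;
            }
        }
    }

Upload off-loading additionally needs the third-party nginx upload module and the matching nginx_upload_store / nginx_upload_path settings on the Galaxy side.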
We're not seeing a heavy load from users at this point, but this has worked pretty well for us so far.
Hope this helps,
Dave
The only reason we offload the upload jobs somewhere other than our HPC cluster is that our cluster nodes can't see the outside world. Our IT folks did not really want to change the network configuration, so we installed TORQUE on a spare Linux server, mounted our galaxy network storage on it, and set up some upload-specific job runners that send those jobs to that node. If you have NAT set up on your cluster, you probably don't need to worry about that.

We have pretty "fat" cluster nodes (128 GB RAM and 32 cores) since we run a lot of multi-threaded jobs on the cluster but not many MPI jobs. Our NGS tools are typically configured to use 16-32 threads.
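In case the mechanics are useful to anyone: in the universe_wsgi.ini of this era you can point just the upload tool at a different runner/queue in the tool_runners section. A rough sketch with made-up queue names (check the sample config for the exact runner URL syntax):

    # universe_wsgi.ini (sketch; queue names are placeholders)
    [app:main]
    start_job_runners = pbs
    default_cluster_job_runner = pbs:///batch/

    [galaxy:tool_runners]
    # send upload jobs to a queue served only by the internet-facing node
    upload1 = pbs:///upload/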
On 4/8/11 10:21 AM, "Assaf Gordon" <gordon@cshl.edu> wrote:
Assaf Gordon wrote, On 04/08/2011 10:07 AM:
Processes:
The server processes that you should plan for are:
- 1 galaxy process for the job-runner
- 2 or 3 galaxy processes for web-fronts
- 1 postgres process
- 1 apache process
- optionally, 1 galaxy-reports process

You'll also want to leave some free CPUs for SSH access, cron jobs and other peripherals. Postgres and apache are multithreaded, but it usually balances out, since Galaxy puts a light load on the web/DB front (even with 30 users). So all in all, I'd recommend reserving 5 to 8 CPU cores for just Galaxy and the daemons (reserving means never using those cores for Galaxy jobs). You can get by with fewer cores, but response times might suffer (and it's annoying when you click "show saved histories" and the page takes 20 seconds to load...).
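If it helps to see the shape of that layout: the web-fronts and the job-runner are separate [server:...] sections in universe_wsgi.ini, each started as its own OS process and load-balanced by the proxy. A rough sketch (ports and server names are made up, and the details vary between Galaxy versions):

    # universe_wsgi.ini (sketch; ports and names are illustrative)
    [server:web0]
    use = egg:Paste#http
    port = 8080
    use_threadpool = true

    [server:web1]
    use = egg:Paste#http
    port = 8081
    use_threadpool = true

    [server:runner0]
    use = egg:Paste#http
    port = 8090
    use_threadpool = true

Each one is started separately, e.g. "python ./scripts/paster.py serve universe_wsgi.ini --server-name=web0 --daemon", with the web-fronts configured not to run jobs (track_jobs_in_database = True, enable_job_running = False) and only the runner process dispatching to the cluster.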
Forgot to mention SGE/PBS: you definitely want to use one of them (even on a single machine), because the local job runner doesn't take multi-threaded programs into account when scheduling jobs. So another core is needed for the SGE scheduler daemons (sge_qmaster and sge_execd).
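To make that concrete: with SGE you submit a multi-threaded tool through a parallel environment so the scheduler actually reserves the slots, which the local runner won't do. For example (the "smp" parallel environment is site-defined and the script name is made up):

    # reserve 8 slots on one host for an 8-thread aligner run
    qsub -cwd -N bwa_align -pe smp 8 run_bwa.sh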
--
Glen L. Beane
Senior Software Engineer
The Jackson Laboratory
(207) 288-6153