Maarten: Galaxy itself does not need much to run, so you should base your calculations on estimated usage, computational complexity, data size, and similar factors. For example, a single bowtie job comparing a single Illumina channel against the human genome will take between 2 and 20 minutes on 8 cores, depending on parameters. So if you have 100 users running such jobs simultaneously, you will need a cluster with 100 8-core nodes, and so on. Other types of jobs may take significantly longer (e.g., mapping half of a 454 FLX plate with lastz will take overnight). Let us know if you want the specifics of our Penn State setup.

anton
galaxy team

On Jan 21, 2010, at 6:10 AM, Maarten Kooyman wrote:
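The sizing logic above can be sketched as a small back-of-envelope calculation. This is only an illustration of the reasoning in the reply, not a Galaxy tool; the function name and the assumption that each bowtie job saturates one 8-core node are hypothetical simplifications based on the numbers Anton gives.

```python
# Back-of-envelope cluster sizing, following the example in the reply:
# assume each concurrent user runs one bowtie job that fully occupies
# the 8 cores of a node for 2-20 minutes.

CORES_PER_JOB = 8  # cores used by one bowtie job in the example


def nodes_needed(concurrent_users: int, cores_per_node: int = 8) -> int:
    """Estimate node count when each job needs CORES_PER_JOB cores."""
    jobs_per_node = max(1, cores_per_node // CORES_PER_JOB)
    # ceiling division: round up so every job gets a slot
    return -(-concurrent_users // jobs_per_node)


print(nodes_needed(100))  # 100 simultaneous users -> 100 nodes, as in the reply
print(nodes_needed(30))   # first-month estimate of 30 users -> 30 nodes
```

Real queueing behavior (jobs finishing and freeing nodes within minutes) would let a smaller cluster serve the same users with some waiting, so this is a worst-case peak estimate.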
Dear list,
I'm an employee of the Dutch bioinformatics center (NBIC). We have decided to set up a national Galaxy server with a focus on Next Generation Sequencing data. For this we need to acquire some hardware and services, and I would like to have an idea of what kind of hardware is necessary for it to operate smoothly. We expect about 30 (serious) users in the first month, growing to 100 users within 3 months.
I would like to get a rough estimate of:
- number of CPUs
- memory
- hard drive space needed
- network usage
etc.
I know it is hard to predict what kind of hardware I need, because it depends on the user input. However, I would like to get an indication.
Kind Regards,
Maarten Kooyman
_______________________________________________
galaxy-dev mailing list
galaxy-dev@lists.bx.psu.edu
http://lists.bx.psu.edu/listinfo/galaxy-dev
Anton Nekrutenko
http://nekrut.bx.psu.edu
http://usegalaxy.org