On Fri, Nov 18, 2011 at 12:56 AM, Jeremy Goecks <jeremy.goecks@emory.edu> wrote:
Scalability issues are more likely to arise on the back end than the front end, so you'll want to ensure that you have enough compute nodes. BWA uses four nodes by default--Enis, does the cloud config change this parameter?--so you'll want 4x50 or 200 total nodes if you want everyone to be able to run a BWA job simultaneously.
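For context, a per-tool slot count like this is typically set at the job-destination level in Galaxy's job configuration rather than inside the tool itself. A minimal sketch of what that can look like in a `job_conf.xml` (the destination id, the SGE-style `-pe smp 4` native specification, and the tool id are illustrative assumptions; the exact syntax depends on your Galaxy version and cluster scheduler):

```xml
<?xml version="1.0"?>
<job_conf>
    <plugins>
        <!-- DRMAA runner submits jobs to the cluster scheduler -->
        <plugin id="drmaa" type="runner"
                load="galaxy.jobs.runners.drmaa:DRMAAJobRunner"/>
    </plugins>
    <destinations default="multicore">
        <destination id="multicore" runner="drmaa">
            <!-- Request 4 slots on one node (SGE parallel-environment
                 syntax; adjust for your scheduler) -->
            <param id="nativeSpecification">-pe smp 4</param>
        </destination>
    </destinations>
    <tools>
        <!-- Route BWA jobs to the 4-slot destination -->
        <tool id="bwa" destination="multicore"/>
    </tools>
</job_conf>
```

With a mapping like this, each BWA job requests four slots from the scheduler, which is where the 4x50 = 200 total figure above comes from.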
Actually, one other question: this paragraph makes me realise that I don't really understand how Galaxy is distributing jobs. I had thought that each job would only use one node, and in some cases take advantage of multiple cores within that node. I'm taking a "node" to be a set of cores with their own shared memory, so in this case a VM instance; is this right? If some types of jobs can be distributed over multiple nodes, can I configure, in Galaxy, how many nodes they should use?

Thanks again,
Clare

--
E: sloc@unimelb.edu.au
P: 03 903 53357
M: 0414 854 759