Hi,

Just to add some info about this: I have attempted it on an SGE cluster.

I found that $NSLOTS only works for MPI jobs, as it is part of SGE's MPI integration.

Non-MPI jobs won't get it. For example, with:

    python parallel_groomer.py input output $NSLOTS

$NSLOTS will not be replaced by SGE with the requested number of slots.
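One tool-side workaround is to read the slot count from the environment with a safe fallback. A minimal sketch (the helper name and default are illustrative, not part of Galaxy or SGE):

```python
import os

def get_slot_count(default=1):
    """Return the number of slots granted by the scheduler.

    NSLOTS is set by SGE for parallel-environment jobs; the fallback
    keeps the tool usable when the variable is absent or malformed
    (e.g. plain non-MPI jobs, where SGE leaves it unset).
    """
    try:
        return int(os.environ.get("NSLOTS", default))
    except ValueError:
        return default
```

The tool would then call get_slot_count() itself instead of expecting $NSLOTS to be substituted on its command line.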

Regards,
Derrick

On Sat, Jun 16, 2012 at 1:52 AM, Peter Cock <p.j.a.cock@googlemail.com> wrote:
On Fri, Jun 15, 2012 at 4:38 PM, James Taylor <james@jamestaylor.org> wrote:
> This is exactly what I think we should do (and have for a long time), but I
> think the variable should be something like:
>
> GALAXY_CPUS
>
> (threads is not accurate, a multithread or multiprocess job might want to use
> this info, something even more abstract than CPUS might make sense, but
> SLOTS has never made sense to me).

I agree that a Galaxy specific name makes a lot of sense, and that
the SGE term "slots" is a bit odd. CPUS, however, is potentially
ambiguous between CPUs and cores - my desktop has two quad-core CPUs,
i.e. 2 CPUs but 8 cores.
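The ambiguity shows up in the standard library too: Python's core-counting call reports logical cores, not physical CPU packages, so on the desktop described above it would not return 2.

```python
import multiprocessing

# multiprocessing.cpu_count() reports logical cores (possibly
# including hyper-threaded siblings), not physical CPU packages.
print(multiprocessing.cpu_count())
```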

Where do you think this number should come from? A new entry in the
runner URL would be simple, albeit potentially redundant with the
cluster-specific entries already there. As for the alternative (doing it
automatically), for PBS and SGE determining the number of cores from
the cluster configuration and/or parsing the runner URL sounds doable -
what about the other backends?
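For backends whose runner URL embeds a native specification, a best-effort parse could recover the core count. A minimal sketch, assuming SGE-style "-pe <env> N" and PBS-style "ppn=N" strings (the helper name and regexes are illustrative, not any Galaxy API):

```python
import re

def cores_from_native_spec(spec):
    """Best-effort guess of the core count from a cluster runner's
    native specification string (hypothetical helper).

    Recognises the common SGE ('-pe <env> N') and PBS ('ppn=N')
    forms; returns None when nothing matches, so a caller can fall
    back to a configured default.
    """
    m = re.search(r"-pe\s+\S+\s+(\d+)", spec)  # SGE parallel environment
    if not m:
        m = re.search(r"ppn=(\d+)", spec)       # PBS processors per node
    return int(m.group(1)) if m else None
```

Anything this can't parse (or the other backends) would still need an explicit setting.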

Peter
___________________________________________________________
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:

  http://lists.bx.psu.edu/