Yes, and I have the same confusion about that. When I tried setting a [server:<id>] section in universe_wsgi.ini as follows, my Galaxy stopped working with the cluster; if I remove the [server:<id>] section, it works.
[server:node01]
use = egg:Paste#http
port = 8080
host = 0.0.0.0
use_threadpool = true
threadpool_workers = 5
This is my job_conf.xml:
<?xml version="1.0"?>
<job_conf>
    <plugins workers="4">
        <plugin id="local" type="runner" load="galaxy.jobs.runners.local:LocalJobRunner" workers="4"/>
        <plugin id="pbs" type="runner" load="galaxy.jobs.runners.pbs:PBSJobRunner" workers="8"/>
    </plugins>
    <handlers default="batch">
        <handler id="node01" tags="batch"/>
        <handler id="node02" tags="batch"/>
    </handlers>
    <destinations default="regularjobs">
        <destination id="local" runner="local"/>
        <destination id="regularjobs" runner="pbs" tags="cluster">
            <param id="Resource_List">walltime=24:00:00,nodes=1:ppn=4,mem=10G</param>
            <param id="galaxy_external_runjob_script">scripts/drmaa_external_runner.py</param>
            <param id="galaxy_external_killjob_script">scripts/drmaa_external_killer.py</param>
            <param id="galaxy_external_chown_script">scripts/external_chown_script.py</param>
        </destination>
   </destinations>
</job_conf>
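For reference, each handler id above (node01, node02) is expected to match the name of a [server:<id>] section in universe_wsgi.ini, with each server listening on its own port. A minimal sketch (the second section and its port number are my assumption, mirroring the first):

```ini
[server:node01]
use = egg:Paste#http
port = 8080
host = 0.0.0.0
use_threadpool = true
threadpool_workers = 5

; assumed: a second web process for the node02 handler, on a different port
[server:node02]
use = egg:Paste#http
port = 8081
host = 0.0.0.0
use_threadpool = true
threadpool_workers = 5
```

With multiple [server:<id>] sections defined, each process is then started separately (e.g. `run.sh --server-name=node01`), which may be why a single default startup fails once the sections are added.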
Furthermore, when I try to kill my jobs by clicking in the Galaxy web interface, the job keeps running in the background. I do not know how to fix this.
Any help on this would be appreciated. Thank you very much.
 

shenwiyn
 
From: Jurgens de Bruin
Date: 2013-08-07 19:55
To: galaxy-dev
Subject: [galaxy-dev] Help with cluster setup
Hi,

This is my first Galaxy installation, so apologies for the basic questions. I am setting up Galaxy on a cluster running Torque as the resource manager. I am working through the documentation, but I am unclear on some things:

Firstly, I am unable to find start_job_runners within universe_wsgi.ini, and I don't want to just add it anywhere - any help on this would be great.

Furthermore, this is my job_conf.xml:

<?xml version="1.0"?>
<!-- A sample job config that explicitly configures job running the way it is configured by default (if there is no explicit config). -->
<job_conf>
    <plugins>
        <plugin id="hpc" type="runner" load="galaxy.jobs.runners.drmaa:DRMAAJobRunner" workers="4"/>
    </plugins>
    <handlers>
        <!-- Additional job handlers - the id should match the name of a
             [server:<id>] section in universe_wsgi.ini. -->
        <handler id="cn01"/>
        <handler id="cn02"/>
    </handlers>
    <destinations>
        <destination id="hpc" runner="drmaa"/>
    </destinations>
</job_conf>


Does this look meaningful? Furthermore, where do I set the additional [server:<id>] sections in universe_wsgi.ini?
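If it helps, my understanding is that the [server:<id>] sections can be added alongside the existing [server:main] section in universe_wsgi.ini, with ids matching the handler ids in job_conf.xml. A sketch of what I mean (the port numbers are just examples):

```ini
; one section per handler id in job_conf.xml; ports must not clash
[server:cn01]
use = egg:Paste#http
port = 8080
host = 0.0.0.0

[server:cn02]
use = egg:Paste#http
port = 8081
host = 0.0.0.0
```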

As background the cluster has 13 compute nodes and a shared storage array that can be accessed by all nodes in the cluster.


Thanks again



--
Regards/Groete/Mit freundlichen Grüßen/recuerdos/meilleures salutations/
distinti saluti/siong/duì yú/привет

Jurgens de Bruin