Thanks, I have read the tools' XML files, and I noticed that I should get the cluster running first.
So I installed Torque 4.2.8 on the three servers and followed https://wiki.galaxyproject.org/Admin/Config/Performance/Cluster

galaxy_user@galaxy_server% cd /clusterfs/galaxy/galaxy-dist
galaxy_user@galaxy_server% LIBTORQUE_DIR=/usr/local/lib python scripts/scramble.py -e pbs_python
It seems to have finished successfully. But when I then run sh ./run.sh in /galaxy/galaxy-dist, I get the error below...

galaxy.jobs.manager DEBUG 2014-07-16 20:03:36,159 Starting job handler
galaxy.jobs.runners DEBUG 2014-07-16 20:03:36,163 Starting 4 LocalRunner workers
galaxy.jobs DEBUG 2014-07-16 20:03:36,203 Loaded job runner 'galaxy.jobs.runners.local:LocalJobRunner' as 'local'
Traceback (most recent call last):
File "/home/jin/galaxy-dist/lib/galaxy/webapps/galaxy/buildapp.py", line 39, in app_factory
app = UniverseApplication( global_conf = global_conf, **kwargs )
File "/home/jin/galaxy-dist/lib/galaxy/app.py", line 169, in __init__
self.job_manager = manager.JobManager( self )
File "/home/jin/galaxy-dist/lib/galaxy/jobs/manager.py", line 30, in __init__
self.job_handler = handler.JobHandler( app )
File "/home/jin/galaxy-dist/lib/galaxy/jobs/handler.py", line 29, in __init__
self.dispatcher = DefaultJobDispatcher( app )
File "/home/jin/galaxy-dist/lib/galaxy/jobs/handler.py", line 567, in __init__
self.job_runners = self.app.job_config.get_job_runner_plugins()
File "/home/jin/galaxy-dist/lib/galaxy/jobs/__init__.py", line 451, in get_job_runner_plugins
module = __import__( module_name )
File "/home/jin/galaxy-dist/lib/galaxy/jobs/runners/pbs.py", line 31, in <module>
raise Exception( egg_message % str( e ) )
Exception:

The 'pbs' runner depends on 'pbs_python' which is not installed or not
configured properly. Galaxy's "scramble" system should make this installation
simple, please follow the instructions found at:

http://wiki.galaxyproject.org/Admin/Config/Performance/Cluster

Additional errors may follow:
/home/jin/galaxy-dist/eggs/pbs_python-4.3.5-py2.6-linux-x86_64-ucs4.egg/_pbs.so: undefined symbol: log_record

I don't know what to do next...
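I suppose the next thing to check is whether the scrambled egg is really linked against libtorque. This is only a sketch of what I would try (the library path is just my guess from where Torque was installed):

# is libtorque really under the directory LIBTORQUE_DIR pointed to?
ls /usr/local/lib/libtorque*

# does the installed libtorque export the symbol the error complains about?
nm -D /usr/local/lib/libtorque.so | grep log_record

# is the scrambled _pbs.so linked against libtorque, and is it found at runtime?
ldd /home/jin/galaxy-dist/eggs/pbs_python-4.3.5-py2.6-linux-x86_64-ucs4.egg/_pbs.so

# if libtorque is missing there, remove the old egg and re-run scramble with LIBTORQUE_DIR set
rm -rf eggs/pbs_python-4.3.5-py2.6-linux-x86_64-ucs4.egg
LIBTORQUE_DIR=/usr/local/lib python scripts/scramble.py -e pbs_python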



王渭巍
 
From: Thomas Bellembois
Date: 2014-07-16 16:11
To: 王渭巍; galaxy-dev
Subject: Re: [galaxy-dev] How to configure galaxy with a cluster
Hello,
 
I am not a megablast expert, but "tools/metag_tools/megablast_wrapper.xml" may help
you understand how the program is run. You may want to tune the parameters.
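For instance, the <command> section of that wrapper shows how megablast is actually invoked; a quick way to look at it from your galaxy-dist directory (just an illustration) is:

grep -A 10 '<command' tools/metag_tools/megablast_wrapper.xml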
 
Regards,
 
Thomas
 
On 16/07/2014 09:43, 王渭巍 wrote:
> Thanks a lot, Thomas! It really helps. I added a <tools> section following your suggestion...
>
> Here is my job_conf.xml (I am using Torque; I have 3 servers: one as the Galaxy server, two for cluster computing):
>
> <?xml version="1.0"?>
> <job_conf>
>     <plugins>
>         <plugin id="pbs" type="runner" load="galaxy.jobs.runners.pbs:PBSJobRunner"/>
>     </plugins>
>     <destinations default="pbs_default">
>         <destination id="pbs_default" runner="pbs"/>
>         <destination id="long_jobs" runner="pbs">
>             <param id="Resource_List">walltime=72:00:00,nodes=1:ppn=8</param>
>             <param id="-p">128</param>
>         </destination>
>     </destinations>
>     <tools>
>         <tool id="megablast_wrapper" destination="long_jobs"/>
>     </tools>
> </job_conf>
>
> and I still see no cluster options for the "megablast" tool. How can I make cluster options appear on the page, for example so that I can choose between the local server and the cluster?
>
> Ben
>
>
> From: Thomas Bellembois
> Date: 2014-07-15 17:41
> To: galaxy-dev@lists.bx.psu.edu
> Subject: Re: [galaxy-dev] How to configure galaxy with a cluster
> Hello Ben,
> you can configure your Galaxy instance to use your cluster in the
> job_conf.xml file:
> https://wiki.galaxyproject.org/Admin/Config/Performance/Cluster
> You can set up your instance to use your cluster by default for all jobs
> or only for specific jobs.
> Here is part of my job_conf.xml as an example:
>     <plugins>
>         <!-- LOCAL JOBS -->
>         <plugin id="local" type="runner" load="galaxy.jobs.runners.local:LocalJobRunner" workers="4"/>
>         <!-- SUN GRID ENGINE -->
>         <plugin id="sge" type="runner" load="galaxy.jobs.runners.drmaa:DRMAAJobRunner"/>
>     </plugins>
>     <handlers default="handlers">
>         <handler id="handler0" tags="handlers"/>
>         <handler id="handler1" tags="handlers"/>
>     </handlers>
>     <destinations default="sge_default">
>         <destination id="local" runner="local"/>
>         <destination id="sge_default" runner="sge">
>             <param id="nativeSpecification">-r yes -b n -cwd -S /bin/bash -V -pe galaxy 1</param>
>         </destination>
>         <destination id="sge_big" runner="sge">
>             <param id="nativeSpecification">-r yes -b n -cwd -S /bin/bash -V -pe galaxy 12</param>
>         </destination>
>     </destinations>
>     <tools>
>         <tool id="upload1" destination="local"/>
>         <tool id="toolshed.g2.bx.psu.edu/repos/bhaas/trinityrnaseq/trinityrnaseq/0.0.1" destination="sge_big"/>
>         <tool id="mira_assembler" destination="sge_big"/>
>         <tool id="megablast_wrapper" destination="sge_big"/>
>         <tool id="toolshed.g2.bx.psu.edu/repos/devteam/ncbi_blast_plus/ncbi_blastp_wrapper/0.1.00" destination="sge_big"/>
>         <tool id="toolshed.g2.bx.psu.edu/repos/devteam/ncbi_blast_plus/ncbi_tblastn_wrapper/0.1.00" destination="sge_big"/>
>         <tool id="toolshed.g2.bx.psu.edu/repos/devteam/ncbi_blast_plus/ncbi_blastx_wrapper/0.1.00" destination="sge_big"/>
>         <tool id="toolshed.g2.bx.psu.edu/repos/devteam/ncbi_blast_plus/ncbi_blastn_wrapper/0.1.00" destination="sge_big"/>
>         <tool id="toolshed.g2.bx.psu.edu/repos/devteam/ncbi_blast_plus/ncbi_tblastx_wrapper/0.1.00" destination="sge_big"/>
>         <tool id="toolshed.g2.bx.psu.edu/repos/devteam/ncbi_blast_plus/ncbi_rpstblastn_wrapper/0.1.00" destination="sge_big"/>
>         <tool id="toolshed.g2.bx.psu.edu/repos/devteam/ncbi_blast_plus/ncbi_rpsblast_wrapper/0.1.00" destination="sge_big"/>
>     </tools>
> Moreover, your Galaxy user and the Galaxy server must be allowed to submit
> jobs to your scheduler.
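> A quick way to verify this (just an illustrative test, assuming the Galaxy
> account is called "galaxy_user") is to submit a trivial job by hand from the
> Galaxy server and see that the scheduler accepts it:
>
> su - galaxy_user
> echo "hostname" | qsub     # should print a job id, not a permission error
> qstat                      # the job should appear in the queue and finish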
> Hope it helps,
> Thomas
>
 
 
--
Thomas Bellembois, Network and System Administrator
ENS-Lyon/IGFL - FRANCE
+33 4 26 73 13 67
 
- The IGFL is a member of the "portail-labos" project
(https://portail-labos.ens-lyon.fr/site/).
- IGFL IT documentation: http://itdoc.igfl.ens-lyon.fr/itdoc