Plugins
by Michael E. Cotterell
Are there any plans in the works to support plugins within Galaxy? One of the tools I'm working on is actually more of an extension of the workflow canvas/editor's user interface. Right now, I've minimized the modifications required to the editor.mako file to simply adding a mako include near the bottom of the file. However, like I just said, this requires editing a Galaxy file. It would be a lot better if Galaxy supported plugins with hooks, thus allowing me to extend the user interface (or any other part of the application) by registering hooks in a plugin.
To demonstrate how easy it would be to integrate plugins into a Python application, I've written up a simple plugin manager script (~100 LOC) and placed it in a gist: https://gist.github.com/mepcotterell/6004997. Using this (or something similar) would make Galaxy extremely extensible.
The only hard part would be figuring out where you want to place the hooks and documenting them for plugin developers.
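To make the idea concrete, here is a minimal sketch of the hook mechanism described above (hypothetical names; this is not Galaxy code or the gist's code):

```python
# Minimal hook registry sketch (hypothetical API, not Galaxy code).
class HookRegistry:
    def __init__(self):
        self._hooks = {}  # hook name -> list of registered callbacks

    def register(self, name, callback):
        """Called by a plugin to attach a callback to a named hook."""
        self._hooks.setdefault(name, []).append(callback)

    def run(self, name, *args, **kwargs):
        """Called by the host application at each extension point."""
        return [cb(*args, **kwargs) for cb in self._hooks.get(name, [])]


hooks = HookRegistry()

# A "plugin" extends a hypothetical extension point in the editor page,
# instead of hand-editing a mako file as described above.
hooks.register('workflow_editor_footer', lambda page: page + '<!-- plugin UI -->')

# The host application fires the hook where the mako include used to live.
print(hooks.run('workflow_editor_footer', '<html>...</html>'))
```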
Sincerely,
Michael E. Cotterell
Ph.D. Student in Computer Science, University of Georgia
Instructor of Record, Graduate RA & TA, University of Georgia
Faculty Liaison, CS Graduate Student Association, University of Georgia
mepcotterell(a)gmail.com (mailto:mepcotterell@gmail.com)
mepcott(a)uga.edu (mailto:mepcott@uga.edu)
mec(a)cs.uga.edu (mailto:mec@cs.uga.edu)
http://michaelcotterell.com/
8 years, 8 months
Embedding a help text in a Workflow?
by Peter Cock
Hello all,
I've recently started sharing some sample workflows on the Galaxy Tool Shed,
http://toolshed.g2.bx.psu.edu/view/peterjc/rxlr_venn_workflow
http://toolshed.g2.bx.psu.edu/view/peterjc/secreted_protein_workflow
When writing a Galaxy tool we have a lot of flexibility within the <help>
tag to describe the tool, its output, and relevant references using some
nice markup language for things like bold fonts and tables.
I would very much like something similar for embedding README or
help text into a workflow - which again should be shown to the user,
both on the 'View workflow' action and also under the "Run workflow" /
"Execute" button when actually running a workflow.
Am I overlooking something or is the only workflow level description
the plain text 'Annotation / Notes' field?
Thanks,
Peter
P.S. I'm not talking about the repository level README text which I
can use for a workflow repository - that is only really seen by the
Galaxy administrator (not by the users within the Galaxy web-interface)
and is not specific to each workflow.
8 years, 8 months
Implementing blend4j
by Eric Kuyt
Hello all,
I'm trying to implement blend4j in a Java application. For this I cloned
the source, fetched the dependencies, and added a Maven assembly plugin to
assemble a jar file with dependencies.
This jar I put on my classpath.
Now fetching histories works, but creating a new one fails. Online I see a
lot of jersey users have this problem and it seems like jersey-json is not
available, but jersey-json is packaged in the jar.
Anyone else have this problem?
@John maybe you have better practices? I would like to have blend4j in my
maven dependencies, but not build my own nexus.
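For reference, the usual way to build such a jar with the assembly plugin is the jar-with-dependencies descriptor. A sketch of the pom.xml fragment (the plugin version is an assumption):

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-assembly-plugin</artifactId>
  <version>2.4</version>
  <configuration>
    <descriptorRefs>
      <descriptorRef>jar-with-dependencies</descriptorRef>
    </descriptorRefs>
  </configuration>
</plugin>
```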
Thanks,
Eric
--
Central Veterinary Institute of Wageningen UR (CVI)
Department of Infection Biology
PO box 65, 8200 AB Lelystad, NL
Visiting address: ASG, Edelhertweg 15, 8219 PH Lelystad
Tel: +31-(0)320-293391
Fax: +31-(0)320-238153
E-mail: eric.kuijt(a)wur.nl
Web: http://www.cvi.wur.nl
8 years, 9 months
error checking active jobs : KeyError: 'destination'
by Geert Vandeweyer
Hi,
I'm getting errors in the handler logs after upgrading galaxy stable to
today's release.
I specified the "dynamic" job runner as default, which returns a job
destination for pbs.
These errors appear as soon as the server is up and running. What could
be the issue here?
galaxy.jobs.runners ERROR 2013-04-02 17:24:23,411 Unhandled exception checking active jobs
Traceback (most recent call last):
  File "/galaxy/galaxy-dist/lib/galaxy/jobs/runners/__init__.py", line 336, in monitor
    self.check_watched_items()
  File "/galaxy/galaxy-dist/lib/galaxy/jobs/runners/pbs.py", line 382, in check_watched_items
    ( failures, statuses ) = self.check_all_jobs()
  File "/galaxy/galaxy-dist/lib/galaxy/jobs/runners/pbs.py", line 463, in check_all_jobs
    pbs_server_name = self.__get_pbs_server(pbs_job_state.job_destination.params)
  File "/galaxy/galaxy-dist/lib/galaxy/jobs/runners/pbs.py", line 221, in __get_pbs_server
    return job_destination_params['destination'].split('@')[-1]
KeyError: 'destination'
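The failing line assumes every PBS job destination carries a "destination" param; a destination built without one (e.g. by a dynamic runner rule) makes the lookup raise. A simplified sketch of the pattern, plus a defensive variant (function names and the fallback value are made up for illustration, this is not Galaxy code):

```python
# Sketch of the lookup that fails above (simplified, not Galaxy code).
def get_pbs_server(job_destination_params):
    # Raises KeyError when the destination has no "destination" param.
    return job_destination_params['destination'].split('@')[-1]


# Defensive variant: fall back to a default server instead of raising.
# "default_server" is a made-up placeholder.
def get_pbs_server_safe(job_destination_params, default='default_server'):
    return job_destination_params.get('destination', default).split('@')[-1]


print(get_pbs_server_safe({}))                                   # falls back to default
print(get_pbs_server_safe({'destination': 'long@pbs.example'}))  # -> 'pbs.example'
```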
--
Geert Vandeweyer, Ph.D.
Department of Medical Genetics
University of Antwerp
Prins Boudewijnlaan 43
2650 Edegem
Belgium
Tel: +32 (0)3 275 97 56
E-mail: geert.vandeweyer(a)ua.ac.be
http://ua.ac.be/cognitivegenetics
http://www.linkedin.com/pub/geert-vandeweyer/26/457/726
8 years, 9 months
Multiple output files do not appear in history
by Christoph Malisi
Hey,
I'm writing an XML definition for a tool that produces a number of output files. As the number is not known before the tool is run, I used the strategy outlined here:
http://wiki.galaxyproject.org/Admin/Tools/Multiple%20Output%20Files#Numbe...
First, I tried my tool definition in a Galaxy instance in a virtual machine so as not to interrupt our production instance (we have a local instance set up). There, everything worked fine. When trying the same code in our production Galaxy instance, it didn't: regardless of how many output files were produced, only one appears in the history (the first one, corresponding to "output1" in the referenced wiki entry). When looking at the temp directory provided to the tool by Galaxy ("$__new_file_dir__" in the wiki; it always points to "galaxy-dist/database/tmp/"), the additional output files have been created by my tool and are named correctly (e.g. primary_123_output2_visible_fastq, primary_123_output3_visible_fastq, …)
Both instances are new, based on release_2013.06.03.
Is there any setting in galaxy somewhere that could prevent this from working?
Thanks for your help
Chris
PS: I wrote a little python script that just creates multiple output files, and the discrepancy between the two instances is reproducible. Here is the code for the python script and the corresponding XML:
#######
import subprocess
import argparse
import os


def main():
    parser = argparse.ArgumentParser()
    parser.add_argument('-i', type=int)
    parser.add_argument('output1')
    parser.add_argument('output1_id')
    parser.add_argument('out_dir')
    args = parser.parse_args()
    print 'Wrapper arguments:', args
    with open(args.output1, 'w') as f:
        f.write('output1')
    for i in range(2, args.i + 1):
        name = 'output%i' % i
        file_type = 'text'
        fname = '%s_%s_%s_%s_%s' % ('primary', args.output1_id, name, 'visible', file_type)
        with open(os.path.join(args.out_dir, fname), 'w') as f:
            f.write('output%i' % i)

if __name__ == "__main__":
    main()
############################
###########################
<tool name="MultiOutTest" id="multiouttest">
    <description>Multiple Outputs test</description>
    <command interpreter="python">
        multiout.py
        -i $how_many
        $output1
        $output1.id
        $__new_file_path__
    </command>
    <inputs>
        <param name="how_many" type="integer" value="2" label="How many output files?"/>
    </inputs>
    <outputs>
        <data name="output1" format="txt"/>
    </outputs>
    <help>
        This is some help text.
    </help>
</tool>
#################################
8 years, 9 months
Job listing exception
by Kyle Ellrott
I'm getting an exception when trying to look at the admin job management
screen. It looks like SQLAlchemy doesn't like non-ascii characters. Any
ideas about what to do?
Error - <type 'exceptions.UnicodeDecodeError'>: 'ascii' codec can't decode byte 0x8b in position 404: ordinal not in range(128)
URL: http://pk.kilokluster.ucsc.edu:8079/admin/jobs
File '/inside/depot4/galaxy/lib/galaxy/web/framework/middleware/error.py', line 149 in __call__
  app_iter = self.application(environ, sr_checker)
File '/inside/depot4/galaxy/eggs/Paste-1.7.5.1-py2.7.egg/paste/recursive.py', line 84 in __call__
  return self.application(environ, start_response)
File '/inside/depot4/galaxy/eggs/Paste-1.7.5.1-py2.7.egg/paste/httpexceptions.py', line 633 in __call__
  return self.application(environ, start_response)
File '/inside/depot4/galaxy/lib/galaxy/web/framework/base.py', line 132 in __call__
  return self.handle_request( environ, start_response )
File '/inside/depot4/galaxy/lib/galaxy/web/framework/base.py', line 190 in handle_request
  body = method( trans, **kwargs )
File '/inside/depot4/galaxy/lib/galaxy/web/framework/__init__.py', line 221 in decorator
  return func( self, trans, *args, **kwargs )
File '/inside/depot4/galaxy/lib/galaxy/web/base/controllers/admin.py', line 1053 in jobs
  for job in jobs:
File '/inside/depot4/galaxy/eggs/SQLAlchemy-0.7.9-py2.7-linux-x86_64-ucs2.egg/sqlalchemy/orm/query.py', line 2341 in instances
File '/inside/depot4/galaxy/eggs/SQLAlchemy-0.7.9-py2.7-linux-x86_64-ucs2.egg/sqlalchemy/engine/base.py', line 3204 in fetchall
File '/inside/depot4/galaxy/eggs/SQLAlchemy-0.7.9-py2.7-linux-x86_64-ucs2.egg/sqlalchemy/engine/base.py', line 3171 in _fetchall_impl
UnicodeDecodeError: 'ascii' codec can't decode byte 0x8b in position 404: ordinal not in range(128)
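Under Python 2, byte strings pulled from the database are decoded with the ascii codec by default, so any byte >= 0x80 (here 0x8b) fails exactly like this. A small sketch of the failure and one common workaround, written in Python 3 syntax for clarity (the sample bytes are made up):

```python
# A made-up row value containing the offending non-ASCII byte 0x8b.
raw = b'job stdout: \x8b compressed output'

try:
    decoded = raw.decode('ascii')  # fails, like the traceback above
except UnicodeDecodeError:
    # Common workaround: decode with an explicit codec and a replacement
    # policy so the admin page can still render the row.
    decoded = raw.decode('utf-8', errors='replace')

print(decoded)  # the bad byte becomes U+FFFD (the replacement character)
```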
8 years, 9 months
Display data in browser
by Gromobir
Hello list,
I'm currently trying to create a new tool which has an .xhtml file as its output.
After the job finishes, I would like to be able to click on the "eye icon" (aka "Display data in browser")
and display the resulting .xhtml file in the frame to the left of my tool history.
Unfortunately, I have no idea yet how to accomplish this. Any help is highly appreciated.
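One possible avenue (a sketch, not a confirmed recipe): Galaxy decides how the "eye icon" renders a dataset from the dataset's datatype and its mimetype, so registering an xhtml datatype in datatypes_conf.xml may be enough for the browser to render it inline. The extension name, datatype class, and attributes below are assumptions:

```xml
<!-- Hypothetical entry for datatypes_conf.xml; class and attributes are assumptions. -->
<datatype extension="xhtml" type="galaxy.datatypes.xml:GenericXml"
          mimetype="application/xhtml+xml" display_in_upload="true"/>
```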
Best regards,
Gromobir
8 years, 9 months
Setting torque queues
by Scott Hazelhurst
[Apologies for my previous email: finger slippage meant I sent it before
completing it]
Dear dev
I would be grateful for some assistance in setting up our Galaxy server to
submit jobs to our Torque server. We are using release_2013.04.01.
I have tried going through the install information, but clearly I've
misunderstood something, and unfortunately most examples come from earlier
versions.
Here is my job_conf.xml file (this is just a temporary file while I set
things up). My intention is to run clustalw on our cluster using the Torque
queue WitsLong, assuming our Torque server is galaxy.bioinf.wits.ac.za.
Below that is the output I get in the log. From the torque log it complains
that there is no default queue specified.
Any help would be appreciated.
Thanks
Scott
<?xml version="1.0"?>
<!-- A sample job config that explicitly configures job running the way it
     is configured by default (if there is no explicit config). -->
<job_conf>
    <plugins>
        <plugin id="local" type="runner" load="galaxy.jobs.runners.local:LocalJobRunner"/>
        <plugin id="pbs" type="runner" load="galaxy.jobs.runners.pbs:PBSJobRunner"/>
    </plugins>
    <handlers default="handlers">
        <handler id="main" tags="handlers,pbs"/>
    </handlers>
    <destinations default="WitsLong">
        <destination id="local" runner="local"/>
        <destination id="WitsLong" runner="pbs">
            <param id="Resource_List">walltime=72:00:00,nodes=1:ppn=8</param>
            <param id="-p">128</param>
        </destination>
    </destinations>
    <tools>
        <tool id="foo" handler="trackster_handler">
            <param id="source">trackster</param>
        </tool>
        <tool id="clustalw" destination="WitsLong"/>
    </tools>
</job_conf>
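PBS error 15039 ("Route rejected by all destinations") in the log below is what Torque reports when a submission names no queue and the server has no default. Judging from the pbs runner's queue@server convention (visible elsewhere in this digest), adding an explicit "destination" param to the WitsLong destination may route jobs to the intended queue. A sketch only; the exact value is an assumption:

```xml
<destination id="WitsLong" runner="pbs">
    <!-- queue@server; the value here is an assumed example -->
    <param id="destination">WitsLong@galaxy.bioinf.wits.ac.za</param>
    <param id="Resource_List">walltime=72:00:00,nodes=1:ppn=8</param>
    <param id="-p">128</param>
</destination>
```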
galaxy.jobs DEBUG 2013-04-30 15:28:13,481 (33) Working directory for job is: /home/galaxy/galaxy-dist/database/job_working_directory/000/33
galaxy.jobs.handler DEBUG 2013-04-30 15:28:13,527 (33) Dispatching to pbs runner
146.141.75.218, 146.141.1.81 - - [30/Apr/2013:15:28:12 +0200] "POST /galaxy/tool_runner/index HTTP/1.1" 200 - "http://galaxy.bioinf.wits.ac.za/galaxy/tool_runner/rerun?id=9752b387803d3e1e" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7; rv:20.0) Gecko/20100101 Firefox/20.0"
galaxy.jobs DEBUG 2013-04-30 15:28:14,213 (33) Persisting job destination (destination id: WitsLong)
Exception AssertionError: AssertionError('State <sqlalchemy.orm.state.InstanceState object at 0x7f79682e5150> is not present in this identity map',) in <bound method InstanceState._cleanup of <sqlalchemy.orm.state.InstanceState object at 0x7f79682e5150>> ignored
galaxy.jobs.handler INFO 2013-04-30 15:28:14,522 (33) Job dispatched
galaxy.tools DEBUG 2013-04-30 15:28:15,189 Building dependency shell command for dependency 'clustalw2'
galaxy.tools WARNING 2013-04-30 15:28:15,192 Failed to resolve dependency on 'clustalw2', ignoring
galaxy.jobs.runners.pbs DEBUG 2013-04-30 15:28:15,957 (33) submitting file /home/galaxy/galaxy-dist/database/pbs/33.sh
galaxy.jobs.runners.pbs DEBUG 2013-04-30 15:28:15,958 (33) command is: python /home/galaxy/galaxy-dist/tools/rgenetics/rgClustalw.py -i "/home/galaxy/galaxy-dist/database/files/000/dataset_2.dat" -o "/home/galaxy/galaxy-dist/database/files/000/dataset_60.dat" -s "ALIGNED" -l "/home/galaxy/galaxy-dist/database/files/000/dataset_61.dat" -t "Clustal_run" -d "DNA" -f "CLUSTAL"
galaxy.jobs.runners.pbs WARNING 2013-04-30 15:28:15,963 (33) pbs_submit failed (try 1/5), PBS error 15039: Route rejected by all destinations
galaxy.jobs DEBUG 2013-04-30 15:28:16,400 (34) Working directory for job is: /home/galaxy/galaxy-dist/database/job_working_directory/000/34
galaxy.jobs.handler DEBUG 2013-04-30 15:28:16,416 (34) Dispatching to pbs runner
146.141.75.218, 146.141.1.81 - - [30/Apr/2013:15:28:13 +0200] "POST /galaxy/tool_runner/index HTTP/1.1" 200 - "http://galaxy.bioinf.wits.ac.za/galaxy/tool_runner/rerun?id=9752b387803d3e1e" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7; rv:20.0) Gecko/20100101 Firefox/20.0"
galaxy.jobs DEBUG 2013-04-30 15:28:16,668 (34) Persisting job destination (destination id: WitsLong)
galaxy.jobs.handler INFO 2013-04-30 15:28:16,689 (34) Job dispatched
galaxy.tools DEBUG 2013-04-30 15:28:16,937 Building dependency shell command for dependency 'clustalw2'
galaxy.tools WARNING 2013-04-30 15:28:16,938 Failed to resolve dependency on 'clustalw2', ignoring
galaxy.jobs.runners.pbs DEBUG 2013-04-30 15:28:17,160 (34) submitting file /home/galaxy/galaxy-dist/database/pbs/34.sh
galaxy.jobs.runners.pbs DEBUG 2013-04-30 15:28:17,161 (34) command is: python /home/galaxy/galaxy-dist/tools/rgenetics/rgClustalw.py -i "/home/galaxy/galaxy-dist/database/files/000/dataset_2.dat" -o "/home/galaxy/galaxy-dist/database/files/000/dataset_62.dat" -s "ALIGNED" -l "/home/galaxy/galaxy-dist/database/files/000/dataset_63.dat" -t "Clustal_run" -d "DNA" -f "CLUSTAL"
galaxy.jobs.runners.pbs WARNING 2013-04-30 15:28:17,162 (34) pbs_submit failed (try 1/5), PBS error 15039: Route rejected by all destinations
galaxy.jobs.runners.pbs WARNING 2013-04-30 15:28:17,971 (33) pbs_submit failed (try 2/5), PBS error 15039: Route rejected by all destinations
galaxy.jobs.runners.pbs WARNING 2013-04-30 15:28:19,166 (34) pbs_submit failed (try 2/5), PBS error 15039: Route rejected by all destinations
galaxy.jobs.runners.pbs WARNING 2013-04-30 15:28:19,975 (33) pbs_submit failed (try 3/5), PBS error 15039: Route rejected by all destinations
galaxy.jobs.runners.pbs WARNING 2013-04-30 15:28:21,170 (34) pbs_submit failed (try 3/5), PBS error 15039: Route rejected by all destinations
galaxy.jobs.runners.pbs WARNING 2013-04-30 15:28:21,978 (33) pbs_submit failed (try 4/5), PBS error 15039: Route rejected by all destinations
galaxy.jobs.runners.pbs WARNING 2013-04-30 15:28:23,173 (34) pbs_submit failed (try 4/5), PBS error 15039: Route rejected by all destinations
galaxy.jobs.runners.pbs WARNING 2013-04-30 15:28:23,982 (33) pbs_submit failed (try 5/5), PBS error 15039: Route rejected by all destinations
galaxy.jobs.runners.pbs WARNING 2013-04-30 15:28:25,176 (34) pbs_submit failed (try 5/5), PBS error 15039: Route rejected by all destinations
galaxy.jobs.runners.pbs ERROR 2013-04-30 15:28:25,984 (33) All attempts to submit job failed
galaxy.jobs.runners.pbs ERROR 2013-04-30 15:28:27,179 (34) All attempts to submit job failed
8 years, 9 months
Upstart script to manage a multi instance load balanced installation
by Seth Sims
After some work I've created an Upstart script which can manage a load
balanced Galaxy configuration as described in the wiki. I thought I
would put it on this list for other people to use. The script parses
universe_wsgi.ini just like run.sh and spawns all of the servers it finds.
It comes in two pieces, galaxy.conf and galaxy-worker.conf. Once you place
them both in /etc/init and make the proper edits for the environment, a
server can be started with "sudo start galaxy". The configuration starts
the server at boot time and puts all of the instances under the management
of Upstart, which deals with pids, logging to syslog, and respawning if an
instance crashes.
I have just gotten this working reasonably well but have done basically no
testing so there are bugs to be found. Any comments are welcome if anyone
knows a better way to do something here.
- Seth
*galaxy.conf*
----------------------------
author "Seth Sims <seth.sims(a)gmail.com>"
version "0.0.1 test"
description "galaxy master process. Fetches eggs and spawns all of the servers it finds configured"

start on started network-services

# make sure that any eggs we download are at least owned by the galaxy group.
# we cannot use setuid in this script because only root can issue the "start galaxy-worker"
# command. But this way the galaxy instances should still be able to use their eggs.
setgid galaxy

# put galaxy root directory here
chdir /srv/galaxy-dist/

pre-start script
    date
    echo "checking python version"
    python ./scripts/check_python.py
    [ $? -ne 0 ] && exit 1
    echo "checking for and tossing out expired eggs"
    python ./scripts/check_eggs.py -q
    if [ $? -eq 0 ]; then
        echo "Some eggs are out of date, attempting to fetch..."
        python ./scripts/fetch_eggs.py
        if [ $? -eq 0 ]; then
            echo "Fetch Successful."
        else
            echo "Fetch failed."
        fi
    fi
    echo "starting servers"
    SERVERS=`sed -n 's/^\[server:\(.*\)\]/\1/ p' universe_wsgi.ini | xargs echo`
    for SERVER in ${SERVERS} ; do
        echo "starting server ${SERVER}"
        start galaxy-worker SERVER_NAME=${SERVER}
    done
end script

post-stop script
    SERVERS=`sed -n 's/^\[server:\(.*\)\]/\1/ p' universe_wsgi.ini | xargs echo`
    date
    echo "stopping galaxy servers"
    for SERVER in ${SERVERS} ; do
        echo "stopping ${SERVER}"
        stop galaxy-worker SERVER_NAME=${SERVER}
    done
end script
---------------------------
*galaxy-worker*
author "Seth Sims <seth.sims(a)gmail.com>"
version "0.0.1 test"
description "Starts a galaxy server instance. This is run from the galaxy.conf file"

instance $SERVER_NAME

# make sure we are running as the galaxy user
setuid galaxy
setgid galaxy

# put the galaxy root directory here
chdir /srv/galaxy-dist/

# having multiple instances of galaxy using the same egg directory was causing a race
# condition that was stopping the instances from starting correctly. So give each instance
# its own directory under /tmp
env PYTHON_EGG_CACHE=/tmp/${SERVER_NAME}_egg/

respawn

script
    exec python ./scripts/paster.py serve universe_wsgi.ini --server-name=${SERVER_NAME}
end script
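The sed line in galaxy.conf pulls every [server:NAME] section name out of universe_wsgi.ini, mirroring what run.sh does. For reference, the same extraction in Python (a sketch against a made-up inline config, using modern configparser syntax):

```python
from configparser import ConfigParser  # Python 3 stdlib

# Made-up miniature universe_wsgi.ini for illustration.
SAMPLE_INI = """\
[server:web0]
use = egg:Paste#http

[server:web1]
use = egg:Paste#http

[app:main]
use = egg:galaxy
"""

def server_names(ini_text):
    """Return the NAME part of every [server:NAME] section."""
    parser = ConfigParser()
    parser.read_string(ini_text)
    return [s.split(':', 1)[1] for s in parser.sections() if s.startswith('server:')]

print(server_names(SAMPLE_INI))  # -> ['web0', 'web1']
```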
8 years, 9 months
Jobs remain in queue until restart
by Thon Deboer
Hi,
I have noticed that from time to time the job queue seems to be "stuck" and
can only be unstuck by restarting Galaxy.
The jobs seem to be in the queued state, the Python job handler processes
are hardly ticking over, and the cluster is empty.
When I restart, the startup procedure realizes all jobs are in a "new"
state and then assigns a job handler, after which the jobs start fine.
Any ideas?
Thon
P.S. I am using the June version of Galaxy and I DO set limits on my users in
job_conf.xml, as shown below. (Maybe it is related? Before the queue went
dormant, this user had started lots of jobs and may have hit the limit, but I
assumed this limit was on the number of running jobs at one time, right?)
<?xml version="1.0"?>
<job_conf>
    <plugins workers="4">
        <!-- "workers" is the number of threads for the runner's work queue.
             The default from <plugins> is used if not defined for a <plugin>. -->
        <plugin id="local" type="runner" load="galaxy.jobs.runners.local:LocalJobRunner" workers="2"/>
        <plugin id="drmaa" type="runner" load="galaxy.jobs.runners.drmaa:DRMAAJobRunner" workers="8"/>
        <plugin id="cli" type="runner" load="galaxy.jobs.runners.cli:ShellJobRunner" workers="2"/>
    </plugins>
    <handlers default="handlers">
        <!-- Additional job handlers - the id should match the name of a
             [server:<id>] in universe_wsgi.ini. -->
        <handler id="handler0" tags="handlers"/>
        <handler id="handler1" tags="handlers"/>
        <handler id="handler2" tags="handlers"/>
        <handler id="handler3" tags="handlers"/>
        <!-- <handler id="handler10" tags="handlers"/>
             <handler id="handler11" tags="handlers"/>
             <handler id="handler12" tags="handlers"/>
             <handler id="handler13" tags="handlers"/>
        -->
    </handlers>
    <destinations default="regularjobs">
        <!-- Destinations define details about remote resources and how jobs
             should be executed on those remote resources. -->
        <destination id="local" runner="local"/>
        <destination id="regularjobs" runner="drmaa" tags="cluster">
            <!-- These are the parameters for qsub, such as queue etc. -->
            <param id="nativeSpecification">-V -q long.q -pe smp 1</param>
        </destination>
        <destination id="longjobs" runner="drmaa" tags="cluster,long_jobs">
            <param id="nativeSpecification">-V -q long.q -pe smp 1</param>
        </destination>
        <destination id="shortjobs" runner="drmaa" tags="cluster,short_jobs">
            <param id="nativeSpecification">-V -q short.q -pe smp 1</param>
        </destination>
        <destination id="multicorejobs4" runner="drmaa" tags="cluster,multicore_jobs">
            <param id="nativeSpecification">-V -q long.q -pe smp 4</param>
        </destination>
        <!-- <destination id="real_user_cluster" runner="drmaa">
            <param id="galaxy_external_runjob_script">scripts/drmaa_external_runner.py</param>
            <param id="galaxy_external_killjob_script">scripts/drmaa_external_killer.py</param>
            <param id="galaxy_external_chown_script">scripts/external_chown_script.py</param>
        </destination> -->
        <destination id="dynamic" runner="dynamic">
            <!-- A destination that represents a method in the dynamic runner. -->
            <param id="type">python</param>
            <param id="function">interactiveOrCluster</param>
        </destination>
    </destinations>
    <tools>
        <!-- Tools can be configured to use specific destinations or handlers,
             identified by either the "id" or "tags" attribute. If assigned to
             a tag, a handler or destination that matches that tag will be
             chosen at random. -->
        <tool id="bwa_wrapper" destination="multicorejobs4"/>
    </tools>
    <limits>
        <!-- Certain limits can be defined.
        <limit type="registered_user_concurrent_jobs">500</limit>
        <limit type="unregistered_user_concurrent_jobs">1</limit>
        <limit type="concurrent_jobs" id="local">1</limit>
        <limit type="concurrent_jobs" tag="cluster">200</limit>
        <limit type="concurrent_jobs" tag="long_jobs">200</limit>
        <limit type="concurrent_jobs" tag="short_jobs">200</limit>
        <limit type="concurrent_jobs" tag="multicore_jobs">100</limit>
        -->
    </limits>
</job_conf>
8 years, 9 months