Hi Edgar,
I think you should set handler0 as your default handler; otherwise, non-upload tools may try to use the local handler to submit to the torque destination.
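For example, a minimal handlers block along those lines (a sketch, untested against your instance):

```xml
<handlers default="handler0">
  <!-- upload1 is still pinned to "main" in the tools section;
       everything else now falls through to handler0 / drmaa -->
  <handler id="main"><plugin id="local"/></handler>
  <handler id="handler0"><plugin id="drmaa"/></handler>
</handlers>
```

With default pointing at a single handler id rather than a shared tag, unassigned jobs can no longer land on the local handler by chance.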

Ciao,
Nicola

---- Fernandez Edgar wrote ----

Hello Eric,

 

I actually have a small issue with the job configuration.

When I upload my file via the URL, it works every time.

But when I run a job on that file, the first two tries fail and only the third executes.

See the attached screenshot for the error message.

 

Do you have an idea?

 

Here’s my job_conf file:

<job_conf>

  <plugins>

    <plugin id="local" type="runner" load="galaxy.jobs.runners.local:LocalJobRunner"/>

    <plugin id="drmaa" type="runner" load="galaxy.jobs.runners.drmaa:DRMAAJobRunner"/>

  </plugins>

  <handlers default="handlers">

    <handler id="main" tags="handlers"><plugin id="local"/></handler>

    <handler id="handler0" tags="handlers"><plugin id="drmaa"/></handler>

  </handlers>

  <destinations default="torque">

    <destination id="local" runner="local"></destination>

    <destination id="torque" runner="drmaa"><param id="native_specification">-l nodes=1:ppn=4</param></destination>

  </destinations>

  <tools>

    <tool id="upload1" handler="main" destination="local"/>

  </tools>

</job_conf>

 

Could the problem be the default attribute on my handlers tag?

 

 

Cordialement / Regards,

Edgar Fernandez

 

De : Fernandez Edgar
Envoyé : January-23-15 10:29 AM
À : 'Eric Rasche'
Cc : galaxy-dev@bx.psu.edu
Objet : RE: [galaxy-dev] show stopper for my galaxy installation

 

Hello Eric,

 

It works great! Thank you!

Is it possible to use the same logic to override an input parameter of a job?

For example, trimmomatic executes by default with the input parameter "-threads 6".

Can I change that parameter to "-threads 4"?
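One hedged idea, for what it's worth: many Galaxy tool wrappers take their thread count from the GALAXY_SLOTS environment variable, which the runner sets from the destination's core request, so you may be able to change the count by mapping the tool to a 4-core destination rather than editing the wrapper. A sketch, untested, and the tool id is an assumption (check the id of your installed trimmomatic):

```xml
<destinations default="torque">
  <!-- hypothetical 4-core destination; if the wrapper reads
       GALAXY_SLOTS, jobs sent here should run with -threads 4 -->
  <destination id="torque4" runner="drmaa">
    <param id="native_specification">-l nodes=1:ppn=4</param>
  </destination>
</destinations>
<tools>
  <!-- tool id is a guess; adjust to match your installation -->
  <tool id="trimmomatic" destination="torque4"/>
</tools>
```

Whether your particular trimmomatic wrapper honours GALAXY_SLOTS is something to verify in its command template; if it hardcodes -threads 6, this mapping alone won't change it.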

 

Cordialement / Regards,

Edgar Fernandez

 

De : Eric Rasche [mailto:esr@tamu.edu]
Envoyé : January-22-15 3:05 PM
À : Fernandez Edgar
Cc : galaxy-dev@bx.psu.edu
Objet : Re: [galaxy-dev] show stopper for my galaxy installation

 

The problem is that the tool to fetch data isn't being run on your submit node, but on the cluster.

 

It's a fairly easy problem to solve:

  • In your job_conf you'll need to define a way to run jobs locally (either via local runner, or via a special queue dedicated to your submit node)
  • Then you'll need to specify that the upload tool is forced to that job runner.

Something like this (untested, probably invalid XML...):

 

<plugins workers="4">

  <plugin id="local" type="runner" load="galaxy.jobs.runners.local:LocalJobRunner" />

  <!-- your torque runner here -->

</plugins>

<destinations default="cluster">

  <destination id="local" runner="local"></destination>

  <!-- your cluster dest here -->

</destinations>

<tools>

 

  <!-- we force the upload tool to the local job runner, thereby bypassing the issue of internet access on the cluster nodes -->

  <tool id="upload1" destination="local" />

</tools>

 

Cheers,

Eric

 

2015-01-22 13:51 GMT-06:00 Fernandez Edgar <edgar.fernandez@umontreal.ca>:

Hello gents,

 

I started this new thread because I have a showstopper for my installation of galaxy.

Here’s my situation:

1. I have a galaxy server that has internet access.

2. My galaxy server is also my torque server and my only torque submit node.

3. I have three torque compute nodes that DO NOT have internet access.

 

Now, I've tried uploading a file (larger than 2 GB) via a URL, and it fails.

However, after adding my galaxy server as a compute node and making sure the job would run on that machine, I re-tried uploading the same file and it works.

 

Now, my compute nodes are in a network that is completely protected from the outside world.

So no internet access.

 

What are my options in this case?

 

Cordialement / Regards,

 

Edgar Fernandez

System Administrator (Linux)

Direction Générale des Technologies de l'Information et de la Communication

Office: 1-514-343-6111 ext. 16568

 

Université de Montréal

PAVILLON ROGER-GAUDRY, bureau X-218

 


___________________________________________________________
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:
  https://lists.galaxyproject.org/

To search Galaxy mailing lists use the unified search at:
  http://galaxyproject.org/search/mailinglists/



 

--

Eric Rasche

Programmer II

 

Center for Phage Technology

Rm 312A, BioBio

Texas A&M University

College Station, TX 77843

404-692-2048

esr@tamu.edu

rasche.eric@yandex.ru