Hi John,

No, I haven't been able to figure this out. To answer your questions:
I haven't modified run.sh. I do, however, specify that the "upload1"
tool be run on the "local" destination, since the cluster nodes are
configured so that they can't communicate with the outside world,
which would otherwise prevent some of its functionality. Is it
possible that this is the problem? Could the separate parts of the
test (e.g. uploading data, actually running the script, setting
metadata, etc.) be running on different nodes and writing to
temporary locations that are not accessible to the other nodes?
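
For reference, the relevant part of my job_conf.xml looks roughly
like this (a sketch from memory; the destination ids are my own, the
element names follow the standard job_conf.xml schema):

    <job_conf>
        <plugins>
            <plugin id="local" type="runner" load="galaxy.jobs.runners.local:LocalJobRunner"/>
            <plugin id="drmaa" type="runner" load="galaxy.jobs.runners.drmaa:DRMAAJobRunner"/>
        </plugins>
        <destinations default="sge">
            <destination id="local" runner="local"/>
            <destination id="sge" runner="drmaa"/>
        </destinations>
        <tools>
            <!-- run the built-in upload tool locally, since the
                 compute nodes cannot reach the outside world -->
            <tool id="upload1" destination="local"/>
        </tools>
    </job_conf>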

Thanks!

On Mon, Apr 11, 2016 at 11:08 AM, John Chilton <jmchilton@gmail.com> wrote:
Did you ever figure this out? I cannot think of anything that would
cause this - if the job conf is working with Galaxy it should work
with planemo. Are you sure Galaxy was actually using the configuration
and not just the local runner?
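
One quick way to check (assuming Galaxy is daemonized and logging to
paster.log - adjust the file name for your setup) is to look at the
startup log for the DRMAA runner being loaded:

    # if the job conf is in effect, startup should mention
    # galaxy.jobs.runners.drmaa being loaded
    grep -i drmaa paster.log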

Is it possible you've modified run.sh to change the environment -
a DRMAA library path or something? I guess Pulsar would need to
know about that same change.
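
If you have, I would expect something along these lines in run.sh or
in a file it sources (the path is purely illustrative;
DRMAA_LIBRARY_PATH is the variable the drmaa Python library reads):

    # point the drmaa Python library at the cluster's DRMAA shared library
    export DRMAA_LIBRARY_PATH=/path/to/libdrmaa.so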

I did some testing this morning and added documentation to planemo
about developing tools for clusters and testing against them, and I
didn't encounter any problems:
http://planemo.readthedocs.org/en/latest/writing_advanced.html#cluster-usage
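
The sort of invocation I tested was roughly this (paths are
placeholders; --galaxy_root and --job_config_file are existing
"planemo test" options):

    planemo test --galaxy_root /path/to/galaxy \
        --job_config_file job_conf.xml my_tool.xml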


-John

On Fri, Mar 4, 2016 at 2:39 PM, D K <danielfortin86@gmail.com> wrote:
> Hi,
>
> I posted this question to Biostars but didn't get an answer...
>
> Is it possible to use the "planemo test" functionality on an SGE
> cluster? I can run a custom tool correctly on the SGE cluster when I
> submit a job manually through the Galaxy interface. If I change
> job_conf.xml so that the job runs locally, "planemo test" succeeds,
> but if I change job_conf.xml so that Galaxy uses the SGE cluster
> (that variant is sketched below, after the captured output), planemo
> fails:
>
> Traceback (most recent call last):
>   File "/remote/home/galaxyd/galaxy-test-SGE/test/functional/test_toolbox.py", line 289, in test_tool
>     self.do_it( td )
>   File "/remote/home/galaxyd/galaxy-test-SGE/test/functional/test_toolbox.py", line 38, in do_it
>     stage_data_in_history( galaxy_interactor, testdef.test_data(), test_history, shed_tool_id )
>   File "/remote/home/galaxyd/galaxy-test-SGE/test/base/interactor.py", line 43, in stage_data_in_history
>     upload_wait()
>   File "/remote/home/galaxyd/galaxy-test-SGE/test/base/interactor.py", line 280, in wait
>     while not self.__history_ready( history_id ):
>   File "/remote/home/galaxyd/galaxy-test-SGE/test/base/interactor.py", line 298, in __history_ready
>     return self._state_ready( state, error_msg="History in error state." )
>   File "/remote/home/galaxyd/galaxy-test-SGE/test/base/interactor.py", line 361, in _state_ready
>     raise Exception( error_msg )
> Exception: History in error state.
> -------------------- >> begin captured stdout << ---------------------
> History with id 2891970512fa2d5a in error - summary of datasets in error below.
> --------------------------------------
> | 1 - MTB_Genes.fasta (HID - NAME)
> | Dataset Blurb:
> |  tool error
> | Dataset Info:
> |  The cluster DRM system terminated this job
> | Dataset Job Standard Output:
> |  *Standard output was empty.*
> | Dataset Job Standard Error:
> |  *Standard error was empty.*
> |
> --------------------------------------
>
> --------------------- >> end captured stdout << ----------------------
>
> Is this a problem with my configuration or a limitation of planemo?
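>
> For reference, the SGE variant of my job_conf.xml is along these
> lines (a sketch; the destination id is arbitrary, and the plugin
> line follows the standard job_conf.xml schema):
>
>     <plugins>
>         <plugin id="drmaa" type="runner" load="galaxy.jobs.runners.drmaa:DRMAAJobRunner"/>
>     </plugins>
>     <destinations default="sge">
>         <destination id="sge" runner="drmaa"/>
>     </destinations>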
>
>
> Thanks!