Hello Stavros, 
Galaxy hands jobs off to a variety of job managers for which interfaces have been implemented. Traditionally these have been DRMAA-compatible job schedulers, but about a year ago support for Kubernetes was added; it is the only container cluster manager that Galaxy natively supports. However, my understanding is that Kubernetes (and similar container managers) is not really ready to take on short-running jobs the way traditional batch clusters do - but please correct me if you know differently. Either way, for more details take a look at the pull request where Kubernetes support was added: https://github.com/galaxyproject/galaxy/pull/2314
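To give you a sense of what that integration looks like (and of what "drilling into the code base" would mean for Mesos or Swarm): each job manager is a runner plugin under lib/galaxy/jobs/runners/ that gets wired up through Galaxy's job configuration. Below is only a rough sketch of such a plugin - the method names are approximate and the cluster-side helpers are placeholders, so please treat the base classes in lib/galaxy/jobs/runners/__init__.py and the Kubernetes runner from the PR above as the authoritative reference:

import logging

from galaxy.jobs.runners import AsynchronousJobRunner, AsynchronousJobState

log = logging.getLogger(__name__)


class MyClusterJobRunner(AsynchronousJobRunner):
    """Hypothetical runner submitting Galaxy jobs to an external cluster manager."""
    runner_name = "MyClusterRunner"

    def queue_job(self, job_wrapper):
        # Prepare the job's command line, submit it to the external
        # scheduler, then hand it to the monitor thread.
        if not self.prepare_job(job_wrapper):
            return
        external_id = self._submit_to_cluster(job_wrapper.runner_command_line)
        job_state = AsynchronousJobState(job_wrapper=job_wrapper, job_id=external_id)
        self.monitor_queue.put(job_state)

    def check_watched_item(self, job_state):
        # Polled periodically; tell Galaxy when the external job is done.
        if self._cluster_job_finished(job_state.job_id):
            self.mark_as_finished(job_state)
            return None
        return job_state

    def stop_job(self, job):
        # Cancel the external job when a user deletes it in Galaxy.
        self._cancel_on_cluster(job.job_runner_external_id)

    def recover(self, job, job_wrapper):
        # Re-attach to still-running external jobs after a Galaxy restart.
        pass

    # _submit_to_cluster, _cluster_job_finished and _cancel_on_cluster are
    # stand-ins for whatever API your cluster manager (Mesos, Swarm,
    # Kubernetes, ...) exposes.

A runner like this is then referenced from the <plugins> section of job_conf.xml, which is also how the Kubernetes runner in the PR above gets enabled.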

In addition, not sure if you're aware of this, but Galaxy has a cloud manager that helps it run in the cloud: https://new.galaxyproject.org/cloudman/ While the current version of the app was developed with Amazon in mind, we are in the process of redoing our entire cloud stack so it becomes cloud-agnostic and container-based. We have implemented a cloud abstraction layer in a library called CloudBridge (http://cloudbridge.readthedocs.io/) and a new CloudLaunch that allows users to self-provision Galaxy instances (https://beta.launch.usegalaxy.org/), and we are starting work on a new version of CloudMan that will allow the entire Galaxy service system to run in containers as well as manage container-based jobs. We are just starting on the implementation, so if you are interested in the topic and would like to participate, please let me know.
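To make the CloudBridge piece concrete: the idea is that CloudLaunch and the new CloudMan talk to every supported cloud through one API instead of coding against each provider separately. Roughly - the provider name and configuration keys below are placeholders, the docs at http://cloudbridge.readthedocs.io/ have the exact ones:

from cloudbridge.cloud.factory import CloudProviderFactory, ProviderList

# Placeholder credentials/endpoint; the real keys depend on the provider.
config = {
    'os_auth_url': 'https://keystone.example.org:5000/v2.0',
    'os_username': 'me',
    'os_password': 'secret',
}
provider = CloudProviderFactory().create_provider(ProviderList.OPENSTACK, config)

# The calls below stay the same no matter which provider was created above.
for instance in provider.compute.instances.list():
    print(instance.name, instance.state)

The layers above (CloudLaunch, CloudMan) only ever see the provider object, which is what should make the new stack usable on clouds other than Amazon.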

Hope this helps,
Enis

On Fri, Mar 3, 2017 at 7:00 AM, Stavros Sachtouris <saxtouri@grnet.gr> wrote:

Hi all,

We are going to deploy Galaxy on the ~okeanos cloud (okeanos.grnet.gr). We have already developed a working storage backend for our object store system (https://github.com/galaxyproject/galaxy/pull/3611).

Now we are focusing on how to make Galaxy run instances/tasks on our VMs, and I would like some advice on how to approach this.

What we need is a (potentially large) Galaxy deployment with the ability to run workflows of Dockerized "tasks". Each task should run on a provisioned VM on our cloud. There are many tools out there that let us automate VM provisioning, Docker cluster management, scaling, monitoring and accounting, and we want to provide all of these features to our partners in European academia.

The question is: how does Galaxy communicate with a cluster manager like Apache Mesos or Docker Swarm? Are there any other tools I should look into that are known to work with Galaxy? Will we have to drill into the Galaxy code base to provide these features?

Thank you in advance for your suggestions,

Stavros Sachtouris
developer @ www.grnet.gr


