Re: Running slurm job with pulsar
Dear Gianmauro,

Thanks for your answer. I think that my question was not clear enough. Below you will find some answers to your suggestions; maybe that will make it clearer for you to guide me.

* check Pulsar's log for error messages

-> I looked into the staging directory, but as my test analysis executes successfully, I have no error log. When I launch a test from the Galaxy GUI, it executes successfully on the cluster. My problem is that it is not executed as a job, i.e. not via the srun or sbatch command (so not through the scheduler). I would like Pulsar to submit a job on the cluster, just like other users do, and not execute the analysis directly "in the terminal".

* verify if your Pulsar server can reach the cluster through ssh

-> The Pulsar server is on the HPC cluster. The connection between Galaxy (RabbitMQ) and the cluster (Pulsar) is fine.

* in the staging directory of your job there should be a command.sh file. You can try to run it manually (sbatch command.sh or something similar) and see if it works

Yes, indeed I have this file in the staging directory (see below). When I execute the command.sh file with sbatch command.sh, it fails immediately, which is normal since command.sh is not a Slurm job script.

```
total 48
-rw-r--r-- 1 pulsar pulsar    4 Jun 17 14:15 use_metadata_directory
-rw-r--r-- 1 pulsar pulsar   10 Jun 17 14:15 tool_version
-rw-r--r-- 1 pulsar pulsar   59 Jun 17 14:15 tool_id
drwxr-xr-x 2 pulsar pulsar    6 Jun 17 14:15 tool_files
drwxr-xr-x 2 pulsar pulsar    6 Jun 17 14:15 metadata
drwxr-xr-x 2 pulsar pulsar    6 Jun 17 14:15 configs
drwxr-xr-x 5 pulsar pulsar   30 Jun 17 14:15 ..
-rw-r--r-- 1 pulsar pulsar 2551 Jun 17 14:15 launch_config
drwxr-xr-x 2 pulsar pulsar   46 Jun 17 14:15 inputs
-rw-r--r-- 1 pulsar pulsar    4 Jun 17 14:15 preprocessed
-rwx------ 1 pulsar pulsar 5441 Jun 17 14:15 command.sh
-rw-r--r-- 1 pulsar pulsar    0 Jun 17 14:15 stdout
drwxr-xr-x 2 pulsar pulsar    6 Jun 17 14:15 home
drwxr-xr-x 2 pulsar pulsar   58 Jun 17 14:15 working
-rw-r--r-- 1 pulsar pulsar    4 Jun 17 14:15 running
-rw-r--r-- 1 pulsar pulsar  546 Jun 17 14:15 stderr
drwxr-xr-x 2 pulsar pulsar   26 Jun 17 14:15 outputs
-rw-r--r-- 1 pulsar pulsar    1 Jun 17 14:15 return_code
-rw-r--r-- 1 pulsar pulsar   10 Jun 17 14:15 final_status
-rw-r--r-- 1 pulsar pulsar    0 Jun 17 14:15 postprocessed
drwxr-xr-x 9 pulsar pulsar 4096 Jun 17 14:15 .
```

------------
Luc Cornet, PhD
Bio-informatician
Mycology and Aerobiology
Sciensano

----- Original message -----
From: "Gianmauro Cuccuru" <gmauro@informatik.uni-freiburg.de>
To: "Luc Cornet" <luc.cornet@uliege.be>, "HelpGalaxy" <galaxy-dev@lists.galaxyproject.org>
Cc: "Colignon David" <David.Colignon@uliege.be>, "Baurain Denis" <Denis.Baurain@uliege.be>, "Pierre Becker" <Pierre.Becker@sciensano.be>
Sent: Friday, 18 June 2021 11:48:30
Subject: Re: [galaxy-dev] Running slurm job with pulsar

Hi Luc,

I am not a Slurm expert but I can suggest several things:

* check Pulsar's log for error messages
* verify if your Pulsar server can reach the cluster through ssh
* in the staging directory of your job there should be a command.sh file. You can try to run it manually (sbatch command.sh or something similar) and see if it works

Cheers,
Gianmauro

On 17.06.21 20:18, Luc Cornet wrote:
Dear all,
I am trying to launch Slurm jobs with Pulsar using the CLI interface (instead of DRMAA). The Pulsar playbook below runs without problems, but the analysis still executes outside of Slurm. The analysis completes successfully, but not as a Slurm job.
Can you help me get Pulsar to submit to Slurm? What did I miss?
Thanks, Luc
```
# Put your Galaxy server's fully qualified domain name (FQDN) (or the FQDN of the RabbitMQ server) above.
pulsar_root: /opt/pulsar

pulsar_pip_install: true
pulsar_pycurl_ssl_library: openssl
pulsar_systemd: true
pulsar_systemd_runner: webless

pulsar_create_user: false
pulsar_user: {name: pulsar, shell: /bin/bash}

pulsar_optional_dependencies:
  - pyOpenSSL
  # For remote transfers initiated on the Pulsar end rather than the Galaxy end
  - pycurl
  # drmaa required if connecting to an external DRM using it.
  - drmaa
  # kombu needed if using a message queue
  - kombu
  # amqp 5.0.3 changes behaviour in an unexpected way, pin for now.
  - 'amqp==5.0.2'
  # psutil and pylockfile are optional dependencies but can make Pulsar
  # more robust in small ways.
  - psutil

pulsar_yaml_config:
  conda_auto_init: True
  conda_auto_install: True
  staging_directory: "{{ pulsar_staging_dir }}"
  persistence_directory: "{{ pulsar_persistence_dir }}"
  tool_dependency_dir: "{{ pulsar_dependencies_dir }}"
  # The following are the settings for the pulsar server to contact the
  # message queue with related timeouts etc.
  message_queue_url: "pyamqp://galaxy_au:{{ rabbitmq_password_galaxy_au }}@{{ galaxy_server_url }}:5671//pulsar/galaxy_au?ssl=1"
  managers:
    _default_:
      type: queued_cli
      job_plugin: slurm
      native_specification: "-p batch --tasks=1 --cpus-per-task=2 --mem-per-cpu=1000 -t 10:00"
  min_polling_interval: 0.5
  amqp_publish_retry: True
  amqp_publish_retry_max_retries: 5
  amqp_publish_retry_interval_start: 10
  amqp_publish_retry_interval_step: 10
  amqp_publish_retry_interval_max: 60

# We also need to create the dependency resolver file so pulsar knows how to
# find and install dependencies for the tools we ask it to run. The simplest
# method which covers 99% of the use cases is to use conda auto installs similar
# to how Galaxy works.
pulsar_dependency_resolvers:
  - name: conda
    args:
      - name: auto_init
        value: true
```
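With a manager of type queued_cli and job_plugin: slurm as above, each Galaxy job should appear in the Slurm queue under the pulsar account, which is exactly what this thread is trying to verify. A minimal way to check while a test job runs (a sketch; it assumes the service runs as user `pulsar`, per `pulsar_user` above, and the `sacct` call only works if slurmdbd accounting is enabled):

```
# Is the test job visible to the scheduler at all?
squeue -u pulsar

# Accounting history for today's jobs (requires slurmdbd accounting)
sacct -u pulsar --starttime today --format=JobID,JobName,Partition,State,Elapsed
```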
------------
Luc Cornet, PhD
Bio-informatician
Mycology and Aerobiology
Sciensano
--
Gianmauro Cuccuru
UseGalaxy.eu Bioinformatics Group
Department of Computer Science
Albert-Ludwigs-University Freiburg
Georges-Köhler-Allee 106
79110 Freiburg, Germany
On 18.06.21 15:34, Luc Cornet wrote:
Dear Gianmauro,
Thanks for your answer. I think that my question was not clear enough. Below you will find some answers to your suggestions; maybe that will make it clearer for you to guide me.
* check Pulsar's log for error messages
If you are using systemd, as it seems, try this: journalctl -u pulsar, or journalctl -f -u pulsar to continuously print new entries as they are appended to the journal.
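To narrow a busy journal down to scheduler activity, filtering can help; a rough sketch (the exact wording Pulsar logs at submission time depends on the Pulsar version, so the keyword pattern below is only a guess):

```
# Look for any trace of an external submission in the Pulsar log
journalctl -u pulsar | grep -i -E 'sbatch|slurm|drmaa'
```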
-> I looked into the staging directory, but as my test analysis executes successfully, I have no error log. When I launch a test from the Galaxy GUI, it executes successfully on the cluster. My problem is that it is not executed as a job, i.e. not via the srun or sbatch command (so not through the scheduler). I would like Pulsar to submit a job on the cluster, just like other users do, and not execute the analysis directly "in the terminal".
* verify if your Pulsar server can reach the cluster through ssh
-> The Pulsar server is on the HPC cluster. The connection between Galaxy (RabbitMQ) and the cluster (Pulsar) is fine.
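Since the queued_cli manager shells out to Slurm's own client commands on the Pulsar host, it is also worth confirming that those binaries are on the pulsar service user's PATH; a minimal sketch:

```
# Can the pulsar user find the Slurm client tools in a login environment?
sudo -u pulsar -i which sbatch squeue scancel
```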
* in the staging directory of your job there should be a command.sh file. You can try to run it manually (sbatch command.sh or something similar) and see if it works
Yes, indeed I have this file in the staging directory (see the listing above). When I execute the command.sh file with sbatch command.sh, it fails immediately, which is normal since command.sh is not a Slurm job script.
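For the manual test, sbatch can usually submit a plain shell script if the resources are given on the command line rather than as #SBATCH directives in the file. A hedged sketch, reusing the resources from the native_specification in the config above (the staging path is illustrative):

```
# Submit the staged command script by hand with explicit resources
cd /opt/pulsar/files/staging/9
sbatch -p batch -n 1 -c 2 --mem-per-cpu=1000 -t 10:00 --wrap "bash ./command.sh"
```

Using --wrap sidesteps the usual requirement that a batch script start with a shebang line, so command.sh itself does not need to be Slurm-aware.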
What is in the stderr file?
Re,

This is the output of journalctl -q -f -u pulsar:

```
Jun 18 17:05:11 nic5-login1 pulsar[2214545]: 2021-06-18 17:05:11,565 DEBUG [pulsar.managers.staging.pre][[manager=_default_]-[action=preprocess]-[job=9]] Staging input 'dataset_1.dat' via FileAction[path=/data/000/dataset_1.dat,action_type=remote_transfer,url=https://galaxy.inbios.uliege.be/api/jobs/0a248a1f62a0cc04/files?job_key=99525bca5c821ddb&path=/data/000/dataset_1.dat&file_type=input] to /opt/pulsar/files/staging/9/inputs/dataset_1.dat
Jun 18 17:05:11 nic5-login1 pulsar[2214545]: 2021-06-18 17:05:11,623 DEBUG [pulsar.managers.staging.pre][[manager=_default_]-[action=preprocess]-[job=9]] Staging input 'dataset_2.dat' via FileAction[path=/data/000/dataset_2.dat,action_type=remote_transfer,url=https://galaxy.inbios.uliege.be/api/jobs/0a248a1f62a0cc04/files?job_key=99525bca5c821ddb&path=/data/000/dataset_2.dat&file_type=input] to /opt/pulsar/files/staging/9/inputs/dataset_2.dat
Jun 18 17:05:11 nic5-login1 pulsar[2214545]: 2021-06-18 17:05:11,681 DEBUG [pulsar.managers.base][[manager=_default_]-[action=preprocess]-[job=9]] job_id: 9 - Checking authorization of command_line [set -o | grep -q pipefail && set -o pipefail; bwa mem -t "${GALAXY_SLOTS:-1}" -v 1 '/cvmfs/data.galaxyproject.org/managed/bwa_mem_index/eschColi_K12/eschColi_K12.fa' '/opt/pulsar/files/staging/9/inputs/dataset_1.dat' '/opt/pulsar/files/staging/9/inputs/dataset_2.dat' | samtools sort -@${GALAXY_SLOTS:-2} -T "${TMPDIR:-.}" -O bam -o '/opt/pulsar/files/staging/9/outputs/dataset_9.dat'; return_code=$?; sh -c "exit $return_code"]
Jun 18 17:05:11 nic5-login1 pulsar[2214545]: 2021-06-18 17:05:11,681 DEBUG [galaxy.tool_util.deps][[manager=_default_]-[action=preprocess]-[job=9]] Using dependency bwa version 0.7.17 of type conda
Jun 18 17:05:11 nic5-login1 pulsar[2214545]: 2021-06-18 17:05:11,681 DEBUG [galaxy.tool_util.deps][[manager=_default_]-[action=preprocess]-[job=9]] Using dependency samtools version 1.6 of type conda
Jun 18 17:05:12 nic5-login1 pulsar[2214545]: 2021-06-18 17:05:12,100 DEBUG [pulsar.messaging.bind_amqp][[manager=_default_]-[action=monitor]] Publishing Pulsar state change with status running for job_id 9
Jun 18 17:05:12 nic5-login1 pulsar[2214545]: 2021-06-18 17:05:12,100 DEBUG [pulsar.client.amqp_exchange][[manager=_default_]-[action=monitor]] [publish:94671c20-d046-11eb-87d4-b42e99da3a3b] Begin publishing to key pulsar__status_update
Jun 18 17:05:12 nic5-login1 pulsar[2214545]: 2021-06-18 17:05:12,100 DEBUG [pulsar.client.amqp_exchange][[manager=_default_]-[action=monitor]] [publish:94671c20-d046-11eb-87d4-b42e99da3a3b] Have producer for publishing to key pulsar__status_update
Jun 18 17:05:12 nic5-login1 pulsar[2214545]: 2021-06-18 17:05:12,101 DEBUG [pulsar.client.amqp_exchange][[manager=_default_]-[action=monitor]] [publish:94671c20-d046-11eb-87d4-b42e99da3a3b] Published to key pulsar__status_update
Jun 18 17:05:13 nic5-login1 pulsar[2214545]: 2021-06-18 17:05:13,602 INFO [pulsar.client.staging.down][[manager=_default_]-[action=postprocess]-[job=9]] collecting output None with action FileAction[path=/data/000/dataset_9.dat,action_type=remote_transfer,url=https://galaxy.inbios.uliege.be/api/jobs/0a248a1f62a0cc04/files?job_key=99525bca5c821ddb&path=/data/000/dataset_9.dat&file_type=output]
Jun 18 17:05:13 nic5-login1 pulsar[2214545]: 2021-06-18 17:05:13,768 DEBUG [pulsar.messaging.bind_amqp][[manager=_default_]-[action=postprocess]-[job=9]] Publishing Pulsar state change with status complete for job_id 9
Jun 18 17:05:13 nic5-login1 pulsar[2214545]: 2021-06-18 17:05:13,769 DEBUG [pulsar.client.amqp_exchange][[manager=_default_]-[action=postprocess]-[job=9]] [publish:9565b74e-d046-11eb-9811-b42e99da3a3b] Begin publishing to key pulsar__status_update
Jun 18 17:05:13 nic5-login1 pulsar[2214545]: 2021-06-18 17:05:13,769 DEBUG [pulsar.client.amqp_exchange][[manager=_default_]-[action=postprocess]-[job=9]] [publish:9565b74e-d046-11eb-9811-b42e99da3a3b] Have producer for publishing to key pulsar__status_update
Jun 18 17:05:13 nic5-login1 pulsar[2214545]: 2021-06-18 17:05:13,769 DEBUG [pulsar.client.amqp_exchange][[manager=_default_]-[action=postprocess]-[job=9]] [publish:9565b74e-d046-11eb-9811-b42e99da3a3b] Published to key pulsar__status_update
```

and this is the output of stderr (a simple bwa test):

```
[M::mem_pestat] skip orientation FF as there are not enough pairs
[M::mem_pestat] skip orientation FR as there are not enough pairs
[M::mem_pestat] skip orientation RF as there are not enough pairs
[M::mem_pestat] skip orientation RR as there are not enough pairs
[main] Version: 0.7.17-r1188
[main] CMD: bwa mem -t 1 -v 1 /cvmfs/data.galaxyproject.org/managed/bwa_mem_index/eschColi_K12/eschColi_K12.fa /opt/pulsar/files/staging/9/inputs/dataset_1.dat /opt/pulsar/files/staging/9/inputs/dataset_2.dat
[main] Real time: 1.362 sec; CPU: 1.323 sec
```

I see no sbatch or job submission step in either.

Luc

------------
Luc Cornet, PhD
Bio-informatician
Mycology and Aerobiology
Sciensano

----- Original message -----
From: "Gianmauro Cuccuru" <gmauroatwork@gmail.com>
To: "Luc Cornet" <luc.cornet@uliege.be>, "Gianmauro Cuccuru" <gmauro@informatik.uni-freiburg.de>
Cc: "HelpGalaxy" <galaxy-dev@lists.galaxyproject.org>, "Colignon David" <David.Colignon@uliege.be>, "Baurain Denis" <Denis.Baurain@uliege.be>, "Pierre Becker" <Pierre.Becker@sciensano.be>
Sent: Friday, 18 June 2021 16:27:35
Subject: [galaxy-dev] Re: Running slurm job with pulsar
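One reading of the log above is that no manager ever invoked a scheduler at all, which is the behaviour you would expect if Pulsar fell back to its default local manager (queued_python in the Pulsar docs, which runs commands directly on the Pulsar host) because the managers section never made it into the app.yml it actually loads. A quick sanity check, assuming the galaxyproject.pulsar role rendered the config somewhere under /opt/pulsar (adjust the path to wherever app.yml really lives):

```
# Which working directory / config file does the systemd unit point Pulsar at?
systemctl cat pulsar

# Does the rendered config actually contain the slurm manager?
grep -n -A 3 '_default_' /opt/pulsar/config/app.yml
```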