Hi Chris,
Thanks for looking at this. We have spun up an Ubuntu 14.04 VM for this purpose, with a lustre filesystem mounted for persistent data. Our SLURM cluster is already in place, so I have removed most of what the galaxy-extras slurm task does from the playbook; I believe the only steps left are a couple at the bottom of the task that create a job_conf.xml file. I have also changed variables such as galaxy_user_name, gid, uid, galaxy_loc (on lustre), etc., along the lines of the sketch below. The playbook runs to completion without error, and we have confirmed that the ‘galaxyuser’ account can submit jobs to the scheduler directly, but we have had no luck getting jobs to run from the Galaxy instance.
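For concreteness, the overrides are roughly of this shape (a sketch with illustrative values, not our exact settings; galaxy_user_name and galaxy_loc are names from the playbook, while the uid/gid variable names here are my shorthand):

    # group_vars overrides (illustrative values, not our exact settings)
    galaxy_user_name: galaxyuser
    galaxy_user_uid: 10001    # example value; matched to the existing UID on our cluster
    galaxy_user_gid: 10001    # example value; matched to the existing GID on our cluster
    galaxy_loc: /home/exacloud/lustre1/galaxydev/galaxyuser    # Galaxy root on lustre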
I’m happy to give you any details you need; I feel like I am so close to getting this working…
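For example, the job_conf.xml that those remaining steps generate should look roughly like this (a minimal sketch assuming Galaxy’s stock SLURM runner; the destination parameter is a placeholder, not our real value):

    <?xml version="1.0"?>
    <job_conf>
        <plugins>
            <!-- Galaxy's stock DRMAA-based SLURM runner -->
            <plugin id="slurm" type="runner" load="galaxy.jobs.runners.slurm:SlurmJobRunner"/>
        </plugins>
        <handlers>
            <handler id="handler0"/>
        </handlers>
        <destinations default="slurm_default">
            <destination id="slurm_default" runner="slurm">
                <!-- passed through to sbatch; placeholder value -->
                <param id="nativeSpecification">--ntasks=1</param>
            </destination>
        </destinations>
    </job_conf>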
Thanks,
John
From: galaxy-dev <galaxy-dev-bounces@lists.galaxyproject.org> on behalf of Christophe Antoniewski <drosofff@gmail.com>
Date: Tuesday, November 7, 2017 at 9:25 AM
To: "galaxy-dev@lists.galaxyproject.org" <galaxy-dev@lists.galaxyproject.org>
Subject: [galaxy-dev] Fwd: Job Script Integrity with GalaxyKickStart (galaxy-dev Digest, Vol 137, Issue 5)
Hi John,
Can you also open an issue at https://github.com/ARTbio/GalaxyKickStart/issues?
In order to help, I will need to know your GalaxyKickStart configuration (the variables you modified in the playbook, group_vars, and inventory_files).
Did you use the cloud_setup role? If so, Enis Afgan (https://github.com/afgane) may be able to help.
Best regards
Chris
Christophe Antoniewski
Institut de Biologie Paris Seine
9, Quai St Bernard, Boîte courrier 24
75252 Paris Cedex 05
ARTbio, Building B, 7th floor, door 725
Tel +33 1 44 27 70 05
Mobile +33 6 68 60 51 50
To reach the platform: Building B, 7th floor, door 725
From: "John Letaw" <letaw@ohsu.edu>
Date: Tue, 7 Nov 2017 03:20:49 +0000
To: "galaxy-dev@lists.galaxyproject.org" <galaxy-dev@lists.galaxyproject.org>
Subject: [galaxy-dev] Job Script Integrity
Hi all,
I’m installing via GalaxyKickStart… and I’m getting the following error:
galaxy.jobs.runners ERROR 2017-11-06 19:14:05,263 (19) Failure preparing job
Traceback (most recent call last):
File "/home/exacloud/lustre1/galaxydev/galaxyuser/lib/galaxy/jobs/runners/__init__.py", line 175, in prepare_job
modify_command_for_container=modify_command_for_container
File "/home/exacloud/lustre1/galaxydev/galaxyuser/lib/galaxy/jobs/runners/__init__.py", line 209, in build_command_line
container=container
File "/home/exacloud/lustre1/galaxydev/galaxyuser/lib/galaxy/jobs/command_factory.py", line 84, in build_command
externalized_commands = __externalize_commands(job_wrapper, external_command_shell, commands_builder, remote_command_params)
File "/home/exacloud/lustre1/galaxydev/galaxyuser/lib/galaxy/jobs/command_factory.py", line 143, in __externalize_commands
write_script(local_container_script, script_contents, config)
File "/home/exacloud/lustre1/galaxydev/galaxyuser/lib/galaxy/jobs/runners/util/job_script/__init__.py", line 112, in write_script
_handle_script_integrity(path, config)
File "/home/exacloud/lustre1/galaxydev/galaxyuser/lib/galaxy/jobs/runners/util/job_script/__init__.py", line 147, in _handle_script_integrity
raise Exception("Failed to write job script, could not verify job script integrity.")
Exception: Failed to write job script, could not verify job script integrity.
galaxy.model.metadata DEBUG 2017-11-06 19:14:05,541 Cleaning up external metadata files
galaxy.model.metadata DEBUG 2017-11-06 19:14:05,576 Failed to cleanup MetadataTempFile temp files from /home/exacloud/lustre1/galaxydev/galaxyuser/database/jobs/000/19/metadata_out_HistoryDatasetAssociation_16_I8bhLX: No JSON object could be decoded
I would like to better understand what it means to fail to verify the integrity of a job script. Does this just mean there is a permissions error, or that ownership doesn’t match up?
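From a quick read of the code, the check appears to work roughly like this (a paraphrased sketch of _handle_script_integrity in lib/galaxy/jobs/runners/util/job_script/__init__.py, not the exact source; the default values are my assumptions and details may differ by release):

    import subprocess
    import time

    def handle_script_integrity(path, config):
        # Deployments can disable the check entirely via the
        # check_job_script_integrity setting in Galaxy's config.
        if not getattr(config, "check_job_script_integrity", True):
            return
        count = getattr(config, "check_job_script_integrity_count", 35)        # assumed default
        sleep_amt = getattr(config, "check_job_script_integrity_sleep", 0.25)  # assumed default
        for _ in range(count):
            try:
                # The job script template begins with an injected block that
                # exits with a sentinel code when this variable is set, so a
                # successful run proves the file is complete and executable.
                returncode = subprocess.call(
                    [path], env={"ABC_TEST_JOB_SCRIPT_INTEGRITY_XYZ": "1"})
                if returncode == 42:
                    return  # integrity verified
            except OSError:
                pass  # e.g. "text file busy" while the filesystem catches up
            time.sleep(sleep_amt)
        raise Exception("Failed to write job script, could not verify job script integrity.")

If that reading is right, the failure is not necessarily about ownership: anything that keeps the freshly written script from executing and returning the sentinel exit code within the retry window (permissions, a noexec mount option, or lustre attribute-caching lag) would raise this same exception.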
Thanks,
John