Groovy tool for galaxy
by Anshul Tak
Hi,
I have a question regarding Galaxy. Can I execute Groovy scripts using
Galaxy? I installed Groovy and all its dependencies in the Galaxy Docker
image. However, when I try to execute a tool that requires the Groovy
engine, it
says "/export/galaxy-central/database/job_working_directory/000/2/tool_script.sh:
line 9: groovy: command not found"
So I understand from the error that Galaxy is not able to find Groovy,
but ideally it should pick up the Groovy engine since it is installed.
Can you give me some pointers to solve this issue?
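For reference, a quick check of whether groovy is visible on the job's
PATH (a sketch; run it inside the container as the galaxy user, i.e. the
user that executes tool_script.sh):

import os
import shutil
import subprocess

# shutil.which performs the same PATH lookup the shell does when
# tool_script.sh calls `groovy`.
groovy_path = shutil.which("groovy")
print("groovy on PATH:", groovy_path)
print("PATH seen by this process:", os.environ.get("PATH"))

if groovy_path:
    print(subprocess.check_output([groovy_path, "--version"]))
# If this prints None, groovy is likely installed in a location (e.g.
# /opt/groovy/bin, an example path, not from the original post) that is
# on root's PATH but not on the PATH the galaxy user's job scripts inherit.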
--
With Regards,
Anshul Tak
5 years, 5 months
DeepTools installation error
by Timo Janßen
Hi all,
trying to install the deeptools_suite from the Tool Shed, I only get the
following error:
"500 Internal Server Error
nginx/1.11.10"
This happens for the dependencies as well, e.g. deeptools_bigwig_compare.
I am currently using Galaxy 16.10 behind an nginx proxy.
Also the installation of the dependency
package_python_2_7_matplotlib_1_4 fails because of a missing dependency:
"Error installing tool dependency python version 2.7.10: Unable to
locate required tool shed repository named package_python_2_7_10 owned
by iuc with revision 80931fb5f433."
Has anyone seen these problems before and found a solution?
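A quick way to check whether the repository named in that error actually
exists on the main Tool Shed with an installable revision (a sketch using
BioBlend, assuming it is installed):

from bioblend.toolshed import ToolShedInstance

ts = ToolShedInstance("https://toolshed.g2.bx.psu.edu")
# Should list 80931fb5f433 if that revision of package_python_2_7_10
# (owner: iuc) is installable.
print(ts.repositories.get_ordered_installable_revisions(
    "package_python_2_7_10", "iuc"))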
Best regards,
Timo Janßen
--
------------------------------------------------------------------------
Timo Janßen
Wissenschaftliche Hilfskraft
Arbeitsgruppe "Anwendungs- und Informationssysteme"
Tel.: +49(0)551/201-1791
E-Mail: timo.janssen(a)gwdg.de
------------------------------------------------------------------------
Gesellschaft für wissenschaftliche Datenverarbeitung mbH Göttingen
(GWDG)
Am Faßberg 11, 37077 Göttingen, URL: http://www.gwdg.de
Tel.: +49 551 201-1510, Fax: +49 551 201-2150, E-Mail: gwdg(a)gwdg.de
Service-Hotline: Tel.: +49 551 201-1523, E-Mail: support(a)gwdg.de
Geschäftsführer: Prof. Dr. Ramin Yahyapour
Aufsichtsratsvorsitzender: Prof. Dr. Norbert Lassau
Sitz der Gesellschaft: Göttingen
Registergericht: Göttingen, Handelsregister-Nr. B 598
------------------------------------------------------------------------
Zertifiziert nach ISO 9001
------------------------------------------------------------------------
5 years, 7 months
Pulsar - running as real DRMAA user problems
by Poole, Richard
Hey John,
So I’ve been happily using Pulsar to send all my Galaxy server jobs to our cluster here at UCL for several months now (I love it!). I am now exploring the ‘run-as-real-user’ option for DRMAA submissions and have run into a problem. The files are correctly staged, correctly chowned, and successfully submitted to the queue, and the job runs. However, at job end, collection (?) fails with the following error message in Pulsar:
Exception happened during processing of request from ('*.*.*.*', 54321)
Traceback (most recent call last):
File "/opt/rocks/lib/python2.6/site-packages/Paste-2.0.1-py2.6.egg/paste/httpserver.py", line 1072, in process_request_in_thread
self.finish_request(request, client_address)
File "/opt/rocks/lib/python2.6/SocketServer.py", line 322, in finish_request
self.RequestHandlerClass(request, client_address, self)
File "/opt/rocks/lib/python2.6/SocketServer.py", line 617, in __init__
self.handle()
File "/opt/rocks/lib/python2.6/site-packages/Paste-2.0.1-py2.6.egg/paste/httpserver.py", line 446, in handle
BaseHTTPRequestHandler.handle(self)
File "/opt/rocks/lib/python2.6/BaseHTTPServer.py", line 329, in handle
self.handle_one_request()
File "/opt/rocks/lib/python2.6/site-packages/Paste-2.0.1-py2.6.egg/paste/httpserver.py", line 441, in handle_one_request
self.wsgi_execute()
File "/opt/rocks/lib/python2.6/site-packages/Paste-2.0.1-py2.6.egg/paste/httpserver.py", line 291, in wsgi_execute
self.wsgi_start_response)
File "/cluster/galaxy/pulsar/pulsar/web/framework.py", line 39, in __call__
return controller(environ, start_response, **request_args)
File "/cluster/galaxy/pulsar/pulsar/web/framework.py", line 144, in controller_replacement
result = self.__execute_request(func, args, req, environ)
File "/cluster/galaxy/pulsar/pulsar/web/framework.py", line 124, in __execute_request
result = func(**args)
File "/cluster/galaxy/pulsar/pulsar/web/routes.py", line 82, in status
return status_dict(manager, job_id)
File "/cluster/galaxy/pulsar/pulsar/manager_endpoint_util.py", line 12, in status_dict
job_status = manager.get_status(job_id)
File "/cluster/galaxy/pulsar/pulsar/managers/stateful.py", line 95, in get_status
proxy_status, state_change = self.__proxy_status(job_directory, job_id)
File "/cluster/galaxy/pulsar/pulsar/managers/stateful.py", line 115, in __proxy_status
proxy_status = self._proxied_manager.get_status(job_id)
File "/cluster/galaxy/pulsar/pulsar/managers/queued_external_drmaa_original.py", line 62, in get_status
external_status = super(ExternalDrmaaQueueManager, self)._get_status_external(external_id)
File "/cluster/galaxy/pulsar/pulsar/managers/base/base_drmaa.py", line 31, in _get_status_external
drmaa_state = self.drmaa_session.job_status(external_id)
File "/cluster/galaxy/pulsar/pulsar/managers/util/drmaa/__init__.py", line 50, in job_status
return self.session.jobStatus(str(external_job_id))
File "build/bdist.linux-x86_64/egg/drmaa/session.py", line 518, in jobStatus
c(drmaa_job_ps, jobId, byref(status))
File "build/bdist.linux-x86_64/egg/drmaa/helpers.py", line 299, in c
return f(*(args + (error_buffer, sizeof(error_buffer))))
File "build/bdist.linux-x86_64/egg/drmaa/errors.py", line 151, in error_check
raise _ERRORS[code - 1](error_string)
InvalidJobException: code 18: The job specified by the 'jobid' does not exist.
With this corresponding error from my Galaxy server:
galaxy.tools.actions INFO 2016-10-13 18:47:51,851 Handled output (279.421 ms)
galaxy.tools.actions INFO 2016-10-13 18:47:52,093 Verified access to datasets (5.271 ms)
galaxy.tools.execute DEBUG 2016-10-13 18:47:52,118 Tool [toolshed.g2.bx.psu.edu/repos/devteam/sam_to_bam/sam_to_bam/1.1.4] created job [25008] (560.404 ms)
galaxy.jobs DEBUG 2016-10-13 18:47:52,579 (25008) Working directory for job is: /Users/galaxy/galaxy-dist/database/job_working_directory/025/25008
galaxy.jobs.handler DEBUG 2016-10-13 18:47:52,591 (25008) Dispatching to pulsar runner
galaxy.jobs DEBUG 2016-10-13 18:47:52,677 (25008) Persisting job destination (destination id: hpc_low)
galaxy.jobs.runners DEBUG 2016-10-13 18:47:52,681 Job [25008] queued (90.231 ms)
galaxy.jobs.handler INFO 2016-10-13 18:47:52,699 (25008) Job dispatched
galaxy.tools.deps DEBUG 2016-10-13 18:47:53,138 Building dependency shell command for dependency 'samtools'
galaxy.jobs.runners.pulsar INFO 2016-10-13 18:47:53,233 Pulsar job submitted with job_id 25008
galaxy.jobs DEBUG 2016-10-13 18:47:53,257 (25008) Persisting job destination (destination id: hpc_low)
galaxy.datatypes.metadata DEBUG 2016-10-13 18:51:03,922 Cleaning up external metadata files
galaxy.jobs.runners.pulsar ERROR 2016-10-13 18:51:03,945 failure finishing job 25008
Traceback (most recent call last):
File "/Users/galaxy/galaxy-dist/lib/galaxy/jobs/runners/pulsar.py", line 386, in finish_job
run_results = client.full_status()
File "/Users/galaxy/galaxy-dist/lib/pulsar/client/client.py", line 132, in full_status
return self.raw_check_complete()
File "/Users/galaxy/galaxy-dist/lib/pulsar/client/decorators.py", line 28, in replacement
return func(*args, **kwargs)
File "/Users/galaxy/galaxy-dist/lib/pulsar/client/decorators.py", line 13, in replacement
response = func(*args, **kwargs)
File "/Users/galaxy/galaxy-dist/lib/pulsar/client/client.py", line 146, in raw_check_complete
check_complete_response = self._raw_execute("status", {"job_id": self.job_id})
File "/Users/galaxy/galaxy-dist/lib/pulsar/client/client.py", line 215, in _raw_execute
return self.job_manager_interface.execute(command, args, data, input_path, output_path)
File "/Users/galaxy/galaxy-dist/lib/pulsar/client/interface.py", line 96, in execute
response = self.transport.execute(url, method=method, data=data, input_path=input_path, output_path=output_path)
File "/Users/galaxy/galaxy-dist/lib/pulsar/client/transport/standard.py", line 34, in execute
response = self._url_open(request, data)
File "/Users/galaxy/galaxy-dist/lib/pulsar/client/transport/standard.py", line 20, in _url_open
return urlopen(request, data)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib2.py", line 154, in urlopen
return opener.open(url, data, timeout)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib2.py", line 437, in open
response = meth(req, response)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib2.py", line 550, in http_response
'http', request, response, code, msg, hdrs)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib2.py", line 475, in error
return self._call_chain(*args)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib2.py", line 409, in _call_chain
result = func(*args)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib2.py", line 558, in http_error_default
raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
HTTPError: HTTP Error 500: Internal Server Error
I am running Galaxy 15.10 and Python 2.7.10 on my iMac for the server; the cluster submission node is running Pulsar 0.5.0 and Python 2.7.12.
For these tests I run Pulsar in an interactive window, so I have not set up the sudoers file but rather enter the sudo password when requested by Pulsar (at the first step of chowning the staging directory). I also have rewrites set up in Galaxy’s pulsar_actions.yml, and I am using remote_scp for the file transfers rather than http. I have also tried switching back to http (I noticed that caching, which I am also testing, does not work with scp transfers) but I get an identical set of error messages.
As I say, I have no trouble using a regular queued_drmaa manager in Pulsar. Any ideas what the problem may be?
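For what it's worth, the failing call can be reproduced outside Pulsar (a
sketch, assuming the drmaa-python package and a configured
DRMAA_LIBRARY_PATH; the job id is hypothetical). With some schedulers,
jobStatus() raises InvalidJobException once a finished job has been reaped
from the queue, which matches the trace above and would explain why it
only bites at job end:

import drmaa
from drmaa.errors import InvalidJobException

def tolerant_status(session, external_id):
    # Mirrors what pulsar/managers/util/drmaa/__init__.py job_status()
    # does, but treats an unknown job id as finished instead of erroring.
    try:
        return session.jobStatus(str(external_id))
    except InvalidJobException:
        # code 18: the scheduler no longer knows this id.
        return drmaa.JobState.DONE

session = drmaa.Session()
session.initialize()
print(tolerant_status(session, "123456"))  # hypothetical external job id
session.exit()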
Cheers,
Rich
Richard J Poole PhD
Wellcome Trust Fellow
Department of Cell and Developmental Biology
University College London
518 Rockefeller
21 University Street, London WC1E 6DE
Office (518 Rockefeller): +44 20 7679 6577 (int. 46577)
Lab (529 Rockefeller): +44 20 7679 6133 (int. 46133)
https://www.ucl.ac.uk/cdb/academics/poole
5 years, 8 months
Running jobs as real user and extra_file_path
by Louise-Amélie Schmitt
Hi everyone,
I just wanted to ask how the extra_file_path is handled when jobs are
run as the real user, since the file_path is only writable by the
galaxy user. Any clue?
Thanks,
L-A
5 years, 9 months
Testing updated NCBI BLAST+ wrappers for version 2.5.0
by Peter Cock
Hello all,
I have updated the NCBI BLAST+ wrappers on the Test Tool Shed;
the wrapper is now at v0.2.00:
https://testtoolshed.g2.bx.psu.edu/view/devteam/ncbi_blast_plus/
The main change is that the wrappers now depend on BLAST+ 2.5.0, which is
available via either BioConda or the Tool Shed:
https://toolshed.g2.bx.psu.edu/view/iuc/package_blast_plus_2_5_0/
https://testtoolshed.g2.bx.psu.edu/view/iuc/package_blast_plus_2_5_0/
In order for the dependency to work smoothly on both BioConda
and the Tool Shed system, we have changed the package name
from "blast+" to just "blast". Given that the NCBI stopped updating the
original "legacy" BLAST some time ago, the name combined with the
version number is no longer ambiguous.
Jumping from BLAST+ 2.2.31 to BLAST+ 2.5.0 required updating
many of the test files for NCBI changes, including dropping the
GI numbers from many outputs, expanding the percentage identity
field from 2 dp to 3 dp, and changing how -parse_deflines
works with tabular output.
The wrappers (deliberately) do not yet offer any new functionality
added in the recent NCBI BLAST+ updates; in particular, BLAST
XML v2 is not yet available as an output with a datatype in Galaxy.
At this point I would welcome feedback from those of you using the
BLAST+ wrappers, including whether you were able to install this with the
dependencies from BioConda or from the traditional Tool Shed packages.
Once I'm confident that this is all OK, I will update the main Tool Shed
(and think about adding new functionality in 2017).
Thank you all,
Peter
5 years, 9 months
Galaxy Reports Webapp Login
by Timo Janßen
Hi,
is it possible to implement some kind of login for the usage reports
webapp, so that only admins can see the website? In our current setup
the webapp is running on a server with many other users, so anyone who
knows the port can open the website and see potentially sensitive data.
Best regards,
Timo
--
Timo Janßen
Gesellschaft für wissenschaftliche Datenverarbeitung mbH Göttingen (GWDG)
5 years, 9 months
Fwd: [Bosc] BOSC 2017: Call for Abstracts
by Peter Cock
Dear all,
Recent BOSC meetings have had a strong Galaxy presence, so please do
consider submitting an abstract and/or attending BOSC 2017 in Prague.
Thank you,
Peter
Dr. Peter Cock,
Bioinformatician at The James Hutton Institute;
Open Bioinformatics Foundation, board of directors, treasurer;
BOSC 2017 organising committee member.
---------- Forwarded message ----------
From: Nomi Harris <nlharris(a)gmail.com>
Date: Mon, Mar 6, 2017 at 6:50 PM
Subject: [Bosc] BOSC 2017: Call for Abstracts
To: bosc-announce(a)mailman.open-bio.org, bosc-reviewers(a)open-bio.org
Cc: Nomi Harris <nlharris(a)gmail.com>, BOSC 2017 Organizing Committee <
bosc(a)open-bio.org>
*Call for Abstracts for the 18th Annual Bioinformatics Open Source
Conference (BOSC 2017)*
An ISMB/ECCB Community of Special Interest (COSI)
Dates: July 22-23, 2017
Location: Prague, Czech Republic
Web site: http://www.open-bio.org/wiki/BOSC_2017
Email: bosc(a)open-bio.org
BOSC announcements mailing list:
http://lists.open-bio.org/mailman/listinfo/bosc-announce
Twitter: @OBF_BOSC
*Important Dates*
- Call for one-page abstracts opens: March 6, 2017
<https://www.open-bio.org/wiki/BOSC_Abstract_Submission>
- Abstract submission deadline: April 13, 2017
<https://www.open-bio.org/wiki/BOSC_Abstract_Submission>
- Travel fellowship application deadline: April 15, 2017
<https://github.com/OBF/obf-docs/blob/master/Travel_fellowships.md>
- Authors notified: May 10, 2017
- Codefest 2017: July 21-22, Prague
<https://www.open-bio.org/wiki/Codefest_2017>
- BOSC 2017: July 22-23, Prague (days 1 and 2 of ISMB/ECCB)
<https://www.open-bio.org/wiki/BOSC_2017>
- ISMB/ECCB 2017: July 21-25, Prague
<https://www.iscb.org/ismbeccb2017>
*About BOSC*
Since 2000, the yearly Bioinformatics Open Source Conference (BOSC) has
provided a forum for developers and users to interact and share research
results and ideas in open source bioinformatics. BOSC’s broad spectrum of
topics includes practical techniques for solving bioinformatics problems;
software development practices; standards and ontologies; approaches that
promote open science and sharing of data, results and software; and ways to
grow open source communities while promoting diversity within them.
In the past, BOSC has taken place the two days before ISMB as a Special
Interest Group (SIG). This year, ISMB is trying a new structure: the SIGs
(now called COSIs) are integrated into the main ISMB meeting. BOSC will
take place the first two full days of ISMB (July 22-23). Attendees will
have the option to register for the full ISMB/ECCB meeting (July 21-25) or
for just two days (there is no single-day registration option this year). A
limited number of partial travel fellowships will be granted to some
accepted speakers who would not otherwise be able to attend BOSC--please
see https://github.com/OBF/obf-docs/blob/master/Travel_fellowships.md for
more information.
We encourage you to submit one-page abstracts on any topic of relevance to
open source bioinformatics and open science. After review, some abstracts
will be selected for lightning talks, longer talks, and/or posters.
Abstract submission instructions and a link to the EasyChair portal can be
found on https://www.open-bio.org/wiki/BOSC_Abstract_Submission
*Session topics include:*
- Open Science and Reproducible Research
- Open Biomedical Data
- Citizen/Participatory Science
- Standards and Interoperability
- Data Science
- Workflows
- Visualization
- Medical and Translational Bioinformatics
- Developer Tools and Libraries
- Bioinformatics Open Source Project Progress Reports
*Sponsorship*
We gratefully accept sponsorships from relevant private companies. These
sponsorships enable us to offer free registration to some BOSC speakers to
help increase diversity at our meeting. Sponsors in 2016 included
Curoverse, the company behind the open source platform Arvados. Please
contact us if you are interested in being a sponsor of BOSC 2017!
Thank you,
BOSC 2017 Organizing Committee: Nomi Harris (chair), Brad Chapman,
Peter Cock, Christopher Fields, Bastian Greshake, Karsten Hokamp, Hilmar
Lapp, Mónica Muñoz-Torres, Heather Wiencko
P.S. Don't forget to submit your BOSC abstract by April 13 at
https://www.open-bio.org/wiki/BOSC_Abstract_Submission!
5 years, 10 months
Login message
by Timo Janßen
Hi,
is it possible to change the message that is shown on the login page? At
the moment it says "This Galaxy instance has been configured such that
only users who are logged in may use it. If you don't already have an
account, you may create one."
I looked for the file where the text is defined but couldn't find it.
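A brute-force way to look for the file (a sketch, assuming a Galaxy
checkout in ./galaxy; the message may live in a Mako template or in
Python code):

from pathlib import Path

needle = "only users who are logged in may use it"
for path in Path("galaxy").rglob("*"):
    # Search the source tree for the literal login-page text.
    if path.is_file() and path.suffix in {".py", ".mako", ".js", ".html"}:
        if needle in path.read_text(errors="ignore"):
            print(path)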
Best regards,
Timo
--
Timo Janßen
Gesellschaft für wissenschaftliche Datenverarbeitung mbH Göttingen (GWDG)
5 years, 10 months
SLURM configuration problem
by Leonor Palmeira
Dear all,
we have set up a Galaxy instance on a virtual machine, and we want to be
able to submit jobs to our HPC system (SLURM).
Currently, we do not understand how to configure Galaxy so that jobs are
sent to the HPC cluster.
We have set:
export $DRMAA_LIBRARY_PATH=/var/lib/libdrmaa.so
This is our config/job_conf.xml:
<?xml version="1.0"?>
<!-- A sample job config that explicitly configures job running the way
     it is configured by default (if there is no explicit config). -->
<job_conf>
    <plugins>
        <plugin id="drmaa" type="runner"
                load="galaxy.jobs.runners.drmaa:DRMAAJobRunner" />
    </plugins>
    <handlers default="handlers">
        <handler id="handler0" tags="handlers" />
        <handler id="main" />
    </handlers>
    <destinations default="slurm">
        <destination id="slurm" runner="drmaa">
            <param id="nativeSpecification">-P all_5hrs</param>
        </destination>
    </destinations>
</job_conf>
And the output of "sh run.sh":
galaxy.jobs.manager DEBUG 2017-02-07 15:50:39,962 Starting job handler
galaxy.jobs INFO 2017-02-07 15:50:39,962 Handler 'main' will load all
configured runner plugins
galaxy.jobs.runners.state_handler_factory DEBUG 2017-02-07 15:50:39,971
Loaded 'failure' state handler from module
galaxy.jobs.runners.state_handlers.resubmit
pulsar.managers.util.drmaa DEBUG 2017-02-07 15:50:39,975 Initializing
DRMAA session from thread MainThread
Traceback (most recent call last):
File
"/home/mass/GAL/APP/galaxy/lib/galaxy/webapps/galaxy/buildapp.py", line
55, in paste_app_factory
app = galaxy.app.UniverseApplication( global_conf=global_conf,
**kwargs )
File "/home/mass/GAL/APP/galaxy/lib/galaxy/app.py", line 170, in __init__
self.job_manager = manager.JobManager( self )
File "/home/mass/GAL/APP/galaxy/lib/galaxy/jobs/manager.py", line 23,
in __init__
self.job_handler = handler.JobHandler( app )
File "/home/mass/GAL/APP/galaxy/lib/galaxy/jobs/handler.py", line 32,
in __init__
self.dispatcher = DefaultJobDispatcher( app )
File "/home/mass/GAL/APP/galaxy/lib/galaxy/jobs/handler.py", line 723,
in __init__
self.job_runners = self.app.job_config.get_job_runner_plugins(
self.app.config.server_name )
File "/home/mass/GAL/APP/galaxy/lib/galaxy/jobs/__init__.py", line
687, in get_job_runner_plugins
rval[id] = runner_class( self.app, runner[ 'workers' ],
**runner.get( 'kwds', {} ) )
File "/home/mass/GAL/APP/galaxy/lib/galaxy/jobs/runners/drmaa.py",
line 88, in __init__
self.ds = DrmaaSessionFactory().get()
File
"/usr/local/lib/python2.7/dist-packages/pulsar/managers/util/drmaa/__init__.py",
line 31, in get
return DrmaaSession(session_constructor, **kwds)
File
"/usr/local/lib/python2.7/dist-packages/pulsar/managers/util/drmaa/__init__.py",
line 49, in __init__
DrmaaSession.session.initialize()
File "/usr/local/lib/python2.7/dist-packages/drmaa/session.py", line
257, in initialize
py_drmaa_init(contactString)
File "/usr/local/lib/python2.7/dist-packages/drmaa/wrappers.py", line
73, in py_drmaa_init
return _lib.drmaa_init(contact, error_buffer, sizeof(error_buffer))
File "/usr/local/lib/python2.7/dist-packages/drmaa/errors.py", line
151, in error_check
raise _ERRORS[code - 1](error_string)
InternalException: code 1: cell directory
"/usr/lib/gridengine-drmaa/default" doesn't exist
Could anyone point us in the right direction?
This would be greatly appreciated.
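For debugging, a standalone check of which DRMAA implementation the drmaa
Python package actually loads can help (a sketch; the slurm-drmaa path
below is an assumed example). Note that "export $DRMAA_LIBRARY_PATH=..."
expands the (empty) variable instead of setting it (the intended form has
no "$" on the left-hand side), and the "cell directory" error suggests a
Grid Engine libdrmaa is being picked up rather than SLURM's:

import os
# Must be set before `import drmaa`; the path is an assumed example for
# a slurm-drmaa installation.
os.environ["DRMAA_LIBRARY_PATH"] = "/usr/lib/slurm-drmaa/lib/libdrmaa.so"

import drmaa

session = drmaa.Session()
session.initialize()
# Should report a SLURM implementation, not Grid Engine.
print("DRMAA implementation:", session.drmaaImplementation)
session.exit()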
Best regards
Leonor
--
Leonor Palmeira | PhD
Associate Scientist
Department of Human Genetics
CHU de Liège | Domaine Universitaire du Sart-Tilman
4000 Liège | BELGIQUE
Tél: +32-4-366.91.41
Fax: +32-4-366.72.61
e-mail: lpalmeira(a)chu.ulg.ac.be
5 years, 10 months
LDAP group data synchronization
by Matthias Bernt
Dear galaxy developers,
I'm currently trying to establish galaxy at the UFZ research center in
Leipzig. I have some questions on how to match the infrastructure of
galaxy and and our local infrastructure. I would be really happy about
some comments on my ideas.
Galaxy external authentication via LDAP is already working -- worked
like a charm :)
How can I integrate LDAP group data into Galaxy?
My idea would be to regularly sync the groups and user-group
associations from LDAP to Galaxy via the API/BioBlend.
Adding groups and user-group associations should be no problem, but
what about removing users, groups, or associations? Are there any
pitfalls that I need to consider? What about data associated with deleted
users? Maybe there are already some scripts around that I could build upon?
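Something along these lines with BioBlend is what I have in mind (a rough
sketch, assuming an admin API key; the URL, the key, and the ldap_groups
mapping are placeholders):

from bioblend.galaxy import GalaxyInstance

gi = GalaxyInstance("https://galaxy.example.org", key="ADMIN_API_KEY")

def sync_groups(ldap_groups):
    # ldap_groups: {"group name": set of member email addresses},
    # produced by whatever queries the LDAP server.
    users_by_email = {u["email"]: u["id"] for u in gi.users.get_users()}
    groups_by_name = {g["name"]: g["id"] for g in gi.groups.get_groups()}
    for name, emails in ldap_groups.items():
        # Only users that already exist in Galaxy can be associated.
        member_ids = [users_by_email[e] for e in emails if e in users_by_email]
        if name in groups_by_name:
            # update_group replaces the membership list, so associations
            # dropped in LDAP should disappear on the next sync run.
            gi.groups.update_group(groups_by_name[name], user_ids=member_ids)
        else:
            gi.groups.create_group(name, user_ids=member_ids)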
Best,
Matthias
P.S. I'm afraid that I am asking quite a lot of questions at the moment.
I hope that I will soon be able to contribute a bit.
--
-------------------------------------------
Matthias Bernt
Bioinformatics Service
Molekulare Systembiologie (MOLSYB)
Helmholtz-Zentrum für Umweltforschung GmbH - UFZ/
Helmholtz Centre for Environmental Research GmbH - UFZ
Permoserstraße 15, 04318 Leipzig, Germany
Phone +49 341 235 482296,
m.bernt(a)ufz.de, www.ufz.de
Sitz der Gesellschaft/Registered Office: Leipzig
Registergericht/Registration Office: Amtsgericht Leipzig
Handelsregister Nr./Trade Register Nr.: B 4703
Vorsitzender des Aufsichtsrats/Chairman of the Supervisory Board:
MinDirig Wilfried Kraus
Wissenschaftlicher Geschäftsführer/Scientific Managing Director:
Prof. Dr. Dr. h.c. Georg Teutsch
Administrative Geschäftsführerin/ Administrative Managing Director:
Prof. Dr. Heike Graßmann
-------------------------------------------
5 years, 10 months