Galaxy Administrator Training, Jan 28 - Feb 1, Penn State University
by Dave Clements
Hello all,
We are pleased to announce that *the 2019 Galaxy Admin Training
<https://galaxyproject.org/events/2019-admin-training/> will be offered
January 28 through February 1 at Penn State University* (where the Galaxy
Project got started over ten years ago). The workshop offers a 2-day
introductory session
<https://github.com/galaxyproject/dagobah-training#basic-sessions> followed
by a 3-day advanced topics session
<https://github.com/galaxyproject/dagobah-training#advanced-sessions>.
Participants can register for one or both sessions.
This workshop will cover what you need to know to set up your own
high-performance and multi-user production Galaxy instance. Sessions will
be *intensive and hands-on*, and taught by experienced instructors from the
Galaxy Community. Participants will learn how to install, configure,
customize, and extend their own Galaxy servers. Topics include tool
configuration, authentication and user management, using heterogeneous
storage and compute services, and many other topics that will enable you to
get your own Galaxy server up and running, performing well, and used by
your community.
Registration
<https://galaxyproject.org/events/2019-admin-training/#registration> is now
open and starts at $40 / day for participants from non-profits and
academia. Early registration ends October 31. Note, however, that the 2016
and 2018 admin trainings were both full, so you are strongly encouraged to
*register now*.
*The workshop does have prerequisites
<https://galaxyproject.org/events/2019-admin-training/#prerequisites>.
Please read them before you register.*
Cheers,
Dave C.
*About Galaxy*
Galaxy is an open, web-based platform for biomedical data integration and
analysis. It is deployed at large and small organizations around the world
and used in a broad range of biomedical research domains, and across the
tree of life.
--
https://galaxyproject.org/
https://usegalaxy.org/
Galaxy Administrator Training, Jan 28 - Feb 1, Penn State University
by Mo Heydarian
Hello all,
We would like to remind everyone that *the 2019 Galaxy Admin Training
<https://galaxyproject.org/events/2019-admin-training/> will be offered
January 28 through February 1 at Penn State University* (where the Galaxy
Project got started over ten years ago). The workshop offers a 2-day
introductory session
<https://github.com/galaxyproject/dagobah-training#basic-sessions> followed
by a 3-day advanced topics session
<https://github.com/galaxyproject/dagobah-training#advanced-sessions>.
Participants can register for one or both sessions.
This workshop will cover what you need to know to set up your own
high-performance and multi-user production Galaxy instance. Sessions will
be *intensive and hands-on*, and taught by experienced instructors from the
Galaxy Community. Participants will learn how to install, configure,
customize, and extend their own Galaxy servers. Topics include tool
configuration, authentication and user management, using heterogeneous
storage and compute services, and many other topics that will enable you to
get your own Galaxy server up and running, performing well, and used by
your community.
Registration
<https://galaxyproject.org/events/2019-admin-training/#registration> is now
open and starts at $40 / day for participants from non-profits and
academia. *Early registration ends TODAY, October 31; register now before
rates increase.* Moreover, the 2016 and 2018 admin trainings were both full,
so you are strongly encouraged to *register now*.
*The workshop does have prerequisites
<https://galaxyproject.org/events/2019-admin-training/#prerequisites>.
Please read them before you register.*
Cheers,
Mo Heydarian
*About Galaxy*
Galaxy is an open, web-based platform for biomedical data integration and
analysis. It is deployed at large and small organizations around the world
and used in a broad range of biomedical research domains, and across the
tree of life.
Problem with cleanup.py
by George Weingart
Hello,
We are running into a problem with cleanup.py:
We have already run two full cycles of cleanup.py (steps 1-6) with a cutoff
of 30 days. The logs show that many datasets are deleted, but when we check
the file system with df -kh, we see space utilization creeping up - no
space is ever relinquished.
We are running in a docker environment and our cleanup job looks as follows:
================================================================
#!/bin/bash
# Activate Galaxy's virtualenv so cleanup_datasets.py runs with Galaxy's Python
export GALAXY_VIRTUAL_ENV=/galaxy_venv
source $GALAXY_VIRTUAL_ENV/bin/activate
export script="/galaxy-central/scripts/cleanup_datasets/cleanup_datasets.py"
export config="/etc/galaxy/galaxy.ini"
export logdir="/export/galaxy-central/logs"
export days=30   # age cutoff in days, passed via -d
export num=$1    # which cleanup step to run (1-6), passed on the command line
# -$num selects the cleanup action; -r removes files from disk; output goes to a per-step log
nohup $script $config -d $days -$num -r -f >> $logdir/cleanup$num.log &
================================================================
and we run it with ./run_cleanup.sh 1
or 2, 3 etc.
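For example, for step 3 the wrapper above expands to roughly this command
(just the variable substitution, nothing extra):
nohup /galaxy-central/scripts/cleanup_datasets/cleanup_datasets.py /etc/galaxy/galaxy.ini -d 30 -3 -r -f >> /export/galaxy-central/logs/cleanup3.log &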
Any ideas?
Thanks!
George Weingart PhD
Huttenhower Lab
Harvard School of Public Health
Galaxy 18.09 Release
by Dannon Baker
We are pleased to announce the *release of Galaxy 18.09
<https://docs.galaxyproject.org/en/release_18.09/releases/18.09_announce.html>*.
A few release highlights are:
Extensive Workflow Enhancements
Workflows got a lot of love this time around, with new runtime parameters
for subworkflows, exposed workflow versions, and zoom capability in the
editor, to name a few. There were also a number of usability enhancements
including better labeling, links, overhauled workflow import interfaces,
and many more.
Group Tags
Galaxy now contains powerful new features for multiple factor analysis of
collections of datasets. The concept of group tags has been added to
Galaxy. These are a special class of tags that describe key-value pairs
that can be attached to the contents of a collection during upload or using
collection operation tools. These tags can describe multiple sets of
variables for the contents of a collection. Once set, these tags can be
consumed intelligently by tools that need to divide collections into
multiple overlapping factors or sets of datasets. A special thanks to
@mvdbeek for devising and implementing this approach.
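As an illustration (the factor and value names here are just examples),
datasets in a collection might carry group tags like:
group:condition:treated
group:condition:control
Tools that understand group tags can then split or match the collection's
contents by the 'condition' factor.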
Python 3 Beta Support
After almost 3 years of work and more than 100 pull requests, we are proud
to announce Beta-stage support for running Galaxy under Python 3. Lint,
unit, API, framework, integration, and Selenium tests all pass - it's time
for you to give it a try and report any bugs you find!
Please see the full release notes
<https://docs.galaxyproject.org/en/release_18.09/releases/18.09_announce.html>
for more information, including how to upgrade today!
*Thanks for using Galaxy!*
regarding galaxy kickstart
by Rui Wang
Hi Folks,
I spent some time playing with the kickstart, and sort of got it working.
A couple of things are still not clear:
1. Is it true that there is no start/stop? My guess is that whenever you
need to restart, you just re-run the playbook? And for anything you want to
update (for example, adding new tools), you re-run the updated playbook as
well, correct? Before you re-run it, do you have to manually kill
everything? (See the re-run sketch below.)
2. It seems tools can easily be added by editing the tool list. How about
genome data? What if I want to add a specific repo or genome data? Is there
a role for this purpose?
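For reference, the way I have been re-running it is roughly the following
(a rough sketch; substitute your own inventory file, and the playbook name
may differ in your checkout):
cd GalaxyKickStart
ansible-playbook -i inventory_files/<your_inventory> galaxy.yml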
Please give me a hand if you could. Any input will be greatly appreciated.
:-)
Thanks so much,
Rui
Running Cloudlaunch against a new OpenStack cloud
by Peter van Heusden
Hi there
I'd like to set up cloudlaunch so that we can launch on-demand Galaxy
servers and associated clusters on the new Ilifu OpenStack cloud. From
looking around the repositories it looks like Cloudlaunch consists of at
least 2 components:
1. The cloudlaunch server
2. The UI - either cloudlaunch-ui or cloudlaunch-cli
I'm trying to get the cloudlaunch server installed on an Ubuntu 18.04 VM.
The only requirement seems to be python3-pip. For postgresql support I
installed the python3-psycopg2 module and the postgresql-10 server, and
created a user and db (both named cloud). Then I installed the server with
`pip3 install cloudlaunch-server` and ran these commands:
cloudlaunch-server django migrate
cloudlaunch-server django createsuperuser
cloudlaunch-server django runserver
The next step (https://github.com/galaxyproject/cloudlaunch) refers to
redis-server. Installing this package seemed to satisfy that requirement;
running it manually didn't work, so I just ran the celery command as
illustrated and finally relaunched the django server.
Then I installed nodejs and npm, cloned cloudlaunch-ui from github,
installed the various npm packages specified, and finally npm start brings
up the cloudlaunch user interface.
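In summary, the sequence I followed was roughly the following (reconstructed
from memory, so treat it as a sketch rather than exact commands):
sudo apt install python3-pip python3-psycopg2 postgresql-10 redis-server
sudo -u postgres createuser cloud
sudo -u postgres createdb -O cloud cloud
pip3 install cloudlaunch-server
cloudlaunch-server django migrate
cloudlaunch-server django createsuperuser
cloudlaunch-server django runserver
# plus the celery command as illustrated in the cloudlaunch README
sudo apt install nodejs npm
git clone https://github.com/galaxyproject/cloudlaunch-ui.git
cd cloudlaunch-ui && npm install && npm start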
So far so good, but:
1. Is there more extensive documentation anywhere? Specifically
documentation about running this in production.
2. Is there a docker container that can be used instead of all this manual
installation?
3. How are appliances configured and added to the Appliance Catalogue?
Thanks!
Peter
Get Data and 'xlsx'
by D K
It looks like uploading 'xlsx' data through 'Get Data' is not working. When
I try this on my local Galaxy (v18.05) and on usegalaxy.org, it gets
automatically converted to 'BLAST xml' when using Automatic Detection. If I
select 'xlsx' during upload, I get the error message: (Warning: The file
'Type' was set to 'xlsx' but the file does not appear to be of that type).
Does anyone have any suggestions?
error and question from using galaxy kickstart
by Rui Wang
Hi Folks,
So I managed to run through the ansible playbook. What I did was remove
certain steps - for example, I don't need slurm-drmaa and docker, and I
skipped the apt cache update. After these minor changes it finished
successfully(?), with an error message that it ignored. Then I tried to
access the UI, but nothing worked. I pasted the error below for your
reference. If anyone has seen this before, please help. :-)
Sorry for the beginner question: once the ansible playbook has run
through, it seems galaxy, postgresql, etc. were started. How do I
start/stop them manually? Do I have to run the playbook every time I want
to run galaxy, or only after I modify the playbook? Also, if I hadn't
looked at the log, I wouldn't even know that the UI is at 127.0.0.1:4001.
Is there any documentation for this?
Thanks,
Rui
error message:
galaxy.web.stack INFO 2018-10-12 19:41:53,874 [p:102477,w:1,m:0]
[MainThread] Galaxy server instance 'main.web.1' is running
Starting server in PID 101567.
serving on uwsgi://127.0.0.1:4001
galaxy.jobs.handler ERROR 2018-10-12 19:42:48,487 [p:102477,w:1,m:0]
[JobHandlerQueue.monitor_thread] Exception in monitor_step
Traceback (most recent call last):
File "lib/galaxy/jobs/handler.py", line 213, in __monitor
self.__monitor_step()
File "lib/galaxy/jobs/handler.py", line 272, in __monitor_step
.order_by(model.Job.id).all()
File
"/media/libraryfiles/bioinfoadmin/bioinfoadmin/.venv/local/lib/python2.7/site-packages/sqlalchemy/orm/query.py",
line 2737, in all
return list(self)
File
"/media/libraryfiles/bioinfoadmin/bioinfoadmin/.venv/local/lib/python2.7/site-packages/sqlalchemy/orm/query.py",
line 2889, in __iter__
return self._execute_and_instances(context)
File
"/media/libraryfiles/bioinfoadmin/bioinfoadmin/.venv/local/lib/python2.7/site-packages/sqlalchemy/orm/query.py",
line 2912, in _execute_and_instances
result = conn.execute(querycontext.statement, self._params)
File
"/media/libraryfiles/bioinfoadmin/bioinfoadmin/.venv/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py",
line 948, in execute
return meth(self, multiparams, params)
File
"/media/libraryfiles/bioinfoadmin/bioinfoadmin/.venv/local/lib/python2.7/site-packages/sqlalchemy/sql/elements.py",
line 269, in _execute_on_connection
return connection._execute_clauseelement(self, multiparams, params)
File
"/media/libraryfiles/bioinfoadmin/bioinfoadmin/.venv/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py",
line 1060, in _execute_clauseelement
compiled_sql, distilled_params
File
"/media/libraryfiles/bioinfoadmin/bioinfoadmin/.venv/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py",
line 1200, in _execute_context
context)
File
"/media/libraryfiles/bioinfoadmin/bioinfoadmin/.venv/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py",
line 1413, in _handle_dbapi_exception
exc_info
File
"/media/libraryfiles/bioinfoadmin/bioinfoadmin/.venv/local/lib/python2.7/site-packages/sqlalchemy/util/compat.py",
line 203, in raise_from_cause
reraise(type(exception), exception, tb=exc_tb, cause=cause)
File
"/media/libraryfiles/bioinfoadmin/bioinfoadmin/.venv/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py",
line 1193, in _execute_context
context)
File
"/media/libraryfiles/bioinfoadmin/bioinfoadmin/.venv/local/lib/python2.7/site-packages/sqlalchemy/engine/default.py",
line 507, in do_execute
cursor.execute(statement, parameters)
OperationalError: (psycopg2.OperationalError) server closed the connection
unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
[SQL: 'SELECT EXISTS (SELECT history_dataset_association.id,
history_dataset_association.history_id,
history_dataset_association.dataset_id,
history_dataset_association.create_time,
history_dataset_association.update_time, history_dataset_association.state,
history_dataset_association.copied_from_history_dataset_association_id,
history_dataset_association.copied_from_library_dataset_dataset_association_id,
history_dataset_association.name, history_dataset_association.info,
history_dataset_association.blurb, history_dataset_association.peek,
history_dataset_association.tool_version,
history_dataset_association.extension,
history_dataset_association.metadata,
history_dataset_association.parent_id,
history_dataset_association.designation,
history_dataset_association.deleted, history_dataset_association.visible,
history_dataset_association.extended_metadata_id,
history_dataset_association.version, history_dataset_association.hid,
history_dataset_association.purged,
history_dataset_association.hidden_beneath_collection_instance_id \nFROM
history_dataset_association, job_to_output_dataset \nWHERE job.id =
job_to_output_dataset.job_id AND history_dataset_association.id =
job_to_output_dataset.dataset_id AND history_dataset_association.deleted =
true) AS anon_1, EXISTS (SELECT history_dataset_collection_association.id
\nFROM history_dataset_collection_association,
job_to_output_dataset_collection \nWHERE job.id =
job_to_output_dataset_collection.job_id AND
history_dataset_collection_association.id =
job_to_output_dataset_collection.dataset_collection_id AND
history_dataset_collection_association.deleted = true) AS anon_2, job.id AS
job_id, job.create_time AS job_create_time, job.update_time AS
job_update_time, job.history_id AS job_history_id, job.library_folder_id AS
job_library_folder_id, job.tool_id AS job_tool_id, job.tool_version AS
job_tool_version, job.state AS job_state, job.info AS job_info,
job.copied_from_job_id AS job_copied_from_job_id, job.command_line AS
job_command_line, job.dependencies AS job_dependencies, job.param_filename
AS job_param_filename, job.runner_name AS job_runner_name_1, job.stdout AS
job_stdout, job.stderr AS job_stderr, job.exit_code AS job_exit_code,
job.traceback AS job_traceback, job.session_id AS job_session_id,
job.user_id AS job_user_id, job.job_runner_name AS job_job_runner_name,
job.job_runner_external_id AS job_job_runner_external_id,
job.destination_id AS job_destination_id, job.destination_params AS
job_destination_params, job.object_store_id AS job_object_store_id,
job.imported AS job_imported, job.params AS job_params, job.handler AS
job_handler \nFROM job \nWHERE job.state = %(state_1)s AND job.handler =
%(handler_1)s AND job.id NOT IN (SELECT job.id \nFROM job JOIN
job_to_input_dataset ON job.id = job_to_input_dataset.job_id JOIN
history_dataset_association ON history_dataset_association.id =
job_to_input_dataset.dataset_id JOIN dataset ON dataset.id =
history_dataset_association.dataset_id \nWHERE job.state = %(state_2)s AND
(history_dataset_association.state = %(_state_1)s OR
history_dataset_association.deleted = true OR dataset.state != %(state_3)s
OR dataset.deleted = true)) AND job.id NOT IN (SELECT job.id \nFROM job
JOIN job_to_input_library_dataset ON job.id =
job_to_input_library_dataset.job_id JOIN
library_dataset_dataset_association ON
library_dataset_dataset_association.id =
job_to_input_library_dataset.ldda_id JOIN dataset ON dataset.id =
library_dataset_dataset_association.dataset_id \nWHERE job.state =
%(state_4)s AND (library_dataset_dataset_association.state IS NOT NULL OR
library_dataset_dataset_association.deleted = true OR dataset.state !=
%(state_5)s OR dataset.deleted = true)) ORDER BY job.id'] [parameters:
{'state_3': 'ok', 'handler_1': 'main.web.1', 'state_1': 'new', '_state_1':
'failed_metadata', 'state_2': 'new', 'state_5': 'ok', 'state_4': 'new'}]
(Background on this error at: http://sqlalche.me/e/e3q8)
galaxy.jobs.handler ERROR 2018-10-12 19:42:48,488 [p:102478,w:2,m:0]
[JobHandlerQueue.monitor_thread] Exception in monitor_step
Traceback (most recent call last):
File "lib/galaxy/jobs/handler.py", line 213, in __monitor
self.__monitor_step()
File "lib/galaxy/jobs/handler.py", line 272, in __monitor_step
.order_by(model.Job.id).all()
File
"/media/libraryfiles/bioinfoadmin/bioinfoadmin/.venv/local/lib/python2.7/site-packages/sqlalchemy/orm/query.py",
line 2737, in all
return list(self)
File
"/media/libraryfiles/bioinfoadmin/bioinfoadmin/.venv/local/lib/python2.7/site-packages/sqlalchemy/orm/query.py",
line 2889, in __iter__
return self._execute_and_instances(context)
File
"/media/libraryfiles/bioinfoadmin/bioinfoadmin/.venv/local/lib/python2.7/site-packages/sqlalchemy/orm/query.py",
line 2912, in _execute_and_instances
result = conn.execute(querycontext.statement, self._params)
File
"/media/libraryfiles/bioinfoadmin/bioinfoadmin/.venv/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py",
line 948, in execute
return meth(self, multiparams, params)
File
"/media/libraryfiles/bioinfoadmin/bioinfoadmin/.venv/local/lib/python2.7/site-packages/sqlalchemy/sql/elements.py",
line 269, in _execute_on_connection
return connection._execute_clauseelement(self, multiparams, params)
File
"/media/libraryfiles/bioinfoadmin/bioinfoadmin/.venv/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py",
line 1060, in _execute_clauseelement
compiled_sql, distilled_params
File
"/media/libraryfiles/bioinfoadmin/bioinfoadmin/.venv/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py",
line 1200, in _execute_context
context)
File
"/media/libraryfiles/bioinfoadmin/bioinfoadmin/.venv/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py",
line 1413, in _handle_dbapi_exception
exc_info
File
"/media/libraryfiles/bioinfoadmin/bioinfoadmin/.venv/local/lib/python2.7/site-packages/sqlalchemy/util/compat.py",
line 203, in raise_from_cause
reraise(type(exception), exception, tb=exc_tb, cause=cause)
File
"/media/libraryfiles/bioinfoadmin/bioinfoadmin/.venv/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py",
line 1193, in _execute_context
context)
File
"/media/libraryfiles/bioinfoadmin/bioinfoadmin/.venv/local/lib/python2.7/site-packages/sqlalchemy/engine/default.py",
line 507, in do_execute
cursor.execute(statement, parameters)
OperationalError: (psycopg2.OperationalError) server closed the connection
unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
[SQL: 'SELECT EXISTS (SELECT history_dataset_association.id,
history_dataset_association.history_id,
history_dataset_association.dataset_id,
history_dataset_association.create_time,
history_dataset_association.update_time, history_dataset_association.state,
history_dataset_association.copied_from_history_dataset_association_id,
history_dataset_association.copied_from_library_dataset_dataset_association_id,
history_dataset_association.name, history_dataset_association.info,
history_dataset_association.blurb, history_dataset_association.peek,
history_dataset_association.tool_version,
history_dataset_association.extension,
history_dataset_association.metadata,
history_dataset_association.parent_id,
history_dataset_association.designation,
history_dataset_association.deleted, history_dataset_association.visible,
history_dataset_association.extended_metadata_id,
history_dataset_association.version, history_dataset_association.hid,
history_dataset_association.purged,
history_dataset_association.hidden_beneath_collection_instance_id \nFROM
history_dataset_association, job_to_output_dataset \nWHERE job.id =
job_to_output_dataset.job_id AND history_dataset_association.id =
job_to_output_dataset.dataset_id AND history_dataset_association.deleted =
true) AS anon_1, EXISTS (SELECT history_dataset_collection_association.id
\nFROM history_dataset_collection_association,
job_to_output_dataset_collection \nWHERE job.id =
job_to_output_dataset_collection.job_id AND
history_dataset_collection_association.id =
job_to_output_dataset_collection.dataset_collection_id AND
history_dataset_collection_association.deleted = true) AS anon_2, job.id AS
job_id, job.create_time AS job_create_time, job.update_time AS
job_update_time, job.history_id AS job_history_id, job.library_folder_id AS
job_library_folder_id, job.tool_id AS job_tool_id, job.tool_version AS
job_tool_version, job.state AS job_state, job.info AS job_info,
job.copied_from_job_id AS job_copied_from_job_id, job.command_line AS
job_command_line, job.dependencies AS job_dependencies, job.param_filename
AS job_param_filename, job.runner_name AS job_runner_name_1, job.stdout AS
job_stdout, job.stderr AS job_stderr, job.exit_code AS job_exit_code,
job.traceback AS job_traceback, job.session_id AS job_session_id,
job.user_id AS job_user_id, job.job_runner_name AS job_job_runner_name,
job.job_runner_external_id AS job_job_runner_external_id,
job.destination_id AS job_destination_id, job.destination_params AS
job_destination_params, job.object_store_id AS job_object_store_id,
job.imported AS job_imported, job.params AS job_params, job.handler AS
job_handler \nFROM job \nWHERE job.state = %(state_1)s AND job.handler =
%(handler_1)s AND job.id NOT IN (SELECT job.id \nFROM job JOIN
job_to_input_dataset ON job.id = job_to_input_dataset.job_id JOIN
history_dataset_association ON history_dataset_association.id =
job_to_input_dataset.dataset_id JOIN dataset ON dataset.id =
history_dataset_association.dataset_id \nWHERE job.state = %(state_2)s AND
(history_dataset_association.state = %(_state_1)s OR
history_dataset_association.deleted = true OR dataset.state != %(state_3)s
OR dataset.deleted = true)) AND job.id NOT IN (SELECT job.id \nFROM job
JOIN job_to_input_library_dataset ON job.id =
job_to_input_library_dataset.job_id JOIN
library_dataset_dataset_association ON
library_dataset_dataset_association.id =
job_to_input_library_dataset.ldda_id JOIN dataset ON dataset.id =
library_dataset_dataset_association.dataset_id \nWHERE job.state =
%(state_4)s AND (library_dataset_dataset_association.state IS NOT NULL OR
library_dataset_dataset_association.deleted = true OR dataset.state !=
%(state_5)s OR dataset.deleted = true)) ORDER BY job.id'] [parameters:
{'state_3': 'ok', 'handler_1': 'main.web.2', 'state_1': 'new', '_state_1':
'failed_metadata', 'state_2': 'new', 'state_5': 'ok', 'state_4': 'new'}]
(Background on this error at: http://sqlalche.me/e/e3q8)
Re: [galaxy-dev] can't create user on a new instance
by Rui Wang
Hey Folks,
When I used the kickstart, I always hit this error:
TASK [galaxyprojectdotorg.galaxy-os : Update APT cache]
*************************************************************************
[WARNING]: sftp transfer mechanism failed on [localhost]. Use
ANSIBLE_DEBUG=1 to see detailed information
fatal: [localhost]: FAILED! => {"changed": false, "failed": true, "msg":
"Failed to update apt cache."}
Have you seen this before? If you have, please give me a hand. :)
Thanks,
Rui
On Fri, Oct 12, 2018 at 2:17 PM Rui Wang <ruiwang.sz(a)gmail.com> wrote:
> Hi Qiang,
>
> Let me try that. I think the postgres setup might be contaminated by the
> attempt to use kickstart. During that it modified the postgres permission
> config, and it took me a while to clean that up to get postgres working
> again (which may or may not have reverted the changes completely).
>
> I'll get back to you on this. I'll also let you know how my attempts with
> the kickstart go. The problem there was with the updater. :-(
>
> Thanks,
> Rui
>
> On Fri, Oct 12, 2018 at 12:23 PM Qiang Gu <guq(a)ohsu.edu> wrote:
>
>> Hi Rui,
>>
>> Galaxy will use the default sqlite if you comment out the db setting in
>> the config file (`config/galaxy.yml`). You mean sqlite behaves the same
>> as postgresql, i.e. it also doesn't allow you to create users?
>> If that's the case, try `git clone` of a new galaxy repo, then in the
>> galaxy folder run `git checkout remotes/origin/release_18.05`, `git
>> checkout -b release_18.05`, `./run.sh`.
>> Basically, I would like to get sqlite working first, then switch to
>> postgresql. The above-mentioned steps may help identify the hidden issue.
>>
>> Good luck!
>>
>> -Qiang
>>
>>
>> ------------------------------
>> *From:* Rui Wang [ruiwang.sz(a)gmail.com]
>> *Sent:* Friday, October 12, 2018 1:01 AM
>> *To:* Qiang Gu
>> *Cc:* galaxy-dev(a)lists.galaxyproject.org
>> *Subject:* Re: [galaxy-dev] can't create user on a new instance
>>
>> Hi Qiang,
>>
>> Thanks so much for the email. The default sqlite seems to behave the same.
>> I spent some time with the kickstart. It would be fantastic if it
>> worked... sadly it failed on me. :-( Let me put some more effort into that.
>> Btw, do you have a better version of the instructions for the kickstart?
>>
>> Cheers,
>> Rui
>>
>> On Wed, Oct 10, 2018 at 10:12 PM Qiang Gu <guq(a)ohsu.edu> wrote:
>>
>>> Hi Rui,
>>>
>>> Does the default sqlite database work for you? I suspect there is some
>>> misconfiguration in your postgresql settings, or some element of the
>>> config/galaxy.yml file is broken. Try entering your postgresql database
>>> in a terminal with the same login info and creating a table, to make
>>> sure you have the proper privileges.
>>> Alternatively, using GalaxyKickStart (
>>> https://github.com/ARTbio/GalaxyKickStart) to deploy a server
>>> isn't that difficult, and a postgresql database comes with it by default.
>>> Good luck!
>>>
>>> Thanks,
>>>
>>> -Qiang
>>>
>>> ------------------------------
>>> *From:* galaxy-dev [galaxy-dev-bounces(a)lists.galaxyproject.org] on
>>> behalf of Rui Wang [ruiwang.sz(a)gmail.com]
>>> *Sent:* Wednesday, October 10, 2018 9:30 PM
>>> *To:* galaxy-dev(a)lists.galaxyproject.org
>>> *Subject:* Re: [galaxy-dev] can't create user on a new instance
>>>
>>> Hi Folks,
>>>
>>> I still cannot create a new user account; I get the same error message.
>>> Has anyone seen this before?
>>>
>>> Cheers,
>>> Rui
>>>
>>> On Tue, Oct 9, 2018 at 5:48 PM Rui Wang <ruiwang.sz(a)gmail.com> wrote:
>>>
>>>> Hey Folks,
>>>>
>>>> I just installed a new 18.05 instance with a new postgresql db created.
>>>> However, the UI doesn't allow me to create any user. After I provided all
>>>> input information, it says
>>>>
>>>> galaxy.web.framework.webapp WARNING 2018-10-09 17:28:35,735
>>>> [p:32925,w:1,m:0] [uWSGIWorker1Core2] Wrong session token found, denying
>>>> request
>>>>
>>>> in the server log. Btw, user creation (allow_user_creation) is enabled
>>>> and the admin user list is empty.
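>>>> (Concretely: in config/galaxy.yml I have roughly the following, sketched
>>>> from memory rather than copied from the exact file:
>>>> galaxy:
>>>>   allow_user_creation: true
>>>>   admin_users: ''
>>>> so user creation should be allowed and no admin is configured yet.)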
>>>>
>>>> I looked into it, and it's a function in
>>>> lib/galaxy/web/framework/webapp.py:
>>>>
>>>> def check_csrf_token(self):
>>>> session_csrf_token =
>>>> self.request.params.get("session_csrf_token", None)
>>>> problem = False
>>>> if not session_csrf_token:
>>>> log.warning("No session_csrf_token set, denying request.")
>>>> problem = True
>>>> elif session_csrf_token != self.session_csrf_token:
>>>> log.warning("Wrong session token found, denying request.")
>>>> problem = True
>>>>
>>>> if problem:
>>>> return self.show_warn_message("Failed to authorize action.")
>>>>
>>>> Could someone give me a hand? :-)
>>>>
>>>> Cheers,
>>>> Rui
>>>>
>>>