Launching multiple jobs using one tool form with multiple selected datasets
by Leandro Hermida
Hi everyone,
I was wondering what would be the way in Galaxy to program the following:
- User clicks on a tool and the form is displayed
- They use a multi-select menu in the form to pick, let's say, X datasets
from their history
- When they click submit, the tool launches X jobs in the history,
one for each of the selected datasets.
I have a common use case where users have to manually run the same tool over
and over again with the same parameters for each dataset of interest in
their history. It would be great to be able, programmatically or otherwise,
to use one Galaxy form, multi-select the datasets, and launch the parallel
jobs in one go.
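The fan-out being asked for can be sketched as building one job per selected dataset from a single form submission. The payload layout, key names, and tool id below are illustrative assumptions, not Galaxy's actual job API:

```python
# Sketch: expand one form submission into one job per selected dataset.
# All key names ("tool_id", "input_dataset") and values are hypothetical,
# for illustration only; the real Galaxy calls will differ.

def build_job_payloads(tool_id, shared_params, dataset_ids):
    """Return one job payload per selected dataset, all sharing the
    same tool parameters."""
    payloads = []
    for ds_id in dataset_ids:
        payload = dict(shared_params)   # copy the common settings
        payload["tool_id"] = tool_id
        payload["input_dataset"] = ds_id
        payloads.append(payload)
    return payloads

jobs = build_job_payloads("my_tool", {"threshold": 0.05}, ["d1", "d2", "d3"])
print(len(jobs))  # prints 3 -- one job per selected dataset
```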
regards,
Leandro
11 years, 9 months
Does REMOTE_USER have to be an email address?
by Leandro Hermida
Hello,
When returning the REMOTE_USER env var from the external authentication
method, does it have to be an email address for Galaxy? The docs aren't very
clear on that: the main External Authentication section says you have to set
remote_user_maildomain if you aren't returning an email address (and the
domain will be appended, it seems), but the mod_authnz_ldap section says you
are allowed to return any attribute as REMOTE_USER.
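The appending behaviour the docs describe can be sketched like this; the function is an illustration of that rule, not Galaxy's actual implementation:

```python
def normalize_remote_user(remote_user, maildomain=None):
    """Sketch of the behaviour described in the docs: if external auth
    returns a bare username rather than an email address, append the
    configured remote_user_maildomain. Illustrative only."""
    if "@" not in remote_user and maildomain:
        return "%s@%s" % (remote_user, maildomain)
    return remote_user

print(normalize_remote_user("jdoe", "example.org"))   # jdoe@example.org
print(normalize_remote_user("jdoe@example.org"))      # unchanged
```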
regards,
Leandro
11 years, 9 months
Question on finding external_filename and _extra_files_path from psql DB.
by Vipin TS
Dear Galaxy Team,
As part of in-house automated cleaning of datasets from our production
Galaxy service, I am having difficulty getting the Galaxy-assigned file name
(galaxy-root/database/001/dataset_001.dat)
for each dataset from the corresponding database. We are using PostgreSQL,
and when I queried the table called 'dataset', I couldn't figure out
the values for external_filename or _extra_files_path.
For example:
>> select * from dataset order by id DESC limit 10;
  id   |        create_time         |        update_time         | state | deleted | purged | purgable | external_filename | _extra_files_path | file_size
 70805 | 2011-04-22 20:49:55.319709 | 2011-04-22 20:50:26.643807 | ok    | f       | f      | t        |                   |                   | 421593
Could you please let me know whether I can locate the file name from other
tables, or do I need to set up some configuration?
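When external_filename is NULL, as in the row above, the file lives under Galaxy's configured file_path and its name is derived from the dataset id. A sketch of that naming convention follows; this is my understanding of the layout, so verify it against your own instance before deleting anything:

```python
import os

def dataset_path(file_path, dataset_id):
    """Sketch of Galaxy's on-disk naming when external_filename is NULL:
    files sit under file_path in subdirectories hashed from the dataset
    id, roughly 1000 datasets per directory. Verify against your
    instance before relying on this."""
    s = str(dataset_id)
    # e.g. id 70805 -> subdirectory "070"; ids below 1000 -> "000"
    subdir = "000" if len(s) < 4 else s[:-3].zfill(3)
    return os.path.join(file_path, subdir, "dataset_%s.dat" % dataset_id)

print(dataset_path("database/files", 70805))
# database/files/070/dataset_70805.dat
```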
Thanks in advance,
Vipin
Friedrich Miescher Laboratory
of the Max Planck Society
Spemannstrasse 39, 72076
Tuebingen, Germany
11 years, 9 months
Galaxy Community Conference Early Registration Deadline extended to May 1
by Dave Clements
Hello all,
The early registration deadline for the 2011 Galaxy Community Conference has
been extended by a week, from April 24 to May 1. The conference is being
held in Lunteren, the Netherlands, 25-26 May, with a special "Introduction
to Galaxy" session on the evening of May 24.
That's the good news. The bad news is that online registration has been
taken offline, starting today, until April 26 or 27. We took this step
because of unexpectedly high demand for conference centre hotel rooms on May
24 and 26. We still have ample rooms for the night of May 25, but we are
now sold out on the other nights. We are working with the conference
centre to increase our room allotment on May 24 and 26. We are optimistic
about getting more rooms on May 26, but we are not as hopeful about getting
additional rooms for May 24.
We will reopen registration next Tuesday or Wednesday and post an update on
the room situation, including alternatives, at that time.
As always, please let me know if you have any questions.
Thanks for your patience,
Dave C.
--
http://galaxy.psu.edu/gcc2011/
http://getgalaxy.org
http://usegalaxy.org/
11 years, 9 months
Workflow variables and the workflow API
by Dave Walton
I'm working on a module that involves calling a workflow multiple times over a set of data.
The workflow has a "variable" that I use for renaming output files.
I see, using the workflow_execute.py example, how to pass in file names, but I don't see how to pass in a value for this variable.
When I do a "./display.py <api key> http://jaxgalaxydev01/api/workflows/<workflow id>" I see the two input data files I would expect, but I don't see this variable.
ex:
Member Information
------------------
url: /api/workflows/<workflow id>
inputs: {'207': {'value': '', 'label': 'Input Dataset'}, '211': {'value': '', 'label': 'Input Dataset'}}
id: <workflow id>
name: Single-end RNA Seq with grooming v 2.0 (imported from uploaded file)
Is there any way to get at this variable so I can set it when I call a workflow through the API?
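One approach is to add an extra entry to the JSON payload that workflow_execute.py POSTs, carrying the variable's value alongside ds_map. The key name "replacement_params" below is an assumption about the API, so check whether your Galaxy version accepts it:

```python
# Sketch of the POST payload workflow_execute.py builds, extended with a
# hypothetical entry for the rename variable. "replacement_params" is an
# assumed key name; verify it against your Galaxy version's API.

def build_workflow_payload(workflow_id, ds_map, variables):
    """ds_map: step id -> {"src": ..., "id": ...}; variables: the
    workflow's ${variable} values, e.g. {"sample_name": "S1"}."""
    return {
        "workflow_id": workflow_id,
        "ds_map": ds_map,
        "replacement_params": variables,
    }

payload = build_workflow_payload(
    "wf123",
    {"207": {"src": "hda", "id": "abc"},
     "211": {"src": "hda", "id": "def"}},
    {"sample_name": "S1"},
)
```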
Thanks,
Dave
11 years, 9 months
galaxy related
by shashi shekhar
If I want to get some information from the tool XML file into the code
file, how do I get that info? If it's possible, please send me detailed
information.
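In general, a Galaxy tool's XML passes values to its code file on the command line via the <command> tag, and the script reads them as arguments. A minimal sketch of the script side, with hypothetical file and parameter names, assuming a command line like "my_tool.py $input $output $threshold":

```python
import sys

def parse_tool_args(argv):
    """Read the values the tool XML substituted onto the command line.
    Argument order must match the tool's <command> line; the names here
    are placeholders for illustration."""
    input_path, output_path, threshold = argv[1:4]
    return input_path, output_path, float(threshold)

# In the real script this would be parse_tool_args(sys.argv):
inp, out, thr = parse_tool_args(["my_tool.py", "in.dat", "out.dat", "0.05"])
```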
Regards
galaxy user shashi shekhar
11 years, 9 months
sqlalchemy Timeout Error
by Liisa Koski
Hi,
We are running many NGS workflows at the same time on our local instance
of Galaxy. They are crashing with the following error
Error - <class 'sqlalchemy.exc.TimeoutError'>: QueuePool limit of size 5
overflow 10 reached, connection timed out, timeout 30
In the universe_wsgi.ini we made the following adjustments:
# If the server logs errors about not having enough database pool
connections,
# you will want to increase these values, or consider running more Galaxy
# processes.
database_engine_option_pool_size = 50 # this used to be 5
database_engine_option_max_overflow = 100 # this used to be 10
Those numbers were pulled out of a hat, so I wanted to make sure what we
were doing was correct. Is there a limit on the values? It doesn't appear
to be crashing anymore, but I still want to make sure.
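For what it's worth, the ceiling on concurrent database connections with SQLAlchemy's QueuePool is pool_size plus max_overflow, which is why the error mentions both "size 5" and "overflow 10". The arithmetic:

```python
def max_connections(pool_size, max_overflow):
    """SQLAlchemy's QueuePool allows pool_size persistent connections
    plus max_overflow temporary ones; beyond that, callers block until
    the configured timeout and then raise TimeoutError."""
    return pool_size + max_overflow

print(max_connections(5, 10))    # the defaults from the error: 15
print(max_connections(50, 100))  # the new settings: 150
```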
Thanks,
Liisa
11 years, 9 months
galaxy after logout
by hari krishna
hi,
In Galaxy, after logging out, I am still able to get back to my home page
by using the browser back button. How can I disable this behaviour? Can
anyone help?
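Usually this happens because the browser serves the logged-in page from its cache rather than re-requesting it. The generic fix is to send no-cache headers on authenticated pages; a WSGI middleware sketch of that idea, not Galaxy's own code:

```python
class NoCacheMiddleware(object):
    """Generic WSGI middleware that adds no-cache headers to every
    response, so the Back button after logout cannot replay a cached
    logged-in page. Illustrative only, not Galaxy-specific."""

    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        def no_cache_start_response(status, headers, exc_info=None):
            headers = list(headers) + [
                ("Cache-Control", "no-store, no-cache, must-revalidate"),
                ("Pragma", "no-cache"),
            ]
            return start_response(status, headers, exc_info)
        return self.app(environ, no_cache_start_response)
```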
--
Thanks & Regards,
Hari Krishna .M
11 years, 9 months
Problem with "Map with Bowtie for Illumina" on a local Galaxy
by Leon Mei
Dear colleagues,
I have a locally installed Galaxy. When I try to run Bowtie to align a
small FASTQ file, I get the following message and the result is
empty.
"""
empty, format: sam, database: Project-Id-Version: PACKAGE VERSION
Report-Msgid-Bugs-To: POT-Creation-Date: 2008-09-21 18:33+0900
PO-Revision-Date: 2009-03-17 03:55-0400 Last-Translator: FULL NAME
Language-Team: en Plural-Forms: nplurals=2; plural=(n != 1)
MIME-Version: 1.0 Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit Generated-By: Babel 0.9.4
Info:
"""
It seems bowtie_wrapper.py wasn't executed at all. Has anyone seen
this message before, and what could be the cause?
Thanks,
Leon
--
Hailiang (Leon) Mei
Netherlands Bioinformatics Center (http://www.nbic.nl/)
Skype: leon_mei Mobile: +31 6 41709231
11 years, 9 months
Staged Method for cluster running SGE?
by Peter Cock
Hi all,
So far we've been running our local Galaxy instance on
a single machine, but I would like to be able to offload
(some) jobs onto our local SGE cluster. I've been reading
https://bitbucket.org/galaxy/galaxy-central/wiki/Config/Cluster
Unfortunately in our setup the SGE cluster head node is
a different machine to the Galaxy server, and they do not
(currently) have a shared file system. Once on the cluster,
the head node and the compute nodes do have a shared
file system.
Therefore we will need some way of copying input data
from the Galaxy server to the cluster, running the job,
and once the job is done, copying the results back to the
Galaxy server.
The "Staged Method" on the wiki sounds relevant, but
appears to be for TORQUE only (via pbs_python), not
any of the other back ends (via DRMAA).
Have I overlooked anything on the "Cluster" wiki page?
Has anyone attempted anything similar, and could you
offer any guidance or tips?
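In the absence of a shared file system, one generic approach is a stage-in / run / stage-out wrapper around each job. A rough sketch of the command sequence using scp and ssh; the host name, paths, and run script are placeholders, and this is not the wiki's Staged Method:

```python
def staged_commands(input_path, remote_dir, command, output_name,
                    host="sge-head"):
    """Return the shell command sequence for running one job on a
    cluster that shares no file system with the Galaxy server:
    copy the input over, run remotely, copy the result back.
    Host, paths, and command are hypothetical placeholders."""
    return [
        ["scp", input_path, "%s:%s/" % (host, remote_dir)],      # stage in
        ["ssh", host, "cd %s && %s" % (remote_dir, command)],    # run
        ["scp", "%s:%s/%s" % (host, remote_dir, output_name),    # stage out
         "."],
    ]

# Each step could then be driven with subprocess.check_call(cmd),
# aborting the job if any stage fails.
cmds = staged_commands("in.fastq", "/scratch/job1", "./run.sh", "out.sam")
```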
Thanks,
Peter
11 years, 9 months