I know a few people have talked about using Docker to package and
distribute Galaxy tools. The most recent version now runs on all major
Linux distributions, so it's a bit more inclusive now (but there probably
won't ever be a Windows or Mac version). One strategy would be for tool
configs to define an optional Docker archive name and possibly an alternate
command line to execute if the Docker archive is being used.
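For illustration, that might look something like this in a tool config (the element and attribute names here are purely hypothetical, not existing Galaxy syntax):

    <command>bwa aln ...</command>
    <!-- hypothetical: alternate invocation used when the named archive is present -->
    <docker archive="galaxy/bwa-0.6.2.tar">
        <command>docker run galaxy/bwa-0.6.2 bwa aln ...</command>
    </docker>

The idea being that Galaxy would use the alternate command line when the Docker archive is available on the host, and fall back to the plain command otherwise.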
How far has that work gotten, and what are people thinking about it?
I am going to forward your email to the galaxy-dev list so that the
development community can offer comments/suggestions.
On 8/30/11 2:27 AM, Petr Novak wrote:
> Hi everybody,
> I am developing an application on a Galaxy server. One of the requirements
> is to create a user-specific list of options. Is it possible to somehow
> access $__user_email__ in the <options> tag or in a <conditional>? I could
> not find documentation on how to use Cheetah in Galaxy tool XML files, but
> judging by the files provided with Galaxy, Cheetah is used only in the
> <command> and <config> tags. Is that right? If it could be used in any
> part of the XML definition file, it would be much easier to generate the
> XML dynamically based on $__user_email__.
> Does anybody have an idea how to manage this problem?
> Petr Novak
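For what it's worth, Cheetah substitution of $__user_email__ does work in the <command> template, so a tool can at least pass it through to a wrapper script that builds the user-specific options itself - a sketch, with my_wrapper.py being a hypothetical script:

    <command>my_wrapper.py --user "$__user_email__" --input "$input" --output "$output"</command>

Whether the same variable can be used inside <options> or <conditional> is exactly the open question above.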
I'm trying to write a new tool working with tabular data (specifically
a Reciprocal Best Hits (RBH) tool using BLAST-style tabular output).
I want the user to be able to choose a column number (from one of the
input files), but I have a default column in mind. However, Galaxy
doesn't seem to obey the default column number:
<param name="a_vs_b" type="data" format="tabular" label="Hits
from querying A against B" description="Tabular file, e.g. BLAST
<param name="b_vs_a" type="data" format="tabular" label="Hits
from querying B against A" description="Tabular file, e.g. BLAST
<param name="id1" type="data_column" data_ref="a_vs_b"
multiple="False" numerical="False" value="1" label="Column containing
query ID" help="This is column 1 in standard BLAST tabular output" />
<param name="id2" type="data_column" data_ref="a_vs_b"
multiple="False" numerical="False" value="2" label="Column containing
match ID" help="This is column 2 in standard BLAST tabular output"/>
<param name="score" type="data_column" data_ref="a_vs_b"
multiple="False" numerical="False" value="12" label="Column containing
containing score to rank on" help="The bitscore is column 12 in
standard BLAST tabular output"/>
I've tried giving the default column value numerically (as shown), and
also using value="c2" etc. Regardless, Galaxy ignores my default and
falls back to its own choice of column.
Is this a bug, or am I doing something wrong?
I submitted a workflow that in turn submits a DRM job to Sun Grid Engine.
The queue had an error (probably due to a problem with automount):
Traceback (most recent call last):
File "/inside/depot4/galaxy/lib/galaxy/jobs/runners/__init__.py", line 60, in run_next
File "/inside/depot4/galaxy/lib/galaxy/jobs/runners/drmaa.py", line 169, in queue_job
external_job_id = self.ds.runJob(jt)
File "/inside/depot4/galaxy/eggs/drmaa-0.4b3-py2.7.egg/drmaa/__init__.py", line 331, in runJob
_h.c(_w.drmaa_run_job, jid, _ct.sizeof(jid), jobTemplate)
File "/inside/depot4/galaxy/eggs/drmaa-0.4b3-py2.7.egg/drmaa/helpers.py", line 213, in c
return f(*(args + (error_buffer, sizeof(error_buffer))))
File "/inside/depot4/galaxy/eggs/drmaa-0.4b3-py2.7.egg/drmaa/errors.py", line 90, in error_check
raise _ERRORS[code-1]("code %s: %s" % (code, error_buffer.value))
DeniedByDrmException: code 17: error: no suitable queues
When the sysadmin cleared the error, the job started running normally after having been in an error state for 10 minutes.
The cool thing is that Galaxy kept running without a problem:
galaxy.jobs.runners.drmaa DEBUG 2013-06-27 14:51:22,968 (10481/4767487) state change: job is running
galaxy.jobs.runners.drmaa DEBUG 2013-06-27 15:00:23,628 (10481/4767487) state change: job is queued and active
galaxy.jobs.runners.drmaa DEBUG 2013-06-27 15:00:27,751 (10481/4767487) state change: job is running
However, in the history panel the job shows as queued rather than running, even if I refresh the history panel.
Is this normal, or should the status change to running?
I'm using this version of stock Galaxy:
user: Dannon Baker <dannonbaker(a)me.com>
date: Sat Jun 15 09:08:09 2013 -0400
summary: Fix reports import issue reported by Lance, https://trello.com/card/bug-in-reports-webapp-imports/506338ce32ae458f6d1...
UC Santa Cruz
This is my first Galaxy installation, so apologies for stupid
questions. I am setting up Galaxy on a cluster running Torque as the
resource manager. I am working through the documentation, but I am unclear
on some things:
Firstly, I am unable to find *start_job_runners* within
universe_wsgi.ini, and I don't want to just add it anywhere - any help on
this would be great.
Furthermore, this is my job_conf.xml:
<!-- A sample job config that explicitly configures job running the way it
is configured by default (if there is no explicit config). -->
<plugin id="hpc" type="runner" load="galaxy.jobs.runners.drmaa:DRMAAJobRunner"/>
<!-- Additional job handlers - the id should match the name of a
[server:<id>] in universe_wsgi.ini. -->
<destination id="hpc" runner="drmaa"/>
Does this look meaningful? Furthermore, where do I set the additional
job handlers in universe_wsgi.ini?
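For comparison, a minimal complete DRMAA job_conf.xml, loosely following the sample shipped with Galaxy (the handler ids here are illustrative), might look like:

    <?xml version="1.0"?>
    <job_conf>
        <plugins>
            <plugin id="hpc" type="runner" load="galaxy.jobs.runners.drmaa:DRMAAJobRunner"/>
        </plugins>
        <handlers default="handlers">
            <handler id="handler0" tags="handlers"/>
            <handler id="handler1" tags="handlers"/>
        </handlers>
        <destinations default="hpc">
            <destination id="hpc" runner="hpc"/>
        </destinations>
    </job_conf>

Note that a destination's runner attribute names a plugin id, so with the plugin above it should be runner="hpc" rather than runner="drmaa". The additional handlers are then defined in universe_wsgi.ini as [server:handler0] and [server:handler1] sections matching the handler ids.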
As background, the cluster has 13 compute nodes and a shared storage array
that can be accessed by all nodes in the cluster.
Regards/Groete/Mit freundlichen Grüßen/recuerdos/meilleures salutations/
distinti saluti/siong/duì yú/привет
Jurgens de Bruin
Just been reading a bit more about the Galaxy packaging system. Here's a slight modification to what I was suggesting that might fit in a bit better. Apologies for not being more familiar with the existing system before proposing extensions.
Recall that my goal is to support using a system-installed (native) package, at a defined version, which I aim to achieve by loading the appropriate environment module before running a tool.
We still have tool_dependencies.xml defining a package at a particular version, but rather than download and build the source code, there's just a directive that says how to pick up the correct program version at runtime, e.g. which environment module to load.
So instead of the tool_dependencies.xml fragment:
<package name="bwa" version="0.6.2">
<environment_variable name="PATH" action="prepend_to">$INSTALL_DIR/bin</environment_variable>
We have something like this (NB: element and attribute names are for illustrative purposes only):
<package name="bwa" version="0.6.2">
This causes the right thing (module load bwa/0.6.2) to be stuck into the dependency's env.sh file when this package is installed from the toolshed. We could call this toolshed package native_package_bwa_0_6_2, to avoid confusion with the existing download-and-make one.
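Concretely, the env.sh generated for this package might then contain nothing more than (a sketch, assuming a standard environment-modules setup):

    module load bwa/0.6.2

and tools depending on package bwa 0.6.2 would source that file before running, exactly as they do with the current built-from-source packages.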
We might want a bit of flexibility on what actions are supported (in case we want to support Software Collections, for example).
What do you think?
PS: In case it wasn't already clear, solving this problem well is quite important to us here at AgResearch. ;-)
I have a new Galaxy installation under Linux. It's running behind an Apache proxy (in a subdirectory) with authentication through Galaxy (not Apache).
When trying to register a new user (which will be the administrator), the registration page declares success ("Now logged in as foo") but I am still just an anonymous user. If I try to log in, it appears to succeed but I'm still anonymous. If I purposely enter the wrong password, the system complains as it should, but I never get proper credentials.
If I take away the Apache proxy configuration and run on the internal server, the problem goes away: I can log in as expected and get the correct environment. The error is clearly somewhere in the proxy setup, but I feel like I've followed the procedure written on the wiki. Specifically:
In the Apache config:
RewriteRule ^/galaxy$ /galaxy/ [R]
RewriteRule ^/galaxy(.*) http://localhost:8082$1 [P]
# 8080 and 8081 are busy
and in universe_wsgi.ini:
[filter:proxy-prefix]
use = egg:PasteDeploy#prefix
prefix = /galaxy
[app:main]
filter-with = proxy-prefix
cookie_path = /galaxy
Any idea what might be wrong? I don't know if it matters, but I'm using a MySQL database. I can see the new users in it, and it seems to work well enough to complain when bad passwords are entered.
Earlier today I updated my development server, which uses SQLite,
and ran the update schema script (now at v118). With hindsight
there may have been a warning that I missed, because afterwards,
despite updating to the galaxy-central tip, trying to use any of
the tools failed with an SQL error:
OperationalError: (OperationalError) no such column:
history_dataset_association.id AS history_dataset_association_id, ...
$ ./manage_db.sh db_version
$ ./manage_db.sh version
Attempting a downgrade/upgrade suggested I had a stale
migration_tmp table blocking this.
Since this is only a development instance, I removed the SQLite
database (and the old data files) to allow a clean regeneration:
galaxy.model.migrate.check INFO 2014-02-18 13:25:52,248 Migrating 117 -> 118...
galaxy.model.migrate.check INFO 2014-02-18 13:25:54,176
galaxy.model.migrate.check INFO 2014-02-18 13:25:54,177 Add link from
history_dataset_association to the extended_metadata table
galaxy.model.migrate.check INFO 2014-02-18 13:25:54,177
galaxy.model.migrate.check INFO 2014-02-18 13:25:54,177
migrate.versioning.repository DEBUG 2014-02-18 13:25:54,206 Loading
This fixed the history_dataset_association.extended_metadata_id
error - so is the most likely explanation a failed schema update?
Might a stale migration_tmp table have been to blame?
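For anyone hitting something similar: before wiping the database, it may be worth checking for, and dropping, the leftover table by hand - a sketch, assuming the default SQLite database location:

    $ sqlite3 database/universe.sqlite
    sqlite> .tables
    sqlite> DROP TABLE migration_tmp;

after which the downgrade/upgrade via manage_db.sh might go through cleanly.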
We have been developing our own Galaxy instance for a while now. We have a cluster, managed through SGE, to which the jobs are sent for execution. Usually, communication between SGE and DRMAA is fine and we don't have any problems with it.
When a job is deleted by the user, most of the time the job disappears, but sometimes, we don't know why, the job stays and has the status 'dr' within SGE. If we don't kill it manually, it stays forever. It is not always the same tool which produces this error.
Do you have any idea why this happens and how to manage it?
We have another problem, this time a display one.
Since the last update, we have experienced a problem with the history panel: after launching a job, it took a very long time to appear in the history, and we had to refresh manually.
We upgraded to a more recent version of Firefox and it seems that we don't have the problem anymore, but some of our users can't update their Firefox. Are you aware of this problem?
Genouest Platform, Rennes - FRANCE