Hello Team Galaxy
I am trying to share/forward a dataset history from user 1 (running on a local instance, Scrn.1) to user 2 (running on a cloud Amazon EC2 instance, Scrn.2) and vice versa, and get the error: "user is not a valid Galaxy user". I get this error while both users are logged into the system and both instances are running normally.
Any insights from the team would be very helpful.
Thanks so much for your help! It is like you said a browser issue.
I also noticed that the Galaxy main server (https://usegalaxy.org/) doesn't have this problem, and its UCSC Main table browser link uses "https" instead of "http". Does anyone know why this isn't included in the current Galaxy release? And how can I change my current "http UCSC Main" link to an "https UCSC Main" one?
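If the goal is to switch the UCSC link to https at the source, the link target lives in the data-source tool configs rather than in general Galaxy settings. A hedged sketch (the file name and attributes are from memory of the 2013-era tree and should be verified against your checkout):

```xml
<!-- tools/data_source/ucsc_tablebrowser.xml (hypothetical excerpt;
     verify against your own Galaxy checkout) -
     switching the action URL scheme from http to https -->
<inputs action="https://genome.ucsc.edu/cgi-bin/hgTables" check_values="false" method="get">
    <!-- ... existing display and param elements unchanged ... -->
</inputs>
```

After editing a data-source tool config, Galaxy needs a restart for the change to take effect.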
Date: Tue, 19 Nov 2013 08:51:03 +0000
From: Peter Briggs <peter.briggs(a)manchester.ac.uk>
Subject: Re: [galaxy-dev] secure Galaxy with SSL
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Re your problem #1 (UCSC browser appears to be blocked), I've seen
something similar with our local Galaxy instance, which is also served
over https.
In our case I believe this is actually a browser issue: the latest
version of Firefox silently blocks mixed secure and insecure content.
(I think Chrome and IE do something similar, although IE at least gives
a warning.)
The workaround is either to disable mixed content blocking (not trivial
in Firefox, and probably not a good idea in general), or to do something
like right-clicking "Open link in new tab" on the "Get data"/"UCSC
main table browser" link in Galaxy.
Once UCSC has loaded in the new tab it can be used to send data back to
Galaxy without any problems.
Best wishes, Peter
On 18/11/13 23:03, Jingchao Zhang wrote:
> Dear all,
> Today I installed the SSL module for our local Galaxy instance and the
> "https://" link is working fine. I added this
> RequestHeader set X-URL-SCHEME https
> in our Apache configuration file as instructed in this webpage:
> Here are my problems with SSL:
> 1. Some built-in links like "UCSC Main
> <http://genome.ucsc.edu/cgi-bin/hgTables?GALAXY_URL=https%3A//hcc-galaxy.u...> table
> browser" and "UCSC Test
> <http://genome-test.cse.ucsc.edu/cgi-bin/hgTables?GALAXY_URL=https%3A//hcc...> table
> browser" become invalid. If I click on them, nothing will happen, as if
> they are blocked.
> 2. The old "http" link still works, which I think it shouldn't, because I
> added the "RequestHeader ... https" line in the Apache configuration. I
> really want to disable the http link because new users could easily be
> led to the old one.
> Both httpd and Galaxy have been restarted after the changes were made.
> Since I didn't find any similar threads in the mailing list, I hope
> someone here can help me out with this.
Peter Briggs peter.briggs(a)manchester.ac.uk
Bioinformatics Core Facility University of Manchester
B.1083 Michael Smith Bldg Tel: (0161) 2751482
I'm seeing some weird error messages... I googled but didn't see anything.
So, it is during the wigToBigwig conversion:
Dataset generation errors
Dataset 47: Wig/BedGraph-to-bigWig on data 43
Tool execution generated the following error message:
grep: writing output: Broken pipe
grep: writing output: Broken pipe
grep: writing output: Broken pipe
and many more lines of the same error
The tool produced the following additional output:
Couldn't open tool-data/shared/ucsc/chrom/mm10.len, No such file or directory
but it is there:
ls -l tool-data/shared/ucsc/chrom/mm10.len
-rw-rw-r-- 1 bioinfoadmin bioinfoadmin 1405 Oct 9 11:33
From the Galaxy server log, I found these:
galaxy.tools WARNING 2013-11-15 17:46:43,200 Failed to resolve dependency
on 'ucsc_tools', ignoring
galaxy.jobs.runners.local DEBUG 2013-11-15 17:46:43,255 (216) executing:
grep -v "^track"
| wigToBigWig stdin tool-data/shared/ucsc/chrom/mm10.len
-blockSize=256 -itemsPerSlot=1024 -clip 2>&1 || echo "Error running
and then setmetadata set the dataset state to ERROR.
I did verify that the resulting dataset_473.dat has a size of 0.
However, when I run the command alone:
grep -v "^track"
| wigToBigWig stdin tool-data/shared/ucsc/chrom/mm10.len
-blockSize=256 -itemsPerSlot=1024 -clip
it successfully generated the dataset_473.dat.
Also, I looked into wig_to_bigwig.xml in tools/filters; the command is:
grep -v "^track" $input1 | wigToBigWig stdin $chromInfo $out_file1
However, I don't see where this "$chromInfo" was defined, yet it indeed has
the correct value: tool-data/shared/ucsc/chrom/mm10.len. How does that work?
It is very puzzling... I'm not sure if anyone has seen this before; please
let me know!
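One possible explanation (an assumption, not confirmed from the log): Galaxy runs each job from a job working directory, so a relative path like tool-data/shared/ucsc/chrom/mm10.len resolves against that directory rather than the Galaxy root, which would also explain why the same command succeeds when run manually from the root. The "Broken pipe" messages from grep would then be secondary: once wigToBigWig exits early, grep's writes into the pipe fail. A minimal sketch of the path behaviour (all paths here are temporary stand-ins):

```shell
#!/bin/sh
# Sketch: a relative path resolves against the current working directory,
# so a file that exists under the Galaxy root appears "missing" when the
# job runs from a different (scratch / job working) directory.
root=$(mktemp -d)     # stand-in for the Galaxy root
jobdir=$(mktemp -d)   # stand-in for a job working directory
mkdir -p "$root/tool-data/shared/ucsc/chrom"
printf 'chr1\t195471971\n' > "$root/tool-data/shared/ucsc/chrom/mm10.len"

cd "$root"   && ls tool-data/shared/ucsc/chrom/mm10.len            # found
cd "$jobdir" && ls tool-data/shared/ucsc/chrom/mm10.len || true    # fails: No such file or directory
```

If that is the cause, using an absolute path for the chrom len directory (len_file_path in universe_wsgi.ini) is the usual remedy.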
In the current version of the Galaxy stable branch (5c789ab4144a), the tool
panel is empty when installed (freshly and out of the box) on my
production server (Ubuntu Server 12.04). If I copy this Galaxy folder
onto my laptop, the tool panel shows up (Ubuntu 13.10).
Do you have an idea of what could be the problem?
Dr. Yec'han LAIZET
Tel: +33 (0)5 57 12 27 75
Plateforme Genome Transcriptome
INRA-UMR BIOGECO 1202
69 route d'Arcachon
Today I installed the SSL module for our local Galaxy instance and the "https://" link is working fine. I added this
RequestHeader set X-URL-SCHEME https
in our Apache configuration file as instructed in this webpage: http://wiki.galaxyproject.org/Admin/Config/Apache%20Proxy
Here are my problems with SSL:
1. Some built-in links like "UCSC Main<http://genome.ucsc.edu/cgi-bin/hgTables?GALAXY_URL=https%3A//hcc-galaxy.u...> table browser" and "UCSC Test<http://genome-test.cse.ucsc.edu/cgi-bin/hgTables?GALAXY_URL=https%3A//hcc...> table browser" become invalid. If I click on them, nothing happens, as if they are blocked.
2. The old "http" link still works, which I think it shouldn't, because I added the "RequestHeader ... https" line in the Apache configuration. I really want to disable the http link because new users could easily be led to the old one.
Both httpd and Galaxy have been restarted after the changes were made. Since I didn't find any similar threads in the mailing list, I hope someone here can help me out with this.
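Re problem #2: the X-URL-SCHEME header only tells Galaxy how to build its own URLs; it does not stop Apache from serving plain http. One common way to disable the http link is a redirect at the Apache level. A sketch, assuming a standard name-based virtual host (the hostname is a placeholder):

```apache
<VirtualHost *:80>
    ServerName galaxy.example.org
    # Send every plain-http request to the https site (mod_alias)
    Redirect permanent / https://galaxy.example.org/
</VirtualHost>
```

With this in place, anyone following an old http bookmark lands on the https site automatically.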
We're using Galaxy Cloud -> New Cloud Cluster to create instances of Galaxy for training on Wednesday 20 November.
When we try to view a bam file using Trackster, we get the following error:
/opt/sge/default/spool/execd/ip-10-32-111-66/job_scripts/14: line 13: bedtools: command not found
/opt/sge/default/spool/execd/ip-10-32-111-66/job_scripts/14: line 13: bedGraphToBigWig: command not found
Please let us know how to proceed.
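"command not found" in an SGE job script usually means the tool is not on the PATH of the shell the execution host spawns for the job, even if it is on the PATH of your interactive login. A quick check you could adapt (tool names taken from the error above; to test the job environment itself, submit this via qsub rather than running it interactively):

```shell
#!/bin/sh
# Report whether the tools Trackster needs are visible to a plain shell.
for tool in bedtools bedGraphToBigWig; do
    if command -v "$tool" >/dev/null 2>&1; then
        echo "$tool: $(command -v "$tool")"
    else
        echo "$tool: NOT on PATH"
    fi
done
```

If the tools are installed but not found, the fix is typically to extend PATH in the environment the job runner passes to worker nodes.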
Lynn Young, PhD | Informationist/Bioinformatics Scientist | National Institutes of Health Library | Office of Research Services | Building 10, Room B1L-306D | 10 Center Drive | Bethesda, Maryland 20892-1150 | T. 301-594-6426 | F. 301-402-0254 | E. lynny(a)mail.nih.gov<mailto:email@example.com> | http://twitter.com/bioinformatics1
I have a new wrapper for samtools idxstats with a working
unit test via run_functional_tests.sh run locally or on TravisCI:
However, this tool's test is failing on the Test Tool Shed:
Tool test results
Automated test environment
Time tested: ~ 11 hours ago
System: Linux 3.8.0-30-generic
Python version: 2.7.4
Galaxy revision: 11284:28469a503b56
Galaxy database version: 117
Tool shed revision:
Tool shed database version:
Tool shed mercurial version:
Tests that failed
Tool id: samtools_idxstats
Tool version: samtools_idxstats
Fatal error: Exit code 1 ()
Input BAI file not found: None
Traceback (most recent call last):
line 216, in test_tool
self.do_it( td, shed_tool_id=shed_tool_id )
line 28, in do_it
self.__verify_outputs( testdef, shed_tool_id, data_list )
line 134, in __verify_outputs
self.__verify_output( output_tuple, shed_tool_id, elem,
line 141, in __verify_output
self.verify_dataset_correctness( outfile, hid=elem_hid,
attributes=attributes, shed_tool_id=shed_tool_id )
line 782, in verify_dataset_correctness
self._assert_dataset_state( elem, 'ok' )
line 606, in _assert_dataset_state
raise AssertionError( errmsg )
AssertionError: Expecting dataset state 'ok', but state is 'error'.
Dataset blurb: error
It appears that the upload has not generated the *.bai index and
assigned it to the variable input_bam.metadata.bam_index
(but this works via run_functional_tests.sh for me):
$ ls test-data/ex1.*
$ ./run_functional_tests.sh -id samtools_idxstats
Ran 1 test in 32.400s
(Note that the bai file does not seem to be needed)
Tested with this revision, which works:
$ hg branch
$ hg log | head
user: John Chilton <jmchilton(a)gmail.com>
date: Sun Nov 10 23:37:56 2013 -0600
summary: PEP-8 cleanups of lib/galaxy/security/__init__.py.
Updated to current tip, also works:
$ hg branch
[galaxy@ppserver galaxy-central]$ hg log | head
date: Mon Nov 11 16:00:10 2013 -0500
summary: UI: Fix tooltip placement for masthead icons
Are there any known differences on the Test Tool Shed
which could explain this failure?
I'd like to be able to write some simple <test> entries for
some of the BLAST+ tools using composite datatypes
as input or output (i.e. small BLAST databases). This
doesn't seem to be mentioned or hinted at on the wiki:
Is it possible to use a composite datatype as a test input?
If so, how? Normal datatypes are loaded into the test history
using the upload tool - does that mean I first need to
extend the relevant datatypes to allow them to be uploaded?
Example: Run blastp using a small query FASTA file and
a small database, and check the output (e.g. tabular).
Is it possible to use a composite datatype as a test output?
If so how?
Example: Run makeblastdb using a small FASTA file, and
check the output (a small BLAST database).
There have been a few posts lately about doing distributed computing via Galaxy - i.e.
job splitters etc. Below is a contribution of some ideas we have developed
and applied in our work, where we have arranged for some Galaxy tools to execute in parallel
on our cluster.
We have developed a job-splitter script "tardis.py" (available from
https://bitbucket.org/agr-bifo/tardis), which takes marked-up
standard unix commands that run an application or tool. The mark-up is
prefixed to the input and output command-line options. Tardis strips off the
mark-up, and re-writes the commands to refer to split inputs and outputs, which are then
executed in parallel e.g. on a distributed compute resource. Tardis knows
the output files to expect and how to join them back together.
(This was referred to in our GCC2013 talk.)
Any reasonable unix based data processing or analysis command may be marked up and run
using tardis, though of course tardis needs to know how to split and join the data. Our approach
also assumes a "symmetrical" HPC cluster configuration, in the sense that each node sees the same
view of the file system (and has the required underlying application installed). We use tardis
to support both Galaxy and command-line based compute.
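The underlying split / parallel apply / join pattern that tardis automates can be sketched generically with standard unix tools (this is just the idea, not tardis's actual mark-up or implementation):

```shell
#!/bin/sh
# Generic split -> parallel apply -> join pattern.
cd "$(mktemp -d)"
seq 1 1000 > input.txt
split -l 250 input.txt chunk_                    # split input into 4 chunks
for c in chunk_*; do
    awk '{ print $1 * 2 }' "$c" > "$c.out" &     # one process per chunk ("the tool")
done
wait                                             # barrier: all chunks finished
cat chunk_*.out > output.txt                     # join outputs (glob order = input order)
wc -l < output.txt                               # same number of lines as the input
```

tardis adds the pieces this sketch glosses over: format-aware splitting and joining (e.g. FASTQ records rather than raw lines), submission to the cluster instead of local background jobs, and the housekeeping described below.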
Background / design pattern / motivating analogy: Galaxy provides a high-level
"end to end" view of a workflow; the HPC cluster resource that one uses then involves
spraying chunks of data out into parallel processes, usually in the form of some kind of
distributed compute cluster - but an end-user looking at a Galaxy history should ideally not be able
to tell whether the workflow was run as a single process on the server, or
via many parallel processes on the cluster (apart from the fact that when run
in parallel on the cluster, it's a lot faster!). We noticed that the TCP/IP layered networking
protocol stack provides a useful metaphor and design pattern - with the "end to end" topology
of a Galaxy workflow corresponding to the transport layer of TCP/IP, and the distribution
of computation across a cluster corresponding to the next TCP/IP layer down - the packet-routing layer.
This picture suggested a strongly layered approach to provisioning
Galaxy with parallelised compute on split data, and hence an approach in which the
footprint in the Galaxy code-base of parallel / distributed compute support should ideally
(from the layered-design point of view) be minimal and superficial. Thus in our approach so far,
the only footprint is in the tool config files, where we arrange the templating to
(optionally) prefix the required tardis mark-up to the input and output command options, and
the tardis script name to the command as a whole. tardis then takes care of rewriting and
launching all of the jobs, and finally joining the results back together and putting them where
Galaxy expects them to be (along with housekeeping such as collating and passing up stderr and stdout, and
appropriate process exit codes). (For each Galaxy job, tardis creates a working folder in a designated
scratch area, where input files are uncompressed and split, job files and their output
are stored, logging is done, etc. Split data is cleaned up at the end unless there
was an error in some part of the job, in which case everything is retained
for debugging and in some cases restart.)
(We modify Galaxy tool-configs so that the user can optionally choose to run
the tool on our HPC cluster - there are three HPC-related input fields appended
to the input section of a tool. Here the user selects whether they want to use
our cluster and, if so, specifies the chunk size; they can also at that point
specify a sampling rate, since we often find it useful to be able to run preliminary
analyses on a random sample of (for example) single or paired-end NGS sequence
data, to obtain a fairly quick snapshot of the data before the expense of a
complete run. We found it convenient to include support for input sampling here.)
The pdf document at https://bitbucket.org/agr-bifo/tardis includes a number of
examples of marking up a command, and also a simple example of a galaxy tool-config that
has been modified to include support for optionally running the job on our HPC cluster
via the tardis pre-processor.
* we have not yet attempted to integrate our approach with the existing Galaxy job-splitting /
distributed compute support, partly because of our "layered" design goal (admittedly also partly
because of ignorance about its details!)
* our current implementation is quite naive in the distributed compute API
it uses - it supports launching condor job files (and also native sub-processes) - our plan
is to replace that with the drmaa API
* we would like to integrate it better with the galaxy type system, probably via
a galaxy-tardis wrapper
We would be keen to contribute our approach to Galaxy if people are interested.
Bioinformatics Software Engineer