Do you know a fast way to open large TIFF files in Galaxy? The sniffer recognizes the type, but clicking on the eyeball for a large 82 MB file just hangs.
Downloading the file and opening it in Preview does work, but it is clunky. Perhaps storing a low-res version that would be displayed when clicking the eyeball would speed up access.
We want to put some more hard drives into your Galaxy server, and we
plan to use the hierarchical ObjectStore for that purpose.
The new disks will be type="distributed" id="primary" and the old disk
will become type="disk" id="secondary". If I understood correctly, the
old disk is then in a read-only state and will not be touched until the
primary disks are full or not working ...
Is it safe to mount the old disk, or all disks, with noatime, to get a
small performance gain? Does Galaxy use noatime?
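For reference, a hierarchical ObjectStore with a distributed primary and a disk secondary, as described above, is usually declared along these lines in object_store_conf.xml (the paths and weights here are made up; compare against object_store_conf.xml.sample in your Galaxy distribution):

```xml
<object_store type="hierarchical">
    <backends>
        <!-- New data is written to the weighted primary disks -->
        <object_store type="distributed" id="primary" order="0">
            <backends>
                <backend id="new1" type="disk" weight="1">
                    <files_dir path="/new_disk1/galaxy/files"/>
                </backend>
                <backend id="new2" type="disk" weight="1">
                    <files_dir path="/new_disk2/galaxy/files"/>
                </backend>
            </backends>
        </object_store>
        <!-- The old disk is consulted only when a dataset is not
             found on the primary backends -->
        <object_store type="disk" id="secondary" order="1">
            <files_dir path="/old_disk/galaxy/files"/>
        </object_store>
    </backends>
</object_store>
```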
Dear Dr. Johannes,
Many thanks, but I already sent the same issue to galaxy-dev(a)lists.bx.psu.edu, the Galaxy mailing list, without a reply.
I am so sorry for the disturbance. Best Regards,
https://www.mcpvirtualbusinesscard.com/VBCServer/elsayedhegazy/interactiv...
El-Sayed Hegazy
IT Manager at ALHAYAH Labs
Bioinformatician - Software developer
Mobile: +2 0100 6 99 88 36
e-mail: elsayedhegazy(a)live.com
> Date: Tue, 3 Dec 2013 10:15:22 +0100
> From: johannes.eichner(a)uni-tuebingen.de
> To: elsayedhegazy(a)live.com
> CC: andraeger(a)eng.ucsd.edu
> Subject: Re: [Galaxy-ZBIT] Help needed in Galaxy Uploading files.
> Dear Mr. Hegazy,
> please note that we only provide support for the customized Galaxy
> instance hosted at http://webservices.cs.uni-tuebingen.de.
> If you have any questions concerning the main Galaxy instance which
> also offers the NGS TOOLBOX BETA (https://usegalaxy.org) please contact
> the core development team by joining the galaxy mailing list
> (galaxy-dev(a)lists.bx.psu.edu) as described here:
> Kind regards,
> On 12/03/2013 08:37 AM, El-Sayed Hegazy wrote:
> > Dear Dr. Andreas,
> > When I use the Galaxy website, go to the Analyze Data section, and navigate through the tools, I find the NGS toolbox used for next-generation sequencing data analysis: from quality control (groomer, trimming, and filtering) through mapping and more with Illumina raw data files. It is actually part of Galaxy's services, called NGS TOOLBOX BETA.
> > I need to clarify what I am missing.
> > Many thanks.
> > Best Regards,
> >> Date: Mon, 2 Dec 2013 09:28:10 -0800
> >> From: andraeger(a)eng.ucsd.edu
> >> To: elsayedhegazy(a)live.com; galaxy(a)listserv.uni-tuebingen.de
> >> CC: mahef111(a)gmail.com
> >> Subject: Re: [Galaxy-ZBIT] Help needed in Galaxy Uploading files.
> >> Dear El-Sayed Hegazy,
> >> Due to the long Thanksgiving weekend I could not reply to you earlier.
> >> Thank you for your patience. I now downloaded your file and had a look
> >> into it. What I realized from your e-mail is that you are interested in
> >> next generation sequencing, NGS, which is actually not part of our
> >> services. We do not provide any tools that facilitate handling this kind
> >> of data. So, I would like to know what you intended to do with your
> >> file; i.e., to which tool did you want to upload it and what was your
> >> envisaged analysis?
> >> Thanks
> >> --
> >> Dr. Andreas Draeger
> >> University of California, San Diego, La Jolla, CA 92093-0412, USA
> >> Bioengineering Dept., Systems Biology Research Group, Office #2506
> >> Phone: +1-858-534-9717, Fax: +1-858-822-3120, twitter: @dr_drae
> > _______________________________________________
> > Galaxy mailing list
> > Galaxy(a)listserv.uni-tuebingen.de
> > https://listserv.uni-tuebingen.de/mailman/listinfo/galaxy
I hope this is relevant to the dev list, but if it is not, please let me know.
I was trying to estimate active users out of our registered users. I ended up
counting users that had a job during the last month. How would you guys do it?
During this process I also wanted to see how we were doing in comparison to
other instances. I set up a simple survey and would greatly appreciate it if
instance admins would take 2 minutes to complete it. I will publish the results here
(or maybe in some more relevant place, let me know) at the end of the month.
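To make the "users with a job during the last month" count concrete, here is a small sketch of the logic, assuming you can pull (user_id, create_time) pairs from Galaxy's `job` table (column names from memory, so verify them against your schema):

```python
from datetime import datetime, timedelta

def count_active_users(jobs, days=30, now=None):
    """Count distinct users with at least one job in the last `days` days.

    `jobs` is an iterable of (user_id, create_time) pairs, e.g. the result
    of: SELECT user_id, create_time FROM job;
    Anonymous jobs (user_id IS NULL) are excluded.
    """
    now = now or datetime.utcnow()
    cutoff = now - timedelta(days=days)
    return len({uid for uid, created in jobs
                if uid is not None and created >= cutoff})
```

A user with several recent jobs is counted once, since the IDs are collected in a set before counting.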
I would like to modify the default ZIP bundle behaviour
for composite datatypes, which currently insists on
creating an HTML "main" file. To do this I think the current
monolithic _archive_composite_dataset function needs
to be refactored in file lib/galaxy/datatypes/data.py
I would like to move the section which currently names
and populates the HTML index file into a sub-method,
i.e. this chunk:
path = data.file_name
fname = os.path.split(path)[-1]
efp = data.extra_files_path
htmlname = os.path.splitext(outfname)[0]
if not htmlname.endswith(ext):
    htmlname = '%s_%s' % (htmlname, ext)
archname = '%s.html' % htmlname  # fake the real nature of the html file
...
error = True
log.exception("Unable to add composite parent %s to temporary library download archive" % data.file_name)
msg = "Unable to create archive for download, please report this error"
Then by overriding this new method a subclass (custom
composite datatype) could for example set the filename
used inside the archive to be "index.html" or "index.htm"
or anything else like "README.txt" (doesn't have to be
HTML); alter the contents of the index file; or even not
include an index file in the archive.
Would a pull request to do this be considered?
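To make the proposal above concrete, here is a rough sketch of what the extracted hook could look like. The method name `_archive_main_file` and its signature are hypothetical, not existing Galaxy API; the point is just that the default keeps today's fake-HTML behaviour while a subclass can override it:

```python
class Data:
    """Sketch of galaxy.datatypes.data.Data, reduced to the proposed hook."""

    def _archive_main_file(self, archive, display_name, data_filename):
        """Add the composite dataset's 'main' file to the archive.

        Default mirrors the current behaviour: name the entry
        '<display_name>.html' to fake the real nature of the html file.
        Returns (error, msg).
        """
        archname = '%s.html' % display_name
        archive.add(data_filename, archname)
        return False, ''


class ReadmeComposite(Data):
    """Custom composite datatype shipping a plain-text index instead."""

    def _archive_main_file(self, archive, display_name, data_filename):
        # Override: the index does not have to be HTML at all
        archive.add(data_filename, 'README.txt')
        return False, ''
```

A subclass could equally return an error without adding any index file, which would cover the "no index in the archive" case.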
In my job_conf.xml I defined the following limit:
Galaxy seems to correctly identify the number as 500 GB, since I got the error:
"Job output file grew too large (greater than 500.0 GB), please try different inputs or parameters"
The problem is that the output file of the job is just a few GB (verified by rerunning the command manually). So Galaxy is wrongly stopping the job after just a few seconds, although the actual output size was fine. I disabled the limit for now, but I really want it in place. I just put it because I had a user generating output files of over 4 TB that filled up our storage (quotas only catch them when they are finished, but the user had two concurrent jobs running, both creating such huge files).
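For anyone following along, an output-size limit of the kind described is typically declared in the limits section of job_conf.xml like this (the 500GB value here is only chosen to match the error message quoted above):

```xml
<limits>
    <!-- Kill a job if any single output file grows past this size -->
    <limit type="output_size">500GB</limit>
</limits>
```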
Any help is appreciated! Thanks!
Hello - I am running into errors when trying to run ClustalW and HyPhy on our cloud Galaxy instance.
(1) ClustalW: The program completes without error and produces an empty
file and a log file that states "/bin/sh: 1: clustalw2: not found"
Searching around a bit, I am not seeing clustalw2 previously installed. I
am happy to install it if need be; however, I wanted to check in and see if I am
overlooking the directory or an easy fix.
If I do need to install ClustalW, I would prefer to use Clustal Omega,
but was not sure if I needed to script a brand-new wrapper or if I could
use the previous one that came with the Galaxy image.
(2) HyPhy: On the cloud, the link was initially broken. I changed it with:
sudo ln -sfn /mnt/galaxy/tools/hyphy/default
I am using a FASTA alignment file of 6 protein sequences (566 AA each) as a
test case (it also fails with nucleotide alignments).
It runs briefly and then issues the error: "Single Alignment
Analyses Segmentation fault (core dumped)".
Thank you in advance,
I've been trying to do something similar. There are no doubt others on
this list who can offer more official answers than mine, but I'm happy to
share my experience so far:
1) First of all, do make sure to read through
It's the best starting point for customizing your local production instance.
2) For AD authentication, it is indeed possible to kick that off to the
Apache proxy to handle. I ended up opting out of using it because of some
anomalies and my lack of AD knowledge, and frankly I think there are some
UX disadvantages to using apache's generic authentication prompt.
Personally I'd like to see AD or other authentication handling be served
directly by the Galaxy backend (same login prompt and everything), but
that's just my personal taste. Applicable documentation:
3) I think there are a couple ways to go about this, but none of them are
direct. Actually mounting the shares will probably be the most difficult
part, especially if you need to take permissions into account. On the AD
side, you'll want to first verify that the shared drive is mountable by
the OS of your Galaxy server. Depending on how the share is configured
(AD file permissions vs linux file permissions), you might have some
trouble matching permissions on, say, a linux server.
Assuming you can mount to your Galaxy server, I personally think the
easiest thing to do would be to mount not individual user shares but
rather a single share with individual user directories. You could
potentially take advantage of Galaxy's FTP Upload functionality
(http://wiki.galaxyproject.org/Admin/Config/Upload%20via%20FTP) at this
point, sans the ftp server. Let's say you mounted the AD governed single
point share to /mnt/ad-drive. You could set
'ftp_upload_dir=/mnt/ad-drive' in Galaxy's universe_wsgi.ini file. The
trick here would be to get Galaxy to map user directories under
/mnt/ad-drive in a manner that Galaxy's FTP interface understands. To do
it Galaxy's way, you would want user directories to appear as their full
e-mail address as registered with Galaxy; so, for example
"alexander.kurze(a)bioch.ox.ac.uk" would appear on the share as literally "alexander.kurze(a)bioch.ox.ac.uk".
The fact that users would be registering on galaxy with their AD
credentials would help keep that system in sync. With the ftp_upload_dir
option configured, user(a)bioch.ox.ac.uk would see an extra option in the
Import Data interface to directly import data files from
/mnt/ad-drive/user(a)bioch.ox.ac.uk. As far as exporting data from their
Galaxy account to the shared drive, I've come across a couple tools in the
Galaxy ToolShed that do exactly that.
Alternatively, I don't know if your server is sitting in front of a
cluster, but you could always go down the route of running jobs as the
actual system user, assuming a system configuration where system users are
AD users. See:
Hope some of that is helpful.
Office of Research Information Services (ORIS),
Office of the Chief Information Officer (OCIO),
On 12/1/13 12:00 PM, "galaxy-dev-request(a)lists.bx.psu.edu"
>Date: Thu, 28 Nov 2013 14:06:58 +0000
>From: Alexander Kurze <alexander.kurze(a)bioch.ox.ac.uk>
>To: galaxy-dev <galaxy-dev(a)bx.psu.edu>
>Subject: [galaxy-dev] Active Directory
>Content-Type: text/plain; charset="iso-8859-1"
>I have a Galaxy server set up. I would like a setup where
>people can log in via their Active Directory name and then use their
>share drive to download processed data from the Galaxy server.
>I saw that it is possible via Apache to log in through the AD, but will
>the AD also mount the share drives? If yes, is there a wrapper that
>allows downloading Galaxy histories onto the shared drive?