Galaxy local installation
by Yinan Wan
Hi,
I was trying to install Galaxy on my laptop and ran into the following
error:
Some eggs are out of date, attempting to fetch...
Warning: MarkupSafe (a dependent egg of Mako) cannot be fetched
Warning: pycrypto (a dependent egg of Fabric) cannot be fetched
Warning: simplejson (a dependent egg of WebHelpers) cannot be fetched
Fetched http://eggs.galaxyproject.org/ssh/ssh-1.7.14-py2.7.egg
One of Galaxy's managed eggs depends on something which is missing, this is
almost certainly a bug in the egg distribution.
Dependency "ssh" requires "pycrypto>=2.1,!=2.4"
Traceback (most recent call last):
File "./scripts/fetch_eggs.py", line 37, in <module>
c.resolve() # Only fetch eggs required by the config
File
"/Users/yinan/Course2013Spring/IBOIS58A/galaxy-dist/lib/galaxy/eggs/__init__.py",
line 345, in resolve
egg.resolve()
File
"/Users/yinan/Course2013Spring/IBOIS58A/galaxy-dist/lib/galaxy/eggs/__init__.py",
line 168, in resolve
dists = pkg_resources.working_set.resolve( (
self.distribution.as_requirement(), ), env, self.fetch )
File "build/bdist.linux-i686/egg/pkg_resources.py", line 569, in resolve
raise VersionConflict(dist,req) # XXX put more info here
pkg_resources.VersionConflict: (ssh 1.7.14
(/Users/yinan/Course2013Spring/IBOIS58A/galaxy-dist/eggs/ssh-1.7.14-py2.7.egg),
Requirement.parse('pycrypto>=2.1,!=2.4'))
Fetch failed.
Because the numpy I have installed seems to conflict with Galaxy, I
installed inside a virtual Python environment (virtualenv), and I also
tried the environment setup recommended on the Galaxy production server page
(http://wiki.galaxyproject.org/Admin/Config/Performance/ProductionServer);
both ran into the same error shown above.
Some information you may be interested in:
(GalaxyENV)yinan@Macbook galaxy-dist$ python -V
Python 2.7.3
(GalaxyENV)yinan@Macbook galaxy-dist$ python scripts/get_platforms.py
macosx-10.7-x86_64-ucs2
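In case it is useful, this is roughly how I re-run the egg fetch from inside
the virtualenv (just a sketch of my setup; it assumes the GalaxyENV
virtualenv lives alongside galaxy-dist):
# activate the virtualenv shown in the prompt above, then re-run the egg fetch
source GalaxyENV/bin/activate
cd galaxy-dist
python scripts/fetch_eggs.py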
It would be great if you could help me out.
Thanks,
Yinan
--
Yinan Wan, graduate student
Bioinformatics and Genomics program
Huck Institutes of Life Sciences
the Pennsylvania State University
How to test DRMAA configuration?
by Joshua Orvis
I have a working local Galaxy instance and wanted to enable DRMAA support
to utilize our SGE (or LSF) grid. Following the guide here
<http://wiki.galaxyproject.org/Admin/Config/Performance/Cluster>, I set up
what appeared to be needed to make this work: the DRMAA_LIBRARY_PATH
env variable, all the configuration settings in universe_wsgi.ini,
reconfiguring the server hosting Galaxy as a submit host, etc. Some
specific config file changes I made:
new_file_path = /seq/gscidA/www/gscid_devel/htdocs/galaxy-dist/database/tmp
start_job_runners = drmaa
default_cluster_job_runner = drmaa:///
set_metadata_externally = True
outputs_to_working_directory = True
I then killed and restarted the Galaxy instance and tried a simple FASTQ ->
FASTA test execution, but it ran locally. I couldn't find any sort of
errors or messages related to DRMAA in the server log, and the job ran to
completion. I commented out the local tool runner overrides. What can I
do to test my DRMAA configuration and where should I look for errors?
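For example, would a low-level check along these lines be a reasonable
starting point (just a sketch, assuming the drmaa Python bindings Galaxy
uses are importable from the Galaxy environment)?
# confirm the library path is set and that a DRMAA session can be opened against the DRM
echo $DRMAA_LIBRARY_PATH
python -c "import drmaa; s = drmaa.Session(); s.initialize(); print 'DRMAA session OK'; s.exit()"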
Thanks -
Joshua
Custom tool needs a directory as input
by David Angot
Hello,
I have a tool that needs a directory containing a bunch of files as an
input: mytool -d $directory -p param1 -o $output ...
I thought the best approach would be to use a zip file that I would
temporarily uncompress in a bash wrapper, roughly like the sketch below.
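A minimal sketch of that wrapper idea (the tool name and arguments are the
hypothetical ones from above):
#!/bin/bash
# unpack the uploaded archive into a temporary directory, run the tool
# against that directory, then clean up
# $1 = uploaded archive, $2 = param1, $3 = output path
tmpdir=$(mktemp -d)
unzip -q "$1" -d "$tmpdir"
mytool -d "$tmpdir" -p "$2" -o "$3"
rm -rf "$tmpdir"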
But when I upload the zip file, Galaxy unzips it.
What would you recommend to accomplish such a task? I want the tool to be
installable from a Tool Shed, so I don't want to modify the upload.py
script or any other files.
Thanks,
David
How to distribute our local installation via git (logistics)
by Perez, Ricardo
Hello,
We want to be able to distribute Galaxy configured with the UCSC browser, especially for people who are not too UNIX savvy. Since we are planning to do this via git, we had a question about how to set it up: how can we set up the git server so that it does not expose important files that are specific to our installation? For example, our universe_wsgi.ini file contains passwords for our database that we do not want to distribute; however, we do want to distribute universe_wsgi.ini in releases so people can pick up important changes. We were thinking that maybe we should have a test machine to make sure changes do not break Galaxy, and then push the changes to our server machine and to our git server. However, we are not sure this is the most efficient way to do it. Does anyone know how we can accomplish this without overcomplicating things?
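For instance, would a pattern like the following make sense (a rough sketch; the template file name is just illustrative)?
# keep the real config, with passwords, out of git and track a sanitized template instead
cp universe_wsgi.ini universe_wsgi.ini.dist   # strip passwords/secrets from the copy by hand
echo "universe_wsgi.ini" >> .gitignore
git add universe_wsgi.ini.dist .gitignore
git commit -m "Distribute a sanitized config template"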
Thank you for your time and patience,
--Ricardo Perez
GCC2013 Talk Abstract Submission closes this Friday
by Dave Clements
Hello all,
This is a final reminder that the *deadline for submitting an oral
presentation abstract is this Friday, 12 April*. See
http://wiki.galaxyproject.org/Events/GCC2013/Abstracts
The poster presentation abstract deadline is 3 May.
And in case you missed the announcement about GigaScience and GCC2013 talks:
We are pleased to announce that talks presented at the 2013 Galaxy
Community Conference will be eligible for consideration to be published in
the journal *GigaScience* <http://www.gigasciencejournal.com/>, and that
BGI <http://www.genomics.cn/en/index> will generously cover the article
processing fees for these articles. See the announcement
<http://wiki.galaxyproject.org/News/GigaScienceGalaxyCFP> for details.
Thanks,
Dave C
On Mon, Feb 25, 2013 at 3:40 AM, Dave Clements
<clements(a)galaxyproject.org> wrote:
> Dear Galaxy Community,
>
> We are pleased to announce that early registration
> <http://wiki.galaxyproject.org/Events/GCC2013/Register> and paper and poster
> abstract submission <http://wiki.galaxyproject.org/Events/GCC2013/Abstracts>
> are now open for the 2013 Galaxy Community Conference (GCC2013)
> <http://wiki.galaxyproject.org/Events/GCC2013>. GCC2013 will be held
> 30 June through 2 July in Oslo, Norway, at the University
> of Oslo <http://uio.no/>.
>
> GCC2013 <http://wiki.galaxyproject.org/Events/GCC2013> is an opportunity
> to participate in two full days of presentations, discussions, poster
> sessions, keynotes, lightning talks and breakouts, all about
> high-throughput biology and the tools that support it. The conference also
> includes a Training Day <http://wiki.galaxyproject.org/Events/GCC2013/TrainingDay> for
> the second year in a row, this year with more in-depth topic coverage, more
> concurrent sessions, and more topics.
>
> If you are a biologist or bioinformatician performing or enabling
> high-throughput biological research, then please consider attending.
> GCC2013 is aimed at:
>
> - Bioinformatics tool developers and data providers
> - Workflow developers and power bioinformatics users
> - Sequencing and Bioinformatics core staff
> - Data archival and analysis reproducibility specialists
>
> *Early registration <http://wiki.galaxyproject.org/Events/GCC2013/Register>
> saves up to 75% off regular registration costs,* and is very
> affordable, with combined registration (Training Day
> <http://wiki.galaxyproject.org/Events/GCC2013/TrainingDay> + main meeting)
> starting at ~€95 for post-docs and students. Registering
> early also assures you a spot in the Training Day workshops you want to
> attend. Once a Training Day session becomes full, it will be closed to new
> registrations. Early registration closes 24 May.
>
> *Abstract submission* <http://wiki.galaxyproject.org/Events/GCC2013/Abstracts>
> for oral presentations closes 12 April, and for posters on 3 May.
> Please consider presenting your work. If you are working with big
> biological data, then the people at this meeting want to hear about your
> work.
>
> Thanks, and hope to see you in Oslo!
>
> The GCC2013 Organizing Committee <http://wiki.galaxyproject.org/Events/GCC2013/Organizers>
>
> PS: And please help get the word out
> <http://wiki.galaxyproject.org/Events/GCC2013/Promotion>!
> --
> http://galaxyproject.org/GCC2013
> http://galaxyproject.org/
> http://getgalaxy.org/
> http://usegalaxy.org/
> http://wiki.galaxyproject.org/
>
--
http://galaxyproject.org/GCC2013
http://galaxyproject.org/
http://getgalaxy.org/
http://usegalaxy.org/
http://wiki.galaxyproject.org/
removing inputs from a pipeline
by Philipe Moncuquet
Hi Dev,
Is there a way to automatically delete inputs in a pipeline? Let's say you
don't want to keep your raw data because of quota issues, and you want your
pipeline to delete it rather than doing it manually.
Regards,
Philippe
setup grid engine
by Jingzhi Zhu
I have downloaded the 04_01 release and tried to configure Sun Grid Engine so that jobs can run on our cluster.
I have exported the DRMAA_LIBRARY_PATH environment variable (export DRMAA_LIBRARY_PATH=/home/love-galaxy/bin/libdrmaa.so.1.0), then copied job_conf.xml.sample_advanced to job_conf.xml. I have deleted some lines in the plugins section so that it looks like this in job_conf.xml:
<plugins workers="4">
    <!-- "workers" is the number of threads for the runner's work queue.
         The default from <plugins> is used if not defined for a <plugin>.
    -->
    <plugin id="local" type="runner" load="galaxy.jobs.runners.local:LocalJobRunner"/>
    <plugin id="drmaa" type="runner" load="galaxy.jobs.runners.drmaa:DRMAARunner"/>
</plugins>
run.sh then returns the following error:
galaxy.tools.imp_exp DEBUG 2013-04-08 15:57:12,465 Loaded history export tool: __EXPORT_HISTORY__
galaxy.tools.imp_exp DEBUG 2013-04-08 15:57:12,466 Loaded history import tool: __IMPORT_HISTORY__
galaxy.tools.genome_index DEBUG 2013-04-08 15:57:12,472 Loaded genome index tool: __GENOME_INDEX__
galaxy.jobs.manager DEBUG 2013-04-08 15:57:12,474 Starting job handler
galaxy.jobs.runners DEBUG 2013-04-08 15:57:12,475 Starting 4 LocalRunner workers
galaxy.jobs DEBUG 2013-04-08 15:57:12,477 Loaded job runner 'galaxy.jobs.runners.local:LocalJobRunner' as 'local'
Traceback (most recent call last):
File "/net/rowley/ifs/data/love/love-galaxy/galaxy-dist/lib/galaxy/webapps/galaxy/buildapp.py", line 37, in app_factory
app = UniverseApplication( global_conf = global_conf, **kwargs )
File "/net/rowley/ifs/data/love/love-galaxy/galaxy-dist/lib/galaxy/app.py", line 159, in __init__
self.job_manager = manager.JobManager( self )
File "/net/rowley/ifs/data/love/love-galaxy/galaxy-dist/lib/galaxy/jobs/manager.py", line 31, in __init__
self.job_handler = handler.JobHandler( app )
File "/net/rowley/ifs/data/love/love-galaxy/galaxy-dist/lib/galaxy/jobs/handler.py", line 29, in __init__
self.dispatcher = DefaultJobDispatcher( app )
File "/net/rowley/ifs/data/love/love-galaxy/galaxy-dist/lib/galaxy/jobs/handler.py", line 543, in __init__
self.job_runners = self.app.job_config.get_job_runner_plugins()
File "/net/rowley/ifs/data/love/love-galaxy/galaxy-dist/lib/galaxy/jobs/__init__.py", line 476, in get_job_runner_plugins
runner_class = getattr( module, class_name )
AttributeError: 'module' object has no attribute 'DRMAARunner'
Can someone point out what is going on here? If you have configured SGE successfully with the latest 4.1 release, can you show me what job_conf.xml should look like? There are a lot of tags in this new XML file and I find it hard to get right.
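In case it helps with diagnosis, this is the kind of check I can run to compare the runner class names the module actually defines against what my job_conf.xml references (just a sketch):
# list the classes defined in the drmaa runner module referenced by the plugin line
grep -n "^class" lib/galaxy/jobs/runners/drmaa.py
# and the runner class names currently referenced in job_conf.xml
grep -n "load=" job_conf.xml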
Thanks!
Jingzhi
Galaxy + ProFTPd - proper configuration / uploaded files not accessible in Galaxy
by Jan Hapala
Hi,
I am new to Galaxy. I have managed to set up my Galaxy server with
PostgreSQL and ProFTPd.
I just do not know how to configure the system properly in order to get
uploaded files into Galaxy.
Problems:
1. I cannot see the files uploaded through ProFTPd from Galaxy.
2. I cannot upload any file (even a tiny one) through the Galaxy web interface --
Galaxy puts the upload task in the queue and the arrow keeps blinking (as if it
is uploading), but it never finishes.
Problem no. 1 is solved when I set the current user's home directory access
rights to 777, but that does not seem to be the best solution (the
ProFTPd config file shown at
http://wiki.galaxyproject.org/Admin/Config/Upload%20via%20FTP is more
restrictive than that).
I run Galaxy under a UNIX user called galaxyuser, and I log into Galaxy as
someone(a)somewhere.com.
The access rights on the folders are as follows:
drwx--x--x 3 galaxyuser root 4096 dub 5 12:05 /var/ftp_uploads/
/var/ftp_uploads/:
total 4
drwx------ 2 65533 65533 4096 dub 8 15:38 someone(a)somewhere.com
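Would something along these lines be a more sensible alternative to 777 (a
rough sketch; it assumes Galaxy only needs group read/execute access and
that galaxyuser has a group of the same name)?
# give the galaxyuser group traversal/read access instead of opening the directories to everyone
chgrp -R galaxyuser /var/ftp_uploads
chmod 751 /var/ftp_uploads
chmod 750 "/var/ftp_uploads/someone@somewhere.com"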
I would appreciate any suggestions.
Best regards,
Jan
History periodically disappears in AWS Cloudman installation
by Greg Edwards
Hi,
We're running a private Cloudman Galaxy on AWS for small-scale proteomics
work. Lately the whole History of the main user ID we use has occasionally
disappeared, i.e. on login the History is empty. The datasets aren't hiding
in "Deleted Datasets". They appear to still be there
in /mnt/galaxyData/files/000. They're not in the Anonymous (not logged in)
ID, and they're not in another ID. The data doesn't come back later. We reload
the latest datasets in use and the numbering in the History restarts from
1. We're running the most basic config, with the simple single-threaded
database.
Nothing of interest seems to be in the various Cloudman logs.
I've searched the archives for "lost/deleted/disappeared datasets/history"
etc., but nothing useful turned up.
This is our rev status ..
UBUNTU /mnt/galaxyTools/galaxy-central $ hg summary
parent: 8116:ecd131b136d0 tip
libraries: fix in query for 'datasets_are_public'
branch: default
commit: 2 modified, 268 unknown
update: (current)
This is a vague fault report, but I'd appreciate any pointers here.
Many thanks ...
--
Greg Edwards,
Port Jackson Bioinformatics
gedwards2(a)gmail.com
April 8, 2013 Galaxy Security Release
by Nate Coraor
A security vulnerability was recently discovered that would allow a malicious person to delete the contents of a history that does not belong to them. The vulnerability was in a method designed to allow users to switch between histories as a convenience from certain messages and is not used for most normal history switching operations. A fix has been provided in the stable branch of the Galaxy distribution and tagged as security_2013.04.08. Administrators are strongly encouraged to upgrade to at least the listed tag or to implement one of the provided workarounds. This changeset *is not* included in the most recent (April 1, 2013) stable release.
To upgrade:
% hg pull
% hg update security_2013.04.08
For Galaxy installations on relatively old versions that administrators are not yet ready to upgrade, there are two workarounds. First, the patch can be downloaded and applied manually:
% wget -O history.patch https://bitbucket.org/galaxy/galaxy-central/commits/883216b2eaf2ff05b7771...
and then:
% hg patch history.patch
or:
% patch -p1 < history.patch
In this case, the Galaxy server process(es) must be restarted for the change to take effect.
Second, access to this method via the upstream proxy server can be denied. Note that anyone with the ability to bypass the proxy server via directly connecting to the Galaxy application's web port(s) would still be able to call this method. This is possible for anyone with direct login access to the server, or even remotely if the Galaxy application binds to all addresses (instead of just localhost) unless restricted by other means (e.g. a firewall).
For Apache:
<Location ~ "/history/switch_to_history.*">
Deny from all
</Location>
For nginx:
location ~ /history/switch_to_history.* {
deny all;
}
If using a proxy prefix (e.g. "/galaxy"), that prefix should be prepended to "/history".
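For example, with a "/galaxy" prefix the nginx location above would become:
location ~ /galaxy/history/switch_to_history.* {
deny all;
}
(and similarly for the Apache Location match).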
--nate
Galaxy Team