local_task_queue_workers
by Jorrit Boekel
Hi list,
I need a memory refresher about tasked jobs. When testing some
larger analyses on a local installation, I thought the
local_task_queue_workers setting in universe_wsgi.ini would be the
limiting factor for how many tasks can be executed at the same time. In
our setup, it is currently set to 2. However, 5 tasks run
simultaneously, leading to memory problems.
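For reference, the relevant line in our universe_wsgi.ini looks roughly like this (the section name and the note about the local job runner are from memory, so I may be misremembering):

# in universe_wsgi.ini, [app:main]
# worker threads for the tasked-job queue; we set this to 2
local_task_queue_workers = 2
# the local job runner has its own worker count (local_job_queue_workers,
# default 5 if I remember right), which might be what actually limits concurrency
#local_job_queue_workers = 5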
Am I overlooking something that anyone knows of?
cheers,
jorrit boekel
--
Scientific programmer
Mass spec analysis support @ BILS
Janne Lehtiö / Lukas Käll labs
SciLifeLab Stockholm
Trouble setting up a local instance of Galaxy
by Hazard, E. Starr
Hello,
I am a new user of Galaxy.
I have a Galaxy instance running (sort of) on a local research cluster. I issued the command "hg update stable" today and it retrieved no files, so I presume I am up-to-date on the stable release. I start the instance as a user named "galaxy".
Right now I am still running in “local” mode. I hope to migrate to DRMAA LSF eventually.
I have tried to set up ProFTP to upload files but have not succeeded, so I use the Galaxy web upload.
The upload was working nicely and I had added a couple of new tools and they were working with the uploaded files.
Getting LSF/DRMAA to work was giving me fits and ultimately I deleted all my history files in an effort to start over.
Presently, files being uploaded appear in history as, say, job 1 (in a new history). The job status in the history panel of the web GUI
changes from purple to yellow and then to red, indicating some sort of error. There is no viewable error text captured, but I can click on the "eye" icon and see the
first megabyte of the data (for tiny files I can see the entire content, and it's intact). In the Galaxy file system, however, these files appear but have a different number, say, dataset_399.dat.
On my system the uploaded files appear in /PATH/galaxy-dist/database/files/000
My first question is: why is the data going into the "000" subdirectory and not one "owned" by the user who is uploading?
My second question is: why is the dataset being labeled as dataset_399.dat and not dataset_001.dat?
My third question is: why do the uploaded files not appear as selectable options (say I have paired-end fastq files and a tool wants to offer choices about filenames)? This problem is present for programs that seek one input file as well.
I presume that Galaxy is confused because the numbering in history is not the same as the numbering in the file upload archive (e.g. /PATH/galaxy-dist/database/files/000 in my case), so my last question is: how do I "reset" my system to get the dataset and history numbers to be the same?
Here’s how I launch the Galaxy instance
sh /shared/app/Galaxy/galaxy-dist/run.sh -v --daemon --pid-file=Nov6Localdaemon.pid.txt --log-file=Nov6Local1639daemon.log.txt
Entering daemon mode
Here are the last lines of the log
Starting server in PID 26236.
serving on 0.0.0.0:8089 view at http://127.0.0.1:8089
galaxy.tools.actions.upload_common DEBUG 2013-11-06 16:48:49,624 Changing ownership of /shared/app/Galaxy/galaxy-dist/database/tmp/upload_file_data_QZGHm4 with: /usr/bin/sudo -E scripts/external_chown_script.py /shared/app/Galaxy/galaxy-dist/database/tmp/upload_file_data_QZGHm4 hazards 502
galaxy.tools.actions.upload_common WARNING 2013-11-06 16:48:49,750 Changing ownership of uploaded file /shared/app/Galaxy/galaxy-dist/database/tmp/upload_file_data_QZGHm4 failed: sudo: no tty present and no askpass program specified
galaxy.tools.actions.upload_common DEBUG 2013-11-06 16:48:49,751 Changing ownership of /shared/app/Galaxy/galaxy-dist/database/tmp/tmpEsyGfO with: /usr/bin/sudo -E scripts/external_chown_script.py /shared/app/Galaxy/galaxy-dist/database/tmp/tmpEsyGfO hazards 502
galaxy.tools.actions.upload_common WARNING 2013-11-06 16:48:49,775 Changing ownership of uploaded file /shared/app/Galaxy/galaxy-dist/database/tmp/tmpEsyGfO failed: sudo: no tty present and no askpass program specified
galaxy.tools.actions.upload_common INFO 2013-11-06 16:48:49,805 tool upload1 created job id 170
galaxy.jobs DEBUG 2013-11-06 16:48:50,678 (170) Persisting job destination (destination id: local)
galaxy.jobs.handler INFO 2013-11-06 16:48:50,698 (170) Job dispatched
galaxy.jobs.runners.local DEBUG 2013-11-06 16:48:50,994 (170) executing: python /shared/app/Galaxy/galaxy-dist/tools/data_source/upload.py /depot/shared/app/Galaxy/galaxy-dist /shared/app/Galaxy/galaxy-dist/database/tmp/tmpTq22ot /shared/app/Galaxy/galaxy-dist/database/tmp/tmpEsyGfO 406:/shared/app/Galaxy/galaxy-dist/database/job_working_directory/000/170/dataset_406_files:/shared/app/Galaxy/galaxy-dist/database/job_working_directory/000/170/galaxy_dataset_406.dat
galaxy.jobs DEBUG 2013-11-06 16:48:51,030 (170) Persisting job destination (destination id: local)
galaxy.jobs.runners.local DEBUG 2013-11-06 16:48:53,335 execution finished: python /shared/app/Galaxy/galaxy-dist/tools/data_source/upload.py /depot/shared/app/Galaxy/galaxy-dist /shared/app/Galaxy/galaxy-dist/database/tmp/tmpTq22ot /shared/app/Galaxy/galaxy-dist/database/tmp/tmpEsyGfO 406:/shared/app/Galaxy/galaxy-dist/database/job_working_directory/000/170/dataset_406_files:/shared/app/Galaxy/galaxy-dist/database/job_working_directory/000/170/galaxy_dataset_406.dat
galaxy.jobs.runners.local DEBUG 2013-11-06 16:48:53,463 executing external set_meta script for job 170: /depot/shared/app/Galaxy/galaxy-dist/set_metadata.sh /shared/app/Galaxy/galaxy-dist/database/files /shared/app/Galaxy/galaxy-dist/database/job_working_directory/000/170 . /depot/shared/app/Galaxy/galaxy-dist/universe_wsgi.ini /shared/app/Galaxy/galaxy-dist/database/tmp/tmpTq22ot /shared/app/Galaxy/galaxy-dist/database/job_working_directory/000/170/galaxy.json /shared/app/Galaxy/galaxy-dist/database/job_working_directory/000/170/metadata_in_HistoryDatasetAssociation_410_pUkPU9,/shared/app/Galaxy/galaxy-dist/database/job_working_directory/000/170/metadata_kwds_HistoryDatasetAssociation_410_iouZ6k,/shared/app/Galaxy/galaxy-dist/database/job_working_directory/000/170/metadata_out_HistoryDatasetAssociation_410_3c6Nj2,/shared/app/Galaxy/galaxy-dist/database/job_working_directory/000/170/metadata_results_HistoryDatasetAssociation_410_A6L2LO,/shared/app/Galaxy/galaxy-dist/database/job_working_directory/000/170/galaxy_dataset_406.dat,/shared/app/Galaxy/galaxy-dist/database/job_working_directory/000/170/metadata_override_HistoryDatasetAssociation_410_8Wub22
galaxy.jobs.runners.local DEBUG 2013-11-06 16:48:54,789 execution of external set_meta for job 170 finished
galaxy.jobs DEBUG 2013-11-06 16:48:54,815 (170) Changing ownership of working directory with: /usr/bin/sudo -E scripts/external_chown_script.py /shared/app/Galaxy/galaxy-dist/database/job_working_directory/000/170 galaxy 50982
galaxy.jobs ERROR 2013-11-06 16:48:54,839 (170) Failed to change ownership of /shared/app/Galaxy/galaxy-dist/database/job_working_directory/000/170, failing
Traceback (most recent call last):
File "/depot/shared/app/Galaxy/galaxy-dist/lib/galaxy/jobs/__init__.py", line 868, in finish
self.reclaim_ownership()
File "/depot/shared/app/Galaxy/galaxy-dist/lib/galaxy/jobs/__init__.py", line 1452, in reclaim_ownership
self._change_ownership( self.galaxy_system_pwent[0], str( self.galaxy_system_pwent[3] ) )
File "/depot/shared/app/Galaxy/galaxy-dist/lib/galaxy/jobs/__init__.py", line 1438, in _change_ownership
assert p.returncode == 0
AssertionError
galaxy.jobs DEBUG 2013-11-06 16:48:54,876 fail(): Moved /shared/app/Galaxy/galaxy-dist/database/job_working_directory/000/170/galaxy_dataset_406.dat to /shared/app/Galaxy/galaxy-dist/database/files/000/dataset_406.dat
ls -ltr /shared/app/Galaxy/galaxy-dist/database/files/000
-rw------- 1 galaxy galaxy 93 Nov 6 16:48 dataset_406.dat
From http://dev.list.galaxyproject.org/Submitting-jobs-as-a-real-user-without-...
I tried this as the sudo user:
scripts/external_chown_script.py /shared/app/Galaxy/galaxy-dist/database/job_working_directory/000/170 svcgalaxy 100000
Traceback (most recent call last):
File "scripts/external_chown_script.py", line 16, in <module>
import drmaa
File "build/bdist.linux-x86_64/egg/drmaa/__init__.py", line 41, in <module>
File "build/bdist.linux-x86_64/egg/drmaa/wrappers.py", line 43, in <module>
RuntimeError: could not find drmaa library. Please specify its full path using the environment variable DRMAA_LIBRARY_PATH
I have no idea why that refers to anything DRMAA, because in this case I was running in local mode.
In fact, however, I get the same error when I start Galaxy with this "job_conf.xml":
cat job_conf.xml
<?xml version="1.0"?>
<job_conf>
<plugins>
<!-- "workers" is the number of threads for the runner's work queue.
The default from <plugins> is used if not defined for a <plugin>.
-->
<plugin id="drmaa" type="runner" load="galaxy.jobs.runners.drmaa:DRMAAJobRunner" workers="8"/>
</plugins>
<handlers default="handlers">
<!-- Additional job handlers - the id should match the name of a
[server:<id>] in universe_wsgi.ini.
-->
<handler id="main" tags="handlers"/>
</handlers>
<destinations default="drmaa">
<!-- Destinations define details about remote resources and how jobs
should be executed on those remote resources.
-->
<destination id="drmaa" runner="drmaa" tags="hpcc3.musc.edu">
<!-- "TODO: The real user options should maybe not be considered runner params."
-->
<param id="galaxy_external_runjob_script">scripts/drmaa_external_runner.py</param>
<param id="galaxy_external_killjob_script">scripts/drmaa_external_killer.py</param>
<param id="galaxy_external_chown_script">scripts/external_chown_script.py</param>
</destination>
</destinations>
</job_conf>
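For completeness, the handler id "main" above is meant to match a [server:<id>] section in universe_wsgi.ini; as far as I know ours is just the stock one, roughly the following, with the port/host matching the "serving on 0.0.0.0:8089" line in the log above:

[server:main]
use = egg:Paste#http
port = 8089
host = 0.0.0.0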
The other difference is that in the DRMAA case, the GUI upload job in the HISTORY panel goes “grey” and stays that way; nothing is ever uploaded.
Summary:
In local mode I can upload files but there are errors in HISTORY and I cannot get tools to see the uploaded files.
In DRMAA mode I cannot get the GUI upload to happen at all.
BTW, in user "galaxy"'s .bashrc file I have this line:
export DRMAA_LIBRARY_PATH=/opt/lsf/7.0/linux2.6-glibc2.3-x86_64/lib/libdrmaa.so
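Also, from the real-user documentation linked above, my understanding is that the "sudo: no tty present and no askpass program specified" warnings mean the "galaxy" user needs passwordless, tty-less sudo for the external scripts. My guess at the required sudoers entries (paths adapted to our install, so this may well be wrong) is roughly:

Defaults:galaxy !requiretty
galaxy  ALL = (root) NOPASSWD: SETENV: /shared/app/Galaxy/galaxy-dist/scripts/external_chown_script.py
galaxy  ALL = (root) NOPASSWD: SETENV: /shared/app/Galaxy/galaxy-dist/scripts/drmaa_external_runner.py
galaxy  ALL = (root) NOPASSWD: SETENV: /shared/app/Galaxy/galaxy-dist/scripts/drmaa_external_killer.py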
I’m baffled.
Thanks
Starr
Re: [galaxy-dev] Question regarding walltime exceeded not being correctly reported via the WebUI
by Daniel Patrick Sullivan
Hi, John,
Thank you for taking the time to help me look into this issue. I have
applied the patch you provided and confirmed that it appears to help
remediate the problem (when a walltime is exceeded feedback is in fact
provided via the Galaxy web UI; it no longer appears that jobs are running
indefinitely). One thing I would like to note is that the error that is
provided to the user is generic, i.e. the web UI reports "An error occurred
with this dataset: Job cannot be completed due to a cluster error, please
retry it later". So, the fact that a Walltime exceeded error actually
occurred is not presented to the user (I am not sure if this is intentional
or not). Again, I appreciate you taking the time to verify and patch this
issue. I have attached a screenshot of the output for your review.
I am probably going to be testing Galaxy with Torque 4.2.5 in the coming
weeks; I will let you know if I identify any additional problems. Thank
you so much, and have a wonderful day.
Dan Sullivan
On Tue, Nov 5, 2013 at 8:48 AM, John Chilton <chilton(a)msi.umn.edu> wrote:
> Hey Daniel,
>
> Thanks so much for the detailed problem report, it was very helpful.
> Reviewing the code, there appears to be a bug in the PBS job runner:
> in some cases pbs_job_state.stop_job is never set but is attempted to
> be read. I don't have Torque, so I don't have a great test setup for
> this problem; any chance you can make the following changes for me and
> let me know if they work?
>
> Between the following two lines:
>
> log.error( '(%s/%s) PBS job failed: %s' % (
> galaxy_job_id, job_id, JOB_EXIT_STATUS.get( int( status.exit_status ),
> 'Unknown error: %s' % status.exit_status ) ) )
> self.work_queue.put( ( self.fail_job, pbs_job_state ) )
>
> log.error( '(%s/%s) PBS job failed: %s' % (
> galaxy_job_id, job_id, JOB_EXIT_STATUS.get( int( status.exit_status ),
> 'Unknown error: %s' % status.exit_status ) ) )
> pbs_job_state.stop_job = False
> self.work_queue.put( ( self.fail_job, pbs_job_state ) )
>
> And at the top of the file can you add a -11 option to the
> JOB_EXIT_STATUS to indicate a job timeout.
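>
> In pbs.py that would look roughly like this (just a sketch; keep the
> existing entries and use whatever message text you prefer):
>
> JOB_EXIT_STATUS = {
>     # ... existing entries unchanged ...
>     -11: "job walltime exceeded",
> }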
>
> I have attached a patch that should apply against the latest stable;
> it will probably work against your branch as well.
>
> If you would rather not act as my QC layer, I can try to come up with
> a way to do some testing on my end :).
>
> Thanks again,
> -John
>
>
> On Mon, Nov 4, 2013 at 10:10 AM, Daniel Patrick Sullivan
> <dansullivan(a)gmail.com> wrote:
> > Hi, Galaxy Developers,
> >
> > I have what I hope is a somewhat basic question regarding Galaxy's
> > interaction with a PBS job cluster and information reported via the
> > web UI.
> > Basically, in certain situations, the walltime of a specific job is
> > exceeded. This is of course to be expected and all fine and
> > understandable.
> >
> > My problem is that the information is not being relayed back to the end user
> > via the Galaxy web UI, which causes confusion in our Galaxy user community.
> > Basically the Torque scheduler generates the following message when a
> > walltime is exceeded:
> >
> > 11/04/2013 08:39:45;000d;PBS_Server.30621;Job;163.sctest.cri.uchicago.edu;preparing to send 'a' mail for job 163.sctest.cri.uchicago.edu to s.cri.galaxy(a)crigalaxy-test.uchicago.edu (Job exceeded its walltime limit. Job was aborted
> > 11/04/2013 08:39:45;0009;PBS_Server.30621;Job;163.sctest.cri.uchicago.edu;job exit status -11 handled
> >
> > Now, my problem is that this status -11 return code is not being correctly
> > handled by Galaxy. What happens is that Galaxy throws an exception,
> > specifically:
> >
> > 10.135.217.178 - - [04/Nov/2013:08:39:42 -0500] "GET
> > /api/histories/90240358ebde1489 HTTP/1.1" 200 -
> > "https://crigalaxy-test.uchicago.edu/history" "Mozilla/5.0 (X11; Linux
> > x86_64; rv:23.0) Gecko/20100101 Firefox/23.0"
> > galaxy.jobs.runners.pbs DEBUG 2013-11-04 08:39:46,137
> > (2150/163.sctest.cri.uchicago.edu) PBS job state changed from R to C
> > galaxy.jobs.runners.pbs ERROR 2013-11-04 08:39:46,139
> > (2150/163.sctest.cri.uchicago.edu) PBS job failed: Unknown error: -11
> > galaxy.jobs.runners ERROR 2013-11-04 08:39:46,139 (unknown) Unhandled
> > exception calling fail_job
> > Traceback (most recent call last):
> >   File "/group/galaxy_test/galaxy-dist/lib/galaxy/jobs/runners/__init__.py", line 60, in run_next
> >     method(arg)
> >   File "/group/galaxy_test/galaxy-dist/lib/galaxy/jobs/runners/pbs.py", line 561, in fail_job
> >     if pbs_job_state.stop_job:
> > AttributeError: 'AsynchronousJobState' object has no attribute 'stop_job'
> >
> > After this exception occurs, the Galaxy job status via the web UI is still
> > reported as "Job is currently running". It appears that the job will remain
> > in this state (from the end user's perspective) indefinitely. Has anybody
> > seen this issue before?
> >
> > I noticed that return code -11 does not exist in
> > /group/galaxy_test/galaxy-dist/lib/galaxy/jobs/runners/pbs.py under the
> > JOB_EXIT_STATUS dictionary. I tried adding an entry for this; however,
> > when I do, the exception changes to:
> >
> > galaxy.jobs.runners.pbs ERROR 2013-11-04 10:02:17,274
> > (2151/164.sctest.cri.uchicago.edu) PBS job failed: job walltime exceeded
> > galaxy.jobs.runners ERROR 2013-11-04 10:02:17,275 (unknown) Unhandled
> > exception calling fail_job
> > Traceback (most recent call last):
> >   File "/group/galaxy_test/galaxy-dist/lib/galaxy/jobs/runners/__init__.py", line 60, in run_next
> >     method(arg)
> >   File "/group/galaxy_test/galaxy-dist/lib/galaxy/jobs/runners/pbs.py", line 562, in fail_job
> >     if pbs_job_state.stop_job:
> > AttributeError: 'AsynchronousJobState' object has no attribute 'stop_job'
> >
> > I am wondering if this is a bug or if it is just because I am using a newer
> > version of TORQUE (I am using TORQUE 4.2.2).
> >
> > In terms of Galaxy, I am using:
> >
> > [s.cri.galaxy@crigalaxy-test galaxy-dist]$ hg parents
> > changeset: 10408:6822f41bc9bb
> > branch: stable
> > parent: 10393:d05bf67aefa6
> > user: Dave Bouvier <dave(a)bx.psu.edu>
> > date: Mon Aug 19 13:06:17 2013 -0400
> > summary: Fix for case where running functional tests might overwrite
> > certain files in database/files.
> >
> > [s.cri.galaxy@crigalaxy-test galaxy-dist]$
> >
> > Does anybody know how I could fix this such that walltime exceeded messages
> > are correctly reported via the Galaxy web UI for TORQUE 4.2.2? Thank you
> > so much for your input and guidance, and for the ongoing development of
> > Galaxy.
> >
> > Dan Sullivan
> >
> > ___________________________________________________________
> > Please keep all replies on the list by using "reply all"
> > in your mail client. To manage your subscriptions to this
> > and other Galaxy lists, please use the interface at:
> > http://lists.bx.psu.edu/
> >
> > To search Galaxy mailing lists use the unified search at:
> > http://galaxyproject.org/search/mailinglists/
>
Dynamic tool configuration
by Biobix Galaxy
Hi all,
We are working on a galaxy tool suite for data analysis.
We use a sqlite db to keep result data centralised between the different tools.
At one point, the configuration options of a tool should depend on the rows within a table of the sqlite db that is the output of the previous step. In other words, we would like to be able to populate selectable parameters based on an underlying SQL statement. If SQL is not possible, an alternative would be to output the table content into a txt file and then parse that txt file, instead of the sqlite db, within the XML configuration file.
When looking through the Galaxy wiki and mailing lists I came across the <code> tag, which would be ideal: we could run a Python script in the background to fetch data from the sqlite table. However, that function is deprecated.
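For the txt-file alternative, I imagine the select parameter in the tool XML would look roughly like this (an untested sketch; parameter and column names are placeholders, and "table_txt" is assumed to be a tabular export of the sqlite table produced by the previous tool):

<param name="table_txt" type="data" format="tabular" label="Exported table from previous step"/>
<param name="choice" type="select" label="Value from previous step">
  <options from_dataset="table_txt">
    <column name="name" index="0"/>
    <column name="value" index="0"/>
  </options>
</param>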
Does anybody know of other ways to achieve this?
Thanks!
Jeroen
Ir. Jeroen Crappé, PhD Student
Lab of Bioinformatics and Computational Genomics (Biobix)
FBW - Ghent University
Re: [galaxy-dev] Galaxy dropping jobs?
by Nikolai Vazov
Hi again,
The loop (as explained below) did the job :)
Nikolay
Thank you very much, Nate,
1.
I have put in a fix: a loop that runs the JobStatus check 5 times, once every 60
seconds, and only then throws an exception like the one below.
It turned out that all the connect failures happen at the same time: at
slurm log rotation time, at 3 am. Hopefully the fix helps :)
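Roughly, the patched check in our drmaa runner now looks like this (a simplified sketch from memory; the variable and function names may not match the actual code):

import time

def check_job_status(session, external_job_id, retries=5, delay=60):
    # session is the python-drmaa Session the runner already holds;
    # jobStatus() is the call that was failing with "connect failure"
    for attempt in range(retries):
        try:
            return session.jobStatus(external_job_id)
        except Exception:
            # assume the error is the transient "Unable to contact
            # slurm controller" case; give slurmctld time to come back
            if attempt == retries - 1:
                raise
            time.sleep(delay)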
2.
Our slurm conf keeps the info about each job for 5 min. But looking at
the code, it seems that in the case you describe below, there will be
an "Invalid job exception" leading to a "Job finished" state. Am I wrong?
Anyway, I'll let you know if the loop does the job.
Thanks again,
Nikolay
On 2013-11-04 15:57, Nate Coraor wrote:
> Hi Nikolay,
> With slurm, the following change that I backed out should fix the
> problem:
>
> https://bitbucket.org/galaxy/galaxy-central/diff/lib/galaxy/jobs/runners/...
> Although I do believe that if Galaxy doesn't read the completion
> state before slurm "forgets" about the job (MinJobAge in slurm.conf),
> this change could result in the job becoming permanently stuck in the
> running state.
> I should have some enhancements to the DRMAA runner for slurm coming
> soon that would prevent this.
> --nate
> On Oct 31, 2013, at 5:27 AM, Nikolai Vazov wrote:
>
>> Hi,
>>
>> I discovered a weird issue in the job behaviour : Galaxy is running a
>> long job on a cluster (more than 24h), about 15 hours later it misses
>> the connection with SLURM on the cluster and throws the following
>> message :
>> [root@galaxy-prod01 galaxy-dist]# grep 3715200 paster.log
>> galaxy.jobs.runners.drmaa INFO 2013-10-30 10:51:54,149 (555) queued
>> as 3715200
>> galaxy.jobs.runners.drmaa DEBUG 2013-10-30 10:51:55,149 (555/3715200)
>> state change: job is queued and active
>> galaxy.jobs.runners.drmaa DEBUG 2013-10-30 10:52:13,516 (555/3715200)
>> state change: job is running
>> galaxy.jobs.runners.drmaa INFO 2013-10-31 03:29:33,090 (555/3715200)
>> job left DRM queue with following message: code 1: slurm_load_jobs
>> error: Unable to contact slurm controller (connect failure),job_id:
>> 3715200
>> Is there a timeout in Galaxy for contacting slurm? Yet, the job is
>> still running properly on the cluster ...
>> Thanks for help, it's really urgent :)
>> Nikolay
>>
>> --
>> Nikolay Vazov, PhD
>> Research Computing Centre - http://hpc.uio.no
>> USIT, University of Oslo
>> ___________________________________________________________
>> Please keep all replies on the list by using "reply all"
>> in your mail client. To manage your subscriptions to this
>> and other Galaxy lists, please use the interface at:
>> http://lists.bx.psu.edu/
>> To search Galaxy mailing lists use the unified search at:
>> http://galaxyproject.org/search/mailinglists/
--
Nikolay Vazov, PhD
Research Computing Centre - http://hpc.uio.no
USIT, University of Oslo
Supporting file sets for running a tool with multiple input files
by Lukasse, Pieter
Hi,
Is there any news regarding support for the following scenario in Galaxy:
- User has N files which he would like to process with a Galaxy tool using the same parameters
- User uploads a (.tar or .zip ?) file to Galaxy and selects this as the input file for the tool
- Tool produces an output .zip file with the N result files
I know Galaxy-P had a workaround for this some time ago. But has this been solved in the main Galaxy code base?
Or are there any feasible workarounds that I can add to my Toolshed package to ensure my .zip file does not get unzipped at upload (default Galaxy behaviour)?
Thanks and regards,
Pieter Lukasse
Wageningen UR, Plant Research International
Departments of Bioscience and Bioinformatics
Wageningen Campus, Building 107, Droevendaalsesteeg 1, 6708 PB,
Wageningen, the Netherlands
+31-317480891; skype: pieter.lukasse.wur
http://www.pri.wur.nl
Re: [galaxy-dev] latest galaxy-central version
by James Taylor
Robert, I'm not sure what is going on here, other than that the javascript
that converts buttons into dropdown menus has not fired.
Are there any javascript errors?
Marten is working on rewriting libraries, and we will be eliminating the
progressive loading popupmenus for something much more efficient, but this
also might indicate a bug, so let us know if there is anything odd in the
console.
--
James Taylor, Associate Professor, Biology/CS, Emory University
On Mon, Nov 4, 2013 at 1:58 PM, Robert Baertsch <baertsch(a)soe.ucsc.edu>wrote:
> Hi James,
> I just pulled in the latest code to see how you changed from iframe to
> divs. Very exciting update.
>
>
> I tried importing a bam file into the library using the admin tool that
> reads the file directly from a path on the server.
>
> During the copy operation, the screen gets into a weird state.
> ps -ef shows these two processes running:
> python /data/galaxy-central/tools/data_source/upload.py
> /data/galaxy-central /data/galaxy-central/database/tmp/tmpywxnFE
> /data/galaxy-central/database/tmp/tmpXkEvSY
> 8876:/data/galaxy-central/database/job_working_directory/004/4547/dataset_8876_files:/data/galaxy-central/database/files/008/dataset_8876.dat
> samtools sort /data/galaxy-central/database/files/008/dataset_8876.dat
> /tmp/tmpNbcggh/sorted
>
> Do you finish rendering the screen after the copy operation is complete?
>
> -Robert
>
>
errors after updating galaxy distribution
by Dan Tenenbaum
Hi,
I updated my galaxy distribution with
hg pull -u
Then I tried to re-run Galaxy and got an error:
$ ./run.sh
Traceback (most recent call last):
File "./scripts/paster.py", line 33, in <module>
serve.run()
File "/home/microbeco/galaxy-dist/lib/galaxy/util/pastescript/serve.py", line 1049, in run
invoke(command, command_name, options, args[1:])
File "/home/microbeco/galaxy-dist/lib/galaxy/util/pastescript/serve.py", line 1055, in invoke
exit_code = runner.run(args)
File "/home/microbeco/galaxy-dist/lib/galaxy/util/pastescript/serve.py", line 220, in run
result = self.command()
File "/home/microbeco/galaxy-dist/lib/galaxy/util/pastescript/serve.py", line 643, in command
app = loadapp( app_spec, name=app_name, relative_to=base, global_conf=vars)
File "/home/microbeco/galaxy-dist/lib/galaxy/util/pastescript/loadwsgi.py", line 350, in loadapp
return loadobj(APP, uri, name=name, **kw)
File "/home/microbeco/galaxy-dist/lib/galaxy/util/pastescript/loadwsgi.py", line 374, in loadobj
global_conf=global_conf)
File "/home/microbeco/galaxy-dist/lib/galaxy/util/pastescript/loadwsgi.py", line 399, in loadcontext
global_conf=global_conf)
File "/home/microbeco/galaxy-dist/lib/galaxy/util/pastescript/loadwsgi.py", line 423, in _loadconfig
return loader.get_context(object_type, name, global_conf)
File "/home/microbeco/galaxy-dist/lib/galaxy/util/pastescript/loadwsgi.py", line 561, in get_context
section)
File "/home/microbeco/galaxy-dist/lib/galaxy/util/pastescript/loadwsgi.py", line 620, in _context_from_explicit
value = import_string(found_expr)
File "/home/microbeco/galaxy-dist/lib/galaxy/util/pastescript/loadwsgi.py", line 125, in import_string
return pkg_resources.EntryPoint.parse("x=" + s).load(False)
File "/home/microbeco/galaxy-dist/lib/pkg_resources.py", line 1954, in load
entry = __import__(self.module_name, globals(),globals(), ['__name__'])
File "/home/microbeco/galaxy-dist/lib/galaxy/web/buildapp.py", line 3, in <module>
from galaxy.webapps.galaxy.buildapp import app_factory
File "/home/microbeco/galaxy-dist/lib/galaxy/webapps/galaxy/buildapp.py", line 16, in <module>
import galaxy.model.mapping
File "/home/microbeco/galaxy-dist/lib/galaxy/model/mapping.py", line 14, in <module>
from sqlalchemy.types import BigInteger
ImportError: cannot import name BigInteger
I also tried to update my database but got a different error:
$ sh manage_db.sh upgrade
114 -> 115...
Traceback (most recent call last):
File "./scripts/manage_db.py", line 62, in <module>
main( repository=repo, url=db_url )
File "/home/microbeco/galaxy-dist/eggs/sqlalchemy_migrate-0.5.4-py2.7.egg/migrate/versioning/shell.py", line 150, in main
ret = command_func(**kwargs)
File "/home/microbeco/galaxy-dist/eggs/sqlalchemy_migrate-0.5.4-py2.7.egg/migrate/versioning/api.py", line 221, in upgrade
return _migrate(url, repository, version, upgrade=True, err=err, **opts)
File "/home/microbeco/galaxy-dist/eggs/sqlalchemy_migrate-0.5.4-py2.7.egg/migrate/versioning/api.py", line 349, in _migrate
schema.runchange(ver, change, changeset.step)
File "/home/microbeco/galaxy-dist/eggs/sqlalchemy_migrate-0.5.4-py2.7.egg/migrate/versioning/schema.py", line 184, in runchange
change.run(self.engine, step)
File "/home/microbeco/galaxy-dist/eggs/sqlalchemy_migrate-0.5.4-py2.7.egg/migrate/versioning/script/py.py", line 101, in run
func()
TypeError: upgrade() takes exactly 1 argument (0 given)
How can I fix this?
Thanks,
Dan
Egg distribution issue
by Iry Witham
Hi Dev Team,
I have run into an egg distribution issue. I performed an upgrade late last month and am now experiencing issues fetching eggs. I am getting the following error message:
galaxy@jaxgalaxydev01:/hpcdata/galaxy-dev/galaxy-setup/galaxy-dist> python ./scripts/fetch_eggs.py
Fetched http://eggs.galaxyproject.org/sqlalchemy_migrate/sqlalchemy_migrate-0.7.2...
One of Galaxy's managed eggs depends on something which is missing, this is almost certainly a bug in the egg distribution.
Dependency "sqlalchemy-migrate" requires "setuptools"
Traceback (most recent call last):
File "./scripts/fetch_eggs.py", line 37, in <module>
c.resolve() # Only fetch eggs required by the config
File "/hpcdata/galaxy-dev/galaxy-setup/galaxy-dist/lib/galaxy/eggs/__init__.py", line 345, in resolve
egg.resolve()
File "/hpcdata/galaxy-dev/galaxy-setup/galaxy-dist/lib/galaxy/eggs/__init__.py", line 168, in resolve
dists = pkg_resources.working_set.resolve( ( self.distribution.as_requirement(), ), env, self.fetch )
File "/hpcdata/galaxy-dev/galaxy-setup/galaxy-dist-jax/galaxy-dist-jax/lib/pkg_resources.py", line 569, in resolve
raise VersionConflict(dist,req) # XXX put more info here
pkg_resources.VersionConflict: (sqlalchemy-migrate 0.7.2 (/hpcdata/galaxy-dev/galaxy-setup/galaxy-dist/eggs/sqlalchemy_migrate-0.7.2-py2.6.egg), Requirement.parse('setuptools'))
Could someone point me in the right direction?
Thanks,
Iry
__________________________________
Iry T. Witham
Scientific Applications Administrator
Scientific Computing Group
Computational Sciences Dept.
The Jackson Laboratory
600 Main Street
Bar Harbor, ME 04609
Phone: 207-288-6744
email: iry.witham(a)jax.org
The information in this email, including attachments, may be confidential and is intended solely for the addressee(s). If you believe you received this email by mistake, please notify the sender by return email as soon as possible.
Deleting history entries not working, javascript void link?
by Peter Cock
Hi all,
I've just updated our main Galaxy to release_2013.11.04 but
also see the following problem on galaxy-central, and on
https://usegalaxy.org/ as well.
(1) Go to any history with deleted files
(2) Cog icon, tick "Include Deleted datasets"
(3) Look at a deleted dataset in your history column, note
the yellow "Warning" box with text:
This dataset has been deleted. Click *here* to undelete it or *here*
to immediately remove it from disk
(4) Notice the first *here* link (undelete) has a sensible URL,
e.g. as shown via your browser's status bar or right click to
copy the URL.
(5) Notice the second *here* (remove) has this as the URL:
javascript:void(0)
Unsurprisingly, the remove link has no effect as it is a JavaScript
command to do nothing, and the user cannot therefore purge the
file from their history.
I've confirmed this on multiple browsers on multiple machines.
Regards,
Peter