Not sure if I've overlooked a setting somewhere, but is it possible to customize or tweak the job descriptions that are displayed in the history?
E.g. "Compute sequence length on data 1" isn't very descriptive, but if an annotation or description field could be added, identification would be much easier, e.g. "Compute sequence length on data 1 [Burkholderia pseudomallei K96243]". For datatypes that have a defined definition line, perhaps a mouse-over hover could even be added showing the accession or definition pulled from the input file? These mouse-over hovers are already used to identify the view, edit, and delete icons, so no additional libraries would be needed.
I could probably hack something up but thought it better to try and get the feature included.
For those interested,
The main Galaxy Tool Shed is now running the next-stable branch in preparation for the upcoming Galaxy release, currently scheduled for about two weeks from now. We'll continue to update the Tool Shed as new changesets are pushed to the next-stable branch, and we'll alert you when the main Tool Shed is updated to the stable branch once it is tagged for the next Galaxy release.
As usual, the test Galaxy Tool Shed continues to track the default branch.
Greg Von Kuster
I have implemented a custom tool called 'barcode-parse' and uploaded it
to my local Tool Shed (which is running at http://localhost:9009). I am
also able to install this custom tool from Galaxy.
Then I added a simple repository dependency: I created an XML file called
'repository_dependencies.xml', created a new repository in the Tool Shed
called 'Test', and uploaded all the necessary files (the .py, the tool
definition XML, and the repository dependency XML) into it. (I also
created another repository called 'string_occurrence' and uploaded the
necessary files to the Tool Shed; this is the repository used as the dependency.)
<repository toolshed="http://localhost:9009" name="string_occurrence"
I tried to install the custom tool 'barcode-parse' from Galaxy, but I got the error below:
The repository dependency definitions for this repository are invalid and
will be ignored.
I followed the steps described in the Galaxy documentation.
Am I doing anything wrong? Please advise.
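In case it helps, a minimal repository_dependencies.xml usually looks like the fragment below. The owner and changeset_revision values here are placeholders: they must match the actual owner and an installable revision of the 'string_occurrence' repository in your local Tool Shed, and a mismatch in either (or a missing attribute) is a common reason the definitions are reported as invalid and ignored.

```xml
<?xml version="1.0"?>
<repositories description="Repositories required by barcode-parse.">
    <!-- owner and changeset_revision below are placeholders; use the
         values shown on the string_occurrence repository's page -->
    <repository toolshed="http://localhost:9009" name="string_occurrence"
                owner="your_username" changeset_revision="abcdef012345" />
</repositories>
```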
I'm currently running a pipeline for metagenomic analysis on my locally
installed Galaxy instance, and everything has gone well so far. But when
I try to download my processed BLAST XML output (I used the BLAST+
wrappers for this) by clicking the small disk button, nothing happens.
I tried several options:
1) clicked the 'disk' icon to initiate the download
2) right-clicked the 'disk' icon and tried 'Save link as', but no save
dialog appears
3) copied the link and used curl in a bash terminal, but the progress bar
is stuck at 0%.
I think this is happening because my BLAST output dataset is quite large
(~10 GB), but if that's the case, how am I supposed to get the data out
of Galaxy at all?
Thanks a lot in advance.
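One workaround that usually gets large datasets out is to copy the link behind the 'disk' icon and fetch it outside the browser; if you script it, stream in fixed-size chunks so a ~10 GB response never has to fit in memory. A sketch (the URL shape in the comment is an assumption; copy the real link from your history panel):

```python
import shutil
import urllib.request

def stream_copy(src, dst, chunk_size=1 << 20):
    """Copy a file-like object in 1 MiB chunks; the whole payload is
    never held in memory, which matters for multi-GB datasets."""
    shutil.copyfileobj(src, dst, chunk_size)

def download_dataset(url, dest_path):
    # On a local Galaxy the link behind the disk icon typically looks like
    # http://localhost:8080/datasets/<encoded_id>/display?to_ext=blastxml
    # (shape assumed -- use whatever your history panel actually gives you).
    with urllib.request.urlopen(url) as response, open(dest_path, "wb") as out:
        stream_copy(response, out)
```

If curl stalls at 0%, it can also be worth retrying with `curl -C -` so an interrupted transfer resumes instead of restarting.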
I have a strange issue on my Galaxy server: some tabular files display correctly in the central frame, while others display as plain text with the tabs rendered as spaces. When I load those same files in Excel, however, they work just fine!
Has anyone seen this before and what would be the solution/workaround?
Thanks and regards,
Wageningen UR, Plant Research International
Departments of Bioscience and Bioinformatics
Wageningen Campus, Building 107, Droevendaalsesteeg 1, 6708 PB,
Wageningen, the Netherlands
+31-317481122; skype: pieter.lukasse.wur
I have made a wrapper for a tool that, given an input file, produces two
output files with known extensions: .gem and .log.
I call the tool from a Python script, passing it the input file and the
two output file names for Galaxy to display, but when I look in
database/files/000/ I see four files:
dataset_54.dat (my first declared output, which is empty)
dataset_54.dat.gem (the output the tool actually produces)
dataset_54.dat.log (the other output file the tool produces)
dataset_55.dat (my second declared output, which is empty)
So what I'm getting in the Galaxy interface is _54.dat and _55.dat, both
empty, instead of _54.dat.gem and _54.dat.log, which are the real
outputs. How can I fix this?
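Galaxy only ever looks at the exact .dat paths it passes to the wrapper; files the tool writes beside them (dataset_54.dat.gem, dataset_54.dat.log) are never picked up. The usual fix is to let the tool write with its fixed extensions and then move the results onto Galaxy's paths. A sketch, where the tool name and its flags are invented for illustration:

```python
import shutil
import subprocess

def relocate_outputs(prefix, gem_dest, log_dest):
    """Move the tool's fixed-extension outputs onto the paths Galaxy
    expects. Galaxy shows only the .dat files named on the command line;
    anything the tool creates next to them stays invisible in the UI."""
    shutil.move(prefix + ".gem", gem_dest)
    shutil.move(prefix + ".log", log_dest)

def run_wrapper(input_path, gem_dest, log_dest):
    # Hypothetical tool invocation: 'gem-tool' and its flags are assumed.
    # The tool appends .gem/.log to the -o prefix, so passing gem_dest as
    # the prefix keeps everything inside Galaxy's files directory.
    subprocess.check_call(["gem-tool", "-i", input_path, "-o", gem_dest])
    relocate_outputs(gem_dest, gem_dest, log_dest)
```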
I saw from http://dev.list.galaxyproject.org/contributing-patches-to-toolshed-repos-... that B. Langhorst had submitted a change to the Picard wrapper to support the RNA-seq metrics; has that patch made it into any of the repos? I'm not seeing it when I install the Picard wrappers.
I'd prefer not having to manually apply that patch to get the support for running rna-seq metrics.
When I used SnpEff on a local Galaxy instance, I got an error:
Exception in thread "main" java.lang.UnsupportedClassVersionError: ca/mcgill/mcb/pcingola/snpSift/SnpSift : Unsupported major.minor version 51.0
    at java.lang.ClassLoader.defineClass1(Native Method)
    at java.lang.ClassLoader.defineClassCond(ClassLoader.j
Also, when I used the FASTQ Summary Statistics, I got an error like this:
Traceback (most recent call last):
  File "/Users/...../softwares/shed_tools/toolshed.g2.bx.psu.edu/repos/devteam/fastq_stats/9b7b4e0ca9db/fastq_stats/fastq_stats.py", line 48, in
    if __name__ == "__main__": main()
  File "/Users/...
I tried to Google these and solve them myself, but I really do not know how. Could anybody give me a clue? Thanks a lot for your help!
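On the SnpEff error: "Unsupported major.minor version 51.0" means the jar was compiled for Java 7, while the JVM loading it is older (check with `java -version`); upgrading the JRE, or using a SnpEff/SnpSift build that matches the installed Java, should clear it. For major versions 49 and above the Java release is simply major − 44, which you can read straight out of a .class file header; a small sketch:

```python
import struct

def required_java_version(header_bytes):
    """Map a .class file's major version to the minimum Java release.
    The 8-byte header is: magic 0xCAFEBABE, minor (u2), major (u2),
    all big-endian; for major >= 49, release = major - 44 (51 -> Java 7)."""
    magic, minor, major = struct.unpack(">IHH", header_bytes[:8])
    if magic != 0xCAFEBABE:
        raise ValueError("not a Java class file")
    return major - 44
```

The fastq_stats traceback above is cut off before the actual error, so it is hard to say more without the full message.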
I keep getting an error every time I start up the Galaxy server. I am guessing this has to do with a job that Galaxy is trying to resume and cannot find.
galaxy.jobs.runners ERROR 2014-04-02 09:31:10,889 Unhandled exception checking active jobs
Traceback (most recent call last):
File "/srv/www/galaxy/source/galaxy-dist/lib/galaxy/jobs/runners/__init__.py", line 366, in monitor
File "/srv/www/galaxy/source/galaxy-dist/lib/galaxy/jobs/runners/pbs.py", line 363, in check_watched_items
( failures, statuses ) = self.check_all_jobs()
File "/srv/www/galaxy/source/galaxy-dist/lib/galaxy/jobs/runners/pbs.py", line 452, in check_all_jobs
c = pbs.pbs_connect( pbs_server_name )
TypeError: in method 'pbs_connect', argument 1 of type 'char *'
Because of this error I cannot get any other jobs to run either. They just sit queued on the cluster.
Could you send your job_conf.xml please? Also, is there any chance that you
modified run.sh to set DRMAA_LIBRARY_PATH there (which would override what
you set on the command line)?
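For comparison, a minimal job_conf.xml that routes everything through the drmaa runner looks roughly like the fragment below; the plugin line and destination id mirror the stock sample file, so adjust them to your actual setup.

```xml
<?xml version="1.0"?>
<job_conf>
    <plugins>
        <plugin id="drmaa" type="runner" load="galaxy.jobs.runners.drmaa:DRMAAJobRunner"/>
    </plugins>
    <handlers>
        <handler id="main"/>
    </handlers>
    <destinations default="pbs_default">
        <destination id="pbs_default" runner="drmaa"/>
    </destinations>
</job_conf>
```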
On Thu, Apr 3, 2014 at 9:17 AM, Luca Toldo <lucatoldo(a)gmail.com> wrote:
> Dear All,
> after having discovered the mess I was in, and the incomplete PBS
> client installation, I've done the following:
> *1) PBS Pro*
> a) removed the incomplete pbs installation that I had
> b) downloaded the latest version from vendor
> c) installed it in /usr/pbs
> d) configured
> e) tested it --> worked well
> *2) pbs-drmaa*
> a) downloaded version 1.0.17
> b) compiled it (./configure --with-pbs=/usr/pbs ) without any problem
> c) installed it in /usr/local/lib
> 3) set DRMAA_LIBRARY_PATH=/usr/local/lib/libdrmaa.so
> 4) sh run.sh --stop
> 5) sh run.sh --daemon
> Unfortunately, when I launch a new job I always get the error
> galaxy.jobs DEBUG 2014-04-03 15:13:44,306 (261) Working directory for job
> is: /opt/ngs/bin/galaxy-dist/database/job_working_directory/000/261
> galaxy.jobs.handler DEBUG 2014-04-03 15:13:44,321 (261) Dispatching to
> drmaa runner
> galaxy.jobs DEBUG 2014-04-03 15:13:44,434 (261) Persisting job
> destination (destination id: pbs_default)
> galaxy.jobs.handler INFO 2014-04-03 15:13:44,474 (261) Job dispatched
> galaxy.tools.deps DEBUG 2014-04-03 15:13:44,659 Building dependency shell
> command for dependency 'clustalw2'
> galaxy.tools.deps WARNING 2014-04-03 15:13:44,660 Failed to resolve
> dependency on 'clustalw2', ignoring
> galaxy.jobs.runners.drmaa DEBUG 2014-04-03 15:13:45,050 (261) submitting
> galaxy.jobs.runners.drmaa DEBUG 2014-04-03 15:13:45,050 (261) command is:
> python /opt/ngs/bin/galaxy-dist/tools/rgenetics/rgClustalw.py -i
> "/data/imgt.fasta" -o
> "/opt/ngs/bin/galaxy-dist/database/files/000/dataset_340.dat" -s "ALIGNED"
> -l "/opt/ngs/bin/galaxy-dist/database/files/000/dataset_341.dat" -t
> "Clustal_run" -d "DNA" -f "CLUSTAL"; return_code=$?; cd
> /opt/ngs/bin/galaxy-dist; /opt/ngs/bin/galaxy-dist/set_metadata.sh
> /opt/ngs/bin/galaxy-dist/database/job_working_directory/000/261 .
> sh -c "exit $return_code"
> galaxy.jobs.runners ERROR 2014-04-03 15:13:45,051 (261) Unhandled
> exception calling queue_job
> Traceback (most recent call last):
> File "/opt/ngs/bin/galaxy-dist/lib/galaxy/jobs/runners/__init__.py",
> line 62, in run_next
> File "/opt/ngs/bin/galaxy-dist/lib/galaxy/jobs/runners/drmaa.py", line
> 150, in queue_job
> external_job_id = self.ds.runJob(jt)
> File "/opt/ngs/bin/galaxy-dist/eggs/drmaa-0.6-py2.6.egg/drmaa/__init__.py", line
> 331, in runJob
> external_job_id = _h.c(_w.drmaa_run_job, jid, _ct.sizeof(jid), jobTemplate)
> File "/opt/ngs/bin/galaxy-dist/eggs/drmaa-0.6-py2.6.egg/drmaa/helpers.py", line
> 213, in c
> return f(*(args + (error_buffer, sizeof(error_buffer))))
> File "/opt/ngs/bin/galaxy-dist/eggs/drmaa-0.6-py2.6.egg/drmaa/errors.py", line
> 90, in error_check
> raise _ERRORS[code-1]("code %s: %s" % (code, error_buffer.value))
> InvalidAttributeValueException: code 14: Illegal attribute or resource
> value: Illegal attribute or resource value
> If from the command line I do
> qsub -q NGSq
> I get 1695145.servername
> What can I do now?