No problem - I originally copied the metadata access
trick from one of the Galaxy dev-team's tool anyway.
Maybe we need to add this to the wiki...
On Tuesday, January 13, 2015, Marco Albuquerque wrote:
> Hi Peter,
> I was unaware of how to access metadata, that seemed to be my issue. The
> tool works now though!
> Thanks so much,
> On 2015-01-09 7:13 PM, "Peter Cock" <p.j.a.cock(a)googlemail.com> wrote:
> >I think the symlink approach is best, see for example the Python
> >wrapper script I used here for "samtools idxstats",
> >However, you can make the link in the XML directly, see Dave's
> >reworking of this wrapper:
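The symlink approach described above amounts to something like the following. This is a hypothetical wrapper, not the actual idxstats script; the index path would be handed in from the tool XML, typically via the BAM dataset's metadata (e.g. `$input.metadata.bam_index`), which is the detail Marco was missing.

```python
import os
import subprocess
import sys

def link_bam_and_index(bam_path, bai_path, workdir):
    """Galaxy stores the BAM as dataset_N.dat and its index as a separate
    metadata file; link them side by side under the names samtools expects
    (input.bam next to input.bam.bai)."""
    bam_link = os.path.join(workdir, "input.bam")
    bai_link = os.path.join(workdir, "input.bam.bai")
    os.symlink(os.path.abspath(bam_path), bam_link)
    os.symlink(os.path.abspath(bai_path), bai_link)
    return bam_link

if __name__ == "__main__" and len(sys.argv) == 4:
    # The tool's <command> line would supply these, e.g.:
    #   wrapper.py $input $input.metadata.bam_index $output
    bam = link_bam_and_index(sys.argv[1], sys.argv[2], os.getcwd())
    with open(sys.argv[3], "w") as out:
        subprocess.check_call(["samtools", "idxstats", bam], stdout=out)
```

The point is only the linking step; any index-consuming tool (DELLY included) can be wrapped the same way.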
Hi Martin - Following up on this - I think your API folder delete commit is: https://bitbucket.org/galaxy/galaxy-central/commits/8f76a6abc5d7d5c98b6c1... ? I was wondering how to find out more about this API call. Not knowing the guts of the Galaxy API code much, is it a call like:
http://[my galaxy]/api/folders/[my folder id]/delete
I haven't tested since I haven't played with "Next-stable" galaxy branch. When roughly does that get woven into stable or default?
> On Dec 04, 2014, 4:40pm, Martin Čech wrote (Re: No API way to delete a galaxy data library folder?):
> I have actually implemented this feature and it will be in the next release (which should be made public around next Monday).
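I don't know the final route either, but if the folders API follows Galaxy's usual REST conventions it would be an HTTP DELETE on the folder resource rather than a `/delete` path segment. A minimal stdlib sketch; the URL, key, and the "F"-prefixed encoded folder id are placeholders, and the route itself is an assumption to verify against your Galaxy version:

```python
import json
import urllib.request

def folder_url(galaxy_url, folder_id):
    # Folder ids in the library-folder API are encoded ids, typically
    # prefixed with "F" (assumption based on contemporaneous API usage).
    return "%s/api/folders/%s" % (galaxy_url.rstrip("/"), folder_id)

def delete_folder(galaxy_url, api_key, folder_id):
    """Issue an HTTP DELETE on the folder resource; Galaxy would mark the
    folder deleted rather than purging it from disk."""
    req = urllib.request.Request(
        "%s?key=%s" % (folder_url(galaxy_url, folder_id), api_key),
        method="DELETE",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))
```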
I'm currently working on adding some tools and am having an issue with BAM indexing.
So, I am under the impression that BAM indexing happens automatically when a
BAM is uploaded. However, there is no associated dataset_i.dat.bai file in
the file location in my local instance of Galaxy.
There is, however, metadata which seems to be created, but the two are not
being linked together. What I mean to say is that DELLY errors out because it
cannot find the BAM index.
I was curious if there is a specific way developers are supposed to work
around this. Basically we want to avoid having to provide both the BAM and
BAM index and symbolically linking a new dataset, because we know you have
already created a better implementation; we just want to use it and don't
know how.
Any help is greatly appreciated,
I see the Galaxy API has a feature to import a history dataset into a data library (the copy_hda_to_ldda() function from the GCC2013 training day course). Is this available via BioBlend as well? The latest docs don't seem to include this feature. It would be the opposite of BioBlend's upload_dataset_from_library(history_id, lib_dataset_id).
Objective is to get customized blast indexes into library that way for shared use. Or have them actually exist outside galaxy, and linked in.
Hsiao lab, BC Public Health Microbiology & Reference Laboratory, BC Centre for Disease Control
655 West 12th Avenue, Vancouver, British Columbia, V5Z 4R4 Canada
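For what it's worth, newer BioBlend releases appear to have grown an inverse method, libraries.copy_from_dataset(library_id, dataset_id, ...); if your BioBlend predates it, the underlying API call can be made directly. A stdlib sketch, with field names modelled on the GCC2013-era copy_hda_to_ldda() example; treat them as assumptions and verify against your Galaxy version:

```python
import json
import urllib.request

def copy_hda_payload(hda_id, folder_id=None, message=""):
    """Payload for POST /api/libraries/<library_id>/contents that copies a
    history dataset (HDA) into the library. Field names are assumptions
    from that era's API examples."""
    payload = {"from_hda_id": hda_id, "create_type": "file"}
    if folder_id:
        payload["folder_id"] = folder_id
    if message:
        payload["ldda_message"] = message
    return payload

def copy_hda_to_library(galaxy_url, api_key, library_id, hda_id, folder_id=None):
    url = "%s/api/libraries/%s/contents?key=%s" % (
        galaxy_url.rstrip("/"), library_id, api_key)
    data = json.dumps(copy_hda_payload(hda_id, folder_id)).encode("utf-8")
    req = urllib.request.Request(
        url, data=data, headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))
```

For the BLAST-index use case, linking datasets in from outside Galaxy (rather than copying) is also done through the library contents endpoint, but the linking options are admin-only; check the data libraries documentation for your release.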
I'm trying to find out what the best way is to see if my functional tests
work as expected, and I'm also wondering when these tests are (supposed to
be) run. Here are the details:
Following instructions at
https://wiki.galaxyproject.org/Admin/Tools/AddToolTutorial, I created a
little toolExample tool under galaxy-dist/tools/myTools. I also created
test input and output in a test-data directory and added a test to the tool XML:
<param name="input" value="testinput.fa"/>
<output name="out_file1" file="testoutput.txt"/>
And yes, the files are named correctly.
I added the new tool to config/tools_conf.xml AND to the corresponding .sample file.
Question #1: Why is a .sample file used by run_functional_tests.sh? I
thought files with .sample extensions were meant as examples for creating your
own versions; I'm really surprised that any script would want to access
them directly.
I restarted Galaxy to verify that the new tool showed up, then checked that
I could test it:
./run_functional_tests.sh -list | grep fa_gc_content_1
It shows up in the list of tools. So then I tested it:
./run_functional_tests.sh -id fa_gc_content_1
IOError: [Errno 2] No such file or directory:
So it looks like at some point the testinput should have been copied to
galaxy-dist/test-data but wasn't.
Question # 2: Is there some setting that makes this test input (and output)
get copied to the correct location, or do I have to do this manually?
I also tried testing by uploading the tool to the testtoolshed (as jtest1).
I deliberately put the wrong value in the testoutput.txt file because I'd
like to see a test fail before I'll accept that it succeeds. It appears the
test input does not get tested on upload, because I got no automated test
results and no error messages anywhere. Is that expected behavior? I know
the tools get tested every so often; I'm just surprised it doesn't happen
on upload.
The tool could then also be easily installed to my galaxy distribution,
without errors in my log or on the browser pages.
Question #3: At what point in creating or downloading a repository are
these tests run?
And lastly, #4: is either of these the best way to test my tests, or did I miss a better approach?
I've been trying to track down why I can't get anything from the toolshed
installed and finally have it figured out.
Whenever I tried to install anything I always got an error with no
explanation of what it was. After enabling debug messages in the
log file, I see the error is:
tool_shed.util.hg_util DEBUG 2014-12-22 14:47:48,910 Error cloning
repository: httpsconnection instance has no attribute '_set_hostport'
I googled around and found out this is a known bug/issue with older versions
of Mercurial that was fixed in v3.
I added a line to hg_util.py to see where it picks up hg. It's using
version 2.2.3. Indeed, one of the eggs downloaded by Galaxy is
that old Mercurial.
I have the newest version of Mercurial installed in my site-packages folder,
but I guess that's not what Galaxy wants. So my question is: how do I get
Galaxy to use the latest version of Mercurial? And why did it download an
old version in the first place?
Sorry if this is a stupid question, but are the job id and other queue info,
for a job that has run on a cluster, available from within Galaxy?
Both during and after completion of the job.
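Not a stupid question. The external cluster id is stored in Galaxy's job table (the job_runner_external_id column), and the jobs API exposes per-job details. Whether the runner's external id appears in the response depends on your Galaxy version and on using an admin key, so treat this stdlib sketch (URL, key, and the `full=true` parameter name are assumptions) as something to verify:

```python
import json
import urllib.request

def job_api_url(galaxy_url, job_id, full=True):
    # 'full=true' asks for extended details (assumed parameter name;
    # BioBlend's show_job(full_details=True) wraps the same endpoint).
    suffix = "?full=true" if full else ""
    return "%s/api/jobs/%s%s" % (galaxy_url.rstrip("/"), job_id, suffix)

def show_job(galaxy_url, api_key, job_id):
    """Fetch a job's details; with an admin key the response may include
    runner information such as the external (cluster) job id."""
    url = job_api_url(galaxy_url, job_id)
    url += ("&" if "?" in url else "?") + "key=" + api_key
    with urllib.request.urlopen(url) as resp:
        return json.loads(resp.read().decode("utf-8"))
```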
I get this error occasionally:
"/bin/sh: 1: /opt/galaxy/web/database/job_working_directory/000/100/galaxy_100.sh:
Text file busy"
When this occurs, the step fails outright. Resubmitting the step
resolves the issue and things run no problem. If this error appears
early in a long workflow, I have to manually resubmit ALL dependent
steps... what a pain!
Perhaps this is something the Galaxy job scheduler could look out for:
flush() the buffers, sleep() a second or two to let the file finish writing
and close, and then rerun. That would be a more fault-tolerant way of running
workflows without unnecessary human intervention.
Research Associate | Application Developer | User Support Consultant
Minnesota Supercomputing Institute
599 Walter Library
612 624 1447
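The retry idea suggested above can be sketched generically; this is an illustration of the pattern, not Galaxy's actual job runner code. "Text file busy" surfaces either as OSError ETXTBSY (when exec'ing the script directly) or as /bin/sh exiting 126 ("cannot execute"), so both are retried:

```python
import errno
import subprocess
import time

def run_with_retry(cmd, attempts=3, delay=2.0):
    """Run a job script, retrying when it is still being flushed to disk.
    Only ETXTBSY / exit code 126 are retried; real failures re-raise."""
    for attempt in range(1, attempts + 1):
        try:
            subprocess.check_call(cmd)
            return 0
        except OSError as exc:
            if exc.errno != errno.ETXTBSY or attempt == attempts:
                raise
        except subprocess.CalledProcessError as exc:
            if exc.returncode != 126 or attempt == attempts:
                raise
        time.sleep(delay)  # let the writer finish and close the file
```

The cleaner fix is for the writer to fsync and close the script before exec, but a bounded retry like this keeps a long workflow from dying on a transient race.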