I'm working with two custom tools in my local galaxy instance:
The first tool outputs a single aceb file. aceb is a custom datatype I've
added that describes a crop experiment.
The second tool takes three files as inputs, one of which is an aceb file.
I can successfully run these two tools together in Galaxy outside of a
workflow, but for some strange reason I cannot connect them in the
workflow editor: the connection never turns green and never links.
Here are the relevant lines in the tool wrappers:
Outputs of first tool:
<data format="aceb" name="acebData" label="Experiment data with unified
Inputs of second tool:
<param name="acebData" type="data" format="aceb" label="Inut Survey aceb
<param name="domeData" type="data" format="dome" label="Input Strategy
DOME Data" />
<param name="linkData" type="data" format="txt" label="Input Linkage
between field overlay and survey" />
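For context, the aceb datatype is registered in my datatypes_conf.xml roughly like this (a sketch from memory; I subclass Text, and the exact attributes here are what I'd double-check first):

  <datatype extension="aceb" type="galaxy.datatypes.data:Text" subclass="true" display_in_upload="true" />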
Any ideas for what could be causing this? Thanks.
I changed the values for new_file_path and job_working_directory in my config file.
After restarting Galaxy I'm finding that some of the tools still reference the old path, which no longer exists. Do these paths end up hard-coded somewhere? I haven't done a find + grep to go looking yet, but that may be the next step...
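For the record, the search I have in mind is something like this (/old/path is a placeholder for the previous new_file_path value):

  grep -rl '/old/path' /path/to/galaxy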
Thanks for any help
FSU Research Computing Center
I'm running into this error with cuffdiff:
I'm running Galaxy locally on a cluster (1 head + 3 compute nodes) with
Here's my hg summary:
I also keep getting a similar Internal Server Error (with a different
KeyError) thrown when trying to install some tools. But strangely, after
leaving it for a while, it went back to normal, so I'm not too sure what's going on.
Should I be updating Galaxy to the latest stable release?
Please let me know if you require any other info.
Hi Galaxy Dev,
Been having trouble getting my Galaxy tools to work with the testtoolshed.
Here is the tools repository:
In particular, I've been focusing on getting the
region_motif_intersect.r script to work properly when installed from
the testtoolshed. The script reads files located in the same place
the repository is installed, in the folder region_motif_db. To make
sure the tool knows where these files are, I've been trying to pass
the repository install location as an argument in the R script
invocation (see region_motif_intersect.xml and tool_dependencies.xml).
To set the repository install location in the environment, I've been
basing my approach on this wiki entry:
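Concretely, my tool_dependencies.xml boils down to something like this (a sketch; the set_environment pattern and the $REPOSITORY_INSTALL_DIR substitution are what I copied from the wiki):

  <?xml version="1.0"?>
  <tool_dependency>
      <set_environment version="1.0">
          <environment_variable name="R_SCRIPT_PATH" action="set_to">$REPOSITORY_INSTALL_DIR</environment_variable>
      </set_environment>
  </tool_dependency>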
When I install the repository to my local Galaxy, it gives me the following in the log:
galaxy.tools.deps DEBUG 2014-10-25 15:12:35,376 Building dependency shell command for dependency 'R_SCRIPT_PATH'
galaxy.tools.deps WARNING 2014-10-25 15:12:35,377 Failed to resolve dependency on 'R_SCRIPT_PATH', ignoring
Then when I run the tool from Galaxy, this is the run command:
export PACKAGE_BASE; .
--args $R_SCRIPT_PATH p
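For completeness, the tool XML declares the dependency like this (again from memory):

  <requirement type="set_environment">R_SCRIPT_PATH</requirement>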
What exactly is going on here? How do I make sure my tool knows where
the reference files it needs are, since they won't be in the job working directory?
Note: the files that intersect.r needs are not in the toolshed
repository, since they are a few GB. I was planning on providing them
as a separate download or using the Data Managers to solve the
problem. But as long as I can get the tool to look for them in that
folder, that would be perfect.
I wish to remove most of the files under /mnt/galaxyIndices/genomes so I can have a smaller volume containing those files and save money.
I am new to this system. I have set up a Galaxy CloudMan account and used the tools on the web interface, and I can ssh into the server from Ubuntu. So far, so good.
I tried the first approach at https://wiki.galaxyproject.org/CloudMan/CustomizeGalaxyCloud twice, but the system hangs (>24h). I can't follow the "manual" instructions on that page, and I'm not sure they would result in a smaller volume anyway.
So, I was trying to follow Enis' steps from this thread to reduce the size of the galaxyIndices volume:
His instructions are:
"1. Start a new Galaxy cluster (ie, one you can easily delete later)
2. ssh into the master instance and delete whatever genomes you don't need/want (these are all located under /mnt/galaxyIndices)
3. Create a new EBS volume of size that'll fit whatever's left on the original volume, attach it and mount it
4. Copy over the data from the original volume to the new one while keeping the directory structure the same (rsync is probably the best tool for this)
5. Unmount & detach the new volume; create a snapshot from it
6. For the cluster you want to keep around (while it is terminated), edit persistent_data.yaml in its bucket on S3 and replace the existing snap ID for the galaxyIndices with the snapshot ID you got in the previous step
7. Start that cluster and you should have a file system from the new snapshot mounted.
8. Terminate & delete the cluster you created in step 1"
Here are my questions to help me to get through step 4:
1) Step 1: Is a "cluster" the same thing as an "instance"?
2) Step 2: I deleted the directories for individual genomes using rm -rf. Is that the correct approach?
3) Step 3: Do I add the newly created EBS volume to the same instance where I deleted the genomes, or is it added to the instance I want to keep?
4) Step 3: I can see how to attach this newly created volume using the AWS EC2 management console, but how do I mount it (and unmount it in Step 5)?
5) Step 4: What is the syntax for the rsync (or cp) command to copy directories/files from one volume to another (within the same instance, or if they are in different instances)? My guess at the commands for steps 3-5 is sketched below.
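Here is what I think steps 3-5 look like at the shell (a sketch only; /dev/xvdf and /mnt/newvol are placeholders for whatever device name and mount point I actually get):

  # step 3: create a filesystem on the new volume and mount it
  sudo mkfs -t ext4 /dev/xvdf
  sudo mkdir -p /mnt/newvol
  sudo mount /dev/xvdf /mnt/newvol

  # step 4: copy the data, preserving the directory structure
  sudo rsync -a /mnt/galaxyIndices/ /mnt/newvol/

  # step 5: unmount before detaching in the EC2 console
  sudo umount /mnt/newvol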
Any help would be gratefully received!
I'm attempting to create a tool dependency definition file for ExomeCNV,
which notably depends on R (I'm trying to use the package_r_3_1_1
repository). However, I can't seem to set up the dependency properly. See
attached for my tool_dependencies.xml. Also, see here <http://cl.ly/YK3C>
for the error message when I attempt to upload the tool_dependencies.xml
file to the Tool Shed.
One possible issue is that the <repository> tag's toolshed attribute,
which specifies an external Tool Shed, isn't being considered. I'm
doing this in a local Tool Shed installation that doesn't include
the package_r_3_1_1 repository, so I must refer to the Test Tool
Shed. This is inspired by a tool definition written by Björn and by
the <repository> tag description.
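For reference, the relevant part of my tool_dependencies.xml is essentially this (a sketch; I've guessed "iuc" as the owner and left out the changeset_revision):

  <tool_dependency>
      <package name="R" version="3.1.1">
          <repository toolshed="https://testtoolshed.g2.bx.psu.edu" name="package_r_3_1_1" owner="iuc" prior_installation_required="True" />
      </package>
  </tool_dependency>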
I'm getting the following exception in a failing unit test:
Testing this tool (where two of the three near-identical tests passed):
Traceback (most recent call last):
  line 116, in test_tool
    self.do_it( td )
  line 35, in do_it
    self._verify_outputs( testdef, test_history, jobs, shed_tool_id, data_list, galaxy_interactor )
  line 75, in _verify_outputs
    galaxy_interactor.verify_output( history, jobs, output_data,
  line 89, in verify_output
    self._verify_metadata( history_id, hid, attributes )
  line 102, in _verify_metadata
    dataset = self._get( "histories/%s/contents/%s" % ( history_id, hid ) ).json()
  line 740, in json
    return json.loads(self.content.decode(encoding), **kwargs)
  File "/usr/lib/python2.7/dist-packages/simplejson/__init__.py", line 413, in loads
  File "/usr/lib/python2.7/dist-packages/simplejson/decoder.py", line 402, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "/usr/lib/python2.7/dist-packages/simplejson/decoder.py", line 418, in raw_decode
    obj, end = self.scan_once(s, idx)
JSONDecodeError: Unpaired high surrogate: line 1 column 785 (char 785)
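For what it's worth, the exception itself is easy to trigger with any JSON text containing a lone high-surrogate escape (a minimal sketch against the simplejson in that traceback; column numbers will differ):

  import simplejson
  # \ud83d is the first half of a UTF-16 surrogate pair with no second half
  simplejson.loads('"\\ud83d"')
  # raises JSONDecodeError: Unpaired high surrogate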
Any thoughts? What does Galaxy write to these job-associated JSON files?
It seems the ToolShed now uses roles for granting admin rights...
but still has the old "Grant authority to make changes" feature?
I just hit a possible glitch here - I wanted to create a new repo
under the "iuc" user, edit the admin settings, then log in as my
normal personal account and upload the first version of the tool.
1. Logged into the Tool Shed as the "iuc" user.
2. Created https://toolshed.g2.bx.psu.edu/view/iuc/package_blast_plus_2_2_30
3. Attempted to add other administrators, e.g. the IUC group or myself,
but the top-right menu only offered "upload", and the old panel for
doing this ("Grant authority to make changes") was also missing.
If I do the first upload as "iuc", then the menu changes to include
"Manage Repository Administrators", and the "Grant authority to make
changes" panel (which is what we used to use) appears on the main page.
Is this a transition stage, or are the change rights a subset of
the admin role?
I installed the 'ebio_deseq' tool via Admin/Tool sheds, and I am getting this
error (it looks like something is making the R plotting code unhappy):
Error running DESeq analyse: Error in if (!ok) warning("some notches went
outside hinges ('box'): maybe set notch=FALSE") :
missing value where TRUE/FALSE needed
Calls: PlotViolin ... boxplot.matrix -> boxplot -> boxplot.default ->
do.call -> bxp
I would be grateful for help or suggestions for addressing this issue.
I've been trying to get Galaxy to submit jobs to my cluster as the
"real user" rather than the "galaxy" user.
In my galaxy.ini, I have:
outputs_to_working_directory = True
set_metadata_externally = True
drmaa_external_runjob_script = /galaxy/scripts/drmaa_external_runner.py
drmaa_external_killjob_script = /galaxy/scripts/drmaa_external_killer.py
drmaa_external_chown_script = /galaxy/scripts/external_chown_script.py
In my sudoers file I have:
galaxy ALL= (root) NOPASSWD: SETENV: /galaxy/scripts/drmaa_external_runner.py
galaxy ALL= (root) NOPASSWD: SETENV: /galaxy/scripts/drmaa_external_killer.py
galaxy ALL= (root) NOPASSWD: SETENV: /galaxy/scripts/external_chown_script.py
In my log file, I'm seeing:
galaxy.jobs.runners.drmaa DEBUG 2014-10-29 14:45:54,777 (15)
galaxy.jobs.runners.drmaa DEBUG 2014-10-29 14:45:54,783 (15) submitting with credentials: tjg [uid: 28369]
galaxy.jobs.runners ERROR 2014-10-29 14:45:54,784 (15) Unhandled exception calling queue_job
Traceback (most recent call last):
File "/galaxy/lib/galaxy/jobs/runners/__init__.py", line 96, in run_next
File "/galaxy/lib/galaxy/jobs/runners/drmaa.py", line 189, in queue_job
filename = self.store_jobtemplate(job_wrapper, jt)
File "/galaxy/lib/galaxy/jobs/runners/drmaa.py", line 348, in
f = open(filename,'w')
IOError: [Errno 2] No such file or directory: '/galaxy/database/pbs/15.jt_json'
My /var/log/messages file is not reporting any attempt to run "sudo",
so I think the process is failing before it gets to that point in the code.
If I comment out the "drmaa_external_*" lines, the jobs are submitted
successfully and run as the "galaxy" user.
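One thing I do notice: the directory from the IOError (/galaxy/database/pbs/) doesn't exist on my filesystem. Should I just be creating it by hand? Something like this (a guess on my part):

  mkdir -p /galaxy/database/pbs
  chown galaxy /galaxy/database/pbs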
What am I missing here? :)
Baskin Engineering, Room 313A