The tool I am currently wrapping has built-in data, which tool users may use (through the relevant <from_data_table> + .loc file configuration).
They are .fasta databases which are rather small and are therefore bundled in the tool distribution package.
Thanks to the tool_dependencies.xml file, said distribution package is downloaded at install time, the code is compiled, and, since the data files are already there, they are copied to $INSTALL_DIR too, ready to be used.
After that, the user still has to edit tool-data/my_fancy_data_files.loc; but the thing is, during the install I know where these data files are (since I copied them there), so I would like to save the user the trouble and set this file up automagically.
I would have two questions:
1/ Is it okay to have tool built-in data files in $INSTALL_DIR, or would that be considered bad practice?
2/ Is there a way to set up tool-data/my_fancy_data_files.loc during the install? Here are the options I thought of:
* shipping a "real" my_fancy_data_files.loc.sample with the correct paths already set up, which would then be copied as the .loc file (a rather ugly hack)
* using more <action type="shell_command"> steps during install to create my_fancy_data_files.loc (but deploying this file is not part of the tool dependency install per se)
* a variant of the previous: shipping my_fancy_data_files.loc as part of the tool distribution package and copying it via shell_command (same concern as above).
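To make the second option concrete, it might look roughly like this inside tool_dependencies.xml. Everything here is illustrative: the file names and /path/to/galaxy/tool-data are placeholders, and the columns written per line must of course match your tool data table definition.

```xml
<!-- Sketch only: paths, file names, and the .loc location are illustrative. -->
<actions>
    <!-- copy the bundled databases into the install directory -->
    <action type="shell_command">cp data/*.fasta $INSTALL_DIR/data/</action>
    <!-- append one tab-separated entry (value, name, path) per database -->
    <action type="shell_command">for f in $INSTALL_DIR/data/*.fasta; do printf '%s\t%s\t%s\n' "$(basename $f .fasta)" "$(basename $f .fasta)" "$f" >> /path/to/galaxy/tool-data/my_fancy_data_files.loc; done</action>
</actions>
```

The concern noted above still applies: the second action reaches outside the dependency's own install directory, which is arguably not the job of a tool dependency install.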
Bonsai Bioinformatics group
I have developed a tool in our local Galaxy, and imported it into a local
toolshed. I next removed the 'dev' version of the tool in Galaxy, and
installed it via the Tool Shed Manager.
I got an error that the dependencies could not be installed. I figured out
that I needed to add an extra shell_command in the tool_dependencies (and
that the mount point on my system needs to have exec permissions).
So I updated the tool in the toolshed. BUT, then I uninstalled the tool
from Galaxy, and reinstalled it from the Tool Shed ... instead of just
updating. Anyway, it has happened.
Now I have the tool listed in my 'Installed Tool Shed Repositories', and when
I click on it, it says 'this tool is not installed'. The only action I can
apply to this installed tool shed repository is 'Install' (...). But
clicking on that 'Install' button leads me to an error page (HTTP error 500):
Module weberror.evalexception.middleware:364 in respond view
>> app_iter = self.application(environ, detect_start_response)
Module paste.recursive:84 in __call__ view
>> return self.application(environ, start_response)
Module paste.httpexceptions:633 in __call__ view
>> return self.application(environ, start_response)
Module galaxy.web.framework.base:132 in __call__ view
>> return self.handle_request( environ, start_response )
Module galaxy.web.framework.base:190 in handle_request view
>> body = method( trans, **kwargs )
Module galaxy.web.framework:221 in decorator view
>> return func( self, trans, *args, **kwargs )
Module galaxy.webapps.galaxy.controllers.admin_toolshed:825 in
>> raw_text = common_util.tool_shed_get( trans.app, tool_shed_url, url )
Module tool_shed.util.common_util:110 in tool_shed_get view
>> response = urlopener.open( uri )
Module urllib2:397 in open view
>> response = meth(req, response)
Module urllib2:510 in http_response view
>> 'http', request, response, code, msg, hdrs)
Module urllib2:435 in error view
>> return self._call_chain(*args)
Module urllib2:369 in _call_chain view
>> result = func(*args)
Module urllib2:518 in http_error_default view
>> raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
HTTPError: HTTP Error 500: Internal Server Error
The same happens when I go to the Tool Shed, and try to install the tool
from there: Galaxy detects that the tool has been installed before, and
points me to that 500 error page.
Any help is heartily appreciated,
Contact details: http://www.bits.vib.be/index.php/about/80-team
After the latest update of Galaxy I saw that the cuffdiff wrapper is for
version 2.0. I updated $PATH to point to cuffdiff v2.0.2, but when I run
cuffdiff in Galaxy I get the following error:
Error running cuffdiff. cuffdiff: /lib64/libz.so.1: no version information
available (required by cuffdiff)
cuffdiff: unrecognized option '--library-norm-method'
cuffdiff v1.3.0 (3022)
What else do I need to define in order to run the latest version of cuffdiff?
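The "cuffdiff v1.3.0" banner in the error suggests an older binary earlier in $PATH is still being picked up, and note that the Galaxy process may see a different environment than your interactive shell. A quick way to convince yourself of the PATH-precedence behaviour, using dummy scripts in place of the real binaries:

```shell
# Two dummy "cuffdiff" scripts standing in for the old and new installs.
mkdir -p /tmp/old_bin /tmp/new_bin
printf '#!/bin/sh\necho "cuffdiff v1.3.0"\n' > /tmp/old_bin/cuffdiff
printf '#!/bin/sh\necho "cuffdiff v2.0.2"\n' > /tmp/new_bin/cuffdiff
chmod +x /tmp/old_bin/cuffdiff /tmp/new_bin/cuffdiff

# Whichever directory comes first in PATH wins:
PATH="/tmp/old_bin:/tmp/new_bin:$PATH" cuffdiff   # prints: cuffdiff v1.3.0
PATH="/tmp/new_bin:/tmp/old_bin:$PATH" cuffdiff   # prints: cuffdiff v2.0.2
```

On the real system, running `which -a cuffdiff` as the user that runs Galaxy will list every copy on its PATH in precedence order.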
I have a question regarding the param tag. I would like to pass the
user email on to an external python script. I tried to use it like this:
<param name="email" type="hidden" value=$__user_email__ />
<param name="experiment" type="select" label="Experiment" help="select
This does not work. Ideally I would like to have something like this:
Has someone done this before?
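Not an authoritative answer, but one approach seen in other wrappers is to reference $__user_email__ in the <command> template (which Galaxy evaluates as a Cheetah template) rather than in a <param> value. A sketch, where my_script.py and --email are placeholder names:

```xml
<command>
    <!-- $__user_email__ is expanded when Galaxy builds the command line -->
    my_script.py --email "$__user_email__" --experiment "$experiment"
</command>
```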
Center for Information Sciences and Databases (C-ISD)
Department of Biosystems Science & Engineering (D-BSSE)
ETH Zurich, Maulbeerstrasse (1078, 1.02), CH-4058 Basel, +41 61 387 3132
In my local Galaxy, when I try to reset my login password, I get this error: "Mail is not configured for this Galaxy instance. Please contact an administrator." Could anyone tell me how to configure mail for my Galaxy instance?
Thank you very much.
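In case it helps, the relevant settings live in universe_wsgi.ini; a minimal sketch, where the host name, credentials, and addresses are placeholders for your own:

```ini
# Mail settings in universe_wsgi.ini (restart Galaxy after editing).
smtp_server = smtp.example.org:587
smtp_username = galaxy-mailer
smtp_password = secret
error_email_to = galaxy-admin@example.org
```

If your SMTP server allows unauthenticated relay, the username/password lines can be left unset.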
I am having trouble setting up a FTP connection with the recently released
version of Galaxy Cloudman (ami-118bfc78).
I have instantiated the new version of Galaxy Cloudman with
also through the AWS EC2 wizard (using the same security group settings as
the previous versions) and neither instance will connect to my FTP
Has anyone else had this problem? Does anyone know what is preventing the
Any help would be greatly appreciated.
I found a couple of issues using the latest version of CloudMan.
The "Extract Genomic DNA" tool returns an error:
"No sequences are available for 'mm9', request them by reporting this
Upon trying to report the error in Galaxy (on the page that comes up when
you click the bug icon) I get the error:
"Mail is not configured for this galaxy instance"
Any help on fixing the "Extract Genomic DNA" tool would be great. Thanks.
*Last night, we released an update to Galaxy CloudMan.* CloudMan offers an
easy way to get a personal and completely functional instance of Galaxy in
the cloud in just a few minutes, without any manual configuration.
*IMPORTANT - please read*
Any new cluster will automatically start using this version of CloudMan.
Existing clusters will be given an option to do an automatic update once
the main interface page is refreshed. Note that this upgrade is a major
version upgrade and thus the migration is rather complicated. The migration
process has been automated but will take a little while to complete. If you
have made customizations to your cluster in terms of adding file systems,
upgrading the database, or similar, we do not recommend you perform the
upgrade. Note that this upgrade comes with (and requires) a new AMI
(ami-118bfc78), which will be automatically used if starting an instance
via CloudLaunch <http://usegalaxy.org/cloudlaunch>.
*This update brings a large number of changes and new features, the most
prominent being:*
- Unification of *galaxyTools* and *galaxyData* file systems into a single *
galaxy* filesystem. This change makes it possible to utilize the Galaxy
Tool Shed when installing tools into Galaxy.
- Added initial support for Hadoop-type workloads
- Added initial support for cluster federation via HTCondor
- Added a new file system service for the instance's transient storage,
allowing it to be used across the cluster over NFS
- Added a service for Galaxy Reports webapp
- Added optional Loggly (loggly.com) based off-site logging support
- Added tags to all resources utilized by CloudMan
For more details on the new features, see the release notes; for even more
detail, see all 291 changes.
Enjoy and please let us know what you think,
We also now have a logo for CloudMan.