I am a relatively new user of Galaxy and I had a question regarding "Fetching Taxonomic Information". It is great that I can retrieve all of the hits for each sequence, but I cannot seem to find an option that also reports how accurate the match to the given taxon is, for instance a percentage match. I can access this information in the original file and retrieve it programmatically, but it would be nice if it came in one package so that I can avoid those false hits that have a low percentage match. Could you please provide instructions on how best to retrieve this information (hopefully in a single file)?
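For reference, a minimal sketch of the programmatic filtering mentioned above. It assumes the original file is standard BLAST/megablast tabular output where column 3 is the percent identity; the column position and the 97% threshold are assumptions, not something Galaxy guarantees:

```shell
# Keep only hits with >= 97% identity; column 3 is percent identity in
# standard BLAST tabular output (an assumption about the original file).
printf 'seq1\tgi|123\t99.50\nseq2\tgi|456\t80.00\n' |
awk -F'\t' '$3 >= 97.0'
```

In practice you would replace the `printf` with your real hits file, e.g. `awk -F'\t' '$3 >= 97.0' hits.tabular > filtered.tabular`, and run the taxonomy step on the filtered file.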
I have a problem converting Interval to GFF.
I used the tool
My Interval file contains 18 columns:
chr1 33308998 33309020 + cel-let-7-5p 0 chr1 33308999 255 22M * 0 0 AGAGGAAGAAGGAAGAAAAGAA UGAGGUAGUAGGUUGUAUAGUU XA:i:0 MD:Z:22 NM:i:0
However, my GFF output only contains 9 columns, and it has removed the feature name:
chr1 bed2gff region_0 33308999 33309020 0 + . region_0;
Instead of region_0 I want the gene name, in this case the miRNA name. How do I do this?
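Until the tool supports this directly, one workaround sketch is to rebuild the GFF yourself with awk, carrying column 5 of the interval file (the miRNA name) into the last GFF field. The column positions below are read off the sample line above and may need adjusting for your file:

```shell
# Columns assumed: 1=chrom, 2=start (0-based), 3=end, 4=strand, 5=name, 6=score.
# GFF starts are 1-based, hence the $2+1.
printf 'chr1\t33308998\t33309020\t+\tcel-let-7-5p\t0\n' |
awk -F'\t' 'BEGIN{OFS="\t"} {print $1, "bed2gff", "region", $2+1, $3, $6, $4, ".", $5}'
```

Run against a real file, this would be `awk -F'\t' '...' input.interval > output.gff`.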
Hi Galaxy developers, I have defined this parameter in my tool's XML file and I am passing the "tl" value to a function in my Python file. But when I set type="text", it shows a text field containing "tl". All I want to know is how I can pass the value without showing a text field on Galaxy. Thanks.
<param name="site" type="text" label="site for" value="tl" help="Enter tl "/>
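If the value never needs to be edited by the user, one common approach (assuming a reasonably current Galaxy tool XML schema) is to declare the parameter as hidden; it is still passed through to your Python code, but no text field is rendered:

```xml
<param name="site" type="hidden" value="tl" />
```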
I'm glad you found an alternative solution.
Late last night, I realized I had forgotten to email you yesterday as I
completed implementing the feature for disabling the master from running
jobs within CloudMan (
https://bitbucket.org/galaxy/cloudman/changeset/b21e967d30f9). I have not
yet updated the official CloudMan (because there are some other features
I'd like to add before doing so) but it is possible to get this code by
pulling CloudMan's source from bitbucket, creating cm.tar.gz, and uploading
it to your cluster's bucket. Let me know if you'd like to give it a shot
and I can point you to a couple of scripts to make that task trivial.
I also like the solution you found because it does not necessarily exclude
the master from running jobs, just limits it. I'll see about adding that
option as well.
On Tue, Mar 13, 2012 at 5:36 AM, John Major <john.e.major.jr(a)gmail.com> wrote:
> Thanks again for your advice. It led me on a small SGE learning dive, and
> I actually settled on a different solution that might be easier to
> configure, so I figured I'd share.
> Using 'qconf -me IP'
> I set:
> 'complex_values slots=0' (or 1 if the head node was bigger).
> This seemed to do the trick: the head node did not get overwhelmed, and
> nodes were quickly added if the jobs were not fast to execute. And it seems
> to persist during auto-scaling.
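For scripting, the same change can be made non-interactively instead of through the editor that 'qconf -me' opens. This is a sketch assuming a standard SGE qconf, with the master's hostname as a placeholder:

```shell
# Set the slot count on the master's exec host entry without opening an
# editor (equivalent to editing complex_values via 'qconf -me').
qconf -mattr exechost complex_values slots=0 <master-hostname>
```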
> On Tue, Mar 6, 2012 at 4:00 PM, Enis Afgan <enis.afgan(a)irb.hr> wrote:
>> Hi John,
>> Theoretically, this is a straightforward task but in reality CloudMan
>> gets in the way of making it stick. Namely, if you are to manually
>> remove the master instance from being an execution host, CloudMan will
>> add it back in the next time a node is added or removed from the
>> cluster, thus negating your manual modification.
>> So, I will add this feature to CloudMan itself but I cannot commit to
>> a date right now. It should be soon though.
>> In the meantime, if you'd like to script this yourself via an ad-hoc
>> solution that runs periodically, below is the procedure for manually
>> removing a node from SGE's execution host list:
>> ubuntu@ip-10-204-170-63:~$ sudo -s
>> root@ip-10-204-170-63:~# qhost
>> HOSTNAME ARCH NCPU LOAD MEMTOT MEMUSE SWAPTO
>> global - - - - - -
>> ip-10-204-170-63 lx24-amd64 1 1.11 615.2M 161.6M 0.0
>> # Remove the host from the list of execution hosts
>> root@ip-10-204-170-63:~# qconf -de ip-10-204-170-63
>> Host object "ip-10-204-170-63" is still referenced in cluster queue
>> # Edit the configuration of allhosts and remove the host in question.
>> If this is the only host in the list, replace its name with the word NONE
>> root@ip-10-204-170-63:~# qconf -mhgrp "@allhosts"
>> root@ip-10-204-170-63.ec2.internal modified "@allhosts" in host group
>> # Show configuration of group allhosts
>> root@ip-10-204-170-63:~# qconf -shgrp "@allhosts"
>> group_name @allhosts
>> hostlist NONE
>> # The host is now removed from the list of execution hosts
>> root@ip-10-204-170-63:~# qstat -f
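The interactive steps above could be wrapped for a periodic (e.g. cron) job roughly like this. A sketch only: it assumes a standard SGE qconf on the PATH and that it runs as root on the master:

```shell
#!/bin/sh
# Remove this host from the @allhosts group, then drop it from the
# execution host list so SGE stops scheduling jobs onto it.
MASTER=$(hostname)
qconf -dattr hostgroup hostlist "$MASTER" @allhosts
qconf -de "$MASTER"
```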
>> On Wed, Mar 7, 2012 at 6:23 AM, John Major <john.e.major.jr(a)gmail.com>
>> > Hello All-
>> > I'd like to launch a galaxy-cloudman head node which does not accept SGE
>> > jobs, but instead has submitted jobs go to compute nodes (or causes
>> > compute nodes to be added when auto-scale is on).
>> > Primarily, this is because I'd like the head node to be a cheaper
>> > instance which can run long term, and only fire up more expensive
>> > compute nodes when they are actually needed.
>> > How would I enable this?
>> > Thanks-
>> > John
>> > ___________________________________________________________
>> > The Galaxy User list should be used for the discussion of
>> > Galaxy analysis and other features on the public server
>> > at usegalaxy.org. Please keep all replies on the list by
>> > using "reply all" in your mail client. For discussion of
>> > local Galaxy instances and the Galaxy source code, please
>> > use the Galaxy Development list:
>> > http://lists.bx.psu.edu/listinfo/galaxy-dev
>> > To manage your subscriptions to this and other Galaxy lists,
>> > please use the interface at:
>> > http://lists.bx.psu.edu/
I've found that I need to be really aggressive with forcing a purge and
refresh of the browser when changing things like static/welcome.html and
static/images/whatever.png. It might be a function of the nginx
light-weight HTML renderer within Galaxy caching things on top of the
normal caching. You need to purge all the cache in your browser totally,
e.g. in Chrome it's "Preferences / Delete All Browser Data" (or something
like that, I'm away from my normal computer). Just forcing a refresh with
the usual refresh/reload icon isn't enough.
On Wed, Mar 7, 2012 at 6:52 PM, huayan gao <huayangao(a)gmail.com> wrote:
> Hi Nate,
> I got a silly question for you. My galaxy is running now but I'd like
> to customize it.
> I've changed the welcome.html page but the Galaxy mirror site is
> showing the old one. Do I need to change some other files or settings?
> Also, I added a <label> in the tool_conf.xml file but it does not show up
> either. Is there another file I need to change too?
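For reference, a tool panel label in tool_conf.xml is declared like this (a sketch; the text and id are made up, and the Galaxy server of this era generally needs a restart before tool_conf.xml changes appear):

```xml
<label text="My Custom Tools" id="my_custom_tools" />
```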
Port Jackson Bioinformatics
I'm trying to restart my Galaxy Cloudman service, using the same approach
that has been successful over the last couple of months ..
- launch AMI 861460482541/galaxy-cloudman-2011-03-22 as m1.large
- update from Cloudman console
- copy in my tools etc etc
- away we go, all works
However today the update fails, the log says ...
RuntimeWarning: __builtin__.file size changed, may indicate binary incompatibility
from csamtools import *
python path is:
Traceback (most recent call last):
line 82, in app_factory
app = UniverseApplication( global_conf = global_conf, **kwargs )
File "/mnt/galaxyTools/galaxy-central/lib/galaxy/app.py", line 24, in __init__
File "/mnt/galaxyTools/galaxy-central/lib/galaxy/config.py", line
243, in check
tree = parse_xml( config_filename )
line 105, in parse_xml
tree = ElementTree.parse(fname)
line 859, in parse
line 576, in parse
source = open(source, "rb")
IOError: [Errno 2] No such file or directory: './migrated_tools_conf.xml'
Removing PID file paster.pid
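A possible fix, as a sketch only (an assumption based on the missing-file error, and on Galaxy checkouts of this era shipping a sample copy of this config): recreate the file from its sample before restarting:

```shell
cd /mnt/galaxyTools/galaxy-central
# Galaxy refuses to start without this file; restore it from the sample,
# assuming migrated_tools_conf.xml.sample exists in the checkout.
cp migrated_tools_conf.xml.sample migrated_tools_conf.xml
```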
While I'm here: I see there is a new Galaxy CloudMan AMI.
I can't manage to start it; I get an error as below with all instance
types (tiny/small/medium/large). Is that a recommended AMI now? It
would be good to have a new, updated AMI.
[image: Inline image 1]
Port Jackson Bioinformatics
I think there is a problem with the way permissions are set when uploading
files. As admin, when I add new files to a library by file path upload, I
select just one specific role which is a group of people (at the bottom of
the upload page). There are several different roles available, as well as
single users. When I select the single role, it gets highlighted as
expected in the drop down list and I click Upload to library. After upload
is complete I find that the Access permissions always include me as a
single user in addition to the role I actually selected during upload. This
causes access to be denied to the role (the group) because there is now a
personal role associated with the library.
Is this a bug? How can I get only the single role I selected associated with
the Access permissions on upload?