Error running tool
by Toqa Manasrah
Hello everybody,
I am trying to run my tool Vispa, but every time I try I get these error messages:
An error occurred running this job: /home/galaxy/galaxy-dist/tools/vispa/main.bash: line 13: ./parser_v.2.sh: No such file or directory
/home/galaxy/galaxy-dist/tools/vispa/main.bash: line 16: ./vispa.bash: No such file or directory
although all these files reside in the same directory and have permission 755.
How can I solve the problem?
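One guess about the cause: Galaxy seems to run each job from its own working directory rather than from the tool folder, so relative paths like ./parser_v.2.sh would resolve against the job directory. Below is an untested sketch of a workaround in the wrapper's <command> tag, assuming the scripts stay in /home/galaxy/galaxy-dist/tools/vispa and that $reads, $reference, $threads, $n, $t and $log are my wrapper's own parameter names:
<command>
    ## cd into the tool directory so that ./parser_v.2.sh and ./vispa.bash are found
    cd /home/galaxy/galaxy-dist/tools/vispa &amp;&amp; ./main.bash $reads $reference $threads $n $t > $log
</command>
Since Galaxy substitutes absolute paths for the dataset placeholders, changing the directory should not affect where $log is written, but I have not verified this.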
Thanks and regards,
Tuqa.
10 years, 7 months
Galaxy Upload tool
by Toqa Manasrah
Hi everybody,
I am integrating my own tool with Galaxy. The command to run is:
./main.bash <readFile> <referenceFile> threads n t > <log> &
The input files read and reference are in FASTA format.
threads, t and n are integers.
log captures the console output so nothing is printed to the terminal.
Should I load these files from the page that runs the command, or do I have to use the Galaxy upload tool? And how?
On the other hand, the output is implicit and should end up in the same directory as the source files.
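To make the question more concrete, here is a rough, untested sketch of the wrapper .xml I have in mind, assuming the FASTA files are first uploaded into a history with the Galaxy upload tool. The parameter names (reads, reference, threads, n, t, log) are just names I picked, and I dropped the trailing & since Galaxy runs the job itself:
<tool id="vispa" name="ViSpA" version="0.1">
    <command interpreter="bash">
        main.bash $reads $reference $threads $n $t > $log
    </command>
    <inputs>
        <!-- datasets already uploaded to the history are offered here -->
        <param name="reads" type="data" format="fasta" label="Read file"/>
        <param name="reference" type="data" format="fasta" label="Reference file"/>
        <param name="threads" type="integer" value="1" label="Number of threads"/>
        <param name="n" type="integer" value="1" label="n"/>
        <param name="t" type="integer" value="1" label="t"/>
    </inputs>
    <outputs>
        <data name="log" format="txt" label="ViSpA log"/>
    </outputs>
</tool>
If I understand the mechanism correctly, Galaxy would substitute the paths of the uploaded datasets for $reads and $reference, and the redirected output would be collected as the $log history dataset on the server rather than left next to the source files.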
Thank you so much.
Tuqa.
10 years, 7 months
Upload executable in galaxy
by Giota Kottara
Dear galaxy developers,
I am trying to integrate an executable tool in a private instance of Galaxy. I stored the configuration .xml file and the executable in the tools folder of Galaxy, and the tool appears on the homepage. Nevertheless, when I run the tool it doesn't seem to find the executable. Am I forgetting something?
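One thing I plan to try (an untested sketch; my_executable, $input and $output are placeholder names and /path/to/galaxy-dist is just an example path for my instance) is to reference the executable by its full path in the <command> tag, since a bare name seems to be looked up in the job's working directory or PATH rather than in the tools folder:
<command>
    ## full path instead of a bare executable name
    /path/to/galaxy-dist/tools/mytool/my_executable $input > $output
</command>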
Thank you in advance,
Giota Kottara
10 years, 7 months
Galaxy test won't run a bwa job
by Carlos Borroto
Hi,
I'm trying to test a workflow using tools only available on the test
server; for this I have uploaded a limited subset of my data that
should run fairly quickly. The first step is a BWA mapping, but the
job has been in the queue since yesterday. Is it fine to run this
kind of test there?
Thanks,
Carlos
10 years, 7 months
tools for exome sequence
by shamsher jagat
Is there any tool (workflow) in Galaxy for exome sequence analysis (human/mouse)?
Thanks.
10 years, 7 months
Using Galaxy Cloudman for a workshop
by Clare Sloggett
Hi all (especially Enis :) ),
We are planning to use Amazon (Galaxy CloudMan) to run a workshop for
about 50 people. We won't need to transfer any data during the
workshop, but need the virtual cluster to be reasonably responsive and
cope with:
a) the load on the front end
b) the workshop participants each trying to run a bwa alignment - at
the moment each alignment would be of about 2.8M reads, but we could
cut it down
c) any other scalability issues I may not have thought of?
I wanted to ask if anyone has used CloudMan for a similar purpose, or
has an understanding, based on running a Galaxy cluster, of any
problems we might encounter? I can add enough nodes to the cluster on
the day to cope with the computational load (I assume) but I'm not
sure if I should be expecting any other problems.
Is the size of the node (e.g. Amazon's 4-core vs 8-core nodes) very
important? I can scale out by adding more nodes, but should I be
concerned about the capacity of the master node which handles the
traffic?
Also, is there any sensible way for me to test it in advance (in terms
of the user load)?
Many thanks for any advice!
Clare
--
E: sloc(a)unimelb.edu.au
P: 03 903 53357
M: 0414 854 759
10 years, 7 months
Running shell script
by Toqa Manasrah
Hi everybody,
I am integrating my software ViSpA inside galaxy and I have two questions:
1. The main program is a shell script (.sh) which calls another shell script, which in turn calls Java executables. Should I include any interpreter inside the .xml <command> tag?
And if so, what should it be?
2. How can I guarantee that the output files end up on the user's local host rather than on the server that is running Galaxy?
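For question 1, here is an untested sketch of the two forms of the <command> tag I am considering ($reads, $reference, $threads, $n, $t and $log are just the parameter names from my draft wrapper):
<!-- variant A: let Galaxy supply the interpreter and find main.bash in the tool directory -->
<command interpreter="bash">
    main.bash $reads $reference $threads $n $t > $log
</command>

<!-- variant B: call bash explicitly with the full path to the script -->
<command>
    bash /home/galaxy/galaxy-dist/tools/vispa/main.bash $reads $reference $threads $n $t > $log
</command>
My current understanding, which I would like confirmed, is that files declared in <outputs> stay on the Galaxy server as history datasets and the user downloads them from the web interface, rather than Galaxy writing anything to the user's local host.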
Thanks and regards,
Tuqa.
10 years, 7 months
Fwd: BioMart - DATABASE Special Issue
by Arek Kasprzyk
For those of you who use BioMart as a Galaxy data source
Dear All,
I am very pleased to announce the publication of the DATABASE special issue
dedicated to the BioMart project
http://www.oxfordjournals.org/our_journals/databa/biomart_virtual_issue.html
BioMart is a unique open source data federation technology that provides
unified access to distributed databases storing a wide range of data. This
DATABASE issue recognizes BioMart's outstanding contributions to
bioinformatics and documents the achievements of the BioMart community,
which has grown impressively over the last ten years to become what it is
today, a collaboration between numerous database projects on five
continents: Asia, Australia, Europe, North America and South America.
I would like to thank and congratulate the BioMart community on this
impressive achievement. This issue would not have been possible without
their dedication, expertise, passion and enthusiasm.
Arek Kasprzyk
*Have a look at our new website at www.biomart.org!* (You may need to
shift+reload a few times to get new pages)
10 years, 7 months