I am currently working on a Galaxy installation at our Center for Medical Research. As a login server, we are using a VM based on SLES 12. We are planning to extend it to a cluster later, and we would like to use SLURM as the DRM. Is there any recommendation or "step-by-step" recipe on how to set up Galaxy properly to avoid future issues? Are there any recommendations regarding the filesystem or directory layout that I should take care of in advance?
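For reference, a minimal job_conf.xml sketch for a SLURM destination using
Galaxy's built-in SLURM runner; this assumes a working slurm-drmaa library,
and the partition name below is a placeholder:

<job_conf>
    <plugins>
        <!-- DRMAA-based SLURM runner that ships with Galaxy -->
        <plugin id="slurm" type="runner" load="galaxy.jobs.runners.slurm:SlurmJobRunner"/>
    </plugins>
    <destinations default="slurm_cluster">
        <destination id="slurm_cluster" runner="slurm">
            <!-- passed through to sbatch; adjust partition and resources for your cluster -->
            <param id="nativeSpecification">--partition=main --ntasks=1</param>
        </destination>
    </destinations>
</job_conf>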
Any advice from experienced developers will be more than helpful.
Thank you in advance.
Mag. Marija Đurđević
Core Facility Computational Bioanalytics
Medical University of Graz
Center for Medical Research
Stiftingtalstraße 24, A-8010 Graz
Phone: +43 316/385-73024
Ah, and if you want to figure out the ID of a tool, you can click the small
information icon of a dataset produced by the tool; in the case of the upload
tool it shows:
Galaxy Tool ID: upload1
On 9 May 2016 at 12:25, Marius van den Beek <m.vandenbeek(a)gmail.com> wrote:
> Puhh, you're right, and somewhere in the depths of the wiki (I think)
> this is documented,
> but admittedly I didn't find it when searching for it.
> If you want, you could add a sample of your config there;
> I believe this is something that the Galaxy devteam carefully keeps up to date.
> You would definitely have my +1 for a Pull Request. If you don't have the
> time, I can also do this for you.
> On 9 May 2016 at 12:02, Tiziano Flati <tiziano.flati(a)gmail.com> wrote:
>> Thank you very much, Marius, that worked!
>> But how should I have known the exact ID of the upload tool (i.e.,
>> upload1)? I think this is an extremely common pattern (uploading data and
>> then executing jobs on a cluster).
>> 2016-05-09 11:38 GMT+02:00 Marius van den Beek <m.vandenbeek(a)gmail.com>:
>>> Hi Tiziano,
>>> I think you are correct in your assumption.
>>> If you do not need to have uploads run on pulsar, you should be able to
>>> specify a local destination for uploads (the upload tool id is upload1)
>>> in your job_conf.xml.
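>>> A sketch of what that mapping could look like (the pulsar destination
>>> details are placeholders, and I haven't tested this exact snippet):
>>>
>>> <job_conf>
>>>     <plugins>
>>>         <plugin id="local" type="runner" load="galaxy.jobs.runners.local:LocalJobRunner"/>
>>>         <plugin id="pulsar" type="runner" load="galaxy.jobs.runners.pulsar:PulsarRESTJobRunner"/>
>>>     </plugins>
>>>     <destinations default="win_pulsar">
>>>         <destination id="win_pulsar" runner="pulsar">
>>>             <!-- your existing pulsar parameters (e.g. the pulsar url) go here -->
>>>         </destination>
>>>         <destination id="local_upload" runner="local"/>
>>>     </destinations>
>>>     <tools>
>>>         <!-- route the upload tool locally instead of to pulsar -->
>>>         <tool id="upload1" destination="local_upload"/>
>>>     </tools>
>>> </job_conf>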
>>> There are some examples described here:
>>> If you can place a copy of galaxy and a virtualenv on your pulsar
>>> server, you could also set this up as in
>>> Note that I haven't tried this yet, but I think this is a good start.
>>> Let us know if that works.
>>> On 9 May 2016 at 10:42, Tiziano Flati <tiziano.flati(a)gmail.com> wrote:
>>>> Hi all,
>>>> I have successfully set up a Galaxy-Pulsar architecture and I am able to
>>>> run jobs on datasets *already uploaded to a history*.
>>>> When I try to upload a new file to the history, though, the upload job
>>>> fails with the following error:
>>>> Traceback (most recent call last):
>>>>   File "/home/flati/pulsar/files/staging/80/tool_files/upload.py", line 18, in <module>
>>>>     import galaxy.model  # noqa
>>>> ImportError: No module named model
>>>> Note: in job_conf.xml, Pulsar is the default destination:
>>>> <destinations default="win_pulsar">
>>>> Does someone know what the problem is?
>>>> I suspect that setting pulsar as the default destination causes the
>>>> upload tool to run on Pulsar's side which, however, does not have access to
>>>> Galaxy's lib directory (which contains the galaxy model module).
>>>> Any help is very appreciated,
I would like to have a single Python/planemo environment that is portable
across computers, where I can use planemo test (which instantiates an
instance of Galaxy).
Currently, we're using conda on our network. That way we have shared
environments in which the Python version and associated modules are
consistent across network computers/users, so any user (including the
galaxy system) can use exactly the same Python version and modules,
which greatly simplifies management. I guess it's possible to manage
dependencies through toolsheds, but we didn't want to do that.
Although probably not optimal, what we do is activate a conda environment
between the <command> tags in a tool's XML for python scripts, e.g.:
source activate pythonEnv; python Script.py $input1 > $output1
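In a full wrapper this looks roughly like the following sketch (the tool id,
input/output formats, and the $__tool_directory__ reference are illustrative;
pythonEnv and Script.py are as above):

<tool id="script_tool" name="Script" version="0.1.0">
    <!-- activate the shared conda environment, then run the script -->
    <command><![CDATA[
        source activate pythonEnv &&
        python '$__tool_directory__/Script.py' '$input1' > '$output1'
    ]]></command>
    <inputs>
        <param name="input1" type="data" format="tabular" label="Input dataset"/>
    </inputs>
    <outputs>
        <data name="output1" format="tabular"/>
    </outputs>
</tool>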
The problem is that planemo is also installed in this environment, and
since one cannot start a Galaxy instance from within a conda environment,
planemo test now fails (the environment doesn't get loaded correctly).
I can get this to work in planemo 0.14, but not in the newer versions.
Is there a workaround for this, or a better setup, or must I stay
with planemo 0.14?
I’d like to draw your attention to the following vacancy on our project in Sydney, Australia:
This is on the Alveo project and I’m posting here because part of the platform is a Galaxy server with tools developed to work on language data (text and speech). The role would suit someone with knowledge of Galaxy and some exposure to general eResearch problems; knowledge of language research methods would obviously be a bonus.
The ad mentions a part-time role for 6 months, but in practice we have funding for longer than this, and for a full-time role for the right candidate. Please get in touch if you are interested in finding out more about the post. International candidates are welcome to apply.
Department of Computing, Macquarie University
We would like to split FASTA query files and run multiple concurrent jobs
to minimize our processing wall-clock time for large jobs.
After chatting with folks at GCC 2015, I understand this is possible; my
problem is I can't find instructions on how to configure
CloudMan/ncbi_blast_plus to do this. For those of you who know me, it
probably goes without saying that I can't figure it out myself ;)
Peter/Enis/others, can you help us out with this question?
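In case it helps: the mechanism involved is, I believe, Galaxy's job/task
splitting, switched on with use_tasked_jobs = True in the Galaxy config
(galaxy.ini). The BLAST+ wrappers already declare how their query input is
split and merged, roughly like this (a sketch from memory, so the exact
attribute values may differ between wrapper versions):

<!-- parallelism declaration carried by the ncbi_blast_plus tool XML:
     split the query FASTA into chunks of ~1000 sequences, run them as
     separate tasks, then merge the per-chunk outputs -->
<parallelism method="multi" split_inputs="query"
             split_mode="to_size" split_size="1000"
             merge_outputs="output1"/>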
Is it possible to change the temporary directory used by planemo test?
The reason is that one of my XML tools uses ssh to connect to a specific
machine due to licensing requirements, e.g.:
ssh address.com 'sh runCommand.sh $input' > $output
Unfortunately, when executing a planemo test, the input is copied into a
local temporary directory (/tmp) which is of course different from the /tmp
directory on the computer where the bash script is being executed. I've
tried changing the TMP and TEMP environment variables to a location that is
available to both machines but that hasn't worked.
Is there a workaround?
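One thing that may be worth trying (untested): Python's tempfile module,
which I believe planemo relies on for its temporary files, checks TMPDIR
before TEMP and TMP, so exporting TMPDIR as a directory that both machines
share might do it; the path below is hypothetical:

# temp directory mounted on both the local and the remote machine
export TMPDIR=/shared/tmp
planemo test my_tool.xml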
I've had requests from users to install Kallisto as a Galaxy tool. I
didn't see any wrappers or evidence of wrappers in my searching, but I
thought I'd ask here to see if anyone is working on or interested in
working on such a thing.
Lance Parsons - Scientific Programmer
Carl C. Icahn Laboratory - Room 136
Lewis-Sigler Institute for Integrative Genomics