Yves:
I am currently investigating if Galaxy Cloudman can help us in analyzing large NGS datasets.
I was first impressed by the simple setup, the autoscaling, and the usability of Galaxy CloudMan, but soon ran into the 1 TB limit on EBS volumes.
I thought I would be clever: I unmounted the /mnt/galaxyData EBS volume, created a 2 TB logical volume, and remounted it at /mnt/galaxyData.
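For reference, combining multiple EBS volumes into one larger logical volume with LVM typically looks like the sketch below. The device names, volume-group name, and filesystem choice are illustrative assumptions, not taken from this thread; the EBS volumes must already be attached to the instance:

```shell
# Assumes two 1 TB EBS volumes attached as /dev/sdf and /dev/sdg
# (actual device names vary by instance and kernel).
pvcreate /dev/sdf /dev/sdg                    # mark both volumes for LVM use
vgcreate galaxy_vg /dev/sdf /dev/sdg          # pool them into one volume group
lvcreate -l 100%FREE -n galaxy_lv galaxy_vg   # one ~2 TB logical volume
mkfs.xfs /dev/galaxy_vg/galaxy_lv             # create a filesystem on it
mount /dev/galaxy_vg/galaxy_lv /mnt/galaxyData
```

Note that CloudMan itself manages /mnt/galaxyData, so a volume swapped in by hand like this may not be reported correctly in its status display.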
How did you create this volume? I know there are some tricks to get around the 1 TB limit: http://alestic.com/2009/06/ec2-ebs-raid. In the screenshot you sent, it looks like CloudMan is a bit confused about the disk size: the Disk Status lists 1.2 TB out of 668 GB, which might be the source of your problems.
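The trick described at that link stripes several EBS volumes into one block device with software RAID instead of LVM. A minimal sketch, assuming two attached volumes (device names are illustrative):

```shell
# Stripe two EBS volumes into a single RAID-0 array.
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdf /dev/sdg
mkfs.xfs /dev/md0                # filesystem on the striped array
mount /dev/md0 /mnt/galaxyData   # mount it where Galaxy expects its data
```

RAID-0 gives capacity and throughput but no redundancy: losing any one EBS volume loses the whole filesystem, so snapshots or S3 copies remain important.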
Everything is green, as you can see from the picture below, but running a tool is not possible; I assume Galaxy is not configured to work with a logical volume.
Can you describe what errors you are seeing?
It would be a real waste to have this fine setup (with autoscaling) be unusable because there is not enough storage.
Does anybody have experience with this? Tips, tricks...
The more general answer is that folks do not normally use EBS this way, since keeping large permanent EBS filesystems is expensive. S3 stores larger data, up to 50 TB, at a more reasonable price. S3 files are then copied to a transient EBS store, processed, and uploaded back to S3. This isn't as automated, since it will be highly dependent on your workflow and which files you want to save, but it may be worth exploring in general when using EC2. Hope this helps, Brad
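The staging pattern Brad describes can be sketched as below. The bucket name, paths, and tool invocation are hypothetical placeholders; the point is the shape of the workflow, not specific commands from this thread:

```shell
# Pull input data from S3 onto transient EBS scratch space.
aws s3 cp s3://my-ngs-bucket/run42/reads.fastq.gz /mnt/scratch/
gunzip /mnt/scratch/reads.fastq.gz

# ... run the analysis tools against /mnt/scratch here ...

# Push only the results worth keeping back to S3, then the
# scratch volume can be discarded to stop paying for it.
aws s3 cp /mnt/scratch/results.bam s3://my-ngs-bucket/run42/results.bam
```

Only the files explicitly copied back survive, which is what makes this cheaper than a large permanent EBS filesystem but also why it needs per-workflow scripting.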