Hi Denise, 
Although the instructions you found can still be used, a couple of the steps can be updated, so I'll do that here and hopefully answer your questions in the process.

Updated instructions:
1. Start a brand new Galaxy CloudMan cluster/instance (this will only be a temporary cluster, needed just for the duration of the following steps)
2. ssh into the master instance and delete whatever genomes you don't need/want (these are all located under /mnt/galaxyIndices; use 'rm -rf <dir name>' to delete each one)
3. From CloudMan's Admin page, add a new file system of type 'new volume' and the desired size (i.e., the size of the data left on /mnt/galaxyIndices after you've deleted what you didn't want). Give this file system the name 'indices'.
4. ssh into the master instance and copy the data from /mnt/galaxyIndices to /mnt/indices using the command: rsync -avz /mnt/galaxyIndices/ /mnt/indices/ (do this as the 'galaxy' user; see the command sketch after this list)
5. Unmount the new 'indices' file system using command: sudo umount /mnt/indices
6. From the AWS console, create a snapshot of the volume for the 'indices' file system. You can retrieve the volume ID from the file system 'details' on the CloudMan Admin page
7. For the cluster you want to keep around (while it is terminated), edit persistent_data.yaml in its bucket on S3 (you can get the bucket name from the CloudMan Admin page - it will look like cm-<hash>) and replace the existing snapshot ID for galaxyIndices with the snapshot ID you got in the previous step
8. Start that cluster and you should have a file system from the new snapshot mounted
9. Terminate & delete the cluster you created in step 1
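
To make the commands concrete, here is roughly what steps 2 through 7 look like from the shell on the temporary cluster's master instance. The genome directory name, volume ID (vol-xxxxxxxx) and bucket hash are placeholders you'll need to fill in with your own values, and the AWS CLI commands for steps 6 and 7 are just an optional alternative if you happen to have the CLI installed - the AWS/S3 consoles work equally well:

    # step 2: remove the genome directories you don't want (as the 'galaxy' user)
    cd /mnt/galaxyIndices
    rm -rf <dir name>        # repeat for each unwanted genome directory

    # step 4: copy the remaining data onto the new 'indices' file system
    rsync -avz /mnt/galaxyIndices/ /mnt/indices/

    # step 5: unmount the new file system so it can be snapshotted cleanly
    sudo umount /mnt/indices

    # step 6 (alternative to the console): vol-xxxxxxxx is the volume ID
    # shown in the file system 'details' on the CloudMan Admin page
    aws ec2 create-snapshot --volume-id vol-xxxxxxxx --description "trimmed galaxyIndices"

    # step 7 (alternative to the S3 console): cm-<hash> is the bucket of the
    # cluster you want to keep; edit the downloaded file so the galaxyIndices
    # entry points at the new snap-... ID, then upload it back
    aws s3 cp s3://cm-<hash>/persistent_data.yaml .
    aws s3 cp persistent_data.yaml s3://cm-<hash>/persistent_data.yaml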

Here are my questions to help me get through step 4:

1) Step 1: Is a “cluster” the same thing as an “instance”?

Yes 

2) Step 2: I deleted the directories for individual genomes using rm -rf. Is that the correct approach?

Yes 

3) Step 3: Do I add the newly created EBS volume to the same instance where I deleted the genomes? Or is it added to the instance I want to keep?

You add the new file system (i.e., EBS volume) to the newly created cluster (and you also delete the extra genomes on that same cluster).

4) Step 3: I can see how to attach this newly created volume using the AWS EC2 management console, but how do I mount it? (and unmount it in Step 5?)

With the updated instructions above, mounting the file system by hand is no longer necessary - CloudMan mounts the new file system for you when you add it in step 3, and the unmount command for step 5 is included in the instructions.
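
For reference, if you ever do need to mount an attached volume by hand, it would look something like the following (the device name here is only a placeholder - check 'lsblk' or 'dmesg | tail' right after attaching the volume to see which device it actually got):

    sudo mkdir -p /mnt/indices
    sudo mount /dev/xvdg /mnt/indices     # device name varies by instance
    # ...and to unmount it again:
    sudo umount /mnt/indices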

5) Step 4: What is the syntax for the rsync (or cp) command to copy directories/files from one volume to another volume (within the same instance, or if they are in different instances)?

I've included the command in the instructions now.
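
One detail worth noting: the trailing slashes in that rsync command matter - they copy the contents of /mnt/galaxyIndices into /mnt/indices rather than creating a galaxyIndices subdirectory inside it. If the two volumes were attached to different instances, you could push the data over ssh instead; a rough sketch, where the key file, user and address are placeholders for your own setup:

    # same instance (this is what step 4 uses)
    rsync -avz /mnt/galaxyIndices/ /mnt/indices/

    # different instances: copy over ssh to the other instance
    rsync -avz -e "ssh -i /path/to/your-key.pem" /mnt/galaxyIndices/ ubuntu@<other-instance-address>:/mnt/indices/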


Hope this helps. Let us know if you run into any trouble or have any more questions,
Enis