Hi Enis,
 
I started using the terminal to check whether the job was running, but it had already finished successfully around that time. Thanks again for helping me complete the run.
 
Now I have an additional issue. I wanted to save my BAM file, but I kept getting an error. I think the error occurred because the file was too large to transfer (4.1 GB). So I saved what I could to my local HD and terminated the cluster. My EBS volume is 200GB and persisted after the cluster was terminated.
 
I assume that my BAM file resides somewhere in the EBS volume. I started a new Unix cluster and "attached" the EBS volume to that cluster. I also established an ssh connection to the Unix cluster, but I do not know where to find the BAM file. Do you know how I can access the BAM file so that I can transfer it to my local HD?
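(Editor's note: a minimal sketch of one way to locate and copy the file from a reattached data volume. The device name /dev/xvdf, mount point /mnt/ebs, key path, and DNS name below are placeholders, not values from this thread; check `lsblk` or the AWS console for the real device. On a running Cloudman cluster, the data volume is typically mounted at /mnt/galaxyData, with datasets stored under its files/ directory.)

```shell
# On the new instance: list block devices to identify the attached EBS volume
lsblk

# Mount the volume somewhere readable (assumes a single filesystem on the device)
sudo mkdir -p /mnt/ebs
sudo mount /dev/xvdf /mnt/ebs

# Search the mounted volume for BAM files
sudo find /mnt/ebs -name '*.bam'

# Then, from the local machine, copy the file down
# (key path, DNS name, and file path are placeholders)
scp -i ~/.ssh/my-key.pem ubuntu@<instance-public-DNS>:/mnt/ebs/files/output.bam .
```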
 
Thanks,
Mike
--- On Wed, 4/13/11, Enis Afgan <eafgan@emory.edu> wrote:

From: Enis Afgan <eafgan@emory.edu>
Subject: Re: [galaxy-user] Help!!!!!! with Galaxy Cloud!!!!!
To: "vasu punj" <punjv@yahoo.com>
Cc: galaxy-user@bx.psu.edu
Date: Wednesday, April 13, 2011, 10:01 AM

Hi Vasu, 
I am not sure I understand your question but the general instructions on how to get started and use Galaxy on the cloud (i.e., Cloudman) are available at usegalaxy.org/cloud

Let us know if that page does not answer your questions,
Enis

On Wed, Apr 13, 2011 at 9:40 AM, vasu punj <punjv@yahoo.com> wrote:
I was wondering if there are instructions on how I can run Galaxy on the cloud console. First, I want to know how Galaxy is set up on the console. Can someone direct me to the instructions, please?
 
Thanks.
  
--- On Tue, 4/12/11, Enis Afgan <eafgan@emory.edu> wrote:

From: Enis Afgan <eafgan@emory.edu>
Subject: Re: [galaxy-user] Help!!!!!! with Galaxy Cloud!!!!!
To: "Mike Dufault" <dufaultm@yahoo.com>
Cc: galaxy-user@bx.psu.edu
Date: Tuesday, April 12, 2011, 9:31 PM


Galaxy has the functionality to recover any jobs that were running when it is restarted, so it is quite possible for the job to still be running. In addition, from the Cloudman console, it appears that at least one instance is pretty heavily loaded, which can also mean that the job is still running. However, without actually accessing the instance through the command line and checking the status of the job queue, it is not possible to tell whether the job is actually running. Do you know how to do that? It's just a few commands in the terminal:
- access the instance
[local]$ ssh -i <path to the private key you downloaded from AWS when you created a key pair> ubuntu@<instance public DNS>
- become galaxy user
[ec2]$ sudo su galaxy 
- list any running jobs
[ec2]$ qstat

If that command returns a list of jobs and the jobs are in state 'r' (running), the job is still running; otherwise, no.

Let me know how it goes,
Enis

On Tue, Apr 12, 2011 at 9:49 PM, Mike Dufault <dufaultm@yahoo.com> wrote:
Hi Enis,

THANK YOU!!!

I see that my "filter pileup on data" step is running. Is this the same analysis that was running before, or did it relaunch when you restarted Galaxy? I just don't know if the analysis would be compromised.

Thanks again to you and the whole Galaxy team.

Best,

Mike

--- On Tue, 4/12/11, Enis Afgan <eafgan@emory.edu> wrote:

From: Enis Afgan <eafgan@emory.edu>
Subject: Re: [galaxy-user] Help!!!!!! with Galaxy Cloud!!!!!
To: "Mike Dufault" <dufaultm@yahoo.com>
Cc: "Anton Nekrutenko" <anton@bx.psu.edu>, galaxy-user@bx.psu.edu
Date: Tuesday, April 12, 2011, 9:16 PM


Ahh, for some reason Cloudman thinks Galaxy is not 'running' but still 'starting', and has thus not enabled that button. To access the analysis, in your browser, just delete the '/cloud' part of the URL and that should load Galaxy.

Sorry about the confusion,
Enis

On Tue, Apr 12, 2011 at 9:12 PM, Mike Dufault <dufaultm@yahoo.com> wrote:
Hi Enis,

Thanks for looking into this.

From the Galaxy Cloudman Console, I can see that it was restarted from the log (thanks), but the "Access Galaxy" choice is still grayed out and I don't know how to access the Analysis window.

Is there a way back into my analysis?

Thanks,
Mike



--- On Tue, 4/12/11, Enis Afgan <eafgan@emory.edu> wrote:

From: Enis Afgan <eafgan@emory.edu>
Subject: Re: [galaxy-user] Help!!!!!! with Galaxy Cloud!!!!!
To: "Mike Dufault" <dufaultm@yahoo.com>
Cc: "Anton Nekrutenko" <anton@bx.psu.edu>, galaxy-user@bx.psu.edu
Date: Tuesday, April 12, 2011, 8:55 PM


Hi Mike, 
Try accessing your Galaxy instance now. It should be ok.

The link in your email contained the IP of your instance, so I took the liberty of restarting Galaxy, which brought it back up. There seems to have been an issue with Galaxy accessing its database, which resulted in Galaxy crashing. We'll look into why that happened in the first place, but it should be ok now.

Let me know if you have any more trouble,
Enis

On Tue, Apr 12, 2011 at 2:49 PM, Mike Dufault <dufaultm@yahoo.com> wrote:
Hello Galaxy Staff,

My data has been running on Amazon EC2 for just over 24 hrs. I have not closed any windows, and my exome analysis made it all the way through to "filter pileup". I have two tabs for this instance: one is the Galaxy Cloudman Console, and the other is the tab where I perform the analysis, load data, view history, etc.

Anyway, I went to add a step to the workflow, and the "Welcome Galaxy to the Cloud" screen appeared, along with the message: "There is no Galaxy instance running on this host, or the Galaxy instance is not responding. To manage Galaxy on this host, please use the Cloud Console."

What happened???

When I go back to the Galaxy Cloudman Console, it shows that my instance is still running, along with the four cores; the cluster log is below. AWS also shows that my instance is running.

Will the workflow finish? Can I get my data? How?

I tried to re-access the analysis page by selecting "Access Galaxy" from the Galaxy Cloudman Console, but it sends me back to the same "Welcome" page.

Is there a way to get back into the analysis page?

Please help!!!

Thanks,
Mike

The cluster log shows:
  • 13:05:24 - Master starting
  • 13:05:25 - Completed initial cluster configuration.
  • 13:05:33 - Starting service 'SGE'
  • 13:05:48 - Configuring SGE...
  • 13:05:56 - Successfully setup SGE; configuring SGE
  • 13:05:57 - Saved file 'persistent_data.yaml' to bucket 'cm-a42f040c55e7519eb63bbaf269fa78d3'
  • 13:05:57 - Saved file 'cm_boot.py' to bucket 'cm-a42f040c55e7519eb63bbaf269fa78d3'
  • 13:05:57 - Saved file 'cm.tar.gz' to bucket 'cm-a42f040c55e7519eb63bbaf269fa78d3'
  • 13:05:57 - Problem connecting to bucket 'cm-a42f040c55e7519eb63bbaf269fa78d3', attempt 1/5
  • 13:05:59 - Saved file 'Fam122261.clusterName' to bucket 'cm-a42f040c55e7519eb63bbaf269fa78d3'
  • 13:06:24 - Initializing a 'Galaxy' cluster.
  • 13:06:24 - Retrieved file 'snaps.yaml' from bucket 'cloudman' to 'cm_snaps.yaml'.
  • 13:06:41 - Adding 3 instance(s)...
  • 13:07:02 - Saved file 'persistent_data.yaml' to bucket 'cm-a42f040c55e7519eb63bbaf269fa78d3'
  • 13:07:38 - Saved file 'persistent_data.yaml' to bucket 'cm-a42f040c55e7519eb63bbaf269fa78d3'
  • 13:07:38 - Saved file 'universe_wsgi.ini.cloud' to bucket 'cm-a42f040c55e7519eb63bbaf269fa78d3'
  • 13:07:38 - Saved file 'tool_conf.xml.cloud' to bucket 'cm-a42f040c55e7519eb63bbaf269fa78d3'
  • 13:07:48 - Error mounting file system '/mnt/galaxyData' from '/dev/sdg3', running command '/bin/mount /dev/sdg3 /mnt/galaxyData' returned code '32' and following stderr: 'mount: you must specify the filesystem type '
  • 13:07:52 - Saved file 'persistent_data.yaml' to bucket 'cm-a42f040c55e7519eb63bbaf269fa78d3'
  • 13:07:52 - Starting service 'Postgres'
  • 13:07:52 - PostgreSQL data directory '/mnt/galaxyData/pgsql/data' does not exist (yet?)
  • 13:07:52 - Configuring PostgreSQL with a database for Galaxy...
  • 13:08:05 - Saved file 'persistent_data.yaml' to bucket 'cm-a42f040c55e7519eb63bbaf269fa78d3'
  • 13:08:05 - Starting service 'Galaxy'
  • 13:08:05 - Galaxy daemon not running.
  • 13:08:05 - Galaxy service state changed from 'Starting' to 'Error'
  • 13:08:05 - Setting up Galaxy application
  • 13:08:05 - Retrieved file 'universe_wsgi.ini.cloud' from bucket 'cm-a42f040c55e7519eb63bbaf269fa78d3' to '/mnt/galaxyTools/galaxy-central/universe_wsgi.ini'.
  • 13:08:05 - Retrieved file 'tool_conf.xml.cloud' from bucket 'cm-a42f040c55e7519eb63bbaf269fa78d3' to '/mnt/galaxyTools/galaxy-central/tool_conf.xml'.
  • 13:08:05 - Retrieved file 'tool_data_table_conf.xml.cloud' from bucket 'cloudman' to '/mnt/galaxyTools/galaxy-central/tool_data_table_conf.xml.cloud'.
  • 13:08:05 - Starting Galaxy...
  • 13:08:09 - Galaxy service state changed from 'Error' to 'Starting'
  • 13:08:09 - Saved file 'persistent_data.yaml' to bucket 'cm-a42f040c55e7519eb63bbaf269fa78d3'
  • 13:08:09 - Saved file 'tool_data_table_conf.xml.cloud' to bucket 'cm-a42f040c55e7519eb63bbaf269fa78d3'
  • 13:08:28 - Instance 'i-e46f0a8b' reported alive
  • 13:08:28 - Successfully generated root user's public key.
  • 13:08:28 - Sent master public key to worker instance 'i-e46f0a8b'.
  • 13:08:28 - Instance 'i-e26f0a8d' reported alive
  • 13:08:28 - Sent master public key to worker instance 'i-e26f0a8d'.
  • 13:08:33 - Instance 'i-e06f0a8f' reported alive
  • 13:08:33 - Sent master public key to worker instance 'i-e06f0a8f'.
  • 13:08:33 - Adding instance i-e46f0a8b to SGE Execution Host list
  • 13:08:44 - Successfully added instance 'i-e46f0a8b' to SGE
  • 13:08:44 - Waiting on worker instance 'i-e46f0a8b' to configure itself...
  • 13:08:44 - Instance 'i-e26f0a8d' already in SGE's @allhosts
  • 13:08:44 - Waiting on worker instance 'i-e26f0a8d' to configure itself...
  • 13:08:45 - Instance 'i-e06f0a8f' already in SGE's @allhosts
  • 13:08:45 - Waiting on worker instance 'i-e06f0a8f' to configure itself...
  • 13:08:50 - Instance 'i-e46f0a8b' ready
  • 13:09:27 - Galaxy service state changed from 'Starting' to 'Running'
  • 22:38:18 - Found '3' idle instances; trying to remove '2'
  • 22:38:18 - Specific termination of instance 'i-e26f0a8d' requested.
  • 22:38:18 - Removing instance 'i-e26f0a8d' from SGE
  • 22:38:18 - Successfully updated @allhosts to remove 'i-e26f0a8d'
  • 22:38:19 - Terminating instance 'i-e26f0a8d'
  • 22:38:19 - Initiated requested termination of instance. Terminating 'i-e26f0a8d'.
  • 22:38:19 - Specific termination of instance 'i-e46f0a8b' requested.
  • 22:38:19 - Removing instance 'i-e46f0a8b' from SGE
  • 22:38:19 - Successfully initiated termination of instance 'i-e26f0a8d'
  • 22:38:19 - Successfully updated @allhosts to remove 'i-e46f0a8b'
  • 22:38:20 - Terminating instance 'i-e46f0a8b'
  • 22:38:20 - Initiated requested termination of instance. Terminating 'i-e46f0a8b'.
  • 22:38:20 - Initiated requested termination of instances. Terminating '2' instances.
  • 22:38:20 - Successfully initiated termination of instance 'i-e46f0a8b'
  • 22:38:41 - Found '1' idle instances; trying to remove '1'
  • 22:38:41 - Specific termination of instance 'i-e06f0a8f' requested.
  • 22:38:41 - Removing instance 'i-e06f0a8f' from SGE
  • 22:38:41 - Successfully updated @allhosts to remove 'i-e06f0a8f'
  • 22:38:42 - Initiated requested termination of instance. Terminating 'i-e06f0a8f'.
  • 22:38:42 - Initiated requested termination of instances. Terminating '1' instances.
  • 22:38:42 - Terminating instance 'i-e06f0a8f'
  • 22:38:42 - Successfully initiated termination of instance 'i-e06f0a8f'
  • 22:38:47 - Instance 'i-e26f0a8d' successfully terminated.
  • 22:38:49 - Instance 'i-e46f0a8b' successfully terminated.
  • 22:38:59 - Adding 3 instance(s)...
  • 22:39:07 - Instance 'i-e06f0a8f' successfully terminated.
  • 22:41:02 - Instance 'i-fa096e95' reported alive
  • 22:41:02 - Sent master public key to worker instance 'i-fa096e95'.
  • 22:41:06 - Adding instance i-fa096e95 to SGE Execution Host list
  • 22:41:17 - Successfully added instance 'i-fa096e95' to SGE
  • 22:41:17 - Waiting on worker instance 'i-fa096e95' to configure itself...
  • 22:41:17 - Instance 'i-fe096e91' reported alive
  • 22:41:17 - Sent master public key to worker instance 'i-fe096e91'.
  • 22:41:22 - Adding instance i-fe096e91 to SGE Execution Host list
  • 22:41:34 - Successfully added instance 'i-fe096e91' to SGE
  • 22:41:34 - Waiting on worker instance 'i-fe096e91' to configure itself...
  • 22:41:34 - Instance 'i-fa096e95' ready
  • 22:41:52 - Instance 'i-fe096e91' ready
  • 22:42:28 - Instance 'i-fc096e93' reported alive
  • 22:42:28 - Sent master public key to worker instance 'i-fc096e93'.
  • 22:42:38 - Adding instance i-fc096e93 to SGE Execution Host list
  • 22:42:49 - Successfully added instance 'i-fc096e93' to SGE
  • 22:42:49 - Waiting on worker instance 'i-fc096e93' to configure itself...
  • 22:43:13 - Instance 'i-fc096e93' ready
  • 18:29:10 - Failure checking disk usage. [Errno 12] Cannot allocate memory
  • 18:29:10 - Failure checking disk usage. [Errno 12] Cannot allocate memory
  • 18:29:10 - Failure checking disk usage. [Errno 12] Cannot allocate memory


___________________________________________________________
The Galaxy User list should be used for the discussion of
Galaxy analysis and other features on the public server
at usegalaxy.org.  Please keep all replies on the list by
using "reply all" in your mail client.  For discussion of
local Galaxy instances and the Galaxy source code, please
use the Galaxy Development list:

 http://lists.bx.psu.edu/listinfo/galaxy-dev

To manage your subscriptions to this and other Galaxy lists,
please use the interface at:

 http://lists.bx.psu.edu/



