Hello Galaxy Staff,

My data has been running on Amazon EC2 for just over 24 hours. I have not closed any windows, and my exome analysis made it all the way through to the Filter Pileup step. I have two tabs open for this instance: one is the Galaxy CloudMan Console, and the other is the tab where I perform the analysis, load data, view the history, etc.

Anyway, I went to add a step to the workflow and the page changed to the "Welcome Galaxy to the Cloud" screen with the message: "There is no Galaxy instance running on this host, or the Galaxy instance is not responding. To manage Galaxy on this host, please use the Cloud Console." What happened?

When I go back to the Galaxy CloudMan Console, it shows that my instance is still running along with the four cores (the cluster log is below). AWS also shows that my instance is running. Will the workflow finish? Can I get my data? How? I tried to re-access the analysis page by selecting "Access Galaxy" from the CloudMan Console, but it sends me back to the same welcome page. Is there a way to get back into the analysis page?

Please help!

Thanks,
Mike

The cluster log shows:

13:05:24 - Master starting
13:05:25 - Completed initial cluster configuration.
13:05:33 - Starting service 'SGE'
13:05:48 - Configuring SGE...
13:05:56 - Successfully setup SGE; configuring SGE
13:05:57 - Saved file 'persistent_data.yaml' to bucket 'cm-a42f040c55e7519eb63bbaf269fa78d3'
13:05:57 - Saved file 'cm_boot.py' to bucket 'cm-a42f040c55e7519eb63bbaf269fa78d3'
13:05:57 - Saved file 'cm.tar.gz' to bucket 'cm-a42f040c55e7519eb63bbaf269fa78d3'
13:05:57 - Problem connecting to bucket 'cm-a42f040c55e7519eb63bbaf269fa78d3', attempt 1/5
13:05:59 - Saved file 'Fam122261.clusterName' to bucket 'cm-a42f040c55e7519eb63bbaf269fa78d3'
13:06:24 - Initializing a 'Galaxy' cluster.
13:06:24 - Retrieved file 'snaps.yaml' from bucket 'cloudman' to 'cm_snaps.yaml'.
13:06:41 - Adding 3 instance(s)...
13:07:02 - Saved file 'persistent_data.yaml' to bucket 'cm-a42f040c55e7519eb63bbaf269fa78d3'
13:07:38 - Saved file 'persistent_data.yaml' to bucket 'cm-a42f040c55e7519eb63bbaf269fa78d3'
13:07:38 - Saved file 'universe_wsgi.ini.cloud' to bucket 'cm-a42f040c55e7519eb63bbaf269fa78d3'
13:07:38 - Saved file 'tool_conf.xml.cloud' to bucket 'cm-a42f040c55e7519eb63bbaf269fa78d3'
13:07:48 - Error mounting file system '/mnt/galaxyData' from '/dev/sdg3', running command '/bin/mount /dev/sdg3 /mnt/galaxyData' returned code '32' and following stderr: 'mount: you must specify the filesystem type '
13:07:52 - Saved file 'persistent_data.yaml' to bucket 'cm-a42f040c55e7519eb63bbaf269fa78d3'
13:07:52 - Starting service 'Postgres'
13:07:52 - PostgreSQL data directory '/mnt/galaxyData/pgsql/data' does not exist (yet?)
13:07:52 - Configuring PostgreSQL with a database for Galaxy...
13:08:05 - Saved file 'persistent_data.yaml' to bucket 'cm-a42f040c55e7519eb63bbaf269fa78d3'
13:08:05 - Starting service 'Galaxy'
13:08:05 - Galaxy daemon not running.
13:08:05 - Galaxy service state changed from 'Starting' to 'Error'
13:08:05 - Setting up Galaxy application
13:08:05 - Retrieved file 'universe_wsgi.ini.cloud' from bucket 'cm-a42f040c55e7519eb63bbaf269fa78d3' to '/mnt/galaxyTools/galaxy-central/universe_wsgi.ini'.
13:08:05 - Retrieved file 'tool_conf.xml.cloud' from bucket 'cm-a42f040c55e7519eb63bbaf269fa78d3' to '/mnt/galaxyTools/galaxy-central/tool_conf.xml'.
13:08:05 - Retrieved file 'tool_data_table_conf.xml.cloud' from bucket 'cloudman' to '/mnt/galaxyTools/galaxy-central/tool_data_table_conf.xml.cloud'.
13:08:05 - Starting Galaxy...
13:08:09 - Galaxy service state changed from 'Error' to 'Starting'
13:08:09 - Saved file 'persistent_data.yaml' to bucket 'cm-a42f040c55e7519eb63bbaf269fa78d3'
13:08:09 - Saved file 'tool_data_table_conf.xml.cloud' to bucket 'cm-a42f040c55e7519eb63bbaf269fa78d3'
13:08:28 - Instance 'i-e46f0a8b' reported alive
13:08:28 - Successfully generated root user's public key.
13:08:28 - Sent master public key to worker instance 'i-e46f0a8b'.
13:08:28 - Instance 'i-e26f0a8d' reported alive
13:08:28 - Sent master public key to worker instance 'i-e26f0a8d'.
13:08:33 - Instance 'i-e06f0a8f' reported alive
13:08:33 - Sent master public key to worker instance 'i-e06f0a8f'.
13:08:33 - Adding instance i-e46f0a8b to SGE Execution Host list
13:08:44 - Successfully added instance 'i-e46f0a8b' to SGE
13:08:44 - Waiting on worker instance 'i-e46f0a8b' to configure itself...
13:08:44 - Instance 'i-e26f0a8d' already in SGE's @allhosts
13:08:44 - Waiting on worker instance 'i-e26f0a8d' to configure itself...
13:08:45 - Instance 'i-e06f0a8f' already in SGE's @allhosts
13:08:45 - Waiting on worker instance 'i-e06f0a8f' to configure itself...
13:08:50 - Instance 'i-e46f0a8b' ready
13:09:27 - Galaxy service state changed from 'Starting' to 'Running'
22:38:18 - Found '3' idle instances; trying to remove '2'
22:38:18 - Specific termination of instance 'i-e26f0a8d' requested.
22:38:18 - Removing instance 'i-e26f0a8d' from SGE
22:38:18 - Successfully updated @allhosts to remove 'i-e26f0a8d'
22:38:19 - Terminating instance 'i-e26f0a8d'
22:38:19 - Initiated requested termination of instance. Terminating 'i-e26f0a8d'.
22:38:19 - Specific termination of instance 'i-e46f0a8b' requested.
22:38:19 - Removing instance 'i-e46f0a8b' from SGE
22:38:19 - Successfully initiated termination of instance 'i-e26f0a8d'
22:38:19 - Successfully updated @allhosts to remove 'i-e46f0a8b'
22:38:20 - Terminating instance 'i-e46f0a8b'
22:38:20 - Initiated requested termination of instance. Terminating 'i-e46f0a8b'.
22:38:20 - Initiated requested termination of instances. Terminating '2' instances.
22:38:20 - Successfully initiated termination of instance 'i-e46f0a8b'
22:38:41 - Found '1' idle instances; trying to remove '1'
22:38:41 - Specific termination of instance 'i-e06f0a8f' requested.
22:38:41 - Removing instance 'i-e06f0a8f' from SGE
22:38:41 - Successfully updated @allhosts to remove 'i-e06f0a8f'
22:38:42 - Initiated requested termination of instance. Terminating 'i-e06f0a8f'.
22:38:42 - Initiated requested termination of instances. Terminating '1' instances.
22:38:42 - Terminating instance 'i-e06f0a8f'
22:38:42 - Successfully initiated termination of instance 'i-e06f0a8f'
22:38:47 - Instance 'i-e26f0a8d' successfully terminated.
22:38:49 - Instance 'i-e46f0a8b' successfully terminated.
22:38:59 - Adding 3 instance(s)...
22:39:07 - Instance 'i-e06f0a8f' successfully terminated.
22:41:02 - Instance 'i-fa096e95' reported alive
22:41:02 - Sent master public key to worker instance 'i-fa096e95'.
22:41:06 - Adding instance i-fa096e95 to SGE Execution Host list
22:41:17 - Successfully added instance 'i-fa096e95' to SGE
22:41:17 - Waiting on worker instance 'i-fa096e95' to configure itself...
22:41:17 - Instance 'i-fe096e91' reported alive
22:41:17 - Sent master public key to worker instance 'i-fe096e91'.
22:41:22 - Adding instance i-fe096e91 to SGE Execution Host list
22:41:34 - Successfully added instance 'i-fe096e91' to SGE
22:41:34 - Waiting on worker instance 'i-fe096e91' to configure itself...
22:41:34 - Instance 'i-fa096e95' ready
22:41:52 - Instance 'i-fe096e91' ready
22:42:28 - Instance 'i-fc096e93' reported alive
22:42:28 - Sent master public key to worker instance 'i-fc096e93'.
22:42:38 - Adding instance i-fc096e93 to SGE Execution Host list
22:42:49 - Successfully added instance 'i-fc096e93' to SGE
22:42:49 - Waiting on worker instance 'i-fc096e93' to configure itself...
22:43:13 - Instance 'i-fc096e93' ready
18:29:10 - Failure checking disk usage. [Errno 12] Cannot allocate memory
18:29:10 - Failure checking disk usage. [Errno 12] Cannot allocate memory
18:29:10 - Failure checking disk usage. [Errno 12] Cannot allocate memory
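P.S. In case it helps, here is what I was thinking of running on the master node over SSH to check things myself, since the console still shows the instance as running. I'm guessing at the 'ubuntu' login and at the Galaxy server log file name, so please correct me if either is wrong. Is it safe to run these while the workflow is (hopefully) still going?

  # connect to the master node (key pair and public DNS taken from the AWS console)
  ssh -i my-galaxy-key.pem ubuntu@<master-public-DNS>

  # check whether a Galaxy process is still alive
  ps aux | grep -i '[g]alaxy'

  # check free memory and space on the data volume (the log mentions 'Cannot allocate memory')
  free -m
  df -h /mnt/galaxyData

  # look at the end of the Galaxy server log (guessing the file is paster.log under galaxy-central)
  tail -n 50 /mnt/galaxyTools/galaxy-central/paster.log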