galaxyTools volume not given enough time to attach
About half the time I try to boot up my Galaxy cluster in AWS, CloudMan aborts because it could not attach the galaxyTools volume, with the line:

"Cannot attach volume 'vol-xxxxxx' in state 'creating'"

It appears that in /cm/services/data/volume.py it tries to mount the volume every two seconds for up to a minute and then gives up. In the AWS EC2 volume pane, the volume in question always eventually ends up in the state "available", but often not in time, and CloudMan gives up.

Could you possibly 1) increase the timeout to account for variations in AWS service performance, 2) selectively wait longer if the state is "creating", or 3) make the timeout period configurable in Galaxy's settings?

If it matters, the tools snapshot is my own, made after modifying the standard volume. But when it does boot up successfully the other half of the time, the cluster behaves normally.

Thanks,
Joel
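For illustration, a minimal sketch (not CloudMan's actual code) of the kind of polling loop involved, assuming boto 2.x; the function name, volume ID, and timeout values below are placeholders, and the point is simply to poll the volume's state at an interval and give up only after a longer, configurable timeout:

import time
import boto

def wait_for_volume(conn, volume_id, timeout=240, interval=4):
    """Poll until the volume is 'available'; return it, or None on timeout."""
    vol = conn.get_all_volumes([volume_id])[0]
    waited = 0
    while vol.status != 'available' and waited < timeout:
        time.sleep(interval)
        waited += interval
        vol.update()  # refresh the volume's state from the EC2 API
    return vol if vol.status == 'available' else None

# Hypothetical usage:
# conn = boto.connect_ec2()
# vol = wait_for_volume(conn, 'vol-xxxxxx', timeout=240)
# if vol is not None:
#     vol.attach('i-xxxxxxxx', '/dev/sdg')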
Hi Joel,

This has been addressed in one of the more recent commits (https://bitbucket.org/galaxy/cloudman/src/7bf68fba5d8b/cm/services/data/volu...), but the updated version of CloudMan hasn't been released since then, so it's not generally available.

At this point, the fastest fix would be for you to copy cm.tar.gz out of your cluster's bucket to your machine, make a similar change in the code, tar it all back up, and push the updated code to your bucket.

Sorry about this; AWS seems to have been having issues lately with the amount of time it takes to create volumes, because until recently this was never an issue.

Let us know if you need some more guidance in getting the above fixed,
Enis
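For anyone following along, a rough sketch of that workaround using boto 2.x; the bucket name below is a placeholder, and the actual editing of volume.py and re-tarring happen locally between the download and the upload:

import boto

# Connect to S3 and open the cluster's bucket (name is a placeholder)
conn = boto.connect_s3()
bucket = conn.get_bucket('cm-<your-cluster-bucket>')

# Download the CloudMan source archive
key = bucket.get_key('cm.tar.gz')
key.get_contents_to_filename('cm.tar.gz')

# ... untar, edit cm/services/data/volume.py (e.g. lengthen the attach
# timeout), re-create cm.tar.gz, then push the edited archive back:
key.set_contents_from_filename('cm.tar.gz')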
Thanks, I'll probably wait until that commit makes its way downstream.

Joel
I actually did this last night and it worked. Thanks.
participants (2)
- Enis Afgan
- Joel Rosenberg