Accessing Galaxy on the cloud from http
Thanks for the suggestions! The ami I've been mainly using is ami-da58aab3 (although I've tried a few others).

The user data may be where I am messing up: I have a few questions I'm not clear on, and I think I've tried many options here...

1. Is cluster name just something I make up?
2. Are the <> characters part of the YAML structure? (I've tried both.)
3. Do I need to specify the default information?

I've tried:

cluster_name: galaxy_cloud_cluster
password: apassword
access_key: mypublickeyblahblahblahblab
secret_key: myprivatekeyblahblabhalbahblahblah

I've also tried with the <>:

cluster_name: <galaxy_cloud_cluster>
password: <apassword>

I've also tried adding the other information...

boot_script_name: [cm_boot.py]
boot_script_path: [/tmp/cm]
bucket_cluster: cm-<hash>
bucket_default: [cloudman]
cloudman_home: /mnt/cm
[master_ip: <*exists only if worker*; master internal IP address>]
role: <master or worker>
[testflag: False]

On Apache: I totally understand it's not part of any of this; I was just trying that to see if I could access Galaxy through http (I was just trying things to narrow down where the problem is).

Here's an attempt at the output from ec2autorun.py.log; I hope I have it all. Thanks for working through this with me!

ubuntu@ip-10-204-103-174:/opt/galaxy/pkg$ cat ec2autorun.py.log
[INFO] ec2autorun:38 2011-09-27 07:09:50,922: Getting user data from 'http://169.254.169.254/latest/user-data', attempt 0
[DEBUG] ec2autorun:42 2011-09-27 07:09:50,938: Saving user data in its original format to file '/tmp/cm/original_userData.yaml'
[DEBUG] ec2autorun:46 2011-09-27 07:09:50,938: Got user data
[INFO] ec2autorun:284 2011-09-27 07:09:50,938: Handling user data in yaml format
[DEBUG] ec2autorun:317 2011-09-27 07:09:50,940: The provided user data does not contain bucket_default field; setting it to 'cloudman'.
[DEBUG] ec2autorun:337 2011-09-27 07:09:50,940: Composed user data: {'access_key': '<accesskeyblah>', 'cloudman_home': '/mnt/cm', 'cluster_name': '<galaxy_cloud_cluster>', 'bucket_default': 'cloudman', 'role': 'master', 'bucket_cluster': 'cm-6df419acea85fd768bdf345a980ab112', 'boot_script_path': '/tmp/cm', 'boot_script_name': 'cm_boot.py', 'secret_key': '<blahblahblahblah>', 'password': '<somepassword>'}
[DEBUG] ec2autorun:75 2011-09-27 07:09:51,120: Establishing boto S3 connection
[DEBUG] ec2autorun:78 2011-09-27 07:09:51,121: Got boto S3 connection.
[DEBUG] ec2autorun:98 2011-09-27 07:09:52,528: Cluster bucket 'cm-6df419acea85fd768bdf345a980ab112' not found.
[DEBUG] ec2autorun:195 2011-09-27 07:09:52,528: Using default bucket 'cloudman'
[DEBUG] ec2autorun:214 2011-09-27 07:09:52,528: Could not get boot script 'cm_boot.py' from cluster bucket 'cm-6df419acea85fd768bdf345a980ab112'; retrieving public one from bucket url 'http://s3.amazonaws.com/cloudman/cm_boot.py'
[INFO] ec2autorun:169 2011-09-27 07:09:52,529: Getting boot script from 'http://s3.amazonaws.com/cloudman/cm_boot.py' and saving it locally to '/tmp/cm/cm_boot.py'
[DEBUG] ec2autorun:176 2011-09-27 07:09:52,565: Got boot script from 'http://s3.amazonaws.com/cloudman/cm_boot.py'
[DEBUG] ec2autorun:217 2011-09-27 07:09:52,565: Saved boot script to '/tmp/cm/cm_boot.py'
[DEBUG] ec2autorun:75 2011-09-27 07:09:52,565: Establishing boto S3 connection
[DEBUG] ec2autorun:78 2011-09-27 07:09:52,565: Got boto S3 connection.
[DEBUG] ec2autorun:98 2011-09-27 07:09:52,589: Cluster bucket 'cm-6df419acea85fd768bdf345a980ab112' not found.
[INFO] ec2autorun:229 2011-09-27 07:09:52,590: Running boot script '/tmp/cm/cm_boot.py'
[DEBUG] ec2autorun:233 2011-09-27 07:10:04,836: Successfully ran boot script '/tmp/cm/cm_boot.py'
[INFO] ec2autorun:373 2011-09-27 07:10:04,836: ---> /opt/galaxy/pkg/ec2autorun.py done <---

On Sep 26, 2011 10:47 PM, "Enis Afgan" <eafgan@emory.edu> wrote:
Colleen;
You definitely only want to select a single security group: the CloudMan one. Selecting multiple groups like that uses the most conservative settings in the union of the groups, so you won't have the right ports open for CloudMan.
The ami I've been mainly using is ami-da58aab3 (although I've tried a few others).
I don't believe this will work with NX, which would explain being able to connect with ssh but not NX. NX requires a username/password. The latest CloudBioLinux images do this and support CloudMan:

http://cloudbiolinux.org/

You need to add a freenxpass: option to the user data with the password to use for access, and it will set up the server on the back end. However, it would be worth tackling this second, after getting CloudMan working.
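In user-data terms that is just one extra entry in the same YAML block, along the lines of this sketch (the value is a placeholder for whatever NX password you choose, not a real credential):

freenxpass: choose_an_nx_password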
The user data may be where I am messing up: I have a few questions I'm not clear on, and I think I've tried many options here...

1. Is cluster name just something I make up?
Yes. It's used so you can restart the same cluster with saved data, and have multiple clusters with different names.
2. Are the <> characters part of the YAML structure? (I've tried both.)
No. From the log you pasted it looks like you have those around your secret key. Could you try without them, and a single security group, on a clean cluster and see if that fixes the issue?
3. Do I need to specify the default information?
No. cluster_name, password, access_key and secret_key would be enough.

It looks like you are nearly there. Hope this does it,
Brad
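Putting that together, a minimal user data sketch along those lines (all values here are placeholders, not working credentials):

cluster_name: galaxy_cloud_cluster
password: apassword
access_key: YOUR_AWS_ACCESS_KEY
secret_key: YOUR_AWS_SECRET_KEY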
Thanks for the help! I was able to successfully connect and get Galaxy running. The items I had incorrect and needed to change were:

- One security group
- User data with NO <> and only a space between the : and the data (not a tab)

Thanks for the quick responses and advice; I would have been forever trying the different variables.
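In user-data form, the working format described above comes down to entries like these (placeholder values):

cluster_name: galaxy_cloud_cluster  # a single space after the colon, no <> around the value
password: apassword  # likewise; a tab between the colon and the value was one of the things that did not work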
participants (2)
- Brad Chapman
- Colleen Doherty