You're almost there. The command should be executed from your local machine (your home directory is fine) and it should look as follows:
scp -i <path to keyfile> ubuntu@<publicDNS>:/mnt/galaxyData/files/000/dataset_11.dat .
(note the 'ubuntu@' before <publicDNS> and the trailing dot (.) - the dot means the current directory on your local machine, i.e., your home directory if that's where you're running the command from)
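For example, with a hypothetical key file path and public DNS name filled in (your actual values will differ), it would look something like:

# copies dataset_11.dat from the instance into your current local directory
scp -i ~/.ssh/my-aws-key.pem ubuntu@ec2-50-17-20-100.compute-1.amazonaws.com:/mnt/galaxyData/files/000/dataset_11.dat .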

I apologize for the trouble in getting this data out and hope it does not keep you from using Galaxy Cloud in the future (we're looking into why the browser-based data copy didn't work and should have a fix shortly for the main app).

Enis

On Mon, Mar 28, 2011 at 6:16 PM, <karlerhard@berkeley.edu> wrote:

Hi Enis,

I'm getting the following error at the bottom of the galaxy log:

RuntimeError: Content returned before start_response called

I have no idea how to fix this, but I'm trying to focus now on actually
getting the files, as I don't really need this galaxy instance anymore.  I
have been able to locate where my files are and which are the ones I want.

I've tried the following command:

scp -i <path to keyfile> <publicDNS>:/mnt/galaxyData/files/000/dataset_11.dat

But I just get the scp usage statement coming back.  Is there something
else I'm missing here?  I was executing this command in my home directory,
do I need to be somewhere else?  I feel like I'm so close!!!

Thanks so much for your help so far, I'd be lost otherwise.

karl

> Hi Karl,
> Hmm, not having Galaxy accessible is definitely not a step in the right
> direction.
> Being signed in at the command line is not an issue; something else must
> have gone wrong. To start, please take a look at the (bottom of the)
> galaxy log file (and email the relevant part if you don't see how to fix
> it immediately); the file is saved as
> /mnt/galaxyTools/galaxy-central/paster.log
>
> As far as the location of the files in data libraries goes, they should
> be stored in the same location as history datasets, namely
> /mnt/galaxyData/files/000/dataset_<ID>.dat
> Because all of the datasets are named simply based on the database ID,
> it won't necessarily be obvious which file to get without doing some
> (python) coding or some guesswork. If you know how large your file is,
> you can easily narrow your choices down by listing the contents of the
> given directory sorted by size (using the command ls -lS), then pulling
> out the file(s) that you want. If several files are of approx. the same
> size, open them up and see which one you want.
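> As a quick sketch of that (run these on the instance; the path is the
> one above):
>
> [ec2]$ ls -lS /mnt/galaxyData/files/000/                # biggest files first
> [ec2]$ head /mnt/galaxyData/files/000/dataset_<ID>.dat  # peek at a candidate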
>
> Good luck and let us know if you have any more trouble,
> Enis
>
>
> On Mon, Mar 28, 2011 at 3:35 PM, <karlerhard@berkeley.edu> wrote:
>
>>
>> Hello Enis,
>>
>> Thanks for the quick response and suggestions.  I actually did have a
>> job running while I tried to download a file the first time; that's the
>> first time it gave the error message.  But the jobs have long since
>> finished and it's still giving the error message.
>>
>> I've been able to edit the universe_wsgi.ini file to "debug = False",
>> but now I'm getting an "Internal server error" when I try to reload the
>> galaxy instance.  Should I be signed out at the command line to reload
>> galaxy from a browser?  Forgive my simplicity, I'm really not at all
>> command-line savvy.
>>
>> Also, another extremely basic problem I have is I just don't know where
>> the data library that holds my files is located.  Any help would be
>> greatly appreciated.
>>
>> best,
>>
>> karl
>>
>>
>> > Hi Karl,
>> > As you see from the error message, you seem to be getting this error
>> > because the machine is running out of memory. This can in part be
>> > caused by a configuration option that might be set in Galaxy's
>> > universe_wsgi.ini file (see below).
>> > Did you have any jobs running while trying to download the file?
>> > Waiting until those finish might free up some memory.
>> >
>> > One thing to try is to connect to the instance, edit Galaxy's
>> > universe_wsgi.ini file to set debug = False, restart Galaxy, and try
>> > again. Are you familiar with that at all?
>> > The basic steps are as follows:
>> > [local]$ ssh -i <path to your AWS private key file> ubuntu@<instance public IP>
>> > [ec2]$ sudo su galaxy
>> > [ec2]$ cd /mnt/galaxyTools/galaxy-central
>> > [ec2]$ vi universe_wsgi.ini  -- edit file (around line 226) to set: debug = False
>> > [ec2]$ sh run.sh --stop-daemon
>> > [ec2]$ sh run.sh --daemon
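>> > (If you'd rather not edit the file in vi, a sed one-liner should make
>> > the same change; this assumes the line currently reads exactly
>> > "debug = True", so double-check the file afterwards:)
>> > [ec2]$ sed -i 's/^debug = True/debug = False/' universe_wsgi.ini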
>> >
>> > Yet another option is to connect to the instance in the same way, look
>> > through the data library on the file system and manually copy the file
>> > out of the instance. You can use the following command to copy the
>> > file to your local machine:
>> > [local]$ scp -i <path to your AWS private key file> ubuntu@<instance public IP>:/mnt/galaxyData/files/000/dataset_<ID>.dat .
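>> > (If you'd rather grab everything at once, copying the whole datasets
>> > directory recursively should also work; the local directory name here
>> > is just an example:)
>> > [local]$ scp -r -i <path to your AWS private key file> ubuntu@<instance public IP>:/mnt/galaxyData/files/000 ./galaxy_datasets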
>> >
>> > Let us know if none of this works,
>> > Enis
>> >
>> >
>> > On Mon, Mar 28, 2011 at 12:02 PM, <karlerhard@berkeley.edu> wrote:
>> >
>> >>
>> >> Hello,
>> >>
>> >> I'm trying to download the library files I processed on my galaxy
>> >> cloud instance, but I'm getting an error.  At the top (on the right
>> >> panel) it says "Server Error" and then lists the URL where the data
>> >> should be and then lists:
>> >>
>> >> Module paste.exceptions.errormiddleware:143 in __call__
>> >> <<          try:
>> >>                __traceback_supplement__ = Supplement, self, environ
>> >>                app_iter = self.application(environ, start_response)
>> >>                return self.make_catching_iter(app_iter, environ)
>> >>            except:>>  app_iter = self.application(environ, start_response)
>> >> Module paste.debug.prints:98 in __call__
>> >> <<          try:
>> >>                status, headers, body = wsgilib.intercept_output(
>> >>                    environ, self.app)
>> >>                if status is None:
>> >>                    # Some error occurred>>  environ, self.app)
>> >> Module paste.wsgilib:544 in intercept_output
>> >> <<      try:
>> >>            for item in app_iter:
>> >>                output.write(item)
>> >>        finally:
>> >>            if hasattr(app_iter, 'close'):>>  output.write(item)
>> >> MemoryError: out of memory
>> >>
>> >>
>> >>
>> >> Is there some easy fix to this?  I'd really like to get that data
>> >> off of the cloud instance and be able to terminate it.
>> >>
>> >> thanks,
>> >>
>> >> karl
>> >>