Re: [galaxy-user] downloading huge data sets from history using wget

Hi Peter,
Please use 'reply all' so that everyone on the mailing list can participate in the discussion.
I did not publish my history, so that is probably not what is causing the problem for you. If you can click on the 'save' icon and the download starts successfully, then you ought to be able to copy the download link, use it with wget, and have it work.
What happens when you click on 'save'? Does the download start?
Florent

On 31/05/10 09:57, pis@duke.edu wrote:
Hi Florent,
Do you think that I need to publish it first as a history and then try it again? I suspect that may be the reason for the strange behavior.
I will let you know when I get it to work
Thank you very much for your help
Have a nice day Peter
Quoting Florent Angly <florent.angly@gmail.com>:
Hi Peter,
See an example below:
$ wget http://main.g2.bx.psu.edu/datasets/59a2a6ec00c47fc4/display?to_ext=fasta
--2010-05-31 09:45:24-- http://main.g2.bx.psu.edu/datasets/59a2a6ec00c47fc4/display?to_ext=fasta
Resolving main.g2.bx.psu.edu... 128.118.201.93
Connecting to main.g2.bx.psu.edu|128.118.201.93|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1177056689 (1.1G) [text/plain]
Saving to: `display?to_ext=fasta'
 0% [ ] 224,112 243K/s

The download link was copied from the "save" icon.

When I try with your link, I get:

$ wget http://main.g2.bx.psu.edu/datasets/c3a8db0a339f7a43/display?to_ext=fastqsang...
--2010-05-31 09:49:10-- http://main.g2.bx.psu.edu/datasets/c3a8db0a339f7a43/display?to_ext=fastqsang...
Resolving main.g2.bx.psu.edu... 128.118.201.93
Connecting to main.g2.bx.psu.edu|128.118.201.93|:80... connected.
HTTP request sent, awaiting response... 416 Request Range Not Satisfiable

The file is already fully retrieved; nothing to do.

The "request range not satisfiable" makes me think that your download link is not valid for some reason.
Florent
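
A side note on the example above: because the URL contains "?" and "=", it is safer to quote it so the shell does not try to interpret those characters, and wget's -O option lets you choose the output file name instead of ending up with `display?to_ext=fasta'. A minimal sketch, reusing the example dataset URL from above (the output file name is just a placeholder):

$ wget -O my_dataset.fasta "http://main.g2.bx.psu.edu/datasets/59a2a6ec00c47fc4/display?to_ext=fasta"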
On 30/05/10 23:43, pis@duke.edu wrote:
Dear Florent Angly,
Thank you very much for your response. I have actually tried that, but it still does not work. When I choose "copy link location" in Firefox (in my version, no "save link location" option appears), I get a URL with a strange data file name, such as http://main.g2.bx.psu.edu/datasets/c3a8db0a339f7a43/display?to_ext=fastqsang.... This does not work with wget ("wget: No match"). It does not work either when I replace the data file name with the name of the file that appears when I try to download it using the disk icon. I would be grateful for further advice on this, since it has apparently worked for you.
Thank you very much for your help Peter
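
The "wget: No match" error reported above usually comes from the shell rather than from wget itself: in csh/tcsh an unquoted "?" in the URL is treated as a filename wildcard, and when nothing in the current directory matches, the shell aborts the command with "No match". Quoting the URL avoids this. A minimal sketch, using the example dataset URL from Florent's reply:

$ wget "http://main.g2.bx.psu.edu/datasets/59a2a6ec00c47fc4/display?to_ext=fasta"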
Quoting Florent Angly <florent.angly@gmail.com>:
Hi Peter,
It's pretty easy. In the Galaxy interface, use the "save" icon, represented as a floppy disk. Instead of clicking on the icon, get the URL for it (in Firefox: right click > save link location). Then simply paste this URL into your terminal for wget to use.
Florent
On 30/05/10 07:12, pis@duke.edu wrote:
Dear Galaxy team,
I am doing some standard clipping and trimming using Galaxy and would like to download the generated files using a Unix terminal and wget. Is there a way to figure out the exact link to a data file so that I can wget it easily? I am talking about file sizes over 5 GB, which are really time-consuming and error-prone to download using a web browser.
Thank you very much for developing this excellent tool Peter
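
For multi-gigabyte downloads like these, wget's -c (continue) option is also worth knowing: it resumes a partially downloaded file after a dropped connection instead of starting over. A minimal sketch, where the dataset id and output file name are placeholders:

$ wget -c -O my_dataset.fasta "http://main.g2.bx.psu.edu/datasets/<dataset_id>/display?to_ext=fasta"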
_______________________________________________ galaxy-user mailing list galaxy-user@lists.bx.psu.edu http://lists.bx.psu.edu/listinfo/galaxy-user