How to upload a file to Amazon S3 from a URL
I have an S3 bucket and the URL of a large file. I would like to store the content located at that URL in the S3 bucket.
I could download the file to my local machine and then upload it to S3 with CloudBerry, Jungle Disk, or similar. However, if the file is large, this could take a long time because the data has to be transferred twice, and my network connection is much slower than Amazon's.
If I had a lot of data to store in S3, I could start an EC2 instance, fetch the files onto the instance with curl or wget, and then push the data from the EC2 instance to S3. That works, but it's a lot of steps if I just want to archive a single file.
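For context, the fetch-and-push step on the EC2 instance can be collapsed into a single pipeline. This is only a sketch: the URL, bucket name, and object key below are placeholders, and it assumes the AWS CLI is installed and has credentials configured on the instance.

```shell
#!/bin/sh
# Placeholders -- substitute your own values:
FILE_URL="https://example.com/big-file.iso"
BUCKET="my-bucket"
KEY="big-file.iso"

# Stream the download straight into S3 via stdin ("-"), so the
# file never has to be fully staged on the instance's disk:
CMD="curl -sfL $FILE_URL | aws s3 cp - s3://$BUCKET/$KEY"

# Printed rather than executed here, since running it needs
# live AWS credentials and network access:
echo "$CMD"
```

The streaming form (`aws s3 cp -`) matters for large files: the instance only needs enough memory to buffer the pipe, not enough disk to hold the whole download.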
Any suggestions?