I am trying to download a 1.2 GB file that I uploaded to an AWS S3 bucket, but when I download it, it gets truncated to 1 GB.
Can anyone help, please?
So far this appears to have nothing to do with Ruby and everything to do with AWS. Is there a limit on how big an individual bucket can be? (I don't use AWS, but this seems to be the obvious first question.) Do you have any buckets that are larger than 1 GB?
Can you transfer the data to the bucket with any of the other web-based or command-line tools that AWS supplies?
We are not mind readers; if you do not explain yourself clearly, we will just ignore you.
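As for checking what actually got stored: if the file is publicly readable, a plain HTTP HEAD request from Ruby's standard library will show what size S3 thinks the object is, without downloading anything. A rough sketch (the URL is a placeholder; substitute your own bucket and key):

require 'net/http'
require 'uri'

# Placeholder URL: substitute your own bucket and key.
uri = URI.parse('https://s3.amazonaws.com/my-bucket/my-file.bin')

Net::HTTP.start(uri.host, uri.port, :use_ssl => true) do |http|
  # HEAD returns only the headers, including the stored size in bytes.
  response = http.head(uri.request_uri)
  puts response['content-length']
end

If that reports the full 1.2 GB, the upload itself is intact and the problem is somewhere on the download path.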
Actually, I am new to Ruby.
I can see in the AWS bucket that the file is 1.2 GB there, but when I download it, it gets truncated to 1 GB.
Thanks for your reply, and please let me know if you need any more info.
Buckets can be any size and contain any number of files/folders. Each file in a bucket is limited to 4 GB (last I checked, seems likely a 32-bit limitation). If you are streaming the file from S3 through your Ruby application and then into the browser, you may be hitting a timeout on the Ruby side. If your connection is directly to the S3 file (public permissions) then you should not be hitting any sort of limit at this point. 4 GB is a long way from 1.2.
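If you do proxy through your Ruby application, streaming the object in chunks at least avoids buffering the whole 1.2 GB in memory while the clock ticks. A minimal sketch, assuming the aws-s3 gem (the bucket and key names here are made up):

require 'aws/s3'

AWS::S3::Base.establish_connection!(
  :access_key_id     => ENV['AMAZON_ACCESS_KEY_ID'],
  :secret_access_key => ENV['AMAZON_SECRET_ACCESS_KEY']
)

# Write the object to disk chunk by chunk rather than pulling the
# whole thing into memory with S3Object.value.
File.open('my-file.bin', 'wb') do |file|
  AWS::S3::S3Object.stream('my-file.bin', 'my-bucket') do |chunk|
    file.write(chunk)
  end
end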
Walter
Thanks, Walter, for your reply.
I don't think it can be timing out every single time I try to download it. Is there any kind of configuration or setting anywhere that would limit the file size?
Something that is not clear to me is how you are downloading the file. Is this your browser hitting S3 directly, a command-line tool using one of the many S3 libraries, an action in your controller that fetches the file from S3 and then sends it to the end user, or something else?
Fred
On Sep 14, 11:02am, JAZZ F. wrote:
I have a UI application with a downloading feature built in, and I am downloading through it. Is that what you are asking, or do you need something else? Thanks for your response.
If your app is first downloading the file and then sending it to the user, then it's almost certainly a timeout thing. There are many elements along the chain that can time out: the app server itself, the load balancer in front of it, any proxies (transparent or not) between your server and the user's computer, and finally the user's browser. I wouldn't rely on being able to control all of these.
Why not serve the file straight from S3 instead of making the file go via your application?
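In a Rails controller that could be as simple as redirecting the browser to the S3 URL instead of proxying the bytes. A sketch, again assuming the aws-s3 gem (the action, bucket, and key names are placeholders); for a protected file, url_for can sign the URL with an expiry:

# Hypothetical controller action: hand the browser the S3 URL and
# let Amazon serve the bytes, so none of our timeouts apply.
def download
  url = AWS::S3::S3Object.url_for('my-file.bin', 'my-bucket',
                                  :expires_in => 60 * 60) # valid for one hour
  redirect_to url
end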
Fred
As far as I know, your own stack is the only place left to look. There isn't any sort of configuration or timing going on on the S3 side. Even if you use an expiring URL to download a protected file, the expiration is only checked against the start time of the download, not the entire process. As long as the request begins before the token turns into a pumpkin, the file can download however slowly and take however long. It is ultimately up to your client application to maintain an open socket to the server; the server will continue to serve until the request is completed.
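One way to pin down where the bytes go missing is to compare the size S3 reports for the object with what actually lands on disk. A sketch, assuming the aws-s3 gem (names are placeholders):

require 'aws/s3'

AWS::S3::Base.establish_connection!(
  :access_key_id     => ENV['AMAZON_ACCESS_KEY_ID'],
  :secret_access_key => ENV['AMAZON_SECRET_ACCESS_KEY']
)

# S3Object.about issues a HEAD request and returns the object's headers.
expected = AWS::S3::S3Object.about('my-file.bin', 'my-bucket')['content-length'].to_i
actual   = File.size('my-file.bin') # the copy your download produced

puts "S3 reports #{expected} bytes, local copy has #{actual} bytes"

If the local copy consistently stops at exactly 1 GB, that looks more like a size limit somewhere in the chain than a timeout.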
Walter