Forum: NGINX Using proxy_store under heavy load.

Announcement (2017-05-07): www.ruby-forum.com is now read-only since I unfortunately do not have the time to support and maintain the forum any more. Please see rubyonrails.org/community and ruby-lang.org/en/community for other Rails- and Ruby-related community platforms.
zepolen (Guest)
on 2009-05-20 20:54
(Received via mailing list)
I'm using proxy_store to act as a frontend mirror/cache to Amazon S3 for
my sites' photos.
Response times are slow; iotop reports ~10-15MB/s being written to disk
by nginx.
The website implements a 'latest' feature, so any new photo will be
requested by many users at the same time.
I'm thinking nginx is probably getting a request for a photo it doesn't
already have, and while it is retrieving the file from S3, more requests
come in for the same file, meaning more round trips and more temp files
being created.
Does proxy_cache handle it differently? (i.e., does it know that a URL
is currently being retrieved from the backend, and block other requests
for that URL until the file has been retrieved?)
Igor S. (Guest)
on 2009-05-20 22:03
(Received via mailing list)
On Wed, May 20, 2009 at 07:48:27PM +0300, zepolen wrote:

> Does proxy_cache handle it differently? (i.e., does it know that a URL
> is currently being retrieved from the backend, and block other requests
> for that URL until the file has been retrieved?)

No, proxy_cache does the same.
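
[For reference: later nginx releases (1.1.12 and newer, well after this thread) added the proxy_cache_lock directive, which does exactly what the question asks for: only the first request for a given cache key is passed to the upstream, while concurrent requests for the same key wait for the cache entry instead of making their own round trips. A minimal sketch; the zone name, paths, and upstream host are illustrative, not from this thread:]

```nginx
# Illustrative sketch for nginx >= 1.1.12; zone name, cache path and
# upstream bucket host are assumptions for the example.
proxy_cache_path /var/cache/nginx/photos levels=1:2
                 keys_zone=photos:10m max_size=10g inactive=7d;

server {
    listen 80;

    location /photos/ {
        proxy_pass        http://mybucket.s3.amazonaws.com;
        proxy_cache       photos;
        proxy_cache_valid 200 7d;

        # Collapse concurrent cache misses: only the first request for a
        # key is sent upstream; the rest wait (up to the timeout) for
        # the cache entry to be populated.
        proxy_cache_lock         on;
        proxy_cache_lock_timeout 5s;
    }
}
```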
Dave C. (Guest)
on 2009-05-27 05:32
(Received via mailing list)
I suspect that the worker is spending a lot of time in the write() to
the disk of the cached file. How many workers are you running?

Cheers

Dave

zepolen (Guest)
on 2009-05-27 07:25
(Received via mailing list)
On Wed, May 27, 2009 at 4:23 AM, Dave C. <removed_email_address@domain.invalid>
wrote:
> I suspect that the worker is spending a lot of time in the write() to
> the disk of the cached file. How many workers are you running?
>
>> Response times are slow; iotop reports ~10-15MB/s being written to
>> disk by nginx.
>> I'm thinking nginx is probably getting a request for a photo it
>> doesn't already have, and while it is retrieving the file from S3,
>> more requests come in for the same file, meaning more round trips and
>> more temp files being created.

It was a typo in the path where nginx was supposed to find the files
that had been stored. As a result every single file was being retrieved
from the backend; worse, it was being written to disk only to be
discarded.

It made perfect sense in retrospect, as outgoing eth traffic was also
stuck at about ~15MB/s and incoming had jumped to the same level.
Unfortunately I had to wait for the graphs before I could realise the
problem.
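
[The failure mode described above, proxy_store writing files that nginx then never finds, typically means the root and proxy_store paths don't resolve to the same file. A minimal sketch of a working proxy_store mirror; the paths and upstream host are illustrative, not the poster's actual configuration:]

```nginx
# Illustrative proxy_store mirror; /data/photos and the upstream
# bucket host are assumptions for the example.
server {
    listen 80;

    location /photos/ {
        # Serve the local copy if it exists...
        root       /data/photos;
        error_page 404 = @fetch;
    }

    location @fetch {
        internal;
        proxy_pass http://mybucket.s3.amazonaws.com;

        # ...otherwise fetch from the backend and store it for next
        # time. This path MUST resolve to the same file the root above
        # serves; if the two disagree (e.g. a typo), every request
        # falls through to the backend and the stored file is wasted.
        proxy_store        /data/photos$uri;
        proxy_store_access user:rw group:rw all:r;
        proxy_temp_path    /data/temp;
    }
}
```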