Redirect on specific threshold!

Hi,

We’re using nginx to serve videos from one of our storage servers (containing
mp4 videos), and due to the high volume of requests we’re planning to add a
separate caching node based on fast SSD drives to serve “hot” content, in
order to reduce the load on storage. We’re planning the following method for
caching:

If there are more than 1K requests for http://storage.domain.com/test.mp4,
nginx should construct a redirect URL for the rest of the requests related to
test.mp4, i.e. http://cache.domain.com/test.mp4, and entertain the remaining
requests for test.mp4 from the caching node, while the long tail would still
be served from storage.

So, can we achieve this approach with nginx, or with something else like Varnish?

Thanks in advance.

Regards.
Shahzaib

Hi,

On 15/06/15 05:12, shahzaib shahzaib wrote:

On the assumption that you’re hosting on a Linux infrastructure, this is
simply done by adding more memory! By default, any spare memory will be used
to cache commonly requested files.

If you want more control over it, then set your cache area up on a tmpfs
backed partition. However, you’ll then have to manage what you cache
yourself.

With setups like this, it’s normally the bandwidth of the network that
becomes the bottleneck. Maybe a bit of round-robin DNS would help with this?

Steve


Steve H. BSc(Hons) MIITP

Linkedin: http://www.linkedin.com/in/steveholdoway
Skype: sholdowa

Does an nginx reverse proxy with cache fit your needs?

Client → Caching server (with SSD and nginx proxy cache configured) → Storage server(s) (slow)

You can add even more storage servers by utilizing the nginx upstream module.
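For reference, a minimal sketch of such a proxy cache setup on the caching node; the cache path, zone name, sizes and validity times here are illustrative assumptions, not values from the thread:

```nginx
# Cache on the SSD-backed path; zone and size values are examples only.
proxy_cache_path /ssd/cache levels=1:2 keys_zone=videos:100m
                 max_size=900g inactive=7d;

upstream storage {
    # Additional (slow) storage servers can be listed here.
    server storage.domain.com;
}

server {
    listen 80;
    server_name cache.domain.com;

    location / {
        proxy_cache videos;
        proxy_cache_valid 200 7d;
        proxy_pass http://storage;
    }
}
```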

On Sun, Jun 14, 2015 at 1:12 PM shahzaib shahzaib [email protected] wrote:

On Sunday 14 June 2015 22:12:37 shahzaib shahzaib wrote:

test.mp4 i.e http://cache.domain.com/test.mp4 and entertain the rest of
requests for test.mp4 from Caching Node while long tail would still be
served from storage.

So, can we achieve this approach with nginx or other like varnish ?

[…]

You can use limit_conn and limit_req modules to set limits:
http://nginx.org/en/docs/http/ngx_http_limit_conn_module.html
http://nginx.org/en/docs/http/ngx_http_limit_req_module.html

and the error_page directive to construct the redirect.
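Put together, one minimal sketch of that combination (the zone name and the 1000-connection limit are illustrative; the hostnames come from the thread):

```nginx
# Track concurrent connections per requested URI.
limit_conn_zone $uri zone=peruri:10m;

server {
    listen 80;
    server_name storage.domain.com;

    location ~ \.mp4$ {
        # Serve up to 1000 concurrent requests per file locally;
        # excess requests get 503 by default...
        limit_conn peruri 1000;
        # ...which error_page turns into a redirect to the caching node.
        error_page 503 =301 http://cache.domain.com$request_uri;
    }
}
```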

wbr, Valentin V. Bartenev

Hi,

Thanks for the help, guys. Regarding @ryd994’s suggestion: the reason we
don’t want to deploy this structure is that the caching node would have to
respond to every client request, and even if it only proxies most of the
requests (without caching them), high I/O would still be required to serve
big proxied files (700MB mp4s) to users, and that way the caching node would
eventually become the bottleneck between the user and the storage node,
wouldn’t it?

@steve thanks for the tmpfs pointer, but we’re using a caching node with
1TB+ SSD storage and would prefer an SSD cache over RAM (RAM is faster, but
not as big as SSD).

We believe using a redirect URL would point only specific requests towards
the caching node, and then this node would fetch the requested file using
proxy_cache.

Regards.
Shahzaib.

Hi,

Sorry for getting back to this thread after a long time. First of all,
thanks to all for the suggestions. I have also checked the rate_limit
module; should this work as well, or does it have to be limit_conn (to parse
the error_log and construct the redirect URL)?

P.S.: Actually, it looks like limit_conn requires recompiling nginx, as it
is not included in the default yum nginx repo, so I tried rate_limit, which
is built in.

http://greenroom.com.my/blog/2014/10/rate_limit-with-nginx-on-ubuntu/

Regards.
Shahzaib

On Mon, Jun 15, 2015 at 01:45:42PM +0300, Valentin V. Bartenev wrote:

On Sunday 14 June 2015 22:12:37 shahzaib shahzaib wrote:

Hi there,

If there are more than 1K requests for http://storage.domain.com/test.mp4,
nginx should construct a redirect URL for the rest of the requests related to
test.mp4, i.e. http://cache.domain.com/test.mp4, and entertain the remaining
requests for test.mp4 from the caching node, while the long tail would still
be served from storage.

You can use limit_conn and limit_req modules to set limits:
http://nginx.org/en/docs/http/ngx_http_limit_conn_module.html
http://nginx.org/en/docs/http/ngx_http_limit_req_module.html

and the error_page directive to construct the redirect.

limit_conn and limit_req are the right answer if you care about concurrent
requests.

(For example: rate=1r/m with burst=1000 might do most of what you want,
without too much work on your part.)
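That suggestion might look something like the following; the zone name and the nodelay choice are my assumptions:

```nginx
# Allow roughly the first 1000 requests per URI in a burst; beyond
# that, at most one request per minute is served from storage.
limit_req_zone $uri zone=hotmp4:10m rate=1r/m;

server {
    location ~ \.mp4$ {
        limit_req zone=hotmp4 burst=1000 nodelay;
        # Rejected requests (503 by default) become redirects.
        error_page 503 =301 http://cache.domain.com$request_uri;
    }
}
```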

I think you might care about historical requests, instead – so if a
url is ever accessed 1K times, then it is “popular” and future requests
should be redirected.

To do that, you probably will find it simpler to do it outside of nginx,
at least initially.

Have something read the recent-enough log files[*], and whenever there are
more than 1K requests for the same resource, add a fragment like

location = /test.mp4 { return 301 http://cache.domain.com/test.mp4; }

to nginx.conf (and remove similar fragments that are no longer currently
popular-enough, if appropriate), and do a no-downtime config reload.
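A sketch of the log-reading step, assuming the default combined log format (the URI is the 7th whitespace-separated field), with inline sample lines standing in for the real access log and a deliberately low threshold:

```shell
# Sketch only: log format, field position and threshold are assumptions.
printf '%s\n' \
  '1.2.3.4 - - [15/Jun/2015:05:12:00 +0000] "GET /test.mp4 HTTP/1.1" 200 1' \
  '1.2.3.4 - - [15/Jun/2015:05:12:01 +0000] "GET /test.mp4 HTTP/1.1" 200 1' \
  '1.2.3.4 - - [15/Jun/2015:05:12:02 +0000] "GET /cold.mp4 HTTP/1.1" 200 1' |
awk -v thresh=1 '
  $7 ~ /\.mp4$/ { n[$7]++ }
  END {
    for (u in n)
      if (n[u] > thresh)
        printf "location = %s { return 301 http://cache.domain.com%s; }\n", u, u
  }'
```

In real use the script would read the live access log, write its output to a file included from nginx.conf, and run nginx -s reload afterwards.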

You can probably come up with a module or a code config that does the
same thing, but I think it would take me longer to do that.

[*] or accesses the statistics by a method of your choice

f

Francis D. [email protected]

On Sat, Aug 29, 2015 at 04:57:19PM +0500, shahzaib shahzaib wrote:

Hi there,

Sorry for getting back to this thread after a long time. First of all,
thanks to all for the suggestions. I have also checked the rate_limit
module; should this work as well, or does it have to be limit_conn (to parse
the error_log and construct the redirect URL)?

I think the answers already given were different, depending on different
understanding of your requirements.

Perhaps if you can re-state (or clarify) them, you will get a more
specific answer.

For what it’s worth, what I think you want is a tool to read the
access logs from storage.domain.com, copy files to cache.domain.com,
change the nginx config on storage.domain.com, and restart nginx on
storage.domain.com.

None of which involves any special modules or config within nginx.

f

Francis D. [email protected]