I have a requirement to rate limit requests to one of my customer-facing
APIs. At present Nginx is a proxy point directing traffic to network
internal servers based on endpoint URL. I am interested in integrating
tightly with Nginx to do this rate limiting before the traffic is passed
to my upstream resources. I’m still in the research phase and there are a
lot of moving pieces to the project, so in the interest of clarity I’ve
tried to organize the below into sensible lists. Please let me know if
anything is unclear.
Implementation-specific limitations:
- Our user base traffic tends to originate from networks where NAT is
heavily used. Unfortunately, rate limiting by IP address would produce
massive numbers of false positives as a result.
- Our API is not ‘open’ and requires a successful authentication
handshake (OAuth) to continue. Subsequent requests carry an auth token
header to maintain session state. Auth tokens are alphanumeric
with a length of 64 characters.
- High Traffic! (30k+ req/sec)
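For context, this is the sort of configuration I’ve been sketching. It is only a sketch: keying on `$http_authorization`, the zone name, the 10m size, the 100r/s rate, and the upstream name are all assumptions on my part, not tested values.

```nginx
# Sketch only: key the shared zone on the auth token header rather than
# the usual $binary_remote_addr, to avoid punishing NATed clients.
# "api_tokens", 10m, and 100r/s are placeholders, not recommendations.
limit_req_zone $http_authorization zone=api_tokens:10m rate=100r/s;

server {
    location /api/ {
        # Absorb short bursts before rejecting, and return 429 instead
        # of the default 503 when a client is over its limit.
        limit_req zone=api_tokens burst=20 nodelay;
        limit_req_status 429;
        proxy_pass http://upstream_pool;
    }
}
```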
- Is it feasible to do rate limiting based on an auth token?
- Is it feasible to insert strings of this length as keys into the
zone?
- Is the zone an in-memory ‘object’ (for lack of a better word)?
- Is there a performance drawback to creating one large in-memory zone
sized in gigabytes as opposed to megabytes?
- How long do keys live in the zone? If I set a 1+ GB zone, what
happens if our aggregate request volume bursts and the zone runs out of
storage space? There is a sentence in the documentation I find
concerning: “If the zone storage is exhausted, the server will return
the 503 (Service Temporarily Unavailable) error to all further requests.”
- Are there better alternatives?
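One variation I’ve considered (again just a sketch, with placeholder names): using `map` to fall back to the client address when the token header is absent, since requests whose key evaluates to an empty string are not accounted by `limit_req` at all.

```nginx
# Sketch: fall back to $binary_remote_addr when no token header is
# present, so unauthenticated traffic is still rate limited.
# "$rl_key" and "api_fallback" are placeholder names.
map $http_authorization $rl_key {
    ""      $binary_remote_addr;
    default $http_authorization;
}

limit_req_zone $rl_key zone=api_fallback:10m rate=100r/s;
```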