Something I’ve long wondered is why you never hear about rate limiting algorithms that are based on the cost to serve the request or algorithms that dynamically learn the capacity of the system and give everyone a fair share.
In the field of router buffer management, there are algorithms like Stochastic Fair Blue, which does the latter, but it is somewhat hard to apply to HTTP because you’d have to define a success/failure metric for each request (a latency target, for example), and clients would have to tolerate a small probability of a request being rejected even when they’re not being rate limited.
In Google’s paper on the Zanzibar authorization system, they give brief mention to rate limiting clients based on a CPU time allocation, but don’t go into any detail since it’s not a paper on rate limiting.
It’s something that matters less today with ubiquitous autoscaling, where the capacity of the system is whatever you need it to be to give each user what they ask for up to their rate limit, but I’m surprised at my inability to find any detailed account of such a thing being attempted.
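For concreteness, a cost-based limiter can be as simple as a token bucket charged in CPU time instead of request count. A minimal sketch, with made-up names and units (not taken from Zanzibar or Stochastic Fair Blue):

```python
import time

class CostBudget:
    """Per-client token bucket denominated in CPU-milliseconds (sketch)."""

    def __init__(self, cpu_ms_per_sec: float, burst_cpu_ms: float):
        self.rate = cpu_ms_per_sec   # budget replenished per wall-clock second
        self.burst = burst_cpu_ms    # maximum accumulated budget
        self.tokens = burst_cpu_ms
        self.last = time.monotonic()

    def charge(self, cost_cpu_ms: float) -> bool:
        """Deduct the measured CPU cost of a completed request; returns False
        once the client has exhausted its budget and should be throttled."""
        now = time.monotonic()
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        self.tokens -= cost_cpu_ms   # charged after the fact, may go negative
        return self.tokens > 0
```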
One of those times when I was still learning that asking forgiveness is easier than asking permission, I wanted to eliminate a very expensive presence calculation that I and a coworker determined was accounting for almost 10% of average page load time. Some idiot in product had decided they wanted an OLTP-ish solution that told you -exactly- how many people were online, and like a fool I asked if we could do a sane version and they said no. If you don't ask, then it's not insubordination.
For situations where eventual consistency is good enough, you can run a task in a loop that tries every n seconds to update a quantity. But as you say that can also saturate, so what you really want is for the task to update, then wait m seconds and go again, where m is more than the time you expect the task to complete in (<<50% duty cycle). As the cost of the operation climbs, the time lag increases but the load on the system increases more slowly. If you want, collect telemetry on how often it completes and set up an alarm that fires if it hasn't completed for a duration several times longer than your spikiest load periods.
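A minimal sketch of that loop (the function and interval names are placeholders): the sleep is derived from how long the last update took, so the duty cycle stays well under 50% even as the computation gets more expensive.

```python
import time

def refresh_loop(update, min_interval: float = 5.0):
    """Recompute a cached quantity forever, keeping the duty cycle low."""
    while True:
        started = time.monotonic()
        update()                                    # e.g. recompute the presence count
        elapsed = time.monotonic() - started
        time.sleep(max(min_interval, 2 * elapsed))  # wait at least twice as long as the work took
```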
I don't think voluntary rate limiting on the client side gets enough column inches. Peer to peer, you end up footgunning yourself if you bite off more than you can chew, and if you start stalling on responses then you gum up the server as well.
I mentioned this in another place on this thread, but a simple AIMD algorithm paired with a token bucket is surprisingly effective at dynamically adjusting to available capacity, even across a fleet of services not sharing state other than the contended resource.
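A hedged sketch of the shape this takes (illustrative constants, not any particular library): each process nudges its own send rate up while requests succeed and halves it when it gets throttled, and the resulting rate feeds a local token bucket.

```python
class AIMDRate:
    """Per-process AIMD controller for a token bucket's refill rate (sketch)."""

    def __init__(self, min_rps: float = 1.0, max_rps: float = 500.0):
        self.min_rps, self.max_rps = min_rps, max_rps
        self.rps = min_rps

    def on_success(self) -> None:
        # additive increase: creep up toward the backend's capacity
        self.rps = min(self.max_rps, self.rps + 1.0)

    def on_throttled(self) -> None:
        # multiplicative decrease: back off hard when the backend pushes back
        self.rps = max(self.min_rps, self.rps * 0.5)

# self.rps then drives the refill rate of an ordinary local token bucket that
# gates outgoing requests; no state is shared between processes.
```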
My assumption would be that it is a complexity thing. As a consumer of the service, having a rate limit that is easy to understand and write retry logic for is a big plus. If the criterion is "x requests per 5 minute window" and I start getting rate limit errors, it's very clear what backoff behaviour I need to implement. If the criterion is the CPU usage of my requests, it's hard for me as a consumer to reason about how much CPU a given request is going to take, so my retry logic is going to be fairly dumb.
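For example, against a fixed "N requests per window" limit the client-side logic can be as dumb as: on a 429, honour Retry-After if present, otherwise back off exponentially with jitter. A hypothetical sketch (URL, attempt count, and seconds-only Retry-After handling are assumptions):

```python
import random
import time
import urllib.error
import urllib.request

def get_with_backoff(url: str, attempts: int = 5):
    """Fetch url, backing off on 429s (hypothetical helper)."""
    delay = 1.0
    for _ in range(attempts):
        try:
            return urllib.request.urlopen(url)
        except urllib.error.HTTPError as e:
            if e.code != 429:
                raise
            retry_after = e.headers.get("Retry-After")
            # assume Retry-After, when present, is given in seconds
            wait = float(retry_after) if retry_after else delay + random.random()
            time.sleep(wait)
            delay *= 2  # exponential backoff with jitter
    raise RuntimeError("still rate limited after %d attempts" % attempts)
```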
My favourite algorithm is the generic cell rate algorithm (GCRA). It works like the token bucket in this post, but the implementation is dead simple, requires no background tasks, and needs very minimal state.
Instead of storing the current number of tokens, you store when the bucket will be full. If you take a token from the bucket, you increment that timestamp by 1/rps. The only complication is that if the filled timestamp was in the past, you have to first update it to the current timestamp to avoid overfilling.
What's even nicer is that it doubles as a throttle implementation rather than just a rate limiter. You know the bucket is empty if empty_at = filled_at - (max_tokens/rps) is still in the future. From that calculation you also know when it will have capacity again, so you can sleep accordingly. If you put a queue before the GCRA, it then starts slowing down new connections rather than just dropping them.
You should still have a limit on the queue, but it's nice in that it can gracefully turn from token bucket into leaky bucket.
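A minimal sketch of the single-timestamp scheme described above (variable names follow the comment; the monotonic clock and the sleep-hint return value are my additions):

```python
import time

class GCRA:
    """Token-bucket-equivalent limiter storing only one timestamp (sketch)."""

    def __init__(self, rps: float, max_tokens: float):
        self.rps = rps
        self.max_tokens = max_tokens
        self.filled_at = time.monotonic()          # bucket starts full

    def acquire(self) -> float:
        """Returns 0.0 if a token was taken, otherwise how long to sleep."""
        now = time.monotonic()
        filled_at = max(self.filled_at, now)       # don't overfill if it's in the past
        candidate = filled_at + 1.0 / self.rps     # filled_at after spending one token
        empty_at = candidate - self.max_tokens / self.rps
        if empty_at > now:                         # bucket is empty
            return empty_at - now                  # capacity is available again then
        self.filled_at = candidate
        return 0.0
```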
Ahh, this has a name! I started doing this years ago and figured it must be used frequently because it’s so simple, elegant, and can be done lock free. Thanks for putting a name to it!
They do it when their real goal is to funnel you into a newsletter to later sell you stuff. The only purpose of the article is to show you that prompt.
I’ve found the AIMD algo (additive increase, multiplicative decrease) paired with a token bucket gives a nice way to have a distributed set of processes adapt to backend capacity without centralized state.
Also found that AIMD is better than a circuit breaker in a lot of circumstances too.
I wonder if anyone has switched algorithms after hitting real-world scaling issues with one of those? Curious if there are any “gotchas” that only show up at scale. I only have experience with fixed window rate limiting.
I have experience with token bucket and leaky bucket (or at least a variation where a request leaves the bucket when the server is done processing it) to prevent overload of backend servers. I switched from token bucket to leaky bucket. Token bucket is “the server can serve X requests per second,” while leaky bucket is “the server can process N requests concurrently.” I found the direct limit on concurrency much more responsive to overload and better at controlling delay from contention of shared resources. This kind of makes sense: imagine your server goes from processing 10 QPS to 5 QPS. If the server has a 10 QPS token bucket limit, it keeps accepting requests and the request queue and response time go to infinity.
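A sketch of that concurrency-limit flavour (names are mine): a bounded semaphore caps in-flight requests, so when the backend slows from 10 QPS to 5 QPS, admission slows with it instead of queueing forever.

```python
import threading

class ConcurrencyLimiter:
    """Cap in-flight requests rather than requests per second (sketch)."""

    def __init__(self, max_in_flight: int):
        self._slots = threading.BoundedSemaphore(max_in_flight)

    def try_acquire(self) -> bool:
        # reject immediately instead of queueing, so overload surfaces quickly
        return self._slots.acquire(blocking=False)

    def release(self) -> None:
        self._slots.release()

# usage (illustrative):
# limiter = ConcurrencyLimiter(max_in_flight=64)
# if limiter.try_acquire():
#     try:
#         handle(request)
#     finally:
#         limiter.release()
# else:
#     reject_with_429()
```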
We used a token bucket to allow, say, 100 requests immediately, but the limit would actually replenish at 10 per minute or something. Makes sense to allow bursts. This was to allow free tier users to test things. Unless they went crazy, they would not even notice a rate limiter.
Sliding window might work well with large intervals. If you have something like a 24h window, fixed window will abruptly cut things off for hours.
I mostly work with 1 minute windows so it's fixed all the way.
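One common way to approximate a sliding window over a long interval without storing per-request timestamps (my sketch, not from the parent comments): keep counts for the current and previous fixed windows and weight the previous count by how much it still overlaps, so a 24h limit tapers off instead of cutting off for hours.

```python
import time

class SlidingWindowCounter:
    """Approximate sliding-window limit from two fixed-window counters (sketch)."""

    def __init__(self, limit: int, window_s: float):
        self.limit, self.window = limit, window_s
        self.cur_start = time.monotonic()
        self.cur = 0    # count in the current fixed window
        self.prev = 0   # count in the previous fixed window

    def allow(self) -> bool:
        now = time.monotonic()
        if now - self.cur_start >= self.window:             # roll the windows forward
            full_windows = int((now - self.cur_start) // self.window)
            self.prev = self.cur if full_windows == 1 else 0
            self.cur = 0
            self.cur_start += full_windows * self.window
        overlap = 1.0 - (now - self.cur_start) / self.window
        if self.cur + self.prev * overlap >= self.limit:
            return False
        self.cur += 1
        return True
```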
We used leaky bucket IIRC and the issue I saw was that the distributed aspect of it was coded incorrectly and so depending on the node you hit you were rate-limited or not :facepalm:
This seems like someone used AI to generate the article and examples without much review. It's all bullet points, and it repeatedly uses "Working:" as a heading, which doesn't make any sense to me.
The site defaults to dark mode for me (I'm assuming based on my system preferences), but all the examples are unusable with dark mode. The examples all seem to be within fixed-width divs that cut off part of the content unless you scroll within the example.
- Content getting cut off was a limitation of the iframe. Most blogging platforms don't allow you to embed another page; this was the best I could do given the limitation.
- I do use AI to bounce ideas around, but a lot of effort went into getting the apps working as intended.
- Now that you mention it, it should have been "working principle" or "algorithm". It made sense in my head. English isn't my first language, sorry about that.
I tried to explain the benefits of circuit breakers and adaptive concurrency to improve the performance of our distributed monolith, but I failed. I tried to visualize it using step-by-step packet diagrams but failed. This is hard stuff to understand.
Great visualization tools. Next time I have to explain it, I'll reach for these.
I'm pretty sure GP means: all those users have egress from a finite number of IPv4 addresses, and thus if rate limiting is done by IP, those behind the NAT are going to have a real bad time. It's true of all NAT setups, but the affected audience size for CGNAT could be outrageous.
Yes, I usually prompt Claude, GPT, and DeepSeek with my rough vision and take ideas from all of them. They never quite get it right on their own. But for code that's deploy-and-forget, AI-generated code is good enough.
Pretty easy to pair AIMD with token bucket (eg https://github.com/webriots/rate)
https://aws.amazon.com/builders-library/workload-isolation-u...
Which isn't exactly what you're talking about, but between that and other things in the "Builder's Library" series, you can see that people are doing this, and writing about it.
Furthermore, modern GPU workloads are way less elastic in capacity scaling.
Netflix has a blog post for their implementation: https://netflixtechblog.medium.com/performance-under-load-3e...
> By following, you'll have instant access to our new posts in your feed.
> Continue with Google
> More options
As soon as I see this in a blog, I close the tab. Why do authors do this to themselves?
- Medium paywalls the article and forces you to sign up just to read.
- Substack has the same problem; it's great for funneling people to your paid newsletter, but a sign-up banner appears as soon as the page loads.
- Build your own and you miss out on the social aspect, and there's no proof the numbers are real.
Related tangent, "HPBN" (High-Performance Browser Networking) is a great book that includes related concepts.
https://hpbn.co/
[0](https://www.oreilly.com/library/view/designing-data-intensiv...)
Golang lib that pairs AIMD with a token bucket: https://github.com/webriots/rate
- "Working" I wanted to keep things consistent.
Is it supposed to say, "How it works"?
https://smudge.ai/blog/ratelimit-algorithms