HTTP RateLimit Headers

(dotat.at)

25 points | by zdw 2 days ago

4 comments

  • nitwit005 1 hour ago
    Looking at the RFC, I'm not sure I understand the motivation, as it suggests multiple times that a client or intermediary will have to read external documentation:

    > Servers MAY choose to return partition keys that distinguish between quota allocated to different consumers or different resources. There are a wide range of strategies for partitioning server capacity, including per user, per application, per HTTP method, per resource, or some combination of those values. The server SHOULD document how the partition key is generated so that clients can predict the key value for a future request and determine if there is sufficient quota remaining to execute the request.

    If external documentation is required anyway, why send the header? Having the details in the documentation seems generally preferable, rather than something to avoid.

    • pcthrowaway 15 minutes ago
      The relevant word here is MAY. [1]

      It's true that if an API requires the devs of its consumers to consult documentation in order to respect the RateLimit header, it could just as easily include custom API logic for traffic control; still, this does provide a nice standardized way to do it.

      And since the word is "MAY", APIs may also use standard responses that don't require any custom handling. For example, a CLI-builder library that parses an OpenAPI spec can adopt changes to handle the RateLimit header automatically in the situations where consulting docs is not required.

      [1] https://datatracker.ietf.org/doc/html/rfc2119
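      A rough sketch of what that automatic handling could look like. The header names below assume the older RateLimit-Remaining / RateLimit-Reset (delta-seconds) form; the current draft folds these into a single structured RateLimit field, and deployed APIs vary widely, so treat the names as an assumption:

```python
import time

def wait_if_exhausted(headers, sleep=time.sleep):
    # Assumed header names: RateLimit-Remaining / RateLimit-Reset
    # (delta-seconds), as in earlier draft revisions. Real APIs differ.
    remaining = headers.get("RateLimit-Remaining")
    reset = headers.get("RateLimit-Reset")
    if remaining is None or reset is None:
        return 0.0              # no quota info: proceed as normal
    if int(remaining) > 0:
        return 0.0              # quota still available in this window
    delay = max(0.0, float(reset))
    sleep(delay)                # block until the window resets
    return delay
```

      A client library could call this after each response and only pause when the quota is actually exhausted, with no API-specific documentation needed.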

  • sholladay 1 hour ago
    Maintainer on the Ky team here; Ky is a popular HTTP client for JavaScript.

    We support these headers, but unfortunately there’s a mess of different implementations out there. The names aren’t consistent. The number/date formats aren’t consistent. We occasionally discover new edge cases. The standard is very late to the party. Of course, better late than never. I just hope it can actually gain traction given the inertia of some incompatible implementations.

    If you are designing an API, I strongly recommend using `Retry-After` for as long as you can get away with it and only implementing the rate limit headers when it really becomes necessary. Good clients will add jitter and exponential backoff to prevent the thundering herd problem.
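    The client side of that advice can be sketched in a few lines: full jitter over capped exponential backoff, with a server-supplied Retry-After taking precedence. The function name and defaults here are illustrative, not from any spec:

```python
import random

def backoff_delay(attempt, retry_after=None, base=0.5, cap=60.0):
    # Honor a Retry-After delta-seconds value from a 429/503 first.
    # (Retry-After may also be an HTTP-date, which a real client
    # would need to parse separately.)
    if retry_after is not None:
        return float(retry_after)
    # Full jitter: pick uniformly from [0, capped exponential].
    return random.uniform(0, min(cap, base * 2 ** attempt))
```

    The jitter is what prevents the thundering herd: clients that all failed at the same moment spread their retries across the whole backoff window instead of retrying in lockstep.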

    • marginalia_nu 34 minutes ago
      Yup, seems both overengineered and undercooked at the same time, as is unfortunately common for newer headers.

      As you said, 429 + Retry-After is plenty good already.
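      And on the server side that combination is a few lines. A hypothetical fixed-window sketch (the class and its parameters are made up for illustration, not anything from the RFC):

```python
import time

class FixedWindowLimiter:
    # Hypothetical sketch: allow `limit` requests per `window` seconds,
    # and answer 429 plus Retry-After (delta-seconds) when exhausted.
    def __init__(self, limit=100, window=60.0, clock=time.monotonic):
        self.limit, self.window, self.clock = limit, window, clock
        self.window_start = clock()
        self.count = 0

    def check(self):
        now = self.clock()
        if now - self.window_start >= self.window:
            self.window_start, self.count = now, 0   # new window
        if self.count < self.limit:
            self.count += 1
            return 200, {}
        retry_after = int(self.window - (now - self.window_start)) + 1
        return 429, {"Retry-After": str(retry_after)}
```

      A well-behaved client needs nothing more than the 429 status and that one header to back off correctly.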

  • dfajgljsldkjag 1 hour ago
    It is nice to see some actual progress on this because handling rate limits has always been kind of a mess. I really hope the major gateways pick this up quickly so we do not have to write custom logic for every integration.
  • ezekg 2 days ago
    It really irks me that the de facto rate limiting headers mix camel case with the more standard dashes, e.g. RateLimit-Remaining instead of Rate-Limit-Remaining.