
ThrottlingTroll, and how to make distributed locks with it.

Yes, I know that plenty of rate limiting/throttling middleware libraries have already been created for ASP.NET.

The most well-known is probably Stefan Prodan’s great AspNetCoreRateLimit project, which provides many advanced features, like storing counters in a distributed cache, limiting clients individually based on their IDs, updating limits at runtime and more. Yet it only supports one rate limiting algorithm (“fixed window”, from what I can see), can only return error status codes (it cannot, for example, delay/damper responses instead) and, as the name implies, is only intended for ASP.NET.

In .NET 7 we now have an out-of-the-box rate limiting middleware, which offers several different rate limiting algorithms (including a concurrency limiter), various ways to limit clients individually (e.g. based on their IP addresses) and declarative (attribute-based) configuration. Yet there is no way to dynamically reconfigure rate limits without restarting/redeploying the service (which might be a vital feature in the event of a DDoS attack), and there is no built-in support for distributed counter stores (although folks have already addressed that - take a look at Cristi Pufu’s aspnetcore-redis-rate-limiting, for example). And again, it only works in ASP.NET.
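For comparison, here is roughly what the built-in .NET 7 middleware’s configuration looks like. The policy name and the limits below are arbitrary, purely for illustration:

```csharp
using System.Threading.RateLimiting;
using Microsoft.AspNetCore.RateLimiting;

var builder = WebApplication.CreateBuilder(args);

// Register a "fixed window" policy: at most 10 requests per second
builder.Services.AddRateLimiter(options =>
{
    options.AddFixedWindowLimiter("fixed", limiterOptions =>
    {
        limiterOptions.PermitLimit = 10;
        limiterOptions.Window = TimeSpan.FromSeconds(1);
    });
});

var app = builder.Build();

app.UseRateLimiter();

// Policies are attached to endpoints declaratively,
// either like this or via the [EnableRateLimiting("fixed")] attribute
app.MapGet("/weather", () => "ok").RequireRateLimiting("fixed");

app.Run();
```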

So I created ThrottlingTroll, which is my take on rate limiting/throttling for both ASP.NET and Azure Functions (.NET Isolated).

For a comprehensive list of features, documentation and samples I suggest you check out ThrottlingTroll’s GitHub repo. Here I just wanted to talk about one particularly interesting application of it - organizing named distributed locks (aka critical sections).

Quite often you might need to protect some server-side state from being corrupted by concurrent requests. A very typical example would be implementing a shopping cart, which should under no circumstances be processed twice or updated in parallel (although technically both things always have a chance to happen, e.g. due to a network glitch or simply because the user is pressing buttons in several browser tabs). This normally requires configuring and implementing a distributed lock. And ThrottlingTroll does this for you.

All that it takes is:
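Something along these lines (a rough sketch based on ThrottlingTroll’s samples; the exact type and property names - UseThrottlingTroll(), ThrottlingTrollRule, the request proxy cast, the separate ThrottlingTroll.CounterStores.Redis package - might differ slightly between versions, so double-check against the repo):

```csharp
using StackExchange.Redis;
using ThrottlingTroll;
using ThrottlingTroll.CounterStores.Redis;

var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

app.UseThrottlingTroll(options =>
{
    // Counters go to Redis, so all instances of our service see the same locks
    options.CounterStore = new RedisCounterStore(
        ConnectionMultiplexer.Connect(builder.Configuration["RedisConnectionString"]));

    options.Config = new ThrottlingTrollConfig
    {
        Rules = new[]
        {
            new ThrottlingTrollRule
            {
                // Only applies to the shopping cart endpoint
                UriPattern = "/shopping-cart",

                // No more than 1 concurrent request per shopping cart instance
                LimitMethod = new SemaphoreRateLimitMethod
                {
                    PermitLimit = 1
                },

                // Pending requests are spin-waited (queued) for up to 300 seconds,
                // instead of getting 429 immediately
                MaxDelayInSeconds = 300,

                // Each shopping cart instance is identified by its "id" query string
                // parameter, so each of them gets its own lock
                IdentityIdExtractor = request =>
                {
                    return ((IIncomingHttpRequestProxy)request).Request.Query["id"];
                }
            }
        }
    };
});

// ... map your endpoints here ...

app.Run();
```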

The above code first instructs ThrottlingTroll to use RedisCounterStore for storing counters in a shared place (don’t forget to put the “RedisConnectionString” setting into your config file). Then it configures a rate limiting rule that is applied to the “/shopping-cart” endpoint. That rule uses SemaphoreRateLimitMethod with PermitLimit set to 1, which means no more than 1 concurrent request to that endpoint is allowed. A non-zero MaxDelayInSeconds value makes ThrottlingTroll apply spin-wait logic when that limit is exceeded, thus placing all pending requests in a queue (when MaxDelayInSeconds is zero or omitted, “429 Too Many Requests” is immediately returned instead). Then, finally, a custom IdentityIdExtractor is used to identify and separate shopping cart instances from each other. Here we assume that they’re identified by an “id” query string parameter, but of course it can be anything else (header value, claim value etc.), so long as that value is globally unique.

That’s it. Now access to your shopping cart API methods will be synchronized, with no danger of ending up in a corrupted state. And you can surely name plenty of other potential applications for this feature yourself.
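Just for illustration, here is what a shopping cart handler protected by the rule above could look like (CartItem and ICartRepository are made-up types here). Note that it does not need any locking of its own:

```csharp
// Hypothetical endpoint: thanks to the rule above, at most one request per cart "id"
// executes this handler at a time, so the read-modify-write sequence is safe as-is.
app.MapPost("/shopping-cart", async (string id, CartItem item, ICartRepository repo) =>
{
    var cart = await repo.LoadAsync(id);   // read
    cart.Items.Add(item);                  // modify
    await repo.SaveAsync(id, cart);        // write

    return Results.Ok(cart);
});
```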

In ThrottlingTroll’s samples folder you can find distributed counter implementations - one for ASP.NET and one for Azure Functions - demonstrating the very same idea. And of course, check out all the other samples there.

Enjoy and suggest further features, if anything is missing.