Question regarding rate limiting / cooldowns on the server

Hello, I’m currently working on my game server’s backend using Nakama and exploring the best methods for implementing rate limits and cooldowns on RPCs and native Nakama functions. I’ve seen various approaches recommended, including using Redis, as well as solutions like Cloudflare or NGINX rate limiters that wouldn’t work for what I want.

Given the critical importance of efficiently managing server resources and preventing abuse, I’m curious about Nakama’s recommendations for implementing rate limits and cooldowns effectively. Specifically, I’m interested in approaches that are not only reliable and fast but also scalable, particularly for larger game servers or deployments with Nakama Enterprise.

Additionally, I’ve come across the fast-ratelimit library for JavaScript, which seems promising due to its efficiency and simplicity. However, I’m uncertain about its scalability.

Could you guys provide insights into some recommended methods for implementing rate limits and cooldowns for games using Nakama, including any best practices or considerations for scaling?

Thank you in advance for any guidance or suggestions you can provide. :grinning:

Hello @Suero,

As you point out, this is typically handled at the load balancer (LB) level rather than in the server itself. However, you can implement your own solution in the server.

That said, the JS library you linked appears to be Node-dependent, so it may or may not work in Nakama depending on whether it can be transpiled to ES5-compliant JS; you’ll need to try it out.

Additionally, given that Nakama uses a pool of JS VMs for efficiency, you’ll likely need to use the localcache APIs to share state between them. I’d probably use a before hook on the RPCs to implement this, and apply either a global or a per-RPC rate limit. Due to these caveats, it’s likely best if you roll your own solution.
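To make that concrete, here’s a minimal sketch of a per-user, per-RPC fixed-window check. All names and limits are illustrative, and a plain `Map` stands in for the shared cache so the logic is self-contained; in a real Nakama TypeScript module you’d back it with the localcache APIs (`nk.localcacheGet` / `nk.localcachePut`) instead, and verify their exact semantics against the Nakama docs:

```typescript
// Sketch only: fixed-window rate limiter keyed by user + RPC id.
// The Map below is a stand-in for Nakama's localcache.

interface WindowState {
  windowStart: number; // epoch ms when the current window began
  count: number;       // calls seen so far in this window
}

const cache = new Map<string, WindowState>();

// Returns true if the call is allowed, false if the caller is rate-limited.
function allowCall(
  userId: string,
  rpcId: string,
  limit: number,    // max calls per window
  windowMs: number, // window length in ms
  nowMs: number = Date.now()
): boolean {
  const key = `rl:${rpcId}:${userId}`;
  const state = cache.get(key);

  if (!state || nowMs - state.windowStart >= windowMs) {
    // First call, or the previous window expired: start a new window.
    cache.set(key, { windowStart: nowMs, count: 1 });
    return true;
  }

  if (state.count >= limit) {
    return false; // over the limit for this window
  }

  state.count++;
  cache.set(key, state);
  return true;
}
```

At the top of an RPC handler you’d then do something like `if (!allowCall(ctx.userId, "create_match", 5, 60000)) throw new Error("rate limited");`, with different limits per RPC id.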



Hello, @sesposito

Thank you so much for the answer, and I’ll take all of this into consideration. One last question: a friend of mine said that:

“ratelimits arent supposed to be on the server code, bcuz its not efficient and uses a lot of memory, you should ratelimit on a reverse proxy”

Is it possible to do that with NGINX and Nakama? Can I apply different rate limits to individual RPCs/WebSocket interactions? For example, a higher rate limit for chat messages but a lower one for creating matches.

If Nakama is behind NGINX, I believe you can set up different rate limits per location (route) in its config; however, I’m not experienced with it, so I can’t speak to how to achieve that for WebSockets.
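As a rough illustration of the per-location idea, a config along these lines should work. The zone names, rates, burst values, and upstream address are placeholders, and the RPC paths assume Nakama’s default HTTP API layout (`/v2/rpc/{id}` on port 7350), which you should verify for your setup:

```nginx
# Sketch only: per-route rate limits for a Nakama server behind NGINX.
limit_req_zone $binary_remote_addr zone=rpc_chat:10m  rate=10r/s;
limit_req_zone $binary_remote_addr zone=rpc_match:10m rate=1r/s;

upstream nakama {
    server 127.0.0.1:7350;
}

server {
    listen 80;

    # Higher limit for a hypothetical chat RPC.
    location /v2/rpc/send_chat_message {
        limit_req zone=rpc_chat burst=20 nodelay;
        proxy_pass http://nakama;
    }

    # Lower limit for a hypothetical match-creation RPC.
    location /v2/rpc/create_match {
        limit_req zone=rpc_match burst=5 nodelay;
        proxy_pass http://nakama;
    }

    # WebSocket endpoint: note that limit_req would only throttle the
    # initial upgrade request, not individual messages on the socket.
    location /ws {
        proxy_pass http://nakama;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```

This is also why the caveat about WebSockets matters: once the connection is upgraded, NGINX no longer sees individual realtime messages, so per-message limits (e.g. chat) still have to live in the server.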
