@nixarn That’s somewhat true but it doesn’t do justice to how the game server is designed. We should really create some detailed architecture docs but that will have to come in the new year after we’ve released the upcoming Nakama 3.0 version. In the meantime I can add more detail in this thread.
The game server is broken up into realtime and non-realtime features. This cleanly maps to what you see in most of the client SDKs as the client object and the socket object. The non-realtime features of the system like friends, groups, notifications, chat history, etc. all depend on carefully crafted queries and indexes and communicate directly with the database server(s).
A few of these features, like leaderboards and tournaments, also use in-memory caching inside the game server for certain operations (like rank calculations). We use data structures like skip lists to efficiently traverse and cache these large datasets in RAM. This lets us eliminate the need for secondary caches like Redis or MemSQL, which is part of the goal of the technology: minimize the amount of infrastructure needed to manage and scale highly successful games.
The realtime features of the game server use a combination of gossip protocols, inter-cluster communication, dotted version vectors, and distributed data structures to create a cluster-wide “view” of all socket connections and which users those sockets belong to. We typically call these presences, and they are internally represented as a tuple of { user_id, session_id, node_id }. This representation underpins all the realtime features like chat, relayed multiplayer, status events, etc.
The authoritative multiplayer engine and matchmaker in the game server take advantage of the cluster system, but also have some special components of their own to manage how messages are broadcast, hook into the lifecycle of the server instance itself, and replicate the information needed for players to find and form matches together.
The beauty of this design (at least we believe so) is that it treats the game server itself like an in-memory replicated database engine. If you squint, you could think of what we’ve built as an in-memory database which uses a persistent Postgres wire-compatible database for core data. It also minimizes the total infrastructure you need to manage down to:
- A load balancer to open sockets and handle requests (also do SSL termination).
- N-number of Nakama instances clustered together.
- N-number of database instances clustered together.
You end up with a scalable model with no single point of failure (except DNS lookups, though you can set up secondary DNS if needed; it depends on the infrastructure budget for the game project) that can operate at very large scale as needed.
Sorry for the long post. Hope this helps.
PS: I’ve glossed over a few details but the majority of the important parts are covered above.