Tournaments with Fixed-Capacity Leaderboards

I am looking to implement a tournament system in Nakama where users who join the tournament are assigned to a fixed-capacity leaderboard. When a leaderboard reaches its capacity, the next person who joins the tournament is assigned to a new leaderboard (this new leaderboard should be created and owned by the server, not the player). Players are assigned to each leaderboard in sequence; no matchmaking is needed to group players of similar skill, and there should be no limit to the number of leaderboards that can be created this way. When the tournament ends, the top players on each leaderboard are awarded a prize, and the leaderboards are deleted a few days later, so each player’s final ranking remains viewable for a few days after the tournament.

For example, players are notified that a tournament has started, either through a notification or by logging into the game and seeing a user interface element showing that a tournament has begun. The player can optionally join the tournament by clicking that element, and is then immediately assigned to a leaderboard with up to 49 other players. When the tournament ends, the player enters the game, can view the leaderboard along with their final ranking, and can claim any prizes awarded for that final ranking.

What would be a good way to approach creating such a system while ensuring the system remains scalable in Nakama?

@jamesbaud For this use case I wouldn’t use the Tournament API in Nakama but just use a collection of leaderboards. You can control all the logic you’ve described around them with a few RPC functions written for the server runtime. There’s also a callback you can register at server startup that will be called when a leaderboard expires.
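Roughly, the skeleton in the Go runtime could look like the sketch below. The RPC id, function names, and module layout are just placeholders, and the exact signature of the leaderboard expiry callback should be checked against the nakama-common version you're on:

```go
package main

import (
	"context"
	"database/sql"

	"github.com/heroiclabs/nakama-common/runtime"
)

func InitModule(ctx context.Context, logger runtime.Logger, db *sql.DB,
	nk runtime.NakamaModule, initializer runtime.Initializer) error {
	// The client calls this RPC to opt in to the tournament and get assigned
	// to a leaderboard.
	if err := initializer.RegisterRpc("join_tournament", joinTournamentRpc); err != nil {
		return err
	}
	// The expiry callback mentioned above would also be registered here
	// (initializer.RegisterLeaderboardReset) to handle end-of-tournament work;
	// check the nakama-common docs for the exact callback signature.
	return nil
}

func joinTournamentRpc(ctx context.Context, logger runtime.Logger, db *sql.DB,
	nk runtime.NakamaModule, payload string) (string, error) {
	// Find-or-create the currently-filling leaderboard and add the caller to it.
	return "{}", nil
}
```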

The only part that’s unclear is the approach you want to take to fill each leaderboard before starting the next one. It might be better to predefine a number of leaderboards (which you could adjust as the player base grows) and hash each player’s user ID to place them into one of the leaderboard “shards”, along the lines of the sketch below. While individual leaderboards may not fill up this way, it spreads players more evenly across them. What do you think?
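For what it’s worth, the sharding idea is just this (assuming the same Go module as the sketch above; the shard count and leaderboard ID format are made up and would be tuned to the player base):

```go
// Hash the user ID into one of a fixed number of leaderboard "shards".
// Needs "fmt" and "hash/fnv" imported.
const numShards = 32

func shardLeaderboardID(userID string) string {
	h := fnv.New32a()
	h.Write([]byte(userID))
	return fmt.Sprintf("tournament-shard-%02d", h.Sum32()%numShards)
}
```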

I would much rather fill the current leaderboard before creating a new one. This makes rewards predictable for players who join the tournament (e.g., the top several players in each leaderboard receive an award, rather than rewards being adjusted based on how many players each leaderboard has at the end of the tournament). Filling the incomplete leaderboard first also populates it more quickly, which matters for giving players who join within the same window of time the fairest possible chance at an award when the tournament ends (because the tournament has a fixed duration and players can earn additional points by investing time, players who share a leaderboard have a more equal chance at winning a reward).

Many games, both desktop and mobile, implement leaderboards in a similar way (e.g., StarCraft II). They restrict the number of players in each leaderboard so that players have a more meaningful path of progress. Many mobile games do something even closer to what I described above. A leaderboard/tournament design where players can invest additional time to earn more points is much better at driving engagement and retention than one giant leaderboard for all players.

What I’m particularly interested in is where and how to store the ID of the incomplete leaderboard (and create a new one when the previous leaderboard is full, or when it is the first leaderboard of the tournament). I want the operation to be guarded by some sort of mutex, so that other players can’t be added to an already-full leaderboard before the previous request has finished processing. I’d also like to know where in Nakama to store this global data (i.e., the incomplete leaderboard ID) so that if the system scales to multiple instances, the information is shared and accessible to every instance. I assume I’ll need some sort of custom SQL query. Also, if multiple clients make an RPC call to the server, are the requests serviced on multiple threads or processed in sequence? If they run concurrently, I want to be careful that only one new leaderboard gets created and all simultaneous requests are added to it, rather than each simultaneous request creating its own leaderboard.
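To make the question concrete, here is roughly the flow I’m imagining, sketched against the Go runtime. I haven’t validated it: the collection/key names are placeholders, I’m assuming the storage engine’s version-based conditional writes could stand in for the mutex (rather than a raw SQL query), and the exact LeaderboardCreate parameter list depends on the Nakama release.

```go
package main

import (
	"context"
	"encoding/json"
	"errors"
	"fmt"

	"github.com/heroiclabs/nakama-common/runtime"
)

// openBoard is a system-owned storage object tracking the leaderboard that is
// currently being filled and how many players it already has.
type openBoard struct {
	Seq           int    `json:"seq"`
	LeaderboardID string `json:"leaderboard_id"`
	Count         int    `json:"count"`
}

const boardCapacity = 50

func assignToLeaderboard(ctx context.Context, nk runtime.NakamaModule) (string, error) {
	for attempt := 0; attempt < 5; attempt++ {
		// Read the shared pointer. Leaving UserID empty makes the object
		// system-owned, so every server instance sees the same value.
		objs, err := nk.StorageRead(ctx, []*runtime.StorageRead{{
			Collection: "tournament", Key: "open_board",
		}})
		if err != nil {
			return "", err
		}

		var board openBoard
		version := "*" // "*" = only write if the object does not exist yet.
		if len(objs) > 0 {
			if err := json.Unmarshal([]byte(objs[0].Value), &board); err != nil {
				return "", err
			}
			version = objs[0].Version
		}

		if board.LeaderboardID == "" || board.Count >= boardCapacity {
			// Current board is full (or this is the first join): create a new
			// server-owned leaderboard. The LeaderboardCreate parameter list
			// differs slightly between Nakama releases. If we lose the race on
			// the conditional write below, this board may end up unused, which
			// is harmless but could be cleaned up.
			board.Seq++
			board.LeaderboardID = fmt.Sprintf("tournament-board-%d", board.Seq)
			board.Count = 0
			if err := nk.LeaderboardCreate(ctx, board.LeaderboardID, true, "desc", "incr", "", nil); err != nil {
				return "", err
			}
		}
		board.Count++

		value, err := json.Marshal(board)
		if err != nil {
			return "", err
		}
		// Conditional write: this fails if another request updated the object
		// after we read it, in which case we loop and retry with fresh state,
		// so only one request at a time can claim a slot or create a board.
		if _, err := nk.StorageWrite(ctx, []*runtime.StorageWrite{{
			Collection: "tournament", Key: "open_board",
			Value: string(value), Version: version,
		}}); err == nil {
			return board.LeaderboardID, nil
		}
	}
	return "", errors.New("could not assign a leaderboard, too much contention")
}
```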

For this use case I wouldn’t use the Tournament API in Nakama but just use a collection of leaderboards

I was thinking the same here. The only part of the Tournament API that would be useful is the hook that gets called when the tournament ends, but that can be re-implemented in a custom RPC function if necessary.
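Re-implementing that hook could be as simple as something along these lines, assuming the same Go module as the earlier sketches; the prize cut-off and notification content are placeholders, and deleting the boards a few days later would be a separate scheduled step:

```go
// Walk each tournament leaderboard, take the top 3 records, and notify the
// winners with a persistent notification they can claim on next login.
func endTournament(ctx context.Context, nk runtime.NakamaModule, boardIDs []string) error {
	for _, id := range boardIDs {
		records, _, _, _, err := nk.LeaderboardRecordsList(ctx, id, nil, 3, "", 0)
		if err != nil {
			return err
		}
		for _, rec := range records {
			content := map[string]interface{}{"leaderboard_id": id, "rank": rec.Rank}
			if err := nk.NotificationSend(ctx, rec.OwnerId, "Tournament reward", content, 1, "", true); err != nil {
				return err
			}
		}
		// The leaderboard itself would be deleted a few days later, e.g. by a
		// scheduled job that calls nk.LeaderboardDelete(ctx, id).
	}
	return nil
}
```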

@novabyte Can you provide answers to the following questions:

What I’m particularly interested in is where and how to store the ID of the incomplete leaderboard (and create a new one when the previous leaderboard is full, or when it is the first leaderboard of the tournament). I want the operation to be guarded by some sort of mutex, so that other players can’t be added to an already-full leaderboard before the previous request has finished processing. I’d also like to know where in Nakama to store this global data (i.e., the incomplete leaderboard ID) so that if the system scales to multiple instances, the information is shared and accessible to every instance. I assume I’ll need some sort of custom SQL query. Also, if multiple clients make an RPC call to the server, are the requests serviced on multiple threads or processed in sequence? If they run concurrently, I want to be careful that only one new leaderboard gets created and all simultaneous requests are added to it, rather than each simultaneous request creating its own leaderboard.

Does anyone have any insight or suggestions here?