We’ve currently added a mutex to prevent the case where the client makes three requests that all read the user at the same time, then each modifies it and stores it back, since that would lose data.
But we’ve had some odd issues with the mutex not unlocking. I wonder if using a mutex here is a bad practice altogether?
I wouldn’t say it is a bad practice, but it is certainly something that should be used as a last resort, since it adds complexity and room for bugs such as deadlocks.
Maybe a refactor that creates a specific endpoint for each event/change could help. That way the concurrency would be dealt with on the server instead of in the game client.
Thanks for the reply. This is actually what we do now: the mutexes are on the server, and we have a master lock holding a map of per-user locks. But we’ve already had one hard-to-track-down deadlock, so I’m not really happy with this. Also, if we scale to multiple instances, we’d have to guarantee that requests for a given user always go to the same instance.
I’m thinking a conditional write could be a better solution, though it would need a bit of rethinking. What do you think?
Hard to tell without more context, but Nakama’s storage engine might help you.
I recommend you take a look at its documentation, especially the “Conditional write” feature, which may solve your issue without locks (retrying when the version is outdated could be an option).
Hope this helps.