What is the best way to trace a memory leak in my custom RPCs in the TypeScript Nakama runtime?

I have a series of custom RPCs in my Nakama TypeScript multiplayer server. I’ve noticed that when starting and completing a match, the memory usage on the machine running Nakama increases by ~1% and doesn’t go back down after the match finishes.

I’m wondering where the leak is and how to debug it. I started using the pprof endpoint for memory (/pprof/heap), but the flame graph doesn’t provide particularly useful information about which functions are using the most memory (top reports github.com/dop251/goja.(*baseObject).put).

Do you have any suggestions for getting more specific information to pin down the leak?

Let me know - thanks!

Hello @lukep,

Unfortunately it is not possible to take insightful pprofs of memory usage in the JS runtime, since JS allocations go through goja’s internal Go objects and the heap profile attributes them there rather than to your own functions. That said, we have many customers using the JS runtime in production with complex games without issues, and we’re not aware of any memory leaks.

Please consider the following points:

  • The JS VM pool for RPC requests may instantiate new VMs as needed, depending on these configs: Configuration - Heroic Labs Documentation

  • Avoid modifying JS global state - the VMs are reused, which means changes to global state will only be reflected in the current VM and not in others, which can cause correctness issues if you rely on that state; it also keeps that data alive for the lifetime of the VM (see the sketch after this list).

  • The Go GC may not release all memory back to the OS immediately; whether and when memory is returned to the OS depends on a number of variables, so this may also play a role in what you’re observing.
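
To make the global-state point concrete, here is a minimal sketch of the pattern to watch for. The RPC, cache, and collection names (recordMatchResultRpc, matchResultsCache, "match_results") are hypothetical; the nkruntime types, initializer.registerRpc, and nk.storageWrite are the standard TypeScript runtime APIs:

```typescript
// Anti-pattern: a module-level cache. Each pooled VM keeps its own copy of
// this array for its whole lifetime, so entries pushed here are never freed
// and each VM sees different contents.
const matchResultsCache: { matchId: string; score: number }[] = [];

function recordMatchResultRpc(
    ctx: nkruntime.Context,
    logger: nkruntime.Logger,
    nk: nkruntime.Nakama,
    payload: string
): string {
    const result = JSON.parse(payload) as { matchId: string; score: number };

    // Leaky/incorrect: grows for the lifetime of whichever VM served the call.
    // matchResultsCache.push(result);

    // Safer: persist per-request data in the storage engine instead of VM state.
    nk.storageWrite([
        {
            collection: "match_results",
            key: result.matchId,
            userId: ctx.userId, // assumes an authenticated caller
            value: result,
            permissionRead: 1,
            permissionWrite: 0,
        },
    ]);

    return JSON.stringify({ success: true });
}

// Registered from InitModule as usual.
function InitModule(
    ctx: nkruntime.Context,
    logger: nkruntime.Logger,
    nk: nkruntime.Nakama,
    initializer: nkruntime.Initializer
) {
    initializer.registerRpc("record_match_result", recordMatchResultRpc);
}
```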

Thanks for the quick response @sesposito! I’ll try to review what we have and see if we should adapt anything.