Custom DB migration

Hi guys,

I created my own custom DB tables and want to use golang-migrate/migrate (github.com/golang-migrate/migrate) to set up migration scripts.

So now, I want to get the DB address from the server config. How can I do that?

@tunglt1810 You won’t have access to the raw database connection string, but your library doesn’t require it. In your Go module’s InitModule function use the supplied *sql.DB handle to initialize your migration tool. The project’s readme shows you how to do this.
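A minimal sketch of that wiring, assuming golang-migrate's `postgres` database driver and `file` source, with migration files shipped in a `migrations/` directory next to the module (the directory name is illustrative):

```go
package main

import (
	"context"
	"database/sql"

	"github.com/golang-migrate/migrate/v4"
	"github.com/golang-migrate/migrate/v4/database/postgres"
	_ "github.com/golang-migrate/migrate/v4/source/file"
	"github.com/heroiclabs/nakama-common/runtime"
)

// InitModule is called by Nakama at startup with an already-open *sql.DB
// handle, so no raw connection string is needed.
func InitModule(ctx context.Context, logger runtime.Logger, db *sql.DB, nk runtime.NakamaModule, initializer runtime.Initializer) error {
	// Wrap the supplied handle in a driver instance for golang-migrate.
	driver, err := postgres.WithInstance(db, &postgres.Config{})
	if err != nil {
		return err
	}
	// "file://migrations" assumes the migration files are bundled alongside the module.
	m, err := migrate.NewWithDatabaseInstance("file://migrations", "postgres", driver)
	if err != nil {
		return err
	}
	// Up applies all pending migrations; ErrNoChange just means the schema is current.
	if err := m.Up(); err != nil && err != migrate.ErrNoChange {
		return err
	}
	logger.Info("custom migrations applied")
	return nil
}
```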

1 Like

Thanks @zyro, it worked!

@tunglt1810 I would suggest you avoid custom SQL as much as possible. We’ve seen a number of game teams in recent months who built their own domain model, didn’t take advantage of the structures and APIs built directly into Nakama, and ended up creating huge performance problems with their own code.

There are use cases where I’d suggest that SQL might be useful but they’re few.

1 Like

Thanks @novabyte for your suggestion.

I’m a newbie with Nakama and Golang, so I want to practice with the extensibility of your awesome project. For now I use the Storage Engine to store the user’s data.

Regarding the DB structure, do you think the storage table will get very big if we store everything in it?
For example, I want to store user inventory, proficiency levels, task history, … lots of things with complex object structures.

@tunglt1810 The storage table can be very large and you won’t have to worry about performance when you use the built-in Nakama APIs (especially the storage engine) because we’ve optimized those queries to perform very well at scale.
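As a hedged sketch of using the built-in storage engine from a Go module instead of a custom table, splitting each kind of data into its own collection (the collection/key names here are illustrative, not from the original thread):

```go
package main

import (
	"context"
	"encoding/json"

	"github.com/heroiclabs/nakama-common/runtime"
)

// WriteInventory stores a user's inventory as a JSON storage object in its
// own collection. Proficiency levels, task history, etc. would go in
// separate collections rather than one custom SQL table.
func WriteInventory(ctx context.Context, nk runtime.NakamaModule, userID string, inv map[string]int) error {
	value, err := json.Marshal(inv)
	if err != nil {
		return err
	}
	_, err = nk.StorageWrite(ctx, []*runtime.StorageWrite{{
		Collection:      "inventory", // illustrative collection name
		Key:             "items",
		UserID:          userID,
		Value:           string(value),
		PermissionRead:  1, // owner can read
		PermissionWrite: 0, // only the server can write
	}})
	return err
}
```

Batch reads of such objects go through `nk.StorageRead`, which is the code path the ~4ms figure below refers to.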

For example the largest game that uses Nakama has around ~6,500,000,000 storage objects for ~270,000,000 users. The average read time for a batch fetch of storage objects is ~4ms. To achieve these numbers we did apply some tuning to the database servers but that’s quite normal to do when operating at that scale.

1 Like

@novabyte it’s a whole new happy thing for a new year :smiley:

Can you share your tuning story as a blog?

@tunglt1810 That’s a large topic and difficult to cover because a lot of it involved specifics of the game studio project we helped on (so I can’t share due to NDAs). A few things we typically do when needed at large scale:

  • Tune the GC/Autovac parameters to be more aggressive on larger tables.
  • Tune kernel parameters to be better suited for database server workloads. This resource is useful though a bit outdated now.
  • Repartition the storage table if it’s run on PG/Aurora rather than CRDB.
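For the first point, on Postgres this kind of per-table autovacuum tuning is usually a config fragment like the following (the values are illustrative, not the ones we used; the `storage` table name is Nakama’s):

```sql
-- Make autovacuum fire much earlier on the large storage table:
-- by default it waits for ~20% of rows to change, which is far too
-- lazy on a table with billions of rows.
ALTER TABLE storage SET (
    autovacuum_vacuum_scale_factor  = 0.01,
    autovacuum_analyze_scale_factor = 0.005
);
```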
1 Like