Collections and Database Implementation in conjunction with SQL Tables

Hi, I have a question as part of my Nakama explorations for our project, and I don’t see an existing question like this:

When performing operations on Storage Objects, are they stored internally in whichever database you have Nakama pointed at? (i.e. everything is saved in CockroachDB by default, but if you pointed Nakama at PostgreSQL, everything would live there instead)

If they are in the same “space”, can the internal data of storage objects ever be accessed outside of the Collection APIs in raw SQL? (whether it SHOULD be might be a different question, since the developer might be trying to do something that hints at a change in the data model)

For example (this is just a rough example):

Player Inventory is stored as a collection, named “inventory”, indexed by “userid”; the JSON contents might contain a store catalogitemid with a count and other misc data.
A store catalog item might be stored as a SQL table “catalogitems”, indexed by “catalogitemid”.
A store might be stored as a SQL table “storecatalogitems”, indexed by “storecatalogitemid”, foreign key “catalogitemid”.

A query is needed to find out what items user 25 has in store ID 2.
Can a join cross the “collection-SQL” boundary? Something like:
SELECT i.storageObjectId(?), ci.catalogitemid
FROM storecatalogitems AS s
INNER JOIN catalogitems AS ci
  ON s.catalogitemid = ci.catalogitemid
  AND s.storeid = 2
INNER JOIN collectiontable_inventory AS i
  ON ci.catalogitemid = {some kind of jsonb query looking inside for a matching i.catalogitemid}
  AND i.userid = 25

This specific example could probably be worked around with some client-side implementation that fetches everything in a store and everything in the inventory for a player and finds matching IDs that way, but that could mean sending potentially large data sets to the client unnecessarily.

Are they meant to remain separate data spaces? Server script can access both in separate statements, but I can imagine cases like the above where you might want to access both via SQL.

Knowing this would help shape what kind of data model is used for structures in Nakama.

thanks,

Barry

Hi @bbi-barrymccallum. These are good questions. I’ll try to cover each of them for you, and we can discuss any parts in more detail as you follow up.

are they stored internally in whichever database you have Nakama pointed at?

Yes. We’ve designed Nakama to keep data in memory only where it can be reconstructed at runtime or is required for realtime performance. All core data (game data, player data, etc.) is stored in a Postgres wire-compatible database.

This could be Postgres, AWS Aurora, or CockroachDB (or theoretically any other database engine that speaks the Postgres wire protocol and understands JSONB column types).

If they are in the same “space”, can the internal data of storage objects ever be accessed outside of the Collection APIs in raw SQL? (whether it SHOULD be might be a different question, since the developer might be trying to do something that hints at a change in the data model)

Yes, absolutely. We’ve got a core storage engine API whose queries we guarantee meet our internal guidelines for optimal query patterns, but we recognise that every game has unique requirements, so we also include a SQL API which can be called from the server runtime to interact with any of the database tables that are part of the Nakama schema.
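
For illustration, a rough sketch of a custom read-only query run through that SQL API (assuming a create_time column on the users table; verify against the actual schema):

-- Sketch: count accounts created in the last 7 days.
-- Assumes users.create_time exists; adjust to the real schema.
SELECT count(*)
FROM users
WHERE create_time > now() - interval '7 days';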

I’d recommend you minimise the need for custom queries, but it’s sometimes necessary to fit the shape of the query patterns you have for your game project. The performance optimisation of these queries will be up to you and your team, although we can provide support. We also recommend against adding custom columns to the built-in schema, and instead suggest you create separate tables if needed.
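
For example, a separate catalog table could be sketched like this. The table and column names are placeholders for illustration, not part of the Nakama schema:

-- Hypothetical custom table; names and types are placeholders.
CREATE TABLE IF NOT EXISTS catalogitems (
  catalogitemid UUID PRIMARY KEY,
  name          TEXT NOT NULL,
  price         BIGINT NOT NULL,
  metadata      JSONB NOT NULL DEFAULT '{}'
);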

Are they meant to remain separate data spaces? Server script can access both in separate statements, but I can imagine cases like the above where you might want to access both via SQL.

In general the APIs are designed to minimise JOINs and colocate relevant data alongside its specific contextual uses. For example, each user has a metadata region which can be used to store information that’d be returned frequently alongside their user account, or might be visible to other users in game when that player’s profile is fetched.
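
As a sketch, that metadata is a JSONB column on the users table, so a frequently needed field can be read back without a JOIN (“title” here is a made-up key, not a built-in field):

-- Sketch: read one field out of a user's metadata region.
-- 'title' is an arbitrary example key inside the JSONB metadata.
SELECT username, metadata->>'title' AS title
FROM users
WHERE id = $1;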

This specific example could probably be worked around with some client-side implementation that fetches everything in a store and everything in the inventory for a player and finds matching IDs that way, but that could mean sending potentially large data sets to the client unnecessarily.

There are a lot of specifics to an inventory API and a virtual store which can affect the best way to represent the data model for the common access patterns. We’ve successfully built these kinds of gameplay features quite a few times on top of the storage engine API, through the combination of batch key-value and listing operations it provides.

The example SQL you’ve given can be done if that is needed for your game design requirements (see the sketch after these questions), but to give you a more detailed answer I’d need more information:

  • What is the average size of an inventory item?
  • How many virtual items exist in the economy for your game?
  • How frequently does/will the virtual catalog change to include new items?
  • Is there a game economy you can provide as a reference example that is similar to the game design you have for your project?
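
In the meantime, to make your pseudo-query concrete, here is a hedged sketch of a join that crosses the boundary. It assumes the storage engine keeps objects in a single storage table with collection, user_id and JSONB value columns (more on the schema below), that the inventory value is shaped like {"items": [{"catalogitemid": "...", "count": 2}]}, and that storecatalogitems and catalogitems are your own custom tables. It is Postgres syntax; lateral JSONB expansion may differ on other engines:

-- Sketch: which catalog items does "user 25" own that are sold in store 2?
-- The storage column names and the inventory JSON shape are assumptions.
SELECT ci.catalogitemid, item->>'count' AS count
FROM storecatalogitems AS s
INNER JOIN catalogitems AS ci
  ON ci.catalogitemid = s.catalogitemid
INNER JOIN storage AS inv
  ON inv.collection = 'inventory'
  AND inv.user_id = $1  -- the UUID for "user 25"
CROSS JOIN LATERAL jsonb_array_elements(inv.value->'items') AS item
WHERE s.storeid = 2
  AND (item->>'catalogitemid')::uuid = ci.catalogitemid;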

Hope this helps.

Thanks for the reply @novabyte.

What are the names of the tables in the Nakama schema? (specifically the user-added collections) I don’t see them in the storage engine API.

Some other BaaS platforms include a Store and Catalog API as part of their product. Customer needs can differ between store implementations, so I can see why it’s not something that’s included in the base product.
Are there any extensions or sample projects that extend Store and Catalog type functionality on top of Nakama? Given that we would be starting from scratch, having something to work with or extend as a base implementation would probably save some time. Google doesn’t seem to find anything.

I’m not sure I can answer some of these because I don’t have the information yet myself.

  • Inventory item storage in Nakama will be quite small: likely a count int, a createdOn datetime, and maybe one or two ints that might live as columns or inside a JSON struct.
  • A couple hundred possibly?
  • Catalog changes and economy I can’t really say.

What are the names of the tables in the Nakama schema? (specifically the user-added collections) I don’t see them in the storage engine API.

The storage engine API is built on top of the storage table in the Nakama schema. We take advantage of carefully chosen btree indexes to provide fantastic performance on the single-table design, and wherever we need to achieve greater scale on the size of the table we use logical “sharding” with PG11 table partitioning on top of the storage table, although we’ve only needed to do this on one game title so far. :slight_smile:

You can find the Nakama schema here, although a quick summary is visible with the \d psql command:

               List of relations
 Schema |        Name        | Type
--------+--------------------+-------
 public | group_edge         | table
 public | groups             | table
 public | leaderboard        | table
 public | leaderboard_record | table
 public | message            | table
 public | migration_info     | table
 public | notification       | table
 public | storage            | table
 public | user_device        | table
 public | user_edge          | table
 public | user_tombstone     | table
 public | users              | table
 public | wallet_ledger      | table
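
So a user-added collection like “inventory” is not a table of its own; it is a set of rows in the storage table filtered on the collection column. Listing one user's collection in raw SQL might look like this sketch (column names assumed, as above):

-- Sketch: list a user's "inventory" collection rows.
SELECT key, value
FROM storage
WHERE collection = 'inventory'
  AND user_id = $1
ORDER BY key
LIMIT 100;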

Are there any extensions or sample projects that extend Store and Catalog type functionality on top of Nakama?

This is a great suggestion. We’ve not got any open-source code to use as a reference for a virtual store/catalog and inventory design but I’ll take it to the team and see how quickly we could put an example together.

I’m not sure I can answer some of these because I don’t have the information yet myself.

No worries. Based on the info you do have to hand I think that it’d be quick and simple to structure the virtual store and inventory on top of the storage engine API.

Is there a way to connect to this using psql, or a DBA tool?

@EarthAdam The database runs separately from the game server. You can connect to and inspect it with psql or a GUI tool, like you would with any Postgres-compatible database.
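
For example, with the docker-compose-postgres.yml setup from our docs, the connection would look something like the line below. The credentials and host here are assumptions based on that compose file's defaults; adjust them to match your own deployment:
psql "postgresql://postgres:localdb@127.0.0.1:5432/nakama"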


Great @novabyte! Is there info on this? I saw the cURL commands, but couldn’t figure out what to use as the authorization bearer. If I can just use psql or something like that, though, that would be ideal. I’ve tried several iterations of this, but no luck so far:
psql "sslmode=disable dbname=nakama user=admin password=password hostaddr=<I put my Digital Ocean IP here, without the port>"

@EarthAdam Have you deployed your own custom-configured Nakama + database on Digital Ocean, or are you using the DO Marketplace Nakama droplet?

If you’re using the pre-configured marketplace droplet it uses CockroachDB as its database and only listens for traffic on localhost. Try SSH-ing into the droplet and running cockroach sql -d=nakama --insecure.


Thanks @zyro! I did use the marketplace droplet, following the YouTube tutorial. It sounds like I need to set up a Postgres DB in DO separate from the Nakama droplet in order to access it remotely. Is that the case, or would they just be two separate containers (one for Nakama, one for Postgres) in the same droplet?

To get started I’ve created a Postgres DB in DO and downloaded Docker Desktop, although I’m confused about how to point the docker-compose.yaml at this particular Postgres DB. I also found the docker-compose-postgres.yaml, which seems like it would initialize both in the same droplet, just in separate containers, but I’m getting a FileNotFoundError:
No such file or directory: '.\\docker-compose.yml'

Do you have any more information on deploying a custom-configured Nakama + Postgres setup?

So my suggestion to try SSH-ing into the droplet and running cockroach sql -d=nakama --insecure did not work as intended? I admit my Digital Ocean-fu is a bit rusty. :smile:


@zyro After a few different approaches I finally got your SSH-ing approach working for the CockroachDB in Bash using ssh root@<ip address>, but I’m stuck on getting the CA cert for the database inside of the droplet in order to configure DBeaver like this tutorial.

Ideally I’d like to get Postgres going because I’m hoping to optimize for time series data, and TimescaleDB seems like it would be a useful extension that works with Postgres.

The default database inside the droplet does not configure SSL, so you should disable that option in DBeaver.

This is getting off-topic for this post. If you need more info please open a new thread and give more detail about what you’re trying to achieve. :+1:

Yeah it is getting off-topic, I apologize. I’ve just created a new thread for it here: Setting up DBA with Nakama
Thanks!