Nakama's Database Connection Handling (sql.Open with pgx)

Hi There,

As far as I understand from the code, Nakama appears to use sql.Open to establish the database connection:
db, err := sql.Open("pgx", parsedURL.String())

This utilizes Go’s database/sql abstraction with the pgx driver, which is great for basic connection pooling.
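
For reference, here's a minimal sketch of how that pool can be tuned through database/sql alone (this is not Nakama's actual code; the DSN and the pool values are placeholders I picked for illustration):

```go
package main

import (
	"database/sql"
	"log"
	"time"

	_ "github.com/jackc/pgx/v5/stdlib" // registers the "pgx" database/sql driver
)

func main() {
	// Placeholder DSN; Nakama builds its own from the database config.
	dsn := "postgres://root@localhost:26257/nakama?sslmode=disable"

	// sql.Open does not dial immediately; it only prepares the pool.
	db, err := sql.Open("pgx", dsn)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// database/sql's built-in pool knobs. Values are illustrative, not Nakama defaults.
	db.SetMaxOpenConns(100)
	db.SetMaxIdleConns(100)
	db.SetConnMaxLifetime(5 * time.Minute)

	// Verify the connection actually works.
	if err := db.Ping(); err != nil {
		log.Fatal(err)
	}
}
```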

I have a few questions regarding this implementation:

  1. Does Heroic Cloud (enterprise version) use pgxpool instead of sql.Open?
  2. Is there any open-source or community-built utility for Nakama that leverages pgxpool for database connections?
  3. In this article, an Aurora database cluster was used.
    • Was the same db_Connect method (as in the open-source Nakama) used in that setup?
  4. Are there any benchmark or standard values for max_open_connections, idle_connections, and max_lifetime?
    • For example, if Nakama is running on a 1 CPU or 2 CPU machine and the database is on a CockroachDB standard cluster (2 CPUs), what would be the ideal values?
  5. If a query execution fails due to contention or timeout on the database:
    • Does the current implementation perform any retries? (I've sketched what I mean by retries after this list.)
    • Is it possible for Nakama to fail to log such cases?

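To make question 5 concrete, this is the kind of retry loop I mean, roughly what CockroachDB recommends for serialization failures (SQLSTATE 40001). It's only an illustration under my own assumptions (the runWithRetry helper, the attempt count, and the backoff are all made up), not Nakama's implementation:

```go
package dbretry

import (
	"context"
	"database/sql"
	"errors"
	"time"

	"github.com/jackc/pgx/v5/pgconn"
)

// runWithRetry is a hypothetical helper: it re-runs a transaction when the
// database reports a serialization failure (SQLSTATE 40001), the usual
// signal of contention on CockroachDB.
func runWithRetry(ctx context.Context, db *sql.DB, maxAttempts int, fn func(*sql.Tx) error) error {
	var lastErr error
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		tx, err := db.BeginTx(ctx, nil)
		if err != nil {
			return err
		}
		err = fn(tx)
		if err == nil {
			if err = tx.Commit(); err == nil {
				return nil
			}
		} else {
			_ = tx.Rollback()
		}
		var pgErr *pgconn.PgError
		if errors.As(err, &pgErr) && pgErr.Code == "40001" {
			lastErr = err
			time.Sleep(time.Duration(attempt) * 50 * time.Millisecond) // crude linear backoff
			continue
		}
		return err
	}
	return lastErr
}
```
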
Thanks in advance for the clarification!

Hello @bachu11,

Nakama Enterprise uses a combination of libraries to manage the databases in clustered mode. Aurora is one of the databases that Nakama is compatible with.

As for the database specifics, Nakama, and the topology configuration, feel free to contact support@heroiclabs.com; they can help you with your questions, as the answers are very specific to each individual setup and to how Heroic Cloud configures the topology.

Best.

I believe that if I move the database connection to pgxpool and implement a shared session cache, I should be able to manage my poor man's Nakama cluster until I get to a stage where I can afford the enterprise version. I have evaluated other options such as PgBouncer, but they are not compatible with a CockroachDB standard cluster.
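
Roughly, the swap I have in mind looks like this (a quick sketch assuming pgx v5; the connection string and pool sizes are placeholders, and the shared session cache isn't shown):

```go
package main

import (
	"context"
	"log"
	"time"

	"github.com/jackc/pgx/v5/pgxpool"
)

func main() {
	ctx := context.Background()

	// Placeholder connection string for a local CockroachDB node.
	cfg, err := pgxpool.ParseConfig("postgres://root@localhost:26257/nakama?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}

	// pgxpool's own pool knobs, roughly mirroring database/sql's settings.
	cfg.MaxConns = 100
	cfg.MinConns = 10
	cfg.MaxConnLifetime = 5 * time.Minute

	pool, err := pgxpool.NewWithConfig(ctx, cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer pool.Close()

	if err := pool.Ping(ctx); err != nil {
		log.Fatal(err)
	}
}
```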

I will get in touch with support sooner or later.

Thanks