Why We Rebuilt Our Data Layer on Kysely

When we started Pokerscope we defaulted to Prisma because it let us ship features without writing boilerplate. Two years later our architecture looked less like a prototype and more like a distributed system. The generated client was hiding query plans we needed to shape, migrations were opaque during incident response, and type-safety broke down whenever we reached for advanced SQL. So we rebuilt the entire data layer on Kysely and made the migration the most ambitious platform project we have shipped to date.

Why Prisma Became a Bottleneck

  1. Predictability at scale — Our observability stack tracks query fingerprints. Prisma's generated client emitted SQL that varied with runtime conditionals, so identical TypeScript call sites produced multiple SQL variants, and Postgres could not reuse cached execution plans for them.
  2. Composable abstractions — We maintain a shared analytics service, a real-time ingest worker, and a tRPC API. Each needed to extend the ORM differently. Extending Prisma meant monkey-patching the generated client, which made upgrades fragile.
  3. Advanced SQL features — Range analysis requires common table expressions, lateral joins, and window aggregates. Prisma could technically run raw SQL, but doing so forfeited the type guarantees that kept the rest of the stack correct.

Migration Goals

We wrote the project charter the way we would an incident mitigation plan:

  1. Zero divergence — every rewritten query had to return the same results as its Prisma counterpart before it shipped.
  2. Stable fingerprints — dashboards and alerts keyed on query fingerprints had to survive the cutover.
  3. Instant reversibility — every milestone needed a path back to Prisma that required no deploy.

Building the Kysely Foundation

We treated Kysely as a compiler target instead of a query builder API. The goal was to describe our domain once and generate fit-for-purpose clients for every runtime.

Shared Database Typings

We dumped the production schema with pg_dump --schema-only, restored it into a scratch database for kysely-codegen to introspect, and layered brand types over the generated interfaces for our domain primitives. Those brands meant we could distinguish a HandId from a PlayerId at compile time.

packages/database/src/types.ts
import type { ColumnType } from 'kysely'

// Brand primitives: same runtime shape, distinct compile-time identity.
type Branded<T, Brand extends string> = T & { readonly brand: Brand }

export type BigIntAsNumber = Branded<number, 'BigIntAsNumber'>
export type HandId = Branded<string, 'HandId'>
export type PlayerId = Branded<string, 'PlayerId'>

// Layered over the kysely-codegen output: hand_id is never updated after insert.
export interface HandsTable {
  hand_id: ColumnType<HandId, HandId, never>
}

Deterministic Query Builders

Kysely lets you treat SQL as a value. We codified query composition rules as higher-order builders, so any service could stitch together filters without re-implementing business logic.

packages/database/src/query-handlers/rangeReport.ts
import { sql } from 'kysely'

// source is bound per call; aliases stay fixed so query fingerprints remain stable.
export const rangeReport = (source: string) =>
  baseHandQuery
    .where('hands.import_source', '=', source)
    .select((eb) => [
      eb.fn.sum<number>('actions.amount').as('total_committed'),
      eb.fn.avg<number>('actions.spr').as('avg_spr'),
      // percentile_cont has no dedicated builder helper, so we use a typed sql template
      sql<number>`percentile_cont(0.95) within group (order by actions.equity)`.as('p95_equity'),
    ])
    .groupBy('hands.hero_position')

Every composable added a deterministic alias map so instrumentation remained stable regardless of callsite.
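
To make the higher-order shape concrete, here is a minimal sketch; the columns (hands.game_type, hands.played_at) and the DB import are stand-ins for the codegen output:

import type { SelectQueryBuilder } from 'kysely'
import type { DB } from '../types'

// Any builder-to-builder function composes; services chain these instead of
// re-implementing filter logic inline.
type HandQuery = SelectQueryBuilder<DB, 'hands', object>

export const onlyCashGames = (qb: HandQuery): HandQuery =>
  qb.where('hands.game_type', '=', 'cash')

export const playedSince = (cutoff: Date) => (qb: HandQuery): HandQuery =>
  qb.where('hands.played_at', '>=', cutoff)

// usage: playedSince(cutoff)(onlyCashGames(baseHandQuery))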

Re-imagined Transactions

Prisma wrapped everything in implicit transactions. We replaced that with explicit scopes powered by Kysely’s transaction().execute() API. Our worker pool now wires context-aware transactions through AsyncLocalStorage, giving us trace IDs for every commit and rollback.
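
A minimal sketch of that wiring, with the trace-ID plumbing elided; withTransaction and conn are illustrative names:

import { AsyncLocalStorage } from 'node:async_hooks'
import type { Kysely, Transaction } from 'kysely'
import type { DB } from './types'

// The ambient transaction travels with the async context, so nested calls
// join the surrounding scope instead of opening a second transaction.
const txStorage = new AsyncLocalStorage<Transaction<DB>>()

export async function withTransaction<T>(db: Kysely<DB>, fn: () => Promise<T>): Promise<T> {
  const existing = txStorage.getStore()
  if (existing) return fn() // already inside a scope: reuse it
  return db.transaction().execute((trx) => txStorage.run(trx, fn))
}

// Query sites resolve the ambient handle instead of threading it as a parameter.
export const conn = (db: Kysely<DB>): Kysely<DB> => txStorage.getStore() ?? db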

Rewriting 1,042 Queries Without Going Dark

We cataloged every Prisma query in the monorepo and generated representative payloads from production logs. A replay harness executed both the Prisma and Kysely versions against a seeded Postgres instance and diffed the results after normalizing JSON ordering. That gave us a confidence score per route.

scripts/compare-orms.ts
import assert from 'node:assert'

for (const artifact of prismaArtifacts) {
  // Replay the captured call against both data layers...
  const prismaResult = await prismaClient[artifact.model][artifact.method](artifact.args)
  const kyselyResult = await kyselyExecutors[artifact.signature](artifact.args)
  // ...then fail loudly on divergence; sortDeep normalizes key and array order first.
  assert.deepStrictEqual(sortDeep(prismaResult), sortDeep(kyselyResult), artifact.signature)
}

Differences triggered snapshots that we triaged during daily standups. More than half were legitimate improvements—unused SELECT * fields we removed on purpose—and the harness made that explicit.

Managing Migrations at Scale

The biggest architectural change was moving away from Prisma Migrate. We adopted dbmate for schema management and wrote an adapter that emits both plain SQL files and Kysely migration scripts from a single definition, so the two runners stayed in sync during the rollout.
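
Kysely consumes migrations as modules exporting up and down, so the adapter's output looks roughly like this sketch; the DDL is a stand-in:

import { sql, type Kysely } from 'kysely'

// One generated module per dbmate SQL file, mirroring its up/down sections.
export async function up(db: Kysely<unknown>): Promise<void> {
  await sql`alter table hands add column import_source text`.execute(db)
}

export async function down(db: Kysely<unknown>): Promise<void> {
  await sql`alter table hands drop column import_source`.execute(db)
}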

Observability and Performance

The migration finally let us wire query lifecycle hooks into our telemetry stack. We stream every statement to Honeycomb with a stable hash, latency percentile, and row count. That visibility surfaced optimizations like rewriting a recursive CTE into an indexed adjacency list, dropping 99th percentile latency from 380 ms to 42 ms.
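
Kysely's built-in log hook is one place to attach that stream; a sketch, with the Honeycomb client (telemetry) and the pool wiring as stand-ins:

import { createHash } from 'node:crypto'
import { Kysely, PostgresDialect } from 'kysely'
import { Pool } from 'pg'
import type { DB } from './types'

declare const telemetry: { emit(event: Record<string, unknown>): void }

const db = new Kysely<DB>({
  dialect: new PostgresDialect({ pool: new Pool() }),
  log(event) {
    if (event.level !== 'query') return
    // Hash the SQL text only: parameters vary, the statement shape does not.
    const fingerprint = createHash('sha256').update(event.query.sql).digest('hex').slice(0, 16)
    telemetry.emit({ fingerprint, durationMs: event.queryDurationMillis })
  },
})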

We also moved heavy analytical queries to read replicas without touching application code by swapping the Kysely executor at the dependency-injection boundary.
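
One way to realize that boundary is two Kysely instances over different pools; the host names here are illustrative:

import { Kysely, PostgresDialect } from 'kysely'
import { Pool } from 'pg'
import type { DB } from './types'

const primary = new Kysely<DB>({
  dialect: new PostgresDialect({ pool: new Pool({ host: 'db-primary' }) }),
})
const replica = new Kysely<DB>({
  dialect: new PostgresDialect({ pool: new Pool({ host: 'db-replica' }) }),
})

// The injection point: analytical services ask for 'analytics', everything else
// receives the primary, and application code never changes.
export const dbFor = (workload: 'oltp' | 'analytics'): Kysely<DB> =>
  workload === 'analytics' ? replica : primary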

Adoption Timeline

We kept the blast radius small:

  1. Q1: Data ingestion workers—high throughput, low business risk.
  2. Q2: Read-heavy APIs—benefited from query shaping and caching.
  3. Q3: Mutating workflows—moved last after the transaction scaffolding was battle-tested.

Every milestone shipped with a “fallback switch” environment variable. If any regression surfaced, we flipped back to Prisma instantly while still collecting telemetry for diagnosis.
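
The switch itself can stay tiny; a sketch, with the variable name and repository bindings illustrative:

import type { HandsRepo } from './repos' // shared interface both clients implement (assumed)

declare const kyselyHandsRepo: HandsRepo
declare const prismaHandsRepo: HandsRepo

// Flipping DATA_LAYER=prisma reroutes a milestone without a deploy, while both
// clients keep emitting telemetry for diagnosis.
export const handsRepo: HandsRepo =
  process.env.DATA_LAYER !== 'prisma' ? kyselyHandsRepo : prismaHandsRepo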

Results and Next Steps

Next up is auto-generating domain events from the same Kysely metadata. That will keep our streaming pipeline in lockstep with relational state changes.

Rebuilding the data layer forced us to confront the parts of our system we had been ignoring. Kysely gave us the control plane we needed, but the real win was the discipline we built along the way.