Isolating PostgreSQL Tests with PGLite

Published: Jun 9, 2025

Last updated: Jun 9, 2025

This post introduces PGLite as an in-memory way to run tests against PostgreSQL without compromising too much on speed or isolation.

We will be using Prisma and Vitest to demonstrate this, but please note that the approach applies just as well to alternative test runners such as Jest and PostgreSQL-compatible libraries such as Drizzle.

Example code used for this blog post can be found on my GitHub.

Testing purgatory

There are four things that lead to testing purgatory:

  1. Boundaries
  2. Isolation
  3. Mocking
  4. Asserting side-effects

If your test boundaries are too narrow, you lose test confidence.

If you fail to properly isolate your tests, they tend towards flakiness.

If you over-mock or mock the wrong things, you add unnecessary code volume to your application and your tests present an inaccurate reflection of it.

Finally, if you over-assert side-effects within a module that are separate from the desired functionality of the program, you run the risk of false negatives once refactoring occurs.

How can PGLite help with isolation?

PGLite is a full PostgreSQL database that runs in WebAssembly.

While some of its benefits are beyond the scope of this blog post, the main focus here is that it's lightweight and supports the PostgreSQL feature set and extensions.

This also gives us a perfect use case: a drop-in replacement that can be used for testing.

The way that test runners like Vitest work is that they run tests sequentially within each file. Beyond that, the test runners attempt to parallelize the test runs whenever they can.

We can control this behavior by reducing the amount of parallel testing that can occur through test configuration files or the CLI in order to run tests serially.

In Vitest, there are a few options provided around this:

  • threads pool runs every test file in a separate Worker.
  • forks pool runs every test file in a separate forked child process.
  • vmThreads pool runs every test file in a separate VM context, but it uses workers for parallelism.

If you are more familiar with Jest, then you may have used --runInBand. The --runInBand flag in Jest forces all tests to run serially in the same process rather than creating a worker pool of child processes.

By default, Vitest uses the forks worker pool.
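If you want fully serial behaviour in Vitest (the closest analogue to Jest's --runInBand), a configuration sketch might look like the following. The option names follow Vitest's poolOptions API; check your installed version's documentation before relying on them:

```typescript
// vitest.config.ts — sketch: force serial execution in a single forked
// process, comparable to Jest's --runInBand.
import { defineConfig } from "vitest/config";

export default defineConfig({
  test: {
    pool: "forks",
    poolOptions: {
      forks: {
        // One child process for the whole run; files execute one after another
        singleFork: true,
      },
    },
  },
});
```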

Selecting values for parallelism and worker pools is important for a number of reasons (including test speed in resource-constrained environments).

One of those reasons is to prevent cross-test access to shared resources such as databases. Failing to isolate this will lead to dirty reads or false assumptions about the current state of the database and, in turn, flaky tests.

Given the lightweight nature of PGLite, and knowing that tests run in parallel across files but serially within each file, we can work around this problem by spinning up an in-memory database for every test file.

Understanding alternative approaches

Before going too far, it's worth knowing some other approaches and what their trade-offs are.

In the scenario of spinning up a database for tests to interact with, we already know that tests would need to run in isolation to prevent flakiness.

In a previous post, The Boundaries of TypeScript Testing, I demonstrated writing tests using TestContainers, which can also be a great way to introduce parallelism through test sharding.

Test sharding splits your tests into N shards (where N is the number of shards you nominate). Doing so while also spinning up N test containers can re-enable parallelism to an extent, but each shard still needs to run its own tests serially.
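As an illustration of the partitioning idea (this is not Vitest's actual scheduling code), splitting files round-robin across N shards might look like:

```typescript
// Sketch of round-robin shard partitioning, mirroring the idea behind
// `vitest run --shard=1/3`. Illustrative only, not Vitest's implementation.
export function filesForShard(
  files: string[],
  shardIndex: number, // 1-based, as in --shard=1/3
  shardCount: number
): string[] {
  return files.filter((_, i) => i % shardCount === shardIndex - 1);
}
```

Each shard then runs only its subset of files, typically on a separate CI job.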

When working with databases that do not offer in-memory solutions, TestContainers can still be your best bet for enabling some form of parallelism through test sharding.

Configuring Vitest and Prisma for in-memory PGLite

In my demo code, I've followed the Getting Started guide for Prisma and made use of that demonstration schema.

Following that, Vitest needs to be installed and configured with the following options:

```typescript
// vitest.config.ts
import { defineConfig } from "vitest/config";
import { cpus } from "node:os";

const maxThreads = Math.max(1, cpus().length - 1);

export default defineConfig({
  test: {
    pool: "threads",
    poolOptions: {
      threads: {
        maxThreads,
        minThreads: 1,
        isolate: true,
      },
    },
    setupFiles: ["./src/test/setup.ts"],
    globalSetup: "./src/test/global-setup.ts",
    // Add the types file to be loaded automatically
    typecheck: {
      include: ["**/*.{test,spec}.{js,ts,jsx,tsx}"],
    },
    testTimeout: 15000,
    hookTimeout: 10000,
    include: ["**/*.{test,spec}.{js,ts,jsx,tsx}"],
    exclude: ["node_modules", "dist", "build"],
    environment: "node",
  },
});
```

Most of this is default configuration, but importantly we have configured a global setup file, a setup file for tests, and updated our worker pool options.

In our case specifically, I've opted to run isolated threads and am making use of node:os (since I am using Node.js in this post) to provide a limit for maxThreads. I've subtracted one from the CPU core count so as not to draw compute power away from other processes. You will likely want -1 or -2.

Please note: it's worth reviewing the configuration options and refining your configuration for what suits your project. Do not forget that your CI test runner environment may differ from your local one.

Files added to globalSetup run once prior to running the tests, while the files within setupFiles run before each test file.

You want to configure things so that globalSetup prepares what we need for our PGLite database, while setupFiles configures the in-memory database itself and makes it accessible to each file's tests.

Configuring our global setup file

Starting with the global setup file, we can add the following:

```typescript
import { generateDbInitSql } from "./db-setup";

export async function setup() {
  console.log("🔧 Setting up test environment...");

  try {
    // Generate the schema SQL
    await generateDbInitSql();
    console.log("✅ Database schema generated and cached");
  } catch (error) {
    console.error("❌ Failed to generate database schema:", error);
    throw error;
  }
}

export async function teardown() {
  console.log("🧹 Global teardown complete");
}
```

In Vitest, the setup and teardown functions are key functions that the test runner looks for. In my case (and to keep it simple), I'm running a generateDbInitSql function as part of global setup.

My db-setup.ts file looks like the following:

```typescript
import { readFileSync } from "node:fs";
import type { PrismaClient } from "../../generated/prisma";
import type { PGlite } from "@electric-sql/pglite";
import { exec } from "node:child_process";
import { promisify } from "node:util";
import { resolve } from "node:path";
import { cwd } from "node:process";

const execAsync = promisify(exec);

declare global {
  var testPrisma: PrismaClient;
  var testDb: PGlite;
}

const migrationOutputPath = resolve(cwd(), "tmp/migration.sql");

/**
 * The Prisma CLI does not (yet) support initializing a PGLite database. Instead, we get Prisma to
 * dump a single migration file which can then be loaded by a PGLite instance to setup the
 * database.
 *
 * @see https://github.com/lucasthevenet/pglite-utils/issues/8#issuecomment-2147944548
 */
export async function generateDbInitSql() {
  const prismaSchemaPath = resolve(cwd(), "prisma/schema.prisma");

  await execAsync(
    `prisma migrate diff --from-empty --to-schema-datamodel ${prismaSchemaPath} --script > ${migrationOutputPath}`
  );
}

export function getSchemaSql() {
  return readFileSync(migrationOutputPath, "utf-8");
}
```

In the file, I am running a Prisma shell command in order to generate a single migration file that can be used for setting up the database for each test.

At the time of writing, Prisma does not expose an API to do this programmatically.

In my case, the output file will be placed locally within a tmp/ folder. This runs every time we re-run the tests.

Configuring our test setup file

The setup file adds some hooks to our before/after blocks to configure the PGLite database for Prisma and to set that to our globals:

```typescript
import { beforeAll, afterAll, beforeEach } from "vitest";
import { PGlite } from "@electric-sql/pglite";
import { PrismaPGlite } from "pglite-prisma-adapter";
import { PrismaClient } from "../../generated/prisma";
import { getSchemaSql } from "./db-setup";

let db: PGlite;
let prisma: PrismaClient;

beforeAll(async () => {
  console.log("🚀 Setting up database for test file...");

  db = new PGlite({
    dataDir: "memory://",
  });

  await db.exec(getSchemaSql());

  prisma = new PrismaClient({
    adapter: new PrismaPGlite(db),
  });

  // Make available globally
  global.testDb = db;
  global.testPrisma = prisma;

  console.log("✅ Database ready!");
});

afterAll(async () => {
  console.log("🧹 Cleaning up database...");

  try {
    await prisma.$disconnect();
    await db.close();
  } catch (error) {
    console.warn("Cleanup warning:", error);
  }

  console.log("✅ Database cleaned up!");
});

beforeEach(async () => {
  // Get all table names excluding Prisma migration tables
  const result = await db.query<{ tablename: string }>(`
    SELECT tablename
    FROM pg_tables
    WHERE schemaname = 'public'
      AND tablename NOT LIKE '_prisma%'
    ORDER BY tablename;
  `);

  const tableNames = result.rows.map((row) => row.tablename);

  if (tableNames.length > 0) {
    // Disable foreign key checks, truncate all tables, re-enable
    await db.exec(`
      SET session_replication_role = replica;
      TRUNCATE TABLE ${tableNames
        .map((name) => `"${name}"`)
        .join(", ")} RESTART IDENTITY CASCADE;
      SET session_replication_role = DEFAULT;
    `);
  }
});
```

The beforeAll block reads in the migration SQL file, configures the PGLite client, applies the migration and then sets up a Prisma client and assigns them to globals we can access during our tests.

The afterAll block, which runs after all the tests, disconnects Prisma and closes the in-memory database.

Finally, our beforeEach above will ensure the data in the database has been truncated (removed) so that each test starts from a clean slate. This is incredibly important for the arrange-act-assert testing model.

You may in fact want seed data in the database for your tests. In that scenario, you should experiment with seeding during beforeAll and work out how you will clean up tables after tests to prevent polluting the data.
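One way to structure that is a re-seed helper shared between hooks. This is a sketch: the seed rows are illustrative, and the client type is deliberately minimal so the helper is easy to exercise in isolation:

```typescript
// Sketch: a re-seed helper typed against only the surface it needs, so the
// same function can restore seed rows after each truncation. Rows are
// illustrative, not from the demo schema's real data.
type UserRow = { email: string; name: string; role: "ADMIN" | "USER" };
type SeedClient = {
  user: { createMany(args: { data: UserRow[] }): Promise<unknown> };
};

export const seedRows: UserRow[] = [
  { email: "seed-admin@example.com", name: "Seed Admin", role: "ADMIN" },
  { email: "seed-user@example.com", name: "Seed User", role: "USER" },
];

export async function seedReferenceData(client: SeedClient): Promise<number> {
  await client.user.createMany({ data: seedRows });
  return seedRows.length;
}

// In the setup file, you might then call it from the hooks, e.g.:
// beforeEach(async () => { await seedReferenceData(global.testPrisma); });
```

Because the truncation beforeEach wipes every table, re-seeding needs to happen after each wipe, not just once per file.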

At this point, we are ready to run some tests.

Creating a file to write tests against

I opted to write a simple repository file that had some actions we can write tests for:

```typescript
import type { PrismaClient, Role } from "../generated/prisma";

export interface CreateUserData {
  email: string;
  name?: string;
  role?: Role;
}

export interface UpdateUserData {
  email?: string;
  name?: string;
  role?: Role;
}

export class UserRepository {
  private prisma: PrismaClient;

  constructor(prisma: PrismaClient) {
    this.prisma = prisma;
  }

  /**
   * Naively return all users with no limit
   */
  async getAllUsers() {
    return this.prisma.user.findMany();
  }

  /**
   * Get user by ID
   */
  async getUser(id: number) {
    return this.prisma.user.findUnique({
      where: { id },
    });
  }

  /**
   * Create a new user
   */
  async createUser(data: CreateUserData) {
    return this.prisma.user.create({
      data: {
        email: data.email,
        name: data.name,
        role: data.role || "USER",
      },
    });
  }

  /**
   * Update user
   */
  async updateUser(id: number, data: UpdateUserData) {
    return this.prisma.user.update({
      where: { id },
      data,
    });
  }

  /**
   * Delete user
   */
  async deleteUser(id: number) {
    try {
      await this.prisma.user.delete({
        where: { id },
      });
      return true;
    } catch (error) {
      // Handle user not found
      return false;
    }
  }
}
```

Writing tests for our file

As for tests, I wrote the following:

```typescript
import { beforeEach, describe, expect, it } from "vitest";
import { UserRepository } from "./user-repository";
import { faker } from "@faker-js/faker";
import { fail } from "node:assert";

describe("UserRepository", () => {
  let userRepo: UserRepository;

  beforeEach(() => {
    userRepo = new UserRepository(global.testPrisma);
  });

  describe("getUser", () => {
    it("should return user by ID", async () => {
      const user = await global.testPrisma.user.create({
        data: {
          email: faker.internet.email(),
          name: faker.person.fullName(),
        },
      });

      const result = await userRepo.getUser(user.id);

      if (!result) {
        fail("Result was null");
      }

      expect(result).not.toBeNull();
      expect(result.id).toBe(user.id);
      expect(result.email).toBe(user.email);
    });

    it("should return null for non-existent user", async () => {
      const result = await userRepo.getUser(99999);
      expect(result).toBeNull();
    });
  });

  describe("createUser", () => {
    it("should create user with all fields", async () => {
      const userData = {
        email: faker.internet.email(),
        name: faker.person.fullName(),
        role: "ADMIN" as const,
      };

      const result = await userRepo.createUser(userData);

      expect(result.email).toBe(userData.email);
      expect(result.name).toBe(userData.name);
      expect(result.role).toBe(userData.role);
      expect(result.id).toBeDefined();
    });

    it("should create user with default role when not specified", async () => {
      const userData = {
        email: faker.internet.email(),
        name: faker.person.fullName(),
      };

      const result = await userRepo.createUser(userData);

      expect(result.email).toBe(userData.email);
      expect(result.name).toBe(userData.name);
      expect(result.role).toBe("USER");
    });

    it("should create user without name (optional field)", async () => {
      const userData = {
        email: faker.internet.email(),
      };

      const result = await userRepo.createUser(userData);

      expect(result.email).toBe(userData.email);
      expect(result.name).toBeNull();
      expect(result.role).toBe("USER");
    });

    it("should throw error for duplicate email", async () => {
      const email = faker.internet.email();

      await userRepo.createUser({
        email,
        name: faker.person.fullName(),
      });

      await expect(
        userRepo.createUser({
          email, // Same email
          name: faker.person.fullName(),
        })
      ).rejects.toThrow();
    });
  });

  describe("updateUser", () => {
    it("should update user successfully", async () => {
      const user = await global.testPrisma.user.create({
        data: {
          email: faker.internet.email(),
          name: faker.person.fullName(),
          role: "USER",
        },
      });

      const updateData = {
        name: faker.person.fullName(),
        role: "ADMIN" as const,
      };

      const result = await userRepo.updateUser(user.id, updateData);

      if (!result) {
        fail("Update result was null");
      }

      expect(result.id).toBe(user.id);
      expect(result.name).toBe(updateData.name);
      expect(result.role).toBe(updateData.role);
      expect(result.email).toBe(user.email); // Should remain unchanged
    });

    it("should update only specified fields", async () => {
      const user = await global.testPrisma.user.create({
        data: {
          email: faker.internet.email(),
          name: faker.person.fullName(),
          role: "USER",
        },
      });

      const newName = faker.person.fullName();
      const result = await userRepo.updateUser(user.id, { name: newName });

      if (!result) {
        fail("Update result was null");
      }

      expect(result.name).toBe(newName);
      expect(result.email).toBe(user.email); // Unchanged
      expect(result.role).toBe(user.role); // Unchanged
    });

    it("should return null for non-existent user", async () => {
      await expect(
        userRepo.updateUser(99999, {
          name: faker.person.fullName(),
        })
      ).rejects.toThrow();
    });

    it("should throw error when updating to duplicate email", async () => {
      const existingUser = await global.testPrisma.user.create({
        data: {
          email: faker.internet.email(),
          name: faker.person.fullName(),
        },
      });

      const userToUpdate = await global.testPrisma.user.create({
        data: {
          email: faker.internet.email(),
          name: faker.person.fullName(),
        },
      });

      await expect(
        userRepo.updateUser(userToUpdate.id, {
          email: existingUser.email, // Duplicate email
        })
      ).rejects.toThrow();
    });
  });

  describe("deleteUser", () => {
    it("should delete user successfully", async () => {
      const user = await global.testPrisma.user.create({
        data: {
          email: faker.internet.email(),
          name: faker.person.fullName(),
        },
      });

      const result = await userRepo.deleteUser(user.id);

      expect(result).toBe(true);

      // Verify user was actually deleted
      const deletedUser = await global.testPrisma.user.findUnique({
        where: { id: user.id },
      });
      expect(deletedUser).toBeNull();
    });

    it("should return false for non-existent user", async () => {
      const result = await userRepo.deleteUser(99999);
      expect(result).toBe(false);
    });
  });

  describe("getUsers", () => {
    it("should return all users", async () => {
      const data = [
        { email: faker.internet.email(), name: faker.person.fullName() },
        { email: faker.internet.email(), name: faker.person.fullName() },
        { email: faker.internet.email(), name: faker.person.fullName() },
        { email: faker.internet.email(), name: faker.person.fullName() },
        { email: faker.internet.email(), name: faker.person.fullName() },
      ];

      await global.testPrisma.user.createMany({
        data,
      });

      const users = await userRepo.getAllUsers();

      // Naive test just checking database length
      expect(users.length).toBe(5);
    });
  });
});
```

Each of these runs some operations against the database. I've also opted to put a naive users.length check at the end of the tests to check for data pollution and cleanup.

Running the tests

Running one test itself would be fine, but we want to check the parallelism and isolation of our tests.

I wrote a script to create 100 copies of the tests in order to have them run in parallel and see the results.

The expectation here is that we will, throughout the duration of the tests, create 100 PGLite databases. This aligns with our understanding of how the setup file that we created is expected to operate.

Running vitest run starts our test runner. You should see it indicate that it found 100 files, and the UI will show that it is parallelising tests in groups.

Ultimately, it finishes with the following:

```
Test Files  100 passed (100)
     Tests  1300 passed (1300)
  Start at  07:35:40
  Duration  25.00s (transform 982ms, setup 5.07s, collect 14.88s, tests 206.29s, environment 12ms, prepare 7.53s)
```

Running 1300 tests in 25 seconds, where each file only has 13 tests, is seriously impressive when each test needs access to a running PostgreSQL database!

For what it is worth, the timing certainly depends on compute power. 25s might be arbitrary, but my computer is a 2023 MacBook Pro with mid-range specs!

If you follow the logs, you will also see when tests reset the database.

Mocking out the module instead

The approach we explored opts to create values that are added to the global namespace. Given that my approach relies on the global namespace to arrange the mock data first, and my tested module relies on dependency injection, this works fine for me.

For alternatives where you directly import the module, you may want to consider changing the setup test file to make use of module mocks instead.

For example:

```typescript
vi.mock("./path/to/your/module", async (importOriginal) => {
  const { PGlite } = await vi.importActual("@electric-sql/pglite");
  const { PrismaPGlite } = await vi.importActual("@prisma/adapter-pglite");
  const { PrismaClient } = await vi.importActual("@prisma/client");

  const db = new PGlite();
  const adapter = new PrismaPGlite(db);
  const prisma = new PrismaClient({ adapter });

  // Apply your generated schema
  const schemaSQL = getGeneratedSchema();
  await db.exec(schemaSQL);

  return { prisma, db };
});
```

In the above, we assume that ./path/to/your/module exports an object with prisma and db properties; adjust as required.

Be warned though: as covered earlier, over-mocking of modules can lead to testing purgatory. Use this sparingly and when it makes the most sense to do so.

Conclusion

This blog post explored how we can replace our PostgreSQL database with a full, in-memory version to significantly speed up our tests without compromising on test boundaries.

It enables us to bypass the network latency and flakiness that can occur when running end-to-end, without losing much test confidence, as we maintain a strong representation of interacting with the database.

Dennis O'Keeffe

Byron Bay, Australia
