PGlite API

Main Constructor

The main constructor is imported as:

ts
import { PGlite } from '@electric-sql/pglite'

The preferred way to create a PGlite instance is with the PGlite.create() static method that returns a promise, resolving to the new PGlite instance.

await PGlite.create(dataDir: string, options: PGliteOptions)
await PGlite.create(options: PGliteOptions)

There are a couple of advantages to using the static method:

  • This awaits the .waitReady promise, ensuring that the database has been fully initialised.
  • When using TypeScript and extensions, the returned PGlite instance will have the extensions namespace on its type. This is not possible with the standard constructor due to TypeScript limitations.

A new PGlite instance can also be created using the new PGlite() constructor.

new PGlite(dataDir: string, options: PGliteOptions)
new PGlite(options: PGliteOptions)
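
For example, a minimal sketch of both construction styles (the data directory path is hypothetical):

ts
import { PGlite } from '@electric-sql/pglite'

// Preferred: the static method awaits initialisation for you
const pg = await PGlite.create('./path/to/pgdata')

// Constructor form: initialises in the background (see .waitReady)
const pg2 = new PGlite({ dataDir: 'memory://' })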

dataDir

Path to the directory for storing the Postgres database. You can provide a URI scheme for various storage backends:

  • file:// or unprefixed
    File system storage, available in Node and Bun.
  • idb://
    IndexedDB storage, available in the browser.
  • memory://
    In-memory ephemeral storage, available in all platforms.
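
As a quick sketch, one constructor call per backend (paths and names are illustrative):

ts
import { PGlite } from '@electric-sql/pglite'

// Node or Bun: file system storage
const pgFile = new PGlite('./path/to/pgdata')

// Browser: IndexedDB storage
const pgIdb = new PGlite('idb://my-database')

// All platforms: ephemeral in-memory storage
const pgMem = new PGlite('memory://')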

options

  • dataDir?: string
    The directory in which to store the Postgres database when not provided as the first argument.

  • debug?: 1-5
    The Postgres debug level. Logs are sent to the console.

  • relaxedDurability?: boolean
    Under relaxed durability mode, PGlite will not wait for flushes to storage to complete after each query before returning results. This is particularly useful when using the IndexedDB file system.

  • fs?: Filesystem
    The alternative to providing a dataDir with a filesystem prefix is to initialise a Filesystem yourself and provide it here. See Filesystems.

  • loadDataDir?: Blob | File
    A tarball of a PGlite datadir to load when the database starts. This should be a tarball produced from the related .dumpDataDir() method.

  • extensions?: Extensions
    An object containing the extensions you wish to load.

  • username?: string
    The username of the user to connect to the database as. Permissions will be applied in the context of this user.

  • database?: string
    The database from the Postgres cluster within the dataDir to connect to.

  • initialMemory?: number
    The initial amount of memory in bytes to allocate for the PGlite instance. PGlite will grow the memory automatically, but if you have a particularly large database you can set this higher to prevent the pause during memory growth.

  • wasmModule?: WebAssembly.Module
    A precompiled WASM module to use instead of downloading the default version, or when using a bundler that either can, or requires, loading the WASM module with an ESM import.

  • fsBundle?: Blob | File
    A filesystem bundle to use instead of downloading the default version. This is useful in restricted environments such as an edge worker.

  • parsers?: ParserOptions
    An object of type { [pgType: number]: (value: string) => any; } mapping Postgres data type IDs to parser functions. For convenience, the pglite package exports constants for the most common Postgres types.

    ts
    import { PGlite, types } from '@electric-sql/pglite'
    
    const pg = await PGlite.create({
      parsers: {
        [types.TEXT]: (value) => value.toUpperCase(),
      },
    })
  • serializers?: SerializerOptions
    An object of type { [pgType: number]: (value: any) => string; } mapping Postgres data type IDs to serializer functions.

    ts
    import { PGlite, types } from '@electric-sql/pglite'
    
    const pg = await PGlite.create({
      serializers: {
        [types.NUMERIC]: (value) => value.toString(),
      },
    })
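
As a combined sketch, several of the options above in one call (values are illustrative, not recommendations):

ts
import { PGlite } from '@electric-sql/pglite'

const pg = await PGlite.create({
  dataDir: 'idb://my-database',
  relaxedDurability: true, // don't block on flushes to IndexedDB
  debug: 1, // log Postgres debug output to the console
  initialMemory: 256 * 1024 * 1024, // pre-allocate 256MiB
})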

options.extensions

PGlite and Postgres extensions are loaded into a PGlite instance on start, and can include a WASM build of a Postgres extension, a PGlite client plugin, or both.

The options.extensions parameter is an object of namespace: extension pairings. The namespace is used to expose the PGlite client plugin included in the extension. An example of this is the live queries extension.

ts
import { PGlite } from '@electric-sql/pglite'
import { live } from '@electric-sql/pglite/live'
import { vector } from '@electric-sql/pglite/vector'

const pg = await PGlite.create({
  extensions: {
    live, // Live query extension, a PGlite client plugin
    vector, // Postgres pgvector extension
  },
})

// The `live` namespace is added by the use of the
// `live` key in the `extensions` object.
pg.live.query('...')

For information on how to develop a PGlite extension see Extension Development.

Methods

query

.query<T>(query: string, params?: any[], options?: QueryOptions): Promise<Results<T>>

Execute a single statement, optionally with parameters.

Uses the extended query Postgres wire protocol.

Returns a single result object.

Example
ts
await pg.query('INSERT INTO test (name) VALUES ($1);', ['test'])
// { affectedRows: 1 }

Query Options

The query and exec methods take an optional options object with the following parameters:

  • rowMode: "object" | "array"
    The returned row object type, either an object of fieldName: value mappings or an array of positional values. Defaults to "object"; a sketch follows this list.
  • parsers: ParserOptions
    An object mapping Postgres data type IDs to parser functions. This option overrides any parsers set at the instance level.
    ts
    import { types } from '@electric-sql/pglite'
    await pg.query(`SELECT * FROM test WHERE name = $1;`, ['test'], {
      parsers: {
        [types.TEXT]: (value) => value.toUpperCase(),
      },
    })
  • serializers: SerializerOptions
    An object mapping Postgres data type IDs to serializer functions. This option overrides any serializers set at the instance level.
    ts
    import { types } from '@electric-sql/pglite'
    await pg.query(`INSERT INTO test (numeric) VALUES ($1);`, [100n], {
      serializers: {
        [types.NUMERIC]: (value: number | bigint) => value.toString(),
      },
    })
  • blob: Blob | File
    Attach a Blob or File object to the query; it can be used with a COPY FROM command via the virtual /dev/blob device. See importing and exporting.
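
For example, a minimal sketch of rowMode returning positional values:

ts
const ret = await pg.query('SELECT id, name FROM test;', [], {
  rowMode: 'array',
})
// ret.rows is e.g. [[1, 'test']] rather than [{ id: 1, name: 'test' }]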

sql

.sql<T>(strings: TemplateStringsArray, ...params: any[]): Promise<Results<T>>

You can also use the sql`` tagged template literal to create queries:

ts
await pg.sql`SELECT * FROM test WHERE name = ${'test'}`
// equivalent of pg.query('SELECT * FROM test WHERE name = $1', ['test'])

See the templating section for more details on how to use the tagged template literal along with various helpers for more complex cases.

exec

.exec(query: string, options?: QueryOptions): Promise<Array<Results>>

Execute one or more statements. Note that parameters are not supported.

This is useful for applying database migrations, or running multi-statement SQL that doesn't use parameters.

Uses the simple query Postgres wire protocol.

Returns an array of result objects, one for each statement.

Example
ts
await pg.exec(`
  CREATE TABLE IF NOT EXISTS test (
    id SERIAL PRIMARY KEY,
    name TEXT
  );
  INSERT INTO test (name) VALUES ('test');
  SELECT * FROM test;
`)
// [
//   { affectedRows: 0 },
//   { affectedRows: 1 },
//   {
//     rows: [
//       { id: 1, name: 'test' }
//     ],
//     affectedRows: 0,
//     fields: [
//       { name: 'id', dataTypeID: 23 },
//       { name: 'name', dataTypeID: 25 },
//     ]
//   }
// ]

transaction

.transaction<T>(callback: (tx: Transaction) => Promise<T>)

To start an interactive transaction, pass a callback to the transaction method. It is passed a Transaction object which can be used to perform operations within the transaction.

The transaction will be committed when the promise returned from your callback resolves, and automatically rolled back if the promise is rejected.

Transaction objects
  • tx.query<T>(query: string, params?: any[], options?: QueryOptions): Promise<Results<T>>
    The same as the main .query method.
  • tx.sql<T>(strings: TemplateStringsArray, ...params: any[]): Promise<Results<T>>
    The same as the main .sql template string method.
  • tx.exec(query: string, options?: QueryOptions): Promise<Array<Results>>
    The same as the main .exec method.
  • tx.rollback()
    Roll back and close the current transaction; a sketch follows the example below.
Example
ts
await pg.transaction(async (tx) => {
  await tx.query(
    'INSERT INTO test (name) VALUES ($1);',
    [ 'test' ]
  );
  return await tx.query('SELECT * FROM test;');
});
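
A sketch of tx.rollback() aborting a transaction early; shouldAbort stands in for any application check:

ts
await pg.transaction(async (tx) => {
  await tx.query('INSERT INTO test (name) VALUES ($1);', ['test'])
  if (shouldAbort) {
    // Hypothetical condition: discard the insert and end the transaction
    await tx.rollback()
    return
  }
})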

close

.close(): Promise<void>

Close the database, ensuring it is shut down cleanly.

listen

.listen(channel: string, callback: (payload: string) => void): Promise<() => Promise<void>>

Subscribe to a pg_notify channel. The callback will receive the payload from the notification.

Returns a function that unsubscribes from the channel.

Example
ts
const unsub = await pg.listen('test', (payload) => {
  console.log('Received:', payload)
})
await pg.query("NOTIFY test, 'Hello, world!'")

unlisten

.unlisten(channel: string, callback?: (payload: string) => void): Promise<void>

Unsubscribe from the channel. If a callback is provided, it removes only that callback from the subscription. When no callback is provided, it unsubscribes all callbacks for the channel.
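
For example, a sketch contrasting removing one callback with removing them all:

ts
const callback = (payload: string) => {
  console.log('Received:', payload)
}
await pg.listen('test', callback)

// Remove only this callback from the subscription
await pg.unlisten('test', callback)

// Or unsubscribe all callbacks for the channel
await pg.unlisten('test')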

onNotification

onNotification(callback: (channel: string, payload: string) => void): () => void

Add an event handler for all notifications received from Postgres.

Note: This does not subscribe to the notification; you will need to manually subscribe with LISTEN channel_name.
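
A minimal sketch pairing onNotification with a manual LISTEN:

ts
const unsub = pg.onNotification((channel, payload) => {
  console.log(`${channel}: ${payload}`)
})
await pg.exec('LISTEN test;')
await pg.query("NOTIFY test, 'Hello, world!'")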

offNotification

offNotification(callback: (channel: string, payload: string) => void): void

Remove an event handler for all notifications received from Postgres.

dumpDataDir

dumpDataDir(compression?: 'auto' | 'gzip' | 'none'): Promise<File | Blob>

Dump the Postgres datadir to a gzipped tarball.

The compression option defaults to auto, which uses compression where possible. You can explicitly opt in or out of compression with gzip and none. If you require compression with gzip and the environment doesn't support a suitable compression API, an error will be thrown.

This can then be used in combination with the loadDataDir option when starting PGlite to load a dumped database from storage.

NOTE

The datadir dump may not be compatible with other Postgres versions; it is only designed for importing back into PGlite.
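
A minimal round-trip sketch, assuming an existing pg instance:

ts
// Dump the current database to a gzipped tarball
const dump = await pg.dumpDataDir('gzip')

// Start a new instance pre-populated from the dump
const pg2 = await PGlite.create({ loadDataDir: dump })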

execProtocol

execProtocol(message: Uint8Array, options?: ExecProtocolOptions): Promise<Array<[BackendMessage, Uint8Array]>>

Execute a Postgres wire protocol message, returning an array of tuples, one for each wire protocol result message, consisting of:

  1. The parsed message object; see pg-protocol.
  2. The raw Uint8Array for that message.

This API is safe to use alongside the other PGlite query APIs as it handles errors, transactions, and notifications.

execProtocolRaw

execProtocolRaw(message: Uint8Array, options?: ExecProtocolOptions): Promise<Uint8Array>

Execute a Postgres wire protocol message, returning the unparsed result as a Uint8Array. This includes all wire protocol result messages emitted in response to your message and will require external parsing. This is the lowest-level API exposed by PGlite and can be used to interact with a PGlite database using existing Postgres clients. It is likely that you will want to use something such as pg-gateway, which uses this internally to expose the database on a TCP socket.

WARNING

execProtocolRaw bypasses PGlite's protocol wrappers that manage error/notice messages, transactions, and notification listeners. Only use it if you need to bypass these wrappers and don't intend to use the above features; execProtocol is a safer alternative.

describeQuery

.describeQuery(query: string, options?: QueryOptions): Promise<DescribeQueryResult>

Get type information about a query's parameters and result fields without executing it. This is useful for understanding the expected parameter types and the structure of the results that would be returned. It can be used to build a type inference system for queries using PGlite.

Returns an object with:

  • queryParams: Array<{ dataTypeID: number, serializer: Function }>
    Information about each parameter's Postgres type ID and the serializer function used to convert JavaScript values to Postgres format.

  • resultFields: Array<{ name: string, dataTypeID: number, parser: Function }>
    Information about each result field including its name, Postgres type ID, and the parser function used to convert Postgres values to JavaScript format.

Example
ts
await pg.describeQuery('SELECT * FROM test WHERE name = $1')

// returns:
{
  queryParams: [{ dataTypeID: 25, serializer: Function }],
  resultFields: [{ name: "id", dataTypeID: 23, parser: Function }],
}

Properties

ready

.ready boolean (read only)

Whether the database is ready to accept queries.

closed

.closed boolean (read only)

Whether the database is closed and no longer accepting queries.

waitReady

.waitReady Promise<void>

Promise that resolves when the database is ready to use.

NOTE

Query methods will wait for the waitReady promise to resolve if called before the database has fully initialised, and so it is not necessary to wait for it explicitly.
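
For example, when using the new PGlite() constructor directly you can still await it yourself:

ts
const pg = new PGlite('memory://')
await pg.waitReady
// The database is now fully initialised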

Results<T> Objects

Result objects have the following properties:

  • rows: Row<T>[]
    The rows returned by the query.

  • affectedRows?: number
    Count of the rows affected by the query. Note that this is not the count of rows returned; it is the number of rows in the database changed by the query.

  • fields: { name: string; dataTypeID: number }[]
    Field name and Postgres data type ID for each field returned.

  • blob?: Blob
    A Blob containing the data written to the virtual /dev/blob device by a COPY TO command. See /dev/blob.

Row<T> Objects

Row objects are a key/value mapping for each row returned by the query.

The .query<T>() method can take a TypeScript type describing the expected shape of the returned rows.

NOTE

These types are not validated at run time; the result is only cast to the provided type.
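
For example, a sketch with a hypothetical row type for the test table used throughout these docs:

ts
// Hypothetical shape of the rows in the `test` table
interface TestRow {
  id: number
  name: string
}

const ret = await pg.query<TestRow>('SELECT * FROM test;')
// ret.rows is typed as TestRow[]; the cast is not validated at runtime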

Tagged Template Queries

PGlite has support for using tagged template literals to construct SQL queries via the .sql`` method on both the main PGlite instance and Transaction objects. Substituted template values are automatically converted to query parameters, and you receive the same results as calling the query API directly.

ts
const name = getName()
await pg.sql`SELECT * FROM test WHERE name = ${name}`

There are helpers in the template export to create more complex and parametrizable templated queries:

  • identifier``
    Tag identifiers like column and table names to escape them with quotes and prevent them from becoming parameters.
  • raw``
    Tag any string that you want to avoid being interpreted as a parameter. This still allows regular string templating.
  • sql``
    Tag nested template literals to preserve behaviour and parametrization. This allows you to create reusable templating utilities.
  • query``
    Use as the top-level tag to generate the parametrized query without passing it to the query API. Returns a { query: string, params: any[] } object with the parametrized query and any parameters used in the query.

If you require additional configuration or complex binary parameters, it's best to use the query method.

Example
ts
import { identifier, raw, sql, query } from '@electric-sql/pglite/template'

await pg.sql`SELECT * FROM ${identifier`test`} WHERE name = ${'test'}`
// equivalent of pg.query('SELECT * FROM "test" WHERE name = $1', ['test'])

const filterStmt = (filterVar?: string) =>
  filterVar ? sql`WHERE name = ${filterVar}` : raw`WHERE 1=1`

await pg.sql`SELECT * FROM test ${filterStmt('test')}`
// equivalent of pg.query('SELECT * FROM test WHERE name = $1', ['test'])
await pg.sql`SELECT * FROM test ${filterStmt()}`
// equivalent of pg.query('SELECT * FROM test WHERE 1=1', [])

query`SELECT * FROM ${identifier`test`} WHERE name = ${'test'}`
// { query: 'SELECT * FROM "test" WHERE name = $1', params: ['test'] }

/dev/blob

PGlite has support for importing and exporting via the SQL COPY TO/FROM command by using a virtual /dev/blob device.

To import a file, pass the File or Blob in the query options as blob, and copy from the /dev/blob device.

ts
await pg.query("COPY my_table FROM '/dev/blob';", [], {
  blob: MyBlob,
})

To export a table or query to a file, you just need to write to the /dev/blob device; the file will be returned as blob on the query results:

ts
const ret = await pg.query("COPY my_table TO '/dev/blob';")
// ret.blob is a `Blob` object with the data from the copy.