A PostgreSQL metric exporter for Prometheus written in Rust
pg_exporter supports PostgreSQL 14 and newer.
PostgreSQL 13 and older are no longer supported.
pg_exporter is designed with a selective metrics approach:
- Modular collectors – Expose only the metrics you actually need instead of collecting everything by default.
- Avoid unnecessary metrics – Prevent exposing large numbers of unused metrics to Prometheus, reducing load and keeping monitoring efficient.
- Customizable collectors – Tailor the metrics to your specific requirements while maintaining compatibility with the official postgres_exporter.
- Low memory footprint – Designed to minimize memory usage and maximize efficiency while scraping metrics.
Install via Cargo:

```sh
cargo install pg_exporter
```

Or download the latest release from the releases page.
Container images are available at ghcr.io/nbari/pg_exporter:

```sh
# Using Docker
docker run -d \
  -e PG_EXPORTER_DSN="postgresql://postgres_exporter@postgres-host:5432/postgres" \
  -p 9432:9432 \
  ghcr.io/nbari/pg_exporter:latest

# Using Podman
podman run -d \
  -e PG_EXPORTER_DSN="postgresql://postgres_exporter@postgres-host:5432/postgres" \
  -p 9432:9432 \
  ghcr.io/nbari/pg_exporter:latest
```

Connecting to host PostgreSQL from a container:
- Docker Desktop (Mac/Windows): use `host.docker.internal` instead of `localhost`
- Podman: use `host.containers.internal` instead of `localhost`
- Linux with `--network=host`: use `localhost` directly
Example with host connection:

```sh
podman run -d \
  -e PG_EXPORTER_DSN="postgresql://postgres_exporter@host.containers.internal:5432/postgres" \
  -p 9432:9432 \
  ghcr.io/nbari/pg_exporter:latest
```

To run the exporter against the local socket directory (note the quotes — without them the shell interprets the `&` in the DSN):

```sh
pg_exporter --dsn 'postgresql:///postgres?host=/var/run/postgresql&user=postgres_exporter'
```

In pg_hba.conf you need to allow the `postgres_exporter` user to connect, for example:

```
local all postgres_exporter trust
```

Instead of trust authentication (which allows connections without a password), peer authentication is recommended for local connections. This requires creating a system user named postgres_exporter.
1. Create the system user:

   ```sh
   sudo useradd -r -d /nonexistent -s /usr/bin/nologin postgres_exporter
   ```

2. Configure pg_hba.conf to use peer authentication:

   ```
   local all postgres_exporter peer
   ```

3. Run the exporter as the postgres_exporter user:

   ```sh
   sudo -u postgres_exporter pg_exporter --dsn 'postgresql:///postgres?host=/var/run/postgresql&user=postgres_exporter'
   ```
This ensures that only the system user postgres_exporter can connect to the database as the postgres_exporter role, significantly improving security.
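The steps above assume the `postgres_exporter` database role already exists. A minimal sketch of creating it — the `pg_monitor` grant uses the built-in monitoring role available since PostgreSQL 10, which is one reasonable way to give the collectors read access to the statistics views:

```sql
-- Hypothetical role setup for the exporter (adjust grants to your needs):
CREATE ROLE postgres_exporter LOGIN;
GRANT pg_monitor TO postgres_exporter;
```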
You can also specify a custom port, for example 9187:

```sh
pg_exporter --dsn postgresql://postgres_exporter@localhost:5432/postgres --port 9187
```

pg_exporter supports the standard PostgreSQL environment variables for connection configuration. This is useful when you want to keep sensitive information such as passwords out of the DSN and command-line arguments.
Supported variables include:

- `PGHOST`
- `PGPORT`
- `PGUSER`
- `PGPASSWORD`
- `PGDATABASE`
Example usage with PGPASSWORD:

```sh
PGPASSWORD=secret pg_exporter --dsn postgresql://postgres@localhost:5432/postgres
```

You can also omit parts of the DSN and rely on environment variables:

```sh
PGUSER=postgres PGPASSWORD=secret pg_exporter --dsn postgresql://localhost:5432/postgres
```

For Docker Swarm or Kubernetes environments, you can use PG_EXPORTER_DSN_FILE to read the DSN from a file (e.g., Docker secrets):
```yaml
# docker-compose.yml for Docker Swarm
services:
  pg_exporter:
    image: ghcr.io/nbari/pg_exporter:latest
    environment:
      PG_EXPORTER_DSN_FILE: /run/secrets/pg_dsn
    secrets:
      - pg_dsn
    ports:
      - "9432:9432"

secrets:
  pg_dsn:
    external: true
```

Create the secret:

```sh
echo "postgresql://postgres_exporter:password@postgres:5432/postgres" | docker secret create pg_dsn -
```

Priority order: `PG_EXPORTER_DSN_FILE` > `PG_EXPORTER_DSN` > `--dsn` flag > default value
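The file-based pattern can be exercised locally without Docker; a minimal sketch (the `/tmp` path is illustrative):

```sh
# Write the DSN where the exporter will look for it, then point
# PG_EXPORTER_DSN_FILE at that path (as a mounted Docker secret would).
printf 'postgresql://postgres_exporter:password@postgres:5432/postgres\n' > /tmp/pg_dsn
export PG_EXPORTER_DSN_FILE=/tmp/pg_dsn

# The exporter reads the DSN from this file at startup; shown here with cat:
cat "$PG_EXPORTER_DSN_FILE"
```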
The following collectors are available:

- `--collector.default` – default
- `--collector.activity` – activity
- `--collector.database` – database
- `--collector.vacuum` – vacuum
- `--collector.locks` – locks
- `--collector.stat` – stat
- `--collector.replication` – replication
- `--collector.index` – index
- `--collector.statements` – statements: query performance metrics from `pg_stat_statements` (see detailed guide)
- `--collector.tls` – tls: SSL/TLS certificate monitoring and connection encryption stats (PostgreSQL 14+)
- `--collector.exporter` – exporter: exporter self-monitoring (process metrics, scrape performance, cardinality tracking)
You can enable a collector with `--collector.<name>` or disable it with `--no-collector.<name>`. For example, to disable the vacuum collector:

```sh
pg_exporter --dsn 'postgresql:///postgres?host=/var/run/postgresql&user=postgres_exporter' --no-collector.vacuum
```

Collector-specific runtime options use the `<collector>.<option>` long-flag format. For example, to reduce pg_stat_statements cardinality and scrape cost:

```sh
pg_exporter --collector.statements --statements.top-n 10
```

The statements collector defaults to `--statements.top-n 25` if not specified. You can also set this via the `PG_EXPORTER_STATEMENTS_TOP_N` environment variable.
For local observability testing with Prometheus and Grafana, a practical flow is:

```sh
just postgres
cargo run -- --collector.statements --collector.activity --collector.locks --collector.database -v
just metrics
just workload duration=120 clients=10
```

`just workload` reuses pgbench to populate pg_stat_statements and generate live activity while you inspect /metrics, Prometheus, or Grafana.
For vacuum-specific testing, use:

```sh
just vacuum-workflow scale=20 rounds=5 sample_mod=5
```

`just vacuum-workflow` creates dead tuples in pgbench_accounts, shows the table-level autovacuum pressure metrics, and then runs a manual VACUUM (VERBOSE, ANALYZE) so the vacuum collector and dashboard have real maintenance activity to display.
For PostgreSQL-managed cleanup testing, use:

```sh
just autovacuum-workflow scale=20 rounds=5 sample_mod=5 timeout=180
```

`just autovacuum-workflow` creates dead tuples, temporarily lowers the local autovacuum trigger for pgbench_accounts, shortens autovacuum_naptime, and then waits for PostgreSQL autovacuum to clean the table without a manual VACUUM.
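The knobs that workflow touches can also be set by hand; an illustrative sketch (the values here are examples, not the workflow's actual settings):

```sql
-- Lower the per-table trigger so autovacuum fires on fewer dead tuples:
ALTER TABLE pgbench_accounts SET (autovacuum_vacuum_scale_factor = 0.01);

-- Shorten the naptime so the autovacuum launcher checks more often,
-- then reload the configuration:
ALTER SYSTEM SET autovacuum_naptime = '10s';
SELECT pg_reload_conf();
```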
For reclaiming physical space:

- `VACUUM` reclaims dead tuples for PostgreSQL reuse, but usually does not return table space to the OS.
- `ANALYZE` only refreshes planner statistics; it does not reclaim space.
- `pg_repack` is the preferred low-downtime option when a large table or index remains bloated and you need to compact it.
- `VACUUM FULL` rewrites the table and can return space to the OS, but it takes an `ACCESS EXCLUSIVE` lock and should be planned in a maintenance window.
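To spot candidates for these options directly in SQL, the standard statistics view gives a rough picture; a sketch using `pg_stat_user_tables`:

```sql
-- Tables with the most dead tuples (rough bloat indicators):
SELECT relname,
       n_dead_tup,
       n_live_tup,
       round(100.0 * n_dead_tup / nullif(n_live_tup + n_dead_tup, 0), 1) AS dead_pct
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC
LIMIT 10;
```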
In Grafana, the fastest way to spot likely pg_repack or VACUUM FULL candidates is the Vacuum & Bloat Pressure row, especially:

- Top Repack Candidates by Estimated Dead Space
- Top Tables by Estimated Bloat Ratio
- Top Tables by Table Size
These collectors are enabled by default:

- `default`
- `activity`
- `vacuum`
pg_exporter is designed to be resilient to PostgreSQL outages:
- High Availability – The exporter starts and stays available even if the database is down.
- HTTP 200 Always – The `/metrics` endpoint always responds with HTTP 200 to avoid triggering unnecessary Prometheus "down" alerts for the exporter itself.
- `pg_up` Metric – Use the `pg_up` metric (1 for up, 0 for down) to monitor database connectivity.
- Metric Omission – When the database is unreachable, database-dependent metrics are omitted from the output rather than being reported as zero.
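Because the exporter target stays up by design, alerting should key off `pg_up` rather than the Prometheus `up` metric; a hypothetical rule (group name, thresholds, and labels are examples):

```yaml
# Prometheus alerting rule sketch: page on database connectivity,
# not on the exporter target (which responds even when the DB is down).
groups:
  - name: pg_exporter
    rules:
      - alert: PostgreSQLDown
        expr: pg_up == 0
        for: 1m
        labels:
          severity: critical
        annotations:
          summary: "PostgreSQL is unreachable on {{ $labels.instance }}"
```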
For systemd deployments, ensure the exporter starts after PostgreSQL to avoid early boot races:

```ini
[Unit]
After=network-online.target postgresql.service
Wants=network-online.target
```

If your distribution uses a versioned unit name (for example postgresql-16.service), replace postgresql.service accordingly.
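Combining this ordering with the peer-authentication setup described earlier, a complete unit might look like the following sketch (the binary path and unit layout are assumptions; adjust to your install):

```ini
# /etc/systemd/system/pg_exporter.service (example)
[Unit]
Description=pg_exporter Prometheus exporter for PostgreSQL
After=network-online.target postgresql.service
Wants=network-online.target

[Service]
# Run as the system user so peer authentication maps to the DB role
User=postgres_exporter
ExecStart=/usr/local/bin/pg_exporter --dsn 'postgresql:///postgres?host=/var/run/postgresql&user=postgres_exporter'
Restart=on-failure

[Install]
WantedBy=multi-user.target
```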
The project is structured as follows:

```
├── bin
├── cli
├── collectors
├── exporter
└── lib.rs
```

All the collectors live in the collectors directory. Each collector has its own subdirectory, making it easy to manage and extend.

```
collectors
├── activity
│   ├── connections.rs
│   ├── mod.rs
│   └── wait.rs
├── config.rs
├── database
│   ├── catalog.rs
│   ├── mod.rs
│   ├── README.md
│   └── stats.rs
├── default
│   ├── mod.rs
│   ├── postmaster.rs
│   ├── settings.rs
│   └── version.rs
├── locks
│   ├── mod.rs
│   └── relations.rs
├── mod.rs            <-- main file to register collectors
├── register_macro.rs
├── registry.rs
├── stat
│   ├── mod.rs
│   └── user_tables.rs
├── util.rs
└── vacuum
    ├── mod.rs
    ├── progress.rs
    └── stats.rs
```

In the mod.rs file inside the collectors directory, you can see how each collector is registered. This modular approach makes it easy to add or remove collectors as needed.
Each collector can then be extended with more specific metrics. For example, the vacuum collector has two files, progress.rs and stats.rs, which allows for better organization, separation of concerns, and testability within the collector (or that is the plan).
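The modular-collector idea can be sketched in plain Rust. Note this is an illustrative shape only, not pg_exporter's actual API — the trait, struct, and function names here are invented for the example:

```rust
// Hypothetical sketch: each collector module implements a common trait
// and is registered in one place, mirroring the mod.rs registration idea.
trait Collector {
    fn name(&self) -> &'static str;
    fn enabled_by_default(&self) -> bool;
}

struct Vacuum;
impl Collector for Vacuum {
    fn name(&self) -> &'static str { "vacuum" }
    fn enabled_by_default(&self) -> bool { true }
}

struct Statements;
impl Collector for Statements {
    fn name(&self) -> &'static str { "statements" }
    fn enabled_by_default(&self) -> bool { false }
}

// Central registry: adding or removing a collector is a one-line change.
fn registry() -> Vec<Box<dyn Collector>> {
    vec![
        Box::new(Vacuum) as Box<dyn Collector>,
        Box::new(Statements),
    ]
}

fn main() {
    for c in registry() {
        println!("{} enabled_by_default={}", c.name(), c.enabled_by_default());
    }
}
```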
The project includes unit tests for each collector and integration tests for the exporter as a whole. Run them with:

```sh
just test
```

This requires `just` to be installed; see the just documentation.
For direct checks, these commands are also part of the normal validation flow:

```sh
cargo fmt --all -- --check
just clippy
```

To enable OpenTelemetry, set the OTEL_EXPORTER_OTLP_ENDPOINT environment variable, for example:

```sh
OTEL_EXPORTER_OTLP_ENDPOINT="http://localhost:4317"
```

The exporter will then send traces to the specified endpoint.
To run PostgreSQL and Jaeger locally:

```sh
just postgres
just jaeger
just watch
```

For more verbose traces, add `-v`, for example:

```sh
cargo watch -x 'run -- --collector.vacuum -vv'
```

Open Jaeger at http://localhost:16686 and select the pg_exporter service to see the traces.
We welcome contributions of all kinds.
- Read the Agent & Contributor Contract. It contains repository-specific rules for AI and human contributors, including testing, safety, and release-flow expectations.
- Read the Development Guide. It covers local PostgreSQL setup, test workflows, and safe collector patterns.
- Run tests: `just test` runs the standard validation flow for this crate.
- Formatting: run `cargo fmt --all -- --check`.
- Linting: run `just clippy` before submitting changes.
- Check recent release notes in CHANGELOG.md so documentation and release notes stay aligned.
Related docs:
This project is a work in progress. Your feedback, suggestions, and contributions are always welcome!