Slow Queries

Queries taking longer than expected. Start with pressure metrics, then narrow down.

Diagnosis path

Start here when queries are slow. First determine the scope: are all queries slow, or only specific ones?

All queries:
  • Check permit pressure (> 0.8) → scale up or tune slow queries
  • Check DB pool saturation → increase pool size
  • Check JVM memory pressure → increase heap

Specific queries:
  • Check pg_stat_statements → identify slow SQL
  • Check /observe/hints → create missing indexes
  • Check query complexity → simplify the filter
# Check pressure metrics
curl -s http://localhost:8080/observe/pressure | jq
# Check health status
curl -s http://localhost:8080/observe/doctor | jq '.checks[] | select(.status != "healthy")'
Check permit pressure:

curl -s http://localhost:8080/observe/pressure | jq '.query_permit'
Value      Meaning    Action
< 0.8      Normal     Not the bottleneck
0.8-0.95   High       Queries queuing; check database
> 0.95     Saturated  Increase database.channelPoolSize
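For scripted alerting, the thresholds above can be wrapped in a small helper. This is a sketch using only the cutoffs documented in the table; the function name is illustrative, not part of IdentityScribe:

```shell
# Classify a query_permit reading against the documented thresholds.
# classify_pressure is an illustrative helper name.
classify_pressure() {
  awk -v p="$1" 'BEGIN {
    if (p + 0 > 0.95)      print "saturated: increase database.channelPoolSize"
    else if (p + 0 >= 0.8) print "high: queries queuing, check database"
    else                   print "normal: not the bottleneck"
  }'
}

classify_pressure 0.97
# prints: saturated: increase database.channelPoolSize
```

In practice the reading would come from `curl -s http://localhost:8080/observe/pressure | jq '.query_permit'`.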
Check the database connection pool:

curl -s http://localhost:8080/metrics | grep 'hikari.*pending'

If hikaricp_connections_pending > 0:

  • Queries are waiting for database connections
  • Increase pool size or reduce query concurrency
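If jq is unavailable, the Prometheus text format from /metrics can be parsed with awk alone. The sketch below runs against an inline sample; the metric name is from this guide, the values are illustrative:

```shell
# Pull the pending-connections gauge out of Prometheus-format metrics text.
# In practice the sample would come from: curl -s http://localhost:8080/metrics
sample='hikaricp_connections_active 20
hikaricp_connections_pending 3'

pending=$(printf '%s\n' "$sample" | awk '$1 == "hikaricp_connections_pending" { print $2 }')
echo "$pending"
# prints: 3
```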
Check JVM memory pressure:

curl -s http://localhost:8080/observe/pressure | jq '.jvm_memory'

If > 0.85: Increase heap size (-Xmx)
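One possible way to raise the heap at startup, mirroring the environment-variable launch style used elsewhere in this guide. Note: -Xmx is the standard JVM max-heap flag, but whether identity-scribe reads JAVA_OPTS is an assumption; check your launcher's documentation.

```shell
# -Xmx4g is an illustrative value; JAVA_OPTS being honored by the
# identity-scribe launcher is an assumption, not documented here.
JAVA_OPTS="-Xmx4g" ./identity-scribe
```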

Check 1: Slow query analysis (pg_stat_statements)


If pg_stat_statements is enabled, the built-in monitoring shows slow queries:

curl -s http://localhost:8080/observe/stats/queries | jq

Example response:

{
  "queries": [
    {
      "queryId": "1234567890",
      "calls": 15420,
      "meanMs": 45.2,
      "totalMs": 697344.0,
      "rows": 30840,
      "queryPreview": "SELECT * FROM entries WHERE dn = $1..."
    }
  ],
  "timestamp": "2026-01-24T12:00:00Z"
}
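As a quick sanity check on these statistics, meanMs should equal totalMs divided by calls. For the sample entry:

```shell
# 697344.0 ms total over 15420 calls, rounded to one decimal place
awk 'BEGIN { printf "%.1f\n", 697344.0 / 15420 }'
# prints: 45.2
```

This matches the meanMs field above.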

If you see "error": "pg_stat_statements extension not available", see Enabling pg_stat_statements below.

Also check the hints endpoint for missing index recommendations:

curl -s http://localhost:8080/observe/hints | jq '.hints[] | select(.severity == "warning")'

Add the suggested indexes.

Enable tracing to identify slow stages:

SCRIBE_TRACES_ENABLED=true ./identity-scribe

Check spans for bottlenecks:

  • Query.Execute — Database execution time
  • Query.Plan — Planning overhead (complex filters)
  • Query.Map — Result mapping (too many attributes)
Common causes and fixes:

Problem                 Solution
High permit pressure    Increase database.channelPoolSize
Pending DB connections  Increase database.maxPoolSize
Memory pressure         Increase -Xmx heap size
Missing indexes         Add indexes per /observe/hints
Complex filters         Add a type constraint, simplify the filter
Large result sets       Use pagination (paged results, cursors)

pg_stat_statements provides query execution statistics. IdentityScribe’s monitoring UI and /observe/stats/queries endpoint use it when available.

Add to postgresql.conf:

shared_preload_libraries = 'pg_stat_statements'
pg_stat_statements.track = all

Then restart PostgreSQL and create the extension:

CREATE EXTENSION IF NOT EXISTS pg_stat_statements;
On managed PostgreSQL services:

Provider          How to enable
AWS RDS           Add pg_stat_statements to the parameter group's shared_preload_libraries, then reboot the instance
Azure Database    Enable via Server Parameters → shared_preload_libraries
Google Cloud SQL  Enable via Database Flags → cloudsql.enable_pg_stat_statements
DigitalOcean      Enabled by default on managed databases

After enabling, verify the extension is active:

curl -s http://localhost:8080/observe/stats/queries | jq

You’ll see query statistics on success. The "error": "pg_stat_statements extension not available" message means the extension isn’t installed or loaded.
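A verification script can branch on the two response shapes described above. The sample payload below is the error case from this guide; in practice resp would come from the curl call shown earlier:

```shell
# In practice: resp=$(curl -s http://localhost:8080/observe/stats/queries)
resp='{"error": "pg_stat_statements extension not available"}'

if printf '%s' "$resp" | grep -q '"error"'; then
  echo "pg_stat_statements not active"
else
  echo "query statistics available"
fi
# prints: pg_stat_statements not active
```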

Installing pg_stat_statements after initial deployment? The v_slow_queries view may need a refresh — it checks for the extension during migration.

To refresh the view, either:

  1. Re-run the migration by restarting IdentityScribe (Flyway will detect and apply pending migrations)
  2. Manually recreate the view by executing the view DDL from the migration file

If the endpoint returns an empty array despite having slow queries, this view refresh is likely needed.

If you cannot enable pg_stat_statements, you can still diagnose slow queries using:

  • Tracing: Enable SCRIBE_TRACES_ENABLED=true to see per-query execution times
  • Hints: Check /observe/hints for missing index recommendations
  • PostgreSQL logs: Set log_min_duration_statement = 100 to log queries over 100ms
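With log_min_duration_statement set, PostgreSQL writes lines of the form "LOG:  duration: N ms  statement: ...". A sketch for pulling the durations out of such logs; the sample line is illustrative:

```shell
# Extract the duration from a PostgreSQL slow-query log line.
line='2026-01-24 12:00:01 UTC LOG:  duration: 245.3 ms  statement: SELECT * FROM entries'

printf '%s\n' "$line" | awk -F 'duration: ' '/duration:/ { split($2, a, " "); print a[1] }'
# prints: 245.3
```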