Debugging and monitoring
Database performance is a large topic, and many factors can contribute to it. Some of the most common causes of poor performance include:
- An inefficiently designed schema
- Inefficiently designed queries
- A lack of indexes causing slower-than-necessary queries over large tables
- Unused indexes causing slow `INSERT`, `UPDATE` and `DELETE` operations
- Not enough compute resources, such as memory, causing your database to go to disk for results too often
- Lock contention from multiple queries operating on highly utilized tables
- A large amount of bloat on your tables causing poor query planning
You can examine your database and queries for these issues using either the Supabase CLI or SQL.
Using the CLI
The Supabase CLI comes with a range of tools to help inspect your Postgres instances for potential issues. The CLI gets its information from Postgres internals, so most of these tools work with any Postgres database, whether or not it is a Supabase project.
You can find installation instructions for the Supabase CLI here.
The inspect db command
The inspection tools for your Postgres database are under the `supabase inspect db` command. You can get a full list of available commands by running `supabase inspect db help`.
Connect to any Postgres database
Most inspection commands are Postgres agnostic: you can run the inspection routines on any Postgres database, even one that is not a Supabase project, by providing a connection string via `--db-url`.
For example, you can run a check against your local Postgres instance (here the `bloat` command, with placeholder credentials):
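```bash
# Estimate wasted space in an arbitrary Postgres database.
# Replace the connection string with your own user, password, host and port.
supabase inspect db bloat --db-url postgresql://postgres:postgres@localhost:5432/postgres
```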
Connect to a Supabase instance
Working with Supabase, you can link the Supabase CLI with your project:
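```bash
# Link the CLI to your hosted project; <project-id> is your project's reference ID.
supabase link --project-ref <project-id>
```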
Then the CLI will automatically connect to your Supabase project whenever you are in the project folder, and you no longer need to provide `--db-url`.
Inspection commands
Below are the `db` inspection commands provided, grouped by use case.
Some commands might require `pg_stat_statements` to be enabled or a specific Postgres version to be used.
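If you need to confirm whether `pg_stat_statements` is enabled, one option is to query the extension catalog directly (plain Postgres, nothing Supabase-specific):

```sql
-- Returns a row only if the extension is installed in the current database.
select extname, extversion
from pg_extension
where extname = 'pg_stat_statements';
```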
Disk storage
These commands are handy if you are running low on disk storage (an example invocation follows the list):
- bloat - estimates the amount of wasted space
- vacuum-stats - gives information on waste collection routines
- table-record-counts - estimates the number of records per table
- table-sizes - shows the sizes of tables
- index-sizes - shows the sizes of individual indexes
- table-index-sizes - shows the sizes of indexes for each table
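Each of these is a subcommand of `supabase inspect db`, for example:

```bash
# Show per-table disk usage (append --db-url for a non-linked database).
supabase inspect db table-sizes
```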
Query performance
The commands below are useful if your Postgres database consumes a lot of resources like CPU, RAM or Disk IO. You can also use them to investigate slow queries.
- cache-hit - shows how efficient your cache usage is overall
- unused-indexes - shows indexes with low scan counts
- index-usage - shows information about the efficiency of indexes
- seq-scans - shows the number of sequential scans recorded against all tables
- long-running-queries - shows long running queries that are executing right now
- outliers - shows queries with high execution time but low call count and queries with high proportion of execution time spent on synchronous I/O
Locks
- locks - shows statements which have taken out an exclusive lock on a relation
- blocking - shows statements that are waiting for locks to be released
Connections
- role-connections - shows number of active connections for all database roles (Supabase-specific command)
- replication-slots - shows information about replication slots on the database
Notes on pg_stat_statements
The following commands require `pg_stat_statements` to be enabled: calls, locks, cache-hit, blocking, unused-indexes, index-usage, bloat, outliers, table-record-counts, replication-slots, seq-scans, vacuum-stats, long-running-queries.
When using `pg_stat_statements`, also note that it only stores the latest 5,000 statements. Moreover, consider resetting the statistics after optimizing any queries by running `select pg_stat_statements_reset();`.
Learn more about `pg_stat_statements` here.
Using SQL
If you're seeing an `insufficient privilege` error when viewing the Query Performance page from the dashboard, a common fix (assuming the dashboard reads statistics as the `postgres` role) is to run:
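```sql
-- Grant the predefined pg_read_all_stats role so that postgres can read
-- all statistics views, including other roles' pg_stat_statements entries.
grant pg_read_all_stats to postgres;
```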
Postgres cumulative statistics system
Postgres collects data about its own operations using the cumulative statistics system. In addition to this, every Supabase project has the pg_stat_statements extension enabled by default. This extension records query execution performance details and is the best way to find inefficient queries. This information can be combined with the Postgres query plan analyzer to develop more efficient queries.
Here are some example queries to get you started.
Most frequently called queries
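A sketch of such a query, joining `pg_stat_statements` to `pg_roles` to resolve role names (the timing columns assume Postgres 13 or later, where planning and execution times are tracked separately):

```sql
-- Top 100 statements by number of executions.
select
  roles.rolname,
  statements.query,
  statements.calls,
  statements.rows / statements.calls as avg_rows,
  statements.total_exec_time + statements.total_plan_time as total_time,
  statements.min_exec_time + statements.min_plan_time as min_time,
  statements.max_exec_time + statements.max_plan_time as max_time,
  statements.mean_exec_time + statements.mean_plan_time as mean_time
from pg_stat_statements as statements
inner join pg_roles as roles on statements.userid = roles.oid
order by statements.calls desc
limit 100;
```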
This query shows:
- query statistics, ordered by the number of times each query has been executed
- the role that ran the query
- the number of times it has been called
- the average number of rows returned
- the cumulative total time the query has spent running
- the min, max and mean query times.
This provides useful information about the queries you run most frequently. Queries that have a high `max_time` or `mean_time` and are called often can be good candidates for optimization.
Slowest queries by execution time
This query shows statistics about queries ordered by their maximum execution time. It is similar to the query above, which was ordered by call count, but this one highlights outliers that may have high execution times. Queries with high max or mean execution times are good candidates for optimization.
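One way to write it, reusing the sketch above but ordered by maximum execution time:

```sql
-- Top 100 statements by maximum execution time.
select
  roles.rolname,
  statements.query,
  statements.calls,
  statements.min_exec_time + statements.min_plan_time as min_time,
  statements.max_exec_time + statements.max_plan_time as max_time,
  statements.mean_exec_time + statements.mean_plan_time as mean_time
from pg_stat_statements as statements
inner join pg_roles as roles on statements.userid = roles.oid
order by max_time desc
limit 100;
```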
Most time consuming queries
This query will show you statistics about queries ordered by the cumulative total execution time. It shows the total time the query has spent running as well as the proportion of total execution time the query has taken up.
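A sketch; the window function computes each statement's share of all execution time recorded by `pg_stat_statements`:

```sql
-- Top 100 statements by cumulative execution time, with share of the total.
select
  roles.rolname,
  statements.query,
  statements.calls,
  statements.total_exec_time + statements.total_plan_time as total_time,
  round(
    (100 * (statements.total_exec_time + statements.total_plan_time)
      / sum(statements.total_exec_time + statements.total_plan_time) over ())::numeric,
    2
  ) as percent_of_total
from pg_stat_statements as statements
inner join pg_roles as roles on statements.userid = roles.oid
order by total_time desc
limit 100;
```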
The most time consuming queries are not necessarily bad: you may have very efficient, frequently run queries that end up taking a large share of total time. Still, this view can help you spot queries that are taking up more time than they should.
Hit rate
Generally for most applications a small percentage of data is accessed more regularly than the rest. To make sure that your regularly accessed data is available, Postgres tracks your data access patterns and keeps this in its shared_buffers cache.
Applications with lower cache hit rates generally perform more poorly, since they have to hit the disk for results rather than serving them from memory. Very poor hit rates can also cause you to burst past your Disk IO limits, causing significant performance issues.
You can view your cache and index hit rates with a query along these lines (a sketch built on Postgres's `pg_statio` statistics views):
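```sql
-- Ratio of blocks served from shared_buffers vs. read from disk/OS cache.
-- sum() over these bigint columns returns numeric, so the division is exact.
select
  'index hit rate' as name,
  sum(idx_blks_hit) / nullif(sum(idx_blks_hit + idx_blks_read), 0) as ratio
from pg_statio_user_indexes
union all
select
  'table hit rate' as name,
  sum(heap_blks_hit) / nullif(sum(heap_blks_hit + heap_blks_read), 0) as ratio
from pg_statio_user_tables;
```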
This shows the ratio of data blocks fetched from the Postgres shared_buffers cache against the data blocks that were read from disk/OS cache.
If either your index or table hit rate is below 99%, it can indicate that your compute plan is too small for your current workload and that you would benefit from more memory. Upgrading your compute is easy and can be done from your project dashboard.
Optimizing poorly performing queries
Postgres has built-in tooling to help you optimize poorly performing queries. You can use the query plan analyzer on any expensive query that you have identified.
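For example (`orders` and `customer_id` are hypothetical stand-ins for your own table and filter):

```sql
explain analyze select * from orders where customer_id = 123;
```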
When you include `analyze` in the explain statement, the database attempts to execute the query and provides a detailed query plan along with actual execution times. So, be careful using `explain analyze` with `insert`/`update`/`delete` queries, because the query will actually run and could have unintended side-effects.
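If you do need to analyze a statement that writes data, one common pattern (shown here with the same hypothetical `orders` table) is to wrap it in a transaction that you roll back, so the plan and timings are captured but the changes are discarded:

```sql
begin;
-- The delete really executes here, so the reported timings are real...
explain analyze delete from orders where customer_id = 123;
-- ...but rolling back discards the changes.
rollback;
```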
If you run just `explain` without the `analyze` keyword, the database only performs query planning without actually executing the query. This approach can be beneficial when you want to inspect the query plan without affecting the database, or if you encounter timeouts in your queries.
Using the query plan analyzer to optimize your queries is a large topic, and a number of online resources are available.
You can pair the information available from `pg_stat_statements` with the detailed system metrics available via your metrics endpoint to better understand the behavior of your DB and the queries you're executing against it.