PostgreSQL 18: What's New for Developers and DBAs
PostgreSQL 18, released on September 25, 2025, represents one of the most significant versions in recent years. This release introduces fundamental architectural changes and numerous features designed to improve developers’ lives. In this article, we’ll explore the most important improvements, with particular attention to how they can benefit those working with Delphi and DelphiMVCFramework.
🚀 Asynchronous I/O: An Architectural Paradigm Shift
The most revolutionary feature of PostgreSQL 18 is undoubtedly the introduction of the asynchronous I/O (AIO) subsystem. For decades, PostgreSQL has used a synchronous I/O model: when a backend process needed to read data from disk, it issued a read call and waited until the operation completed. This meant the process was completely blocked during I/O operations, unable to perform calculations or other processing.
With PostgreSQL 18, this model changes radically. The new AIO subsystem allows the database to start multiple read operations and continue processing data while waiting for results. It’s like going from a chef who prepares one dish at a time, waiting for each ingredient to arrive before starting the next, to a chef who orders all ingredients simultaneously and works on what’s already available while the rest is arriving.
Three I/O Modes
PostgreSQL 18 introduces the io_method configuration parameter that allows you to choose between three different implementations:
sync: Maintains the traditional behavior of PostgreSQL 17 and previous versions. Useful for troubleshooting or legacy systems.
worker (default): Uses dedicated background worker processes to handle I/O operations. When a query needs data from disk, PostgreSQL sends the request to an available worker instead of blocking the main process. The number of workers is controlled by the io_workers parameter (default: 3). This mode is cross-platform and works on Linux, Windows, and macOS.
io_uring (Linux only): Leverages the Linux kernel’s io_uring interface (version 5.1+), creating a shared buffer between PostgreSQL and the kernel to reduce system call overhead. This mode generally offers the best performance but requires a recent Linux kernel and PostgreSQL compiled with --with-liburing support. In Linux cloud environments, this is the optimal choice to maximize performance.
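For reference, the relevant settings live in postgresql.conf. The values below are illustrative (they mirror the defaults described above), not tuning advice:

```ini
# postgresql.conf -- illustrative, PostgreSQL 18
io_method = worker    # one of: sync, worker (default), io_uring (Linux only)
io_workers = 3        # number of I/O worker processes when io_method = worker
```

Changing io_method requires a server restart, so pick it before putting a cluster into production.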
Supported Operations
In version 18, asynchronous I/O is active for the following operations:
- Sequential Scan: full table scans
- Bitmap Heap Scan: scans based on index bitmaps
- VACUUM: maintenance and cleanup operations
Write operations, including those on the Write-Ahead Log (WAL), remain synchronous to preserve the database’s ACID guarantees.
📊 Measurable Performance
Benchmarks conducted in cloud environments (AWS, Azure) show impressive improvements:
- ⚡ Up to 2-3x throughput for read-intensive workloads
- ⚡ Significant reduction in I/O latency, especially on cloud storage
- ⚡ Better utilization of existing hardware resources
A test performed on AWS with a c7i.8xlarge instance (32 vCPU, 64GB RAM) and EBS io2 volume (100GB, 20000 IOPS) showed that with io_uring read speeds of approximately 3.4 GB/sec are achieved, compared to 2.6 GB/sec with the sync method - a 30% improvement. The difference is even more pronounced in scenarios with high I/O latency, typical of cloud environments.
What It Means for Delphi Developers
For those developing with Delphi and DelphiMVCFramework, these improvements translate into:
- Faster queries on large datasets
- Better performance for sequential scan and bitmap heap scan operations
- Reduced VACUUM times, crucial for production database maintenance
The read improvements are already substantial for most web applications and REST APIs developed with DMVC, especially those performing analytical or reporting queries on large data volumes.
PostgreSQL Training: All upcoming sessions of the PostgreSQL for Developers and DBAs course will include a dedicated section on PostgreSQL 18 new features, with practical examples on asynchronous I/O, UUIDv7, virtual generated columns, and temporal constraints. The course is available in Italian and English, both on-site and remote, with a focus on real-world use cases and best practices for production applications.
🔑 UUIDv7: Finally Efficient UUIDs as Primary Keys
One of the most anticipated features by developers is native support for UUIDv7. Anyone who has ever used UUIDs as primary keys knows the problem well: traditional UUIDs (v4) are completely random and this causes severe fragmentation of B-tree indexes, with consequent performance degradation in INSERT operations.
The Random UUID Problem
Imagine having a library where books are inserted in random positions on the shelves. Each new book could end up at the beginning, end, or middle of the existing collection. This is what happens with UUIDv4: each new record is inserted in a random position in the B-tree index, causing:
- Frequent page splits
- Inefficient cache
- Increased I/O for write operations
- Progressive performance degradation
The Solution: UUIDv7
UUIDv7 elegantly solves this problem by incorporating a Unix timestamp (in milliseconds) in the first 48 bits of the identifier, followed by 12 bits of sub-millisecond precision to ensure uniqueness. This means new UUIDs are naturally ordered chronologically and are inserted sequentially at the end of the index, just like a BIGSERIAL.
Important feature: The PostgreSQL implementation guarantees monotonicity for all UUIDv7 values generated by the same session (same backend process), even when generated in the same millisecond.
-- Generate a UUIDv7 ordered by current timestamp
SELECT uuidv7();
-- Result: 01980de8-ad3d-715c-b739-faf2bb1a7aad
-- Generate a UUIDv7 for a specific time (with optional interval)
SELECT uuidv7(NOW() - INTERVAL '1 hour');
-- Extract timestamp from a UUIDv7
SELECT uuid_extract_timestamp(uuidv7());
-- Result: 2025-09-26 14:30:15.123+02
-- Check UUID version
SELECT uuid_extract_version(uuidv7());
-- Result: 7
-- Use as primary key
CREATE TABLE orders (
id UUID PRIMARY KEY DEFAULT uuidv7(),
customer_id BIGINT NOT NULL,
total DECIMAL(10,2),
created_at TIMESTAMPTZ DEFAULT NOW()
);
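To make the bit layout concrete, here is a minimal Python sketch that builds a UUIDv7 by hand: 48 bits of millisecond timestamp, the version and variant bits, and a random tail. Unlike PostgreSQL's implementation, this sketch does not guarantee per-session monotonicity within the same millisecond; it only illustrates the layout.

```python
import os
import time
import uuid

def uuid7() -> uuid.UUID:
    """Build a UUIDv7 by hand: 48-bit Unix-ms timestamp, version/variant bits, random tail."""
    ms = time.time_ns() // 1_000_000                # milliseconds since the Unix epoch
    value = (ms & ((1 << 48) - 1)) << 80            # timestamp occupies the top 48 bits
    value |= int.from_bytes(os.urandom(10), "big")  # 80 bits of randomness below it
    value = (value & ~(0xF << 76)) | (0x7 << 76)    # version nibble = 7
    value = (value & ~(0x3 << 62)) | (0x2 << 62)    # RFC variant bits = 10
    return uuid.UUID(int=value)

def extract_ms(u: uuid.UUID) -> int:
    """Recover the millisecond timestamp stored in the first 48 bits."""
    return u.int >> 80

u = uuid7()
print(u, u.version, extract_ms(u))
```

Because the timestamp sits in the most significant bits, sorting these values lexicographically is the same as sorting them chronologically, which is exactly why B-tree inserts stay append-only.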
✨ Concrete Benefits
The benefits of UUIDv7 are multiple:
- BIGSERIAL-like performance: sequential insertions that keep the index compact
- Global uniqueness: perfect for distributed systems and microservices
- Natural ordering: ability to order by ID without additional columns
- Compatibility: 128-bit standard compatible with existing systems
For those using DelphiMVCFramework in distributed architectures or microservices, UUIDv7 represents the ideal solution. You can generate primary keys on any application node without central coordination, while maintaining excellent performance.
Migration from UUIDv4 to UUIDv7
If you already have a table with UUIDv4 and want to start using UUIDv7 for new records:
-- Change the column default
ALTER TABLE orders
ALTER COLUMN id SET DEFAULT uuidv7();
-- Existing records keep their UUIDv4
-- New records will use UUIDv7
-- Both can coexist in the same table
In your Delphi/DMVC applications, no changes are needed: you’ll continue to map the column as TGUID or string, and the database will handle generating UUIDv7 for new inserts.
💡 Virtual Generated Columns: On-Demand Calculations
PostgreSQL already supported “stored generated columns” since version 12, but these occupied disk space because the value was calculated and stored during INSERT/UPDATE. PostgreSQL 18 introduces virtual generated columns as the default option.
How They Work
Virtual columns don’t occupy disk space (technically they only use 1 bit in the null bitmap). The value is calculated at read time, when the column is actually requested in a query.
CREATE TABLE products (
id SERIAL PRIMARY KEY,
base_price DECIMAL(10,2) NOT NULL,
tax_rate DECIMAL(5,4) DEFAULT 0.22, -- Italian VAT
discount_rate DECIMAL(5,4) DEFAULT 0.00,
-- Virtual (default in PostgreSQL 18)
selling_price DECIMAL(10,2) GENERATED ALWAYS AS (
base_price * (1 + tax_rate) * (1 - discount_rate)
),
-- Stored only if you need to index or replicate
profit_margin DECIMAL(10,2) GENERATED ALWAYS AS (
base_price * discount_rate
) STORED
);
When to Use Virtual vs Stored
Use Virtual when:
- The calculation is fast (simple arithmetic operations)
- The underlying data changes frequently
- You want to save disk space
- You don’t need to index the column
Use Stored when:
- You need to create an index on the calculated column
- The calculation is complex and heavy
- You need logical replication of the calculated value
- The column is read much more often than it’s written
Practical Use Case: E-commerce
Suppose you’re developing an e-commerce system with DelphiMVCFramework. Virtual columns are perfect for:
CREATE TABLE cart_items (
id SERIAL PRIMARY KEY,
cart_id INTEGER NOT NULL,
product_id INTEGER NOT NULL,
quantity INTEGER NOT NULL,
unit_price DECIMAL(10,2) NOT NULL,
-- Calculated on-demand, zero INSERT overhead
subtotal DECIMAL(10,2) GENERATED ALWAYS AS (
quantity * unit_price
),
-- Even formatting can be virtual
display_price VARCHAR(20) GENERATED ALWAYS AS (
'$ ' || ROUND(quantity * unit_price, 2)::TEXT
)
);
When users add items to the cart, INSERT operations are very fast because there’s no calculation overhead. The subtotal and display_price are calculated only when actually needed (e.g., when displaying the cart).
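The virtual-versus-stored trade-off is the same one you face in application code between computing a value on every read and caching it at write time. A hypothetical Python sketch of the on-read behavior:

```python
from dataclasses import dataclass
from decimal import Decimal

@dataclass
class CartItem:
    quantity: int
    unit_price: Decimal

    @property
    def subtotal(self) -> Decimal:
        # Computed on every read, never stored: the same trade-off as a
        # VIRTUAL generated column (cheap writes, work deferred to reads).
        return self.quantity * self.unit_price

item = CartItem(quantity=3, unit_price=Decimal("19.99"))
print(item.subtotal)  # Decimal('59.97')
```

A STORED column corresponds to caching the result in a field at write time: writes pay the cost once, and the value can then be indexed.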
🎯 B-tree Multicolumn Index Skip Scan
A seemingly minor but significant improvement is support for “skip scan” on multicolumn B-tree indexes. Before PostgreSQL 18, to use an index on (region, category, date), it was necessary to specify columns in exact order, starting from region (the “left-most” rule).
The Traditional Problem
CREATE INDEX idx_sales ON sales (region, category, date);
-- This query could NOT use the index in PostgreSQL 17
SELECT * FROM sales
WHERE category = 'Electronics'
AND date > '2024-01-01';
The query above required a full table scan because region (the first column of the index) was not specified in the WHERE clause.
The Solution: Skip Scan
PostgreSQL 18 introduces the ability to “skip” initial columns of the index, allowing partial use of the index even when not all prefix columns are specified.
-- In PostgreSQL 18, this query USES the index!
SELECT * FROM sales
WHERE category = 'Electronics'
AND date > '2024-01-01';
This is particularly useful when:
- The cardinality of the first column is low (few distinct values)
- You have queries that use different combinations of indexed columns
- You don’t want to create multiple indexes to cover all possible combinations
Important: The query planner automatically decides whether to use skip scan based on table statistics. Skip scan is most effective when the “skipped” column has few distinct values. No manual enabling is needed - PostgreSQL uses it when advantageous.
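Conceptually, a skip scan iterates over the distinct values of the leading column and performs a cheap range probe for each one, instead of scanning the whole index. A toy Python sketch, using a sorted list as a stand-in for the B-tree (the sentinel character is an assumption that category strings never contain it):

```python
from bisect import bisect_left, bisect_right

# Toy "index" on (region, category): entries sorted exactly as a B-tree would store them.
index = sorted([
    ("APAC", "Books", 10), ("APAC", "Electronics", 11),
    ("EU", "Books", 20), ("EU", "Electronics", 21), ("EU", "Electronics", 22),
    ("US", "Books", 30), ("US", "Electronics", 31),
])
keys = [(r, c) for r, c, _ in index]
SENTINEL = chr(0x10FFFF)  # sorts after any real category string

def skip_scan(category):
    """Probe each distinct leading value instead of scanning the whole index."""
    results, i = [], 0
    while i < len(index):
        region = index[i][0]                            # next distinct leading value
        lo = bisect_left(keys, (region, category), i)   # jump to (region, category)
        hi = bisect_right(keys, (region, category), lo)
        results.extend(index[lo:hi])
        i = bisect_right(keys, (region, SENTINEL), hi)  # skip past this region entirely
    return results

print(skip_scan("Electronics"))
```

Each distinct region costs O(log n) probes rather than a linear walk, which is why the technique pays off precisely when the leading column has few distinct values.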
For developers using DMVC, this means being able to design more flexible indexes without worrying too much about the order of columns in WHERE clauses.
🔄 Enhanced RETURNING: Access to OLD and NEW Values
PostgreSQL 18 extends the RETURNING clause to allow simultaneous access to old (OLD) and new (NEW) values during UPDATE, DELETE, and MERGE operations.
-- Track changes in an UPDATE
UPDATE customers
SET email = 'newemail@example.com',
last_updated = NOW()
WHERE id = 1234
RETURNING
OLD.email as previous_email,
NEW.email as current_email,
OLD.last_updated as previous_update,
NEW.last_updated as current_update;
Practical Use Cases
Automatic Audit Trail:
-- Insert into audit table during UPDATE
WITH updated AS (
UPDATE products
SET price = price * 1.10
WHERE category = 'Premium'
RETURNING
id,
OLD.price as old_price,
NEW.price as new_price,
CURRENT_USER as changed_by
)
INSERT INTO price_history (product_id, old_price, new_price, changed_by)
SELECT id, old_price, new_price, changed_by FROM updated;
Important note: For UPDATE operations, both OLD and NEW contain values; for INSERT, OLD is NULL; for DELETE, NEW is NULL. You can also use aliases for OLD and NEW: RETURNING WITH (OLD AS o, NEW AS n) o.email, n.email.
For those developing REST APIs with DelphiMVCFramework, this feature allows implementing richer responses without additional queries:
// In a DMVC controller (sketch: assumes a FireDAC TFDQuery named Query)
procedure TProductController.UpdatePrice(const ProductId: Integer;
  const NewPrice: Currency);
var
  Json: TJSONObject;
begin
  // Single statement that returns both the old and the new value
  Query.Open(
    'UPDATE products SET price = :new_price WHERE id = :id ' +
    'RETURNING ' +
    '  OLD.price AS previous_price, ' +
    '  NEW.price AS current_price, ' +
    '  OLD.updated_at AS previous_update',
    [NewPrice, ProductId]);
  // Populate JSON with the returned data
  Json := TJSONObject.Create;
  Json.AddPair('previous_price', TJSONNumber.Create(Query.Fields[0].AsCurrency));
  Json.AddPair('current_price', TJSONNumber.Create(Query.Fields[1].AsCurrency));
  Render(Json); // Render takes ownership of Json
end;
📅 Temporal Constraints: WITHOUT OVERLAPS
PostgreSQL 18 introduces support for temporal constraints using the WITHOUT OVERLAPS clause, perfect for managing bookings, schedules, and any scenario where time periods must not overlap.
-- Prerequisite: install the btree_gist extension
CREATE EXTENSION IF NOT EXISTS btree_gist;
CREATE TABLE room_bookings (
room_id INTEGER,
guest_name VARCHAR(100),
booking_period tstzrange,
-- Prevents temporal overlaps for the same room
PRIMARY KEY (room_id, booking_period WITHOUT OVERLAPS)
);
-- This INSERT works
INSERT INTO room_bookings VALUES
(101, 'John Doe', '[2025-10-01 14:00, 2025-10-01 16:00)');
-- This fails: overlapping period!
INSERT INTO room_bookings VALUES
(101, 'Jane Smith', '[2025-10-01 15:00, 2025-10-01 17:00)');
-- ERROR: conflicting key value violates exclusion constraint
This is incredibly useful for booking systems, calendars, shift management, and any scenario where temporal overlap must be prevented at the database level. Note: temporal constraints require the btree_gist extension and internally use GiST indexes instead of B-tree.
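The conflict PostgreSQL detects is the standard half-open interval overlap test. A purely illustrative Python sketch of the same check the exclusion constraint performs, using the bookings from the example above:

```python
from datetime import datetime

def overlaps(a: tuple, b: tuple) -> bool:
    """Half-open [start, end) overlap, matching tstzrange '[...)' semantics."""
    return a[0] < b[1] and b[0] < a[1]

john = (datetime(2025, 10, 1, 14), datetime(2025, 10, 1, 16))
jane = (datetime(2025, 10, 1, 15), datetime(2025, 10, 1, 17))
back_to_back = (datetime(2025, 10, 1, 16), datetime(2025, 10, 1, 18))

print(overlaps(john, jane))          # True: 15:00-16:00 is shared
print(overlaps(john, back_to_back))  # False: [14,16) and [16,18) only touch
```

Doing this in application code is race-prone under concurrency; the WITHOUT OVERLAPS constraint enforces it atomically at the database level.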
⚙️ Faster and Less Invasive Upgrades
PostgreSQL 18 significantly improves the upgrade experience between major versions, a crucial aspect for production systems.
Preserving Planner Statistics
A major improvement: query planner statistics are now preserved during upgrades. In the past, after an upgrade with pg_upgrade, it was necessary to run a full ANALYZE on the entire database before the planner had enough information to generate optimal execution plans. This could take hours on large databases.
With PostgreSQL 18, existing statistics are migrated, allowing the database to be immediately operational with optimal performance.
Improvements to pg_upgrade
- New --jobs flag behavior: pre-upgrade checks can now run in parallel
- New --no-statistics option: allows skipping statistics transfer if not needed
- Better handling of databases with many objects (tables, sequences, indexes)
Post-upgrade: After the upgrade, it's recommended to run:
- vacuumdb --all --analyze-in-stages --missing-stats-only to quickly generate minimal statistics for relations that lack them
- vacuumdb --all --analyze-only to update cumulative statistics
For those managing PostgreSQL databases in production with Delphi/DMVC applications, these improvements significantly reduce the downtime window needed for upgrades.
OAuth 2.0: Modern Authentication
PostgreSQL 18 introduces support for OAuth 2.0, enabling integration with modern Single Sign-On (SSO) systems like Azure AD, Okta, Auth0, etc.
# In pg_hba.conf (issuer and scope are required options; the values are placeholders)
hostssl all all 0.0.0.0/0 oauth issuer="https://idp.example.com" scope="openid"
OAuth token validation is handled through extensible libraries, configurable via the oauth_validator_libraries parameter.
This is particularly relevant for enterprise applications developed with DelphiMVCFramework that need to integrate with centralized authentication infrastructures.
🎁 Other Notable Improvements
Parallel GIN Index Builds
The creation of GIN indexes (used for JSON, arrays, full-text search) can now be parallelized, significantly reducing creation times on large datasets.
Enhanced EXPLAIN
EXPLAIN ANALYZE now automatically shows buffer usage and provides detailed metrics on CPU, WAL, and I/O statistics for each execution plan node.
-- More information without additional flags
EXPLAIN ANALYZE SELECT * FROM large_table WHERE category = 'Active';
-- Includes CPU, WAL, read statistics
EXPLAIN (ANALYZE, VERBOSE)
SELECT * FROM orders
WHERE date > CURRENT_DATE - INTERVAL '30 days';
Data Checksums Enabled by Default
New PostgreSQL 18 clusters have data checksums enabled by default when initializing a database with initdb. This improves the ability to detect data corruption caused by I/O system issues, particularly important in cloud environments. If for some reason you want to disable this feature (with a small performance impact), you can use the --no-data-checksums flag during initialization.
Wire Protocol Improvements
PostgreSQL 18 introduces version 3.2 of the wire protocol, the first update since 2003 (PostgreSQL 7.4). Although libpq continues to use version 3.0 by default, this paves the way for future client improvements.
Considerations for Delphi and DelphiMVCFramework Developers
Migration Preparation
If you’re developing with Delphi and using FireDAC or native PostgreSQL components:
- Test with Docker: PostgreSQL 18 is available via Docker for testing:
docker run --name pg18 \
-e POSTGRES_PASSWORD=postgres \
-p 5432:5432 \
postgres:18
- Verify Compatibility: most existing applications will work without changes, but it's always better to test
- Full-Text Search: if you use full-text search or pg_trgm, reindexing might be necessary after the upgrade
- Review Indexes: with skip scan, you might be able to simplify your indexing strategy
Optimization Opportunities
With PostgreSQL 18, consider:
- Migrate to UUIDv7 if you use UUIDs as primary keys in distributed systems
- Use Virtual Generated Columns for frequent calculations you currently do in Delphi
- Configure Async I/O for read-intensive workloads
- Implement Temporal Constraints for booking/reservation scenarios
Example: Optimizing a DMVC REST API
// Before: application-side calculation
function TOrderController.GetOrderTotal(OrderId: Integer): Currency;
var
Items: TDataSet;
Total: Currency;
begin
Total := 0;
Items := GetOrderItems(OrderId);
try
while not Items.Eof do
begin
Total := Total + Items.FieldByName('quantity').AsInteger *
Items.FieldByName('price').AsCurrency;
Items.Next;
end;
Result := Total;
finally
Items.Free;
end;
end;
// After: with Virtual Generated Column
// In the database:
// ALTER TABLE order_items
// ADD COLUMN line_total DECIMAL(10,2)
// GENERATED ALWAYS AS (quantity * price);
function TOrderController.GetOrderTotal(OrderId: Integer): Currency;
begin
// Single query, efficient database-side calculation
Result := Connection.ExecSQLScalar(
'SELECT SUM(line_total) FROM order_items WHERE order_id = :id',
[OrderId]
);
end;
🎯 Conclusions
PostgreSQL 18 represents a significant qualitative leap, both architecturally with the introduction of asynchronous I/O, and for the numerous features designed to simplify developers’ lives.
For those developing with Delphi and DelphiMVCFramework, this version offers:
- ✅ Improved performance thanks to asynchronous I/O, particularly evident in cloud environments
- ✅ UUIDv7 for efficient primary keys in distributed architectures
- ✅ Virtual Generated Columns to move business logic from Delphi code to the database when appropriate
- ✅ Faster upgrades with less impact on production
- ✅ New modeling possibilities with temporal constraints and enhanced RETURNING
PostgreSQL’s philosophy of maintaining backward compatibility means that migration from previous versions will generally be smooth, allowing you to immediately benefit from performance improvements without having to rewrite existing code.
PostgreSQL 18 is available for download from the official postgresql.org website and can be tested immediately via Docker containers to evaluate the impact of new features on your Delphi and DMVC applications.
Further Learning and Training
The features presented in this article are only an overview of PostgreSQL 18's new capabilities. For those wishing to explore these topics in depth, with practical examples, optimizations specific to their application domain, and hands-on sessions, the PostgreSQL for Developers and DBAs course offers a structured path that covers both database fundamentals and advanced features, including all the new features in version 18.
The course can be delivered in Italian or English, on-site at your location or completely remote, with content adaptable to the specific needs of the team and reference sector.
Article based on official PostgreSQL 18 documentation, community benchmarks, and analysis of real use cases for enterprise applications.