PostgreSQL 17.10 commit log

Stamp 17.10.

commit   : 25c49f3a4a742ba283f5cc43cc7f1d361552e917    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 11 May 2026 15:46:41 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 11 May 2026 15:46:41 -0400    

M configure
M configure.ac
M meson.build

Last-minute updates for release notes.

commit   : 25d938dbcb19ba172068159c0f22826d7cc681ea    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 11 May 2026 14:54:40 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 11 May 2026 14:54:40 -0400    

Security: CVE-2026-6472, CVE-2026-6473, CVE-2026-6474, CVE-2026-6475, CVE-2026-6476, CVE-2026-6477, CVE-2026-6478, CVE-2026-6479, CVE-2026-6575, CVE-2026-6637, CVE-2026-6638  

M doc/src/sgml/release-17.sgml

Use palloc_array() in a few more places to avoid overflow

commit   : 8e909812d00f9acfa39fc67147109d4852776fb0    
  
author   : Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Mon, 11 May 2026 21:18:06 +0300    
  
committer: Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Mon, 11 May 2026 21:18:06 +0300    

These could overflow on 32-bit systems.  
  
Backpatch-through: 14  
Security: CVE-2026-6473  

M contrib/hstore_plperl/hstore_plperl.c
M contrib/hstore_plpython/hstore_plpython.c

Remove test cases for field overflows in intarray and ltree.

commit   : 2b429d887b23284e7c2c4767cf381daf03932075    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 11 May 2026 12:12:03 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 11 May 2026 12:12:03 -0400    

These checks are failing in the buildfarm, reporting stack overflows  
rather than the expected errors, though seemingly only on ppc64 and  
s390x platforms.  Perhaps there is something off about our tests  
for stack depth on those architectures?  But there's no time to  
debug that right now, and surely these tests aren't too essential.  
Revert for now and plan to revisit after the release dust settles.  
  
Backpatch-through: 14  
Security: CVE-2026-6473  

M contrib/intarray/expected/_int.out
M contrib/intarray/sql/_int.sql
M contrib/ltree/expected/ltree.out
M contrib/ltree/sql/ltree.sql

refint: Fix SQL injection and buffer overruns.

commit   : 2dc64ef28b3696d202628f852c6a97ae8a2e2a62    
  
author   : Nathan Bossart <nathan@postgresql.org>    
date     : Mon, 11 May 2026 05:13:49 -0700    
  
committer: Noah Misch <noah@leadboat.com>    
date     : Mon, 11 May 2026 05:13:49 -0700    

Maliciously crafted key value updates could achieve SQL injection  
within check_foreign_key().  To fix, ensure new key values are  
properly quoted and escaped in the internally generated SQL  
statements.  While at it, avoid potential buffer overruns by  
replacing the stack buffers for internally generated SQL statements  
with StringInfo.  
  
Reported-by: Nikolay Samokhvalov <nik@postgres.ai>  
Author: Nathan Bossart <nathandbossart@gmail.com>  
Reviewed-by: Noah Misch <noah@leadboat.com>  
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>  
Reviewed-by: Fujii Masao <masao.fujii@gmail.com>  
Security: CVE-2026-6637  
Backpatch-through: 14  

M contrib/spi/refint.c
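
The quoting fix described above can be illustrated with a self-contained sketch. `quote_literal_sketch` is an invented name: the real patch builds statements in StringInfo buffers and uses proper server-side quoting, and this simplification ignores encoding and backslash-escape subtleties that real SQL quoting must handle.

```c
#include <stdlib.h>
#include <string.h>

/*
 * Embed a key value into generated SQL as a quoted literal, doubling any
 * embedded single quotes so the value cannot terminate the literal and
 * inject SQL.  Error handling and encoding awareness elided for brevity.
 */
char *quote_literal_sketch(const char *val)
{
    size_t len = strlen(val);
    /* worst case: every char doubled, plus two quotes and a NUL */
    char *out = malloc(len * 2 + 3);
    char *p = out;

    *p++ = '\'';
    for (const char *s = val; *s; s++)
    {
        if (*s == '\'')
            *p++ = '\'';        /* double embedded single quotes */
        *p++ = *s;
    }
    *p++ = '\'';
    *p = '\0';
    return out;
}
```

With this rule, a hostile value such as `a'); DROP TABLE t; --` stays inert inside the generated statement.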

Mark PQfn() unsafe and fix overrun in frontend LO interface.

commit   : d88c7be156bbde61ccff337152bf640387b2c629    
  
author   : Nathan Bossart <nathan@postgresql.org>    
date     : Mon, 11 May 2026 05:13:49 -0700    
  
committer: Noah Misch <noah@leadboat.com>    
date     : Mon, 11 May 2026 05:13:49 -0700    

When result_is_int is set to 0, PQfn() cannot validate that the  
result fits in result_buf, so it will write data beyond the end of  
the buffer when the server returns more data than requested.  Since  
this function is insecurable and obsolete, add a warning to the top  
of the pertinent documentation advising against its use.  
  
The only in-tree caller of PQfn() is the frontend large object  
interface.  To fix that, add a buf_size parameter to  
pqFunctionCall3() that is used to protect against overruns, and use  
it in a private version of PQfn() that also accepts a buf_size  
parameter.  
  
Reported-by: Yu Kunpeng <yu443940816@live.com>  
Reported-by: Martin Heistermann <martin.heistermann@unibe.ch>  
Author: Nathan Bossart <nathandbossart@gmail.com>  
Reviewed-by: Noah Misch <noah@leadboat.com>  
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>  
Reviewed-by: Etsuro Fujita <etsuro.fujita@gmail.com>  
Security: CVE-2026-6477  
Backpatch-through: 14  

M doc/src/sgml/libpq.sgml
M src/interfaces/libpq/fe-exec.c
M src/interfaces/libpq/fe-lobj.c
M src/interfaces/libpq/fe-protocol3.c
M src/interfaces/libpq/libpq-int.h
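
The shape of the guard can be sketched in isolation. `copy_result_checked` and its parameters are invented stand-ins for the buf_size check added to pqFunctionCall3(); the real code is woven into protocol parsing.

```c
#include <string.h>

/*
 * Copy a server-supplied result into a caller-provided buffer only if it
 * fits; otherwise report failure instead of writing past the end, which is
 * the overrun the unchecked path allowed.
 */
int copy_result_checked(char *result_buf, size_t buf_size,
                        const char *server_data, size_t server_len,
                        size_t *actual_len)
{
    *actual_len = server_len;
    if (server_len > buf_size)
        return -1;              /* refuse: would overrun result_buf */
    memcpy(result_buf, server_data, server_len);
    return 0;
}
```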

Fix integer overflow in array_agg(), when the array grows too large

commit   : 3c41f5534aa60402293e7a50c4e44f7d6b6e3e4d    
  
author   : Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Mon, 11 May 2026 05:13:49 -0700    
  
committer: Noah Misch <noah@leadboat.com>    
date     : Mon, 11 May 2026 05:13:49 -0700    

If you accumulate many arrays full of NULLs, you could overflow
'nitems' before reaching the MaxAllocSize limit on the allocations.
Add an explicit check that the number of items doesn't grow too large.  
With more than MaxArraySize items, getting the final result with  
makeArrayResultArr() would fail anyway, so better to error out early.  
  
Reported-by: Xint Code  
Author: Heikki Linnakangas <heikki.linnakangas@iki.fi>  
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>  
Backpatch-through: 14  
Security: CVE-2026-6473  

M src/backend/utils/adt/arrayfuncs.c
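
The early error can be sketched as a checked accumulation. `add_items_checked` and `MAX_ITEMS_SKETCH` are invented names standing in for the accumulator update and the MaxArraySize limit.

```c
#include <stdbool.h>
#include <stdint.h>

/* hypothetical cap standing in for MaxArraySize */
#define MAX_ITEMS_SKETCH ((uint64_t) 0x07FFFFFF)

/*
 * Add 'incoming' items to the running count, but check first: if the sum
 * would exceed the cap, report an error instead of letting the counter
 * overflow long before any allocation-size check fires.
 */
bool add_items_checked(uint64_t *nitems, uint64_t incoming)
{
    if (incoming > MAX_ITEMS_SKETCH - *nitems)
        return false;           /* would exceed the cap: error out early */
    *nitems += incoming;
    return true;
}
```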

commit   : 26dd3cac20baa8f3ad4c8aca68351c881f92801c    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 11 May 2026 05:13:49 -0700    
  
committer: Noah Misch <noah@leadboat.com>    
date     : Mon, 11 May 2026 05:13:49 -0700    

pg_locale_icu.c was full of places where a very long input string  
could cause integer overflow while calculating a buffer size,  
leading to buffer overruns.  
  
It also was cavalier about using char-type local arrays as buffers  
holding arrays of UChar.  The alignment of a char[] variable isn't  
guaranteed, so that this risked failure on alignment-picky platforms.  
The lack of complaints suggests that such platforms are very rare  
nowadays; but it's likely that we are paying a performance price on  
rather more platforms.  Declare those arrays as UChar[] instead,  
keeping their physical size the same.  
  
pg_locale_libc.c's strncoll_libc_win32_utf8() also had the  
disease of assuming it could double or quadruple the input  
string length without concern for overflow.  
  
Reported-by: Xint Code  
Reported-by: Pavel Kohout <pavel.kohout@aisle.com>  
Author: Tom Lane <tgl@sss.pgh.pa.us>  
Backpatch-through: 14  
Security: CVE-2026-6473  

M src/backend/utils/adt/pg_locale.c

Prevent path traversal in pg_basebackup and pg_rewind

commit   : 8f881e188bba9e9d8d63ca0b42cb2597ba828e03    
  
author   : Michael Paquier <michael@paquier.xyz>    
date     : Mon, 11 May 2026 05:13:49 -0700    
  
committer: Noah Misch <noah@leadboat.com>    
date     : Mon, 11 May 2026 05:13:49 -0700    

pg_rewind and pg_basebackup could be fed paths from rogue endpoints
that, when processed, could overwrite files on the client, achieving
path traversal.
  
There were two areas in the tree that were sensitive to this problem:  
- pg_basebackup, through the astreamer code, where no validation was  
performed before building an output path when streaming tar data.  This  
is an issue in v15 and newer versions.  
- pg_rewind file operations for paths received through libpq, for all  
the stable branches supported.  
  
In order to address this problem, this commit adds a helper function in
path.c that reuses path_is_relative_and_below_cwd() after applying
canonicalize_path().  This can be used to validate the paths received
from a connection point.  A path is considered invalid if either of the
following conditions is satisfied:
- The path is absolute.
- The path includes a direct parent-directory reference.
  
Reported-by: XlabAI Team of Tencent Xuanwu Lab  
Reported-by: Valery Gubanov <valerygubanov95@gmail.com>  
Author: Michael Paquier <michael@paquier.xyz>  
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>  
Backpatch-through: 14  
Security: CVE-2026-6475  

M src/bin/pg_basebackup/bbstreamer_file.c
M src/bin/pg_basebackup/bbstreamer_tar.c
M src/bin/pg_rewind/file_ops.c
M src/include/port.h
M src/port/path.c
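
The two validation rules can be sketched as follows. `path_is_safe_sketch` is a hypothetical simplification of the new helper: it skips canonicalize_path() and does not handle Windows drive letters or backslash separators.

```c
#include <stdbool.h>
#include <string.h>

/*
 * Reject absolute paths and any path containing a ".." component, the two
 * conditions described above.
 */
bool path_is_safe_sketch(const char *path)
{
    if (path[0] == '/')
        return false;           /* absolute path: rejected */

    for (const char *p = path; *p;)
    {
        const char *slash = strchr(p, '/');
        size_t len = slash ? (size_t) (slash - p) : strlen(p);

        if (len == 2 && p[0] == '.' && p[1] == '.')
            return false;       /* parent-directory reference: rejected */
        if (!slash)
            break;
        p = slash + 1;
    }
    return true;
}
```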

Avoid overflow in size calculations in formatting.c.

commit   : 87357a606ebbf60927a9ce4a47dccbc293821cd3    
  
author   : Nathan Bossart <nathan@postgresql.org>    
date     : Mon, 11 May 2026 05:13:49 -0700    
  
committer: Noah Misch <noah@leadboat.com>    
date     : Mon, 11 May 2026 05:13:49 -0700    

A few functions in this file were incautious about multiplying a  
possibly large integer by a factor more than 1 and then using it as  
an allocation size.  This is harmless on 64-bit systems where we'd  
compute a size exceeding MaxAllocSize and then fail, but on 32-bit  
systems we could overflow size_t, leading to an undersized  
allocation and buffer overrun.  To fix, use palloc_array() or  
mul_size() instead of handwritten multiplication.  
  
Reported-by: Sven Klemm <sven@tigerdata.com>  
Reported-by: Xint Code  
Author: Nathan Bossart <nathandbossart@gmail.com>  
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>  
Reviewed-by: Tatsuo Ishii <ishii@postgresql.org>  
Security: CVE-2026-6473  
Backpatch-through: 14  

M src/backend/utils/adt/formatting.c

Check CREATE privilege on multirange type schema in CREATE TYPE.

commit   : c27ba08cd5c58659b32805fe807683cde5429ab2    
  
author   : Nathan Bossart <nathan@postgresql.org>    
date     : Mon, 11 May 2026 05:13:49 -0700    
  
committer: Noah Misch <noah@leadboat.com>    
date     : Mon, 11 May 2026 05:13:49 -0700    

This omission allowed roles to create multirange types in any  
schema, potentially leading to privilege escalations.  Note that  
when a multirange type name is not specified in CREATE TYPE, it is  
automatically placed in the range type's schema, which is checked  
at the beginning of DefineRange().  
  
Reported-by: Jelte Fennema-Nio <postgres@jeltef.nl>  
Author: Jelte Fennema-Nio <postgres@jeltef.nl>  
Reviewed-by: Nathan Bossart <nathandbossart@gmail.com>  
Reviewed-by: Tomas Vondra <tomas@vondra.me>  
Security: CVE-2026-6472  
Backpatch-through: 14  

M src/backend/commands/typecmds.c
M src/test/regress/expected/multirangetypes.out
M src/test/regress/sql/multirangetypes.sql

pg_createsubscriber: Obstruct SQL injection via subscription names.

commit   : d7de7fa84d2492f15747163cbb2f4c5c110ec4a4    
  
author   : Nathan Bossart <nathan@postgresql.org>    
date     : Mon, 11 May 2026 05:13:49 -0700    
  
committer: Noah Misch <noah@leadboat.com>    
date     : Mon, 11 May 2026 05:13:49 -0700    

drop_existing_subscription() neglected to escape the subscription  
name when generating its query string.  To fix, use  
PQescapeIdentifier() to construct a properly escaped name, and use  
it in the ALTER SUBSCRIPTION and DROP SUBSCRIPTION commands.  
  
Reported-by: Yu Kunpeng <yu443940816@live.com>  
Author: Nathan Bossart <nathandbossart@gmail.com>  
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>  
Security: CVE-2026-6476  
Backpatch-through: 17  

M src/bin/pg_basebackup/pg_createsubscriber.c
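
Conceptually, PQescapeIdentifier() double-quotes the name and doubles any embedded double quotes, so the name cannot terminate the identifier and inject SQL. A hedged sketch of that rule, ignoring the encoding handling and error reporting the real libpq function performs (`escape_ident_sketch` is an invented name):

```c
#include <stdlib.h>
#include <string.h>

/* Wrap 'name' in double quotes, doubling embedded double quotes. */
char *escape_ident_sketch(const char *name)
{
    size_t len = strlen(name);
    char *out = malloc(len * 2 + 3);
    char *p = out;

    *p++ = '"';
    for (const char *s = name; *s; s++)
    {
        if (*s == '"')
            *p++ = '"';         /* double embedded double quotes */
        *p++ = *s;
    }
    *p++ = '"';
    *p = '\0';
    return out;
}
```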

Guard against unsafe conditions in usage of pg_strftime().

commit   : a386d14feb210cd9c6c9b68cd8782e089f4d5b62    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 11 May 2026 05:13:49 -0700    
  
committer: Noah Misch <noah@leadboat.com>    
date     : Mon, 11 May 2026 05:13:49 -0700    

Although pg_strftime() has defined error conditions, no callers bother  
to check for errors.  This is problematic because the output string is  
very likely not null-terminated if an error occurs, so that blindly  
using it is unsafe.  Rather than trusting that we can find and fix all  
the callers, let's alter the function's API spec slightly: make it  
guarantee a null-terminated result so long as maxsize > 0.  
  
Furthermore, if we do get an error, let's make that null-terminated  
result be an empty string.  We could instead truncate at the buffer  
length, but that risks producing mis-encoded output if the tz_name  
string contains multibyte characters.  It doesn't seem reasonable for  
src/timezone/ to make use of our encoding-aware truncation logic.  
Also, the only really likely source of a failure is a user-supplied  
timezone name that is intentionally trying to overrun our buffers.  
I don't feel a need to be particularly friendly about that case.  
  
Author: Tom Lane <tgl@sss.pgh.pa.us>  
Reviewed-by: John Naylor <johncnaylorls@gmail.com>  
Backpatch-through: 14  
Security: CVE-2026-6474  

M src/timezone/strftime.c
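
The revised contract can be sketched with standard strftime() standing in for pg_strftime(); `strftime_safe_sketch` is an invented wrapper name. On failure, C's strftime() returns 0 and leaves the buffer contents unspecified, which is exactly the hazard described above.

```c
#include <time.h>

/*
 * Guarantee a NUL-terminated result whenever maxsize > 0, returning an
 * empty string on conversion failure.
 */
size_t strftime_safe_sketch(char *buf, size_t maxsize,
                            const char *fmt, const struct tm *tm)
{
    size_t n;

    if (maxsize == 0)
        return 0;               /* nothing we can guarantee */
    n = strftime(buf, maxsize, fmt, tm);
    if (n == 0)
        buf[0] = '\0';          /* error: hand back a valid empty string */
    return n;
}
```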

Avoid passing unintended format codes to snprintf().

commit   : 4197c880cada16a5ae2777cd4ef8522090376a77    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 11 May 2026 05:13:49 -0700    
  
committer: Noah Misch <noah@leadboat.com>    
date     : Mon, 11 May 2026 05:13:49 -0700    

timeofday() assumed that the output of pg_strftime() could not contain  
% signs, other than the one it explicitly asks for with %%.  However,  
we don't have that guarantee with respect to the time zone name (%Z).  
A crafted time zone setting could abuse the subsequent snprintf()  
call, resulting in crashes or disclosure of server memory.  
  
To fix, split the pg_strftime() call into two and then treat the  
outputs as literal strings, not a snprintf format string.  The  
extra pg_strftime() call doesn't really cost anything, since the  
bulk of the conversion work was done by pg_localtime().  
  
Also, adjust buffer widths so that we're not risking string truncation  
during the snprintf() step, as that would create a hazard of producing  
mis-encoded output.  
  
This also fixes a latent portability issue: the format string expects  
an int, but tp.tv_usec is long int on many platforms.  
  
Reported-by: Xint Code  
Author: Tom Lane <tgl@sss.pgh.pa.us>  
Reviewed-by: John Naylor <johncnaylorls@gmail.com>  
Backpatch-through: 14  
Security: CVE-2026-6474  

M src/backend/utils/adt/timestamp.c
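
The fix's core rule, never using pg_strftime() output as a format string, can be demonstrated in miniature; `format_timeofday_sketch` is an invented stand-in for the rebuilt snprintf() step.

```c
#include <stdio.h>

/*
 * Join two strings that may contain '%' (e.g. from a crafted time zone
 * name).  They are passed as arguments to a fixed "%s %s" format, so any
 * format codes in them are copied literally rather than interpreted.
 */
void format_timeofday_sketch(char *out, size_t outsize,
                             const char *datetime_part, const char *tz_part)
{
    snprintf(out, outsize, "%s %s", datetime_part, tz_part);
}
```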

Fix SQL injection in logical replication origin checks.

commit   : f0f59b658ef10901c9af3af7705c802a72a0577e    
  
author   : Noah Misch <noah@leadboat.com>    
date     : Mon, 11 May 2026 05:13:48 -0700    
  
committer: Noah Misch <noah@leadboat.com>    
date     : Mon, 11 May 2026 05:13:48 -0700    

ALTER SUBSCRIPTION ... REFRESH PUBLICATION interpolates schema and  
relation names into SQL without quoting them.  A crafted subscriber  
relation name can inject arbitrary SQL on the publisher.  Test such a  
name.  Back-patch to v16, where commit  
875693019053b8897ec3983e292acbb439b088c3 first appeared.  
  
Reported-by: Pavel Kohout <pavel.kohout@aisle.com>  
Author: Pavel Kohout <pavel.kohout@aisle.com>  
Reviewed-by: Nathan Bossart <nathandbossart@gmail.com>  
Backpatch-through: 16  
Security: CVE-2026-6638  

M src/backend/commands/subscriptioncmds.c
M src/test/subscription/t/030_origin.pl

Apply timingsafe_bcmp() in authentication paths

commit   : c4e7435b30984dacd0396ce0128bd54c8026fef5    
  
author   : Michael Paquier <michael@paquier.xyz>    
date     : Mon, 11 May 2026 05:13:48 -0700    
  
committer: Noah Misch <noah@leadboat.com>    
date     : Mon, 11 May 2026 05:13:48 -0700    

This commit applies timingsafe_bcmp() to authentication paths that
handle attributes or data previously compared with memcmp() or strcmp(),
which are sensitive to timing attacks.
  
This change concerns the following data, some in the backend and some
in the frontend:
- For a SCRAM or MD5 password, the computed key or the MD5 hash compared  
with a password during a plain authentication.  
- For a SCRAM exchange, the stored key, the client's final nonce and the  
server nonce.  
- RADIUS (up to v18), the encrypted password.  
- For MD5 authentication, the MD5(MD5()) hash.  
  
Reported-by: Joe Conway <mail@joeconway.com>  
Security: CVE-2026-6478  
Author: Michael Paquier <michael@paquier.xyz>  
Reviewed-by: John Naylor <johncnaylorls@gmail.com>  
Backpatch-through: 14  

M src/backend/libpq/auth-scram.c
M src/backend/libpq/auth.c
M src/backend/libpq/crypt.c
M src/interfaces/libpq/fe-auth-scram.c

Add timingsafe_bcmp(), for constant-time memory comparison

commit   : 8e34acfda11595fdf2b1cfa96dc6949c34d34cd7    
  
author   : Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Mon, 11 May 2026 05:13:48 -0700    
  
committer: Noah Misch <noah@leadboat.com>    
date     : Mon, 11 May 2026 05:13:48 -0700    

timingsafe_bcmp() should be used instead of memcmp() or a naive  
for-loop, when comparing passwords or secret tokens, to avoid leaking  
information about the secret token by timing. This commit just  
introduces the function but does not change any existing code to use  
it yet.  
  
This was initially applied as of 09be39112654 in v18 and newer
versions, and will be used in all the stable branches by an upcoming
fix.
  
Co-authored-by: Jelte Fennema-Nio <github-tech@jeltef.nl>  
Discussion: https://www.postgresql.org/message-id/7b86da3b-9356-4e50-aa1b-56570825e234@iki.fi  
Security: CVE-2026-6478  
Backpatch-through: 14  

M configure
M configure.ac
M meson.build
M src/include/pg_config.h.in
M src/include/port.h
M src/port/meson.build
A src/port/timingsafe_bcmp.c
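
A minimal sketch of the technique, following the OpenBSD-style contract (return 0 iff the buffers are equal); the actual src/port implementation may differ in detail.

```c
#include <stddef.h>

/*
 * Constant-time comparison: examine every byte regardless of where the
 * first difference occurs, so the running time does not reveal the
 * position of a mismatch.  Returns 0 iff the buffers are equal.
 */
int timingsafe_bcmp_sketch(const void *b1, const void *b2, size_t n)
{
    const unsigned char *p1 = b1;
    const unsigned char *p2 = b2;
    int ret = 0;

    for (size_t i = 0; i < n; i++)
        ret |= p1[i] ^ p2[i];   /* accumulate differences; no early exit */
    return ret;
}
```

Unlike memcmp(), there is no early exit on the first mismatching byte, which is what makes the comparison time independent of the secret's contents.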

Guard against overflow in "left" fields of query_int and ltxtquery.

commit   : c4d04cc4810303427d2f6fcf914bb856af32cc52    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 11 May 2026 05:13:48 -0700    
  
committer: Noah Misch <noah@leadboat.com>    
date     : Mon, 11 May 2026 05:13:48 -0700    

contrib/intarray's query_int type uses an int16 field to hold the  
offset from a binary operator node to its left operand.  However, it  
allows the number of nodes to be as much as will fit in MaxAllocSize,  
so there is a risk of overflowing int16 depending on the precise shape  
of the tree.  Simple right-associative cases like "a | b | c | ..."  
work fine, so we should not solve this by restricting the overall  
number of nodes.  Instead add a direct test of whether each individual  
offset is too large.  
  
contrib/ltree's ltxtquery type uses essentially the same logic and  
has the same 16-bit restriction.  
  
(The core backend's tsquery.c has a variant of this logic too, but  
in that case the target field is 32 bits, so it is okay so long  
as varlena datums are restricted to 1GB.)  
  
In v16 and up, these types support soft error reporting, so we have  
to complicate the recursive findoprnd function's API a bit to allow  
the complaint to be reported softly.  v14/v15 don't need that.  
  
Undocumented and overcomplicated code like this makes my head hurt,  
so add some comments and simplify while at it.  
  
Reported-by: Xint Code  
Author: Tom Lane <tgl@sss.pgh.pa.us>  
Reviewed-by: Michael Paquier <michael@paquier.xyz>  
Backpatch-through: 14  
Security: CVE-2026-6473  

M contrib/intarray/_int_bool.c
M contrib/intarray/expected/_int.out
M contrib/intarray/sql/_int.sql
M contrib/ltree/expected/ltree.out
M contrib/ltree/ltxtquery_io.c
M contrib/ltree/sql/ltree.sql

Unify src/common/'s definitions of MaxAllocSize.

commit   : e5babf7541d39cf7c2aee54a5034b1109b9e93ed    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 11 May 2026 05:13:48 -0700    
  
committer: Noah Misch <noah@leadboat.com>    
date     : Mon, 11 May 2026 05:13:48 -0700    

Define MaxAllocSize in src/include/common/fe_memutils.h rather  
than having several copies of it in different src/common/*.c files.  
This also provides an opportunity to document it better.  
  
Back-patch of commit 11b7de4a7, needed now because assorted security  
fixes are adding additional references to MaxAllocSize in frontend  
code.  
  
Backpatch-through: 14
Security: CVE-2026-6473  

M src/common/psprintf.c
M src/common/saslprep.c
M src/common/stringinfo.c
M src/include/common/fe_memutils.h

Fix unbounded recursive handling of SSL/GSS in ProcessStartupPacket()

commit   : 32a4ce55ccabe4c1e9b1e45d4efc1e62e69fc754    
  
author   : Michael Paquier <michael@paquier.xyz>    
date     : Mon, 11 May 2026 05:13:48 -0700    
  
committer: Noah Misch <noah@leadboat.com>    
date     : Mon, 11 May 2026 05:13:48 -0700    

The handling of SSL and GSS negotiation messages in
ProcessStartupPacket() could cause unbounded recursion in the backend,
ultimately crashing the server, as the negotiation attempts were not
tracked across multiple calls processing startup packets.

A malicious client could therefore alternate rejected SSL and GSS
requests indefinitely, each adding a stack frame, until the backend
crashed with a stack overflow, taking down the server.
  
This commit addresses this issue by modifying ProcessStartupPacket() so
that already-processed negotiation attempts are tracked, preventing
unbounded recursion.  A TAP test is added to check this problem,
stacking multiple SSL and GSS negotiation attempts.
  
Reported-by: Calif.io in collaboration with Claude and Anthropic  
Research  
Author: Michael Paquier <michael@paquier.xyz>  
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>  
Security: CVE-2026-6479  
Backpatch-through: 14  

M src/backend/tcop/backend_startup.c
M src/test/Makefile
M src/test/meson.build
A src/test/postmaster/.gitignore
A src/test/postmaster/Makefile
A src/test/postmaster/README
A src/test/postmaster/meson.build
A src/test/postmaster/t/004_negotiate.pl
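
The shape of the fix can be sketched as a per-connection state check; the enum and helper below are invented for illustration and do not mirror backend_startup.c's actual structure.

```c
#include <stdbool.h>

typedef enum { PKT_SSL_REQUEST, PKT_GSS_REQUEST, PKT_STARTUP } PacketType;

/*
 * Track which negotiations have already been attempted on this connection
 * and refuse repeats, bounding the work (and stack) per connection instead
 * of recursing for every rejected SSLRequest/GSSENCRequest.
 * Returns true if the packet may proceed, false if it must be refused.
 */
bool negotiate_once_sketch(PacketType pkt, bool *ssl_done, bool *gss_done)
{
    switch (pkt)
    {
        case PKT_SSL_REQUEST:
            if (*ssl_done)
                return false;   /* already tried: refuse, don't recurse */
            *ssl_done = true;
            return true;
        case PKT_GSS_REQUEST:
            if (*gss_done)
                return false;
            *gss_done = true;
            return true;
        default:
            return true;        /* regular startup packet */
    }
}
```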

Add raw_connect and raw_connect_works to Cluster.pm

commit   : 6dffaeb8e54c26a0b0ff92e56343f0dbe4633e72    
  
author   : Michael Paquier <michael@paquier.xyz>    
date     : Mon, 11 May 2026 05:13:48 -0700    
  
committer: Noah Misch <noah@leadboat.com>    
date     : Mon, 11 May 2026 05:13:48 -0700    

These two routines will be used in a test of an upcoming fix.  This  
commit affects the v14~v17 range.  v18 and newer versions already  
include them, thanks to 85ec945b7880.  
  
Security: CVE-2026-6479  
Backpatch-through: 14  

M src/test/perl/PostgreSQL/Test/Cluster.pm

Fix assorted places that need to use palloc_array().

commit   : 01b5ef7df090c5921cbd373d80e100b1d529c656    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 11 May 2026 05:13:48 -0700    
  
committer: Noah Misch <noah@leadboat.com>    
date     : Mon, 11 May 2026 05:13:48 -0700    

multirange_recv and BlockRefTableReaderNextRelation were incautious  
about multiplying a possibly-large integer by a factor more than 1  
and then using it as an allocation size.  This is harmless on 64-bit  
systems where we'd compute a size exceeding MaxAllocSize and then  
fail, but on 32-bit systems we could overflow size_t leading to an  
undersized allocation and buffer overrun.  
  
Fix these places by using palloc_array() instead of a handwritten  
multiplication.  (In HEAD, some of them were fixed already, but  
none of that work got back-patched at the time.)  
  
In addition, BlockRefTableReaderNextRelation passes the same value  
to BlockRefTableRead's "int length" parameter.  If built for  
64-bit frontend code, palloc_array() allows a larger array size  
than it otherwise would, potentially allowing that parameter to  
overflow.  Add an explicit check to forestall that and keep the  
behavior the same cross-platform.  
  
Reported-by: Xint Code  
Author: Tom Lane <tgl@sss.pgh.pa.us>  
Backpatch-through: 14  
Security: CVE-2026-6473  

M src/backend/utils/adt/multirangetypes.c
M src/common/blkreftable.c

Prevent buffer overrun in unicode_normalize().

commit   : ebcfa7867fb5f3acb906fda77b5cf0282d9ad81a    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 11 May 2026 05:13:48 -0700    
  
committer: Noah Misch <noah@leadboat.com>    
date     : Mon, 11 May 2026 05:13:48 -0700    

Some UTF8 characters decompose to more than a dozen codepoints.  
It is possible for an input string that fits into well under  
1GB to produce more than 4G decomposed codepoints, causing  
unicode_normalize()'s decomp_size variable to wrap around to a  
small positive value.  This results in a small output buffer  
allocation and subsequent buffer overrun.  
  
To fix, test after each addition to see if we've overrun MaxAllocSize,  
and break out of the loop early if so.  In frontend code we want to  
just return NULL for this failure (treating it like OOM).  In the  
backend, we can rely on the following palloc() call to throw error.  
  
I also tightened things up in the calling functions in varlena.c,  
using size_t rather than int and allocating the input workspace  
with palloc_array().  These changes are probably unnecessary  
given the knowledge that the original input and the normalized  
output_chars array must fit into 1GB, but it's a lot easier to  
believe the code is safe with these changes.  
  
Reported-by: Xint Code  
Reported-by: Bruce Dang <bruce@calif.io>  
Author: Tom Lane <tgl@sss.pgh.pa.us>  
Co-authored-by: Heikki Linnakangas <hlinnaka@iki.fi>  
Backpatch-through: 14  
Security: CVE-2026-6473  

M src/backend/utils/adt/varlena.c
M src/common/unicode_norm.c
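
The guarded accumulation can be sketched like this. The names are illustrative (the cap mirrors PostgreSQL's MaxAllocSize of 0x3FFFFFFF), and the real loop tests its running total inside the decomposition code rather than through a helper.

```c
#include <stdbool.h>
#include <stddef.h>

/* stand-in for MaxAllocSize (1 GB - 1) */
#define MAX_ALLOC_SKETCH ((size_t) 0x3FFFFFFF)

/*
 * Grow the running output size only after checking that it will stay under
 * the cap, breaking out early instead of letting the counter wrap around
 * to a small value and drive an undersized allocation.
 */
bool grow_decomp_size_sketch(size_t *decomp_size, size_t ncodepoints,
                             size_t codepoint_width)
{
    if (ncodepoints > (MAX_ALLOC_SKETCH - *decomp_size) / codepoint_width)
        return false;           /* would exceed the cap: stop early */
    *decomp_size += ncodepoints * codepoint_width;
    return true;
}
```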

Harden our regex engine against integer overflow in size calculations.

commit   : e3a2bea41c0c953feec0cc2468a434f12a60cc78    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 11 May 2026 05:13:48 -0700    
  
committer: Noah Misch <noah@leadboat.com>    
date     : Mon, 11 May 2026 05:13:48 -0700    

The number of NFA states, number of NFA arcs, and number of colors  
are all bounded to reasonably small values.  However, there are  
places where we try to allocate arrays sized by products of those  
quantities, and those calculations could overflow, enabling  
buffer-overrun attacks.  In practice there's no problem on 64-bit  
machines, but there are some live scenarios on 32-bit machines.  
  
A related problem is that citerdissect() and creviterdissect()  
allocate arrays based on the length of the input string, which  
potentially could overflow.  
  
To fix, invent MALLOC_ARRAY and REALLOC_ARRAY macros that rely on  
palloc_array_extended and repalloc_array_extended with the NO_OOM  
option, similarly to the existing MALLOC and REALLOC macros.  
(Like those, they'll throw an error rather than returning a NULL result
for oversize requests.  This doesn't really fit into the regex code's
view of error handling, but it'll do for now.  We can consider  
whether to change that behavior in a non-security follow-up patch.)  
  
I installed similar defenses in the colormap construction code.  
It's not entirely clear whether integer overflow is possible  
there, but analyzing the behavior in detail seems not worth  
the trouble, as the risky spots are not in hot code paths.  
  
I left a bunch of calls as-is after verifying that they can't  
overflow given reasonable limits on nstates and narcs.  Those  
limits were enforced already via REG_MAX_COMPILE_SPACE, but  
add commentary to document the interactions.  
  
In passing, also fix a related edge case, which is that the  
special color numbers used in LACON carcs could overflow the  
"color" data type, if ncolors is close to MAX_COLOR.  
  
In v14 and v15, the regex engine calls malloc() directly instead  
of using palloc(), so MALLOC_ARRAY and REALLOC_ARRAY do likewise.  
  
Reported-by: Xint Code  
Author: Tom Lane <tgl@sss.pgh.pa.us>  
Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com>  
Backpatch-through: 14  
Security: CVE-2026-6473  

M src/backend/regex/regc_color.c
M src/backend/regex/regc_cvec.c
M src/backend/regex/regc_nfa.c
M src/backend/regex/regcomp.c
M src/backend/regex/rege_dfa.c
M src/backend/regex/regexec.c
M src/include/regex/regcustom.h
M src/include/regex/regguts.h

Make palloc_array() and friends safe against integer overflow.

commit   : fe2720c450655a9986dd731a62e476ea3e1313b0    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 11 May 2026 05:13:48 -0700    
  
committer: Noah Misch <noah@leadboat.com>    
date     : Mon, 11 May 2026 05:13:48 -0700    

Sufficiently large "count" arguments could result in undetected  
overflow, causing the allocated memory chunk to be much smaller  
than what the caller will subsequently write into it.  This is  
unlikely to be a hazard with 64-bit size_t but can sometimes  
happen on 32-bit builds, primarily where a function allocates  
workspace that's significantly larger than its input data.  
Rather than trying to patch the at-risk callers piecemeal,  
let's just redefine these macros so that they always check.  
  
To do that, move the longstanding add_size() and mul_size() functions  
into palloc.h and mcxt.c, and adjust them to not be specific to  
shared-memory allocation.  Then invent palloc_mul(), palloc0_mul(),  
palloc_mul_extended() to use these functions.  Actually, the latter  
use inlined copies to save one function call.  repalloc_array() gets  
similar treatment.  I didn't bother trying to inline the calls for  
repalloc0_array() though.  
  
In v14 and v15, this also adds repalloc_extended(), which previously  
was only available in v16 and up.  
  
We need copies of all this in fe_memutils.[hc] as well, since that  
module also provides palloc_array() etc.  
  
Reported-by: Xint Code  
Author: Tom Lane <tgl@sss.pgh.pa.us>  
Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com>  
Backpatch-through: 14  
Security: CVE-2026-6473  

M src/backend/storage/ipc/shmem.c
M src/backend/utils/mmgr/mcxt.c
M src/common/fe_memutils.c
M src/include/common/fe_memutils.h
M src/include/storage/shmem.h
M src/include/utils/memutils.h
M src/include/utils/palloc.h
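
The overflow check now baked into palloc_array() can be sketched in isolation, with malloc() standing in for palloc() and an invented function name.

```c
#include <stdint.h>
#include <stdlib.h>

/*
 * Reject a count * elem_size product that would wrap around size_t before
 * it reaches the allocator; an unchecked product could yield a chunk much
 * smaller than what the caller subsequently writes into it.
 */
void *alloc_array_checked(size_t count, size_t elem_size)
{
    if (elem_size != 0 && count > SIZE_MAX / elem_size)
        return NULL;            /* product would overflow size_t */
    return malloc(count * elem_size);
}
```

On 64-bit builds the product rarely wraps, which is why these bugs surfaced mainly on 32-bit systems; the check makes the behavior uniform.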

Add pg_add_size_overflow() and friends

commit   : 00e243e6791ecc4559282da5ead7e21ae69b8174    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 11 May 2026 05:13:48 -0700    
  
committer: Noah Misch <noah@leadboat.com>    
date     : Mon, 11 May 2026 05:13:48 -0700    

Commit 600086f47 added (several bespoke copies of) size_t addition with  
overflow checks to libpq. Move this to common/int.h, along with  
its subtraction and multiplication counterparts.  
  
pg_neg_size_overflow() is intentionally omitted; I'm not sure we should  
add SSIZE_MAX to win32_port.h for the sake of a function with no  
callers.  
  
Back-patch of commit 8934f2136, done now because pg_add_size_overflow()  
and friends are needed more widely for security fixes.  
  
Author: Jacob Champion <jacob.champion@enterprisedb.com>  
Reviewed-by: Chao Li <li.evan.chao@gmail.com>  
Reviewed-by: Michael Paquier <michael@paquier.xyz>  
Discussion: https://postgr.es/m/CAOYmi%2B%3D%2BpqUd2MUitvgW1pAJuXgG_TKCVc3_Ek7pe8z9nkf%2BAg%40mail.gmail.com  
Backpatch-through: 14
Security: CVE-2026-6473  

M src/include/common/int.h
M src/interfaces/libpq/fe-exec.c
M src/interfaces/libpq/fe-print.c
M src/interfaces/libpq/fe-protocol3.c
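
A sketch of the helper family's contract (return true on overflow, store the result only on success); the in-tree versions in common/int.h may prefer compiler builtins where available.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stddef.h>

/*
 * size_t addition with overflow detection: returns true if a + b would
 * wrap around, otherwise stores the sum and returns false.
 */
bool add_size_overflow_sketch(size_t a, size_t b, size_t *result)
{
    if (a > SIZE_MAX - b)
        return true;            /* sum would wrap around */
    *result = a + b;
    return false;
}
```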

Fix overflows with ts_headline()

commit   : 3ed3dbbf440a81590ade81ac6553dc7380a26561    
  
author   : Michael Paquier <michael@paquier.xyz>    
date     : Mon, 11 May 2026 05:13:48 -0700    
  
committer: Noah Misch <noah@leadboat.com>    
date     : Mon, 11 May 2026 05:13:48 -0700    

The options "StartSel", "StopSel" and "FragmentDelimiter" given by a
caller of the SQL function ts_headline() have their lengths stored as
int16.  When providing values longer than PG_INT16_MAX bytes, the
stored length values could overflow, leading to incorrect behavior in
generateHeadline(), in most cases translating to a crash.
  
Attempting to use values for these options larger than PG_INT16_MAX is  
now blocked.  Some test cases are added to cover our tracks.  
  
Reported-by: Xint Code  
Author: Michael Paquier <michael@paquier.xyz>  
Backpatch-through: 14  
Security: CVE-2026-6473  

M src/backend/tsearch/wparser_def.c
M src/test/regress/expected/tsearch.out
M src/test/regress/sql/tsearch.sql

ltree: Fix overflows with lquery parsing

commit   : 8c3426110934af82590eb212ad5a282a9c5f7070    
  
author   : Michael Paquier <michael@paquier.xyz>    
date     : Mon, 11 May 2026 05:13:48 -0700    
  
committer: Noah Misch <noah@leadboat.com>    
date     : Mon, 11 May 2026 05:13:48 -0700    

Click here for diff

The lquery parser in contrib/ltree/ had two overflow problems:  
- A single lquery level with many OR-separated variants (e.g.  
'label1|label2|...') could overflow totallen, which is stored as a  
uint16 and hence capped at UINT16_MAX (65535).  Each variant  
contributes MAXALIGN(LVAR_HDRSIZE + len) bytes, so with enough long  
variants the value wraps around.  This corrupts the data written by  
LQL_NEXT(), leading to stack corruption, most likely a crash, but  
potentially incorrect memory access.  
- numvar, also a uint16, counts the number of OR-variants in a single  
level, and it was incremented without bounds checking.  With more than  
PG_UINT16_MAX (65535) variants in a single level, requiring at least  
131kB of input data, it would wrap to 0.  When a '*' wildcard is used,  
this would silently change the query results.  
  
For both issues, overflow checks are added to guard against these  
problematic patterns.  
  
The first issue was reported by the three people listed below; it  
affects v16 and newer versions due to b1665bf01e5f, though the coding  
was already unsafe in v14 and v15.  The second issue affects all the  
stable branches; I bumped into it while reviewing the module's code.  
  
Reported-by: Vergissmeinnicht <vergissmeinnichtzh@gmail.com>  
Reported-by: A1ex <alex000young@gmail.com>  
Reported-by: Jihe Wang <wangjihe.mail@gmail.com>  
Author: Michael Paquier <michael@paquier.xyz>  
Security: CVE-2026-6473  
Backpatch-through: 14  

M contrib/ltree/expected/ltree.out
M contrib/ltree/ltree_io.c
M contrib/ltree/sql/ltree.sql

Translation updates

commit   : 9b6a0445619efb8a1188ba8146774409c4450708    
  
author   : Peter Eisentraut <peter@eisentraut.org>    
date     : Mon, 11 May 2026 13:05:05 +0200    
  
committer: Peter Eisentraut <peter@eisentraut.org>    
date     : Mon, 11 May 2026 13:05:05 +0200    

Click here for diff

Source-Git-URL: https://git.postgresql.org/git/pgtranslation/messages.git  
Source-Git-Hash: 97ae266c2ddd348e9eed1f5677081cc69134b0c5  

M src/backend/po/de.po
M src/backend/po/es.po
M src/backend/po/ka.po
M src/backend/po/ru.po
M src/bin/pg_basebackup/po/ru.po
M src/bin/pg_combinebackup/po/ru.po
M src/bin/pg_dump/po/de.po
M src/bin/pg_dump/po/fr.po
M src/bin/pg_dump/po/ru.po
M src/bin/pg_test_fsync/po/ru.po
M src/bin/pg_test_timing/po/de.po
M src/bin/pg_test_timing/po/fr.po
M src/bin/pg_test_timing/po/ru.po
M src/bin/pg_upgrade/po/ru.po
M src/bin/pg_verifybackup/po/ru.po
M src/interfaces/ecpg/ecpglib/po/ru.po

Release notes for 18.4, 17.10, 16.14, 15.18, 14.23.

commit   : c2e6deea668b6e001b0c4a8b24322ce38ff999e5    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sun, 10 May 2026 12:07:32 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sun, 10 May 2026 12:07:32 -0400    

Click here for diff

M doc/src/sgml/release-17.sgml

postgres_fdw: Fix handling of abort-cleanup-failed connections.

commit   : af8f9248fbe72f8c19f4f20daf38495afdd0a26b    
  
author   : Etsuro Fujita <efujita@postgresql.org>    
date     : Tue, 5 May 2026 18:55:03 +0900    
  
committer: Etsuro Fujita <efujita@postgresql.org>    
date     : Tue, 5 May 2026 18:55:03 +0900    

Click here for diff

Connections that failed abort cleanup can't safely be used any  
further, so if a remote query tries to get such a connection, we  
reject it.  Previously, this rejection dropped the connection if it  
was still open, without accounting for open cursors that might still  
be using it; when such a cursor later tried to use the already-dropped  
connection, a cursor-handling function (create_cursor,  
fetch_more_data, or close_cursor) was called on a freed PGconn,  
crashing the server.  To fix, delay dropping failed connections until  
abort cleanup of the main transaction, so that open cursors using such  
a connection can safely refer to its PGconn.  
  
Oversight in commit 8bf58c0d9.  
  
Reported-by: Zhibai Song <songzhibai1234@gmail.com>  
Diagnosed-by: Zhibai Song <songzhibai1234@gmail.com>  
Author: Etsuro Fujita <etsuro.fujita@gmail.com>  
Reviewed-by: Michael Paquier <michael@paquier.xyz>  
Reviewed-by: Chao Li <li.evan.chao@gmail.com>  
Reviewed-by: Matheus Alcantara <matheusssilv97@gmail.com>  
Discussion: https://postgr.es/m/CAPmGK176y6JP017-Cn%2BhS9CEJx_6iVhRoYbAqzuLU4d8-XPPNg%40mail.gmail.com  
Backpatch-through: 14  

M contrib/postgres_fdw/connection.c
M contrib/postgres_fdw/expected/postgres_fdw.out
M contrib/postgres_fdw/sql/postgres_fdw.sql

Consider collation when proving subquery uniqueness

commit   : 13226050e85d27759c20786504efe9f5994f41ee    
  
author   : Richard Guo <rguo@postgresql.org>    
date     : Tue, 5 May 2026 10:29:01 +0900    
  
committer: Richard Guo <rguo@postgresql.org>    
date     : Tue, 5 May 2026 10:29:01 +0900    

Click here for diff

rel_is_distinct_for()'s RTE_SUBQUERY branch passed only the equality  
operator from each join clause to query_is_distinct_for(), discarding  
the operator's input collation.  query_is_distinct_for() then verified  
opfamily compatibility but never checked collations, so a DISTINCT /  
GROUP BY / set-op operating under one collation was trusted to prove  
uniqueness for a comparison performed under an unrelated collation.  
As with the recent fix in relation_has_unique_index_for(), this is  
unsound for nondeterministic collations and yields wrong query results  
in any optimization that consumes the proof.  
  
Fix by carrying each clause's operator input collation into  
query_is_distinct_for() and validating it at every check-site against  
the subquery target expression's collation.  
  
Back-patch to all supported branches.  query_is_distinct_for() is  
declared in an installed header, so on stable branches the existing  
two-list signature is retained as a thin wrapper that forwards to a  
new collation-aware entry point; external callers continue to receive  
the historical collation-blind answer.  
  
Author: Richard Guo <guofenglinux@gmail.com>  
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>  
Discussion: https://postgr.es/m/CAMbWs4_XUUSTyzCaRjUeeahWNqi=8ZOA5Q4coi8zUVEDSBkM6A@mail.gmail.com  
Backpatch-through: 14  

M src/backend/optimizer/plan/analyzejoins.c
M src/test/regress/expected/collate.icu.utf8.out
M src/test/regress/sql/collate.icu.utf8.sql
M src/tools/pgindent/typedefs.list

Consider collation when proving uniqueness from unique indexes

commit   : d0e73bb18017f54fb406f0595709ceb4ae3962c9    
  
author   : Richard Guo <rguo@postgresql.org>    
date     : Tue, 5 May 2026 10:28:24 +0900    
  
committer: Richard Guo <rguo@postgresql.org>    
date     : Tue, 5 May 2026 10:28:24 +0900    

Click here for diff

relation_has_unique_index_for() has long had an XXX noting that it  
doesn't check collations when matching a unique index's columns  
against equality clauses.  This was benign as long as all collations  
in play reduced to the same notion of equality, but has been incorrect  
since nondeterministic collations were introduced in PG 12: a unique  
index under a deterministic collation does not prove uniqueness under  
a nondeterministic collation, nor vice versa.  
  
The consequence is wrong query results for any planner optimization  
that consumes the faulty proof, including inner-unique join execution  
(which stops the inner search after the first match per outer row),  
useless-left-join removal, semijoin-to-innerjoin reduction, and  
self-join elimination.  
  
Fix by requiring the index's collation to agree on equality with the  
clause's input collation.  Two collations agree on equality if either  
is InvalidOid (denoting a non-collation-sensitive operation, which  
cannot conflict with the other side), if they have the same OID, or if  
both are deterministic: by definition a deterministic collation treats  
two strings as equal iff they are byte-wise equal (see CREATE  
COLLATION), so any two deterministic collations share the same  
equality relation and the uniqueness proof carries over.  Any mismatch  
involving a nondeterministic collation is rejected.  
  
Back-patch to all supported branches; the bug has existed since  
nondeterministic collations were introduced in PG 12.  
  
Author: Richard Guo <guofenglinux@gmail.com>  
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>  
Discussion: https://postgr.es/m/CAMbWs4_XUUSTyzCaRjUeeahWNqi=8ZOA5Q4coi8zUVEDSBkM6A@mail.gmail.com  
Backpatch-through: 14  

M src/backend/optimizer/path/indxpath.c
M src/backend/utils/cache/lsyscache.c
M src/include/utils/lsyscache.h
M src/test/regress/expected/collate.icu.utf8.out
M src/test/regress/sql/collate.icu.utf8.sql

Mark modified FSM buffers as dirty during recovery

commit   : 1cf010f216138c6d580107589c9e42de72da37f2    
  
author   : Alexander Korotkov <akorotkov@postgresql.org>    
date     : Sun, 3 May 2026 20:23:50 +0300    
  
committer: Alexander Korotkov <akorotkov@postgresql.org>    
date     : Sun, 3 May 2026 20:23:50 +0300    

Click here for diff

XLogRecordPageWithFreeSpace() updates free space map (FSM) data while  
replaying data-level WAL records during recovery.  If an FSM block is  
updated, it needs to be marked as modified.  Currently this is done  
with MarkBufferDirtyHint(), as in all other places that modify FSM  
data.  However, in the recovery context that function does nothing if  
checksums are enabled: the assumption is that hint changes must not  
dirty pages during recovery, to protect against torn pages, since no  
new WAL (and hence no full-page image, FPI) can be generated at that  
point.  
  
That logic does not fit the FSM case, since FSM blocks are simply  
zeroed if a checksum mismatch is detected.  As things stand, changes  
to an FSM block can be lost if changes to it occur infrequently enough  
that it gets evicted from the buffer cache between them: to persist, a  
modification has to be made while the FSM block is still in shared  
buffers and marked dirty after receiving its FPI.  If the block has  
already been cleaned, the change won't be persisted, and stored FSM  
blocks may remain in an obsolete state.  
  
If a large number of discrepancies between the data in leaf FSM blocks and the  
actual data blocks accumulate on the replica server, this could cause  
significant delays in insert operations after switchover. Such an insert  
operation may need to visit many data blocks marked as having sufficient  
space in the FSM, only to discover that the information is incorrect and the  
FSM records need to be corrected. In a heavily trafficked insert-only table  
with many concurrent clients performing inserts, this has been observed to  
cause several-second stalls, causing visible application malfunction. The  
desire to avoid such cases was the reason behind the commit ab7dbd681, which  
introduced an update of FSM data during the heap_xlog_visible invocation.  
However, an update to the FSM data on the standby side could be lost due to a  
missing 'dirty' flag, so there is still a possibility that a large number of  
FSM records will contain incorrect data. Note that having a zeroed FSM page  
in such a case (due to a checksum mismatch) is preferable, as a zero value  
will be interpreted as an indication of full data blocks, and the inserter  
will be routed to the next FSM block or to the end of the table.  
  
Given that FSM is ready to handle torn page writes and  
XLogRecordPageWithFreeSpace is called only during the recovery, there seems  
to be no reason to use MarkBufferDirtyHint here instead of a regular  
MarkBufferDirty call.  
  
Discussion: https://postgr.es/m/596c4f1c-f966-4512-b9c9-dd8fbcaf0928%40postgrespro.ru  
Author: Alexey Makhmutov <a.makhmutov@postgrespro.ru>  
Reviewed-by: Andrey Borodin <x4mmm@yandex-team.ru>  
Reviewed-by: Melanie Plageman <melanieplageman@gmail.com>  
Reviewed-by: Alexander Korotkov <aekorotkov@gmail.com>  

M src/backend/storage/freespace/freespace.c

Add missing connection validation in ECPG

commit   : 5d67549d941258d6d642ce24688c4c82bfd6e0a1    
  
author   : Andrew Dunstan <andrew@dunslane.net>    
date     : Fri, 1 May 2026 15:12:28 -0400    
  
committer: Andrew Dunstan <andrew@dunslane.net>    
date     : Fri, 1 May 2026 15:12:28 -0400    

Click here for diff

ECPGdeallocate_all(), ECPGprepared_statement(), ECPGget_desc(), and  
ecpg_freeStmtCacheEntry() could crash with a SIGSEGV when called  
without an established connection (for example, when EXEC SQL CONNECT  
was forgotten or a non-existent connection name was used), because  
they dereferenced the result of ecpg_get_connection() without first  
checking it for NULL.  
  
Each site is fixed in the style of the surrounding code.  
  
New tests are added for these conditions.  
  
Author: Shruthi Gowda <gowdashru@gmail.com>  
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>  
Reviewed-by: Fujii Masao <masao.fujii@gmail.com>  
Reviewed-by: Mahendra Singh Thalor <mahi6run@gmail.com>  
Reviewed-by: Nishant Sharma <nishant.sharma@enterprisedb.com>  
Discussion: https://postgr.es/m/3007317.1765210195@sss.pgh.pa.us  
Backpatch-through: 14  

M src/interfaces/ecpg/ecpglib/descriptor.c
M src/interfaces/ecpg/ecpglib/prepare.c
M src/interfaces/ecpg/test/connect/.gitignore
M src/interfaces/ecpg/test/connect/Makefile
M src/interfaces/ecpg/test/connect/meson.build
A src/interfaces/ecpg/test/connect/test6.pgc
M src/interfaces/ecpg/test/ecpg_schedule
A src/interfaces/ecpg/test/expected/connect-test6.c
A src/interfaces/ecpg/test/expected/connect-test6.stderr
A src/interfaces/ecpg/test/expected/connect-test6.stdout

doc: Mention validation attempt during ALTER INDEX .. ATTACH PARTITION

commit   : 1f0a58a0c21218e215df7847a08bdf75cf191abc    
  
author   : Michael Paquier <michael@paquier.xyz>    
date     : Fri, 1 May 2026 13:10:40 +0900    
  
committer: Michael Paquier <michael@paquier.xyz>    
date     : Fri, 1 May 2026 13:10:40 +0900    

Click here for diff

Since 9d3e094f12, the command tries to validate the parent index of the  
named index, if invalid.  The documentation did not mention this  
behavior, which could be confusing.  
  
Author: Mohamed Ali <moali.pg@gmail.com>  
Discussion: https://postgr.es/m/CAGnOmWpHu25_LpT=zv7KtetQhqV1QEZzFYLd_TDyOLu1Od9fpw@mail.gmail.com  
Backpatch-through: 14  

M doc/src/sgml/ref/alter_index.sgml

Fix attnum remapping in generateClonedExtStatsStmt()

commit   : a0104b4474e92a130c742dfa36da1aa8e221fa61    
  
author   : Andrew Dunstan <andrew@dunslane.net>    
date     : Thu, 30 Apr 2026 11:04:57 -0400    
  
committer: Andrew Dunstan <andrew@dunslane.net>    
date     : Thu, 30 Apr 2026 11:04:57 -0400    

Click here for diff

When cloning extended statistics via CREATE TABLE ... LIKE ... INCLUDING  
STATISTICS, stxkeys holds attribute numbers from the source (parent)  
table, but get_attname() was being called with the child relation's  
OID.  If the parent has dropped columns, the child's attribute numbers  
are renumbered sequentially and no longer match, so the lookup either  
returns the wrong column name (silent corruption) or errors out when  
the attnum does not exist in the child.  
  
Fix it by remapping the parent attnum through attmap before the lookup,  
consistent with how expression statistics are already handled a few  
lines below.  
  
Add a regression test covering both manifestations: a 3-column parent  
where the stale attnum refers to no child column (cache-lookup error),  
and a 4-column parent where the stale attnum silently refers to the  
wrong child column.  
  
Author: Julien Tachoires <julmon@gmail.com>  
Reviewed-by: Srinath Reddy Sadipiralla <srinath2133@gmail.com>  
Discussion: https://postgr.es/m/20260415105718.tomuncfbmlt67oel@poseidon.home.virt  
Backpatch-through: 14  

M src/backend/parser/parse_utilcmd.c
M src/test/regress/expected/create_table_like.out
M src/test/regress/sql/create_table_like.sql

Fix errno check based on EINTR in pg_flush_data()

commit   : 5499be3325d8d7d53795214c98495f378c501fe1    
  
author   : Michael Paquier <michael@paquier.xyz>    
date     : Thu, 30 Apr 2026 18:44:43 +0900    
  
committer: Michael Paquier <michael@paquier.xyz>    
date     : Thu, 30 Apr 2026 18:44:43 +0900    

Click here for diff

Upon failure of sync_file_range(), EINTR was checked against the  
routine's return value rather than its errno.  Since sync_file_range()  
returns -1 on failure, the check could never match, defeating the  
intended retry in this case.  
  
Oversight in 0d369ac65004.  
  
Author: DaeMyung Kang <charsyam@gmail.com>  
Discussion: https://postgr.es/m/20260429151811.1810874-1-charsyam@gmail.com  
Backpatch-through: 16  

M src/backend/storage/file/fd.c

Suppress "has no symbols" linker warnings on macOS.

commit   : 455ddabc8d9ece7e9d805dd11606b42fa06d2e25    
  
author   : Nathan Bossart <nathan@postgresql.org>    
date     : Wed, 29 Apr 2026 12:25:09 -0500    
  
committer: Nathan Bossart <nathan@postgresql.org>    
date     : Wed, 29 Apr 2026 12:25:09 -0500    

Click here for diff

After a recent macOS update, building Postgres produces warnings  
that look like this:  
  
    ranlib: warning: 'libpgport_shlib.a(pg_cpu_x86.c.o)' has no symbols  
    ranlib: warning: 'libpgport_shlib.a(pg_popcount_x86.c.o)' has no symbols  
  
To fix, add a dummy symbol to files that may otherwise have none.  
Per project policy, this is a candidate for back-patching into  
out-of-support branches: it suppresses annoying compiler warnings  
but changes no behavior.  
  
Reported-by: Zhang Mingli <zmlpostgres@gmail.com>  
Reviewed-by: John Naylor <johncnaylorls@gmail.com>  
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>  
Discussion: https://postgr.es/m/229aaaf3-f529-44ed-8e50-00cb6909af21%40Spark  
Backpatch-through: 13  

M src/common/protocol_openssl.c

test_tidstore: Stabilize regression tests by sorting offsets.

commit   : 0b3f72f8816b06bfa8e4806f959c85040a765031    
  
author   : Masahiko Sawada <msawada@postgresql.org>    
date     : Wed, 29 Apr 2026 09:10:10 -0700    
  
committer: Masahiko Sawada <msawada@postgresql.org>    
date     : Wed, 29 Apr 2026 09:10:10 -0700    

Click here for diff

TidStoreSetBlockOffsets() requires its offsets array to be strictly  
ascending and asserts this precondition. In test_tidstore, we were  
passing random offset numbers deduplicated by a DISTINCT clause in an  
array_agg() call directly to the do_set_block_offsets() test  
harness. However, DISTINCT without an ORDER BY clause does not  
guarantee sorted results according to the SQL standard.  
  
Fix this by sorting the offsets in-place inside do_set_block_offsets()  
before calling TidStoreSetBlockOffsets().  
  
While this assertion failure is not observed during regular regression  
tests because they use queries simple enough that the optimizer  
consistently chooses plans yielding sorted results, it makes sense to  
stabilize the test. The failure could theoretically occur depending on  
the optimizer's plan choice, and has been reported when experimenting  
with certain third-party extensions.  
  
Backpatch to v17, where test_tidstore was introduced, to ensure  
extension development on stable branches does not hit this assertion.  
  
Reported-by: Andrei Lepikhov <lepihov@gmail.com>  
Author: Andrei Lepikhov <lepihov@gmail.com>  
Discussion: https://postgr.es/m/b97f1850-fc7b-43c4-9b04-4e97bb9e7dc0@gmail.com  
Backpatch-through: 17  

M src/test/modules/test_tidstore/test_tidstore.c

doc: Fix grammar in some logical replication pages

commit   : 21777c21d0d872df7100d0b3a0e8c356e51e5ff3    
  
author   : Michael Paquier <michael@paquier.xyz>    
date     : Mon, 27 Apr 2026 16:17:26 +0900    
  
committer: Michael Paquier <michael@paquier.xyz>    
date     : Mon, 27 Apr 2026 16:17:26 +0900    

Click here for diff

Author: Peter Smith <smithpb2250@gmail.com>  
Discussion: https://postgr.es/m/CAHut+PuvY_wYLPJ4DTs7NE9Lu2ty4d-OgZAOJC-NvCM=2wwcQQ@mail.gmail.com  
Backpatch-through: 14  

M doc/src/sgml/logical-replication.sgml
M doc/src/sgml/ref/create_publication.sgml

Update time zone data files to tzdata release 2026b.

commit   : 4c0eab6f0bf4c4b689a71a361d5528a854ba4557    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 24 Apr 2026 12:28:35 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 24 Apr 2026 12:28:35 -0400    

Click here for diff

British Columbia (America/Vancouver) moved to permanent UTC-07 on  
2026-03-09, which will affect their clocks beginning on 2026-11-01.  
For lack of any clarity on the point, assume their TZ abbreviation  
will be MST from that time forward.  
  
Moldova (Europe/Chisinau) has followed EU DST transition times since  
2022.  
  
Backpatch-through: 14  

M src/timezone/data/tzdata.zi

Fix incorrect logic for hashed IN / NOT IN with non-strict operators

commit   : 3fda3e12f41b5e64d665f881f38ccba5d8d31e02    
  
author   : David Rowley <drowley@postgresql.org>    
date     : Fri, 24 Apr 2026 14:04:05 +1200    
  
committer: David Rowley <drowley@postgresql.org>    
date     : Fri, 24 Apr 2026 14:04:05 +1200    

Click here for diff

ExecEvalHashedScalarArrayOp(), when using a strict equality function,  
short-circuits lookups of NULL values.  When the function is  
non-strict, the code incorrectly probed the hash table with a  
zero-valued Datum, which could result in an accidental true return if  
the hash table contained a zero-valued Datum, or in a crash for  
non-byval types.  
  
Here we fix this by adding an extra step when we build the hash table to  
check what the result of a NULL lookup would be.  This requires looping  
over the array and checking what the non-hashed version of the code  
would do.  We cache the results of that in the expression so that we can  
reuse the result any time we're asked to search for a NULL value.  
  
It's important to note that non-strict equality functions are free to  
treat any NULL value as equal to any non-NULL value.  For example,  
someone may wish to design a type that treats an empty string and NULL  
as equal.  
  
All built-in types have strict equality functions, so this could affect  
custom / user-defined types.  
  
Author: Chengpeng Yan <chengpeng_yan@outlook.com>  
Author: David Rowley <dgrowleyml@gmail.com>  
Reviewed-by: ChangAo Chen <cca5507@qq.com>  
Discussion: https://postgr.es/m/A16187AE-2359-4265-9F5E-71D015EC2B2D@outlook.com  
Backpatch-through: 14  

M src/backend/executor/execExprInterp.c
M src/include/executor/execExpr.h
M src/test/regress/expected/expressions.out
M src/test/regress/sql/expressions.sql

pg_test_timing: fix unit in backward-clock warning

commit   : 1fdf1c63744c301bef44bf1723ba44961a4c4af5    
  
author   : Fujii Masao <fujii@postgresql.org>    
date     : Fri, 24 Apr 2026 08:59:14 +0900    
  
committer: Fujii Masao <fujii@postgresql.org>    
date     : Fri, 24 Apr 2026 08:59:14 +0900    

Click here for diff

pg_test_timing reports timing differences in nanoseconds in master, and  
in microseconds in v14 through v18, but previously the backward-clock  
warning incorrectly labeled the value as milliseconds.  
  
This commit fixes the warning message to use "ns" in master and  
"us" in v14 through v18, matching the actual unit being reported.  
  
Backpatch to all supported versions.  
  
Author: Chao Li <lic@highgo.com>  
Reviewed-by: Lukas Fittl <lukas@fittl.com>  
Reviewed-by: Xiaopeng Wang <wxp_728@163.com>  
Reviewed-by: Fujii Masao <masao.fujii@gmail.com>  
Discussion: https://postgr.es/m/F780CEEB-A237-4302-9F55-60E9D8B6533D@gmail.com  
Backpatch-through: 14  

M src/bin/pg_test_timing/pg_test_timing.c

Don't call CheckAttributeType() with InvalidOid on dropped cols

commit   : d54e75441518da3207fa5a44f34d300956d3c2c2    
  
author   : Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Thu, 23 Apr 2026 21:05:27 +0300    
  
committer: Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Thu, 23 Apr 2026 21:05:27 +0300    

Click here for diff

If CheckAttributeType() is called with InvalidOid, it performs a bunch  
of pointless syscache lookups with InvalidOid, but ultimately  
tolerates them and has no effect.  We were calling it with InvalidOid  
on dropped columns; it seems accidental that this works, so let's stop  
doing it.  
  
Reviewed-by: Chao Li <li.evan.chao@gmail.com>  
Discussion: https://www.postgresql.org/message-id/93ce56cd-02a6-4db1-8224-c8999372facc@iki.fi  
Backpatch-through: 14  

M src/backend/catalog/heap.c

Don't allow composite type to be member of itself via multirange

commit   : 54343f6f9046caa31c71a72479fd567322d72591    
  
author   : Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Thu, 23 Apr 2026 21:28:11 +0300    
  
committer: Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Thu, 23 Apr 2026 21:28:11 +0300    

Click here for diff

CheckAttributeType() checks that a composite type is not made a member  
of itself with ALTER TABLE ADD COLUMN or ALTER TYPE ADD ATTRIBUTE,  
even indirectly via a domain, array, another composite type or a range  
type. But it missed checking for multiranges. That was a simple  
oversight when multiranges were added.  
  
Reviewed-by: Chao Li <li.evan.chao@gmail.com>  
Discussion: https://www.postgresql.org/message-id/93ce56cd-02a6-4db1-8224-c8999372facc@iki.fi  
Backpatch-through: 14  

M src/backend/catalog/heap.c
M src/test/regress/expected/multirangetypes.out
M src/test/regress/sql/multirangetypes.sql

catcache.c: use C_COLLATION_OID for texteqfast/texthashfast.

commit   : 9dda30dd3d4936e3d590fc319b08eaed41f1f748    
  
author   : Jeff Davis <jdavis@postgresql.org>    
date     : Wed, 22 Apr 2026 10:22:44 -0700    
  
committer: Jeff Davis <jdavis@postgresql.org>    
date     : Wed, 22 Apr 2026 10:22:44 -0700    

Click here for diff

The problem report was about setting GUCs in the startup packet for a  
physical replication connection. Setting the GUC required an ACL  
check, which performed a lookup on pg_parameter_acl.parname. The  
catalog cache was hardwired to use DEFAULT_COLLATION_OID for  
texteqfast() and texthashfast(), but the database default collation  
was uninitialized because it's a physical walsender and never connects  
to a database. In versions 18 and later, this resulted in a NULL  
pointer dereference, while in version 17 it resulted in an ERROR.  
  
As the comments stated, using DEFAULT_COLLATION_OID was arbitrary  
anyway: if the collation actually mattered, it should have been the  
column's actual collation.  (In the catalogs, some text columns use  
the default collation and some use "C".)  
  
Fix by using C_COLLATION_OID, which doesn't require any initialization  
and is always available. When any deterministic collation will do,  
it's best to consistently use the simplest and fastest one, so this is  
a good idea anyway.  
  
Another problem was raised in the thread, which this commit doesn't  
fix (see second discussion link).  
  
Reported-by: Andrey Borodin <x4mmm@yandex-team.ru>  
Discussion: https://postgr.es/m/D18AD72A-5004-4EF8-AF80-10732AF677FA@yandex-team.ru  
Discussion: https://postgr.es/m/4524ed61a015d3496fc008644dcb999bb31916a7.camel%40j-davis.com  
Backpatch-through: 17  

M src/backend/utils/cache/catcache.c

Guard against overly-long numeric formatting symbols from locale.

commit   : c97a2861851d3f92fb235ec80bd426b3f5a3d28c    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Wed, 22 Apr 2026 12:41:00 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Wed, 22 Apr 2026 12:41:00 -0400    

Click here for diff

to_char() allocates its output buffer with 8 bytes per formatting  
code in the pattern.  If the locale's currency symbol, thousands  
separator, or decimal or sign symbol is more than 8 bytes long, in  
principle we could overrun the output buffer.  No such locales exist  
in the real world, so it seems sufficient to truncate the symbol if  
we see one that is too long.  
  
Reported-by: Xint Code  
Author: Tom Lane <tgl@sss.pgh.pa.us>  
Discussion: https://postgr.es/m/638232.1776790821@sss.pgh.pa.us  
Backpatch-through: 14  

M src/backend/utils/adt/formatting.c

Prevent some buffer overruns in spell.c's parsing of affix files.

commit   : ea5f0d176a9d40df0ee6096203e1d1452f8db200    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Wed, 22 Apr 2026 12:02:15 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Wed, 22 Apr 2026 12:02:15 -0400    

Click here for diff

parse_affentry() and addCompoundAffixFlagValue() each collect fields  
from an affix file into working buffers of size BUFSIZ.  They failed  
to defend against overlength fields, so that a malicious affix file  
could cause a stack smash.  BUFSIZ (typically 8K) is certainly way  
longer than any reasonable affix field, but let's fix this while  
we're closing holes in this area.  
  
I chose to do this by silently truncating the input before it can  
overrun the buffer, using logic comparable to the existing logic in  
get_nextfield().  Certainly there's at least as good an argument for  
raising an error, but for now let's follow the existing precedent.  
  
Reported-by: Igor Stepansky <igor.stepansky@orca.security>  
Author: Tom Lane <tgl@sss.pgh.pa.us>  
Reviewed-by: Andrey Borodin <x4mmm@yandex-team.ru>  
Discussion: https://postgr.es/m/864123.1776810909@sss.pgh.pa.us  
Backpatch-through: 14  

M src/backend/tsearch/spell.c

Prevent buffer overrun in spell.c's CheckAffix().

commit   : a5426dbf841513ac642a1f32c1a240a6960d21bc    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Wed, 22 Apr 2026 10:47:56 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Wed, 22 Apr 2026 10:47:56 -0400    

Click here for diff

This function writes into a caller-supplied buffer of length  
2 * MAXNORMLEN, which should be plenty in real-world cases.  
However a malicious affix file could supply an affix long  
enough to overrun that.  Defend by just rejecting the match  
if it would overrun the buffer.  I also inserted a check of  
the input word length against Affix->replen, just to be sure  
we won't index off the buffer, though it would be caller error  
for that not to be true.  
  
Also make the actual copying steps a bit more readable, and remove  
an unnecessary requirement for the whole input word to fit into the  
output buffer (even though it always will with the current caller).  
  
The lack of documentation in this code makes my head hurt, so  
I also reverse-engineered a basic header comment for CheckAffix.  
  
Reported-by: Xint Code  
Author: Tom Lane <tgl@sss.pgh.pa.us>  
Reviewed-by: Andrey Borodin <x4mmm@yandex-team.ru>  
Discussion: https://postgr.es/m/641711.1776792744@sss.pgh.pa.us  
Backpatch-through: 14  

M src/backend/tsearch/spell.c

Allow ALTER INDEX .. ATTACH PARTITION to validate a parent index

commit   : becf6d26961aabe26facebee2604ef5def9733e5    
  
author   : Michael Paquier <michael@paquier.xyz>    
date     : Wed, 22 Apr 2026 10:34:35 +0900    
  
committer: Michael Paquier <michael@paquier.xyz>    
date     : Wed, 22 Apr 2026 10:34:35 +0900    

This commit tweaks ALTER INDEX .. ATTACH PARTITION to attempt a  
validation of the parent index in the case where an index is already  
attached but the parent is not yet valid.  This occurs when a parent  
index was created invalid, such as with CREATE INDEX .. ON ONLY, and  
was left invalid after an invalid child index was attached (partitioned  
indexes set indisvalid to false if at least one partition is  
!indisvalid; indisvalid is true for a partitioned index iff all  
partitions are indisvalid).  This could leave a partition tree in a  
situation where a user could not bring the parent index back to valid  
after fixing the child index, as there is no built-in mechanism to do  
so.  This commit relies on the fact that repeated ATTACH PARTITION  
commands on the same index silently succeed.  
  
An invalid parent index is more than just a passive issue.  It breaks,  
for example, ON CONFLICT on a partitioned table if the invalid parent  
index is used to enforce a unique constraint.  
  
Some test cases are added to cover the problematic patterns, using a  
set of partition trees with combinations of invalid indexes and ATTACH  
PARTITION.  
  
Reported-by: Mohamed Ali <moali.pg@gmail.com>  
Author: Sami Imseih <sanmimseih@gmail.com>  
Reviewed-by: Michael Paquier <michael@paquier.xyz>  
Reviewed-by: Haibo Yan <tristan.yim@gmail.com>  
Discussion: http://postgr.es/m/CAGnOmWqi1D9ycBgUeOGf6mOCd2Dcf=6sKhbf4sHLs5xAcKVCMQ@mail.gmail.com  
Backpatch-through: 14  

M src/backend/commands/tablecmds.c
M src/test/regress/expected/indexing.out
M src/test/regress/sql/indexing.sql

Make plpgsql_trap test more robust and less resource-intensive.

commit   : 94f1409847e0eb9c6eaaefe3366e53b19641ccfe    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 21 Apr 2026 10:54:39 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 21 Apr 2026 10:54:39 -0400    

We were using "select count(*) into x from generate_series(1,  
1_000_000_000_000)" to waste one second waiting for a statement  
timeout trap.  Aside from consuming CPU to little purpose, this could  
easily eat several hundred MB of temporary file space, which has been  
observed to cause out-of-disk-space errors in the buildfarm.  
Let's just use "pg_sleep(10)", which is far less resource-intensive.  
  
Also update the "when others" exception handler so that if it does  
ever again trap an error, it will tell us what error.  The cause of  
these intermittent buildfarm failures had been obscure for a while.  
  
Discussion: https://postgr.es/m/557992.1776779694@sss.pgh.pa.us  
Backpatch-through: 14  

M src/pl/plpgsql/src/expected/plpgsql_trap.out
M src/pl/plpgsql/src/sql/plpgsql_trap.sql

Fix incorrect NEW references to generated columns in rule rewriting

commit   : 9d6208939a0ad05171e37a106f02c55a591a26f1    
  
author   : Richard Guo <rguo@postgresql.org>    
date     : Tue, 21 Apr 2026 14:31:15 +0900    
  
committer: Richard Guo <rguo@postgresql.org>    
date     : Tue, 21 Apr 2026 14:31:15 +0900    

When a rule action or rule qualification references NEW.col where col  
is a generated column (stored or virtual), the rewriter produces  
incorrect results.  
  
rewriteTargetListIU removes generated columns from the query's target  
list, since stored generated columns are recomputed by the executor  
and virtual ones store nothing.  However, ReplaceVarsFromTargetList  
then cannot find these columns when resolving NEW references during  
rule rewriting.  For UPDATE, the REPLACEVARS_CHANGE_VARNO fallback  
redirects NEW.col to the original target relation, making it read the  
pre-update value (same as OLD.col).  For INSERT,  
REPLACEVARS_SUBSTITUTE_NULL replaces it with NULL.  Both are wrong  
when the generated column depends on columns being modified.  
  
Fix by building target list entries for generated columns from their  
generation expressions, pre-resolving the NEW.attribute references  
within those expressions against the query's targetlist, and passing  
them together with the query's targetlist to ReplaceVarsFromTargetList.  
  
Back-patch to all supported branches.  Virtual generated columns were  
added in v18, so the back-patches in pre-v18 branches only handle  
stored generated columns.  
  
Reported-by: SATYANARAYANA NARLAPURAM <satyanarlapuram@gmail.com>  
Author: Richard Guo <guofenglinux@gmail.com>  
Author: Dean Rasheed <dean.a.rasheed@gmail.com>  
Reviewed-by: Chao Li <li.evan.chao@gmail.com>  
Discussion: https://postgr.es/m/CAHg+QDexGTmCZzx=73gXkY2ZADS6LRhpnU+-8Y_QmrdTS6yUhA@mail.gmail.com  
Backpatch-through: 14  

M src/backend/rewrite/rewriteHandler.c
M src/test/regress/expected/generated.out
M src/test/regress/sql/generated.sql

Fix orphaned processes when startup process fails during PM_STARTUP

commit   : e381843cfaf47f0055a786c4fa4ca7125be6f65e    
  
author   : Michael Paquier <michael@paquier.xyz>    
date     : Tue, 21 Apr 2026 09:40:04 +0900    
  
committer: Michael Paquier <michael@paquier.xyz>    
date     : Tue, 21 Apr 2026 09:40:04 +0900    

When the startup process exits with a FATAL error during PM_STARTUP,  
the postmaster called ExitPostmaster() directly, assuming that no other  
processes are running at this stage.  Since 7ff23c6d277d, this  
assumption is not true, as the checkpointer, the background writer, the  
IO workers, and bgworkers kicking in early would be around.  
  
This commit removes the startup-specific shortcut taken in  
process_pm_child_exit() for a failing startup process during PM_STARTUP,  
falling back to the existing exit flow that signals all started  
children with SIGQUIT, so that there is no risk of creating orphaned  
processes.  
  
This required an extra change in HandleFatalError() for v18 and newer  
versions, where an assertion that could be triggered during PM_STARTUP  
is no longer correct.  In v17 and older versions, HandleChildCrash()  
needs to be changed to handle PM_STARTUP so that children can be  
waited on.  
  
While at it, fix a comment at the top of postmaster.c that claimed the  
checkpointer and the background writer were started after PM_RECOVERY;  
that is not the case.  
  
Author: Ayush Tiwari <ayushtiwari.slg01@gmail.com>  
Discussion: https://postgr.es/m/CAJTYsWVoD3V9yhhqSae1_wqcnTdpFY-hDT7dPm5005ZFsL_bpA@mail.gmail.com  
Backpatch-through: 15  

M src/backend/postmaster/postmaster.c

doc: Correct context description for some JIT support GUCs

commit   : 19fcb75c81314b2dea1ae229e40ffbf9ed0da9e7    
  
author   : Fujii Masao <fujii@postgresql.org>    
date     : Tue, 21 Apr 2026 08:44:19 +0900    
  
committer: Fujii Masao <fujii@postgresql.org>    
date     : Tue, 21 Apr 2026 08:44:19 +0900    

The documentation for jit_debugging_support and jit_profiling_support  
previously stated that these parameters can only be set at server start.  
  
However, both parameters use the PGC_SU_BACKEND context, meaning they  
can be set at session start by superusers or users granted the appropriate  
SET privilege, but cannot be changed within an active session.  
  
This commit updates the documentation to reflect the actual behavior.  
  
Backpatch to all supported versions.  
  
Author: Fujii Masao <masao.fujii@gmail.com>  
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>  
Discussion: https://postgr.es/m/CAHGQGwEpMDpB-K8SSUVRRHg6L6z3pLAkekd9aviOS=ns0EC=+Q@mail.gmail.com  
Backpatch-through: 14  

M doc/src/sgml/config.sgml

Fix relid-set clobber during join removal.

commit   : 53cb4ec1ded7537770c68f709416de068a3e40d5    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 20 Apr 2026 19:24:46 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 20 Apr 2026 19:24:46 -0400    

Commit cfcd57111 et al fell over under Valgrind testing.  
(It seems to be enough to #define USE_VALGRIND; you don't actually  
need to run it under Valgrind to see failures.)  The cause is that  
remove_rel_from_eclass updates each EquivalenceMember's em_relids,  
and those can be aliases of the left_relids or right_relids of some  
RestrictInfo in ec_sources.  If the update made em_relids empty then  
bms_del_member will have pfree'd the relid set, so that the subsequent  
attempt to clean up ec_sources accesses already-freed memory.  
  
We missed seeing ill effects before cfcd57111 because (a) if the  
pfree happens then we will remove the EquivalenceMember altogether,  
making the source RestrictInfo no longer of use, and (b) the  
cleanup of ec_sources didn't touch left/right_relids before that.  
  
I'm unclear though on how cfcd57111 managed to pass non-USE_VALGRIND  
testing.  Apparently we managed to store another Bitmapset into the  
freed space before trying to access it, but you'd not think that would  
happen 100% of the time.  I think what USE_VALGRIND changes is that it  
makes list.c much more memory-hungry, so that the freed space gets  
claimed by some List node before a Bitmapset can be put there.  
  
This failure can be seen in v16, v17, and master, but oddly enough not  
v18.  That's because the SJE patch replaced the simple bms_del_members  
calls used here with adjust_relid_set, which is careful not to  
scribble on its input.  But commit 20efbdffe just recently put back  
the old coding and thus resurrected the problem.  
  
Discussion: https://postgr.es/m/458729.1776724816@sss.pgh.pa.us  
Backpatch-through: 16, 17, master  

M src/backend/optimizer/plan/analyzejoins.c
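
The aliasing hazard above can be illustrated with a toy set type. The sketch below shows the non-destructive delete style (in the spirit of adjust_relid_set, which is careful not to scribble on its input); a heap-allocated 64-bit mask stands in for Bitmapset, with NULL representing the empty set, as in the real code:

```c
#include <stdint.h>
#include <stdlib.h>

/* Toy stand-in for a relid set: bit i set => relid i is a member.
 * A destructive delete that frees the set when it empties leaves any
 * alias of the pointer dangling; this non-destructive variant returns
 * a fresh result and never modifies or frees its input. */
typedef struct RelidMask
{
    uint64_t bits;
} RelidMask;

static RelidMask *
mask_del_member_copy(const RelidMask *in, int member)
{
    uint64_t bits = in ? (in->bits & ~(UINT64_C(1) << member)) : 0;
    RelidMask *out;

    if (bits == 0)
        return NULL;            /* empty set is NULL; input untouched */
    out = malloc(sizeof(RelidMask));
    out->bits = bits;
    return out;
}
```

With the destructive style (bms_del_member), the same deletion would have freed the input when it emptied, leaving any RestrictInfo field aliasing it pointing at freed memory.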

Clean up all relid fields of RestrictInfos during join removal.

commit   : 766d40286600ebb9e3aa241451fa96427ed2f454    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 20 Apr 2026 14:48:23 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 20 Apr 2026 14:48:23 -0400    

The original implementation of remove_rel_from_restrictinfo()  
thought it could skate by with removing no-longer-valid relid  
bits from only the clause_relids and required_relids fields.  
This is quite bogus, although somehow we had not run across a  
counterexample before now.  At minimum, the left_relids and  
right_relids fields need to be fixed because they will be  
examined later by clause_sides_match_join().  But it seems  
pretty foolish not to fix all the relid fields, so do that.  
  
This needs to be back-patched as far as v16, because the  
bug report shows a planner failure that does not occur  
before v16.  I'm a little nervous about back-patching,  
because this could cause unexpected plan changes due to  
opening up join possibilities that were rejected before.  
But it's hard to argue that this isn't a regression.  Also,  
the fact that this changes no existing regression test results  
suggests that the scope of changes may be fairly narrow.  
I'll refrain from back-patching further though, since no  
adverse effects have been demonstrated in older branches.  
  
Bug: #19460  
Reported-by: François Jehl <francois.jehl@pigment.com>  
Author: Tom Lane <tgl@sss.pgh.pa.us>  
Reviewed-by: Richard Guo <guofenglinux@gmail.com>  
Discussion: https://postgr.es/m/19460-5625143cef66012f@postgresql.org  
Backpatch-through: 16  

M src/backend/optimizer/plan/analyzejoins.c
M src/test/regress/expected/join.out
M src/test/regress/sql/join.sql

Flush statistics during idle periods in parallel apply worker.

commit   : 88d7fdcc9aaa065daf10ff3b777f607fe65bdc02    
  
author   : Amit Kapila <akapila@postgresql.org>    
date     : Mon, 20 Apr 2026 10:17:47 +0530    
  
committer: Amit Kapila <akapila@postgresql.org>    
date     : Mon, 20 Apr 2026 10:17:47 +0530    

Parallel apply workers previously failed to report statistics while  
waiting for new work in the main loop.  This resulted in the stats from  
the most recent transaction remaining unflushed, leading to arbitrary  
reporting delays, particularly when streamed transactions were  
infrequent.  
  
This commit ensures that statistics are explicitly flushed when the worker  
is idle, providing timely visibility into accumulated worker activity.  
  
Author: Zhijie Hou <houzj.fnst@fujitsu.com>  
Reviewed-by: Chao Li <li.evan.chao@gmail.com>  
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>  
Backpatch-through: 16, where it was introduced  
Discussion: https://postgr.es/m/TYRPR01MB1419579F217CC4332B615589594202@TYRPR01MB14195.jpnprd01.prod.outlook.com  

M src/backend/replication/logical/applyparallelworker.c

doc: Improve description of pg_ctl -l log file permissions

commit   : 0d09492a745b9f054c595ea785cbd119d7018b44    
  
author   : Fujii Masao <fujii@postgresql.org>    
date     : Fri, 17 Apr 2026 15:30:59 +0900    
  
committer: Fujii Masao <fujii@postgresql.org>    
date     : Fri, 17 Apr 2026 15:30:59 +0900    

The documentation stated only that the log file created by pg_ctl -l is  
inaccessible to other users by default. However, since commit c37b3d0,  
the actual behavior is that only the cluster owner has access by default,  
but users in the same group as the cluster owner may also read the file  
if group access is enabled in the cluster.  
  
This commit updates the documentation to describe this behavior  
more clearly.  
  
Backpatch to all supported versions.  
  
Author: Hayato Kuroda <kuroda.hayato@fujitsu.com>  
Reviewed-by: Andreas Karlsson <andreas@proxel.se>  
Reviewed-by: Xiaopeng Wang <wxp_728@163.com>  
Reviewed-by: Fujii Masao <masao.fujii@gmail.com>  
Discussion: https://postgr.es/m/OS9PR01MB1214959BE987B4839E3046050F54BA@OS9PR01MB12149.jpnprd01.prod.outlook.com  
Backpatch-through: 14  

M doc/src/sgml/ref/pg_ctl-ref.sgml

Update .abi-compliance-history for change to enum ProcSignalReason

commit   : 29d8bd9085600a5cf751c0666a924b22e9970efb    
  
author   : Fujii Masao <fujii@postgresql.org>    
date     : Thu, 16 Apr 2026 23:48:41 +0900    
  
committer: Fujii Masao <fujii@postgresql.org>    
date     : Thu, 16 Apr 2026 23:48:41 +0900    

As noted in the commit message for 586f4266fb4, increasing the value of  
NUM_PROCSIGNALS in enum ProcSignalReason breaks ABI compatibility,  
but no affected third-party code is known.  Therefore this commit  
updates .abi-compliance-history accordingly.  
  
Discussion: https://postgr.es/m/CAHGQGwH_AAbtsiYDJt65N7_4PJ0CgOJmBEaCq68e5_tcuG_vXw@mail.gmail.com  
Backpatch-through: 17 only  

M .abi-compliance-history

Fix comments for Korean encodings in encnames.c

commit   : ec61832231c4f8c28e513870b9dfce87c1d8625a    
  
author   : Thomas Munro <tmunro@postgresql.org>    
date     : Thu, 16 Apr 2026 18:17:05 +1200    
  
committer: Thomas Munro <tmunro@postgresql.org>    
date     : Thu, 16 Apr 2026 18:17:05 +1200    

  * JOHAB: replace the incorrect "simplified Chinese" description with  
    a correct one that identifies it as the Korean combining (Johab)  
    encoding standardized in KS X 1001 annex 3.  
  
  * EUC_KR: drop a stray space before the comma in the existing  
    comment, and note that the encoding covers the KS X 1001  
    precomposed (Wansung) form.  
  
  * UHC: spell out "Unified Hangul Code", clarify that it is  
    Microsoft Windows CodePage 949, and describe its relationship to  
    EUC-KR (superset covering all 11,172 precomposed Hangul syllables).  
  
Backpatch-through: 14  
Author: Henson Choi <assam258@gmail.com>  
Discussion: https://postgr.es/m/CAAAe_zAFz1v-3b7Je4L%2B%3DwZM3UGAczXV47YVZfZi9wbJxspxeA%40mail.gmail.com  

M src/common/encnames.c

Fix incorrect comment in JsonTablePlanJoinNextRow()

commit   : 1a2d60cc0410c19484a389aee5dc41d28541b0f8    
  
author   : Amit Langote <amitlan@postgresql.org>    
date     : Thu, 16 Apr 2026 11:52:16 +0900    
  
committer: Amit Langote <amitlan@postgresql.org>    
date     : Thu, 16 Apr 2026 11:52:16 +0900    

The comment on the return-false path when both UNION siblings are  
exhausted said "there are more rows," which is the opposite of what  
the code does. The code itself is correct, returning false to signal  
no more rows, but the misleading comment could tempt a reader into  
"fixing" the return value, which would cause UNION plans to loop  
indefinitely.  
  
Back-patch to 17, where JSON_TABLE was introduced.  
  
Author: Chuanwen Hu <463945512@qq.com>  
Reviewed-by: Chao Li <li.evan.chao@gmail.com>  
Discussion: https://postgr.es/m/tencent_4CC6316F02DECA61ACCF22F933FEA5C12806@qq.com  
Backpatch-through: 17  

M src/backend/utils/adt/jsonpath_exec.c

Check for unterminated strings when calling uloc_getLanguage().

commit   : a756067a0e3b6ab6943f186f241b119aba2624f0    
  
author   : Jeff Davis <jdavis@postgresql.org>    
date     : Tue, 14 Apr 2026 12:06:02 -0700    
  
committer: Jeff Davis <jdavis@postgresql.org>    
date     : Tue, 14 Apr 2026 12:06:02 -0700    

Missed by commit 1671f990dd66.  
  
Author: Andreas Karlsson <andreas@proxel.se>  
Discussion: https://postgr.es/m/118ca69e-47eb-42e1-83e9-72ccf40dd6fd@proxel.se  
Backpatch-through: 16  

M src/bin/initdb/initdb.c

Add tests for low-level PGLZ [de]compression routines

commit   : c78947badc701ffda625455ddac98bad9bd4109b    
  
author   : Michael Paquier <michael@paquier.xyz>    
date     : Wed, 15 Apr 2026 05:09:10 +0900    
  
committer: Michael Paquier <michael@paquier.xyz>    
date     : Wed, 15 Apr 2026 05:09:10 +0900    

The goal of this module is to provide an entry point for the coverage of  
the low-level compression and decompression PGLZ routines.  The new test  
is placed in a new parallel group, together with all the existing  
compression-related tests.  
  
This includes tests for the cases detected by fuzzing that emulate  
corrupted compressed data, as fixed by 2b5ba2a0a141:  
- Set control bit with read of a match tag, where no data follows.  
- Set control bit with read of a match tag, where 1 byte follows.  
- Set control bit with match tag where length nibble is 3 bytes  
(extended case).  
  
While at it, some tests are added for compress/decompress roundtrips,  
and for check_complete=false/true.  Like 2b5ba2a0a141, backpatch to all  
the stable branches.  
  
Discussion: https://postgr.es/m/adw647wuGjh1oU6p@paquier.xyz  
Backpatch-through: 14  

A src/test/regress/expected/compression_pglz.out
M src/test/regress/parallel_schedule
M src/test/regress/regress.c
A src/test/regress/sql/compression_pglz.sql

Fix excessive logging in idle slotsync worker.

commit   : 91741b7cb7b5e0f0eb5100a548798b5a395cd3b5    
  
author   : Amit Kapila <akapila@postgresql.org>    
date     : Mon, 13 Apr 2026 09:21:34 +0530    
  
committer: Amit Kapila <akapila@postgresql.org>    
date     : Mon, 13 Apr 2026 09:21:34 +0530    

The slotsync worker was incorrectly identifying no-op states as successful  
updates, triggering a busy loop to sync slots that logged messages every  
200ms. This patch corrects the logic to properly classify these states,  
enabling the worker to respect normal sleep intervals when no work is  
performed.  
  
Reported-by: Fujii Masao <masao.fujii@gmail.com>  
Author: Zhijie Hou <houzj.fnst@fujitsu.com>  
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>  
Reviewed-by: shveta malik <shveta.malik@gmail.com>  
Backpatch-through: 17, where it was introduced  
Discussion: https://postgr.es/m/CAHGQGwF6zG9Z8ws1yb3hY1VqV-WT7hR0qyXCn2HdbjvZQKufDw@mail.gmail.com  

M src/backend/replication/logical/slotsync.c

Honor passed-in database OIDs in pgstat_database.c

commit   : a4fefb3e0dc5451166ebef6adcc082a57ddba039    
  
author   : Michael Paquier <michael@paquier.xyz>    
date     : Sat, 11 Apr 2026 17:03:06 +0900    
  
committer: Michael Paquier <michael@paquier.xyz>    
date     : Sat, 11 Apr 2026 17:03:06 +0900    

Three routines in pgstat_database.c incorrectly ignore the database OID  
provided by their caller, using MyDatabaseId instead:  
- pgstat_report_connect()  
- pgstat_report_disconnect()  
- pgstat_reset_database_timestamp()  
  
The first two functions, for connection and disconnection, each have a  
single caller that already passes MyDatabaseId, so this was harmless,  
though still incorrect.  
  
The timestamp reset function also has a single caller, but in this case  
the issue has a real impact: it fails to reset the timestamp for the  
shared-database entry (datid=0) when operating on shared objects.  This  
situation can occur, for example, when resetting counters for shared  
relations via pg_stat_reset_single_table_counters().  
  
There is currently one test in the tree that checks the reset of a  
shared relation, pg_shdescription; we rely on it to check what is  
stored in pg_stat_database.  As stats_reset may be NULL, two resets are  
done to provide a baseline for comparison.  
  
Author: Chao Li <li.evan.chao@gmail.com>  
Reviewed-by: Michael Paquier <michael@paquier.xyz>  
Reviewed-by: Dapeng Wang <wangdp20191008@gmail.com>  
Discussion: https://postgr.es/m/ABBD5026-506F-4006-A569-28F72C188693@gmail.com  
Backpatch-through: 15  

M src/backend/utils/activity/pgstat_database.c
M src/test/regress/expected/stats.out
M src/test/regress/sql/stats.sql

Fix estimate_array_length error with set-operation array coercions

commit   : 93ed18720105ebacfd2a2bdeb55bf33feaf2ec80    
  
author   : Richard Guo <rguo@postgresql.org>    
date     : Sat, 11 Apr 2026 16:38:47 +0900    
  
committer: Richard Guo <rguo@postgresql.org>    
date     : Sat, 11 Apr 2026 16:38:47 +0900    

When a nested set operation's output type doesn't match the parent's  
expected type, recurse_set_operations builds a projection target list  
using generate_setop_tlist with varno 0.  If the required type  
coercion involves an ArrayCoerceExpr, estimate_array_length could be  
called on such a Var, and would pass it to examine_variable, which  
errors in find_base_rel because varno 0 has no valid relation entry.  
  
Fix by skipping the statistics lookup for Vars with varno 0.  
  
Bug introduced by commit 9391f7152.  Back-patch to v17, where  
estimate_array_length was taught to use statistics.  
  
Reported-by: Justin Pryzby <pryzby@telsasoft.com>  
Author: Tender Wang <tndrwang@gmail.com>  
Reviewed-by: Richard Guo <guofenglinux@gmail.com>  
Discussion: https://postgr.es/m/adjW8rfPDkplC7lF@pryzbyj2023  
Backpatch-through: 17  

M src/backend/utils/adt/selfuncs.c
M src/test/regress/expected/union.out
M src/test/regress/sql/union.sql

Fix heap-buffer-overflow in pglz_decompress() on corrupt input.

commit   : c05c3baf169f353914ed34fcabd057be1d25f9b4    
  
author   : Andrew Dunstan <andrew@dunslane.net>    
date     : Thu, 9 Apr 2026 11:48:55 -0400    
  
committer: Andrew Dunstan <andrew@dunslane.net>    
date     : Thu, 9 Apr 2026 11:48:55 -0400    

When decoding a match tag, pglz_decompress() reads 2 bytes (or 3  
for extended-length matches) from the source buffer before checking  
whether enough data remains.  The existing bounds check (sp > srcend)  
occurs after the reads, so truncated compressed data that ends  
mid-tag causes a read past the allocated buffer.  
  
Fix by validating that sufficient source bytes are available before  
reading each part of the match tag.  The post-read sp > srcend  
check is no longer needed and is removed.  
  
Found by fuzz testing with libFuzzer and AddressSanitizer.  
  
Backpatch-through: 14  

M src/common/pg_lzcompress.c
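
The check-before-read pattern the fix applies can be sketched as follows. The tag layout here (length nibble plus 12-bit offset, with an optional extension byte) is loosely modeled on PGLZ's but simplified; this is an illustration of the bounds discipline, not the actual pg_lzcompress.c code:

```c
/* Decode a match tag only after verifying its bytes are present,
 * instead of reading first and checking sp > srcend afterwards.
 * Returns the number of bytes consumed, or -1 on truncated input. */
static int
decode_match_tag(const unsigned char *sp, const unsigned char *srcend,
                 int *len, int *off)
{
    if (srcend - sp < 2)
        return -1;              /* tag truncated: fail before reading */
    *len = (sp[0] & 0x0f) + 3;            /* low nibble: match length */
    *off = ((sp[0] & 0xf0) << 4) | sp[1]; /* 12-bit match offset */
    if (*len == 18)             /* extended length needs a third byte */
    {
        if (srcend - sp < 3)
            return -1;
        *len += sp[2];
        return 3;
    }
    return 2;
}
```

Truncated input that ends mid-tag now fails cleanly rather than reading past the end of the source buffer, which is exactly the class of corruption the new regression tests above exercise.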

Fix incremental JSON parser numeric token reassembly across chunks.

commit   : 2e373785ec07102badee139236ac78c4da4f7c16    
  
author   : Andrew Dunstan <andrew@dunslane.net>    
date     : Thu, 9 Apr 2026 07:57:07 -0400    
  
committer: Andrew Dunstan <andrew@dunslane.net>    
date     : Thu, 9 Apr 2026 07:57:07 -0400    

When the incremental JSON parser splits a numeric token across chunk  
boundaries, it accumulates continuation characters into the partial  
token buffer.  The accumulator's switch statement unconditionally  
accepted '+', '-', '.', 'e', and 'E' as valid numeric continuations  
regardless of position, which violated JSON number grammar  
(-? int [frac] [exp]).  For example, input "4-" fed in single-byte  
chunks would accumulate the '-' into the numeric token, producing an  
invalid token that later triggered an assertion failure during  
re-lexing.  
  
Fix by tracking parser state (seen_dot, seen_exp, prev character)  
across the existing partial token and incoming bytes, so that each  
character class is accepted only in its grammatically valid position.  
  
Backpatch-through: 17  

M src/common/jsonapi.c
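
The position-aware acceptance rule can be sketched as a small state machine over one character at a time. The names below (NumState, accept_num_char) are illustrative, not jsonapi.c's; the sketch models only the state the fix tracks (seen_dot, seen_exp, previous character):

```c
#include <ctype.h>

/* Track enough state to accept each numeric continuation character
 * only in a grammatically valid position (-? int [frac] [exp]). */
typedef struct NumState
{
    int  seen_dot;
    int  seen_exp;
    char prev;                  /* last accepted char, or 0 at start */
} NumState;

static int
accept_num_char(NumState *st, char c)
{
    int ok;

    if (isdigit((unsigned char) c))
        ok = 1;
    else if (c == '-')          /* leading sign, or sign of exponent */
        ok = (st->prev == 0 || st->prev == 'e' || st->prev == 'E');
    else if (c == '+')          /* only valid as sign of exponent */
        ok = (st->prev == 'e' || st->prev == 'E');
    else if (c == '.')          /* one fraction part, before exponent */
        ok = !st->seen_dot && !st->seen_exp &&
            isdigit((unsigned char) st->prev);
    else if (c == 'e' || c == 'E')      /* one exponent, after a digit */
        ok = !st->seen_exp && isdigit((unsigned char) st->prev);
    else
        ok = 0;

    if (ok)
    {
        if (c == '.')
            st->seen_dot = 1;
        else if (c == 'e' || c == 'E')
            st->seen_exp = 1;
        st->prev = c;
    }
    return ok;
}
```

Under this rule the example from the commit, "4" followed by "-" in a later chunk, is rejected instead of being accumulated into an invalid token.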

Zero-fill private_data when attaching an injection point

commit   : 492c386b4df465df8ebb9347e1579ad1b96bf41a    
  
author   : Michael Paquier <michael@paquier.xyz>    
date     : Fri, 10 Apr 2026 11:17:32 +0900    
  
committer: Michael Paquier <michael@paquier.xyz>    
date     : Fri, 10 Apr 2026 11:17:32 +0900    

InjectionPointAttach() did not initialize the private_data buffer of the  
shared memory entry before (perhaps partially) overwriting it.  When the  
private data was set to NULL by the caller, the buffer was left  
uninitialized; if set, it could contain stale contents.  
  
The buffer is now initialized to zero, so that the contents recorded  
when a point is attached are deterministic.  
  
Author: Sami Imseih <samimseih@gmail.com>  
Discussion: https://postgr.es/m/CAA5RZ0tsGHu2h6YLnVu4HiK05q+gTE_9WVUAqihW2LSscAYS-g@mail.gmail.com  
Backpatch-through: 17  

M src/backend/utils/misc/injection_point.c
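
A minimal sketch of the zero-fill-then-copy pattern; PRIVATE_DATA_SIZE and set_private_data are illustrative names rather than the actual injection_point.c fields:

```c
#include <string.h>

/* Wipe the fixed-size shared buffer before copying any optional
 * caller-provided data, so unset bytes are deterministic. */
#define PRIVATE_DATA_SIZE 64

static void
set_private_data(char *buf, const void *data, size_t len)
{
    memset(buf, 0, PRIVATE_DATA_SIZE);  /* no stale contents remain */
    if (data != NULL && len <= PRIVATE_DATA_SIZE)
        memcpy(buf, data, len);
}
```

Whether the caller passes NULL or a short payload, every byte of the buffer ends up in a known state.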

Fix integer overflow in nodeWindowAgg.c

commit   : f8736f8bc5315fe94cba961825acc3552f8064b0    
  
author   : Richard Guo <rguo@postgresql.org>    
date     : Thu, 9 Apr 2026 19:28:33 +0900    
  
committer: Richard Guo <rguo@postgresql.org>    
date     : Thu, 9 Apr 2026 19:28:33 +0900    

In nodeWindowAgg.c, the calculations for frame start and end positions  
in ROWS and GROUPS modes were performed using simple integer addition.  
If a user-supplied offset was sufficiently large (close to INT64_MAX),  
adding it to the current row or group index could cause a signed  
integer overflow, wrapping the result to a negative number.  
  
This led to incorrect behavior where frame boundaries that should have  
extended indefinitely (or beyond the partition end) were treated as  
falling at the first row, or where valid rows were incorrectly marked  
as out-of-frame.  Depending on the specific query and data, these  
overflows can result in incorrect query results, execution errors, or  
assertion failures.  
  
To fix, use overflow-aware integer addition (ie, pg_add_s64_overflow)  
to check for overflows during these additions.  If an overflow is  
detected, the boundary is now clamped to INT64_MAX.  This ensures the  
logic correctly treats the boundary as extending to the end of the  
partition.  
  
Bug: #19405  
Reported-by: Alexander Lakhin <exclusion@gmail.com>  
Author: Richard Guo <guofenglinux@gmail.com>  
Reviewed-by: Tender Wang <tndrwang@gmail.com>  
Discussion: https://postgr.es/m/19405-1ecf025dda171555@postgresql.org  
Backpatch-through: 14  

M src/backend/executor/nodeWindowAgg.c
M src/test/regress/expected/window.out
M src/test/regress/sql/window.sql
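
The clamping logic can be sketched as below; GCC/Clang's __builtin_add_overflow stands in for PostgreSQL's pg_add_s64_overflow:

```c
#include <stdint.h>

/* Add a user-supplied frame offset to a row/group position with
 * overflow detection, clamping to INT64_MAX so an overflowed boundary
 * behaves as "beyond the end of the partition" instead of wrapping
 * to a negative position. */
static int64_t
clamped_frame_pos(int64_t pos, int64_t offset)
{
    int64_t result;

    if (__builtin_add_overflow(pos, offset, &result))
        return INT64_MAX;       /* overflow: clamp instead of wrapping */
    return result;
}
```

With plain addition, a near-INT64_MAX offset would wrap negative and make the frame appear to start before the first row; the clamp preserves the intended unbounded-end semantics.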

Fix ABI break by moving PROCSIG_SLOTSYNC_MESSAGE in ProcSignalReason

commit   : 586f4266fb4945f6ea3564b9c1bab093eb74bee4    
  
author   : Fujii Masao <fujii@postgresql.org>    
date     : Thu, 9 Apr 2026 15:30:59 +0900    
  
committer: Fujii Masao <fujii@postgresql.org>    
date     : Thu, 9 Apr 2026 15:30:59 +0900    

Commit 15910b1c363 added PROCSIG_SLOTSYNC_MESSAGE in the middle of  
enum ProcSignalReason, breaking the ABI.  
  
Fix this by moving PROCSIG_SLOTSYNC_MESSAGE to just before the last entry,  
NUM_PROCSIGNALS, to restore ordering of other entries.  
  
This increases the value of NUM_PROCSIGNALS, which technically changes the ABI.  
However, since it represents the number of enum entries (not a signal reason),  
and no affected third-party code is known, this change will be recorded in  
.abi-compliance-history later.  
  
Per buildfarm member crake.  
  
Author: Fujii Masao <masao.fujii@gmail.com>  
Reviewed-by: Nisha Moond <nisha.moond412@gmail.com>  
Discussion: https://postgr.es/m/CAHGQGwH_AAbtsiYDJt65N7_4PJ0CgOJmBEaCq68e5_tcuG_vXw@mail.gmail.com  
Backpatch-through: 17 only  

M src/include/storage/procsignal.h

Fix slotsync worker blocking promotion when stuck in wait

commit   : 15910b1c363f47b3984d24a91ed75ddac36070d8    
  
author   : Fujii Masao <fujii@postgresql.org>    
date     : Wed, 8 Apr 2026 11:24:00 +0900    
  
committer: Fujii Masao <fujii@postgresql.org>    
date     : Wed, 8 Apr 2026 11:24:00 +0900    

Previously, on standby promotion, the startup process sent SIGUSR1 to  
the slotsync worker (or a backend performing slot synchronization) and  
waited for it to exit. This worked in most cases, but if the process was  
blocked waiting for a response from the primary (e.g., due to a network  
failure), SIGUSR1 would not interrupt the wait. As a result, the process  
could remain stuck, causing the startup process to wait for a long time  
and delaying promotion.  
  
This commit fixes the issue by introducing a new procsignal reason,  
PROCSIG_SLOTSYNC_MESSAGE. On promotion, the startup process  
sends this signal, and the handler sets interrupt flags so the process  
exits (or errors out) promptly at CHECK_FOR_INTERRUPTS(), allowing  
promotion to complete without delay.  
  
Backpatch to v17, where slotsync was introduced.  
  
Author: Nisha Moond <nisha.moond412@gmail.com>  
Reviewed-by: shveta malik <shveta.malik@gmail.com>  
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>  
Reviewed-by: Zhijie Hou <houzj.fnst@fujitsu.com>  
Reviewed-by: Fujii Masao <masao.fujii@gmail.com>  
Discussion: https://postgr.es/m/CAHGQGwFzNYroAxSoyJhqTU-pH=t4Ej6RyvhVmBZ91Exj_TPMMQ@mail.gmail.com  
Backpatch-through: 17  

M src/backend/replication/logical/slotsync.c
M src/backend/storage/ipc/procsignal.c
M src/backend/tcop/postgres.c
M src/include/replication/slotsync.h
M src/include/storage/procsignal.h

Enhance slot synchronization API to respect promotion signal.

commit   : 4bed04d39566b788549ad157b32ccca9032bb4a1    
  
author   : Amit Kapila <akapila@postgresql.org>    
date     : Thu, 11 Dec 2025 03:49:28 +0000    
  
committer: Fujii Masao <fujii@postgresql.org>    
date     : Thu, 11 Dec 2025 03:49:28 +0000    

Previously, during a promotion, only the slot synchronization worker was  
signaled to shut down. The backend executing slot synchronization via the  
pg_sync_replication_slots() SQL function was not signaled, allowing it to  
complete its synchronization cycle before exiting.  
  
An upcoming patch improves pg_sync_replication_slots() to wait until  
replication slots are fully persisted before finishing. This behaviour  
requires the backend to exit promptly if a promotion occurs.  
  
This patch ensures that, during promotion, a signal is also sent to the  
backend running pg_sync_replication_slots(), allowing it to be interrupted  
and exit immediately.  
  
This change was originally committed to master only.  However, it is  
back-patched to v17, where slot synchronization was introduced, because  
it is required for an upcoming bug fix addressing slotsync (including  
pg_sync_replication_slots()) blocking promotion when stuck in a wait.  
  
Author: Ajin Cherian <itsajin@gmail.com>  
Reviewed-by: Shveta Malik <shveta.malik@gmail.com>  
Reviewed-by: Chao Li <li.evan.chao@gmail.com>  
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>  
Discussion: https://postgr.es/m/CAFPTHDZAA%2BgWDntpa5ucqKKba41%3DtXmoXqN3q4rpjO9cdxgQrw%40mail.gmail.com  
Backpatch-through: 17  

M src/backend/replication/logical/slotsync.c

Avoid unsafe access to negative index in a TupleDesc.

commit   : 681a91d29d4d8722b4e6fbafd2a6ee5b6b6c6068    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 6 Apr 2026 14:22:17 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 6 Apr 2026 14:22:17 -0400    

Click here for diff

Commit aa606b931 installed a test that would reference a nonexistent  
TupleDesc array entry if a system column is used in COPY FROM WHERE.  
Typically this would be harmless, but with bad luck it could result  
in a phony "generated columns are not supported in COPY FROM WHERE  
conditions" error, and at least in principle it could cause SIGSEGV.  
(Compare 570e2fcc0 which fixed the identical problem in another  
place.)  Also, since c98ad086a it throws an Assert instead.  
  
In the back branches, just guard the test to make it a safe no-op for  
system columns.  Commit 21c69dc73 installed a more aggressive answer  
in master.  
  
Reported-by: Alexander Lakhin <exclusion@gmail.com>  
Author: Tom Lane <tgl@sss.pgh.pa.us>  
Discussion: https://postgr.es/m/6f435023-8ab6-47c2-ba07-035d0c4212f9@gmail.com  
Backpatch-through: 14-18  

M src/backend/commands/copy.c

Fix null-bitmap combining in array_agg_array_combine().

commit   : d6c9432cb5da3b3d2c31b31fbe8eb668a9f927a4    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 6 Apr 2026 13:14:50 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 6 Apr 2026 13:14:50 -0400    

Click here for diff

This code missed the need to update the combined state's  
nullbitmap if state1 already had a bitmap but state2 didn't.  
We needed to extend the existing bitmap with 1's, but failed to.  
This could result in wrong output from a parallelized  
array_agg(anyarray) calculation, if the input has a mix of  
null and non-null elements.  The errors depended on timing  
of the parallel workers, and therefore would vary from one  
run to another.  
  
Also install guards against integer overflow when calculating  
the combined object's sizes, and make some trivial cosmetic  
improvements.  
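
The missing step amounts to appending set bits (1 = not-null in array
null bitmaps) to the existing bitmap for state2's entries. A minimal
sketch with hypothetical names, not the actual array_userfuncs.c code:

```c
#include <stdint.h>

/*
 * Append 'count2' set bits to a null bitmap that already describes
 * 'count1' elements, marking the new elements as not-null.
 * Illustrative only; PostgreSQL works on bits8 arrays.
 */
static void
extend_bitmap_with_ones(uint8_t *bitmap, int count1, int count2)
{
	for (int i = count1; i < count1 + count2; i++)
		bitmap[i / 8] |= (uint8_t) (1 << (i % 8));
}
```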
  
Author: Dmytro Astapov <dastapov@gmail.com>  
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>  
Discussion: https://postgr.es/m/CAFQUnFj2pQ1HbGp69+w2fKqARSfGhAi9UOb+JjyExp7kx3gsqA@mail.gmail.com  
Backpatch-through: 16  

M src/backend/utils/adt/array_userfuncs.c

jit: No backport::SectionMemoryManager for LLVM 22.

commit   : 0a2291b59f9b7f44b2b2c362fad60ca9ff026a60    
  
author   : Thomas Munro <tmunro@postgresql.org>    
date     : Fri, 3 Apr 2026 14:48:54 +1300    
  
committer: Thomas Munro <tmunro@postgresql.org>    
date     : Fri, 3 Apr 2026 14:48:54 +1300    

Click here for diff

LLVM 22 has the fix that we copied into our tree in commit 9044fc1d and  
a new function to reach it[1][2], so we only need to use our copy for  
Aarch64 + LLVM < 22.  The only change to the final version that our copy  
didn't get is a new LLVM_ABI macro, but that isn't appropriate for us.  
Our copy is hopefully now frozen and would only need maintenance if bugs  
are found in the upstream code.  
  
Non-Aarch64 systems now also use the new API with LLVM 22.  It allocates  
all sections with one contiguous mmap() instead of one per  
section.  We could have done that earlier, but commit 9044fc1d wanted to  
limit the blast radius to the affected systems.  We might as well  
benefit from that small improvement everywhere now that it is available  
out of the box.  
  
We can't delete our copy until LLVM 22 is our minimum supported version,  
or we switch to the newer JITLink API for at least Aarch64.  
  
[1] https://github.com/llvm/llvm-project/pull/71968  
[2] https://github.com/llvm/llvm-project/pull/174307  
  
Backpatch-through: 14  
Discussion: https://postgr.es/m/CA%2BhUKGJTumad75o8Zao-LFseEbt%3DenbUFCM7LZVV%3Dc8yg2i7dg%40mail.gmail.com  

M src/backend/jit/llvm/SectionMemoryManager.cpp
M src/backend/jit/llvm/llvmjit.c
M src/include/jit/SectionMemoryManager.h
M src/include/jit/llvmjit_backport.h

jit: Stop emitting lifetime.end for LLVM 22.

commit   : b6d0cddbe27d69cf88ebf0702589e0d05c5c58fa    
  
author   : Thomas Munro <tmunro@postgresql.org>    
date     : Thu, 2 Apr 2026 15:24:44 +1300    
  
committer: Thomas Munro <tmunro@postgresql.org>    
date     : Thu, 2 Apr 2026 15:24:44 +1300    

Click here for diff

The lifetime.end intrinsic can now only be used for stack memory  
allocated with alloca[1][2][3].  We use it to tell LLVM about the  
lifetime of function arguments/isnull values that we keep in palloc'd  
memory, so that it can avoid spilling registers to memory.  
  
We might need to rearrange things and put them on the stack, but that'll  
take some research.  In the meantime, unbreak the build on LLVM 22.  
  
[1] https://github.com/llvm/llvm-project/pull/149310  
[2] https://llvm.org/docs/LangRef.html#llvm-lifetime-end-intrinsic  
[3] https://llvm.org/docs/LangRef.html#i-alloca  
  
Backpatch-through: 14  
Reviewed-by: Matheus Alcantara <matheusssilv97@gmail.com> (earlier attempt)  
Reviewed-by: Anthonin Bonnefoy <anthonin.bonnefoy@datadoghq.com> (earlier attempt)  
Reviewed-by: Andres Freund <andres@anarazel.de> (earlier attempt)  
Discussion: https://postgr.es/m/CA%2BhUKGJTumad75o8Zao-LFseEbt%3DenbUFCM7LZVV%3Dc8yg2i7dg%40mail.gmail.com  

M src/backend/jit/llvm/llvmjit_expr.c

doc: Add missing description for DROP SUBSCRIPTION IF EXISTS.

commit   : a6ab311c70f7a0b18ae2ec144b5744dbac44b55d    
  
author   : Nathan Bossart <nathan@postgresql.org>    
date     : Wed, 1 Apr 2026 09:48:48 -0500    
  
committer: Nathan Bossart <nathan@postgresql.org>    
date     : Wed, 1 Apr 2026 09:48:48 -0500    

Click here for diff

Oversight in commit 665d1fad99.  
  
Author: Peter Smith <smithpb2250@gmail.com>  
Reviewed-by: Chao Li <li.evan.chao@gmail.com>  
Discussion: https://postgr.es/m/CAHut%2BPv72haFerrCdYdmF6hu6o2jKcGzkXehom%2BsP-JBBmOVDg%40mail.gmail.com  
Backpatch-through: 14  

M doc/src/sgml/ref/drop_subscription.sgml

Be more careful to preserve consistency of a tuplestore.

commit   : 1f5b6a5e5d7475021009e8e14fbed40844b3cf0b    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 30 Mar 2026 13:59:54 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 30 Mar 2026 13:59:54 -0400    

Click here for diff

Several places in tuplestore.c would leave the tuplestore data  
structure effectively corrupt if some subroutine were to throw  
an error.  Notably, if WRITETUP() failed after some number of  
successful calls within dumptuples(), the tuplestore would  
contain some memtuples pointers that were apparently live  
entries but in fact pointed to pfree'd chunks.  
  
In most cases this sort of thing is fine because transaction  
abort cleanup is not too picky about the contents of memory that  
it's going to throw away anyway.  There's at least one exception  
though: if a Portal has a holdStore, we're going to call  
tuplestore_end() on that, even during transaction abort.  
So it's not cool if that tuplestore is corrupt, and that means  
tuplestore.c has to be more careful.  
  
This oversight demonstrably leads to crashes in v15 and before,  
if a holdable cursor fails to persist its data due to an undersized  
temp_file_limit setting.  Very possibly the same thing can happen in  
v16 and v17 as well, though the specific test case submitted failed  
to fail there (cf. 095555daf).  The failure is accidentally dodged  
as of v18 because 590b045c3 got rid of tuplestore_end's retail tuple  
deletion loop.  Still, it seems unwise to permit tuplestores to become  
internally inconsistent in any branch, so I've applied the same fix  
across the board.  
  
Since the known test case for this is rather expensive and doesn't  
fail in recent branches, I've omitted it.  
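
The invariant the fix enforces can be sketched like this, with
hypothetical structures rather than tuplestore.c's real ones: take each
tuple out of the in-memory array before doing anything that can fail,
so an error mid-loop never leaves pointers to freed chunks looking live.

```c
#include <stdlib.h>

typedef struct TupStore
{
	void  **memtuples;
	int		memtupcount;	/* number of apparently-live entries */
} TupStore;

/*
 * Dump tuples via 'write_tuple', which may fail (returns -1, standing in
 * for an elog error).  The entry is removed from the array *before* the
 * fallible write, so the structure stays consistent on failure.
 */
static int
dump_tuples(TupStore *ts, int (*write_tuple) (void *))
{
	while (ts->memtupcount > 0)
	{
		void	   *tup = ts->memtuples[ts->memtupcount - 1];

		ts->memtupcount--;		/* unlink first */
		if (write_tuple(tup) < 0)
			return -1;			/* still consistent here */
		free(tup);
	}
	return 0;
}
```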
  
Bug: #19438  
Reported-by: Dmitriy Kuzmin <kuzmin.db4@gmail.com>  
Author: Tom Lane <tgl@sss.pgh.pa.us>  
Reviewed-by: David Rowley <dgrowleyml@gmail.com>  
Discussion: https://postgr.es/m/19438-9d37b179c56d43aa@postgresql.org  
Backpatch-through: 14  

M src/backend/utils/sort/tuplestore.c

Detect pfree or repalloc of a previously-freed memory chunk.

commit   : 0c8b4e9cfc453c484feabeaf95813a788d3fb16c    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 30 Mar 2026 12:02:08 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 30 Mar 2026 12:02:08 -0400    

Click here for diff

Before the major rewrite in commit c6e0fe1f2, AllocSetFree() would  
typically crash when asked to free an already-free chunk.  That was  
an ugly but serviceable way of detecting coding errors that led to  
double pfrees.  But since that rewrite, double pfrees went through  
just fine, because the "hdrmask" of a freed chunk isn't changed at all  
when putting it on the freelist.  We'd end with a corrupt freelist  
that circularly links back to the doubly-freed chunk, which would  
usually result in trouble later, far removed from the actual bug.  
  
This situation is no good at all for debugging purposes.  Fortunately,  
we can fix it at low cost in MEMORY_CONTEXT_CHECKING builds by making  
AllocSetFree() check for chunk->requested_size == InvalidAllocSize,  
relying on the pre-existing code that sets it that way just below.  
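
In outline, the check works like this; the names below are illustrative
only, while the real code tests chunk->requested_size against
InvalidAllocSize inside MEMORY_CONTEXT_CHECKING builds:

```c
#include <stddef.h>

#define SENTINEL_FREED ((size_t) -1)	/* analogue of InvalidAllocSize */

typedef struct chunk_hdr
{
	size_t		requested_size; /* overwritten with the sentinel on free */
} chunk_hdr;

/* Returns -1 when the chunk was already freed, 0 otherwise. */
static int
check_and_mark_freed(chunk_hdr *chunk)
{
	if (chunk->requested_size == SENTINEL_FREED)
		return -1;				/* double pfree detected */
	chunk->requested_size = SENTINEL_FREED;
	return 0;
}
```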
  
I investigated the alternative of changing a freed chunk's methodid  
field, which would allow detection in non-MEMORY_CONTEXT_CHECKING  
builds too.  But that adds measurable overhead.  Seeing that we didn't  
notice this oversight for more than three years, it's hard to argue  
that detecting this type of bug is worth any extra overhead in  
production builds.  
  
Likewise fix AllocSetRealloc() to detect repalloc() on a freed chunk,  
and apply similar changes in generation.c and slab.c.  (generation.c  
would hit an Assert failure anyway, but it seems best to make it act  
like aset.c.)  bump.c doesn't need changes since it doesn't support  
pfree in the first place.  Ideally alignedalloc.c would receive  
similar changes, but in debugging builds it's impossible to reach  
AlignedAllocFree() or AlignedAllocRealloc() on a pfreed chunk, because  
the underlying context's pfree would have wiped the chunk header of  
the aligned chunk.  But that means we should get an error of some  
sort, so let's be content with that.  
  
Per investigation of why the test case for bug #19438 didn't appear to  
fail in v16 and up, even though the underlying bug was still present.  
(This doesn't fix the underlying double-free bug, it just causes  
it to be detected.)  
  
Bug: #19438  
Reported-by: Dmitriy Kuzmin <kuzmin.db4@gmail.com>  
Author: Tom Lane <tgl@sss.pgh.pa.us>  
Reviewed-by: David Rowley <dgrowleyml@gmail.com>  
Discussion: https://postgr.es/m/19438-9d37b179c56d43aa@postgresql.org  
Backpatch-through: 16  

M src/backend/utils/mmgr/aset.c
M src/backend/utils/mmgr/generation.c
M src/backend/utils/mmgr/slab.c

Fix datum_image_*()'s inability to detect sign-extension variations

commit   : d29808e35d94c70187cebb7a3a4483ab8b591387    
  
author   : David Rowley <drowley@postgresql.org>    
date     : Mon, 30 Mar 2026 16:16:39 +1300    
  
committer: David Rowley <drowley@postgresql.org>    
date     : Mon, 30 Mar 2026 16:16:39 +1300    

Click here for diff

Functions such as hash_numeric() are not careful to use the correct  
PG_RETURN_*() macro according to the return type of that function as  
defined in pg_proc.  Because that function is meant to return int32,  
when the hashed value exceeds 2^31, the 64-bit Datum value won't wrap to  
a negative number, which means the Datum won't have the same value as it  
would have, had it been cast to int32 on a two's complement machine.  This  
isn't harmless as both datum_image_eq() and datum_image_hash() may receive  
a Datum that's been formed and deformed from a tuple in some cases, and  
not in other cases.  When formed into a tuple, the Datum value will be  
coerced into an integer according to the attlen as specified by the  
TupleDesc.  This can result in two Datums that should be equal being  
classed as not equal, which could result in (but not limited to) an error  
such as:  
  
ERROR:  could not find memoization table entry  
  
Here we fix this by ensuring we cast the Datum value to a signed integer  
according to the typLen specified in the datum_image_eq/datum_image_hash  
function call before comparing or hashing.  
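
The normalization can be sketched as follows; Datum and the helper name
are stand-ins here, and the real datum.c code handles pass-by-reference
types separately:

```c
#include <stdint.h>

typedef uintptr_t Datum;		/* stand-in for PostgreSQL's Datum */

/*
 * Cast a pass-by-value Datum to a signed integer of the width given by
 * typlen, so that sign-extension differences in the upper bits can't
 * make equal values compare or hash unequally.
 */
static intptr_t
datum_as_signed(Datum d, int typlen)
{
	switch (typlen)
	{
		case 1:
			return (int8_t) d;
		case 2:
			return (int16_t) d;
		case 4:
			return (int32_t) d;
		default:
			return (intptr_t) d;
	}
}
```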
  
Author: David Rowley <dgrowleyml@gmail.com>  
Reported-by: Tender Wang <tndrwang@gmail.com>  
Backpatch-through: 14  
Discussion: https://postgr.es/m/CAHewXNmcXVFdB9_WwA8Ez0P+m_TQy_KzYk5Ri5dvg+fuwjD_yw@mail.gmail.com  

M src/backend/utils/adt/datum.c

Fix multiple bugs in astreamer pipeline code.

commit   : f1298a4c207262efe023af92ca76bef2c5227c56    
  
author   : Andrew Dunstan <andrew@dunslane.net>    
date     : Sun, 29 Mar 2026 09:06:54 -0400    
  
committer: Andrew Dunstan <andrew@dunslane.net>    
date     : Sun, 29 Mar 2026 09:06:54 -0400    

Click here for diff

astreamer_tar_parser_content() sent the wrong data pointer when  
forwarding MEMBER_TRAILER padding to the next streamer.  After  
astreamer_buffer_until() buffers the padding bytes, the 'data'  
pointer has been advanced past them, but the code passed 'data'  
instead of bbs_buffer.data.  This caused the downstream consumer  
to receive bytes from after the padding rather than the padding  
itself, and could read past the end of the input buffer.  
  
astreamer_gzip_decompressor_content() only checked for  
Z_STREAM_ERROR from inflate(), silently ignoring Z_DATA_ERROR  
(corrupted data) and Z_MEM_ERROR (out of memory).  Fix by  
treating any return other than Z_OK, Z_STREAM_END, and  
Z_BUF_ERROR as fatal.  
  
astreamer_gzip_decompressor_free() missed calling inflateEnd() to  
release zlib's internal decompression state.  
  
astreamer_tar_parser_free() neglected to pfree() the streamer  
struct itself, leaking it.  
  
astreamer_extractor_content() did not check the return value of  
fclose() when closing an extracted file.  A deferred write error  
(e.g., disk full on buffered I/O) would be silently lost.  
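
The tightened inflate() check reduces to classifying zlib return codes,
roughly as below; the constants are copied from zlib.h, while the helper
name is illustrative:

```c
#include <stdbool.h>

/* return codes as defined by zlib.h */
#define Z_OK			0
#define Z_STREAM_END	1
#define Z_STREAM_ERROR (-2)
#define Z_DATA_ERROR   (-3)
#define Z_MEM_ERROR    (-4)
#define Z_BUF_ERROR    (-5)

/*
 * Treat any return other than Z_OK, Z_STREAM_END, and Z_BUF_ERROR as
 * fatal, so corrupted data (Z_DATA_ERROR) and out-of-memory
 * (Z_MEM_ERROR) no longer pass silently.
 */
static bool
inflate_ret_is_fatal(int ret)
{
	return ret != Z_OK && ret != Z_STREAM_END && ret != Z_BUF_ERROR;
}
```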
  
Discussion: https://postgr.es/m/results/98c6b630-acbb-44a7-97fa-1692ce2b827c@dunslane.net  
  
Reviewed-By: Tom Lane <tgl@sss.pgh.pa.us>  
  
Backpatch-through: 15  

M src/bin/pg_basebackup/bbstreamer_file.c
M src/bin/pg_basebackup/bbstreamer_gzip.c
M src/bin/pg_basebackup/bbstreamer_tar.c

Avoid memory leak on error while parsing pg_stat_statements dump file

commit   : 351e59f344b57bb6958bdf455bf5069d6e52a922    
  
author   : Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Fri, 27 Mar 2026 12:20:38 +0200    
  
committer: Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Fri, 27 Mar 2026 12:20:38 +0200    

Click here for diff

By using palloc() instead of raw malloc().  
  
Reported-by: Gaurav Singh <gaurav.singh@yugabyte.com>  
Reviewed-by: Lukas Fittl <lukas@fittl.com>  
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>  
Discussion: https://www.postgresql.org/message-id/CAEcQ1bYR9s4eQLFDjzzJHU8fj-MTbmRpW-9J-r2gsCn+HEsynw@mail.gmail.com  
Backpatch-through: 14  

M contrib/pg_stat_statements/pg_stat_statements.c

Fix premature NULL lag reporting in pg_stat_replication

commit   : fdce5de552c2b5cb22678dbf2b37cf88da8fa2ec    
  
author   : Fujii Masao <fujii@postgresql.org>    
date     : Thu, 26 Mar 2026 20:49:31 +0900    
  
committer: Fujii Masao <fujii@postgresql.org>    
date     : Thu, 26 Mar 2026 20:49:31 +0900    

Click here for diff

pg_stat_replication is documented to keep the last measured lag values for  
a short time after the standby catches up, and then set them to NULL when  
there is no WAL activity. However, previously lag values could become NULL  
prematurely even while WAL activity was ongoing, especially in logical  
replication.  
  
This happened because the code cleared lag when two consecutive reply messages  
indicated that the apply location had caught up with the send location.  
It did not verify that the reported positions were unchanged, so lag could be  
cleared even when positions had advanced between messages. In logical  
replication, where the apply location often quickly catches up, this issue was  
more likely to occur.  
  
This commit fixes the issue by clearing lag only when the standby reports that  
it has fully replayed WAL (i.e., both flush and apply locations have caught up  
with the send location) and the write/flush/apply positions remain unchanged  
across two consecutive reply messages.  
  
The second message with unchanged positions typically results from  
wal_receiver_status_interval, so lag values are cleared after that interval  
when there is no activity. This avoids showing stale lag data while preventing  
premature NULL values.  
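
The new clearing condition can be sketched as below, with hypothetical
types and names; walsender.c tracks these positions differently:

```c
#include <stdbool.h>
#include <stdint.h>

typedef uint64_t XLogRecPtr;

typedef struct ReplyPos
{
	XLogRecPtr	write_pos;
	XLogRecPtr	flush_pos;
	XLogRecPtr	apply_pos;
} ReplyPos;

/*
 * Clear lag only when the standby has fully replayed WAL *and* the
 * reported positions are unchanged since the previous reply message.
 */
static bool
should_clear_lag(XLogRecPtr send_pos, const ReplyPos *prev,
				 const ReplyPos *cur)
{
	bool		fully_replayed = (cur->flush_pos >= send_pos &&
								  cur->apply_pos >= send_pos);
	bool		unchanged = (cur->write_pos == prev->write_pos &&
							 cur->flush_pos == prev->flush_pos &&
							 cur->apply_pos == prev->apply_pos);

	return fully_replayed && unchanged;
}
```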
  
Even with this fix, lag may rarely become NULL during activity if identical  
position reports are sent repeatedly. Eliminating such duplicate messages  
would address this fully, but that change is considered too invasive for stable  
branches and will be handled in master only later.  
  
Backpatch to all supported branches.  
  
Author: Shinya Kato <shinya11.kato@gmail.com>  
Reviewed-by: Chao Li <li.evan.chao@gmail.com>  
Reviewed-by: Fujii Masao <masao.fujii@gmail.com>  
Discussion: https://postgr.es/m/CAOzEurTzcUrEzrH97DD7+Yz=HGPU81kzWQonKZvqBwYhx2G9_A@mail.gmail.com  
Backpatch-through: 14  

M src/backend/replication/walsender.c

Fix copy-paste error in test_ginpostinglist

commit   : 791ff1df1ee040cf171100b173f88ffd9ea8a446    
  
author   : John Naylor <john.naylor@postgresql.org>    
date     : Tue, 24 Mar 2026 16:40:33 +0700    
  
committer: John Naylor <john.naylor@postgresql.org>    
date     : Tue, 24 Mar 2026 16:40:33 +0700    

Click here for diff

The check for a mismatch on the second decoded item pointer  
was an exact copy of the first item pointer check, comparing  
orig_itemptrs[0] with decoded_itemptrs[0] instead of orig_itemptrs[1]  
with decoded_itemptrs[1].  The error message also reported (0, 1) as  
the expected value instead of (blk, off).  As a result, any decoding  
error in the second item pointer (where the varbyte delta encoding  
is exercised) would go undetected.  
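
The corrected test logic boils down to comparing each decoded item
pointer against the matching original, for example as below
(illustrative types, not the module's real ItemPointerData handling):

```c
typedef struct ItemPtr
{
	unsigned	blk;
	unsigned	off;
} ItemPtr;

/*
 * Return the index of the first mismatching item pointer, or -1 if all
 * n entries match.  Using the loop index everywhere avoids the
 * copy-paste trap of comparing index 0 twice.
 */
static int
first_mismatch(const ItemPtr *orig, const ItemPtr *decoded, int n)
{
	for (int i = 0; i < n; i++)
	{
		if (orig[i].blk != decoded[i].blk || orig[i].off != decoded[i].off)
			return i;
	}
	return -1;
}
```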
  
This has been wrong since commit bde7493d1, so backpatch to all  
supported versions.  
  
Author: Jianghua Yang <yjhjstz@gmail.com>  
Discussion: https://postgr.es/m/CAAZLFmSOD8R7tZjRLZsmpKtJLoqjgawAaM-Pne1j8B_Q2aQK8w@mail.gmail.com  
Backpatch-through: 14  

M src/test/modules/test_ginpostinglist/test_ginpostinglist.c

Fix multixact backwards-compatibility with CHECKPOINT race condition

commit   : 1ca38503219abf6b291c47c7586a8a82f8014f8d    
  
author   : Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Mon, 23 Mar 2026 11:53:32 +0200    
  
committer: Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Mon, 23 Mar 2026 11:53:32 +0200    

Click here for diff

If a CHECKPOINT record with nextMulti N is written to the WAL before  
the CREATE_ID record for N, and N happens to be the first multixid on  
an offset page, the backwards compatibility logic to tolerate WAL  
generated by older minor versions (before commit 789d65364c) failed to  
compensate for the missing XLOG_MULTIXACT_ZERO_OFF_PAGE record. In  
that case, the latest_page_number was initialized at the start of WAL  
replay to the page for nextMulti from the CHECKPOINT record, even if  
we had not seen the CREATE_ID record for that multixid yet, which  
fooled the backwards compatibility logic to think that the page was  
already initialized.  
  
To fix, track the last XLOG_MULTIXACT_ZERO_OFF_PAGE that we've seen  
separately from latest_page_number. If we haven't seen any  
XLOG_MULTIXACT_ZERO_OFF_PAGE records yet, use  
SimpleLruDoesPhysicalPageExist() to check if the page needs to be  
initialized.  
  
Reported-by: duankunren.dkr <duankunren.dkr@alibaba-inc.com>  
Analyzed-by: duankunren.dkr <duankunren.dkr@alibaba-inc.com>  
Reviewed-by: Andrey Borodin <x4mmm@yandex-team.ru>  
Reviewed-by: Kirill Reshke <reshkekirill@gmail.com>  
Discussion: https://www.postgresql.org/message-id/c4ef1737-8cba-458e-b6fd-4e2d6011e985.duankunren.dkr@alibaba-inc.com  
Backpatch-through: 14-18  

M src/backend/access/transam/multixact.c
M src/include/access/slru.h

Fix finalization of decompressor astreamers.

commit   : 6ccfc44922123d497501a491b8becfca7b0096bb    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sun, 22 Mar 2026 18:06:48 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sun, 22 Mar 2026 18:06:48 -0400    

Click here for diff

Send the correct amount of data to the next astreamer, not the  
whole allocated buffer size.  This bug escaped detection because  
in present uses the next astreamer is always a tar-file parser  
which is insensitive to trailing garbage.  But that may not  
be true in future uses.  
  
Author: Andrew Dunstan <andrew@dunslane.net>  
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>  
Discussion: https://postgr.es/m/2178517.1774064942@sss.pgh.pa.us  
Backpatch-through: 15  

M src/bin/pg_basebackup/bbstreamer_gzip.c
M src/bin/pg_basebackup/bbstreamer_lz4.c
M src/bin/pg_basebackup/bbstreamer_zstd.c

Fix dependency on FDW handler.

commit   : 876fa84a275eea578b4a89bb910e184f98d991c2    
  
author   : Jeff Davis <jdavis@postgresql.org>    
date     : Thu, 19 Mar 2026 14:59:07 -0700    
  
committer: Jeff Davis <jdavis@postgresql.org>    
date     : Thu, 19 Mar 2026 14:59:07 -0700    

Click here for diff

ALTER FOREIGN DATA WRAPPER could drop the dependency on the handler  
function if it wasn't explicitly specified.  
  
Reviewed-by: Nathan Bossart <nathandbossart@gmail.com>  
Discussion: https://postgr.es/m/35c44a4b7fb76d35418c4d66b775a88f4ce60c86.camel@j-davis.com  
Backpatch-through: 14  

M src/backend/commands/foreigncmds.c
M src/test/regress/expected/foreign_data.out
M src/test/regress/sql/foreign_data.sql

Fix WAL flush LSN used by logical walsender during shutdown

commit   : 8ee536c89517c0d2fc9df2177eb1e51e61367a68    
  
author   : Fujii Masao <fujii@postgresql.org>    
date     : Tue, 17 Mar 2026 08:10:20 +0900    
  
committer: Fujii Masao <fujii@postgresql.org>    
date     : Tue, 17 Mar 2026 08:10:20 +0900    

Click here for diff

Commit 6eedb2a5fd8 made the logical walsender call  
XLogFlush(GetXLogInsertRecPtr()) to ensure that all pending WAL is flushed,  
fixing a publisher shutdown hang. However, if the last WAL record ends at  
a page boundary, GetXLogInsertRecPtr() can return an LSN pointing past  
the page header, which can cause XLogFlush() to report an error.  
  
A similar issue previously existed in the GiST code. Commit b1f14c96720  
introduced GetXLogInsertEndRecPtr(), which returns a safe WAL insertion end  
location (returning the start of the page when the last record ends at a page  
boundary), and updated the GiST code to use it with XLogFlush().  
  
This commit fixes the issue by making the logical walsender use  
XLogFlush(GetXLogInsertEndRecPtr()) when flushing pending WAL during shutdown.  
  
Backpatch to all supported versions.  
  
Reported-by: Andres Freund <andres@anarazel.de>  
Author: Anthonin Bonnefoy <anthonin.bonnefoy@datadoghq.com>  
Reviewed-by: Fujii Masao <masao.fujii@gmail.com>  
Discussion: https://postgr.es/m/vzguaguldbcyfbyuq76qj7hx5qdr5kmh67gqkncyb2yhsygrdt@dfhcpteqifux  
Backpatch-through: 14  

M src/backend/replication/walsender.c

Tighten asserts on ParallelWorkerNumber

commit   : 2fa42feb7921d9486d56a38608d2b27b8f4e607c    
  
author   : Tomas Vondra <tomas.vondra@postgresql.org>    
date     : Sat, 14 Mar 2026 15:24:37 +0100    
  
committer: Tomas Vondra <tomas.vondra@postgresql.org>    
date     : Sat, 14 Mar 2026 15:24:37 +0100    

Click here for diff

The comment about ParallelWorkerNumber in parallel.c says:  
  
  In parallel workers, it will be set to a value >= 0 and < the number  
  of workers before any user code is invoked; each parallel worker will  
  get a different parallel worker number.  
  
However, asserts in various places that collect instrumentation allowed  
(ParallelWorkerNumber == num_workers). That would be a bug, as the value  
is used as an index into an array with num_workers entries.  
  
Fixed by adjusting the asserts accordingly. Backpatch to all supported  
versions.  
  
Discussion: https://postgr.es/m/5db067a1-2cdf-4afb-a577-a04f30b69167@vondra.me  
Reviewed-by: Bertrand Drouvot <bertranddrouvot.pg@gmail.com>  
Backpatch-through: 14  

M src/backend/executor/nodeAgg.c
M src/backend/executor/nodeIncrementalSort.c
M src/backend/executor/nodeMemoize.c
M src/backend/executor/nodeSort.c

Use GetXLogInsertEndRecPtr in gistGetFakeLSN

commit   : 6ef36bb358842e0f3a2de0cbf622c67c808706cf    
  
author   : Tomas Vondra <tomas.vondra@postgresql.org>    
date     : Fri, 13 Mar 2026 22:42:29 +0100    
  
committer: Tomas Vondra <tomas.vondra@postgresql.org>    
date     : Fri, 13 Mar 2026 22:42:29 +0100    

Click here for diff

The function used GetXLogInsertRecPtr() to generate the fake LSN. Most  
of the time this is the same as what XLogInsert() would return, and so  
it works fine with the XLogFlush() call. But if the last record ends at  
a page boundary, GetXLogInsertRecPtr() returns LSN pointing after the  
page header. In such a case XLogFlush() fails with errors like this:  
  
  ERROR: xlog flush request 0/01BD2018 is not satisfied --- flushed only to 0/01BD2000  
  
Such failures are very hard to trigger, particularly outside aggressive  
test scenarios.  
  
Fixed by introducing GetXLogInsertEndRecPtr(), returning the correct LSN  
without skipping the header. This is the same as GetXLogInsertRecPtr(),  
except that it calls XLogBytePosToEndRecPtr().  
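
The effect of the new function can be sketched as follows, simplified
and with an illustrative short-header size; the real code goes through
XLogBytePosToEndRecPtr() and usable-byte accounting:

```c
#include <stdint.h>

typedef uint64_t XLogRecPtr;

#define XLOG_BLCKSZ			8192
#define SIZEOF_PAGE_HEADER	24	/* illustrative short-header size */

/*
 * If the raw insert pointer sits just past a page header (i.e. the
 * previous record ended exactly at a page boundary), report the page
 * start instead, which is a valid flush target.
 */
static XLogRecPtr
insert_end_recptr(XLogRecPtr raw)
{
	if (raw % XLOG_BLCKSZ == SIZEOF_PAGE_HEADER)
		return raw - SIZEOF_PAGE_HEADER;	/* back up to page start */
	return raw;
}
```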
  
Initial investigation by me, root cause identified by Andres Freund.  
  
This is a long-standing bug in gistGetFakeLSN(), probably introduced by  
c6b92041d38 in PG13. Backpatch to all supported versions.  
  
Reported-by: Peter Geoghegan <pg@bowt.ie>  
Reviewed-by: Andres Freund <andres@anarazel.de>  
Reviewed-by: Noah Misch <noah@leadboat.com>  
Discussion: https://postgr.es/m/vf4hbwrotvhbgcnknrqmfbqlu75oyjkmausvy66ic7x7vuhafx@e4rvwavtjswo  
Backpatch-through: 14  

M src/backend/access/gist/gistutil.c
M src/backend/access/transam/xlog.c
M src/include/access/xlog.h

xml2: Fix failure with xslt_process() under -fsanitize=undefined

commit   : 63aa4342da23156e612a81c96c7634e49f534bc3    
  
author   : Michael Paquier <michael@paquier.xyz>    
date     : Fri, 13 Mar 2026 16:06:47 +0900    
  
committer: Michael Paquier <michael@paquier.xyz>    
date     : Fri, 13 Mar 2026 16:06:47 +0900    

Click here for diff

The logic of xslt_process() has never considered the fact that  
xsltSaveResultToString() would return NULL for an empty string (the  
upstream code has always done so, with a string length of 0).  This  
would cause memcpy() to be called with a NULL pointer, something  
forbidden by POSIX.  
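
A minimal sketch of the guard, with a hypothetical helper; the real fix
lives in xslt_proc.c:

```c
#include <stddef.h>
#include <string.h>

/*
 * Copy a library result into dst, tolerating src == NULL with len == 0
 * (as xsltSaveResultToString() can report for an empty result), since
 * calling memcpy() with a NULL pointer is undefined behavior.
 */
static void
copy_result(char *dst, const char *src, size_t len)
{
	if (len > 0)
		memcpy(dst, src, len);
	dst[len] = '\0';
}
```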
  
Like 46ab07ffda9d and similar fixes, this is backpatched down to all the  
supported branches, with a test case to cover this scenario.  An empty  
string has always been returned by xml2 in this case, based on the  
history of the module, so this is an old issue.  
  
Reported-by: Alexander Lakhin <exclusion@gmail.com>  
Discussion: https://postgr.es/m/c516a0d9-4406-47e3-9087-5ca5176ebcf9@gmail.com  
Backpatch-through: 14  

M contrib/xml2/expected/xml2.out
M contrib/xml2/expected/xml2_1.out
M contrib/xml2/sql/xml2.sql
M contrib/xml2/xslt_proc.c

Prevent restore of incremental backup from bloating VM fork.

commit   : 076bc57fa46bd7359d55abf151e55c91443a5a58    
  
author   : Robert Haas <rhaas@postgresql.org>    
date     : Mon, 9 Mar 2026 06:36:42 -0400    
  
committer: Robert Haas <rhaas@postgresql.org>    
date     : Mon, 9 Mar 2026 06:36:42 -0400    

Click here for diff

When I (rhaas) wrote the WAL summarizer code, I incorrectly believed  
that XLOG_SMGR_TRUNCATE truncates all forks to the same length.  In  
fact, what other parts of the code do is compute the truncation length  
for the FSM and VM forks from the truncation length used for the main  
fork. But, because I was confused, I coded the WAL summarizer to set the  
limit block for the VM fork to the same value as for the main fork.  
(Incremental backup always copies FSM forks in full, so there is no  
similar issue in that case.)  
  
Doing that doesn't directly cause any data corruption, as far as I can  
see. However, it does create a serious risk of consuming a large amount  
of extra disk space, because pg_combinebackup's reconstruct.c believes  
that the reconstructed file should always be at least as long as the  
limit block value. We might want to be smarter about that at some point  
in the future, because it's always safe to omit all-zeroes blocks at the  
end of the last segment of a relation, and doing so could save disk  
space. However, the current algorithm will rarely waste enough disk space  
to worry about unless we believe that a relation has been truncated to a  
length much longer than its actual length on disk, which is exactly what  
happens as a result of the problem mentioned in the previous paragraph.  
  
To fix, create a new visibilitymap helper function and use it to include  
the right limit block in the summary files. Incremental backups taken  
with existing summary files will still have this issue, but this should  
improve the situation going forward.  
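
The corrected derivation can be sketched as below, simplified and
ignoring the VM page header that visibilitymap.c's HEAPBLK_TO_MAPBLOCK
accounts for:

```c
#include <stdint.h>

#define BLCKSZ 8192

/*
 * With 2 bits per heap block, one VM page covers roughly BLCKSZ * 4
 * heap blocks (the real figure is slightly smaller because of the VM
 * page header).
 */
#define HEAPBLOCKS_PER_VM_PAGE (BLCKSZ * 4)

/*
 * The VM fork's limit block must be derived from the main fork's
 * truncation point, not copied from it: it is the VM page holding the
 * bits for the first truncated heap block.
 */
static uint32_t
vm_limit_block(uint32_t main_fork_nblocks)
{
	return main_fork_nblocks / HEAPBLOCKS_PER_VM_PAGE;
}
```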
  
Diagnosed-by: Oleg Tkachenko <oatkachenko@gmail.com>  
Diagnosed-by: Amul Sul <sulamul@gmail.com>  
Discussion: http://postgr.es/m/CAAJ_b97PqG89hvPNJ8cGwmk94gJ9KOf_pLsowUyQGZgJY32o9g@mail.gmail.com  
Discussion: http://postgr.es/m/6897DAF7-B699-41BF-A6FB-B818FCFFD585%40gmail.com  
Backpatch-through: 17  

M src/backend/access/heap/visibilitymap.c
M src/backend/postmaster/walsummarizer.c
M src/bin/pg_combinebackup/t/011_ib_truncation.pl
M src/include/access/visibilitymap.h

doc: Document IF NOT EXISTS option for ALTER FOREIGN TABLE ADD COLUMN.

commit   : c3fa8399415f404275767bd0c737c883e2cf5334    
  
author   : Fujii Masao <fujii@postgresql.org>    
date     : Mon, 9 Mar 2026 18:24:41 +0900    
  
committer: Fujii Masao <fujii@postgresql.org>    
date     : Mon, 9 Mar 2026 18:24:41 +0900    

Click here for diff

Commit 2cd40adb85d added the IF NOT EXISTS option to ALTER TABLE ADD COLUMN.  
This also enabled IF NOT EXISTS for ALTER FOREIGN TABLE ADD COLUMN,  
but the ALTER FOREIGN TABLE documentation was not updated to mention it.  
  
This commit updates the documentation to describe the IF NOT EXISTS option for  
ALTER FOREIGN TABLE ADD COLUMN.  
  
While updating that section, this commit also clarifies that the COLUMN keyword  
is optional in ALTER FOREIGN TABLE ADD/DROP COLUMN. Previously, part of  
the documentation could be read as if COLUMN were required.  
  
This commit adds regression tests covering these ALTER FOREIGN TABLE syntaxes.  
  
Backpatch to all supported versions.  
  
Suggested-by: Fujii Masao <masao.fujii@gmail.com>  
Author: Chao Li <lic@highgo.com>  
Reviewed-by: Robert Treat <rob@xzilla.net>  
Reviewed-by: Fujii Masao <masao.fujii@gmail.com>  
Discussion: https://postgr.es/m/CAHGQGwFk=rrhrwGwPtQxBesbT4DzSZ86Q3ftcwCu3AR5bOiXLw@mail.gmail.com  
Backpatch-through: 14  

M doc/src/sgml/ref/alter_foreign_table.sgml
M src/test/regress/expected/foreign_data.out
M src/test/regress/sql/foreign_data.sql

Fix size underestimation of DSA pagemap for odd-sized segments

commit   : 2543b9ea92fe49c0e4ba1d5adfdc1178c977f371    
  
author   : Michael Paquier <michael@paquier.xyz>    
date     : Mon, 9 Mar 2026 13:46:33 +0900    
  
committer: Michael Paquier <michael@paquier.xyz>    
date     : Mon, 9 Mar 2026 13:46:33 +0900    

Click here for diff

When make_new_segment() creates an odd-sized segment, the pagemap was  
sized for only usable_pages entries, forgetting that a  
segment also contains metadata pages, and that the FreePageManager uses  
absolute page indices that cover the entire segment.  This  
miscalculation could cause accesses to pagemap entries to be out of  
bounds.  During subsequent reuse of the allocated segment, allocations  
landing on pages with indices higher than usable_pages could cause  
out-of-bounds pagemap reads and/or writes.  On write, 'span' pointers  
are stored into the data area, corrupting the allocated objects.  On  
read (aka during a dsa_free), garbage is interpreted as a span pointer,  
typically crashing the server in dsa_get_address().  
  
The normal geometric path correctly sizes the pagemap for all pages in  
the segment.  The odd-sized path needs to do the same, but it works  
forward from usable_pages rather than backward from total_size.  
  
This commit fixes the sizing of the odd-sized case by adding pagemap  
entries for the metadata pages after the initial metadata_bytes  
calculation, using an integer ceiling division to compute the exact  
number of additional entries needed in one go, avoiding any iteration in  
the calculation.  
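
The integer ceiling division described above can be sketched as follows; the constant and function names here are illustrative, not the actual dsa.c code:

```c
#include <assert.h>
#include <stddef.h>

/* Page size used by the free-page manager (value here is illustrative). */
#define FPM_PAGE_SIZE 4096

/*
 * Integer ceiling division: number of FPM pages needed to cover
 * 'bytes' bytes, computed in one go without iteration.
 */
static inline size_t
pages_for_bytes(size_t bytes)
{
    return (bytes + FPM_PAGE_SIZE - 1) / FPM_PAGE_SIZE;
}

/*
 * Sketch of the corrected sizing: the pagemap must cover the metadata
 * pages as well as the usable pages, because the FreePageManager uses
 * absolute page indices over the entire segment.
 */
static inline size_t
pagemap_entries(size_t metadata_bytes, size_t usable_pages)
{
    return pages_for_bytes(metadata_bytes) + usable_pages;
}
```

Sizing the pagemap from usable_pages alone, without the metadata term, is exactly the underestimation the fix addresses.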
  
An assertion is added in the code path for odd-sized segments, ensuring  
that the pagemap includes the metadata area, and that the result is  
appropriately sized.  
  
This problem would show up depending on the size requested for the  
allocation of a DSA segment.  The reporter has noticed this issue when a  
parallel hash join makes a DSA allocation large enough to trigger the  
odd-sized segment path, but it could happen for anything that does a DSA  
allocation.  
  
A regression test is added to test_dsa, down to v17 where the test  
module was introduced.  This adds a set of cheap tests that exercise the  
problem, with the new assertion being useful for this purpose.  Sami  
proposed a test that took longer to run than the one here; the committed  
test is faster and still good enough to exercise the odd-sized  
allocation path.  
  
Author: Paul Bunn <paul.bunn@icloud.com>  
Reviewed-by: Sami Imseih <samimseih@gmail.com>  
Reviewed-by: Chao Li <li.evan.chao@gmail.com>  
Reviewed-by: Michael Paquier <michael@paquier.xyz>  
Discussion: https://postgr.es/m/044401dcabac$fe432490$fac96db0$@icloud.com  
Backpatch-through: 14  

M src/backend/utils/mmgr/dsa.c
M src/test/modules/test_dsa/expected/test_dsa.out
M src/test/modules/test_dsa/sql/test_dsa.sql
M src/test/modules/test_dsa/test_dsa--1.0.sql
M src/test/modules/test_dsa/test_dsa.c

Fix publisher shutdown hang caused by logical walsender busy loop.

commit   : bbbc0888b356af7687fc917e7835701a75955503    
  
author   : Fujii Masao <fujii@postgresql.org>    
date     : Fri, 6 Mar 2026 16:43:40 +0900    
  
committer: Fujii Masao <fujii@postgresql.org>    
date     : Fri, 6 Mar 2026 16:43:40 +0900    

Click here for diff

Previously, when logical replication was running, shutting down  
the publisher could cause the logical walsender to enter a busy loop  
and prevent the publisher from completing shutdown.  
  
During shutdown, the logical walsender waits for all pending WAL  
to be written out. However, some WAL records could remain unflushed,  
causing the walsender to wait indefinitely.  
  
The issue occurred because the walsender used XLogBackgroundFlush() to  
flush pending WAL. This function does not guarantee that all WAL is written.  
For example, WAL generated by a transaction without an assigned  
transaction ID that aborts might not be flushed.  
  
This commit fixes the bug by making the logical walsender call XLogFlush()  
instead, ensuring that all pending WAL is written and preventing  
the busy loop during shutdown.  
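
As a toy model (not the real WAL machinery), the difference between a background flush that only writes out completed pages and a full flush can be sketched like this:

```c
#include <assert.h>
#include <stdbool.h>

#define PAGE 8192

/* Toy WAL state: bytes written to buffers vs. bytes flushed to disk. */
typedef struct ToyWal { long written; long flushed; } ToyWal;

/* Model of a background flush: flushes only whole pages, so a partial
 * trailing page can stay unflushed indefinitely. */
static void toy_background_flush(ToyWal *wal)
{
    long page_boundary = (wal->written / PAGE) * PAGE;
    if (page_boundary > wal->flushed)
        wal->flushed = page_boundary;
}

/* Model of a full flush (XLogFlush-like): flushes everything written. */
static void toy_flush_all(ToyWal *wal)
{
    wal->flushed = wal->written;
}

/* The condition a shutting-down walsender waits for. */
static bool toy_all_flushed(const ToyWal *wal)
{
    return wal->flushed == wal->written;
}

/* Does one background flush satisfy the shutdown condition? */
static bool background_flush_suffices(long written)
{
    ToyWal w = {written, 0};
    toy_background_flush(&w);
    return toy_all_flushed(&w);
}

/* Does a full flush satisfy it? */
static bool full_flush_suffices(long written)
{
    ToyWal w = {written, 0};
    toy_flush_all(&w);
    return toy_all_flushed(&w);
}
```

In this model, any WAL ending mid-page leaves the background-flush variant waiting forever, which mirrors the busy loop the commit removes.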
  
Backpatch to all supported versions.  
  
Author: Anthonin Bonnefoy <anthonin.bonnefoy@datadoghq.com>  
Reviewed-by: Alexander Lakhin <exclusion@gmail.com>  
Reviewed-by: Fujii Masao <masao.fujii@gmail.com>  
Discussion: https://postgr.es/m/CAO6_Xqo3co3BuUVEVzkaBVw9LidBgeeQ_2hfxeLMQcXwovB3GQ@mail.gmail.com  
Backpatch-through: 14  

M src/backend/replication/walsender.c

Exit after fatal errors in client-side compression code.

commit   : 8b198b093fd2ccc2712f6b5a43665818310698f4    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 5 Mar 2026 14:43:21 -0500    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 5 Mar 2026 14:43:21 -0500    

Click here for diff

It looks like whoever wrote the astreamer (nee bbstreamer) code  
thought that pg_log_error() is equivalent to elog(ERROR), but  
it's not; it just prints a message.  So all these places tried to  
continue on after a compression or decompression error return,  
with the inevitable result being garbage output and possibly  
cascading error messages.  We should use pg_fatal() instead.  
  
These error conditions are probably pretty unlikely in practice,  
which no doubt accounts for the lack of field complaints.  
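
The log-and-continue mistake can be illustrated with hypothetical stand-ins for pg_log_error() and pg_fatal() (mock names; the real helpers differ in detail):

```c
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical mock-ups: like pg_log_error(), mock_log_error() only
 * prints and returns; like pg_fatal(), mock_fatal() prints and exits. */
#define mock_log_error(msg) fprintf(stderr, "error: %s\n", (msg))
#define mock_fatal(msg) do { fprintf(stderr, "fatal: %s\n", (msg)); exit(1); } while (0)

/* A decompression step that can fail; returns -1 on error. */
static int decompress_chunk(int simulate_failure)
{
    return simulate_failure ? -1 : 0;
}

/* Buggy pattern: log the error, then fall through anyway. */
static int process_buggy(int simulate_failure)
{
    if (decompress_chunk(simulate_failure) < 0)
        mock_log_error("could not decompress data");
    /* execution continues here even after the error ... */
    return 42;   /* caller sees a bogus "success" value */
}
```

With mock_fatal() in place of mock_log_error(), the error path would exit immediately and the bogus return value (and any cascading garbage output) could never be reached.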
  
Author: Tom Lane <tgl@sss.pgh.pa.us>  
Reviewed-by: Chao Li <li.evan.chao@gmail.com>  
Discussion: https://postgr.es/m/1531718.1772644615@sss.pgh.pa.us  
Backpatch-through: 15  

M src/bin/pg_basebackup/bbstreamer_gzip.c
M src/bin/pg_basebackup/bbstreamer_lz4.c
M src/bin/pg_basebackup/bbstreamer_zstd.c
M src/bin/pg_dump/compress_lz4.c

Fix handling of updated tuples in the MERGE statement

commit   : 2dcac93c00655cc62e8105dc12b6cd302d2c0625    
  
author   : Alexander Korotkov <akorotkov@postgresql.org>    
date     : Thu, 5 Mar 2026 19:47:20 +0200    
  
committer: Alexander Korotkov <akorotkov@postgresql.org>    
date     : Thu, 5 Mar 2026 19:47:20 +0200    

Click here for diff

This branch missed the IsolationUsesXactSnapshot() check.  That led to  
EPQ (EvalPlanQual) rechecks being run at the repeatable read and  
serializable isolation levels, where a serialization failure should be  
raised instead.  This commit fixes the issue and provides a simple  
isolation check for that.  Backpatch through v15, where the MERGE  
statement was introduced.  
  
Reported-by: Tender Wang <tndrwang@gmail.com>  
Discussion: https://postgr.es/m/CAPpHfdvzZSaNYdj5ac-tYRi6MuuZnYHiUkZ3D-AoY-ny8v%2BS%2Bw%40mail.gmail.com  
Author: Tender Wang <tndrwang@gmail.com>  
Reviewed-by: Dean Rasheed <dean.a.rasheed@gmail.com>  
Backpatch-through: 15  

M src/backend/executor/nodeModifyTable.c
M src/test/isolation/expected/merge-update.out
M src/test/isolation/specs/merge-update.spec

doc: Clarify that COLUMN is optional in ALTER TABLE ... ADD/DROP COLUMN.

commit   : 28239b761d20774b7d1a66616482b84db9baaafb    
  
author   : Fujii Masao <fujii@postgresql.org>    
date     : Thu, 5 Mar 2026 12:55:52 +0900    
  
committer: Fujii Masao <fujii@postgresql.org>    
date     : Thu, 5 Mar 2026 12:55:52 +0900    

Click here for diff

In ALTER TABLE ... ADD/DROP COLUMN, the COLUMN keyword is optional. However,  
part of the documentation could be read as if COLUMN were required, which may  
mislead users about the command syntax.  
  
This commit updates the ALTER TABLE documentation to clearly state that  
COLUMN is optional for ADD and DROP.  
  
This commit also adds regression tests covering ALTER TABLE ... ADD/DROP  
without the COLUMN keyword.  
  
Backpatch to all supported versions.  
  
Author: Chao Li <lic@highgo.com>  
Reviewed-by: Robert Treat <rob@xzilla.net>  
Reviewed-by: Fujii Masao <masao.fujii@gmail.com>  
Discussion: https://postgr.es/m/CAEoWx2n6ShLMOnjOtf63TjjgGbgiTVT5OMsSOFmbjGb6Xue1Bw@mail.gmail.com  
Backpatch-through: 14  

M doc/src/sgml/ref/alter_table.sgml
M src/test/regress/expected/alter_table.out
M src/test/regress/sql/alter_table.sql

Fix rare instability in recovery TAP test 004_timeline_switch

commit   : 6fb8f424be05148b584f0f206612fb35e4b0e107    
  
author   : Michael Paquier <michael@paquier.xyz>    
date     : Thu, 5 Mar 2026 10:06:03 +0900    
  
committer: Michael Paquier <michael@paquier.xyz>    
date     : Thu, 5 Mar 2026 10:06:03 +0900    

Click here for diff

This fixes a problem similar to ad8c86d22cbd.  In this case, the test  
could fail under the following circumstances:  
- The primary is stopped with teardown_node(), meaning that it may not  
be able to send all its WAL records to standby_1 and standby_2.  
- If standby_2 receives more records than standby_1, attempting to  
reconnect standby_2 to the promoted standby_1 would fail because of a  
timeline fork.  
  
This race condition is fixed with a simple trick: instead of tearing  
down the primary, it is stopped cleanly so that all the WAL records of  
the primary are received and flushed by both standby_1 and standby_2.  Once  
we do that, there is no need for a wait_for_catchup() before stopping  
the node.  The test wants to check that a timeline jump can be achieved  
when reconnecting a standby to a promoted standby in the same cluster,  
hence an immediate stop of the primary is not required.  
  
This failure is harder to reach than the previous instability of  
009_twophase; still, the buildfarm has been able to detect it at least  
once.  I have tried Alexander Lakhin's test trick with the bgwriter and  
very aggressive standby snapshots, but I could not reproduce it  
directly.  It is reachable, though, as the buildfarm has proved.  
  
Backpatch down to all supported branches, as this problem can lead to  
spurious failures in the buildfarm.  
  
Discussion: https://postgr.es/m/493401a8-063f-436a-8287-a235d9e065fc@gmail.com  
Backpatch-through: 14  

M src/test/recovery/t/004_timeline_switch.pl

Fix yet another bug in archive streamer with LZ4 decompression.

commit   : 2640c5ba772e603dbd96b9221a23f27625d06ff9    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Wed, 4 Mar 2026 12:08:37 -0500    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Wed, 4 Mar 2026 12:08:37 -0500    

Click here for diff

The code path in astreamer_lz4_decompressor_content() that updated  
the output pointers when the output buffer isn't full was wrong.  
It advanced next_out by bytes_written, which could include previous  
decompression output not just that of the current cycle.  The  
correct amount to advance is out_size.  While at it, make the  
output pointer updates look more like the input pointer updates.  
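
The pointer-advance mistake can be modeled with plain offsets; this is a toy model of the bookkeeping, not the astreamer code itself:

```c
#include <assert.h>
#include <stddef.h>

/* Toy model of the decompressor's output bookkeeping: 'next_out' is the
 * write position in the output buffer, 'bytes_written' the cumulative
 * bytes produced across all decompression cycles. */
typedef struct ToyOut {
    size_t next_out;       /* write position in the output buffer */
    size_t bytes_written;  /* total produced across all cycles */
} ToyOut;

/* Correct update: advance by this cycle's output (out_size) only. */
static void advance_correct(ToyOut *st, size_t out_size)
{
    st->bytes_written += out_size;
    st->next_out += out_size;
}

/* Buggy update (pre-fix): advance by the cumulative count, which
 * over-advances whenever a previous cycle already produced output. */
static void advance_buggy(ToyOut *st, size_t out_size)
{
    st->bytes_written += out_size;
    st->next_out += st->bytes_written;
}

/* Decompress two consecutive small frames that never fill the output
 * buffer, and report where next_out ends up. */
static size_t run_two_frames(void (*advance)(ToyOut *, size_t))
{
    ToyOut st = {0, 0};
    advance(&st, 10);   /* first small frame */
    advance(&st, 10);   /* second small frame; buffer still not full */
    return st.next_out;
}
```

After two 10-byte frames the correct position is 20; the buggy variant lands at 30, having double-counted the first frame's output.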
  
This bug is pretty hard to reach, as it requires consecutive  
compression frames that are too small to fill the output buffer.  
pg_dump could have produced such data before 66ec01dc4, but  
I'm unsure whether any files we use astreamer with would be  
likely to contain problematic data.  
  
Author: Chao Li <lic@highgo.com>  
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>  
Discussion: https://postgr.es/m/0594CC79-1544-45DD-8AA4-26270DE777A7@gmail.com  
Backpatch-through: 15  

M src/bin/pg_basebackup/bbstreamer_lz4.c

Don't malloc(0) in EventTriggerCollectAlterTSConfig

commit   : 616798d0114a4261c88a6b9938bd01682fb504f0    
  
author   : Álvaro Herrera <alvherre@kurilemu.de>    
date     : Wed, 4 Mar 2026 15:04:53 +0100    
  
committer: Álvaro Herrera <alvherre@kurilemu.de>    
date     : Wed, 4 Mar 2026 15:04:53 +0100    

Click here for diff

Author: Florin Irion <florin.irion@enterprisedb.com>  
Discussion: https://postgr.es/m/c6fff161-9aee-4290-9ada-71e21e4d84de@gmail.com  
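
The commit body is terse, but the pitfall is a classic one: malloc(0) may legally return either NULL or a non-NULL pointer, so a NULL result from a zero-sized request can be misread as out-of-memory. One defensive pattern (a sketch under assumed names, not the committed fix) is to skip the allocation entirely when there is nothing to copy:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical sample data standing in for the dictionary OID list. */
static const int sample_oids[3] = {16384, 16385, 16386};

/*
 * Copy an array of OIDs, avoiding malloc(0): for a zero-length input
 * there is nothing to allocate, and a NULL return from malloc(0) must
 * not be mistaken for out-of-memory.  (Function name is illustrative.)
 */
static int *
copy_dict_oids(const int *src, size_t ndicts)
{
    int *copy;

    if (ndicts == 0)
        return NULL;            /* nothing to allocate or copy */
    copy = malloc(ndicts * sizeof(int));
    if (copy == NULL)
        abort();                /* genuine out-of-memory */
    memcpy(copy, src, ndicts * sizeof(int));
    return copy;
}

/* Helper for demonstration: copy the array and read back element i. */
static int
copy_and_read(const int *src, size_t ndicts, size_t i)
{
    int *c = copy_dict_oids(src, ndicts);
    int v = c[i];
    free(c);
    return v;
}
```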

M src/backend/commands/event_trigger.c
M src/test/modules/test_ddl_deparse/Makefile
A src/test/modules/test_ddl_deparse/expected/textsearch.out
M src/test/modules/test_ddl_deparse/meson.build
A src/test/modules/test_ddl_deparse/sql/textsearch.sql

Add test for row-locking and multixids with prepared transactions

commit   : 969576dab47b2998a259b388c8212aee3e2cdeea    
  
author   : Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Wed, 4 Mar 2026 11:29:02 +0200    
  
committer: Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Wed, 4 Mar 2026 11:29:02 +0200    

Click here for diff

This is a repro for the issue fixed in commit ccae90abdb. Backpatch to  
v17 like that commit, although that's a little arbitrary as this test  
would work on older versions too.  
  
Author: Sami Imseih <samimseih@gmail.com>  
Discussion: https://www.postgresql.org/message-id/CAA5RZ0twq5bNMq0r0QNoopQnAEv+J3qJNCrLs7HVqTEntBhJ=g@mail.gmail.com  
Backpatch-through: 17  

M src/test/regress/expected/prepared_xacts.out
M src/test/regress/sql/prepared_xacts.sql

Skip prepared_xacts test if max_prepared_transactions < 2

commit   : 80d1d4ee06fdb6a6c60ff55435fd78511004d1d9    
  
author   : Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Wed, 4 Mar 2026 11:06:43 +0200    
  
committer: Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Wed, 4 Mar 2026 11:06:43 +0200    

Click here for diff

This reduces maintenance overhead, as we no longer need to update the  
dummy expected output file every time the .sql file changes.  
  
Discussion: https://www.postgresql.org/message-id/1009073.1772551323@sss.pgh.pa.us  
Backpatch-through: 14  

M src/test/regress/expected/prepared_xacts.out
M src/test/regress/expected/prepared_xacts_1.out
M src/test/regress/sql/prepared_xacts.sql

Fix rare instability in recovery TAP test 009_twophase

commit   : c81b10459bb84776ca2a696ae638a6a2a85b4e88    
  
author   : Michael Paquier <michael@paquier.xyz>    
date     : Wed, 4 Mar 2026 16:30:59 +0900    
  
committer: Michael Paquier <michael@paquier.xyz>    
date     : Wed, 4 Mar 2026 16:30:59 +0900    

Click here for diff

The phase of the test where we want to check that 2PC transactions  
prepared on a primary can be committed on a promoted standby relied on  
an immediate stop of the primary.  This logic has a race condition: it  
could be possible that some records (most likely standby snapshot  
records) are generated on the primary before it finishes its shutdown,  
without the promoted standby knowing about them.  When the primary is  
recycled as a new standby, the test could fail because of a timeline  
fork caused by these extra records.  
  
This fix takes care of the instability by doing a clean stop of the  
primary instead of a teardown (aka immediate stop), so that all records  
generated on the primary are sent to the promoted standby and flushed  
there.  There is no need for a teardown of the primary in this test  
scenario: the commit of 2PC transactions on a promoted standby does not  
care about the state of the primary, only that of the standby.  
  
This race is very hard to hit in practice, even slow buildfarm members  
like skink have a very low rate of reproduction.  Alexander Lakhin has  
come up with a recipe to improve the reproduction rate a lot:  
- Enable -DWAL_DEBUG.  
- Patch the bgwriter so that standby snapshots are generated every  
millisecond.  
- Run 009_twophase tests under heavy parallelism.  
  
With this method, the failure appears after a couple of iterations.  
With the fix in place, I have been able to run more than 50 iterations  
of the parallel test sequence, without seeing a failure.  
  
Issue introduced in 30820982b295, due to a copy-pasto coming from the  
surrounding tests.  Thanks also to Hayato Kuroda for digging into the  
details of the failure.  He has proposed a fix different from the one in  
this commit; unfortunately, it relied on injection points, a feature  
only available in v17 and newer.  The solution of this commit is  
simpler, and can be applied to v14~v16.  
  
Reported-by: Alexander Lakhin <exclusion@gmail.com>  
Discussion: https://postgr.es/m/b0102688-6d6c-c86a-db79-e0e91d245b1a@gmail.com  
Backpatch-through: 14  

M src/test/recovery/t/009_twophase.pl

doc: Fix sentence of pg_walsummary page

commit   : aa7c37c2e71ad1a119d2b01740426fca06ac5880    
  
author   : Michael Paquier <michael@paquier.xyz>    
date     : Tue, 3 Mar 2026 15:27:58 +0900    
  
committer: Michael Paquier <michael@paquier.xyz>    
date     : Tue, 3 Mar 2026 15:27:58 +0900    

Click here for diff

Author: Peter Smith <smithpb2250@gmail.com>  
Reviewed-by: Chao Li <li.evan.chao@gmail.com>  
Reviewed-by: Robert Treat <rob@xzilla.net>  
Discussion: https://postgr.es/m/CAHut+PvfYBL-ppX-i8DPeRu7cakYCZz+QYBhrmQzicx7z_Tj5w@mail.gmail.com  
Backpatch-through: 17  

M doc/src/sgml/ref/pg_walsummary.sgml

doc: Clarify that empty COMMENT string removes the comment.

commit   : ec80fab7a1fdaeb15041f312447f32dd0420d46f    
  
author   : Fujii Masao <fujii@postgresql.org>    
date     : Tue, 3 Mar 2026 14:45:52 +0900    
  
committer: Fujii Masao <fujii@postgresql.org>    
date     : Tue, 3 Mar 2026 14:45:52 +0900    

Click here for diff

Clarify the documentation of COMMENT ON to state that specifying an empty  
string is treated as NULL, meaning that the comment is removed.  
  
This makes the behavior explicit and avoids possible confusion about how  
empty strings are handled.  
  
Also adds regression test cases that use an empty string to remove a  
comment.  
  
Backpatch to all supported versions.  
  
Author: Chao Li <lic@highgo.com>  
Reviewed-by: Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>  
Reviewed-by: David G. Johnston <david.g.johnston@gmail.com>  
Reviewed-by: Shengbin Zhao <zshengbin91@gmail.com>  
Reviewed-by: Jim Jones <jim.jones@uni-muenster.de>  
Reviewed-by: zhangqiang <zhang_qiang81@163.com>  
Reviewed-by: Fujii Masao <masao.fujii@gmail.com>  
Discussion: https://postgr.es/m/26476097-B1C1-4BA8-AA92-0AD0B8EC7190@gmail.com  
Backpatch-through: 14  

M doc/src/sgml/ref/comment.sgml
M src/test/regress/expected/create_index.out
M src/test/regress/expected/create_role.out
M src/test/regress/sql/create_index.sql
M src/test/regress/sql/create_role.sql

basic_archive: Allow archive directory to be missing at startup.

commit   : f510577de4f5a0b33453941a08cdb2302dc55f02    
  
author   : Nathan Bossart <nathan@postgresql.org>    
date     : Mon, 2 Mar 2026 13:12:25 -0600    
  
committer: Nathan Bossart <nathan@postgresql.org>    
date     : Mon, 2 Mar 2026 13:12:25 -0600    

Click here for diff

Presently, the GUC check hook for basic_archive.archive_directory  
checks that the specified directory exists.  Consequently, if the  
directory does not exist at server startup, archiving will be stuck  
indefinitely, even if it appears later.  To fix, remove this check  
from the hook so that archiving will resume automatically once the  
directory is present.  basic_archive must already be prepared to  
deal with the directory disappearing at any time, so no additional  
special handling is required.  
  
Reported-by: Олег Самойлов <splarv@ya.ru>  
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>  
Reviewed-by: Fujii Masao <masao.fujii@gmail.com>  
Reviewed-by: Sergei Kornilov <sk@zsrv.org>  
Discussion: https://postgr.es/m/73271769675212%40mail.yandex.ru  
Backpatch-through: 15  

M contrib/basic_archive/basic_archive.c

Fix OldestMemberMXactId and OldestVisibleMXactId array usage

commit   : dcd9c06a4203cb5489bf1bd54f0a27db5d33cac4    
  
author   : Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Mon, 2 Mar 2026 19:19:22 +0200    
  
committer: Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Mon, 2 Mar 2026 19:19:22 +0200    

Click here for diff

Commit ab355e3a88 changed how the OldestMemberMXactId array is  
indexed.  It is no longer indexed by a synthetic dummyBackendId, but by  
ProcNumber.  The PGPROC entries for prepared xacts come after auxiliary  
processes in the allProcs array, which rendered the calculation of  
MaxOldestSlot and the indexes into the array incorrect.  (The  
OldestVisibleMXactId array is not used for prepared xacts, and thus  
never accessed with ProcNumbers greater than MaxBackends, so this  
only affects the OldestMemberMXactId array.)  
  
As a result, a prepared xact would store its value past the end of the  
OldestMemberMXactId array, overflowing into the OldestVisibleMXactId  
array. That could cause a transaction's row lock to appear invisible  
to other backends, or other such visibility issues. With a very small  
max_connections setting, the store could even go beyond the  
OldestVisibleMXactId array, stomping over the first element in the  
BufferDescriptor array.  
  
To fix, calculate the array sizes more precisely, and introduce helper  
functions to calculate the array indexes correctly.  
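
The layout problem reduces to simple index arithmetic. The constants below are assumed values, not PostgreSQL's, and the helpers are illustrative rather than the ones the commit introduces:

```c
#include <assert.h>

/* Illustrative layout constants (values assumed, not PostgreSQL's). */
#define MAX_BACKENDS        100   /* regular backends come first */
#define NUM_AUXILIARY_PROCS 5     /* auxiliary processes follow */
#define MAX_PREPARED_XACTS  10    /* prepared-xact PGPROCs come last */

/*
 * Prepared transactions' dummy PGPROC entries follow the auxiliary
 * processes in the allProcs array, so an array indexed by ProcNumber
 * that must hold entries for prepared xacts has to span that far.
 * Sizing it as MAX_BACKENDS + MAX_PREPARED_XACTS (omitting the
 * auxiliary slots) is the kind of underestimate that lets a prepared
 * xact write past the end of the array.
 */
static int
oldest_member_array_size(void)
{
    return MAX_BACKENDS + NUM_AUXILIARY_PROCS + MAX_PREPARED_XACTS;
}

/* ProcNumber of the n-th prepared transaction's dummy PGPROC. */
static int
prepared_xact_proc_number(int n)
{
    return MAX_BACKENDS + NUM_AUXILIARY_PROCS + n;
}
```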
  
Author: Yura Sokolov <y.sokolov@postgrespro.ru>  
Reviewed-by: Sami Imseih <samimseih@gmail.com>  
Reviewed-by: Chao Li <li.evan.chao@gmail.com>  
Discussion: https://www.postgresql.org/message-id/7acc94b0-ea82-4657-b1b0-77842cb7a60c@postgrespro.ru  
Backpatch-through: 17  

M src/backend/access/transam/multixact.c
M src/backend/access/transam/twophase.c
M src/backend/storage/lmgr/proc.c
M src/include/storage/proc.h

In pg_dumpall, don't skip role GRANTs with dangling grantor OIDs.

commit   : 1cd783d205443fec47c57a8ac28ac54ba8981e6f    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 2 Mar 2026 11:14:58 -0500    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 2 Mar 2026 11:14:58 -0500    

Click here for diff

In commits 29d75b25b et al, I made pg_dumpall's dumpRoleMembership  
logic treat a dangling grantor OID the same as dangling role and  
member OIDs: print a warning and skip emitting the GRANT.  This wasn't  
terribly well thought out; instead, we should handle the case by  
emitting the GRANT without the GRANTED BY clause.  When the source  
database is pre-v16, such cases are somewhat expected because those  
versions didn't prevent dropping the grantor role; so don't even  
print a warning that we did this.  (This change therefore restores  
pg_dumpall's pre-v16 behavior for these cases.)  The case is not  
expected in >= v16, so then we do print a warning, but soldiering on  
with no GRANTED BY clause still seems like a reasonable strategy.  
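
The fallback behavior (emit the GRANT without a GRANTED BY clause when the grantor cannot be resolved) can be sketched as follows; the function name and buffer handling are illustrative, not pg_dumpall's actual code:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/*
 * Format a role GRANT, omitting GRANTED BY when the grantor role could
 * not be resolved (a dangling grantor OID is passed as NULL here).
 */
static const char *
format_role_grant(char *buf, size_t bufsize,
                  const char *role, const char *member,
                  const char *grantor)   /* NULL if dangling */
{
    if (grantor != NULL)
        snprintf(buf, bufsize, "GRANT %s TO %s GRANTED BY %s;",
                 role, member, grantor);
    else
        snprintf(buf, bufsize, "GRANT %s TO %s;", role, member);
    return buf;
}

/* Helper for demonstration: does the formatted GRANT match 'expected'? */
static int
grant_matches(const char *role, const char *member,
              const char *grantor, const char *expected)
{
    char buf[256];

    return strcmp(format_role_grant(buf, sizeof(buf),
                                    role, member, grantor),
                  expected) == 0;
}
```

Either way the GRANT itself is preserved, which is the point of the fix: only the unverifiable GRANTED BY attribution is dropped.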
  
Per complaint from Robert Haas that we were now dropping GRANTs  
altogether in easily-reachable scenarios.  
  
Reported-by: Robert Haas <robertmhaas@gmail.com>  
Author: Tom Lane <tgl@sss.pgh.pa.us>  
Discussion: https://postgr.es/m/CA+TgmoauoiW4ydDhdrseg+DD4Kwha=+TSZp18BrJeHKx3o1Fdw@mail.gmail.com  
Backpatch-through: 16  

M src/bin/pg_dump/pg_dumpall.c

test_custom_types: Test module with fancy custom data types

commit   : d4f33d026a6b8c20cf9791780f3adcfd56c30711    
  
author   : Michael Paquier <michael@paquier.xyz>    
date     : Mon, 2 Mar 2026 11:10:37 +0900    
  
committer: Michael Paquier <michael@paquier.xyz>    
date     : Mon, 2 Mar 2026 11:10:37 +0900    

Click here for diff

This commit adds a new test module called "test_custom_types", which  
can be used to stress code paths related to custom data type  
implementations.  
  
Currently, this is used as a test suite to validate the set of fixes  
done in 3b7a6fa15720, which requires some typanalyze callbacks that can  
force very specific backend behaviors, namely:  
- A typanalyze callback that returns "false" as status, to mark a failure  
in computing statistics.  
- A typanalyze callback that returns "true" but lets the backend know  
that no interesting stats could be computed, with stats_valid set to  
"false".  
  
This could be extended more in the future if more problems are found.  
For simplicity, the module uses a fake int4 data type that requires a  
btree operator class to be usable with extended statistics.  The type is  
created by the extension, and its properties are altered in the test.  
  
Like 3b7a6fa15720, this module is backpatched down to v14, for coverage  
purposes.  
  
Author: Michael Paquier <michael@paquier.xyz>  
Reviewed-by: Chao Li <li.evan.chao@gmail.com>  
Discussion: https://postgr.es/m/aaDrJsE1I5mrE-QF@paquier.xyz  
Backpatch-through: 14  

M src/test/modules/Makefile
M src/test/modules/meson.build
A src/test/modules/test_custom_types/.gitignore
A src/test/modules/test_custom_types/Makefile
A src/test/modules/test_custom_types/README
A src/test/modules/test_custom_types/expected/test_custom_types.out
A src/test/modules/test_custom_types/meson.build
A src/test/modules/test_custom_types/sql/test_custom_types.sql
A src/test/modules/test_custom_types/test_custom_types--1.0.sql
A src/test/modules/test_custom_types/test_custom_types.c
A src/test/modules/test_custom_types/test_custom_types.control

Fix set of issues with extended statistics on expressions

commit   : 530b6b02f891db45ff9416a0f3307e87b9c03b45    
  
author   : Michael Paquier <michael@paquier.xyz>    
date     : Mon, 2 Mar 2026 09:38:42 +0900    
  
committer: Michael Paquier <michael@paquier.xyz>    
date     : Mon, 2 Mar 2026 09:38:42 +0900    

Click here for diff

This commit addresses two defects regarding extended statistics on  
expressions:  
- When building extended statistics in lookup_var_attr_stats(), the call  
to examine_attribute() did not account for the possibility of a NULL  
return value.  This can happen depending on the behavior of a typanalyze  
callback — for example, if the callback returns false, if no rows are  
sampled, or if no statistics are computed.  In such cases, the code  
attempted to build MCV, dependency, and ndistinct statistics using a  
NULL pointer, incorrectly assuming valid statistics were available,  
which could lead to a server crash.  
- When loading extended statistics for expressions,  
statext_expressions_load() did not account for NULL entries in the  
pg_statistic array storing expression statistics.  Such NULL entries can  
be generated when statistics collection fails for an expression, as may  
occur during the final step of serialize_expr_stats().  An extended  
statistics object defining N expressions requires N corresponding  
elements in the pg_statistic array stored for the expressions, and some  
of these elements can be NULL.  This situation is reachable when a  
typanalyze callback returns true, but sets stats_valid to indicate that  
no useful statistics could be computed.  
  
While these scenarios cannot occur with in-core typanalyze callbacks, as  
far as I have analyzed, they can be triggered by custom data types with  
custom typanalyze implementations, at least.  
  
No tests are added in this commit.  A follow-up commit will introduce a  
test module that can be extended to cover similar edge cases if  
additional issues are discovered.  This takes care of the core of the  
problem.  
  
Attribute and relation statistics already offer similar protections:  
- ANALYZE detects and skips the build of invalid statistics.  
- Invalid catalog data is handled defensively when loading statistics.  
  
This issue has existed since support for extended statistics on  
expressions was added in a4d75c86bf15, down to v14.  Backpatch  
to all supported stable branches.  
  
Author: Michael Paquier <michael@paquier.xyz>  
Reviewed-by: Corey Huinker <corey.huinker@gmail.com>  
Reviewed-by: Chao Li <li.evan.chao@gmail.com>  
Discussion: https://postgr.es/m/aaDrJsE1I5mrE-QF@paquier.xyz  
Backpatch-through: 14  

M src/backend/statistics/extended_stats.c
M src/backend/utils/adt/selfuncs.c

postgres_fdw: Fix thinko in comment for UserMappingPasswordRequired().

commit   : 596a400df7a273398dc471b07729f8af09cb6e40    
  
author   : Etsuro Fujita <efujita@postgresql.org>    
date     : Fri, 27 Feb 2026 17:05:03 +0900    
  
committer: Etsuro Fujita <efujita@postgresql.org>    
date     : Fri, 27 Feb 2026 17:05:03 +0900    

Click here for diff

This commit also rephrases this comment to improve readability.  
  
Oversight in commit 6136e94dc.  
  
Reported-by: Etsuro Fujita <etsuro.fujita@gmail.com>  
Author: Andreas Karlsson <andreas@proxel.se>  
Co-authored-by: Etsuro Fujita <etsuro.fujita@gmail.com>  
Discussion: https://postgr.es/m/CAPmGK16pDnM_wU3kmquPj-M9MYqG3y0BdntRZ0eytqbCaFY3WQ%40mail.gmail.com  
Backpatch-through: 14  

M contrib/postgres_fdw/connection.c

Fix more multibyte issues in ltree.

commit   : d1bd9a7dc30f7020ad08a9c78c683911f99fb771    
  
author   : Jeff Davis <jdavis@postgresql.org>    
date     : Thu, 26 Feb 2026 12:24:12 -0800    
  
committer: Jeff Davis <jdavis@postgresql.org>    
date     : Thu, 26 Feb 2026 12:24:12 -0800    

Click here for diff

Commit 84d5efa7e3 missed some multibyte issues caused by short-circuit  
logic in the callers. The callers assumed that if the predicate string  
is longer than the label string, then it couldn't possibly be a match,  
but it can be when using case-insensitive matching (LVAR_INCASE) if  
casefolding changes the byte length.  
  
Fix by refactoring to get rid of the short-circuit logic as well as  
the function pointer, and consolidate the logic in a replacement  
function ltree_label_match().  
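
The byte-length effect of case folding is easy to demonstrate with a concrete code point: U+0130 (LATIN CAPITAL LETTER I WITH DOT ABOVE) occupies 2 bytes in UTF-8, while its standard case-folded form, "i" followed by U+0307 (COMBINING DOT ABOVE), occupies 3. A predicate can therefore be byte-wise longer than a label and still match case-insensitively:

```c
#include <assert.h>
#include <string.h>

/* U+0130 in UTF-8: 0xC4 0xB0 (2 bytes). */
static const char upper_form[] = "\xC4\xB0";

/* Its full case folding: 'i' (0x69) + U+0307 (0xCC 0x87), 3 bytes. */
static const char folded_form[] = "i\xCC\x87";

/* Byte length of a NUL-terminated UTF-8 string. */
static size_t utf8_byte_len(const char *s)
{
    return strlen(s);
}
```

This is why the callers' short-circuit test "predicate longer than label implies no match" was unsound under LVAR_INCASE.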
  
Discussion: https://postgr.es/m/02c6ef6cf56a5013ede61ad03c7a26affd27d449.camel@j-davis.com  
Backpatch-through: 14  

M contrib/ltree/lquery_op.c
M contrib/ltree/ltree.h
M contrib/ltree/ltxtquery_op.c

Fix memory leaks in pg_locale_icu.c.

commit   : 4761f2eee6c0617f54dc7b6d95d28d9ba8f034c7    
  
author   : Jeff Davis <jdavis@postgresql.org>    
date     : Thu, 29 Jan 2026 10:37:09 -0800    
  
committer: Jeff Davis <jdavis@postgresql.org>    
date     : Thu, 29 Jan 2026 10:37:09 -0800    

Click here for diff

The backport to branches before v18 requires minor modifications due to  
code refactoring.  
  
Discussion: https://postgr.es/m/e2b7a0a88aaadded7e2d19f42d5ab03c9e182ad8.camel@j-davis.com  
Backpatch-through: 16  

M src/backend/utils/adt/pg_locale.c

Use CXXFLAGS instead of CFLAGS for linking C++ code

commit   : b6b7e96365eb7b5f79d4e655afcfb4f9cb12ab88    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 26 Feb 2026 12:06:58 -0500    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 26 Feb 2026 12:06:58 -0500    

Click here for diff

Otherwise, this would break when using C and C++ compilers from  
different families that understand different options.  The right flags  
were already used for compiling; this change affects only linking.  The  
meson setup already did this correctly.  
  
Back-patch of v18 commit 365b5a345 into older supported branches.  
At the time we were only aware of trouble in v18, but as shown  
by buildfarm member siren, older branches can hit the problem too.  
  
Reported-by: Tom Lane <tgl@sss.pgh.pa.us>  
Author: Peter Eisentraut <peter@eisentraut.org>  
Discussion: https://www.postgresql.org/message-id/228700.1722717983@sss.pgh.pa.us  
Discussion: https://postgr.es/m/3109540.1771698685@sss.pgh.pa.us  
Backpatch-through: 14  

M src/backend/jit/llvm/Makefile

EUC_CN, EUC_JP, EUC_KR, EUC_TW: Skip U+00A0 tests instead of failing.

commit   : 4cb70f73eacc4231f354432e8ed0f65d27f5cc09    
  
author   : Noah Misch <noah@leadboat.com>    
date     : Wed, 25 Feb 2026 18:13:22 -0800    
  
committer: Noah Misch <noah@leadboat.com>    
date     : Wed, 25 Feb 2026 18:13:22 -0800    

Click here for diff

Settings that ran the new test euc_kr.sql to completion would fail these  
older src/pl tests.  Use alternative expected outputs, for which psql  
\gset and \if have reduced the maintenance burden.  This fixes  
"LANG=ko_KR.euckr LC_MESSAGES=C make check-world".  (LC_MESSAGES=C fixes  
IO::Pty usage in tests 010_tab_completion and 001_password.)  That file  
is new in commit c67bef3f3252a3a38bf347f9f119944176a796ce.  Back-patch  
to v14, like that commit.  
  
Discussion: https://postgr.es/m/20260217184758.da.noahmisch@microsoft.com  
Backpatch-through: 14  

M src/pl/plperl/GNUmakefile
M src/pl/plperl/expected/plperl_elog.out
M src/pl/plperl/expected/plperl_elog_1.out
A src/pl/plperl/expected/plperl_unicode.out
A src/pl/plperl/expected/plperl_unicode_1.out
M src/pl/plperl/meson.build
M src/pl/plperl/sql/plperl_elog.sql
A src/pl/plperl/sql/plperl_unicode.sql
M src/pl/plpython/expected/plpython_unicode.out
A src/pl/plpython/expected/plpython_unicode_1.out
M src/pl/plpython/sql/plpython_unicode.sql
M src/pl/tcl/expected/pltcl_unicode.out
A src/pl/tcl/expected/pltcl_unicode_1.out
M src/pl/tcl/sql/pltcl_unicode.sql

doc: Clarify INCLUDING COMMENTS behavior in CREATE TABLE LIKE.

commit   : df927d3d08b139f1a02262940339e15d126722e9    
  
author   : Fujii Masao <fujii@postgresql.org>    
date     : Thu, 26 Feb 2026 09:02:53 +0900    
  
committer: Fujii Masao <fujii@postgresql.org>    
date     : Thu, 26 Feb 2026 09:02:53 +0900    

Click here for diff

The documentation for the INCLUDING COMMENTS option of the LIKE clause  
in CREATE TABLE was inaccurate and incomplete. It stated that comments for  
copied columns, constraints, and indexes are copied, but in reality, among  
constraints, only comments on CHECK and NOT NULL constraints are copied;  
comments on other constraints (such as primary keys) are not. In addition,  
comments on extended statistics are copied, but this was not documented.  
  
The CREATE FOREIGN TABLE documentation had a similar omission: comments  
on extended statistics are also copied, but this was not mentioned.  
  
This commit updates the documentation to clarify the actual behavior.  
The CREATE TABLE reference now specifies that comments on copied columns,  
CHECK constraints, NOT NULL constraints, indexes, and extended statistics are  
copied. The CREATE FOREIGN TABLE reference now notes that comments on  
extended statistics are copied as well.  
  
Backpatch to all supported versions. Documentation updates related to  
CREATE FOREIGN TABLE LIKE and NOT NULL constraint comment copying are  
not applied to v17 and earlier, since those features were introduced in v18.  
  
Author: Fujii Masao <masao.fujii@gmail.com>  
Reviewed-by: Matheus Alcantara <matheusssilv97@gmail.com>  
Discussion: https://postgr.es/m/CAHGQGwHSOSGcaYDvHF8EYCUCfGPjbRwGFsJ23cx5KbJ1X6JouQ@mail.gmail.com  
Backpatch-through: 14  

M doc/src/sgml/ref/create_table.sgml

Fix ProcWakeup() resetting wrong waitStart field.

commit   : f72c92a7f9456b6c490b37e7c81d6a4b49307262    
  
author   : Fujii Masao <fujii@postgresql.org>    
date     : Thu, 26 Feb 2026 08:46:12 +0900    
  
committer: Fujii Masao <fujii@postgresql.org>    
date     : Thu, 26 Feb 2026 08:46:12 +0900    

Click here for diff

Previously, when one process woke another that was waiting on a lock,  
ProcWakeup() incorrectly cleared its own waitStart field (i.e.,  
MyProc->waitStart) instead of that of the process being awakened.  
As a result, the awakened process retained a stale lock-wait start timestamp.  
  
This did not cause user-visible issues. pg_locks.waitstart was reported as  
NULL for the awakened process (i.e., when pg_locks.granted is true),  
regardless of the waitStart value.  
  
This bug was introduced by commit 46d6e5f56790.  
  
This commit fixes this by resetting the waitStart field of the process  
being awakened in ProcWakeup().  
  
Backpatch to all supported branches.  
  
Reported-by: Chao Li <li.evan.chao@gmail.com>  
Author: Chao Li <li.evan.chao@gmail.com>  
Reviewed-by: ji xu <thanksgreed@gmail.com>  
Reviewed-by: Álvaro Herrera <alvherre@kurilemu.de>  
Discussion: https://postgr.es/m/537BD852-EC61-4D25-AB55-BE8BE46D07D7@gmail.com  
Backpatch-through: 14  

M src/backend/storage/lmgr/proc.c

Allow PG_PRINTF_ATTRIBUTE to be different in C and C++ code.

commit   : ae40bb835135a8ebdba5f32f3470ddad20e78b85    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Wed, 25 Feb 2026 11:57:26 -0500    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Wed, 25 Feb 2026 11:57:26 -0500    

Click here for diff

Although clang claims to be compatible with gcc's printf format  
archetypes, this appears to be a falsehood: it likes __syslog__  
(which gcc does not, on most platforms) and doesn't accept  
gnu_printf.  This means that if you try to use gcc with clang++  
or clang with g++, you get compiler warnings when compiling  
printf-like calls in our C++ code.  This has been true for quite  
a while, but it's gotten more annoying with the recent appearance  
of several buildfarm members that are configured like this.  
  
To fix, run separate probes for the format archetype to use with the  
C and C++ compilers, and conditionally define PG_PRINTF_ATTRIBUTE  
depending on __cplusplus.  
  
(We could alternatively insist that you not mix-and-match C and  
C++ compilers; but if the case works otherwise, this is a poor  
reason to insist on that.)  
  
This commit back-patches 0909380e4 into supported branches.  
  
Discussion: https://postgr.es/m/986485.1764825548@sss.pgh.pa.us  
Discussion: https://postgr.es/m/3988414.1771950285@sss.pgh.pa.us  
Backpatch-through: 14  

M config/c-compiler.m4
M configure
M configure.ac
M meson.build
M src/include/c.h
M src/include/pg_config.h.in

Fix some cases of indirectly casting away const.

commit   : a56a70141e09df5401a341dab29a77bafb42c8b6    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Wed, 25 Feb 2026 11:19:50 -0500    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Wed, 25 Feb 2026 11:19:50 -0500    

Click here for diff

Newest versions of gcc+glibc are able to detect cases where code  
implicitly casts away const by assigning the result of strchr() or  
a similar function applied to a "const char *" value to a target  
variable that's just "char *".  This of course creates a hazard of  
not getting a compiler warning about scribbling on a string one was  
not supposed to, so fixing up such cases is good.  
  
This patch fixes a dozen or so places where we were doing that.  
Most are trivial additions of "const" to the target variable,  
since no actually-hazardous change was occurring.  
  
Thanks to Bertrand Drouvot for finding a couple more spots than  
I had.  
  
This commit back-patches relevant portions of 8f1791c61 and  
9f7565c6c into supported branches.  However, there are two  
places in ecpg (in v18 only) where a proper fix is more  
complicated than seems appropriate for a back-patch.  I opted  
to silence those two warnings by adding casts.  
  
Author: Tom Lane <tgl@sss.pgh.pa.us>  
Reviewed-by: Bertrand Drouvot <bertranddrouvot.pg@gmail.com>  
Discussion: https://postgr.es/m/1324889.1764886170@sss.pgh.pa.us  
Discussion: https://postgr.es/m/3988414.1771950285@sss.pgh.pa.us  
Backpatch-through: 14  

M src/backend/catalog/pg_type.c
M src/backend/tsearch/spell.c
M src/backend/utils/adt/formatting.c
M src/backend/utils/adt/pg_locale.c
M src/backend/utils/adt/xid8funcs.c
M src/bin/pg_waldump/pg_waldump.c
M src/bin/pgbench/pgbench.c
M src/common/compression.c
M src/interfaces/ecpg/pgtypeslib/datetime.c
M src/port/getopt.c
M src/port/getopt_long.c
M src/port/win32setlocale.c
M src/test/regress/pg_regress.c
M src/timezone/zic.c

Stabilize output of new isolation test insert-conflict-do-update-4.

commit   : 0a2600e4cd525382212581267877ca6e5bbdf335    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Wed, 25 Feb 2026 10:51:42 -0500    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Wed, 25 Feb 2026 10:51:42 -0500    

Click here for diff

The test added by commit 4b760a181 assumed that a table's physical  
row order would be predictable after an UPDATE.  But a non-heap table  
AM might produce some other order.  Even with heap AM, the assumption  
seems risky; compare a3fd53bab for instance.  Adding an ORDER BY is  
cheap insurance and doesn't break any goal of the test.  
  
Author: Pavel Borisov <pashkin.elfe@gmail.com>  
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>  
Discussion: https://postgr.es/m/CALT9ZEHcE6tpvumScYPO6pGk_ASjTjWojLkodHnk33dvRPHXVw@mail.gmail.com  
Backpatch-through: 14  

M src/test/isolation/expected/insert-conflict-do-update-4.out
M src/test/isolation/specs/insert-conflict-do-update-4.spec

pg_upgrade: Use max_protocol_version=3.0 for older servers

commit   : ad7fc3f1f83f776812af27dbb747642dc88bba78    
  
author   : Jacob Champion <jchampion@postgresql.org>    
date     : Tue, 24 Feb 2026 14:01:46 -0800    
  
committer: Jacob Champion <jchampion@postgresql.org>    
date     : Tue, 24 Feb 2026 14:01:46 -0800    

Click here for diff

The grease patch in 4966bd3ed found its first problem: prior to the  
February 2018 patch releases, no server knew how to negotiate protocol  
versions, so pg_upgrade needs to take that into account when speaking to  
those older servers.  
  
This will be true even after the grease feature is reverted; we don't  
need anyone to trip over this again in the future. Backpatch so that all  
supported versions of pg_upgrade can gracefully handle an update to the  
default protocol version. (This is needed for any distributions that  
link older binaries against newer libpqs, such as Debian.) Branches  
prior to 18 need an additional version check, for the existence of  
max_protocol_version.  
  
Per buildfarm member crake.  
  
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>  
Discussion: https://postgr.es/m/CAOYmi%2B%3D4QhCjssfNEoZVK8LPtWxnfkwT5p-PAeoxtG9gpNjqOQ%40mail.gmail.com  
Backpatch-through: 14  

M src/bin/pg_upgrade/dump.c
M src/bin/pg_upgrade/pg_upgrade.h
M src/bin/pg_upgrade/server.c
M src/bin/pg_upgrade/version.c