PostgreSQL 17.8 commit log

Stamp 17.8.

commit   : 6af885119b52a2a6229959670ba3ae5e36bf9806    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 9 Feb 2026 16:51:54 -0500    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 9 Feb 2026 16:51:54 -0500    


M configure
M configure.ac
M meson.build

Last-minute updates for release notes.

commit   : a3acb409025a2f8e2cb93346bbc1d87281f861fc    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 9 Feb 2026 14:01:20 -0500    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 9 Feb 2026 14:01:20 -0500    


Security: CVE-2026-2003, CVE-2026-2004, CVE-2026-2005, CVE-2026-2006, CVE-2026-2007  

M doc/src/sgml/release-17.sgml

Fix test "NUL byte in text decrypt" for --without-zlib builds.

commit   : 955433ebd864f41e079e75521dd17c2f9175f7c7    
  
author   : Noah Misch <noah@leadboat.com>    
date     : Mon, 9 Feb 2026 09:08:10 -0800    
  
committer: Noah Misch <noah@leadboat.com>    
date     : Mon, 9 Feb 2026 09:08:10 -0800    


Backpatch-through: 14  
Security: CVE-2026-2006  

M contrib/pgcrypto/expected/pgp-decrypt.out
M contrib/pgcrypto/expected/pgp-decrypt_1.out
M contrib/pgcrypto/sql/pgp-decrypt.sql

Harden _int_matchsel() against being attached to the wrong operator.

commit   : dd3ad2a4d7e8c7bcc34e6574787744b3524d28be    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 9 Feb 2026 10:14:22 -0500    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 9 Feb 2026 10:14:22 -0500    


While the preceding commit prevented such attachments from occurring  
in future, this one aims to prevent further abuse of any already-  
created operator that exposes _int_matchsel to the wrong data types.  
(No other contrib module has a vulnerable selectivity estimator.)  
  
We need only check that the Const we've found in the query is indeed  
of the type we expect (query_int), but there's a difficulty: as an  
extension type, query_int doesn't have a fixed OID that we could  
hard-code into the estimator.  
  
Therefore, the bulk of this patch consists of infrastructure to let  
an extension function securely look up the OID of a datatype  
belonging to the same extension.  (Extension authors have requested  
such functionality before, so we anticipate that this code will  
have additional non-security uses, and may soon be extended to allow  
looking up other kinds of SQL objects.)  
  
This is done by first finding the extension that owns the calling  
function (there can be only one), and then thumbing through the  
objects owned by that extension to find a type that has the desired  
name.  This is relatively expensive, especially for large extensions,  
so a simple cache is put in front of these lookups.  
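
The lookup-plus-cache shape described above can be sketched in plain C; the function names, the single-slot cache, and the OID values here are illustrative assumptions for this sketch, not PostgreSQL's actual API:

```c
#include <assert.h>
#include <string.h>

/* Hypothetical stand-in for thumbing through the objects owned by an
 * extension to find a type by name; in the real patch this scans catalog
 * entries, which is expensive for large extensions. */
static int scan_count = 0;

static unsigned expensive_type_lookup(unsigned ext_oid, const char *type_name)
{
    scan_count++;
    /* pretend the scan found OID 40000 for "query_int" */
    (void) ext_oid;
    return (strcmp(type_name, "query_int") == 0) ? 40000 : 0;
}

/* A simple one-slot cache in front of the lookup, mirroring the idea that
 * repeated estimator calls should not rescan the extension each time. */
static unsigned cached_ext = 0;
static char cached_name[64];
static unsigned cached_oid = 0;

unsigned get_extension_type_oid(unsigned ext_oid, const char *type_name)
{
    if (cached_ext == ext_oid && strcmp(cached_name, type_name) == 0)
        return cached_oid;      /* cache hit: no catalog scan */
    cached_oid = expensive_type_lookup(ext_oid, type_name);
    cached_ext = ext_oid;
    strncpy(cached_name, type_name, sizeof(cached_name) - 1);
    cached_name[sizeof(cached_name) - 1] = '\0';
    return cached_oid;
}
```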
  
Reported-by: Daniel Firer as part of zeroday.cloud  
Author: Tom Lane <tgl@sss.pgh.pa.us>  
Reviewed-by: Noah Misch <noah@leadboat.com>  
Security: CVE-2026-2004  
Backpatch-through: 14  

M contrib/intarray/_int_selfuncs.c
M src/backend/catalog/pg_depend.c
M src/backend/commands/extension.c
M src/include/catalog/dependency.h
M src/include/commands/extension.h
M src/tools/pgindent/typedefs.list

Require superuser to install a non-built-in selectivity estimator.

commit   : bbf5bcf587bdd6f8fe446456fe3a515a00772f3c    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 9 Feb 2026 10:07:31 -0500    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 9 Feb 2026 10:07:31 -0500    


Selectivity estimators come in two flavors: those that make specific  
assumptions about the data types they are working with, and those  
that don't.  Most of the built-in estimators are of the latter kind  
and are meant to be safely attachable to any operator.  If the  
operator does not behave as the estimator expects, you might get a  
poor estimate, but it won't crash.  
  
However, estimators that do make datatype assumptions can malfunction  
if they are attached to the wrong operator, since then the data they  
get from pg_statistic may not be of the type they expect.  This can  
rise to the level of a security problem, even permitting arbitrary  
code execution by a user who has the ability to create SQL objects.  
  
To close this hole, establish a rule that built-in estimators are  
required to protect themselves against being called on the wrong type  
of data.  It does not seem practical however to expect estimators in  
extensions to reach a similar level of security, at least not in the  
near term.  Therefore, also establish a rule that superuser privilege  
is required to attach a non-built-in estimator to an operator.  
We expect that this restriction will have little negative impact on  
extensions, since estimators generally have to be written in C and  
thus superuser privilege is required to create them in the first  
place.  
  
This commit changes the privilege checks in CREATE/ALTER OPERATOR  
to enforce the rule about superuser privilege, and fixes a couple  
of built-in estimators that were making datatype assumptions without  
sufficiently checking that they're valid.  
  
Reported-by: Daniel Firer as part of zeroday.cloud  
Author: Tom Lane <tgl@sss.pgh.pa.us>  
Reviewed-by: Noah Misch <noah@leadboat.com>  
Security: CVE-2026-2004  
Backpatch-through: 14  

M src/backend/commands/operatorcmds.c
M src/backend/tsearch/ts_selfuncs.c
M src/backend/utils/adt/network_selfuncs.c

Add a syscache on pg_extension.oid.

commit   : dbb09fd8e8c19c46f09a6f31b2d607239167181d    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 9 Feb 2026 10:02:23 -0500    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 9 Feb 2026 10:02:23 -0500    


An upcoming patch requires this cache so that it can track updates  
in the pg_extension catalog.  So far though, the EXTENSIONOID cache  
only exists in v18 and up (see 490f869d9).  We can add it in older  
branches without an ABI break, if we are careful not to disturb the  
numbering of existing syscache IDs.  
  
In v16 and before, that just requires adding the new ID at the end  
of the hand-assigned enum list, ignoring our convention about  
alphabetizing the IDs.  But in v17, genbki.pl enforces alphabetical  
order of the IDs listed in MAKE_SYSCACHE macros.  We can fake it  
out by calling the new cache ZEXTENSIONOID.  
  
Note that adding a syscache does change the required contents of the  
relcache init file (pg_internal.init).  But that isn't problematic  
since we blow those away at postmaster start for other reasons.  
  
Author: Tom Lane <tgl@sss.pgh.pa.us>  
Reviewed-by: Noah Misch <noah@leadboat.com>  
Security: CVE-2026-2004  
Backpatch-through: 14-17  

M src/include/catalog/pg_extension.h

Guard against unexpected dimensions of oidvector/int2vector.

commit   : 3d160401b65e1d37ca19cf9b78d01aac53ac9605    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 9 Feb 2026 09:57:44 -0500    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 9 Feb 2026 09:57:44 -0500    


These data types are represented like full-fledged arrays, but  
functions that deal specifically with these types assume that the  
array is 1-dimensional and contains no nulls.  However, there are  
cast pathways that allow general oid[] or int2[] arrays to be cast  
to these types, allowing these expectations to be violated.  This  
can be exploited to cause server memory disclosure or SIGSEGV.  
Fix by installing explicit checks in functions that accept these  
types.  
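
The kind of guard the fix installs can be sketched as a plain check on the array's claimed shape before treating it as an oidvector or int2vector; `oidvector_dims_ok` and its scalar arguments are illustrative, not the backend's real varlena representation:

```c
#include <assert.h>
#include <stdbool.h>

/* The expectation that functions dealing with oidvector/int2vector make:
 * exactly one dimension and no null bitmap.  A cast from a general oid[]
 * or int2[] array can violate both, so check explicitly before use. */
bool oidvector_dims_ok(int ndim, bool has_nulls, int nelems)
{
    return ndim == 1 && !has_nulls && nelems >= 0;
}
```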
  
Reported-by: Altan Birler <altan.birler@tum.de>  
Author: Tom Lane <tgl@sss.pgh.pa.us>  
Reviewed-by: Noah Misch <noah@leadboat.com>  
Security: CVE-2026-2003  
Backpatch-through: 14  

M src/backend/access/hash/hashfunc.c
M src/backend/access/nbtree/nbtcompare.c
M src/backend/utils/adt/format_type.c
M src/backend/utils/adt/int.c
M src/backend/utils/adt/oid.c
M src/include/utils/builtins.h
M src/test/regress/expected/arrays.out
M src/test/regress/sql/arrays.sql

Require PGP-decrypted text to pass encoding validation.

commit   : dc072a09ad6a0b89d021047b2418f517a430966d    
  
author   : Noah Misch <noah@leadboat.com>    
date     : Mon, 9 Feb 2026 06:14:47 -0800    
  
committer: Noah Misch <noah@leadboat.com>    
date     : Mon, 9 Feb 2026 06:14:47 -0800    


pgp_sym_decrypt() and pgp_pub_decrypt() will raise such errors, while  
bytea variants will not.  The existing "dat3" test decrypted to non-UTF8  
text, so switch that query to bytea.  
  
The long-term intent is for type "text" to always be valid in the  
database encoding.  pgcrypto has long been known as a source of  
exceptions to that intent, but a report about exploiting invalid values  
of type "text" brought this module to the forefront.  This particular  
exception is straightforward to fix, with reasonable effect on user  
queries.  Back-patch to v14 (all supported versions).  
  
Reported-by: Paul Gerste (as part of zeroday.cloud)  
Reported-by: Moritz Sanft (as part of zeroday.cloud)  
Author: shihao zhong <zhong950419@gmail.com>  
Reviewed-by: cary huang <hcary328@gmail.com>  
Discussion: https://postgr.es/m/CAGRkXqRZyo0gLxPJqUsDqtWYBbgM14betsHiLRPj9mo2=z9VvA@mail.gmail.com  
Backpatch-through: 14  
Security: CVE-2026-2006  

M contrib/pgcrypto/expected/pgp-decrypt.out
M contrib/pgcrypto/expected/pgp-decrypt_1.out
M contrib/pgcrypto/pgp-pgsql.c
M contrib/pgcrypto/sql/pgp-decrypt.sql

Code coverage for most pg_mblen* calls.

commit   : 10ebc4bd67ec46009e18215e77347390b29d70b3    
  
author   : Thomas Munro <tmunro@postgresql.org>    
date     : Mon, 12 Jan 2026 10:20:06 +1300    
  
committer: Thomas Munro <tmunro@postgresql.org>    
date     : Mon, 12 Jan 2026 10:20:06 +1300    


A security patch changed them today, so close the coverage gap now.  
Test that buffer overrun is avoided when pg_mblen*() requires more  
than the number of bytes remaining.  
  
This does not cover the calls in dict_thesaurus.c or in dict_synonym.c.  
That code is straightforward.  To change that code's input, one must  
have access to modify installed OS files, so low-privilege users are not  
a threat.  Testing this would likewise require changing installed  
share/postgresql/tsearch_data, which was enough of an obstacle to not  
bother.  
  
Security: CVE-2026-2006  
Backpatch-through: 14  
Co-authored-by: Thomas Munro <thomas.munro@gmail.com>  
Co-authored-by: Noah Misch <noah@leadboat.com>  
Reviewed-by: Heikki Linnakangas <hlinnaka@iki.fi>  

M contrib/pg_trgm/Makefile
A contrib/pg_trgm/data/trgm_utf8.data
A contrib/pg_trgm/expected/pg_utf8_trgm.out
A contrib/pg_trgm/expected/pg_utf8_trgm_1.out
M contrib/pg_trgm/meson.build
A contrib/pg_trgm/sql/pg_utf8_trgm.sql
M src/backend/utils/adt/arrayfuncs.c
A src/test/regress/expected/encoding.out
A src/test/regress/expected/encoding_1.out
A src/test/regress/expected/euc_kr.out
A src/test/regress/expected/euc_kr_1.out
M src/test/regress/parallel_schedule
M src/test/regress/regress.c
A src/test/regress/sql/encoding.sql
A src/test/regress/sql/euc_kr.sql

Replace pg_mblen() with bounds-checked versions.

commit   : 319e8a64419aee3ecc0b39b52abcc28b2a3fe2d0    
  
author   : Thomas Munro <tmunro@postgresql.org>    
date     : Wed, 7 Jan 2026 22:14:31 +1300    
  
committer: Thomas Munro <tmunro@postgresql.org>    
date     : Wed, 7 Jan 2026 22:14:31 +1300    


A corrupted string could cause code that iterates with pg_mblen() to  
overrun its buffer.  Fix, by converting all callers to one of the  
following:  
  
1. Callers with a null-terminated string now use pg_mblen_cstr(), which  
raises an "illegal byte sequence" error if it finds a terminator in the  
middle of the sequence.  
  
2. Callers with a length or end pointer now use either  
pg_mblen_with_len() or pg_mblen_range(), for the same effect, depending  
on which of the two seems more convenient at each site.  
  
3. A small number of cases pre-validate a string, and can use  
pg_mblen_unbounded().  
  
The traditional pg_mblen() function and COPYCHAR macro still exist for  
backward compatibility, but are no longer used by core code and are  
hereby deprecated.  The same applies to the t_isXXX() functions.  
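
The bounds-checked idea behind the new variants can be sketched for UTF-8; this standalone `mblen_with_len` returns -1 where the real pg_mblen_with_len() would raise an "illegal byte sequence" error, and is an illustration rather than the backend code:

```c
#include <assert.h>

/* Length of a UTF-8 sequence implied by its lead byte (0 for an invalid
 * lead byte).  0xC2..0xDF, 0xE0..0xEF, 0xF0..0xF4 are the valid lead
 * ranges for 2-, 3-, and 4-byte sequences. */
static int utf8_seq_len(unsigned char lead)
{
    if (lead < 0x80) return 1;
    if (lead >= 0xC2 && lead <= 0xDF) return 2;
    if (lead >= 0xE0 && lead <= 0xEF) return 3;
    if (lead >= 0xF0 && lead <= 0xF4) return 4;
    return 0;
}

/* Bounds-checked character length: never report more bytes than remain
 * in the buffer, so iterating callers cannot overrun it. */
int mblen_with_len(const char *s, int len)
{
    int need = utf8_seq_len((unsigned char) s[0]);

    if (need == 0 || need > len)
        return -1;              /* invalid or truncated sequence */
    return need;
}
```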
  
Security: CVE-2026-2006  
Backpatch-through: 14  
Co-authored-by: Thomas Munro <thomas.munro@gmail.com>  
Co-authored-by: Noah Misch <noah@leadboat.com>  
Reviewed-by: Heikki Linnakangas <hlinnaka@iki.fi>  
Reported-by: Paul Gerste (as part of zeroday.cloud)  
Reported-by: Moritz Sanft (as part of zeroday.cloud)  

M contrib/btree_gist/btree_utils_var.c
M contrib/dict_xsyn/dict_xsyn.c
M contrib/hstore/hstore_io.c
M contrib/ltree/lquery_op.c
M contrib/ltree/ltree.h
M contrib/ltree/ltree_io.c
M contrib/ltree/ltxtquery_io.c
M contrib/pageinspect/heapfuncs.c
M contrib/pg_trgm/trgm.h
M contrib/pg_trgm/trgm_op.c
M contrib/pg_trgm/trgm_regexp.c
M contrib/unaccent/unaccent.c
M src/backend/catalog/pg_proc.c
M src/backend/tsearch/dict_synonym.c
M src/backend/tsearch/dict_thesaurus.c
M src/backend/tsearch/regis.c
M src/backend/tsearch/spell.c
M src/backend/tsearch/ts_locale.c
M src/backend/tsearch/ts_utils.c
M src/backend/tsearch/wparser_def.c
M src/backend/utils/adt/encode.c
M src/backend/utils/adt/formatting.c
M src/backend/utils/adt/jsonfuncs.c
M src/backend/utils/adt/jsonpath_gram.y
M src/backend/utils/adt/levenshtein.c
M src/backend/utils/adt/like.c
M src/backend/utils/adt/like_match.c
M src/backend/utils/adt/oracle_compat.c
M src/backend/utils/adt/regexp.c
M src/backend/utils/adt/tsquery.c
M src/backend/utils/adt/tsvector.c
M src/backend/utils/adt/tsvector_op.c
M src/backend/utils/adt/tsvector_parser.c
M src/backend/utils/adt/varbit.c
M src/backend/utils/adt/varlena.c
M src/backend/utils/adt/xml.c
M src/backend/utils/mb/mbutils.c
M src/include/mb/pg_wchar.h
M src/include/tsearch/ts_locale.h
M src/include/tsearch/ts_utils.h
M src/test/modules/test_regex/test_regex.c

Fix mb2wchar functions on short input.

commit   : 7a522039f7010ea9ec45dcbf11a4038dce240bb3    
  
author   : Thomas Munro <tmunro@postgresql.org>    
date     : Mon, 26 Jan 2026 11:22:32 +1300    
  
committer: Thomas Munro <tmunro@postgresql.org>    
date     : Mon, 26 Jan 2026 11:22:32 +1300    


When converting multibyte to pg_wchar, the UTF-8 implementation would  
silently ignore an incomplete final character, while the other  
implementations would cast a single byte to pg_wchar, and then repeat  
for the remaining byte sequence.  While it didn't overrun the buffer, it  
was surely garbage output.  
  
Make all encodings behave like the UTF-8 implementation.  A later change  
for master only will convert this to an error, but we choose not to  
back-patch that behavior change on the off-chance that someone is  
relying on the existing UTF-8 behavior.  
  
Security: CVE-2026-2006  
Backpatch-through: 14  
Author: Thomas Munro <thomas.munro@gmail.com>  
Reported-by: Noah Misch <noah@leadboat.com>  
Reviewed-by: Noah Misch <noah@leadboat.com>  
Reviewed-by: Heikki Linnakangas <hlinnaka@iki.fi>  

M src/common/wchar.c

Fix encoding length for EUC_CN.

commit   : 838248b1bf6b20762d13878006a404c27189f326    
  
author   : Thomas Munro <tmunro@postgresql.org>    
date     : Thu, 5 Feb 2026 01:04:24 +1300    
  
committer: Thomas Munro <tmunro@postgresql.org>    
date     : Thu, 5 Feb 2026 01:04:24 +1300    


While EUC_CN supports only 1- and 2-byte sequences (CS0, CS1), the  
mb<->wchar conversion functions allow 3-byte sequences beginning SS2,  
SS3.  
  
Change pg_encoding_max_length() to return 3, not 2, to close a  
hypothesized buffer overrun if a corrupted string is converted to wchar  
and back again in a newly allocated buffer.  We might reconsider that in  
master (ie harmonizing in a different direction), but this change seems  
better for the back-branches.  
  
Also change pg_euccn_mblen() to report SS2 and SS3 characters as having  
length 3 (following the example of EUC_KR).  Even though such characters  
would not pass verification, it's remotely possible that invalid bytes  
could be used to compute a buffer size for use in wchar conversion.  
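
A minimal sketch of the corrected length logic, using only what the commit message states (SS2 and SS3 introducers now report length 3, other high-bit-set bytes 2, ASCII 1); this is an illustration, not the real pg_euccn_mblen():

```c
#include <assert.h>

#define SS2 0x8E                /* single-shift 2 introducer */
#define SS3 0x8F                /* single-shift 3 introducer */

/* EUC_CN character length by lead byte, after the fix: SS2/SS3 sequences
 * are reported as 3 bytes so that invalid bytes cannot lead to an
 * undersized buffer for wchar conversion. */
int euccn_mblen(unsigned char lead)
{
    if (lead == SS2 || lead == SS3)
        return 3;
    if (lead & 0x80)
        return 2;               /* CS1: two-byte sequence */
    return 1;                   /* CS0: plain ASCII */
}
```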
  
Security: CVE-2026-2006  
Backpatch-through: 14  
Author: Thomas Munro <thomas.munro@gmail.com>  
Reviewed-by: Noah Misch <noah@leadboat.com>  
Reviewed-by: Heikki Linnakangas <hlinnaka@iki.fi>  

M src/common/wchar.c

pgcrypto: Fix buffer overflow in pgp_pub_decrypt_bytea()

commit   : 7a7d9693c72e680af86298f01d850f95fef0988e    
  
author   : Michael Paquier <michael@paquier.xyz>    
date     : Mon, 9 Feb 2026 08:01:07 +0900    
  
committer: Michael Paquier <michael@paquier.xyz>    
date     : Mon, 9 Feb 2026 08:01:07 +0900    


pgp_pub_decrypt_bytea() was missing a safeguard on the session key  
length read from the message data given as input.  When the specified  
length exceeds PGP_MAX_KEY, the maximum size of the buffer the session  
key data is copied into, this could result in a buffer overflow.  
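
The essence of the fix can be sketched as a length check before the copy; `MAX_KEY` here is an assumed stand-in for pgcrypto's PGP_MAX_KEY, whose actual value is not quoted in the commit message:

```c
#include <assert.h>
#include <string.h>

/* Assumed buffer size for illustration only. */
#define MAX_KEY 64

/* Validate the attacker-controlled length read from the message data
 * before copying the session key into the fixed-size buffer; return -1
 * instead of overflowing. */
int copy_session_key(const unsigned char *src, size_t len)
{
    static unsigned char buf[MAX_KEY];

    if (len > MAX_KEY)
        return -1;              /* reject oversized key material */
    memcpy(buf, src, len);
    return (int) len;
}
```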
  
A script able to rebuild message and key data that trigger the  
overflow is included in this commit, based on contents provided by  
the reporter and heavily edited by me.  A SQL test is added, based  
on the data generated by the script.  
  
Reported-by: Team Xint Code as part of zeroday.cloud  
Author: Michael Paquier <michael@paquier.xyz>  
Reviewed-by: Noah Misch <noah@leadboat.com>  
Security: CVE-2026-2005  
Backpatch-through: 14  

M contrib/pgcrypto/Makefile
A contrib/pgcrypto/expected/pgp-pubkey-session.out
M contrib/pgcrypto/meson.build
M contrib/pgcrypto/pgp-pubdec.c
M contrib/pgcrypto/px.c
M contrib/pgcrypto/px.h
A contrib/pgcrypto/scripts/pgp_session_data.py
A contrib/pgcrypto/sql/pgp-pubkey-session.sql

Release notes for 18.2, 17.8, 16.12, 15.16, 14.21.

commit   : 4d3d88844b1b6c18e2ec752b03cb304d3b296216    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sun, 8 Feb 2026 13:00:40 -0500    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sun, 8 Feb 2026 13:00:40 -0500    


M doc/src/sgml/release-17.sgml

Translation updates

commit   : 2a53576c68001c57fa7f2c37d436a4bbbdfc6f47    
  
author   : Peter Eisentraut <peter@eisentraut.org>    
date     : Sun, 8 Feb 2026 15:10:56 +0100    
  
committer: Peter Eisentraut <peter@eisentraut.org>    
date     : Sun, 8 Feb 2026 15:10:56 +0100    


Source-Git-URL: https://git.postgresql.org/git/pgtranslation/messages.git  
Source-Git-Hash: 0e865d0d3f2e9523f47609b1ee1569fde84c5389  

M src/backend/po/de.po
M src/backend/po/es.po
M src/backend/po/ja.po
M src/backend/po/ru.po
M src/backend/po/sv.po
M src/backend/po/uk.po
M src/bin/initdb/po/es.po
M src/bin/pg_amcheck/po/es.po
M src/bin/pg_archivecleanup/po/es.po
M src/bin/pg_basebackup/po/es.po
M src/bin/pg_basebackup/po/ja.po
M src/bin/pg_basebackup/po/ru.po
M src/bin/pg_basebackup/po/sv.po
M src/bin/pg_basebackup/po/uk.po
M src/bin/pg_checksums/po/es.po
M src/bin/pg_combinebackup/po/de.po
M src/bin/pg_combinebackup/po/es.po
M src/bin/pg_combinebackup/po/ru.po
M src/bin/pg_combinebackup/po/sv.po
M src/bin/pg_config/po/es.po
M src/bin/pg_controldata/po/es.po
M src/bin/pg_ctl/po/es.po
M src/bin/pg_dump/po/es.po
M src/bin/pg_dump/po/ru.po
M src/bin/pg_dump/po/uk.po
M src/bin/pg_resetwal/po/de.po
M src/bin/pg_resetwal/po/es.po
M src/bin/pg_resetwal/po/ja.po
M src/bin/pg_resetwal/po/ru.po
M src/bin/pg_resetwal/po/sv.po
M src/bin/pg_resetwal/po/uk.po
M src/bin/pg_rewind/po/es.po
M src/bin/pg_rewind/po/ru.po
M src/bin/pg_rewind/po/sv.po
M src/bin/pg_test_fsync/po/es.po
M src/bin/pg_test_timing/po/es.po
M src/bin/pg_upgrade/po/es.po
M src/bin/pg_upgrade/po/uk.po
M src/bin/pg_verifybackup/po/es.po
M src/bin/pg_waldump/po/es.po
M src/bin/pg_walsummary/po/es.po
M src/bin/psql/po/de.po
M src/bin/psql/po/es.po
M src/bin/psql/po/ja.po
M src/bin/psql/po/ru.po
M src/bin/psql/po/sv.po
M src/bin/psql/po/uk.po
M src/bin/scripts/po/es.po
M src/interfaces/ecpg/ecpglib/po/es.po
M src/interfaces/ecpg/preproc/po/es.po
M src/interfaces/libpq/po/de.po
M src/interfaces/libpq/po/es.po
M src/interfaces/libpq/po/fr.po
M src/interfaces/libpq/po/ja.po
M src/interfaces/libpq/po/ru.po
M src/interfaces/libpq/po/sv.po
M src/interfaces/libpq/po/uk.po
M src/pl/plperl/po/es.po
M src/pl/plpgsql/src/po/es.po
M src/pl/plpython/po/es.po
M src/pl/tcl/po/es.po

meson: host_system value for Solaris is 'sunos' not 'solaris'.

commit   : 59c2f7efaef665d1d3c47a4d3b72cf39bd03cffb    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sat, 7 Feb 2026 20:05:52 -0500    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sat, 7 Feb 2026 20:05:52 -0500    


This thinko caused us to not substitute our own getopt() code,  
which results in failing to parse long options for the postmaster  
since Solaris' getopt() doesn't do what we expect.  This can be seen  
in the results of buildfarm member icarus, which is the only one  
trying to build via meson on Solaris.  
  
Per consultation with pgsql-release, it seems okay to fix this  
now even though we're in release freeze.  The fix visibly won't  
affect any other platforms, and it can't break Solaris/meson  
builds any worse than they're already broken.  
  
Discussion: https://postgr.es/m/2471229.1770499291@sss.pgh.pa.us  
Backpatch-through: 16  

M meson.build

Further error message fix

commit   : 5449fd261ea8c66ad55039f092768dade4922128    
  
author   : Peter Eisentraut <peter@eisentraut.org>    
date     : Sat, 7 Feb 2026 22:37:02 +0100    
  
committer: Peter Eisentraut <peter@eisentraut.org>    
date     : Sat, 7 Feb 2026 22:37:02 +0100    


Further fix of error message changed in commit 74a116a79b4.  The  
initial fix was not quite correct.  
  
Discussion: https://www.postgresql.org/message-id/flat/tencent_1EE1430B1E6C18A663B8990F%40qq.com  

M src/bin/pg_rewind/file_ops.c

Placate ABI checker.

commit   : c7f7057e7ecd7150ff32e3a7fea9e62575cf1d8b    
  
author   : Thomas Munro <tmunro@postgresql.org>    
date     : Sat, 7 Feb 2026 11:36:51 +1300    
  
committer: Thomas Munro <tmunro@postgresql.org>    
date     : Sat, 7 Feb 2026 11:36:51 +1300    


It's not really an ABI break if you change the layout/size of an object  
with incomplete type, as commit f94e9141 did, so advance the ABI  
compliance reference commit in 16-18 to satisfy build farm animal crake.  
  
Backpatch-through: 16-18  
Discussion: https://www.postgresql.org/message-id/1871492.1770409863%40sss.pgh.pa.us  

M .abi-compliance-history

Protect against small overread in SASLprep validation

commit   : 5d61bdd11448bd8e61ed12df07ae8b5064772a48    
  
author   : Jacob Champion <jchampion@postgresql.org>    
date     : Fri, 6 Feb 2026 11:08:24 -0800    
  
committer: Jacob Champion <jchampion@postgresql.org>    
date     : Fri, 6 Feb 2026 11:08:24 -0800    


(This is a cherry-pick of 390b3cbbb, which I hadn't realized wasn't  
backpatched. It was originally reported to security@ and determined not  
to be a vulnerability; thanks to Stanislav Osipov for noticing the  
omission in the back branches.)  
  
In case of torn UTF-8 in the input data we might end up reading past  
the end of the string, since we don't account for the length.  While  
validation won't be performed on a sequence containing a NUL byte,  
it's better to avoid going past the end to begin with.  Fix by taking  
the length into consideration.  
  
Reported-by: Stanislav Osipov <stasos24@gmail.com>  
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>  
Discussion: https://postgr.es/m/CAOYmi+mTnmM172g=_+Yvc47hzzeAsYPy2C4UBY3HK9p-AXNV0g@mail.gmail.com  
Backpatch-through: 14  

M src/common/saslprep.c

Fix some error message inconsistencies

commit   : 67ad4387b2260884407388d597d79b34ebd992a5    
  
author   : Michael Paquier <michael@paquier.xyz>    
date     : Fri, 6 Feb 2026 15:38:23 +0900    
  
committer: Michael Paquier <michael@paquier.xyz>    
date     : Fri, 6 Feb 2026 15:38:23 +0900    


These errors are very unlikely to show up, but in the event that they  
do, some incorrect information would have been provided:  
- In pg_rewind, a stat() failure was reported as an open() failure.  
- In pg_combinebackup, a check for the new directory of a tablespace  
mapping was referred to as the old directory.  
- In pg_combinebackup, a failure in reading a source file while  
copying blocks was reported as affecting the destination file.  
  
The changes for pg_combinebackup affect v17 and newer versions.  For  
pg_rewind, all the stable branches are affected.  
  
Author: Man Zeng <zengman@halodbtech.com>  
Discussion: https://postgr.es/m/tencent_1EE1430B1E6C18A663B8990F@qq.com  
Backpatch-through: 14  

M src/bin/pg_combinebackup/copy_file.c
M src/bin/pg_combinebackup/pg_combinebackup.c
M src/bin/pg_rewind/file_ops.c

Add file_extend_method=posix_fallocate,write_zeros.

commit   : 4dac22aa10d2882c2e6fb465d7c314cc2d8fb754    
  
author   : Thomas Munro <tmunro@postgresql.org>    
date     : Sat, 31 May 2025 22:50:22 +1200    
  
committer: Thomas Munro <tmunro@postgresql.org>    
date     : Sat, 31 May 2025 22:50:22 +1200    


Provide a way to disable the use of posix_fallocate() for relation  
files.  It was introduced by commit 4d330a61bb1.  The new setting  
file_extend_method=write_zeros can be used as a workaround for problems  
reported from the field:  
  
 * BTRFS compression is disabled by the use of posix_fallocate()  
 * XFS could produce spurious ENOSPC errors in some Linux kernel  
   versions, though that problem is reported to have been fixed  
  
The default is file_extend_method=posix_fallocate if available, as  
before.  The write_zeros option is similar to PostgreSQL < 16, except  
that now it's multi-block.  
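
A minimal postgresql.conf sketch of the workaround described above (the parameter name and values are as given in the commit; the comment is illustrative):

```
# Revert to pre-16-style zero-filling if posix_fallocate() causes trouble
# (e.g. BTRFS compression being disabled, or spurious ENOSPC on older XFS).
file_extend_method = write_zeros        # default: posix_fallocate
```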
  
Backpatch-through: 16  
Reviewed-by: Jakub Wartak <jakub.wartak@enterprisedb.com>  
Reported-by: Dimitrios Apostolou <jimis@gmx.net>  
Discussion: https://postgr.es/m/b1843124-fd22-e279-a31f-252dffb6fbf2%40gmx.net  

M doc/src/sgml/config.sgml
M src/backend/storage/file/fd.c
M src/backend/storage/smgr/md.c
M src/backend/utils/misc/guc_tables.c
M src/backend/utils/misc/postgresql.conf.sample
M src/include/storage/fd.h

doc: Move synchronized_standby_slots to "Primary Server" section.

commit   : f6a263e811f5c971cd772a5fd39e048924727173    
  
author   : Fujii Masao <fujii@postgresql.org>    
date     : Fri, 6 Feb 2026 09:40:05 +0900    
  
committer: Fujii Masao <fujii@postgresql.org>    
date     : Fri, 6 Feb 2026 09:40:05 +0900    


synchronized_standby_slots is defined in guc_parameter.dat as part of  
the REPLICATION_PRIMARY group and is listed under the "Primary Server"  
section in postgresql.conf.sample. However, in the documentation  
its description was previously placed under the "Sending Servers" section.  
  
Since synchronized_standby_slots only takes effect on the primary server,  
this commit moves its documentation to the "Primary Server" section to  
match its behavior and other references.  
  
Backpatch to v17 where synchronized_standby_slots was added.  
  
Author: Fujii Masao <masao.fujii@gmail.com>  
Reviewed-by: Shinya Kato <shinya11.kato@gmail.com>  
Discussion: https://postgr.es/m/CAHGQGwE_LwgXgCrqd08OFteJqdERiF3noqOKu2vt7Kjk4vMiGg@mail.gmail.com  
Backpatch-through: 17  

M doc/src/sgml/config.sgml

Fix logical replication TAP test to read publisher log correctly.

commit   : c423bf63a096c47a2829f9c3edda5dde3426fdf6    
  
author   : Fujii Masao <fujii@postgresql.org>    
date     : Thu, 5 Feb 2026 00:46:09 +0900    
  
committer: Fujii Masao <fujii@postgresql.org>    
date     : Thu, 5 Feb 2026 00:46:09 +0900    


Commit 5f13999aa11 added a TAP test for GUC settings passed via the  
CONNECTION string in logical replication, but the buildfarm member  
sungazer reported test failures.  
  
The test incorrectly used the subscriber's log file position as the  
starting offset when reading the publisher's log. As a result, the test  
failed to find the expected log message in the publisher's log and  
erroneously reported a failure.  
  
This commit fixes the test to use the publisher's own log file position  
when reading the publisher's log.  
  
Also, to avoid similar confusion in the future, this commit splits the single  
$log_location variable into $log_location_pub and $log_location_sub,  
clearly distinguishing publisher and subscriber log positions.  
  
Backpatched to v15, where commit 5f13999aa11 introduced the test.  
  
Per buildfarm member sungazer.  
This issue was reported and diagnosed by Alexander Lakhin.  
  
Reported-by: Alexander Lakhin <exclusion@gmail.com>  
Discussion: https://postgr.es/m/966ec3d8-1b6f-4f57-ae59-fc7d55bc9a5a@gmail.com  
Backpatch-through: 15  

M src/test/subscription/t/001_rep_changes.pl

Fix various instances of undefined behavior

commit   : 1662cd0cb7ae42334af17caa7dceb172b45248f7    
  
author   : John Naylor <john.naylor@postgresql.org>    
date     : Wed, 4 Feb 2026 17:55:49 +0700    
  
committer: John Naylor <john.naylor@postgresql.org>    
date     : Wed, 4 Feb 2026 17:55:49 +0700    


Mostly this involves checking for NULL pointer before doing operations  
that add a non-zero offset.  
  
The exception is an overflow warning in heap_fetch_toast_slice(). This  
was caused by unneeded parentheses forcing an expression to be  
evaluated to a negative integer, which then got cast to size_t.  
  
Per clang 21 undefined behavior sanitizer.  
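
Both patterns can be sketched in standalone C; `advance` and `safe_span` are illustrative helpers, not the functions touched by the commit:

```c
#include <assert.h>
#include <stddef.h>

/* First class of issues: adding a non-zero offset to a NULL pointer is
 * undefined behavior, so check for NULL before the arithmetic. */
const char *advance(const char *p, size_t off)
{
    if (p == NULL)
        return NULL;
    return p + off;
}

/* Second issue: a difference that evaluates to a negative signed value
 * and is then cast to size_t wraps to a huge number.  Compute in signed
 * arithmetic and check the sign before any unsigned cast. */
long safe_span(long a, long b)
{
    long diff = a - b;

    if (diff < 0)
        return -1;              /* would have wrapped if cast to size_t */
    return diff;
}
```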
  
Backpatch to all supported versions.  
  
Co-authored-by: Alexander Lakhin <exclusion@gmail.com>  
Reported-by: Alexander Lakhin <exclusion@gmail.com>  
Discussion: https://postgr.es/m/777bd201-6e3a-4da0-a922-4ea9de46a3ee@gmail.com  
Backpatch-through: 14  

M contrib/pg_trgm/trgm_gist.c
M src/backend/access/heap/heaptoast.c
M src/backend/utils/adt/multirangetypes.c
M src/backend/utils/sort/sharedtuplestore.c

commit   : 263af458e4ceb0c17e717a2cafd648908aeb4911    
  
author   : Michael Paquier <michael@paquier.xyz>    
date     : Wed, 4 Feb 2026 16:38:12 +0900    
  
committer: Michael Paquier <michael@paquier.xyz>    
date     : Wed, 4 Feb 2026 16:38:12 +0900    


A failure while closing pg_wal/summaries/ incorrectly generated a report  
about pg_wal/archive_status/.  
  
While at it, this commit adds #undefs for the macros used in  
KillExistingWALSummaries() and KillExistingArchiveStatus() to prevent  
those values from being misused in an incorrect function context.  
  
Oversight in dc212340058b.  
  
Author: Tianchen Zhang <zhang_tian_chen@163.com>  
Reviewed-by: Chao Li <li.evan.chao@gmail.com>  
Reviewed-by: Kyotaro Horiguchi <horikyota.ntt@gmail.com>  
Discussion: https://postgr.es/m/SE2P216MB2390C84C23F428A7864EE07FA19BA@SE2P216MB2390.KORP216.PROD.OUTLOOK.COM  
Backpatch-through: 17  

M src/bin/pg_resetwal/pg_resetwal.c

Fix incorrect errno in OpenWalSummaryFile()

commit   : 3b15032dad50d18e45cf12070af13035bd1f2fe9    
  
author   : Michael Paquier <michael@paquier.xyz>    
date     : Tue, 3 Feb 2026 11:25:16 +0900    
  
committer: Michael Paquier <michael@paquier.xyz>    
date     : Tue, 3 Feb 2026 11:25:16 +0900    


This routine has an option to bypass an error if a WAL summary file is  
opened for read but is missing (missing_ok=true).  However, the code  
incorrectly checked for EEXIST, that matters when using O_CREAT and  
O_EXCL, rather than ENOENT, for this case.  
  
There are currently only two callers of OpenWalSummaryFile() in the  
tree, and both use missing_ok=false, meaning that the check based on the  
errno is currently dead code.  This issue could matter for out-of-core  
code or future backpatches that would like to use missing_ok set to  
true.  
  
Issue spotted while monitoring this area of the code, after  
a9afa021e95f.  
  
Author: Michael Paquier <michael@paquier.xyz>  
Reviewed-by: Chao Li <li.evan.chao@gmail.com>  
Discussion: https://postgr.es/m/aYAf8qDHbpBZ3Rml@paquier.xyz  
Backpatch-through: 17  

M src/backend/backup/walsummary.c
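The corrected errno logic can be sketched as follows (a minimal standalone sketch, not the code in walsummary.c; real code reports errors via ereport()):

```c
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>

/*
 * With missing_ok, a failed open() for read is tolerated only when
 * errno is ENOENT ("no such file").  EEXIST can only be raised by
 * open() when O_CREAT | O_EXCL are used, which is not the case here.
 */
static int
open_summary_file(const char *path, int missing_ok)
{
    int         fd = open(path, O_RDONLY);

    if (fd < 0)
    {
        if (missing_ok && errno == ENOENT)
            return -1;          /* caller treats this as "no file" */
        perror(path);           /* illustrative stand-in for ereport() */
    }
    return fd;
}
```

With missing_ok set, opening a nonexistent path returns -1 silently; any other failure is reported.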

Fix error message in RemoveWalSummaryIfOlderThan()

commit   : 5995135f13031b37ad167ed83b5249ba8419ade2    
  
author   : Michael Paquier <michael@paquier.xyz>    
date     : Mon, 2 Feb 2026 10:21:10 +0900    
  
committer: Michael Paquier <michael@paquier.xyz>    
date     : Mon, 2 Feb 2026 10:21:10 +0900    

Click here for diff

A failing unlink() was reporting an incorrect error message, referring  
to stat().  
  
Author: Man Zeng <zengman@halodbtech.com>  
Reviewed-by: Junwang Zhao <zhjwpku@gmail.com>  
Discussion: https://postgr.es/m/tencent_3BBE865C5F49D452360FF190@qq.com  
Backpatch-through: 17  

M src/backend/backup/walsummary.c
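The shape of the fix, as a standalone sketch (function name is illustrative, and plain fprintf stands in for ereport()): after a failed unlink(), the message must name the removal, not a stat() performed earlier on the same path.

```c
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Report the failing call correctly: this is a remove, not a stat. */
static int
remove_wal_summary(const char *path)
{
    if (unlink(path) != 0)
    {
        fprintf(stderr, "could not remove file \"%s\": %s\n",
                path, strerror(errno));
        return -1;
    }
    return 0;
}
```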

Fix build inconsistency due to the generation of wait-event code

commit   : 241803febff453c1b243b48c4a0dfc3ac07fd346    
  
author   : Michael Paquier <michael@paquier.xyz>    
date     : Mon, 2 Feb 2026 08:03:02 +0900    
  
committer: Michael Paquier <michael@paquier.xyz>    
date     : Mon, 2 Feb 2026 08:03:02 +0900    

Click here for diff

The build generates four files based on the wait event contents stored  
in wait_event_names.txt:  
- wait_event_types.h  
- pgstat_wait_event.c  
- wait_event_funcs_data.c  
- wait_event_types.sgml  
  
The SGML file is generated as part of a documentation build, with its  
data stored in doc/src/sgml/ for meson and configure.  The three others  
are handled differently for meson and configure:  
- In configure, all the files are created in src/backend/utils/activity/.  
A link to wait_event_types.h is created in src/include/utils/.  
- In meson, all the files are created in src/include/utils/.  
  
The two C files, pgstat_wait_event.c and wait_event_funcs_data.c, are  
then included in respectively wait_event.c and wait_event_funcs.c,  
without the "utils/" path.  
  
For configure, this does not present a problem.  For meson, however,  
it has to be combined with a trick in src/backend/utils/activity/meson.build,  
where include_directories needs to point to include/utils/ to make the  
inclusion of the C files work properly.  This can cause builds to pull  
in PostgreSQL headers rather than system headers in some build paths,  
as src/include/utils/ would take priority.  
  
In order to fix this issue, this commit reworks the way the C/H files  
are generated, becoming consistent with guc_tables.inc.c:  
- For meson, basically nothing changes.  The files are still generated  
in src/include/utils/.  The trick with include_directories is removed.  
- For configure, the files are now generated in src/backend/utils/, with  
links in src/include/utils/ pointing to the ones in src/backend/.  This  
requires extra rules in src/backend/utils/activity/Makefile so that a  
make command in this sub-directory is able to work.  
- The three files now fall under header-stamp, which is actually simpler  
as guc_tables.inc.c does the same.  
- wait_event_funcs_data.c and pgstat_wait_event.c are now included with  
"utils/" in their path.  
  
This problem has not shown up in the buildfarm; it was noticed on AIX  
as a conflict with float.h.  It could, however, create conflicts in the  
buildfarm depending on the environment, with unexpected headers pulled  
in, so this fix is backpatched down to where the generation of the  
wait-event files was introduced.  
  
While at it, this commit simplifies wait_event_names.txt to mention  
just the names of the generated files: the generation paths listed  
there had become incorrect, and the path given for the SGML file was  
wrong.  
  
This change has been tested in the CI, down to v17.  Locally, I have run  
tests with configure (with and without VPATH), as well as meson, on the  
three branches.  
  
Combo oversight in fa88928470b5 and 1e68e43d3f0f.  
  
Reported-by: Aditya Kamath <aditya.kamath1@ibm.com>  
Discussion: https://postgr.es/m/LV8PR15MB64888765A43D229EA5D1CFE6D691A@LV8PR15MB6488.namprd15.prod.outlook.com  
Backpatch-through: 17  

M src/backend/Makefile
M src/backend/utils/.gitignore
M src/backend/utils/Makefile
D src/backend/utils/activity/.gitignore
M src/backend/utils/activity/Makefile
M src/backend/utils/activity/meson.build
M src/backend/utils/activity/wait_event.c
M src/backend/utils/activity/wait_event_funcs.c
M src/backend/utils/activity/wait_event_names.txt
M src/include/Makefile
M src/include/utils/.gitignore
M src/include/utils/meson.build

Improve guards against false regex matches in BackgroundPsql.pm.

commit   : f356e2888cdbc1b5d02d29658537f1a3ed8092ca    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 30 Jan 2026 14:59:25 -0500    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 30 Jan 2026 14:59:25 -0500    

Click here for diff

BackgroundPsql needs to wait for all the output from an interactive  
psql command to come back.  To make sure that's happened, it issues  
the command, then issues \echo and \warn psql commands that echo  
a "banner" string (which we assume won't appear in the command's  
output), then waits for the banner strings to appear.  The hazard  
in this approach is that the banner will also appear in the echoed  
psql commands themselves, so we need to distinguish those echoes from  
the desired output.  Commit 8b886a4e3 tried to do that by positing  
that the desired output would be directly preceded and followed by  
newlines, but it turns out that that assumption is timing-sensitive.  
In particular, it tends to fail in builds made --without-readline,  
wherein the command echoes will be made by the pty driver and may  
be interspersed with prompts issued by psql proper.  
  
It does seem safe to assume that the banner output we want will be  
followed by a newline, since that should be the last output before  
things quiesce.  Therefore, we can improve matters by putting quotes  
around the banner strings in the \echo and \warn psql commands, so  
that their echoes cannot include banner directly followed by newline,  
and then checking for just banner-and-newline in the match pattern.  
  
While at it, spruce up the pump() call in sub query() to look like  
the neater version in wait_connect(), and don't die on timeout  
until after printing whatever we got.  
  
Reported-by: Oleg Tselebrovskiy <o.tselebrovskiy@postgrespro.ru>  
Diagnosed-by: Oleg Tselebrovskiy <o.tselebrovskiy@postgrespro.ru>  
Author: Tom Lane <tgl@sss.pgh.pa.us>  
Reviewed-by: Soumya S Murali <soumyamurali.work@gmail.com>  
Discussion: https://postgr.es/m/db6fdb35a8665ad3c18be01181d44b31@postgrespro.ru  
Backpatch-through: 14  

M src/test/perl/PostgreSQL/Test/BackgroundPsql.pm

Update .abi-compliance-history for change to TransitionCaptureState.

commit   : 661d55e2573766c06948f4b94de4186df8363e59    
  
author   : Dean Rasheed <dean.a.rasheed@gmail.com>    
date     : Fri, 30 Jan 2026 08:49:43 +0000    
  
committer: Dean Rasheed <dean.a.rasheed@gmail.com>    
date     : Fri, 30 Jan 2026 08:49:43 +0000    

Click here for diff

As noted in the commit message for b4307ae2e54, the change to the  
TransitionCaptureState structure is nominally an ABI break, but it is  
not expected to affect any third-party code. Therefore, add it to the  
.abi-compliance-history file.  
  
Discussion: https://postgr.es/m/19380-4e293be2b4007248%40postgresql.org  
Backpatch-through: 15-18  

M .abi-compliance-history

Fix CI failure introduced in commit 851f6649cc.

commit   : 9649f1adfdee66a843cf97e0b292c489ba05e2ec    
  
author   : Amit Kapila <akapila@postgresql.org>    
date     : Thu, 29 Jan 2026 02:57:02 +0000    
  
committer: Amit Kapila <akapila@postgresql.org>    
date     : Thu, 29 Jan 2026 02:57:02 +0000    

Click here for diff

The test added in commit 851f6649cc uses a backup taken from a node  
created by the previous test to perform standby related checks. On  
Windows, however, the standby failed to start with the following error:  
FATAL:  could not rename file "backup_label" to "backup_label.old": Permission denied  
  
This occurred because some background sessions from the earlier test were  
still active. These leftover processes continued accessing the parent  
directory of the backup_label file, likely preventing the rename and  
causing the failure. Ensuring that these sessions are cleanly terminated  
resolves the issue in local testing.  
  
Additionally, the has_restoring => 1 option has been removed, as it was  
not required by the new test.  
  
Reported-by: Robert Haas <robertmhaas@gmail.com>  
Backpatch-through: 17  
Discussion: https://postgr.es/m/CA+TgmobdVhO0ckZfsBZ0wqDO4qHVCwZZx8sf=EinafvUam-dsQ@mail.gmail.com  

M src/test/recovery/t/046_checkpoint_logical_slot.pl

Prevent invalidation of newly synced replication slots.

commit   : 3243c0177efb77511269d6037ee668abd2e81396    
  
author   : Amit Kapila <akapila@postgresql.org>    
date     : Tue, 27 Jan 2026 05:49:23 +0000    
  
committer: Amit Kapila <akapila@postgresql.org>    
date     : Tue, 27 Jan 2026 05:49:23 +0000    

Click here for diff

A race condition could cause a newly synced replication slot to become  
invalidated between its initial sync and the checkpoint.  
  
When syncing a replication slot to a standby, the slot's initial  
restart_lsn is taken from the publisher's remote_restart_lsn. Because slot  
sync happens asynchronously, this value can lag behind the standby's  
current redo pointer. Without any interlocking between WAL reservation and  
checkpoints, a checkpoint may remove WAL required by the newly synced  
slot, causing the slot to be invalidated.  
  
To fix this, we acquire ReplicationSlotAllocationLock before reserving WAL  
for a newly synced slot, similar to commit 006dd4b2e5. This ensures that  
if WAL reservation happens first, the checkpoint process must wait for  
slotsync to update the slot's restart_lsn before it computes the minimum  
required LSN.  
  
However, unlike in ReplicationSlotReserveWal(), this lock alone cannot  
protect a newly synced slot if a checkpoint has already run  
CheckPointReplicationSlots() before slotsync updates the slot. In such  
cases, the remote restart_lsn may be stale and earlier than the current  
redo pointer. To prevent relying on an outdated LSN, we use the oldest  
WAL location available if it is greater than the remote restart_lsn.  
  
This ensures that newly synced slots always start with a safe, non-stale  
restart_lsn and are not invalidated by concurrent checkpoints.  
  
Author: Zhijie Hou <houzj.fnst@fujitsu.com>  
Reviewed-by: Hayato Kuroda <kuroda.hayato@fujitsu.com>  
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>  
Reviewed-by: Vitaly Davydov <v.davydov@postgrespro.ru>  
Reviewed-by: Chao Li <li.evan.chao@gmail.com>  
Backpatch-through: 17  
Discussion: https://postgr.es/m/TY4PR01MB16907E744589B1AB2EE89A31F94D7A%40TY4PR01MB16907.jpnprd01.prod.outlook.com  

M src/backend/access/transam/xlog.c
M src/backend/replication/logical/slotsync.c
M src/include/access/xlog.h
M src/test/recovery/t/046_checkpoint_logical_slot.pl
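The second part of the fix reduces to picking the newer of two LSNs; a minimal sketch under the assumption of a simple 64-bit LSN type (the helper name and type stand-in are illustrative, not taken from slotsync.c):

```c
typedef unsigned long long XLogRecPtr;  /* stand-in for the real type */

/*
 * If the publisher's restart_lsn lags behind the oldest WAL location
 * still available locally, start the synced slot from the oldest
 * available location rather than the stale remote value.
 */
static XLogRecPtr
initial_restart_lsn(XLogRecPtr remote_restart_lsn, XLogRecPtr oldest_wal_lsn)
{
    return (oldest_wal_lsn > remote_restart_lsn) ? oldest_wal_lsn
                                                 : remote_restart_lsn;
}
```

This guarantees the synced slot never starts with a restart_lsn older than the WAL that actually remains on the standby.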

Reduce length of TAP test file name.

commit   : cf3170ff315998dffdc7aa3501d6362a840d777c    
  
author   : Robert Haas <rhaas@postgresql.org>    
date     : Mon, 26 Jan 2026 12:43:52 -0500    
  
committer: Robert Haas <rhaas@postgresql.org>    
date     : Mon, 26 Jan 2026 12:43:52 -0500    

Click here for diff

Buildfarm member fairywren hit the Windows limitation on the length of a  
file path. While there may be other things we should also do to prevent  
this from happening, it's certainly the case that the length of this  
test file name is much longer than others in the same directory, so make  
it shorter.  
  
Reported-by: Alexander Lakhin <exclusion@gmail.com>  
Discussion: http://postgr.es/m/274e0a1a-d7d2-4bc8-8b56-dd09f285715e@gmail.com  
Backpatch-through: 17  

M src/bin/pg_combinebackup/meson.build
R100 src/bin/pg_combinebackup/t/011_incremental_backup_truncation_block.pl src/bin/pg_combinebackup/t/011_ib_truncation.pl

Fix possible issue of a WindowFunc being in the wrong WindowClause

commit   : cae8127411670becc195f46af723025ddd6e135d    
  
author   : David Rowley <drowley@postgresql.org>    
date     : Mon, 26 Jan 2026 23:47:07 +1300    
  
committer: David Rowley <drowley@postgresql.org>    
date     : Mon, 26 Jan 2026 23:47:07 +1300    

Click here for diff

ed1a88dda made it so WindowClauses can be merged when all window  
functions belonging to the WindowClause can equally well use some  
other WindowClause without any behavioral changes.  When that  
optimization applies, the WindowFunc's "winref" gets adjusted to  
reference the new WindowClause.  
  
That commit does not work well with the deduplication logic in  
find_window_functions(), which only added the WindowFunc to the list  
when there wasn't already an identical WindowFunc in the list.  That  
deduplication logic meant that the duplicate WindowFunc wouldn't get the  
"winref" changed when optimize_window_clauses() was able to swap the  
WindowFunc to another WindowClause.  This could lead to the following  
error in the unlikely event that the deduplication code did something and  
the duplicate WindowFunc happened to be moved into another WindowClause.  
  
ERROR:  WindowFunc with winref 2 assigned to WindowAgg with winref 1  
  
As it turns out, the deduplication logic in find_window_functions() is  
pretty bogus.  It might have done something when added, as that code  
predates b8d7f053c, which changed how projections work.  As it turns  
out, at least now we *will* evaluate the duplicate WindowFuncs.  All  
that the deduplication code seems to do today is assist in  
underestimating the WindowAggPath costs due to not counting the  
evaluation costs of duplicate WindowFuncs.  
  
Ideally the fix would be to remove the deduplication code, but that  
could result in changes to the plan costs, as duplicate WindowFuncs  
would then be costed.  Instead, let's play it safe and shift the  
deduplication code so it runs after the other processing in  
optimize_window_clauses().  
  
Backpatch only as far as v16 as there doesn't seem to be any other harm  
done by the WindowFunc deduplication code before then.  This issue was  
fixed in master by 7027dd499.  
  
Reported-by: Meng Zhang <mza117jc@gmail.com>  
Author: Meng Zhang <mza117jc@gmail.com>  
Author: David Rowley <dgrowleyml@gmail.com>  
Discussion: https://postgr.es/m/CAErYLFAuxmW0UVdgrz7iiuNrxGQnFK_OP9hBD5CUzRgjrVrz=Q@mail.gmail.com  
Backpatch-through: 16  

M src/backend/optimizer/plan/planner.c
M src/backend/optimizer/util/clauses.c

Fix trigger transition table capture for MERGE in CTE queries.

commit   : c5fc17ddaccff14bc22217df2b06ed43a5af16ba    
  
author   : Dean Rasheed <dean.a.rasheed@gmail.com>    
date     : Sat, 24 Jan 2026 11:30:49 +0000    
  
committer: Dean Rasheed <dean.a.rasheed@gmail.com>    
date     : Sat, 24 Jan 2026 11:30:49 +0000    

Click here for diff

When executing a data-modifying CTE query containing MERGE and some  
other DML operation on a table with statement-level AFTER triggers,  
the transition tables passed to the triggers would fail to include the  
rows affected by the MERGE.  
  
The reason is that, when initializing a ModifyTable node for MERGE,  
MakeTransitionCaptureState() would create a TransitionCaptureState  
structure with a single "tcs_private" field pointing to an  
AfterTriggersTableData structure with cmdType == CMD_MERGE. Tuples  
captured there would then not be included in the sets of tuples  
captured when executing INSERT/UPDATE/DELETE ModifyTable nodes in the  
same query.  
  
Since there are no MERGE triggers, we should only create  
AfterTriggersTableData structures for INSERT/UPDATE/DELETE. Individual  
MERGE actions should then use those, thereby sharing the same capture  
tuplestores as any other DML commands executed in the same query.  
  
This requires changing the TransitionCaptureState structure, replacing  
"tcs_private" with 3 separate pointers to AfterTriggersTableData  
structures, one for each of INSERT, UPDATE, and DELETE. Nominally,  
this is an ABI break to a public structure in commands/trigger.h.  
However, since this is a private field pointing to an opaque data  
structure, the only way to create a valid TransitionCaptureState is by  
calling MakeTransitionCaptureState(), and no extensions appear to be  
doing that anyway, so it seems safe for back-patching.  
  
Backpatch to v15, where MERGE was introduced.  
  
Bug: #19380  
Reported-by: Daniel Woelfel <dwwoelfel@gmail.com>  
Author: Dean Rasheed <dean.a.rasheed@gmail.com>  
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>  
Discussion: https://postgr.es/m/19380-4e293be2b4007248%40postgresql.org  
Backpatch-through: 15  

M src/backend/commands/trigger.c
M src/include/commands/trigger.h
M src/test/regress/expected/triggers.out
M src/test/regress/sql/triggers.sql
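The shape of the structure change described above, as a sketch; everything except the idea of "one private pointer per command type" is illustrative, not copied from commands/trigger.h:

```c
#include <stddef.h>

struct AfterTriggersTableData;  /* opaque to outside callers */

typedef struct TransitionCaptureState
{
    /* ... public fields unchanged ... */

    /* Was a single "void *tcs_private"; now one per command type, so
     * MERGE actions share the INSERT/UPDATE/DELETE tuplestores of the
     * other DML commands in the same query: */
    struct AfterTriggersTableData *tcs_insert_private;
    struct AfterTriggersTableData *tcs_update_private;
    struct AfterTriggersTableData *tcs_delete_private;
} TransitionCaptureState;
```

Because callers only ever obtain the structure from MakeTransitionCaptureState(), swapping one opaque pointer for three is an ABI change only nominally.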

Fix bogus ctid requirement for dummy-root partitioned targets

commit   : 933f67fb6a79d26c5fd69ca6adbadae9efd215fc    
  
author   : Amit Langote <amitlan@postgresql.org>    
date     : Fri, 23 Jan 2026 10:20:59 +0900    
  
committer: Amit Langote <amitlan@postgresql.org>    
date     : Fri, 23 Jan 2026 10:20:59 +0900    

Click here for diff

ExecInitModifyTable() unconditionally required a ctid junk column even  
when the target was a partitioned table. This led to spurious "could  
not find junk ctid column" errors when all children were excluded and  
only the dummy root result relation remained.  
  
A partitioned table only appears in the result relations list when all  
leaf partitions have been pruned, leaving the dummy root as the sole  
entry. Assert this invariant (nrels == 1) and skip the ctid requirement.  
Also adjust ExecModifyTable() to tolerate invalid ri_RowIdAttNo for  
partitioned tables, which is safe since no rows will be processed in  
this case.  
  
Bug: #19099  
Reported-by: Alexander Lakhin <exclusion@gmail.com>  
Author: Amit Langote <amitlangote09@gmail.com>  
Reviewed-by: Tender Wang <tndrwang@gmail.com>  
Reviewed-by: Kirill Reshke <reshkekirill@gmail.com>  
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>  
Discussion: https://postgr.es/m/19099-e05dcfa022fe553d%40postgresql.org  
Backpatch-through: 14  

M contrib/file_fdw/expected/file_fdw.out
M contrib/file_fdw/sql/file_fdw.sql
M src/backend/executor/nodeModifyTable.c

Remove faulty Assert in partitioned INSERT...ON CONFLICT DO UPDATE.

commit   : 3fabfc6595ced067a22ac309ec94c79e86d61494    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 22 Jan 2026 18:35:31 -0500    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 22 Jan 2026 18:35:31 -0500    

Click here for diff

Commit f16241bef mistakenly supposed that INSERT...ON CONFLICT DO  
UPDATE rejects partitioned target tables.  (This may have been  
accurate when the patch was written, but it was already obsolete  
when committed.)  Hence, there's an assertion that we can't see  
ItemPointerIndicatesMovedPartitions() in that path, but the assertion  
is triggerable.  
  
Some other places throw error if they see a moved-across-partitions  
tuple, but there seems no need for that here, because if we just retry  
then we get the same behavior as in the update-within-partition case,  
as demonstrated by the new isolation test.  So fix by deleting the  
faulty Assert.  (The fact that this is the fix doubtless explains  
why we've heard no field complaints: the behavior of a non-assert  
build is fine.)  
  
The TM_Deleted case contains a cargo-culted copy of the same Assert,  
which I also deleted to avoid confusion, although I believe that one  
is actually not triggerable.  
  
Per our code coverage report, neither the TM_Updated nor the  
TM_Deleted case were reached at all by existing tests, so this  
patch adds tests for both.  
  
Reported-by: Dmitry Koval <d.koval@postgrespro.ru>  
Author: Joseph Koshakow <koshy44@gmail.com>  
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>  
Discussion: https://postgr.es/m/f5fffe4b-11b2-4557-a864-3587ff9b4c36@postgrespro.ru  
Backpatch-through: 14  

M src/backend/executor/nodeModifyTable.c
A src/test/isolation/expected/insert-conflict-do-update-4.out
M src/test/isolation/isolation_schedule
A src/test/isolation/specs/insert-conflict-do-update-4.spec

doc: Mention pg_get_partition_constraintdef()

commit   : f1cb59e6e3417683cb26e165f25ab6fa5f039342    
  
author   : Michael Paquier <michael@paquier.xyz>    
date     : Thu, 22 Jan 2026 16:35:48 +0900    
  
committer: Michael Paquier <michael@paquier.xyz>    
date     : Thu, 22 Jan 2026 16:35:48 +0900    

Click here for diff

All the other SQL functions reconstructing definitions or commands are  
listed in the documentation, except this one.  
  
Oversight in 1848b73d4576.  
  
Author: Todd Liebenschutz-Jones <todd.liebenschutz-jones@starlingbank.com>  
Discussion: https://postgr.es/m/CAGTRfaD6uRQ9iutASDzc_iDoS25sQTLWgXTtR3ta63uwTxq6bA@mail.gmail.com  
Backpatch-through: 14  

M doc/src/sgml/func.sgml

jit: Add missing inline pass for LLVM >= 17.

commit   : d0bb0e5b364f157e93a00658c370527ae91e985e    
  
author   : Thomas Munro <tmunro@postgresql.org>    
date     : Thu, 22 Jan 2026 15:43:13 +1300    
  
committer: Thomas Munro <tmunro@postgresql.org>    
date     : Thu, 22 Jan 2026 15:43:13 +1300    

Click here for diff

With LLVM >= 17, transform passes are provided as a string to  
LLVMRunPasses. Only two strings were used: "default<O3>" and  
"default<O0>,mem2reg".  
  
With previous LLVM versions, an additional inline pass was added when  
JIT inlining was enabled without optimization. With LLVM >= 17, the code  
would go through llvm_inline, prepare the functions for inlining, but  
the generated bitcode would be the same due to the missing inline pass.  
  
This patch restores the previous behavior by adding an inline pass when  
inlining is enabled but no optimization is done.  
  
This fixes an oversight introduced by 76200e5e when support for LLVM 17  
was added.  
  
Backpatch-through: 14  
Author: Anthonin Bonnefoy <anthonin.bonnefoy@datadoghq.com>  
Reviewed-by: Thomas Munro <thomas.munro@gmail.com>  
Reviewed-by: Andreas Karlsson <andreas@proxel.se>  
Reviewed-by: Andres Freund <andres@anarazel.de>  
Reviewed-by: Álvaro Herrera <alvherre@kurilemu.de>  
Reviewed-by: Pierre Ducroquet <p.psql@pinaraf.info>  
Reviewed-by: Matheus Alcantara <matheusssilv97@gmail.com>  
Discussion: https://postgr.es/m/CAO6_XqrNjJnbn15ctPv7o4yEAT9fWa-dK15RSyun6QNw9YDtKg%40mail.gmail.com  

M src/backend/jit/llvm/llvmjit.c
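The pass-string selection can be sketched as below. The two pre-existing strings are quoted from the message above; the exact name of the inline pass appended by the fix is an assumption here ("inline" is a placeholder), as is the helper name:

```c
/* Pick the textual pass pipeline handed to LLVMRunPasses (LLVM >= 17). */
static const char *
choose_passes(int optimize, int inlining)
{
    if (optimize)
        return "default<O3>";
    if (inlining)
        return "default<O0>,mem2reg,inline";    /* placeholder pass name */
    return "default<O0>,mem2reg";
}
```

Without the third branch, the O0 path was identical whether or not inlining had been prepared, which is exactly the oversight being fixed.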

amcheck: Fix snapshot usage in bt_index_parent_check

commit   : e1a327dc4dcbd6ecbc8b97b57a2ffdb20ca2e3f5    
  
author   : Álvaro Herrera <alvherre@kurilemu.de>    
date     : Wed, 21 Jan 2026 18:55:43 +0100    
  
committer: Álvaro Herrera <alvherre@kurilemu.de>    
date     : Wed, 21 Jan 2026 18:55:43 +0100    

Click here for diff

We were using SnapshotAny to do some index checks, but that's wrong and  
causes spurious errors when used on indexes created by CREATE INDEX  
CONCURRENTLY.  Fix it to use an MVCC snapshot, and add a test for it.  
  
Backpatch of 6bd469d26aca to branches 14-16.  I previously misidentified  
the bug's origin: it came in with commit 7f563c09f890 (pg11-era, not  
5ae2087202af as claimed previously), so all live branches are affected.  
  
Also take the opportunity to fix some comments that we failed to update  
in the original commits and apply pgperltidy.  In branch 14, remove the  
unnecessary test plan specification (which would have needed to be  
changed anyway; cf. commit 549ec201d613.)  
  
Diagnosed-by: Donghang Lin <donghanglin@gmail.com>  
Author: Mihail Nikalayeu <mihailnikalayeu@gmail.com>  
Reviewed-by: Andrey Borodin <x4mmm@yandex-team.ru>  
Backpatch-through: 17  
Discussion: https://postgr.es/m/CANtu0ojmVd27fEhfpST7RG2KZvwkX=dMyKUqg0KM87FkOSdz8Q@mail.gmail.com  

M contrib/amcheck/t/002_cic.pl
M contrib/amcheck/verify_nbtree.c

Don't set the truncation block length greater than RELSEG_SIZE.

commit   : ad569b54a1064d8f9e787f5fe72995002b37c106    
  
author   : Robert Haas <rhaas@postgresql.org>    
date     : Mon, 19 Jan 2026 12:02:08 -0500    
  
committer: Robert Haas <rhaas@postgresql.org>    
date     : Mon, 19 Jan 2026 12:02:08 -0500    

Click here for diff

When faced with a relation containing more than 1 physical segment  
(i.e. >1GB, with normal settings), the previous code could compute a  
truncation block length greater than RELSEG_SIZE, which could lead to  
restore failures of this form:  
  
file "%s" has truncation block length %u in excess of segment size %u  
  
The fix is simply to clamp the maximum computed truncation_block_length  
to RELSEG_SIZE.  I have also added some comments to clarify the logic.  
  
The test case was written by Oleg Tkachenko, but I have rewritten its  
comments.  
  
Reported-by: Oleg Tkachenko <oatkachenko@gmail.com>  
Diagnosed-by: Oleg Tkachenko <oatkachenko@gmail.com>  
Co-authored-by: Robert Haas <rhaas@postgresql.org>  
Co-authored-by: Oleg Tkachenko <oatkachenko@gmail.com>  
Reviewed-by: Amul Sul <sulamul@gmail.com>  
Backpatch-through: 17  
Discussion: http://postgr.es/m/00FEFC88-EA1D-4271-B38F-EB741733A84A@gmail.com  

M src/backend/backup/basebackup_incremental.c
M src/bin/pg_combinebackup/meson.build
A src/bin/pg_combinebackup/t/011_incremental_backup_truncation_block.pl
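The clamp itself is a one-liner; a standalone sketch, assuming the default build (8 kB blocks, 1 GB segments, so 131072 blocks per segment):

```c
/*
 * Each backup file covers at most one segment, so the truncation block
 * length recorded for it must never exceed RELSEG_SIZE.
 */
#define RELSEG_SIZE 131072      /* assumption: default segment size */

static unsigned int
clamp_truncation_block_length(unsigned int computed)
{
    return (computed > RELSEG_SIZE) ? RELSEG_SIZE : computed;
}
```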

Update time zone data files to tzdata release 2025c.

commit   : f87c0b84e88a968af3fda4471cdd696559a5934a    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sun, 18 Jan 2026 14:54:33 -0500    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sun, 18 Jan 2026 14:54:33 -0500    

Click here for diff

This is pretty pro-forma for our purposes, as the only change  
is a historical correction for pre-1976 DST laws in  
Baja California.  (Upstream made this release mostly to update  
their leap-second data, which we don't use.)  But with minor  
releases coming up, we should be up-to-date.  
  
Backpatch-through: 14  

M src/timezone/data/tzdata.zi

commit   : 05ef2371a3f06859fc4110ce58d120c468717b41    
  
author   : Michael Paquier <michael@paquier.xyz>    
date     : Sun, 18 Jan 2026 17:25:00 +0900    
  
committer: Michael Paquier <michael@paquier.xyz>    
date     : Sun, 18 Jan 2026 17:25:00 +0900    

Click here for diff

The WAL information included in a backup manifest is cross-checked  
against the contents of the timeline history file of the end timeline.  
When a check based on the end timeline failed, the error message  
reported the value of the start timeline.  Fix the message to show the  
correct timeline number.  
  
This error report would be confusing for users if seen, because it  
would provide incorrect information, so backpatch all the way down.  
  
Oversight in 0d8c9c1210c4.  
  
Author: Man Zeng <zengman@halodbtech.com>  
Discussion: https://postgr.es/m/tencent_0F2949C4594556F672CF4658@qq.com  
Backpatch-through: 14  

M src/backend/backup/backup_manifest.c

Fix crash in test function on removable_cutoff(NULL)

commit   : 6ad01b1152643763e485425f84d03a6777d8ba15    
  
author   : Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Fri, 16 Jan 2026 14:42:22 +0200    
  
committer: Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Fri, 16 Jan 2026 14:42:22 +0200    

Click here for diff

The function is part of the injection_points test module and only used  
in tests. None of the current tests call it with a NULL argument, but  
it is supposed to work.  
  
Backpatch-through: 17  

M src/test/modules/injection_points/regress_injection.c

Fix segfault from releasing locks in detached DSM segments

commit   : 4071fe900e67a801f16ee9b4843e5fbe82584a44    
  
author   : Amit Langote <amitlan@postgresql.org>    
date     : Fri, 16 Jan 2026 13:01:52 +0900    
  
committer: Amit Langote <amitlan@postgresql.org>    
date     : Fri, 16 Jan 2026 13:01:52 +0900    

Click here for diff

If a FATAL error occurs while holding a lock in a DSM segment (such  
as a dshash lock) and the process is not in a transaction, a  
segmentation fault can occur during process exit.  
  
The problem sequence is:  
  
 1. Process acquires a lock in a DSM segment (e.g., via dshash)  
 2. FATAL error occurs outside transaction context  
 3. proc_exit() begins, calling before_shmem_exit callbacks  
 4. dsm_backend_shutdown() detaches all DSM segments  
 5. Later, on_shmem_exit callbacks run  
 6. ProcKill() calls LWLockReleaseAll()  
 7. Segfault: the lock being released is in unmapped memory  
  
This only manifests outside transaction contexts because  
AbortTransaction() calls LWLockReleaseAll() during transaction  
abort, releasing locks before DSM cleanup. Background workers and  
other non-transactional code paths are vulnerable.  
  
Fix by calling LWLockReleaseAll() unconditionally at the start of  
shmem_exit(), before any callbacks run. Releasing locks before  
callbacks prevents the segfault - locks must be released before  
dsm_backend_shutdown() detaches their memory. This is safe because  
after an error, held locks are protecting potentially inconsistent  
data anyway, and callbacks can acquire fresh locks if needed.  
  
Also add a comment noting that LWLockReleaseAll() must be safe to  
call before LWLock initialization (which it is, since  
num_held_lwlocks will be 0), plus an Assert for the post-condition.  
  
This fix aligns with the original design intent from commit  
001a573a2, which noted that backends must clean up shared memory  
state (including releasing lwlocks) before unmapping dynamic shared  
memory segments.  
  
Reported-by: Rahila Syed <rahilasyed90@gmail.com>  
Author: Rahila Syed <rahilasyed90@gmail.com>  
Reviewed-by: Amit Langote <amitlangote09@gmail.com>  
Reviewed-by: Dilip Kumar <dilipbalaut@gmail.com>  
Reviewed-by: Andres Freund <andres@anarazel.de>  
Reviewed-by: Álvaro Herrera <alvherre@kurilemu.de>  
Discussion: https://postgr.es/m/CAH2L28uSvyiosL+kaic9249jRVoQiQF6JOnaCitKFq=xiFzX3g@mail.gmail.com  
Backpatch-through: 14  

M src/backend/storage/ipc/ipc.c
M src/backend/storage/lmgr/lwlock.c
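The ordering constraint can be shown with a toy model (this is a simulation, not PostgreSQL's actual code; the globals only model the sequence of events): shmem_exit() must drop all lwlocks before running shutdown callbacks, because one of those callbacks unmaps the DSM segments that may contain the held locks.

```c
#include <assert.h>

static int  num_held_lwlocks = 2;   /* pretend two locks are held */
static int  dsm_mapped = 1;         /* DSM segments still attached */

static void
lwlock_release_all(void)
{
    assert(dsm_mapped);             /* lock memory must still exist */
    num_held_lwlocks = 0;           /* safe pre-init too: count is 0 */
}

static void
dsm_backend_shutdown(void)          /* a before_shmem_exit callback */
{
    dsm_mapped = 0;                 /* DSM-resident locks now unmapped */
}

static void
shmem_exit(void)
{
    lwlock_release_all();           /* the fix: release locks first */
    dsm_backend_shutdown();         /* then callbacks run safely */
}
```

Reversing the two calls in shmem_exit() would trip the assertion, which models the segfault of releasing a lock that lives in unmapped memory.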

Fix 'unexpected data beyond EOF' on replica restart

commit   : c3770181c83feeae546e3fb3d19f99f325fca8f4    
  
author   : Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Thu, 15 Jan 2026 20:57:12 +0200    
  
committer: Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Thu, 15 Jan 2026 20:57:12 +0200    

Click here for diff

On restart, a replica can fail with an error like 'unexpected data  
beyond EOF in block 200 of relation T/D/R'. These are the steps to  
reproduce it:  
  
- A relation has a size of 400 blocks.  
  - Blocks 201 to 400 are empty.  
  - Block 200 has two rows.  
  - Blocks 100 to 199 are empty.  
- A restartpoint is done  
- Vacuum truncates the relation to 200 blocks  
- A FPW deletes a row in block 200  
- A checkpoint is done  
- A FPW deletes the last row in block 200  
- Vacuum truncates the relation to 100 blocks  
- The replica restarts  
  
When the replica restarts:  
  
- The relation on disk starts at 100 blocks, because all the  
  truncations were applied before restart.  
- The first truncate to 200 blocks is replayed. It silently fails, but  
  it will still (incorrectly!) update the cache size to 200 blocks  
- The first FPW on block 200 is applied. XLogReadBufferForRedo relies  
  on the cached size and incorrectly assumes that the page already  
  exists in the file, and thus won't extend the relation.  
- The online checkpoint record is replayed, calling smgrdestroyall  
  which causes the cached size to be discarded  
- The second FPW on block 200 is applied. This time, the detected size  
  is 100 blocks, an extend is attempted. However, the block 200 is  
  already present in the buffer cache due to the first FPW. This  
  triggers the 'unexpected data beyond EOF'.  
  
To fix, update the cached size in SmgrRelation with the current size  
rather than the requested new size, when the requested new size is  
greater.  
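
The gist of the fix can be sketched in isolation. This is an illustrative model, not the actual smgr code; the function name and types are hypothetical stand-ins:

```c
#include <assert.h>

typedef unsigned int BlockNumber;

/*
 * Sketch of the fix: when a replayed truncation requests a size larger
 * than what is actually on disk, the truncation is a no-op, so the
 * cached relation size must be set to the current on-disk size, not the
 * requested one.  Caching the requested size is what made redo later
 * skip extending the relation for a full-page write beyond EOF.
 */
static BlockNumber
cached_size_after_truncate(BlockNumber current_nblocks,
                           BlockNumber requested_nblocks)
{
    if (requested_nblocks > current_nblocks)
        return current_nblocks;     /* truncation cannot grow the file */
    return requested_nblocks;       /* normal truncation */
}
```

In the scenario above, replaying the truncate-to-200 against a 100-block file would now cache 100, so the subsequent FPW correctly extends the relation.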
  
Author: Anthonin Bonnefoy <anthonin.bonnefoy@datadoghq.com>  
Discussion: https://www.postgresql.org/message-id/CAO6_Xqrv-snNJNhbj1KjQmWiWHX3nYGDgAc=vxaZP3qc4g1Siw@mail.gmail.com  
Backpatch-through: 14  

M src/backend/storage/smgr/md.c
M src/backend/storage/smgr/smgr.c

Add check for invalid offset at multixid truncation

commit   : d3ad4cef6ea90cc0be02ccee8fa8b3bd961c9c33    
  
author   : Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Thu, 15 Jan 2026 16:48:45 +0200    
  
committer: Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Thu, 15 Jan 2026 16:48:45 +0200    

Click here for diff

If a multixid with zero offset is left behind after a crash, and that  
multixid later becomes the oldest multixid, truncation might try to  
look up its offset and read the zero value. In the worst case, we  
might incorrectly use the zero offset to truncate valid SLRU segments  
that are still needed. I'm not sure if that can happen in practice, or  
if there are some other lower-level safeguards or incidental reasons  
that prevent the caller from passing an unwritten multixid as the  
oldest multi. But better safe than sorry, so let's add an explicit  
check for it.  
  
In stable branches, we should perhaps do the same check for  
'oldestOffset', i.e. the offset of the old oldest multixid (in master,  
'oldestOffset' is gone). But if the old oldest multixid has an invalid  
offset, the damage has been done already, and we would never advance  
past that point. It's not clear what we should do in that case. The  
check that this commit adds will prevent such a multixid with an  
invalid offset from becoming the oldest multixid in the first place, which  
seems enough for now.  
  
Reviewed-by: Andrey Borodin <x4mmm@yandex-team.ru>  
Discussion: https://www.postgresql.org/message-id/000301b2-5b81-4938-bdac-90f6eb660843@iki.fi  
Backpatch-through: 14  

M src/backend/access/transam/multixact.c

pg_waldump: Relax LSN comparison check in TAP test

commit   : a244e17077301ec3c32d0eaf6f8d72b0132791fb    
  
author   : Michael Paquier <michael@paquier.xyz>    
date     : Wed, 14 Jan 2026 16:02:36 +0900    
  
committer: Michael Paquier <michael@paquier.xyz>    
date     : Wed, 14 Jan 2026 16:02:36 +0900    

Click here for diff

The test 002_save_fullpage.pl, which checks --save-fullpage, fails with  
wal_consistency_checking enabled, because the block saved in the file  
has the same LSN as the LSN used in the file name.  The test required  
the block LSN to be strictly lower than the file LSN.  This commit  
relaxes the check a bit, by allowing the LSNs to match.  
  
While at it, the test name is reworded to include some information about  
the file and block LSNs, which is useful for debugging.  
  
Author: Andrey Borodin <x4mmm@yandex-team.ru>  
Discussion: https://postgr.es/m/4226AED7-E38F-419B-AAED-9BC853FB55DE@yandex-team.ru  
Backpatch-through: 16  

M src/bin/pg_waldump/t/002_save_fullpage.pl

doc: Document DEFAULT option in file_fdw.

commit   : 930a0508a08f410f11db1e7696efa290b98f4199    
  
author   : Fujii Masao <fujii@postgresql.org>    
date     : Tue, 13 Jan 2026 22:54:45 +0900    
  
committer: Fujii Masao <fujii@postgresql.org>    
date     : Tue, 13 Jan 2026 22:54:45 +0900    

Click here for diff

Commit 9f8377f7a introduced the DEFAULT option for file_fdw but did not  
update the documentation. This commit adds the missing description of  
the DEFAULT option to the file_fdw documentation.  
  
Backpatch to v16, where the DEFAULT option was introduced.  
  
Author: Shinya Kato <shinya11.kato@gmail.com>  
Reviewed-by: Fujii Masao <masao.fujii@gmail.com>  
Discussion: https://postgr.es/m/CAOzEurT_PE7QEh5xAdb7Cja84Rur5qPv2Fzt3Tuqi=NU0WJsbg@mail.gmail.com  
Backpatch-through: 16  

M doc/src/sgml/file-fdw.sgml

doc: Improve description of publish_via_partition_root

commit   : 862f83f3aa402f485877f50d5049decc14959283    
  
author   : Jacob Champion <jchampion@postgresql.org>    
date     : Fri, 9 Jan 2026 10:02:49 -0800    
  
committer: Jacob Champion <jchampion@postgresql.org>    
date     : Fri, 9 Jan 2026 10:02:49 -0800    

Click here for diff

Reword publish_via_partition_root's opening paragraph. Describe its  
behavior more clearly, and directly state that its default is false.  
  
Per complaint by Peter Smith; final text of the patch made in  
collaboration with Chao Li.  
  
Author: Chao Li <li.evan.chao@gmail.com>  
Author: Peter Smith <peter.b.smith@fujitsu.com>  
Reported-by: Peter Smith <peter.b.smith@fujitsu.com>  
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>  
Discussion: https://postgr.es/m/CAHut%2BPu7SpK%2BctOYoqYR3V4w5LKc9sCs6c_qotk9uTQJQ4zp6g%40mail.gmail.com  
Backpatch-through: 14  

M doc/src/sgml/ref/create_publication.sgml

Fix possible incorrect column reference in ERROR message

commit   : 84b787ae66e6736685d3275b0c7c9becd2a82508    
  
author   : David Rowley <drowley@postgresql.org>    
date     : Fri, 9 Jan 2026 11:03:24 +1300    
  
committer: David Rowley <drowley@postgresql.org>    
date     : Fri, 9 Jan 2026 11:03:24 +1300    

Click here for diff

When creating a partition for a RANGE partitioned table, the reporting  
of errors relating to converting the specified range values into  
constant values for the partition key's type could display the name of a  
previous partition key column when an earlier range was specified as  
MINVALUE or MAXVALUE.  
  
This was caused by the code not correctly incrementing the index that  
tracks which partition key the foreach loop was working on after  
processing MINVALUE/MAXVALUE ranges.  
  
Fix by using foreach_current_index() to ensure the index variable is  
always set to the List element being worked on.  
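
The bug class is easy to reproduce outside PostgreSQL. Here is a minimal sketch (illustrative names, not the parse_utilcmd.c code): a manually maintained index desynchronizes after a 'continue', while deriving the index from the loop position, as foreach_current_index() does, stays correct:

```c
#include <assert.h>

/*
 * Buggy pattern (sketch): the manual index update at the bottom of the
 * loop is skipped whenever an iteration hits 'continue', so from then
 * on the index names the wrong element.  Skipping negative values here
 * stands in for skipping MINVALUE/MAXVALUE bounds.
 */
static int
index_of_value_buggy(const int *vals, int n, int target)
{
    int idx = 0;
    for (int i = 0; i < n; i++)
    {
        if (vals[i] < 0)
            continue;           /* oops: idx++ below is skipped */
        if (vals[i] == target)
            return idx;
        idx++;
    }
    return -1;
}

/*
 * Fixed pattern: derive the index from the loop position itself, the
 * way foreach_current_index() derives it from the current List cell.
 */
static int
index_of_value_fixed(const int *vals, int n, int target)
{
    for (int i = 0; i < n; i++)
    {
        if (vals[i] < 0)
            continue;
        if (vals[i] == target)
            return i;
    }
    return -1;
}
```

With input {7, -1, 9}, the buggy version reports index 1 for the value 9, while the fixed version correctly reports index 2.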
  
Author: myzhen <zhenmingyang@yeah.net>  
Reviewed-by: zhibin wang <killerwzb@gmail.com>  
Discussion: https://postgr.es/m/273cab52.978.19b96fc75e7.Coremail.zhenmingyang@yeah.net  
Backpatch-through: 14  

M src/backend/parser/parse_utilcmd.c

Prevent invalidation of newly created replication slots.

commit   : 3510ebeb0dfad31e2fd77eb88c6ebabfc773c8e1    
  
author   : Amit Kapila <akapila@postgresql.org>    
date     : Thu, 8 Jan 2026 07:17:56 +0000    
  
committer: Amit Kapila <akapila@postgresql.org>    
date     : Thu, 8 Jan 2026 07:17:56 +0000    

Click here for diff

A race condition could cause a newly created replication slot to become  
invalidated between WAL reservation and a checkpoint.  
  
Previously, if the required WAL was removed, we retried the reservation  
process. However, the slot could still be invalidated before the retry if  
the WAL was not yet removed but the checkpoint advanced the redo pointer  
beyond the slot's intended restart LSN and computed the minimum LSN that  
needs to be preserved for the slots.  
  
The fix is to acquire an exclusive lock on ReplicationSlotAllocationLock  
during WAL reservation, and a shared lock during the minimum LSN  
calculation at checkpoints to serialize the process. This ensures that, if  
WAL reservation occurs first, the checkpoint waits until restart_lsn is  
updated before calculating the minimum LSN. If the checkpoint runs first,  
subsequent WAL reservations pick a position at or after the latest  
checkpoint's redo pointer.  
  
We used a similar fix in HEAD (via commit 006dd4b2e5) and 18. The  
difference is that in 17 and prior branches we need to additionally handle  
the race condition with slot's minimum LSN computation during checkpoints.  
  
Reported-by: suyu.cmj <mengjuan.cmj@alibaba-inc.com>  
Author: Hou Zhijie <houzj.fnst@fujitsu.com>  
Author: vignesh C <vignesh21@gmail.com>  
Reviewed-by: Hayato Kuroda <kuroda.hayato@fujitsu.com>  
Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com>  
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>  
Backpatch-through: 14  
Discussion: https://postgr.es/m/5e045179-236f-4f8f-84f1-0f2566ba784c.mengjuan.cmj@alibaba-inc.com  

M src/backend/access/transam/xlog.c
M src/backend/replication/slot.c

Fix typo

commit   : 608f9572a7f013c6b071fba993970c218ddf381b    
  
author   : Peter Eisentraut <peter@eisentraut.org>    
date     : Wed, 7 Jan 2026 15:47:02 +0100    
  
committer: Peter Eisentraut <peter@eisentraut.org>    
date     : Wed, 7 Jan 2026 15:47:02 +0100    

Click here for diff

Reported-by: Xueyu Gao <gaoxueyu_hope@163.com>  
Discussion: https://www.postgresql.org/message-id/42b5c99a.856d.19b73d858e2.Coremail.gaoxueyu_hope%40163.com  

M .cirrus.tasks.yml

createuser: Update docs to reflect defaults

commit   : 8dd074abd0c31b99ca79c6408d1a549cf655eb93    
  
author   : John Naylor <john.naylor@postgresql.org>    
date     : Wed, 7 Jan 2026 16:02:19 +0700    
  
committer: John Naylor <john.naylor@postgresql.org>    
date     : Wed, 7 Jan 2026 16:02:19 +0700    

Click here for diff

Commit c7eab0e97 changed the default password_encryption setting to  
'scram-sha-256', so update the example for creating a user with an  
assigned password.  
  
In addition, commit 08951a7c9 added new options that in turn pass  
default tokens NOBYPASSRLS and NOREPLICATION to the CREATE ROLE  
command, so fix this omission as well for v16 and later.  
  
Reported-by: Heikki Linnakangas <hlinnaka@iki.fi>  
Discussion: https://postgr.es/m/cff1ea60-c67d-4320-9e33-094637c2c4fb%40iki.fi  
Backpatch-through: 14  

M doc/src/sgml/ref/createuser.sgml

Fix issue with EVENT TRIGGERS and ALTER PUBLICATION

commit   : bb08ac7ac38763076e5507559d9a8b0559572258    
  
author   : David Rowley <drowley@postgresql.org>    
date     : Tue, 6 Jan 2026 17:30:08 +1300    
  
committer: David Rowley <drowley@postgresql.org>    
date     : Tue, 6 Jan 2026 17:30:08 +1300    

Click here for diff

When processing the "publish" options of an ALTER PUBLICATION command,  
we call SplitIdentifierString() to split the options into a List of  
strings.  Since SplitIdentifierString() modifies the delimiter  
character and puts NULs in their place, this would overwrite the memory  
of the AlterPublicationStmt.  Later in AlterPublicationOptions(), the  
modified AlterPublicationStmt is copied for event triggers, which would  
result in the event trigger only seeing the first "publish" option  
rather than all options that were specified in the command.  
  
To fix this, make a copy of the string before passing to  
SplitIdentifierString().  
  
Here we also adjust a similar case in the pgoutput plugin.  There are  
no known issues caused by SplitIdentifierString() here, so this is  
being done out of paranoia.  
  
Thanks to Henson Choi for putting together an example case showing the  
ALTER PUBLICATION issue.  
  
Author: sunil s <sunilfeb26@gmail.com>  
Reviewed-by: Henson Choi <assam258@gmail.com>  
Reviewed-by: zengman <zengman@halodbtech.com>  
Backpatch-through: 14  

M src/backend/commands/publicationcmds.c
M src/backend/replication/pgoutput/pgoutput.c

Add TAP test for GUC settings passed via CONNECTION in logical replication.

commit   : 358784645eb864a805a859405d50386c9a3805b8    
  
author   : Fujii Masao <fujii@postgresql.org>    
date     : Tue, 6 Jan 2026 11:57:12 +0900    
  
committer: Fujii Masao <fujii@postgresql.org>    
date     : Tue, 6 Jan 2026 11:57:12 +0900    

Click here for diff

Commit d926462d819 restored the behavior of passing GUC settings from  
the CONNECTION string to the publisher's walsender, allowing per-connection  
configuration.  
  
This commit adds a TAP test to verify that behavior works correctly.  
  
Since commit d926462d819 was recently applied and backpatched to v15,  
this follow-up commit is also backpatched accordingly.  
  
Author: Fujii Masao <masao.fujii@gmail.com>  
Reviewed-by: Chao Li <lic@highgo.com>  
Reviewed-by: Kirill Reshke <reshkekirill@gmail.com>  
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>  
Reviewed-by: Japin Li <japinli@hotmail.com>  
Discussion: https://postgr.es/m/CAHGQGwGYV+-abbKwdrM2UHUe-JYOFWmsrs6=QicyJO-j+-Widw@mail.gmail.com  
Backpatch-through: 15  

M src/test/subscription/t/001_rep_changes.pl

Honor GUC settings specified in CREATE SUBSCRIPTION CONNECTION.

commit   : 7a990e801a72fa438365532b8ebdfd88f250a6bf    
  
author   : Fujii Masao <fujii@postgresql.org>    
date     : Tue, 6 Jan 2026 11:52:22 +0900    
  
committer: Fujii Masao <fujii@postgresql.org>    
date     : Tue, 6 Jan 2026 11:52:22 +0900    

Click here for diff

Prior to v15, GUC settings supplied in the CONNECTION clause of  
CREATE SUBSCRIPTION were correctly passed through to  
the publisher's walsender. For example:  
  
        CREATE SUBSCRIPTION mysub  
            CONNECTION 'options=''-c wal_sender_timeout=1000'''  
            PUBLICATION ...  
  
would cause wal_sender_timeout to take effect on the publisher's walsender.  
  
However, commit f3d4019da5d changed the way logical replication  
connections are established, forcing the publisher's relevant  
GUC settings (datestyle, intervalstyle, extra_float_digits) to  
override those provided in the CONNECTION string. As a result,  
from v15 through v18, GUC settings in the CONNECTION string were  
always ignored.  
  
This regression prevented per-connection tuning of logical replication,  
for example, using a shorter timeout for a walsender connecting  
to a nearby subscriber and a longer one for a walsender connecting  
to a remote subscriber.  
  
This commit restores the intended behavior by ensuring that  
GUC settings in the CONNECTION string are again passed through  
and applied by the walsender, allowing per-connection configuration.  
  
Backpatch to v15, where the regression was introduced.  
  
Author: Fujii Masao <masao.fujii@gmail.com>  
Reviewed-by: Chao Li <lic@highgo.com>  
Reviewed-by: Kirill Reshke <reshkekirill@gmail.com>  
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>  
Reviewed-by: Japin Li <japinli@hotmail.com>  
Discussion: https://postgr.es/m/CAHGQGwGYV+-abbKwdrM2UHUe-JYOFWmsrs6=QicyJO-j+-Widw@mail.gmail.com  
Backpatch-through: 15  

M src/backend/replication/libpqwalreceiver/libpqwalreceiver.c

Doc: add missing punctuation

commit   : 498b163a161e2a545a327ae5604682d5e00f3dfa    
  
author   : David Rowley <drowley@postgresql.org>    
date     : Sun, 4 Jan 2026 21:13:40 +1300    
  
committer: David Rowley <drowley@postgresql.org>    
date     : Sun, 4 Jan 2026 21:13:40 +1300    

Click here for diff

Author: Daisuke Higuchi <higuchi.daisuke11@gmail.com>  
Reviewed-by: Robert Treat <rob@xzilla.net>  
Discussion: https://postgr.es/m/CAEVT6c-yWYstu76YZ7VOxmij2XA8vrOEvens08QLmKHTDjEPBw@mail.gmail.com  
Backpatch-through: 14  

M doc/src/sgml/history.sgml

Fix selectivity estimation integer overflow in contrib/intarray

commit   : a5f2dc421f7f8ed9588cf0a32566c71b9d8f52c6    
  
author   : David Rowley <drowley@postgresql.org>    
date     : Sun, 4 Jan 2026 20:33:39 +1300    
  
committer: David Rowley <drowley@postgresql.org>    
date     : Sun, 4 Jan 2026 20:33:39 +1300    

Click here for diff

This fixes a poorly written integer comparison function which was  
performing subtraction in an attempt to return a negative value when  
a < b and a positive value when a > b, and 0 when the values were equal.  
Unfortunately that didn't always work correctly because, in two's  
complement, INT_MIN is one further from zero than INT_MAX.  This could result  
in an overflow and cause the comparison function to return an incorrect  
result, which would result in the binary search failing to find the  
value being searched for.  
  
This could cause poor selectivity estimates when the statistics stored  
the value of INT_MAX (2147483647) and the value being searched for was  
large enough to result in the binary search doing a comparison with that  
INT_MAX value.  
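
A minimal illustration of the comparator fix (generic code, not the actual _int_selfuncs.c function):

```c
#include <assert.h>
#include <limits.h>

/*
 * The buggy pattern was "return a - b;", which overflows (undefined
 * behavior in C) when the operands straddle the int range, e.g.
 * comparing a large positive value against a very negative one.  The
 * idiom below never overflows and still yields a negative, zero, or
 * positive result, which is all a binary search needs.
 */
static int
int_cmp_safe(int a, int b)
{
    return (a > b) - (a < b);
}
```

With this form, comparing INT_MAX against any negative value is always positive, so a binary search over statistics containing INT_MAX lands on the right element.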
  
Author: Chao Li <li.evan.chao@gmail.com>  
Reviewed-by: David Rowley <dgrowleyml@gmail.com>  
Discussion: https://postgr.es/m/CAEoWx2ng1Ot5LoKbVU-Dh---dFTUZWJRH8wv2chBu29fnNDMaQ@mail.gmail.com  
Backpatch-through: 14  

M contrib/intarray/_int_selfuncs.c

Update copyright for 2026

commit   : 625e4495bf58733f1dac24e363b12e6f95d7e92d    
  
author   : Bruce Momjian <bruce@momjian.us>    
date     : Thu, 1 Jan 2026 13:24:10 -0500    
  
committer: Bruce Momjian <bruce@momjian.us>    
date     : Thu, 1 Jan 2026 13:24:10 -0500    

Click here for diff

Backpatch-through: 14  

M COPYRIGHT
M doc/src/sgml/legal.sgml

jit: Fix jit_profiling_support when unavailable.

commit   : 174bbc06775cbe571184338d530f93707615f953    
  
author   : Thomas Munro <tmunro@postgresql.org>    
date     : Wed, 31 Dec 2025 13:24:17 +1300    
  
committer: Thomas Munro <tmunro@postgresql.org>    
date     : Wed, 31 Dec 2025 13:24:17 +1300    

Click here for diff

jit_profiling_support=true captures profile data for Linux perf.  On  
other platforms, LLVMCreatePerfJITEventListener() returns NULL and the  
attempt to register the listener would crash.  
  
Fix by ignoring the setting in that case.  The documentation already  
says that it only has an effect if perf support is present, and we  
already did the same for older LLVM versions that lacked support.  
  
No field reports, unsurprisingly for an obscure developer-oriented  
setting.  Noticed in passing while working on commit 1a28b4b4.  
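
The defensive pattern is simply a NULL check before registration, sketched generically here; create_listener() is a hypothetical stand-in for LLVMCreatePerfJITEventListener():

```c
#include <assert.h>
#include <stddef.h>

typedef struct Listener { int id; } Listener;

/*
 * Stand-in for LLVMCreatePerfJITEventListener(): returns NULL on
 * platforms without perf support.
 */
static Listener *
create_listener(int perf_supported)
{
    static Listener l = {1};

    return perf_supported ? &l : NULL;
}

/*
 * Sketch of the fix: when the listener cannot be created, ignore the
 * jit_profiling_support setting instead of dereferencing NULL.
 */
static int
register_profiler(int perf_supported)
{
    Listener *lis = create_listener(perf_supported);

    if (lis == NULL)
        return 0;       /* setting silently has no effect */
    /* ...would register the listener with the JIT engine here... */
    return 1;
}
```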
  
Backpatch-through: 14  
Reviewed-by: Matheus Alcantara <matheusssilv97@gmail.com>  
Reviewed-by: Andres Freund <andres@anarazel.de>  
Discussion: https://postgr.es/m/CA%2BhUKGJgB6gvrdDohgwLfCwzVQm%3DVMtb9m0vzQn%3DCwWn-kwG9w%40mail.gmail.com  

M src/backend/jit/llvm/llvmjit.c

Fix a race condition in updating procArray->replication_slot_xmin.

commit   : 123b851abdaced8da7edcbc4c1bca6e29a7a8ff0    
  
author   : Masahiko Sawada <msawada@postgresql.org>    
date     : Tue, 30 Dec 2025 10:56:25 -0800    
  
committer: Masahiko Sawada <msawada@postgresql.org>    
date     : Tue, 30 Dec 2025 10:56:25 -0800    

Click here for diff

Previously, ReplicationSlotsComputeRequiredXmin() computed the oldest  
xmin across all slots without holding ProcArrayLock (when  
already_locked is false), acquiring the lock just before updating the  
replication slot xmin.  
  
This could lead to a race condition: if a backend created a new slot  
and updates the global replication slot xmin, another backend  
concurrently running ReplicationSlotsComputeRequiredXmin() could  
overwrite that update with an invalid or stale value. This happens  
because the concurrent backend might have computed the aggregate xmin  
before the new slot was accounted for, but applied the update after  
the new slot had already updated the global value.  
  
In the reported failure, a walsender for an apply worker computed  
InvalidTransactionId as the oldest xmin and overwrote a valid  
replication slot xmin value computed by a walsender for a tablesync  
worker. Consequently, the tablesync worker computed a transaction ID  
via GetOldestSafeDecodingTransactionId() effectively without  
considering the replication slot xmin. This led to the error "cannot  
build an initial slot snapshot as oldest safe xid %u follows  
snapshot's xmin %u", which was an assertion failure prior to commit  
240e0dbacd3.  
  
To fix this, we acquire ReplicationSlotControlLock in exclusive mode  
during slot creation to perform the initial update of the slot  
xmin. In ReplicationSlotsComputeRequiredXmin(), we hold  
ReplicationSlotControlLock in shared mode until the global slot xmin  
is updated in ProcArraySetReplicationSlotXmin(). This prevents  
concurrent computations and updates of the global xmin by other  
backends during the initial slot xmin update process, while still  
permitting concurrent calls to ReplicationSlotsComputeRequiredXmin().  
  
Backpatch to all supported versions.  
  
Author: Zhijie Hou <houzj.fnst@fujitsu.com>  
Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com>  
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>  
Reviewed-by: Pradeep Kumar <spradeepkumar29@gmail.com>  
Reviewed-by: Hayato Kuroda (Fujitsu) <kuroda.hayato@fujitsu.com>  
Reviewed-by: Robert Haas <robertmhaas@gmail.com>  
Reviewed-by: Andres Freund <andres@anarazel.de>  
Reviewed-by: Chao Li <li.evan.chao@gmail.com>  
Discussion: https://postgr.es/m/CAA4eK1L8wYcyTPxNzPGkhuO52WBGoOZbT0A73Le=ZUWYAYmdfw@mail.gmail.com  
Backpatch-through: 14  

M src/backend/replication/logical/logical.c
M src/backend/replication/logical/slotsync.c
M src/backend/replication/slot.c

jit: Remove -Wno-deprecated-declarations in 18+.

commit   : a989237aea8f4b7a5d6236e7578ad9c8979cacf6    
  
author   : Thomas Munro <tmunro@postgresql.org>    
date     : Tue, 30 Dec 2025 14:11:37 +1300    
  
committer: Thomas Munro <tmunro@postgresql.org>    
date     : Tue, 30 Dec 2025 14:11:37 +1300    

Click here for diff

REL_18_STABLE and master have commit ee485912, so they always use the  
newer LLVM opaque pointer functions.  Drop -Wno-deprecated-declarations  
(commit a56e7b660) for code under jit/llvm in those branches, to catch  
any new deprecation warnings that arrive in future versions of LLVM.  
  
Older branches continue to use functions marked deprecated in LLVM 14  
and 15 (i.e. they switched to the newer functions only for LLVM 16+), as a  
precaution against unforeseen compatibility problems with bitcode  
already shipped.  In those branches, the comment about warning  
suppression is updated to explain that situation better.  In theory we  
could suppress warnings only for LLVM 14 and 15 specifically, but that  
isn't done here.  
  
Backpatch-through: 14  
Reported-by: Tom Lane <tgl@sss.pgh.pa.us>  
Discussion: https://postgr.es/m/1407185.1766682319%40sss.pgh.pa.us  

M src/backend/jit/llvm/Makefile

Fix Mkvcbuild.pm builds of test_cloexec.c.

commit   : b3c8119e28c0ba4ef9921c67ea0cd42341f355e3    
  
author   : Thomas Munro <tmunro@postgresql.org>    
date     : Mon, 29 Dec 2025 15:22:16 +1300    
  
committer: Thomas Munro <tmunro@postgresql.org>    
date     : Mon, 29 Dec 2025 15:22:16 +1300    

Click here for diff

Mkvcbuild.pm scrapes Makefile contents, but couldn't understand the  
change made by commit bec2a0aa.  Revealed by BF animal hamerkop in  
branch REL_16_STABLE.  
  
1.  It used += instead of =, which didn't match the pattern that  
Mkvcbuild.pm looks for.  Drop the +.  
  
2.  Mkvcbuild.pm doesn't link PROGRAM executables with libpgport.  Apply  
a local workaround to REL_16_STABLE only (later branches dropped  
Mkvcbuild.pm).  
  
Backpatch-through: 16  
Reported-by: Tom Lane <tgl@sss.pgh.pa.us>  
Discussion: https://postgr.es/m/175163.1766357334%40sss.pgh.pa.us  

M src/test/modules/test_cloexec/Makefile

Fix pg_stat_get_backend_activity() to use multi-byte truncated result

commit   : 52b27f5859f6bf747fbb8275302fac962b645312    
  
author   : Michael Paquier <michael@paquier.xyz>    
date     : Sat, 27 Dec 2025 17:23:53 +0900    
  
committer: Michael Paquier <michael@paquier.xyz>    
date     : Sat, 27 Dec 2025 17:23:53 +0900    

Click here for diff

pg_stat_get_backend_activity() calls pgstat_clip_activity() to ensure  
that the reported query string is correctly truncated when it finishes  
with an incomplete multi-byte sequence.  However, the result returned by  
the function was not what pgstat_clip_activity() generated, but the  
non-truncated, original, contents from PgBackendStatus.st_activity_raw.  
  
Oversight in 54b6cd589ac2, so backpatch all the way down.  
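
The bug class (compute a truncated copy, then return the untruncated original) can be sketched as follows; clip_activity() is an illustrative stand-in for pgstat_clip_activity(), which additionally backs off over incomplete multi-byte sequences:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/*
 * Stand-in for pgstat_clip_activity(): returns a freshly allocated,
 * length-limited copy of the raw activity string.  The fixed code must
 * return this result; the bug was returning the raw input instead,
 * discarding the truncation work.
 */
static char *
clip_activity(const char *raw, size_t limit)
{
    size_t len = strlen(raw);
    char *clipped;

    if (len > limit)
        len = limit;    /* real code also avoids splitting a sequence */
    clipped = malloc(len + 1);
    memcpy(clipped, raw, len);
    clipped[len] = '\0';
    return clipped;
}
```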
  
Author: Chao Li <li.evan.chao@gmail.com>  
Discussion: https://postgr.es/m/CAEoWx2mDzwc48q2EK9tSXS6iJMJ35wvxNQnHX+rXjy5VgLvJQw@mail.gmail.com  
Backpatch-through: 14  

M src/backend/utils/adt/pgstatfuncs.c

doc: warn about the use of "ctid" queries beyond the examples

commit   : 02dc089bf1560c391fc295dbfaaf0d4a13d178b7    
  
author   : Bruce Momjian <bruce@momjian.us>    
date     : Fri, 26 Dec 2025 17:34:17 -0500    
  
committer: Bruce Momjian <bruce@momjian.us>    
date     : Fri, 26 Dec 2025 17:34:17 -0500    

Click here for diff

Also be more assertive that "ctid" should not be used for long-term  
storage.  
  
Reported-by: Bernice Southey  
  
Discussion: https://postgr.es/m/CAEDh4nyn5swFYuSfcnGAbpQrKOc47Hh_ZyKVSPYJcu2P=51Luw@mail.gmail.com  
  
Backpatch-through: 17  

M doc/src/sgml/ddl.sgml
M doc/src/sgml/ref/delete.sgml
M doc/src/sgml/ref/update.sgml

doc: Remove duplicate word in ECPG description

commit   : 862ed6e4983af0dc5973c583dc1d5ac0c77bb863    
  
author   : Michael Paquier <michael@paquier.xyz>    
date     : Fri, 26 Dec 2025 15:26:04 +0900    
  
committer: Michael Paquier <michael@paquier.xyz>    
date     : Fri, 26 Dec 2025 15:26:04 +0900    

Click here for diff

Author: Laurenz Albe <laurenz.albe@cybertec.at>  
Reviewed-by: vignesh C <vignesh21@gmail.com>  
Discussion: https://postgr.es/m/d6d6a800f8b503cd78d5f4fa721198e40eec1677.camel@cybertec.at  
Backpatch-through: 14  

M doc/src/sgml/ecpg.sgml

Update comments to reflect changes in 8e0d32a4a1.

commit   : a1cdb81201109d26d80156babfbb2f7ba8c02669    
  
author   : Amit Kapila <akapila@postgresql.org>    
date     : Wed, 24 Dec 2025 09:55:03 +0000    
  
committer: Amit Kapila <akapila@postgresql.org>    
date     : Wed, 24 Dec 2025 09:55:03 +0000    

Click here for diff

Commit 8e0d32a4a1 fixed an issue by allowing the replication origin to be  
created while marking the table sync state as SUBREL_STATE_DATASYNC.  
Update the comment in check_old_cluster_subscription_state() to accurately  
describe this corrected behavior.  
  
Author: Amit Kapila <amit.kapila16@gmail.com>  
Reviewed-by: Michael Paquier <michael@paquier.xyz>  
Backpatch-through: 17, where the code was introduced  
Discussion: https://postgr.es/m/CAA4eK1+KaSf5nV_tWy+SDGV6MnFnKMhdt41jJjSDWm6yCyOcTw@mail.gmail.com  
Discussion: https://postgr.es/m/aUTekQTg4OYnw-Co@paquier.xyz  

M src/bin/pg_upgrade/check.c

Don't advance origin during apply failure.

commit   : 0ed8f1afb15f5913458b0363c0eb818e758f97bd    
  
author   : Amit Kapila <akapila@postgresql.org>    
date     : Wed, 24 Dec 2025 04:04:06 +0000    
  
committer: Amit Kapila <akapila@postgresql.org>    
date     : Wed, 24 Dec 2025 04:04:06 +0000    

Click here for diff

The logical replication parallel apply worker could incorrectly advance  
the origin progress during an error or failed apply. This behavior risks  
transaction loss because such transactions will not be resent by the  
server.  
  
Commit 3f28b2fcac addressed a similar issue for both the apply worker and  
the table sync worker by registering a before_shmem_exit callback to reset  
origin information. This prevents the worker from advancing the origin  
during transaction abortion on shutdown. This patch applies the same fix  
to the parallel apply worker, ensuring consistent behavior across all  
worker types.  
  
As with 3f28b2fcac, we are backpatching through version 16, since parallel  
apply mode was introduced there and the issue only occurs when changes are  
applied before the transaction end record (COMMIT or ABORT) is received.  
  
Author: Hou Zhijie <houzj.fnst@fujitsu.com>  
Reviewed-by: Chao Li <li.evan.chao@gmail.com>  
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>  
Backpatch-through: 16  
Discussion: https://postgr.es/m/TY4PR01MB169078771FB31B395AB496A6B94B4A@TY4PR01MB16907.jpnprd01.prod.outlook.com  
Discussion: https://postgr.es/m/TYAPR01MB5692FAC23BE40C69DA8ED4AFF5B92@TYAPR01MB5692.jpnprd01.prod.outlook.com  

M src/backend/replication/logical/worker.c
M src/test/subscription/t/023_twophase_stream.pl

Fix bug in following update chain when locking a heap tuple

commit   : bb87d7fef1031b03e7595c05ef83f38d1df29028    
  
author   : Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Tue, 23 Dec 2025 13:37:16 +0200    
  
committer: Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Tue, 23 Dec 2025 13:37:16 +0200    

Click here for diff

After waiting for a concurrent updater to finish, heap_lock_tuple()  
followed the update chain to lock all tuple versions. However, when  
stepping from the initial tuple to the next one, it failed to check  
that the next tuple's XMIN matches the initial tuple's XMAX. That's an  
important check whenever following an update chain, and the recursive  
part that follows the chain did it, but the initial step missed it.  
Without the check, if the updating transaction aborts, the updated  
tuple is vacuumed away and replaced by an unrelated tuple, the  
unrelated tuple might get incorrectly locked.  
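
The missing validation is the standard update-chain rule: the next version's xmin must match the prior version's xmax. A minimal sketch with simplified types (the real check compares HeapTupleHeader fields with TransactionIdEquals):

```c
#include <assert.h>

typedef unsigned int TransactionId;

typedef struct
{
    TransactionId xmin;     /* inserting transaction */
    TransactionId xmax;     /* updating/deleting transaction, or 0 */
} TupleVersion;

/*
 * Sketch of the check added to the initial chain-following step: the
 * next tuple belongs to this update chain only if it was inserted by
 * the transaction that updated the prior tuple.  If an aborted update's
 * successor was vacuumed away and the line pointer reused by an
 * unrelated tuple, the xids will not match and the tuple must not be
 * locked.
 */
static int
chain_step_is_valid(const TupleVersion *prior, const TupleVersion *next)
{
    return next->xmin == prior->xmax;
}
```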
  
Author: Jasper Smit <jasper.smit@servicenow.com>  
Discussion: https://www.postgresql.org/message-id/CAOG+RQ74x0q=kgBBQ=mezuvOeZBfSxM1qu_o0V28bwDz3dHxLw@mail.gmail.com  
Backpatch-through: 14  

M src/backend/access/heap/heapam.c
M src/test/modules/injection_points/Makefile
A src/test/modules/injection_points/expected/heap_lock_update.out
M src/test/modules/injection_points/meson.build
A src/test/modules/injection_points/specs/heap_lock_update.spec

Add missing .gitignore for src/test/modules/test_cloexec.

commit   : 4c9f262ba81d689cfac52e6f33f39271d7c76a31    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 22 Dec 2025 14:06:54 -0500    
  
committer: Michael Paquier <michael@paquier.xyz>    
date     : Mon, 22 Dec 2025 14:06:54 -0500    

Click here for diff

A src/test/modules/test_cloexec/.gitignore

Fix orphaned origin in shared memory after DROP SUBSCRIPTION

commit   : e063ccc722e4d6ebb90f3ee601cd4354720202da    
  
author   : Michael Paquier <michael@paquier.xyz>    
date     : Tue, 23 Dec 2025 14:32:21 +0900    
  
committer: Michael Paquier <michael@paquier.xyz>    
date     : Tue, 23 Dec 2025 14:32:21 +0900    

Click here for diff

Since ce0fdbfe9722, a replication slot and an origin are created by each  
tablesync worker, whose information is stored in both a catalog and  
shared memory (once the origin is set up in the latter case).  The  
transaction where the origin is created is the same as the one that runs  
the initial COPY, with the catalog state of the origin becoming visible  
for other sessions only once the COPY transaction has committed.  The  
catalog state is coupled with a state in shared memory, initialized at  
the same time as the origin created in the catalogs.  Note that the  
transaction doing the initial data sync can take a long time, time that  
depends on the amount of data to transfer from a publication node to its  
subscriber node.  
  
Now, when a DROP SUBSCRIPTION is executed, all its workers are stopped  
with the origins removed.  The removal of each origin relies on a  
catalog lookup.  A worker still running the initial COPY would fail its  
transaction, with the catalog state of the origin rolled back while the  
shared memory state remains around.  The session running the DROP  
SUBSCRIPTION should be in charge of cleaning up the catalog and the  
shared memory state, but as there is no data in the catalogs the shared  
memory state is not removed.  This issue would leave orphaned origin  
data in shared memory, leading to a confusing state as it would still  
show up in pg_replication_origin_status.  Note that this shared memory  
data is sticky, being flushed to disk in replorigin_checkpoint at  
checkpoint time.  This prevents other origins from reusing a slot position  
in the shared memory data.  
  
To address this problem, the commit moves the creation of the origin at  
the end of the transaction that precedes the one executing the initial  
COPY, making the origin immediately visible in the catalogs for other  
sessions, giving DROP SUBSCRIPTION a way to know about it.  A different  
solution would have been to clean up the shared memory state using an  
abort callback within the tablesync worker.  The solution chosen by this  
commit is more consistent with the apply worker, which creates its origin  
in a short transaction.  
  
A test is added to the subscription test 004_sync.pl, which is able to  
demonstrate the problem.  The test fails when this commit is reverted.  
  
Reported-by: Tenglong Gu <brucegu@amazon.com>  
Reported-by: Daisuke Higuchi <higudai@amazon.com>  
Analyzed-by: Michael Paquier <michael@paquier.xyz>  
Author: Hou Zhijie <houzj.fnst@fujitsu.com>  
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>  
Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com>  
Discussion: https://postgr.es/m/aUTekQTg4OYnw-Co@paquier.xyz  
Backpatch-through: 14  

M src/backend/commands/subscriptioncmds.c
M src/backend/replication/logical/tablesync.c
M src/test/subscription/t/004_sync.pl

Fix printf format string warning on MinGW.

commit   : d4549176eabd3845475643b7b950dbb03795fbdd    
  
author   : Thomas Munro <tmunro@postgresql.org>    
date     : Fri, 6 Dec 2024 12:34:33 +1300    
  
committer: Thomas Munro <tmunro@postgresql.org>    
date     : Fri, 6 Dec 2024 12:34:33 +1300    


This is a back-patch of 1319997d to branches 14-17 to fix an old warning  
about a printf type mismatch on MinGW, in anticipation of a potential  
expansion of the scope of CI's CompilerWarnings checks.  Though CI began  
in 15, BF animal fairywren also shows the warning in 14, so we might as  
well fix that too.  
  
Original commit message (except for new "Backpatch-through" tag):  
  
Commit 517bf2d91 changed a printf format string to placate MinGW, which  
at the time warned about "%lld".  Current MinGW is now warning about the  
replacement "%I64d".  Reverting the change clears the warning on the  
MinGW CI task, and hopefully it will clear it on build farm animal  
fairywren too.  
  
Backpatch-through: 14-17  
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>  
Reported-by: "Hayato Kuroda (Fujitsu)" <kuroda.hayato@fujitsu.com>  
Discussion: https://postgr.es/m/TYAPR01MB5866A71B744BE01B3BF71791F5AEA%40TYAPR01MB5866.jpnprd01.prod.outlook.com  

M src/interfaces/ecpg/test/expected/sql-sqlda.c
M src/interfaces/ecpg/test/expected/sql-sqlda.stderr
M src/interfaces/ecpg/test/sql/sqlda.pgc

Clean up test_cloexec.c and Makefile.

commit   : 0451859131e530f3f5f906dc148d52011da48268    
  
author   : Thomas Munro <tmunro@postgresql.org>    
date     : Sun, 21 Dec 2025 15:40:07 +1300    
  
committer: Thomas Munro <tmunro@postgresql.org>    
date     : Sun, 21 Dec 2025 15:40:07 +1300    


An unused variable caused a compiler warning on BF animal fairywren, an  
snprintf() call was redundant, and some buffer sizes were inconsistent.  
Per code review from Tom Lane.  
  
The Makefile's test ifeq ($(PORTNAME), win32) never succeeded due to a  
circularity, so only Meson builds were actually compiling the new test  
code, partially explaining why CI didn't tell us about the warning  
sooner (the other problem being that CompilerWarnings only makes  
world-bin, a problem for another commit).  Simplify.  
  
Backpatch-through: 16, like commit c507ba55  
Author: Bryan Green <dbryan.green@gmail.com>  
Co-authored-by: Thomas Munro <tmunro@gmail.com>  
Reported-by: Tom Lane <tgl@sss.pgh.pa.us>  
Discussion: https://postgr.es/m/1086088.1765593851%40sss.pgh.pa.us  

M src/test/modules/test_cloexec/Makefile
M src/test/modules/test_cloexec/test_cloexec.c

Add guard to prevent recursive memory context logging.

commit   : 699293d2749a64e5da8155ca491f19acc1aba341    
  
author   : Fujii Masao <fujii@postgresql.org>    
date     : Fri, 19 Dec 2025 12:05:37 +0900    
  
committer: Fujii Masao <fujii@postgresql.org>    
date     : Fri, 19 Dec 2025 12:05:37 +0900    


Previously, if memory context logging was triggered repeatedly and  
rapidly while a previous request was still being processed, it could  
result in recursive calls to ProcessLogMemoryContextInterrupt().  
This could lead to infinite recursion and potentially crash the process.  
  
This commit adds a guard to prevent such recursion.  
If ProcessLogMemoryContextInterrupt() is already in progress and  
logging memory contexts, subsequent calls will exit immediately,  
avoiding unintended recursive calls.  
  
While this scenario is unlikely in practice, it's not impossible.  
This change adds a safety check to prevent such failures.  
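The guard pattern can be sketched in miniature as follows; the names and the simulated nested interrupt are hypothetical stand-ins, not the actual mcxt.c implementation:

```c
#include <assert.h>
#include <stdbool.h>

/* Minimal sketch of a reentrancy guard; names and the simulated nested
 * interrupt are hypothetical, not the actual mcxt.c code. */
static bool logging_in_progress = false;
static int  log_calls = 0;

static void process_log_interrupt(void);

/* Pretend a second logging request arrives while we are still busy. */
static void
simulate_nested_interrupt(void)
{
    process_log_interrupt();    /* must be a no-op, not a recursion */
}

static void
process_log_interrupt(void)
{
    if (logging_in_progress)
        return;                 /* already logging: bail out immediately */

    logging_in_progress = true;
    log_calls++;                /* stands in for dumping the contexts */
    simulate_nested_interrupt();
    logging_in_progress = false;
}
```

A nested request while the flag is set returns immediately, so repeated rapid triggers cannot recurse without bound.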
  
Back-patch to v14, where memory context logging was introduced.  
  
Reported-by: Robert Haas <robertmhaas@gmail.com>  
Author: Fujii Masao <masao.fujii@gmail.com>  
Reviewed-by: Atsushi Torikoshi <torikoshia@oss.nttdata.com>  
Reviewed-by: Robert Haas <robertmhaas@gmail.com>  
Reviewed-by: Artem Gavrilov <artem.gavrilov@percona.com>  
Discussion: https://postgr.es/m/CA+TgmoZMrv32tbNRrFTvF9iWLnTGqbhYSLVcrHGuwZvCtph0NA@mail.gmail.com  
Backpatch-through: 14  

M src/backend/utils/mmgr/mcxt.c

Sort DO_SUBSCRIPTION_REL dump objects independent of OIDs.

commit   : 1cdc07ad5ae5a50443abed871bd304c0836eaa72    
  
author   : Noah Misch <noah@leadboat.com>    
date     : Thu, 18 Dec 2025 10:23:47 -0800    
  
committer: Noah Misch <noah@leadboat.com>    
date     : Thu, 18 Dec 2025 10:23:47 -0800    


Commit 0decd5e89db9f5edb9b27351082f0d74aae7a9b6 missed  
DO_SUBSCRIPTION_REL, leading to assertion failures.  In the unlikely use  
case of diffing "pg_dump --binary-upgrade" output, spurious diffs were  
possible.  As part of fixing that, align the DumpableObject naming and  
sort order with DO_PUBLICATION_REL.  The overall effect of this commit  
is to change sort order from (subname, srsubid) to (rel, subname).  
Since DO_SUBSCRIPTION_REL is only for --binary-upgrade, accept that  
larger-than-usual dump order change.  Back-patch to v17, where commit  
9a17be1e244a45a77de25ed2ada246fd34e4557d introduced DO_SUBSCRIPTION_REL.  
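The new OID-independent ordering can be illustrated with a comparator over hypothetical fields (these are not pg_dump's actual structs):

```c
#include <assert.h>
#include <string.h>

/* Sketch of OID-independent ordering for subscription-rel dump objects:
 * compare by (relation, subscription name) rather than by catalog OIDs.
 * The struct and field names are hypothetical, not pg_dump's. */
typedef unsigned int Oid;

typedef struct SubRelInfo
{
    Oid         relid;          /* table being synchronized */
    const char *subname;        /* owning subscription */
} SubRelInfo;

static int
sub_rel_cmp(const SubRelInfo *a, const SubRelInfo *b)
{
    if (a->relid != b->relid)
        return (a->relid < b->relid) ? -1 : 1;
    return strcmp(a->subname, b->subname);
}
```

Because the sort key no longer involves OIDs assigned at dump time, two --binary-upgrade dumps of the same catalog contents order these objects identically.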
  
Reported-by: vignesh C <vignesh21@gmail.com>  
Author: vignesh C <vignesh21@gmail.com>  
Discussion: https://postgr.es/m/CALDaNm2x3rd7C0_HjUpJFbxpAqXgm=QtoKfkEWDVA8h+JFpa_w@mail.gmail.com  
Backpatch-through: 17  

M src/bin/pg_dump/pg_dump.c
M src/bin/pg_dump/pg_dump_sort.c
M src/bin/pg_upgrade/t/004_subscription.pl

Do not emit WAL for unlogged BRIN indexes

commit   : 4b6d096a0f99d684db60a9f99aef851ebe0a5fa3    
  
author   : Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Thu, 18 Dec 2025 15:08:48 +0200    
  
committer: Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Thu, 18 Dec 2025 15:08:48 +0200    


Operations on unlogged relations should not be WAL-logged. The  
brin_initialize_empty_new_buffer() function didn't get the memo.  
  
The function is only called when a concurrent update to a brin page  
uses up space that we're just about to insert to, which makes it  
pretty hard to hit. If you do manage to hit it, a full-page WAL record  
is erroneously emitted for the unlogged index. If you then crash,  
crash recovery will fail on that record with an error like this:  
  
    FATAL:  could not create file "base/5/32819": File exists  
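The rule the fix enforces reduces to a one-line predicate; the persistence codes mirror PostgreSQL's relpersistence values, but this is a sketch, not the brin_pageops.c code:

```c
#include <assert.h>

/* One-line statement of the rule the fix enforces: only permanent
 * relations get WAL.  The persistence codes mirror PostgreSQL's
 * relpersistence values; this is a sketch, not brin_pageops.c. */
#define RELPERSISTENCE_PERMANENT 'p'
#define RELPERSISTENCE_UNLOGGED  'u'
#define RELPERSISTENCE_TEMP      't'

static int
relation_needs_wal(char relpersistence)
{
    return relpersistence == RELPERSISTENCE_PERMANENT;
}
```

brin_initialize_empty_new_buffer() was emitting the full-page record unconditionally instead of consulting a check like this.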
  
Author: Kirill Reshke <reshkekirill@gmail.com>  
Discussion: https://www.postgresql.org/message-id/CALdSSPhpZXVFnWjwEBNcySx_vXtXHwB2g99gE6rK0uRJm-3GgQ@mail.gmail.com  
Backpatch-through: 14  

M src/backend/access/brin/brin_pageops.c

Update .abi-compliance-history for PrepareToInvalidateCacheTuple().

commit   : a3e7bbd410d4c2e4e32c592340882191d215fe0b    
  
author   : Noah Misch <noah@leadboat.com>    
date     : Wed, 17 Dec 2025 09:48:56 -0800    
  
committer: Noah Misch <noah@leadboat.com>    
date     : Wed, 17 Dec 2025 09:48:56 -0800    


Commit 0f69beddea113dd1d6c5b6f6d82df577ef3c21f2 (v17) anticipated this:  
  
  [C] 'function void PrepareToInvalidateCacheTuple(Relation, HeapTuple, HeapTuple, void (int, typedef uint32, typedef Oid)*)' has some sub-type changes:  
    parameter 5 of type 'void*' was added  
    parameter 4 of type 'void (int, typedef uint32, typedef Oid)*' changed:  
      pointer type changed from: 'void (int, typedef uint32, typedef Oid)*' to: 'void (int, typedef uint32, typedef Oid, void*)*'  
  
Discussion: https://postgr.es/m/20240523000548.58.nmisch@google.com  
Backpatch-through: 14-17  

M .abi-compliance-history

Assert lack of hazardous buffer locks before possible catalog read.

commit   : bcb784e7d2f9539ff65e63ffdb89631574034ec8    
  
author   : Noah Misch <noah@leadboat.com>    
date     : Tue, 16 Dec 2025 16:13:54 -0800    
  
committer: Noah Misch <noah@leadboat.com>    
date     : Tue, 16 Dec 2025 16:13:54 -0800    


Commit 0bada39c83a150079567a6e97b1a25a198f30ea3 fixed a bug of this kind,  
which existed in all branches for six days before detection.  While the  
probability of reaching the trouble was low, the disruption was extreme.  No  
new backends could start, and service restoration needed an immediate  
shutdown.  Hence, add this to catch the next bug like it.  
  
The new check in RelationIdGetRelation() suffices to make autovacuum detect  
the bug in commit 243e9b40f1b2dd09d6e5bf91ebf6e822a2cd3704 that led to commit  
0bada39.  This also checks in a number of similar places.  It replaces each  
Assert(IsTransactionState()) that pertained to a conditional catalog read.  
  
Back-patch to v14 - v17.  This is a back-patch of commit  
f4ece891fc2f3f96f0571720a1ae30db8030681b (from before v18 branched) to  
all supported branches, to accompany the back-patch of commits 243e9b4  
and 0bada39.  For catalog indexes, the bttextcmp() behavior that  
motivated IsCatalogTextUniqueIndexOid() was v18-specific.  Hence, this  
back-patch doesn't need that or its correction from commit  
4a4ee0c2c1e53401924101945ac3d517c0a8a559.  
  
Reported-by: Alexander Lakhin <exclusion@gmail.com>  
Discussion: https://postgr.es/m/20250410191830.0e.nmisch@google.com  
Discussion: https://postgr.es/m/10ec0bc3-5933-1189-6bb8-5dec4114558e@gmail.com  
Backpatch-through: 14-17  

M src/backend/storage/buffer/bufmgr.c
M src/backend/storage/lmgr/lwlock.c
M src/backend/utils/adt/pg_locale.c
M src/backend/utils/cache/catcache.c
M src/backend/utils/cache/inval.c
M src/backend/utils/cache/relcache.c
M src/backend/utils/mb/mbutils.c
M src/include/storage/bufmgr.h
M src/include/storage/lwlock.h
M src/include/utils/relcache.h

WAL-log inplace update before revealing it to other sessions.

commit   : d3e5d89504484d69bc41e66fc2c2346ec0da6b07    
  
author   : Noah Misch <noah@leadboat.com>    
date     : Tue, 16 Dec 2025 16:13:54 -0800    
  
committer: Noah Misch <noah@leadboat.com>    
date     : Tue, 16 Dec 2025 16:13:54 -0800    


A buffer lock won't stop a reader having already checked tuple  
visibility.  If a vac_update_datfrozenxid() and then a crash happened  
during inplace update of a relfrozenxid value, datfrozenxid could  
overtake relfrozenxid.  That could lead to "could not access status of  
transaction" errors.  
  
Back-patch to v14 - v17.  This is a back-patch of commits:  
  
- 8e7e672cdaa6bfec85d4d5dd9be84159df23bb41  
  (main change, on master, before v18 branched)  
- 818013665259d4242ba641aad705ebe5a3e2db8e  
  (defect fix, on master, before v18 branched)  
  
It reverses commit bc6bad88572501aecaa2ac5d4bc900ac0fd457d5, my revert  
of the original back-patch.  
  
In v14, this also back-patches the assertion removal from commit  
7fcf2faf9c7dd473208fd6d5565f88d7f733782b.  
  
Discussion: https://postgr.es/m/20240620012908.92.nmisch@google.com  
Backpatch-through: 14-17  

M src/backend/access/heap/README.tuplock
M src/backend/access/heap/heapam.c
M src/include/storage/proc.h

For inplace update, send nontransactional invalidations.

commit   : 0f69beddea113dd1d6c5b6f6d82df577ef3c21f2    
  
author   : Noah Misch <noah@leadboat.com>    
date     : Tue, 16 Dec 2025 16:13:54 -0800    
  
committer: Noah Misch <noah@leadboat.com>    
date     : Tue, 16 Dec 2025 16:13:54 -0800    


The inplace update survives ROLLBACK.  The inval didn't, so another  
backend's DDL could then update the row without incorporating the  
inplace update.  In the test this fixes, a mix of CREATE INDEX and ALTER  
TABLE resulted in a table with an index, yet relhasindex=f.  That is a  
source of index corruption.  
  
Back-patch to v14 - v17.  This is a back-patch of commits:  
  
- 243e9b40f1b2dd09d6e5bf91ebf6e822a2cd3704  
  (main change, on master, before v18 branched)  
- 0bada39c83a150079567a6e97b1a25a198f30ea3  
  (defect fix, on master, before v18 branched)  
- bae8ca82fd00603ebafa0658640d6e4dfe20af92  
  (cosmetics from post-commit review, on REL_18_STABLE)  
  
It reverses commit c1099dd745b0135960895caa8892a1873ac6cbe5, my revert  
of the original back-patch of 243e9b4.  
  
This back-patch omits the non-comment heap_decode() changes.  I find  
those changes removed harmless code that was last necessary in v13.  See  
discussion thread for details.  The back branches aren't the place to  
remove such code.  
  
Like the original back-patch, this doesn't change WAL, because these  
branches use end-of-recovery SIResetAll().  All branches change the ABI  
of extern function PrepareToInvalidateCacheTuple().  No PGXN extension  
calls that, and there's no apparent use case in extensions.  Expect  
".abi-compliance-history" edits to follow.  
  
Reviewed-by: Paul A Jungwirth <pj@illuminatedcomputing.com>  
Reviewed-by: Surya Poondla <s_poondla@apple.com>  
Reviewed-by: Ilyasov Ian <ianilyasov@outlook.com>  
Reviewed-by: Nitin Motiani <nitinmotiani@google.com> (in earlier versions)  
Reviewed-by: Andres Freund <andres@anarazel.de> (in earlier versions)  
Discussion: https://postgr.es/m/20240523000548.58.nmisch@google.com  
Backpatch-through: 14-17  

M src/backend/access/heap/README.tuplock
M src/backend/access/heap/heapam.c
M src/backend/access/transam/xact.c
M src/backend/catalog/index.c
M src/backend/commands/event_trigger.c
M src/backend/replication/logical/decode.c
M src/backend/utils/cache/catcache.c
M src/backend/utils/cache/inval.c
M src/backend/utils/cache/syscache.c
M src/include/utils/catcache.h
M src/include/utils/inval.h
M src/test/isolation/expected/inplace-inval.out
M src/test/isolation/specs/inplace-inval.spec
M src/tools/pgindent/typedefs.list

Fix multibyte issue in ltree_strncasecmp().

commit   : b8cfe9dc2e7f052e727fc3bc07ccaef13ef55979    
  
author   : Jeff Davis <jdavis@postgresql.org>    
date     : Tue, 16 Dec 2025 10:35:40 -0800    
  
committer: Jeff Davis <jdavis@postgresql.org>    
date     : Tue, 16 Dec 2025 10:35:40 -0800    


Previously, the API for ltree_strncasecmp() took two inputs but only  
one length (that of the smaller input). It truncated the larger input  
to that length, but that could break a multibyte sequence.  
  
Change the API to be a check for prefix equality (possibly  
case-insensitive) instead, which is all that's needed by the  
callers. Also, provide the lengths of both inputs.  
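The multibyte hazard is easy to demonstrate: truncating a UTF-8 string at a byte count can land inside a character. A minimal sketch, not the ltree code:

```c
#include <assert.h>
#include <stddef.h>

/* Truncating a UTF-8 string at a byte count can land inside a
 * character: "é" is the two bytes 0xC3 0xA9.  A continuation byte has
 * the bit pattern 10xxxxxx.  Sketch only, not the ltree code. */
static int
is_utf8_continuation(unsigned char c)
{
    return (c & 0xC0) == 0x80;
}

/* Would cutting 's' after 'len' bytes split a multibyte character? */
static int
cut_splits_multibyte(const char *s, size_t len)
{
    return s[len] != '\0' && is_utf8_continuation((unsigned char) s[len]);
}
```

Passing both lengths to the comparison routine lets it work on whole characters instead of silently clipping the longer input.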
  
Reviewed-by: Chao Li <li.evan.chao@gmail.com>  
Reviewed-by: Peter Eisentraut <peter@eisentraut.org>  
Discussion: https://postgr.es/m/5f65b85740197ba6249ea507cddf609f84a6188b.camel%40j-davis.com  
Backpatch-through: 14  

M contrib/ltree/lquery_op.c
M contrib/ltree/ltree.h
M contrib/ltree/ltxtquery_op.c

Switch memory contexts in ReinitializeParallelDSM.

commit   : 1d0fc2499ff2f7b81e1e2c742ccb587c48d0f6d6    
  
author   : Robert Haas <rhaas@postgresql.org>    
date     : Tue, 16 Dec 2025 10:40:53 -0500    
  
committer: Robert Haas <rhaas@postgresql.org>    
date     : Tue, 16 Dec 2025 10:40:53 -0500    


We already do this in CreateParallelContext, InitializeParallelDSM, and  
LaunchParallelWorkers. I suspect the reason why the matching logic was  
omitted from ReinitializeParallelDSM is that I failed to realize that  
any memory allocation was happening here -- but shm_mq_attach does  
allocate, which could result in a shm_mq_handle being allocated in a  
shorter-lived context than the ParallelContext which points to it.  
  
That could result in a crash if the shorter-lived context is freed  
before the parallel context is destroyed. As far as I am currently  
aware, there is no way to reach a crash using only code that is  
present in core PostgreSQL, but extensions could potentially trip  
over this. Fixing this in the back-branches appears low-risk, so  
back-patch to all supported versions.  
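The pattern being added can be modeled with a toy context switch; the types and names here are illustrative stand-ins for MemoryContextSwitchTo() and the allocation done by shm_mq_attach(), not PostgreSQL's memory manager:

```c
#include <assert.h>
#include <stdlib.h>

/* Toy model of the MemoryContextSwitchTo() pattern: allocations made
 * during reinitialization must be charged to the long-lived context
 * owning the ParallelContext, not whatever context is current. */
typedef struct ToyContext
{
    int alloc_count;            /* how many allocations charged here */
} ToyContext;

static ToyContext *CurrentContext;

static ToyContext *
switch_to(ToyContext *cxt)
{
    ToyContext *old = CurrentContext;

    CurrentContext = cxt;
    return old;
}

static void *
cxt_alloc(size_t size)
{
    CurrentContext->alloc_count++;
    return malloc(size);
}

/* Reinitialize: charge the queue-handle allocation to pcxt's context. */
static void
reinitialize(ToyContext *pcxt_context)
{
    ToyContext *old = switch_to(pcxt_context);
    void       *handle = cxt_alloc(64);   /* shm_mq_attach() allocates here */

    free(handle);
    switch_to(old);                       /* restore the caller's context */
}
```

Switching before the allocation and restoring afterward guarantees the handle outlives any shorter-lived caller context.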
  
Author: Jakub Wartak <jakub.wartak@enterprisedb.com>  
Co-authored-by: Jeevan Chalke <jeevan.chalke@enterprisedb.com>  
Backpatch-through: 14  
Discussion: http://postgr.es/m/CAKZiRmwfVripa3FGo06=5D1EddpsLu9JY2iJOTgbsxUQ339ogQ@mail.gmail.com  

M src/backend/access/transam/parallel.c

Fail recovery when missing redo checkpoint record without backup_label

commit   : f5927da4ff6c4447109cd00e584a5f29fcc62625    
  
author   : Michael Paquier <michael@paquier.xyz>    
date     : Tue, 16 Dec 2025 13:29:39 +0900    
  
committer: Michael Paquier <michael@paquier.xyz>    
date     : Tue, 16 Dec 2025 13:29:39 +0900    


This commit adds an extra check at the beginning of recovery to ensure  
that the redo record of a checkpoint exists before attempting WAL  
replay, logging a PANIC if the redo record referenced by the checkpoint  
record could not be found.  This is the same level of failure as when a  
checkpoint record is missing.  This check is added when a cluster is  
started without a backup_label, after retrieving its checkpoint record.  
The redo LSN used for the check is retrieved from the checkpoint record  
successfully read.  
  
In the case where a backup_label exists, the startup process already  
fails if the redo record cannot be found after reading a checkpoint  
record at the beginning of recovery.  
  
Previously, the presence of the redo record was not checked.  If the  
redo and checkpoint records were located on different WAL segments, it  
would be possible to miss an entire range of WAL records that should  
have been replayed but were silently ignored.  The consequences of  
missing the redo record depend on the version involved, becoming worse  
the older the version:  
- On HEAD, v18 and v17, recovery fails with a pointer dereference at the  
beginning of the redo loop, as the redo record is expected but cannot be  
found.  These versions at least detect a failure before doing anything,  
even if the failure is misleading, taking the shape of a segmentation  
fault that gives no hint that the redo record is missing.  
- In v16 and v15, problems show up at the end of recovery within  
FinishWalRecovery(), the startup process using a bogus LSN to decide  
from where to start writing WAL.  The cluster gets corrupted, but it is  
at least noisy about it.  
- v14 and older versions are worse: the cluster gets corrupted while  
remaining entirely silent about the matter.  The missing redo record  
causes the startup process to skip recovery entirely, because a missing  
record is treated the same as no redo being required at all.  This leads  
to data loss, as everything between the redo record and the checkpoint  
record is lost.  
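The essence of the added check can be modeled as verifying, before replay begins, that the segment holding the redo LSN is still available, not just the segment holding the checkpoint record. A toy sketch with a hypothetical 16MB segment size and segment-range store:

```c
#include <assert.h>
#include <stdint.h>

/* Toy model of the added check: before replay starts, make sure the WAL
 * segment holding the checkpoint's redo LSN is available, not just the
 * segment holding the checkpoint record.  The 16MB segment size and the
 * segment-range store are hypothetical. */
#define TOY_SEG_SIZE ((uint64_t) 16 * 1024 * 1024)

static int
redo_record_available(uint64_t redo_lsn,
                      uint64_t oldest_seg, uint64_t newest_seg)
{
    uint64_t seg = redo_lsn / TOY_SEG_SIZE;

    return seg >= oldest_seg && seg <= newest_seg;
}
```

When the redo and checkpoint records sit in different segments and the older one is gone, a check of this shape fails up front instead of letting recovery proceed past missing WAL.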
  
Note that I have tested that down to 9.4, reproducing the issue with a  
version of the author's reproducer slightly modified.  The code is wrong  
since at least 9.2, but I did not look at the exact point of origin.  
  
This problem was found while debugging a cluster where the WAL segment  
containing the redo record was missing due to an operator error, leading  
to a crash, based on an investigation in v15.  
  
Requesting archive recovery with the creation of a recovery.signal or  
a standby.signal even without a backup_label would mitigate the issue:  
if the record cannot be found in pg_wal/, the missing segment can be  
retrieved with a restore_command when checking that the redo record  
exists.  This was already the case without this commit, where recovery  
would re-fetch the WAL segment that includes the redo record.  The check  
introduced by this commit causes the segment to be retrieved earlier, to  
make sure that the redo record can be found.  
  
On HEAD, the code will be slightly changed in a follow-up commit to not  
rely on a PANIC, to include a test able to emulate the original problem.  
This is a minimal backpatchable fix, kept separated for clarity.  
  
Reported-by: Andres Freund <andres@anarazel.de>  
Analyzed-by: Andres Freund <andres@anarazel.de>  
Author: Nitin Jadhav <nitinjadhavpostgres@gmail.com>  
Discussion: https://postgr.es/m/20231023232145.cmqe73stvivsmlhs@awork3.anarazel.de  
Discussion: https://postgr.es/m/CAMm1aWaaJi2w49c0RiaDBfhdCL6ztbr9m=daGqiOuVdizYWYaA@mail.gmail.com  
Backpatch-through: 14  

M src/backend/access/transam/xlogrecovery.c

Clarify comment on multixid offset wraparound check

commit   : cd1a887fe9bfdfcc7a2e2ba5145123c892197648    
  
author   : Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Mon, 15 Dec 2025 11:47:04 +0200    
  
committer: Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Mon, 15 Dec 2025 11:47:04 +0200    


Coverity complained that offset cannot be 0 here because there's an  
explicit check for "offset == 0" earlier in the function, but it  
didn't see the possibility that offset could've wrapped around to 0.  
The code is correct, but clarify the comment about it.  
  
The same code exists in backbranches in the server  
GetMultiXactIdMembers() function and in 'master' in the pg_upgrade  
GetOldMultiXactIdSingleMember function. In backbranches Coverity  
didn't complain about it because the check was merely an assertion,  
but change the comment in all supported branches for consistency.  
  
Per Tom Lane's suggestion.  
  
Discussion: https://www.postgresql.org/message-id/1827755.1765752936@sss.pgh.pa.us  

M src/backend/access/transam/multixact.c

Fix allocation formula in llvmjit_expr.c

commit   : 0bab0c3b74af29d05bdd401c8ebe658389a64507    
  
author   : Michael Paquier <michael@paquier.xyz>    
date     : Thu, 11 Dec 2025 10:25:46 +0900    
  
committer: Michael Paquier <michael@paquier.xyz>    
date     : Thu, 11 Dec 2025 10:25:46 +0900    


An array of LLVMBasicBlockRef is allocated with the size used for an  
element being "LLVMBasicBlockRef *" rather than "LLVMBasicBlockRef".  
LLVMBasicBlockRef is itself a pointer type, so this did not directly  
cause a problem because both have the same size; still, it is  
incorrect.  
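The reason the slip was harmless is that the element type is itself a pointer, so the two sizeof expressions agree. A stand-in type makes this concrete (illustrative only, not the llvmjit_expr.c code):

```c
#include <assert.h>
#include <stdlib.h>

/* The element type stands in for LLVMBasicBlockRef, which is itself a
 * pointer, so sizeof(elem) and sizeof(elem *) coincide and the wrong
 * formula still allocated the right amount.  Illustrative only. */
typedef struct OpaqueBlock *BlockRef;

static BlockRef *
alloc_blocks_wrong(size_t n)
{
    return malloc(sizeof(BlockRef *) * n);  /* wrong element type */
}

static BlockRef *
alloc_blocks_right(size_t n)
{
    return malloc(sizeof(BlockRef) * n);    /* correct element type */
}
```

Had the element been a struct rather than a pointer typedef, the wrong formula would have under- or over-allocated.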
  
This issue has been spotted while reviewing a different patch, and  
exists since 2a0faed9d702, so backpatch all the way down.  
  
Discussion: https://postgr.es/m/CA+hUKGLngd9cKHtTUuUdEo2eWEgUcZ_EQRbP55MigV2t_zTReg@mail.gmail.com  
Backpatch-through: 14  

M src/backend/jit/llvm/llvmjit_expr.c

Fix bogus extra arguments to query_safe in test

commit   : 807b2f261d643944120d29bfab70937ebbd4aad6    
  
author   : Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Wed, 10 Dec 2025 19:38:07 +0200    
  
committer: Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Wed, 10 Dec 2025 19:38:07 +0200    


The test seemed to incorrectly think that query_safe() takes an  
argument that describes what the query does, similar to e.g.  
command_ok(). Until commit bd8d9c9bdf the extra arguments were  
harmless and were just ignored, but when commit bd8d9c9bdf introduced  
a new optional argument to query_safe(), the extra arguments started  
clashing with that, causing the test to fail.  
  
Backpatch to v17, that's the oldest branch where the test exists. The  
extra arguments didn't cause any trouble on the older branches, but  
they were clearly bogus anyway.  

M src/test/modules/xid_wraparound/t/004_notify_freeze.pl

commit   : 998d100cdb7220a81b8775b67fbc1d5bbb918052    
  
author   : Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Wed, 10 Dec 2025 11:43:16 +0200    
  
committer: Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Wed, 10 Dec 2025 11:43:16 +0200    


These functions took a ResourceOwner argument, but only checked if it  
was NULL, and then used CurrentResourceOwner for the actual work.  
Surely the intention was to use the passed-in resource owner. All  
current callers passed CurrentResourceOwner or NULL, so this has no  
consequences at the moment, but it's an accident waiting to happen for  
future callers and extensions.  
  
Author: Matthias van de Meent <boekewurm+postgres@gmail.com>  
Discussion: https://www.postgresql.org/message-id/CAEze2Whnfv8VuRZaohE-Af+GxBA1SNfD_rXfm84Jv-958UCcJA@mail.gmail.com  
Backpatch-through: 17  

M src/backend/utils/cache/catcache.c

Fix failures with cross-version pg_upgrade tests

commit   : d0518e965e6489ff2df0a71953e56656c710141a    
  
author   : Michael Paquier <michael@paquier.xyz>    
date     : Wed, 10 Dec 2025 12:47:23 +0900    
  
committer: Michael Paquier <michael@paquier.xyz>    
date     : Wed, 10 Dec 2025 12:47:23 +0900    


Buildfarm members skimmer and crake have reported that pg_upgrade  
running from v18 fails due to the changes of d52c24b0f808, with the  
expectation that the objects removed from the test module  
injection_points should still be present post-upgrade, but the test  
module does not have them anymore.  
  
The origin of the issue is that the following test modules depend on  
injection_points, but they do not drop the extension once the tests  
finish, leaving its traces in the dumps used for the upgrades:  
- gin, down to v17  
- typcache, down to v18  
- nbtree, HEAD-only  
Test modules have no upgrade requirements, as they are used only for...  
tests, so there is no point in keeping them around.  
  
An alternative solution would be to drop the databases created by these  
modules in AdjustUpgrade.pm, but the solution of this commit to drop the  
extension is simpler.  Note that there would be a catch if using a  
solution based on AdjustUpgrade.pm as the database name used for the  
test runs differs between configure and meson:  
- configure relies on USE_MODULE_DB for database name uniqueness, which  
builds a database name based on the *first* entry of REGRESS, the  
variable that lists all the SQL tests.  
- meson relies on a "name" field.  
  
For example, for the test module "gin", the regression database is named  
"regression_gin" under meson, while it is more complex for configure, as  
of "contrib_regression_gin_incomplete_splits".  So a AdjustUpgrade.pm  
would need a set of DROP DATABASE IF EXISTS to solve this issue, to cope  
with each build system.  
  
The failure has been caused by d52c24b0f808, and the problem can happen  
with upgrade dumps from v17 and v18 to HEAD.  This problem is not  
currently reachable in the back-branches, but it could be possible that  
a future change in injection_points in stable branches invalidates this  
theory, so this commit is applied down to v17 in the test modules that  
matter.  
  
Per discussion with Tom Lane and Heikki Linnakangas.  
  
Discussion: https://postgr.es/m/2899652.1765167313@sss.pgh.pa.us  
Backpatch-through: 17  

M src/test/modules/gin/expected/gin_incomplete_splits.out
M src/test/modules/gin/sql/gin_incomplete_splits.sql

Fix O_CLOEXEC flag handling in Windows port.

commit   : f24af0e04cef983945f13aeed99de989bf6ee9f0    
  
author   : Thomas Munro <tmunro@postgresql.org>    
date     : Wed, 10 Dec 2025 09:01:35 +1300    
  
committer: Thomas Munro <tmunro@postgresql.org>    
date     : Wed, 10 Dec 2025 09:01:35 +1300    


PostgreSQL's src/port/open.c has always set bInheritHandle = TRUE  
when opening files on Windows, making all file descriptors inheritable  
by child processes.  This meant the O_CLOEXEC flag, added to many call  
sites by commit 1da569ca1f (v16), was silently ignored.  
  
The original commit included a comment suggesting that our open()  
replacement doesn't create inheritable handles, but that was a  
misunderstanding of the code path.  In practice, the code was creating  
inheritable handles in all cases.  
  
This hasn't caused widespread problems because most child processes  
(archive_command, COPY PROGRAM, etc.) operate on file paths passed as  
arguments rather than inherited file descriptors.  Even if a child  
wanted to use an inherited handle, it would need to learn the numeric  
handle value, which isn't passed through our IPC mechanisms.  
  
Nonetheless, the current behavior is wrong.  It violates documented  
O_CLOEXEC semantics, contradicts our own code comments, and makes  
PostgreSQL behave differently on Windows than on Unix.  It also creates  
potential issues with future code or security auditing tools.  
  
To fix, define O_CLOEXEC as _O_NOINHERIT in master, a value previously  
used by O_DSYNC.  We use different values in the back branches to  
preserve existing values.  In pgwin32_open_handle() we set bInheritHandle  
according to whether O_CLOEXEC is specified, for the same atomic  
semantics as POSIX in multi-threaded programs that create processes.  
  
Backpatch-through: 16  
Author: Bryan Green <dbryan.green@gmail.com>  
Co-authored-by: Thomas Munro <thomas.munro@gmail.com> (minor adjustments)  
Discussion: https://postgr.es/m/e2b16375-7430-4053-bda3-5d2194ff1880%40gmail.com  

M src/include/port.h
M src/include/port/win32_port.h
M src/port/open.c
M src/test/modules/Makefile
M src/test/modules/meson.build
A src/test/modules/test_cloexec/Makefile
A src/test/modules/test_cloexec/meson.build
A src/test/modules/test_cloexec/t/001_cloexec.pl
A src/test/modules/test_cloexec/test_cloexec.c

doc: Fix statement about ON CONFLICT and deferrable constraints.

commit   : 7a02ac28ab11026a5c9ccc774fed2f2613bde77e    
  
author   : Dean Rasheed <dean.a.rasheed@gmail.com>    
date     : Tue, 9 Dec 2025 10:49:17 +0000    
  
committer: Dean Rasheed <dean.a.rasheed@gmail.com>    
date     : Tue, 9 Dec 2025 10:49:17 +0000    


The description of deferrable constraints in create_table.sgml states  
that deferrable constraints cannot be used as conflict arbitrators in  
an INSERT with an ON CONFLICT DO UPDATE clause, but in fact this  
restriction applies to all ON CONFLICT clauses, not just those with DO  
UPDATE. Fix this, and while at it, change the word "arbitrators" to  
"arbiters", to match the terminology used elsewhere.  
  
Author: Dean Rasheed <dean.a.rasheed@gmail.com>  
Discussion: https://postgr.es/m/CAEZATCWsybvZP3ce8rGcVNx-QHuDOJZDz8y=p1SzqHwjRXyV4Q@mail.gmail.com  
Backpatch-through: 14  

M doc/src/sgml/ref/create_table.sgml

Fix LOCK_TIMEOUT handling in slotsync worker.

commit   : f2818868aec856e7d2502e5232e08d3a4857a802    
  
author   : Amit Kapila <akapila@postgresql.org>    
date     : Tue, 9 Dec 2025 07:02:08 +0000    
  
committer: Amit Kapila <akapila@postgresql.org>    
date     : Tue, 9 Dec 2025 07:02:08 +0000    


Previously, the slotsync worker relied on SIGINT for graceful shutdown  
during promotion. However, SIGINT is also used by the LOCK_TIMEOUT handler  
to cancel queries. Since the slotsync worker can lock catalog tables while  
parsing libpq tuples, this overlap caused it to ignore LOCK_TIMEOUT  
signals and potentially wait indefinitely on locks.  
  
This patch replaces the slotsync worker's SIGINT handler with  
StatementCancelHandler to correctly process query-cancel interrupts.  
Additionally, the startup process now uses SIGUSR1 to signal the slotsync  
worker to stop during promotion. The worker exits after detecting that the  
shared memory flag stopSignaled is set.  
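The underlying constraint is that a process has a single disposition per signal, so pointing SIGINT at a shutdown handler clobbered its query-cancel role. A toy demonstration of installing a cancel-style handler instead (not the slotsync.c code):

```c
#include <assert.h>
#include <signal.h>

/* A process has one disposition per signal, so pointing SIGINT at a
 * shutdown handler clobbers its query-cancel role.  This toy installs a
 * cancel-style handler instead; it is not the slotsync.c code. */
static volatile sig_atomic_t cancel_pending = 0;

static void
statement_cancel_handler(int signo)
{
    (void) signo;
    cancel_pending = 1;         /* request query cancel, not shutdown */
}

static void
install_handlers(void)
{
    signal(SIGINT, statement_cancel_handler);
}
```

With SIGINT reserved for cancellation, a separate channel (here, SIGUSR1 plus the stopSignaled shared-memory flag) carries the shutdown request.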
  
Author: Hou Zhijie <houzj.fnst@fujitsu.com>  
Reviewed-by: shveta malik <shveta.malik@gmail.com>  
Reviewed-by: Chao Li <li.evan.chao@gmail.com>  
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>  
Backpatch-through: 17, here it was introduced  
Discussion: https://postgr.es/m/TY4PR01MB169078F33846E9568412D878C94A2A@TY4PR01MB16907.jpnprd01.prod.outlook.com  

M src/backend/replication/logical/slotsync.c

Doc: fix typo in hash index documentation

commit   : ca98d8ba1087286a1599df83527037879ff86bd3    
  
author   : David Rowley <drowley@postgresql.org>    
date     : Tue, 9 Dec 2025 14:42:40 +1300    
  
committer: David Rowley <drowley@postgresql.org>    
date     : Tue, 9 Dec 2025 14:42:40 +1300    

Plus a similar fix to the README.  
  
Backpatch as far back as the sgml issue exists.  The README issue does  
exist in v14, but that seems unlikely to harm anyone.  
  
Author: David Geier <geidav.pg@gmail.com>  
Discussion: https://postgr.es/m/ed3db7ea-55b4-4809-86af-81ad3bb2c7d3@gmail.com  
Backpatch-through: 15  

M doc/src/sgml/hash.sgml
M src/backend/access/hash/README

Fix setting next multixid's offset at offset wraparound

commit   : cad40cec24f338a4ac8004b58301c0809df48c03    
  
author   : Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Fri, 5 Dec 2025 11:32:38 +0200    
  
committer: Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Fri, 5 Dec 2025 11:32:38 +0200    

In commit 789d65364c, we started updating the next multixid's offset  
too when recording a multixid, so that it can always be used to  
calculate the number of members. I got it wrong at offset wraparound:  
we need to skip over offset 0. Fix that.  
  
Discussion: https://www.postgresql.org/message-id/d9996478-389a-4340-8735-bfad456b313c@iki.fi  
Backpatch-through: 14  

M src/backend/access/transam/multixact.c
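
The "skip over offset 0" rule can be illustrated with a small sketch (hypothetical, simplified Python, not the actual multixact.c logic; the function name is invented and member ranges spanning the wrap point are ignored):

```python
# Hypothetical, simplified sketch (not PostgreSQL's actual multixact.c
# code): member offsets live in a wrapping 32-bit counter, and offset 0
# is reserved as "invalid", so the next offset must skip over 0 when the
# counter wraps around.

MAX_UINT32 = 0xFFFFFFFF

def next_offset(offset, nmembers):
    """Advance a wrapping 32-bit offset counter, skipping the reserved 0."""
    offset = (offset + nmembers) & MAX_UINT32
    if offset == 0:
        offset = 1  # skip over the reserved "invalid" offset
    return offset
```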

Show version of nodes in output of TAP tests

commit   : 9d4f6d17f579ff7ead7927aba9a7742b9031d2d7    
  
author   : Michael Paquier <michael@paquier.xyz>    
date     : Fri, 5 Dec 2025 09:21:18 +0900    
  
committer: Michael Paquier <michael@paquier.xyz>    
date     : Fri, 5 Dec 2025 09:21:18 +0900    

This commit adds the version information of a node initialized by  
Cluster.pm, which may vary depending on the install_path given by the  
test.  Previously, the code dumped the node information, which includes  
the version number, before the version number was set.  
  
This is particularly useful for the pg_upgrade TAP tests, which may mix  
several versions for cross-version runs.  The TAP infrastructure also  
allows mixing nodes with different versions, so this information can be  
useful for out-of-core tests.  
  
Backpatch down to v15, where Cluster.pm and the pg_upgrade TAP tests  
have been introduced.  
  
Author: Potapov Alexander <a.potapov@postgrespro.com>  
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>  
Discussion: https://postgr.es/m/e59bb-692c0a80-5-6f987180@170377126  
Backpatch-through: 15  

M src/test/perl/PostgreSQL/Test/Cluster.pm

amcheck: Fix snapshot usage in bt_index_parent_check

commit   : ce2f575b7cf4ac9dbebed4df226b7de64c7d340d    
  
author   : Álvaro Herrera <alvherre@kurilemu.de>    
date     : Thu, 4 Dec 2025 18:12:08 +0100    
  
committer: Álvaro Herrera <alvherre@kurilemu.de>    
date     : Thu, 4 Dec 2025 18:12:08 +0100    

We were using SnapshotAny to do some index checks, but that's wrong and  
causes spurious errors when used on indexes created by CREATE INDEX  
CONCURRENTLY.  Fix it to use an MVCC snapshot, and add a test for it.  
  
This problem came in with commit 5ae2087202af, which introduced the  
uniqueness check.  Backpatch to 17.  
  
Author: Mihail Nikalayeu <mihailnikalayeu@gmail.com>  
Reviewed-by: Andrey Borodin <x4mmm@yandex-team.ru>  
Backpatch-through: 17  
Discussion: https://postgr.es/m/CANtu0ojmVd27fEhfpST7RG2KZvwkX=dMyKUqg0KM87FkOSdz8Q@mail.gmail.com  

M contrib/amcheck/t/002_cic.pl
M contrib/amcheck/verify_nbtree.c
M doc/src/sgml/amcheck.sgml

Set next multixid's offset when creating a new multixid

commit   : 8ba61bc06386b6c09fb9943a95dce973ea1c7b1d    
  
author   : Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Wed, 3 Dec 2025 19:15:08 +0200    
  
committer: Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Wed, 3 Dec 2025 19:15:08 +0200    

With this commit, the next multixid's offset will always be set on the  
offsets page, by the time that a backend might try to read it, so we  
no longer need the waiting mechanism with the condition variable. In  
other words, this eliminates "corner case 2" mentioned in the  
comments.  
  
The waiting mechanism was broken in a few scenarios:  
  
- When nextMulti was advanced without WAL-logging the next  
  multixid. For example, if a later multixid was already assigned and  
  WAL-logged before the previous one was WAL-logged, and then the  
  server crashed. In that case the next offset would never be set in  
  the offsets SLRU, and a query trying to read it would get stuck  
  waiting for it. Same thing could happen if pg_resetwal was used to  
  forcibly advance nextMulti.  
  
- In hot standby mode, a deadlock could happen where one backend waits  
  for the next multixid assignment record, but WAL replay is not  
  advancing because of a recovery conflict with the waiting backend.  
  
The old TAP test used carefully placed injection points to exercise  
the old waiting code, but now that the waiting code is gone, much of  
the old test is no longer relevant. Rewrite the test to reproduce the  
IPC/MultixactCreation hang after crash recovery instead, and to verify  
that previously recorded multixids stay readable.  
  
Backpatch to all supported versions. In back-branches, we still need  
to be able to read WAL that was generated before this fix, so in the  
back-branches this includes a hack to initialize the next offsets page  
when replaying XLOG_MULTIXACT_CREATE_ID for the last multixid on a  
page. On 'master', bump XLOG_PAGE_MAGIC instead to indicate that the  
WAL is not compatible.  
  
Author: Andrey Borodin <amborodin@acm.org>  
Reviewed-by: Dmitry Yurichev <dsy.075@yandex.ru>  
Reviewed-by: Álvaro Herrera <alvherre@kurilemu.de>  
Reviewed-by: Kirill Reshke <reshkekirill@gmail.com>  
Reviewed-by: Ivan Bykov <i.bykov@modernsys.ru>  
Reviewed-by: Chao Li <li.evan.chao@gmail.com>  
Discussion: https://www.postgresql.org/message-id/172e5723-d65f-4eec-b512-14beacb326ce@yandex.ru  
Backpatch-through: 14  

M src/backend/access/transam/multixact.c

Fix amcheck's handling of half-dead B-tree pages

commit   : e8ae594458a3813006af963d7547b206640762b9    
  
author   : Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Tue, 2 Dec 2025 21:11:15 +0200    
  
committer: Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Tue, 2 Dec 2025 21:11:15 +0200    

amcheck incorrectly reported the following error if there were any  
half-dead pages in the index:  
  
ERROR:  mismatch between parent key and child high key in index  
"amchecktest_id_idx"  
  
It's expected that a half-dead page does not have a downlink in the  
parent level, so skip the test.  
  
Reported-by: Konstantin Knizhnik <knizhnik@garret.ru>  
Reviewed-by: Peter Geoghegan <pg@bowt.ie>  
Reviewed-by: Mihail Nikalayeu <mihailnikalayeu@gmail.com>  
Discussion: https://www.postgresql.org/message-id/33e39552-6a2a-46f3-8b34-3f9f8004451f@garret.ru  
Backpatch-through: 14  

M contrib/amcheck/verify_nbtree.c

Fix amcheck's handling of incomplete root splits in B-tree

commit   : 5a2d1df007182ee4b61530bf798f1cb48376d3c5    
  
author   : Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Tue, 2 Dec 2025 21:10:51 +0200    
  
committer: Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Tue, 2 Dec 2025 21:10:51 +0200    

When the root page is being split, it's normal that the root page  
according to the metapage is not marked BTP_ROOT.  Fix the bogus error  
that amcheck raised in that case.  
  
Reviewed-by: Peter Geoghegan <pg@bowt.ie>  
Discussion: https://www.postgresql.org/message-id/abd65090-5336-42cc-b768-2bdd66738404@iki.fi  
Backpatch-through: 14  

M contrib/amcheck/verify_nbtree.c

Avoid rewriting data-modifying CTEs more than once.

commit   : c090965036631775942f5e4ff3c836fc6003c819    
  
author   : Dean Rasheed <dean.a.rasheed@gmail.com>    
date     : Sat, 29 Nov 2025 12:32:12 +0000    
  
committer: Dean Rasheed <dean.a.rasheed@gmail.com>    
date     : Sat, 29 Nov 2025 12:32:12 +0000    

Formerly, when updating an auto-updatable view, or a relation with  
rules, if the original query had any data-modifying CTEs, the rewriter  
would rewrite those CTEs multiple times as RewriteQuery() recursed  
into the product queries. In most cases that was harmless, because  
RewriteQuery() is mostly idempotent. However, if the CTE involved  
updating an always-generated column, it would trigger an error because  
any subsequent rewrite would appear to be attempting to assign a  
non-default value to the always-generated column.  
  
This could perhaps be fixed by attempting to make RewriteQuery() fully  
idempotent, but that looks quite tricky to achieve, and would probably  
be quite fragile, given that more generated-column-type features might  
be added in the future.  
  
Instead, fix by arranging for RewriteQuery() to rewrite each CTE  
exactly once (by tracking the number of CTEs already rewritten as it  
recurses). This has the advantage of being simpler and more efficient,  
but it does make RewriteQuery() dependent on the order in which  
rewriteRuleAction() joins the CTE lists from the original query and  
the rule action, so care must be taken if that is ever changed.  
  
Reported-by: Bernice Southey <bernice.southey@gmail.com>  
Author: Bernice Southey <bernice.southey@gmail.com>  
Author: Dean Rasheed <dean.a.rasheed@gmail.com>  
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>  
Reviewed-by: Kirill Reshke <reshkekirill@gmail.com>  
Discussion: https://postgr.es/m/CAEDh4nyD6MSH9bROhsOsuTqGAv_QceU_GDvN9WcHLtZTCYM1kA@mail.gmail.com  
Backpatch-through: 14  

M src/backend/rewrite/rewriteHandler.c
M src/test/regress/expected/with.out
M src/test/regress/sql/with.sql
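
The rewrite-counting idea above can be sketched as follows (hypothetical Python, not the actual rewriteHandler.c code; names are invented):

```python
# Hypothetical sketch of the fix's idea (not rewriteHandler.c): as the
# rewriter recurses into product queries, it passes along how many CTEs
# were already rewritten, so each data-modifying CTE is rewritten
# exactly once even though the CTE list is visited multiple times.

def rewrite_ctes(ctes, rewrite_fn, already_rewritten=0):
    """Rewrite CTEs in place, skipping those handled by an earlier pass."""
    for i in range(already_rewritten, len(ctes)):
        ctes[i] = rewrite_fn(ctes[i])
    return len(ctes)  # count to pass into recursive calls
```

Without the `already_rewritten` tracking, a second pass would rewrite the same CTEs again, which is what triggered the always-generated-column error.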

Allow indexscans on partial hash indexes with implied quals.

commit   : e79b2766216ab998080725323bcb4f4e87d639cc    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 27 Nov 2025 13:09:59 -0500    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 27 Nov 2025 13:09:59 -0500    

Normally, if a WHERE clause is implied by the predicate of a partial  
index, we drop that clause from the set of quals used with the index,  
since it's redundant to test it if we're scanning that index.  
However, if it's a hash index (or any !amoptionalkey index), this  
could result in dropping all available quals for the index's first  
key, preventing us from generating an indexscan.  
  
It's fair to question the practical usefulness of this case.  Since  
hash only supports equality quals, the situation could only arise  
if the index's predicate is "WHERE indexkey = constant", implying  
that the index contains only one hash value, which would make hash  
a really poor choice of index type.  However, perhaps there are  
other !amoptionalkey index AMs out there with which such cases are  
more plausible.  
  
To fix, just don't filter the candidate indexquals this way if  
the index is !amoptionalkey.  That's a bit hokey because it may  
result in testing quals we didn't need to test, but to do it  
more accurately we'd have to redundantly identify which candidate  
quals are actually usable with the index, something we don't know  
at this early stage of planning.  Doesn't seem worth the effort.  
  
Reported-by: Sergei Glukhov <s.glukhov@postgrespro.ru>  
Author: Tom Lane <tgl@sss.pgh.pa.us>  
Reviewed-by: David Rowley <dgrowleyml@gmail.com>  
Discussion: https://postgr.es/m/e200bf38-6b45-446a-83fd-48617211feff@postgrespro.ru  
Backpatch-through: 14  

M src/backend/optimizer/path/indxpath.c
M src/test/regress/expected/hash_index.out
M src/test/regress/sql/hash_index.sql
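
The rule change can be sketched as a simple filter (hypothetical Python, not the actual indxpath.c code; names are invented):

```python
# Hypothetical sketch of the planner rule (not the actual indxpath.c
# code): quals implied by a partial index's predicate are normally
# dropped as redundant, but when the index AM requires a qual on its
# first key (amoptionalkey is false), dropping them could leave no
# usable quals at all, so keep them in that case.

def candidate_index_quals(clauses, implied_by_predicate, amoptionalkey):
    if not amoptionalkey:
        # May test redundant quals, but guarantees an indexscan is possible.
        return list(clauses)
    return [c for c in clauses if c not in implied_by_predicate]
```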

doc: Fix misleading synopsis for CREATE/ALTER PUBLICATION.

commit   : d7977668eec65363012797302b492adff1578274    
  
author   : Fujii Masao <fujii@postgresql.org>    
date     : Thu, 27 Nov 2025 23:30:51 +0900    
  
committer: Fujii Masao <fujii@postgresql.org>    
date     : Thu, 27 Nov 2025 23:30:51 +0900    

The documentation for CREATE/ALTER PUBLICATION previously showed:  
  
        [ ONLY ] table_name [ * ] [ ( column_name [, ... ] ) ] [ WHERE ( expression ) ] [, ... ]  
  
to indicate that the table/column specification could be repeated.  
However, placing [, ... ] directly after a multi-part construct was  
misleading and made it unclear which portion was repeatable.  
  
This commit introduces a new term, table_and_columns, to represent:  
  
        [ ONLY ] table_name [ * ] [ ( column_name [, ... ] ) ] [ WHERE ( expression ) ]  
  
and updates the synopsis to use:  
  
        table_and_columns [, ... ]  
  
which clearly identifies the repeatable element.  
  
Backpatched to v15, where the misleading syntax was introduced.  
  
Author: Peter Smith <smithpb2250@gmail.com>  
Reviewed-by: Chao Li <lic@highgo.com>  
Reviewed-by: Fujii Masao <masao.fujii@gmail.com>  
Discussion: https://postgr.es/m/CAHut+PtsyvYL3KmA6C8f0ZpXQ=7FEqQtETVy-BOF+cm9WPvfMQ@mail.gmail.com  
Backpatch-through: 15  

M doc/src/sgml/ref/alter_publication.sgml
M doc/src/sgml/ref/create_publication.sgml

Fix error reporting for SQL/JSON path type mismatches

commit   : b5511fed500eb526a547f38597307552fa7acd08    
  
author   : Amit Langote <amitlan@postgresql.org>    
date     : Thu, 27 Nov 2025 10:40:19 +0900    
  
committer: Amit Langote <amitlan@postgresql.org>    
date     : Thu, 27 Nov 2025 10:40:19 +0900    

transformJsonFuncExpr() used exprType()/exprLocation() on the  
possibly coerced path expression, which could be NULL when  
coercion to jsonpath failed, leading to "cache lookup failed  
for type 0" errors.  
  
Preserve the original expression node so that type and location  
in the "must be of type jsonpath" error are reported correctly.  
Add regression tests to cover these cases.  
  
Reported-by: Jian He <jian.universality@gmail.com>  
Author: Jian He <jian.universality@gmail.com>  
Reviewed-by: Kirill Reshke <reshkekirill@gmail.com>  
Discussion: https://postgr.es/m/CACJufxHunVg81JMuNo8Yvv_hJD0DicgaVN2Wteu8aJbVJPBjZA@mail.gmail.com  
Backpatch-through: 17  

M src/backend/parser/parse_expr.c
M src/test/regress/expected/sqljson_queryfuncs.out
M src/test/regress/sql/sqljson_queryfuncs.sql
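
The "preserve the original node" pattern can be sketched like this (hypothetical Python, not the actual parse_expr.c code; names and the error wording are invented):

```python
# Hypothetical sketch of the fix (not parse_expr.c): keep the original
# expression node before attempting the coercion, so that a failed
# coercion (which yields None) still lets the error report the original
# node's type and location instead of failing on a null result.

def transform_path_expr(expr, coerce_to_jsonpath):
    coerced = coerce_to_jsonpath(expr)  # may return None on failure
    if coerced is None:
        # Report using the original node, not the failed coercion result.
        raise TypeError(
            "argument at position %d must be of type jsonpath, not %s"
            % (expr["location"], expr["type"]))
    return coerced
```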

Teach DSM registry to retry entry initialization if needed.

commit   : 2fc5c5062207a26b4ea4144a3d70099767eee523    
  
author   : Nathan Bossart <nathan@postgresql.org>    
date     : Wed, 26 Nov 2025 15:12:25 -0600    
  
committer: Nathan Bossart <nathan@postgresql.org>    
date     : Wed, 26 Nov 2025 15:12:25 -0600    

If DSM registry entry initialization fails, backends could try to  
use an uninitialized DSM segment, DSA, or dshash table (since the  
entry is still added to the registry).  To fix, restructure the  
code so that the registry retries initialization as needed.  This  
commit also modifies pg_get_dsm_registry_allocations() to leave out  
partially-initialized entries, as they shouldn't have any allocated  
memory.  
  
DSM registry entry initialization shouldn't fail often in practice,  
but retrying was deemed better than leaving entries in a  
permanently failed state (as was done by commit 1165a933aa, which  
has since been reverted).  
  
Suggested-by: Robert Haas <robertmhaas@gmail.com>  
Reviewed-by: Robert Haas <robertmhaas@gmail.com>  
Discussion: https://postgr.es/m/E1vJHUk-006I7r-37%40gemulon.postgresql.org  
Backpatch-through: 17  

M src/backend/storage/ipc/dsm_registry.c
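
The retry behavior can be sketched as follows (hypothetical Python, not the actual dsm_registry.c code; names are invented):

```python
# Hypothetical sketch of the retry behavior (not dsm_registry.c): each
# entry records whether initialization completed; if a previous attempt
# failed partway, the next backend to attach simply retries the
# initialization callback instead of using a half-initialized entry.

registry = {}  # name -> {"initialized": bool, "data": object}

def get_or_init(name, init_fn):
    entry = registry.setdefault(name, {"initialized": False, "data": None})
    if not entry["initialized"]:
        entry["data"] = init_fn()  # may raise; entry then stays retryable
        entry["initialized"] = True
    return entry["data"]
```

Because the `initialized` flag is only set after `init_fn` returns, a failure leaves the entry in a state where the next caller retries, rather than in a permanently failed state.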

Revert "Teach DSM registry to ERROR if attaching to an uninitialized entry."

commit   : c7e0f263d67f75525193f874aaf6e082a88567c8    
  
author   : Nathan Bossart <nathan@postgresql.org>    
date     : Wed, 26 Nov 2025 11:37:21 -0600    
  
committer: Nathan Bossart <nathan@postgresql.org>    
date     : Wed, 26 Nov 2025 11:37:21 -0600    

This reverts commit 1165a933aa (and the corresponding commits on  
the back-branches).  In a follow-up commit, we'll teach the  
registry to retry entry initialization instead of leaving it in a  
permanently failed state.  
  
Reviewed-by: Robert Haas <robertmhaas@gmail.com>  
Discussion: https://postgr.es/m/E1vJHUk-006I7r-37%40gemulon.postgresql.org  
Backpatch-through: 17  

M src/backend/storage/ipc/dsm_registry.c

doc: Clarify passphrase command reloading on Windows

commit   : cbb69a65a7d2529b2ba62be0a9eca88aceaafa30    
  
author   : Daniel Gustafsson <dgustafsson@postgresql.org>    
date     : Wed, 26 Nov 2025 14:24:04 +0100    
  
committer: Daniel Gustafsson <dgustafsson@postgresql.org>    
date     : Wed, 26 Nov 2025 14:24:04 +0100    

When running on Windows (or EXEC_BACKEND) the SSL configuration will  
be reloaded on each backend start, so the passphrase command will be  
reloaded along with it.  This implies that passphrase command reload  
must be enabled on Windows for connections to work at all.  Document  
this since it wasn't mentioned explicitly, and while there, add markup  
for the parameter value to match the rest of the docs.  
  
Backpatch to all supported versions.  
  
Author: Daniel Gustafsson <daniel@yesql.se>  
Reviewed-by: Chao Li <li.evan.chao@gmail.com>  
Reviewed-by: Álvaro Herrera <alvherre@kurilemu.de>  
Reviewed-by: Peter Eisentraut <peter@eisentraut.org>  
Discussion: https://postgr.es/m/5F301096-921A-427D-8EC1-EBAEC2A35082@yesql.se  
Backpatch-through: 14  

M doc/src/sgml/config.sgml

lwlock: Fix, currently harmless, bug in LWLockWakeup()

commit   : 427e886a79a530aea379e82d2ccbdd6099eac3ad    
  
author   : Andres Freund <andres@anarazel.de>    
date     : Mon, 24 Nov 2025 17:37:09 -0500    
  
committer: Andres Freund <andres@anarazel.de>    
date     : Mon, 24 Nov 2025 17:37:09 -0500    

The code in LWLockWakeup() accidentally checked the list of to-be-woken-up  
processes, rather than the lock's wait list, to see if LW_FLAG_HAS_WAITERS  
should be unset. That means that HAS_WAITERS would not get unset immediately,  
but only during the next, unnecessary, call to LWLockWakeup().  
  
Luckily, as the code stands, this is just a small efficiency issue.  
  
However, if there were (as in a patch of mine) a case in which LWLockWakeup()  
would not find any backend to wake, despite the wait list not being empty,  
we'd wrongly unset LW_FLAG_HAS_WAITERS, leading to potentially hanging.  
  
While the consequences in the backbranches are limited, the code as-is is  
confusing, and it is possible that there are workloads where the additional  
wait-list lock acquisitions hurt; therefore, backpatch.  
  
Discussion: https://postgr.es/m/fvfmkr5kk4nyex56ejgxj3uzi63isfxovp2biecb4bspbjrze7@az2pljabhnff  
Backpatch-through: 14  

M src/backend/storage/lmgr/lwlock.c
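
The distinction can be sketched like this (hypothetical Python, not the actual lwlock.c code; names are invented):

```python
# Hypothetical sketch of the bug and fix (not lwlock.c): whether the
# HAS_WAITERS flag can be cleared depends on the lock's remaining wait
# list, not on the list of processes chosen to be woken this time.

def lwlock_wakeup(wait_list, flags):
    to_wake = [p for p in wait_list if p["wakeable"]]
    wait_list[:] = [p for p in wait_list if not p["wakeable"]]
    # The buggy code tested `to_wake` here; the fix tests the wait list.
    if not wait_list:
        flags.discard("HAS_WAITERS")
    return to_wake
```

With the buggy check, waking every waiter would leave HAS_WAITERS set until a later, pointless call found nothing to wake.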

Fix incorrect IndexOptInfo header comment

commit   : 232e0f5de41b904daa111dbd274c2cb2028991a4    
  
author   : David Rowley <drowley@postgresql.org>    
date     : Mon, 24 Nov 2025 17:01:13 +1300    
  
committer: David Rowley <drowley@postgresql.org>    
date     : Mon, 24 Nov 2025 17:01:13 +1300    

The comment incorrectly indicated that indexcollations[] stored  
collations for both key columns and INCLUDE columns, but in reality it  
only has elements for the key columns.  canreturn[] didn't get a mention,  
so add that while we're here.  
  
Author: Junwang Zhao <zhjwpku@gmail.com>  
Reviewed-by: David Rowley <dgrowleyml@gmail.com>  
Discussion: https://postgr.es/m/CAEG8a3LwbZgMKOQ9CmZarX5DEipKivdHp5PZMOO-riL0w%3DL%3D4A%40mail.gmail.com  
Backpatch-through: 14  

M src/include/nodes/pathnodes.h

jit: Adjust AArch64-only code for LLVM 21.

commit   : 60215eae7c4285e526fdf607074a28140552d3c5    
  
author   : Thomas Munro <tmunro@postgresql.org>    
date     : Sat, 22 Nov 2025 20:51:16 +1300    
  
committer: Thomas Munro <tmunro@postgresql.org>    
date     : Sat, 22 Nov 2025 20:51:16 +1300    

LLVM 21 changed the arguments of RTDyldObjectLinkingLayer's  
constructor, breaking compilation with the backported  
SectionMemoryManager from commit 9044fc1d.  
  
https://github.com/llvm/llvm-project/commit/cd585864c0bbbd74ed2a2b1ccc191eed4d1c8f90  
  
Backpatch-through: 14  
Author: Holger Hoffstätte <holger@applied-asynchrony.com>  
Reviewed-by: Anthonin Bonnefoy <anthonin.bonnefoy@datadoghq.com>  
Discussion: https://postgr.es/m/d25e6e4a-d1b4-84d3-2f8a-6c45b975f53d%40applied-asynchrony.com  

M src/backend/jit/llvm/llvmjit_wrap.cpp

Print new OldestXID value in pg_resetwal when it's being changed

commit   : f2e0ca0af97bc188626f1a3472aa27f25665e296    
  
author   : Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Wed, 19 Nov 2025 18:05:42 +0200    
  
committer: Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Wed, 19 Nov 2025 18:05:42 +0200    

Commit 74cf7d46a91d added the --oldest-transaction-id option to  
pg_resetwal, but forgot to update the code that prints all the new  
values that are being set. Fix that.  
  
Reviewed-by: Bertrand Drouvot <bertranddrouvot.pg@gmail.com>  
Discussion: https://www.postgresql.org/message-id/5461bc85-e684-4531-b4d2-d2e57ad18cba@iki.fi  
Backpatch-through: 14  

M src/bin/pg_resetwal/pg_resetwal.c

Don't allow CTEs to determine semantic levels of aggregates.

commit   : 075a763e2d42ab817db677c0e77aae23ac6e2e02    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 18 Nov 2025 12:56:55 -0500    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 18 Nov 2025 12:56:55 -0500    

The fix for bug #19055 (commit b0cc0a71e) allowed CTE references in  
sub-selects within aggregate functions to affect the semantic levels  
assigned to such aggregates.  It turns out this broke some related  
cases, leading to assertion failures or strange planner errors such  
as "unexpected outer reference in CTE query".  After experimenting  
with some alternative rules for assigning the semantic level in  
such cases, we've come to the conclusion that changing the level  
is more likely to break things than be helpful.  
  
Therefore, this patch undoes what b0cc0a71e changed, and instead  
installs logic to throw an error if there is any reference to a  
CTE that's below the semantic level that standard SQL rules would  
assign to the aggregate based on its contained Var and Aggref nodes.  
(The SQL standard disallows sub-selects within aggregate functions,  
so it can't reach the troublesome case and hence has no rule for  
what to do.)  
  
Perhaps someone will come along with a legitimate query that this  
logic rejects, and if so probably the example will help us craft  
a level-adjustment rule that works better than what b0cc0a71e did.  
I'm not holding my breath for that though, because the previous  
logic had been there for a very long time before bug #19055 without  
complaints, and that bug report sure looks to have originated from  
fuzzing not from real usage.  
  
Like b0cc0a71e, back-patch to all supported branches, though  
sadly that no longer includes v13.  
  
Bug: #19106  
Reported-by: Kamil Monicz <kamil@monicz.dev>  
Author: Tom Lane <tgl@sss.pgh.pa.us>  
Discussion: https://postgr.es/m/19106-9dd3668a0734cd72@postgresql.org  
Backpatch-through: 14  

M src/backend/parser/parse_agg.c
M src/test/regress/expected/with.out
M src/test/regress/sql/with.sql

Update .abi-compliance-history for change to CreateStatistics().

commit   : 1cd020324ef5f9e9651e834643fd5166b21c749a    
  
author   : Nathan Bossart <nathan@postgresql.org>    
date     : Mon, 17 Nov 2025 14:14:41 -0600    
  
committer: Nathan Bossart <nathan@postgresql.org>    
date     : Mon, 17 Nov 2025 14:14:41 -0600    

As noted in the commit message for 5e4fcbe531, the addition of a  
second parameter to CreateStatistics() breaks ABI compatibility,  
but we are unaware of any impacted third-party code.  This commit  
updates .abi-compliance-history accordingly.  
  
Backpatch-through: 14-18  

M .abi-compliance-history

Define PS_USE_CLOBBER_ARGV on GNU/Hurd.

commit   : d66a922f9247ceaa6caed9b51ce686f12e280091    
  
author   : Thomas Munro <tmunro@postgresql.org>    
date     : Mon, 17 Nov 2025 12:01:12 +1300    
  
committer: Thomas Munro <tmunro@postgresql.org>    
date     : Mon, 17 Nov 2025 12:01:12 +1300    

Until d2ea2d310dfdc40328aca5b6c52225de78432e01, the PS_USE_PS_STRINGS  
option was used on GNU/Hurd. Since that option was removed and  
PS_USE_CLOBBER_ARGV appears to work fine on the Hurd nowadays, define  
the latter to re-enable process title changes on this platform.  
  
In the 14 and 15 branches, the existing test for __hurd__ (added 25  
years ago by commit 209aa77d, removed in 16 by the above commit) is left  
unchanged for now as it was activating slightly different code paths and  
would need investigation by a Hurd user.  
  
Author: Michael Banck <mbanck@debian.org>  
Discussion: https://postgr.es/m/CA%2BhUKGJMNGUAqf27WbckYFrM-Mavy0RKJvocfJU%3DJ2XcAZyv%2Bw%40mail.gmail.com  
Backpatch-through: 16  

M src/backend/utils/misc/ps_status.c

Fix Assert failure in EXPLAIN ANALYZE MERGE with a concurrent update.

commit   : d6c415c4b4b19b9fa39151c5d1a804683b83535d    
  
author   : Dean Rasheed <dean.a.rasheed@gmail.com>    
date     : Sun, 16 Nov 2025 22:15:45 +0000    
  
committer: Dean Rasheed <dean.a.rasheed@gmail.com>    
date     : Sun, 16 Nov 2025 22:15:45 +0000    

When instrumenting a MERGE command containing both WHEN NOT MATCHED BY  
SOURCE and WHEN NOT MATCHED BY TARGET actions using EXPLAIN ANALYZE, a  
concurrent update of the target relation could lead to an Assert  
failure in show_modifytable_info(). In a non-assert build, this would  
lead to an incorrect value for "skipped" tuples in the EXPLAIN output,  
rather than a crash.  
  
This could happen if the concurrent update caused a matched row to no  
longer match, in which case ExecMerge() treats the single originally  
matched row as a pair of not matched rows, and potentially executes 2  
not-matched actions for the single source row. This could then lead to  
a state where the number of rows processed by the ModifyTable node  
exceeds the number of rows produced by its source node, causing  
"skipped_path" in show_modifytable_info() to be negative, triggering  
the Assert.  
  
Fix this in ExecMergeMatched() by incrementing the instrumentation  
tuple count on the source node whenever a concurrent update of this  
kind is detected, if both kinds of merge actions exist, so that the  
number of source rows matches the number of actions potentially  
executed, and the "skipped" tuple count is correct.  
  
Back-patch to v17, where support for WHEN NOT MATCHED BY SOURCE  
actions was introduced.  
  
Bug: #19111  
Reported-by: Dilip Kumar <dilipbalaut@gmail.com>  
Author: Dean Rasheed <dean.a.rasheed@gmail.com>  
Reviewed-by: Dilip Kumar <dilipbalaut@gmail.com>  
Discussion: https://postgr.es/m/19111-5b06624513d301b3@postgresql.org  
Backpatch-through: 17  

M src/backend/executor/nodeModifyTable.c
M src/test/isolation/expected/merge-update.out
M src/test/isolation/specs/merge-update.spec
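
The accounting can be sketched with a small arithmetic example (hypothetical Python, not the executor code; names are invented):

```python
# Hypothetical arithmetic sketch (not nodeModifyTable.c): EXPLAIN ANALYZE
# derives the "skipped" tuple count as source-node tuples minus actions
# executed, so when a concurrent update turns one matched source row into
# two not-matched actions, the source tuple count must be incremented to
# keep the difference from going negative.

def skipped_tuples(source_tuples, actions_executed, concurrent_splits):
    source_tuples += concurrent_splits  # the fix: bump the source count
    skipped = source_tuples - actions_executed
    assert skipped >= 0  # mirrors the Assert that previously failed
    return skipped
```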

Doc: include MERGE in variable substitution command list

commit   : 213596a0c1089ef752dc080ca74e4ea6f8f7049d    
  
author   : David Rowley <drowley@postgresql.org>    
date     : Mon, 17 Nov 2025 10:52:30 +1300    
  
committer: David Rowley <drowley@postgresql.org>    
date     : Mon, 17 Nov 2025 10:52:30 +1300    

Backpatch to 15, where MERGE was introduced.  
  
Reported-by: <emorgunov@mail.ru>  
Author: David Rowley <dgrowleyml@gmail.com>  
Discussion: https://postgr.es/m/176278494385.770.15550176063450771532@wrigleys.postgresql.org  
Backpatch-through: 15  

M doc/src/sgml/plpgsql.sgml

Add note about CreateStatistics()'s selective use of check_rights.

commit   : 505ce19a205add23934f61bbced98a1b9a74ec1f    
  
author   : Nathan Bossart <nathan@postgresql.org>    
date     : Fri, 14 Nov 2025 13:20:09 -0600    
  
committer: Nathan Bossart <nathan@postgresql.org>    
date     : Fri, 14 Nov 2025 13:20:09 -0600    

Commit 5e4fcbe531 added a check_rights parameter to this function  
for use by ALTER TABLE commands that re-create statistics objects.  
However, we intentionally ignore check_rights when verifying  
relation ownership because this function's lookup could return a  
different answer than the caller's.  This commit adds a note to  
this effect so that we remember it down the road.  
  
Reviewed-by: Noah Misch <noah@leadboat.com>  
Backpatch-through: 14  

M src/backend/commands/statscmds.c

pgbench: Fix assertion failure with multiple \syncpipeline in pipeline mode.

commit   : 5bc251b288392fc432b6174c2a6e6a040fe1698e    
  
author   : Fujii Masao <fujii@postgresql.org>    
date     : Fri, 14 Nov 2025 22:40:39 +0900    
  
committer: Fujii Masao <fujii@postgresql.org>    
date     : Fri, 14 Nov 2025 22:40:39 +0900    

Previously, when pgbench ran a custom script that triggered retriable errors  
(e.g., deadlocks) followed by multiple \syncpipeline commands in pipeline mode,  
the following assertion failure could occur:  
  
    Assertion failed: (res == ((void*)0)), function discardUntilSync, file pgbench.c, line 3594.  
  
The issue was that discardUntilSync() assumed a pipeline sync result  
(PGRES_PIPELINE_SYNC) would always be followed by either another sync result  
or NULL. This assumption was incorrect: when multiple sync requests were sent,  
a sync result could instead be followed by another result type. In such cases,  
discardUntilSync() mishandled the results, leading to the assertion failure.  
  
This commit fixes the issue by making discardUntilSync() correctly handle cases  
where a pipeline sync result is followed by other result types. It now continues  
discarding results until another pipeline sync followed by NULL is reached.  
  
Backpatched to v17, where support for \syncpipeline command in pgbench was  
introduced.  
  
Author: Yugo Nagata <nagata@sraoss.co.jp>  
Reviewed-by: Chao Li <lic@highgo.com>  
Reviewed-by: Fujii Masao <masao.fujii@gmail.com>  
Discussion: https://postgr.es/m/20251111105037.f3fc554616bc19891f926c5b@sraoss.co.jp  
Backpatch-through: 17  

M src/bin/pgbench/pgbench.c
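
The corrected loop can be sketched as follows (hypothetical Python over a result sequence, not the actual pgbench C code; names are invented):

```python
# Hypothetical sketch of the corrected loop (not the actual pgbench C
# code): keep discarding results until a PIPELINE_SYNC result is
# immediately followed by end-of-results (NULL), rather than assuming a
# sync can only be followed by another sync or NULL.

SYNC = "PGRES_PIPELINE_SYNC"

def discard_until_sync(results):
    it = iter(results)
    res = next(it, None)
    while res is not None:
        if res == SYNC:
            nxt = next(it, None)
            if nxt is None:
                return True  # sync followed by NULL: pipeline drained
            res = nxt        # more results after the sync: keep discarding
        else:
            res = next(it, None)
    return False             # ran out of results without a final sync
```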

doc: Improve description of RLS policies applied by command type.

commit   : d60dabfe2507cb8822c739064264e12e12caccf7    
  
author   : Dean Rasheed <dean.a.rasheed@gmail.com>    
date     : Thu, 13 Nov 2025 12:03:18 +0000    
  
committer: Dean Rasheed <dean.a.rasheed@gmail.com>    
date     : Thu, 13 Nov 2025 12:03:18 +0000    

On the CREATE POLICY page, the "Policies Applied by Command Type"  
table was missing MERGE ... THEN DELETE and some of the policies  
applied during INSERT ... ON CONFLICT and MERGE. Fix that, and try to  
improve readability by listing the various MERGE cases separately,  
rather than together with INSERT/UPDATE/DELETE. Mention COPY ... TO  
along with SELECT, since it behaves in the same way. In addition,  
document which policy violations cause errors to be thrown, and which  
just cause rows to be silently ignored.  
  
Also, a paragraph above the table states that INSERT ... ON CONFLICT  
DO UPDATE only checks the WITH CHECK expressions of INSERT policies  
for rows appended to the relation by the INSERT path, which is  
incorrect -- all rows proposed for insertion are checked, regardless  
of whether they end up being inserted. Fix that, and also mention that  
the same applies to INSERT ... ON CONFLICT DO NOTHING.  
  
In addition, in various other places on that page, clarify how the  
different types of policy are applied to different commands, and  
whether or not errors are thrown when policy checks do not pass.  
  
Backpatch to all supported versions. Prior to v17, MERGE did not  
support RETURNING, and so MERGE ... THEN INSERT would never check new  
rows against SELECT policies. Prior to v15, MERGE was not supported at  
all.  
  
Author: Dean Rasheed <dean.a.rasheed@gmail.com>  
Reviewed-by: Viktor Holmberg <v@viktorh.net>  
Reviewed-by: Jian He <jian.universality@gmail.com>  
Discussion: https://postgr.es/m/CAEZATCWqnfeChjK=n1V_dYZT4rt4mnq+ybf9c0qXDYTVMsy8pg@mail.gmail.com  
Backpatch-through: 14  

M doc/src/sgml/ref/create_policy.sgml

Teach DSM registry to ERROR if attaching to an uninitialized entry.

commit   : ac2800ddc185615aaf8547a051b324592340b666    
  
author   : Nathan Bossart <nathan@postgresql.org>    
date     : Wed, 12 Nov 2025 14:30:11 -0600    
  
committer: Nathan Bossart <nathan@postgresql.org>    
date     : Wed, 12 Nov 2025 14:30:11 -0600    

Click here for diff

If DSM entry initialization fails, backends could try to use an  
uninitialized DSM segment, DSA, or dshash table (since the entry is  
still added to the registry).  To fix, keep track of whether  
initialization completed, and ERROR if a backend tries to attach to  
an uninitialized entry.  We could instead retry initialization as  
needed, but that seemed complicated, error prone, and unlikely to  
help most cases.  Furthermore, such problems probably indicate a  
coding error.  
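
The guard described above can be sketched like this (a minimal Python model; the names `Registry`, `get_or_create`, and `attach` are illustrative, not the actual C API in dsm_registry.c):

```python
# Sketch of the DSM registry fix: an entry records whether its
# initialization callback completed, and attaching to an entry whose
# initialization failed raises an error instead of returning garbage.

class RegistryError(Exception):
    pass

class Registry:
    def __init__(self):
        self._entries = {}  # name -> {"data": ..., "initialized": bool}

    def get_or_create(self, name, init_func):
        if name not in self._entries:
            # The entry is added to the registry before initialization
            # runs, so a failure leaves behind an uninitialized entry.
            entry = {"data": None, "initialized": False}
            self._entries[name] = entry
            entry["data"] = init_func()   # may raise
            entry["initialized"] = True   # only reached on success
            return entry["data"]
        return self.attach(name)

    def attach(self, name):
        entry = self._entries[name]
        if not entry["initialized"]:
            # The fix: ERROR out rather than use an uninitialized segment.
            raise RegistryError(
                f'DSM registry entry "{name}" is not initialized')
        return entry["data"]
```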
  
Reported-by: Alexander Lakhin <exclusion@gmail.com>  
Reviewed-by: Sami Imseih <samimseih@gmail.com>  
Discussion: https://postgr.es/m/dd36d384-55df-4fc2-825c-5bc56c950fa9%40gmail.com  
Backpatch-through: 17  

M src/backend/storage/ipc/dsm_registry.c

Clear 'xid' in dummy async notify entries written to fill up pages

commit   : d80d5f09950281c6b852f8d121955bfa182ae539    
  
author   : Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Wed, 12 Nov 2025 21:19:03 +0200    
  
committer: Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Wed, 12 Nov 2025 21:19:03 +0200    

Click here for diff

Before we started to freeze async notify entries (commit 8eeb4a0f7c),  
no one looked at the 'xid' on an entry with invalid 'dboid'. But now  
we might actually need to freeze it later. Initialize it with  
InvalidTransactionId to begin with, to avoid that work later.  
  
Álvaro pointed this out in review of commit 8eeb4a0f7c, but I forgot  
to include this change there.  
  
Author: Álvaro Herrera <alvherre@kurilemu.de>  
Discussion: https://www.postgresql.org/message-id/202511071410.52ll56eyixx7@alvherre.pgsql  
Backpatch-through: 14  

M src/backend/commands/async.c

Fix remaining race condition with CLOG truncation and LISTEN/NOTIFY

commit   : c2682810ab7d7127b99b4ce0db73d5db28a67c3f    
  
author   : Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Wed, 12 Nov 2025 20:59:44 +0200    
  
committer: Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Wed, 12 Nov 2025 20:59:44 +0200    

Click here for diff

Previous commit fixed a bug where VACUUM would truncate the CLOG  
that's still needed to check the commit status of XIDs in the async  
notify queue, but as mentioned in the commit message, it wasn't a full  
fix. If a backend is executing asyncQueueReadAllNotifications() and  
has just made a local copy of an async SLRU page which contains old  
XIDs, vacuum can concurrently truncate the CLOG covering those XIDs,  
and the backend still gets an error when it calls  
TransactionIdDidCommit() on those XIDs in the local copy. This commit  
fixes that race condition.  
  
To fix, hold the SLRU bank lock across the TransactionIdDidCommit()  
calls in NOTIFY processing.  
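
The locking discipline can be sketched as follows (a simplified Python model with hypothetical names; the real code uses the SLRU bank lock in async.c):

```python
import threading

# Sketch of the fix: the reader holds the same lock across both copying
# the queue page and checking commit status, so concurrent CLOG
# truncation (which also takes the lock) cannot slip in between.

clog_lock = threading.Lock()   # stands in for the SLRU bank lock
clog = {100: True}             # xid -> committed, stands in for pg_xact

def read_notifications(page_xids):
    results = []
    with clog_lock:            # held across the DidCommit-style checks
        for xid in page_xids:
            results.append((xid, clog[xid]))
    return results

def truncate_clog(up_to_xid):
    with clog_lock:            # truncation waits for any reader
        for xid in list(clog):
            if xid < up_to_xid:
                del clog[xid]
```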
  
Per Tom Lane's idea. Backpatch to all supported versions.  
  
Reviewed-by: Joel Jacobson <joel@compiler.org>  
Reviewed-by: Arseniy Mukhin <arseniy.mukhin.dev@gmail.com>  
Discussion: https://www.postgresql.org/message-id/2759499.1761756503@sss.pgh.pa.us  
Backpatch-through: 14  

M src/backend/commands/async.c

Fix bug where we truncated CLOG that was still needed by LISTEN/NOTIFY

commit   : d02c03ddc5e3e6286c3b1e88f54e0ed1334e7ffd    
  
author   : Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Wed, 12 Nov 2025 20:59:36 +0200    
  
committer: Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Wed, 12 Nov 2025 20:59:36 +0200    

Click here for diff

The async notification queue contains the XID of the sender, and when  
processing notifications we call TransactionIdDidCommit() on the  
XID. But we had no safeguards to prevent the CLOG segments containing  
those XIDs from being truncated away. As a result, if a backend didn't  
for some reason process its notifications for a long time, or when a  
new backend issued LISTEN, you could get an error like:  
  
test=# listen c21;  
ERROR:  58P01: could not access status of transaction 14279685  
DETAIL:  Could not open file "pg_xact/000D": No such file or directory.  
LOCATION:  SlruReportIOError, slru.c:1087  
  
To fix, make VACUUM "freeze" the XIDs in the async notification queue  
before truncating the CLOG. Old XIDs are replaced with  
FrozenTransactionId or InvalidTransactionId.  
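
The freezing step can be sketched like this (a simplified Python model; the real code operates on SLRU pages in async.c and uses wraparound-aware XID comparisons, which this sketch glosses over):

```python
# Simplified model of freezing XIDs in the notify queue before CLOG
# truncation. FROZEN/INVALID stand in for FrozenTransactionId and
# InvalidTransactionId; did_commit stands in for TransactionIdDidCommit.

FROZEN = 2    # FrozenTransactionId: known committed
INVALID = 0   # InvalidTransactionId: not committed / nothing to check

def freeze_queue(entries, oldest_xid, did_commit):
    """Replace XIDs older than oldest_xid so their CLOG is not needed."""
    for entry in entries:
        xid = entry["xid"]
        # NB: real code uses TransactionIdPrecedes, not plain "<".
        if xid != INVALID and xid < oldest_xid:
            entry["xid"] = FROZEN if did_commit(xid) else INVALID
```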
  
Note: This commit is not a full fix. A race condition remains, where a  
backend is executing asyncQueueReadAllNotifications() and has just  
made a local copy of an async SLRU page which contains old XIDs, while  
vacuum concurrently truncates the CLOG covering those XIDs. When the  
backend then calls TransactionIdDidCommit() on those XIDs from the  
local copy, you still get the error. The next commit will fix that  
remaining race condition.  
  
This was first reported by Sergey Zhuravlev in 2021, with many other  
people hitting the same issue later. Thanks to:  
- Alexandra Wang, Daniil Davydov, Andrei Varashen and Jacques Combrink  
  for investigating and providing reproducible test cases,  
- Matheus Alcantara and Arseniy Mukhin for review and earlier proposed  
  patches to fix this,  
- Álvaro Herrera and Masahiko Sawada for reviews,  
- Yura Sokolov aka funny-falcon for the idea of marking transactions  
  as committed in the notification queue, and  
- Joel Jacobson for the final patch version. I hope I didn't forget  
  anyone.  
  
Backpatch to all supported versions. I believe the bug goes back all  
the way to commit d1e027221d, which introduced the SLRU-based async  
notification queue.  
  
Discussion: https://www.postgresql.org/message-id/16961-25f29f95b3604a8a@postgresql.org  
Discussion: https://www.postgresql.org/message-id/18804-bccbbde5e77a68c2@postgresql.org  
Discussion: https://www.postgresql.org/message-id/CAK98qZ3wZLE-RZJN_Y%2BTFjiTRPPFPBwNBpBi5K5CU8hUHkzDpw@mail.gmail.com  
Backpatch-through: 14  

M src/backend/commands/async.c
M src/backend/commands/vacuum.c
M src/include/commands/async.h
M src/test/modules/xid_wraparound/meson.build
A src/test/modules/xid_wraparound/t/004_notify_freeze.pl

Escalate ERRORs during async notify processing to FATAL

commit   : b821c92920f0044d8f4da9ae0dffcb63e3f4ff49    
  
author   : Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Wed, 12 Nov 2025 20:59:28 +0200    
  
committer: Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Wed, 12 Nov 2025 20:59:28 +0200    

Click here for diff

Previously, if async notify processing encountered an error, we would  
report the error to the client and advance our read position past the  
offending entry to prevent trying to process it over and over  
again. Trying to continue after an error has a few problems however:  
  
- We have no way of telling the client that a notification was  
  lost. They get an ERROR, but that doesn't tell you much. As such,  
  it's not clear if keeping the connection alive after losing a  
  notification is a good thing. Depending on the application logic,  
  missing a notification could cause the application to get stuck  
  waiting, for example.  
  
- If the connection is idle, PqCommReadingMsg is set and any ERROR is  
  turned into FATAL anyway.  
  
- We bailed out of the notification processing loop on first error  
  without processing any subsequent notifications. The subsequent  
  notifications would not be processed until another notify interrupt  
  arrives. For example, if there were two notifications pending, and  
  processing the first one caused an ERROR, the second notification  
  would not be processed until someone sent a new NOTIFY.  
  
This commit changes the behavior so that any ERROR while processing  
async notifications is turned into FATAL, causing the client  
connection to be terminated. That makes the behavior more consistent  
as that's what happened in idle state already, and terminating the  
connection is a clear signal to the application that it might've  
missed some notifications.  
  
The reason to do this now is that the next commits will change the  
notification processing code in a way that would make it harder to  
skip over just the offending notification entry on error.  
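
The new behavior can be sketched as (an illustrative Python model; `FatalError`, `process_notifications`, and `deliver` are hypothetical names):

```python
# Sketch of the escalation described above: any error while draining
# the notification queue terminates the session (FATAL) instead of
# being reported and skipped, so the client never silently loses a
# notification while its connection stays alive.

class FatalError(Exception):
    pass

def process_notifications(queue, deliver):
    while queue:
        entry = queue.pop(0)
        try:
            deliver(entry)
        except Exception as e:
            # Previously: report ERROR, skip the entry, keep going.
            # Now: escalate, which terminates the connection.
            raise FatalError(f"error while processing notification: {e}")
```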
  
Reviewed-by: Matheus Alcantara <matheusssilv97@gmail.com>  
Reviewed-by: Álvaro Herrera <alvherre@kurilemu.de>  
Reviewed-by: Arseniy Mukhin <arseniy.mukhin.dev@gmail.com>  
Discussion: https://www.postgresql.org/message-id/fedbd908-4571-4bbe-b48e-63bfdcc38f64@iki.fi  
Backpatch-through: 14  

M src/backend/commands/async.c

doc: Document effects of ownership change on privileges

commit   : a00c9618bf2c98aabaf217fde1cb2d5d599b522c    
  
author   : Daniel Gustafsson <dgustafsson@postgresql.org>    
date     : Wed, 12 Nov 2025 17:04:35 +0100    
  
committer: Daniel Gustafsson <dgustafsson@postgresql.org>    
date     : Wed, 12 Nov 2025 17:04:35 +0100    

Click here for diff

Explicitly document that privileges are transferred along with the  
ownership. Backpatch to all supported versions since this behavior  
has always been present.  
  
Author: Laurenz Albe <laurenz.albe@cybertec.at>  
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>  
Reviewed-by: David G. Johnston <david.g.johnston@gmail.com>  
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>  
Reviewed-by: Josef Šimánek <josef.simanek@gmail.com>  
Reported-by: Gilles Parc <gparc@free.fr>  
Discussion: https://postgr.es/m/2023185982.281851219.1646733038464.JavaMail.root@zimbra15-e2.priv.proxad.net  
Backpatch-through: 14  

M doc/src/sgml/ddl.sgml

Fix range for commit_siblings in sample conf

commit   : 1aa5a029fcef7f4f971f3a2b686d2d4280ee44c2    
  
author   : Daniel Gustafsson <dgustafsson@postgresql.org>    
date     : Wed, 12 Nov 2025 13:51:53 +0100    
  
committer: Daniel Gustafsson <dgustafsson@postgresql.org>    
date     : Wed, 12 Nov 2025 13:51:53 +0100    

Click here for diff

The range for commit_siblings was incorrectly listed as starting at 1  
instead of 0 in the sample configuration file.  Backpatch to all  
supported branches.  
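
With the fix, the sample file's comment matches the parameter's actual range; the corrected line looks roughly like this (spacing approximate):

```
#commit_siblings = 5                    # range 0-1000
```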
  
Author: Man Zeng <zengman@halodbtech.com>  
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>  
Discussion: https://postgr.es/m/tencent_53B70BA72303AE9C6889E78E@qq.com  
Backpatch-through: 14  

M src/backend/utils/misc/postgresql.conf.sample

Fix pg_upgrade around multixid and mxoff wraparound

commit   : cb2ef0e92edb9710e599d0a8b1f850fb46859787    
  
author   : Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Wed, 12 Nov 2025 12:20:16 +0200    
  
committer: Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Wed, 12 Nov 2025 12:20:16 +0200    

Click here for diff

pg_resetwal didn't accept multixid 0 or multixact offset UINT32_MAX,  
but they are both valid values that can appear in the control file.  
That caused pg_upgrade to fail if you tried to upgrade a cluster  
exactly at multixid or offset wraparound, because pg_upgrade calls  
pg_resetwal to restore multixid/offset on the new cluster to the  
values from the old cluster. To fix, allow those values in  
pg_resetwal.  
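
The relaxed validation can be sketched like this (a hypothetical Python helper; the real checks live in pg_resetwal.c):

```python
# Sketch of the fix described above: multixid 0 and a multixact offset
# of UINT32_MAX are legitimate wraparound values that can appear in the
# control file, so pg_resetwal must accept them.

UINT32_MAX = 2**32 - 1

def validate_multixact_args(next_mxid, next_mxoff):
    # Before the fix, next_mxid == 0 and next_mxoff == UINT32_MAX were
    # rejected outright; now only values outside the unsigned 32-bit
    # range are errors.
    for val in (next_mxid, next_mxoff):
        if not (0 <= val <= UINT32_MAX):
            raise ValueError(f"multixact value out of range: {val}")
    return True
```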
  
Fixes bugs #18863 and #18865 reported by Dmitry Kovalenko.  
  
Backpatch down to v15. Version 14 has the same bug, but the patch  
doesn't apply cleanly there. It could be made to work but it doesn't  
seem worth the effort given how rare it is to hit this problem with  
pg_upgrade, and how few people are upgrading to v14 anymore.  
  
Author: Maxim Orlov <orlovmg@gmail.com>  
Discussion: https://www.postgresql.org/message-id/CACG%3DezaApSMTjd%3DM2Sfn5Ucuggd3FG8Z8Qte8Xq9k5-%2BRQis-g@mail.gmail.com  
Discussion: https://www.postgresql.org/message-id/18863-72f08858855344a2@postgresql.org  
Discussion: https://www.postgresql.org/message-id/18865-d4c66cf35c2a67af@postgresql.org  
Backpatch-through: 15  

M src/bin/pg_resetwal/pg_resetwal.c
M src/bin/pg_resetwal/t/001_basic.pl

doc: Fix incorrect synopsis for ALTER PUBLICATION ... DROP ...

commit   : d5c32753b033d7aa62382dfe4b10b0c2699b6ee1    
  
author   : Fujii Masao <fujii@postgresql.org>    
date     : Wed, 12 Nov 2025 13:37:58 +0900    
  
committer: Fujii Masao <fujii@postgresql.org>    
date     : Wed, 12 Nov 2025 13:37:58 +0900    

Click here for diff

The synopsis for the ALTER PUBLICATION ... DROP ... command incorrectly  
implied that a column list and WHERE clause could be specified as part of  
the publication object. However, these options are not allowed for  
DROP operations, making the documentation misleading.  
  
This commit corrects the synopsis to clearly show only the valid  
forms of publication objects.  
  
Backpatched to v15, where the incorrect synopsis was introduced.  
  
Author: Peter Smith <smithpb2250@gmail.com>  
Reviewed-by: Fujii Masao <masao.fujii@gmail.com>  
Discussion: https://postgr.es/m/CAHut+PsPu+47Q7b0o6h1r-qSt90U3zgbAHMHUag5o5E1Lo+=uw@mail.gmail.com  
Backpatch-through: 15  

M doc/src/sgml/ref/alter_publication.sgml

Report better object limits in error messages for injection points

commit   : f30cd34b3fd01a7d59080a7d074a1d2c6c670b12    
  
author   : Michael Paquier <michael@paquier.xyz>    
date     : Wed, 12 Nov 2025 10:19:20 +0900    
  
committer: Michael Paquier <michael@paquier.xyz>    
date     : Wed, 12 Nov 2025 10:19:20 +0900    

Click here for diff

Previously, error messages for oversized injection point names, libraries,  
and functions reported the buffer sizes (64, 128, 128) instead of the  
usable character limits (63, 127, 127), because they did not account for  
the terminating NUL byte, which was confusing.  These messages are  
adjusted to reflect the actual limits.  
  
The limit enforced for the private area was also too strict by one byte:  
specifying an area of exactly INJ_PRIVATE_MAXLEN bytes should work,  
because there is no terminating NUL byte in this case.  
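
The off-by-one being fixed is classic NUL-terminator accounting; a sketch (hypothetical names; the real constants live in injection_point.c):

```python
# Sketch of the terminator accounting described above. A fixed buffer
# of N bytes holds at most N-1 characters of a NUL-terminated string,
# so the user-facing limit to report is N-1, not N. The private area is
# raw bytes with no terminator, so all N bytes are usable.

NAME_BUF_SIZE = 64          # bytes reserved for the name
PRIVATE_MAXLEN = 1024       # bytes reserved for the private area

def check_name(name):
    limit = NAME_BUF_SIZE - 1           # one byte goes to the terminator
    if len(name) > limit:
        raise ValueError(f"name too long (maximum of {limit} characters)")

def check_private_area(data):
    if len(data) > PRIVATE_MAXLEN:      # no terminator: full size usable
        raise ValueError(
            f"private area too large (maximum of {PRIVATE_MAXLEN} bytes)")
```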
  
This is mostly a stylistic change (a private_area of exactly 1024 bytes  
can now be defined, something that nobody seems to care about based on  
the lack of complaints).  However, as this is a testing facility, let's  
keep the logic consistent across all the branches where this code  
exists, since out-of-core extensions may also use injection points.  
  
Author: Xuneng Zhou <xunengzhou@gmail.com>  
Co-authored-by: Michael Paquier <michael@paquier.xyz>  
Discussion: https://postgr.es/m/CABPTF7VxYp4Hny1h+7ejURY-P4O5-K8WZg79Q3GUx13cQ6B2kg@mail.gmail.com  
Backpatch-through: 17  

M src/backend/utils/misc/injection_point.c

Add check for large files in meson.build

commit   : f6c1342e72aa72b2490da30491b6003d7ab25095    
  
author   : Michael Paquier <michael@paquier.xyz>    
date     : Wed, 12 Nov 2025 09:02:32 +0900    
  
committer: Michael Paquier <michael@paquier.xyz>    
date     : Wed, 12 Nov 2025 09:02:32 +0900    

Click here for diff

A similar check existed in the MSVC scripts that have been removed in  
v17 by 1301c80b2167, but nothing of the kind was checked in meson when  
building with a 4-byte off_t.  
  
This commit adds a check to fail the build when trying to use a  
relation file size larger than 1GB while off_t is 4 bytes, as  
./configure already does, rather than detecting these failures at  
runtime, because the code cannot handle large files in this case.  
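
The build-time check can be sketched as (an illustrative Python model; the real logic is a meson.build condition):

```python
# Sketch of the check described above: reject a relation file size over
# 1GB when off_t is only 4 bytes, since file offsets could then
# overflow. Failing at build time beats a runtime failure.

ONE_GB = 1024**3

def check_segment_size(segsize_bytes, sizeof_off_t):
    if sizeof_off_t < 8 and segsize_bytes > ONE_GB:
        raise ValueError("Large file support is not enabled; "
                         "segment size must not exceed 1GB")
```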
  
Backpatch down to v16, where meson has been introduced.  
  
Discussion: https://postgr.es/m/aQ0hG36IrkaSGfN8@paquier.xyz  
Backpatch-through: 16  

M meson.build