Stamp 16.12.
commit : e15d96551f9760e62888b5082ad050329c1c4cdf
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Mon, 9 Feb 2026 16:53:53 -0500
committer: Tom Lane <tgl@sss.pgh.pa.us>
date : Mon, 9 Feb 2026 16:53:53 -0500
M configure
M configure.ac
M meson.build
Last-minute updates for release notes.
commit : 9889b3b64fe6bb52084159ae9bc5f2f5943fdd8a
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Mon, 9 Feb 2026 14:01:20 -0500
committer: Tom Lane <tgl@sss.pgh.pa.us>
date : Mon, 9 Feb 2026 14:01:20 -0500
Security: CVE-2026-2003, CVE-2026-2004, CVE-2026-2005, CVE-2026-2006, CVE-2026-2007
M doc/src/sgml/release-16.sgml
Fix test "NUL byte in text decrypt" for --without-zlib builds.
commit : 763671b745e1d8c4a9825dd48bdf02ead034de1c
author : Noah Misch <noah@leadboat.com>
date : Mon, 9 Feb 2026 09:08:10 -0800
committer: Noah Misch <noah@leadboat.com>
date : Mon, 9 Feb 2026 09:08:10 -0800
Backpatch-through: 14
Security: CVE-2026-2006
M contrib/pgcrypto/expected/pgp-decrypt.out
M contrib/pgcrypto/expected/pgp-decrypt_1.out
M contrib/pgcrypto/sql/pgp-decrypt.sql
Harden _int_matchsel() against being attached to the wrong operator.
commit : c0887b39dc5440babd935ab9cab84fbf80454389
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Mon, 9 Feb 2026 10:14:22 -0500
committer: Tom Lane <tgl@sss.pgh.pa.us>
date : Mon, 9 Feb 2026 10:14:22 -0500
While the preceding commit prevented such attachments from occurring
in future, this one aims to prevent further abuse of any already-
created operator that exposes _int_matchsel to the wrong data types.
(No other contrib module has a vulnerable selectivity estimator.)
We need only check that the Const we've found in the query is indeed
of the type we expect (query_int), but there's a difficulty: as an
extension type, query_int doesn't have a fixed OID that we could
hard-code into the estimator.
Therefore, the bulk of this patch consists of infrastructure to let
an extension function securely look up the OID of a datatype
belonging to the same extension. (Extension authors have requested
such functionality before, so we anticipate that this code will
have additional non-security uses, and may soon be extended to allow
looking up other kinds of SQL objects.)
This is done by first finding the extension that owns the calling
function (there can be only one), and then thumbing through the
objects owned by that extension to find a type that has the desired
name. This is relatively expensive, especially for large extensions,
so a simple cache is put in front of these lookups.
Reported-by: Daniel Firer as part of zeroday.cloud
Author: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Noah Misch <noah@leadboat.com>
Security: CVE-2026-2004
Backpatch-through: 14
M contrib/intarray/_int_selfuncs.c
M src/backend/catalog/pg_depend.c
M src/backend/commands/extension.c
M src/include/catalog/dependency.h
M src/include/commands/extension.h
M src/tools/pgindent/typedefs.list
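As an illustration of the lookup pattern this enables, here is a minimal C sketch; extension_lookup_type_oid() is a placeholder name for the cached lookup described above, not the committed API:

    #include "postgres.h"
    #include "catalog/dependency.h"     /* getExtensionOfObject() */
    #include "catalog/pg_proc.h"        /* ProcedureRelationId */
    #include "fmgr.h"

    /* Placeholder for the cached extension-object lookup added here. */
    extern Oid extension_lookup_type_oid(Oid extensionOid, const char *typname);

    /* Return true only if the Const's type really is intarray's query_int. */
    static bool
    const_is_query_int(FunctionCallInfo fcinfo, Oid consttype)
    {
        /* Find the extension that owns the C function currently running. */
        Oid     extoid = getExtensionOfObject(ProcedureRelationId,
                                              fcinfo->flinfo->fn_oid);

        if (!OidIsValid(extoid))
            return false;

        /* Ask for the OID of the query_int type owned by that extension. */
        return consttype == extension_lookup_type_oid(extoid, "query_int");
    }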
Require superuser to install a non-built-in selectivity estimator.
commit : 91d7c0bfdd4ae1da0aa3946e35eb1327b5ca2e6d
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Mon, 9 Feb 2026 10:07:31 -0500
committer: Tom Lane <tgl@sss.pgh.pa.us>
date : Mon, 9 Feb 2026 10:07:31 -0500
Selectivity estimators come in two flavors: those that make specific
assumptions about the data types they are working with, and those
that don't. Most of the built-in estimators are of the latter kind
and are meant to be safely attachable to any operator. If the
operator does not behave as the estimator expects, you might get a
poor estimate, but it won't crash.
However, estimators that do make datatype assumptions can malfunction
if they are attached to the wrong operator, since then the data they
get from pg_statistic may not be of the type they expect. This can
rise to the level of a security problem, even permitting arbitrary
code execution by a user who has the ability to create SQL objects.
To close this hole, establish a rule that built-in estimators are
required to protect themselves against being called on the wrong type
of data. It does not seem practical however to expect estimators in
extensions to reach a similar level of security, at least not in the
near term. Therefore, also establish a rule that superuser privilege
is required to attach a non-built-in estimator to an operator.
We expect that this restriction will have little negative impact on
extensions, since estimators generally have to be written in C and
thus superuser privilege is required to create them in the first
place.
This commit changes the privilege checks in CREATE/ALTER OPERATOR
to enforce the rule about superuser privilege, and fixes a couple
of built-in estimators that were making datatype assumptions without
sufficiently checking that they're valid.
Reported-by: Daniel Firer as part of zeroday.cloud
Author: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Noah Misch <noah@leadboat.com>
Security: CVE-2026-2004
Backpatch-through: 14
M src/backend/commands/operatorcmds.c
M src/backend/tsearch/ts_selfuncs.c
M src/backend/utils/adt/network_selfuncs.c
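A minimal C sketch of that rule, assuming the check lives in the CREATE/ALTER OPERATOR path; the boundary test and message wording are illustrative, not the committed code:

    #include "postgres.h"
    #include "access/transam.h"     /* FirstNormalObjectId */
    #include "miscadmin.h"          /* superuser() */

    /* Reject non-built-in estimators unless the caller is a superuser. */
    static void
    check_estimator_is_allowed(Oid estimatorOid)
    {
        if (estimatorOid >= FirstNormalObjectId && !superuser())
            ereport(ERROR,
                    (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
                     errmsg("must be superuser to specify a non-built-in selectivity estimator")));
    }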
Add a syscache on pg_extension.oid.
commit : d484bc26013caedd0b210bddfcec43c2ab6fa649
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Mon, 9 Feb 2026 10:02:23 -0500
committer: Tom Lane <tgl@sss.pgh.pa.us>
date : Mon, 9 Feb 2026 10:02:23 -0500
An upcoming patch requires this cache so that it can track updates
in the pg_extension catalog. So far though, the EXTENSIONOID cache
only exists in v18 and up (see 490f869d9). We can add it in older
branches without an ABI break, if we are careful not to disturb the
numbering of existing syscache IDs.
In v16 and before, that just requires adding the new ID at the end
of the hand-assigned enum list, ignoring our convention about
alphabetizing the IDs. But in v17, genbki.pl enforces alphabetical
order of the IDs listed in MAKE_SYSCACHE macros. We can fake it
out by calling the new cache ZEXTENSIONOID.
Note that adding a syscache does change the required contents of the
relcache init file (pg_internal.init). But that isn't problematic
since we blow those away at postmaster start for other reasons.
Author: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Noah Misch <noah@leadboat.com>
Security: CVE-2026-2004
Backpatch-through: 14-17
M src/backend/utils/cache/syscache.c
M src/include/utils/syscache.h
Guard against unexpected dimensions of oidvector/int2vector.
commit : 595956fc7268b5183c1e0e39673e478febbd008f
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Mon, 9 Feb 2026 09:57:44 -0500
committer: Tom Lane <tgl@sss.pgh.pa.us>
date : Mon, 9 Feb 2026 09:57:44 -0500
These data types are represented like full-fledged arrays, but
functions that deal specifically with these types assume that the
array is 1-dimensional and contains no nulls. However, there are
cast pathways that allow general oid[] or int2[] arrays to be cast
to these types, allowing these expectations to be violated. This
can be exploited to cause server memory disclosure or SIGSEGV.
Fix by installing explicit checks in functions that accept these
types.
Reported-by: Altan Birler <altan.birler@tum.de>
Author: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Noah Misch <noah@leadboat.com>
Security: CVE-2026-2003
Backpatch-through: 14
M src/backend/access/hash/hashfunc.c
M src/backend/access/nbtree/nbtcompare.c
M src/backend/utils/adt/format_type.c
M src/backend/utils/adt/int.c
M src/backend/utils/adt/oid.c
M src/include/utils/builtins.h
M src/test/regress/expected/arrays.out
M src/test/regress/sql/arrays.sql
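A minimal C sketch of such a check, with an illustrative function name and error message:

    #include "postgres.h"
    #include "utils/array.h"

    /* Insist on the fixed shape oidvector/int2vector are assumed to have. */
    static void
    check_vector_shape(ArrayType *v)
    {
        if (ARR_NDIM(v) != 1 || ARR_HASNULL(v))
            ereport(ERROR,
                    (errcode(ERRCODE_INVALID_PARAMETER_VALUE),
                     errmsg("invalid oidvector or int2vector value")));
    }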
Require PGP-decrypted text to pass encoding validation.
commit : 0c33d560899f80f23bb393269e992fa104e8c79f
author : Noah Misch <noah@leadboat.com>
date : Mon, 9 Feb 2026 06:14:47 -0800
committer: Noah Misch <noah@leadboat.com>
date : Mon, 9 Feb 2026 06:14:47 -0800
pgp_sym_decrypt() and pgp_pub_decrypt() will raise such errors, while
bytea variants will not. The existing "dat3" test decrypted to non-UTF8
text, so switch that query to bytea.
The long-term intent is for type "text" to always be valid in the
database encoding. pgcrypto has long been known as a source of
exceptions to that intent, but a report about exploiting invalid values
of type "text" brought this module to the forefront. This particular
exception is straightforward to fix, with reasonable effect on user
queries. Back-patch to v14 (all supported versions).
Reported-by: Paul Gerste (as part of zeroday.cloud)
Reported-by: Moritz Sanft (as part of zeroday.cloud)
Author: shihao zhong <zhong950419@gmail.com>
Reviewed-by: cary huang <hcary328@gmail.com>
Discussion: https://postgr.es/m/CAGRkXqRZyo0gLxPJqUsDqtWYBbgM14betsHiLRPj9mo2=z9VvA@mail.gmail.com
Backpatch-through: 14
Security: CVE-2026-2006
M contrib/pgcrypto/expected/pgp-decrypt.out
M contrib/pgcrypto/expected/pgp-decrypt_1.out
M contrib/pgcrypto/pgp-pgsql.c
M contrib/pgcrypto/sql/pgp-decrypt.sql
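Roughly, the added validation amounts to the following C sketch, using the existing pg_verify_mbstr() API (helper name illustrative):

    #include "postgres.h"
    #include "mb/pg_wchar.h"

    /*
     * Verify decrypted bytes in the database encoding before returning
     * them as text; pg_verify_mbstr() raises "invalid byte sequence for
     * encoding ..." on failure.
     */
    static void
    check_decrypted_text(const char *buf, int len)
    {
        (void) pg_verify_mbstr(GetDatabaseEncoding(), buf, len, false);
    }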
Code coverage for most pg_mblen* calls.
commit : 4c08960d97e950b00a4f6bf255d5409da98c6032
author : Thomas Munro <tmunro@postgresql.org>
date : Mon, 12 Jan 2026 10:20:06 +1300
committer: Thomas Munro <tmunro@postgresql.org>
date : Mon, 12 Jan 2026 10:20:06 +1300
A security patch changed them today, so close the coverage gap now.
Test that buffer overrun is avoided when pg_mblen*() requires more
than the number of bytes remaining.
This does not cover the calls in dict_thesaurus.c or in dict_synonym.c.
That code is straightforward. To change that code's input, one must
have access to modify installed OS files, so low-privilege users are not
a threat. Testing this would likewise require changing installed
share/postgresql/tsearch_data, which was enough of an obstacle to not
bother.
Security: CVE-2026-2006
Backpatch-through: 14
Co-authored-by: Thomas Munro <thomas.munro@gmail.com>
Co-authored-by: Noah Misch <noah@leadboat.com>
Reviewed-by: Heikki Linnakangas <hlinnaka@iki.fi>
M contrib/pg_trgm/Makefile
A contrib/pg_trgm/data/trgm_utf8.data
A contrib/pg_trgm/expected/pg_utf8_trgm.out
A contrib/pg_trgm/expected/pg_utf8_trgm_1.out
M contrib/pg_trgm/meson.build
A contrib/pg_trgm/sql/pg_utf8_trgm.sql
M src/backend/utils/adt/arrayfuncs.c
A src/test/regress/expected/encoding.out
A src/test/regress/expected/encoding_1.out
A src/test/regress/expected/euc_kr.out
A src/test/regress/expected/euc_kr_1.out
M src/test/regress/parallel_schedule
M src/test/regress/regress.c
A src/test/regress/sql/encoding.sql
A src/test/regress/sql/euc_kr.sql
Replace pg_mblen() with bounds-checked versions.
commit : d837fb02925561091a70c5a6a74f42da57a022f9
author : Thomas Munro <tmunro@postgresql.org>
date : Wed, 7 Jan 2026 22:14:31 +1300
committer: Thomas Munro <tmunro@postgresql.org>
date : Wed, 7 Jan 2026 22:14:31 +1300
A corrupted string could cause code that iterates with pg_mblen() to
overrun its buffer. Fix by converting all callers to one of the
following:
1. Callers with a null-terminated string now use pg_mblen_cstr(), which
raises an "illegal byte sequence" error if it finds a terminator in the
middle of the sequence.
2. Callers with a length or end pointer now use either
pg_mblen_with_len() or pg_mblen_range(), for the same effect, depending
on which of the two seems more convenient at each site.
3. A small number of cases pre-validate a string, and can use
pg_mblen_unbounded().
The traditional pg_mblen() function and COPYCHAR macro still exist for
backward compatibility, but are no longer used by core code and are
hereby deprecated. The same applies to the t_isXXX() functions.
Security: CVE-2026-2006
Backpatch-through: 14
Co-authored-by: Thomas Munro <thomas.munro@gmail.com>
Co-authored-by: Noah Misch <noah@leadboat.com>
Reviewed-by: Heikki Linnakangas <hlinnaka@iki.fi>
Reported-by: Paul Gerste (as part of zeroday.cloud)
Reported-by: Moritz Sanft (as part of zeroday.cloud)
M contrib/btree_gist/btree_utils_var.c
M contrib/dict_xsyn/dict_xsyn.c
M contrib/hstore/hstore_io.c
M contrib/ltree/lquery_op.c
M contrib/ltree/ltree.h
M contrib/ltree/ltree_io.c
M contrib/ltree/ltxtquery_io.c
M contrib/pageinspect/heapfuncs.c
M contrib/pg_trgm/trgm.h
M contrib/pg_trgm/trgm_op.c
M contrib/pg_trgm/trgm_regexp.c
M contrib/unaccent/unaccent.c
M src/backend/catalog/pg_proc.c
M src/backend/tsearch/dict_synonym.c
M src/backend/tsearch/dict_thesaurus.c
M src/backend/tsearch/regis.c
M src/backend/tsearch/spell.c
M src/backend/tsearch/ts_locale.c
M src/backend/tsearch/ts_utils.c
M src/backend/tsearch/wparser_def.c
M src/backend/utils/adt/encode.c
M src/backend/utils/adt/formatting.c
M src/backend/utils/adt/jsonfuncs.c
M src/backend/utils/adt/jsonpath_gram.y
M src/backend/utils/adt/levenshtein.c
M src/backend/utils/adt/like.c
M src/backend/utils/adt/like_match.c
M src/backend/utils/adt/oracle_compat.c
M src/backend/utils/adt/regexp.c
M src/backend/utils/adt/tsquery.c
M src/backend/utils/adt/tsvector.c
M src/backend/utils/adt/tsvector_op.c
M src/backend/utils/adt/tsvector_parser.c
M src/backend/utils/adt/varbit.c
M src/backend/utils/adt/varlena.c
M src/backend/utils/adt/xml.c
M src/backend/utils/mb/mbutils.c
M src/include/mb/pg_wchar.h
M src/include/tsearch/ts_locale.h
M src/include/tsearch/ts_utils.h
M src/test/modules/test_regex/test_regex.c
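A hedged C sketch of the semantics described for pg_mblen_with_len(); the real implementation and error reporting may differ:

    #include "postgres.h"
    #include "mb/pg_wchar.h"

    /* Like pg_mblen(), but never report a length past the bytes we have. */
    static inline int
    mblen_with_len_sketch(const char *mbstr, int remaining)
    {
        int     len = pg_mblen(mbstr);

        if (len > remaining)
            ereport(ERROR,
                    (errcode(ERRCODE_CHARACTER_NOT_IN_REPERTOIRE),
                     errmsg("invalid byte sequence for encoding \"%s\"",
                            GetDatabaseEncodingName())));
        return len;
    }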
Fix mb2wchar functions on short input.
commit : b0e3f5cf94086baa3b3b13630db333be3e525f27
author : Thomas Munro <tmunro@postgresql.org>
date : Mon, 26 Jan 2026 11:22:32 +1300
committer: Thomas Munro <tmunro@postgresql.org>
date : Mon, 26 Jan 2026 11:22:32 +1300
When converting multibyte to pg_wchar, the UTF-8 implementation would
silently ignore an incomplete final character, while the other
implementations would cast a single byte to pg_wchar, and then repeat
for the remaining byte sequence. While it didn't overrun the buffer, it
was surely garbage output.
Make all encodings behave like the UTF-8 implementation. A later change
for master only will convert this to an error, but we choose not to
back-patch that behavior change on the off-chance that someone is
relying on the existing UTF-8 behavior.
Security: CVE-2026-2006
Backpatch-through: 14
Author: Thomas Munro <thomas.munro@gmail.com>
Reported-by: Noah Misch <noah@leadboat.com>
Reviewed-by: Noah Misch <noah@leadboat.com>
Reviewed-by: Heikki Linnakangas <hlinnaka@iki.fi>
M src/common/wchar.c
Fix encoding length for EUC_CN.
commit : 70ff9ede5ad7a2636bc15b03373535ab990fd254
author : Thomas Munro <tmunro@postgresql.org>
date : Thu, 5 Feb 2026 01:04:24 +1300
committer: Thomas Munro <tmunro@postgresql.org>
date : Thu, 5 Feb 2026 01:04:24 +1300
While EUC_CN supports only 1- and 2-byte sequences (CS0, CS1), the
mb<->wchar conversion functions allow 3-byte sequences beginning with
SS2 or SS3.
Change pg_encoding_max_length() to return 3, not 2, to close a
hypothesized buffer overrun if a corrupted string is converted to wchar
and back again in a newly allocated buffer. We might reconsider that in
master (ie harmonizing in a different direction), but this change seems
better for the back-branches.
Also change pg_euccn_mblen() to report SS2 and SS3 characters as having
length 3 (following the example of EUC_KR). Even though such characters
would not pass verification, it's remotely possible that invalid bytes
could be used to compute a buffer size for use in wchar conversion.
Security: CVE-2026-2006
Backpatch-through: 14
Author: Thomas Munro <thomas.munro@gmail.com>
Reviewed-by: Noah Misch <noah@leadboat.com>
Reviewed-by: Heikki Linnakangas <hlinnaka@iki.fi>
M src/common/wchar.c
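A hedged C sketch of the pg_euccn_mblen() behavior described above; the actual code in wchar.c may differ:

    #include "postgres.h"
    #include "mb/pg_wchar.h"    /* SS2, SS3 */

    /* Report SS2/SS3 lead bytes as 3-byte sequences, as EUC_KR does. */
    static int
    euccn_mblen_sketch(const unsigned char *s)
    {
        if (*s == SS2 || *s == SS3)
            return 3;
        if (IS_HIGHBIT_SET(*s))
            return 2;
        return 1;
    }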
pgcrypto: Fix buffer overflow in pgp_pub_decrypt_bytea()
commit : 527b730f41b2f2fbcda92cfd1dbbc50c14c9a46f
author : Michael Paquier <michael@paquier.xyz>
date : Mon, 9 Feb 2026 08:01:09 +0900
committer: Michael Paquier <michael@paquier.xyz>
date : Mon, 9 Feb 2026 08:01:09 +0900
pgp_pub_decrypt_bytea() was missing a safeguard on the session key
length read from the message data that is given as input to
pgp_pub_decrypt_bytea(). This could result in a buffer overflow of the
session key data when the specified length is longer than PGP_MAX_KEY,
the maximum size of the buffer that the session data is copied into.
A script able to rebuild the message and key data that can trigger the
overflow is included in this commit, based on some contents provided by
the reporter, heavily edited by me. A SQL test is added, based on the
data generated by the script.
Reported-by: Team Xint Code as part of zeroday.cloud
Author: Michael Paquier <michael@paquier.xyz>
Reviewed-by: Noah Misch <noah@leadboat.com>
Security: CVE-2026-2005
Backpatch-through: 14
M contrib/pgcrypto/Makefile
A contrib/pgcrypto/expected/pgp-pubkey-session.out
M contrib/pgcrypto/meson.build
M contrib/pgcrypto/pgp-pubdec.c
M contrib/pgcrypto/px.c
M contrib/pgcrypto/px.h
A contrib/pgcrypto/scripts/pgp_session_data.py
A contrib/pgcrypto/sql/pgp-pubkey-session.sql
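The shape of the missing check, as a hedged C sketch; variable names and the error-code choice are illustrative, not the committed code:

    #include "postgres.h"
    #include "px.h"
    #include "pgp.h"

    /* Refuse a session-key length that exceeds the fixed-size buffer. */
    static int
    check_session_key_len(int ses_key_len)
    {
        if (ses_key_len <= 0 || ses_key_len > PGP_MAX_KEY)
            return PXE_PGP_CORRUPT_DATA;
        return 0;
    }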
Release notes for 18.2, 17.8, 16.12, 15.16, 14.21.
commit : 79378568178764d962d11e1f5a00a7bf7f480278
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Sun, 8 Feb 2026 13:00:40 -0500
committer: Tom Lane <tgl@sss.pgh.pa.us>
date : Sun, 8 Feb 2026 13:00:40 -0500
M doc/src/sgml/release-16.sgml
Translation updates
commit : 2104e68de1df01c0da67835327d578de63fc21f2
author : Peter Eisentraut <peter@eisentraut.org>
date : Sun, 8 Feb 2026 15:11:02 +0100
committer: Peter Eisentraut <peter@eisentraut.org>
date : Sun, 8 Feb 2026 15:11:02 +0100
Source-Git-URL: https://git.postgresql.org/git/pgtranslation/messages.git
Source-Git-Hash: 1d1a9da458df0b05f3ca2951f42b00f61ecbedc3
M src/backend/po/de.po
M src/backend/po/es.po
M src/backend/po/ja.po
M src/backend/po/ru.po
M src/backend/po/sv.po
M src/backend/po/uk.po
M src/bin/initdb/po/es.po
M src/bin/pg_amcheck/po/es.po
M src/bin/pg_archivecleanup/po/es.po
M src/bin/pg_basebackup/po/es.po
M src/bin/pg_basebackup/po/ru.po
M src/bin/pg_basebackup/po/uk.po
M src/bin/pg_checksums/po/es.po
M src/bin/pg_config/po/es.po
M src/bin/pg_controldata/po/es.po
M src/bin/pg_ctl/po/es.po
M src/bin/pg_dump/po/es.po
M src/bin/pg_dump/po/ru.po
M src/bin/pg_dump/po/uk.po
M src/bin/pg_resetwal/po/de.po
M src/bin/pg_resetwal/po/es.po
M src/bin/pg_resetwal/po/ja.po
M src/bin/pg_resetwal/po/ru.po
M src/bin/pg_resetwal/po/sv.po
M src/bin/pg_resetwal/po/uk.po
M src/bin/pg_rewind/po/es.po
M src/bin/pg_rewind/po/ru.po
M src/bin/pg_rewind/po/sv.po
M src/bin/pg_test_fsync/po/es.po
M src/bin/pg_test_timing/po/es.po
M src/bin/pg_upgrade/po/es.po
M src/bin/pg_upgrade/po/uk.po
M src/bin/pg_verifybackup/po/es.po
M src/bin/pg_waldump/po/es.po
M src/bin/psql/po/de.po
M src/bin/psql/po/es.po
M src/bin/psql/po/ja.po
M src/bin/psql/po/ru.po
M src/bin/psql/po/sv.po
M src/bin/psql/po/uk.po
M src/bin/scripts/po/es.po
M src/interfaces/ecpg/ecpglib/po/es.po
M src/interfaces/ecpg/preproc/po/es.po
M src/interfaces/libpq/po/de.po
M src/interfaces/libpq/po/es.po
M src/interfaces/libpq/po/fr.po
M src/interfaces/libpq/po/ja.po
M src/interfaces/libpq/po/ru.po
M src/interfaces/libpq/po/sv.po
M src/interfaces/libpq/po/uk.po
M src/pl/plperl/po/es.po
M src/pl/plpgsql/src/po/es.po
M src/pl/plpython/po/es.po
M src/pl/tcl/po/es.po
meson: host_system value for Solaris is 'sunos' not 'solaris'.
commit : 7369656faa6e7c403fb2757b62a1e6a6cd179c76
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Sat, 7 Feb 2026 20:05:52 -0500
committer: Tom Lane <tgl@sss.pgh.pa.us>
date : Sat, 7 Feb 2026 20:05:52 -0500
This thinko caused us to not substitute our own getopt() code,
which results in failing to parse long options for the postmaster
since Solaris' getopt() doesn't do what we expect. This can be seen
in the results of buildfarm member icarus, which is the only one
trying to build via meson on Solaris.
Per consultation with pgsql-release, it seems okay to fix this
now even though we're in release freeze. The fix visibly won't
affect any other platforms, and it can't break Solaris/meson
builds any worse than they're already broken.
Discussion: https://postgr.es/m/2471229.1770499291@sss.pgh.pa.us
Backpatch-through: 16
M meson.build
Further error message fix
commit : a7bdbbadac3b984555d096d9a2765f71906bd9f2
author : Peter Eisentraut <peter@eisentraut.org>
date : Sat, 7 Feb 2026 22:37:02 +0100
committer: Peter Eisentraut <peter@eisentraut.org>
date : Sat, 7 Feb 2026 22:37:02 +0100
Further fix of error message changed in commit 74a116a79b4. The
initial fix was not quite correct.
Discussion: https://www.postgresql.org/message-id/flat/tencent_1EE1430B1E6C18A663B8990F%40qq.com
M src/bin/pg_rewind/file_ops.c
Placate ABI checker.
commit : f84a8d95ea57711d6f740e75dab2f007acc4b647
author : Thomas Munro <tmunro@postgresql.org>
date : Sat, 7 Feb 2026 11:50:35 +1300
committer: Thomas Munro <tmunro@postgresql.org>
date : Sat, 7 Feb 2026 11:50:35 +1300
It's not really an ABI break if you change the layout/size of an object
with incomplete type, as commit f94e9141 did, so advance the ABI
compliance reference commit in 16-18 to satisfy build farm animal crake.
Backpatch-through: 16-18
Discussion: https://www.postgresql.org/message-id/1871492.1770409863%40sss.pgh.pa.us
M .abi-compliance-history
Protect against small overread in SASLprep validation
commit : 46aaec4c0e6d90e9f074982feb43efd4b3c42a78
author : Jacob Champion <jchampion@postgresql.org>
date : Fri, 6 Feb 2026 11:08:59 -0800
committer: Jacob Champion <jchampion@postgresql.org>
date : Fri, 6 Feb 2026 11:08:59 -0800
(This is a cherry-pick of 390b3cbbb, which I hadn't realized wasn't
backpatched. It was originally reported to security@ and determined not
to be a vulnerability; thanks to Stanislav Osipov for noticing the
omission in the back branches.)
In case of torn UTF8 in the input data we might end up going
past the end of the string since we don't account for length.
While validation won't be performed on a sequence with a NULL
byte, it's better to avoid going past the end to begin with.
Fix by taking the length into consideration.
Reported-by: Stanislav Osipov <stasos24@gmail.com>
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Discussion: https://postgr.es/m/CAOYmi+mTnmM172g=_+Yvc47hzzeAsYPy2C4UBY3HK9p-AXNV0g@mail.gmail.com
Backpatch-through: 14
M src/common/saslprep.c
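A hedged C sketch of the guard described above; the helper name is illustrative:

    #include "postgres.h"       /* postgres_fe.h in frontend builds */
    #include "mb/pg_wchar.h"    /* pg_utf_mblen() */

    /* True if the UTF-8 sequence at p fits within the counted input. */
    static bool
    utf8_sequence_fits(const char *p, const char *end_of_input)
    {
        return pg_utf_mblen((const unsigned char *) p) <= (end_of_input - p);
    }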
Fix some error message inconsistencies
commit : 977a17a3eb335f1432322ecae6f0d53bafc9436a
author : Michael Paquier <michael@paquier.xyz>
date : Fri, 6 Feb 2026 15:38:25 +0900
committer: Michael Paquier <michael@paquier.xyz>
date : Fri, 6 Feb 2026 15:38:25 +0900
These errors are very unlikely to show up, but in the event that
they happen, some incorrect information would have been provided:
- In pg_rewind, a stat() failure was reported as an open() failure.
- In pg_combinebackup, a check for the new directory of a tablespace
mapping was referred to as the old directory.
- In pg_combinebackup, a failure in reading a source file when copying
blocks referred to the destination file.
The changes for pg_combinebackup affect v17 and newer versions. For
pg_rewind, all the stable branches are affected.
Author: Man Zeng <zengman@halodbtech.com>
Discussion: https://postgr.es/m/tencent_1EE1430B1E6C18A663B8990F@qq.com
Backpatch-through: 14
M src/bin/pg_rewind/file_ops.c
Add file_extend_method=posix_fallocate,write_zeros.
commit : e37b598028469d260650e5e1671c8275d30e22b6
author : Thomas Munro <tmunro@postgresql.org>
date : Sat, 31 May 2025 22:50:22 +1200
committer: Thomas Munro <tmunro@postgresql.org>
date : Sat, 31 May 2025 22:50:22 +1200
Provide a way to disable the use of posix_fallocate() for relation
files. It was introduced by commit 4d330a61bb1. The new setting
file_extend_method=write_zeros can be used as a workaround for problems
reported from the field:
* BTRFS compression is disabled by the use of posix_fallocate()
* XFS could produce spurious ENOSPC errors in some Linux kernel
versions, though that problem is reported to have been fixed
The default is file_extend_method=posix_fallocate if available, as
before. The write_zeros option is similar to PostgreSQL < 16, except
that now it's multi-block.
Backpatch-through: 16
Reviewed-by: Jakub Wartak <jakub.wartak@enterprisedb.com>
Reported-by: Dimitrios Apostolou <jimis@gmx.net>
Discussion: https://postgr.es/m/b1843124-fd22-e279-a31f-252dffb6fbf2%40gmx.net
M doc/src/sgml/config.sgml
M src/backend/storage/file/fd.c
M src/backend/storage/smgr/md.c
M src/backend/utils/misc/guc_tables.c
M src/backend/utils/misc/postgresql.conf.sample
M src/include/storage/fd.h
Fix logical replication TAP test to read publisher log correctly.
commit : 221232596fc6a26daffa64c294604f4f8507b95c
author : Fujii Masao <fujii@postgresql.org>
date : Thu, 5 Feb 2026 00:46:09 +0900
committer: Fujii Masao <fujii@postgresql.org>
date : Thu, 5 Feb 2026 00:46:09 +0900
Commit 5f13999aa11 added a TAP test for GUC settings passed via the
CONNECTION string in logical replication, but the buildfarm member
sungazer reported test failures.
The test incorrectly used the subscriber's log file position as the
starting offset when reading the publisher's log. As a result, the test
failed to find the expected log message in the publisher's log and
erroneously reported a failure.
This commit fixes the test to use the publisher's own log file position
when reading the publisher's log.
Also, to avoid similar confusion in the future, this commit splits the single
$log_location variable into $log_location_pub and $log_location_sub,
clearly distinguishing publisher and subscriber log positions.
Backpatched to v15, where commit 5f13999aa11 introduced the test.
Per buildfarm member sungazer.
This issue was reported and diagnosed by Alexander Lakhin.
Reported-by: Alexander Lakhin <exclusion@gmail.com>
Discussion: https://postgr.es/m/966ec3d8-1b6f-4f57-ae59-fc7d55bc9a5a@gmail.com
Backpatch-through: 15
M src/test/subscription/t/001_rep_changes.pl
Fix various instances of undefined behavior
commit : 73ac2b37401d7aa910ebd81795bb0f4faf0eaab3
author : John Naylor <john.naylor@postgresql.org>
date : Wed, 4 Feb 2026 17:55:49 +0700
committer: John Naylor <john.naylor@postgresql.org>
date : Wed, 4 Feb 2026 17:55:49 +0700
Mostly this involves checking for NULL pointer before doing operations
that add a non-zero offset.
The exception is an overflow warning in heap_fetch_toast_slice(). This
was caused by unneeded parentheses forcing an expression to be
evaluated to a negative integer, which then got cast to size_t.
Per clang 21 undefined behavior sanitizer.
Backpatch to all supported versions.
Co-authored-by: Alexander Lakhin <exclusion@gmail.com>
Reported-by: Alexander Lakhin <exclusion@gmail.com>
Discussion: https://postgr.es/m/777bd201-6e3a-4da0-a922-4ea9de46a3ee@gmail.com
Backpatch-through: 14
M contrib/pg_trgm/trgm_gist.c
M src/backend/access/heap/heaptoast.c
M src/backend/utils/adt/multirangetypes.c
M src/backend/utils/sort/sharedtuplestore.c
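The pointer-arithmetic pattern being fixed, as a small hedged C sketch:

    #include <stddef.h>

    /*
     * Adding a non-zero offset to a NULL pointer is undefined behavior
     * even if the result is never dereferenced, so test the base first.
     */
    static char *
    offset_if_valid(char *base, size_t offset)
    {
        return (base != NULL) ? base + offset : NULL;
    }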
Improve guards against false regex matches in BackgroundPsql.pm.
commit : 6548e4a10d1807b04bdb9370b110204821b7506c
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Fri, 30 Jan 2026 14:59:25 -0500
committer: Tom Lane <tgl@sss.pgh.pa.us>
date : Fri, 30 Jan 2026 14:59:25 -0500
BackgroundPsql needs to wait for all the output from an interactive
psql command to come back. To make sure that's happened, it issues
the command, then issues \echo and \warn psql commands that echo
a "banner" string (which we assume won't appear in the command's
output), then waits for the banner strings to appear. The hazard
in this approach is that the banner will also appear in the echoed
psql commands themselves, so we need to distinguish those echoes from
the desired output. Commit 8b886a4e3 tried to do that by positing
that the desired output would be directly preceded and followed by
newlines, but it turns out that that assumption is timing-sensitive.
In particular, it tends to fail in builds made --without-readline,
wherein the command echoes will be made by the pty driver and may
be interspersed with prompts issued by psql proper.
It does seem safe to assume that the banner output we want will be
followed by a newline, since that should be the last output before
things quiesce. Therefore, we can improve matters by putting quotes
around the banner strings in the \echo and \warn psql commands, so
that their echoes cannot include banner directly followed by newline,
and then checking for just banner-and-newline in the match pattern.
While at it, spruce up the pump() call in sub query() to look like
the neater version in wait_connect(), and don't die on timeout
until after printing whatever we got.
Reported-by: Oleg Tselebrovskiy <o.tselebrovskiy@postgrespro.ru>
Diagnosed-by: Oleg Tselebrovskiy <o.tselebrovskiy@postgrespro.ru>
Author: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Soumya S Murali <soumyamurali.work@gmail.com>
Discussion: https://postgr.es/m/db6fdb35a8665ad3c18be01181d44b31@postgrespro.ru
Backpatch-through: 14
M src/test/perl/PostgreSQL/Test/BackgroundPsql.pm
Update .abi-compliance-history for change to TransitionCaptureState.
commit : b06e7c10d17276c1aecc06d4267e2663bcb3689c
author : Dean Rasheed <dean.a.rasheed@gmail.com>
date : Fri, 30 Jan 2026 08:50:58 +0000
committer: Dean Rasheed <dean.a.rasheed@gmail.com>
date : Fri, 30 Jan 2026 08:50:58 +0000
As noted in the commit message for b4307ae2e54, the change to the
TransitionCaptureState structure is nominally an ABI break, but it is
not expected to affect any third-party code. Therefore, add it to the
.abi-compliance-history file.
Discussion: https://postgr.es/m/19380-4e293be2b4007248%40postgresql.org
Backpatch-through: 15-18
M .abi-compliance-history
Fix possible issue of a WindowFunc being in the wrong WindowClause
commit : 4297a35196de281e551d500fc71c7033f253cb98
author : David Rowley <drowley@postgresql.org>
date : Mon, 26 Jan 2026 23:47:37 +1300
committer: David Rowley <drowley@postgresql.org>
date : Mon, 26 Jan 2026 23:47:37 +1300
ed1a88dda made it so WindowClauses can be merged when all window
functions belonging to the WindowClause can equally well use some
other WindowClause without any behavioral changes. When that
optimization applies, the WindowFunc's "winref" gets adjusted to
reference the new WindowClause.
That commit does not work well with the deduplication logic in
find_window_functions(), which only added the WindowFunc to the list
when there wasn't already an identical WindowFunc in the list. That
deduplication logic meant that the duplicate WindowFunc wouldn't get the
"winref" changed when optimize_window_clauses() was able to swap the
WindowFunc to another WindowClause. This could lead to the following
error in the unlikely event that the deduplication code did something and
the duplicate WindowFunc happened to be moved into another WindowClause.
ERROR: WindowFunc with winref 2 assigned to WindowAgg with winref 1
As it turns out, the deduplication logic in find_window_functions() is
pretty bogus. It might have done something when added, as that code
predates b8d7f053c, which changed how projections work. As it turns
out, at least now we *will* evaluate the duplicate WindowFuncs. All
that the deduplication code seems to do today is assist in
underestimating the WindowAggPath costs due to not counting the
evaluation costs of duplicate WindowFuncs.
Ideally the fix would be to remove the deduplication code, but that
could result in changes to the plan costs, as duplicate WindowFuncs
would then be costed. Instead, let's play it safe and shift the
deduplication code so it runs after the other processing in
optimize_window_clauses().
Backpatch only as far as v16 as there doesn't seem to be any other harm
done by the WindowFunc deduplication code before then. This issue was
fixed in master by 7027dd499.
Reported-by: Meng Zhang <mza117jc@gmail.com>
Author: Meng Zhang <mza117jc@gmail.com>
Author: David Rowley <dgrowleyml@gmail.com>
Discussion: https://postgr.es/m/CAErYLFAuxmW0UVdgrz7iiuNrxGQnFK_OP9hBD5CUzRgjrVrz=Q@mail.gmail.com
Backpatch-through: 16
M src/backend/optimizer/plan/planner.c
M src/backend/optimizer/util/clauses.c
Fix trigger transition table capture for MERGE in CTE queries.
commit : e7391bbf14db94afee1fd3c011314f7b8ee493e9
author : Dean Rasheed <dean.a.rasheed@gmail.com>
date : Sat, 24 Jan 2026 11:30:50 +0000
committer: Dean Rasheed <dean.a.rasheed@gmail.com>
date : Sat, 24 Jan 2026 11:30:50 +0000
When executing a data-modifying CTE query containing MERGE and some
other DML operation on a table with statement-level AFTER triggers,
the transition tables passed to the triggers would fail to include the
rows affected by the MERGE.
The reason is that, when initializing a ModifyTable node for MERGE,
MakeTransitionCaptureState() would create a TransitionCaptureState
structure with a single "tcs_private" field pointing to an
AfterTriggersTableData structure with cmdType == CMD_MERGE. Tuples
captured there would then not be included in the sets of tuples
captured when executing INSERT/UPDATE/DELETE ModifyTable nodes in the
same query.
Since there are no MERGE triggers, we should only create
AfterTriggersTableData structures for INSERT/UPDATE/DELETE. Individual
MERGE actions should then use those, thereby sharing the same capture
tuplestores as any other DML commands executed in the same query.
This requires changing the TransitionCaptureState structure, replacing
"tcs_private" with 3 separate pointers to AfterTriggersTableData
structures, one for each of INSERT, UPDATE, and DELETE. Nominally,
this is an ABI break to a public structure in commands/trigger.h.
However, since this is a private field pointing to an opaque data
structure, the only way to create a valid TransitionCaptureState is by
calling MakeTransitionCaptureState(), and no extensions appear to be
doing that anyway, so it seems safe for back-patching.
Backpatch to v15, where MERGE was introduced.
Bug: #19380
Reported-by: Daniel Woelfel <dwwoelfel@gmail.com>
Author: Dean Rasheed <dean.a.rasheed@gmail.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/19380-4e293be2b4007248%40postgresql.org
Backpatch-through: 15
M src/backend/commands/trigger.c
M src/include/commands/trigger.h
M src/test/regress/expected/triggers.out
M src/test/regress/sql/triggers.sql
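A hedged C sketch of the structural change described above; field names are illustrative, not necessarily the committed ones:

    #include "postgres.h"

    struct AfterTriggersTableData;      /* opaque, private to trigger.c */

    /* One private pointer per capturing command type instead of one total. */
    typedef struct TransitionCaptureStateSketch
    {
        /* ... existing public fields unchanged ... */
        struct AfterTriggersTableData *tcs_insert_private;
        struct AfterTriggersTableData *tcs_update_private;
        struct AfterTriggersTableData *tcs_delete_private;
    } TransitionCaptureStateSketch;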
Fix bogus ctid requirement for dummy-root partitioned targets
commit : fab386f74888cb3cf4df9a20bcd24df2eb232bd2
author : Amit Langote <amitlan@postgresql.org>
date : Fri, 23 Jan 2026 10:21:08 +0900
committer: Amit Langote <amitlan@postgresql.org>
date : Fri, 23 Jan 2026 10:21:08 +0900
ExecInitModifyTable() unconditionally required a ctid junk column even
when the target was a partitioned table. This led to spurious "could
not find junk ctid column" errors when all children were excluded and
only the dummy root result relation remained.
A partitioned table only appears in the result relations list when all
leaf partitions have been pruned, leaving the dummy root as the sole
entry. Assert this invariant (nrels == 1) and skip the ctid requirement.
Also adjust ExecModifyTable() to tolerate invalid ri_RowIdAttNo for
partitioned tables, which is safe since no rows will be processed in
this case.
Bug: #19099
Reported-by: Alexander Lakhin <exclusion@gmail.com>
Author: Amit Langote <amitlangote09@gmail.com>
Reviewed-by: Tender Wang <tndrwang@gmail.com>
Reviewed-by: Kirill Reshke <reshkekirill@gmail.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/19099-e05dcfa022fe553d%40postgresql.org
Backpatch-through: 14
M contrib/file_fdw/expected/file_fdw.out
M contrib/file_fdw/sql/file_fdw.sql
M src/backend/executor/nodeModifyTable.c
Remove faulty Assert in partitioned INSERT...ON CONFLICT DO UPDATE.
commit : 3f56de3aad1bcffae01dc2c22a41518a18d25d77
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Thu, 22 Jan 2026 18:35:31 -0500
committer: Tom Lane <tgl@sss.pgh.pa.us>
date : Thu, 22 Jan 2026 18:35:31 -0500
Commit f16241bef mistakenly supposed that INSERT...ON CONFLICT DO
UPDATE rejects partitioned target tables. (This may have been
accurate when the patch was written, but it was already obsolete
when committed.) Hence, there's an assertion that we can't see
ItemPointerIndicatesMovedPartitions() in that path, but the assertion
is triggerable.
Some other places throw error if they see a moved-across-partitions
tuple, but there seems no need for that here, because if we just retry
then we get the same behavior as in the update-within-partition case,
as demonstrated by the new isolation test. So fix by deleting the
faulty Assert. (The fact that this is the fix doubtless explains
why we've heard no field complaints: the behavior of a non-assert
build is fine.)
The TM_Deleted case contains a cargo-culted copy of the same Assert,
which I also deleted to avoid confusion, although I believe that one
is actually not triggerable.
Per our code coverage report, neither the TM_Updated nor the
TM_Deleted case was reached at all by existing tests, so this
patch adds tests for both.
Reported-by: Dmitry Koval <d.koval@postgrespro.ru>
Author: Joseph Koshakow <koshy44@gmail.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/f5fffe4b-11b2-4557-a864-3587ff9b4c36@postgrespro.ru
Backpatch-through: 14
M src/backend/executor/nodeModifyTable.c
A src/test/isolation/expected/insert-conflict-do-update-4.out
M src/test/isolation/isolation_schedule
A src/test/isolation/specs/insert-conflict-do-update-4.spec
doc: Mention pg_get_partition_constraintdef()
commit : 3f2ab3f34da9d4299255a972a1c022a152bbf3f7
author : Michael Paquier <michael@paquier.xyz>
date : Thu, 22 Jan 2026 16:35:50 +0900
committer: Michael Paquier <michael@paquier.xyz>
date : Thu, 22 Jan 2026 16:35:50 +0900
All the other SQL functions reconstructing definitions or commands are
listed in the documentation, except this one.
Oversight in 1848b73d4576.
Author: Todd Liebenschutz-Jones <todd.liebenschutz-jones@starlingbank.com>
Discussion: https://postgr.es/m/CAGTRfaD6uRQ9iutASDzc_iDoS25sQTLWgXTtR3ta63uwTxq6bA@mail.gmail.com
Backpatch-through: 14
M doc/src/sgml/func.sgml
jit: Add missing inline pass for LLVM >= 17.
commit : 7600dc79c231e7fd88172dde6c3a3ba701144298
author : Thomas Munro <tmunro@postgresql.org>
date : Thu, 22 Jan 2026 15:43:13 +1300
committer: Thomas Munro <tmunro@postgresql.org>
date : Thu, 22 Jan 2026 15:43:13 +1300
With LLVM >= 17, transform passes are provided as a string to
LLVMRunPasses. Only two strings were used: "default<O3>" and
"default<O0>,mem2reg".
With previous LLVM versions, an additional inline pass was added when
JIT inlining was enabled without optimization. With LLVM >= 17, the code
would go through llvm_inline, prepare the functions for inlining, but
the generated bitcode would be the same due to the missing inline pass.
This patch restores the previous behavior by adding an inline pass when
inlining is enabled but no optimization is done.
This fixes an oversight introduced by 76200e5e when support for LLVM 17
was added.
Backpatch-through: 14
Author: Anthonin Bonnefoy <anthonin.bonnefoy@datadoghq.com>
Reviewed-by: Thomas Munro <thomas.munro@gmail.com>
Reviewed-by: Andreas Karlsson <andreas@proxel.se>
Reviewed-by: Andres Freund <andres@anarazel.de>
Reviewed-by: Álvaro Herrera <alvherre@kurilemu.de>
Reviewed-by: Pierre Ducroquet <p.psql@pinaraf.info>
Reviewed-by: Matheus Alcantara <matheusssilv97@gmail.com>
Discussion: https://postgr.es/m/CAO6_XqrNjJnbn15ctPv7o4yEAT9fWa-dK15RSyun6QNw9YDtKg%40mail.gmail.com
M src/backend/jit/llvm/llvmjit.c
amcheck: Fix snapshot usage in bt_index_parent_check
commit : 098a1fab8a9bfe2b37eda8d790efbc15c9bdf7bc
author : Álvaro Herrera <alvherre@kurilemu.de>
date : Wed, 21 Jan 2026 18:55:43 +0100
committer: Álvaro Herrera <alvherre@kurilemu.de>
date : Wed, 21 Jan 2026 18:55:43 +0100
We were using SnapshotAny to do some index checks, but that's wrong and
causes spurious errors when used on indexes created by CREATE INDEX
CONCURRENTLY. Fix it to use an MVCC snapshot, and add a test for it.
Backpatch of 6bd469d26aca to branches 14-16. I previously misidentified
the bug's origin: it came in with commit 7f563c09f890 (pg11-era, not
5ae2087202af as claimed previously), so all live branches are affected.
Also take the opportunity to fix some comments that we failed to update
in the original commits and apply pgperltidy. In branch 14, remove the
unnecessary test plan specification (which would have needed to be
changed anyway; cf. commit 549ec201d613.)
Diagnosed-by: Donghang Lin <donghanglin@gmail.com>
Author: Mihail Nikalayeu <mihailnikalayeu@gmail.com>
Reviewed-by: Andrey Borodin <x4mmm@yandex-team.ru>
Backpatch-through: 17
Discussion: https://postgr.es/m/CANtu0ojmVd27fEhfpST7RG2KZvwkX=dMyKUqg0KM87FkOSdz8Q@mail.gmail.com
M contrib/amcheck/t/002_cic.pl
M contrib/amcheck/verify_nbtree.c
M doc/src/sgml/amcheck.sgml
Update time zone data files to tzdata release 2025c.
commit : d852d105e760672a1c4e9f796fdae1e0585632db
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Sun, 18 Jan 2026 14:54:33 -0500
committer: Tom Lane <tgl@sss.pgh.pa.us>
date : Sun, 18 Jan 2026 14:54:33 -0500
This is pretty pro-forma for our purposes, as the only change
is a historical correction for pre-1976 DST laws in
Baja California. (Upstream made this release mostly to update
their leap-second data, which we don't use.) But with minor
releases coming up, we should be up-to-date.
Backpatch-through: 14
M src/timezone/data/tzdata.zi
Fix error message related to end TLI in backup manifest
commit : e8fd6c9fdaf6acbb796a3a0e540b9706559de86d
author : Michael Paquier <michael@paquier.xyz>
date : Sun, 18 Jan 2026 17:25:01 +0900
committer: Michael Paquier <michael@paquier.xyz>
date : Sun, 18 Jan 2026 17:25:01 +0900
The code adding the WAL information included in a backup manifest is
cross-checked with the contents of the timeline history file of the end
timeline. A check based on the end timeline, when it fails, reported
the value of the start timeline in the error message. This error is
fixed to show the correct timeline number in the report.
This error report would be confusing for users if seen, because it would
provide incorrect information, so backpatch all the way down.
Oversight in 0d8c9c1210c4.
Author: Man Zeng <zengman@halodbtech.com>
Discussion: https://postgr.es/m/tencent_0F2949C4594556F672CF4658@qq.com
Backpatch-through: 14
M src/backend/backup/backup_manifest.c
Fix segfault from releasing locks in detached DSM segments
commit : 980b7c7369e44e49078ad2bbb39e7012347a0919
author : Amit Langote <amitlan@postgresql.org>
date : Fri, 16 Jan 2026 13:01:52 +0900
committer: Amit Langote <amitlan@postgresql.org>
date : Fri, 16 Jan 2026 13:01:52 +0900
If a FATAL error occurs while holding a lock in a DSM segment (such
as a dshash lock) and the process is not in a transaction, a
segmentation fault can occur during process exit.
The problem sequence is:
1. Process acquires a lock in a DSM segment (e.g., via dshash)
2. FATAL error occurs outside transaction context
3. proc_exit() begins, calling before_shmem_exit callbacks
4. dsm_backend_shutdown() detaches all DSM segments
5. Later, on_shmem_exit callbacks run
6. ProcKill() calls LWLockReleaseAll()
7. Segfault: the lock being released is in unmapped memory
This only manifests outside transaction contexts because
AbortTransaction() calls LWLockReleaseAll() during transaction
abort, releasing locks before DSM cleanup. Background workers and
other non-transactional code paths are vulnerable.
Fix by calling LWLockReleaseAll() unconditionally at the start of
shmem_exit(), before any callbacks run. Releasing locks before
callbacks prevents the segfault - locks must be released before
dsm_backend_shutdown() detaches their memory. This is safe because
after an error, held locks are protecting potentially inconsistent
data anyway, and callbacks can acquire fresh locks if needed.
Also add a comment noting that LWLockReleaseAll() must be safe to
call before LWLock initialization (which it is, since
num_held_lwlocks will be 0), plus an Assert for the post-condition.
This fix aligns with the original design intent from commit
001a573a2, which noted that backends must clean up shared memory
state (including releasing lwlocks) before unmapping dynamic shared
memory segments.
Reported-by: Rahila Syed <rahilasyed90@gmail.com>
Author: Rahila Syed <rahilasyed90@gmail.com>
Reviewed-by: Amit Langote <amitlangote09@gmail.com>
Reviewed-by: Dilip Kumar <dilipbalaut@gmail.com>
Reviewed-by: Andres Freund <andres@anarazel.de>
Reviewed-by: Álvaro Herrera <alvherre@kurilemu.de>
Discussion: https://postgr.es/m/CAH2L28uSvyiosL+kaic9249jRVoQiQF6JOnaCitKFq=xiFzX3g@mail.gmail.com
Backpatch-through: 14
M src/backend/storage/ipc/ipc.c
M src/backend/storage/lmgr/lwlock.c
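A hedged C sketch of the ordering change; the existing callback processing is elided:

    #include "postgres.h"
    #include "storage/lwlock.h"

    static void
    shmem_exit_sketch(int code)
    {
        (void) code;

        /*
         * Release any held lwlocks before before_shmem_exit callbacks run,
         * so nothing later touches a lock in an already-detached segment.
         * Safe even before lwlock initialization: nothing is held then.
         */
        LWLockReleaseAll();

        /* ... existing before_shmem_exit / on_shmem_exit processing ... */
    }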
Fix 'unexpected data beyond EOF' on replica restart
commit : a2eeb04f3a0fff86e0e94745003e705ec396d4ba
author : Heikki Linnakangas <heikki.linnakangas@iki.fi>
date : Thu, 15 Jan 2026 20:57:12 +0200
committer: Heikki Linnakangas <heikki.linnakangas@iki.fi>
date : Thu, 15 Jan 2026 20:57:12 +0200
On restart, a replica can fail with an error like 'unexpected data
beyond EOF in block 200 of relation T/D/R'. These are the steps to
reproduce it:
- A relation has a size of 400 blocks.
- Blocks 201 to 400 are empty.
- Block 200 has two rows.
- Blocks 100 to 199 are empty.
- A restartpoint is done
- Vacuum truncates the relation to 200 blocks
- A FPW deletes a row in block 200
- A checkpoint is done
- A FPW deletes the last row in block 200
- Vacuum truncates the relation to 100 blocks
- The replica restarts
When the replica restarts:
- The relation on disk starts at 100 blocks, because all the
truncations were applied before restart.
- The first truncate to 200 blocks is replayed. It silently fails, but
it will still (incorrectly!) update the cache size to 200 blocks
- The first FPW on block 200 is applied. XLogReadBufferForRead relies
on the cached size and incorrectly assumes that the page already
exists in the file, and thus won't extend the relation.
- The online checkpoint record is replayed, calling smgrdestroyall
which causes the cached size to be discarded
- The second FPW on block 200 is applied. This time, the detected size
is 100 blocks, an extend is attempted. However, the block 200 is
already present in the buffer cache due to the first FPW. This
triggers the 'unexpected data beyond EOF'.
To fix, update the cached size in SmgrRelation with the current size
rather than the requested new size, when the requested new size is
greater.
Author: Anthonin Bonnefoy <anthonin.bonnefoy@datadoghq.com>
Discussion: https://www.postgresql.org/message-id/CAO6_Xqrv-snNJNhbj1KjQmWiWHX3nYGDgAc=vxaZP3qc4g1Siw@mail.gmail.com
Backpatch-through: 14
M src/backend/storage/smgr/md.c
M src/backend/storage/smgr/smgr.c
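A hedged C sketch of the cached-size rule; names are illustrative:

    #include "postgres.h"
    #include "storage/smgr.h"

    /*
     * When a replayed truncation requests a size no smaller than the file
     * on disk, cache the on-disk size rather than the requested one.
     */
    static void
    remember_size_after_truncate(SMgrRelation reln, ForkNumber forknum,
                                 BlockNumber requested_nblocks)
    {
        BlockNumber curnblk = smgrnblocks(reln, forknum);

        reln->smgr_cached_nblocks[forknum] = Min(requested_nblocks, curnblk);
    }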
Add check for invalid offset at multixid truncation
commit : c7946e6f32c9a503512fae3ed0575581b49e8680
author : Heikki Linnakangas <heikki.linnakangas@iki.fi>
date : Thu, 15 Jan 2026 16:48:45 +0200
committer: Heikki Linnakangas <heikki.linnakangas@iki.fi>
date : Thu, 15 Jan 2026 16:48:45 +0200
If a multixid with zero offset is left behind after a crash, and that
multixid later becomes the oldest multixid, truncation might try to
look up its offset and read the zero value. In the worst case, we
might incorrectly use the zero offset to truncate valid SLRU segments
that are still needed. I'm not sure if that can happen in practice, or
if there are some other lower-level safeguards or incidental reasons
that prevent the caller from passing an unwritten multixid as the
oldest multi. But better safe than sorry, so let's add an explicit
check for it.
In stable branches, we should perhaps do the same check for
'oldestOffset', i.e. the offset of the old oldest multixid (in master,
'oldestOffset' is gone). But if the old oldest multixid has an invalid
offset, the damage has been done already, and we would never advance
past that point. It's not clear what we should do in that case. The
check that this commit adds will prevent such a multixid with invalid
offset from becoming the oldest multixid in the first place, which
seems enough for now.
Reviewed-by: Andrey Borodin <x4mmm@yandex-team.ru>
Discussion: https://www.postgresql.org/message-id/000301b2-5b81-4938-bdac-90f6eb660843@iki.fi
Backpatch-through: 14
M src/backend/access/transam/multixact.c
pg_waldump: Relax LSN comparison check in TAP test
commit : 2170e52193c6f02ddb915f833f269714c9c40eae
author : Michael Paquier <michael@paquier.xyz>
date : Wed, 14 Jan 2026 16:02:39 +0900
committer: Michael Paquier <michael@paquier.xyz>
date : Wed, 14 Jan 2026 16:02:39 +0900
The test 002_save_fullpage.pl, which checks --save-fullpage, fails with
wal_consistency_checking enabled, because the block saved in the file
has the same LSN as the LSN used in the file name. The test required
that the block LSN be strictly lower than the file LSN. This commit
relaxes the check a bit, by allowing the LSNs to match.
While at it, the test name is reworded to include some information about
the file and block LSNs, which is useful for debugging.
Author: Andrey Borodin <x4mmm@yandex-team.ru>
Discussion: https://postgr.es/m/4226AED7-E38F-419B-AAED-9BC853FB55DE@yandex-team.ru
Backpatch-through: 16
M src/bin/pg_waldump/t/002_save_fullpage.pl
doc: Document DEFAULT option in file_fdw.
commit : 254038c80de9a006d257f94d160d18b9c104a8a1
author : Fujii Masao <fujii@postgresql.org>
date : Tue, 13 Jan 2026 22:54:45 +0900
committer: Fujii Masao <fujii@postgresql.org>
date : Tue, 13 Jan 2026 22:54:45 +0900
Commit 9f8377f7a introduced the DEFAULT option for file_fdw but did not
update the documentation. This commit adds the missing description of
the DEFAULT option to the file_fdw documentation.
Backpatch to v16, where the DEFAULT option was introduced.
Author: Shinya Kato <shinya11.kato@gmail.com>
Reviewed-by: Fujii Masao <masao.fujii@gmail.com>
Discussion: https://postgr.es/m/CAOzEurT_PE7QEh5xAdb7Cja84Rur5qPv2Fzt3Tuqi=NU0WJsbg@mail.gmail.com
Backpatch-through: 16
M doc/src/sgml/file-fdw.sgml
doc: Improve description of publish_via_partition_root
commit : b95bdde5119d1570b15d66492a39d85ab1366559
author : Jacob Champion <jchampion@postgresql.org>
date : Fri, 9 Jan 2026 10:02:43 -0800
committer: Jacob Champion <jchampion@postgresql.org>
date : Fri, 9 Jan 2026 10:02:43 -0800
Reword publish_via_partition_root's opening paragraph. Describe its
behavior more clearly, and directly state that its default is false.
Per complaint by Peter Smith; final text of the patch made in
collaboration with Chao Li.
Author: Chao Li <li.evan.chao@gmail.com>
Author: Peter Smith <peter.b.smith@fujitsu.com>
Reported-by: Peter Smith <peter.b.smith@fujitsu.com>
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
Discussion: https://postgr.es/m/CAHut%2BPu7SpK%2BctOYoqYR3V4w5LKc9sCs6c_qotk9uTQJQ4zp6g%40mail.gmail.com
Backpatch-through: 14
M doc/src/sgml/ref/create_publication.sgml
Fix possible incorrect column reference in ERROR message
commit : 821c4d27bca243a233d9e9fb26c9adad47fa85b5
author : David Rowley <drowley@postgresql.org>
date : Fri, 9 Jan 2026 11:03:48 +1300
committer: David Rowley <drowley@postgresql.org>
date : Fri, 9 Jan 2026 11:03:48 +1300
When creating a partition for a RANGE partitioned table, the reporting
of errors relating to converting the specified range values into
constant values for the partition key's type could display the name of a
previous partition key column when an earlier range was specified as
MINVALUE or MAXVALUE.
This was caused by the code not correctly incrementing the index that
tracks which partition key the foreach loop was working on after
processing MINVALUE/MAXVALUE ranges.
Fix by using foreach_current_index() to ensure the index variable is
always set to the List element being worked on.
Author: myzhen <zhenmingyang@yeah.net>
Reviewed-by: zhibin wang <killerwzb@gmail.com>
Discussion: https://postgr.es/m/273cab52.978.19b96fc75e7.Coremail.zhenmingyang@yeah.net
Backpatch-through: 14
M src/backend/parser/parse_utilcmd.c
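A hedged C sketch of the indexing fix with foreach_current_index(); list and variable names are illustrative:

    #include "postgres.h"
    #include "nodes/pg_list.h"

    static void
    walk_range_datums(List *datums)
    {
        ListCell   *lc;

        foreach(lc, datums)
        {
            /* Always matches the cell being processed, even after
             * MINVALUE/MAXVALUE entries. */
            int     keyidx = foreach_current_index(lc);

            /* ... report conversion errors against key column keyidx ... */
            (void) keyidx;
        }
    }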
Prevent invalidation of newly created replication slots.
commit : 24cce33c332ab5cdec3d35ac265965e3735ff9a4
author : Amit Kapila <akapila@postgresql.org>
date : Thu, 8 Jan 2026 07:07:23 +0000
committer: Amit Kapila <akapila@postgresql.org>
date : Thu, 8 Jan 2026 07:07:23 +0000
A race condition could cause a newly created replication slot to become
invalidated between WAL reservation and a checkpoint.
Previously, if the required WAL was removed, we retried the reservation
process. However, the slot could still be invalidated before the retry if
the WAL was not yet removed but the checkpoint advanced the redo pointer
beyond the slot's intended restart LSN and computed the minimum LSN that
needs to be preserved for the slots.
The fix is to acquire an exclusive lock on ReplicationSlotAllocationLock
during WAL reservation, and a shared lock during the minimum LSN
calculation at checkpoints to serialize the process. This ensures that, if
WAL reservation occurs first, the checkpoint waits until restart_lsn is
updated before calculating the minimum LSN. If the checkpoint runs first,
subsequent WAL reservations pick a position at or after the latest
checkpoint's redo pointer.
We used a similar fix in HEAD (via commit 006dd4b2e5) and 18. The
difference is that in 17 and prior branches we need to additionally handle
the race condition with slot's minimum LSN computation during checkpoints.
Reported-by: suyu.cmj <mengjuan.cmj@alibaba-inc.com>
Author: Hou Zhijie <houzj.fnst@fujitsu.com>
Author: vignesh C <vignesh21@gmail.com>
Reviewed-by: Hayato Kuroda <kuroda.hayato@fujitsu.com>
Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com>
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
Backpatch-through: 14
Discussion: https://postgr.es/m/5e045179-236f-4f8f-84f1-0f2566ba784c.mengjuan.cmj@alibaba-inc.com
M src/backend/access/transam/xlog.c
M src/backend/replication/slot.c
Fix typo
commit : 2ce476543807629e790057eaef833ee817d659cc
author : Peter Eisentraut <peter@eisentraut.org>
date : Wed, 7 Jan 2026 15:47:02 +0100
committer: Peter Eisentraut <peter@eisentraut.org>
date : Wed, 7 Jan 2026 15:47:02 +0100
Reported-by: Xueyu Gao <gaoxueyu_hope@163.com>
Discussion: https://www.postgresql.org/message-id/42b5c99a.856d.19b73d858e2.Coremail.gaoxueyu_hope%40163.com
M .cirrus.tasks.yml
createuser: Update docs to reflect defaults
commit : d3a2781e58770723ec99685f52b498629f361610
author : John Naylor <john.naylor@postgresql.org>
date : Wed, 7 Jan 2026 16:02:19 +0700
committer: John Naylor <john.naylor@postgresql.org>
date : Wed, 7 Jan 2026 16:02:19 +0700
Commit c7eab0e97 changed the default password_encryption setting to
'scram-sha-256', so update the example for creating a user with an
assigned password.
In addition, commit 08951a7c9 added new options that in turn pass
default tokens NOBYPASSRLS and NOREPLICATION to the CREATE ROLE
command, so fix this omission as well for v16 and later.
Reported-by: Heikki Linnakangas <hlinnaka@iki.fi>
Discussion: https://postgr.es/m/cff1ea60-c67d-4320-9e33-094637c2c4fb%40iki.fi
Backpatch-through: 14
M doc/src/sgml/ref/createuser.sgml
Fix issue with EVENT TRIGGERS and ALTER PUBLICATION
commit : 0687a6eb03b6f25527c2a6b61e43679e496499d3
author : David Rowley <drowley@postgresql.org>
date : Tue, 6 Jan 2026 17:30:32 +1300
committer: David Rowley <drowley@postgresql.org>
date : Tue, 6 Jan 2026 17:30:32 +1300
When processing the "publish" options of an ALTER PUBLICATION command,
we call SplitIdentifierString() to split the options into a List of
strings. Since SplitIdentifierString() modifies the delimiter
characters and puts NULs in their place, this would overwrite the memory
of the AlterPublicationStmt. Later in AlterPublicationOptions(), the
modified AlterPublicationStmt is copied for event triggers, which would
result in the event trigger only seeing the first "publish" option
rather than all options that were specified in the command.
To fix this, make a copy of the string before passing to
SplitIdentifierString().
Here we also adjust a similar case in the pgoutput plugin. There are no
known issues caused by SplitIdentifierString() here, so this is being
done out of paranoia.
Thanks to Henson Choi for putting together an example case showing the
ALTER PUBLICATION issue.
Author: sunil s <sunilfeb26@gmail.com>
Reviewed-by: Henson Choi <assam258@gmail.com>
Reviewed-by: zengman <zengman@halodbtech.com>
Backpatch-through: 14
M src/backend/commands/publicationcmds.c
M src/backend/replication/pgoutput/pgoutput.c
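A hedged C sketch of the fix; the helper name and error wording are illustrative:

    #include "postgres.h"
    #include "nodes/pg_list.h"
    #include "utils/varlena.h"      /* SplitIdentifierString() */

    /*
     * SplitIdentifierString() scribbles NULs over its input, so hand it a
     * copy rather than the string stored in the statement node that event
     * triggers will later see.
     */
    static List *
    split_publish_option(const char *publish)
    {
        char       *copy = pstrdup(publish);
        List       *result = NIL;

        if (!SplitIdentifierString(copy, ',', &result))
            ereport(ERROR,
                    (errcode(ERRCODE_INVALID_PARAMETER_VALUE),
                     errmsg("invalid list syntax in parameter \"%s\"",
                            "publish")));
        return result;
    }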
Add TAP test for GUC settings passed via CONNECTION in logical replication.
commit : 67edd54f0639d8e194c6f0379b9fed6220f2f36b
author : Fujii Masao <fujii@postgresql.org>
date : Tue, 6 Jan 2026 11:57:12 +0900
committer: Fujii Masao <fujii@postgresql.org>
date : Tue, 6 Jan 2026 11:57:12 +0900
Commit d926462d819 restored the behavior of passing GUC settings from
the CONNECTION string to the publisher's walsender, allowing per-connection
configuration.
This commit adds a TAP test to verify that behavior works correctly.
Since commit d926462d819 was recently applied and backpatched to v15,
this follow-up commit is also backpatched accordingly.
Author: Fujii Masao <masao.fujii@gmail.com>
Reviewed-by: Chao Li <lic@highgo.com>
Reviewed-by: Kirill Reshke <reshkekirill@gmail.com>
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
Reviewed-by: Japin Li <japinli@hotmail.com>
Discussion: https://postgr.es/m/CAHGQGwGYV+-abbKwdrM2UHUe-JYOFWmsrs6=QicyJO-j+-Widw@mail.gmail.com
Backpatch-through: 15 M src/test/subscription/t/001_rep_changes.pl
Honor GUC settings specified in CREATE SUBSCRIPTION CONNECTION.
commit : 75f3428f2436614f696c2e28ab3dde57830cad7d
author : Fujii Masao <fujii@postgresql.org>
date : Tue, 6 Jan 2026 11:54:46 +0900
committer: Fujii Masao <fujii@postgresql.org>
date : Tue, 6 Jan 2026 11:54:46 +0900 Prior to v15, GUC settings supplied in the CONNECTION clause of
CREATE SUBSCRIPTION were correctly passed through to
the publisher's walsender. For example:
CREATE SUBSCRIPTION mysub
CONNECTION 'options=''-c wal_sender_timeout=1000'''
PUBLICATION ...
would cause wal_sender_timeout to take effect on the publisher's walsender.
However, commit f3d4019da5d changed the way logical replication
connections are established, forcing the publisher's relevant
GUC settings (datestyle, intervalstyle, extra_float_digits) to
override those provided in the CONNECTION string. As a result,
from v15 through v18, GUC settings in the CONNECTION string were
always ignored.
This regression prevented per-connection tuning of logical replication,
for example using a shorter timeout for a walsender connecting to a
nearby subscriber and a longer one for a walsender connecting to a
remote subscriber.
This commit restores the intended behavior by ensuring that
GUC settings in the CONNECTION string are again passed through
and applied by the walsender, allowing per-connection configuration.
Backpatch to v15, where the regression was introduced.
Author: Fujii Masao <masao.fujii@gmail.com>
Reviewed-by: Chao Li <lic@highgo.com>
Reviewed-by: Kirill Reshke <reshkekirill@gmail.com>
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
Reviewed-by: Japin Li <japinli@hotmail.com>
Discussion: https://postgr.es/m/CAHGQGwGYV+-abbKwdrM2UHUe-JYOFWmsrs6=QicyJO-j+-Widw@mail.gmail.com
Backpatch-through: 15 M src/backend/replication/libpqwalreceiver/libpqwalreceiver.c
Doc: add missing punctuation
commit : 5e02f92d9ef2a585acfaddbb55dec9b0341087fd
author : David Rowley <drowley@postgresql.org>
date : Sun, 4 Jan 2026 21:14:11 +1300
committer: David Rowley <drowley@postgresql.org>
date : Sun, 4 Jan 2026 21:14:11 +1300 Author: Daisuke Higuchi <higuchi.daisuke11@gmail.com>
Reviewed-by: Robert Treat <rob@xzilla.net>
Discussion: https://postgr.es/m/CAEVT6c-yWYstu76YZ7VOxmij2XA8vrOEvens08QLmKHTDjEPBw@mail.gmail.com
Backpatch-through: 14 M doc/src/sgml/history.sgml
Fix selectivity estimation integer overflow in contrib/intarray
commit : 54f82c4aae768037fe14d909b2b535cfbe89f900
author : David Rowley <drowley@postgresql.org>
date : Sun, 4 Jan 2026 20:34:01 +1300
committer: David Rowley <drowley@postgresql.org>
date : Sun, 4 Jan 2026 20:34:01 +1300 This fixes a poorly written integer comparison function which was
performing subtraction in an attempt to return a negative value when
a < b and a positive value when a > b, and 0 when the values were equal.
Unfortunately, that didn't always work correctly: because two's
complement puts INT_MIN one further from zero than INT_MAX, the
subtraction could overflow and cause the comparison function to return
an incorrect result, causing the binary search to fail to find the
value being searched for.
This could cause poor selectivity estimates when the statistics stored
the value of INT_MAX (2147483647) and the value being searched for was
large enough to result in the binary search doing a comparison with that
INT_MAX value.
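For illustration, here is a self-contained sketch of the bug class (not the intarray code itself): a subtraction-based comparator can overflow for widely separated inputs, while the relational form cannot.
#include <limits.h>
#include <stdio.h>

/* Buggy style: a - b overflows when the operands are far enough apart,
 * e.g. INT_MAX - (-1); signed overflow is undefined behavior in C and
 * in practice wraps, so the sign of the result can be wrong. */
static int
cmp_int_buggy(int a, int b)
{
    return a - b;
}

/* Safe style: compare with relational operators; no arithmetic, no
 * overflow, always the correct sign. */
static int
cmp_int_safe(int a, int b)
{
    if (a < b)
        return -1;
    if (a > b)
        return 1;
    return 0;
}

int
main(void)
{
    /* On typical platforms the buggy comparator claims INT_MAX < -1 here. */
    printf("buggy: %d, safe: %d\n",
           cmp_int_buggy(INT_MAX, -1), cmp_int_safe(INT_MAX, -1));
    return 0;
}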
Author: Chao Li <li.evan.chao@gmail.com>
Reviewed-by: David Rowley <dgrowleyml@gmail.com>
Discussion: https://postgr.es/m/CAEoWx2ng1Ot5LoKbVU-Dh---dFTUZWJRH8wv2chBu29fnNDMaQ@mail.gmail.com
Backpatch-through: 14 M contrib/intarray/_int_selfuncs.c
Update copyright for 2026
commit : 3e77e944e730ba19ebd5fc24876e284f6bdb9b91
author : Bruce Momjian <bruce@momjian.us>
date : Thu, 1 Jan 2026 13:24:10 -0500
committer: Bruce Momjian <bruce@momjian.us>
date : Thu, 1 Jan 2026 13:24:10 -0500 Backpatch-through: 14 M COPYRIGHT
M doc/src/sgml/legal.sgml
jit: Fix jit_profiling_support when unavailable.
commit : 130b001c15232305531b206d3f14d2ba01105979
author : Thomas Munro <tmunro@postgresql.org>
date : Wed, 31 Dec 2025 13:24:17 +1300
committer: Thomas Munro <tmunro@postgresql.org>
date : Wed, 31 Dec 2025 13:24:17 +1300 jit_profiling_support=true captures profile data for Linux perf. On
other platforms, LLVMCreatePerfJITEventListener() returns NULL and the
attempt to register the listener would crash.
Fix by ignoring the setting in that case. The documentation already
says that it only has an effect if perf support is present, and we
already did the same for older LLVM versions that lacked support.
No field reports, unsurprisingly for an obscure developer-oriented
setting. Noticed in passing while working on commit 1a28b4b4.
Backpatch-through: 14
Reviewed-by: Matheus Alcantara <matheusssilv97@gmail.com>
Reviewed-by: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/CA%2BhUKGJgB6gvrdDohgwLfCwzVQm%3DVMtb9m0vzQn%3DCwWn-kwG9w%40mail.gmail.com M src/backend/jit/llvm/llvmjit.c
Fix a race condition in updating procArray->replication_slot_xmin.
commit : 82146672261db869914eecaecf7ee3d237e7b69f
author : Masahiko Sawada <msawada@postgresql.org>
date : Tue, 30 Dec 2025 10:56:23 -0800
committer: Masahiko Sawada <msawada@postgresql.org>
date : Tue, 30 Dec 2025 10:56:23 -0800 Previously, ReplicationSlotsComputeRequiredXmin() computed the oldest
xmin across all slots without holding ProcArrayLock (when
already_locked is false), acquiring the lock just before updating the
replication slot xmin.
This could lead to a race condition: if a backend creates a new slot
and updates the global replication slot xmin, another backend
concurrently running ReplicationSlotsComputeRequiredXmin() could
overwrite that update with an invalid or stale value. This happens
because the concurrent backend might have computed the aggregate xmin
before the new slot was accounted for, but applied the update after
the new slot had already updated the global value.
In the reported failure, a walsender for an apply worker computed
InvalidTransactionId as the oldest xmin and overwrote a valid
replication slot xmin value computed by a walsender for a tablesync
worker. Consequently, the tablesync worker computed a transaction ID
via GetOldestSafeDecodingTransactionId() effectively without
considering the replication slot xmin. This led to the error "cannot
build an initial slot snapshot as oldest safe xid %u follows
snapshot's xmin %u", which was an assertion failure prior to commit
240e0dbacd3.
To fix this, we acquire ReplicationSlotControlLock in exclusive mode
during slot creation to perform the initial update of the slot
xmin. In ReplicationSlotsComputeRequiredXmin(), we hold
ReplicationSlotControlLock in shared mode until the global slot xmin
is updated in ProcArraySetReplicationSlotXmin(). This prevents
concurrent computations and updates of the global xmin by other
backends during the initial slot xmin update process, while still
permitting concurrent calls to ReplicationSlotsComputeRequiredXmin().
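To make the shape of the fix concrete, here is a deliberately simplified, standalone analogy using a single pthread mutex rather than the shared/exclusive LWLocks the commit actually uses: the essential point is that the scan computing the aggregate and the store of the result happen under the same lock, so a stale scan cannot overwrite a newer value.
#include <pthread.h>

#define NUM_SLOTS 8

static pthread_mutex_t slot_lock = PTHREAD_MUTEX_INITIALIZER;
static unsigned int slot_xmin[NUM_SLOTS];   /* 0 means "slot unused" */
static unsigned int global_xmin;            /* aggregate over all slots */

/*
 * Recompute the global minimum over all slots.  Holding the lock across
 * both the scan and the store is the essential part: if the scan ran
 * before a new slot was registered but the store happened after that
 * slot had already published its value, the newer value would be
 * overwritten with a stale (or "invalid") one.
 */
static void
compute_required_xmin(void)
{
    unsigned int agg = 0;

    pthread_mutex_lock(&slot_lock);
    for (int i = 0; i < NUM_SLOTS; i++)
    {
        if (slot_xmin[i] != 0 && (agg == 0 || slot_xmin[i] < agg))
            agg = slot_xmin[i];
    }
    global_xmin = agg;
    pthread_mutex_unlock(&slot_lock);
}

/* A new slot publishes its xmin under the same lock before anyone else
 * can recompute the aggregate, mirroring the exclusive lock taken at
 * slot creation in this commit. */
static void
register_slot(int i, unsigned int xmin)
{
    pthread_mutex_lock(&slot_lock);
    slot_xmin[i] = xmin;
    if (global_xmin == 0 || xmin < global_xmin)
        global_xmin = xmin;
    pthread_mutex_unlock(&slot_lock);
}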
Backpatch to all supported versions.
Author: Zhijie Hou <houzj.fnst@fujitsu.com>
Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com>
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
Reviewed-by: Pradeep Kumar <spradeepkumar29@gmail.com>
Reviewed-by: Hayato Kuroda (Fujitsu) <kuroda.hayato@fujitsu.com>
Reviewed-by: Robert Haas <robertmhaas@gmail.com>
Reviewed-by: Andres Freund <andres@anarazel.de>
Reviewed-by: Chao Li <li.evan.chao@gmail.com>
Discussion: https://postgr.es/m/CAA4eK1L8wYcyTPxNzPGkhuO52WBGoOZbT0A73Le=ZUWYAYmdfw@mail.gmail.com
Backpatch-through: 14 M src/backend/replication/logical/logical.c
M src/backend/replication/slot.c
jit: Remove -Wno-deprecated-declarations in 18+.
commit : dfb9ff5904074976b2a27bc44735a2680303de6d
author : Thomas Munro <tmunro@postgresql.org>
date : Tue, 30 Dec 2025 14:11:37 +1300
committer: Thomas Munro <tmunro@postgresql.org>
date : Tue, 30 Dec 2025 14:11:37 +1300 REL_18_STABLE and master have commit ee485912, so they always use the
newer LLVM opaque pointer functions. Drop -Wno-deprecated-declarations
(commit a56e7b660) for code under jit/llvm in those branches, to catch
any new deprecation warnings that arrive in future versions of LLVM.
Older branches continue to use functions marked deprecated in LLVM 14
and 15 (i.e. they switched to the newer functions only for LLVM 16+), as a
precaution against unforeseen compatibility problems with bitcode
already shipped. In those branches, the comment about warning
suppression is updated to explain that situation better. In theory we
could suppress warnings only for LLVM 14 and 15 specifically, but that
isn't done here.
Backpatch-through: 14
Reported-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/1407185.1766682319%40sss.pgh.pa.us M src/backend/jit/llvm/Makefile
ci: Test Windows + Mkvcbuild.pm in REL_16_STABLE.
commit : 4b9ce1ef609ba5a5eabc41a82d1a3e3710fc663f
author : Thomas Munro <tmunro@postgresql.org>
date : Mon, 29 Dec 2025 15:52:33 +1300
committer: Thomas Munro <tmunro@postgresql.org>
date : Mon, 29 Dec 2025 15:52:33 +1300 * REL_15_STABLE introduced CI and tested Windows with Mkvcbuild.pm.
* REL_16_STABLE introduced Meson and switched Windows CI to that.
* REL_17_STABLE dropped Mkvcbuild.pm.
That left a blind spot when testing Makefile changes back-patched into
16. Mkvcbuild.pm scrapes Makefiles and might break, so it's useful to
be able to check that before hitting "hamerkop" in the build farm.
Copy REL_15_STABLE's Windows task into REL_16_STABLE as a separate task,
with a few small adjustments to match later task definition style.
Discussion: https://postgr.es/m/CA%2BhUKG%2B-d0OyLMdMiZ%2BFtj2hhZXT%2B0HOyHfrPBecE_vZzh9rRg%40mail.gmail.com M .cirrus.tasks.yml
Fix Mkvcbuild.pm builds of test_cloexec.c.
commit : 80e8ec772bff93e04f6cafe20bdf142bfa8c79de
author : Thomas Munro <tmunro@postgresql.org>
date : Mon, 29 Dec 2025 15:22:16 +1300
committer: Thomas Munro <tmunro@postgresql.org>
date : Mon, 29 Dec 2025 15:22:16 +1300 Mkvcbuild.pm scrapes Makefile contents, but couldn't understand the
change made by commit bec2a0aa. Revealed by BF animal hamerkop in
branch REL_16_STABLE.
1. It used += instead of =, which didn't match the pattern that
Mkvcbuild.pm looks for. Drop the +.
2. Mkvcbuild.pm doesn't link PROGRAM executables with libpgport. Apply
a local workaround to REL_16_STABLE only (later branches dropped
Mkvcbuild.pm).
Backpatch-through: 16
Reported-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/175163.1766357334%40sss.pgh.pa.us M src/test/modules/test_cloexec/Makefile
Fix pg_stat_get_backend_activity() to use multi-byte truncated result
commit : c48829ed8308e9e1767e5ce4d883996348af5b68
author : Michael Paquier <michael@paquier.xyz>
date : Sat, 27 Dec 2025 17:23:54 +0900
committer: Michael Paquier <michael@paquier.xyz>
date : Sat, 27 Dec 2025 17:23:54 +0900 pg_stat_get_backend_activity() calls pgstat_clip_activity() to ensure
that the reported query string is correctly truncated when it finishes
with an incomplete multi-byte sequence. However, the result returned by
the function was not what pgstat_clip_activity() generated, but the
non-truncated, original, contents from PgBackendStatus.st_activity_raw.
Oversight in 54b6cd589ac2, so backpatch all the way down.
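A tiny standalone sketch of the bug shape, with hypothetical names (the real code uses pgstat_clip_activity() and PgBackendStatus.st_activity_raw): the truncated copy is built, but the untruncated original used to be handed back.
#include <stdlib.h>
#include <string.h>

#define ACTIVITY_LIMIT 100

/* Stand-in for pgstat_clip_activity(): returns a length-limited copy.
 * (The real function also avoids cutting a multi-byte sequence in half;
 * that part is omitted here.) */
static char *
clip_activity(const char *raw)
{
    size_t      len = strlen(raw);
    char       *clipped;

    if (len > ACTIVITY_LIMIT)
        len = ACTIVITY_LIMIT;
    clipped = malloc(len + 1);
    memcpy(clipped, raw, len);
    clipped[len] = '\0';
    return clipped;
}

/* Caller frees the result.  The bug was equivalent to computing
 * "clipped" here but returning "raw" anyway. */
static char *
backend_activity(const char *raw)
{
    char       *clipped = clip_activity(raw);

    return clipped;             /* not raw */
}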
Author: Chao Li <li.evan.chao@gmail.com>
Discussion: https://postgr.es/m/CAEoWx2mDzwc48q2EK9tSXS6iJMJ35wvxNQnHX+rXjy5VgLvJQw@mail.gmail.com
Backpatch-through: 14 M src/backend/utils/adt/pgstatfuncs.c
doc: Remove duplicate word in ECPG description
commit : 82a923bc61e4843c05a9fb56fee3961821ef4603
author : Michael Paquier <michael@paquier.xyz>
date : Fri, 26 Dec 2025 15:26:06 +0900
committer: Michael Paquier <michael@paquier.xyz>
date : Fri, 26 Dec 2025 15:26:06 +0900 Author: Laurenz Albe <laurenz.albe@cybertec.at>
Reviewed-by: vignesh C <vignesh21@gmail.com>
Discussion: https://postgr.es/m/d6d6a800f8b503cd78d5f4fa721198e40eec1677.camel@cybertec.at
Backpatch-through: 14 M doc/src/sgml/ecpg.sgml
Don't advance origin during apply failure.
commit : 63a65adf4d8ea85ad9870fdce9351f73be00e26d
author : Amit Kapila <akapila@postgresql.org>
date : Wed, 24 Dec 2025 03:53:42 +0000
committer: Amit Kapila <akapila@postgresql.org>
date : Wed, 24 Dec 2025 03:53:42 +0000 The logical replication parallel apply worker could incorrectly advance
the origin progress during an error or failed apply. This behavior risks
transaction loss because such transactions will not be resent by the
server.
Commit 3f28b2fcac addressed a similar issue for both the apply worker and
the table sync worker by registering a before_shmem_exit callback to reset
origin information. This prevents the worker from advancing the origin
during transaction abortion on shutdown. This patch applies the same fix
to the parallel apply worker, ensuring consistent behavior across all
worker types.
As with 3f28b2fcac, we are backpatching through version 16, since parallel
apply mode was introduced there and the issue only occurs when changes are
applied before the transaction end record (COMMIT or ABORT) is received.
Author: Hou Zhijie <houzj.fnst@fujitsu.com>
Reviewed-by: Chao Li <li.evan.chao@gmail.com>
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
Backpatch-through: 16
Discussion: https://postgr.es/m/TY4PR01MB169078771FB31B395AB496A6B94B4A@TY4PR01MB16907.jpnprd01.prod.outlook.com
Discussion: https://postgr.es/m/TYAPR01MB5692FAC23BE40C69DA8ED4AFF5B92@TYAPR01MB5692.jpnprd01.prod.outlook.com M src/backend/replication/logical/worker.c
M src/test/subscription/t/023_twophase_stream.pl
Fix bug in following update chain when locking a heap tuple
commit : 7efef18ffc14af2399bce34d40850b410dd1fe8d
author : Heikki Linnakangas <heikki.linnakangas@iki.fi>
date : Tue, 23 Dec 2025 13:37:16 +0200
committer: Heikki Linnakangas <heikki.linnakangas@iki.fi>
date : Tue, 23 Dec 2025 13:37:16 +0200 After waiting for a concurrent updater to finish, heap_lock_tuple()
followed the update chain to lock all tuple versions. However, when
stepping from the initial tuple to the next one, it failed to check
that the next tuple's XMIN matches the initial tuple's XMAX. That's an
important check whenever following an update chain, and the recursive
part that follows the chain did it, but the initial step missed it.
Without the check, if the updating transaction aborts and the updated
tuple is vacuumed away and replaced by an unrelated tuple, that
unrelated tuple might get incorrectly locked.
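As a standalone illustration of the invariant (a simplified model, not heapam's actual tuple headers or ctid chains): each newer version's creating xid must match the prior version's updating xid, and the chain must not be followed when they disagree.
#include <stddef.h>
#include <stdint.h>

typedef uint32_t TransactionId;

typedef struct TupleVersion
{
    TransactionId xmin;             /* xid that created this version */
    TransactionId xmax;             /* xid that replaced it, or 0 */
    struct TupleVersion *next;      /* newer version, if any */
} TupleVersion;

/*
 * Step to the next version only if its xmin matches this version's xmax.
 * If the updater aborted and the successor's slot was vacuumed away and
 * reused by an unrelated tuple, the xids will not match, and following
 * (and locking) that tuple would be wrong.
 */
static TupleVersion *
follow_update_chain(TupleVersion *cur)
{
    TupleVersion *next = cur->next;

    if (next == NULL || next->xmin != cur->xmax)
        return NULL;                /* end of chain, or chain is broken */
    return next;
}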
Author: Jasper Smit <jasper.smit@servicenow.com>
Discussion: https://www.postgresql.org/message-id/CAOG+RQ74x0q=kgBBQ=mezuvOeZBfSxM1qu_o0V28bwDz3dHxLw@mail.gmail.com
Backpatch-through: 14 M src/backend/access/heap/heapam.c
Add missing .gitignore for src/test/modules/test_cloexec.
commit : ebd5696166f6781afb8b4c0b102270059749fad4
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Mon, 22 Dec 2025 14:06:54 -0500
committer: Michael Paquier <michael@paquier.xyz>
date : Mon, 22 Dec 2025 14:06:54 -0500 A src/test/modules/test_cloexec/.gitignore
Fix orphaned origin in shared memory after DROP SUBSCRIPTION
commit : e22e9ab0cd452112e457c02f76b1fa3e845594b8
author : Michael Paquier <michael@paquier.xyz>
date : Tue, 23 Dec 2025 14:32:22 +0900
committer: Michael Paquier <michael@paquier.xyz>
date : Tue, 23 Dec 2025 14:32:22 +0900 Since ce0fdbfe9722, a replication slot and an origin are created by each
tablesync worker, whose information is stored in both a catalog and
shared memory (once the origin is set up in the latter case). The
transaction where the origin is created is the same as the one that runs
the initial COPY, with the catalog state of the origin becoming visible
for other sessions only once the COPY transaction has committed. The
catalog state is coupled with a state in shared memory, initialized at
the same time as the origin created in the catalogs. Note that the
transaction doing the initial data sync can take a long time, depending
on the amount of data to transfer from the publication node to its
subscriber node.
Now, when a DROP SUBSCRIPTION is executed, all its workers are stopped
with the origins removed. The removal of each origin relies on a
catalog lookup. A worker still running the initial COPY would fail its
transaction, with the catalog state of the origin rolled back while the
shared memory state remains around. The session running the DROP
SUBSCRIPTION should be in charge of cleaning up the catalog and the
shared memory state, but as there is no data in the catalogs the shared
memory state is not removed. This issue would leave orphaned origin
data in shared memory, leading to a confusing state as it would still
show up in pg_replication_origin_status. Note that this shared memory
data is sticky, being flushed to disk by replorigin_checkpoint at each
checkpoint. This prevents other origins from reusing that slot position
in the shared memory data.
To address this problem, the commit moves the creation of the origin to
the end of the transaction that precedes the one executing the initial
COPY, making the origin immediately visible in the catalogs for other
sessions and giving DROP SUBSCRIPTION a way to know about it. A different
solution would have been to clean up the shared memory state using an
abort callback within the tablesync worker. The solution of this commit
is more consistent with the apply worker that creates an origin in a
short transaction.
A test is added in the subscription test 004_sync.pl, which was able to
display the problem. The test fails when this commit is reverted.
Reported-by: Tenglong Gu <brucegu@amazon.com>
Reported-by: Daisuke Higuchi <higudai@amazon.com>
Analyzed-by: Michael Paquier <michael@paquier.xyz>
Author: Hou Zhijie <houzj.fnst@fujitsu.com>
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com>
Discussion: https://postgr.es/m/aUTekQTg4OYnw-Co@paquier.xyz
Backpatch-through: 14 M src/backend/commands/subscriptioncmds.c
M src/backend/replication/logical/tablesync.c
M src/test/subscription/t/004_sync.pl
Fix printf format string warning on MinGW.
commit : b1316b78f8a93a077a9db589644103037c0c1aa6
author : Thomas Munro <tmunro@postgresql.org>
date : Fri, 6 Dec 2024 12:34:33 +1300
committer: Thomas Munro <tmunro@postgresql.org>
date : Fri, 6 Dec 2024 12:34:33 +1300 This is a back-patch of 1319997d to branches 14-17 to fix an old warning
about a printf type mismatch on MinGW, in anticipation of a potential
expansion of the scope of CI's CompilerWarnings checks. Though CI began
in 15, BF animal fairywren also shows the warning in 14, so we might as
well fix that too.
Original commit message (except for new "Backpatch-through" tag):
Commit 517bf2d91 changed a printf format string to placate MinGW, which
at the time warned about "%lld". Current MinGW is now warning about the
replacement "%I64d". Reverting the change clears the warning on the
MinGW CI task, and hopefully it will clear it on build farm animal
fairywren too.
Backpatch-through: 14-17
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Reported-by: "Hayato Kuroda (Fujitsu)" <kuroda.hayato@fujitsu.com>
Discussion: https://postgr.es/m/TYAPR01MB5866A71B744BE01B3BF71791F5AEA%40TYAPR01MB5866.jpnprd01.prod.outlook.com M src/interfaces/ecpg/test/expected/sql-sqlda.c
M src/interfaces/ecpg/test/expected/sql-sqlda.stderr
M src/interfaces/ecpg/test/sql/sqlda.pgc
Clean up test_cloexec.c and Makefile.
commit : 0666ccc16cc2d1b97032e7723601dbf5300d057d
author : Thomas Munro <tmunro@postgresql.org>
date : Sun, 21 Dec 2025 15:40:07 +1300
committer: Thomas Munro <tmunro@postgresql.org>
date : Sun, 21 Dec 2025 15:40:07 +1300 An unused variable caused a compiler warning on BF animal fairywren, an
snprintf() call was redundant, and some buffer sizes were inconsistent.
Per code review from Tom Lane.
The Makefile's test ifeq ($(PORTNAME), win32) never succeeded due to a
circularity, so only Meson builds were actually compiling the new test
code, partially explaining why CI didn't tell us about the warning
sooner (the other problem being that CompilerWarnings only makes
world-bin, a problem for another commit). Simplify.
Backpatch-through: 16, like commit c507ba55
Author: Bryan Green <dbryan.green@gmail.com>
Co-authored-by: Thomas Munro <tmunro@gmail.com>
Reported-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/1086088.1765593851%40sss.pgh.pa.us M src/test/modules/test_cloexec/Makefile
M src/test/modules/test_cloexec/test_cloexec.c
Add guard to prevent recursive memory context logging.
commit : 3853f61681e850e846150b38cf9f5ebb34523740
author : Fujii Masao <fujii@postgresql.org>
date : Fri, 19 Dec 2025 12:08:20 +0900
committer: Fujii Masao <fujii@postgresql.org>
date : Fri, 19 Dec 2025 12:08:20 +0900 Previously, if memory context logging was triggered repeatedly and
rapidly while a previous request was still being processed, it could
result in recursive calls to ProcessLogMemoryContextInterrupt().
This could lead to infinite recursion and potentially crash the process.
This commit adds a guard to prevent such recursion.
If ProcessLogMemoryContextInterrupt() is already in progress and
logging memory contexts, subsequent calls will exit immediately,
avoiding unintended recursive calls.
While this scenario is unlikely in practice, it's not impossible, so
the safety check is worth having to prevent such failures.
Back-patch to v14, where memory context logging was introduced.
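The guard follows a familiar re-entrancy pattern; here is a minimal standalone sketch with generic names (not the mcxt.c code, which also has to make sure the flag is reset if an error interrupts the dump):
#include <stdbool.h>
#include <stdio.h>

static void
log_memory_contexts(void)
{
    /* Re-entrancy guard: set while a dump is already in progress. */
    static bool in_progress = false;

    if (in_progress)
        return;                 /* a second request arrived mid-dump: ignore it */

    in_progress = true;

    /* ... walk the context tree and report statistics here ... */
    printf("logging memory contexts\n");

    in_progress = false;        /* the real code must also reset this on error */
}

int
main(void)
{
    log_memory_contexts();
    return 0;
}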
Reported-by: Robert Haas <robertmhaas@gmail.com>
Author: Fujii Masao <masao.fujii@gmail.com>
Reviewed-by: Atsushi Torikoshi <torikoshia@oss.nttdata.com>
Reviewed-by: Robert Haas <robertmhaas@gmail.com>
Reviewed-by: Artem Gavrilov <artem.gavrilov@percona.com>
Discussion: https://postgr.es/m/CA+TgmoZMrv32tbNRrFTvF9iWLnTGqbhYSLVcrHGuwZvCtph0NA@mail.gmail.com
Backpatch-through: 14 M src/backend/utils/mmgr/mcxt.c
Do not emit WAL for unlogged BRIN indexes
commit : a5277700e47862e9f83b0695fb34ffff0ea2fa34
author : Heikki Linnakangas <heikki.linnakangas@iki.fi>
date : Thu, 18 Dec 2025 15:08:48 +0200
committer: Heikki Linnakangas <heikki.linnakangas@iki.fi>
date : Thu, 18 Dec 2025 15:08:48 +0200 Operations on unlogged relations should not be WAL-logged. The
brin_initialize_empty_new_buffer() function didn't get the memo.
The function is only called when a concurrent update to a brin page
uses up space that we're just about to insert to, which makes it
pretty hard to hit. If you do manage to hit it, a full-page WAL record
is erroneously emitted for the unlogged index. If you then crash,
crash recovery will fail on that record with an error like this:
FATAL: could not create file "base/5/32819": File exists
Author: Kirill Reshke <reshkekirill@gmail.com>
Discussion: https://www.postgresql.org/message-id/CALdSSPhpZXVFnWjwEBNcySx_vXtXHwB2g99gE6rK0uRJm-3GgQ@mail.gmail.com
Backpatch-through: 14 M src/backend/access/brin/brin_pageops.c
Update .abi-compliance-history for PrepareToInvalidateCacheTuple().
commit : 2655d2e47803f9075dd6354f7c317037f2c32e8f
author : Noah Misch <noah@leadboat.com>
date : Wed, 17 Dec 2025 09:48:56 -0800
committer: Noah Misch <noah@leadboat.com>
date : Wed, 17 Dec 2025 09:48:56 -0800 Commit 0f69beddea113dd1d6c5b6f6d82df577ef3c21f2 (v17) anticipated this:
[C] 'function void PrepareToInvalidateCacheTuple(Relation, HeapTuple, HeapTuple, void (int, typedef uint32, typedef Oid)*)' has some sub-type changes:
parameter 5 of type 'void*' was added
parameter 4 of type 'void (int, typedef uint32, typedef Oid)*' changed:
pointer type changed from: 'void (int, typedef uint32, typedef Oid)*' to: 'void (int, typedef uint32, typedef Oid, void*)*'
Discussion: https://postgr.es/m/20240523000548.58.nmisch@google.com
Backpatch-through: 14-17 M .abi-compliance-history
Assert lack of hazardous buffer locks before possible catalog read.
commit : 27e4fad9804c0dabe15b00168ad9b65696bedff5
author : Noah Misch <noah@leadboat.com>
date : Tue, 16 Dec 2025 16:13:54 -0800
committer: Noah Misch <noah@leadboat.com>
date : Tue, 16 Dec 2025 16:13:54 -0800 Commit 0bada39c83a150079567a6e97b1a25a198f30ea3 fixed a bug of this kind,
which existed in all branches for six days before detection. While the
probability of reaching the trouble was low, the disruption was extreme. No
new backends could start, and service restoration needed an immediate
shutdown. Hence, add this to catch the next bug like it.
The new check in RelationIdGetRelation() suffices to make autovacuum detect
the bug in commit 243e9b40f1b2dd09d6e5bf91ebf6e822a2cd3704 that led to commit
0bada39. This commit also adds checks in a number of similar places.
It replaces each
Assert(IsTransactionState()) that pertained to a conditional catalog read.
Back-patch to v14 - v17. This is a back-patch of commit
f4ece891fc2f3f96f0571720a1ae30db8030681b (from before v18 branched) to
all supported branches, to accompany the back-patch of commits 243e9b4
and 0bada39. For catalog indexes, the bttextcmp() behavior that
motivated IsCatalogTextUniqueIndexOid() was v18-specific. Hence, this
back-patch doesn't need that or its correction from commit
4a4ee0c2c1e53401924101945ac3d517c0a8a559.
Reported-by: Alexander Lakhin <exclusion@gmail.com>
Discussion: https://postgr.es/m/20250410191830.0e.nmisch@google.com
Discussion: https://postgr.es/m/10ec0bc3-5933-1189-6bb8-5dec4114558e@gmail.com
Backpatch-through: 14-17 M src/backend/storage/buffer/bufmgr.c
M src/backend/storage/lmgr/lwlock.c
M src/backend/utils/adt/pg_locale.c
M src/backend/utils/cache/catcache.c
M src/backend/utils/cache/inval.c
M src/backend/utils/cache/relcache.c
M src/backend/utils/mb/mbutils.c
M src/include/storage/bufmgr.h
M src/include/storage/lwlock.h
M src/include/utils/relcache.h
WAL-log inplace update before revealing it to other sessions.
commit : 720e9304fa0d395d26f4218f4c13bfb868736ce8
author : Noah Misch <noah@leadboat.com>
date : Tue, 16 Dec 2025 16:13:54 -0800
committer: Noah Misch <noah@leadboat.com>
date : Tue, 16 Dec 2025 16:13:54 -0800 A buffer lock won't stop a reader having already checked tuple
visibility. If vac_update_datfrozenxid() ran and then a crash happened
during an inplace update of a relfrozenxid value, datfrozenxid could
overtake relfrozenxid. That could lead to "could not access status of
transaction" errors.
Back-patch to v14 - v17. This is a back-patch of commits:
- 8e7e672cdaa6bfec85d4d5dd9be84159df23bb41
(main change, on master, before v18 branched)
- 818013665259d4242ba641aad705ebe5a3e2db8e
(defect fix, on master, before v18 branched)
It reverses commit bc6bad88572501aecaa2ac5d4bc900ac0fd457d5, my revert
of the original back-patch.
In v14, this also back-patches the assertion removal from commit
7fcf2faf9c7dd473208fd6d5565f88d7f733782b.
Discussion: https://postgr.es/m/20240620012908.92.nmisch@google.com
Backpatch-through: 14-17 M src/backend/access/heap/README.tuplock
M src/backend/access/heap/heapam.c
M src/include/storage/proc.h
For inplace update, send nontransactional invalidations.
commit : 1d7b02711f70f1ae87be562bca11ea2a9c43e85b
author : Noah Misch <noah@leadboat.com>
date : Tue, 16 Dec 2025 16:13:54 -0800
committer: Noah Misch <noah@leadboat.com>
date : Tue, 16 Dec 2025 16:13:54 -0800 The inplace update survives ROLLBACK. The inval didn't, so another
backend's DDL could then update the row without incorporating the
inplace update. In the test this fixes, a mix of CREATE INDEX and ALTER
TABLE resulted in a table with an index, yet relhasindex=f. That is a
source of index corruption.
Back-patch to v14 - v17. This is a back-patch of commits:
- 243e9b40f1b2dd09d6e5bf91ebf6e822a2cd3704
(main change, on master, before v18 branched)
- 0bada39c83a150079567a6e97b1a25a198f30ea3
(defect fix, on master, before v18 branched)
- bae8ca82fd00603ebafa0658640d6e4dfe20af92
(cosmetics from post-commit review, on REL_18_STABLE)
It reverses commit c1099dd745b0135960895caa8892a1873ac6cbe5, my revert
of the original back-patch of 243e9b4.
This back-patch omits the non-comment heap_decode() changes. I find
those changes removed harmless code that was last necessary in v13. See
discussion thread for details. The back branches aren't the place to
remove such code.
Like the original back-patch, this doesn't change WAL, because these
branches use end-of-recovery SIResetAll(). All branches change the ABI
of extern function PrepareToInvalidateCacheTuple(). No PGXN extension
calls that, and there's no apparent use case in extensions. Expect
".abi-compliance-history" edits to follow.
Reviewed-by: Paul A Jungwirth <pj@illuminatedcomputing.com>
Reviewed-by: Surya Poondla <s_poondla@apple.com>
Reviewed-by: Ilyasov Ian <ianilyasov@outlook.com>
Reviewed-by: Nitin Motiani <nitinmotiani@google.com> (in earlier versions)
Reviewed-by: Andres Freund <andres@anarazel.de> (in earlier versions)
Discussion: https://postgr.es/m/20240523000548.58.nmisch@google.com
Backpatch-through: 14-17 M src/backend/access/heap/README.tuplock
M src/backend/access/heap/heapam.c
M src/backend/access/transam/xact.c
M src/backend/catalog/index.c
M src/backend/replication/logical/decode.c
M src/backend/utils/cache/catcache.c
M src/backend/utils/cache/inval.c
M src/backend/utils/cache/syscache.c
M src/include/utils/catcache.h
M src/include/utils/inval.h
M src/test/isolation/expected/inplace-inval.out
M src/test/isolation/specs/inplace-inval.spec
M src/tools/pgindent/typedefs.list
Reorder two functions in inval.c
commit : ed75434c45c3fe47ffcf6a7bff3563edd08e648e
author : Michael Paquier <michael@paquier.xyz>
date : Tue, 7 Nov 2023 11:55:13 +0900
committer: Noah Misch <noah@leadboat.com>
date : Tue, 7 Nov 2023 11:55:13 +0900 This file separates public and static functions with a separator
comment, but two routines were not defined in a location reflecting
that, so reorder them.
Back-patch commit c2bdd2c5b1d48a7e39e1a8d5e1d90b731b53c4c9 to v15 - v16.
This avoids merge conflicts in the next commit, which modifies a
function that this commit moves. Exclude v14, which is so different that the merge
conflict savings would be immaterial.
Author: Aleksander Alekseev <aleksander@timescale.com>
Reviewed-by: Álvaro Herrera <alvherre@kurilemu.de>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/CAJ7c6TMX2dd0g91UKvcC+CVygKQYJkKJq1+ZzT4rOK42+b53=w@mail.gmail.com
Backpatch-through: 15-16 M src/backend/utils/cache/inval.c
Fix multibyte issue in ltree_strncasecmp().
commit : b80227c0a54ccc6c358a24f2a8772667d0a4f0d4
author : Jeff Davis <jdavis@postgresql.org>
date : Tue, 16 Dec 2025 10:35:40 -0800
committer: Jeff Davis <jdavis@postgresql.org>
date : Tue, 16 Dec 2025 10:35:40 -0800 Previously, the API for ltree_strncasecmp() took two inputs but only
one length (that of the smaller input). It truncated the larger input
to that length, but that could break a multibyte sequence.
Change the API to be a check for prefix equality (possibly
case-insensitive) instead, which is all that's needed by the
callers. Also, provide the lengths of both inputs.
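A standalone sketch of the new API shape (byte-wise tolower() is only an ASCII stand-in for the encoding-aware case folding the real code needs): both lengths are passed, and neither input has to be truncated in place.
#include <ctype.h>
#include <stdbool.h>
#include <stddef.h>

/*
 * Is "prefix" (prefix_len bytes) a case-insensitive prefix of "str"
 * (str_len bytes)?  Taking both lengths means neither string has to be
 * cut down to the other's length, which for multi-byte encodings could
 * split a character in half.
 */
static bool
is_ci_prefix(const char *prefix, size_t prefix_len,
             const char *str, size_t str_len)
{
    if (prefix_len > str_len)
        return false;

    for (size_t i = 0; i < prefix_len; i++)
    {
        if (tolower((unsigned char) prefix[i]) !=
            tolower((unsigned char) str[i]))
            return false;
    }
    return true;
}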
Reviewed-by: Chao Li <li.evan.chao@gmail.com>
Reviewed-by: Peter Eisentraut <peter@eisentraut.org>
Discussion: https://postgr.es/m/5f65b85740197ba6249ea507cddf609f84a6188b.camel%40j-davis.com
Backpatch-through: 14 M contrib/ltree/lquery_op.c
M contrib/ltree/ltree.h
M contrib/ltree/ltxtquery_op.c
Switch memory contexts in ReinitializeParallelDSM.
commit : 12c2f843cddae802302cb1a197a5b80dd5b3a04c
author : Robert Haas <rhaas@postgresql.org>
date : Tue, 16 Dec 2025 10:40:53 -0500
committer: Robert Haas <rhaas@postgresql.org>
date : Tue, 16 Dec 2025 10:40:53 -0500 We already do this in CreateParallelContext, InitializeParallelDSM, and
LaunchParallelWorkers. I suspect the reason why the matching logic was
omitted from ReinitializeParallelDSM is that I failed to realize that
any memory allocation was happening here -- but shm_mq_attach does
allocate, which could result in a shm_mq_handle being allocated in a
shorter-lived context than the ParallelContext which points to it.
That could result in a crash if the shorter-lived context is freed
before the parallel context is destroyed. As far as I am currently
aware, there is no way to reach a crash using only code that is
present in core PostgreSQL, but extensions could potentially trip
over this. Fixing this in the back-branches appears low-risk, so
back-patch to all supported versions.
Author: Jakub Wartak <jakub.wartak@enterprisedb.com>
Co-authored-by: Jeevan Chalke <jeevan.chalke@enterprisedb.com>
Backpatch-through: 14
Discussion: http://postgr.es/m/CAKZiRmwfVripa3FGo06=5D1EddpsLu9JY2iJOTgbsxUQ339ogQ@mail.gmail.com M src/backend/access/transam/parallel.c
Fail recovery when missing redo checkpoint record without backup_label
commit : 1aa57e9ed548c8cb6371a6a43f3ed90b2c16fc79
author : Michael Paquier <michael@paquier.xyz>
date : Tue, 16 Dec 2025 13:29:41 +0900
committer: Michael Paquier <michael@paquier.xyz>
date : Tue, 16 Dec 2025 13:29:41 +0900 This commit adds an extra check at the beginning of recovery to ensure
that the redo record of a checkpoint exists before attempting WAL
replay, logging a PANIC if the redo record referenced by the checkpoint
record could not be found. This is the same level of failure as when a
checkpoint record is missing. This check is added when a cluster is
started without a backup_label, after retrieving its checkpoint record.
The redo LSN used for the check is retrieved from the checkpoint record
successfully read.
In the case where a backup_label exists, the startup process already
fails if the redo record cannot be found after reading a checkpoint
record at the beginning of recovery.
Previously, the presence of the redo record was not checked. If the
redo and checkpoint records were located on different WAL segments, it
would be possible to miss an entire range of WAL records that should
have been replayed but were simply ignored. The consequences of missing
the redo record depend on the version involved, and they become worse
the older the version is:
- On HEAD, v18 and v17, recovery fails with a pointer dereference at the
beginning of the redo loop, as the redo record is expected but cannot be
found. These versions at least detect a failure before doing anything,
even if the failure is misleading, taking the shape of a segmentation
fault that gives no hint that the redo record is missing.
- In v16 and v15, problems show up at the end of recovery within
FinishWalRecovery(), with the startup process using a bogus LSN to
decide where to start writing WAL. The cluster gets corrupted, but at
least it is noisy about it.
- v14 and older versions are worse: the cluster gets corrupted and is
entirely silent about the matter. The missing redo record causes the
startup process to skip recovery entirely, because a missing record is
treated the same as no redo being required at all. This leads to data
loss, as everything between the redo record and the checkpoint record
is skipped.
Note that I have tested this down to 9.4, reproducing the issue with a
slightly modified version of the author's reproducer. The code has been
wrong since at least 9.2, but I did not look for the exact point of
origin.
This problem was found while debugging a v15 cluster where the WAL
segment containing the redo record was missing due to an operator
error, leading to a crash.
Requesting archive recovery with the creation of a recovery.signal or
a standby.signal even without a backup_label would mitigate the issue:
if the record cannot be found in pg_wal/, the missing segment can be
retrieved with a restore_command when checking that the redo record
exists. This was already the case without this commit, where recovery
would re-fetch the WAL segment that includes the redo record. The check
introduced by this commit causes the segment to be retrieved earlier, to
make sure that the redo record can be found.
On HEAD, the code will be slightly changed in a follow-up commit to not
rely on a PANIC, to include a test able to emulate the original problem.
This is a minimal backpatchable fix, kept separated for clarity.
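A loose, standalone analogy for the added check (ordinary stdio in place of the WAL reader; the real check lives in xlogrecovery.c and reports PANIC): verify that the starting position named by the checkpoint is actually readable before beginning replay, instead of discovering the hole later or silently skipping replay.
#include <stdio.h>
#include <stdlib.h>

/* Fail hard if the "redo" starting position recorded by the checkpoint
 * cannot be read, mirroring the PANIC the commit adds at the start of
 * recovery when no backup_label is present. */
static void
check_redo_readable(FILE *wal, long redo_offset)
{
    if (fseek(wal, redo_offset, SEEK_SET) != 0 || fgetc(wal) == EOF)
    {
        fprintf(stderr,
                "PANIC: could not find redo location %ld referenced by checkpoint\n",
                redo_offset);
        exit(EXIT_FAILURE);
    }
}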
Reported-by: Andres Freund <andres@anarazel.de>
Analyzed-by: Andres Freund <andres@anarazel.de>
Author: Nitin Jadhav <nitinjadhavpostgres@gmail.com>
Discussion: https://postgr.es/m/20231023232145.cmqe73stvivsmlhs@awork3.anarazel.de
Discussion: https://postgr.es/m/CAMm1aWaaJi2w49c0RiaDBfhdCL6ztbr9m=daGqiOuVdizYWYaA@mail.gmail.com
Backpatch-through: 14 M src/backend/access/transam/xlogrecovery.c
Clarify comment on multixid offset wraparound check
commit : 7d42e2367c6bdd5538fc914cd6858133bfb63f79
author : Heikki Linnakangas <heikki.linnakangas@iki.fi>
date : Mon, 15 Dec 2025 11:47:04 +0200
committer: Heikki Linnakangas <heikki.linnakangas@iki.fi>
date : Mon, 15 Dec 2025 11:47:04 +0200 Coverity complained that offset cannot be 0 here because there's an
explicit check for "offset == 0" earlier in the function, but it
didn't see the possibility that offset could've wrapped around to 0.
The code is correct, but clarify the comment about it.
The same code exists in backbranches in the server
GetMultiXactIdMembers() function and in 'master' in the pg_upgrade
GetOldMultiXactIdSingleMember function. In backbranches Coverity
didn't complain about it because the check was merely an assertion,
but change the comment in all supported branches for consistency.
Per Tom Lane's suggestion.
Discussion: https://www.postgresql.org/message-id/1827755.1765752936@sss.pgh.pa.us M src/backend/access/transam/multixact.c
Fix allocation formula in llvmjit_expr.c
commit : 5a4dc4aabd0345a194d27219f2424eb3dd3bf8fb
author : Michael Paquier <michael@paquier.xyz>
date : Thu, 11 Dec 2025 10:25:48 +0900
committer: Michael Paquier <michael@paquier.xyz>
date : Thu, 11 Dec 2025 10:25:48 +0900 An array of LLVMBasicBlockRef is allocated with the size used for an
element being "LLVMBasicBlockRef *" rather than "LLVMBasicBlockRef".
LLVMBasicBlockRef is itself a pointer type, so this did not directly
cause a problem because both sizes are the same; still, it is
incorrect.
This issue has been spotted while reviewing a different patch, and
exists since 2a0faed9d702, so backpatch all the way down.
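A standalone restatement of the mistake, with a generic pointer typedef standing in for LLVMBasicBlockRef:
#include <stdlib.h>

typedef struct Block *BlockRef;     /* stand-in for LLVMBasicBlockRef */

static BlockRef *
allocate_blocks(int n)
{
    /* Wrong element size: sizeof a pointer *to* the element type ... */
    BlockRef   *wrong = malloc(sizeof(BlockRef *) * n);

    /* ... versus the element type itself, which is what the array holds. */
    BlockRef   *right = malloc(sizeof(BlockRef) * n);

    /*
     * Because BlockRef is itself a pointer type, both expressions happen
     * to yield the same size, which is why the bug was harmless in
     * practice; the second form is still the one that matches the
     * array's element type.
     */
    free(wrong);
    return right;
}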
Discussion: https://postgr.es/m/CA+hUKGLngd9cKHtTUuUdEo2eWEgUcZ_EQRbP55MigV2t_zTReg@mail.gmail.com
Backpatch-through: 14 M src/backend/jit/llvm/llvmjit_expr.c
Fix O_CLOEXEC flag handling in Windows port.
commit : d62a258cd45a255866b79577c85c68ea99b653aa
author : Thomas Munro <tmunro@postgresql.org>
date : Wed, 10 Dec 2025 09:01:35 +1300
committer: Thomas Munro <tmunro@postgresql.org>
date : Wed, 10 Dec 2025 09:01:35 +1300 PostgreSQL's src/port/open.c has always set bInheritHandle = TRUE
when opening files on Windows, making all file descriptors inheritable
by child processes. This meant the O_CLOEXEC flag, added to many call
sites by commit 1da569ca1f (v16), was silently ignored.
The original commit included a comment suggesting that our open()
replacement doesn't create inheritable handles, but it was a mis-
understanding of the code path. In practice, the code was creating
inheritable handles in all cases.
This hasn't caused widespread problems because most child processes
(archive_command, COPY PROGRAM, etc.) operate on file paths passed as
arguments rather than inherited file descriptors. Even if a child
wanted to use an inherited handle, it would need to learn the numeric
handle value, which isn't passed through our IPC mechanisms.
Nonetheless, the current behavior is wrong. It violates documented
O_CLOEXEC semantics, contradicts our own code comments, and makes
PostgreSQL behave differently on Windows than on Unix. It also creates
potential issues with future code or security auditing tools.
To fix, define O_CLOEXEC to _O_NOINHERIT in master, previously used by
O_DSYNC. We use different values in the back branches to preserve
existing values. In pgwin32_open_handle() we set bInheritHandle
according to whether O_CLOEXEC is specified, for the same atomic
semantics as POSIX in multi-threaded programs that create processes.
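A hedged sketch of the idea on the Win32 API, with a hypothetical helper name (the real change is in pgwin32_open_handle() in src/port/open.c, and the back branches keep their existing O_CLOEXEC bit values): the inheritability decision is made atomically at handle creation from the O_CLOEXEC flag.
#include <windows.h>
#include <fcntl.h>

#ifndef O_CLOEXEC
#define O_CLOEXEC _O_NOINHERIT      /* assumption: mirrors the master-branch define */
#endif

/* Hypothetical simplified opener: the handle is inheritable only when
 * the caller did NOT ask for close-on-exec. */
static HANDLE
open_handle_sketch(const char *path, int flags)
{
    SECURITY_ATTRIBUTES sa;

    sa.nLength = sizeof(sa);
    sa.lpSecurityDescriptor = NULL;
    sa.bInheritHandle = (flags & O_CLOEXEC) ? FALSE : TRUE;

    return CreateFileA(path, GENERIC_READ, FILE_SHARE_READ,
                       &sa, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
}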
Backpatch-through: 16
Author: Bryan Green <dbryan.green@gmail.com>
Co-authored-by: Thomas Munro <thomas.munro@gmail.com> (minor adjustments)
Discussion: https://postgr.es/m/e2b16375-7430-4053-bda3-5d2194ff1880%40gmail.com M src/include/port.h
M src/include/port/win32_port.h
M src/port/open.c
M src/test/modules/Makefile
M src/test/modules/meson.build
A src/test/modules/test_cloexec/Makefile
A src/test/modules/test_cloexec/meson.build
A src/test/modules/test_cloexec/t/001_cloexec.pl
A src/test/modules/test_cloexec/test_cloexec.c
doc: Fix statement about ON CONFLICT and deferrable constraints.
commit : 8348004b54a7e7ff155db25844ffa64da3ba7339
author : Dean Rasheed <dean.a.rasheed@gmail.com>
date : Tue, 9 Dec 2025 10:49:18 +0000
committer: Dean Rasheed <dean.a.rasheed@gmail.com>
date : Tue, 9 Dec 2025 10:49:18 +0000 The description of deferrable constraints in create_table.sgml states
that deferrable constraints cannot be used as conflict arbitrators in
an INSERT with an ON CONFLICT DO UPDATE clause, but in fact this
restriction applies to all ON CONFLICT clauses, not just those with DO
UPDATE. Fix this, and while at it, change the word "arbitrators" to
"arbiters", to match the terminology used elsewhere.
Author: Dean Rasheed <dean.a.rasheed@gmail.com>
Discussion: https://postgr.es/m/CAEZATCWsybvZP3ce8rGcVNx-QHuDOJZDz8y=p1SzqHwjRXyV4Q@mail.gmail.com
Backpatch-through: 14 M doc/src/sgml/ref/create_table.sgml
Doc: fix typo in hash index documentation
commit : 08e1ea3b285924fc3ff435bb4392bd10a013e889
author : David Rowley <drowley@postgresql.org>
date : Tue, 9 Dec 2025 14:43:03 +1300
committer: David Rowley <drowley@postgresql.org>
date : Tue, 9 Dec 2025 14:43:03 +1300 Plus a similar fix to the README.
Backpatch as far back as the sgml issue exists. The README issue does
exist in v14, but that seems unlikely to harm anyone.
Author: David Geier <geidav.pg@gmail.com>
Discussion: https://postgr.es/m/ed3db7ea-55b4-4809-86af-81ad3bb2c7d3@gmail.com
Backpatch-through: 15 M doc/src/sgml/hash.sgml
M src/backend/access/hash/README
Fix setting next multixid's offset at offset wraparound
commit : 4d689a17693ed65461b2f3a02c24c98f30d930d0
author : Heikki Linnakangas <heikki.linnakangas@iki.fi>
date : Fri, 5 Dec 2025 11:32:38 +0200
committer: Heikki Linnakangas <heikki.linnakangas@iki.fi>
date : Fri, 5 Dec 2025 11:32:38 +0200 In commit 789d65364c, we started updating the next multixid's offset
too when recording a multixid, so that it can always be used to
calculate the number of members. I got it wrong at offset wraparound:
we need to skip over offset 0. Fix that.
Discussion: https://www.postgresql.org/message-id/d9996478-389a-4340-8735-bfad456b313c@iki.fi
Backpatch-through: 14 M src/backend/access/transam/multixact.c
Show version of nodes in output of TAP tests
commit : b38feca1ce007adbd0db534e02f6d6b46b968051
author : Michael Paquier <michael@paquier.xyz>
date : Fri, 5 Dec 2025 09:21:20 +0900
committer: Michael Paquier <michael@paquier.xyz>
date : Fri, 5 Dec 2025 09:21:20 +0900 This commit adds the version information of a node initialized by
Cluster.pm, which may vary depending on the install_path given by the
test. Previously, the code dumped the node information, which includes
the version number, before the version number had been set.
This is particularly useful for the pg_upgrade TAP tests, which may mix
several versions for cross-version runs. The TAP infrastructure also
allows mixing nodes with different versions, so this information can be
useful for out-of-core tests.
Backpatch down to v15, where Cluster.pm and the pg_upgrade TAP tests
have been introduced.
Author: Potapov Alexander <a.potapov@postgrespro.com>
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Discussion: https://postgr.es/m/e59bb-692c0a80-5-6f987180@170377126
Backpatch-through: 15 M src/test/perl/PostgreSQL/Test/Cluster.pm
Set next multixid's offset when creating a new multixid
commit : 6351669130782ed01eed3aeefded171789d0bc35
author : Heikki Linnakangas <heikki.linnakangas@iki.fi>
date : Wed, 3 Dec 2025 19:15:08 +0200
committer: Heikki Linnakangas <heikki.linnakangas@iki.fi>
date : Wed, 3 Dec 2025 19:15:08 +0200 With this commit, the next multixid's offset will always be set on the
offsets page by the time that a backend might try to read it, so we
no longer need the waiting mechanism with the condition variable. In
other words, this eliminates "corner case 2" mentioned in the
comments.
The waiting mechanism was broken in a few scenarios:
- When nextMulti was advanced without WAL-logging the next
multixid. For example, if a later multixid was already assigned and
WAL-logged before the previous one was WAL-logged, and then the
server crashed. In that case the next offset would never be set in
the offsets SLRU, and a query trying to read it would get stuck
waiting for it. Same thing could happen if pg_resetwal was used to
forcibly advance nextMulti.
- In hot standby mode, a deadlock could happen where one backend waits
for the next multixid assignment record, but WAL replay is not
advancing because of a recovery conflict with the waiting backend.
The old TAP test used carefully placed injection points to exercise
the old waiting code, but now that the waiting code is gone, much of
the old test is no longer relevant. Rewrite the test to reproduce the
IPC/MultixactCreation hang after crash recovery instead, and to verify
that previously recorded multixids stay readable.
Backpatch to all supported versions. In back-branches, we still need
to be able to read WAL that was generated before this fix, so in the
back-branches this includes a hack to initialize the next offsets page
when replaying XLOG_MULTIXACT_CREATE_ID for the last multixid on a
page. On 'master', bump XLOG_PAGE_MAGIC instead to indicate that the
WAL is not compatible.
Author: Andrey Borodin <amborodin@acm.org>
Reviewed-by: Dmitry Yurichev <dsy.075@yandex.ru>
Reviewed-by: Álvaro Herrera <alvherre@kurilemu.de>
Reviewed-by: Kirill Reshke <reshkekirill@gmail.com>
Reviewed-by: Ivan Bykov <i.bykov@modernsys.ru>
Reviewed-by: Chao Li <li.evan.chao@gmail.com>
Discussion: https://www.postgresql.org/message-id/172e5723-d65f-4eec-b512-14beacb326ce@yandex.ru
Backpatch-through: 14 M src/backend/access/transam/multixact.c
Fix amcheck's handling of half-dead B-tree pages
commit : 1829016268c3bed87db1faee22ad9d205c8b6eb5
author : Heikki Linnakangas <heikki.linnakangas@iki.fi>
date : Tue, 2 Dec 2025 21:11:15 +0200
committer: Heikki Linnakangas <heikki.linnakangas@iki.fi>
date : Tue, 2 Dec 2025 21:11:15 +0200 amcheck incorrectly reported the following error if there were any
half-dead pages in the index:
ERROR: mismatch between parent key and child high key in index
"amchecktest_id_idx"
It's expected that a half-dead page does not have a downlink in the
parent level, so skip the test.
Reported-by: Konstantin Knizhnik <knizhnik@garret.ru>
Reviewed-by: Peter Geoghegan <pg@bowt.ie>
Reviewed-by: Mihail Nikalayeu <mihailnikalayeu@gmail.com>
Discussion: https://www.postgresql.org/message-id/33e39552-6a2a-46f3-8b34-3f9f8004451f@garret.ru
Backpatch-through: 14 M contrib/amcheck/verify_nbtree.c
Fix amcheck's handling of incomplete root splits in B-tree
commit : f2a6df9fd56dd8a49c91a85722eac0a62d0578c8
author : Heikki Linnakangas <heikki.linnakangas@iki.fi>
date : Tue, 2 Dec 2025 21:10:51 +0200
committer: Heikki Linnakangas <heikki.linnakangas@iki.fi>
date : Tue, 2 Dec 2025 21:10:51 +0200 When the root page is being split, it's normal that the root page
according to the metapage is not marked BTP_ROOT. Fix bogus error in
amcheck about that case.
Reviewed-by: Peter Geoghegan <pg@bowt.ie>
Discussion: https://www.postgresql.org/message-id/abd65090-5336-42cc-b768-2bdd66738404@iki.fi
Backpatch-through: 14 M contrib/amcheck/verify_nbtree.c
Avoid rewriting data-modifying CTEs more than once.
commit : 4d288e33b957ce5023dd1ec24b53f02c8a9e8ba0
author : Dean Rasheed <dean.a.rasheed@gmail.com>
date : Sat, 29 Nov 2025 12:33:04 +0000
committer: Dean Rasheed <dean.a.rasheed@gmail.com>
date : Sat, 29 Nov 2025 12:33:04 +0000 Formerly, when updating an auto-updatable view, or a relation with
rules, if the original query had any data-modifying CTEs, the rewriter
would rewrite those CTEs multiple times as RewriteQuery() recursed
into the product queries. In most cases that was harmless, because
RewriteQuery() is mostly idempotent. However, if the CTE involved
updating an always-generated column, it would trigger an error because
any subsequent rewrite would appear to be attempting to assign a
non-default value to the always-generated column.
This could perhaps be fixed by attempting to make RewriteQuery() fully
idempotent, but that looks quite tricky to achieve, and would probably
be quite fragile, given that more generated-column-type features might
be added in the future.
Instead, fix by arranging for RewriteQuery() to rewrite each CTE
exactly once (by tracking the number of CTEs already rewritten as it
recurses). This has the advantage of being simpler and more efficient,
but it does make RewriteQuery() dependent on the order in which
rewriteRuleAction() joins the CTE lists from the original query and
the rule action, so care must be taken if that is ever changed.
Reported-by: Bernice Southey <bernice.southey@gmail.com>
Author: Bernice Southey <bernice.southey@gmail.com>
Author: Dean Rasheed <dean.a.rasheed@gmail.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Kirill Reshke <reshkekirill@gmail.com>
Discussion: https://postgr.es/m/CAEDh4nyD6MSH9bROhsOsuTqGAv_QceU_GDvN9WcHLtZTCYM1kA@mail.gmail.com
Backpatch-through: 14 M src/backend/rewrite/rewriteHandler.c
M src/test/regress/expected/with.out
M src/test/regress/sql/with.sql
Allow indexscans on partial hash indexes with implied quals.
commit : b497766a8e638f48a71fedfe9b4019d5a7fde6bf
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Thu, 27 Nov 2025 13:09:59 -0500
committer: Tom Lane <tgl@sss.pgh.pa.us>
date : Thu, 27 Nov 2025 13:09:59 -0500 Normally, if a WHERE clause is implied by the predicate of a partial
index, we drop that clause from the set of quals used with the index,
since it's redundant to test it if we're scanning that index.
However, if it's a hash index (or any !amoptionalkey index), this
could result in dropping all available quals for the index's first
key, preventing us from generating an indexscan.
It's fair to question the practical usefulness of this case. Since
hash only supports equality quals, the situation could only arise
if the index's predicate is "WHERE indexkey = constant", implying
that the index contains only one hash value, which would make hash
a really poor choice of index type. However, perhaps there are
other !amoptionalkey index AMs out there with which such cases are
more plausible.
To fix, just don't filter the candidate indexquals this way if
the index is !amoptionalkey. That's a bit hokey because it may
result in testing quals we didn't need to test, but to do it
more accurately we'd have to redundantly identify which candidate
quals are actually usable with the index, something we don't know
at this early stage of planning. Doesn't seem worth the effort.
Reported-by: Sergei Glukhov <s.glukhov@postgrespro.ru>
Author: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: David Rowley <dgrowleyml@gmail.com>
Discussion: https://postgr.es/m/e200bf38-6b45-446a-83fd-48617211feff@postgrespro.ru
Backpatch-through: 14 M src/backend/optimizer/path/indxpath.c
M src/test/regress/expected/hash_index.out
M src/test/regress/sql/hash_index.sql
doc: Fix misleading synopsis for CREATE/ALTER PUBLICATION.
commit : fc6e1a0f2bad75bc021024c5a676576962222157
author : Fujii Masao <fujii@postgresql.org>
date : Thu, 27 Nov 2025 23:30:51 +0900
committer: Fujii Masao <fujii@postgresql.org>
date : Thu, 27 Nov 2025 23:30:51 +0900 The documentation for CREATE/ALTER PUBLICATION previously showed:
[ ONLY ] table_name [ * ] [ ( column_name [, ... ] ) ] [ WHERE ( expression ) ] [, ... ]
to indicate that the table/column specification could be repeated.
However, placing [, ... ] directly after a multi-part construct was
misleading and made it unclear which portion was repeatable.
This commit introduces a new term, table_and_columns, to represent:
[ ONLY ] table_name [ * ] [ ( column_name [, ... ] ) ] [ WHERE ( expression ) ]
and updates the synopsis to use:
table_and_columns [, ... ]
which clearly identifies the repeatable element.
Backpatched to v15, where the misleading syntax was introduced.
Author: Peter Smith <smithpb2250@gmail.com>
Reviewed-by: Chao Li <lic@highgo.com>
Reviewed-by: Fujii Masao <masao.fujii@gmail.com>
Discussion: https://postgr.es/m/CAHut+PtsyvYL3KmA6C8f0ZpXQ=7FEqQtETVy-BOF+cm9WPvfMQ@mail.gmail.com
Backpatch-through: 15 M doc/src/sgml/ref/alter_publication.sgml
M doc/src/sgml/ref/create_publication.sgml
doc: Clarify passphrase command reloading on Windows
commit : 54ba4a66fdc4782e04a9454de13962c11d73b090
author : Daniel Gustafsson <dgustafsson@postgresql.org>
date : Wed, 26 Nov 2025 14:24:04 +0100
committer: Daniel Gustafsson <dgustafsson@postgresql.org>
date : Wed, 26 Nov 2025 14:24:04 +0100 When running on Windows (or EXEC_BACKEND) the SSL configuration will
be reloaded on each backend start, so the passphrase command will be
reloaded along with it. This implies that passphrase command reload
must be enabled on Windows for connections to work at all. Document
this since it wasn't mentioned explicitly, and while there, add markup
for the parameter value to match the rest of the docs.
Backpatch to all supported versions.
Author: Daniel Gustafsson <daniel@yesql.se>
Reviewed-by: Chao Li <li.evan.chao@gmail.com>
Reviewed-by: Álvaro Herrera <alvherre@kurilemu.de>
Reviewed-by: Peter Eisentraut <peter@eisentraut.org>
Discussion: https://postgr.es/m/5F301096-921A-427D-8EC1-EBAEC2A35082@yesql.se
Backpatch-through: 14 M doc/src/sgml/config.sgml
lwlock: Fix, currently harmless, bug in LWLockWakeup()
commit : 89c8a1b9069f7f1375b114c85258289f77b03a0f
author : Andres Freund <andres@anarazel.de>
date : Mon, 24 Nov 2025 17:37:09 -0500
committer: Andres Freund <andres@anarazel.de>
date : Mon, 24 Nov 2025 17:37:09 -0500 Accidentally the code in LWLockWakeup() checked the list of to-be-woken up
processes to see if LW_FLAG_HAS_WAITERS should be unset. That means that
HAS_WAITERS would not get unset immediately, but only during the next,
unnecessary, call to LWLockWakeup().
Luckily, as the code stands, this is just a small efficiency issue.
However, if there were (as in a patch of mine) a case in which LWLockWakeup()
would not find any backend to wake, despite the wait list not being empty,
we'd wrongly unset LW_FLAG_HAS_WAITERS, leading to potentially hanging.
While the consequences in the backbranches are limited, the code as-is
is confusing, and it is possible that there are workloads where the
additional wait list lock acquisitions hurt, so backpatch.
Discussion: https://postgr.es/m/fvfmkr5kk4nyex56ejgxj3uzi63isfxovp2biecb4bspbjrze7@az2pljabhnff
Backpatch-through: 14 M src/backend/storage/lmgr/lwlock.c
Fix incorrect IndexOptInfo header comment
commit : 14cdab029287b510fb6872370358feab470bef1f
author : David Rowley <drowley@postgresql.org>
date : Mon, 24 Nov 2025 17:01:34 +1300
committer: David Rowley <drowley@postgresql.org>
date : Mon, 24 Nov 2025 17:01:34 +1300 The comment incorrectly indicated that indexcollations[] stored
collations for both key columns and INCLUDE columns, but in reality it
only has elements for the key columns. canreturn[] didn't get a mention,
so add that while we're here.
Author: Junwang Zhao <zhjwpku@gmail.com>
Reviewed-by: David Rowley <dgrowleyml@gmail.com>
Discussion: https://postgr.es/m/CAEG8a3LwbZgMKOQ9CmZarX5DEipKivdHp5PZMOO-riL0w%3DL%3D4A%40mail.gmail.com
Backpatch-through: 14 M src/include/nodes/pathnodes.h
jit: Adjust AArch64-only code for LLVM 21.
commit : 600acd34b09a7a06c236e503c130ab01c0fb1f5c
author : Thomas Munro <tmunro@postgresql.org>
date : Sat, 22 Nov 2025 20:51:16 +1300
committer: Thomas Munro <tmunro@postgresql.org>
date : Sat, 22 Nov 2025 20:51:16 +1300 LLVM 21 changed the arguments of RTDyldObjectLinkingLayer's
constructor, breaking compilation with the backported
SectionMemoryManager from commit 9044fc1d.
https://github.com/llvm/llvm-project/commit/cd585864c0bbbd74ed2a2b1ccc191eed4d1c8f90
Backpatch-through: 14
Author: Holger Hoffstätte <holger@applied-asynchrony.com>
Reviewed-by: Anthonin Bonnefoy <anthonin.bonnefoy@datadoghq.com>
Discussion: https://postgr.es/m/d25e6e4a-d1b4-84d3-2f8a-6c45b975f53d%40applied-asynchrony.com M src/backend/jit/llvm/llvmjit_wrap.cpp
Print new OldestXID value in pg_resetwal when it's being changed
commit : 890cc81b6ee2ce418d3a71d80de4aa0b0450b3b3
author : Heikki Linnakangas <heikki.linnakangas@iki.fi>
date : Wed, 19 Nov 2025 18:05:42 +0200
committer: Heikki Linnakangas <heikki.linnakangas@iki.fi>
date : Wed, 19 Nov 2025 18:05:42 +0200 Commit 74cf7d46a91d added the --oldest-transaction-id option to
pg_resetwal, but forgot to update the code that prints all the new
values that are being set. Fix that.
Reviewed-by: Bertrand Drouvot <bertranddrouvot.pg@gmail.com>
Discussion: https://www.postgresql.org/message-id/5461bc85-e684-4531-b4d2-d2e57ad18cba@iki.fi
Backpatch-through: 14 M src/bin/pg_resetwal/pg_resetwal.c
Don't allow CTEs to determine semantic levels of aggregates.
commit : 1c8c3206f4e024b582738094c5119f19e1e012ab
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Tue, 18 Nov 2025 12:56:55 -0500
committer: Tom Lane <tgl@sss.pgh.pa.us>
date : Tue, 18 Nov 2025 12:56:55 -0500 The fix for bug #19055 (commit b0cc0a71e) allowed CTE references in
sub-selects within aggregate functions to affect the semantic levels
assigned to such aggregates. It turns out this broke some related
cases, leading to assertion failures or strange planner errors such
as "unexpected outer reference in CTE query". After experimenting
with some alternative rules for assigning the semantic level in
such cases, we've come to the conclusion that changing the level
is more likely to break things than be helpful.
Therefore, this patch undoes what b0cc0a71e changed, and instead
installs logic to throw an error if there is any reference to a
CTE that's below the semantic level that standard SQL rules would
assign to the aggregate based on its contained Var and Aggref nodes.
(The SQL standard disallows sub-selects within aggregate functions,
so it can't reach the troublesome case and hence has no rule for
what to do.)
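(Illustration only, not taken from the patch or its regression tests: the
construct at issue is an aggregate whose argument contains a sub-select
referencing a CTE, schematically as in the made-up query below. Whether any
particular query of this shape is now rejected depends on the standard-SQL
level rules described above.)
WITH cte(x) AS (SELECT 1)
SELECT t.dept,
       max((SELECT x FROM cte)) AS m  -- sub-select inside an aggregate,
                                      -- referencing a CTE
FROM (VALUES ('a'), ('b')) AS t(dept)
GROUP BY t.dept;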
Perhaps someone will come along with a legitimate query that this
logic rejects, and if so probably the example will help us craft
a level-adjustment rule that works better than what b0cc0a71e did.
I'm not holding my breath for that though, because the previous
logic had been there for a very long time before bug #19055 without
complaints, and that bug report sure looks to have originated from
fuzzing, not from real usage.
Like b0cc0a71e, back-patch to all supported branches, though
sadly that no longer includes v13.
Bug: #19106
Reported-by: Kamil Monicz <kamil@monicz.dev>
Author: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/19106-9dd3668a0734cd72@postgresql.org
Backpatch-through: 14 M src/backend/parser/parse_agg.c
M src/test/regress/expected/with.out
M src/test/regress/sql/with.sql
Update .abi-compliance-history for change to CreateStatistics().
commit : 9a991d414e64e8c0a62959c25eb08cfccffcd610
author : Nathan Bossart <nathan@postgresql.org>
date : Mon, 17 Nov 2025 14:14:41 -0600
committer: Nathan Bossart <nathan@postgresql.org>
date : Mon, 17 Nov 2025 14:14:41 -0600 As noted in the commit message for 5e4fcbe531, the addition of a
second parameter to CreateStatistics() breaks ABI compatibility,
but we are unaware of any impacted third-party code. This commit
updates .abi-compliance-history accordingly.
Backpatch-through: 14-18 M .abi-compliance-history
Define PS_USE_CLOBBER_ARGV on GNU/Hurd.
commit : a1407daded69d7b01a96dde648dc5e2246ddefcc
author : Thomas Munro <tmunro@postgresql.org>
date : Mon, 17 Nov 2025 12:01:12 +1300
committer: Thomas Munro <tmunro@postgresql.org>
date : Mon, 17 Nov 2025 12:01:12 +1300 Until d2ea2d310dfdc40328aca5b6c52225de78432e01, the PS_USE_PS_STRINGS
option was used on the GNU/Hurd. Since that option has been removed and
PS_USE_CLOBBER_ARGV appears to work fine on the Hurd nowadays, define
the latter to re-enable process title changes on this platform.
In the 14 and 15 branches, the existing test for __hurd__ (added 25
years ago by commit 209aa77d, removed in 16 by the above commit) is left
unchanged for now as it was activating slightly different code paths and
would need investigation by a Hurd user.
Author: Michael Banck <mbanck@debian.org>
Discussion: https://postgr.es/m/CA%2BhUKGJMNGUAqf27WbckYFrM-Mavy0RKJvocfJU%3DJ2XcAZyv%2Bw%40mail.gmail.com
Backpatch-through: 16 M src/backend/utils/misc/ps_status.c
Doc: include MERGE in variable substitution command list
commit : 2791d4987956096cc1a524ce8bb8843410626e79
author : David Rowley <drowley@postgresql.org>
date : Mon, 17 Nov 2025 10:52:51 +1300
committer: David Rowley <drowley@postgresql.org>
date : Mon, 17 Nov 2025 10:52:51 +1300 Backpatch to 15, where MERGE was introduced.
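(A hedged PL/pgSQL sketch of what the updated list covers; the items table
and the function are made up for illustration. The point is that p_id and
p_val are substituted into the MERGE statement the same way they would be in
INSERT, UPDATE, or DELETE.)
CREATE TABLE items (id int PRIMARY KEY, val text);
CREATE FUNCTION upsert_item(p_id int, p_val text) RETURNS void
LANGUAGE plpgsql AS $$
BEGIN
  -- p_id and p_val are PL/pgSQL parameters substituted into the MERGE
  MERGE INTO items AS i
  USING (SELECT p_id AS id) AS s
  ON i.id = s.id
  WHEN MATCHED THEN
    UPDATE SET val = p_val
  WHEN NOT MATCHED THEN
    INSERT (id, val) VALUES (s.id, p_val);
END;
$$;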
Reported-by: <emorgunov@mail.ru>
Author: David Rowley <dgrowleyml@gmail.com>
Discussion: https://postgr.es/m/176278494385.770.15550176063450771532@wrigleys.postgresql.org
Backpatch-through: 15 M doc/src/sgml/plpgsql.sgml
Add note about CreateStatistics()'s selective use of check_rights.
commit : 414e1ece9d71cece76fcab0abca6816605cb7db5
author : Nathan Bossart <nathan@postgresql.org>
date : Fri, 14 Nov 2025 13:20:09 -0600
committer: Nathan Bossart <nathan@postgresql.org>
date : Fri, 14 Nov 2025 13:20:09 -0600 Commit 5e4fcbe531 added a check_rights parameter to this function
for use by ALTER TABLE commands that re-create statistics objects.
However, we intentionally ignore check_rights when verifying
relation ownership because this function's lookup could return a
different answer than the caller's. This commit adds a note to
this effect so that we remember it down the road.
Reviewed-by: Noah Misch <noah@leadboat.com>
Backpatch-through: 14 M src/backend/commands/statscmds.c
doc: Improve description of RLS policies applied by command type.
commit : 8d43607cd4223c3ff48d87ced11780c8508d26c8
author : Dean Rasheed <dean.a.rasheed@gmail.com>
date : Thu, 13 Nov 2025 12:03:52 +0000
committer: Dean Rasheed <dean.a.rasheed@gmail.com>
date : Thu, 13 Nov 2025 12:03:52 +0000 On the CREATE POLICY page, the "Policies Applied by Command Type"
table was missing MERGE ... THEN DELETE and some of the policies
applied during INSERT ... ON CONFLICT and MERGE. Fix that, and try to
improve readability by listing the various MERGE cases separately,
rather than together with INSERT/UPDATE/DELETE. Mention COPY ... TO
along with SELECT, since it behaves in the same way. In addition,
document which policy violations cause errors to be thrown, and which
just cause rows to be silently ignored.
Also, a paragraph above the table states that INSERT ... ON CONFLICT
DO UPDATE only checks the WITH CHECK expressions of INSERT policies
for rows appended to the relation by the INSERT path, which is
incorrect -- all rows proposed for insertion are checked, regardless
of whether they end up being inserted. Fix that, and also mention that
the same applies to INSERT ... ON CONFLICT DO NOTHING.
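(A rough SQL sketch of that corrected statement; table, policy, and value
names are made up and this is not taken from the patch or its docs. It
assumes the role running it is subject to RLS, i.e. not a superuser or a
role with BYPASSRLS.)
CREATE TABLE rls_demo (id int PRIMARY KEY, val text);
ALTER TABLE rls_demo ENABLE ROW LEVEL SECURITY;
ALTER TABLE rls_demo FORCE ROW LEVEL SECURITY;  -- apply RLS to the owner too
CREATE POLICY demo_sel ON rls_demo FOR SELECT USING (true);
CREATE POLICY demo_upd ON rls_demo FOR UPDATE USING (true) WITH CHECK (true);
CREATE POLICY demo_ins ON rls_demo FOR INSERT WITH CHECK (val <> 'blocked');
INSERT INTO rls_demo VALUES (1, 'ok');
-- Per the corrected wording, the INSERT policy's WITH CHECK expression is
-- applied to every row proposed for insertion, so this should fail with a
-- policy violation even though id = 1 already exists and the row would
-- otherwise be handled by the DO UPDATE path.
INSERT INTO rls_demo VALUES (1, 'blocked')
  ON CONFLICT (id) DO UPDATE SET val = EXCLUDED.val;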
In addition, in various other places on that page, clarify how the
different types of policy are applied to different commands, and
whether or not errors are thrown when policy checks do not pass.
Backpatch to all supported versions. Prior to v17, MERGE did not
support RETURNING, and so MERGE ... THEN INSERT would never check new
rows against SELECT policies. Prior to v15, MERGE was not supported at
all.
Author: Dean Rasheed <dean.a.rasheed@gmail.com>
Reviewed-by: Viktor Holmberg <v@viktorh.net>
Reviewed-by: Jian He <jian.universality@gmail.com>
Discussion: https://postgr.es/m/CAEZATCWqnfeChjK=n1V_dYZT4rt4mnq+ybf9c0qXDYTVMsy8pg@mail.gmail.com
Backpatch-through: 14 M doc/src/sgml/ref/create_policy.sgml
Clear 'xid' in dummy async notify entries written to fill up pages
commit : 0e8eaa2181d477cc739e462295d344db750674e4
author : Heikki Linnakangas <heikki.linnakangas@iki.fi>
date : Wed, 12 Nov 2025 21:19:03 +0200
committer: Heikki Linnakangas <heikki.linnakangas@iki.fi>
date : Wed, 12 Nov 2025 21:19:03 +0200 Before we started to freeze async notify entries (commit 8eeb4a0f7c),
no one looked at the 'xid' on an entry with invalid 'dboid'. But now
we might actually need to freeze it later. Initialize the field to
InvalidTransactionId to begin with, to avoid that work later.
Álvaro pointed this out in review of commit 8eeb4a0f7c, but I forgot
to include this change there.
Author: Álvaro Herrera <alvherre@kurilemu.de>
Discussion: https://www.postgresql.org/message-id/202511071410.52ll56eyixx7@alvherre.pgsql
Backpatch-through: 14 M src/backend/commands/async.c
Fix remaining race condition with CLOG truncation and LISTEN/NOTIFY
commit : 44e8c60be66c7c174431c4cd7948c1ff015fe516
author : Heikki Linnakangas <heikki.linnakangas@iki.fi>
date : Wed, 12 Nov 2025 20:59:44 +0200
committer: Heikki Linnakangas <heikki.linnakangas@iki.fi>
date : Wed, 12 Nov 2025 20:59:44 +0200 The previous commit fixed a bug where VACUUM would truncate the CLOG
that's still needed to check the commit status of XIDs in the async
notify queue, but as mentioned in the commit message, it wasn't a full
fix. If a backend is executing asyncQueueReadAllNotifications() and
has just made a local copy of an async SLRU page which contains old
XIDs, vacuum can concurrently truncate the CLOG covering those XIDs,
and the backend still gets an error when it calls
TransactionIdDidCommit() on those XIDs in the local copy. This commit
fixes that race condition.
To fix, hold the SLRU bank lock across the TransactionIdDidCommit()
calls in NOTIFY processing.
Per Tom Lane's idea. Backpatch to all supported versions.
Reviewed-by: Joel Jacobson <joel@compiler.org>
Reviewed-by: Arseniy Mukhin <arseniy.mukhin.dev@gmail.com>
Discussion: https://www.postgresql.org/message-id/2759499.1761756503@sss.pgh.pa.us
Backpatch-through: 14 M src/backend/commands/async.c
Fix bug where we truncated CLOG that was still needed by LISTEN/NOTIFY
commit : 053e1868b7ee1eacc5c09b11f5a18cab57285a50
author : Heikki Linnakangas <heikki.linnakangas@iki.fi>
date : Wed, 12 Nov 2025 20:59:36 +0200
committer: Heikki Linnakangas <heikki.linnakangas@iki.fi>
date : Wed, 12 Nov 2025 20:59:36 +0200 The async notification queue contains the XID of the sender, and when
processing notifications we call TransactionIdDidCommit() on the
XID. But we had no safeguards to prevent the CLOG segments containing
those XIDs from being truncated away. As a result, if a backend didn't
for some reason process its notifications for a long time, or when a
new backend issued LISTEN, you could get an error like:
test=# listen c21;
ERROR: 58P01: could not access status of transaction 14279685
DETAIL: Could not open file "pg_xact/000D": No such file or directory.
LOCATION: SlruReportIOError, slru.c:1087
To fix, make VACUUM "freeze" the XIDs in the async notification queue
before truncating the CLOG. Old XIDs are replaced with
FrozenTransactionId or InvalidTransactionId.
Note: This commit is not a full fix. A race condition remains, where a
backend is executing asyncQueueReadAllNotifications() and has just
made a local copy of an async SLRU page which contains old XIDs, while
vacuum concurrently truncates the CLOG covering those XIDs. When the
backend then calls TransactionIdDidCommit() on those XIDs from the
local copy, you still get the error. The next commit will fix that
remaining race condition.
This was first reported by Sergey Zhuravlev in 2021, with many other
people hitting the same issue later. Thanks to:
- Alexandra Wang, Daniil Davydov, Andrei Varashen and Jacques Combrink
for investigating and providing reproducible test cases,
- Matheus Alcantara and Arseniy Mukhin for review and earlier proposed
patches to fix this,
- Álvaro Herrera and Masahiko Sawada for reviews,
- Yura Sokolov aka funny-falcon for the idea of marking transactions
as committed in the notification queue, and
- Joel Jacobson for the final patch version. I hope I didn't forget
anyone.
Backpatch to all supported versions. I believe the bug goes back all
the way to commit d1e027221d, which introduced the SLRU-based async
notification queue.
Discussion: https://www.postgresql.org/message-id/16961-25f29f95b3604a8a@postgresql.org
Discussion: https://www.postgresql.org/message-id/18804-bccbbde5e77a68c2@postgresql.org
Discussion: https://www.postgresql.org/message-id/CAK98qZ3wZLE-RZJN_Y%2BTFjiTRPPFPBwNBpBi5K5CU8hUHkzDpw@mail.gmail.com
Backpatch-through: 14 M src/backend/commands/async.c
M src/backend/commands/vacuum.c
M src/include/commands/async.h
Escalate ERRORs during async notify processing to FATAL
commit : c1a5bde003b8a8f1b6d613a4c19fd2c65456d002
author : Heikki Linnakangas <heikki.linnakangas@iki.fi>
date : Wed, 12 Nov 2025 20:59:28 +0200
committer: Heikki Linnakangas <heikki.linnakangas@iki.fi>
date : Wed, 12 Nov 2025 20:59:28 +0200 Previously, if async notify processing encountered an error, we would
report the error to the client and advance our read position past the
offending entry to prevent trying to process it over and over
again. Trying to continue after an error has a few problems, however:
- We have no way of telling the client that a notification was
lost. The client gets an ERROR, but that doesn't tell it much. As such,
it's not clear if keeping the connection alive after losing a
notification is a good thing. Depending on the application logic,
missing a notification could cause the application to get stuck
waiting, for example.
- If the connection is idle, PqCommReadingMsg is set and any ERROR is
turned into FATAL anyway.
- We bailed out of the notification processing loop on first error
without processing any subsequent notifications. The subsequent
notifications would not be processed until another notify interrupt
arrives. For example, if there were two notifications pending, and
processing the first one caused an ERROR, the second notification
would not be processed until someone sent a new NOTIFY.
This commit changes the behavior so that any ERROR while processing
async notifications is turned into FATAL, causing the client
connection to be terminated. That makes the behavior more consistent,
as that's what already happened in the idle state, and terminating the
connection is a clear signal to the application that it might have
missed some notifications.
The reason to do this now is that the next commits will change the
notification processing code in a way that would make it harder to
skip over just the offending notification entry on error.
Reviewed-by: Matheus Alcantara <matheusssilv97@gmail.com>
Reviewed-by: Álvaro Herrera <alvherre@kurilemu.de>
Reviewed-by: Arseniy Mukhin <arseniy.mukhin.dev@gmail.com>
Discussion: https://www.postgresql.org/message-id/fedbd908-4571-4bbe-b48e-63bfdcc38f64@iki.fi
Backpatch-through: 14 M src/backend/commands/async.c
doc: Document effects of ownership change on privileges
commit : ecb884b58ef2bafcc1ee066ab624a775e29f10bd
author : Daniel Gustafsson <dgustafsson@postgresql.org>
date : Wed, 12 Nov 2025 17:04:35 +0100
committer: Daniel Gustafsson <dgustafsson@postgresql.org>
date : Wed, 12 Nov 2025 17:04:35 +0100 Explicitly document that privileges are transferred along with the
ownership. Backpatch to all supported versions since this behavior
has always been present.
Author: Laurenz Albe <laurenz.albe@cybertec.at>
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Reviewed-by: David G. Johnston <david.g.johnston@gmail.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Josef Šimánek <josef.simanek@gmail.com>
Reported-by: Gilles Parc <gparc@free.fr>
Discussion: https://postgr.es/m/2023185982.281851219.1646733038464.JavaMail.root@zimbra15-e2.priv.proxad.net
Backpatch-through: 14 M doc/src/sgml/ddl.sgml
Fix range for commit_siblings in sample conf
commit : 995c971832fe16234bff3b0f01f3a524b1154bd4
author : Daniel Gustafsson <dgustafsson@postgresql.org>
date : Wed, 12 Nov 2025 13:51:53 +0100
committer: Daniel Gustafsson <dgustafsson@postgresql.org>
date : Wed, 12 Nov 2025 13:51:53 +0100 The range for commit_siblings was incorrectly listed as starting at 1
instead of 0 in the sample configuration file. Backpatch down to all
supported branches.
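(A quick way to confirm the intended bounds from SQL; pg_settings reports
each GUC's minimum and maximum, which the sample-file comment is meant to
mirror.)
SELECT name, min_val, max_val, boot_val
FROM pg_settings
WHERE name = 'commit_siblings';
-- expected: min_val = 0, max_val = 1000, boot_val = 5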
Author: Man Zeng <zengman@halodbtech.com>
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Discussion: https://postgr.es/m/tencent_53B70BA72303AE9C6889E78E@qq.com
Backpatch-through: 14 M src/backend/utils/misc/postgresql.conf.sample
Fix pg_upgrade around multixid and mxoff wraparound
commit : e039b09f8d39c4fb76b17b161c1382998e1a47c8
author : Heikki Linnakangas <heikki.linnakangas@iki.fi>
date : Wed, 12 Nov 2025 12:20:16 +0200
committer: Heikki Linnakangas <heikki.linnakangas@iki.fi>
date : Wed, 12 Nov 2025 12:20:16 +0200 pg_resetwal didn't accept multixid 0 or multixact offset UINT32_MAX,
but they are both valid values that can appear in the control file.
That caused pg_upgrade to fail if you tried to upgrade a cluster
exactly at multixid or offset wraparound, because pg_upgrade calls
pg_resetwal to restore multixid/offset on the new cluster to the
values from the old cluster. To fix, allow those values in
pg_resetwal.
Fixes bugs #18863 and #18865 reported by Dmitry Kovalenko.
Backpatch down to v15. Version 14 has the same bug, but the patch
doesn't apply cleanly there. It could be made to work but it doesn't
seem worth the effort given how rare it is to hit this problem with
pg_upgrade, and how few people are upgrading to v14 anymore.
Author: Maxim Orlov <orlovmg@gmail.com>
Discussion: https://www.postgresql.org/message-id/CACG%3DezaApSMTjd%3DM2Sfn5Ucuggd3FG8Z8Qte8Xq9k5-%2BRQis-g@mail.gmail.com
Discussion: https://www.postgresql.org/message-id/18863-72f08858855344a2@postgresql.org
Discussion: https://www.postgresql.org/message-id/18865-d4c66cf35c2a67af@postgresql.org
Backpatch-through: 15 M src/bin/pg_resetwal/pg_resetwal.c
doc: Fix incorrect synopsis for ALTER PUBLICATION ... DROP ...
commit : 807df4918622c9f7480fdb9d6539301f609e4769
author : Fujii Masao <fujii@postgresql.org>
date : Wed, 12 Nov 2025 13:40:43 +0900
committer: Fujii Masao <fujii@postgresql.org>
date : Wed, 12 Nov 2025 13:40:43 +0900 The synopsis for the ALTER PUBLICATION ... DROP ... command incorrectly
implied that a column list and WHERE clause could be specified as part of
the publication object. However, these options are not allowed for
DROP operations, making the documentation misleading.
This commit corrects the synopsis to clearly show only the valid forms
of publication objects.
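(A short self-contained sketch of the distinction; the publication, table,
and column names are made up. Column lists and row filters are accepted by
ADD and SET, but DROP takes only the object name, as the corrected synopsis
shows.)
CREATE TABLE orders (id int PRIMARY KEY, total numeric);
CREATE PUBLICATION pub_sales;
ALTER PUBLICATION pub_sales ADD TABLE orders (id, total) WHERE (id > 0);
ALTER PUBLICATION pub_sales DROP TABLE orders;                -- valid: name only
-- ALTER PUBLICATION pub_sales DROP TABLE orders (id);        -- not allowed
-- ALTER PUBLICATION pub_sales DROP TABLE orders WHERE (id > 0);  -- not allowed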
Backpatched to v15, where the incorrect synopsis was introduced.
Author: Peter Smith <smithpb2250@gmail.com>
Reviewed-by: Fujii Masao <masao.fujii@gmail.com>
Discussion: https://postgr.es/m/CAHut+PsPu+47Q7b0o6h1r-qSt90U3zgbAHMHUag5o5E1Lo+=uw@mail.gmail.com
Backpatch-through: 15 M doc/src/sgml/ref/alter_publication.sgml
Add check for large files in meson.build
commit : d715aaa76f686109f0a6b611f84838901ccd137d
author : Michael Paquier <michael@paquier.xyz>
date : Wed, 12 Nov 2025 09:02:35 +0900
committer: Michael Paquier <michael@paquier.xyz>
date : Wed, 12 Nov 2025 09:02:35 +0900 A similar check existed in the MSVC scripts, which were removed in
v17 by 1301c80b2167, but meson performed no such check when building
with a 4-byte off_t.
This commit adds a check that makes the build fail when a relation file
size larger than 1GB is requested while off_t is only 4 bytes, as
./configure already does, rather than leaving these failures to be
detected at runtime, where the code is not able to handle large files.
Backpatch down to v16, where meson was introduced.
Discussion: https://postgr.es/m/aQ0hG36IrkaSGfN8@paquier.xyz
Backpatch-through: 16 M meson.build