Stamp 18.2.
commit : 5a461dc4dbf72a1ec281394a76eb36d68cbdd935
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Mon, 9 Feb 2026 16:49:49 -0500
committer: Tom Lane <tgl@sss.pgh.pa.us>
date : Mon, 9 Feb 2026 16:49:49 -0500
M configure
M configure.ac
M meson.build
Last-minute updates for release notes.
commit : 30d2603f5c340133ca03e098fcaa9c242843d5e1
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Mon, 9 Feb 2026 14:01:20 -0500
committer: Tom Lane <tgl@sss.pgh.pa.us>
date : Mon, 9 Feb 2026 14:01:20 -0500
Security: CVE-2026-2003, CVE-2026-2004, CVE-2026-2005, CVE-2026-2006, CVE-2026-2007
M doc/src/sgml/release-18.sgml
Fix test "NUL byte in text decrypt" for --without-zlib builds.
commit : 4543b02af3d3077b8505d533dc51bd51fa47b34a
author : Noah Misch <noah@leadboat.com>
date : Mon, 9 Feb 2026 09:08:10 -0800
committer: Noah Misch <noah@leadboat.com>
date : Mon, 9 Feb 2026 09:08:10 -0800
Backpatch-through: 14
Security: CVE-2026-2006
M contrib/pgcrypto/expected/pgp-decrypt.out
M contrib/pgcrypto/expected/pgp-decrypt_1.out
M contrib/pgcrypto/sql/pgp-decrypt.sql
Harden _int_matchsel() against being attached to the wrong operator.
commit : b69af3dda26104b54d4e728c6946edcc79a8ac61
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Mon, 9 Feb 2026 10:14:22 -0500
committer: Tom Lane <tgl@sss.pgh.pa.us>
date : Mon, 9 Feb 2026 10:14:22 -0500
While the preceding commit prevented such attachments from occurring
in future, this one aims to prevent further abuse of any already-
created operator that exposes _int_matchsel to the wrong data types.
(No other contrib module has a vulnerable selectivity estimator.)
We need only check that the Const we've found in the query is indeed
of the type we expect (query_int), but there's a difficulty: as an
extension type, query_int doesn't have a fixed OID that we could
hard-code into the estimator.
Therefore, the bulk of this patch consists of infrastructure to let
an extension function securely look up the OID of a datatype
belonging to the same extension. (Extension authors have requested
such functionality before, so we anticipate that this code will
have additional non-security uses, and may soon be extended to allow
looking up other kinds of SQL objects.)
This is done by first finding the extension that owns the calling
function (there can be only one), and then thumbing through the
objects owned by that extension to find a type that has the desired
name. This is relatively expensive, especially for large extensions,
so a simple cache is put in front of these lookups.
Reported-by: Daniel Firer as part of zeroday.cloud
Author: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Noah Misch <noah@leadboat.com>
Security: CVE-2026-2004
Backpatch-through: 14
M contrib/intarray/_int_selfuncs.c
M src/backend/catalog/pg_depend.c
M src/backend/commands/extension.c
M src/include/catalog/dependency.h
M src/include/commands/extension.h
M src/tools/pgindent/typedefs.list
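The commit above boils down to two ideas: verify that the Const found in the query really has the extension's query_int type, and hide the relatively expensive "find the type owned by my extension" lookup behind a cache. The standalone C sketch below shows that shape only; the lookup function, the cache, and the placeholder OID value are hypothetical stand-ins, not the actual intarray or extension.c code.

#include <stdbool.h>
#include <string.h>

typedef unsigned int Oid;
#define InvalidOid ((Oid) 0)

/*
 * Hypothetical stand-in for the new infrastructure that walks the owning
 * extension's objects looking for a type of the given name.  The real
 * lookup is comparatively expensive, hence the cache below.  The returned
 * value here is a dummy placeholder.
 */
static Oid
lookup_type_owned_by_my_extension(const char *type_name)
{
    return strcmp(type_name, "query_int") == 0 ? 123456 : InvalidOid;
}

/* One-entry cache in front of the expensive lookup. */
static Oid
query_int_type_oid(void)
{
    static Oid cached = InvalidOid;

    if (cached == InvalidOid)
        cached = lookup_type_owned_by_my_extension("query_int");
    return cached;
}

/*
 * The estimator trusts the Const only if its type OID matches the
 * extension's query_int type; otherwise it falls back to a default
 * selectivity instead of reinterpreting foreign data.
 */
static bool
const_is_query_int(Oid const_type)
{
    return const_type != InvalidOid && const_type == query_int_type_oid();
}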
Require superuser to install a non-built-in selectivity estimator.
commit : 66ddac6982c6dc0369dc7b2d251f4d210d704a57
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Mon, 9 Feb 2026 10:07:31 -0500
committer: Tom Lane <tgl@sss.pgh.pa.us>
date : Mon, 9 Feb 2026 10:07:31 -0500
Selectivity estimators come in two flavors: those that make specific
assumptions about the data types they are working with, and those
that don't. Most of the built-in estimators are of the latter kind
and are meant to be safely attachable to any operator. If the
operator does not behave as the estimator expects, you might get a
poor estimate, but it won't crash.
However, estimators that do make datatype assumptions can malfunction
if they are attached to the wrong operator, since then the data they
get from pg_statistic may not be of the type they expect. This can
rise to the level of a security problem, even permitting arbitrary
code execution by a user who has the ability to create SQL objects.
To close this hole, establish a rule that built-in estimators are
required to protect themselves against being called on the wrong type
of data. It does not seem practical however to expect estimators in
extensions to reach a similar level of security, at least not in the
near term. Therefore, also establish a rule that superuser privilege
is required to attach a non-built-in estimator to an operator.
We expect that this restriction will have little negative impact on
extensions, since estimators generally have to be written in C and
thus superuser privilege is required to create them in the first
place.
This commit changes the privilege checks in CREATE/ALTER OPERATOR
to enforce the rule about superuser privilege, and fixes a couple
of built-in estimators that were making datatype assumptions without
sufficiently checking that they're valid.
Reported-by: Daniel Firer as part of zeroday.cloud
Author: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Noah Misch <noah@leadboat.com>
Security: CVE-2026-2004
Backpatch-through: 14
M src/backend/commands/operatorcmds.c
M src/backend/tsearch/ts_selfuncs.c
M src/backend/utils/adt/network_selfuncs.c
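A minimal sketch of the privilege rule described above, assuming the common PostgreSQL convention that OIDs below FirstNormalObjectId (16384) identify built-in objects; the function and variable names are illustrative, not the actual operatorcmds.c change.

#include <stdbool.h>
#include <stdio.h>

typedef unsigned int Oid;

/* PostgreSQL reserves OIDs below 16384 for built-in (bootstrap) objects. */
#define FIRST_NORMAL_OBJECT_ID 16384

/* Stand-in for the real superuser() check. */
static bool caller_is_superuser = false;

/* Allow attaching an estimator to an operator only under the new rule. */
static bool
estimator_attachment_allowed(Oid estimator_oid)
{
    if (estimator_oid < FIRST_NORMAL_OBJECT_ID)
        return true;            /* built-in: required to defend itself */

    if (!caller_is_superuser)
    {
        fprintf(stderr,
                "ERROR: must be superuser to use a non-built-in "
                "selectivity estimator\n");
        return false;
    }
    return true;
}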
Guard against unexpected dimensions of oidvector/int2vector.
commit : 3b6588cd902faa967f61f539f057f9b7643cf6a5
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Mon, 9 Feb 2026 09:57:44 -0500
committer: Tom Lane <tgl@sss.pgh.pa.us>
date : Mon, 9 Feb 2026 09:57:44 -0500
These data types are represented like full-fledged arrays, but
functions that deal specifically with these types assume that the
array is 1-dimensional and contains no nulls. However, there are
cast pathways that allow general oid[] or int2[] arrays to be cast
to these types, allowing these expectations to be violated. This
can be exploited to cause server memory disclosure or SIGSEGV.
Fix by installing explicit checks in functions that accept these
types.
Reported-by: Altan Birler <altan.birler@tum.de>
Author: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Noah Misch <noah@leadboat.com>
Security: CVE-2026-2003
Backpatch-through: 14
M src/backend/access/hash/hashfunc.c
M src/backend/access/nbtree/nbtcompare.c
M src/backend/utils/adt/format_type.c
M src/backend/utils/adt/int.c
M src/backend/utils/adt/oid.c
M src/include/utils/builtins.h
M src/test/regress/expected/arrays.out
M src/test/regress/sql/arrays.sql
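The defensive check described above amounts to verifying, on entry to any function that consumes an oidvector or int2vector, that the value really is a one-dimensional, null-free array. The sketch below uses a simplified header struct standing in for the real ArrayType macros; it is illustrative only.

#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

/* Simplified stand-in for the array header fields the real macros read. */
typedef struct
{
    int  ndim;       /* number of dimensions */
    bool has_nulls;  /* any NULL elements? */
    int  nelems;     /* elements in dimension 1 */
} VectorHeader;

/* Reject anything that violates the oidvector/int2vector invariants. */
static void
check_vector_shape(const VectorHeader *v)
{
    if (v->ndim != 1 || v->has_nulls)
    {
        fprintf(stderr,
                "ERROR: oidvector/int2vector must be a one-dimensional "
                "array without nulls\n");
        exit(1);
    }
    /* Safe to walk v->nelems elements from here on. */
}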
Require PGP-decrypted text to pass encoding validation.
commit : b427091947e59788289e80f0ff4279cb7d32dab1
author : Noah Misch <noah@leadboat.com>
date : Mon, 9 Feb 2026 06:14:47 -0800
committer: Noah Misch <noah@leadboat.com>
date : Mon, 9 Feb 2026 06:14:47 -0800
pgp_sym_decrypt() and pgp_pub_decrypt() will raise such errors, while
bytea variants will not. The existing "dat3" test decrypted to non-UTF8
text, so switch that query to bytea.
The long-term intent is for type "text" to always be valid in the
database encoding. pgcrypto has long been known as a source of
exceptions to that intent, but a report about exploiting invalid values
of type "text" brought this module to the forefront. This particular
exception is straightforward to fix, with reasonable effect on user
queries. Back-patch to v14 (all supported versions).
Reported-by: Paul Gerste (as part of zeroday.cloud)
Reported-by: Moritz Sanft (as part of zeroday.cloud)
Author: shihao zhong <zhong950419@gmail.com>
Reviewed-by: cary huang <hcary328@gmail.com>
Discussion: https://postgr.es/m/CAGRkXqRZyo0gLxPJqUsDqtWYBbgM14betsHiLRPj9mo2=z9VvA@mail.gmail.com
Backpatch-through: 14
Security: CVE-2026-2006
M contrib/pgcrypto/expected/pgp-decrypt.out
M contrib/pgcrypto/expected/pgp-decrypt_1.out
M contrib/pgcrypto/pgp-pgsql.c
M contrib/pgcrypto/sql/pgp-decrypt.sql
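The behavior described above amounts to running the decrypted bytes through an encoding verifier before handing them back as text, while leaving the bytea variants untouched. In the standalone sketch below, a trivial ASCII-only check stands in for validation against the actual database encoding; all names are illustrative.

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

/*
 * Stand-in verifier: accepts only 7-bit ASCII without NUL bytes.  The
 * real code validates against the configured database encoding instead.
 */
static bool
verify_text_encoding(const unsigned char *buf, size_t len)
{
    for (size_t i = 0; i < len; i++)
    {
        if (buf[i] == 0 || buf[i] > 0x7f)
            return false;
    }
    return true;
}

/* Text-returning decrypt paths now refuse invalid output... */
static int
return_as_text(const unsigned char *plain, size_t len)
{
    if (!verify_text_encoding(plain, len))
    {
        fprintf(stderr, "ERROR: invalid byte sequence for encoding\n");
        return -1;
    }
    return 0;   /* ...while the bytea variants keep returning raw bytes. */
}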
Code coverage for most pg_mblen* calls.
commit : b0f5d25bc3679afaed69d367c72efd387c763d04
author : Thomas Munro <tmunro@postgresql.org>
date : Mon, 12 Jan 2026 10:20:06 +1300
committer: Thomas Munro <tmunro@postgresql.org>
date : Mon, 12 Jan 2026 10:20:06 +1300
A security patch changed them today, so close the coverage gap now.
Test that buffer overrun is avoided when pg_mblen*() requires more
than the number of bytes remaining.
This does not cover the calls in dict_thesaurus.c or in dict_synonym.c.
That code is straightforward. To change that code's input, one must
have access to modify installed OS files, so low-privilege users are not
a threat. Testing this would likewise require changing installed
share/postgresql/tsearch_data, which was enough of an obstacle to not
bother.
Security: CVE-2026-2006
Backpatch-through: 14
Co-authored-by: Thomas Munro <thomas.munro@gmail.com>
Co-authored-by: Noah Misch <noah@leadboat.com>
Reviewed-by: Heikki Linnakangas <hlinnaka@iki.fi>
M contrib/pg_trgm/Makefile
A contrib/pg_trgm/data/trgm_utf8.data
A contrib/pg_trgm/expected/pg_utf8_trgm.out
A contrib/pg_trgm/expected/pg_utf8_trgm_1.out
M contrib/pg_trgm/meson.build
A contrib/pg_trgm/sql/pg_utf8_trgm.sql
M src/backend/utils/adt/arrayfuncs.c
A src/test/regress/expected/encoding.out
A src/test/regress/expected/encoding_1.out
A src/test/regress/expected/euc_kr.out
A src/test/regress/expected/euc_kr_1.out
M src/test/regress/parallel_schedule
M src/test/regress/regress.c
A src/test/regress/sql/encoding.sql
A src/test/regress/sql/euc_kr.sql
Replace pg_mblen() with bounds-checked versions.
commit : 7b5fc85bef8a3baa530ec98f89376f9d4b7de83c
author : Thomas Munro <tmunro@postgresql.org>
date : Wed, 7 Jan 2026 22:14:31 +1300
committer: Thomas Munro <tmunro@postgresql.org>
date : Wed, 7 Jan 2026 22:14:31 +1300
A corrupted string could cause code that iterates with pg_mblen() to
overrun its buffer. Fix, by converting all callers to one of the
following:
1. Callers with a null-terminated string now use pg_mblen_cstr(), which
raises an "illegal byte sequence" error if it finds a terminator in the
middle of the sequence.
2. Callers with a length or end pointer now use either
pg_mblen_with_len() or pg_mblen_range(), for the same effect, depending
on which of the two seems more convenient at each site.
3. A small number of cases pre-validate a string, and can use
pg_mblen_unbounded().
The traditional pg_mblen() function and COPYCHAR macro still exist for
backward compatibility, but are no longer used by core code and are
hereby deprecated. The same applies to the t_isXXX() functions.
Security: CVE-2026-2006
Backpatch-through: 14
Co-authored-by: Thomas Munro <thomas.munro@gmail.com>
Co-authored-by: Noah Misch <noah@leadboat.com>
Reviewed-by: Heikki Linnakangas <hlinnaka@iki.fi>
Reported-by: Paul Gerste (as part of zeroday.cloud)
Reported-by: Moritz Sanft (as part of zeroday.cloud)
M contrib/btree_gist/btree_utils_var.c
M contrib/dict_xsyn/dict_xsyn.c
M contrib/hstore/hstore_io.c
M contrib/ltree/crc32.c
M contrib/ltree/lquery_op.c
M contrib/ltree/ltree.h
M contrib/ltree/ltree_io.c
M contrib/ltree/ltxtquery_io.c
M contrib/pageinspect/heapfuncs.c
M contrib/pg_trgm/trgm.h
M contrib/pg_trgm/trgm_op.c
M contrib/pg_trgm/trgm_regexp.c
M contrib/pgcrypto/crypt-sha.c
M contrib/unaccent/unaccent.c
M src/backend/catalog/pg_proc.c
M src/backend/tsearch/dict_synonym.c
M src/backend/tsearch/dict_thesaurus.c
M src/backend/tsearch/regis.c
M src/backend/tsearch/spell.c
M src/backend/tsearch/ts_locale.c
M src/backend/tsearch/ts_utils.c
M src/backend/tsearch/wparser_def.c
M src/backend/utils/adt/encode.c
M src/backend/utils/adt/formatting.c
M src/backend/utils/adt/jsonfuncs.c
M src/backend/utils/adt/jsonpath_gram.y
M src/backend/utils/adt/levenshtein.c
M src/backend/utils/adt/like.c
M src/backend/utils/adt/like_match.c
M src/backend/utils/adt/oracle_compat.c
M src/backend/utils/adt/regexp.c
M src/backend/utils/adt/tsquery.c
M src/backend/utils/adt/tsvector.c
M src/backend/utils/adt/tsvector_op.c
M src/backend/utils/adt/tsvector_parser.c
M src/backend/utils/adt/varbit.c
M src/backend/utils/adt/varlena.c
M src/backend/utils/adt/xml.c
M src/backend/utils/mb/mbutils.c
M src/include/mb/pg_wchar.h
M src/include/tsearch/ts_locale.h
M src/include/tsearch/ts_utils.h
M src/test/modules/test_regex/test_regex.c
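The hazard and fix described above reduce to one rule: never let a per-character length computed from the first byte walk past the end of the buffer. The standalone sketch below illustrates that with UTF-8-style lead-byte lengths and a hypothetical helper; it is not the new pg_mblen_cstr()/pg_mblen_with_len()/pg_mblen_range() API, whose exact signatures are not shown in this log.

#include <stddef.h>
#include <stdio.h>
#include <stdlib.h>

/* Length implied by a UTF-8 lead byte (0 for an invalid lead byte). */
static int
lead_byte_len(unsigned char c)
{
    if (c < 0x80) return 1;
    if ((c & 0xe0) == 0xc0) return 2;
    if ((c & 0xf0) == 0xe0) return 3;
    if ((c & 0xf8) == 0xf0) return 4;
    return 0;
}

/*
 * Bounds-checked iteration: error out if the implied character length
 * exceeds the bytes remaining, instead of reading past the buffer.
 */
static void
walk_string(const unsigned char *s, size_t len)
{
    size_t i = 0;

    while (i < len)
    {
        int clen = lead_byte_len(s[i]);

        if (clen == 0 || (size_t) clen > len - i)
        {
            fprintf(stderr, "ERROR: illegal byte sequence at offset %zu\n", i);
            exit(1);
        }
        /* ... process one character of clen bytes ... */
        i += (size_t) clen;
    }
}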
Fix mb2wchar functions on short input.
commit : efef05ba995fb2f553c146acb5c33828cc4f898a
author : Thomas Munro <tmunro@postgresql.org>
date : Mon, 26 Jan 2026 11:22:32 +1300
committer: Thomas Munro <tmunro@postgresql.org>
date : Mon, 26 Jan 2026 11:22:32 +1300
When converting multibyte to pg_wchar, the UTF-8 implementation would
silently ignore an incomplete final character, while the other
implementations would cast a single byte to pg_wchar, and then repeat
for the remaining byte sequence. While it didn't overrun the buffer, it
was surely garbage output.
Make all encodings behave like the UTF-8 implementation. A later change
for master only will convert this to an error, but we choose not to
back-patch that behavior change on the off-chance that someone is
relying on the existing UTF-8 behavior.
Security: CVE-2026-2006
Backpatch-through: 14
Author: Thomas Munro <thomas.munro@gmail.com>
Reported-by: Noah Misch <noah@leadboat.com>
Reviewed-by: Noah Misch <noah@leadboat.com>
Reviewed-by: Heikki Linnakangas <hlinnaka@iki.fi>
M src/common/wchar.c
Fix encoding length for EUC_CN.
commit : df0852fe037246289cc00b4d36da6c1f25ff5844
author : Thomas Munro <tmunro@postgresql.org>
date : Thu, 5 Feb 2026 01:04:24 +1300
committer: Thomas Munro <tmunro@postgresql.org>
date : Thu, 5 Feb 2026 01:04:24 +1300
While EUC_CN supports only 1- and 2-byte sequences (CS0, CS1), the
mb<->wchar conversion functions allow 3-byte sequences beginning SS2,
SS3.
Change pg_encoding_max_length() to return 3, not 2, to close a
hypothesized buffer overrun if a corrupted string is converted to wchar
and back again in a newly allocated buffer. We might reconsider that in
master (ie harmonizing in a different direction), but this change seems
better for the back-branches.
Also change pg_euccn_mblen() to report SS2 and SS3 characters as having
length 3 (following the example of EUC_KR). Even though such characters
would not pass verification, it's remotely possible that invalid bytes
could be used to compute a buffer size for use in wchar conversion.
Security: CVE-2026-2006
Backpatch-through: 14
Author: Thomas Munro <thomas.munro@gmail.com>
Reviewed-by: Noah Misch <noah@leadboat.com>
Reviewed-by: Heikki Linnakangas <hlinnaka@iki.fi>
M src/common/wchar.c
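A sketch of the length rule described above: the SS2 (0x8E) and SS3 (0x8F) lead bytes are now reported as 3-byte sequences, following the EUC_KR convention, and the per-encoding maximum is raised to match. This is illustrative only; the real function lives in src/common/wchar.c.

/* Illustrative sketch of the EUC_CN length rule after the fix. */
#define EUC_SS2 0x8e
#define EUC_SS3 0x8f

static int
euccn_mblen_sketch(unsigned char lead)
{
    if (lead == EUC_SS2 || lead == EUC_SS3)
        return 3;               /* would not pass verification, but sized safely */
    if (lead & 0x80)
        return 2;               /* CS1: two-byte character */
    return 1;                   /* CS0: plain ASCII */
}

/* pg_encoding_max_length() for EUC_CN correspondingly reports 3, not 2. */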
Fix buffer overflows in pg_trgm due to lower-casing
commit : e0965fb1a8550716db08e2183560be3546851647
author : Heikki Linnakangas <heikki.linnakangas@iki.fi>
date : Tue, 20 Jan 2026 11:53:28 +0200
committer: Thomas Munro <tmunro@postgresql.org>
date : Tue, 20 Jan 2026 11:53:28 +0200
The code made a subtle assumption that the lower-cased version of a
string never has more characters than the original. That is not always
true. For example, in a database with the latin9 encoding:
latin9db=# select lower(U&'\00CC' COLLATE "lt-x-icu");
lower
-----------
i\x1A\x1A
(1 row)
In this example, lower-casing expands the single input character into
three characters.
The generate_trgm_only() function relied on that assumption in two
ways:
- It used "slen * pg_database_encoding_max_length() + 4" to allocate
the buffer to hold the lowercased and blank-padded string. That
formula accounts for expansion if the lower-case characters are
longer (in bytes) than the originals, but it's still not enough if
the lower-cased string contains more *characters* than the original.
- Its callers sized the output array to hold the trigrams extracted
from the input string with the formula "(slen / 2 + 1) * 3", where
'slen' is the input string length in bytes. (The formula was
generous to account for the possibility that RPADDING was set to 2.)
That's also not enough if one input byte can turn into multiple
characters.
To fix, introduce a growable trigram array and give up on trying to
choose the correct max buffer sizes ahead of time.
Backpatch to v18, but no further. In previous versions lower-casing was
done character by character, and thus the assumption that lower-casing
doesn't change the character length was valid. That was changed in v18,
commit fb1a18810f.
Security: CVE-2026-2007
Reviewed-by: Noah Misch <noah@leadboat.com>
Reviewed-by: Jeff Davis <pgsql@j-davis.com>
M contrib/pg_trgm/trgm_op.c
M src/tools/pgindent/typedefs.list
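The fix above gives up on predicting a safe maximum output size and instead grows the trigram array on demand. Below is a standalone sketch of that pattern, with plain realloc() standing in for the backend allocator and all names hypothetical.

#include <stdlib.h>
#include <string.h>

typedef struct { char chars[3]; } Trgm;        /* simplified trigram */

typedef struct
{
    Trgm   *data;
    size_t  count;
    size_t  capacity;
} TrgmArray;

/* Append one trigram, doubling the buffer whenever it fills up. */
static void
trgm_append(TrgmArray *a, const char *tri)
{
    if (a->count == a->capacity)
    {
        a->capacity = a->capacity ? a->capacity * 2 : 8;
        a->data = realloc(a->data, a->capacity * sizeof(Trgm));
        if (a->data == NULL)
            abort();            /* sketch: the real code reports an error */
    }
    memcpy(a->data[a->count].chars, tri, 3);
    a->count++;
}

Because every append re-checks capacity, no assumption about how much the lower-cased string can expand is needed.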
Remove 'charlen' argument from make_trigrams()
commit : 18548681da38b2376d0c071d568b9d0c1f8b6ad2
author : Heikki Linnakangas <heikki.linnakangas@iki.fi>
date : Tue, 20 Jan 2026 14:34:32 +0200
committer: Thomas Munro <tmunro@postgresql.org>
date : Tue, 20 Jan 2026 14:34:32 +0200
The function assumed that if charlen == bytelen, there are no
multibyte characters in the string. That's sensible, but the callers
were a little careless in how they calculated the lengths. The callers
converted the string to lowercase before calling make_trigram(), and
the 'charlen' value was calculated *before* the conversion to
lowercase while 'bytelen' was calculated after the conversion. If the
lowercased string had a different number of characters than the
original, make_trigram() might incorrectly apply the fastpath and
treat all the bytes as single-byte characters, or fail to apply the
fastpath (which is harmless), or it might hit the "Assert(bytelen ==
charlen)" assertion. I'm not aware of any locale / character
combinations where you could hit that assertion in practice,
i.e. where a string converted to lowercase would have fewer characters
than the original, but it seems best to avoid making that assumption.
To fix, remove the 'charlen' argument. To keep the performance when
there are no multibyte characters, always try the fast path first, but
check the input for multibyte characters as we go. The check on each
byte adds some overhead, but it's close enough. And to compensate, the
find_word() function no longer needs to count the characters.
This fixes one small bug in make_trigrams(): in the multibyte
codepath, it peeked at the byte just after the end of the input
string. When compiled with IGNORECASE, that was harmless because there
is always a NUL byte or blank after the input string. But with
!IGNORECASE, the call from generate_wildcard_trgm() doesn't guarantee
that.
Backpatch to v18, but no further. In previous versions lower-casing was
done character by character, and thus the assumption that lower-casing
doesn't change the character length was valid. That was changed in v18,
commit fb1a18810f.
Security: CVE-2026-2007
Reviewed-by: Noah Misch <noah@leadboat.com>
M contrib/pg_trgm/trgm_op.c
pgcrypto: Fix buffer overflow in pgp_pub_decrypt_bytea()
commit : 209f387b81660e478eea147db9130af1d1c861f2
author : Michael Paquier <michael@paquier.xyz>
date : Mon, 9 Feb 2026 08:01:05 +0900
committer: Michael Paquier <michael@paquier.xyz>
date : Mon, 9 Feb 2026 08:01:05 +0900
pgp_pub_decrypt_bytea() was missing a safeguard for the session key
length read from the message data, which can be given as input to
pgp_pub_decrypt_bytea(). This can result in a buffer overflow for the
session key data when the specified length is longer than PGP_MAX_KEY,
the maximum size of the buffer the session data is copied into.
A script able to rebuild the message and key data that can trigger the
overflow is included in this commit, based on some contents provided by
the reporter, heavily edited by me. A SQL test is added, based on the
data generated by the script.
Reported-by: Team Xint Code as part of zeroday.cloud
Author: Michael Paquier <michael@paquier.xyz>
Reviewed-by: Noah Misch <noah@leadboat.com>
Security: CVE-2026-2005
Backpatch-through: 14
M contrib/pgcrypto/Makefile
A contrib/pgcrypto/expected/pgp-pubkey-session.out
M contrib/pgcrypto/meson.build
M contrib/pgcrypto/pgp-pubdec.c
M contrib/pgcrypto/px.c
M contrib/pgcrypto/px.h
A contrib/pgcrypto/scripts/pgp_session_data.py
A contrib/pgcrypto/sql/pgp-pubkey-session.sql
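The core of the fix above is refusing to copy more session-key bytes than the destination buffer can hold, since the length field comes straight from attacker-controlled message data. A standalone sketch of that guard follows; MAX_SESSION_KEY is a placeholder, not pgcrypto's actual PGP_MAX_KEY value, and the framing is simplified.

#include <stdio.h>
#include <string.h>

#define MAX_SESSION_KEY 32      /* placeholder for pgcrypto's PGP_MAX_KEY */

/*
 * Copy the session key only after validating the length field that was
 * read from the (untrusted) message data.
 */
static int
copy_session_key(unsigned char *dst, const unsigned char *msg, size_t msg_len)
{
    size_t key_len;

    if (msg_len < 1)
        return -1;
    key_len = msg[0];                       /* length taken from the message */

    if (key_len > MAX_SESSION_KEY || key_len > msg_len - 1)
    {
        fprintf(stderr, "ERROR: corrupt data: session key too long\n");
        return -1;
    }
    memcpy(dst, msg + 1, key_len);          /* safe: bounded by both checks */
    return (int) key_len;
}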
Release notes for 18.2, 17.8, 16.12, 15.16, 14.21.
commit : 5944beb7398d76f746c9bb32dbca41bd05419925
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Sun, 8 Feb 2026 13:00:40 -0500
committer: Tom Lane <tgl@sss.pgh.pa.us>
date : Sun, 8 Feb 2026 13:00:40 -0500
M doc/src/sgml/release-18.sgml
Translation updates
commit : 731e03272e30ac548d310767a39ab92350da077b
author : Peter Eisentraut <peter@eisentraut.org>
date : Sun, 8 Feb 2026 15:07:02 +0100
committer: Peter Eisentraut <peter@eisentraut.org>
date : Sun, 8 Feb 2026 15:07:02 +0100
Source-Git-URL: https://git.postgresql.org/git/pgtranslation/messages.git
Source-Git-Hash: bdee668bac7ab3256b6f922c0b6fb663a3b03e16
M src/backend/po/de.po
M src/backend/po/es.po
M src/backend/po/ja.po
M src/backend/po/ka.po
M src/backend/po/ru.po
M src/backend/po/sv.po
M src/backend/po/uk.po
M src/bin/initdb/po/es.po
M src/bin/initdb/po/uk.po
M src/bin/pg_amcheck/po/es.po
M src/bin/pg_archivecleanup/po/es.po
M src/bin/pg_archivecleanup/po/uk.po
M src/bin/pg_basebackup/po/de.po
M src/bin/pg_basebackup/po/es.po
M src/bin/pg_basebackup/po/ja.po
M src/bin/pg_basebackup/po/ka.po
M src/bin/pg_basebackup/po/ru.po
M src/bin/pg_basebackup/po/sv.po
M src/bin/pg_basebackup/po/uk.po
M src/bin/pg_checksums/po/es.po
M src/bin/pg_checksums/po/uk.po
M src/bin/pg_combinebackup/po/de.po
M src/bin/pg_combinebackup/po/es.po
M src/bin/pg_combinebackup/po/ru.po
M src/bin/pg_combinebackup/po/sv.po
M src/bin/pg_combinebackup/po/uk.po
M src/bin/pg_config/po/es.po
M src/bin/pg_controldata/po/es.po
M src/bin/pg_controldata/po/uk.po
M src/bin/pg_ctl/po/es.po
M src/bin/pg_ctl/po/uk.po
M src/bin/pg_dump/po/de.po
M src/bin/pg_dump/po/es.po
M src/bin/pg_dump/po/ja.po
M src/bin/pg_dump/po/ka.po
M src/bin/pg_dump/po/ru.po
M src/bin/pg_dump/po/sv.po
M src/bin/pg_dump/po/uk.po
M src/bin/pg_resetwal/po/de.po
M src/bin/pg_resetwal/po/es.po
M src/bin/pg_resetwal/po/ja.po
M src/bin/pg_resetwal/po/ka.po
M src/bin/pg_resetwal/po/ru.po
M src/bin/pg_resetwal/po/sv.po
M src/bin/pg_resetwal/po/uk.po
M src/bin/pg_rewind/po/es.po
M src/bin/pg_rewind/po/ru.po
M src/bin/pg_rewind/po/sv.po
M src/bin/pg_rewind/po/uk.po
M src/bin/pg_test_fsync/po/es.po
M src/bin/pg_test_timing/po/es.po
M src/bin/pg_test_timing/po/uk.po
M src/bin/pg_upgrade/po/es.po
M src/bin/pg_upgrade/po/uk.po
M src/bin/pg_verifybackup/po/es.po
M src/bin/pg_verifybackup/po/uk.po
M src/bin/pg_waldump/po/es.po
M src/bin/pg_waldump/po/uk.po
M src/bin/pg_walsummary/po/es.po
M src/bin/pg_walsummary/po/uk.po
M src/bin/psql/po/de.po
M src/bin/psql/po/es.po
M src/bin/psql/po/ja.po
M src/bin/psql/po/ru.po
M src/bin/psql/po/sv.po
M src/bin/psql/po/uk.po
M src/bin/scripts/po/es.po
M src/bin/scripts/po/uk.po
M src/interfaces/ecpg/ecpglib/po/es.po
M src/interfaces/ecpg/preproc/po/es.po
M src/interfaces/ecpg/preproc/po/uk.po
M src/interfaces/libpq/po/de.po
M src/interfaces/libpq/po/es.po
M src/interfaces/libpq/po/fr.po
M src/interfaces/libpq/po/ja.po
M src/interfaces/libpq/po/ka.po
M src/interfaces/libpq/po/ru.po
M src/interfaces/libpq/po/sv.po
M src/interfaces/libpq/po/uk.po
M src/pl/plperl/po/es.po
M src/pl/plperl/po/uk.po
M src/pl/plpgsql/src/po/es.po
M src/pl/plpgsql/src/po/uk.po
M src/pl/plpython/po/es.po
M src/pl/plpython/po/uk.po
M src/pl/tcl/po/es.po
M src/pl/tcl/po/uk.po
meson: host_system value for Solaris is 'sunos' not 'solaris'.
commit : 5eac1d68fc0c51abebafd518f77d1172e191c805
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Sat, 7 Feb 2026 20:05:52 -0500
committer: Tom Lane <tgl@sss.pgh.pa.us>
date : Sat, 7 Feb 2026 20:05:52 -0500
This thinko caused us to not substitute our own getopt() code,
which results in failing to parse long options for the postmaster
since Solaris' getopt() doesn't do what we expect. This can be seen
in the results of buildfarm member icarus, which is the only one
trying to build via meson on Solaris.
Per consultation with pgsql-release, it seems okay to fix this
now even though we're in release freeze. The fix visibly won't
affect any other platforms, and it can't break Solaris/meson
builds any worse than they're already broken.
Discussion: https://postgr.es/m/2471229.1770499291@sss.pgh.pa.us
Backpatch-through: 16
M meson.build
Further error message fix
commit : cff2ef9845d6d26c99036c8331b705144186690f
author : Peter Eisentraut <peter@eisentraut.org>
date : Sat, 7 Feb 2026 22:37:02 +0100
committer: Peter Eisentraut <peter@eisentraut.org>
date : Sat, 7 Feb 2026 22:37:02 +0100
Further fix of error message changed in commit 74a116a79b4. The
initial fix was not quite correct.
Discussion: https://www.postgresql.org/message-id/flat/tencent_1EE1430B1E6C18A663B8990F%40qq.com
M src/bin/pg_rewind/file_ops.c
Placate ABI checker.
commit : 3c3b34bbee41e8bb1f71e4360d444daa50ea86b1
author : Thomas Munro <tmunro@postgresql.org>
date : Sat, 7 Feb 2026 10:56:04 +1300
committer: Thomas Munro <tmunro@postgresql.org>
date : Sat, 7 Feb 2026 10:56:04 +1300
It's not really an ABI break if you change the layout/size of an object
with incomplete type, as commit f94e9141 did, so advance the ABI
compliance reference commit in 16-18 to satisfy build farm animal crake.
Backpatch-through: 16-18
Discussion: https://www.postgresql.org/message-id/1871492.1770409863%40sss.pgh.pa.us
M .abi-compliance-history
First-draft release notes for 18.2.
commit : c6881f792281d1abeba9acf3fe7972da091f2bbd
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Fri, 6 Feb 2026 13:06:16 -0500
committer: Tom Lane <tgl@sss.pgh.pa.us>
date : Fri, 6 Feb 2026 13:06:16 -0500
As usual, the release notes for other branches will be made by cutting
these down, but put them up for community review first.
M doc/src/sgml/release-18.sgml
Fix use of proc number in pgstat_create_backend()
commit : e679d0f0b6729d0e97510dd4ab6a793700d6d66a
author : Michael Paquier <michael@paquier.xyz>
date : Fri, 6 Feb 2026 19:57:26 +0900
committer: Michael Paquier <michael@paquier.xyz>
date : Fri, 6 Feb 2026 19:57:26 +0900
This routine's internals directly used MyProcNumber to choose which
object ID to assign for the hash key of a backend's stats entry, while
the value to use is given as input argument of the function.
The original intention was to pass MyProcNumber as an argument of
pgstat_create_backend() when called in pgstat_bestart_final(),
with pgstat_beinit() ensuring that MyProcNumber has been set, rather than
using it directly in the function. This commit addresses this inconsistency by
using the procnum given by the caller of pgstat_create_backend(), not
MyProcNumber.
This issue is not a cause of bugs currently. However, let's keep the
code in sync across all the branches where this code exists, as it could
matter in a future backpatch.
Oversight in 4feba03d8b92.
Reported-by: Ryo Matsumura <matsumura.ryo@fujitsu.com>
Discussion: https://postgr.es/m/TYCPR01MB11316AD8150C8F470319ACCAEE866A@TYCPR01MB11316.jpnprd01.prod.outlook.com
Backpatch-through: 18
M src/backend/utils/activity/pgstat_backend.c
Fix some error message inconsistencies
commit : acfa422c3c1f3a6001a109699cae06236efd1aa4
author : Michael Paquier <michael@paquier.xyz>
date : Fri, 6 Feb 2026 15:38:21 +0900
committer: Michael Paquier <michael@paquier.xyz>
date : Fri, 6 Feb 2026 15:38:21 +0900
These errors are very unlikely to show up, but in the event that
they happen, some incorrect information would have been provided:
- In pg_rewind, a stat() failure was reported as an open() failure.
- In pg_combinebackup, a check for the new directory of a tablespace
mapping referred to it as the old directory.
- In pg_combinebackup, a failure in reading a source file when copying
blocks referred to the destination file.
The changes for pg_combinebackup affect v17 and newer versions. For
pg_rewind, all the stable branches are affected.
Author: Man Zeng <zengman@halodbtech.com>
Discussion: https://postgr.es/m/tencent_1EE1430B1E6C18A663B8990F@qq.com
Backpatch-through: 14
M src/bin/pg_combinebackup/copy_file.c
M src/bin/pg_combinebackup/pg_combinebackup.c
M src/bin/pg_rewind/file_ops.c
Add file_extend_method=posix_fallocate,write_zeros.
commit : 33e3de6d77e87d6c3c6f8f878dd8de42d37c3b8f
author : Thomas Munro <tmunro@postgresql.org>
date : Sat, 31 May 2025 22:50:22 +1200
committer: Thomas Munro <tmunro@postgresql.org>
date : Sat, 31 May 2025 22:50:22 +1200
Provide a way to disable the use of posix_fallocate() for relation
files. It was introduced by commit 4d330a61bb1. The new setting
file_extend_method=write_zeros can be used as a workaround for problems
reported from the field:
* BTRFS compression is disabled by the use of posix_fallocate()
* XFS could produce spurious ENOSPC errors in some Linux kernel
versions, though that problem is reported to have been fixed
The default is file_extend_method=posix_fallocate if available, as
before. The write_zeros option is similar to PostgreSQL < 16, except
that now it's multi-block.
Backpatch-through: 16
Reviewed-by: Jakub Wartak <jakub.wartak@enterprisedb.com>
Reported-by: Dimitrios Apostolou <jimis@gmx.net>
Discussion: https://postgr.es/m/b1843124-fd22-e279-a31f-252dffb6fbf2%40gmx.net
M doc/src/sgml/config.sgml
M src/backend/storage/file/fd.c
M src/backend/storage/smgr/md.c
M src/backend/utils/misc/guc_tables.c
M src/backend/utils/misc/postgresql.conf.sample
M src/include/storage/fd.h
doc: Move synchronized_standby_slots to "Primary Server" section.
commit : 441de639ee2360016be388350abff3afcea76b7a
author : Fujii Masao <fujii@postgresql.org>
date : Fri, 6 Feb 2026 09:40:05 +0900
committer: Fujii Masao <fujii@postgresql.org>
date : Fri, 6 Feb 2026 09:40:05 +0900
synchronized_standby_slots is defined in guc_parameter.dat as part of
the REPLICATION_PRIMARY group and is listed under the "Primary Server"
section in postgresql.conf.sample. However, in the documentation
its description was previously placed under the "Sending Servers" section.
Since synchronized_standby_slots only takes effect on the primary server,
this commit moves its documentation to the "Primary Server" section to
match its behavior and other references.
Backpatch to v17 where synchronized_standby_slots was added.
Author: Fujii Masao <masao.fujii@gmail.com>
Reviewed-by: Shinya Kato <shinya11.kato@gmail.com>
Discussion: https://postgr.es/m/CAHGQGwE_LwgXgCrqd08OFteJqdERiF3noqOKu2vt7Kjk4vMiGg@mail.gmail.com
Backpatch-through: 17
M doc/src/sgml/config.sgml
Fix logical replication TAP test to read publisher log correctly.
commit : 8eb17e82fc1d12b2e897b0c053c7c5c003940833
author : Fujii Masao <fujii@postgresql.org>
date : Thu, 5 Feb 2026 00:46:09 +0900
committer: Fujii Masao <fujii@postgresql.org>
date : Thu, 5 Feb 2026 00:46:09 +0900
Commit 5f13999aa11 added a TAP test for GUC settings passed via the
CONNECTION string in logical replication, but the buildfarm member
sungazer reported test failures.
The test incorrectly used the subscriber's log file position as the
starting offset when reading the publisher's log. As a result, the test
failed to find the expected log message in the publisher's log and
erroneously reported a failure.
This commit fixes the test to use the publisher's own log file position
when reading the publisher's log.
Also, to avoid similar confusion in the future, this commit splits the single
$log_location variable into $log_location_pub and $log_location_sub,
clearly distinguishing publisher and subscriber log positions.
Backpatched to v15, where commit 5f13999aa11 introduced the test.
Per buildfarm member sungazer.
This issue was reported and diagnosed by Alexander Lakhin.
Reported-by: Alexander Lakhin <exclusion@gmail.com>
Discussion: https://postgr.es/m/966ec3d8-1b6f-4f57-ae59-fc7d55bc9a5a@gmail.com
Backpatch-through: 15
M src/test/subscription/t/001_rep_changes.pl
Fix various instances of undefined behavior
commit : b5e1cd2fdca1ad48982e376c0d22f468e862933c
author : John Naylor <john.naylor@postgresql.org>
date : Wed, 4 Feb 2026 17:55:49 +0700
committer: John Naylor <john.naylor@postgresql.org>
date : Wed, 4 Feb 2026 17:55:49 +0700
Mostly this involves checking for a NULL pointer before doing operations
that add a non-zero offset.
The exception is an overflow warning in heap_fetch_toast_slice(). This
was caused by unneeded parentheses forcing an expression to be
evaluated to a negative integer, which then got cast to size_t.
Per clang 21 undefined behavior sanitizer.
Backpatch to all supported versions.
Co-authored-by: Alexander Lakhin <exclusion@gmail.com>
Reported-by: Alexander Lakhin <exclusion@gmail.com>
Discussion: https://postgr.es/m/777bd201-6e3a-4da0-a922-4ea9de46a3ee@gmail.com
Backpatch-through: 14
M contrib/pg_trgm/trgm_gist.c
M src/backend/access/heap/heaptoast.c
M src/backend/utils/adt/multirangetypes.c
M src/backend/utils/sort/sharedtuplestore.c
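The first class of fixes above addresses the C rule that adding a non-zero offset to a null pointer is undefined behavior even if the result is never dereferenced. A minimal before/after sketch (names here are illustrative):

#include <stddef.h>

/*
 * Undefined behavior: if base is NULL and offset is non-zero, the
 * addition itself is UB and is flagged by clang's UB sanitizer:
 *
 *     char *p = base + offset;
 *
 * Safe form: check for NULL before performing pointer arithmetic.
 */
static char *
advance(char *base, size_t offset)
{
    if (base == NULL)
        return NULL;
    return base + offset;
}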
pg_resetwal: Fix incorrect error message related to pg_wal/summaries/
commit : 2ca4464b6992508a6be201ff8f10847e64e2291d
author : Michael Paquier <michael@paquier.xyz>
date : Wed, 4 Feb 2026 16:38:10 +0900
committer: Michael Paquier <michael@paquier.xyz>
date : Wed, 4 Feb 2026 16:38:10 +0900
A failure while closing pg_wal/summaries/ incorrectly generated a report
about pg_wal/archive_status/.
While at it, this commit adds #undefs for the macros used in
KillExistingWALSummaries() and KillExistingArchiveStatus() to prevent
those values from being misused in an incorrect function context.
Oversight in dc212340058b.
Author: Tianchen Zhang <zhang_tian_chen@163.com>
Reviewed-by: Chao Li <li.evan.chao@gmail.com>
Reviewed-by: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Discussion: https://postgr.es/m/SE2P216MB2390C84C23F428A7864EE07FA19BA@SE2P216MB2390.KORP216.PROD.OUTLOOK.COM
Backpatch-through: 17
M src/bin/pg_resetwal/pg_resetwal.c
Update .abi-compliance-history for AdjustNotNullInheritance().
commit : a0f98b27557257f3c50574d381af19897f2de376
author : Álvaro Herrera <alvherre@kurilemu.de>
date : Tue, 3 Feb 2026 15:33:08 +0100
committer: Álvaro Herrera <alvherre@kurilemu.de>
date : Tue, 3 Feb 2026 15:33:08 +0100
Commit 492a69e14070 anticipated this change:
[C] 'function bool AdjustNotNullInheritance(Oid, AttrNumber, bool, bool, bool)' has some sub-type changes:
parameter 6 of type 'bool' was added
parameter 3 of type 'bool' changed:
entity changed from 'bool' to 'const char*'
type size changed from 1 to 8 (in bytes)
Discussion: https://postgr.es/m/19351-8f1c523ead498545%40postgresql.org
Backpatch-through: 18 only
M .abi-compliance-history
Reject ADD CONSTRAINT NOT NULL if name mismatches existing constraint
commit : 492a69e1407029f8c673484f44aa719a63323d77
author : Álvaro Herrera <alvherre@kurilemu.de>
date : Tue, 3 Feb 2026 12:33:29 +0100
committer: Álvaro Herrera <alvherre@kurilemu.de>
date : Tue, 3 Feb 2026 12:33:29 +0100
When using ALTER TABLE ... ADD CONSTRAINT to add a not-null constraint
with an explicit name, we have to ensure that if the column is already
marked NOT NULL, the provided name matches the existing constraint name.
Failing to do so could lead to confusion regarding which constraint
object actually enforces the rule.
This patch adds a check to throw an error if the user tries to add a
named not-null constraint to a column that already has one with a
different name.
Reported-by: yanliang lei <msdnchina@163.com>
Co-authored-by: Álvaro Herrera <alvherre@kurilemu.de>
Co-authored-by: Srinath Reddy Sadipiralla <srinath2133@gmail.com>
Backpatch-through: 18
Discussion: https://postgr.es/m/19351-8f1c523ead498545%40postgresql.org
M src/backend/catalog/heap.c
M src/backend/catalog/pg_constraint.c
M src/include/catalog/pg_constraint.h
M src/test/regress/expected/constraints.out
M src/test/regress/sql/constraints.sql
Fix incorrect errno in OpenWalSummaryFile()
commit : 719aa13b58576dd4428bd3a31496aac8572d8640
author : Michael Paquier <michael@paquier.xyz>
date : Tue, 3 Feb 2026 11:25:14 +0900
committer: Michael Paquier <michael@paquier.xyz>
date : Tue, 3 Feb 2026 11:25:14 +0900
This routine has an option to bypass an error if a WAL summary file is
opened for read but is missing (missing_ok=true). However, the code
incorrectly checked for EEXIST, that matters when using O_CREAT and
O_EXCL, rather than ENOENT, for this case.
There are currently only two callers of OpenWalSummaryFile() in the
tree, and both use missing_ok=false, meaning that the check based on the
errno is currently dead code. This issue could matter for out-of-core
code or future backpatches that would like to use missing_ok set to
true.
Issue spotted while monitoring this area of the code, after
a9afa021e95f.
Author: Michael Paquier <michael@paquier.xyz>
Reviewed-by: Chao Li <li.evan.chao@gmail.com>
Discussion: https://postgr.es/m/aYAf8qDHbpBZ3Rml@paquier.xyz
Backpatch-through: 17
M src/backend/backup/walsummary.c
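The bug above is a check on the wrong errno value: EEXIST only matters when creating with O_CREAT|O_EXCL, while a missing file on an open-for-read surfaces as ENOENT. A standalone sketch of the corrected missing_ok handling follows; the signature is simplified and is not the real walsummary.c function.

#include <errno.h>
#include <fcntl.h>
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Open a file for reading; optionally tolerate its absence. */
static int
open_for_read(const char *path, bool missing_ok)
{
    int fd = open(path, O_RDONLY);

    if (fd < 0)
    {
        if (missing_ok && errno == ENOENT)   /* previously tested EEXIST */
            return -1;                       /* caller treats as "not there" */
        fprintf(stderr, "could not open file \"%s\": %s\n",
                path, strerror(errno));
        exit(1);
    }
    return fd;
}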
Fix error message in RemoveWalSummaryIfOlderThan()
commit : ab61f00874e5e27ec04a787505f45d797421b475
author : Michael Paquier <michael@paquier.xyz>
date : Mon, 2 Feb 2026 10:21:07 +0900
committer: Michael Paquier <michael@paquier.xyz>
date : Mon, 2 Feb 2026 10:21:07 +0900
A failing unlink() was reporting an incorrect error message, referring
to stat().
Author: Man Zeng <zengman@halodbtech.com>
Reviewed-by: Junwang Zhao <zhjwpku@gmail.com>
Discussion: https://postgr.es/m/tencent_3BBE865C5F49D452360FF190@qq.com
Backpatch-through: 17
M src/backend/backup/walsummary.c
Fix build inconsistency due to the generation of wait-event code
commit : d5a4856ffe1815774ba4dc46a6fa453e856f72a3
author : Michael Paquier <michael@paquier.xyz>
date : Mon, 2 Feb 2026 08:02:59 +0900
committer: Michael Paquier <michael@paquier.xyz>
date : Mon, 2 Feb 2026 08:02:59 +0900
The build generates four files based on the wait event contents stored
in wait_event_names.txt:
- wait_event_types.h
- pgstat_wait_event.c
- wait_event_funcs_data.c
- wait_event_types.sgml
The SGML file is generated as part of a documentation build, with its
data stored in doc/src/sgml/ for meson and configure. The three others
are handled differently for meson and configure:
- In configure, all the files are created in src/backend/utils/activity/.
A link to wait_event_types.h is created in src/include/utils/.
- In meson, all the files are created in src/include/utils/.
The two C files, pgstat_wait_event.c and wait_event_funcs_data.c, are
then included in respectively wait_event.c and wait_event_funcs.c,
without the "utils/" path.
For configure, this does not present a problem. For meson, this has to
be combined with a trick in src/backend/utils/activity/meson.build,
where include_directories needs to point to include/utils/ to make the
inclusion of the C files work properly, causing builds to pull in
PostgreSQL headers rather than system headers in some build paths, as
src/include/utils/ would take priority.
In order to fix this issue, this commit reworks the way the C/H files
are generated, becoming consistent with guc_tables.inc.c:
- For meson, basically nothing changes. The files are still generated
in src/include/utils/. The trick with include_directories is removed.
- For configure, the files are now generated in src/backend/utils/, with
links in src/include/utils/ pointing to the ones in src/backend/. This
requires extra rules in src/backend/utils/activity/Makefile so that a
make command in this sub-directory is able to work.
- The three files now fall under header-stamp, which is actually simpler
as guc_tables.inc.c does the same.
- wait_event_funcs_data.c and pgstat_wait_event.c are now included with
"utils/" in their path.
This problem has not been an issue in the buildfarm; it has been noted
with AIX and a conflict with float.h. This issue could, however, create
conflicts in the buildfarm depending on the environment with unexpected
headers pulled in, so this fix is backpatched down to where the
generation of the wait-event files has been introduced.
While on it, this commit simplifies wait_event_names.txt to mention
just the names of the generated files rather than their paths: the
paths previously listed there had become incorrect, and the path given
for the SGML file was wrong.
This change has been tested in the CI, down to v17. Locally, I have run
tests with configure (with and without VPATH), as well as meson, on the
three branches.
Combo oversight in fa88928470b5 and 1e68e43d3f0f.
Reported-by: Aditya Kamath <aditya.kamath1@ibm.com>
Discussion: https://postgr.es/m/LV8PR15MB64888765A43D229EA5D1CFE6D691A@LV8PR15MB6488.namprd15.prod.outlook.com
Backpatch-through: 17
M src/backend/Makefile
M src/backend/utils/.gitignore
M src/backend/utils/Makefile
D src/backend/utils/activity/.gitignore
M src/backend/utils/activity/Makefile
M src/backend/utils/activity/meson.build
M src/backend/utils/activity/wait_event.c
M src/backend/utils/activity/wait_event_funcs.c
M src/backend/utils/activity/wait_event_names.txt
M src/include/Makefile
M src/include/utils/.gitignore
M src/include/utils/meson.build
Improve guards against false regex matches in BackgroundPsql.pm.
commit : 92b3cc5a28f1557f9f6c59dc6a30868381692ec1
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Fri, 30 Jan 2026 14:59:25 -0500
committer: Tom Lane <tgl@sss.pgh.pa.us>
date : Fri, 30 Jan 2026 14:59:25 -0500
BackgroundPsql needs to wait for all the output from an interactive
psql command to come back. To make sure that's happened, it issues
the command, then issues \echo and \warn psql commands that echo
a "banner" string (which we assume won't appear in the command's
output), then waits for the banner strings to appear. The hazard
in this approach is that the banner will also appear in the echoed
psql commands themselves, so we need to distinguish those echoes from
the desired output. Commit 8b886a4e3 tried to do that by positing
that the desired output would be directly preceded and followed by
newlines, but it turns out that that assumption is timing-sensitive.
In particular, it tends to fail in builds made --without-readline,
wherein the command echoes will be made by the pty driver and may
be interspersed with prompts issued by psql proper.
It does seem safe to assume that the banner output we want will be
followed by a newline, since that should be the last output before
things quiesce. Therefore, we can improve matters by putting quotes
around the banner strings in the \echo and \warn psql commands, so
that their echoes cannot include banner directly followed by newline,
and then checking for just banner-and-newline in the match pattern.
While at it, spruce up the pump() call in sub query() to look like
the neater version in wait_connect(), and don't die on timeout
until after printing whatever we got.
Reported-by: Oleg Tselebrovskiy <o.tselebrovskiy@postgrespro.ru>
Diagnosed-by: Oleg Tselebrovskiy <o.tselebrovskiy@postgrespro.ru>
Author: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Soumya S Murali <soumyamurali.work@gmail.com>
Discussion: https://postgr.es/m/db6fdb35a8665ad3c18be01181d44b31@postgrespro.ru
Backpatch-through: 14
M src/test/perl/PostgreSQL/Test/BackgroundPsql.pm
Update .abi-compliance-history for change to TransitionCaptureState.
commit : fff87cb50dbd702240c7662feacada4d4eca827b
author : Dean Rasheed <dean.a.rasheed@gmail.com>
date : Fri, 30 Jan 2026 08:48:25 +0000
committer: Dean Rasheed <dean.a.rasheed@gmail.com>
date : Fri, 30 Jan 2026 08:48:25 +0000
As noted in the commit message for b4307ae2e54, the change to the
TransitionCaptureState structure is nominally an ABI break, but it is
not expected to affect any third-party code. Therefore, add it to the
.abi-compliance-history file.
Discussion: https://postgr.es/m/19380-4e293be2b4007248%40postgresql.org
Backpatch-through: 15-18
M .abi-compliance-history
Fix theoretical memory leaks in pg_locale_libc.c.
commit : 09d8c351744d3fdc7e1f72ab3a3b08b25e0c36f1
author : Jeff Davis <jdavis@postgresql.org>
date : Thu, 29 Jan 2026 10:14:55 -0800
committer: Jeff Davis <jdavis@postgresql.org>
date : Thu, 29 Jan 2026 10:14:55 -0800
The leaks were hard to reach in practice and the impact was low.
The callers provide a buffer the same number of bytes as the source
string (plus one for NUL terminator) as a starting size, and libc
never increases the number of characters. But, if the byte length of
one of the converted characters is larger, then it might need a larger
destination buffer. Previously, in that case, the working buffers
would be leaked.
Even in that case, the call typically happens within a context that
will soon be reset. Regardless, it's worth fixing to avoid such
assumptions, and the fix is simple so it's worth backporting.
Discussion: https://postgr.es/m/e2b7a0a88aaadded7e2d19f42d5ab03c9e182ad8.camel@j-davis.com
Backpatch-through: 18
M src/backend/utils/adt/pg_locale_libc.c
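The leak described above occurs when the conversion needs a larger destination than the caller-provided starting size; the fix is simply to release the working buffer before allocating a bigger one and retrying. A standalone sketch of that retry loop, with a stand-in conversion function and plain malloc/free instead of the backend allocator:

#include <stdlib.h>
#include <string.h>

/* Stand-in converter: returns required size, or 0 if dstsize is enough. */
static size_t
convert(char *dst, size_t dstsize, const char *src)
{
    size_t need = strlen(src) + 1;      /* pretend output == input here */

    if (need > dstsize)
        return need;
    memcpy(dst, src, need);
    return 0;
}

static char *
convert_with_retry(const char *src)
{
    size_t  size = strlen(src) + 1;     /* optimistic starting size */
    char   *buf = malloc(size);

    for (;;)
    {
        size_t need;

        if (buf == NULL)
            return NULL;
        need = convert(buf, size, src);
        if (need == 0)
            return buf;                 /* success: caller frees */
        free(buf);                      /* this release is what was missing */
        size = need;
        buf = malloc(size);
    }
}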
psql: Disable %P (pipeline status) for non-active connection
commit : d42735b1e8c8c6454a07b709e4ff7ccae4ad58c6
author : Michael Paquier <michael@paquier.xyz>
date : Thu, 29 Jan 2026 16:20:50 +0900
committer: Michael Paquier <michael@paquier.xyz>
date : Thu, 29 Jan 2026 16:20:50 +0900
In the psql prompt, the %P option shows the current pipeline status. Unlike
most of the other options, its status was showing up in the output
generated even if psql was not connected to a database. This was
confusing, because without a connection a pipeline status makes no
sense.
Like the other options, %P is updated so that its data is now hidden
without an active connection.
Author: Chao Li <li.evan.chao@gmail.com>
Discussion: https://postgr.es/m/86EF76B5-6E62-404D-B9EC-66F4714D7D5F@gmail.com
Backpatch-through: 18
M src/bin/psql/prompt.c
Fix CI failure introduced in commit 851f6649cc.
commit : 1c60f7236368ececfd9dc949251c28cebadfbe77
author : Amit Kapila <akapila@postgresql.org>
date : Thu, 29 Jan 2026 03:05:12 +0000
committer: Amit Kapila <akapila@postgresql.org>
date : Thu, 29 Jan 2026 03:05:12 +0000
The test added in commit 851f6649cc uses a backup taken from a node
created by the previous test to perform standby related checks. On
Windows, however, the standby failed to start with the following error:
FATAL: could not rename file "backup_label" to "backup_label.old": Permission denied
This occurred because some background sessions from the earlier test were
still active. These leftover processes continued accessing the parent
directory of the backup_label file, likely preventing the rename and
causing the failure. Ensuring that these sessions are cleanly terminated
resolves the issue in local testing.
Additionally, the has_restoring => 1 option has been removed, as it was
not required by the new test.
Reported-by: Robert Haas <robertmhaas@gmail.com>
Backpatch-through: 17
Discussion: https://postgr.es/m/CA+TgmobdVhO0ckZfsBZ0wqDO4qHVCwZZx8sf=EinafvUam-dsQ@mail.gmail.com
M src/test/recovery/t/046_checkpoint_logical_slot.pl
oauth: Correct test dependency on oauth_hook_client
commit : 444826b6dc17c9102d3114e670adb8a4119ab2a2
author : Jacob Champion <jchampion@postgresql.org>
date : Tue, 27 Jan 2026 11:58:26 -0800
committer: Jacob Champion <jchampion@postgresql.org>
date : Tue, 27 Jan 2026 11:58:26 -0800
The oauth_validator tests missed the lessons of c89525d57 et al, so
certain combinations of command-line build order and `meson test`
options can result in
Command 'oauth_hook_client' not found in [...] at src/test/perl/PostgreSQL/Test/Utils.pm line 427.
Add the missing dependency on the test executable. This fixes, for
example,
$ ninja clean && ninja meson-test-prereq && PG_TEST_EXTRA=oauth meson test --no-rebuild
Reported-by: Jonathan Gonzalez V. <jonathan.abdiel@gmail.com>
Author: Jonathan Gonzalez V. <jonathan.abdiel@gmail.com>
Discussion: https://postgr.es/m/6e8f4f7c23faf77c4b6564c4b7dc5d3de64aa491.camel@gmail.com
Discussion: https://postgr.es/m/qh4c5tvkgjef7jikjig56rclbcdrrotngnwpycukd2n3k25zi2%4044hxxvtwmgum
Backpatch-through: 18
M src/test/modules/oauth_validator/meson.build
Fix crash introduced by incorrect backport 806555e300.
commit : 8993bf0991d876c878fe3739d6d4e200a1e122f3
author : Jeff Davis <jdavis@postgresql.org>
date : Tue, 27 Jan 2026 08:16:07 -0800
committer: Jeff Davis <jdavis@postgresql.org>
date : Tue, 27 Jan 2026 08:16:07 -0800
Commit 7f007e4a04 in master depends on 1476028225, but the latter was
not backported. Therefore 806555e300 (the backport of commit
7f007e4a04) incorrectly used pg_strfold() in a locale where
ctype_is_c.
The fix is to simply have the callers check for ctype_is_c.
Because 7f007e4a04 was only backported to version 18, and because the
commit in master is fine, this fix only exists in version 18.
Reported-by: Александр Кожемякин <a.kozhemyakin@postgrespro.ru>
Discussion: https://postgr.es/m/456f7143-51ea-4342-b4a1-85f0d9b6c79f@postgrespro.ru
M contrib/ltree/crc32.c
M contrib/ltree/lquery_op.c
Prevent invalidation of newly synced replication slots.
commit : 919c9fa13cd0684b437a88719d670a9bf6dd0dc8
author : Amit Kapila <akapila@postgresql.org>
date : Tue, 27 Jan 2026 05:45:25 +0000
committer: Amit Kapila <akapila@postgresql.org>
date : Tue, 27 Jan 2026 05:45:25 +0000
A race condition could cause a newly synced replication slot to become
invalidated between its initial sync and the checkpoint.
When syncing a replication slot to a standby, the slot's initial
restart_lsn is taken from the publisher's remote_restart_lsn. Because slot
sync happens asynchronously, this value can lag behind the standby's
current redo pointer. Without any interlocking between WAL reservation and
checkpoints, a checkpoint may remove WAL required by the newly synced
slot, causing the slot to be invalidated.
To fix this, we acquire ReplicationSlotAllocationLock before reserving WAL
for a newly synced slot, similar to commit 006dd4b2e5. This ensures that
if WAL reservation happens first, the checkpoint process must wait for
slotsync to update the slot's restart_lsn before it computes the minimum
required LSN.
However, unlike in ReplicationSlotReserveWal(), this lock alone cannot
protect a newly synced slot if a checkpoint has already run
CheckPointReplicationSlots() before slotsync updates the slot. In such
cases, the remote restart_lsn may be stale and earlier than the current
redo pointer. To prevent relying on an outdated LSN, we use the oldest
WAL location available if it is greater than the remote restart_lsn.
This ensures that newly synced slots always start with a safe, non-stale
restart_lsn and are not invalidated by concurrent checkpoints.
Author: Zhijie Hou <houzj.fnst@fujitsu.com>
Reviewed-by: Hayato Kuroda <kuroda.hayato@fujitsu.com>
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
Reviewed-by: Vitaly Davydov <v.davydov@postgrespro.ru>
Reviewed-by: Chao Li <li.evan.chao@gmail.com>
Backpatch-through: 17
Discussion: https://postgr.es/m/TY4PR01MB16907E744589B1AB2EE89A31F94D7A%40TY4PR01MB16907.jpnprd01.prod.outlook.com
M src/backend/access/transam/xlog.c
M src/backend/replication/logical/slotsync.c
M src/include/access/xlog.h
M src/test/recovery/t/046_checkpoint_logical_slot.pl
pgindent fix for 3fccbd94cba
commit : 3a8b6e56cdbca12723a58f4ea13e39ac0611a59b
author : Tomas Vondra <tomas.vondra@postgresql.org>
date : Tue, 27 Jan 2026 00:26:36 +0100
committer: Tomas Vondra <tomas.vondra@postgresql.org>
date : Tue, 27 Jan 2026 00:26:36 +0100
Backpatch-through: 18
M contrib/pg_buffercache/pg_buffercache_pages.c
Handle ENOENT status when querying NUMA node
commit : 9796c4f5607be5807f2d2ba9bca1bc87af198db3
author : Tomas Vondra <tomas.vondra@postgresql.org>
date : Mon, 26 Jan 2026 22:20:18 +0100
committer: Tomas Vondra <tomas.vondra@postgresql.org>
date : Mon, 26 Jan 2026 22:20:18 +0100
We've assumed that touching the memory is sufficient for a page to be
located on one of the NUMA nodes. But a page may be moved to swap
after we touch it, due to memory pressure.
We touch the memory before querying the status, but there is no
guarantee it won't be moved to the swap in the meantime. The touching
happens only on the first call, so later calls are more likely to be
affected. And the batching increases the window too.
It's up to the kernel if/when pages get moved to swap. We have to accept
ENOENT (-2) as a valid result, and handle it without failing. This patch
simply treats it as an unknown node, and returns NULL in the two
affected views (pg_shmem_allocations_numa and pg_buffercache_numa).
Hugepages cannot be swapped out, so this affects only regular pages.
Reported by Christoph Berg, investigation and fix by me. Backpatch to
18, where the two views were introduced.
Reported-by: Christoph Berg <myon@debian.org>
Discussion: https://postgr.es/m/aTq5Gt_n-oS_QSpL@msg.df7cb.de
Backpatch-through: 18
M contrib/pg_buffercache/pg_buffercache_pages.c
M src/backend/storage/ipc/shmem.c
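On Linux, move_pages(2) reports a per-page status that is either a node id or a negative errno; the fix above treats -ENOENT (page not resident, e.g. swapped out) as "node unknown" rather than a failure, and the views show NULL. A hedged sketch of that interpretation, with hypothetical helper names:

#include <errno.h>

/*
 * Map one per-page status from move_pages(2) to a node id, or -1 when
 * no node can be reported.
 */
static int
page_numa_node(int status)
{
    if (status >= 0)
        return status;          /* a real NUMA node id */
    if (status == -ENOENT)
        return -1;              /* not resident (e.g. swapped out): show NULL */
    return -1;                  /* other errnos: likewise nothing to report here */
}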
Exercise parallel GIN builds in regression tests
commit : 32593394ee439703db558fc4be83de2cb249ded8
author : Tomas Vondra <tomas.vondra@postgresql.org>
date : Mon, 26 Jan 2026 18:54:12 +0100
committer: Tomas Vondra <tomas.vondra@postgresql.org>
date : Mon, 26 Jan 2026 18:54:12 +0100
Modify two places creating GIN indexes in regression tests, so that the
build is parallel. This provides a basic test coverage, even if the
amounts of data are fairly small.
Reported-by: Kirill Reshke <reshkekirill@gmail.com>
Backpatch-through: 18
Discussion: https://postgr.es/m/CALdSSPjUprTj+vYp1tRKWkcLYzdy=N=O4Cn4y_HoxNSqQwBttg@mail.gmail.com
M src/test/regress/expected/jsonb.out
M src/test/regress/expected/tsearch.out
M src/test/regress/sql/jsonb.sql
M src/test/regress/sql/tsearch.sql
Lookup the correct ordering for parallel GIN builds
commit : eee71a66cc860771837ed645a8dcf0ccffd735c6
author : Tomas Vondra <tomas.vondra@postgresql.org>
date : Mon, 26 Jan 2026 18:52:16 +0100
committer: Tomas Vondra <tomas.vondra@postgresql.org>
date : Mon, 26 Jan 2026 18:52:16 +0100
When building a tuplesort during parallel GIN builds, the function
incorrectly looked up the default B-Tree operator, not the function
associated with the GIN opclass (through GIN_COMPARE_PROC).
Fixed by using the same logic as initGinState(), and the other place
in parallel GIN builds.
This could cause two types of issues. First, a data type might not have
a B-Tree opclass, in which case the PrepareSortSupportFromOrderingOp()
fails with an ERROR. Second, a data type might have both B-Tree and GIN
opclasses, defining order/equality in different ways. This could lead to
logical corruption in the index.
Backpatch to 18, where parallel GIN builds were introduced.
Discussion: https://postgr.es/m/73a28b94-43d5-4f77-b26e-0d642f6de777@iki.fi
Reported-by: Heikki Linnakangas <hlinnaka@iki.fi>
Backpatch-through: 18
M src/backend/utils/sort/tuplesortvariants.c
Reduce length of TAP test file name.
commit : 7903377d9de2f056fd66adc1892cc0771bd2d131
author : Robert Haas <rhaas@postgresql.org>
date : Mon, 26 Jan 2026 12:43:52 -0500
committer: Robert Haas <rhaas@postgresql.org>
date : Mon, 26 Jan 2026 12:43:52 -0500
Buildfarm member fairywren hit the Windows limitation on the length of a
file path. While there may be other things we should also do to prevent
this from happening, it's certainly the case that the length of this
test file name is much longer than others in the same directory, so make
it shorter.
Reported-by: Alexander Lakhin <exclusion@gmail.com>
Discussion: http://postgr.es/m/274e0a1a-d7d2-4bc8-8b56-dd09f285715e@gmail.com
Backpatch-through: 17
M src/bin/pg_combinebackup/meson.build
R100 src/bin/pg_combinebackup/t/011_incremental_backup_truncation_block.pl src/bin/pg_combinebackup/t/011_ib_truncation.pl
Fix possible issue of a WindowFunc being in the wrong WindowClause
commit : ccde5be6869c13ad88259d300c684900d7c1eb8c
author : David Rowley <drowley@postgresql.org>
date : Mon, 26 Jan 2026 23:46:23 +1300
committer: David Rowley <drowley@postgresql.org>
date : Mon, 26 Jan 2026 23:46:23 +1300
ed1a88dda made it so WindowClauses can be merged when all window
functions belonging to the WindowClause can equally well use some
other WindowClause without any behavioral changes. When that
optimization applies, the WindowFunc's "winref" gets adjusted to
reference the new WindowClause.
That commit does not work well with the deduplication logic in
find_window_functions(), which only added the WindowFunc to the list
when there wasn't already an identical WindowFunc in the list. That
deduplication logic meant that the duplicate WindowFunc wouldn't get the
"winref" changed when optimize_window_clauses() was able to swap the
WindowFunc to another WindowClause. This could lead to the following
error in the unlikely event that the deduplication code did something and
the duplicate WindowFunc happened to be moved into another WindowClause.
ERROR: WindowFunc with winref 2 assigned to WindowAgg with winref 1
As it turns out, the deduplication logic in find_window_functions() is
pretty bogus. It might have done something when added, as that code
predates b8d7f053c, which changed how projections work. As it turns
out, at least now we *will* evaluate the duplicate WindowFuncs. All
that the deduplication code seems to do today is assist in
underestimating the WindowAggPath costs due to not counting the
evaluation costs of duplicate WindowFuncs.
Ideally the fix would be to remove the deduplication code, but that
could result in changes to the plan costs, as duplicate WindowFuncs
would then be costed. Instead, let's play it safe and shift the
deduplication code so it runs after the other processing in
optimize_window_clauses().
Backpatch only as far as v16 as there doesn't seem to be any other harm
done by the WindowFunc deduplication code before then. This issue was
fixed in master by 7027dd499.
Reported-by: Meng Zhang <mza117jc@gmail.com>
Author: Meng Zhang <mza117jc@gmail.com>
Author: David Rowley <dgrowleyml@gmail.com>
Discussion: https://postgr.es/m/CAErYLFAuxmW0UVdgrz7iiuNrxGQnFK_OP9hBD5CUzRgjrVrz=Q@mail.gmail.com
Backpatch-through: 16 M src/backend/optimizer/plan/planner.c
M src/backend/optimizer/util/clauses.c
Fix trigger transition table capture for MERGE in CTE queries.
commit : c6ce4dcf9d3b7a8a89aca386124c473f98fc329e
author : Dean Rasheed <dean.a.rasheed@gmail.com>
date : Sat, 24 Jan 2026 11:30:48 +0000
committer: Dean Rasheed <dean.a.rasheed@gmail.com>
date : Sat, 24 Jan 2026 11:30:48 +0000 When executing a data-modifying CTE query containing MERGE and some
other DML operation on a table with statement-level AFTER triggers,
the transition tables passed to the triggers would fail to include the
rows affected by the MERGE.
The reason is that, when initializing a ModifyTable node for MERGE,
MakeTransitionCaptureState() would create a TransitionCaptureState
structure with a single "tcs_private" field pointing to an
AfterTriggersTableData structure with cmdType == CMD_MERGE. Tuples
captured there would then not be included in the sets of tuples
captured when executing INSERT/UPDATE/DELETE ModifyTable nodes in the
same query.
Since there are no MERGE triggers, we should only create
AfterTriggersTableData structures for INSERT/UPDATE/DELETE. Individual
MERGE actions should then use those, thereby sharing the same capture
tuplestores as any other DML commands executed in the same query.
This requires changing the TransitionCaptureState structure, replacing
"tcs_private" with 3 separate pointers to AfterTriggersTableData
structures, one for each of INSERT, UPDATE, and DELETE. Nominally,
this is an ABI break to a public structure in commands/trigger.h.
However, since this is a private field pointing to an opaque data
structure, the only way to create a valid TransitionCaptureState is by
calling MakeTransitionCaptureState(), and no extensions appear to be
doing that anyway, so it seems safe for back-patching.
Backpatch to v15, where MERGE was introduced.
Bug: #19380
Reported-by: Daniel Woelfel <dwwoelfel@gmail.com>
Author: Dean Rasheed <dean.a.rasheed@gmail.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/19380-4e293be2b4007248%40postgresql.org
Backpatch-through: 15 M src/backend/commands/trigger.c
M src/include/commands/trigger.h
M src/test/regress/expected/triggers.out
M src/test/regress/sql/triggers.sql
Fix bogus ctid requirement for dummy-root partitioned targets
commit : 9f4b7bfc5eb6b3068f35ef5b879d3d8725f5f167
author : Amit Langote <amitlan@postgresql.org>
date : Fri, 23 Jan 2026 10:20:51 +0900
committer: Amit Langote <amitlan@postgresql.org>
date : Fri, 23 Jan 2026 10:20:51 +0900 ExecInitModifyTable() unconditionally required a ctid junk column even
when the target was a partitioned table. This led to spurious "could
not find junk ctid column" errors when all children were excluded and
only the dummy root result relation remained.
A partitioned table only appears in the result relations list when all
leaf partitions have been pruned, leaving the dummy root as the sole
entry. Assert this invariant (nrels == 1) and skip the ctid requirement.
Also adjust ExecModifyTable() to tolerate invalid ri_RowIdAttNo for
partitioned tables, which is safe since no rows will be processed in
this case.
Bug: #19099
Reported-by: Alexander Lakhin <exclusion@gmail.com>
Author: Amit Langote <amitlangote09@gmail.com>
Reviewed-by: Tender Wang <tndrwang@gmail.com>
Reviewed-by: Kirill Reshke <reshkekirill@gmail.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/19099-e05dcfa022fe553d%40postgresql.org
Backpatch-through: 14 M contrib/file_fdw/expected/file_fdw.out
M contrib/file_fdw/sql/file_fdw.sql
M src/backend/executor/nodeModifyTable.c
Remove faulty Assert in partitioned INSERT...ON CONFLICT DO UPDATE.
commit : 9f7c803c91584fb6e4b45dc87de44bac370477a9
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Thu, 22 Jan 2026 18:35:31 -0500
committer: Tom Lane <tgl@sss.pgh.pa.us>
date : Thu, 22 Jan 2026 18:35:31 -0500 Commit f16241bef mistakenly supposed that INSERT...ON CONFLICT DO
UPDATE rejects partitioned target tables. (This may have been
accurate when the patch was written, but it was already obsolete
when committed.) Hence, there's an assertion that we can't see
ItemPointerIndicatesMovedPartitions() in that path, but the assertion
is triggerable.
Some other places throw error if they see a moved-across-partitions
tuple, but there seems no need for that here, because if we just retry
then we get the same behavior as in the update-within-partition case,
as demonstrated by the new isolation test. So fix by deleting the
faulty Assert. (The fact that this is the fix doubtless explains
why we've heard no field complaints: the behavior of a non-assert
build is fine.)
The TM_Deleted case contains a cargo-culted copy of the same Assert,
which I also deleted to avoid confusion, although I believe that one
is actually not triggerable.
Per our code coverage report, neither the TM_Updated nor the
TM_Deleted case was reached at all by existing tests, so this
patch adds tests for both.
Reported-by: Dmitry Koval <d.koval@postgrespro.ru>
Author: Joseph Koshakow <koshy44@gmail.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/f5fffe4b-11b2-4557-a864-3587ff9b4c36@postgrespro.ru
Backpatch-through: 14 M src/backend/executor/nodeModifyTable.c
A src/test/isolation/expected/insert-conflict-do-update-4.out
M src/test/isolation/isolation_schedule
A src/test/isolation/specs/insert-conflict-do-update-4.spec
doc: Mention pg_get_partition_constraintdef()
commit : a3bbd60b94cfbc2fa91f0cc5b94c2105b176b303
author : Michael Paquier <michael@paquier.xyz>
date : Thu, 22 Jan 2026 16:35:40 +0900
committer: Michael Paquier <michael@paquier.xyz>
date : Thu, 22 Jan 2026 16:35:40 +0900 All the other SQL functions reconstructing definitions or commands are
listed in the documentation, except this one.
Oversight in 1848b73d4576.
Author: Todd Liebenschutz-Jones <todd.liebenschutz-jones@starlingbank.com>
Discussion: https://postgr.es/m/CAGTRfaD6uRQ9iutASDzc_iDoS25sQTLWgXTtR3ta63uwTxq6bA@mail.gmail.com
Backpatch-through: 14 M doc/src/sgml/func.sgml
jit: Add missing inline pass for LLVM >= 17.
commit : f1c6b153cabdc9ea33c3396f13e1cee92836df75
author : Thomas Munro <tmunro@postgresql.org>
date : Thu, 22 Jan 2026 15:43:13 +1300
committer: Thomas Munro <tmunro@postgresql.org>
date : Thu, 22 Jan 2026 15:43:13 +1300 With LLVM >= 17, transform passes are provided as a string to
LLVMRunPasses. Only two strings were used: "default<O3>" and
"default<O0>,mem2reg".
With previous LLVM versions, an additional inline pass was added when
JIT inlining was enabled without optimization. With LLVM >= 17, the code
would go through llvm_inline, prepare the functions for inlining, but
the generated bitcode would be the same due to the missing inline pass.
This patch restores the previous behavior by adding an inline pass when
inlining is enabled but no optimization is done.
This fixes an oversight introduced by 76200e5e when support for LLVM 17
was added.
Backpatch-through: 14
Author: Anthonin Bonnefoy <anthonin.bonnefoy@datadoghq.com>
Reviewed-by: Thomas Munro <thomas.munro@gmail.com>
Reviewed-by: Andreas Karlsson <andreas@proxel.se>
Reviewed-by: Andres Freund <andres@anarazel.de>
Reviewed-by: Álvaro Herrera <alvherre@kurilemu.de>
Reviewed-by: Pierre Ducroquet <p.psql@pinaraf.info>
Reviewed-by: Matheus Alcantara <matheusssilv97@gmail.com>
Discussion: https://postgr.es/m/CAO6_XqrNjJnbn15ctPv7o4yEAT9fWa-dK15RSyun6QNw9YDtKg%40mail.gmail.com M src/backend/jit/llvm/llvmjit.c
amcheck: Fix snapshot usage in bt_index_parent_check
commit : 3c83a2a0ace90a83249a51925706395c76a85bdf
author : Álvaro Herrera <alvherre@kurilemu.de>
date : Wed, 21 Jan 2026 18:55:43 +0100
committer: Álvaro Herrera <alvherre@kurilemu.de>
date : Wed, 21 Jan 2026 18:55:43 +0100 We were using SnapshotAny to do some index checks, but that's wrong and
causes spurious errors when used on indexes created by CREATE INDEX
CONCURRENTLY. Fix it to use an MVCC snapshot, and add a test for it.
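For illustration, the snapshot handling amounts to roughly this (a
minimal sketch, not the exact verify_nbtree.c code):

Snapshot	snapshot;

/*
 * Take and register an MVCC snapshot instead of using SnapshotAny, so
 * that heap rows invisible to the check (e.g. left behind by CREATE
 * INDEX CONCURRENTLY) do not cause spurious complaints.
 */
snapshot = RegisterSnapshot(GetTransactionSnapshot());

/* ... run the index/heap consistency checks under this snapshot ... */

UnregisterSnapshot(snapshot);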
Backpatch of 6bd469d26aca to branches 14-16. I previously misidentified
the bug's origin: it came in with commit 7f563c09f890 (pg11-era, not
5ae2087202af as claimed previously), so all live branches are affected.
Also take the opportunity to fix some comments that we failed to update
in the original commits and apply pgperltidy. In branch 14, remove the
unnecessary test plan specification (which would have needed to be
changed anyway; cf. commit 549ec201d613.)
Diagnosed-by: Donghang Lin <donghanglin@gmail.com>
Author: Mihail Nikalayeu <mihailnikalayeu@gmail.com>
Reviewed-by: Andrey Borodin <x4mmm@yandex-team.ru>
Backpatch-through: 17
Discussion: https://postgr.es/m/CANtu0ojmVd27fEhfpST7RG2KZvwkX=dMyKUqg0KM87FkOSdz8Q@mail.gmail.com M contrib/amcheck/t/002_cic.pl
M contrib/amcheck/verify_nbtree.c
doc: revert "xreflabel" used for PL/Python & libpq chapters
commit : 85aedc67e9674b7fdbf7627bbb54ed5b741880c5
author : Bruce Momjian <bruce@momjian.us>
date : Mon, 19 Jan 2026 22:59:10 -0500
committer: Bruce Momjian <bruce@momjian.us>
date : Mon, 19 Jan 2026 22:59:10 -0500 This reverts d8aa21b74ff, which was added for the PG 18 release notes,
and adjusts the PG 18 release notes for this change. This is necessary
since the "xreflabel" affected other references to these chapters.
Reported-by: Robert Treat
Author: Robert Treat
Discussion: https://postgr.es/m/CABV9wwNEZDdp5QtrW5ut0H+MOf6U1PvrqBqmgSTgcixqk+Q73A@mail.gmail.com
Backpatch-through: 18 M doc/src/sgml/libpq.sgml
M doc/src/sgml/plpython.sgml
M doc/src/sgml/release-18.sgml
pg_stat_statements: Fix crash in list squashing with Vars
commit : 3304e97b1b73e0ca7b14bbd8ed17162b3cb056ec
author : Michael Paquier <michael@paquier.xyz>
date : Tue, 20 Jan 2026 08:11:16 +0900
committer: Michael Paquier <michael@paquier.xyz>
date : Tue, 20 Jan 2026 08:11:16 +0900 When IN/ANY clauses contain both constants and variable expressions, the
optimizer transforms them into separate structures: constants become
an array expression while variables become individual OR conditions.
This transformation created overlapping token locations, causing
pg_stat_statements query normalization to crash because it could not
calculate the number of bytes remaining to write for the normalized
query.
This commit disables squashing for mixed IN list expressions when
constructing a scalar array op, by setting list_start and list_end
to -1 when both variables and non-variables are present. Some
regression tests are added to PGSS to verify these patterns.
Author: Sami Imseih <samimseih@gmail.com>
Reviewed-by: Dmitry Dolgov <9erthalion6@gmail.com>
Discussion: https://postgr.es/m/CAA5RZ0ts9qiONnHjjHxPxtePs22GBo4d3jZ_s2BQC59AN7XbAA@mail.gmail.com
Backpatch-through: 18 M contrib/pg_stat_statements/expected/squashing.out
M contrib/pg_stat_statements/sql/squashing.sql
M src/backend/parser/parse_expr.c
Don't set the truncation block length greater than RELSEG_SIZE.
commit : c80b0c9d63b25a1e7fc751a4cf66a6510ffafbb8
author : Robert Haas <rhaas@postgresql.org>
date : Mon, 19 Jan 2026 12:02:08 -0500
committer: Robert Haas <rhaas@postgresql.org>
date : Mon, 19 Jan 2026 12:02:08 -0500 When faced with a relation containing more than 1 physical segment
(i.e. >1GB, with normal settings), the previous code could compute a
truncation block length greater than RELSEG_SIZE, which could lead to
restore failures of this form:
file "%s" has truncation block length %u in excess of segment size %u
The fix is simply to clamp the maximum computed truncation_block_length
to RELSEG_SIZE. I have also added some comments to clarify the logic.
The test case was written by Oleg Tkachenko, but I have rewritten its
comments.
Reported-by: Oleg Tkachenko <oatkachenko@gmail.com>
Diagnosed-by: Oleg Tkachenko <oatkachenko@gmail.com>
Co-authored-by: Robert Haas <rhaas@postgresql.org>
Co-authored-by: Oleg Tkachenko <oatkachenko@gmail.com>
Reviewed-by: Amul Sul <sulamul@gmail.com>
Backpatch-through: 17
Discussion: http://postgr.es/m/00FEFC88-EA1D-4271-B38F-EB741733A84A@gmail.com M src/backend/backup/basebackup_incremental.c
M src/bin/pg_combinebackup/meson.build
A src/bin/pg_combinebackup/t/011_incremental_backup_truncation_block.pl
Fix unsafe pushdown of quals referencing grouping Vars
commit : 7650eabb662f2f3708042c0b713c46aa042db94f
author : Richard Guo <rguo@postgresql.org>
date : Mon, 19 Jan 2026 11:13:23 +0900
committer: Richard Guo <rguo@postgresql.org>
date : Mon, 19 Jan 2026 11:13:23 +0900 When checking a subquery's output expressions to see if it's safe to
push down an upper-level qual, check_output_expressions() previously
treated grouping Vars as opaque Vars. This implicitly assumed they
were stable and scalar.
However, a grouping Var's underlying expression corresponds to the
grouping clause, which may be volatile or set-returning. If an
upper-level qual references such an output column, pushing it down
into the subquery is unsafe. This can cause strange results due to
multiple evaluation of a volatile function, or introduce SRFs into
the subquery's WHERE/HAVING quals.
This patch teaches check_output_expressions() to look through grouping
Vars to their underlying expressions. This ensures that any
volatility or set-returning properties in the grouping clause are
detected, preventing the unsafe pushdown.
We do not need to recursively examine the Vars contained in these
underlying expressions. Even if they reference outputs from
lower-level subqueries (at any depth), those references are guaranteed
not to expand to volatile or set-returning functions, because
subqueries containing such functions in their targetlists are never
pulled up.
Backpatch to v18, where this issue was introduced.
Reported-by: Eric Ridge <eebbrr@gmail.com>
Diagnosed-by: Tom Lane <tgl@sss.pgh.pa.us>
Author: Richard Guo <guofenglinux@gmail.com>
Discussion: https://postgr.es/m/7900964C-F99E-481E-BEE5-4338774CEB9F@gmail.com
Backpatch-through: 18 M src/backend/optimizer/path/allpaths.c
M src/backend/optimizer/util/var.c
M src/test/regress/expected/subselect.out
M src/test/regress/sql/subselect.sql
Update time zone data files to tzdata release 2025c.
commit : 6574bee6459e73c42816ac7ec16e6b6b6197000c
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Sun, 18 Jan 2026 14:54:33 -0500
committer: Tom Lane <tgl@sss.pgh.pa.us>
date : Sun, 18 Jan 2026 14:54:33 -0500 This is pretty pro-forma for our purposes, as the only change
is a historical correction for pre-1976 DST laws in
Baja California. (Upstream made this release mostly to update
their leap-second data, which we don't use.) But with minor
releases coming up, we should be up-to-date.
Backpatch-through: 14 M src/timezone/data/tzdata.zi
Fix error message related to end TLI in backup manifest
commit : 69ee81932a161768833264e6db5523a8412952f2
author : Michael Paquier <michael@paquier.xyz>
date : Sun, 18 Jan 2026 17:24:58 +0900
committer: Michael Paquier <michael@paquier.xyz>
date : Sun, 18 Jan 2026 17:24:58 +0900 The code adding the WAL information included in a backup manifest is
cross-checked with the contents of the timeline history file of the end
timeline. When a check based on the end timeline failed, the error
message reported the value of the start timeline instead. Fix the
message to show the correct timeline number.
Such an error report would be confusing for users if seen, because it
would provide incorrect information, so backpatch all the way down.
Oversight in 0d8c9c1210c4.
Author: Man Zeng <zengman@halodbtech.com>
Discussion: https://postgr.es/m/tencent_0F2949C4594556F672CF4658@qq.com
Backpatch-through: 14 M src/backend/backup/backup_manifest.c
Fix crash in test function on removable_cutoff(NULL)
commit : 9b6714ed9a1ab18af9cbfff8dd0f52cf99a1557e
author : Heikki Linnakangas <heikki.linnakangas@iki.fi>
date : Fri, 16 Jan 2026 14:42:22 +0200
committer: Heikki Linnakangas <heikki.linnakangas@iki.fi>
date : Fri, 16 Jan 2026 14:42:22 +0200 The function is part of the injection_points test module and only used
in tests. None of the current tests call it with a NULL argument, but
it is supposed to work.
Backpatch-through: 17 M src/test/modules/injection_points/regress_injection.c
Fix rowmark handling for non-relation RTEs during executor init
commit : f335457e8adb32ce2e506e0d2b62c7f6f4bb98d1
author : Amit Langote <amitlan@postgresql.org>
date : Fri, 16 Jan 2026 14:53:32 +0900
committer: Amit Langote <amitlan@postgresql.org>
date : Fri, 16 Jan 2026 14:53:32 +0900 Commit cbc127917e introduced tracking of unpruned relids to skip
processing of pruned partitions. PlannedStmt.unprunableRelids is
computed as the difference between PlannerGlobal.allRelids and
prunableRelids, but allRelids only contains RTE_RELATION entries.
This means non-relation RTEs (VALUES, subqueries, CTEs, etc.) are
never included in unprunableRelids, and consequently not in
es_unpruned_relids at runtime.
As a result, rowmarks attached to non-relation RTEs were incorrectly
skipped during executor initialization. This affects any DML statement
that has rowmarks on such RTEs, including MERGE with a VALUES or
subquery source, and UPDATE/DELETE with joins against subqueries or
CTEs. When a concurrent update triggers an EPQ recheck, the missing
rowmark leads to incorrect results.
Fix by restricting the es_unpruned_relids membership check to
RTE_RELATION entries only, since partition pruning only applies to
actual relations. Rowmarks for other RTE kinds are now always
processed.
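A minimal sketch of the adjusted check (variable names are illustrative,
not the exact executor code):

/*
 * Only plain relations can be pruned, so only they are subject to the
 * es_unpruned_relids membership test; rowmarks on VALUES, subqueries,
 * CTEs, etc. are always initialized.
 */
if (rte->rtekind != RTE_RELATION ||
	bms_is_member(rc->rti, estate->es_unpruned_relids))
{
	/* ... set up the ExecRowMark as before ... */
}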
Bug: #19355
Reported-by: Bihua Wang <wangbihua.cn@gmail.com>
Diagnosed-by: Dean Rasheed <dean.a.rasheed@gmail.com>
Diagnosed-by: Tender Wang <tndrwang@gmail.com>
Author: Dean Rasheed <dean.a.rasheed@gmail.com>
Discussion: https://postgr.es/m/19355-57d7d52ea4980dc6@postgresql.org
Backpatch-through: 18 M src/backend/executor/execMain.c
M src/backend/executor/nodeLockRows.c
M src/backend/executor/nodeModifyTable.c
M src/test/isolation/expected/eval-plan-qual.out
M src/test/isolation/specs/eval-plan-qual.spec
Fix segfault from releasing locks in detached DSM segments
commit : 1943ceb38842ada55f13630f989c78184e82a397
author : Amit Langote <amitlan@postgresql.org>
date : Fri, 16 Jan 2026 13:01:52 +0900
committer: Amit Langote <amitlan@postgresql.org>
date : Fri, 16 Jan 2026 13:01:52 +0900 If a FATAL error occurs while holding a lock in a DSM segment (such
as a dshash lock) and the process is not in a transaction, a
segmentation fault can occur during process exit.
The problem sequence is:
1. Process acquires a lock in a DSM segment (e.g., via dshash)
2. FATAL error occurs outside transaction context
3. proc_exit() begins, calling before_shmem_exit callbacks
4. dsm_backend_shutdown() detaches all DSM segments
5. Later, on_shmem_exit callbacks run
6. ProcKill() calls LWLockReleaseAll()
7. Segfault: the lock being released is in unmapped memory
This only manifests outside transaction contexts because
AbortTransaction() calls LWLockReleaseAll() during transaction
abort, releasing locks before DSM cleanup. Background workers and
other non-transactional code paths are vulnerable.
Fix by calling LWLockReleaseAll() unconditionally at the start of
shmem_exit(), before any callbacks run. Releasing locks before
callbacks prevents the segfault - locks must be released before
dsm_backend_shutdown() detaches their memory. This is safe because
after an error, held locks are protecting potentially inconsistent
data anyway, and callbacks can acquire fresh locks if needed.
Also add a comment noting that LWLockReleaseAll() must be safe to
call before LWLock initialization (which it is, since
num_held_lwlocks will be 0), plus an Assert for the post-condition.
This fix aligns with the original design intent from commit
001a573a2, which noted that backends must clean up shared memory
state (including releasing lwlocks) before unmapping dynamic shared
memory segments.
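In outline, the shape of the fix is roughly (a sketch, not the complete
ipc.c code):

void
shmem_exit(int code)
{
	/*
	 * Release any lwlocks still held before running callbacks: some of
	 * those locks may live in DSM segments that dsm_backend_shutdown()
	 * is about to unmap.  This is safe even before LWLock initialization,
	 * since num_held_lwlocks is 0 then.
	 */
	LWLockReleaseAll();

	/*
	 * ... before_shmem_exit callbacks, dsm_backend_shutdown(),
	 * on_shmem_exit callbacks ...
	 */
}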
Reported-by: Rahila Syed <rahilasyed90@gmail.com>
Author: Rahila Syed <rahilasyed90@gmail.com>
Reviewed-by: Amit Langote <amitlangote09@gmail.com>
Reviewed-by: Dilip Kumar <dilipbalaut@gmail.com>
Reviewed-by: Andres Freund <andres@anarazel.de>
Reviewed-by: Álvaro Herrera <alvherre@kurilemu.de>
Discussion: https://postgr.es/m/CAH2L28uSvyiosL+kaic9249jRVoQiQF6JOnaCitKFq=xiFzX3g@mail.gmail.com
Backpatch-through: 14 M src/backend/storage/ipc/ipc.c
M src/backend/storage/lmgr/lwlock.c
pgindent fix for 8077649907d
commit : a80811e592a3f6851e113fbdf5fe21e677391bb9
author : Andres Freund <andres@anarazel.de>
date : Thu, 15 Jan 2026 14:54:16 -0500
committer: Andres Freund <andres@anarazel.de>
date : Thu, 15 Jan 2026 14:54:16 -0500 Per buildfarm member koel.
Backpatch-through: 18 M src/backend/storage/aio/method_io_uring.c
Fix 'unexpected data beyond EOF' on replica restart
commit : 9ed411e084b7e25885d761a15e3a54818cf856a9
author : Heikki Linnakangas <heikki.linnakangas@iki.fi>
date : Thu, 15 Jan 2026 20:57:12 +0200
committer: Heikki Linnakangas <heikki.linnakangas@iki.fi>
date : Thu, 15 Jan 2026 20:57:12 +0200 On restart, a replica can fail with an error like 'unexpected data
beyond EOF in block 200 of relation T/D/R'. These are the steps to
reproduce it:
- A relation has a size of 400 blocks.
- Blocks 201 to 400 are empty.
- Block 200 has two rows.
- Blocks 100 to 199 are empty.
- A restartpoint is done
- Vacuum truncates the relation to 200 blocks
- A FPW deletes a row in block 200
- A checkpoint is done
- A FPW deletes the last row in block 200
- Vacuum truncates the relation to 100 blocks
- The replica restarts
When the replica restarts:
- The relation on disk starts at 100 blocks, because all the
truncations were applied before restart.
- The first truncate to 200 blocks is replayed. It silently fails, but
it will still (incorrectly!) update the cached size to 200 blocks
- The first FPW on block 200 is applied. XLogReadBufferForRead relies
on the cached size and incorrectly assumes that the page already
exists in the file, and thus won't extend the relation.
- The online checkpoint record is replayed, calling smgrdestroyall
which causes the cached size to be discarded
- The second FPW on block 200 is applied. This time, the detected size
is 100 blocks, so an extend is attempted. However, block 200 is
already present in the buffer cache due to the first FPW. This
triggers the 'unexpected data beyond EOF'.
To fix, update the cached size in SmgrRelation with the current size
rather than the requested new size, when the requested new size is
greater.
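Roughly, the truncate path now clamps the cached value like this (a
sketch; truncate_fork() is a hypothetical stand-in for the real smgr
code):

static void
truncate_fork(SMgrRelation reln, ForkNumber forknum, BlockNumber nblocks)
{
	BlockNumber curnblk = smgrnblocks(reln, forknum);

	if (nblocks >= curnblk)
	{
		/*
		 * Nothing to truncate; cache the size the file actually has, not
		 * the (larger) size the caller asked for.
		 */
		reln->smgr_cached_nblocks[forknum] = curnblk;
		return;
	}

	/* ... otherwise truncate and cache the new, smaller size ... */
	reln->smgr_cached_nblocks[forknum] = nblocks;
}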
Author: Anthonin Bonnefoy <anthonin.bonnefoy@datadoghq.com>
Discussion: https://www.postgresql.org/message-id/CAO6_Xqrv-snNJNhbj1KjQmWiWHX3nYGDgAc=vxaZP3qc4g1Siw@mail.gmail.com
Backpatch-through: 14 M src/backend/storage/smgr/md.c
M src/backend/storage/smgr/smgr.c
aio: io_uring: Fix danger of completion getting reused before being read
commit : 7f1b3a4cea563d791d8a83e5c482f1ed8306ee6a
author : Andres Freund <andres@anarazel.de>
date : Thu, 15 Jan 2026 10:17:51 -0500
committer: Andres Freund <andres@anarazel.de>
date : Thu, 15 Jan 2026 10:17:51 -0500 We called io_uring_cqe_seen(..., cqe) before reading cqe->res. That allows the
completion to be reused, which in turn could lead to cqe->res being
overwritten. The window for that is very narrow and the likelihood of it
happening is very low, as we should never actually utilize all CQEs, but the
consequences would be bad.
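The required ordering, in simplified liburing terms (handle_completion()
and ring are placeholders):

struct io_uring_cqe *cqe;

if (io_uring_wait_cqe(ring, &cqe) == 0)
{
	int		res = cqe->res;			/* copy the result out first ... */

	io_uring_cqe_seen(ring, cqe);	/* ... only then allow the CQE slot to be reused */

	handle_completion(res);
}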
This bug was reported to me privately.
Backpatch-through: 18
Discussion: https://postgr.es/m/bwo3e5lj2dgi2wzq4yvbyzu7nmwueczvvzioqsqo6azu6lm5oy@pbx75g2ach3p M src/backend/storage/aio/method_io_uring.c
Add check for invalid offset at multixid truncation
commit : 09532a78b8c6b49b5176bc1cd4671c571520a8c8
author : Heikki Linnakangas <heikki.linnakangas@iki.fi>
date : Thu, 15 Jan 2026 16:48:45 +0200
committer: Heikki Linnakangas <heikki.linnakangas@iki.fi>
date : Thu, 15 Jan 2026 16:48:45 +0200 If a multixid with zero offset is left behind after a crash, and that
multixid later becomes the oldest multixid, truncation might try to
look up its offset and read the zero value. In the worst case, we
might incorrectly use the zero offset to truncate valid SLRU segments
that are still needed. I'm not sure if that can happen in practice, or
if there are some other lower-level safeguards or incidental reasons
that prevent the caller from passing an unwritten multixid as the
oldest multi. But better safe than sorry, so let's add an explicit
check for it.
In stable branches, we should perhaps do the same check for
'oldestOffset', i.e. the offset of the old oldest multixid (in master,
'oldestOffset' is gone). But if the old oldest multixid has an invalid
offset, the damage has been done already, and we would never advance
past that point. It's not clear what we should do in that case. The
check that this commit adds will prevent such a multixid with invalid
offset from becoming the oldest multixid in the first place, which
seems enough for now.
Reviewed-by: Andrey Borodin <x4mmm@yandex-team.ru>
Discussion: https://www.postgresql.org/message-id/000301b2-5b81-4938-bdac-90f6eb660843@iki.fi
Backpatch-through: 14 M src/backend/access/transam/multixact.c
pg_waldump: Relax LSN comparison check in TAP test
commit : 64893323925322d2236aaac111343768cf7dafa0
author : Michael Paquier <michael@paquier.xyz>
date : Wed, 14 Jan 2026 16:02:33 +0900
committer: Michael Paquier <michael@paquier.xyz>
date : Wed, 14 Jan 2026 16:02:33 +0900 The test 002_save_fullpage.pl, which checks --save-fullpage, fails with
wal_consistency_checking enabled because the block saved in the file can
have the same LSN as the LSN used in the file name. The test required
the block LSN to be strictly lower than the file LSN. This commit
relaxes the check a bit by allowing the LSNs to match.
While on it, the test name is reworded to include some information about
the file and block LSNs, which is useful for debugging.
Author: Andrey Borodin <x4mmm@yandex-team.ru>
Discussion: https://postgr.es/m/4226AED7-E38F-419B-AAED-9BC853FB55DE@yandex-team.ru
Backpatch-through: 16 M src/bin/pg_waldump/t/002_save_fullpage.pl
Fix query jumbling with GROUP BY clauses
commit : 9c3caad0264011490134816b2264de7f20b2eb99
author : Michael Paquier <michael@paquier.xyz>
date : Wed, 14 Jan 2026 08:44:52 +0900
committer: Michael Paquier <michael@paquier.xyz>
date : Wed, 14 Jan 2026 08:44:52 +0900 RangeTblEntry.groupexprs was marked with the node attribute
query_jumble_ignore, causing a list of GROUP BY expressions to be
ignored during the query jumbling. For example, these two queries could
be grouped together within the same query ID:
SELECT count(*) FROM t GROUP BY a;
SELECT count(*) FROM t GROUP BY b;
However, as such queries use different GROUP BY clauses, they should be
split across multiple entries.
This fixes an oversight in 247dea89f761, which introduced an RTE for
GROUP BY clauses. Query IDs are documented as being stable across minor
releases, but as this is a regression new to v18 and we are still early
in its support cycle, a backpatch is exceptionally done: the change
broke a behavior that has existed ever since query jumbling was
introduced, first in pg_stat_statements and later in core.
The tests of pg_stat_statements are expanded to cover this area, with
patterns involving GROUP BY and GROUPING clauses.
Author: Jian He <jian.universality@gmail.com>
Discussion: https://postgr.es/m/CACJufxEy2W+tCqC7XuJ94r3ivWsM=onKJp94kRFx3hoARjBeFQ@mail.gmail.com
Backpatch-through: 18 M contrib/pg_stat_statements/expected/select.out
M contrib/pg_stat_statements/sql/select.sql
M src/include/nodes/parsenodes.h
doc: Document DEFAULT option in file_fdw.
commit : 6920fc34531449a5e19f8e81f884f9f1de032b42
author : Fujii Masao <fujii@postgresql.org>
date : Tue, 13 Jan 2026 22:54:45 +0900
committer: Fujii Masao <fujii@postgresql.org>
date : Tue, 13 Jan 2026 22:54:45 +0900 Commit 9f8377f7a introduced the DEFAULT option for file_fdw but did not
update the documentation. This commit adds the missing description of
the DEFAULT option to the file_fdw documentation.
Backpatch to v16, where the DEFAULT option was introduced.
Author: Shinya Kato <shinya11.kato@gmail.com>
Reviewed-by: Fujii Masao <masao.fujii@gmail.com>
Discussion: https://postgr.es/m/CAOzEurT_PE7QEh5xAdb7Cja84Rur5qPv2Fzt3Tuqi=NU0WJsbg@mail.gmail.com
Backpatch-through: 16 M doc/src/sgml/file-fdw.sgml
pg_dump: Fix memory leak in dumpSequenceData().
commit : 56e1f501010fc79fd1d11df6fd04eec463d40319
author : Nathan Bossart <nathan@postgresql.org>
date : Sun, 11 Jan 2026 13:52:50 -0600
committer: Nathan Bossart <nathan@postgresql.org>
date : Sun, 11 Jan 2026 13:52:50 -0600 Oversight in commit 7a485bd641. Per Coverity.
Backpatch-through: 18 M src/bin/pg_dump/pg_dump.c
doc: Improve description of publish_via_partition_root
commit : d2c6ff7c525bc6d67109021ad28faeb795fa2878
author : Jacob Champion <jchampion@postgresql.org>
date : Fri, 9 Jan 2026 10:02:56 -0800
committer: Jacob Champion <jchampion@postgresql.org>
date : Fri, 9 Jan 2026 10:02:56 -0800 Reword publish_via_partition_root's opening paragraph. Describe its
behavior more clearly, and directly state that its default is false.
Per complaint by Peter Smith; final text of the patch made in
collaboration with Chao Li.
Author: Chao Li <li.evan.chao@gmail.com>
Author: Peter Smith <peter.b.smith@fujitsu.com>
Reported-by: Peter Smith <peter.b.smith@fujitsu.com>
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
Discussion: https://postgr.es/m/CAHut%2BPu7SpK%2BctOYoqYR3V4w5LKc9sCs6c_qotk9uTQJQ4zp6g%40mail.gmail.com
Backpatch-through: 14 M doc/src/sgml/ref/create_publication.sgml
pg_dump: Fix gathering of sequence information.
commit : 39d55557661f6d2fc0b5781b3f40390ca4febdad
author : Nathan Bossart <nathan@postgresql.org>
date : Fri, 9 Jan 2026 10:12:54 -0600
committer: Nathan Bossart <nathan@postgresql.org>
date : Fri, 9 Jan 2026 10:12:54 -0600 Since commit bd15b7db48, pg_dump uses pg_get_sequence_data() (née
pg_sequence_read_tuple()) to gather all sequence data in a single
query as opposed to a query per sequence. Two related bugs have
been identified:
* If the user lacks appropriate privileges on the sequence, pg_dump
generates a setval() command with garbage values instead of
failing as expected.
* pg_dump can fail due to a concurrently dropped sequence, even if
the dropped sequence's data isn't part of the dump.
This commit fixes the above issues by 1) teaching
pg_get_sequence_data() to return nulls instead of erroring for a
missing sequence and 2) teaching pg_dump to fail if it tries to
dump the data of a sequence for which pg_get_sequence_data()
returned nulls. Note that pg_dump may still fail due to a
concurrently dropped sequence, but it should now only do so when
the sequence data is part of the dump. This matches the behavior
before commit bd15b7db48.
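On the pg_dump side, the new check is conceptually just (a sketch with
illustrative names and message text):

/*
 * pg_get_sequence_data() now returns nulls for a sequence it cannot
 * read; refuse to emit a setval() built from garbage in that case.
 */
if (PQgetisnull(res, i, 0))
	pg_fatal("could not get data for sequence \"%s\"", seqname);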
Bug: #19365
Reported-by: Paveł Tyślacki <pavel.tyslacki@gmail.com>
Suggested-by: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/19365-6245240d8b926327%40postgresql.org
Discussion: https://postgr.es/m/2885944.1767029161%40sss.pgh.pa.us
Backpatch-through: 18 M src/backend/commands/sequence.c
M src/bin/pg_dump/pg_dump.c
Fix possible incorrect column reference in ERROR message
commit : c35e5dd9ae972393338b81a2427ab4e587f18534
author : David Rowley <drowley@postgresql.org>
date : Fri, 9 Jan 2026 11:02:59 +1300
committer: David Rowley <drowley@postgresql.org>
date : Fri, 9 Jan 2026 11:02:59 +1300 When creating a partition for a RANGE partitioned table, errors
reported while converting the specified range values into constant
values of the partition key's type could name a previous partition key
column when an earlier range bound was specified as MINVALUE or
MAXVALUE.
This was caused by the code not correctly incrementing the index that
tracks which partition key the foreach loop was working on after
processing MINVALUE/MAXVALUE ranges.
Fix by using foreach_current_index() to ensure the index variable is
always set to the List element being worked on.
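The idiom the fix relies on looks roughly like this (illustrative names,
not the exact parse_utilcmd.c code):

ListCell   *cell;
int			keyidx;

foreach(cell, bound_datums)		/* bound_datums: List of range bound values */
{
	/*
	 * foreach_current_index() always reflects the element currently being
	 * processed, so error reports name the right partition key column even
	 * after MINVALUE/MAXVALUE entries have been skipped over.
	 */
	keyidx = foreach_current_index(cell);

	/* ... convert the datum, reporting errors against key column keyidx ... */
}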
Author: myzhen <zhenmingyang@yeah.net>
Reviewed-by: zhibin wang <killerwzb@gmail.com>
Discussion: https://postgr.es/m/273cab52.978.19b96fc75e7.Coremail.zhenmingyang@yeah.net
Backpatch-through: 14 M src/backend/parser/parse_utilcmd.c
Fix nbtree skip array transformation comments.
commit : 6c99c715ddb338e169c2ffd2a4cf754fa510cccb
author : Peter Geoghegan <pg@bowt.ie>
date : Wed, 7 Jan 2026 12:53:05 -0500
committer: Peter Geoghegan <pg@bowt.ie>
date : Wed, 7 Jan 2026 12:53:05 -0500 Fix comments that incorrectly described transformations performed by the
"Avoid extra index searches through preprocessing" mechanism introduced
by commit b3f1a13f.
Author: Yugo Nagata <nagata@sraoss.co.jp>
Reviewed-By: Chao Li <li.evan.chao@gmail.com>
Reviewed-By: Peter Geoghegan <pg@bowt.ie>
Discussion: https://postgr.es/m/20251230190145.c3c88c5eb0f88b136adda92f@sraoss.co.jp
Backpatch-through: 18 M src/backend/access/nbtree/nbtpreprocesskeys.c
Fix typo
commit : f9125ca3db513f94907ef21ecc2665643bc7e5cf
author : Peter Eisentraut <peter@eisentraut.org>
date : Wed, 7 Jan 2026 15:47:02 +0100
committer: Peter Eisentraut <peter@eisentraut.org>
date : Wed, 7 Jan 2026 15:47:02 +0100 Reported-by: Xueyu Gao <gaoxueyu_hope@163.com>
Discussion: https://www.postgresql.org/message-id/42b5c99a.856d.19b73d858e2.Coremail.gaoxueyu_hope%40163.com M .cirrus.tasks.yml
createuser: Update docs to reflect defaults
commit : 77ade60a0a3a55ce5e591cd02616fe08a0acc806
author : John Naylor <john.naylor@postgresql.org>
date : Wed, 7 Jan 2026 16:02:19 +0700
committer: John Naylor <john.naylor@postgresql.org>
date : Wed, 7 Jan 2026 16:02:19 +0700 Commit c7eab0e97 changed the default password_encryption setting to
'scram-sha-256', so update the example for creating a user with an
assigned password.
In addition, commit 08951a7c9 added new options that in turn pass
default tokens NOBYPASSRLS and NOREPLICATION to the CREATE ROLE
command, so fix this omission as well for v16 and later.
Reported-by: Heikki Linnakangas <hlinnaka@iki.fi>
Discussion: https://postgr.es/m/cff1ea60-c67d-4320-9e33-094637c2c4fb%40iki.fi
Backpatch-through: 14 M doc/src/sgml/ref/createuser.sgml
Further doc updates to reflect MD5 deprecation
commit : cdcab17e7e5deb3fa843214ca1215023735b45cb
author : John Naylor <john.naylor@postgresql.org>
date : Wed, 7 Jan 2026 11:55:01 +0700
committer: John Naylor <john.naylor@postgresql.org>
date : Wed, 7 Jan 2026 11:55:01 +0700 Followup to 44f49511b.
Author: Fujii Masao <masao.fujii@gmail.com>
Reviewed-by: Heikki Linnakangas <hlinnaka@iki.fi>
Discussion: https://postgr.es/m/CAHGQGwH_UfN96vcvLGA%3DYro%2Bo6qCn0nEgEGoviwzEiLTHtt2Pw%40mail.gmail.com
Backpatch-through: 18 M doc/src/sgml/high-availability.sgml
M doc/src/sgml/logical-replication.sgml
Fix buggy interaction between array subscripts and subplan params
commit : bdc5dedfcaa57ddeef115252283019d79083d8a2
author : Andres Freund <andres@anarazel.de>
date : Tue, 6 Jan 2026 19:51:19 -0500
committer: Andres Freund <andres@anarazel.de>
date : Tue, 6 Jan 2026 19:51:19 -0500 In a7f107df2 I changed subplan param evaluation to happen within the
containing expression. As part of that, ExecInitSubPlanExpr() was changed to
evaluate parameters via a new EEOP_PARAM_SET expression step. These parameters
were temporarily stored into ExprState->resvalue/resnull, with some reasoning
why that would be fine. Unfortunately, that analysis was wrong -
ExecInitSubscriptionRef() evaluates the input array into "resv"/"resnull",
which will often point to ExprState->resvalue/resnull. This means that the
EEOP_PARAM_SET, if inside an array subscript, would overwrite the input array
to array subscript.
The fix is fairly simple - instead of evaluating into
ExprState->resvalue/resnull, store the temporary result of the subplan in the
subplan's return value.
Bug: #19370
Reported-by: Zepeng Zhang <redraiment@gmail.com>
Diagnosed-by: Tom Lane <tgl@sss.pgh.pa.us>
Diagnosed-by: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/19370-7fb7a5854b7618f1@postgresql.org
Backpatch-through: 18 M src/backend/executor/execExpr.c
M src/backend/executor/execExprInterp.c
M src/test/regress/expected/subselect.out
M src/test/regress/sql/subselect.sql
Update comments atop ReplicationSlotCreate.
commit : aa40615cc99211db70429ebbc4c8a1549dcc06ae
author : Amit Kapila <akapila@postgresql.org>
date : Tue, 6 Jan 2026 04:48:49 +0000
committer: Amit Kapila <akapila@postgresql.org>
date : Tue, 6 Jan 2026 04:48:49 +0000 Since commit 1462aad2e4, which introduced the ability to modify the
two_phase property of a slot, the comments above ReplicationSlotCreate
have become outdated. We have now added a cautionary note in the comments
above ReplicationSlotAlter explaining when it is safe to modify the
two_phase property of a slot.
Author: Daniil Davydov <3danissimo@gmail.com>
Author: Amit Kapila <amit.kapila16@gmail.com>
Reviewed-by: Chao Li <li.evan.chao@gmail.com>
Reviewed-by: Hayato Kuroda <kuroda.hayato@fujitsu.com>
Backpatch-through: 18
Discussion: https://postgr.es/m/CAJDiXggZXQZ7bD0QcTizDt6us9aX6ZKK4dWxzgb5x3+TsVHjqQ@mail.gmail.com M src/backend/replication/slot.c
Fix issue with EVENT TRIGGERS and ALTER PUBLICATION
commit : bea57a6b430c988d628582dd88e27281ccbae796
author : David Rowley <drowley@postgresql.org>
date : Tue, 6 Jan 2026 17:29:44 +1300
committer: David Rowley <drowley@postgresql.org>
date : Tue, 6 Jan 2026 17:29:44 +1300 When processing the "publish" options of an ALTER PUBLICATION command,
we call SplitIdentifierString() to split the options into a List of
strings. Since SplitIdentifierString() replaces the delimiter
characters with NULs, this would overwrite the memory
of the AlterPublicationStmt. Later in AlterPublicationOptions(), the
modified AlterPublicationStmt is copied for event triggers, which would
result in the event trigger only seeing the first "publish" option
rather than all options that were specified in the command.
To fix this, make a copy of the string before passing to
SplitIdentifierString().
Here we also adjust a similar case in the pgoutput plugin. There are no
known issues caused by SplitIdentifierString() there, so this is being
done out of paranoia.
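For illustration, the ALTER PUBLICATION side of the fix amounts to
roughly this (a sketch with illustrative names and message text):

/*
 * SplitIdentifierString() scribbles NULs over its input, so work on a
 * copy and leave the original statement intact for event triggers.
 */
char	   *rawstring = pstrdup(publish_value);
List	   *publish_list = NIL;

if (!SplitIdentifierString(rawstring, ',', &publish_list))
	ereport(ERROR,
			(errcode(ERRCODE_SYNTAX_ERROR),
			 errmsg("invalid list syntax in parameter \"%s\"", "publish")));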
Thanks to Henson Choi for putting together an example case showing the
ALTER PUBLICATION issue.
Author: sunil s <sunilfeb26@gmail.com>
Reviewed-by: Henson Choi <assam258@gmail.com>
Reviewed-by: zengman <zengman@halodbtech.com>
Backpatch-through: 14 M src/backend/commands/publicationcmds.c
M src/backend/replication/pgoutput/pgoutput.c
Add TAP test for GUC settings passed via CONNECTION in logical replication.
commit : 6ec5968151252b6c01f1150f668fa9331db1313e
author : Fujii Masao <fujii@postgresql.org>
date : Tue, 6 Jan 2026 11:57:12 +0900
committer: Fujii Masao <fujii@postgresql.org>
date : Tue, 6 Jan 2026 11:57:12 +0900 Commit d926462d819 restored the behavior of passing GUC settings from
the CONNECTION string to the publisher's walsender, allowing per-connection
configuration.
This commit adds a TAP test to verify that behavior works correctly.
Since commit d926462d819 was recently applied and backpatched to v15,
this follow-up commit is also backpatched accordingly.
Author: Fujii Masao <masao.fujii@gmail.com>
Reviewed-by: Chao Li <lic@highgo.com>
Reviewed-by: Kirill Reshke <reshkekirill@gmail.com>
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
Reviewed-by: Japin Li <japinli@hotmail.com>
Discussion: https://postgr.es/m/CAHGQGwGYV+-abbKwdrM2UHUe-JYOFWmsrs6=QicyJO-j+-Widw@mail.gmail.com
Backpatch-through: 15 M src/test/subscription/t/001_rep_changes.pl
Honor GUC settings specified in CREATE SUBSCRIPTION CONNECTION.
commit : 797fc5d1b38dd46f46d8482bfe762f5d5ca6a001
author : Fujii Masao <fujii@postgresql.org>
date : Tue, 6 Jan 2026 11:52:22 +0900
committer: Fujii Masao <fujii@postgresql.org>
date : Tue, 6 Jan 2026 11:52:22 +0900 Prior to v15, GUC settings supplied in the CONNECTION clause of
CREATE SUBSCRIPTION were correctly passed through to
the publisher's walsender. For example:
CREATE SUBSCRIPTION mysub
CONNECTION 'options=''-c wal_sender_timeout=1000'''
PUBLICATION ...
would cause wal_sender_timeout to take effect on the publisher's walsender.
However, commit f3d4019da5d changed the way logical replication
connections are established, forcing the publisher's relevant
GUC settings (datestyle, intervalstyle, extra_float_digits) to
override those provided in the CONNECTION string. As a result,
from v15 through v18, GUC settings in the CONNECTION string were
always ignored.
This regression prevented per-connection tuning of logical replication.
For example, using a shorter timeout for walsender connecting
to a nearby subscriber and a longer one for walsender connecting
to a remote subscriber.
This commit restores the intended behavior by ensuring that
GUC settings in the CONNECTION string are again passed through
and applied by the walsender, allowing per-connection configuration.
Backpatch to v15, where the regression was introduced.
Author: Fujii Masao <masao.fujii@gmail.com>
Reviewed-by: Chao Li <lic@highgo.com>
Reviewed-by: Kirill Reshke <reshkekirill@gmail.com>
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
Reviewed-by: Japin Li <japinli@hotmail.com>
Discussion: https://postgr.es/m/CAHGQGwGYV+-abbKwdrM2UHUe-JYOFWmsrs6=QicyJO-j+-Widw@mail.gmail.com
Backpatch-through: 15 M src/backend/replication/libpqwalreceiver/libpqwalreceiver.c
Fix misleading comment for GetOperatorFromCompareType
commit : 9ba5d40e9037ae24c2f4f9a5f3d08a619e1185d6
author : David Rowley <drowley@postgresql.org>
date : Tue, 6 Jan 2026 15:16:56 +1300
committer: David Rowley <drowley@postgresql.org>
date : Tue, 6 Jan 2026 15:16:56 +1300 The comment claimed *strat got set to InvalidStrategy when the function
lookup fails. This isn't true; an ERROR is raised when that happens.
Author: Paul A Jungwirth <pj@illuminatedcomputing.com>
Discussion: https://postgr.es/m/CA+renyXOrjLacP_nhqEQUf2W+ZCoY2q5kpQCfG05vQVYzr8b9w@mail.gmail.com
Backpatch-through: 18 M src/backend/commands/indexcmds.c
doc: Fix outdated doc in pg_rewind.
commit : b86c2b712b503cdcdf7f801ffd0ef97066204504
author : Fujii Masao <fujii@postgresql.org>
date : Tue, 6 Jan 2026 11:00:54 +0900
committer: Fujii Masao <fujii@postgresql.org>
date : Tue, 6 Jan 2026 11:00:54 +0900 Update pg_rewind documentation to reflect the change that data checksums are
now enabled by default during initdb.
Backpatch to v18, where data checksums were changed to be enabled by default.
Author: Zhijie Hou <houzj.fnst@fujitsu.com>
Reviewed-by: Chao Li <li.evan.chao@gmail.com>
Reviewed-by: Fujii Masao <masao.fujii@gmail.com>
Discussion: https://postgr.es/m/TY4PR01MB16907D62F3A0A377B30FDBEA794B2A@TY4PR01MB16907.jpnprd01.prod.outlook.com
Backpatch-through: 18 M doc/src/sgml/ref/pg_rewind.sgml
ci: Remove ulimit -p for netbsd/openbsd
commit : eda0e9383dc1c8ca1d50d358352af06778838447
author : Andres Freund <andres@anarazel.de>
date : Mon, 5 Jan 2026 13:09:03 -0500
committer: Andres Freund <andres@anarazel.de>
date : Mon, 5 Jan 2026 13:09:03 -0500 Previously, ulimit -p 256 was used to increase the process limit on
openbsd. However, sometimes the limit was still too low, causing
"could not fork new process for connection: Resource temporarily unavailable"
errors, most commonly on netbsd but also on openbsd.
The limit on openbsd couldn't trivially be raised further with ulimit,
because that would hit the hard limit.
Instead of increasing the limit in the CI script, the CI image generation now
increases the limits: https://github.com/anarazel/pg-vm-images/pull/129
Backpatch-through: 18 M .cirrus.tasks.yml
Tighten up assertion on a local variable
commit : b63302d900767cd0fdcb13c9cc53b538f4c04b78
author : Heikki Linnakangas <heikki.linnakangas@iki.fi>
date : Mon, 5 Jan 2026 11:33:35 +0200
committer: Heikki Linnakangas <heikki.linnakangas@iki.fi>
date : Mon, 5 Jan 2026 11:33:35 +0200 'lineindex' is 0-based, as mentioned in the comments.
Backpatch to v18 where the assertion was added.
Author: ChangAo Chen <cca5507@qq.com>
Reviewed-by: Chao Li <li.evan.chao@gmail.com>
Discussion: https://www.postgresql.org/message-id/tencent_A84F3C810365BB9BD08442955AE494141907@qq.com
Backpatch-through: 18 M src/backend/access/heap/heapam.c
Doc: add missing punctuation
commit : 789016be8ef015f65e9f76e5a58945b7b93f17b7
author : David Rowley <drowley@postgresql.org>
date : Sun, 4 Jan 2026 21:13:10 +1300
committer: David Rowley <drowley@postgresql.org>
date : Sun, 4 Jan 2026 21:13:10 +1300 Author: Daisuke Higuchi <higuchi.daisuke11@gmail.com>
Reviewed-by: Robert Treat <rob@xzilla.net>
Discussion: https://postgr.es/m/CAEVT6c-yWYstu76YZ7VOxmij2XA8vrOEvens08QLmKHTDjEPBw@mail.gmail.com
Backpatch-through: 14 M doc/src/sgml/config.sgml
M doc/src/sgml/history.sgml
Fix selectivity estimation integer overflow in contrib/intarray
commit : 07c1c6ec51a4474c22abbee731cfc8111fc09a43
author : David Rowley <drowley@postgresql.org>
date : Sun, 4 Jan 2026 20:33:14 +1300
committer: David Rowley <drowley@postgresql.org>
date : Sun, 4 Jan 2026 20:33:14 +1300 This fixes a poorly written integer comparison function which was
performing subtraction in an attempt to return a negative value when
a < b and a positive value when a > b, and 0 when the values were equal.
Unfortunately that didn't always work correctly because, in two's
complement, INT_MIN is one further from zero than INT_MAX. This could
result in an overflow and cause the comparison function to return an
incorrect result, causing the binary search to fail to find the value
being searched for.
This could cause poor selectivity estimates when the statistics stored
the value of INT_MAX (2147483647) and the value being searched for was
large enough to result in the binary search doing a comparison with that
INT_MAX value.
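For context, the hazard and the usual overflow-safe idiom (a generic
sketch, not the exact _int_selfuncs.c code):

/* Broken: the subtraction overflows for e.g. a = INT_MIN, b = 1. */
static int
cmp_int_broken(int a, int b)
{
	return a - b;
}

/* Safe: yields -1, 0 or 1 with no possibility of overflow. */
static int
cmp_int_safe(int a, int b)
{
	return (a > b) - (a < b);
}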
Author: Chao Li <li.evan.chao@gmail.com>
Reviewed-by: David Rowley <dgrowleyml@gmail.com>
Discussion: https://postgr.es/m/CAEoWx2ng1Ot5LoKbVU-Dh---dFTUZWJRH8wv2chBu29fnNDMaQ@mail.gmail.com
Backpatch-through: 14 M contrib/intarray/_int_selfuncs.c
Update copyright for 2026
commit : aa4b5ebc7640f60905cd4c71db45674e5941b611
author : Bruce Momjian <bruce@momjian.us>
date : Thu, 1 Jan 2026 13:24:10 -0500
committer: Bruce Momjian <bruce@momjian.us>
date : Thu, 1 Jan 2026 13:24:10 -0500 Backpatch-through: 14 M COPYRIGHT
M doc/src/sgml/legal.sgml
Fix macro name for io_uring_queue_init_mem check.
commit : 640772c6df2bdb3e2b905b03ac199ae46e29cda3
author : Masahiko Sawada <msawada@postgresql.org>
date : Wed, 31 Dec 2025 11:18:17 -0800
committer: Masahiko Sawada <msawada@postgresql.org>
date : Wed, 31 Dec 2025 11:18:17 -0800 Commit f54af9f2679d added a check for
io_uring_queue_init_mem(). However, it used the macro name
HAVE_LIBURING_QUEUE_INIT_MEM in both meson.build and the C code, while
the Autotools build script defined HAVE_IO_URING_QUEUE_INIT_MEM. As a
result, the optimization was never enabled in builds configured with
Autotools, as the C code checked for the wrong macro name.
This commit changes the macro name to HAVE_IO_URING_QUEUE_INIT_MEM in
meson.build and the C code. This matches the actual function
name (io_uring_queue_init_mem), following the standard HAVE_<FUNCTION>
convention.
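The consequence of the mismatch is easiest to see at the guard itself
(simplified; the argument values are illustrative):

#ifdef HAVE_IO_URING_QUEUE_INIT_MEM
	/* only compiled in when the macro defined by configure/meson matches */
	ret = io_uring_queue_init_mem(queue_depth, &ring, &params, shmem, shmem_size);
#else
	ret = io_uring_queue_init_params(queue_depth, &ring, &params);
#endif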
Backpatch to 18, where the macro was introduced.
Bug: #19368
Reported-by: Evan Si <evsi@amazon.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/19368-016d79a7f3a1c599@postgresql.org
Backpatch-through: 18 M meson.build
M src/backend/storage/aio/method_io_uring.c
jit: Fix jit_profiling_support when unavailable.
commit : 6377b17257c69c6c87b9aa1da3fac62bd91345eb
author : Thomas Munro <tmunro@postgresql.org>
date : Wed, 31 Dec 2025 13:24:17 +1300
committer: Thomas Munro <tmunro@postgresql.org>
date : Wed, 31 Dec 2025 13:24:17 +1300 jit_profiling_support=true captures profile data for Linux perf. On
other platforms, LLVMCreatePerfJITEventListener() returns NULL and the
attempt to register the listener would crash.
Fix by ignoring the setting in that case. The documentation already
says that it only has an effect if perf support is present, and we
already did the same for older LLVM versions that lacked support.
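The guard is essentially (a sketch; the registration step is abstracted
into a hypothetical helper):

LLVMJITEventListenerRef listener = LLVMCreatePerfJITEventListener();

/*
 * Returns NULL on platforms without perf JIT support; in that case just
 * ignore jit_profiling_support instead of registering a NULL listener.
 */
if (listener != NULL)
	register_profiling_listener(listener);	/* hypothetical helper */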
No field reports, unsurprisingly for an obscure developer-oriented
setting. Noticed in passing while working on commit 1a28b4b4.
Backpatch-through: 14
Reviewed-by: Matheus Alcantara <matheusssilv97@gmail.com>
Reviewed-by: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/CA%2BhUKGJgB6gvrdDohgwLfCwzVQm%3DVMtb9m0vzQn%3DCwWn-kwG9w%40mail.gmail.com M src/backend/jit/llvm/llvmjit.c
Fix a race condition in updating procArray->replication_slot_xmin.
commit : fd7c86cfaf139c57a8e0dcf68fe37dd6086f758f
author : Masahiko Sawada <msawada@postgresql.org>
date : Tue, 30 Dec 2025 10:56:28 -0800
committer: Masahiko Sawada <msawada@postgresql.org>
date : Tue, 30 Dec 2025 10:56:28 -0800 Previously, ReplicationSlotsComputeRequiredXmin() computed the oldest
xmin across all slots without holding ProcArrayLock (when
already_locked is false), acquiring the lock just before updating the
replication slot xmin.
This could lead to a race condition: if a backend creates a new slot
and updates the global replication slot xmin, another backend
concurrently running ReplicationSlotsComputeRequiredXmin() could
overwrite that update with an invalid or stale value. This happens
because the concurrent backend might have computed the aggregate xmin
before the new slot was accounted for, but applied the update after
the new slot had already updated the global value.
In the reported failure, a walsender for an apply worker computed
InvalidTransactionId as the oldest xmin and overwrote a valid
replication slot xmin value computed by a walsender for a tablesync
worker. Consequently, the tablesync worker computed a transaction ID
via GetOldestSafeDecodingTransactionId() effectively without
considering the replication slot xmin. This led to the error "cannot
build an initial slot snapshot as oldest safe xid %u follows
snapshot's xmin %u", which was an assertion failure prior to commit
240e0dbacd3.
To fix this, we acquire ReplicationSlotControlLock in exclusive mode
during slot creation to perform the initial update of the slot
xmin. In ReplicationSlotsComputeRequiredXmin(), we hold
ReplicationSlotControlLock in shared mode until the global slot xmin
is updated in ProcArraySetReplicationSlotXmin(). This prevents
concurrent computations and updates of the global xmin by other
backends during the initial slot xmin update process, while still
permitting concurrent calls to ReplicationSlotsComputeRequiredXmin().
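Schematically, the compute-and-update path now holds the lock across
both steps (a sketch; the per-slot aggregation is elided):

TransactionId agg_xmin = InvalidTransactionId;
TransactionId agg_catalog_xmin = InvalidTransactionId;

LWLockAcquire(ReplicationSlotControlLock, LW_SHARED);

/* ... walk all slots and compute the aggregate xmin values ... */

ProcArraySetReplicationSlotXmin(agg_xmin, agg_catalog_xmin, false);

LWLockRelease(ReplicationSlotControlLock);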
Backpatch to all supported versions.
Author: Zhijie Hou <houzj.fnst@fujitsu.com>
Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com>
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
Reviewed-by: Pradeep Kumar <spradeepkumar29@gmail.com>
Reviewed-by: Hayato Kuroda (Fujitsu) <kuroda.hayato@fujitsu.com>
Reviewed-by: Robert Haas <robertmhaas@gmail.com>
Reviewed-by: Andres Freund <andres@anarazel.de>
Reviewed-by: Chao Li <li.evan.chao@gmail.com>
Discussion: https://postgr.es/m/CAA4eK1L8wYcyTPxNzPGkhuO52WBGoOZbT0A73Le=ZUWYAYmdfw@mail.gmail.com
Backpatch-through: 14 M src/backend/replication/logical/logical.c
M src/backend/replication/logical/slotsync.c
M src/backend/replication/slot.c
jit: Remove -Wno-deprecated-declarations in 18+.
commit : c5e1281fd893ea8c86cd17cec402dc684b05167c
author : Thomas Munro <tmunro@postgresql.org>
date : Tue, 30 Dec 2025 14:11:37 +1300
committer: Thomas Munro <tmunro@postgresql.org>
date : Tue, 30 Dec 2025 14:11:37 +1300 REL_18_STABLE and master have commit ee485912, so they always use the
newer LLVM opaque pointer functions. Drop -Wno-deprecated-declarations
(commit a56e7b660) for code under jit/llvm in those branches, to catch
any new deprecation warnings that arrive in future version of LLVM.
Older branches continued to use functions marked deprecated in LLVM 14
and 15 (ie switched to the newer functions only for LLVM 16+), as a
precaution against unforeseen compatibility problems with bitcode
already shipped. In those branches, the comment about warning
suppression is updated to explain that situation better. In theory we
could suppress warnings only for LLVM 14 and 15 specifically, but that
isn't done here.
Backpatch-through: 14
Reported-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/1407185.1766682319%40sss.pgh.pa.us M src/backend/jit/llvm/Makefile
Fix Mkvcbuild.pm builds of test_cloexec.c.
commit : 4da5c33a3a046fc81a6b490568801c5739286936
author : Thomas Munro <tmunro@postgresql.org>
date : Mon, 29 Dec 2025 15:22:16 +1300
committer: Thomas Munro <tmunro@postgresql.org>
date : Mon, 29 Dec 2025 15:22:16 +1300 Mkvcbuild.pm scrapes Makefile contents, but couldn't understand the
change made by commit bec2a0aa. Revealed by BF animal hamerkop in
branch REL_16_STABLE.
1. It used += instead of =, which didn't match the pattern that
Mkvcbuild.pm looks for. Drop the +.
2. Mkvcbuild.pm doesn't link PROGRAM executables with libpgport. Apply
a local workaround to REL_16_STABLE only (later branches dropped
Mkvcbuild.pm).
Backpatch-through: 16
Reported-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/175163.1766357334%40sss.pgh.pa.us M src/test/modules/test_cloexec/Makefile
Ignore PlaceHolderVars when looking up statistics
commit : 7e9f852a79fe19d4d0f18aabc32a620797fb676e
author : Richard Guo <rguo@postgresql.org>
date : Mon, 29 Dec 2025 11:40:45 +0900
committer: Richard Guo <rguo@postgresql.org>
date : Mon, 29 Dec 2025 11:40:45 +0900 When looking up statistical data about an expression, we failed to
look through PlaceHolderVar nodes, treating them as opaque. This
could prevent us from matching an expression to base columns, index
expressions, or extended statistics, as examine_variable() relies on
strict structural matching.
As a result, queries involving PlaceHolderVar nodes often fell back to
default selectivity estimates, potentially leading to poor plan
choices.
This patch updates examine_variable() to strip PlaceHolderVars before
analysis. This is safe during estimation because PlaceHolderVars are
transparent for the purpose of statistics lookup: they do not alter
the value distribution of the underlying expression.
To minimize performance overhead on this hot path, a lightweight
walker first checks for the presence of PlaceHolderVars. The more
expensive mutator is invoked only when necessary.
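The cheap pre-check is an ordinary expression walker, along these lines
(a sketch, not the committed function name):

static bool
contains_placeholder_vars_walker(Node *node, void *context)
{
	if (node == NULL)
		return false;
	if (IsA(node, PlaceHolderVar))
		return true;
	return expression_tree_walker(node, contains_placeholder_vars_walker,
								  context);
}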
There is one ensuing plan change in the regression tests, which is
expected and demonstrates the fix: the rowcount estimate becomes much
more accurate with this patch.
Back-patch to v18. Although this issue existed before that, changes in
this version made it common enough to notice. Given the lack of field
reports for older versions, I am not back-patching further.
Reported-by: Haowu Ge <gehaowu@bitmoe.com>
Author: Richard Guo <guofenglinux@gmail.com>
Discussion: https://postgr.es/m/62af586c-c270-44f3-9c5e-02c81d537e3d.gehaowu@bitmoe.com
Backpatch-through: 18 M src/backend/utils/adt/selfuncs.c
M src/test/regress/expected/join.out
M src/test/regress/sql/join.sql
Strip PlaceHolderVars from index operands
commit : b4cf7442058f0b0f525b5df36f4bbfc73a97ed0c
author : Richard Guo <rguo@postgresql.org>
date : Mon, 29 Dec 2025 11:38:49 +0900
committer: Richard Guo <rguo@postgresql.org>
date : Mon, 29 Dec 2025 11:38:49 +0900 When pulling up a subquery, we may need to wrap its targetlist items
in PlaceHolderVars to enforce separate identity or as a result of
outer joins. However, this causes any upper-level WHERE clauses
referencing these outputs to contain PlaceHolderVars, which prevents
indxpath.c from recognizing that they could be matched to index
columns or index expressions, potentially affecting the planner's
ability to use indexes.
To fix, explicitly strip PlaceHolderVars from index operands. A
PlaceHolderVar appearing in a relation-scan-level expression is
effectively a no-op. Nevertheless, to play it safe, we strip only
PlaceHolderVars that are not marked nullable.
The stripping is performed recursively to handle cases where
PlaceHolderVars are nested or interleaved with other node types. To
minimize performance impact, we first use a lightweight walker to
check for the presence of strippable PlaceHolderVars. The expensive
mutator is invoked only if a candidate is found, avoiding unnecessary
memory allocation and tree copying in the common case where no
PlaceHolderVars are present.
Back-patch to v18. Although this issue existed before that, changes in
this version made it common enough to notice. Given the lack of field
reports for older versions, I am not back-patching further.
Reported-by: Haowu Ge <gehaowu@bitmoe.com>
Author: Richard Guo <guofenglinux@gmail.com>
Discussion: https://postgr.es/m/62af586c-c270-44f3-9c5e-02c81d537e3d.gehaowu@bitmoe.com
Backpatch-through: 18 M src/backend/optimizer/path/indxpath.c
M src/backend/optimizer/plan/createplan.c
M src/include/optimizer/paths.h
M src/test/regress/expected/groupingsets.out
M src/test/regress/sql/groupingsets.sql
Add oauth_validator_libraries to variable_is_guc_list_quote
commit : 61c78e1f494cc737807c9fa7f1de0d8c39b53428
author : Daniel Gustafsson <dgustafsson@postgresql.org>
date : Sat, 27 Dec 2025 23:05:48 +0100
committer: Daniel Gustafsson <dgustafsson@postgresql.org>
date : Sat, 27 Dec 2025 23:05:48 +0100 The variable_is_guc_list_quote function needs to know about all
GUC_LIST_QUOTE variables; this adds oauth_validator_libraries, which
was missing. Backpatch to v18, where OAuth was introduced.
Author: ChangAo Chen <cca5507@qq.com>
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Discussion: https://postgr.es/m/tencent_03D4D2A5C0C8DCE0CD1DB4D945858E15420A@qq.com
Backpatch-through: 18 M src/bin/pg_dump/dumputils.c
Fix pg_stat_get_backend_activity() to use multi-byte truncated result
commit : 06907e864733ed02056e510c3f405414335bcae3
author : Michael Paquier <michael@paquier.xyz>
date : Sat, 27 Dec 2025 17:23:51 +0900
committer: Michael Paquier <michael@paquier.xyz>
date : Sat, 27 Dec 2025 17:23:51 +0900 pg_stat_get_backend_activity() calls pgstat_clip_activity() to ensure
that the reported query string is correctly truncated when it finishes
with an incomplete multi-byte sequence. However, the result returned by
the function was not what pgstat_clip_activity() generated, but the
non-truncated, original contents of PgBackendStatus.st_activity_raw.
Oversight in 54b6cd589ac2, so backpatch all the way down.
Author: Chao Li <li.evan.chao@gmail.com>
Discussion: https://postgr.es/m/CAEoWx2mDzwc48q2EK9tSXS6iJMJ35wvxNQnHX+rXjy5VgLvJQw@mail.gmail.com
Backpatch-through: 14 M src/backend/utils/adt/pgstatfuncs.c
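The following standalone sketch (assuming UTF-8 and using invented
function names; it is not the backend's pgstat_clip_activity()) shows
the kind of clipping involved, and why the clipped copy, rather than the
raw buffer, must be the value that gets returned:

    #include <stdio.h>
    #include <string.h>

    /* number of bytes in the UTF-8 sequence that starts with byte c */
    static int
    utf8_seq_len(unsigned char c)
    {
        if (c < 0x80) return 1;
        if ((c & 0xE0) == 0xC0) return 2;
        if ((c & 0xF0) == 0xE0) return 3;
        if ((c & 0xF8) == 0xF0) return 4;
        return 1;                   /* invalid lead byte: count it as one */
    }

    /* copy at most limit bytes of src, ending on a character boundary */
    static void
    clip_activity(char *dst, const char *src, size_t limit)
    {
        size_t used = 0;

        while (src[used] != '\0')
        {
            int len = utf8_seq_len((unsigned char) src[used]);

            if (used + len > limit)
                break;              /* would split a multi-byte character */
            used += len;
        }
        memcpy(dst, src, used);
        dst[used] = '\0';
    }

    int
    main(void)
    {
        const char *raw = "SELECT 'déjà vu'";   /* has multi-byte characters */
        char        clipped[11];

        clip_activity(clipped, raw, sizeof(clipped) - 1);
        printf("%s\n", clipped);    /* the clipped copy is what gets shown */
        return 0;
    }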
doc: warn about the use of "ctid" queries beyond the examples
commit : c6d2cd06cb43050fbe5cc1a928bdb9eb0299ca27
author : Bruce Momjian <bruce@momjian.us>
date : Fri, 26 Dec 2025 17:34:17 -0500
committer: Bruce Momjian <bruce@momjian.us>
date : Fri, 26 Dec 2025 17:34:17 -0500 Also be more assertive that "ctid" should not be used for long-term
storage.
Reported-by: Bernice Southey
Discussion: https://postgr.es/m/CAEDh4nyn5swFYuSfcnGAbpQrKOc47Hh_ZyKVSPYJcu2P=51Luw@mail.gmail.com
Backpatch-through: 17 M doc/src/sgml/ddl.sgml
M doc/src/sgml/ref/delete.sgml
M doc/src/sgml/ref/update.sgml
doc: Remove duplicate word in ECPG description
commit : 2359c5945c2c8092ef41734221af59dbffce242e
author : Michael Paquier <michael@paquier.xyz>
date : Fri, 26 Dec 2025 15:26:02 +0900
committer: Michael Paquier <michael@paquier.xyz>
date : Fri, 26 Dec 2025 15:26:02 +0900 Author: Laurenz Albe <laurenz.albe@cybertec.at>
Reviewed-by: vignesh C <vignesh21@gmail.com>
Discussion: https://postgr.es/m/d6d6a800f8b503cd78d5f4fa721198e40eec1677.camel@cybertec.at
Backpatch-through: 14 M doc/src/sgml/ecpg.sgml
Fix planner error with SRFs and grouping sets
commit : 382ce9cb717f3376174f852e8f4b8c28b1c87020
author : Richard Guo <rguo@postgresql.org>
date : Thu, 25 Dec 2025 12:12:52 +0900
committer: Richard Guo <rguo@postgresql.org>
date : Thu, 25 Dec 2025 12:12:52 +0900 If there are any SRFs in a PathTarget, we must separate it into
SRF-computing and SRF-free targets. This is because the executor can
only handle SRFs that appear at the top level of the targetlist of a
ProjectSet plan node.
If we find a subexpression that matches an expression already computed
in the previous plan level, we should treat it like a Var and should
not split it again. setrefs.c will later replace the expression with
a Var referencing the subplan output.
However, when processing the grouping target for grouping sets, the
planner can fail to recognize that an expression is already computed
in the scan/join phase. The root cause is a mismatch in the
nullingrels bits. Expressions in the grouping target carry the
grouping nulling bit in their nullingrels to indicate that they can be
nulled by the grouping step. However, the corresponding expressions
in the scan/join target do not have these bits.
As a result, the exact match check in list_member() fails, leading the
planner to incorrectly believe that the expression needs to be
re-evaluated from its arguments, which are often not available in the
subplan. This can lead to planner errors such as "variable not found
in subplan target list".
To fix, ignore the grouping nulling bit when checking whether an
expression from the grouping target is available in the pre-grouping
input target. This aligns with the matching logic in setrefs.c.
Backpatch to v18, where this issue was introduced.
Bug: #19353
Reported-by: Marian MULLER REBEYROL <marian.muller@serli.com>
Author: Richard Guo <guofenglinux@gmail.com>
Reviewed-by: Tender Wang <tndrwang@gmail.com>
Discussion: https://postgr.es/m/19353-aaa179bba986a19b@postgresql.org
Backpatch-through: 18 M src/backend/optimizer/plan/planner.c
M src/backend/optimizer/util/tlist.c
M src/include/optimizer/tlist.h
M src/test/regress/expected/groupingsets.out
M src/test/regress/sql/groupingsets.sql
psql: Fix tab completion for VACUUM option values.
commit : 4e13769004c8b2a337d24060e15f23684d3df98b
author : Masahiko Sawada <msawada@postgresql.org>
date : Wed, 24 Dec 2025 13:55:32 -0800
committer: Masahiko Sawada <msawada@postgresql.org>
date : Wed, 24 Dec 2025 13:55:32 -0800 Commit 8a3e4011 introduced tab completion for the ONLY option of
VACUUM and ANALYZE, along with some code simplification using
MatchAnyN. However, it caused a regression in tab completion for
VACUUM option values. For example, neither ON nor OFF was suggested
after "VACUUM (VERBOSE". In addition, the ONLY keyword was not
suggested immediately after a completed option list.
Backpatch to v18.
Author: Yugo Nagata <nagata@sraoss.co.jp>
Discussion: https://postgr.es/m/20251223021509.19bba68ecbbc70c9f983c2b4@sraoss.co.jp
Backpatch-through: 18 M src/bin/psql/tab-complete.in.c
doc: Use proper tags in pg_overexplain documentation.
commit : 02a0f385fa980c1eb14947936161123b4848fe9b
author : Fujii Masao <fujii@postgresql.org>
date : Thu, 25 Dec 2025 00:27:19 +0900
committer: Fujii Masao <fujii@postgresql.org>
date : Thu, 25 Dec 2025 00:27:19 +0900 The pg_overexplain documentation previously used the <literal> tag for
some file names, struct names, and commands. Update the markup to
use the more appropriate tags: <filename>, <structname>, and <command>.
Backpatch to v18, where pg_overexplain was introduced.
Author: Fujii Masao <masao.fujii@gmail.com>
Reviewed-by: Shixin Wang <wang-shi-xin@outlook.com>
Reviewed-by: Chao Li <li.evan.chao@gmail.com>
Discussion: https://postgr.es/m/CAHGQGwEyYUzz0LjBV_fMcdwU3wgmu0NCoT+JJiozPa8DG6eeog@mail.gmail.com
Backpatch-through: 18 M doc/src/sgml/pgoverexplain.sgml
Update comments to reflect changes in 8e0d32a4a1.
commit : 214c17bd623e1fd80b6ac02bd7c428eb4ab4307d
author : Amit Kapila <akapila@postgresql.org>
date : Wed, 24 Dec 2025 10:06:20 +0000
committer: Amit Kapila <akapila@postgresql.org>
date : Wed, 24 Dec 2025 10:06:20 +0000 Commit 8e0d32a4a1 fixed an issue by allowing the replication origin to be
created while marking the table sync state as SUBREL_STATE_DATASYNC.
Update the comment in check_old_cluster_subscription_state() to accurately
describe this corrected behavior.
Author: Amit Kapila <amit.kapila16@gmail.com>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Backpatch-through: 17, where the code was introduced
Discussion: https://postgr.es/m/CAA4eK1+KaSf5nV_tWy+SDGV6MnFnKMhdt41jJjSDWm6yCyOcTw@mail.gmail.com
Discussion: https://postgr.es/m/aUTekQTg4OYnw-Co@paquier.xyz M src/bin/pg_upgrade/check.c
Don't advance origin during apply failure.
commit : 2f7ffe124a9ba0ddc477fa5643da5e59cc1e4db0
author : Amit Kapila <akapila@postgresql.org>
date : Wed, 24 Dec 2025 04:21:43 +0000
committer: Amit Kapila <akapila@postgresql.org>
date : Wed, 24 Dec 2025 04:21:43 +0000 The logical replication parallel apply worker could incorrectly advance
the origin progress during an error or failed apply. This behavior risks
transaction loss because such transactions will not be resent by the
server.
Commit 3f28b2fcac addressed a similar issue for both the apply worker and
the table sync worker by registering a before_shmem_exit callback to reset
origin information. This prevents the worker from advancing the origin
during transaction abortion on shutdown. This patch applies the same fix
to the parallel apply worker, ensuring consistent behavior across all
worker types.
As with 3f28b2fcac, we are backpatching through version 16, since parallel
apply mode was introduced there and the issue only occurs when changes are
applied before the transaction end record (COMMIT or ABORT) is received.
Author: Hou Zhijie <houzj.fnst@fujitsu.com>
Reviewed-by: Chao Li <li.evan.chao@gmail.com>
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
Backpatch-through: 16
Discussion: https://postgr.es/m/TY4PR01MB169078771FB31B395AB496A6B94B4A@TY4PR01MB16907.jpnprd01.prod.outlook.com
Discussion: https://postgr.es/m/TYAPR01MB5692FAC23BE40C69DA8ED4AFF5B92@TYAPR01MB5692.jpnprd01.prod.outlook.com M src/backend/replication/logical/worker.c
M src/test/subscription/t/023_twophase_stream.pl
Fix bug in following update chain when locking a heap tuple
commit : 3e3a80f62c09709de899cc7649f1c86c63b78981
author : Heikki Linnakangas <heikki.linnakangas@iki.fi>
date : Tue, 23 Dec 2025 13:37:16 +0200
committer: Heikki Linnakangas <heikki.linnakangas@iki.fi>
date : Tue, 23 Dec 2025 13:37:16 +0200 After waiting for a concurrent updater to finish, heap_lock_tuple()
followed the update chain to lock all tuple versions. However, when
stepping from the initial tuple to the next one, it failed to check
that the next tuple's XMIN matches the initial tuple's XMAX. That's an
important check whenever following an update chain, and the recursive
part that follows the chain did it, but the initial step missed it.
Without the check, if the updating transaction aborts and the updated
tuple is vacuumed away and replaced by an unrelated tuple, that
unrelated tuple might get incorrectly locked.
Author: Jasper Smit <jasper.smit@servicenow.com>
Discussion: https://www.postgresql.org/message-id/CAOG+RQ74x0q=kgBBQ=mezuvOeZBfSxM1qu_o0V28bwDz3dHxLw@mail.gmail.com
Backpatch-through: 14 M src/backend/access/heap/heapam.c
M src/test/modules/injection_points/Makefile
A src/test/modules/injection_points/expected/heap_lock_update.out
M src/test/modules/injection_points/meson.build
A src/test/modules/injection_points/specs/heap_lock_update.spec
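A toy model of the check described above (invented struct and function
names, not heapam.c code): when stepping along an update chain, the next
version's xmin must match the previous version's xmax, or the tuple at
that location is an unrelated one:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    typedef struct TupleVersion
    {
        uint32_t xmin;              /* transaction that created this version */
        uint32_t xmax;              /* transaction that updated/deleted it */
        struct TupleVersion *next;  /* newer version, as pointed to by t_ctid */
    } TupleVersion;

    /* is "next" really the successor of "cur" in the update chain? */
    static bool
    chain_step_is_valid(const TupleVersion *cur, const TupleVersion *next)
    {
        return next != NULL && next->xmin == cur->xmax;
    }

    int
    main(void)
    {
        /* the updater (xid 100) aborted and the old location was reused */
        TupleVersion unrelated = {.xmin = 101, .xmax = 0, .next = NULL};
        TupleVersion original = {.xmin = 90, .xmax = 100, .next = &unrelated};

        printf("safe to lock next version: %s\n",
               chain_step_is_valid(&original, original.next) ? "yes" : "no");
        return 0;
    }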
Add missing .gitignore for src/test/modules/test_cloexec.
commit : 00a851f0c480f4d2c1f8c957f8b53e29ed6113fb
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Mon, 22 Dec 2025 14:06:54 -0500
committer: Michael Paquier <michael@paquier.xyz>
date : Mon, 22 Dec 2025 14:06:54 -0500 A src/test/modules/test_cloexec/.gitignore
Fix orphaned origin in shared memory after DROP SUBSCRIPTION
commit : b07c326192d09d996afd761a6224bf74273128a1
author : Michael Paquier <michael@paquier.xyz>
date : Tue, 23 Dec 2025 14:32:19 +0900
committer: Michael Paquier <michael@paquier.xyz>
date : Tue, 23 Dec 2025 14:32:19 +0900 Since ce0fdbfe9722, a replication slot and an origin are created by each
tablesync worker, whose information is stored in both a catalog and
shared memory (once the origin is set up in the latter case). The
transaction where the origin is created is the same as the one that runs
the initial COPY, with the catalog state of the origin becoming visible
for other sessions only once the COPY transaction has committed. The
catalog state is coupled with a state in shared memory, initialized at
the same time as the origin created in the catalogs. Note that the
transaction doing the initial data sync can take a long time, depending
on the amount of data to transfer from the publication node to its
subscriber node.
Now, when a DROP SUBSCRIPTION is executed, all its workers are stopped
with the origins removed. The removal of each origin relies on a
catalog lookup. A worker still running the initial COPY would fail its
transaction, with the catalog state of the origin rolled back while the
shared memory state remains around. The session running the DROP
SUBSCRIPTION should be in charge of cleaning up the catalog and the
shared memory state, but as there is no data in the catalogs the shared
memory state is not removed. This issue would leave orphaned origin
data in shared memory, leading to a confusing state as it would still
show up in pg_replication_origin_status. Note that this shared memory
data is sticky, being flushed to disk by replorigin_checkpoint at each
checkpoint. This prevents other origins from reusing a slot position
in the shared memory data.
To address this problem, the commit moves the creation of the origin to
the end of the transaction that precedes the one executing the initial
COPY, making the origin immediately visible in the catalogs for other
sessions, giving DROP SUBSCRIPTION a way to know about it. A different
solution would have been to clean up the shared memory state using an
abort callback within the tablesync worker. The solution of this commit
is more consistent with the apply worker that creates an origin in a
short transaction.
A test is added in the subscription test 004_sync.pl that is able to
reproduce the problem; the test fails when this commit is reverted.
Reported-by: Tenglong Gu <brucegu@amazon.com>
Reported-by: Daisuke Higuchi <higudai@amazon.com>
Analyzed-by: Michael Paquier <michael@paquier.xyz>
Author: Hou Zhijie <houzj.fnst@fujitsu.com>
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com>
Discussion: https://postgr.es/m/aUTekQTg4OYnw-Co@paquier.xyz
Backpatch-through: 14 M src/backend/commands/subscriptioncmds.c
M src/backend/replication/logical/tablesync.c
M src/test/subscription/t/004_sync.pl
doc: Fix incorrect reference in pg_overexplain documentation.
commit : 283e25a37187b67e9ad88fef59e036140a615389
author : Fujii Masao <fujii@postgresql.org>
date : Mon, 22 Dec 2025 17:56:28 +0900
committer: Fujii Masao <fujii@postgresql.org>
date : Mon, 22 Dec 2025 17:56:28 +0900 Correct the referenced location of the RangeTblEntry definition
in the pg_overexplain documentation.
Backpatched to v18, where pg_overexplain was introduced.
Author: Julien Tachoires <julien@tachoires.me>
Reviewed-by: Fujii Masao <masao.fujii@gmail.com>
Discussion: https://postgr.es/m/20251218092319.tht64ffmcvzqdz7u@poseidon.home.virt
Backpatch-through: 18 M doc/src/sgml/pgoverexplain.sgml
Clean up test_cloexec.c and Makefile.
commit : a7d06e74d51209702fe0712214aac07f863ec36a
author : Thomas Munro <tmunro@postgresql.org>
date : Sun, 21 Dec 2025 15:40:07 +1300
committer: Thomas Munro <tmunro@postgresql.org>
date : Sun, 21 Dec 2025 15:40:07 +1300 An unused variable caused a compiler warning on BF animal fairywren, an
snprintf() call was redundant, and some buffer sizes were inconsistent.
Per code review from Tom Lane.
The Makefile's test ifeq ($(PORTNAME), win32) never succeeded due to a
circularity, so only Meson builds were actually compiling the new test
code, partially explaining why CI didn't tell us about the warning
sooner (the other problem being that CompilerWarnings only makes
world-bin, a problem for another commit). Simplify.
Backpatch-through: 16, like commit c507ba55
Author: Bryan Green <dbryan.green@gmail.com>
Co-authored-by: Thomas Munro <tmunro@gmail.com>
Reported-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/1086088.1765593851%40sss.pgh.pa.us M src/test/modules/test_cloexec/Makefile
M src/test/modules/test_cloexec/test_cloexec.c
Update pg_hba.conf example to reflect MD5 deprecation
commit : cf8c8adfe38138930569eeeae7d600cf465ef334
author : John Naylor <john.naylor@postgresql.org>
date : Fri, 19 Dec 2025 15:48:18 +0700
committer: John Naylor <john.naylor@postgresql.org>
date : Fri, 19 Dec 2025 15:48:18 +0700 In the wake of commit db6a4a985, remove most use of 'md5' from the
example configuration file. The only remainder is an example exception
for a client that doesn't support SCRAM.
Author: Mikael Gustavsson <mikael.gustavsson@smhi.se>
Reviewed-by: Peter Eisentraut <peter@eisentraut.org>
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Reviewed-by: Andreas Karlsson <andreas@proxel.se>
Reviewed-by: Laurenz Albe <laurenz.albe@cybertec.at>
Discussion: https://postgr.es/m/176595607507.978865.11597773194269211255@wrigleys.postgresql.org
Discussion: https://postgr.es/m/4ed268473fdb4cf9b0eced6c8019d353@smhi.se
Backpatch-through: 18 M doc/src/sgml/client-auth.sgml
Add guard to prevent recursive memory context logging.
commit : b863d8d87fc1fc44962163a335c2ea1e1d345e13
author : Fujii Masao <fujii@postgresql.org>
date : Fri, 19 Dec 2025 12:05:37 +0900
committer: Fujii Masao <fujii@postgresql.org>
date : Fri, 19 Dec 2025 12:05:37 +0900 Previously, if memory context logging was triggered repeatedly and
rapidly while a previous request was still being processed, it could
result in recursive calls to ProcessLogMemoryContextInterrupt().
This could lead to infinite recursion and potentially crash the process.
This commit adds a guard to prevent such recursion.
If ProcessLogMemoryContextInterrupt() is already in progress and
logging memory contexts, subsequent calls will exit immediately,
avoiding unintended recursive calls.
While this scenario is unlikely in practice, it's not impossible.
This change adds a safety check to prevent such failures.
Back-patch to v14, where memory context logging was introduced.
Reported-by: Robert Haas <robertmhaas@gmail.com>
Author: Fujii Masao <masao.fujii@gmail.com>
Reviewed-by: Atsushi Torikoshi <torikoshia@oss.nttdata.com>
Reviewed-by: Robert Haas <robertmhaas@gmail.com>
Reviewed-by: Artem Gavrilov <artem.gavrilov@percona.com>
Discussion: https://postgr.es/m/CA+TgmoZMrv32tbNRrFTvF9iWLnTGqbhYSLVcrHGuwZvCtph0NA@mail.gmail.com
Backpatch-through: 14 M src/backend/utils/mmgr/mcxt.c
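A minimal standalone sketch of the guard described above (invented
names, not the mcxt.c code; the real code also has to reset the flag on
error paths):

    #include <stdbool.h>
    #include <stdio.h>

    static bool logging_in_progress = false;

    static void
    process_log_request(void)
    {
        if (logging_in_progress)
            return;                 /* a request is already being handled */

        logging_in_progress = true;
        printf("logging memory contexts...\n");

        /* a request arriving here is ignored by the check above */
        process_log_request();

        logging_in_progress = false;    /* re-arm for the next request */
    }

    int
    main(void)
    {
        process_log_request();
        return 0;
    }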
Sort DO_SUBSCRIPTION_REL dump objects independent of OIDs.
commit : 573e679a26649e14742b4a7d19331bf8ced908ae
author : Noah Misch <noah@leadboat.com>
date : Thu, 18 Dec 2025 10:23:47 -0800
committer: Noah Misch <noah@leadboat.com>
date : Thu, 18 Dec 2025 10:23:47 -0800 Commit 0decd5e89db9f5edb9b27351082f0d74aae7a9b6 missed
DO_SUBSCRIPTION_REL, leading to assertion failures. In the unlikely use
case of diffing "pg_dump --binary-upgrade" output, spurious diffs were
possible. As part of fixing that, align the DumpableObject naming and
sort order with DO_PUBLICATION_REL. The overall effect of this commit
is to change sort order from (subname, srsubid) to (rel, subname).
Since DO_SUBSCRIPTION_REL is only for --binary-upgrade, accept that
larger-than-usual dump order change. Back-patch to v17, where commit
9a17be1e244a45a77de25ed2ada246fd34e4557d introduced DO_SUBSCRIPTION_REL.
Reported-by: vignesh C <vignesh21@gmail.com>
Author: vignesh C <vignesh21@gmail.com>
Discussion: https://postgr.es/m/CALDaNm2x3rd7C0_HjUpJFbxpAqXgm=QtoKfkEWDVA8h+JFpa_w@mail.gmail.com
Backpatch-through: 17 M src/bin/pg_dump/pg_dump.c
M src/bin/pg_dump/pg_dump_sort.c
M src/bin/pg_upgrade/t/004_subscription.pl
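A standalone sketch of the idea (invented struct and values, not the
pg_dump_sort.c code): sort on a stable key such as (rel, subname) so the
output order does not depend on OID assignment:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    typedef struct SubRel
    {
        unsigned int rel;           /* referenced table */
        const char  *subname;       /* owning subscription */
    } SubRel;

    /* compare on stable keys only, never on the object's own OID */
    static int
    subrel_cmp(const void *a, const void *b)
    {
        const SubRel *x = a;
        const SubRel *y = b;

        if (x->rel != y->rel)
            return (x->rel < y->rel) ? -1 : 1;
        return strcmp(x->subname, y->subname);
    }

    int
    main(void)
    {
        SubRel items[] = {
            {20010, "sub_b"},
            {20001, "sub_b"},
            {20001, "sub_a"},
        };

        qsort(items, 3, sizeof(SubRel), subrel_cmp);
        for (int i = 0; i < 3; i++)
            printf("%u %s\n", items[i].rel, items[i].subname);
        return 0;
    }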
Do not emit WAL for unlogged BRIN indexes
commit : d77a5f98176ffaf3a537f4683ec87044c21bb98c
author : Heikki Linnakangas <heikki.linnakangas@iki.fi>
date : Thu, 18 Dec 2025 15:08:48 +0200
committer: Heikki Linnakangas <heikki.linnakangas@iki.fi>
date : Thu, 18 Dec 2025 15:08:48 +0200 Operations on unlogged relations should not be WAL-logged. The
brin_initialize_empty_new_buffer() function didn't get the memo.
The function is only called when a concurrent update to a brin page
uses up space that we're just about to insert to, which makes it
pretty hard to hit. If you do manage to hit it, a full-page WAL record
is erroneously emitted for the unlogged index. If you then crash,
crash recovery will fail on that record with an error like this:
FATAL: could not create file "base/5/32819": File exists
Author: Kirill Reshke <reshkekirill@gmail.com>
Discussion: https://www.postgresql.org/message-id/CALdSSPhpZXVFnWjwEBNcySx_vXtXHwB2g99gE6rK0uRJm-3GgQ@mail.gmail.com
Backpatch-through: 14 M src/backend/access/brin/brin_pageops.c
oauth_validator: Avoid races in log_check()
commit : c3df85756ceb0246958ef2b72c04aba51e52de13
author : Jacob Champion <jchampion@postgresql.org>
date : Wed, 17 Dec 2025 11:55:04 -0800
committer: Jacob Champion <jchampion@postgresql.org>
date : Wed, 17 Dec 2025 11:55:04 -0800 Commit e0f373ee4 fixed up races in Cluster::connect_fails when using
log_like. t/002_client.pl didn't get the memo, though, because it
doesn't use Test::Cluster to perform its custom hook tests. (This is
probably not an issue at the moment, since the log check is only done
after authentication success and not failure, but there's no reason to
wait for someone to hit it.)
Introduce the fix, based on debug2 logging, to its use of log_check() as
well, and move the logic into the test() helper so that any additions
don't need to continually duplicate it.
Reviewed-by: Chao Li <li.evan.chao@gmail.com>
Discussion: https://postgr.es/m/CAOYmi%2BmrGg%2Bn_X2MOLgeWcj3v_M00gR8uz_D7mM8z%3DdX1JYVbg%40mail.gmail.com
Backpatch-through: 18 M src/test/modules/oauth_validator/t/002_client.pl
libpq-oauth: use correct c_args in meson.build
commit : 023a3c786b81bf9e0ca023f8e279f03b197b189f
author : Jacob Champion <jchampion@postgresql.org>
date : Wed, 17 Dec 2025 11:54:56 -0800
committer: Jacob Champion <jchampion@postgresql.org>
date : Wed, 17 Dec 2025 11:54:56 -0800 Copy-paste bug from b0635bfda: libpq-oauth.so was being built with
libpq_so_c_args, rather than libpq_oauth_so_c_args. (At the moment, the
two lists are identical, but that won't be true forever.)
Reviewed-by: Chao Li <li.evan.chao@gmail.com>
Discussion: https://postgr.es/m/CAOYmi%2BmrGg%2Bn_X2MOLgeWcj3v_M00gR8uz_D7mM8z%3DdX1JYVbg%40mail.gmail.com
Backpatch-through: 18 M src/interfaces/libpq-oauth/meson.build
libpq-fe.h: Don't claim SOCKTYPE in the global namespace
commit : cc824482a3c0e6957c252730a62e7460d16a91f4
author : Jacob Champion <jchampion@postgresql.org>
date : Wed, 17 Dec 2025 11:54:47 -0800
committer: Jacob Champion <jchampion@postgresql.org>
date : Wed, 17 Dec 2025 11:54:47 -0800 The definition of PGoauthBearerRequest uses a temporary SOCKTYPE macro
to hide the difference between Windows and Berkeley socket handles,
since we don't surface pgsocket in our public API. This macro doesn't
need to escape the header, because implementers will choose the correct
socket type based on their platform, so I #undef'd it immediately after
use.
I didn't namespace that helper, though, so if anyone else needs a
SOCKTYPE macro, libpq-fe.h will now unhelpfully get rid of it. This
doesn't seem too far-fetched, given its proximity to existing POSIX
macro names.
Add a PQ_ prefix to avoid collisions, update and improve the surrounding
documentation, and backpatch.
Reviewed-by: Chao Li <li.evan.chao@gmail.com>
Discussion: https://postgr.es/m/CAOYmi%2BmrGg%2Bn_X2MOLgeWcj3v_M00gR8uz_D7mM8z%3DdX1JYVbg%40mail.gmail.com
Backpatch-through: 18 M doc/src/sgml/libpq.sgml
M src/interfaces/libpq/libpq-fe.h
Make postmaster 003_start_stop.pl test less flaky
commit : c8098aa411ee72b36879acba95819100b263f726
author : Heikki Linnakangas <heikki.linnakangas@iki.fi>
date : Wed, 17 Dec 2025 16:23:13 +0200
committer: Heikki Linnakangas <heikki.linnakangas@iki.fi>
date : Wed, 17 Dec 2025 16:23:13 +0200 The test is very sensitive to how backends start and exit, because it
tests dead-end backends which occur when all the connection slots are
in use. The test failed occasionally in the CI, when the backend that
was launched for the raw_connect_works() check lingered for a while,
and exited only later during the test. When it exited, it released a
connection slot at a point where the test expected all the slots to be
in use.
The 002_connection_limits.pl test had a similar issue: if the backend
launched for safe_psql() in the test initialization lingers around, it
uses up a connection slot during the test, messing up the test's
connection counting. I haven't seen that in the CI, but when I added a
"sleep(1);" to proc_exit(), the test failed.
To make the tests more robust, restart the server to ensure that the
lingering backends don't interfere with the later test steps.
In passing, fix a bogus test name.
Report and analysis by Jelte Fennema-Nio, Andres Freund, Thomas Munro.
Discussion: https://www.postgresql.org/message-id/CAGECzQSU2iGuocuP+fmu89hmBmR3tb-TNyYKjCcL2M_zTCkAFw@mail.gmail.com
Backpatch-through: 18 M src/test/postmaster/t/002_connection_limits.pl
M src/test/postmaster/t/003_start_stop.pl
ltree: fix case-insensitive matching.
commit : 806555e3000d0b0e0c536c1dc65548128d457d86
author : Jeff Davis <jdavis@postgresql.org>
date : Tue, 16 Dec 2025 11:13:17 -0800
committer: Jeff Davis <jdavis@postgresql.org>
date : Tue, 16 Dec 2025 11:13:17 -0800 Previously, ltree_prefix_eq_ci() used lowercasing with the default
collation, while ltree_crc32_sz() used tolower() directly. These were
equivalent only if the default collation provider was libc and the
encoding was single-byte.
Change both to use casefolding with the default collation.
Backpatch through 18, where the casefolding APIs were introduced. The
bug exists in earlier versions, but fixing it there would require some
adaptation.
A REINDEX is required for ltree indexes where the database default
collation is not libc.
Reviewed-by: Chao Li <li.evan.chao@gmail.com>
Reviewed-by: Peter Eisentraut <peter@eisentraut.org>
Backpatch-through: 18
Discussion: https://postgr.es/m/450ceb6260cad30d7afdf155d991a9caafee7c0d.camel@j-davis.com
Discussion: https://postgr.es/m/01fc00fd66f641b9693d4f9f1af0ccf44cbdfbdf.camel@j-davis.com M contrib/ltree/crc32.c
M contrib/ltree/lquery_op.c
M src/include/utils/pg_locale.h
Fix multibyte issue in ltree_strncasecmp().
commit : f79e239e0bc6e4d5fe91e1a0e573ecf0715d6c8c
author : Jeff Davis <jdavis@postgresql.org>
date : Tue, 16 Dec 2025 10:35:40 -0800
committer: Jeff Davis <jdavis@postgresql.org>
date : Tue, 16 Dec 2025 10:35:40 -0800 Previously, the API for ltree_strncasecmp() took two inputs but only
one length (that of the smaller input). It truncated the larger input
to that length, but that could break a multibyte sequence.
Change the API to be a check for prefix equality (possibly
case-insensitive) instead, which is all that's needed by the
callers. Also, provide the lengths of both inputs.
Reviewed-by: Chao Li <li.evan.chao@gmail.com>
Reviewed-by: Peter Eisentraut <peter@eisentraut.org>
Discussion: https://postgr.es/m/5f65b85740197ba6249ea507cddf609f84a6188b.camel%40j-davis.com
Backpatch-through: 14 M contrib/ltree/lquery_op.c
M contrib/ltree/ltree.h
M contrib/ltree/ltxtquery_op.c
Update .abi-compliance-history for CacheInvalidateHeapTupleInplace().
commit : 06b030e8973fa440d5b0d3ce0cd93a6c0a3f72ab
author : Noah Misch <noah@leadboat.com>
date : Tue, 16 Dec 2025 10:01:28 -0800
committer: Noah Misch <noah@leadboat.com>
date : Tue, 16 Dec 2025 10:01:28 -0800 Commit bae8ca82fd00603ebafa0658640d6e4dfe20af92 anticipated this:
[C] 'function void CacheInvalidateHeapTupleInplace(Relation, HeapTuple, HeapTuple)' has some sub-type changes:
parameter 3 of type 'typedef HeapTuple' was removed
Discussion: https://postgr.es/m/CA+renyU+LGLvCqS0=fHit-N1J-2=2_mPK97AQxvcfKm+F-DxJA@mail.gmail.com
Backpatch-through: 18 only M .abi-compliance-history
Switch memory contexts in ReinitializeParallelDSM.
commit : 57df5ab8049782d7b24594ac677783512ffce2bc
author : Robert Haas <rhaas@postgresql.org>
date : Tue, 16 Dec 2025 10:40:53 -0500
committer: Robert Haas <rhaas@postgresql.org>
date : Tue, 16 Dec 2025 10:40:53 -0500 We already do this in CreateParallelContext, InitializeParallelDSM, and
LaunchParallelWorkers. I suspect the reason why the matching logic was
omitted from ReinitializeParallelDSM is that I failed to realize that
any memory allocation was happening here -- but shm_mq_attach does
allocate, which could result in a shm_mq_handle being allocated in a
shorter-lived context than the ParallelContext which points to it.
That could result in a crash if the shorter-lived context is freed
before the parallel context is destroyed. As far as I am currently
aware, there is no way to reach a crash using only code that is
present in core PostgreSQL, but extensions could potentially trip
over this. Fixing this in the back-branches appears low-risk, so
back-patch to all supported versions.
Author: Jakub Wartak <jakub.wartak@enterprisedb.com>
Co-authored-by: Jeevan Chalke <jeevan.chalke@enterprisedb.com>
Backpatch-through: 14
Discussion: http://postgr.es/m/CAKZiRmwfVripa3FGo06=5D1EddpsLu9JY2iJOTgbsxUQ339ogQ@mail.gmail.com M src/backend/access/transam/parallel.c
doc: Update header file mention for CompareType
commit : b30089fde1e9e945f96db9366f5ff5eb4abd3774
author : Daniel Gustafsson <dgustafsson@postgresql.org>
date : Tue, 16 Dec 2025 09:46:53 +0100
committer: Daniel Gustafsson <dgustafsson@postgresql.org>
date : Tue, 16 Dec 2025 09:46:53 +0100 Commit 119fc30 moved CompareType to cmptype.h but the mention in
the docs still referred to primnodes.h.
Author: Daisuke Higuchi <higuchi.daisuke11@gmail.com>
Reviewed-by: Paul A Jungwirth <pj@illuminatedcomputing.com>
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Discussion: https://postgr.es/m/CAEVT6c8guXe5P=L_Un5NUUzCgEgbHnNcP+Y3TV2WbQh-xjiwqA@mail.gmail.com
Backpatch-through: 18 M doc/src/sgml/gist.sgml
Fail recovery when missing redo checkpoint record without backup_label
commit : 68ebdf2b07f6fb2d83f6e6440310fdb4b7377bb3
author : Michael Paquier <michael@paquier.xyz>
date : Tue, 16 Dec 2025 13:29:36 +0900
committer: Michael Paquier <michael@paquier.xyz>
date : Tue, 16 Dec 2025 13:29:36 +0900 This commit adds an extra check at the beginning of recovery to ensure
that the redo record of a checkpoint exists before attempting WAL
replay, logging a PANIC if the redo record referenced by the checkpoint
record could not be found. This is the same level of failure as when a
checkpoint record is missing. This check is added when a cluster is
started without a backup_label, after retrieving its checkpoint record.
The redo LSN used for the check is retrieved from the checkpoint record
successfully read.
In the case where a backup_label exists, the startup process already
fails if the redo record cannot be found after reading a checkpoint
record at the beginning of recovery.
Previously, the presence of the redo record was not checked. If the
redo and checkpoint records were located on different WAL segments, it
would be possible to miss an entire range of WAL records that should have
been replayed but were just ignored. The consequences of missing the
redo record depend on the version involved, and they become worse the
older the version is:
- On HEAD, v18 and v17, recovery fails with a pointer dereference at the
beginning of the redo loop, as the redo record is expected but cannot be
found. These versions are the best behaved, because we detect a failure
before doing anything, even if the failure is misleading: it takes the
shape of a segmentation fault that gives no hint that the redo record is
missing.
- In v16 and v15, problems show at the end of recovery within
FinishWalRecovery(), with the startup process using a buggy LSN to
decide where to start writing WAL. The cluster gets corrupted, but at
least it is noisy about it.
- v14 and older versions are worse: a cluster gets corrupted but it is
entirely silent about the matter. The missing redo record causes the
startup process to skip recovery entirely, because a missing record is
treated the same as no redo being required at all. This leads to data
loss, as everything between the redo record and the checkpoint record is
skipped.
Note that I have tested that down to 9.4, reproducing the issue with a
version of the author's reproducer slightly modified. The code is wrong
since at least 9.2, but I did not look at the exact point of origin.
This problem has been found by debugging a cluster where the WAL segment
including the redo record was missing due to an operator error, leading
to a crash, based on an investigation in v15.
Requesting archive recovery with the creation of a recovery.signal or
a standby.signal even without a backup_label would mitigate the issue:
if the record cannot be found in pg_wal/, the missing segment can be
retrieved with a restore_command when checking that the redo record
exists. This was already the case without this commit, where recovery
would re-fetch the WAL segment that includes the redo record. The check
introduced by this commit causes the segment to be retrieved earlier,
to make sure that the redo record can be found.
On HEAD, the code will be slightly changed in a follow-up commit to not
rely on a PANIC, to include a test able to emulate the original problem.
This is a minimal backpatchable fix, kept separated for clarity.
Reported-by: Andres Freund <andres@anarazel.de>
Analyzed-by: Andres Freund <andres@anarazel.de>
Author: Nitin Jadhav <nitinjadhavpostgres@gmail.com>
Discussion: https://postgr.es/m/20231023232145.cmqe73stvivsmlhs@awork3.anarazel.de
Discussion: https://postgr.es/m/CAMm1aWaaJi2w49c0RiaDBfhdCL6ztbr9m=daGqiOuVdizYWYaA@mail.gmail.com
Backpatch-through: 14 M src/backend/access/transam/xlogrecovery.c
libpq: Align oauth_json_set_error() with other NLS patterns
commit : 7a15cff1f11193467898da1c1fabf06fd2caee04
author : Jacob Champion <jchampion@postgresql.org>
date : Mon, 15 Dec 2025 13:30:48 -0800
committer: Jacob Champion <jchampion@postgresql.org>
date : Mon, 15 Dec 2025 13:30:48 -0800 Now that the prior commits have fixed missing OAuth translations, pull
the bespoke usage of libpq_gettext() for OAUTHBEARER parsing into
oauth_json_set_error() itself, and make that a gettext trigger as well,
to better match what the other sites are doing. Add an _internal()
variant to handle the existing untranslated case.
Suggested-by: Daniel Gustafsson <daniel@yesql.se>
Reviewed-by: Álvaro Herrera <alvherre@kurilemu.de>
Discussion: https://postgr.es/m/0EEBCAA8-A5AC-4E3B-BABA-0BA7A08C361B%40yesql.se
Backpatch-through: 18 M src/interfaces/libpq/fe-auth-oauth.c
M src/interfaces/libpq/nls.mk
libpq-oauth: Don't translate internal errors
commit : aac25567fec10b7b2cc382654e5586acebec5431
author : Jacob Champion <jchampion@postgresql.org>
date : Mon, 15 Dec 2025 13:30:44 -0800
committer: Jacob Champion <jchampion@postgresql.org>
date : Mon, 15 Dec 2025 13:30:44 -0800 Some error messages are generated when OAuth multiplexer operations fail
unexpectedly in the client. Álvaro pointed out that these are both
difficult to translate idiomatically (as they use internal terminology
heavily) and of dubious translation value to end users (since they're
going to need to get developer help anyway). The response parsing engine
has a similar issue.
Remove these from the translation files by introducing internal variants
of actx_error() and oauth_parse_set_error().
Suggested-by: Álvaro Herrera <alvherre@kurilemu.de>
Reviewed-by: Álvaro Herrera <alvherre@kurilemu.de>
Discussion: https://postgr.es/m/CAOYmi%2BkQQ8vpRcoSrA5EQ98Wa3G6jFj1yRHs6mh1V7ohkTC7JA%40mail.gmail.com
Backpatch-through: 18 M src/interfaces/libpq-oauth/oauth-curl.c
libpq: Add missing OAuth translations
commit : 169ff4ca930bc2562a1c938244a1b098bb09186b
author : Jacob Champion <jchampion@postgresql.org>
date : Mon, 15 Dec 2025 13:30:31 -0800
committer: Jacob Champion <jchampion@postgresql.org>
date : Mon, 15 Dec 2025 13:30:31 -0800 Several strings that should have been translated as they passed through
libpq_gettext were not actually being pulled into the translation files,
because I hadn't directly wrapped them in one of the GETTEXT_TRIGGERS.
Move the responsibility for calling libpq_gettext() to the code that
sets actx->errctx. Doing the same in report_type_mismatch() would result
in double-translation, so mark those strings with gettext_noop()
instead. And wrap two ternary operands with gettext_noop(), even though
they're already in one of the triggers, since xgettext sees only the
first.
Finally, fe-auth-oauth.c was missing from nls.mk, so none of that file
was being translated at all. Add it now.
Original patch by Zhijie Hou, plus suggested tweaks by Álvaro Herrera
and small additions by me.
Reported-by: Zhijie Hou <houzj.fnst@fujitsu.com>
Author: Zhijie Hou <houzj.fnst@fujitsu.com>
Co-authored-by: Álvaro Herrera <alvherre@kurilemu.de>
Co-authored-by: Jacob Champion <jacob.champion@enterprisedb.com>
Reviewed-by: Álvaro Herrera <alvherre@kurilemu.de>
Discussion: https://postgr.es/m/TY4PR01MB1690746DB91991D1E9A47F57E94CDA%40TY4PR01MB16907.jpnprd01.prod.outlook.com
Backpatch-through: 18 M src/interfaces/libpq-oauth/oauth-curl.c
M src/interfaces/libpq/nls.mk
Revisit cosmetics of "For inplace update, send nontransactional invalidations."
commit : bae8ca82fd00603ebafa0658640d6e4dfe20af92
author : Noah Misch <noah@leadboat.com>
date : Mon, 15 Dec 2025 12:19:49 -0800
committer: Noah Misch <noah@leadboat.com>
date : Mon, 15 Dec 2025 12:19:49 -0800 This removes a never-used CacheInvalidateHeapTupleInplace() parameter.
It adds README content about inplace update visibility in logical
decoding. It rewrites other comments.
Back-patch to v18, where commit 243e9b40f1b2dd09d6e5bf91ebf6e822a2cd3704
first appeared. Since this removes a CacheInvalidateHeapTupleInplace()
parameter, expect a v18 ".abi-compliance-history" edit to follow. PGXN
contains no calls to that function.
Reported-by: Paul A Jungwirth <pj@illuminatedcomputing.com>
Reported-by: Ilyasov Ian <ianilyasov@outlook.com>
Reviewed-by: Paul A Jungwirth <pj@illuminatedcomputing.com>
Reviewed-by: Surya Poondla <s_poondla@apple.com>
Discussion: https://postgr.es/m/CA+renyU+LGLvCqS0=fHit-N1J-2=2_mPK97AQxvcfKm+F-DxJA@mail.gmail.com
Backpatch-through: 18 M src/backend/access/heap/README.tuplock
M src/backend/access/heap/heapam.c
M src/backend/replication/logical/decode.c
M src/backend/utils/cache/inval.c
M src/include/utils/inval.h
Clarify comment on multixid offset wraparound check
commit : 3fbad030a24de28aa9b97e2c5b7e4a419594d4b7
author : Heikki Linnakangas <heikki.linnakangas@iki.fi>
date : Mon, 15 Dec 2025 11:47:04 +0200
committer: Heikki Linnakangas <heikki.linnakangas@iki.fi>
date : Mon, 15 Dec 2025 11:47:04 +0200 Coverity complained that offset cannot be 0 here because there's an
explicit check for "offset == 0" earlier in the function, but it
didn't see the possibility that offset could've wrapped around to 0.
The code is correct, but clarify the comment about it.
The same code exists in backbranches in the server
GetMultiXactIdMembers() function and in 'master' in the pg_upgrade
GetOldMultiXactIdSingleMember function. In backbranches Coverity
didn't complain about it because the check was merely an assertion,
but change the comment in all supported branches for consistency.
Per Tom Lane's suggestion.
Discussion: https://www.postgresql.org/message-id/1827755.1765752936@sss.pgh.pa.us M src/backend/access/transam/multixact.c
pg_buffercache: Fix memory allocation formula
commit : 580b5c2f397fbb2f74c2661cfe53203ed6acead0
author : Michael Paquier <michael@paquier.xyz>
date : Thu, 11 Dec 2025 14:11:25 +0900
committer: Michael Paquier <michael@paquier.xyz>
date : Thu, 11 Dec 2025 14:11:25 +0900 The code over-allocated the memory required for os_page_status, relying
on uint64 for its element size instead of an int, hence doubling what
was required. This could mean quite a lot of memory if dealing with a
lot of NUMA pages.
Oversight in ba2a3c2302f1.
Author: David Geier <geidav.pg@gmail.com>
Discussion: https://postgr.es/m/ad0748d4-3080-436e-b0bc-ac8f86a3466a@gmail.com
Backpatch-through: 18 M contrib/pg_buffercache/pg_buffercache_pages.c
Fix allocation formula in llvmjit_expr.c
commit : 5b7bbf16db3427522d057c14bd9063ef21dff196
author : Michael Paquier <michael@paquier.xyz>
date : Thu, 11 Dec 2025 10:25:44 +0900
committer: Michael Paquier <michael@paquier.xyz>
date : Thu, 11 Dec 2025 10:25:44 +0900 An array of LLVMBasicBlockRef is allocated with the size used for an
element being "LLVMBasicBlockRef *" rather than "LLVMBasicBlockRef".
LLVMBasicBlockRef is itself a pointer type, so this did not directly
cause a problem because both have the same size, but it is still
incorrect.
This issue has been spotted while reviewing a different patch, and
exists since 2a0faed9d702, so backpatch all the way down.
Discussion: https://postgr.es/m/CA+hUKGLngd9cKHtTUuUdEo2eWEgUcZ_EQRbP55MigV2t_zTReg@mail.gmail.com
Backpatch-through: 14 M src/backend/jit/llvm/llvmjit_expr.c
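A generic standalone illustration of the allocation-size mistakes fixed
here and in the preceding pg_buffercache commit (invented types; the
size comments assume a typical 4-byte int and 8-byte long long): sizing
an allocation from sizeof(*array) keeps the element type and the
allocation in sync:

    #include <stdlib.h>

    typedef struct Block *BlockRef;     /* a "Ref" type that is a pointer */
    struct Block { int id; };

    int
    main(void)
    {
        int n = 128;

        /* sizes the element as a pointer-to-pointer; harmless here only
         * because both happen to have the same width, but still wrong */
        BlockRef *blocks_wrong = malloc(n * sizeof(BlockRef *));

        /* sizeof(*array) always matches the element type */
        BlockRef *blocks_right = malloc(n * sizeof(*blocks_right));

        /* sizes int elements as 8-byte integers: twice the needed memory */
        int *status_wrong = malloc(n * sizeof(long long));

        int *status_right = malloc(n * sizeof(*status_right));

        free(blocks_wrong);
        free(blocks_right);
        free(status_wrong);
        free(status_right);
        return 0;
    }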
Fix bogus extra arguments to query_safe in test
commit : e08f338d0028af6f9f54616df1cb51009504eee3
author : Heikki Linnakangas <heikki.linnakangas@iki.fi>
date : Wed, 10 Dec 2025 19:38:07 +0200
committer: Heikki Linnakangas <heikki.linnakangas@iki.fi>
date : Wed, 10 Dec 2025 19:38:07 +0200 The test seemed to incorrectly think that query_safe() takes an
argument that describes what the query does, similar to e.g.
command_ok(). Until commit bd8d9c9bdf the extra arguments were
harmless and were just ignored, but when commit bd8d9c9bdf introduced
a new optional argument to query_safe(), the extra arguments started
clashing with that, causing the test to fail.
Backpatch to v17, which is the oldest branch where the test exists. The
extra arguments didn't cause any trouble on the older branches, but
they were clearly bogus anyway. M src/test/modules/xid_wraparound/t/004_notify_freeze.pl
Fix some near-bugs related to ResourceOwner function arguments
commit : e8dc5810a227d5671c25dc2c7dbe1321093a08a6
author : Heikki Linnakangas <heikki.linnakangas@iki.fi>
date : Wed, 10 Dec 2025 11:43:16 +0200
committer: Heikki Linnakangas <heikki.linnakangas@iki.fi>
date : Wed, 10 Dec 2025 11:43:16 +0200 These functions took a ResourceOwner argument, but only checked if it
was NULL, and then used CurrentResourceOwner for the actual work.
Surely the intention was to use the passed-in resource owner. All
current callers passed CurrentResourceOwner or NULL, so this has no
consequences at the moment, but it's an accident waiting to happen for
future callers and extensions.
Author: Matthias van de Meent <boekewurm+postgres@gmail.com>
Discussion: https://www.postgresql.org/message-id/CAEze2Whnfv8VuRZaohE-Af+GxBA1SNfD_rXfm84Jv-958UCcJA@mail.gmail.com
Backpatch-through: 17 M src/backend/storage/aio/aio.c
M src/backend/utils/cache/catcache.c
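A standalone caricature of the pattern being fixed (invented names, not
the aio.c or catcache.c code): the function accepts an owner argument
but then operates on the global current owner anyway:

    #include <stdio.h>

    typedef struct Owner { const char *name; } Owner;

    static Owner *CurrentOwner;

    /* buggy: checks the argument, then ignores it */
    static void
    remember_buggy(Owner *owner)
    {
        if (owner == NULL)
            return;
        printf("remembered by %s\n", CurrentOwner->name);
    }

    /* fixed: actually uses the owner that was passed in */
    static void
    remember_fixed(Owner *owner)
    {
        if (owner == NULL)
            return;
        printf("remembered by %s\n", owner->name);
    }

    int
    main(void)
    {
        Owner a = {"owner A"};
        Owner b = {"owner B"};

        CurrentOwner = &a;
        remember_buggy(&b);         /* prints "owner A": the wrong owner */
        remember_fixed(&b);         /* prints "owner B", as intended */
        return 0;
    }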
Fix failures with cross-version pg_upgrade tests
commit : 1756b9f616b662f9a3c98f02d6fc2932a195d8e1
author : Michael Paquier <michael@paquier.xyz>
date : Wed, 10 Dec 2025 12:47:20 +0900
committer: Michael Paquier <michael@paquier.xyz>
date : Wed, 10 Dec 2025 12:47:20 +0900 Buildfarm members skimmer and crake have reported that pg_upgrade
running from v18 fails due to the changes of d52c24b0f808: the test
expects that the objects removed from the test module injection_points
should still be present after the upgrade, but the test module does not
have them anymore.
The origin of the issue is that the following test modules depend on
injection_points, but they do not drop the extension once the tests
finish, leaving its traces in the dumps used for the upgrades:
- gin, down to v17
- typcache, down to v18
- nbtree, HEAD-only
Test modules have no upgrade requirements, as they are used only for
tests, so there is no point in keeping them around.
An alternative solution would be to drop the databases created by these
modules in AdjustUpgrade.pm, but the solution of this commit to drop the
extension is simpler. Note that there would be a catch if using a
solution based on AdjustUpgrade.pm as the database name used for the
test runs differs between configure and meson:
- configure relies on USE_MODULE_DB for database name uniqueness, which
builds a database name based on the *first* entry of REGRESS, the list
of all the SQL tests.
- meson relies on a "name" field.
For example, for the test module "gin", the regression database is named
"regression_gin" under meson, while under configure it is the more
complex "contrib_regression_gin_incomplete_splits". So an
AdjustUpgrade.pm-based solution would need a set of DROP DATABASE IF
EXISTS commands to solve this issue, to cope with each build system.
The failure has been caused by d52c24b0f808, and the problem can happen
with upgrade dumps from v17 and v18 to HEAD. This problem is not
currently reachable in the back-branches, but it is possible that
a future change in injection_points in stable branches invalidates this
theory, so this commit is applied down to v17 in the test modules that
matter.
Per discussion with Tom Lane and Heikki Linnakangas.
Discussion: https://postgr.es/m/2899652.1765167313@sss.pgh.pa.us
Backpatch-through: 17 M src/test/modules/gin/expected/gin_incomplete_splits.out
M src/test/modules/gin/sql/gin_incomplete_splits.sql
M src/test/modules/typcache/expected/typcache_rel_type_cache.out
M src/test/modules/typcache/sql/typcache_rel_type_cache.sql
Fix O_CLOEXEC flag handling in Windows port.
commit : bebb281b08b624d69fbb4a6fb94b2c1b5d0be7a5
author : Thomas Munro <tmunro@postgresql.org>
date : Wed, 10 Dec 2025 09:01:35 +1300
committer: Thomas Munro <tmunro@postgresql.org>
date : Wed, 10 Dec 2025 09:01:35 +1300 PostgreSQL's src/port/open.c has always set bInheritHandle = TRUE
when opening files on Windows, making all file descriptors inheritable
by child processes. This meant the O_CLOEXEC flag, added to many call
sites by commit 1da569ca1f (v16), was silently ignored.
The original commit included a comment suggesting that our open()
replacement doesn't create inheritable handles, but it was a
misunderstanding of the code path. In practice, the code was creating
inheritable handles in all cases.
This hasn't caused widespread problems because most child processes
(archive_command, COPY PROGRAM, etc.) operate on file paths passed as
arguments rather than inherited file descriptors. Even if a child
wanted to use an inherited handle, it would need to learn the numeric
handle value, which isn't passed through our IPC mechanisms.
Nonetheless, the current behavior is wrong. It violates documented
O_CLOEXEC semantics, contradicts our own code comments, and makes
PostgreSQL behave differently on Windows than on Unix. It also creates
potential issues with future code or security auditing tools.
To fix, define O_CLOEXEC to _O_NOINHERIT in master, previously used by
O_DSYNC. We use different values in the back branches to preserve
existing values. In pgwin32_open_handle() we set bInheritHandle
according to whether O_CLOEXEC is specified, for the same atomic
semantics as POSIX in multi-threaded programs that create processes.
Backpatch-through: 16
Author: Bryan Green <dbryan.green@gmail.com>
Co-authored-by: Thomas Munro <thomas.munro@gmail.com> (minor adjustments)
Discussion: https://postgr.es/m/e2b16375-7430-4053-bda3-5d2194ff1880%40gmail.com M src/include/port.h
M src/include/port/win32_port.h
M src/port/open.c
M src/test/modules/Makefile
M src/test/modules/meson.build
A src/test/modules/test_cloexec/Makefile
A src/test/modules/test_cloexec/meson.build
A src/test/modules/test_cloexec/t/001_cloexec.pl
A src/test/modules/test_cloexec/test_cloexec.c
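A simplified, Windows-only sketch of the idea (the function name is
invented and this is not pgwin32_open_handle(); the mapping of O_CLOEXEC
onto the CRT's _O_NOINHERIT follows what the commit describes for
master): derive bInheritHandle from the caller's flags instead of
hard-coding TRUE:

    #include <windows.h>
    #include <fcntl.h>

    #ifndef O_CLOEXEC
    #define O_CLOEXEC _O_NOINHERIT      /* mapping described for master */
    #endif

    static HANDLE
    open_handle(const char *path, int flags)
    {
        SECURITY_ATTRIBUTES sa;

        sa.nLength = sizeof(sa);
        sa.lpSecurityDescriptor = NULL;
        /* inheritable only when close-on-exec was NOT requested */
        sa.bInheritHandle = (flags & O_CLOEXEC) ? FALSE : TRUE;

        return CreateFileA(path, GENERIC_READ, FILE_SHARE_READ, &sa,
                           OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
    }

    int
    main(void)
    {
        HANDLE h = open_handle("example.txt", O_CLOEXEC);

        if (h != INVALID_HANDLE_VALUE)
            CloseHandle(h);
        return 0;
    }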
doc: Fix titles of some pg_buffercache functions.
commit : 1412c8ea0740ffe97c154cd63760a214e26c94a8
author : Nathan Bossart <nathan@postgresql.org>
date : Tue, 9 Dec 2025 11:01:38 -0600
committer: Nathan Bossart <nathan@postgresql.org>
date : Tue, 9 Dec 2025 11:01:38 -0600 As in commit 59d6c03956, use <function> rather than <structname> in
the <title> to be consistent with how other functions in this
module are documented.
Oversights in commits dcf7e1697b and 9ccc049dfe.
Author: Noboru Saito <noborusai@gmail.com>
Discussion: https://postgr.es/m/CAAM3qn%2B7KraFkCyoJCHq6m%3DurxcoHPEPryuyYeg%3DQ0EjJxjdTA%40mail.gmail.com
Backpatch-through: 18 M doc/src/sgml/pgbuffercache.sgml
doc: Fix statement about ON CONFLICT and deferrable constraints.
commit : ae627d8a3cb046379e90295bcf85c9fc6432841a
author : Dean Rasheed <dean.a.rasheed@gmail.com>
date : Tue, 9 Dec 2025 10:49:16 +0000
committer: Dean Rasheed <dean.a.rasheed@gmail.com>
date : Tue, 9 Dec 2025 10:49:16 +0000 The description of deferrable constraints in create_table.sgml states
that deferrable constraints cannot be used as conflict arbitrators in
an INSERT with an ON CONFLICT DO UPDATE clause, but in fact this
restriction applies to all ON CONFLICT clauses, not just those with DO
UPDATE. Fix this, and while at it, change the word "arbitrators" to
"arbiters", to match the terminology used elsewhere.
Author: Dean Rasheed <dean.a.rasheed@gmail.com>
Discussion: https://postgr.es/m/CAEZATCWsybvZP3ce8rGcVNx-QHuDOJZDz8y=p1SzqHwjRXyV4Q@mail.gmail.com
Backpatch-through: 14 M doc/src/sgml/ref/create_table.sgml
Fix LOCK_TIMEOUT handling in slotsync worker.
commit : 6c61c69d5886c759c8c416486c6d7761b63c3e16
author : Amit Kapila <akapila@postgresql.org>
date : Tue, 9 Dec 2025 07:12:37 +0000
committer: Amit Kapila <akapila@postgresql.org>
date : Tue, 9 Dec 2025 07:12:37 +0000 Previously, the slotsync worker relied on SIGINT for graceful shutdown
during promotion. However, SIGINT is also used by the LOCK_TIMEOUT handler
to cancel queries. Since the slotsync worker can lock catalog tables while
parsing libpq tuples, this overlap caused it to ignore LOCK_TIMEOUT
signals and potentially wait indefinitely on locks.
This patch replaces the slotsync worker's SIGINT handler with
StatementCancelHandler to correctly process query-cancel interrupts.
Additionally, the startup process now uses SIGUSR1 to signal the slotsync
worker to stop during promotion. The worker exits after detecting that the
shared memory flag stopSignaled is set.
Author: Hou Zhijie <houzj.fnst@fujitsu.com>
Reviewed-by: shveta malik <shveta.malik@gmail.com>
Reviewed-by: Chao Li <li.evan.chao@gmail.com>
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
Backpatch-through: 17, where it was introduced
Discussion: https://postgr.es/m/TY4PR01MB169078F33846E9568412D878C94A2A@TY4PR01MB16907.jpnprd01.prod.outlook.com M src/backend/replication/logical/slotsync.c
Doc: fix typo in hash index documentation
commit : a59b03995a4d34e09c3eeb07bc18231770b5cc09
author : David Rowley <drowley@postgresql.org>
date : Tue, 9 Dec 2025 14:42:11 +1300
committer: David Rowley <drowley@postgresql.org>
date : Tue, 9 Dec 2025 14:42:11 +1300 Plus a similar fix to the README.
Backpatch as far back as the sgml issue exists. The README issue does
exist in v14, but that seems unlikely to harm anyone.
Author: David Geier <geidav.pg@gmail.com>
Discussion: https://postgr.es/m/ed3db7ea-55b4-4809-86af-81ad3bb2c7d3@gmail.com
Backpatch-through: 15 M doc/src/sgml/hash.sgml
M src/backend/access/hash/README
Unify error messages
commit : 5278222853cab4d9779b707eaea5878856e2471e
author : Álvaro Herrera <alvherre@kurilemu.de>
date : Mon, 8 Dec 2025 16:30:52 +0100
committer: Álvaro Herrera <alvherre@kurilemu.de>
date : Mon, 8 Dec 2025 16:30:52 +0100 No visible changes, just refactor how messages are constructed. M src/backend/catalog/aclchk.c
M src/backend/commands/cluster.c
M src/backend/commands/dbcommands.c
M src/backend/commands/explain_state.c
M src/backend/commands/indexcmds.c
M src/backend/commands/vacuum.c
M src/backend/replication/walsender.c
M src/bin/pg_basebackup/pg_createsubscriber.c
M src/bin/pg_dump/pg_dump.c
M src/bin/pg_dump/pg_dumpall.c
M src/bin/pg_dump/pg_restore.c
Prevent invalidation of newly created replication slots.
commit : d3ceb20846e40ec6f39bce5659ddc15eadd29167
author : Amit Kapila <akapila@postgresql.org>
date : Mon, 8 Dec 2025 05:33:14 +0000
committer: Amit Kapila <akapila@postgresql.org>
date : Mon, 8 Dec 2025 05:33:14 +0000 A race condition could cause a newly created replication slot to become
invalidated between WAL reservation and a checkpoint.
Previously, if the required WAL was removed, we retried the reservation
process. However, the slot could still be invalidated before the retry if
the WAL was not yet removed but the checkpoint advanced the redo pointer
beyond the slot's intended restart LSN and computed the minimum LSN that
needs to be preserved for the slots.
The fix is to acquire an exclusive lock on ReplicationSlotAllocationLock
during WAL reservation to serialize WAL reservation and checkpoint's
minimum restart_lsn computation. This ensures that, if WAL reservation
occurs first, the checkpoint waits until restart_lsn is updated before
removing WAL. If the checkpoint runs first, subsequent WAL reservations
pick a position at or after the latest checkpoint's redo pointer.
We can't use the same fix for branch 17 and prior because commit
2090edc6f3 changed it to compute the minimum restart_lsn among slots at
the beginning of the checkpoint (or restart point). The fix for 17 and prior
branches is under discussion and will be committed separately.
Reported-by: suyu.cmj <mengjuan.cmj@alibaba-inc.com>
Author: Hou Zhijie <houzj.fnst@fujitsu.com>
Reviewed-by: Vitaly Davydov <v.davydov@postgrespro.ru>
Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com>
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
Backpatch-through: 18
Discussion: https://postgr.es/m/5e045179-236f-4f8f-84f1-0f2566ba784c.mengjuan.cmj@alibaba-inc.com M src/backend/replication/slot.c
Fix text substring search for non-deterministic collations.
commit : 18b349315ae7e00261732f514d6c91d713bb77d0
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Fri, 5 Dec 2025 20:10:33 -0500
committer: Tom Lane <tgl@sss.pgh.pa.us>
date : Fri, 5 Dec 2025 20:10:33 -0500 Due to an off-by-one error, the code failed to find matches at the
end of the haystack. Fix by rewriting the loop.
While at it, fix a comment that claimed that the function could find
a zero-length match. Such a match could send a caller into an endless
loop. However, zero-length matches only make sense with an empty
search string, and that case is explicitly excluded by all callers.
To make sure it stays that way, add an Assert and a comment.
Bug: #19341
Reported-by: Adam Warland <adam.warland@infor.com>
Author: Laurenz Albe <laurenz.albe@cybertec.at>
Reviewed-by: Heikki Linnakangas <hlinnaka@iki.fi>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/19341-1d9a22915edfec58@postgresql.org
Backpatch-through: 18 M src/backend/utils/adt/varlena.c
M src/test/regress/expected/collate.icu.utf8.out
M src/test/regress/sql/collate.icu.utf8.sql
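A standalone sketch of the loop-bound issue (plain byte comparison, not
collation-aware, with invented function names): candidate start
positions must run through hlen - nlen inclusive, or a match at the very
end of the haystack is missed:

    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    static bool
    contains(const char *haystack, const char *needle)
    {
        size_t hlen = strlen(haystack);
        size_t nlen = strlen(needle);

        if (nlen == 0 || nlen > hlen)
            return false;           /* callers exclude empty search strings */

        /* note "<=": the last valid start position is hlen - nlen */
        for (size_t start = 0; start + nlen <= hlen; start++)
            if (memcmp(haystack + start, needle, nlen) == 0)
                return true;
        return false;
    }

    int
    main(void)
    {
        printf("%d\n", contains("abcdef", "def"));   /* match at the end: 1 */
        printf("%d\n", contains("abcdef", "defg"));  /* no match: 0 */
        return 0;
    }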
Fix setting next multixid's offset at offset wraparound
commit : 02ba5e3be4f3520a45f3c9c22f61d62c4eadbb76
author : Heikki Linnakangas <heikki.linnakangas@iki.fi>
date : Fri, 5 Dec 2025 11:32:38 +0200
committer: Heikki Linnakangas <heikki.linnakangas@iki.fi>
date : Fri, 5 Dec 2025 11:32:38 +0200 In commit 789d65364c, we started updating the next multixid's offset
too when recording a multixid, so that it can always be used to
calculate the number of members. I got it wrong at offset wraparound:
we need to skip over offset 0. Fix that.
Discussion: https://www.postgresql.org/message-id/d9996478-389a-4340-8735-bfad456b313c@iki.fi
Backpatch-through: 14 M src/backend/access/transam/multixact.c
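A standalone sketch of the wraparound rule (assuming, as the commit
message implies, that offset 0 is a reserved value; the names are
invented): when the 32-bit offset counter wraps, it must skip over 0:

    #include <stdint.h>
    #include <stdio.h>

    /* advance a 32-bit offset counter, never handing out the value 0 */
    static uint32_t
    advance_offset(uint32_t offset, uint32_t nmembers)
    {
        offset += nmembers;         /* may wrap around UINT32_MAX */
        if (offset == 0)
            offset = 1;             /* offset 0 is reserved: skip over it */
        return offset;
    }

    int
    main(void)
    {
        uint32_t next = UINT32_MAX - 2;

        next = advance_offset(next, 3);     /* wraps to 0, becomes 1 */
        printf("%u\n", (unsigned) next);
        return 0;
    }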
Show version of nodes in output of TAP tests
commit : 28c5be4aecda3b692aabc5387de66806a37e0135
author : Michael Paquier <michael@paquier.xyz>
date : Fri, 5 Dec 2025 09:21:15 +0900
committer: Michael Paquier <michael@paquier.xyz>
date : Fri, 5 Dec 2025 09:21:15 +0900 This commit adds the version information of a node initialized by
Cluster.pm, which may vary depending on the install_path given by the
test. Previously, the code was written such that the node information,
which includes the version number, was dumped before the version number
was set.
This is particularly useful for the pg_upgrade TAP tests, which may mix
several versions for cross-version runs. The TAP infrastructure also
allows mixing nodes with different versions, so this information can be
useful for out-of-core tests.
Backpatch down to v15, where Cluster.pm and the pg_upgrade TAP tests
have been introduced.
Author: Potapov Alexander <a.potapov@postgrespro.com>
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Discussion: https://postgr.es/m/e59bb-692c0a80-5-6f987180@170377126
Backpatch-through: 15 M src/test/perl/PostgreSQL/Test/Cluster.pm
amcheck: Fix snapshot usage in bt_index_parent_check
commit : df93f94dda51cae1d81526472e41bbde0a089377
author : Álvaro Herrera <alvherre@kurilemu.de>
date : Thu, 4 Dec 2025 18:12:08 +0100
committer: Álvaro Herrera <alvherre@kurilemu.de>
date : Thu, 4 Dec 2025 18:12:08 +0100 We were using SnapshotAny to do some index checks, but that's wrong and
causes spurious errors when used on indexes created by CREATE INDEX
CONCURRENTLY. Fix it to use an MVCC snapshot, and add a test for it.
This problem came in with commit 5ae2087202af, which introduced
the uniqueness check. Backpatch to 17.
Author: Mihail Nikalayeu <mihailnikalayeu@gmail.com>
Reviewed-by: Andrey Borodin <x4mmm@yandex-team.ru>
Backpatch-through: 17
Discussion: https://postgr.es/m/CANtu0ojmVd27fEhfpST7RG2KZvwkX=dMyKUqg0KM87FkOSdz8Q@mail.gmail.com M contrib/amcheck/t/002_cic.pl
M contrib/amcheck/verify_nbtree.c
M doc/src/sgml/amcheck.sgml
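A minimal sketch of the affected usage (object names invented, and
assuming amcheck 1.4's checkunique parameter): running
bt_index_parent_check() with the uniqueness check on an index built with
CREATE INDEX CONCURRENTLY, which could previously raise spurious errors:
    CREATE EXTENSION IF NOT EXISTS amcheck;
    CREATE TABLE t (id int NOT NULL);
    INSERT INTO t SELECT g FROM generate_series(1, 1000) AS g;
    CREATE UNIQUE INDEX CONCURRENTLY t_id_idx ON t (id);
    SELECT bt_index_parent_check('t_id_idx'::regclass,
                                 heapallindexed => true,
                                 checkunique => true);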
Set next multixid's offset when creating a new multixid
commit : e46041fd973c367b02db92ff205cec6c1b6dd2bb
author : Heikki Linnakangas <heikki.linnakangas@iki.fi>
date : Wed, 3 Dec 2025 19:15:08 +0200
committer: Heikki Linnakangas <heikki.linnakangas@iki.fi>
date : Wed, 3 Dec 2025 19:15:08 +0200 With this commit, the next multixid's offset will always be set on the
offsets page, by the time that a backend might try to read it, so we
no longer need the waiting mechanism with the condition variable. In
other words, this eliminates "corner case 2" mentioned in the
comments.
The waiting mechanism was broken in a few scenarios:
- When nextMulti was advanced without WAL-logging the next
multixid. For example, if a later multixid was already assigned and
WAL-logged before the previous one was WAL-logged, and then the
server crashed. In that case the next offset would never be set in
the offsets SLRU, and a query trying to read it would get stuck
waiting for it. Same thing could happen if pg_resetwal was used to
forcibly advance nextMulti.
- In hot standby mode, a deadlock could happen where one backend waits
for the next multixid assignment record, but WAL replay is not
advancing because of a recovery conflict with the waiting backend.
The old TAP test used carefully placed injection points to exercise
the old waiting code, but now that the waiting code is gone, much of
the old test is no longer relevant. Rewrite the test to reproduce the
IPC/MultixactCreation hang after crash recovery instead, and to verify
that previously recorded multixids stay readable.
Backpatch to all supported versions. In back-branches, we still need
to be able to read WAL that was generated before this fix, so in the
back-branches this includes a hack to initialize the next offsets page
when replaying XLOG_MULTIXACT_CREATE_ID for the last multixid on a
page. On 'master', bump XLOG_PAGE_MAGIC instead to indicate that the
WAL is not compatible.
Author: Andrey Borodin <amborodin@acm.org>
Reviewed-by: Dmitry Yurichev <dsy.075@yandex.ru>
Reviewed-by: Álvaro Herrera <alvherre@kurilemu.de>
Reviewed-by: Kirill Reshke <reshkekirill@gmail.com>
Reviewed-by: Ivan Bykov <i.bykov@modernsys.ru>
Reviewed-by: Chao Li <li.evan.chao@gmail.com>
Discussion: https://www.postgresql.org/message-id/172e5723-d65f-4eec-b512-14beacb326ce@yandex.ru
Backpatch-through: 14 M src/backend/access/transam/multixact.c
M src/test/modules/test_slru/t/001_multixact.pl
M src/test/modules/test_slru/test_multixact.c
Fix amcheck's handling of half-dead B-tree pages
commit : 19e786727c4f3415fc29965677afdc909c50786e
author : Heikki Linnakangas <heikki.linnakangas@iki.fi>
date : Tue, 2 Dec 2025 21:11:15 +0200
committer: Heikki Linnakangas <heikki.linnakangas@iki.fi>
date : Tue, 2 Dec 2025 21:11:15 +0200 amcheck incorrectly reported the following error if there were any
half-dead pages in the index:
ERROR: mismatch between parent key and child high key in index
"amchecktest_id_idx"
It's expected that a half-dead page does not have a downlink in the
parent level, so skip the test.
Reported-by: Konstantin Knizhnik <knizhnik@garret.ru>
Reviewed-by: Peter Geoghegan <pg@bowt.ie>
Reviewed-by: Mihail Nikalayeu <mihailnikalayeu@gmail.com>
Discussion: https://www.postgresql.org/message-id/33e39552-6a2a-46f3-8b34-3f9f8004451f@garret.ru
Backpatch-through: 14 M contrib/amcheck/verify_nbtree.c
Fix amcheck's handling of incomplete root splits in B-tree
commit : 50c63ebb05fc850010f571441b0349414e21c87f
author : Heikki Linnakangas <heikki.linnakangas@iki.fi>
date : Tue, 2 Dec 2025 21:10:51 +0200
committer: Heikki Linnakangas <heikki.linnakangas@iki.fi>
date : Tue, 2 Dec 2025 21:10:51 +0200 When the root page is being split, it's normal that the root page
according to the metapage is not marked BTP_ROOT. Fix the bogus error
that amcheck raised in that case.
Reviewed-by: Peter Geoghegan <pg@bowt.ie>
Discussion: https://www.postgresql.org/message-id/abd65090-5336-42cc-b768-2bdd66738404@iki.fi
Backpatch-through: 14 M contrib/amcheck/verify_nbtree.c
Update obsolete row compare preprocessing comments.
commit : 4061992ea83adbfc6b0e4fa202d314ace83a2458
author : Peter Geoghegan <pg@bowt.ie>
date : Sat, 29 Nov 2025 16:41:49 -0500
committer: Peter Geoghegan <pg@bowt.ie>
date : Sat, 29 Nov 2025 16:41:49 -0500 We have some limited ability to detect redundant and contradictory
conditions involving an nbtree row comparison key following commits
f09816a0 and bd3f59fd: we can do so in simple cases involving IS NULL
and IS NOT NULL keys on a row compare key's first column. We can
likewise determine that a scan's qual is unsatisfiable given a row
compare whose first subkey's arg is NULL. Update obsolete comments that
claimed that we merely copied row compares into the output key array
"without any editorialization".
Also update another _bt_preprocess_keys header comment paragraph: add a
parenthetical remark that points out that preprocessing will generate a
skip array for the preceding example qual. That will ultimately lead to
preprocessing marking the example's lower-order y key required -- which
is exactly what the example supposes cannot happen. Keep the original
comment, though, since it accurately describes the mechanical rules that
determine which keys get marked required in the absence of skip arrays
(which can occasionally still matter). This fixes an oversight in
commit 92fe23d9, which added the nbtree skip scan optimization.
Author: Peter Geoghegan <pg@bowt.ie>
Backpatch-through: 18 M src/backend/access/nbtree/nbtpreprocesskeys.c
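For reference, hypothetical qual shapes of the kinds discussed above
(table and column names invented): an IS NULL key combined with a row
compare on the same leading column, and a row compare whose first
subkey's argument is NULL, which preprocessing can prove unsatisfiable:
    -- contradictory combination on the row compare's first column
    SELECT * FROM t WHERE (a, b) > (1, 2) AND a IS NULL;
    -- unsatisfiable: the first subkey's argument is NULL
    SELECT * FROM t WHERE (a, b) >= (NULL, 5);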
Avoid rewriting data-modifying CTEs more than once.
commit : b880d9a025bd58d24e8bf03e64839794e7a982d9
author : Dean Rasheed <dean.a.rasheed@gmail.com>
date : Sat, 29 Nov 2025 12:31:30 +0000
committer: Dean Rasheed <dean.a.rasheed@gmail.com>
date : Sat, 29 Nov 2025 12:31:30 +0000 Formerly, when updating an auto-updatable view, or a relation with
rules, if the original query had any data-modifying CTEs, the rewriter
would rewrite those CTEs multiple times as RewriteQuery() recursed
into the product queries. In most cases that was harmless, because
RewriteQuery() is mostly idempotent. However, if the CTE involved
updating an always-generated column, it would trigger an error because
any subsequent rewrite would appear to be attempting to assign a
non-default value to the always-generated column.
This could perhaps be fixed by attempting to make RewriteQuery() fully
idempotent, but that looks quite tricky to achieve, and would probably
be quite fragile, given that more generated-column-type features might
be added in the future.
Instead, fix by arranging for RewriteQuery() to rewrite each CTE
exactly once (by tracking the number of CTEs already rewritten as it
recurses). This has the advantage of being simpler and more efficient,
but it does make RewriteQuery() dependent on the order in which
rewriteRuleAction() joins the CTE lists from the original query and
the rule action, so care must be taken if that is ever changed.
Reported-by: Bernice Southey <bernice.southey@gmail.com>
Author: Bernice Southey <bernice.southey@gmail.com>
Author: Dean Rasheed <dean.a.rasheed@gmail.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Kirill Reshke <reshkekirill@gmail.com>
Discussion: https://postgr.es/m/CAEDh4nyD6MSH9bROhsOsuTqGAv_QceU_GDvN9WcHLtZTCYM1kA@mail.gmail.com
Backpatch-through: 14 M src/backend/rewrite/rewriteHandler.c
M src/test/regress/expected/with.out
M src/test/regress/sql/with.sql
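A hypothetical query shape of the kind described above (names invented,
not the reporter's exact case): an UPDATE of an auto-updatable view whose
WITH clause contains a data-modifying CTE inserting into a table with an
always-generated identity column; before the fix, rewriting that CTE a
second time could complain about assigning a non-default value to the
generated column:
    CREATE TABLE audit_log (id int GENERATED ALWAYS AS IDENTITY,
                            note text);
    CREATE TABLE data (v int);
    CREATE VIEW data_v AS SELECT v FROM data;  -- auto-updatable view
    WITH logged AS (
        INSERT INTO audit_log (note) VALUES ('bumped') RETURNING id
    )
    UPDATE data_v SET v = v + 1;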
Allow indexscans on partial hash indexes with implied quals.
commit : a212877dc7293cb05cf8f635f438f38e9161265e
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Thu, 27 Nov 2025 13:09:59 -0500
committer: Tom Lane <tgl@sss.pgh.pa.us>
date : Thu, 27 Nov 2025 13:09:59 -0500 Normally, if a WHERE clause is implied by the predicate of a partial
index, we drop that clause from the set of quals used with the index,
since it's redundant to test it if we're scanning that index.
However, if it's a hash index (or any !amoptionalkey index), this
could result in dropping all available quals for the index's first
key, preventing us from generating an indexscan.
It's fair to question the practical usefulness of this case. Since
hash only supports equality quals, the situation could only arise
if the index's predicate is "WHERE indexkey = constant", implying
that the index contains only one hash value, which would make hash
a really poor choice of index type. However, perhaps there are
other !amoptionalkey index AMs out there with which such cases are
more plausible.
To fix, just don't filter the candidate indexquals this way if
the index is !amoptionalkey. That's a bit hokey because it may
result in testing quals we didn't need to test, but to do it
more accurately we'd have to redundantly identify which candidate
quals are actually usable with the index, something we don't know
at this early stage of planning. Doesn't seem worth the effort.
Reported-by: Sergei Glukhov <s.glukhov@postgrespro.ru>
Author: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: David Rowley <dgrowleyml@gmail.com>
Discussion: https://postgr.es/m/e200bf38-6b45-446a-83fd-48617211feff@postgrespro.ru
Backpatch-through: 14 M src/backend/optimizer/path/indxpath.c
M src/test/regress/expected/hash_index.out
M src/test/regress/sql/hash_index.sql
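A minimal sketch of the case being enabled (names invented): the query's
only equality qual is implied by the partial hash index's predicate, so
dropping it as redundant used to leave the planner with no key for the
index's first column and therefore no index scan:
    CREATE TABLE events (kind int, payload text);
    CREATE INDEX events_kind42_idx ON events USING hash (kind)
        WHERE kind = 42;
    -- the WHERE clause is implied by the index predicate
    SELECT * FROM events WHERE kind = 42;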
doc: Fix misleading synopsis for CREATE/ALTER PUBLICATION.
commit : 9ad15f404a70114658290d4ace3b9b9d924a209b
author : Fujii Masao <fujii@postgresql.org>
date : Thu, 27 Nov 2025 23:30:51 +0900
committer: Fujii Masao <fujii@postgresql.org>
date : Thu, 27 Nov 2025 23:30:51 +0900 The documentation for CREATE/ALTER PUBLICATION previously showed:
[ ONLY ] table_name [ * ] [ ( column_name [, ... ] ) ] [ WHERE ( expression ) ] [, ... ]
to indicate that the table/column specification could be repeated.
However, placing [, ... ] directly after a multi-part construct was
misleading and made it unclear which portion was repeatable.
This commit introduces a new term, table_and_columns, to represent:
[ ONLY ] table_name [ * ] [ ( column_name [, ... ] ) ] [ WHERE ( expression ) ]
and updates the synopsis to use:
table_and_columns [, ... ]
which clearly identifies the repeatable element.
Backpatched to v15, where the misleading syntax was introduced.
Author: Peter Smith <smithpb2250@gmail.com>
Reviewed-by: Chao Li <lic@highgo.com>
Reviewed-by: Fujii Masao <masao.fujii@gmail.com>
Discussion: https://postgr.es/m/CAHut+PtsyvYL3KmA6C8f0ZpXQ=7FEqQtETVy-BOF+cm9WPvfMQ@mail.gmail.com
Backpatch-through: 15 M doc/src/sgml/ref/alter_publication.sgml
M doc/src/sgml/ref/create_publication.sgml
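To make the repeatable element concrete, a small illustrative example
(publication, table, and column names invented) in which each
comma-separated item is one table_and_columns entry:
    CREATE PUBLICATION pub FOR TABLE
        users (id, name),
        orders WHERE (amount > 0);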
Fix error reporting for SQL/JSON path type mismatches
commit : 15ba0702c1ae9d46f49a6e1f80db99167d3aedf7
author : Amit Langote <amitlan@postgresql.org>
date : Thu, 27 Nov 2025 10:42:51 +0900
committer: Amit Langote <amitlan@postgresql.org>
date : Thu, 27 Nov 2025 10:42:51 +0900 transformJsonFuncExpr() used exprType()/exprLocation() on the
possibly coerced path expression, which could be NULL when
coercion to jsonpath failed, leading to "cache lookup failed
for type 0" errors.
Preserve the original expression node so that type and location
in the "must be of type jsonpath" error are reported correctly.
Add regression tests to cover these cases.
Reported-by: Jian He <jian.universality@gmail.com>
Author: Jian He <jian.universality@gmail.com>
Reviewed-by: Kirill Reshke <reshkekirill@gmail.com>
Discussion: https://postgr.es/m/CACJufxHunVg81JMuNo8Yvv_hJD0DicgaVN2Wteu8aJbVJPBjZA@mail.gmail.com
Backpatch-through: 17 M src/backend/parser/parse_expr.c
M src/test/regress/expected/sqljson_queryfuncs.out
M src/test/regress/sql/sqljson_queryfuncs.sql
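A hypothetical example of the failing shape (not necessarily one of the
added regression tests): a path argument that cannot be coerced to
jsonpath, which previously could surface as "cache lookup failed for
type 0" rather than the intended error:
    -- the path argument must be of type jsonpath; an integer is not
    SELECT JSON_VALUE(jsonb '{"a": 1}', 123);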
Teach DSM registry to retry entry initialization if needed.
commit : b83bcc0df180056b9374bd4239d32fea84bb46f2
author : Nathan Bossart <nathan@postgresql.org>
date : Wed, 26 Nov 2025 15:12:25 -0600
committer: Nathan Bossart <nathan@postgresql.org>
date : Wed, 26 Nov 2025 15:12:25 -0600 If DSM registry entry initialization fails, backends could try to
use an uninitialized DSM segment, DSA, or dshash table (since the
entry is still added to the registry). To fix, restructure the
code so that the registry retries initialization as needed. This
commit also modifies pg_get_dsm_registry_allocations() to leave out
partially-initialized entries, as they shouldn't have any allocated
memory.
DSM registry entry initialization shouldn't fail often in practice,
but retrying was deemed better than leaving entries in a
permanently failed state (as was done by commit 1165a933aa, which
has since been reverted).
Suggested-by: Robert Haas <robertmhaas@gmail.com>
Reviewed-by: Robert Haas <robertmhaas@gmail.com>
Discussion: https://postgr.es/m/E1vJHUk-006I7r-37%40gemulon.postgresql.org
Backpatch-through: 17 M src/backend/storage/ipc/dsm_registry.c
Revert "Teach DSM registry to ERROR if attaching to an uninitialized entry."
commit : 8551a289201c79d84bf8d43be2c3c93c95205497
author : Nathan Bossart <nathan@postgresql.org>
date : Wed, 26 Nov 2025 11:37:21 -0600
committer: Nathan Bossart <nathan@postgresql.org>
date : Wed, 26 Nov 2025 11:37:21 -0600 This reverts commit 1165a933aa (and the corresponding commits on
the back-branches). In a follow-up commit, we'll teach the
registry to retry entry initialization instead of leaving it in a
permanently failed state.
Reviewed-by: Robert Haas <robertmhaas@gmail.com>
Discussion: https://postgr.es/m/E1vJHUk-006I7r-37%40gemulon.postgresql.org
Backpatch-through: 17 M src/backend/storage/ipc/dsm_registry.c
doc: Clarify passphrase command reloading on Windows
commit : 2f9ec456aece2a89ebeef5ee2d100d7ea855fb79
author : Daniel Gustafsson <dgustafsson@postgresql.org>
date : Wed, 26 Nov 2025 14:24:04 +0100
committer: Daniel Gustafsson <dgustafsson@postgresql.org>
date : Wed, 26 Nov 2025 14:24:04 +0100 When running on Windows (or EXEC_BACKEND) the SSL configuration will
be reloaded on each backend start, so the passphrase command will be
reloaded along with it. This implies that passphrase command reload
must be enabled on Windows for connections to work at all. Document
this since it wasn't mentioned explicitly, and while there, add markup
for the parameter value to match the rest of the docs.
Backpatch to all supported versions.
Author: Daniel Gustafsson <daniel@yesql.se>
Reviewed-by: Chao Li <li.evan.chao@gmail.com>
Reviewed-by: Álvaro Herrera <alvherre@kurilemu.de>
Reviewed-by: Peter Eisentraut <peter@eisentraut.org>
Discussion: https://postgr.es/m/5F301096-921A-427D-8EC1-EBAEC2A35082@yesql.se
Backpatch-through: 14 M doc/src/sgml/config.sgml
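As a concrete configuration sketch (the helper command name is made up),
this is the reload-capable form that Windows and EXEC_BACKEND builds
effectively require:
    ALTER SYSTEM SET ssl_passphrase_command = 'my_passphrase_helper %p';
    ALTER SYSTEM SET ssl_passphrase_command_supports_reload = on;
    SELECT pg_reload_conf();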
oauth_validator: Shorten JSON responses in test logs
commit : 3d8183e7c47364ca558c810199855493ea5bdd6b
author : Jacob Champion <jchampion@postgresql.org>
date : Tue, 25 Nov 2025 20:42:44 -0800
committer: Jacob Champion <jchampion@postgresql.org>
date : Tue, 25 Nov 2025 20:42:44 -0800 Response padding from the oauth_validator abuse tests was adding a
couple megabytes to the test logs. We don't need the buildfarm to hold
onto that, and we don't need to read it when debugging; truncate it.
Reported-by: Álvaro Herrera <alvherre@kurilemu.de>
Discussion: https://postgr.es/m/202511251218.zfs4nu2qnh2m%40alvherre.pgsql
Backpatch-through: 18 M src/test/modules/oauth_validator/t/oauth_server.py
pg_dump tests: don't put dumps in stdout
commit : 0e4b1af78d7c10baebbdcace9a37c9e304708bc1
author : Álvaro Herrera <alvherre@kurilemu.de>
date : Tue, 25 Nov 2025 19:08:36 +0100
committer: Álvaro Herrera <alvherre@kurilemu.de>
date : Tue, 25 Nov 2025 19:08:36 +0100 This bloats the regression log files for no reason.
Backpatch to 18; no further only because it fails to apply cleanly.
(It's just a whitespace change that conflicts, but I don't think this
warrants more effort than this.)
Discussion: https://postgr.es/m/202511251218.zfs4nu2qnh2m@alvherre.pgsql M src/bin/pg_dump/t/002_pg_dump.pl
lwlock: Fix, currently harmless, bug in LWLockWakeup()
commit : 8082b759d9b5067dcbdad7090c2e4bf1a4a6842d
author : Andres Freund <andres@anarazel.de>
date : Mon, 24 Nov 2025 17:37:09 -0500
committer: Andres Freund <andres@anarazel.de>
date : Mon, 24 Nov 2025 17:37:09 -0500 The code in LWLockWakeup() accidentally checked the list of to-be-woken-up
processes to see if LW_FLAG_HAS_WAITERS should be unset. That means that
HAS_WAITERS would not get unset immediately, but only during the next,
unnecessary, call to LWLockWakeup().
Luckily, as the code stands, this is just a small efficiency issue.
However, if there were (as in a patch of mine) a case in which LWLockWakeup()
would not find any backend to wake, despite the wait list not being empty,
we'd wrongly unset LW_FLAG_HAS_WAITERS, leading to potentially hanging.
While the consequences in the backbranches are limited, the code as-is is
confusing, and it is possible that there are workloads where the additional
wait list lock acquisitions hurt, therefore backpatch.
Discussion: https://postgr.es/m/fvfmkr5kk4nyex56ejgxj3uzi63isfxovp2biecb4bspbjrze7@az2pljabhnff
Backpatch-through: 14 M src/backend/storage/lmgr/lwlock.c
Fix incorrect IndexOptInfo header comment
commit : f4e68a32a0cc0ff4b17ebe4e16be18ff15ff97a8
author : David Rowley <drowley@postgresql.org>
date : Mon, 24 Nov 2025 17:00:50 +1300
committer: David Rowley <drowley@postgresql.org>
date : Mon, 24 Nov 2025 17:00:50 +1300 The comment incorrectly indicated that indexcollations[] stored
collations for both key columns and INCLUDE columns, but in reality it
only has elements for the key columns. canreturn[] didn't get a mention,
so add that while we're here.
Author: Junwang Zhao <zhjwpku@gmail.com>
Reviewed-by: David Rowley <dgrowleyml@gmail.com>
Discussion: https://postgr.es/m/CAEG8a3LwbZgMKOQ9CmZarX5DEipKivdHp5PZMOO-riL0w%3DL%3D4A%40mail.gmail.com
Backpatch-through: 14 M src/include/nodes/pathnodes.h
jit: Adjust AArch64-only code for LLVM 21.
commit : 912cfa3146ce4891671c34207177fd36bd155c09
author : Thomas Munro <tmunro@postgresql.org>
date : Sat, 22 Nov 2025 20:51:16 +1300
committer: Thomas Munro <tmunro@postgresql.org>
date : Sat, 22 Nov 2025 20:51:16 +1300 LLVM 21 changed the arguments of RTDyldObjectLinkingLayer's
constructor, breaking compilation with the backported
SectionMemoryManager from commit 9044fc1d.
https://github.com/llvm/llvm-project/commit/cd585864c0bbbd74ed2a2b1ccc191eed4d1c8f90
Backpatch-through: 14
Author: Holger Hoffstätte <holger@applied-asynchrony.com>
Reviewed-by: Anthonin Bonnefoy <anthonin.bonnefoy@datadoghq.com>
Discussion: https://postgr.es/m/d25e6e4a-d1b4-84d3-2f8a-6c45b975f53d%40applied-asynchrony.com M src/backend/jit/llvm/llvmjit_wrap.cpp
Handle EPERM in pg_numa_init
commit : 482e98ac43022194fbf2ef0d8f6a68fe2516e54a
author : Tomas Vondra <tomas.vondra@postgresql.org>
date : Thu, 20 Nov 2025 12:51:58 +0100
committer: Tomas Vondra <tomas.vondra@postgresql.org>
date : Thu, 20 Nov 2025 12:51:58 +0100 When running in Docker, the container may not have privileges needed by
get_mempolicy(). This is called by numa_available() in libnuma, but
versions prior to 2.0.19 did not expect that. The numa_available() call
seemingly succeeds, but then we get unexpected failures when trying to
query status of pages:
postgres =# select * from pg_shmem_allocations_numa;
ERROR: XX000: failed NUMA pages inquiry status: Operation not
permitted
LOCATION: pg_get_shmem_allocations_numa, shmem.c:691
The best solution is to call get_mempolicy() first, and proceed to
numa_available() only when it does not fail with EPERM. Otherwise we'd
need to treat older libnuma versions as insufficient, which seems a bit
too harsh, as this only affects containerized systems.
Fix by me, based on suggestions by Christoph. Backpatch to 18, where the
NUMA functions were introduced.
Reported-by: Christoph Berg <myon@debian.org>
Reviewed-by: Christoph Berg <myon@debian.org>
Discussion: https://postgr.es/m/aPDZOxjrmEo_1JRG@msg.df7cb.de
Backpatch-through: 18 M src/port/pg_numa.c
doc: Update pg_upgrade documentation to match recent description changes.
commit : d984cef87c66094452be26859796f4b820f8d655
author : Fujii Masao <fujii@postgresql.org>
date : Thu, 20 Nov 2025 09:18:51 +0900
committer: Fujii Masao <fujii@postgresql.org>
date : Thu, 20 Nov 2025 09:18:51 +0900 Commit 792353f7d52 updated the pg_dump and pg_dumpall documentation to
clarify which statistics are not included in their output. The pg_upgrade
documentation contained a nearly identical description, but it was not updated
at the same time.
This commit updates the pg_upgrade documentation to match those changes.
Backpatch to v18, where commit 792353f7d52 was backpatched to.
Author: Fujii Masao <masao.fujii@gmail.com>
Reviewed-by: Bruce Momjian <bruce@momjian.us>
Discussion: https://postgr.es/m/CAHGQGwFnfgdGz8aGWVzgFCFwoWQU7KnFFjmxinf4RkQAkzmR+w@mail.gmail.com
Backpatch-through: 18 M doc/src/sgml/ref/pgupgrade.sgml
Print new OldestXID value in pg_resetwal when it's being changed
commit : 19594271c1a2b5447f4ee85cd7676606faa2a682
author : Heikki Linnakangas <heikki.linnakangas@iki.fi>
date : Wed, 19 Nov 2025 18:05:42 +0200
committer: Heikki Linnakangas <heikki.linnakangas@iki.fi>
date : Wed, 19 Nov 2025 18:05:42 +0200 Commit 74cf7d46a91d added the --oldest-transaction-id option to
pg_resetwal, but forgot to update the code that prints all the new
values that are being set. Fix that.
Reviewed-by: Bertrand Drouvot <bertranddrouvot.pg@gmail.com>
Discussion: https://www.postgresql.org/message-id/5461bc85-e684-4531-b4d2-d2e57ad18cba@iki.fi
Backpatch-through: 14 M src/bin/pg_resetwal/pg_resetwal.c
doc: Update formula for vacuum insert threshold.
commit : c99436f433220d967297b3bb9a8d8c2d1e836abe
author : Nathan Bossart <nathan@postgresql.org>
date : Wed, 19 Nov 2025 10:01:37 -0600
committer: Nathan Bossart <nathan@postgresql.org>
date : Wed, 19 Nov 2025 10:01:37 -0600 Oversight in commit 06eae9e621.
Reviewed-by: Melanie Plageman <melanieplageman@gmail.com>
Discussion: https://postgr.es/m/aRODeqFUVkGDJSPP%40nathan
Backpatch-through: 18 M doc/src/sgml/maintenance.sgml
Fix typo in nodeHash.c
commit : db0d2d75d0bbdf08e2756b075d4a887b0d30b750
author : Richard Guo <rguo@postgresql.org>
date : Wed, 19 Nov 2025 11:04:03 +0900
committer: Richard Guo <rguo@postgresql.org>
date : Wed, 19 Nov 2025 11:04:03 +0900 Replace "overlow" with "overflow".
Author: Tender Wang <tndrwang@gmail.com>
Discussion: https://postgr.es/m/CAHewXNnzFjAjYLTkP78HE2PQ17MjBqFdQQg+0X6Wo7YMUb68xA@mail.gmail.com M src/backend/executor/nodeHash.c
Fix pg_popcount_aarch64.c to build with ancient glibc releases.
commit : 6a51707551270eb2d17ed53c4256cc89299fa3b7
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Tue, 18 Nov 2025 16:16:51 -0500
committer: Tom Lane <tgl@sss.pgh.pa.us>
date : Tue, 18 Nov 2025 16:16:51 -0500 Like commit 6d969ca68, except here we are mopping up after 519338ace.
(There are no other uses of <sys/auxv.h> in the tree, so we should
be done now.)
Reported-by: GaoZengqi <pgf00a@gmail.com>
Author: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/CAFmBtr3Av62-jBzdhFkDHXJF9vQmNtSnH2upwODjnRcsgdTytw@mail.gmail.com
Backpatch-through: 18 M src/port/pg_popcount_aarch64.c
Fix typo
commit : b1fbc6494ef57964adb22f0ee14ec314010b5629
author : Álvaro Herrera <alvherre@kurilemu.de>
date : Tue, 18 Nov 2025 19:31:23 +0100
committer: Álvaro Herrera <alvherre@kurilemu.de>
date : Tue, 18 Nov 2025 19:31:23 +0100 M src/backend/executor/nodeHash.c
M src/bin/pg_dump/t/005_pg_dump_filterfile.pl
M src/interfaces/libpq/fe-secure-openssl.c
Don't allow CTEs to determine semantic levels of aggregates.
commit : 12bc3291772e29fdf2850c890ba9b962b07a1c1c
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Tue, 18 Nov 2025 12:56:55 -0500
committer: Tom Lane <tgl@sss.pgh.pa.us>
date : Tue, 18 Nov 2025 12:56:55 -0500 The fix for bug #19055 (commit b0cc0a71e) allowed CTE references in
sub-selects within aggregate functions to affect the semantic levels
assigned to such aggregates. It turns out this broke some related
cases, leading to assertion failures or strange planner errors such
as "unexpected outer reference in CTE query". After experimenting
with some alternative rules for assigning the semantic level in
such cases, we've come to the conclusion that changing the level
is more likely to break things than be helpful.
Therefore, this patch undoes what b0cc0a71e changed, and instead
installs logic to throw an error if there is any reference to a
CTE that's below the semantic level that standard SQL rules would
assign to the aggregate based on its contained Var and Aggref nodes.
(The SQL standard disallows sub-selects within aggregate functions,
so it can't reach the troublesome case and hence has no rule for
what to do.)
Perhaps someone will come along with a legitimate query that this
logic rejects, and if so probably the example will help us craft
a level-adjustment rule that works better than what b0cc0a71e did.
I'm not holding my breath for that though, because the previous
logic had been there for a very long time before bug #19055 without
complaints, and that bug report sure looks to have originated from
fuzzing not from real usage.
Like b0cc0a71e, back-patch to all supported branches, though
sadly that no longer includes v13.
Bug: #19106
Reported-by: Kamil Monicz <kamil@monicz.dev>
Author: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/19106-9dd3668a0734cd72@postgresql.org
Backpatch-through: 14 M src/backend/parser/parse_agg.c
M src/test/regress/expected/with.out
M src/test/regress/sql/with.sql
doc: clarify that pg_upgrade preserves "optimizer" stats.
commit : b56c26c493f511d7c6db1cbc22b1a14e010ed106
author : Bruce Momjian <bruce@momjian.us>
date : Mon, 17 Nov 2025 18:55:41 -0500
committer: Bruce Momjian <bruce@momjian.us>
date : Mon, 17 Nov 2025 18:55:41 -0500 Reported-by: Rambabu V
Author: Robert Treat
Discussion: https://postgr.es/m/CADtiZxrUzRRX6edyN2y-7U5HA8KSXttee7K=EFTLXjwG1SCE4A@mail.gmail.com
Backpatch-through: 18 M doc/src/sgml/ref/pg_dump.sgml
M doc/src/sgml/ref/pg_dumpall.sgml
Fix pg_crc32c_armv8_choose.c to build with ancient glibc releases.
commit : db4eba15266e4f78aa62fd1444f3e858a284c686
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Mon, 17 Nov 2025 15:24:34 -0500
committer: Tom Lane <tgl@sss.pgh.pa.us>
date : Mon, 17 Nov 2025 15:24:34 -0500 If you go back as far as the RHEL7 era, <sys/auxv.h> does not provide
the HWCAPxxx macros needed with elf_aux_info or getauxval, so you need
to get those from the kernel header <asm/hwcap.h> instead. We knew
that for the 32-bit case but failed to extrapolate to the 64-bit case.
Oversight in commit aac831caf.
Reported-by: GaoZengqi <pgf00a@gmail.com>
Author: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/CAFmBtr3Av62-jBzdhFkDHXJF9vQmNtSnH2upwODjnRcsgdTytw@mail.gmail.com
Backpatch-through: 18 M src/port/pg_crc32c_armv8_choose.c
Update .abi-compliance-history for change to CreateStatistics().
commit : 3e85af1ff4b5bdeac5c7d2f497098ab1ad7bae2d
author : Nathan Bossart <nathan@postgresql.org>
date : Mon, 17 Nov 2025 14:14:41 -0600
committer: Nathan Bossart <nathan@postgresql.org>
date : Mon, 17 Nov 2025 14:14:41 -0600 As noted in the commit message for 5e4fcbe531, the addition of a
second parameter to CreateStatistics() breaks ABI compatibility,
but we are unaware of any impacted third-party code. This commit
updates .abi-compliance-history accordingly.
Backpatch-through: 14-18 M .abi-compliance-history
Clean up match_orclause_to_indexcol().
commit : bf5b13a8a005b3606ca120b6f3d76120e70eee92
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Mon, 17 Nov 2025 13:54:52 -0500
committer: Tom Lane <tgl@sss.pgh.pa.us>
date : Mon, 17 Nov 2025 13:54:52 -0500 Remove bogus stripping of RelabelTypes: that can result in building
an output SAOP tree with incorrect exposed exprType for the operands,
which might confuse polymorphic operators. Moreover it demonstrably
prevents folding some OR-trees to SAOPs when the RHS expressions
have different base types that were coerced to the same type by
RelabelTypes.
Reduce prohibition on type_is_rowtype to just disallow type RECORD.
We need that because otherwise we would happily fold multiple RECORD
Consts into a RECORDARRAY Const even if they aren't the same record
type. (We could allow that perhaps, if we checked that they all have
the same typmod, but the case doesn't seem worth that much effort.)
However, there is no reason at all to disallow the transformation
for named composite types, nor domains over them: as long as we can
find a suitable array type we're good.
Remove some assertions that seem rather out of place (it's not
this code's duty to verify that the RestrictInfo structure is
sane). Rewrite some comments.
The issues with RelabelType stripping seem severe enough to
back-patch this into v18 where the code was introduced.
Author: Tender Wang <tndrwang@gmail.com>
Co-authored-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/CAHewXN=aH7GQBk4fXU-WaEeVmQWUmBAeNyBfJ3VKzPphyPKUkQ@mail.gmail.com
Backpatch-through: 18 M src/backend/optimizer/path/indxpath.c
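For context, a minimal sketch (names invented) of the transformation this
function performs: an OR chain of equalities on the same index column can
be folded into an = ANY(...) array clause usable by an index scan, and
the removed RelabelType stripping mattered when the right-hand
expressions only reached a common type through such binary-compatible
coercions:
    CREATE TABLE items (code varchar(10), val int);
    CREATE INDEX items_code_idx ON items (code);
    -- can be folded into: code = ANY (ARRAY['A', 'B', 'C'])
    SELECT * FROM items WHERE code = 'A' OR code = 'B' OR code = 'C';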
Mention md5 deprecation in postgresql.conf.sample
commit : 74561f8f713285ea8a7d1f6a3eba71838ea82675
author : Daniel Gustafsson <dgustafsson@postgresql.org>
date : Mon, 17 Nov 2025 12:18:18 +0100
committer: Daniel Gustafsson <dgustafsson@postgresql.org>
date : Mon, 17 Nov 2025 12:18:18 +0100 PostgreSQL 18 deprecated password_encryption='md5', but the
comments for this GUC in the sample configuration file did
not mention the deprecation. Update comments with a notice
to make as many users as possible aware of it. Also add a
comment to the related md5_password_warnings GUC while there.
Author: Michael Banck <mbanck@gmx.net>
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Reviewed-by: Nathan Bossart <nathandbossart@gmail.com>
Reviewed-by: Robert Treat <rob@xzilla.net>
Backpatch-through: 18 M src/backend/utils/misc/postgresql.conf.sample
Define PS_USE_CLOBBER_ARGV on GNU/Hurd.
commit : bcfca332f100061e056283f237df009f570aff8b
author : Thomas Munro <tmunro@postgresql.org>
date : Mon, 17 Nov 2025 12:01:12 +1300
committer: Thomas Munro <tmunro@postgresql.org>
date : Mon, 17 Nov 2025 12:01:12 +1300 Until d2ea2d310dfdc40328aca5b6c52225de78432e01, the PS_USE_PS_STRINGS
option was used on the GNU/Hurd. As this option got removed and
PS_USE_CLOBBER_ARGV appears to work fine nowadays on the Hurd, define
this one to re-enable process title changes on this platform.
In the 14 and 15 branches, the existing test for __hurd__ (added 25
years ago by commit 209aa77d, removed in 16 by the above commit) is left
unchanged for now as it was activating slightly different code paths and
would need investigation by a Hurd user.
Author: Michael Banck <mbanck@debian.org>
Discussion: https://postgr.es/m/CA%2BhUKGJMNGUAqf27WbckYFrM-Mavy0RKJvocfJU%3DJ2XcAZyv%2Bw%40mail.gmail.com
Backpatch-through: 16 M src/backend/utils/misc/ps_status.c
Fix Assert failure in EXPLAIN ANALYZE MERGE with a concurrent update.
commit : 5749d95d47941d1240e68ad0f4e0eba0e569aeb8
author : Dean Rasheed <dean.a.rasheed@gmail.com>
date : Sun, 16 Nov 2025 22:15:10 +0000
committer: Dean Rasheed <dean.a.rasheed@gmail.com>
date : Sun, 16 Nov 2025 22:15:10 +0000 When instrumenting a MERGE command containing both WHEN NOT MATCHED BY
SOURCE and WHEN NOT MATCHED BY TARGET actions using EXPLAIN ANALYZE, a
concurrent update of the target relation could lead to an Assert
failure in show_modifytable_info(). In a non-assert build, this would
lead to an incorrect value for "skipped" tuples in the EXPLAIN output,
rather than a crash.
This could happen if the concurrent update caused a matched row to no
longer match, in which case ExecMerge() treats the single originally
matched row as a pair of not matched rows, and potentially executes 2
not-matched actions for the single source row. This could then lead to
a state where the number of rows processed by the ModifyTable node
exceeds the number of rows produced by its source node, causing
"skipped_path" in show_modifytable_info() to be negative, triggering
the Assert.
Fix this in ExecMergeMatched() by incrementing the instrumentation
tuple count on the source node whenever a concurrent update of this
kind is detected, if both kinds of merge actions exist, so that the
number of source rows matches the number of actions potentially
executed, and the "skipped" tuple count is correct.
Back-patch to v17, where support for WHEN NOT MATCHED BY SOURCE
actions was introduced.
Bug: #19111
Reported-by: Dilip Kumar <dilipbalaut@gmail.com>
Author: Dean Rasheed <dean.a.rasheed@gmail.com>
Reviewed-by: Dilip Kumar <dilipbalaut@gmail.com>
Discussion: https://postgr.es/m/19111-5b06624513d301b3@postgresql.org
Backpatch-through: 17 M src/backend/executor/nodeModifyTable.c
M src/test/isolation/expected/merge-update.out
M src/test/isolation/specs/merge-update.spec
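A sketch of the command shape involved (table names invented): an
instrumented MERGE with both kinds of NOT MATCHED actions, the
combination that could trip the assertion when a concurrent update makes
a matched row stop matching:
    EXPLAIN (ANALYZE)
    MERGE INTO target t
    USING source s ON t.id = s.id
    WHEN MATCHED THEN
        UPDATE SET val = s.val
    WHEN NOT MATCHED BY SOURCE THEN
        DELETE
    WHEN NOT MATCHED BY TARGET THEN
        INSERT (id, val) VALUES (s.id, s.val);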
Doc: include MERGE in variable substitution command list
commit : 2ed2e71cdf4246e45cf2a089071abe8c9344c712
author : David Rowley <drowley@postgresql.org>
date : Mon, 17 Nov 2025 10:52:08 +1300
committer: David Rowley <drowley@postgresql.org>
date : Mon, 17 Nov 2025 10:52:08 +1300 Backpatch to 15, where MERGE was introduced.
Reported-by: <emorgunov@mail.ru>
Author: David Rowley <dgrowleyml@gmail.com>
Discussion: https://postgr.es/m/176278494385.770.15550176063450771532@wrigleys.postgresql.org
Backpatch-through: 15 M doc/src/sgml/plpgsql.sgml
Comment out autovacuum_worker_slots in postgresql.conf.sample.
commit : c732139924718eed4c9b88d54051efbdf8152fb1
author : Nathan Bossart <nathan@postgresql.org>
date : Fri, 14 Nov 2025 13:45:04 -0600
committer: Nathan Bossart <nathan@postgresql.org>
date : Fri, 14 Nov 2025 13:45:04 -0600 All settings in this file should be commented out. In addition to
fixing that, also fix the indentation for this line.
Oversight in commit c758119e5b.
Reported-by: Daniel Gustafsson <daniel@yesql.se>
Author: Daniel Gustafsson <daniel@yesql.se>
Discussion: https://postgr.es/m/19727040-3EE4-4719-AF4F-2548544113D7%40yesql.se
Backpatch-through: 18 M src/backend/utils/misc/postgresql.conf.sample
Add note about CreateStatistics()'s selective use of check_rights.
commit : 29cf93b4b2a63273097c37f7dd0a0aed0f466798
author : Nathan Bossart <nathan@postgresql.org>
date : Fri, 14 Nov 2025 13:20:09 -0600
committer: Nathan Bossart <nathan@postgresql.org>
date : Fri, 14 Nov 2025 13:20:09 -0600 Commit 5e4fcbe531 added a check_rights parameter to this function
for use by ALTER TABLE commands that re-create statistics objects.
However, we intentionally ignore check_rights when verifying
relation ownership because this function's lookup could return a
different answer than the caller's. This commit adds a note to
this effect so that we remember it down the road.
Reviewed-by: Noah Misch <noah@leadboat.com>
Backpatch-through: 14 M src/backend/commands/statscmds.c
pgbench: Fix assertion failure with multiple \syncpipeline in pipeline mode.
commit : 00e64e35c8bd4e5a92b76feeb7f31defd7d08bd7
author : Fujii Masao <fujii@postgresql.org>
date : Fri, 14 Nov 2025 22:40:39 +0900
committer: Fujii Masao <fujii@postgresql.org>
date : Fri, 14 Nov 2025 22:40:39 +0900 Previously, when pgbench ran a custom script that triggered retriable errors
(e.g., deadlocks) followed by multiple \syncpipeline commands in pipeline mode,
the following assertion failure could occur:
Assertion failed: (res == ((void*)0)), function discardUntilSync, file pgbench.c, line 3594.
The issue was that discardUntilSync() assumed a pipeline sync result
(PGRES_PIPELINE_SYNC) would always be followed by either another sync result
or NULL. This assumption was incorrect: when multiple sync requests were sent,
a sync result could instead be followed by another result type. In such cases,
discardUntilSync() mishandled the results, leading to the assertion failure.
This commit fixes the issue by making discardUntilSync() correctly handle cases
where a pipeline sync result is followed by other result types. It now continues
discarding results until another pipeline sync followed by NULL is reached.
Backpatched to v17, where support for the \syncpipeline command in pgbench was
introduced.
Author: Yugo Nagata <nagata@sraoss.co.jp>
Reviewed-by: Chao Li <lic@highgo.com>
Reviewed-by: Fujii Masao <masao.fujii@gmail.com>
Discussion: https://postgr.es/m/20251111105037.f3fc554616bc19891f926c5b@sraoss.co.jp
Backpatch-through: 17 M src/bin/pgbench/pgbench.c
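A rough sketch of the script shape involved (not a reproducer on its own,
since a retriable error such as a deadlock must also occur, e.g. when
running with --max-tries greater than one): a custom script with more
than one \syncpipeline, run in pipeline mode with
pgbench -M extended -f script.sql:
    \set aid random(1, 100000 * :scale)
    \set bid random(1, 1 * :scale)
    \startpipeline
    UPDATE pgbench_accounts SET abalance = abalance + 1 WHERE aid = :aid;
    \syncpipeline
    UPDATE pgbench_branches SET bbalance = bbalance + 1 WHERE bid = :bid;
    \syncpipeline
    \endpipeline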
doc: Improve description of RLS policies applied by command type.
commit : 749f4ce4d9845b01ab2db6632ccd6610386dcd9e
author : Dean Rasheed <dean.a.rasheed@gmail.com>
date : Thu, 13 Nov 2025 12:02:52 +0000
committer: Dean Rasheed <dean.a.rasheed@gmail.com>
date : Thu, 13 Nov 2025 12:02:52 +0000 On the CREATE POLICY page, the "Policies Applied by Command Type"
table was missing MERGE ... THEN DELETE and some of the policies
applied during INSERT ... ON CONFLICT and MERGE. Fix that, and try to
improve readability by listing the various MERGE cases separately,
rather than together with INSERT/UPDATE/DELETE. Mention COPY ... TO
along with SELECT, since it behaves in the same way. In addition,
document which policy violations cause errors to be thrown, and which
just cause rows to be silently ignored.
Also, a paragraph above the table states that INSERT ... ON CONFLICT
DO UPDATE only checks the WITH CHECK expressions of INSERT policies
for rows appended to the relation by the INSERT path, which is
incorrect -- all rows proposed for insertion are checked, regardless
of whether they end up being inserted. Fix that, and also mention that
the same applies to INSERT ... ON CONFLICT DO NOTHING.
In addition, in various other places on that page, clarify how the
different types of policy are applied to different commands, and
whether or not errors are thrown when policy checks do not pass.
Backpatch to all supported versions. Prior to v17, MERGE did not
support RETURNING, and so MERGE ... THEN INSERT would never check new
rows against SELECT policies. Prior to v15, MERGE was not supported at
all.
Author: Dean Rasheed <dean.a.rasheed@gmail.com>
Reviewed-by: Viktor Holmberg <v@viktorh.net>
Reviewed-by: Jian He <jian.universality@gmail.com>
Discussion: https://postgr.es/m/CAEZATCWqnfeChjK=n1V_dYZT4rt4mnq+ybf9c0qXDYTVMsy8pg@mail.gmail.com
Backpatch-through: 14 M doc/src/sgml/ref/create_policy.sgml
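A small sketch of the ON CONFLICT point above (table and policy names
invented): the INSERT policy's WITH CHECK expression is applied to every
row proposed for insertion, whether the row ends up inserted, updated, or
ignored; the DO UPDATE path additionally applies SELECT and UPDATE
policies:
    CREATE TABLE accounts (id int PRIMARY KEY, owner text);
    ALTER TABLE accounts ENABLE ROW LEVEL SECURITY;
    CREATE POLICY accounts_insert ON accounts FOR INSERT
        WITH CHECK (owner = current_user);
    INSERT INTO accounts (id, owner) VALUES (1, current_user)
        ON CONFLICT (id) DO UPDATE SET owner = EXCLUDED.owner;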
Teach DSM registry to ERROR if attaching to an uninitialized entry.
commit : b26d76f643277973f90a62f18de30e00edd65378
author : Nathan Bossart <nathan@postgresql.org>
date : Wed, 12 Nov 2025 14:30:11 -0600
committer: Nathan Bossart <nathan@postgresql.org>
date : Wed, 12 Nov 2025 14:30:11 -0600 If DSM entry initialization fails, backends could try to use an
uninitialized DSM segment, DSA, or dshash table (since the entry is
still added to the registry). To fix, keep track of whether
initialization completed, and ERROR if a backend tries to attach to
an uninitialized entry. We could instead retry initialization as
needed, but that seemed complicated, error prone, and unlikely to
help most cases. Furthermore, such problems probably indicate a
coding error.
Reported-by: Alexander Lakhin <exclusion@gmail.com>
Reviewed-by: Sami Imseih <samimseih@gmail.com>
Discussion: https://postgr.es/m/dd36d384-55df-4fc2-825c-5bc56c950fa9%40gmail.com
Backpatch-through: 17 M src/backend/storage/ipc/dsm_registry.c
Clear 'xid' in dummy async notify entries written to fill up pages
commit : 82fa6b78dba1037a8822ea5fae1018376c10fd3c
author : Heikki Linnakangas <heikki.linnakangas@iki.fi>
date : Wed, 12 Nov 2025 21:19:03 +0200
committer: Heikki Linnakangas <heikki.linnakangas@iki.fi>
date : Wed, 12 Nov 2025 21:19:03 +0200 Before we started to freeze async notify entries (commit 8eeb4a0f7c),
no one looked at the 'xid' on an entry with invalid 'dboid'. But now
we might actually need to freeze it later. Initialize them with
InvalidTransactionId to begin with, to avoid that work later.
Álvaro pointed this out in review of commit 8eeb4a0f7c, but I forgot
to include this change there.
Author: Álvaro Herrera <alvherre@kurilemu.de>
Discussion: https://www.postgresql.org/message-id/202511071410.52ll56eyixx7@alvherre.pgsql
Backpatch-through: 14 M src/backend/commands/async.c
Fix remaining race condition with CLOG truncation and LISTEN/NOTIFY
commit : 7b069a1876e46e319690459c578750e8b532520f
author : Heikki Linnakangas <heikki.linnakangas@iki.fi>
date : Wed, 12 Nov 2025 20:59:44 +0200
committer: Heikki Linnakangas <heikki.linnakangas@iki.fi>
date : Wed, 12 Nov 2025 20:59:44 +0200 Previous commit fixed a bug where VACUUM would truncate the CLOG
that's still needed to check the commit status of XIDs in the async
notify queue, but as mentioned in the commit message, it wasn't a full
fix. If a backend is executing asyncQueueReadAllNotifications() and
has just made a local copy of an async SLRU page which contains old
XIDs, vacuum can concurrently truncate the CLOG covering those XIDs,
and the backend still gets an error when it calls
TransactionIdDidCommit() on those XIDs in the local copy. This commit
fixes that race condition.
To fix, hold the SLRU bank lock across the TransactionIdDidCommit()
calls in NOTIFY processing.
Per Tom Lane's idea. Backpatch to all supported versions.
Reviewed-by: Joel Jacobson <joel@compiler.org>
Reviewed-by: Arseniy Mukhin <arseniy.mukhin.dev@gmail.com>
Discussion: https://www.postgresql.org/message-id/2759499.1761756503@sss.pgh.pa.us
Backpatch-through: 14 M src/backend/commands/async.c
Fix bug where we truncated CLOG that was still needed by LISTEN/NOTIFY
commit : 321ec54625fdb79accba41c8094da382a4f89aa2
author : Heikki Linnakangas <heikki.linnakangas@iki.fi>
date : Wed, 12 Nov 2025 20:59:36 +0200
committer: Heikki Linnakangas <heikki.linnakangas@iki.fi>
date : Wed, 12 Nov 2025 20:59:36 +0200 The async notification queue contains the XID of the sender, and when
processing notifications we call TransactionIdDidCommit() on the
XID. But we had no safeguards to prevent the CLOG segments containing
those XIDs from being truncated away. As a result, if a backend didn't
for some reason process its notifications for a long time, or when a
new backend issued LISTEN, you could get an error like:
test=# listen c21;
ERROR: 58P01: could not access status of transaction 14279685
DETAIL: Could not open file "pg_xact/000D": No such file or directory.
LOCATION: SlruReportIOError, slru.c:1087
To fix, make VACUUM "freeze" the XIDs in the async notification queue
before truncating the CLOG. Old XIDs are replaced with
FrozenTransactionId or InvalidTransactionId.
Note: This commit is not a full fix. A race condition remains, where a
backend is executing asyncQueueReadAllNotifications() and has just
made a local copy of an async SLRU page which contains old XIDs, while
vacuum concurrently truncates the CLOG covering those XIDs. When the
backend then calls TransactionIdDidCommit() on those XIDs from the
local copy, you still get the error. The next commit will fix that
remaining race condition.
This was first reported by Sergey Zhuravlev in 2021, with many other
people hitting the same issue later. Thanks to:
- Alexandra Wang, Daniil Davydov, Andrei Varashen and Jacques Combrink
for investigating and providing reproducible test cases,
- Matheus Alcantara and Arseniy Mukhin for review and earlier proposed
patches to fix this,
- Álvaro Herrera and Masahiko Sawada for reviews,
- Yura Sokolov aka funny-falcon for the idea of marking transactions
as committed in the notification queue, and
- Joel Jacobson for the final patch version. I hope I didn't forget
anyone.
Backpatch to all supported versions. I believe the bug goes back all
the way to commit d1e027221d, which introduced the SLRU-based async
notification queue.
Discussion: https://www.postgresql.org/message-id/16961-25f29f95b3604a8a@postgresql.org
Discussion: https://www.postgresql.org/message-id/18804-bccbbde5e77a68c2@postgresql.org
Discussion: https://www.postgresql.org/message-id/CAK98qZ3wZLE-RZJN_Y%2BTFjiTRPPFPBwNBpBi5K5CU8hUHkzDpw@mail.gmail.com
Backpatch-through: 14 M src/backend/commands/async.c
M src/backend/commands/vacuum.c
M src/include/commands/async.h
M src/test/modules/xid_wraparound/meson.build
A src/test/modules/xid_wraparound/t/004_notify_freeze.pl
Escalate ERRORs during async notify processing to FATAL
commit : aab4a84bb070ae73dcd72d25c2239de50c6150ab
author : Heikki Linnakangas <heikki.linnakangas@iki.fi>
date : Wed, 12 Nov 2025 20:59:28 +0200
committer: Heikki Linnakangas <heikki.linnakangas@iki.fi>
date : Wed, 12 Nov 2025 20:59:28 +0200 Previously, if async notify processing encountered an error, we would
report the error to the client and advance our read position past the
offending entry to prevent trying to process it over and over
again. Trying to continue after an error has a few problems however:
- We have no way of telling the client that a notification was
lost. They get an ERROR, but that doesn't tell you much. As such,
it's not clear if keeping the connection alive after losing a
notification is a good thing. Depending on the application logic,
missing a notification could cause the application to get stuck
waiting, for example.
- If the connection is idle, PqCommReadingMsg is set and any ERROR is
turned into FATAL anyway.
- We bailed out of the notification processing loop on first error
without processing any subsequent notifications. The subsequent
notifications would not be processed until another notify interrupt
arrives. For example, if there were two notifications pending, and
processing the first one caused an ERROR, the second notification
would not be processed until someone sent a new NOTIFY.
This commit changes the behavior so that any ERROR while processing
async notifications is turned into FATAL, causing the client
connection to be terminated. That makes the behavior more consistent
as that's what happened in idle state already, and terminating the
connection is a clear signal to the application that it might've
missed some notifications.
The reason to do this now is that the next commits will change the
notification processing code in a way that would make it harder to
skip over just the offending notification entry on error.
Reviewed-by: Matheus Alcantara <matheusssilv97@gmail.com>
Reviewed-by: Álvaro Herrera <alvherre@kurilemu.de>
Reviewed-by: Arseniy Mukhin <arseniy.mukhin.dev@gmail.com>
Discussion: https://www.postgresql.org/message-id/fedbd908-4571-4bbe-b48e-63bfdcc38f64@iki.fi
Backpatch-through: 14 M src/backend/commands/async.c
doc: Document effects of ownership change on privileges
commit : f024dee6619ff0671d734e20eb99ffc3492a46bf
author : Daniel Gustafsson <dgustafsson@postgresql.org>
date : Wed, 12 Nov 2025 17:04:35 +0100
committer: Daniel Gustafsson <dgustafsson@postgresql.org>
date : Wed, 12 Nov 2025 17:04:35 +0100 Explicitly document that privileges are transferred along with the
ownership. Backpatch to all supported versions since this behavior
has always been present.
Author: Laurenz Albe <laurenz.albe@cybertec.at>
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Reviewed-by: David G. Johnston <david.g.johnston@gmail.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Josef Šimánek <josef.simanek@gmail.com>
Reported-by: Gilles Parc <gparc@free.fr>
Discussion: https://postgr.es/m/2023185982.281851219.1646733038464.JavaMail.root@zimbra15-e2.priv.proxad.net
Backpatch-through: 14 M doc/src/sgml/ddl.sgml
Fix range for commit_siblings in sample conf
commit : 6ae5228a1366b0b656dbea24d1a7557ee108d7ba
author : Daniel Gustafsson <dgustafsson@postgresql.org>
date : Wed, 12 Nov 2025 13:51:53 +0100
committer: Daniel Gustafsson <dgustafsson@postgresql.org>
date : Wed, 12 Nov 2025 13:51:53 +0100 The range for commit_siblings was incorrectly listed as starting on 1
instead of 0 in the sample configuration file. Backpatch down to all
supported branches.
Author: Man Zeng <zengman@halodbtech.com>
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Discussion: https://postgr.es/m/tencent_53B70BA72303AE9C6889E78E@qq.com
Backpatch-through: 14 M src/backend/utils/misc/postgresql.conf.sample
Change coding pattern for CURL_IGNORE_DEPRECATION()
commit : 3a438e44d96942036f98c28dcae173c63f00c2c1
author : Álvaro Herrera <alvherre@kurilemu.de>
date : Wed, 12 Nov 2025 12:35:14 +0100
committer: Álvaro Herrera <alvherre@kurilemu.de>
date : Wed, 12 Nov 2025 12:35:14 +0100 Instead of having to write a semicolon inside the macro argument, we can
insert a semicolon with another macro layer. This no longer gives
pg_bsd_indent indigestion, so we can remove the digestive aids that had
to be installed in the pgindent Perl script.
Author: Álvaro Herrera <alvherre@kurilemu.de>
Reviewed-by: Andrew Dunstan <andrew@dunslane.net>
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/202511111134.njrwf5w5nbjm@alvherre.pgsql
Backpatch-through: 18 M src/interfaces/libpq-oauth/oauth-curl.c
M src/tools/pgindent/pgindent
Fix pg_upgrade around multixid and mxoff wraparound
commit : 8747b969fd8be79541f6e3cdb99e1b425fd9ad47
author : Heikki Linnakangas <heikki.linnakangas@iki.fi>
date : Wed, 12 Nov 2025 12:20:16 +0200
committer: Heikki Linnakangas <heikki.linnakangas@iki.fi>
date : Wed, 12 Nov 2025 12:20:16 +0200 pg_resetwal didn't accept multixid 0 or multixact offset UINT32_MAX,
but they are both valid values that can appear in the control file.
That caused pg_upgrade to fail if you tried to upgrade a cluster
exactly at multixid or offset wraparound, because pg_upgrade calls
pg_resetwal to restore multixid/offset on the new cluster to the
values from the old cluster. To fix, allow those values in
pg_resetwal.
Fixes bugs #18863 and #18865 reported by Dmitry Kovalenko.
Backpatch down to v15. Version 14 has the same bug, but the patch
doesn't apply cleanly there. It could be made to work but it doesn't
seem worth the effort given how rare it is to hit this problem with
pg_upgrade, and how few people are upgrading to v14 anymore.
Author: Maxim Orlov <orlovmg@gmail.com>
Discussion: https://www.postgresql.org/message-id/CACG%3DezaApSMTjd%3DM2Sfn5Ucuggd3FG8Z8Qte8Xq9k5-%2BRQis-g@mail.gmail.com
Discussion: https://www.postgresql.org/message-id/18863-72f08858855344a2@postgresql.org
Discussion: https://www.postgresql.org/message-id/18865-d4c66cf35c2a67af@postgresql.org
Backpatch-through: 15 M src/bin/pg_resetwal/pg_resetwal.c
M src/bin/pg_resetwal/t/001_basic.pl
doc: Fix incorrect synopsis for ALTER PUBLICATION ... DROP ...
commit : 6acb670c2ace684178d9a6ab678be343e2923e6c
author : Fujii Masao <fujii@postgresql.org>
date : Wed, 12 Nov 2025 13:37:58 +0900
committer: Fujii Masao <fujii@postgresql.org>
date : Wed, 12 Nov 2025 13:37:58 +0900 The synopsis for the ALTER PUBLICATION ... DROP ... command incorrectly
implied that a column list and WHERE clause could be specified as part of
the publication object. However, these options are not allowed for
DROP operations, making the documentation misleading.
This commit corrects the synopsis to clearly show only the valid forms
of publication objects.
Backpatched to v15, where the incorrect synopsis was introduced.
Author: Peter Smith <smithpb2250@gmail.com>
Reviewed-by: Fujii Masao <masao.fujii@gmail.com>
Discussion: https://postgr.es/m/CAHut+PsPu+47Q7b0o6h1r-qSt90U3zgbAHMHUag5o5E1Lo+=uw@mail.gmail.com
Backpatch-through: 15 M doc/src/sgml/ref/alter_publication.sgml
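To illustrate the corrected synopsis (names invented): ADD and SET accept
a column list and row filter, while DROP takes only the relation name:
    ALTER PUBLICATION pub ADD TABLE users (id, name) WHERE (active);
    ALTER PUBLICATION pub DROP TABLE users;
    -- not allowed: ALTER PUBLICATION pub DROP TABLE users (id) WHERE (active);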
Report better object limits in error messages for injection points
commit : 84e826567cb862e062593280fb13b7fe5d857f0c
author : Michael Paquier <michael@paquier.xyz>
date : Wed, 12 Nov 2025 10:19:17 +0900
committer: Michael Paquier <michael@paquier.xyz>
date : Wed, 12 Nov 2025 10:19:17 +0900 Previously, error messages for oversized injection point names, libraries,
and functions showed buffer sizes (64, 128, 128) instead of the usable
character limits (63, 127, 127), as it did not account for the
terminating zero byte, which was confusing. These messages are adjusted
to better reflect reality.
The limit enforced for the private area was also too strict by one byte,
as specifying an area of exactly INJ_PRIVATE_MAXLEN bytes should be able
to work because there is no terminating zero byte in this case.
This is a stylistic change (well, mostly: a private_area size of exactly
1024 bytes can now be defined, something that nobody seems to care about
based on the lack of complaints). However, since this is a testing
facility, let's keep the logic consistent across all the branches where
this code exists, as there is an argument in favor of out-of-core
extensions that use injection points.
Author: Xuneng Zhou <xunengzhou@gmail.com>
Co-authored-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/CABPTF7VxYp4Hny1h+7ejURY-P4O5-K8WZg79Q3GUx13cQ6B2kg@mail.gmail.com
Backpatch-through: 17 M src/backend/utils/misc/injection_point.c
Add check for large files in meson.build
commit : 434b605745a1a0337c009a79cd7c6c15867fa6c5
author : Michael Paquier <michael@paquier.xyz>
date : Wed, 12 Nov 2025 09:02:29 +0900
committer: Michael Paquier <michael@paquier.xyz>
date : Wed, 12 Nov 2025 09:02:29 +0900 A similar check existed in the MSVC scripts that have been removed in
v17 by 1301c80b2167, but nothing of the kind was checked in meson when
building with a 4-byte off_t.
This commit adds a check that fails the build when trying to use a
relation file size larger than 1GB while off_t is 4 bytes, as
./configure already does, rather than detecting these failures at
runtime, because the code is not able to handle large files in this
case.
Backpatch down to v16, where meson has been introduced.
Discussion: https://postgr.es/m/aQ0hG36IrkaSGfN8@paquier.xyz
Backpatch-through: 16 M meson.build