Stamp 18.4.
commit : f5cc81719e6da4cbdb1f797c48b693e91018153a
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Mon, 11 May 2026 15:44:35 -0400
committer: Tom Lane <tgl@sss.pgh.pa.us>
date : Mon, 11 May 2026 15:44:35 -0400
M configure
M configure.ac
M meson.build
Last-minute updates for release notes.
commit : bbd12e8010561dab2c745d2ece0e94d102bef2ea
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Mon, 11 May 2026 14:54:40 -0400
committer: Tom Lane <tgl@sss.pgh.pa.us>
date : Mon, 11 May 2026 14:54:40 -0400
Security: CVE-2026-6472, CVE-2026-6473, CVE-2026-6474, CVE-2026-6475, CVE-2026-6476, CVE-2026-6477, CVE-2026-6478, CVE-2026-6479, CVE-2026-6575, CVE-2026-6637, CVE-2026-6638
M doc/src/sgml/release-18.sgml
Use palloc_array() in a few more places to avoid overflow
commit : 3fbec9e504b1b4dca0a30d4081e1eaa687510fc5
author : Heikki Linnakangas <heikki.linnakangas@iki.fi>
date : Mon, 11 May 2026 21:18:06 +0300
committer: Heikki Linnakangas <heikki.linnakangas@iki.fi>
date : Mon, 11 May 2026 21:18:06 +0300
These could overflow on 32-bit systems.
Backpatch-through: 14
Security: CVE-2026-6473
M contrib/hstore_plperl/hstore_plperl.c
M contrib/hstore_plpython/hstore_plpython.c
Remove test cases for field overflows in intarray and ltree.
commit : 05e73b5c3578cb9857dfd4904d8dc2a96e0b04eb
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Mon, 11 May 2026 12:12:03 -0400
committer: Tom Lane <tgl@sss.pgh.pa.us>
date : Mon, 11 May 2026 12:12:03 -0400
These checks are failing in the buildfarm, reporting stack overflows
rather than the expected errors, though seemingly only on ppc64 and
s390x platforms. Perhaps there is something off about our tests
for stack depth on those architectures? But there's no time to
debug that right now, and surely these tests aren't too essential.
Revert for now and plan to revisit after the release dust settles.
Backpatch-through: 14
Security: CVE-2026-6473
M contrib/intarray/expected/_int.out
M contrib/intarray/sql/_int.sql
M contrib/ltree/expected/ltree.out
M contrib/ltree/sql/ltree.sql
refint: Fix SQL injection and buffer overruns.
commit : 1ebda7da9a43d3ae3564d08612de9cb27fbaf482
author : Nathan Bossart <nathan@postgresql.org>
date : Mon, 11 May 2026 05:13:48 -0700
committer: Noah Misch <noah@leadboat.com>
date : Mon, 11 May 2026 05:13:48 -0700
Maliciously crafted key value updates could achieve SQL injection
within check_foreign_key(). To fix, ensure new key values are
properly quoted and escaped in the internally generated SQL
statements. While at it, avoid potential buffer overruns by
replacing the stack buffers for internally generated SQL statements
with StringInfo.
Reported-by: Nikolay Samokhvalov <nik@postgres.ai>
Author: Nathan Bossart <nathandbossart@gmail.com>
Reviewed-by: Noah Misch <noah@leadboat.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Fujii Masao <masao.fujii@gmail.com>
Security: CVE-2026-6637
Backpatch-through: 14
M contrib/spi/refint.c
Mark PQfn() unsafe and fix overrun in frontend LO interface.
commit : be013644043e5bae7260c09ab49cc6d64b7992be
author : Nathan Bossart <nathan@postgresql.org>
date : Mon, 11 May 2026 05:13:48 -0700
committer: Noah Misch <noah@leadboat.com>
date : Mon, 11 May 2026 05:13:48 -0700
When result_is_int is set to 0, PQfn() cannot validate that the
result fits in result_buf, so it will write data beyond the end of
the buffer when the server returns more data than requested. Since
this function is insecurable and obsolete, add a warning to the top
of the pertinent documentation advising against its use.
The only in-tree caller of PQfn() is the frontend large object
interface. To fix that, add a buf_size parameter to
pqFunctionCall3() that is used to protect against overruns, and use
it in a private version of PQfn() that also accepts a buf_size
parameter.
Reported-by: Yu Kunpeng <yu443940816@live.com>
Reported-by: Martin Heistermann <martin.heistermann@unibe.ch>
Author: Nathan Bossart <nathandbossart@gmail.com>
Reviewed-by: Noah Misch <noah@leadboat.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Etsuro Fujita <etsuro.fujita@gmail.com>
Security: CVE-2026-6477
Backpatch-through: 14
M doc/src/sgml/libpq.sgml
M src/interfaces/libpq/fe-exec.c
M src/interfaces/libpq/fe-lobj.c
M src/interfaces/libpq/fe-protocol3.c
M src/interfaces/libpq/libpq-int.h
Fix integer overflow in array_agg(), when the array grows too large
commit : 67dd6243dc95df560ff3c31ed5b6e9474d98c4c3
author : Heikki Linnakangas <heikki.linnakangas@iki.fi>
date : Mon, 11 May 2026 05:13:48 -0700
committer: Noah Misch <noah@leadboat.com>
date : Mon, 11 May 2026 05:13:48 -0700
If you accumulate many arrays full of NULLs, you could overflow
'nitems' before reaching the MaxAllocSize limit on the allocations.
Add an explicit check that the number of items doesn't grow too large.
With more than MaxArraySize items, getting the final result with
makeArrayResultArr() would fail anyway, so better to error out early.
Reported-by: Xint Code
Author: Heikki Linnakangas <heikki.linnakangas@iki.fi>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Backpatch-through: 14
Security: CVE-2026-6473
M src/backend/utils/adt/arrayfuncs.c
Fix integer-overflow and alignment hazards in locale-related code.
commit : dd8af778d2292bd8796a4df21d8f17721ed8440c
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Mon, 11 May 2026 05:13:48 -0700
committer: Noah Misch <noah@leadboat.com>
date : Mon, 11 May 2026 05:13:48 -0700
pg_locale_icu.c was full of places where a very long input string
could cause integer overflow while calculating a buffer size,
leading to buffer overruns.
It also was cavalier about using char-type local arrays as buffers
holding arrays of UChar. The alignment of a char[] variable isn't
guaranteed, so that this risked failure on alignment-picky platforms.
The lack of complaints suggests that such platforms are very rare
nowadays; but it's likely that we are paying a performance price on
rather more platforms. Declare those arrays as UChar[] instead,
keeping their physical size the same.
pg_locale_libc.c's strncoll_libc_win32_utf8() also had the
disease of assuming it could double or quadruple the input
string length without concern for overflow.
Reported-by: Xint Code
Reported-by: Pavel Kohout <pavel.kohout@aisle.com>
Author: Tom Lane <tgl@sss.pgh.pa.us>
Backpatch-through: 14
Security: CVE-2026-6473
M src/backend/utils/adt/pg_locale_icu.c
M src/backend/utils/adt/pg_locale_libc.c
Prevent path traversal in pg_basebackup and pg_rewind
commit : 6a67c540a6dc4e391560789dd29cdbb246e659e0
author : Michael Paquier <michael@paquier.xyz>
date : Mon, 11 May 2026 05:13:48 -0700
committer: Noah Misch <noah@leadboat.com>
date : Mon, 11 May 2026 05:13:48 -0700
pg_rewind and pg_basebackup could be fed paths from rogue endpoints
that, when processed, could overwrite files on the client, achieving
path traversal.
There were two areas in the tree that were sensitive to this problem:
- pg_basebackup, through the astreamer code, where no validation was
performed before building an output path when streaming tar data. This
is an issue in v15 and newer versions.
- pg_rewind file operations for paths received through libpq, for all
the stable branches supported.
To address this problem, this commit adds a helper function in path.c
that reuses path_is_relative_and_below_cwd() after applying
canonicalize_path(). This can be used to validate paths received from
a connection endpoint. A path is considered invalid if either of the
following conditions is satisfied:
- The path is absolute.
- The path includes a direct parent-directory reference.
Reported-by: XlabAI Team of Tencent Xuanwu Lab
Reported-by: Valery Gubanov <valerygubanov95@gmail.com>
Author: Michael Paquier <michael@paquier.xyz>
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
Backpatch-through: 14
Security: CVE-2026-6475
M src/bin/pg_rewind/file_ops.c
M src/fe_utils/astreamer_file.c
M src/fe_utils/astreamer_tar.c
M src/include/port.h
M src/port/path.c
Avoid overflow in size calculations in formatting.c.
commit : 55328e3a98df0fb5aad17f7f9aec64954462c871
author : Nathan Bossart <nathan@postgresql.org>
date : Mon, 11 May 2026 05:13:48 -0700
committer: Noah Misch <noah@leadboat.com>
date : Mon, 11 May 2026 05:13:48 -0700
A few functions in this file were incautious about multiplying a
possibly large integer by a factor more than 1 and then using it as
an allocation size. This is harmless on 64-bit systems where we'd
compute a size exceeding MaxAllocSize and then fail, but on 32-bit
systems we could overflow size_t, leading to an undersized
allocation and buffer overrun. To fix, use palloc_array() or
mul_size() instead of handwritten multiplication.
Reported-by: Sven Klemm <sven@tigerdata.com>
Reported-by: Xint Code
Author: Nathan Bossart <nathandbossart@gmail.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Tatsuo Ishii <ishii@postgresql.org>
Security: CVE-2026-6473
Backpatch-through: 14
M src/backend/utils/adt/formatting.c
Check CREATE privilege on multirange type schema in CREATE TYPE.
commit : a44780f412515f70c3f155db04df13af67cea74c
author : Nathan Bossart <nathan@postgresql.org>
date : Mon, 11 May 2026 05:13:47 -0700
committer: Noah Misch <noah@leadboat.com>
date : Mon, 11 May 2026 05:13:47 -0700
This omission allowed roles to create multirange types in any
schema, potentially leading to privilege escalations. Note that
when a multirange type name is not specified in CREATE TYPE, it is
automatically placed in the range type's schema, which is checked
at the beginning of DefineRange().
Reported-by: Jelte Fennema-Nio <postgres@jeltef.nl>
Author: Jelte Fennema-Nio <postgres@jeltef.nl>
Reviewed-by: Nathan Bossart <nathandbossart@gmail.com>
Reviewed-by: Tomas Vondra <tomas@vondra.me>
Security: CVE-2026-6472
Backpatch-through: 14
M src/backend/commands/typecmds.c
M src/test/regress/expected/multirangetypes.out
M src/test/regress/sql/multirangetypes.sql
pg_createsubscriber: Obstruct SQL injection via subscription names.
commit : c2e44c370edc003367e94bde137c6d9cfab5919c
author : Nathan Bossart <nathan@postgresql.org>
date : Mon, 11 May 2026 05:13:47 -0700
committer: Noah Misch <noah@leadboat.com>
date : Mon, 11 May 2026 05:13:47 -0700
drop_existing_subscription() neglected to escape the subscription
name when generating its query string. To fix, use
PQescapeIdentifier() to construct a properly escaped name, and use
it in the ALTER SUBSCRIPTION and DROP SUBSCRIPTION commands.
Reported-by: Yu Kunpeng <yu443940816@live.com>
Author: Nathan Bossart <nathandbossart@gmail.com>
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
Security: CVE-2026-6476
Backpatch-through: 17
M src/bin/pg_basebackup/pg_createsubscriber.c
Fix MCV input array checks in statistics restore functions
commit : 661095c40c0bcbb9c49743f518417a2977b63aef
author : Michael Paquier <michael@paquier.xyz>
date : Mon, 11 May 2026 05:13:47 -0700
committer: Noah Misch <noah@leadboat.com>
date : Mon, 11 May 2026 05:13:47 -0700
The SQL functions that restore attribute and expression statistics
accept "most_common_vals" and "most_common_freqs" as independent arrays.
The planner assumes these have the same number of elements, but it was
possible to insert data into the catalogs that would cause an over-read
when loaded by the planner.
There were two holes in the stats restore logic:
- Both arrays must match in size.
- The input array must be one-dimensional, matching what pg_dump
delivers when scanning the pg_stats catalogs.
The multivariate extended statistics MCV path (import_mcv) already
validated these inputs via check_mcvlist_array(), and is not affected.
These problems exist in v18 and newer versions for the restore of
attribute statistics. These problems affect only HEAD for the restore
of the expression statistics.
Reported-by: Jeroen Gui <jeroen.gui1@proton.me>
Author: Michael Paquier <michael@paquier.xyz>
Reviewed-by: Amit Langote <amitlangote09@gmail.com>
Reviewed-by: John Naylor <johncnaylorls@gmail.com>
Security: CVE-2026-6575
Backpatch-through: 18
M src/backend/statistics/attribute_stats.c
M src/test/regress/expected/stats_import.out
M src/test/regress/sql/stats_import.sql
Guard against unsafe conditions in usage of pg_strftime().
commit : c6e7a9ef30a2c97cc2dee8915f5e9e3675c81b34
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Mon, 11 May 2026 05:13:47 -0700
committer: Noah Misch <noah@leadboat.com>
date : Mon, 11 May 2026 05:13:47 -0700
Although pg_strftime() has defined error conditions, no callers bother
to check for errors. This is problematic because the output string is
very likely not null-terminated if an error occurs, so that blindly
using it is unsafe. Rather than trusting that we can find and fix all
the callers, let's alter the function's API spec slightly: make it
guarantee a null-terminated result so long as maxsize > 0.
Furthermore, if we do get an error, let's make that null-terminated
result be an empty string. We could instead truncate at the buffer
length, but that risks producing mis-encoded output if the tz_name
string contains multibyte characters. It doesn't seem reasonable for
src/timezone/ to make use of our encoding-aware truncation logic.
Also, the only really likely source of a failure is a user-supplied
timezone name that is intentionally trying to overrun our buffers.
I don't feel a need to be particularly friendly about that case.
Author: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: John Naylor <johncnaylorls@gmail.com>
Backpatch-through: 14
Security: CVE-2026-6474
M src/timezone/strftime.c
Avoid passing unintended format codes to snprintf().
commit : ba27389c2cfa1485bbe26754b23d3f6b4c4e72e2
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Mon, 11 May 2026 05:13:47 -0700
committer: Noah Misch <noah@leadboat.com>
date : Mon, 11 May 2026 05:13:47 -0700
timeofday() assumed that the output of pg_strftime() could not contain
% signs, other than the one it explicitly asks for with %%. However,
we don't have that guarantee with respect to the time zone name (%Z).
A crafted time zone setting could abuse the subsequent snprintf()
call, resulting in crashes or disclosure of server memory.
To fix, split the pg_strftime() call into two and then treat the
outputs as literal strings, not a snprintf format string. The
extra pg_strftime() call doesn't really cost anything, since the
bulk of the conversion work was done by pg_localtime().
Also, adjust buffer widths so that we're not risking string truncation
during the snprintf() step, as that would create a hazard of producing
mis-encoded output.
This also fixes a latent portability issue: the format string expects
an int, but tp.tv_usec is long int on many platforms.
Reported-by: Xint Code
Author: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: John Naylor <johncnaylorls@gmail.com>
Backpatch-through: 14
Security: CVE-2026-6474
M src/backend/utils/adt/timestamp.c
Fix SQL injection in logical replication origin checks.
commit : cb35d730689546dd7334437f2954a6670fbb967e
author : Noah Misch <noah@leadboat.com>
date : Mon, 11 May 2026 05:13:47 -0700
committer: Noah Misch <noah@leadboat.com>
date : Mon, 11 May 2026 05:13:47 -0700
ALTER SUBSCRIPTION ... REFRESH PUBLICATION interpolates schema and
relation names into SQL without quoting them. A crafted subscriber
relation name can inject arbitrary SQL on the publisher. Test such a
name. Back-patch to v16, where commit
875693019053b8897ec3983e292acbb439b088c3 first appeared.
Reported-by: Pavel Kohout <pavel.kohout@aisle.com>
Author: Pavel Kohout <pavel.kohout@aisle.com>
Reviewed-by: Nathan Bossart <nathandbossart@gmail.com>
Backpatch-through: 16
Security: CVE-2026-6638
M src/backend/commands/subscriptioncmds.c
M src/test/subscription/t/030_origin.pl
Apply timingsafe_bcmp() in authentication paths
commit : d93ef413174daae721c6f2cfda3fbd5187f0b4ee
author : Michael Paquier <michael@paquier.xyz>
date : Mon, 11 May 2026 05:13:47 -0700
committer: Noah Misch <noah@leadboat.com>
date : Mon, 11 May 2026 05:13:47 -0700
This commit applies timingsafe_bcmp() to authentication paths that
handle attributes or data previously compared with memcmp() or strcmp(),
both of which are sensitive to timing attacks.
This change affects the following data, some in the backend and some
in the frontend:
- For a SCRAM or MD5 password, the computed key or the MD5 hash compared
with a password during a plain authentication.
- For a SCRAM exchange, the stored key, the client's final nonce and the
server nonce.
- For RADIUS (up to v18), the encrypted password.
- For MD5 authentication, the MD5(MD5()) hash.
Reported-by: Joe Conway <mail@joeconway.com>
Security: CVE-2026-6478
Author: Michael Paquier <michael@paquier.xyz>
Reviewed-by: John Naylor <johncnaylorls@gmail.com>
Backpatch-through: 14
M src/backend/libpq/auth-scram.c
M src/backend/libpq/auth.c
M src/backend/libpq/crypt.c
M src/interfaces/libpq/fe-auth-scram.c
Guard against overflow in "left" fields of query_int and ltxtquery.
commit : c5790ec4fd9a6ae9e0bf322a06ee9de2eedf3e11
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Mon, 11 May 2026 05:13:47 -0700
committer: Noah Misch <noah@leadboat.com>
date : Mon, 11 May 2026 05:13:47 -0700
contrib/intarray's query_int type uses an int16 field to hold the
offset from a binary operator node to its left operand. However, it
allows the number of nodes to be as much as will fit in MaxAllocSize,
so there is a risk of overflowing int16 depending on the precise shape
of the tree. Simple right-associative cases like "a | b | c | ..."
work fine, so we should not solve this by restricting the overall
number of nodes. Instead add a direct test of whether each individual
offset is too large.
contrib/ltree's ltxtquery type uses essentially the same logic and
has the same 16-bit restriction.
(The core backend's tsquery.c has a variant of this logic too, but
in that case the target field is 32 bits, so it is okay so long
as varlena datums are restricted to 1GB.)
In v16 and up, these types support soft error reporting, so we have
to complicate the recursive findoprnd function's API a bit to allow
the complaint to be reported softly. v14/v15 don't need that.
Undocumented and overcomplicated code like this makes my head hurt,
so add some comments and simplify while at it.
Reported-by: Xint Code
Author: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Backpatch-through: 14
Security: CVE-2026-6473
M contrib/intarray/_int_bool.c
M contrib/intarray/expected/_int.out
M contrib/intarray/sql/_int.sql
M contrib/ltree/expected/ltree.out
M contrib/ltree/ltxtquery_io.c
M contrib/ltree/sql/ltree.sql
Fix unbounded recursive handling of SSL/GSS in ProcessStartupPacket()
commit : f7a191f5377dacd05d22dd40c1d1e38b393ea9b4
author : Michael Paquier <michael@paquier.xyz>
date : Mon, 11 May 2026 05:13:47 -0700
committer: Noah Misch <noah@leadboat.com>
date : Mon, 11 May 2026 05:13:47 -0700
The handling of SSL and GSS negotiation messages in
ProcessStartupPacket() could cause the backend to recurse, ultimately
crashing the server, because negotiation attempts were not tracked
across multiple calls processing startup packets.
A malicious client could therefore alternate rejected SSL and GSS
requests indefinitely, each adding a stack frame, until the backend
crashed with a stack overflow, taking down a server.
This commit addresses the issue by modifying ProcessStartupPacket() so
that negotiation attempts are tracked, preventing unbounded recursion.
A TAP test is added that stacks multiple SSL and GSS negotiation
attempts to check for this problem.
Reported-by: Calif.io in collaboration with Claude and Anthropic
Research
Author: Michael Paquier <michael@paquier.xyz>
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Security: CVE-2026-6479
Backpatch-through: 14
M src/backend/tcop/backend_startup.c
M src/test/postmaster/meson.build
A src/test/postmaster/t/004_negotiate.pl
Fix assorted places that need to use palloc_array().
commit : 01e568b8c11bc0e609cb2b1f936a0697d793703d
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Mon, 11 May 2026 05:13:47 -0700
committer: Noah Misch <noah@leadboat.com>
date : Mon, 11 May 2026 05:13:47 -0700
multirange_recv and BlockRefTableReaderNextRelation were incautious
about multiplying a possibly-large integer by a factor more than 1
and then using it as an allocation size. This is harmless on 64-bit
systems where we'd compute a size exceeding MaxAllocSize and then
fail, but on 32-bit systems we could overflow size_t leading to an
undersized allocation and buffer overrun.
Fix these places by using palloc_array() instead of a handwritten
multiplication. (In HEAD, some of them were fixed already, but
none of that work got back-patched at the time.)
In addition, BlockRefTableReaderNextRelation passes the same value
to BlockRefTableRead's "int length" parameter. If built for
64-bit frontend code, palloc_array() allows a larger array size
than it otherwise would, potentially allowing that parameter to
overflow. Add an explicit check to forestall that and keep the
behavior the same cross-platform.
Reported-by: Xint Code
Author: Tom Lane <tgl@sss.pgh.pa.us>
Backpatch-through: 14
Security: CVE-2026-6473
M src/backend/utils/adt/multirangetypes.c
M src/common/blkreftable.c
Prevent buffer overrun in unicode_normalize().
commit : 8d1489d505cf97357d27a69da020390eb6d7018b
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Mon, 11 May 2026 05:13:47 -0700
committer: Noah Misch <noah@leadboat.com>
date : Mon, 11 May 2026 05:13:47 -0700
Some UTF8 characters decompose to more than a dozen codepoints.
It is possible for an input string that fits into well under
1GB to produce more than 4G decomposed codepoints, causing
unicode_normalize()'s decomp_size variable to wrap around to a
small positive value. This results in a small output buffer
allocation and subsequent buffer overrun.
To fix, test after each addition to see if we've overrun MaxAllocSize,
and break out of the loop early if so. In frontend code we want to
just return NULL for this failure (treating it like OOM). In the
backend, we can rely on the following palloc() call to throw error.
I also tightened things up in the calling functions in varlena.c,
using size_t rather than int and allocating the input workspace
with palloc_array(). These changes are probably unnecessary
given the knowledge that the original input and the normalized
output_chars array must fit into 1GB, but it's a lot easier to
believe the code is safe with these changes.
Reported-by: Xint Code
Reported-by: Bruce Dang <bruce@calif.io>
Author: Tom Lane <tgl@sss.pgh.pa.us>
Co-authored-by: Heikki Linnakangas <hlinnaka@iki.fi>
Backpatch-through: 14
Security: CVE-2026-6473
M src/backend/utils/adt/varlena.c
M src/common/unicode_norm.c
Harden our regex engine against integer overflow in size calculations.
commit : f3cee4dc4330865540cdef3ae3f200175ad28f33
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Mon, 11 May 2026 05:13:47 -0700
committer: Noah Misch <noah@leadboat.com>
date : Mon, 11 May 2026 05:13:47 -0700
The number of NFA states, number of NFA arcs, and number of colors
are all bounded to reasonably small values. However, there are
places where we try to allocate arrays sized by products of those
quantities, and those calculations could overflow, enabling
buffer-overrun attacks. In practice there's no problem on 64-bit
machines, but there are some live scenarios on 32-bit machines.
A related problem is that citerdissect() and creviterdissect()
allocate arrays based on the length of the input string, which
potentially could overflow.
To fix, invent MALLOC_ARRAY and REALLOC_ARRAY macros that rely on
palloc_array_extended and repalloc_array_extended with the NO_OOM
option, similarly to the existing MALLOC and REALLOC macros.
(Like those, they'll throw an error not return a NULL result for
oversize requests. This doesn't really fit into the regex code's
view of error handling, but it'll do for now. We can consider
whether to change that behavior in a non-security follow-up patch.)
I installed similar defenses in the colormap construction code.
It's not entirely clear whether integer overflow is possible
there, but analyzing the behavior in detail seems not worth
the trouble, as the risky spots are not in hot code paths.
I left a bunch of calls as-is after verifying that they can't
overflow given reasonable limits on nstates and narcs. Those
limits were enforced already via REG_MAX_COMPILE_SPACE, but
add commentary to document the interactions.
In passing, also fix a related edge case, which is that the
special color numbers used in LACON carcs could overflow the
"color" data type, if ncolors is close to MAX_COLOR.
In v14 and v15, the regex engine calls malloc() directly instead
of using palloc(), so MALLOC_ARRAY and REALLOC_ARRAY do likewise.
Reported-by: Xint Code
Author: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com>
Backpatch-through: 14
Security: CVE-2026-6473
M src/backend/regex/regc_color.c
M src/backend/regex/regc_cvec.c
M src/backend/regex/regc_nfa.c
M src/backend/regex/regcomp.c
M src/backend/regex/rege_dfa.c
M src/backend/regex/regexec.c
M src/include/regex/regcustom.h
M src/include/regex/regguts.h
Make palloc_array() and friends safe against integer overflow.
commit : e1c30458a10f769c10dc9cc38d4f577bb24b31a5
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Mon, 11 May 2026 05:13:47 -0700
committer: Noah Misch <noah@leadboat.com>
date : Mon, 11 May 2026 05:13:47 -0700
Sufficiently large "count" arguments could result in undetected
overflow, causing the allocated memory chunk to be much smaller
than what the caller will subsequently write into it. This is
unlikely to be a hazard with 64-bit size_t but can sometimes
happen on 32-bit builds, primarily where a function allocates
workspace that's significantly larger than its input data.
Rather than trying to patch the at-risk callers piecemeal,
let's just redefine these macros so that they always check.
To do that, move the longstanding add_size() and mul_size() functions
into palloc.h and mcxt.c, and adjust them to not be specific to
shared-memory allocation. Then invent palloc_mul(), palloc0_mul(),
palloc_mul_extended() to use these functions. Actually, the latter
use inlined copies to save one function call. repalloc_array() gets
similar treatment. I didn't bother trying to inline the calls for
repalloc0_array() though.
In v14 and v15, this also adds repalloc_extended(), which previously
was only available in v16 and up.
We need copies of all this in fe_memutils.[hc] as well, since that
module also provides palloc_array() etc.
Reported-by: Xint Code
Author: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com>
Backpatch-through: 14
Security: CVE-2026-6473
M src/backend/storage/ipc/shmem.c
M src/backend/utils/mmgr/mcxt.c
M src/common/fe_memutils.c
M src/include/common/fe_memutils.h
M src/include/storage/shmem.h
M src/include/utils/memutils.h
M src/include/utils/palloc.h
Add pg_add_size_overflow() and friends
commit : c7fb9f765cc5d08b2edb242a2b40d5e8b3c88668
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Mon, 11 May 2026 05:13:47 -0700
committer: Noah Misch <noah@leadboat.com>
date : Mon, 11 May 2026 05:13:47 -0700
Commit 600086f47 added (several bespoke copies of) size_t addition with
overflow checks to libpq. Move this to common/int.h, along with
its subtraction and multiplication counterparts.
pg_neg_size_overflow() is intentionally omitted; I'm not sure we should
add SSIZE_MAX to win32_port.h for the sake of a function with no
callers.
Back-patch of commit 8934f2136, done now because pg_add_size_overflow()
and friends are needed more widely for security fixes.
Author: Jacob Champion <jacob.champion@enterprisedb.com>
Reviewed-by: Chao Li <li.evan.chao@gmail.com>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/CAOYmi%2B%3D%2BpqUd2MUitvgW1pAJuXgG_TKCVc3_Ek7pe8z9nkf%2BAg%40mail.gmail.com
Backpatch-through: 14-18
Security: CVE-2026-6473
M src/include/common/int.h
M src/interfaces/libpq/fe-exec.c
M src/interfaces/libpq/fe-print.c
M src/interfaces/libpq/fe-protocol3.c
Fix overflows with ts_headline()
commit : 62ad262661ed92b0fbd43e8b7bbd1e0e38cbb9c0
author : Michael Paquier <michael@paquier.xyz>
date : Mon, 11 May 2026 05:13:47 -0700
committer: Noah Misch <noah@leadboat.com>
date : Mon, 11 May 2026 05:13:47 -0700
The options "StartSel", "StopSel" and "FragmentDelimiter" given by a
caller of the SQL function ts_headline() have their lengths stored as
int16. When providing values larger than PG_INT16_MAX, it was possible
to overflow the length values stored, leading to incorrect behaviors in
generateHeadline(), in most cases translating to a crash.
Attempting to use values for these options larger than PG_INT16_MAX is
now blocked. Some test cases are added to cover these scenarios.
Reported-by: Xint Code
Author: Michael Paquier <michael@paquier.xyz>
Backpatch-through: 14
Security: CVE-2026-6473
M src/backend/tsearch/wparser_def.c
M src/test/regress/expected/tsearch.out
M src/test/regress/sql/tsearch.sql
ltree: Fix overflows with lquery parsing
commit : 7f019f34140ab9b98ba1ede6cb9f4ed90296b50f
author : Michael Paquier <michael@paquier.xyz>
date : Mon, 11 May 2026 05:13:47 -0700
committer: Noah Misch <noah@leadboat.com>
date : Mon, 11 May 2026 05:13:47 -0700
The lquery parser in contrib/ltree/ had two overflow problems:
- A single lquery level with many OR-separated variants (e.g.,
'label1|label2|...') could overflow totallen, which is stored as a
uint16 and hence capped at UINT16_MAX (65535). Each variant
contributes MAXALIGN(LVAR_HDRSIZE + len) bytes, so with enough long
variants the value would wrap around. This would corrupt the data
written by LQL_NEXT(), leading to stack corruption, most likely a
crash, but potentially also incorrect memory access.
- numvar, also a uint16, counts the number of OR-variants in a single
level and is incremented without bounds checking. With more than
PG_UINT16_MAX (65535) variants in a single level, requiring a minimum
of 131kB of input data, it would wrap to 0. When a '*' wildcard is
used, this would silently change the query results.
For both issues, overflow checks are added to guard against these
problematic patterns.
The first issue was reported by the three people listed below; it
affects v16 and newer versions due to b1665bf01e5f, though the coding
was still unsafe in v14 and v15. The second issue affects all the
stable branches; I bumped into it while reviewing the module's code.
Reported-by: Vergissmeinnicht <vergissmeinnichtzh@gmail.com>
Reported-by: A1ex <alex000young@gmail.com>
Reported-by: Jihe Wang <wangjihe.mail@gmail.com>
Author: Michael Paquier <michael@paquier.xyz>
Security: CVE-2026-6473
Backpatch-through: 14
M contrib/ltree/expected/ltree.out
M contrib/ltree/ltree_io.c
M contrib/ltree/sql/ltree.sql
Translation updates
commit : fb5db556c253668ce56a605dbb36f12b247cd0d4
author : Peter Eisentraut <peter@eisentraut.org>
date : Mon, 11 May 2026 13:03:08 +0200
committer: Peter Eisentraut <peter@eisentraut.org>
date : Mon, 11 May 2026 13:03:08 +0200
Source-Git-URL: https://git.postgresql.org/git/pgtranslation/messages.git
Source-Git-Hash: 79aeb42e408514d16dffcb5f69e4d97b8a92b0c6
M src/backend/po/de.po
M src/backend/po/es.po
M src/bin/pg_basebackup/po/de.po
M src/bin/pg_basebackup/po/fr.po
M src/bin/pg_dump/po/de.po
M src/bin/pg_dump/po/fr.po
M src/bin/pg_test_timing/po/de.po
M src/bin/pg_test_timing/po/fr.po
M src/bin/pg_verifybackup/po/de.po
M src/bin/pg_verifybackup/po/fr.po
M src/bin/psql/po/ka.po
M src/interfaces/libpq/po/fr.po
First-draft release notes for 18.4.
commit : 122adefc94a4dc4e97b4fd5c3328e7dc5b58ebd0
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Fri, 8 May 2026 14:13:13 -0400
committer: Tom Lane <tgl@sss.pgh.pa.us>
date : Fri, 8 May 2026 14:13:13 -0400
As usual, the release notes for other branches will be made by cutting
these down, but put them up for community review first.
M doc/src/sgml/release-18.sgml
Consider opfamily and collation when removing redundant GROUP BY columns
commit : 5c214b58b0599e9900dc777b3e00ea7120e7e10d
author : Richard Guo <rguo@postgresql.org>
date : Fri, 8 May 2026 12:47:26 +0900
committer: Richard Guo <rguo@postgresql.org>
date : Fri, 8 May 2026 12:47:26 +0900 remove_useless_groupby_columns() uses a relation's unique indexes to
prove that some GROUP BY columns are functionally dependent on others,
and so can be dropped from the GROUP BY clause. The match between
index columns and GROUP BY columns was done by attno alone, ignoring
two equality-relation issues.
A type may belong to multiple btree opfamilies whose notions of
equality differ. The record type, for instance, has record_ops
(per-field equality) and record_image_ops (bytewise equality). A
unique index under one opfamily does not prove uniqueness under the
equality used by GROUP BY when the SortGroupClause's eqop comes from a
different opfamily.
Likewise, since nondeterministic collations were introduced in PG 12,
two collations may disagree on equality, and a unique index under one
collation does not prove uniqueness under another.
In either case, rows that the index considers distinct can collapse
into a single GROUP BY group, taking ungrouped columns of differing
values with them, so the planner drops a column that is not in fact
functionally dependent and produces wrong results.
Fix by requiring, for each unique-index key column, that some GROUP BY
item on the same column has an eqop in the index's opfamily and a
collation that agrees on equality with the index's collation. This
mirrors the combined check relation_has_unique_index_for() applies to
join clauses.
This is a v18 regression: commit bd10ec529 extended
remove_useless_groupby_columns() from primary-key constraints to
arbitrary unique indexes. Before that, the function consulted only
primary keys, whose enforcement index is required by parse_utilcmd.c
to use the default opclass and the column's declared collation, so
neither mismatch could arise. Back-patch to v18 only.
Author: Richard Guo <guofenglinux@gmail.com>
Reviewed-by: Ayush Tiwari <ayushtiwari.slg01@gmail.com>
Discussion: https://postgr.es/m/CAMbWs49t6uArWoTT-cHY+nhsi23nJJKcF9Xb9cYGzaZ9kNJ98g@mail.gmail.com
Backpatch-through: 18 M src/backend/optimizer/plan/initsplan.c
M src/test/regress/expected/aggregates.out
M src/test/regress/expected/collate.icu.utf8.out
M src/test/regress/sql/aggregates.sql
M src/test/regress/sql/collate.icu.utf8.sql
M src/tools/pgindent/typedefs.list
Fix HAVING-to-WHERE pushdown for simple-CASE form
commit : 1132af22cf7d31c224d39bcf2b55287f42b945da
author : Richard Guo <rguo@postgresql.org>
date : Fri, 8 May 2026 10:57:50 +0900
committer: Richard Guo <rguo@postgresql.org>
date : Fri, 8 May 2026 10:57:50 +0900 Commit f76686ce7 added a walker that detects when a HAVING clause uses
a collation that conflicts with the GROUP BY's nondeterministic
collation, keeping such clauses in HAVING. The walker uses
exprInputCollation() to identify each ancestor's comparison collation,
but missed the simple-CASE case: parse analysis builds each WHEN as
OpExpr(CaseTestExpr op val), where CaseTestExpr is a placeholder for
the arg, while the actual arg expression sits at cexpr->arg, outside
the OpExpr that carries the comparison's inputcollid. A GROUP Var at
cexpr->arg was therefore visited with the WHEN's inputcollid absent
from the ancestor stack, the conflict went undetected, and the clause
was wrongly pushed to WHERE.
Fix by handling simple CASE explicitly: before walking cexpr->arg,
push every WHEN's inputcollid onto the ancestor stack so a GROUP Var
at the arg is checked against the same collations the WHEN comparisons
would apply. Then walk the WHEN bodies and defresult under the
unchanged stack, where their own collation contexts are picked up by
the default path.
Back-patch to v18 only; this fix extends the walker added by commit
f76686ce7 and inherits its dependency on the v18 RTE_GROUP mechanism.
Author: SATYANARAYANA NARLAPURAM <satyanarlapuram@gmail.com>
Reviewed-by: Richard Guo <guofenglinux@gmail.com>
Discussion: https://postgr.es/m/CAHg+QDcqPdd=2V0PQ_oNYj50OUeqSqznqFaYtP3RdokLBDXBqw@mail.gmail.com
Backpatch-through: 18 M src/backend/optimizer/plan/planner.c
M src/test/regress/expected/collate.icu.utf8.out
M src/test/regress/sql/collate.icu.utf8.sql
Add missing guard for __builtin_constant_p
commit : 936d8974c3bcf4fc7163fcd1b403eea2adffa73e
author : John Naylor <john.naylor@postgresql.org>
date : Tue, 5 May 2026 18:51:27 +0700
committer: John Naylor <john.naylor@postgresql.org>
date : Tue, 5 May 2026 18:51:27 +0700 Oversight in commit e2809e3a1. While at it, use pg_integer_constant_p
in master.
Discussion: https://postgr.es/m/CANWCAZbOha-x5MCreQn3TRA56VdKWNMAKMy3fAV1kJSw9Vp4pw@mail.gmail.com
Backpatch-through: 18 M src/include/port/pg_crc32c.h
postgres_fdw: Fix handling of abort-cleanup-failed connections.
commit : c318777da8b82cabe7c6644695385841a223f1eb
author : Etsuro Fujita <efujita@postgresql.org>
date : Tue, 5 May 2026 18:55:02 +0900
committer: Etsuro Fujita <efujita@postgresql.org>
date : Tue, 5 May 2026 18:55:02 +0900 Connections that failed abort cleanup cannot safely be used again, so
if a remote query tries to get such a connection, we reject it.
Previously, this rejection dropped the connection if it was still open,
without accounting for open cursors that might be using it. A
cursor-handling function (create_cursor, fetch_more_data, or
close_cursor) could then be called on the freed PGconn, crashing the
server. To fix, delay dropping failed connections until abort cleanup
of the main transaction, so that open cursors using such a connection
can still safely refer to its PGconn.
Oversight in commit 8bf58c0d9.
Reported-by: Zhibai Song <songzhibai1234@gmail.com>
Diagnosed-by: Zhibai Song <songzhibai1234@gmail.com>
Author: Etsuro Fujita <etsuro.fujita@gmail.com>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Reviewed-by: Chao Li <li.evan.chao@gmail.com>
Reviewed-by: Matheus Alcantara <matheusssilv97@gmail.com>
Discussion: https://postgr.es/m/CAPmGK176y6JP017-Cn%2BhS9CEJx_6iVhRoYbAqzuLU4d8-XPPNg%40mail.gmail.com
Backpatch-through: 14 M contrib/postgres_fdw/connection.c
M contrib/postgres_fdw/expected/postgres_fdw.out
M contrib/postgres_fdw/sql/postgres_fdw.sql
Consider collation when proving subquery uniqueness
commit : bed3ffbf9d952be6c7d739d068cdce44c046dfb7
author : Richard Guo <rguo@postgresql.org>
date : Tue, 5 May 2026 10:27:06 +0900
committer: Richard Guo <rguo@postgresql.org>
date : Tue, 5 May 2026 10:27:06 +0900 rel_is_distinct_for()'s RTE_SUBQUERY branch passed only the equality
operator from each join clause to query_is_distinct_for(), discarding
the operator's input collation. query_is_distinct_for() then verified
opfamily compatibility but never checked collations, so a DISTINCT /
GROUP BY / set-op operating under one collation was trusted to prove
uniqueness for a comparison performed under an unrelated collation.
As with the recent fix in relation_has_unique_index_for(), this is
unsound for nondeterministic collations and yields wrong query results
in any optimization that consumes the proof.
Fix by carrying each clause's operator input collation into
query_is_distinct_for() and validating it at every check-site against
the subquery target expression's collation.
Back-patch to all supported branches. query_is_distinct_for() is
declared in an installed header, so on stable branches the existing
two-list signature is retained as a thin wrapper that forwards to a
new collation-aware entry point; external callers continue to receive
the historical collation-blind answer.
Author: Richard Guo <guofenglinux@gmail.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/CAMbWs4_XUUSTyzCaRjUeeahWNqi=8ZOA5Q4coi8zUVEDSBkM6A@mail.gmail.com
Backpatch-through: 14 M src/backend/optimizer/plan/analyzejoins.c
M src/test/regress/expected/collate.icu.utf8.out
M src/test/regress/sql/collate.icu.utf8.sql
M src/tools/pgindent/typedefs.list
Consider collation when proving uniqueness from unique indexes
commit : b62f514ac5334bc1581d6491dea7ab8482ff745a
author : Richard Guo <rguo@postgresql.org>
date : Tue, 5 May 2026 10:26:17 +0900
committer: Richard Guo <rguo@postgresql.org>
date : Tue, 5 May 2026 10:26:17 +0900 relation_has_unique_index_for() has long had an XXX noting that it
doesn't check collations when matching a unique index's columns
against equality clauses. This was benign as long as all collations
in play reduced to the same notion of equality, but has been incorrect
since nondeterministic collations were introduced in PG 12: a unique
index under a deterministic collation does not prove uniqueness under
a nondeterministic collation, nor vice versa.
The consequence is wrong query results for any planner optimization
that consumes the faulty proof, including inner-unique join execution
(which stops the inner search after the first match per outer row),
useless-left-join removal, semijoin-to-innerjoin reduction, and
self-join elimination.
Fix by requiring the index's collation to agree on equality with the
clause's input collation. Two collations agree on equality if either
is InvalidOid (denoting a non-collation-sensitive operation, which
cannot conflict with the other side), if they have the same OID, or if
both are deterministic: by definition a deterministic collation treats
two strings as equal iff they are byte-wise equal (see CREATE
COLLATION), so any two deterministic collations share the same
equality relation and the uniqueness proof carries over. Any mismatch
involving a nondeterministic collation is rejected.
Back-patch to all supported branches; the bug has existed since
nondeterministic collations were introduced in PG 12.
Author: Richard Guo <guofenglinux@gmail.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/CAMbWs4_XUUSTyzCaRjUeeahWNqi=8ZOA5Q4coi8zUVEDSBkM6A@mail.gmail.com
Backpatch-through: 14 M src/backend/optimizer/path/indxpath.c
M src/backend/utils/cache/lsyscache.c
M src/include/utils/lsyscache.h
M src/test/regress/expected/collate.icu.utf8.out
M src/test/regress/sql/collate.icu.utf8.sql
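The agreement rule described above can be rendered as a standalone C sketch
(demo OIDs and a toy determinism lookup stand in for consulting
pg_collation.collisdeterministic; the real check lives in lsyscache.c):

```c
#include <assert.h>
#include <stdbool.h>

typedef unsigned int Oid;
#define InvalidOid ((Oid) 0)

/* Demo collation OIDs; in the backend, determinism is looked up from
 * pg_collation.collisdeterministic. */
#define DEMO_DET_A   100
#define DEMO_DET_B   101
#define DEMO_NONDET  200

static bool
collation_is_deterministic(Oid coll)
{
    return coll != DEMO_NONDET;     /* toy stand-in for the catalog lookup */
}

/* Two collations agree on equality if either side is InvalidOid (a
 * non-collation-sensitive operation), they are the same collation, or
 * both are deterministic (byte-wise equality). */
static bool
equality_collations_agree(Oid coll1, Oid coll2)
{
    if (coll1 == InvalidOid || coll2 == InvalidOid)
        return true;
    if (coll1 == coll2)
        return true;
    return collation_is_deterministic(coll1) &&
           collation_is_deterministic(coll2);
}
```

Any mismatch involving a nondeterministic collation falls through to the
final test and is rejected, exactly as the commit message describes.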
Mark the modified FSM buffer as dirty during recovery
commit : ac3b97db380ad6295e6fe582c64dd80ae32d4b94
author : Alexander Korotkov <akorotkov@postgresql.org>
date : Sun, 3 May 2026 20:23:50 +0300
committer: Alexander Korotkov <akorotkov@postgresql.org>
date : Sun, 3 May 2026 20:23:50 +0300 The XLogRecordPageWithFreeSpace function updates free space map (FSM)
data while replaying data-level WAL records during recovery. If the FSM
block is updated, it needs to be marked as modified. Currently, this is
done with a MarkBufferDirtyHint call (as in all other places that modify
FSM data). However, in the recovery context, this function does nothing
if checksums are enabled: the assumption is that pages should not be
dirtied by hint changes during recovery, to protect against torn pages,
since no new WAL can be generated at this point to store an FPI.
That logic is not fully applicable to the FSM, whose blocks can simply
be zeroed if a checksum mismatch is detected. As things stand, changes
to an FSM block can be lost if changes to that block are infrequent
enough for it to be evicted from shared buffers in between. For a change
to persist, it must be made while the FSM block is still in buffers and
marked as dirty after receiving its FPI. If the block has already been
cleaned, the change won't be persisted, so stored FSM blocks may remain
in an obsolete state.
If a large number of discrepancies between the data in leaf FSM blocks and the
actual data blocks accumulate on the replica server, this could cause
significant delays in insert operations after switchover. Such an insert
operation may need to visit many data blocks marked as having sufficient
space in the FSM, only to discover that the information is incorrect and the
FSM records need to be corrected. In a heavily trafficked insert-only
table with many concurrent clients, this has been observed to cause
stalls of several seconds, with visible application malfunction. The
desire to avoid such cases motivated commit ab7dbd681, which introduced
an update of FSM data during heap_xlog_visible replay.
However, an update to the FSM data on the standby side could be lost due to a
missing 'dirty' flag, so there is still a possibility that a large number of
FSM records will contain incorrect data. Note that having a zeroed FSM page
in such a case (due to a checksum mismatch) is preferable, as a zero value
will be interpreted as an indication of full data blocks, and the inserter
will be routed to the next FSM block or to the end of the table.
Given that the FSM is prepared to handle torn page writes, and that
XLogRecordPageWithFreeSpace is called only during recovery, there seems
to be no reason to use MarkBufferDirtyHint here instead of a regular
MarkBufferDirty call.
Discussion: https://postgr.es/m/596c4f1c-f966-4512-b9c9-dd8fbcaf0928%40postgrespro.ru
Author: Alexey Makhmutov <a.makhmutov@postgrespro.ru>
Reviewed-by: Andrey Borodin <x4mmm@yandex-team.ru>
Reviewed-by: Melanie Plageman <melanieplageman@gmail.com>
Reviewed-by: Alexander Korotkov <aekorotkov@gmail.com> M src/backend/storage/freespace/freespace.c
Add missing connection validation in ECPG
commit : e2688ea5e411e3ff995c95dd207d1e1911142b8a
author : Andrew Dunstan <andrew@dunslane.net>
date : Fri, 1 May 2026 15:12:28 -0400
committer: Andrew Dunstan <andrew@dunslane.net>
date : Fri, 1 May 2026 15:12:28 -0400 ECPGdeallocate_all(), ECPGprepared_statement(), ECPGget_desc(), and
ecpg_freeStmtCacheEntry() could crash with a SIGSEGV when called
without an established connection (for example, when EXEC SQL CONNECT
was forgotten or a non-existent connection name was used), because
they dereferenced the result of ecpg_get_connection() without first
checking it for NULL.
Each site is fixed in the style of the surrounding code.
New tests are added for these conditions.
Author: Shruthi Gowda <gowdashru@gmail.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Fujii Masao <masao.fujii@gmail.com>
Reviewed-by: Mahendra Singh Thalor <mahi6run@gmail.com>
Reviewed-by: Nishant Sharma <nishant.sharma@enterprisedb.com>
Discussion: https://postgr.es/m/3007317.1765210195@sss.pgh.pa.us
Backpatch-through: 14 M src/interfaces/ecpg/ecpglib/descriptor.c
M src/interfaces/ecpg/ecpglib/prepare.c
M src/interfaces/ecpg/test/connect/.gitignore
M src/interfaces/ecpg/test/connect/Makefile
M src/interfaces/ecpg/test/connect/meson.build
A src/interfaces/ecpg/test/connect/test6.pgc
M src/interfaces/ecpg/test/ecpg_schedule
A src/interfaces/ecpg/test/expected/connect-test6.c
A src/interfaces/ecpg/test/expected/connect-test6.stderr
A src/interfaces/ecpg/test/expected/connect-test6.stdout
doc: Mention validation attempt during ALTER INDEX .. ATTACH PARTITION
commit : 7a24fad3d9f39cfa187b3fd240211c0de7615f9a
author : Michael Paquier <michael@paquier.xyz>
date : Fri, 1 May 2026 13:10:38 +0900
committer: Michael Paquier <michael@paquier.xyz>
date : Fri, 1 May 2026 13:10:38 +0900 Since 9d3e094f12, the command attempts to validate the parent index of
the named index if the parent is invalid. The documentation did not
mention this behavior, which could be confusing.
Author: Mohamed ALi <moali.pg@gmail.com>
Discussion: https://postgr.es/m/CAGnOmWpHu25_LpT=zv7KtetQhqV1QEZzFYLd_TDyOLu1Od9fpw@mail.gmail.com
Backpatch-through: 14 M doc/src/sgml/ref/alter_index.sgml
Fix HAVING-to-WHERE pushdown with nondeterministic collations
commit : e8fd5e579223f669245a8f7961c71b94afec2307
author : Richard Guo <rguo@postgresql.org>
date : Fri, 1 May 2026 11:13:50 +0900
committer: Richard Guo <rguo@postgresql.org>
date : Fri, 1 May 2026 11:13:50 +0900 When GROUP BY uses a nondeterministic collation, the planner's
optimization of moving HAVING clauses to WHERE can produce incorrect
query results. The HAVING clause may apply a stricter collation that
distinguishes values the GROUP BY considers equal. Pushing such a
clause to WHERE causes it to filter individual rows before grouping,
potentially eliminating group members and changing aggregate results.
Fix this by detecting collation conflicts before flatten_group_exprs,
while the HAVING clause still contains GROUP Vars (Vars referencing
RTE_GROUP). At that point, each GROUP Var directly carries the GROUP
BY collation as its varcollid, making it straightforward to compare
against the operator's inputcollid. A mismatch where the GROUP BY
collation is nondeterministic means the clause is unsafe to push down.
RowCompareExpr is treated specially, since it carries per-column
inputcollids[] rather than a single inputcollid.
The conflicting clause indices are recorded in a Bitmapset and
consulted during the existing HAVING-to-WHERE loop, so that only
affected clauses are kept in HAVING; other safe clauses in the same
query are still pushed.
Back-patch to v18 only. The fix relies on the RTE_GROUP mechanism
introduced in v18 (commit 247dea89f), which is what lets us identify
grouping expressions and their resolved collations via GROUP Vars on
pre-flatten havingQual. Pre-v18 branches lack that machinery, so a
back-patch there would need a different approach. Given the absence
of field reports of this bug on back branches, the risk of carrying a
different fix on stable branches is not justified.
Author: Richard Guo <guofenglinux@gmail.com>
Reviewed-by: wenhui qiu <qiuwenhuifx@gmail.com>
Discussion: https://postgr.es/m/CAMbWs48Dn2wW6XM94GZsoyMiH42=KgMo+WcobPKuWvGYnWaPOQ@mail.gmail.com
Backpatch-through: 18 M src/backend/optimizer/plan/planner.c
M src/test/regress/expected/collate.icu.utf8.out
M src/test/regress/sql/collate.icu.utf8.sql
M src/tools/pgindent/typedefs.list
Fix attnum remapping in generateClonedExtStatsStmt()
commit : 149c875fc20b2025608a2b3e4a0eb2821a879894
author : Andrew Dunstan <andrew@dunslane.net>
date : Thu, 30 Apr 2026 11:04:57 -0400
committer: Andrew Dunstan <andrew@dunslane.net>
date : Thu, 30 Apr 2026 11:04:57 -0400 When cloning extended statistics via CREATE TABLE ... LIKE ... INCLUDING
STATISTICS, stxkeys holds attribute numbers from the source (parent)
table, but get_attname() was being called with the child relation's
OID. If the parent has dropped columns, the child's attribute numbers
are renumbered sequentially and no longer match, so the lookup either
returns the wrong column name (silent corruption) or errors out when
the attnum does not exist in the child.
Fix it by remapping the parent attnum through attmap before the lookup,
consistent with how expression statistics are already handled a few
lines below.
Add a regression test covering both manifestations: a 3-column parent
where the stale attnum refers to no child column (cache-lookup error),
and a 4-column parent where the stale attnum silently refers to the
wrong child column.
Author: Julien Tachoires <julmon@gmail.com>
Reviewed-by: Srinath Reddy Sadipiralla <srinath2133@gmail.com>
Discussion: https://postgr.es/m/20260415105718.tomuncfbmlt67oel@poseidon.home.virt
Backpatch-through: 14 M src/backend/parser/parse_utilcmd.c
M src/test/regress/expected/create_table_like.out
M src/test/regress/sql/create_table_like.sql
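The remapping step can be sketched in isolation (hypothetical attmap
layout; the backend uses the AttrMap machinery and get_attname()):

```c
#include <assert.h>
#include <stddef.h>

typedef short AttrNumber;

/* Hypothetical attmap shape: attmap[i] holds the attnum to use in the
 * target (child) relation for source (parent) attnum i + 1, or 0 when
 * the source column has no counterpart (e.g. it was dropped). */
static AttrNumber
remap_attnum(const AttrNumber *attmap, size_t maplen, AttrNumber src_attnum)
{
    if (src_attnum < 1 || (size_t) src_attnum > maplen)
        return 0;               /* no such source column */
    return attmap[src_attnum - 1];
}
```

With such a map in hand, the lookup uses the child's own attnum, so a
dropped column in the parent can no longer shift the name resolution onto
the wrong child column.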
Fix errno check based on EINTR in pg_flush_data()
commit : 6cb307251c5c6261286c1566496920976640108e
author : Michael Paquier <michael@paquier.xyz>
date : Thu, 30 Apr 2026 18:44:41 +0900
committer: Michael Paquier <michael@paquier.xyz>
date : Thu, 30 Apr 2026 18:44:41 +0900 Upon a failure of sync_file_range(), EINTR was checked against the
routine's return value rather than its errno. Since sync_file_range()
returns -1 on failure, the check was a no-op, defeating the retry
attempt in this case.
Oversight in 0d369ac65004.
Author: DaeMyung Kang <charsyam@gmail.com>
Discussion: https://postgr.es/m/20260429151811.1810874-1-charsyam@gmail.com
Backpatch-through: 16 M src/backend/storage/file/fd.c
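The broken and fixed patterns, reduced to standalone C (simplified; the
real call site in fd.c wraps an actual sync_file_range() retry loop):

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>

/* Buggy pattern: sync_file_range() returns -1 on any failure, so
 * comparing the return value itself against EINTR is a no-op. */
static bool
should_retry_buggy(int rc)
{
    return rc == EINTR;
}

/* Fixed pattern: on failure, consult errno (captured right after the
 * call) rather than the return value. */
static bool
should_retry_fixed(int rc, int saved_errno)
{
    return rc < 0 && saved_errno == EINTR;
}
```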
Suppress "has no symbols" linker warnings on macOS.
commit : f00cccb798707ad851ff9afbd0bec005c0f08d6b
author : Nathan Bossart <nathan@postgresql.org>
date : Wed, 29 Apr 2026 12:25:09 -0500
committer: Nathan Bossart <nathan@postgresql.org>
date : Wed, 29 Apr 2026 12:25:09 -0500 After a recent macOS update, building Postgres produces warnings
that look like this:
ranlib: warning: 'libpgport_shlib.a(pg_cpu_x86.c.o)' has no symbols
ranlib: warning: 'libpgport_shlib.a(pg_popcount_x86.c.o)' has no symbols
To fix, add a dummy symbol to files that may otherwise have none.
Per project policy, this is a candidate for back-patching into
out-of-support branches: it suppresses annoying compiler warnings
but changes no behavior.
Reported-by: Zhang Mingli <zmlpostgres@gmail.com>
Reviewed-by: John Naylor <johncnaylorls@gmail.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/229aaaf3-f529-44ed-8e50-00cb6909af21%40Spark
Backpatch-through: 13 M src/port/pg_popcount_aarch64.c
M src/port/pg_popcount_avx512.c
test_tidstore: Stabilize regression tests by sorting offsets.
commit : cfbfdb963a426c4f2f47584b83e0cc4f2f463b8b
author : Masahiko Sawada <msawada@postgresql.org>
date : Wed, 29 Apr 2026 09:10:07 -0700
committer: Masahiko Sawada <msawada@postgresql.org>
date : Wed, 29 Apr 2026 09:10:07 -0700 TidStoreSetBlockOffsets() requires its offsets array to be strictly
ascending and asserts this precondition. In test_tidstore, we were
passing random offset numbers deduplicated by a DISTINCT clause in an
array_agg() call directly to the do_set_block_offsets() test
harness. However, DISTINCT without an ORDER BY clause does not
guarantee sorted results according to the SQL standard.
Fix this by sorting the offsets in-place inside do_set_block_offsets()
before calling TidStoreSetBlockOffsets().
While this assertion failure is not observed during regular regression
tests because they use queries simple enough that the optimizer
consistently chooses plans yielding sorted results, it makes sense to
stabilize the test. The failure could theoretically occur depending on
the optimizer's plan choice, and has been reported when experimenting
with certain third-party extensions.
Backpatch to v17, where test_tidstore was introduced, to ensure
extension development on stable branches does not hit this assertion.
Reported-by: Andrei Lepikhov <lepihov@gmail.com>
Author: Andrei Lepikhov <lepihov@gmail.com>
Discussion: https://postgr.es/m/b97f1850-fc7b-43c4-9b04-4e97bb9e7dc0@gmail.com
Backpatch-through: 17 M src/test/modules/test_tidstore/test_tidstore.c
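The in-place sort can be sketched as follows (simplified; the backend
version sorts the OffsetNumber array inside do_set_block_offsets()
before calling TidStoreSetBlockOffsets()):

```c
#include <assert.h>
#include <stdlib.h>

typedef unsigned short OffsetNumber;

static int
offset_cmp(const void *a, const void *b)
{
    OffsetNumber oa = *(const OffsetNumber *) a;
    OffsetNumber ob = *(const OffsetNumber *) b;

    return (oa > ob) - (oa < ob);
}

/* Sort in place so the strictly-ascending precondition of
 * TidStoreSetBlockOffsets() holds regardless of the order the
 * plan happened to produce. */
static void
sort_offsets(OffsetNumber *offsets, size_t n)
{
    qsort(offsets, n, sizeof(OffsetNumber), offset_cmp);
}
```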
Fix nbtree skip array parallel alloc accounting.
commit : 1e71970d2d2bc38dd542f029098e05ab80fd8294
author : Peter Geoghegan <pg@bowt.ie>
date : Wed, 29 Apr 2026 11:22:21 -0400
committer: Peter Geoghegan <pg@bowt.ie>
date : Wed, 29 Apr 2026 11:22:21 -0400 btestimateparallelscan neglected to add btps_arrElems[] space overhead
for skip array scan keys that were later output by nbtree preprocessing.
Skip arrays don't actually need to use this space, but a scan with a
subsequent SAOP array will need to subscript btps_arrElems[] using a
simple so->arrayKeys[]-wise offset. so->arrayKeys[] has entries for
both kinds of arrays.
As a result of this oversight, it was possible for an index scan with a
skip array and a lower-order SAOP array to write past the allocated
shared memory boundary when storing the SAOP array's cur_elem. In
practice the problem seems to be limited to scans with many skipped
index columns, since our general approach to estimating the amount of
shared memory that will be required is fairly conservative.
To fix, have btestimateparallelscan request an extra sizeof(int) space
for key columns that might require a skip array later on.
Oversight in commit 92fe23d9, which added the nbtree skip scan
optimization.
Author: Siddharth Kothari <sidkot@google.com>
Discussion: https://postgr.es/m/CAGCUe0Lwk3C0qdkBa+OLpYc7yXwW=pbaz8Sju4xMXEQAmyp+5g@mail.gmail.com
Backpatch-through: 18 M src/backend/access/nbtree/nbtree.c
doc: Fix grammar in some logical replication pages
commit : 4fe7bac34784e978c10ddc6fca495bf3bfd509cc
author : Michael Paquier <michael@paquier.xyz>
date : Mon, 27 Apr 2026 16:17:24 +0900
committer: Michael Paquier <michael@paquier.xyz>
date : Mon, 27 Apr 2026 16:17:24 +0900 Author: Peter Smith <smithpb2250@gmail.com>
Discussion: https://postgr.es/m/CAHut+PuvY_wYLPJ4DTs7NE9Lu2ty4d-OgZAOJC-NvCM=2wwcQQ@mail.gmail.com
Backpatch-through: 14 M doc/src/sgml/logical-replication.sgml
M doc/src/sgml/ref/alter_subscription.sgml
M doc/src/sgml/ref/create_publication.sgml
Update time zone data files to tzdata release 2026b.
commit : 8a431b6d676b2279670fb7771115ce71618e1377
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Fri, 24 Apr 2026 12:28:35 -0400
committer: Tom Lane <tgl@sss.pgh.pa.us>
date : Fri, 24 Apr 2026 12:28:35 -0400 British Columbia (America/Vancouver) moved to permanent UTC-07 on
2026-03-09, which will affect their clocks beginning on 2026-11-01.
For lack of any clarity on the point, assume their TZ abbreviation
will be MST from that time forward.
Moldova (Europe/Chisinau) has followed EU DST transition times since
2022.
Backpatch-through: 14 M src/timezone/data/tzdata.zi
Fix incorrect logic for hashed IN / NOT IN with non-strict operators
commit : 035c520db86676da771bf646d1a1ee1913a38f3a
author : David Rowley <drowley@postgresql.org>
date : Fri, 24 Apr 2026 14:03:41 +1200
committer: David Rowley <drowley@postgresql.org>
date : Fri, 24 Apr 2026 14:03:41 +1200 ExecEvalHashedScalarArrayOp(), when using a strict equality function,
short-circuits lookups of NULL values. When the function is non-strict,
the code incorrectly probed the hash table with a zero-valued Datum,
which could yield an accidental true result if the hash table contained
a zero-valued Datum, or crash for non-byval types.
Here we fix this by adding an extra step when we build the hash table to
check what the result of a NULL lookup would be. This requires looping
over the array and checking what the non-hashed version of the code
would do. We cache the results of that in the expression so that we can
reuse the result any time we're asked to search for a NULL value.
It's important to note that non-strict equality functions are free to
treat any NULL value as equal to any non-NULL value. For example,
someone may wish to design a type that treats an empty string and NULL
as equal.
All built-in types have strict equality functions, so only custom /
user-defined types could be affected.
Author: Chengpeng Yan <chengpeng_yan@outlook.com>
Author: David Rowley <dgrowleyml@gmail.com>
Reviewed-by: ChangAo Chen <cca5507@qq.com>
Discussion: https://postgr.es/m/A16187AE-2359-4265-9F5E-71D015EC2B2D@outlook.com
Backpatch-through: 14 M src/backend/executor/execExprInterp.c
M src/include/executor/execExpr.h
M src/test/regress/expected/expressions.out
M src/test/regress/sql/expressions.sql
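A miniature of the caching scheme (toy int elements and a demo
non-strict comparator stand in for Datums and FmgrInfo calls; the real
state lives in the expression in execExprInterp.c):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* A non-strict equality callback: it is free to report a NULL left-hand
 * value as equal to some non-NULL array element. */
typedef bool (*eq_fn)(bool left_is_null, int left, int right);

typedef struct
{
    bool null_result_valid;     /* have we computed the NULL answer yet? */
    bool null_result;           /* cached answer for a NULL probe */
} HashedSaopState;

/* Demo non-strict equality: treats NULL as equal to 0. */
static bool
demo_eq(bool left_is_null, int left, int right)
{
    return left_is_null ? (right == 0) : (left == right);
}

/* Instead of hashing a bogus zero Datum for a NULL probe, walk the
 * array once with the non-hashed logic and cache the answer for reuse
 * on every later NULL lookup. */
static bool
lookup_null(HashedSaopState *state, eq_fn eq, const int *arr, size_t n)
{
    if (!state->null_result_valid)
    {
        bool        result = false;

        for (size_t i = 0; i < n; i++)
        {
            if (eq(true, 0, arr[i]))
            {
                result = true;
                break;
            }
        }
        state->null_result = result;
        state->null_result_valid = true;
    }
    return state->null_result;
}
```

The linear walk happens at most once per expression, so repeated NULL
probes stay as cheap as the hashed path.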
pg_test_timing: fix unit in backward-clock warning
commit : a8dbe5288b0e9514713fc5de0a195e5f7d8d3fb1
author : Fujii Masao <fujii@postgresql.org>
date : Fri, 24 Apr 2026 08:59:14 +0900
committer: Fujii Masao <fujii@postgresql.org>
date : Fri, 24 Apr 2026 08:59:14 +0900 pg_test_timing reports timing differences in nanoseconds in master, and
in microseconds in v14 through v18, but previously the backward-clock
warning incorrectly labeled the value as milliseconds.
This commit fixes the warning message to use "ns" in master and
"us" in v14 through v18, matching the actual unit being reported.
Backpatch to all supported versions.
Author: Chao Li <lic@highgo.com>
Reviewed-by: Lukas Fittl <lukas@fittl.com>
Reviewed-by: Xiaopeng Wang <wxp_728@163.com>
Reviewed-by: Fujii Masao <masao.fujii@gmail.com>
Discussion: https://postgr.es/m/F780CEEB-A237-4302-9F55-60E9D8B6533D@gmail.com
Backpatch-through: 14 M src/bin/pg_test_timing/pg_test_timing.c
Don't call CheckAttributeType() with InvalidOid on dropped cols
commit : 01db3f0398fd2e75569473fc910de5d76d1088b7
author : Heikki Linnakangas <heikki.linnakangas@iki.fi>
date : Thu, 23 Apr 2026 21:05:27 +0300
committer: Heikki Linnakangas <heikki.linnakangas@iki.fi>
date : Thu, 23 Apr 2026 21:05:27 +0300 If CheckAttributeType() is called with InvalidOid, it performs a bunch
of pointless syscache lookups with InvalidOid, but ultimately tolerates
it and has no effect. We were calling it with InvalidOid on dropped
columns; it seems accidental that that works, so let's stop doing it.
Reviewed-by: Chao Li <li.evan.chao@gmail.com>
Discussion: https://www.postgresql.org/message-id/93ce56cd-02a6-4db1-8224-c8999372facc@iki.fi
Backpatch-through: 14 M src/backend/catalog/heap.c
Don't allow composite type to be member of itself via multirange
commit : ff8f27d6eae2d9ed6dcfa290ebc133fec093f414
author : Heikki Linnakangas <heikki.linnakangas@iki.fi>
date : Thu, 23 Apr 2026 21:28:11 +0300
committer: Heikki Linnakangas <heikki.linnakangas@iki.fi>
date : Thu, 23 Apr 2026 21:28:11 +0300 CheckAttributeType() checks that a composite type is not made a member
of itself with ALTER TABLE ADD COLUMN or ALTER TYPE ADD ATTRIBUTE,
even indirectly via a domain, array, another composite type or a range
type. But it missed checking for multiranges. That was a simple
oversight when multiranges were added.
Reviewed-by: Chao Li <li.evan.chao@gmail.com>
Discussion: https://www.postgresql.org/message-id/93ce56cd-02a6-4db1-8224-c8999372facc@iki.fi
Backpatch-through: 14 M src/backend/catalog/heap.c
M src/test/regress/expected/multirangetypes.out
M src/test/regress/sql/multirangetypes.sql
catcache.c: use C_COLLATION_OID for texteqfast/texthashfast.
commit : 03c4f243e0a289cb56f639c80f5a265401d5a5ea
author : Jeff Davis <jdavis@postgresql.org>
date : Wed, 22 Apr 2026 10:22:44 -0700
committer: Jeff Davis <jdavis@postgresql.org>
date : Wed, 22 Apr 2026 10:22:44 -0700 The problem report was about setting GUCs in the startup packet for a
physical replication connection. Setting the GUC required an ACL
check, which performed a lookup on pg_parameter_acl.parname. The
catalog cache was hardwired to use DEFAULT_COLLATION_OID for
texteqfast() and texthashfast(), but the database default collation
was uninitialized because it's a physical walsender and never connects
to a database. In versions 18 and later, this resulted in a NULL
pointer dereference, while in version 17 it resulted in an ERROR.
As the comments stated, using DEFAULT_COLLATION_OID was arbitrary
anyway: if the collation actually mattered, it should have used the
column's actual collation. (In the catalogs, some text columns use the
default collation and some use "C".)
Fix by using C_COLLATION_OID, which doesn't require any initialization
and is always available. When any deterministic collation will do,
it's best to consistently use the simplest and fastest one, so this is
a good idea anyway.
Another problem was raised in the thread, which this commit doesn't
fix (see second discussion link).
Reported-by: Andrey Borodin <x4mmm@yandex-team.ru>
Discussion: https://postgr.es/m/D18AD72A-5004-4EF8-AF80-10732AF677FA@yandex-team.ru
Discussion: https://postgr.es/m/4524ed61a015d3496fc008644dcb999bb31916a7.camel%40j-davis.com
Backpatch-through: 17 M src/backend/utils/cache/catcache.c
Guard against overly-long numeric formatting symbols from locale.
commit : 580e7be88ce2b5d15df83da6496b3f23a81e3163
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Wed, 22 Apr 2026 12:41:00 -0400
committer: Tom Lane <tgl@sss.pgh.pa.us>
date : Wed, 22 Apr 2026 12:41:00 -0400 to_char() allocates its output buffer with 8 bytes per formatting
code in the pattern. If the locale's currency symbol, thousands
separator, or decimal or sign symbol is more than 8 bytes long,
in principle we could overrun the output buffer. No such locales
exist in the real world, so it seems sufficient to truncate the
symbol if we do see it's too long.
Reported-by: Xint Code
Author: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/638232.1776790821@sss.pgh.pa.us
Backpatch-through: 14 M src/backend/utils/adt/formatting.c
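The guard amounts to clipping the locale-supplied symbol to the
per-formatting-code byte budget before copying it. A minimal Python
sketch of the idea (the 8-byte budget matches the description above;
everything else is illustrative, not the actual formatting.c code):

```python
MAX_SYMBOL_BYTES = 8  # output buffer space budgeted per formatting code

def safe_symbol(locale_symbol: bytes) -> bytes:
    """Clip a locale-supplied numeric symbol (currency sign, separator,
    or sign symbol) so the output buffer, sized at 8 bytes per
    formatting code, cannot be overrun.  A byte-level clip for the
    sketch; real code would also mind multibyte boundaries."""
    return locale_symbol[:MAX_SYMBOL_BYTES]

# real-world locales never need truncation
assert safe_symbol(b"$") == b"$"
assert len(safe_symbol(b"\xe2\x82\xac" * 10)) == MAX_SYMBOL_BYTES
```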
Prevent some buffer overruns in spell.c's parsing of affix files.
commit : 00c6e08195d5b14bd022644dba64698c2640a8e4
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Wed, 22 Apr 2026 12:02:15 -0400
committer: Tom Lane <tgl@sss.pgh.pa.us>
date : Wed, 22 Apr 2026 12:02:15 -0400 parse_affentry() and addCompoundAffixFlagValue() each collect fields
from an affix file into working buffers of size BUFSIZ. They failed
to defend against overlength fields, so that a malicious affix file
could cause a stack smash. BUFSIZ (typically 8K) is certainly way
longer than any reasonable affix field, but let's fix this while
we're closing holes in this area.
I chose to do this by silently truncating the input before it can
overrun the buffer, using logic comparable to the existing logic in
get_nextfield(). Certainly there's at least as good an argument for
raising an error, but for now let's follow the existing precedent.
Reported-by: Igor Stepansky <igor.stepansky@orca.security>
Author: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Andrey Borodin <x4mmm@yandex-team.ru>
Discussion: https://postgr.es/m/864123.1776810909@sss.pgh.pa.us
Backpatch-through: 14 M src/backend/tsearch/spell.c
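The silent-truncation approach can be sketched as follows; this is an
illustrative Python model of get_nextfield-style copying, with a tiny
buffer size so the truncation is visible, not the actual spell.c code:

```python
BUFSIZE = 8  # tiny stand-in for the real BUFSIZ, to make truncation visible

def copy_field(src: str, stop_chars: str = "/ \t\n") -> str:
    """Collect one affix-file field into a bounded buffer: stop at a
    field delimiter, and once the buffer is full keep scanning but
    silently drop further characters instead of overrunning."""
    out = []
    for ch in src:
        if ch in stop_chars:
            break
        if len(out) < BUFSIZE - 1:  # reserve room for the terminator
            out.append(ch)
    return "".join(out)

assert copy_field("prefix/flags") == "prefix"
assert copy_field("averylongaffixfield") == "averylo"  # truncated, no overrun
```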
Prevent buffer overrun in spell.c's CheckAffix().
commit : c2bfeb3bbaa7b036295fa9cdbf9181dd7274e7ab
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Wed, 22 Apr 2026 10:47:56 -0400
committer: Tom Lane <tgl@sss.pgh.pa.us>
date : Wed, 22 Apr 2026 10:47:56 -0400 This function writes into a caller-supplied buffer of length
2 * MAXNORMLEN, which should be plenty in real-world cases.
However a malicious affix file could supply an affix long
enough to overrun that. Defend by just rejecting the match
if it would overrun the buffer. I also inserted a check of
the input word length against Affix->replen, just to be sure
we won't index off the buffer, though it would be caller error
for that not to be true.
Also make the actual copying steps a bit more readable, and remove
an unnecessary requirement for the whole input word to fit into the
output buffer (even though it always will with the current caller).
The lack of documentation in this code makes my head hurt, so
I also reverse-engineered a basic header comment for CheckAffix.
Reported-by: Xint Code
Author: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Andrey Borodin <x4mmm@yandex-team.ru>
Discussion: https://postgr.es/m/641711.1776792744@sss.pgh.pa.us
Backpatch-through: 14 M src/backend/tsearch/spell.c
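A rough Python model of the defense (the rule direction and names here
are assumptions for illustration; the real CheckAffix operates on the
dictionary's internal representation):

```python
MAXNORMLEN = 8
BUFLEN = 2 * MAXNORMLEN  # size of the caller-supplied result buffer

def check_affix(word: str, repl: str, find: str):
    """Undo a suffix rule to recover a candidate root: reject the match
    if the word is shorter than the part to strip (caller-error guard),
    or if the candidate would overrun the 2*MAXNORMLEN result buffer."""
    if len(word) < len(repl) or not word.endswith(repl):
        return None
    candidate = word[: len(word) - len(repl)] + find
    if len(candidate) >= BUFLEN:  # would overrun: just reject the match
        return None
    return candidate

assert check_affix("walked", "ed", "") == "walk"
assert check_affix("walked", "ed", "x" * 100) is None  # overlong affix rejected
```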
Fix UPDATE/DELETE ... WHERE CURRENT OF on a table with virtual columns.
commit : f3d03fbd5d017c8e8e42a3b3bcca696cfd94a8c3
author : Dean Rasheed <dean.a.rasheed@gmail.com>
date : Wed, 22 Apr 2026 11:50:18 +0100
committer: Dean Rasheed <dean.a.rasheed@gmail.com>
date : Wed, 22 Apr 2026 11:50:18 +0100 Formerly, attempting to use WHERE CURRENT OF to update or delete from
a table with virtual generated columns would fail with the error
"WHERE CURRENT OF on a view is not implemented".
The reason was that the check preventing WHERE CURRENT OF from being
used on a view was in replace_rte_variables_mutator(), which presumed
that the only way it could get there was as part of rewriting a query
on a view. That is no longer the case, since replace_rte_variables()
is now also used to expand the virtual generated columns of a table.
Fix by doing the check for WHERE CURRENT OF on a view at parse time.
This is safe, since it is no longer possible for the relkind to change
after the query is parsed (as of b23cd185f).
Reported-by: Satyanarayana Narlapuram <satyanarlapuram@gmail.com>
Author: Satyanarayana Narlapuram <satyanarlapuram@gmail.com>
Author: Dean Rasheed <dean.a.rasheed@gmail.com>
Discussion: https://postgr.es/m/CAHg+QDc_TwzSgb=B_QgNLt3mvZdmRK23rLb+RkanSQkDF40GjA@mail.gmail.com
Backpatch-through: 18 M src/backend/parser/analyze.c
M src/backend/rewrite/rewriteManip.c
M src/test/regress/expected/generated_virtual.out
M src/test/regress/expected/portals.out
M src/test/regress/sql/generated_virtual.sql
M src/test/regress/sql/portals.sql
Fix expansion of EXCLUDED virtual generated columns.
commit : cf38dedf693a17f9317d8ed85ab7468afebf8cbf
author : Dean Rasheed <dean.a.rasheed@gmail.com>
date : Wed, 22 Apr 2026 09:03:44 +0100
committer: Dean Rasheed <dean.a.rasheed@gmail.com>
date : Wed, 22 Apr 2026 09:03:44 +0100 If the SET or WHERE clause of an INSERT ... ON CONFLICT command
references EXCLUDED.col, where col is a virtual generated column, the
column was not properly expanded, leading to an "unexpected virtual
generated column reference" error, or incorrect results.
The problem was that expand_virtual_generated_columns() would expand
virtual generated columns in both the SET and WHERE clauses and in the
targetlist of the EXCLUDED pseudo-relation (exclRelTlist). Then
fix_join_expr() from set_plan_refs() would turn the expanded
expressions in the SET and WHERE clauses back into Vars, because they
would be found to match the expression entries in the indexed tlist
produced from exclRelTlist.
To fix this, arrange for expand_virtual_generated_columns() to not
expand virtual generated columns in exclRelTlist. This forces
set_plan_refs() to resolve generation expressions in the query using
non-virtual columns, as required by the executor.
In addition, exclRelTlist now always contains only Vars. That was
something already claimed in a couple of existing comments in the
planner, which relied on that fact to skip some processing, though
those did not appear to constitute active bugs.
Reported-by: Satyanarayana Narlapuram <satyanarlapuram@gmail.com>
Author: Satyanarayana Narlapuram <satyanarlapuram@gmail.com>
Author: Dean Rasheed <dean.a.rasheed@gmail.com>
Discussion: https://postgr.es/m/CAHg+QDf7wTLz_vqb1wi1EJ_4Uh+Vxm75+b4c-Ky=6P+yOAHjbQ@mail.gmail.com
Backpatch-through: 18 M src/backend/optimizer/prep/prepjointree.c
M src/test/regress/expected/generated_virtual.out
M src/test/regress/sql/generated_virtual.sql
Allow ALTER INDEX .. ATTACH PARTITION to validate a parent index
commit : 5713ac248f266c689d93999aacd318d9f7f9daec
author : Michael Paquier <michael@paquier.xyz>
date : Wed, 22 Apr 2026 10:34:33 +0900
committer: Michael Paquier <michael@paquier.xyz>
date : Wed, 22 Apr 2026 10:34:33 +0900 This commit tweaks ALTER INDEX .. ATTACH PARTITION to attempt a
validation of a parent index in the case where an index is already
attached but the parent is not yet valid. This occurs in cases where a
parent index was created invalid, such as with CREATE INDEX ONLY, and was
left invalid after an invalid child index was attached (a partitioned
index's indisvalid is true iff all of its partitions are indisvalid).
This could leave a partition tree in a
situation where a user could not bring the parent index back to valid
after fixing the child index, as there is no built-in mechanism to do
so. This commit relies on the fact that repeated ATTACH PARTITION
commands on the same index silently succeed.
An invalid parent index is more than just a passive issue. It causes
failures, for example in ON CONFLICT on a partitioned table, if the
invalid parent index is used to enforce a unique constraint.
Some test cases are added to track some of the problematic patterns,
using a set of partition trees with combinations of invalid indexes and
ATTACH PARTITION.
Reported-by: Mohamed Ali <moali.pg@gmail.com>
Author: Sami Imseih <sanmimseih@gmail.com>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Reviewed-by: Haibo Yan <tristan.yim@gmail.com>
Discussion: http://postgr.es/m/CAGnOmWqi1D9ycBgUeOGf6mOCd2Dcf=6sKhbf4sHLs5xAcKVCMQ@mail.gmail.com
Backpatch-through: 14 M src/backend/commands/tablecmds.c
M src/test/regress/expected/indexing.out
M src/test/regress/sql/indexing.sql
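The validity rule and the new re-attach behavior can be modeled in a
few lines of Python (a sketch of the catalog-level rule, not of
tablecmds.c):

```python
def attach_partition(parent: dict, child: str, child_valid: bool) -> None:
    """Attach (or re-attach) a child index and re-evaluate the parent:
    a partitioned index is valid iff every expected partition has a
    valid attached index."""
    parent["children"][child] = child_valid
    parent["indisvalid"] = (
        set(parent["children"]) == set(parent["expected"])
        and all(parent["children"].values())
    )

parent = {"expected": {"p1", "p2"}, "children": {}, "indisvalid": False}
attach_partition(parent, "p1", True)
attach_partition(parent, "p2", False)  # invalid child keeps the parent invalid
assert not parent["indisvalid"]
# after fixing the child index, repeating ATTACH now revalidates the parent
attach_partition(parent, "p2", True)
assert parent["indisvalid"]
```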
Make plpgsql_trap test more robust and less resource-intensive.
commit : 496169e525cd4fe7c8ace3a7cffb52ac87e504b9
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Tue, 21 Apr 2026 10:54:39 -0400
committer: Tom Lane <tgl@sss.pgh.pa.us>
date : Tue, 21 Apr 2026 10:54:39 -0400 We were using "select count(*) into x from generate_series(1,
1_000_000_000_000)" to waste one second waiting for a statement
timeout trap. Aside from consuming CPU to little purpose, this could
easily eat several hundred MB of temporary file space, which has been
observed to cause out-of-disk-space errors in the buildfarm.
Let's just use "pg_sleep(10)", which is far less resource-intensive.
Also update the "when others" exception handler so that if it ever
again traps an error, it will tell us which error it was. The cause of
these intermittent buildfarm failures had been obscure for a while.
Discussion: https://postgr.es/m/557992.1776779694@sss.pgh.pa.us
Backpatch-through: 14 M src/pl/plpgsql/src/expected/plpgsql_trap.out
M src/pl/plpgsql/src/sql/plpgsql_trap.sql
Fix incorrect NEW references to generated columns in rule rewriting
commit : e528bfe971900fcd1a74da53931ade4c06eca662
author : Richard Guo <rguo@postgresql.org>
date : Tue, 21 Apr 2026 14:28:26 +0900
committer: Richard Guo <rguo@postgresql.org>
date : Tue, 21 Apr 2026 14:28:26 +0900 When a rule action or rule qualification references NEW.col where col
is a generated column (stored or virtual), the rewriter produces
incorrect results.
rewriteTargetListIU removes generated columns from the query's target
list, since stored generated columns are recomputed by the executor
and virtual ones store nothing. However, ReplaceVarsFromTargetList
then cannot find these columns when resolving NEW references during
rule rewriting. For UPDATE, the REPLACEVARS_CHANGE_VARNO fallback
redirects NEW.col to the original target relation, making it read the
pre-update value (same as OLD.col). For INSERT,
REPLACEVARS_SUBSTITUTE_NULL replaces it with NULL. Both are wrong
when the generated column depends on columns being modified.
Fix by building target list entries for generated columns from their
generation expressions, pre-resolving the NEW.attribute references
within those expressions against the query's targetlist, and passing
them together with the query's targetlist to ReplaceVarsFromTargetList.
Back-patch to all supported branches. Virtual generated columns were
added in v18, so the back-patches in pre-v18 branches only handle
stored generated columns.
Reported-by: SATYANARAYANA NARLAPURAM <satyanarlapuram@gmail.com>
Author: Richard Guo <guofenglinux@gmail.com>
Author: Dean Rasheed <dean.a.rasheed@gmail.com>
Reviewed-by: Chao Li <li.evan.chao@gmail.com>
Discussion: https://postgr.es/m/CAHg+QDexGTmCZzx=73gXkY2ZADS6LRhpnU+-8Y_QmrdTS6yUhA@mail.gmail.com
Backpatch-through: 14 M src/backend/rewrite/rewriteHandler.c
M src/test/regress/expected/generated_stored.out
M src/test/regress/expected/generated_virtual.out
M src/test/regress/sql/generated_stored.sql
M src/test/regress/sql/generated_virtual.sql
Fix orphaned processes when startup process fails during PM_STARTUP
commit : affdb2dd5c67e1e4135ab19a71114359e0fd0eb9
author : Michael Paquier <michael@paquier.xyz>
date : Tue, 21 Apr 2026 09:40:03 +0900
committer: Michael Paquier <michael@paquier.xyz>
date : Tue, 21 Apr 2026 09:40:03 +0900 When the startup process exits with a FATAL error during PM_STARTUP,
the postmaster called ExitPostmaster() directly, assuming that no other
processes are running at this stage. Since 7ff23c6d277d, this
assumption is not true, as the checkpointer, the background writer, the
IO workers and bgworkers kicking in early would be around.
This commit removes the startup-specific shortcut taken in
process_pm_child_exit() for a startup process failing during PM_STARTUP,
falling through to the existing exit flow that signals all the started
children with SIGQUIT, so that there is no risk of creating orphaned
processes.
This required an extra change in HandleFatalError() for v18 and newer
versions, which contained an assertion that could be triggered for
PM_STARTUP and is now incorrect. In v17 and older versions,
HandleChildCrash() needs to be changed to handle PM_STARTUP so that
children can be waited on.
While at it, fix a comment at the top of postmaster.c. It was claiming
that the checkpointer and the background writer were started after
PM_RECOVERY. That is not the case.
Author: Ayush Tiwari <ayushtiwari.slg01@gmail.com>
Discussion: https://postgr.es/m/CAJTYsWVoD3V9yhhqSae1_wqcnTdpFY-hDT7dPm5005ZFsL_bpA@mail.gmail.com
Backpatch-through: 15 M src/backend/postmaster/postmaster.c
doc: Correct context description for some JIT support GUCs
commit : 0c9ebb4f7bea3b34877cb10ed7c2d0c1f311a815
author : Fujii Masao <fujii@postgresql.org>
date : Tue, 21 Apr 2026 08:44:19 +0900
committer: Fujii Masao <fujii@postgresql.org>
date : Tue, 21 Apr 2026 08:44:19 +0900 The documentation for jit_debugging_support and jit_profiling_support
previously stated that these parameters can only be set at server start.
However, both parameters use the PGC_SU_BACKEND context, meaning they
can be set at session start by superusers or users granted the appropriate
SET privilege, but cannot be changed within an active session.
This commit updates the documentation to reflect the actual behavior.
Backpatch to all supported versions.
Author: Fujii Masao <masao.fujii@gmail.com>
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Discussion: https://postgr.es/m/CAHGQGwEpMDpB-K8SSUVRRHg6L6z3pLAkekd9aviOS=ns0EC=+Q@mail.gmail.com
Backpatch-through: 14 M doc/src/sgml/config.sgml
Fix callers of unicode_strtitle() using srclen == -1.
commit : 6044f55a47f575bac7af89fb22d4a2ff428c7c52
author : Jeff Davis <jdavis@postgresql.org>
date : Mon, 20 Apr 2026 14:44:08 -0700
committer: Jeff Davis <jdavis@postgresql.org>
date : Mon, 20 Apr 2026 14:44:08 -0700 Currently, only called that way in tests, which failed to fail.
Discussion: https://postgr.es/m/581a72ff452bb045ba83bbe3c6cf4467702d4f0f.camel@j-davis.com
Backpatch-through: 18 M src/backend/utils/adt/pg_locale_builtin.c
M src/common/unicode/case_test.c
Clean up all relid fields of RestrictInfos during join removal.
commit : 16fb94605c8f73e113d100ebb9e1d96642c85767
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Mon, 20 Apr 2026 14:48:23 -0400
committer: Tom Lane <tgl@sss.pgh.pa.us>
date : Mon, 20 Apr 2026 14:48:23 -0400 The original implementation of remove_rel_from_restrictinfo()
thought it could skate by with removing no-longer-valid relid
bits from only the clause_relids and required_relids fields.
This is quite bogus, although somehow we had not run across a
counterexample before now. At minimum, the left_relids and
right_relids fields need to be fixed because they will be
examined later by clause_sides_match_join(). But it seems
pretty foolish not to fix all the relid fields, so do that.
This needs to be back-patched as far as v16, because the
bug report shows a planner failure that does not occur
before v16. I'm a little nervous about back-patching,
because this could cause unexpected plan changes due to
opening up join possibilities that were rejected before.
But it's hard to argue that this isn't a regression. Also,
the fact that this changes no existing regression test results
suggests that the scope of changes may be fairly narrow.
I'll refrain from back-patching further though, since no
adverse effects have been demonstrated in older branches.
Bug: #19460
Reported-by: François Jehl <francois.jehl@pigment.com>
Author: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Richard Guo <guofenglinux@gmail.com>
Discussion: https://postgr.es/m/19460-5625143cef66012f@postgresql.org
Backpatch-through: 16 M src/backend/optimizer/plan/analyzejoins.c
M src/test/regress/expected/join.out
M src/test/regress/sql/join.sql
Flush statistics during idle periods in parallel apply worker.
commit : 44c8dc28017800892adbdc7f248f6bed5aa90bef
author : Amit Kapila <akapila@postgresql.org>
date : Mon, 20 Apr 2026 10:23:22 +0530
committer: Amit Kapila <akapila@postgresql.org>
date : Mon, 20 Apr 2026 10:23:22 +0530 Parallel apply workers previously failed to report statistics while
waiting for new work in the main loop. This resulted in the stats from the
most recent transaction remaining unbuffered, leading to arbitrary
reporting delays—particularly when streamed transactions were infrequent.
This commit ensures that statistics are explicitly flushed when the worker
is idle, providing timely visibility into accumulated worker activity.
Author: Zhijie Hou <houzj.fnst@fujitsu.com>
Reviewed-by: Chao Li <li.evan.chao@gmail.com>
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
Backpatch-through: 16, where it was introduced
Discussion: https://postgr.es/m/TYRPR01MB1419579F217CC4332B615589594202@TYRPR01MB14195.jpnprd01.prod.outlook.com M src/backend/replication/logical/applyparallelworker.c
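The shape of the fix, in an illustrative Python event loop (the queue
and names are stand-ins for the worker's shared-memory machinery):

```python
import queue

def worker_loop(work: queue.Queue, stats: dict, flushed: list) -> None:
    """Process available work; before going idle, flush any pending
    statistics so they become visible without arbitrary delay."""
    while True:
        try:
            work.get_nowait()
        except queue.Empty:
            if stats:                 # about to go idle: flush now
                flushed.append(dict(stats))
                stats.clear()
            return                    # a real worker would block here
        stats["processed"] = stats.get("processed", 0) + 1

q = queue.Queue()
q.put("txn-1")
q.put("txn-2")
stats, flushed = {}, []
worker_loop(q, stats, flushed)
assert flushed == [{"processed": 2}] and stats == {}
```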
doc: Improve description of pg_ctl -l log file permissions
commit : 7a8c4344f749418ec8d7f4d51a6073696526217e
author : Fujii Masao <fujii@postgresql.org>
date : Fri, 17 Apr 2026 15:30:59 +0900
committer: Fujii Masao <fujii@postgresql.org>
date : Fri, 17 Apr 2026 15:30:59 +0900 The documentation stated only that the log file created by pg_ctl -l is
inaccessible to other users by default. However, since commit c37b3d0,
the actual behavior is that only the cluster owner has access by default,
but users in the same group as the cluster owner may also read the file
if group access is enabled in the cluster.
This commit updates the documentation to describe this behavior
more clearly.
Backpatch to all supported versions.
Author: Hayato Kuroda <kuroda.hayato@fujitsu.com>
Reviewed-by: Andreas Karlsson <andreas@proxel.se>
Reviewed-by: Xiaopeng Wang <wxp_728@163.com>
Reviewed-by: Fujii Masao <masao.fujii@gmail.com>
Discussion: https://postgr.es/m/OS9PR01MB1214959BE987B4839E3046050F54BA@OS9PR01MB12149.jpnprd01.prod.outlook.com
Backpatch-through: 14 M doc/src/sgml/ref/pg_ctl-ref.sgml
Fix comments for Korean encodings in encnames.c
commit : ea94d2e6734ec5196bb276f21a04fef0b8dbc9a5
author : Thomas Munro <tmunro@postgresql.org>
date : Thu, 16 Apr 2026 18:17:05 +1200
committer: Thomas Munro <tmunro@postgresql.org>
date : Thu, 16 Apr 2026 18:17:05 +1200 * JOHAB: replace the incorrect "simplified Chinese" description with
a correct one that identifies it as the Korean combining (Johab)
encoding standardized in KS X 1001 annex 3.
* EUC_KR: drop a stray space before the comma in the existing
comment, and note that the encoding covers the KS X 1001
precomposed (Wansung) form.
* UHC: spell out "Unified Hangul Code", clarify that it is
Microsoft Windows CodePage 949, and describe its relationship to
EUC-KR (superset covering all 11,172 precomposed Hangul syllables).
Backpatch-through: 14
Author: Henson Choi <assam258@gmail.com>
Discussion: https://postgr.es/m/CAAAe_zAFz1v-3b7Je4L%2B%3DwZM3UGAczXV47YVZfZi9wbJxspxeA%40mail.gmail.com M src/common/encnames.c
Fix pg_overexplain to emit valid output with RANGE_TABLE option.
commit : 6723d462db5a9ecea840b946f6b2843c20b1c866
author : Amit Langote <amitlan@postgresql.org>
date : Thu, 16 Apr 2026 13:49:39 +0900
committer: Amit Langote <amitlan@postgresql.org>
date : Thu, 16 Apr 2026 13:49:39 +0900 overexplain_range_table() emitted the "Unprunable RTIs" and "Result
RTIs" properties before closing the "Range Table" group. In the JSON
and YAML formats the Range Table group is rendered as an array of RTE
objects, so emitting key/value pairs inside it produced structurally
invalid output. The XML format had a related oddity, with these
elements nested inside <Range-Table> rather than appearing as its
siblings.
These fields are properties of the PlannedStmt as a whole, not of any
individual RTE, so close the Range Table group before emitting them.
They now appear as siblings of "Range Table" in the parent Query
object, which is what was intended.
Also add a test exercising FORMAT JSON with RANGE_TABLE so that any
future regression in the output structure is caught.
Reported-by: Satyanarayana Narlapuram <satyanarlapuram@gmail.com>
Author: Satyanarayana Narlapuram <satyanarlapuram@gmail.com>
Reviewed-by: Amit Langote <amitlangote09@gmail.com>
Reviewed-by: Chao Li <li.evan.chao@gmail.com>
Discussion: https://postgr.es/m/CAHg+QDdDrdqMr98a_OBYDYmK3RaT7XwCEShZfvDYKZpZTfOEjQ@mail.gmail.com
Backpatch-through: 18 M contrib/pg_overexplain/expected/pg_overexplain.out
M contrib/pg_overexplain/pg_overexplain.c
M contrib/pg_overexplain/sql/pg_overexplain.sql
Fix incorrect comment in JsonTablePlanJoinNextRow()
commit : f3fb145a0adb2c0ae34f4e36cf55bb1d60df3ddd
author : Amit Langote <amitlan@postgresql.org>
date : Thu, 16 Apr 2026 11:52:16 +0900
committer: Amit Langote <amitlan@postgresql.org>
date : Thu, 16 Apr 2026 11:52:16 +0900 The comment on the return-false path when both UNION siblings are
exhausted said "there are more rows," which is the opposite of what
the code does. The code itself is correct, returning false to signal
no more rows, but the misleading comment could tempt a reader into
"fixing" the return value, which would cause UNION plans to loop
indefinitely.
Back-patch to 17, where JSON_TABLE was introduced.
Author: Chuanwen Hu <463945512@qq.com>
Reviewed-by: Chao Li <li.evan.chao@gmail.com>
Discussion: https://postgr.es/m/tencent_4CC6316F02DECA61ACCF22F933FEA5C12806@qq.com
Backpatch-through: 17 M src/backend/utils/adt/jsonpath_exec.c
Check for unterminated strings when calling uloc_getLanguage().
commit : ca938ec213d954aaa9eba60bbbfbfa924fac0d7a
author : Jeff Davis <jdavis@postgresql.org>
date : Tue, 14 Apr 2026 12:06:02 -0700
committer: Jeff Davis <jdavis@postgresql.org>
date : Tue, 14 Apr 2026 12:06:02 -0700 Missed by commit 1671f990dd66.
Author: Andreas Karlsson <andreas@proxel.se>
Discussion: https://postgr.es/m/118ca69e-47eb-42e1-83e9-72ccf40dd6fd@proxel.se
Backpatch-through: 16 M src/bin/initdb/initdb.c
Add tests for low-level PGLZ [de]compression routines
commit : 42473d90098da256e3bf2bdf793e8b7a35de384b
author : Michael Paquier <michael@paquier.xyz>
date : Wed, 15 Apr 2026 05:09:08 +0900
committer: Michael Paquier <michael@paquier.xyz>
date : Wed, 15 Apr 2026 05:09:08 +0900 The goal of this module is to provide an entry point for the coverage of
the low-level compression and decompression PGLZ routines. The new test
is moved to a new parallel group, with all the existing
compression-related tests added to it.
This includes tests for the cases detected by fuzzing that emulate
corrupted compressed data, as fixed by 2b5ba2a0a141:
- Set control bit with read of a match tag, where no data follows.
- Set control bit with read of a match tag, where 1 byte follows.
- Set control bit with match tag where length nibble is 3 bytes
(extended case).
While at it, some tests are added for compress/decompress roundtrips,
and for check_complete=false/true. Like 2b5ba2a0a141, backpatch to all
the stable branches.
Discussion: https://postgr.es/m/adw647wuGjh1oU6p@paquier.xyz
Backpatch-through: 14 A src/test/regress/expected/compression_pglz.out
M src/test/regress/parallel_schedule
M src/test/regress/regress.c
A src/test/regress/sql/compression_pglz.sql
Fix overrun when comparing with unterminated ICU language string.
commit : 6393259bd49db307c7113d70da355544479ee342
author : Jeff Davis <jdavis@postgresql.org>
date : Mon, 13 Apr 2026 11:19:21 -0700
committer: Jeff Davis <jdavis@postgresql.org>
date : Mon, 13 Apr 2026 11:19:21 -0700 The overrun was introduced in commit c4ff35f10.
Author: Andreas Karlsson <andreas@proxel.se>
Reported-by: Alexander Lakhin <exclusion@gmail.com>
Discussion: https://postgr.es/m/96d80a47-f17f-42fa-82b1-2908efbd6541@gmail.com
Backpatch-through: 18 M src/backend/utils/adt/pg_locale_icu.c
Fix excessive logging in idle slotsync worker.
commit : 540fe8fb5c22a6c724513f62daaae79af5d559cb
author : Amit Kapila <akapila@postgresql.org>
date : Mon, 13 Apr 2026 09:42:51 +0530
committer: Amit Kapila <akapila@postgresql.org>
date : Mon, 13 Apr 2026 09:42:51 +0530 The slotsync worker was incorrectly identifying no-op states as successful
updates, triggering a busy loop to sync slots that logged messages every
200ms. This patch corrects the logic to properly classify these states,
enabling the worker to respect normal sleep intervals when no work is
performed.
Reported-by: Fujii Masao <masao.fujii@gmail.com>
Author: Zhijie Hou <houzj.fnst@fujitsu.com>
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
Reviewed-by: shveta malik <shveta.malik@gmail.com>
Backpatch-through: 17, where it was introduced
Discussion: https://postgr.es/m/CAHGQGwF6zG9Z8ws1yb3hY1VqV-WT7hR0qyXCn2HdbjvZQKufDw@mail.gmail.com M src/backend/replication/logical/slotsync.c
Honor passed-in database OIDs in pgstat_database.c
commit : b081c5b07309e5f95fec90eef5041bcc7a25a794
author : Michael Paquier <michael@paquier.xyz>
date : Sat, 11 Apr 2026 17:03:04 +0900
committer: Michael Paquier <michael@paquier.xyz>
date : Sat, 11 Apr 2026 17:03:04 +0900 Three routines in pgstat_database.c incorrectly ignore the database OID
provided by their caller, using MyDatabaseId instead:
- pgstat_report_connect()
- pgstat_report_disconnect()
- pgstat_reset_database_timestamp()
The first two functions, for connection and disconnection, each have a
single caller that already passes MyDatabaseId. This was harmless, but
still incorrect.
The timestamp reset function also has a single caller, but in this case
the issue has a real impact: it fails to reset the timestamp for the
shared-database entry (datid=0) when operating on shared objects. This
situation can occur, for example, when resetting counters for shared
relations via pg_stat_reset_single_table_counters().
There is currently one test in the tree that checks the reset of a
shared relation (pg_shdescription); we rely on it to check what is
stored in pg_stat_database. As stats_reset may be NULL, two resets are
done to provide a baseline for comparison.
Author: Chao Li <li.evan.chao@gmail.com>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Reviewed-by: Dapeng Wang <wangdp20191008@gmail.com>
Discussion: https://postgr.es/m/ABBD5026-506F-4006-A569-28F72C188693@gmail.com
Backpatch-through: 15 M src/backend/utils/activity/pgstat_database.c
M src/test/regress/expected/stats.out
M src/test/regress/sql/stats.sql
Fix estimate_array_length error with set-operation array coercions
commit : 13e20d1c9d99516d13f3ee0dc164168a74cde0df
author : Richard Guo <rguo@postgresql.org>
date : Sat, 11 Apr 2026 16:38:47 +0900
committer: Richard Guo <rguo@postgresql.org>
date : Sat, 11 Apr 2026 16:38:47 +0900 When a nested set operation's output type doesn't match the parent's
expected type, recurse_set_operations builds a projection target list
using generate_setop_tlist with varno 0. If the required type
coercion involves an ArrayCoerceExpr, estimate_array_length could be
called on such a Var, and would pass it to examine_variable, which
errors in find_base_rel because varno 0 has no valid relation entry.
Fix by skipping the statistics lookup for Vars with varno 0.
Bug introduced by commit 9391f7152. Back-patch to v17, where
estimate_array_length was taught to use statistics.
Reported-by: Justin Pryzby <pryzby@telsasoft.com>
Author: Tender Wang <tndrwang@gmail.com>
Reviewed-by: Richard Guo <guofenglinux@gmail.com>
Discussion: https://postgr.es/m/adjW8rfPDkplC7lF@pryzbyj2023
Backpatch-through: 17 M src/backend/utils/adt/selfuncs.c
M src/test/regress/expected/union.out
M src/test/regress/sql/union.sql
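The guard is essentially an early return before the statistics lookup;
a Python sketch (Var and the stats mapping are simplified stand-ins for
the planner's structures):

```python
from dataclasses import dataclass

DEFAULT_ESTIMATE = 10  # fallback when no statistics are available

@dataclass
class Var:
    varno: int      # 0 for setop projections built by generate_setop_tlist
    varattno: int

def estimate_array_length(node, stats: dict) -> int:
    """Skip the statistics lookup for Vars with varno 0, which have no
    backing relation entry, instead of erroring in the lookup."""
    if isinstance(node, Var):
        if node.varno == 0:
            return DEFAULT_ESTIMATE  # the fix: no find_base_rel-style lookup
        return stats.get((node.varno, node.varattno), DEFAULT_ESTIMATE)
    return DEFAULT_ESTIMATE

assert estimate_array_length(Var(0, 1), {(1, 1): 4}) == DEFAULT_ESTIMATE
assert estimate_array_length(Var(1, 1), {(1, 1): 4}) == 4
```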
read_stream: Remove obsolete comment.
commit : 36e7efbfdd786f7b0505121374052b323c90ee7d
author : Thomas Munro <tmunro@postgresql.org>
date : Sat, 11 Apr 2026 11:23:26 +1200
committer: Thomas Munro <tmunro@postgresql.org>
date : Sat, 11 Apr 2026 11:23:26 +1200 This comment was describing the v17 implementation (or io_method=sync).
Backpatch-through: 18 M src/backend/storage/aio/read_stream.c
Fix heap-buffer-overflow in pglz_decompress() on corrupt input.
commit : c3e436b1cb6090195f843daacd83b59258e1bcac
author : Andrew Dunstan <andrew@dunslane.net>
date : Thu, 9 Apr 2026 11:48:55 -0400
committer: Andrew Dunstan <andrew@dunslane.net>
date : Thu, 9 Apr 2026 11:48:55 -0400 When decoding a match tag, pglz_decompress() reads 2 bytes (or 3
for extended-length matches) from the source buffer before checking
whether enough data remains. The existing bounds check (sp > srcend)
occurs after the reads, so truncated compressed data that ends
mid-tag causes a read past the allocated buffer.
Fix by validating that sufficient source bytes are available before
reading each part of the match tag. The post-read sp > srcend
check is no longer needed and is removed.
Found by fuzz testing with libFuzzer and AddressSanitizer.
Backpatch-through: 14 M src/common/pg_lzcompress.c
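The corrected ordering, checks before reads, can be sketched like this
(an illustrative Python model of the match-tag decoding; the 2/3-byte
tag layout follows the description above, other details are
assumptions):

```python
def decode_match_tag(src: bytes, sp: int):
    """Decode one pglz-style match tag at src[sp], validating that
    enough source bytes remain *before* each read rather than after."""
    if sp + 2 > len(src):
        raise ValueError("compressed data is corrupt: truncated match tag")
    length = (src[sp] >> 4) + 3                  # length nibble, biased by 3
    offset = ((src[sp] & 0x0F) << 8) | src[sp + 1]
    consumed = 2
    if length == 18:  # length nibble was 15: an extension byte follows
        if sp + 3 > len(src):
            raise ValueError("compressed data is corrupt: truncated match tag")
        length += src[sp + 2]
        consumed = 3
    return length, offset, consumed

assert decode_match_tag(bytes([0x12, 0x34]), 0) == (4, 0x234, 2)
assert decode_match_tag(bytes([0xF0, 0x01, 0x05]), 0) == (23, 1, 3)
```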
Fix incremental JSON parser numeric token reassembly across chunks.
commit : 3e4955630292a7eb38f5fb3c6c5685623088ffd1
author : Andrew Dunstan <andrew@dunslane.net>
date : Thu, 9 Apr 2026 07:57:07 -0400
committer: Andrew Dunstan <andrew@dunslane.net>
date : Thu, 9 Apr 2026 07:57:07 -0400 When the incremental JSON parser splits a numeric token across chunk
boundaries, it accumulates continuation characters into the partial
token buffer. The accumulator's switch statement unconditionally
accepted '+', '-', '.', 'e', and 'E' as valid numeric continuations
regardless of position, which violated JSON number grammar
(-? int [frac] [exp]). For example, input "4-" fed in single-byte
chunks would accumulate the '-' into the numeric token, producing an
invalid token that later triggered an assertion failure during
re-lexing.
Fix by tracking parser state (seen_dot, seen_exp, prev character)
across the existing partial token and incoming bytes, so that each
character class is accepted only in its grammatically valid position.
Backpatch-through: 17 M src/common/jsonapi.c
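A Python model of the positional acceptance rule (a simplified state
check over the grammar -? int [frac] [exp]; it ignores the leading-zero
rule, which is not what this fix is about):

```python
def extend_number(token: str, ch: str) -> str:
    """Append ch to a partial JSON number token only if it is valid in
    that position: '.' and 'e'/'E' at most once and only after a digit,
    '-' only at the start or right after an exponent marker, '+' only
    right after one."""
    seen_dot = "." in token
    seen_exp = "e" in token or "E" in token
    prev = token[-1] if token else None
    if ch.isdigit():
        ok = True
    elif ch == "-":
        ok = prev is None or prev in "eE"
    elif ch == "+":
        ok = prev in ("e", "E")
    elif ch == ".":
        ok = not seen_dot and not seen_exp and prev is not None and prev.isdigit()
    elif ch in "eE":
        ok = not seen_exp and prev is not None and prev.isdigit()
    else:
        ok = False
    if not ok:
        raise ValueError(f"invalid numeric continuation {ch!r} after {token!r}")
    return token + ch

assert extend_number("4", "2") == "42"
assert extend_number("1.5e", "-") == "1.5e-"
# the reported case: "4" followed by "-" is now rejected, not accumulated
try:
    extend_number("4", "-")
    raise AssertionError("should have been rejected")
except ValueError:
    pass
```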
Zero-fill private_data when attaching an injection point
commit : 35f41b29ff1dedf172552adb1a7fab124518eadc
author : Michael Paquier <michael@paquier.xyz>
date : Fri, 10 Apr 2026 11:17:30 +0900
committer: Michael Paquier <michael@paquier.xyz>
date : Fri, 10 Apr 2026 11:17:30 +0900 InjectionPointAttach() did not initialize the private_data buffer of the
shared memory entry before (perhaps partially) overwriting it. When the
private data is set to NULL by the caller, the buffer was left
uninitialized. If set, it could have stale contents.
The buffer is now initialized to zero, so that the contents recorded
when a point is attached are deterministic.
Author: Sami Imseih <samimseih@gmail.com>
Discussion: https://postgr.es/m/CAA5RZ0tsGHu2h6YLnVu4HiK05q+gTE_9WVUAqihW2LSscAYS-g@mail.gmail.com
Backpatch-through: 17 M src/backend/utils/misc/injection_point.c
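The fix is the classic zero-fill-then-copy pattern; in Python terms
(the slot size is an arbitrary stand-in):

```python
SLOT_SIZE = 16  # arbitrary stand-in for the shared-memory slot's size

def attach_private_data(private_data) -> bytearray:
    """Zero-fill the fixed-size slot before copying the possibly
    shorter, possibly absent private data, so the slot's contents are
    deterministic rather than stale."""
    slot = bytearray(SLOT_SIZE)  # zero-initialized
    if private_data:
        slot[: len(private_data)] = private_data
    return slot

assert attach_private_data(None) == bytearray(SLOT_SIZE)
assert attach_private_data(b"ab") == b"ab" + bytes(SLOT_SIZE - 2)
```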
Fix integer overflow in nodeWindowAgg.c
commit : bfc7dff26d53ab42fe6cb6bc2243f5241a6df3e4
author : Richard Guo <rguo@postgresql.org>
date : Thu, 9 Apr 2026 19:28:33 +0900
committer: Richard Guo <rguo@postgresql.org>
date : Thu, 9 Apr 2026 19:28:33 +0900 In nodeWindowAgg.c, the calculations for frame start and end positions
in ROWS and GROUPS modes were performed using simple integer addition.
If a user-supplied offset was sufficiently large (close to INT64_MAX),
adding it to the current row or group index could cause a signed
integer overflow, wrapping the result to a negative number.
This led to incorrect behavior where frame boundaries that should have
extended indefinitely (or beyond the partition end) were treated as
falling at the first row, or where valid rows were incorrectly marked
as out-of-frame. Depending on the specific query and data, these
overflows can result in incorrect query results, execution errors, or
assertion failures.
To fix, use overflow-aware integer addition (ie, pg_add_s64_overflow)
to check for overflows during these additions. If an overflow is
detected, the boundary is now clamped to INT64_MAX. This ensures the
logic correctly treats the boundary as extending to the end of the
partition.
Bug: #19405
Reported-by: Alexander Lakhin <exclusion@gmail.com>
Author: Richard Guo <guofenglinux@gmail.com>
Reviewed-by: Tender Wang <tndrwang@gmail.com>
Discussion: https://postgr.es/m/19405-1ecf025dda171555@postgresql.org
Backpatch-through: 14 M src/backend/executor/nodeWindowAgg.c
M src/test/regress/expected/window.out
M src/test/regress/sql/window.sql
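The clamping described above can be sketched outside PostgreSQL as follows; add_s64_overflow and frame_boundary are hypothetical stand-ins for pg_add_s64_overflow and the boundary arithmetic in nodeWindowAgg.c, built on the GCC/Clang __builtin_add_overflow builtin rather than the actual executor code.

```c
#include <stdbool.h>
#include <stdint.h>

/* Stand-in for pg_add_s64_overflow: returns true if the addition overflowed. */
static bool
add_s64_overflow(int64_t a, int64_t b, int64_t *result)
{
	return __builtin_add_overflow(a, b, result);
}

/*
 * Compute a ROWS/GROUPS frame boundary.  On overflow, clamp to INT64_MAX so
 * the boundary is treated as extending to the end of the partition instead
 * of wrapping around to a negative position.
 */
static int64_t
frame_boundary(int64_t current_pos, int64_t offset)
{
	int64_t		boundary;

	if (add_s64_overflow(current_pos, offset, &boundary))
		boundary = INT64_MAX;
	return boundary;
}
```

With plain addition, frame_boundary(1, INT64_MAX) would wrap negative and mark every row out-of-frame; clamping instead yields a boundary past the last row.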
Strip PlaceHolderVars from partition pruning operands
commit : 8e8b2bef780e65102ac89260427181fb655c9c3b
author : Richard Guo <rguo@postgresql.org>
date : Thu, 9 Apr 2026 16:43:28 +0900
committer: Richard Guo <rguo@postgresql.org>
date : Thu, 9 Apr 2026 16:43:28 +0900 When pulling up a subquery, its targetlist items may be wrapped in
PlaceHolderVars to enforce separate identity or as a result of outer
joins. This causes any upper-level WHERE clauses referencing these
outputs to contain PlaceHolderVars, which prevents partprune.c from
recognizing that they match partition key columns, defeating partition
pruning.
To fix, strip PlaceHolderVars from operands before comparing them to
partition keys. A PlaceHolderVar with empty phnullingrels appearing
in a relation-scan-level expression is effectively a no-op, so
stripping it is safe. This parallels the existing treatment in
indxpath.c for index matching.
In passing, rename strip_phvs_in_index_operand() to strip_noop_phvs()
and move it from indxpath.c to placeholder.c, since it is now a
general-purpose utility used by both index matching and partition
pruning code.
Back-patch to v18. Although this issue existed before that, changes in
that version made it common enough to notice. Given the lack of field
reports for older versions, I am not back-patching further. In the
v18 back-patch, strip_phvs_in_index_operand() is retained as a thin
wrapper around the new strip_noop_phvs() to avoid breaking third-party
extensions that may reference it.
Reported-by: Cándido Antonio Martínez Descalzo <candido@ninehq.com>
Diagnosed-by: David Rowley <dgrowleyml@gmail.com>
Author: Richard Guo <guofenglinux@gmail.com>
Discussion: https://postgr.es/m/CAH5YaUwVUWETTyVECTnhs7C=CVwi+uMSQH=cOkwAUqMdvXdwWA@mail.gmail.com
Backpatch-through: 18 M src/backend/optimizer/path/indxpath.c
M src/backend/optimizer/plan/createplan.c
M src/backend/optimizer/util/placeholder.c
M src/backend/partitioning/partprune.c
M src/include/optimizer/placeholder.h
M src/test/regress/expected/partition_prune.out
M src/test/regress/sql/partition_prune.sql
Fix ABI break by moving PROCSIG_SLOTSYNC_MESSAGE in ProcSignalReason
commit : acf49bfede2ad5e778df7abfaf37a0e4b5fff5da
author : Fujii Masao <fujii@postgresql.org>
date : Thu, 9 Apr 2026 15:25:40 +0900
committer: Fujii Masao <fujii@postgresql.org>
date : Thu, 9 Apr 2026 15:25:40 +0900 Commit 58c1188a3ea added PROCSIG_SLOTSYNC_MESSAGE in the middle of
enum ProcSignalReason, breaking the ABI.
Fix this by moving PROCSIG_SLOTSYNC_MESSAGE to the end of the enum,
to restore ordering.
Per buildfarm member crake.
Author: Fujii Masao <masao.fujii@gmail.com>
Reviewed-by: Nisha Moond <nisha.moond412@gmail.com>
Discussion: https://postgr.es/m/CAHGQGwH_AAbtsiYDJt65N7_4PJ0CgOJmBEaCq68e5_tcuG_vXw@mail.gmail.com
Backpatch-through: 18 only M src/include/storage/procsignal.h
Fix slotsync worker blocking promotion when stuck in wait
commit : 58c1188a3eaa5681bb4de769c3f8cd84c15b8825
author : Fujii Masao <fujii@postgresql.org>
date : Wed, 8 Apr 2026 11:23:13 +0900
committer: Fujii Masao <fujii@postgresql.org>
date : Wed, 8 Apr 2026 11:23:13 +0900 Previously, on standby promotion, the startup process sent SIGUSR1 to
the slotsync worker (or a backend performing slot synchronization) and
waited for it to exit. This worked in most cases, but if the process was
blocked waiting for a response from the primary (e.g., due to a network
failure), SIGUSR1 would not interrupt the wait. As a result, the process
could remain stuck, causing the startup process to wait for a long time
and delaying promotion.
This commit fixes the issue by introducing a new procsignal reason,
PROCSIG_SLOTSYNC_MESSAGE. On promotion, the startup process
sends this signal, and the handler sets interrupt flags so the process
exits (or errors out) promptly at CHECK_FOR_INTERRUPTS(), allowing
promotion to complete without delay.
Backpatch to v17, where slotsync was introduced.
Author: Nisha Moond <nisha.moond412@gmail.com>
Reviewed-by: shveta malik <shveta.malik@gmail.com>
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
Reviewed-by: Zhijie Hou <houzj.fnst@fujitsu.com>
Reviewed-by: Fujii Masao <masao.fujii@gmail.com>
Discussion: https://postgr.es/m/CAHGQGwFzNYroAxSoyJhqTU-pH=t4Ej6RyvhVmBZ91Exj_TPMMQ@mail.gmail.com
Backpatch-through: 17 M src/backend/replication/logical/slotsync.c
M src/backend/storage/ipc/procsignal.c
M src/backend/tcop/postgres.c
M src/include/replication/slotsync.h
M src/include/storage/procsignal.h
Enhance slot synchronization API to respect promotion signal.
commit : 94efd308bcec9ecb45c2f1977c3c15bec383316e
author : Amit Kapila <akapila@postgresql.org>
date : Thu, 11 Dec 2025 03:49:28 +0000
committer: Fujii Masao <fujii@postgresql.org>
date : Thu, 11 Dec 2025 03:49:28 +0000 Previously, during a promotion, only the slot synchronization worker was
signaled to shut down. The backend executing slot synchronization via the
pg_sync_replication_slots() SQL function was not signaled, allowing it to
complete its synchronization cycle before exiting.
An upcoming patch improves pg_sync_replication_slots() to wait until
replication slots are fully persisted before finishing. This behaviour
requires the backend to exit promptly if a promotion occurs.
This patch ensures that, during promotion, a signal is also sent to the
backend running pg_sync_replication_slots(), allowing it to be interrupted
and exit immediately.
This change was originally committed to master only. However, back-patch
it to v17, where slot synchronization was introduced, because it is
required for an upcoming bug fix addressing slotsync (including
pg_sync_replication_slots()) blocking promotion when stuck in a wait.
Author: Ajin Cherian <itsajin@gmail.com>
Reviewed-by: Shveta Malik <shveta.malik@gmail.com>
Reviewed-by: Chao Li <li.evan.chao@gmail.com>
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
Discussion: https://postgr.es/m/CAFPTHDZAA%2BgWDntpa5ucqKKba41%3DtXmoXqN3q4rpjO9cdxgQrw%40mail.gmail.com
Backpatch-through: 17 M src/backend/replication/logical/slotsync.c
Fix WITHOUT OVERLAPS' interaction with domains.
commit : 49f3cb453b9b86b771b0a15393893fb317e35572
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Tue, 7 Apr 2026 14:45:33 -0400
committer: Tom Lane <tgl@sss.pgh.pa.us>
date : Tue, 7 Apr 2026 14:45:33 -0400 UNIQUE/PRIMARY KEY ... WITHOUT OVERLAPS requires the no-overlap
column to be a range or multirange, but it should allow a domain
over such a type too. This requires minor adjustments in both
the parser and executor.
In passing, fix a nearby break-instead-of-continue thinko in
transformIndexConstraint. This had the effect of disabling
parse-time validation of the no-overlap column's type in the context
of ALTER TABLE ADD CONSTRAINT, if it follows a dropped column.
We'd still complain appropriately at runtime though.
Author: Jian He <jian.universality@gmail.com>
Reviewed-by: Paul A Jungwirth <pj@illuminatedcomputing.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/CACJufxGoAmN_0iJ=hjTG0vGpOSOyy-vYyfE+-q0AWxrq2_p5XQ@mail.gmail.com
Backpatch-through: 18 M src/backend/executor/execIndexing.c
M src/backend/parser/parse_utilcmd.c
M src/test/regress/expected/without_overlaps.out
M src/test/regress/sql/without_overlaps.sql
Fix shmem allocation of fixed-sized custom stats kind
commit : 93f08dc92cf822b9f9dc1ef5cd815b04c65e8718
author : Michael Paquier <michael@paquier.xyz>
date : Tue, 7 Apr 2026 11:59:54 +0900
committer: Michael Paquier <michael@paquier.xyz>
date : Tue, 7 Apr 2026 11:59:54 +0900 StatsShmemSize(), which computes the shmem size needed for pgstats,
includes the amount of shared memory wanted by all the custom stats
kinds registered. However, the shared memory allocation was done by
ShmemAlloc() in StatsShmemInit(), meaning that the space reserved was
not used, wasting some memory.
These extra allocations would show up under "<anonymous>" in
pg_shmem_allocations, as the allocations done by ShmemAlloc() are not
tracked by ShmemIndexEnt.
Issue introduced by 7949d9594582.
Author: Heikki Linnakangas <hlinnaka@iki.fi>
Discussion: https://postgr.es/m/04b04387-92f5-476c-90b0-4064e71c5f37@iki.fi
Backpatch-through: 18 M src/backend/utils/activity/pgstat_shmem.c
Fix shared memory size of template code for custom fixed-sized pgstats
commit : af04b04f2f7a54ba4a4418a814f57b27d8e0d10c
author : Michael Paquier <michael@paquier.xyz>
date : Tue, 7 Apr 2026 08:24:36 +0900
committer: Michael Paquier <michael@paquier.xyz>
date : Tue, 7 Apr 2026 08:24:36 +0900 On HEAD, the template code for custom fixed-sized pgstats is in the test
module test_custom_stats. On REL_18_STABLE, this code lives in the test
module injection_points.
Both cases underestimated the size of the shared memory area required
for the storage of the stats data, using a single entry rather than the
whole area. This underestimation meant that no memory was allocated for
the LWLock required for the stats, among other things. This problem
would also be misleading for extension developers looking at this code.
This issue has been noticed while digging into a different bug reported
by Heikki Linnakangas, showing that the underestimation was causing
failures in the TAP tests of the test modules for 32-bit builds. The
other issue reported, related to the memory allocation of custom
fixed-sized pgstats, will be fixed in a follow-up commit.
Discussion: https://postgr.es/m/adMk_lWbnz3HDOA8@paquier.xyz
Backpatch-through: 18 M src/test/modules/injection_points/injection_stats_fixed.c
Avoid unsafe access to negative index in a TupleDesc.
commit : 11c2c0cc8d7c9fb0b68390b2b3550b54cf045dbc
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Mon, 6 Apr 2026 14:22:17 -0400
committer: Tom Lane <tgl@sss.pgh.pa.us>
date : Mon, 6 Apr 2026 14:22:17 -0400 Commit aa606b931 installed a test that would reference a nonexistent
TupleDesc array entry if a system column is used in COPY FROM WHERE.
Typically this would be harmless, but with bad luck it could result
in a phony "generated columns are not supported in COPY FROM WHERE
conditions" error, and at least in principle it could cause SIGSEGV.
(Compare 570e2fcc0 which fixed the identical problem in another
place.) Also, since c98ad086a it throws an Assert instead.
In the back branches, just guard the test to make it a safe no-op for
system columns. Commit 21c69dc73 installed a more aggressive answer
in master.
Reported-by: Alexander Lakhin <exclusion@gmail.com>
Author: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/6f435023-8ab6-47c2-ba07-035d0c4212f9@gmail.com
Backpatch-through: 14 M src/backend/commands/copy.c
Fix null-bitmap combining in array_agg_array_combine().
commit : 14bf2c39ee2f7e8e1ab989aacb55259fd1b70691
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Mon, 6 Apr 2026 13:14:50 -0400
committer: Tom Lane <tgl@sss.pgh.pa.us>
date : Mon, 6 Apr 2026 13:14:50 -0400 This code missed the need to update the combined state's
nullbitmap if state1 already had a bitmap but state2 didn't.
In that case the existing bitmap must be extended with 1's, but the code
failed to do so.
This could result in wrong output from a parallelized
array_agg(anyarray) calculation, if the input has a mix of
null and non-null elements. The errors depended on timing
of the parallel workers, and therefore would vary from one
run to another.
Also install guards against integer overflow when calculating
the combined object's sizes, and make some trivial cosmetic
improvements.
Author: Dmytro Astapov <dastapov@gmail.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/CAFQUnFj2pQ1HbGp69+w2fKqARSfGhAi9UOb+JjyExp7kx3gsqA@mail.gmail.com
Backpatch-through: 16 M src/backend/utils/adt/array_userfuncs.c
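The missing case can be illustrated with a standalone sketch (these helpers are hypothetical simplifications, not PostgreSQL's actual array internals): a null bitmap uses 1 bits for non-null elements, and a missing bitmap means "no nulls at all", so when one state lacks a bitmap its elements must still contribute 1 bits to the combined bitmap.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* One bit per element; a set bit means the element is non-null. */
static void
bitmap_set(uint8_t *bm, int i)
{
	bm[i / 8] |= (uint8_t) (1 << (i % 8));
}

static bool
bitmap_get(const uint8_t *bm, int i)
{
	return (bm[i / 8] >> (i % 8)) & 1;
}

/*
 * Combine the null bitmaps for n1 elements of state1 and n2 elements of
 * state2 into 'out' (pre-zeroed, large enough for n1 + n2 bits).  A NULL
 * input bitmap means all of that state's elements are non-null, so those
 * positions get 1 bits -- the case the buggy code dropped when state1 had
 * a bitmap but state2 didn't.
 */
static void
combine_null_bitmaps(const uint8_t *bm1, int n1,
					 const uint8_t *bm2, int n2,
					 uint8_t *out)
{
	for (int i = 0; i < n1; i++)
		if (bm1 == NULL || bitmap_get(bm1, i))
			bitmap_set(out, i);
	for (int i = 0; i < n2; i++)
		if (bm2 == NULL || bitmap_get(bm2, i))
			bitmap_set(out, n1 + i);
}
```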
More tar portability adjustments.
commit : 5079e420b92db58412f2af03c728ff1640bdc103
author : Thomas Munro <tmunro@postgresql.org>
date : Sat, 4 Apr 2026 11:13:18 +1300
committer: Thomas Munro <tmunro@postgresql.org>
date : Sat, 4 Apr 2026 11:13:18 +1300 For the three implementations that have caused problems so far:
* GNU and BSD (libarchive) tar both understand --format=ustar
* ustar doesn't support large UID/GID values, so set them to 0 to
avoid a hard error from at least GNU tar
* OpenBSD tar needs -F ustar, and it appears to warn but carry
on with "nobody" if a UID is too large
* -f /dev/null is a more portable way to throw away the output, since
the default destination might be a tape device depending on build
options that a distribution might change
* Windows ships BSD tar but lacks /dev/null, so ask perl for its name
Based on their manuals, the other two implementations the tests are
likely to encounter in the wild don't seem to need any special handling:
* Solaris/illumos tar uses ustar and replaces large UIDs with 60001
* AIX tar uses ustar (unless --format=pax) and truncates large UIDs
Backpatch-through: 18
Co-authored-by: Thomas Munro <thomas.munro@gmail.com>
Co-authored-by: Sami Imseih <samimseih@gmail.com> (large UIDs)
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us> (earlier version)
Reviewed-by: Nazir Bilal Yavuz <byavuz81@gmail.com> (OpenBSD)
Reviewed-by: Andrew Dunstan <andrew@dunslane.net> (Windows)
Discussion: https://postgr.es/m/3676229.1775170250%40sss.pgh.pa.us
Discussion: https://postgr.es/m/CAA5RZ0tt89MgNi4-0F4onH%2B-TFSsysFjMM-tBc6aXbuQv5xBXw%40mail.gmail.com M src/test/perl/PostgreSQL/Test/Utils.pm
jit: No backport::SectionMemoryManager for LLVM 22.
commit : 5c54e0f48fa2bc55080529179da9e49a22eeb0f4
author : Thomas Munro <tmunro@postgresql.org>
date : Fri, 3 Apr 2026 14:48:54 +1300
committer: Thomas Munro <tmunro@postgresql.org>
date : Fri, 3 Apr 2026 14:48:54 +1300 LLVM 22 has the fix that we copied into our tree in commit 9044fc1d and
a new function to reach it[1][2], so we only need to use our copy for
Aarch64 + LLVM < 22. The only change to the final version that our copy
didn't get is a new LLVM_ABI macro, but that isn't appropriate for us.
Our copy is hopefully now frozen and would only need maintenance if bugs
are found in the upstream code.
Non-Aarch64 systems now also use the new API with LLVM 22. It allocates
all sections with one contiguous mmap() instead of one per
section. We could have done that earlier, but commit 9044fc1d wanted to
limit the blast radius to the affected systems. We might as well
benefit from that small improvement everywhere now that it is available
out of the box.
We can't delete our copy until LLVM 22 is our minimum supported version,
or we switch to the newer JITLink API for at least Aarch64.
[1] https://github.com/llvm/llvm-project/pull/71968
[2] https://github.com/llvm/llvm-project/pull/174307
Backpatch-through: 14
Discussion: https://postgr.es/m/CA%2BhUKGJTumad75o8Zao-LFseEbt%3DenbUFCM7LZVV%3Dc8yg2i7dg%40mail.gmail.com M src/backend/jit/llvm/SectionMemoryManager.cpp
M src/backend/jit/llvm/llvmjit.c
M src/include/jit/SectionMemoryManager.h
M src/include/jit/llvmjit_backport.h
Further harden tests that might use not-so-compatible tar versions.
commit : c4b7be4ecb12a70e9bafa5353a787ba51bb4d2c6
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Thu, 2 Apr 2026 17:21:18 -0400
committer: Tom Lane <tgl@sss.pgh.pa.us>
date : Thu, 2 Apr 2026 17:21:18 -0400 Buildfarm testing shows that OpenSUSE (and perhaps related platforms?)
configures GNU tar in such a way that it'll archive sparse WAL files
by default, thus triggering the pax-extension detection code added by
bc30c704a. Thus, we need something similar to 852de579a but for
GNU tar's option set. "--format=ustar" seems to do the trick.
Moreover, the buildfarm shows that pg_verifybackup's 003_corruption.pl
test script is also triggering creation of pax-format tar files on
that platform. We had not noticed because those test cases all fail
(intentionally) before getting to the point of trying to verify WAL
data.
Since that means two TAP scripts need this option-selection logic, and
plausibly more will do so in future, factor it out into a subroutine
in Test::Utils. We also need to back-patch the 003_corruption.pl fix
into v18, where it's also failing.
While at it, clean up some places where guards for $tar being empty
or undefined were incomplete or even outright backwards. Presumably,
we missed noticing because the set of machines that run TAP tests
and don't have tar installed is empty. But if we're going to try
to handle that scenario, we should do it correctly.
Reported-by: Tomas Vondra <tomas@vondra.me>
Author: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/02770bea-b3f3-4015-8a43-443ae345379c@vondra.me
Backpatch-through: 18 M src/bin/pg_verifybackup/t/003_corruption.pl
M src/test/perl/PostgreSQL/Test/Utils.pm
Harden astreamer tar parsing logic against archives it can't handle.
commit : 698eae7db7ab980074636ad47485d3c2da607387
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Thu, 2 Apr 2026 12:20:26 -0400
committer: Tom Lane <tgl@sss.pgh.pa.us>
date : Thu, 2 Apr 2026 12:20:26 -0400 Previously, there was essentially no verification in this code that
the input is a tar file at all, let alone that it fits into the
subset of valid tar files that we can handle. This was exposed by
the discovery that we couldn't handle files that FreeBSD's tar
makes, because it's fairly aggressive about converting sparse WAL
files into sparse tar entries. To fix:
* Bail out if we find a pax extension header. This covers the
sparse-file case, and also protects us against scenarios where
the pax header changes other file properties that we care about.
(Eventually we may extend the logic to actually handle such
headers, but that won't happen in time for v19.)
* Be more wary about tar file type codes in general: do not assume
that anything that's neither a directory nor a symlink must be a
regular file. Instead, we just ignore entries that are none of the
three supported types.
* Apply pg_dump's isValidTarHeader to verify that a purported
header block is actually in tar format. To make this possible,
move isValidTarHeader into src/port/tar.c, which is probably where
it should have been since that file was created.
I also took the opportunity to const-ify the arguments of
isValidTarHeader and tarChecksum, and to use symbols not hard-wired
constants inside tarChecksum.
Back-patch to v18 but not further. Although this code exists inside
pg_basebackup in older branches, it's not really exposed in that
usage to tar files that weren't generated by our own code, so it
doesn't seem worth back-porting these changes across 3c9056981
and f80b09bac. I did choose to include a back-patch of 5868372bb
into v18 though, to minimize cosmetic differences between these
two branches.
Author: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Thomas Munro <thomas.munro@gmail.com>
Discussion: https://postgr.es/m/3049460.1775067940@sss.pgh.pa.us
Backpatch-through: 18 M src/bin/pg_basebackup/astreamer_inject.c
M src/bin/pg_dump/pg_backup_archiver.c
M src/bin/pg_dump/pg_backup_archiver.h
M src/bin/pg_dump/pg_backup_tar.c
M src/bin/pg_verifybackup/astreamer_verify.c
M src/fe_utils/astreamer_file.c
M src/fe_utils/astreamer_tar.c
M src/include/fe_utils/astreamer.h
M src/include/pgtar.h
M src/port/tar.c
jit: Stop emitting lifetime.end for LLVM 22.
commit : 78cea19bf7daa16e8fff4d1bb69a349fcc1421f3
author : Thomas Munro <tmunro@postgresql.org>
date : Thu, 2 Apr 2026 15:24:44 +1300
committer: Thomas Munro <tmunro@postgresql.org>
date : Thu, 2 Apr 2026 15:24:44 +1300 The lifetime.end intrinsic can now only be used for stack memory
allocated with alloca[1][2][3]. We use it to tell LLVM about the
lifetime of function arguments/isnull values that we keep in palloc'd
memory, so that it can avoid spilling registers to memory.
We might need to rearrange things and put them on the stack, but that'll
take some research. In the meantime, unbreak the build on LLVM 22.
[1] https://github.com/llvm/llvm-project/pull/149310
[2] https://llvm.org/docs/LangRef.html#llvm-lifetime-end-intrinsic
[3] https://llvm.org/docs/LangRef.html#i-alloca
Backpatch-through: 14
Reviewed-by: Matheus Alcantara <matheusssilv97@gmail.com> (earlier attempt)
Reviewed-by: Anthonin Bonnefoy <anthonin.bonnefoy@datadoghq.com> (earlier attempt)
Reviewed-by: Andres Freund <andres@anarazel.de> (earlier attempt)
Discussion: https://postgr.es/m/CA%2BhUKGJTumad75o8Zao-LFseEbt%3DenbUFCM7LZVV%3Dc8yg2i7dg%40mail.gmail.com M src/backend/jit/llvm/llvmjit_expr.c
doc: Add missing description for DROP SUBSCRIPTION IF EXISTS.
commit : 9ed5015f0d84ad62a15b4f468d743d1f8ec93a5e
author : Nathan Bossart <nathan@postgresql.org>
date : Wed, 1 Apr 2026 09:48:48 -0500
committer: Nathan Bossart <nathan@postgresql.org>
date : Wed, 1 Apr 2026 09:48:48 -0500 Oversight in commit 665d1fad99.
Author: Peter Smith <smithpb2250@gmail.com>
Reviewed-by: Chao Li <li.evan.chao@gmail.com>
Discussion: https://postgr.es/m/CAHut%2BPv72haFerrCdYdmF6hu6o2jKcGzkXehom%2BsP-JBBmOVDg%40mail.gmail.com
Backpatch-through: 14 M doc/src/sgml/ref/drop_subscription.sgml
Be more careful to preserve consistency of a tuplestore.
commit : adb7873bb9330b9fb578b34c84b9ab7bcc9cdd51
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Mon, 30 Mar 2026 13:59:54 -0400
committer: Tom Lane <tgl@sss.pgh.pa.us>
date : Mon, 30 Mar 2026 13:59:54 -0400 Several places in tuplestore.c would leave the tuplestore data
structure effectively corrupt if some subroutine were to throw
an error. Notably, if WRITETUP() failed after some number of
successful calls within dumptuples(), the tuplestore would
contain some memtuples pointers that were apparently live
entries but in fact pointed to pfree'd chunks.
In most cases this sort of thing is fine because transaction
abort cleanup is not too picky about the contents of memory that
it's going to throw away anyway. There's at least one exception
though: if a Portal has a holdStore, we're going to call
tuplestore_end() on that, even during transaction abort.
So it's not cool if that tuplestore is corrupt, and that means
tuplestore.c has to be more careful.
This oversight demonstrably leads to crashes in v15 and before,
if a holdable cursor fails to persist its data due to an undersized
temp_file_limit setting. Very possibly the same thing can happen in
v16 and v17 as well, though the specific test case submitted failed
to fail there (cf. 095555daf). The failure is accidentally dodged
as of v18 because 590b045c3 got rid of tuplestore_end's retail tuple
deletion loop. Still, it seems unwise to permit tuplestores to become
internally inconsistent in any branch, so I've applied the same fix
across the board.
Since the known test case for this is rather expensive and doesn't
fail in recent branches, I've omitted it.
Bug: #19438
Reported-by: Dmitriy Kuzmin <kuzmin.db4@gmail.com>
Author: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: David Rowley <dgrowleyml@gmail.com>
Discussion: https://postgr.es/m/19438-9d37b179c56d43aa@postgresql.org
Backpatch-through: 14 M src/backend/utils/sort/tuplestore.c
Detect pfree or repalloc of a previously-freed memory chunk.
commit : 3f3eefc288925c61068a05d6f0f5fcbf865a1c70
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Mon, 30 Mar 2026 12:02:08 -0400
committer: Tom Lane <tgl@sss.pgh.pa.us>
date : Mon, 30 Mar 2026 12:02:08 -0400 Before the major rewrite in commit c6e0fe1f2, AllocSetFree() would
typically crash when asked to free an already-free chunk. That was
an ugly but serviceable way of detecting coding errors that led to
double pfrees. But since that rewrite, double pfrees went through
just fine, because the "hdrmask" of a freed chunk isn't changed at all
when putting it on the freelist. We'd end with a corrupt freelist
that circularly links back to the doubly-freed chunk, which would
usually result in trouble later, far removed from the actual bug.
This situation is no good at all for debugging purposes. Fortunately,
we can fix it at low cost in MEMORY_CONTEXT_CHECKING builds by making
AllocSetFree() check for chunk->requested_size == InvalidAllocSize,
relying on the pre-existing code that sets it that way just below.
I investigated the alternative of changing a freed chunk's methodid
field, which would allow detection in non-MEMORY_CONTEXT_CHECKING
builds too. But that adds measurable overhead. Seeing that we didn't
notice this oversight for more than three years, it's hard to argue
that detecting this type of bug is worth any extra overhead in
production builds.
Likewise fix AllocSetRealloc() to detect repalloc() on a freed chunk,
and apply similar changes in generation.c and slab.c. (generation.c
would hit an Assert failure anyway, but it seems best to make it act
like aset.c.) bump.c doesn't need changes since it doesn't support
pfree in the first place. Ideally alignedalloc.c would receive
similar changes, but in debugging builds it's impossible to reach
AlignedAllocFree() or AlignedAllocRealloc() on a pfreed chunk, because
the underlying context's pfree would have wiped the chunk header of
the aligned chunk. But that means we should get an error of some
sort, so let's be content with that.
Per investigation of why the test case for bug #19438 didn't appear to
fail in v16 and up, even though the underlying bug was still present.
(This doesn't fix the underlying double-free bug; it just causes it to
be detected.)
Bug: #19438
Reported-by: Dmitriy Kuzmin <kuzmin.db4@gmail.com>
Author: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: David Rowley <dgrowleyml@gmail.com>
Discussion: https://postgr.es/m/19438-9d37b179c56d43aa@postgresql.org
Backpatch-through: 16 M src/backend/utils/mmgr/aset.c
M src/backend/utils/mmgr/generation.c
M src/backend/utils/mmgr/slab.c
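The detection mechanism can be sketched in miniature; ChunkHeader, chunk_alloc, and chunk_free are hypothetical simplifications of aset.c's chunk header and its InvalidAllocSize sentinel, not the actual allocator code.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdlib.h>

#define INVALID_ALLOC_SIZE ((size_t) -1)	/* analogue of InvalidAllocSize */

typedef struct ChunkHeader
{
	size_t		requested_size;
} ChunkHeader;

/* Allocate a chunk with a header recording its requested size. */
static void *
chunk_alloc(size_t size)
{
	ChunkHeader *hdr = malloc(sizeof(ChunkHeader) + size);

	if (hdr == NULL)
		return NULL;
	hdr->requested_size = size;
	return hdr + 1;
}

/*
 * Free a chunk.  If the sentinel is already present, the chunk was freed
 * before: report the double free instead of corrupting a freelist.  (A real
 * allocator would recycle the memory; here we keep it so the sentinel stays
 * readable for the demonstration.)
 */
static bool
chunk_free(void *pointer)
{
	ChunkHeader *hdr = (ChunkHeader *) pointer - 1;

	if (hdr->requested_size == INVALID_ALLOC_SIZE)
		return false;			/* double free detected */
	hdr->requested_size = INVALID_ALLOC_SIZE;
	return true;
}
```

The key point matches the commit: the sentinel is written as part of the normal free path, so the extra cost of detection is just one comparison.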
Fix FK triggers losing DEFERRABLE/INITIALLY DEFERRED when marked ENFORCED again
commit : 5db5e339692ed98adb5ae6ff625a92b49f770a6f
author : Fujii Masao <fujii@postgresql.org>
date : Mon, 30 Mar 2026 14:37:33 +0900
committer: Fujii Masao <fujii@postgresql.org>
date : Mon, 30 Mar 2026 14:37:33 +0900 Previously, a foreign key defined as DEFERRABLE INITIALLY DEFERRED could
behave as NOT DEFERRABLE after being set to NOT ENFORCED and then back
to ENFORCED.
This happened because recreating the FK triggers on re-enabling the constraint
forgot to restore the tgdeferrable and tginitdeferred fields in pg_trigger.
Fix this bug by properly setting those fields when the foreign key constraint
is marked ENFORCED again and its triggers are recreated, so the original
DEFERRABLE and INITIALLY DEFERRED properties are preserved.
Backpatch to v18, where NOT ENFORCED foreign keys were introduced.
Author: Yasuo Honda <yasuo.honda@gmail.com>
Reviewed-by: Fujii Masao <masao.fujii@gmail.com>
Discussion: https://postgr.es/m/CAKmOUTms2nkxEZDdcrsjq5P3b2L_PR266Hv8kW5pANwmVaRJJQ@mail.gmail.com
Backpatch-through: 18 M src/backend/commands/tablecmds.c
M src/test/regress/expected/foreign_key.out
M src/test/regress/sql/foreign_key.sql
Fix datum_image_*()'s inability to detect sign-extension variations
commit : 49315de0c0743cd5ecddf3af15217a6e7ec3fdb0
author : David Rowley <drowley@postgresql.org>
date : Mon, 30 Mar 2026 16:16:09 +1300
committer: David Rowley <drowley@postgresql.org>
date : Mon, 30 Mar 2026 16:16:09 +1300 Functions such as hash_numeric() are not careful to use the correct
PG_RETURN_*() macro according to the return type of that function as
defined in pg_proc. Because that function is meant to return int32,
when the hashed value exceeds 2^31, the 64-bit Datum value won't wrap to
a negative number, which means the Datum won't have the same value as it
would have, had it been cast to int32 on a two's complement machine. This
isn't harmless as both datum_image_eq() and datum_image_hash() may receive
a Datum that's been formed and deformed from a tuple in some cases, and
not in other cases. When formed into a tuple, the Datum value will be
coerced into an integer according to the attlen as specified by the
TupleDesc. This can result in two Datums that should be equal being
classed as not equal, which could result in (but not limited to) an error
such as:
ERROR: could not find memoization table entry
Here we fix this by ensuring we cast the Datum value to a signed integer
according to the typLen specified in the datum_image_eq/datum_image_hash
function call before comparing or hashing.
Author: David Rowley <dgrowleyml@gmail.com>
Reported-by: Tender Wang <tndrwang@gmail.com>
Backpatch-through: 14
Discussion: https://postgr.es/m/CAHewXNmcXVFdB9_WwA8Ez0P+m_TQy_KzYk5Ri5dvg+fuwjD_yw@mail.gmail.com M src/backend/utils/adt/datum.c
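The core of the fix can be sketched in isolation (datum_to_signed is a hypothetical helper, not the actual datum.c code): narrow the 64-bit Datum-like value to the width given by typlen and sign-extend it back, mirroring what happens when the value is stored into and fetched from a tuple, so stray high bits left by a mismatched PG_RETURN_*() macro cannot make equal values compare or hash unequal.

```c
#include <stdint.h>

/*
 * Reinterpret the low 'typlen' bytes of a 64-bit value as a signed integer
 * of that width, sign-extended to 64 bits.  Two values that agree in their
 * low typlen bytes then compare equal regardless of any leftover high bits.
 */
static int64_t
datum_to_signed(uint64_t datum, int typlen)
{
	switch (typlen)
	{
		case 1:
			return (int8_t) datum;
		case 2:
			return (int16_t) datum;
		case 4:
			return (int32_t) datum;
		default:
			return (int64_t) datum;
	}
}
```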
Doc: fix stale text about partition locking with cached plans
commit : a1baf64589758ed488eb943d4ccb64f23c0500ef
author : Amit Langote <amitlan@postgresql.org>
date : Fri, 27 Mar 2026 16:12:23 +0900
committer: Amit Langote <amitlan@postgresql.org>
date : Fri, 27 Mar 2026 16:12:23 +0900 Commit 121d774caea added text to master describing pruning-aware
locking behavior introduced by 525392d57. That behavior was
reverted in May 2025, making the text incorrect. Replace it with
the text used in back branches, which correctly describes current
behavior: pruned partitions are still locked at the beginning of
execution.
Discussion: https://postgr.es/m/CA+HiwqFT0fPPoYBr0iUFWNB-Og7bEXB9hB=6ogk_qD9=OM8Vbw@mail.gmail.com M doc/src/sgml/ddl.sgml
Fix multiple bugs in astreamer pipeline code.
commit : 5095f3f4a0dc2a91d1580598a4da8790a44aa7d2
author : Andrew Dunstan <andrew@dunslane.net>
date : Mon, 23 Mar 2026 16:17:08 -0400
committer: Andrew Dunstan <andrew@dunslane.net>
date : Mon, 23 Mar 2026 16:17:08 -0400 astreamer_tar_parser_content() sent the wrong data pointer when
forwarding MEMBER_TRAILER padding to the next streamer. After
astreamer_buffer_until() buffers the padding bytes, the 'data'
pointer has been advanced past them, but the code passed 'data'
instead of bbs_buffer.data. This caused the downstream consumer
to receive bytes from after the padding rather than the padding
itself, and could read past the end of the input buffer.
astreamer_gzip_decompressor_content() only checked for
Z_STREAM_ERROR from inflate(), silently ignoring Z_DATA_ERROR
(corrupted data) and Z_MEM_ERROR (out of memory). Fix by
treating any return other than Z_OK, Z_STREAM_END, and
Z_BUF_ERROR as fatal.
astreamer_gzip_decompressor_free() missed calling inflateEnd() to
release zlib's internal decompression state.
astreamer_tar_parser_free() neglected to pfree() the streamer
struct itself, leaking it.
astreamer_extractor_content() did not check the return value of
fclose() when closing an extracted file. A deferred write error
(e.g., disk full on buffered I/O) would be silently lost.
Discussion: https://postgr.es/m/results/98c6b630-acbb-44a7-97fa-1692ce2b827c@dunslane.net
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Backpatch-through: 15 M src/fe_utils/astreamer_file.c
M src/fe_utils/astreamer_gzip.c
M src/fe_utils/astreamer_tar.c
Avoid memory leak on error while parsing pg_stat_statements dump file
commit : 25b02320e13305a03fbcbbb0202053dbcb7540d1
author : Heikki Linnakangas <heikki.linnakangas@iki.fi>
date : Fri, 27 Mar 2026 12:20:38 +0200
committer: Heikki Linnakangas <heikki.linnakangas@iki.fi>
date : Fri, 27 Mar 2026 12:20:38 +0200 By using palloc() instead of raw malloc().
Reported-by: Gaurav Singh <gaurav.singh@yugabyte.com>
Reviewed-by: Lukas Fittl <lukas@fittl.com>
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Discussion: https://www.postgresql.org/message-id/CAEcQ1bYR9s4eQLFDjzzJHU8fj-MTbmRpW-9J-r2gsCn+HEsynw@mail.gmail.com
Backpatch-through: 14 M contrib/pg_stat_statements/pg_stat_statements.c
Fix off-by-one error in read IO tracing
commit : fb072e1721230a483f3aa299bcffbe0b8602b2c4
author : Andres Freund <andres@anarazel.de>
date : Thu, 26 Mar 2026 10:08:13 -0400
committer: Andres Freund <andres@anarazel.de>
date : Thu, 26 Mar 2026 10:08:13 -0400 AsyncReadBuffer()'s no-IO needed path passed
TRACE_POSTGRESQL_BUFFER_READ_DONE the wrong block number because it had
already incremented operation->nblocks_done. Fix by folding the
nblocks_done offset into the blocknum local variable at initialization.
Author: Melanie Plageman <melanieplageman@gmail.com>
Reviewed-by: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/u73un3xeljr4fiidzwi4ikcr6vm7oqugn4fo5vqpstjio6anl2%40hph6fvdiiria
Backpatch-through: 18 M src/backend/storage/buffer/bufmgr.c
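The off-by-one pattern here is general: if a done-counter is incremented before the trace point computes its block argument, the reported block is one past the block actually read. A hedged miniature (hypothetical helper, not the bufmgr.c code):

```c
#include <assert.h>

/* An operation reads blocks starting at start_blkno; nblocks_done is
 * incremented as each block completes.  The block just finished is
 * start + done - 1.  Computing "start + done" after the increment is
 * the off-by-one the commit fixes; folding the offset in once, at
 * initialization, avoids it. */
static unsigned
last_completed_block(unsigned start_blkno, unsigned nblocks_done)
{
    return start_blkno + nblocks_done - 1;
}
```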
Fix premature NULL lag reporting in pg_stat_replication
commit : 98e96e579b917f87336d82228b889feb2ffe9ddd
author : Fujii Masao <fujii@postgresql.org>
date : Thu, 26 Mar 2026 20:49:31 +0900
committer: Fujii Masao <fujii@postgresql.org>
date : Thu, 26 Mar 2026 20:49:31 +0900 pg_stat_replication is documented to keep the last measured lag values for
a short time after the standby catches up, and then set them to NULL when
there is no WAL activity. However, previously lag values could become NULL
prematurely even while WAL activity was ongoing, especially in logical
replication.
This happened because the code cleared lag when two consecutive reply messages
indicated that the apply location had caught up with the send location.
It did not verify that the reported positions were unchanged, so lag could be
cleared even when positions had advanced between messages. In logical
replication, where the apply location often quickly catches up, this issue was
more likely to occur.
This commit fixes the issue by clearing lag only when the standby reports that
it has fully replayed WAL (i.e., both flush and apply locations have caught up
with the send location) and the write/flush/apply positions remain unchanged
across two consecutive reply messages.
The second message with unchanged positions typically results from
wal_receiver_status_interval, so lag values are cleared after that interval
when there is no activity. This avoids showing stale lag data while preventing
premature NULL values.
Even with this fix, lag may rarely become NULL during activity if identical
position reports are sent repeatedly. Eliminating such duplicate messages
would address this fully, but that change is considered too invasive for stable
branches and will be handled in master only later.
Backpatch to all supported branches.
Author: Shinya Kato <shinya11.kato@gmail.com>
Reviewed-by: Chao Li <li.evan.chao@gmail.com>
Reviewed-by: Fujii Masao <masao.fujii@gmail.com>
Discussion: https://postgr.es/m/CAOzEurTzcUrEzrH97DD7+Yz=HGPU81kzWQonKZvqBwYhx2G9_A@mail.gmail.com
Backpatch-through: 14 M src/backend/replication/walsender.c
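The new clearing rule described above can be sketched as a predicate over two consecutive standby replies. The struct and field names here are hypothetical stand-ins, not the walsender's actual data structures:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

typedef struct
{
    uint64_t    write_lsn;
    uint64_t    flush_lsn;
    uint64_t    apply_lsn;
} Reply;

/* Clear lag (report NULL) only when the standby has fully replayed
 * WAL -- both flush and apply caught up with the send location -- AND
 * the reported positions are unchanged across two consecutive
 * replies.  Requiring "unchanged" prevents clearing lag while
 * positions are still advancing, the premature-NULL bug. */
static bool
should_clear_lag(uint64_t sent_lsn, const Reply *prev, const Reply *cur)
{
    bool        fully_replayed = cur->flush_lsn >= sent_lsn &&
        cur->apply_lsn >= sent_lsn;
    bool        unchanged = cur->write_lsn == prev->write_lsn &&
        cur->flush_lsn == prev->flush_lsn &&
        cur->apply_lsn == prev->apply_lsn;

    return fully_replayed && unchanged;
}
```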
Prevent spurious "indexes on virtual generated columns are not supported".
commit : cceb9c18a50b3611537050b3f9bb4685072823b4
author : Robert Haas <rhaas@postgresql.org>
date : Tue, 24 Mar 2026 06:11:15 -0400
committer: Robert Haas <rhaas@postgresql.org>
date : Tue, 24 Mar 2026 06:11:15 -0400 Both of the checks in DefineIndex() that can produce this error
message have a guard against negative attribute numbers, but lack a
guard to ensure that attno is non-zero. As a result, we can index
off the beginning of the TupleDesc and read a garbage byte for
attgenerated. If that byte happens to be 'v', we'll incorrectly
produce the error mentioned above.
The first call site is easy to hit: any attempt to create an
expression index does so. The second one is not currently hit in
the regression tests, but can be hit by something like
CREATE INDEX ON some_table ((some_function(some_table))).
Found by study of a test_plan_advice failure on buildfarm member
skink, though this issue has nothing to do with test_plan_advice
and seems to have only been revealed by happenstance.
Backpatch-through: 18
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: http://postgr.es/m/CA+TgmoacixUZVvi00hOjk_d9B4iYKswWP1gNqQ8Vfray-AcOCA@mail.gmail.com M src/backend/commands/indexcmds.c
Fix copy-paste error in test_ginpostinglist
commit : 51b7316a7c2b1a87c181842b35e4dcdc31d630f1
author : John Naylor <john.naylor@postgresql.org>
date : Tue, 24 Mar 2026 16:40:33 +0700
committer: John Naylor <john.naylor@postgresql.org>
date : Tue, 24 Mar 2026 16:40:33 +0700 The check for a mismatch on the second decoded item pointer
was an exact copy of the first item pointer check, comparing
orig_itemptrs[0] with decoded_itemptrs[0] instead of orig_itemptrs[1]
with decoded_itemptrs[1]. The error message also reported (0, 1) as
the expected value instead of (blk, off). As a result, any decoding
error in the second item pointer (where the varbyte delta encoding
is exercised) would go undetected.
This has been wrong since commit bde7493d1, so backpatch to all
supported versions.
Author: Jianghua Yang <yjhjstz@gmail.com>
Discussion: https://postgr.es/m/CAAZLFmSOD8R7tZjRLZsmpKtJLoqjgawAaM-Pne1j8B_Q2aQK8w@mail.gmail.com
Backpatch-through: 14 M src/test/modules/test_ginpostinglist/test_ginpostinglist.c
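The bug class is worth illustrating: a duplicated check that compares element [0] twice validates nothing about element [1]. A hedged sketch with a stand-in type (not the module's ItemPointerData):

```c
#include <assert.h>
#include <stdbool.h>

/* Stand-in for ItemPointerData: a (block, offset) pair. */
typedef struct
{
    unsigned    blk;
    unsigned    off;
} ItemPtr;

/* The copy-paste bug compared orig[0] with decoded[0] twice, so a
 * mismatch in slot 1 went undetected.  The fix: compare each slot
 * against its own counterpart. */
static bool
items_match(const ItemPtr orig[2], const ItemPtr decoded[2])
{
    for (int i = 0; i < 2; i++)
    {
        if (orig[i].blk != decoded[i].blk || orig[i].off != decoded[i].off)
            return false;
    }
    return true;
}
```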
Further improve commentary about ChangeVarNodesWalkExpression()
commit : 8c73ab9da9f191e07d5e1c11b59e2dcbde8cafb8
author : Alexander Korotkov <akorotkov@postgresql.org>
date : Tue, 24 Mar 2026 09:48:07 +0200
committer: Alexander Korotkov <akorotkov@postgresql.org>
date : Tue, 24 Mar 2026 09:48:07 +0200 The updated comment explains why we use ChangeVarNodes_walker() instead of
expression_tree_walker(), and provides a bit more detail about the differences
in processing top-level Query and subqueries.
Author: Alexander Korotkov <aekorotkov@gmail.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/CAPpHfdvbjq342WTQ705Wmqhe8794pcp7wospz%2BWUJ2qB7vuOqA%40mail.gmail.com
Backpatch-through: 18 M src/backend/rewrite/rewriteManip.c
Improve commentary about ChangeVarNodesWalkExpression().
commit : a0e0b3cc685aa58d9ff63ed64e45ac1cba297e40
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Mon, 23 Mar 2026 11:14:24 -0400
committer: Tom Lane <tgl@sss.pgh.pa.us>
date : Mon, 23 Mar 2026 11:14:24 -0400 IMO the proximate cause of the bug fixed in commit 07b7a964d
was sloppy thinking about what ChangeVarNodesWalkExpression()
is to be used for. Flesh out its header comment to try to
improve that situation.
Author: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/1607553.1774017006@sss.pgh.pa.us
Backpatch-through: 18 M src/backend/rewrite/rewriteManip.c
Fix multixact backwards-compatibility with CHECKPOINT race condition
commit : 0852643e1c60f3d32f723cac06e7a16f147745aa
author : Heikki Linnakangas <heikki.linnakangas@iki.fi>
date : Mon, 23 Mar 2026 11:53:32 +0200
committer: Heikki Linnakangas <heikki.linnakangas@iki.fi>
date : Mon, 23 Mar 2026 11:53:32 +0200 If a CHECKPOINT record with nextMulti N is written to the WAL before
the CREATE_ID record for N, and N happens to be the first multixid on
an offset page, the backwards compatibility logic to tolerate WAL
generated by older minor versions (before commit 789d65364c) failed to
compensate for the missing XLOG_MULTIXACT_ZERO_OFF_PAGE record. In
that case, the latest_page_number was initialized at the start of WAL
replay to the page for nextMulti from the CHECKPOINT record, even if
we had not seen the CREATE_ID record for that multixid yet, which
fooled the backwards compatibility logic into thinking that the page
was already initialized.
To fix, track the last XLOG_MULTIXACT_ZERO_OFF_PAGE that we've seen
separately from latest_page_number. If we haven't seen any
XLOG_MULTIXACT_ZERO_OFF_PAGE records yet, use
SimpleLruDoesPhysicalPageExist() to check if the page needs to be
initialized.
Reported-by: duankunren.dkr <duankunren.dkr@alibaba-inc.com>
Analyzed-by: duankunren.dkr <duankunren.dkr@alibaba-inc.com>
Reviewed-by: Andrey Borodin <x4mmm@yandex-team.ru>
Reviewed-by: Kirill Reshke <reshkekirill@gmail.com>
Discussion: https://www.postgresql.org/message-id/c4ef1737-8cba-458e-b6fd-4e2d6011e985.duankunren.dkr@alibaba-inc.com
Backpatch-through: 14 M src/backend/access/transam/multixact.c
M src/include/access/slru.h
Fix invalid value of pg_aios.pid, function pg_get_aios()
commit : 882bdcf9fd05f50153bc974568e48add76547fd3
author : Michael Paquier <michael@paquier.xyz>
date : Mon, 23 Mar 2026 18:14:28 +0900
committer: Michael Paquier <michael@paquier.xyz>
date : Mon, 23 Mar 2026 18:14:28 +0900 When the value of pg_aios.pid is found to be 0, the function had the
idea to set "nulls" to "false" instead of "true", without setting the
value stored in the tuplestore. This could lead to the display of buggy
data. The intention of the code is clearly to display NULL when a PID
of 0 is found, and this commit adjusts the logic to do so.
Issue introduced by 60f566b4f243.
Author: ChangAo Chen <cca5507@qq.com>
Reviewed-by: Chao Li <li.evan.chao@gmail.com>
Discussion: https://postgr.es/m/tencent_7D61A85D6143AD57CA8D8C00DEC541869D06@qq.com
Backpatch-through: 18 M src/backend/storage/aio/aio_funcs.c
Fix finalization of decompressor astreamers.
commit : 5f9642614275a56c93774ce536feaa4c27ee2525
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Sun, 22 Mar 2026 18:06:48 -0400
committer: Tom Lane <tgl@sss.pgh.pa.us>
date : Sun, 22 Mar 2026 18:06:48 -0400 Send the correct amount of data to the next astreamer, not the
whole allocated buffer size. This bug escaped detection because
in present uses the next astreamer is always a tar-file parser
which is insensitive to trailing garbage. But that may not
be true in future uses.
Author: Andrew Dunstan <andrew@dunslane.net>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/2178517.1774064942@sss.pgh.pa.us
Backpatch-through: 15 M src/fe_utils/astreamer_gzip.c
M src/fe_utils/astreamer_lz4.c
M src/fe_utils/astreamer_zstd.c
Fix self-join removal to update bare Var references in join clauses
commit : e8b9d6497469dadb3c2f3765dfeed7432af77704
author : Alexander Korotkov <akorotkov@postgresql.org>
date : Fri, 20 Mar 2026 15:32:52 +0200
committer: Alexander Korotkov <akorotkov@postgresql.org>
date : Fri, 20 Mar 2026 15:32:52 +0200 Self-join removal failed to update Var nodes when the join clause was a
bare Var (e.g., ON t1.bool_col) rather than an expression containing
Vars. ChangeVarNodesWalkExpression() used expression_tree_walker(),
which descends into child nodes but does not process the top-level node
itself. When a bare Var referencing the removed relation appeared as
the clause, its varno was left unchanged, leading to "no relation entry
for relid N" errors.
Fix by calling ChangeVarNodes_walker() directly instead of
expression_tree_walker(), so the top-level node is also processed.
Bug: #19435
Reported-by: Hang Ammmkilo <ammmkilo@163.com>
Author: Andrei Lepikhov <lepihov@gmail.com>
Co-authored-by: Tender Wang <tndrwang@gmail.com>
Co-authored-by: Alexander Korotkov <aekorotkov@gmail.com>
Reviewed-by: Kirill Reshke <reshkekirill@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/19435-3cc1a87f291129f1%40postgresql.org
Backpatch-through: 18 M src/backend/rewrite/rewriteManip.c
M src/test/regress/expected/join.out
M src/test/regress/sql/join.sql
SET NOT NULL: Call object-alter hook only after the catalog change
commit : 6958077ceb934f5a0c00f95b673fb86d5a7dde95
author : Álvaro Herrera <alvherre@kurilemu.de>
date : Fri, 20 Mar 2026 14:38:50 +0100
committer: Álvaro Herrera <alvherre@kurilemu.de>
date : Fri, 20 Mar 2026 14:38:50 +0100 ... otherwise, the function invoked by the hook might consult the
catalog and not see that the new constraint exists.
This relies on set_attnotnull doing CommandCounterIncrement()
after successfully modifying the catalog.
Oversight in commit 14e87ffa5c54.
Author: Artur Zakirov <zaartur@gmail.com>
Backpatch-through: 18
Discussion: https://postgr.es/m/CAKNkYnxUPCJk-3Xe0A3rmCC8B8V8kqVJbYMVN6ySGpjs_qd7dQ@mail.gmail.com M src/backend/commands/tablecmds.c
Fix dependency on FDW handler.
commit : c11f87b1a3b97d23468bdffd2aba17298f1cb3e0
author : Jeff Davis <jdavis@postgresql.org>
date : Thu, 19 Mar 2026 14:59:07 -0700
committer: Jeff Davis <jdavis@postgresql.org>
date : Thu, 19 Mar 2026 14:59:07 -0700 ALTER FOREIGN DATA WRAPPER could drop the dependency on the handler
function if it wasn't explicitly specified.
Reviewed-by: Nathan Bossart <nathandbossart@gmail.com>
Discussion: https://postgr.es/m/35c44a4b7fb76d35418c4d66b775a88f4ce60c86.camel@j-davis.com
Backpatch-through: 14 M src/backend/commands/foreigncmds.c
M src/test/regress/expected/foreign_data.out
M src/test/regress/sql/foreign_data.sql
Fix WAL flush LSN used by logical walsender during shutdown
commit : 9804981386a065206790920388f4959c798b2837
author : Fujii Masao <fujii@postgresql.org>
date : Tue, 17 Mar 2026 08:10:20 +0900
committer: Fujii Masao <fujii@postgresql.org>
date : Tue, 17 Mar 2026 08:10:20 +0900 Commit 6eedb2a5fd8 made the logical walsender call
XLogFlush(GetXLogInsertRecPtr()) to ensure that all pending WAL is flushed,
fixing a publisher shutdown hang. However, if the last WAL record ends at
a page boundary, GetXLogInsertRecPtr() can return an LSN pointing past
the page header, which can cause XLogFlush() to report an error.
A similar issue previously existed in the GiST code. Commit b1f14c96720
introduced GetXLogInsertEndRecPtr(), which returns a safe WAL insertion end
location (returning the start of the page when the last record ends at a page
boundary), and updated the GiST code to use it with XLogFlush().
This commit fixes the issue by making the logical walsender use
XLogFlush(GetXLogInsertEndRecPtr()) when flushing pending WAL during shutdown.
Backpatch to all supported versions.
Reported-by: Andres Freund <andres@anarazel.de>
Author: Anthonin Bonnefoy <anthonin.bonnefoy@datadoghq.com>
Reviewed-by: Fujii Masao <masao.fujii@gmail.com>
Discussion: https://postgr.es/m/vzguaguldbcyfbyuq76qj7hx5qdr5kmh67gqkncyb2yhsygrdt@dfhcpteqifux
Backpatch-through: 14 M src/backend/replication/walsender.c
Tighten asserts on ParallelWorkerNumber
commit : 0e5ff9b9b45a657aea12440478dc002e9b01f138
author : Tomas Vondra <tomas.vondra@postgresql.org>
date : Sat, 14 Mar 2026 15:24:37 +0100
committer: Tomas Vondra <tomas.vondra@postgresql.org>
date : Sat, 14 Mar 2026 15:24:37 +0100 The comment about ParallelWorkerNumber in parallel.c says:
In parallel workers, it will be set to a value >= 0 and < the number
of workers before any user code is invoked; each parallel worker will
get a different parallel worker number.
However, asserts in various places collecting instrumentation allowed
(ParallelWorkerNumber == num_workers). That would be a bug, as the value
is used as an index into an array with num_workers entries.
Fixed by adjusting the asserts accordingly. Backpatch to all supported
versions.
Discussion: https://postgr.es/m/5db067a1-2cdf-4afb-a577-a04f30b69167@vondra.me
Reviewed-by: Bertrand Drouvot <bertranddrouvot.pg@gmail.com>
Backpatch-through: 14 M src/backend/executor/nodeAgg.c
M src/backend/executor/nodeBitmapHeapscan.c
M src/backend/executor/nodeBitmapIndexscan.c
M src/backend/executor/nodeIncrementalSort.c
M src/backend/executor/nodeIndexonlyscan.c
M src/backend/executor/nodeIndexscan.c
M src/backend/executor/nodeMemoize.c
M src/backend/executor/nodeSort.c
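The invariant being tightened is the usual index-bounds rule: a value used to index an array of num_workers entries must be strictly less than num_workers. A minimal sketch (hypothetical helper):

```c
#include <assert.h>
#include <stdbool.h>

/* Valid parallel worker numbers are in [0, num_workers) -- the
 * half-open range -- because the value indexes an array with
 * num_workers entries.  The old asserts accepted num_workers itself,
 * one past the end. */
static bool
worker_number_ok(int worker_number, int num_workers)
{
    return worker_number >= 0 && worker_number < num_workers;
}
```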
Use GetXLogInsertEndRecPtr in gistGetFakeLSN
commit : 5b3f63a1bf5996a2ad1e879207ad875a24b65ee5
author : Tomas Vondra <tomas.vondra@postgresql.org>
date : Fri, 13 Mar 2026 22:42:29 +0100
committer: Tomas Vondra <tomas.vondra@postgresql.org>
date : Fri, 13 Mar 2026 22:42:29 +0100 The function used GetXLogInsertRecPtr() to generate the fake LSN. Most
of the time this is the same as what XLogInsert() would return, and so
it works fine with the XLogFlush() call. But if the last record ends at
a page boundary, GetXLogInsertRecPtr() returns an LSN pointing after the
page header. In such cases XLogFlush() fails with errors like this:
ERROR: xlog flush request 0/01BD2018 is not satisfied --- flushed only to 0/01BD2000
Such failures are very hard to trigger, particularly outside aggressive
test scenarios.
Fixed by introducing GetXLogInsertEndRecPtr(), returning the correct LSN
without skipping the header. This is the same as GetXLogInsertRecPtr(),
except that it calls XLogBytePosToEndRecPtr().
Initial investigation by me, root cause identified by Andres Freund.
This is a long-standing bug in gistGetFakeLSN(), probably introduced by
c6b92041d38 in PG13. Backpatch to all supported versions.
Reported-by: Peter Geoghegan <pg@bowt.ie>
Reviewed-by: Andres Freund <andres@anarazel.de>
Reviewed-by: Noah Misch <noah@leadboat.com>
Discussion: https://postgr.es/m/vf4hbwrotvhbgcnknrqmfbqlu75oyjkmausvy66ic7x7vuhafx@e4rvwavtjswo
Backpatch-through: 14 M src/backend/access/gist/gistutil.c
M src/backend/access/transam/xlog.c
M src/include/access/xlog.h
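The page-boundary subtlety can be shown with simplified arithmetic. WAL byte positions count only usable (non-header) bytes; when a position lands exactly on a page boundary, the "insert" pointer skips the next page's header (where the next record will start), while the "end" pointer stays at the boundary, which is the largest flushable LSN. This sketch assumes a fixed short page header and ignores segment-boundary long headers; it is an illustration, not the xlog.c conversion code:

```c
#include <assert.h>
#include <stdint.h>

#define BLCKSZ   8192u
#define PHD_SIZE 24u                /* assumed short page header size */
#define USABLE   (BLCKSZ - PHD_SIZE)

/* Simplified analogue of XLogBytePosToRecPtr: even when bytepos lands
 * exactly on a page boundary (offset 0), point past the next page's
 * header -- an LSN that cannot yet be flushed. */
static uint64_t
bytepos_to_insert_ptr(uint64_t bytepos)
{
    uint64_t    fullpages = bytepos / USABLE;
    uint64_t    offset = bytepos % USABLE;

    return fullpages * BLCKSZ + PHD_SIZE + offset;
}

/* Simplified analogue of XLogBytePosToEndRecPtr: at a page boundary,
 * return the boundary itself, which XLogFlush() can satisfy. */
static uint64_t
bytepos_to_end_ptr(uint64_t bytepos)
{
    uint64_t    fullpages = bytepos / USABLE;
    uint64_t    offset = bytepos % USABLE;

    if (offset == 0)
        return fullpages * BLCKSZ;
    return fullpages * BLCKSZ + PHD_SIZE + offset;
}
```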
xml2: Fix failure with xslt_process() under -fsanitize=undefined
commit : e33a4fda00279ebf68f3ce635fbffa2e1a5db670
author : Michael Paquier <michael@paquier.xyz>
date : Fri, 13 Mar 2026 16:06:46 +0900
committer: Michael Paquier <michael@paquier.xyz>
date : Fri, 13 Mar 2026 16:06:46 +0900 The logic of xslt_process() has never considered the fact that
xsltSaveResultToString() would return NULL for an empty string (the
upstream code has always done so, with a string length of 0). This
would cause memcpy() to be called with a NULL pointer, something
forbidden by POSIX.
Like 46ab07ffda9d and similar fixes, this is backpatched down to all the
supported branches, with a test case to cover this scenario. An empty
string has always been returned by xml2 in this case, based on the
history of the module, so this is an old issue.
Reported-by: Alexander Lakhin <exclusion@gmail.com>
Discussion: https://postgr.es/m/c516a0d9-4406-47e3-9087-5ca5176ebcf9@gmail.com
Backpatch-through: 14 M contrib/xml2/expected/xml2.out
M contrib/xml2/expected/xml2_1.out
M contrib/xml2/sql/xml2.sql
M contrib/xml2/xslt_proc.c
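The underlying rule: passing NULL to memcpy() is undefined behavior even with a length of 0, which is exactly what -fsanitize=undefined flags. A hedged sketch of the defensive pattern (hypothetical helper, not the xslt_proc.c code):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Copy a library-returned buffer into a NUL-terminated string,
 * treating a NULL buffer with length 0 as an empty string.  The
 * guard on buf avoids memcpy(dst, NULL, 0), which POSIX and C both
 * forbid. */
static char *
copy_result(const unsigned char *buf, int len)
{
    char       *out = malloc((size_t) len + 1);

    if (out == NULL)
        return NULL;
    if (len > 0 && buf != NULL)     /* the guard the fix adds */
        memcpy(out, buf, (size_t) len);
    out[len] = '\0';
    return out;
}
```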
Prevent restore of incremental backup from bloating VM fork.
commit : 9540c0e5dd40df47cbea9a596975e6688ccd70a3
author : Robert Haas <rhaas@postgresql.org>
date : Mon, 9 Mar 2026 06:36:42 -0400
committer: Robert Haas <rhaas@postgresql.org>
date : Mon, 9 Mar 2026 06:36:42 -0400 When I (rhaas) wrote the WAL summarizer code, I incorrectly believed
that XLOG_SMGR_TRUNCATE truncates all forks to the same length. In
fact, what other parts of the code do is compute the truncation length
for the FSM and VM forks from the truncation length used for the main
fork. But, because I was confused, I coded the WAL summarizer to set the
limit block for the VM fork to the same value as for the main fork.
(Incremental backup always copies FSM forks in full, so there is no
similar issue in that case.)
Doing that doesn't directly cause any data corruption, as far as I can
see. However, it does create a serious risk of consuming a large amount
of extra disk space, because pg_combinebackup's reconstruct.c believes
that the reconstructed file should always be at least as long as the
limit block value. We might want to be smarter about that at some point
in the future, because it's always safe to omit all-zeroes blocks at the
end of the last segment of a relation, and doing so could save disk
space, but the current algorithm will rarely waste enough disk space to
worry about unless we believe that a relation has been truncated to a
length much longer than its actual length on disk, which is exactly what
happens as a result of the problem mentioned in the previous paragraph.
To fix, create a new visibilitymap helper function and use it to include
the right limit block in the summary files. Incremental backups taken
with existing summary files will still have this issue, but this should
improve the situation going forward.
Diagnosed-by: Oleg Tkachenko <oatkachenko@gmail.com>
Diagnosed-by: Amul Sul <sulamul@gmail.com>
Discussion: http://postgr.es/m/CAAJ_b97PqG89hvPNJ8cGwmk94gJ9KOf_pLsowUyQGZgJY32o9g@mail.gmail.com
Discussion: http://postgr.es/m/6897DAF7-B699-41BF-A6FB-B818FCFFD585%40gmail.com
Backpatch-through: 17 M src/backend/access/heap/visibilitymap.c
M src/backend/postmaster/walsummarizer.c
M src/bin/pg_combinebackup/t/011_ib_truncation.pl
M src/include/access/visibilitymap.h
doc: Document IF NOT EXISTS option for ALTER FOREIGN TABLE ADD COLUMN.
commit : 94ff80f49d64ed3b2b5fe598732600c9bb1beb71
author : Fujii Masao <fujii@postgresql.org>
date : Mon, 9 Mar 2026 18:24:41 +0900
committer: Fujii Masao <fujii@postgresql.org>
date : Mon, 9 Mar 2026 18:24:41 +0900 Commit 2cd40adb85d added the IF NOT EXISTS option to ALTER TABLE ADD COLUMN.
This also enabled IF NOT EXISTS for ALTER FOREIGN TABLE ADD COLUMN,
but the ALTER FOREIGN TABLE documentation was not updated to mention it.
This commit updates the documentation to describe the IF NOT EXISTS option for
ALTER FOREIGN TABLE ADD COLUMN.
While updating that section, this commit also clarifies that the COLUMN keyword
is optional in ALTER FOREIGN TABLE ADD/DROP COLUMN. Previously, part of
the documentation could be read as if COLUMN were required.
This commit adds regression tests covering these ALTER FOREIGN TABLE syntaxes.
Backpatch to all supported versions.
Suggested-by: Fujii Masao <masao.fujii@gmail.com>
Author: Chao Li <lic@highgo.com>
Reviewed-by: Robert Treat <rob@xzilla.net>
Reviewed-by: Fujii Masao <masao.fujii@gmail.com>
Discussion: https://postgr.es/m/CAHGQGwFk=rrhrwGwPtQxBesbT4DzSZ86Q3ftcwCu3AR5bOiXLw@mail.gmail.com
Backpatch-through: 14 M doc/src/sgml/ref/alter_foreign_table.sgml
M src/test/regress/expected/foreign_data.out
M src/test/regress/sql/foreign_data.sql
Fix size underestimation of DSA pagemap for odd-sized segments
commit : a0f38604d92814dbe711d1ac3fecb0e5d054c162
author : Michael Paquier <michael@paquier.xyz>
date : Mon, 9 Mar 2026 13:46:31 +0900
committer: Michael Paquier <michael@paquier.xyz>
date : Mon, 9 Mar 2026 13:46:31 +0900 When make_new_segment() creates an odd-sized segment, the pagemap was
only sized based on a number of usable_pages entries, forgetting that a
segment also contains metadata pages, and that the FreePageManager uses
absolute page indices that cover the entire segment. This
miscalculation could cause accesses to pagemap entries to be out of
bounds. During subsequent reuse of the allocated segment, allocations
landing on pages with indices higher than usable_pages could cause
out-of-bounds pagemap reads and/or writes. On write, 'span' pointers
are stored into the data area, corrupting the allocated objects. On
read (aka during a dsa_free), garbage is interpreted as a span pointer,
typically crashing the server in dsa_get_address().
The normal geometric path correctly sizes the pagemap for all pages in
the segment. The odd-sized path needs to do the same, but it works
forward from usable_pages rather than backward from total_size.
This commit fixes the sizing of the odd-sized case by adding pagemap
entries for the metadata pages after the initial metadata_bytes
calculation, using an integer ceiling division to compute the exact
number of additional entries needed in one go, avoiding any iteration in
the calculation.
An assertion is added in the code path for odd-sized segments, ensuring
that the pagemap includes the metadata area, and that the result is
appropriately sized.
This problem would show up depending on the size requested for the
allocation of a DSA segment. The reporter has noticed this issue when a
parallel hash join makes a DSA allocation large enough to trigger the
odd-sized segment path, but it could happen for anything that does a DSA
allocation.
A regression test is added to test_dsa, down to v17 where the test
module has been introduced. This adds a set of cheap tests to check the
problem, the new assertion being useful for this purpose. Sami has
proposed a test that took a longer time than what I have done here; the
test committed is faster and good enough to check the odd-sized
allocation path.
Author: Paul Bunn <paul.bunn@icloud.com>
Reviewed-by: Sami Imseih <samimseih@gmail.com>
Reviewed-by: Chao Li <li.evan.chao@gmail.com>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/044401dcabac$fe432490$fac96db0$@icloud.com
Backpatch-through: 14 M src/backend/utils/mmgr/dsa.c
M src/test/modules/test_dsa/expected/test_dsa.out
M src/test/modules/test_dsa/sql/test_dsa.sql
M src/test/modules/test_dsa/test_dsa--1.0.sql
M src/test/modules/test_dsa/test_dsa.c
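The "integer ceiling division" used to size the extra pagemap entries in one go is the standard idiom for positive integers; a minimal sketch:

```c
#include <assert.h>
#include <stddef.h>

/* ceil(a / b) for positive integers, with no loop or floating point:
 * the pattern used to compute the number of additional pagemap
 * entries covering the metadata pages in a single step. */
static size_t
ceil_div(size_t a, size_t b)
{
    return (a + b - 1) / b;
}
```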
Fix invalid boolean if-test
commit : 3a9cf1c925a0430264be3bd993f10b75c408b05e
author : Álvaro Herrera <alvherre@kurilemu.de>
date : Sat, 7 Mar 2026 14:28:16 +0100
committer: Álvaro Herrera <alvherre@kurilemu.de>
date : Sat, 7 Mar 2026 14:28:16 +0100 We were testing the truth value of the array of booleans (which is
always true) instead of the boolean element specific to the affected
table column.
This causes a binary-upgrade dump to fail to omit the name of a constraint;
that is, the correct constraint name is always printed, even when it's
not needed. The affected case is a binary-upgrade dump of a not-null
constraint in an inherited column, which must in addition have no
comment.
Another point is that in order for this to make a difference, the
constraint must have the default name in the child table. That is, the
constraint must have been created _in the parent table_ with the name
that it would have in the child table, like so:
CREATE TABLE parent (a int CONSTRAINT child_a_not_null NOT NULL);
CREATE TABLE child () INHERITS (parent);
Otherwise, the correct name must be printed by binary-upgrade pg_dump
anyway, since it wouldn't match the name produced at the parent.
Moreover, when it does hit, the pre-18-compatibility code (which has to
work with a constraint that has no name) gets involved and uses the
UPDATE on pg_constraint using the conkey instead of column name ... and
so everything ends up working correctly AFAICS.
I think it might cause a problem if the table and column names are
overly long, but I didn't want to spend time investigating further.
Still, it's wrong code, and static analyzers have twice complained about
it, so fix it by adding the array index accessor that was obviously
meant.
Reported-by: Ranier Vilela <ranier.vf@gmail.com>
Reported-by: George Tarasov <george.v.tarasov@gmail.com>
Backpatch-through: 18
Discussion: https://postgr.es/m/CAEudQAo7ah=4TDheuEjtb0dsv6bHoK7uBNqv53Tsub2h-xBSJw@mail.gmail.com
Discussion: https://postgr.es/m/f3029f25-acc9-4cb9-a74f-fe93bcfb3a27@gmail.com M src/bin/pg_dump/pg_dump.c
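The bug class is easy to reproduce: testing the array pointer (always true) instead of the element for the column at hand. A sketch with a hypothetical per-column flag array, not the pg_dump code itself:

```c
#include <assert.h>
#include <stdbool.h>

/* Given per-column booleans, decide whether a given column's
 * constraint name must be printed.
 *
 * wrong:  if (flags)        -- the pointer is never false
 * right:  if (flags[colno]) -- index the affected column */
static bool
column_needs_name(const bool *flags, int colno)
{
    return flags[colno];
}
```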
Fix publisher shutdown hang caused by logical walsender busy loop.
commit : 3eb2fecdbbfc0336888770d4c8c727b44e456393
author : Fujii Masao <fujii@postgresql.org>
date : Fri, 6 Mar 2026 16:43:40 +0900
committer: Fujii Masao <fujii@postgresql.org>
date : Fri, 6 Mar 2026 16:43:40 +0900 Previously, when logical replication was running, shutting down
the publisher could cause the logical walsender to enter a busy loop
and prevent the publisher from completing shutdown.
During shutdown, the logical walsender waits for all pending WAL
to be written out. However, some WAL records could remain unflushed,
causing the walsender to wait indefinitely.
The issue occurred because the walsender used XLogBackgroundFlush() to
flush pending WAL. This function does not guarantee that all WAL is written.
For example, WAL generated by a transaction without an assigned
transaction ID that aborts might not be flushed.
This commit fixes the bug by making the logical walsender call XLogFlush()
instead, ensuring that all pending WAL is written and preventing
the busy loop during shutdown.
Backpatch to all supported versions.
Author: Anthonin Bonnefoy <anthonin.bonnefoy@datadoghq.com>
Reviewed-by: Alexander Lakhin <exclusion@gmail.com>
Reviewed-by: Fujii Masao <masao.fujii@gmail.com>
Discussion: https://postgr.es/m/CAO6_Xqo3co3BuUVEVzkaBVw9LidBgeeQ_2hfxeLMQcXwovB3GQ@mail.gmail.com
Backpatch-through: 14 M src/backend/replication/walsender.c
Exit after fatal errors in client-side compression code.
commit : a01a592b1193c4a22d897393d664f5888f7a25b5
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Thu, 5 Mar 2026 14:43:21 -0500
committer: Tom Lane <tgl@sss.pgh.pa.us>
date : Thu, 5 Mar 2026 14:43:21 -0500 It looks like whoever wrote the astreamer (nee bbstreamer) code
thought that pg_log_error() is equivalent to elog(ERROR), but
it's not; it just prints a message. So all these places tried to
continue on after a compression or decompression error return,
with the inevitable result being garbage output and possibly
cascading error messages. We should use pg_fatal() instead.
These error conditions are probably pretty unlikely in practice,
which no doubt accounts for the lack of field complaints.
Author: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Chao Li <li.evan.chao@gmail.com>
Discussion: https://postgr.es/m/1531718.1772644615@sss.pgh.pa.us
Backpatch-through: 15 M src/bin/pg_dump/compress_lz4.c
M src/fe_utils/astreamer_gzip.c
M src/fe_utils/astreamer_lz4.c
M src/fe_utils/astreamer_zstd.c
Fix handling of updated tuples in the MERGE statement
commit : 13fab378e630f73b7bb821a211f10b66bc696525
author : Alexander Korotkov <akorotkov@postgresql.org>
date : Thu, 5 Mar 2026 19:47:20 +0200
committer: Alexander Korotkov <akorotkov@postgresql.org>
date : Thu, 5 Mar 2026 19:47:20 +0200 This branch missed the IsolationUsesXactSnapshot() check. That led to EPQ on
repeatable read and serializable isolation levels. This commit fixes the
issue and provides a simple isolation check for that. Backpatch through v15
where MERGE statement was introduced.
Reported-by: Tender Wang <tndrwang@gmail.com>
Discussion: https://postgr.es/m/CAPpHfdvzZSaNYdj5ac-tYRi6MuuZnYHiUkZ3D-AoY-ny8v%2BS%2Bw%40mail.gmail.com
Author: Tender Wang <tndrwang@gmail.com>
Reviewed-by: Dean Rasheed <dean.a.rasheed@gmail.com>
Backpatch-through: 15 M src/backend/executor/nodeModifyTable.c
M src/test/isolation/expected/merge-update.out
M src/test/isolation/specs/merge-update.spec
doc: Clarify that COLUMN is optional in ALTER TABLE ... ADD/DROP COLUMN.
commit : e46b915db518af3748d4e59e9927898d9c6c9337
author : Fujii Masao <fujii@postgresql.org>
date : Thu, 5 Mar 2026 12:55:52 +0900
committer: Fujii Masao <fujii@postgresql.org>
date : Thu, 5 Mar 2026 12:55:52 +0900 In ALTER TABLE ... ADD/DROP COLUMN, the COLUMN keyword is optional. However,
part of the documentation could be read as if COLUMN were required, which may
mislead users about the command syntax.
This commit updates the ALTER TABLE documentation to clearly state that
COLUMN is optional for ADD and DROP.
This commit also adds regression tests covering ALTER TABLE ... ADD/DROP
without the COLUMN keyword.
Backpatch to all supported versions.
Author: Chao Li <lic@highgo.com>
Reviewed-by: Robert Treat <rob@xzilla.net>
Reviewed-by: Fujii Masao <masao.fujii@gmail.com>
Discussion: https://postgr.es/m/CAEoWx2n6ShLMOnjOtf63TjjgGbgiTVT5OMsSOFmbjGb6Xue1Bw@mail.gmail.com
Backpatch-through: 14 M doc/src/sgml/ref/alter_table.sgml
M src/test/regress/expected/alter_table.out
M src/test/regress/sql/alter_table.sql
Fix rare instability in recovery TAP test 004_timeline_switch
commit : 7185eddf0522b3146ed1ff6e063e8e129e77c706
author : Michael Paquier <michael@paquier.xyz>
date : Thu, 5 Mar 2026 10:06:01 +0900
committer: Michael Paquier <michael@paquier.xyz>
date : Thu, 5 Mar 2026 10:06:01 +0900 This fixes a problem similar to ad8c86d22cbd. In this case, the test
could fail under the following circumstances:
- The primary is stopped with teardown_node(), meaning that it may not
be able to send all its WAL records to standby_1 and standby_2.
- If standby_2 receives more records than standby_1, attempting to
reconnect standby_2 to the promoted standby_1 would fail because of a
timeline fork.
This race condition is fixed with a simple trick: instead of tearing
down the primary, it is stopped cleanly so that all the WAL records of the
primary are received and flushed by both standby_1 and standby_2. Once
we do that, there is no need for a wait_for_catchup() before stopping
the node. The test wants to check that a timeline jump can be achieved
when reconnecting a standby to a promoted standby in the same cluster,
hence an immediate stop of the primary is not required.
This failure is harder to reach than the previous instability of
009_twophase, still the buildfarm has been able to detect this failure
at least once. I have tried Alexander Lakhin's test trick with the
bgwriter and very aggressive standby snapshots, but I could not
reproduce it directly. It is reachable, as the buildfarm has proved.
Backpatch down to all supported branches, as this problem can lead to
spurious failures in the buildfarm.
Discussion: https://postgr.es/m/493401a8-063f-436a-8287-a235d9e065fc@gmail.com
Backpatch-through: 14 M src/test/recovery/t/004_timeline_switch.pl
Fix yet another bug in archive streamer with LZ4 decompression.
commit : 78dc9a808201b06a59f2bfb03018f790a5fc15ba
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Wed, 4 Mar 2026 12:08:37 -0500
committer: Tom Lane <tgl@sss.pgh.pa.us>
date : Wed, 4 Mar 2026 12:08:37 -0500 The code path in astreamer_lz4_decompressor_content() that updated
the output pointers when the output buffer isn't full was wrong.
It advanced next_out by bytes_written, which could include previous
decompression output, not just that of the current cycle. The
correct amount to advance is out_size. While at it, make the
output pointer updates look more like the input pointer updates.
This bug is pretty hard to reach, as it requires consecutive
compression frames that are too small to fill the output buffer.
pg_dump could have produced such data before 66ec01dc4, but
I'm unsure whether any files we use astreamer with would be
likely to contain problematic data.
Author: Chao Li <lic@highgo.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/0594CC79-1544-45DD-8AA4-26270DE777A7@gmail.com
Backpatch-through: 15 M src/fe_utils/astreamer_lz4.c
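To make the pointer arithmetic concrete, here is a minimal C sketch of the pattern described above. It is not PostgreSQL's actual astreamer code; the sketch_* names, the fixed-size buffer, and the copy-based "decompression" are invented for illustration.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/*
 * Minimal sketch (not PostgreSQL code) of the pointer-advance rule in
 * astreamer_lz4_decompressor_content(): after each decompression cycle,
 * next_out must advance by this cycle's output (out_size), not by the
 * cumulative bytes_written, which may include earlier frames' output.
 */
typedef struct
{
    char    buf[64];
    char   *next_out;       /* where the next cycle writes */
    size_t  bytes_written;  /* cumulative output stored in buf */
} sketch_streamer;

/* Stand-in for one decompression cycle; returns this cycle's out_size. */
static size_t
sketch_decompress_cycle(sketch_streamer *st, const char *data, size_t len)
{
    memcpy(st->next_out, data, len);
    return len;
}

static void
sketch_consume_frames(sketch_streamer *st, const char **frames, int nframes)
{
    st->next_out = st->buf;
    st->bytes_written = 0;

    for (int i = 0; i < nframes; i++)
    {
        size_t out_size = sketch_decompress_cycle(st, frames[i],
                                                  strlen(frames[i]));

        st->bytes_written += out_size;

        /*
         * Correct: advance by this cycle's output only.  The buggy form
         * advanced by st->bytes_written, skipping past earlier output
         * as soon as a second small frame arrived.
         */
        st->next_out += out_size;
    }
}
```

With two consecutive small frames ("abc" then "de"), advancing by the cumulative count would leave a gap in the buffer; advancing by this cycle's output yields contiguous "abcde".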
Don't malloc(0) in EventTriggerCollectAlterTSConfig
commit : e2ee58eec0799460bb208b4357401e14762522de
author : Álvaro Herrera <alvherre@kurilemu.de>
date : Wed, 4 Mar 2026 15:04:53 +0100
committer: Álvaro Herrera <alvherre@kurilemu.de>
date : Wed, 4 Mar 2026 15:04:53 +0100 Author: Florin Irion <florin.irion@enterprisedb.com>
Discussion: https://postgr.es/m/c6fff161-9aee-4290-9ada-71e21e4d84de@gmail.com M src/backend/commands/event_trigger.c
M src/test/modules/test_ddl_deparse/Makefile
A src/test/modules/test_ddl_deparse/expected/textsearch.out
M src/test/modules/test_ddl_deparse/meson.build
A src/test/modules/test_ddl_deparse/sql/textsearch.sql
Add test for row-locking and multixids with prepared transactions
commit : fa3b328e6dc576f725371779d86e9b220bcb9f62
author : Heikki Linnakangas <heikki.linnakangas@iki.fi>
date : Wed, 4 Mar 2026 11:29:02 +0200
committer: Heikki Linnakangas <heikki.linnakangas@iki.fi>
date : Wed, 4 Mar 2026 11:29:02 +0200 This is a repro for the issue fixed in commit ccae90abdb. Backpatch to
v17 like that commit, although that's a little arbitrary as this test
would work on older versions too.
Author: Sami Imseih <samimseih@gmail.com>
Discussion: https://www.postgresql.org/message-id/CAA5RZ0twq5bNMq0r0QNoopQnAEv+J3qJNCrLs7HVqTEntBhJ=g@mail.gmail.com
Backpatch-through: 17 M src/test/regress/expected/prepared_xacts.out
M src/test/regress/sql/prepared_xacts.sql
Skip prepared_xacts test if max_prepared_transactions < 2
commit : 201436c19f405b2ef4cd00499cc93928c2fc1953
author : Heikki Linnakangas <heikki.linnakangas@iki.fi>
date : Wed, 4 Mar 2026 11:06:43 +0200
committer: Heikki Linnakangas <heikki.linnakangas@iki.fi>
date : Wed, 4 Mar 2026 11:06:43 +0200 This reduces maintenance overhead, as we no longer need to update the
dummy expected output file every time the .sql file changes.
Discussion: https://www.postgresql.org/message-id/1009073.1772551323@sss.pgh.pa.us
Backpatch-through: 14 M src/test/regress/expected/prepared_xacts.out
M src/test/regress/expected/prepared_xacts_1.out
M src/test/regress/sql/prepared_xacts.sql
Fix rare instability in recovery TAP test 009_twophase
commit : 54e0a8fff1450391019f94102ae9517b10a5e454
author : Michael Paquier <michael@paquier.xyz>
date : Wed, 4 Mar 2026 16:30:56 +0900
committer: Michael Paquier <michael@paquier.xyz>
date : Wed, 4 Mar 2026 16:30:56 +0900 The phase of the test where we want to check that 2PC transactions
prepared on a primary can be committed on a promoted standby relied on
an immediate stop of the primary. This logic has a race condition: it
could be possible that some records (most likely standby snapshot
records) are generated on the primary before it finishes its shutdown,
without the promoted standby knowing about them. When the primary is
recycled as a new standby, the test could fail because of a timeline
fork caused by these extra records.
This fix takes care of the instability by doing a clean stop of the
primary instead of a teardown (aka immediate stop), so that all records
generated on the primary are sent to the promoted standby and flushed
there. There is no need for a teardown of the primary in this test
scenario: the commit of 2PC transactions on a promoted standby does not
depend on the state of the primary, only on that of the standby.
This race is very hard to hit in practice, even slow buildfarm members
like skink have a very low rate of reproduction. Alexander Lakhin has
come up with a recipe to improve the reproduction rate a lot:
- Enable -DWAL_DEBUG.
- Patch the bgwriter so that standby snapshots are generated every
millisecond.
- Run 009_twophase tests under heavy parallelism.
With this method, the failure appears after a couple of iterations.
With the fix in place, I have been able to run more than 50 iterations
of the parallel test sequence, without seeing a failure.
Issue introduced in 30820982b295, due to a copy-pasto coming from the
surrounding tests. Thanks also to Hayato Kuroda for digging into the
details of the failure. He proposed a fix different from the one in
this commit; unfortunately, it relied on injection points, a feature
only available in v17 and newer. The solution of this commit is
simpler, and can be applied to v14~v16.
Reported-by: Alexander Lakhin <exclusion@gmail.com>
Discussion: https://postgr.es/m/b0102688-6d6c-c86a-db79-e0e91d245b1a@gmail.com
Backpatch-through: 14 M src/test/recovery/t/009_twophase.pl
doc: Fix sentence of pg_walsummary page
commit : 1aacba1829e60a26c09d68f47254a28cbd48614e
author : Michael Paquier <michael@paquier.xyz>
date : Tue, 3 Mar 2026 15:27:55 +0900
committer: Michael Paquier <michael@paquier.xyz>
date : Tue, 3 Mar 2026 15:27:55 +0900 Author: Peter Smith <smithpb2250@gmail.com>
Reviewed-by: Chao Li <li.evan.chao@gmail.com>
Reviewed-by: Robert Treat <rob@xzilla.net>
Discussion: https://postgr.es/m/CAHut+PvfYBL-ppX-i8DPeRu7cakYCZz+QYBhrmQzicx7z_Tj5w@mail.gmail.com
Backpatch-through: 17 M doc/src/sgml/ref/pg_walsummary.sgml
doc: Clarify that empty COMMENT string removes the comment.
commit : 47ad672a76d30110254561960f1fe33e453cf8fe
author : Fujii Masao <fujii@postgresql.org>
date : Tue, 3 Mar 2026 14:45:52 +0900
committer: Fujii Masao <fujii@postgresql.org>
date : Tue, 3 Mar 2026 14:45:52 +0900 Clarify the documentation of COMMENT ON to state that specifying an empty
string is treated as NULL, meaning that the comment is removed.
This makes the behavior explicit and avoids possible confusion about how
empty strings are handled.
Also add regression test cases that use an empty string to remove a comment.
Backpatch to all supported versions.
Author: Chao Li <lic@highgo.com>
Reviewed-by: Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>
Reviewed-by: David G. Johnston <david.g.johnston@gmail.com>
Reviewed-by: Shengbin Zhao <zshengbin91@gmail.com>
Reviewed-by: Jim Jones <jim.jones@uni-muenster.de>
Reviewed-by: zhangqiang <zhang_qiang81@163.com>
Reviewed-by: Fujii Masao <masao.fujii@gmail.com>
Discussion: https://postgr.es/m/26476097-B1C1-4BA8-AA92-0AD0B8EC7190@gmail.com
Backpatch-through: 14 M doc/src/sgml/ref/comment.sgml
M src/test/regress/expected/create_index.out
M src/test/regress/expected/create_role.out
M src/test/regress/sql/create_index.sql
M src/test/regress/sql/create_role.sql
basic_archive: Allow archive directory to be missing at startup.
commit : bde9ad31515a92ce23c9b9e113f08a71ca0f7dd1
author : Nathan Bossart <nathan@postgresql.org>
date : Mon, 2 Mar 2026 13:12:25 -0600
committer: Nathan Bossart <nathan@postgresql.org>
date : Mon, 2 Mar 2026 13:12:25 -0600 Presently, the GUC check hook for basic_archive.archive_directory
checks that the specified directory exists. Consequently, if the
directory does not exist at server startup, archiving will be stuck
indefinitely, even if it appears later. To fix, remove this check
from the hook so that archiving will resume automatically once the
directory is present. basic_archive must already be prepared to
deal with the directory disappearing at any time, so no additional
special handling is required.
Reported-by: Олег Самойлов <splarv@ya.ru>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Fujii Masao <masao.fujii@gmail.com>
Reviewed-by: Sergei Kornilov <sk@zsrv.org>
Discussion: https://postgr.es/m/73271769675212%40mail.yandex.ru
Backpatch-through: 15 M contrib/basic_archive/basic_archive.c
Fix OldestMemberMXactId and OldestVisibleMXactId array usage
commit : 0a50ef0943824d9eca83a5dd454cb18349069814
author : Heikki Linnakangas <heikki.linnakangas@iki.fi>
date : Mon, 2 Mar 2026 19:19:22 +0200
committer: Heikki Linnakangas <heikki.linnakangas@iki.fi>
date : Mon, 2 Mar 2026 19:19:22 +0200 Commit ab355e3a88 changed how the OldestMemberMXactId array is
indexed. It's no longer indexed by a synthetic dummyBackendId, but by
ProcNumber. The PGPROC entries for prepared xacts come after auxiliary
processes in the allProcs array, which rendered the calculation for
MaxOldestSlot and the indexes into the array incorrect. (The
OldestVisibleMXactId array is not used for prepared xacts, and thus
never accessed with ProcNumbers greater than MaxBackends, so this
only affects the OldestMemberMXactId array.)
As a result, a prepared xact would store its value past the end of the
OldestMemberMXactId array, overflowing into the OldestVisibleMXactId
array. That could cause a transaction's row lock to appear invisible
to other backends, or other such visibility issues. With a very small
max_connections setting, the store could even go beyond the
OldestVisibleMXactId array, stomping over the first element in the
BufferDescriptor array.
To fix, calculate the array sizes more precisely, and introduce helper
functions to calculate the array indexes correctly.
Author: Yura Sokolov <y.sokolov@postgrespro.ru>
Reviewed-by: Sami Imseih <samimseih@gmail.com>
Reviewed-by: Chao Li <li.evan.chao@gmail.com>
Discussion: https://www.postgresql.org/message-id/7acc94b0-ea82-4657-b1b0-77842cb7a60c@postgrespro.ru
Backpatch-through: 17 M src/backend/access/transam/multixact.c
M src/backend/access/transam/twophase.c
M src/backend/storage/lmgr/proc.c
M src/include/storage/proc.h
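The indexing problem can be sketched with small constant values. This is an illustrative model only, not PostgreSQL's real layout or the actual helper functions; every name and number below is invented.

```c
#include <assert.h>

/*
 * Illustrative model (not real PostgreSQL code) of the indexing bug:
 * in the allProcs array, PGPROC entries for prepared xacts come after
 * auxiliary processes, so a prepared xact's ProcNumber cannot be used
 * directly to index an array that only has slots for backends plus
 * prepared xacts.  All constants below are made up for the example.
 */
enum
{
    SKETCH_MAX_BACKENDS = 4,
    SKETCH_NUM_AUX_PROCS = 3,
    SKETCH_MAX_PREPARED_XACTS = 2,

    /* one slot per backend plus one per prepared xact */
    SKETCH_MAX_OLDEST_SLOT = SKETCH_MAX_BACKENDS + SKETCH_MAX_PREPARED_XACTS
};

/*
 * Map a ProcNumber to its slot in an OldestMemberMXactId-style array.
 * ProcNumbers are laid out as: backends, then auxiliary processes,
 * then prepared xacts.
 */
static int
sketch_mxact_slot(int proc_number)
{
    if (proc_number < SKETCH_MAX_BACKENDS)
        return proc_number;     /* regular backend */

    /*
     * Prepared xact: skip over the auxiliary-process range.  Using
     * proc_number directly here (the pre-fix behavior) would index
     * past the end of the array.
     */
    return proc_number - SKETCH_NUM_AUX_PROCS;
}
```

In this model the first prepared xact has ProcNumber 7 but must land in slot 4; indexing with the raw ProcNumber would overflow the 6-slot array, which mirrors the overflow into the adjacent arrays described above.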
In pg_dumpall, don't skip role GRANTs with dangling grantor OIDs.
commit : b09158cc776c48e551113e1a579c1c982d968c2f
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Mon, 2 Mar 2026 11:14:58 -0500
committer: Tom Lane <tgl@sss.pgh.pa.us>
date : Mon, 2 Mar 2026 11:14:58 -0500 In commits 29d75b25b et al, I made pg_dumpall's dumpRoleMembership
logic treat a dangling grantor OID the same as dangling role and
member OIDs: print a warning and skip emitting the GRANT. This wasn't
terribly well thought out; instead, we should handle the case by
emitting the GRANT without the GRANTED BY clause. When the source
database is pre-v16, such cases are somewhat expected because those
versions didn't prevent dropping the grantor role; so don't even
print a warning that we did this. (This change therefore restores
pg_dumpall's pre-v16 behavior for these cases.) The case is not
expected in >= v16, so then we do print a warning, but soldiering on
with no GRANTED BY clause still seems like a reasonable strategy.
Per complaint from Robert Haas that we were now dropping GRANTs
altogether in easily-reachable scenarios.
Reported-by: Robert Haas <robertmhaas@gmail.com>
Author: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/CA+TgmoauoiW4ydDhdrseg+DD4Kwha=+TSZp18BrJeHKx3o1Fdw@mail.gmail.com
Backpatch-through: 16 M src/bin/pg_dump/pg_dumpall.c
Fix memory allocation size in RegisterExtensionExplainOption()
commit : 730c98d0382fb7336ed39e4961950c40c2356819
author : Michael Paquier <michael@paquier.xyz>
date : Mon, 2 Mar 2026 13:14:18 +0900
committer: Michael Paquier <michael@paquier.xyz>
date : Mon, 2 Mar 2026 13:14:18 +0900 The allocations used for the static array ExplainExtensionOptionArray,
which tracks a set of ExplainExtensionOption entries, used the size of
"char *" instead of the size of ExplainExtensionOption as the memory
consumed by one element, underestimating the memory required by half.
The initial allocation of ExplainExtensionOptionArray is sized to hold
16 elements before being reallocated, but with "char *" there was
enough space for only 8 ExplainExtensionOption elements, as 16 bytes
are required for each element. The backend would crash as soon as one
tried to register a 9th EXPLAIN option.
As far as I can see, the allocation formulas of GetExplainExtensionId()
have been copy-pasted to RegisterExtensionExplainOption(), but the
internal maths of the copy were not adjusted accordingly.
Oversight in c65bc2e1d14a.
Author: Joel Jacobson <joel@compiler.org>
Discussion: https://postgr.es/m/2a4bd2f5-2a2f-409f-8ac7-110dd3fad4fc@app.fastmail.com
Backpatch-through: 18 M src/backend/commands/explain_state.c
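This class of bug, computing an allocation size with the wrong element type, is easy to reproduce in miniature. A hedged sketch follows, with an invented struct standing in for the real ExplainExtensionOption definition.

```c
#include <assert.h>
#include <stddef.h>

/*
 * Sketch (not PostgreSQL code) of the allocation-size bug: the array
 * stores structs, so the per-element size must be the struct size, not
 * sizeof(char *) copy-pasted from a pointer-array formula.
 * sketch_option is an invented stand-in for ExplainExtensionOption.
 */
typedef struct
{
    const char *option_name;    /* pointer member */
    int         extension_id;   /* plus an int: 16 bytes on LP64 */
} sketch_option;

static size_t
sketch_buggy_alloc_size(int nelems)
{
    /* wrong: element size taken from a "char *" array */
    return nelems * sizeof(char *);
}

static size_t
sketch_fixed_alloc_size(int nelems)
{
    /* right: element size of what is actually stored */
    return nelems * sizeof(sketch_option);
}
```

On a typical 64-bit build the struct is twice the pointer size, so a 16-element request bought room for only 8 structs, and the 9th registration wrote past the allocation.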
test_custom_types: Test module with fancy custom data types
commit : 017e4e395d0dd2593e3bcef85c6c80a868877351
author : Michael Paquier <michael@paquier.xyz>
date : Mon, 2 Mar 2026 11:10:35 +0900
committer: Michael Paquier <michael@paquier.xyz>
date : Mon, 2 Mar 2026 11:10:35 +0900 This commit adds a new test module called "test_custom_types", that can
be used to stress code paths related to custom data type
implementations.
Currently, this is used as a test suite to validate the set of fixes
done in 3b7a6fa15720, which requires typanalyze callbacks that can
force very specific backend behaviors, such as:
- A typanalyze callback that returns "false" as its status, to mark a
failure in computing statistics.
- A typanalyze callback that returns "true" but lets the backend know
that no interesting stats could be computed, with stats_valid set to
"false".
This could be extended more in the future if more problems are found.
For simplicity, the module uses a fake int4 data type, which requires a
btree operator class to be usable with extended statistics. The type is
created by the extension, and its properties are altered in the test.
Like 3b7a6fa15720, this module is backpatched down to v14, for coverage
purposes.
Author: Michael Paquier <michael@paquier.xyz>
Reviewed-by: Chao Li <li.evan.chao@gmail.com>
Discussion: https://postgr.es/m/aaDrJsE1I5mrE-QF@paquier.xyz
Backpatch-through: 14 M src/test/modules/Makefile
M src/test/modules/meson.build
A src/test/modules/test_custom_types/.gitignore
A src/test/modules/test_custom_types/Makefile
A src/test/modules/test_custom_types/README
A src/test/modules/test_custom_types/expected/test_custom_types.out
A src/test/modules/test_custom_types/meson.build
A src/test/modules/test_custom_types/sql/test_custom_types.sql
A src/test/modules/test_custom_types/test_custom_types--1.0.sql
A src/test/modules/test_custom_types/test_custom_types.c
A src/test/modules/test_custom_types/test_custom_types.control
Fix set of issues with extended statistics on expressions
commit : 83671c0da04969494e4dcab1f05baa370b4e7cd9
author : Michael Paquier <michael@paquier.xyz>
date : Mon, 2 Mar 2026 09:38:40 +0900
committer: Michael Paquier <michael@paquier.xyz>
date : Mon, 2 Mar 2026 09:38:40 +0900 This commit addresses two defects regarding extended statistics on
expressions:
- When building extended statistics in lookup_var_attr_stats(), the call
to examine_attribute() did not account for the possibility of a NULL
return value. This can happen depending on the behavior of a typanalyze
callback — for example, if the callback returns false, if no rows are
sampled, or if no statistics are computed. In such cases, the code
attempted to build MCV, dependency, and ndistinct statistics using a
NULL pointer, incorrectly assuming valid statistics were available,
which could lead to a server crash.
- When loading extended statistics for expressions,
statext_expressions_load() did not account for NULL entries in the
pg_statistic array storing expression statistics. Such NULL entries can
be generated when statistics collection fails for an expression, as may
occur during the final step of serialize_expr_stats(). An extended
statistics object defining N expressions requires N corresponding
elements in the pg_statistic array stored for the expressions, and some
of these elements can be NULL. This situation is reachable when a
typanalyze callback returns true, but sets stats_valid to false to
indicate that no useful statistics could be computed.
As far as I have analyzed, these scenarios cannot occur with in-core
typanalyze callbacks, but they can be triggered by custom data types
with their own typanalyze implementations.
No tests are added in this commit. A follow-up commit will introduce a
test module that can be extended to cover similar edge cases if
additional issues are discovered. This takes care of the core of the
problem.
Attribute and relation statistics already offer similar protections:
- ANALYZE detects and skips the build of invalid statistics.
- Invalid catalog data is handled defensively when loading statistics.
This issue has existed since support for extended statistics on
expressions was added in a4d75c86bf15, down to v14. Backpatch
to all supported stable branches.
Author: Michael Paquier <michael@paquier.xyz>
Reviewed-by: Corey Huinker <corey.huinker@gmail.com>
Reviewed-by: Chao Li <li.evan.chao@gmail.com>
Discussion: https://postgr.es/m/aaDrJsE1I5mrE-QF@paquier.xyz
Backpatch-through: 14 M src/backend/statistics/extended_stats.c
M src/backend/utils/adt/selfuncs.c
Don't flatten join alias Vars that are stored within a GROUP RTE.
commit : c2c1962a64b547412c88fa2728e4fa35e65f4c90
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Fri, 27 Feb 2026 12:54:02 -0500
committer: Tom Lane <tgl@sss.pgh.pa.us>
date : Fri, 27 Feb 2026 12:54:02 -0500 The RTE's groupexprs list is used for deparsing views, and for that
usage it must contain the original alias Vars; else we can get
incorrect SQL output. But since commit 247dea89f,
parseCheckAggregates put the GROUP BY expressions through
flatten_join_alias_vars before building the RTE_GROUP RTE.
Changing the order of operations there is enough to fix it.
This patch unfortunately can do nothing for already-created views:
if they use a coding pattern that is subject to the bug, they will
deparse incorrectly and hence present a dump/reload hazard in the
future. The only fix is to recreate the view from the original SQL.
But the trouble cases seem to be quite narrow. AFAICT the output
was only wrong for "SELECT ... t1 LEFT JOIN t2 USING (x) GROUP BY x"
where t1.x and t2.x were not of identical data types and t1.x was
the side that required an implicit coercion. If there was no hidden
coercion, or if the join was plain, RIGHT, or FULL, the deparsed
output was uglier than intended but not functionally wrong.
Reported-by: Swirl Smog Dowry <swirl-smog-dowry@duck.com>
Author: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Richard Guo <guofenglinux@gmail.com>
Discussion: https://postgr.es/m/CA+-gibjCg_vjcq3hWTM0sLs3_TUZ6Q9rkv8+pe2yJrdh4o4uoQ@mail.gmail.com
Backpatch-through: 18 M src/backend/parser/parse_agg.c
M src/test/regress/expected/aggregates.out
M src/test/regress/sql/aggregates.sql
postgres_fdw: Fix thinko in comment for UserMappingPasswordRequired().
commit : be37f270d7646b39155d31b17d9ab66712e07aaf
author : Etsuro Fujita <efujita@postgresql.org>
date : Fri, 27 Feb 2026 17:05:02 +0900
committer: Etsuro Fujita <efujita@postgresql.org>
date : Fri, 27 Feb 2026 17:05:02 +0900 This commit also rephrases this comment to improve readability.
Oversight in commit 6136e94dc.
Reported-by: Etsuro Fujita <etsuro.fujita@gmail.com>
Author: Andreas Karlsson <andreas@proxel.se>
Co-authored-by: Etsuro Fujita <etsuro.fujita@gmail.com>
Discussion: https://postgr.es/m/CAPmGK16pDnM_wU3kmquPj-M9MYqG3y0BdntRZ0eytqbCaFY3WQ%40mail.gmail.com
Backpatch-through: 14 M contrib/postgres_fdw/connection.c
Yet another ltree fix for REL_18_STABLE.
commit : 53a57cae1c894964db54b84a7c055eae8b5a9f03
author : Jeff Davis <jdavis@postgresql.org>
date : Thu, 26 Feb 2026 15:19:31 -0800
committer: Jeff Davis <jdavis@postgresql.org>
date : Thu, 26 Feb 2026 15:19:31 -0800 Fix buildfarm failure from code that's only present in version 18,
introduced by commit b3c2a3d386.
Reported-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/739010.1772142042@sss.pgh.pa.us M contrib/ltree/lquery_op.c
Fix more multibyte issues in ltree.
commit : b3c2a3d386fa91421d81f51ff759fad4f31b6479
author : Jeff Davis <jdavis@postgresql.org>
date : Thu, 26 Feb 2026 12:23:51 -0800
committer: Jeff Davis <jdavis@postgresql.org>
date : Thu, 26 Feb 2026 12:23:51 -0800 Commit 84d5efa7e3 missed some multibyte issues caused by short-circuit
logic in the callers. The callers assumed that if the predicate string
is longer than the label string, then it couldn't possibly be a match,
but it can be when using case-insensitive matching (LVAR_INCASE) if
casefolding changes the byte length.
Fix by refactoring to get rid of the short-circuit logic as well as
the function pointer, and consolidate the logic in a replacement
function ltree_label_match().
Discussion: https://postgr.es/m/02c6ef6cf56a5013ede61ad03c7a26affd27d449.camel@j-davis.com
Backpatch-through: 14 M contrib/ltree/lquery_op.c
M contrib/ltree/ltree.h
M contrib/ltree/ltxtquery_op.c
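The length-based short-circuit described above can be demonstrated with a concrete Unicode pair. This is a standalone illustration, not ltree code: U+0130 ("İ") occupies two bytes in UTF-8, while its full lowercase mapping, "i" followed by U+0307 COMBINING DOT ABOVE, occupies three.

```c
#include <assert.h>
#include <string.h>

/*
 * Standalone illustration (not ltree code): case folding can change a
 * string's byte length in a multibyte encoding, so "predicate longer
 * than label" does not imply "no case-insensitive match".
 */

/* U+0130 LATIN CAPITAL LETTER I WITH DOT ABOVE, UTF-8: 2 bytes */
static const char label_upper[] = "\xc4\xb0";

/* its full lowercase mapping, "i" + U+0307, UTF-8: 3 bytes */
static const char pred_lower[] = "i\xcc\x87";

/* The removed short-circuit: reject when the predicate is longer. */
static int
sketch_length_prefilter(const char *pred, const char *label)
{
    return strlen(pred) <= strlen(label);
}
```

Here the folded forms are equal, yet the prefilter rejects the pair because the predicate is one byte longer than the label, which is exactly the class of false negative the refactoring into ltree_label_match() removes.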
Fix memory leaks in pg_locale_icu.c.
commit : 4abf63c62bfbff82a39767655f6e6de6f884c276
author : Jeff Davis <jdavis@postgresql.org>
date : Thu, 29 Jan 2026 10:37:09 -0800
committer: Jeff Davis <jdavis@postgresql.org>
date : Thu, 29 Jan 2026 10:37:09 -0800 The backports to branches prior to 18 require minor modifications
due to code refactoring.
Discussion: https://postgr.es/m/e2b7a0a88aaadded7e2d19f42d5ab03c9e182ad8.camel@j-davis.com
Backpatch-through: 16 M src/backend/utils/adt/pg_locale_icu.c
pg_dump: Preserve NO INHERIT on NOT NULL on inheritance children
commit : c3c8b63d76be16d7fbd9d309e8b640c43554c453
author : Álvaro Herrera <alvherre@kurilemu.de>
date : Thu, 26 Feb 2026 11:50:26 +0100
committer: Álvaro Herrera <alvherre@kurilemu.de>
date : Thu, 26 Feb 2026 11:50:26 +0100 When the constraint is printed without the column, we were not printing
the NO INHERIT flag.
Author: Jian He <jian.universality@gmail.com>
Backpatch-through: 18
Discussion: https://postgr.es/m/CACJufxEDEOO09G+OQFr=HmFr9ZDLZbRoV7+pj58h3_WeJ_K5UQ@mail.gmail.com M src/bin/pg_dump/pg_dump.c
M src/bin/pg_dump/t/002_pg_dump.pl
EUC_CN, EUC_JP, EUC_KR, EUC_TW: Skip U+00A0 tests instead of failing.
commit : 95e0fac1ee76e39fd5aee8d6e0e71a8ed36b32dd
author : Noah Misch <noah@leadboat.com>
date : Wed, 25 Feb 2026 18:13:22 -0800
committer: Noah Misch <noah@leadboat.com>
date : Wed, 25 Feb 2026 18:13:22 -0800 Settings that ran the new test euc_kr.sql to completion would fail these
older src/pl tests. Use alternative expected outputs, for which psql
\gset and \if have reduced the maintenance burden. This fixes
"LANG=ko_KR.euckr LC_MESSAGES=C make check-world". (LC_MESSAGES=C fixes
IO::Pty usage in tests 010_tab_completion and 001_password.) That file
is new in commit c67bef3f3252a3a38bf347f9f119944176a796ce. Back-patch
to v14, like that commit.
Discussion: https://postgr.es/m/20260217184758.da.noahmisch@microsoft.com
Backpatch-through: 14 M src/pl/plperl/GNUmakefile
M src/pl/plperl/expected/plperl_elog.out
M src/pl/plperl/expected/plperl_elog_1.out
A src/pl/plperl/expected/plperl_unicode.out
A src/pl/plperl/expected/plperl_unicode_1.out
M src/pl/plperl/meson.build
M src/pl/plperl/sql/plperl_elog.sql
A src/pl/plperl/sql/plperl_unicode.sql
M src/pl/plpython/expected/plpython_unicode.out
A src/pl/plpython/expected/plpython_unicode_1.out
M src/pl/plpython/sql/plpython_unicode.sql
M src/pl/tcl/expected/pltcl_unicode.out
A src/pl/tcl/expected/pltcl_unicode_1.out
M src/pl/tcl/sql/pltcl_unicode.sql
doc: Clarify INCLUDING COMMENTS behavior in CREATE TABLE LIKE.
commit : 315b0f3e87ffea8dca374151acbfdd7ff039acf4
author : Fujii Masao <fujii@postgresql.org>
date : Thu, 26 Feb 2026 09:01:52 +0900
committer: Fujii Masao <fujii@postgresql.org>
date : Thu, 26 Feb 2026 09:01:52 +0900 The documentation for the INCLUDING COMMENTS option of the LIKE clause
in CREATE TABLE was inaccurate and incomplete. It stated that comments for
copied columns, constraints, and indexes are copied, but among
constraints, in reality only comments on CHECK and NOT NULL constraints
are copied; comments on other constraints (such as primary keys) are not.
In addition, comments on extended statistics are copied, but this was not
documented.
The CREATE FOREIGN TABLE documentation had a similar omission: comments
on extended statistics are also copied, but this was not mentioned.
This commit updates the documentation to clarify the actual behavior.
The CREATE TABLE reference now specifies that comments on copied columns,
CHECK constraints, NOT NULL constraints, indexes, and extended statistics are
copied. The CREATE FOREIGN TABLE reference now notes that comments on
extended statistics are copied as well.
Backpatch to all supported versions. Documentation updates related to
CREATE FOREIGN TABLE LIKE and NOT NULL constraint comment copying are
not applied to v17 and earlier, since those features were introduced in v18.
Author: Fujii Masao <masao.fujii@gmail.com>
Reviewed-by: Matheus Alcantara <matheusssilv97@gmail.com>
Discussion: https://postgr.es/m/CAHGQGwHSOSGcaYDvHF8EYCUCfGPjbRwGFsJ23cx5KbJ1X6JouQ@mail.gmail.com
Backpatch-through: 14 M doc/src/sgml/ref/create_foreign_table.sgml
M doc/src/sgml/ref/create_table.sgml
Fix ProcWakeup() resetting wrong waitStart field.
commit : 0d3be050178466970644caaaa4d79848a1fcd630
author : Fujii Masao <fujii@postgresql.org>
date : Thu, 26 Feb 2026 08:46:12 +0900
committer: Fujii Masao <fujii@postgresql.org>
date : Thu, 26 Feb 2026 08:46:12 +0900 Previously, when one process woke another that was waiting on a lock,
ProcWakeup() incorrectly cleared its own waitStart field (i.e.,
MyProc->waitStart) instead of that of the process being awakened.
As a result, the awakened process retained a stale lock-wait start timestamp.
This did not cause user-visible issues. pg_locks.waitstart was reported as
NULL for the awakened process (i.e., when pg_locks.granted is true),
regardless of the waitStart value.
This bug was introduced by commit 46d6e5f56790.
This commit fixes this by resetting the waitStart field of the process
being awakened in ProcWakeup().
Backpatch to all supported branches.
Reported-by: Chao Li <li.evan.chao@gmail.com>
Author: Chao Li <li.evan.chao@gmail.com>
Reviewed-by: ji xu <thanksgreed@gmail.com>
Reviewed-by: Álvaro Herrera <alvherre@kurilemu.de>
Discussion: https://postgr.es/m/537BD852-EC61-4D25-AB55-BE8BE46D07D7@gmail.com
Backpatch-through: 14 M src/backend/storage/lmgr/proc.c
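The fix is a one-word change in spirit: reset the awakened process's field, not the caller's. A minimal sketch with invented names follows; it is not PostgreSQL's PGPROC or ProcWakeup() code.

```c
#include <assert.h>
#include <stdint.h>

/*
 * Minimal sketch (not PostgreSQL code) of the ProcWakeup() fix: the
 * waker must clear the waitStart of the process being awakened, not
 * its own (the buggy code effectively used MyProc->waitStart).
 */
typedef struct
{
    int64_t waitStart;          /* lock-wait start; 0 = not waiting */
} sketch_proc;

static sketch_proc *sketch_MyProc;  /* the waker's own entry */

static void
sketch_proc_wakeup(sketch_proc *proc)
{
    /* buggy form: sketch_MyProc->waitStart = 0; */
    proc->waitStart = 0;        /* fixed: clear the awakened process */
}
```

After waking a sleeper, the sleeper's stale timestamp is gone while the waker's own state is untouched; the buggy form left the sleeper with a stale value, matching the symptom described above.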
Allow PG_PRINTF_ATTRIBUTE to be different in C and C++ code.
commit : 753d5eee46d1d9c2c7f28192ae62d5da9d7d1408
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Wed, 25 Feb 2026 11:57:26 -0500
committer: Tom Lane <tgl@sss.pgh.pa.us>
date : Wed, 25 Feb 2026 11:57:26 -0500 Although clang claims to be compatible with gcc's printf format
archetypes, this appears to be a falsehood: it likes __syslog__
(which gcc does not, on most platforms) and doesn't accept
gnu_printf. This means that if you try to use gcc with clang++
or clang with g++, you get compiler warnings when compiling
printf-like calls in our C++ code. This has been true for quite a
while, but it's gotten more annoying with the recent appearance
of several buildfarm members that are configured like this.
To fix, run separate probes for the format archetype to use with the
C and C++ compilers, and conditionally define PG_PRINTF_ATTRIBUTE
depending on __cplusplus.
(We could alternatively insist that you not mix-and-match C and
C++ compilers; but if the case works otherwise, this is a poor
reason to insist on that.)
This commit back-patches 0909380e4 into supported branches.
Discussion: https://postgr.es/m/986485.1764825548@sss.pgh.pa.us
Discussion: https://postgr.es/m/3988414.1771950285@sss.pgh.pa.us
Backpatch-through: 14-18 M config/c-compiler.m4
M configure
M configure.ac
M meson.build
M src/include/c.h
M src/include/pg_config.h.in
Fix some cases of indirectly casting away const.
commit : de77775a7b5031a2eddd9fa758e2139c08ae68ec
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Wed, 25 Feb 2026 11:19:50 -0500
committer: Tom Lane <tgl@sss.pgh.pa.us>
date : Wed, 25 Feb 2026 11:19:50 -0500 Newest versions of gcc+glibc are able to detect cases where code
implicitly casts away const by assigning the result of strchr() or
a similar function applied to a "const char *" value to a target
variable that's just "char *". This of course creates a hazard of
not getting a compiler warning about scribbling on a string one was
not supposed to, so fixing up such cases is good.
This patch fixes a dozen or so places where we were doing that.
Most are trivial additions of "const" to the target variable,
since no actually-hazardous change was occurring.
Thanks to Bertrand Drouvot for finding a couple more spots than
I had.
This commit back-patches relevant portions of 8f1791c61 and
9f7565c6c into supported branches. However, there are two
places in ecpg (in v18 only) where a proper fix is more
complicated than seems appropriate for a back-patch. I opted
to silence those two warnings by adding casts.
Author: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Bertrand Drouvot <bertranddrouvot.pg@gmail.com>
Discussion: https://postgr.es/m/1324889.1764886170@sss.pgh.pa.us
Discussion: https://postgr.es/m/3988414.1771950285@sss.pgh.pa.us
Backpatch-through: 14-18 M src/backend/catalog/pg_type.c
M src/backend/tsearch/spell.c
M src/backend/utils/adt/formatting.c
M src/backend/utils/adt/pg_locale.c
M src/backend/utils/adt/xid8funcs.c
M src/bin/pg_waldump/pg_waldump.c
M src/bin/pgbench/pgbench.c
M src/common/compression.c
M src/interfaces/ecpg/pgtypeslib/datetime.c
M src/interfaces/ecpg/preproc/ecpg.trailer
M src/interfaces/ecpg/preproc/variable.c
M src/port/chklocale.c
M src/port/getopt.c
M src/port/getopt_long.c
M src/port/win32setlocale.c
M src/test/regress/pg_regress.c
M src/timezone/zic.c
Stabilize output of new isolation test insert-conflict-do-update-4.
commit : aeaf2fc0ddbfb1e4a706555e33d9b4165d110a1c
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Wed, 25 Feb 2026 10:51:42 -0500
committer: Tom Lane <tgl@sss.pgh.pa.us>
date : Wed, 25 Feb 2026 10:51:42 -0500 The test added by commit 4b760a181 assumed that a table's physical
row order would be predictable after an UPDATE. But a non-heap table
AM might produce some other order. Even with heap AM, the assumption
seems risky; compare a3fd53bab for instance. Adding an ORDER BY is
cheap insurance and doesn't break any goal of the test.
Author: Pavel Borisov <pashkin.elfe@gmail.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/CALT9ZEHcE6tpvumScYPO6pGk_ASjTjWojLkodHnk33dvRPHXVw@mail.gmail.com
Backpatch-through: 14 M src/test/isolation/expected/insert-conflict-do-update-4.out
M src/test/isolation/specs/insert-conflict-do-update-4.spec
Fix unsafe RTE_GROUP removal in simplify_EXISTS_query
commit : 1c7358099cbe77bc622bc817ec4e9d919ca91fcf
author : Richard Guo <rguo@postgresql.org>
date : Wed, 25 Feb 2026 11:13:21 +0900
committer: Richard Guo <rguo@postgresql.org>
date : Wed, 25 Feb 2026 11:13:21 +0900 When simplify_EXISTS_query removes the GROUP BY clauses from an EXISTS
subquery, it previously deleted the RTE_GROUP RTE directly from the
subquery's range table.
This approach is dangerous because deleting an RTE from the middle of
the rtable list shifts the index of any subsequent RTE, which can
silently corrupt any Var nodes in the query tree that reference those
later relations. (Currently, this direct removal has not caused
problems because the RTE_GROUP RTE happens to always be the last entry
in the rtable list. However, relying on that is extremely fragile and
seems like trouble waiting to happen.)
Instead of deleting the RTE_GROUP RTE, this patch converts it in place
to an RTE_RESULT entry and clears its groupexprs list. This preserves
the length and indexing of the rtable list, ensuring all Var
references remain intact.
Reported-by: Tom Lane <tgl@sss.pgh.pa.us>
Author: Richard Guo <guofenglinux@gmail.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/3472344.1771858107@sss.pgh.pa.us
Backpatch-through: 18 M src/backend/optimizer/plan/subselect.c
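The hazard of deleting from the middle of the rtable, and why the in-place conversion is safe, can be shown with a toy model. Nothing here is real planner code; the array and the single Var index are invented.

```c
#include <assert.h>

/*
 * Toy model (not planner code) of why the RTE_GROUP entry is converted
 * in place rather than deleted: Vars reference RTEs by 1-based index
 * into the rtable list, so removing a middle entry would shift every
 * later index and silently retarget those Vars.
 */
enum sketch_rte_kind
{
    SKETCH_RTE_RELATION,
    SKETCH_RTE_GROUP,
    SKETCH_RTE_RESULT
};

static enum sketch_rte_kind sketch_rtable[] = {
    SKETCH_RTE_RELATION,        /* rti 1 */
    SKETCH_RTE_GROUP,           /* rti 2: the entry to neutralize */
    SKETCH_RTE_RELATION         /* rti 3: referenced by some Var */
};

static void
sketch_neutralize_group_rte(int rti)
{
    /* keep the list length; only the entry's type changes */
    sketch_rtable[rti - 1] = SKETCH_RTE_RESULT;
}
```

After neutralizing entry 2, a Var pointing at rti 3 still finds the relation it meant; deleting entry 2 instead would have shifted the old entry 3 down to index 2 and left that Var pointing past the end of the list.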
pg_upgrade: Use max_protocol_version=3.0 for older servers
commit : 1b2773179f31b25bc560e06a35970fdd9b1a6d90
author : Jacob Champion <jchampion@postgresql.org>
date : Tue, 24 Feb 2026 14:01:41 -0800
committer: Jacob Champion <jchampion@postgresql.org>
date : Tue, 24 Feb 2026 14:01:41 -0800 The grease patch in 4966bd3ed found its first problem: prior to the
February 2018 patch releases, no server knew how to negotiate protocol
versions, so pg_upgrade needs to take that into account when speaking to
those older servers.
This will be true even after the grease feature is reverted; we don't
need anyone to trip over this again in the future. Backpatch so that all
supported versions of pg_upgrade can gracefully handle an update to the
default protocol version. (This is needed for any distributions that
link older binaries against newer libpqs, such as Debian.) Branches
prior to 18 need an additional version check, for the existence of
max_protocol_version.
Per buildfarm member crake.
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/CAOYmi%2B%3D4QhCjssfNEoZVK8LPtWxnfkwT5p-PAeoxtG9gpNjqOQ%40mail.gmail.com
Backpatch-through: 14 M src/bin/pg_upgrade/dump.c
M src/bin/pg_upgrade/pg_upgrade.h
M src/bin/pg_upgrade/server.c
M src/bin/pg_upgrade/task.c
M src/bin/pg_upgrade/version.c