PostgreSQL 16.11 commit log

Stamp 16.11.

commit   : d61dd817be70749d14e982a369e97fdda9d5cba6    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 10 Nov 2025 16:55:22 -0500    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 10 Nov 2025 16:55:22 -0500    

M configure
M configure.ac
M meson.build

Last-minute updates for release notes.

commit   : b2e70cc348992012cab140172d0aaf9a3bec6b0b    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 10 Nov 2025 13:36:13 -0500    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 10 Nov 2025 13:36:13 -0500    

Security: CVE-2025-12817, CVE-2025-12818  

M doc/src/sgml/release-16.sgml

Check for CREATE privilege on the schema in CREATE STATISTICS.

commit   : d20abb5876ab61a627d80131b2cb78d9652557e3    
  
author   : Nathan Bossart <nathan@postgresql.org>    
date     : Mon, 10 Nov 2025 09:00:00 -0600    
  
committer: Nathan Bossart <nathan@postgresql.org>    
date     : Mon, 10 Nov 2025 09:00:00 -0600    

This omission allowed table owners to create statistics in any  
schema, potentially leading to unexpected naming conflicts.  For  
ALTER TABLE commands that require re-creating statistics objects,  
skip this check in case the user has since lost CREATE on the  
schema.  The addition of a second parameter to CreateStatistics()  
breaks ABI compatibility, but we are unaware of any impacted  
third-party code.  
  
Reported-by: Jelte Fennema-Nio <postgres@jeltef.nl>  
Author: Jelte Fennema-Nio <postgres@jeltef.nl>  
Co-authored-by: Nathan Bossart <nathandbossart@gmail.com>  
Reviewed-by: Noah Misch <noah@leadboat.com>  
Reviewed-by: Álvaro Herrera <alvherre@kurilemu.de>  
Security: CVE-2025-12817  
Backpatch-through: 13  

M src/backend/commands/statscmds.c
M src/backend/commands/tablecmds.c
M src/backend/tcop/utility.c
M src/include/commands/defrem.h
M src/test/regress/expected/stats_ext.out
M src/test/regress/sql/stats_ext.sql

libpq: Prevent some overflows of int/size_t

commit   : 585fd9b3c617db9adeb717c3def6f64aad2135cc    
  
author   : Jacob Champion <jchampion@postgresql.org>    
date     : Mon, 10 Nov 2025 06:03:04 -0800    
  
committer: Jacob Champion <jchampion@postgresql.org>    
date     : Mon, 10 Nov 2025 06:03:04 -0800    

Several functions could overflow their size calculations, when presented  
with very large inputs from remote and/or untrusted locations, and then  
allocate buffers that were too small to hold the intended contents.  
  
Switch from int to size_t where appropriate, and check for overflow  
conditions when the inputs could have plausibly originated outside of  
the libpq trust boundary. (Overflows from within the trust boundary are  
still possible, but these will be fixed separately.) A version of  
add_size() is ported from the backend to assist with code that performs  
more complicated concatenation.  
  
Reported-by: Aleksey Solovev (Positive Technologies)  
Reviewed-by: Noah Misch <noah@leadboat.com>  
Reviewed-by: Álvaro Herrera <alvherre@kurilemu.de>  
Security: CVE-2025-12818  
Backpatch-through: 13  

M src/interfaces/libpq/fe-connect.c
M src/interfaces/libpq/fe-exec.c
M src/interfaces/libpq/fe-print.c
M src/interfaces/libpq/fe-protocol3.c
M src/interfaces/libpq/libpq-int.h
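
To illustrate the libpq fix above: the overflow checks amount to refusing any size computation that would wrap before allocating. A minimal standalone sketch, assuming a helper in the spirit of the backend's add_size(); the name and shape are illustrative, not the actual libpq code.

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    /*
     * Hypothetical helper: report failure instead of silently wrapping when
     * the sum of two size_t values would overflow, so the caller never
     * allocates an under-sized buffer for remote-supplied lengths.
     */
    static bool
    checked_add_size(size_t a, size_t b, size_t *result)
    {
        if (b > SIZE_MAX - a)
            return false;           /* sum would overflow size_t */
        *result = a + b;
        return true;
    }

    int
    main(void)
    {
        size_t total;

        if (!checked_add_size((size_t) -1, 16, &total))
            fprintf(stderr, "size overflow detected\n");
        return 0;
    }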

Translation updates

commit   : 45367761a02b090673b4f21585eb30d21be9c8eb    
  
author   : Peter Eisentraut <peter@eisentraut.org>    
date     : Mon, 10 Nov 2025 13:04:09 +0100    
  
committer: Peter Eisentraut <peter@eisentraut.org>    
date     : Mon, 10 Nov 2025 13:04:09 +0100    

Source-Git-URL: https://git.postgresql.org/git/pgtranslation/messages.git  
Source-Git-Hash: d3bc33cce36158257311e5cfa36c97209f37dedc  

M src/backend/po/de.po
M src/backend/po/es.po
M src/backend/po/ja.po
M src/backend/po/ko.po
M src/backend/po/ru.po
M src/backend/po/sv.po
M src/bin/initdb/po/es.po
M src/bin/initdb/po/ru.po
M src/bin/pg_amcheck/po/es.po
M src/bin/pg_amcheck/po/ko.po
M src/bin/pg_archivecleanup/po/es.po
M src/bin/pg_archivecleanup/po/ru.po
M src/bin/pg_basebackup/po/es.po
M src/bin/pg_basebackup/po/ru.po
M src/bin/pg_checksums/po/es.po
M src/bin/pg_checksums/po/ru.po
M src/bin/pg_config/po/es.po
M src/bin/pg_controldata/po/es.po
M src/bin/pg_controldata/po/ru.po
M src/bin/pg_ctl/po/es.po
M src/bin/pg_ctl/po/ru.po
M src/bin/pg_dump/po/de.po
M src/bin/pg_dump/po/es.po
M src/bin/pg_dump/po/fr.po
M src/bin/pg_dump/po/ja.po
M src/bin/pg_dump/po/ru.po
M src/bin/pg_dump/po/sv.po
M src/bin/pg_resetwal/po/es.po
M src/bin/pg_resetwal/po/ru.po
M src/bin/pg_rewind/po/es.po
M src/bin/pg_rewind/po/ru.po
M src/bin/pg_test_fsync/po/es.po
M src/bin/pg_test_timing/po/es.po
M src/bin/pg_upgrade/po/es.po
M src/bin/pg_upgrade/po/fr.po
M src/bin/pg_upgrade/po/ja.po
M src/bin/pg_verifybackup/po/es.po
M src/bin/pg_verifybackup/po/ru.po
M src/bin/pg_waldump/po/es.po
M src/bin/pg_waldump/po/ru.po
M src/bin/psql/po/de.po
M src/bin/psql/po/es.po
M src/bin/psql/po/fr.po
M src/bin/psql/po/ja.po
M src/bin/psql/po/ru.po
M src/bin/psql/po/sv.po
M src/bin/scripts/po/es.po
M src/bin/scripts/po/ru.po
M src/interfaces/ecpg/ecpglib/po/es.po
M src/interfaces/ecpg/preproc/po/es.po
M src/interfaces/ecpg/preproc/po/ru.po
M src/interfaces/libpq/po/es.po
M src/interfaces/libpq/po/ru.po
M src/pl/plperl/po/es.po
M src/pl/plpgsql/src/po/es.po
M src/pl/plpython/po/es.po
M src/pl/tcl/po/es.po
M src/pl/tcl/po/ru.po

Release notes for 18.1, 17.7, 16.11, 15.15, 14.20, 13.23.

commit   : d8655b44e86571b13bb1d567a75a868d72633e30    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sun, 9 Nov 2025 12:30:08 -0500    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sun, 9 Nov 2025 12:30:08 -0500    

M doc/src/sgml/release-16.sgml

Fix generic read and write barriers for Clang.

commit   : 2f76ffe5e4721302dde68a4be4eae974a673864c    
  
author   : Thomas Munro <tmunro@postgresql.org>    
date     : Sat, 8 Nov 2025 12:25:45 +1300    
  
committer: Thomas Munro <tmunro@postgresql.org>    
date     : Sat, 8 Nov 2025 12:25:45 +1300    

generic-gcc.h maps our read and write barriers to C11 acquire and  
release fences using compiler builtins, for platforms where we don't  
have our own hand-rolled assembler.  This is apparently enough for GCC,  
but the C11 memory model is only defined in terms of atomic accesses,  
and our barriers for non-atomic, non-volatile accesses were not always  
respected under Clang's stricter interpretation of the standard.  
  
This explains the occasional breakage observed on new RISC-V + Clang  
animal greenfly in lock-free PgAioHandle manipulation code containing a  
repeating pattern of loads and read barriers.  The problem can also be  
observed in code generated for MIPS and LoongArch, though we aren't  
currently testing those with Clang, and on x86, though we use our own  
assembler there.  The scariest aspect is that we use the generic version  
on very common ARM systems, but it doesn't seem to reorder the relevant  
code there (or we'd have debugged this long ago).  
  
Fix by inserting an explicit compiler barrier.  It expands to an empty  
assembler block declared to have memory side-effects, so registers are  
flushed and reordering is prevented.  In those respects this is like the  
architecture-specific assembler versions, but the compiler is still in  
charge of generating the appropriate fence instruction.  Done for write  
barriers on principle, though concrete problems have only been observed  
with read barriers.  
  
Reported-by: Alexander Lakhin <exclusion@gmail.com>  
Tested-by: Alexander Lakhin <exclusion@gmail.com>  
Discussion: https://postgr.es/m/d79691be-22bd-457d-9d90-18033b78c40a%40gmail.com  
Backpatch-through: 13  

M src/include/port/atomics/generic-gcc.h
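
A rough illustration of the barrier fix above, with placeholder macro names rather than the real generic-gcc.h definitions: the C11-style fence is paired with an explicit compiler barrier (an empty asm block declared to clobber memory), so even plain non-atomic loads and stores cannot be reordered across that point by the compiler, while the fence still produces the hardware barrier instruction.

    #define demo_compiler_barrier() __asm__ __volatile__("" ::: "memory")

    #define demo_read_barrier() \
        do { \
            __atomic_thread_fence(__ATOMIC_ACQUIRE); \
            demo_compiler_barrier(); \
        } while (0)

    int
    main(void)
    {
        int flag = 1;

        demo_read_barrier();    /* later loads cannot be hoisted above this */
        return flag ? 0 : 1;
    }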

doc: Fix descriptions of some PGC_POSTMASTER parameters.

commit   : 9a882a5dbfe6c06072e19b056d97f92b99cdde61    
  
author   : Fujii Masao <fujii@postgresql.org>    
date     : Fri, 7 Nov 2025 14:57:10 +0900    
  
committer: Fujii Masao <fujii@postgresql.org>    
date     : Fri, 7 Nov 2025 14:57:10 +0900    

These parameters can only be set at server start because  
their context is PGC_POSTMASTER, but this information was missing  
or incorrectly documented. This commit adds or corrects  
that information for the following parameters:  
  
* debug_io_direct  
* dynamic_shared_memory_type  
* event_source  
* huge_pages  
* io_max_combine_limit  
* max_notify_queue_pages  
* shared_memory_type  
* track_commit_timestamp  
* wal_decode_buffer_size  
  
Backpatched to all supported branches.  
  
Author: Karina Litskevich <litskevichkarina@gmail.com>  
Reviewed-by: Chao Li <lic@highgo.com>  
Reviewed-by: Fujii Masao <masao.fujii@gmail.com>  
Discussion: https://postgr.es/m/CAHGQGwGfPzcin-_6XwPgVbWTOUFVZgHF5g9ROrwLUdCTfjy=0A@mail.gmail.com  
Backpatch-through: 13  

M doc/src/sgml/config.sgml

Introduce XLogRecPtrIsValid()

commit   : 723cc84db50a7b822f89c3ea90cbd39d530d1f70    
  
author   : Álvaro Herrera <alvherre@kurilemu.de>    
date     : Thu, 6 Nov 2025 19:08:29 +0100    
  
committer: Álvaro Herrera <alvherre@kurilemu.de>    
date     : Thu, 6 Nov 2025 19:08:29 +0100    

XLogRecPtrIsInvalid() is inconsistent with the affirmative form of  
macros used for other datatypes, and leads to awkward double negatives  
in a few places.  This commit introduces XLogRecPtrIsValid(), which  
allows code to be written more naturally.  
  
This patch only adds the new macro.  XLogRecPtrIsInvalid() is left in  
place, and all existing callers remain untouched.  This means all  
supported branches can accept hypothetical bug fixes that use the new  
macro, and at the same time any code that compiled with the original  
formulation will continue to silently compile just fine.  
  
Author: Bertrand Drouvot <bertranddrouvot.pg@gmail.com>  
Backpatch-through: 13  
Discussion: https://postgr.es/m/aQB7EvGqrbZXrMlg@ip-10-97-1-34.eu-west-3.compute.internal  

M src/include/access/xlogdefs.h
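
A minimal sketch of what the new macro boils down to, assuming the usual uint64 definition of XLogRecPtr; the exact formulation in xlogdefs.h may differ.

    #include <stdint.h>

    typedef uint64_t XLogRecPtr;

    #define InvalidXLogRecPtr       ((XLogRecPtr) 0)

    /* Existing negative form, left in place for compatibility. */
    #define XLogRecPtrIsInvalid(r)  ((r) == InvalidXLogRecPtr)

    /* New affirmative form, avoiding double negatives in callers. */
    #define XLogRecPtrIsValid(r)    ((r) != InvalidXLogRecPtr)

    int
    main(void)
    {
        XLogRecPtr ptr = InvalidXLogRecPtr;

        return XLogRecPtrIsValid(ptr) ? 1 : 0;
    }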

Disallow generated columns in COPY WHERE clause

commit   : 26958f4d99b16b1c638e6d43d46623e9c79579d5    
  
author   : Peter Eisentraut <peter@eisentraut.org>    
date     : Thu, 6 Nov 2025 11:52:47 +0100    
  
committer: Peter Eisentraut <peter@eisentraut.org>    
date     : Thu, 6 Nov 2025 11:52:47 +0100    

Stored generated columns are not yet computed when the filtering  
happens, so we need to prohibit them to avoid incorrect behavior.  
  
Co-authored-by: jian he <jian.universality@gmail.com>  
Reviewed-by: Kirill Reshke <reshkekirill@gmail.com>  
Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com>  
Discussion: https://www.postgresql.org/message-id/flat/CACJufxHb8YPQ095R_pYDr77W9XKNaXg5Rzy-WP525mkq+hRM3g@mail.gmail.com  

M src/backend/commands/copy.c
M src/test/regress/expected/generated.out
M src/test/regress/sql/generated.sql

Update obsolete comment in ExecScanReScan().

commit   : 98821eae9d3cdcdb1a9690a0b2fd0248dbc65a13    
  
author   : Etsuro Fujita <efujita@postgresql.org>    
date     : Thu, 6 Nov 2025 12:25:02 +0900    
  
committer: Etsuro Fujita <efujita@postgresql.org>    
date     : Thu, 6 Nov 2025 12:25:02 +0900    

Commit 27cc7cd2b removed the epqScanDone flag from the EState struct,  
and instead added an equivalent flag named relsubs_done to the EPQState  
struct; but it failed to update this comment.  
  
Author: Etsuro Fujita <etsuro.fujita@gmail.com>  
Discussion: https://postgr.es/m/CAPmGK152zJ3fU5avDT5udfL0namrDeVfMTL3dxdOXw28SOrycg%40mail.gmail.com  
Backpatch-through: 13  

M src/backend/executor/execScan.c

postgres_fdw: Add more test coverage for EvalPlanQual testing.

commit   : b432acd71d53e0efc4bb741a44be585477ebd187    
  
author   : Etsuro Fujita <efujita@postgresql.org>    
date     : Thu, 6 Nov 2025 12:15:03 +0900    
  
committer: Etsuro Fujita <efujita@postgresql.org>    
date     : Thu, 6 Nov 2025 12:15:03 +0900    

postgres_fdw supports EvalPlanQual testing by using the infrastructure  
provided by the core with the RecheckForeignScan callback routine (cf.  
commits 5fc4c26db and 385f337c9), but there has been no test coverage  
for that, except that recent commit 12609fbac, which fixed an issue in  
commit 385f337c9, added a test case to exercise only a code path added  
by that commit to the core infrastructure.  So let's add test cases to  
exercise other code paths as well.  
  
Like commit 12609fbac, back-patch to all supported branches.  
  
Reported-by: Masahiko Sawada <sawada.mshk@gmail.com>  
Author: Etsuro Fujita <etsuro.fujita@gmail.com>  
Discussion: https://postgr.es/m/CAPmGK15%2B6H%3DkDA%3D-y3Y28OAPY7fbAdyMosVofZZ%2BNc769epVTQ%40mail.gmail.com  
Backpatch-through: 13  

M contrib/postgres_fdw/expected/eval_plan_qual.out
M contrib/postgres_fdw/specs/eval_plan_qual.spec

ci: Add missing "set -e" to scripts run by su.

commit   : 4da1f66fae836c4579ea38fe6d1d8fa2e6e9faf2    
  
author   : Thomas Munro <tmunro@postgresql.org>    
date     : Thu, 6 Nov 2025 13:24:30 +1300    
  
committer: Thomas Munro <tmunro@postgresql.org>    
date     : Thu, 6 Nov 2025 13:24:30 +1300    

If any shell command fails, the whole script should fail.  To avoid  
future omissions, add this even for single-command scripts that use su  
with heredoc syntax, as they might be extended or copied-and-pasted.  
  
Extracted from a larger patch that wanted to use #error during  
compilation, leading to the diagnosis of this problem.  
  
Reviewed-by: Tristan Partin <tristan@partin.io> (earlier version)  
Discussion: https://postgr.es/m/DDZP25P4VZ48.3LWMZBGA1K9RH%40partin.io  
Backpatch-through: 15  

M .cirrus.tasks.yml

Avoid possible crash within libsanitizer.

commit   : c775bf048abdb625d862c94f945c2fa11ccf0907    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Wed, 5 Nov 2025 11:09:30 -0500    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Wed, 5 Nov 2025 11:09:30 -0500    

We've successfully used libsanitizer for a while with the undefined  
and alignment sanitizers, but with some other sanitizers (at least  
thread and hwaddress) it crashes due to internal recursion before  
it has fully initialized itself.  It turns out that that's due to the  
"__ubsan_default_options" hack installed by commit f686ae82f, and we  
can fix it by ensuring that __ubsan_default_options is built without  
any sanitizer instrumentation hooks.  
  
Reported-by: Emmanuel Sibi <emmanuelsibi.mec@gmail.com>  
Reported-by: Alexander Lakhin <exclusion@gmail.com>  
Diagnosed-by: Emmanuel Sibi <emmanuelsibi.mec@gmail.com>  
Fix-suggested-by: Jacob Champion <jacob.champion@enterprisedb.com>  
Author: Tom Lane <tgl@sss.pgh.pa.us>  
Discussion: https://postgr.es/m/F7543B04-E56C-4D68-A040-B14CCBAD38F1@gmail.com  
Discussion: https://postgr.es/m/dbf77bf7-6e54-ed8a-c4ae-d196eeb664ce@gmail.com  
Backpatch-through: 16  

M src/backend/main/main.c

Fix timing-dependent failure in recovery test 004_timeline_switch

commit   : c06162b639ea5be69f700e66b43d013c6946296c    
  
author   : Michael Paquier <michael@paquier.xyz>    
date     : Wed, 5 Nov 2025 16:48:26 +0900    
  
committer: Michael Paquier <michael@paquier.xyz>    
date     : Wed, 5 Nov 2025 16:48:26 +0900    

The test introduced by 17b2d5ec759c verifies that a WAL receiver  
survives across a timeline jump by searching the server logs for  
termination messages.  However, it called restart() before the timeline  
switch, which kills the WAL receiver and may log the exact message being  
checked, hence failing the test.  As TAP tests reuse the same log file  
across restarts, a rotate_logfile() is used before the restart so that the  
log matching check is not impacted by log entries generated by a  
previous shutdown.  
  
Recent changes to file handle inheritance altered I/O timing enough to  
make this fail consistently while testing another patch.  
  
While at it, this adds an extra check based on a PID comparison.  This  
test may lead to false positives as it could be possible that the WAL  
receiver has processed a timeline jump before the initial PID is  
grabbed, but it should be good enough in most cases.  
  
Like 17b2d5ec759c, backpatch down to v13.  
  
Author: Bryan Green <dbryan.green@gmail.com>  
Co-authored-by: Xuneng Zhou <xunengzhou@gmail.com>  
Discussion: https://postgr.es/m/9d00b597-d64a-4f1e-802e-90f9dc394c70@gmail.com  
Backpatch-through: 13  

M src/test/recovery/t/004_timeline_switch.pl

jit: Fix accidentally-harmless type confusion

commit   : 0909a2971ced9a894ecbb4753bc770aa0923c044    
  
author   : Andres Freund <andres@anarazel.de>    
date     : Tue, 4 Nov 2025 18:36:18 -0500    
  
committer: Andres Freund <andres@anarazel.de>    
date     : Tue, 4 Nov 2025 18:36:18 -0500    

In 2a0faed9d702, which added JIT compilation support for expressions, I  
accidentally used sizeof(LLVMBasicBlockRef *) instead of  
sizeof(LLVMBasicBlockRef) as part of computing the size of an allocation. That  
turns out to have no real negative consequences due to LLVMBasicBlockRef being  
a pointer itself (and thus having the same size). It still is wrong and  
confusing, so fix it.  
  
Reported by Coverity.  
  
Backpatch-through: 13  

M src/backend/jit/llvm/llvmjit_expr.c

Fix snapshot handling bug in recent BRIN fix

commit   : 20442cf5075da987dee42d7a31b1f5065d9d2c27    
  
author   : Álvaro Herrera <alvherre@kurilemu.de>    
date     : Tue, 4 Nov 2025 20:31:43 +0100    
  
committer: Álvaro Herrera <alvherre@kurilemu.de>    
date     : Tue, 4 Nov 2025 20:31:43 +0100    

Commit a95e3d84c0e0 added ActiveSnapshot push+pop when processing  
work-items (BRIN autosummarization), but forgot to handle the case of  
a transaction failing during the run, which drops the snapshot prematurely.  
Fix by making the pop conditional on an element being actually there.  
  
Author: Álvaro Herrera <alvherre@kurilemu.de>  
Backpatch-through: 13  
Discussion: https://postgr.es/m/202511041648.nofajnuddmwk@alvherre.pgsql  

M src/backend/postmaster/autovacuum.c

ci: debian: Switch to Debian Trixie release

commit   : b81489dd2c9b1ddf30e3cc815d09790fb8cbf9d3    
  
author   : Andres Freund <andres@anarazel.de>    
date     : Tue, 4 Nov 2025 13:25:42 -0500    
  
committer: Andres Freund <andres@anarazel.de>    
date     : Tue, 4 Nov 2025 13:25:42 -0500    

Debian Trixie CI images are generated now [1], so use them with the  
following changes:  
  
- detect_stack_use_after_return=0 option is added to the ASAN_OPTIONS  
  because ASAN uses a "shadow stack" to track stack variable lifetimes  
  and this confuses Postgres' stack depth check [2].  
  
- Perl is updated to a newer version (perl5.40-i386-linux-gnu).  

- LLVM-14 is no longer the default installation, so there is no need to  
  force the use of LLVM-16.  
  
- Switch MinGW CC/CXX to x86_64-w64-mingw32ucrt-* to fix build failure  
  from missing _iswctype_l in mingw-w64 v12 headers.  
  
[1] https://github.com/anarazel/pg-vm-images/commit/35a144793f  
[2] https://postgr.es/m/20240130212304.q66rquj5es4375ab%40awork3.anarazel.de  
  
Author: Nazir Bilal Yavuz <byavuz81@gmail.com>  
Discussion: https://postgr.es/m/CAN55FZ1_B1usTskAv+AYt1bA7abVd9YH6XrUUSbr-2Z0d5Wd8w@mail.gmail.com  
Backpatch: 15-, where CI support was added  

M .cirrus.tasks.yml

Backpatch: Fix warnings about declaration of environ on MinGW

commit   : 50fbd0945c57d3eee005dbae966138016da1d525    
  
author   : Andres Freund <andres@anarazel.de>    
date     : Tue, 4 Nov 2025 13:24:58 -0500    
  
committer: Andres Freund <andres@anarazel.de>    
date     : Tue, 4 Nov 2025 13:24:58 -0500    

Backpatch commit 7bc9a8bdd2d to 13-17. The motivation for backpatching is that  
we want to update CI to Debian Trixie. Trixie contains a newer mingw  
installation, which would trigger the warning addressed by 7bc9a8bdd2d. The  
risk of backpatching seems fairly low, given that it did not cause issues in  
the branches where the commit is already present.  
  
While CI is not present in 13-14, it seems better to be consistent across  
branches.  
  
Author: Thomas Munro <tmunro@postgresql.org>  
Discussion: https://postgr.es/m/o5yadhhmyjo53svzwvaocww6zkrp63i4f32cw3treuh46pxtza@hyqio5b2tkt6  
Backpatch-through: 13  

M src/backend/postmaster/postmaster.c
M src/backend/utils/misc/ps_status.c
M src/test/regress/regress.c

Have psql's "\? variables" show csv_fieldsep

commit   : 87fbbd48c2b8564783f22e076f63d90de099128c    
  
author   : Álvaro Herrera <alvherre@kurilemu.de>    
date     : Tue, 4 Nov 2025 17:30:44 +0100    
  
committer: Álvaro Herrera <alvherre@kurilemu.de>    
date     : Tue, 4 Nov 2025 17:30:44 +0100    

Accidental omission in commit aa2ba50c2c13.  There are too many lists of  
these variables ...  
  
Discussion: https://postgr.es/m/202511031738.eqaeaedpx5cr@alvherre.pgsql  

M src/bin/psql/help.c

Tighten check for generated column in partition key expression

commit   : 7180d56c5609166ac9bd7203e7ac0ad50bf400ce    
  
author   : Peter Eisentraut <peter@eisentraut.org>    
date     : Tue, 4 Nov 2025 14:31:57 +0100    
  
committer: Peter Eisentraut <peter@eisentraut.org>    
date     : Tue, 4 Nov 2025 14:31:57 +0100    

A generated column may end up being part of the partition key  
expression, if it's specified as an expression e.g. "(<generated  
column name>)" or if the partition key expression contains a whole-row  
reference, even though we do not allow a generated column to be part  
of the partition key expression.  Fix this hole.  
  
Co-authored-by: jian he <jian.universality@gmail.com>  
Co-authored-by: Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>  
Reviewed-by: Fujii Masao <masao.fujii@oss.nttdata.com>  
Discussion: https://www.postgresql.org/message-id/flat/CACJufxF%3DWDGthXSAQr9thYUsfx_1_t9E6N8tE3B8EqXcVoVfQw%40mail.gmail.com  

M src/backend/commands/tablecmds.c
M src/test/regress/expected/generated.out
M src/test/regress/sql/generated.sql

BRIN autosummarization may need a snapshot

commit   : 6ef33c8051ddfcfff32e5c788e8e7e2f5b10669b    
  
author   : Álvaro Herrera <alvherre@kurilemu.de>    
date     : Tue, 4 Nov 2025 13:23:26 +0100    
  
committer: Álvaro Herrera <alvherre@kurilemu.de>    
date     : Tue, 4 Nov 2025 13:23:26 +0100    

It's possible to define BRIN indexes on functions that require a  
snapshot to run, but the autosummarization feature introduced by commit  
7526e10224f0 fails to provide one.  This causes autovacuum to leave a  
BRIN placeholder tuple behind after a failed work-item execution, making  
such indexes less efficient.  Repair by obtaining a snapshot prior to  
running the task, and add a test to verify this behavior.  
  
Author: Álvaro Herrera <alvherre@kurilemu.de>  
Reported-by: Giovanni Fabris <giovanni.fabris@icon.it>  
Reported-by: Arthur Nascimento <tureba@gmail.com>  
Backpatch-through: 13  
Discussion: https://postgr.es/m/202511031106.h4fwyuyui6fz@alvherre.pgsql  

M src/backend/postmaster/autovacuum.c
M src/test/modules/brin/t/01_workitems.pl

Fix unconditional WAL receiver shutdown during stream-archive transition

commit   : 9b61096074af936cb3a55611b21b8b545f56cc1a    
  
author   : Michael Paquier <michael@paquier.xyz>    
date     : Tue, 4 Nov 2025 10:52:38 +0900    
  
committer: Michael Paquier <michael@paquier.xyz>    
date     : Tue, 4 Nov 2025 10:52:38 +0900    

Commit b4f584f9d2a1 (affecting v15~, later backpatched down to 13 as of  
3635a0a35aaf) introduced an unconditional WAL receiver shutdown when  
switching from streaming to archive WAL sources.  This causes problems  
during a timeline switch, when a WAL receiver enters WALRCV_WAITING  
state but remains alive, waiting for instructions.  
  
The unconditional shutdown can break some monitoring scenarios as the  
WAL receiver gets repeatedly terminated and re-spawned, causing  
pg_stat_wal_receiver.status to show a "streaming" instead of "waiting"  
status, masking the fact that the WAL receiver is waiting for a new TLI  
and a new LSN to be able to continue streaming.  
  
This commit changes the WAL receiver behavior so that the shutdown becomes  
conditional, with InstallXLogFileSegmentActive always being reset to  
prevent the regression fixed by b4f584f9d2a1: only terminate the WAL  
receiver when it is actively streaming (WALRCV_STREAMING,  
WALRCV_STARTING, or WALRCV_RESTARTING).  When in WALRCV_WAITING state,  
just reset the InstallXLogFileSegmentActive flag to allow archive  
restoration without killing the process.  WALRCV_STOPPED and  
WALRCV_STOPPING are not reachable states in this code path.  For the  
latter, the startup process is the one in charge of setting  
WALRCV_STOPPING via ShutdownWalRcv(), waiting for the WAL receiver to  
reach a WALRCV_STOPPED state after switching walRcvState, so  
WaitForWALToBecomeAvailable() cannot be reached while a WAL receiver is  
in a WALRCV_STOPPING state.  
  
A regression test is added to check that a WAL receiver is not stopped  
on timeline jump, that fails when the fix of this commit is reverted.  
  
Reported-by: Ryan Bird <ryanzxg@gmail.com>  
Author: Xuneng Zhou <xunengzhou@gmail.com>  
Reviewed-by: Noah Misch <noah@leadboat.com>  
Reviewed-by: Michael Paquier <michael@paquier.xyz>  
Discussion: https://postgr.es/m/19093-c4fff49a608f82a0@postgresql.org  
Backpatch-through: 13  

M src/backend/access/transam/xlog.c
M src/backend/access/transam/xlogrecovery.c
M src/include/access/xlog.h
M src/test/recovery/t/004_timeline_switch.pl
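
The conditional shutdown described above reduces to a small decision on the walreceiver state; a simplified stand-in for the actual xlog.c logic, using the state names from the message.

    #include <stdbool.h>

    typedef enum WalRcvState
    {
        WALRCV_STOPPED,
        WALRCV_STARTING,
        WALRCV_STREAMING,
        WALRCV_WAITING,
        WALRCV_RESTARTING,
        WALRCV_STOPPING
    } WalRcvState;

    /*
     * Only an actively-streaming WAL receiver is terminated when switching
     * to the archive source; a WALRCV_WAITING one stays alive, and the
     * caller merely resets InstallXLogFileSegmentActive so that archive
     * restoration can proceed.
     */
    static bool
    walreceiver_needs_shutdown(WalRcvState state)
    {
        return state == WALRCV_STREAMING ||
               state == WALRCV_STARTING ||
               state == WALRCV_RESTARTING;
    }

    int
    main(void)
    {
        return walreceiver_needs_shutdown(WALRCV_WAITING) ? 1 : 0;
    }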

Doc: cover index CONCURRENTLY causing errors in INSERT ... ON CONFLICT.

commit   : 4c2895629f6fface95085121809ce34c5e6d6503    
  
author   : Noah Misch <noah@leadboat.com>    
date     : Mon, 3 Nov 2025 12:57:09 -0800    
  
committer: Noah Misch <noah@leadboat.com>    
date     : Mon, 3 Nov 2025 12:57:09 -0800    

Author: Mikhail Nikalayeu <mihailnikalayeu@gmail.com>  
Reviewed-by: Noah Misch <noah@leadboat.com>  
Discussion: https://postgr.es/m/CANtu0ojXmqjmEzp-=aJSxjsdE76iAsRgHBoK0QtYHimb_mEfsg@mail.gmail.com  
Backpatch-through: 13  

M doc/src/sgml/ref/insert.sgml
M src/backend/optimizer/util/plancat.c

Avoid mixing void and integer in a conditional expression.

commit   : d5e9c7115027a310ee6c4023c340a9c4adc1f26c    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sun, 2 Nov 2025 12:30:44 -0500    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sun, 2 Nov 2025 12:30:44 -0500    

The C standard says that the second and third arguments of a  
conditional operator shall be both void type or both not-void  
type.  The Windows version of INTERRUPTS_PENDING_CONDITION()  
got this wrong.  It's pretty harmless because the result of  
the operator is ignored anyway, but apparently recent versions  
of MSVC have started issuing a warning about it.  Silence the  
warning by casting the dummy zero to void.  
  
Reported-by: Christian Ullrich <chris@chrullrich.net>  
Author: Bryan Green <dbryan.green@gmail.com>  
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>  
Discussion: https://postgr.es/m/cc4ef8db-f8dc-4347-8a22-e7ebf44c0308@chrullrich.net  
Backpatch-through: 13  

M src/include/miscadmin.h
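
A minimal sketch of the rule being applied, with illustrative names rather than the real INTERRUPTS_PENDING_CONDITION() macro: when one arm of a conditional expression is a void-returning call, the other arm must be void too, so the dummy zero is cast to void.

    #include <stdbool.h>

    static volatile bool interrupt_pending = false;

    static void
    handle_interrupt(void)
    {
        /* ... service the interrupt ... */
    }

    /* Both arms now have type void, which is what silences the MSVC warning. */
    #define MAYBE_HANDLE_INTERRUPT() \
        (interrupt_pending ? handle_interrupt() : (void) 0)

    int
    main(void)
    {
        MAYBE_HANDLE_INTERRUPT();
        return 0;
    }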

doc: rewrite random_page_cost description

commit   : d4e4cecace91bd91f77da1cf081df2dba1b76a41    
  
author   : Bruce Momjian <bruce@momjian.us>    
date     : Thu, 30 Oct 2025 19:11:53 -0400    
  
committer: Bruce Momjian <bruce@momjian.us>    
date     : Thu, 30 Oct 2025 19:11:53 -0400    

This removes some of the specifics of how the default was set, and adds  
a mention of latency as a reason the value is lower than the storage  
hardware might suggest.  It still mentions caching.  
  
Discussion: https://postgr.es/m/CAKAnmmK_nSPYr53LobUwQD59a-8U9GEC3XGJ43oaTYJq5nAOkw@mail.gmail.com  
  
Backpatch-through: 13  

M doc/src/sgml/config.sgml

ci: macos: Upgrade to Sequoia

commit   : fca4f5f62da0a17097d1326be9dc9d1545065271    
  
author   : Andres Freund <andres@anarazel.de>    
date     : Thu, 30 Oct 2025 16:08:52 -0400    
  
committer: Andres Freund <andres@anarazel.de>    
date     : Thu, 30 Oct 2025 16:08:52 -0400    

Author: Nazir Bilal Yavuz <byavuz81@gmail.com>  
Discussion: https://postgr.es/m/CAN55FZ3kO4vLq56PWrfJ7Fw6Wz8DhEN9j9GX3aScx%2BWOirtK-g%40mail.gmail.com  
Backpatch: 15-, where CI support was added  

M .cirrus.tasks.yml

ci: Fix Windows and MinGW task names

commit   : 2533c0f26280803819a149baeb5c29994b4cb134    
  
author   : Andres Freund <andres@anarazel.de>    
date     : Thu, 30 Oct 2025 13:07:06 -0400    
  
committer: Andres Freund <andres@anarazel.de>    
date     : Thu, 30 Oct 2025 13:07:06 -0400    

They use Windows Server 2022, not 2019.  
  
Author: Nazir Bilal Yavuz <byavuz81@gmail.com>  
Discussion: https://postgr.es/m/flat/CAN55FZ1OsaM+852BMQDJ+Kgfg+07knJ6dM3PjbGbtYaK4qwfqA@mail.gmail.com  

M .cirrus.tasks.yml

Fix regression with slot invalidation checks

commit   : e3714dc059db06783a841c253ceeb16095ac943e    
  
author   : Michael Paquier <michael@paquier.xyz>    
date     : Thu, 30 Oct 2025 13:13:37 +0900    
  
committer: Michael Paquier <michael@paquier.xyz>    
date     : Thu, 30 Oct 2025 13:13:37 +0900    

This commit reverts 818fefd8fd4, which was introduced to address an  
instability in some of the TAP tests due to the presence of random  
standby snapshot WAL records, when slots are invalidated by  
InvalidatePossiblyObsoleteSlot().  
  
However, that commit also had the consequence of introducing a behavior  
regression.  After 818fefd8fd4, the code may determine that a slot needs  
to be invalidated while it may not require one: the slot may have moved  
from a conflicting state to a non-conflicting state between the moment  
when the mutex is released and the moment when we recheck the slot, in  
InvalidatePossiblyObsoleteSlot().  Hence, the invalidations may be more  
aggressive than they actually need to be.  
  
105b2cb3361 has tackled the test instability in a way that should be  
hopefully sufficient for the buildfarm, even for slow members:  
- In v18, the test relies on an injection point that bypasses the  
creation of the random records generated for standby snapshots,  
eliminating the random factor that impacted the test.  This option was  
not available when 818fefd8fd4 was discussed.  
- In v16 and v17, the problem was bypassed by disallowing a slot to  
become active in some of the scenarios tested.  
  
While at it, this commit adds a comment to document that it is fine for  
a recheck to use xmin and LSN values stored in the slot, without storing  
and reusing them across multiple checks.  
  
Reported-by: "suyu.cmj" <mengjuan.cmj@alibaba-inc.com>  
Author: Bertrand Drouvot <bertranddrouvot.pg@gmail.com>  
Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com>  
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>  
Discussion: https://postgr.es/m/f492465f-657e-49af-8317-987460cb68b0.mengjuan.cmj@alibaba-inc.com  
Backpatch-through: 16  

M src/backend/replication/slot.c

Fix bogus use of "long" in AllocSetCheck()

commit   : cdc04a6c339972a4c64b0978bf596f18ebdcff74    
  
author   : David Rowley <drowley@postgresql.org>    
date     : Thu, 30 Oct 2025 14:50:05 +1300    
  
committer: David Rowley <drowley@postgresql.org>    
date     : Thu, 30 Oct 2025 14:50:05 +1300    

Because long is 32-bit on 64-bit Windows, it isn't a good datatype to  
store the difference between 2 pointers.  The under-sized type could  
overflow and lead to scary warnings in MEMORY_CONTEXT_CHECKING builds,  
such as:  
  
WARNING:  problem in alloc set ExecutorState: bad single-chunk %p in block %p  
  
However, the problem lies only in the code running the check, not in  
an actual memory accounting bug.  
  
Fix by using "Size" instead of "long".  This means using an unsigned  
type rather than the previous signed type.  If the block's freeptr was  
corrupted, we'd still catch that if the unsigned type wrapped.  Unsigned  
allows us to avoid further needless complexities around comparing signed  
and unsigned types.  
  
Author: David Rowley <dgrowleyml@gmail.com>  
Reviewed-by: Michael Paquier <michael@paquier.xyz>  
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>  
Backpatch-through: 13  
Discussion: https://postgr.es/m/CAApHDvo-RmiT4s33J=aC9C_-wPZjOXQ232V-EZFgKftSsNRi4w@mail.gmail.com  

M src/backend/utils/mmgr/aset.c
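
The width mismatch behind the bogus warning is easy to show in isolation; a small illustration, assuming an LLP64 target such as 64-bit Windows where long is 32 bits while pointers (and Size/size_t) are 64 bits.

    #include <stdint.h>
    #include <stdio.h>

    int
    main(void)
    {
        /* A pointer difference of 3GB, as can occur inside a huge block. */
        uint64_t span = (uint64_t) 3 * 1024 * 1024 * 1024;

        int32_t  as_32bit_long = (int32_t) span;    /* truncates, goes negative */
        uint64_t as_size       = span;              /* pointer-sized, preserved */

        printf("32-bit long: %d, Size: %llu\n",
               (int) as_32bit_long, (unsigned long long) as_size);
        return 0;
    }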

Fix incorrect logic for caching ResultRelInfos for triggers

commit   : a546964db6fbb2e3afdb08940fdd322b957e9447    
  
author   : David Rowley <drowley@postgresql.org>    
date     : Sun, 26 Oct 2025 11:02:15 +1300    
  
committer: David Rowley <drowley@postgresql.org>    
date     : Sun, 26 Oct 2025 11:02:15 +1300    

When dealing with ResultRelInfos for partitions, there are cases where  
there are mixed requirements for the ri_RootResultRelInfo.  There are  
cases when the partition itself requires a NULL ri_RootResultRelInfo and  
in the same query, the same partition may require a ResultRelInfo with  
its parent set in ri_RootResultRelInfo.  This could cause the column  
mapping between the partitioned table and the partition not to be done  
which could result in crashes if the column attnums didn't match  
exactly.  
  
The fix is simple.  We now check that the ri_RootResultRelInfo matches  
what the caller passed to ExecGetTriggerResultRel() and only return a  
cached ResultRelInfo when the ri_RootResultRelInfo matches what the  
caller wants, otherwise we'll make a new one.  
  
Author: David Rowley <dgrowleyml@gmail.com>  
Author: Amit Langote <amitlangote09@gmail.com>  
Reported-by: Dmitry Fomin <fomin.list@gmail.com>  
Discussion: https://postgr.es/m/7DCE78D7-0520-4207-822B-92F60AEA14B4@gmail.com  
Backpatch-through: 15  

M src/backend/executor/execMain.c
M src/test/regress/expected/foreign_key.out
M src/test/regress/sql/foreign_key.sql

doc: Remove mention of Git protocol support

commit   : a978882dfd3209126eecd5c78928fb88cd7fa394    
  
author   : Daniel Gustafsson <dgustafsson@postgresql.org>    
date     : Thu, 23 Oct 2025 21:26:15 +0200    
  
committer: Daniel Gustafsson <dgustafsson@postgresql.org>    
date     : Thu, 23 Oct 2025 21:26:15 +0200    

The project Git server hasn't supported cloning with the Git protocol  
in a very long time, but the documentation never got the memo. Remove  
the mention of using the Git protocol, and while there wrap a mention  
of Git in <productname> tags.  
  
Backpatch down to all supported versions.  
  
Author: Daniel Gustafsson <daniel@yesql.se>  
Reported-by: Gurjeet Singh <gurjeet@singh.im>  
Reviewed-by: Nathan Bossart <nathandbossart@gmail.com>  
Reviewed-by: Jacob Champion <jacob.champion@enterprisedb.com>  
Reviewed-by: Gurjeet Singh <gurjeet@singh.im>  
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>  
Discussion: https://postgr.es/m/CABwTF4WMiMb-KT2NRcib5W0C8TQF6URMb+HK9a_=rnZnY8Q42w@mail.gmail.com  
Backpatch-through: 13  

M doc/src/sgml/sourcerepo.sgml

Fix off-by-one Asserts in FreePageBtreeInsertInternal/Leaf.

commit   : a8838689594d95a3c6342cb86b2be8a3add2d4d1    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 23 Oct 2025 12:32:06 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 23 Oct 2025 12:32:06 -0400    

These two functions expect there to be room to insert another item  
in the FreePageBtree's array, but their assertions were too weak  
to guarantee that.  This has little practical effect granting that  
the callers are not buggy, but it seems to be misleading late-model  
Coverity into complaining about possible array overrun.  
  
Author: Tom Lane <tgl@sss.pgh.pa.us>  
Discussion: https://postgr.es/m/799984.1761150474@sss.pgh.pa.us  
Backpatch-through: 13  

M src/backend/utils/mmgr/freepage.c
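
In general terms, the strengthened assertion follows the pattern below; the structure and names are illustrative, not the actual freepage.c code.

    #include <assert.h>

    #define DEMO_FANOUT 16

    typedef struct DemoNode
    {
        int nused;                  /* number of occupied slots */
        int items[DEMO_FANOUT];
    } DemoNode;

    /*
     * Assert strictly-less-than the capacity (not <=) before shifting and
     * inserting, so there is provably room for one more item and no write
     * can land past the end of the array.
     */
    static void
    demo_insert(DemoNode *node, int index, int value)
    {
        assert(node->nused < DEMO_FANOUT);
        assert(index >= 0 && index <= node->nused);

        for (int i = node->nused; i > index; i--)
            node->items[i] = node->items[i - 1];
        node->items[index] = value;
        node->nused++;
    }

    int
    main(void)
    {
        DemoNode node = {0};

        demo_insert(&node, 0, 42);
        return node.items[0] == 42 ? 0 : 1;
    }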

Fix resource leaks in PL/Python error reporting, redux.

commit   : cbfd4d0f883d5214d06a112912227fa41fc60fb6    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 23 Oct 2025 11:47:46 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 23 Oct 2025 11:47:46 -0400    

Commit c6f7f11d8 intended to prevent leaking any PyObject reference  
counts in edge cases (such as out-of-memory during string  
construction), but actually it introduced a leak in the normal case.  
Repeating an error-trapping operation often enough would lead to  
session-lifespan memory bloat.  The problem is that I failed to  
think about the fact that PyObject_GetAttrString() increments the  
refcount of the returned PyObject, so that simply walking down the  
list of error frame objects causes all but the first one to have  
their refcount incremented.  
  
I experimented with several more-or-less-complex ways around that,  
and eventually concluded that the right fix is simply to drop the  
newly-obtained refcount as soon as we walk to the next frame  
object in PLy_traceback.  This sounds unsafe, but it's perfectly  
okay because the caller holds a refcount on the first frame object  
and each frame object holds a refcount on the next one; so the  
current frame object can't disappear underneath us.  
  
By the same token, we can simplify the caller's cleanup back to  
simply dropping its refcount on the first object.  Cleanup of  
each frame object will lead in turn to the refcount of the next  
one going to zero.  
  
I also added a couple of comments explaining why PLy_elog_impl()  
doesn't try to free the strings acquired from PLy_get_spi_error_data()  
or PLy_get_error_data().  That's because I got here by looking at a  
Coverity complaint about how those strings might get leaked.  They  
are not leaked, but in testing that I discovered this other leak.  
  
Back-patch, as c6f7f11d8 was.  It's a bit nervous-making to be  
putting such a fix into v13, which is only a couple weeks from its  
final release; but I can't see that leaving a recently-introduced  
leak in place is a better idea.  
  
Author: Tom Lane <tgl@sss.pgh.pa.us>  
Discussion: https://postgr.es/m/1203918.1761184159@sss.pgh.pa.us  
Backpatch-through: 13  

M src/pl/plpython/plpy_elog.c
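
The traversal pattern described above amounts to releasing each newly obtained reference as soon as the walk moves on; a rough fragment using the Python C API, with illustrative names rather than the actual plpy_elog.c code.

    #include <Python.h>

    /*
     * PyObject_GetAttrString() returns a new reference.  Dropping it as soon
     * as we have stepped to the next frame is safe because the caller holds
     * a reference on the first frame and each frame holds a reference on its
     * successor, so the object currently being inspected cannot go away.
     */
    static void
    walk_frames(PyObject *first)        /* reference owned by the caller */
    {
        PyObject *cur = first;

        while (cur != NULL && cur != Py_None)
        {
            PyObject *next = PyObject_GetAttrString(cur, "tb_next");

            /* ... format and report "cur" here ... */

            if (cur != first)
                Py_DECREF(cur);         /* release the ref acquired above */
            cur = next;
        }

        if (cur != NULL && cur != first)
            Py_DECREF(cur);             /* trailing None obtained above */
    }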

Add comments explaining overflow entries in the replication lag tracker.

commit   : 4d707f2fd72857016400ad0a7b3e3ffa4bb2d730    
  
author   : Fujii Masao <fujii@postgresql.org>    
date     : Thu, 23 Oct 2025 13:24:56 +0900    
  
committer: Fujii Masao <fujii@postgresql.org>    
date     : Thu, 23 Oct 2025 13:24:56 +0900    

Commit 883a95646a8 introduced overflow entries in the replication lag tracker  
to fix an issue where lag columns in pg_stat_replication could stall when  
the replay LSN stopped advancing.  
  
This commit adds comments clarifying the purpose and behavior of overflow  
entries to improve code readability and understanding.  
  
Since commit 883a95646a8 was recently applied and backpatched to all  
supported branches, this follow-up commit is also backpatched accordingly.  
  
Author: Xuneng Zhou <xunengzhou@gmail.com>  
Reviewed-by: Fujii Masao <masao.fujii@gmail.com>  
Discussion: https://postgr.es/m/CABPTF7VxqQA_DePxyZ7Y8V+ErYyXkmwJ1P6NC+YC+cvxMipWKw@mail.gmail.com  
Backpatch-through: 13  

M src/backend/replication/walsender.c

commit   : 0b36c326aa70f607cf97b99aa08c2f79f104e495    
  
author   : Masahiko Sawada <msawada@postgresql.org>    
date     : Wed, 22 Oct 2025 17:17:41 -0700    
  
committer: Masahiko Sawada <msawada@postgresql.org>    
date     : Wed, 22 Oct 2025 17:17:41 -0700    

Fix oversight in commit 303ba0573, which was backpatched through 14.  
  
Reviewed-by: Michael Paquier <michael@paquier.xyz>  
Discussion: https://postgr.es/m/CAD21AoBeFdTJcwUfUYPcEgONab3TS6i1PB9S5cSXcBAmdAdQKw%40mail.gmail.com  
Backpatch-through: 14  

M src/test/recovery/t/043_vacuum_horizon_floor.pl

Fix incorrect zero extension of Datum in JIT tuple deform code

commit   : 3398b0d027852ff1ef4a3be28612a5731f6e52f3    
  
author   : David Rowley <drowley@postgresql.org>    
date     : Thu, 23 Oct 2025 13:13:19 +1300    
  
committer: David Rowley <drowley@postgresql.org>    
date     : Thu, 23 Oct 2025 13:13:19 +1300    

When JIT deformed tuples (controlled via the jit_tuple_deforming GUC),  
types narrower than sizeof(Datum) would be zero-extended up to Datum  
width.  This wasn't the same as what fetch_att() does in the standard  
tuple deforming code.  Logically the values are the same when fetching  
via the DatumGet*() macros, but negative numbers are not the same in  
binary form.  
  
In the report, the problem was manifesting itself with:  
  
ERROR: could not find memoization table entry  
  
in a query which had a "Cache Mode: binary" Memoize node. However, it's  
currently unclear what else is affected.  Anything that uses  
datum_image_eq() or datum_image_hash() on a Datum from a tuple deformed by  
JIT could be affected, but it may not be limited to that.  
  
The fix for this is simple: use sign extension instead of zero  
extension.  
  
Many thanks to Emmanuel Touzery for reporting this issue and providing  
steps and backup which allowed the problem to easily be recreated.  
  
Reported-by: Emmanuel Touzery <emmanuel.touzery@plandela.si>  
Author: David Rowley <dgrowleyml@gmail.com>  
Discussion: https://postgr.es/m/DB8P194MB08532256D5BAF894F241C06393F3A@DB8P194MB0853.EURP194.PROD.OUTLOOK.COM  
Backpatch-through: 13  

M src/backend/jit/llvm/llvmjit_deform.c
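
The difference between the two widening rules is easy to see in plain C; a small illustration, assuming a 64-bit Datum.

    #include <stdint.h>
    #include <stdio.h>

    int
    main(void)
    {
        int16_t  attvalue = -1;

        /* What the JIT-deformed tuple effectively stored: zero extension. */
        uint64_t zero_extended = (uint64_t) (uint16_t) attvalue;

        /* What fetch_att()-style deforming produces: sign extension. */
        uint64_t sign_extended = (uint64_t) (int64_t) attvalue;

        /*
         * Both round-trip to -1 when narrowed back, but the raw 64-bit
         * images differ, which is exactly what binary-level comparisons
         * such as datum_image_eq() and datum_image_hash() look at.
         */
        printf("zero-extended: %016llx\nsign-extended: %016llx\n",
               (unsigned long long) zero_extended,
               (unsigned long long) sign_extended);
        return 0;
    }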

Make invalid primary_slot_name follow standard GUC error reporting.

commit   : 4fd916eab99c63410a084ab41dd5eb53328e45dc    
  
author   : Fujii Masao <fujii@postgresql.org>    
date     : Wed, 22 Oct 2025 20:11:47 +0900    
  
committer: Fujii Masao <fujii@postgresql.org>    
date     : Wed, 22 Oct 2025 20:11:47 +0900    

Previously, if primary_slot_name was set to an invalid slot name and  
the configuration file was reloaded, both the postmaster and all other  
backend processes reported a WARNING. With many processes running,  
this could produce a flood of duplicate messages. The problem was that  
the GUC check hook for primary_slot_name reported errors at WARNING  
level via ereport().  
  
This commit changes the check hook to use GUC_check_errdetail() and  
GUC_check_errhint() for error reporting. As with other GUC parameters,  
this causes non-postmaster processes to log the message at DEBUG3,  
so by default, only the postmaster's message appears in the log file.  
  
Backpatch to all supported versions.  
  
Author: Fujii Masao <masao.fujii@gmail.com>  
Reviewed-by: Chao Li <lic@highgo.com>  
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>  
Reviewed-by: Álvaro Herrera <alvherre@kurilemu.de>  
Reviewed-by: Hayato Kuroda <kuroda.hayato@fujitsu.com>  
Discussion: https://postgr.es/m/CAHGQGwFud-cvthCTfusBfKHBS6Jj6kdAPTdLWKvP2qjUX6L_wA@mail.gmail.com  
Backpatch-through: 13  

M src/backend/access/transam/xlogrecovery.c
M src/backend/replication/slot.c
M src/include/replication/slot.h

Fix stalled lag columns in pg_stat_replication when replay LSN stops advancing.

commit   : 2e55cf4efc8d696c967222a6727698df072b3b67    
  
author   : Fujii Masao <fujii@postgresql.org>    
date     : Wed, 22 Oct 2025 11:27:15 +0900    
  
committer: Fujii Masao <fujii@postgresql.org>    
date     : Wed, 22 Oct 2025 11:27:15 +0900    

Previously, when the replay LSN reported in feedback messages from a standby  
stopped advancing, for example, due to a recovery conflict, the write_lag and  
flush_lag columns in pg_stat_replication would initially update but then stop  
progressing. This prevented users from correctly monitoring replication lag.  
  
The problem occurred because when any LSN stopped updating, the lag tracker's  
cyclic buffer became full (the write head reached the slowest read head).  
In that state, the lag tracker could no longer compute round-trip lag values  
correctly.  
  
This commit fixes the issue by handling the slowest read entry (the one  
causing the buffer to fill) as a separate overflow entry and freeing space  
so the write and other read heads can continue advancing in the buffer.  
As a result, write_lag and flush_lag now continue updating even if the reported  
replay LSN remains stalled.  
  
Backpatch to all supported versions.  
  
Author: Fujii Masao <masao.fujii@gmail.com>  
Reviewed-by: Chao Li <lic@highgo.com>  
Reviewed-by: Shinya Kato <shinya11.kato@gmail.com>  
Reviewed-by: Xuneng Zhou <xunengzhou@gmail.com>  
Discussion: https://postgr.es/m/CAHGQGwGdGQ=1-X-71Caee-LREBUXSzyohkoQJd4yZZCMt24C0g@mail.gmail.com  
Backpatch-through: 13  

M src/backend/replication/walsender.c

Add .abi-compliance-history to back-branches.

commit   : eaf73340cd468800c19024748b885a01e6e53e75    
  
author   : Nathan Bossart <nathan@postgresql.org>    
date     : Tue, 21 Oct 2025 16:37:29 -0500    
  
committer: Nathan Bossart <nathan@postgresql.org>    
date     : Tue, 21 Oct 2025 16:37:29 -0500    

This file was previously added to v18 by commits a72f7d97be and  
93fb76ca4e.  Unlike the v18 version of the file, the back-branch  
versions set the original baseline point to the most recent ABI  
break documented in the git commit history.  While we'd ordinarily  
set it to something just before the .0 release, we're unlikely to  
act upon ABI breaks in released minor versions, so it doesn't seem  
worth the trouble to construct a comprehensive history.  
  
Discussion: https://postgr.es/m/aPfDOD6F4FaJJd7M%40nathan  
Backpatch-through: 13-17  

A .abi-compliance-history

Add previous commit to .git-blame-ignore-revs.

commit   : c70341ed7a54899667eee9a4a83d8eaaa763e7d9    
  
author   : Nathan Bossart <nathan@postgresql.org>    
date     : Tue, 21 Oct 2025 10:02:19 -0500    
  
committer: Nathan Bossart <nathan@postgresql.org>    
date     : Tue, 21 Oct 2025 10:02:19 -0500    

Backpatch-through: 13  

M .git-blame-ignore-revs

Re-pgindent brin.c.

commit   : 63a5b1f89423f003ec850eee270c40b30b87cd52    
  
author   : Nathan Bossart <nathan@postgresql.org>    
date     : Tue, 21 Oct 2025 09:56:26 -0500    
  
committer: Nathan Bossart <nathan@postgresql.org>    
date     : Tue, 21 Oct 2025 09:56:26 -0500    

Backpatch-through: 13  

M src/backend/access/brin/brin.c

Fix BRIN 32-bit counter wrap issue with huge tables

commit   : ef915bf9367c65a405e8b254a145351bf3add4aa    
  
author   : David Rowley <drowley@postgresql.org>    
date     : Tue, 21 Oct 2025 20:47:35 +1300    
  
committer: David Rowley <drowley@postgresql.org>    
date     : Tue, 21 Oct 2025 20:47:35 +1300    

A BlockNumber (32-bit) might not be large enough to add bo_pagesPerRange  
to when the table contains close to 2^32 pages.  At worst, this could  
result in a cancellable infinite loop during the BRIN index scan with  
power-of-2 pagesPerRange, and slow (inefficient) BRIN index scans and  
scanning of unneeded heap blocks for non power-of-2 pagesPerRange.  
  
Backpatch to all supported versions.  
  
Author: sunil s <sunilfeb26@gmail.com>  
Reviewed-by: David Rowley <dgrowleyml@gmail.com>  
Reviewed-by: Michael Paquier <michael@paquier.xyz>  
Discussion: https://postgr.es/m/CAOG6S4-tGksTQhVzJM19NzLYAHusXsK2HmADPZzGQcfZABsvpA@mail.gmail.com  
Backpatch-through: 13  

M src/backend/access/brin/brin.c
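
The wraparound is straightforward to reproduce with 32-bit block-number arithmetic; a standalone illustration (the variable names are just for the example).

    #include <stdint.h>
    #include <stdio.h>

    typedef uint32_t BlockNumber;

    int
    main(void)
    {
        BlockNumber heapBlk = 4294967200u;  /* close to 2^32 pages */
        BlockNumber pagesPerRange = 128;

        /* 32-bit addition silently wraps back to a low block number ... */
        BlockNumber wrapped = heapBlk + pagesPerRange;

        /* ... while widening before the addition keeps the real value. */
        uint64_t widened = (uint64_t) heapBlk + pagesPerRange;

        printf("wrapped: %u, widened: %llu\n",
               (unsigned) wrapped, (unsigned long long) widened);
        return 0;
    }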

Fix POSIX compliance in pgwin32_unsetenv() for "name" argument

commit   : 9666ce889e9aefebba50689c44121659092f2afd    
  
author   : Michael Paquier <michael@paquier.xyz>    
date     : Tue, 21 Oct 2025 08:08:38 +0900    
  
committer: Michael Paquier <michael@paquier.xyz>    
date     : Tue, 21 Oct 2025 08:08:38 +0900    

pgwin32_unsetenv() (compatibility routine of unsetenv() on Windows)  
lacks the input validation that its sibling pgwin32_setenv() has.  
Without these checks, calling unsetenv() with incorrect names crashes on  
WIN32.  However, invalid names should be handled, failing on EINVAL.  
  
This commit adds the same checks as setenv() to fail with EINVAL for a  
"name" set to NULL, an empty string, or containing '=',  
per POSIX requirements.  
  
Like 7ca37fb0406b, backpatch down to v14.  pgwin32_unsetenv() is defined  
on REL_13_STABLE, but with the branch going EOL soon and the lack of  
setenv() there for WIN32, nothing is done for v13.  
  
Author: Bryan Green <dbryan.green@gmail.com>  
Discussion: https://postgr.es/m/b6a1e52b-d808-4df7-87f7-2ff48d15003e@gmail.com  
Backpatch-through: 14  

M src/port/win32env.c
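
A minimal sketch of the POSIX-mandated checks described above; simplified, since the real pgwin32_unsetenv() must also manipulate both the CRT and Win32 environments.

    #include <errno.h>
    #include <stddef.h>
    #include <string.h>

    /*
     * Per POSIX, unsetenv() must fail with EINVAL when "name" is NULL, an
     * empty string, or contains '='.
     */
    static int
    demo_unsetenv_validate(const char *name)
    {
        if (name == NULL || name[0] == '\0' || strchr(name, '=') != NULL)
        {
            errno = EINVAL;
            return -1;
        }
        return 0;
    }

    int
    main(void)
    {
        return demo_unsetenv_validate("PGHOST=") == -1 ? 0 : 1;
    }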

Fix thinko in commit 7d129ba54.

commit   : f1e1db780e50f9723edf67c52a7920e09c4dab40    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 20 Oct 2025 08:45:57 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 20 Oct 2025 08:45:57 -0400    

The revised logic in 001_ssltests.pl would fail if openssl  
doesn't work or if Perl is a 32-bit build, because it had  
already overwritten $serialno with something inappropriate  
to use in the eventual match.  We could go back to the  
previous code layout, but it seems best to introduce a  
separate variable for the output of openssl.  
  
Per failure on buildfarm member mamba, which has a 32-bit Perl.  

M src/test/ssl/t/001_ssltests.pl

Don't rely on zlib's gzgetc() macro.

commit   : c865f5b9f090bc1325f5667b11f883a46f5d96dd    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sun, 19 Oct 2025 14:36:58 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sun, 19 Oct 2025 14:36:58 -0400    

It emerges that zlib's configuration logic is not robust enough  
to guarantee that the macro will have the same ideas about struct  
field layout as the library itself does, leading to corruption of  
zlib's state struct followed by unintelligible failure messages.  
This hazard has existed for a long time, but we'd not noticed  
for several reasons:  
  
(1) We only use gzgetc() when trying to read a manually-compressed  
TOC file within a directory-format dump, which is a rarely-used  
scenario that we weren't even testing before 20ec99589.  
  
(2) No corruption actually occurs unless sizeof(long) is different  
from sizeof(off_t) and the platform is big-endian.  
  
(3) Some platforms have already fixed the configuration instability,  
at least sufficiently for their environments.  
  
Despite (3), it seems foolish to assume that the problem isn't  
going to be present in some environments for a long time to come.  
Hence, avoid relying on this macro.  We can just #undef it and  
fall back on the underlying function of the same name.  
  
Author: Tom Lane <tgl@sss.pgh.pa.us>  
Discussion: https://postgr.es/m/2122679.1760846783@sss.pgh.pa.us  
Backpatch-through: 13  

M src/bin/pg_dump/compress_gzip.c
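
The workaround described above boils down to discarding the macro so the call resolves to zlib's own function; a small sketch (needs zlib to build; the file name and helper are illustrative).

    #include <zlib.h>

    /*
     * zlib's gzgetc() macro peeks directly into the gzFile state and can
     * disagree with the library's own idea of the struct layout.  After the
     * #undef, the call below uses the real gzgetc() function instead.
     */
    #ifdef gzgetc
    #undef gzgetc
    #endif

    static int
    demo_read_one_byte(gzFile fp)
    {
        return gzgetc(fp);
    }

    int
    main(void)
    {
        gzFile fp = gzopen("toc.dat.gz", "rb");     /* hypothetical input */

        if (fp != NULL)
        {
            (void) demo_read_one_byte(fp);
            gzclose(fp);
        }
        return 0;
    }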

Allow role created by new test to log in on Windows.

commit   : c26a8eaf650281ad08e9c5bdf39ec9387335a86c    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sat, 18 Oct 2025 18:36:21 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sat, 18 Oct 2025 18:36:21 -0400    

We must tell init about each role name we plan to connect as,  
else SSPI auth fails.  Similar to previous patches such as  
14793f471, 973542866.  
  
Oversight in 208927e65, per buildfarm member drongo.  
(Although that was back-patched to v13, the test script  
only exists in v16 and up.)  

M contrib/pg_prewarm/t/001_basic.pl

Fix pg_dump sorting of foreign key constraints

commit   : 06c1ee6b75dc040b2444535240d6f0bbee25c977    
  
author   : Álvaro Herrera <alvherre@kurilemu.de>    
date     : Sat, 18 Oct 2025 17:50:10 +0200    
  
committer: Álvaro Herrera <alvherre@kurilemu.de>    
date     : Sat, 18 Oct 2025 17:50:10 +0200    

Apparently, commit 04bc2c42f765 failed to notice that DO_FK_CONSTRAINT  
objects require identical handling as DO_CONSTRAINT ones, which causes  
some pg_upgrade tests in debug builds to fail spuriously.  Add that.  
  
Author: Álvaro Herrera <alvherre@kurilemu.de>  
Backpatch-through: 13  
Discussion: https://postgr.es/m/202510181201.k6y75v2tpf5r@alvherre.pgsql  

M src/bin/pg_dump/pg_dump_sort.c

Fix privilege checks for pg_prewarm() on indexes.

commit   : fae0ce5e318eea8cd8f7bac936a58ee7cbd10bf8    
  
author   : Nathan Bossart <nathan@postgresql.org>    
date     : Fri, 17 Oct 2025 11:36:50 -0500    
  
committer: Nathan Bossart <nathan@postgresql.org>    
date     : Fri, 17 Oct 2025 11:36:50 -0500    

pg_prewarm() currently checks for SELECT privileges on the target  
relation.  However, indexes do not have access rights of their own,  
so a role may be denied permission to prewarm an index despite  
having the SELECT privilege on its parent table.  This commit fixes  
this by locking the parent table before the index (to avoid  
deadlocks) and checking for SELECT on the parent table.  Note that  
the code is largely borrowed from  
amcheck_lock_relation_and_check().  
  
An obvious downside of this change is the extra AccessShareLock on  
the parent table during prewarming, but that isn't expected to  
cause too much trouble in practice.  
  
Author: Ayush Vatsa <ayushvatsa1810@gmail.com>  
Co-authored-by: Nathan Bossart <nathandbossart@gmail.com>  
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>  
Reviewed-by: Jeff Davis <pgsql@j-davis.com>  
Discussion: https://postgr.es/m/CACX%2BKaMz2ZoOojh0nQ6QNBYx8Ak1Dkoko%3DD4FSb80BYW%2Bo8CHQ%40mail.gmail.com  
Backpatch-through: 13  

M contrib/pg_prewarm/pg_prewarm.c
M contrib/pg_prewarm/t/001_basic.pl

Avoid warnings in tests when openssl binary isn't available

commit   : bf5b26525b28a2abf8eef368921c12fdeaddc0d8    
  
author   : Daniel Gustafsson <dgustafsson@postgresql.org>    
date     : Fri, 17 Oct 2025 14:21:26 +0200    
  
committer: Daniel Gustafsson <dgustafsson@postgresql.org>    
date     : Fri, 17 Oct 2025 14:21:26 +0200    

The SSL tests for pg_stat_ssl try to exactly match the serial  
from the certificate by extracting it with the openssl binary.  
If that fails due to the binary not being available, a fallback  
match is used, but the attempt to execute a missing binary adds  
a warning to the output which can confuse readers for a failure  
in the test.  Fix by only attempting if the openssl binary was  
found by autoconf/meson.  
  
Backpatch down to v16 where commit c8e4030d1bdd made the test  
use the OPENSSL variable from autoconf/meson instead of a hard-  
coded value.  
  
Author: Daniel Gustafsson <daniel@yesql.se>  
Reported-by: Christoph Berg <myon@debian.org>  
Discussion: https://postgr.es/m/aNPSp1-RIAs3skZm@msg.df7cb.de  
Backpatch-through: 16  

M src/test/ssl/t/001_ssltests.pl

Fix update-po for the PGXS case

commit   : a506b0c0ac029fb07a42a4cbf69d37f54845215b    
  
author   : Álvaro Herrera <alvherre@kurilemu.de>    
date     : Thu, 16 Oct 2025 20:21:05 +0200    
  
committer: Álvaro Herrera <alvherre@kurilemu.de>    
date     : Thu, 16 Oct 2025 20:21:05 +0200    

The original formulation failed to take into account the fact that for  
the PGXS case, the source dir is not $(top_srcdir), so it ended up not  
doing anything.  Handle it explicitly.  
  
Author: Ryo Matsumura <matsumura.ryo@fujitsu.com>  
Reviewed-by: Bryan Green <dbryan.green@gmail.com>  
Backpatch-through: 13  
Discussion: https://postgr.es/m/TYCPR01MB113164770FB0B0BE6ED21E68EE8DCA@TYCPR01MB11316.jpnprd01.prod.outlook.com  

M src/nls-global.mk

Fix EvalPlanQual handling of foreign/custom joins in ExecScanFetch.

commit   : 5a9af48689dc4e67e1db305be9435866f158e9e0    
  
author   : Etsuro Fujita <efujita@postgresql.org>    
date     : Wed, 15 Oct 2025 17:15:03 +0900    
  
committer: Etsuro Fujita <efujita@postgresql.org>    
date     : Wed, 15 Oct 2025 17:15:03 +0900    

When inside an EPQ recheck, ExecScanFetch would run the recheck method
function for foreign/custom joins even if they aren't descendant nodes
of the EPQ recheck plan tree.  That is problematic at least in the
foreign-join case, because such a foreign join isn't guaranteed to have
the alternative local-join plan required for running the recheck method
function; in the postgres_fdw case this could lead to a segmentation
fault, or to an assertion failure in an assert-enabled build, when
running the recheck method function.

Even inside an EPQ recheck, scan nodes that aren't descendants of the
EPQ recheck plan tree should be processed normally using the access
method function.  Fix by modifying ExecScanFetch so that, inside an EPQ
recheck, it runs the recheck method function for foreign/custom joins
that are descendant nodes of the EPQ recheck plan tree, as before, and
runs the access method function for foreign/custom joins that aren't.

This fix also adds to postgres_fdw an isolation test for an EPQ recheck
that triggered the issues described above.
  
Oversight in commit 385f337c9.  
  
Reported-by: Kristian Lejao <kristianlejao@gmail.com>  
Author: Masahiko Sawada <sawada.mshk@gmail.com>  
Co-authored-by: Etsuro Fujita <etsuro.fujita@gmail.com>  
Reviewed-by: Michael Paquier <michael@paquier.xyz>  
Reviewed-by: Etsuro Fujita <etsuro.fujita@gmail.com>  
Discussion: https://postgr.es/m/CAD21AoBpo6Gx55FBOW+9s5X=nUw3Xpq64v35fpDEKsTERnc4TQ@mail.gmail.com  
Backpatch-through: 13  

M contrib/postgres_fdw/.gitignore
M contrib/postgres_fdw/Makefile
A contrib/postgres_fdw/expected/eval_plan_qual.out
M contrib/postgres_fdw/meson.build
A contrib/postgres_fdw/specs/eval_plan_qual.spec
M src/backend/executor/execScan.c

Fix incorrect message-printing in win32security.c.

commit   : 9883e3cd1c61b1bf23d655e62789ff56c3da3bb5    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 13 Oct 2025 17:56:45 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 13 Oct 2025 17:56:45 -0400    

Click here for diff

log_error() would probably fail completely if used, and would  
certainly print garbage for anything that needed to be interpolated  
into the message, because it was failing to use the correct printing  
subroutine for a va_list argument.  
  
This bug likely went undetected because the error cases this code  
is used for are rarely exercised - they only occur when Windows  
security API calls fail catastrophically (out of memory, security  
subsystem corruption, etc).  
  
The FRONTEND variant can be fixed just by calling vfprintf()  
instead of fprintf().  However, there was no va_list variant  
of write_stderr(), so create one by refactoring that function.  
Following the usual naming convention for such things, call  
it vwrite_stderr().  
  
Author: Bryan Green <dbryan.green@gmail.com>  
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>  
Discussion: https://postgr.es/m/CAF+pBj8goe4fRmZ0V3Cs6eyWzYLvK+HvFLYEYWG=TzaM+tWPnw@mail.gmail.com  
Backpatch-through: 13  

M src/backend/utils/error/elog.c
M src/include/utils/elog.h
M src/port/win32security.c
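
As an illustration of the refactoring described above, here is a minimal
standalone C sketch of the va_list pattern: the real work moves into a
function that accepts a va_list, and the variadic function becomes a thin
wrapper, so a caller that already holds a va_list (as log_error() does)
forwards it via vfprintf() instead of handing it to fprintf().  Names carry
a _demo suffix and are hypothetical, not the actual PostgreSQL functions.

    #include <stdarg.h>
    #include <stdio.h>

    /* va_list variant: does the real printing work */
    static void
    vwrite_stderr_demo(const char *fmt, va_list args)
    {
        vfprintf(stderr, fmt, args);
    }

    /* variadic wrapper: packs its arguments and forwards them */
    static void
    write_stderr_demo(const char *fmt, ...)
    {
        va_list     args;

        va_start(args, fmt);
        vwrite_stderr_demo(fmt, args);
        va_end(args);
    }

    /* a caller that, like log_error(), already holds a va_list */
    static void
    log_error_demo(const char *fmt, ...)
    {
        va_list     args;

        va_start(args, fmt);
        /* wrong: fprintf(stderr, fmt, args); would print garbage */
        vwrite_stderr_demo(fmt, args);
        va_end(args);
    }

    int
    main(void)
    {
        write_stderr_demo("error %d: %s\n", 5, "access denied");
        log_error_demo("error %d: %s\n", 5, "access denied");
        return 0;
    }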

Doc: clarify n_distinct_inherited setting

commit   : 0a47f054ef8d5b9f2d9702ba1a5587777953fc7c    
  
author   : David Rowley <drowley@postgresql.org>    
date     : Tue, 14 Oct 2025 09:26:24 +1300    
  
committer: David Rowley <drowley@postgresql.org>    
date     : Tue, 14 Oct 2025 09:26:24 +1300    

Click here for diff

There was some confusion around how to adjust the n_distinct estimates
for partitioned tables.  Here we try to clarify that
n_distinct_inherited needs to be adjusted rather than n_distinct.

Also fix some slightly misleading text that talked about table size
rather than table rows, fix a grammatical error, and adjust some text
which indicated that ANALYZE was performing calculations based on the
n_distinct settings.  Really it's the query planner that does this;
ANALYZE only stores the overridden n_distinct estimate value in
pg_statistic.
  
Author: David Rowley <dgrowleyml@gmail.com>  
Reviewed-by: David G. Johnston <david.g.johnston@gmail.com>  
Reviewed-by: Chao Li <li.evan.chao@gmail.com>  
Backpatch-through: 13  
Discussion: https://postgr.es/m/CAApHDvrL7a-ZytM1SP8Uk9nEw9bR2CPzVb+uP+bcNj=_q-ZmVw@mail.gmail.com  

M doc/src/sgml/ref/alter_table.sgml

Fix issue with reading zero bytes in Gzip_read.

commit   : 1518b7d76aadcbdffa6214555b82b995e7404b38    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 13 Oct 2025 12:44:20 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 13 Oct 2025 12:44:20 -0400    

Click here for diff

pg_dump expects a read request of zero bytes to be a no-op; see for  
example ReadStr().  Gzip_read got this wrong and falsely supposed  
that the resulting gzret == 0 indicated an error.  We could complicate  
that error-checking logic some more, but it seems best to just fall  
out immediately when passed size == 0.  
  
This bug breaks the nominally-supported case of manually gzip'ing  
the toc.dat file within a directory-style dump, so back-patch to v16  
where this code came in.  (Prior branches already have a short-circuit  
for size == 0 before their only gzread call.)  
  
Author: Tom Lane <tgl@sss.pgh.pa.us>  
Reviewed-by: Chao Li <li.evan.chao@gmail.com>  
Discussion: https://postgr.es/m/3515357.1760128017@sss.pgh.pa.us  
Backpatch-through: 16  

M src/bin/pg_dump/compress_gzip.c
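
A hedged sketch of the short-circuit described above, written against zlib's
gzread() directly rather than pg_dump's wrapper: a zero-byte request must be
treated as a no-op, because gzread() returning 0 in that case does not
indicate an error.  The function name is hypothetical; compile with -lz.

    #include <stdbool.h>
    #include <stddef.h>
    #include <zlib.h>

    /*
     * Read exactly "size" bytes from "file" into "buf".
     * Returns true on success, false on error or short read.
     */
    static bool
    gzip_read_exact_demo(gzFile file, void *buf, size_t size)
    {
        int         ret;

        /* A zero-byte request is a no-op, not an error. */
        if (size == 0)
            return true;

        /* gzread returns bytes read, 0 at end of file, -1 on error */
        ret = gzread(file, buf, (unsigned int) size);
        return ret == (int) size;
    }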

Stop creating constraints during DETACH CONCURRENTLY

commit   : b835759ec7e8357c341f051df91168c04f032025    
  
author   : Álvaro Herrera <alvherre@kurilemu.de>    
date     : Sat, 11 Oct 2025 20:30:12 +0200    
  
committer: Álvaro Herrera <alvherre@kurilemu.de>    
date     : Sat, 11 Oct 2025 20:30:12 +0200    

Click here for diff

Commit 71f4c8c6f74b (which implemented DETACH CONCURRENTLY) added code  
to create a separate table constraint when a table is detached  
concurrently, identical to the partition constraint, on the theory that  
such a constraint was needed in case the optimizer had constructed any  
query plans that depended on the constraint being there.  However, that  
theory was apparently bogus because any such plans would be invalidated.  
  
For hash partitioning, those constraints are problematic, because their
expressions reference the OID of the parent partitioned table, to which
the detached table is no longer related; this causes all sorts of
problems (such as the inability to restore a pg_dump of that table, and
the table no longer working properly if the partitioned table is later
dropped).
  
We'd like to get rid of all those constraints.  In fact, for branch  
master, do that -- no longer create any substitute constraints.  
However, out of fear that some users might somehow depend on these  
constraints for other partitioning strategies, for stable branches  
(back to 14, which added DETACH CONCURRENTLY), only do it for hash  
partitioning.  
  
(If you repeatedly DETACH CONCURRENTLY and then ATTACH a partition, then  
with this constraint addition you don't need to scan the table in the  
ATTACH step, which presumably is good.  But if users really valued this  
feature, they would have requested that it work for non-concurrent
DETACH also.)  
  
Author: Haiyang Li <mohen.lhy@alibaba-inc.com>  
Reported-by: Fei Changhong <feichanghong@qq.com>  
Reported-by: Haiyang Li <mohen.lhy@alibaba-inc.com>  
Backpatch-through: 14  
Discussion: https://postgr.es/m/18371-7fef49f63de13f02@postgresql.org  
Discussion: https://postgr.es/m/19070-781326347ade7c57@postgresql.org  

M src/backend/commands/tablecmds.c
M src/test/regress/expected/alter_table.out
M src/test/regress/sql/alter_table.sql

dbase_redo: Fix Valgrind-reported memory leak

commit   : c72b5c536032d636cf35d964c216c3e44e2b1efe    
  
author   : Álvaro Herrera <alvherre@kurilemu.de>    
date     : Sat, 11 Oct 2025 16:39:22 +0200    
  
committer: Álvaro Herrera <alvherre@kurilemu.de>    
date     : Sat, 11 Oct 2025 16:39:22 +0200    

Click here for diff

Introduced by my (Álvaro's) commit 9e4f914b5eba, which was itself  
backpatched to pg10, though only pg15 and up contain the problem  
because of commit 9c08aea6a309.  
  
This isn't a particularly significant leak, but given the fix is  
trivial, we might as well backpatch to all branches where it applies, so  
do that.  
  
Author: Nathan Bossart <nathandbossart@gmail.com>  
Reported-by: Andres Freund <andres@anarazel.de>  
Discussion: https://postgr.es/m/x4odfdlrwvsjawscnqsqjpofvauxslw7b4oyvxgt5owoyf4ysn@heafjusodrz7  

M src/backend/commands/dbcommands.c

Remove overzealous _bt_killitems assertion.

commit   : c160fd46910e7ad9490578b453847f06ffb3f5ee    
  
author   : Peter Geoghegan <pg@bowt.ie>    
date     : Fri, 10 Oct 2025 14:52:19 -0400    
  
committer: Peter Geoghegan <pg@bowt.ie>    
date     : Fri, 10 Oct 2025 14:52:19 -0400    

Click here for diff

An assertion in _bt_killitems expected the scan's currPos state to  
contain a valid LSN, saved from when currPos's page was initially read.  
The assertion failed to account for the fact that even logged relations  
can have leaf pages with an invalid LSN when built with wal_level set to  
"minimal".  Remove the faulty assertion.  
  
Oversight in commit e6eed40e (though note that the assertion was  
backpatched to stable branches before 18 by commit 7c319f54).  
  
Author: Peter Geoghegan <pg@bowt.ie>  
Reported-By: Matthijs van der Vleuten <postgresql@zr40.nl>  
Bug: #19082  
Discussion: https://postgr.es/m/19082-628e62160dbbc1c1@postgresql.org  
Backpatch-through: 13  

M src/backend/access/nbtree/nbtutils.c

Fix two typos in xlogstats.h and xlogstats.c

commit   : fa30f0cb06a184d13078832efc0052b9792de6f8    
  
author   : Michael Paquier <michael@paquier.xyz>    
date     : Fri, 10 Oct 2025 11:51:53 +0900    
  
committer: Michael Paquier <michael@paquier.xyz>    
date     : Fri, 10 Oct 2025 11:51:53 +0900    

Click here for diff

Issue found while browsing this area of the code, introduced and  
copy-pasted around by 2258e76f90bf.  
  
Backpatch-through: 15  

M src/backend/access/transam/xlogstats.c
M src/include/access/xlogstats.h

Remove state.tmp when failing to save a replication slot

commit   : bfdd1a12d2d90a255d3e5cc582e2cb3f4a347e53    
  
author   : Michael Paquier <michael@paquier.xyz>    
date     : Fri, 10 Oct 2025 09:24:53 +0900    
  
committer: Michael Paquier <michael@paquier.xyz>    
date     : Fri, 10 Oct 2025 09:24:53 +0900    

Click here for diff

An error happening while a slot's data is saved on disk in
SaveSlotToPath() could cause a state.tmp file (the temporary file
holding the slot state data, renamed to its permanent name at the end
of the function) to remain around after it has been created.  This
temporary file is created with O_EXCL, meaning that if an existing
state.tmp is found, its creation would fail.  This would prevent the
slot data from being saved, requiring manual intervention to remove
state.tmp before being able to save the slot again.  A possible
scenario where this temporary file could remain on disk is, for
example, an ENOSPC (no disk space) failure while writing, syncing or
renaming it.  The bug reports point to a write failure as the principal
cause of the problems.

Using O_TRUNC was argued back in 2019 as a potential solution to
discard any temporary file that could exist.  This solution was
rejected because O_EXCL also acts as a safety measure when saving the
slot state, crash recovery offering cleanup guarantees post-crash.
This commit uses the alternative approach suggested by Andres Freund
back in 2019.  When the temporary state file cannot be written, synced,
closed or renamed (note: not when created!), an unlink() is used to
remove the temporary state file while holding the in-progress I/O
LWLock, so that any follow-up attempt to save a slot's data does not
choke on an existing file that remained around because of a previous
failure.
  
This problem has been reported a few times across the years, going back  
to 2019, but for some reason I have never come back to do something  
about it and it has been forgotten.  A recent report has reminded me  
that this was still a problem.  
  
Reported-by: Kevin K Biju <kevinkbiju@gmail.com>  
Reported-by: Sergei Kornilov <sk@zsrv.org>  
Reported-by: Grigory Smolkin <g.smolkin@postgrespro.ru>  
Discussion: https://postgr.es/m/CAM45KeHa32soKL_G8Vk38CWvTBeOOXcsxAPAs7Jt7yPRf2mbVA@mail.gmail.com  
Discussion: https://postgr.es/m/3559061693910326@qy4q4a6esb2lebnz.sas.yp-c.yandex.net  
Discussion: https://postgr.es/m/08bbfab1-a61d-3750-fc18-4ab2c1aa7f09@postgrespro.ru  
Backpatch-through: 13  

M src/backend/replication/slot.c
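
A minimal POSIX sketch of the cleanup pattern adopted above: the temporary
file is still created with O_EXCL, but if any later step (write, sync,
rename) fails, the file is unlinked before the error is reported, so the
next attempt does not trip over a stale temporary file.  Path handling and
names are illustrative only, not the actual slot.c code.

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Write "len" bytes of "data" to "path" atomically via "path.tmp". */
    static int
    save_state_atomically_demo(const char *path, const void *data, size_t len)
    {
        char        tmppath[1024];
        int         fd;

        snprintf(tmppath, sizeof(tmppath), "%s.tmp", path);

        /* O_EXCL: fail rather than silently reuse a leftover temp file */
        fd = open(tmppath, O_WRONLY | O_CREAT | O_EXCL, 0600);
        if (fd < 0)
            return -1;

        if (write(fd, data, len) != (ssize_t) len || fsync(fd) != 0)
        {
            close(fd);
            unlink(tmppath);    /* don't leave the temp file behind */
            return -1;
        }
        close(fd);

        if (rename(tmppath, path) != 0)
        {
            unlink(tmppath);
            return -1;
        }
        return 0;
    }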

Fix access-to-already-freed-memory issue in pgoutput.

commit   : b07682bce301d2236667ad45aeba61c671f81aae    
  
author   : Masahiko Sawada <msawada@postgresql.org>    
date     : Thu, 9 Oct 2025 10:59:34 -0700    
  
committer: Masahiko Sawada <msawada@postgresql.org>    
date     : Thu, 9 Oct 2025 10:59:34 -0700    

Click here for diff

While pgoutput caches relation synchronization information in  
RelationSyncCache that resides in CacheMemoryContext, each entry's  
information (such as row filter expressions and column lists) is  
stored in the entry's private memory context (entry_cxt in  
RelationSyncEntry), which is a descendant memory context of the  
decoding context.  If logical decoding invoked via SQL functions like
pg_logical_slot_get_binary_changes fails with an error, subsequent
logical decoding executions could access already-freed memory of the
entry's cache, resulting in a crash.

With this change, it is ensured that RelationSyncCache is cleaned up
even in error cases by using a memory context reset callback function.

Backpatch to 15, where entry_cxt was introduced for column filtering
and row filtering.

While the back branches v13 and v14 have a similar issue where
RelationSyncCache persists even after an error when pgoutput is used
via the SQL API, we decided not to backport this fix.  This decision
was made because v13 is approaching its final minor release, and we
won't have a chance to fix any new issues that might arise.
Additionally, since using pgoutput via the SQL API is not a common use
case, the risk outweighs the benefit.  If we receive bug reports, we
can consider backporting the fixes then.
  
Author: vignesh C <vignesh21@gmail.com>  
Co-authored-by: Masahiko Sawada <sawada.mshk@gmail.com>  
Reviewed-by: Zhijie Hou <houzj.fnst@fujitsu.com>  
Reviewed-by: Euler Taveira <euler@eulerto.com>  
Discussion: https://postgr.es/m/CALDaNm0x-aCehgt8Bevs2cm=uhmwS28MvbYq1=s2Ekf0aDPkOA@mail.gmail.com  
Backpatch-through: 15  

M src/backend/replication/pgoutput/pgoutput.c
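
A hedged, backend-style sketch of the mechanism mentioned above (a memory
context reset callback); this compiles only inside the PostgreSQL source
tree and is not the actual pgoutput.c code.  The point is that the callback
fires when the watched context is reset or deleted, including during error
cleanup, so a pointer into that context cannot survive as a stale cache.

    /* Backend-style sketch; names are hypothetical. */
    #include "postgres.h"
    #include "utils/memutils.h"

    /* cache whose entries point into a decoding-lifetime context */
    static void *DemoRelationSyncCache = NULL;

    /* runs whenever the watched context is reset or deleted */
    static void
    demo_cache_reset_cb(void *arg)
    {
        (void) arg;
        DemoRelationSyncCache = NULL;   /* forget the soon-to-be-freed cache */
    }

    static void
    demo_register_cache_cleanup(MemoryContext decoding_cxt)
    {
        MemoryContextCallback *cb;

        /* the callback struct must live in the context it watches */
        cb = MemoryContextAllocZero(decoding_cxt, sizeof(MemoryContextCallback));
        cb->func = demo_cache_reset_cb;
        cb->arg = NULL;
        MemoryContextRegisterResetCallback(decoding_cxt, cb);
    }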

Use SOCK_ERRNO[_SET] in fe-secure-gssapi.c.

commit   : 46c4478db421764c5785ddce4f07b34550316ec0    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sun, 5 Oct 2025 16:27:47 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sun, 5 Oct 2025 16:27:47 -0400    

Click here for diff

On Windows, this code did not handle error conditions correctly at  
all, since it looked at "errno" which is not used for socket-related  
errors on that platform.  This resulted, for example, in failure  
to connect to a PostgreSQL server with GSSAPI enabled.  
  
We have a convention for dealing with this within libpq, which is to  
use SOCK_ERRNO and SOCK_ERRNO_SET rather than touching errno directly;  
but the GSSAPI code is a relative latecomer and did not get that memo.  
(The equivalent backend code continues to use errno, because the  
backend does this differently.  Maybe libpq's approach should be  
rethought someday.)  
  
Apparently nobody tries to build libpq with GSSAPI support on Windows,  
or we'd have heard about this before, because it's been broken all  
along.  Back-patch to all supported branches.  
  
Author: Ning Wu <ning94803@gmail.com>  
Co-authored-by: Tom Lane <tgl@sss.pgh.pa.us>  
Discussion: https://postgr.es/m/CAFGqpvg-pRw=cdsUpKYfwY6D3d-m9tw8WMcAEE7HHWfm-oYWvw@mail.gmail.com  
Backpatch-through: 13  

M src/interfaces/libpq/fe-secure-gssapi.c
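
The convention referred to above can be illustrated with a standalone
sketch: on Windows, socket errors are reported through WSAGetLastError()
rather than errno, so portable socket code reads and sets the error through
a macro pair instead of touching errno directly.  The macros below mimic
the idea and are not libpq's actual definitions.

    #include <stdio.h>

    #ifdef _WIN32
    #include <winsock2.h>
    #define SOCK_ERRNO_DEMO         (WSAGetLastError())
    #define SOCK_ERRNO_SET_DEMO(e)  WSASetLastError(e)
    #else
    #include <errno.h>
    #define SOCK_ERRNO_DEMO         (errno)
    #define SOCK_ERRNO_SET_DEMO(e)  (errno = (e))
    #endif

    /* Report the outcome of a socket call portably. */
    static void
    report_socket_result_demo(int rc)
    {
        if (rc < 0)
            printf("socket call failed with error %d\n", SOCK_ERRNO_DEMO);
        else
            printf("socket call succeeded\n");
    }

    int
    main(void)
    {
        SOCK_ERRNO_SET_DEMO(0);
        report_socket_result_demo(-1);  /* pretend a socket call failed */
        return 0;
    }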

pgbench: Fail cleanly when finding a COPY result state

commit   : 640590bb423823d60346f14558f93a8b3bafe1dd    
  
author   : Michael Paquier <michael@paquier.xyz>    
date     : Fri, 3 Oct 2025 14:04:03 +0900    
  
committer: Michael Paquier <michael@paquier.xyz>    
date     : Fri, 3 Oct 2025 14:04:03 +0900    

Click here for diff

Currently, pgbench aborts when a COPY response is received in
readCommandResponse().  However, as PQgetResult() returns an empty
result when there is no asynchronous result (through getCopyResult()),
the logic done at the end of readCommandResponse() for the error path
leads to an infinite loop.

This commit forcefully exits the COPY state with PQendcopy() before
moving to the error handler when finding a COPY state, avoiding the
infinite loop.  The COPY protocol is not supported by pgbench anyway,
as an error is assumed in this case, so giving up is better than having
the tool be stuck forever.  pgbench was interruptible in this state.
  
A TAP test is added to check that an error happens if trying to use  
COPY.  
  
Author: Anthonin Bonnefoy <anthonin.bonnefoy@datadoghq.com>  
Discussion: https://postgr.es/m/CAO6_XqpHyF2m73ifV5a=5jhXxH2chk=XrgefY+eWWPe2Eft3=A@mail.gmail.com  
Backpatch-through: 13  

M src/bin/pgbench/pgbench.c
M src/bin/pgbench/t/001_pgbench_with_server.pl
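
A hedged libpq client sketch of the bail-out described above: when a COPY
result state shows up where the client does not expect it, the client ends
the COPY with PQendcopy() and treats the command as failed, instead of
looping on PQgetResult() forever.  This shows the libpq calls involved and
is not pgbench's actual readCommandResponse().

    #include <stdio.h>
    #include <libpq-fe.h>

    /*
     * Drain all results of one command; reject COPY, which this client
     * does not support, without getting stuck.
     */
    static int
    drain_results_demo(PGconn *conn)
    {
        PGresult   *res;
        int         ok = 1;

        while ((res = PQgetResult(conn)) != NULL)
        {
            ExecStatusType st = PQresultStatus(res);

            if (st == PGRES_COPY_IN || st == PGRES_COPY_OUT)
            {
                fprintf(stderr, "COPY is not supported by this client\n");
                PQclear(res);
                PQendcopy(conn);    /* leave COPY state so the loop can end */
                ok = 0;
                continue;
            }
            if (st == PGRES_FATAL_ERROR)
                ok = 0;
            PQclear(res);
        }
        return ok;
    }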

pgstattuple: Improve reports generated for indexes (hash, gist, btree)

commit   : c0f9fe877e23ed5598efa9816b1eafeac3391756    
  
author   : Michael Paquier <michael@paquier.xyz>    
date     : Thu, 2 Oct 2025 11:09:13 +0900    
  
committer: Michael Paquier <michael@paquier.xyz>    
date     : Thu, 2 Oct 2025 11:09:13 +0900    

Click here for diff

pgstattuple checks the state of the pages retrieved for gist and hash
using check functions from each index AM, respectively gistcheckpage()
and _hash_checkpage().  When called, these would fail upon data found
to be incorrect (like an opaque area size that does not match, or empty
pages), unlike btree, which simply discards such cases and continues to
aggregate data.

Zero pages can happen after a crash, and these AMs are able to clean
them up internally when they are seen.  Also, sporadic failures are
annoying when doing, for example, a large-scale diagnostic query based
on pgstattuple with a join of pg_class, as it forces one to use tricks
like quals to discard hash or gist indexes, or a PL wrapper able to
catch errors.

This commit changes the reports generated for btree, gist and hash to
be more user-friendly:
- When seeing an empty page, report it as free space.  This new rule
applies to gist and hash, and already applied to btree.
- For btree, a check based on the size of BTPageOpaqueData is added.
- For gist indexes, gistcheckpage() is not called anymore, replaced by a
check based on the size of GISTPageOpaqueData.
- For hash indexes, instead of _hash_getbuf_with_strategy(), use a
direct call to ReadBufferExtended(), coupled with a check based on
HashPageOpaqueData.  The opaque area size check was already used.
- Pages that do not match these criteria are discarded from the
generated stats reports.

There have been a couple of bug reports over the years complaining that
the current behavior for hash and gist is not that useful, with nothing
being done about it.  Hence this change is backpatched down to v13.
  
Reported-by: Noah Misch <noah@leadboat.com>  
Author: Nitin Motiani <nitinmotiani@google.com>  
Reviewed-by: Dilip Kumar <dilipbalaut@gmail.com>  
Discussion: https://postgr.es/m/CAH5HC95gT1J3dRYK4qEnaywG8RqjbwDdt04wuj8p39R=HukayA@mail.gmail.com  
Backpatch-through: 13  

M contrib/pgstattuple/pgstattuple.c
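
A hedged, backend-style sketch of the first rule in the list above (an empty
page counts as free space rather than raising an error); this is not the
actual pgstattuple.c code, and the struct and function names are invented.

    /* Backend-style sketch; compiles only within the PostgreSQL tree. */
    #include "postgres.h"
    #include "storage/bufpage.h"

    typedef struct DemoIndexStat
    {
        uint64      free_space;
    } DemoIndexStat;

    /*
     * Account for one index page.  An all-zero (uninitialized) page can be
     * left behind by a crash; count it as free space instead of failing.
     */
    static void
    demo_account_page(Page page, DemoIndexStat *stat)
    {
        if (PageIsNew(page))
        {
            stat->free_space += BLCKSZ;
            return;
        }

        /* otherwise inspect the opaque area, tuples, etc. */
        stat->free_space += PageGetFreeSpace(page);
    }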

pgbench: Fix error reporting in readCommandResponse().

commit   : 36c4d30c8f2474cb9bc5f4eb0f2bec4c5a314003    
  
author   : Fujii Masao <fujii@postgresql.org>    
date     : Tue, 30 Sep 2025 23:52:28 +0900    
  
committer: Fujii Masao <fujii@postgresql.org>    
date     : Tue, 30 Sep 2025 23:52:28 +0900    

Click here for diff

pgbench uses readCommandResponse() to process server responses.  
When readCommandResponse() encounters an error during a call to  
PQgetResult() to fetch the current result, it attempts to report it  
with an additional error message from PQerrorMessage(). However,  
previously, this extra error message could be lost or become incorrect.  
  
The cause was that after fetching the current result (and detecting  
an error), readCommandResponse() called PQgetResult() again to  
peek at the next result. This second call could overwrite the libpq  
connection's error message before the original error was reported,  
causing the error message retrieved from PQerrorMessage() to be  
lost or overwritten.  
  
This commit fixes the issue by updating readCommandResponse()  
to use PQresultErrorMessage() instead of PQerrorMessage()  
to retrieve the error message generated when the PQgetResult()  
for the current result causes an error, ensuring the correct message  
is reported.  
  
Backpatch to all supported versions.  
  
Author: Yugo Nagata <nagata@sraoss.co.jp>  
Reviewed-by: Chao Li <lic@highgo.com>  
Reviewed-by: Fujii Masao <masao.fujii@gmail.com>  
Discussion: https://postgr.es/m/20250925110940.ebacc31725758ec47d5432c6@sraoss.co.jp  
Backpatch-through: 13  

M src/bin/pgbench/pgbench.c
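
A small libpq sketch of the distinction the fix relies on: PQerrorMessage()
reports the connection's most recent error and can be overwritten by a later
call (such as a peek-ahead PQgetResult()), whereas PQresultErrorMessage()
stays attached to the particular PGresult.  Illustration only, not the
pgbench code.

    #include <stdio.h>
    #include <libpq-fe.h>

    /* Report the error tied to one result, then release it. */
    static void
    handle_result_demo(PGresult *res)
    {
        if (PQresultStatus(res) == PGRES_FATAL_ERROR)
        {
            /*
             * Safe: the message is stored in the PGresult itself, so a
             * later PQgetResult() on the connection cannot clobber it the
             * way it can clobber PQerrorMessage(conn).
             */
            fprintf(stderr, "command failed: %s", PQresultErrorMessage(res));
        }
        PQclear(res);
    }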

Fix StatisticsObjIsVisibleExt() for pg_temp.

commit   : ab16418ee0ecc9632436f085b09af73ed2444091    
  
author   : Noah Misch <noah@leadboat.com>    
date     : Mon, 29 Sep 2025 11:15:44 -0700    
  
committer: Noah Misch <noah@leadboat.com>    
date     : Mon, 29 Sep 2025 11:15:44 -0700    

Click here for diff

Neighbor get_statistics_object_oid() ignores objects in pg_temp, as has  
been the standard for non-relation, non-type namespace searches since  
CVE-2007-2138.  Hence, most operations that name a statistics object  
correctly decline to map an unqualified name to a statistics object in  
pg_temp.  StatisticsObjIsVisibleExt() did not.  Consequently,  
pg_statistics_obj_is_visible() wrongly returned true for such objects,  
psql \dX wrongly listed them, and getObjectDescription()-based ereport()  
and pg_describe_object() wrongly omitted namespace qualification.  Any  
malfunction beyond that would depend on how a human or application acts  
on those wrong indications.  Commit  
d99d58cdc8c0b5b50ee92995e8575c100b1a458a introduced this.  Back-patch to  
v13 (all supported versions).  
  
Reviewed-by: Nathan Bossart <nathandbossart@gmail.com>  
Discussion: https://postgr.es/m/20250920162116.2e.nmisch@google.com  
Backpatch-through: 13  

M src/backend/catalog/namespace.c
M src/test/regress/expected/stats_ext.out
M src/test/regress/sql/stats_ext.sql

Fix missed copying of groupDistinct in transformPLAssignStmt.

commit   : b7f6798c056a0ca96c0f01750f382213b0ad3375    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sat, 27 Sep 2025 14:29:41 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sat, 27 Sep 2025 14:29:41 -0400    

Click here for diff

Because we failed to do this, DISTINCT in GROUP BY DISTINCT would be  
ignored in PL/pgSQL assignment statements.  It's not surprising that  
no one noticed, since such statements will throw an error if the query  
produces more than one row.  That eliminates most scenarios where  
advanced forms of GROUP BY could be useful, and indeed makes it hard  
even to find a simple test case.  Nonetheless it's wrong.  
  
This is directly the fault of be45be9c3 which added the groupDistinct  
field, but I think much of the blame has to fall on c9d529848, in  
which I incautiously supposed that we'd manage to keep two copies of  
a big chunk of parse-analysis logic in sync.  As a follow-up, I plan  
to refactor so that there's only one copy.  But that seems useful  
only in master, so let's use this one-line fix for the back branches.  
  
Author: Tom Lane <tgl@sss.pgh.pa.us>  
Discussion: https://postgr.es/m/31027.1758919078@sss.pgh.pa.us  
Backpatch-through: 14  

M src/backend/parser/analyze.c

pgbench: Fix assertion failure with retriable errors in pipeline mode.

commit   : 8b2e290bde21767373df9eef4b14bba282df8bdb    
  
author   : Fujii Masao <fujii@postgresql.org>    
date     : Fri, 26 Sep 2025 21:23:43 +0900    
  
committer: Fujii Masao <fujii@postgresql.org>    
date     : Fri, 26 Sep 2025 21:23:43 +0900    

Click here for diff

When running pgbench with --verbose-errors option and a custom script that  
triggered retriable errors (e.g., serialization errors) in pipeline mode,  
an assertion failure could occur:  
  
    Assertion failed: (sql_script[st->use_file].commands[st->command]->type == 1), function commandError, file pgbench.c, line 3062.  
  
The failure happened because pgbench assumed these errors would only occur  
during SQL commands, but in pipeline mode they can also happen during  
\endpipeline meta command.  
  
This commit fixes the assertion failure by adjusting the assertion check to  
allow such errors during either SQL commands or \endpipeline.  
  
Backpatch to v15, where the assertion check was introduced.  
  
Author: Yugo Nagata <nagata@sraoss.co.jp>  
Reviewed-by: Fujii Masao <masao.fujii@gmail.com>  
Discussion: https://postgr.es/m/CAHGQGwGWQMOzNkQs-LmpDHdNC0h8dmAuUMRvZrEntQi5a-b=Kg@mail.gmail.com  

M src/bin/pgbench/pgbench.c

Add minimal sleep to stats isolation test functions.

commit   : 21ada43a6105bee3093d636d2ba2afc71bebe5a1    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 25 Sep 2025 13:29:02 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 25 Sep 2025 13:29:02 -0400    

Click here for diff

The functions test_stat_func() and test_stat_func2() had empty  
function bodies, so that they took very little time to run.  This made  
it possible that on machines with relatively low timer resolution the  
functions could return before the clock advanced, making the test fail  
(as seen on buildfarm members fruitcrow and hamerkop).  
  
To avoid that, pg_sleep for 10us during the functions.  As far as we  
can tell, all current hardware has clock resolution much less than  
that.  (The current implementation of pg_sleep will round it up to  
1ms anyway, but someday that might get improved.)  
  
Author: Michael Banck <mbanck@gmx.net>  
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>  
Discussion: https://postgr.es/m/68d413a3.a70a0220.24c74c.8be9@mx.google.com  
Backpatch-through: 15  

M src/test/isolation/specs/stats.spec

Fix LOCK_TIMEOUT handling during parallel apply.

commit   : a54c7a1133a4338c09b435f7870152f305bbf3ad    
  
author   : Amit Kapila <akapila@postgresql.org>    
date     : Wed, 24 Sep 2025 03:38:27 +0000    
  
committer: Amit Kapila <akapila@postgresql.org>    
date     : Wed, 24 Sep 2025 03:38:27 +0000    

Click here for diff

Previously, the parallel apply worker used SIGINT to receive a graceful  
shutdown signal from the leader apply worker. However, SIGINT is also used  
by the LOCK_TIMEOUT handler to trigger a query-cancel interrupt. This  
overlap caused the parallel apply worker to miss LOCK_TIMEOUT signals,  
leading to incorrect behavior during lock wait/contention.  
  
This patch resolves the conflict by switching the graceful shutdown signal  
from SIGINT to SIGUSR2.  
  
Reported-by: Zane Duffield <duffieldzane@gmail.com>  
Diagnosed-by: Zhijie Hou <houzj.fnst@fujitsu.com>  
Author: Hayato Kuroda <kuroda.hayato@fujitsu.com>  
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>  
Backpatch-through: 16, where it was introduced  
Discussion: https://postgr.es/m/CACMiCkXyC4au74kvE2g6Y=mCEF8X6r-Ne_ty4r7qWkUjRE4+oQ@mail.gmail.com  

M src/backend/postmaster/interrupt.c
M src/backend/replication/logical/applyparallelworker.c
M src/backend/replication/logical/launcher.c
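
A standalone POSIX sketch of the principle behind the fix: keep SIGINT free
for query-cancel style interrupts (such as LOCK_TIMEOUT) and deliver the
"please shut down" request on a different signal, here SIGUSR2, so the two
cannot mask each other.  The handlers are hypothetical, not the actual
worker code.

    #include <signal.h>
    #include <stdio.h>
    #include <unistd.h>

    static volatile sig_atomic_t cancel_pending = 0;
    static volatile sig_atomic_t shutdown_requested = 0;

    static void
    handle_sigint(int signo)
    {
        (void) signo;
        cancel_pending = 1;         /* query-cancel interrupt */
    }

    static void
    handle_sigusr2(int signo)
    {
        (void) signo;
        shutdown_requested = 1;     /* graceful-shutdown request */
    }

    int
    main(void)
    {
        struct sigaction sa;

        sa.sa_flags = 0;
        sigemptyset(&sa.sa_mask);

        sa.sa_handler = handle_sigint;
        sigaction(SIGINT, &sa, NULL);

        sa.sa_handler = handle_sigusr2; /* shutdown no longer rides on SIGINT */
        sigaction(SIGUSR2, &sa, NULL);

        while (!shutdown_requested)
        {
            if (cancel_pending)
            {
                cancel_pending = 0;
                printf("cancel request observed (e.g. lock timeout)\n");
            }
            pause();                /* wait for the next signal */
        }
        printf("shutting down gracefully\n");
        return 0;
    }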

Fix meson build with -Duuid=ossp when using version older than 0.60

commit   : 2f4bc5658a1b6ac1e1fda7ce34cb60ecb935b87a    
  
author   : Michael Paquier <michael@paquier.xyz>    
date     : Mon, 22 Sep 2025 08:03:31 +0900    
  
committer: Michael Paquier <michael@paquier.xyz>    
date     : Mon, 22 Sep 2025 08:03:31 +0900    

Click here for diff

The package for the UUID library may be named "uuid" or "ossp-uuid", and  
meson.build has been using a single call of dependency() with multiple  
names, something only supported since meson 0.60.0.  
  
The minimum version of meson supported by Postgres is 0.57.2 on HEAD,  
since f039c2244110, and 0.54 on stable branches down to 16.  
  
Author: Oreo Yang <oreo.yang@hotmail.com>  
Reviewed-by: Nazir Bilal Yavuz <byavuz81@gmail.com>  
Discussion: https://postgr.es/m/OS3P301MB01656E6F91539770682B1E77E711A@OS3P301MB0165.JPNP301.PROD.OUTLOOK.COM  
Backpatch-through: 16  

M meson.build

pg_restore: Fix security label handling with --no-publications/subscriptions.

commit   : 0870397ccfbdcf830e5fd75608415f7b5c3bc2e7    
  
author   : Fujii Masao <fujii@postgresql.org>    
date     : Thu, 18 Sep 2025 11:09:15 +0900    
  
committer: Fujii Masao <fujii@postgresql.org>    
date     : Thu, 18 Sep 2025 11:09:15 +0900    

Click here for diff

Previously, pg_restore did not skip security labels on publications or  
subscriptions even when --no-publications or --no-subscriptions was specified.  
As a result, it could issue SECURITY LABEL commands for objects that were  
never created, causing those commands to fail.  
  
This commit fixes the issue by ensuring that security labels on publications  
and subscriptions are also skipped when the corresponding options are used.  
  
Backpatch to all supported versions.  
  
Author: Jian He <jian.universality@gmail.com>  
Reviewed-by: Fujii Masao <masao.fujii@gmail.com>  
Discussion: https://postgr.es/m/CACJufxHCt00pR9h51AVu6+yPD5J7JQn=7dQXxqacj0XyDhc-fA@mail.gmail.com  
Backpatch-through: 13  

M src/bin/pg_dump/pg_backup_archiver.c

Calculate agglevelsup correctly when Aggref contains a CTE.

commit   : 7df74e635e65e880c308d52e11d3d603aa1ee9c6    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Wed, 17 Sep 2025 16:32:57 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Wed, 17 Sep 2025 16:32:57 -0400    

Click here for diff

If an aggregate function call contains a sub-select that has  
an RTE referencing a CTE outside the aggregate, we must treat  
that reference like a Var referencing the CTE's query level  
for purposes of determining the aggregate's level.  Otherwise  
we might reach the nonsensical conclusion that the aggregate  
should be evaluated at some query level higher than the CTE,  
ending in a planner error or a broken plan tree that causes  
executor failures.  
  
Bug: #19055  
Reported-by: BugForge <dllggyx@outlook.com>  
Author: Tom Lane <tgl@sss.pgh.pa.us>  
Discussion: https://postgr.es/m/19055-6970cfa8556a394d@postgresql.org  
Backpatch-through: 13  

M src/backend/parser/parse_agg.c
M src/test/regress/expected/with.out
M src/test/regress/sql/with.sql

Add missing EPQ recheck for TID Range Scan

commit   : ba0203880a8f2f8ae397f176f86ed1339939479c    
  
author   : David Rowley <drowley@postgresql.org>    
date     : Wed, 17 Sep 2025 12:20:44 +1200    
  
committer: David Rowley <drowley@postgresql.org>    
date     : Wed, 17 Sep 2025 12:20:44 +1200    

Click here for diff

The EvalPlanQual recheck for TID Range Scan wasn't rechecking that the TID
qual still passed after following update chains.  This could result in
tuples being updated or deleted by plans using TID Range Scans where the
ctid of the new (updated) tuple no longer matches the clause of the scan.
That isn't desired behavior, and isn't consistent with what would happen if
the chosen plan had used an Index or Seq Scan.  It could also lead to
hard-to-predict behavior for scans that contain TID quals and other quals,
as the planner has the freedom to choose TID Range or some other non-TID
scan method for such queries, and the chosen plan could change at any
moment.

Here we fix this by properly implementing the recheck function for TID
Range Scans.

Backpatch to 14, where TID Range Scans were added.
  
Reported-by: Sophie Alpert <pg@sophiebits.com>  
Author: Sophie Alpert <pg@sophiebits.com>  
Author: David Rowley <dgrowleyml@gmail.com>  
Reviewed-by: David Rowley <dgrowleyml@gmail.com>  
Reviewed-by: Chao Li <li.evan.chao@gmail.com>  
Discussion: https://postgr.es/m/4a6268ff-3340-453a-9bf5-c98d51a6f729@app.fastmail.com  
Backpatch-through: 14  

M src/backend/executor/nodeTidrangescan.c
M src/test/isolation/expected/eval-plan-qual.out
M src/test/isolation/specs/eval-plan-qual.spec

Add missing EPQ recheck for TID Scan

commit   : d6539f88b7c5bd2889dfdadc922c740bf5d3e0e4    
  
author   : David Rowley <drowley@postgresql.org>    
date     : Wed, 17 Sep 2025 11:50:12 +1200    
  
committer: David Rowley <drowley@postgresql.org>    
date     : Wed, 17 Sep 2025 11:50:12 +1200    

Click here for diff

The EvalPlanQual recheck for TID Scan wasn't rechecking that the TID qual
still passed after following update chains.  This could result in tuples
being updated or deleted by plans using TID Scans where the ctid of the
new (updated) tuple no longer matches the clause of the scan.  That isn't
desired behavior, and isn't consistent with what would happen if the
chosen plan had used an Index or Seq Scan.  It could also lead to
hard-to-predict behavior for scans that contain TID quals and other quals,
as the planner has the freedom to choose TID or some other scan method for
such queries, and the chosen plan could change at any moment.

Here we fix this by properly implementing the recheck function for TID
Scans.

Backpatch to 13, oldest supported version.
  
Reported-by: Sophie Alpert <pg@sophiebits.com>  
Author: Sophie Alpert <pg@sophiebits.com>  
Author: David Rowley <dgrowleyml@gmail.com>  
Reviewed-by: David Rowley <dgrowleyml@gmail.com>  
Reviewed-by: Chao Li <li.evan.chao@gmail.com>  
Discussion: https://postgr.es/m/4a6268ff-3340-453a-9bf5-c98d51a6f729@app.fastmail.com  
Backpatch-through: 13  

M src/backend/executor/nodeTidscan.c
M src/test/isolation/expected/eval-plan-qual.out
M src/test/isolation/specs/eval-plan-qual.spec

Fix pg_dump COMMENT dependency for separate domain constraints.

commit   : 3cf328eca84a638ff8178dd85a980dbac3e39ae3    
  
author   : Noah Misch <noah@leadboat.com>    
date     : Tue, 16 Sep 2025 09:40:44 -0700    
  
committer: Noah Misch <noah@leadboat.com>    
date     : Tue, 16 Sep 2025 09:40:44 -0700    

Click here for diff

The COMMENT should depend on the separately-dumped constraint, not the  
domain.  Sufficient restore parallelism might fail the COMMENT command  
by issuing it before the constraint exists.  Back-patch to v13, like  
commit 0858f0f96ebb891c8960994f023ed5a17b758a38.  
  
Reviewed-by: Álvaro Herrera <alvherre@kurilemu.de>  
Discussion: https://postgr.es/m/20250913020233.fa.nmisch@google.com  
Backpatch-through: 13  

M src/bin/pg_dump/pg_dump.c

Treat JsonConstructorExpr as non-strict

commit   : 62397bb1893ba4f0b5e3a715ba21ee3c6a211d93    
  
author   : Richard Guo <rguo@postgresql.org>    
date     : Tue, 16 Sep 2025 18:42:20 +0900    
  
committer: Richard Guo <rguo@postgresql.org>    
date     : Tue, 16 Sep 2025 18:42:20 +0900    

Click here for diff

JsonConstructorExpr can produce non-NULL output with a NULL input, so  
it should be treated as a non-strict construct.  Failing to do so can  
lead to incorrect query behavior.  
  
For example, in the reported case, when pulling up a subquery that is  
under an outer join, if the subquery's target list contains a  
JsonConstructorExpr that uses subquery variables and it is mistakenly  
treated as strict, it will be pulled up without being wrapped in a  
PlaceHolderVar.  As a result, the expression will be evaluated at the  
wrong place and will not be forced to null when the outer join should  
do so.  
  
Back-patch to v16 where JsonConstructorExpr was introduced.  
  
Bug: #19046  
Reported-by: Runyuan He <runyuan@berkeley.edu>  
Author: Tender Wang <tndrwang@gmail.com>  
Co-authored-by: Richard Guo <guofenglinux@gmail.com>  
Discussion: https://postgr.es/m/19046-765b6602b0a8cfdf@postgresql.org  
Backpatch-through: 16  

M src/backend/optimizer/util/clauses.c
M src/test/regress/expected/subselect.out
M src/test/regress/sql/subselect.sql

pg_dump: Fix dumping of security labels on subscriptions and event triggers.

commit   : 20b23784fc312e848de4fbca8012eb1733b86795    
  
author   : Fujii Masao <fujii@postgresql.org>    
date     : Tue, 16 Sep 2025 16:44:58 +0900    
  
committer: Fujii Masao <fujii@postgresql.org>    
date     : Tue, 16 Sep 2025 16:44:58 +0900    

Click here for diff

Previously, pg_dump incorrectly queried pg_seclabel to retrieve security labels  
for subscriptions, which are stored in pg_shseclabel as they are global objects.  
This could result in security labels for subscriptions not being dumped.  
  
This commit fixes the issue by updating pg_dump to query the pg_seclabels view,  
which aggregates entries from both pg_seclabel and pg_shseclabel.  
While querying pg_shseclabel directly for subscriptions was an alternative,  
using pg_seclabels is simpler and sufficient.  
  
In addition, pg_dump is updated to dump security labels on event triggers,  
which were previously omitted.  
  
Backpatch to all supported versions.  
  
Author: Jian He <jian.universality@gmail.com>  
Co-authored-by: Fujii Masao <masao.fujii@gmail.com>  
Discussion: https://postgr.es/m/CACJufxHCt00pR9h51AVu6+yPD5J7JQn=7dQXxqacj0XyDhc-fA@mail.gmail.com  
Backpatch-through: 13  

M src/bin/pg_dump/pg_backup_archiver.c
M src/bin/pg_dump/pg_dump.c

pg_restore: Fix comment handling with --no-publications / --no-subscriptions.

commit   : 97527a5e68d523048fd80aa1c97c3fe5bf948fb6    
  
author   : Fujii Masao <fujii@postgresql.org>    
date     : Tue, 16 Sep 2025 10:37:38 +0900    
  
committer: Fujii Masao <fujii@postgresql.org>    
date     : Tue, 16 Sep 2025 10:37:38 +0900    

Click here for diff

Previously, pg_restore did not skip comments on publications or subscriptions  
even when --no-publications or --no-subscriptions was specified. As a result,  
it could issue COMMENT commands for objects that were never created,  
causing those commands to fail.  
  
This commit fixes the issue by ensuring that comments on publications and  
subscriptions are also skipped when the corresponding options are used.  
  
Backpatch to all supported versions.  
  
Author: Jian He <jian.universality@gmail.com>  
Co-authored-by: Fujii Masao <masao.fujii@gmail.com>  
Discussion: https://postgr.es/m/CACJufxHCt00pR9h51AVu6+yPD5J7JQn=7dQXxqacj0XyDhc-fA@mail.gmail.com  
Backpatch-through: 13  

M src/bin/pg_dump/pg_backup_archiver.c
M src/bin/pg_dump/t/002_pg_dump.pl

CREATE STATISTICS: improve misleading error message

commit   : f30c04682cdf3a2fb48c0ab47017b28ef92bbebf    
  
author   : Peter Eisentraut <peter@eisentraut.org>    
date     : Mon, 15 Sep 2025 11:38:58 +0200    
  
committer: Peter Eisentraut <peter@eisentraut.org>    
date     : Mon, 15 Sep 2025 11:38:58 +0200    

Click here for diff

The previous change (commit f225473cbae) was still not on target,  
because it talked about relation kinds, which are not what is being  
checked here.  Provide a more accurate message.  
  
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>  
Discussion: https://postgr.es/m/CACJufxEZ48toGH0Em_6vdsT57Y3L8pLF=DZCQ_gCii6=C3MeXw@mail.gmail.com  

M src/backend/tcop/utility.c
M src/test/regress/expected/stats_ext.out

jit: fix build with LLVM-21

commit   : 2670881afd9175b8946530646eb4fb76aaebd340    
  
author   : Peter Eisentraut <peter@eisentraut.org>    
date     : Mon, 15 Sep 2025 08:13:21 +0200    
  
committer: Peter Eisentraut <peter@eisentraut.org>    
date     : Mon, 15 Sep 2025 08:13:21 +0200    

Click here for diff

LLVM-21 renamed llvm::GlobalValue::getGUID() to  
getGUIDAssumingExternalLinkage(), so add a version guard.  
  
Author: Holger Hoffstätte <holger@applied-asynchrony.com>  
Discussion: https://www.postgresql.org/message-id/flat/d25e6e4a-d1b4-84d3-2f8a-6c45b975f53d%40applied-asynchrony.com  

M src/backend/jit/llvm/llvmjit_inline.cpp

Amend recent fix for SIMILAR TO regex conversion.

commit   : 281ad4ed11d235cccad78fcbf7d807c615e1c0e8    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sat, 13 Sep 2025 16:55:51 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sat, 13 Sep 2025 16:55:51 -0400    

Click here for diff

Commit e3ffc3e91 fixed the translation of character classes in  
SIMILAR TO regular expressions.  Unfortunately the fix broke a corner  
case: if there is an escape character right after the opening bracket  
(for example in "[\q]"), a closing bracket right after the escape  
sequence would not be seen as closing the character class.  
  
There were two more oversights: a backslash or a nested opening bracket  
right at the beginning of a character class should remove the special  
meaning from any following caret or closing bracket.  
  
This bug suggests that this code needs to be more readable, so also  
rename the variables "charclass_depth" and "charclass_start" to  
something more meaningful, rewrite an "if" cascade to be more  
consistent, and improve the commentary.  
  
Reported-by: Dominique Devienne <ddevienne@gmail.com>  
Reported-by: Stephan Springl <springl-psql@bfw-online.de>  
Author: Laurenz Albe <laurenz.albe@cybertec.at>  
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>  
Discussion: https://postgr.es/m/CAFCRh-8NwJd0jq6P=R3qhHyqU7hw0BTor3W0SvUcii24et+zAw@mail.gmail.com  
Backpatch-through: 13  

M src/backend/utils/adt/regexp.c
M src/test/regress/expected/strings.out
M src/test/regress/sql/strings.sql

Fix oversights in pg_event_trigger_dropped_objects() fixes.

commit   : f6c8e7824c5d3872556d1f717f5a5328e5f3e594    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 12 Sep 2025 17:43:15 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 12 Sep 2025 17:43:15 -0400    

Click here for diff

Commit a0b99fc12 caused pg_event_trigger_dropped_objects()  
to not fill the object_name field for schemas, which it  
should have; and caused it to fill the object_name field  
for default values, which it should not have.  
  
In addition, triggers and RLS policies really should behave  
the same way as we're making column defaults do; that is,  
they should have is_temporary = true if they belong to a  
temporary table.  
  
Fix those things, and upgrade event_trigger.sql's woefully  
inadequate test coverage of these secondary output columns.  
  
As before, back-patch only to v15.  
  
Reported-by: Sergey Shinderuk <s.shinderuk@postgrespro.ru>  
Author: Tom Lane <tgl@sss.pgh.pa.us>  
Discussion: https://postgr.es/m/bd7b4651-1c26-4d30-832b-f942fabcb145@postgrespro.ru  
Backpatch-through: 15  

M src/backend/commands/event_trigger.c
M src/test/regress/expected/event_trigger.out
M src/test/regress/sql/event_trigger.sql

Silence compiler warnings on clang 21

commit   : 385c5dfe24deb4fd500fe4007a73c344fa315125    
  
author   : Peter Eisentraut <peter@eisentraut.org>    
date     : Fri, 12 Sep 2025 07:27:48 +0200    
  
committer: Peter Eisentraut <peter@eisentraut.org>    
date     : Fri, 12 Sep 2025 07:27:48 +0200    

Click here for diff

Clang 21 shows some new compiler warnings, for example:  
  
warning: variable 'dstsize' is uninitialized when passed as a const pointer argument here [-Wuninitialized-const-pointer]  
  
The fix is to initialize the variables when they are defined.  This is  
similar to, for example, the existing situation in gistKeyIsEQ().  
  
Discussion: https://www.postgresql.org/message-id/flat/6604ad6e-5934-43ac-8590-15113d6ae4b1%40eisentraut.org  

M src/backend/access/common/toast_internals.c
M src/backend/access/gist/gistutil.c

Report the correct is_temporary flag for column defaults.

commit   : 8856f1acc97be5cc1816d42389c4a6852bc2ec86    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 11 Sep 2025 17:11:54 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 11 Sep 2025 17:11:54 -0400    

Click here for diff

pg_event_trigger_dropped_objects() would report a column default  
object with is_temporary = false, even if it belongs to a  
temporary table.  This seems clearly wrong, so adjust it to  
report the table's temp-ness.  
  
While here, refactor EventTriggerSQLDropAddObject to make its  
handling of namespace objects less messy and avoid duplication  
of the schema-lookup code.  And add some explicit test coverage  
of dropped-object reports for dependencies of temp tables.  
  
Back-patch to v15.  The bug exists further back, but the  
GetAttrDefaultColumnAddress function this patch depends on does not,  
and it doesn't seem worth adjusting it to cope with the older code.  
  
Author: Antoine Violin <violin.antuan@gmail.com>  
Co-authored-by: Tom Lane <tgl@sss.pgh.pa.us>  
Discussion: https://postgr.es/m/CAFjUV9x3-hv0gihf+CtUc-1it0hh7Skp9iYFhMS7FJjtAeAptA@mail.gmail.com  
Backpatch-through: 15  

M src/backend/commands/event_trigger.c
M src/test/regress/expected/event_trigger.out
M src/test/regress/sql/event_trigger.sql

Fix description of WAL record blocks in hash_xlog.h

commit   : 2915a8f8f2db0c9ece41877d46f4300730b4f7d7    
  
author   : Michael Paquier <michael@paquier.xyz>    
date     : Thu, 11 Sep 2025 17:17:28 +0900    
  
committer: Michael Paquier <michael@paquier.xyz>    
date     : Thu, 11 Sep 2025 17:17:28 +0900    

Click here for diff

hash_xlog.h included descriptions of the blocks used in WAL records
that were not completely consistent with how the records are
generated, with one block missing for SQUEEZE_PAGE, and inconsistent
descriptions used for block 0 in VACUUM_ONE_PAGE and MOVE_PAGE_CONTENTS.

This information has been incorrect since c11453ce0aea, as confirmed by
cross-checking the logic for the record generation.
  
Author: Kirill Reshke <reshkekirill@gmail.com>  
Reviewed-by: Andrey Borodin <x4mmm@yandex-team.ru>  
Discussion: https://postgr.es/m/CALdSSPj1j=a1d1hVA3oabRFz0hSU3KKrYtZPijw4UPUM7LY9zw@mail.gmail.com  
Backpatch-through: 13  

M src/include/access/hash_xlog.h

Fix incorrect file reference in guc.h

commit   : 93b6b466336265c5085e2b3d6f44b8d981c82ae1    
  
author   : Michael Paquier <michael@paquier.xyz>    
date     : Thu, 11 Sep 2025 10:15:42 +0900    
  
committer: Michael Paquier <michael@paquier.xyz>    
date     : Thu, 11 Sep 2025 10:15:42 +0900    

Click here for diff

GucSource_Names was documented as being in guc.c, but since 0a20ff54f5e6  
it is located in guc_tables.c.  The reference to the location of  
GucSource_Names is important, as GucSource needs to be kept in sync with  
GucSource_Names.  
  
Author: David G. Johnston <david.g.johnston@gmail.com>  
Discussion: https://postgr.es/m/CAKFQuwYPgAHWPYjPzK7iXzhSZ6MKR8w20_Nz7ZXpOvx=kZbs7A@mail.gmail.com  
Backpatch-through: 16  

M src/include/utils/guc.h

Fix memory leakage in nodeSubplan.c.

commit   : e1da9c072106113fda0eb6ac124aefe9be2cbd3c    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Wed, 10 Sep 2025 16:05:03 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Wed, 10 Sep 2025 16:05:03 -0400    

Click here for diff

If the hash functions used for hashing tuples leaked any memory,  
we failed to clean that up, resulting in query-lifespan memory  
leakage in queries using hashed subplans.  One way that could  
happen is if the values being hashed require de-toasting, since  
most of our hash functions don't trouble to clean up de-toasted  
inputs.  
  
Prior to commit bf6c614a2, this leakage was largely masked  
because TupleHashTableMatch would reset hashtable->tempcxt  
(via execTuplesMatch).  But it doesn't do that anymore, and  
that's not really the right place for this anyway: doing it  
there could reset the tempcxt many times per hash lookup,  
or not at all.  Instead put reset calls into ExecHashSubPlan  
and buildSubPlanHash.  Along the way to that, rearrange  
ExecHashSubPlan so that there's just one place to call  
MemoryContextReset instead of several.  
  
This amounts to accepting the de-facto API spec that the caller  
of the TupleHashTable routines is responsible for resetting the  
tempcxt adequately often.  Although the other callers seem to  
get this right, it was not documented anywhere, so add a comment  
about it.  
  
Bug: #19040  
Reported-by: Haiyang Li <mohen.lhy@alibaba-inc.com>  
Author: Haiyang Li <mohen.lhy@alibaba-inc.com>  
Reviewed-by: Fei Changhong <feichanghong@qq.com>  
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>  
Discussion: https://postgr.es/m/19040-c9b6073ef814f48c@postgresql.org  
Backpatch-through: 13  

M src/backend/executor/execGrouping.c
M src/backend/executor/nodeSubplan.c
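
A hedged, backend-style sketch of the caller-side contract documented above:
whoever drives the hash-table lookups must reset the short-lived context
often enough that per-lookup allocations (such as de-toasted values) cannot
accumulate for the life of the query.  The types and functions here are
invented stand-ins, not the actual executor code.

    /* Backend-style sketch; compiles only within the PostgreSQL tree. */
    #include "postgres.h"
    #include "utils/memutils.h"

    typedef struct DemoHashTable
    {
        MemoryContext tempcxt;      /* reset by the caller between probes */
        /* ... hash table proper elided ... */
    } DemoHashTable;

    /* stand-in for a lookup whose hash/equality functions may allocate
     * in ht->tempcxt (for example, de-toasted inputs); body elided */
    static bool
    demo_lookup(DemoHashTable *ht, Datum key)
    {
        (void) ht;
        (void) key;
        return false;
    }

    static void
    demo_probe_once(DemoHashTable *ht, Datum key)
    {
        bool        found = demo_lookup(ht, key);

        /*
         * Caller's responsibility: discard whatever the lookup left in the
         * temporary context, so repeated probes cannot leak for the whole
         * lifetime of the query.
         */
        MemoryContextReset(ht->tempcxt);
        (void) found;
    }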

meson: Build numeric.c with -ftree-vectorize.

commit   : 509c779293f2c4a9a91b6a4ceb9acc629d15f038    
  
author   : Nathan Bossart <nathan@postgresql.org>    
date     : Wed, 10 Sep 2025 11:21:12 -0500    
  
committer: Nathan Bossart <nathan@postgresql.org>    
date     : Wed, 10 Sep 2025 11:21:12 -0500    

Click here for diff

autoconf builds have compiled this file with -ftree-vectorize since  
commit 8870917623, but meson builds seem to have missed the memo.  
  
Reviewed-by: Jeff Davis <pgsql@j-davis.com>  
Discussion: https://postgr.es/m/aL85CeasM51-0D1h%40nathan  
Backpatch-through: 16  

M src/backend/utils/adt/meson.build

meson: build checksums with extra optimization flags.

commit   : 2de24ca6ca512053d821e14f2203cb0b28d7259e    
  
author   : Jeff Davis <jdavis@postgresql.org>    
date     : Tue, 9 Sep 2025 16:04:04 -0700    
  
committer: Jeff Davis <jdavis@postgresql.org>    
date     : Tue, 9 Sep 2025 16:04:04 -0700    

Click here for diff

Use -funroll-loops and -ftree-vectorize when building checksum.c to  
match what autoconf does.  
  
Missed backport of 9af672bcb2, noticed by Nathan Bossart.  
  
Reported-by: Nathan Bossart <nathandbossart@gmail.com>  
Discussion: https://postgr.es/m/a81f2f7ef34afc24a89c613671ea017e3651329c.camel@j-davis.com  
Reviewed-by: Andres Freund <andres@anarazel.de>  
Backpatch-through: 16  

M src/backend/storage/page/meson.build

Fix corruption of pgstats shared hashtable due to OOM failures

commit   : 12f57681c79bd3e8283bfb223dadba0efbe0b37e    
  
author   : Michael Paquier <michael@paquier.xyz>    
date     : Mon, 8 Sep 2025 15:52:53 +0900    
  
committer: Michael Paquier <michael@paquier.xyz>    
date     : Mon, 8 Sep 2025 15:52:53 +0900    

Click here for diff

A new pgstats entry is created as a two-step process:
- The entry is looked up in the shared hashtable of pgstats, and is
inserted if not found.
- When not found and inserted, its fields are then initialized.  This
part includes a DSA chunk allocation for the stats data of the new entry.

As currently coded, if the DSA chunk allocation fails due to an
out-of-memory failure, an ERROR is generated, leaving an inconsistent
entry from the first step in the pgstats shared hashtable, as the entry
has already been inserted.  These broken entries can then be found by
other backends, crashing them.

There are only two callers of pgstat_init_entry(): when loading the
pgstats file at startup and when creating a new pgstats entry.  This
commit changes pgstat_init_entry() so that it uses
dsa_allocate_extended() with DSA_ALLOC_NO_OOM, making it return NULL on
allocation failure instead of failing.  This way, a backend that fails
an entry creation can take appropriate cleanup actions in the shared
hashtable before throwing an error.  Currently, this means removing the
entry from the shared hashtable before throwing the error for the
allocation failure.

Out-of-memory errors are unlikely to happen in the wild, and we do not
usually bother with back-patches when these are fixed.  However, the
problem dealt with here is a degree worse, as it breaks the shared
memory state of pgstats, impacting other processes that may look at an
inconsistent entry that a different process has failed to create.
  
Author: Mikhail Kot <mikhail.kot@databricks.com>  
Discussion: https://postgr.es/m/CAAi9E7jELo5_-sBENftnc2E8XhW2PKZJWfTC3i2y-GMQd2bcqQ@mail.gmail.com  
Backpatch-through: 15  

M src/backend/utils/activity/pgstat.c
M src/backend/utils/activity/pgstat_shmem.c
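
A hedged, backend-style sketch of the allocation step as described above:
requesting the DSA chunk with DSA_ALLOC_NO_OOM so the caller gets an invalid
pointer back on failure and can undo the hashtable insertion before raising
the error.  This is not the actual pgstat code; the cleanup itself is only
indicated in the comment.

    /* Backend-style sketch; compiles only within the PostgreSQL tree. */
    #include "postgres.h"
    #include "utils/dsa.h"

    /*
     * Allocate the stats data for a newly inserted entry.  On allocation
     * failure, return false so the caller can remove the just-inserted
     * entry from the shared hashtable before reporting the error.
     */
    static bool
    demo_init_entry_data(dsa_area *area, dsa_pointer *body, size_t size)
    {
        *body = dsa_allocate_extended(area, size, DSA_ALLOC_NO_OOM);
        if (*body == InvalidDsaPointer)
            return false;       /* caller cleans up the shared hashtable */
        return true;
    }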

Fix concurrent update issue with MERGE.

commit   : 21a61b87f47b6a1b359021bf9f71b14bbf805330    
  
author   : Dean Rasheed <dean.a.rasheed@gmail.com>    
date     : Fri, 5 Sep 2025 08:23:57 +0100    
  
committer: Dean Rasheed <dean.a.rasheed@gmail.com>    
date     : Fri, 5 Sep 2025 08:23:57 +0100    

Click here for diff

When executing a MERGE UPDATE action, if there is more than one  
concurrent update of the target row, the lock-and-retry code would  
sometimes incorrectly identify the latest version of the target tuple,  
leading to incorrect results.  
  
This was caused by using the ctid field from the TM_FailureData  
returned by table_tuple_lock() in a case where the result was TM_Ok,  
which is unsafe because the TM_FailureData struct is not guaranteed to  
be fully populated in that case. Instead, it should use the tupleid  
passed to (and updated by) table_tuple_lock().  
  
To reduce the chances of similar errors in the future, improve the  
commentary for table_tuple_lock() and TM_FailureData to make it  
clearer that table_tuple_lock() updates the tid passed to it, and most  
fields of TM_FailureData should not be relied on in non-failure cases.  
An exception to this is the "traversed" field, which is set in both  
success and failure cases.  
  
Reported-by: Dmitry <dsy.075@yandex.ru>  
Author: Yugo Nagata <nagata@sraoss.co.jp>  
Reviewed-by: Dean Rasheed <dean.a.rasheed@gmail.com>  
Reviewed-by: Chao Li <li.evan.chao@gmail.com>  
Discussion: https://postgr.es/m/1570d30e-2b95-4239-b9c3-f7bf2f2f8556@yandex.ru  
Backpatch-through: 15  

M src/backend/executor/nodeModifyTable.c
M src/include/access/tableam.h
M src/test/isolation/expected/merge-match-recheck.out
M src/test/isolation/specs/merge-match-recheck.spec

Fix compiler error introduced by 5386bfb9c1f.

commit   : d37694b974322e2ddc1fd3829c8f3d6cd56f0483    
  
author   : Dean Rasheed <dean.a.rasheed@gmail.com>    
date     : Thu, 4 Sep 2025 16:00:01 +0100    
  
committer: Dean Rasheed <dean.a.rasheed@gmail.com>    
date     : Thu, 4 Sep 2025 16:00:01 +0100    

Click here for diff

Per buildfarm member wrasse, a void function cannot return a value.
This only affects v13-v17, where an ABI-compatible wrapper function  
was added.  
  
Backpatch-through: 13-17  

M src/backend/executor/execMain.c

Fix replica identity check for MERGE.

commit   : 421d7a1579853220dfc650126bd43718acdae84b    
  
author   : Dean Rasheed <dean.a.rasheed@gmail.com>    
date     : Thu, 4 Sep 2025 11:48:51 +0100    
  
committer: Dean Rasheed <dean.a.rasheed@gmail.com>    
date     : Thu, 4 Sep 2025 11:48:51 +0100    

Click here for diff

When executing a MERGE, check that the target relation supports all  
actions mentioned in the MERGE command. Specifically, check that it  
has a REPLICA IDENTITY if it publishes updates or deletes and the  
MERGE command contains update or delete actions. Failing to do this  
can silently break replication.  
  
Author: Zhijie Hou <houzj.fnst@fujitsu.com>  
Reviewed-by: Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>  
Reviewed-by: Dean Rasheed <dean.a.rasheed@gmail.com>  
Tested-by: Chao Li <li.evan.chao@gmail.com>  
Discussion: https://postgr.es/m/OS3PR01MB57180C87E43A679A730482DF94B62@OS3PR01MB5718.jpnprd01.prod.outlook.com  
Backpatch-through: 15  

M src/backend/executor/execMain.c
M src/backend/executor/execPartition.c
M src/backend/executor/nodeModifyTable.c
M src/include/executor/executor.h
M src/test/regress/expected/publication.out
M src/test/regress/sql/publication.sql

Fix replica identity check for INSERT ON CONFLICT DO UPDATE.

commit   : 0c4d5a45dbd7d11c9ddff4f7cf4dae3940b083be    
  
author   : Dean Rasheed <dean.a.rasheed@gmail.com>    
date     : Thu, 4 Sep 2025 11:34:25 +0100    
  
committer: Dean Rasheed <dean.a.rasheed@gmail.com>    
date     : Thu, 4 Sep 2025 11:34:25 +0100    

Click here for diff

If an INSERT has an ON CONFLICT DO UPDATE clause, the executor must  
check that the target relation supports UPDATE as well as INSERT. In  
particular, it must check that the target relation has a REPLICA  
IDENTITY if it publishes updates. Formerly, it was not doing this  
check, which could lead to silently breaking replication.  
  
Fix by adding such a check to CheckValidResultRel(), which requires  
adding a new onConflictAction argument. In back-branches, preserve ABI  
compatibility by introducing a wrapper function with the original  
signature.  
  
Author: Zhijie Hou <houzj.fnst@fujitsu.com>  
Reviewed-by: Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>  
Reviewed-by: Dean Rasheed <dean.a.rasheed@gmail.com>  
Tested-by: Chao Li <li.evan.chao@gmail.com>  
Discussion: https://postgr.es/m/OS3PR01MB57180C87E43A679A730482DF94B62@OS3PR01MB5718.jpnprd01.prod.outlook.com  
Backpatch-through: 13  

M src/backend/executor/execMain.c
M src/backend/executor/execPartition.c
M src/backend/executor/nodeModifyTable.c
M src/include/executor/executor.h
M src/test/regress/expected/publication.out
M src/test/regress/sql/publication.sql

Fix planner error when estimating SubPlan cost

commit   : 79ade5873232c5dea225345c78f1632c22cbb05a    
  
author   : Richard Guo <rguo@postgresql.org>    
date     : Wed, 3 Sep 2025 16:09:23 +0900    
  
committer: Richard Guo <rguo@postgresql.org>    
date     : Wed, 3 Sep 2025 16:09:23 +0900    

Click here for diff

SubPlan nodes are typically built very early, before any RelOptInfos  
have been constructed for the parent query level.  As a result, the  
simple_rel_array in the parent root has not yet been initialized.  
Currently, during cost estimation of a SubPlan's testexpr, we may call  
examine_variable() to look up statistical data about the expressions.  
This can lead to "no relation entry for relid" errors.  
  
To fix, pass root as NULL to cost_qual_eval() in cost_subplan(), since  
the root does not yet contain enough information to safely consult  
statistics.  
  
One exception is SubPlan nodes built for the initplans of MIN/MAX  
aggregates from indexes.  In this case, having a NULL root is safe  
because testexpr will be NULL.  Additionally, an initplan will by  
definition not consult anything from the parent plan.  
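  
A hedged sketch of the fix described above (not the exact committed diff;  
backend context and the optimizer headers are assumed): cost the testexpr  
with a NULL root, so clause costing cannot reach into the parent's  
not-yet-built simple_rel_array.  
  
    QualCost    testexpr_cost;
    
    cost_qual_eval(&testexpr_cost,
                   make_ands_implicit((Expr *) subplan->testexpr),
                   NULL);       /* no root: do not consult statistics */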
  
Backpatch to all supported branches.  Although the reported call path  
that triggers this error is not reachable prior to v17, there's no  
guarantee that other code paths -- especially in extensions -- could  
not encounter the same issue when cost_qual_eval() is called with a  
root that lacks a valid simple_rel_array.  The test case is not  
included in pre-v17 branches though.  
  
Bug: #19037  
Reported-by: Alexander Lakhin <exclusion@gmail.com>  
Diagnosed-by: Tom Lane <tgl@sss.pgh.pa.us>  
Author: Richard Guo <guofenglinux@gmail.com>  
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>  
Discussion: https://postgr.es/m/19037-3d1c7bb553c7ce84@postgresql.org  
Backpatch-through: 13  

M src/backend/optimizer/path/costsize.c

libpq: Fix PQtrace() format for non-printable characters

commit   : 701a0bd56aa4c3091c7c92d235ada9ba7161e7e9    
  
author   : Michael Paquier <michael@paquier.xyz>    
date     : Wed, 3 Sep 2025 12:54:31 +0900    
  
committer: Michael Paquier <michael@paquier.xyz>    
date     : Wed, 3 Sep 2025 12:54:31 +0900    

Click here for diff

PQtrace() was generating its output for non-printable characters without  
casting the printed characters to unsigned char, leading to some extra  
"\xffffff" sequences in the output, because char may be signed.  
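  
A minimal standalone illustration of the underlying C behavior (not libpq  
code): printing a negative char through %x without a cast sign-extends it.  
  
    #include <stdio.h>
    
    int
    main(void)
    {
        char    c = (char) 0x80;
    
        printf("\\x%02x\n", c);                 /* may print \xffffff80 */
        printf("\\x%02x\n", (unsigned char) c); /* prints \x80 */
        return 0;
    }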
  
Oversights introduced by commit 198b3716dba6, so backpatch down to v14.  
  
Author: Ran Benita <ran@unusedvar.com>  
Discussion: https://postgr.es/m/a3383211-4539-459b-9d51-95c736ef08e0@app.fastmail.com  
Backpatch-through: 14  

M src/interfaces/libpq/fe-trace.c

pg_dump: Fix compression API errorhandling

commit   : ec017a305bd4e73a2129e20ea9bbebfcf735ecdc    
  
author   : Daniel Gustafsson <dgustafsson@postgresql.org>    
date     : Fri, 29 Aug 2025 19:28:46 +0200    
  
committer: Daniel Gustafsson <dgustafsson@postgresql.org>    
date     : Fri, 29 Aug 2025 19:28:46 +0200    

Click here for diff

Compression in pg_dump is abstracted using an API with multiple  
implementations which can be selected at runtime by the user.  
The API and its implementations have evolved over time, notable  
commits include bf9aa490db, e9960732a9, 84adc8e20, and 0da243fed.  
The errorhandling defined by the API was however problematic and  
the implementations had a few bugs and/or were not following the  
API specification.  This commit modifies the API to ensure that  
callers can perform errorhandling efficiently and fixes all the  
implementations such that they all implement the API in the same  
way.  A full list of the changes can be seen below.  
  
 * write_func:  
   - Make write_func throw an error on all error conditions.  All  
     callers of write_func were already checking for success and  
     calling pg_fatal on all errors, so we might as well make the  
     API support that case directly with simpler errorhandling as  
     a result.  
  
 * open_func:  
   - zstd: move stream initialization from the open function to  
     the read and write functions as they can have fatal errors.  
     Also ensure to dup the file descriptor like none and gzip.  
   - lz4: Ensure to dup the file descriptor like none and gzip.  
  
 * close_func:  
   - zstd: Ensure to close the file descriptor even if closing  
     down the compressor fails, and clean up state allocation on  
     fclose failures.  Make sure to capture errors set by fclose.  
   - lz4: Ensure to close the file descriptor even if closing  
     down the compressor fails, and instead of calling pg_fatal  
     log the failures using pg_log_error. Make sure to capture  
     errors set by fclose.  
   - none: Make sure to catch errors set by fclose.  
  
 * read_func / gets_func:  
   - Make read_func unconditionally return the number of read  
     bytes instead of making it optional per implementation.  
   - lz4: Make sure to throw an error and not return -1.  
   - gzip: gzread returning zero cannot be assumed to indicate  
     EOF as it is documented to return zero for some types of  
     errors.  
   - lz4, zstd: Convert the _read_internal helper functions to  
     not call pg_fatal on errors to be able to handle gets_func  
     returning NULL on error.  
  
 * getc_func:  
   - zstd: Use an unsigned char rather than an int to read char  
     into.  
  
 * LZ4Stream_init:  
   - Make sure to not switch to inited state until we know that  
     initialization succeeded and reset errno just in case.  
  
On top of these changes there are minor comment cleanups and  
improvements as well as an attempt to consistently reset errno  
in codepaths where it is inspected.  
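  
For instance, the tightened write_func contract could look roughly like  
the sketch below (a hypothetical implementation using the frontend logging  
API, not the committed code): the callback reports failure itself, so  
callers no longer need to inspect a return value.  
  
    #include <stdio.h>
    #include "common/logging.h"     /* for pg_fatal() */
    
    static void
    sketch_write_func(const void *ptr, size_t size, FILE *fp)
    {
        if (fwrite(ptr, 1, size, fp) != size)
            pg_fatal("could not write to output file: %m");
    }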
  
This work was initiated by a report of API misuse, which turned  
into a larger body of work.  As this is an internal API these  
changes can be backpatched into all affected branches.  
  
Author: Tom Lane <tgl@sss.pgh.pa.us>  
Author: Daniel Gustafsson <daniel@yesql.se>  
Reported-by: Evgeniy Gorbanev <gorbanyoves@basealt.ru>  
Discussion: https://postgr.es/m/517794.1750082166@sss.pgh.pa.us  
Backpatch-through: 16  

M src/bin/pg_dump/compress_gzip.c
M src/bin/pg_dump/compress_io.c
M src/bin/pg_dump/compress_io.h
M src/bin/pg_dump/compress_lz4.c
M src/bin/pg_dump/compress_none.c
M src/bin/pg_dump/compress_zstd.c
M src/bin/pg_dump/pg_backup_archiver.c
M src/bin/pg_dump/pg_backup_directory.c

Fix possible use after free in expand_partitioned_rtentry()

commit   : f0fe1da50998beb08e066508b9d2ed87ab900472    
  
author   : David Rowley <drowley@postgresql.org>    
date     : Sat, 30 Aug 2025 00:52:49 +1200    
  
committer: David Rowley <drowley@postgresql.org>    
date     : Sat, 30 Aug 2025 00:52:49 +1200    

Click here for diff

It's possible, if the only live partition is concurrently dropped and  
try_table_open() fails, that bms_del_member() will pfree the live_parts  
Bitmapset.  Since the bms_del_member() call does not assign its result  
back to the live_parts local variable, the while loop could segfault, as  
that variable would still reference the pfree'd Bitmapset.  
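  
A hedged sketch of the fix pattern (the loop variable name is  
hypothetical): bms_del_member() may repalloc or free its input, so its  
result must always be assigned back before the set is used again.  
  
    /* forget the concurrently-dropped partition, keeping the returned set */
    live_parts = bms_del_member(live_parts, partition_index);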
  
Backpatch to 15. 52f3de874 was backpatched to 14, but there's no  
bms_del_member() there due to live_parts not yet existing in RelOptInfo in  
that version.  Technically there's no bug in version 15 as  
bms_del_member() didn't pfree when the set became empty prior to  
00b41463c (from v16).  Applied to v15 anyway to keep the code similar and  
to avoid the bad coding pattern.  
  
Author: Bernd Reiß <bd_reiss@gmx.at>  
Reviewed-by: David Rowley <dgrowleyml@gmail.com>  
Discussion: https://postgr.es/m/6b88f27a-c45c-4826-8e37-d61a04d90182@gmx.at  
Backpatch-through: 15  

M src/backend/optimizer/util/inherit.c

CREATE STATISTICS: improve misleading error message

commit   : d84a6c3dad1f867275ed7547c8ae6e120292f94c    
  
author   : Álvaro Herrera <alvherre@kurilemu.de>    
date     : Fri, 29 Aug 2025 14:43:47 +0200    
  
committer: Álvaro Herrera <alvherre@kurilemu.de>    
date     : Fri, 29 Aug 2025 14:43:47 +0200    

Click here for diff

I think the error message for a different condition was inadvertently  
copied.  
  
This problem seems to have been introduced by commit a4d75c86bf15.  
  
Author: Álvaro Herrera <alvherre@kurilemu.de>  
Reported-by: jian he <jian.universality@gmail.com>  
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>  
Backpatch-through: 14  
Discussion: https://postgr.es/m/CACJufxEZ48toGH0Em_6vdsT57Y3L8pLF=DZCQ_gCii6=C3MeXw@mail.gmail.com  

M src/backend/tcop/utility.c
M src/test/regress/expected/stats_ext.out
M src/test/regress/sql/stats_ext.sql

Put "excludeOnly" GIN scan keys at the end of the scankey array.

commit   : d532069c391a2b50432f273052196496b7a61c64    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 26 Aug 2025 12:08:57 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 26 Aug 2025 12:08:57 -0400    

Click here for diff

Commit 4b754d6c1 introduced the concept of an excludeOnly scan key,  
which cannot select matching index entries but can reject  
non-matching tuples, for example a tsquery such as '!term'.  There are  
poorly-documented assumptions that such scan keys do not appear as the  
first scan key.  ginNewScanKey did nothing to ensure that, however,  
with the result that certain GIN index searches could go into an  
infinite loop while apparently-equivalent queries with the clauses in  
a different order were fine.  
  
Fix by teaching ginNewScanKey to place all excludeOnly scan keys  
after all not-excludeOnly ones.  So far as we know at present,  
it might be sufficient to avoid the case where the very first  
scan key is excludeOnly; but I'm not very convinced that there  
aren't other dependencies on the ordering.  
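  
A hedged, standalone sketch of the reordering idea (hypothetical types,  
not the ginscan.c code): stable-partition the scan key array so that every  
excludeOnly key follows all ordinary keys.  
  
    #include <stdbool.h>
    #include <stdlib.h>
    #include <string.h>
    
    typedef struct SketchKey
    {
        bool        excludeOnly;
        /* ... other per-key state ... */
    } SketchKey;
    
    static void
    place_exclude_only_last(SketchKey *keys, int nkeys)
    {
        SketchKey  *tmp = malloc(sizeof(SketchKey) * nkeys);
        int         pos = 0;
    
        if (tmp == NULL)
            return;             /* error handling elided in this sketch */
    
        for (int i = 0; i < nkeys; i++)
            if (!keys[i].excludeOnly)
                tmp[pos++] = keys[i];
        for (int i = 0; i < nkeys; i++)
            if (keys[i].excludeOnly)
                tmp[pos++] = keys[i];
    
        memcpy(keys, tmp, sizeof(SketchKey) * nkeys);
        free(tmp);
    }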
  
Bug: #19031  
Reported-by: Tim Wood <washwithcare@gmail.com>  
Author: Tom Lane <tgl@sss.pgh.pa.us>  
Discussion: https://postgr.es/m/19031-0638148643d25548@postgresql.org  
Backpatch-through: 13  

M contrib/pg_trgm/expected/pg_trgm.out
M contrib/pg_trgm/sql/pg_trgm.sql
M src/backend/access/gin/ginscan.c

Do CHECK_FOR_INTERRUPTS inside, not before, scanGetItem.

commit   : 25eadfd0fe7b34f5d3a0d574873a871881bf96f9    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 26 Aug 2025 11:38:41 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 26 Aug 2025 11:38:41 -0400    

Click here for diff

The CHECK_FOR_INTERRUPTS call in gingetbitmap turns out to be  
inadequate to prevent a long uninterruptible loop, because  
we now know a case where looping occurs within scanGetItem.  
While the next patch will fix the bug that caused that, it  
seems foolish to assume that no similar patterns are possible.  
Let's do the CFI within scanGetItem's retry loop, instead.  
This demonstrably allows canceling out of the loop exhibited  
in bug #19031.  
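  
A hedged sketch of the placement (backend context assumed;  
advance_one_step() is a hypothetical stand-in for the retry step inside  
scanGetItem):  
  
    for (;;)
    {
        CHECK_FOR_INTERRUPTS();     /* honor query cancel on every retry */
    
        if (advance_one_step(scan))
            break;
    }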
  
Bug: #19031  
Reported-by: Tim Wood <washwithcare@gmail.com>  
Author: Tom Lane <tgl@sss.pgh.pa.us>  
Discussion: https://postgr.es/m/19031-0638148643d25548@postgresql.org  
Backpatch-through: 13  

M src/backend/access/gin/ginget.c

Rewrite previous commit's test for TestUpgradeXversion compatibility.

commit   : 412d29fd21665d0c4e473efbc9636bd0c16d12a8    
  
author   : Noah Misch <noah@leadboat.com>    
date     : Sat, 23 Aug 2025 16:46:20 -0700    
  
committer: Noah Misch <noah@leadboat.com>    
date     : Sat, 23 Aug 2025 16:46:20 -0700    

Click here for diff

v17 introduced the MAINTAIN ON TABLES privilege.  That changed the  
applicable "baseacls" reaching buildACLCommands().  That yielded  
spurious TestUpgradeXversion diffs.  Change to use a TYPES privilege.  
Types have the same one privilege in all supported versions, so they  
avoid the problem.  Per buildfarm.  Back-patch to v13, like that commit.  
  
Discussion: https://postgr.es/m/20250823144505.88.nmisch@google.com  
Backpatch-through: 13  

M src/test/regress/expected/privileges.out
M src/test/regress/sql/privileges.sql

Sort DO_DEFAULT_ACL dump objects independent of OIDs.

commit   : e68fa9a830f015a5870abcf9ef8bbc4b416464c4    
  
author   : Noah Misch <noah@leadboat.com>    
date     : Fri, 22 Aug 2025 20:50:28 -0700    
  
committer: Noah Misch <noah@leadboat.com>    
date     : Fri, 22 Aug 2025 20:50:28 -0700    

Click here for diff

Commit 0decd5e89db9f5edb9b27351082f0d74aae7a9b6 missed DO_DEFAULT_ACL,  
leading to assertion failures, potential dump order instability, and  
spurious schema diffs.  Back-patch to v13, like that commit.  
  
Reported-by: Alexander Lakhin <exclusion@gmail.com>  
Author: Kirill Reshke <reshkekirill@gmail.com>  
Discussion: https://postgr.es/m/d32aaa8d-df7c-4f94-bcb3-4c85f02bea21@gmail.com  
Backpatch-through: 13  

M src/bin/pg_dump/pg_dump_sort.c
M src/test/regress/expected/privileges.out
M src/test/regress/sql/privileges.sql

Ignore temporary relations in RelidByRelfilenumber()

commit   : ab874faaa1f39a82832a82164951503be668c874    
  
author   : Michael Paquier <michael@paquier.xyz>    
date     : Fri, 22 Aug 2025 09:06:37 +0900    
  
committer: Michael Paquier <michael@paquier.xyz>    
date     : Fri, 22 Aug 2025 09:06:37 +0900    

Click here for diff

Temporary relations may share the same RelFileNumber with a permanent  
relation, or other temporary relations associated with other sessions.  
  
Being able to uniquely identify a temporary relation would require  
RelidByRelfilenumber() to know about the proc number of the temporary  
relation it wants to identify, something it is not designed for since  
its introduction in f01d1ae3a104.  
  
There are currently three callers of RelidByRelfilenumber():  
- autoprewarm.  
- Logical decoding, reorder buffer.  
- pg_filenode_relation(), that attempts to find a relation OID based on  
a tablespace OID and a RelFileNumber.  
  
This makes the situation problematic particularly for the first two  
cases, leading to the possibility of random ERRORs due to  
inconsistencies that temporary relations can create in the cache  
maintained by RelidByRelfilenumber().  The third case should be less of  
an issue, as I suspect that there are few direct callers of  
pg_filenode_relation().  
  
The window where the ERRORs can happen is very narrow, requiring an OID  
wraparound to create a lookup conflict in RelidByRelfilenumber() with a  
temporary table reusing the same OID as another relation already cached.  
The problem is easier to reach in workloads with a high OID consumption  
rate, especially with a higher number of temporary relations created.  
  
We could get pg_filenode_relation() and RelidByRelfilenumber() to work  
with temporary relations if provided the means to identify them with an  
optional proc number given in input, but the years have also shown that  
we do not have a use case for it, yet.  Note that this could not be  
backpatched if pg_filenode_relation() needs changes.  It is simpler to  
ignore temporary relations.  
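  
A hedged sketch of the idea (not the committed diff; assumes a scan over  
pg_class tuples in backend context): temporary relations are simply  
skipped when populating the cache, since their relfilenumbers are not  
unique across backends.  
  
    Form_pg_class classform = (Form_pg_class) GETSTRUCT(tuple);
    
    if (classform->relpersistence == RELPERSISTENCE_TEMP)
        continue;               /* ignore temporary relations */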
  
Reported-by: Shenhao Wang <wangsh.fnst@fujitsu.com>  
Author: Vignesh C <vignesh21@gmail.com>  
Reviewed-By: Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>  
Reviewed-By: Robert Haas <robertmhaas@gmail.com>  
Reviewed-By: Kyotaro Horiguchi <horikyota.ntt@gmail.com>  
Reviewed-By: Takamichi Osumi <osumi.takamichi@fujitsu.com>  
Reviewed-By: Michael Paquier <michael@paquier.xyz>  
Reviewed-By: Masahiko Sawada <sawada.mshk@gmail.com>  
Discussion: https://postgr.es/m/bbaaf9f9-ebb2-645f-54bb-34d6efc7ac42@fujitsu.com  
Backpatch-through: 13  

M doc/src/sgml/func.sgml
M src/backend/utils/adt/dbsize.c
M src/backend/utils/cache/relfilenumbermap.c
M src/test/regress/expected/alter_table.out
M src/test/regress/expected/create_table.out
M src/test/regress/sql/alter_table.sql
M src/test/regress/sql/create_table.sql

doc: Improve description of wal_compression

commit   : d16ed88f040b4aaf32f3df0ff4e07ae3f798cf71    
  
author   : Michael Paquier <michael@paquier.xyz>    
date     : Thu, 21 Aug 2025 13:25:54 +0900    
  
committer: Michael Paquier <michael@paquier.xyz>    
date     : Thu, 21 Aug 2025 13:25:54 +0900    

Click here for diff

The description of this GUC provides a list of the situations where  
full-page writes are generated.  However, it is not completely exact,  
mentioning only the cases where full_page_writes=on or base backups.  It  
is possible to generate full-page writes in more situations than these  
two, making the description confusing as it implies that no other cases  
exist.  
  
The description is slightly reworded to take into account that other  
cases are possible, without mentioning them directly to minimize the  
maintenance burden should FPWs be generated in more contexts in the  
future.  
  
Author: Jingtang Zhang <mrdrivingduck@gmail.com>  
Reviewed-by: Andrey Borodin <x4mmm@yandex-team.ru>  
Reviewed-by: Xuneng Zhou <xunengzhou@gmail.com>  
Discussion: https://postgr.es/m/CAPsk3_CtAYa_fy4p6=x7qtoutrdKvg1kGk46D5fsE=sMt2546g@mail.gmail.com  
Backpatch-through: 13  

M doc/src/sgml/config.sgml

Fix assertion failure with replication slot release in single-user mode

commit   : fea1cc3f75ff6376e27ceb43a655c403f2fe31ef    
  
author   : Michael Paquier <michael@paquier.xyz>    
date     : Wed, 20 Aug 2025 15:00:13 +0900    
  
committer: Michael Paquier <michael@paquier.xyz>    
date     : Wed, 20 Aug 2025 15:00:13 +0900    

Click here for diff

Some replication slot manipulations (logical decoding via SQL,  
advancing) were failing an assertion when releasing a slot in  
single-user mode, because active_pid was not set in the ReplicationSlot  
when the slot was acquired.  
  
ReplicationSlotAcquire() has some logic to be able to work with the  
single-user mode.  This commit sets ReplicationSlot->active_pid to  
MyProcPid, to let the slot-related logic fall through, considering the  
single process as the one holding the slot.  
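  
A hedged sketch of the idea (not the committed diff; backend context  
assumed): in single-user mode there is no separate backend to record as  
the slot holder, so the current process is recorded instead.  
  
    if (!IsUnderPostmaster)
        slot->active_pid = MyProcPid;   /* single-user process holds the slot */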
  
While at it, some TAP tests are added for various replication slot  
functions in single-user mode, covering slot creation, drop, advancing,  
copy and logical decoding with multiple slot types (temporary, physical  
vs logical).  These tests are skipped on Windows, as direct calls of  
postgres --single would fail there with permission errors.  There is no  
platform-specific behavior that needs to be checked, so living with this  
restriction should be fine.  The CI is OK with that; now let's see what  
the buildfarm says.  
  
Author: Hayato Kuroda <kuroda.hayato@fujitsu.com>  
Reviewed-by: Paul A. Jungwirth <pj@illuminatedcomputing.com>  
Reviewed-by: Mutaamba Maasha <maasha@gmail.com>  
Discussion: https://postgr.es/m/OSCPR01MB14966ED588A0328DAEBE8CB25F5FA2@OSCPR01MB14966.jpnprd01.prod.outlook.com  
Backpatch-through: 13  

M src/backend/replication/slot.c
M src/test/modules/test_misc/Makefile
M src/test/modules/test_misc/meson.build
A src/test/modules/test_misc/t/008_replslot_single_user.pl

Add CHECK_FOR_INTERRUPTS in contrib/pg_buffercache functions.

commit   : 815fcfb206f04a4c59975817de79b2074fa912d5    
  
author   : Masahiko Sawada <msawada@postgresql.org>    
date     : Tue, 19 Aug 2025 12:11:34 -0700    
  
committer: Masahiko Sawada <msawada@postgresql.org>    
date     : Tue, 19 Aug 2025 12:11:34 -0700    

Click here for diff

This commit adds CHECK_FOR_INTERRUPTS to loops iterating over shared  
buffers in several pg_buffercache functions, allowing them to be  
interrupted during long-running operations.  
  
Backpatch to all supported versions. Add CHECK_FOR_INTERRUPTS to the  
loop in pg_buffercache_pages() in all supported branches, and to  
pg_buffercache_summary() and pg_buffercache_usage_counts() in version  
16 and newer.  
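  
A hedged sketch of the change (backend context assumed):  
  
    for (int i = 0; i < NBuffers; i++)
    {
        CHECK_FOR_INTERRUPTS();     /* allow cancel during the long walk */
    
        /* ... collect information about buffer i ... */
    }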
  
Author: SATYANARAYANA NARLAPURAM <satyanarlapuram@gmail.com>  
Discussion: https://postgr.es/m/CAHg+QDcejeLx7WunFT3DX6XKh1KshvGKa8F5au8xVhqVvvQPRw@mail.gmail.com  
Backpatch-through: 13  

M contrib/pg_buffercache/pg_buffercache_pages.c

Fix self-deadlock during DROP SUBSCRIPTION.

commit   : 7ece7612906e0ce9fe6d38b7b68f6176857c79f9    
  
author   : Amit Kapila <akapila@postgresql.org>    
date     : Tue, 19 Aug 2025 04:54:19 +0000    
  
committer: Amit Kapila <akapila@postgresql.org>    
date     : Tue, 19 Aug 2025 04:54:19 +0000    

Click here for diff

The DROP SUBSCRIPTION command performs several operations: it stops the  
subscription workers, removes subscription-related entries from system  
catalogs, and deletes the replication slot on the publisher server.  
Previously, this command acquired an AccessExclusiveLock on  
pg_subscription before initiating these steps.  
  
However, while holding this lock, the command attempts to connect to the  
publisher to remove the replication slot. In cases where the connection is  
made to a newly created database on the same server as subscriber, the  
cache-building process during connection tries to acquire an  
AccessShareLock on pg_subscription, resulting in a self-deadlock.  
  
To resolve this issue, we reduce the lock level on pg_subscription during  
DROP SUBSCRIPTION from AccessExclusiveLock to RowExclusiveLock. Earlier,  
the higher lock level was used to prevent the launcher from starting a new  
worker during the drop operation, as a restarted worker could become  
orphaned.  
  
Now, instead of relying on a strict lock, we acquire an AccessShareLock on  
the specific subscription being dropped and re-validate its existence  
after acquiring the lock. If the subscription is no longer valid, the  
worker exits gracefully. This approach avoids the deadlock while still  
ensuring that orphan workers are not created.  
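  
A hedged sketch of the worker-side re-validation (not the committed diff;  
backend context assumed): the worker takes a lightweight lock on its own  
subscription and exits cleanly if the subscription is gone.  
  
    LockSharedObject(SubscriptionRelationId, MySubscription->oid, 0,
                     AccessShareLock);
    
    if (!SearchSysCacheExists1(SUBSCRIPTIONOID,
                               ObjectIdGetDatum(MySubscription->oid)))
        proc_exit(0);           /* subscription was concurrently dropped */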
  
Reported-by: Alexander Lakhin <exclusion@gmail.com>  
Author: Dilip Kumar <dilipbalaut@gmail.com>  
Reviewed-by: vignesh C <vignesh21@gmail.com>  
Reviewed-by: Hayato Kuroda <kuroda.hayato@fujitsu.com>  
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>  
Backpatch-through: 13  
Discussion: https://postgr.es/m/18988-7312c868be2d467f@postgresql.org  

M src/backend/commands/subscriptioncmds.c
M src/backend/replication/logical/worker.c
M src/test/subscription/t/100_bugs.pl

Update obsolete comments in ResultRelInfo struct.

commit   : eb13f3d2e05cc03e41058c1cfbe9749a79612cc9    
  
author   : Etsuro Fujita <efujita@postgresql.org>    
date     : Sun, 17 Aug 2025 19:40:02 +0900    
  
committer: Etsuro Fujita <efujita@postgresql.org>    
date     : Sun, 17 Aug 2025 19:40:02 +0900    

Click here for diff

Commit c5b7ba4e6 changed things so that the ri_RootResultRelInfo field  
of this struct is set for both partitions and inheritance children and  
used for tuple routing and transition capture (before that commit, it  
was only set for partitions to route tuples into), but failed to update  
these comments.  
  
Author: Etsuro Fujita <etsuro.fujita@gmail.com>  
Reviewed-by: Dean Rasheed <dean.a.rasheed@gmail.com>  
Discussion: https://postgr.es/m/CAPmGK14NF5CcdCmTZpxrvpvBiT0y4EqKikW1r_wAu1CEHeOmUA%40mail.gmail.com  
Backpatch-through: 14  

M src/include/nodes/execnodes.h

Fix git whitespace warning

commit   : 1d5205d88b79beb2a697be74b26cd4e1a7517648    
  
author   : Peter Eisentraut <peter@eisentraut.org>    
date     : Fri, 15 Aug 2025 10:29:16 +0200    
  
committer: Peter Eisentraut <peter@eisentraut.org>    
date     : Fri, 15 Aug 2025 10:29:16 +0200    

Click here for diff

Recent changes to src/tools/ci/README triggered warnings like  
  
    src/tools/ci/README:88: leftover conflict marker  
  
Raise conflict-marker-size in .gitattributes to avoid these.  

M .gitattributes

Fix invalid format string in HASH_DEBUG code

commit   : d809494cdfc575beca578518b71a47e2c5b88238    
  
author   : David Rowley <drowley@postgresql.org>    
date     : Fri, 15 Aug 2025 18:07:00 +1200    
  
committer: David Rowley <drowley@postgresql.org>    
date     : Fri, 15 Aug 2025 18:07:00 +1200    

Click here for diff

This seems to have been broken back in be0a66666.  
  
Reported-by: Hayato Kuroda (Fujitsu) <kuroda.hayato@fujitsu.com>  
Author: David Rowley <dgrowleyml@gmail.com>  
Discussion: https://postgr.es/m/OSCPR01MB14966E11EEFB37D7857FCEDB7F535A@OSCPR01MB14966.jpnprd01.prod.outlook.com  
Backpatch-through: 14  

M src/backend/utils/hash/dynahash.c

ci: Simplify ci-os-only handling

commit   : 4900179e9cd8965372fa9971b36c2898d53ebedc    
  
author   : Andres Freund <andres@anarazel.de>    
date     : Thu, 14 Aug 2025 11:48:04 -0400    
  
committer: Andres Freund <andres@anarazel.de>    
date     : Thu, 14 Aug 2025 11:48:04 -0400    

Click here for diff

Handle 'ci-os-only' occurrences in the .cirrus.star file instead of  
the .cirrus.tasks.yml file. Now, 'ci-os-only' occurrences are controlled  
from one central place instead of dealing with them in each task.  
  
Author: Andres Freund <andres@anarazel.de>  
Reviewed-by: Nazir Bilal Yavuz <byavuz81@gmail.com>  
Discussion: https://postgr.es/m/20240413021221.hg53rvqlvldqh57i%40awork3.anarazel.de  
Backpatch: 15-, where CI support was added  

M .cirrus.star
M .cirrus.tasks.yml

ci: Per-repo configuration for manually trigger tasks

commit   : 965d0a0b8b4083caf7769d752973dd0edbb9d0dd    
  
author   : Andres Freund <andres@anarazel.de>    
date     : Thu, 14 Aug 2025 11:33:50 -0400    
  
committer: Andres Freund <andres@anarazel.de>    
date     : Thu, 14 Aug 2025 11:33:50 -0400    

Click here for diff

We do not want to trigger some tasks by default, to avoid using too many  
compute credits. These tasks have to be manually triggered to be run. But  
e.g. for cfbot we do have sufficient resources, so we always want to start  
those tasks.  
  
With this commit, an individual repository can be configured to trigger  
them automatically using an environment variable defined under  
"Repository Settings", for example:  
  
REPO_CI_AUTOMATIC_TRIGGER_TASKS="mingw netbsd openbsd"  
  
This will enable cfbot to turn them on by default when running tests for the  
Commitfest app.  
  
Backpatch this back to PG 15, even though PG 15 does not have any manually  
triggered task. Keeping the CI infrastructure the same seems advantageous.  
  
Author: Andres Freund <andres@anarazel.de>  
Co-authored-by: Thomas Munro <thomas.munro@gmail.com>  
Co-authored-by: Nazir Bilal Yavuz <byavuz81@gmail.com>  
Reviewed-by: Nazir Bilal Yavuz <byavuz81@gmail.com>  
Discussion: https://postgr.es/m/20240413021221.hg53rvqlvldqh57i%40awork3.anarazel.de  
Backpatch-through: 16  

M .cirrus.star
M .cirrus.tasks.yml
M .cirrus.yml
M src/tools/ci/README

Fix compilation warning with SerializeClientConnectionInfo()

commit   : 791506f59e48906477140aa0b80b91f91e8ed601    
  
author   : Michael Paquier <michael@paquier.xyz>    
date     : Thu, 14 Aug 2025 16:22:01 +0900    
  
committer: Michael Paquier <michael@paquier.xyz>    
date     : Thu, 14 Aug 2025 16:22:01 +0900    

Click here for diff

This function uses an argument named "maxsize" that is only used in  
assertions, being set once outside the assertion area.  Because of that,  
recent gcc versions with -Wunused-but-set-parameter emit a warning when  
building without assertions enabled.  
  
In order to fix this issue, PG_USED_FOR_ASSERTS_ONLY is added to the  
function argument of SerializeClientConnectionInfo(), which is the first  
time we are doing so in the tree.  The CI is fine with the change, but  
let's see what the buildfarm has to say on the matter.  
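  
A hedged sketch of the pattern with hypothetical names (backend context  
assumed): the marker keeps -Wunused-but-set-parameter quiet when  
assertions are compiled out.  
  
    static void
    serialize_blob(char *dest, Size maxsize PG_USED_FOR_ASSERTS_ONLY,
                   const char *src, Size len)
    {
        Assert(len <= maxsize);
        memcpy(dest, src, len);
    }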
  
Reviewed-by: Andres Freund <andres@anarazel.de>  
Reviewed-by: Jacob Champion <jchampion@postgresql.org>  
Discussion: https://postgr.es/m/pevajesswhxafjkivoq3yvwxga77tbncghlf3gq5fvchsvfuda@6uivg25sb3nx  
Backpatch-through: 16  

M src/backend/utils/init/miscinit.c

ci: windows: Stop using DEBUG:FASTLINK

commit   : c42f2bdcdf7f3ca765b1dc5ba5753e9f8b545e85    
  
author   : Andres Freund <andres@anarazel.de>    
date     : Wed, 13 Aug 2025 15:52:54 -0400    
  
committer: Andres Freund <andres@anarazel.de>    
date     : Wed, 13 Aug 2025 15:52:54 -0400    

Click here for diff

Currently the pdb files for libpq and some other libraries are named the same  
for the static and shared libraries. That has been the case for a long time,  
but recently started failing, after an image update started using a newer  
ninja version. The issue is not itself caused by ninja, but just made visible,  
as the newer version optimizes the build order and builds the shared libpq  
earlier than the static library. Previously both static and shared libraries  
were built at the same time, which prevented msvc from detecting the issue.  
  
When using /DEBUG:FASTLINK pdb files cannot be updated, triggering the error.  
  
We were using /DEBUG:FASTLINK due to running out of memory in the past, but  
that was when using container based CI images, rather than full VMs.  
  
This isn't really the correct fix (that'd be to deconflict the pdb file  
names), but we'd like to get CI to become green again, and a proper fix (in  
meson) will presumably take longer.  
  
Suggested-by: Andres Freund <andres@anarazel.de>  
Author: Nazir Bilal Yavuz <byavuz81@gmail.com>  
Reviewed-by: Jacob Champion <jacob.champion@enterprisedb.com>  
Reviewed-by: Peter Eisentraut <peter@eisentraut.org>  
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>  
Discussion: https://postgr.es/m/CAN55FZ1RuBhJmPWs3Oi%3D9UoezDfrtO-VaU67db5%2B0_uy19uF%2BA%40mail.gmail.com  
Backpatch-through: 16  

M .cirrus.tasks.yml

Don't treat EINVAL from semget() as a hard failure.

commit   : e67d5f7baa7d227f9c10b39de71a8a7c647388c5    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Wed, 13 Aug 2025 11:59:47 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Wed, 13 Aug 2025 11:59:47 -0400    

Click here for diff

It turns out that on some platforms (at least current macOS, NetBSD,  
OpenBSD) semget(2) will return EINVAL if there is a pre-existing  
semaphore set with the same key and too few semaphores.  Our code  
expects EEXIST in that case and treats EINVAL as a hard failure,  
resulting in failure during initdb or postmaster start.  
  
POSIX does document EINVAL for too-few-semaphores-in-set, and is  
silent on its priority relative to EEXIST, so this behavior arguably  
conforms to spec.  Nonetheless it's quite problematic because EINVAL  
is also documented to mean that nsems is greater than the system's  
limit on the number of semaphores per set (SEMMSL).  If that is  
where the problem lies, retrying would just become an infinite loop.  
  
To resolve this contradiction, retry after EINVAL, but also install a  
loop limit that will make us give up regardless of the specific errno  
after trying 1000 different keys.  (1000 is a pretty arbitrary number,  
but it seems like it should be sufficient.)  I like this better than  
the previous infinite-looping behavior, since it will also keep us out  
of trouble if (say) we get EACCES due to a system-level permissions  
problem rather than anything to do with a specific semaphore set.  
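  
A hedged, standalone sketch of the retry policy (a hypothetical helper,  
not the sysv_sema.c code): EEXIST and EINVAL both mean "try the next key",  
but only for a bounded number of attempts.  
  
    #include <errno.h>
    #include <sys/ipc.h>
    #include <sys/sem.h>
    
    #define MAX_KEY_ATTEMPTS 1000
    
    static int
    create_sema_set(key_t start_key, int nsems)
    {
        for (int attempt = 0; attempt < MAX_KEY_ATTEMPTS; attempt++)
        {
            int     semid = semget(start_key + attempt, nsems,
                                   IPC_CREAT | IPC_EXCL | 0600);
    
            if (semid >= 0)
                return semid;
            if (errno != EEXIST && errno != EINVAL)
                break;          /* hard failure: report errno to the caller */
        }
        return -1;
    }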
  
This problem has only been observed in the field in PG 17, which uses  
a higher nsems value than other branches (cf. 38da05346, 810a8b1c8).  
That makes it possible to get the failure if a new v17 postmaster  
has a key collision with an existing postmaster of another branch.  
In principle though, we might see such a collision against a semaphore  
set created by some other application, in which case all branches are  
vulnerable on these platforms.  Hence, backpatch.  
  
Reported-by: Gavin Panella <gavinpanella@gmail.com>  
Author: Tom Lane <tgl@sss.pgh.pa.us>  
Discussion: https://postgr.es/m/CALL7chmzY3eXHA7zHnODUVGZLSvK3wYCSP0RmcDFHJY8f28Q3g@mail.gmail.com  
Backpatch-through: 13  

M src/backend/port/sysv_sema.c

postgres_fdw: Fix tests with ANALYZE and remote sampling

commit   : 327bd6111aede2a41f8891d2cd2501c773bc047e    
  
author   : Michael Paquier <michael@paquier.xyz>    
date     : Wed, 13 Aug 2025 13:11:51 +0900    
  
committer: Michael Paquier <michael@paquier.xyz>    
date     : Wed, 13 Aug 2025 13:11:51 +0900    

Click here for diff

The tests fixed in this commit were changing the sampling setting of a  
foreign server, but then were analyzing a local table instead of a  
foreign table, meaning that the test was not running for its original  
purpose.  
  
This commit changes the ANALYZE commands to analyze the foreign table,  
and changes the foreign table definition to point to a valid remote  
table.  Attempting to analyze the foreign table "analyze_ftable" would  
have failed before this commit, because "analyze_rtable1" is not defined  
on the remote side.  
  
Issue introduced by 8ad51b5f446b.  
  
Author: Corey Huinker <corey.huinker@gmail.com>  
Discussion: https://postgr.es/m/CADkLM=cpUiJ3QF7aUthTvaVMmgQcm7QqZBRMDLhBRTR+gJX-Og@mail.gmail.com  
Backpatch-through: 16  

M contrib/postgres_fdw/expected/postgres_fdw.out
M contrib/postgres_fdw/sql/postgres_fdw.sql