PostgreSQL 11.20 (upcoming) commit log

doc: Fix XML_CATALOG_FILES env var for Apple Silicon machines

commit   : 4257d34e310be2e16c782f9e9339907a34f708ce    
author   : Daniel Gustafsson <>    
date     : Mon, 27 Mar 2023 21:35:34 +0200    
committer: Daniel Gustafsson <>    
date     : Mon, 27 Mar 2023 21:35:34 +0200    


Homebrew changed the prefix for Apple Silicon based machines, so
our advice for XML_CATALOG_FILES needs to mention both prefixes.  More info
on the Homebrew change can be found at:  
This is a backpatch of commits 4c8d65408 and 5a91c7975, the latter
of which contained a small fix based on a report from Dagfinn Ilmari
Author: Julien Rouhaud <>  

M doc/src/sgml/docguide.sgml

Reject attempts to alter composite types used in indexes.

commit   : 78838bc3d43d3557f1027b95b1961b943d8c0980    
author   : Tom Lane <>    
date     : Mon, 27 Mar 2023 15:04:02 -0400    
committer: Tom Lane <>    
date     : Mon, 27 Mar 2023 15:04:02 -0400    


find_composite_type_dependencies() ignored indexes, which is a poor  
decision because an expression index could have a stored column of  
a composite (or other container) type even when the underlying table  
does not.  Teach it to detect such cases and error out.  We have to  
work a bit harder than for other relations because the pg_depend entry  
won't identify the specific index column of concern, but it's not much  
new code.  
This does not address bug #17872's original complaint that dropping  
a column in such a type might lead to violations of the uniqueness  
property that a unique index is supposed to ensure.  That seems of  
much less concern to me because it won't lead to crashes.  
Per bug #17872 from Alexander Lakhin.  Back-patch to all supported
branches.

M src/backend/commands/tablecmds.c
M src/test/regress/expected/alter_table.out
M src/test/regress/sql/alter_table.sql

Fix oversights in array manipulation.

commit   : ae320fc216863e675505877a844d5a31516362d8    
author   : Tom Lane <>    
date     : Sun, 26 Mar 2023 13:41:06 -0400    
committer: Tom Lane <>    
date     : Sun, 26 Mar 2023 13:41:06 -0400    


The nested-arrays code path in ExecEvalArrayExpr() used palloc to  
allocate the result array, whereas every other array-creating function  
has used palloc0 since 18c0b4ecc.  This mostly works, but unused bits  
past the end of the nulls bitmap may end up undefined.  That causes  
valgrind complaints with -DWRITE_READ_PARSE_PLAN_TREES, and could  
cause planner misbehavior as cited in 18c0b4ecc.  There seems no very  
good reason why we should strive to avoid palloc0 in just this one case,  
so fix it the easy way with s/palloc/palloc0/.  
While looking at that I noted that we also failed to check for overflow  
of "nbytes" and "nitems" while summing the sizes of the sub-arrays,  
potentially allowing a crash due to undersized output allocation.  
For "nbytes", follow the policy used by other array-munging code of  
checking for overflow after each addition.  (As elsewhere, the last  
addition of the array's overhead space doesn't need an extra check,  
since palloc itself will catch a value between 1Gb and 2Gb.)  
For "nitems", there's no very good reason to sum the inputs at all,  
since we can perfectly well use ArrayGetNItems' result instead of  
ignoring it.  
Per discussion of this bug, also remove redundant zeroing of the  
nulls bitmap in array_set_element and array_set_slice.  
Patch by Alexander Lakhin and myself, per bug #17858 from Alexander  
Lakhin; thanks also to Richard Guo.  These bugs are a dozen years old,  
so back-patch to all supported branches.  

M src/backend/executor/execExprInterp.c
M src/backend/utils/adt/arrayfuncs.c
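
The "check for overflow after each addition" policy described above can be sketched in miniature. This is an illustrative Python model, not the C code: the function name is invented, and the 1Gb constant stands in for palloc()'s MaxAllocSize limit.

```python
# Model of the per-addition overflow check used when summing the sizes
# of sub-arrays.  MAX_ALLOC_SIZE mirrors palloc()'s ~1 GB cap; the name
# sum_subarray_sizes is illustrative, not an actual C symbol.
MAX_ALLOC_SIZE = 0x3FFFFFFF  # 1 GB - 1, the largest valid allocation

def sum_subarray_sizes(sizes):
    """Sum sub-array data sizes, erroring out as soon as a partial sum
    could no longer fit in a valid allocation."""
    nbytes = 0
    for sz in sizes:
        nbytes += sz
        if nbytes > MAX_ALLOC_SIZE:   # check after *each* addition
            raise OverflowError("array size exceeds the maximum allowed")
    return nbytes
```

As the message notes, the final addition of the array's overhead space needs no extra check in the C code, because palloc itself rejects any request between 1Gb and 2Gb.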

doc: Add description of some missing monitoring functions

commit   : e09fb8a7588d89c9654b21e81d181ef9abf6e77c    
author   : Michael Paquier <>    
date     : Wed, 22 Mar 2023 18:32:11 +0900    
committer: Michael Paquier <>    
date     : Wed, 22 Mar 2023 18:32:11 +0900    


This commit adds some documentation about two monitoring functions:  
- pg_stat_get_xact_blocks_fetched()  
- pg_stat_get_xact_blocks_hit()  
The description of these functions was removed in ddfc2d9, and later
simplified by 5f2b089, on the assumption that all the functions whose
descriptions were removed are used in system views.  Unfortunately, some
of them are not used in any system view, so they lacked documentation.
This gap has existed in the docs for a long time, so backpatch all the way
down.
Reported-by: Michael Paquier  
Author: Bertrand Drouvot  
Reviewed-by: Kyotaro Horiguchi  
Backpatch-through: 11  

M doc/src/sgml/monitoring.sgml

Ignore dropped columns during apply of update/delete.

commit   : 4cdaea7a211938a2471a8481f5efb61e7a88ad36    
author   : Amit Kapila <>    
date     : Tue, 21 Mar 2023 08:39:00 +0530    
committer: Amit Kapila <>    
date     : Tue, 21 Mar 2023 08:39:00 +0530    


Updates and deletes failed to apply when REPLICA IDENTITY FULL is
used for a table with dropped columns, because we did not ignore
dropped columns while comparing tuples from the publisher and
subscriber during apply of updates and deletes.
Author: Onder Kalaci, Shi yu  
Reviewed-by: Amit Kapila  

M src/backend/executor/execReplication.c
M src/test/subscription/t/
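
The comparison rule this fix establishes — skip dropped columns entirely when matching publisher and subscriber tuples — can be sketched as follows. The attribute-descriptor shape is invented for illustration; the real code works on TupleDesc attributes.

```python
# Sketch of tuple comparison for REPLICA IDENTITY FULL apply:
# dropped columns carry no user data and must be ignored.
def tuples_match(attrs, tup1, tup2):
    """attrs is a list of {'dropped': bool} descriptors, parallel to
    the tuple values; returns True if all live columns agree."""
    for att, v1, v2 in zip(attrs, tup1, tup2):
        if att['dropped']:
            continue            # skip dropped columns entirely
        if v1 != v2:
            return False
    return True
```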

Fix race in parallel hash join batch cleanup, take II.

commit   : ef16d27245f6167acd878bea49cdde0a8dcec49c    
author   : Thomas Munro <>    
date     : Tue, 21 Mar 2023 14:29:34 +1300    
committer: Thomas Munro <>    
date     : Tue, 21 Mar 2023 14:29:34 +1300    


With unlucky timing and parallel_leader_participation=off (not the  
default), PHJ could attempt to access per-batch shared state just as it  
was being freed.  There was code intended to prevent that by checking  
for a cleared pointer, but it was racy.  Fix, by introducing an extra  
barrier phase.  The new phase PHJ_BUILD_RUNNING means that it's safe to  
access the per-batch state to find a batch to help with, and  
PHJ_BUILD_DONE means that it is too late.  The last to detach will free  
the array of per-batch state as before, but now it will also atomically  
advance the phase, so that late attachers can avoid the hazard.  This  
mirrors the way per-batch hash tables are freed (see phases  
An earlier attempt to fix this (commit 3b8981b6, later reverted) missed  
one special case.  When the inner side is empty (the "empty inner"
optimization), the build barrier would only make it to
PHJ_BUILD_HASHING_INNER phase before workers attempted to detach from  
the hashtable.  In that case, fast-forward the build barrier to  
PHJ_BUILD_RUNNING before proceeding, so that our later assertions hold  
and we can still negotiate who is cleaning up.  
Revealed by build farm failures, where BarrierAttach() failed a sanity  
check assertion, because the memory had been clobbered by dsa_free().  
In non-assert builds, the result could be a segmentation fault.  
Back-patch to all supported releases.  
Author: Thomas Munro <>  
Author: Melanie Plageman <>  
Reported-by: Michael Paquier <>  
Reported-by: David Geier <>  
Tested-by: David Geier <>  

M src/backend/executor/nodeHash.c
M src/backend/executor/nodeHashjoin.c
M src/include/executor/hashjoin.h
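
A toy model of the new phase protocol: accessing per-batch state is allowed in PHJ_BUILD_RUNNING and refused once the last participant has freed the array and advanced to PHJ_BUILD_DONE in the same step. Only the two phase names come from the commit; the rest of the machinery is invented, and the real code advances the phase atomically under the barrier.

```python
# Illustrative model of the extra build-barrier phase that closes the
# race: late attachers consult the phase before touching batch state.
PHJ_BUILD_RUNNING = "running"   # per-batch state is safe to access
PHJ_BUILD_DONE = "done"         # per-batch state has been freed

class BuildState:
    def __init__(self):
        self.phase = PHJ_BUILD_RUNNING
        self.batches = ["batch0", "batch1"]

    def detach_last(self):
        """The last participant to detach frees the batches and
        advances the phase together, so late attachers observe DONE
        rather than a dangling pointer."""
        self.batches = None
        self.phase = PHJ_BUILD_DONE

    def try_help(self):
        if self.phase == PHJ_BUILD_DONE:
            return None          # too late: never touch freed state
        return self.batches[0]   # safe: find a batch to help with
```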

Doc: fix documentation example for bytea hex output format.

commit   : 4af259543b1581deeaedd6a983223d44b2db98b7    
author   : Tom Lane <>    
date     : Sat, 18 Mar 2023 16:11:22 -0400    
committer: Tom Lane <>    
date     : Sat, 18 Mar 2023 16:11:22 -0400    


Per report from rsindlin  

M doc/src/sgml/datatype.sgml

Fix pg_dump for hash partitioning on enum columns.

commit   : 012ffb365a05694e392bf96c8363d119bd19534c    
author   : Tom Lane <>    
date     : Fri, 17 Mar 2023 13:31:40 -0400    
committer: Tom Lane <>    
date     : Fri, 17 Mar 2023 13:31:40 -0400    


Hash partitioning on an enum is problematic because the hash codes are  
derived from the OIDs assigned to the enum values, which will almost  
certainly be different after a dump-and-reload than they were before.  
This means that some rows probably end up in different partitions than  
before, causing restore to fail because of partition constraint  
violations.  (pg_upgrade dodges this problem by using hacks to force  
the enum values to keep the same OIDs, but that's neither possible nor
desirable for pg_dump.)  
Users can work around that by specifying --load-via-partition-root,  
but since that's a dump-time not restore-time decision, one might  
find out the need for it far too late.  Instead, teach pg_dump to  
apply that option automatically when dealing with a partitioned  
table that has hash-on-enum partitioning.  
Also deal with a pre-existing issue for --load-via-partition-root  
mode: in a parallel restore, we try to TRUNCATE target tables just  
before loading them, in order to enable some backend optimizations.  
This is bad when using --load-via-partition-root because (a) we're  
likely to suffer deadlocks from restore jobs trying to restore rows  
into other partitions than they came from, and (b) if we miss getting  
a deadlock we might still lose data due to a TRUNCATE removing rows  
from some already-completed restore job.  
The fix for this is conceptually simple: just don't TRUNCATE if we're  
dealing with a --load-via-partition-root case.  The tricky bit is for  
pg_restore to identify those cases.  In dumps using COPY commands we  
can inspect each COPY command to see if it targets the nominal target  
table or some ancestor.  However, in dumps using INSERT commands it's  
pretty impractical to examine the INSERTs in advance.  To provide a  
solution for that going forward, modify pg_dump to mark TABLE DATA  
items that are using --load-via-partition-root with a comment.  
(This change also responds to a complaint from Robert Haas that  
the dump output for --load-via-partition-root is pretty confusing.)  
pg_restore checks for the special comment as well as checking the  
COPY command if present.  This will fail to identify the combination  
of --load-via-partition-root and --inserts in pre-existing dump files,  
but that should be a pretty rare case in the field.  If it does  
happen you will probably get a deadlock failure that you can work  
around by not using parallel restore, which is the same as before  
this bug fix.  
Having done this, there seems no remaining reason for the alarmism  
in the pg_dump man page about combining --load-via-partition-root  
with parallel restore, so remove that warning.  
Patch by me; thanks to Julien Rouhaud for review.  Back-patch to  
v11 where hash partitioning was introduced.  

M doc/src/sgml/ref/pg_dump.sgml
M doc/src/sgml/ref/pg_dumpall.sgml
M src/bin/pg_dump/common.c
M src/bin/pg_dump/pg_backup_archiver.c
M src/bin/pg_dump/pg_dump.c
M src/bin/pg_dump/pg_dump.h
A src/bin/pg_dump/t/
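
The dump-time decision can be sketched as a predicate: apply the --load-via-partition-root treatment exactly when a partitioned table uses hash partitioning on an enum-typed key column. The table-description dicts here are hypothetical stand-ins, not pg_dump's real data structures.

```python
# Sketch of detecting hash-on-enum partitioning, the case where hash
# codes depend on enum value OIDs and so are unstable across a
# dump-and-reload.  The dict layout is invented for illustration.
def needs_load_via_partition_root(table):
    if not table.get("is_partitioned"):
        return False
    if table.get("partition_strategy") != "hash":
        return False
    # any enum-typed key column makes row placement unstable, because
    # the hash is derived from OIDs assigned at enum creation time
    return any(col["type_is_enum"] for col in table["key_columns"])
```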

tests: Prevent syslog activity by slapd, take 2

commit   : b520129bdca4a787973b71bdd61519686be12715    
author   : Andres Freund <>    
date     : Thu, 16 Mar 2023 23:03:31 -0700    
committer: Andres Freund <>    
date     : Thu, 16 Mar 2023 23:03:31 -0700    


Unfortunately it turns out that the logfile-only option added in b9f8d1cbad7  
is only available in openldap starting in 2.6.  
Luckily the option to control the log level (loglevel/-s) has been around for
much longer. As it turns out, loglevel/-s only controls what goes into syslog,
not what ends up in the file specified with 'logfile' and on stderr.
While we currently are specifying 'logfile', nothing ends up in it, as the
option only controls debug messages, and we didn't set a debug level. The
debug level can only be configured on the command line and also prevents
forking. That'd require larger changes, so this commit doesn't tackle that
issue.
Specify the syslog level when starting slapd using -s, as that allows
preventing all syslog messages if one uses '0' instead of 'none', while
loglevel doesn't prevent the first message.
Backpatch: 11-  

M src/test/ldap/t/

tests: Minimize syslog activity by slapd

commit   : b43d8e76ddfabfb710f80dd3095cd62c593e5ca8    
author   : Andres Freund <>    
date     : Thu, 16 Mar 2023 17:48:47 -0700    
committer: Andres Freund <>    
date     : Thu, 16 Mar 2023 17:48:47 -0700    


Until now the tests using slapd spammed syslog for every connection /  
query. Use logfile-only to prevent syslog activity. Unfortunately that only  
takes effect after logging the first message, but that's still much better  
than the prior situation.  
Backpatch: 11-  

M src/test/ldap/t/

Small tidyup for commit d41a178b, part II.

commit   : b23f2a729c9e9b028091717df2fed11c95f98cd9    
author   : Thomas Munro <>    
date     : Fri, 17 Mar 2023 14:44:12 +1300    
committer: Thomas Munro <>    
date     : Fri, 17 Mar 2023 14:44:12 +1300    


Further to commit 6a9229da, checking for NULL is now redundant.  An "out  
of memory" error would have been thrown already by palloc() and treated  
as FATAL, so we can delete a few more lines.  
Back-patch to all releases, like those other commits.  
Reported-by: Tom Lane <>  

M src/backend/postmaster/postmaster.c

Work around spurious compiler warning in inet operators

commit   : 25ae3bba79221ceacf8a6ec0313a039ecd18e528    
author   : Andres Freund <>    
date     : Thu, 16 Mar 2023 14:08:44 -0700    
committer: Andres Freund <>    
date     : Thu, 16 Mar 2023 14:08:44 -0700    


gcc 12+ has complaints like the following:  
../../../../../pgsql/src/backend/utils/adt/network.c: In function 'inetnot':  
../../../../../pgsql/src/backend/utils/adt/network.c:1893:34: warning: writing 1 byte into a region of size 0 [-Wstringop-overflow=]  
 1893 |                         pdst[nb] = ~pip[nb];  
      |                         ~~~~~~~~~^~~~~~~~~~  
../../../../../pgsql/src/include/utils/inet.h:27:23: note: at offset -1 into destination object 'ipaddr' of size 16  
   27 |         unsigned char ipaddr[16];       /* up to 128 bits of address */  
      |                       ^~~~~~  
../../../../../pgsql/src/include/utils/inet.h:27:23: note: at offset -1 into destination object 'ipaddr' of size 16  
This is due to a compiler bug:  
It has been a year since the bug was reported without getting fixed. As
the warnings are verbose and use of gcc 12 is becoming more common, it seems
worth working around the bug, particularly because a simple reformulation of
the loop condition fixes the issue and isn't any less readable.
Author: Tom Lane <>  
Author: Andres Freund <>  
Backpatch: 11-  

M src/backend/utils/adt/network.c

Small tidyup for commit d41a178b.

commit   : 9d6c3439716f04e1dea92376e794ddde98624e3b    
author   : Thomas Munro <>    
date     : Fri, 17 Mar 2023 09:44:42 +1300    
committer: Thomas Munro <>    
date     : Fri, 17 Mar 2023 09:44:42 +1300    


A comment was left behind claiming that we needed to use malloc() rather  
than palloc() because the corresponding free would run in another  
thread, but that's not true anymore.  Remove that comment.  And, with  
the reason being gone, we might as well actually use palloc().  
Back-patch to supported releases, like d41a178b.  

M src/backend/postmaster/postmaster.c

Fix waitpid() emulation on Windows.

commit   : 5ff8e69d8ea594075e435bb42e95b605e08afc2f    
author   : Thomas Munro <>    
date     : Wed, 15 Mar 2023 13:17:18 +1300    
committer: Thomas Munro <>    
date     : Wed, 15 Mar 2023 13:17:18 +1300    


Our waitpid() emulation didn't prevent a PID from being recycled by the  
OS before the call to waitpid().  The postmaster could finish up  
tracking more than one child process with the same PID, and confuse them.
Fix, by moving the guts of pgwin32_deadchild_callback() into waitpid(),  
so that resources are released synchronously.  The process and PID  
continue to exist until we close the process handle, which only happens  
once we're ready to adjust our book-keeping of running children.  
This seems to explain a couple of failures on CI.  It had never been  
reported before, despite the code being as old as the Windows port.  
Perhaps Windows started recycling PIDs more rapidly, or perhaps timing  
changes due to commit 7389aad6 made it more likely to break.  
Thanks to Alexander Lakhin for analysis and Andres Freund for tracking  
down the root cause.  
Back-patch to all supported branches.  
Reported-by: Andres Freund <>  

M src/backend/postmaster/postmaster.c

Fix corner case bug in numeric to_char() some more.

commit   : 8e33fb9ef1df7f1a1e7ef2fe0dbddfff2520f161    
author   : Tom Lane <>    
date     : Tue, 14 Mar 2023 19:17:31 -0400    
committer: Tom Lane <>    
date     : Tue, 14 Mar 2023 19:17:31 -0400    


The band-aid applied in commit f0bedf3e4 turns out to still need  
some work: it made sure we didn't set Np->last_relevant too small  
(to the left of the decimal point), but it didn't prevent setting  
it too large (off the end of the partially-converted string).  
This could result in fetching data beyond the end of the allocated  
space, which with very bad luck could cause a SIGSEGV, though  
I don't see any hazard of interesting memory disclosure.  
Per bug #17839 from Thiago Nunes.  The bug's pretty ancient,  
so back-patch to all supported versions.  

M src/backend/utils/adt/formatting.c
M src/test/regress/expected/numeric.out
M src/test/regress/sql/numeric.sql

Fix JSON error reporting for many cases of erroneous string values.

commit   : 234941a3bbf32266e9e2d3d9bc7648aec850d8c4    
author   : Tom Lane <>    
date     : Mon, 13 Mar 2023 15:19:00 -0400    
committer: Tom Lane <>    
date     : Mon, 13 Mar 2023 15:19:00 -0400    


The majority of error exit cases in json_lex_string() failed to  
set lex->token_terminator, causing problems for the error context  
reporting code: it would see token_terminator less than token_start  
and do something more or less nuts.  In v14 and up the end result  
could be as bad as a crash in report_json_context().  Older  
versions accidentally avoided that fate; but all versions produce  
error context lines that are far less useful than intended,  
because they'd stop at the end of the prior token instead of  
continuing to where the actually-bad input is.  
To fix, invent some macros that make it less notationally painful  
to do the right thing.  Also add documentation about what the  
function is actually required to do; and in >= v14, add an assertion  
in report_json_context about token_terminator being sufficiently  
far advanced.  
Per report from Nikolay Shaplov.  Back-patch to all supported
branches.

M src/backend/utils/adt/json.c
M src/test/regress/expected/json_encoding.out
M src/test/regress/expected/json_encoding_1.out

Fix failure to detect some cases of improperly-nested aggregates.

commit   : 0736b11318daaa38f808197f707fabed79b3aef8    
author   : Tom Lane <>    
date     : Mon, 13 Mar 2023 12:40:28 -0400    
committer: Tom Lane <>    
date     : Mon, 13 Mar 2023 12:40:28 -0400    


check_agg_arguments_walker() supposed that it needn't descend into  
the arguments of a lower-level aggregate function, but this is  
just wrong in the presence of multiple levels of sub-select.  The  
oversight would lead to executor failures on queries that should  
be rejected.  (Prior to v11, they actually were rejected, thanks  
to a "redundant" execution-time check.)  
Per bug #17835 from Anban Company.  Back-patch to all supported
branches.

M src/backend/parser/parse_agg.c
M src/test/regress/expected/aggregates.out
M src/test/regress/sql/aggregates.sql

Fix misbehavior in contrib/pg_trgm with an unsatisfiable regex.

commit   : b18327489b3be5c30ae51eaf24479da7c0af1aaa    
author   : Tom Lane <>    
date     : Sat, 11 Mar 2023 12:15:41 -0500    
committer: Tom Lane <>    
date     : Sat, 11 Mar 2023 12:15:41 -0500    


If the regex compiler can see that a regex is unsatisfiable  
(for example, '$foo') then it may emit an NFA having no arcs.  
pg_trgm's packGraph function did the wrong thing in this case;  
it would access off the end of a work array, and with bad luck  
could produce a corrupted output data structure causing more  
problems later.  This could end with wrong answers or crashes  
in queries using a pg_trgm GIN or GiST index with such a regex.  
Fix by not trying to de-duplicate if there aren't at least 2 arcs.  
Per bug #17830 from Alexander Lakhin.  Back-patch to all supported
branches.

M contrib/pg_trgm/expected/pg_word_trgm.out
M contrib/pg_trgm/sql/pg_word_trgm.sql
M contrib/pg_trgm/trgm_regexp.c
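
The shape of the fix — refuse to de-duplicate unless there are at least two arcs, so the pairwise scan never reads past the end of the work array — can be sketched like this. The arc representation is invented for illustration.

```python
# Sketch of de-duplicating a sorted arc list, guarded so that an NFA
# with zero or one arcs (e.g. from an unsatisfiable regex like '$foo')
# is returned untouched instead of causing an out-of-bounds access.
def dedup_arcs(arcs):
    if len(arcs) < 2:
        return list(arcs)        # nothing to de-duplicate
    arcs = sorted(arcs)
    result = [arcs[0]]
    for arc in arcs[1:]:
        if arc != result[-1]:    # keep only arcs that differ from
            result.append(arc)   # the previously kept one
    return result
```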

Ensure COPY TO on an RLS-enabled table copies no more than it should.

commit   : 6e2674d772b017f4ad4e36394aa1cf64c05b46e5    
author   : Tom Lane <>    
date     : Fri, 10 Mar 2023 13:52:28 -0500    
committer: Tom Lane <>    
date     : Fri, 10 Mar 2023 13:52:28 -0500    


The COPY documentation is quite clear that "COPY relation TO" copies  
rows from only the named table, not any inheritance children it may  
have.  However, if you enabled row-level security on the table then  
this stopped being true, because the code forgot to apply the ONLY  
modifier in the "SELECT ... FROM relation" query that it constructs  
in order to allow RLS predicates to be attached.  Fix that.  
Report and patch by Antonin Houska (comment adjustments and test case  
by me).  Back-patch to all supported branches.  

M src/backend/commands/copy.c
M src/test/regress/expected/rowsecurity.out
M src/test/regress/sql/rowsecurity.sql
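
A minimal sketch of the corrected query construction, assuming a simple string-building model (the real code builds a parse tree, not SQL text): the ONLY keyword is what keeps inheritance children out of "COPY relation TO".

```python
# Sketch of building the RLS-aware query behind "COPY relation TO".
# The function name and string form are illustrative only.
def build_rls_copy_query(relname, columns):
    collist = ", ".join(columns) if columns else "*"
    # ONLY is the crucial modifier this fix adds: without it, rows
    # from inheritance children would be copied too
    return f"SELECT {collist} FROM ONLY {relname}"
```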

Fix race in SERIALIZABLE READ ONLY.

commit   : d1c0f81e72738bcd1b5abc86be7f5a90d659f7bc    
author   : Thomas Munro <>    
date     : Thu, 9 Mar 2023 16:33:24 +1300    
committer: Thomas Munro <>    
date     : Thu, 9 Mar 2023 16:33:24 +1300    


Commit bdaabb9b started skipping doomed transactions when building the  
list of possible conflicts for SERIALIZABLE READ ONLY.  That makes  
sense, because doomed transactions won't commit, but a couple of subtle  
things broke:  
1.  If all uncommitted r/w transactions are doomed, a READ ONLY  
transaction would arbitrarily not benefit from the safe snapshot  
optimization.  It would not be taken immediately, and yet no other  
transaction would set SXACT_FLAG_RO_SAFE later.  
2.  In the same circumstances but with DEFERRABLE, GetSafeSnapshot()  
would correctly exit its wait loop without sleeping and then take the  
optimization in non-assert builds, but assert builds would fail a sanity  
check that SXACT_FLAG_RO_SAFE had been set by another transaction.  
This is similar to the case for PredXact->WritableSxactCount == 0.  We  
should opt out immediately if our possibleUnsafeConflicts list is empty  
after filtering.  
The code to maintain the serializable global xmin is moved down below  
the new opt out site, because otherwise we'd have to reverse its effects  
before returning.  
Back-patch to all supported releases.  Bug #17368.  
Reported-by: Alexander Lakhin <>  

M src/backend/storage/lmgr/predicate.c

Fix more bugs caused by adding columns to the end of a view.

commit   : 721626cb57c023a957397a81564439560f0f155f    
author   : Tom Lane <>    
date     : Tue, 7 Mar 2023 18:21:37 -0500    
committer: Tom Lane <>    
date     : Tue, 7 Mar 2023 18:21:37 -0500    


If a view is defined atop another view, and then CREATE OR REPLACE  
VIEW is used to add columns to the lower view, then when the upper  
view's referencing RTE is expanded by ApplyRetrieveRule we will have  
a subquery RTE with fewer eref->colnames than output columns.  This  
confuses various code that assumes those lists are always in sync,  
as they are in plain parser output.  
We have seen such problems before (cf commit d5b760ecb), and now  
I think the time has come to do what was speculated about in that  
commit: let's make ApplyRetrieveRule synthesize some column names to  
preserve the invariant that holds in parser output.  Otherwise we'll  
be chasing this class of bugs indefinitely.  Moreover, it appears from  
testing that this actually gives us better results in the test case  
d5b760ecb added, and likely in other corner cases that we lack  
coverage for.  
In HEAD, I replaced d5b760ecb's hack to make expandRTE exit early with  
an elog(ERROR) call, since the case is now presumably unreachable.  
But it seems like changing that in back branches would bring more risk  
than benefit, so there I just updated the comment.  
Per bug #17811 from Alexander Lakhin.  Back-patch to all supported
branches.

M src/backend/parser/parse_relation.c
M src/backend/rewrite/rewriteHandler.c
M src/test/regress/expected/alter_table.out
M src/test/regress/sql/alter_table.sql
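
The synthesized-column-name idea can be sketched as padding the name list up to the number of output columns. The naming scheme below is invented for illustration, not necessarily what ApplyRetrieveRule emits.

```python
# Sketch of restoring the invariant that a subquery RTE's column-name
# list matches its output column count, by synthesizing placeholder
# names for columns the lower view gained after the upper view was
# defined.  The "?columnN?" scheme is a hypothetical choice.
def pad_colnames(colnames, ncolumns):
    padded = list(colnames)
    while len(padded) < ncolumns:
        padded.append(f"?column{len(padded) + 1}?")
    return padded
```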

Avoid fetching one past the end of translate()'s "to" parameter.

commit   : b1a9d8ef254fab59c35a504490e14b2d9e1bbb92    
author   : Tom Lane <>    
date     : Wed, 1 Mar 2023 11:30:17 -0500    
committer: Tom Lane <>    
date     : Wed, 1 Mar 2023 11:30:17 -0500    


This is usually harmless, but if you were very unlucky it could  
provoke a segfault due to the "to" string being right up against  
the end of memory.  Found via valgrind testing (so we might've  
found it earlier, except that our regression tests lacked any  
exercise of translate()'s deletion feature).  
Fix by switching the order of the test-for-end-of-string and  
advance-pointer steps.  While here, compute "to_ptr + tolen"  
just once.  (Smarter compilers might figure that out for  
themselves, but let's just make sure.)  
Report and fix by Daniil Anisimov, in bug #17816.  

M src/backend/utils/adt/oracle_compat.c
M src/test/regress/expected/strings.out
M src/test/regress/sql/strings.sql
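
translate()'s mapping-with-deletion semantics, and the fixed ordering (test against the end of "to" before substituting, with the "to" length computed just once), can be modeled like this:

```python
# Model of SQL translate(string, from, to): characters found in "from"
# are replaced by the corresponding character of "to", or deleted when
# "to" is shorter.  The bounds test happens before any fetch from "to".
def translate(s, frm, to):
    tolen = len(to)              # computed once up front
    out = []
    for ch in s:
        i = frm.find(ch)
        if i < 0:
            out.append(ch)       # not in "from": copy unchanged
        elif i < tolen:
            out.append(to[i])    # in range: substitute
        # else: position is past the end of "to" -> delete the char
    return "".join(out)
```

With PostgreSQL's documented example, translate('12345', '143', 'ax') yields 'a2x5': '3' maps past the end of 'ax' and is deleted, which is exactly the deletion path the regression tests previously never exercised.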

Don't force SQL_ASCII/no-locale for installcheck in

commit   : 73e779b3807dedb9b0bc20d4ce12a350033aa646    
author   : Andrew Dunstan <>    
date     : Sun, 26 Feb 2023 06:48:41 -0500    
committer: Andrew Dunstan <>    
date     : Sun, 26 Feb 2023 06:48:41 -0500    


It's been this way for a very long time, but it appears to have been  
masking an issue that only manifests with different settings. Therefore,  
run the tests in the installation's default encoding/locale.  
Backpatch to all live branches.  

M src/tools/msvc/
Fix MULTIEXPR_SUBLINK with partitioned target tables, yet again.

commit   : ffec64ba86c40a31e5a1d0c9a53bf923624db51f    
author   : Tom Lane <>    
date     : Sat, 25 Feb 2023 14:44:14 -0500    
committer: Tom Lane <>    
date     : Sat, 25 Feb 2023 14:44:14 -0500    


We already tried to fix this in commits 3f7323cbb et al (and follow-on  
fixes), but now it emerges that there are still unfixed cases;  
moreover, these cases affect all branches not only pre-v14.  I thought  
we had eliminated all cases of making multiple clones of an UPDATE's  
target list when we nuked inheritance_planner.  But it turns out we  
still do that in some partitioned-UPDATE cases, notably including  
INSERT ... ON CONFLICT UPDATE, because ExecInitPartitionInfo thinks  
it's okay to clone and modify the parent's targetlist.  
This fix is based on a suggestion from Andres Freund: let's stop  
abusing the ParamExecData.execPlan mechanism, which was only ever  
meant to handle initplans, and instead solve the execution timing  
problem by having the expression compiler move MULTIEXPR_SUBLINK steps  
to the front of their expression step lists.  This is feasible because  
(a) all branches still in support compile the entire targetlist of  
an UPDATE into a single ExprState, and (b) we know that all  
MULTIEXPR_SUBLINKs do need to be evaluated --- none could be buried  
inside a CASE, for example.  There is a minor semantics change  
concerning the order of execution of the MULTIEXPR's subquery versus  
other parts of the parent targetlist, but that seems like something  
we can get away with.  By doing that, we no longer need to worry  
about whether different clones of a MULTIEXPR_SUBLINK share output  
Params; their usage of that data structure won't overlap.  
Per bug #17800 from Alexander Lakhin.  Back-patch to all supported  
branches.  In v13 and earlier, we can revert 3f7323cbb and follow-on  
fixes; however, I chose to keep the SubPlan.subLinkId field added  
in ccbb54c72.  We don't need that anymore in the core code, but it's  
cheap enough to fill, and removing a plan node field in a minor  
release seems like it'd be asking for trouble.  
Andres Freund and Tom Lane  

M src/backend/executor/execExpr.c
M src/backend/executor/nodeSubplan.c
M src/backend/optimizer/plan/planner.c
M src/backend/optimizer/plan/subselect.c
M src/include/nodes/primnodes.h
M src/include/optimizer/subselect.h
M src/test/regress/expected/inherit.out
M src/test/regress/sql/inherit.sql

Fix mishandling of OLD/NEW references in subqueries in rule actions.

commit   : 79f194cc0144fad07fa18c4b2a5f32bce9035ee0    
author   : Dean Rasheed <>    
date     : Sat, 25 Feb 2023 14:48:08 +0000    
committer: Dean Rasheed <>    
date     : Sat, 25 Feb 2023 14:48:08 +0000    


If a rule action contains a subquery that refers to columns from OLD  
or NEW, then those are really lateral references, and the planner will  
complain if it sees such things in a subquery that isn't marked as  
lateral. However, at rule-definition time, the user isn't required to  
mark the subquery with LATERAL, and so it can fail when the rule is
subsequently used.
Fix this by marking such subqueries as lateral in the rewriter, at the  
point where they're used.  
Dean Rasheed and Tom Lane, per report from Alexander Lakhin.  
Back-patch to all supported branches.  

M src/backend/rewrite/rewriteHandler.c
M src/test/regress/expected/rules.out
M src/test/regress/sql/rules.sql

Don't repeatedly register cache callbacks in pgoutput plugin.

commit   : 44dbc960f6711e32118a8da71f251d65e0630caa    
author   : Tom Lane <>    
date     : Thu, 23 Feb 2023 15:40:28 -0500    
committer: Tom Lane <>    
date     : Thu, 23 Feb 2023 15:40:28 -0500    


Multiple cycles of starting up and shutting down the plugin within a  
single session would eventually lead to "out of relcache_callback_list  
slots", because pgoutput_startup blindly re-registered its cache  
callbacks each time.  Fix it to register them only once, as all other  
users of cache callbacks already take care to do.  
This has been broken all along, so back-patch to all supported branches.  
Shi Yu  

M src/backend/replication/pgoutput/pgoutput.c

Fix multi-row DEFAULT handling for INSERT ... SELECT rules.

commit   : e68b133c30e2146b51c15be702f8954bc8fdb63b    
author   : Dean Rasheed <>    
date     : Thu, 23 Feb 2023 10:58:43 +0000    
committer: Dean Rasheed <>    
date     : Thu, 23 Feb 2023 10:58:43 +0000    


Given an updatable view with a DO ALSO INSERT ... SELECT rule, a  
multi-row INSERT ... VALUES query on the view fails if the VALUES list  
contains any DEFAULTs that are not replaced by view defaults. This  
manifests as an "unrecognized node type" error, or an Assert failure,  
in an assert-enabled build.  
The reason is that when RewriteQuery() attempts to replace the  
remaining DEFAULT items with NULLs in any product queries, using  
rewriteValuesRTEToNulls(), it assumes that the VALUES RTE is located  
at the same rangetable index in each product query. However, if the  
product query is an INSERT ... SELECT, then the VALUES RTE is actually  
in the SELECT part of that query (at the same index), rather than the  
top-level product query itself.  
Fix, by descending to the SELECT in such cases. Note that we can't  
simply use getInsertSelectQuery() for this, since that expects to be  
given a raw rule action with OLD and NEW placeholder entries, so we  
duplicate its logic instead.  
While at it, beef up the checks in getInsertSelectQuery() by checking  
that the jointree->fromlist node is indeed a RangeTblRef, and that the  
RTE it points to has rtekind == RTE_SUBQUERY.  
Per bug #17803, from Alexander Lakhin. Back-patch to all supported branches.  
Dean Rasheed, reviewed by Tom Lane.  

M src/backend/rewrite/rewriteHandler.c
M src/backend/rewrite/rewriteManip.c
M src/test/regress/expected/updatable_views.out
M src/test/regress/sql/updatable_views.sql

Fix snapshot handling in logicalmsg_decode

commit   : 8de91ebf2ac1e9922214bf2976a2fcc5c045c169    
author   : Tomas Vondra <>    
date     : Wed, 22 Feb 2023 15:24:09 +0100    
committer: Tomas Vondra <>    
date     : Wed, 22 Feb 2023 15:24:09 +0100    

Click here for diff

When decoding a transactional logical message, logicalmsg_decode called  
SnapBuildGetOrBuildSnapshot. But we may not have a consistent snapshot  
yet at that point. We don't actually need the snapshot in this case  
(during replay we'll have the snapshot from the transaction), so in  
practice this is harmless. But in assert-enabled builds this crashes.  
Fixed by requesting the snapshot only in non-transactional case, where  
we are guaranteed to have SNAPBUILD_CONSISTENT.  
Backpatch to 11. The issue exists since 9.6.  
Backpatch-through: 11  
Reviewed-by: Andres Freund  

M src/backend/replication/logical/decode.c
M src/backend/replication/logical/reorderbuffer.c

Add missing support for the latest SPI status codes.

commit   : 83a54d9661027cbb0a97e543ce7440d55812c87c    
author   : Dean Rasheed <>    
date     : Wed, 22 Feb 2023 13:29:39 +0000    
committer: Dean Rasheed <>    
date     : Wed, 22 Feb 2023 13:29:39 +0000    

Click here for diff

SPI_result_code_string() was missing support for SPI_OK_TD_REGISTER,  
and in v15 and later, it was missing support for SPI_OK_MERGE, as was  
pltcl_process_SPI_result().  The last of those would trigger an error  
if a MERGE was executed from  
PL/Tcl. The others seem fairly innocuous, but worth fixing.  
Back-patch to all supported branches. Before v15, this is just adding  
SPI_OK_TD_REGISTER to SPI_result_code_string(), which is unlikely to  
be seen by anyone, but seems worth doing for completeness.  
Reviewed by Tom Lane.  

M src/backend/executor/spi.c

Fix erroneous Valgrind markings in AllocSetRealloc.

commit   : 21bd818d05fb24c6e48de95734acf5e572d18392    
author   : Tom Lane <>    
date     : Tue, 21 Feb 2023 18:47:47 -0500    
committer: Tom Lane <>    
date     : Tue, 21 Feb 2023 18:47:47 -0500    

Click here for diff

If asked to decrease the size of a large (>8K) palloc chunk,  
AllocSetRealloc could improperly change the Valgrind state of memory  
beyond the new end of the chunk: it would mark data UNDEFINED as far  
as the old end of the chunk after having done the realloc(3) call,  
thus tromping on the state of memory that no longer belongs to it.  
One would normally expect that memory to now be marked NOACCESS,  
so that this mislabeling might prevent detection of later errors.  
If realloc() had chosen to move the chunk someplace else (unlikely,  
but well within its rights) we could also mismark perfectly-valid  
DEFINED data as UNDEFINED, causing false-positive valgrind reports  
later.  Also, any malloc bookkeeping data placed within this area  
might now be wrongly marked, causing additional problems.  
Fix by replacing relevant uses of "oldsize" with "Min(size, oldsize)".  
It's sufficient to mark as far as "size" when that's smaller, because  
whatever remains in the new chunk size will be marked NOACCESS below,  
and we expect realloc() to have taken care of marking the memory  
beyond the new official end of the chunk.  
While we're here, also rename the function's "oldsize" variable  
to "oldchksize" to more clearly explain what it actually holds,  
namely the distance to the end of the chunk (that is, requested size  
plus trailing padding).  This is more consistent with the use of  
"size" and "chksize" to hold the new requested size and chunk size.  
Add a new variable "oldsize" in the one stanza where we're actually  
talking about the old requested size.  
Oversight in commit c477f3e44.  Back-patch to all supported branches,  
as that was, just in case anybody wants to do valgrind testing on back branches.  
Karina Litskevich  

M src/backend/utils/mmgr/aset.c

Print the correct aliases for DML target tables in ruleutils.

commit   : df931e9ab35bae8902035eac60e0edcbd8db8b3d    
author   : Tom Lane <>    
date     : Fri, 17 Feb 2023 16:40:34 -0500    
committer: Tom Lane <>    
date     : Fri, 17 Feb 2023 16:40:34 -0500    

Click here for diff

ruleutils.c blindly printed the user-given alias (or nothing if there  
hadn't been one) for the target table of INSERT/UPDATE/DELETE queries.  
That works a large percentage of the time, but not always: for queries  
appearing in WITH, it's possible that we chose a different alias to  
avoid conflict with outer-scope names.  Since the chosen alias would  
be used in any Var references to the target table, this'd lead to an  
inconsistent printout with consequences such as dump/restore failures.  
The correct logic for printing (or not) a relation alias was embedded  
in get_from_clause_item.  Factor it out to a separate function so that  
we don't need a jointree node to use it.  (Only a limited part of that  
function can be reached from these new call sites, but this seems like  
the cleanest non-duplicative factorization.)  
In passing, I got rid of a redundant "\d+ rules_src" step in rules.sql.  
Initial report from Jonathan Katz; thanks to Vignesh C for analysis.  
This has been broken for a long time, so back-patch to all supported branches.  

M src/backend/utils/adt/ruleutils.c
M src/test/regress/expected/rules.out
M src/test/regress/sql/rules.sql

Fix handling of SCRAM-SHA-256's channel binding with RSA-PSS certificates

commit   : 88d606f7cc68aa753868ca92b0e065d77c5915d2    
author   : Michael Paquier <>    
date     : Wed, 15 Feb 2023 10:12:40 +0900    
committer: Michael Paquier <>    
date     : Wed, 15 Feb 2023 10:12:40 +0900    

Click here for diff

OpenSSL 1.1.1 and newer versions have added support for RSA-PSS  
certificates, which requires the use of a specific routine in OpenSSL to  
determine which hash function to use when computing the hash needed for  
channel binding in SCRAM-SHA-256.  X509_get_signature_nid(), the  
original routine the channel binding code relied on, is not able to  
determine which hash algorithm to use for such certificates.  However,  
X509_get_signature_info(), new to OpenSSL 1.1.1, is able to do it.  This  
commit switches the channel binding logic to rely on  
X509_get_signature_info() over X509_get_signature_nid(), which would be  
the choice when building with 1.1.1 or newer.  
The error could have been triggered on the client or the server, hence  
libpq and the backend need to have their related code paths patched.  
Note that attempting to load an RSA-PSS certificate with OpenSSL 1.1.0  
or older leads to a failure due to an unsupported algorithm.  
The discovery of relying on X509_get_signature_info() comes from Jacob,  
the tests have been written by Heikki (with a few tweaks from me), while I  
have bundled the whole together while adding the bits needed for MSVC  
and meson.  
This issue exists since channel binding exists, so backpatch all the way  
down.  Some tests are added in 15~, triggered if compiling with OpenSSL  
1.1.1 or newer, where the certificate and key files can easily be  
generated for RSA-PSS.  
Reported-by: Gunnar "Nick" Bluth  
Author: Jacob Champion, Heikki Linnakangas  
Backpatch-through: 11  

M configure
M src/backend/libpq/be-secure-openssl.c
M src/include/libpq/libpq-be.h
M src/include/
M src/interfaces/libpq/fe-secure-openssl.c
M src/interfaces/libpq/libpq-int.h
M src/tools/msvc/

Disable WindowAgg inverse transitions when subplans are present

commit   : 8d2a8581b6d67cfa05c1f47fa13de9815cdf91f6    
author   : David Rowley <>    
date     : Mon, 13 Feb 2023 17:07:04 +1300    
committer: David Rowley <>    
date     : Mon, 13 Feb 2023 17:07:04 +1300    

Click here for diff

When an aggregate function is used as a WindowFunc and a tuple transitions  
out of the window frame, we ordinarily try to make use of the aggregate  
function's inverse transition function to "unaggregate" the exiting tuple.  
This optimization is disabled for various cases, including when the  
aggregate contains a volatile function.  In such a case we'd be unable to  
ensure that the transition value was calculated to the same value during  
transitions and inverse transitions.  Unfortunately, we did this check by  
calling contain_volatile_functions() which does not recursively search  
SubPlans for volatile functions.  If the aggregate function's arguments or  
its FILTER clause contained a subplan with volatile functions then we'd  
fail to notice this.  
Here we fix this by just disabling the optimization when the WindowFunc  
contains any subplans.  Volatile functions are not the only reason that a  
subplan may have nonrepeatable results.  
Bug: #17777  
Reported-by: Anban Company  
Reviewed-by: Tom Lane  
Backpatch-through: 11  

M src/backend/executor/nodeWindowAgg.c

Stop recommending auto-download of DTD files, and indeed disable it.

commit   : 36a646d99c3fb5262aff00fe7d3d40c2cdbb6d34    
author   : Tom Lane <>    
date     : Wed, 8 Feb 2023 17:15:23 -0500    
committer: Tom Lane <>    
date     : Wed, 8 Feb 2023 17:15:23 -0500    

Click here for diff

It appears no longer possible to build the SGML docs without a local  
installation of the DocBook DTD, because www.oasis-open.org now only  
permits HTTPS access, and no common version of xsltproc supports that.  
Hence, remove the bits of our documentation suggesting that that's  
possible or useful.  
In fact, we might as well add the --nonet option to the build recipes  
automatically, for a bit of extra security.  
Also fix our documentation-tool-installation recipes for macOS to  
ensure that xmllint and xsltproc are pulled in from MacPorts or  
Homebrew.  The previous recipes assumed you could use the  
Apple-supplied versions of these tools, which still works, except that  
you'd need to set an environment variable to ensure that they would  
find DTD files provided by those package managers.  Simpler and easier  
to just recommend pulling in the additional packages.  
In HEAD, also document how to build docs using Meson, and adjust  
"ninja docs" to just build the HTML docs, for consistency with the  
default behavior of doc/src/sgml/Makefile.  
In a fit of neatnik-ism, I also made the ordering of the package  
lists match the order in which the tools are described at the head  
of the appendix.  
Aleksander Alekseev, Peter Eisentraut, Tom Lane  

M doc/src/sgml/Makefile
M doc/src/sgml/docguide.sgml

Backpatch OpenSSL 3.0.0 compatibility in tests

commit   : cab553a08e2d7ba8be52a1871977fa6653ba5924    
author   : Peter Eisentraut <>    
date     : Fri, 5 Jun 2020 11:18:11 +0200    
committer: Andrew Dunstan <>    
date     : Fri, 5 Jun 2020 11:18:11 +0200    

Click here for diff

Backport of commit f0d2c65f17 to releases 11 and 12.  
This means the SSL tests will fail on machines with extremely old  
versions of OpenSSL, but we don't know of anything trying to run such  
tests. The ability to build is not affected.  

M src/test/ssl/Makefile
M src/test/ssl/ssl/server-password.key