commit : 0da63ea07a3b610a5f0c4fc43595d76f3d8fa0d7 author : Tom Lane <email@example.com> date : Mon, 6 Feb 2023 16:46:39 -0500 committer: Tom Lane <firstname.lastname@example.org> date : Mon, 6 Feb 2023 16:46:39 -0500
commit : f280ad2ca2d808a8cee6489dbbe2ec1e9ca02434 author : Peter Eisentraut <email@example.com> date : Mon, 6 Feb 2023 12:22:35 +0100 committer: Peter Eisentraut <firstname.lastname@example.org> date : Mon, 6 Feb 2023 12:22:35 +0100
Source-Git-URL: https://git.postgresql.org/git/pgtranslation/messages.git Source-Git-Hash: 0a5723fd9be9be6300531a4cb0cf34eab22b66b3
Release notes for 15.2, 14.7, 13.10, 12.14, 11.19.
commit : 1efc127978f8c90f8fe9225b85b1a2adba3a2ea0 author : Tom Lane <email@example.com> date : Sun, 5 Feb 2023 16:22:32 -0500 committer: Tom Lane <firstname.lastname@example.org> date : Sun, 5 Feb 2023 16:22:32 -0500
doc: Fix XML formatting that psql cannot handle
commit : a28bf818eaaa4bc71f492071b5a8e6bb0919e4ed author : Peter Eisentraut <email@example.com> date : Fri, 3 Feb 2023 09:04:35 +0100 committer: Peter Eisentraut <firstname.lastname@example.org> date : Fri, 3 Feb 2023 09:04:35 +0100
Breaking <phrase> over two lines is not handled by psql's create_help.pl. (It creates faulty \help output.) Undo the formatting change introduced by 9bdad1b5153e5d6b77a8f9c6e32286d6bafcd76d to fix this for now.
Update time zone data files to tzdata release 2022g.
commit : 7ddc428ef005710e8b220cdd2a90231296294781 author : Tom Lane <email@example.com> date : Tue, 31 Jan 2023 17:36:55 -0500 committer: Tom Lane <firstname.lastname@example.org> date : Tue, 31 Jan 2023 17:36:55 -0500
DST law changes in Greenland and Mexico. Notably, a new timezone America/Ciudad_Juarez has been split off from America/Ojinaga. Historical corrections for northern Canada, Colombia, and Singapore.
Doc: clarify use of NULL to drop comments and security labels.
commit : c1827e9cd2d70a353c066b1b70961e514ebe1d9f author : Tom Lane <email@example.com> date : Tue, 31 Jan 2023 14:32:24 -0500 committer: Tom Lane <firstname.lastname@example.org> date : Tue, 31 Jan 2023 14:32:24 -0500
This was mentioned only in the description of the text/label, even though the synopsis marks them as being in quotes, which can cause confusion (as witnessed on IRC). Also separate the literal and NULL cases in the parameter list, per suggestion from Tom Lane. Also add an example of dropping a security label. Dagfinn Ilmari Mannsåker, with some tweaks by me Discussion: https://email@example.com
Remove recovery test 011_crash_recovery.pl
commit : 403d82dd54b58f9c1c64f7f7b00f396ce903a257 author : Michael Paquier <firstname.lastname@example.org> date : Tue, 31 Jan 2023 12:47:20 +0900 committer: Michael Paquier <email@example.com> date : Tue, 31 Jan 2023 12:47:20 +0900
This test was added in 857ee8e, which introduced the SQL function txid_status(), with the purpose of checking that a transaction ID still in progress during a crash is correctly marked as aborted after recovery finishes. This test is unstable, and some configuration scenarios may make that easier to reproduce (wal_level=minimal, wal_compression=on), because the WAL holding the information about the in-progress transaction ID may not have made it to disk yet; hence a post-crash recovery may cause the same XID to be reused, triggering a test failure. We have discussed a few approaches, like making this function force a WAL flush to make it reliable across crashes, but we don't want to pay a performance penalty in some scenarios, either. The test could have been tweaked to enforce a checkpoint, but that would break the promise of the test to rely on a stable result of txid_status() after a crash. This issue has been reported a few times over the past years, with an original report from Kyotaro Horiguchi. The buildfarm machines tanager, hachi and gokiburi enable wal_compression, and fail on this test periodically. Discussion: https://firstname.lastname@example.org Discussion: https://email@example.com Backpatch-through: 11
Fix rare sharedtuplestore.c corruption.
commit : d95dcc9ab5f8816c7a4ac591628c68efaa2a9b7a author : Thomas Munro <firstname.lastname@example.org> date : Thu, 26 Jan 2023 14:50:07 +1300 committer: Thomas Munro <email@example.com> date : Thu, 26 Jan 2023 14:50:07 +1300
If the final chunk of an oversized tuple being written out to disk was exactly 32760 bytes, it would be corrupted due to a fencepost bug. Bug #17619. Back-patch to 11, where the code arrived. While testing that (see test module in archives), I (tmunro) noticed that the per-participant page counter was not initialized to zero as it should have been; that wasn't a live bug when the code was written, since DSM memory was originally always zeroed, but since 14, min_dynamic_shared_memory might be configured, and it supplies non-zeroed memory, so that is also fixed here. Author: Dmitry Astapov <firstname.lastname@example.org> Discussion: https://postgr.es/m/17619-0de62ceda812b8b5%40postgresql.org
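One way a fencepost bug of this class arises is in rounding up a chunk count: the off-by-one bites only when the input is an exact multiple of the chunk payload size, which is why such bugs can hide for years. A minimal model, with hypothetical names (not sharedtuplestore.c's actual code):

```c
#include <assert.h>
#include <stddef.h>

#define CHUNK_PAYLOAD 32760     /* per-chunk payload size, matching the report */

/* Buggy shape: always adds one chunk, overcounting on an exact fit. */
static int chunks_needed_buggy(size_t size)
{
    return (int) (size / CHUNK_PAYLOAD) + 1;
}

/* Fixed shape: the usual round-up idiom handles exact multiples. */
static int chunks_needed_fixed(size_t size)
{
    return (int) ((size + CHUNK_PAYLOAD - 1) / CHUNK_PAYLOAD);
}
```

The two agree everywhere except when size is an exact multiple of CHUNK_PAYLOAD, the case the report identifies.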
Fix error handling in libpqrcv_connect()
commit : 243373159fb427dfb6fcf43e2ac403d9d3b82752 author : Andres Freund <email@example.com> date : Mon, 23 Jan 2023 18:04:02 -0800 committer: Andres Freund <firstname.lastname@example.org> date : Mon, 23 Jan 2023 18:04:02 -0800
When libpqrcv_connect (also known as walrcv_connect()) failed, it leaked the libpq connection. In most paths that's fairly harmless, as the calling process will exit soon after. But e.g. CREATE SUBSCRIPTION could lead to a somewhat longer lived leak. Fix by releasing resources, including the libpq connection, on error. Add a test exercising the error code path. To make it reliable and safe, the test tries to connect to port=-1, which happens to fail during connection establishment, rather than during connection string parsing. Reviewed-by: Noah Misch <email@example.com> Discussion: https://firstname.lastname@example.org Backpatch: 11-
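The shape of the fix can be sketched with a stand-in resource type; in the real code the leaked resource is the libpq connection, which would be closed (e.g. with PQfinish()) on the error path instead of being abandoned:

```c
#include <stdlib.h>
#include <assert.h>

/* Stand-in for a libpq connection object; names are illustrative. */
typedef struct Conn { int bad; } Conn;

static Conn *open_conn(int should_fail)
{
    Conn *c = malloc(sizeof(Conn));
    c->bad = should_fail;
    return c;
}
static int  conn_ok(Conn *c)    { return !c->bad; }
static void close_conn(Conn *c) { free(c); }

/* Fixed shape: release the connection before reporting failure,
 * rather than returning early and leaking it. */
static Conn *connect_checked(int should_fail)
{
    Conn *c = open_conn(should_fail);
    if (!conn_ok(c))
    {
        close_conn(c);          /* was leaked before the fix */
        return NULL;
    }
    return c;
}
```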
Allow REPLICA IDENTITY to be set on an index that's not (yet) valid.
commit : 6c122eddecc1c2e702aa05d2868a3e6a0d75bda1 author : Tom Lane <email@example.com> date : Sat, 21 Jan 2023 13:10:30 -0500 committer: Tom Lane <firstname.lastname@example.org> date : Sat, 21 Jan 2023 13:10:30 -0500
The motivation for this change is that when pg_dump dumps a partitioned index that's marked REPLICA IDENTITY, it generates a command sequence that applies REPLICA IDENTITY before the partitioned index has been marked valid, causing restore to fail. We could perhaps change pg_dump to not do it like that, but that would be difficult and would not fix existing dump files with the problem. There seems to be very little reason for the backend to disallow this anyway --- the code ignores indisreplident when the index isn't valid --- so instead let's fix it by allowing the case. Commit 9511fb37a previously expressed a concern that allowing indisreplident to be set on invalid indexes might allow us to wind up in a situation where a table could have indisreplident set on multiple indexes. I'm not sure I follow that concern exactly, but in any case the only way that could happen is because relation_mark_replica_identity is too trusting about the existing set of markings being valid. Let's just rip out its early-exit code path (which sure looks like premature optimization anyway; what are we doing expending code to make redundant ALTER TABLE ... REPLICA IDENTITY commands marginally faster and not-redundant ones marginally slower?) and fix it to positively guarantee that no more than one index is marked indisreplident. The pg_dump failure can be demonstrated in all supported branches, so back-patch all the way. I chose to back-patch 9511fb37a as well, just to keep indisreplident handling the same in all branches. Per bug #17756 from Sergey Belyashov. Discussion: https://email@example.com
Reject CancelRequestPacket having unexpected length.
commit : 8f70de7e0106dc0df8b31994e2b3ad06691f5836 author : Noah Misch <firstname.lastname@example.org> date : Sat, 21 Jan 2023 06:08:00 -0800 committer: Noah Misch <email@example.com> date : Sat, 21 Jan 2023 06:08:00 -0800
When the length was too short, the server read outside the allocation. That yielded the same log noise as sending the correct length with (backendPID,cancelAuthCode) matching nothing. Change to a message about the unexpected length. Given the attacker's lack of control over the memory layout and the general lack of diversity in memory layouts at the code in question, we doubt a would-be attacker could cause a segfault. Hence, while the report arrived via firstname.lastname@example.org, this is not a vulnerability. Back-patch to v11 (all supported versions). Andrey Borodin, reviewed by Tom Lane. Reported by Andrey Borodin.
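The validation pattern described amounts to checking the declared packet length against the expected fixed size before reading any fields; a sketch with illustrative field layout (not the wire protocol's exact framing):

```c
#include <stdint.h>
#include <stddef.h>
#include <assert.h>

/* Illustrative fixed-size cancel-request payload. */
typedef struct CancelRequest
{
    int32_t backendPID;
    int32_t cancelAuthCode;
} CancelRequest;

/* Reject any packet whose declared length doesn't match the struct,
 * instead of reading past the allocation on a short packet. */
static int cancel_packet_length_ok(size_t pktlen)
{
    return pktlen == sizeof(CancelRequest);
}
```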
Make our back branches build under -fkeep-inline-functions.
commit : b69e9dfab14f3602eac6a97afaf1a593cfa34424 author : Tom Lane <email@example.com> date : Fri, 20 Jan 2023 11:58:12 -0500 committer: Tom Lane <firstname.lastname@example.org> date : Fri, 20 Jan 2023 11:58:12 -0500
Add "#ifndef FRONTEND" where necessary to make pg_waldump build on compilers that don't elide unused static-inline functions. This back-patches relevant parts of commit 3e9ca5260, fixing build breakage from dc7420c2c and back-patching of f10f0ae42. Per recently-resurrected buildfarm member castoroides. We aren't expecting castoroides to build anything newer than v11, but we might as well clean up the intermediate branches while at it.
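The pattern being back-patched looks like this (the helper is a stand-in, not a real header function): backend-only static-inline helpers in headers are wrapped so frontend programs such as pg_waldump never compile them, even under -fkeep-inline-functions, where unused inlines are emitted rather than elided.

```c
#include <assert.h>

#ifndef FRONTEND
/* Only visible to backend compilation units; a frontend build that
 * defined FRONTEND would never see the backend-only symbols this
 * kind of helper typically references. */
static inline int
backend_only_helper(void)
{
    return 42;
}
#endif
```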
Log the correct ending timestamp in recovery_target_xid mode.
commit : 0a269527f6a1151fe95efe4a8c2112a6e4b66496 author : Tom Lane <email@example.com> date : Thu, 19 Jan 2023 12:23:20 -0500 committer: Tom Lane <firstname.lastname@example.org> date : Thu, 19 Jan 2023 12:23:20 -0500
When ending recovery based on recovery_target_xid matching with recovery_target_inclusive = off, we printed an incorrect timestamp (always 2000-01-01) in the "recovery stopping before ... transaction" log message. This is a consequence of sloppy refactoring in c945af80c: the code to fetch recordXtime out of the commit/abort record used to be executed unconditionally, but it was changed to get called only in the RECOVERY_TARGET_TIME case. We need only flip the order of operations to restore the intended behavior. Per report from Torsten Förtsch. Back-patch to all supported branches. Discussion: https://postgr.es/m/CAKkG4_kUevPqbmyOfLajx7opAQk6Cvwkvx0HRcFjSPfRPTXanA@mail.gmail.com
Add missing assign hook for GUC checkpoint_completion_target
commit : 0c2f34af7e1e908d27eb34c2c3469910e43c1253 author : Michael Paquier <email@example.com> date : Thu, 19 Jan 2023 13:13:34 +0900 committer: Michael Paquier <firstname.lastname@example.org> date : Thu, 19 Jan 2023 13:13:34 +0900
This has been wrong since 88e9823, which switched the WAL sizing configuration from checkpoint_segments to min_wal_size and max_wal_size. That change missed the recalculation, on reload for example, of the internal "CheckPointSegments" value, which works as a mapping of the old GUC checkpoint_segments and controls the timing of checkpoints depending on the volume of WAL generated. Most users tend to leave checkpoint_completion_target at 0.9 to smooth the I/O workload, which is, I guess, why this has gone unnoticed for so long; still, it can be useful to tweak and reload the value dynamically in some cases to control the timing of checkpoints. Author: Bharath Rupireddy Discussion: https://postgr.es/m/CALj2ACXgPPAm28mruojSBno+F_=9cTOOxHAywu_dfZPeBdybQw@mail.gmail.com Backpatch-through: 11
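A simplified model of what the missing assign hook must do; names are hypothetical and the formula only approximates the real CheckPointSegments calculation, but the point stands: when the GUC is changed and reloaded, the derived value must be refreshed too.

```c
#include <assert.h>

static double completion_target = 0.9;  /* the GUC */
static int    check_point_segments = 0; /* derived value */

/* Derive the segment count from the WAL sizing knobs and the
 * completion target (simplified stand-in formula). */
static void recalc_segments(int max_wal_size_mb, int wal_segment_mb)
{
    check_point_segments =
        (int) (max_wal_size_mb / ((2.0 + completion_target) * wal_segment_mb));
}

/* The assign hook: store the new GUC value, then refresh the
 * derived value -- the second step is what was missing. */
static void assign_completion_target(double newval,
                                     int max_wal_size_mb, int wal_segment_mb)
{
    completion_target = newval;
    recalc_segments(max_wal_size_mb, wal_segment_mb);
}
```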
Fix failure with perlcritic in psql's create_help.pl
commit : 145bc5debf6fdcc1546196906ee57ae68d647921 author : Michael Paquier <email@example.com> date : Thu, 19 Jan 2023 10:02:15 +0900 committer: Michael Paquier <firstname.lastname@example.org> date : Thu, 19 Jan 2023 10:02:15 +0900
No buildfarm members have reported that yet, but a recently-refreshed Debian host did. Reviewed-by: Andrew Dunstan Discussion: https://postgr.es/m/Y8ey5z4Nav62g4/K@paquier.xyz Backpatch-through: 11
AdjustUpgrade.pm should zap test_ext_cine, too.
commit : c94d684bf4b04aded0560915035429ae09012a40 author : Tom Lane <email@example.com> date : Tue, 17 Jan 2023 16:00:39 -0500 committer: Tom Lane <firstname.lastname@example.org> date : Tue, 17 Jan 2023 16:00:39 -0500
test_extensions' test_ext_cine extension has the same upgrade hazard as test_ext7: the regression test leaves it in an updated state from which no downgrade path to default is provided. This causes the update_extensions.sql script helpfully provided by pg_upgrade to fail. So drop it in cross-version-upgrade testing. Not entirely sure how come I didn't hit this in testing yesterday; possibly I'd built the upgrade reference databases with testmodules-install-check disabled. Backpatch to v10 where this module was introduced.
Create common infrastructure for cross-version upgrade testing.
commit : f02a75222821d1633343c81f5a9183a6e7acee62 author : Tom Lane <email@example.com> date : Mon, 16 Jan 2023 20:35:53 -0500 committer: Tom Lane <firstname.lastname@example.org> date : Mon, 16 Jan 2023 20:35:53 -0500
To test pg_upgrade across major PG versions, we have to be able to modify or drop any old objects with no-longer-supported properties, and we have to be able to deal with cosmetic changes in pg_dump output. Up to now, the buildfarm and pg_upgrade's own test infrastructure had separate implementations of the former, and we had nothing but very ad-hoc rules for the latter (including an arbitrary threshold on how many lines of unchecked diff were okay!). This patch creates a Perl module that can be shared by both those use-cases, and adds logic that deals with pg_dump output diffs in a much more tightly defined fashion. This largely supersedes previous efforts in commits 0df9641d3, 9814ff550, and 62be9e4cd, which developed a SQL-script-based solution for the task of dropping old objects. There was nothing fundamentally wrong with that work in itself, but it had no basis for solving the output-formatting problem. The most plausible way to deal with formatting is to build a Perl module that can perform editing on the dump files; and once we commit to that, it makes more sense for the same module to also embed the knowledge of what has to be done for dropping old objects. Back-patch versions of the helper module as far as 9.2, to support buildfarm animals that still test that far back. It's also necessary to back-patch PostgreSQL/Version.pm, because the new code depends on that. I fixed up pg_upgrade's 002_pg_upgrade.pl in v15, but did not look into back-patching it further than that. Tom Lane and Andrew Dunstan Discussion: https://email@example.com
Fix WaitEventSetWait() buffer overrun.
commit : 1b40710a8c89a1b12de06e38047185b972bd6351 author : Thomas Munro <firstname.lastname@example.org> date : Fri, 13 Jan 2023 10:40:52 +1300 committer: Thomas Munro <email@example.com> date : Fri, 13 Jan 2023 10:40:52 +1300
The WAIT_USE_EPOLL and WAIT_USE_KQUEUE implementations of WaitEventSetWaitBlock() confused the size of their internal buffer with the size of the caller's output buffer, and could ask the kernel for too many events. In fact the set of events retrieved from the kernel needs to be able to fit in both buffers, so take the smaller of the two. The WAIT_USE_POLL and WAIT_USE_WIN32 implementations didn't have this confusion. This probably didn't come up before because we always used the same number in both places, but commit 7389aad6 calculates a dynamic size at construction time, while using MAXLISTEN for its output event buffer on the stack. That seems like a reasonable thing to want to do, so consider this to be a pre-existing bug worth fixing. As discovered by valgrind on skink. Back-patch to all supported releases for epoll, and to release 13 for the kqueue part, which copied the incorrect epoll code. Reviewed-by: Andres Freund <firstname.lastname@example.org> Discussion: https://postgr.es/m/901504.1673504836%40sss.pgh.pa.us
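The core of the fix reduces to requesting no more events than fit in either buffer, i.e. the minimum of the two capacities. A trivial sketch (illustrative names):

```c
#include <assert.h>

/* The count of events asked of the kernel must fit in both the wait
 * set's internal buffer and the caller's output buffer. */
static int events_to_request(int internal_cap, int caller_cap)
{
    return internal_cap < caller_cap ? internal_cap : caller_cap;
}
```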
Fix tab completion of ALTER FUNCTION/PROCEDURE/ROUTINE ... SET SCHEMA.
commit : c54b888700814c5075f35995e4c0e32335e27d23 author : Dean Rasheed <email@example.com> date : Fri, 6 Jan 2023 11:09:56 +0000 committer: Dean Rasheed <firstname.lastname@example.org> date : Fri, 6 Jan 2023 11:09:56 +0000
The ALTER DATABASE|FUNCTION|PROCEDURE|ROLE|ROUTINE|USER ... SET <name> case in psql tab completion failed to exclude <name> = "SCHEMA", which caused ALTER FUNCTION|PROCEDURE|ROUTINE ... SET SCHEMA to complete with "FROM CURRENT" and "TO", which won't work. Fix that, so that those cases now complete with the list of schemas, like other ALTER ... SET SCHEMA commands. Noticed while testing the recent patch to improve tab completion for ALTER FUNCTION/PROCEDURE/ROUTINE, but this is not directly related to that patch. Rather, this is a long-standing bug, so back-patch to all supported branches. Discussion: https://postgr.es/m/CALDaNm0s7GQmkLP_mx5Cvk=UzYMnjhPmXBxU8DsHEunFbC5sTg@mail.gmail.com
Improve documentation of the CREATEROLE attribute.
commit : 0b496bc9881fdcb3c20aeaa08bcdc196b8ec6946 author : Robert Haas <email@example.com> date : Tue, 3 Jan 2023 14:50:40 -0500 committer: Robert Haas <firstname.lastname@example.org> date : Tue, 3 Jan 2023 14:50:40 -0500
In user-manag.sgml, document precisely what privileges are conveyed by CREATEROLE. Make particular note of the fact that it allows changing passwords and granting access to high-privilege roles. Also remove the suggestion of using a user with CREATEROLE and CREATEDB instead of a superuser, as there is no real security advantage to this approach. Elsewhere in the documentation, adjust text that suggests that <literal>CREATEROLE</literal> only allows for role creation, and refer to the documentation in user-manag.sgml as appropriate. Patch by me, reviewed by Álvaro Herrera Discussion: http://postgr.es/m/CA+TgmoZBsPL8nPhvYecx7iGo5qpDRqa9k_AcaW1SbOjugAY1Ag@mail.gmail.com
Fix typos in comments, code and documentation
commit : a80740a7c9792a622e5ebb75010ea48a5521bf56 author : Michael Paquier <email@example.com> date : Tue, 3 Jan 2023 16:26:38 +0900 committer: Michael Paquier <firstname.lastname@example.org> date : Tue, 3 Jan 2023 16:26:38 +0900
While on it, newlines are removed from the end of two elog() strings. The others are simple grammar mistakes. One comment in pg_upgrade referred incorrectly to sequences since a7e5457. Author: Justin Pryzby Discussion: https://postgr.es/m/20221230231257.GI1153@telsasoft.com Backpatch-through: 11
perl: Hide warnings inside perl.h when using gcc compatible compiler
commit : 99f8bc335cbed43d64ed1e440a7fa43a8746a683 author : Andres Freund <email@example.com> date : Thu, 29 Dec 2022 12:47:29 -0800 committer: Andres Freund <firstname.lastname@example.org> date : Thu, 29 Dec 2022 12:47:29 -0800
New versions of perl trigger warnings within perl.h with our compiler flags. At least -Wdeclaration-after-statement and -Wshadow=compatible-local are known to be problematic. To avoid these warnings, conditionally use #pragma GCC system_header before including plperl.h. Alternatively, we could add the include paths for problematic headers with -isystem, but that is a larger hammer and is harder to search for. A more granular alternative would be to use #pragma GCC diagnostic push/ignored/pop, but gcc warns about unknown warnings being ignored, so every to-be-ignored-temporarily compiler warning would require its own pg_config.h symbol and #ifdef. As the warnings are voluminous, it makes sense to backpatch this change. But don't do so yet; we first want to gather buildfarm coverage - it's e.g. possible that some compiler claiming to be gcc compatible has issues with the pragma. Author: Andres Freund <email@example.com> Reviewed-by: Tom Lane <firstname.lastname@example.org> Discussion: https://email@example.com
Avoid reference to nonexistent array element in ExecInitAgg().
commit : 982b9b1eba8d809cd677009d15ca045cea890c69 author : Tom Lane <firstname.lastname@example.org> date : Mon, 2 Jan 2023 16:17:00 -0500 committer: Tom Lane <email@example.com> date : Mon, 2 Jan 2023 16:17:00 -0500
When considering an empty grouping set, we fetched phasedata->eqfunctions[-1]. Because the eqfunctions array is palloc'd, that would always be an aset pointer in released versions, and thus the code accidentally failed to malfunction (since it would do nothing unless it found a null pointer). Nonetheless this seems like trouble waiting to happen, so add a check for length == 0. It's depressing that our valgrind testing did not catch this. Maybe we should reconsider the choice to not mark that word NOACCESS? Richard Guo Discussion: https://postgr.es/m/CAMbWs4-vZuuPOZsKOYnSAaPYGKhmacxhki+vpOKk0O7rymccXQ@mail.gmail.com
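The guard added can be sketched like this (hypothetical signature, not ExecInitAgg's actual code): check for a zero-length grouping-column list before indexing the last element, instead of reading element -1 for an empty grouping set.

```c
#include <stddef.h>
#include <assert.h>

/* Return the comparator for the last grouping column, or NULL for an
 * empty grouping set -- the length == 0 check is the fix. */
static const void *last_eqfunc(const void **eqfunctions, int length)
{
    if (length == 0)
        return NULL;            /* previously read eqfunctions[-1] */
    return eqfunctions[length - 1];
}
```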
Update copyright for 2023
commit : ef3de5557687bdb60ab1a810c85a27094450f529 author : Bruce Momjian <firstname.lastname@example.org> date : Mon, 2 Jan 2023 15:00:36 -0500 committer: Bruce Momjian <email@example.com> date : Mon, 2 Jan 2023 15:00:36 -0500
Fix some incorrect elog() messages in aclchk.c
commit : df6fea51f01d38a427ed7c5d2e13c501f0beb4f0 author : Michael Paquier <firstname.lastname@example.org> date : Fri, 23 Dec 2022 10:04:37 +0900 committer: Michael Paquier <email@example.com> date : Fri, 23 Dec 2022 10:04:37 +0900
Three error strings used with cache lookup failures were referring to incorrect object types for ACL checks: - Schemas - Types - Foreign Servers These errors should never be triggered, but if they were, incorrect information would be reported. Author: Justin Pryzby Discussion: https://postgr.es/m/20221222153041.GN1153@telsasoft.com Backpatch-through: 11
Add some recursion and looping defenses in prepjointree.c.
commit : 8cd700cc5a676282912c7080cfa142977a2dd851 author : Tom Lane <firstname.lastname@example.org> date : Thu, 22 Dec 2022 10:35:03 -0500 committer: Tom Lane <email@example.com> date : Thu, 22 Dec 2022 10:35:03 -0500
Andrey Lepikhov demonstrated a case where we spend an unreasonable amount of time in pull_up_subqueries(). Not only is that recursing with no explicit check for stack overrun, but the code seems not interruptible by control-C. Let's stick a CHECK_FOR_INTERRUPTS there, along with sprinkling some stack depth checks. An actual fix for the excessive time consumption seems a bit risky to back-patch; but this isn't, so let's do so. Discussion: https://firstname.lastname@example.org
Fix contrib/seg to be more wary of long input numbers.
commit : 0ff4056b8ce994e2932260c9f194675769b3d2e5 author : Tom Lane <email@example.com> date : Wed, 21 Dec 2022 17:51:50 -0500 committer: Tom Lane <firstname.lastname@example.org> date : Wed, 21 Dec 2022 17:51:50 -0500
seg stores the number of significant digits in an input number in a "char" field. If char is signed, and the input is more than 127 digits long, the count can read out as negative causing seg_out() to print garbage (or, if you're really unlucky, even crash). To fix, clamp the digit count to be not more than FLT_DIG. (In theory this loses some information about what the original input was, but it doesn't seem like useful information; it would not survive dump/restore in any case.) Also, in case there are stored values of the seg type containing bad data, add a clamp in seg_out's restore() subroutine. Per bug #17725 from Robins Tharakan. It's been like this forever, so back-patch to all supported branches. Discussion: https://email@example.com
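The clamp can be sketched as follows (illustrative function name): cap the count at FLT_DIG before it is narrowed to a plain "char", so an input with more than 127 digits cannot wrap to a negative count on platforms where char is signed.

```c
#include <float.h>
#include <assert.h>

/* Clamp a significant-digit count before storing it in a char field. */
static char clamp_sig_digits(int ndigits)
{
    if (ndigits > FLT_DIG)
        ndigits = FLT_DIG;
    return (char) ndigits;
}
```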
Rethink handling of [Prevent|Is]InTransactionBlock in pipeline mode.
commit : f48aa5df4e030ab75bdc2ca5fc480c4a830cf5f3 author : Tom Lane <firstname.lastname@example.org> date : Tue, 13 Dec 2022 14:23:59 -0500 committer: Tom Lane <email@example.com> date : Tue, 13 Dec 2022 14:23:59 -0500
Commits f92944137 et al. made IsInTransactionBlock() set the XACT_FLAGS_NEEDIMMEDIATECOMMIT flag before returning "false", on the grounds that that kept its API promises equivalent to those of PreventInTransactionBlock(). This turns out to be a bad idea though, because it allows an ANALYZE in a pipelined series of commands to cause an immediate commit, which is unexpected. Furthermore, if we return "false" then we have another issue, which is that ANALYZE will decide it's allowed to do internal commit-and-start-transaction sequences, thus possibly unexpectedly committing the effects of previous commands in the pipeline. To fix the latter situation, invent another transaction state flag XACT_FLAGS_PIPELINING, which explicitly records the fact that we have executed some extended-protocol command and not yet seen a commit for it. Then, require that flag to not be set before allowing IsInTransactionBlock() to return "false". Having done that, we can remove its setting of NEEDIMMEDIATECOMMIT without fear of causing problems. This means that the API guarantees of IsInTransactionBlock now diverge from PreventInTransactionBlock, which is mildly annoying, but it seems OK given the very limited usage of IsInTransactionBlock. (In any case, a caller preferring the old behavior could always set NEEDIMMEDIATECOMMIT for itself.) For consistency also require XACT_FLAGS_PIPELINING to not be set in PreventInTransactionBlock. This too is meant to prevent commands such as CREATE DATABASE from silently committing previous commands in a pipeline. Per report from Peter Eisentraut. As before, back-patch to all supported branches (which sadly no longer includes v10). Discussion: https://firstname.lastname@example.org
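A minimal model of the new flag rule (flag values and the helper are illustrative, not xact.h's actual definitions): with a pipelining bit recording that an extended-protocol command has executed without a commit yet, "not in a transaction block" may be reported only when that bit is clear.

```c
#include <assert.h>

#define XACT_FLAGS_PIPELINING       0x01    /* extended-protocol cmd pending commit */
#define XACT_FLAGS_EXPLICIT_BLOCK   0x02    /* stand-in for other in-block states */

/* May only return 0 ("false") when no pipelined command is pending,
 * so callers can't be tricked into committing mid-pipeline. */
static int is_in_transaction_block(int xact_flags)
{
    return (xact_flags & (XACT_FLAGS_PIPELINING | XACT_FLAGS_EXPLICIT_BLOCK)) != 0;
}
```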
doc: Add missing <varlistentry> markups for developer GUCs
commit : 990a773ab6a807f3436d86b2825435f1274a8187 author : Michael Paquier <email@example.com> date : Mon, 5 Dec 2022 11:23:36 +0900 committer: Michael Paquier <firstname.lastname@example.org> date : Mon, 5 Dec 2022 11:23:36 +0900
Missing such markups makes it impossible to create links back to these GUCs, and all the other parameters have one already. Author: Ian Lawrence Barwick Discussion: https://postgr.es/m/CAB8KJ=jx=6dFB_EN3j0UkuvG3cPu5OmQiM-ZKRAz+fKvS+u8Ng@mail.gmail.com Backpatch-through: 11
Fix generate_partitionwise_join_paths() to tolerate failure.
commit : 2df073313934a3787ec0e42dde4a06a879ae5e3a author : Tom Lane <email@example.com> date : Sun, 4 Dec 2022 13:17:18 -0500 committer: Tom Lane <firstname.lastname@example.org> date : Sun, 4 Dec 2022 13:17:18 -0500
We might fail to generate a partitionwise join, because reparameterize_path_by_child() does not support all path types. This should not be a hard failure condition: we should just fall back to a non-partitioned join. However, generate_partitionwise_join_paths did not consider this possibility and would emit the (misleading) error "could not devise a query plan for the given query" if we'd failed to make any paths for a child join. Fix it to give up on partitionwise joining if so. (The accepted technique for giving up appears to be to set rel->nparts = 0, which I find pretty bizarre, but there you have it.) I have not added a test case because there'd be little point: any omissions of this sort that we identify would soon get fixed by extending reparameterize_path_by_child(), so the test would stop proving anything. However, right now there is a known test case based on failure to cover MaterialPath, and with that I've found that this is broken in all supported versions. Hence, patch all the way back. Original report and patch by me; thanks to Richard Guo for identifying a test case that works against committed versions. Discussion: https://email@example.com
Fix DEFAULT handling for multi-row INSERT rules.
commit : 30f9b03a08bde2c4235ef5f1812ae194b32fb7e3 author : Dean Rasheed <firstname.lastname@example.org> date : Sat, 3 Dec 2022 12:20:02 +0000 committer: Dean Rasheed <email@example.com> date : Sat, 3 Dec 2022 12:20:02 +0000
When updating a relation with a rule whose action performed an INSERT from a multi-row VALUES list, the rewriter might skip processing the VALUES list, and therefore fail to replace any DEFAULTs in it. This would lead to an "unrecognized node type" error. The reason was that RewriteQuery() assumed that a query doing an INSERT from a multi-row VALUES list would necessarily only have one item in its fromlist, pointing to the VALUES RTE to read from. That assumption is correct for the original query, but not for product queries produced for rule actions. In such cases, there may be multiple items in the fromlist, possibly including multiple VALUES RTEs. What is required instead is for RewriteQuery() to skip any RTEs from the product query's originating query, which might include one or more already-processed VALUES RTEs. What's left should then include at most one VALUES RTE (from the rule action) to be processed. Patch by me. Thanks to Tom Lane for reviewing. Back-patch to all supported branches. Discussion: https://postgr.es/m/CAEZATCV39OOW7LAR_Xq4i%2BLc1Byux%3DeK3Q%3DHD_pF1o9LBt%3DphA%40mail.gmail.com
Prevent pgstats from getting confused when relkind of a relation changes
commit : af3517c15c8412a63ba763571425573337d0e42d author : Andres Freund <firstname.lastname@example.org> date : Fri, 2 Dec 2022 17:50:26 -0800 committer: Andres Freund <email@example.com> date : Fri, 2 Dec 2022 17:50:26 -0800
When the relkind of a relcache entry changes, because a table is converted into a view, pgstats can get confused in 15+, leading to crashes or assertion failures. For HEAD, Tom fixed this in b23cd185fd5, by removing support for converting a table to a view, removing the source of the inconsistency. This commit just adds an assertion that a relcache entry's relkind does not change, just in case we end up with another case of that in the future. As there are no cases of changing relkind anymore, we can't add a test verifying that it's handled correctly. For 15, fix the problem by not maintaining the association with the old pgstat entry when the relkind changes during relcache invalidation processing. In that case the pgstat entry needs to be unlinked first, to avoid PgStat_TableStatus->relation getting out of sync. Also add a test reproducing the issues. No known problem exists in 11-14, so just add the test there. Reported-by: vignesh C <firstname.lastname@example.org> Author: Andres Freund <email@example.com> Reviewed-by: Tom Lane <firstname.lastname@example.org> Discussion: https://postgr.es/m/CALDaNm2yXz+zOtv7y5zBd5WKT8O0Ld3YxikuU3dcyCvxF7gypA@mail.gmail.com Discussion: https://postgr.es/m/CALDaNm3oZA-8Wbps2Jd1g5_Gjrr-x3YWrJPek-mF5Asrrvz2Dg@mail.gmail.com Backpatch: 15-
revert: add transaction processing chapter with internals info
commit : 01e248f7966c0678645a5189bd2405218a7a5929 author : Bruce Momjian <email@example.com> date : Thu, 1 Dec 2022 10:45:07 -0500 committer: Bruce Momjian <firstname.lastname@example.org> date : Thu, 1 Dec 2022 10:45:07 -0500
This doc patch (master hash 66bc9d2d3e) was deemed too significant for backpatching, so it is reverted in all branches but master. Also fix the SGML file header comment in master. Reported-by: Peter Eisentraut Discussion: https://email@example.com Backpatch-through: 11
Reject missing database name in pg_regress and cohorts.
commit : 04f3acc97e426c5df1e31cd51b86fc1a93ca77dc author : Tom Lane <firstname.lastname@example.org> date : Wed, 30 Nov 2022 13:01:41 -0500 committer: Tom Lane <email@example.com> date : Wed, 30 Nov 2022 13:01:41 -0500
Writing "pg_regress --dbname= ..." led to a crash, because we weren't expecting there to be no database name supplied. It doesn't seem like a great idea to run regression tests in whatever is the user's default database; so rather than supporting this case let's explicitly reject it. Per report from Xing Guo. Back-patch to all supported branches. Discussion: https://postgr.es/m/CACpMh+A8cRvtvtOWVAZsCM1DU81GK4DL26R83y6ugZ1osV=ifA@mail.gmail.com
doc: add transaction processing chapter with internals info
commit : 80e591676bf374c632cfec1b61fde0ca468988f8 author : Bruce Momjian <firstname.lastname@example.org> date : Tue, 29 Nov 2022 20:49:51 -0500 committer: Bruce Momjian <email@example.com> date : Tue, 29 Nov 2022 20:49:51 -0500
This also adds references to this new chapter at relevant sections of our documentation. Previously, many of these internal details were exposed to users but not explained. This also updates RELEASE SAVEPOINT. Discussion: https://postgr.es/m/CANbhV-E_iy9fmrErxrCh8TZTyenpfo72Hf_XD2HLDppva4dUNA@mail.gmail.com Author: Simon Riggs, Laurenz Albe Reviewed-by: Bruce Momjian Backpatch-through: 11
Fix comment in fe-auth-scram.c
commit : a282d5f034d7ab9a285625acfcfd32d9b443bc36 author : Michael Paquier <firstname.lastname@example.org> date : Wed, 30 Nov 2022 08:38:40 +0900 committer: Michael Paquier <email@example.com> date : Wed, 30 Nov 2022 08:38:40 +0900
The frontend-side routine in charge of building a SCRAM verifier claimed that the restrictions SASLprep applies to the password encoding are described at the top of fe-auth-scram.c, but this information is actually in auth-scram.c. This has been wrong since 8f8b9be, so backpatch all the way down as this is an important documentation bit. Spotted while reviewing a different patch. Backpatch-through: 11
Improve heuristics for compressing the KnownAssignedXids array.
commit : a6c9e1db2b6673100138f5b9f1ba31d55de7af9a author : Tom Lane <firstname.lastname@example.org> date : Tue, 29 Nov 2022 15:43:17 -0500 committer: Tom Lane <email@example.com> date : Tue, 29 Nov 2022 15:43:17 -0500
Previously, we'd compress only when the active range of array entries reached Max(4 * PROCARRAY_MAXPROCS, 2 * pArray->numKnownAssignedXids). If max_connections is large, the first term could result in not compressing for a long time, resulting in much wastage of cycles in hot-standby backends scanning the array to take snapshots. Get rid of that term, and just bound it to 2 * pArray->numKnownAssignedXids. That however creates the opposite risk, that we might spend too much effort compressing. Hence, consider compressing only once every 128 commit records. (This frequency was chosen by benchmarking. While we only tried one benchmark scenario, the results seem stable over a fairly wide range of frequencies.) Also, force compression when processing RecoveryInfo WAL records (which should be infrequent); the old code could perform compression then, but would do so only after the same array-range check as for the transaction-commit path. Also, opportunistically run compression if the startup process is about to wait for WAL, though not oftener than once a second. This should prevent cases where we waste lots of time by leaving the array not-compressed for long intervals due to low WAL traffic. Lastly, add a simple check to keep us from uselessly compressing when the array storage is already compact. Back-patch, as the performance problem is worse in pre-v14 branches than in HEAD. Simon Riggs and Michail Nikolaev, with help from Tom Lane and Andres Freund. Discussion: https://postgr.es/m/CALdSSPgahNUD_=pB_j=1zSnDBaiOtqVfzo8Ejt5J_k7qZiU1Tw@mail.gmail.com
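The combined heuristics above can be summarized in a small decision function. This is a toy Python model, not the real implementation (the actual code is C in procarray.c and the names here are invented); the constants follow the commit text.

```python
COMPRESS_INTERVAL = 128  # consider compression only every 128th commit record

def should_compress(reason, active_range, num_xids, commits_since_last):
    """Decide whether to compress the KnownAssignedXids array.

    active_range is the span of array slots currently in use; num_xids
    is the number of live entries within that span."""
    # Never compress when storage is already compact.
    if active_range <= num_xids:
        return False
    if reason == "force":    # e.g. processing a RecoveryInfo WAL record
        return True
    if reason == "idle":     # startup process about to wait for WAL
        return True          # (the real code also throttles this to once a second)
    # Transaction-commit path: check only every 128 commit records, and
    # compress only once the range exceeds twice the live-entry count.
    return commits_since_last >= COMPRESS_INTERVAL and active_range > 2 * num_xids
```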
Fix binary mismatch for MSVC plperl vs gcc built perl libs
commit : 724dd5649079a38193793c00292419917969effb author : Andrew Dunstan <firstname.lastname@example.org> date : Sun, 27 Nov 2022 09:03:22 -0500 committer: Andrew Dunstan <email@example.com> date : Sun, 27 Nov 2022 09:03:22 -0500
When loading a plperl built against Strawberry perl or the msys2 ucrt perl, both of which are built with gcc, a binary mismatch is encountered that looks like this: loadable library and perl binaries are mismatched (got handshake key 0000000012800080, needed 0000000012900080) To cure this we bring the handshake keys into sync by adding NO_THREAD_SAFE_LOCALE to the defines used to build plperl. Discussion: https://firstname.lastname@example.org Discussion: https://email@example.com Backpatch to all live branches.
Remove temporary portlock directory during make [dist]clean.
commit : b85fd738529b345666c4a94ce837327a2f86d75e author : Tom Lane <firstname.lastname@example.org> date : Sat, 26 Nov 2022 10:30:31 -0500 committer: Tom Lane <email@example.com> date : Sat, 26 Nov 2022 10:30:31 -0500
Another oversight in 9b4eafcaf.
Add portlock directory to .gitignore
commit : 9d3f29d990f0dd404c5ec6406ef11e9c5fed901f author : Andrew Dunstan <firstname.lastname@example.org> date : Sat, 26 Nov 2022 07:44:23 -0500 committer: Andrew Dunstan <email@example.com> date : Sat, 26 Nov 2022 07:44:23 -0500
Commit 9b4eafcaf4 added creation of a directory at the top of the build tree to reserve TAP test ports. In a non-vpath build this means the top of the source tree, so it needs to be added to .gitignore. As suggested by Michael Paquier. Backpatch to all live branches.
Allow building with MSVC and Strawberry perl
commit : ae7c5121307d5c8b92b4e0ac63ec2e8635d8c076 author : Andrew Dunstan <firstname.lastname@example.org> date : Fri, 25 Nov 2022 15:28:38 -0500 committer: Andrew Dunstan <email@example.com> date : Fri, 25 Nov 2022 15:28:38 -0500
Strawberry uses __builtin_expect which Visual C doesn't have. For this case define it as a noop. Solution taken from vim sources. Backpatch to all live branches
Fix uninitialized access to InitialRunningXacts during decoding.
commit : 9b788aafdc64531388e0f2c0f064e946b8aa6378 author : Amit Kapila <firstname.lastname@example.org> date : Fri, 25 Nov 2022 08:56:54 +0530 committer: Amit Kapila <email@example.com> date : Fri, 25 Nov 2022 08:56:54 +0530
In commit 272248a0c, we introduced an InitialRunningXacts array to remember transactions and subtransactions that were running when the xl_running_xacts record that we decoded was written. This array was allocated in the snapshot builder memory context after we restore a serialized snapshot, but we forgot to reset the array when freeing the builder memory context. So the next time we start decoding in the same session, without restoring any serialized snapshot, we end up using the uninitialized array, which can lead to unpredictable behavior. This problem doesn't exist in HEAD as, instead of using InitialRunningXacts, we added the list of transaction IDs and sub-transaction IDs that have modified catalogs and are running during snapshot serialization to the serialized snapshot (see commit 7f13ac8123). Reported-by: Maxim Orlov Author: Masahiko Sawada Reviewed-by: Amit Kapila, Maxim Orlov Backpatch-through: 11 Discussion: https://postgr.es/m/CACG=ezZoz_KG+Ryh9MrU_g5e0HiVoHocEvqFF=NRrhrwKmEQJQ@mail.gmail.com
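The dangling-pointer pattern above can be sketched abstractly. This is a toy Python model, not the actual fix (the real code is C in the snapshot builder, and these names are illustrative): the point is that a pointer into a memory context must be reset when that context is freed, so a later run in the same session cannot pick up stale contents.

```python
class SnapshotBuilder:
    def __init__(self):
        self.context = []                  # stands in for the builder memory context
        self.initial_running_xacts = None  # cached pointer into that context

    def restore_serialized_snapshot(self, xids):
        # Allocate the array inside the builder context.
        self.initial_running_xacts = list(xids)
        self.context.append(self.initial_running_xacts)

    def free_context(self):
        self.context.clear()
        # The fix: also reset the cached pointer, so the next decoding
        # run cannot see conceptually uninitialized data.
        self.initial_running_xacts = None
```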
Make multixact error message more explicit
commit : b20e381422089f66d2cfa7a6286c8053006d3afb author : Alvaro Herrera <firstname.lastname@example.org> date : Thu, 24 Nov 2022 10:45:10 +0100 committer: Alvaro Herrera <email@example.com> date : Thu, 24 Nov 2022 10:45:10 +0100
There are recent reports involving a very old error message that we have no history of hitting -- perhaps a recently introduced bug. Improve the error message in an attempt to improve our chances of investigating the bug. Per reports from Dimos Stamatakis and Bob Krier. Backpatch to 11. Discussion: https://postgr.es/m/CO2PR0801MB2310579F65529380A4E5EDC0E20A9@CO2PR0801MB2310.namprd08.prod.outlook.com Discussion: https://firstname.lastname@example.org
Fix perl warning from commit 9b4eafcaf4
commit : 2f92b8ad3121d9f23c48afbd3a60a2592cf90ba0 author : Andrew Dunstan <email@example.com> date : Wed, 23 Nov 2022 07:17:26 -0500 committer: Andrew Dunstan <firstname.lastname@example.org> date : Wed, 23 Nov 2022 07:17:26 -0500
per gripe from Andres Freund and Tom Lane Backpatch to all live branches.
YA attempt at taming worst-case behavior of get_actual_variable_range.
commit : b96a096dbc2bdee3cce05f0230e87ae40c641491 author : Tom Lane <email@example.com> date : Tue, 22 Nov 2022 14:40:20 -0500 committer: Tom Lane <firstname.lastname@example.org> date : Tue, 22 Nov 2022 14:40:20 -0500
We've made multiple attempts at preventing get_actual_variable_range from taking an unreasonable amount of time (3ca930fc3, fccebe421). But there's still an issue for the very first planning attempt after deletion of a large number of extremal-valued tuples. While that planning attempt will set "killed" bits on the tuples it visits and thereby reduce effort for next time, there's still a lot of work it has to do to visit the heap and then set those bits. It's (usually?) not worth it to do that much work at plan time to have a slightly better estimate, especially in a context like this where the table contents are known to be mutating rapidly. Therefore, let's bound the amount of work to be done by giving up after we've visited 100 heap pages. Giving up just means we'll fall back on the extremal value recorded in pg_statistic, so it shouldn't mean that planner estimates suddenly become worthless. Note that this means we'll still gradually whittle down the problem by setting a few more index "killed" bits in each planning attempt; so eventually we'll reach a good state (barring further deletions), even in the absence of VACUUM. Simon Riggs, per a complaint from Jakub Wartak (with cosmetic adjustments by me). Back-patch to all supported branches. Discussion: https://postgr.es/m/CAKZiRmznOwi0oaV=4PHOCM4ygcH4MgSvt8=5cu_vNCfc8FSUug@mail.gmail.com
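The bounded scan above can be modeled in a few lines. This is a toy Python sketch, not the planner code (the real logic is C in the selectivity-estimation code and operates on index and heap pages; the names here are invented). Each entry is (value, heap_page, is_dead): the scan stops after visiting 100 distinct heap pages and falls back on the pg_statistic extremum.

```python
MAX_HEAP_PAGES = 100  # give up after visiting this many heap pages

def actual_variable_range(index_entries, stats_extremum):
    pages_visited = set()
    for value, heap_page, is_dead in index_entries:
        pages_visited.add(heap_page)
        if len(pages_visited) > MAX_HEAP_PAGES:
            return stats_extremum   # give up; the stats value is good enough
        if not is_dead:
            return value            # found a live extremal tuple
    return stats_extremum
```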
Prevent port collisions between concurrent TAP tests
commit : 46def5267cf938e6031d82aece2d50b3f6a17de4 author : Andrew Dunstan <email@example.com> date : Tue, 22 Nov 2022 10:35:04 -0500 committer: Andrew Dunstan <firstname.lastname@example.org> date : Tue, 22 Nov 2022 10:35:04 -0500
Currently there is a race condition where if concurrent TAP tests both test that they can open a port they will assume that it is free and use it, causing one of them to fail. To prevent this we record a reservation using an exclusive lock, and any TAP test that discovers a reservation checks to see if the reserving process is still alive, and looks for another free port if it is. Ports are reserved in a directory set by the environment setting PG_TEST_PORT_DIR, or if that doesn't exist a subdirectory of the top build directory as set by Makefile.global, or its own tmp_check directory. The prove_check recipe in Makefile.global.in is extended to export top_builddir to the TAP tests. This was already exported by the prove_installcheck recipes. Per complaint from Andres Freund Backpatched from 9b4eafcaf4 to all live branches Discussion: https://email@example.com
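The reservation scheme above can be sketched as follows. This is a toy Python model, not the actual implementation (the real code is Perl in the TAP test infrastructure, and it takes an exclusive lock while creating the reservation; this sketch omits that locking step). A port is considered taken only if a reservation file exists and the process recorded in it is still alive.

```python
import os

def pid_alive(pid):
    try:
        os.kill(pid, 0)   # signal 0: existence check only, sends nothing
        return True
    except OSError:
        return False

def reserve_port(portdir, candidates):
    os.makedirs(portdir, exist_ok=True)
    for port in candidates:
        path = os.path.join(portdir, str(port))
        if os.path.exists(path):
            with open(path) as f:
                pid = int(f.read() or "0")
            if pid and pid_alive(pid):
                continue                # still reserved by a live test
        with open(path, "w") as f:      # record (or take over) the reservation
            f.write(str(os.getpid()))
        return port
    raise RuntimeError("no free port found")
```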
doc: Fix description of pg_stat_all_tables.n_tup_upd
commit : aa1fe772c9dbdf54bae6a3cbdb31625cb0579a9e author : Michael Paquier <firstname.lastname@example.org> date : Tue, 22 Nov 2022 09:16:02 +0900 committer: Michael Paquier <email@example.com> date : Tue, 22 Nov 2022 09:16:02 +0900
Issue caused by an incorrect merge done in f507895. This issue only impacts v11 and v12. Author: Guillaume Lelarge Discussion: https://postgr.es/m/CAECtzeUAL3qoebLBDnn2DfWYS0Kww-yqDicQQ3r+JS5Yu1n6FA@mail.gmail.com Backpatch-through: 11
Replace link to Hunspell with the current homepage
commit : 926ba562ef4f85e922aef6ee64a5209fe2d00d54 author : Daniel Gustafsson <firstname.lastname@example.org> date : Mon, 21 Nov 2022 23:25:48 +0100 committer: Daniel Gustafsson <email@example.com> date : Mon, 21 Nov 2022 23:25:48 +0100
The Hunspell project moved from Sourceforge to Github sometime in 2016, so update our links to match the new URL. Backpatch the doc changes to all supported versions. Discussion: https://postgr.es/m/DC9A662A-360D-4125-A453-5A6CB9C6C4B4@yesql.se Backpatch-through: v11
Add comments and a missing CHECK_FOR_INTERRUPTS in ts_headline.
commit : c0eed88914c0986e0c64c9d0944c89e1c5bdb1e3 author : Tom Lane <firstname.lastname@example.org> date : Mon, 21 Nov 2022 17:07:07 -0500 committer: Tom Lane <email@example.com> date : Mon, 21 Nov 2022 17:07:07 -0500
I just spent an annoying amount of time reverse-engineering the 100%-undocumented API between ts_headline and the text search parser's prsheadline function. Add some commentary about that while it's fresh in mind. Also remove some unused macros in wparser_def.c. While at it, I noticed that when commit 78e73e875 added a CHECK_FOR_INTERRUPTS call in TS_execute_recurse, it missed doing so in the parallel function TS_phrase_execute, which surely needs one just as much. Back-patch because of the missing CHECK_FOR_INTERRUPTS. Might as well back-patch the rest of this too.
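The shape of the interrupt fix can be sketched abstractly. This is a toy Python model, not the real evaluator (the actual code is C, and these names are illustrative): a recursive tsquery evaluator must poll for interrupts at every level of recursion so that pathological queries remain cancellable.

```python
def ts_phrase_execute(node, check_for_interrupts):
    check_for_interrupts()   # poll at every recursion level; this is
                             # the call the parallel function was missing
    if node["kind"] == "value":
        return node["match"]
    results = [ts_phrase_execute(child, check_for_interrupts)
               for child in node["children"]]
    return all(results) if node["kind"] == "and" else any(results)
```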
Fix mislabeling of PROC_QUEUE->links as PGPROC, fixing UBSan on 32bit
commit : 140c80372327fb7723859d982b3efda754b266a7 author : Andres Freund <firstname.lastname@example.org> date : Wed, 16 Nov 2022 20:00:59 -0800 committer: Andres Freund <email@example.com> date : Wed, 16 Nov 2022 20:00:59 -0800
ProcSleep() used a PGPROC* variable to point to PROC_QUEUE->links.next, because that does "the right thing" with SHMQueueInsertBefore(). While that largely works, it's certainly not correct, and it is unnecessary - we can just use SHM_QUEUE* to point to the insertion point. Noticed when testing a 32bit build of postgres with the undefined behavior sanitizer. UBSan noticed that sometimes the supposed PGPROC wasn't sufficiently aligned (required since 46d6e5f5679, ensured indirectly, via ShmemAllocRaw() guaranteeing cacheline alignment). For now fix this by using a SHM_QUEUE* for the insertion point. Subsequently we should replace all the use of PROC_QUEUE and SHM_QUEUE with ilist.h, but that's a larger change that we don't want to backpatch. Backpatch to all supported versions - it's useful to be able to run postgres under UBSan. Reviewed-by: Tom Lane <firstname.lastname@example.org> Discussion: https://email@example.com Backpatch: 11-
Doc: sync src/tutorial/basics.source with SGML documentation.
commit : 8687cee44d003acaeb2bca908275d04092741e88 author : Tom Lane <firstname.lastname@example.org> date : Sat, 19 Nov 2022 13:09:14 -0500 committer: Tom Lane <email@example.com> date : Sat, 19 Nov 2022 13:09:14 -0500
basics.source is supposed to be pretty closely in step with the examples in chapter 2 of the tutorial, but I forgot to update it in commit f05a5e000. Fix that, and adjust a couple of other discrepancies that had crept in over time. (I notice that advanced.source is nowhere near being in sync with chapter 3, but I lack the ambition to do something about that right now.)
pg_dump: avoid unsafe function calls in getPolicies().
commit : b7333e826955092776108dfdb7cd358beeeca138 author : Tom Lane <firstname.lastname@example.org> date : Sat, 19 Nov 2022 12:00:27 -0500 committer: Tom Lane <email@example.com> date : Sat, 19 Nov 2022 12:00:27 -0500
getPolicies() had the same disease I fixed in other places in commit e3fcbbd62, i.e., it was calling pg_get_expr() for expressions on tables that we don't necessarily have lock on. To fix, restrict the query to only collect interesting rows, rather than doing the filtering on the client side. Back-patch of commit 3e6e86abc. That's been in v15/HEAD long enough to have some confidence about it, so now let's fix the problem in older branches. Discussion: https://firstname.lastname@example.org Discussion: https://postgr.es/m/7d7eb6128f40401d81b3b7a898b6b4de@W2012-02.nidsa.loc Discussion: https://email@example.com
Postpone calls of unsafe server-side functions in pg_dump.
commit : b1f106420b1a1703a1703fdf90bcdd703f322cd1 author : Tom Lane <firstname.lastname@example.org> date : Sat, 19 Nov 2022 11:40:30 -0500 committer: Tom Lane <email@example.com> date : Sat, 19 Nov 2022 11:40:30 -0500
Avoid calling pg_get_partkeydef(), pg_get_expr(relpartbound), and regtypeout until we have lock on the relevant tables. The existing coding is at serious risk of failure if there are any concurrent DROP TABLE commands going on --- including drops of other sessions' temp tables. Back-patch of commit e3fcbbd62. That's been in v15/HEAD long enough to have some confidence about it, so now let's fix the problem in older branches. Original patch by me; thanks to Gilles Darold for back-patching legwork. Discussion: https://firstname.lastname@example.org Discussion: https://postgr.es/m/7d7eb6128f40401d81b3b7a898b6b4de@W2012-02.nidsa.loc Discussion: https://email@example.com
Replace RelationOpenSmgr() with RelationGetSmgr().
commit : d4acf2eb94f3464d5f439ca83a4188496cd5841a author : Tom Lane <firstname.lastname@example.org> date : Thu, 17 Nov 2022 16:54:31 -0500 committer: Tom Lane <email@example.com> date : Thu, 17 Nov 2022 16:54:31 -0500
This is a back-patch of the v15-era commit f10f0ae42 into older supported branches. The idea is to design out bugs in which an ill-timed relcache flush clears rel->rd_smgr partway through some code sequence that wasn't expecting that. We had another report today of a corner case that reliably crashes v14 under debug_discard_caches (nee CLOBBER_CACHE_ALWAYS), and therefore would crash once in a blue moon in the field. We're unlikely to get rid of all such code paths unless we adopt the more rigorous coding rules instituted by f10f0ae42. Therefore, even though this is a bit invasive, it's time to back-patch. Some comfort can be taken in the fact that f10f0ae42 has been in v15 for 16 months without problems. I left the RelationOpenSmgr macro present in the back branches, even though no core code should use it anymore, in order to not break third-party extensions in minor releases. Such extensions might opt to start using RelationGetSmgr instead, to reduce their code differential between v15 and earlier branches. This carries a hazard of failing to compile against headers from existing minor releases. However, once compiled the extension should work fine even with such releases, because RelationGetSmgr is a "static inline" function so it creates no link-time dependency. So depending on distribution practices, that might be an OK tradeoff. Per report from Spyridon Dimitrios Agathos. Original patch by Amul Sul. Discussion: https://postgr.es/m/CAFM5RaqdgyusQvmWkyPYaWMwoK5gigdtW-7HcgHgOeAw7mqJ_Q@mail.gmail.com Discussion: https://postgr.es/m/CANiYTQsU7yMFpQYnv=BrcRVqK_3U3mtAzAsJCaqtzsDHfsUbdQ@mail.gmail.com
Account for IPC::Run::result() Windows behavior change.
commit : 791dd7579149a6a393a43935f976c19039360f94 author : Noah Misch <firstname.lastname@example.org> date : Thu, 17 Nov 2022 07:35:06 -0800 committer: Noah Misch <email@example.com> date : Thu, 17 Nov 2022 07:35:06 -0800
This restores compatibility with the not-yet-released successor of version 20220807.0. Back-patch to 9.4, which introduced this code. Reviewed by Andrew Dunstan. Discussion: https://postgr.es/m/20221117061805.GA4020280@rfd.leadboat.com
Fix cleanup lock acquisition in SPLIT_ALLOCATE_PAGE replay.
commit : 1703033f896a7825001d8b1e17fa24613408d852 author : Amit Kapila <firstname.lastname@example.org> date : Mon, 14 Nov 2022 09:52:06 +0530 committer: Amit Kapila <email@example.com> date : Mon, 14 Nov 2022 09:52:06 +0530
During XLOG_HASH_SPLIT_ALLOCATE_PAGE replay, we were checking for a cleanup lock on the new bucket page after acquiring an exclusive lock on it, and raising a PANIC error on failure. However, it is quite possible for the checkpointer to acquire a pin on the same page before we acquire a lock on it, in which case replay will lead to an error. So instead, directly acquire the cleanup lock on the new bucket page during XLOG_HASH_SPLIT_ALLOCATE_PAGE replay operation. Reported-by: Andres Freund Author: Robert Haas Reviewed-By: Amit Kapila, Andres Freund, Vignesh C Backpatch-through: 11 Discussion: https://firstname.lastname@example.org
Fix theoretical torn page hazard.
commit : 5eaf3e375dc77a254efa098bdb1413bbd0ef9f1a author : Jeff Davis <email@example.com> date : Thu, 10 Nov 2022 14:46:30 -0800 committer: Jeff Davis <firstname.lastname@example.org> date : Thu, 10 Nov 2022 14:46:30 -0800
The original report was concerned with a possible inconsistency between the heap and the visibility map, which I was unable to confirm. The concern has been retracted. However, there did seem to be a torn page hazard when using checksums. By not setting the heap page LSN during redo, the protections of minRecoveryPoint were bypassed. Fixed, along with a misleading comment. It may have been impossible to hit this problem in practice, because it would require a page tear between the checksum and the flags, so I am marking this as a theoretical risk. But, as discussed, it did violate expectations about the page LSN, so it may have other consequences. Backpatch to all supported versions. Reported-by: Konstantin Knizhnik Reviewed-by: Konstantin Knizhnik Discussion: https://email@example.com Backpatch-through: 11
Fix alter_table.sql test case to test what it claims to.
commit : 421f3363a8c1035cfc2e0684bf8b24df2afe2902 author : Tom Lane <firstname.lastname@example.org> date : Thu, 10 Nov 2022 17:24:26 -0500 committer: Tom Lane <email@example.com> date : Thu, 10 Nov 2022 17:24:26 -0500
The stanza "SET STORAGE may need to add a TOAST table" does not test what it's supposed to, and hasn't done so since we added the ability to store constant column default values as metadata. We need to use a non-constant default to get the expected table rewrite to actually happen. Fix that, and add the missing checks that would have exposed the problem to begin with. Noted while reviewing a patch that made changes in this test case. Back-patch to v11 where the problem came in.
Doc: add comments about PreventInTransactionBlock/IsInTransactionBlock.
commit : fec80da849f3374c2931cc3bf593edca15041ece author : Tom Lane <firstname.lastname@example.org> date : Wed, 9 Nov 2022 11:08:52 -0500 committer: Tom Lane <email@example.com> date : Wed, 9 Nov 2022 11:08:52 -0500
Add a little to the header comments for these functions to make it clearer what guarantees about commit behavior are provided to callers. (See commit f92944137 for context.) Although this is only a comment change, it's really documentation aimed at authors of extensions, so it seems appropriate to back-patch. Yugo Nagata and Tom Lane, per further discussion of bug #17434. Discussion: https://firstname.lastname@example.org
Fix compilation warnings with libselinux 3.1 in contrib/sepgsql/
commit : 91723759e450adff35664926a0328fb8a2c02cf4 author : Michael Paquier <email@example.com> date : Wed, 9 Nov 2022 09:39:57 +0900 committer: Michael Paquier <firstname.lastname@example.org> date : Wed, 9 Nov 2022 09:39:57 +0900
Upstream SELinux has recently marked security_context_t as officially deprecated, causing warnings with -Wdeprecated-declarations. This is considered as legacy code for some time now by upstream as security_context_t got removed from most of the code tree during the development of 2.3 back in 2014. This removes all the references to security_context_t in sepgsql/ to be consistent with SELinux, fixing the warnings. Note that this does not impact the minimum version of libselinux supported. This has been applied first as 1f32136 for 14~, but no other branches got the call. This is in line with the recent project policy to have no warnings in branches where builds should still be supported (9.2~ as of today). Per discussion with Tom Lane and Álvaro Herrera. Reviewed-by: Tom Lane Discussion: https://postgr.es/m/20200813012735.GC11663@paquier.xyz Discussion: https://email@example.com Backpatch-through: 9.2
Doc: improve tutorial section about grouped aggregates.
commit : 679c394f538729f7d84d01f7afc0e1239f818361 author : Tom Lane <firstname.lastname@example.org> date : Tue, 8 Nov 2022 18:25:03 -0500 committer: Tom Lane <email@example.com> date : Tue, 8 Nov 2022 18:25:03 -0500
Commit fede15417 introduced FILTER by jamming it into the existing example introducing HAVING, which seems pedagogically poor to me; and it added no information about what the keyword actually does. Not to mention that the claimed output didn't match the sample data being used in this running example. Revert that and instead make an independent example using FILTER. To help drive home the point that it's a per-aggregate filter, we need to use two aggregates not just one; for consistency expand all the examples in this segment to do that. Also adjust the example using WHERE ... LIKE so that it'd produce nonempty output with this sample data, and show that output. Back-patch, as the previous patch was. (Sadly, v10 is now out of scope.) Discussion: https://firstname.lastname@example.org