commit : d9b89f1939cc33f14bb8c3f01ced946eb0febaa5 author : Tom Lane <firstname.lastname@example.org> date : Mon, 10 Feb 2020 17:23:16 -0500 committer: Tom Lane <email@example.com> date : Mon, 10 Feb 2020 17:23:16 -0500
Last-minute updates for release notes.
commit : 4153ac0d703593987f2fcac082d7fc04546c28cc author : Tom Lane <firstname.lastname@example.org> date : Mon, 10 Feb 2020 12:51:07 -0500 committer: Tom Lane <email@example.com> date : Mon, 10 Feb 2020 12:51:07 -0500
createuser: fix parsing of --connection-limit argument
commit : 1b2ae4bcd69deb08bbabd335558fa2ef779dc783 author : Alvaro Herrera <firstname.lastname@example.org> date : Mon, 10 Feb 2020 12:14:58 -0300 committer: Alvaro Herrera <email@example.com> date : Mon, 10 Feb 2020 12:14:58 -0300
The original coding failed to quote the argument properly. Reported-by: Daniel Gustafsson Discussion: 1B8AE66C-85AB-4728-9BB4-612E8E61C219@yesql.se
commit : a6e11f4a1a5967d2f06fe48716141bf25ae623df author : Peter Eisentraut <firstname.lastname@example.org> date : Mon, 10 Feb 2020 12:55:41 +0100 committer: Peter Eisentraut <email@example.com> date : Mon, 10 Feb 2020 12:55:41 +0100
Source-Git-URL: https://git.postgresql.org/git/pgtranslation/messages.git Source-Git-Hash: cde576a776a749a424b649f24259486752c7884d
Revert "pg_upgrade: Fix quoting of some arguments in pg_ctl command"
commit : ad70af11f9836234837361cab91668ab7a9c96b9 author : Michael Paquier <firstname.lastname@example.org> date : Mon, 10 Feb 2020 15:48:56 +0900 committer: Michael Paquier <email@example.com> date : Mon, 10 Feb 2020 15:48:56 +0900
This reverts commit d1c0b61. The patch has some downsides that require more attention, as discussed with Noah Misch. Backpatch-through: 9.5
pg_upgrade: Fix quoting of some arguments in pg_ctl command
commit : bd54e7ace7d60a823d405348a65c9a28171072b7 author : Michael Paquier <firstname.lastname@example.org> date : Mon, 10 Feb 2020 10:49:58 +0900 committer: Michael Paquier <email@example.com> date : Mon, 10 Feb 2020 10:49:58 +0900
The previous coding forgot to apply shell quoting to the socket directory and the data folder, leading to failures when running pg_upgrade. This refactors the code that generates the pg_ctl command used to start clusters so that it applies proper shell quoting. Failures are easiest to trigger in 12 and newer versions by passing a value of --socketdir that includes quotes, but they can also be caused by quotes in the default socket directory used by pg_upgrade or in the data folders of the clusters involved in the upgrade. As 9.4 is going to be EOL'd with the next minor release, nobody is likely to upgrade to it now, so that branch is not included in the set of branches fixed. Author: Michael Paquier Reviewed-by: Álvaro Herrera, Noah Misch Backpatch-through: 9.5
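The essence of the fix is that every user-controlled path interpolated into the pg_ctl command line must be shell-quoted. As a hedged illustration (Python's shlex stands in for pg_upgrade's actual quoting helpers, and the paths are made up), quoting keeps a data folder containing spaces and quotes intact as a single shell word:

```python
import shlex

def start_command(pg_ctl, datadir):
    # Hypothetical sketch of building a pg_ctl start command: each
    # user-controlled component is quoted, so quotes or spaces in the
    # data folder cannot break the command line apart.
    return " ".join([shlex.quote(pg_ctl),
                     "-D", shlex.quote(datadir),
                     "start"])

cmd = start_command("/usr/lib/postgresql/12/bin/pg_ctl",
                    "/tmp/it's here/data")
```

Splitting the result back with shell rules recovers the original arguments exactly, which is the property the unquoted code lacked.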
Release notes for 12.2, 11.7, 10.12, 9.6.17, 9.5.21, 9.4.26.
commit : 8be0a55d392cc2701d1ffacc45d56a757cb03df0 author : Tom Lane <firstname.lastname@example.org> date : Sun, 9 Feb 2020 14:14:19 -0500 committer: Tom Lane <email@example.com> date : Sun, 9 Feb 2020 14:14:19 -0500
Add note about access permission checks by inherited TRUNCATE and LOCK TABLE.
commit : 990acfc656c05d499f9aa642ea8cbfe5a6738a4e author : Fujii Masao <firstname.lastname@example.org> date : Fri, 7 Feb 2020 00:33:11 +0900 committer: Fujii Masao <email@example.com> date : Fri, 7 Feb 2020 00:33:11 +0900
Inherited queries perform access permission checks on the parent table only. But there are two exceptions to this rule in v12 and before: TRUNCATE and LOCK TABLE commands issued through a parent table check the permissions not only on the parent table but also on the children tables. Previously these exceptions were not documented. This commit adds a note about them to the documentation. Back-patch to v9.4. We don't apply this commit to master, because commit e6f1e560e4 already got rid of the exception for inherited TRUNCATE and an upcoming commit will do the same for inherited LOCK TABLE. Author: Amit Langote Reviewed-by: Fujii Masao Discussion: https://postgr.es/m/CA+HiwqHfTnMU6SUkyHxCmpHUKk7ERLHCR3vZVq19ZOQBjPBLmQ@mail.gmail.com
Revert commit 606f350de9.
commit : dc06d0839a330d2a51ae10d1a168312b9daf7b0a author : Fujii Masao <firstname.lastname@example.org> date : Mon, 3 Feb 2020 12:43:51 +0900 committer: Fujii Masao <email@example.com> date : Mon, 3 Feb 2020 12:43:51 +0900
This commit reverts the fix "Make inherited TRUNCATE perform access permission checks on parent table only", but only in the back branches. It's not hard to imagine that some applications expect the old behavior and that the fix would break their security assumptions. To avoid this compatibility problem, we decided to apply the fix only in HEAD and revert it in all supported back branches. Discussion: https://firstname.lastname@example.org
Fix memory leak on DSM slot exhaustion.
commit : a5f45c3dd3b569de07e9adf049e748c6a3a896a5 author : Thomas Munro <email@example.com> date : Sat, 1 Feb 2020 14:29:13 +1300 committer: Thomas Munro <firstname.lastname@example.org> date : Sat, 1 Feb 2020 14:29:13 +1300
If we attempt to create a DSM segment when no slots are available, we should return the memory to the operating system. Previously we did that if the DSM_CREATE_NULL_IF_MAXSEGMENTS flag was passed in, but we didn't do it if an error was raised. Repair. Back-patch to 9.4, where DSM segments arrived. Author: Thomas Munro Reviewed-by: Robert Haas Reported-by: Julian Backes Discussion: https://postgr.es/m/CA%2BhUKGKAAoEw-R4om0d2YM4eqT1eGEi6%3DQot-3ceDR-SLiWVDw%40mail.gmail.com
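The shape of the fix is a classic error-path cleanup: memory acquired from the OS must be released when slot registration fails and an error is raised, not only on the flag-controlled soft-failure path. A minimal sketch with hypothetical names (the real code deals with dsm_create and the segment control slots):

```python
def create_segment(allocate, claim_slot):
    # Sketch: acquire memory first, then try to claim a control slot.
    # If claiming the slot raises (e.g. all slots are in use), release
    # the memory on the way out instead of leaking it -- previously
    # this happened only when a soft-failure flag was passed.
    mem = allocate()
    try:
        claim_slot(mem)
    except Exception:
        mem.release()  # return the memory to the operating system
        raise
    return mem
```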
Fix CheckAttributeType's handling of collations for ranges.
commit : 59047b6d0c93db208a40a77a3672cbe10b7b04da author : Tom Lane <email@example.com> date : Fri, 31 Jan 2020 17:03:55 -0500 committer: Tom Lane <firstname.lastname@example.org> date : Fri, 31 Jan 2020 17:03:55 -0500
Commit fc7695891 changed CheckAttributeType to recurse into ranges, but made it pass down the wrong collation (always InvalidOid, since ranges as such have no collation). This would result in guaranteed failure when considering a range type whose subtype is collatable. Embarrassingly, we lack any regression tests that would expose such a problem (but fortunately, somebody noticed before we shipped this bug in any release). Fix it to pass down the range's subtype collation property instead, and add some regression test cases to exercise collatable-subtype ranges a bit more. Back-patch to all supported branches, as the previous patch was. Report and patch by Julien Rouhaud, test cases tweaked by me Discussion: https://postgr.es/m/CAOBaU_aBWqNweiGUFX0guzBKkcfJ8mnnyyGC_KBQmO12Mj5f_A@mail.gmail.com
Fix parallel pg_dump/pg_restore for failure to create worker processes.
commit : 1b78759a62beeb7e777a5ed1518fd840277c9a52 author : Tom Lane <email@example.com> date : Fri, 31 Jan 2020 14:41:49 -0500 committer: Tom Lane <firstname.lastname@example.org> date : Fri, 31 Jan 2020 14:41:49 -0500
If we failed to fork a worker process, or create a communication pipe for one, WaitForTerminatingWorkers would suffer an assertion failure if assert-enabled, otherwise crash or go into an infinite loop. This was a consequence of not accounting for the startup condition where we've not yet forked all the workers. The original bug was that ParallelBackupStart would set workerStatus to WRKR_IDLE before it had successfully forked a worker. I made things worse in commit b7b8cc0cf by not understanding the undocumented fact that the WRKR_TERMINATED state was also meant to represent the case where a worker hadn't been started yet: I changed enum T_WorkerStatus so that *all* the worker slots were initially in WRKR_IDLE state. But this wasn't any more broken in practice, since even one slot in the wrong state would keep WaitForTerminatingWorkers from terminating. In v10 and later, introduce an explicit T_WorkerStatus value for worker-not-started, in hopes of preventing future oversights of the same ilk. Before that, just document that WRKR_TERMINATED is supposed to cover that case (partly because it wasn't actively broken, and partly because the enum is exposed outside parallel.c in those branches, so there's microscopically more risk involved in changing it). In all branches, introduce a WORKER_IS_RUNNING status test macro to hide which T_WorkerStatus values mean that, and be more careful not to access ParallelSlot fields till we're sure they're valid. Per report from Vignesh C, though this is my patch not his. Back-patch to all supported branches. Discussion: https://postgr.es/m/CALDaNm1Luv-E3sarR+-unz-BjchquHHyfP+YC+2FS2pt_Jemail@example.com
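The state-machine change described above can be sketched in miniature (Python enum standing in for the C enum; names approximate the commit message, not the actual pg_dump source):

```python
from enum import Enum, auto

class WorkerStatus(Enum):
    NOT_STARTED = auto()  # explicit state for a never-forked worker (the v10+ fix)
    IDLE = auto()
    WORKING = auto()
    TERMINATED = auto()

def worker_is_running(status):
    # Analogue of the WORKER_IS_RUNNING test macro: callers need not
    # know which concrete states count as "running", so adding a new
    # not-started state cannot silently break their tests.
    return status in (WorkerStatus.IDLE, WorkerStatus.WORKING)
```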
Make inherited TRUNCATE perform access permission checks on parent table only.
commit : 606f350de92a15fdb2c6a3c209f60cf74d2825e7 author : Fujii Masao <firstname.lastname@example.org> date : Fri, 31 Jan 2020 00:46:20 +0900 committer: Fujii Masao <email@example.com> date : Fri, 31 Jan 2020 00:46:20 +0900
Previously, a TRUNCATE command issued through a parent table checked the permissions not only on the parent table but also on the children tables inherited from it. This was a bug: inherited queries should perform access permission checks on the parent table only. This commit fixes that bug. Back-patch to all supported branches. Author: Amit Langote Reviewed-by: Fujii Masao Discussion: https://postgr.es/m/CAHGQGwFHdSvifhJE+-GSNqUHSfbiKxaeQQ7HGcYz6SC2n_oDcg@mail.gmail.com
Fix an oversight in commit 4c70098ff.
commit : 0e63d9641bad3f50f91ee8e351051bddabac8d31 author : Tom Lane <firstname.lastname@example.org> date : Thu, 23 Jan 2020 16:15:32 -0500 committer: Tom Lane <email@example.com> date : Thu, 23 Jan 2020 16:15:32 -0500
I had supposed that the from_char_seq_search() call sites were all passing the constant arrays you'd expect them to pass ... but on looking closer, the one for DY format was passing the days array not days_short. This accidentally worked because the day abbreviations in English are all the same as the first three letters of the full day names. However, once we took out the "maximum comparison length" logic, it stopped working. As penance for that oversight, add regression test cases covering this, as well as every other switch case in DCH_from_char() that was not reached according to the code coverage report. Also, fold the DCH_RM and DCH_rm cases into one --- now that seq_search is case independent, there's no need to pass different comparison arrays for those cases. Back-patch, as the previous commit was.
Clean up formatting.c's logic for matching constant strings.
commit : a576f2a8f2e8a07b35b66c15477e510baa462dec author : Tom Lane <firstname.lastname@example.org> date : Thu, 23 Jan 2020 13:42:10 -0500 committer: Tom Lane <email@example.com> date : Thu, 23 Jan 2020 13:42:10 -0500
seq_search(), which is used to match input substrings to constants such as month and day names, had a lot of bizarre and unnecessary behaviors. It was mostly possible to avert our eyes from that before, but we don't want to duplicate those behaviors in the upcoming patch to allow recognition of non-English month and day names. So it's time to clean this up. In particular:

* seq_search scribbled on the input string, which is a pretty dangerous thing to do, especially in the badly underdocumented way it was done here. Fortunately the input string is a temporary copy, but that was being made three subroutine levels away, making it something easy to break accidentally. The behavior is externally visible nonetheless, in the form of odd case-folding in error reports about unrecognized month/day names. The scribbling is evidently being done to save a few calls to pg_tolower, but that's such a cheap function (at least for ASCII data) that it's pretty pointless to worry about. In HEAD I switched it to be pg_ascii_tolower to ensure it is cheap in all cases; but there are corner cases in Turkish where this'd change behavior, so leave it as pg_tolower in the back branches.

* seq_search insisted on knowing the case form (all-upper, all-lower, or initcap) of the constant strings, so that it didn't have to case-fold them to perform case-insensitive comparisons. This likewise seems like excessive micro-optimization, given that pg_tolower is certainly very cheap for ASCII data. It seems unsafe to assume that we know the case form that will come out of pg_locale.c for localized month/day names, so it's better just to define the comparison rule as "downcase all strings before comparing". (The choice between downcasing and upcasing is arbitrary so far as English is concerned, but it might not be in other locales, so follow citext's lead here.)

* seq_search also had a parameter that'd cause it to report a match after a maximum number of characters, even if the constant string were longer than that. This was not actually used because no caller passed a value small enough to cut off a comparison. Replicating that behavior for localized month/day names seems expensive as well as useless, so let's get rid of that too.

* from_char_seq_search used the maximum-length parameter to truncate the input string in error reports about not finding a matching name. This leads to rather confusing reports in many cases. Worse, it is outright dangerous if the input string isn't all-ASCII, because we risk truncating the string in the middle of a multibyte character. That'd lead either to delivering an illegible error message to the client, or to encoding-conversion failures that obscure the actual data problem. Get rid of that in favor of truncating at whitespace if any (a suggestion due to Alvaro Herrera).

In addition to fixing these things, I const-ified the input string pointers of DCH_from_char and its subroutines, to make sure there aren't any other scribbling-on-input problems. The risk of generating a badly-encoded error message seems like enough of a bug to justify back-patching, so patch all supported branches. Discussion: https://firstname.lastname@example.org
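The cleaned-up matching rule can be paraphrased as: downcase both sides before comparing, never modify the input, and always require the constant's full length. A hedged sketch of that rule (illustrative Python, not the formatting.c code itself):

```python
def seq_search(name, constants):
    """Match a prefix of `name` case-insensitively against each full
    entry of `constants`; return (index, matched_length), or (-1, 0)
    if nothing matches.  The input is read-only: downcasing happens on
    temporary values, never by scribbling on the caller's string."""
    folded = name.lower()
    for i, const in enumerate(constants):
        if folded.startswith(const.lower()):
            return i, len(const)
    return -1, 0
```

Matching abbreviated names (the DY case above) would use a separate array of short names, which is exactly why passing the full-name array by mistake only worked in English by accident.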
Fix concurrent indexing operations with temporary tables
commit : c39f455981770aa707f05d8e338b233e96730e47 author : Michael Paquier <email@example.com> date : Wed, 22 Jan 2020 09:49:44 +0900 committer: Michael Paquier <firstname.lastname@example.org> date : Wed, 22 Jan 2020 09:49:44 +0900
Attempting to use CREATE INDEX, DROP INDEX or REINDEX with CONCURRENTLY on a temporary relation with ON COMMIT actions triggered unexpected errors, because those operations use multiple transactions internally to complete their work. For example, one confusing error when using ON COMMIT DELETE ROWS was: ERROR: index "foo" already contains data This commit fixes the issues with temporary relations and concurrent indexing by enforcing the non-concurrent path for temporary relations even when CONCURRENTLY is used, transparently to the user. Taking the non-concurrent path does not matter in practice, as locks cannot be taken on a temporary relation by any session other than the one owning the relation, and the non-concurrent operation is more efficient. The problem has existed with REINDEX since v12, when CONCURRENTLY was introduced for it, and with CREATE/DROP INDEX for as long as CONCURRENTLY has existed for those commands. In all supported versions, this caused only confusing error messages to be generated. Note that with REINDEX, it was also possible to issue a REINDEX CONCURRENTLY for a temporary relation owned by a different session, leading to a server crash. The idea of transparently enforcing the non-concurrent code path for temporary relations comes originally from Andres Freund. Reported-by: Manuel Rigger Author: Michael Paquier, Heikki Linnakangas Reviewed-by: Andres Freund, Álvaro Herrera, Heikki Linnakangas Discussion: https://postgr.es/m/CA+u7OA6gP7YAeCguyseusYcc=uR8+ypjCcgDDCTzjQ+k6S9ksQ@mail.gmail.com Backpatch-through: 9.4
Fix edge case leading to agg transitions skipping ExecAggTransReparent() calls.
commit : f651976d94bfd6fd19c3b27e9649f166030923bd author : Andres Freund <email@example.com> date : Mon, 20 Jan 2020 23:26:51 -0800 committer: Andres Freund <firstname.lastname@example.org> date : Mon, 20 Jan 2020 23:26:51 -0800
The code checking whether an aggregate transition value needs to be reparented into the current context has always compared the transition return value with the previous transition value only by datum, i.e. without regard for NULLness. This normally works, because when the transition function returns NULL (via fcinfo->isnull), it'll return a value that won't be the same as its input value. But there's no hard requirement that that's the case. And it turns out it's possible to hit this case (see the discussion or the reproducers), leading to a non-null transition value not being reparented, followed by a crash caused by that. Instead of adding another NULLness comparison, have ExecAggTransReparent() ensure that pergroup->transValue ends up as 0 when the new transition value is NULL. That avoids adding a branch to the much more common cases of the transition function returning the old transition value (which is a pointer in this case) and of the new value being different but not NULL. In branches since 69c3936a149, also deduplicate the reparenting code between the expression-evaluation-based transitions and the path for ordered aggregates. Reported-By: Teodor Sigaev, Nikita Glukhov Author: Andres Freund Discussion: https://email@example.com Backpatch: 9.4-, this issue has existed since at least 7.4
Add GUC variables for stat tracking and timeout as PGDLLIMPORT
commit : 4a49149b9d74d2f9479cb73227e9646c901c84e6 author : Michael Paquier <firstname.lastname@example.org> date : Tue, 21 Jan 2020 13:47:13 +0900 committer: Michael Paquier <email@example.com> date : Tue, 21 Jan 2020 13:47:13 +0900
This helps with the integration of extensions on Windows. The following parameters are changed:
- idle_in_transaction_session_timeout (9.6 and newer versions)
- lock_timeout
- statement_timeout
- track_activities
- track_counts
- track_functions
Author: Pascal Legrand Reviewed-by: Amit Kapila, Julien Rouhaud, Michael Paquier Discussion: https://firstname.lastname@example.org Backpatch-through: 9.4
Fix pg_dump's sigTermHandler() to use _exit() not exit().
commit : b1392a9502016fcd3c93ba18ab418e11424b711e author : Tom Lane <email@example.com> date : Mon, 20 Jan 2020 12:57:17 -0500 committer: Tom Lane <firstname.lastname@example.org> date : Mon, 20 Jan 2020 12:57:17 -0500
sigTermHandler() tried to be careful to invoke only operations that are safe to do in a signal handler. But for some reason we forgot that exit(3) is not among those, because it calls atexit handlers that might do various random things. (pg_dump itself installs no atexit handlers, but e.g. OpenSSL does.) That led to crashes or lockups when attempting to terminate a parallel dump or restore via a signal. Fix by calling _exit() instead. Per bug #16199 from Raúl Marín. Back-patch to all supported branches. Discussion: https://email@example.com
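The distinction can be demonstrated outside C as well. In this hedged Python analogue, os._exit corresponds to _exit(2) and sys.exit to exit(3): terminating from the signal handler with the low-level call skips the atexit callback, which is exactly the behavior the fix wants:

```python
import subprocess, sys, textwrap

# Child process: registers an atexit hook, then terminates from a
# SIGTERM handler via os._exit(), the async-signal-safe analogue of
# _exit(2).  The hook never runs, unlike with sys.exit()/exit(3),
# which would walk the registered exit callbacks.
child = textwrap.dedent("""
    import atexit, os, signal
    atexit.register(lambda: print("atexit ran"))
    def handler(signum, frame):
        os._exit(3)
    signal.signal(signal.SIGTERM, handler)
    signal.raise_signal(signal.SIGTERM)
""")
result = subprocess.run([sys.executable, "-c", child],
                        capture_output=True, text=True)
```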
Fix crash in BRIN inclusion op functions, due to missing datum copy.
commit : 98f0d283774b68895bc41413d7dd9c19e5608231 author : Heikki Linnakangas <firstname.lastname@example.org> date : Mon, 20 Jan 2020 10:36:35 +0200 committer: Heikki Linnakangas <email@example.com> date : Mon, 20 Jan 2020 10:36:35 +0200
The BRIN add_value() and union() functions need to make a longer-lived copy of the argument, if they want to store it in the BrinValues struct also passed as argument. The functions for the "inclusion operator classes" used with box, range and inet types didn't take into account that the union helper function might return its argument as is, without making a copy. Check for that case, and make a copy if necessary. That case arises at least with the range_union() function, when one of the arguments is an 'empty' range:

CREATE TABLE brintest (n numrange);
CREATE INDEX brinidx ON brintest USING brin (n);
INSERT INTO brintest VALUES ('empty');
INSERT INTO brintest VALUES (numrange(0, 2^1000::numeric));
INSERT INTO brintest VALUES ('(-1, 0)');
SELECT brin_desummarize_range('brinidx', 0);
SELECT brin_summarize_range('brinidx', 0);

Backpatch down to 9.5, where BRIN was introduced. Discussion: https://www.postgresql.org/message-id/e6e1d6eb-0a67-36aa-e779-bcca59167c14%40iki.fi Reviewed-by: Emre Hasegeli, Tom Lane, Alvaro Herrera
Repair more failures with SubPlans in multi-row VALUES lists.
commit : 3964722780d811430521b6051bc350ead03fb708 author : Tom Lane <firstname.lastname@example.org> date : Fri, 17 Jan 2020 16:17:18 -0500 committer: Tom Lane <email@example.com> date : Fri, 17 Jan 2020 16:17:18 -0500
Commit 9b63c13f0 turns out to have been fundamentally misguided: the parent node's subPlan list is by no means the only way in which a child SubPlan node can be hooked into the outer execution state. As shown in bug #16213 from Matt Jibson, we can also get short-lived tuple table slots added to the outer es_tupleTable list. At this point I have little faith that there aren't other possible connections as well; the long time it took to notice this problem shows that this isn't a heavily-exercised situation. Therefore, revert that fix, returning to the coding that passed a NULL parent plan pointer down to the transiently-built subexpressions. That gives us a pretty good guarantee that they won't hook into the outer executor state in any way. But then we need some other solution to make SubPlans work. Adopt the solution speculated about in the previous commit's log message: do expression initialization at plan startup for just those VALUES rows containing SubPlans, abandoning the goal of reclaiming memory intra-query for those rows. In practice it seems unlikely that queries containing a vast number of VALUES rows would be using SubPlans in them, so this should not give up much. (BTW, this test case also refutes my claim in connection with the prior commit that the issue only arises with use of LATERAL. That was just wrong: some variants of SubLink always produce SubPlans.) As with previous patch, back-patch to all supported branches. Discussion: https://firstname.lastname@example.org
Set ReorderBufferTXN->final_lsn more eagerly
commit : 58997ace5b372cc137770292f462d5b8854c832d author : Alvaro Herrera <email@example.com> date : Fri, 17 Jan 2020 18:00:39 -0300 committer: Alvaro Herrera <firstname.lastname@example.org> date : Fri, 17 Jan 2020 18:00:39 -0300
... specifically, set it incrementally as each individual change is spilled down to disk. This way, it is set correctly when the transaction disappears without a trace, i.e. without leaving an XACT_ABORT WAL record. (This happens when the server crashes midway through a transaction.) Failing to set final_lsn prevents ReorderBufferRestoreCleanup() from working, since it needs the final_lsn in order to know the endpoint of its iteration through the spilled files. Commit df9f682c7bf8 already tried to fix the problem, but it didn't set the final_lsn in all cases. Revert that, since it's no longer needed. Author: Vignesh C Reviewed-by: Amit Kapila, Dilip Kumar Discussion: https://postgr.es/m/CALDaNm2CLk+K9JDwjYST0sPbGg5AQdvhUt0jbKyX_HdAE0jk3A@mail.gmail.com
Make rewriter prevent auto-updates on views with conditional INSTEAD rules.
commit : bb09a9414f1e149aac6064d217b5d5cab8f13723 author : Dean Rasheed <email@example.com> date : Tue, 14 Jan 2020 09:48:44 +0000 committer: Dean Rasheed <firstname.lastname@example.org> date : Tue, 14 Jan 2020 09:48:44 +0000
A view with conditional INSTEAD rules and no unconditional INSTEAD rules or INSTEAD OF triggers is not auto-updatable. Previously we relied on a check in the executor to catch this, but that's problematic since the planner may fail to properly handle such a query and thus return a particularly unhelpful error to the user, before reaching the executor check. Instead, trap this in the rewriter and report the correct error there. Doing so also allows us to include more useful error detail than the executor check can provide. This doesn't change the existing behaviour of updatable views; it merely ensures that useful error messages are reported when a view isn't updatable. Per report from Pengzhou Tang, though not adopting that suggested fix. Back-patch to all supported branches. Discussion: https://postgr.es/m/CAG4reAQn+4xB6xHJqWdtE0ve_WqJkdyCV4P=trYr4Kn8_3_PEA@mail.gmail.com
Fix edge-case crashes and misestimation in range containment selectivity.
commit : 784c58da1957cecdea037428ce08953b852be85b author : Tom Lane <email@example.com> date : Sun, 12 Jan 2020 14:37:00 -0500 committer: Tom Lane <firstname.lastname@example.org> date : Sun, 12 Jan 2020 14:37:00 -0500
When estimating the selectivity of "range_var <@ range_constant" or "range_var @> range_constant", if the upper (or respectively lower) bound of the range_constant was above the last bin of the range_var's histogram, the code would access uninitialized memory and potentially crash (though it seems the probability of a crash is quite low). Handle the endpoint cases explicitly to fix that. While at it, be more paranoid about the possibility of getting NaN or other silly results from the range type's subdiff function. And improve some comments. Ordinarily we'd probably add a regression test case demonstrating the bug in unpatched code. But it's too hard to get it to crash reliably because of the uninitialized-memory dependence, so skip that. Per bug #16122 from Adam Scott. It's been broken from the beginning, apparently, so backpatch to all supported branches. Diagnosis by Michael Paquier, patch by Andrey Borodin and Tom Lane. Discussion: https://email@example.com
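The extra paranoia about the subdiff function's output can be sketched like this (hypothetical helper and fallback constant for illustration, not the actual selectivity code):

```python
import math

DEFAULT_SEL = 0.005  # hypothetical fallback, not PostgreSQL's actual constant

def fraction_of_bin(subdiff_val, bin_width):
    # Sketch of the defensive rule: a range type's subdiff function
    # might hand back NaN, infinity, or a non-positive bin width, so
    # fall back to a default selectivity rather than propagate garbage,
    # and clamp the result into the only sensible interval, [0, 1].
    if (not math.isfinite(subdiff_val) or not math.isfinite(bin_width)
            or bin_width <= 0):
        return DEFAULT_SEL
    frac = subdiff_val / bin_width
    return min(max(frac, 0.0), 1.0)
```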
doc: Fix naming of SELinux
commit : 8d55879f089563cf1d62266d921c0a00d2b9fff8 author : Michael Paquier <firstname.lastname@example.org> date : Fri, 10 Jan 2020 09:37:38 +0900 committer: Michael Paquier <email@example.com> date : Fri, 10 Jan 2020 09:37:38 +0900
Reported-by: Tham Nguyen Discussion: https://firstname.lastname@example.org Backpatch-through: 9.4
Revert "Forbid DROP SCHEMA on temporary namespaces"
commit : 86949c2f120792a6cfcafee1f30cf4231085b6d8 author : Michael Paquier <email@example.com> date : Wed, 8 Jan 2020 10:36:46 +0900 committer: Michael Paquier <firstname.lastname@example.org> date : Wed, 8 Jan 2020 10:36:46 +0900
This reverts commit a052f6c, following complaints from Robert Haas and Tom Lane. Backpatch down to 9.4, like the previous commit. Discussion: https://postgr.es/m/CA+TgmobL4npEX5=E5h=5Jm_9mZun3MT39Kq2suJFVeamc9skSQ@mail.gmail.com Backpatch-through: 9.4
Fix running out of file descriptors for spill files.
commit : a6f4f407ada026007d37ab9d494bb6380ec12527 author : Amit Kapila <email@example.com> date : Thu, 2 Jan 2020 12:28:02 +0530 committer: Amit Kapila <firstname.lastname@example.org> date : Thu, 2 Jan 2020 12:28:02 +0530
Currently, while decoding changes, if the number of changes exceeds a certain threshold we spill them to disk, and this happens for each (sub)transaction. Previously, when reading all these files back, we did not close any of them until every file had been read; if the number of such files exceeded the maximum number of file descriptors, the operation errored out. Use the PathNameOpenFile interface to open these files, as it internally has a mechanism to release kernel FDs as needed to keep us under the max_safe_fds limit. Reported-by: Amit Khandekar Author: Amit Khandekar Reviewed-by: Amit Kapila Backpatch-through: 9.4 Discussion: https://postgr.es/m/CAJ3gD9c-sECEn79zXw4yBnBdOttacoE-6gAyP0oy60nfs_sabQ@mail.gmail.com
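The mechanism PathNameOpenFile relies on, capping the number of simultaneously open kernel descriptors and transparently reopening files on demand, can be modeled in miniature (a toy sketch, not the fd.c implementation):

```python
from collections import OrderedDict

class VirtualFilePool:
    """Toy analogue of virtual file descriptors: any number of files
    may be in use logically, but at most max_open kernel descriptors
    exist at once.  When the cap is hit, the least recently used file
    is closed and transparently reopened at its saved offset later."""

    def __init__(self, max_open):
        self.max_open = max_open
        self.open_files = OrderedDict()  # path -> file object, LRU first
        self.offsets = {}                # path -> offset to resume at

    def read(self, path, nbytes):
        f = self.open_files.pop(path, None)
        if f is None:
            if len(self.open_files) >= self.max_open:
                victim, vf = self.open_files.popitem(last=False)
                self.offsets[victim] = vf.tell()
                vf.close()               # release the kernel FD
            f = open(path, "rb")
            f.seek(self.offsets.get(path, 0))
        self.open_files[path] = f        # mark as most recently used
        return f.read(nbytes)
```

With max_open=1, two spill files can still be read in interleaved fashion: each access evicts the other file but resumes it at the saved offset.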
Update copyrights for 2020
commit : 78b381c5ddadb25e915ee4f380c28a676851998c author : Bruce Momjian <email@example.com> date : Wed, 1 Jan 2020 12:21:44 -0500 committer: Bruce Momjian <firstname.lastname@example.org> date : Wed, 1 Jan 2020 12:21:44 -0500
Backpatch-through: update all files in master, backpatch legal files through 9.4
doc: add examples of creative use of unique expression indexes
commit : 937cf1b09146ef91195780f747900cf69e8d0fd8 author : Bruce Momjian <email@example.com> date : Fri, 27 Dec 2019 14:49:08 -0500 committer: Bruce Momjian <firstname.lastname@example.org> date : Fri, 27 Dec 2019 14:49:08 -0500
Unique expression indexes can constrain data in creative ways, so show two examples. Reported-by: Tuomas Leikola Discussion: https://email@example.com Backpatch-through: 9.4
docs: clarify infinite range values from data-type infinities
commit : ebd3c132f5304f351e43e34a4565233e2587154f author : Bruce Momjian <firstname.lastname@example.org> date : Fri, 27 Dec 2019 14:33:30 -0500 committer: Bruce Momjian <email@example.com> date : Fri, 27 Dec 2019 14:33:30 -0500
The previous docs referenced these distinct ideas confusingly. Reported-by: Eugen Konkov Discussion: https://firstname.lastname@example.org Backpatch-through: 9.4
Forbid DROP SCHEMA on temporary namespaces
commit : 12cb5478a2a451fd291a535b8b1b387b3a81914a author : Michael Paquier <email@example.com> date : Fri, 27 Dec 2019 17:59:32 +0900 committer: Michael Paquier <firstname.lastname@example.org> date : Fri, 27 Dec 2019 17:59:32 +0900
This operation was possible for the owner of the schema or a superuser. Down to 9.4, it would cause inconsistencies in a session whose temporary schema was dropped, particularly when trying to create new temporary objects after the drop. A more annoying consequence is a crash of autovacuum on an assertion failure when logging information about an orphaned temp table being dropped. Note that because of 246a6c8 (present in v11~), which made the removal of orphaned temporary tables more aggressive, the failure could be triggered more easily, but it is possible to reproduce it down to 9.4. Reported-by: Mahendra Singh, Prabhat Sahu Author: Michael Paquier Reviewed-by: Kyotaro Horiguchi, Mahendra Singh Discussion: https://postgr.es/m/CAKYtNAr9Zq=1-ww4etHo-VCC-k120YxZy5OS01VkaLPaDbv2tg@mail.gmail.com Backpatch-through: 9.4
Rotate instead of shifting hash join batch number.
commit : 893eaf0be8be32f1d6ee364d5d9e2dae0d87ebfd author : Thomas Munro <email@example.com> date : Tue, 24 Dec 2019 11:31:24 +1300 committer: Thomas Munro <firstname.lastname@example.org> date : Tue, 24 Dec 2019 11:31:24 +1300
Our algorithm for choosing batch numbers turned out not to work effectively for multi-billion-key inner relations. We would use more hash bits than we have, effectively concentrating all tuples into a smaller number of batches than we intended. While ideally we should switch to wider hashes, for now change the algorithm to one that effectively gives up bits from the bucket number when we don't have enough bits. That means we'll finish up with longer bucket chains than would be ideal, but that's better than having batches that don't fit in work_mem and can't be divided. Back-patch to all supported releases. Author: Thomas Munro Reviewed-by: Tom Lane, thanks also to Tomas Vondra, Alvaro Herrera, Andres Freund for testing and discussion Reported-by: James Coleman Discussion: https://postgr.es/m/16104-dc11ed911f1ab9df%40postgresql.org
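The bit-level idea is easiest to see with a rotate-right helper: bits that a plain right shift would discard off the bottom wrap around to the top, so batch selection can still see them when the hash is too narrow. A hedged sketch (illustrative helper names; the real logic lives in the hash join's bucket/batch computation):

```python
def rotr32(x, n):
    # 32-bit rotate right: bits shifted out at the bottom reappear at
    # the top instead of being lost, unlike a plain right shift.
    n &= 31
    return ((x >> n) | (x << (32 - n))) & 0xFFFFFFFF

def bucket_and_batch(hashvalue, log2_nbuckets, nbatch):
    # Sketch: take bucket bits from the low end of the hash, then
    # rotate those bits away so the batch number comes from bits that
    # are disjoint from the bucket number whenever enough bits exist.
    bucket = hashvalue & ((1 << log2_nbuckets) - 1)
    batch = rotr32(hashvalue, log2_nbuckets) & (nbatch - 1)
    return bucket, batch
```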
Disallow null category in crosstab_hash
commit : 70fc6c4ef3c3790284796db3935d6f2d040b60bd author : Joe Conway <email@example.com> date : Mon, 23 Dec 2019 13:34:05 -0500 committer: Joe Conway <firstname.lastname@example.org> date : Mon, 23 Dec 2019 13:34:05 -0500
While building a hash map of categories in load_categories_hash, the resulting category names have not thus far been checked to ensure they are not null. Prior to pg12, null category names worked, to the extent that they did not crash, on some platforms: the system libraries there have an snprintf that can cope with being passed a null pointer for a string argument. But even in those cases null categories did nothing useful, and on other platforms they crashed. As of pg12, our own version of snprintf gets called; it does not deal with null pointer arguments at all and crashes consistently. Fix that by disallowing null categories. They never worked usefully, and no one has ever asked for them to work. Back-patch to all supported branches. Reported-By: Ireneusz Pluta Discussion: https://email@example.com
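The shape of the fix, rejecting NULL category names up front while building the map, can be sketched as follows (a hypothetical function for illustration; the real code is C in contrib/tablefunc):

```python
def load_categories(category_names):
    # Sketch: build the category -> column-number map, but reject NULL
    # (None) categories up front rather than letting them reach code
    # that cannot cope with a null string pointer.
    categories = {}
    for colnum, name in enumerate(category_names):
        if name is None:
            raise ValueError("crosstab category value must not be null")
        categories[name] = colnum
    return categories
```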
Prevent a rowtype from being included in itself via a range.
commit : 6609c3ad984bde125ff6ec4e3e056a5b23f68210 author : Tom Lane <firstname.lastname@example.org> date : Mon, 23 Dec 2019 12:08:24 -0500 committer: Tom Lane <email@example.com> date : Mon, 23 Dec 2019 12:08:24 -0500
We probably should have thought of this case when ranges were added, but we didn't. (It's not the fault of commit eb51af71f, because ranges didn't exist then.) It's an old bug, so back-patch to all supported branches. Discussion: https://firstname.lastname@example.org
Avoid low-probability regression test failures in timestamp[tz] tests.
commit : 365052abbc114a4269405bd7ac97b4b86dcfdca7 author : Tom Lane <email@example.com> date : Sun, 22 Dec 2019 18:00:18 -0500 committer: Tom Lane <firstname.lastname@example.org> date : Sun, 22 Dec 2019 18:00:18 -0500
If the first transaction block in these tests were entered exactly at midnight (California time), they'd report a bogus failure due to 'now' and 'midnight' having the same values. Commit 8c2ac75c5 had dismissed this as being of negligible probability, but we've now seen it happen in the buildfarm, so let's prevent it. We can get pretty much the same test coverage without an it's-not-midnight assumption by moving the does-'now'-work cases into their own test step. While here, apply commit 47169c255's s/DELETE/TRUNCATE/ change to timestamptz as well as timestamp (not sure why that didn't occur to me at the time; the risk of failure is the same). Back-patch to all supported branches, since the main point is to get rid of potential buildfarm failures. Discussion: https://email@example.com
In pgwin32_open, loop after ERROR_ACCESS_DENIED only if we can't stat.
commit : 35b28d98335e704a8f3fffb241e837daec95533a author : Tom Lane <firstname.lastname@example.org> date : Sat, 21 Dec 2019 17:39:37 -0500 committer: Tom Lane <email@example.com> date : Sat, 21 Dec 2019 17:39:37 -0500
This fixes a performance problem introduced by commit 6d7547c21. ERROR_ACCESS_DENIED is returned in some other cases besides the delete-pending case considered by that commit; notably, if the given path names a directory instead of a plain file. In that case we'll uselessly loop for 1 second before returning the failure condition. That slows down some usage scenarios enough to cause test timeout failures on our Windows buildfarm critters. To fix, try to stat() the file, and sleep/loop only if that fails. It will fail in the delete-pending case, and also in the case where the deletion completed before we could stat(), so we have the cases where we want to loop covered. In the directory case, the stat() should succeed, letting us exit without a wait. One case where we'll still wait uselessly is if the access-denied problem pertains to a directory in the given pathname. But we don't expect that to happen in any performance-critical code path. There might be room to refine this further, but I'll push it now in hopes of making the buildfarm green again. Back-patch, like the preceding commit. Alexander Lakhin and Tom Lane Discussion: https://firstname.lastname@example.org
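The control flow of the fix can be modeled in portable terms. This is an illustrative sketch only, with invented names: `try_open` and `stat_ok` stand in for the Windows open attempt and the `stat()` probe described above, so none of this is the actual pgwin32_open source.

```c
#include <stdbool.h>

typedef bool (*probe_fn)(void *ctx);

/*
 * On an access-denied failure, loop only if stat() also fails (the
 * delete-pending case, or the deletion already completed).  If stat()
 * succeeds -- e.g. the path names a directory -- give up immediately
 * instead of uselessly sleeping for a second.
 */
int open_with_retry(probe_fn try_open, probe_fn stat_ok, void *ctx,
                    int max_retries)
{
    for (int i = 0; i <= max_retries; i++)
    {
        if (try_open(ctx))
            return 0;       /* opened successfully */
        if (stat_ok(ctx))
            return -1;      /* file exists: real access problem, no wait */
        /* stat failed too: plausibly delete-pending, so retry */
    }
    return -1;              /* retries exhausted */
}

/* Demo probes: a directory-like target where open fails but stat succeeds. */
bool demo_open_fails(void *ctx) { int *n = ctx; (*n)++; return false; }
bool demo_stat_ok(void *ctx)   { (void) ctx; return true; }
```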
docs: clarify handling of column lists in COPY TO/FROM
commit : dd4c4eaab724835f4553656efb6451b79ffbb9e1 author : Bruce Momjian <email@example.com> date : Sat, 21 Dec 2019 12:44:38 -0500 committer: Bruce Momjian <firstname.lastname@example.org> date : Sat, 21 Dec 2019 12:44:38 -0500
Previously it was unclear how COPY FROM handled cases where not all columns were specified, or if the order didn't match. Reported-by: email@example.com Discussion: https://firstname.lastname@example.org Backpatch-through: 9.4
libpq should expose GSS-related parameters even when not implemented.
commit : 5e22a111185ccf5fe1d84b980d947b5427a50e64 author : Tom Lane <email@example.com> date : Fri, 20 Dec 2019 15:34:08 -0500 committer: Tom Lane <firstname.lastname@example.org> date : Fri, 20 Dec 2019 15:34:08 -0500
We realized years ago that it's better for libpq to accept all connection parameters syntactically, even if some are ignored or restricted due to lack of the feature in a particular build. However, that lesson from the SSL support was for some reason never applied to the GSSAPI support. This is causing various buildfarm members to have problems with a test case added by commit 6136e94dc, and it's just a bad idea from a user-experience standpoint anyway, so fix it. While at it, fix some places where parameter-related infrastructure was added with the aid of a dartboard, or perhaps with the aid of the anti-pattern "add new stuff at the end". It should be safe to rearrange the contents of struct pg_conn even in released branches, since that's private to libpq (and we'd have to move some fields in some builds to fix this, anyway). Back-patch to all supported branches. Discussion: https://email@example.com
Fix error reporting for index expressions of prohibited types.
commit : da5dd421833b2cd074cb752dd8ecfbe6a4735a4f author : Tom Lane <firstname.lastname@example.org> date : Tue, 17 Dec 2019 17:44:28 -0500 committer: Tom Lane <email@example.com> date : Tue, 17 Dec 2019 17:44:28 -0500
If CheckAttributeType() threw an error about the datatype of an index expression column, it would report an empty column name, which is pretty unhelpful and certainly not the intended behavior. I (tgl) evidently broke this in commit cfc5008a5, by not noticing that the column's attname was used above where I'd placed the assignment of it. In HEAD and v12, this is trivially fixable by moving up the assignment of attname. Before v12 the code is a bit more messy; to avoid doing substantial refactoring, I took the lazy way out and just put in two copies of the assignment code. Report and patch by Amit Langote. Back-patch to all supported branches. Discussion: https://postgr.es/m/CA+HiwqFA+BGyBFimjiYXXMa2Hc3fcL0+OJOyzUNjhU4NCa_XXw@mail.gmail.com
On Windows, wait a little to see if ERROR_ACCESS_DENIED goes away.
commit : cd03803512bf9484d2a045bf877a8ddc2cb47059 author : Tom Lane <firstname.lastname@example.org> date : Mon, 16 Dec 2019 15:10:55 -0500 committer: Tom Lane <email@example.com> date : Mon, 16 Dec 2019 15:10:55 -0500
Attempting to open a file fails with ERROR_ACCESS_DENIED if the file is flagged for deletion but not yet actually gone (another in a long list of reasons why Windows is broken, if you ask me). This seems likely to explain a lot of irreproducible failures we see in the buildfarm. This state generally persists for only a millisecond or so, so just wait a bit and retry. If it's a real permissions problem, we'll eventually give up and report it as such. If it's the pending deletion case, we'll see file-not-found and report that after the deletion completes, and the caller will treat that in an appropriate way. In passing, rejigger the existing retry logic for some other error cases so that we don't uselessly wait an extra time when we're not going to retry anymore. Alexander Lakhin (with cosmetic tweaks by me). Back-patch to all supported branches, since this seems like a pretty safe change and the problem is definitely real. Discussion: https://firstname.lastname@example.org
Fix EXTRACT(ISOYEAR FROM timestamp) for years BC.
commit : 323c47925af9e53c4c0476553e3e13a86f3d8173 author : Tom Lane <email@example.com> date : Thu, 12 Dec 2019 12:30:44 -0500 committer: Tom Lane <firstname.lastname@example.org> date : Thu, 12 Dec 2019 12:30:44 -0500
The test cases added by commit 26ae3aa80 exposed an old oversight in timestamp[tz]_part: they didn't correct the result of date2isoyear() for BC years, so that we produced an off-by-one answer for such years. Fix that, and back-patch to all supported branches. Discussion: https://postgr.es/m/SG2PR06MB37762CAE45DB0F6CA7001EA9B6550@SG2PR06MB3776.apcprd06.prod.outlook.com
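A minimal sketch of the off-by-one being corrected, under the assumption (as the commit describes) that the internal year count uses astronomical numbering (year 0 for 1 BC) while EXTRACT reports BC years as negative with no year zero; the helper name is invented.

```c
/*
 * Map an internal (astronomical) year number to the value EXTRACT
 * should report.  Internally 1 BC is year 0, 2 BC is -1, and so on;
 * without the adjustment, 1 BC came out as 0 and every BC year was
 * off by one.
 */
int isoyear_for_extract(int internal_year)
{
    if (internal_year <= 0)
        return internal_year - 1;   /* 0 -> -1 (1 BC), -1 -> -2 (2 BC) */
    return internal_year;
}
```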
Remove redundant function calls in timestamp[tz]_part().
commit : e284e0e2f9da268754ee0deb814b5a7310db0d0e author : Tom Lane <email@example.com> date : Thu, 12 Dec 2019 12:12:36 -0500 committer: Tom Lane <firstname.lastname@example.org> date : Thu, 12 Dec 2019 12:12:36 -0500
The DTK_DOW/DTK_ISODOW and DTK_DOY switch cases in timestamp_part() and timestamptz_part() contained calls of timestamp2tm() that were fully redundant with the ones done just above the switch. This evidently crept in during commit 258ee1b63, which relocated that code from another place where the calls were indeed needed. Just delete the redundant calls. I (tgl) noted that our test coverage of these functions left quite a bit to be desired, so extend timestamp.sql and timestamptz.sql to cover all the branches. Back-patch to all supported branches, as the previous commit was. There's no real issue here other than some wasted cycles in some not-too-heavily-used code paths, but the test coverage seems valuable. Report and patch by Li Japin; test case adjustments by me. Discussion: https://postgr.es/m/SG2PR06MB37762CAE45DB0F6CA7001EA9B6550@SG2PR06MB3776.apcprd06.prod.outlook.com
Doc: back-patch documentation about limitations of CHECK constraints.
commit : 6bf23e8c6e066fb7554b9076355afdfc103ad134 author : Tom Lane <email@example.com> date : Wed, 11 Dec 2019 15:53:36 -0500 committer: Tom Lane <firstname.lastname@example.org> date : Wed, 11 Dec 2019 15:53:36 -0500
Back-patch commits 36d442a25 and 1f66c657f into all supported branches. I'd considered doing this when putting in the latter commit, but failed to pull the trigger. Now that we've had an actual field complaint about the lack of such docs, let's do it. Per bug #16158 from Piotr Jander. Original patches by Lætitia Avrot, Patrick Francelle, and me. Discussion: https://email@example.com
Fix race condition in our Windows signal emulation.
commit : 1a0c65120137786bb2667cb935954b2982d5d96f author : Tom Lane <firstname.lastname@example.org> date : Mon, 9 Dec 2019 15:03:52 -0500 committer: Tom Lane <email@example.com> date : Mon, 9 Dec 2019 15:03:52 -0500
pg_signal_dispatch_thread() responded to the client (signal sender) and disconnected the pipe before actually setting the shared variables that make the signal visible to the backend process's main thread. In the worst case, it seems, effective delivery of the signal could be postponed for as long as the machine has any other work to do. To fix, just move the pg_queue_signal() call so that we do it before responding to the client. This essentially makes pgkill() synchronous, which is a stronger guarantee than we have on Unix. That may be overkill, but on the other hand we have not seen comparable timing bugs on any Unix platform. While at it, add some comments to this sadly underdocumented code. Problem diagnosis and fix by Amit Kapila; I just added the comments. Back-patch to all supported versions, as it appears that this can cause visible NOTIFY timing oddities on all of them, and there might be other misbehavior due to slow delivery of other signals. Discussion: https://firstname.lastname@example.org
Document search_path security with untrusted dbowner or CREATEROLE.
commit : 3056258149c1aea7341a4d81bd502e1a1c8198a6 author : Noah Misch <email@example.com> date : Sun, 8 Dec 2019 11:06:26 -0800 committer: Noah Misch <firstname.lastname@example.org> date : Sun, 8 Dec 2019 11:06:26 -0800
Commit 5770172cb0c9df9e6ce27c507b449557e5b45124 wrote, incorrectly, that certain schema usage patterns are secure against CREATEROLE users and database owners. When an untrusted user is the database owner or holds CREATEROLE privilege, a query is secure only if its session started with SELECT pg_catalog.set_config('search_path', '', false) or equivalent. Back-patch to 9.4 (all supported versions). Discussion: https://postgr.es/m/20191013013512.GC4131753@rfd.leadboat.com
Ensure maxlen is at least 1 in dict_int
commit : a2fdeb7863a684b661b0fcbaf90f00595be11bd0 author : Tomas Vondra <email@example.com> date : Tue, 3 Dec 2019 16:55:51 +0100 committer: Tomas Vondra <firstname.lastname@example.org> date : Tue, 3 Dec 2019 16:55:51 +0100
The dict_int text search dictionary template accepts a maxlen parameter, which is then used to cap the length of input strings. The value was not properly checked, and the code simply does txt[d->maxlen] = '\0'; to insert a terminator, leading to segfaults with negative values. This commit simply rejects values less than 1. The issue was there since dict_int was introduced in 9.3, so backpatch all the way back to 9.4, which is the oldest supported version. Reported-by: cili Discussion: https://email@example.com Backpatch-through: 9.4
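A sketch of the guard and the code path it protects, with invented names; the real fix rejects maxlen values below 1 when the dictionary parameters are set, before any input reaches the truncation step.

```c
#include <stdlib.h>
#include <string.h>

typedef struct DictIntState { int maxlen; } DictIntState;

/*
 * Reject maxlen < 1 at initialization, as the fix does; previously a
 * negative value flowed straight into txt[d->maxlen] = '\0'.
 */
int dict_int_set_maxlen(DictIntState *d, int maxlen)
{
    if (maxlen < 1)
        return -1;          /* the real code reports an error here */
    d->maxlen = maxlen;
    return 0;
}

/* Cap an input token at maxlen characters (caller frees the result). */
char *dict_int_truncate(const DictIntState *d, const char *input)
{
    size_t len = strlen(input);
    size_t keep = len > (size_t) d->maxlen ? (size_t) d->maxlen : len;
    char *txt = malloc(keep + 1);

    if (txt == NULL)
        return NULL;
    memcpy(txt, input, keep);
    txt[keep] = '\0';
    return txt;
}
```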
Fix misbehavior with expression indexes on ON COMMIT DELETE ROWS tables.
commit : cfffa8a6b22d8813c30fd524affc07c0c580ea58 author : Tom Lane <firstname.lastname@example.org> date : Sun, 1 Dec 2019 13:09:27 -0500 committer: Tom Lane <email@example.com> date : Sun, 1 Dec 2019 13:09:27 -0500
We implement ON COMMIT DELETE ROWS by truncating tables marked that way, which requires also truncating/rebuilding their indexes. But RelationTruncateIndexes asks the relcache for up-to-date copies of any index expressions, which may cause execution of eval_const_expressions on them, which can result in actual execution of subexpressions. This is a bad thing to have happening during ON COMMIT. Manuel Rigger reported that use of a SQL function resulted in crashes due to expectations that ActiveSnapshot would be set, which it isn't. The most obvious fix perhaps would be to push a snapshot during PreCommit_on_commit_actions, but I think that would just open the door to more problems: CommitTransaction explicitly expects that no user-defined code can be running at this point. Fortunately, since we know that no tuples exist to be indexed, there seems no need to use the real index expressions or predicates during RelationTruncateIndexes. We can set up dummy index expressions instead (we do need something that will expose the right data type, as there are places that build index tupdescs based on this), and just ignore predicates and exclusion constraints. In a green field it'd likely be better to reimplement ON COMMIT DELETE ROWS using the same "init fork" infrastructure used for unlogged relations. That seems impractical without catalog changes though, and even without that it'd be too big a change to back-patch. So for now do it like this. Per private report from Manuel Rigger. This has been broken forever, so back-patch to all supported branches.
Fix off-by-one error in PGTYPEStimestamp_fmt_asc
commit : a17602de18f7ff7821054b3b7f51eadf9fb59d62 author : Tomas Vondra <firstname.lastname@example.org> date : Sat, 30 Nov 2019 14:51:27 +0100 committer: Tomas Vondra <email@example.com> date : Sat, 30 Nov 2019 14:51:27 +0100
When using %b or %B patterns to format a date, the code was simply using tm_mon as an index into array of month names. But that is wrong, because tm_mon is 1-based, while array indexes are 0-based. The result is we either use name of the next month, or a segfault (for December). Fix by subtracting 1 from tm_mon for both patterns, and add a regression test triggering the issue. Backpatch to all supported versions (the bug is there far longer, since at least 2003). Reported-by: Paul Spencer Backpatch-through: 9.4 Discussion: https://postgr.es/m/16143-0d861eb8688d3fef%40postgresql.org
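The indexing bug is easy to show in miniature. Per the commit description, the PGTYPES month field is 1-based (unlike struct tm), so the lookup must subtract 1; the names below are illustrative, not copied from the ecpg source.

```c
#include <stddef.h>

static const char *const month_names[] = {
    "January", "February", "March", "April", "May", "June",
    "July", "August", "September", "October", "November", "December"
};

/* tm_mon is 1-based here; without the -1, December (12) read past the array. */
const char *format_month(int tm_mon)
{
    if (tm_mon < 1 || tm_mon > 12)
        return NULL;
    return month_names[tm_mon - 1];
}
```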
Fix typo in comment.
commit : 5f55e4c061d376304f979fef0f9aebae94e286bf author : Etsuro Fujita <firstname.lastname@example.org> date : Wed, 27 Nov 2019 16:00:53 +0900 committer: Etsuro Fujita <email@example.com> date : Wed, 27 Nov 2019 16:00:53 +0900
Avoid assertion failure with LISTEN in a serializable transaction.
commit : 864e8080e19007ed3377d679447d2fc06f148bc8 author : Tom Lane <firstname.lastname@example.org> date : Sun, 24 Nov 2019 15:57:32 -0500 committer: Tom Lane <email@example.com> date : Sun, 24 Nov 2019 15:57:32 -0500
If LISTEN is the only action in a serializable-mode transaction, and the session was not previously listening, and the notify queue is not empty, predicate.c reported an assertion failure. That happened because we'd acquire the transaction's initial snapshot during PreCommit_Notify, which was called *after* predicate.c expects any such snapshot to have been established. To fix, just swap the order of the PreCommit_Notify and PreCommit_CheckForSerializationFailure calls during CommitTransaction. This will imply holding the notify-insertion lock slightly longer, but the difference could only be meaningful in serializable mode, which is an expensive option anyway. It appears that this is just an assertion failure, with no consequences in non-assert builds. A snapshot used only to scan the notify queue could not have been involved in any serialization conflicts, so there would be nothing for PreCommit_CheckForSerializationFailure to do except assign it a prepareSeqNo and set the SXACT_FLAG_PREPARED flag. And given no conflicts, neither of those omissions affect the behavior of ReleasePredicateLocks. This admittedly once-over-lightly analysis is backed up by the lack of field reports of trouble. Per report from Mark Dilger. The bug is old, so back-patch to all supported branches; but the new test case only goes back to 9.6, for lack of adequate isolationtester infrastructure before that. Discussion: https://firstname.lastname@example.org Discussion: https://email@example.com
Defend against self-referential views in relation_is_updatable().
commit : bcd541897fcf89b60494700f1f15ff70fae5ced7 author : Tom Lane <firstname.lastname@example.org> date : Thu, 21 Nov 2019 16:21:44 -0500 committer: Tom Lane <email@example.com> date : Thu, 21 Nov 2019 16:21:44 -0500
While a self-referential view doesn't actually work, it's possible to create one, and it turns out that this breaks some of the information_schema views. Those views call relation_is_updatable(), which neglected to consider the hazards of being recursive. In older PG versions you get a "stack depth limit exceeded" error, but since v10 it'd recurse to the point of stack overrun and crash, because commit a4c35ea1c took out the expression_returns_set() call that was incidentally checking the stack depth. Since this function is only used by information_schema views, it seems like it'd be better to return "not updatable" than suffer an error. Hence, add tracking of what views we're examining, in just the same way that the nearby fireRIRrules() code detects self-referential views. I added a check_stack_depth() call too, just to be defensive. Per private report from Manuel Rigger. Back-patch to all supported versions.
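The tracking idea can be modeled in miniature. This toy maps each relation to the single relation it is defined over (0 meaning a plain table), which is far simpler than the real rewriter, but it shows the cycle check that replaces the unbounded recursion.

```c
#include <stdbool.h>

#define MAX_NESTING 64

/*
 * Toy model of the cycle check: def[rel] is the relation that view
 * 'rel' selects from, with 0 meaning 'rel' is a plain (updatable)
 * table.  A view already on the visiting list is self-referential,
 * so report "not updatable" rather than recursing forever.
 */
bool toy_is_updatable(unsigned rel, const unsigned *def,
                      unsigned *visiting, int depth)
{
    for (int i = 0; i < depth; i++)
        if (visiting[i] == rel)
            return false;       /* recursion detected */
    if (depth >= MAX_NESTING)
        return false;           /* defensive depth limit */
    if (def[rel] == 0)
        return true;            /* plain table: updatable */
    visiting[depth] = rel;
    return toy_is_updatable(def[rel], def, visiting, depth + 1);
}
```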
Remove incorrect markup
commit : 31d3da740cf888b7d59ab7ea82e329fe536a38d6 author : Magnus Hagander <firstname.lastname@example.org> date : Wed, 20 Nov 2019 17:03:07 +0100 committer: Magnus Hagander <email@example.com> date : Wed, 20 Nov 2019 17:03:07 +0100
Author: Daniel Gustafsson <firstname.lastname@example.org>
Revise GIN README
commit : 8165384babd96258ed521c4ea2567c70a1849948 author : Alexander Korotkov <email@example.com> date : Tue, 19 Nov 2019 23:11:24 +0300 committer: Alexander Korotkov <firstname.lastname@example.org> date : Tue, 19 Nov 2019 23:11:24 +0300
We find GIN concurrency bugs from time to time. One of the problems here is that the concurrency of GIN isn't well-documented in the README. So, it might even be hard to distinguish design bugs from implementation bugs. This commit revises the concurrency section in the GIN README, providing more details. Some examples are illustrated in ASCII art. Also, this commit adds an explanation of how the tuple layout of internal GIN B-tree pages differs from nbtree. Discussion: https://postgr.es/m/CAPpHfduXR_ywyaVN4%2BOYEGaw%3DcPLzWX6RxYLBncKw8de9vOkqw%40mail.gmail.com Author: Alexander Korotkov Reviewed-by: Peter Geoghegan Backpatch-through: 9.4
Fix traversing to the deleted GIN page via downlink
commit : 4fc4856849deb51c8d8822d79b6f10d571dab4f6 author : Alexander Korotkov <email@example.com> date : Tue, 19 Nov 2019 23:08:14 +0300 committer: Alexander Korotkov <firstname.lastname@example.org> date : Tue, 19 Nov 2019 23:08:14 +0300
The current GIN code does not handle traversing to a deleted page via a downlink. This commit fixes that by stepping right from the deleted page, like we do in nbtree. This commit also fixes the setting of the 'deleted' flag on GIN pages: other page flags are no longer erased once a page is deleted. That helps keep our assertions true if we arrive at a deleted page via a downlink. Discussion: https://postgr.es/m/CAPpHfdvMvsw-NcE5bRS7R1BbvA4BxoDnVVjkXC5W0Czvy9LVrg%40mail.gmail.com Author: Alexander Korotkov Reviewed-by: Peter Geoghegan Backpatch-through: 9.4
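The step-right behavior can be sketched with a toy sibling list; the structure and names below are invented for illustration and stand in for the real GIN page header and right-link.

```c
#include <stdbool.h>
#include <stddef.h>

/* Toy page: a deleted flag plus a right-sibling link. */
typedef struct ToyPage
{
    bool deleted;
    struct ToyPage *right;
} ToyPage;

/*
 * Following a downlink may land on a page that was deleted
 * concurrently.  Instead of failing, step right (as nbtree does)
 * until a live page is found; NULL means we ran off the end of
 * the level.
 */
ToyPage *land_on_live_page(ToyPage *p)
{
    while (p != NULL && p->deleted)
        p = p->right;
    return p;
}
```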
Doc: clarify use of RECURSIVE in WITH.
commit : 4871270e38e5ab0490b78c37ea7d4ccf3240edce author : Tom Lane <email@example.com> date : Tue, 19 Nov 2019 14:43:37 -0500 committer: Tom Lane <firstname.lastname@example.org> date : Tue, 19 Nov 2019 14:43:37 -0500
Apparently some people misinterpreted the syntax as being that RECURSIVE is a prefix of individual WITH queries. It's a modifier for the WITH clause as a whole, so state that more clearly. Discussion: https://email@example.com
Doc: clarify behavior of ALTER DEFAULT PRIVILEGES ... IN SCHEMA.
commit : de8c2d38fddc4be2a4cd0c5108b428daa5cc781e author : Tom Lane <firstname.lastname@example.org> date : Tue, 19 Nov 2019 14:21:42 -0500 committer: Tom Lane <email@example.com> date : Tue, 19 Nov 2019 14:21:42 -0500
The existing text stated that "Default privileges that are specified per-schema are added to whatever the global default privileges are for the particular object type". However, that bare-bones observation is not quite clear enough, as demonstrated by the complaint in bug #16124. Flesh it out by stating explicitly that you can't revoke built-in default privileges this way, and by providing an example to drive the point home. Back-patch to all supported branches, since it's been like this from the beginning. Discussion: https://firstname.lastname@example.org
Further fix dumping of views that contain just VALUES(...).
commit : ecb533af623738f49a8bd748e688c7f02c1c0f9f author : Tom Lane <email@example.com> date : Sat, 16 Nov 2019 20:00:20 -0500 committer: Tom Lane <firstname.lastname@example.org> date : Sat, 16 Nov 2019 20:00:20 -0500
It turns out that commit e9f1c01b7 missed a case: we must print a VALUES clause in long format if get_query_def is given a resultDesc that would require the query's output column name(s) to be different from what the bare VALUES clause would produce. This applies in case an ALTER ... RENAME COLUMN has been done to a view that formerly could be printed in simple format, as shown in the added regression test case. It also explains bug #16119 from Dmitry Telpt, because it turns out that (unlike CREATE VIEW) CREATE MATERIALIZED VIEW fails to apply any column aliases it's given to the stored ON SELECT rule. So to get them to be printed, we have to account for the resultDesc renaming. It might be worth changing the matview code so that it creates the ON SELECT rule with the correct aliases; but we'd still need these messy checks in get_simple_values_rte to handle the case of a subsequent column rename, so any such change would be just neatnik-ism not a bug fix. Like the previous patch, back-patch to all supported branches. Discussion: https://email@example.com
Handle arrays and ranges in pg_upgrade's test for non-upgradable types.
commit : fb26754af4da8bdb25ca3bc8841714c4101c4107 author : Tom Lane <firstname.lastname@example.org> date : Wed, 13 Nov 2019 11:35:37 -0500 committer: Tom Lane <email@example.com> date : Wed, 13 Nov 2019 11:35:37 -0500
pg_upgrade needs to check whether certain non-upgradable data types appear anywhere on-disk in the source cluster. It knew that it has to check for these types being contained inside domains and composite types; but it somehow overlooked that they could be contained in arrays and ranges, too. Extend the existing recursive-containment query to handle those cases. We probably should have noticed this oversight while working on commit 0ccfc2822 and follow-ups, but we failed to :-(. The whole thing's possibly a bit overdesigned, since we don't really expect that any of these types will appear on disk; but if we're going to the effort of doing a recursive search then it's silly not to cover all the possibilities. While at it, refactor so that we have only one copy of the search logic, not three-and-counting. Also, to keep the branches looking more alike, back-patch the output wording change of commit 1634d3615. Back-patch to all supported branches. Discussion: https://firstname.lastname@example.org