commit : 30ffdd24d7222bc01183a56d536c236240674516 author : Tom Lane <firstname.lastname@example.org> date : Mon, 10 Feb 2020 17:25:31 -0500 committer: Tom Lane <email@example.com> date : Mon, 10 Feb 2020 17:25:31 -0500
Last-minute updates for release notes.
commit : f6117744d14017cb11a6ddd95d4f44b114d871c7 author : Tom Lane <firstname.lastname@example.org> date : Mon, 10 Feb 2020 12:51:07 -0500 committer: Tom Lane <email@example.com> date : Mon, 10 Feb 2020 12:51:07 -0500
createuser: fix parsing of --connection-limit argument
commit : 6f1e443a65eeee84ec57bd3eb57e54bb5fafbf51 author : Alvaro Herrera <firstname.lastname@example.org> date : Mon, 10 Feb 2020 12:14:58 -0300 committer: Alvaro Herrera <email@example.com> date : Mon, 10 Feb 2020 12:14:58 -0300
The original coding failed to quote the argument properly. Reported-by: Daniel Gustafsson Discussion: 1B8AE66C-85AB-4728-9BB4-612E8E61C219@yesql.se
commit : 3a1acb6b70eb06820c826b0eebb8848be5bc26e8 author : Peter Eisentraut <firstname.lastname@example.org> date : Mon, 10 Feb 2020 12:52:24 +0100 committer: Peter Eisentraut <email@example.com> date : Mon, 10 Feb 2020 12:52:24 +0100
Source-Git-URL: https://git.postgresql.org/git/pgtranslation/messages.git Source-Git-Hash: b0010b20ca9b3de3a7ca6e908948ffab7cd3f467
Release notes for 12.2, 11.7, 10.12, 9.6.17, 9.5.21, 9.4.26.
commit : dbae284f317f2fa5c9add9141d5c8f2d8b5140df author : Tom Lane <firstname.lastname@example.org> date : Sun, 9 Feb 2020 14:14:19 -0500 committer: Tom Lane <email@example.com> date : Sun, 9 Feb 2020 14:14:19 -0500
Add note about access permission checks by inherited TRUNCATE and LOCK TABLE.
commit : bf1840255123d90e777c72341d36149d09bef0e5 author : Fujii Masao <firstname.lastname@example.org> date : Fri, 7 Feb 2020 00:33:11 +0900 committer: Fujii Masao <email@example.com> date : Fri, 7 Feb 2020 00:33:11 +0900
Inherited queries perform access permission checks on the parent table only. But there are two exceptions to this rule in v12 and before: TRUNCATE and LOCK TABLE commands issued through a parent table check the permissions not only on the parent table but also on the child tables. Previously these exceptions were not documented. This commit adds a note about them to the documentation. Back-patch to v9.4, but not to master, because commit e6f1e560e4 already got rid of the exception for inherited TRUNCATE and an upcoming commit will do the same for inherited LOCK TABLE. Author: Amit Langote Reviewed-by: Fujii Masao Discussion: https://postgr.es/m/CA+HiwqHfTnMU6SUkyHxCmpHUKk7ERLHCR3vZVq19ZOQBjPBLmQ@mail.gmail.com
Revert commit 56bc82a511.
commit : d034ab0bb2ae9300e91f9272abd4872bf53ce8d7 author : Fujii Masao <firstname.lastname@example.org> date : Mon, 3 Feb 2020 12:45:01 +0900 committer: Fujii Masao <email@example.com> date : Mon, 3 Feb 2020 12:45:01 +0900
This commit reverts the fix "Make inherited TRUNCATE perform access permission checks on parent table only" in the back branches only. It's not hard to imagine that some applications expect the old behavior, and that the fix would break their security. To avoid this compatibility problem, we decided to apply the fix only in HEAD and to revert it in all supported back branches. Discussion: https://firstname.lastname@example.org
Fix memory leak on DSM slot exhaustion.
commit : 95936c795b9f086c2b413b5150d2993ac30354fa author : Thomas Munro <email@example.com> date : Sat, 1 Feb 2020 14:29:13 +1300 committer: Thomas Munro <firstname.lastname@example.org> date : Sat, 1 Feb 2020 14:29:13 +1300
If we attempt to create a DSM segment when no slots are available, we should return the memory to the operating system. Previously we did that if the DSM_CREATE_NULL_IF_MAXSEGMENTS flag was passed in, but we didn't do it if an error was raised. Repair. Back-patch to 9.4, where DSM segments arrived. Author: Thomas Munro Reviewed-by: Robert Haas Reported-by: Julian Backes Discussion: https://postgr.es/m/CA%2BhUKGKAAoEw-R4om0d2YM4eqT1eGEi6%3DQot-3ceDR-SLiWVDw%40mail.gmail.com
Fix CheckAttributeType's handling of collations for ranges.
commit : f521ef0ae3199ed2d16fc11a865738f7fe54f38b author : Tom Lane <email@example.com> date : Fri, 31 Jan 2020 17:03:55 -0500 committer: Tom Lane <firstname.lastname@example.org> date : Fri, 31 Jan 2020 17:03:55 -0500
Commit fc7695891 changed CheckAttributeType to recurse into ranges, but made it pass down the wrong collation (always InvalidOid, since ranges as such have no collation). This would result in guaranteed failure when considering a range type whose subtype is collatable. Embarrassingly, we lack any regression tests that would expose such a problem (but fortunately, somebody noticed before we shipped this bug in any release). Fix it to pass down the range's subtype collation property instead, and add some regression test cases to exercise collatable-subtype ranges a bit more. Back-patch to all supported branches, as the previous patch was. Report and patch by Julien Rouhaud, test cases tweaked by me Discussion: https://postgr.es/m/CAOBaU_aBWqNweiGUFX0guzBKkcfJ8mnnyyGC_KBQmO12Mj5f_A@mail.gmail.com
Fix parallel pg_dump/pg_restore for failure to create worker processes.
commit : 5d60df8306c89f6a813d0a1935807a83e43f7968 author : Tom Lane <email@example.com> date : Fri, 31 Jan 2020 14:41:49 -0500 committer: Tom Lane <firstname.lastname@example.org> date : Fri, 31 Jan 2020 14:41:49 -0500
If we failed to fork a worker process, or create a communication pipe for one, WaitForTerminatingWorkers would suffer an assertion failure if assert-enabled, otherwise crash or go into an infinite loop. This was a consequence of not accounting for the startup condition where we've not yet forked all the workers. The original bug was that ParallelBackupStart would set workerStatus to WRKR_IDLE before it had successfully forked a worker. I made things worse in commit b7b8cc0cf by not understanding the undocumented fact that the WRKR_TERMINATED state was also meant to represent the case where a worker hadn't been started yet: I changed enum T_WorkerStatus so that *all* the worker slots were initially in WRKR_IDLE state. But this wasn't any more broken in practice, since even one slot in the wrong state would keep WaitForTerminatingWorkers from terminating. In v10 and later, introduce an explicit T_WorkerStatus value for worker-not-started, in hopes of preventing future oversights of the same ilk. Before that, just document that WRKR_TERMINATED is supposed to cover that case (partly because it wasn't actively broken, and partly because the enum is exposed outside parallel.c in those branches, so there's microscopically more risk involved in changing it). In all branches, introduce a WORKER_IS_RUNNING status test macro to hide which T_WorkerStatus values mean that, and be more careful not to access ParallelSlot fields till we're sure they're valid. Per report from Vignesh C, though this is my patch not his. Back-patch to all supported branches. Discussion: https://postgr.es/m/CALDaNm1Luv-E3sarR+-unz-BjchquHHyfP+YC+2FS2pt_Jemail@example.com
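The state machine described above can be sketched in miniature. This is a Python analogue of the commit's description, not the actual C enum in pg_dump's parallel.c; the names mirror the commit message but are otherwise ours:

```python
from enum import Enum, auto

class WorkerStatus(Enum):
    NOT_STARTED = auto()   # the explicit worker-not-started state v10+ gained
    IDLE = auto()
    WORKING = auto()
    TERMINATED = auto()

def worker_is_running(status):
    # Analogue of the WORKER_IS_RUNNING test macro: callers no longer need
    # to know which individual states count as "running".
    return status in (WorkerStatus.IDLE, WorkerStatus.WORKING)
```

With an explicit NOT_STARTED state, a wait loop over worker slots can distinguish "never forked" from "was running and finished", which is exactly the ambiguity that made WaitForTerminatingWorkers misbehave.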
Make inherited TRUNCATE perform access permission checks on parent table only.
commit : 56bc82a5111737d76f211356eeb2b1a8e657e8b4 author : Fujii Masao <firstname.lastname@example.org> date : Fri, 31 Jan 2020 00:46:20 +0900 committer: Fujii Masao <email@example.com> date : Fri, 31 Jan 2020 00:46:20 +0900
Previously, a TRUNCATE command issued through a parent table checked the permissions not only on the parent table but also on the child tables inheriting from it. This was a bug: inherited queries should perform access permission checks on the parent table only. This commit fixes that. Back-patch to all supported branches. Author: Amit Langote Reviewed-by: Fujii Masao Discussion: https://postgr.es/m/CAHGQGwFHdSvifhJE+-GSNqUHSfbiKxaeQQ7HGcYz6SC2n_oDcg@mail.gmail.com
Fix an oversight in commit 4c70098ff.
commit : 8fc33e6cc10fcd801efff4d8ddbc22011fdd6108 author : Tom Lane <firstname.lastname@example.org> date : Thu, 23 Jan 2020 16:15:32 -0500 committer: Tom Lane <email@example.com> date : Thu, 23 Jan 2020 16:15:32 -0500
I had supposed that the from_char_seq_search() call sites were all passing the constant arrays you'd expect them to pass ... but on looking closer, the one for DY format was passing the days array not days_short. This accidentally worked because the day abbreviations in English are all the same as the first three letters of the full day names. However, once we took out the "maximum comparison length" logic, it stopped working. As penance for that oversight, add regression test cases covering this, as well as every other switch case in DCH_from_char() that was not reached according to the code coverage report. Also, fold the DCH_RM and DCH_rm cases into one --- now that seq_search is case independent, there's no need to pass different comparison arrays for those cases. Back-patch, as the previous commit was.
Clean up formatting.c's logic for matching constant strings.
commit : 600b953d73ca2f253ff177c43f6650187ae81c9e author : Tom Lane <firstname.lastname@example.org> date : Thu, 23 Jan 2020 13:42:10 -0500 committer: Tom Lane <email@example.com> date : Thu, 23 Jan 2020 13:42:10 -0500
seq_search(), which is used to match input substrings to constants such as month and day names, had a lot of bizarre and unnecessary behaviors. It was mostly possible to avert our eyes from that before, but we don't want to duplicate those behaviors in the upcoming patch to allow recognition of non-English month and day names. So it's time to clean this up. In particular: * seq_search scribbled on the input string, which is a pretty dangerous thing to do, especially in the badly underdocumented way it was done here. Fortunately the input string is a temporary copy, but that was being made three subroutine levels away, making it something easy to break accidentally. The behavior is externally visible nonetheless, in the form of odd case-folding in error reports about unrecognized month/day names. The scribbling is evidently being done to save a few calls to pg_tolower, but that's such a cheap function (at least for ASCII data) that it's pretty pointless to worry about. In HEAD I switched it to be pg_ascii_tolower to ensure it is cheap in all cases; but there are corner cases in Turkish where this'd change behavior, so leave it as pg_tolower in the back branches. * seq_search insisted on knowing the case form (all-upper, all-lower, or initcap) of the constant strings, so that it didn't have to case-fold them to perform case-insensitive comparisons. This likewise seems like excessive micro-optimization, given that pg_tolower is certainly very cheap for ASCII data. It seems unsafe to assume that we know the case form that will come out of pg_locale.c for localized month/day names, so it's better just to define the comparison rule as "downcase all strings before comparing". (The choice between downcasing and upcasing is arbitrary so far as English is concerned, but it might not be in other locales, so follow citext's lead here.) 
* seq_search also had a parameter that'd cause it to report a match after a maximum number of characters, even if the constant string were longer than that. This was not actually used because no caller passed a value small enough to cut off a comparison. Replicating that behavior for localized month/day names seems expensive as well as useless, so let's get rid of that too. * from_char_seq_search used the maximum-length parameter to truncate the input string in error reports about not finding a matching name. This leads to rather confusing reports in many cases. Worse, it is outright dangerous if the input string isn't all-ASCII, because we risk truncating the string in the middle of a multibyte character. That'd lead either to delivering an illegible error message to the client, or to encoding-conversion failures that obscure the actual data problem. Get rid of that in favor of truncating at whitespace if any (a suggestion due to Alvaro Herrera). In addition to fixing these things, I const-ified the input string pointers of DCH_from_char and its subroutines, to make sure there aren't any other scribbling-on-input problems. The risk of generating a badly-encoded error message seems like enough of a bug to justify back-patching, so patch all supported branches. Discussion: https://firstname.lastname@example.org
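The comparison rule the commit settles on, "downcase all strings before comparing", can be sketched as follows. This is a simplified Python illustration of the idea, not formatting.c's seq_search; the function name and the candidate list are ours:

```python
MONTHS = ["January", "February", "March", "April", "May", "June",
          "July", "August", "September", "October", "November", "December"]

def seq_search(text, candidates):
    """Return (index, matched_length) of the first candidate that is a
    case-insensitive prefix of text, or (-1, 0) if none matches.

    Both sides are folded to lower case at comparison time, so the
    search needs no assumptions about the case form of the constants,
    and the input string is never scribbled on."""
    folded = text.lower()
    for i, name in enumerate(candidates):
        if folded.startswith(name.lower()):
            return i, len(name)
    return -1, 0
```

Folding the constants at match time costs a few extra lowercase conversions per call, which is the micro-optimization the commit deliberately gives up in exchange for not needing to know the case form pg_locale.c will produce.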
Fix concurrent indexing operations with temporary tables
commit : d76652edc56f7892806fabf08cad23321fe842de author : Michael Paquier <email@example.com> date : Wed, 22 Jan 2020 09:49:48 +0900 committer: Michael Paquier <firstname.lastname@example.org> date : Wed, 22 Jan 2020 09:49:48 +0900
Attempting to use CREATE INDEX, DROP INDEX or REINDEX with CONCURRENTLY on a temporary relation with ON COMMIT actions triggered unexpected errors because those operations use multiple transactions internally to complete their work. For example, here is one confusing error when using ON COMMIT DELETE ROWS: ERROR: index "foo" already contains data This commit fixes these issues by enforcing the non-concurrent path for temporary relations even if CONCURRENTLY is used, transparently to the user. Taking the non-concurrent path does not matter in practice, as locks cannot be taken on a temporary relation by a session different from the one owning the relation, and the non-concurrent operation is more efficient. The problem exists with REINDEX since v12 with the introduction of CONCURRENTLY, and with CREATE/DROP INDEX since CONCURRENTLY exists for those commands. In all supported versions, this caused only confusing error messages to be generated. Note that with REINDEX, it was also possible to issue a REINDEX CONCURRENTLY for a temporary relation owned by a different session, leading to a server crash. The idea to enforce transparently the non-concurrent code path for temporary relations comes originally from Andres Freund. Reported-by: Manuel Rigger Author: Michael Paquier, Heikki Linnakangas Reviewed-by: Andres Freund, Álvaro Herrera, Heikki Linnakangas Discussion: https://postgr.es/m/CA+u7OA6gP7YAeCguyseusYcc=uR8+ypjCcgDDCTzjQ+k6S9ksQ@mail.gmail.com Backpatch-through: 9.4
Fix edge case leading to agg transitions skipping ExecAggTransReparent() calls.
commit : ba1dfbe22d3002ff56933ee6ebd26b9bc9be3d86 author : Andres Freund <email@example.com> date : Mon, 20 Jan 2020 23:26:51 -0800 committer: Andres Freund <firstname.lastname@example.org> date : Mon, 20 Jan 2020 23:26:51 -0800
The code checking whether an aggregate transition value needs to be reparented into the current context has always compared the transition return value with the previous transition value only by datum, i.e. without regard for NULLness. This normally works, because when the transition function returns NULL (via fcinfo->isnull), it'll return a value that won't be the same as its input value. But there's no hard requirement that that's the case, and it turns out it's possible to hit this case (see discussion or reproducers), leading to a non-null transition value not being reparented, followed by a crash caused by that. Rather than adding another comparison of NULLness, have ExecAggTransReparent() ensure that pergroup->transValue ends up as 0 when the new transition value is NULL. That avoids adding a branch to the much more common cases of the transition function returning the old transition value (which is a pointer in this case), and of the new value being different but not NULL. In branches since 69c3936a149, also deduplicate the reparenting code between the expression-evaluation-based transitions and the path for ordered aggregates. Reported-By: Teodor Sigaev, Nikita Glukhov Author: Andres Freund Discussion: https://email@example.com Backpatch: 9.4-, this issue has existed since at least 7.4
Add GUC variables for stat tracking and timeout as PGDLLIMPORT
commit : dbe405b7859c68a4927afd8334059cf9348afbeb author : Michael Paquier <firstname.lastname@example.org> date : Tue, 21 Jan 2020 13:47:17 +0900 committer: Michael Paquier <email@example.com> date : Tue, 21 Jan 2020 13:47:17 +0900
This helps integration of extensions with Windows. The following parameters are changed: - idle_in_transaction_session_timeout (9.6 and newer versions) - lock_timeout - statement_timeout - track_activities - track_counts - track_functions Author: Pascal Legrand Reviewed-by: Amit Kapila, Julien Rouhaud, Michael Paquier Discussion: https://firstname.lastname@example.org Backpatch-through: 9.4
Fix pg_dump's sigTermHandler() to use _exit() not exit().
commit : 42e538fe673b20d9adfc46bc889f3eb16ceb6538 author : Tom Lane <email@example.com> date : Mon, 20 Jan 2020 12:57:18 -0500 committer: Tom Lane <firstname.lastname@example.org> date : Mon, 20 Jan 2020 12:57:18 -0500
sigTermHandler() tried to be careful to invoke only operations that are safe to do in a signal handler. But for some reason we forgot that exit(3) is not among those, because it calls atexit handlers that might do various random things. (pg_dump itself installs no atexit handlers, but e.g. OpenSSL does.) That led to crashes or lockups when attempting to terminate a parallel dump or restore via a signal. Fix by calling _exit() instead. Per bug #16199 from Raúl Marín. Back-patch to all supported branches. Discussion: https://email@example.com
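The distinction the fix relies on can be demonstrated outside pg_dump. Below is a POSIX-only Python sketch of our own (pg_dump's actual handler is C): a child whose signal handler calls _exit() terminates with the handler's status immediately, running no atexit hooks at all.

```python
import os
import signal

def sig_term_handler(signum, frame):
    # _exit() terminates right here: no atexit handlers, no stdio flushing.
    # That immediacy is what makes it safe inside a signal handler, whereas
    # exit() would run arbitrary atexit hooks (e.g. OpenSSL's, in pg_dump's
    # case) that are not async-signal-safe.
    os._exit(2)

def run_demo():
    # Install the handler before forking so the child inherits it and there
    # is no window where SIGTERM could hit the default (kill) action.
    signal.signal(signal.SIGTERM, sig_term_handler)
    pid = os.fork()
    if pid == 0:
        while True:
            signal.pause()            # child: wait to be terminated
    os.kill(pid, signal.SIGTERM)      # parent: terminate the "worker"
    _, status = os.waitpid(pid, 0)
    return os.WEXITSTATUS(status)     # the status passed to _exit()
```

run_demo() returns 2, the value the handler passed to os._exit(); nothing registered with atexit in the child would have run.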
Repair more failures with SubPlans in multi-row VALUES lists.
commit : eb9d1f0504a64aeae2b91279bc59e2649d35b4b0 author : Tom Lane <firstname.lastname@example.org> date : Fri, 17 Jan 2020 16:17:18 -0500 committer: Tom Lane <email@example.com> date : Fri, 17 Jan 2020 16:17:18 -0500
Commit 9b63c13f0 turns out to have been fundamentally misguided: the parent node's subPlan list is by no means the only way in which a child SubPlan node can be hooked into the outer execution state. As shown in bug #16213 from Matt Jibson, we can also get short-lived tuple table slots added to the outer es_tupleTable list. At this point I have little faith that there aren't other possible connections as well; the long time it took to notice this problem shows that this isn't a heavily-exercised situation. Therefore, revert that fix, returning to the coding that passed a NULL parent plan pointer down to the transiently-built subexpressions. That gives us a pretty good guarantee that they won't hook into the outer executor state in any way. But then we need some other solution to make SubPlans work. Adopt the solution speculated about in the previous commit's log message: do expression initialization at plan startup for just those VALUES rows containing SubPlans, abandoning the goal of reclaiming memory intra-query for those rows. In practice it seems unlikely that queries containing a vast number of VALUES rows would be using SubPlans in them, so this should not give up much. (BTW, this test case also refutes my claim in connection with the prior commit that the issue only arises with use of LATERAL. That was just wrong: some variants of SubLink always produce SubPlans.) As with previous patch, back-patch to all supported branches. Discussion: https://firstname.lastname@example.org
Set ReorderBufferTXN->final_lsn more eagerly
commit : 20a1dc1e311d795fa37e5e4bd4f3d49157d78dba author : Alvaro Herrera <email@example.com> date : Fri, 17 Jan 2020 18:00:39 -0300 committer: Alvaro Herrera <firstname.lastname@example.org> date : Fri, 17 Jan 2020 18:00:39 -0300
... specifically, set it incrementally as each individual change is spilled down to disk. This way, it is set correctly when the transaction disappears without a trace, i.e. without leaving an XACT_ABORT WAL record. (This happens when the server crashes midway through a transaction.) Failing to have final_lsn set prevents ReorderBufferRestoreCleanup() from working, since it needs final_lsn in order to know the endpoint of its iteration through spilled files. Commit df9f682c7bf8 already tried to fix the problem, but it didn't set the final_lsn in all cases. Revert that, since it's no longer needed. Author: Vignesh C Reviewed-by: Amit Kapila, Dilip Kumar Discussion: https://postgr.es/m/CALDaNm2CLk+K9JDwjYST0sPbGg5AQdvhUt0jbKyX_HdAE0jk3A@mail.gmail.com
Make rewriter prevent auto-updates on views with conditional INSTEAD rules.
commit : 9be6fcb3e4e21c5d745dba314a451dc40f6f388b author : Dean Rasheed <email@example.com> date : Tue, 14 Jan 2020 09:47:44 +0000 committer: Dean Rasheed <firstname.lastname@example.org> date : Tue, 14 Jan 2020 09:47:44 +0000
A view with conditional INSTEAD rules and no unconditional INSTEAD rules or INSTEAD OF triggers is not auto-updatable. Previously we relied on a check in the executor to catch this, but that's problematic since the planner may fail to properly handle such a query and thus return a particularly unhelpful error to the user, before reaching the executor check. Instead, trap this in the rewriter and report the correct error there. Doing so also allows us to include more useful error detail than the executor check can provide. This doesn't change the existing behaviour of updatable views; it merely ensures that useful error messages are reported when a view isn't updatable. Per report from Pengzhou Tang, though not adopting that suggested fix. Back-patch to all supported branches. Discussion: https://postgr.es/m/CAG4reAQn+4xB6xHJqWdtE0ve_WqJkdyCV4P=trYr4Kn8_3_PEA@mail.gmail.com
Fix edge-case crashes and misestimation in range containment selectivity.
commit : 6bd567b65858ef4610b4faa4ca7186cffa05a213 author : Tom Lane <email@example.com> date : Sun, 12 Jan 2020 14:37:00 -0500 committer: Tom Lane <firstname.lastname@example.org> date : Sun, 12 Jan 2020 14:37:00 -0500
When estimating the selectivity of "range_var <@ range_constant" or "range_var @> range_constant", if the upper (or respectively lower) bound of the range_constant was above the last bin of the range_var's histogram, the code would access uninitialized memory and potentially crash (though it seems the probability of a crash is quite low). Handle the endpoint cases explicitly to fix that. While at it, be more paranoid about the possibility of getting NaN or other silly results from the range type's subdiff function. And improve some comments. Ordinarily we'd probably add a regression test case demonstrating the bug in unpatched code. But it's too hard to get it to crash reliably because of the uninitialized-memory dependence, so skip that. Per bug #16122 from Adam Scott. It's been broken from the beginning, apparently, so backpatch to all supported branches. Diagnosis by Michael Paquier, patch by Andrey Borodin and Tom Lane. Discussion: https://email@example.com
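The shape of the fix — handling a constant past either histogram endpoint explicitly instead of falling off the array — can be sketched in a few lines. This is our own simplification for illustration, not the actual rangetypes_selfuncs.c code:

```python
def fraction_below(hist, const):
    """Estimate the fraction of values <= const, given a sorted histogram
    of bin boundaries, interpolating linearly within the containing bin.

    The endpoint checks come first: a constant beyond the last boundary
    returns 1.0 outright, rather than being located in a nonexistent bin
    (the analogue of the uninitialized-memory access being fixed)."""
    if const <= hist[0]:
        return 0.0
    if const >= hist[-1]:
        return 1.0
    for i in range(len(hist) - 1):
        lo, hi = hist[i], hist[i + 1]
        if lo <= const < hi:
            frac = (const - lo) / (hi - lo)   # hi > lo holds in this branch
            return (i + frac) / (len(hist) - 1)
```

A subdiff-style subtraction that returns NaN or a zero-width bin would still need the extra paranoia the commit mentions; this sketch only covers the endpoint clamping.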
doc: Fix naming of SELinux
commit : 27676e22d330e6a6f30b1e1dd6ed6e3d3413f200 author : Michael Paquier <firstname.lastname@example.org> date : Fri, 10 Jan 2020 09:37:44 +0900 committer: Michael Paquier <email@example.com> date : Fri, 10 Jan 2020 09:37:44 +0900
Reported-by: Tham Nguyen Discussion: https://firstname.lastname@example.org Backpatch-through: 9.4
Revert "Forbid DROP SCHEMA on temporary namespaces"
commit : b83ba2e6e7060fd8c37bd57e660b2ad64a1a9cd6 author : Michael Paquier <email@example.com> date : Wed, 8 Jan 2020 10:36:52 +0900 committer: Michael Paquier <firstname.lastname@example.org> date : Wed, 8 Jan 2020 10:36:52 +0900
This reverts commit a052f6c, following complaints from Robert Haas and Tom Lane. Backpatch down to 9.4, like the previous commit. Discussion: https://postgr.es/m/CA+TgmobL4npEX5=E5h=5Jm_9mZun3MT39Kq2suJFVeamc9skSQ@mail.gmail.com Backpatch-through: 9.4
Fix running out of file descriptors for spill files.
commit : 1ad47e8757bb95058e70fd00d5e619833c83df40 author : Amit Kapila <email@example.com> date : Thu, 2 Jan 2020 12:28:02 +0530 committer: Amit Kapila <firstname.lastname@example.org> date : Thu, 2 Jan 2020 12:28:02 +0530
Currently, while decoding changes, if the number of changes exceeds a certain threshold we spill them to disk, and this happens for each (sub)transaction. When reading these files back, none of them is closed until all have been read, so if the number of such files exceeds the maximum number of file descriptors, the operation errors out. Use the PathNameOpenFile interface to open these files, as it internally has the mechanism to release kernel FDs as needed to stay under the max_safe_fds limit. Reported-by: Amit Khandekar Author: Amit Khandekar Reviewed-by: Amit Kapila Backpatch-through: 9.4 Discussion: https://postgr.es/m/CAJ3gD9c-sECEn79zXw4yBnBdOttacoE-6gAyP0oy60nfs_sabQ@mail.gmail.com
Update copyrights for 2020
commit : ce758a3d753748891ba2dbd56ad7953154e256d2 author : Bruce Momjian <email@example.com> date : Wed, 1 Jan 2020 12:21:44 -0500 committer: Bruce Momjian <firstname.lastname@example.org> date : Wed, 1 Jan 2020 12:21:44 -0500
Backpatch-through: update all files in master, backpatch legal files through 9.4
doc: add examples of creative use of unique expression indexes
commit : 645694731ca9526bfcb976ae7be17b9bd8de6294 author : Bruce Momjian <email@example.com> date : Fri, 27 Dec 2019 14:49:08 -0500 committer: Bruce Momjian <firstname.lastname@example.org> date : Fri, 27 Dec 2019 14:49:08 -0500
Unique expression indexes can constrain data in creative ways, so show two examples. Reported-by: Tuomas Leikola Discussion: https://email@example.com Backpatch-through: 9.4
docs: clarify infinite range values from data-type infinities
commit : 820b1d1a48afca43bf3a1ff45b36ef0ded55dfff author : Bruce Momjian <firstname.lastname@example.org> date : Fri, 27 Dec 2019 14:33:30 -0500 committer: Bruce Momjian <email@example.com> date : Fri, 27 Dec 2019 14:33:30 -0500
The previous docs referenced these distinct ideas confusingly. Reported-by: Eugen Konkov Discussion: https://firstname.lastname@example.org Backpatch-through: 9.4
Forbid DROP SCHEMA on temporary namespaces
commit : 898e0c650097a724177ade1af552901385eaf6c1 author : Michael Paquier <email@example.com> date : Fri, 27 Dec 2019 17:59:39 +0900 committer: Michael Paquier <firstname.lastname@example.org> date : Fri, 27 Dec 2019 17:59:39 +0900
This operation was possible for the owner of the schema or a superuser. Down to 9.4, doing this operation would cause inconsistencies in a session whose temporary schema was dropped, particularly when trying to create new temporary objects after the drop. A more annoying consequence is a crash of autovacuum on an assertion failure when logging information about an orphaned temp table being dropped. Note that because of 246a6c8 (present in v11~), which made the removal of orphaned temporary tables more aggressive, the failure could be triggered more easily, but it is possible to reproduce it down to 9.4. Reported-by: Mahendra Singh, Prabhat Sahu Author: Michael Paquier Reviewed-by: Kyotaro Horiguchi, Mahendra Singh Discussion: https://postgr.es/m/CAKYtNAr9Zq=1-ww4etHo-VCC-k120YxZy5OS01VkaLPaDbv2tg@mail.gmail.com Backpatch-through: 9.4
Rotate instead of shifting hash join batch number.
commit : 5c0a132cf1410fb9dea577e6b84b8560bb0f7e03 author : Thomas Munro <email@example.com> date : Tue, 24 Dec 2019 11:31:24 +1300 committer: Thomas Munro <firstname.lastname@example.org> date : Tue, 24 Dec 2019 11:31:24 +1300
Our algorithm for choosing batch numbers turned out not to work effectively for multi-billion key inner relations. We would use more hash bits than we have, and effectively concentrate all tuples into a smaller number of batches than we intended. While ideally we should switch to wider hashes, for now, change the algorithm to one that effectively gives up bits from the bucket number when we don't have enough bits. That means we'll finish up with longer bucket chains than would be ideal, but that's better than having batches that don't fit in work_mem and can't be divided. Back-patch to all supported releases. Author: Thomas Munro Reviewed-by: Tom Lane, thanks also to Tomas Vondra, Alvaro Herrera, Andres Freund for testing and discussion Reported-by: James Coleman Discussion: https://postgr.es/m/16104-dc11ed911f1ab9df%40postgresql.org
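The core of the change can be restated in a few lines. This is our own Python sketch of the technique, not the committed C: deriving the batch number from a rotation of the 32-bit hash rather than a plain right shift, so that when batch bits plus bucket bits exceed the hash width, the batch number reuses high-order bucket bits instead of degenerating to mostly-zero bits that collapse tuples into a few batches.

```python
HASH_BITS = 32
MASK32 = (1 << HASH_BITS) - 1

def rotate_right(h, n):
    """Rotate a 32-bit value right by n bits (bits shifted off the low end
    wrap around to the high end, so no hash bits are discarded)."""
    n %= HASH_BITS
    return ((h >> n) | (h << (HASH_BITS - n))) & MASK32

def bucket_and_batch(h, nbuckets, nbatches):
    """nbuckets and nbatches are powers of two. The bucket number takes the
    low log2(nbuckets) bits; the batch number is taken after rotating those
    bits out of the way, rather than shifting them into oblivion."""
    log2_nbuckets = nbuckets.bit_length() - 1
    bucketno = h & (nbuckets - 1)
    batchno = rotate_right(h, log2_nbuckets) & (nbatches - 1)
    return bucketno, batchno
```

With a plain shift, once log2(nbuckets) + log2(nbatches) exceeds 32 the high batch bits are always zero; with the rotation they wrap around to reuse bucket bits, trading longer bucket chains for batches that still divide.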
Disallow null category in crosstab_hash
commit : 4a3cdb531be57bf96acd211e9c058b1b72a6fb39 author : Joe Conway <email@example.com> date : Mon, 23 Dec 2019 13:34:12 -0500 committer: Joe Conway <firstname.lastname@example.org> date : Mon, 23 Dec 2019 13:34:12 -0500
While building a hash map of categories in load_categories_hash, the resulting category names have not thus far been checked to ensure they are not null. Prior to pg12, null category names worked to the extent that they did not crash on some platforms, because those platforms' system libraries have a snprintf that can cope with being passed a null pointer argument for a string. But even in those cases null categories did nothing useful, and on other platforms it crashed. As of pg12, our own version of snprintf gets called; it does not deal with null pointer arguments at all, and crashes consistently. Fix that by disallowing null categories. They never worked usefully, and no one has ever asked for them to work. Back-patch to all supported branches. Reported-By: Ireneusz Pluta Discussion: https://email@example.com
Prevent a rowtype from being included in itself via a range.
commit : 0d245d13c643d9ee089ad59fc4673a6b520461b1 author : Tom Lane <firstname.lastname@example.org> date : Mon, 23 Dec 2019 12:08:24 -0500 committer: Tom Lane <email@example.com> date : Mon, 23 Dec 2019 12:08:24 -0500
We probably should have thought of this case when ranges were added, but we didn't. (It's not the fault of commit eb51af71f, because ranges didn't exist then.) It's an old bug, so back-patch to all supported branches. Discussion: https://firstname.lastname@example.org
Combine initdb tests that successfully create a data directory.
commit : 297b9ccff44859f274b7408045a9c91d2bb1c6d1 author : Michael Paquier <email@example.com> date : Mon, 23 Dec 2019 10:51:00 +0900 committer: Michael Paquier <firstname.lastname@example.org> date : Mon, 23 Dec 2019 10:51:00 +0900
This eliminates many seconds of test duration and the need to invoke "rm -rf", which is typically unavailable on Windows. This is a backpatch of 1a629c1, which was never applied to REL9_4_STABLE. Per complaint from buildfarm member drongo. Reported-by: Tom Lane Author: Michael Paquier, Noah Misch Discussion: https://email@example.com
Avoid low-probability regression test failures in timestamp[tz] tests.
commit : 8f735c0488c68d02ed9a484851277fd5df4570c5 author : Tom Lane <firstname.lastname@example.org> date : Sun, 22 Dec 2019 18:00:18 -0500 committer: Tom Lane <email@example.com> date : Sun, 22 Dec 2019 18:00:18 -0500
If the first transaction block in these tests were entered exactly at midnight (California time), they'd report a bogus failure due to 'now' and 'midnight' having the same values. Commit 8c2ac75c5 had dismissed this as being of negligible probability, but we've now seen it happen in the buildfarm, so let's prevent it. We can get pretty much the same test coverage without an it's-not-midnight assumption by moving the does-'now'-work cases into their own test step. While here, apply commit 47169c255's s/DELETE/TRUNCATE/ change to timestamptz as well as timestamp (not sure why that didn't occur to me at the time; the risk of failure is the same). Back-patch to all supported branches, since the main point is to get rid of potential buildfarm failures. Discussion: https://firstname.lastname@example.org
In pgwin32_open, loop after ERROR_ACCESS_DENIED only if we can't stat.
commit : f1a4020ef3bbb9381a43972f33159dc5a6132758 author : Tom Lane <email@example.com> date : Sat, 21 Dec 2019 17:39:37 -0500 committer: Tom Lane <firstname.lastname@example.org> date : Sat, 21 Dec 2019 17:39:37 -0500
This fixes a performance problem introduced by commit 6d7547c21. ERROR_ACCESS_DENIED is returned in some other cases besides the delete-pending case considered by that commit; notably, if the given path names a directory instead of a plain file. In that case we'll uselessly loop for 1 second before returning the failure condition. That slows down some usage scenarios enough to cause test timeout failures on our Windows buildfarm critters. To fix, try to stat() the file, and sleep/loop only if that fails. It will fail in the delete-pending case, and also in the case where the deletion completed before we could stat(), so we have the cases where we want to loop covered. In the directory case, the stat() should succeed, letting us exit without a wait. One case where we'll still wait uselessly is if the access-denied problem pertains to a directory in the given pathname. But we don't expect that to happen in any performance-critical code path. There might be room to refine this further, but I'll push it now in hopes of making the buildfarm green again. Back-patch, like the preceding commit. Alexander Lakhin and Tom Lane Discussion: https://email@example.com
docs: clarify handling of column lists in COPY TO/FROM
commit : 530b5d53f67262c5e092f8b82aa8e34f54e7e68f author : Bruce Momjian <firstname.lastname@example.org> date : Sat, 21 Dec 2019 12:44:38 -0500 committer: Bruce Momjian <email@example.com> date : Sat, 21 Dec 2019 12:44:38 -0500
Previously it was unclear how COPY FROM handled cases where not all columns were specified, or where the column order didn't match the table's. Reported-by: firstname.lastname@example.org Discussion: https://email@example.com Backpatch-through: 9.4
libpq should expose GSS-related parameters even when not implemented.
commit : 875c7d70def61a725ed94d25859a7d806dd6e747 author : Tom Lane <firstname.lastname@example.org> date : Fri, 20 Dec 2019 15:34:08 -0500 committer: Tom Lane <email@example.com> date : Fri, 20 Dec 2019 15:34:08 -0500
We realized years ago that it's better for libpq to accept all connection parameters syntactically, even if some are ignored or restricted due to lack of the feature in a particular build. However, that lesson from the SSL support was for some reason never applied to the GSSAPI support. This is causing various buildfarm members to have problems with a test case added by commit 6136e94dc, and it's just a bad idea from a user-experience standpoint anyway, so fix it. While at it, fix some places where parameter-related infrastructure was added with the aid of a dartboard, or perhaps with the aid of the anti-pattern "add new stuff at the end". It should be safe to rearrange the contents of struct pg_conn even in released branches, since that's private to libpq (and we'd have to move some fields in some builds to fix this, anyway). Back-patch to all supported branches. Discussion: https://firstname.lastname@example.org
Fix error reporting for index expressions of prohibited types.
commit : 298d056d9d9786bac9c645be0011e8229029aa21 author : Tom Lane <email@example.com> date : Tue, 17 Dec 2019 17:44:28 -0500 committer: Tom Lane <firstname.lastname@example.org> date : Tue, 17 Dec 2019 17:44:28 -0500
If CheckAttributeType() threw an error about the datatype of an index expression column, it would report an empty column name, which is pretty unhelpful and certainly not the intended behavior. I (tgl) evidently broke this in commit cfc5008a5, by not noticing that the column's attname was used above where I'd placed the assignment of it. In HEAD and v12, this is trivially fixable by moving up the assignment of attname. Before v12 the code is a bit more messy; to avoid doing substantial refactoring, I took the lazy way out and just put in two copies of the assignment code. Report and patch by Amit Langote. Back-patch to all supported branches. Discussion: https://postgr.es/m/CA+HiwqFA+BGyBFimjiYXXMa2Hc3fcL0+OJOyzUNjhU4NCa_XXw@mail.gmail.com
On Windows, wait a little to see if ERROR_ACCESS_DENIED goes away.
commit : cfb2a4cce37bdb279a9360913b3ef29be3079f98 author : Tom Lane <email@example.com> date : Mon, 16 Dec 2019 15:10:56 -0500 committer: Tom Lane <firstname.lastname@example.org> date : Mon, 16 Dec 2019 15:10:56 -0500
Attempting to open a file fails with ERROR_ACCESS_DENIED if the file is flagged for deletion but not yet actually gone (another in a long list of reasons why Windows is broken, if you ask me). This seems likely to explain a lot of irreproducible failures we see in the buildfarm. This state generally persists for only a millisecond or so, so just wait a bit and retry. If it's a real permissions problem, we'll eventually give up and report it as such. If it's the pending deletion case, we'll see file-not-found and report that after the deletion completes, and the caller will treat that in an appropriate way. In passing, rejigger the existing retry logic for some other error cases so that we don't uselessly wait an extra time when we're not going to retry anymore. Alexander Lakhin (with cosmetic tweaks by me). Back-patch to all supported branches, since this seems like a pretty safe change and the problem is definitely real. Discussion: https://email@example.com
Fix EXTRACT(ISOYEAR FROM timestamp) for years BC.
commit : 6aa1263113a6aeeaf1a2405018bff50b4666134e author : Tom Lane <firstname.lastname@example.org> date : Thu, 12 Dec 2019 12:30:44 -0500 committer: Tom Lane <email@example.com> date : Thu, 12 Dec 2019 12:30:44 -0500
The test cases added by commit 26ae3aa80 exposed an old oversight in timestamp[tz]_part: they didn't correct the result of date2isoyear() for BC years, so that we produced an off-by-one answer for such years. Fix that, and back-patch to all supported branches. Discussion: https://postgr.es/m/SG2PR06MB37762CAE45DB0F6CA7001EA9B6550@SG2PR06MB3776.apcprd06.prod.outlook.com
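A minimal sketch of the BC adjustment involved, assuming the astronomical year-numbering convention (year 0 denotes 1 BC) that the internal computation uses, while EXTRACT-style output has no year zero; the function name is hypothetical:

```python
def extract_year(astronomical_year):
    """Map an astronomical year number (..., -1, 0, 1, ...), where 0 is
    1 BC, to the reported year, where there is no year zero: 1 BC -> -1,
    2 BC -> -2, and AD years pass through unchanged.  Forgetting this
    adjustment yields an off-by-one answer for every BC year."""
    if astronomical_year > 0:
        return astronomical_year
    return astronomical_year - 1
```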
Remove redundant function calls in timestamp[tz]_part().
commit : 8b0fd6d04137d8c436a34f3261a0f7fa3068f891 author : Tom Lane <firstname.lastname@example.org> date : Thu, 12 Dec 2019 12:12:36 -0500 committer: Tom Lane <email@example.com> date : Thu, 12 Dec 2019 12:12:36 -0500
The DTK_DOW/DTK_ISODOW and DTK_DOY switch cases in timestamp_part() and timestamptz_part() contained calls of timestamp2tm() that were fully redundant with the ones done just above the switch. This evidently crept in during commit 258ee1b63, which relocated that code from another place where the calls were indeed needed. Just delete the redundant calls. I (tgl) noted that our test coverage of these functions left quite a bit to be desired, so extend timestamp.sql and timestamptz.sql to cover all the branches. Back-patch to all supported branches, as the previous commit was. There's no real issue here other than some wasted cycles in some not-too-heavily-used code paths, but the test coverage seems valuable. Report and patch by Li Japin; test case adjustments by me. Discussion: https://postgr.es/m/SG2PR06MB37762CAE45DB0F6CA7001EA9B6550@SG2PR06MB3776.apcprd06.prod.outlook.com
Doc: back-patch documentation about limitations of CHECK constraints.
commit : 3ddd8ee7167f14b8792f36f6fdfec8b57daec035 author : Tom Lane <firstname.lastname@example.org> date : Wed, 11 Dec 2019 15:53:36 -0500 committer: Tom Lane <email@example.com> date : Wed, 11 Dec 2019 15:53:36 -0500
Back-patch commits 36d442a25 and 1f66c657f into all supported branches. I'd considered doing this when putting in the latter commit, but failed to pull the trigger. Now that we've had an actual field complaint about the lack of such docs, let's do it. Per bug #16158 from Piotr Jander. Original patches by Lætitia Avrot, Patrick Francelle, and me. Discussion: https://firstname.lastname@example.org
Fix race condition in our Windows signal emulation.
commit : 7309e75fa95681f09e55a78e8d752d8df74fd478 author : Tom Lane <email@example.com> date : Mon, 9 Dec 2019 15:03:52 -0500 committer: Tom Lane <firstname.lastname@example.org> date : Mon, 9 Dec 2019 15:03:52 -0500
pg_signal_dispatch_thread() responded to the client (signal sender) and disconnected the pipe before actually setting the shared variables that make the signal visible to the backend process's main thread. In the worst case, it seems, effective delivery of the signal could be postponed for as long as the machine has any other work to do. To fix, just move the pg_queue_signal() call so that we do it before responding to the client. This essentially makes pgkill() synchronous, which is a stronger guarantee than we have on Unix. That may be overkill, but on the other hand we have not seen comparable timing bugs on any Unix platform. While at it, add some comments to this sadly underdocumented code. Problem diagnosis and fix by Amit Kapila; I just added the comments. Back-patch to all supported versions, as it appears that this can cause visible NOTIFY timing oddities on all of them, and there might be other misbehavior due to slow delivery of other signals. Discussion: https://email@example.com
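The ordering constraint can be modeled with ordinary threads. This is a toy sketch under stated assumptions -- the class and its members are invented stand-ins for the shared signal bitmask and the named-pipe response, not the actual Windows emulation code:

```python
import threading

class SignalEmulator:
    def __init__(self):
        self._lock = threading.Lock()
        self._pending = set()            # stands in for the shared signal flags
        self._acked = threading.Event()  # stands in for the pipe response

    def dispatch_thread(self, signo):
        # The fix, in miniature: make the signal visible *before*
        # responding to the client...
        with self._lock:
            self._pending.add(signo)
        # ...so that by the time the sender sees the ack, the signal is
        # guaranteed to be queued (delivery is effectively synchronous).
        self._acked.set()

    def kill(self, signo):
        # Sender side: returns only after the ack.
        self._acked.clear()
        t = threading.Thread(target=self.dispatch_thread, args=(signo,))
        t.start()
        self._acked.wait()
        t.join()

    def pending(self, signo):
        with self._lock:
            return signo in self._pending
```

With the buggy ordering (ack first, queue after), `kill()` could return while the signal was still invisible to the target's main thread, which is exactly the NOTIFY timing oddity described.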
Document search_path security with untrusted dbowner or CREATEROLE.
commit : 08395e592cbc540bdf111ab4a27e01a375983e23 author : Noah Misch <firstname.lastname@example.org> date : Sun, 8 Dec 2019 11:06:26 -0800 committer: Noah Misch <email@example.com> date : Sun, 8 Dec 2019 11:06:26 -0800
Commit 5770172cb0c9df9e6ce27c507b449557e5b45124 wrote, incorrectly, that certain schema usage patterns are secure against CREATEROLE users and database owners. When an untrusted user is the database owner or holds CREATEROLE privilege, a query is secure only if its session started with SELECT pg_catalog.set_config('search_path', '', false) or equivalent. Back-patch to 9.4 (all supported versions). Discussion: https://postgr.es/m/20191013013512.GC4131753@rfd.leadboat.com
Ensure maxlen is at least 1 in dict_int
commit : 44381b1aff0e92acc91381c40f1b07514e93a18b author : Tomas Vondra <firstname.lastname@example.org> date : Tue, 3 Dec 2019 16:55:51 +0100 committer: Tomas Vondra <email@example.com> date : Tue, 3 Dec 2019 16:55:51 +0100
The dict_int text search dictionary template accepts a maxlen parameter, which is then used to cap the length of input strings. The value was not properly checked, and the code simply did txt[d->maxlen] = '\0'; to insert a terminator, leading to segfaults with negative values. This commit simply rejects values less than 1. The issue has been there since dict_int was introduced in 9.3, so backpatch all the way back to 9.4, which is the oldest supported version. Reported-by: cili Discussion: https://firstname.lastname@example.org Backpatch-through: 9.4
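The shape of the guard is simple. The helper below is a hypothetical Python analog, not the dict_int C code: in C an unchecked negative maxlen means an out-of-bounds store of the terminator, while in the Python analog a negative slice bound would silently drop trailing characters -- either way the parameter must be validated up front:

```python
def truncate_lexeme(txt, maxlen):
    """Cap an input string at maxlen characters, rejecting nonsensical
    limits instead of letting them corrupt the result."""
    if maxlen < 1:
        raise ValueError("maxlen must be at least 1")
    return txt[:maxlen]
```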
Fix misbehavior with expression indexes on ON COMMIT DELETE ROWS tables.
commit : 0c84e992c3309e1a0a80ebf3af9ebead9e23d58f author : Tom Lane <email@example.com> date : Sun, 1 Dec 2019 13:09:27 -0500 committer: Tom Lane <firstname.lastname@example.org> date : Sun, 1 Dec 2019 13:09:27 -0500
We implement ON COMMIT DELETE ROWS by truncating tables marked that way, which requires also truncating/rebuilding their indexes. But RelationTruncateIndexes asks the relcache for up-to-date copies of any index expressions, which may cause execution of eval_const_expressions on them, which can result in actual execution of subexpressions. This is a bad thing to have happening during ON COMMIT. Manuel Rigger reported that use of a SQL function resulted in crashes due to expectations that ActiveSnapshot would be set, which it isn't. The most obvious fix perhaps would be to push a snapshot during PreCommit_on_commit_actions, but I think that would just open the door to more problems: CommitTransaction explicitly expects that no user-defined code can be running at this point. Fortunately, since we know that no tuples exist to be indexed, there seems no need to use the real index expressions or predicates during RelationTruncateIndexes. We can set up dummy index expressions instead (we do need something that will expose the right data type, as there are places that build index tupdescs based on this), and just ignore predicates and exclusion constraints. In a green field it'd likely be better to reimplement ON COMMIT DELETE ROWS using the same "init fork" infrastructure used for unlogged relations. That seems impractical without catalog changes though, and even without that it'd be too big a change to back-patch. So for now do it like this. Per private report from Manuel Rigger. This has been broken forever, so back-patch to all supported branches.
Fix off-by-one error in PGTYPEStimestamp_fmt_asc
commit : d9b974e9947e54a9a55639f8f5f746bb5ad497b8 author : Tomas Vondra <email@example.com> date : Sat, 30 Nov 2019 14:51:27 +0100 committer: Tomas Vondra <firstname.lastname@example.org> date : Sat, 30 Nov 2019 14:51:27 +0100
When using the %b or %B patterns to format a date, the code was simply using tm_mon as an index into the array of month names. But that is wrong, because tm_mon is 1-based while array indexes are 0-based, so we either used the name of the following month or got a segfault (for December). Fix by subtracting 1 from tm_mon for both patterns, and add a regression test triggering the issue. Backpatch to all supported versions (the bug has been there far longer, since at least 2003). Reported-by: Paul Spencer Backpatch-through: 9.4 Discussion: https://postgr.es/m/16143-0d861eb8688d3fef%40postgresql.org
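The off-by-one is easy to see in a sketch. Note the hedge: in this ecpg code tm_mon is 1-based, unlike C's `struct tm` where it is 0-based; the function name here is invented for illustration:

```python
MONTH_NAMES = [
    "January", "February", "March", "April", "May", "June",
    "July", "August", "September", "October", "November", "December",
]

def format_month(tm_mon):
    """tm_mon is 1-based (January == 1) but the array is 0-based, so the
    index must be tm_mon - 1.  The buggy form, MONTH_NAMES[tm_mon],
    names the *following* month -- and for December (tm_mon == 12) reads
    past the end of the array, the Python analog of the C segfault."""
    return MONTH_NAMES[tm_mon - 1]
```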
Fix typo in comment.
commit : 304ea5d98ed8c5b4d7ade7524a7d82841d335181 author : Etsuro Fujita <email@example.com> date : Wed, 27 Nov 2019 16:00:55 +0900 committer: Etsuro Fujita <firstname.lastname@example.org> date : Wed, 27 Nov 2019 16:00:55 +0900
Avoid assertion failure with LISTEN in a serializable transaction.
commit : 34f44f59b38fcb0cca9363cf45f3fe6c41610abb author : Tom Lane <email@example.com> date : Sun, 24 Nov 2019 15:57:32 -0500 committer: Tom Lane <firstname.lastname@example.org> date : Sun, 24 Nov 2019 15:57:32 -0500
If LISTEN is the only action in a serializable-mode transaction, and the session was not previously listening, and the notify queue is not empty, predicate.c reported an assertion failure. That happened because we'd acquire the transaction's initial snapshot during PreCommit_Notify, which was called *after* predicate.c expects any such snapshot to have been established. To fix, just swap the order of the PreCommit_Notify and PreCommit_CheckForSerializationFailure calls during CommitTransaction. This will imply holding the notify-insertion lock slightly longer, but the difference could only be meaningful in serializable mode, which is an expensive option anyway. It appears that this is just an assertion failure, with no consequences in non-assert builds. A snapshot used only to scan the notify queue could not have been involved in any serialization conflicts, so there would be nothing for PreCommit_CheckForSerializationFailure to do except assign it a prepareSeqNo and set the SXACT_FLAG_PREPARED flag. And given no conflicts, neither of those omissions affect the behavior of ReleasePredicateLocks. This admittedly once-over-lightly analysis is backed up by the lack of field reports of trouble. Per report from Mark Dilger. The bug is old, so back-patch to all supported branches; but the new test case only goes back to 9.6, for lack of adequate isolationtester infrastructure before that. Discussion: https://email@example.com Discussion: https://firstname.lastname@example.org
Defend against self-referential views in relation_is_updatable().
commit : f09829017d1502a12716fb3fbd116cb2e25897c8 author : Tom Lane <email@example.com> date : Thu, 21 Nov 2019 16:21:44 -0500 committer: Tom Lane <firstname.lastname@example.org> date : Thu, 21 Nov 2019 16:21:44 -0500
While a self-referential view doesn't actually work, it's possible to create one, and it turns out that this breaks some of the information_schema views. Those views call relation_is_updatable(), which neglected to consider the hazards of being recursive. In older PG versions you get a "stack depth limit exceeded" error, but since v10 it'd recurse to the point of stack overrun and crash, because commit a4c35ea1c took out the expression_returns_set() call that was incidentally checking the stack depth. Since this function is only used by information_schema views, it seems like it'd be better to return "not updatable" than suffer an error. Hence, add tracking of what views we're examining, in just the same way that the nearby fireRIRrules() code detects self-referential views. I added a check_stack_depth() call too, just to be defensive. Per private report from Manuel Rigger. Back-patch to all supported versions.
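The tracking scheme can be sketched as a visited-set recursion guard. This is a hypothetical model, not the planner code: `view_defs` maps each view name to the single relation its query reads from, and names absent from the map stand for plain tables:

```python
def relation_is_updatable(rel, view_defs, visiting=None):
    """Return whether rel is (trivially) updatable.  Tracking the views
    currently being examined lets a self-referential chain report
    "not updatable" instead of recursing until the stack overflows."""
    if visiting is None:
        visiting = set()
    if rel in visiting:
        return False        # self-referential: just say "not updatable"
    if rel not in view_defs:
        return True         # a plain table is updatable
    visiting.add(rel)
    return relation_is_updatable(view_defs[rel], view_defs, visiting)
```

Returning False for the cycle, rather than raising, mirrors the commit's choice: since the function only feeds information_schema views, a quiet "not updatable" beats an error.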
Remove incorrect markup
commit : d81d4c36ac98a7b402c14581c278273e9063ff0a author : Magnus Hagander <email@example.com> date : Wed, 20 Nov 2019 17:03:07 +0100 committer: Magnus Hagander <firstname.lastname@example.org> date : Wed, 20 Nov 2019 17:03:07 +0100
Author: Daniel Gustafsson <email@example.com>
Revise GIN README
commit : 91ce01a6e07b27e8c293abe4d2d565e0d8ae4021 author : Alexander Korotkov <firstname.lastname@example.org> date : Tue, 19 Nov 2019 23:11:24 +0300 committer: Alexander Korotkov <email@example.com> date : Tue, 19 Nov 2019 23:11:24 +0300
We find GIN concurrency bugs from time to time. One of the problems here is that GIN's concurrency isn't well-documented in the README, so it can even be hard to distinguish design bugs from implementation bugs. This commit revises the concurrency section in the GIN README, providing more details; some examples are illustrated in ASCII art. It also adds an explanation of how the tuple layout in internal GIN B-tree pages differs from nbtree's. Discussion: https://postgr.es/m/CAPpHfduXR_ywyaVN4%2BOYEGaw%3DcPLzWX6RxYLBncKw8de9vOkqw%40mail.gmail.com Author: Alexander Korotkov Reviewed-by: Peter Geoghegan Backpatch-through: 9.4
Fix traversing to the deleted GIN page via downlink
commit : 1414821e1688c86266c5ecaad378086785152332 author : Alexander Korotkov <firstname.lastname@example.org> date : Tue, 19 Nov 2019 23:08:14 +0300 committer: Alexander Korotkov <email@example.com> date : Tue, 19 Nov 2019 23:08:14 +0300
The current GIN code does not handle traversing to a deleted page via its downlink. This commit fixes that by stepping right from the deleted page, as nbtree does. It also fixes how the 'deleted' flag is set on GIN pages: other page flags are no longer erased once a page is deleted, which helps keep our assertions true if we arrive at a deleted page via a downlink. Discussion: https://postgr.es/m/CAPpHfdvMvsw-NcE5bRS7R1BbvA4BxoDnVVjkXC5W0Czvy9LVrg%40mail.gmail.com Author: Alexander Korotkov Reviewed-by: Peter Geoghegan Backpatch-through: 9.4
Doc: clarify use of RECURSIVE in WITH.
commit : 6e8516ef6c60adbcc40e2b913cafebf214f1d99c author : Tom Lane <firstname.lastname@example.org> date : Tue, 19 Nov 2019 14:43:37 -0500 committer: Tom Lane <email@example.com> date : Tue, 19 Nov 2019 14:43:37 -0500
Apparently some people misinterpreted the syntax as being that RECURSIVE is a prefix of individual WITH queries. It's a modifier for the WITH clause as a whole, so state that more clearly. Discussion: https://firstname.lastname@example.org
Doc: clarify behavior of ALTER DEFAULT PRIVILEGES ... IN SCHEMA.
commit : e5df9bb0179512bef6d9e1b51b3d07ca4db3f4ec author : Tom Lane <email@example.com> date : Tue, 19 Nov 2019 14:21:42 -0500 committer: Tom Lane <firstname.lastname@example.org> date : Tue, 19 Nov 2019 14:21:42 -0500
The existing text stated that "Default privileges that are specified per-schema are added to whatever the global default privileges are for the particular object type". However, that bare-bones observation is not quite clear enough, as demonstrated by the complaint in bug #16124. Flesh it out by stating explicitly that you can't revoke built-in default privileges this way, and by providing an example to drive the point home. Back-patch to all supported branches, since it's been like this from the beginning. Discussion: https://email@example.com
Further fix dumping of views that contain just VALUES(...).
commit : 65da6dd1d306dd679b92770b628d44ed6a08db3b author : Tom Lane <firstname.lastname@example.org> date : Sat, 16 Nov 2019 20:00:20 -0500 committer: Tom Lane <email@example.com> date : Sat, 16 Nov 2019 20:00:20 -0500
It turns out that commit e9f1c01b7 missed a case: we must print a VALUES clause in long format if get_query_def is given a resultDesc that would require the query's output column name(s) to be different from what the bare VALUES clause would produce. This applies in case an ALTER ... RENAME COLUMN has been done to a view that formerly could be printed in simple format, as shown in the added regression test case. It also explains bug #16119 from Dmitry Telpt, because it turns out that (unlike CREATE VIEW) CREATE MATERIALIZED VIEW fails to apply any column aliases it's given to the stored ON SELECT rule. So to get them to be printed, we have to account for the resultDesc renaming. It might be worth changing the matview code so that it creates the ON SELECT rule with the correct aliases; but we'd still need these messy checks in get_simple_values_rte to handle the case of a subsequent column rename, so any such change would be just neatnik-ism not a bug fix. Like the previous patch, back-patch to all supported branches. Discussion: https://firstname.lastname@example.org
Handle arrays and ranges in pg_upgrade's test for non-upgradable types.
commit : 56c06999d3c3bb056dd3c6eccd085e7c14bd6a38 author : Tom Lane <email@example.com> date : Wed, 13 Nov 2019 11:35:37 -0500 committer: Tom Lane <firstname.lastname@example.org> date : Wed, 13 Nov 2019 11:35:37 -0500
pg_upgrade needs to check whether certain non-upgradable data types appear anywhere on-disk in the source cluster. It knew that it has to check for these types being contained inside domains and composite types; but it somehow overlooked that they could be contained in arrays and ranges, too. Extend the existing recursive-containment query to handle those cases. We probably should have noticed this oversight while working on commit 0ccfc2822 and follow-ups, but we failed to :-(. The whole thing's possibly a bit overdesigned, since we don't really expect that any of these types will appear on disk; but if we're going to the effort of doing a recursive search then it's silly not to cover all the possibilities. While at it, refactor so that we have only one copy of the search logic, not three-and-counting. Also, to keep the branches looking more alike, back-patch the output wording change of commit 1634d3615. Back-patch to all supported branches. Discussion: https://email@example.com
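The recursive-containment check amounts to graph reachability. A minimal model, with assumed names (the real check is a recursive SQL query against pg_catalog, and the type names below are invented for the example): `contains` maps a type to the types it directly contains -- via domains, composites, and, per this fix, arrays and ranges as well -- and a type is non-upgradable if any problem type is reachable from it:

```python
def type_is_upgradable(typ, contains, bad_types):
    """Return False if any type in bad_types is reachable from typ by
    following containment edges; the seen-set guards against cyclic or
    shared containment."""
    seen = set()
    stack = [typ]
    while stack:
        t = stack.pop()
        if t in seen:
            continue
        seen.add(t)
        if t in bad_types:
            return False
        stack.extend(contains.get(t, ()))
    return True
```

The oversight fixed here corresponds to leaving the array and range edges out of `contains`: a bad type hidden inside an array column would then look reachable from nothing and slip past the check.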