commit : 01306452b1481a73a24fe7396f84797d37269865 author : Tom Lane <firstname.lastname@example.org> date : Mon, 6 Feb 2017 16:49:02 -0500 committer: Tom Lane <email@example.com> date : Mon, 6 Feb 2017 16:49:02 -0500
Release notes for 9.6.2, 9.5.6, 9.4.11, 9.3.16, 9.2.20.
commit : 59661896228ecd701c8f78ac6663766d740cd39e author : Tom Lane <firstname.lastname@example.org> date : Mon, 6 Feb 2017 15:30:17 -0500 committer: Tom Lane <email@example.com> date : Mon, 6 Feb 2017 15:30:17 -0500
Avoid returning stale attribute bitmaps in RelationGetIndexAttrBitmap().
commit : 5879958e1f822fd81ecad0fba371cfb3fc4c40af author : Tom Lane <firstname.lastname@example.org> date : Mon, 6 Feb 2017 13:19:51 -0500 committer: Tom Lane <email@example.com> date : Mon, 6 Feb 2017 13:19:51 -0500
The problem with the original coding here is that we might receive (and clear) a relcache invalidation signal for the target relation down inside one of the index_open calls we're doing. Since the target is open, we would not drop the relcache entry, just reset its rd_indexvalid and rd_indexlist fields. But RelationGetIndexAttrBitmap() kept going, and would eventually cache and return potentially-obsolete attribute bitmaps.

The case where this matters is where the inval signal was from a CREATE INDEX CONCURRENTLY telling us about a new index on a formerly-unindexed column. (In all other cases, the lock we hold on the target rel should prevent any concurrent change in index state.) Even just returning the stale attribute bitmap is not such a problem, because it shouldn't matter during the transaction in which we receive the signal. What hurts is caching the stale data, because it can survive into later transactions, breaking CREATE INDEX CONCURRENTLY's expectation that later transactions will not create new broken HOT chains. The upshot is that there's a window for building corrupted indexes during CREATE INDEX CONCURRENTLY.

This patch fixes the problem by rechecking that the set of index OIDs is still the same at the end of RelationGetIndexAttrBitmap() as it was at the start. If not, we loop back and try again. That's a little more than is strictly necessary to fix the bug --- in principle, we could return the stale data but not cache it --- but it seems like a bad idea on general principles for relcache to return data it knows is stale.

There might be more hazards of the same ilk, or there might be a better way to fix this one, but this patch definitely improves matters and seems unlikely to make anything worse. So let's push it into today's releases even as we continue to study the problem.

Pavan Deolasee and myself

Discussion: https://postgr.es/m/CABOikdM2MUq9cyZJi1KyLmmkCereyGp5JQ4fuwKoyKEde_mzkQ@mail.gmail.com
commit : 468d108f01efed029221a541eb4ceb2159712f6d author : Peter Eisentraut <firstname.lastname@example.org> date : Mon, 6 Feb 2017 12:38:00 -0500 committer: Peter Eisentraut <email@example.com> date : Mon, 6 Feb 2017 12:38:00 -0500
Source-Git-URL: git://git.postgresql.org/git/pgtranslation/messages.git
Source-Git-Hash: 13ec5c66ea619ad27f74e5182af5e149aa1cde27
Add missing newline to error messages
commit : 2a6bc23299d168dca7878c2dedb80667b11e9936 author : Peter Eisentraut <firstname.lastname@example.org> date : Mon, 6 Feb 2017 09:47:39 -0500 committer: Peter Eisentraut <email@example.com> date : Mon, 6 Feb 2017 09:47:39 -0500
Also improve the message style a bit while we're here.
Fix typo also in expected output.
commit : c01b73336b23e1e6af4b5dc8e115e6eccf4fb3f8 author : Heikki Linnakangas <firstname.lastname@example.org> date : Mon, 6 Feb 2017 12:04:04 +0200 committer: Heikki Linnakangas <email@example.com> date : Mon, 6 Feb 2017 12:04:04 +0200
Commit 181bdb90ba fixed the typo in the .sql file, but forgot to update the expected output.
Fix typos in comments.
commit : 1dd06ede17e024ab5803fda1f58947d793009fe1 author : Heikki Linnakangas <firstname.lastname@example.org> date : Mon, 6 Feb 2017 11:33:58 +0200 committer: Heikki Linnakangas <email@example.com> date : Mon, 6 Feb 2017 11:33:58 +0200
Backpatch to all supported versions, where applicable, to make backpatching of future fixes go more smoothly.

Josh Soref

Discussion: https://www.postgresql.org/message-id/CACZqfqCf+5qRztLPgmmosr-B0Ye4srWzzw_mo4c_8_B_mtjmJQ@mail.gmail.com
Add KOI8-U map files to Makefile.
commit : 12b4bdcb43b749ecbcde8e129d2a42f797377a6a author : Heikki Linnakangas <firstname.lastname@example.org> date : Thu, 2 Feb 2017 14:12:35 +0200 committer: Heikki Linnakangas <email@example.com> date : Thu, 2 Feb 2017 14:12:35 +0200
These were left out by mistake back when support for KOI8-U encoding was added. Extracted from Kyotaro Horiguchi's larger patch.
Update time zone data files to tzdata release 2016j.
commit : a7b5de3ba726180022e0de9f968d6c5456e25b12 author : Tom Lane <firstname.lastname@example.org> date : Mon, 30 Jan 2017 11:40:22 -0500 committer: Tom Lane <email@example.com> date : Mon, 30 Jan 2017 11:40:22 -0500
DST law changes in northern Cyprus (new zone Asia/Famagusta), Russia (new zone Europe/Saratov), Tonga, Antarctica/Casey. Historical corrections for Asia/Aqtau, Asia/Atyrau, Asia/Gaza, Asia/Hebron, Italy, Malta. Replace invented zone abbreviation "TOT" for Tonga with numeric UTC offset; but as in the past, we'll keep accepting "TOT" for input.
Orthography fixes for new castNode() macro.
commit : d02f038c35280207f29e50d777cf9e1cd69d6f52 author : Tom Lane <firstname.lastname@example.org> date : Fri, 27 Jan 2017 08:33:58 -0500 committer: Tom Lane <email@example.com> date : Fri, 27 Jan 2017 08:33:58 -0500
Clean up hastily-composed comment. Normalize whitespace. Erik Rijkers and myself
Check interrupts during hot standby waits
commit : 357e061286d2653737eee509f69d70ea85475e3a author : Simon Riggs <simon@2ndQuadrant.com> date : Fri, 27 Jan 2017 12:16:18 +0000 committer: Simon Riggs <simon@2ndQuadrant.com> date : Fri, 27 Jan 2017 12:16:18 +0000
Add castNode(type, ptr) for safe casting between NodeTag based types.
commit : cf8c86af950b6ccb05cd81684b848f35b2317fdc author : Andres Freund <firstname.lastname@example.org> date : Thu, 26 Jan 2017 16:47:03 -0800 committer: Andres Freund <email@example.com> date : Thu, 26 Jan 2017 16:47:03 -0800
The new function allows casting from one NodeTag based type to another, while asserting that the conversion is valid. This replaces the common pattern of doing a cast and an Assert(IsA(ptr, type)) close by.

As this seems likely to be used pervasively, we decided to backpatch the addition of this macro. Otherwise, backpatched fixes are more likely not to work on back-branches.

On branches before 9.6, where we do not yet rely on inline functions being available, the type assertion is only performed if PG_USE_INLINE support is detected. The cast obviously is performed regardless.

For the benefit of verifying the macro compiles in the back-branches, this commit contains a single use of the new macro. On master, a somewhat larger conversion will be committed separately.

Author: Peter Eisentraut and Andres Freund
Reviewed-By: Tom Lane
Discussion: https://firstname.lastname@example.org
Backpatch: 9.2-
Reset hot standby xmin after restart
commit : 800d89a98a59e50cb69049fb91a9122396df64eb author : Simon Riggs <simon@2ndQuadrant.com> date : Thu, 26 Jan 2017 20:10:19 +0000 committer: Simon Riggs <simon@2ndQuadrant.com> date : Thu, 26 Jan 2017 20:10:19 +0000
hot_standby_feedback worked correctly when reset via a reload, but if the server was restarted rather than reloaded, the xmin was not reset. Force a reset always if hot_standby_feedback is enabled at startup.

Ants Aasma, Craig Ringer

Reported-by: Ants Aasma
Ensure that a tsquery like '!foo' matches empty tsvectors.
commit : 2c1976a6cc4bc2447bfd1e422c5b9ebf2a4d893f author : Tom Lane <email@example.com> date : Thu, 26 Jan 2017 12:17:47 -0500 committer: Tom Lane <firstname.lastname@example.org> date : Thu, 26 Jan 2017 12:17:47 -0500
!foo means "the tsvector does not contain foo", and therefore it should match an empty tsvector. ts_match_vq() overenthusiastically supposed that an empty tsvector could never match any query, so it forcibly returned FALSE, the wrong answer. Remove the premature optimization.

Our behavior on this point was inconsistent, because while seqscans and GIST index searches both failed to match empty tsvectors, GIN index searches would find them, since GIN scans don't rely on ts_match_vq(). That makes this certainly a bug, not a debatable definition disagreement, so back-patch to all supported branches.

Report and diagnosis by Tom Dunstan (bug #14515); added test cases by me.

Discussion: https://email@example.com
Revert "Fix comments in StrategyNotifyBgWriter()."
commit : fb9d2ed6151d1220fc54b86142e3be85ce90ef9d author : Tatsuo Ishii <firstname.lastname@example.org> date : Tue, 24 Jan 2017 10:29:04 +0900 committer: Tatsuo Ishii <email@example.com> date : Tue, 24 Jan 2017 10:29:04 +0900
This reverts commit a73cc3eff3831d98ea3c6dbeb978b96f1bc72a42, which tried to update the comments to reflect the function's API change; that change, however, was made only in 9.5 and later.
Fix comments in StrategyNotifyBgWriter().
commit : a73cc3eff3831d98ea3c6dbeb978b96f1bc72a42 author : Tatsuo Ishii <firstname.lastname@example.org> date : Tue, 24 Jan 2017 09:39:11 +0900 committer: Tatsuo Ishii <email@example.com> date : Tue, 24 Jan 2017 09:39:11 +0900
The interface for the function was changed in d72731a70450b5e7084991b9caa15cb58a2820df, but the function's comments were not updated.

Patch by Yugo Nagata.
doc: Update URL for Microsoft download site
commit : 7de7f80a5fd7d0eaddbcaf60881bdc85dc1f91d7 author : Peter Eisentraut <firstname.lastname@example.org> date : Tue, 17 Jan 2017 12:00:00 -0500 committer: Peter Eisentraut <email@example.com> date : Tue, 17 Jan 2017 12:00:00 -0500
Avoid uselessly respawning the autovacuum launcher at high speed.
commit : 806f9b3d764c35bf584fdb674267e8d910ee08bb author : Robert Haas <firstname.lastname@example.org> date : Fri, 20 Jan 2017 15:55:45 -0500 committer: Robert Haas <email@example.com> date : Fri, 20 Jan 2017 15:55:45 -0500
When (1) autovacuum = off and (2) there's at least one database with an XID age greater than autovacuum_freeze_max_age and (3) all tables in that database that need vacuuming are already being processed by a worker and (4) the autovacuum launcher is started, a kind of infinite loop occurs. The launcher starts a worker and immediately exits. The worker, finding no work to do, immediately starts the launcher, supposedly so that the next database can be processed. But because datfrozenxid for that database hasn't been advanced yet, the new worker gets put right back into the same database as the old one, where it once again starts the launcher and exits. High-speed ping pong ensues.

There are several possible ways to break the cycle; this seems like the safest one.

Amit Khandekar (code) and Robert Haas (comments), reviewed by Álvaro Herrera.

Discussion: http://postgr.es/m/CAJ3gD9eWejf72HKquKSzax0r+epS=nAbQKNnykkMA0E8c+rMDg@mail.gmail.com
Reset the proper GUC in create_index test.
commit : 6290f8e966ef3e2796522a3ea11b087a99f63413 author : Tom Lane <firstname.lastname@example.org> date : Wed, 18 Jan 2017 16:33:18 -0500 committer: Tom Lane <email@example.com> date : Wed, 18 Jan 2017 16:33:18 -0500
Thinko in commit a4523c5aa. It doesn't really affect anything at present, but it would be a problem if any tests added later in this file ought to get index-only-scan plans. Back-patch, like the previous commit, just to avoid surprises in case we add such a test and then back-patch it.

Nikita Glukhov

Discussion: https://firstname.lastname@example.org
Change some test macros to return true booleans
commit : 75c155f65b046686242ece4bbfb688359f28feea author : Alvaro Herrera <email@example.com> date : Wed, 18 Jan 2017 18:06:13 -0300 committer: Alvaro Herrera <firstname.lastname@example.org> date : Wed, 18 Jan 2017 18:06:13 -0300
These macros work fine when they are used directly in an "if" test or similar, but as soon as the return values are assigned to boolean variables (or passed as boolean arguments to some function), they become bugs, hopefully caught by compiler warnings. To avoid future problems, fix the definitions so that they return actual booleans.

To further minimize the risk that somebody uses them in back-patched fixes that only work correctly in branches starting from the current master and not in old ones, back-patch the change to supported branches as appropriate.

See also commit af4472bcb88ab36b9abbe7fd5858e570a65a2d1a, and the long discussion (and larger patch) in the thread mentioned in its commit message.

Discussion: https://email@example.com
Fix an assertion failure related to an exclusive backup.
commit : 9e7f00d861fffacaec49473f89513ee8abca076c author : Fujii Masao <firstname.lastname@example.org> date : Tue, 17 Jan 2017 17:31:51 +0900 committer: Fujii Masao <email@example.com> date : Tue, 17 Jan 2017 17:31:51 +0900
Previously, multiple sessions could execute pg_start_backup() and pg_stop_backup() to start and stop an exclusive backup at the same time. This could trigger the assertion failure FailedAssertion("!(XLogCtl->Insert.exclusiveBackup)"). This happened because, even while pg_start_backup() was starting an exclusive backup, another session could run pg_stop_backup() concurrently and mark the backup as not-in-progress unconditionally.

This patch introduces ExclusiveBackupState, indicating the state of an exclusive backup. This state is used to ensure that there is only one session running pg_start_backup() or pg_stop_backup() at the same time, to avoid the assertion failure.

Back-patch to all supported versions.

Author: Michael Paquier
Reviewed-By: Kyotaro Horiguchi and me
Reported-By: Andreas Seltenreich
Discussion: <firstname.lastname@example.org>
Throw suitable error for COPY TO STDOUT/FROM STDIN in a SQL function.
commit : c3775577878fc89b7b5ccb3f222cd8a01425ccb6 author : Tom Lane <email@example.com> date : Sat, 14 Jan 2017 13:27:47 -0500 committer: Tom Lane <firstname.lastname@example.org> date : Sat, 14 Jan 2017 13:27:47 -0500
A client copy can't work inside a function because the FE/BE wire protocol doesn't support nesting of a COPY operation within query results. (Maybe it could, but the protocol spec doesn't suggest that clients should support this, and libpq for one certainly doesn't.)

In most PLs, this prohibition is enforced by spi.c, but SQL functions don't use SPI. A comparison of _SPI_execute_plan() and init_execution_state() shows that rejecting client COPY is the only discrepancy in what they allow, so there are no other similar bugs.

This is an astonishingly ancient oversight, so back-patch to all supported branches.

Report: https://postgr.es/m/BY2PR05MB2309EABA3DEFA0143F50F0D593780@BY2PR05MB2309.namprd05.prod.outlook.com
pg_restore: Don't allow non-positive number of jobs
commit : 2c72d9c5e3f2f292ebcc30a37755d87b605c7fd5 author : Stephen Frost <email@example.com> date : Wed, 11 Jan 2017 15:46:03 -0500 committer: Stephen Frost <firstname.lastname@example.org> date : Wed, 11 Jan 2017 15:46:03 -0500
pg_restore will currently accept invalid values for the number of parallel jobs to run (e.g., -1), unlike pg_dump, which does check that the value provided is reasonable.

Worse, '-1' is actually a valid, independent parameter (as an alias for --single-transaction), leading to potentially completely unexpected results from a command line such as:

-> pg_restore -j -1

where a user would get neither parallel jobs nor a single transaction.

Add validity checking of the parallel jobs option, as we already have in pg_dump, before we try to open up the archive. Also move the check that we haven't been asked to run more parallel jobs than possible on Windows to the same place, so we do all the option validity checking before opening the archive.

Back-patch all the way, though for 9.2 we're adding the Windows-specific check against MAXIMUM_WAIT_OBJECTS, as that check wasn't back-patched originally.

Discussion: https://www.postgresql.org/message-id/20170110044815.GC18360%40tamriel.snowman.net
Fix invalid-parallel-jobs error message
commit : 499606c806c387ba2f9a9ee773e6b92d99e27221 author : Stephen Frost <email@example.com> date : Mon, 9 Jan 2017 23:09:37 -0500 committer: Stephen Frost <firstname.lastname@example.org> date : Mon, 9 Jan 2017 23:09:37 -0500
Including the program name twice is not helpful:

-> pg_dump -j -1
pg_dump: pg_dump: invalid number of parallel jobs

Correct by removing the progname from the exit_horribly() call used when validating the number of parallel jobs. Noticed while testing various pg_dump error cases.

Back-patch to 9.3, where parallel pg_dump was added.
Invalidate cached plans on FDW option changes.
commit : e4380e4cf68a647d87f0e6207b513d49e802b45c author : Tom Lane <email@example.com> date : Fri, 6 Jan 2017 14:12:52 -0500 committer: Tom Lane <firstname.lastname@example.org> date : Fri, 6 Jan 2017 14:12:52 -0500
This fixes problems where a plan must change but fails to do so, as seen in a bug report from Rajkumar Raghuwanshi.

For ALTER FOREIGN TABLE OPTIONS, do this through the standard method of forcing a relcache flush on the table. For ALTER FOREIGN DATA WRAPPER and ALTER SERVER, just flush the whole plan cache on any change in pg_foreign_data_wrapper or pg_foreign_server. That matches the way we handle some other low-probability cases such as opclass changes, and it's unclear that the case arises often enough to be worth working harder. Besides, that gives a patch that is simple enough to back-patch with confidence.

Back-patch to 9.3. In principle we could apply the code change to 9.2 as well, but (a) we lack postgres_fdw to test it with, (b) it's doubtful that anyone is doing anything exciting enough with FDWs that far back to need this desperately, and (c) the patch doesn't apply cleanly.

Patch originally by Amit Langote, reviewed by Etsuro Fujita and Ashutosh Bapat, who each contributed substantial changes as well.

Discussion: https://postgr.es/m/CAKcux6m5cA6rRPTKkqVdJ-R=KKDfe35Q_ZuUqxDSV_4hwgaemail@example.com
Fix handling of empty arrays in array_fill().
commit : 4e446563be720760efcb18e4a83b7189638f6ae8 author : Tom Lane <firstname.lastname@example.org> date : Thu, 5 Jan 2017 11:33:51 -0500 committer: Tom Lane <email@example.com> date : Thu, 5 Jan 2017 11:33:51 -0500
array_fill(..., array) produced an empty array, which is probably what users expect, but it was a one-dimensional zero-length array which is not our standard representation of empty arrays. Also, for no very good reason, it rejected empty input arrays; that case should be allowed and produce an empty output array.

In passing, remove the restriction that the input array(s) have lower bound 1. That seems rather pointless, and it would have needed extra complexity to make the check deal with empty input arrays.

Per bug #14487 from Andrew Gierth. It's been broken all along, so back-patch to all supported branches.

Discussion: https://firstname.lastname@example.org
Handle OID column inheritance correctly in ALTER TABLE ... INHERIT.
commit : 696d40d303af1e92fbbe4192a93c5a94340fc22c author : Tom Lane <email@example.com> date : Wed, 4 Jan 2017 18:00:11 -0500 committer: Tom Lane <firstname.lastname@example.org> date : Wed, 4 Jan 2017 18:00:11 -0500
Inheritance operations must treat the OID column, if any, much like regular user columns. But MergeAttributesIntoExisting() neglected to do that, leading to weird results after a table with OIDs is associated to a parent with OIDs via ALTER TABLE ... INHERIT.

Report and patch by Amit Langote, reviewed by Ashutosh Bapat, some adjustments by me. It's been broken all along, so back-patch to all supported branches.

Discussion: https://email@example.com
Update copyright for 2017
commit : 1dfe7f068121d15d59c85ecd936d154922e195d1 author : Bruce Momjian <firstname.lastname@example.org> date : Tue, 3 Jan 2017 12:37:53 -0500 committer: Bruce Momjian <email@example.com> date : Tue, 3 Jan 2017 12:37:53 -0500
Backpatch-through: certain files through 9.2
Remove bogus notice that older clients might not work with MD5 passwords.
commit : ada2cdb61015e6baed5bd9fcce7fa4985147b27d author : Heikki Linnakangas <firstname.lastname@example.org> date : Tue, 3 Jan 2017 14:09:01 +0200 committer: Heikki Linnakangas <email@example.com> date : Tue, 3 Jan 2017 14:09:01 +0200
That was written when we still had "crypt" authentication, and it was referring to the fact that an older client might support "crypt" authentication but not "md5". But we haven't supported "crypt" for years. (As soon as we add a new authentication mechanism that doesn't work with MD5 hashes, we'll need a similar notice again. But this text as it's worded now is just wrong.)

Backpatch to all supported versions.

Discussion: https://firstname.lastname@example.org
Silence compiler warnings
commit : 8cb9d01829825e55c5637db40f5cd0690111cf09 author : Joe Conway <email@example.com> date : Mon, 2 Jan 2017 14:12:17 -0800 committer: Joe Conway <firstname.lastname@example.org> date : Mon, 2 Jan 2017 14:12:17 -0800
In GetCachedPlan(), initialize 'plan' to silence a compiler warning, but also add an Assert() to make sure we don't ever actually fall through with 'plan' still being set to NULL, since we are about to dereference it.

Back-patch to 9.2.

Author: Stephen Frost
Discussion: https://postgr.es/m/20161129152102.GR13284%40tamriel.snowman.net
Silence compiler warning
commit : f832a1e9e9a889c1c08a60db5520327bc0569fd6 author : Magnus Hagander <email@example.com> date : Sun, 1 Jan 2017 13:23:43 +0100 committer: Magnus Hagander <firstname.lastname@example.org> date : Sun, 1 Jan 2017 13:23:43 +0100
Caused by the backpatch of f650882 past the point where interrupt handling was changed. Noted by Dean Rasheed
Fix incorrect example of to_timestamp() usage.
commit : ea853db4a58fe8c0fcfe7f945f5ca5c800264e5a author : Tom Lane <email@example.com> date : Thu, 29 Dec 2016 18:05:34 -0500 committer: Tom Lane <firstname.lastname@example.org> date : Thu, 29 Dec 2016 18:05:34 -0500
Must use HH24, not HH, to read an hour value exceeding 12. This was already fixed in HEAD in commit d3cd36a13, but I didn't think of backpatching it.

Report: https://email@example.com
Fix interval_transform so it doesn't throw away non-no-op casts.
commit : 0b947b692c5e6ef3c20a6b24cad5a3b29f3210f1 author : Tom Lane <firstname.lastname@example.org> date : Tue, 27 Dec 2016 15:43:54 -0500 committer: Tom Lane <email@example.com> date : Tue, 27 Dec 2016 15:43:54 -0500
interval_transform() contained two separate bugs that caused it to sometimes mistakenly decide that a cast from interval to restricted interval is a no-op and throw it away.

First, it was wrong to rely on dt.h's field type macros to have an ordering consistent with the field's significance; in one case they do not. This led to mistakenly treating YEAR as less significant than MONTH, so that a cast from INTERVAL MONTH to INTERVAL YEAR was incorrectly discarded.

Second, fls(1<<k) produces k+1 not k, so comparing its output directly to SECOND was wrong. This led to supposing that a cast to INTERVAL MINUTE was really a cast to INTERVAL SECOND and so could be discarded.

To fix, get rid of the use of fls(), and make a function based on intervaltypmodout to produce a field ID code adapted to the need here.

Per bug #14479 from Piotr Stefaniak. Back-patch to 9.2 where transform functions were introduced, because this code was born broken.

Discussion: https://firstname.lastname@example.org
Explain unaccounted for space in pgstattuple.
commit : 99ae68da22df7bc22e83ae8901e16ec14c9dd837 author : Andrew Dunstan <email@example.com> date : Tue, 27 Dec 2016 11:23:46 -0500 committer: Andrew Dunstan <firstname.lastname@example.org> date : Tue, 27 Dec 2016 11:23:46 -0500
In addition to space accounted for by tuple_len, dead_tuple_len and free_space, the table_len includes page overhead, the item pointers table and padding bytes. Backpatch to live branches.
Remove triggerable Assert in hashname().
commit : 56d58fa0415cfef3bc157d1139776d81139bcac8 author : Tom Lane <email@example.com> date : Mon, 26 Dec 2016 14:58:02 -0500 committer: Tom Lane <firstname.lastname@example.org> date : Mon, 26 Dec 2016 14:58:02 -0500
hashname() asserted that the key string it is given is shorter than NAMEDATALEN. That should surely always be true if the input is in fact a regular value of type "name". However, for reasons of coding convenience, we allow plain old C strings to be treated as "name" values in many places. Some SQL functions accept arbitrary "text" inputs, convert them to C strings, and pass them otherwise-untransformed to syscache lookups for name columns, allowing an overlength input value to trigger hashname's Assert.

This would be a DOS problem, except that it only happens in assert-enabled builds which aren't recommended for production. In a production build, you'll just get a name lookup error, since regardless of the hash value computed by hashname, the later equality comparison checks can't match. Likewise, if the catalog lookup is done by seqscan or indexscan searches, there will just be a lookup error, since the name comparison functions don't contain any similar length checks, and will see an overlength input as unequal to any stored entry.

After discussion we concluded that we should simply remove this Assert. It's inessential to hashname's own functionality, and having such an assertion in only some paths for name lookup is more of a foot-gun than a useful check. There may or may not be a case for the affected callers to do something other than let the name lookup fail, but we'll consider that separately; in any case we probably don't want to change such behavior in the back branches.

Per report from Tushar Ahuja. Back-patch to all supported branches.

Report: https://email@example.com
Discussion: https://firstname.lastname@example.org
pg_dumpall: Include --verbose option in --help output
commit : 2ed97140c743b4a4cf725a9d1b54439d7a8dac22 author : Stephen Frost <email@example.com> date : Sat, 24 Dec 2016 01:42:12 -0500 committer: Stephen Frost <firstname.lastname@example.org> date : Sat, 24 Dec 2016 01:42:12 -0500
The -v/--verbose option was not included in the output from --help for pg_dumpall even though it's in the pg_dumpall documentation and has apparently been around since pg_dumpall was reimplemented in C in 2002. Fix that by adding it.

Pointed out by Daniel Westermann. Back-patch to all supported branches.

Discussion: https://www.postgresql.org/message-id/2020970042.4589542.1482482101585.JavaMail.zimbra%40dbi-services.com
Fix tab completion in psql for ALTER DEFAULT PRIVILEGES
commit : 98f30a0e7d483402f2a7ebe5e6bbbbbba044c42c author : Stephen Frost <email@example.com> date : Fri, 23 Dec 2016 21:01:45 -0500 committer: Stephen Frost <firstname.lastname@example.org> date : Fri, 23 Dec 2016 21:01:45 -0500
When providing tab completion for ALTER DEFAULT PRIVILEGES, we are including the list of roles as possible options for completion after the GRANT or REVOKE. Further, we accept FOR ROLE/IN SCHEMA at the same time and in either order, but the tab completion was only working for one or the other. Lastly, we weren't using the actual list of allowed kinds of objects for default privileges for completion after 'GRANT X ON', but instead were completing to what 'GRANT X ON' supports, which isn't the same at all.

Address these issues by improving the forward tab completion for ALTER DEFAULT PRIVILEGES and then constraining and correcting how the tail completion is done when it is for ALTER DEFAULT PRIVILEGES.

Back-patch the forward/tail tab completion to 9.6, where we made it easy to handle such cases. For 9.5 and earlier, correct the initial tab completion to at least be correct as far as it goes, and then add a check so GRANT/REVOKE is only tab-completed when it is the start of the command; that way we don't try to do tab completion after we get to the GRANT/REVOKE part of the ALTER DEFAULT PRIVILEGES command, which is better than providing incorrect completions.

Initial patch for master and 9.6 by Gilles Darold, though I cleaned it up and added a few comments. All bugs in the 9.5 and earlier patch are mine.

Discussion: https://email@example.com
Doc: improve index entry for "median".
commit : 090a3870aa4ed334a06a5cd81b74fde368ecf379 author : Tom Lane <firstname.lastname@example.org> date : Fri, 23 Dec 2016 12:53:09 -0500 committer: Tom Lane <email@example.com> date : Fri, 23 Dec 2016 12:53:09 -0500
We had an index entry for "median" attached to the percentile_cont function entry, which was pretty useless because a person following the link would never realize that that function was the one they were being hinted to use. Instead, make the index entry point at the example in syntax-aggregates, and add a <seealso> link to "percentile".

Also, since that example explicitly claims to be calculating the median, make it use percentile_cont not percentile_disc. This makes no difference in terms of the larger goals of that section, but so far as I can find, nearly everyone thinks that "median" means the continuous not discrete calculation.

Per gripe from Steven Winfield. Back-patch to 9.4 where we introduced percentile_cont.

Discussion: https://firstname.lastname@example.org
Use TSConfigRelationId in AlterTSConfiguration()
commit : ac1ec9c1f0365293e9fdc26f06545b3b48817230 author : Stephen Frost <email@example.com> date : Thu, 22 Dec 2016 17:08:58 -0500 committer: Stephen Frost <firstname.lastname@example.org> date : Thu, 22 Dec 2016 17:08:58 -0500
When we are altering a text search configuration, we are getting the tuple from pg_ts_config and using its OID, so use TSConfigRelationId when invoking any post-alter hooks and setting the object address.

Further, in the functions called from AlterTSConfiguration(), we're saving information about the command via EventTriggerCollectAlterTSConfig(), so we should be setting commandCollected to true. Also add a regression test to test_ddl_deparse for ALTER TEXT SEARCH CONFIGURATION.

Author: Artur Zakirov, a few additional comments by me
Discussion: https://www.postgresql.org/message-id/57a71eba-f2c7-e7fd-6fc0-2126ec0b39bd%40postgrespro.ru

Back-patch the fix for the InvokeObjectPostAlterHook() call to 9.3 where it was introduced, and the fix for the ObjectAddressSet() call and setting commandCollected to true to 9.5 where those changes to ProcessUtilitySlow() were introduced.
Fix broken error check in _hash_doinsert.
commit : c2f78e5e02e72e7f74f549fa07a435b1264ba308 author : Robert Haas <email@example.com> date : Thu, 22 Dec 2016 13:54:40 -0500 committer: Robert Haas <firstname.lastname@example.org> date : Thu, 22 Dec 2016 13:54:40 -0500
You can't just cast a HashMetaPage to a Page, because the meta page data is stored after the page header, not at offset 0. Fortunately, this didn't break anything because it happens to find hashm_bsize at the offset at which it expects to find pd_pagesize_version, and the values are close enough to the same that this works out. Still, it's a bug, so back-patch to all supported versions.

Mithun Cy, revised a bit by me.
Make dblink try harder to form useful error messages
commit : 76943f54a743e3d7875a6772fec89bb57a8f2118 author : Joe Conway <email@example.com> date : Thu, 22 Dec 2016 09:47:36 -0800 committer: Joe Conway <firstname.lastname@example.org> date : Thu, 22 Dec 2016 09:47:36 -0800
When libpq encounters a connection-level error, e.g. runs out of memory while forming a result, there will be no error associated with PGresult, but a message will be placed into PGconn's error buffer. postgres_fdw takes care to use the PGconn error message when PGresult does not have one, but dblink has been negligent in that regard. Modify dblink to mirror what postgres_fdw has been doing.

Back-patch to all supported branches.

Author: Joe Conway
Reviewed-By: Tom Lane
Discussion: https://postgr.es/m/02fa2d90-2efd-00bc-fefc-c23c00eb671e%40joeconway.com
Protect dblink from invalid options when using postgres_fdw server
commit : cb687e0acfdfa3699e4b19b597aba37f686b9f16 author : Joe Conway <email@example.com> date : Thu, 22 Dec 2016 09:19:08 -0800 committer: Joe Conway <firstname.lastname@example.org> date : Thu, 22 Dec 2016 09:19:08 -0800
When dblink uses a postgres_fdw server name for its connection, it is possible for the connection to have options that are invalid with dblink (e.g. "updatable"). The recommended way to avoid this problem is to use dblink_fdw servers instead. However there are use cases for using postgres_fdw, and possibly other FDWs, for dblink connection options, therefore protect against trying to use any options that do not apply by using is_valid_dblink_option() when building the connection string from the options. Back-patch to 9.3. Although 9.2 supports FDWs for connection info, is_valid_dblink_option() did not yet exist, and neither did postgres_fdw, at least in the postgres source tree. Given the lack of previous complaints, fixing that seems too invasive/not worth it. Author: Corey Huinker Reviewed-By: Joe Conway Discussion: https://postgr.es/m/CADkLM%3DfWyXVEyYcqbcRnxcHutkP45UHU9WD7XpdZaMfe7S%3DRwA%40mail.gmail.com
Give a useful error message if uuid-ossp is built without preconfiguration.
commit : 2807080fb995110466c2207da2f6bbebe28875f4 author : Tom Lane <email@example.com> date : Thu, 22 Dec 2016 11:19:04 -0500 committer: Tom Lane <firstname.lastname@example.org> date : Thu, 22 Dec 2016 11:19:04 -0500
Before commit b8cc8f947, it was possible to build contrib/uuid-ossp without having told configure you meant to; you could just cd into that directory and "make". That no longer works because the code depends on configure to have done header and library probes, but the ensuing error messages are not so easy to interpret if you're not an old C hand. We've gotten a couple of complaints recently from people trying to do this the low-tech way, so add an explicit #error directing the user to use --with-uuid. (In principle we might want to do something similar in the other optionally-built contrib modules; but I don't think any of the others have ever worked without preconfiguration, so there are no bad habits to break people of.) Back-patch to 9.4 where the previous commit came in. Report: https://postgr.es/m/CAHeEsBf42AWTnk=1qJvFv+mYgRFm07Knsfuc86Ono8nRjf3tvQ@mail.gmail.com Report: https://postgr.es/m/CAKYdkBrUaZX+F6KpmzoHqMtiUqCtAW_w6Dgvr6F0WTiopuGxow@mail.gmail.com
Fix buffer overflow on particularly named files and clarify documentation about output file naming.
commit : 3af172f7b68763fbbf720d11be88784f21c4c1d1 author : Michael Meskes <email@example.com> date : Thu, 22 Dec 2016 08:28:13 +0100 committer: Michael Meskes <firstname.lastname@example.org> date : Thu, 22 Dec 2016 08:28:13 +0100
Patch by Tsunakawa, Takayuki <email@example.com>
Improve dblink error message when remote does not provide it
commit : 0f5b1867c2e6b752336f0b32d05f94c424664f79 author : Joe Conway <firstname.lastname@example.org> date : Wed, 21 Dec 2016 15:48:28 -0800 committer: Joe Conway <email@example.com> date : Wed, 21 Dec 2016 15:48:28 -0800
When dblink or postgres_fdw detects an error on the remote side of the connection, it will try to construct a local error message as best it can using libpq's PQresultErrorField(). When no primary message is available, it was bailing out with an unhelpful "unknown error". Make that message better and more style guide compliant. Per discussion on hackers. Backpatch to 9.2 except postgres_fdw which didn't exist before 9.3. Discussion: https://postgr.es/m/19872.1482338965%40sss.pgh.pa.us
Fix detection of unfinished Unicode surrogate pair at end of string.
commit : d0f60e4cc5f20bf64ee12d740e52db2773a93c21 author : Tom Lane <firstname.lastname@example.org> date : Wed, 21 Dec 2016 17:39:32 -0500 committer: Tom Lane <email@example.com> date : Wed, 21 Dec 2016 17:39:32 -0500
The U&'...' and U&"..." syntaxes silently discarded a surrogate pair start (that is, a code between U+D800 and U+DBFF) if it occurred at the very end of the string. This seems like an obvious oversight, since we throw an error for every other invalid combination of surrogate characters, including the very same situation in E'...' syntax. This has been wrong since the pair processing was added (in 9.0), so back-patch to all supported branches. Discussion: https://firstname.lastname@example.org
Improve ALTER TABLE documentation
commit : b5fe1d4cd93d281eef7ab6718716d169609b000a author : Stephen Frost <email@example.com> date : Wed, 21 Dec 2016 15:05:14 -0500 committer: Stephen Frost <firstname.lastname@example.org> date : Wed, 21 Dec 2016 15:05:14 -0500
The ALTER TABLE documentation wasn't terribly clear when it came to which commands could be combined together and what it meant when they were. In particular, SET TABLESPACE *can* be combined with other commands, when it's operating against a single table, but not when multiple tables are being moved with ALL IN TABLESPACE. Further, the actions are applied together but not really in 'parallel', at least today. Pointed out by: Amit Langote Improved wording from Tom. Back-patch to 9.4, where the ALL IN TABLESPACE option was added. Discussion: https://www.postgresql.org/message-id/14c535b4-13ef-0590-1b98-76af355a0763%40lab.ntt.co.jp
Fix dumping of casts and transforms using built-in functions
commit : 13f51dacfab83f868df09370d10d4d9932fc0a61 author : Stephen Frost <email@example.com> date : Wed, 21 Dec 2016 13:47:23 -0500 committer: Stephen Frost <firstname.lastname@example.org> date : Wed, 21 Dec 2016 13:47:23 -0500
In pg_dump.c dumpCast() and dumpTransform(), we would happily ignore the cast or transform if it happened to use a built-in function because we weren't including the information about built-in functions when querying pg_proc from getFuncs(). Modify the query in getFuncs() to also gather information about functions which are used by user-defined casts and transforms (where "user-defined" means "has an OID >= FirstNormalObjectId"). This also adds to the TAP regression tests for 9.6 and master to cover these types of objects. Back-patch all the way for casts, back to 9.5 for transforms. Discussion: https://www.postgresql.org/message-id/flat/20160504183952.GE10850%40tamriel.snowman.net
For 8.0 servers, get last built-in oid from pg_database
commit : 107943f1a9b2823e5faef78f3ca1400a7612a24e author : Stephen Frost <email@example.com> date : Wed, 21 Dec 2016 13:47:23 -0500 committer: Stephen Frost <firstname.lastname@example.org> date : Wed, 21 Dec 2016 13:47:23 -0500
We didn't start ensuring that all built-in objects had OIDs less than 16384 until 8.1, so for 8.0 servers we still need to query the value out of pg_database. We need this, in particular, to distinguish which casts were built-in and which were user-defined. For HEAD, we only worry about going back to 8.0; for the back-branches, we also ensure that 7.0-7.4 work. Discussion: https://www.postgresql.org/message-id/flat/20160504183952.GE10850%40tamriel.snowman.net
Fix order of operations in CREATE OR REPLACE VIEW.
commit : cad24980ef40a648fc72727ca14557968db29295 author : Dean Rasheed <email@example.com> date : Wed, 21 Dec 2016 17:03:54 +0000 committer: Dean Rasheed <firstname.lastname@example.org> date : Wed, 21 Dec 2016 17:03:54 +0000
When CREATE OR REPLACE VIEW acts on an existing view, don't update the view options until after the view query has been updated. This is necessary in the case where CREATE OR REPLACE VIEW is used on an existing view that is not updatable, and the new view is updatable and specifies the WITH CHECK OPTION. In this case, attempting to apply the new options to the view before updating its query fails, because the options are applied using the ALTER TABLE infrastructure which checks that WITH CHECK OPTION is only applied to an updatable view. If new columns are being added to the view, that is also done using the ALTER TABLE infrastructure, but it is important that that still be done before updating the view query, because the rules system checks that the query columns match those on the view relation. Added a comment to explain that, in case someone is tempted to move that to where the view options are now being set. Back-patch to 9.4 where WITH CHECK OPTION was added. Report: https://postgr.es/m/CAEZATCUp%3Dz%3Ds4SzZjr14bfct_bdJNwMPi-gFi3Xc5k1ntbsAgQ%40mail.gmail.com
Fix base backup rate limiting in presence of slow i/o
commit : f6508827afe76b2c3735a9ce073620e708d60c79 author : Magnus Hagander <email@example.com> date : Mon, 19 Dec 2016 10:11:04 +0100 committer: Magnus Hagander <firstname.lastname@example.org> date : Mon, 19 Dec 2016 10:11:04 +0100
When source i/o on disk was too slow compared to the rate limiting specified, the system could end up with a negative value for sleep that it never got out of, which caused rate limiting to effectively be turned off. Discussion: https://postgr.es/m/CABUevEy_-e0YvL4ayoX8bH_Ja9w%2BBHoP6jUgdxZuG2nEj3uAfQ%40mail.gmail.com Analysis by me, patch by Antonin Houska
In contrib/uuid-ossp, #include headers needed for ntohl() and ntohs().
commit : 20c27fdfd3af8da68837ab1cbea08100619f91b6 author : Tom Lane <email@example.com> date : Sat, 17 Dec 2016 22:24:13 -0500 committer: Tom Lane <firstname.lastname@example.org> date : Sat, 17 Dec 2016 22:24:13 -0500
Oversight in commit b8cc8f947. I just noticed this causes compiler warnings on FreeBSD, and it really ought to cause warnings elsewhere too: all references I can find say that <arpa/inet.h> is required for these. We have a lot of code elsewhere that thinks that both <netinet/in.h> and <arpa/inet.h> should be included for these functions, so do it that way here too, even though <arpa/inet.h> ought to be sufficient according to the references I consulted. Back-patch to 9.4 where the previous commit landed.
Fix off-by-one in memory allocation for quote_literal_cstr().
commit : 779325478e7e867c636c3e1ad1e5df734e7549e5 author : Heikki Linnakangas <email@example.com> date : Fri, 16 Dec 2016 12:50:20 +0200 committer: Heikki Linnakangas <firstname.lastname@example.org> date : Fri, 16 Dec 2016 12:50:20 +0200
The calculation didn't take into account the NULL terminator. That led to overwriting the palloc'd buffer by one byte, if the input consists entirely of backslashes. For example "format('%L', E'\\')". Fixes bug #14468. Backpatch to all supported versions. Report: https://www.postgresql.org/message-id/20161216105001.13334.42819%40wrigleys.postgresql.org
Sync our copy of the timezone library with IANA release tzcode2016j.
commit : b95f4bf07468165b09c29816def5292200d393cf author : Tom Lane <email@example.com> date : Thu, 15 Dec 2016 14:32:42 -0500 committer: Tom Lane <firstname.lastname@example.org> date : Thu, 15 Dec 2016 14:32:42 -0500
This is a trivial update (consisting in fact only of the addition of a comment). The point is just to get back to being synced with an official release of tzcode, rather than some ad-hoc point in their commit history, which is where commit 1f87181e1 left it.
Back-patch fcff8a575198478023ada8a48e13b50f70054766 as a bug fix.
commit : 4b9d466c14083003bd80e1ce02e617b2b92df7fe author : Kevin Grittner <email@example.com> date : Tue, 13 Dec 2016 19:05:12 -0600 committer: Kevin Grittner <firstname.lastname@example.org> date : Tue, 13 Dec 2016 19:05:12 -0600
When there is both a serialization failure and a unique violation, throw the former rather than the latter. When initially pushed, this was viewed as a feature to assist application framework developers, so that they could more accurately determine when to retry a failed transaction, but a test case presented by Ian Jackson has shown that this patch can prevent serialization anomalies in some cases where a unique violation is caught within a subtransaction, the work of that subtransaction is discarded, and no error is thrown. That makes this a bug fix, so it is being back-patched to all supported branches where it is not already present (i.e., 9.2 to 9.5). Discussion: https://email@example.com Discussion: https://firstname.lastname@example.org
Use "%option prefix" to set API names in ecpg's lexer.
commit : fb12471ebec11417d49e705e8cb77baaa53f0473 author : Tom Lane <email@example.com> date : Sun, 11 Dec 2016 18:04:28 -0500 committer: Tom Lane <firstname.lastname@example.org> date : Sun, 11 Dec 2016 18:04:28 -0500
Back-patch commit 92fb64983 into the pre-9.6 branches. Without this, ecpg fails to build with the latest version of flex. It's not unreasonable that people would want to compile our old branches with recent tools. Per report from Дилян Палаузов. Discussion: https://email@example.com
Build backend/parser/scan.l and interfaces/ecpg/preproc/pgc.l standalone.
commit : 7192865bdc48ded15caaba63fef313ff9e84eb71 author : Tom Lane <firstname.lastname@example.org> date : Sun, 11 Dec 2016 17:44:16 -0500 committer: Tom Lane <email@example.com> date : Sun, 11 Dec 2016 17:44:16 -0500
Back-patch commit 72b1e3a21 into the pre-9.6 branches. As noted in the original commit, this has some extra benefits: we can narrow the scope of the -Wno-error flag that's forced on scan.c. Also, since these grammar and lexer files are so large, splitting them into separate build targets should have some advantages in build speed, particularly in parallel or ccache'd builds. However, the real reason for doing this now is that it avoids symbol-redefinition warnings (or worse) with the latest version of flex. It's not unreasonable that people would want to compile our old branches with recent tools. Per report from Дилян Палаузов. Discussion: https://firstname.lastname@example.org
Prevent crash when ts_rewrite() replaces a non-top-level subtree with null.
commit : 6f5cb982e7df8f277505fd2028be83211d586769 author : Tom Lane <email@example.com> date : Sun, 11 Dec 2016 13:09:57 -0500 committer: Tom Lane <firstname.lastname@example.org> date : Sun, 11 Dec 2016 13:09:57 -0500
When ts_rewrite()'s replacement argument is an empty tsquery, it's supposed to simplify any operator nodes whose operand(s) become NULL; but it failed to do that reliably, because dropvoidsubtree() only examined the top level of the result tree. Rather than make a second recursive pass, let's just give the responsibility to dofindsubquery() to simplify while it's doing the main replacement pass. Per report from Andreas Seltenreich. Artur Zakirov, with some cosmetic changes by me. Back-patch to all supported branches. Discussion: https://email@example.com
Be more careful about Python refcounts while creating exception objects.
commit : 13a4b37b9806bb591aaadd745300b95baec80515 author : Tom Lane <firstname.lastname@example.org> date : Fri, 9 Dec 2016 15:27:23 -0500 committer: Tom Lane <email@example.com> date : Fri, 9 Dec 2016 15:27:23 -0500
PLy_generate_spi_exceptions neglected to do Py_INCREF on the new exception objects, evidently supposing that PyModule_AddObject would do that --- but it doesn't. This left us in a situation where a Python garbage collection cycle could result in deletion of exception object(s), causing server crashes or wrong answers if the exception objects are used later in the session. In addition, PLy_generate_spi_exceptions didn't bother to test for a null result from PyErr_NewException, which at best is inconsistent with the code in PLy_add_exceptions. And PLy_add_exceptions, while it did do Py_INCREF on the exceptions it makes, waited to do that till after some PyModule_AddObject calls, creating a similar risk for failure if garbage collection happened within those calls. To fix, refactor to have just one piece of code that creates an exception object and adds it to the spiexceptions module, bumping the refcount first. Also, let's add an additional refcount to represent the pointer we're going to store in a C global variable or hash table. This should only matter if the user does something weird like delete the spiexceptions Python module, but lack of paranoia has caused us enough problems in PL/Python already. The fact that PyModule_AddObject doesn't do a Py_INCREF of its own explains the need for the Py_INCREF added in commit 4c966d920, so we can improve the comment about that; also, this means we really want to do that before not after the PyModule_AddObject call. The missing Py_INCREF in PLy_generate_spi_exceptions was reported and diagnosed by Rafa de la Torre; the other fixes by me. Back-patch to all supported branches. Discussion: https://postgr.es/m/CA+Fz15kR1OXZv43mDrJb3XY+1MuQYWhx5kx3ea6BRKQp6ezGkg@mail.gmail.com
Fix reporting of column typmods for multi-row VALUES constructs.
commit : c7a62135acfb951d2e6ca6c0cfc71ef4ed08f3bc author : Tom Lane <firstname.lastname@example.org> date : Fri, 9 Dec 2016 12:01:14 -0500 committer: Tom Lane <email@example.com> date : Fri, 9 Dec 2016 12:01:14 -0500
expandRTE() and get_rte_attribute_type() reported the exprType() and exprTypmod() values of the expressions in the first row of the VALUES as being the column type/typmod returned by the VALUES RTE. That's fine for the data type, since we coerce all expressions in a column to have the same common type. But we don't coerce them to have a common typmod, so it was possible for rows after the first one to return values that violate the claimed column typmod. This leads to the incorrect result seen in bug #14448 from Hassan Mahmood, as well as some other corner-case misbehaviors. The desired behavior is the same as we use in other type-unification cases: report the common typmod if there is one, but otherwise return -1 indicating no particular constraint. We fixed this in HEAD by deriving the typmods during transformValuesClause and storing them in the RTE, but that's not a feasible solution in the back branches. Instead, just use a brute-force approach of determining the correct common typmod during expandRTE() and get_rte_attribute_type(). Simple testing says that that doesn't really cost much, at least not in common cases where expandRTE() is only used once per query. It turns out that get_rte_attribute_type() is typically never used at all on VALUES RTEs, so the inefficiency there is of no great concern. Report: https://firstname.lastname@example.org Discussion: https://email@example.com
Log the creation of an init fork unconditionally.
commit : 68e56eef655af48acc2d171fb724a803debe37f3 author : Robert Haas <firstname.lastname@example.org> date : Thu, 8 Dec 2016 14:09:09 -0500 committer: Robert Haas <email@example.com> date : Thu, 8 Dec 2016 14:09:09 -0500
Previously, it was thought that this only needed to be done for the benefit of possible standbys, so wal_level = minimal skipped it. But that's not safe, because during crash recovery we might replay an XLOG_DBASE_CREATE or XLOG_TBLSPC_CREATE record which recursively removes the directory that contains the new init fork. So log it always. The user-visible effect of this bug is that if you create a database or tablespace, then create an unlogged table, then crash without checkpointing, then restart, accessing the table will fail, because it won't have been properly reset. This commit fixes that. Michael Paquier, per a report from Konstantin Knizhnik. Wording of the comments per a suggestion from me.
Restore psql's SIGPIPE setting if popen() fails.
commit : cf59a8a4fe8dd83a069737ff32f1f211f352036c author : Tom Lane <firstname.lastname@example.org> date : Wed, 7 Dec 2016 12:39:24 -0500 committer: Tom Lane <email@example.com> date : Wed, 7 Dec 2016 12:39:24 -0500
Ancient oversight in PageOutput(): if popen() fails, we'd better reset the SIGPIPE handler before returning stdout, because ClosePager() won't. Noticed while fixing the empty-PAGER issue.
Handle empty or all-blank PAGER setting more sanely in psql.
commit : ccb84dae13c9fcb0dec00ee337179e9f36700ba3 author : Tom Lane <firstname.lastname@example.org> date : Wed, 7 Dec 2016 12:19:56 -0500 committer: Tom Lane <email@example.com> date : Wed, 7 Dec 2016 12:19:56 -0500
If the PAGER environment variable is set but contains an empty string, psql would pass it to "sh" which would silently exit, causing whatever query output we were printing to vanish entirely. This is quite mystifying; it took a long time for us to figure out that this was the cause of Joseph Brenner's trouble report. Rather than allowing that to happen, we should treat this as another way to specify "no pager". (We could alternatively treat it as selecting the default pager, but it seems more likely that the former is what the user meant to achieve by setting PAGER this way.) Nonempty, but all-white-space, PAGER values have the same behavior, and it's pretty easy to test for that, so let's handle that case the same way. Most other cases of faulty PAGER values will result in the shell printing some kind of complaint to stderr, which should be enough to diagnose the problem, so we don't need to work harder than this. (Note that there's been an intentional decision not to be very chatty about apparent failure returns from the pager process, since that may happen if, eg, the user quits the pager with control-C or some such. I'd just as soon not start splitting hairs about which exit codes might merit making our own report.) libpq's old PQprint() function was already on board with ignoring empty PAGER values, but for consistency, make it ignore all-white-space values as well. It's been like this a long time, so back-patch to all supported branches. Discussion: https://postgr.es/m/CAFfgvXWLOE2novHzYjmQK8-J6TmHz42G8f3X0SORM44+stUGmw@mail.gmail.com
Make pgwin32_putenv() visit debug CRTs.
commit : b45a4949de999bd1ee42b3f6029b75f6939c74c9 author : Noah Misch <firstname.lastname@example.org> date : Sat, 3 Dec 2016 15:46:36 -0500 committer: Noah Misch <email@example.com> date : Sat, 3 Dec 2016 15:46:36 -0500
This has no effect in the most conventional case, where no relevant DLL uses a debug build. For an example where it does matter, given a debug build of MIT Kerberos, the krb_server_keyfile parameter usually had no effect. Since nobody wants a Heisenbug, back-patch to 9.2 (all supported versions). Christian Ullrich, reviewed by Michael Paquier.
Remove wrong CloseHandle() call.
commit : ec7eacfae271a454e6ad01ea0f710dc71e1d9852 author : Noah Misch <firstname.lastname@example.org> date : Sat, 3 Dec 2016 15:46:35 -0500 committer: Noah Misch <email@example.com> date : Sat, 3 Dec 2016 15:46:35 -0500
In accordance with its own documentation, invoke CloseHandle() only when directed in the documentation for the function that furnished the handle. GetModuleHandle() does not so direct. We have been issuing this call only in the rare event that a CRT DLL contains no "_putenv" symbol, so lack of bug reports is uninformative. Back-patch to 9.2 (all supported versions). Christian Ullrich, reviewed by Michael Paquier.
Refine win32env.c cosmetics.
commit : bf5ecaae4af355256011b7eb4f3f1880e061162c author : Noah Misch <firstname.lastname@example.org> date : Sat, 3 Dec 2016 15:46:35 -0500 committer: Noah Misch <email@example.com> date : Sat, 3 Dec 2016 15:46:35 -0500
Replace use of plain 0 as a null pointer constant. In comments, update terminology and lessen redundancy. Back-patch to 9.2 (all supported versions) for the convenience of back-patching the next two commits. Christian Ullrich and Noah Misch, reviewed (in earlier versions) by Michael Paquier.
Doc: improve description of trim() and related functions.
commit : 523bb1de834e9c05fec0f0d85330cfa15a3000b7 author : Tom Lane <firstname.lastname@example.org> date : Wed, 30 Nov 2016 13:34:14 -0500 committer: Tom Lane <email@example.com> date : Wed, 30 Nov 2016 13:34:14 -0500
Per bug #14441 from Mark Pether, the documentation could be misread, mainly because some of the examples failed to show what happens with a multicharacter "characters to trim" string. Also, while the text description in most of these entries was fairly clear that the "characters" argument is a set of characters not a substring to match, some of them used variant wording that was a bit less clear. trim() itself suffered from both deficiencies and was thus pretty misinterpretable. Also fix failure to explain which of LEADING/TRAILING/BOTH is the default. Discussion: https://firstname.lastname@example.org
Clarify pg_dump -b documentation
commit : be0b98fc21648f8483d00670a73c5e6ea89a5fb1 author : Stephen Frost <email@example.com> date : Tue, 29 Nov 2016 10:35:12 -0500 committer: Stephen Frost <firstname.lastname@example.org> date : Tue, 29 Nov 2016 10:35:12 -0500
The documentation around the -b/--blobs option to pg_dump seemed to imply that it might be possible to add blobs to a "schema-only" dump or similar. Clarify that blobs are data and therefore will only be included in dumps where data is being included, even when -b is used to request blobs be included. The -b option has been around since before 9.2, so back-patch to all supported branches. Discussion: https://postgr.es/m/20161119173316.GA13284@tamriel.snowman.net
Mention server start requirement for ssl parameters
commit : ac1d9dbcbe4842402e85dee82901341913386f3a author : Magnus Hagander <email@example.com> date : Sun, 27 Nov 2016 17:10:02 +0100 committer: Magnus Hagander <firstname.lastname@example.org> date : Sun, 27 Nov 2016 17:10:02 +0100
The documentation for three ssl-related parameters failed to specify that they can only be changed at server start; fix that. Michael Paquier
Fix test about ignoring extension dependencies during extension scripts.
commit : 313786a74155cf04e4539f4d613a6993aab199d8 author : Tom Lane <email@example.com> date : Sat, 26 Nov 2016 13:31:35 -0500 committer: Tom Lane <firstname.lastname@example.org> date : Sat, 26 Nov 2016 13:31:35 -0500
Commit 08dd23cec introduced an exception to the rule that extension member objects can only be dropped as part of dropping the whole extension, intending to allow such drops while running the extension's own creation or update scripts. However, the exception was only applied at the outermost recursion level, because it was modeled on a pre-existing check to ignore dependencies on objects listed in pendingObjects. Bug #14434 from Philippe Beaudoin shows that this is inadequate: in some cases we can reach an extension member object by recursion from another one. (The bug concerns the serial-sequence case; I'm not sure if there are other cases, but there might well be.) To fix, revert 08dd23cec's changes to findDependentObjects() and instead apply the creating_extension exception regardless of stack level. Having seen this example, I'm a bit suspicious that the pendingObjects logic is also wrong and such cases should likewise be allowed at any recursion level. However, changing that would interact in subtle ways with the recursion logic (at least it would need to be moved to after the recursing-from check). Given that the code's been like that a long time, I'll refrain from touching it without a clear example showing it's wrong. Back-patch to all active branches. In HEAD and 9.6, where suitable test infrastructure exists, add a regression test case based on the bug report. Report: <email@example.com> Discussion: <firstname.lastname@example.org>
Check for pending trigger events on far end when dropping an FK constraint.
commit : f7166ce243aec3f5df454377d81b67032d85f35c author : Tom Lane <email@example.com> date : Fri, 25 Nov 2016 13:44:48 -0500 committer: Tom Lane <firstname.lastname@example.org> date : Fri, 25 Nov 2016 13:44:48 -0500
When dropping a foreign key constraint with ALTER TABLE DROP CONSTRAINT, we refuse the drop if there are any pending trigger events on the named table; this ensures that we won't remove the pg_trigger row that will be consulted by those events. But we should make the same check for the referenced relation, else we might remove a due-to-be-referenced pg_trigger row for that relation too, resulting in "could not find trigger NNN" or "relation NNN has no triggers" errors at commit. Per bug #14431 from Benjie Gillam. Back-patch to all supported branches. Report: <email@example.com>
Make sure ALTER TABLE preserves index tablespaces.
commit : 15f3e0cb13e7711195915996664e0b035c0498f9 author : Tom Lane <firstname.lastname@example.org> date : Wed, 23 Nov 2016 13:45:56 -0500 committer: Tom Lane <email@example.com> date : Wed, 23 Nov 2016 13:45:56 -0500
When rebuilding an existing index, ALTER TABLE correctly kept the physical file in the same tablespace, but it messed up the pg_class entry if the index had been in the database's default tablespace and "default_tablespace" was set to some non-default tablespace. This led to an inaccessible index. Fix by changing pg_get_indexdef_string() to always include a tablespace clause, whether or not the index is in the default tablespace. The previous behavior was installed in commit 537e92e41, and I think it just wasn't thought through very clearly; certainly the possible effect of default_tablespace wasn't considered. There's some risk in changing the behavior of this function, but there are no other call sites in the core code. Even if it's being used by some third party extension, it's fairly hard to envision a usage that is okay with a tablespace clause being appended some of the time but can't handle it being appended all the time. Back-patch to all supported versions. Code fix by me, investigation and test cases by Michael Paquier. Discussion: <firstname.lastname@example.org>
Doc: improve documentation about composite-value usage.
commit : 2153cb7a994325f3fafe7eb4d15b96fc199ea696 author : Tom Lane <email@example.com> date : Tue, 22 Nov 2016 17:56:16 -0500 committer: Tom Lane <firstname.lastname@example.org> date : Tue, 22 Nov 2016 17:56:16 -0500
Create a section specifically for the syntactic rules around whole-row variable usage, such as expansion of "foo.*". This was previously documented only haphazardly, with some critical info buried in unexpected places like xfunc-sql-composite-functions. Per repeated questions in different mailing lists. Discussion: <email@example.com>
Doc: add a section in Part II concerning RETURNING.
commit : 68d11e81f5f0aad1294098299c6491126bc8578c author : Tom Lane <firstname.lastname@example.org> date : Tue, 22 Nov 2016 14:02:52 -0500 committer: Tom Lane <email@example.com> date : Tue, 22 Nov 2016 14:02:52 -0500
There are assorted references to RETURNING in Part II, but nothing that would qualify as an explanation of the feature, which seems like an oversight considering how useful it is. Add something. Noted while looking for a place to point a cross-reference to ...
Make contrib/test_decoding regression tests safe for CZ locale.
commit : 4bc8b7bbdf8e37b6a1dfc472279d94403902919a author : Tom Lane <firstname.lastname@example.org> date : Mon, 21 Nov 2016 20:39:28 -0500 committer: Tom Lane <email@example.com> date : Mon, 21 Nov 2016 20:39:28 -0500
A little COLLATE "C" goes a long way. Pavel Stehule, per suggestion from Craig Ringer Discussion: <CAFj8pRA8nJZcozgxN=RMSqMmKuHVOkcGAAKPKdFeiMWGDSUDLA@mail.gmail.com>
Fix PGLC_localeconv() to handle errors better.
commit : e702aea4f7d91502bc3db16bb494012e1bc99d11 author : Tom Lane <firstname.lastname@example.org> date : Mon, 21 Nov 2016 18:21:56 -0500 committer: Tom Lane <email@example.com> date : Mon, 21 Nov 2016 18:21:56 -0500
The code was intentionally not very careful about leaking strdup'd strings in case of an error. That was forgivable probably, but it also failed to notice strdup() failures, which could lead to subsequent null-pointer-dereference crashes, since many callers unsurprisingly didn't check for null pointers in the struct lconv fields. An even worse problem is that it could throw error while we were setlocale'd to a non-C locale, causing unwanted behavior in subsequent libc calls. Rewrite to ensure that we cannot throw elog(ERROR) until after we've restored the previous locale settings, or at least attempted to. (I'm sorely tempted to make restore failure be a FATAL error, but will refrain for the moment.) Having done that, it's not much more work to ensure that we clean up strdup'd storage on the way out, too. This code is substantially the same in all supported branches, so back-patch all the way. Michael Paquier and Tom Lane Discussion: <CAB7nPqRMbGqa_mesopcn4MPyTs34eqtVEK7ELYxvvV=oqS00YA@mail.gmail.com>
Prevent multicolumn expansion of "foo.*" in an UPDATE source expression.
commit : 44c8b4fcdf4db4f50e4bfc0bf403d08b4cbfff72 author : Tom Lane <firstname.lastname@example.org> date : Sun, 20 Nov 2016 14:26:19 -0500 committer: Tom Lane <email@example.com> date : Sun, 20 Nov 2016 14:26:19 -0500
Because we use transformTargetList() for UPDATE as well as SELECT tlists, the code accidentally tried to expand a "*" reference into several columns. This is nonsensical, because the UPDATE syntax provides exactly one target column to put the value into. The immediate result was that transformUpdateTargetList() got confused and reported "UPDATE target count mismatch --- internal error". It seems better to treat such a reference as a plain whole-row variable, as it would be in other contexts. (This could produce useful results when the target column is of composite type.) Fix by tweaking transformTargetList() to perform *-expansion only conditionally, depending on its exprKind parameter. Back-patch to 9.3. The problem exists further back, but a fix would be much more invasive before that, because transformTargetList() wasn't told what kind of list it was working on. Doesn't seem worth the trouble given the lack of field reports. (I only noticed it because I was checking the code while trying to improve the documentation about how we handle "foo.*".) Discussion: <firstname.lastname@example.org>
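The conditional expansion can be sketched as follows — a hedged Python model of the rule the fix installs, with illustrative names rather than the real transformTargetList() signature:

```python
def transform_target_list(items, relation_columns, expr_kind):
    """Expand a trailing "rel.*" into one entry per column only for
    SELECT-style target lists; in an UPDATE source expression it stays
    a single whole-row reference, since the UPDATE syntax supplies
    exactly one target column to put the value into."""
    out = []
    for item in items:
        if item.endswith(".*") and expr_kind == "SELECT":
            rel = item[:-2]
            out.extend("%s.%s" % (rel, col) for col in relation_columns[rel])
        else:
            out.append(item)                  # keep as whole-row variable
    return out
```

Before the fix, the UPDATE case took the SELECT branch too, producing several columns where exactly one value was expected — hence the "UPDATE target count mismatch" internal error.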
Improve pg_dump/pg_restore --create --if-exists logic.
commit : e69b532be797b3fac87a34be71fc97959f8a02aa author : Tom Lane <email@example.com> date : Thu, 17 Nov 2016 14:59:26 -0500 committer: Tom Lane <firstname.lastname@example.org> date : Thu, 17 Nov 2016 14:59:26 -0500
Teach it not to complain if the dropStmt attached to an archive entry is actually spelled CREATE OR REPLACE VIEW, since that will happen due to an upcoming bug fix. Also, if it doesn't recognize a dropStmt, have it print a WARNING and then emit the dropStmt unmodified. That seems like a much saner behavior than Assert'ing or dumping core due to a null-pointer dereference, which is what would happen before :-(. Back-patch to 9.4 where this option was introduced. Discussion: <email@example.com>
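The decision logic can be sketched in a few lines — a hedged Python model of the --if-exists rewrite, not pg_restore's actual code, and with a deliberately incomplete list of object types:

```python
import re
import sys

def inject_if_exists(drop_stmt):
    """Rewrite "DROP <type> name" to "DROP <type> IF EXISTS name".
    A CREATE OR REPLACE VIEW dropStmt is passed through silently (it is
    not a real DROP); any other unrecognized statement is emitted
    unmodified after a WARNING, rather than crashing."""
    if drop_stmt.startswith("CREATE OR REPLACE VIEW"):
        return drop_stmt
    m = re.match(r"DROP (TABLE|MATERIALIZED VIEW|VIEW|INDEX|SEQUENCE|SCHEMA) ",
                 drop_stmt)
    if m is None:
        sys.stderr.write("WARNING: could not find where to insert IF EXISTS\n")
        return drop_stmt                      # emit unmodified, don't crash
    return drop_stmt[:m.end()] + "IF EXISTS " + drop_stmt[m.end():]
```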
Avoid pin scan for replay of XLOG_BTREE_VACUUM in all cases
commit : 30e3cb307d494990cb38cebff6bfc8b4ce27d7ab author : Alvaro Herrera <firstname.lastname@example.org> date : Thu, 17 Nov 2016 13:31:30 -0300 committer: Alvaro Herrera <email@example.com> date : Thu, 17 Nov 2016 13:31:30 -0300
Replay of XLOG_BTREE_VACUUM during Hot Standby was previously thought to require complex interlocking that matched the requirements on the master. This required an O(N) operation that became a significant problem with large indexes, causing replication delays of seconds or in some cases minutes while the XLOG_BTREE_VACUUM was replayed. This commit skips the "pin scan" that was previously required, by observing in detail when and how it is safe to do so, with full documentation. The pin scan is skipped only in replay; the VACUUM code path on master is not touched here. No tests included. Manual tests using an additional patch to view WAL records and their timing have shown that the change in WAL records and their handling has successfully reduced replication delay. This is a back-patch of commits 687f2cd7a015, 3e4b7d87988f, b60284261375 by Simon Riggs, to branches 9.4 and 9.5. No further backpatch is possible because this depends on catalog scans being MVCC. I (Álvaro) additionally fixed a slight problem in the README, which explains why this touches the 9.6 and master branches.
Allow DOS-style line endings in ~/.pgpass files.
commit : e9802122d42aee661113423d290d41b005a9b1b2 author : Tom Lane <firstname.lastname@example.org> date : Tue, 15 Nov 2016 16:17:19 -0500 committer: Tom Lane <email@example.com> date : Tue, 15 Nov 2016 16:17:19 -0500
On Windows, libc will mask \r\n line endings for us, since we read the password file in text mode. But that doesn't happen on Unix. People who share password files across both systems might have \r\n line endings in a file they use on Unix, so as a convenience, ignore trailing \r. Per gripe from Josh Berkus. In passing, put the existing check for empty line somewhere where it's actually useful, ie after stripping the newline not before. Vik Fearing, adjusted a bit by me Discussion: <firstname.lastname@example.org>
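The fixed per-line handling amounts to this — a minimal Python sketch, not the actual libpq code:

```python
def read_pgpass_line(line):
    """Strip the newline first, tolerating a DOS \r\n ending, and only
    then test for emptiness, so that blank lines are skipped whether
    they end in \n or \r\n.  Returns None for a line to be skipped."""
    if line.endswith("\n"):
        line = line[:-1]
    if line.endswith("\r"):       # the new part: ignore a trailing CR
        line = line[:-1]
    if line == "":
        return None               # empty-line check after stripping
    return line
```

Checking for an empty line before stripping the newline, as the old code did, could never succeed, because a "blank" line still contained its terminator.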
Account for catalog snapshot in PGXACT->xmin updates.
commit : 3e844a34b80355570a9cfb25becac561aee7cf82 author : Tom Lane <email@example.com> date : Tue, 15 Nov 2016 15:55:36 -0500 committer: Tom Lane <firstname.lastname@example.org> date : Tue, 15 Nov 2016 15:55:36 -0500
The CatalogSnapshot was not plugged into SnapshotResetXmin()'s accounting for whether MyPgXact->xmin could be cleared or advanced. In normal transactions this was masked by the fact that the transaction snapshot would be older, but during backend startup and certain utility commands it was possible to re-use the CatalogSnapshot after MyPgXact->xmin had been cleared, meaning that recently-deleted rows could be pruned even though this snapshot could still see them, causing unexpected catalog lookup failures. This effect appears to be the explanation for a recent failure on buildfarm member piculet. To fix, add the CatalogSnapshot to the RegisteredSnapshots heap whenever it is valid. In the previous logic, it was possible for the CatalogSnapshot to remain valid across waits for client input, but with this change that would mean it delays advance of global xmin in cases where it did not before. To avoid possibly causing new table-bloat problems with clients that sit idle for long intervals, add code to invalidate the CatalogSnapshot before waiting for client input. (When the backend is busy, it's unlikely that the CatalogSnapshot would be the oldest snap for very long, so we don't worry about forcing early invalidation of it otherwise.) In passing, remove the CatalogSnapshotStale flag in favor of using "CatalogSnapshot != NULL" to represent validity, as we do for the other special snapshots in snapmgr.c. And improve some obsolete comments. No regression test because I don't know a deterministic way to cause this failure. But the stress test shown in the original discussion provokes "cache lookup failed for relation 1255" within a few dozen seconds for me. Back-patch to 9.4 where MVCC catalog scans were introduced. (Note: it's quite easy to produce similar failures with the same test case in branches before 9.4. But MVCC catalog scans were supposed to fix that.) Discussion: <email@example.com>
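The accounting rule at the heart of the fix can be modeled in a few lines — a toy Python sketch, not the real snapmgr.c data structures:

```python
class SnapshotManager:
    """Toy model: the advertised xmin must cover every registered
    snapshot *including* the catalog snapshot, so rows a live snapshot
    can still see are never pruned out from under it."""
    def __init__(self):
        self.registered = []          # xmins of ordinary snapshots
        self.catalog_snapshot = None  # xmin of the catalog snapshot, or None

    def advertised_xmin(self):
        xmins = list(self.registered)
        if self.catalog_snapshot is not None:   # the fix: count it too
            xmins.append(self.catalog_snapshot)
        return min(xmins) if xmins else None

    def before_client_wait(self):
        # Invalidate the catalog snapshot before waiting for client
        # input, so an idle backend does not hold back global xmin.
        self.catalog_snapshot = None
```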
Fix duplication in ALTER MATERIALIZE VIEW synopsis
commit : da8c6f078d28d574e5e509ff2e7e6698840f3aeb author : Alvaro Herrera <firstname.lastname@example.org> date : Mon, 14 Nov 2016 11:14:34 -0300 committer: Alvaro Herrera <email@example.com> date : Mon, 14 Nov 2016 11:14:34 -0300
Commit 3c4cf080879b should have removed SET TABLESPACE from the synopsis of ALTER MATERIALIZE VIEW as a possible "action" when it added a separate line for it in the main command listing, but failed to. Repair. Backpatch to 9.4, like the aforementioned commit.
commit : 5c1ac4c685fe6fbe7781127931686f36e70265cd author : Magnus Hagander <firstname.lastname@example.org> date : Tue, 8 Nov 2016 18:34:59 +0100 committer: Magnus Hagander <email@example.com> date : Tue, 8 Nov 2016 18:34:59 +0100
Fix handling of symlinked pg_stat_tmp and pg_replslot
commit : 5556420d46fd890fe684724daa32dd4bcc8c4b42 author : Magnus Hagander <firstname.lastname@example.org> date : Mon, 7 Nov 2016 14:47:30 +0100 committer: Magnus Hagander <email@example.com> date : Mon, 7 Nov 2016 14:47:30 +0100
This was already fixed in HEAD as part of 6ad8ac60 but was not backpatched. Also change the way pg_xlog is handled to be the same as the other directories. Patch from me with pg_xlog addition from Michael Paquier, test updates from David Steele.
Rationalize and document pltcl's handling of magic ".tupno" array element.
commit : 110413a35794ef38cbd3f5a73f5865da88b349c1 author : Tom Lane <firstname.lastname@example.org> date : Sun, 6 Nov 2016 14:43:13 -0500 committer: Tom Lane <email@example.com> date : Sun, 6 Nov 2016 14:43:13 -0500
For a very long time, pltcl's spi_exec and spi_execp commands have had a behavior of storing the current row number as an element of output arrays, but this was never documented. Fix that. For an equally long time, pltcl_trigger_handler had a behavior of silently ignoring ".tupno" as an output column name, evidently so that the result of spi_exec could be used directly as a trigger result tuple. Not sure how useful that really is, but in any case it's bad that it would break attempts to use ".tupno" as an actual column name. We can fix it by not checking for ".tupno" until after we check for a column name match. This comports with the effective behavior of spi_exec[p] that ".tupno" is only magic when you don't have an actual column named that. In passing, wordsmith the description of returning modified tuples from a pltcl trigger. Noted while working on Jim Nasby's patch to support composite results from pltcl. The inability to return trigger tuples using ".tupno" as a column name is a bug, so back-patch to all supported branches.
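The corrected precedence — real column name first, magic ".tupno" only otherwise — can be sketched like so (a Python model with illustrative names, not the pltcl trigger handler itself):

```python
def array_to_tuple(array, column_names):
    """Convert a spi_exec-style array to a tuple.  ".tupno" is skipped
    as a magic row-number key only when the relation has no actual
    column of that name, so a real ".tupno" column can round-trip
    through a trigger."""
    tup = {}
    for key, value in array.items():
        if key in column_names:       # a real column always wins ...
            tup[key] = value
        elif key == ".tupno":         # ... magic skip only otherwise
            continue
        else:
            raise KeyError("unrecognized attribute %r" % key)
    return tup
```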
More zic cleanup.
commit : 6651ab058a78208320d5349b29d649a946bfaf5c author : Tom Lane <firstname.lastname@example.org> date : Sun, 6 Nov 2016 10:45:58 -0500 committer: Tom Lane <email@example.com> date : Sun, 6 Nov 2016 10:45:58 -0500
The workaround the IANA guys chose to get rid of the clang warning we'd silenced in commit 23ed2ba81 turns out not to satisfy Coverity. Go back to the previous solution, ie, remove the useless comparison to SIZE_MAX. (In principle, there could be machines out there where it's not useless because ptrdiff_t is wider than size_t. But the whole thing is pretty academic anyway, as we could never approach this limit for any sane estimate of the amount of data that zic will ever be asked to work with.) Also, s/lineno/lineno_t/g, because if we accept their decision to start using "lineno" as a typedef, it is going to have very unpleasant consequences in our next pgindent run. Noted that while fooling with pltcl yesterday.
Sync our copy of the timezone library with IANA tzcode master.
commit : c09478e157f1fde919b0798ec862d1eb1c1f031b author : Tom Lane <firstname.lastname@example.org> date : Fri, 4 Nov 2016 10:44:16 -0400 committer: Tom Lane <email@example.com> date : Fri, 4 Nov 2016 10:44:16 -0400
This patch absorbs some unreleased fixes for symlink manipulation bugs introduced in tzcode 2016g. Ordinarily I'd wait around for a released version, but in this case it seems like we could do with extra testing, in particular checking whether it works in EDB's VMware build environment. This corresponds to commit aec59156abbf8472ba201b6c7ca2592f9c10e077 in https://github.com/eggert/tz. Per a report from Sandeep Thakkar, building in an environment where hard links are not supported in the timezone data installation directory failed, because upstream code refactoring had broken the case of symlinking from an existing symlink. Further experimentation also showed that the symlinks were sometimes made incorrectly, with too many or too few "../"'s in the symlink contents. Back-patch of commit 1f87181e12beb067d21b79493393edcff14c190b. Report: <CANFyU94_p6mqRQc2i26PFp5QAOQGB++AjGX=FO8LDpXw0GSTjw@mail.gmail.com> Discussion: http://mm.icann.org/pipermail/tz/2016-November/024431.html
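The "too many or too few ../" failure mode comes down to computing a relative symlink body correctly: the link needs exactly as many "../" components as the link itself is directories deep. A hedged sketch of the required computation, using Python's standard relpath rather than the tzcode implementation:

```python
import os.path

def symlink_contents(link_path, target_path):
    """Return the body a relative symlink at `link_path` should contain
    in order to point at `target_path`.  Both paths are relative to the
    same installation root."""
    return os.path.relpath(target_path, os.path.dirname(link_path))
```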
Fix nasty performance problem in tsquery_rewrite().
commit : 514797a529427fd99635c8d92cb94c4e5fd12da1 author : Tom Lane <firstname.lastname@example.org> date : Sun, 30 Oct 2016 17:35:43 -0400 committer: Tom Lane <email@example.com> date : Sun, 30 Oct 2016 17:35:43 -0400
tsquery_rewrite() tries to find matches to subsets of AND/OR conditions; for example, in the query 'a | b | c' the substitution subquery 'a | c' should match and lead to replacement of the first and third items. That's fine, but the matching algorithm apparently takes about O(2^N) for an N-clause query (I say "apparently" because the code is also both unintelligible and uncommented). We could probably do better than that even without any extra assumptions --- but actually, we know that the subclauses are sorted, and indeed we depend on that elsewhere in this very same function. So we can just scan the two lists a single time to detect matches, as though we were doing a merge join. Also do a re-flattening call (QTNTernary()) in tsquery_rewrite_query, just to make sure that the tree fits the expectations of the next search cycle. I didn't try to devise a test case for this, but I'm pretty sure that the oversight could have led to failure to match in some cases where a match would be expected. Improve comments, and also stick a CHECK_FOR_INTERRUPTS into dofindsubquery, just in case it's still too slow for somebody. Per report from Andreas Seltenreich. Back-patch to all supported branches. Discussion: <firstname.lastname@example.org>
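The merge-join-style scan can be sketched as follows — a minimal Python model of the O(N+M) matching rule over two sorted operand lists, not the actual dofindsubquery() code:

```python
def contains_sorted(query, subquery):
    """Single-pass check that every member of `subquery` appears in
    `query`.  Both lists must be sorted, which lets us advance one
    cursor through `query` as in a merge join, instead of searching
    exponentially many subsets."""
    i = 0
    for item in subquery:
        while i < len(query) and query[i] < item:
            i += 1                              # skip unmatched query items
        if i >= len(query) or query[i] != item:
            return False                        # item has no match
        i += 1                                  # consume the match
    return True
```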
Fix bogus tree-flattening logic in QTNTernary().
commit : f0c2ce45e7acb38d56cf62ef8c907bb690e84cb1 author : Tom Lane <email@example.com> date : Sun, 30 Oct 2016 15:24:40 -0400 committer: Tom Lane <firstname.lastname@example.org> date : Sun, 30 Oct 2016 15:24:40 -0400
QTNTernary() contains logic to flatten, eg, '(a & b) & c' into 'a & b & c', which is all well and good, but it tries to do that to NOT nodes as well, so that '!!a' gets changed to '!a'. Explicitly restrict the conversion to be done only on AND and OR nodes, and add a test case illustrating the bug. In passing, provide some comments for the sadly naked functions in tsquery_util.c, and simplify some baroque logic in QTNFree(), which I think may have been leaking some items it intended to free. Noted while investigating a complaint from Andreas Seltenreich. Back-patch to all supported versions.
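The corrected flattening rule can be modeled compactly — a Python sketch using (operator, children) tuples and string leaves, not the real QTN tree representation:

```python
def flatten(node):
    """Pull a child's operands up into its parent only when both carry
    the same AND/OR operator.  A NOT node keeps its single child
    untouched, so '!!a' is NOT flattened into '!a'."""
    if isinstance(node, str):                 # a leaf lexeme
        return node
    op, children = node
    children = [flatten(c) for c in children]
    if op in ("AND", "OR"):                   # only AND/OR may be merged
        merged = []
        for c in children:
            if isinstance(c, tuple) and c[0] == op:
                merged.extend(c[1])           # same operator: absorb operands
            else:
                merged.append(c)
        return (op, merged)
    return (op, children)                     # NOT: structure preserved
```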
If the stats collector dies during Hot Standby, restart it.
commit : 4a8cfbdcbe8447e4226b2ebdb155e0acc1167db5 author : Robert Haas <email@example.com> date : Thu, 27 Oct 2016 14:27:40 -0400 committer: Robert Haas <firstname.lastname@example.org> date : Thu, 27 Oct 2016 14:27:40 -0400
This bug exists as far back as 9.0, when Hot Standby was introduced, so back-patch to all supported branches. Report and patch by Takayuki Tsunakawa, reviewed by Michael Paquier and Kuntal Ghosh.
Fix possible pg_basebackup failure on standby with "include WAL".
commit : d1e9c8269bb1b8eae595c48b67f11e3fd4658170 author : Robert Haas <email@example.com> date : Thu, 27 Oct 2016 11:19:51 -0400 committer: Robert Haas <firstname.lastname@example.org> date : Thu, 27 Oct 2016 11:19:51 -0400
If a restartpoint flushed no dirty buffers, it could fail to update the minimum recovery point, leading to a minimum recovery point prior to the starting REDO location. perform_base_backup() would interpret that as meaning that no WAL files at all needed to be included in the backup, failing an internal sanity check. To fix, have restartpoints always update the minimum recovery point to just after the checkpoint record itself, so that the file (or files) containing the checkpoint record will always be included in the backup. Code by Amit Kapila, per a design suggestion by me, with some additional work on the code comment by me. Test case by Michael Paquier. Report by Kyotaro Horiguchi.
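The invariant the fix establishes is small enough to state as code — a hedged sketch with LSNs modeled as plain integers, not the actual xlog.c logic:

```python
def restartpoint_update_min_recovery(min_recovery_point, checkpoint_end_lsn):
    """After every restartpoint, the minimum recovery point advances to
    at least the end of the checkpoint record itself -- even when no
    dirty buffers were flushed -- so the WAL file containing that
    record is always included in a base backup.  It never moves
    backwards."""
    return max(min_recovery_point, checkpoint_end_lsn)
```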
Fix incorrect trigger-property updating in ALTER CONSTRAINT.
commit : 3a9a8c40837844bfdfd44c626042670ce076bbba author : Tom Lane <email@example.com> date : Wed, 26 Oct 2016 17:05:06 -0400 committer: Tom Lane <firstname.lastname@example.org> date : Wed, 26 Oct 2016 17:05:06 -0400
The code to change the deferrability properties of a foreign-key constraint updated all the associated triggers to match; but a moment's examination of the code that creates those triggers in the first place shows that only some of them should track the constraint's deferrability properties. This leads to odd failures in subsequent exercise of the foreign key, as the triggers are fired at the wrong times. Fix that, and add a regression test comparing the trigger properties produced by ALTER CONSTRAINT with those you get by creating the constraint as-intended to begin with. Per report from James Parks. Back-patch to 9.4 where this ALTER functionality was introduced. Report: <CAJ3Xv+jzJ8iNNUcp4RKW8b6Qp1xVAxHwSXVpjBNygjKxcVuE9w@mail.gmail.com>
Fix not-HAVE_SYMLINK code in zic.c.
commit : cfc5cb9efc7b9f62532d25155335997ccec43496 author : Tom Lane <email@example.com> date : Wed, 26 Oct 2016 13:40:41 -0400 committer: Tom Lane <firstname.lastname@example.org> date : Wed, 26 Oct 2016 13:40:41 -0400
I broke this in commit f3094920a. Apparently it's dead code anyway, at least as far as our buildfarm is concerned (and the upstream IANA code doesn't worry at all about symlink() not being present). But as long as the rest of our code is willing to guard against not having symlink(), this should too. Noted while investigating a tangentially-related complaint from Sandeep Thakkar. Back-patch to keep branches in sync.