commit : bcbbc4cfc9ca163c4a562f24ff9e2fb070647786 author : Tom Lane <firstname.lastname@example.org> date : Mon, 6 Feb 2017 16:47:25 -0500 committer: Tom Lane <email@example.com> date : Mon, 6 Feb 2017 16:47:25 -0500
Release notes for 9.6.2, 9.5.6, 9.4.11, 9.3.16, 9.2.20.
commit : 5127c873aa52e574e8b2dd3ebf488e072a81a3ae author : Tom Lane <firstname.lastname@example.org> date : Mon, 6 Feb 2017 15:30:16 -0500 committer: Tom Lane <email@example.com> date : Mon, 6 Feb 2017 15:30:16 -0500
Avoid returning stale attribute bitmaps in RelationGetIndexAttrBitmap().
commit : e935696f4dcbe8bacbd6461f1ec5150f67c1c866 author : Tom Lane <firstname.lastname@example.org> date : Mon, 6 Feb 2017 13:19:51 -0500 committer: Tom Lane <email@example.com> date : Mon, 6 Feb 2017 13:19:51 -0500
The problem with the original coding here is that we might receive (and clear) a relcache invalidation signal for the target relation down inside one of the index_open calls we're doing. Since the target is open, we would not drop the relcache entry, just reset its rd_indexvalid and rd_indexlist fields. But RelationGetIndexAttrBitmap() kept going, and would eventually cache and return potentially-obsolete attribute bitmaps. The case where this matters is where the inval signal was from a CREATE INDEX CONCURRENTLY telling us about a new index on a formerly-unindexed column. (In all other cases, the lock we hold on the target rel should prevent any concurrent change in index state.) Even just returning the stale attribute bitmap is not such a problem, because it shouldn't matter during the transaction in which we receive the signal. What hurts is caching the stale data, because it can survive into later transactions, breaking CREATE INDEX CONCURRENTLY's expectation that later transactions will not create new broken HOT chains. The upshot is that there's a window for building corrupted indexes during CREATE INDEX CONCURRENTLY. This patch fixes the problem by rechecking that the set of index OIDs is still the same at the end of RelationGetIndexAttrBitmap() as it was at the start. If not, we loop back and try again. That's a little more than is strictly necessary to fix the bug --- in principle, we could return the stale data but not cache it --- but it seems like a bad idea on general principles for relcache to return data it knows is stale. There might be more hazards of the same ilk, or there might be a better way to fix this one, but this patch definitely improves matters and seems unlikely to make anything worse. So let's push it into today's releases even as we continue to study the problem. Pavan Deolasee and myself Discussion: https://postgr.es/m/CABOikdM2MUq9cyZJi1KyLmmkCereyGp5JQ4fuwKoyKEde_mzkQ@mail.gmail.com
commit : a7eddfa2282d166872d131b608c1a3dacd47ee5e author : Peter Eisentraut <firstname.lastname@example.org> date : Mon, 6 Feb 2017 12:39:38 -0500 committer: Peter Eisentraut <email@example.com> date : Mon, 6 Feb 2017 12:39:38 -0500
Source-Git-URL: git://git.postgresql.org/git/pgtranslation/messages.git Source-Git-Hash: e7df014526482b9ee2f736d01d09cf979a4e31e2
Add missing newline to error messages
commit : cff0d02e8e8e26b67865967c088ef65aaeb2deff author : Peter Eisentraut <firstname.lastname@example.org> date : Mon, 6 Feb 2017 09:47:39 -0500 committer: Peter Eisentraut <email@example.com> date : Mon, 6 Feb 2017 09:47:39 -0500
Also improve the message style a bit while we're here.
Fix typo also in expected output.
commit : 8e93e759bbcec69f7e0e778e25aa335e72a50d91 author : Heikki Linnakangas <firstname.lastname@example.org> date : Mon, 6 Feb 2017 12:04:04 +0200 committer: Heikki Linnakangas <email@example.com> date : Mon, 6 Feb 2017 12:04:04 +0200
Commit 181bdb90ba fixed the typo in the .sql file, but forgot to update the expected output.
Fix typos in comments.
commit : 3aee34d41d38f16546dd0761b9652e47be29f006 author : Heikki Linnakangas <firstname.lastname@example.org> date : Mon, 6 Feb 2017 11:33:58 +0200 committer: Heikki Linnakangas <email@example.com> date : Mon, 6 Feb 2017 11:33:58 +0200
Backpatch to all supported versions, where applicable, to make backpatching of future fixes go more smoothly. Josh Soref Discussion: https://www.postgresql.org/message-id/CACZqfqCf+5qRztLPgmmosr-B0Ye4srWzzw_mo4c_8_B_mtjmJQ@mail.gmail.com
Add KOI8-U map files to Makefile.
commit : e5e75ea288299aafe6dba1d1c18f284593210596 author : Heikki Linnakangas <firstname.lastname@example.org> date : Thu, 2 Feb 2017 14:12:35 +0200 committer: Heikki Linnakangas <email@example.com> date : Thu, 2 Feb 2017 14:12:35 +0200
These were left out by mistake back when support for KOI8-U encoding was added. Extracted from Kyotaro Horiguchi's larger patch.
Update time zone data files to tzdata release 2016j.
commit : 4c729f4718772d49230fd932322fd6bffa762e10 author : Tom Lane <firstname.lastname@example.org> date : Mon, 30 Jan 2017 11:40:22 -0500 committer: Tom Lane <email@example.com> date : Mon, 30 Jan 2017 11:40:22 -0500
DST law changes in northern Cyprus (new zone Asia/Famagusta), Russia (new zone Europe/Saratov), Tonga, Antarctica/Casey. Historical corrections for Asia/Aqtau, Asia/Atyrau, Asia/Gaza, Asia/Hebron, Italy, Malta. Replace invented zone abbreviation "TOT" for Tonga with numeric UTC offset; but as in the past, we'll keep accepting "TOT" for input.
Orthography fixes for new castNode() macro.
commit : d63917d6aa6f9638ae523aff387911cb343c5eba author : Tom Lane <firstname.lastname@example.org> date : Fri, 27 Jan 2017 08:33:58 -0500 committer: Tom Lane <email@example.com> date : Fri, 27 Jan 2017 08:33:58 -0500
Clean up hastily-composed comment. Normalize whitespace. Erik Rijkers and myself
Check interrupts during hot standby waits
commit : ace2cd80a028fc8775146c946d3aff87810e4392 author : Simon Riggs <simon@2ndQuadrant.com> date : Fri, 27 Jan 2017 12:15:02 +0000 committer: Simon Riggs <simon@2ndQuadrant.com> date : Fri, 27 Jan 2017 12:15:02 +0000
Add castNode(type, ptr) for safe casting between NodeTag based types.
commit : 8fef0c34143ba45d4efcd0c352dee23602be54af author : Andres Freund <firstname.lastname@example.org> date : Thu, 26 Jan 2017 16:47:03 -0800 committer: Andres Freund <email@example.com> date : Thu, 26 Jan 2017 16:47:03 -0800
The new macro allows casting from one NodeTag based type to another, while asserting that the conversion is valid. This replaces the common pattern of doing a cast and an Assert(IsA(ptr, type)) close by. As this seems likely to be used pervasively, we decided to backpatch the addition of this macro; otherwise backpatched fixes are more likely not to work on back-branches. On branches before 9.6, where we do not yet rely on inline functions being available, the type assertion is only performed if PG_USE_INLINE support is detected. The cast obviously is performed regardless. For the benefit of verifying the macro compiles in the back-branches, this commit contains a single use of the new macro. On master, a somewhat larger conversion will be committed separately. Author: Peter Eisentraut and Andres Freund Reviewed-By: Tom Lane Discussion: https://firstname.lastname@example.org Backpatch: 9.2-
Add missed update in expected file
commit : f90860f339f6bb49a46ae0a2e4cedb109b3c9764 author : Alvaro Herrera <email@example.com> date : Thu, 26 Jan 2017 18:06:06 -0300 committer: Alvaro Herrera <firstname.lastname@example.org> date : Thu, 26 Jan 2017 18:06:06 -0300
Remove test for COMMENT ON DATABASE
commit : 7fb29ea9c6269d9a2345a9dfa91cd066eb66bedc author : Alvaro Herrera <email@example.com> date : Thu, 26 Jan 2017 17:45:22 -0300 committer: Alvaro Herrera <firstname.lastname@example.org> date : Thu, 26 Jan 2017 17:45:22 -0300
Our current DDL only allows a database name to be specified in COMMENT ON DATABASE, which Andrew Dunstan reports to make this test fail on the buildfarm. Remove the line until we gain a DDL command that allows the current database to be operated on without having to specify it by name. Backpatch to 9.5, where these tests appeared. Discussion: https://postgr.es/m/e6084b89-07a7-7e57-51ee-d7b8fc9ec864@2ndQuadrant.com
Reset hot standby xmin after restart
commit : 99289e50606e7be4202c15c40caf27cf77893d58 author : Simon Riggs <simon@2ndQuadrant.com> date : Thu, 26 Jan 2017 20:09:18 +0000 committer: Simon Riggs <simon@2ndQuadrant.com> date : Thu, 26 Jan 2017 20:09:18 +0000
Hot_standby_feedback could be reset by reload and worked correctly, but if the server was restarted rather than reloaded the xmin was not reset. Force reset always if hot_standby_feedback is enabled at startup. Ants Aasma, Craig Ringer Reported-by: Ants Aasma
Ensure that a tsquery like '!foo' matches empty tsvectors.
commit : 423ad86f422397ce145cc3f5b3e56d2a11ccb6a6 author : Tom Lane <email@example.com> date : Thu, 26 Jan 2017 12:17:47 -0500 committer: Tom Lane <firstname.lastname@example.org> date : Thu, 26 Jan 2017 12:17:47 -0500
!foo means "the tsvector does not contain foo", and therefore it should match an empty tsvector. ts_match_vq() overenthusiastically supposed that an empty tsvector could never match any query, so it forcibly returned FALSE, the wrong answer. Remove the premature optimization. Our behavior on this point was inconsistent, because while seqscans and GIST index searches both failed to match empty tsvectors, GIN index searches would find them, since GIN scans don't rely on ts_match_vq(). That makes this certainly a bug, not a debatable definition disagreement, so back-patch to all supported branches. Report and diagnosis by Tom Dunstan (bug #14515); added test cases by me. Discussion: https://email@example.com
Fix comments in StrategyNotifyBgWriter().
commit : 557917769ada4988c15f81f44ed278a5c8c687a4 author : Tatsuo Ishii <firstname.lastname@example.org> date : Tue, 24 Jan 2017 09:39:11 +0900 committer: Tatsuo Ishii <email@example.com> date : Tue, 24 Jan 2017 09:39:11 +0900
The interface for the function was changed in d72731a70450b5e7084991b9caa15cb58a2820df but the comments of the function were not updated. Patch by Yugo Nagata.
doc: Update URL for Microsoft download site
commit : 9cb83818ccfa116309283cff1c01ca2b75670aa7 author : Peter Eisentraut <firstname.lastname@example.org> date : Tue, 17 Jan 2017 12:00:00 -0500 committer: Peter Eisentraut <email@example.com> date : Tue, 17 Jan 2017 12:00:00 -0500
Avoid uselessly respawning the autovacuum launcher at high speed.
commit : aeaaf62aa57a6b8d35a092b4897a801e1881a9a0 author : Robert Haas <firstname.lastname@example.org> date : Fri, 20 Jan 2017 15:55:45 -0500 committer: Robert Haas <email@example.com> date : Fri, 20 Jan 2017 15:55:45 -0500
When (1) autovacuum = off and (2) there's at least one database with an XID age greater than autovacuum_freeze_max_age and (3) all tables in that database that need vacuuming are already being processed by a worker and (4) the autovacuum launcher is started, a kind of infinite loop occurs. The launcher starts a worker and immediately exits. The worker, finding no work to do, immediately starts the launcher, supposedly so that the next database can be processed. But because datfrozenxid for that database hasn't been advanced yet, the new worker gets put right back into the same database as the old one, where it once again starts the launcher and exits. High-speed ping pong ensues. There are several possible ways to break the cycle; this seems like the safest one. Amit Khandekar (code) and Robert Haas (comments), reviewed by Álvaro Herrera. Discussion: http://postgr.es/m/CAJ3gD9eWejf72HKquKSzax0r+epS=nAbQKNnykkMA0E8c+rMDg@mail.gmail.com
Reset the proper GUC in create_index test.
commit : b60f9820fbc1ee9d5a74269c6a152527de841a17 author : Tom Lane <firstname.lastname@example.org> date : Wed, 18 Jan 2017 16:33:18 -0500 committer: Tom Lane <email@example.com> date : Wed, 18 Jan 2017 16:33:18 -0500
Thinko in commit a4523c5aa. It doesn't really affect anything at present, but it would be a problem if any tests added later in this file ought to get index-only-scan plans. Back-patch, like the previous commit, just to avoid surprises in case we add such a test and then back-patch it. Nikita Glukhov Discussion: https://firstname.lastname@example.org
Change some test macros to return true booleans
commit : 5f4ae4f3cd58d885593c0c56482d99d06b47a3ad author : Alvaro Herrera <email@example.com> date : Wed, 18 Jan 2017 18:06:13 -0300 committer: Alvaro Herrera <firstname.lastname@example.org> date : Wed, 18 Jan 2017 18:06:13 -0300
These macros work fine when they are used directly in an "if" test or similar, but as soon as the return values are assigned to boolean variables (or passed as boolean arguments to some function), they become bugs, hopefully caught by compiler warnings. To avoid future problems, fix the definitions so that they return actual booleans. To further minimize the risk that somebody uses them in back-patched fixes that only work correctly in branches starting from the current master and not in old ones, back-patch the change to supported branches as appropriate. See also commit af4472bcb88ab36b9abbe7fd5858e570a65a2d1a, and the long discussion (and larger patch) in the thread mentioned in its commit message. Discussion: https://email@example.com
Disable transforms that replaced AT TIME ZONE with RelabelType.
commit : 74e67bbad6b435310c375dd9f57c0210ef796bd0 author : Tom Lane <firstname.lastname@example.org> date : Wed, 18 Jan 2017 15:21:52 -0500 committer: Tom Lane <email@example.com> date : Wed, 18 Jan 2017 15:21:52 -0500
These resulted in wrong answers if the relabeled argument could be matched to an index column, as shown in bug #14504 from Evgeniy Kozlov. We might be able to resurrect these optimizations by adjusting the planner's treatment of RelabelType, or by adjusting btree's rules for selecting comparison functions, but either solution will take careful analysis and does not sound like a fit candidate for backpatching. I left the catalog infrastructure in place and just reduced the transform functions to always-return-NULL. This would be necessary anyway in the back branches, and it doesn't seem important to be more invasive in HEAD. Bug introduced by commit b8a18ad48. Back-patch to 9.5 where that came in. Report: https://firstname.lastname@example.org Discussion: https://email@example.com
Fix an assertion failure related to an exclusive backup.
commit : dfe348c1b1ca5327cf2ff058b795ac188d442715 author : Fujii Masao <firstname.lastname@example.org> date : Tue, 17 Jan 2017 17:30:26 +0900 committer: Fujii Masao <email@example.com> date : Tue, 17 Jan 2017 17:30:26 +0900
Previously multiple sessions could execute pg_start_backup() and pg_stop_backup() to start and stop an exclusive backup at the same time. This could trigger the assertion failure FailedAssertion("!(XLogCtl->Insert.exclusiveBackup)"). This happened because, even while pg_start_backup() was starting an exclusive backup, another session could run pg_stop_backup() concurrently and mark the backup as not-in-progress unconditionally. This patch introduces ExclusiveBackupState, indicating the state of an exclusive backup. This state is used to ensure that there is only one session running pg_start_backup() or pg_stop_backup() at the same time, to avoid the assertion failure. Back-patch to all supported versions. Author: Michael Paquier Reviewed-By: Kyotaro Horiguchi and me Reported-By: Andreas Seltenreich Discussion: <firstname.lastname@example.org>
Throw suitable error for COPY TO STDOUT/FROM STDIN in a SQL function.
commit : a23ea8f65e07aa56a501bbdd68068b52469d77cf author : Tom Lane <email@example.com> date : Sat, 14 Jan 2017 13:27:47 -0500 committer: Tom Lane <firstname.lastname@example.org> date : Sat, 14 Jan 2017 13:27:47 -0500
A client copy can't work inside a function because the FE/BE wire protocol doesn't support nesting of a COPY operation within query results. (Maybe it could, but the protocol spec doesn't suggest that clients should support this, and libpq for one certainly doesn't.) In most PLs, this prohibition is enforced by spi.c, but SQL functions don't use SPI. A comparison of _SPI_execute_plan() and init_execution_state() shows that rejecting client COPY is the only discrepancy in what they allow, so there are no other similar bugs. This is an astonishingly ancient oversight, so back-patch to all supported branches. Report: https://postgr.es/m/BY2PR05MB2309EABA3DEFA0143F50F0D593780@BY2PR05MB2309.namprd05.prod.outlook.com
pg_upgrade: Fix for changed pg_ctl default stop mode
commit : 7fbd3ddd1d2b76c0229cdc44d355b6238c146335 author : Peter Eisentraut <email@example.com> date : Fri, 13 Jan 2017 12:00:00 -0500 committer: Peter Eisentraut <firstname.lastname@example.org> date : Fri, 13 Jan 2017 12:00:00 -0500
In 9.5, the default pg_ctl stop mode was changed from "smart" to "fast". pg_upgrade still thought the default mode was "smart" and only specified the mode when "fast" was asked for. This results in using "fast" all the time. It's not clear what the effect in practice is, but fix it nonetheless to restore the previous behavior.
pg_restore: Don't allow non-positive number of jobs
commit : 26e7cdb3a80d340742aeb5bfe2dbc42edfb9d34b author : Stephen Frost <email@example.com> date : Wed, 11 Jan 2017 15:45:56 -0500 committer: Stephen Frost <firstname.lastname@example.org> date : Wed, 11 Jan 2017 15:45:56 -0500
pg_restore will currently accept invalid values for the number of parallel jobs to run (eg: -1), unlike pg_dump which does check that the value provided is reasonable. Worse, '-1' is actually a valid, independent, parameter (as an alias for --single-transaction), leading to potentially completely unexpected results from a command line such as: -> pg_restore -j -1 Where a user would get neither parallel jobs nor a single-transaction. Add in validity checking of the parallel jobs option, as we already have in pg_dump, before we try to open up the archive. Also move the check that we haven't been asked to run more parallel jobs than possible on Windows to the same place, so we do all the option validity checking before opening the archive. Back-patch all the way, though for 9.2 we're adding the Windows-specific check against MAXIMUM_WAIT_OBJECTS as that check wasn't back-patched originally. Discussion: https://www.postgresql.org/message-id/20170110044815.GC18360%40tamriel.snowman.net
Fix invalid-parallel-jobs error message
commit : f12681079020eea53ea9a0eb994ac2ee6190770f author : Stephen Frost <email@example.com> date : Mon, 9 Jan 2017 23:09:35 -0500 committer: Stephen Frost <firstname.lastname@example.org> date : Mon, 9 Jan 2017 23:09:35 -0500
Including the program name twice is not helpful: -> pg_dump -j -1 pg_dump: pg_dump: invalid number of parallel jobs Correct by removing the progname from the exit_horribly() call used when validating the number of parallel jobs. Noticed while testing various pg_dump error cases. Back-patch to 9.3 where parallel pg_dump was added.
Fix ALTER TABLE / SET TYPE for irregular inheritance
commit : 4d4ab6ccd8a24a91ef9fc1904f0e80a2fd0377fe author : Alvaro Herrera <email@example.com> date : Mon, 9 Jan 2017 19:26:58 -0300 committer: Alvaro Herrera <firstname.lastname@example.org> date : Mon, 9 Jan 2017 19:26:58 -0300
If inherited tables don't have exactly the same schema, the USING clause in an ALTER TABLE / SET DATA TYPE misbehaves when applied to the child tables since commit 9550e8348b79. Starting with that commit, the attribute numbers in the USING expression are fixed during parse analysis. This can lead to bogus errors being reported during execution, such as: ERROR: attribute 2 has wrong type DETAIL: Table has type smallint, but query expects integer. Since it wouldn't do to revert to the original coding, we now apply a transformation to map the attribute numbers to the correct ones for each child. Reported by Justin Pryzby Analysis by Tom Lane; patch by me. Discussion: https://postgr.es/m/20170102225618.GA10071@telsasoft.com
BRIN revmap pages are not standard pages ...
commit : ed8e8b8149220809881d7a275104621fddc36289 author : Alvaro Herrera <email@example.com> date : Mon, 9 Jan 2017 18:19:29 -0300 committer: Alvaro Herrera <firstname.lastname@example.org> date : Mon, 9 Jan 2017 18:19:29 -0300
... and therefore we ought not to tell XLogRegisterBuffer the opposite, when writing XLog for a brin update that moves the index tuple to a different page. Otherwise, xlog insertion would try to "compress the hole" when producing a full-page image for it; but since we don't update pd_lower/upper, the hole covers the whole page. On WAL replay, the revmap page becomes empty, and so the entire portion of the index is useless and needs to be recomputed. This is low-probability: a BRIN update only moves an index tuple to a different page when the summary tuple is larger than the existing one, which doesn't happen with fixed-width datatypes. Also, the revmap page must be first after a checkpoint. Report and patch: Kuntal Ghosh The bug was detected by a WAL-consistency-checking tool. Discussion: https://postgr.es/m/CAGz5QCJ=00UQjScSEFbV=0qO5ShTZB9WWz_Fm7+Wd83zPs9Geg@mail.gmail.com I posted a test case demonstrating the problem, but I'm refraining from adding it to the test suite; if the WAL consistency tool makes it in, that will be a better way to catch this from regressing. (We should definitely have something that causes not-same-page updates, though.)
Invalidate cached plans on FDW option changes.
commit : aaf12e577ee9bfe71aae43879e41007628ad1143 author : Tom Lane <email@example.com> date : Fri, 6 Jan 2017 14:12:52 -0500 committer: Tom Lane <firstname.lastname@example.org> date : Fri, 6 Jan 2017 14:12:52 -0500
This fixes problems where a plan must change but fails to do so, as seen in a bug report from Rajkumar Raghuwanshi. For ALTER FOREIGN TABLE OPTIONS, do this through the standard method of forcing a relcache flush on the table. For ALTER FOREIGN DATA WRAPPER and ALTER SERVER, just flush the whole plan cache on any change in pg_foreign_data_wrapper or pg_foreign_server. That matches the way we handle some other low-probability cases such as opclass changes, and it's unclear that the case arises often enough to be worth working harder. Besides, that gives a patch that is simple enough to back-patch with confidence. Back-patch to 9.3. In principle we could apply the code change to 9.2 as well, but (a) we lack postgres_fdw to test it with, (b) it's doubtful that anyone is doing anything exciting enough with FDWs that far back to need this desperately, and (c) the patch doesn't apply cleanly. Patch originally by Amit Langote, reviewed by Etsuro Fujita and Ashutosh Bapat, who each contributed substantial changes as well. Discussion: https://postgr.es/m/CAKcux6m5cA6rRPTKkqVdJ-R=KKDfe35Q_ZuUqxDSV_4hwgaemail@example.com
Fix handling of empty arrays in array_fill().
commit : 4555a375a1b27a49f5f9da2474159557360d17de author : Tom Lane <firstname.lastname@example.org> date : Thu, 5 Jan 2017 11:33:51 -0500 committer: Tom Lane <email@example.com> date : Thu, 5 Jan 2017 11:33:51 -0500
array_fill(..., array) produced an empty array, which is probably what users expect, but it was a one-dimensional zero-length array which is not our standard representation of empty arrays. Also, for no very good reason, it rejected empty input arrays; that case should be allowed and produce an empty output array. In passing, remove the restriction that the input array(s) have lower bound 1. That seems rather pointless, and it would have needed extra complexity to make the check deal with empty input arrays. Per bug #14487 from Andrew Gierth. It's been broken all along, so back-patch to all supported branches. Discussion: https://firstname.lastname@example.org
Handle OID column inheritance correctly in ALTER TABLE ... INHERIT.
commit : 50c8196f946d55f7f4b998491e48f592f711c4fe author : Tom Lane <email@example.com> date : Wed, 4 Jan 2017 18:00:11 -0500 committer: Tom Lane <firstname.lastname@example.org> date : Wed, 4 Jan 2017 18:00:11 -0500
Inheritance operations must treat the OID column, if any, much like regular user columns. But MergeAttributesIntoExisting() neglected to do that, leading to weird results after a table with OIDs is associated to a parent with OIDs via ALTER TABLE ... INHERIT. Report and patch by Amit Langote, reviewed by Ashutosh Bapat, some adjustments by me. It's been broken all along, so back-patch to all supported branches. Discussion: https://email@example.com
Prefer int-wide pg_atomic_flag over char-wide when using gcc intrinsics.
commit : 1ed8335ce04598e63944f063c4dc6a8ab08e47bc author : Tom Lane <firstname.lastname@example.org> date : Wed, 4 Jan 2017 13:36:44 -0500 committer: Tom Lane <email@example.com> date : Wed, 4 Jan 2017 13:36:44 -0500
configure can only probe the existence of gcc intrinsics, not how well they're implemented, and unfortunately the answer is sometimes "badly". In particular we've found that multiple compilers fail to implement char-width __sync_lock_test_and_set() correctly on PPC; and even a correct implementation would necessarily be pretty inefficient, since that hardware has only a word-wide primitive to work with. Given the knowledge we've accumulated in s_lock.h, it appears that it's best to rely on int-width TAS operations on most non-Intel architectures. Hence, pick int not char when both are nominally available to us in generic-gcc.h (note that that code is not used for x86[_64]). Back-patch to fix regression test failures on FreeBSD/PPC. Ordinarily back-patching a change like this would be verboten because of ABI breakage. But since pg_atomic_flag is not yet used in any Postgres data structure, there's no ABI to break. It seems safer to back-patch to avoid possible gotchas, if someday we do back-patch something that uses pg_atomic_flag. Discussion: https://firstname.lastname@example.org
Update copyright for 2017
commit : 92ade06b2cf3985a93f45edd10d4855dcc0bf26d author : Bruce Momjian <email@example.com> date : Tue, 3 Jan 2017 12:37:53 -0500 committer: Bruce Momjian <firstname.lastname@example.org> date : Tue, 3 Jan 2017 12:37:53 -0500
Backpatch-through: certain files through 9.2
Remove bogus notice that older clients might not work with MD5 passwords.
commit : 65a7f190b2537218dcbe47ef9d75fc99adcbe99a author : Heikki Linnakangas <email@example.com> date : Tue, 3 Jan 2017 14:09:01 +0200 committer: Heikki Linnakangas <firstname.lastname@example.org> date : Tue, 3 Jan 2017 14:09:01 +0200
That was written when we still had "crypt" authentication, and it was referring to the fact that an older client might support "crypt" authentication but not "md5". But we haven't supported "crypt" for years. (As soon as we add a new authentication mechanism that doesn't work with MD5 hashes, we'll need a similar notice again. But this text as it's worded now is just wrong.) Backpatch to all supported versions. Discussion: https://email@example.com
Silence compiler warnings
commit : cbc62b22952006324199979396d229ce82f6a0d7 author : Joe Conway <firstname.lastname@example.org> date : Mon, 2 Jan 2017 14:42:44 -0800 committer: Joe Conway <email@example.com> date : Mon, 2 Jan 2017 14:42:44 -0800
Rearrange a bit of code to ensure that 'mode' in LWLockRelease is obviously always set, which seems a bit cleaner and avoids a compiler warning (thanks to Robert for the suggestion!). Back-patch back to 9.5 where the warning is first seen. Author: Stephen Frost Discussion: https://postgr.es/m/20161129152102.GR13284%40tamriel.snowman.net
Silence compiler warnings
commit : 35d4dd82c2e82d2bc0e51174b5cfb5ac30061a81 author : Joe Conway <firstname.lastname@example.org> date : Mon, 2 Jan 2017 14:12:04 -0800 committer: Joe Conway <email@example.com> date : Mon, 2 Jan 2017 14:12:04 -0800
In GetCachedPlan(), initialize 'plan' to silence a compiler warning, but also add an Assert() to make sure we don't ever actually fall through with 'plan' still being set to NULL, since we are about to dereference it. Back-patch back to 9.2. Author: Stephen Frost Discussion: https://postgr.es/m/20161129152102.GR13284%40tamriel.snowman.net
Fix incorrect example of to_timestamp() usage.
commit : 4191b9ece43d28cbf8b1257d1a97896184fc1fdf author : Tom Lane <firstname.lastname@example.org> date : Thu, 29 Dec 2016 18:05:34 -0500 committer: Tom Lane <email@example.com> date : Thu, 29 Dec 2016 18:05:34 -0500
Must use HH24 not HH to read an hour value exceeding 12. This was already fixed in HEAD in commit d3cd36a13, but I didn't think of backpatching it. Report: https://firstname.lastname@example.org
Fix interval_transform so it doesn't throw away non-no-op casts.
commit : 4efe7aa2d8df98dc56d74042b25abf77b4e5bb61 author : Tom Lane <email@example.com> date : Tue, 27 Dec 2016 15:43:54 -0500 committer: Tom Lane <firstname.lastname@example.org> date : Tue, 27 Dec 2016 15:43:54 -0500
interval_transform() contained two separate bugs that caused it to sometimes mistakenly decide that a cast from interval to restricted interval is a no-op and throw it away. First, it was wrong to rely on dt.h's field type macros to have an ordering consistent with the field's significance; in one case they do not. This led to mistakenly treating YEAR as less significant than MONTH, so that a cast from INTERVAL MONTH to INTERVAL YEAR was incorrectly discarded. Second, fls(1<<k) produces k+1 not k, so comparing its output directly to SECOND was wrong. This led to supposing that a cast to INTERVAL MINUTE was really a cast to INTERVAL SECOND and so could be discarded. To fix, get rid of the use of fls(), and make a function based on intervaltypmodout to produce a field ID code adapted to the need here. Per bug #14479 from Piotr Stefaniak. Back-patch to 9.2 where transform functions were introduced, because this code was born broken. Discussion: https://email@example.com
Explain unaccounted for space in pgstattuple.
commit : 29e28134fa47ea6041dd89a1f7c0d34b6d0fc136 author : Andrew Dunstan <firstname.lastname@example.org> date : Tue, 27 Dec 2016 11:23:46 -0500 committer: Andrew Dunstan <email@example.com> date : Tue, 27 Dec 2016 11:23:46 -0500
In addition to space accounted for by tuple_len, dead_tuple_len and free_space, the table_len includes page overhead, the item pointers table and padding bytes. Backpatch to live branches.
Remove triggerable Assert in hashname().
commit : 987c4b401f4610fbc8d39922743584a366cc7146 author : Tom Lane <firstname.lastname@example.org> date : Mon, 26 Dec 2016 14:58:02 -0500 committer: Tom Lane <email@example.com> date : Mon, 26 Dec 2016 14:58:02 -0500
hashname() asserted that the key string it is given is shorter than NAMEDATALEN. That should surely always be true if the input is in fact a regular value of type "name". However, for reasons of coding convenience, we allow plain old C strings to be treated as "name" values in many places. Some SQL functions accept arbitrary "text" inputs, convert them to C strings, and pass them otherwise-untransformed to syscache lookups for name columns, allowing an overlength input value to trigger hashname's Assert. This would be a DOS problem, except that it only happens in assert-enabled builds which aren't recommended for production. In a production build, you'll just get a name lookup error, since regardless of the hash value computed by hashname, the later equality comparison checks can't match. Likewise, if the catalog lookup is done by seqscan or indexscan searches, there will just be a lookup error, since the name comparison functions don't contain any similar length checks, and will see an overlength input as unequal to any stored entry. After discussion we concluded that we should simply remove this Assert. It's inessential to hashname's own functionality, and having such an assertion in only some paths for name lookup is more of a foot-gun than a useful check. There may or may not be a case for the affected callers to do something other than let the name lookup fail, but we'll consider that separately; in any case we probably don't want to change such behavior in the back branches. Per report from Tushar Ahuja. Back-patch to all supported branches. Report: https://firstname.lastname@example.org Discussion: https://email@example.com
pg_dumpall: Include --verbose option in --help output
commit : 846eaadd059aa6953b90ad7f2f6f16a965013748 author : Stephen Frost <firstname.lastname@example.org> date : Sat, 24 Dec 2016 01:42:10 -0500 committer: Stephen Frost <email@example.com> date : Sat, 24 Dec 2016 01:42:10 -0500
The -v/--verbose option was not included in the output from --help for pg_dumpall even though it's in the pg_dumpall documentation and has apparently been around since pg_dumpall was reimplemented in C in 2002. Fix that by adding it. Pointed out by Daniel Westermann. Back-patch to all supported branches. Discussion: https://www.postgresql.org/message-id/2020970042.4589542.1482482101585.JavaMail.zimbra%40dbi-services.com
Fix tab completion in psql for ALTER DEFAULT PRIVILEGES
commit : 16a2efdb28dfdbf705fcf7b95c981847cad7234d author : Stephen Frost <firstname.lastname@example.org> date : Fri, 23 Dec 2016 21:01:40 -0500 committer: Stephen Frost <email@example.com> date : Fri, 23 Dec 2016 21:01:40 -0500
When providing tab completion for ALTER DEFAULT PRIVILEGES, we are including the list of roles as possible options for completion after the GRANT or REVOKE. Further, we accept FOR ROLE/IN SCHEMA at the same time and in either order, but the tab completion was only working for one or the other. Lastly, we weren't using the actual list of allowed kinds of objects for default privileges for completion after the 'GRANT X ON' but instead were completing to what 'GRANT X ON' supports, which isn't the same at all. Address these issues by improving the forward tab-completion for ALTER DEFAULT PRIVILEGES and then constrain and correct how the tail completion is done when it is for ALTER DEFAULT PRIVILEGES. Back-patch the forward/tail tab-completion to 9.6, where we made it easy to handle such cases. For 9.5 and earlier, correct the initial tab-completion to at least be correct as far as it goes and then add a check for GRANT/REVOKE to only tab-complete when the GRANT/REVOKE is the start of the command, so we don't try to do tab-completion after we get to the GRANT/REVOKE part of the ALTER DEFAULT PRIVILEGES command, which is better than providing incorrect completions. Initial patch for master and 9.6 by Gilles Darold, though I cleaned it up and added a few comments. All bugs in the 9.5 and earlier patch are mine. Discussion: https://firstname.lastname@example.org
Doc: improve index entry for "median".
commit : a15c59d19624713f8e02fb11c5460003d9be98ed author : Tom Lane <email@example.com> date : Fri, 23 Dec 2016 12:53:09 -0500 committer: Tom Lane <firstname.lastname@example.org> date : Fri, 23 Dec 2016 12:53:09 -0500
We had an index entry for "median" attached to the percentile_cont function entry, which was pretty useless because a person following the link would never realize that that function was the one they were being hinted to use. Instead, make the index entry point at the example in syntax-aggregates, and add a <seealso> link to "percentile". Also, since that example explicitly claims to be calculating the median, make it use percentile_cont not percentile_disc. This makes no difference in terms of the larger goals of that section, but so far as I can find, nearly everyone thinks that "median" means the continuous not discrete calculation. Per gripe from Steven Winfield. Back-patch to 9.4 where we introduced percentile_cont. Discussion: https://email@example.com
Improve RLS documentation with respect to COPY
commit : 53afdef662ef77a3f27fc36dbc0472f855069fd3 author : Joe Conway <firstname.lastname@example.org> date : Thu, 22 Dec 2016 17:57:14 -0800 committer: Joe Conway <email@example.com> date : Thu, 22 Dec 2016 17:57:14 -0800
Documentation for pg_restore said COPY TO does not support row security when in fact it should say COPY FROM. Fix that. While at it, make it clear that "COPY FROM" does not allow RLS to be enabled and INSERT should be used instead. Also that SELECT policies will apply to COPY TO statements. Back-patch to 9.5 where RLS first appeared. Author: Joe Conway Reviewed-By: Dean Rasheed and Robert Haas Discussion: https://postgr.es/m/5744FA24.3030008%40joeconway.com
Use TSConfigRelationId in AlterTSConfiguration()
commit : e8236921773dab92f884db2685aee49fdc747cfc author : Stephen Frost <firstname.lastname@example.org> date : Thu, 22 Dec 2016 17:08:49 -0500 committer: Stephen Frost <email@example.com> date : Thu, 22 Dec 2016 17:08:49 -0500
When we are altering a text search configuration, we are getting the tuple from pg_ts_config and using its OID, so use TSConfigRelationId when invoking any post-alter hooks and setting the object address. Further, in the functions called from AlterTSConfiguration(), we're saving information about the command via EventTriggerCollectAlterTSConfig(), so we should be setting commandCollected to true. Also add a regression test to test_ddl_deparse for ALTER TEXT SEARCH CONFIGURATION. Author: Artur Zakirov, a few additional comments by me Discussion: https://www.postgresql.org/message-id/57a71eba-f2c7-e7fd-6fc0-2126ec0b39bd%40postgrespro.ru Back-patch the fix for the InvokeObjectPostAlterHook() call to 9.3 where it was introduced, and the fix for the ObjectAddressSet() call and setting commandCollected to true to 9.5 where those changes to ProcessUtilitySlow() were introduced.
Fix handling of expanded objects in CoerceToDomain and CASE execution.
commit : c472f2a3353dc4907b54ba90b720310bfb2434eb author : Tom Lane <firstname.lastname@example.org> date : Thu, 22 Dec 2016 15:01:28 -0500 committer: Tom Lane <email@example.com> date : Thu, 22 Dec 2016 15:01:28 -0500
When the input value to a CoerceToDomain expression node is a read-write expanded datum, we should pass a read-only pointer to any domain CHECK expressions and then return the original read-write pointer as the expression result. Previously we were blindly passing the same pointer to all the consumers of the value, making it possible for a function in CHECK to modify or even delete the expanded value. (Since a plpgsql function will absorb a passed-in read-write expanded array as a local variable value, it will in fact delete the value on exit.) A similar hazard of passing the same read-write pointer to multiple consumers exists in domain_check() and in ExecEvalCase, so fix those too. The fix requires adding MakeExpandedObjectReadOnly calls at the appropriate places, which is simple enough except that we need to get the data type's typlen from somewhere. For the domain cases, solve this by redefining DomainConstraintRef.tcache as okay for callers to access; there wasn't any reason for the original convention against that, other than not wanting the API of typcache.c to be any wider than it had to be. For CASE, there's no good solution except to add a syscache lookup during executor start. Per bug #14472 from Marcos Castedo. Back-patch to 9.5 where expanded values were introduced. Discussion: https://firstname.lastname@example.org
Fix broken error check in _hash_doinsert.
commit : aa04e5c3c1b1ba27d5e726d9afc291f475051ec0 author : Robert Haas <email@example.com> date : Thu, 22 Dec 2016 13:54:40 -0500 committer: Robert Haas <firstname.lastname@example.org> date : Thu, 22 Dec 2016 13:54:40 -0500
You can't just cast a HashMetaPage to a Page, because the meta page data is stored after the page header, not at offset 0. Fortunately, this didn't break anything because it happens to find hashm_bsize at the offset at which it expects to find pd_pagesize_version, and the values are close enough to the same that this works out. Still, it's a bug, so back-patch to all supported versions. Mithun Cy, revised a bit by me.
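The offset mistake is easy to picture with a toy layout (hypothetical struct names, heavily simplified from the real page format): the meta data lives after the page header, so a pointer to it cannot double as a pointer to the page start.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical, simplified layout: a real PostgreSQL page begins with a
 * header, and the hash meta data is stored after it, not at offset 0. */
typedef struct PageHeader { unsigned short pd_pagesize_version; char pad[22]; } PageHeader;
typedef struct HashMeta   { unsigned int hashm_bsize; } HashMeta;

/* Correct: skip past the page header to reach the meta data. */
static HashMeta *page_get_meta(char *page)
{
    return (HashMeta *) (page + sizeof(PageHeader));
}

/* The bug, in miniature: casting the meta pointer as if it were the page
 * start reads bytes of hashm_bsize where the header field was expected. */
static unsigned int buggy_read_version(HashMeta *meta)
{
    return ((PageHeader *) meta)->pd_pagesize_version;
}

/* Returns 1 when the buggy cast yields a value other than the real header
 * field, demonstrating the aliasing. */
static int meta_misread(void)
{
    union { char bytes[sizeof(PageHeader) + sizeof(HashMeta)]; long align; } u;
    ((PageHeader *) u.bytes)->pd_pagesize_version = 4;
    page_get_meta(u.bytes)->hashm_bsize = 8152;
    return buggy_read_version(page_get_meta(u.bytes)) != 4;
}
```

On either byte order, the mis-cast read returns part of the meta field rather than the header field; the check only appeared to pass because the two values happened to be acceptably close.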
Make dblink try harder to form useful error messages
commit : 80ca22aa60a6138b5ada890339dd6a6e2397222d author : Joe Conway <email@example.com> date : Thu, 22 Dec 2016 09:47:46 -0800 committer: Joe Conway <firstname.lastname@example.org> date : Thu, 22 Dec 2016 09:47:46 -0800
When libpq encounters a connection-level error, e.g. runs out of memory while forming a result, there will be no error associated with PGresult, but a message will be placed into PGconn's error buffer. postgres_fdw takes care to use the PGconn error message when PGresult does not have one, but dblink has been negligent in that regard. Modify dblink to mirror what postgres_fdw has been doing. Back-patch to all supported branches. Author: Joe Conway Reviewed-By: Tom Lane Discussion: https://postgr.es/m/02fa2d90-2efd-00bc-fefc-c23c00eb671e%40joeconway.com
Protect dblink from invalid options when using postgres_fdw server
commit : d5c05f27a43fd0bc777b13dd50de203bade2118d author : Joe Conway <email@example.com> date : Thu, 22 Dec 2016 09:19:18 -0800 committer: Joe Conway <firstname.lastname@example.org> date : Thu, 22 Dec 2016 09:19:18 -0800
When dblink uses a postgres_fdw server name for its connection, it is possible for the connection to have options that are invalid with dblink (e.g. "updatable"). The recommended way to avoid this problem is to use dblink_fdw servers instead. However there are use cases for using postgres_fdw, and possibly other FDWs, for dblink connection options, therefore protect against trying to use any options that do not apply by using is_valid_dblink_option() when building the connection string from the options. Back-patch to 9.3. Although 9.2 supports FDWs for connection info, is_valid_dblink_option() did not yet exist, and neither did postgres_fdw, at least in the postgres source tree. Given the lack of previous complaints, fixing that seems too invasive/not worth it. Author: Corey Huinker Reviewed-By: Joe Conway Discussion: https://postgr.es/m/CADkLM%3DfWyXVEyYcqbcRnxcHutkP45UHU9WD7XpdZaMfe7S%3DRwA%40mail.gmail.com
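The shape of the fix can be sketched generically: run each stored option through a validity predicate before it reaches the connection string (names below are stand-ins, not dblink's actual code).

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical stand-in for is_valid_dblink_option(): libpq connection
 * options pass; FDW-only options such as "updatable" do not. */
static int is_valid_conn_option(const char *name)
{
    static const char *libpq_opts[] = { "host", "port", "dbname", "user", NULL };
    for (int i = 0; libpq_opts[i]; i++)
        if (strcmp(name, libpq_opts[i]) == 0)
            return 1;
    return 0;
}

/* Build a connection string from name/value pairs, skipping options the
 * connection library would reject. Buffer handling is simplified. */
static void build_connstr(char *buf, size_t bufsz,
                          const char *names[], const char *values[], int n)
{
    buf[0] = '\0';
    for (int i = 0; i < n; i++)
    {
        if (!is_valid_conn_option(names[i]))
            continue;           /* e.g. postgres_fdw's "updatable" */
        snprintf(buf + strlen(buf), bufsz - strlen(buf), "%s%s=%s",
                 buf[0] ? " " : "", names[i], values[i]);
    }
}
```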
Give a useful error message if uuid-ossp is built without preconfiguration.
commit : 1cfdb29c7d6621f81d3835e20fd846552c08ec9a author : Tom Lane <email@example.com> date : Thu, 22 Dec 2016 11:19:04 -0500 committer: Tom Lane <firstname.lastname@example.org> date : Thu, 22 Dec 2016 11:19:04 -0500
Before commit b8cc8f947, it was possible to build contrib/uuid-ossp without having told configure you meant to; you could just cd into that directory and "make". That no longer works because the code depends on configure to have done header and library probes, but the ensuing error messages are not so easy to interpret if you're not an old C hand. We've gotten a couple of complaints recently from people trying to do this the low-tech way, so add an explicit #error directing the user to use --with-uuid. (In principle we might want to do something similar in the other optionally-built contrib modules; but I don't think any of the others have ever worked without preconfiguration, so there are no bad habits to break people of.) Back-patch to 9.4 where the previous commit came in. Report: https://postgr.es/m/CAHeEsBf42AWTnk=1qJvFv+mYgRFm07Knsfuc86Ono8nRjf3tvQ@mail.gmail.com Report: https://postgr.es/m/CAKYdkBrUaZX+F6KpmzoHqMtiUqCtAW_w6Dgvr6F0WTiopuGxow@mail.gmail.com
Fix buffer overflow on particularly named files and clarify documentation about output file naming.
commit : a88c547f926c1bd70d7dc180826e7e95824c88d1 author : Michael Meskes <email@example.com> date : Thu, 22 Dec 2016 08:28:13 +0100 committer: Michael Meskes <firstname.lastname@example.org> date : Thu, 22 Dec 2016 08:28:13 +0100
Patch by Tsunakawa, Takayuki <email@example.com>
Improve dblink error message when remote does not provide it
commit : 28c9b6be7f76d64397bfa39a944915d4dbfd1994 author : Joe Conway <firstname.lastname@example.org> date : Wed, 21 Dec 2016 15:48:15 -0800 committer: Joe Conway <email@example.com> date : Wed, 21 Dec 2016 15:48:15 -0800
When dblink or postgres_fdw detects an error on the remote side of the connection, it will try to construct a local error message as best it can using libpq's PQresultErrorField(). When no primary message is available, it was bailing out with an unhelpful "unknown error". Make that message better and more style guide compliant. Per discussion on hackers. Backpatch to 9.2 except postgres_fdw which didn't exist before 9.3. Discussion: https://postgr.es/m/19872.1482338965%40sss.pgh.pa.us
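A minimal sketch of the fallback chain (function name and default text are illustrative, not dblink's exact code): prefer the result's primary message, then the connection's error buffer, then a descriptive default instead of a bare "unknown error".

```c
#include <assert.h>
#include <string.h>

/* Pick the best available error text. NULL or empty means "not provided". */
static const char *pick_error_message(const char *result_msg,
                                      const char *conn_msg)
{
    if (result_msg && result_msg[0])
        return result_msg;          /* primary message from the result */
    if (conn_msg && conn_msg[0])
        return conn_msg;            /* connection-level error buffer */
    return "could not obtain message string for remote error";
}
```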
Fix detection of unfinished Unicode surrogate pair at end of string.
commit : d5633af7b60ba70fb0e1713df69af6672d52e2cd author : Tom Lane <firstname.lastname@example.org> date : Wed, 21 Dec 2016 17:39:32 -0500 committer: Tom Lane <email@example.com> date : Wed, 21 Dec 2016 17:39:32 -0500
The U&'...' and U&"..." syntaxes silently discarded a surrogate pair start (that is, a code between U+D800 and U+DBFF) if it occurred at the very end of the string. This seems like an obvious oversight, since we throw an error for every other invalid combination of surrogate characters, including the very same situation in E'...' syntax. This has been wrong since the pair processing was added (in 9.0), so back-patch to all supported branches. Discussion: https://firstname.lastname@example.org
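The check itself is simple once the escapes are decoded; a sketch over already-parsed code units (simplified from the real lexer, which operates on the escape sequences themselves):

```c
#include <assert.h>

/* A lone leading surrogate is a code unit in U+D800..U+DBFF with no trailing
 * surrogate after it. The oversight was failing to report one that sits at
 * the very end of the string. */
static int ends_with_unpaired_surrogate(const unsigned int *units, int n)
{
    return n > 0 && units[n - 1] >= 0xD800 && units[n - 1] <= 0xDBFF;
}
```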
Improve ALTER TABLE documentation
commit : ec2bc3264ef007e6d7b6463029de689273161336 author : Stephen Frost <email@example.com> date : Wed, 21 Dec 2016 15:03:44 -0500 committer: Stephen Frost <firstname.lastname@example.org> date : Wed, 21 Dec 2016 15:03:44 -0500
The ALTER TABLE documentation wasn't terribly clear when it came to which commands could be combined together and what it meant when they were. In particular, SET TABLESPACE *can* be combined with other commands, when it's operating against a single table, but not when multiple tables are being moved with ALL IN TABLESPACE. Further, the actions are applied together but not really in 'parallel', at least today. Pointed out by: Amit Langote Improved wording from Tom. Back-patch to 9.4, where the ALL IN TABLESPACE option was added. Discussion: https://www.postgresql.org/message-id/14c535b4-13ef-0590-1b98-76af355a0763%40lab.ntt.co.jp
Fix dumping of casts and transforms using built-in functions
commit : 1efc5dba05a81bd91a0d3659f4a4ccfb4c36e680 author : Stephen Frost <email@example.com> date : Wed, 21 Dec 2016 13:47:18 -0500 committer: Stephen Frost <firstname.lastname@example.org> date : Wed, 21 Dec 2016 13:47:18 -0500
In pg_dump.c dumpCast() and dumpTransform(), we would happily ignore the cast or transform if it happened to use a built-in function because we weren't including the information about built-in functions when querying pg_proc from getFuncs(). Modify the query in getFuncs() to also gather information about functions which are used by user-defined casts and transforms (where "user-defined" means "has an OID >= FirstNormalObjectId"). This also adds to the TAP regression tests for 9.6 and master to cover these types of objects. Back-patch all the way for casts, back to 9.5 for transforms. Discussion: https://www.postgresql.org/message-id/flat/20160504183952.GE10850%40tamriel.snowman.net
For 8.0 servers, get last built-in oid from pg_database
commit : 94476436a627d80319d49afe5724633b781e735a author : Stephen Frost <email@example.com> date : Wed, 21 Dec 2016 13:47:18 -0500 committer: Stephen Frost <firstname.lastname@example.org> date : Wed, 21 Dec 2016 13:47:18 -0500
We didn't start ensuring that all built-in objects had OIDs less than 16384 until 8.1, so for 8.0 servers we still need to query the value out of pg_database. We need this, in particular, to distinguish which casts were built-in and which were user-defined. For HEAD, we only worry about going back to 8.0; for the back branches, we also ensure that 7.0-7.4 work. Discussion: https://www.postgresql.org/message-id/flat/20160504183952.GE10850%40tamriel.snowman.net
Fix order of operations in CREATE OR REPLACE VIEW.
commit : 78a98b7674cf7d5d82001f6d8d4ebcb8344fc0cd author : Dean Rasheed <email@example.com> date : Wed, 21 Dec 2016 17:02:47 +0000 committer: Dean Rasheed <firstname.lastname@example.org> date : Wed, 21 Dec 2016 17:02:47 +0000
When CREATE OR REPLACE VIEW acts on an existing view, don't update the view options until after the view query has been updated. This is necessary in the case where CREATE OR REPLACE VIEW is used on an existing view that is not updatable, and the new view is updatable and specifies the WITH CHECK OPTION. In this case, attempting to apply the new options to the view before updating its query fails, because the options are applied using the ALTER TABLE infrastructure which checks that WITH CHECK OPTION is only applied to an updatable view. If new columns are being added to the view, that is also done using the ALTER TABLE infrastructure, but it is important that that still be done before updating the view query, because the rules system checks that the query columns match those on the view relation. Added a comment to explain that, in case someone is tempted to move that to where the view options are now being set. Back-patch to 9.4 where WITH CHECK OPTION was added. Report: https://postgr.es/m/CAEZATCUp%3Dz%3Ds4SzZjr14bfct_bdJNwMPi-gFi3Xc5k1ntbsAgQ%40mail.gmail.com
Fix base backup rate limiting in presence of slow I/O
commit : bc53d71308a2b4bd8216932fda3e21cec4915ff8 author : Magnus Hagander <email@example.com> date : Mon, 19 Dec 2016 10:11:04 +0100 committer: Magnus Hagander <firstname.lastname@example.org> date : Mon, 19 Dec 2016 10:11:04 +0100
When source i/o on disk was too slow compared to the rate limiting specified, the system could end up with a negative value for sleep that it never got out of, which caused rate limiting to effectively be turned off. Discussion: https://postgr.es/m/CABUevEy_-e0YvL4ayoX8bH_Ja9w%2BBHoP6jUgdxZuG2nEj3uAfQ%40mail.gmail.com Analysis by me, patch by Antonin Houska
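The essence of the fix is to clamp the computed sleep instead of carrying a negative balance forward; a sketch with hypothetical names:

```c
#include <assert.h>

/* If the elapsed time already exceeds the time budget for the bytes sent,
 * there is nothing to sleep off; return zero rather than a negative value
 * that would effectively disable throttling from then on. */
static long throttle_sleep_usec(long budget_usec, long elapsed_usec)
{
    long sleep_usec = budget_usec - elapsed_usec;
    return sleep_usec > 0 ? sleep_usec : 0;
}
```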
In contrib/uuid-ossp, #include headers needed for ntohl() and ntohs().
commit : 53140bf22bc4b361836b68f08a2946a2fd2ab240 author : Tom Lane <email@example.com> date : Sat, 17 Dec 2016 22:24:13 -0500 committer: Tom Lane <firstname.lastname@example.org> date : Sat, 17 Dec 2016 22:24:13 -0500
Oversight in commit b8cc8f947. I just noticed this causes compiler warnings on FreeBSD, and it really ought to cause warnings elsewhere too: all references I can find say that <arpa/inet.h> is required for these. We have a lot of code elsewhere that thinks that both <netinet/in.h> and <arpa/inet.h> should be included for these functions, so do it that way here too, even though <arpa/inet.h> ought to be sufficient according to the references I consulted. Back-patch to 9.4 where the previous commit landed.
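The portable include pattern referred to above, with a trivial round trip through the byte-order functions:

```c
#include <assert.h>
#include <netinet/in.h>   /* included alongside <arpa/inet.h> elsewhere in the tree */
#include <arpa/inet.h>    /* the POSIX home of ntohl()/ntohs() */

static unsigned int roundtrip32(unsigned int x)   { return ntohl(htonl(x)); }
static unsigned short roundtrip16(unsigned short x) { return ntohs(htons(x)); }
```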
Fix off-by-one in memory allocation for quote_literal_cstr().
commit : 595333ff493a3b17d82133a01cd64128bb6175b7 author : Heikki Linnakangas <email@example.com> date : Fri, 16 Dec 2016 12:50:20 +0200 committer: Heikki Linnakangas <firstname.lastname@example.org> date : Fri, 16 Dec 2016 12:50:20 +0200
The calculation didn't take into account the NULL terminator. That led to overwriting the palloc'd buffer by one byte, if the input consists entirely of backslashes. For example "format('%L', E'\\')". Fixes bug #14468. Backpatch to all supported versions. Report: https://www.postgresql.org/message-id/20161216105001.13334.42819%40wrigleys.postgresql.org
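The sizing rule in miniature (a simplified quoting routine, not the actual quote_literal_cstr): worst case every input byte doubles, plus the two quotes and the terminating NUL that the buggy calculation omitted.

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Quote a C string as a literal, doubling backslashes and single quotes.
 * Worst case output: quote + 2*len + quote + NUL, hence 2*len + 3 — the
 * final +1 for '\0' is exactly what the off-by-one fix adds. */
static char *quote_literal(const char *s)
{
    size_t len = strlen(s);
    char *out = malloc(2 * len + 3);
    char *p = out;

    *p++ = '\'';
    for (const char *c = s; *c; c++)
    {
        if (*c == '\\' || *c == '\'')
            *p++ = *c;          /* double the character */
        *p++ = *c;
    }
    *p++ = '\'';
    *p = '\0';
    return out;
}
```

An all-backslash input is the case that exercises the worst-case bound exactly, which is why it was the trigger for the overrun.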
Sync our copy of the timezone library with IANA release tzcode2016j.
commit : 492fe48f0113600e86449e3738a17d029dcfe144 author : Tom Lane <email@example.com> date : Thu, 15 Dec 2016 14:32:42 -0500 committer: Tom Lane <firstname.lastname@example.org> date : Thu, 15 Dec 2016 14:32:42 -0500
This is a trivial update (consisting in fact only of the addition of a comment). The point is just to get back to being synced with an official release of tzcode, rather than some ad-hoc point in their commit history, which is where commit 1f87181e1 left it.
Back-patch fcff8a575198478023ada8a48e13b50f70054766 as a bug fix.
commit : bed2a0b06ba54266a4e66affbc8f08e5eea6e8bc author : Kevin Grittner <email@example.com> date : Tue, 13 Dec 2016 19:14:42 -0600 committer: Kevin Grittner <firstname.lastname@example.org> date : Tue, 13 Dec 2016 19:14:42 -0600
When there is both a serialization failure and a unique violation, throw the former rather than the latter. When initially pushed, this was viewed as a feature to assist application framework developers, so that they could more accurately determine when to retry a failed transaction, but a test case presented by Ian Jackson has shown that this patch can prevent serialization anomalies in some cases where a unique violation is caught within a subtransaction, the work of that subtransaction is discarded, and no error is thrown. That makes this a bug fix, so it is being back-patched to all supported branches where it is not already present (i.e., 9.2 to 9.5). Discussion: https://email@example.com Discussion: https://firstname.lastname@example.org
Use "%option prefix" to set API names in ecpg's lexer.
commit : 15b3722700ca043494804dfd1fe7556c50d4f9e9 author : Tom Lane <email@example.com> date : Sun, 11 Dec 2016 18:04:28 -0500 committer: Tom Lane <firstname.lastname@example.org> date : Sun, 11 Dec 2016 18:04:28 -0500
Back-patch commit 92fb64983 into the pre-9.6 branches. Without this, ecpg fails to build with the latest version of flex. It's not unreasonable that people would want to compile our old branches with recent tools. Per report from Дилян Палаузов. Discussion: https://email@example.com
Build backend/parser/scan.l and interfaces/ecpg/preproc/pgc.l standalone.
commit : 4262c5b1eecc63f12f86daa293428009eee54b5c author : Tom Lane <firstname.lastname@example.org> date : Sun, 11 Dec 2016 17:44:16 -0500 committer: Tom Lane <email@example.com> date : Sun, 11 Dec 2016 17:44:16 -0500
Back-patch commit 72b1e3a21 into the pre-9.6 branches. As noted in the original commit, this has some extra benefits: we can narrow the scope of the -Wno-error flag that's forced on scan.c. Also, since these grammar and lexer files are so large, splitting them into separate build targets should have some advantages in build speed, particularly in parallel or ccache'd builds. However, the real reason for doing this now is that it avoids symbol- redefinition warnings (or worse) with the latest version of flex. It's not unreasonable that people would want to compile our old branches with recent tools. Per report from Дилян Палаузов. Discussion: https://firstname.lastname@example.org
Prevent crash when ts_rewrite() replaces a non-top-level subtree with null.
commit : c6caa520008761b2ce9c88f46e537d3044cf7c96 author : Tom Lane <email@example.com> date : Sun, 11 Dec 2016 13:09:57 -0500 committer: Tom Lane <firstname.lastname@example.org> date : Sun, 11 Dec 2016 13:09:57 -0500
When ts_rewrite()'s replacement argument is an empty tsquery, it's supposed to simplify any operator nodes whose operand(s) become NULL; but it failed to do that reliably, because dropvoidsubtree() only examined the top level of the result tree. Rather than make a second recursive pass, let's just give the responsibility to dofindsubquery() to simplify while it's doing the main replacement pass. Per report from Andreas Seltenreich. Artur Zakirov, with some cosmetic changes by me. Back-patch to all supported branches. Discussion: https://email@example.com
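The structural idea, sketched on a generic binary expression tree (hypothetical node type, not tsquery's): simplify operator nodes during the same recursive pass that performs the replacement, so collapses happen at every depth rather than only at the top level.

```c
#include <assert.h>
#include <stddef.h>

typedef struct Node { struct Node *left, *right; int is_leaf, value; } Node;

/* Replace leaves matching 'target' with NULL, and simplify any operator
 * node whose operand vanished, in one recursive pass. */
static Node *rewrite(Node *n, int target)
{
    if (n == NULL)
        return NULL;
    if (n->is_leaf)
        return n->value == target ? NULL : n;
    n->left = rewrite(n->left, target);
    n->right = rewrite(n->right, target);
    if (n->left == NULL)
        return n->right;        /* operator with a void operand collapses */
    if (n->right == NULL)
        return n->left;
    return n;
}
```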
Be more careful about Python refcounts while creating exception objects.
commit : 00858728fd718a9e29fa1dd3311a5e742f83ab5c author : Tom Lane <firstname.lastname@example.org> date : Fri, 9 Dec 2016 15:27:23 -0500 committer: Tom Lane <email@example.com> date : Fri, 9 Dec 2016 15:27:23 -0500
PLy_generate_spi_exceptions neglected to do Py_INCREF on the new exception objects, evidently supposing that PyModule_AddObject would do that --- but it doesn't. This left us in a situation where a Python garbage collection cycle could result in deletion of exception object(s), causing server crashes or wrong answers if the exception objects are used later in the session. In addition, PLy_generate_spi_exceptions didn't bother to test for a null result from PyErr_NewException, which at best is inconsistent with the code in PLy_add_exceptions. And PLy_add_exceptions, while it did do Py_INCREF on the exceptions it makes, waited to do that till after some PyModule_AddObject calls, creating a similar risk for failure if garbage collection happened within those calls. To fix, refactor to have just one piece of code that creates an exception object and adds it to the spiexceptions module, bumping the refcount first. Also, let's add an additional refcount to represent the pointer we're going to store in a C global variable or hash table. This should only matter if the user does something weird like delete the spiexceptions Python module, but lack of paranoia has caused us enough problems in PL/Python already. The fact that PyModule_AddObject doesn't do a Py_INCREF of its own explains the need for the Py_INCREF added in commit 4c966d920, so we can improve the comment about that; also, this means we really want to do that before not after the PyModule_AddObject call. The missing Py_INCREF in PLy_generate_spi_exceptions was reported and diagnosed by Rafa de la Torre; the other fixes by me. Back-patch to all supported branches. Discussion: https://postgr.es/m/CA+Fz15kR1OXZv43mDrJb3XY+1MuQYWhx5kx3ea6BRKQp6ezGkg@mail.gmail.com
Fix reporting of column typmods for multi-row VALUES constructs.
commit : 6a493adda745e7fe65a5f524425a50f131b32531 author : Tom Lane <firstname.lastname@example.org> date : Fri, 9 Dec 2016 12:01:14 -0500 committer: Tom Lane <email@example.com> date : Fri, 9 Dec 2016 12:01:14 -0500
expandRTE() and get_rte_attribute_type() reported the exprType() and exprTypmod() values of the expressions in the first row of the VALUES as being the column type/typmod returned by the VALUES RTE. That's fine for the data type, since we coerce all expressions in a column to have the same common type. But we don't coerce them to have a common typmod, so it was possible for rows after the first one to return values that violate the claimed column typmod. This leads to the incorrect result seen in bug #14448 from Hassan Mahmood, as well as some other corner-case misbehaviors. The desired behavior is the same as we use in other type-unification cases: report the common typmod if there is one, but otherwise return -1 indicating no particular constraint. We fixed this in HEAD by deriving the typmods during transformValuesClause and storing them in the RTE, but that's not a feasible solution in the back branches. Instead, just use a brute-force approach of determining the correct common typmod during expandRTE() and get_rte_attribute_type(). Simple testing says that that doesn't really cost much, at least not in common cases where expandRTE() is only used once per query. It turns out that get_rte_attribute_type() is typically never used at all on VALUES RTEs, so the inefficiency there is of no great concern. Report: https://firstname.lastname@example.org Discussion: https://email@example.com
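The unification rule can be sketched in a few lines (illustrative, not the actual expandRTE() code): report the typmod only if every row agrees on it.

```c
#include <assert.h>

/* Determine the typmod to report for a VALUES column: if every row agrees
 * on one typmod, report it; otherwise report -1, meaning "no particular
 * constraint", as in other type-unification cases. */
static int common_typmod(const int *typmods, int nrows)
{
    int result = nrows > 0 ? typmods[0] : -1;

    for (int i = 1; i < nrows; i++)
        if (typmods[i] != result)
            return -1;
    return result;
}
```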
Fix crasher bug in array_position(s)
commit : 581b09c72524db9a141a1b9217f01cf2b24ae512 author : Alvaro Herrera <firstname.lastname@example.org> date : Fri, 9 Dec 2016 12:42:17 -0300 committer: Alvaro Herrera <email@example.com> date : Fri, 9 Dec 2016 12:42:17 -0300
array_position and its cousin array_positions were caching the element type equality function's FmgrInfo without being careful enough to put it in a long-lived context. This is obviously broken, but it didn't matter in most cases; it only becomes a problem when using arrays of records (involving record_eq). The fix is to ensure that the type's equality function's FmgrInfo is cached in array_position's flinfo->fn_mcxt rather than the current memory context. Apart from record types, the only other case that seems complex enough to possibly cause the same problem is range types. I didn't find a way to reproduce the problem with those, so I only include the test case submitted with the bug report as a regression test. Bug report and patch: Junseok Yang Discussion: https://postgr.es/m/CAE+byMupUURYiZ6bKYgMZb9pgV1CYAijJGqWj-90W=nS7uEOeA@mail.gmail.com Backpatch to 9.5, where array_position appeared.
Log the creation of an init fork unconditionally.
commit : 141ad68964f739c6543dc48143829c2cd0dd0c86 author : Robert Haas <firstname.lastname@example.org> date : Thu, 8 Dec 2016 14:09:09 -0500 committer: Robert Haas <email@example.com> date : Thu, 8 Dec 2016 14:09:09 -0500
Previously, it was thought that this only needed to be done for the benefit of possible standbys, so wal_level = minimal skipped it. But that's not safe, because during crash recovery we might replay an XLOG_DBASE_CREATE or XLOG_TBLSPC_CREATE record which recursively removes the directory that contains the new init fork. So log it always. The user-visible effect of this bug is that if you create a database or tablespace, then create an unlogged table, then crash without checkpointing, then restart, accessing the table will fail, because it won't have been properly reset. This commit fixes that. Michael Paquier, per a report from Konstantin Knizhnik. Wording of the comments per a suggestion from me.
Restore psql's SIGPIPE setting if popen() fails.
commit : 93c78ba19b378b4b54dad5ceb4fdf063bb0998e1 author : Tom Lane <firstname.lastname@example.org> date : Wed, 7 Dec 2016 12:39:24 -0500 committer: Tom Lane <email@example.com> date : Wed, 7 Dec 2016 12:39:24 -0500
Ancient oversight in PageOutput(): if popen() fails, we'd better reset the SIGPIPE handler before returning stdout, because ClosePager() won't. Noticed while fixing the empty-PAGER issue.
Handle empty or all-blank PAGER setting more sanely in psql.
commit : 370c7a863aa7410029d1406a43871722b1f9a8af author : Tom Lane <firstname.lastname@example.org> date : Wed, 7 Dec 2016 12:19:56 -0500 committer: Tom Lane <email@example.com> date : Wed, 7 Dec 2016 12:19:56 -0500
If the PAGER environment variable is set but contains an empty string, psql would pass it to "sh" which would silently exit, causing whatever query output we were printing to vanish entirely. This is quite mystifying; it took a long time for us to figure out that this was the cause of Joseph Brenner's trouble report. Rather than allowing that to happen, we should treat this as another way to specify "no pager". (We could alternatively treat it as selecting the default pager, but it seems more likely that the former is what the user meant to achieve by setting PAGER this way.) Nonempty, but all-white-space, PAGER values have the same behavior, and it's pretty easy to test for that, so let's handle that case the same way. Most other cases of faulty PAGER values will result in the shell printing some kind of complaint to stderr, which should be enough to diagnose the problem, so we don't need to work harder than this. (Note that there's been an intentional decision not to be very chatty about apparent failure returns from the pager process, since that may happen if, eg, the user quits the pager with control-C or some such. I'd just as soon not start splitting hairs about which exit codes might merit making our own report.) libpq's old PQprint() function was already on board with ignoring empty PAGER values, but for consistency, make it ignore all-white-space values as well. It's been like this a long time, so back-patch to all supported branches. Discussion: https://postgr.es/m/CAFfgvXWLOE2novHzYjmQK8-J6TmHz42G8f3X0SORM44+stUGmw@mail.gmail.com
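The test psql needs is just "does the value contain any non-whitespace character?"; a sketch (hypothetical helper name):

```c
#include <assert.h>
#include <ctype.h>
#include <stddef.h>

/* Treat an unset, empty, or all-whitespace PAGER value as "no pager". */
static int pager_usable(const char *pager)
{
    if (pager == NULL)
        return 0;
    for (; *pager; pager++)
        if (!isspace((unsigned char) *pager))
            return 1;
    return 0;
}
```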
Revert "Permit dump/reload of not-too-large >1GB tuples"
commit : f858524ee4f0e7249959ee0ee8dd9f00b3e8d107 author : Alvaro Herrera <firstname.lastname@example.org> date : Tue, 6 Dec 2016 12:36:44 -0300 committer: Alvaro Herrera <email@example.com> date : Tue, 6 Dec 2016 12:36:44 -0300
This reverts commit 646655d264f17cf7fdbc6425ef8bc9a2f9f9ee41. Per Tom Lane, changing the definition of StringInfoData amounts to an ABI break, which is unacceptable in back branches.
Fix incorrect output from gin_desc().
commit : 8606271640401b5a4efd20c54e2850fa88118eb8 author : Fujii Masao <firstname.lastname@example.org> date : Mon, 5 Dec 2016 20:29:41 +0900 committer: Fujii Masao <email@example.com> date : Mon, 5 Dec 2016 20:29:41 +0900
Previously gin_desc() displayed incorrect output "unknown action 0" for XLOG_GIN_INSERT and XLOG_GIN_VACUUM_DATA_LEAF_PAGE records with valid actions. The cause of this problem was that gin_desc() wrongly used XLogRecGetData() to extract data from those records. Since they were registered by XLogRegisterBufData(), gin_desc() should have used XLogRecGetBlockData(), instead, like gin_redo(). Also there were other differences about how to treat XLOG_GIN_INSERT record between gin_desc() and gin_redo(). This commit fixes gin_desc() routine so that it treats those records in the same way as gin_redo(). Back-patch to 9.5 where WAL record format was revamped and XLogRegisterBufData() was added. Reported-By: Andres Freund Reviewed-By: Tom Lane Discussion: <firstname.lastname@example.org>
Don't mess up pstate->p_next_resno in transformOnConflictClause().
commit : 25c06a1ed647dca4f308026e00fd3e830b4ba383 author : Tom Lane <email@example.com> date : Sun, 4 Dec 2016 15:02:27 -0500 committer: Tom Lane <firstname.lastname@example.org> date : Sun, 4 Dec 2016 15:02:27 -0500
transformOnConflictClause incremented p_next_resno while generating the phony targetlist for the EXCLUDED pseudo-rel. Then that field got incremented some more during transformTargetList, possibly leading to free_parsestate concluding that we'd overrun the allowed length of a tlist, as reported by Justin Pryzby. We could fix this by resetting p_next_resno to 1 after using it for the EXCLUDED pseudo-rel tlist, but it seems easier and less coupled to other places if we just don't use that field at all in this loop. (Note that this doesn't change anything about the resnos that end up appearing in the main target list, because those are all replaced with target-column numbers by updateTargetListEntry.) In passing, fix incorrect type OID assigned to the whole-row Var for "EXCLUDED.*" (somehow this escaped having any bad consequences so far, but it's certainly wrong); remove useless assignment to var->location; pstrdup the column names in case of a relcache flush; and improve nearby comments. Back-patch to 9.5 where ON CONFLICT was introduced. Report: https://postgr.es/m/20161204163237.GA8030@telsasoft.com
Make pgwin32_putenv() visit debug CRTs.
commit : 5ab4b2ec4b7a3794aff271f05310f675dd2c7d05 author : Noah Misch <email@example.com> date : Sat, 3 Dec 2016 15:46:36 -0500 committer: Noah Misch <firstname.lastname@example.org> date : Sat, 3 Dec 2016 15:46:36 -0500
This has no effect in the most conventional case, where no relevant DLL uses a debug build. For an example where it does matter, given a debug build of MIT Kerberos, the krb_server_keyfile parameter usually had no effect. Since nobody wants a Heisenbug, back-patch to 9.2 (all supported versions). Christian Ullrich, reviewed by Michael Paquier.
Remove wrong CloseHandle() call.
commit : 3cb8bdfef998ad54a88da5d1d9e0af9be3ab79cc author : Noah Misch <email@example.com> date : Sat, 3 Dec 2016 15:46:35 -0500 committer: Noah Misch <firstname.lastname@example.org> date : Sat, 3 Dec 2016 15:46:35 -0500
In accordance with its own documentation, invoke CloseHandle() only when directed in the documentation for the function that furnished the handle. GetModuleHandle() does not so direct. We have been issuing this call only in the rare event that a CRT DLL contains no "_putenv" symbol, so lack of bug reports is uninformative. Back-patch to 9.2 (all supported versions). Christian Ullrich, reviewed by Michael Paquier.
Refine win32env.c cosmetics.
commit : c8e18339cc59dc41632dc12e53e858e197ade56b author : Noah Misch <email@example.com> date : Sat, 3 Dec 2016 15:46:35 -0500 committer: Noah Misch <firstname.lastname@example.org> date : Sat, 3 Dec 2016 15:46:35 -0500
Replace use of plain 0 as a null pointer constant. In comments, update terminology and lessen redundancy. Back-patch to 9.2 (all supported versions) for the convenience of back-patching the next two commits. Christian Ullrich and Noah Misch, reviewed (in earlier versions) by Michael Paquier.
Permit dump/reload of not-too-large >1GB tuples
commit : 646655d264f17cf7fdbc6425ef8bc9a2f9f9ee41 author : Alvaro Herrera <email@example.com> date : Fri, 2 Dec 2016 00:34:01 -0300 committer: Alvaro Herrera <firstname.lastname@example.org> date : Fri, 2 Dec 2016 00:34:01 -0300
Our documentation states that our maximum field size is 1 GB, and that our maximum row size is 1.6 TB. However, while this might be attainable in theory with enough contortions, it is not workable in practice; for starters, pg_dump fails to dump tables containing rows larger than 1 GB, even if individual columns are well below the limit; and even if one does manage to manufacture a dump file containing a row that large, the server refuses to load it anyway. This commit enables dumping and reloading of such tuples, provided two conditions are met:

1. no single column is larger than 1 GB (in output size -- for bytea this includes the formatting overhead)

2. the whole row is not larger than 2 GB

There are three related changes to enable this:

a. StringInfo's API now has two additional functions that allow creating a string that grows beyond the typical 1GB limit (a "long" string). ABI compatibility is maintained. We still limit these strings to 2 GB, though, for reasons explained below.

b. COPY now uses long StringInfos, so that pg_dump doesn't choke trying to emit rows longer than 1GB.

c. heap_form_tuple now uses the MCXT_ALLOW_HUGE flag in its allocation for the input tuple, which means that large tuples are accepted on input. Note that at this point we do not apply any further limit to the input tuple size.

The main reason to limit to 2 GB is that the FE/BE protocol uses 32-bit length words to describe each row; and because the documentation is ambiguous on its signedness and libpq does consider it signed, we cannot use the highest-order bit. Additionally, the StringInfo API uses "int" (which is 4 bytes wide on most platforms) in many places, so we'd need to change that API too in order to improve on that, which would have lots of fallout.

Backpatch to 9.5, which is the oldest that has MemoryContextAllocExtended, a necessary piece of infrastructure.
We could back-patch to 9.4 with very minimal additional effort, but any further than that would require backpatching "huge" allocations too. This is the largest set of changes we could find that can be back-patched without breaking compatibility with existing systems. Fixing a bigger set of problems (for example, dumping tuples bigger than 2GB, or dumping fields bigger than 1GB) would require changing the FE/BE protocol and/or changing the StringInfo API in an ABI-incompatible way, neither of which would be back-patchable. Authors: Daniel Vérité, Álvaro Herrera Reviewed by: Tomas Vondra Discussion: https://postgr.es/m/20160229183023.GA286012@alvherre.pgsql
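The 1 GB vs. 2 GB growth policy described above can be sketched roughly like this (hypothetical code with invented names and stand-in limit constants, not the actual StringInfo implementation):

```c
#include <assert.h>
#include <stdint.h>

#define ORDINARY_LIMIT ((int64_t) 0x3FFFFFFF)	/* ~1 GB, stand-in for the usual
												 * allocation limit */
#define LONG_LIMIT     ((int64_t) 0x7FFFFFFF)	/* ~2 GB: FE/BE row length words
												 * are effectively signed 32-bit,
												 * so this is the ceiling */

/*
 * Hypothetical sketch of the growth policy (not the real code): double the
 * buffer until it fits "needed" bytes, but stop at a 1 GB limit for ordinary
 * strings and a 2 GB limit for "long" strings.  Returns the new allocation
 * size, or -1 if the request cannot be satisfied at all.
 */
static int64_t
grow_buffer_size(int64_t current, int64_t needed, int allow_long)
{
	int64_t		limit = allow_long ? LONG_LIMIT : ORDINARY_LIMIT;

	if (needed > limit)
		return -1;				/* caller must raise an error */
	while (current < needed)
	{
		current *= 2;
		if (current > limit)
			return limit;		/* clamp at the limit instead of overshooting */
	}
	return current;
}
```

The clamp step matters: without it, doubling from below 2 GB would overflow the signed 32-bit sizes the surrounding API uses.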
Doc: improve description of trim() and related functions.
commit : bb389ad8cdda3756b93e06a532128c5b34307673 author : Tom Lane <email@example.com> date : Wed, 30 Nov 2016 13:34:14 -0500 committer: Tom Lane <firstname.lastname@example.org> date : Wed, 30 Nov 2016 13:34:14 -0500
Per bug #14441 from Mark Pether, the documentation could be misread, mainly because some of the examples failed to show what happens with a multicharacter "characters to trim" string. Also, while the text description in most of these entries was fairly clear that the "characters" argument is a set of characters not a substring to match, some of them used variant wording that was a bit less clear. trim() itself suffered from both deficiencies and was thus pretty misinterpretable. Also fix failure to explain which of LEADING/TRAILING/BOTH is the default. Discussion: https://email@example.com
Clarify pg_dump -b documentation
commit : 6c1a257fc67957410e421b0a276f845d9e1f9ff0 author : Stephen Frost <firstname.lastname@example.org> date : Tue, 29 Nov 2016 10:35:10 -0500 committer: Stephen Frost <email@example.com> date : Tue, 29 Nov 2016 10:35:10 -0500
The documentation around the -b/--blobs option to pg_dump seemed to imply that it might be possible to add blobs to a "schema-only" dump or similar. Clarify that blobs are data and therefore will only be included in dumps where data is being included, even when -b is used to request blobs be included. The -b option has been around since before 9.2, so back-patch to all supported branches. Discussion: https://postgr.es/m/20161119173316.GA13284@tamriel.snowman.net
Mention server start requirement for ssl parameters
commit : eb516e87ea2832e4501e9a95ac82d5bb2d6bbdd5 author : Magnus Hagander <firstname.lastname@example.org> date : Sun, 27 Nov 2016 17:10:02 +0100 committer: Magnus Hagander <email@example.com> date : Sun, 27 Nov 2016 17:10:02 +0100
The documentation for three ssl-related parameters failed to specify that they can only be changed at server start; fix that. Michael Paquier
Fix test about ignoring extension dependencies during extension scripts.
commit : 576bd360b2702e5418ee0c5cb11db4cc04ccec99 author : Tom Lane <firstname.lastname@example.org> date : Sat, 26 Nov 2016 13:31:35 -0500 committer: Tom Lane <email@example.com> date : Sat, 26 Nov 2016 13:31:35 -0500
Commit 08dd23cec introduced an exception to the rule that extension member objects can only be dropped as part of dropping the whole extension, intending to allow such drops while running the extension's own creation or update scripts. However, the exception was only applied at the outermost recursion level, because it was modeled on a pre-existing check to ignore dependencies on objects listed in pendingObjects. Bug #14434 from Philippe Beaudoin shows that this is inadequate: in some cases we can reach an extension member object by recursion from another one. (The bug concerns the serial-sequence case; I'm not sure if there are other cases, but there might well be.) To fix, revert 08dd23cec's changes to findDependentObjects() and instead apply the creating_extension exception regardless of stack level. Having seen this example, I'm a bit suspicious that the pendingObjects logic is also wrong and such cases should likewise be allowed at any recursion level. However, changing that would interact in subtle ways with the recursion logic (at least it would need to be moved to after the recursing-from check). Given that the code's been like that a long time, I'll refrain from touching it without a clear example showing it's wrong. Back-patch to all active branches. In HEAD and 9.6, where suitable test infrastructure exists, add a regression test case based on the bug report. Report: <firstname.lastname@example.org> Discussion: <email@example.com>
Check for pending trigger events on far end when dropping an FK constraint.
commit : 6cbe84c826b51a159825e9843184c7b4a740e4ae author : Tom Lane <firstname.lastname@example.org> date : Fri, 25 Nov 2016 13:44:48 -0500 committer: Tom Lane <email@example.com> date : Fri, 25 Nov 2016 13:44:48 -0500
When dropping a foreign key constraint with ALTER TABLE DROP CONSTRAINT, we refuse the drop if there are any pending trigger events on the named table; this ensures that we won't remove the pg_trigger row that will be consulted by those events. But we should make the same check for the referenced relation, else we might remove a due-to-be-referenced pg_trigger row for that relation too, resulting in "could not find trigger NNN" or "relation NNN has no triggers" errors at commit. Per bug #14431 from Benjie Gillam. Back-patch to all supported branches. Report: <firstname.lastname@example.org>
Fix commit_ts for FrozenXid and BootstrapXid
commit : 7816d13563b74379c9db618e46883c6db5fc0680 author : Alvaro Herrera <email@example.com> date : Thu, 24 Nov 2016 15:39:55 -0300 committer: Alvaro Herrera <firstname.lastname@example.org> date : Thu, 24 Nov 2016 15:39:55 -0300
Previously, requesting commit timestamp for transactions FrozenTransactionId and BootstrapTransactionId resulted in an error. But since those values can validly appear in committed tuples' Xmin, this behavior is unhelpful and error prone: each caller would have to special-case those values before requesting timestamp data for an Xid. We already have a perfectly good interface for returning "the Xid you requested is too old for us to have commit TS data for it", so let's use that instead. Backpatch to 9.5, where commit timestamps appeared. Author: Craig Ringer Discussion: https://www.postgresql.org/message-id/CAMsr+YFM5Q=+ry3mKvWEqRTxrB0iU3qUSRnS28nz6FJYtBwhJg@mail.gmail.com
Make sure ALTER TABLE preserves index tablespaces.
commit : e0375d77b691f6cab70934c63d3212a4713f66df author : Tom Lane <email@example.com> date : Wed, 23 Nov 2016 13:45:56 -0500 committer: Tom Lane <firstname.lastname@example.org> date : Wed, 23 Nov 2016 13:45:56 -0500
When rebuilding an existing index, ALTER TABLE correctly kept the physical file in the same tablespace, but it messed up the pg_class entry if the index had been in the database's default tablespace and "default_tablespace" was set to some non-default tablespace. This led to an inaccessible index. Fix by changing pg_get_indexdef_string() to always include a tablespace clause, whether or not the index is in the default tablespace. The previous behavior was installed in commit 537e92e41, and I think it just wasn't thought through very clearly; certainly the possible effect of default_tablespace wasn't considered. There's some risk in changing the behavior of this function, but there are no other call sites in the core code. Even if it's being used by some third party extension, it's fairly hard to envision a usage that is okay with a tablespace clause being appended some of the time but can't handle it being appended all the time. Back-patch to all supported versions. Code fix by me, investigation and test cases by Michael Paquier. Discussion: <email@example.com>
Doc: in back branches, don't call it a row constructor if it isn't really.
commit : 3de257f602550407ba095189feb6d3db8cd02b22 author : Tom Lane <firstname.lastname@example.org> date : Tue, 22 Nov 2016 18:07:43 -0500 committer: Tom Lane <email@example.com> date : Tue, 22 Nov 2016 18:07:43 -0500
Before commit 906bfcad7, we were not actually processing the righthand side of a multiple-column assignment in UPDATE as a row constructor: it was just a parenthesized list of expressions. Call it that rather than risking confusion by people who would expect the documented behaviors of row constructors to apply. Back-patch to 9.5; before that, the text correctly described the construct as a "list of independent expressions". Discussion: <firstname.lastname@example.org>
Doc: improve documentation about composite-value usage.
commit : ff9730aa15d4f964c0f4bb3bc73b12a9e5312e9a author : Tom Lane <email@example.com> date : Tue, 22 Nov 2016 17:56:16 -0500 committer: Tom Lane <firstname.lastname@example.org> date : Tue, 22 Nov 2016 17:56:16 -0500
Create a section specifically for the syntactic rules around whole-row variable usage, such as expansion of "foo.*". This was previously documented only haphazardly, with some critical info buried in unexpected places like xfunc-sql-composite-functions. Per repeated questions in different mailing lists. Discussion: <email@example.com>
Doc: add a section in Part II concerning RETURNING.
commit : 57995544497ce7e62552ce55ecdf6eca90198788 author : Tom Lane <firstname.lastname@example.org> date : Tue, 22 Nov 2016 14:02:52 -0500 committer: Tom Lane <email@example.com> date : Tue, 22 Nov 2016 14:02:52 -0500
There are assorted references to RETURNING in Part II, but nothing that would qualify as an explanation of the feature, which seems like an oversight considering how useful it is. Add something. Noted while looking for a place to point a cross-reference to ...
Make contrib/test_decoding regression tests safe for CZ locale.
commit : 89c2d81438da2e0bc66ed819769748d58f03a489 author : Tom Lane <firstname.lastname@example.org> date : Mon, 21 Nov 2016 20:39:28 -0500 committer: Tom Lane <email@example.com> date : Mon, 21 Nov 2016 20:39:28 -0500
A little COLLATE "C" goes a long way. Pavel Stehule, per suggestion from Craig Ringer Discussion: <CAFj8pRA8nJZcozgxN=RMSqMmKuHVOkcGAAKPKdFeiMWGDSUDLA@mail.gmail.com>
Fix PGLC_localeconv() to handle errors better.
commit : a0c15427df155da88c5c243129d61ce54604f66a author : Tom Lane <firstname.lastname@example.org> date : Mon, 21 Nov 2016 18:21:56 -0500 committer: Tom Lane <email@example.com> date : Mon, 21 Nov 2016 18:21:56 -0500
The code was intentionally not very careful about leaking strdup'd strings in case of an error. That was forgivable probably, but it also failed to notice strdup() failures, which could lead to subsequent null-pointer-dereference crashes, since many callers unsurprisingly didn't check for null pointers in the struct lconv fields. An even worse problem is that it could throw error while we were setlocale'd to a non-C locale, causing unwanted behavior in subsequent libc calls. Rewrite to ensure that we cannot throw elog(ERROR) until after we've restored the previous locale settings, or at least attempted to. (I'm sorely tempted to make restore failure be a FATAL error, but will refrain for the moment.) Having done that, it's not much more work to ensure that we clean up strdup'd storage on the way out, too. This code is substantially the same in all supported branches, so back-patch all the way. Michael Paquier and Tom Lane Discussion: <CAB7nPqRMbGqa_mesopcn4MPyTs34eqtVEK7ELYxvvV=oqS00YA@mail.gmail.com>
Prevent multicolumn expansion of "foo.*" in an UPDATE source expression.
commit : aeb5e82427cb300738a97977192cbe1f9644ae60 author : Tom Lane <firstname.lastname@example.org> date : Sun, 20 Nov 2016 14:26:19 -0500 committer: Tom Lane <email@example.com> date : Sun, 20 Nov 2016 14:26:19 -0500
Because we use transformTargetList() for UPDATE as well as SELECT tlists, the code accidentally tried to expand a "*" reference into several columns. This is nonsensical, because the UPDATE syntax provides exactly one target column to put the value into. The immediate result was that transformUpdateTargetList() got confused and reported "UPDATE target count mismatch --- internal error". It seems better to treat such a reference as a plain whole-row variable, as it would be in other contexts. (This could produce useful results when the target column is of composite type.) Fix by tweaking transformTargetList() to perform *-expansion only conditionally, depending on its exprKind parameter. Back-patch to 9.3. The problem exists further back, but a fix would be much more invasive before that, because transformTargetList() wasn't told what kind of list it was working on. Doesn't seem worth the trouble given the lack of field reports. (I only noticed it because I was checking the code while trying to improve the documentation about how we handle "foo.*".) Discussion: <firstname.lastname@example.org>
Code review for GUC serialization/deserialization code.
commit : b9ee42e70a412606c01f6dbb05c2151d38f58de5 author : Tom Lane <email@example.com> date : Sat, 19 Nov 2016 14:26:20 -0500 committer: Tom Lane <firstname.lastname@example.org> date : Sat, 19 Nov 2016 14:26:20 -0500
The serialization code dumped core for a string-valued GUC whose value is NULL, which is a legal state. The infrastructure isn't capable of transmitting that state exactly, but fortunately, transmitting an empty string instead should be close enough (compare, eg, commit e45e990e4). The code potentially underestimated the space required to format a real-valued variable, both because it made an unwarranted assumption that %g output would never be longer than %e output, and because it didn't count right even for %e format. In practice this would pretty much always be masked by overestimates for other variables, but it's still wrong. Also fix boundary-case error in read_gucstate, incorrect handling of the case where guc_sourcefile is non-NULL but zero length (not clear that can happen, but if it did, this code would get totally confused), and confusingly useless check for a NULL result from read_gucstate. Andreas Seltenreich discovered the core dump; other issues noted while reading nearby code. Back-patch to 9.5 where this code was introduced. Michael Paquier and Tom Lane Discussion: <email@example.com>
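One defensive way to avoid underestimating the width of formatted floats (a hedged illustration of the sizing pitfall, not what the commit itself does; the function name is invented) is to measure the exact length first, using C99 snprintf's NULL-buffer mode:

```c
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/*
 * Hypothetical sketch: instead of guessing that "%g output is never longer
 * than %e output", ask snprintf for the exact formatted length (C99 allows a
 * NULL buffer with size 0) and allocate precisely that much.
 */
static char *
format_double_exact(double v)
{
	int			len = snprintf(NULL, 0, "%.17g", v);
	char	   *buf;

	if (len < 0)
		return NULL;			/* encoding error */
	buf = malloc((size_t) len + 1);
	if (buf == NULL)
		return NULL;
	snprintf(buf, (size_t) len + 1, "%.17g", v);
	return buf;
}
```

This trades a second formatting pass for never having to reason about worst-case output widths at all.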
Improve pg_dump/pg_restore --create --if-exists logic.
commit : a7864037d8b7fd172d870782a8024d3f96b0b17b author : Tom Lane <firstname.lastname@example.org> date : Thu, 17 Nov 2016 14:59:23 -0500 committer: Tom Lane <email@example.com> date : Thu, 17 Nov 2016 14:59:23 -0500
Teach it not to complain if the dropStmt attached to an archive entry is actually spelled CREATE OR REPLACE VIEW, since that will happen due to an upcoming bug fix. Also, if it doesn't recognize a dropStmt, have it print a WARNING and then emit the dropStmt unmodified. That seems like a much saner behavior than Assert'ing or dumping core due to a null-pointer dereference, which is what would happen before :-(. Back-patch to 9.4 where this option was introduced. Discussion: <firstname.lastname@example.org>
Avoid pin scan for replay of XLOG_BTREE_VACUUM in all cases
commit : c0db1ec2600a898ac75d14057e01fb716059a2f5 author : Alvaro Herrera <email@example.com> date : Thu, 17 Nov 2016 13:31:30 -0300 committer: Alvaro Herrera <firstname.lastname@example.org> date : Thu, 17 Nov 2016 13:31:30 -0300
Replay of XLOG_BTREE_VACUUM during Hot Standby was previously thought to require complex interlocking that matched the requirements on the master. This required an O(N) operation that became a significant problem with large indexes, causing replication delays of seconds or in some cases minutes while the XLOG_BTREE_VACUUM was replayed. This commit skips the "pin scan" that was previously required, by observing in detail when and how it is safe to do so, with full documentation. The pin scan is skipped only in replay; the VACUUM code path on master is not touched here. No tests included. Manual tests using an additional patch to view WAL records and their timing have shown the change in WAL records and their handling has successfully reduced replication delay. This is a back-patch of commits 687f2cd7a015, 3e4b7d87988f, b60284261375 by Simon Riggs, to branches 9.4 and 9.5. No further backpatch is possible because this depends on catalog scans being MVCC. I (Álvaro) additionally fixed a slight problem in the README, which explains why this touches the 9.6 and master branches.
Allow DOS-style line endings in ~/.pgpass files.
commit : 8951f92da48cb2866b9f694e91f57eeaac7489bb author : Tom Lane <email@example.com> date : Tue, 15 Nov 2016 16:17:19 -0500 committer: Tom Lane <firstname.lastname@example.org> date : Tue, 15 Nov 2016 16:17:19 -0500
On Windows, libc will mask \r\n line endings for us, since we read the password file in text mode. But that doesn't happen on Unix. People who share password files across both systems might have \r\n line endings in a file they use on Unix, so as a convenience, ignore trailing \r. Per gripe from Josh Berkus. In passing, put the existing check for empty line somewhere where it's actually useful, ie after stripping the newline not before. Vik Fearing, adjusted a bit by me Discussion: <email@example.com>
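The stripping order described above -- remove the newline, then any carriage return, and only then test for an empty line -- can be sketched as follows (a hypothetical illustration, not the actual libpq code):

```c
#include <assert.h>
#include <string.h>

/*
 * Hypothetical sketch of the line cleanup: strip a trailing newline and,
 * for files shared with Windows, a trailing carriage return as well.
 * The caller should check for an empty line *after* this, not before.
 */
static void
strip_trailing_newline(char *buf)
{
	size_t		len = strlen(buf);

	/* remove \n first, then any \r left over from a DOS-style \r\n ending */
	if (len > 0 && buf[len - 1] == '\n')
		buf[--len] = '\0';
	if (len > 0 && buf[len - 1] == '\r')
		buf[--len] = '\0';
}
```

Checking for emptiness only after stripping is what makes a line consisting solely of "\r\n" get skipped correctly.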
Account for catalog snapshot in PGXACT->xmin updates.
commit : 0bc3ed98c08d01da1138886a06829a93f97c923e author : Tom Lane <firstname.lastname@example.org> date : Tue, 15 Nov 2016 15:55:35 -0500 committer: Tom Lane <email@example.com> date : Tue, 15 Nov 2016 15:55:35 -0500
The CatalogSnapshot was not plugged into SnapshotResetXmin()'s accounting for whether MyPgXact->xmin could be cleared or advanced. In normal transactions this was masked by the fact that the transaction snapshot would be older, but during backend startup and certain utility commands it was possible to re-use the CatalogSnapshot after MyPgXact->xmin had been cleared, meaning that recently-deleted rows could be pruned even though this snapshot could still see them, causing unexpected catalog lookup failures. This effect appears to be the explanation for a recent failure on buildfarm member piculet. To fix, add the CatalogSnapshot to the RegisteredSnapshots heap whenever it is valid. In the previous logic, it was possible for the CatalogSnapshot to remain valid across waits for client input, but with this change that would mean it delays advance of global xmin in cases where it did not before. To avoid possibly causing new table-bloat problems with clients that sit idle for long intervals, add code to invalidate the CatalogSnapshot before waiting for client input. (When the backend is busy, it's unlikely that the CatalogSnapshot would be the oldest snap for very long, so we don't worry about forcing early invalidation of it otherwise.) In passing, remove the CatalogSnapshotStale flag in favor of using "CatalogSnapshot != NULL" to represent validity, as we do for the other special snapshots in snapmgr.c. And improve some obsolete comments. No regression test because I don't know a deterministic way to cause this failure. But the stress test shown in the original discussion provokes "cache lookup failed for relation 1255" within a few dozen seconds for me. Back-patch to 9.4 where MVCC catalog scans were introduced. (Note: it's quite easy to produce similar failures with the same test case in branches before 9.4. But MVCC catalog scans were supposed to fix that.) Discussion: <firstname.lastname@example.org>
Fix typo in comment
commit : 60de884be75cafec97cb15e60e63f72cb9035549 author : Magnus Hagander <email@example.com> date : Mon, 14 Nov 2016 17:31:35 +0100 committer: Magnus Hagander <firstname.lastname@example.org> date : Mon, 14 Nov 2016 17:31:35 +0100
The function was renamed in 908e23473, but the comment never learned about it.
Fix duplication in ALTER MATERIALIZE VIEW synopsis
commit : f16159ce15040b0dc0b2c01b8c7345bfc594cc28 author : Alvaro Herrera <email@example.com> date : Mon, 14 Nov 2016 11:14:34 -0300 committer: Alvaro Herrera <firstname.lastname@example.org> date : Mon, 14 Nov 2016 11:14:34 -0300
Commit 3c4cf080879b should have removed SET TABLESPACE from the synopsis of ALTER MATERIALIZE VIEW as a possible "action" when it added a separate line for it in the main command listing, but failed to. Repair. Backpatch to 9.4, like the aforementioned commit.
Re-allow user_catalog_table option for materialized views.
commit : 6e00ba1e17056b86f698e448f900bc35868f5f64 author : Tom Lane <email@example.com> date : Thu, 10 Nov 2016 15:00:58 -0500 committer: Tom Lane <firstname.lastname@example.org> date : Thu, 10 Nov 2016 15:00:58 -0500
The reloptions stuff allows this option to be set on a matview. While it's questionable whether that is useful or was really intended, it does work, and we shouldn't change that in minor releases. Commit e3e66d8a9 disabled the option since I didn't realize that it was possible for it to be set on a matview. Tweak the test to re-allow it. Discussion: <email@example.com>
commit : af017fc195fbf7e861cef470d5169320c45f5582 author : Magnus Hagander <firstname.lastname@example.org> date : Tue, 8 Nov 2016 18:34:59 +0100 committer: Magnus Hagander <email@example.com> date : Tue, 8 Nov 2016 18:34:59 +0100
Band-aid fix for incorrect use of view options as StdRdOptions.
commit : e2f5cd9cf5588cc37f665fb2ca061898b5c73f2a author : Tom Lane <firstname.lastname@example.org> date : Mon, 7 Nov 2016 12:08:19 -0500 committer: Tom Lane <email@example.com> date : Mon, 7 Nov 2016 12:08:19 -0500
We really ought to make StdRdOptions and the other decoded forms of reloptions self-identifying, but for the moment, assume that only plain relations could possibly be user_catalog_tables. Fixes problem with bogus "ON CONFLICT is not supported on table ... used as a catalog table" error when target is a view with cascade option. Discussion: <firstname.lastname@example.org>
pg_rewind pg_upgrade: Fix translation markers
commit : e98c3ab2bd65f0e78bce20ace477eeed76b87345 author : Peter Eisentraut <email@example.com> date : Mon, 7 Nov 2016 12:00:00 -0500 committer: Peter Eisentraut <firstname.lastname@example.org> date : Mon, 7 Nov 2016 12:00:00 -0500
In pg_log_v(), we need to translate the fmt before processing, not the formatted message afterwards.
Fix handling of symlinked pg_stat_tmp and pg_replslot
commit : 6d779e05a03d2c06433b71b76f9b0168d47d1a3e author : Magnus Hagander <email@example.com> date : Mon, 7 Nov 2016 14:47:30 +0100 committer: Magnus Hagander <firstname.lastname@example.org> date : Mon, 7 Nov 2016 14:47:30 +0100
This was already fixed in HEAD as part of 6ad8ac60 but was not backpatched. Also change the way pg_xlog is handled to be the same as the other directories. Patch from me with pg_xlog addition from Michael Paquier, test updates from David Steele.
Rationalize and document pltcl's handling of magic ".tupno" array element.
commit : abdc839985a396cb8516a9131e75f602ae277d27 author : Tom Lane <email@example.com> date : Sun, 6 Nov 2016 14:43:13 -0500 committer: Tom Lane <firstname.lastname@example.org> date : Sun, 6 Nov 2016 14:43:13 -0500
For a very long time, pltcl's spi_exec and spi_execp commands have had a behavior of storing the current row number as an element of output arrays, but this was never documented. Fix that. For an equally long time, pltcl_trigger_handler had a behavior of silently ignoring ".tupno" as an output column name, evidently so that the result of spi_exec could be used directly as a trigger result tuple. Not sure how useful that really is, but in any case it's bad that it would break attempts to use ".tupno" as an actual column name. We can fix it by not checking for ".tupno" until after we check for a column name match. This comports with the effective behavior of spi_exec[p] that ".tupno" is only magic when you don't have an actual column named that. In passing, wordsmith the description of returning modified tuples from a pltcl trigger. Noted while working on Jim Nasby's patch to support composite results from pltcl. The inability to return trigger tuples using ".tupno" as a column name is a bug, so back-patch to all supported branches.
Need to do SPI_push/SPI_pop around expression evaluation in plpgsql.
commit : 674877e93a1b9217c692a5671e1118136959ee74 author : Tom Lane <email@example.com> date : Sun, 6 Nov 2016 12:09:36 -0500 committer: Tom Lane <firstname.lastname@example.org> date : Sun, 6 Nov 2016 12:09:36 -0500
We must do this in case the expression evaluation results in calling another plpgsql function (or, really, anything using SPI). I missed the need for this when I converted exec_cast_value() from doing a simple InputFunctionCall() to doing ExecEvalExpr() in commit 1345cc67b. There is a SPI_push_conditional in InputFunctionCall(), so there was no bug before that commit. Per bug #14414 from Marcos Castedo. Add a regression test based on his example, which was that a plpgsql function in a domain check constraint didn't work when assigning to a domain-type variable within plpgsql. Report: <email@example.com>
More zic cleanup.
commit : 6e377ef0cc88851f9d077ae93f529701af81170d author : Tom Lane <firstname.lastname@example.org> date : Sun, 6 Nov 2016 10:45:58 -0500 committer: Tom Lane <email@example.com> date : Sun, 6 Nov 2016 10:45:58 -0500
The workaround the IANA guys chose to get rid of the clang warning we'd silenced in commit 23ed2ba81 turns out not to satisfy Coverity. Go back to the previous solution, ie, remove the useless comparison to SIZE_MAX. (In principle, there could be machines out there where it's not useless because ptrdiff_t is wider than size_t. But the whole thing is pretty academic anyway, as we could never approach this limit for any sane estimate of the amount of data that zic will ever be asked to work with.) Also, s/lineno/lineno_t/g, because if we accept their decision to start using "lineno" as a typedef, it is going to have very unpleasant consequences in our next pgindent run. Noted that while fooling with pltcl yesterday.
Remove duplicate macro definition.
commit : 56993cd17efc71f321c3aa09d93d03d11b83b8c6 author : Tom Lane <firstname.lastname@example.org> date : Sat, 5 Nov 2016 11:51:46 -0400 committer: Tom Lane <email@example.com> date : Sat, 5 Nov 2016 11:51:46 -0400
Seems to be a copy-and-pasteo. Odd that we heard no reports of compiler warnings about it. Thomas Munro
Back-patch portability fixes for contrib/pageinspect/ginfuncs.c.
commit : 56d34ba5fb919dd9ee2e7aeb20e6f00610198067 author : Tom Lane <firstname.lastname@example.org> date : Fri, 4 Nov 2016 12:37:29 -0400 committer: Tom Lane <email@example.com> date : Fri, 4 Nov 2016 12:37:29 -0400
Back-patch commits 84ad68d64 and 367b99bbb.
Sync our copy of the timezone library with IANA tzcode master.
commit : ac6fc1b55caa86fd7f90bf3f76b1d207aa4afcde author : Tom Lane <firstname.lastname@example.org> date : Fri, 4 Nov 2016 10:44:16 -0400 committer: Tom Lane <email@example.com> date : Fri, 4 Nov 2016 10:44:16 -0400
This patch absorbs some unreleased fixes for symlink manipulation bugs introduced in tzcode 2016g. Ordinarily I'd wait around for a released version, but in this case it seems like we could do with extra testing, in particular checking whether it works in EDB's VMware build environment. This corresponds to commit aec59156abbf8472ba201b6c7ca2592f9c10e077 in https://github.com/eggert/tz. Per a report from Sandeep Thakkar, building in an environment where hard links are not supported in the timezone data installation directory failed, because upstream code refactoring had broken the case of symlinking from an existing symlink. Further experimentation also showed that the symlinks were sometimes made incorrectly, with too many or too few "../"'s in the symlink contents. Back-patch of commit 1f87181e12beb067d21b79493393edcff14c190b. Report: <CANFyU94_p6mqRQc2i26PFp5QAOQGB++AjGX=FO8LDpXw0GSTjw@mail.gmail.com> Discussion: http://mm.icann.org/pipermail/tz/2016-November/024431.html
Fix portability bug in gin_page_opaque_info().
commit : af636d7b535cf1bcee59a60d22b712ecba5a400f author : Tom Lane <firstname.lastname@example.org> date : Wed, 2 Nov 2016 00:09:28 -0400 committer: Tom Lane <email@example.com> date : Wed, 2 Nov 2016 00:09:28 -0400
Somebody apparently thought that "if Int32GetDatum is good, Int64GetDatum must be better". Per buildfarm failures now that Peter has added some regression tests here.
Fix nasty performance problem in tsquery_rewrite().
commit : e0491c19d5cf7b398b45b620d238339737078109 author : Tom Lane <firstname.lastname@example.org> date : Sun, 30 Oct 2016 17:35:42 -0400 committer: Tom Lane <email@example.com> date : Sun, 30 Oct 2016 17:35:42 -0400
tsquery_rewrite() tries to find matches to subsets of AND/OR conditions; for example, in the query 'a | b | c' the substitution subquery 'a | c' should match and lead to replacement of the first and third items. That's fine, but the matching algorithm apparently takes about O(2^N) for an N-clause query (I say "apparently" because the code is also both unintelligible and uncommented). We could probably do better than that even without any extra assumptions --- but actually, we know that the subclauses are sorted, indeed are depending on that elsewhere in this very same function. So we can just scan the two lists a single time to detect matches, as though we were doing a merge join. Also do a re-flattening call (QTNTernary()) in tsquery_rewrite_query, just to make sure that the tree fits the expectations of the next search cycle. I didn't try to devise a test case for this, but I'm pretty sure that the oversight could have led to failure to match in some cases where a match would be expected. Improve comments, and also stick a CHECK_FOR_INTERRUPTS into dofindsubquery, just in case it's still too slow for somebody. Per report from Andreas Seltenreich. Back-patch to all supported branches. Discussion: <firstname.lastname@example.org>
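The merge-join idea described above can be illustrated with a short Python sketch (not the actual C code in tsquery_rewrite(), just the algorithmic shape): since both the query's subclauses and the substitution subquery's subclauses are kept sorted, one linear pass over the two lists suffices to decide whether every substitution item appears among the query items, instead of exploring exponentially many clause combinations.

```python
def contains_sorted_subset(query_items, sub_items):
    """Single-pass, merge-join-style check that every element of
    sub_items (sorted) also appears in query_items (sorted).
    Replaces an O(2^N) combinatorial search with an O(N) scan."""
    i = 0
    for sub in sub_items:
        # Advance through query_items until we reach or pass sub.
        while i < len(query_items) and query_items[i] < sub:
            i += 1
        if i >= len(query_items) or query_items[i] != sub:
            return False  # sub has no match; overall match fails
        i += 1  # consume the matched query item
    return True
```

For the example in the commit message, matching the substitution 'a | c' against the query 'a | b | c' reduces to `contains_sorted_subset(['a', 'b', 'c'], ['a', 'c'])`, which succeeds in one pass.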
Fix bogus tree-flattening logic in QTNTernary().
commit : de7387604bb695ab9f688f6cdaeb3efd3a221006 author : Tom Lane <email@example.com> date : Sun, 30 Oct 2016 15:24:40 -0400 committer: Tom Lane <firstname.lastname@example.org> date : Sun, 30 Oct 2016 15:24:40 -0400
QTNTernary() contains logic to flatten, eg, '(a & b) & c' into 'a & b & c', which is all well and good, but it tries to do that to NOT nodes as well, so that '!!a' gets changed to '!a'. Explicitly restrict the conversion to be done only on AND and OR nodes, and add a test case illustrating the bug. In passing, provide some comments for the sadly naked functions in tsquery_util.c, and simplify some baroque logic in QTNFree(), which I think may have been leaking some items it intended to free. Noted while investigating a complaint from Andreas Seltenreich. Back-patch to all supported versions.
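A minimal Python sketch of the corrected flattening rule (modeling tsquery nodes as `(operator, children)` tuples rather than the real QTNode structs): nested nodes of the same operator are merged into their parent, but only for the associative AND and OR operators; a NOT node is left alone, so '!!a' is not collapsed to '!a'.

```python
def flatten(node):
    """Flatten nested same-operator nodes, e.g. ('&', [('&', [a, b]), c])
    becomes ('&', [a, b, c]).  Only AND ('&') and OR ('|') are
    associative; NOT ('!') is deliberately excluded, fixing the bug
    where '!!a' was wrongly changed to '!a'."""
    if isinstance(node, str):
        return node  # leaf (a lexeme)
    op, children = node
    children = [flatten(c) for c in children]
    if op in ('&', '|'):
        merged = []
        for c in children:
            if isinstance(c, tuple) and c[0] == op:
                merged.extend(c[1])  # pull grandchild clauses up
            else:
                merged.append(c)
        return (op, merged)
    return (op, children)  # NOT and anything else: keep structure
```

With this restriction, `('&', [('&', ['a', 'b']), 'c'])` flattens to `('&', ['a', 'b', 'c'])`, while a double negation `('!', [('!', ['a'])])` comes back unchanged.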
Improve speed of aggregates that use array_append as transition function.
commit : 7151e72d7fe3b43a810583c6f3a90d6fcde61760 author : Tom Lane <email@example.com> date : Sun, 30 Oct 2016 12:27:41 -0400 committer: Tom Lane <firstname.lastname@example.org> date : Sun, 30 Oct 2016 12:27:41 -0400
In the previous coding, if an aggregate's transition function returned an expanded array, nodeAgg.c and nodeWindowAgg.c would always copy it and thus force it into the flat representation. This led to ping-ponging between flat and expanded formats, which costs a lot. For an aggregate using array_append as transition function, I measured about a 15X slowdown compared to the pre-9.5 code, when working on simple int arrays. Of course, the old code was already O(N^2) in this usage due to copying flat arrays all the time, but it wasn't quite this inefficient. To fix, teach nodeAgg.c and nodeWindowAgg.c to allow expanded transition values without copying, so long as the transition function takes care to return the transition value already properly parented under the aggcontext. That puts a bit of extra responsibility on the transition function, but doing it this way allows us to not need any extra logic in the fast path of advance_transition_function (ie, with a pass-by-value transition value, or with a modified-in-place pass-by-reference value). We already know that that's a hot spot so I'm loath to add any cycles at all there. Also, while only array_append currently knows how to follow this convention, this solution allows other transition functions to opt-in without needing to have a whitelist in the core aggregation code. (The reason we would need a whitelist is that currently, if you pass a R/W expanded-object pointer to an arbitrary function, it's allowed to do anything with it including deleting it; that breaks the core agg code's assumption that it should free discarded values. Returning a value under aggcontext is the transition function's signal that it knows it is an aggregate transition function and will play nice. Possibly the API rules for expanded objects should be refined, but that would not be a back-patchable change.) With this fix, an aggregate using array_append is no longer O(N^2), so it's much faster than pre-9.5 code rather than much slower. 
It's still a bit slower than the bespoke infrastructure for array_agg, but the differential seems to be only about 10%-20% rather than orders of magnitude. Discussion: <email@example.com>
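The cost difference can be sketched in Python, using a plain list to stand in for the expanded-array transition value (the real mechanics of aggcontext parenting and flat-vs-expanded datums are of course C-side details not modeled here):

```python
def agg_copy_each_step(values):
    """Old behavior: the transition value is flattened (copied) after
    every transition call, so building an N-element result does a
    full copy per input row -- O(N^2) total work."""
    state = []
    for v in values:
        state = list(state) + [v]  # fresh copy on every row
    return state

def agg_append_in_place(values):
    """Fixed behavior: the expanded transition value stays writable
    across calls, so each row is an O(1) amortized append and the
    whole aggregate is O(N)."""
    state = []
    for v in values:
        state.append(v)  # no per-row copy
    return state
```

Both produce the same final array; only the copying pattern differs, which is exactly the ping-ponging between flat and expanded formats that the fix eliminates.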
If the stats collector dies during Hot Standby, restart it.
commit : 0cbd199fd93ed01b0b2c3e120c5603b7f101efdf author : Robert Haas <firstname.lastname@example.org> date : Thu, 27 Oct 2016 14:27:40 -0400 committer: Robert Haas <email@example.com> date : Thu, 27 Oct 2016 14:27:40 -0400
This bug exists as far back as 9.0, when Hot Standby was introduced, so back-patch to all supported branches. Report and patch by Takayuki Tsunakawa, reviewed by Michael Paquier and Kuntal Ghosh.
Fix possible pg_basebackup failure on standby with "include WAL".
commit : ef18cb7da6ab0bde676ad8f7b17452d7cd8f7970 author : Robert Haas <firstname.lastname@example.org> date : Thu, 27 Oct 2016 11:19:51 -0400 committer: Robert Haas <email@example.com> date : Thu, 27 Oct 2016 11:19:51 -0400
If a restartpoint flushed no dirty buffers, it could fail to update the minimum recovery point, leading to a minimum recovery point prior to the starting REDO location. perform_base_backup() would interpret that as meaning that no WAL files at all needed to be included in the backup, failing an internal sanity check. To fix, have restartpoints always update the minimum recovery point to just after the checkpoint record itself, so that the file (or files) containing the checkpoint record will always be included in the backup. Code by Amit Kapila, per a design suggestion by me, with some additional work on the code comment by me. Test case by Michael Paquier. Report by Kyotaro Horiguchi.
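The essence of the fix can be sketched as follows (a simplification with hypothetical names and integer-valued LSNs; the real logic lives in the C checkpointer/restartpoint code and deals with actual XLogRecPtr values): even when a restartpoint flushed nothing, the minimum recovery point must still be advanced to at least the end of the checkpoint record, so it can never fall before the starting REDO location that perform_base_backup() relies on.

```python
def update_min_recovery_point(current_min, flushed_up_to, checkpoint_end):
    """Sketch of the fix: a restartpoint always pushes the minimum
    recovery point to just past the checkpoint record itself, even if
    no dirty buffers were flushed (flushed_up_to is None in that case).
    The minimum recovery point never moves backward."""
    candidate = checkpoint_end
    if flushed_up_to is not None:
        candidate = max(candidate, flushed_up_to)
    return max(current_min, candidate)
```

Under the old behavior the no-flush case left `current_min` untouched; here `update_min_recovery_point(100, None, 150)` still advances to 150, guaranteeing the WAL file containing the checkpoint record is included in the backup.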
Fix incorrect trigger-property updating in ALTER CONSTRAINT.
commit : b53c841e5e289a34fde848d44b3d0b2b0ed5cad8 author : Tom Lane <firstname.lastname@example.org> date : Wed, 26 Oct 2016 17:05:06 -0400 committer: Tom Lane <email@example.com> date : Wed, 26 Oct 2016 17:05:06 -0400
The code to change the deferrability properties of a foreign-key constraint updated all the associated triggers to match; but a moment's examination of the code that creates those triggers in the first place shows that only some of them should track the constraint's deferrability properties. This leads to odd failures in subsequent exercise of the foreign key, as the triggers are fired at the wrong times. Fix that, and add a regression test comparing the trigger properties produced by ALTER CONSTRAINT with those you get by creating the constraint as-intended to begin with. Per report from James Parks. Back-patch to 9.4 where this ALTER functionality was introduced. Report: <CAJ3Xv+jzJ8iNNUcp4RKW8b6Qp1xVAxHwSXVpjBNygjKxcVuE9w@mail.gmail.com>
Fix not-HAVE_SYMLINK code in zic.c.
commit : 59f5b61cce3cfdfca3dc1a3f68307a47156d6a97 author : Tom Lane <firstname.lastname@example.org> date : Wed, 26 Oct 2016 13:40:41 -0400 committer: Tom Lane <email@example.com> date : Wed, 26 Oct 2016 13:40:41 -0400
I broke this in commit f3094920a. Apparently it's dead code anyway, at least as far as our buildfarm is concerned (and the upstream IANA code doesn't worry at all about symlink() not being present). But as long as the rest of our code is willing to guard against not having symlink(), this should too. Noted while investigating a tangentially-related complaint from Sandeep Thakkar. Back-patch to keep branches in sync.
Doc: improve documentation about inheritance.
commit : afb10b3b298c1012908a7d0f491a6a852c64e38d author : Tom Lane <firstname.lastname@example.org> date : Wed, 26 Oct 2016 11:46:26 -0400 committer: Tom Lane <email@example.com> date : Wed, 26 Oct 2016 11:46:26 -0400
Clarify documentation about inheritance of check constraints, in particular mentioning the NO INHERIT option, which didn't exist when this text was written. Document that in an inherited query, the applicable row security policies are those of the explicitly-named table, not its children. This is the intended behavior (per off-list discussion with Stephen Frost), and there are regression tests for it, but it wasn't documented anywhere user-facing as far as I could find. Do a bit of wordsmithing on the description of inherited access-privilege checks. Back-patch to 9.5 where RLS was added.