commit : a1efb790fb99e989e5cd3ff5ae8cc6df3e250516 author : Tom Lane <email@example.com> date : Mon, 8 Feb 2016 16:15:19 -0500 committer: Tom Lane <firstname.lastname@example.org> date : Mon, 8 Feb 2016 16:15:19 -0500
commit : b101dca62b38a374dcb1dd4232f6ade9f7390cfc author : Peter Eisentraut <email@example.com> date : Mon, 8 Feb 2016 14:39:08 -0500 committer: Peter Eisentraut <firstname.lastname@example.org> date : Mon, 8 Feb 2016 14:39:08 -0500
Source-Git-URL: git://git.postgresql.org/git/pgtranslation/messages.git Source-Git-Hash: 97f0f075b2d3e9dac26db78dbd79c32d80eb8f33
Last-minute updates for release notes.
commit : 5e54757d41ad8b6e7fc2ab4961055c3872430cc4 author : Tom Lane <email@example.com> date : Mon, 8 Feb 2016 10:49:37 -0500 committer: Tom Lane <firstname.lastname@example.org> date : Mon, 8 Feb 2016 10:49:37 -0500
Fix some regex issues with out-of-range characters and large char ranges.
commit : fdc3139e2bd7d7c8b8e530e48b78bba48b72e9a1 author : Tom Lane <email@example.com> date : Mon, 8 Feb 2016 10:25:40 -0500 committer: Tom Lane <firstname.lastname@example.org> date : Mon, 8 Feb 2016 10:25:40 -0500
Previously, our regex code defined CHR_MAX as 0xfffffffe, which is a bad choice because it is outside the range of type "celt" (int32). Characters approaching that limit could lead to infinite loops in logic such as "for (c = a; c <= b; c++)" where c is of type celt but the range bounds are chr. Such loops will work safely only if CHR_MAX+1 is representable in celt, since c must advance to beyond b before the loop will exit.

Fortunately, there seems no reason not to restrict CHR_MAX to 0x7ffffffe. It's highly unlikely that Unicode will ever assign codes that high, and none of our other backend encodings need characters beyond that either.

In addition to modifying the macro, we have to explicitly enforce character range restrictions on the values of \u, \U, and \x escape sequences, else the limit is trivially bypassed.

Also, the code for expanding case-independent character ranges in bracket expressions had a potential integer overflow in its calculation of the number of characters it could generate, which could lead to allocating too small a character vector and then overwriting memory. An attacker with the ability to supply arbitrary regex patterns could easily cause transient DOS via server crashes, and the possibility for privilege escalation has not been ruled out.

Quite aside from the integer-overflow problem, the range expansion code was unnecessarily inefficient in that it always produced a result consisting of individual characters, abandoning the knowledge that we had a range to start with. If the input range is large, this requires excessive memory. Change it so that the original range is reported as-is, and then we add on any case-equivalent characters that are outside that range. With this approach, we can bound the number of individual characters allowed without sacrificing much.
This patch allows at most 100000 individual characters, which I believe to be more than the number of case pairs existing in Unicode, so that the restriction will never be hit in practice.

It's still possible for range() to take a while given a large character code range, so also add statement-cancel detection to its loop. The downstream function dovec() also lacked cancel detection, and could take a long time given a large output from range().

Per fuzz testing by Greg Stark. Back-patch to all supported branches.

Security: CVE-2016-0773
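The loop hazard described above can be sketched outside the regex code. This is an illustrative stand-in, not PostgreSQL source: uint8_t plays the role of celt/chr, and 255 plays the role of a CHR_MAX whose successor is not representable in the counter's type.

```c
#include <stdint.h>

/* Counts the characters in the inclusive range [a, b].
 * The naive form "for (c = a; c <= b; c++)" never terminates when
 * b is the type's maximum value: c wraps around to 0 and "c <= b"
 * stays true forever.  Testing for equality before the increment
 * means c never has to advance beyond b, so no wraparound occurs. */
unsigned count_range(uint8_t a, uint8_t b)
{
    unsigned n = 0;

    for (uint8_t c = a;; c++)
    {
        n++;
        if (c == b)             /* exit before c would have to pass b */
            break;
    }
    return n;
}
```

Restricting CHR_MAX to 0x7ffffffe gives celt the same property: CHR_MAX+1 is still representable as an int32, so the naive "c <= b" loop form stays safe.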
Backpatch of 7a58d19b0 to 9.4, previously omitted.
commit : 33b26426ebe993b0f59e9b7683db2dcf2f7ad2dd author : Andres Freund <email@example.com> date : Mon, 8 Feb 2016 11:10:14 +0100 committer: Andres Freund <firstname.lastname@example.org> date : Mon, 8 Feb 2016 11:10:14 +0100
Apparently by accident the above commit was backpatched to all supported branches, except 9.4. This appears to be an error, as the issue is just as present there. Given the short amount of time before the next minor release, and given the issue is documented to be fixed for 9.4, it seems like a good idea to push this now. Original-Author: Michael Meskes Discussion: 75DB81BEEA95B445AE6D576A0A5C9E9364CBC11F@BPXM05GP.gisp.nec.co.jp
Improve documentation about PRIMARY KEY constraints.
commit : 73ed2a5607b88b018c3e1cbdc9ea4623695a7743 author : Tom Lane <email@example.com> date : Sun, 7 Feb 2016 16:02:44 -0500 committer: Tom Lane <firstname.lastname@example.org> date : Sun, 7 Feb 2016 16:02:44 -0500
Get rid of the false implication that PRIMARY KEY is exactly equivalent to UNIQUE + NOT NULL. That was more-or-less true at one time in our implementation, but the standard doesn't say that, and we've grown various features (many of them required by spec) that treat a pkey differently from less-formal constraints. Per recent discussion on pgsql-general. I failed to resist the temptation to do some other wordsmithing in the same area.
Release notes for 9.5.1, 9.4.6, 9.3.11, 9.2.15, 9.1.20.
commit : 282a62e2fb02497accb98acca2ee95caf54416e2 author : Tom Lane <email@example.com> date : Sun, 7 Feb 2016 14:16:31 -0500 committer: Tom Lane <firstname.lastname@example.org> date : Sun, 7 Feb 2016 14:16:31 -0500
Force certain "pljava" custom GUCs to be PGC_SUSET.
commit : ed6deeb7a0dd9a8636309d0d8f6033db9fcc55ab author : Noah Misch <email@example.com> date : Fri, 5 Feb 2016 20:22:51 -0500 committer: Noah Misch <firstname.lastname@example.org> date : Fri, 5 Feb 2016 20:22:51 -0500
Future PL/Java versions will close CVE-2016-0766 by making these GUCs PGC_SUSET. This PostgreSQL change independently mitigates that PL/Java vulnerability, helping sites that update PostgreSQL more frequently than PL/Java. Back-patch to 9.1 (all supported versions).
Update time zone data files to tzdata release 2016a.
commit : 31b792f61448c862bd32dbb4fa9dfd528fda646c author : Tom Lane <email@example.com> date : Fri, 5 Feb 2016 10:59:09 -0500 committer: Tom Lane <firstname.lastname@example.org> date : Fri, 5 Feb 2016 10:59:09 -0500
DST law changes in Cayman Islands, Metlakatla, Trans-Baikal Territory (Zabaykalsky Krai). Historical corrections for Pakistan.
postgres_fdw: Avoid possible misbehavior when RETURNING tableoid column only.
commit : 2099b911d75738e18749e89019bf75f20dfde4c1 author : Robert Haas <email@example.com> date : Thu, 4 Feb 2016 22:15:50 -0500 committer: Robert Haas <firstname.lastname@example.org> date : Thu, 4 Feb 2016 22:15:50 -0500
deparseReturningList ended up adding RETURNING NULL to the deparsed query, but code elsewhere saw an empty list of attributes and concluded that it should not expect tuples from the remote side. Etsuro Fujita and Robert Haas, reviewed by Thom Brown
When modifying a foreign table, initialize tableoid field properly.
commit : 1f3294c22f614da74dd98a2ef69137bfa9135c96 author : Robert Haas <email@example.com> date : Thu, 4 Feb 2016 21:15:57 -0500 committer: Robert Haas <firstname.lastname@example.org> date : Thu, 4 Feb 2016 21:15:57 -0500
Failure to do this can cause AFTER ROW triggers or RETURNING expressions that reference this field to misbehave. Etsuro Fujita, reviewed by Thom Brown
In pg_dump, ensure that view triggers are processed after view rules.
commit : 411e2b0d59947b8fce39906b4722781895e31b1e author : Tom Lane <email@example.com> date : Thu, 4 Feb 2016 00:26:10 -0500 committer: Tom Lane <firstname.lastname@example.org> date : Thu, 4 Feb 2016 00:26:10 -0500
If a view is split into CREATE TABLE + CREATE RULE to break a circular dependency, then any triggers on the view must be dumped/reloaded after the CREATE RULE; else the backend may reject the CREATE TRIGGER because it's the wrong type of trigger for a plain table.

This works all right in plain dump/restore because of pg_dump's sorting heuristic that places triggers after rules. However, when using parallel restore, the ordering must be enforced by a dependency --- and we didn't have one.

Fixing this is a mere matter of adding an addObjectDependency() call, except that we need to be able to find all the triggers belonging to the view relation, and there was no easy way to do that. Add fields to pg_dump's TableInfo struct to remember where the associated TriggerInfo struct(s) are.

Per bug report from Dennis Kögel. The failure can be exhibited at least as far back as 9.1, so back-patch to all supported branches.
Add hstore_to_jsonb() and hstore_to_jsonb_loose() to hstore documentation.
commit : c27fda69c919d3fcde5f4dfef11747f1584ed123 author : Tom Lane <email@example.com> date : Wed, 3 Feb 2016 12:56:40 -0500 committer: Tom Lane <firstname.lastname@example.org> date : Wed, 3 Feb 2016 12:56:40 -0500
These were never documented anywhere user-visible. Tut tut.
pgbench: Install guard against overflow when dividing by -1.
commit : c33d1a8d5266d345bf777b1a9256cb7155d7e67f author : Robert Haas <email@example.com> date : Wed, 3 Feb 2016 09:15:29 -0500 committer: Robert Haas <firstname.lastname@example.org> date : Wed, 3 Feb 2016 09:15:29 -0500
Commit 64f5edca2401f6c2f23564da9dd52e92d08b3a20 fixed the same hazard on master; this is a backport, but the modulo operator does not exist in older releases. Michael Paquier
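The hazard being guarded against is the classic C trap: INT64_MIN / -1 mathematically yields 2^63, which does not fit in int64_t, and on most hardware the division instruction traps. A minimal guard of the shape the commit describes, written here as a freestanding sketch rather than the actual pgbench code:

```c
#include <stdbool.h>
#include <stdint.h>

/* Divides lval by rval into *result, returning false instead of
 * executing either undefined case: division by zero, and
 * INT64_MIN / -1, whose true result is not representable. */
bool safe_div64(int64_t lval, int64_t rval, int64_t *result)
{
    if (rval == 0)
        return false;
    if (rval == -1)
    {
        if (lval == INT64_MIN)
            return false;       /* -INT64_MIN overflows int64_t */
        *result = -lval;        /* negate; avoid the division entirely */
        return true;
    }
    *result = lval / rval;
    return true;
}
```

The special-casing of rval == -1 is what makes the guard complete; checking only for zero still leaves the trapping case reachable.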
Fix IsValidJsonNumber() to notice trailing non-alphanumeric garbage.
commit : aa223a037be2935dd6e335d94550dc3f53262479 author : Tom Lane <email@example.com> date : Wed, 3 Feb 2016 01:39:08 -0500 committer: Tom Lane <firstname.lastname@example.org> date : Wed, 3 Feb 2016 01:39:08 -0500
Commit e09996ff8dee3f70 was one brick shy of a load: it didn't insist that the detected JSON number be the whole of the supplied string. This allowed inputs such as "2016-01-01" to be misdetected as valid JSON numbers. Per bug #13906 from Dmitry Ryabov. In passing, be more wary of zero-length input (I'm not sure this can happen given current callers, but better safe than sorry), and do some minor cosmetic cleanup.
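The "one brick shy" failure mode is easy to reproduce in miniature: a scanner that accepts a leading numeric prefix but forgets to insist on end-of-input will pass strings like "2016-01-01". The following simplified checker (a hypothetical sketch, not the actual IsValidJsonNumber) shows the required final test:

```c
#include <ctype.h>
#include <stdbool.h>
#include <stddef.h>

/* Accepts an optionally signed integer with an optional fraction,
 * e.g. "42" or "-3.14".  Rejects empty input and, crucially, any
 * trailing bytes after the number ("2016-01-01", "1.5x"). */
bool is_simple_number(const char *s, size_t len)
{
    const char *end = s + len;

    if (len == 0)               /* be wary of zero-length input */
        return false;
    if (*s == '-')
        s++;
    if (s == end || !isdigit((unsigned char) *s))
        return false;
    while (s < end && isdigit((unsigned char) *s))
        s++;
    if (s < end && *s == '.')
    {
        s++;
        if (s == end || !isdigit((unsigned char) *s))
            return false;
        while (s < end && isdigit((unsigned char) *s))
            s++;
    }
    return s == end;            /* the number must be ALL of the input */
}
```

Without the final `s == end` test, "2016-01-01" is misdetected as a number because its "2016" prefix scans cleanly.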
Fix pg_description entries for jsonb_to_record() and jsonb_to_recordset().
commit : 95a2cca93088fd9f0bdf71b090d2fe00a355257d author : Tom Lane <email@example.com> date : Tue, 2 Feb 2016 11:39:50 -0500 committer: Tom Lane <firstname.lastname@example.org> date : Tue, 2 Feb 2016 11:39:50 -0500
All the other jsonb function descriptions refer to the arguments as being "jsonb", but these two said "json". Make it consistent. Per bug #13905 from Petru Florin Mihancea. No catversion bump --- we can't force one in the back branches, and this isn't very critical anyway.
Fix error in documented use of mingw-w64 compilers
commit : e76281e8e9effe24b805900a6e68ef1a087af655 author : Andrew Dunstan <email@example.com> date : Sat, 30 Jan 2016 19:28:44 -0500 committer: Andrew Dunstan <firstname.lastname@example.org> date : Sat, 30 Jan 2016 19:28:44 -0500
Error reported by Igal Sapir.
Fix incorrect pattern-match processing in psql's \det command.
commit : 5849b6e32404079c122d013d14c3fc2d197c81d7 author : Tom Lane <email@example.com> date : Fri, 29 Jan 2016 10:28:02 +0100 committer: Tom Lane <firstname.lastname@example.org> date : Fri, 29 Jan 2016 10:28:02 +0100
listForeignTables' invocation of processSQLNamePattern did not match up with the other ones that handle potentially-schema-qualified names; it failed to make use of pg_table_is_visible() and also passed the name arguments in the wrong order. Bug seems to have been aboriginal in commit 0d692a0dc9f0e532. It accidentally sort of worked as long as you didn't inquire too closely into the behavior, although the silliness was later exposed by inconsistencies in the test queries added by 59efda3e50ca4de6 (which I probably should have questioned at the time, but didn't). Per bug #13899 from Reece Hart. Patch by Reece Hart and Tom Lane. Back-patch to all affected branches.
Fix syntax descriptions for replication commands in logicaldecoding.sgml
commit : 280d05ca6bb88d0a760cdadbf59ed2a6e71df8f0 author : Fujii Masao <email@example.com> date : Fri, 29 Jan 2016 12:14:56 +0900 committer: Fujii Masao <firstname.lastname@example.org> date : Fri, 29 Jan 2016 12:14:56 +0900
Patch-by: Oleksandr Shulgin Reviewed-by: Craig Ringer and Fujii Masao Backpatch-through: 9.4 where logical decoding was introduced
Fix startup so that log prefix %h works for the log_connections message.
commit : 2b3983158726be1b2494208786506dfccd900d47 author : Tom Lane <email@example.com> date : Tue, 26 Jan 2016 15:38:33 -0500 committer: Tom Lane <firstname.lastname@example.org> date : Tue, 26 Jan 2016 15:38:33 -0500
We entirely randomly chose to initialize port->remote_host just after printing the log_connections message, when we could perfectly well do it just before, allowing %h and %r to work for that message. Per gripe from Artem Tomyuk.
Properly install dynloader.h on MSVC builds
commit : 8b3d5280125f25c9eee5531fe5ff059f5e922da5 author : Bruce Momjian <email@example.com> date : Tue, 19 Jan 2016 23:30:29 -0500 committer: Bruce Momjian <firstname.lastname@example.org> date : Tue, 19 Jan 2016 23:30:29 -0500
This will enable PL/Java to be cleanly compiled, as dynloader.h is a requirement. Report by Chapman Flack Patch by Michael Paquier Backpatch through 9.1
Fix spelling mistake.
commit : fc5d5e9de7b9f972633569fd6932b0582630671f author : Robert Haas <email@example.com> date : Thu, 14 Jan 2016 23:12:05 -0500 committer: Robert Haas <firstname.lastname@example.org> date : Thu, 14 Jan 2016 23:12:05 -0500
Same patch submitted independently by David Rowley and Peter Geoghegan.
Properly close token in sspi authentication
commit : ab49f87d5cd4f70fdfe606449ebe1369aa6cbb61 author : Magnus Hagander <email@example.com> date : Thu, 14 Jan 2016 13:06:03 +0100 committer: Magnus Hagander <firstname.lastname@example.org> date : Thu, 14 Jan 2016 13:06:03 +0100
We can never leak more than one token, but we shouldn't do that. We don't bother closing it in the error paths since the process will exit shortly anyway. Christian Ullrich
Handle extension members when first setting object dump flags in pg_dump.
commit : 7393208b512545034061edce5d124090270f8bac author : Tom Lane <email@example.com> date : Wed, 13 Jan 2016 18:55:27 -0500 committer: Tom Lane <firstname.lastname@example.org> date : Wed, 13 Jan 2016 18:55:27 -0500
pg_dump's original approach to handling extension member objects was to run around and clear (or set) their dump flags rather late in its data collection process. Unfortunately, quite a lot of code expects those flags to be valid before that; which was an entirely reasonable expectation before we added extensions.

In particular, this explains Karsten Hilbert's recent report of pg_upgrade failing on a database in which an extension has been installed into the pg_catalog schema. Its objects are initially marked as not-to-be-dumped on the strength of their schema, and later we change them to must-dump because we're doing a binary upgrade of their extension; but we've already skipped essential tasks like making associated DO_SHELL_TYPE objects.

To fix, collect extension membership data first, and incorporate it in the initial setting of the dump flags, so that those are once again correct from the get-go. This has the undesirable side effect of slightly lengthening the time taken before pg_dump acquires table locks, but testing suggests that the increase in that window is not very much.

Along the way, get rid of ugly special-case logic for deciding whether to dump procedural languages, FDWs, and foreign servers; dump decisions for those are now correct up-front, too. In 9.3 and up, this also fixes erroneous logic about when to dump event triggers (basically, they were *always* dumped before). In 9.5 and up, transform objects had that problem too.

Since this problem came in with extensions, back-patch to all supported versions.
Avoid dump/reload problems when using both plpython2 and plpython3.
commit : 22815752effdec0df091244563b4398bd36a91b0 author : Tom Lane <email@example.com> date : Mon, 11 Jan 2016 19:55:40 -0500 committer: Tom Lane <firstname.lastname@example.org> date : Mon, 11 Jan 2016 19:55:40 -0500
Commit 803716013dc1350f installed a safeguard against loading plpython2 and plpython3 at the same time, but asserted that both could still be used in the same database, just not in the same session. However, that's not actually all that practical because dumping and reloading will fail (since both libraries necessarily get loaded into the restoring session). pg_upgrade is even worse, because it checks for missing libraries by loading every .so library mentioned in the entire installation into one session, so that you can have only one across the whole cluster.

We can improve matters by not throwing the error immediately in _PG_init, but only when and if we're asked to do something that requires calling into libpython. This ameliorates both of the above situations, since while execution of CREATE LANGUAGE, CREATE FUNCTION, etc will result in loading plpython, it isn't asked to do anything interesting (at least not if check_function_bodies is off, as it will be during a restore).

It's possible that this opens some corner-case holes in which a crash could be provoked with sufficient effort. However, since plpython only exists as an untrusted language, any such crash would require superuser privileges, making it "don't do that" not a security issue. To reduce the hazards in this area, the error is still FATAL when it does get thrown.

Per a report from Paul Jones. Back-patch to 9.2, which is as far back as the patch applies without work. (It could be made to work in 9.1, but given the lack of previous complaints, I'm disinclined to expend effort so far back. We've been pretty desultory about support for Python 3 in 9.1 anyway.)
Clean up some lack-of-STRICT issues in the core code, too.
commit : 78b7aaaacb98b9144ab2668b988a9294dbf102ca author : Tom Lane <email@example.com> date : Sat, 9 Jan 2016 16:58:32 -0500 committer: Tom Lane <firstname.lastname@example.org> date : Sat, 9 Jan 2016 16:58:32 -0500
A scan for missed proisstrict markings in the core code turned up these functions:

brin_summarize_new_values
pg_stat_reset_single_table_counters
pg_stat_reset_single_function_counters
pg_create_logical_replication_slot
pg_create_physical_replication_slot
pg_drop_replication_slot

The first three of these take OID, so a null argument will normally look like a zero to them, resulting in "ERROR: could not open relation with OID 0" for brin_summarize_new_values, and no action for the pg_stat_reset_XXX functions. The other three will dump core on a null argument, though this is mitigated by the fact that they won't do so until after checking that the caller is superuser or has rolreplication privilege.

In addition, the pg_logical_slot_get/peek[_binary]_changes family was intentionally marked nonstrict, but failed to make nullness checks on all the arguments; so again a null-pointer-dereference crash is possible but only for superusers and rolreplication users.

Add the missing ARGISNULL checks to the latter functions, and mark the former functions as strict in pg_proc. Make that change in the back branches too, even though we can't force initdb there, just so that installations initdb'd in future won't have the issue. Since none of these bugs rise to the level of security issues (and indeed the pg_stat_reset_XXX functions hardly misbehave at all), it seems sufficient to do this.

In addition, fix some order-of-operations oddities in the slot_get_changes family, mostly cosmetic, but not the part that moves the function's last few operations into the PG_TRY block. As it stood, there was significant risk for an error to exit without clearing historical information from the system caches.

The slot_get_changes bugs go back to 9.4 where that code was introduced. Back-patch appropriate subsets of the pg_proc changes into all active branches, as well.
Clean up code for widget_in() and widget_out().
commit : acbdda4dbf99117712fb9b699ae1e83a01faea53 author : Tom Lane <email@example.com> date : Sat, 9 Jan 2016 13:44:27 -0500 committer: Tom Lane <firstname.lastname@example.org> date : Sat, 9 Jan 2016 13:44:27 -0500
Given syntactically wrong input, widget_in() could call atof() with an indeterminate pointer argument, typically leading to a crash; or if it didn't do that, it might return a NULL pointer, which again would lead to a crash since old-style C functions aren't supposed to do things that way. Fix that by correcting the off-by-one syntax test and throwing a proper error rather than just returning NULL.

Also, since widget_in and widget_out have been marked STRICT for a long time, their tests for null inputs are just dead code; remove 'em. In the oldest branches, also improve widget_out to use snprintf not sprintf, just to be sure.

In passing, get rid of a long-since-useless sprintf into a local buffer that nothing further is done with, and make some other minor coding style cleanups.

In the intended regression-testing usage of these functions, none of this is very significant; but if the regression test database were left around in a production installation, these bugs could amount to a minor security hazard.

Piotr Stefaniak, Michael Paquier, and Tom Lane
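Both bugs, calling atof() on an indeterminate pointer and returning NULL from an old-style input function, stem from converting before validating. A hypothetical three-field parser (not the actual widget_in code) illustrating the safe shape: check each position before converting, and let the caller raise a proper error on false.

```c
#include <stdbool.h>
#include <stdlib.h>

#define WIDGET_NFIELDS 3

/* Splits "x,y,z" into exactly three doubles.  Returns false (for the
 * caller to turn into a real error) rather than ever handing strtod()
 * an unchecked pointer or producing a half-parsed result. */
bool parse_widget(const char *str, double out[WIDGET_NFIELDS])
{
    const char *p = str;

    for (int i = 0; i < WIDGET_NFIELDS; i++)
    {
        char *endp;

        out[i] = strtod(p, &endp);
        if (endp == p)          /* no number at this position */
            return false;
        p = endp;
        if (i < WIDGET_NFIELDS - 1)
        {
            if (*p != ',')      /* exactly two separators required */
                return false;
            p++;
        }
    }
    return *p == '\0';          /* and nothing after the last field */
}
```

An off-by-one in the separator count is exactly the kind of slip this structure makes visible: every field and every comma is checked in one place.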
Add STRICT to some C functions created by the regression tests.
commit : 831c22ba3c6c23f826e3c266d842daab32f65990 author : Tom Lane <email@example.com> date : Sat, 9 Jan 2016 13:02:54 -0500 committer: Tom Lane <firstname.lastname@example.org> date : Sat, 9 Jan 2016 13:02:54 -0500
These functions readily crash when passed a NULL input value. The tests themselves do not pass NULL values to them; but when the regression database is used as a basis for fuzz testing, they cause a lot of noise. Also, if someone were to leave a regression database lying about in a production installation, these would create a minor security hazard. Andreas Seltenreich
Fix unobvious interaction between -X switch and subdirectory creation.
commit : aa062597c5192fdd758900eb3740a5bd935feda9 author : Tom Lane <email@example.com> date : Thu, 7 Jan 2016 18:20:57 -0500 committer: Tom Lane <firstname.lastname@example.org> date : Thu, 7 Jan 2016 18:20:57 -0500
Turns out the only reason initdb -X worked is that pg_mkdir_p won't whine if you point it at something that's a symlink to a directory. Otherwise, the attempt to create pg_xlog/ just like all the other subdirectories would have failed. Let's be a little more explicit about what's happening. Oversight in my patch for bug #13853 (mea culpa for not testing -X ...)
Fix one more TAP test to use standard command-line argument ordering.
commit : 33b051293a40b138c1ac02cdd1425977977dcd1d author : Tom Lane <email@example.com> date : Thu, 7 Jan 2016 17:36:43 -0500 committer: Tom Lane <firstname.lastname@example.org> date : Thu, 7 Jan 2016 17:36:43 -0500
Commit 84c08a7649b8c6dd488dfe0e37ab017e8059cd33 should have been back-patched into 9.4, but was not, so this test continued to pass for the wrong reason there. Noted while investigating other failures.
Use plain mkdir() not pg_mkdir_p() to create subdirectories of PGDATA.
commit : 882c592d0c0cfc29a8265f382316c58cb8b81c66 author : Tom Lane <email@example.com> date : Thu, 7 Jan 2016 15:22:01 -0500 committer: Tom Lane <firstname.lastname@example.org> date : Thu, 7 Jan 2016 15:22:01 -0500
When we're creating subdirectories of PGDATA during initdb, we know darn well that the parent directory exists (or should exist) and that the new subdirectory doesn't (or shouldn't). There is therefore no need to use anything more complicated than mkdir(). Using pg_mkdir_p() just opens us up to unexpected failure modes, such as the one exhibited in bug #13853 from Nuri Boardman. It's not very clear why pg_mkdir_p() went wrong there, but it is clear that we didn't need to be trying to create parent directories in the first place. We're not even saving any code, as proven by the fact that this patch nets out at minus five lines. Since this is a response to a field bug report, back-patch to all branches.
Windows: Make pg_ctl reliably detect service status
commit : c7aca3d45b3dc97461be94c836a7deeeca4111b2 author : Alvaro Herrera <email@example.com> date : Thu, 7 Jan 2016 11:59:08 -0300 committer: Alvaro Herrera <firstname.lastname@example.org> date : Thu, 7 Jan 2016 11:59:08 -0300
pg_ctl is using isatty() to verify whether the process is running in a terminal, and if not it sends its output to Windows' Event Log ... which does the wrong thing when the output has been redirected to a pipe, as reported in bug #13592.

To fix, make pg_ctl use the code we already have to detect service-ness: in the master branch, move src/backend/port/win32/security.c to src/port (with suitable tweaks so that it runs properly in backend and frontend environments); pg_ctl already has access to pgport so it Just Works. In older branches, that's likely to cause trouble, so instead duplicate the required code in pg_ctl.c.

Author: Michael Paquier
Bug report and diagnosis: Egon Kocjan
Backpatch: all supported branches
Sort $(wildcard) output where needed for reproducible build output.
commit : 8c558b2e96ae608807d1e1167ebb0a5f1e1987bd author : Tom Lane <email@example.com> date : Tue, 5 Jan 2016 15:47:05 -0500 committer: Tom Lane <firstname.lastname@example.org> date : Tue, 5 Jan 2016 15:47:05 -0500
The order of inclusion of .o files makes a difference in linker output; not a functional difference, but still a bitwise difference, which annoys some packagers who would like reproducible builds. Report and patch by Christoph Berg
Fix treatment of *lpNumberOfBytesRecvd == 0: that's a completion condition.
commit : add6d821b778ad727277eca5562b444168fa6f66 author : Tom Lane <email@example.com> date : Mon, 4 Jan 2016 17:41:33 -0500 committer: Tom Lane <firstname.lastname@example.org> date : Mon, 4 Jan 2016 17:41:33 -0500
pgwin32_recv() has treated a non-error return of zero bytes from WSARecv() as being a reason to block ever since the current implementation was introduced in commit a4c40f140d23cefb. However, so far as one can tell from Microsoft's documentation, that is just wrong: what it means is graceful connection closure (in stream protocols) or receipt of a zero-length message (in message protocols), and neither case should result in blocking here.

The only reason the code worked at all was that control then fell into the retry loop, which did *not* treat zero bytes specially, so we'd get out after only wasting some cycles. But as of 9.5 we do not normally reach the retry loop and so the bug is exposed, as reported by Shay Rojansky and diagnosed by Andres Freund.

Remove the unnecessary test on the byte count, and rearrange the code in the retry loop so that it looks identical to the initial sequence.

Back-patch of commit 90e61df8130dc7051a108ada1219fb0680cb3eb6. The original plan was to apply this only to 9.5 and up, but after discussion and buildfarm testing, it seems better to back-patch. The noblock code path has been at risk of this problem since it was introduced (in 9.0); if it did happen in pre-9.5 branches, the symptom would be that a walsender would wait indefinitely rather than noticing a loss of connection. While we lack proof that the case has been seen in the field, it seems possible that it's happened without being reported.
Teach pg_dump to quote reloption values safely.
commit : aab4b73bdd52263c7c415cd1e0a246dbdeb7c667 author : Tom Lane <email@example.com> date : Sat, 2 Jan 2016 19:04:45 -0500 committer: Tom Lane <firstname.lastname@example.org> date : Sat, 2 Jan 2016 19:04:45 -0500
Commit c7e27becd2e6eb93 fixed this on the backend side, but we neglected the fact that several code paths in pg_dump were printing reloptions values that had not gotten massaged by ruleutils. Apply essentially the same quoting logic in those places, too.
Fix overly-strict assertions in spgtextproc.c.
commit : 1cd38408ba4e851eeccff6ffbba049a7a916c4e1 author : Tom Lane <email@example.com> date : Sat, 2 Jan 2016 16:24:50 -0500 committer: Tom Lane <firstname.lastname@example.org> date : Sat, 2 Jan 2016 16:24:50 -0500
spg_text_inner_consistent is capable of reconstructing an empty string to pass down to the next index level; this happens if we have an empty string coming in, no prefix, and a dummy node label. (In practice, what is needed to trigger that is insertion of a whole bunch of empty-string values.) Then, we will arrive at the next level with in->level == 0 and a non-NULL (but zero length) in->reconstructedValue, which is valid but the Assert tests weren't expecting it. Per report from Andreas Seltenreich. This has no impact in non-Assert builds, so should not be a problem in production, but back-patch to all affected branches anyway. In passing, remove a couple of useless variable initializations and shorten the code by not duplicating DatumGetPointer() calls.
Adjust back-branch release note description of commits a2a718b22 et al.
commit : 88ee25658536b93ea94cf90f176a0c3f2febf6e0 author : Tom Lane <email@example.com> date : Sat, 2 Jan 2016 15:29:03 -0500 committer: Tom Lane <firstname.lastname@example.org> date : Sat, 2 Jan 2016 15:29:03 -0500
As pointed out by Michael Paquier, recovery_min_apply_delay didn't exist in 9.0-9.3, making the release note text not very useful. Instead make it talk about recovery_target_xid, which did exist then. 9.0 is already out of support, but we can fix the text in the newer branches' copies of its release notes.
Update copyright for 2016
commit : 146b4cd8ca353bfeafced020c83f2777041f97c8 author : Bruce Momjian <email@example.com> date : Sat, 2 Jan 2016 13:33:39 -0500 committer: Bruce Momjian <firstname.lastname@example.org> date : Sat, 2 Jan 2016 13:33:39 -0500
Backpatch certain files through 9.1
Teach flatten_reloptions() to quote option values safely.
commit : f9b3b3fecc84a50a06a2efb2a71b226540db14bc author : Tom Lane <email@example.com> date : Fri, 1 Jan 2016 15:27:53 -0500 committer: Tom Lane <firstname.lastname@example.org> date : Fri, 1 Jan 2016 15:27:53 -0500
flatten_reloptions() supposed that it didn't really need to do anything beyond inserting commas between reloption array elements. However, in principle the value of a reloption could be nearly anything, since the grammar allows a quoted string there. Any restrictions on it would come from validity checking appropriate to the particular option, if any.

A reloption value that isn't a simple identifier or number could thus lead to dump/reload failures due to syntax errors in CREATE statements issued by pg_dump. We've gotten away with not worrying about this so far with the core-supported reloptions, but extensions might allow reloption values that cause trouble, as in bug #13840 from Kouhei Sutou.

To fix, split the reloption array elements explicitly, and then convert any value that doesn't look like a safe identifier to a string literal. (The details of the quoting rule could be debated, but this way is safe and requires little code.)

While we're at it, also quote reloption names if they're not safe identifiers; that may not be a likely problem in the field, but we might as well try to be bulletproof here.

It's been like this for a long time, so back-patch to all supported branches.

Kouhei Sutou, adjusted some by me
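The quoting rule the commit settles on can be sketched like so: pass identifier-ish values through untouched, and wrap anything else in single quotes, doubling embedded quotes. This is an illustrative stand-in under assumed rules (the "safe" character set here is a guess for demonstration), not the actual backend helper:

```c
#include <ctype.h>
#include <stdbool.h>
#include <string.h>

/* True if the value can appear unquoted in "WITH (name=value)":
 * a nonempty run of identifier-ish characters. */
static bool value_is_safe(const char *s)
{
    if (*s == '\0')
        return false;
    for (; *s; s++)
        if (!isalnum((unsigned char) *s) && *s != '_' && *s != '.' && *s != '-')
            return false;
    return true;
}

/* Writes the value into buf, single-quoting it (doubling embedded
 * quotes) unless it is safe as-is.  buf must hold 2*strlen(val)+3. */
void quote_reloption_value(char *buf, const char *val)
{
    char *d = buf;

    if (value_is_safe(val))
    {
        strcpy(buf, val);
        return;
    }
    *d++ = '\'';
    for (const char *s = val; *s; s++)
    {
        if (*s == '\'')
            *d++ = '\'';        /* '' is an escaped quote in SQL */
        *d++ = *s;
    }
    *d++ = '\'';
    *d = '\0';
}
```

With this in place, a value like `hello world` deparses as `'hello world'` and survives a dump/reload, instead of producing a syntax error in the CREATE statement.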
Add some more defenses against silly estimates to gincostestimate().
commit : 76eccf07bb40c36e06549c73dee4606cfef93318 author : Tom Lane <email@example.com> date : Fri, 1 Jan 2016 13:42:21 -0500 committer: Tom Lane <firstname.lastname@example.org> date : Fri, 1 Jan 2016 13:42:21 -0500
A report from Andy Colson showed that gincostestimate() was not being nearly paranoid enough about whether to believe the statistics it finds in the index metapage. The problem is that the metapage stats (other than the pending-pages count) are only updated by VACUUM, and in the worst case could still reflect the index's original empty state even when it has grown to many entries. We attempted to deal with that by scaling up the stats to match the current index size, but if nEntries is zero then scaling it up still gives zero. Moreover, the proportion of pages that are entry pages vs. data pages vs. pending pages is unlikely to be estimated very well by scaling if the index is now orders of magnitude larger than before.

We can improve matters by expanding the use of the rule-of-thumb estimates I introduced in commit 7fb008c5ee59b040: if the index has grown by more than a cutoff amount (here set at 4X growth) since VACUUM, then use the rule-of-thumb numbers instead of scaling. This might not be exactly right but it seems much less likely to produce insane estimates.

I also improved both the scaling estimate and the rule-of-thumb estimate to account for numPendingPages, since it's reasonable to expect that that is accurate in any case, and certainly pages that are in the pending list are not either entry or data pages.

As a somewhat separate issue, adjust the estimation equations that are concerned with extra fetches for partial-match searches. These equations suppose that a fraction partialEntries / numEntries of the entry and data pages will be visited as a consequence of a partial-match search. Now, it's physically impossible for that fraction to exceed one, but our estimate of partialEntries is mostly bunk, and our estimate of numEntries isn't exactly gospel either, so we could arrive at a silly value. In the example presented by Andy we were coming out with a value of 100, leading to insane cost estimates. Clamp the fraction to one to avoid that.
Like the previous patch, back-patch to all supported branches; this problem can be demonstrated in one form or another in all of them.
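The clamp described above can be sketched in a few lines of C. This is a hedged illustration, not the actual gincostestimate() code; the function name and shape are invented.

```c
#include <assert.h>

/*
 * Hypothetical sketch of the clamp described above: the fraction of
 * entry/data pages visited by a partial-match search is estimated as
 * partialEntries / numEntries, but since both inputs are rough guesses
 * the ratio can come out absurdly large.  Physically it cannot exceed
 * one, so clamp it there before it inflates the cost estimate.
 */
static double
partial_match_fraction(double partialEntries, double numEntries)
{
    double      frac;

    if (numEntries <= 0.0)
        return 0.0;             /* avoid division by zero on empty stats */
    frac = partialEntries / numEntries;
    return (frac > 1.0) ? 1.0 : frac;
}
```

Without the final clamp, a bogus partialEntries estimate (as in Andy's report) multiplies straight into the page-fetch cost.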
Put back one copyObject() in rewriteTargetView().
commit : 12e116a0050813ae1f8763ba986535440526b9a8 author : Tom Lane <email@example.com> date : Tue, 29 Dec 2015 17:06:04 -0500 committer: Tom Lane <firstname.lastname@example.org> date : Tue, 29 Dec 2015 17:06:04 -0500
Commit 6f8cb1e23485bd6d tried to centralize rewriteTargetView's copying of a target view's Query struct. However, it ignored the fact that the jointree->quals field was used twice. This only accidentally failed to fail immediately because the same ChangeVarNodes mutation is applied in both cases, so that we end up with logically identical expression trees for both uses (and, as the code stands, the second ChangeVarNodes call actually does nothing).

However, we end up linking *physically* identical expression trees into both an RTE's securityQuals list and the WithCheckOption list. That's pretty dangerous, mainly because prepsecurity.c is utterly cavalier about further munging such structures without copying them first.

There may be no live bug in HEAD as a consequence of the fact that we apply preprocess_expression in between here and prepsecurity.c, and that will make a copy of the tree anyway. Or it may just be that the regression tests happen to not trip over it. (I noticed this only because things fell over pretty badly when I tried to relocate the planner's call of expand_security_quals to before expression preprocessing.)

In any case it's very fragile because if anyone tried to make the securityQuals and WithCheckOption trees diverge before we reach preprocess_expression, it would not work. The fact that the current code will preprocess securityQuals and WithCheckOptions lists at completely different times in different query levels does nothing to increase my trust that that can't happen.

In view of the fact that 9.5.0 is almost upon us and the aforesaid commit has seen exactly zero field testing, the prudent course is to make an extra copy of the quals so that the behavior is not different from what has been in the field during beta.
Document the exponentiation operator as associating left to right.
commit : 3b3e8fc9cf395400fb9108b4e384ab9b6bf2ec14 author : Tom Lane <email@example.com> date : Mon, 28 Dec 2015 12:09:00 -0500 committer: Tom Lane <firstname.lastname@example.org> date : Mon, 28 Dec 2015 12:09:00 -0500
Common mathematical convention is that exponentiation associates right to left. We aren't going to change the parser for this, but we could note it in the operator's description. (It's already noted in the operator precedence/associativity table, but users might not look there.) Per bug #13829 from Henrik Pauli.
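The difference between the two conventions can be shown with a tiny integer-power helper. This is an illustration only, not PostgreSQL code: under the mathematical convention 2^3^2 means 2^(3^2) = 512, while PostgreSQL's parser evaluates ^ left to right, giving (2^3)^2 = 64.

```c
#include <assert.h>

/* tiny integer power helper, enough for the illustration */
static long
ipow(long base, long exp)
{
    long        result = 1;

    while (exp-- > 0)
        result *= base;
    return result;
}

/* a^(b^c): the usual mathematical (right-to-left) convention */
static long
pow_right_assoc(long a, long b, long c)
{
    return ipow(a, ipow(b, c));
}

/* (a^b)^c: how PostgreSQL's parser associates the ^ operator */
static long
pow_left_assoc(long a, long b, long c)
{
    return ipow(ipow(a, b), c);
}
```

The two results differ for most inputs, which is exactly why the documentation note matters.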
Update documentation about pseudo-types.
commit : aba82401dd04387bf17968b839039fe7cbec8f4e author : Tom Lane <email@example.com> date : Mon, 28 Dec 2015 11:04:42 -0500 committer: Tom Lane <firstname.lastname@example.org> date : Mon, 28 Dec 2015 11:04:42 -0500
Tone down an overly strong statement about which pseudo-types PLs are likely to allow. Add "event_trigger" to the list, as well as "pg_ddl_command" in 9.5/HEAD. Back-patch to 9.3 where event_trigger was added.
Fix translation domain in pg_basebackup
commit : f98bc20dd125bd356864d76e8b32fab2b2df51df author : Alvaro Herrera <email@example.com> date : Mon, 28 Dec 2015 10:50:35 -0300 committer: Alvaro Herrera <firstname.lastname@example.org> date : Mon, 28 Dec 2015 10:50:35 -0300
For some reason, we've been overlooking the fact that pg_receivexlog and pg_recvlogical have been using the wrong translation domains all along, so their output has never been translated. The right domain is pg_basebackup, not their own executable names.

Noticed by Ioseph Kim, who's been working on the Korean translation.

Backpatch pg_receivexlog to 9.2 and pg_recvlogical to 9.4.
Add forgotten CHECK_FOR_INTERRUPTS calls in pgcrypto's crypt()
commit : 0a29cf693d385a60fb37c1f396af6fd8ffadc2e6 author : Alvaro Herrera <email@example.com> date : Sun, 27 Dec 2015 13:03:19 -0300 committer: Alvaro Herrera <firstname.lastname@example.org> date : Sun, 27 Dec 2015 13:03:19 -0300
Both Blowfish and DES implementations of crypt() can take an arbitrarily long time, depending on the number of rounds specified by the caller; make sure they can be interrupted.

Author: Andreas Karlsson
Reviewer: Jeff Janes

Backpatch to 9.1.
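The fix's pattern can be sketched standalone. This is a hedged illustration, not the pgcrypto code: in the real fix the per-iteration check is PostgreSQL's CHECK_FOR_INTERRUPTS() macro; here a callback stands in for it so the loop is testable outside a backend.

```c
#include <assert.h>
#include <stddef.h>

/*
 * Sketch of an interruptible rounds loop.  A caller-controlled number of
 * rounds runs the expensive work, but each iteration first polls for a
 * pending cancel request -- analogous to CHECK_FOR_INTERRUPTS() -- so a
 * huge rounds value cannot wedge the process.  Returns rounds completed.
 */
static long
run_rounds(long rounds, int (*interrupt_pending)(void))
{
    long        done = 0;

    while (done < rounds)
    {
        if (interrupt_pending != NULL && interrupt_pending())
            break;              /* bail out promptly on cancel */
        /* ... one round of the expensive key-setup work would go here ... */
        done++;
    }
    return done;
}

/* demo hook: report an interrupt once five rounds have completed */
static long demo_calls = 0;

static int
demo_interrupt(void)
{
    return demo_calls++ >= 5;
}
```

Checking once per round keeps the overhead negligible relative to the hashing work while bounding cancel latency to a single round.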
Fix factual and grammatical errors in comments for struct _tableInfo.
commit : 70ff73717004912a27506f31ef3c6aef77d64708 author : Tom Lane <email@example.com> date : Thu, 24 Dec 2015 10:42:58 -0500 committer: Tom Lane <firstname.lastname@example.org> date : Thu, 24 Dec 2015 10:42:58 -0500
Amit Langote, further adjusted by me
In pg_dump, remember connection passwords no matter how we got them.
commit : f56802a2d06f5f995c7eb1513d8ef5b978a8471c author : Tom Lane <email@example.com> date : Wed, 23 Dec 2015 14:25:31 -0500 committer: Tom Lane <firstname.lastname@example.org> date : Wed, 23 Dec 2015 14:25:31 -0500
When pg_dump prompts the user for a password, it remembers the password for possible re-use by parallel worker processes. However, libpq might have extracted the password from a connection string originally passed as "dbname". Since we don't record the original form of dbname but break it down to host/port/etc, the password gets lost. Fix that by retrieving the actual password from the PGconn.

(It strikes me that this whole approach is rather broken, as it will also lose other information such as options that might have been present in the connection string. But we'll leave that problem for another day.)

In passing, get rid of rather silly use of malloc() for small fixed-size arrays.

Back-patch to 9.3 where parallel pg_dump was introduced.

Report and fix by Zeus Kronion, adjusted a bit by Michael Paquier and me
Rework internals of changing a type's ownership
commit : d07afa42db217e1ce46936fe9b2f18706b443c97 author : Alvaro Herrera <email@example.com> date : Mon, 21 Dec 2015 19:49:15 -0300 committer: Alvaro Herrera <firstname.lastname@example.org> date : Mon, 21 Dec 2015 19:49:15 -0300
This is necessary so that REASSIGN OWNED does the right thing with composite types, to wit, that it also alters ownership of the type's pg_class entry -- previously, the pg_class entry remained owned by the original user, which later caused other failures such as the new owner's inability to use ALTER TYPE to rename an attribute of the affected composite. Also, if the original owner is later dropped, the pg_class entry becomes owned by a non-existent user, which is bogus.

To fix, create a new routine AlterTypeOwner_oid which knows whether to pass the request to ATExecChangeOwner or deal with it directly, and use that in shdepReassignOwner rather than calling AlterTypeOwnerInternal directly. AlterTypeOwnerInternal is now simpler in that it only modifies the pg_type entry and recurses to handle a possible array type; higher-level tasks are handled by either AlterTypeOwner directly or AlterTypeOwner_oid.

I took the opportunity to add a few more objects to the test rig for REASSIGN OWNED, so that more cases are exercised. Additional ones could be added for superuser-only-ownable objects (such as FDWs and event triggers) but I didn't want to push my luck by adding a new superuser to the tests on a backpatchable bug fix.

Per bug #13666 reported by Chris Pacejo.

This is a backpatch of commit 756e7b4c9db1 to branches 9.1 -- 9.4.
adjust ACL owners for REASSIGN and ALTER OWNER TO
commit : 2c8ae6442fed47a28320ad470c30bb1b9a31e856 author : Alvaro Herrera <email@example.com> date : Mon, 21 Dec 2015 19:16:15 -0300 committer: Alvaro Herrera <firstname.lastname@example.org> date : Mon, 21 Dec 2015 19:16:15 -0300
When REASSIGN and ALTER OWNER TO are used, both the object owner and ACL list should be changed from the old owner to the new owner. This patch fixes types, foreign data wrappers, and foreign servers to change their ACL list properly; they already changed owners properly.

Report by Alexey Bashtanov

This is a backpatch of commit 59367fdf97c (for bug #9923) by Bruce Momjian to branches 9.1 - 9.4; it wasn't backpatched originally out of concerns that it would create a backwards compatibility problem, but per discussion related to bug #13666 that turns out to have been misguided. (Therefore, the entry in the 9.5 release notes should be removed.)

Note that 9.1 didn't have privileges on types (which were introduced by commit 729205571e81), so this commit only changes foreign-data related objects in that branch.

Discussion: http://www.postgresql.org/message-id/20151216224004.GL2618@alvherre.pgsql
http://email@example.com
Make viewquery a copy in rewriteTargetView()
commit : f02137da88c49125a20d7c51ae9932e47eb61f93 author : Stephen Frost <firstname.lastname@example.org> date : Mon, 21 Dec 2015 10:34:23 -0500 committer: Stephen Frost <email@example.com> date : Mon, 21 Dec 2015 10:34:23 -0500
Rather than expect the Query returned by get_view_query() to be read-only and then copy bits and pieces of it out, simply copy the entire structure when we get it. This addresses an issue where AcquireRewriteLocks, which is called by acquireLocksOnSubLinks(), scribbles on the parsetree passed in, which was actually an entry in relcache, leading to segfaults with certain view definitions. This also future-proofs us a bit for anyone adding more code to this path. The acquireLocksOnSubLinks() was added in commit c3e0ddd40. Back-patch to 9.3 as that commit was.
Remove silly completion for "DELETE FROM tabname ...".
commit : 590d201ef73c7f34f919b8fd9ecf17d81de25aa4 author : Tom Lane <firstname.lastname@example.org> date : Sun, 20 Dec 2015 18:29:51 -0500 committer: Tom Lane <email@example.com> date : Sun, 20 Dec 2015 18:29:51 -0500
psql offered USING, WHERE, and SET in this context, but SET is not a valid possibility here. Seems to have been a thinko in commit f5ab0a14ea83eb6c which added DELETE's USING option.
Fix tab completion for ALTER ... TABLESPACE ... OWNED BY.
commit : 9f749267c431f93f688c4187437a527a13ac2d8d author : Andres Freund <firstname.lastname@example.org> date : Sat, 19 Dec 2015 17:37:11 +0100 committer: Andres Freund <email@example.com> date : Sat, 19 Dec 2015 17:37:11 +0100
Previously the completion used the wrong word to match 'BY'. This was introduced brokenly, in b2de2a. While at it, also add completion of IN TABLESPACE ... OWNED BY and fix comments referencing nonexistent syntax.

Reported-By: Michael Paquier
Author: Michael Paquier and Andres Freund
Discussion: CAB7nPqSHDdSwsJqX0d2XzjqOHr==HdWiubCi4L=Zs7YFTUne8w@mail.gmail.com
Backpatch: 9.4, like the commit introducing the bug
Fix improper initialization order for readline.
commit : acb6c64f4a3628ea7005b3c492633a71e52c95ff author : Tom Lane <firstname.lastname@example.org> date : Thu, 17 Dec 2015 16:55:23 -0500 committer: Tom Lane <email@example.com> date : Thu, 17 Dec 2015 16:55:23 -0500
Turns out we must set rl_basic_word_break_characters *before* we call rl_initialize() the first time, because it will quietly copy that value elsewhere --- but only on the first call. (Love these undocumented dependencies.) I broke this yesterday in commit 2ec477dc8108339d; like that commit, back-patch to all active branches. Per report from Pavel Stehule.
Cope with Readline's failure to track SIGWINCH events outside of input.
commit : e168dfef609ca336049c14f2bcc0a957707cb93c author : Tom Lane <firstname.lastname@example.org> date : Wed, 16 Dec 2015 16:58:55 -0500 committer: Tom Lane <email@example.com> date : Wed, 16 Dec 2015 16:58:55 -0500
It emerges that libreadline doesn't notice terminal window size change events unless they occur while collecting input. This is easy to stumble over if you resize the window while using a pager to look at query output, but it can be demonstrated without any pager involvement. The symptom is that queries exceeding one line are misdisplayed during subsequent input cycles, because libreadline has the wrong idea of the screen dimensions.

The safest, simplest way to fix this is to call rl_reset_screen_size() just before calling readline(). That causes an extra ioctl(TIOCGWINSZ) for every command; but since it only happens when reading from a tty, the performance impact should be negligible.

A more valid objection is that this still leaves a tiny window during entry to readline() wherein delivery of SIGWINCH will be missed; but the practical consequences of that are probably negligible. In any case, there doesn't seem to be any good way to avoid the race, since readline exposes no functions that seem safe to call from a generic signal handler --- rl_reset_screen_size() certainly isn't.

It turns out that we also need an explicit rl_initialize() call, else rl_reset_screen_size() dumps core when called before the first readline() call.

rl_reset_screen_size() is not present in old versions of libreadline, so we need a configure test for that. (rl_initialize() is present at least back to readline 4.0, so we won't bother with a test for it.) We would need a configure test anyway since libedit's emulation of libreadline doesn't currently include such a function. Fortunately, libedit seems not to have any corresponding bug.

Merlin Moncure, adjusted a bit by me
Add missing CHECK_FOR_INTERRUPTS in lseg_inside_poly
commit : b9a46f8ba667556b7a9b34c8c36f5d465f3fc7a2 author : Alvaro Herrera <firstname.lastname@example.org> date : Mon, 14 Dec 2015 16:44:40 -0300 committer: Alvaro Herrera <email@example.com> date : Mon, 14 Dec 2015 16:44:40 -0300
Apparently, there are bugs in this code that cause it to loop endlessly. That bug still needs more research, but in the meantime it's clear that the loop is missing a check for interrupts so that it can be cancelled in a timely manner.

Backpatch to 9.1 -- this has been missing since 49475aab8d0d.
Fix out-of-memory error handling in ParameterDescription message processing.
commit : affae5e981d624735791ab897ff822118005ab44 author : Heikki Linnakangas <firstname.lastname@example.org> date : Mon, 14 Dec 2015 18:19:10 +0200 committer: Heikki Linnakangas <email@example.com> date : Mon, 14 Dec 2015 18:19:10 +0200
If libpq ran out of memory while constructing the result set, it would hang, waiting for more data from the server, which might never arrive. To fix, distinguish between out-of-memory error and not-enough-data cases, and give a proper error message back to the client on OOM. There are still similar issues in handling COPY start messages, but let's handle that as a separate patch. Michael Paquier, Amit Kapila and me. Backpatch to all supported versions.
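The core of the fix is returning distinct results for the two failure modes. A hedged sketch follows; the type and function names are invented for illustration and this is not libpq's code.

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

/*
 * Sketch of the distinction the fix draws: a message parser must not
 * report "need more data" when the real problem is that an allocation
 * failed, or the caller will wait forever for bytes from the server
 * that resolve nothing.  The allocator is injectable here only so the
 * OOM path can be exercised in a test.
 */
typedef enum ParseResult
{
    PARSE_OK,
    PARSE_NEED_MORE_DATA,       /* wait for more bytes from the server */
    PARSE_OUT_OF_MEMORY         /* report an error to the client now */
} ParseResult;

static ParseResult
parse_message(const char *buf, size_t have, size_t need,
              size_t alloc_bytes, void *(*alloc_fn) (size_t))
{
    void       *result;

    (void) buf;                 /* payload decoding elided in this sketch */
    if (have < need)
        return PARSE_NEED_MORE_DATA;
    result = alloc_fn(alloc_bytes);
    if (result == NULL)
        return PARSE_OUT_OF_MEMORY;     /* distinct from "need more data" */
    free(result);
    return PARSE_OK;
}

/* stand-in allocator that always fails, to exercise the OOM path */
static void *
failing_alloc(size_t n)
{
    (void) n;
    return NULL;
}
```

Collapsing both cases into one "failure" return is exactly what produced the hang: the caller's only recovery for that single code was to wait for more input.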
Correct statement to actually be the intended assert statement.
commit : 819aceaa0e7f7de07885fe8b136d0dd3aba1229a author : Andres Freund <firstname.lastname@example.org> date : Mon, 14 Dec 2015 11:25:02 +0100 committer: Andres Freund <email@example.com> date : Mon, 14 Dec 2015 11:25:02 +0100
e3f4cfc7 introduced a LWLockHeldByMe() call, without the corresponding Assert() surrounding it. Spotted by Coverity. Backpatch: 9.1+, like the previous commit
Docs: document that psql's "\i -" means read from stdin.
commit : 38a4a42197f2b9fd237fc9b3958e602f25ffcd30 author : Tom Lane <firstname.lastname@example.org> date : Sun, 13 Dec 2015 23:42:54 -0500 committer: Tom Lane <email@example.com> date : Sun, 13 Dec 2015 23:42:54 -0500
This has worked that way for a long time, maybe always, but you would not have known it from the documentation. Also back-patch the notes I added to HEAD earlier today about behavior of the "-f -" switch, which likewise have been valid for many releases.
Properly initialize write, flush and replay locations in walsender slots
commit : 61c7bee2196d284a19692d83b33eac7588693292 author : Magnus Hagander <firstname.lastname@example.org> date : Sun, 13 Dec 2015 16:40:37 +0100 committer: Magnus Hagander <email@example.com> date : Sun, 13 Dec 2015 16:40:37 +0100
These would leak random xlog positions if a walsender used for backup would use a walsender slot previously used by a replication walsender.

In passing also fix a couple of cases where the xlog pointer is directly compared to zero instead of using XLogRecPtrIsInvalid, noted by Michael Paquier.
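Both points can be sketched with simplified types. This is an illustration, not the walsender code; the struct and function names are invented.

```c
#include <assert.h>
#include <stdint.h>

/*
 * Sketch of the two fixes: reset a reused slot's locations to a defined
 * "invalid" value rather than leaving stale positions behind, and test
 * validity through a macro instead of comparing against literal zero.
 */
typedef uint64_t XLogRecPtr;

#define InvalidXLogRecPtr       ((XLogRecPtr) 0)
#define XLogRecPtrIsInvalid(r)  ((r) == InvalidXLogRecPtr)

typedef struct WalSndSlot
{
    XLogRecPtr  write_loc;
    XLogRecPtr  flush_loc;
    XLogRecPtr  apply_loc;
} WalSndSlot;

static void
slot_init(WalSndSlot *slot)
{
    /* without this, a reused slot leaks the previous xlog positions */
    slot->write_loc = InvalidXLogRecPtr;
    slot->flush_loc = InvalidXLogRecPtr;
    slot->apply_loc = InvalidXLogRecPtr;
}

/* demo: a slot carrying stale values from a prior replication walsender */
static int
demo_reuse(void)
{
    WalSndSlot  slot = {123, 456, 789};

    slot_init(&slot);
    return XLogRecPtrIsInvalid(slot.write_loc) &&
        XLogRecPtrIsInvalid(slot.flush_loc) &&
        XLogRecPtrIsInvalid(slot.apply_loc);
}
```

Going through the macro also means the representation of "invalid" can change without touching every comparison site.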
Doc: update external URLs for PostGIS project.
commit : 6bdef1369f01d48107a7cde8bf2ca8e2d4fdb73e author : Tom Lane <firstname.lastname@example.org> date : Sat, 12 Dec 2015 20:02:09 -0500 committer: Tom Lane <email@example.com> date : Sat, 12 Dec 2015 20:02:09 -0500
Fix ALTER TABLE ... SET TABLESPACE for unlogged relations.
commit : d638aeef607981a7a5f4da64dc5b2f717df873cf author : Andres Freund <firstname.lastname@example.org> date : Sat, 12 Dec 2015 14:19:23 +0100 committer: Andres Freund <email@example.com> date : Sat, 12 Dec 2015 14:19:23 +0100
Changing the tablespace of an unlogged relation did not WAL log the creation and content of the init fork. Thus, after a standby is promoted, unlogged relations cannot be accessed anymore, with errors like:

ERROR: 58P01: could not open file "pg_tblspc/...": No such file or directory

Additionally the init fork was not synced to disk, independent of the configured wal_level, a relatively small durability risk.

Investigation of that problem also brought to light that, even for permanent relations, the creation of !main forks was not WAL logged, i.e. no XLOG_SMGR_CREATE records were emitted. That mostly turns out not to be a problem, because these files were created when the actual relation data is copied; nonexistent files are not treated as an error condition during replay. But that doesn't work for empty files, and generally feels a bit haphazard. Luckily, outside init and main forks, empty forks don't occur often or are not a problem.

Add the required WAL logging and syncing to disk.

Reported-By: Michael Paquier
Author: Michael Paquier and Andres Freund
Discussion: 20151210163230.GA11331@alap3.anarazel.de
Backpatch: 9.1, where unlogged relations were introduced
Add an expected-file to match behavior of latest libxml2.
commit : 09824cd9961661a88aa0a04d70df53f66d47b38a author : Tom Lane <firstname.lastname@example.org> date : Fri, 11 Dec 2015 19:08:40 -0500 committer: Tom Lane <email@example.com> date : Fri, 11 Dec 2015 19:08:40 -0500
Recent releases of libxml2 do not provide error context reports for errors detected at the very end of the input string. This appears to be a bug, or at least an infelicity, introduced by the fix for libxml2's CVE-2015-7499. We can hope that this behavioral change will get undone before too long; but the security patch is likely to spread a lot faster/further than any follow-on cleanup, which means this behavior is likely to be present in the wild for some time to come. As a stopgap, add a variant regression test expected-file that matches what you get with a libxml2 that acts this way.
Fix REASSIGN OWNED for foreign user mappings
commit : 1f8757ad8c0436a1b1559fb83325a727e05b920e author : Alvaro Herrera <firstname.lastname@example.org> date : Fri, 11 Dec 2015 18:39:09 -0300 committer: Alvaro Herrera <email@example.com> date : Fri, 11 Dec 2015 18:39:09 -0300
As reported in bug #13809 by Alexander Ashurkov, the code for REASSIGN OWNED hadn't gotten word about user mappings. Deal with them in the same way default ACLs do, which is to ignore them altogether; they are handled just fine by DROP OWNED. The other foreign object cases are already handled correctly by both commands. Also add a REASSIGN OWNED statement to foreign_data test to exercise the foreign data objects. (The changes are just before the "cleanup" phase, so it shouldn't remove any existing live test.) Reported by Alexander Ashurkov, then independently by Jaime Casanova.
Install our "missing" script where PGXS builds can find it.
commit : 423697e3dd4d4986854a9ed3b11962515423f2e7 author : Tom Lane <firstname.lastname@example.org> date : Fri, 11 Dec 2015 16:14:27 -0500 committer: Tom Lane <email@example.com> date : Fri, 11 Dec 2015 16:14:27 -0500
This allows sane behavior in a PGXS build done on a machine where build tools such as bison are missing. Jim Nasby
Still more fixes for planner's handling of LATERAL references.
commit : 7ad6960664b87c34500f27cb4b8bdf861ebc02e8 author : Tom Lane <firstname.lastname@example.org> date : Fri, 11 Dec 2015 14:22:20 -0500 committer: Tom Lane <email@example.com> date : Fri, 11 Dec 2015 14:22:20 -0500
More fuzz testing by Andreas Seltenreich exposed that the planner did not cope well with chains of lateral references. If relation X references Y laterally, and Y references Z laterally, then we will have to scan X on the inside of a nestloop with Z, so for all intents and purposes X is laterally dependent on Z too. The planner did not understand this and would generate intermediate joins that could not be used. While that was usually harmless except for wasting some planning cycles, under the right circumstances it would lead to "failed to build any N-way joins" or "could not devise a query plan" planner failures.

To fix that, convert the existing per-relation lateral_relids and lateral_referencers relid sets into their transitive closures; that is, they now show all relations on which a rel is directly or indirectly laterally dependent. This not only fixes the chained-reference problem but allows some of the relevant tests to be made substantially simpler and faster, since they can be reduced to simple bitmap manipulations instead of searches of the LateralJoinInfo list.

Also, when a PlaceHolderVar that is due to be evaluated at a join contains lateral references, we should treat those references as indirect lateral dependencies of each of the join's base relations. This prevents us from trying to join any individual base relations to the lateral reference source before the join is formed, which again cannot work.

Andreas' testing also exposed another oversight in the "dangerous PlaceHolderVar" test added in commit 85e5e222b1dd02f1. Simply rejecting unsafe join paths in joinpath.c is insufficient, because in some cases we will end up rejecting *all* possible paths for a particular join, again leading to "could not devise a query plan" failures. The restriction has to be known also to join_is_legal and its cohort functions, so that they will not select a join for which that will happen.
I chose to move the supporting logic into joinrels.c where the latter functions are. Back-patch to 9.3 where LATERAL support was introduced.
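The closure computation described above can be sketched with relid sets reduced to plain bitmasks. This is a hedged illustration with invented names, not the planner code (which uses Bitmapset): deps[i] is the set of rels that rel i references laterally, and the closure adds indirect dependencies, so X->Y plus Y->Z yields Z in deps[X].

```c
#include <assert.h>
#include <stdint.h>

#define MAX_RELS 16

/*
 * Expand each rel's lateral-dependency set to its transitive closure by
 * iterating to a fixed point: whenever rel i depends on rel j, fold j's
 * dependencies into i's.
 */
static void
lateral_closure(uint32_t deps[MAX_RELS])
{
    int         changed = 1;

    while (changed)
    {
        changed = 0;
        for (int i = 0; i < MAX_RELS; i++)
        {
            uint32_t    expanded = deps[i];

            for (int j = 0; j < MAX_RELS; j++)
                if (deps[i] & (1u << j))
                    expanded |= deps[j];        /* inherit j's dependencies */
            if (expanded != deps[i])
            {
                deps[i] = expanded;
                changed = 1;
            }
        }
    }
}

/* demo: X=0 references Y=1, Y references Z=2; X must end up depending on Z */
static int
demo_chain(void)
{
    uint32_t    deps[MAX_RELS] = {0};

    deps[0] = 1u << 1;          /* X laterally references Y */
    deps[1] = 1u << 2;          /* Y laterally references Z */
    lateral_closure(deps);
    return (deps[0] & (1u << 2)) != 0;
}
```

Once the sets are closed, "is this join legal with respect to lateral refs" reduces to cheap bitmap overlap tests, which is the simplification the commit message mentions.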
Fix bug leading to restoring unlogged relations from empty files.
commit : c6a67bbc7077f652b59c79c2dd5bf9774755db48 author : Andres Freund <firstname.lastname@example.org> date : Thu, 10 Dec 2015 16:25:12 +0100 committer: Andres Freund <email@example.com> date : Thu, 10 Dec 2015 16:25:12 +0100
At the end of crash recovery, unlogged relations are reset to the empty state, using their init fork as the template. The init fork is copied to the main fork without going through shared buffers. Unfortunately WAL replay so far has not necessarily flushed writes from shared buffers to disk at that point. In normal crash recovery, and before the introduction of 'fast promotions' in fd4ced523 / 9.3, the END_OF_RECOVERY checkpoint flushes the buffers out in time. But with fast promotions that's not the case anymore.

To fix, force WAL writes targeting the init fork to be flushed immediately (using the new FlushOneBuffer() function). In 9.5+ that flush can centrally be triggered from the code dealing with restoring full page writes (XLogReadBufferForRedoExtended); in earlier releases that responsibility is in the hands of XLOG_HEAP_NEWPAGE's replay function.

Backpatch to 9.1, even though this currently is only known to trigger in 9.3+. Flushing earlier is more robust, and it is advantageous to keep the branches similar.

Typical symptoms of this bug are errors like 'ERROR: index "..." contains unexpected zero page at block 0' shortly after promoting a node.

Reported-By: Thom Brown
Author: Andres Freund and Michael Paquier
Discussion: 20150326175024.GJ451@alap3.anarazel.de
Backpatch: 9.1-
Accept flex > 2.5.x on Windows, too.
commit : ee0df4d77c9f3e3aa6c5e23fa2dbc66e9cd6deae author : Tom Lane <firstname.lastname@example.org> date : Thu, 10 Dec 2015 10:19:13 -0500 committer: Tom Lane <email@example.com> date : Thu, 10 Dec 2015 10:19:13 -0500
Commit 32f15d05c fixed this in configure, but missed the similar check in the MSVC scripts. Michael Paquier, per report from Victor Wagner
Simplify LATERAL-related calculations within add_paths_to_joinrel().
commit : 7145d35c1727e12cc492c60a2c34392b5ed0dfa5 author : Tom Lane <firstname.lastname@example.org> date : Wed, 9 Dec 2015 18:54:25 -0500 committer: Tom Lane <email@example.com> date : Wed, 9 Dec 2015 18:54:25 -0500
While convincing myself that commit 7e19db0c09719d79 would solve both of the problems recently reported by Andreas Seltenreich, I realized that add_paths_to_joinrel's handling of LATERAL restrictions could be made noticeably simpler and faster if we were to retain the minimum possible parameterization for each joinrel (that is, the set of relids supplying unsatisfied lateral references in it). We already retain that for baserels, in RelOptInfo.lateral_relids, so we can use that field for joinrels too. This is a back-port of commit edca44b1525b3d591263d032dc4fe500ea771e0e. I originally intended not to back-patch that, but additional hacking in this area turns out to be needed, making it necessary not optional to compute lateral_relids for joinrels. In preparation for those fixes, sync the relevant code with HEAD as much as practical. (I did not risk rearranging fields of RelOptInfo in released branches, however.)
Avoid odd portability problem in TestLib.pm's slurp_file function.
commit : 56a79a5ae9fe1426b07f8ad986777829f19b4437 author : Tom Lane <firstname.lastname@example.org> date : Tue, 8 Dec 2015 16:58:05 -0500 committer: Tom Lane <email@example.com> date : Tue, 8 Dec 2015 16:58:05 -0500
For unclear reasons, this function doesn't always read the expected data in some old Perl versions. Rewriting it to avoid use of ARGV seems to dodge the problem, and this version is clearer anyway if you ask me. In passing, also improve error message in adjacent append_to_file function.
Fix another oversight in checking if a join with LATERAL refs is legal.
commit : 0901d68babc324cc09077131fa966f15225e1fab author : Tom Lane <firstname.lastname@example.org> date : Mon, 7 Dec 2015 17:41:45 -0500 committer: Tom Lane <email@example.com> date : Mon, 7 Dec 2015 17:41:45 -0500
It was possible for the planner to decide to join a LATERAL subquery to the outer side of an outer join before the outer join itself is completed. Normally that's fine because of the associativity rules, but it doesn't work if the subquery contains a lateral reference to the inner side of the outer join. In such a situation the outer join *must* be done first. join_is_legal() missed this consideration and would allow the join to be attempted, but the actual path-building code correctly decided that no valid join path could be made, sometimes leading to planner errors such as "failed to build any N-way joins". Per report from Andreas Seltenreich. Back-patch to 9.3 where LATERAL support was added.
Create TestLib.pm's tempdir underneath tmp_check/, not out in the open.
commit : 0cc6badf69f914667d3645af3e8d05e973542cb1 author : Tom Lane <firstname.lastname@example.org> date : Sat, 5 Dec 2015 13:23:48 -0500 committer: Tom Lane <email@example.com> date : Sat, 5 Dec 2015 13:23:48 -0500
This way, existing .gitignore entries and makefile clean actions will automatically apply to the tempdir, should it survive a TAP test run (which can happen if the user control-C's out of the run, for example). Michael Paquier, per a complaint from me
Further improve documentation of the role-dropping process.
commit : 85cb94f61bfb7e3ac852912f773bef4e15d98cc1 author : Tom Lane <firstname.lastname@example.org> date : Fri, 4 Dec 2015 14:44:13 -0500 committer: Tom Lane <email@example.com> date : Fri, 4 Dec 2015 14:44:13 -0500
In commit 1ea0c73c2 I added a section to user-manag.sgml about how to drop roles that own objects; but as pointed out by Stephen Frost, I neglected that shared objects (databases or tablespaces) may need special treatment. Fix that. Back-patch to supported versions, like the previous patch.
Make gincostestimate() cope with hypothetical GIN indexes.
commit : ab14e0e4c8ed453cd719f1db82ec5c175e73ba91 author : Tom Lane <firstname.lastname@example.org> date : Tue, 1 Dec 2015 16:24:34 -0500 committer: Tom Lane <email@example.com> date : Tue, 1 Dec 2015 16:24:34 -0500
We tried to fetch statistics data from the index metapage, which does not work if the index isn't actually present. If the index is hypothetical, instead extrapolate some plausible internal statistics based on the index page count provided by the index-advisor plugin.

There was already some code in gincostestimate() to invent internal stats in this way, but since it was only meant as a stopgap for pre-9.1 GIN indexes that hadn't been vacuumed since upgrading, it was pretty crude. If we want it to support index advisors, we should try a little harder.

A small amount of testing says that it's better to estimate the entry pages as 90% of the index, not 100%. Also, estimating the number of entries (keys) as equal to the heap tuple count could be wildly wrong in either direction. Instead, let's estimate 100 entries per entry page.

Perhaps someday somebody will want the index advisor to be able to provide these numbers more directly, but for the moment this should serve.

Problem report and initial patch by Julien Rouhaud; modified by me to invent less-bogus internal statistics. Back-patch to all supported branches, since we've supported index advisors since 9.0.
Use "g" not "f" format in ecpg's PGTYPESnumeric_from_double().
commit : 346cc2f0192791b5ee0fb5de4276d3cb8f5ab9a6 author : Tom Lane <firstname.lastname@example.org> date : Tue, 1 Dec 2015 11:42:25 -0500 committer: Tom Lane <email@example.com> date : Tue, 1 Dec 2015 11:42:25 -0500
The previous coding could overrun the provided buffer size for a very large input, or lose precision for a very small input. Adopt the methodology that's been in use in the equivalent backend code for a long time. Per private report from Bas van Schaik. Back-patch to all supported branches.
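The two failure modes can be demonstrated with printf itself. This is an illustration of the formatting problem, not the ecpg code; snprintf(NULL, 0, ...) is used only to measure how long each rendering would be.

```c
#include <assert.h>
#include <stdio.h>

/*
 * With "%f" the output length grows with the magnitude of the value, so
 * a huge input can overrun a fixed-size buffer, while a tiny input
 * collapses to "0.000000" and loses all precision.  "%g" switches to
 * exponential notation as needed, staying compact and keeping
 * significant digits -- the methodology the fix adopts.
 */
static int
f_format_len(double v)
{
    return snprintf(NULL, 0, "%f", v);
}

static int
g_format_len(double v)
{
    return snprintf(NULL, 0, "%.15g", v);
}
```

For 1e300, "%f" needs over 300 characters while "%.15g" needs a handful; for 1e-10, "%f" prints "0.000000" (the value is simply gone) while "%.15g" prints "1e-10".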
Fix failure to consider failure cases in GetComboCommandId().
commit : b7fc1dd1fed3c38b1d95c6433b65cbbd22c0323f author : Tom Lane <firstname.lastname@example.org> date : Thu, 26 Nov 2015 13:23:02 -0500 committer: Tom Lane <email@example.com> date : Thu, 26 Nov 2015 13:23:02 -0500
Failure to initially palloc the comboCids array, or to realloc it bigger when needed, left combocid's data structures in an inconsistent state that would cause trouble if the top transaction continues to execute. Noted while examining a user complaint about the amount of memory used for this. (There's not much we can do about that, but it does point up that repalloc failure has a non-negligible chance of occurring here.) In HEAD/9.5, also avoid possible invocation of memcpy() with a null pointer in SerializeComboCIDState; cf commit 13bba0227.
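The inconsistent-state hazard generalizes to any growable array. A hedged sketch follows, using plain realloc() rather than repalloc() and invented names; the point is that bookkeeping is updated only after the allocation succeeds, so a failure leaves the structure usable.

```c
#include <assert.h>
#include <stdlib.h>

/*
 * Growing an array must keep the data structure consistent even if the
 * (re)allocation fails.  Assigning realloc's result directly to the
 * only pointer would lose the old block on failure and could leave a
 * capacity field that no memory actually backs; instead, grow first and
 * update the bookkeeping only on success.
 */
typedef struct IntArray
{
    int        *items;
    size_t      allocated;      /* capacity actually backed by memory */
    size_t      used;
} IntArray;

static int
array_append(IntArray *a, int value)
{
    if (a->used >= a->allocated)
    {
        size_t      newalloc = (a->allocated == 0) ? 4 : a->allocated * 2;
        int        *newitems = realloc(a->items, newalloc * sizeof(int));

        if (newitems == NULL)
            return 0;           /* report failure; *a is still consistent */
        a->items = newitems;    /* commit state only after success */
        a->allocated = newalloc;
    }
    a->items[a->used++] = value;
    return 1;
}

/* demo: append well past the initial capacity and check contents survive */
static int
demo_grow(void)
{
    IntArray    a = {0};
    int         ok;

    for (int i = 0; i < 100; i++)
        if (!array_append(&a, i))
            return 0;
    ok = (a.used == 100 && a.items[0] == 0 && a.items[99] == 99);
    free(a.items);
    return ok;
}
```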
Be more paranoid about null return values from libpq status functions.
commit : 3d357b48f901503ecc69c529e5bbcd3f84ba5094 author : Tom Lane <firstname.lastname@example.org> date : Wed, 25 Nov 2015 17:31:53 -0500 committer: Tom Lane <email@example.com> date : Wed, 25 Nov 2015 17:31:53 -0500
PQhost() can return NULL in non-error situations, namely when a Unix-socket connection has been selected by default. That behavior is a tad debatable perhaps, but for the moment we should make sure that psql copes with it. Unfortunately, do_connect() failed to: it could pass a NULL pointer to strcmp(), resulting in crashes on most platforms. This was reported as a security issue by ChenQin of Topsec Security Team, but the consensus of the security list is that it's just a garden-variety bug with no security implications. For paranoia's sake, I made the keep_password test not trust PQuser or PQport either, even though I believe those will never return NULL given a valid PGconn. Back-patch to all supported branches.
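The defensive pattern amounts to a NULL-safe comparison. This is a sketch with an invented helper name, not psql's do_connect() code: since accessors like PQhost() may legitimately return NULL (e.g. for a default Unix-socket connection), parameters must be compared without ever handing NULL to strcmp().

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/*
 * Compare two connection parameters, either of which may be NULL.
 * Two NULLs compare equal (both "defaulted"); NULL never equals a
 * real string; otherwise fall through to strcmp().
 */
static int
param_equal(const char *a, const char *b)
{
    if (a == NULL || b == NULL)
        return a == b;          /* equal only if both are NULL */
    return strcmp(a, b) == 0;
}
```

Calling strcmp(NULL, "localhost") directly is undefined behavior and crashes on most platforms, which is exactly the reported symptom.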
pg_upgrade: fix CopyFile() on Windows to fail on file existence
commit : f91c4e326aa9c81223146b43b4167c29630155c4 author : Bruce Momjian <firstname.lastname@example.org> date : Tue, 24 Nov 2015 17:18:28 -0500 committer: Bruce Momjian <email@example.com> date : Tue, 24 Nov 2015 17:18:28 -0500
Also fix getErrorText() to return the right error string on failure. This behavior now matches that of other operating systems. Report by Noah Misch. Back-patch through 9.1.
Adopt the GNU convention for handling tar-archive members exceeding 8GB.
commit : 7acad954639df1c44877a37671339b8a7efce8c7 author : Tom Lane <firstname.lastname@example.org> date : Sat, 21 Nov 2015 20:21:32 -0500 committer: Tom Lane <email@example.com> date : Sat, 21 Nov 2015 20:21:32 -0500
The POSIX standard for tar headers requires archive member sizes to be printed in octal with at most 11 digits, limiting the representable file size to 8GB. However, GNU tar and apparently most other modern tars support a convention in which oversized values can be stored in base-256, allowing any practical file to be a tar member. Adopt this convention to remove two limitations:

* pg_dump with -Ft output format failed if the contents of any one table exceeded 8GB.

* pg_basebackup failed if the data directory contained any file exceeding 8GB. (This would be a fatal problem for installations configured with a table segment size of 8GB or more, and it has also been seen to fail when large core dump files exist in the data directory.)

File sizes under 8GB are still printed in octal, so that no compatibility issues are created except in cases that would have failed entirely before. In addition, this patch fixes several bugs in the same area:

* In 9.3 and later, we'd defined tarCreateHeader's file-size argument as size_t, which meant that on 32-bit machines it would write a corrupt tar header for file sizes between 4GB and 8GB, even though no error was raised. This broke both "pg_dump -Ft" and pg_basebackup for such cases.

* pg_restore from a tar archive would fail on tables of size between 4GB and 8GB, on machines where either "size_t" or "unsigned long" is 32 bits. This happened even with an archive file not affected by the previous bug.

* pg_basebackup would fail if there were files of size between 4GB and 8GB, even on 64-bit machines.

* In 9.3 and later, "pg_basebackup -Ft" failed entirely, for any file size, on 64-bit big-endian machines.

In view of these potential data-loss bugs, back-patch to all supported branches, even though removal of the documented 8GB limit might otherwise be considered a new feature rather than a bug fix.
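The GNU convention can be sketched as follows. This is an illustrative encoder, not the actual src/port tar code: values that fit in the field's octal digits are written as zero-padded octal, and anything larger is written base-256, big-endian, with the 0x80 flag set in the first byte (the standard size field is 12 bytes, so the octal cutoff is 8^11 = 8GB).

```c
#include <stdint.h>
#include <stdio.h>

/* Illustrative sketch of the GNU tar numeric-field convention.  "field"
 * is a header field of "len" bytes (12 for the size field). */
static void
tar_write_number(char *field, int len, uint64_t val)
{
    if (val < (uint64_t) 1 << ((len - 1) * 3))
    {
        /* fits in len-1 octal digits plus the terminating NUL */
        snprintf(field, len, "%0*llo", len - 1, (unsigned long long) val);
    }
    else
    {
        /* base-256: flag byte, then the value big-endian */
        field[0] = (char) 0x80;
        for (int i = len - 1; i >= 1; i--)
        {
            field[i] = (char) (val & 0xFF);
            val >>= 8;
        }
    }
}
```

Because the octal path is taken for everything under 8GB, archives unaffected by the old limit are byte-for-byte unchanged, which is how the patch avoids compatibility fallout.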
Fix vcregress.pl's bincheck
commit : b29a40fea78ec003146f1aa57aa60413cd560c45 author : Andrew Dunstan <firstname.lastname@example.org> date : Sat, 21 Nov 2015 09:20:08 -0500 committer: Andrew Dunstan <email@example.com> date : Sat, 21 Nov 2015 09:20:08 -0500
InstallTemp() didn't exist in 9.4; it was implemented in 9.5. But it's used by the new bincheck code, so add it for 9.4.
Fix handling of inherited check constraints in ALTER COLUMN TYPE (again).
commit : 47ea4614e94d459817063cf922f88a615f8cee76 author : Tom Lane <firstname.lastname@example.org> date : Fri, 20 Nov 2015 14:55:28 -0500 committer: Tom Lane <email@example.com> date : Fri, 20 Nov 2015 14:55:28 -0500
The previous way of reconstructing check constraints was to do a separate "ALTER TABLE ONLY tab ADD CONSTRAINT" for each table in an inheritance hierarchy. However, that way has no hope of reconstructing the check constraints' own inheritance properties correctly, as pointed out in bug #13779 from Jan Dirk Zijlstra. What we should do instead is to do a regular "ALTER TABLE", allowing recursion, at the topmost table that has a particular constraint, and then suppress the work queue entries for inherited instances of the constraint. Annoyingly, we'd tried to fix this behavior before, in commit 5ed6546cf, but we failed to notice that it wasn't reconstructing the pg_constraint field values correctly. As long as I'm touching pg_get_constraintdef_worker anyway, tweak it to always schema-qualify the target table name; this seems like useful backup to the protections installed by commit 5f173040. In HEAD/9.5, get rid of get_constraint_relation_oids, which is now unused. (I could alternatively have modified it to also return conislocal, but that seemed like a pretty single-purpose API, so let's not pretend it has some other use.) It's unused in the back branches as well, but I left it in place just in case some third-party code has decided to use it. In HEAD/9.5, also rename pg_get_constraintdef_string to pg_get_constraintdef_command, as the previous name did nothing to explain what that entry point did differently from others (and its comment was equally useless). Again, that change doesn't seem like material for back-patching. I did a bit of re-pgindenting in tablecmds.c in HEAD/9.5, as well. Otherwise, back-patch to all supported branches.
fix a perl typo
commit : 9892cc20097e8934b76965d97a86775d6d5d49bf author : Andrew Dunstan <firstname.lastname@example.org> date : Thu, 19 Nov 2015 02:42:02 -0500 committer: Andrew Dunstan <email@example.com> date : Thu, 19 Nov 2015 02:42:02 -0500
Update docs for vcregress.pl bincheck changes
commit : 0fbf440407ae4ae34c143579ab2fc68f37895856 author : Andrew Dunstan <firstname.lastname@example.org> date : Wed, 18 Nov 2015 23:32:16 -0500 committer: Andrew Dunstan <email@example.com> date : Wed, 18 Nov 2015 23:32:16 -0500
Improve vcregress.pl's handling of tap tests for client programs
commit : b06a8e3cc24d29d5c6eee0220a32c3155726daf9 author : Andrew Dunstan <firstname.lastname@example.org> date : Wed, 18 Nov 2015 22:47:41 -0500 committer: Andrew Dunstan <email@example.com> date : Wed, 18 Nov 2015 22:47:41 -0500
The target is now named 'bincheck' rather than 'tapcheck' so that it reflects what is checked instead of the test mechanism. Some of the logic is improved, making it easier to add further sets of TAP-based tests in future. Also, the environment setting logic is improved. As discussed on -hackers a couple of months ago.
Accept flex > 2.5.x in configure.
commit : d5bb7c6f699cf4ce9250fcd14e0ec129c0f1a507 author : Tom Lane <firstname.lastname@example.org> date : Wed, 18 Nov 2015 17:45:05 -0500 committer: Tom Lane <email@example.com> date : Wed, 18 Nov 2015 17:45:05 -0500
Per buildfarm member anchovy, 2.6.0 exists in the wild now. Hopefully it works with Postgres; if not, we'll have to do something about that, but in any case claiming it's "too old" is pretty silly.
Fix possible internal overflow in numeric division.
commit : cc95595e05c086a53a64eea2b17d135a80548106 author : Tom Lane <firstname.lastname@example.org> date : Tue, 17 Nov 2015 15:46:47 -0500 committer: Tom Lane <email@example.com> date : Tue, 17 Nov 2015 15:46:47 -0500
div_var_fast() postpones propagating carries in the same way as mul_var(), so it has the same corner-case overflow risk we fixed in 246693e5ae8a36f0, namely that the size of the carries has to be accounted for when setting the threshold for executing a carry propagation step. We've not devised a test case illustrating the brokenness, but the required fix seems clear enough. Like the previous fix, back-patch to all active branches. Dean Rasheed
Back-patch fixes to make TAP tests work on Windows.
commit : 8bc496c3b68f6b5296eed3ebccc6f322d4d0ba52 author : Tom Lane <firstname.lastname@example.org> date : Tue, 17 Nov 2015 14:10:24 -0500 committer: Tom Lane <email@example.com> date : Tue, 17 Nov 2015 14:10:24 -0500
This back-ports commit 13d856e177e69083 and assorted followon patches into 9.4 and 9.5. 9.5 and HEAD are now substantially identical in all the files touched by this commit, except that 010_pg_basebackup.pl has a few more tests related to the new --slot option. 9.4 has many fewer TAP tests, but the test infrastructure files are substantially the same, with the exception that 9.4 lacks the single-tmp-install infrastructure introduced in 9.5 (commit dcae5faccab64776). The primary motivation for this patch is to ensure that TAP test case fixes can be back-patched without hazards of the kind seen in commits 34557f544/06dd4b44f. In principle it should also make the world safe for running the TAP tests in the buildfarm in these branches; although we might want to think about back-porting dcae5faccab64776 to 9.4 if we're going to do that for real, because the TAP tests are quite disk space hungry without it. Michael Paquier did the back-porting work; original patches were by him and assorted other people.
Speed up ruleutils' name de-duplication code, and fix overlength-name case.
commit : a6c4c07fc5cd0665568ff48ada3b65900fafa1af author : Tom Lane <firstname.lastname@example.org> date : Mon, 16 Nov 2015 13:45:17 -0500 committer: Tom Lane <email@example.com> date : Mon, 16 Nov 2015 13:45:17 -0500
Since commit 11e131854f8231a21613f834c40fe9d046926387, ruleutils.c has attempted to ensure that each RTE in a query or plan tree has a unique alias name. However, the code that was added for this could be quite slow, even as bad as O(N^3) if N identical RTE names must be replaced, as noted by Jeff Janes. Improve matters by building a transient hash table within set_rtable_names. The hash table in itself reduces the cost of detecting a duplicate from O(N) to O(1), and we can save another factor of N by storing the number of de-duplicated names already created for each entry, so that we don't have to re-try names already created. This way is probably a bit slower overall for small range tables, but almost by definition, such cases should not be a performance problem. In principle the same problem applies to the column-name-de-duplication code; but in practice that seems to be less of a problem, first because N is limited since we don't support extremely wide tables, and second because duplicate column names within an RTE are fairly rare, so that in practice the cost is more like O(N^2) not O(N^3). It would be very much messier to fix the column-name code, so for now I've left that alone. An independent problem in the same area was that the de-duplication code paid no attention to the identifier length limit, and would happily produce identifiers that were longer than NAMEDATALEN and wouldn't be unique after truncation to NAMEDATALEN. This could result in dump/reload failures, or perhaps even views that silently behaved differently than before. We can fix that by shortening the base name as needed. Fix it for both the relation and column name cases. In passing, check for interrupts in set_rtable_names, just in case it's still slow enough to be an issue. Back-patch to 9.3 where this code was introduced.
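The counter-per-base-name idea can be sketched in miniature. This is a toy, not ruleutils.c: NAMEMAX stands in for the NAMEDATALEN-1 limit, the hash table is a simple chained one, and the real code additionally rechecks each generated name against all existing names (omitted here). The two essential points are visible: the stored counter means the Nth duplicate is produced without retrying suffixes _1 through _(N-1), and the base is shortened before appending "_N" so the result is still unique after truncation.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define NAMEMAX 16              /* stand-in for NAMEDATALEN - 1 */
#define TABSIZE 256

typedef struct NameEntry
{
    char        base[NAMEMAX + 1];
    int         count;          /* aliases already minted for this base */
    struct NameEntry *next;
} NameEntry;

static NameEntry *table[TABSIZE];

static NameEntry *
lookup(const char *base)
{
    unsigned    h = 5381;
    NameEntry  *e;

    for (const char *s = base; *s; s++)
        h = h * 33 + (unsigned char) *s;
    h %= TABSIZE;
    for (e = table[h]; e; e = e->next)
        if (strcmp(e->base, base) == 0)
            return e;
    e = calloc(1, sizeof(NameEntry));
    snprintf(e->base, sizeof(e->base), "%s", base);
    e->next = table[h];
    table[h] = e;
    return e;
}

/* Write a unique alias for "wanted" into buf (at least NAMEMAX+1 bytes). */
static void
choose_unique_name(const char *wanted, char *buf)
{
    char        base[NAMEMAX + 1];
    NameEntry  *e;

    snprintf(base, sizeof(base), "%s", wanted); /* truncate to the limit */
    e = lookup(base);
    if (e->count++ == 0)
    {
        strcpy(buf, base);      /* first use: no suffix needed */
        return;
    }
    /* shorten the base so base + "_N" still fits within NAMEMAX */
    {
        char        suffix[16];
        int         sfxlen = snprintf(suffix, sizeof(suffix), "_%d",
                                      e->count - 1);

        if ((int) strlen(base) + sfxlen > NAMEMAX)
            base[NAMEMAX - sfxlen] = '\0';
        snprintf(buf, NAMEMAX + 1, "%s%s", base, suffix);
    }
}
```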
Fix ruleutils.c's dumping of whole-row Vars in ROW() and VALUES() contexts.
commit : d33ab56b0ee3dbb5a353bf07804820f1daa199d5 author : Tom Lane <firstname.lastname@example.org> date : Sun, 15 Nov 2015 14:41:09 -0500 committer: Tom Lane <email@example.com> date : Sun, 15 Nov 2015 14:41:09 -0500
Normally ruleutils prints a whole-row Var as "foo.*". We already knew that that doesn't work at top level of a SELECT list, because the parser would treat the "*" as a directive to expand the reference into separate columns, not a whole-row Var. However, Joshua Yanovski points out in bug #13776 that the same thing happens at top level of a ROW() construct; and some nosing around in the parser shows that the same is true in VALUES(). Hence, apply the same workaround already devised for the SELECT-list case, namely to add a forced cast to the appropriate rowtype in these cases. (The alternative of just printing "foo" was rejected because it is difficult to avoid ambiguity against plain columns named "foo".) Back-patch to all supported branches.
PL/Python: Make tests pass with Python 3.5
commit : f1b898759f4936e9185698e8624da832a99b933e author : Peter Eisentraut <firstname.lastname@example.org> date : Wed, 3 Jun 2015 19:52:08 -0400 committer: Peter Eisentraut <email@example.com> date : Wed, 3 Jun 2015 19:52:08 -0400
The error message wording for AttributeError has changed in Python 3.5. For the plpython_error test, add a new expected file. In the plpython_subtransaction test, we didn't really care what the exception is, only that it is something coming from Python. So use a generic exception instead, which has a message that doesn't vary across versions.
pg_upgrade: properly detect file copy failure on Windows
commit : 87cdfeb18ae0fe298b6f405718cbe97eaed190d3 author : Bruce Momjian <firstname.lastname@example.org> date : Sat, 14 Nov 2015 11:47:11 -0500 committer: Bruce Momjian <email@example.com> date : Sat, 14 Nov 2015 11:47:11 -0500
Previously, file copy failures were ignored on Windows due to an incorrect return value check. Report by Manu Joye. Back-patch through 9.1.
Fix unwanted flushing of libpq's input buffer when socket EOF is seen.
commit : 40879a92b90cdd46f74588a2edb024a3c869d932 author : Tom Lane <firstname.lastname@example.org> date : Thu, 12 Nov 2015 13:03:52 -0500 committer: Tom Lane <email@example.com> date : Thu, 12 Nov 2015 13:03:52 -0500
In commit 210eb9b743c0645d I centralized libpq's logic for closing down the backend communication socket, and made the new pqDropConnection routine always reset the I/O buffers to empty. Many of the call sites previously had not had such code, and while that amounted to an oversight in some cases, there was one place where it was intentional and necessary *not* to flush the input buffer: pqReadData should never cause that to happen, since we probably still want to process whatever data we read. This is the true cause of the problem Robert was attempting to fix in c3e7c24a1d60dc6a, namely that libpq no longer reported the backend's final ERROR message before reporting "server closed the connection unexpectedly". But that only accidentally fixed it, by invoking parseInput before the input buffer got flushed; and very likely there are timing scenarios where we'd still lose the message before processing it. To fix, pass a flag to pqDropConnection to tell it whether to flush the input buffer or not. On review I think flushing is actually correct for every other call site. Back-patch to 9.3 where the problem was introduced. In HEAD, also improve the comments added by c3e7c24a1d60dc6a.
Docs: fix misleading example.
commit : ff4adfd27ca6cbb94512c2433c086eba6ad97019 author : Tom Lane <firstname.lastname@example.org> date : Tue, 10 Nov 2015 22:11:39 -0500 committer: Tom Lane <email@example.com> date : Tue, 10 Nov 2015 22:11:39 -0500
Commit 8457d0beca731bf0 introduced an example which, while not incorrect, failed to exhibit the behavior it meant to describe, as a result of omitting an E'' prefix that needed to be there. Noticed and fixed by Peter Geoghegan. I (tgl) failed to resist the temptation to wordsmith nearby text a bit while at it.
Improve our workaround for 'TeX capacity exceeded' in building PDF files.
commit : 86f358c1ee727e67cc8a40304324bb1650a70562 author : Tom Lane <firstname.lastname@example.org> date : Tue, 10 Nov 2015 15:59:59 -0500 committer: Tom Lane <email@example.com> date : Tue, 10 Nov 2015 15:59:59 -0500
In commit a5ec86a7c787832d28d5e50400ec96a5190f2555 I wrote a quick hack that reduced the number of TeX string pool entries created while converting our documentation to PDF form. That held the fort for awhile, but as of HEAD we're back up against the same limitation. It turns out that the original coding of \FlowObjectSetup actually results in *three* string pool entries being generated for every "flow object" (that is, potential cross-reference target) in the documentation, and my previous hack only got rid of one of them. With a little more care, we can reduce the string count to one per flow object plus one per actually-cross-referenced flow object (about 115000 + 5000 as of current HEAD); that should work until the documentation volume roughly doubles from where it is today. As a not-incidental side benefit, this change also causes pdfjadetex to stop emitting unreferenced hyperlink anchors (bookmarks) into the PDF file. It had been making one willy-nilly for every flow object; now it's just one per actually-cross-referenced object. This results in close to a 2X savings in PDF file size. We will still want to run the output through "jpdftweak" to get it to be compressed; but we no longer need removal of unreferenced bookmarks, so we might be able to find a quicker tool for that step. Although the failure only affects HEAD and US-format output at the moment, 9.5 cannot be more than a few pages short of failing likewise, so it will inevitably fail after a few rounds of minor-version release notes. I don't have a lot of faith that we'll never hit the limit in the older branches; and anyway it would be nice to get rid of jpdftweak across the board. Therefore, back-patch to all supported branches.
Don't connect() to a wildcard address in test_postmaster_connection().
commit : 24379a45c5ecbc35fde93952346be67112a96fe0 author : Noah Misch <firstname.lastname@example.org> date : Sun, 8 Nov 2015 17:28:53 -0500 committer: Noah Misch <email@example.com> date : Sun, 8 Nov 2015 17:28:53 -0500
At least OpenBSD, NetBSD, and Windows don't support it. This repairs pg_ctl for listen_addresses='0.0.0.0' and listen_addresses='::'. Since pg_ctl prefers to test a Unix-domain socket, Windows users are most likely to need this change. Back-patch to 9.1 (all supported versions). This could change pg_ctl interaction with loopback-interface firewall rules. Therefore, in 9.4 and earlier (released branches), activate the change only on known-affected platforms. Reported (bug #13611) and designed by Kondo Yuta.
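The substitution involved can be sketched like this (the helper name is ours, not pg_ctl's): when the configured listen address is a wildcard, probe the corresponding loopback address instead, since connect() to a wildcard is not portable.

```c
#include <string.h>

/* Hypothetical helper: map a wildcard listen_addresses value to a
 * connectable loopback address; pass anything else through unchanged. */
static const char *
connect_probe_addr(const char *listen_addr)
{
    if (strcmp(listen_addr, "0.0.0.0") == 0)
        return "127.0.0.1";
    if (strcmp(listen_addr, "::") == 0)
        return "::1";
    return listen_addr;
}
```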
Fix enforcement of restrictions inside regexp lookaround constraints.
commit : f69c01f2c2b7dcd1b584ac1025d4739636a6cc83 author : Tom Lane <firstname.lastname@example.org> date : Sat, 7 Nov 2015 12:43:24 -0500 committer: Tom Lane <email@example.com> date : Sat, 7 Nov 2015 12:43:24 -0500
Lookahead and lookbehind constraints aren't allowed to contain backrefs, and parentheses within them are always considered non-capturing. Or so says the manual. But the regexp parser forgot about these rules once inside a parenthesized subexpression, so that constructs like (\w)(?=(\1)) were accepted (but then not correctly executed --- a case like this acted like (\w)(?=\w), without any enforcement that the two \w's match the same text). And in (?=((foo))) the innermost parentheses would be counted as capturing parentheses, though no text would ever be captured for them. To fix, properly pass down the "type" argument to the recursive invocation of parse(). Back-patch to all supported branches; it was agreed that silent misexecution of such patterns is worse than throwing an error, even though new errors in minor releases are generally not desirable.
Fix erroneous hash calculations in gin_extract_jsonb_path().
commit : 788e35ac0bc00489e2b86a930d8c1264100fb94b author : Tom Lane <firstname.lastname@example.org> date : Thu, 5 Nov 2015 18:15:48 -0500 committer: Tom Lane <email@example.com> date : Thu, 5 Nov 2015 18:15:48 -0500
The jsonb_path_ops code calculated hash values inconsistently in some cases involving nested arrays and objects. This would result in queries possibly not finding entries that they should find, when using a jsonb_path_ops GIN index for the search. The problem cases involve JSONB values that contain both scalars and sub-objects at the same nesting level, for example an array containing both scalars and sub-arrays. To fix, reset the current stack->hash after processing each value or sub-object, not before; and don't try to be cute about the outermost level's initial hash. Correcting this means that existing jsonb_path_ops indexes may now be inconsistent with the new hash calculation code. The symptom is the same --- searches not finding entries they should find --- but the specific rows affected are likely to be different. Users will need to REINDEX jsonb_path_ops indexes to make sure that all searches work as expected. Per bug #13756 from Daniel Cheng. Back-patch to 9.4 where the faulty logic was introduced.
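The reset-after-each-value rule can be illustrated with a toy hash (this is not jsonb's actual hashing): each element's path hash must be derived from its parent container's hash, so the running hash is reset to the parent value after every element rather than carried forward into the next sibling.

```c
#include <stdint.h>

/* Toy string hash standing in for the real path hashing. */
static uint32_t
combine(uint32_t h, const char *key)
{
    while (*key)
        h = h * 31 + (unsigned char) *key++;
    return h;
}

/* out[i] receives the path hash of parent -> keys[i]. */
static void
hash_siblings(uint32_t parent_hash, const char *const *keys, int n,
              uint32_t *out)
{
    for (int i = 0; i < n; i++)
    {
        /* reset per element; the buggy shape effectively started from
         * the previous sibling's hash instead of the parent's */
        uint32_t    h = parent_hash;

        out[i] = combine(h, keys[i]);
    }
}
```

With the buggy carry-over, the same key would hash differently depending on what preceded it at the same nesting level, which is exactly why index searches missed entries.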
shm_mq: Third attempt at fixing nowait behavior in shm_mq_receive.
commit : 038aa89af53ee6ee26dfc9e73704d4e94701588f author : Robert Haas <firstname.lastname@example.org> date : Tue, 3 Nov 2015 09:12:52 -0500 committer: Robert Haas <email@example.com> date : Tue, 3 Nov 2015 09:12:52 -0500
Commit a1480ec1d3bacb9acb08ec09f22bc25bc033115b purported to fix the problems with commit b2ccb5f4e6c81305386edb34daf7d1d1e1ee112a, but it didn't completely fix them. The problem is that the checks were performed in the wrong order, leading to a race condition. If the sender attached, sent a message, and detached after the receiver called shm_mq_get_sender and before the receiver called shm_mq_counterparty_gone, we'd incorrectly return SHM_MQ_DETACHED before all messages were read. Repair by reversing the order of operations, and add a long comment explaining why this new logic is (hopefully) correct.
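The corrected ordering can be modeled in miniature (a toy, not the real shm_mq): check for pending data before concluding the sender is gone, because the sender may attach, write, and detach between any two checks.

```c
/* Toy model of the nowait receive path. */
typedef struct
{
    int         messages_pending;
    int         sender_detached;
} ToyQueue;

typedef enum
{
    MQ_SUCCESS,
    MQ_WOULD_BLOCK,
    MQ_DETACHED
} ToyResult;

static ToyResult
toy_receive_nowait(ToyQueue *mq)
{
    /* data first: a detached sender may have left unread messages */
    if (mq->messages_pending > 0)
    {
        mq->messages_pending--;
        return MQ_SUCCESS;
    }
    /* only once we know no data remains may we report DETACHED */
    if (mq->sender_detached)
        return MQ_DETACHED;
    return MQ_WOULD_BLOCK;
}
```

Testing the detached flag first, as the earlier code effectively did, drops whatever the sender wrote before detaching.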
Add RMV to list of commands taking AE lock.
commit : 11e7f9d52e2108f340213e51a8114a832b0ccc52 author : Kevin Grittner <firstname.lastname@example.org> date : Mon, 2 Nov 2015 06:26:36 -0600 committer: Kevin Grittner <email@example.com> date : Mon, 2 Nov 2015 06:26:36 -0600
Backpatch to 9.3, where it was initially omitted. Craig Ringer, with minor adjustment by Kevin Grittner
Fix serialization anomalies due to race conditions on INSERT.
commit : 1d95617f7039d77bba09415db6ca9b4bc8791cf6 author : Kevin Grittner <firstname.lastname@example.org> date : Sat, 31 Oct 2015 14:36:09 -0500 committer: Kevin Grittner <email@example.com> date : Sat, 31 Oct 2015 14:36:09 -0500
On insert, the CheckForSerializableConflictIn() test was performed before the page(s) which were going to be modified had been locked (with an exclusive buffer content lock). If another process acquired a relation SIReadLock on the heap and scanned to a page on which an insert was going to occur before the page was so locked, a rw-conflict would be missed, which could allow a serialization anomaly to be missed. The window between the check and the page lock was small, so the bug was generally not noticed unless there was high concurrency with multiple processes inserting into the same table. This was reported by Peter Bailis as bug #11732, by Sean Chittenden as bug #13667, and by others. The race condition was eliminated in heap_insert() by moving the check down below the acquisition of the buffer lock, which had been the very next statement. Because of the loop locking and unlocking multiple buffers in heap_multi_insert() a check was added after all inserts were completed. The check before the start of the inserts was left because it might avoid a large amount of work to detect a serialization anomaly before performing all of the inserts and the related WAL logging. While investigating this bug, other SSI bugs which were even harder to hit in practice were noticed and fixed, an unnecessary check (covered by another check, so redundant) was removed from heap_update(), and comments were improved. Back-patch to all supported branches. Kevin Grittner and Thomas Munro
doc: security_barrier option is a Boolean, not a string.
commit : 5554b308c0c1f1eecbc66b39cda9234ca386eb31 author : Robert Haas <firstname.lastname@example.org> date : Fri, 30 Oct 2015 12:18:55 +0100 committer: Robert Haas <email@example.com> date : Fri, 30 Oct 2015 12:18:55 +0100
Mistake introduced by commit 5bd91e3a835b5d5499fee5f49fc7c0c776fe63dd. Hari Babu
Fix typo in bgworker.c
commit : 352e3cbf8ea10206f504fb0f1ffb2541d7284f5a author : Robert Haas <firstname.lastname@example.org> date : Fri, 30 Oct 2015 10:35:33 +0100 committer: Robert Haas <email@example.com> date : Fri, 30 Oct 2015 10:35:33 +0100
Docs: add example clarifying use of nested JSON containment.
commit : 626f9be8db11521745711aa3b29e304ab5902a9c author : Tom Lane <firstname.lastname@example.org> date : Thu, 29 Oct 2015 18:54:35 -0400 committer: Tom Lane <email@example.com> date : Thu, 29 Oct 2015 18:54:35 -0400
Show how this can be used in practice to make queries simpler and more flexible. Also, draw an explicit contrast to the existence operator, which doesn't work that way. Peter Geoghegan and Tom Lane
Fix incorrect message in ATWrongRelkindError.
commit : 589017eb5d3d447118641cf5d46b58e1d3ff0387 author : Robert Haas <firstname.lastname@example.org> date : Wed, 28 Oct 2015 11:44:47 +0100 committer: Robert Haas <email@example.com> date : Wed, 28 Oct 2015 11:44:47 +0100
Mistake introduced by commit 3bf3ab8c563699138be02f9dc305b7b77a724307. Etsuro Fujita
Measure string lengths only once
commit : fa171654f2733726f984bd22ef9eaee410dfce8d author : Alvaro Herrera <firstname.lastname@example.org> date : Tue, 27 Oct 2015 13:20:40 -0300 committer: Alvaro Herrera <email@example.com> date : Tue, 27 Oct 2015 13:20:40 -0300
Bernd Helmle complained that CreateReplicationSlot() was assigning the same value to the same variable twice, so we could remove one of them. Code inspection reveals that we can actually remove both assignments: according to the author the assignment was there for beauty of the strlen line only, and another possible fix to that is to put the strlen in its own line, so do that. To be consistent within the file, refactor all duplicated strlen() calls, which is what we do elsewhere in the backend anyway. In basebackup.c, snprintf already returns the right length; no need for strlen afterwards. Backpatch to 9.4, where replication slots were introduced, to keep code identical. Some of this is older, but the patch doesn't apply cleanly and it's only of cosmetic value anyway. Discussion: http://www.postgresql.org/message-id/BE2FD71DEA35A2287EA5F018@eje.credativ.lan
shm_mq: Repair breakage from previous commit.
commit : 5eca6cf99411bfd47f43fc742552c9a2ae459bc8 author : Robert Haas <firstname.lastname@example.org> date : Thu, 22 Oct 2015 22:01:11 -0400 committer: Robert Haas <email@example.com> date : Thu, 22 Oct 2015 22:01:11 -0400
If the counterparty writes some data into the queue and then detaches, it's wrong to return SHM_MQ_DETACHED right away. If we do that, we fail to read whatever was written.
shm_mq: Fix failure to notice a dead counterparty when nowait is used.
commit : 87abcb4ebd48f5d8f7244236f8839854c1861537 author : Robert Haas <firstname.lastname@example.org> date : Thu, 22 Oct 2015 16:33:30 -0400 committer: Robert Haas <email@example.com> date : Thu, 22 Oct 2015 16:33:30 -0400
The shm_mq mechanism was intended to optionally notice when the process on the other end of the queue fails to attach to the queue. It does this by allowing the user to pass a BackgroundWorkerHandle; if the background worker in question is launched and dies without attaching to the queue, then we know it never will. This logic works OK in blocking mode, but when called with nowait = true we fail to notice that this has happened due to an asymmetry in the logic. Repair. Reported off-list by Rushabh Lathia. Patch by me.
Fix incorrect translation of minus-infinity datetimes for json/jsonb.
commit : 4f33572ee68b515dc2750e265fc0d0312c0d5d3d author : Tom Lane <firstname.lastname@example.org> date : Tue, 20 Oct 2015 11:06:24 -0700 committer: Tom Lane <email@example.com> date : Tue, 20 Oct 2015 11:06:24 -0700
Commit bda76c1c8cfb1d11751ba6be88f0242850481733 caused both plus and minus infinity to be rendered as "infinity", which is not only wrong but inconsistent with the pre-9.4 behavior of to_json(). Fix that by duplicating the coding in date_out/timestamp_out/timestamptz_out more closely. Per bug #13687 from Stepan Perlov. Back-patch to 9.4, like the previous commit. In passing, also re-pgindent json.c, since it had gotten a bit messed up by recent patches (and I was already annoyed by indentation-related problems in back-patching this fix ...)
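The special-value dispatch being mirrored can be sketched as follows. Types and constants are simplified stand-ins for the real timestamp sentinels; the point is simply that both infinities must be distinguished when rendering datetimes for json/jsonb, not collapsed into one.

```c
#include <limits.h>
#include <stddef.h>

/* Simplified stand-ins for PostgreSQL's timestamp sentinels. */
typedef long long ToyTimestamp;
#define TOY_NOBEGIN LLONG_MIN   /* -infinity */
#define TOY_NOEND   LLONG_MAX   /* +infinity */

/* Return the special-value spelling, or NULL for a finite timestamp
 * (which the caller formats normally). */
static const char *
timestamp_special(ToyTimestamp ts)
{
    if (ts == TOY_NOBEGIN)
        return "-infinity";
    if (ts == TOY_NOEND)
        return "infinity";
    return NULL;
}
```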
Fix back-patch of commit 8e3b4d9d40244c037bbc6e182ea3fabb9347d482.
commit : 7fc7125e21bbc1a84b8670e3d8ac7f7f4b204a9d author : Noah Misch <firstname.lastname@example.org> date : Tue, 20 Oct 2015 00:57:25 -0400 committer: Noah Misch <email@example.com> date : Tue, 20 Oct 2015 00:57:25 -0400
master emits an extra context message compared to 9.5 and earlier.
Eschew "RESET statement_timeout" in tests.
commit : 563f40bb4e465e69c0a2641ce5880a679e69538b author : Noah Misch <firstname.lastname@example.org> date : Tue, 20 Oct 2015 00:37:22 -0400 committer: Noah Misch <email@example.com> date : Tue, 20 Oct 2015 00:37:22 -0400
Instead, use transaction abort. Given an unlucky bout of latency, the timeout would cancel the RESET itself. Buildfarm members gharial, lapwing, mereswine, shearwater, and sungazer witness that. Back-patch to 9.1 (all supported versions). The query_canceled test still could timeout before entering its subtransaction; for whatever reason, that has yet to happen on the buildfarm.
Fix incorrect handling of lookahead constraints in pg_regprefix().
commit : 52f21c5882ebad18d4cfcd67d99d31b90397ce29 author : Tom Lane <firstname.lastname@example.org> date : Mon, 19 Oct 2015 13:54:53 -0700 committer: Tom Lane <email@example.com> date : Mon, 19 Oct 2015 13:54:53 -0700
pg_regprefix was doing nothing with lookahead constraints, which would be fine if it were the right kind of nothing, but it isn't: we have to terminate our search for a fixed prefix, not just pretend the LACON arc isn't there. Otherwise, if the current state has both a LACON outarc and a single plain-color outarc, we'd falsely conclude that the color represents an addition to the fixed prefix, and generate an extracted index condition that restricts the indexscan too much. (See added regression test case.) Terminating the search is conservative: we could traverse the LACON arc (thus assuming that the constraint can be satisfied at runtime) and then examine the outarcs of the linked-to state. But that would be a lot more work than it seems worth, because writing a LACON followed by a single plain character is a pretty silly thing to do. This makes a difference only in rather contrived cases, but it's a bug, so back-patch to all supported branches.
Fix order of arguments in ecpg generated typedef command.
commit : a850d7136fd0e1220be32df8646117f7017d67d6 author : Michael Meskes <firstname.lastname@example.org> date : Fri, 16 Oct 2015 17:29:05 +0200 committer: Michael Meskes <email@example.com> date : Fri, 16 Oct 2015 17:29:05 +0200
Miscellaneous cleanup of regular-expression compiler.
commit : f189747d4692bddc2f07c622d7d83b1bcbf48fbf author : Tom Lane <firstname.lastname@example.org> date : Fri, 16 Oct 2015 15:52:12 -0400 committer: Tom Lane <email@example.com> date : Fri, 16 Oct 2015 15:52:12 -0400
Revert our previous addition of "all" flags to copyins() and copyouts(); they're no longer needed, and were never anything but an unsightly hack. Improve a couple of infelicities in the REG_DEBUG code for dumping the NFA data structure, including adding code to count the total number of states and arcs. Add a couple of missed error checks. Add some more documentation in the README file, and some regression tests illustrating cases that exceeded the state-count limit and/or took unreasonable amounts of time before this set of patches. Back-patch to all supported branches.
Improve memory-usage accounting in regular-expression compiler.
commit : 0ecf4a9e55d7a9322f3aaee31bbd68ba01b2820e author : Tom Lane <firstname.lastname@example.org> date : Fri, 16 Oct 2015 15:36:17 -0400 committer: Tom Lane <email@example.com> date : Fri, 16 Oct 2015 15:36:17 -0400
This code previously counted the number of NFA states it created, and complained if a limit was exceeded, so as to prevent bizarre regex patterns from consuming unreasonable time or memory. That's fine as far as it went, but the code paid no attention to how many arcs linked those states. Since regexes can be contrived that have O(N) states but will need O(N^2) arcs after fixempties() processing, it was still possible to blow out memory, and take a long time doing it too. To fix, modify the bookkeeping to count space used by both states and arcs. I did not bother with including the "color map" in the accounting; it can only grow to a few megabytes, which is not a lot in comparison to what we're allowing for states+arcs (about 150MB on 64-bit machines or half that on 32-bit machines). Looking at some of the larger real-world regexes captured in the Tcl regression test suite suggests that the most that is likely to be needed for regexes found in the wild is under 10MB, so I believe that the current limit has enough headroom to make it okay to keep it as a hard-wired limit. In connection with this, redefine REG_ETOOBIG as meaning "regular expression is too complex"; the previous wording of "nfa has too many states" was already somewhat inapropos because of the error code's use for stack depth overrun, and it was not very user-friendly either. Back-patch to all supported branches.
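The states-plus-arcs accounting can be sketched as below. The limit value and struct are illustrative, not the regex engine's actual definitions; the point is that the budget is charged for both kinds of objects, so a pattern with few states but quadratically many arcs still trips the limit.

```c
#include <stddef.h>

/* Illustrative compile-space limit (the commit message cites roughly
 * 150MB on 64-bit machines). */
#define TOY_MAX_COMPILE_SPACE ((size_t) 150 * 1024 * 1024)

struct toy_sizes
{
    size_t      state_size;
    size_t      arc_size;
};

/* Charge both states and arcs against the budget. */
static int
compile_space_ok(size_t nstates, size_t narcs, const struct toy_sizes *sz)
{
    size_t      used = nstates * sz->state_size + narcs * sz->arc_size;

    return used <= TOY_MAX_COMPILE_SPACE;
}
```

Under the old states-only accounting, the second case in the test below (a million states but three million arcs) would have looked affordable.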
Improve performance of pullback/pushfwd in regular-expression compiler.
commit : 9774fda86866fb12c8d690cb754e3981dc45efcd author : Tom Lane <firstname.lastname@example.org> date : Fri, 16 Oct 2015 15:11:49 -0400 committer: Tom Lane <email@example.com> date : Fri, 16 Oct 2015 15:11:49 -0400
The previous coding would create a new intermediate state every time it wanted to interchange the ordering of two constraint arcs. Certain regex features such as \Y can generate large numbers of parallel constraint arcs, and if we needed to reorder the results of that, we created unreasonable numbers of intermediate states. To improve matters, keep a list of already-created intermediate states associated with the state currently being considered by the outer loop; we can re-use such states to place all the new arcs leading to the same destination or source. I also took the trouble to redefine push() and pull() to have a less risky API: they no longer delete any state or arc that the caller might possibly have a pointer to, except for the specifically-passed constraint arc. This reduces the risk of re-introducing the same type of error seen in the failed patch for CVE-2007-4772. Back-patch to all supported branches.
Improve performance of fixempties() pass in regular-expression compiler.
commit : 8cf4eed0b0f25063e3d09933cf7334cc95094307 author : Tom Lane <firstname.lastname@example.org> date : Fri, 16 Oct 2015 14:58:11 -0400 committer: Tom Lane <email@example.com> date : Fri, 16 Oct 2015 14:58:11 -0400
The previous coding took something like O(N^4) time to fully process a chain of N EMPTY arcs. We can't really do much better than O(N^2) because we have to insert about that many arcs, but we can do lots better than what's there now. The win comes partly from using mergeins() to amortize de-duplication of arcs across multiple source states, and partly from exploiting knowledge of the ordering of arcs for each state to avoid looking at arcs we don't need to consider during the scan. We do have to be a bit careful of the possible reordering of arcs introduced by the sort-merge coding of the previous commit, but that's not hard to deal with. Back-patch to all supported branches.
Fix O(N^2) performance problems in regular-expression compiler.
commit : bdde29e1ceb100d47cc212b6a39eeb5c8708a535 author : Tom Lane <firstname.lastname@example.org> date : Fri, 16 Oct 2015 14:43:18 -0400 committer: Tom Lane <email@example.com> date : Fri, 16 Oct 2015 14:43:18 -0400
Change the singly-linked in-arc and out-arc lists to be doubly-linked, so that arc deletion is constant time rather than having worst-case time proportional to the number of other arcs on the connected states. Modify the bulk arc transfer operations copyins(), copyouts(), moveins(), moveouts() so that they use a sort-and-merge algorithm whenever there's more than a small number of arcs to be copied or moved. The previous method is O(N^2) in the number of arcs involved, because it performs duplicate checking independently for each copied arc. The new method may change the ordering of existing arcs for the destination state, but nothing really cares about that. Provide another bulk arc copying method mergeins(), which is unused as of this commit but is needed for the next one. It basically is like copyins(), but the source arcs might not all come from the same state. Replace the O(N^2) bubble-sort algorithm used in carcsort() with a qsort() call. These changes greatly improve the performance of regex compilation for large or complex regexes, at the cost of extra space for arc storage during compilation. The original tradeoff was probably fine when it was made, but now we care more about speed and less about memory consumption. Back-patch to all supported branches.
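Why doubly linking the arc lists makes deletion constant time can be shown with a minimal sketch. The types and field names below are hypothetical stand-ins for the real NFA structures: with a back-pointer, an arc unlinks itself directly instead of scanning the list from the state's head.

```c
/* Sketch (hypothetical types, not the actual regex sources) of O(1) arc
 * deletion via a doubly-linked out-arc list. */
#include <assert.h>
#include <stdlib.h>

struct arc
{
	int			co;				/* illustrative payload */
	struct arc *outchain;		/* next arc out of the same state */
	struct arc *outchainRev;	/* previous arc: the back-link */
};

struct state
{
	struct arc *outs;			/* head of out-arc list */
};

/* Push a new arc onto state "s"'s out-arc list. */
static struct arc *
newarc(struct state *s, int co)
{
	struct arc *a = malloc(sizeof(struct arc));

	a->co = co;
	a->outchain = s->outs;
	a->outchainRev = NULL;
	if (s->outs)
		s->outs->outchainRev = a;
	s->outs = a;
	return a;
}

/* Unlink and free arc "a" in constant time: no list scan needed. */
static void
freearc(struct state *s, struct arc *a)
{
	if (a->outchainRev)
		a->outchainRev->outchain = a->outchain;
	else
		s->outs = a->outchain;
	if (a->outchain)
		a->outchain->outchainRev = a->outchainRev;
	free(a);
}
```

With only a singly-linked chain, freearc() would have to walk from s->outs to find the predecessor, giving the worst-case cost proportional to the number of other arcs that the commit eliminates.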
Fix regular-expression compiler to handle loops of constraint arcs.
commit : b6eb5fc40ed60dc0c58bc52356ab5b3679f74d0d author : Tom Lane <firstname.lastname@example.org> date : Fri, 16 Oct 2015 14:14:41 -0400 committer: Tom Lane <email@example.com> date : Fri, 16 Oct 2015 14:14:41 -0400
It's possible to construct regular expressions that contain loops of constraint arcs (that is, ^ $ AHEAD BEHIND or LACON arcs). There's no use in fully traversing such a loop at execution, since you'd just end up in the same NFA state without having consumed any input. Worse, such a loop leads to infinite looping in the pullback/pushfwd stage of compilation, because we keep pushing or pulling the same constraints around the loop in a vain attempt to move them to the pre or post state. Such looping was previously recognized in CVE-2007-4772; but the fix only handled the case of trivial single-state loops (that is, a constraint arc leading back to its source state) ... and not only that, it was incorrect even for that case, because it broke the admittedly-not-very-clearly-stated API contract of the pull() and push() subroutines. The first two regression test cases added by this commit exhibit patterns that result in assertion failures because of that (though there seem to be no ill effects in non-assert builds). The other new test cases exhibit multi-state constraint loops; in an unpatched build they will run until the NFA state-count limit is exceeded. To fix, remove the code added for CVE-2007-4772, and instead create a general-purpose constraint-loop-breaking phase of regex compilation that executes before we do pullback/pushfwd. Since we never need to traverse a constraint loop fully, we can just break the loop at any chosen spot, if we add clone states that can replicate any sequence of arc transitions that would've traversed just part of the loop. Also add some commentary clarifying why we have to have all these machinations in the first place. This class of problems has been known for some time --- we had a report from Marc Mamin about two years ago, for example, and there are related complaints in the Tcl bug tracker. 
I had discussed a fix of this kind off-list with Henry Spencer, but didn't get around to doing something about it until the issue was rediscovered by Greg Stark recently. Back-patch to all supported branches.
On Windows, ensure shared memory handle gets closed if not being used.
commit : 44a6e24fbc8b2ec5fec4be6850b2e816864e2cca author : Tom Lane <firstname.lastname@example.org> date : Tue, 13 Oct 2015 11:21:33 -0400 committer: Tom Lane <email@example.com> date : Tue, 13 Oct 2015 11:21:33 -0400
Postmaster child processes that aren't supposed to be attached to shared memory were not bothering to close the shared memory mapping handle they inherit from the postmaster process. That's mostly harmless, since the handle vanishes anyway when the child process exits -- but the syslogger process, if used, doesn't get killed and restarted during recovery from a backend crash. That meant that Windows didn't see the shared memory mapping as becoming free, so it didn't delete it and the postmaster was unable to create a new one, resulting in failure to recover from crashes whenever logging_collector is turned on. Per report from Dmitry Vasilyev. It's a bit astonishing that we'd not figured this out long ago, since it's been broken from the very beginnings of our native Windows support; probably some previously-unexplained trouble reports trace to this. A secondary problem is that on Cygwin (perhaps only in older versions?), exec() may not detach from the shared memory segment after all, in which case these child processes did remain attached to shared memory, posing the risk of an unexpected shared memory clobber if they went off the rails somehow. That may be a long-gone bug, but we can deal with it now if it's still live, by detaching within the infrastructure introduced here to deal with closing the handle. Back-patch to all supported branches. Tom Lane and Amit Kapila
Sigh, need "use Config" as well.
commit : bba442ef9397ae31a278b4be1cecf1986b1f67c5 author : Tom Lane <firstname.lastname@example.org> date : Mon, 12 Oct 2015 19:49:22 -0400 committer: Tom Lane <email@example.com> date : Mon, 12 Oct 2015 19:49:22 -0400
This time with some manual testing behind it ...
Cause TestLib.pm to define $windows_os in all branches.
commit : 06dd4b44fbcef0297acc0fbb1efe311900310272 author : Tom Lane <firstname.lastname@example.org> date : Mon, 12 Oct 2015 19:35:38 -0400 committer: Tom Lane <email@example.com> date : Mon, 12 Oct 2015 19:35:38 -0400
Back-port of a part of commit 690ed2b76ab91eb79ea04ee2bfbdc8a2693f2a37 that I'd depended on without realizing that it was only added recently. Since it seems entirely likely that other such tests will need to be back-patched in future, providing the flag seems like a better answer than just putting a test in-line. Per buildfarm.
Fix "pg_ctl start -w" to test child process status directly.
commit : 57f54b5e4eb5fedb102d4006857c226b47f21e28 author : Tom Lane <firstname.lastname@example.org> date : Mon, 12 Oct 2015 18:30:36 -0400 committer: Tom Lane <email@example.com> date : Mon, 12 Oct 2015 18:30:36 -0400
pg_ctl start with -w previously relied on a heuristic that the postmaster would surely always manage to create postmaster.pid within five seconds. Unfortunately, that fails much more often than we would like on some of the slower, more heavily loaded buildfarm members. We have known for quite some time that we could remove the need for that heuristic on Unix by using fork/exec instead of system() to launch the postmaster. This allows us to know the exact PID of the postmaster, which allows near-certain verification that the postmaster.pid file is the one we want and not a leftover, and it also lets us use waitpid() to detect reliably whether the child postmaster has exited or not. What was blocking this change was not wanting to rewrite the Windows version of start_postmaster() to avoid use of CMD.EXE. That's doable in theory but would require fooling about with stdout/stderr redirection, and getting the handling of quote-containing postmaster switches to stay the same might be rather ticklish. However, we realized that we don't have to do that to fix the problem, because we can test whether the shell process has exited as a proxy for whether the postmaster is still alive. That doesn't allow an exact check of the PID in postmaster.pid, but we're no worse off than before in that respect; and we do get rid of the heuristic about how long the postmaster might take to create postmaster.pid. On Unix, this change means that a second "pg_ctl start -w" immediately after another such command will now reliably fail, whereas previously it would succeed if done within two seconds of the earlier command. Since that's a saner behavior anyway, it's fine. On Windows, the case can still succeed within the same time window, since pg_ctl can't tell that the earlier postmaster's postmaster.pid isn't the pidfile it is looking for. To ensure stable test results on Windows, we can insert a short sleep into the test script for pg_ctl, ensuring that the existing pidfile looks stale.
This hack can be removed if we ever do rewrite start_postmaster(), but that no longer seems like a high-priority thing to do. Back-patch to all supported versions, both because the current behavior is buggy and because we must do that if we want the buildfarm failures to go away. Tom Lane and Michael Paquier
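The Unix side of the technique can be sketched like this. This is a simplified POSIX-only illustration, not pg_ctl's actual code: fork/exec the child directly so the parent knows its exact PID, then poll with waitpid(..., WNOHANG) to detect exit reliably, rather than guessing from a timeout.

```c
/* Sketch: fork/exec instead of system(), plus a non-blocking liveness
 * check via waitpid().  Simplified for illustration; the real
 * start_postmaster() handles redirection, options, and errors. */
#include <assert.h>
#include <signal.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Start "argv" as a child process; returns its exact PID to the parent,
 * which system() cannot provide. */
static pid_t
start_child(char *const argv[])
{
	pid_t		pid = fork();

	if (pid == 0)
	{
		execvp(argv[0], argv);
		_exit(127);				/* exec failed */
	}
	return pid;
}

/* Returns 1 if child "pid" is still running, 0 if it has exited
 * (or has already been reaped). */
static int
child_still_running(pid_t pid)
{
	int			status;

	return (waitpid(pid, &status, WNOHANG) == 0) ? 1 : 0;
}
```

Knowing the child's PID lets the parent both cross-check the pidfile contents and detect a crashed child immediately, instead of waiting out a five-second heuristic.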
Use JsonbIteratorToken consistently in automatic variable declarations.
commit : 22c5705f81717dd0622ebfb13a617e6104d1fbd9 author : Noah Misch <firstname.lastname@example.org> date : Sun, 11 Oct 2015 23:53:35 -0400 committer: Noah Misch <email@example.com> date : Sun, 11 Oct 2015 23:53:35 -0400
Many functions stored JsonbIteratorToken values in variables of other integer types. Also, standardize order relative to other declarations. Expect compilers to generate the same code before and after this change.
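The benefit of the enum-typed declarations can be shown with a minimal sketch. The enum members below are a small subset of the real JsonbIteratorToken for illustration only: with the variable declared as the enum type, a switch over it lets compilers warn about any unhandled member, which an "int" declaration would suppress.

```c
/* Sketch: store the token in a variable of the enum type, not "int".
 * Illustrative subset of values; the real enum has more members. */
#include <assert.h>
#include <string.h>

typedef enum JsonbIteratorToken
{
	WJB_KEY,
	WJB_VALUE,
	WJB_DONE
} JsonbIteratorToken;

/* Because "tok" has the enum type, compilers can flag a switch that
 * misses a member; the generated code is the same as with "int". */
static const char *
token_name(JsonbIteratorToken tok)
{
	switch (tok)
	{
		case WJB_KEY:
			return "key";
		case WJB_VALUE:
			return "value";
		case WJB_DONE:
			return "done";
	}
	return "?";
}
```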
commit : c9853e647f247e0bbae1294b24fac40ec7c34550 author : Peter Eisentraut <firstname.lastname@example.org> date : Sun, 11 Oct 2015 21:44:27 -0400 committer: Peter Eisentraut <email@example.com> date : Sun, 11 Oct 2015 21:44:27 -0400
Make prove_installcheck remove the old log directory, if any.
commit : e491f2bbb53bf7eb678af4fd5ebc88f1b64d28f7 author : Noah Misch <firstname.lastname@example.org> date : Sun, 11 Oct 2015 20:36:07 -0400 committer: Noah Misch <email@example.com> date : Sun, 11 Oct 2015 20:36:07 -0400
prove_check already has been doing this. Back-patch to 9.4, like the commit that introduced this logging.
Fix uninitialized-variable bug.
commit : 15e9457bbb3e6ba2485f88252e2f522c069f26c5 author : Tom Lane <firstname.lastname@example.org> date : Fri, 9 Oct 2015 09:12:03 -0500 committer: Tom Lane <email@example.com> date : Fri, 9 Oct 2015 09:12:03 -0500
For some reason, neither of the compilers I usually use noticed the uninitialized-variable problem I introduced in commit 7e2a18a9161fee7e. That's hardly a good enough excuse though. Committing with brown paper bag on head. In addition to putting the operations in the right order, move the declaration of "now" inside the loop; there's no need for it to be outside, and that does wake up older gcc enough to notice any similar future problem. Back-patch to 9.4; earlier versions lack the time-to-SIGKILL stanza so there's no bug.
Factor out encoding specific tests for json
commit : 56f9d916327b3e256d655db278b62b850c931d91 author : Andrew Dunstan <firstname.lastname@example.org> date : Wed, 7 Oct 2015 17:41:45 -0400 committer: Andrew Dunstan <email@example.com> date : Wed, 7 Oct 2015 17:41:45 -0400
This lets us remove the large alternative results files for the main json and jsonb tests, which makes modifying those tests simpler for committers and patch submitters. Backpatch to 9.4 for jsonb and 9.3 for json.
Improve documentation of the role-dropping process.
commit : fe86f7fbe442212ec638bdaf2ebf474dfaae9722 author : Tom Lane <firstname.lastname@example.org> date : Wed, 7 Oct 2015 16:12:06 -0400 committer: Tom Lane <email@example.com> date : Wed, 7 Oct 2015 16:12:06 -0400
In general one may have to run both REASSIGN OWNED and DROP OWNED to get rid of all the dependencies of a role to be dropped. This was alluded to in the REASSIGN OWNED man page, but not really spelled out in full; and in any case the procedure ought to be documented in a more prominent place than that. Add a section to the "Database Roles" chapter explaining this, and do a bit of wordsmithing in the relevant commands' man pages.
Perform an immediate shutdown if the postmaster.pid file is removed.
commit : 3d701277f8c90c3770f086093eaa5999e3ce6e95 author : Tom Lane <firstname.lastname@example.org> date : Tue, 6 Oct 2015 17:15:27 -0400 committer: Tom Lane <email@example.com> date : Tue, 6 Oct 2015 17:15:27 -0400
The postmaster now checks every minute or so (worst case, at most two minutes) that postmaster.pid is still there and still contains its own PID. If not, it performs an immediate shutdown, as though it had received SIGQUIT. The original goal behind this change was to ensure that failed buildfarm runs would get fully cleaned up, even if the test scripts had left a postmaster running, which is not an infrequent occurrence. When the buildfarm script removes a test postmaster's $PGDATA directory, its next check on postmaster.pid will fail and cause it to exit. Previously, manual intervention was often needed to get rid of such orphaned postmasters, since they'd block new test postmasters from obtaining the expected socket address. However, by checking postmaster.pid and not something else, we can provide additional robustness: manual removal of postmaster.pid is a frequent DBA mistake, and now we can at least limit the damage that will ensue if a new postmaster is started while the old one is still alive. Back-patch to all supported branches, since we won't get the desired improvement in buildfarm reliability otherwise.
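The periodic self-check can be sketched as follows. This is a simplified illustration, not the actual postmaster code (which also validates other pidfile fields): reread postmaster.pid and confirm its first line still holds our own PID; if the file is gone or the PID differs, the caller should shut down immediately.

```c
/* Sketch: verify that the pidfile at "path" still names "mypid".
 * Simplified for illustration; function name is invented. */
#include <assert.h>
#include <stdio.h>
#include <unistd.h>

static int
pidfile_still_ours(const char *path, long mypid)
{
	FILE	   *fp = fopen(path, "r");
	long		filepid;
	int			ok;

	if (fp == NULL)
		return 0;				/* file removed: trigger shutdown */
	ok = (fscanf(fp, "%ld", &filepid) == 1 && filepid == mypid);
	fclose(fp);
	return ok;
}
```

A mismatched PID is the interesting case for the DBA-mistake scenario: if a new postmaster has rewritten the file, the old one sees a foreign PID and stops, limiting the damage from two live postmasters sharing a data directory.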