Release notes for 9.3.5, 9.2.9, 9.1.14, 9.0.18, 8.4.22.
commit : e1ea61a30121a97eee192adc0808635fcf7b6f25 author : Tom Lane <email@example.com> date : Mon, 21 Jul 2014 15:12:31 -0400 committer: Tom Lane <firstname.lastname@example.org> date : Mon, 21 Jul 2014 15:12:31 -0400
commit : 074f840c22604570e326cdaea076432972d50d85 author : Tom Lane <email@example.com> date : Mon, 21 Jul 2014 14:59:32 -0400 committer: Tom Lane <firstname.lastname@example.org> date : Mon, 21 Jul 2014 14:59:32 -0400
commit : 44c4013431eb90b3c385bfc5371e583bbc8ecac3 author : Peter Eisentraut <email@example.com> date : Mon, 21 Jul 2014 01:02:07 -0400 committer: Peter Eisentraut <firstname.lastname@example.org> date : Mon, 21 Jul 2014 01:02:07 -0400
Fix xreflabel for hot_standby_feedback.
commit : 3b3a05df1e0f0227cde186d274d6b5f47211242b author : Tom Lane <email@example.com> date : Sat, 19 Jul 2014 22:20:50 -0400 committer: Tom Lane <firstname.lastname@example.org> date : Sat, 19 Jul 2014 22:20:50 -0400
Rather remarkable that this has been wrong since 9.1 and nobody noticed.
Update time zone data files to tzdata release 2014e.
commit : 7d09e4854250df9bc01b318c23d8467e215adb75 author : Tom Lane <email@example.com> date : Sat, 19 Jul 2014 15:00:50 -0400 committer: Tom Lane <firstname.lastname@example.org> date : Sat, 19 Jul 2014 15:00:50 -0400
DST law changes in Crimea, Egypt, Morocco. New zone Antarctica/Troll for Norwegian base in Queen Maud Land.
Limit pg_upgrade authentication advice to always-secure techniques.
commit : 9cef05e0f53933ab54a99224fcba5ab2abf2b8a8 author : Noah Misch <email@example.com> date : Fri, 18 Jul 2014 16:05:17 -0400 committer: Noah Misch <firstname.lastname@example.org> date : Fri, 18 Jul 2014 16:05:17 -0400
~/.pgpass is a sound choice everywhere, and "peer" authentication is safe on every platform it supports. Cease to recommend "trust" authentication, the safety of which is deeply configuration-specific. Back-patch to 9.0, where pg_upgrade was introduced.
Fix two low-probability memory leaks in regular expression parsing.
commit : a223b9e361bae993a3e7d845a30b46fe2c5feca5 author : Tom Lane <email@example.com> date : Fri, 18 Jul 2014 13:00:27 -0400 committer: Tom Lane <firstname.lastname@example.org> date : Fri, 18 Jul 2014 13:00:27 -0400
If pg_regcomp failed after having invoked markst/cleanst, it would leak any "struct subre" nodes it had created. (We've already detected all regex syntax errors at that point, so the only likely causes of later failure would be query cancel or out-of-memory.) To fix, make sure freesrnode knows the difference between the pre-cleanst and post-cleanst cleanup procedures. Add some documentation of this less-than-obvious point. Also, newlacon did the wrong thing with an out-of-memory failure from realloc(), so that the previously allocated array would be leaked. Both of these are pretty low-probability scenarios, but a bug is a bug, so patch all the way back. Per bug #10976 from Arthur O'Dwyer.
Fix REASSIGN OWNED for text search objects
commit : b42f09fc80ce0848233753bbf38643b88e208786 author : Alvaro Herrera <email@example.com> date : Tue, 15 Jul 2014 13:24:07 -0400 committer: Alvaro Herrera <firstname.lastname@example.org> date : Tue, 15 Jul 2014 13:24:07 -0400
Trying to reassign objects owned by a user that had text search dictionaries or configurations used to fail with "ERROR: unexpected classid 3600" or "ERROR: unexpected classid 3602". Fix by adding cases for those object types in a switch in pg_shdepend.c. Both REASSIGN OWNED and text search objects go back all the way to 8.1, so backpatch to all supported branches. In 9.3 the alter-owner code was made generic, so the required change in recent branches is pretty simple; however, for 9.2 and older ones we need some additional reshuffling to enable specifying objects by OID rather than name. Text search templates and parsers are not owned objects, so there's no change required for them. Per bug #9749 reported by Michal Novotný.
Reset master xmin when hot_standby_feedback disabled. If the walsender is holding the standby's xmin, ensure we reset the value to 0 when we change from hot_standby_feedback=on to hot_standby_feedback=off.
commit : 2dde11a632d3fe309b5af5480d01a0a3028f7f64 author : Simon Riggs <simon@2ndQuadrant.com> date : Tue, 15 Jul 2014 14:40:23 +0100 committer: Simon Riggs <simon@2ndQuadrant.com> date : Tue, 15 Jul 2014 14:40:23 +0100
doc: small fixes for REINDEX reference page
commit : f18858dc72daf64bedb4bfc946e496fa11e972c9 author : Peter Eisentraut <email@example.com> date : Mon, 14 Jul 2014 20:37:00 -0400 committer: Peter Eisentraut <firstname.lastname@example.org> date : Mon, 14 Jul 2014 20:37:00 -0400
From: Josh Kupershmidt <email@example.com>
Add autocompletion of locale keywords for CREATE DATABASE
commit : 2ead596c2a94a053501fbb0282134c6cdc91b1a8 author : Magnus Hagander <firstname.lastname@example.org> date : Sat, 12 Jul 2014 14:19:57 +0200 committer: Magnus Hagander <email@example.com> date : Sat, 12 Jul 2014 14:19:57 +0200
Adds support for autocomplete of LC_COLLATE and LC_CTYPE to the CREATE DATABASE command in psql.
Fix bug with whole-row references to append subplans.
commit : 261f954e7a5861e1706a1e77f5d44c57335d37a6 author : Tom Lane <firstname.lastname@example.org> date : Fri, 11 Jul 2014 19:12:45 -0400 committer: Tom Lane <email@example.com> date : Fri, 11 Jul 2014 19:12:45 -0400
ExecEvalWholeRowVar incorrectly supposed that it could "bless" the source TupleTableSlot just once per query. But if the input is coming from an Append (or, perhaps, other cases?) more than one slot might be returned over the query run. This led to "record type has not been registered" errors when a composite datum was extracted from a non-blessed slot. This bug has been there a long time; I guess it escaped notice because when dealing with subqueries the planner tends to expand whole-row Vars into RowExprs, which don't have the same problem. It is possible to trigger the problem in all active branches, though, as illustrated by the added regression test.
Don't assume a subquery's output is unique if there's a SRF in its tlist.
commit : 189bd09cbde8e80a188be57c19860384ab627ada author : Tom Lane <firstname.lastname@example.org> date : Tue, 8 Jul 2014 14:03:23 -0400 committer: Tom Lane <email@example.com> date : Tue, 8 Jul 2014 14:03:23 -0400
While the x output of "select x from t group by x" can be presumed unique, this does not hold for "select x, generate_series(1,10) from t group by x", because we may expand the set-returning function after the grouping step. (Perhaps that should be re-thought; but considering all the other oddities involved with SRFs in targetlists, it seems unlikely we'll change it.) Put a check in query_is_distinct_for() so it's not fooled by such cases. Back-patch to all supported branches. David Rowley
pg_upgrade: allow upgrades for new-only TOAST tables
commit : 759c9fb631fbb3a1d28b24979e0d512e3a571d5c author : Bruce Momjian <firstname.lastname@example.org> date : Mon, 7 Jul 2014 13:24:08 -0400 committer: Bruce Momjian <email@example.com> date : Mon, 7 Jul 2014 13:24:08 -0400
Previously, when the calculation of the need for TOAST tables changed, pg_upgrade could not handle cases where the new cluster needed a TOAST table and the old cluster did not. (It already handled the opposite case.) This fixes the "OID mismatch" error typically generated in this case. Backpatch through 9.2.
Add some errdetail to checkRuleResultList().
commit : 981518ea19325d9949c80aea4627a2b935b4ffee author : Tom Lane <firstname.lastname@example.org> date : Wed, 2 Jul 2014 14:20:40 -0400 committer: Tom Lane <email@example.com> date : Wed, 2 Jul 2014 14:20:40 -0400
This function wasn't originally thought to be really user-facing, because converting a table to a view isn't something we expect people to do manually. So not all that much effort was spent on the error messages; in particular, while the code will complain that you got the column types wrong it won't say exactly what they are. But since we repurposed the code to also check compatibility of rule RETURNING lists, it's definitely user-facing. It now seems worthwhile to add errdetail messages showing exactly what the conflict is when there's a mismatch of column names or types. This is prompted by bug #10836 from Matthias Raffelsieper, which might have been forestalled if the error message had reported the wrong column type as being "record". Per Alvaro's advice, back-patch to branches before 9.4, but resist the temptation to rephrase any existing strings there. Adding new strings is not really a translation degradation; anyway having the info presented in English is better than not having it at all.
Fix inadequately-sized output buffer in contrib/unaccent.
commit : c66256b9bd34a4af477eb3dc558ac8f46727f2f3 author : Tom Lane <firstname.lastname@example.org> date : Tue, 1 Jul 2014 11:22:53 -0400 committer: Tom Lane <email@example.com> date : Tue, 1 Jul 2014 11:22:53 -0400
The output buffer size in unaccent_lexize() was calculated as input string length times pg_database_encoding_max_length(), which effectively assumes that replacement strings aren't more than one character. While that was all that we previously documented it to support, the code actually has always allowed replacement strings of arbitrary length; so if you tried to make use of longer strings, you were at risk of buffer overrun. To fix, use an expansible StringInfo buffer instead of trying to determine the maximum space needed a-priori. This would be a security issue if unaccent rules files could be installed by unprivileged users; but fortunately they can't, so in the back branches the problem can be labeled as improper configuration by a superuser. Nonetheless, a memory stomp isn't a nice way of reacting to improper configuration, so let's back-patch the fix.
Don't prematurely free the BufferAccessStrategy in pgstat_heap().
commit : f6d6b7b1e7eac2aa049bbb1e41c468fbbf5b7fef author : Noah Misch <firstname.lastname@example.org> date : Mon, 30 Jun 2014 16:59:19 -0400 committer: Noah Misch <email@example.com> date : Mon, 30 Jun 2014 16:59:19 -0400
This function continued to use it after heap_endscan() freed it. In passing, don't explicitly create a strategy here. Instead, use the one created by heap_beginscan_strat(), if any. Back-patch to 9.2, where use of a BufferAccessStrategy here was introduced.
Back-patch "Fix EquivalenceClass processing for nested append relations".
commit : 0cf16686bcc97c9be083880617625bac3c088803 author : Tom Lane <firstname.lastname@example.org> date : Thu, 26 Jun 2014 10:41:06 -0700 committer: Tom Lane <email@example.com> date : Thu, 26 Jun 2014 10:41:06 -0700
When we committed a87c729153e372f3731689a7be007bc2b53f1410, we somehow failed to notice that it didn't merely improve plan quality for expression indexes; there were very closely related cases that failed outright with "could not find pathkey item to sort". The failing cases seem to be those where the planner was already capable of selecting a MergeAppend plan, and there was inheritance involved: the lack of appropriate eclass child members would prevent prepare_sort_from_pathkeys() from succeeding on the MergeAppend's child plan nodes for inheritance child tables. Accordingly, back-patch into 9.1 through 9.3, along with an extra regression test case covering the problem. Per trouble report from Michael Glaesemann.
Remove obsolete example of CSV log file name from log_filename document.
commit : 4ee459458eacf3dd4b448eb9357eec6299a65f62 author : Fujii Masao <firstname.lastname@example.org> date : Thu, 26 Jun 2014 14:27:27 +0900 committer: Fujii Masao <email@example.com> date : Thu, 26 Jun 2014 14:27:27 +0900
7380b63 changed log_filename so that the epoch was not appended to it when no format specifier is given. But the example of a CSV log file name with the epoch was still left in the log_filename documentation. This commit removes that obsolete example. It also documents the defaults of log_directory and log_filename. Backpatch to all supported versions. Christoph Berg
Don't allow foreign tables with OIDs.
commit : 1c9f9e888fc2d86446f445d768ff076b1643e167 author : Heikki Linnakangas <firstname.lastname@example.org> date : Tue, 24 Jun 2014 12:31:36 +0300 committer: Heikki Linnakangas <email@example.com> date : Tue, 24 Jun 2014 12:31:36 +0300
The syntax doesn't let you specify "WITH OIDS" for foreign tables, but it was still possible with default_with_oids=true. But the rest of the system, including pg_dump, isn't prepared to handle foreign tables with OIDs properly. Backpatch down to 9.1, where foreign tables were introduced. It's possible that there are databases out there that already have foreign tables with OIDs. There isn't much we can do about that, but at least we can prevent them from being created in the future. Patch by Etsuro Fujita, reviewed by Hadi Moshayedi.
Fix documentation template for CREATE TRIGGER.
commit : 07353de4fbcd7ee93a445dc6bf2bb32f05fffef1 author : Kevin Grittner <firstname.lastname@example.org> date : Sat, 21 Jun 2014 09:17:36 -0500 committer: Kevin Grittner <email@example.com> date : Sat, 21 Jun 2014 09:17:36 -0500
By using curly braces, the template had specified that one of "NOT DEFERRABLE", "INITIALLY IMMEDIATE", or "INITIALLY DEFERRED" was required on any CREATE TRIGGER statement, which is not accurate. Change to square brackets makes that optional. Backpatch to 9.1, where the error was introduced.
Clean up data conversion short-lived memory context.
commit : 3e2cfa42f9b9ddfb469f0f56d78377b1864eb894 author : Joe Conway <firstname.lastname@example.org> date : Fri, 20 Jun 2014 12:23:05 -0700 committer: Joe Conway <email@example.com> date : Fri, 20 Jun 2014 12:23:05 -0700
dblink uses a short-lived data conversion memory context. However it was not deleted when no longer needed, leading to a noticeable memory leak under some circumstances. Plug the hole, along with minor refactoring. Backpatch to 9.2 where the leak was introduced. Report and initial patch by MauMau. Reviewed/modified slightly by Tom Lane and me.
Avoid leaking memory while evaluating arguments for a table function.
commit : b568d383610ce1777587ca26b1fab5571a1c8a6f author : Tom Lane <firstname.lastname@example.org> date : Thu, 19 Jun 2014 22:13:51 -0400 committer: Tom Lane <email@example.com> date : Thu, 19 Jun 2014 22:13:51 -0400
ExecMakeTableFunctionResult evaluated the arguments for a function-in-FROM in the query-lifespan memory context. This is insignificant in simple cases where the function relation is scanned only once; but if the function is in a sub-SELECT or is on the inside of a nested loop, any memory consumed during argument evaluation can add up quickly. (The potential for trouble here had been foreseen long ago, per existing comments; but we'd not previously seen a complaint from the field about it.) To fix, create an additional temporary context just for this purpose. Per an example from MauMau. Back-patch to all active branches.
Make pqsignal() available to pg_regress of ECPG and isolation suites.
commit : 0ae841a98c21c53901d5bc9a9323a8cc800364f6 author : Noah Misch <firstname.lastname@example.org> date : Sat, 14 Jun 2014 10:52:25 -0400 committer: Noah Misch <email@example.com> date : Sat, 14 Jun 2014 10:52:25 -0400
Commit 453a5d91d49e4d35054f92785d830df4067e10c1 made it available to the src/test/regress build of pg_regress, but all pg_regress builds need the same treatment. Patch 9.2 through 8.4; in 9.3 and later, pg_regress gets pqsignal() via libpgport.
Secure Unix-domain sockets of "make check" temporary clusters.
commit : 453a5d91d49e4d35054f92785d830df4067e10c1 author : Noah Misch <firstname.lastname@example.org> date : Sat, 14 Jun 2014 09:41:13 -0400 committer: Noah Misch <email@example.com> date : Sat, 14 Jun 2014 09:41:13 -0400
Any OS user able to access the socket can connect as the bootstrap superuser and proceed to execute arbitrary code as the OS user running the test. Protect against that by placing the socket in a temporary, mode-0700 subdirectory of /tmp. The pg_regress-based test suites and the pg_upgrade test suite were vulnerable; the $(prove_check)-based test suites were already secure. Back-patch to 8.4 (all supported versions). The hazard remains wherever the temporary cluster accepts TCP connections, notably on Windows. As a convenient side effect, this lets testing proceed smoothly in builds that override DEFAULT_PGSOCKET_DIR. Popular non-default values like /var/run/postgresql are often unwritable to the build user. Security: CVE-2014-0067
Add mkdtemp() to libpgport.
commit : a919937f112eb2f548d5f9bd1b3a7298375e6380 author : Noah Misch <firstname.lastname@example.org> date : Sat, 14 Jun 2014 09:41:13 -0400 committer: Noah Misch <email@example.com> date : Sat, 14 Jun 2014 09:41:13 -0400
This function is pervasive on free software operating systems; import NetBSD's implementation. Back-patch to 8.4, like the commit that will harness it.
Fix pg_restore's processing of old-style BLOB COMMENTS data.
commit : ce7fc4fbbca76dd39e4be31cfad72cf42adb0a76 author : Tom Lane <firstname.lastname@example.org> date : Thu, 12 Jun 2014 20:14:46 -0400 committer: Tom Lane <email@example.com> date : Thu, 12 Jun 2014 20:14:46 -0400
Prior to 9.0, pg_dump handled comments on large objects by dumping a bunch of COMMENT commands into a single BLOB COMMENTS archive object. With sufficiently many such comments, some of the commands would likely get split across bufferloads when restoring, causing failures in direct-to-database restores (though no problem would be evident in text output). This is the same type of issue we have with table data dumped as INSERT commands, and it can be fixed in the same way, by using a mini SQL lexer to figure out where the command boundaries are. Fortunately, the COMMENT commands are no more complex to lex than INSERTs, so we can just re-use the existing lexer for INSERTs. Per bug #10611 from Jacek Zalewski. Back-patch to all active branches.
Remove inadvertent copyright violation in largeobject regression test.
commit : 72adbe55b70da897b61fa2bd2e7de80286c4088d author : Tom Lane <firstname.lastname@example.org> date : Thu, 12 Jun 2014 16:51:11 -0400 committer: Tom Lane <email@example.com> date : Thu, 12 Jun 2014 16:51:11 -0400
Robert Frost is no longer with us, but his copyrights still are, so let's stop using "Stopping by Woods on a Snowy Evening" as test data before somebody decides to sue us. Wordsworth is more safely dead.
Fix ancient encoding error in hungarian.stop.
commit : 802323535788965f041b4fdaecc16025f289cb44 author : Tom Lane <firstname.lastname@example.org> date : Tue, 10 Jun 2014 22:48:16 -0400 committer: Tom Lane <email@example.com> date : Tue, 10 Jun 2014 22:48:16 -0400
When we grabbed this file off the Snowball project's website, we mistakenly supposed that it was in LATIN1 encoding, but evidently it was actually in LATIN2. This resulted in ő (o-double-acute, U+0151, which is code 0xF5 in LATIN2) being misconverted into õ (o-tilde, U+00F5), as complained of in bug #10589 from Zoltán Sörös. We'd have messed up u-double-acute too, but there aren't any of those in the file. Other characters used in the file have the same codes in LATIN1 and LATIN2, which no doubt helped hide the problem for so long. The error is not only ours: the Snowball project also was confused about which encoding is required for Hungarian. But dealing with that will require source-code changes that I'm not at all sure we'll wish to back-patch. Fixing the stopword file seems reasonably safe to back-patch however.
Fix planner bug with nested PlaceHolderVars in 9.2 (only).
commit : 187ae17300776f48b2bd9d0737923b1bf70f606e author : Tom Lane <firstname.lastname@example.org> date : Mon, 9 Jun 2014 21:28:56 -0400 committer: Tom Lane <email@example.com> date : Mon, 9 Jun 2014 21:28:56 -0400
Commit 9e7e29c75ad441450f9b8287bd51c13521641e3b fixed some problems with LATERAL references in PlaceHolderVars, one of which was that "createplan.c wasn't handling nested PlaceHolderVars properly". I failed to see that this problem might occur in older versions as well; but it can, as demonstrated in bug #10587 from Geoff Speicher. In this case the nesting occurs due to push-down of PlaceHolderVar expressions into a parameterized path. So, back-patch the relevant changes from 9e7e29c75ad4 into 9.2 where parameterized paths were introduced. (Perhaps I'm still being too myopic, but I'm hesitant to change older branches without some evidence that the case can occur there.)
Fix infinite loop when splitting inner tuples in SPGiST text indexes.
commit : 93328b2dfb9aa08e5f56a049d84df4ad3c9ff225 author : Tom Lane <firstname.lastname@example.org> date : Mon, 9 Jun 2014 16:30:46 -0400 committer: Tom Lane <email@example.com> date : Mon, 9 Jun 2014 16:30:46 -0400
Previously, the code used a node label of zero both for strings that contain no bytes beyond the inner tuple's prefix, and for cases where an "allTheSame" inner tuple has to be split to allow a string with a different next byte to be inserted into it. Failing to distinguish these cases meant that if a string ending with the current prefix needed to be inserted into an allTheSame tuple, we got into an infinite loop, because after splitting the tuple we'd descend into the child allTheSame tuple and then find we need to split again. To fix, instead use -1 and -2 as the node labels for these two cases. This requires widening the node label type from "char" to int2, but fortunately SPGiST stores all pass-by-value node label types in their Datum representation, which means that this change is transparently upward compatible so far as the on-disk representation goes. We continue to recognize zero as a dummy node label for reading purposes, but will not attempt to push new index entries down into such a label, so that the loop won't occur even when dealing with an existing index. Per report from Teodor Sigaev. Back-patch to 9.2 where the faulty code was introduced.
Fix breakages of hot standby regression test.
commit : bdc5400bc627e72a1fcf9d61459d0a34db58fca8 author : Fujii Masao <firstname.lastname@example.org> date : Fri, 6 Jun 2014 18:46:32 +0900 committer: Fujii Masao <email@example.com> date : Fri, 6 Jun 2014 18:46:32 +0900
This commit changes the HS regression test so that it uses a REPEATABLE READ transaction instead of a SERIALIZABLE one, because the SERIALIZABLE transaction isolation level is not available in HS. This commit also fixes a VACUUM/ANALYZE label mixup. This was fixed in HEAD (commit 2985e16), but it should have been back-patched to 9.1, which had introduced SSI and forbidden SERIALIZABLE transactions in HS. Amit Langote
Add defenses against running with a wrong selection of LOBLKSIZE.
commit : 4fb647827592f69d53ea5201f58dfb53aad95147 author : Tom Lane <firstname.lastname@example.org> date : Thu, 5 Jun 2014 11:31:12 -0400 committer: Tom Lane <email@example.com> date : Thu, 5 Jun 2014 11:31:12 -0400
It's critical that the backend's idea of LOBLKSIZE match the way data has actually been divided up in pg_largeobject. While we don't provide any direct way to adjust that value, doing so is a one-line source code change and various people have expressed interest recently in changing it. So, just as with TOAST_MAX_CHUNK_SIZE, it seems prudent to record the value in pg_control and cross-check that the backend's compiled-in setting matches the on-disk data. Also tweak the code in inv_api.c so that fetches from pg_largeobject explicitly verify that the length of the data field is not more than LOBLKSIZE. Formerly we just had Asserts() for that, which is no protection at all in production builds. In some of the call sites an overlength data value would translate directly to a security-relevant stack clobber, so it seems worth one extra runtime comparison to be sure. In the back branches, we can't change the contents of pg_control; but we can still make the extra checks in inv_api.c, which will offer some amount of protection against running with the wrong value of LOBLKSIZE.
Fix longstanding bug in HeapTupleSatisfiesVacuum().
commit : 315442c01156d4f6c0f766ec51da090a613d03b8 author : Andres Freund <firstname.lastname@example.org> date : Wed, 4 Jun 2014 23:25:52 +0200 committer: Andres Freund <email@example.com> date : Wed, 4 Jun 2014 23:25:52 +0200
HeapTupleSatisfiesVacuum() didn't properly discern between DELETE_IN_PROGRESS and INSERT_IN_PROGRESS for rows that have been inserted in the current transaction and deleted in an aborted subtransaction of the current backend. At the very least that caused problems for CLUSTER and CREATE INDEX in transactions that had aborting subtransactions producing rows, leading to warnings like "WARNING: concurrent delete in progress within table ...", possibly in an endless, uninterruptible loop. Instead of treating *InProgress xmins the same as *IsCurrent ones, treat them as being distinct, like the other visibility routines do. As implemented, this separation can cause a behaviour change for rows that have been inserted and deleted in another, still running, transaction: HTSV will now return INSERT_IN_PROGRESS instead of DELETE_IN_PROGRESS for those. That's both more in line with the other visibility routines and arguably more correct, the latter because an INSERT_IN_PROGRESS result will make callers look at/wait for xmin instead of xmax. The only current caller where that's possibly worse than the old behaviour is heap_prune_chain(), which now won't mark the page as prunable if a row has concurrently been inserted and deleted. That's harmless enough. As a cautionary measure, also insert an interrupt check before the gotos in IndexBuildHeapScan() that lead to the uninterruptible loop. There are other possible causes of repeated loops, like a row that several sessions try to update and all fail, and the cost of the check in the retry case is low. As this bug goes back all the way to the introduction of subtransactions in 573a71a5da, backpatch to all supported releases. Reported-By: Sandro Santilli
Make plpython_unicode regression test work in more database encodings.
commit : 658fad7ff0d1caa3b3fcb548fd6780c563f7ead6 author : Tom Lane <firstname.lastname@example.org> date : Tue, 3 Jun 2014 12:01:34 -0400 committer: Tom Lane <email@example.com> date : Tue, 3 Jun 2014 12:01:34 -0400
This test previously used a data value containing U+0080, and would therefore fail if the database encoding didn't have an equivalent to that; which only about half of our supported server encodings do. We could fall back to using some plain-ASCII character, but that seems like it's losing most of the point of the test. Instead switch to using U+00A0 (no-break space), which translates into all our supported encodings except the four in the EUC_xx family. Per buildfarm testing. Back-patch to 9.1, which is as far back as this test is expected to succeed everywhere. (9.0 has the test, but without back-patching some 9.1 code changes we could not expect to get consistent results across platforms anyway.)
Set the process latch when processing recovery conflict interrupts.
commit : f998e9940065e58596c3aba9bfa51473b46bf1ed author : Andres Freund <firstname.lastname@example.org> date : Tue, 3 Jun 2014 14:02:54 +0200 committer: Andres Freund <email@example.com> date : Tue, 3 Jun 2014 14:02:54 +0200
Because RecoveryConflictInterrupt() didn't set the process latch, anything using the latch to wait for events didn't get notified about recovery conflicts. Most latch users are never the target of recovery conflicts, which explains the lack of reports about this until now. Since 9.3, though, two possibly affected users exist: the SQL-callable pg_sleep() now uses latches to wait, and background workers are expected to use latches in their main loop. Both would currently wait until the end of WaitLatch's timeout. Fix by adding a SetLatch() to RecoveryConflictInterrupt(). It'd also be possible to fix the issue by having each latch user set set_latch_on_sigusr1. That seems failure prone though, as most of these callsites won't often receive recovery conflicts and thus will likely only be tested against normal query cancels et al. It'd also be unnecessarily verbose. Backpatch to 9.1 where latches were introduced. Arguably 9.3 would be sufficient, because that's where pg_sleep() was converted to waiting on the latch and background workers got introduced; but there could be user-level code making use of the latch pre-9.3.
PL/Python: Adjust the regression tests for Python 3.4
commit : 582b41d6e21dd1324c2bab67ace04b74591f8dd9 author : Tom Lane <firstname.lastname@example.org> date : Sun, 1 Jun 2014 15:03:18 -0400 committer: Tom Lane <email@example.com> date : Sun, 1 Jun 2014 15:03:18 -0400
Back-patch commit d0765d50f429472d00554701ac6531c84d324811 into 9.3 and 9.2, which is as far back as we previously bothered to adjust the regression tests for Python 3.3. Per gripe from Honza Horak.
On OS X, link libpython normally, ignoring the "framework" framework.
commit : 83ed4598b251f05f01fb7ea1d562b2a96830d738 author : Tom Lane <firstname.lastname@example.org> date : Fri, 30 May 2014 18:18:20 -0400 committer: Tom Lane <email@example.com> date : Fri, 30 May 2014 18:18:20 -0400
As of Xcode 5.0, Apple isn't including the Python framework as part of the SDK-level files, which means that linking to it might fail depending on whether Xcode thinks you've selected a specific SDK version. According to their Tech Note 2328, they've basically deprecated the framework method of linking to libpython and are telling people to link to the shared library normally. (I'm pretty sure this is in direct contradiction to the advice they were giving a few years ago, but whatever.) Testing says that this approach works fine at least as far back as OS X 10.4.11, so let's just rip out the framework special case entirely. We do still need a special case to decide that OS X provides a shared library at all, unfortunately (I wonder why the distutils check doesn't work ...). But this is still less of a special case than before, so it's fine. Back-patch to all supported branches, since we'll doubtless be hearing about this more as more people update to recent Xcode.
When using the OSSP UUID library, cache its uuid_t state object.
commit : 2fb9fb6614f5aa92271e9d4eef9a192cdf6c468e author : Tom Lane <firstname.lastname@example.org> date : Thu, 29 May 2014 13:51:09 -0400 committer: Tom Lane <email@example.com> date : Thu, 29 May 2014 13:51:09 -0400
The original coding in contrib/uuid-ossp created and destroyed a uuid_t object (or, in some cases, even two of them) each time it was called. This is not the intended usage: you're supposed to keep the uuid_t object around so that the library can cache its state across uses. (Other UUID libraries seem to keep equivalent state behind-the-scenes in static variables, but OSSP chose differently.) Aside from being quite inefficient, creating a new uuid_t loses knowledge of the previously generated UUID, which in theory could result in duplicate V1-style UUIDs being created on sufficiently fast machines. On at least some platforms, creating a new uuid_t also draws some entropy from /dev/urandom, leaving less for the rest of the system. This seems sufficiently unpleasant to justify back-patching this change.
Revert "Fix bogus %name-prefix option syntax in all our Bison files."
commit : 952b036054d72a6e826d37f5a847de37f10cb405 author : Tom Lane <firstname.lastname@example.org> date : Wed, 28 May 2014 19:29:05 -0400 committer: Tom Lane <email@example.com> date : Wed, 28 May 2014 19:29:05 -0400
This reverts commit 867363cbcbc3d0ccac813a74fbc8236d8873c266. It turns out that the %name-prefix syntax without "=" does not work at all in pre-2.4 Bison. We are not prepared to make such a large jump in minimum required Bison version just to suppress a warning message in a version hardly any developers are using yet. When 3.0 gets more popular, we'll figure out a way to deal with this. In the meantime, BISONFLAGS=-Wno-deprecated is recommendable for anyone using 3.0 who doesn't want to see the warning.
Fix bogus %name-prefix option syntax in all our Bison files.
commit : 867363cbcbc3d0ccac813a74fbc8236d8873c266 author : Tom Lane <firstname.lastname@example.org> date : Wed, 28 May 2014 15:41:58 -0400 committer: Tom Lane <email@example.com> date : Wed, 28 May 2014 15:41:58 -0400
%name-prefix doesn't use an "=" sign according to the Bison docs, but it silently accepted one anyway, until Bison 3.0. This was originally a typo of mine in commit 012abebab1bc72043f3f670bf32e91ae4ee04bd2, and we seem to have slavishly copied the error into all the other grammar files. Per report from Vik Fearing; analysis by Peter Eisentraut. Back-patch to all active branches, since somebody might try to build a back branch with up-to-date tools.
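The difference is a single character in each grammar file; a minimal fragment (with an illustrative prefix name, not one from a particular PostgreSQL grammar) looks like:

```yacc
/* Wrong: silently accepted before Bison 3.0, rejected by 3.0 */
%name-prefix="base_yy"

/* Right: no "=" sign, per the Bison documentation */
%name-prefix "base_yy"
```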
Ensure cleanup in case of early errors in streaming base backups
commit : dbcde0f4d62815eee8f32c17441db8f5009657db author : Magnus Hagander <firstname.lastname@example.org> date : Wed, 28 May 2014 13:03:21 +0200 committer: Magnus Hagander <email@example.com> date : Wed, 28 May 2014 13:03:21 +0200
Move the code that sends the initial status information as well as the calculation of paths inside the ENSURE_ERROR_CLEANUP block. If this code failed, we would "leak" a counter of number of concurrent backups, thereby making the system always believe it was in backup mode. This could happen if the sending failed (which it probably never did given that the small amount of data to send would never cause a flush). It is very low risk, but all operations after do_pg_start_backup should be protected.
Avoid unportable usage of sscanf(UINT64_FORMAT).
commit : 9a21ac082f5bfedac5d1a76d9c04e7c786cc242b author : Tom Lane <firstname.lastname@example.org> date : Mon, 26 May 2014 22:23:36 -0400 committer: Tom Lane <email@example.com> date : Mon, 26 May 2014 22:23:36 -0400
On Mingw, it seems that scanf() doesn't necessarily accept the same format codes that printf() does, and in particular it may fail to recognize %llu even though printf() does. Since configure only probes printf() behavior while setting up the INT64_FORMAT macros, this means it's unsafe to use those macros with scanf(). We had only one instance of such a coding pattern, in contrib/pg_stat_statements, so change that code to avoid the problem. Per buildfarm warnings. Back-patch to 9.0 where the troublesome code was introduced. Michael Paquier
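A sketch of the portable pattern: instead of trusting scanf() to honor the same length modifier that printf() was probed for, read the value with strtoull(), which C99 requires everywhere. This is an illustrative workaround, not necessarily the exact code adopted in pg_stat_statements.

```c
#include <inttypes.h>
#include <stdint.h>
#include <stdlib.h>

/* Parse an unsigned 64-bit value without sscanf(UINT64_FORMAT):
 * strtoull() is guaranteed by C99, so it sidesteps the Mingw quirk
 * where scanf() rejects %llu even though printf() accepts it. */
static uint64_t
parse_uint64(const char *s, char **endptr)
{
    return (uint64_t) strtoull(s, endptr, 10);
}
```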
Prevent auto_explain from changing the output of a user's EXPLAIN.
commit : 31f579f09ccdaaa63a65bdfba9a9803c4f563a98 author : Tom Lane <firstname.lastname@example.org> date : Tue, 20 May 2014 12:20:57 -0400 committer: Tom Lane <email@example.com> date : Tue, 20 May 2014 12:20:57 -0400
Commit af7914c6627bcf0b0ca614e9ce95d3f8056602bf, which introduced the EXPLAIN (TIMING) option, for some reason coded explain.c to look at planstate->instrument->need_timer rather than es->timing to decide whether to print timing info. However, the former flag might get set as a result of contrib/auto_explain wanting timing information. We certainly don't want activation of auto_explain to change user-visible statement behavior, so fix that. Also fix an independent bug introduced in the same patch: in the code path for a never-executed node with a machine-friendly output format, if timing was selected, it would fail to print the Actual Rows and Actual Loops items. Per bug #10404 from Tomonari Katsumata. Back-patch to 9.2 where the faulty code was introduced.
Use 0-based numbering in comments about backup blocks.
commit : 0128a7712d445e2ecc69105b34cd17f7eb6e0af5 author : Heikki Linnakangas <firstname.lastname@example.org> date : Mon, 19 May 2014 13:21:59 +0300 committer: Heikki Linnakangas <email@example.com> date : Mon, 19 May 2014 13:21:59 +0300
The macros and functions that work with backup blocks in the redo function use 0-based numbering, so let's use that consistently in the function that generates the records too. Makes it so much easier to compare the generation and replay functions. Backpatch to 9.0, where we switched from 1-based to 0-based numbering.
Initialize tsId and dbId fields in WAL record of COMMIT PREPARED.
commit : 0d4c75f4de68012bb6f3bc52ebb58234334259d2 author : Heikki Linnakangas <firstname.lastname@example.org> date : Fri, 16 May 2014 09:47:50 +0300 committer: Heikki Linnakangas <email@example.com> date : Fri, 16 May 2014 09:47:50 +0300
Commit dd428c79 added dbId and tsId to the xl_xact_commit struct but missed that prepared transaction commits reuse that struct. Fix that. Because those fields were left uninitialized, replaying a commit prepared WAL record in a hot standby node would fail to remove the relcache init file. That can lead to "could not open file" errors on the standby. The relcache init file only needs to be removed when a system table/index is rewritten in the transaction using two-phase commit, so that should be rare in practice. In HEAD, the incorrect dbId/tsId values are also used for filtering in logical replication code, causing the transaction to always be filtered out. Analysis and fix by Andres Freund. Backpatch to 9.0 where hot standby was introduced.
Fix unportable setvbuf() usage in initdb.
commit : 9601cb7bcc8f838b14e5286073b016044604790b author : Tom Lane <firstname.lastname@example.org> date : Thu, 15 May 2014 15:58:01 -0400 committer: Tom Lane <email@example.com> date : Thu, 15 May 2014 15:58:01 -0400
In yesterday's commit 2dc4f011fd61501cce507be78c39a2677690d44b, I tried to force buffering of stdout/stderr in initdb to be what it is by default when the program is run interactively on Unix (since that's how most manual testing is done). This tripped over the fact that Windows doesn't support _IOLBF mode. We dealt with that a long time ago in syslogger.c by falling back to unbuffered mode on Windows. Export that solution in port.h and use it in initdb. Back-patch to 8.4, like the previous commit.
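The shape of the fallback can be sketched as below; the macro name is illustrative of what port.h centralizes, and may not match the exported name exactly.

```c
#include <stdio.h>

/* Windows' C runtime does not support _IOLBF, so requests for
 * line-buffered output fall back to unbuffered there, as syslogger.c
 * has long done.  (Illustrative macro name.) */
#ifdef WIN32
#define LBF_MODE _IONBF
#else
#define LBF_MODE _IOLBF
#endif

static int
set_interactive_buffering(void)
{
    /* stdout line-buffered (unbuffered on Windows), stderr unbuffered */
    if (setvbuf(stdout, NULL, LBF_MODE, 0) != 0)
        return -1;
    if (setvbuf(stderr, NULL, _IONBF, 0) != 0)
        return -1;
    return 0;
}
```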
Handle duplicate XIDs in txid_snapshot.
commit : 479a36f23479417eb50b4eac74c0c0e1ce9cd7ee author : Heikki Linnakangas <firstname.lastname@example.org> date : Thu, 15 May 2014 18:29:20 +0300 committer: Heikki Linnakangas <email@example.com> date : Thu, 15 May 2014 18:29:20 +0300
The proc array can contain duplicate XIDs, when a transaction is just being prepared for two-phase commit. To cope, remove any duplicates in txid_current_snapshot(). Also ignore duplicates in the input functions, so that if e.g. you have an old pg_dump file that already contains duplicates, it will be accepted. Report and fix by Jan Wieck. Backpatch to all supported versions.
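Once the snapshot's XID array is sorted, the de-duplication itself is a simple compaction pass; a self-contained sketch with illustrative types, not the actual txid.c code:

```c
#include <stddef.h>
#include <stdint.h>

typedef uint32_t TransactionId;

/* Collapse adjacent duplicates in a sorted XID array and return the
 * new element count.  txid_current_snapshot() must do the equivalent,
 * because the proc array can transiently hold the same XID twice
 * while a transaction is being prepared for two-phase commit. */
static size_t
remove_duplicate_xids(TransactionId *xids, size_t n)
{
    size_t out = 0;

    for (size_t i = 0; i < n; i++)
    {
        if (out == 0 || xids[out - 1] != xids[i])
            xids[out++] = xids[i];
    }
    return out;
}
```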
Fix race condition in preparing a transaction for two-phase commit.
commit : d8dbeb048287f478f11eeab005ddd1e9df65a4d9 author : Heikki Linnakangas <firstname.lastname@example.org> date : Thu, 15 May 2014 16:37:50 +0300 committer: Heikki Linnakangas <email@example.com> date : Thu, 15 May 2014 16:37:50 +0300
To lock a prepared transaction's shared memory entry, we used to mark it with the XID of the backend. When the XID was no longer active according to the proc array, the entry was implicitly considered as not locked anymore. However, when preparing a transaction, the backend's proc array entry was cleared before transferring the locks (and some other state) to the prepared transaction's dummy PGPROC entry, so there was a window where another backend could finish the transaction before it was in fact fully prepared. To fix, rewrite the locking mechanism of global transaction entries. Instead of an XID, just have a simple locked-or-not flag in each entry (we store the locking backend's backend id rather than a simple boolean, but that's just for debugging purposes). The backend is responsible for explicitly unlocking the entry, and to make sure that that happens, install a callback to unlock it on abort or process exit. Backpatch to all supported versions.
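The new scheme can be sketched in miniature: a per-entry locked-by-whom field plus an explicit unlock guaranteed by an abort/exit callback. Names and the registration mechanism are illustrative, greatly simplified from twophase.c.

```c
#include <stdbool.h>
#include <stddef.h>

/* Each global transaction entry records which backend holds it locked
 * (0 = unlocked).  The holder must unlock explicitly; a callback run
 * at abort or process exit guarantees that even on error paths. */
typedef struct
{
    int locking_backend;        /* backend id of locker, or 0 */
} GlobalTransactionEntry;

static GlobalTransactionEntry *my_locked_entry;

static bool
lock_gxact(GlobalTransactionEntry *e, int backend_id)
{
    if (e->locking_backend != 0)
        return false;           /* another backend holds the entry */
    e->locking_backend = backend_id;
    my_locked_entry = e;
    return true;
}

static void
unlock_locked_gxact(void)       /* registered for abort / process exit */
{
    if (my_locked_entry != NULL)
    {
        my_locked_entry->locking_backend = 0;
        my_locked_entry = NULL;
    }
}
```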
In initdb, ensure stdout/stderr buffering behavior is what we expect.
commit : a2655a32226ac9f310beea7ee786c573f8009bd6 author : Tom Lane <firstname.lastname@example.org> date : Wed, 14 May 2014 21:13:59 -0400 committer: Tom Lane <email@example.com> date : Wed, 14 May 2014 21:13:59 -0400
Since this program may print to either stdout or stderr, the relative ordering of its messages depends on the buffering behavior of those files. Force stdout to be line-buffered and stderr to be unbuffered, ensuring that the behavior will match standard Unix interactive behavior, even when stdout and stderr are rerouted to a file. Per complaint from Tomas Vondra. The particular case he pointed out is new in HEAD, but issues of the same sort could arise in any branch with other error messages, so back-patch to all branches. I'm unsure whether we might not want to do this in other client programs as well. For the moment, just fix initdb.
Initialize padding bytes in btree_gist varbit support.
commit : 0d8d0d027723d4470a9e2571c499752aa6d8df7a author : Heikki Linnakangas <firstname.lastname@example.org> date : Tue, 13 May 2014 14:16:28 +0300 committer: Heikki Linnakangas <email@example.com> date : Tue, 13 May 2014 14:16:28 +0300
The code expands a varbit gist leaf key to a node key by copying the bit data twice in a varlen datum, as both the lower and upper key. The lower key was expanded to INTALIGN size, but the padding bytes were not initialized. That's a problem because when the lower/upper keys are compared, the padding bytes are compared too when the values are otherwise equal. That could lead to incorrect query results. REINDEX is advised for any btree_gist indexes on bit or bit varying data type, to fix any garbage padding bytes on disk. Per Valgrind, reported by Andres Freund. Backpatch to all supported versions.
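The general hazard can be shown in miniature: when a value is expanded to an aligned size and later compared bytewise, any uninitialized padding participates in the comparison. This is a hypothetical layout, not btree_gist's actual node-key structure.

```c
#include <string.h>

#define INTALIGN_UP(len) (((len) + 3) & ~((size_t) 3))

/* Copy 'len' bytes into a buffer padded out to 4-byte alignment,
 * zeroing the padding first so a later memcmp() over the full aligned
 * length cannot be perturbed by leftover garbage bytes. */
static size_t
fill_aligned_key(char *dst, const char *src, size_t len)
{
    size_t alen = INTALIGN_UP(len);

    memset(dst, 0, alen);       /* the fix: initialize padding bytes */
    memcpy(dst, src, len);
    return alen;
}
```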
Ignore config.pl and buildenv.pl in src/tools/msvc.
commit : ada2ff45ec8f29d21fc00c6fca7151c662b63651 author : Tom Lane <firstname.lastname@example.org> date : Mon, 12 May 2014 14:24:18 -0400 committer: Tom Lane <email@example.com> date : Mon, 12 May 2014 14:24:18 -0400
config.pl and buildenv.pl can be used to customize build settings when using MSVC. They should never get committed into the common source tree. Back-patch to 9.0; it looks like the rules were different in 8.4. Michael Paquier
Free PQresult on error in pg_receivexlog.
commit : e158bbb6e95afde040debf3f573fc9b60dbcf469 author : Heikki Linnakangas <firstname.lastname@example.org> date : Mon, 12 May 2014 10:17:40 +0300 committer: Heikki Linnakangas <email@example.com> date : Mon, 12 May 2014 10:17:40 +0300
The leak is fairly small and rare, but a leak nevertheless. Per Coverity report. Backpatch to 9.2, where pg_receivexlog was added. pg_basebackup shares the code, but it always exits on error, so there is no real leak.
Accept tcl 8.6 in configure's probe for tclsh.
commit : 9bc1b439ef8bf5cf444724c29b81cc525c020f40 author : Tom Lane <firstname.lastname@example.org> date : Sat, 10 May 2014 10:48:08 -0400 committer: Tom Lane <email@example.com> date : Sat, 10 May 2014 10:48:08 -0400
Usually the search would find plain "tclsh" without any trouble, but some installations might only have the version-numbered flavor of that program. No compatibility problems have been reported with 8.6, so we might as well back-patch this to all active branches. Christoph Berg
Get rid of bogus dependency on typcategory in to_json() and friends.
commit : 25c933c5c9d6323271fe2fdc67b2fe748ce1bcd4 author : Tom Lane <firstname.lastname@example.org> date : Fri, 9 May 2014 12:55:06 -0400 committer: Tom Lane <email@example.com> date : Fri, 9 May 2014 12:55:06 -0400
These functions were relying on typcategory to identify arrays and composites, which is not reliable and not the normal way to do it. Using typcategory to identify boolean, numeric types, and json itself is also pretty questionable, though the code in those cases didn't seem to be at risk of anything worse than wrong output. Instead, use the standard lsyscache functions to identify arrays and composites, and rely on a direct check of the type OID for the other cases. In HEAD, also be sure to look through domains so that a domain is treated the same as its base type for conversions to JSON. However, this is a small behavioral change; given the lack of field complaints, we won't back-patch it. In passing, refactor so that there's only one copy of the code that decides which conversion strategy to apply, not multiple copies that could (and have) gotten out of sync.
Document permissions needed for pg_database_size and pg_tablespace_size.
commit : 901202248ea465b2ffd77c4a5ec57d27908099dd author : Tom Lane <firstname.lastname@example.org> date : Thu, 8 May 2014 21:45:02 -0400 committer: Tom Lane <email@example.com> date : Thu, 8 May 2014 21:45:02 -0400
Back in 8.3, we installed permissions checks in these functions (see commits 8bc225e7990a and cc26599b7206). But we forgot to document that anywhere in the user-facing docs; it did get mentioned in the 8.3 release notes, but nobody's looking at that any more. Per gripe from Suya Huang.
Un-break ecpg test suite under --disable-integer-datetimes.
commit : 54055609e2d0b35edf236cad6a210592e145e57e author : Noah Misch <firstname.lastname@example.org> date : Thu, 8 May 2014 19:29:02 -0400 committer: Noah Misch <email@example.com> date : Thu, 8 May 2014 19:29:02 -0400
Commit 4318daecc959886d001a6e79c6ea853e8b1dfb4b broke it. The change in sub-second precision at extreme dates is normal. The inconsistent truncation vs. rounding is essentially a bug, albeit a longstanding one. Back-patch to 8.4, like the causative commit.
Protect against torn pages when deleting GIN list pages.
commit : 31633f992568ffca9a7927a9f474ad8516840b2c author : Heikki Linnakangas <firstname.lastname@example.org> date : Thu, 8 May 2014 14:43:04 +0300 committer: Heikki Linnakangas <email@example.com> date : Thu, 8 May 2014 14:43:04 +0300
To-be-deleted list pages contain no useful information, as they are being deleted, but we must still protect the writes from being torn by a crash after a partial write. To do that, re-initialize the pages on WAL replay. Jeff Janes caught this with a test program to test partial writes. Backpatch to all supported versions.
Include files copied from libpqport in .gitignore
commit : 7603744a077a8f6a878f47e389a3d41844c8adc5 author : Heikki Linnakangas <firstname.lastname@example.org> date : Thu, 8 May 2014 10:56:57 +0300 committer: Heikki Linnakangas <email@example.com> date : Thu, 8 May 2014 10:56:57 +0300
Avoid buffer bloat in libpq when server is consistently faster than client.
commit : f7672c8ce26582f250f7dce543a998ac9d1d6665 author : Tom Lane <firstname.lastname@example.org> date : Wed, 7 May 2014 21:38:41 -0400 committer: Tom Lane <email@example.com> date : Wed, 7 May 2014 21:38:41 -0400
If the server sends a long stream of data, and the server + network are consistently fast enough to force the recv() loop in pqReadData() to iterate until libpq's input buffer is full, then upon processing the last incomplete message in each bufferload we'd usually double the buffer size, due to supposing that we didn't have enough room in the buffer to finish collecting that message. After filling the newly-enlarged buffer, the cycle repeats, eventually resulting in an out-of-memory situation (which would be reported misleadingly as "lost synchronization with server"). Of course, we should not enlarge the buffer unless we still need room after discarding already-processed messages. This bug dates back quite a long time: pqParseInput3 has had the behavior since perhaps 2003, getCopyDataMessage at least since commit 70066eb1a1ad in 2008. Probably the reason it's not been isolated before is that in common environments the recv() loop would always be faster than the server (if on the same machine) or faster than the network (if not); or at least it wouldn't be slower consistently enough to let the buffer ramp up to a problematic size. The reported cases involve Windows, which perhaps has different timing behavior than other platforms. Per bug #7914 from Shin-ichi Morita, though this is different from his proposed solution. Back-patch to all supported branches.
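The corrected policy can be sketched as: compact the buffer first, and only double it if the incoming message still doesn't fit. The structure below is illustrative, not libpq's actual pqReadData/pqParseInput3 internals.

```c
#include <stdlib.h>
#include <string.h>

typedef struct
{
    char   *buf;
    size_t  size;       /* allocated length */
    size_t  start;      /* first byte not yet processed */
    size_t  end;        /* end of valid data */
} InputBuffer;

/* Make room for 'needed' more bytes.  Discarding already-processed
 * messages comes first; the buffer is doubled only when compaction is
 * insufficient, which prevents the unbounded growth described above.
 * Returns 0 on success, -1 on allocation failure. */
static int
ensure_in_buffer_room(InputBuffer *b, size_t needed)
{
    if (b->size - b->end >= needed)
        return 0;

    if (b->start > 0)
    {
        memmove(b->buf, b->buf + b->start, b->end - b->start);
        b->end -= b->start;
        b->start = 0;
        if (b->size - b->end >= needed)
            return 0;           /* compaction sufficed; no realloc */
    }

    while (b->size - b->end < needed)
        b->size *= 2;
    b->buf = realloc(b->buf, b->size);
    return (b->buf != NULL) ? 0 : -1;
}
```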
Fix failure to set ActiveSnapshot while rewinding a cursor.
commit : 022b5f2b228e2d0a658b808340bd32ba904b87f4 author : Tom Lane <firstname.lastname@example.org> date : Wed, 7 May 2014 14:25:17 -0400 committer: Tom Lane <email@example.com> date : Wed, 7 May 2014 14:25:17 -0400
ActiveSnapshot needs to be set when we call ExecutorRewind because some plan node types may execute user-defined functions during their ReScan calls (nodeLimit.c does so, at least). The wisdom of that is somewhat debatable, perhaps, but for now the simplest fix is to make sure the required context is valid. Failure to do this typically led to a null-pointer-dereference core dump, though it's possible that in more complex cases a function could be executed with the wrong snapshot leading to very subtle misbehavior. Per report from Leif Jensen. It's been broken for a long time, so back-patch to all active branches.
Fix interval test, which was broken for floating-point timestamps.
commit : dfac804a474806692e8e1fc2b65b6bc73666d32d author : Jeff Davis <firstname.lastname@example.org> date : Tue, 6 May 2014 19:35:24 -0700 committer: Jeff Davis <email@example.com> date : Tue, 6 May 2014 19:35:24 -0700
Commit 4318daecc959886d001a6e79c6ea853e8b1dfb4b introduced a test that couldn't be made consistent between integer and floating-point timestamps. It was designed to test the longest possible interval output length, so removing four zeros from the number of hours, as this patch does, is not ideal. But the test still has some utility for its original purpose, and there aren't a lot of other good options. Noah Misch suggested a different approach where we test that the output either matches what we expect from integer timestamps or what we expect from floating-point timestamps. That seemed to obscure an otherwise simple test, however. Reviewed by Tom Lane and Noah Misch.
Remove tabs after spaces in C comments
commit : 0b44914c21a008bb2f0764672eb6b15310431b3e author : Bruce Momjian <firstname.lastname@example.org> date : Tue, 6 May 2014 11:26:27 -0400 committer: Bruce Momjian <email@example.com> date : Tue, 6 May 2014 11:26:27 -0400
This was not changed in HEAD, but will be done later as part of a pgindent run. Future pgindent runs will also do this. Report by Tom Lane. Backpatch through all supported branches, but not HEAD.
Fix use of free in walsender error handling after a sysid mismatch.
commit : 17b04a15806d8e8b4cc3013244f4837c02d6baf4 author : Heikki Linnakangas <firstname.lastname@example.org> date : Tue, 6 May 2014 15:14:51 +0300 committer: Heikki Linnakangas <email@example.com> date : Tue, 6 May 2014 15:14:51 +0300
Found via valgrind. The bug has existed since the introduction of the walsender, so backpatch to 9.0. Andres Freund
Fix handling of array of char pointers in ecpglib.
commit : 3a024c1104248a444caab5fa11b4442a1587a90b author : Michael Meskes <firstname.lastname@example.org> date : Tue, 6 May 2014 13:04:30 +0200 committer: Michael Meskes <email@example.com> date : Tue, 6 May 2014 13:04:30 +0200
When an array of char * was used as the target for a FETCH statement returning more than one row, it tried to store all the results in the first element. Instead it should store into the array of char pointers at the right offset, use the address rather than the value of the C variable while reading the array, and treat such a variable as char **, instead of char *, for pointer arithmetic. Patch by Ashutosh Bapat <firstname.lastname@example.org>
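In miniature, the corrected addressing looks like this; the helper is illustrative, not ecpglib's actual storage routine.

```c
/* Sketch of the fix: when the FETCH target is an array of char
 * pointers (char **), row N's value must be assigned through element
 * N, not smashed into element 0. */
static void
store_fetched_rows(char **target, char **row_values, int nrows)
{
    for (int i = 0; i < nrows; i++)
    {
        /* pointer arithmetic on char **: advance by whole pointers */
        *(target + i) = row_values[i];
    }
}
```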
Fix possible cache invalidation failure in ReceiveSharedInvalidMessages.
commit : c8fbeeb45ef835a5e513ad167fd984baa7dc00d8 author : Tom Lane <email@example.com> date : Mon, 5 May 2014 14:43:46 -0400 committer: Tom Lane <firstname.lastname@example.org> date : Mon, 5 May 2014 14:43:46 -0400
Commit fad153ec45299bd4d4f29dec8d9e04e2f1c08148 modified sinval.c to reduce the number of calls into sinvaladt.c (which require taking a shared lock) by keeping a local buffer of collected-but-not-yet-processed messages. However, if processing of the last message in a batch resulted in a recursive call to ReceiveSharedInvalidMessages, we could overwrite that message with a new one while the outer invalidation function was still working on it. This would be likely to lead to invalidation of the wrong cache entry, allowing subsequent processing to use stale cache data. The fix is just to make a local copy of each message while we're processing it. Spotted by Andres Freund. Back-patch to 8.4 where the bug was introduced.
Fix "quiet inline" configure test for newer clang compilers.
commit : 5788052f3c6b2ac3d2bd761cbb333e50dad8670c author : Tom Lane <email@example.com> date : Fri, 2 May 2014 15:30:35 -0400 committer: Tom Lane <firstname.lastname@example.org> date : Fri, 2 May 2014 15:30:35 -0400
This test used to just define an unused static inline function and check whether that causes a warning. But newer clang versions warn about unused static inline functions when defined inside a .c file, but not when defined in an included header, which is the case we care about. Change the test to cope. Andres Freund
Fix failure to detoast fields in composite elements of structured types.
commit : 8c43980a18c9801d693f39f40a5a26a51785d5fc author : Tom Lane <email@example.com> date : Thu, 1 May 2014 15:19:14 -0400 committer: Tom Lane <firstname.lastname@example.org> date : Thu, 1 May 2014 15:19:14 -0400
If we have an array of records stored on disk, the individual record fields cannot contain out-of-line TOAST pointers: the tuptoaster.c mechanisms are only prepared to deal with TOAST pointers appearing in top-level fields of a stored row. The same applies for ranges over composite types, nested composites, etc. However, the existing code only took care of expanding sub-field TOAST pointers for the case of nested composites, not for other structured types containing composites. For example, given a command such as UPDATE tab SET arraycol = ARRAY[(ROW(x,42)::mycompositetype)] ... where x is a direct reference to a field of an on-disk tuple, if that field is long enough to be toasted out-of-line then the TOAST pointer would be inserted as-is into the array column. If the source record for x is later deleted, the array field value would become a dangling pointer, leading to errors along the line of "missing chunk number 0 for toast value ..." when the value is referenced. A reproducible test case for this was provided by Jan Pecek, but it seems likely that some of the "missing chunk number" reports we've heard in the past were caused by similar issues. Code-wise, the problem is that PG_DETOAST_DATUM() is not adequate to produce a self-contained Datum value if the Datum is of composite type. Seen in this light, the problem is not just confined to arrays and ranges, but could also affect some other places where detoasting is done in that way, for example form_index_tuple(). I tried teaching the array code to apply toast_flatten_tuple_attribute() along with PG_DETOAST_DATUM() when the array element type is composite, but this was messy and imposed extra cache lookup costs whether or not any TOAST pointers were present, indeed sometimes when the array element type isn't even composite (since sometimes it takes a typcache lookup to find that out). The idea of extending that approach to all the places that currently use PG_DETOAST_DATUM() wasn't attractive at all.
This patch instead solves the problem by decreeing that composite Datum values must not contain any out-of-line TOAST pointers in the first place; that is, we expand out-of-line fields at the point of constructing a composite Datum, not at the point where we're about to insert it into a larger tuple. This rule is applied only to true composite Datums, not to tuples that are being passed around the system as tuples, so it's not as invasive as it might sound at first. With this approach, the amount of code that has to be touched for a full solution is greatly reduced, and added cache lookup costs are avoided except when there actually is a TOAST pointer that needs to be inlined. The main drawback of this approach is that we might sometimes dereference a TOAST pointer that will never actually be used by the query, imposing a rather large cost that wasn't there before. On the other side of the coin, if the field value is used multiple times then we'll come out ahead by avoiding repeat detoastings. Experimentation suggests that common SQL coding patterns are unaffected either way, though. Applications that are very negatively affected could be advised to modify their code to not fetch columns they won't be using. In future, we might consider reverting this solution in favor of detoasting only at the point where data is about to be stored to disk, using some method that can drill down into multiple levels of nested structured types. That will require defining new APIs for structured types, though, so it doesn't seem feasible as a back-patchable fix. Note that this patch changes HeapTupleGetDatum() from a macro to a function call; this means that any third-party code using that macro will not get protection against creating TOAST-pointer-containing Datums until it's recompiled. The same applies to any uses of PG_RETURN_HEAPTUPLEHEADER(). 
It seems likely that this is not a big problem in practice: most of the tuple-returning functions in core and contrib produce outputs that could not possibly be toasted anyway, and the same probably holds for third-party extensions. This bug has existed since TOAST was invented, so back-patch to all supported branches.
Check for interrupts and stack overflow during rule/view dumps.
commit : 920fbc1b4347dfca940f7aacf28d9e22294477c1 author : Tom Lane <email@example.com> date : Wed, 30 Apr 2014 13:46:19 -0400 committer: Tom Lane <firstname.lastname@example.org> date : Wed, 30 Apr 2014 13:46:19 -0400
Since ruleutils.c recurses, it could be driven to stack overflow by deeply nested constructs. Very large queries might also take long enough to deparse that a check for interrupts seems like a good idea. Stick appropriate tests into a couple of key places. Noted by Greg Stark. Back-patch to all supported branches.
Add missing SYSTEMQUOTEs
commit : e2558e016ea389501dc0aa96aa9db632d4649d87 author : Heikki Linnakangas <email@example.com> date : Wed, 30 Apr 2014 10:34:15 +0300 committer: Heikki Linnakangas <firstname.lastname@example.org> date : Wed, 30 Apr 2014 10:34:15 +0300
Some popen() calls were missing SYSTEMQUOTEs, which caused initdb and pg_upgrade to fail on Windows, if the installation path contained both spaces and @ signs. Patch by Nikhil Deshpande. Backpatch to all supported versions.
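For context, SYSTEMQUOTE is the macro PostgreSQL (pre-9.4) wrapped around whole command strings passed to system()/popen(), because cmd.exe strips one level of quoting. A sketch of its use follows; the helper name is hypothetical.

```c
#include <stdio.h>

/* On Windows the entire command gets an extra outer pair of quotes so
 * that an already-quoted program path containing spaces (or @ signs)
 * survives cmd.exe's re-parsing; elsewhere the macro is empty. */
#ifdef WIN32
#define SYSTEMQUOTE "\""
#else
#define SYSTEMQUOTE ""
#endif

static int
build_command(char *dst, size_t dstlen, const char *prog, const char *arg)
{
    return snprintf(dst, dstlen, SYSTEMQUOTE "\"%s\" \"%s\"" SYSTEMQUOTE,
                    prog, arg);
}
```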
Improve planner to drop constant-NULL inputs of AND/OR where it's legal.
commit : 0901dbab338f3161b50c4c60ef669ed393c9e308 author : Tom Lane <email@example.com> date : Tue, 29 Apr 2014 13:12:33 -0400 committer: Tom Lane <firstname.lastname@example.org> date : Tue, 29 Apr 2014 13:12:33 -0400
In general we can't discard constant-NULL inputs, since they could change the result of the AND/OR to be NULL. But at top level of WHERE, we do not need to distinguish a NULL result from a FALSE result, so it's okay to treat NULL as FALSE and then simplify AND/OR accordingly. This is a very ancient oversight, but in 9.2 and later it can lead to failure to optimize queries that previous releases did optimize, as a result of more aggressive parameter substitution rules making it possible to reduce more subexpressions to NULL constants. This is the root cause of bug #10171 from Arnold Scheffler. We could alternatively have fixed that by teaching orclauses.c to ignore constant-NULL OR arms, but it seems better to get rid of them globally. I resisted the temptation to back-patch this change into all active branches, but it seems appropriate to back-patch as far as 9.2 so that there will not be performance regressions of the kind shown in this bug.
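The underlying logic: in SQL's three-valued logic a constant NULL can't be discarded in general, but a WHERE clause keeps a row only on TRUE, so at the top level NULL and FALSE are interchangeable. A small three-valued-logic sketch (not the planner's actual clause representation):

```c
typedef enum { TV_FALSE = 0, TV_NULL = 1, TV_TRUE = 2 } TruthValue;

/* SQL three-valued AND: FALSE dominates, then NULL, then TRUE. */
static TruthValue
tv_and(TruthValue a, TruthValue b)
{
    if (a == TV_FALSE || b == TV_FALSE)
        return TV_FALSE;
    if (a == TV_NULL || b == TV_NULL)
        return TV_NULL;
    return TV_TRUE;
}

/* A WHERE clause keeps the row only when the result is TRUE, so NULL
 * behaves exactly like FALSE at the top level -- which is what lets
 * the planner fold a constant-NULL input of a top-level AND/OR to
 * constant FALSE before simplifying. */
static int
where_keeps_row(TruthValue v)
{
    return v == TV_TRUE;
}
```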
Fix two bugs in WAL-logging of GIN pending-list pages.
commit : 1e96eff43ac7f1d59800ecd78e09185689ca6704 author : Heikki Linnakangas <email@example.com> date : Mon, 28 Apr 2014 16:12:45 +0300 committer: Heikki Linnakangas <firstname.lastname@example.org> date : Mon, 28 Apr 2014 16:12:45 +0300
In writeListPage, never take a full-page image of the page, because we have all the information required to re-initialize it in the WAL record anyway. Before this fix, a full-page image was always generated, unless full_page_writes=off, because when the page is initialized its LSN is always 0. In stable branches, keep the code to restore the backup blocks if they exist, in case the WAL was generated with an older minor version, but in master Assert that there are no full-page images. In the redo routine, add missing "off++". Otherwise the tuples are added to the page in reverse order. That happens to be harmless because we always scan and remove all the tuples together, but it was clearly wrong. Also, it was masked by the first bug unless full_page_writes=off, because the page was always restored from a full-page image. Backpatch to all supported versions.
Reset pg_stat_activity.xact_start during PREPARE TRANSACTION.
commit : ea9ac774198ad097d2d322f81c39bd8cdfa8b04d author : Tom Lane <email@example.com> date : Thu, 24 Apr 2014 13:29:48 -0400 committer: Tom Lane <firstname.lastname@example.org> date : Thu, 24 Apr 2014 13:29:48 -0400
Once we've completed a PREPARE, our session is not running a transaction, so its entry in pg_stat_activity should show xact_start as null, rather than leaving the value as the start time of the now-prepared transaction. I think possibly this oversight was triggered by faulty extrapolation from the adjacent comment that says PrepareTransaction should not call AtEOXact_PgStat, so tweak the wording of that comment. Noted by Andres Freund while considering bug #10123 from Maxim Boguk, although this error doesn't seem to explain that report. Back-patch to all active branches.
Fix incorrect pg_proc.proallargtypes entries for two built-in functions.
commit : f9cd2b7824f627069f81cbb47d767af039d66a0d author : Tom Lane <email@example.com> date : Wed, 23 Apr 2014 21:21:12 -0400 committer: Tom Lane <firstname.lastname@example.org> date : Wed, 23 Apr 2014 21:21:12 -0400
pg_sequence_parameters() and pg_identify_object() have had incorrect proallargtypes entries since 9.1 and 9.3 respectively. This was mostly masked by the correct information in proargtypes, but a few operations such as pg_get_function_arguments() (and thus psql's \df display) would show the wrong data types for these functions' input parameters. In HEAD, fix the wrong info, bump catversion, and add an opr_sanity regression test to catch future mistakes of this sort. In the back branches, just fix the wrong info so that installations initdb'd with future minor releases will have the right data. We can't force an initdb, and it doesn't seem like a good idea to add a regression test that will fail on existing installations. Andres Freund
Fix typos in comment.
commit : 6c5cba8e0b0dee9d1802d3477b9b882782e95ab1 author : Heikki Linnakangas <email@example.com> date : Wed, 23 Apr 2014 12:56:41 +0300 committer: Heikki Linnakangas <firstname.lastname@example.org> date : Wed, 23 Apr 2014 12:56:41 +0300
pg_stat_statements forgot to let previous occupant of hook get control too.
commit : b4eb2d5cc044c32560b3e5cfc8b878078d9ad77f author : Tom Lane <email@example.com> date : Mon, 21 Apr 2014 13:28:13 -0400 committer: Tom Lane <firstname.lastname@example.org> date : Mon, 21 Apr 2014 13:28:13 -0400
pgss_post_parse_analyze() neglected to pass the call on to any earlier occupant of the post_parse_analyze_hook. There are no other users of that hook in contrib/, and most likely none in the wild either, so this is probably just a latent bug. But it's a bug nonetheless, so back-patch to 9.2 where this code was introduced.
Fix unused-variable warning on Windows.
commit : c6b55bec3fb232c81ba4eba30856b222ad7a8cc4 author : Tom Lane <email@example.com> date : Thu, 17 Apr 2014 16:12:24 -0400 committer: Tom Lane <firstname.lastname@example.org> date : Thu, 17 Apr 2014 16:12:24 -0400
Introduced in 585bca39: msgid is not used in the Windows code path. Also adjust comments a tad (mostly to keep pgindent from messing them up). David Rowley
pgcrypto: fix memset() calls that might be optimized away
commit : ea8725a8b376b46324b47616432670769cae95ed author : Bruce Momjian <email@example.com> date : Thu, 17 Apr 2014 12:37:53 -0400 committer: Bruce Momjian <firstname.lastname@example.org> date : Thu, 17 Apr 2014 12:37:53 -0400
Specifically, on-stack memset() might be removed, so:
* Replace memset() with px_memset()
* Add px_memset to copy_crlf()
* Add px_memset to pgp-s2k.c
Patch by Marko Kreen. Report by PVS-Studio. Backpatch through 8.4.
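The hazard being fixed is dead-store elimination: a compiler that can prove a buffer is never read after a memset() may delete the call, leaving secrets on the stack. A minimal sketch of the px_memset() idea (the name and shape here are illustrative, not the pgcrypto source) is to route the call through a volatile function pointer so the store cannot be proven dead:

```c
#include <stddef.h>
#include <string.h>

/*
 * Hypothetical sketch: a volatile function pointer to memset() prevents
 * the compiler from eliding the clearing store as a dead write.
 */
static void *(*const volatile force_memset) (void *, int, size_t) = memset;

static void *
px_memset_sketch(void *ptr, int c, size_t len)
{
	return force_memset(ptr, c, len);
}

/* A key-handling function whose plain memset() a compiler could remove. */
static char
use_secret(void)
{
	char		key[16] = "supersecretkey!";
	char		first = key[0];

	px_memset_sketch(key, 0, sizeof(key));	/* survives optimization */
	return first;
}
```

Modern libcs offer explicit_bzero()/memset_s() for the same purpose; the function-pointer trick is the portable fallback.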
Attempt to get plpython regression tests working again for MSVC builds.
commit : d3c7498042658884d388c1e86f060d2bf563eedd author : Andrew Dunstan <email@example.com> date : Wed, 16 Apr 2014 13:35:46 -0400 committer: Andrew Dunstan <firstname.lastname@example.org> date : Wed, 16 Apr 2014 13:35:46 -0400
This has probably been broken for quite a long time. Buildfarm member currawong's current results suggest that it's been broken since 9.1, so backpatch this to that branch. This only supports Python 2; I will handle Python 3 separately, but this is a fairly simple fix.
Use AF_UNSPEC not PF_UNSPEC in getaddrinfo calls.
commit : bac05d622dd9a1186cb81ac09b04736b815b6226 author : Tom Lane <email@example.com> date : Wed, 16 Apr 2014 13:21:00 -0400 committer: Tom Lane <firstname.lastname@example.org> date : Wed, 16 Apr 2014 13:21:00 -0400
According to the Single Unix Spec and assorted man pages, you're supposed to use the constants named AF_xxx when setting ai_family for a getaddrinfo call. In a few places we were using PF_xxx instead. Use of PF_xxx appears to be an ancient BSD convention that was not adopted by later standardization. On BSD and most later Unixen, it doesn't matter much because those constants have equivalent values anyway; but nonetheless this code is not per spec. In the same vein, replace PF_INET by AF_INET in one socket() call, which wasn't even consistent with the other socket() call in the same function, let alone the remainder of our code. Per investigation of a Cygwin trouble report from Marco Atzeri. It's probably a long shot that this will fix his issue, but it's wrong in any case.
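The per-spec usage looks like this minimal sketch (names are illustrative): ai_family takes the AF_* constants, not the old BSD PF_* spellings. AI_NUMERICHOST is used here only so the lookup resolves the literal address locally, without touching DNS:

```c
#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netdb.h>

/*
 * Spec-conformant getaddrinfo() call: the hints' ai_family field is set
 * with an AF_* constant (here AF_UNSPEC, "any family"), per SUS.
 */
static int
lookup_numeric(const char *host, struct addrinfo **res)
{
	struct addrinfo hints;

	memset(&hints, 0, sizeof(hints));
	hints.ai_family = AF_UNSPEC;	/* AF_*, not PF_* */
	hints.ai_socktype = SOCK_STREAM;
	hints.ai_flags = AI_NUMERICHOST;	/* literal address, no DNS */

	return getaddrinfo(host, NULL, &hints, res);
}
```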
Fix timeout in LDAP lookup of libpq connection parameters
commit : b764080eed128e4ab15db801923ef13c45b506b2 author : Magnus Hagander <email@example.com> date : Wed, 16 Apr 2014 17:18:02 +0200 committer: Magnus Hagander <firstname.lastname@example.org> date : Wed, 16 Apr 2014 17:18:02 +0200
Bind attempts to an LDAP server should time out after two seconds, allowing additional lines in the service control file to be parsed (which provide a fallback to a secondary LDAP server or default options). The existing code failed to enforce that timeout during TCP connect, resulting in a hang far longer than two seconds if the LDAP server does not respond. Laurenz Albe
check socket creation errors against PGINVALID_SOCKET
commit : 966f015b60d90f6450cbadcbfa89e21408fe52f9 author : Bruce Momjian <email@example.com> date : Wed, 16 Apr 2014 10:45:48 -0400 committer: Bruce Momjian <firstname.lastname@example.org> date : Wed, 16 Apr 2014 10:45:48 -0400
Previously, in some places, socket creation errors were checked for by testing for a negative value, which does not work on Windows because sockets are unsigned there. This masked socket creation errors on Windows. Backpatch through 9.0; 8.4 doesn't have the infrastructure to fix this.
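A sketch of the portable check, under the assumption of a non-Windows build (the constant's value below matches the Unix definition; on Windows it would be INVALID_SOCKET, which an unsigned SOCKET can never satisfy via `sock < 0`):

```c
#include <sys/socket.h>

/* Illustrative stand-in for the real PGINVALID_SOCKET definition. */
#define PGINVALID_SOCKET (-1)

/*
 * Portable failure test: compare against PGINVALID_SOCKET rather than
 * testing "sock < 0", which is vacuously false for an unsigned SOCKET.
 */
static int
open_tcp_socket(void)
{
	int			sock = socket(AF_INET, SOCK_STREAM, 0);

	if (sock == PGINVALID_SOCKET)	/* not: sock < 0 */
		return -1;
	return sock;
}
```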
Use correctly-sized buffer when zero-filling a WAL file.
commit : a4c4e0bf60f0f8dbe2556fabd94eb827ae376032 author : Heikki Linnakangas <email@example.com> date : Wed, 16 Apr 2014 10:21:09 +0300 committer: Heikki Linnakangas <firstname.lastname@example.org> date : Wed, 16 Apr 2014 10:21:09 +0300
I mixed up BLCKSZ and XLOG_BLCKSZ when I changed the way the buffer is allocated a couple of weeks ago. With the default settings, they are both 8k, but they can be changed at compile time.
Several fixes to array handling in ecpg.
commit : 2b3136de9ed80cf89287651048d1597ebc1b4b6d author : Michael Meskes <email@example.com> date : Wed, 9 Apr 2014 11:21:46 +0200 committer: Michael Meskes <firstname.lastname@example.org> date : Wed, 9 Apr 2014 11:21:46 +0200
Patches by Ashutosh Bapat <email@example.com>
Fix hot standby bug with GiST scans.
commit : 02b9fd73ee576260355a2e2bd489f93dbcbdb37f author : Heikki Linnakangas <firstname.lastname@example.org> date : Tue, 8 Apr 2014 14:47:24 +0300 committer: Heikki Linnakangas <email@example.com> date : Tue, 8 Apr 2014 14:47:24 +0300
Don't reset the rightlink of a page when replaying a page update record. This was a leftover from pre-hot standby days, when it was not possible to have scans concurrent with WAL replay. Resetting the rightlink was not necessary back then either, but it was done for the sake of tidiness. But with hot standby, it's wrong, because a concurrent scan might still need it. Backpatch to all versions with hot standby, 9.0 and above.
Assert that strong-lock count is >0 everywhere it's decremented.
commit : 74cf8028411700474b5464d62dafdc32d48583aa author : Robert Haas <firstname.lastname@example.org> date : Mon, 7 Apr 2014 10:59:42 -0400 committer: Robert Haas <email@example.com> date : Mon, 7 Apr 2014 10:59:42 -0400
The one existing assertion of this type has tripped a few times in the buildfarm lately, but it's not clear whether the problem is really originating there or whether it's leftovers from a trip through one of the other two paths that lack a matching assertion. So add one. Since the same bug(s) most likely exist(s) in the back-branches also, back-patch to 9.2, where the fast-path lock mechanism was added.
Block signals earlier during postmaster startup.
commit : 53463e2479b7a8a8c470c4a3b57f218a407b98da author : Tom Lane <firstname.lastname@example.org> date : Sat, 5 Apr 2014 18:16:14 -0400 committer: Tom Lane <email@example.com> date : Sat, 5 Apr 2014 18:16:14 -0400
Formerly, we set up the postmaster's signal handling only when we were about to start launching subprocesses. This is a bad idea though, as it means that for example a SIGINT arriving before that will kill the postmaster instantly, perhaps leaving lockfiles, socket files, shared memory, etc lying about. We'd rather that such a signal caused orderly postmaster termination including releasing of those resources. A simple fix is to move the PostmasterMain stanza that initializes signal handling to an earlier point, before we've created any such resources. Then, an early-arriving signal will be blocked until we're ready to deal with it in the usual way. (The only part that really needs to be moved up is blocking of signals, but it seems best to keep the signal handler installation calls together with that; for one thing this ensures the kernel won't drop any signals we wished to get. The handlers won't get invoked in any case until we unblock signals in ServerLoop.) Per a report from MauMau. He proposed changing the way "pg_ctl stop" works to deal with this, but that'd just be masking one symptom not fixing the core issue. It's been like this since forever, so back-patch to all supported branches.
Fix processing of PGC_BACKEND GUC parameters on Windows.
commit : bdc3e95c2ae6362219ee182fbf3bd03e8f8c2ea5 author : Tom Lane <firstname.lastname@example.org> date : Sat, 5 Apr 2014 12:41:31 -0400 committer: Tom Lane <email@example.com> date : Sat, 5 Apr 2014 12:41:31 -0400
EXEC_BACKEND builds (i.e., Windows) failed to absorb values of PGC_BACKEND parameters if they'd been changed post-startup via the config file. This for example prevented log_connections from working if it were turned on post-startup. The mechanism for handling this case has always been a bit of a kluge, and it wasn't revisited when we implemented EXEC_BACKEND. While in a normal forking environment new backends will inherit the postmaster's value of such settings, EXEC_BACKEND backends have to read the settings from the CONFIG_EXEC_PARAMS file, and they were mistakenly rejecting them. This case has thus always been broken in the Windows port, so back-patch to all supported branches. Amit Kapila
Fix tablespace creation WAL replay to work on Windows.
commit : 1a496a12b3a85764ac87009220e2d5adf0dfb765 author : Tom Lane <firstname.lastname@example.org> date : Fri, 4 Apr 2014 23:09:41 -0400 committer: Tom Lane <email@example.com> date : Fri, 4 Apr 2014 23:09:41 -0400
The code segment that removes the old symlink (if present) wasn't clued into the fact that on Windows, symlinks are junction points which have to be removed with rmdir(). Backpatch to 9.0, where the failing code was introduced. MauMau, reviewed by Muhammad Asif Naeem and Amit Kapila
Allow "-C variable" and "--describe-config" even to root users.
commit : 6d25eb314a631871c1df1de563253a78ce706250 author : Tom Lane <firstname.lastname@example.org> date : Fri, 4 Apr 2014 22:03:42 -0400 committer: Tom Lane <email@example.com> date : Fri, 4 Apr 2014 22:03:42 -0400
There's no really compelling reason to refuse to do these read-only, non-server-starting options as root, and there's at least one good reason to allow -C: pg_ctl uses -C to find out the true data directory location when pointed at a config-only directory. On Windows, this is done before dropping administrator privileges, which means that pg_ctl fails for administrators if and only if a config-only layout is used. Since the root-privilege check is done so early in startup, it's a bit awkward to check for these switches. Make the somewhat arbitrary decision that we'll only skip the root check if -C is the first switch. This is not just to make the code a bit simpler: it also guarantees that we can't misinterpret a --boot mode switch. (While AuxiliaryProcessMain doesn't currently recognize any such switch, it might have one in the future.) This is no particular problem for pg_ctl, and since the whole behavior is undocumented anyhow, it's not a documentation issue either. (--describe-config only works as the first switch anyway, so this is no restriction for that case either.) Back-patch to 9.2 where pg_ctl first began to use -C. MauMau, heavily edited by me
Fix bogus time printout in walreceiver's debug log messages.
commit : ed1cb4241585efe86d45edfed9eaf1995aa5421f author : Tom Lane <firstname.lastname@example.org> date : Fri, 4 Apr 2014 11:43:41 -0400 committer: Tom Lane <email@example.com> date : Fri, 4 Apr 2014 11:43:41 -0400
The displayed sendtime and receipttime were always exactly equal, because somebody forgot that timestamptz_to_str returns a static buffer (thereby simplifying life for most callers, at the cost of complicating it for those who need two results concurrently). Apply the same pstrdup solution used by the other call sites with this issue. Back-patch to 9.2 where the faulty code was introduced. Per bug #9849 from Haruka Takatsuka, though this is not exactly his patch. Possibly we should change timestamptz_to_str's API, but I wouldn't want to do so in the back branches.
Avoid allocations in critical sections.
commit : 7d1e0e8d7a96ac2be5c3ce0e68a5c4eb65fccff2 author : Heikki Linnakangas <firstname.lastname@example.org> date : Fri, 4 Apr 2014 13:12:38 +0300 committer: Heikki Linnakangas <email@example.com> date : Fri, 4 Apr 2014 13:12:38 +0300
If a palloc in a critical section fails, it becomes a PANIC.
Fix documentation about joining pg_locks to other views.
commit : 6c1cfbacb9f92454a91d2898da84772fc6eeddd7 author : Tom Lane <firstname.lastname@example.org> date : Thu, 3 Apr 2014 14:18:31 -0400 committer: Tom Lane <email@example.com> date : Thu, 3 Apr 2014 14:18:31 -0400
The advice to join to pg_prepared_xacts via the transaction column was not updated when the transaction column was replaced by virtualtransaction. Since it's not quite obvious how to do that join, give an explicit example. For consistency also give an example for the adjacent case of joining to pg_stat_activity. And link-ify the view references too, just because we can. Per bug #9840 from Alexey Bashtanov. Michael Paquier and Tom Lane
Fix documentation about size of interval type.
commit : 4f304875356bd92f5ed06fe751b985bbfd5b7d66 author : Tom Lane <firstname.lastname@example.org> date : Thu, 3 Apr 2014 11:05:55 -0400 committer: Tom Lane <email@example.com> date : Thu, 3 Apr 2014 11:05:55 -0400
It's been 16 bytes, not 12, for ages. This was fixed in passing in HEAD (commit 146604ec), but as a factual error it should have been back-patched. Per gripe from Tatsuhito Kasahara.
Avoid palloc in critical section in GiST WAL-logging.
commit : 003a31a7c9fabed9b247ccd4c4123da017fddf58 author : Heikki Linnakangas <firstname.lastname@example.org> date : Thu, 3 Apr 2014 15:09:37 +0300 committer: Heikki Linnakangas <email@example.com> date : Thu, 3 Apr 2014 15:09:37 +0300
Memory allocation can fail if you run out of memory, and inside a critical section that will lead to a PANIC. Use conservatively-sized arrays on the stack instead. There was previously no explicit limit on the number of pages a GiST split can produce; it was limited only by the number of LWLocks that can be held simultaneously (100 at the moment). This patch adds an explicit limit of 75 pages. That should be plenty; a typical split shouldn't produce more than 2-3 page halves. The bug has been there forever, but only backpatch down to 9.1. The code was changed significantly in 9.1, and it doesn't seem worth the risk or trouble to adapt this for 9.0 and 8.4.
Fix assorted issues in client host name lookup.
commit : 029decfec6befe3bba14918590af5c998402ff08 author : Tom Lane <firstname.lastname@example.org> date : Wed, 2 Apr 2014 17:11:31 -0400 committer: Tom Lane <email@example.com> date : Wed, 2 Apr 2014 17:11:31 -0400
The code for matching clients to pg_hba.conf lines that specify host names (instead of IP address ranges) failed to complain if reverse DNS lookup failed; instead it silently didn't match, so that you might end up getting a surprising "no pg_hba.conf entry for ..." error, as seen in bug #9518 from Mike Blackwell. Since we don't want to make this a fatal error in situations where pg_hba.conf contains a mixture of host names and IP addresses (clients matching one of the numeric entries should not have to have rDNS data), remember the lookup failure and mention it as DETAIL if we get to "no pg_hba.conf entry". Apply the same approach to forward-DNS lookup failures, too, rather than treating them as immediate hard errors. Along the way, fix a couple of bugs that prevented us from detecting an rDNS lookup error reliably, and make sure that we make only one rDNS lookup attempt; formerly, if the lookup attempt failed, the code would try again for each host name entry in pg_hba.conf. Since more or less the whole point of this design is to ensure there's only one lookup attempt, not one per entry, the latter point represents a performance bug that seems sufficient justification for back-patching. Also, adjust src/port/getaddrinfo.c so that it plays as well as it can with this code. Which is not all that well, since it does not have actual support for rDNS lookup, but at least it should return the expected (and required by spec) error codes so that the main code correctly perceives the lack of functionality as a lookup failure. It's unlikely that PG is still being used in production on any machines that require our getaddrinfo.c, so I'm not excited about working harder than this. To keep the code in the various branches similar, this includes back-patching commits c424d0d1052cb4053c8712ac44123f9b9a9aa3f2 and 1997f34db4687e671690ed054c8f30bb501b1168 into 9.2 and earlier. Back-patch to 9.1 where the facility for hostnames in pg_hba.conf was introduced.
Fix bugs in manipulation of PgBackendStatus.st_clienthostname.
commit : e83bee8ddcc75881002efb69a293b2b9d1d2be78 author : Tom Lane <firstname.lastname@example.org> date : Tue, 1 Apr 2014 21:30:14 -0400 committer: Tom Lane <email@example.com> date : Tue, 1 Apr 2014 21:30:14 -0400
Initialization of this field was not being done according to the st_changecount protocol (it has to be done within the changecount increment range, not outside). And the test to see if the value should be reported as null was wrong. Noted while perusing uses of Port.remote_hostname. This was wrong from the introduction of this code (commit 4a25bc145), so back-patch to 9.1.
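The st_changecount protocol referred to here is the usual odd/even seqlock-style scheme: the writer bumps the counter before and after touching the payload, so the count is odd exactly while an update is in flight, and a reader retries until it sees the same even count before and after its copy. A single-threaded sketch with invented names (the real PgBackendStatus code also needs memory barriers, elided here):

```c
#include <string.h>

/* Illustrative slot; the real struct is PgBackendStatus. */
typedef struct
{
	volatile int changecount;
	char		hostname[64];
} SlotSketch;

static void
slot_write(SlotSketch *slot, const char *name)
{
	slot->changecount++;		/* begin update: count becomes odd */
	strncpy(slot->hostname, name, sizeof(slot->hostname) - 1);
	slot->hostname[sizeof(slot->hostname) - 1] = '\0';
	slot->changecount++;		/* end update: count even again */
}

static void
slot_read(SlotSketch *slot, char *out, size_t outlen)
{
	for (;;)
	{
		int			before = slot->changecount;

		strncpy(out, slot->hostname, outlen);
		if (before == slot->changecount && (before & 1) == 0)
			break;				/* consistent snapshot */
	}
}
```

The bug fixed by the commit was initializing a field outside the increment/increment range, where a concurrent reader could observe it half-written without any way to detect that.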
Fix typo in comment.
commit : 7ef17dd71db11f2ac03154ec48fa0d5594b026de author : Heikki Linnakangas <firstname.lastname@example.org> date : Tue, 1 Apr 2014 09:27:37 +0300 committer: Heikki Linnakangas <email@example.com> date : Tue, 1 Apr 2014 09:27:37 +0300
Mark FastPathStrongRelationLocks volatile.
commit : e980ec7c80713e645e08b74e126badb1ca5cecfa author : Robert Haas <firstname.lastname@example.org> date : Mon, 31 Mar 2014 14:32:12 -0400 committer: Robert Haas <email@example.com> date : Mon, 31 Mar 2014 14:32:12 -0400
Otherwise, the compiler might decide to move modifications to data within this structure outside the enclosing SpinLockAcquire / SpinLockRelease pair, leading to shared memory corruption. This may or may not explain a recent lmgr-related buildfarm failure on prairiedog, but it needs to be fixed either way.
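A heavily simplified sketch of the access pattern (the types and macros here are stand-ins, not lock.c): the shared structure is reached through a volatile-qualified pointer so the compiler must emit the loads and stores between the acquire and release, rather than hoisting or sinking them past the spinlock calls.

```c
/* Illustrative shared structure; the real one is FastPathStrongRelationLockData. */
typedef struct
{
	int			count[4];
} StrongLockData;

/* Toy spinlock stand-ins; real code uses slock_t and s_lock.h. */
static int	dummy_spinlock;

#define SpinLockAcquire(l) ((void) (*(l) = 1))
#define SpinLockRelease(l) ((void) (*(l) = 0))

static void
strong_lock_inc(StrongLockData *ptr, int slot)
{
	volatile StrongLockData *fast = ptr;	/* force real loads/stores */

	SpinLockAcquire(&dummy_spinlock);
	fast->count[slot]++;
	SpinLockRelease(&dummy_spinlock);
}
```

In modern code one would reach for C11 atomics or explicit compiler barriers instead; volatile was the tool available across the compilers PostgreSQL supported at the time.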
Count buffers dirtied due to hints in pgBufferUsage.shared_blks_dirtied.
commit : 6a63dda4c2a0bf06b8da28d15baae1a08e2e8fd5 author : Robert Haas <firstname.lastname@example.org> date : Mon, 31 Mar 2014 13:06:26 -0400 committer: Robert Haas <email@example.com> date : Mon, 31 Mar 2014 13:06:26 -0400
Previously, such buffers weren't counted, with the possible result that EXPLAIN (BUFFERS) and pg_stat_statements would understate the true number of blocks dirtied by an SQL statement. Back-patch to 9.2, where this counter was introduced. Amit Kapila
Revert "Secure Unix-domain sockets of "make check" temporary clusters."
commit : 8c1797e59be95b967e6b00b5a70445cfd0d27653 author : Noah Misch <firstname.lastname@example.org> date : Sat, 29 Mar 2014 03:12:00 -0400 committer: Noah Misch <email@example.com> date : Sat, 29 Mar 2014 03:12:00 -0400
About half of the buildfarm members use too-long directory names, strongly suggesting that this approach is a dead end.
Secure Unix-domain sockets of "make check" temporary clusters.
commit : 83d12a99daf935d8dc4064cd0282179e08354c3f author : Noah Misch <firstname.lastname@example.org> date : Sat, 29 Mar 2014 00:52:56 -0400 committer: Noah Misch <email@example.com> date : Sat, 29 Mar 2014 00:52:56 -0400
Any OS user able to access the socket can connect as the bootstrap superuser and in turn execute arbitrary code as the OS user running the test. Protect against that by placing the socket in the temporary data directory, which has mode 0700 thanks to initdb. Back-patch to 8.4 (all supported versions). The hazard remains wherever the temporary cluster accepts TCP connections, notably on Windows. Attempts to run "make check" from a directory with a long name will now fail. An alternative not sharing that problem was to place the socket in a subdirectory of /tmp, but that is only secure if /tmp is sticky. The PG_REGRESS_SOCK_DIR environment variable is available as a workaround when testing from long directory paths. As a convenient side effect, this lets testing proceed smoothly in builds that override DEFAULT_PGSOCKET_DIR. Popular non-default values like /var/run/postgresql are often unwritable to the build user. Security: CVE-2014-0067
Document platform-specificity of unix_socket_permissions.
commit : c1932ec9e864d475b1debbcf26ba77c88dc163d0 author : Noah Misch <firstname.lastname@example.org> date : Sat, 29 Mar 2014 00:52:31 -0400 committer: Noah Misch <email@example.com> date : Sat, 29 Mar 2014 00:52:31 -0400
Back-patch to 8.4 (all supported versions).
Revert "Document that Python 2.3 requires cdecimal module for full functionality."
commit : 952f0153f3d69927947f8bdcba5d451a2b20e443 author : Tom Lane <firstname.lastname@example.org> date : Thu, 27 Mar 2014 17:08:38 -0400 committer: Tom Lane <email@example.com> date : Thu, 27 Mar 2014 17:08:38 -0400
This reverts commit a8ee81822e43120e1b31949b07af1adadcbeffc1. The change requiring cdecimal is new in 9.4 (see 7919398bac), so we should not claim previous branches need it.
Document that Python 2.3 requires cdecimal module for full functionality.
commit : a8ee81822e43120e1b31949b07af1adadcbeffc1 author : Tom Lane <firstname.lastname@example.org> date : Wed, 26 Mar 2014 22:43:29 -0400 committer: Tom Lane <email@example.com> date : Wed, 26 Mar 2014 22:43:29 -0400
This has been true for some time, but we were leaving users to discover it the hard way. Back-patch to 9.2. It might've been true before that, but we were claiming Python 2.2 compatibility before that, so I won't guess at the exact requirements back then.
Fix refcounting bug in PLy_modify_tuple().
commit : 6b3b15e5348593d882ce12c76720daec7c22e7e9 author : Tom Lane <firstname.lastname@example.org> date : Wed, 26 Mar 2014 16:41:38 -0400 committer: Tom Lane <email@example.com> date : Wed, 26 Mar 2014 16:41:38 -0400
We must increment the refcount on "plntup" as soon as we have the reference, not sometime later. Otherwise, if an error is thrown in between, the Py_XDECREF(plntup) call in the PG_CATCH block removes a refcount we didn't add, allowing the object to be freed even though it's still part of the plpython function's parsetree. This appears to be the cause of crashes seen on buildfarm member prairiedog. It's a bit surprising that we've not seen it fail repeatably before, considering that the regression tests have been exercising the faulty code path since 2009. The real-world impact is probably minimal, since it's unlikely anyone would be provoking the "TD["new"] is not a dictionary" error in production, and that's the only case that is actually wrong. Still, it's a bug affecting the regression tests, so patch all supported branches. In passing, remove dead variable "plstr", and demote "platt" to a local variable inside the PG_TRY block, since we don't need to clean it up in the PG_CATCH path.
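The general hazard can be shown without the CPython API. In this toy refcounting sketch (plain C, invented names), cleanup code that releases a reference which was never actually taken drops the count below the true number of holders, freeing the object out from under them, which is exactly the Py_XDECREF-in-PG_CATCH failure mode described above:

```c
/* Toy refcounted object; "freed" stands in for actual deallocation. */
typedef struct
{
	int			refcount;
	int			freed;
} Obj;

static void
obj_incref(Obj *o)
{
	o->refcount++;
}

static void
obj_decref(Obj *o)
{
	if (--o->refcount == 0)
		o->freed = 1;			/* would really free the object here */
}
```

The rule the commit enforces: convert a borrowed reference into an owned one immediately upon receipt, so every error path's release is balanced by an acquire that has already happened.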
Don't forget to flush XLOG_PARAMETER_CHANGE record.
commit : 6fe8411ffb7119010b80b34044dccd4614bda175 author : Fujii Masao <firstname.lastname@example.org> date : Wed, 26 Mar 2014 02:12:39 +0900 committer: Fujii Masao <email@example.com> date : Wed, 26 Mar 2014 02:12:39 +0900
Backpatch to 9.0 where the XLOG_PARAMETER_CHANGE record was introduced.
Fix typos in pg_basebackup documentation
commit : b18efb86fd8af1aa39b886031cfb2ad395f5eb24 author : Magnus Hagander <firstname.lastname@example.org> date : Tue, 25 Mar 2014 11:16:57 +0100 committer: Magnus Hagander <email@example.com> date : Tue, 25 Mar 2014 11:16:57 +0100
Address ccvalid/ccnoinherit in TupleDesc support functions.
commit : 1d1b32a9530ee20be98d4617b46f517f0d0250f2 author : Noah Misch <firstname.lastname@example.org> date : Sun, 23 Mar 2014 02:13:43 -0400 committer: Noah Misch <email@example.com> date : Sun, 23 Mar 2014 02:13:43 -0400
equalTupleDescs() neglected both of these ConstrCheck fields, and CreateTupleDescCopyConstr() neglected ccnoinherit. At this time, the only known behavior defect resulting from these omissions is constraint exclusion disregarding a CHECK constraint validated by an ALTER TABLE VALIDATE CONSTRAINT statement issued earlier in the same transaction. Back-patch to 9.2, where these fields were introduced.
Properly check for readdir/closedir() failures
commit : ee42d8f10b65593aab768438665000aec070f6c2 author : Bruce Momjian <firstname.lastname@example.org> date : Fri, 21 Mar 2014 13:45:11 -0400 committer: Bruce Momjian <email@example.com> date : Fri, 21 Mar 2014 13:45:11 -0400
Clear errno before calling readdir() and handle an old MinGW errno bug, while adding full test coverage for readdir/closedir failures. Backpatch through 8.4.
Fix memory leak during regular expression execution.
commit : 473194c09cb294151231374d2628ce624da346ba author : Tom Lane <firstname.lastname@example.org> date : Wed, 19 Mar 2014 11:09:24 -0400 committer: Tom Lane <email@example.com> date : Wed, 19 Mar 2014 11:09:24 -0400
For a regex containing backrefs, pg_regexec() might fail to free all the sub-DFAs that were created during execution, resulting in a permanent (session lifespan) memory leak. Problem was introduced by me in commit 587359479acbbdc95c8e37da40707e37097423f5. Per report from Sandro Santilli; diagnosis by Greg Stark.