PostgreSQL 9.1.14 commit log

Stamp 9.1.14.

  
commit   : 972a21d736f0f5ded750c1be7153a0571f2dc83e    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 21 Jul 2014 15:14:13 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 21 Jul 2014 15:14:13 -0400    


  
  

Release notes for 9.3.5, 9.2.9, 9.1.14, 9.0.18, 8.4.22.

  
commit   : b0cab3faf09a355f5e37d25a5c5325f773097c2a    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 21 Jul 2014 14:59:36 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 21 Jul 2014 14:59:36 -0400    


  
  

Translation updates

  
commit   : 07a3f74a73e533cb6d1452556f71fcec0ab7c199    
  
author   : Peter Eisentraut <peter_e@gmx.net>    
date     : Mon, 21 Jul 2014 00:58:58 -0400    
  
committer: Peter Eisentraut <peter_e@gmx.net>    
date     : Mon, 21 Jul 2014 00:58:58 -0400    


  
  

Fix xreflabel for hot_standby_feedback.

  
commit   : 2f8887abb553b66c19655b922dc53851b0bf3103    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sat, 19 Jul 2014 22:20:54 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sat, 19 Jul 2014 22:20:54 -0400    


  
Rather remarkable that this has been wrong since 9.1 and nobody noticed.  
  

Update time zone data files to tzdata release 2014e.

  
commit   : 40ccb6530c80a1345d7ba2ad7ed42e0443ee72d0    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sat, 19 Jul 2014 15:00:50 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sat, 19 Jul 2014 15:00:50 -0400    


  
DST law changes in Crimea, Egypt, Morocco.  New zone Antarctica/Troll  
for Norwegian base in Queen Maud Land.  
  

Limit pg_upgrade authentication advice to always-secure techniques.

  
commit   : 3f09bb8d27d509c25e4cbeef92ad454582579851    
  
author   : Noah Misch <noah@leadboat.com>    
date     : Fri, 18 Jul 2014 16:05:17 -0400    
  
committer: Noah Misch <noah@leadboat.com>    
date     : Fri, 18 Jul 2014 16:05:17 -0400    


  
~/.pgpass is a sound choice everywhere, and "peer" authentication is  
safe on every platform it supports.  Cease to recommend "trust"  
authentication, the safety of which is deeply configuration-specific.  
Back-patch to 9.0, where pg_upgrade was introduced.  
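For reference, the ~/.pgpass format this advice relies on is one colon-separated entry per line; the host, database, user, and password below are placeholders, not anything prescribed by the commit:

```
# ~/.pgpass -- libpq ignores the file unless its mode is 0600
# hostname:port:database:username:password
localhost:5432:*:postgres:s3cret
```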
  

Fix two low-probability memory leaks in regular expression parsing.

  
commit   : 8a817785adf34387dce3be4b9f2b201cc9ff835d    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 18 Jul 2014 13:00:27 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 18 Jul 2014 13:00:27 -0400    


  
If pg_regcomp failed after having invoked markst/cleanst, it would leak any  
"struct subre" nodes it had created.  (We've already detected all regex  
syntax errors at that point, so the only likely causes of later failure  
would be query cancel or out-of-memory.)  To fix, make sure freesrnode  
knows the difference between the pre-cleanst and post-cleanst cleanup  
procedures.  Add some documentation of this less-than-obvious point.  
  
Also, newlacon did the wrong thing with an out-of-memory failure from  
realloc(), so that the previously allocated array would be leaked.  
  
Both of these are pretty low-probability scenarios, but a bug is a bug,  
so patch all the way back.  
  
Per bug #10976 from Arthur O'Dwyer.  
  

Fix REASSIGN OWNED for text search objects

  
commit   : a41dc73211c9ab579bb2cd87ad7d0a6ecf0806fe    
  
author   : Alvaro Herrera <alvherre@alvh.no-ip.org>    
date     : Tue, 15 Jul 2014 13:24:07 -0400    
  
committer: Alvaro Herrera <alvherre@alvh.no-ip.org>    
date     : Tue, 15 Jul 2014 13:24:07 -0400    


  
Trying to reassign objects owned by a user that had text search  
dictionaries or configurations used to fail with:  
ERROR:  unexpected classid 3600  
or  
ERROR:  unexpected classid 3602  
  
Fix by adding cases for those object types in a switch in pg_shdepend.c.  
  
Both REASSIGN OWNED and text search objects go back all the way to 8.1,  
so backpatch to all supported branches.  In 9.3 the alter-owner code was  
made generic, so the required change in recent branches is pretty  
simple; however, for 9.2 and older ones we need some additional  
reshuffling to enable specifying objects by OID rather than name.  
  
Text search templates and parsers are not owned objects, so there's no  
change required for them.  
  
Per bug #9749 reported by Michal Novotný  
  

Reset master xmin when hot_standby_feedback disabled.

  
commit   : 8ebf5f7206e0e4c5d4113cf67d3db8f2a90d7e0f    
  
author   : Simon Riggs <simon@2ndQuadrant.com>    
date     : Tue, 15 Jul 2014 14:45:44 +0100    
  
committer: Simon Riggs <simon@2ndQuadrant.com>    
date     : Tue, 15 Jul 2014 14:45:44 +0100    

  
If walsender has the xmin of the standby, ensure we reset the value  
to 0 when we change from hot_standby_feedback=on to  
hot_standby_feedback=off.  
  

doc: small fixes for REINDEX reference page

  
commit   : bfb47043ad3a3adf50c19011b855afccc2ba6f8d    
  
author   : Peter Eisentraut <peter_e@gmx.net>    
date     : Mon, 14 Jul 2014 20:37:00 -0400    
  
committer: Peter Eisentraut <peter_e@gmx.net>    
date     : Mon, 14 Jul 2014 20:37:00 -0400    


  
From: Josh Kupershmidt <schmiddy@gmail.com>  
  

Add autocompletion of locale keywords for CREATE DATABASE

  
commit   : a70935d3fc1dc527cc8ed960d80728629c6b3753    
  
author   : Magnus Hagander <magnus@hagander.net>    
date     : Sat, 12 Jul 2014 14:19:57 +0200    
  
committer: Magnus Hagander <magnus@hagander.net>    
date     : Sat, 12 Jul 2014 14:19:57 +0200    


  
Adds support for autocomplete of LC_COLLATE and LC_CTYPE to  
the CREATE DATABASE command in psql.  
  

Fix bug with whole-row references to append subplans.

  
commit   : c45841f9e199a05c95cb8af51ebc97470fec17b8    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 11 Jul 2014 19:12:48 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 11 Jul 2014 19:12:48 -0400    


  
ExecEvalWholeRowVar incorrectly supposed that it could "bless" the source  
TupleTableSlot just once per query.  But if the input is coming from an  
Append (or, perhaps, other cases?) more than one slot might be returned  
over the query run.  This led to "record type has not been registered"  
errors when a composite datum was extracted from a non-blessed slot.  
  
This bug has been there a long time; I guess it escaped notice because when  
dealing with subqueries the planner tends to expand whole-row Vars into  
RowExprs, which don't have the same problem.  It is possible to trigger  
the problem in all active branches, though, as illustrated by the added  
regression test.  
  

Don't assume a subquery's output is unique if there's a SRF in its tlist.

  
commit   : fa21a760b2e16b68196da685f29033304b41d4bc    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 8 Jul 2014 14:03:26 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 8 Jul 2014 14:03:26 -0400    


  
While the x output of "select x from t group by x" can be presumed unique,  
this does not hold for "select x, generate_series(1,10) from t group by x",  
because we may expand the set-returning function after the grouping step.  
(Perhaps that should be re-thought; but considering all the other oddities  
involved with SRFs in targetlists, it seems unlikely we'll change it.)  
Put a check in query_is_distinct_for() so it's not fooled by such cases.  
  
Back-patch to all supported branches.  
  
David Rowley  
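As a toy illustration of the hazard (plain Python standing in for the planner's reasoning; none of these names come from PostgreSQL): the grouped column is unique on its own, but once a set-returning function is expanded after the grouping step, every group value repeats.

```python
# Simulate "select x, generate_series(1,10) from t group by x".
grouped = ['a', 'b']                    # GROUP BY output: x values are unique

# Expanding the SRF *after* grouping repeats each x ten times.
expanded = [(x, i) for x in grouped for i in range(1, 11)]

xs = [x for x, _ in expanded]
print(len(xs), len(set(xs)))            # 20 rows, but only 2 distinct x values
```

A consumer that still assumed the subquery's x output was unique, as query_is_distinct_for() did before this fix, would be fooled by exactly this shape of query.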
  

Add some errdetail to checkRuleResultList().

  
commit   : d9d125d92ae933d27d6522405b2c7a2002619615    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Wed, 2 Jul 2014 14:20:44 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Wed, 2 Jul 2014 14:20:44 -0400    


  
This function wasn't originally thought to be really user-facing,  
because converting a table to a view isn't something we expect people  
to do manually.  So not all that much effort was spent on the error  
messages; in particular, while the code will complain that you got  
the column types wrong it won't say exactly what they are.  But since  
we repurposed the code to also check compatibility of rule RETURNING  
lists, it's definitely user-facing.  It now seems worthwhile to add  
errdetail messages showing exactly what the conflict is when there's  
a mismatch of column names or types.  This is prompted by bug #10836  
from Matthias Raffelsieper, which might have been forestalled if the  
error message had reported the wrong column type as being "record".  
  
Per Alvaro's advice, back-patch to branches before 9.4, but resist  
the temptation to rephrase any existing strings there.  Adding new  
strings is not really a translation degradation; anyway having the  
info presented in English is better than not having it at all.  
  

Fix inadequately-sized output buffer in contrib/unaccent.

  
commit   : 0ff9718ff758d3e471ae16a903f6b8285e5a311e    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 1 Jul 2014 11:22:56 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 1 Jul 2014 11:22:56 -0400    


  
The output buffer size in unaccent_lexize() was calculated as input string  
length times pg_database_encoding_max_length(), which effectively assumes  
that replacement strings aren't more than one character.  While that was  
all that we previously documented it to support, the code actually has  
always allowed replacement strings of arbitrary length; so if you tried  
to make use of longer strings, you were at risk of buffer overrun.  To fix,  
use an expansible StringInfo buffer instead of trying to determine the  
maximum space needed a priori.  
  
This would be a security issue if unaccent rules files could be installed  
by unprivileged users; but fortunately they can't, so in the back branches  
the problem can be labeled as improper configuration by a superuser.  
Nonetheless, a memory stomp isn't a nice way of reacting to improper  
configuration, so let's back-patch the fix.  
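The sizing bug is simple arithmetic, sketched here as a toy model (names and values are illustrative, not from the contrib module): the old buffer bound assumed each input character maps to at most one output character.

```python
# Toy model of the old output-buffer sizing in unaccent_lexize().
max_char_len = 4                       # stand-in for pg_database_encoding_max_length()
text = 'xxxx'

old_buf_size = len(text) * max_char_len            # 16 bytes reserved

# A multi-character replacement longer than max_char_len breaks the bound.
rules = {'x': 'quite-long-replacement'}            # 22 characters
output = ''.join(rules.get(c, c) for c in text)

print(len(output), old_buf_size)                   # 88 needed, 16 reserved
assert len(output) > old_buf_size                  # i.e. buffer overrun
```

An expansible buffer, as the fix uses, sidesteps the problem by growing to whatever the replacements actually require.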
  

Back-patch "Fix EquivalenceClass processing for nested append relations".

  
commit   : 555d0b2000e33fd1ad2721015996a66c43bbb3cd    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 26 Jun 2014 10:41:10 -0700    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 26 Jun 2014 10:41:10 -0700    


  
When we committed a87c729153e372f3731689a7be007bc2b53f1410, we somehow  
failed to notice that it didn't merely improve plan quality for expression  
indexes; there were very closely related cases that failed outright with  
"could not find pathkey item to sort".  The failing cases seem to be those  
where the planner was already capable of selecting a MergeAppend plan,  
and there was inheritance involved: the lack of appropriate eclass child  
members would prevent prepare_sort_from_pathkeys() from succeeding on the  
MergeAppend's child plan nodes for inheritance child tables.  
  
Accordingly, back-patch into 9.1 through 9.3, along with an extra  
regression test case covering the problem.  
  
Per trouble report from Michael Glaesemann.  
  

Remove obsolete example of CSV log file name from log_filename document.

  
commit   : 865868043af0f23cd72d4450ca4410828b016cea    
  
author   : Fujii Masao <fujii@postgresql.org>    
date     : Thu, 26 Jun 2014 14:27:27 +0900    
  
committer: Fujii Masao <fujii@postgresql.org>    
date     : Thu, 26 Jun 2014 14:27:27 +0900    


  
7380b63 changed log_filename so that the epoch was not appended to it  
when no format specifier is given. But an example of a CSV log file name  
with the epoch was still left in the log_filename documentation. This  
commit removes that obsolete example.  
  
This commit also documents the defaults of log_directory and  
log_filename.  
  
Backpatch to all supported versions.  
  
Christoph Berg  
  

Don't allow foreign tables with OIDs.

  
commit   : dd1a5b09bf070eff699c241b6de28453924e5613    
  
author   : Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Tue, 24 Jun 2014 12:31:36 +0300    
  
committer: Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Tue, 24 Jun 2014 12:31:36 +0300    


  
The syntax doesn't let you specify "WITH OIDS" for foreign tables, but it  
was still possible with default_with_oids=true. But the rest of the system,  
including pg_dump, isn't prepared to handle foreign tables with OIDs  
properly.  
  
Backpatch down to 9.1, where foreign tables were introduced. It's possible  
that there are databases out there that already have foreign tables with  
OIDs. There isn't much we can do about that, but at least we can prevent  
them from being created in the future.  
  
Patch by Etsuro Fujita, reviewed by Hadi Moshayedi.  
  

Fix documentation template for CREATE TRIGGER.

  
commit   : cbc0517c3200af091cb95c35322555f4f6782da4    
  
author   : Kevin Grittner <kgrittn@postgresql.org>    
date     : Sat, 21 Jun 2014 09:17:52 -0500    
  
committer: Kevin Grittner <kgrittn@postgresql.org>    
date     : Sat, 21 Jun 2014 09:17:52 -0500    


  
By using curly braces, the template had specified that one of  
"NOT DEFERRABLE", "INITIALLY IMMEDIATE", or "INITIALLY DEFERRED"  
was required on any CREATE TRIGGER statement, which is not  
accurate.  Change to square brackets makes that optional.  
  
Backpatch to 9.1, where the error was introduced.  
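For context, the corrected constraint-timing clause in the synopsis reads roughly like this (reconstructed from the description above, not quoted from the docs), with the outer square brackets making the whole clause optional:

```
[ NOT DEFERRABLE | [ DEFERRABLE ] { INITIALLY IMMEDIATE | INITIALLY DEFERRED } ]
```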
  

Avoid leaking memory while evaluating arguments for a table function.

  
commit   : 06d5eacbc0a7db22422fc07aca56f6e69b02b8ea    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 19 Jun 2014 22:13:54 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 19 Jun 2014 22:13:54 -0400    


  
ExecMakeTableFunctionResult evaluated the arguments for a function-in-FROM  
in the query-lifespan memory context.  This is insignificant in simple  
cases where the function relation is scanned only once; but if the function  
is in a sub-SELECT or is on the inside of a nested loop, any memory  
consumed during argument evaluation can add up quickly.  (The potential for  
trouble here had been foreseen long ago, per existing comments; but we'd  
not previously seen a complaint from the field about it.)  To fix, create  
an additional temporary context just for this purpose.  
  
Per an example from MauMau.  Back-patch to all active branches.  
  

Make pqsignal() available to pg_regress of ECPG and isolation suites.

  
commit   : 94ab763278459ef8f279bdf98bcda9a73accad7e    
  
author   : Noah Misch <noah@leadboat.com>    
date     : Sat, 14 Jun 2014 10:52:25 -0400    
  
committer: Noah Misch <noah@leadboat.com>    
date     : Sat, 14 Jun 2014 10:52:25 -0400    


  
Commit 453a5d91d49e4d35054f92785d830df4067e10c1 made it available to the  
src/test/regress build of pg_regress, but all pg_regress builds need the  
same treatment.  Patch 9.2 through 8.4; in 9.3 and later, pg_regress  
gets pqsignal() via libpgport.  
  

Secure Unix-domain sockets of "make check" temporary clusters.

  
commit   : 481831b4388ca4ad0abfa790ba0766cc72a05097    
  
author   : Noah Misch <noah@leadboat.com>    
date     : Sat, 14 Jun 2014 09:41:13 -0400    
  
committer: Noah Misch <noah@leadboat.com>    
date     : Sat, 14 Jun 2014 09:41:13 -0400    


  
Any OS user able to access the socket can connect as the bootstrap  
superuser and proceed to execute arbitrary code as the OS user running  
the test.  Protect against that by placing the socket in a temporary,  
mode-0700 subdirectory of /tmp.  The pg_regress-based test suites and  
the pg_upgrade test suite were vulnerable; the $(prove_check)-based test  
suites were already secure.  Back-patch to 8.4 (all supported versions).  
The hazard remains wherever the temporary cluster accepts TCP  
connections, notably on Windows.  
  
As a convenient side effect, this lets testing proceed smoothly in  
builds that override DEFAULT_PGSOCKET_DIR.  Popular non-default values  
like /var/run/postgresql are often unwritable to the build user.  
  
Security: CVE-2014-0067  
  

Add mkdtemp() to libpgport.

  
commit   : 3243fa391ebcf2a6210397d7f8c1d353c15130cf    
  
author   : Noah Misch <noah@leadboat.com>    
date     : Sat, 14 Jun 2014 09:41:13 -0400    
  
committer: Noah Misch <noah@leadboat.com>    
date     : Sat, 14 Jun 2014 09:41:13 -0400    


  
This function is pervasive on free software operating systems; import  
NetBSD's implementation.  Back-patch to 8.4, like the commit that will  
harness it.  
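The behaviour being imported is the standard mkdtemp() contract: create a uniquely named directory accessible only to its owner, which is what the preceding socket-security fix depends on. A minimal sketch using Python's stdlib equivalent (illustrative only; the commit itself imports NetBSD's C implementation):

```python
import os
import stat
import tempfile

# tempfile.mkdtemp() follows the same contract as POSIX mkdtemp():
# a fresh, uniquely named directory, accessible only by the caller.
d = tempfile.mkdtemp(prefix='pg_socket_')

mode = stat.S_IMODE(os.lstat(d).st_mode)
assert mode & 0o077 == 0, "group/other must have no access"

os.rmdir(d)  # clean up
```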
  

Fix pg_restore's processing of old-style BLOB COMMENTS data.

  
commit   : 294a489855c6080197d40673fe592d6b494db6d5    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 12 Jun 2014 20:14:49 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 12 Jun 2014 20:14:49 -0400    


  
Prior to 9.0, pg_dump handled comments on large objects by dumping a bunch  
of COMMENT commands into a single BLOB COMMENTS archive object.  With  
sufficiently many such comments, some of the commands would likely get  
split across bufferloads when restoring, causing failures in  
direct-to-database restores (though no problem would be evident in text  
output).  This is the same type of issue we have with table data dumped as  
INSERT commands, and it can be fixed in the same way, by using a mini SQL  
lexer to figure out where the command boundaries are.  Fortunately, the  
COMMENT commands are no more complex to lex than INSERTs, so we can just  
re-use the existing lexer for INSERTs.  
  
Per bug #10611 from Jacek Zalewski.  Back-patch to all active branches.  
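The idea of the fix, finding command boundaries with a small lexer rather than splitting blindly, can be sketched in a few lines of Python. This is a toy (single-quoted strings and doubled quotes only; no dollar quoting, comments, or escapes), not pg_restore's actual lexer:

```python
def split_commands(buf):
    """Split SQL text at top-level semicolons, ignoring any
    semicolons that appear inside single-quoted strings."""
    cmds, start, in_string = [], 0, False
    i = 0
    while i < len(buf):
        c = buf[i]
        if in_string:
            if c == "'":
                if i + 1 < len(buf) and buf[i + 1] == "'":
                    i += 1          # doubled quote: a literal ' inside the string
                else:
                    in_string = False
        elif c == "'":
            in_string = True
        elif c == ';':
            cmds.append(buf[start:i + 1].strip())
            start = i + 1
        i += 1
    return cmds

print(split_commands("COMMENT ON LARGE OBJECT 1 IS 'a;b'; "
                     "COMMENT ON LARGE OBJECT 2 IS 'it''s';"))
```

A naive split on ";" would cut the first command in half at the semicolon inside 'a;b'; the lexer keeps both commands whole.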
  

  
commit   : d5ea7e649462ce0fba7e508e9efbb9f87e8c220b    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 12 Jun 2014 16:51:14 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 12 Jun 2014 16:51:14 -0400    


  
Robert Frost is no longer with us, but his copyrights still are, so  
let's stop using "Stopping by Woods on a Snowy Evening" as test data  
before somebody decides to sue us.  Wordsworth is more safely dead.  
  

Fix ancient encoding error in hungarian.stop.

  
commit   : 62f134954385606931ef6df5cde264296079f93a    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 10 Jun 2014 22:48:16 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 10 Jun 2014 22:48:16 -0400    


  
When we grabbed this file off the Snowball project's website, we mistakenly  
supposed that it was in LATIN1 encoding, but evidently it was actually in  
LATIN2.  This resulted in ő (o-double-acute, U+0151, which is code 0xF5 in  
LATIN2) being misconverted into õ (o-tilde, U+00F5), as complained of in  
bug #10589 from Zoltán Sörös.  We'd have messed up u-double-acute too,  
but there aren't any of those in the file.  Other characters used in the  
file have the same codes in LATIN1 and LATIN2, which no doubt helped hide  
the problem for so long.  
  
The error is not only ours: the Snowball project also was confused about  
which encoding is required for Hungarian.  But dealing with that will  
require source-code changes that I'm not at all sure we'll wish to  
back-patch.  Fixing the stopword file seems reasonably safe to back-patch  
however.  
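The mix-up is easy to reproduce with any encoding library; here in Python (an illustration, not part of the commit), the same byte 0xF5 means different characters in the two Latin encodings:

```python
raw = b'\xf5'  # the byte as it appears in hungarian.stop

# Correct reading: LATIN2 (ISO 8859-2), where 0xF5 is o-double-acute.
assert raw.decode('iso8859-2') == '\u0151'   # 'ő'

# Mistaken reading: LATIN1, where the same byte is o-tilde.
assert raw.decode('latin-1') == '\u00f5'     # 'õ'
```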
  

Fix breakages of hot standby regression test.

  
commit   : 034d5c94647f91302e8df5a32154076b375f9134    
  
author   : Fujii Masao <fujii@postgresql.org>    
date     : Fri, 6 Jun 2014 18:46:32 +0900    
  
committer: Fujii Masao <fujii@postgresql.org>    
date     : Fri, 6 Jun 2014 18:46:32 +0900    


  
This commit changes the HS regression test so that it uses a  
REPEATABLE READ transaction instead of a SERIALIZABLE one,  
because the SERIALIZABLE transaction isolation level is not  
available in HS. Also, this commit fixes a VACUUM/ANALYZE  
label mixup.  
  
This was fixed in HEAD (commit 2985e16), but it should  
have been back-patched to 9.1, which introduced SSI  
and forbade SERIALIZABLE transactions in HS.  
  
Amit Langote  
  

Add defenses against running with a wrong selection of LOBLKSIZE.

  
commit   : d3c9f9c5b57d2a8a7ab065a6b942703854ec5e97    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 5 Jun 2014 11:31:18 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 5 Jun 2014 11:31:18 -0400    


  
It's critical that the backend's idea of LOBLKSIZE match the way data has  
actually been divided up in pg_largeobject.  While we don't provide any  
direct way to adjust that value, doing so is a one-line source code change  
and various people have expressed interest recently in changing it.  So,  
just as with TOAST_MAX_CHUNK_SIZE, it seems prudent to record the value in  
pg_control and cross-check that the backend's compiled-in setting matches  
the on-disk data.  
  
Also tweak the code in inv_api.c so that fetches from pg_largeobject  
explicitly verify that the length of the data field is not more than  
LOBLKSIZE.  Formerly we just had Asserts() for that, which is no protection  
at all in production builds.  In some of the call sites an overlength data  
value would translate directly to a security-relevant stack clobber, so it  
seems worth one extra runtime comparison to be sure.  
  
In the back branches, we can't change the contents of pg_control; but we  
can still make the extra checks in inv_api.c, which will offer some amount  
of protection against running with the wrong value of LOBLKSIZE.  
  

Fix longstanding bug in HeapTupleSatisfiesVacuum().

  
commit   : 6bf6e528af458fb3c1b2a54df739a2e3201d72f8    
  
author   : Andres Freund <andres@anarazel.de>    
date     : Wed, 4 Jun 2014 23:26:30 +0200    
  
committer: Andres Freund <andres@anarazel.de>    
date     : Wed, 4 Jun 2014 23:26:30 +0200    


  
HeapTupleSatisfiesVacuum() didn't properly discern between  
DELETE_IN_PROGRESS and INSERT_IN_PROGRESS for rows that have been  
inserted in the current transaction and deleted in an aborted  
subtransaction of the current backend. At the very least that caused  
problems for CLUSTER and CREATE INDEX in transactions that had  
aborting subtransactions producing rows, leading to warnings like:  
WARNING:  concurrent delete in progress within table "..."  
possibly in an endless, uninterruptible loop.  
  
Instead of treating *InProgress xmins the same as *IsCurrent ones,  
treat them as being distinct, like the other visibility routines do. As  
implemented, this separation can cause a behaviour change for rows  
that have been inserted and deleted in another, still running,  
transaction. HTSV will now return INSERT_IN_PROGRESS instead of  
DELETE_IN_PROGRESS for those. That's both more in line with the other  
visibility routines and arguably more correct, the latter because an  
INSERT_IN_PROGRESS will make callers look at/wait for xmin, instead of  
xmax.  
The only current caller where that's possibly worse than the old  
behaviour is heap_prune_chain(), which now won't mark the page as  
prunable if a row has concurrently been inserted and deleted. That's  
harmless enough.  
  
As a cautionary measure, also insert an interrupt check before the gotos  
in IndexBuildHeapScan() that lead to the uninterruptible loop. There  
are other possible causes of repeated loops, like a row that several  
sessions try to update and all fail to, and the cost of doing the check  
in the retry case is low.  
  
As this bug goes back all the way to the introduction of  
subtransactions in 573a71a5da, backpatch to all supported releases.  
  
Reported-By: Sandro Santilli  
  

Make plpython_unicode regression test work in more database encodings.

  
commit   : d661582cb568ecb9d580cc5a6a5def317d9686f7    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 3 Jun 2014 12:01:37 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 3 Jun 2014 12:01:37 -0400    


  
This test previously used a data value containing U+0080, and would  
therefore fail if the database encoding didn't have an equivalent to  
that; which only about half of our supported server encodings do.  
We could fall back to using some plain-ASCII character, but that seems  
like it's losing most of the point of the test.  Instead switch to using  
U+00A0 (no-break space), which translates into all our supported encodings  
except the four in the EUC_xx family.  
  
Per buildfarm testing.  Back-patch to 9.1, which is as far back as this  
test is expected to succeed everywhere.  (9.0 has the test, but without  
back-patching some 9.1 code changes we could not expect to get consistent  
results across platforms anyway.)  
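The encoding-coverage argument can be checked directly; in Python (an illustration, not the regression test itself), U+00A0 encodes cleanly in typical single-byte and Unicode encodings but has no mapping in the EUC family:

```python
nbsp = '\u00a0'  # no-break space

# Encodable in LATIN1, LATIN2, UTF-8, ...
for enc in ('latin-1', 'iso8859-2', 'utf-8'):
    assert nbsp.encode(enc)

# ... but not in the EUC_xx family, e.g. EUC-JP.
try:
    nbsp.encode('euc_jp')
    encodable = True
except UnicodeEncodeError:
    encodable = False
assert not encodable
```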
  

Set the process latch when processing recovery conflict interrupts.

  
commit   : 05d22d06ae0ffa63e4e4885ae8db23ca827c1825    
  
author   : Andres Freund <andres@anarazel.de>    
date     : Tue, 3 Jun 2014 14:02:54 +0200    
  
committer: Andres Freund <andres@anarazel.de>    
date     : Tue, 3 Jun 2014 14:02:54 +0200    


  
Because RecoveryConflictInterrupt() didn't set the process latch,  
anything using the latter to wait for events didn't get notified about  
recovery conflicts. Most latch users are never the target of recovery  
conflicts, which explains the lack of reports about this until  
now.  
Since 9.3, though, two possibly affected users exist: the SQL-callable  
pg_sleep() now uses latches to wait, and background workers are  
expected to use latches in their main loop. Both would currently wait  
until the end of WaitLatch's timeout.  
  
Fix by adding a SetLatch() to RecoveryConflictInterrupt(). It'd also  
be possible to fix the issue by having each latch user set  
set_latch_on_sigusr1. That seems failure prone, though, as most of  
these callsites won't often receive recovery conflicts and thus will  
likely only be tested against normal query cancels et al. It'd also be  
unnecessarily verbose.  
  
Backpatch to 9.1 where latches were introduced. Arguably 9.3 would be  
sufficient, because that's where pg_sleep() was converted to waiting  
on the latch and background workers got introduced; but there could be  
user level code making use of the latch pre 9.3.  
  

  
commit   : a784a39c49cbecf4b69324db96ce6bd119e967e8    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 30 May 2014 18:18:24 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 30 May 2014 18:18:24 -0400    


  
As of Xcode 5.0, Apple isn't including the Python framework as part of the  
SDK-level files, which means that linking to it might fail depending on  
whether Xcode thinks you've selected a specific SDK version.  According to  
their Tech Note 2328, they've basically deprecated the framework method of  
linking to libpython and are telling people to link to the shared library  
normally.  (I'm pretty sure this is in direct contradiction to the advice  
they were giving a few years ago, but whatever.)  Testing says that this  
approach works fine at least as far back as OS X 10.4.11, so let's just  
rip out the framework special case entirely.  We do still need a special  
case to decide that OS X provides a shared library at all, unfortunately  
(I wonder why the distutils check doesn't work ...).  But this is still  
less of a special case than before, so it's fine.  
  
Back-patch to all supported branches, since we'll doubtless be hearing  
about this more as more people update to recent Xcode.  
  

When using the OSSP UUID library, cache its uuid_t state object.

  
commit   : 3606754da9928de4669df7a29d9500d7da5693b9    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 29 May 2014 13:51:12 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 29 May 2014 13:51:12 -0400    


  
The original coding in contrib/uuid-ossp created and destroyed a uuid_t  
object (or, in some cases, even two of them) each time it was called.  
This is not the intended usage: you're supposed to keep the uuid_t object  
around so that the library can cache its state across uses.  (Other UUID  
libraries seem to keep equivalent state behind-the-scenes in static  
variables, but OSSP chose differently.)  Aside from being quite inefficient,  
creating a new uuid_t loses knowledge of the previously generated UUID,  
which in theory could result in duplicate V1-style UUIDs being created  
on sufficiently fast machines.  
  
On at least some platforms, creating a new uuid_t also draws some entropy  
from /dev/urandom, leaving less for the rest of the system.  This seems  
sufficiently unpleasant to justify back-patching this change.  
  

Revert "Fix bogus %name-prefix option syntax in all our Bison files."

  
commit   : 43c658f523dafa000eb887ddfa876700f07c745f    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Wed, 28 May 2014 19:29:29 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Wed, 28 May 2014 19:29:29 -0400    


  
This reverts commit 4c5fde4e288983f30dae09a7eea8e6a9e6145477.  
  
It turns out that the %name-prefix syntax without "=" does not work  
at all in pre-2.4 Bison.  We are not prepared to make such a large  
jump in minimum required Bison version just to suppress a warning  
message in a version hardly any developers are using yet.  
When 3.0 gets more popular, we'll figure out a way to deal with this.  
In the meantime, BISONFLAGS=-Wno-deprecated is recommendable for  
anyone using 3.0 who doesn't want to see the warning.  
  

Fix bogus %name-prefix option syntax in all our Bison files.

  
commit   : 4c5fde4e288983f30dae09a7eea8e6a9e6145477    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Wed, 28 May 2014 15:42:01 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Wed, 28 May 2014 15:42:01 -0400    


  
%name-prefix doesn't use an "=" sign according to the Bison docs, but it  
silently accepted one anyway, until Bison 3.0.  This was originally a  
typo of mine in commit 012abebab1bc72043f3f670bf32e91ae4ee04bd2, and we  
seem to have slavishly copied the error into all the other grammar files.  
  
Per report from Vik Fearing; analysis by Peter Eisentraut.  
  
Back-patch to all active branches, since somebody might try to build  
a back branch with up-to-date tools.  
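For reference, the two spellings at issue look roughly like this (a sketch; the prefix name is illustrative, in the style of PostgreSQL's grammar files):

```
%name-prefix "base_yy"     correct per the Bison documentation
%name-prefix="base_yy"     typo form, rejected starting with Bison 3.0
```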
  

Ensure cleanup in case of early errors in streaming base backups

  
commit   : 0282dc2551e32f486b173c3104c3ecdf6d345e67    
  
author   : Magnus Hagander <magnus@hagander.net>    
date     : Wed, 28 May 2014 13:03:21 +0200    
  
committer: Magnus Hagander <magnus@hagander.net>    
date     : Wed, 28 May 2014 13:03:21 +0200    


  
Move the code that sends the initial status information as well as the  
calculation of paths inside the ENSURE_ERROR_CLEANUP block. If this code  
failed, we would "leak" a counter of number of concurrent backups, thereby  
making the system always believe it was in backup mode. This could happen  
if the sending failed (which it probably never did given that the small  
amount of data to send would never cause a flush). It is very low risk, but  
all operations after do_pg_start_backup should be protected.  
  

Avoid unportable usage of sscanf(UINT64_FORMAT).

  
commit   : 3ae8e8bf552f622600b1a356550882e886753119    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 26 May 2014 22:23:39 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 26 May 2014 22:23:39 -0400    

Click here for diff

  
On Mingw, it seems that scanf() doesn't necessarily accept the same format  
codes that printf() does, and in particular it may fail to recognize %llu  
even though printf() does.  Since configure only probes printf() behavior  
while setting up the INT64_FORMAT macros, this means it's unsafe to use  
those macros with scanf().  We had only one instance of such a coding  
pattern, in contrib/pg_stat_statements, so change that code to avoid  
the problem.  
  
Per buildfarm warnings.  Back-patch to 9.0 where the troublesome code  
was introduced.  
  
Michael Paquier  
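
A minimal sketch of the kind of workaround described above: rather than trusting scanf() to understand the platform's 64-bit format code, parse the digits with strtoull(), which C99 requires to cover the full unsigned 64-bit range. `parse_uint64` is a hypothetical helper, not the pg_stat_statements code itself.

```c
#include <stdint.h>
#include <stdlib.h>
#include <errno.h>

/* Parse an unsigned 64-bit decimal value without relying on
 * sscanf(UINT64_FORMAT), which Mingw's scanf may not recognize even
 * though its printf does.  Returns 1 on success, 0 on failure. */
static int
parse_uint64(const char *str, uint64_t *result)
{
    char *end;

    errno = 0;
    unsigned long long val = strtoull(str, &end, 10);

    if (errno != 0 || end == str)
        return 0;               /* overflow, or no digits consumed */
    *result = (uint64_t) val;
    return 1;
}
```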
  

Use 0-based numbering in comments about backup blocks.

  
commit   : 0d202521988e541827adea389b2a3b4e6c351bfe    
  
author   : Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Mon, 19 May 2014 13:21:59 +0300    
  
committer: Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Mon, 19 May 2014 13:21:59 +0300    

Click here for diff

  
The macros and functions that work with backup blocks in the redo function  
use 0-based numbering, so let's use that consistently in the function that  
generates the records too. Makes it so much easier to compare the  
generation and replay functions.  
  
Backpatch to 9.0, where we switched from 1-based to 0-based numbering.  
  

Initialize tsId and dbId fields in WAL record of COMMIT PREPARED.

  
commit   : 39b3739c05688b5cd5d5da8c52fa5476304eff11    
  
author   : Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Fri, 16 May 2014 09:47:50 +0300    
  
committer: Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Fri, 16 May 2014 09:47:50 +0300    

Click here for diff

  
Commit dd428c79 added dbId and tsId to the xl_xact_commit struct but missed  
that prepared transaction commits reuse that struct. Fix that.  
  
Because those fields were left uninitialized, replaying a commit prepared WAL  
record in a hot standby node would fail to remove the relcache init file.  
That can lead to "could not open file" errors on the standby. Relcache init  
file only needs to be removed when a system table/index is rewritten in the  
transaction using two phase commit, so that should be rare in practice. In  
HEAD, the incorrect dbId/tsId values are also used for filtering in logical  
replication code, causing the transaction to always be filtered out.  
  
Analysis and fix by Andres Freund. Backpatch to 9.0 where hot standby was  
introduced.  
  

Fix unportable setvbuf() usage in initdb.

  
commit   : da05e57f70d42a41a0c6f01e2440bd3efbd972d1    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 15 May 2014 15:58:05 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 15 May 2014 15:58:05 -0400    

Click here for diff

  
In yesterday's commit 2dc4f011fd61501cce507be78c39a2677690d44b, I tried  
to force buffering of stdout/stderr in initdb to be what it is by  
default when the program is run interactively on Unix (since that's how  
most manual testing is done).  This tripped over the fact that Windows  
doesn't support _IOLBF mode.  We dealt with that a long time ago in  
syslogger.c by falling back to unbuffered mode on Windows.  Export that  
solution in port.h and use it in initdb.  
  
Back-patch to 8.4, like the previous commit.  
  

Handle duplicate XIDs in txid_snapshot.

  
commit   : f47e4ce6cea2ced5da35e49d41154a19d7b3ebed    
  
author   : Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Thu, 15 May 2014 18:29:20 +0300    
  
committer: Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Thu, 15 May 2014 18:29:20 +0300    

Click here for diff

  
The proc array can contain duplicate XIDs, when a transaction is just being  
prepared for two-phase commit. To cope, remove any duplicates in  
txid_current_snapshot(). Also ignore duplicates in the input functions, so  
that if e.g. you have an old pg_dump file that already contains duplicates,  
it will be accepted.  
  
Report and fix by Jan Wieck. Backpatch to all supported versions.  
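
The dedup step the fix adds can be sketched in isolation as a plain sort-then-unique pass over the XID array; this is an illustrative standalone version, not the actual txid.c code.

```c
#include <stdint.h>
#include <stdlib.h>

typedef uint32_t TransactionId;

static int
xid_cmp(const void *a, const void *b)
{
    TransactionId xa = *(const TransactionId *) a;
    TransactionId xb = *(const TransactionId *) b;

    return (xa > xb) - (xa < xb);
}

/* Sort the XID array and squeeze out duplicates, as
 * txid_current_snapshot() must do because the proc array can briefly
 * hold the same XID twice while a transaction is being prepared.
 * Returns the new element count. */
static size_t
sort_and_dedup_xids(TransactionId *xids, size_t n)
{
    if (n < 2)
        return n;
    qsort(xids, n, sizeof(TransactionId), xid_cmp);

    size_t out = 1;
    for (size_t i = 1; i < n; i++)
        if (xids[i] != xids[out - 1])
            xids[out++] = xids[i];
    return out;
}
```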
  

Fix race condition in preparing a transaction for two-phase commit.

  
commit   : 8c19b807c49aaaa18d1a166df5649ec2c04df320    
  
author   : Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Thu, 15 May 2014 16:37:50 +0300    
  
committer: Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Thu, 15 May 2014 16:37:50 +0300    

Click here for diff

  
To lock a prepared transaction's shared memory entry, we used to mark it  
with the XID of the backend. When the XID was no longer active according  
to the proc array, the entry was implicitly considered as not locked  
anymore. However, when preparing a transaction, the backend's proc array  
entry was cleared before transferring the locks (and some other state) to  
the prepared transaction's dummy PGPROC entry, so there was a window where  
another backend could finish the transaction before it was in fact fully  
prepared.  
  
To fix, rewrite the locking mechanism of global transaction entries. Instead  
of an XID, just have simple locked-or-not flag in each entry (we store the  
locking backend's backend id rather than a simple boolean, but that's just  
for debugging purposes). The backend is responsible for explicitly unlocking  
the entry, and to make sure that that happens, install a callback to unlock  
it on abort or process exit.  
  
Backpatch to all supported versions.  
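
A toy model of the new scheme described above: each entry records the locking backend's id (or an invalid id when unlocked), and unlocking is an explicit operation that an abort/exit callback can also perform. Names and types here are illustrative, not the actual twophase.c structures.

```c
typedef int BackendId;
#define InvalidBackendId (-1)

typedef struct
{
    /* InvalidBackendId when unlocked; otherwise the locker's id.
     * Storing the id rather than a boolean aids debugging. */
    BackendId   locking_backend;
} GlobalTransactionEntry;

/* Try to lock the entry for backend 'me'; returns 1 on success. */
static int
lock_gxact(GlobalTransactionEntry *gxact, BackendId me)
{
    if (gxact->locking_backend != InvalidBackendId)
        return 0;               /* already locked by someone */
    gxact->locking_backend = me;
    return 1;
}

/* Called explicitly, or from the on-abort/on-exit callback, so the
 * entry can never be left locked by a dead backend. */
static void
unlock_gxact(GlobalTransactionEntry *gxact)
{
    gxact->locking_backend = InvalidBackendId;
}
```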
  

In initdb, ensure stdout/stderr buffering behavior is what we expect.

  
commit   : 360ec00a57964ce27c4ee064b7313d55dbf2fb9f    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Wed, 14 May 2014 21:14:02 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Wed, 14 May 2014 21:14:02 -0400    

Click here for diff

  
Since this program may print to either stdout or stderr, the relative  
ordering of its messages depends on the buffering behavior of those files.  
Force stdout to be line-buffered and stderr to be unbuffered, ensuring  
that the behavior will match standard Unix interactive behavior, even  
when stdout and stderr are rerouted to a file.  
  
Per complaint from Tomas Vondra.  The particular case he pointed out is  
new in HEAD, but issues of the same sort could arise in any branch with  
other error messages, so back-patch to all branches.  
  
I'm unsure whether we might not want to do this in other client programs  
as well.  For the moment, just fix initdb.  
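
The buffering setup amounts to two setvbuf() calls; a sketch, with the Windows caveat (no _IOLBF support, fall back to unbuffered) folded in. `set_interactive_buffering` is a hypothetical helper name.

```c
#include <stdio.h>

/* Force predictable buffering so stdout and stderr messages interleave
 * the same way whether output goes to a terminal or is rerouted to a
 * file.  Windows does not support _IOLBF, so fall back to unbuffered
 * there, as syslogger.c has long done.  Returns 0 on success. */
static int
set_interactive_buffering(FILE *out, FILE *err)
{
#ifdef WIN32
    if (setvbuf(out, NULL, _IONBF, 0) != 0)   /* no _IOLBF on Windows */
        return -1;
#else
    if (setvbuf(out, NULL, _IOLBF, 0) != 0)   /* line-buffered stdout */
        return -1;
#endif
    return setvbuf(err, NULL, _IONBF, 0);     /* unbuffered stderr */
}
```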
  

Initialize padding bytes in btree_gist varbit support.

  
commit   : 1913d0f28d6ad1ccebba1035e7c319b1ff4a8b02    
  
author   : Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Tue, 13 May 2014 14:16:28 +0300    
  
committer: Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Tue, 13 May 2014 14:16:28 +0300    

Click here for diff

  
The code expands a varbit gist leaf key to a node key by copying the bit  
data twice in a varlen datum, as both the lower and upper key. The lower key  
was expanded to INTALIGN size, but the padding bytes were not initialized.  
That's a problem because when the lower/upper keys are compared, the padding  
bytes are compared too when the values are otherwise equal. That could  
lead to incorrect query results.  
  
REINDEX is advised for any btree_gist indexes on bit or bit varying data  
type, to fix any garbage padding bytes on disk.  
  
Per Valgrind, reported by Andres Freund. Backpatch to all supported  
versions.  
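
The fix boils down to zeroing the alignment padding at the time the key is expanded; a self-contained sketch (not the btree_gist code itself) of that pattern:

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Round up to 4-byte alignment, as PostgreSQL's INTALIGN macro does. */
#define INTALIGN(len) (((uintptr_t) (len) + 3) & ~(uintptr_t) 3)

/* Copy key bytes into a buffer of INTALIGN'd size, explicitly zeroing
 * the padding so that later byte-wise comparisons of two otherwise
 * equal keys cannot be decided by leftover garbage.  Caller frees. */
static unsigned char *
copy_key_padded(const unsigned char *data, size_t len, size_t *padded_len)
{
    size_t          alen = INTALIGN(len);
    unsigned char  *buf = malloc(alen);

    if (buf == NULL)
        return NULL;
    memcpy(buf, data, len);
    memset(buf + len, 0, alen - len);   /* the missing initialization */
    *padded_len = alen;
    return buf;
}
```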
  

Ignore config.pl and buildenv.pl in src/tools/msvc.

  
commit   : c2a4bb3ded75b92c3f92dfff0772f50726c77c6a    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 12 May 2014 14:24:18 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 12 May 2014 14:24:18 -0400    

Click here for diff

  
config.pl and buildenv.pl can be used to customize build settings when  
using MSVC.  They should never get committed into the common source tree.  
  
Back-patch to 9.0; it looks like the rules were different in 8.4.  
  
Michael Paquier  
  

Accept tcl 8.6 in configure's probe for tclsh.

  
commit   : 8bd328eaee0b3a9d96e5e2f0f6a420ffe05bf4d2    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sat, 10 May 2014 10:48:11 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sat, 10 May 2014 10:48:11 -0400    

Click here for diff

  
Usually the search would find plain "tclsh" without any trouble,  
but some installations might only have the version-numbered flavor  
of that program.  
  
No compatibility problems have been reported with 8.6, so we might  
as well back-patch this to all active branches.  
  
Christoph Berg  
  

Document permissions needed for pg_database_size and pg_tablespace_size.

  
commit   : bb837d75fa30e785af01c1e627e1cae9ba28e424    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 8 May 2014 21:45:02 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 8 May 2014 21:45:02 -0400    

Click here for diff

  
Back in 8.3, we installed permissions checks in these functions (see  
commits 8bc225e7990a and cc26599b7206).  But we forgot to document that  
anywhere in the user-facing docs; it did get mentioned in the 8.3 release  
notes, but nobody's looking at that any more.  Per gripe from Suya Huang.  
  

Un-break ecpg test suite under --disable-integer-datetimes.

  
commit   : 019be0df10afac30b31dd28a22931602ccfba2d9    
  
author   : Noah Misch <noah@leadboat.com>    
date     : Thu, 8 May 2014 19:29:02 -0400    
  
committer: Noah Misch <noah@leadboat.com>    
date     : Thu, 8 May 2014 19:29:02 -0400    

Click here for diff

  
Commit 4318daecc959886d001a6e79c6ea853e8b1dfb4b broke it.  The change in  
sub-second precision at extreme dates is normal.  The inconsistent  
truncation vs. rounding is essentially a bug, albeit a longstanding one.  
Back-patch to 8.4, like the causative commit.  
  

Protect against torn pages when deleting GIN list pages.

  
commit   : 686a7194ef1676e4735bbe1c0270dc25c6f33796    
  
author   : Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Thu, 8 May 2014 14:43:04 +0300    
  
committer: Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Thu, 8 May 2014 14:43:04 +0300    

Click here for diff

  
To-be-deleted list pages contain no useful information, as they are being  
deleted, but we must still protect the writes from being torn by a crash  
after a partial write. To do that, re-initialize the pages on WAL replay.  
  
Jeff Janes caught this with a test program to test partial writes.  
Backpatch to all supported versions.  
  

Include files copied from libpqport in .gitignore

  
commit   : 5c5bfc0ac3653acad439495c27c78f129668d114    
  
author   : Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Thu, 8 May 2014 10:56:57 +0300    
  
committer: Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Thu, 8 May 2014 10:56:57 +0300    

Click here for diff

  
Michael Paquier  
  

Avoid buffer bloat in libpq when server is consistently faster than client.

  
commit   : 86888054a92aeca429593a22437ecc83fd300985    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Wed, 7 May 2014 21:38:44 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Wed, 7 May 2014 21:38:44 -0400    

Click here for diff

  
If the server sends a long stream of data, and the server + network are  
consistently fast enough to force the recv() loop in pqReadData() to  
iterate until libpq's input buffer is full, then upon processing the last  
incomplete message in each bufferload we'd usually double the buffer size,  
due to supposing that we didn't have enough room in the buffer to finish  
collecting that message.  After filling the newly-enlarged buffer, the  
cycle repeats, eventually resulting in an out-of-memory situation (which  
would be reported misleadingly as "lost synchronization with server").  
Of course, we should not enlarge the buffer unless we still need room  
after discarding already-processed messages.  
  
This bug dates back quite a long time: pqParseInput3 has had the behavior  
since perhaps 2003, getCopyDataMessage at least since commit 70066eb1a1ad  
in 2008.  Probably the reason it's not been isolated before is that in  
common environments the recv() loop would always be faster than the server  
(if on the same machine) or faster than the network (if not); or at least  
it wouldn't be slower consistently enough to let the buffer ramp up to a  
problematic size.  The reported cases involve Windows, which perhaps has  
different timing behavior than other platforms.  
  
Per bug #7914 from Shin-ichi Morita, though this is different from his  
proposed solution.  Back-patch to all supported branches.  
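
A simplified model of the corrected logic: before enlarging the buffer to fit an incomplete message, first discard (compact away) the already-processed messages; only grow if the unread tail still fills the buffer. The `InBuffer` struct and `ensure_room` helper are illustrative, not libpq's actual pqReadData code.

```c
#include <stdlib.h>
#include <string.h>

typedef struct
{
    char   *buf;
    size_t  len;        /* bytes currently held */
    size_t  start;      /* offset of first unprocessed byte */
    size_t  size;       /* allocated size */
} InBuffer;

/* Make room for more input.  Compaction alone often suffices; doubling
 * the buffer only when the unread data truly fills it is what prevents
 * the unbounded growth described above.  Returns 0 on success. */
static int
ensure_room(InBuffer *in)
{
    if (in->start > 0)
    {
        /* discard processed messages: slide unread data to the front */
        memmove(in->buf, in->buf + in->start, in->len - in->start);
        in->len -= in->start;
        in->start = 0;
    }
    if (in->len < in->size)
        return 0;                   /* compaction made room; don't grow */

    char *nbuf = realloc(in->buf, in->size * 2);

    if (nbuf == NULL)
        return -1;
    in->buf = nbuf;
    in->size *= 2;
    return 0;
}
```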
  

Fix failure to set ActiveSnapshot while rewinding a cursor.

  
commit   : 229101db4d7696c555b338264ef9c89f769c511d    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Wed, 7 May 2014 14:25:22 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Wed, 7 May 2014 14:25:22 -0400    

Click here for diff

  
ActiveSnapshot needs to be set when we call ExecutorRewind because some  
plan node types may execute user-defined functions during their ReScan  
calls (nodeLimit.c does so, at least).  The wisdom of that is somewhat  
debatable, perhaps, but for now the simplest fix is to make sure the  
required context is valid.  Failure to do this typically led to a  
null-pointer-dereference core dump, though it's possible that in more  
complex cases a function could be executed with the wrong snapshot  
leading to very subtle misbehavior.  
  
Per report from Leif Jensen.  It's been broken for a long time, so  
back-patch to all active branches.  
  

Fix interval test, which was broken for floating-point timestamps.

  
commit   : 47e4309c07a399ce4bf896e0ba5edcdf1a691ada    
  
author   : Jeff Davis <jdavis@postgresql.org>    
date     : Tue, 6 May 2014 19:35:24 -0700    
  
committer: Jeff Davis <jdavis@postgresql.org>    
date     : Tue, 6 May 2014 19:35:24 -0700    

Click here for diff

  
Commit 4318daecc959886d001a6e79c6ea853e8b1dfb4b introduced a test that  
couldn't be made consistent between integer and floating-point  
timestamps.  
  
It was designed to test the longest possible interval output length,  
so removing four zeros from the number of hours, as this patch does,  
is not ideal. But the test still has some utility for its original  
purpose, and there aren't a lot of other good options.  
  
Noah Misch suggested a different approach where we test that the  
output either matches what we expect from integer timestamps or what  
we expect from floating-point timestamps. That seemed to obscure an  
otherwise simple test, however.  
  
Reviewed by Tom Lane and Noah Misch.  
  

Remove tabs after spaces in C comments

  
commit   : 2616a5d300e5bb5a2838d2a065afa3740e08727f    
  
author   : Bruce Momjian <bruce@momjian.us>    
date     : Tue, 6 May 2014 11:26:26 -0400    
  
committer: Bruce Momjian <bruce@momjian.us>    
date     : Tue, 6 May 2014 11:26:26 -0400    

Click here for diff

  
This was not changed in HEAD, but will be done later as part of a  
pgindent run.  Future pgindent runs will also do this.  
  
Report by Tom Lane  
  
Backpatch through all supported branches, but not HEAD  
  

Fix use of free in walsender error handling after a sysid mismatch.

  
commit   : e0070a6858cfcd2c4129dfa93bc042d6d86732c8    
  
author   : Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Tue, 6 May 2014 15:14:51 +0300    
  
committer: Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Tue, 6 May 2014 15:14:51 +0300    

Click here for diff

  
Found via valgrind. The bug exists since the introduction of the walsender,  
so backpatch to 9.0.  
  
Andres Freund  
  

Fix handling of array of char pointers in ecpglib.

  
commit   : fb66e88cf8f7ce7abf27cff1570a703e1cb8f562    
  
author   : Michael Meskes <meskes@postgresql.org>    
date     : Tue, 6 May 2014 13:04:30 +0200    
  
committer: Michael Meskes <meskes@postgresql.org>    
date     : Tue, 6 May 2014 13:04:30 +0200    

Click here for diff

  
When an array of char * was used as the target of a FETCH statement returning  
more than one row, it tried to store the entire result in the first element.  
Instead it should store each row at the right offset in the array of char  
pointers, use the address instead of the value of the C variable while reading  
the array, and treat such a variable as char ** rather than char * for pointer  
arithmetic.  
  
Patch by Ashutosh Bapat <ashutosh.bapat@enterprisedb.com>  
  

Fix possible cache invalidation failure in ReceiveSharedInvalidMessages.

  
commit   : 2f4ee3a2f02f612046ecc134ccf50af10240bbb1    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 5 May 2014 14:43:49 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 5 May 2014 14:43:49 -0400    

Click here for diff

  
Commit fad153ec45299bd4d4f29dec8d9e04e2f1c08148 modified sinval.c to reduce  
the number of calls into sinvaladt.c (which require taking a shared lock)  
by keeping a local buffer of collected-but-not-yet-processed messages.  
However, if processing of the last message in a batch resulted in a  
recursive call to ReceiveSharedInvalidMessages, we could overwrite that  
message with a new one while the outer invalidation function was still  
working on it.  This would be likely to lead to invalidation of the wrong  
cache entry, allowing subsequent processing to use stale cache data.  
The fix is just to make a local copy of each message while we're processing  
it.  
  
Spotted by Andres Freund.  Back-patch to 8.4 where the bug was introduced.  
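
The hazard and the fix can be modeled in a few lines: processing a message may recursively refill the shared slot it came from, so the message must be copied into a local variable first. This is a toy simulation, not the sinval.c code.

```c
typedef struct
{
    int cache_id;
} SharedInvalMsg;

/* Stands in for the collected-but-not-yet-processed message buffer. */
static SharedInvalMsg shared_slot;

/* Like a recursive ReceiveSharedInvalidMessages call: overwrites the
 * shared slot while the outer invocation is still processing it. */
static void
clobbering_handler(void)
{
    shared_slot.cache_id = -1;
}

/* The fix: copy the message out of the shared buffer before invoking
 * the handler, so a recursive refill cannot change it under us. */
static int
process_next_message(void)
{
    SharedInvalMsg local = shared_slot;     /* local copy, per the fix */

    clobbering_handler();
    return local.cache_id;                  /* still the original message */
}
```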
  

Fix "quiet inline" configure test for newer clang compilers.

  
commit   : e70980747dbe50b5ddc9aee88912e78825cacdd1    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 2 May 2014 15:30:38 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 2 May 2014 15:30:38 -0400    

Click here for diff

  
This test used to just define an unused static inline function and check  
whether that causes a warning.  But newer clang versions warn about  
unused static inline functions when defined inside a .c file, but not  
when defined in an included header, which is the case we care about.  
Change the test to cope.  
  
Andres Freund  
  

Fix failure to detoast fields in composite elements of structured types.

  
commit   : db1fdc945da471a8dc1f0dd701270858acd0806f    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 1 May 2014 15:19:17 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 1 May 2014 15:19:17 -0400    

Click here for diff

  
If we have an array of records stored on disk, the individual record fields  
cannot contain out-of-line TOAST pointers: the tuptoaster.c mechanisms are  
only prepared to deal with TOAST pointers appearing in top-level fields of  
a stored row.  The same applies for ranges over composite types, nested  
composites, etc.  However, the existing code only took care of expanding  
sub-field TOAST pointers for the case of nested composites, not for other  
structured types containing composites.  For example, given a command such  
as  
  
UPDATE tab SET arraycol = ARRAY[ROW(x,42)::mycompositetype] ...  
  
where x is a direct reference to a field of an on-disk tuple, if that field  
is long enough to be toasted out-of-line then the TOAST pointer would be  
inserted as-is into the array column.  If the source record for x is later  
deleted, the array field value would become a dangling pointer, leading  
to errors along the line of "missing chunk number 0 for toast value ..."  
when the value is referenced.  A reproducible test case for this was  
provided by Jan Pecek, but it seems likely that some of the "missing chunk  
number" reports we've heard in the past were caused by similar issues.  
  
Code-wise, the problem is that PG_DETOAST_DATUM() is not adequate to  
produce a self-contained Datum value if the Datum is of composite type.  
Seen in this light, the problem is not just confined to arrays and ranges,  
but could also affect some other places where detoasting is done in that  
way, for example form_index_tuple().  
  
I tried teaching the array code to apply toast_flatten_tuple_attribute()  
along with PG_DETOAST_DATUM() when the array element type is composite,  
but this was messy and imposed extra cache lookup costs whether or not any  
TOAST pointers were present, indeed sometimes when the array element type  
isn't even composite (since sometimes it takes a typcache lookup to find  
that out).  The idea of extending that approach to all the places that  
currently use PG_DETOAST_DATUM() wasn't attractive at all.  
  
This patch instead solves the problem by decreeing that composite Datum  
values must not contain any out-of-line TOAST pointers in the first place;  
that is, we expand out-of-line fields at the point of constructing a  
composite Datum, not at the point where we're about to insert it into a  
larger tuple.  This rule is applied only to true composite Datums, not  
to tuples that are being passed around the system as tuples, so it's not  
as invasive as it might sound at first.  With this approach, the amount  
of code that has to be touched for a full solution is greatly reduced,  
and added cache lookup costs are avoided except when there actually is  
a TOAST pointer that needs to be inlined.  
  
The main drawback of this approach is that we might sometimes dereference  
a TOAST pointer that will never actually be used by the query, imposing a  
rather large cost that wasn't there before.  On the other side of the coin,  
if the field value is used multiple times then we'll come out ahead by  
avoiding repeat detoastings.  Experimentation suggests that common SQL  
coding patterns are unaffected either way, though.  Applications that are  
very negatively affected could be advised to modify their code to not fetch  
columns they won't be using.  
  
In future, we might consider reverting this solution in favor of detoasting  
only at the point where data is about to be stored to disk, using some  
method that can drill down into multiple levels of nested structured types.  
That will require defining new APIs for structured types, though, so it  
doesn't seem feasible as a back-patchable fix.  
  
Note that this patch changes HeapTupleGetDatum() from a macro to a function  
call; this means that any third-party code using that macro will not get  
protection against creating TOAST-pointer-containing Datums until it's  
recompiled.  The same applies to any uses of PG_RETURN_HEAPTUPLEHEADER().  
It seems likely that this is not a big problem in practice: most of the  
tuple-returning functions in core and contrib produce outputs that could  
not possibly be toasted anyway, and the same probably holds for third-party  
extensions.  
  
This bug has existed since TOAST was invented, so back-patch to all  
supported branches.  
  

Check for interrupts and stack overflow during rule/view dumps.

  
commit   : 3897ee9b1fe5000552e38b325130ece9a2dfa0d0    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Wed, 30 Apr 2014 13:46:22 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Wed, 30 Apr 2014 13:46:22 -0400    

Click here for diff

  
Since ruleutils.c recurses, it could be driven to stack overflow by  
deeply nested constructs.  Very large queries might also take long  
enough to deparse that a check for interrupts seems like a good idea.  
Stick appropriate tests into a couple of key places.  
  
Noted by Greg Stark.  Back-patch to all supported branches.  
  

Add missing SYSTEMQUOTEs

  
commit   : 94095e341c1b23e581ffb7227b019df8d2687e3a    
  
author   : Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Wed, 30 Apr 2014 10:34:15 +0300    
  
committer: Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Wed, 30 Apr 2014 10:34:15 +0300    

Click here for diff

  
Some popen() calls were missing SYSTEMQUOTEs, which caused initdb and  
pg_upgrade to fail on Windows, if the installation path contained both  
spaces and @ signs.  
  
Patch by Nikhil Deshpande. Backpatch to all supported versions.  
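
The SYSTEMQUOTE idiom mentioned above wraps the whole command line in an extra pair of quotes on Windows (where cmd.exe strips the outermost quotes from the string passed to popen()/system()) and expands to nothing elsewhere. A sketch of the pattern, with `build_command` as a hypothetical helper:

```c
#include <stdio.h>

#ifdef WIN32
#define SYSTEMQUOTE "\""
#else
#define SYSTEMQUOTE ""
#endif

/* Build a popen()-able command line with both the program path and its
 * argument individually quoted, and the whole string wrapped in
 * SYSTEMQUOTE.  Returns the number of characters written. */
static int
build_command(char *dst, size_t dstlen, const char *prog, const char *arg)
{
    return snprintf(dst, dstlen,
                    SYSTEMQUOTE "\"%s\" \"%s\"" SYSTEMQUOTE,
                    prog, arg);
}
```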
  

Fix two bugs in WAL-logging of GIN pending-list pages.

  
commit   : 9bc70b1d66bbaddd4de5b37500bff49f6365fc2e    
  
author   : Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Mon, 28 Apr 2014 16:12:45 +0300    
  
committer: Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Mon, 28 Apr 2014 16:12:45 +0300    

Click here for diff

  
In writeListPage, never take a full-page image of the page, because we  
have all the information required to re-initialize in the WAL record  
anyway. Before this fix, a full-page image was always generated, unless  
full_page_writes=off, because when the page is initialized its LSN is  
always 0. In stable branches, keep the code to restore the backup blocks  
if they exist, in case the WAL was generated with an older minor  
version; in master, Assert that there are no full-page images.  
  
In the redo routine, add missing "off++". Otherwise the tuples are added  
to the page in reverse order. That happens to be harmless because we  
always scan and remove all the tuples together, but it was clearly wrong.  
Also, it was masked by the first bug unless full_page_writes=off, because  
the page was always restored from a full-page image.  
  
Backpatch to all supported versions.  
  

Reset pg_stat_activity.xact_start during PREPARE TRANSACTION.

  
commit   : 70e7be2647106c30784627e69b9d92342e77dc3e    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 24 Apr 2014 13:29:48 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 24 Apr 2014 13:29:48 -0400    

Click here for diff

  
Once we've completed a PREPARE, our session is not running a transaction,  
so its entry in pg_stat_activity should show xact_start as null, rather  
than leaving the value as the start time of the now-prepared transaction.  
  
I think possibly this oversight was triggered by faulty extrapolation  
from the adjacent comment that says PrepareTransaction should not call  
AtEOXact_PgStat, so tweak the wording of that comment.  
  
Noted by Andres Freund while considering bug #10123 from Maxim Boguk,  
although this error doesn't seem to explain that report.  
  
Back-patch to all active branches.  
  

Fix incorrect pg_proc.proallargtypes entries for two built-in functions.

  
commit   : d1d2845287d74e8734f55592c6eeea4dcaae9949    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Wed, 23 Apr 2014 21:21:15 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Wed, 23 Apr 2014 21:21:15 -0400    

Click here for diff

  
pg_sequence_parameters() and pg_identify_object() have had incorrect  
proallargtypes entries since 9.1 and 9.3 respectively.  This was mostly  
masked by the correct information in proargtypes, but a few operations  
such as pg_get_function_arguments() (and thus psql's \df display) would  
show the wrong data types for these functions' input parameters.  
  
In HEAD, fix the wrong info, bump catversion, and add an opr_sanity  
regression test to catch future mistakes of this sort.  
  
In the back branches, just fix the wrong info so that installations  
initdb'd with future minor releases will have the right data.  We  
can't force an initdb, and it doesn't seem like a good idea to add  
a regression test that will fail on existing installations.  
  
Andres Freund  
  

Fix typos in comment.

  
commit   : 59bc3cac41ec36150c3fe8a582be233be98807d8    
  
author   : Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Wed, 23 Apr 2014 12:56:41 +0300    
  
committer: Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Wed, 23 Apr 2014 12:56:41 +0300    

Click here for diff

  
  

Fix unused-variable warning on Windows.

  
commit   : 7f814400a8039c55d5691023bbbe8739798b123d    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 17 Apr 2014 16:12:24 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 17 Apr 2014 16:12:24 -0400    

Click here for diff

  
Introduced in 585bca39: msgid is not used in the Windows code path.  
  
Also adjust comments a tad (mostly to keep pgindent from messing it up).  
  
David Rowley  
  

pgcrypto: fix memset() calls that might be optimized away

  
commit   : fc02b87e2876b2492a3d5eebd3b70be383b08f40    
  
author   : Bruce Momjian <bruce@momjian.us>    
date     : Thu, 17 Apr 2014 12:37:53 -0400    
  
committer: Bruce Momjian <bruce@momjian.us>    
date     : Thu, 17 Apr 2014 12:37:53 -0400    

Click here for diff

  
Specifically, on-stack memset() might be removed, so:  
  
	* Replace memset() with px_memset()  
	* Add px_memset to copy_crlf()  
	* Add px_memset to pgp-s2k.c  
  
Patch by Marko Kreen  
  
Report by PVS-Studio  
  
Backpatch through 8.4.  
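
The reason a dedicated px_memset() is needed: a plain memset() of a buffer about to go out of scope is a dead store the compiler may legally delete. One common way to keep the store observable, shown here as a hedged sketch (pgcrypto's actual implementation may differ), is to call memset through a volatile function pointer:

```c
#include <string.h>

/* Calling through a volatile function pointer prevents the compiler
 * from proving the call is memset() and eliding it as a dead store. */
static void *(*const volatile force_memset) (void *, int, size_t) = memset;

/* Wipe sensitive data in a way the optimizer cannot remove. */
static void
secure_wipe(void *buf, size_t len)
{
    force_memset(buf, 0, len);
}
```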
  

Attempt to get plpython regression tests working again for MSVC builds.

  
commit   : 179c45ae2fb9519a343bc2e38ebe4609097d14af    
  
author   : Andrew Dunstan <andrew@dunslane.net>    
date     : Wed, 16 Apr 2014 13:35:46 -0400    
  
committer: Andrew Dunstan <andrew@dunslane.net>    
date     : Wed, 16 Apr 2014 13:35:46 -0400    

Click here for diff

  
This has probably been broken for quite a long time. Buildfarm member  
currawong's current results suggest that it's been broken since 9.1, so  
backpatch this to that branch.  
  
This only supports Python 2 - I will handle Python 3 separately, but  
this is a fairly simple fix.  
  

Use AF_UNSPEC not PF_UNSPEC in getaddrinfo calls.

  
commit   : 9ad94ba08491f3300d54f0df954363ae5fe439d4    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Wed, 16 Apr 2014 13:21:03 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Wed, 16 Apr 2014 13:21:03 -0400    

Click here for diff

  
According to the Single Unix Spec and assorted man pages, you're supposed  
to use the constants named AF_xxx when setting ai_family for a getaddrinfo  
call.  In a few places we were using PF_xxx instead.  Use of PF_xxx  
appears to be an ancient BSD convention that was not adopted by later  
standardization.  On BSD and most later Unixen, it doesn't matter much  
because those constants have equivalent values anyway; but nonetheless  
this code is not per spec.  
  
In the same vein, replace PF_INET by AF_INET in one socket() call, which  
wasn't even consistent with the other socket() call in the same function  
let alone the remainder of our code.  
  
Per investigation of a Cygwin trouble report from Marco Atzeri.  It's  
probably a long shot that this will fix his issue, but it's wrong in  
any case.  
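
The per-spec usage looks like this (a minimal example, not the fe-connect.c code; AI_NUMERICHOST is used here only so the example needs no DNS):

```c
#include <sys/types.h>
#include <sys/socket.h>
#include <netdb.h>
#include <string.h>

/* Resolve host:port accepting any address family.  Per the Single Unix
 * Spec, ai_family takes the AF_xxx constants, not the old BSD PF_xxx
 * spellings (numerically equal on most platforms, but not per spec). */
static int
lookup_any_family(const char *host, const char *port, struct addrinfo **res)
{
    struct addrinfo hints;

    memset(&hints, 0, sizeof(hints));
    hints.ai_family = AF_UNSPEC;        /* AF_, not PF_ */
    hints.ai_socktype = SOCK_STREAM;
    hints.ai_flags = AI_NUMERICHOST;    /* numeric host, no DNS lookup */
    return getaddrinfo(host, port, &hints, res);
}
```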
  

Fix timeout in LDAP lookup of libpq connection parameters

  
commit   : c4bf15b9c3f80da78e1c5c32c8063c3146f85af8    
  
author   : Magnus Hagander <magnus@hagander.net>    
date     : Wed, 16 Apr 2014 17:18:02 +0200    
  
committer: Magnus Hagander <magnus@hagander.net>    
date     : Wed, 16 Apr 2014 17:18:02 +0200    

Click here for diff

  
Bind attempts to an LDAP server should time out after two seconds,  
allowing additional lines in the service control file to be parsed  
(which provide a fall back to a secondary LDAP server or default options).  
The existing code failed to enforce that timeout during TCP connect,  
resulting in a hang far longer than two seconds if the LDAP server  
does not respond.  
  
Laurenz Albe  
  

check socket creation errors against PGINVALID_SOCKET

  
commit   : bed499ed1d94f05195c63387d4629644e0df4149    
  
author   : Bruce Momjian <bruce@momjian.us>    
date     : Wed, 16 Apr 2014 10:45:48 -0400    
  
committer: Bruce Momjian <bruce@momjian.us>    
date     : Wed, 16 Apr 2014 10:45:48 -0400    

Click here for diff

  
Previously, in some places, socket creation errors were detected by  
checking for a negative return value, which never happens on Windows  
because its socket type is unsigned.  This masked socket creation errors  
on Windows.  
  
Backpatch through 9.0.  8.4 doesn't have the infrastructure to fix this.  
  

Use correctly-sized buffer when zero-filling a WAL file.

  
commit   : 61df3d090c0e42fd0ad06e5a3d70aca148107c30    
  
author   : Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Wed, 16 Apr 2014 10:21:09 +0300    
  
committer: Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Wed, 16 Apr 2014 10:21:09 +0300    

Click here for diff

  
I mixed up BLCKSZ and XLOG_BLCKSZ when I changed the way the buffer is  
allocated a couple of weeks ago. With the default settings, they are both  
8k, but they can be changed at compile-time.  
  

Several fixes to array handling in ecpg.

  
commit   : 0de1068365909970eb75c51d82467d28631b63ef    
  
author   : Michael Meskes <meskes@postgresql.org>    
date     : Wed, 9 Apr 2014 11:21:46 +0200    
  
committer: Michael Meskes <meskes@postgresql.org>    
date     : Wed, 9 Apr 2014 11:21:46 +0200    

Click here for diff

  
Patches by Ashutosh Bapat <ashutosh.bapat@enterprisedb.com>  
  
Conflicts:  
	src/interfaces/ecpg/test/expected/preproc-outofscope.c  
  

Fix hot standby bug with GiST scans.

  
commit   : ac0078c1de6614b1db40fd1c5d03e4989e7be060    
  
author   : Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Tue, 8 Apr 2014 14:47:24 +0300    
  
committer: Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Tue, 8 Apr 2014 14:47:24 +0300    

Click here for diff

  
Don't reset the rightlink of a page when replaying a page update record.  
This was a leftover from pre-hot standby days, when it was not possible to  
have scans concurrent with WAL replay. Resetting the right-link was not  
necessary back then either, but it was done for the sake of tidiness. But  
with hot standby, it's wrong, because a concurrent scan might still need it.  
  
Backpatch all versions with hot standby, 9.0 and above.  
  

Block signals earlier during postmaster startup.

  
commit   : 093d3da1dc3cfcf3209fa44d3fdacf7fcd349e93    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sat, 5 Apr 2014 18:16:17 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sat, 5 Apr 2014 18:16:17 -0400    

Click here for diff

  
Formerly, we set up the postmaster's signal handling only when we were  
about to start launching subprocesses.  This is a bad idea though, as  
it means that for example a SIGINT arriving before that will kill the  
postmaster instantly, perhaps leaving lockfiles, socket files, shared  
memory, etc. lying about.  We'd rather that such a signal caused orderly  
postmaster termination including releasing of those resources.  A simple  
fix is to move the PostmasterMain stanza that initializes signal handling  
to an earlier point, before we've created any such resources.  Then, an  
early-arriving signal will be blocked until we're ready to deal with it  
in the usual way.  (The only part that really needs to be moved up is  
blocking of signals, but it seems best to keep the signal handler  
installation calls together with that; for one thing this ensures the  
kernel won't drop any signals we wished to get.  The handlers won't get  
invoked in any case until we unblock signals in ServerLoop.)  
  
Per a report from MauMau.  He proposed changing the way "pg_ctl stop"  
works to deal with this, but that'd just be masking one symptom not  
fixing the core issue.  
  
It's been like this since forever, so back-patch to all supported branches.  
  

Fix processing of PGC_BACKEND GUC parameters on Windows.

  
commit   : cb11f4d8d5d7fd171621b8ee6262cd42bc4e9e07    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sat, 5 Apr 2014 12:41:34 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sat, 5 Apr 2014 12:41:34 -0400    

Click here for diff

  
EXEC_BACKEND builds (i.e., Windows) failed to absorb values of PGC_BACKEND  
parameters if they'd been changed post-startup via the config file.  This  
for example prevented log_connections from working if it were turned on  
post-startup.  The mechanism for handling this case has always been a bit  
of a kluge, and it wasn't revisited when we implemented EXEC_BACKEND.  
While in a normal forking environment new backends will inherit the  
postmaster's value of such settings, EXEC_BACKEND backends have to read  
the settings from the CONFIG_EXEC_PARAMS file, and they were mistakenly  
rejecting them.  This case has always been broken in the Windows port,  
so back-patch to all supported branches.  
  
Amit Kapila  
  

Fix tablespace creation WAL replay to work on Windows.

  
commit   : af7738fe6a33523916a58ac7b276f9467566a439    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 4 Apr 2014 23:09:45 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 4 Apr 2014 23:09:45 -0400    

Click here for diff

  
The code segment that removes the old symlink (if present) wasn't clued  
into the fact that on Windows, symlinks are junction points which have  
to be removed with rmdir().  
  
Backpatch to 9.0, where the failing code was introduced.  
  
MauMau, reviewed by Muhammad Asif Naeem and Amit Kapila  
  

Avoid allocations in critical sections.

  
commit   : 895243d69ba1972157d8d2644efbf87d557abec3    
  
author   : Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Fri, 4 Apr 2014 13:12:38 +0300    
  
committer: Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Fri, 4 Apr 2014 13:12:38 +0300    

Click here for diff

  
If a palloc fails inside a critical section, the resulting ERROR is  
escalated to a PANIC.  
  

Fix documentation about joining pg_locks to other views.

  
commit   : 447e23737cc82489258f9b2564fac68cf834188f    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 3 Apr 2014 14:18:41 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 3 Apr 2014 14:18:41 -0400    

Click here for diff

  
The advice to join to pg_prepared_xacts via the transaction column was not  
updated when the transaction column was replaced by virtualtransaction.  
Since it's not quite obvious how to do that join, give an explicit example.  
For consistency also give an example for the adjacent case of joining to  
pg_stat_activity.  And link-ify the view references too, just because we  
can.  Per bug #9840 from Alexey Bashtanov.  
  
Michael Paquier and Tom Lane  
  

Fix documentation about size of interval type.

  
commit   : 7af116dd2d65ac4341502577f25eadb6fb656cfb    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 3 Apr 2014 11:05:55 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 3 Apr 2014 11:05:55 -0400    

Click here for diff

  
It's been 16 bytes, not 12, for ages.  This was fixed in passing in HEAD  
(commit 146604ec), but as a factual error it should have been back-patched.  
Per gripe from Tatsuhito Kasahara.  
  

Avoid palloc in critical section in GiST WAL-logging.

  
commit   : 05a5623766d527e1683901687dffc1cee9d7d273    
  
author   : Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Thu, 3 Apr 2014 15:09:37 +0300    
  
committer: Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Thu, 3 Apr 2014 15:09:37 +0300    

Click here for diff

  
Memory allocation can fail if you run out of memory, and inside a critical  
section that will lead to a PANIC. Use conservatively-sized arrays on the  
stack instead.  
  
There was previously no explicit limit on the number of pages a GiST split  
can produce; it was limited only by the number of LWLocks that can be held  
simultaneously (100 at the moment). This patch adds an explicit limit of 75  
pages. That should be plenty: a typical split shouldn't produce more than  
2-3 page halves.  
  
The bug has been there forever, but only backpatch down to 9.1. The code  
was changed significantly in 9.1, and it doesn't seem worth the risk or  
trouble to adapt this for 9.0 and 8.4.  
  

Fix assorted issues in client host name lookup.

  
commit   : b7a424371499f1c0b8f2b092a4178ec1b9e368f8    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Wed, 2 Apr 2014 17:11:34 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Wed, 2 Apr 2014 17:11:34 -0400    

Click here for diff

  
The code for matching clients to pg_hba.conf lines that specify host names  
(instead of IP address ranges) failed to complain if reverse DNS lookup  
failed; instead it silently didn't match, so that you might end up getting  
a surprising "no pg_hba.conf entry for ..." error, as seen in bug #9518  
from Mike Blackwell.  Since we don't want to make this a fatal error in  
situations where pg_hba.conf contains a mixture of host names and IP  
addresses (clients matching one of the numeric entries should not have to  
have rDNS data), remember the lookup failure and mention it as DETAIL if  
we get to "no pg_hba.conf entry".  Apply the same approach to forward-DNS  
lookup failures, too, rather than treating them as immediate hard errors.  
  
Along the way, fix a couple of bugs that prevented us from detecting an  
rDNS lookup error reliably, and make sure that we make only one rDNS lookup  
attempt; formerly, if the lookup attempt failed, the code would try again  
for each host name entry in pg_hba.conf.  Since more or less the whole  
point of this design is to ensure there's only one lookup attempt, not one  
per entry, the latter point represents a performance bug that seems  
sufficient justification for back-patching.  
  
Also, adjust src/port/getaddrinfo.c so that it plays as well as it can  
with this code.  Which is not all that well, since it does not have actual  
support for rDNS lookup, but at least it should return the expected (and  
required by spec) error codes so that the main code correctly perceives the  
lack of functionality as a lookup failure.  It's unlikely that PG is still  
being used in production on any machines that require our getaddrinfo.c,  
so I'm not excited about working harder than this.  
  
To keep the code in the various branches similar, this includes  
back-patching commits c424d0d1052cb4053c8712ac44123f9b9a9aa3f2 and  
1997f34db4687e671690ed054c8f30bb501b1168 into 9.2 and earlier.  
  
Back-patch to 9.1 where the facility for hostnames in pg_hba.conf was  
introduced.  
  

Fix bugs in manipulation of PgBackendStatus.st_clienthostname.

  
commit   : 2b5206901111db6dc8c6b9af0c4fde681c30e906    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 1 Apr 2014 21:30:18 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 1 Apr 2014 21:30:18 -0400    

Click here for diff

  
Initialization of this field was not being done according to the  
st_changecount protocol (it has to be done within the changecount increment  
range, not outside).  And the test to see if the value should be reported  
as null was wrong.  Noted while perusing uses of Port.remote_hostname.  
  
This was wrong from the introduction of this code (commit 4a25bc145),  
so back-patch to 9.1.  
  

Fix typo in comment.

  
commit   : b924d4cdc0a95c6af0a57d4e78dbd3d7f1b39775    
  
author   : Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Tue, 1 Apr 2014 09:27:37 +0300    
  
committer: Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Tue, 1 Apr 2014 09:27:37 +0300    

Click here for diff

  
Amit Langote  
  

Revert "Secure Unix-domain sockets of "make check" temporary clusters."

  
commit   : 3e7dfbd4fec72e57096517d765fabdf7ecb2f43a    
  
author   : Noah Misch <noah@leadboat.com>    
date     : Sat, 29 Mar 2014 03:12:00 -0400    
  
committer: Noah Misch <noah@leadboat.com>    
date     : Sat, 29 Mar 2014 03:12:00 -0400    

Click here for diff

  
About half of the buildfarm members use too-long directory names,  
strongly suggesting that this approach is a dead end.  
  

Secure Unix-domain sockets of "make check" temporary clusters.

  
commit   : 61017ea214858dc1982faf55744c74256d37056f    
  
author   : Noah Misch <noah@leadboat.com>    
date     : Sat, 29 Mar 2014 00:52:56 -0400    
  
committer: Noah Misch <noah@leadboat.com>    
date     : Sat, 29 Mar 2014 00:52:56 -0400    

Click here for diff

  
Any OS user able to access the socket can connect as the bootstrap  
superuser and in turn execute arbitrary code as the OS user running the  
test.  Protect against that by placing the socket in the temporary data  
directory, which has mode 0700 thanks to initdb.  Back-patch to 8.4 (all  
supported versions).  The hazard remains wherever the temporary cluster  
accepts TCP connections, notably on Windows.  
  
Attempts to run "make check" from a directory with a long name will now  
fail.  An alternative not sharing that problem was to place the socket  
in a subdirectory of /tmp, but that is only secure if /tmp is sticky.  
The PG_REGRESS_SOCK_DIR environment variable is available as a  
workaround when testing from long directory paths.  
  
As a convenient side effect, this lets testing proceed smoothly in  
builds that override DEFAULT_PGSOCKET_DIR.  Popular non-default values  
like /var/run/postgresql are often unwritable to the build user.  
  
Security: CVE-2014-0067  
  

Document platform-specificity of unix_socket_permissions.

  
commit   : 733c2a48c9357c60b1bd61892c8106419a23feb4    
  
author   : Noah Misch <noah@leadboat.com>    
date     : Sat, 29 Mar 2014 00:52:31 -0400    
  
committer: Noah Misch <noah@leadboat.com>    
date     : Sat, 29 Mar 2014 00:52:31 -0400    

Click here for diff

  
Back-patch to 8.4 (all supported versions).  
  

Fix refcounting bug in PLy_modify_tuple().

  
commit   : a0a9928471cb3f9d1f8a04407ea2ade46096a432    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Wed, 26 Mar 2014 16:41:41 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Wed, 26 Mar 2014 16:41:41 -0400    

Click here for diff

  
We must increment the refcount on "plntup" as soon as we have the  
reference, not sometime later.  Otherwise, if an error is thrown in  
between, the Py_XDECREF(plntup) call in the PG_CATCH block removes a  
refcount we didn't add, allowing the object to be freed even though  
it's still part of the plpython function's parsetree.  
  
This appears to be the cause of crashes seen on buildfarm member  
prairiedog.  It's a bit surprising that we've not seen it fail repeatably  
before, considering that the regression tests have been exercising the  
faulty code path since 2009.  
  
The real-world impact is probably minimal, since it's unlikely anyone would  
be provoking the "TD["new"] is not a dictionary" error in production, and  
that's the only case that is actually wrong.  Still, it's a bug affecting  
the regression tests, so patch all supported branches.  
  
In passing, remove dead variable "plstr", and demote "platt" to a local  
variable inside the PG_TRY block, since we don't need to clean it up  
in the PG_CATCH path.  
  

Don't forget to flush XLOG_PARAMETER_CHANGE record.

  
commit   : a8603f0da86682fd66c109d2f7a8570c814eba95    
  
author   : Fujii Masao <fujii@postgresql.org>    
date     : Wed, 26 Mar 2014 02:12:39 +0900    
  
committer: Fujii Masao <fujii@postgresql.org>    
date     : Wed, 26 Mar 2014 02:12:39 +0900    

Click here for diff

  
Backpatch to 9.0 where the XLOG_PARAMETER_CHANGE record was introduced.  
  

Fix typos in pg_basebackup documentation

  
commit   : 1698bd2fbd2e2aa00a3a3db03df220ea0589065a    
  
author   : Magnus Hagander <magnus@hagander.net>    
date     : Tue, 25 Mar 2014 11:16:57 +0100    
  
committer: Magnus Hagander <magnus@hagander.net>    
date     : Tue, 25 Mar 2014 11:16:57 +0100    

Click here for diff

  
Joshua Tolley  
  

Properly check for readdir/closedir() failures

  
commit   : d73cc5857faca215ee95c858e836bcc12d1d1b70    
  
author   : Bruce Momjian <bruce@momjian.us>    
date     : Fri, 21 Mar 2014 13:45:11 -0400    
  
committer: Bruce Momjian <bruce@momjian.us>    
date     : Fri, 21 Mar 2014 13:45:11 -0400    

Click here for diff

  
Clear errno before calling readdir() and handle old MinGW errno bug  
while adding full test coverage for readdir/closedir failures.  
  
Backpatch through 8.4.