PostgreSQL 9.3.7 commit log

Last-minute updates for release notes.

  
commit   : 70f2e3e20ff7dd10d2b405764f4818b11f167925    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 19 May 2015 18:33:58 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 19 May 2015 18:33:58 -0400    


  
Revise description of CVE-2015-3166, in line with scaled-back patch.  
Change release date.  
  
Security: CVE-2015-3166  
  

Revert error-throwing wrappers for the printf family of functions.

  
commit   : 13341276ec57fe21956239fa733ed69e1c1938fd    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 19 May 2015 18:16:58 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 19 May 2015 18:16:58 -0400    


  
This reverts commit 16304a013432931e61e623c8d85e9fe24709d9ba, except  
for its changes in src/port/snprintf.c; as well as commit  
cac18a76bb6b08f1ecc2a85e46c9d2ab82dd9d23 which is no longer needed.  
  
Fujii Masao reported that the previous commit caused failures in psql on  
OS X, since if one exits the pager program early while viewing a query  
result, psql sees an EPIPE error from fprintf --- and the wrapper function  
thought that was reason to panic.  (It's a bit surprising that the same  
does not happen on Linux.)  Further discussion among the security list  
concluded that the risk of other such failures was far too great, and  
that the one-size-fits-all approach to error handling embodied in the  
previous patch is unlikely to be workable.  
  
This leaves us again exposed to the possibility of the type of failure  
envisioned in CVE-2015-3166.  However, that failure mode is strictly  
hypothetical at this point: there is no concrete reason to believe that  
an attacker could trigger information disclosure through the supposed  
mechanism.  In the first place, the attack surface is fairly limited,  
since so much of what the backend does with format strings goes through  
stringinfo.c or psprintf(), and those already had adequate defenses.  
In the second place, even granting that an unprivileged attacker could  
control the occurrence of ENOMEM with some precision, it's a stretch to  
believe that he could induce it just where the target buffer contains some  
valuable information.  So we concluded that the risk of non-hypothetical  
problems induced by the patch greatly outweighs the security risks.  
We will therefore revert, and instead undertake closer analysis to  
identify specific calls that may need hardening, rather than attempt a  
universal solution.  
  
We have kept the portion of the previous patch that improved snprintf.c's  
handling of errors when it calls the platform's sprintf().  That seems to  
be an unalloyed improvement.  
  
Security: CVE-2015-3166  
  

Fix off-by-one error in Assertion.

  
commit   : b3288a6146218f95966aea550ed1a3fcf10bd5d8    
  
author   : Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Tue, 19 May 2015 19:21:46 +0300    
  
committer: Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Tue, 19 May 2015 19:21:46 +0300    


  
The point of the assertion is to ensure that the arrays allocated in stack  
are large enough, but the check was one item short.  
  
This won't matter in practice because MaxIndexTuplesPerPage is an  
overestimate, so you can't have that many items on a page in reality.  
But let's be tidy.  
  
Spotted by Anastasia Lubennikova. Backpatch to all supported versions, like  
the patch that added the assertion.  
  

Stamp 9.3.7.

  
commit   : 8c479a8c7ba908f932df29966598341de1a989c1    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 18 May 2015 14:31:21 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 18 May 2015 14:31:21 -0400    


  
  

Fix error message in pre_sync_fname.

  
commit   : 8388680ce4fedbd4054f1a651d705bc191343a06    
  
author   : Robert Haas <rhaas@postgresql.org>    
date     : Mon, 18 May 2015 12:53:09 -0400    
  
committer: Robert Haas <rhaas@postgresql.org>    
date     : Mon, 18 May 2015 12:53:09 -0400    


  
The old one didn't include %m anywhere, and required extra  
translation.  
  
Report by Peter Eisentraut. Fix by me. Review by Tom Lane.  
  

Last-minute updates for release notes.

  
commit   : 32f8d57c1dc14c289959b1d6d96820e8cb02a311    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 18 May 2015 12:09:02 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 18 May 2015 12:09:02 -0400    


  
Add entries for security issues.  
  
Security: CVE-2015-3165 through CVE-2015-3167  
  

pgcrypto: Report errant decryption as "Wrong key or corrupt data".

  
commit   : 7b758b7d605aca10b36aa1c26bbf16c33f8ac726    
  
author   : Noah Misch <noah@leadboat.com>    
date     : Mon, 18 May 2015 10:02:31 -0400    
  
committer: Noah Misch <noah@leadboat.com>    
date     : Mon, 18 May 2015 10:02:31 -0400    


  
This has been the predominant outcome.  When the output of decrypting  
with a wrong key coincidentally resembled an OpenPGP packet header,  
pgcrypto could instead report "Corrupt data", "Not text data" or  
"Unsupported compression algorithm".  The distinct "Corrupt data"  
message added no value.  The latter two error messages misled when the  
decrypted payload also exhibited fundamental integrity problems.  Worse,  
error message variance in other systems has enabled cryptologic attacks;  
see RFC 4880 section "14. Security Considerations".  Whether these  
pgcrypto behaviors are likewise exploitable is unknown.  
  
In passing, document that pgcrypto does not resist side-channel attacks.  
Back-patch to 9.0 (all supported versions).  
  
Security: CVE-2015-3167  
  

Check return values of sensitive system library calls.

  
commit   : c669915fd978c5667ce209a827635befb52819c7    
  
author   : Noah Misch <noah@leadboat.com>    
date     : Mon, 18 May 2015 10:02:31 -0400    
  
committer: Noah Misch <noah@leadboat.com>    
date     : Mon, 18 May 2015 10:02:31 -0400    


  
PostgreSQL already checked the vast majority of these, missing this  
handful that nearly cannot fail.  If putenv() failed with ENOMEM in  
pg_GSS_recvauth(), authentication would proceed with the wrong keytab  
file.  If strftime() returned zero in cache_locale_time(), using the  
unspecified buffer contents could lead to information exposure or a  
crash.  Back-patch to 9.0 (all supported versions).  
  
Other unchecked calls to these functions, especially those in frontend  
code, pose negligible security concern.  This patch does not address  
them.  Nonetheless, it is always better to check return values whose  
specification provides for indicating an error.  
  
In passing, fix an off-by-one error in strftime_win32()'s invocation of  
WideCharToMultiByte().  Upon retrieving a value of exactly MAX_L10N_DATA  
bytes, strftime_win32() would overrun the caller's buffer by one byte.  
MAX_L10N_DATA is chosen to exceed the length of every possible value, so  
the vulnerable scenario probably does not arise.  
  
Security: CVE-2015-3166  
  

Add error-throwing wrappers for the printf family of functions.

  
commit   : 34d21e77081c6a3d1fb7d8b76d6a2dcef9874efe    
  
author   : Noah Misch <noah@leadboat.com>    
date     : Mon, 18 May 2015 10:02:31 -0400    
  
committer: Noah Misch <noah@leadboat.com>    
date     : Mon, 18 May 2015 10:02:31 -0400    


  
All known standard library implementations of these functions can fail  
with ENOMEM.  A caller neglecting to check for failure would experience  
missing output, information exposure, or a crash.  Check return values  
within wrappers and code, currently just snprintf.c, that bypasses the  
wrappers.  The wrappers do not return after an error, so their callers  
need not check.  Back-patch to 9.0 (all supported versions).  
  
Popular free software standard library implementations do take pains to  
bypass malloc() in simple cases, but they risk ENOMEM for floating point  
numbers, positional arguments, large field widths, and large precisions.  
No specification demands such caution, so this commit regards every call  
to a printf family function as a potential threat.  
  
Injecting the wrappers implicitly is a compromise between patch scope  
and design goals.  I would prefer to edit each call site to name a  
wrapper explicitly.  libpq and the ECPG libraries would, ideally, convey  
errors to the caller rather than abort().  All that would be painfully  
invasive for a back-patched security fix, hence this compromise.  
  
Security: CVE-2015-3166  
  

Permit use of vsprintf() in PostgreSQL code.

  
commit   : d5abbd11479d1bc2e1439d3764251f9cb3b60755    
  
author   : Noah Misch <noah@leadboat.com>    
date     : Mon, 18 May 2015 10:02:31 -0400    
  
committer: Noah Misch <noah@leadboat.com>    
date     : Mon, 18 May 2015 10:02:31 -0400    


  
The next commit needs it.  Back-patch to 9.0 (all supported versions).  
  

Prevent a double free by not reentering be_tls_close().

  
commit   : f4c12b415f1ed07e681bab58f7d6520025edfe83    
  
author   : Noah Misch <noah@leadboat.com>    
date     : Mon, 18 May 2015 10:02:31 -0400    
  
committer: Noah Misch <noah@leadboat.com>    
date     : Mon, 18 May 2015 10:02:31 -0400    


  
Reentering this function with the right timing caused a double free,  
typically crashing the backend.  By synchronizing a disconnection with  
the authentication timeout, an unauthenticated attacker could achieve  
this somewhat consistently.  Call be_tls_close() solely from within  
proc_exit_prepare().  Back-patch to 9.0 (all supported versions).  
  
Benkocs Norbert Attila  
  
Security: CVE-2015-3165  
  

Translation updates

  
commit   : b9403dedc5b157801c19bcba1a135aac7cef2d4a    
  
author   : Peter Eisentraut <peter_e@gmx.net>    
date     : Mon, 18 May 2015 08:40:50 -0400    
  
committer: Peter Eisentraut <peter_e@gmx.net>    
date     : Mon, 18 May 2015 08:40:50 -0400    


  
Source-Git-URL: git://git.postgresql.org/git/pgtranslation/messages.git  
Source-Git-Hash: 3ce9e5ca72c3948b4c592e82a5ddb9b69b97d14b  
  

Fix typos

  
commit   : 271a68b996d6cb73dd4f1bcb56570ea67746cf5f    
  
author   : Peter Eisentraut <peter_e@gmx.net>    
date     : Sun, 17 May 2015 22:21:36 -0400    
  
committer: Peter Eisentraut <peter_e@gmx.net>    
date     : Sun, 17 May 2015 22:21:36 -0400    


  
  

Release notes for 9.4.2, 9.3.7, 9.2.11, 9.1.16, 9.0.20.

  
commit   : 01d42ca19529239fe15ae8a2e147de3a02948d7e    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sun, 17 May 2015 15:54:20 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sun, 17 May 2015 15:54:20 -0400    


  
  

pg_upgrade: properly handle timeline variables

  
commit   : 4e9935979aff55b6e6e47ed9649ae6bf01bc228a    
  
author   : Bruce Momjian <bruce@momjian.us>    
date     : Sat, 16 May 2015 15:16:28 -0400    
  
committer: Bruce Momjian <bruce@momjian.us>    
date     : Sat, 16 May 2015 15:16:28 -0400    


  
There is no behavior change here as we now always set the timeline to  
one.  
  
Report by Tom Lane  
  
Backpatch to 9.3 and 9.4  
  

Fix docs typo

  
commit   : b054732070a65adb9a6c12bfaed3a0e5b1935135    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sat, 16 May 2015 13:28:26 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sat, 16 May 2015 13:28:26 -0400    


  
I don't think "respectfully" is what was meant here ...  
  

pg_upgrade: force timeline 1 in the new cluster

  
commit   : bffbeec0cb387e0453484d26444ad6fb281c9331    
  
author   : Bruce Momjian <bruce@momjian.us>    
date     : Sat, 16 May 2015 00:40:18 -0400    
  
committer: Bruce Momjian <bruce@momjian.us>    
date     : Sat, 16 May 2015 00:40:18 -0400    


  
Previously, this prevented promoted standby servers from being upgraded  
because of a missing WAL history file.  (Timeline 1 doesn't need a  
history file, and we don't copy WAL files anyway.)  
  
Report by Christian Echerer(?), Alexey Klyukin  
  
Backpatch through 9.0  
  

pg_upgrade: only allow template0 to be non-connectable

  
commit   : 4cfba536981e7584bd051de3e1bcbe7e36a9605b    
  
author   : Bruce Momjian <bruce@momjian.us>    
date     : Sat, 16 May 2015 00:10:03 -0400    
  
committer: Bruce Momjian <bruce@momjian.us>    
date     : Sat, 16 May 2015 00:10:03 -0400    


  
This patch causes pg_upgrade to error out during its check phase if:  
  
(1) template0 is marked connectable  
or  
(2) any other database is marked non-connectable  
  
This is done because, in the first case, pg_upgrade would fail because  
the pg_dumpall --globals restore would fail, and in the second case, the  
database would not be restored, leading to data loss.  
  
Report by Matt Landry (1), Stephen Frost (2)  
  
Backpatch through 9.0  
  

Update time zone data files to tzdata release 2015d.

  
commit   : 4fd69e41247a6052f1ebb44a2c6fafacbb4a8898    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 15 May 2015 19:35:29 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 15 May 2015 19:35:29 -0400    


  
DST law changes in Egypt, Mongolia, Palestine.  
Historical corrections for Canada and Chile.  
Revised zone abbreviation for America/Adak (HST/HDT not HAST/HADT).  
  

Docs: fix erroneous claim about max byte length of GB18030.

  
commit   : 13a2b7bf6ef6232909ade08fda28221f91a3d905    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 14 May 2015 14:59:00 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 14 May 2015 14:59:00 -0400    


  
This encoding has characters up to 4 bytes long, not 2.  
  

Fix RBM_ZERO_AND_LOCK mode to not acquire lock on local buffers.

  
commit   : 96b676cc66c5a60a522364487bf7c7a9593bb229    
  
author   : Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Wed, 13 May 2015 09:44:43 +0300    
  
committer: Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Wed, 13 May 2015 09:44:43 +0300    


  
Commit 81c45081 introduced a new RBM_ZERO_AND_LOCK mode to ReadBuffer, which  
takes a lock on the buffer before zeroing it. However, you cannot take a  
lock on a local buffer, and you got a segfault instead. The version of that  
patch committed to master included a check for !isLocalBuf, and therefore  
didn't crash, but oddly I missed that in the back-patched versions. This  
patch adds that check to the back-branches too.  
  
RBM_ZERO_AND_LOCK mode is only used during WAL replay, and in hash indexes.  
WAL replay only deals with shared buffers, so the only way to trigger the  
bug is with a temporary hash index.  
  
Reported by Artem Ignatyev, analysis by Tom Lane.  
  

Fix incorrect checking of deferred exclusion constraint after a HOT update.

  
commit   : 7d09fdf82363c3d89ce350058a7a940ee843f048    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 11 May 2015 12:25:28 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 11 May 2015 12:25:28 -0400    


  
If a row that potentially violates a deferred exclusion constraint is  
HOT-updated later in the same transaction, the exclusion constraint would  
be reported as violated when the check finally occurs, even if the row(s)  
the new row originally conflicted with have since been removed.  This  
happened because the wrong TID was passed to check_exclusion_constraint(),  
causing the live HOT-updated row to be seen as a conflicting row rather  
than recognized as the row-under-test.  
  
Per bug #13148 from Evan Martin.  It's been broken since exclusion  
constraints were invented, so back-patch to all supported branches.  
  

Increase threshold for multixact member emergency autovac to 50%.

  
commit   : ddebd2119582ff84267ccd5e3dd677af8ea469aa    
  
author   : Robert Haas <rhaas@postgresql.org>    
date     : Mon, 11 May 2015 12:07:13 -0400    
  
committer: Robert Haas <rhaas@postgresql.org>    
date     : Mon, 11 May 2015 12:07:13 -0400    


  
Analysis by Noah Misch shows that the 25% threshold set by commit  
53bb309d2d5a9432d2602c93ed18e58bd2924e15 is lower than any other,  
similar autovac threshold.  While we don't know exactly what value  
will be optimal for all users, it is better to err a little on the  
high side than on the low side.  A higher value increases the risk  
that users might exhaust the available space and start seeing errors  
before autovacuum can clean things up sufficiently, but a user who  
hits that problem can compensate for it by reducing  
autovacuum_multixact_freeze_max_age to a value dependent on their  
average multixact size.  On the flip side, if the emergency cap  
imposed by that patch kicks in too early, the user will experience  
excessive wraparound scanning and will be unable to mitigate that  
problem by configuration.  The new value will hopefully reduce the  
risk of such bad experiences while still providing enough headroom  
to avoid multixact member exhaustion for most users.  
  
Along the way, adjust the documentation to reflect the effects of  
commit 04e6d3b877e060d8445eb653b7ea26b1ee5cec6b, which taught  
autovacuum to run for multixact wraparound even when autovacuum  
is configured off.  
  

Even when autovacuum=off, force it for members as we do in other cases.

  
commit   : 543fbecee5c182528fc1ecee44d7e4e981801c0b    
  
author   : Robert Haas <rhaas@postgresql.org>    
date     : Mon, 11 May 2015 10:51:14 -0400    
  
committer: Robert Haas <rhaas@postgresql.org>    
date     : Mon, 11 May 2015 10:51:14 -0400    


  
Thomas Munro, with some adjustments by me.  
  

Advance the stop point for multixact offset creation only at checkpoint.

  
commit   : 5bbac7ec1b5754043e073a45454e4c257512ce30    
  
author   : Robert Haas <rhaas@postgresql.org>    
date     : Sun, 10 May 2015 22:21:20 -0400    
  
committer: Robert Haas <rhaas@postgresql.org>    
date     : Sun, 10 May 2015 22:21:20 -0400    


  
Commit b69bf30b9bfacafc733a9ba77c9587cf54d06c0c advanced the stop point  
at vacuum time, but this has subsequently been shown to be unsafe as a  
result of analysis by myself and Thomas Munro and testing by Thomas  
Munro.  The crux of the problem is that the SLRU deletion logic may  
get confused about what to remove if, at exactly the right time during  
the checkpoint process, the head of the SLRU crosses what used to be  
the tail.  
  
This patch, by me, fixes the problem by advancing the stop point only  
following a checkpoint.  This has the additional advantage of making  
the removal logic work during recovery more like the way it works during  
normal running, which is probably good.  
  
At least one of the calls to DetermineSafeOldestOffset which this patch  
removes was already dead, because MultiXactAdvanceOldest is called only  
during recovery and DetermineSafeOldestOffset was set up to do nothing  
during recovery.  That, however, is inconsistent with the principle that  
recovery and normal running should work similarly, and was confusing to  
boot.  
  
Along the way, fix some comments that previous patches in this area  
neglected to update.  It's not clear to me whether there's any  
concrete basis for the decision to use only half of the multixact ID  
space, but it's neither necessary nor sufficient to prevent multixact  
member wraparound, so the comments should not say otherwise.  
  

Fix DetermineSafeOldestOffset for the case where there are no mxacts.

  
commit   : 24aa77ec9549f2ca67220ef7b5d7f2dce5863d31    
  
author   : Robert Haas <rhaas@postgresql.org>    
date     : Sun, 10 May 2015 21:34:26 -0400    
  
committer: Robert Haas <rhaas@postgresql.org>    
date     : Sun, 10 May 2015 21:34:26 -0400    


  
Commit b69bf30b9bfacafc733a9ba77c9587cf54d06c0c failed to take into  
account the possibility that there might be no multixacts in existence  
at all.  
  
Report by Thomas Munro; patch by me.  
  

Recommend include_realm=1 in docs

  
commit   : 3de791ee766779f89e399da6316e0d280de6ecaa    
  
author   : Stephen Frost <sfrost@snowman.net>    
date     : Fri, 8 May 2015 19:39:52 -0400    
  
committer: Stephen Frost <sfrost@snowman.net>    
date     : Fri, 8 May 2015 19:39:52 -0400    


  
As discussed, the default setting of include_realm=0 can be dangerous in  
multi-realm environments because it is then impossible to differentiate  
users with the same username but who are from two different realms.  
  
Recommend include_realm=1 and note that the default setting may change  
in a future version of PostgreSQL and therefore users may wish to  
explicitly set include_realm to avoid issues while upgrading.  
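A pg_hba.conf line setting the option explicitly might look like this (hypothetical address range; only include_realm=1 is the point):

```
# Keep the realm as part of the user name, so alice@A.EXAMPLE and
# alice@B.EXAMPLE map to distinct PostgreSQL users.
host  all  all  0.0.0.0/0  gss  include_realm=1
```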
  

Teach autovacuum about multixact member wraparound.

  
commit   : 596fb5aa73e6073bf870a9093941f937921ad4a4    
  
author   : Robert Haas <rhaas@postgresql.org>    
date     : Fri, 8 May 2015 12:09:14 -0400    
  
committer: Robert Haas <rhaas@postgresql.org>    
date     : Fri, 8 May 2015 12:09:14 -0400    


  
The logic introduced in commit b69bf30b9bfacafc733a9ba77c9587cf54d06c0c  
and repaired in commits 669c7d20e6374850593cb430d332e11a3992bbcf and  
7be47c56af3d3013955c91c2877c08f2a0e3e6a2 helps to ensure that we don't  
overwrite old multixact member information while it is still needed,  
but a user who creates many large multixacts can still exhaust the  
member space (and thus start getting errors) while autovacuum stands  
idly by.  
  
To fix this, progressively ramp down the effective value (but not the  
actual contents) of autovacuum_multixact_freeze_max_age as member space  
utilization increases.  This makes autovacuum more aggressive and also  
reduces the threshold for a manual VACUUM to perform a full-table scan.  
  
This patch leaves unsolved the problem of ensuring that emergency  
autovacuums are triggered even when autovacuum=off.  We'll need to fix  
that via a separate patch.  
  
Thomas Munro and Robert Haas  
  

Fix incorrect math in DetermineSafeOldestOffset.

  
commit   : 83fbd9b59906c8543c165c738fc449af24491e63    
  
author   : Robert Haas <rhaas@postgresql.org>    
date     : Thu, 7 May 2015 11:00:47 -0400    
  
committer: Robert Haas <rhaas@postgresql.org>    
date     : Thu, 7 May 2015 11:00:47 -0400    


  
The old formula didn't have enough parentheses, so it would do the wrong  
thing, and it used / rather than % to find a remainder.  The effect of  
these oversights is that the stop point chosen by the logic introduced in  
commit b69bf30b9bfacafc733a9ba77c9587cf54d06c0c might be rather  
meaningless.  
  
Thomas Munro, reviewed by Kevin Grittner, with a whitespace tweak by me.  
  

Properly send SCM status updates when shutting down service on Windows

  
commit   : ba3caee8438e631eec2ddcbe7f4b87f70fbfc027    
  
author   : Magnus Hagander <magnus@hagander.net>    
date     : Thu, 7 May 2015 15:04:13 +0200    
  
committer: Magnus Hagander <magnus@hagander.net>    
date     : Thu, 7 May 2015 15:04:13 +0200    


  
The Service Control Manager should be notified regularly during a shutdown  
that takes a long time. Previously we would increase the counter, but forgot
to actually send the notification to the system. The loop counter was also
incorrectly initialized in the event that the startup of the system took long
enough for it to increase, which could cause the shutdown process not to wait
as long as expected.
  
Krystian Bigaj, reviewed by Michael Paquier  
  

citext's regexp_matches() functions weren't documented, either.

  
commit   : cf7d5aa977f8d3b6ceb905b387fd157b5243c724    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 5 May 2015 16:11:01 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 5 May 2015 16:11:01 -0400    


  
  

Fix incorrect declaration of citext's regexp_matches() functions.

  
commit   : ffac9f65d3ad0c938441a5dacfab1354929d1b29    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 5 May 2015 15:50:53 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 5 May 2015 15:50:53 -0400    


  
These functions should return SETOF TEXT[], like the core functions they  
are wrappers for; but they were incorrectly declared as returning just  
TEXT[].  This mistake had two results: first, if there was no match you got  
a scalar null result, whereas what you should get is an empty set (zero  
rows).  Second, the 'g' flag was effectively ignored, since you would get  
only one result array even if there were multiple matches, as reported by  
Jeff Certain.  
  
While ignoring 'g' is a clear bug, the behavior for no matches might well  
have been thought to be the intended behavior by people who hadn't compared  
it carefully to the core regexp_matches() functions.  So we should tread  
carefully about introducing this change in the back branches.  Still, it  
clearly is a bug and so providing some fix is desirable.  
  
After discussion, the conclusion was to introduce the change in a 1.1  
version of the citext extension (as we would need to do anyway); 1.0 still  
contains the incorrect behavior.  1.1 is the default and only available  
version in HEAD, but it is optional in the back branches, where 1.0 remains  
the default version.  People wishing to adopt the fix in back branches will  
need to explicitly do ALTER EXTENSION citext UPDATE TO '1.1'.  (I also  
provided a downgrade script in the back branches, so people could go back  
to 1.0 if necessary.)  
  
This should be called out as an incompatible change in the 9.5 release  
notes, although we'll also document it in the next set of back-branch  
release notes.  The notes should mention that any views or rules that use  
citext's regexp_matches() functions will need to be dropped before  
upgrading to 1.1, and then recreated again afterwards.  
  
Back-patch to 9.1.  The bug goes all the way back to citext's introduction  
in 8.4, but pre-9.1 there is no extension mechanism with which to manage  
the change.  Given the lack of previous complaints it seems unnecessary to  
change this behavior in 9.0, anyway.  
  

Fix some problems with patch to fsync the data directory.

  
commit   : 6fd666954bb98b757b56e4f88cc7a8729b4ec968    
  
author   : Robert Haas <rhaas@postgresql.org>    
date     : Tue, 5 May 2015 08:30:28 -0400    
  
committer: Robert Haas <rhaas@postgresql.org>    
date     : Tue, 5 May 2015 08:30:28 -0400    


  
pg_win32_is_junction() was a typo for pgwin32_is_junction().  open()  
was used not only in a two-argument form, which breaks on Windows,  
but also where BasicOpenFile() should have been used.  
  
Per reports from Andrew Dunstan and David Rowley.  
  

Recursively fsync() the data directory after a crash.

  
commit   : 14de825dee324cb6b1e8298ee845e1134edcc33e    
  
author   : Robert Haas <rhaas@postgresql.org>    
date     : Mon, 4 May 2015 12:06:53 -0400    
  
committer: Robert Haas <rhaas@postgresql.org>    
date     : Mon, 4 May 2015 12:06:53 -0400    


  
Otherwise, if there's another crash, some writes from after the first  
crash might make it to disk while writes from before the crash fail  
to make it to disk.  This could lead to data corruption.  
  
Back-patch to all supported versions.  
  
Abhijit Menon-Sen, reviewed by Andres Freund and slightly revised  
by me.  
  

Fix pg_upgrade's multixact handling (again)

  
commit   : e60581fdf3dee39d189925673ec17d2c794e84b5    
  
author   : Alvaro Herrera <alvherre@alvh.no-ip.org>    
date     : Thu, 30 Apr 2015 13:55:06 -0300    
  
committer: Alvaro Herrera <alvherre@alvh.no-ip.org>    
date     : Thu, 30 Apr 2015 13:55:06 -0300    


  
We need to create the pg_multixact/offsets file deleted by pg_upgrade  
much earlier than we originally were: it was in TrimMultiXact(), which  
runs after we exit recovery, but it actually needs to run earlier than  
the first call to SetMultiXactIdLimit (before recovery), because that  
routine already wants to read the first offset segment.  
  
Per pg_upgrade trouble report from Jeff Janes.  
  
While at it, silence a compiler warning about a pointless assert that an  
unsigned variable was being tested non-negative.  This was a signed  
constant in Thomas Munro's patch which I changed to unsigned before  
commit.  Pointed out by Andres Freund.  
  

Code review for multixact bugfix

  
commit   : cf0d888ac5fbdc62e09cde3facb8b8aaa549c015    
  
author   : Alvaro Herrera <alvherre@alvh.no-ip.org>    
date     : Tue, 28 Apr 2015 14:52:29 -0300    
  
committer: Alvaro Herrera <alvherre@alvh.no-ip.org>    
date     : Tue, 28 Apr 2015 14:52:29 -0300    


  
Reword messages, rename a confusingly named function.  
  
Per Robert Haas.  
  

Protect against multixact members wraparound

  
commit   : e2eda4b1159aecd222357db9310aab6d66067d50    
  
author   : Alvaro Herrera <alvherre@alvh.no-ip.org>    
date     : Tue, 28 Apr 2015 11:32:53 -0300    
  
committer: Alvaro Herrera <alvherre@alvh.no-ip.org>    
date     : Tue, 28 Apr 2015 11:32:53 -0300    


  
Multixact member files are subject to early wraparound overflow and  
removal: if the average multixact size is above a certain threshold (see  
note below) the protections against offset overflow are not enough:  
during multixact truncation at checkpoint time, some  
pg_multixact/members files would be removed because the server considers  
them to be old and not needed anymore.  This leads to loss of files that  
are critical to interpret existing tuples' Xmax values.
  
To protect against this, since we don't have enough info in pg_control  
and we can't modify it in old branches, we maintain shared memory state  
about the oldest value that we need to keep; we use this during new  
multixact creation to abort if an old still-needed file would get  
overwritten.  This value is kept up to date by checkpoints, which makes  
it not completely accurate, but it should be good enough.  We start emitting
warnings somewhat earlier, so that the eventual multixact shutdown
doesn't take DBAs completely by surprise (more precisely, once only 20
member SLRU segments remain before shutdown).
  
On troublesome average multixact size: The threshold size depends on the  
multixact freeze parameters. The oldest age is related to the greater of  
multixact_freeze_table_age and multixact_freeze_min_age: anything  
older than that should be removed promptly by autovacuum.  If autovacuum  
is keeping up with multixact freezing, the troublesome multixact average  
size is  
	(2^32-1) / Max(freeze table age, freeze min age)  
or around 28 members per multixact.  Having an average multixact size  
larger than that will eventually cause new multixact data to overwrite  
the data area for older multixacts.  (If autovacuum is not able to keep  
up, or there are errors in vacuuming, the actual maximum is  
multixact_freeze_max_age instead, at which point multixact generation
is stopped completely.  The default value for this limit is 400 million,  
which means that the multixact size that would cause trouble is about 10  
members).  
  
Initial bug report by Timothy Garnett, bug #12990  
Backpatch to 9.3, where the problem was introduced.  
  
Authors: Álvaro Herrera, Thomas Munro  
Reviews: Thomas Munro, Amit Kapila, Robert Haas, Kevin Grittner  
  

Build libecpg with -DFRONTEND in all supported versions.

  
commit   : 723613edf195ffd8decdd4971a2d6b21d75b6d81    
  
author   : Noah Misch <noah@leadboat.com>    
date     : Sun, 26 Apr 2015 17:20:10 -0400    
  
committer: Noah Misch <noah@leadboat.com>    
date     : Sun, 26 Apr 2015 17:20:10 -0400    

Click here for diff

  
Fix an oversight in commit 151e74719b0cc5c040bd3191b51b95f925773dd1 by  
back-patching commit 44c5d387eafb4ba1a032f8d7b13d85c553d69181 to 9.0.  
  

Prevent improper reordering of antijoins vs. outer joins.

  
commit   : 3e47d0b2a300305a195ac7a6d8ebd6ff5291b7b4    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sat, 25 Apr 2015 16:44:27 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sat, 25 Apr 2015 16:44:27 -0400    

Click here for diff

  
An outer join appearing within the RHS of an antijoin can't commute with  
the antijoin, but somehow I missed teaching make_outerjoininfo() about  
that.  In Teodor Sigaev's recent trouble report, this manifests as a  
"could not find RelOptInfo for given relids" error within eqjoinsel();  
but I think silently wrong query results are possible too, if the planner  
misorders the joins and doesn't happen to trigger any internal consistency  
checks.  It's broken as far back as we had antijoins, so back-patch to all  
supported branches.  
  

Build every ECPG library with -DFRONTEND.

  
commit   : 05c13920a11424af1b19d00cc566bed76073eec3    
  
author   : Noah Misch <noah@leadboat.com>    
date     : Fri, 24 Apr 2015 19:29:02 -0400    
  
committer: Noah Misch <noah@leadboat.com>    
date     : Fri, 24 Apr 2015 19:29:02 -0400    

Click here for diff

  
Each of the libraries incorporates src/port files, which often check  
FRONTEND.  Build systems disagreed on whether to build libpgtypes this  
way.  Only libecpg incorporates files that rely on it today.  Back-patch  
to 9.0 (all supported versions) to forestall surprises.  
  

Fix obsolete comment in set_rel_size().

  
commit   : c82e13a915775961b4c237325ef63eb6bf83f599    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 24 Apr 2015 15:18:07 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 24 Apr 2015 15:18:07 -0400    

Click here for diff

  
The cross-reference to set_append_rel_pathlist() was obsoleted by  
commit e2fa76d80ba571d4de8992de6386536867250474, which split what  
had been set_rel_pathlist() and child routines into two sets of  
functions.  But I (tgl) evidently missed updating this comment.  
  
Back-patch to 9.2 to avoid unnecessary divergence among branches.  
  
Amit Langote  
  

Fix deadlock at startup, if max_prepared_transactions is too small.

  
commit   : f73ebd766a4903ab937b28c3cc90a53b6dcb61f4    
  
author   : Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Thu, 23 Apr 2015 21:25:44 +0300    
  
committer: Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Thu, 23 Apr 2015 21:25:44 +0300    

Click here for diff

  
When the startup process recovers transactions by scanning pg_twophase  
directory, it should clear MyLockedGxact after it's done processing each  
transaction. Like we do during normal operation, at PREPARE TRANSACTION.  
Otherwise, if the startup process exits due to an error, it will try to  
clear the locking_backend field of the last recovered transaction. That's  
usually harmless, but if the error happens in MarkAsPreparing, while  
holding TwoPhaseStateLock, the shmem-exit hook will try to acquire  
TwoPhaseStateLock again, and deadlock with itself.  
  
This fixes bug #13128 reported by Grant McAlister. The bug was introduced  
by commit bb38fb0d, so backpatch to all supported versions like that  
commit.  
  

Fix typo in comment

  
commit   : 7954bc5ecb888a5e2f374871685c8a43546c328d    
  
author   : Alvaro Herrera <alvherre@alvh.no-ip.org>    
date     : Tue, 14 Apr 2015 12:12:18 -0300    
  
committer: Alvaro Herrera <alvherre@alvh.no-ip.org>    
date     : Tue, 14 Apr 2015 12:12:18 -0300    

Click here for diff

  
SLRU_SEGMENTS_PER_PAGE -> SLRU_PAGES_PER_SEGMENT  
  
I introduced this ancient typo in subtrans.c and later propagated it to  
multixact.c.  I fixed the latter in f741300c, but only back to 9.3;  
backpatch to all supported branches for consistency.  
  

Don’t archive bogus recycled or preallocated files after timeline switch.

  
commit   : a800267e46781b0eebd43db8150ca558c9f687c6    
  
author   : Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Mon, 13 Apr 2015 16:53:49 +0300    
  
committer: Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Mon, 13 Apr 2015 16:53:49 +0300    

Click here for diff

  
After a timeline switch, we would leave behind recycled WAL segments that  
are in the future, but on the old timeline. After promotion, and after they  
become old enough to be recycled again, we would notice that they don't have  
a .ready or .done file, create a .ready file for them, and archive them.  
That's bogus, because the files contain garbage, recycled from an older  
timeline (or prealloced as zeros). We shouldn't archive such files.  
  
This could happen when we're following a timeline switch during replay, or  
when we switch to new timeline at end-of-recovery.  
  
To fix, whenever we switch to a new timeline, scan the data directory for  
WAL segments on the old timeline, but with a higher segment number, and  
remove them. Those don't belong to our timeline history, and are most  
likely bogus recycled or preallocated files. They could also be valid files  
that we streamed from the primary ahead of time, but in any case, they're  
not needed to recover to the new timeline.  
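The removal rule can be sketched roughly as follows (illustrative only: WAL file names encode the timeline in their first 8 hex digits, but the real code derives segment numbers differently than this flat concatenation):

```python
# Illustrative sketch (not PostgreSQL's actual code): decide whether a
# WAL segment file is on an old timeline but beyond the switch point,
# and hence should be removed rather than ever archived.
def is_stale_old_timeline_segment(fname, new_tli, switch_segno):
    tli = int(fname[0:8], 16)     # first 8 hex digits: timeline ID
    segno = int(fname[8:24], 16)  # remaining 16: log/segment number
    return tli < new_tli and segno >= switch_segno

# A recycled-looking segment from timeline 1, past the switch point:
print(is_stale_old_timeline_segment("000000010000000000000005", 2, 4))  # True
```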
  

Remove duplicated words in comments.

  
commit   : 8dfddf14c598d389b8c95d402de529298274b94f    
  
author   : Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Sun, 12 Apr 2015 10:46:17 +0300    
  
committer: Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Sun, 12 Apr 2015 10:46:17 +0300    

Click here for diff

  
David Rowley  
  

Fix incorrect punctuation

  
commit   : 3b4da9ae994b598de0cd800febb0c17c0c562716    
  
author   : Magnus Hagander <magnus@hagander.net>    
date     : Thu, 9 Apr 2015 13:35:30 +0200    
  
committer: Magnus Hagander <magnus@hagander.net>    
date     : Thu, 9 Apr 2015 13:35:30 +0200    

Click here for diff

  
Amit Langote  
  

Fix autovacuum launcher shutdown sequence

  
commit   : 0d6c9e061b8b089c7130b1daa4f67219fca8491f    
  
author   : Alvaro Herrera <alvherre@alvh.no-ip.org>    
date     : Wed, 8 Apr 2015 13:19:49 -0300    
  
committer: Alvaro Herrera <alvherre@alvh.no-ip.org>    
date     : Wed, 8 Apr 2015 13:19:49 -0300    

Click here for diff

  
It was previously possible to have the launcher re-execute its main loop  
before shutting down if some other signal was received or an error  
occurred after getting SIGTERM, as reported by Qingqing Zhou.  
  
While investigating, Tom Lane further noticed that if autovacuum had  
been disabled in the config file, it would misbehave by trying to start  
a new worker instead of bailing out immediately -- it would consider  
itself as invoked in emergency mode.  
  
Fix both problems by checking the shutdown flag in a few more places.  
These problems have existed since autovacuum was introduced, so  
backpatch all the way back.  
  

Fix assorted inconsistent function declarations.

  
commit   : b1145ca19819ae0d64ef9a3cfd0ed94d300389b3    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 7 Apr 2015 16:56:21 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 7 Apr 2015 16:56:21 -0400    

Click here for diff

  
While gcc doesn't complain if you declare a function "static" and then  
define it not-static, other compilers do; and in any case the code is  
highly misleading this way.  Add the missing "static" keywords to a  
couple of recent patches.  Per buildfarm member pademelon.  
  

Fix typo in libpq.sgml.

  
commit   : 4e3b1e2389e8f4950a1a2a176d335d1344b88a11    
  
author   : Fujii Masao <fujii@postgresql.org>    
date     : Mon, 6 Apr 2015 12:15:20 +0900    
  
committer: Fujii Masao <fujii@postgresql.org>    
date     : Mon, 6 Apr 2015 12:15:20 +0900    

Click here for diff

  
Back-patch to all supported versions.  
  
Michael Paquier  
  

Suppress clang’s unhelpful gripes about -pthread switch being unused.

  
commit   : 6347bdb31448f812cd726cfb3cdcdecf41a38b19    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sun, 5 Apr 2015 13:01:55 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sun, 5 Apr 2015 13:01:55 -0400    

Click here for diff

  
Considering the number of cases in which "unused" command line arguments  
are silently ignored by compilers, it's fairly astonishing that anybody  
thought this warning was useful; it's certainly nothing but an annoyance  
when building Postgres.  One such case is that neither gcc nor clang  
complain about unrecognized -Wno-foo switches, making it more difficult  
to figure out whether the switch does anything than one could wish.  
  
Back-patch to 9.3, which is as far back as the patch applies conveniently  
(we'd have to back-patch PGAC_PROG_CC_VAR_OPT to go further, and it doesn't  
seem worth that).  
  

Fix incorrect matching of subexpressions in outer-join plan nodes.

  
commit   : e105df208cf4a5d707a7ad0b9e6a4a23964c534b    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sat, 4 Apr 2015 19:55:15 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sat, 4 Apr 2015 19:55:15 -0400    

Click here for diff

  
Previously we would re-use input subexpressions in all expression trees  
attached to a Join plan node.  However, if it's an outer join and the  
subexpression appears in the nullable-side input, this is potentially  
incorrect for apparently-matching subexpressions that came from above  
the outer join (ie, targetlist and qpqual expressions), because the  
executor will treat the subexpression value as NULL when maybe it should  
not be.  
  
The case is fairly hard to hit because (a) you need a non-strict  
subexpression (else NULL is correct), and (b) we don't usually compute  
expressions in the outputs of non-toplevel plan nodes.  But we might do  
so if the expressions are sort keys for a mergejoin, for example.  
  
Probably in the long run we should make a more explicit distinction between  
Vars appearing above and below an outer join, but that will be a major  
planner redesign and not at all back-patchable.  For the moment, just hack  
set_join_references so that it will not match any non-Var expressions  
coming from nullable inputs to expressions that came from above the join.  
(This is somewhat overkill, in that a strict expression could still be  
matched, but it doesn't seem worth the effort to check that.)  
  
Per report from Qingqing Zhou.  The added regression test case is based  
on his example.  
  
This has been broken for a very long time, so back-patch to all active  
branches.  
  

Remove unnecessary variables in _hash_splitbucket().

  
commit   : cbccaf22bf94761f79a66624ac18a5978ab17f41    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 3 Apr 2015 16:49:11 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 3 Apr 2015 16:49:11 -0400    

Click here for diff

  
Commit ed9cc2b5df59fdbc50cce37399e26b03ab2c1686 made it unnecessary to pass  
start_nblkno to _hash_splitbucket(), and for that matter unnecessary to  
have the internal nblkno variable either.  My compiler didn't complain  
about that, but some did.  I also rearranged the use of oblkno a bit to  
make that case more parallel.  
  
Report and initial patch by Petr Jelinek, rearranged a bit by me.  
Back-patch to all branches, like the previous patch.  
  

psql: fix \connect with URIs and conninfo strings

  
commit   : f4540cae10d8642b59deea50869888b78f16d722    
  
author   : Alvaro Herrera <alvherre@alvh.no-ip.org>    
date     : Wed, 1 Apr 2015 20:00:07 -0300    
  
committer: Alvaro Herrera <alvherre@alvh.no-ip.org>    
date     : Wed, 1 Apr 2015 20:00:07 -0300    

Click here for diff

  
psql was already accepting conninfo strings as the first parameter in  
\connect, but the way it worked wasn't sane; some of the other  
parameters would get the previous connection's values, causing it to  
connect to a completely unexpected server or, more likely, not finding  
any server at all because of completely wrong combinations of  
parameters.  
  
Fix by explicitly checking for a conninfo-looking parameter in the  
dbname position; if one is found, use its complete specification rather  
than mix with the other arguments.  Also, change tab-completion to not  
try to complete conninfo/URI-looking "dbnames" and document that  
conninfos are accepted as first argument.  
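The "conninfo-looking" check can be sketched like this (a hedged sketch: the real psql code is C, and its exact rules may differ; the URI prefixes are the ones libpq documents):

```python
# Hypothetical sketch of deciding whether a \connect "dbname" argument
# is really a conninfo string or URI rather than a plain database name.
URI_PREFIXES = ("postgresql://", "postgres://")

def looks_like_conninfo(dbname):
    # keyword=value pairs or a connection URI mark a connection string
    return "=" in dbname or dbname.startswith(URI_PREFIXES)

print(looks_like_conninfo("host=localhost dbname=test"))  # True
print(looks_like_conninfo("mydb"))                        # False
```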
  
There was a weak consensus to backpatch this, because while the behavior  
of using the dbname as a conninfo is nowhere documented for \connect, it  
is reasonable to expect that it works because it does work in many other  
contexts.  Therefore this is backpatched all the way back to 9.0.  
  
To implement this, routines previously private to libpq have been  
duplicated so that psql can decide what looks like a conninfo/URI  
string.  In back branches, just duplicate the same code all the way back  
to 9.2, where URIs were introduced; 9.0 and 9.1 have a simpler version.  
In master, the routines are moved to src/common and renamed.  
  
Author: David Fetter, Andrew Dunstan.  Some editorialization by me  
(probably earning a Gierth's "Sloppy" badge in the process.)  
Reviewers: Andrew Gierth, Erik Rijkers, Pavel Stěhule, Stephen Frost,  
Robert Haas, Andrew Dunstan.  
  

Fix incorrect markup in documentation of window frame clauses.

  
commit   : 44f8f56e6d7b5f19276dbe55c4305b54afddc0b9    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 31 Mar 2015 20:02:40 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 31 Mar 2015 20:02:40 -0400    

Click here for diff

  
You're required to write either RANGE or ROWS to start a frame clause,  
but the documentation incorrectly implied this is optional.  Noted by  
David Johnston.  
  

Remove spurious semicolons.

  
commit   : 9f06729e2a47fe60d56cac78b0c167316938e0ef    
  
author   : Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Tue, 31 Mar 2015 15:12:27 +0300    
  
committer: Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Tue, 31 Mar 2015 15:12:27 +0300    

Click here for diff

  
Petr Jelinek  
  

Run pg_upgrade and pg_resetxlog with restricted token on Windows

  
commit   : 0904eb3e19842dd103adb6a8b3f65987b678a0e6    
  
author   : Andrew Dunstan <andrew@dunslane.net>    
date     : Mon, 30 Mar 2015 17:17:17 -0400    
  
committer: Andrew Dunstan <andrew@dunslane.net>    
date     : Mon, 30 Mar 2015 17:17:17 -0400    

Click here for diff

  
As with initdb these programs need to run with a restricted token, and  
if they don't pg_upgrade will fail when run as a user with Administrator  
privileges.  
  
Backpatch to all live branches. On the development branch the code is  
reorganized so that the restricted token code is now in a single  
location. On the stable branches a less invasive change is made by  
simply copying the relevant code to pg_upgrade.c and pg_resetxlog.c.  
  
Patches and bug report from Muhammad Asif Naeem, reviewed by Michael  
Paquier, slightly edited by me.  
  

Fix bogus concurrent use of _hash_getnewbuf() in bucket split code.

  
commit   : 246bbf65cea9a9d91ff387ec0741b180f9956a87    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 30 Mar 2015 16:40:05 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 30 Mar 2015 16:40:05 -0400    

Click here for diff

  
_hash_splitbucket() obtained the base page of the new bucket by calling  
_hash_getnewbuf(), but it held no exclusive lock that would prevent some  
other process from calling _hash_getnewbuf() at the same time.  This is  
contrary to _hash_getnewbuf()'s API spec and could in fact cause failures.  
In practice, we must only call that function while holding write lock on  
the hash index's metapage.  
  
An additional problem was that we'd already modified the metapage's bucket  
mapping data, meaning that failure to extend the index would leave us with  
a corrupt index.  
  
Fix both issues by moving the _hash_getnewbuf() call to just before we  
modify the metapage in _hash_expandtable().  
  
Unfortunately there's still a large problem here, which is that we could  
also incur ENOSPC while trying to get an overflow page for the new bucket.  
That would leave the index corrupt in a more subtle way, namely that some  
index tuples that should be in the new bucket might still be in the old  
one.  Fixing that seems substantially more difficult; even preallocating as  
many pages as we could possibly need wouldn't entirely guarantee that the  
bucket split would complete successfully.  So for today let's just deal  
with the base case.  
  
Per report from Antonin Houska.  Back-patch to all active branches.  
  

Add vacuum_delay_point call in compute_index_stats’s per-sample-row loop.

  
commit   : 995a664c85539e7f4ea6cdf3076df43d482bc7d7    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sun, 29 Mar 2015 15:04:09 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sun, 29 Mar 2015 15:04:09 -0400    

Click here for diff

  
Slow functions in index expressions might cause this loop to take long  
enough to make it worth being cancellable.  Probably it would be enough  
to call CHECK_FOR_INTERRUPTS here, but for consistency with other  
per-sample-row loops in this file, let's use vacuum_delay_point.  
  
Report and patch by Jeff Janes.  Back-patch to all supported branches.  
  

Make SyncRepWakeQueue a static function

  
commit   : 56abebb9be6d1e7179d10cdc362caa4405f06784    
  
author   : Tatsuo Ishii <ishii@postgresql.org>    
date     : Thu, 26 Mar 2015 10:38:11 +0900    
  
committer: Tatsuo Ishii <ishii@postgresql.org>    
date     : Thu, 26 Mar 2015 10:38:11 +0900    

Click here for diff

  
It is only used in src/backend/replication/syncrep.c.  
  
Back-patch to all supported branches except 9.1 which declares the  
function as static.  
  

Fix ExecOpenScanRelation to take a lock on a ROW_MARK_COPY relation.

  
commit   : 7cd5498b375ea1ce7bdd158da5940e000578ef08    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 24 Mar 2015 15:53:06 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 24 Mar 2015 15:53:06 -0400    

Click here for diff

  
ExecOpenScanRelation assumed that any relation listed in the ExecRowMark  
list has been locked by InitPlan; but this is not true if the rel's  
markType is ROW_MARK_COPY, which is possible if it's a foreign table.  
  
In most (possibly all) cases, failure to acquire a lock here isn't really  
problematic because the parser, planner, or plancache would have taken the  
appropriate lock already.  In principle though it might leave us vulnerable  
to working with a relation that we hold no lock on, and in any case if the  
executor isn't depending on previously-taken locks otherwise then it should  
not do so for ROW_MARK_COPY relations.  
  
Noted by Etsuro Fujita.  Back-patch to all active versions, since the  
inconsistency has been there a long time.  (It's almost certainly  
irrelevant in 9.0, since that predates foreign tables, but the code's  
still wrong on its own terms.)  
  

Replace insertion sort in contrib/intarray with qsort().

  
commit   : 83587a075dd1d6380de24f16df85bf629689effc    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sun, 15 Mar 2015 23:22:03 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sun, 15 Mar 2015 23:22:03 -0400    

Click here for diff

  
It's all very well to claim that a simplistic sort is fast in easy  
cases, but O(N^2) in the worst case is not good ... especially if the  
worst case is as easy to hit as "descending order input".  Replace that  
bit with our standard qsort.  
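The complexity claim is easy to demonstrate with a comparison-counting sketch (illustrative Python, not the contrib/intarray code):

```python
# A textbook insertion sort, instrumented to count comparisons.
# Descending-order input hits the O(N^2) worst case (~N^2/2
# comparisons), while already-sorted input needs only N-1.
def insertion_sort_comparisons(values):
    a = list(values)
    comparisons = 0
    for i in range(1, len(a)):
        j = i
        while j > 0:
            comparisons += 1
            if a[j - 1] <= a[j]:
                break
            a[j - 1], a[j] = a[j], a[j - 1]
            j -= 1
    return comparisons

print(insertion_sort_comparisons(range(1000)))         # 999
print(insertion_sort_comparisons(range(1000, 0, -1)))  # 499500
```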
  
Per bug #12866 from Maksym Boguk.  Back-patch to all active branches.  
  

Remove workaround for ancient incompatibility between readline and libedit.

  
commit   : 2cb76fa6ff63d1d94a18012ec95835af5355f0e5    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sat, 14 Mar 2015 13:43:00 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sat, 14 Mar 2015 13:43:00 -0400    

Click here for diff

  
GNU readline defines the return value of write_history() as "zero if OK,  
else an errno code".  libedit's version of that function used to have a  
different definition (to wit, "-1 if error, else the number of lines  
written to the file").  We tried to work around that by checking whether  
errno had become nonzero, but this method has never been kosher according  
to the published API of either library.  It's reportedly completely broken  
in recent Ubuntu releases: psql bleats about "No such file or directory"  
when saving ~/.psql_history, even though the write worked fine.  
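The two return conventions contrasted above can be sketched as follows (values taken from the text, not checked against either library's source):

```python
# Hypothetical sketch of the two historical write_history() conventions.
def readline_failed(rc):
    # GNU readline: zero if OK, else an errno code.
    return rc != 0

def old_libedit_failed(rc):
    # Pre-2006 libedit: number of lines written, or -1 on error.
    return rc == -1

# The removed workaround instead inferred failure from errno becoming
# nonzero, which misfires whenever an unrelated call sets errno even
# though write_history() itself succeeded.
```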
  
However, libedit has been following the readline definition since somewhere  
around 2006, so it seems all right to finally break compatibility with  
ancient libedit releases and trust that the return value is what readline  
specifies.  (I'm not sure when the various Linux distributions incorporated  
this fix, but I did find that OS X has been shipping fixed versions since  
10.5/Leopard.)  
  
If anyone is still using such an ancient libedit, they will find that psql  
complains it can't write ~/.psql_history at exit, even when the file was  
written correctly.  This is no worse than the behavior we're fixing for  
current releases.  
  
Back-patch to all supported branches.  
  

Fix integer overflow in debug message of walreceiver

  
commit   : 089b371a3977bd3a7b71a79966e8ea86296ecda3    
  
author   : Tatsuo Ishii <ishii@postgresql.org>    
date     : Sat, 14 Mar 2015 08:16:50 +0900    
  
committer: Tatsuo Ishii <ishii@postgresql.org>    
date     : Sat, 14 Mar 2015 08:16:50 +0900    

Click here for diff

  
The message tries to report the replication apply delay, which fails if  
the first WAL record has not been applied yet.  The fix is to show  
"N/A", indicating that the delay data is not yet available, instead of  
an overflowed negative number.  Problem reported by me and patch by  
Fabrízio de Royes Mello.  
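The idea of the fix can be sketched with a hypothetical helper (not the walreceiver code; the millisecond formatting is an assumption):

```python
# Report the apply delay only once a WAL record has actually been
# applied; before that, print "N/A" rather than an overflowed number.
def format_apply_delay(last_applied_ts_ms, now_ms):
    if last_applied_ts_ms is None:  # nothing applied yet
        return "N/A"
    return f"{now_ms - last_applied_ts_ms} ms"

print(format_apply_delay(None, 1000))  # N/A
print(format_apply_delay(400, 1000))   # 600 ms
```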
  
Back patched to 9.4, 9.3 and 9.2 stable branches (9.1 and 9.0 do not  
have the debug message).  
  

Ensure tableoid reads correctly in EvalPlanQual-manufactured tuples.

  
commit   : 5bdf3cf5ad3048e8376fff6ccf5dafd0e9d2e603    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 12 Mar 2015 13:38:49 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 12 Mar 2015 13:38:49 -0400    

Click here for diff

  
The ROW_MARK_COPY path in EvalPlanQualFetchRowMarks() was just setting  
tableoid to InvalidOid, I think on the assumption that the referenced  
RTE must be a subquery or other case without a meaningful OID.  However,  
foreign tables also use this code path, and they do have meaningful  
table OIDs; so failure to set the tuple field can lead to user-visible  
misbehavior.  Fix that by fetching the appropriate OID from the range  
table.  
  
There's still an issue about whether CTID can ever have a meaningful  
value in this case; at least with postgres_fdw foreign tables, it does.  
But that is a different problem that seems to require a significantly  
different patch --- it's debatable whether postgres_fdw really wants to  
use this code path at all.  
  
Simplified version of a patch by Etsuro Fujita, who also noted the  
problem to begin with.  The issue can be demonstrated in all versions  
having FDWs, so back-patch to 9.1.  
  

Cast to (void *) rather than (int *) when passing int64’s to PQfn().

  
commit   : d16d821faa5a2c97e2fb532daa9a032b48129c91    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sun, 8 Mar 2015 13:58:28 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sun, 8 Mar 2015 13:58:28 -0400    

Click here for diff

  
This is a possibly-vain effort to silence a Coverity warning about  
bogus endianness dependency.  The code's fine, because it takes care  
of endianness issues for itself, but Coverity sees an int64 being  
passed to an int* argument and not unreasonably suspects something's  
wrong.  I'm not sure if putting the void* cast in the way will shut it  
up; but it can't hurt and seems better from a documentation standpoint  
anyway, since the pointer is not used as an int* in this code path.  
  
Just for a bit of additional safety, verify that the result length  
is 8 bytes as expected.  
  
Back-patch to 9.3 where the code in question was added.  
  

Fix documentation for libpq’s PQfn().

  
commit   : 9937f6e4c8ceba37ef840d550bc202c402d48113    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sun, 8 Mar 2015 13:35:28 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sun, 8 Mar 2015 13:35:28 -0400    

Click here for diff

  
The SGML docs claimed that 1-byte integers could be sent or received with  
the "isint" options, but no such behavior has ever been implemented in  
pqGetInt() or pqPutInt().  The in-code documentation header for PQfn() was  
even less in tune with reality, and the code itself used parameter names  
matching neither the SGML docs nor its libpq-fe.h declaration.  Do a bit  
of additional wordsmithing on the SGML docs while at it.  
  
Since the business about 1-byte integers is a clear documentation bug,  
back-patch to all supported branches.  
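The supported sizes can be illustrated with a small sketch (Python for illustration; the real pqPutInt() is C, and network byte order here reflects the libpq protocol):

```python
import struct

# Sketch of what the text says pqPutInt()/pqGetInt() actually support:
# only 2- and 4-byte integers, in network (big-endian) byte order.
# 1-byte integers were never handled, despite the old documentation.
def put_int(value, size):
    if size == 2:
        return struct.pack("!h", value)
    if size == 4:
        return struct.pack("!i", value)
    raise ValueError("only 2- or 4-byte integers are supported")

print(put_int(1, 2))  # b'\x00\x01'
```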
  

Rethink function argument sorting in pg_dump.

  
commit   : d645273cffa706936629891f84334a2f7a24232b    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 6 Mar 2015 13:27:46 -0500    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 6 Mar 2015 13:27:46 -0500    

Click here for diff

  
Commit 7b583b20b1c95acb621c71251150beef958bb603 created an unnecessary  
dump failure hazard by applying pg_get_function_identity_arguments()  
to every function in the database, even those that won't get dumped.  
This could result in snapshot-related problems if concurrent sessions are,  
for example, creating and dropping temporary functions, as noted by Marko  
Tiikkaja in bug #12832.  While this is by no means pg_dump's only such  
issue with concurrent DDL, it's unfortunate that we added a new failure  
mode for cases that used to work, and even more so that the failure was  
created for basically cosmetic reasons (ie, to sort overloaded functions  
more deterministically).  
  
To fix, revert that patch and instead sort function arguments using  
information that pg_dump has available anyway, namely the names of the  
argument types.  This will produce a slightly different sort ordering for  
overloaded functions than the previous coding; but applying strcmp  
directly to the output of pg_get_function_identity_arguments really was  
a bit odd anyway.  The sorting will still be name-based and hence  
independent of possibly-installation-specific OID assignments.  A small  
additional benefit is that sorting now works regardless of server version.  
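Sorting by argument type names rather than by the identity-arguments string can be modeled like this (a simplified, hypothetical data model; pg_dump's real comparator works on its internal catalog structs):

```python
# Each overloaded function is modeled as (name, [argument type names]).
# Sorting on that pair is deterministic and independent of OIDs.
funcs = [
    ("f", ["text"]),
    ("f", ["integer"]),
    ("g", ["bigint", "text"]),
]
funcs.sort(key=lambda fn: (fn[0], fn[1]))
print(funcs)  # [('f', ['integer']), ('f', ['text']), ('g', ['bigint', 'text'])]
```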
  
Back-patch to 9.3, where the previous commit appeared.  
  

Fix contrib/file_fdw’s expected file

  
commit   : 49bb34382d31710f28819e1317a1a05ba744b70a    
  
author   : Alvaro Herrera <alvherre@alvh.no-ip.org>    
date     : Fri, 6 Mar 2015 11:47:09 -0300    
  
committer: Alvaro Herrera <alvherre@alvh.no-ip.org>    
date     : Fri, 6 Mar 2015 11:47:09 -0300    

Click here for diff

  
I forgot to update it on yesterday's cf34e373fcf.  
  

Fix user mapping object description

  
commit   : 5cf4000034fb75de2b09642b9d37ee50f708090a    
  
author   : Alvaro Herrera <alvherre@alvh.no-ip.org>    
date     : Thu, 5 Mar 2015 18:03:16 -0300    
  
committer: Alvaro Herrera <alvherre@alvh.no-ip.org>    
date     : Thu, 5 Mar 2015 18:03:16 -0300    

Click here for diff

  
We were using "user mapping for user XYZ" as description for user mappings, but  
that's ambiguous because users can have mappings on multiple foreign  
servers; therefore change it to "for user XYZ on server UVW" instead.  
Object identities for user mappings are also updated in the same way, in  
branches 9.3 and above.  
  
The incomplete description string was introduced together with the whole  
SQL/MED infrastructure by commit cae565e503 of 8.4 era, so backpatch all  
the way back.  
  

Add comment for “is_internal” parameter

  
commit   : 73f236f5793e25f04933247917092de39011afd9    
  
author   : Alvaro Herrera <alvherre@alvh.no-ip.org>    
date     : Tue, 3 Mar 2015 14:03:33 -0300    
  
committer: Alvaro Herrera <alvherre@alvh.no-ip.org>    
date     : Tue, 3 Mar 2015 14:03:33 -0300    

Click here for diff

  
This was missed in my commit f4c4335 of 9.3 vintage, so backpatch to  
that.  
  

Fix pg_dump handling of extension config tables

  
commit   : 43d81f16a393cbc233bbb5d8314c4148cf5a5d68    
  
author   : Stephen Frost <sfrost@snowman.net>    
date     : Mon, 2 Mar 2015 14:12:33 -0500    
  
committer: Stephen Frost <sfrost@snowman.net>    
date     : Mon, 2 Mar 2015 14:12:33 -0500    

Click here for diff

  
Since 9.1, we've provided extensions with a way to denote  
"configuration" tables: tables created by an extension which the user  
may modify.  By marking these as "configuration" tables, the extension  
is asking for the data in these tables to be pg_dump'd (tables which  
are not marked in this way are assumed to be entirely handled during  
CREATE EXTENSION and are not included at all in a pg_dump).  
  
Unfortunately, pg_dump neglected to consider foreign key relationships  
between extension configuration tables and therefore could end up  
trying to reload the data in an order which would cause FK violations.  
  
This patch teaches pg_dump about these dependencies, so that the data  
dumped out is done so in the best order possible.  Note that there's no  
way to handle circular dependencies, but those have yet to be seen in  
the wild.  
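The ordering requirement amounts to a small topological sort (an illustrative sketch; pg_dump's real dependency machinery is more general, and circular dependencies are not handled, matching the commit's note):

```python
# Dump table data so that any table referenced by a foreign key is
# loaded before its referencer.
def fk_safe_order(tables, fk_deps):
    # fk_deps maps a table to the set of tables it references
    ordered, done = [], set()

    def visit(t):
        if t in done:
            return
        for ref in fk_deps.get(t, ()):
            visit(ref)  # referenced tables first
        done.add(t)
        ordered.append(t)

    for t in tables:
        visit(t)
    return ordered

print(fk_safe_order(["orders", "customers"],
                    {"orders": {"customers"}}))  # ['customers', 'orders']
```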
  
The release notes for this should include a caution to users that  
existing pg_dump-based backups may be invalid due to this issue.  The  
data is all there, but restoring from it will require extracting the  
data for the configuration tables and then loading them in the correct  
order by hand.  
  
Discussed initially back in bug #6738, more recently brought up by  
Gilles Darold, who provided an initial patch which was further reworked  
by Michael Paquier.  Further modifications and documentation updates  
by me.  
  
Back-patch to 9.1 where we added the concept of extension configuration  
tables.  
  

Unlink static libraries before rebuilding them.

  
commit   : 585f16dc80337a2ee7b0e823c7ff42a9e10d1d88    
  
author   : Noah Misch <noah@leadboat.com>    
date     : Sun, 1 Mar 2015 13:05:23 -0500    
  
committer: Noah Misch <noah@leadboat.com>    
date     : Sun, 1 Mar 2015 13:05:23 -0500    

Click here for diff

  
When the library already exists in the build directory, "ar" preserves  
members not named on its command line.  This mattered when, for example,  
a "configure" rerun dropped a file from $(LIBOBJS).  libpgport carried  
the obsolete member until "make clean".  Back-patch to 9.0 (all  
supported versions).  
  

Fix planning of star-schema-style queries.

  
commit   : 1b558782b7156bac9b4012ccee5338f1ccd236d9    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sat, 28 Feb 2015 12:43:04 -0500    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sat, 28 Feb 2015 12:43:04 -0500    

Click here for diff

  
Part of the intent of the parameterized-path mechanism was to handle  
star-schema queries efficiently, but some overly-restrictive search  
limiting logic added in commit e2fa76d80ba571d4de8992de6386536867250474  
prevented such cases from working as desired.  Fix that and add a  
regression test about it.  Per gripe from Marc Cousin.  
  
This is arguably a bug rather than a new feature, so back-patch to 9.2  
where parameterized paths were introduced.  
  

Reconsider when to wait for WAL flushes/syncrep during commit.

  
commit   : abce8dc7d6d8e30f2d4b02219eb73c205e5bf199    
  
author   : Andres Freund <andres@anarazel.de>    
date     : Thu, 26 Feb 2015 12:50:07 +0100    
  
committer: Andres Freund <andres@anarazel.de>    
date     : Thu, 26 Feb 2015 12:50:07 +0100    

Click here for diff

  
Up to now RecordTransactionCommit() waited for WAL to be flushed (if  
synchronous_commit != off) and to be synchronously replicated (if  
enabled), even if a transaction did not have an xid assigned. The primary  
reason is that a sequence's nextval() did not assign an xid, but its  
effects are still worth waiting for on commit.  
  
This can be problematic because read-only transactions do sometimes  
write WAL, e.g. HOT page prune records. That could lead to read-only  
transactions having to wait during commit, which is not something people  
expect in a read-only transaction.  
  
This led to such strange symptoms as backends seemingly stuck during  
connection establishment when all synchronous replicas are down, which  
is especially annoying when the stuck connection is the standby trying  
to reconnect to allow syncrep again...  
  
This behavior is also involved in a rather complicated <= 9.4 bug where  
the transaction started by catchup interrupt processing waited for  
syncrep using latches, but didn't get the wakeup because it was already  
running inside the same overloaded signal handler. The fix here doesn't  
properly solve that issue, merely papers over the problem. In 9.5,  
catchup interrupts aren't processed out of signal handlers anymore.  
  
To fix all this, make nextval() acquire a top-level xid, and only wait for  
transaction commit if a transaction both acquired an xid and emitted WAL  
records.  If only an xid has been assigned, we don't want to wait  
uselessly just because of writes to temporary/unlogged tables; if only  
WAL has been written, we don't want to wait just because of HOT prunes.  
  
The xid assignment in nextval() is unlikely to cause overhead in  
real-world workloads. For one, it only happens once every SEQ_LOG_VALS  
(32) values anyway; for another, only uses of nextval() that don't feed  
the result into an insert or similar are affected.  
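The resulting rule is small enough to state as a predicate. This is an illustrative distillation, not the actual RecordTransactionCommit() code:

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative distillation of the rule above, NOT the actual
 * RecordTransactionCommit() code: wait for WAL flush / synchronous
 * replication at commit only when the transaction both acquired an
 * xid and emitted WAL records. */
static bool commit_must_wait(bool assigned_xid, bool wrote_wal)
{
    return assigned_xid && wrote_wal;
}
```

So a transaction that only wrote temporary tables (xid assigned, no WAL) and a read-only transaction that only emitted HOT-prune records (WAL, no xid) both skip the wait.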
  
Discussion: 20150223165359.GF30784@awork2.anarazel.de,  
    369698E947874884A77849D8FE3680C2@maumau,  
    5CF4ABBA67674088B3941894E22A0D25@maumau  
  
Per complaint from maumau and Thom Brown  
  
Backpatch all the way back; 9.0 doesn't have syncrep, but it seems  
better to keep the behavior consistent across all maintained branches.  
  

Free SQLSTATE and SQLERRM no earlier than other PL/pgSQL variables.

  
commit   : 4651e37cb1c9cc34643921bdc8b783d1c81b488f    
  
author   : Noah Misch <noah@leadboat.com>    
date     : Wed, 25 Feb 2015 23:48:28 -0500    
  
committer: Noah Misch <noah@leadboat.com>    
date     : Wed, 25 Feb 2015 23:48:28 -0500    

Click here for diff

  
"RETURN SQLERRM" prompted plpgsql_exec_function() to read from freed  
memory.  Back-patch to 9.0 (all supported versions).  Little code ran  
between the premature free and the read, so non-assert builds are  
unlikely to witness user-visible consequences.  
  

Fix dumping of views that are just VALUES(...) but have column aliases.

  
commit   : f864fe07403e6d6b38a06490c01a62a41872899a    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Wed, 25 Feb 2015 12:01:12 -0500    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Wed, 25 Feb 2015 12:01:12 -0500    

Click here for diff

  
The "simple" path for printing VALUES clauses doesn't work if we need  
to attach nondefault column aliases, because there's no place to do that  
in the minimal VALUES() syntax.  So modify get_simple_values_rte() to  
detect nondefault aliases and treat that as a non-simple case.  This  
further exposes that the "non-simple" path never actually worked;  
it didn't produce valid syntax.  Fix that too.  Per bug #12789 from  
Curtis McEnroe, and analysis by Andrew Gierth.  
  
Back-patch to all supported branches.  Before 9.3, this also requires  
back-patching the part of commit 092d7ded29f36b0539046b23b81b9f0bf2d637f1  
that created get_simple_values_rte() to begin with; inserting the extra  
test into the old factorization of that logic would've been too messy.  
  

Guard against spurious signals in LockBufferForCleanup.

  
commit   : a6ddff81226aa39a610de1ff312dcbc0e4b9fa4d    
  
author   : Andres Freund <andres@anarazel.de>    
date     : Mon, 23 Feb 2015 16:11:11 +0100    
  
committer: Andres Freund <andres@anarazel.de>    
date     : Mon, 23 Feb 2015 16:11:11 +0100    

Click here for diff

  
When LockBufferForCleanup() has to wait to get a cleanup lock on a  
buffer, it does so by setting a flag in the buffer header and then  
waiting for other backends to signal it using ProcWaitForSignal().  
Unfortunately LockBufferForCleanup() missed that ProcWaitForSignal() can  
return for reasons other than the signal it is hoping for. If such a  
spurious signal arrives, the wait flags on the buffer header will still  
be set. That then triggers "ERROR: multiple backends attempting to wait  
for pincount 1".  
  
The fix is simple: unset the flag if it is still set when retrying. That  
implies an additional spinlock acquisition/release, but that's unlikely  
to matter given the cost of waiting for a cleanup lock.  Alternatively  
it would have been possible to move responsibility for maintaining the  
relevant flag to the waiter altogether, but besides being more invasive,  
that might have had negative consequences due to possible floods of  
signals.  
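The retry pattern can be sketched with mock state; the names below are stand-ins, not the real buffer-manager code. Set the wait flag, sleep, and if woken without the condition holding, clear the flag before looping:

```c
#include <assert.h>
#include <stdbool.h>

/* Mock state standing in for the buffer-header flag and
 * ProcWaitForSignal(); illustrative only. */
static bool wait_flag = false;
static int wakeups = 0;

static void mock_proc_wait_for_signal(void) { wakeups++; }

/* Pretend the cleanup lock only becomes available on the third
 * wakeup; the first two are "spurious". */
static bool cleanup_lock_available(void) { return wakeups >= 3; }

static int wait_for_cleanup_lock(void)
{
    for (;;)
    {
        wait_flag = true;       /* announce that we are waiting */
        mock_proc_wait_for_signal();
        if (cleanup_lock_available())
            return wakeups;
        wait_flag = false;      /* the fix: reset on spurious wakeup so
                                 * no stale flag is left on the buffer */
    }
}
```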
  
This looks to be a very longstanding bug. The relevant code in  
LockBufferForCleanup() hasn't changed materially since its introduction,  
and ProcWaitForSignal() has been documented to return for unrelated  
reasons since 8.2.  The master-only patch series removing  
ImmediateInterruptOK made it much easier to hit, though, as  
ProcSendSignal/ProcWaitForSignal now use a latch shared with other tasks.  
  
Per discussion with Kevin Grittner, Tom Lane and me.  
  
Backpatch to all supported branches.  
  
Discussion: 11553.1423805224@sss.pgh.pa.us  
  

Fix potential deadlock with libpq non-blocking mode.

  
commit   : cdf813c59312298194b16eecc68aa50c9e2e1133    
  
author   : Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Mon, 23 Feb 2015 13:32:34 +0200    
  
committer: Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Mon, 23 Feb 2015 13:32:34 +0200    

Click here for diff

  
If the libpq output buffer is full, the pqSendSome() function tries to  
drain any incoming data. This avoids a deadlock when the server e.g.  
sends a lot of NOTICE messages and blocks until we read them. However,  
pqSendSome() only did that in blocking mode. In non-blocking mode, the  
deadlock could still happen.  
  
To fix, take a two-pronged approach:  
  
1. Change the documentation to instruct that when PQflush() returns 1, you  
should wait for both read- and write-ready, and call PQconsumeInput() if it  
becomes read-ready. That fixes the deadlock, but applications are not going  
to change overnight.  
  
2. In pqSendSome(), drain the input buffer before returning 1. This  
alleviates the problem for applications that only wait for write-ready. In  
particular, a slow but steady stream of NOTICE messages during COPY FROM  
STDIN will no longer cause a deadlock. The risk remains that the server  
attempts to send a large burst of data and fills its output buffer, and at  
the same time the client also sends enough data to fill its output buffer.  
The application will deadlock if it goes to sleep, waiting for the socket  
to become write-ready, before the server's data arrives. In practice,  
NOTICE messages and such that the server might be sending are usually  
short, so it's highly unlikely that the server would fill its output buffer  
so quickly.  
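The client-side loop recommended in item 1 can be sketched as follows. PQflush, the readiness wait, and PQconsumeInput are all mocked here so the control flow runs without a server; the real calls live in libpq-fe.h:

```c
#include <assert.h>
#include <stdbool.h>

/* Mocks standing in for libpq calls; illustrative only. */
static int pending_flushes = 3;      /* pretend three flushes are needed */
static int notices_consumed = 0;

static int mock_PQflush(void)        /* 1 = more to send, 0 = all sent */
{
    return (--pending_flushes > 0) ? 1 : 0;
}
static bool mock_socket_read_ready(void)  /* pretend a NOTICE arrives once */
{
    return pending_flushes == 2;
}
static int mock_PQconsumeInput(void) { notices_consumed++; return 1; }

/* The documented pattern: while PQflush() returns 1, wait for the socket
 * to become read- OR write-ready; on read-ready, drain server data with
 * PQconsumeInput() before trying to flush again. */
static int flush_nonblocking(void)
{
    while (mock_PQflush() == 1)
    {
        if (mock_socket_read_ready())
        {
            if (!mock_PQconsumeInput())
                return -1;          /* connection trouble */
        }
        /* then loop: try flushing again */
    }
    return 0;
}
```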
  
Backpatch to all supported versions.  
  

Fix misparsing of empty value in conninfo_uri_parse_params().

  
commit   : f389b6e0a7637de014e5ea1adbcfa50108bb4e58    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sat, 21 Feb 2015 12:59:25 -0500    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sat, 21 Feb 2015 12:59:25 -0500    

Click here for diff

  
After finding an "=" character, the pointer was advanced twice when it  
should only advance once.  This is harmless as long as the value after "="  
has at least one character; but if it doesn't, we'd miss the terminator  
character and include too much in the value.  
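The off-by-one is easy to reproduce in a toy scanner (illustrative code, not the actual conninfo_uri_parse_params()): after finding '=', advance exactly once, so an empty value as in "a=&b=c" still stops at the '&' terminator.

```c
#include <assert.h>
#include <string.h>

/* Toy keyword=value scanner, illustrative only.  Fills key/value from
 * the first "key=value" pair in p, where pairs are separated by '&'.
 * Returns 0 on success, -1 if no '=' is found. */
static int parse_first_pair(const char *p, char *key, char *value)
{
    const char *eq = strchr(p, '=');
    if (eq == NULL)
        return -1;
    memcpy(key, p, (size_t)(eq - p));
    key[eq - p] = '\0';

    p = eq + 1;                 /* advance ONCE past '='; advancing twice
                                 * would skip the '&' terminator whenever
                                 * the value is empty */
    const char *end = strchr(p, '&');
    size_t len = end ? (size_t)(end - p) : strlen(p);
    memcpy(value, p, len);
    value[len] = '\0';
    return 0;
}
```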
  
In principle this could lead to reading off the end of memory.  It does not  
seem worth treating as a security issue though, because it would happen on  
client side, and besides client logic that's taking conninfo strings from  
untrusted sources has much worse security problems than this.  
  
Report and patch received off-list from Thomas Fanghaenel.  
Back-patch to 9.2 where the faulty code was introduced.  
  

Fix object identities for pg_conversion objects

  
commit   : a196e67f930e0fef10928928122770d59b14b653    
  
author   : Alvaro Herrera <alvherre@alvh.no-ip.org>    
date     : Wed, 18 Feb 2015 14:28:12 -0300    
  
committer: Alvaro Herrera <alvherre@alvh.no-ip.org>    
date     : Wed, 18 Feb 2015 14:28:12 -0300    

Click here for diff

  
We were neglecting to schema-qualify them.  
  
Backpatch to 9.3, where object identities were introduced as a concept  
by commit f8348ea32ec8.  
  

Fix failure to honor -Z compression level option in pg_dump -Fd.

  
commit   : a7ad5cf0cfcfab8418000d652fa4f0c6ad6c8911    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Wed, 18 Feb 2015 11:43:00 -0500    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Wed, 18 Feb 2015 11:43:00 -0500    

Click here for diff

  
cfopen() and cfopen_write() failed to pass the compression level through  
to zlib, so that you always got the default compression level if you got  
any at all.  
  
In passing, also fix these and related functions so that the correct errno  
is reliably returned on failure; the original coding supposes that free()  
cannot change errno, which is untrue on at least some platforms.  
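The errno repair follows a standard idiom (a sketch of the pattern, not the actual cfopen() code): save errno before cleanup calls such as free(), which the standard permits to change errno, and restore it before returning.

```c
#include <assert.h>
#include <errno.h>
#include <stdlib.h>

/* Sketch of the save/restore idiom, not the actual pg_dump code:
 * report failure without letting cleanup clobber the real errno. */
static void *fail_and_free(void *buf)
{
    int save_errno = errno;   /* capture the failure cause first */
    free(buf);                /* free() may change errno on some platforms */
    errno = save_errno;       /* restore it for the caller */
    return NULL;
}
```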
  
Per bug #12779 from Christoph Berg.  Back-patch to 9.1 where the faulty  
code was introduced.  
  
Michael Paquier  
  

Remove code to match IPv4 pg_hba.conf entries to IPv4-in-IPv6 addresses.

  
commit   : 4ea2d2ddbe247d529e9d51a362704d67c56f4e48    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 17 Feb 2015 12:49:18 -0500    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 17 Feb 2015 12:49:18 -0500    

Click here for diff

  
In investigating yesterday's crash report from Hugo Osvaldo Barrera, I only  
looked back as far as commit f3aec2c7f51904e7 where the breakage occurred  
(which is why I thought the IPv4-in-IPv6 business was undocumented).  But  
actually the logic dates back to commit 3c9bb8886df7d56a and was simply  
broken by erroneous refactoring in the later commit.  A bit of archive  
excavation shows that we added the whole business in response to a report  
that some 2003-era Linux kernels would report IPv4 connections as having  
IPv4-in-IPv6 addresses.  The fact that we've had no complaints since 9.0  
seems to be sufficient confirmation that no modern kernels do that, so  
let's just rip it all out rather than trying to fix it.  
  
Do this in the back branches too, thus essentially deciding that our  
effective behavior since 9.0 is correct.  If there are any platforms on  
which the kernel reports IPv4-in-IPv6 addresses as such, yesterday's fix  
would have made for a subtle and potentially security-sensitive change in  
the effective meaning of IPv4 pg_hba.conf entries, which does not seem like  
a good thing to do in minor releases.  So let's let the post-9.0 behavior  
stand, and change the documentation to match it.  
  
In passing, I failed to resist the temptation to wordsmith the description  
of pg_hba.conf IPv4 and IPv6 address entries a bit.  A lot of this text  
hasn't been touched since we were IPv4-only.  
  

Improve pg_check_dir code and comments.

  
commit   : 9a90ec9cff34168994f5d544c2c3cc1b1ae21277    
  
author   : Robert Haas <rhaas@postgresql.org>    
date     : Tue, 17 Feb 2015 10:19:30 -0500    
  
committer: Robert Haas <rhaas@postgresql.org>    
date     : Tue, 17 Feb 2015 10:19:30 -0500    

Click here for diff

  
Avoid losing errno if readdir() fails and closedir() works.  Consistently  
return 4 rather than 3 if both a lost+found directory and other files are  
found, rather than returning one value or the other depending on the  
order of the directory listing.  Update comments to match the actual  
behavior.  
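The post-fix convention can be sketched by classifying a directory listing. This is a simplified illustration; the real pg_check_dir() also handles open/readdir failures and returns 0 for a nonexistent directory:

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/* Simplified sketch of the return-value convention described above:
 *   1 = empty, 2 = only dot files, 3 = lost+found present,
 *   4 = other files present -- and 4 wins even when lost+found is
 *   also there, regardless of listing order. */
static int classify_dir(const char *const names[], int n)
{
    bool dotfiles = false, lostfound = false, other = false;

    for (int i = 0; i < n; i++)
    {
        const char *f = names[i];
        if (strcmp(f, ".") == 0 || strcmp(f, "..") == 0)
            continue;
        if (f[0] == '.')
            dotfiles = true;
        else if (strcmp(f, "lost+found") == 0)
            lostfound = true;
        else
            other = true;
    }
    if (other)
        return 4;       /* consistently 4, even alongside lost+found */
    if (lostfound)
        return 3;
    if (dotfiles)
        return 2;
    return 1;
}
```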
  
These oversights date to commits 6f03927fce038096f53ca67eeab9adb24938f8a6  
and 17f15239325a88581bb4f9cf91d38005f1f52d69.  
  
Marco Nenciarini  
  

Fix misuse of memcpy() in check_ip().

  
commit   : 7bc6e59546db93d975723cff7f19f3bffcefef6b    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 16 Feb 2015 16:17:48 -0500    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 16 Feb 2015 16:17:48 -0500    

Click here for diff

  
The previous coding copied garbage into a local variable, pretty much  
ensuring that the intended test of an IPv6 connection address against a  
promoted IPv4 address from pg_hba.conf would never match.  The lack of  
field complaints likely indicates that nobody realized this was supposed  
to work, which is unsurprising considering that no user-facing docs suggest  
it should work.  
  
In principle this could have led to a SIGSEGV due to reading off the end of  
memory, but since the source address would have pointed to somewhere in the  
function's stack frame, that's quite unlikely.  What led to discovery of  
the bug is Hugo Osvaldo Barrera's report of a crash after an OS upgrade,  
which is probably because he is now running a system whose memcpy()  
calls abort() upon detecting overlapping source and destination areas.  (You'd  
have to additionally suppose some things about the stack frame layout to  
arrive at this conclusion, but it seems plausible.)  
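For context on why such a memcpy() can abort: memcpy() has undefined behavior when source and destination overlap, and some hardened C libraries detect that and abort, whereas memmove() is defined for overlaps. This snippet only illustrates the general hazard; the actual fix corrected which bytes check_ip() copies, it did not switch to memmove().

```c
#include <assert.h>
#include <string.h>

/* Overlapping shift: undefined behavior with memcpy(), well defined
 * with memmove().  Illustrative of the hazard only. */
static void shift_right_one(char *buf, size_t n)
{
    memmove(buf + 1, buf, n);   /* safe even though regions overlap */
}
```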
  
This has been broken since the code was added, in commit f3aec2c7f51904e7,  
so back-patch to all supported branches.  
  

Fix null-pointer-deref crash while doing COPY IN with check constraints.

  
commit   : 4662ba5a23db39b695050afbcc5ff65ccd285179    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sun, 15 Feb 2015 23:26:46 -0500    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sun, 15 Feb 2015 23:26:46 -0500    

Click here for diff

  
In commit bf7ca15875988a88e97302e012d7c4808bef3ea9 I introduced an  
assumption that an RTE referenced by a whole-row Var must have a valid eref  
field.  This is false for RTEs constructed by DoCopy, and there are other  
places taking similar shortcuts.  Perhaps we should make all those places  
go through addRangeTableEntryForRelation or its siblings instead of having  
ad-hoc logic, but the most reliable fix seems to be to make the new code in  
ExecEvalWholeRowVar cope if there's no eref.  We can reasonably assume that  
there's no need to insert column aliases if no aliases were provided.  
  
Add a regression test case covering this, and also verifying that a sane  
column name is in fact available in this situation.  
  
Although the known case only crashes in 9.4 and HEAD, it seems prudent to  
back-patch the code change to 9.2, since all the ingredients for a similar  
failure exist in the variant patch applied to 9.3 and 9.2.  
  
Per report from Jean-Pierre Pelletier.  
  

pg_regress: Write processed input/*.source into output dir

  
commit   : d5f70a2d68d76a28c18facf22170e99d2a0301c6    
  
author   : Peter Eisentraut <peter_e@gmx.net>    
date     : Sat, 14 Feb 2015 21:33:41 -0500    
  
committer: Peter Eisentraut <peter_e@gmx.net>    
date     : Sat, 14 Feb 2015 21:33:41 -0500    

Click here for diff

  
Before, it was writing the processed files into the input directory,  
which is incorrect in a vpath build.  
  

Fix broken #ifdef for __sparcv8

  
commit   : 6ef5d894a42646bdc480da88136fb2afd199e187    
  
author   : Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Fri, 13 Feb 2015 23:51:23 +0200    
  
committer: Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Fri, 13 Feb 2015 23:51:23 +0200    

Click here for diff

  
Rob Rowan. Backpatch to all supported versions, like the patch that added  
the broken #ifdef.  
  

pg_upgrade: quote directory names in delete_old_cluster script

  
commit   : 9ecd51da704b6c451fbf40bc30cd2aff1111eb92    
  
author   : Bruce Momjian <bruce@momjian.us>    
date     : Wed, 11 Feb 2015 22:06:04 -0500    
  
committer: Bruce Momjian <bruce@momjian.us>    
date     : Wed, 11 Feb 2015 22:06:04 -0500    

Click here for diff

  
This allows the delete script to properly function when special  
characters appear in directory paths, e.g. spaces.  
  
Backpatch through 9.0  
  

pg_upgrade: preserve freeze info for postgres/template1 dbs

  
commit   : e20523f8f7649f0cb971ef0e8f8d97af9aa55b54    
  
author   : Bruce Momjian <bruce@momjian.us>    
date     : Wed, 11 Feb 2015 21:02:07 -0500    
  
committer: Bruce Momjian <bruce@momjian.us>    
date     : Wed, 11 Feb 2015 21:02:07 -0500    

Click here for diff

  
pg_database.datfrozenxid and pg_database.datminmxid were not preserved  
for the 'postgres' and 'template1' databases.  This could cause missing  
clog file errors on access to user tables and indexes after upgrades in  
these databases.  
  
Backpatch through 9.0  
  

Fix missing PQclear() in libpqrcv_endstreaming().

  
commit   : 734bbf2e978314cad86e1dd8fefe7f0c3f52a4ef    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Wed, 11 Feb 2015 19:20:49 -0500    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Wed, 11 Feb 2015 19:20:49 -0500    

Click here for diff

  
This omission leaked one PGresult per WAL streaming cycle, which possibly  
would never be enough to notice in the real world, but it's still a leak.  
  
Per Coverity.  Back-patch to 9.3 where the error was introduced.  
  

Fix minor memory leak in ident_inet().

  
commit   : bcf2decdceb542efb2e55db2dc803b72c2e4cb5c    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Wed, 11 Feb 2015 19:09:54 -0500    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Wed, 11 Feb 2015 19:09:54 -0500    

Click here for diff

  
We'd leak the ident_serv data structure if the second pg_getaddrinfo_all  
(the one for the local address) failed.  This is not of great consequence  
because a failure return here just leads directly to backend exit(), but  
if this function is going to try to clean up after itself at all, it should  
not have such holes in the logic.  Try to fix it in a future-proof way by  
having all the failure exits go through the same cleanup path, rather than  
"optimizing" some of them.  
  
Per Coverity.  Back-patch to 9.2, which is as far back as this patch  
applies cleanly.  
  

Fix more memory leaks in failure path in buildACLCommands.

  
commit   : 5ea8cfe8f721418cc54e2847a3867331e57dee20    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Wed, 11 Feb 2015 18:35:23 -0500    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Wed, 11 Feb 2015 18:35:23 -0500    

Click here for diff

  
We already had one go at this issue in commit d73b7f973db5ec7e, but we  
failed to notice that buildACLCommands also leaked several PQExpBuffers  
along with a simply malloc'd string.  This time let's try to make the  
fix a bit more future-proof by eliminating the separate exit path.  
  
It's still not exactly critical because pg_dump will curl up and die on  
failure; but since the amount of the potential leak is now several KB,  
it seems worth back-patching as far as 9.2 where the previous fix landed.  
  
Per Coverity, which evidently is smarter than clang's static analyzer.  
  

Fixed array handling in ecpg.

  
commit   : 1a321fea71db878755a4a2c00d45b98e10842a92    
  
author   : Michael Meskes <meskes@postgresql.org>    
date     : Wed, 11 Feb 2015 11:13:11 +0100    
  
committer: Michael Meskes <meskes@postgresql.org>    
date     : Wed, 11 Feb 2015 11:13:11 +0100    

Click here for diff

  
When ecpg was rewritten for the new protocol version, not all variable  
types were corrected. This patch rewrites the code for these types to fix  
that. It also fixes the documentation to correctly describe the status of  
array handling.  
  

Fix pg_dump's heuristic for deciding which casts to dump.

  
commit   : a4e871caada8117d5ca712187fb2bb0f68ec2879    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 10 Feb 2015 22:38:20 -0500    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 10 Feb 2015 22:38:20 -0500    

Click here for diff

  
Back in 2003 we had a discussion about how to decide which casts to dump.  
At the time pg_dump really only considered an object's containing schema  
to decide what to dump (ie, dump whatever's not in pg_catalog), and so  
we chose a complicated idea involving whether the underlying types were to  
be dumped (cf commit a6790ce85752b67ad994f55fdf1a450262ccc32e).  But users  
are allowed to create casts between built-in types, and we failed to dump  
such casts.  Let's get rid of that heuristic, which has accreted even more  
ugliness since then, in favor of just looking at the cast's OID to decide  
if it's a built-in cast or not.  
  
In passing, also fix some really ancient code that supposed that it had to  
manufacture a dependency for the cast on its cast function; that's only  
true when dumping from a pre-7.3 server.  This just resulted in some wasted  
cycles and duplicate dependency-list entries with newer servers, but we  
might as well improve it.  
  
Per gripes from a number of people, most recently Greg Sabino Mullane.  
Back-patch to all supported branches.  
  

Fix GEQO to not assume its join order heuristic always works.

  
commit   : 672abc4024a801478ecca7036c91cfcbee17f767    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 10 Feb 2015 20:37:24 -0500    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 10 Feb 2015 20:37:24 -0500    

Click here for diff

  
Back in commit 400e2c934457bef4bc3cc9a3e49b6289bd761bc0 I rewrote GEQO's  
gimme_tree function to improve its heuristic for modifying the given tour  
into a legal join order.  In what can only be called a fit of hubris,  
I supposed that this new heuristic would *always* find a legal join order,  
and ripped out the old logic that allowed gimme_tree to sometimes fail.  
  
The folly of this is exposed by bug #12760, in which the "greedy" clumping  
behavior of merge_clump() can lead it into a dead end which could only be  
recovered from by un-clumping.  We have no code for that and wouldn't know  
exactly what to do with it if we did.  Rather than try to improve the  
heuristic rules still further, let's just recognize that it *is* a  
heuristic and probably must always have failure cases.  So, put back the  
code removed in the previous commit to allow for failure (but comment it  
a bit better this time).  
  
It's possible that this code was actually fully correct at the time and  
has only been broken by the introduction of LATERAL.  But having seen this  
example I no longer have much faith in that proposition, so back-patch to  
all supported branches.  
  

Report WAL flush, not insert, position in replication IDENTIFY_SYSTEM

  
commit   : 5f0ba4abb33640205f38e5bee1e981da25175acf    
  
author   : Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Fri, 6 Feb 2015 11:18:14 +0200    
  
committer: Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Fri, 6 Feb 2015 11:18:14 +0200    

Click here for diff

  
When beginning streaming replication, the client usually issues the  
IDENTIFY_SYSTEM command, which used to return the current WAL insert  
position. That's not suitable for the intended purpose of that field,  
however. pg_receivexlog uses it to start replication from the reported  
point, but if it hasn't been flushed to disk yet, it will fail. Change  
IDENTIFY_SYSTEM to report the flush position instead.  
  
Backpatch to 9.1 and above. 9.0 doesn't report any WAL position.  
  

Add missing float.h include to snprintf.c.

  
commit   : f0241d648661c6650a32d26df2216ed78ba7953f    
  
author   : Andres Freund <andres@anarazel.de>    
date     : Wed, 4 Feb 2015 13:27:31 +0100    
  
committer: Andres Freund <andres@anarazel.de>    
date     : Wed, 4 Feb 2015 13:27:31 +0100    

Click here for diff

  
On Windows, _isnan() (which isnan() is redirected to in port/win32.h)  
is declared in float.h, not math.h.  
  
Per buildfarm animal currawong.  
  
Backpatch to all supported branches.