PostgreSQL 9.2.14 commit log

Stamp 9.2.14.

  
commit   : 69157086413efb8c1de793aa4493187c819cc5ae    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 5 Oct 2015 15:15:56 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 5 Oct 2015 15:15:56 -0400    


  
  

doc: Update URLs of external projects

  
commit   : d3ff94d87cee5787aa6f9119cdc75ffc2507ec54    
  
author   : Peter Eisentraut <peter_e@gmx.net>    
date     : Fri, 2 Oct 2015 21:50:59 -0400    
  
committer: Peter Eisentraut <peter_e@gmx.net>    
date     : Fri, 2 Oct 2015 21:50:59 -0400    


  
  

Translation updates

  
commit   : f177e1bad326a329b168ecc03cdb61c28374a3a3    
  
author   : Peter Eisentraut <peter_e@gmx.net>    
date     : Mon, 5 Oct 2015 10:47:48 -0400    
  
committer: Peter Eisentraut <peter_e@gmx.net>    
date     : Mon, 5 Oct 2015 10:47:48 -0400    


  
Source-Git-URL: git://git.postgresql.org/git/pgtranslation/messages.git  
Source-Git-Hash: d8bd45466e3980b5ab4582ff1705fcd1fff42908  
  

Last-minute updates for release notes.

  
commit   : dd5502a8d5caf4775e06a31d17641d49250f3d34    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 5 Oct 2015 10:57:15 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 5 Oct 2015 10:57:15 -0400    


  
Add entries for security and not-quite-security issues.  
  
Security: CVE-2015-5288, CVE-2015-5289  
  

Remove outdated comment about relation level autovacuum freeze limits.

  
commit   : 6cb5bdec09521c16892bf9071a705b47bbfb3fac    
  
author   : Andres Freund <andres@anarazel.de>    
date     : Mon, 5 Oct 2015 16:09:13 +0200    
  
committer: Andres Freund <andres@anarazel.de>    
date     : Mon, 5 Oct 2015 16:09:13 +0200    


  
The documentation for the autovacuum_multixact_freeze_max_age and  
autovacuum_freeze_max_age relation level parameters contained:  
"Note that while you can set autovacuum_multixact_freeze_max_age very  
small, or even zero, this is usually unwise since it will force frequent  
vacuuming."  
which hasn't been true since these options were made relation options,  
instead of residing in the pg_autovacuum table (834a6da4f7).  
  
Remove the outdated sentence. Even the lowered limits from 2596d70 are  
high enough that this doesn't warrant calling out the risk in the CREATE  
TABLE docs.  
  
Per discussion with Tom Lane and Alvaro Herrera  
  
Discussion: 26377.1443105453@sss.pgh.pa.us  
Backpatch: 9.0- (in parts)  
  

Prevent stack overflow in query-type functions.

  
commit   : ea68c221f4d2351e111ff3d7573ef67a0e9c855a    
  
author   : Noah Misch <noah@leadboat.com>    
date     : Mon, 5 Oct 2015 10:06:30 -0400    
  
committer: Noah Misch <noah@leadboat.com>    
date     : Mon, 5 Oct 2015 10:06:30 -0400    


  
The tsquery, ltxtquery and query_int data types have a common ancestor.  
Having acquired check_stack_depth() calls independently, each was  
missing at least one call.  Back-patch to 9.0 (all supported versions).  
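The guard pattern being added is PostgreSQL's check_stack_depth(), which compares the current stack pointer against a configured limit and raises ERROR before the C stack can actually overflow. A minimal self-contained sketch of the same idea, using an explicit depth counter and setjmp/longjmp as a stand-in for the backend's error machinery (all names here are illustrative, not the backend's):

```c
#include <setjmp.h>

/* Simplified sketch: PostgreSQL's real check_stack_depth() measures the
 * actual stack pointer; here we mimic it with a depth counter so the
 * example is self-contained. */
#define MAX_DEPTH 1000

static jmp_buf recovery;
static int depth = 0;

static void check_stack_depth_sketch(void)
{
    if (depth > MAX_DEPTH)
        longjmp(recovery, 1);   /* stand-in for ereport(ERROR, ...) */
}

/* A recursive tree walker in the style of the tsquery/ltxtquery code:
 * without the guard, deeply nested input overflows the C stack. */
static int count_nodes(int nesting)
{
    int n;

    check_stack_depth_sketch();  /* the call this class of fix adds */
    depth++;
    n = (nesting > 0) ? 1 + count_nodes(nesting - 1) : 1;
    depth--;
    return n;
}

/* Entry point: returns node count, or -1 if input is too deeply nested. */
int eval_guarded(int nesting)
{
    depth = 0;
    if (setjmp(recovery) != 0)
        return -1;               /* error path: nesting limit exceeded */
    return count_nodes(nesting);
}
```

The point of the fix is that every recursive entry point needs the check; missing it in even one path leaves a crash reachable.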
  

Prevent stack overflow in container-type functions.

  
commit   : 5e43130b50be8d0115b1ce083595f9252097369f    
  
author   : Noah Misch <noah@leadboat.com>    
date     : Mon, 5 Oct 2015 10:06:29 -0400    
  
committer: Noah Misch <noah@leadboat.com>    
date     : Mon, 5 Oct 2015 10:06:29 -0400    


  
A range type can name another range type as its subtype, and a record  
type can bear a column of another record type.  Consequently, functions  
like range_cmp() and record_recv() are recursive.  Functions at risk  
include operator family members and referents of pg_type regproc  
columns.  Treat as recursive any such function that looks up and calls  
the same-purpose function for a record column type or the range subtype.  
Back-patch to 9.0 (all supported versions).  
  
An array type's element type is never itself an array type, so array  
functions are unaffected.  Recursion depth proportional to array  
dimensionality, found in array_dim_to_jsonb(), is fine thanks to MAXDIM.  
  

  
Prevent stack overflow in json-related functions.

  
commit   : 8dacb29ca7c92814d69135f40e16a46f8cf9cbaf    
  
author   : Noah Misch <noah@leadboat.com>    
date     : Mon, 5 Oct 2015 10:06:29 -0400    
  
committer: Noah Misch <noah@leadboat.com>    
date     : Mon, 5 Oct 2015 10:06:29 -0400    


  
Sufficiently-deep recursion heretofore elicited a SIGSEGV.  If an  
application constructs PostgreSQL json or jsonb values from arbitrary  
user input, application users could have exploited this to terminate all  
active database connections.  That applies to 9.3, where the json parser  
adopted recursive descent, and later versions.  Only row_to_json() and  
array_to_json() were at risk in 9.2, both in a non-security capacity.  
Back-patch to 9.2, where the json type was introduced.  
  
Oskari Saarenmaa, reviewed by Michael Paquier.  
  
Security: CVE-2015-5289  
  

pgcrypto: Detect and report too-short crypt() salts.

  
commit   : 56232f9879768e961485d8ba218da18c38768413    
  
author   : Noah Misch <noah@leadboat.com>    
date     : Mon, 5 Oct 2015 10:06:29 -0400    
  
committer: Noah Misch <noah@leadboat.com>    
date     : Mon, 5 Oct 2015 10:06:29 -0400    


  
Certain short salts crashed the backend or disclosed a few bytes of  
backend memory.  For existing salt-induced error conditions, emit a  
message saying as much.  Back-patch to 9.0 (all supported versions).  
  
Josh Kupershmidt  
  
Security: CVE-2015-5288  
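The bug class here is indexing past the end of a salt string shorter than the algorithm requires. A hypothetical illustration of the kind of validation the fix adds (the constants and the `check_salt` helper are illustrative; the real change lives inside pgcrypto's crypt() implementations and reports an error instead of reading out of bounds):

```c
#include <stddef.h>
#include <string.h>

#define DES_SALT_LEN    2     /* traditional DES crypt() uses two bytes */
#define MD5_SALT_PREFIX "$1$"

/* Return 0 if the salt is long enough to use safely, -1 otherwise. */
int check_salt(const char *salt)
{
    size_t len;

    if (salt == NULL)
        return -1;
    len = strlen(salt);

    if (strncmp(salt, MD5_SALT_PREFIX, 3) == 0)
        return (len > 3) ? 0 : -1;   /* need at least one salt char */

    /* Traditional DES reads exactly salt[0] and salt[1]; reading
     * salt[1] of a one-byte salt is the out-of-bounds access. */
    return (len >= DES_SALT_LEN) ? 0 : -1;
}
```

Rejecting the salt with an explicit error both avoids the crash and stops the few bytes of adjacent memory from leaking into the hash output.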
  

Re-Align *_freeze_max_age reloption limits with corresponding GUC limits.

  
commit   : e07cfef34d8e2f74e0d28f3e8b4384ee660aa9e2    
  
author   : Andres Freund <andres@anarazel.de>    
date     : Mon, 5 Oct 2015 11:53:43 +0200    
  
committer: Andres Freund <andres@anarazel.de>    
date     : Mon, 5 Oct 2015 11:53:43 +0200    


  
In 020235a5754 I lowered the autovacuum_*freeze_max_age minimums to  
allow for easier testing of wraparounds. I did not touch the  
corresponding per-table limits. While those don't matter for the purpose  
of wraparound, it seems more consistent to lower them as well.  
  
It's noteworthy that the previous reloption lower limit for  
autovacuum_multixact_freeze_max_age was too high by one magnitude, even  
before 020235a5754.  
  
Discussion: 26377.1443105453@sss.pgh.pa.us  
Backpatch: back to 9.0 (in parts), like the prior patch  
  

Release notes for 9.5beta1, 9.4.5, 9.3.10, 9.2.14, 9.1.19, 9.0.23.

  
commit   : 795d5e427f64232f50f8f52919556f346a9af694    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sun, 4 Oct 2015 19:38:00 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sun, 4 Oct 2015 19:38:00 -0400    


  
  

Further twiddling of nodeHash.c hashtable sizing calculation.

  
commit   : ebc7d928a761a6e1d56fc23caf1079722821dc0e    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sun, 4 Oct 2015 15:55:07 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sun, 4 Oct 2015 15:55:07 -0400    


  
On reflection, the submitted patch didn't really work to prevent the  
request size from exceeding MaxAllocSize, because of the fact that we'd  
happily round nbuckets up to the next power of 2 after we'd limited it to  
max_pointers.  The simplest way to enforce the limit correctly is to  
round max_pointers down to a power of 2 when it isn't one already.  
  
(Note that the constraint to INT_MAX / 2, if it were doing anything useful  
at all, is properly applied after that.)  
  

Fix possible "invalid memory alloc request size" failure in nodeHash.c.

  
commit   : fd3e3cf500de7f8f625744fbd8a413b27f500abe    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sun, 4 Oct 2015 14:16:59 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sun, 4 Oct 2015 14:16:59 -0400    


  
Limit the size of the hashtable pointer array to not more than  
MaxAllocSize.  We've seen reports of failures due to this in HEAD/9.5,  
and it seems possible in older branches as well.  The change in  
NTUP_PER_BUCKET in 9.5 may have made the problem more likely, but  
surely it didn't introduce it.  
  
Tomas Vondra, slightly modified by me  
  

Update time zone data files to tzdata release 2015g.

  
commit   : fd519c1701ad509dfbe7b8698ddf5b9d11432da8    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 2 Oct 2015 19:15:39 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 2 Oct 2015 19:15:39 -0400    


  
DST law changes in Cayman Islands, Fiji, Moldova, Morocco, Norfolk Island,  
North Korea, Turkey, Uruguay.  New zone America/Fort_Nelson for Canadian  
Northern Rockies.  
  

Add recursion depth protection to LIKE matching.

  
commit   : 57bf7b54831b68f63cea006b988e82cccb3469de    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 2 Oct 2015 15:00:52 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 2 Oct 2015 15:00:52 -0400    


  
Since MatchText() recurses, it could in principle be driven to stack  
overflow, although quite a long pattern would be needed.  
  

Add recursion depth protections to regular expression matching.

  
commit   : a0c089f33f2909663a369eead7877eb6317d31f3    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 2 Oct 2015 14:51:58 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 2 Oct 2015 14:51:58 -0400    


  
Some of the functions in regex compilation and execution recurse, and  
therefore could in principle be driven to stack overflow.  The Tcl crew  
has seen this happen in practice in duptraverse(), though their fix was  
to put in a hard-wired limit on the number of recursive levels, which is  
not too appetizing --- fortunately, we have enough infrastructure to check  
the actually available stack.  Greg Stark has also seen it in other places  
while fuzz testing on a machine with limited stack space.  Let's put guards  
in to prevent crashes in all these places.  
  
Since the regex code would leak memory if we simply threw elog(ERROR),  
we have to introduce an API that checks for stack depth without throwing  
such an error.  Fortunately that's not difficult.  
  

Fix potential infinite loop in regular expression execution.

  
commit   : 483bbc9fea18a0bba4d1a28c7e4ed5de5414ca58    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 2 Oct 2015 14:26:36 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 2 Oct 2015 14:26:36 -0400    


  
In cfindloop(), if the initial call to shortest() reports that a  
zero-length match is possible at the current search start point, but then  
it is unable to construct any actual match to that, it'll just loop around  
with the same start point, and thus make no progress.  We need to force the  
start point to be advanced.  This is safe because the loop over "begin"  
points has already tried and failed to match starting at "close", so there  
is surely no need to try that again.  
  
This bug was introduced in commit e2bd904955e2221eddf01110b1f25002de2aaa83,  
wherein we allowed continued searching after we'd run out of match  
possibilities, but evidently failed to think hard enough about exactly  
where we needed to search next.  
  
Because of the way this code works, such a match failure is only possible  
in the presence of backrefs --- otherwise, shortest()'s judgment that a  
match is possible should always be correct.  That probably explains how  
come the bug has escaped detection for several years.  
  
The actual fix is a one-liner, but I took the trouble to add/improve some  
comments related to the loop logic.  
  
After fixing that, the submitted test case "()*\1" didn't loop anymore.  
But it reported failure, though it seems like it ought to match a  
zero-length string; both Tcl and Perl think it does.  That seems to be from  
overenthusiastic optimization on my part when I rewrote the iteration match  
logic in commit 173e29aa5deefd9e71c183583ba37805c8102a72: we can't just  
"declare victory" for a zero-length match without bothering to set match  
data for capturing parens inside the iterator node.  
  
Per fuzz testing by Greg Stark.  The first part of this is a bug in all  
supported branches, and the second part is a bug since 9.2 where the  
iteration rewrite happened.  
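The termination argument above boils down to a general pattern: a search loop that retries match construction at successive start points must advance its start point even when a predicted zero-length match cannot be realized. A toy sketch of that invariant (the matcher callback here always fails, mimicking the backref case; all names are illustrative, not the regex engine's):

```c
#include <string.h>

typedef int (*matcher_fn)(const char *s, size_t pos, size_t *matchlen);

/* Returns index of the first realizable match, or -1 if none.
 * The loop's pos++ is the "forced advance": the same start point is
 * never retried after a failure, so the loop always terminates. */
long search_with_progress(const char *s, matcher_fn try_match)
{
    size_t len = strlen(s);

    for (size_t pos = 0; pos <= len; pos++)
    {
        size_t matchlen;
        if (try_match(s, pos, &matchlen))
            return (long) pos;
        /* on failure, fall through and advance */
    }
    return -1;
}

/* Stand-in for a zero-length match prediction that can never be
 * constructed in full, as happens with unsatisfiable backrefs. */
int never_matches(const char *s, size_t pos, size_t *matchlen)
{
    (void) s; (void) pos; (void) matchlen;
    return 0;
}
```

The pre-fix bug was the equivalent of omitting the advance on the failure path, so the loop retried the same position forever.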
  

Add some more query-cancel checks to regular expression matching.

  
commit   : 2d51f55ff5a87bf33c2590def03d502ec3c4adb3    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 2 Oct 2015 13:45:39 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 2 Oct 2015 13:45:39 -0400    


  
Commit 9662143f0c35d64d7042fbeaf879df8f0b54be32 added infrastructure to  
allow regular-expression operations to be terminated early in the event  
of SIGINT etc.  However, fuzz testing by Greg Stark disclosed that there  
are still cases where regex compilation could run for a long time without  
noticing a cancel request.  Specifically, the fixempties() phase never  
adds new states, only new arcs, so it doesn't hit the cancel check I'd put  
in newstate().  Add one to newarc() as well to cover that.  
  
Some experimentation of my own found that regex execution could also run  
for a long time despite a pending cancel.  We'd put a high-level cancel  
check into cdissect(), but there was none inside the core text-matching  
routines longest() and shortest().  Ordinarily those inner loops are very  
very fast ... but in the presence of lookahead constraints, not so much.  
As a compromise, stick a cancel check into the stateset cache-miss  
function, which is enough to guarantee a cancel check at least once per  
lookahead constraint test.  
  
Making this work required more attention to error handling throughout the  
regex executor.  Henry Spencer had apparently originally intended longest()  
and shortest() to be incapable of incurring errors while running, so  
neither they nor their subroutines had well-defined error reporting  
behaviors.  However, that was already broken by the lookahead constraint  
feature, since lacon() can surely suffer an out-of-memory failure ---  
which, in the code as it stood, might never be reported to the user at all,  
but just silently be treated as a non-match of the lookahead constraint.  
Normalize all that by inserting explicit error tests as needed.  I took the  
opportunity to add some more comments to the code, too.  
  
Back-patch to all supported branches, like the previous patch.  
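The placement strategy described above, putting the cancel test on the cache-miss path rather than in the hot inner loop, can be sketched as follows. This is a self-contained illustration, not the regex executor's code: `cancel_pending` stands in for the flag that the backend's CHECK_FOR_INTERRUPTS() consults.

```c
#include <stdbool.h>

static volatile bool cancel_pending = false;
static long cancel_checks = 0;

/* Stand-in for the stateset cache-miss function: misses are rare
 * relative to hits, so a check here is cheap, yet it is guaranteed to
 * run at least once per lookahead-constraint evaluation. */
static int miss_path(int state)
{
    cancel_checks++;              /* the check added by the fix */
    if (cancel_pending)
        return -1;                /* abandon the match: cancelled */
    return state + 1;             /* "compute" the new cache entry */
}

/* Inner matching loop: cache hits pay no cancel-check cost at all. */
long run_matcher(int iterations, int miss_every)
{
    int state = 0;

    for (int i = 0; i < iterations; i++)
    {
        if (miss_every > 0 && i % miss_every == 0)
        {
            state = miss_path(state);
            if (state < 0)
                return -1;
        }
        /* ... fast cache-hit work would go here ... */
    }
    return cancel_checks;
}
```

The compromise the commit describes is exactly this trade: responsiveness bounded by the miss rate, with zero overhead on the fast path.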
  

Docs: add disclaimer about hazards of using regexps from untrusted sources.

  
commit   : 52511fd6243260d826e9fb337d3dcd79811b5f91    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 2 Oct 2015 13:30:43 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 2 Oct 2015 13:30:43 -0400    


  
It's not terribly hard to devise regular expressions that take large  
amounts of time and/or memory to process.  Recent testing by Greg Stark has  
also shown that machines with small stack limits can be driven to stack  
overflow by suitably crafted regexps.  While we intend to fix these things  
as much as possible, it's probably impossible to eliminate slow-execution  
cases altogether.  In any case we don't want to treat such things as  
security issues.  The history of that code should already discourage  
prudent DBAs from allowing execution of regexp patterns coming from  
possibly-hostile sources, but it seems like a good idea to warn about the  
hazard explicitly.  
  
Currently, similar_escape() allows access to enough of the underlying  
regexp behavior that the warning has to apply to SIMILAR TO as well.  
We might be able to make it safer if we tightened things up to allow only  
SQL-mandated capabilities in SIMILAR TO; but that would be a subtly  
non-backwards-compatible change, so it requires discussion and probably  
could not be back-patched.  
  
Per discussion among pgsql-security list.  
  

Fix pg_dump to handle inherited NOT VALID check constraints correctly.

  
commit   : 3756c65a0764bca1ba104dfa87d8f2cf7f885311    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 1 Oct 2015 16:19:49 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 1 Oct 2015 16:19:49 -0400    


  
This case seems to have been overlooked when unvalidated check constraints  
were introduced, in 9.2.  The code would attempt to dump such constraints  
over again for each child table, even though adding them to the parent  
table is sufficient.  
  
In 9.2 and 9.3, also fix contrib/pg_upgrade/Makefile so that the "make  
clean" target fully cleans up after a failed test.  This evidently got  
dealt with at some point in 9.4, but it wasn't back-patched.  I ran into  
it while testing this fix ...  
  
Per bug #13656 from Ingmar Brouns.  
  

Fix documentation error in commit 8703059c6b55c427100e00a09f66534b6ccbfaa1.

  
commit   : c4a1039fdfd9543498810873ea1d4cbfa1d06c9e    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 1 Oct 2015 10:31:22 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 1 Oct 2015 10:31:22 -0400    


  
Etsuro Fujita spotted a thinko in the README commentary.  
  

Improve LISTEN startup time when there are many unread notifications.

  
commit   : e4c00750af916cf001725dbb034b198eed8eee0c    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Wed, 30 Sep 2015 23:32:23 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Wed, 30 Sep 2015 23:32:23 -0400    


  
If some existing listener is far behind, incoming new listener sessions  
would start from that session's read pointer and then need to advance over  
many already-committed notification messages, which they have no interest  
in.  This was expensive in itself and also thrashed the pg_notify SLRU  
buffers a lot more than necessary.  We can improve matters considerably  
in typical scenarios, without much added cost, by starting from the  
furthest-ahead read pointer, not the furthest-behind one.  We do have to  
consider only sessions in our own database when doing this, which requires  
an extra field in the data structure, but that's a pretty small cost.  
  
Back-patch to 9.0 where the current LISTEN/NOTIFY logic was introduced.  
  
Matt Newell, slightly adjusted by me  
  

Fix plperl to handle non-ASCII error message texts correctly.

  
commit   : aae40cf13b9b1b250db4a29e043e019fe85b0522    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 29 Sep 2015 10:52:22 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 29 Sep 2015 10:52:22 -0400    


  
We were passing error message texts to croak() verbatim, which turns out  
not to work if the text contains non-ASCII characters; Perl mangles their  
encoding, as reported in bug #13638 from Michal Leinweber.  To fix, convert  
the text into a UTF8-encoded SV first.  
  
It's hard to test this without risking failures in different database  
encodings; but we can follow the lead of plpython, which is already  
assuming that no-break space (U+00A0) has an equivalent in all encodings  
we care about running the regression tests in (cf commit 2dfa15de5).  
  
Back-patch to 9.1.  The code is quite different in 9.0, and anyway it seems  
too risky to put something like this into 9.0's final minor release.  
  
Alex Hunsaker, with suggestions from Tim Bunce and Tom Lane  
  

Fix compiler warning about unused function in non-readline case.

  
commit   : a959db8acd058a311478d3692ac4ca35b747079c    
  
author   : Andrew Dunstan <andrew@dunslane.net>    
date     : Mon, 28 Sep 2015 18:29:20 -0400    
  
committer: Andrew Dunstan <andrew@dunslane.net>    
date     : Mon, 28 Sep 2015 18:29:20 -0400    


  
Backpatch to all live branches to keep the code in sync.  
  

Further fix for psql's code for locale-aware formatting of numeric output.

  
commit   : 80fa5421017b23e992ae71676e424fc5d1acbed8    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 25 Sep 2015 12:20:46 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 25 Sep 2015 12:20:46 -0400    


  
(Third time's the charm, I hope.)  
  
Additional testing disclosed that this code could mangle already-localized  
output from the "money" datatype.  We can't very easily skip applying it  
to "money" values, because the logic is tied to column right-justification  
and people expect "money" output to be right-justified.  Short of  
decoupling that, we can fix it in what should be a safe enough way by  
testing to make sure the string doesn't contain any characters that would  
not be expected in plain numeric output.  
  

Further fix for psql's code for locale-aware formatting of numeric output.

  
commit   : 60617d7d680e46e45126baa98c6e20728300c893    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 25 Sep 2015 00:00:33 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 25 Sep 2015 00:00:33 -0400    


  
On closer inspection, those seemingly redundant atoi() calls were not so  
much inefficient as just plain wrong: the author of this code either had  
not read, or had not understood, the POSIX specification for localeconv().  
The grouping field is *not* a textual digit string but separate integers  
encoded as chars.  
  
We'll follow the existing code as well as the backend's cash.c in only  
honoring the first group width, but let's at least honor it correctly.  
  
This doesn't actually result in any behavioral change in any of the  
locales I have installed on my Linux box, which may explain why nobody's  
complained; grouping width 3 is close enough to universal that it's barely  
worth considering other cases.  Still, wrong is wrong, so back-patch.  
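The POSIX point at issue: `localeconv()->grouping` is a sequence of small integers stored directly in chars, not a digit string. `"\003"` means groups of three, whereas the string `"3"` would mean groups of 51 (the ASCII code of '3'). A small helper reading the first group width the way the corrected code does, honoring the terminators POSIX defines (the function name is illustrative):

```c
#include <limits.h>
#include <locale.h>

/* Read the first group width from a localeconv() grouping string. */
int first_group_width(const char *grouping)
{
    if (grouping == NULL || *grouping == '\0')
        return 0;                /* empty string: locale has no grouping */
    if (*grouping == CHAR_MAX)
        return 0;                /* CHAR_MAX element: no further grouping */
    return (int) *grouping;      /* the width itself, e.g. 3 */
}
```

In real use this would be called as `first_group_width(localeconv()->grouping)`; the pre-fix bug was effectively running the string through atoi(), which misreads every locale whose grouping bytes are raw integers.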
  

Fix psql's code for locale-aware formatting of numeric output.

  
commit   : 596c9e9efdbef0b0a597f3c8df478e51a50729e3    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 24 Sep 2015 23:01:04 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 24 Sep 2015 23:01:04 -0400    


  
This code did the wrong thing entirely for numbers with an exponent  
but no decimal point (e.g., '1e6'), as reported by Jeff Janes in  
bug #13636.  More generally, it made lots of unverified assumptions  
about what the input string could possibly look like.  Rearrange so  
that it only fools with leading digits that it's directly verified  
are there, and an immediately adjacent decimal point.  While at it,  
get rid of some useless inefficiencies, like converting the grouping  
count string to integer over and over (and over).  
  
This has been broken for a long time, so back-patch to all supported  
branches.  
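A sketch of the repaired approach: group only the leading run of digits that has been directly verified, stop at the first non-digit (decimal point, exponent marker, anything else), and copy the rest verbatim, so an input like '1e6' passes through untouched. This is an illustrative stand-alone function, not psql's actual routine:

```c
#include <ctype.h>
#include <stdio.h>
#include <string.h>

/* Insert 'sep' between groups of 'width' digits, touching only the
 * verified leading digit run of 'in'; everything after it is copied
 * unchanged. */
void group_digits(const char *in, char *out, size_t outsize,
                  char sep, int width)
{
    size_t ndigits = 0;
    size_t o = 0;

    while (isdigit((unsigned char) in[ndigits]))
        ndigits++;               /* verified leading digits only */

    for (size_t i = 0; i < ndigits && o + 2 < outsize; i++)
    {
        size_t remaining = ndigits - i;
        if (i > 0 && width > 0 && remaining % width == 0)
            out[o++] = sep;      /* separator falls on a group boundary */
        out[o++] = in[i];
    }
    /* decimal point, exponent, and anything else pass through as-is */
    snprintf(out + o, outsize - o, "%s", in + ndigits);
}
```

Working only on characters proven to be digits is what removes the "unverified assumptions about what the input string could possibly look like."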
  

Lower *_freeze_max_age minimum values.

  
commit   : f12932dd43512c5be0a07e473be8c77bc39ea74b    
  
author   : Andres Freund <andres@anarazel.de>    
date     : Thu, 24 Sep 2015 14:53:33 +0200    
  
committer: Andres Freund <andres@anarazel.de>    
date     : Thu, 24 Sep 2015 14:53:33 +0200    


  
The old minimum values are rather large, making it time consuming to  
test related behaviour. Additionally the current limits, especially for  
multixacts, can be problematic in space-constrained systems. 10000000  
multixacts can contain a lot of members.  
  
Since there's no good reason for the current limits, lower them a good  
bit. Setting them to 0 would be a bad idea, triggering endless vacuums,  
so still retain a limit.  
  
While at it fix autovacuum_multixact_freeze_max_age to refer to  
multixact.c instead of varsup.c.  
  
Reviewed-By: Robert Haas  
Discussion: CA+TgmoYmQPHcrc3GSs7vwvrbTkbcGD9Gik=OztbDGGrovkkEzQ@mail.gmail.com  
Backpatch: 9.0 (in parts)  
  

Fix sepgsql regression tests (9.2-only patch).

  
commit   : e90a629e126b9459b1f1d8ee8aa8c8598dc36b16    
  
author   : Joe Conway <mail@joeconway.com>    
date     : Tue, 22 Sep 2015 14:58:38 -0700    
  
committer: Joe Conway <mail@joeconway.com>    
date     : Tue, 22 Sep 2015 14:58:38 -0700    


  
The regression tests for sepgsql were broken by changes in the  
base distro as-shipped policies. Specifically, definition of  
unconfined_t in the system default policy was changed to bypass  
multi-category rules, which the regression test depended on.  
Fix that by defining a custom privileged domain  
(sepgsql_regtest_superuser_t) and using it instead of system's  
unconfined_t domain. The new sepgsql_regtest_superuser_t domain  
performs almost like the current unconfined_t, but restricted by  
multi-category policy as the traditional unconfined_t was.  
  
The custom policy module is a self defined domain, and so should not  
be affected by related future system policy changes. However, it still  
uses the unconfined_u:unconfined_r pair for selinux-user and role.  
Those definitions have not been changed for several years and seem  
less risky to rely on than the unconfined_t domain. Additionally, if  
we define custom user/role, they would need to be manually defined  
at the operating system level, adding more complexity to an already  
non-standard and complex regression test.  
  
Applies only to 9.2. Unlike the previous similar patch, commit 794e2558b,  
this also fixes a bug related to processing SELECT INTO statement.  
Because v9.2 didn't have ObjectAccessPostCreate to inform the context  
when a relation is newly created, sepgsql had an alternative method.  
However, related code in sepgsql_object_access() neglected to consider  
T_CreateTableAsStmt, thus no label was assigned on the new relation.  
This logic was removed and replaced starting in 9.3.  
  
Patch by Kohei KaiGai.  
  

Docs: fix typo in to_char() example.

  
commit   : 11b44d1cf65bcd59f0a827e1ffab1f1bba1cd1e2    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 22 Sep 2015 10:40:25 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 22 Sep 2015 10:40:25 -0400    


  
Per bug #13631 from KOIZUMI Satoru.  
  

Fix possible internal overflow in numeric multiplication.

  
commit   : 844486216eeb5b483c7b45c13b05dce76387c69a    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 21 Sep 2015 12:11:32 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 21 Sep 2015 12:11:32 -0400    


  
mul_var() postpones propagating carries until it risks overflow in its  
internal digit array.  However, the logic failed to account for the  
possibility of overflow in the carry propagation step, allowing wrong  
results to be generated in corner cases.  We must slightly reduce the  
when-to-propagate-carries threshold to avoid that.  
  
Discovered and fixed by Dean Rasheed, with small adjustments by me.  
  
This has been wrong since commit d72f6c75038d8d37e64a29a04b911f728044d83b,  
so back-patch to all supported branches.  
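The overflow arithmetic behind the threshold can be seen in miniature. With base-10000 digits (numeric.c's NBASE), the largest single digit product is (NBASE-1)^2, and a 32-bit accumulator can absorb only a limited number of those before wrapping; the fix amounts to reserving headroom so the carry-propagation step itself cannot push an accumulator over the edge. The function below is a back-of-envelope illustration, not the code in numeric.c:

```c
#include <stdint.h>

#define NBASE 10000

/* How many worst-case digit products (NBASE-1)*(NBASE-1) fit in an
 * int32 accumulator, minus 'headroom' reserved for carries that the
 * propagation step may add back? */
int32_t safe_terms(int32_t headroom)
{
    const int64_t maxprod = (int64_t)(NBASE - 1) * (NBASE - 1);

    return (int32_t)(INT32_MAX / maxprod) - headroom;
}
```

With maxprod = 9999 * 9999 = 99980001, INT32_MAX / maxprod is 21, so reserving one term of headroom lowers the when-to-propagate threshold from 21 accumulated products to 20, which is the flavor of adjustment the commit describes.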
  

Restrict file mode creation mask during tmpfile().

  
commit   : c94b65f677875140b019bec1f7dc07bd2e14d45b    
  
author   : Noah Misch <noah@leadboat.com>    
date     : Sun, 20 Sep 2015 20:42:27 -0400    
  
committer: Noah Misch <noah@leadboat.com>    
date     : Sun, 20 Sep 2015 20:42:27 -0400    


  
Per Coverity.  Back-patch to 9.0 (all supported versions).  
  
Michael Paquier, reviewed (in earlier versions) by Heikki Linnakangas.  
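The pattern here: tmpfile() creates its file with modes influenced by the process file mode creation mask, so the mask is tightened to owner-only around the call and then restored. A minimal POSIX sketch of that wrapper (the function name is illustrative; Windows needs a different mechanism):

```c
#include <stdio.h>
#include <sys/stat.h>
#include <sys/types.h>

/* Create a temporary file while the umask is restricted to the owner. */
FILE *tmpfile_private(void)
{
    mode_t oldmask = umask(S_IRWXG | S_IRWXO);  /* 077: owner-only */
    FILE  *fp = tmpfile();

    umask(oldmask);                             /* restore promptly */
    return fp;
}
```

Restoring the saved mask immediately keeps the restriction scoped to the one call rather than leaking into later file creation.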
  

Be more wary about partially-valid LOCALLOCK data in RemoveLocalLock().

  
commit   : ac0c71228fe7e44d9eba130df9bbbb5c2bfa9faa    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sun, 20 Sep 2015 16:48:44 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sun, 20 Sep 2015 16:48:44 -0400    


  
RemoveLocalLock() must consider the possibility that LockAcquireExtended()  
failed to palloc the initial space for a locallock's lockOwners array.  
I had evidently meant to cope with this hazard when the code was originally  
written (commit 1785acebf2ed14fd66955e2d9a55d77a025f418d), but missed that  
the pfree needed to be protected with an if-test.  Just to make sure things  
are left in a clean state, reset numLockOwners as well.  
  
Per low-memory testing by Andreas Seltenreich.  Back-patch to all supported  
branches.  
  

Let compiler handle size calculation of bool types.

  
commit   : afca291cc258597098fd01c899a4da49d3896562    
  
author   : Michael Meskes <meskes@postgresql.org>    
date     : Thu, 17 Sep 2015 15:41:04 +0200    
  
committer: Michael Meskes <meskes@postgresql.org>    
date     : Thu, 17 Sep 2015 15:41:04 +0200    


  
Back in the day this did not work, but modern compilers should handle it themselves.  
  

Fix low-probability memory leak in regex execution.

  
commit   : dc4e8c10169e094f4241efa586459a812bdfd393    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 18 Sep 2015 13:55:17 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 18 Sep 2015 13:55:17 -0400    


  
After an internal failure in shortest() or longest() while pinning down the  
exact location of a match, find() forgot to free the DFA structure before  
returning.  This is pretty unlikely to occur, since we just successfully  
ran the "search" variant of the DFA; but it could happen, and it would  
result in a session-lifespan memory leak since this code uses malloc()  
directly.  Problem seems to have been aboriginal in Spencer's library,  
so back-patch all the way.  
  
In passing, correct a thinko in a comment I added awhile back about the  
meaning of the "ntree" field.  
  
I happened across these issues while comparing our code to Tcl's version  
of the library.  
  

Honour TEMP_CONFIG when testing pg_upgrade

  
commit   : e61fb6d542979bba84b9e8afa73015a142c42541    
  
author   : Andrew Dunstan <andrew@dunslane.net>    
date     : Thu, 17 Sep 2015 11:57:00 -0400    
  
committer: Andrew Dunstan <andrew@dunslane.net>    
date     : Thu, 17 Sep 2015 11:57:00 -0400    


  
This setting contains extra configuration for the temp instance, as used  
in pg_regress' --temp-config flag.  
  
Backpatch to 9.2 where test.sh was introduced.  
  

Fix documentation of regular expression character-entry escapes.

  
commit   : 11103c6d95e43bf720b2293cb4c4b2f6efc4947a    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Wed, 16 Sep 2015 14:50:12 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Wed, 16 Sep 2015 14:50:12 -0400    


  
The docs claimed that \uhhhh would be interpreted as a Unicode value  
regardless of the database encoding, but it's never been implemented  
that way: \uhhhh and \xhhhh actually mean exactly the same thing, namely  
the character that pg_mb2wchar translates to 0xhhhh.  Moreover we were  
falsely dismissive of the usefulness of Unicode code points above FFFF.  
Fix that.  
  
It's been like this for ages, so back-patch to all supported branches.  
  

Remove set-but-not-used variable.

  
commit   : 49232d4191149fd2955e8739a457d70228526dba    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sat, 12 Sep 2015 11:11:08 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sat, 12 Sep 2015 11:11:08 -0400    


  
In branches before 9.3, commit 8703059c6 caused join_is_legal()'s  
unique_ified variable to become unused, since its only remaining  
use is for LATERAL-related tests which don't exist pre-9.3.  
My compiler didn't complain about that, but Peter's does.  
  

pg_dump, pg_upgrade: allow postgres/template1 tablespace moves

  
commit   : befc63e849d24a4d4aacece42937f7600af7967f    
  
author   : Bruce Momjian <bruce@momjian.us>    
date     : Fri, 11 Sep 2015 15:51:10 -0400    
  
committer: Bruce Momjian <bruce@momjian.us>    
date     : Fri, 11 Sep 2015 15:51:10 -0400    

Click here for diff

  
Modify pg_dump to restore postgres/template1 databases to non-default  
tablespaces by switching out of the database to be moved, then switching  
back.  
  
Also, to handle cases where the old/new tablespaces might not match,  
fix pg_upgrade to process new/old tablespaces separately in all  
cases.  
  
Report by Marti Raudsepp  
  
Patch by Marti Raudsepp, me  
  
Backpatch through 9.0  
  

Fix setrefs.c comment properly.

  
commit   : fe146952547ce3e7b2e13f764d62e9c27d370793    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 10 Sep 2015 10:25:58 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 10 Sep 2015 10:25:58 -0400    

Click here for diff

  
The "typo" alleged in commit 1e460d4bd was actually a comment that was  
correct when written, but I missed updating it in commit b5282aa89.  
Use a slightly less specific (and hopefully more future-proof) description  
of what is collected.  Back-patch to 9.2 where that commit appeared, and  
revert the comment to its then-entirely-correct state before that.  
  

Fix typo in setrefs.c

  
commit   : f4ea1b35d56e84d7eb387a0226f6eed32a8a6cb4    
  
author   : Stephen Frost <sfrost@snowman.net>    
date     : Thu, 10 Sep 2015 09:22:38 -0400    
  
committer: Stephen Frost <sfrost@snowman.net>    
date     : Thu, 10 Sep 2015 09:22:38 -0400    

Click here for diff

  
We're adding OIDs, not TIDs, to invalItems.  
  
Pointed out by Etsuro Fujita.  
  
Back-patch to all supported branches.  
  

Fix minor bug in regexp makesearch() function.

  
commit   : 3718015e468855441672b0b341f075cd0ba0f726    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Wed, 9 Sep 2015 20:14:58 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Wed, 9 Sep 2015 20:14:58 -0400    

Click here for diff

  
The list-wrangling here was done wrong, allowing the same state to get  
put into the list twice.  The following loop then would clone it twice.  
The second clone would wind up with no inarcs, so that there was no  
observable misbehavior AFAICT, but a useless state in the finished NFA  
isn't an especially good thing.  
  

Remove files signaling a standby promotion request at postmaster startup

  
commit   : 67518a1415691f37e8bd0e93e01282f1d461ffa9    
  
author   : Fujii Masao <fujii@postgresql.org>    
date     : Wed, 9 Sep 2015 22:51:44 +0900    
  
committer: Fujii Masao <fujii@postgresql.org>    
date     : Wed, 9 Sep 2015 22:51:44 +0900    

Click here for diff

  
This commit makes postmaster forcibly remove the files signaling  
a standby promotion request. Otherwise, the existence of those files  
can trigger a promotion too early, whether a user wants that or not.  
  
This removal of files is usually unnecessary because they can exist  
only for a few moments during a standby promotion. However,  
there is a race condition: if pg_ctl promote is executed and creates  
the files during a promotion, the files can stay around even after  
the server is brought up as a new master. Then, if a new standby starts  
using a backup taken from that master, the files can exist  
at server startup and should be removed in order to avoid  
an unexpected promotion.  
  
Back-patch to 9.1 where the promote signal file was introduced.  
  
Problem reported by Feike Steenbergen.  
Original patch by Michael Paquier, modified by me.  
  
Discussion: 20150528100705.4686.91426@wrigleys.postgresql.org  
  

Add gin_fuzzy_search_limit to postgresql.conf.sample.

  
commit   : be975cf55755b37c3d517ca66ecc725dd7c0cf9c    
  
author   : Fujii Masao <fujii@postgresql.org>    
date     : Wed, 9 Sep 2015 02:25:50 +0900    
  
committer: Fujii Masao <fujii@postgresql.org>    
date     : Wed, 9 Sep 2015 02:25:50 +0900    

Click here for diff

  
This was forgotten in 8a3631f (commit that originally added the parameter)  
and 0ca9907 (commit that added the documentation later that year).  
  
Back-patch to all supported versions.  
  

Fix error message wording in previous sslinfo commit

  
commit   : 9202892cd206f3a706ae5f05ee6f2873515c2539    
  
author   : Alvaro Herrera <alvherre@alvh.no-ip.org>    
date     : Tue, 8 Sep 2015 11:10:20 -0300    
  
committer: Alvaro Herrera <alvherre@alvh.no-ip.org>    
date     : Tue, 8 Sep 2015 11:10:20 -0300    

Click here for diff

  
  

Add more sanity checks in contrib/sslinfo

  
commit   : 3660778bccb23f233596a1ea140d87e7213847d9    
  
author   : Alvaro Herrera <alvherre@alvh.no-ip.org>    
date     : Mon, 7 Sep 2015 19:18:29 -0300    
  
committer: Alvaro Herrera <alvherre@alvh.no-ip.org>    
date     : Mon, 7 Sep 2015 19:18:29 -0300    

Click here for diff

  
We were missing a few return checks on OpenSSL calls.  Should be pretty  
harmless, since we haven't seen any user reports about problems, and  
this is not a high-traffic module anyway; still, a bug is a bug, so  
backpatch this all the way back to 9.0.  
  
Author: Michael Paquier, while reviewing another sslinfo patch  
  

Change type of DOW/DOY to UNITS

  
commit   : af9d9e59c2d644159b906a278cbead0d0049ecd1    
  
author   : Greg Stark <stark@mit.edu>    
date     : Mon, 7 Sep 2015 13:35:09 +0100    
  
committer: Greg Stark <stark@mit.edu>    
date     : Mon, 7 Sep 2015 13:35:09 +0100    

Click here for diff

  
  

Make GIN’s cleanup pending list process interruptable

  
commit   : 3ffbc499419710054d1c96403f6d7edecd344994    
  
author   : Teodor Sigaev <teodor@sigaev.ru>    
date     : Mon, 7 Sep 2015 17:18:26 +0300    
  
committer: Teodor Sigaev <teodor@sigaev.ru>    
date     : Mon, 7 Sep 2015 17:18:26 +0300    

Click here for diff

  
The cleanup process can be invoked by ordinary insert/update and can take a lot  
of time. Add vacuum_delay_point() to make this process interruptable. Under  
vacuum this call will also throttle the vacuum process to decrease system load;  
called from insert/update it will not throttle, which keeps latency low.  
  
Backpatch for all supported branches.  
  
Jeff Janes <jeff.janes@gmail.com>  
  

Update site address of Snowball project

  
commit   : a196e50534e28ca4186a6c1095f9561d5cd1d661    
  
author   : Teodor Sigaev <teodor@sigaev.ru>    
date     : Mon, 7 Sep 2015 15:22:07 +0300    
  
committer: Teodor Sigaev <teodor@sigaev.ru>    
date     : Mon, 7 Sep 2015 15:22:07 +0300    

Click here for diff

  
  

Move DTK_ISODOW, DTK_DOW and DTK_DOY to be type UNITS rather than RESERV. RESERV is meant for tokens like “now”, and having them in that category throws errors like these when used as an input date:

  
commit   : f4afbe0653022b4689e0732e308810373f6d1c46    
  
author   : Greg Stark <stark@mit.edu>    
date     : Sun, 6 Sep 2015 02:04:37 +0100    
  
committer: Greg Stark <stark@mit.edu>    
date     : Sun, 6 Sep 2015 02:04:37 +0100    

Click here for diff

  
stark=# SELECT 'doy'::timestamptz;  
ERROR:  unexpected dtype 33 while parsing timestamptz "doy"  
LINE 1: SELECT 'doy'::timestamptz;  
               ^  
stark=# SELECT 'dow'::timestamptz;  
ERROR:  unexpected dtype 32 while parsing timestamptz "dow"  
LINE 1: SELECT 'dow'::timestamptz;  
               ^  
  
Found by LLVM's Libfuzzer  
  

Fix misc typos.

  
commit   : 68f4b68e4e94d93941be54000c7f3dd8e11f9dee    
  
author   : Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Sat, 5 Sep 2015 11:35:49 +0300    
  
committer: Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Sat, 5 Sep 2015 11:35:49 +0300    

Click here for diff

  
Oskari Saarenmaa. Backpatch to stable branches where applicable.  
  

Fix subtransaction cleanup after an outer-subtransaction portal fails.

  
commit   : 39ebb6466914632d69c669d6da74b614b1daf89f    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 4 Sep 2015 13:36:50 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 4 Sep 2015 13:36:50 -0400    

Click here for diff

  
Formerly, we treated only portals created in the current subtransaction as  
having failed during subtransaction abort.  However, if the error occurred  
while running a portal created in an outer subtransaction (ie, a cursor  
declared before the last savepoint), that has to be considered broken too.  
  
To allow reliable detection of which ones those are, add a bookkeeping  
field to struct Portal that tracks the innermost subtransaction in which  
each portal has actually been executed.  (Without this, we'd end up  
failing portals containing functions that had called the subtransaction,  
thereby breaking plpgsql exception blocks completely.)  
  
In addition, when we fail an outer-subtransaction Portal, transfer its  
resources into the subtransaction's resource owner, so that they're  
released early in cleanup of the subxact.  This fixes a problem reported by  
Jim Nasby in which a function executed in an outer-subtransaction cursor  
could cause an Assert failure or crash by referencing a relation created  
within the inner subtransaction.  
  
The proximate cause of the Assert failure is that AtEOSubXact_RelationCache  
assumed it could blow away a relcache entry without first checking that the  
entry had zero refcount.  That was a bad idea on its own terms, so add such  
a check there, and to the similar coding in AtEOXact_RelationCache.  This  
provides an independent safety measure in case there are still ways to  
provoke the situation despite the Portal-level changes.  
  
This has been broken since subtransactions were invented, so back-patch  
to all supported branches.  
  
Tom Lane and Michael Paquier  
  

Fix s_lock.h PPC assembly code to be compatible with native AIX assembler.

  
commit   : 472680c57705004f5eb1f9498c4cb26dc8d52268    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sat, 29 Aug 2015 16:09:25 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sat, 29 Aug 2015 16:09:25 -0400    

Click here for diff

  
On recent AIX it's necessary to configure gcc to use the native assembler  
(because the GNU assembler hasn't been updated to handle AIX 6+).  This  
caused PG builds to fail with assembler syntax errors, because we'd try  
to compile s_lock.h's gcc asm fragment for PPC, and that assembly code  
relied on GNU-style local labels.  We can't substitute normal labels  
because it would fail in any file containing more than one inlined use of  
tas().  Fortunately, that code is stable enough, and the PPC ISA is simple  
enough, that it doesn't seem like too much of a maintenance burden to just  
hand-code the branch offsets, removing the need for any labels.  
  
Note that the AIX assembler only accepts "$" for the location counter  
pseudo-symbol.  The usual GNU convention is "."; but it appears that all  
versions of gas for PPC also accept "$", so in theory this patch will not  
break any other PPC platforms.  
  
This has been reported by a few people, but Steve Underwood gets the credit  
for being the first to pursue the problem far enough to understand why it  
was failing.  Thanks also to Noah Misch for additional testing.  
  

  
commit   : 5690c13ca0887a7828d933323ee4faf2a6409b8f    
  
author   : Bruce Momjian <bruce@momjian.us>    
date     : Thu, 27 Aug 2015 13:43:10 -0400    
  
committer: Bruce Momjian <bruce@momjian.us>    
date     : Thu, 27 Aug 2015 13:43:10 -0400    

Click here for diff

  
This makes the parameter names match the documented prototype names.  
  
Report by Erwin Brandstetter  
  
Backpatch through 9.0  
  

Add a small cache of locks owned by a resource owner in ResourceOwner.

  
commit   : 0e933fdf9463857d4788e2e638affdde865855ce    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 27 Aug 2015 12:22:10 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 27 Aug 2015 12:22:10 -0400    

Click here for diff

  
Back-patch 9.3-era commit eeb6f37d89fc60c6449ca12ef9e91491069369cb, to  
improve the older branches' ability to cope with pg_dump dumping a large  
number of tables.  
  
I back-patched into 9.2 and 9.1, but not 9.0 as it would have required a  
significant amount of refactoring, thus negating the argument that this  
is by-now-well-tested code.  
  
Jeff Janes, reviewed by Amit Kapila and Heikki Linnakangas.  
  

Docs: be explicit about datatype matching for lead/lag functions.

  
commit   : 8cffc4f5bdfbd41e27680fca27d82ae939cdeda8    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 25 Aug 2015 19:12:34 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 25 Aug 2015 19:12:34 -0400    

Click here for diff

  
The default argument, if given, has to be of exactly the same datatype  
as the first argument; but this was not stated in so many words, and  
the error message you get about it might not lead your thought in the  
right direction.  Per bug #13587 from Robert McGehee.  
  
A quick scan says that these are the only two built-in functions with two  
anyelement arguments and no other polymorphic arguments.  There are plenty  
of cases of, eg, anyarray and anyelement, but those seem less likely to  
confuse.  For instance this doesn't seem terribly hard to figure out:  
"function array_remove(integer[], numeric) does not exist".  So I've  
contented myself with fixing these two cases.  
  

Avoid O(N^2) behavior when enlarging SPI tuple table in spi_printtup().

  
commit   : d951d6065dbf11cca35fd1f7741156d0c55dcd7e    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 21 Aug 2015 20:32:11 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 21 Aug 2015 20:32:11 -0400    

Click here for diff

  
For no obvious reason, spi_printtup() was coded to enlarge the tuple  
pointer table by just 256 slots at a time, rather than doubling the size at  
each reallocation, as is our usual habit.  For very large SPI results, this  
makes for O(N^2) time spent in repalloc(), which of course soon comes to  
dominate the runtime.  Use the standard doubling approach instead.  
  
This is a longstanding performance bug, so back-patch to all active  
branches.  
  
Neil Conway  
  

Fix plpython crash when returning string representation of a RECORD result.

  
commit   : dadef8af2e71368da8ad0212389771b1fce22ee2    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 21 Aug 2015 12:21:37 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 21 Aug 2015 12:21:37 -0400    

Click here for diff

  
PLyString_ToComposite() blithely overwrote proc->result.out.d, even though  
for a composite result type the other union variant proc->result.out.r is  
the one that should be valid.  This could result in a crash if out.r had  
in fact been filled in (proc->result.is_rowtype == 1) and then somebody  
later attempted to use that data; as per bug #13579 from Paweł Michalak.  
  
Just to add insult to injury, it didn't work for RECORD results anyway,  
because record_in() would refuse the case.  
  
Fix by doing the I/O function lookup in a local PLyTypeInfo variable,  
as we were doing already in PLyObject_ToComposite().  This is not a great  
technique because any fn_extra data allocated by the input function will  
be leaked permanently (thanks to using TopMemoryContext as fn_mcxt).  
But that's a pre-existing issue that is much less serious than a crash,  
so leave it to be fixed separately.  
  
This bug would be a potential security issue, except that plpython is  
only available to superusers and the crash requires coding the function  
in a way that didn't work before today's patches.  
  
Add regression test cases covering all the supported methods of converting  
composite results.  
  
Back-patch to 9.1 where the faulty coding was introduced.  
  

Allow record_in() and record_recv() to work for transient record types.

  
commit   : 2f1d558bcb7d41f185d33cefeae68edaac2671ff    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 21 Aug 2015 11:19:33 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 21 Aug 2015 11:19:33 -0400    

Click here for diff

  
If we have the typmod that identifies a registered record type, there's no  
reason that record_in() should refuse to perform input conversion for it.  
Now, in direct SQL usage, record_in() will always be passed typmod = -1  
with type OID RECORDOID, because no typmodin exists for type RECORD, so the  
case can't arise.  However, some InputFunctionCall users such as PLs may be  
able to supply the right typmod, so we should allow this to support them.  
  
Note: the previous coding and comment here predate commit 59c016aa9f490b53.  
There has been no case since 8.1 in which the passed type OID wouldn't be  
valid; and if it weren't, this error message wouldn't be apropos anyway.  
Better to let lookup_rowtype_tupdesc complain about it.  
  
Back-patch to 9.1, as this is necessary for my upcoming plpython fix.  
I'm committing it separately just to make it a bit more visible in the  
commit history.  
  

Fix a few bogus statement type names in plpgsql error messages.

  
commit   : fb41bf4b5169015169f41ed0a0d4f8ad1d823973    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 18 Aug 2015 19:22:38 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 18 Aug 2015 19:22:38 -0400    

Click here for diff

  
plpgsql's error location context messages ("PL/pgSQL function fn-name line  
line-no at stmt-type") would misreport a CONTINUE statement as being an  
EXIT, and misreport a MOVE statement as being a FETCH.  These are clear  
bugs that have been there a long time, so back-patch to all supported  
branches.  
  
In addition, in 9.5 and HEAD, change the description of EXECUTE from  
"EXECUTE statement" to just plain EXECUTE; there seems no good reason why  
this statement type should be described differently from others that have  
a well-defined head keyword.  And distinguish GET STACKED DIAGNOSTICS from  
plain GET DIAGNOSTICS.  These are a bit more of a judgment call, and also  
affect existing regression-test outputs, so I did not back-patch into  
stable branches.  
  
Pavel Stehule and Tom Lane  
  

Improve documentation about MVCC-unsafe utility commands.

  
commit   : ed51165907e5b413cf221bd82c7f4274bf6ab58f    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sat, 15 Aug 2015 13:30:16 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sat, 15 Aug 2015 13:30:16 -0400    

Click here for diff

  
The table-rewriting forms of ALTER TABLE are MVCC-unsafe, in much the same  
way as TRUNCATE, because they replace all rows of the table with newly-made  
rows with a new xmin.  (Ideally, concurrent transactions with old snapshots  
would continue to see the old table contents, but the data is not there  
anymore --- and if it were there, it would be inconsistent with the table's  
updated rowtype, so there would be serious implementation problems to fix.)  
This was nowhere documented though, and the problem was only documented for  
TRUNCATE in a note in the TRUNCATE reference page.  Create a new "Caveats"  
section in the MVCC chapter that can be home to this and other limitations  
on serializable consistency.  
  
In passing, fix a mistaken statement that VACUUM and CLUSTER would reclaim  
space occupied by a dropped column.  They don't reconstruct existing tuples  
so they couldn't do that.  
  
Back-patch to all supported branches.  
  

Don’t use ‘bool’ as a struct member name in help_config.c.

  
commit   : 86ec86c2a6b6694fc5e578a430163f4a900b8544    
  
author   : Andres Freund <andres@anarazel.de>    
date     : Wed, 12 Aug 2015 16:02:20 +0200    
  
committer: Andres Freund <andres@anarazel.de>    
date     : Wed, 12 Aug 2015 16:02:20 +0200    

Click here for diff

  
Doing so doesn't work if bool is a macro rather than a typedef.  
  
Although c.h spends some effort to support configurations where bool is  
a preexisting macro, help_config.c has existed this way since  
2003 (b700a6), and there have not been any reports of  
problems. Backpatch anyway since this is as riskless as it gets.  
  
Discussion: 20150812084351.GD8470@awork2.anarazel.de  
Backpatch: 9.0-master  
  

Improve regression test case to avoid depending on system catalog stats.

  
commit   : e8485478631583bddb46b118330df54909a8eee6    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 13 Aug 2015 13:25:02 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 13 Aug 2015 13:25:02 -0400    

Click here for diff

  
In commit 95f4e59c32866716 I added a regression test case that examined  
the plan of a query on system catalogs.  That isn't a terribly great idea  
because the catalogs tend to change from version to version, or even  
within a version if someone makes an unrelated regression-test change that  
populates the catalogs a bit differently.  Usually I try to make planner  
test cases rely on test tables that have not changed since Berkeley days,  
but I got sloppy in this case because the submitted crasher example queried  
the catalogs and I didn't spend enough time on rewriting it.  But it was a  
problem waiting to happen, as I was rudely reminded when I tried to port  
that patch into Salesforce's Postgres variant :-(.  So spend a little more  
effort and rewrite the query to not use any system catalogs.  I verified  
that this version still provokes the Assert if 95f4e59c32866716's code fix  
is reverted.  
  
I also removed the EXPLAIN output from the test, as it turns out that the  
assertion occurs while considering a plan that isn't the one ultimately  
selected anyway; so there's no value in risking any cross-platform  
variation in that printout.  
  
Back-patch to 9.2, like the previous patch.  
  

Fix declaration of isarray variable.

  
commit   : e8f417774e14f2deeffbf340b703c316fbc15ddd    
  
author   : Michael Meskes <meskes@postgresql.org>    
date     : Thu, 13 Aug 2015 13:22:29 +0200    
  
committer: Michael Meskes <meskes@postgresql.org>    
date     : Thu, 13 Aug 2015 13:22:29 +0200    

Click here for diff

  
Found and fixed by Andres Freund.  
  

  
commit   : 866197d828a85d80886871801b1d084dd3116936    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Wed, 12 Aug 2015 21:18:45 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Wed, 12 Aug 2015 21:18:45 -0400    

Click here for diff

  
One of the changes I made in commit 8703059c6b55c427 turns out not to have  
been such a good idea: we still need the exception in join_is_legal() that  
allows a join if both inputs already overlap the RHS of the special join  
we're checking.  Otherwise we can miss valid plans, and might indeed fail  
to find a plan at all, as in recent report from Andreas Seltenreich.  
  
That code was added way back in commit c17117649b9ae23d, but I failed to  
include a regression test case then; my bad.  Put it back with a better  
explanation, and a test this time.  The logic does end up a bit different  
than before though: I now believe it's appropriate to make this check  
first, thereby allowing such a case whether or not we'd consider the  
previous SJ(s) to commute with this one.  (Presumably, we already decided  
they did; but it was confusing to have this consideration in the middle  
of the code that was handling the other case.)  
  
Back-patch to all active branches, like the previous patch.  
  

This routine was calling ecpg_alloc to allocate memory but did not actually check the returned pointer, which could be NULL as the result of a failed malloc call.

  
commit   : e0ca86aa98de9c60815d3717b92b62a9f2c0c07d    
  
author   : Michael Meskes <meskes@postgresql.org>    
date     : Thu, 5 Feb 2015 15:12:34 +0100    
  
committer: Michael Meskes <meskes@postgresql.org>    
date     : Thu, 5 Feb 2015 15:12:34 +0100    

Click here for diff

  
Issue noted by Coverity, fixed by Michael Paquier <michael@otacoo.com>  
  

Fix some possible low-memory failures in regexp compilation.

  
commit   : 234205a2e34464a9aabc9ab5e65692652f8fc910    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Wed, 12 Aug 2015 00:48:11 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Wed, 12 Aug 2015 00:48:11 -0400    

Click here for diff

  
newnfa() failed to set the regex error state when malloc() fails.  
Several places in regcomp.c failed to check for an error after calling  
subre().  Each of these mistakes could lead to null-pointer-dereference  
crashes in memory-starved backends.  
  
Report and patch by Andreas Seltenreich.  Back-patch to all branches.  
  

Fix privilege dumping from servers too old to have that type of privilege.

  
commit   : be9ef396cb581256755594f98b34e9398c5f8d4f    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 10 Aug 2015 20:10:16 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 10 Aug 2015 20:10:16 -0400    

Click here for diff

  
pg_dump produced fairly silly GRANT/REVOKE commands when dumping types from  
pre-9.2 servers, and when dumping functions or procedural languages from  
pre-7.3 servers.  Those server versions lack the typacl, proacl, and/or  
lanacl columns respectively, and pg_dump substituted default values that  
were in fact incorrect.  We ended up revoking all the owner's own  
privileges for the object while granting all privileges to PUBLIC.  
Of course the owner would then have those privileges again via PUBLIC, so  
long as she did not try to revoke PUBLIC's privileges; which may explain  
the lack of field reports.  Nonetheless this is pretty silly behavior.  
  
The stakes were raised by my recent patch to make pg_dump dump shell types,  
because 9.2 and up pg_dump would proceed to emit bogus GRANT/REVOKE  
commands for a shell type if dumping from a pre-9.2 server; and the server  
will not accept GRANT/REVOKE commands for a shell type.  (Perhaps it  
should, but that's a topic for another day.)  So the resulting dump script  
wouldn't load without errors.  
  
The right thing to do is to act as though these objects have default  
privileges (null ACL entries), which causes pg_dump to print no  
GRANT/REVOKE commands at all for them.  That fixes the silly results  
and also dodges the problem with shell types.  
  
In passing, modify getProcLangs() to be less creatively different about  
how to handle missing columns when dumping from older server versions.  
Every other data-acquisition function in pg_dump does that by substituting  
appropriate default values in the version-specific SQL commands, and I see  
no reason why this one should march to its own drummer.  Its use of  
"SELECT *" was likewise not conformant with anyplace else, not to mention  
it's not considered good SQL style for production queries.  
  
Back-patch to all supported versions.  Although 9.0 and 9.1 pg_dump don't  
have the issue with typacl, they are more likely than newer versions to be  
used to dump from ancient servers, so we ought to fix the proacl/lanacl  
issues all the way back.  
  

Accept alternate spellings of __sparcv7 and __sparcv8.

  
commit   : fe460461d99df9162d3ef59b0f1a15954e3a620e    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 10 Aug 2015 17:34:51 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 10 Aug 2015 17:34:51 -0400    

Click here for diff

  
Apparently some versions of gcc prefer __sparc_v7__ and __sparc_v8__.  
Per report from Waldemar Brodkorb.  
  

  
commit   : 54cea765c633360dbb355b942f1144442176035b    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 10 Aug 2015 17:18:17 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 10 Aug 2015 17:18:17 -0400    

Click here for diff

  
Commit 85e5e222b1dd02f135a8c3bf387d0d6d88e669bd turns out not to have taken  
care of all cases of the partially-evaluatable-PlaceHolderVar problem found  
by Andreas Seltenreich's fuzz testing.  I had set it up to check for risky  
PHVs only in the event that we were making a star-schema-based exception to  
the param_source_rels join ordering heuristic.  However, it turns out that  
the problem can occur even in joins that satisfy the param_source_rels  
heuristic, in which case allow_star_schema_join() isn't consulted.  
Refactor so that we check for risky PHVs whenever the proposed join has  
any remaining parameterization.  
  
Back-patch to 9.2, like the previous patch (except for the regression test  
case, which only works back to 9.3 because it uses LATERAL).  
  
Note that this discovery implies that problems of this sort could've  
occurred in 9.2 and up even before the star-schema patch; though I've not  
tried to prove that experimentally.  
  

Further fixes for degenerate outer join clauses.

  
commit   : 754ece936ca0938dcb149f7617df14b202034e28    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 6 Aug 2015 15:35:27 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 6 Aug 2015 15:35:27 -0400    

Click here for diff

  
Further testing revealed that commit f69b4b9495269cc4 was still a few  
bricks shy of a load: minor tweaking of the previous test cases resulted  
in the same wrong-outer-join-order problem coming back.  After study  
I concluded that my previous changes in make_outerjoininfo() were just  
accidentally masking the problem, and should be reverted in favor of  
forcing syntactic join order whenever an upper outer join's predicate  
doesn't mention a lower outer join's LHS.  This still allows the  
chained-outer-joins style that is the normally optimizable case.  
  
I also tightened things up some more in join_is_legal().  It seems to me  
on review that what's really happening in the exception case where we  
ignore a mismatched special join is that we're allowing the proposed join  
to associate into the RHS of the outer join we're comparing it to.  As  
such, we should *always* insist that the proposed join be a left join,  
which eliminates a bunch of rather dubious argumentation.  The case where  
we weren't enforcing that was the one that was already known buggy anyway  
(it had a violatable Assert before the aforesaid commit) so it hardly  
deserves a lot of deference.  
  
Back-patch to all active branches, like the previous patch.  The added  
regression test case failed in all branches back to 9.1, and I think it's  
only an unrelated change in costing calculations that kept 9.0 from  
choosing a broken plan.  
  

Make real sure we don’t reassociate joins into or out of SEMI/ANTI joins.

  
commit   : 08dee567ed6b168e5646718da0279c46ede77ffc    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Wed, 5 Aug 2015 14:39:07 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Wed, 5 Aug 2015 14:39:07 -0400    

Click here for diff

  
Per the discussion in optimizer/README, it's unsafe to reassociate anything  
into or out of the RHS of a SEMI or ANTI join.  An example from Piotr  
Stefaniak showed that join_is_legal() wasn't sufficiently enforcing this  
rule, so lock it down a little harder.  
  
I couldn't find a reasonably simple example of the optimizer trying to  
do this, so no new regression test.  (Piotr's example involved the random  
search in GEQO accidentally trying an invalid case and triggering a sanity  
check way downstream in clause selectivity estimation, which did not seem  
like a sequence of events that would be useful to memorialize in a  
regression test as-is.)  
  
Back-patch to all active branches.  
  

Docs: add an explicit example about controlling overall greediness of REs.

  
commit   : 4eb4e71119780a15168d03c933f41bc04b1ecd4e    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 4 Aug 2015 21:09:12 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 4 Aug 2015 21:09:12 -0400    

Click here for diff

  
Per discussion of bug #13538.  
  

Fix pg_dump to dump shell types.

  
commit   : dae6e4601289218ae2326f9be1ecffdac70a1dc0    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 4 Aug 2015 19:34:12 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 4 Aug 2015 19:34:12 -0400    

Click here for diff

  
Per discussion, it really ought to do this.  The original choice to  
exclude shell types was probably made in the dark ages before we made  
it harder to accidentally create shell types; but that was in 7.3.  
  
Also, cause the standard regression tests to leave a shell type behind,  
for convenience in testing the case in pg_dump and pg_upgrade.  
  
Back-patch to all supported branches.  
  

Fix bogus "out of memory" reports in tuplestore.c.

  
commit   : b6659a3b9eead7e2a2e8eee6152720ec803b5ac2    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 4 Aug 2015 18:18:46 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 4 Aug 2015 18:18:46 -0400    

Click here for diff

  
The tuplesort/tuplestore memory management logic assumed that the chunk  
allocation overhead for its memtuples array could not increase when  
increasing the array size.  This is and always was true for tuplesort,  
but we (I, I think) blindly copied that logic into tuplestore.c without  
noticing that the assumption failed to hold for the much smaller array  
elements used by tuplestore.  Given rather small work_mem, this could  
result in an improper complaint about "unexpected out-of-memory situation",  
as reported by Brent DeSpain in bug #13530.  
  
The easiest way to fix this is just to increase tuplestore's initial  
array size so that the assumption holds.  Rather than relying on magic  
constants, though, let's export a #define from aset.c that represents  
the safe allocation threshold, and make tuplestore's calculation depend  
on that.  
  
Do the same in tuplesort.c to keep the logic looking parallel, even though  
tuplesort.c isn't actually at risk at present.  This will keep us from  
breaking it if we ever muck with the allocation parameters in aset.c.  
  
Back-patch to all supported versions.  The error message doesn't occur  
pre-9.3, not so much because the problem can't happen as because the  
pre-9.3 tuplestore code neglected to check for it.  (The chance of  
trouble is a great deal larger as of 9.3, though, due to changes in the  
array-size-increasing strategy.)  However, allowing LACKMEM() to become  
true unexpectedly could still result in less-than-desirable behavior,  
so let's patch it all the way back.  
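A toy model makes the failed assumption concrete (the rounding rule and threshold constant here are an illustrative simplification of aset.c's behavior, not its exact code):

```python
# Toy model of palloc chunk sizing (illustrative, not aset.c's exact rules):
# requests up to a threshold are rounded to the next power of two; larger
# requests get a dedicated, exact-sized block.
ALLOC_CHUNK_LIMIT = 8 * 1024

def allocated(request: int) -> int:
    if request > ALLOC_CHUNK_LIMIT:
        return request              # separate block: zero rounding overhead
    size = 1
    while size < request:
        size <<= 1
    return size                     # rounded up to a power of two

# Above the threshold, growing the array never adds rounding overhead --
# the regime tuplesort's large elements always kept it in.
assert allocated(16 * 1024) == 16 * 1024
# Below it, overhead varies with the request size, so "overhead cannot grow
# when the array grows" fails for tuplestore's much smaller elements.
assert allocated(5 * 1024) - 5 * 1024 == 3 * 1024
```

Starting the array above the threshold keeps every growth step in the exact-size regime, which is the essence of the fix described above.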
  

  
commit   : 359016d2e97338724b0c649d5d2fd7120f8a4581    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 4 Aug 2015 14:55:33 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 4 Aug 2015 14:55:33 -0400    

Click here for diff

  
In commit b514a7460d9127ddda6598307272c701cbb133b7, I changed the planner  
so that it would allow nestloop paths to remain partially parameterized,  
ie the inner relation might need parameters from both the current outer  
relation and some upper-level outer relation.  That's fine so long as we're  
talking about distinct parameters; but the patch also allowed creation of  
nestloop paths for cases where the inner relation's parameter was a  
PlaceHolderVar whose eval_at set included the current outer relation and  
some upper-level one.  That does *not* work.  
  
In principle we could allow such a PlaceHolderVar to be evaluated at the  
lower join node using values passed down from the upper relation along with  
values from the join's own outer relation.  However, nodeNestloop.c only  
supports simple Vars not arbitrary expressions as nestloop parameters.  
createplan.c is also a few bricks shy of being able to handle such cases;  
it misplaces the PlaceHolderVar parameters in the plan tree, which is why  
the visible symptoms of this bug are "plan should not reference subplan's  
variable" and "failed to assign all NestLoopParams to plan nodes" planner  
errors.  
  
Adding the necessary complexity to make this work doesn't seem like it  
would be repaid in significantly better plans, because in cases where such  
a PHV exists, there is probably a corresponding join order constraint that  
would allow a good plan to be found without using the star-schema exception.  
Furthermore, adding complexity to nodeNestloop.c would create a run-time  
penalty even for plans where this whole consideration is irrelevant.  
So let's just reject such paths instead.  
  
Per fuzz testing by Andreas Seltenreich; the added regression test is based  
on his example query.  Back-patch to 9.2, like the previous patch.  
  

Cap wal_buffers to avoid a server crash when it's set very large.

  
commit   : 5ef8e1114774ea65eb03f48846e1d708ca8da4be    
  
author   : Robert Haas <rhaas@postgresql.org>    
date     : Tue, 4 Aug 2015 12:58:54 -0400    
  
committer: Robert Haas <rhaas@postgresql.org>    
date     : Tue, 4 Aug 2015 12:58:54 -0400    

Click here for diff

  
It must be possible to multiply wal_buffers by XLOG_BLCKSZ without  
overflowing int, or calculations in StartupXLOG will go badly wrong  
and crash the server.  Avoid that by imposing a maximum value on  
wal_buffers.  This will be just under 2GB, assuming the usual value  
for XLOG_BLCKSZ.  
  
Josh Berkus, per an analysis by Andrew Gierth.  
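The hazard can be sketched as follows (assuming the usual XLOG_BLCKSZ of 8192 bytes and 32-bit signed int arithmetic; this is an illustration of the overflow, not the GUC code itself):

```python
# Why wal_buffers needs a cap: wal_buffers * XLOG_BLCKSZ is computed in
# C "int" arithmetic, which wraps around at 2^31 - 1.
INT_MAX = 2**31 - 1
XLOG_BLCKSZ = 8192

# Safe maximum: the largest buffer count whose byte total still fits in int.
max_wal_buffers = INT_MAX // XLOG_BLCKSZ   # just under 2GB of WAL buffers

def to_int32(n: int) -> int:
    """Emulate wraparound of a 32-bit signed multiply."""
    n &= 0xFFFFFFFF
    return n - 2**32 if n >= 2**31 else n

assert max_wal_buffers * XLOG_BLCKSZ <= INT_MAX            # fits
assert to_int32((max_wal_buffers + 1) * XLOG_BLCKSZ) < 0   # wraps negative
```

One buffer more and the byte total wraps to a negative size, which is what sent StartupXLOG's calculations off the rails.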
  

contrib/isn now needs a .gitignore file.

  
commit   : 121869fe41a73ae4381b4d1342b0f9f114866326    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sun, 2 Aug 2015 23:57:32 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sun, 2 Aug 2015 23:57:32 -0400    

Click here for diff

  
Oversight in commit cb3384a0cb4cf900622b77865f60e31259923079.  
Back-patch to 9.1, like that commit.  
  

Fix output of ISBN-13 numbers beginning with 979.

  
commit   : 56187c6fb1da0cd5e9b466147595872cfff4b908    
  
author   : Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Sun, 2 Aug 2015 22:12:33 +0300    
  
committer: Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Sun, 2 Aug 2015 22:12:33 +0300    

Click here for diff

  
EANs beginning with 979 (but not 9790 - those are ISMNs) are accepted  
as ISBN numbers, but they cannot be represented in the old, 10-digit ISBN  
format. They must be output in the new 13-digit ISBN-13 format. We printed  
out an incorrect value for those.  
  
Also add a regression test, to test this and some other basic functionality  
of the module.  
  
Patch by Fabien Coelho. This fixes bug #13442, reported by B.Z. Backpatch  
to 9.1, where we started to recognize ISBN-13 numbers.  
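For illustration, the EAN-13 check-digit rule such numbers follow can be sketched like this (the sample 979 digits below are made up, not a real ISBN):

```python
def ean13_check_digit(digits12: str) -> int:
    """EAN-13 check digit: weights alternate 1,3 over the first 12 digits."""
    s = sum(int(d) * (1 if i % 2 == 0 else 3) for i, d in enumerate(digits12))
    return (10 - s % 10) % 10

# A 979-prefixed ISBN-13 (not 9790, which is an ISMN prefix) has no 10-digit
# form: only the 978 prefix maps back to the old ISBN-10 format, so such
# values must be printed as full 13-digit ISBN-13 numbers.
first12 = "979856640500"          # hypothetical digits for illustration
isbn13 = first12 + str(ean13_check_digit(first12))
assert len(isbn13) == 13 and isbn13.startswith("979")
```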
  

Fix incorrect order of lock file removal and failure to close() sockets.

  
commit   : 20d1878b6a7f78a4a9ce3668c4588d92bc0af78d    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sun, 2 Aug 2015 14:54:44 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sun, 2 Aug 2015 14:54:44 -0400    

Click here for diff

  
Commit c9b0cbe98bd783e24a8c4d8d8ac472a494b81292 accidentally broke the  
order of operations during postmaster shutdown: it resulted in removing  
the per-socket lockfiles after, not before, postmaster.pid.  This creates  
a race-condition hazard for a new postmaster that's started immediately  
after observing that postmaster.pid has disappeared; if it sees the  
socket lockfile still present, it will quite properly refuse to start.  
This error appears to be the explanation for at least some of the  
intermittent buildfarm failures we've seen in the pg_upgrade test.  
  
Another problem, which has been there all along, is that the postmaster  
has never bothered to close() its listen sockets, but has just allowed them  
to close at process death.  This creates a different race condition for an  
incoming postmaster: it might be unable to bind to the desired listen  
address because the old postmaster is still incumbent.  This might explain  
some odd failures we've seen in the past, too.  (Note: this is not related  
to the fact that individual backends don't close their client communication  
sockets.  That behavior is intentional and is not changed by this patch.)  
  
Fix by adding an on_proc_exit function that closes the postmaster's ports  
explicitly, and (in 9.3 and up) reshuffling the responsibility for where  
to unlink the Unix socket files.  Lock file unlinking can stay where it  
is, but teach it to unlink the lock files in reverse order of creation.  
  

Fix some planner issues with degenerate outer join clauses.

  
commit   : 44618f92bb3f287124c616cc9d25cce5592c3d2b    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sat, 1 Aug 2015 20:57:41 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sat, 1 Aug 2015 20:57:41 -0400    

Click here for diff

  
An outer join clause that didn't actually reference the RHS (perhaps only  
after constant-folding) could confuse the join order enforcement logic,  
leading to wrong query results.  Also, nested occurrences of such things  
could trigger an Assertion that on reflection seems incorrect.  
  
Per fuzz testing by Andreas Seltenreich.  The practical use of such cases  
seems thin enough that it's not too surprising we've not heard field  
reports about it.  
  
This has been broken for a long time, so back-patch to all active branches.  
  

Avoid some zero-divide hazards in the planner.

  
commit   : c7d1712519085b46b084412631ab3a0c27a7a1a6    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 30 Jul 2015 12:11:23 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 30 Jul 2015 12:11:23 -0400    

Click here for diff

  
Although I think on all modern machines floating division by zero  
results in Infinity not SIGFPE, we still don't want infinities  
running around in the planner's costing estimates; too much risk  
of that leading to insane behavior.  
  
grouping_planner() failed to consider the possibility that final_rel  
might be known dummy and hence have zero rowcount.  (I wonder if it  
would be better to set a rows estimate of 1 for dummy relations?  
But at least in the back branches, changing this convention seems  
like a bad idea, so I'll leave that for another day.)  
  
Make certain that get_variable_numdistinct() produces a nonzero result.  
The case that can be shown to be broken is with stadistinct < 0.0 and  
small ntuples; we did not prevent the result from rounding to zero.  
For good luck I applied clamp_row_est() to all the nonconstant return  
values.  
  
In ExecChooseHashTableSize(), Assert that we compute positive nbuckets  
and nbatch.  I know of no reason to think this isn't the case, but it  
seems like a good safety check.  
  
Per reports from Piotr Stefaniak.  Back-patch to all active branches.  
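The clamping idea can be sketched as follows (a simplified Python stand-in for the C function; the upper bound and rounding details here are illustrative):

```python
MAX_ROWS = 1e100   # illustrative upper bound on any row estimate

def clamp_row_est(nrows: float) -> float:
    """Force a row estimate to be a sane, nonzero integral value, so that
    downstream divisions can never produce zero-divide or infinities."""
    if not nrows <= MAX_ROWS:      # also catches NaN
        return MAX_ROWS
    if nrows <= 1.0:
        return 1.0                 # dummy rels, tiny stadistinct, etc.
    return float(round(nrows))

assert clamp_row_est(0.0) == 1.0   # zero rowcount can no longer leak out
assert clamp_row_est(0.3) == 1.0
assert clamp_row_est(41.7) == 42.0
```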
  

Blacklist xlc 32-bit inlining.

  
commit   : 0a89f3bc6e9b1e911e9efd8132377fe5c6838c66    
  
author   : Noah Misch <noah@leadboat.com>    
date     : Wed, 29 Jul 2015 22:49:48 -0400    
  
committer: Noah Misch <noah@leadboat.com>    
date     : Wed, 29 Jul 2015 22:49:48 -0400    

Click here for diff

  
Per a suggestion from Tom Lane.  Back-patch to 9.0 (all supported  
versions).  While only 9.4 and up have code known to elicit this  
compiler bug, we were disabling inlining by accident until commit  
43d89a23d59c487bc9258fad7a6187864cb8c0c0.  
  

Update our documentation concerning where to create data directories.

  
commit   : 263f225965bd583bf403cbd9d5e1171f1e445bdf    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 28 Jul 2015 18:42:59 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 28 Jul 2015 18:42:59 -0400    

Click here for diff

  
Although initdb has long discouraged use of a filesystem mount-point  
directory as a PG data directory, this point was covered nowhere in the  
user-facing documentation.  Also, with the popularity of pg_upgrade,  
we really need to recommend that the PG user own not only the data  
directory but its parent directory too.  (Without a writable parent  
directory, operations such as "mv data data.old" fail immediately.  
pg_upgrade itself doesn't do that, but wrapper scripts for it often do.)  
  
Hence, adjust the "Creating a Database Cluster" section to address  
these points.  I also took the liberty of wordsmithing the discussion  
of NFS a bit.  
  
These considerations aren't by any means new, so back-patch to all  
supported branches.  
  

Reduce chatter from signaling of autovacuum workers.

  
commit   : 1a2f95630d4897b81feb299bdf70d10c3c2654c4    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 28 Jul 2015 17:34:00 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 28 Jul 2015 17:34:00 -0400    

Click here for diff

  
Don't print a WARNING if we get ESRCH from a kill() that's attempting  
to cancel an autovacuum worker.  It's possible (and has been seen in the  
buildfarm) that the worker is already gone by the time we are able to  
execute the kill, in which case the failure is harmless.  About the only  
plausible reason for reporting such cases would be to help debug corrupted  
lock table contents, but this is hardly likely to be the most important  
symptom if that happens.  Moreover issuing a WARNING might scare users  
more than is warranted.  
  
Also, since sending a signal to an autovacuum worker is now entirely a  
routine thing, and the worker will log the query cancel on its end anyway,  
reduce the message saying we're doing that from LOG to DEBUG1 level.  
  
Very minor cosmetic cleanup as well.  
  
Since the main practical reason for doing this is to avoid unnecessary  
buildfarm failures, back-patch to all active branches.  
  

Disable ssl renegotiation by default.

  
commit   : 2f91e7bb5643b3a828ee62dee51893a48fe1d080    
  
author   : Andres Freund <andres@anarazel.de>    
date     : Tue, 28 Jul 2015 21:39:40 +0200    
  
committer: Andres Freund <andres@anarazel.de>    
date     : Tue, 28 Jul 2015 21:39:40 +0200    

Click here for diff

  
While postgres' use of SSL renegotiation is a good idea in theory, it  
turned out to not work well in practice. The specification and openssl's  
implementation of it have led to several security issues. Postgres' use  
of renegotiation also had its share of bugs.  
  
Additionally OpenSSL has a bunch of bugs around renegotiation, reported  
and open for years, that regularly lead to connections breaking with  
obscure error messages. We tried increasingly complex workarounds to get  
around these bugs, but we didn't find anything complete.  
  
Since these connection breakages often lead to hard to debug problems,  
e.g. spuriously failing base backups and significant latency spikes when  
synchronous replication is used, we have decided to change the default  
setting for ssl renegotiation to 0 (disabled) in the released  
back branches and remove it entirely in 9.5 and master.  
  
Author: Michael Paquier, with changes by me  
Discussion: 20150624144148.GQ4797@alap3.anarazel.de  
Backpatch: 9.0-9.4; 9.5 and master get a different patch  
  

Remove an unsafe Assert, and explain join_clause_is_movable_into() better.

  
commit   : d6a8a29ac2bca06d4466e449f13d0cc52220714b    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 28 Jul 2015 13:20:40 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 28 Jul 2015 13:20:40 -0400    

Click here for diff

  
join_clause_is_movable_into() is approximate, in the sense that it might  
sometimes return "false" when actually it would be valid to push the given  
join clause down to the specified level.  This is okay ... but there was  
an Assert in get_joinrel_parampathinfo() that's only safe if the answers  
are always exact.  Comment out the Assert, and add a bunch of commentary  
to clarify what's going on.  
  
Per fuzz testing by Andreas Seltenreich.  The added regression test is  
a pretty silly query, but it's based on his crasher example.  
  
Back-patch to 9.2 where the faulty logic was introduced.  
  

Don't assume that PageIsEmpty() returns true on an all-zeros page.

  
commit   : 0abe9554fc793f05d5c7f18b4b6682d9f1154922    
  
author   : Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Mon, 27 Jul 2015 18:54:09 +0300    
  
committer: Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Mon, 27 Jul 2015 18:54:09 +0300    

Click here for diff

  
It does currently, and I don't see us changing that any time soon, but we  
don't make that assumption anywhere else.  
  
Per Tom Lane's suggestion. Backpatch to 9.2, like the previous patch that  
added this assumption.  
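The accident being relied on can be sketched with the two page-header fields involved (layout simplified; the header-size constant matches the usual value but is illustrative here):

```python
# Sketch of why PageIsEmpty() happens to pass on an all-zeros page:
# zeroed memory gives pd_lower == 0, which satisfies the "no line pointers
# yet" test -- an accident of representation, not a guarantee.
SIZE_OF_PAGE_HEADER = 24           # usual SizeOfPageHeaderData

def page_is_new(pd_upper: int) -> bool:
    return pd_upper == 0           # the explicit test for uninitialized pages

def page_is_empty(pd_lower: int) -> bool:
    return pd_lower <= SIZE_OF_PAGE_HEADER

# An all-zeros page: both fields read as 0.
assert page_is_new(0)              # the check the code should rely on
assert page_is_empty(0)            # also true today, but only by accident
```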
  

Reuse all-zero pages in GIN.

  
commit   : 7658368cfd5b5a010e57f8eec6cc4c6e19ee635b    
  
author   : Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Mon, 27 Jul 2015 12:30:26 +0300    
  
committer: Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Mon, 27 Jul 2015 12:30:26 +0300    

Click here for diff

  
In GIN, an all-zeros page would be leaked forever, and never reused. Just  
add them to the FSM in vacuum, and they will be reinitialized when grabbed  
from the FSM. On master and 9.5, attempting to access the page's opaque  
struct also caused an assertion failure, although that was otherwise  
harmless.  
  
Reported by Jeff Janes. Backpatch to all supported versions.  
  

Fix handling of all-zero pages in SP-GiST vacuum.

  
commit   : f4297f8c5fd457a54671947023f8b56237b952db    
  
author   : Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Mon, 27 Jul 2015 12:28:21 +0300    
  
committer: Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Mon, 27 Jul 2015 12:28:21 +0300    

Click here for diff

  
SP-GiST initialized an all-zeros page at vacuum, but that was not  
WAL-logged, which is not safe. You might get a torn page write, when it gets  
flushed to disk, and end-up with a half-initialized index page. To fix,  
leave it in the all-zeros state, and add it to the FSM. It will be  
initialized when reused. Also don't set the page-deleted flag when recycling  
an empty page. That was also not WAL-logged, and a torn write of that would  
cause the page to have an invalid checksum.  
  
Backpatch to 9.2, where SP-GiST indexes were added.  
  

Make entirely-dummy appendrels get marked as such in set_append_rel_size.

  
commit   : 16b2a50187ab2b4b1a0c2f5798b6cfaaebd93ed0    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sun, 26 Jul 2015 16:19:09 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sun, 26 Jul 2015 16:19:09 -0400    

Click here for diff

  
The planner generally expects that the estimated rowcount of any relation  
is at least one row, *unless* it has been proven empty by constraint  
exclusion or similar mechanisms, which is marked by installing a dummy path  
as the rel's cheapest path (cf. IS_DUMMY_REL).  When I split up  
allpaths.c's processing of base rels into separate set_base_rel_sizes and  
set_base_rel_pathlists steps, the intention was that dummy rels would get  
marked as such during the "set size" step; this is what justifies an Assert  
in indxpath.c's get_loop_count that other relations should either be dummy  
or have positive rowcount.  Unfortunately I didn't get that quite right  
for append relations: if all the child rels have been proven empty then  
set_append_rel_size would come up with a rowcount of zero, which is  
correct, but it didn't then do set_dummy_rel_pathlist.  (We would have  
ended up with the right state after set_append_rel_pathlist, but that's  
too late, if we generate indexpaths for some other rel first.)  
  
In addition to fixing the actual bug, I installed an Assert enforcing this  
convention in set_rel_size; that then allows simplification of a couple  
of now-redundant tests for zero rowcount in set_append_rel_size.  
  
Also, to cover the possibility that third-party FDWs have been careless  
about not returning a zero rowcount estimate, apply clamp_row_est to  
whatever an FDW comes up with as the rows estimate.  
  
Per report from Andreas Seltenreich.  Back-patch to 9.2.  Earlier branches  
did not have the separation between set_base_rel_sizes and  
set_base_rel_pathlists steps, so there was no intermediate state where an  
appendrel would have had inconsistent rowcount and pathlist.  It's possible  
that adding the Assert to set_rel_size would be a good idea in older  
branches too; but since they're not under development any more, it's likely  
not worth the trouble.  
  

Restore use of zlib default compression in pg_dump directory mode.

  
commit   : aa1266d5f8662a8a768bb83f12bbf5386d0cbd55    
  
author   : Andrew Dunstan <andrew@dunslane.net>    
date     : Sat, 25 Jul 2015 17:14:36 -0400    
  
committer: Andrew Dunstan <andrew@dunslane.net>    
date     : Sat, 25 Jul 2015 17:14:36 -0400    

Click here for diff

  
This was broken by commit 0e7e355f27302b62af3e1add93853ccd45678443 and  
friends, which ignored the fact that gzopen() will treat the "-1" in the  
mode argument as an invalid character "-" (which it silently ignores)  
followed by a flag for compression level 1. Now, when this value is  
encountered no compression level flag is passed to gzopen, leaving it to  
use the zlib default.  
  
Also, enforce the documented allowed range for pg_dump's -Z option,  
namely 0 .. 9, and remove some consequently dead code from  
pg_backup_tar.c.  
  
Problem reported by Marc Mamin.  
  
Backpatch to 9.1, like the patch that introduced the bug.  
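The confusion is easy to reproduce with zlib directly (a Python stand-in; gzopen's mode-string parsing is only described in the comments, not exercised):

```python
import zlib

# zlib's Z_DEFAULT_COMPRESSION is the integer -1.  Passing -1 to the level
# API is fine, but embedding "-1" in a gzopen()-style mode string is not:
# gzopen sees "-" (an invalid mode character it silently ignores) followed
# by "1", i.e. an explicit request for compression level 1.
Z_DEFAULT_COMPRESSION = -1
data = b"abc" * 1000

default_out = zlib.compress(data, Z_DEFAULT_COMPRESSION)  # real default level
level1_out = zlib.compress(data, 1)                       # what "w-1" gave

assert zlib.decompress(default_out) == data
assert zlib.decompress(level1_out) == data
```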
  

Fix off-by-one error in calculating subtrans/multixact truncation point.

  
commit   : 84330d0c1fd3dd76f95ce82dd5856a9de52b5b23    
  
author   : Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Thu, 23 Jul 2015 01:30:15 +0300    
  
committer: Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Thu, 23 Jul 2015 01:30:15 +0300    

Click here for diff

  
If there were no subtransactions (or multixacts) active, we would calculate  
the oldestxid == next xid. That's correct, but if next XID happens to be  
on the next pg_subtrans (pg_multixact) page, the page does not exist yet,  
and SimpleLruTruncate will produce an "apparent wraparound" warning. The  
warning is harmless in this case, but looks very alarming to users.  
  
Backpatch to all supported versions. Patch and analysis by Thomas Munro.  
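The boundary condition can be sketched as follows (the per-page constant is illustrative; the real value depends on BLCKSZ):

```python
# Sketch of the off-by-one: with no subtransactions active, the truncation
# cutoff was the next XID itself.  When that XID is the first slot of a
# pg_subtrans page that hasn't been created yet, truncating "up to" its
# page targets a nonexistent page and SimpleLruTruncate warns about an
# apparent wraparound.
XACTS_PER_PAGE = 2048              # illustrative slots per SLRU page

def page_of(xid: int) -> int:
    return xid // XACTS_PER_PAGE

next_xid = 3 * XACTS_PER_PAGE      # first XID of a not-yet-created page
assert page_of(next_xid) == page_of(next_xid - 1) + 1
```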
  

Fix (some of) pltcl memory usage

  
commit   : 3cb6ef9983b5d7b45e7a5bd42c2cf8714f931850    
  
author   : Alvaro Herrera <alvherre@alvh.no-ip.org>    
date     : Mon, 20 Jul 2015 14:18:08 +0200    
  
committer: Alvaro Herrera <alvherre@alvh.no-ip.org>    
date     : Mon, 20 Jul 2015 14:18:08 +0200    

Click here for diff

  
As reported by Bill Parker, PL/Tcl did not validate some malloc() calls  
against NULL return.  Fix by using palloc() in a new long-lived memory  
context instead.  This allows us to simplify error handling too, by  
simply deleting the memory context instead of doing retail frees.  
  
There's still a lot that could be done to improve PL/Tcl's memory  
handling ...  
  
This is pretty ancient, so backpatch all the way back.  
  
Author: Michael Paquier and Álvaro Herrera  
Discussion: https://www.postgresql.org/message-id/CAFrbyQwyLDYXfBOhPfoBGqnvuZO_Y90YgqFM11T2jvnxjLFmqw@mail.gmail.com  
  

Make WaitLatchOrSocket's timeout detection more robust.

  
commit   : 6675301ea56d6f518eefa4b4603a8efd1e754a8f    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sat, 18 Jul 2015 11:47:13 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sat, 18 Jul 2015 11:47:13 -0400    

Click here for diff

  
In the previous coding, timeout would be noticed and reported only when  
poll() or select() returned zero (or the equivalent behavior on Windows).  
Ordinarily that should work well enough, but it seems conceivable that we  
could get into a state where poll() always returns a nonzero value --- for  
example, if it is noticing a condition on one of the file descriptors that  
we do not think is reason to exit the loop.  If that happened, we'd be in a  
busy-wait loop that would fail to terminate even when the timeout expires.  
  
We can make this more robust at essentially no cost, by deciding to exit  
of our own accord if we compute a zero or negative time-remaining-to-wait.  
Previously the code noted this but just clamped the time-remaining to zero,  
expecting that we'd detect timeout on the next loop iteration.  
  
Back-patch to 9.2.  While 9.1 had a version of WaitLatchOrSocket, it was  
primitive compared to later versions, and did not guarantee reliable  
detection of timeouts anyway.  (Essentially, this is a refinement of  
commit 3e7fdcffd6f77187, which was back-patched only as far as 9.2.)  
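The hardened loop shape can be sketched like this (a Python stand-in, not the C API; `poll_once` is a hypothetical callback standing in for the poll()/select() call):

```python
import time

def wait_with_timeout(poll_once, timeout_ms: int) -> bool:
    """Return True if the awaited event fired, False on timeout.

    Recompute the time remaining on every iteration and exit on our own
    authority when it reaches zero, instead of trusting the kernel wait
    primitive to eventually return 0 on timeout.
    """
    deadline = time.monotonic() + timeout_ms / 1000.0
    while True:
        remaining = deadline - time.monotonic()
        if remaining <= 0:
            return False           # timeout detected by our own clock
        if poll_once(remaining):   # may keep reporting irrelevant activity
            return True

# A poller that only ever reports conditions we don't care about can no
# longer busy-wait forever: the loop still terminates at the deadline.
assert wait_with_timeout(lambda remaining: False, 10) is False
```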
  

AIX: Test the -qlonglong option before use.

  
commit   : 12073b9aadc91c51bf8a21a7aabe7a1083aa4903    
  
author   : Noah Misch <noah@leadboat.com>    
date     : Fri, 17 Jul 2015 03:01:14 -0400    
  
committer: Noah Misch <noah@leadboat.com>    
date     : Fri, 17 Jul 2015 03:01:14 -0400    

Click here for diff

  
xlc provides "long long" unconditionally at C99-compatible language  
levels, and this option provokes a warning.  The warning interferes with  
"configure" tests that fail in response to any warning.  Notably, before  
commit 85a2a8903f7e9151793308d0638621003aded5ae, it interfered with the  
test for -qnoansialias.  Back-patch to 9.0 (all supported versions).  
  

Fix a low-probability crash in our qsort implementation.

  
commit   : 15ca2b6cd039aa35ae6d0cd02c100ffc020b5ea8    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 16 Jul 2015 22:57:46 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 16 Jul 2015 22:57:46 -0400    

Click here for diff

  
It's standard for quicksort implementations, after having partitioned the  
input into two subgroups, to recurse to process the smaller partition and  
then handle the larger partition by iterating.  This method guarantees  
that no more than log2(N) levels of recursion can be needed.  However,  
Bentley and McIlroy argued that checking to see which partition is smaller  
isn't worth the cycles, and so their code doesn't do that but just always  
recurses on the left partition.  In most cases that's fine; but with  
worst-case input we might need O(N) levels of recursion, and that means  
that qsort could be driven to stack overflow.  Such an overflow seems to  
be the only explanation for today's report from Yiqing Jin of a SIGSEGV  
in med3_tuple while creating an index of a couple billion entries with a  
very large maintenance_work_mem setting.  Therefore, let's spend the few  
additional cycles and lines of code needed to choose the smaller partition  
for recursion.  
  
Also, fix up the qsort code so that it properly uses size_t not int for  
some intermediate values representing numbers of items.  This would only  
be a live risk when sorting more than INT_MAX bytes (in qsort/qsort_arg)  
or tuples (in qsort_tuple), which I believe would never happen with any  
caller in the current core code --- but perhaps it could happen with  
call sites in third-party modules?  In any case, this is trouble waiting  
to happen, and the corrected code is probably if anything shorter and  
faster than before, since it removes sign-extension steps that had to  
happen when converting between int and size_t.  
  
In passing, move a couple of CHECK_FOR_INTERRUPTS() calls so that it's  
not necessary to preserve the value of "r" across them, and prettify  
the output of gen_qsort_tuple.pl a little.  
  
Back-patch to all supported branches.  The odds of hitting this issue  
are probably higher in 9.4 and up than before, due to the new ability  
to allocate sort workspaces exceeding 1GB, but there's no good reason  
to believe that it's impossible to crash older branches this way.  
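The technique can be sketched as follows (an illustrative quicksort with a Lomuto partition, not the qsort.c code itself):

```python
import random

def partition(a, lo, hi):
    """Lomuto partition around a[hi]; returns the pivot's final index."""
    pivot = a[hi]
    i = lo
    for j in range(lo, hi):
        if a[j] < pivot:
            a[i], a[j] = a[j], a[i]
            i += 1
    a[i], a[hi] = a[hi], a[i]
    return i

def qsort_smaller_first(a, lo=0, hi=None):
    """Recurse into the smaller partition and iterate over the larger one,
    bounding recursion to about log2(N) levels even on worst-case input."""
    if hi is None:
        hi = len(a) - 1
    while lo < hi:
        p = partition(a, lo, hi)
        if p - lo < hi - p:
            qsort_smaller_first(a, lo, p - 1)   # smaller side: recurse
            lo = p + 1                          # larger side: keep looping
        else:
            qsort_smaller_first(a, p + 1, hi)
            hi = p - 1

xs = random.sample(range(10_000), 1_000)
qsort_smaller_first(xs)
assert xs == sorted(xs)
```

Always recursing on the left instead, as Bentley and McIlroy's code does, leaves the recursion depth at O(N) on adversarial input, which is the stack-overflow mode described above.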
  

  
commit   : 690bec26c65610ab8aac8326221c4c4ebbe57a54    
  
author   : Noah Misch <noah@leadboat.com>    
date     : Wed, 15 Jul 2015 21:00:26 -0400    
  
committer: Noah Misch <noah@leadboat.com>    
date     : Wed, 15 Jul 2015 21:00:26 -0400    

Click here for diff

  
This allows PostgreSQL modules and their dependencies to have undefined  
symbols, resolved at runtime.  Perl module shared objects rely on that  
in Perl 5.8.0 and later.  This fixes the crash when PL/PerlU loads such  
modules, as the hstore_plperl test suite does.  Module authors can link  
using -Wl,-G to permit undefined symbols; by default, linking will fail  
as it has.  Back-patch to 9.0 (all supported versions).  
  

Fix assorted memory leaks.

  
commit   : a24ceea4b311146d41e3bd5b8e0df7176113675f    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sun, 12 Jul 2015 16:25:52 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sun, 12 Jul 2015 16:25:52 -0400    

Click here for diff

  
Per Coverity (not that any of these are so non-obvious that they should not  
have been caught before commit).  The extent of leakage is probably minor  
to unnoticeable, but a leak is a leak.  Back-patch as necessary.  
  
Michael Paquier  
  

Improve documentation about array concat operator vs. underlying functions.

  
commit   : 349ce2870f1039b83811af0c640fe58efe16e3f7    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 9 Jul 2015 18:50:31 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 9 Jul 2015 18:50:31 -0400    

Click here for diff

  
The documentation implied that there was seldom any reason to use the  
array_append, array_prepend, and array_cat functions directly.  But that's  
not really true, because they can help make it clear which case is meant,  
which the || operator can't do since it's overloaded to represent all three  
cases.  Add some discussion and examples illustrating the potentially  
confusing behavior that can ensue if the parser misinterprets what was  
meant.  
  
Per a complaint from Michael Herold.  Back-patch to 9.2, which is where ||  
started to behave this way.  
  

Fix postmaster's handling of a startup-process crash.

  
commit   : 97122b8a8e592d71408addce98aa3ac66f56fdc3    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 9 Jul 2015 13:22:23 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 9 Jul 2015 13:22:23 -0400    

Click here for diff

  
Ordinarily, a failure (unexpected exit status) of the startup subprocess  
should be considered fatal, so the postmaster should just close up shop  
and quit.  However, if we sent the startup process a SIGQUIT or SIGKILL  
signal, the failure is hardly "unexpected", and we should attempt restart;  
this is necessary for recovery from ordinary backend crashes in hot-standby  
scenarios.  I attempted to implement the latter rule with a two-line patch  
in commit 442231d7f71764b8c628044e7ce2225f9aa43b67, but it now emerges that  
that patch was a few bricks shy of a load: it failed to distinguish the  
case of a signaled startup process from the case where the new startup  
process crashes before reaching database consistency.  That resulted in  
infinitely respawning a new startup process only to have it crash again.  
  
To handle this properly, we really must track whether we have sent the  
*current* startup process a kill signal.  Rather than add yet another  
ad-hoc boolean to the postmaster's state, I chose to unify this with the  
existing RecoveryError flag into an enum tracking the startup process's  
state.  That seems more consistent with the postmaster's general state  
machine design.  
  
Back-patch to 9.0, like the previous patch.  
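The state-machine idea can be sketched like this (names are illustrative, not the actual enum in postmaster.c):

```python
from enum import Enum, auto

class StartupStatus(Enum):
    """One state replacing two ad-hoc booleans ("did we signal it?" and the
    old RecoveryError flag)."""
    NOT_RUNNING = auto()
    RUNNING = auto()
    SIGNALED = auto()      # we sent the current startup process SIGQUIT/SIGKILL
    CRASHED = auto()       # it died on its own before reaching consistency

def should_restart(status: StartupStatus) -> bool:
    # A death we caused ourselves is expected: attempt restart (needed for
    # hot-standby recovery from ordinary backend crashes).  A spontaneous
    # crash before consistency is fatal: shut down rather than respawn a
    # doomed startup process forever.
    return status is StartupStatus.SIGNALED

assert should_restart(StartupStatus.SIGNALED)
assert not should_restart(StartupStatus.CRASHED)
```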
  

  
commit   : 8dc8a31a3a036830bcffe2c803688b60ebd67bfc    
  
author   : Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Thu, 9 Jul 2015 16:00:14 +0300    
  
committer: Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Thu, 9 Jul 2015 16:00:14 +0300    

Click here for diff

  
Tom fixed another one of these in commit 7f32dbcd, but there was an almost  
identical one in the libpq docs. Per his comment:  
  
HP's web server has apparently become case-sensitive sometime recently.  
Per bug #13479 from Daniel Abraham.  Corrected link identified by Alvaro.  
  

Replace use of "diff -q".

  
commit   : f7b3300384c1f726e4c1fe34ecec409826c2cdef    
  
author   : Noah Misch <noah@leadboat.com>    
date     : Wed, 8 Jul 2015 20:44:21 -0400    
  
committer: Noah Misch <noah@leadboat.com>    
date     : Wed, 8 Jul 2015 20:44:21 -0400    

Click here for diff

  
POSIX does not specify the -q option, and many implementations do not  
offer it.  Don't bother changing the MSVC build system, because having  
non-GNU diff on Windows is vanishingly unlikely.  Back-patch to 9.2,  
where this invocation was introduced.  
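
The portable way to ask "are these two files identical?" without GNU diff's `-q` is `cmp`, whose `-s` (silent) flag POSIX does specify. A small sketch, assuming a POSIX `cmp` on `PATH`; the function name is invented:

```python
import subprocess

def files_identical(a: str, b: str) -> bool:
    # cmp -s exits 0 if the files are byte-identical, 1 if they differ.
    # Unlike diff's -q flag, -s is specified by POSIX, so this works with
    # non-GNU toolchains as well.
    return subprocess.run(["cmp", "-s", a, b]).returncode == 0
```

In a shell script the same idiom is simply `if cmp -s file1 file2; then ...`.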
  

Fix null pointer dereference in "\c" psql command.

  
commit   : 458ccbf2b3a9a8566a405368825954cf99b8dbef    
  
author   : Noah Misch <noah@leadboat.com>    
date     : Wed, 8 Jul 2015 20:44:21 -0400    
  
committer: Noah Misch <noah@leadboat.com>    
date     : Wed, 8 Jul 2015 20:44:21 -0400    

Click here for diff

  
The psql crash happened when no current connection existed.  (The second  
new check is optional given today's undocumented NULL argument handling  
in PQhost() etc.)  Back-patch to 9.0 (all supported versions).  
  

Fix portability issue in pg_upgrade test script: avoid $PWD.

  
commit   : ebe601c4daf3cdcde5156d941a6ae880ab94c7cd    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 7 Jul 2015 12:49:18 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 7 Jul 2015 12:49:18 -0400    

Click here for diff

  
SUSv2-era shells don't set the PWD variable, though anything more modern  
does.  In the buildfarm environment this could lead to test.sh executing  
with PWD pointing to $HOME or another high-level directory, so that there  
were conflicts between concurrent executions of the test in different  
branch subdirectories.  This appears to be the explanation for recent  
intermittent failures on buildfarm members binturong and dingo (and might  
well have something to do with the buildfarm script's failure to capture  
log files from pg_upgrade tests, too).  
  
To fix, just use `pwd` in place of $PWD.  AFAICS test.sh is the only place  
in our source tree that depended on $PWD.  Back-patch to all versions  
containing this script.  
  
Per buildfarm.  Thanks to Oskari Saarenmaa for diagnosing the problem.  
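
The underlying distinction: `$PWD` is an environment variable maintained by the shell, while `pwd` asks the system for the real current directory. A process that changes directory does not update the inherited `PWD`, so it can be stale or, under SUSv2-era shells, unset entirely. A quick Python demonstration of the same staleness:

```python
import os

os.chdir("/")                       # change directory inside this process
stale = os.environ.get("PWD")       # whatever the launching shell left behind
actual = os.getcwd()                # the real current directory: "/"
# `stale` may be None or point somewhere else entirely; `actual` is reliable,
# just as `pwd` (the command) is reliable where $PWD is not.
```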
  

Improve handling of out-of-memory in libpq.

  
commit   : 6d88c1fc5f2eba341186e13dc9ab4a0ca8eeeeba    
  
author   : Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Tue, 7 Jul 2015 18:37:45 +0300    
  
committer: Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Tue, 7 Jul 2015 18:37:45 +0300    

Click here for diff

  
If an allocation fails in the main message handling loop, pqParseInput3  
or pqParseInput2, it should not be treated as "not enough data available  
yet"; otherwise libpq will wait indefinitely for more data to arrive from  
the server and get stuck forever.  
  
This isn't a complete fix - getParamDescriptions and getCopyStart still  
have the same issue, but it's a step in the right direction.  
  
Michael Paquier and me. Backpatch to all supported versions.  
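
The failure mode is a classification bug: the parse loop has (at least) three distinct outcomes, and folding "allocation failed" into "need more bytes" makes the caller wait for data that can never help. A hypothetical sketch of the distinction (names invented, not libpq's actual API):

```python
from enum import Enum, auto

class ParseResult(Enum):
    OK = auto()              # message consumed successfully
    NEED_MORE_DATA = auto()  # legitimately incomplete; wait for the server
    ERROR = auto()           # unrecoverable (e.g. out of memory); stop waiting

def handle_message(buf: bytes, needed: int, alloc_ok: bool) -> ParseResult:
    if len(buf) < needed:
        return ParseResult.NEED_MORE_DATA
    if not alloc_ok:
        # The bug: returning NEED_MORE_DATA here makes the caller block
        # forever, since more network data cannot fix a failed malloc.
        return ParseResult.ERROR
    return ParseResult.OK
```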
  

Turn install.bat into a pure one-line wrapper for the perl script.

  
commit   : a5273ef371cedb2c5e51d151e30ebe6f44615a03    
  
author   : Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Tue, 7 Jul 2015 16:31:52 +0300    
  
committer: Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Tue, 7 Jul 2015 16:31:52 +0300    

Click here for diff

  
Build.bat and vcregress.bat got similar treatment years ago. I'm not sure  
why install.bat wasn't treated at the same time, but it seems like a good  
idea anyway.  
  
The immediate problem with the old install.bat was that it had quoting  
issues, and wouldn't work if the target directory's name contained spaces.  
This fixes that problem.  
  
I committed this to master yesterday; this is a backpatch of the same for  
all supported versions.  
  

Remove incorrect warning from pg_archivecleanup document.

  
commit   : e27d1f3ce5ceeb04b5c05b5c0bfac04db4dc9f86    
  
author   : Fujii Masao <fujii@postgresql.org>    
date     : Mon, 6 Jul 2015 20:58:58 +0900    
  
committer: Fujii Masao <fujii@postgresql.org>    
date     : Mon, 6 Jul 2015 20:58:58 +0900    

Click here for diff

  
The .backup file name can be passed to pg_archivecleanup even if  
it includes the extension specified in the -x option.  
However, the documentation previously and incorrectly warned users  
not to do that.  
  
Back-patch to 9.2 where pg_archivecleanup's -x option and  
the warning were added.  
  

Make numeric form of PG version number readily available in Makefiles.

  
commit   : 89b8cf47b865df89ebcec14eb760ee0d4b6dcc14    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sun, 5 Jul 2015 12:01:01 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sun, 5 Jul 2015 12:01:01 -0400    

Click here for diff

  
Expose PG_VERSION_NUM (e.g., "90600") as a Make variable; but for  
consistency with the other Make variables holding similar info,  
call the variable just VERSION_NUM not PG_VERSION_NUM.  
  
There was some discussion of making this value available as a pg_config  
value as well.  However, that would entail substantially more work than  
this two-line patch.  Given that there was not exactly universal consensus  
that we need this at all, let's just do a minimal amount of work for now.  
  
Back-patch of commit a5d489ccb7e613c7ca3be6141092b8c1d2c13fa7, so that this  
variable is actually useful for its intended purpose sometime before 2020.  
  
Michael Paquier, reviewed by Pavel Stehule  
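
The numeric form packs the version parts into a single integer, two decimal digits per part after the major number, so version comparisons become plain integer comparisons. A sketch of the pre-10 (two-part major) encoding:

```python
def version_num(major: int, minor: int, patch: int) -> int:
    # Pre-10 PostgreSQL versioning: a two-part major version ("9.6") plus
    # a patch level, packed as MMmmpp. E.g. 9.6.0 -> 90600, 9.2.14 -> 90214.
    return major * 10000 + minor * 100 + patch
```

In a Makefile this lets extension builds write conditions like `ifeq ($(shell test $(VERSION_NUM) -ge 90200 && echo yes),yes)` instead of parsing dotted version strings.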
  

PL/Perl: Add alternative expected file for Perl 5.22

  
commit   : b2e54be2ea620d6916e3fc0cf04d063c8ef98c01    
  
author   : Peter Eisentraut <peter_e@gmx.net>    
date     : Sun, 21 Jun 2015 10:37:24 -0400    
  
committer: Peter Eisentraut <peter_e@gmx.net>    
date     : Sun, 21 Jun 2015 10:37:24 -0400    

Click here for diff

  
  

  
commit   : 49946f2b2d7b5defddce1321ca706fbac32059ad    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 30 Jun 2015 18:47:32 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 30 Jun 2015 18:47:32 -0400    

Click here for diff

  
HP's web server has apparently become case-sensitive sometime recently.  
Per bug #13479 from Daniel Abraham.  Corrected link identified by Alvaro.  
  

Test -lrt for sched_yield

  
commit   : c538d736324767ef04b9ff9ee0056752a8c59e53    
  
author   : Alvaro Herrera <alvherre@alvh.no-ip.org>    
date     : Tue, 30 Jun 2015 14:20:38 -0300    
  
committer: Alvaro Herrera <alvherre@alvh.no-ip.org>    
date     : Tue, 30 Jun 2015 14:20:38 -0300    

Click here for diff

  
Apparently, this is needed in some Solaris versions.  
  
Author: Oskari Saarenmaa  
  

Revoke incorrectly applied patch version

  
commit   : 9d22ba6b4065f493935f03bc28658744ae0a94c1    
  
author   : Simon Riggs <simon@2ndQuadrant.com>    
date     : Sat, 27 Jun 2015 02:22:12 +0100    
  
committer: Simon Riggs <simon@2ndQuadrant.com>    
date     : Sat, 27 Jun 2015 02:22:12 +0100    

Click here for diff

  
  

Avoid hot standby cancels from VAC FREEZE

  
commit   : ca6a11d785abcb42a96362d87a735b1ed1129074    
  
author   : Simon Riggs <simon@2ndQuadrant.com>    
date     : Sat, 27 Jun 2015 00:47:34 +0100    
  
committer: Simon Riggs <simon@2ndQuadrant.com>    
date     : Sat, 27 Jun 2015 00:47:34 +0100    

Click here for diff

  
VACUUM FREEZE generated false cancelations of standby queries on an  
otherwise idle master. Caused by an off-by-one error on cutoff_xid  
which goes back to the original commit.  
  
Backpatch to all versions 9.0+  
  
Analysis and report by Marco Nenciarini  
  
Bug fix by Simon Riggs  
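
A simplified reading of the off-by-one, ignoring transaction-ID wraparound arithmetic (the function name is invented for illustration): freezing affects tuples whose xmin is strictly below cutoff_xid, so the newest transaction a standby snapshot could miss is cutoff_xid - 1. Reporting cutoff_xid itself as the conflict horizon cancels standby queries one xid too eagerly:

```python
def conflict_horizon(cutoff_xid: int) -> int:
    # Tuples with xmin < cutoff_xid are frozen; the newest xid actually
    # affected is therefore cutoff_xid - 1. Passing cutoff_xid unadjusted
    # (the off-by-one) conflicts with standby snapshots that could in fact
    # still see everything correctly.
    return cutoff_xid - 1
```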
  

Fix the logic for putting relations into the relcache init file.

  
commit   : 88fab18a4cf14604d994fbee1cf6bf5c67b166b6    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 25 Jun 2015 14:39:05 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 25 Jun 2015 14:39:05 -0400    

Click here for diff

  
Commit f3b5565dd4e59576be4c772da364704863e6a835 was a couple of bricks shy  
of a load; specifically, it missed putting pg_trigger_tgrelid_tgname_index  
into the relcache init file, because that index is not used by any  
syscache.  However, we have historically nailed that index into cache for  
performance reasons.  The upshot was that load_relcache_init_file always  
decided that the init file was busted and silently ignored it, resulting  
in a significant hit to backend startup speed.  
  
To fix, reinstantiate RelationIdIsInInitFile() as a wrapper around  
RelationSupportsSysCache(), which can know about additional relations  
that should be in the init file despite being unknown to syscache.c.  
  
Also install some guards against future mistakes of this type: make  
write_relcache_init_file Assert that all nailed relations get written to  
the init file, and make load_relcache_init_file emit a WARNING if it takes  
the "wrong number of nailed relations" exit path.  Now that we remove the  
init files during postmaster startup, that case should never occur in the  
field, even if we are starting a minor-version update that added or removed  
rels from the nailed set.  So the warning shouldn't ever be seen by end  
users, but it will show up in the regression tests if somebody breaks this  
logic.  
  
Back-patch to all supported branches, like the previous commit.  
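
The shape of the fix is a wrapper over two sources of truth: what syscache.c knows about, plus relations nailed into cache for performance despite having no syscache. A hypothetical sketch (the relation sets and function are stand-ins, not the real C data structures):

```python
# Stand-ins for the two sources the commit unifies:
SYSCACHE_RELATIONS = {"pg_class_oid_index", "pg_attribute_relid_attnum_index"}
EXTRA_NAILED_RELATIONS = {"pg_trigger_tgrelid_tgname_index"}  # no syscache uses it

def relation_in_init_file(relname: str) -> bool:
    # Mirrors the role of RelationIdIsInInitFile(): everything supporting a
    # syscache belongs in the init file, plus nailed relations that no
    # syscache knows about. Missing the second set is exactly what made
    # load_relcache_init_file reject the file every time.
    return relname in SYSCACHE_RELATIONS or relname in EXTRA_NAILED_RELATIONS
```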
  

Docs: fix claim that to_char('FM') removes trailing zeroes.

  
commit   : 03655d215d32428b1fa628e4855ae24e5f1f0ec3    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 25 Jun 2015 10:44:03 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 25 Jun 2015 10:44:03 -0400    

Click here for diff

  
Of course, what it removes is leading zeroes.  Seems to have been a thinko  
in commit ffe92d15d53625d5ae0c23f4e1984ed43614a33d.  Noted by Hubert Depesz  
Lubaczewski.  
  

Improve inheritance_planner()'s performance for large inheritance sets.

  
commit   : e538e510e11a6d3aa2a080f36bc46f1cb537532f    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 22 Jun 2015 18:53:27 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 22 Jun 2015 18:53:27 -0400    

Click here for diff

  
Commit c03ad5602f529787968fa3201b35c119bbc6d782 introduced a planner  
performance regression for UPDATE/DELETE on large inheritance sets.  
It required copying the append_rel_list (which is of size proportional to  
the number of inherited tables) once for each inherited table, thus  
resulting in O(N^2) time and memory consumption.  While it's difficult to  
avoid that in general, the extra work only has to be done for  
append_rel_list entries that actually reference subquery RTEs, which  
inheritance-set entries will not.  So we can buy back essentially all of  
the loss in cases without subqueries in FROM; and even for those, the added  
work is mainly proportional to the number of UNION ALL subqueries.  
  
Back-patch to 9.2, like the previous commit.  
  
Tom Lane and Dean Rasheed, per a complaint from Thomas Munro.  
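
The optimization pattern is copy-on-need: instead of deep-copying the whole append_rel_list once per inherited table, copy only the entries that actually reference subquery RTEs and share the rest. A hypothetical sketch of that pattern (not the real planner data structures):

```python
def copy_append_rel_list(append_rel_list, references_subquery):
    # Entries that reference subquery RTEs must be copied because the planner
    # will mutate them per-target-table; plain inheritance-set entries can be
    # shared as-is, avoiding the O(N^2) blowup for N inherited tables.
    return [dict(e) if references_subquery(e) else e for e in append_rel_list]
```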
  

Truncate strings in tarCreateHeader() with strlcpy(), not sprintf().

  
commit   : 926efeb042f64083e127b165ac98752dfccbff1f    
  
author   : Noah Misch <noah@leadboat.com>    
date     : Sun, 21 Jun 2015 20:04:36 -0400    
  
committer: Noah Misch <noah@leadboat.com>    
date     : Sun, 21 Jun 2015 20:04:36 -0400    

Click here for diff

  
This supplements the GNU libc bug #6530 workarounds introduced in commit  
54cd4f04576833abc394e131288bf3dd7dcf4806.  On affected systems, a  
tar-format pg_basebackup failed when some filename beneath the data  
directory was not valid character data in the postmaster/walsender  
locale.  Back-patch to 9.1, where pg_basebackup was introduced.  Extant,  
bug-prone conversion specifications receive only ASCII bytes or involve  
low-importance messages.  
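
The hazard with a `%.Ns` conversion on the affected systems (glibc bug #6530) is that it can fail outright when the bytes are not valid character data in the current locale, whereas strlcpy-style truncation just copies bytes. A sketch of filling a fixed-width tar header field byte-wise, with no locale interpretation:

```python
def tar_field(value: bytes, width: int) -> bytes:
    # strlcpy-style fill: truncate to width-1 bytes, then NUL-terminate and
    # NUL-pad to the full field width. The bytes are never interpreted as
    # locale-encoded characters, so invalid sequences cannot make it fail.
    data = value[:width - 1]
    return data + b"\0" * (width - len(data))
```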
  

Fix thinko in comment (launcher -> worker)

  
commit   : ab232db2c2ce3e58ded053f351cef4773523f498    
  
author   : Alvaro Herrera <alvherre@alvh.no-ip.org>    
date     : Sat, 20 Jun 2015 11:45:58 -0300    
  
committer: Alvaro Herrera <alvherre@alvh.no-ip.org>    
date     : Sat, 20 Jun 2015 11:45:58 -0300    

Click here for diff

  
  

Clamp autovacuum launcher sleep time to 5 minutes

  
commit   : 41acde2df6d537569f497afaa7316333457e10e8    
  
author   : Alvaro Herrera <alvherre@alvh.no-ip.org>    
date     : Fri, 19 Jun 2015 12:44:34 -0300    
  
committer: Alvaro Herrera <alvherre@alvh.no-ip.org>    
date     : Fri, 19 Jun 2015 12:44:34 -0300    

Click here for diff

  
This avoids the problem that the launcher might go to sleep for an  
unreasonably long time in unusual conditions, such as the server clock  
moving backwards.  
  
(Simply moving the server clock forward again doesn't solve the problem  
unless you wake up the autovacuum launcher manually, say by sending it  
SIGHUP.)  
  
Per trouble report from Prakash Itnal in  
https://www.postgresql.org/message-id/CAHC5u79-UqbapAABH2t4Rh2eYdyge0Zid-X=Xz-ZWZCBK42S0Q@mail.gmail.com  
  
Analyzed independently by Haribabu Kommi and Tom Lane.  
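
The fix is a plain clamp on the computed nap time: a clock jump can make the computed interval absurdly long (or negative), and bounding it means the launcher re-evaluates within five minutes at worst. A minimal sketch; the floor value here is illustrative, only the 5-minute cap comes from the commit:

```python
MIN_SLEEP = 0.1       # illustrative floor, in seconds
MAX_SLEEP = 5 * 60    # the 5-minute cap this commit adds

def clamp_sleep(computed: float) -> float:
    # Bound the nap time so a backwards clock jump (huge or negative
    # computed value) cannot put the launcher to sleep indefinitely.
    return max(MIN_SLEEP, min(computed, MAX_SLEEP))
```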
  

Check for out of memory when allocating sqlca.

  
commit   : 711cbaadd078321507fb9bc507ffe727e976bbf7    
  
author   : Michael Meskes <meskes@postgresql.org>    
date     : Mon, 15 Jun 2015 14:21:03 +0200    
  
committer: Michael Meskes <meskes@postgresql.org>    
date     : Mon, 15 Jun 2015 14:21:03 +0200    

Click here for diff

  
Patch by Michael Paquier  
  

Fix memory leak in ecpglib's connect function.

  
commit   : fd1ff4a130c804e10bcf2e1ec385e4c06835f9fd    
  
author   : Michael Meskes <meskes@postgresql.org>    
date     : Mon, 15 Jun 2015 14:20:09 +0200    
  
committer: Michael Meskes <meskes@postgresql.org>    
date     : Mon, 15 Jun 2015 14:20:09 +0200    

Click here for diff

  
Patch by Michael Paquier  
  

Fixed some memory leaks in ECPG.

  
commit   : ec311b1d8f2062d600201c846f817a80eadca6ca    
  
author   : Michael Meskes <meskes@postgresql.org>    
date     : Fri, 12 Jun 2015 14:52:55 +0200    
  
committer: Michael Meskes <meskes@postgresql.org>    
date     : Fri, 12 Jun 2015 14:52:55 +0200    

Click here for diff

  
Patch by Michael Paquier  
  
Conflicts:  
	src/interfaces/ecpg/preproc/variable.c  
  

Fix intoasc() in Informix compat lib. This function used to be a noop.

  
commit   : 1ea539ae3fc197e6512e09609229a2bad8f3001d    
  
author   : Michael Meskes <meskes@postgresql.org>    
date     : Fri, 12 Jun 2015 14:50:47 +0200    
  
committer: Michael Meskes <meskes@postgresql.org>    
date     : Fri, 12 Jun 2015 14:50:47 +0200    

Click here for diff

  
Patch by Michael Paquier  
  

Improve error message and hint for ALTER COLUMN TYPE can't-cast failure.

  
commit   : ae6ae424b502afcc514a2ce14de476a9714b6a75    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 12 Jun 2015 11:54:03 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 12 Jun 2015 11:54:03 -0400    

Click here for diff

  
We already tried to improve this once, but the "improved" text was rather  
off-target if you had provided a USING clause.  Also, it seems helpful  
to provide the exact text of a suggested USING clause, so users can just  
copy-and-paste it when needed.  Per complaint from Keith Rarick and a  
suggestion from Merlin Moncure.  
  
Back-patch to 9.2 where the current wording was adopted.