PostgreSQL 9.3.10 commit log

Stamp 9.3.10.

  
commit   : f5bbaeef1a5cdce1349ed6a1f87a85f17d741b56    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 5 Oct 2015 15:14:02 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 5 Oct 2015 15:14:02 -0400    


  
  

doc: Update URLs of external projects

  
commit   : cc0c8ec9fca50721beb1a36d43114e66b5d96825    
  
author   : Peter Eisentraut <peter_e@gmx.net>    
date     : Fri, 2 Oct 2015 21:50:59 -0400    
  
committer: Peter Eisentraut <peter_e@gmx.net>    
date     : Fri, 2 Oct 2015 21:50:59 -0400    


  
  

Fix insufficiently-portable regression test case.

  
commit   : 57b02827f3c086695e82358d7466b0168c64d3b9    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 5 Oct 2015 12:19:15 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 5 Oct 2015 12:19:15 -0400    


  
Some of the buildfarm members are evidently miserly enough of stack space  
to pass the originally-committed form of this test.  Increase the  
requirement 10X to hopefully ensure that it fails as-expected everywhere.  
  
Security: CVE-2015-5289  
  

Translation updates

  
commit   : 921c18c150b202a1ada974a3e2b963f23f2b0604    
  
author   : Peter Eisentraut <peter_e@gmx.net>    
date     : Mon, 5 Oct 2015 10:50:52 -0400    
  
committer: Peter Eisentraut <peter_e@gmx.net>    
date     : Mon, 5 Oct 2015 10:50:52 -0400    


  
Source-Git-URL: git://git.postgresql.org/git/pgtranslation/messages.git  
Source-Git-Hash: 576bd3231176cdea570609e7fd16152bf2e5e15a  
  

Last-minute updates for release notes.

  
commit   : f7957536631e240321d6988edb4543dff35bc29b    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 5 Oct 2015 10:57:15 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 5 Oct 2015 10:57:15 -0400    


  
Add entries for security and not-quite-security issues.  
  
Security: CVE-2015-5288, CVE-2015-5289  
  

Remove outdated comment about relation level autovacuum freeze limits.

  
commit   : e946cc601c7bb2a3aa7c7ed2ec8faa2282f36951    
  
author   : Andres Freund <andres@anarazel.de>    
date     : Mon, 5 Oct 2015 16:09:13 +0200    
  
committer: Andres Freund <andres@anarazel.de>    
date     : Mon, 5 Oct 2015 16:09:13 +0200    


  
The documentation for the autovacuum_multixact_freeze_max_age and  
autovacuum_freeze_max_age relation level parameters contained:  
"Note that while you can set autovacuum_multixact_freeze_max_age very  
small, or even zero, this is usually unwise since it will force frequent  
vacuuming."  
which hasn't been true since these options were made relation options,  
instead of residing in the pg_autovacuum table (834a6da4f7).  
  
Remove the outdated sentence. Even the lowered limits from 2596d70 are  
high enough that this doesn't warrant calling out the risk in the CREATE  
TABLE docs.  
  
Per discussion with Tom Lane and Alvaro Herrera  
  
Discussion: 26377.1443105453@sss.pgh.pa.us  
Backpatch: 9.0- (in parts)  
  

Prevent stack overflow in query-type functions.

  
commit   : 28dea9485ef20897c540ef5c86059dc12fe3fe7b    
  
author   : Noah Misch <noah@leadboat.com>    
date     : Mon, 5 Oct 2015 10:06:30 -0400    
  
committer: Noah Misch <noah@leadboat.com>    
date     : Mon, 5 Oct 2015 10:06:30 -0400    


  
The tsquery, ltxtquery and query_int data types have a common ancestor.  
Having acquired check_stack_depth() calls independently, each was  
missing at least one call.  Back-patch to 9.0 (all supported versions).  
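
The guard being added here is PostgreSQL's check_stack_depth() pattern: compare the address of a local variable against a reference point captured at entry, and stop recursing once a budget is used up. The following standalone sketch imitates that idea; the 256 kB limit, the -1 error convention, and the count_nesting() walker are all illustrative assumptions, not the actual tsquery/ltxtquery code (which raises an error via ereport() against max_stack_depth).

```c
#include <stddef.h>

static char *stack_base_ptr;

/* Rough analogue of check_stack_depth(): how far has the stack
 * pointer moved from the recorded base?  (Formally, subtracting
 * addresses from different frames is implementation-defined, but
 * this is exactly the trick the real function relies on.) */
static int
stack_depth_exceeded(void)
{
    char        here;
    ptrdiff_t   used = stack_base_ptr - &here;

    if (used < 0)
        used = -used;           /* the stack may grow either way */
    return used > 256 * 1024;   /* illustrative budget */
}

/* Recursive walker guarded the way the query-type input functions
 * now are: counts leading '(' nesting, or returns -1 if recursion
 * would go too deep.  Real code reports an error instead. */
static int
count_nesting(const char *s)
{
    int         inner;

    if (stack_depth_exceeded())
        return -1;
    if (*s != '(')
        return 0;
    inner = count_nesting(s + 1);
    return inner < 0 ? inner : inner + 1;
}

static int
guarded_count(const char *s)
{
    char        base;

    stack_base_ptr = &base;
    return count_nesting(s);
}
```

The key property is that the check runs on every recursion level, so arbitrarily nested input produces a clean error rather than a SIGSEGV.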
  

Prevent stack overflow in container-type functions.

  
commit   : 9286ff78f8a5dbb7edec678c4873c63b56266193    
  
author   : Noah Misch <noah@leadboat.com>    
date     : Mon, 5 Oct 2015 10:06:29 -0400    
  
committer: Noah Misch <noah@leadboat.com>    
date     : Mon, 5 Oct 2015 10:06:29 -0400    


  
A range type can name another range type as its subtype, and a record  
type can bear a column of another record type.  Consequently, functions  
like range_cmp() and record_recv() are recursive.  Functions at risk  
include operator family members and referents of pg_type regproc  
columns.  Treat as recursive any such function that looks up and calls  
the same-purpose function for a record column type or the range subtype.  
Back-patch to 9.0 (all supported versions).  
  
An array type's element type is never itself an array type, so array  
functions are unaffected.  Recursion depth proportional to array  
dimensionality, found in array_dim_to_jsonb(), is fine thanks to MAXDIM.  
  

Prevent stack overflow in json-related functions.

  
commit   : f8862172e6519b82e66c51baa5b87e29847db2b9    
  
author   : Noah Misch <noah@leadboat.com>    
date     : Mon, 5 Oct 2015 10:06:29 -0400    
  
committer: Noah Misch <noah@leadboat.com>    
date     : Mon, 5 Oct 2015 10:06:29 -0400    


  
Sufficiently-deep recursion heretofore elicited a SIGSEGV.  If an  
application constructs PostgreSQL json or jsonb values from arbitrary  
user input, application users could have exploited this to terminate all  
active database connections.  That applies to 9.3, where the json parser  
adopted recursive descent, and later versions.  Only row_to_json() and  
array_to_json() were at risk in 9.2, both in a non-security capacity.  
Back-patch to 9.2, where the json type was introduced.  
  
Oskari Saarenmaa, reviewed by Michael Paquier.  
  
Security: CVE-2015-5289  
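
The class of fix here can be sketched with a toy recursive-descent parser: refuse input once nesting passes a limit instead of recursing until the process stack overflows. The grammar (value := "1" | "[" value "]"), the 1000-level limit, and the -1 error convention below are all invented for illustration; the real json parser routes through check_stack_depth() and reports a proper error.

```c
#define MAX_NEST_DEPTH 1000

/* Parse a value that is either "1" or "[" value "]".  Returns 0 on
 * success, -1 on malformed input or excessive nesting. */
static int
parse_value(const char **s, int depth)
{
    if (depth > MAX_NEST_DEPTH)
        return -1;              /* guarded: deep input is an error,
                                 * not a SIGSEGV */
    if (**s == '1')
    {
        (*s)++;
        return 0;
    }
    if (**s == '[')
    {
        (*s)++;
        if (parse_value(s, depth + 1) != 0)
            return -1;
        if (**s != ']')
            return -1;
        (*s)++;
        return 0;
    }
    return -1;
}
```

Because the depth is tracked explicitly, an attacker feeding thousands of nested brackets gets a parse error instead of terminating the backend.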
  

pgcrypto: Detect and report too-short crypt() salts.

  
commit   : cc1210f0aa441cd0825380ed3fddfeadb6f6533f    
  
author   : Noah Misch <noah@leadboat.com>    
date     : Mon, 5 Oct 2015 10:06:29 -0400    
  
committer: Noah Misch <noah@leadboat.com>    
date     : Mon, 5 Oct 2015 10:06:29 -0400    


  
Certain short salts crashed the backend or disclosed a few bytes of  
backend memory.  For existing salt-induced error conditions, emit a  
message saying as much.  Back-patch to 9.0 (all supported versions).  
  
Josh Kupershmidt  
  
Security: CVE-2015-5288  
  

Re-Align *_freeze_max_age reloption limits with corresponding GUC limits.

  
commit   : 3933417141587253c85285f23c789538aa96a22f    
  
author   : Andres Freund <andres@anarazel.de>    
date     : Mon, 5 Oct 2015 11:53:43 +0200    
  
committer: Andres Freund <andres@anarazel.de>    
date     : Mon, 5 Oct 2015 11:53:43 +0200    


  
In 020235a5754 I lowered the autovacuum_*freeze_max_age minimums to  
allow for easier testing of wraparounds. I did not touch the  
corresponding per-table limits. While those don't matter for the purpose  
of wraparound, it seems more consistent to lower them as well.  
  
It's noteworthy that the previous reloption lower limit for  
autovacuum_multixact_freeze_max_age was too high by one magnitude, even  
before 020235a5754.  
  
Discussion: 26377.1443105453@sss.pgh.pa.us  
Backpatch: back to 9.0 (in parts), like the prior patch  
  

Release notes for 9.5beta1, 9.4.5, 9.3.10, 9.2.14, 9.1.19, 9.0.23.

  
commit   : 04811c350b76b4bbccde1dea43c8032f7f8524df    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sun, 4 Oct 2015 19:38:00 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sun, 4 Oct 2015 19:38:00 -0400    


  
  

Further twiddling of nodeHash.c hashtable sizing calculation.

  
commit   : 0867e0ad53598e0f700ecce5f8472da4451eeeb6    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sun, 4 Oct 2015 15:55:07 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sun, 4 Oct 2015 15:55:07 -0400    


  
On reflection, the submitted patch didn't really work to prevent the  
request size from exceeding MaxAllocSize, because of the fact that we'd  
happily round nbuckets up to the next power of 2 after we'd limited it to  
max_pointers.  The simplest way to enforce the limit correctly is to  
round max_pointers down to a power of 2 when it isn't one already.  
  
(Note that the constraint to INT_MAX / 2, if it were doing anything useful  
at all, is properly applied after that.)  
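
The sizing rule described above can be sketched as follows; the function and constant names are illustrative, not the actual nodeHash.c code. Capping by MaxAllocSize alone is insufficient because nbuckets is later rounded up to the next power of 2, so the cap itself must first be rounded down to a power of 2.

```c
#include <stddef.h>

#define MAX_ALLOC_SIZE  0x3fffffff  /* PostgreSQL's MaxAllocSize */

/* Round n down to the nearest power of 2 (n >= 1). */
static size_t
round_down_pow2(size_t n)
{
    size_t      p = 1;

    while (p <= n / 2)
        p *= 2;
    return p;
}

/* Limit a requested bucket count so that even after rounding the
 * result up to a power of 2, the pointer array cannot exceed
 * MAX_ALLOC_SIZE bytes. */
static size_t
clamp_bucket_count(size_t requested)
{
    size_t      max_pointers = MAX_ALLOC_SIZE / sizeof(void *);

    max_pointers = round_down_pow2(max_pointers);
    return requested < max_pointers ? requested : max_pointers;
}
```

Since the clamp is already a power of 2, any subsequent round-up of a value at or below it stays within the limit.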
  

Fix possible "invalid memory alloc request size" failure in nodeHash.c.

  
commit   : 45dd7cdbabae665e4a37750da97ee296c2f76d32    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sun, 4 Oct 2015 14:16:59 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sun, 4 Oct 2015 14:16:59 -0400    


  
Limit the size of the hashtable pointer array to not more than  
MaxAllocSize.  We've seen reports of failures due to this in HEAD/9.5,  
and it seems possible in older branches as well.  The change in  
NTUP_PER_BUCKET in 9.5 may have made the problem more likely, but  
surely it didn't introduce it.  
  
Tomas Vondra, slightly modified by me  
  

Update time zone data files to tzdata release 2015g.

  
commit   : 0f6a046b6736c1fc5af5cea36561312e33effef9    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 2 Oct 2015 19:15:39 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 2 Oct 2015 19:15:39 -0400    


  
DST law changes in Cayman Islands, Fiji, Moldova, Morocco, Norfolk Island,  
North Korea, Turkey, Uruguay.  New zone America/Fort_Nelson for Canadian  
Northern Rockies.  
  

Add recursion depth protection to LIKE matching.

  
commit   : 4175cc604f6df6b0f9e9f177898a1b94e6f14aee    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 2 Oct 2015 15:00:52 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 2 Oct 2015 15:00:52 -0400    


  
Since MatchText() recurses, it could in principle be driven to stack  
overflow, although quite a long pattern would be needed.  
  

Add recursion depth protections to regular expression matching.

  
commit   : 9ed207ae99bca08851afbc3e189a95468dacdf97    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 2 Oct 2015 14:51:58 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 2 Oct 2015 14:51:58 -0400    


  
Some of the functions in regex compilation and execution recurse, and  
therefore could in principle be driven to stack overflow.  The Tcl crew  
has seen this happen in practice in duptraverse(), though their fix was  
to put in a hard-wired limit on the number of recursive levels, which is  
not too appetizing --- fortunately, we have enough infrastructure to check  
the actually available stack.  Greg Stark has also seen it in other places  
while fuzz testing on a machine with limited stack space.  Let's put guards  
in to prevent crashes in all these places.  
  
Since the regex code would leak memory if we simply threw elog(ERROR),  
we have to introduce an API that checks for stack depth without throwing  
such an error.  Fortunately that's not difficult.  
  

Fix potential infinite loop in regular expression execution.

  
commit   : 6b3810d0a4f6db5d6f87e997535b14fd306fa3a7    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 2 Oct 2015 14:26:36 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 2 Oct 2015 14:26:36 -0400    


  
In cfindloop(), if the initial call to shortest() reports that a  
zero-length match is possible at the current search start point, but then  
it is unable to construct any actual match to that, it'll just loop around  
with the same start point, and thus make no progress.  We need to force the  
start point to be advanced.  This is safe because the loop over "begin"  
points has already tried and failed to match starting at "close", so there  
is surely no need to try that again.  
  
This bug was introduced in commit e2bd904955e2221eddf01110b1f25002de2aaa83,  
wherein we allowed continued searching after we'd run out of match  
possibilities, but evidently failed to think hard enough about exactly  
where we needed to search next.  
  
Because of the way this code works, such a match failure is only possible  
in the presence of backrefs --- otherwise, shortest()'s judgment that a  
match is possible should always be correct.  That probably explains how  
come the bug has escaped detection for several years.  
  
The actual fix is a one-liner, but I took the trouble to add/improve some  
comments related to the loop logic.  
  
After fixing that, the submitted test case "()*\1" didn't loop anymore.  
But it reported failure, though it seems like it ought to match a  
zero-length string; both Tcl and Perl think it does.  That seems to be from  
overenthusiastic optimization on my part when I rewrote the iteration match  
logic in commit 173e29aa5deefd9e71c183583ba37805c8102a72: we can't just  
"declare victory" for a zero-length match without bothering to set match  
data for capturing parens inside the iterator node.  
  
Per fuzz testing by Greg Stark.  The first part of this is a bug in all  
supported branches, and the second part is a bug since 9.2 where the  
iteration rewrite happened.  
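
The loop-progress part of the fix can be illustrated with a toy matcher that, like shortest() in the presence of backrefs, can claim a zero-length match is possible at a position yet fail to construct one. Everything below is invented for illustration; the actual one-liner lives in cfindloop().

```c
#include <string.h>

/* Toy matcher: succeeds only at 'a'; elsewhere it reports a
 * zero-length "possibility" that cannot actually be built. */
static int
try_match(const char *s, int pos, int *len)
{
    if (s[pos] == 'a')
    {
        *len = 1;
        return 1;
    }
    *len = 0;
    return 0;
}

/* Post-fix shape of the search loop: when no actual match can be
 * constructed at this start point, the loop increment advances the
 * start point, guaranteeing progress instead of retrying the same
 * position forever. */
static int
search(const char *s)
{
    int         n = (int) strlen(s);
    int         pos;

    for (pos = 0; pos <= n; pos++)
    {
        int         len;

        if (try_match(s, pos, &len))
            return pos;
    }
    return -1;
}
```

The pre-fix bug corresponds to a loop that, on construction failure, re-entered with the same pos.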
  

Add some more query-cancel checks to regular expression matching.

  
commit   : 384ce1b7560ab5b26b251a663d6e70d66e654b10    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 2 Oct 2015 13:45:39 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 2 Oct 2015 13:45:39 -0400    


  
Commit 9662143f0c35d64d7042fbeaf879df8f0b54be32 added infrastructure to  
allow regular-expression operations to be terminated early in the event  
of SIGINT etc.  However, fuzz testing by Greg Stark disclosed that there  
are still cases where regex compilation could run for a long time without  
noticing a cancel request.  Specifically, the fixempties() phase never  
adds new states, only new arcs, so it doesn't hit the cancel check I'd put  
in newstate().  Add one to newarc() as well to cover that.  
  
Some experimentation of my own found that regex execution could also run  
for a long time despite a pending cancel.  We'd put a high-level cancel  
check into cdissect(), but there was none inside the core text-matching  
routines longest() and shortest().  Ordinarily those inner loops are very  
very fast ... but in the presence of lookahead constraints, not so much.  
As a compromise, stick a cancel check into the stateset cache-miss  
function, which is enough to guarantee a cancel check at least once per  
lookahead constraint test.  
  
Making this work required more attention to error handling throughout the  
regex executor.  Henry Spencer had apparently originally intended longest()  
and shortest() to be incapable of incurring errors while running, so  
neither they nor their subroutines had well-defined error reporting  
behaviors.  However, that was already broken by the lookahead constraint  
feature, since lacon() can surely suffer an out-of-memory failure ---  
which, in the code as it stood, might never be reported to the user at all,  
but just silently be treated as a non-match of the lookahead constraint.  
Normalize all that by inserting explicit error tests as needed.  I took the  
opportunity to add some more comments to the code, too.  
  
Back-patch to all supported branches, like the previous patch.  
  

Docs: add disclaimer about hazards of using regexps from untrusted sources.

  
commit   : 71d9523d77fe77de96c2948e441dc707c5a93f7b    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 2 Oct 2015 13:30:43 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 2 Oct 2015 13:30:43 -0400    


  
It's not terribly hard to devise regular expressions that take large  
amounts of time and/or memory to process.  Recent testing by Greg Stark has  
also shown that machines with small stack limits can be driven to stack  
overflow by suitably crafted regexps.  While we intend to fix these things  
as much as possible, it's probably impossible to eliminate slow-execution  
cases altogether.  In any case we don't want to treat such things as  
security issues.  The history of that code should already discourage  
prudent DBAs from allowing execution of regexp patterns coming from  
possibly-hostile sources, but it seems like a good idea to warn about the  
hazard explicitly.  
  
Currently, similar_escape() allows access to enough of the underlying  
regexp behavior that the warning has to apply to SIMILAR TO as well.  
We might be able to make it safer if we tightened things up to allow only  
SQL-mandated capabilities in SIMILAR TO; but that would be a subtly  
non-backwards-compatible change, so it requires discussion and probably  
could not be back-patched.  
  
Per discussion among pgsql-security list.  
  

Fix pg_dump to handle inherited NOT VALID check constraints correctly.

  
commit   : 7e1e1c9d18c4d2d0b6cc64c697f6af043aac7f36    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 1 Oct 2015 16:19:49 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 1 Oct 2015 16:19:49 -0400    


  
This case seems to have been overlooked when unvalidated check constraints  
were introduced, in 9.2.  The code would attempt to dump such constraints  
over again for each child table, even though adding them to the parent  
table is sufficient.  
  
In 9.2 and 9.3, also fix contrib/pg_upgrade/Makefile so that the "make  
clean" target fully cleans up after a failed test.  This evidently got  
dealt with at some point in 9.4, but it wasn't back-patched.  I ran into  
it while testing this fix ...  
  
Per bug #13656 from Ingmar Brouns.  
  

Fix documentation error in commit 8703059c6b55c427100e00a09f66534b6ccbfaa1.

  
commit   : c3e847902fce46c44cd733506646bf19eaeecb6e    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 1 Oct 2015 10:31:22 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 1 Oct 2015 10:31:22 -0400    


  
Etsuro Fujita spotted a thinko in the README commentary.  
  

Fix mention of htup.h in storage.sgml

  
commit   : 2d57d886fa0c30c28ed3545aa4e6968efd1acdf2    
  
author   : Fujii Masao <fujii@postgresql.org>    
date     : Thu, 1 Oct 2015 23:00:52 +0900    
  
committer: Fujii Masao <fujii@postgresql.org>    
date     : Thu, 1 Oct 2015 23:00:52 +0900    


  
Previously it was documented that the details on HeapTupleHeaderData  
struct could be found in htup.h. This is not correct because it's now  
defined in htup_details.h.  
  
Back-patch to 9.3 where the definition of HeapTupleHeaderData struct  
was moved from htup.h to htup_details.h.  
  
Michael Paquier  
  

Improve LISTEN startup time when there are many unread notifications.

  
commit   : aad86c518d7b3b97c14872258c02551b443536f0    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Wed, 30 Sep 2015 23:32:23 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Wed, 30 Sep 2015 23:32:23 -0400    


  
If some existing listener is far behind, incoming new listener sessions  
would start from that session's read pointer and then need to advance over  
many already-committed notification messages, which they have no interest  
in.  This was expensive in itself and also thrashed the pg_notify SLRU  
buffers a lot more than necessary.  We can improve matters considerably  
in typical scenarios, without much added cost, by starting from the  
furthest-ahead read pointer, not the furthest-behind one.  We do have to  
consider only sessions in our own database when doing this, which requires  
an extra field in the data structure, but that's a pretty small cost.  
  
Back-patch to 9.0 where the current LISTEN/NOTIFY logic was introduced.  
  
Matt Newell, slightly adjusted by me  
  

Fix plperl to handle non-ASCII error message texts correctly.

  
commit   : f60b2e2d48976b71a47505b24468617886ac1a31    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 29 Sep 2015 10:52:22 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 29 Sep 2015 10:52:22 -0400    


  
We were passing error message texts to croak() verbatim, which turns out  
not to work if the text contains non-ASCII characters; Perl mangles their  
encoding, as reported in bug #13638 from Michal Leinweber.  To fix, convert  
the text into a UTF8-encoded SV first.  
  
It's hard to test this without risking failures in different database  
encodings; but we can follow the lead of plpython, which is already  
assuming that no-break space (U+00A0) has an equivalent in all encodings  
we care about running the regression tests in (cf commit 2dfa15de5).  
  
Back-patch to 9.1.  The code is quite different in 9.0, and anyway it seems  
too risky to put something like this into 9.0's final minor release.  
  
Alex Hunsaker, with suggestions from Tim Bunce and Tom Lane  
  

Fix compiler warning about unused function in non-readline case.

  
commit   : 729d0056b42a512648e49daa727e5fe77d0cfeeb    
  
author   : Andrew Dunstan <andrew@dunslane.net>    
date     : Mon, 28 Sep 2015 18:29:20 -0400    
  
committer: Andrew Dunstan <andrew@dunslane.net>    
date     : Mon, 28 Sep 2015 18:29:20 -0400    


  
Backpatch to all live branches to keep the code in sync.  
  

Second try at fixing O(N^2) problem in foreign key references.

  
commit   : 1bcc9e60a7d44f6b824f4bd9c73a035af2572794    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 25 Sep 2015 13:16:31 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 25 Sep 2015 13:16:31 -0400    


  
This replaces ill-fated commit 5ddc72887a012f6a8b85707ef27d85c274faf53d,  
which was reverted because it broke active uses of FK cache entries.  In  
this patch, we still do nothing more to invalidatable cache entries than  
mark them as needing revalidation, so we won't break active uses.  To keep  
down the overhead of InvalidateConstraintCacheCallBack(), keep a list of  
just the currently-valid cache entries.  (The entries are large enough that  
some added space for list links doesn't seem like a big problem.)  This  
would still be O(N^2) when there are many valid entries, though, so when  
the list gets too long, just force the "sinval reset" behavior to remove  
everything from the list.  I set the threshold at 1000 entries, somewhat  
arbitrarily.  Possibly that could be fine-tuned later.  Another item for  
future study is whether it's worth adding reference counting so that we  
could safely remove invalidated entries.  As-is, problem cases are likely  
to end up with large and mostly invalid FK caches.  
  
Like the previous attempt, backpatch to 9.3.  
  
Jan Wieck and Tom Lane  
  

Further fix for psql's code for locale-aware formatting of numeric output.

  
commit   : b7d17eca577e01ef50a5f5d09e42f6b271cceee1    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 25 Sep 2015 12:20:46 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 25 Sep 2015 12:20:46 -0400    


  
(Third time's the charm, I hope.)  
  
Additional testing disclosed that this code could mangle already-localized  
output from the "money" datatype.  We can't very easily skip applying it  
to "money" values, because the logic is tied to column right-justification  
and people expect "money" output to be right-justified.  Short of  
decoupling that, we can fix it in what should be a safe enough way by  
testing to make sure the string doesn't contain any characters that would  
not be expected in plain numeric output.  
  

Further fix for psql's code for locale-aware formatting of numeric output.

  
commit   : 9c547c939c6a23766797573431b574e2842abf9c    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 25 Sep 2015 00:00:33 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 25 Sep 2015 00:00:33 -0400    


  
On closer inspection, those seemingly redundant atoi() calls were not so  
much inefficient as just plain wrong: the author of this code either had  
not read, or had not understood, the POSIX specification for localeconv().  
The grouping field is *not* a textual digit string but separate integers  
encoded as chars.  
  
We'll follow the existing code as well as the backend's cash.c in only  
honoring the first group width, but let's at least honor it correctly.  
  
This doesn't actually result in any behavioral change in any of the  
locales I have installed on my Linux box, which may explain why nobody's  
complained; grouping width 3 is close enough to universal that it's barely  
worth considering other cases.  Still, wrong is wrong, so back-patch.  
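
The POSIX detail the commit corrects is easy to get wrong: lconv->grouping is not a textual digit string like "3" but a sequence of raw char-sized integers, terminated by '\0' (repeat the last group) or CHAR_MAX (no further grouping). This illustrative helper, like psql and the backend's cash.c, honors only the first group width; the function name is an assumption, not psql's actual code.

```c
#include <locale.h>
#include <limits.h>

/* Return the first digit-grouping width for the current LC_NUMERIC
 * locale, or 0 if no grouping applies. */
static int
first_group_width(void)
{
    struct lconv *lc = localeconv();
    char        g = lc->grouping[0];

    if (g <= 0 || g == CHAR_MAX)
        return 0;               /* no digit grouping in this locale */
    return (int) g;             /* e.g. 3 in "1,000,000"-style locales */
}
```

Reading the field with atoi() would interpret the byte '\003' as an empty string, which is the misunderstanding being fixed.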
  

Fix psql's code for locale-aware formatting of numeric output.

  
commit   : 7e327ecd2b4e4ded1c4375a085bdf34e08885ee6    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 24 Sep 2015 23:01:04 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 24 Sep 2015 23:01:04 -0400    


  
This code did the wrong thing entirely for numbers with an exponent  
but no decimal point (e.g., '1e6'), as reported by Jeff Janes in  
bug #13636.  More generally, it made lots of unverified assumptions  
about what the input string could possibly look like.  Rearrange so  
that it only fools with leading digits that it's directly verified  
are there, and an immediately adjacent decimal point.  While at it,  
get rid of some useless inefficiencies, like converting the grouping  
count string to integer over and over (and over).  
  
This has been broken for a long time, so back-patch to all supported  
branches.  
  

Improve handling of collations in contrib/postgres_fdw.

  
commit   : b7dcb2dd4a1d0b89488b1275ccc865c684cf11b5    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 24 Sep 2015 12:47:30 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 24 Sep 2015 12:47:30 -0400    


  
If we have a local Var of say varchar type with default collation, and  
we apply a RelabelType to convert that to text with default collation, we  
don't want to consider that as creating an FDW_COLLATE_UNSAFE situation.  
It should be okay to compare that to a remote Var, so long as the remote  
Var determines the comparison collation.  (When we actually ship such an  
expression to the remote side, the local Var would become a Param with  
default collation, meaning the remote Var would in fact control the  
comparison collation, because non-default implicit collation overrides  
default implicit collation in parse_collate.c.)  To fix, be more precise  
about what FDW_COLLATE_NONE means: it applies either to a noncollatable  
data type or to a collatable type with default collation, if that collation  
can't be traced to a remote Var.  (When it can, FDW_COLLATE_SAFE is  
appropriate.)  We were essentially using that interpretation already at  
the Var/Const/Param level, but we weren't bubbling it up properly.  
  
An alternative fix would be to introduce a separate FDW_COLLATE_DEFAULT  
value to describe the second situation, but that would add more code  
without changing the actual behavior, so it didn't seem worthwhile.  
  
Also, since we're clarifying the rule to be that we care about whether  
operator/function input collations match, there seems no need to fail  
immediately upon seeing a Const/Param/non-foreign-Var with nondefault  
collation.  We only have to reject if it appears in a collation-sensitive  
context (for example, "var IS NOT NULL" is perfectly safe from a collation  
standpoint, whatever collation the var has).  So just set the state to  
UNSAFE rather than failing immediately.  
  
Per report from Jeevan Chalke.  This essentially corrects some sloppy  
thinking in commit ed3ddf918b59545583a4b374566bc1148e75f593, so back-patch  
to 9.3 where that logic appeared.  
  

Lower *_freeze_max_age minimum values.

  
commit   : fee2275ae9602de2ef773af9e1906b68290f4570    
  
author   : Andres Freund <andres@anarazel.de>    
date     : Thu, 24 Sep 2015 14:53:33 +0200    
  
committer: Andres Freund <andres@anarazel.de>    
date     : Thu, 24 Sep 2015 14:53:33 +0200    


  
The old minimum values are rather large, making it time consuming to  
test related behaviour. Additionally the current limits, especially for  
multixacts, can be problematic in space-constrained systems. 10000000  
multixacts can contain a lot of members.  
  
Since there's no good reason for the current limits, lower them a good  
bit. Setting them to 0 would be a bad idea, triggering endless vacuums,  
so still retain a limit.  
  
While at it fix autovacuum_multixact_freeze_max_age to refer to  
multixact.c instead of varsup.c.  
  
Reviewed-By: Robert Haas  
Discussion: CA+TgmoYmQPHcrc3GSs7vwvrbTkbcGD9Gik=OztbDGGrovkkEzQ@mail.gmail.com  
Backpatch: back to 9.0 (in parts)  
  

Docs: fix typo in to_char() example.

  
commit   : a86eab94304a9dee7a2c1504e0051f1959a37b6a    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 22 Sep 2015 10:40:25 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 22 Sep 2015 10:40:25 -0400    


  
Per bug #13631 from KOIZUMI Satoru.  
  

Fix possible internal overflow in numeric multiplication.

  
commit   : 8b75e489a48cc7ad61cd723180652e6fc1977911    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 21 Sep 2015 12:11:32 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 21 Sep 2015 12:11:32 -0400    


  
mul_var() postpones propagating carries until it risks overflow in its  
internal digit array.  However, the logic failed to account for the  
possibility of overflow in the carry propagation step, allowing wrong  
results to be generated in corner cases.  We must slightly reduce the  
when-to-propagate-carries threshold to avoid that.  
  
Discovered and fixed by Dean Rasheed, with small adjustments by me.  
  
This has been wrong since commit d72f6c75038d8d37e64a29a04b911f728044d83b,  
so back-patch to all supported branches.  
  

Restrict file mode creation mask during tmpfile().

  
commit   : ea218a2ba70d9f11f5a271728de26450a9b23d6c    
  
author   : Noah Misch <noah@leadboat.com>    
date     : Sun, 20 Sep 2015 20:42:27 -0400    
  
committer: Noah Misch <noah@leadboat.com>    
date     : Sun, 20 Sep 2015 20:42:27 -0400    


  
Per Coverity.  Back-patch to 9.0 (all supported versions).  
  
Michael Paquier, reviewed (in earlier versions) by Heikki Linnakangas.  
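
The hardening pattern named in the title can be sketched as a small wrapper: temporarily force a 077 file mode creation mask around tmpfile(), so that on platforms where the temp file's permissions honor the umask, it cannot be group- or world-readable. The wrapper name is illustrative, not the actual patched call site.

```c
#include <stdio.h>
#include <sys/stat.h>
#include <sys/types.h>

/* Open an anonymous temp file while a restrictive umask is in force,
 * then restore the caller's mask. */
static FILE *
restricted_tmpfile(void)
{
    mode_t      oldmask = umask(S_IRWXG | S_IRWXO);     /* 077 */
    FILE       *f = tmpfile();

    umask(oldmask);             /* restore the caller's mask */
    return f;
}
```

Saving and restoring the old mask keeps the change invisible to the rest of the process.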
  

Be more wary about partially-valid LOCALLOCK data in RemoveLocalLock().

  
commit   : 7e6e3bdd3c36d77b8a611dbf5e8c72164bc6108b    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sun, 20 Sep 2015 16:48:44 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sun, 20 Sep 2015 16:48:44 -0400    


  
RemoveLocalLock() must consider the possibility that LockAcquireExtended()  
failed to palloc the initial space for a locallock's lockOwners array.  
I had evidently meant to cope with this hazard when the code was originally  
written (commit 1785acebf2ed14fd66955e2d9a55d77a025f418d), but missed that  
the pfree needed to be protected with an if-test.  Just to make sure things  
are left in a clean state, reset numLockOwners as well.  
  
Per low-memory testing by Andreas Seltenreich.  Back-patch to all supported  
branches.  
  

Let compiler handle size calculation of bool types.

  
commit   : f6b701c0b41a55a167ae0ae6f21e068e57005039    
  
author   : Michael Meskes <meskes@postgresql.org>    
date     : Thu, 17 Sep 2015 15:41:04 +0200    
  
committer: Michael Meskes <meskes@postgresql.org>    
date     : Thu, 17 Sep 2015 15:41:04 +0200    

Click here for diff

  
Back in the day this did not work, but modern compilers should handle it themselves.  
  

Fix low-probability memory leak in regex execution.

  
commit   : b8431080851e64f5bf9efebc866fc02f0d57f56f    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 18 Sep 2015 13:55:17 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 18 Sep 2015 13:55:17 -0400    

Click here for diff

  
After an internal failure in shortest() or longest() while pinning down the  
exact location of a match, find() forgot to free the DFA structure before  
returning.  This is pretty unlikely to occur, since we just successfully  
ran the "search" variant of the DFA; but it could happen, and it would  
result in a session-lifespan memory leak since this code uses malloc()  
directly.  Problem seems to have been aboriginal in Spencer's library,  
so back-patch all the way.  
  
In passing, correct a thinko in a comment I added a while back about the  
meaning of the "ntree" field.  
  
I happened across these issues while comparing our code to Tcl's version  
of the library.  
  

Honour TEMP_CONFIG when testing pg_upgrade

  
commit   : 508435f9d33a895bcab194bbcef1ad0232216285    
  
author   : Andrew Dunstan <andrew@dunslane.net>    
date     : Thu, 17 Sep 2015 11:57:00 -0400    
  
committer: Andrew Dunstan <andrew@dunslane.net>    
date     : Thu, 17 Sep 2015 11:57:00 -0400    

Click here for diff

  
This setting contains extra configuration for the temp instance, as used  
in pg_regress' --temp-config flag.  
  
Backpatch to 9.2 where test.sh was introduced.  
  

Fix documentation of regular expression character-entry escapes.

  
commit   : 16e985b47dfed5b370186ca984d7f380a9c127a9    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Wed, 16 Sep 2015 14:50:12 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Wed, 16 Sep 2015 14:50:12 -0400    

Click here for diff

  
The docs claimed that \uhhhh would be interpreted as a Unicode value  
regardless of the database encoding, but it's never been implemented  
that way: \uhhhh and \xhhhh actually mean exactly the same thing, namely  
the character that pg_mb2wchar translates to 0xhhhh.  Moreover we were  
falsely dismissive of the usefulness of Unicode code points above FFFF.  
Fix that.  
  
It's been like this for ages, so back-patch to all supported branches.  
  

Revert “Fix an O(N^2) problem in foreign key references”.

  
commit   : be136b2430e59b3aafd08c6df1ae8289e1a0922b    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 15 Sep 2015 11:08:56 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 15 Sep 2015 11:08:56 -0400    

Click here for diff

  
Commit 5ddc72887a012f6a8b85707ef27d85c274faf53d does not actually work  
because it will happily blow away ri_constraint_cache entries that are  
in active use in outer call levels.  In any case, it's a very ugly,  
brute-force solution to the problem of limiting the cache size.  
Revert until it can be redesigned.  
  

pg_dump, pg_upgrade: allow postgres/template1 tablespace moves

  
commit   : ca445043e78ef7b2bbb911739f60b7a4726702b1    
  
author   : Bruce Momjian <bruce@momjian.us>    
date     : Fri, 11 Sep 2015 15:51:10 -0400    
  
committer: Bruce Momjian <bruce@momjian.us>    
date     : Fri, 11 Sep 2015 15:51:10 -0400    

Click here for diff

  
Modify pg_dump to restore postgres/template1 databases to non-default  
tablespaces by switching out of the database to be moved, then switching  
back.  
  
Also, to fix potential cases where the old and new tablespaces might not  
match, fix pg_upgrade to process new/old tablespaces separately in all  
cases.  
  
Report by Marti Raudsepp  
  
Patch by Marti Raudsepp, me  
  
Backpatch through 9.0  
  

Fix an O(N^2) problem in foreign key references.

  
commit   : a7516bbc49f0d55e3ca24f294ae3df2b91838720    
  
author   : Kevin Grittner <kgrittn@postgresql.org>    
date     : Fri, 11 Sep 2015 13:20:49 -0500    
  
committer: Kevin Grittner <kgrittn@postgresql.org>    
date     : Fri, 11 Sep 2015 13:20:49 -0500    

Click here for diff

  
Commit 45ba424f improved foreign key lookups during bulk updates  
when the FK value does not change.  When restoring a schema dump  
from a database with many (say 100,000) foreign keys, this cache  
would grow very big and every ALTER TABLE command was causing an  
InvalidateConstraintCacheCallBack(), which uses a sequential hash  
table scan.  This could cause a severe performance regression in  
restoring a schema dump (including during pg_upgrade).  
  
The patch uses a heuristic method of detecting when the hash table  
should be destroyed and recreated.  
InvalidateConstraintCacheCallBack() adds the current size of the  
hash table to a counter.  When that sum reaches 1,000,000, the hash  
table is flushed.  This fixes the regression without noticeable  
harm to the bulk update use case.  
  
Jan Wieck  
Backpatch to 9.3 where the performance regression was introduced.  
  

Correct description of PageHeaderData layout in documentation

  
commit   : 2176da70f84d89208eabe9094d20e9a8edd1b8a6    
  
author   : Fujii Masao <fujii@postgresql.org>    
date     : Fri, 11 Sep 2015 13:02:15 +0900    
  
committer: Fujii Masao <fujii@postgresql.org>    
date     : Fri, 11 Sep 2015 13:02:15 +0900    

Click here for diff

  
Back-patch to 9.3 where PageHeaderData layout was changed.  
  
Michael Paquier  
  

Fix setrefs.c comment properly.

  
commit   : d6e36d8603878c24af958400be2a44b81887e1b7    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 10 Sep 2015 10:25:58 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 10 Sep 2015 10:25:58 -0400    

Click here for diff

  
The "typo" alleged in commit 1e460d4bd was actually a comment that was  
correct when written, but I missed updating it in commit b5282aa89.  
Use a slightly less specific (and hopefully more future-proof) description  
of what is collected.  Back-patch to 9.2 where that commit appeared, and  
revert the comment to its then-entirely-correct state before that.  
  

Fix typo in setrefs.c

  
commit   : dc24b7fead6323f36608d57d73a28a5969d05954    
  
author   : Stephen Frost <sfrost@snowman.net>    
date     : Thu, 10 Sep 2015 09:22:33 -0400    
  
committer: Stephen Frost <sfrost@snowman.net>    
date     : Thu, 10 Sep 2015 09:22:33 -0400    

Click here for diff

  
We're adding OIDs, not TIDs, to invalItems.  
  
Pointed out by Etsuro Fujita.  
  
Back-patch to all supported branches.  
  

Fix minor bug in regexp makesearch() function.

  
commit   : d61ab72318e20354435560145981a56d67b084d7    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Wed, 9 Sep 2015 20:14:58 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Wed, 9 Sep 2015 20:14:58 -0400    

Click here for diff

  
The list-wrangling here was done wrong, allowing the same state to get  
put into the list twice.  The following loop then would clone it twice.  
The second clone would wind up with no inarcs, so that there was no  
observable misbehavior AFAICT, but a useless state in the finished NFA  
isn't an especially good thing.  
  

Remove files signaling a standby promotion request at postmaster startup

  
commit   : 47387732ba28e940e27f3f82c657d11bab628e1a    
  
author   : Fujii Masao <fujii@postgresql.org>    
date     : Wed, 9 Sep 2015 22:51:44 +0900    
  
committer: Fujii Masao <fujii@postgresql.org>    
date     : Wed, 9 Sep 2015 22:51:44 +0900    

Click here for diff

  
This commit makes the postmaster forcibly remove the files signaling  
a standby promotion request. Otherwise, the existence of those files  
can trigger a promotion too early, whether the user wants it or not.  
  
This removal is usually unnecessary because the files exist only for  
a few moments during a standby promotion. However, there is a race  
condition: if pg_ctl promote is executed and creates the files during  
a promotion, the files can stay around even after the server has been  
promoted to the new master. Then, if a new standby starts from a backup  
taken from that master, the files can be present at server startup and  
must be removed in order to avoid an unexpected promotion.  
  
Back-patch to 9.1 where promote signal file was introduced.  
  
Problem reported by Feike Steenbergen.  
Original patch by Michael Paquier, modified by me.  
  
Discussion: 20150528100705.4686.91426@wrigleys.postgresql.org  
  

Lock all relations referred to in updatable views

  
commit   : cb1b9b959cf068650999af55e160cdce88c4a5a4    
  
author   : Stephen Frost <sfrost@snowman.net>    
date     : Tue, 8 Sep 2015 17:02:59 -0400    
  
committer: Stephen Frost <sfrost@snowman.net>    
date     : Tue, 8 Sep 2015 17:02:59 -0400    

Click here for diff

  
Even views considered "simple" enough to be automatically updatable may  
have multiple relations involved (e.g., in a WHERE clause). We need to  
make sure to lock those relations when rewriting the query.  
  
Back-patch to 9.3 where updatable views were added.  
  
Pointed out by Andres, patch thanks to Dean Rasheed.  
  

Add gin_fuzzy_search_limit to postgresql.conf.sample.

  
commit   : 75232ad799b5061bc296928fe54ed28daf041766    
  
author   : Fujii Masao <fujii@postgresql.org>    
date     : Wed, 9 Sep 2015 02:25:50 +0900    
  
committer: Fujii Masao <fujii@postgresql.org>    
date     : Wed, 9 Sep 2015 02:25:50 +0900    

Click here for diff

  
This was forgotten in 8a3631f (commit that originally added the parameter)  
and 0ca9907 (commit that added the documentation later that year).  
  
Back-patch to all supported versions.  
  

Fix error message wording in previous sslinfo commit

  
commit   : 45829e00e6d12f62e118d9183f4ac3df8b6519aa    
  
author   : Alvaro Herrera <alvherre@alvh.no-ip.org>    
date     : Tue, 8 Sep 2015 11:10:20 -0300    
  
committer: Alvaro Herrera <alvherre@alvh.no-ip.org>    
date     : Tue, 8 Sep 2015 11:10:20 -0300    

Click here for diff

  
  

Add more sanity checks in contrib/sslinfo

  
commit   : 28120a06952369e12bc9405e5ffb9e15babc9795    
  
author   : Alvaro Herrera <alvherre@alvh.no-ip.org>    
date     : Mon, 7 Sep 2015 19:18:29 -0300    
  
committer: Alvaro Herrera <alvherre@alvh.no-ip.org>    
date     : Mon, 7 Sep 2015 19:18:29 -0300    

Click here for diff

  
We were missing a few return checks on OpenSSL calls.  Should be pretty  
harmless, since we haven't seen any user reports about problems, and  
this is not a high-traffic module anyway; still, a bug is a bug, so  
backpatch this all the way back to 9.0.  
  
Author: Michael Paquier, while reviewing another sslinfo patch  
  

Change type of DOW/DOY to UNITS

  
commit   : fde40e53f3b188f974f87aa3608210ccb1232fb8    
  
author   : Greg Stark <stark@mit.edu>    
date     : Mon, 7 Sep 2015 13:35:09 +0100    
  
committer: Greg Stark <stark@mit.edu>    
date     : Mon, 7 Sep 2015 13:35:09 +0100    

Click here for diff

  
  

Make GIN’s pending-list cleanup process interruptible

  
commit   : cd6f4248f8810cb186edc2c92f0834762e0f88b6    
  
author   : Teodor Sigaev <teodor@sigaev.ru>    
date     : Mon, 7 Sep 2015 17:18:10 +0300    
  
committer: Teodor Sigaev <teodor@sigaev.ru>    
date     : Mon, 7 Sep 2015 17:18:10 +0300    

Click here for diff

  
The cleanup process can be invoked by an ordinary insert/update and can  
take a long time. Add vacuum_delay_point() to make the process  
interruptible. Under vacuum, this call also throttles the vacuum process  
to decrease system load; when called from insert/update it does not  
throttle, which keeps latency low.  
  
Backpatch for all supported branches.  
  
Jeff Janes <jeff.janes@gmail.com>  
  

Update site address of Snowball project

  
commit   : 6ce9d81086032c737e8bf198ae1ad32420d3f843    
  
author   : Teodor Sigaev <teodor@sigaev.ru>    
date     : Mon, 7 Sep 2015 15:21:56 +0300    
  
committer: Teodor Sigaev <teodor@sigaev.ru>    
date     : Mon, 7 Sep 2015 15:21:56 +0300    

Click here for diff

  
  

Move DTK_ISODOW, DTK_DOW, and DTK_DOY to be type UNITS rather than RESERV. RESERV is meant for tokens like “now”, and having them in that category throws errors like these when used as an input date:

  
commit   : dd04d43bfda7d4475819822dc92fcecfd8627b7b    
  
author   : Greg Stark <stark@mit.edu>    
date     : Sun, 6 Sep 2015 02:04:37 +0100    
  
committer: Greg Stark <stark@mit.edu>    
date     : Sun, 6 Sep 2015 02:04:37 +0100    

Click here for diff

  
stark=# SELECT 'doy'::timestamptz;  
ERROR:  unexpected dtype 33 while parsing timestamptz "doy"  
LINE 1: SELECT 'doy'::timestamptz;  
               ^  
stark=# SELECT 'dow'::timestamptz;  
ERROR:  unexpected dtype 32 while parsing timestamptz "dow"  
LINE 1: SELECT 'dow'::timestamptz;  
               ^  
  
Found by LLVM's Libfuzzer  
  

  
commit   : e9bacfca442f828dec373ef4ed9bf6268ec38ed7    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sat, 5 Sep 2015 16:15:38 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sat, 5 Sep 2015 16:15:38 -0400    

Click here for diff

  
This has been broken since 9.3 (commit 82b1b213cad3a69c to be exact),  
which suggests that nobody is any longer using a Windows build system that  
doesn't provide a symlink emulation.  Still, it's wrong on its own terms,  
so repair.  
  
Yuriy Zhuravlev  
  

Fix misc typos.

  
commit   : 658ec626406342211837280a6eb0f836f7d14429    
  
author   : Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Sat, 5 Sep 2015 11:35:49 +0300    
  
committer: Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Sat, 5 Sep 2015 11:35:49 +0300    

Click here for diff

  
Oskari Saarenmaa. Backpatch to stable branches where applicable.  
  

Fix subtransaction cleanup after an outer-subtransaction portal fails.

  
commit   : 9e9b310d8bcac76bf19f99f3c90e7c08366c52f7    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 4 Sep 2015 13:36:50 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 4 Sep 2015 13:36:50 -0400    

Click here for diff

  
Formerly, we treated only portals created in the current subtransaction as  
having failed during subtransaction abort.  However, if the error occurred  
while running a portal created in an outer subtransaction (ie, a cursor  
declared before the last savepoint), that has to be considered broken too.  
  
To allow reliable detection of which ones those are, add a bookkeeping  
field to struct Portal that tracks the innermost subtransaction in which  
each portal has actually been executed.  (Without this, we'd end up  
failing portals containing functions that had called the subtransaction,  
thereby breaking plpgsql exception blocks completely.)  
  
In addition, when we fail an outer-subtransaction Portal, transfer its  
resources into the subtransaction's resource owner, so that they're  
released early in cleanup of the subxact.  This fixes a problem reported by  
Jim Nasby in which a function executed in an outer-subtransaction cursor  
could cause an Assert failure or crash by referencing a relation created  
within the inner subtransaction.  
  
The proximate cause of the Assert failure is that AtEOSubXact_RelationCache  
assumed it could blow away a relcache entry without first checking that the  
entry had zero refcount.  That was a bad idea on its own terms, so add such  
a check there, and to the similar coding in AtEOXact_RelationCache.  This  
provides an independent safety measure in case there are still ways to  
provoke the situation despite the Portal-level changes.  
  
This has been broken since subtransactions were invented, so back-patch  
to all supported branches.  
  
Tom Lane and Michael Paquier  
  

psql: print longtable as a possible \pset option

  
commit   : a7f8e306d2af5b497077a0dc1bfd2cb27aa232f4    
  
author   : Bruce Momjian <bruce@momjian.us>    
date     : Mon, 31 Aug 2015 12:24:16 -0400    
  
committer: Bruce Momjian <bruce@momjian.us>    
date     : Mon, 31 Aug 2015 12:24:16 -0400    

Click here for diff

  
For some reason this message was not updated when the longtable option  
was added.  
  
Backpatch through 9.3  
  

Fix sepgsql regression tests.

  
commit   : d66455bc9fff72593353c67c449fe2145dca7f19    
  
author   : Joe Conway <mail@joeconway.com>    
date     : Sun, 30 Aug 2015 11:11:08 -0700    
  
committer: Joe Conway <mail@joeconway.com>    
date     : Sun, 30 Aug 2015 11:11:08 -0700    

Click here for diff

  
The regression tests for sepgsql were broken by changes in the  
base distro as-shipped policies. Specifically, definition of  
unconfined_t in the system default policy was changed to bypass  
multi-category rules, which the regression test depended on.  
Fix that by defining a custom privileged domain  
(sepgsql_regtest_superuser_t) and using it instead of system's  
unconfined_t domain. The new sepgsql_regtest_superuser_t domain  
behaves almost like the current unconfined_t, but is restricted by  
multi-category policy, as the traditional unconfined_t was.  
  
The custom policy module is a self-defined domain, and so should not  
be affected by related future system policy changes. However, it still  
uses the unconfined_u:unconfined_r pair for selinux-user and role.  
Those definitions have not been changed for several years and seem  
less risky to rely on than the unconfined_t domain. Additionally, if  
we define custom user/role, they would need to be manually defined  
at the operating system level, adding more complexity to an already  
non-standard and complex regression test.  
  
Back-patch to 9.3. The regression tests will need more work before  
working correctly on 9.2. Starting with 9.2, sepgsql has had dependencies  
on libselinux versions that are only available on newer distros with  
the changed set of policies (e.g. RHEL 7.x). On 9.1 sepgsql works  
fine with the older distros with original policy set (e.g. RHEL 6.x),  
and on which the existing regression tests work fine. We might  
eventually want to change the 9.1 sepgsql regression tests to be more  
independent of the underlying OS policies; however, more work would be  
needed to make that happen, and it is not clear that it is worth the  
effort.  
  
Kohei KaiGai with review by Adam Brightwell and me, commentary by  
Stephen, Alvaro, Tom, Robert, and others.  
  

Fix s_lock.h PPC assembly code to be compatible with native AIX assembler.

  
commit   : c355df54e7c1e63c75bd51ff86084247ce1894ff    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sat, 29 Aug 2015 16:09:25 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sat, 29 Aug 2015 16:09:25 -0400    

Click here for diff

  
On recent AIX it's necessary to configure gcc to use the native assembler  
(because the GNU assembler hasn't been updated to handle AIX 6+).  This  
caused PG builds to fail with assembler syntax errors, because we'd try  
to compile s_lock.h's gcc asm fragment for PPC, and that assembly code  
relied on GNU-style local labels.  We can't substitute normal labels  
because it would fail in any file containing more than one inlined use of  
tas().  Fortunately, that code is stable enough, and the PPC ISA is simple  
enough, that it doesn't seem like too much of a maintenance burden to just  
hand-code the branch offsets, removing the need for any labels.  
  
Note that the AIX assembler only accepts "$" for the location counter  
pseudo-symbol.  The usual GNU convention is "."; but it appears that all  
versions of gas for PPC also accept "$", so in theory this patch will not  
break any other PPC platforms.  
  
This has been reported by a few people, but Steve Underwood gets the credit  
for being the first to pursue the problem far enough to understand why it  
was failing.  Thanks also to Noah Misch for additional testing.  
  

  
commit   : be49d7d69fb32cdf3b1aceb744398a8aca178dfa    
  
author   : Bruce Momjian <bruce@momjian.us>    
date     : Thu, 27 Aug 2015 13:43:10 -0400    
  
committer: Bruce Momjian <bruce@momjian.us>    
date     : Thu, 27 Aug 2015 13:43:10 -0400    

Click here for diff

  
This makes the parameter names match the documented prototype names.  
  
Report by Erwin Brandstetter  
  
Backpatch through 9.0  
  

Docs: be explicit about datatype matching for lead/lag functions.

  
commit   : 845b91cbae20d8e081cfe917f6ddd24f39661855    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 25 Aug 2015 19:12:17 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 25 Aug 2015 19:12:17 -0400    

Click here for diff

  
The default argument, if given, has to be of exactly the same datatype  
as the first argument; but this was not stated in so many words, and  
the error message you get about it might not lead your thought in the  
right direction.  Per bug #13587 from Robert McGehee.  
  
A quick scan says that these are the only two built-in functions with two  
anyelement arguments and no other polymorphic arguments.  There are plenty  
of cases of, eg, anyarray and anyelement, but those seem less likely to  
confuse.  For instance this doesn't seem terribly hard to figure out:  
"function array_remove(integer[], numeric) does not exist".  So I've  
contented myself with fixing these two cases.  
  

Avoid O(N^2) behavior when enlarging SPI tuple table in spi_printtup().

  
commit   : ea989244427f0c98836e1fffdba9fed27a20a79d    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 21 Aug 2015 20:32:11 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 21 Aug 2015 20:32:11 -0400    

Click here for diff

  
For no obvious reason, spi_printtup() was coded to enlarge the tuple  
pointer table by just 256 slots at a time, rather than doubling the size at  
each reallocation, as is our usual habit.  For very large SPI results, this  
makes for O(N^2) time spent in repalloc(), which of course soon comes to  
dominate the runtime.  Use the standard doubling approach instead.  
  
This is a longstanding performance bug, so back-patch to all active  
branches.  
  
Neil Conway  
  

Fix plpython crash when returning string representation of a RECORD result.

  
commit   : 59592efcfbc30f96e7bf25d075a436033d2b534c    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 21 Aug 2015 12:21:37 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 21 Aug 2015 12:21:37 -0400    

Click here for diff

  
PLyString_ToComposite() blithely overwrote proc->result.out.d, even though  
for a composite result type the other union variant proc->result.out.r is  
the one that should be valid.  This could result in a crash if out.r had  
in fact been filled in (proc->result.is_rowtype == 1) and then somebody  
later attempted to use that data; as per bug #13579 from Paweł Michalak.  
  
Just to add insult to injury, it didn't work for RECORD results anyway,  
because record_in() would refuse the case.  
  
Fix by doing the I/O function lookup in a local PLyTypeInfo variable,  
as we were doing already in PLyObject_ToComposite().  This is not a great  
technique because any fn_extra data allocated by the input function will  
be leaked permanently (thanks to using TopMemoryContext as fn_mcxt).  
But that's a pre-existing issue that is much less serious than a crash,  
so leave it to be fixed separately.  
  
This bug would be a potential security issue, except that plpython is  
only available to superusers and the crash requires coding the function  
in a way that didn't work before today's patches.  
  
Add regression test cases covering all the supported methods of converting  
composite results.  
  
Back-patch to 9.1 where the faulty coding was introduced.  
  

Allow record_in() and record_recv() to work for transient record types.

  
commit   : 461235bdabc5de2d9977b31ecf2a32a200d8e224    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 21 Aug 2015 11:19:33 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 21 Aug 2015 11:19:33 -0400    

Click here for diff

  
If we have the typmod that identifies a registered record type, there's no  
reason that record_in() should refuse to perform input conversion for it.  
Now, in direct SQL usage, record_in() will always be passed typmod = -1  
with type OID RECORDOID, because no typmodin exists for type RECORD, so the  
case can't arise.  However, some InputFunctionCall users such as PLs may be  
able to supply the right typmod, so we should allow this to support them.  
  
Note: the previous coding and comment here predate commit 59c016aa9f490b53.  
There has been no case since 8.1 in which the passed type OID wouldn't be  
valid; and if it weren't, this error message wouldn't be apropos anyway.  
Better to let lookup_rowtype_tupdesc complain about it.  
  
Back-patch to 9.1, as this is necessary for my upcoming plpython fix.  
I'm committing it separately just to make it a bit more visible in the  
commit history.  
  

Fix a few bogus statement type names in plpgsql error messages.

  
commit   : 8992e1acde19342f70210801b0442b6be68ef4f7    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 18 Aug 2015 19:22:38 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 18 Aug 2015 19:22:38 -0400    

Click here for diff

  
plpgsql's error location context messages ("PL/pgSQL function fn-name line  
line-no at stmt-type") would misreport a CONTINUE statement as being an  
EXIT, and misreport a MOVE statement as being a FETCH.  These are clear  
bugs that have been there a long time, so back-patch to all supported  
branches.  
  
In addition, in 9.5 and HEAD, change the description of EXECUTE from  
"EXECUTE statement" to just plain EXECUTE; there seems no good reason why  
this statement type should be described differently from others that have  
a well-defined head keyword.  And distinguish GET STACKED DIAGNOSTICS from  
plain GET DIAGNOSTICS.  These are a bit more of a judgment call, and also  
affect existing regression-test outputs, so I did not back-patch into  
stable branches.  
  
Pavel Stehule and Tom Lane  
  

Add docs about postgres_fdw’s setting of search_path and other GUCs.

  
commit   : 5f1ee4777e13477a00ec65f8479b700ee8b64644    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sat, 15 Aug 2015 14:31:04 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sat, 15 Aug 2015 14:31:04 -0400    

Click here for diff

  
This behavior wasn't documented, but it should be because it's user-visible  
in triggers and other functions executed on the remote server.  
Per question from Adam Fuchs.  
  
Back-patch to 9.3 where postgres_fdw was added.  
  

Improve documentation about MVCC-unsafe utility commands.

  
commit   : 7e451f7dc241b4947843f84b74298f253c898ed1    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sat, 15 Aug 2015 13:30:16 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sat, 15 Aug 2015 13:30:16 -0400    

Click here for diff

  
The table-rewriting forms of ALTER TABLE are MVCC-unsafe, in much the same  
way as TRUNCATE, because they replace all rows of the table with newly-made  
rows with a new xmin.  (Ideally, concurrent transactions with old snapshots  
would continue to see the old table contents, but the data is not there  
anymore --- and if it were there, it would be inconsistent with the table's  
updated rowtype, so there would be serious implementation problems to fix.)  
This was nowhere documented though, and the problem was only documented for  
TRUNCATE in a note in the TRUNCATE reference page.  Create a new "Caveats"  
section in the MVCC chapter that can be home to this and other limitations  
on serializable consistency.  
  
In passing, fix a mistaken statement that VACUUM and CLUSTER would reclaim  
space occupied by a dropped column.  They don't reconstruct existing tuples  
so they couldn't do that.  
  
Back-patch to all supported branches.  
  

Don’t use ‘bool’ as a struct member name in help_config.c.

  
commit   : b6c28ee2efd75d7028716ef4432974ab79334cf8    
  
author   : Andres Freund <andres@anarazel.de>    
date     : Wed, 12 Aug 2015 16:02:20 +0200    
  
committer: Andres Freund <andres@anarazel.de>    
date     : Wed, 12 Aug 2015 16:02:20 +0200    

Click here for diff

  
Doing so doesn't work if bool is a macro rather than a typedef.  
  
Although c.h spends some effort to support configurations where bool is  
a preexisting macro, help_config.c has existed this way since  
2003 (b700a6), and there have not been any reports of  
problems. Backpatch anyway since this is as riskless as it gets.  
  
Discussion: 20150812084351.GD8470@awork2.anarazel.de  
Backpatch: 9.0-master  
  

Improve regression test case to avoid depending on system catalog stats.

  
commit   : 96a7e3a2619cace4a9d710e308c2480c385adb60    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 13 Aug 2015 13:25:02 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 13 Aug 2015 13:25:02 -0400    

Click here for diff

  
In commit 95f4e59c32866716 I added a regression test case that examined  
the plan of a query on system catalogs.  That isn't a terribly great idea  
because the catalogs tend to change from version to version, or even  
within a version if someone makes an unrelated regression-test change that  
populates the catalogs a bit differently.  Usually I try to make planner  
test cases rely on test tables that have not changed since Berkeley days,  
but I got sloppy in this case because the submitted crasher example queried  
the catalogs and I didn't spend enough time on rewriting it.  But it was a  
problem waiting to happen, as I was rudely reminded when I tried to port  
that patch into Salesforce's Postgres variant :-(.  So spend a little more  
effort and rewrite the query to not use any system catalogs.  I verified  
that this version still provokes the Assert if 95f4e59c32866716's code fix  
is reverted.  
  
I also removed the EXPLAIN output from the test, as it turns out that the  
assertion occurs while considering a plan that isn't the one ultimately  
selected anyway; so there's no value in risking any cross-platform  
variation in that printout.  
  
Back-patch to 9.2, like the previous patch.  
  

Fix declaration of isarray variable.

  
commit   : 2f59008848791dc816a92f94d13b2e61ab089129    
  
author   : Michael Meskes <meskes@postgresql.org>    
date     : Thu, 13 Aug 2015 13:22:29 +0200    
  
committer: Michael Meskes <meskes@postgresql.org>    
date     : Thu, 13 Aug 2015 13:22:29 +0200    

Click here for diff

  
Found and fixed by Andres Freund.  
  

  
commit   : 7950657a92d108458508eb690e0aa6322640e5e5    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Wed, 12 Aug 2015 21:18:45 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Wed, 12 Aug 2015 21:18:45 -0400    

Click here for diff

  
One of the changes I made in commit 8703059c6b55c427 turns out not to have  
been such a good idea: we still need the exception in join_is_legal() that  
allows a join if both inputs already overlap the RHS of the special join  
we're checking.  Otherwise we can miss valid plans, and might indeed fail  
to find a plan at all, as in recent report from Andreas Seltenreich.  
  
That code was added way back in commit c17117649b9ae23d, but I failed to  
include a regression test case then; my bad.  Put it back with a better  
explanation, and a test this time.  The logic does end up a bit different  
than before though: I now believe it's appropriate to make this check  
first, thereby allowing such a case whether or not we'd consider the  
previous SJ(s) to commute with this one.  (Presumably, we already decided  
they did; but it was confusing to have this consideration in the middle  
of the code that was handling the other case.)  
  
Back-patch to all active branches, like the previous patch.  
  

This routine called ecpg_alloc to allocate memory, but did not check the returned pointer, which could be NULL if the underlying malloc call failed.

  
commit   : ed089d2fec8dde61d169f1f67a96e099425e77c7    
  
author   : Michael Meskes <meskes@postgresql.org>    
date     : Thu, 5 Feb 2015 15:12:34 +0100    
  
committer: Michael Meskes <meskes@postgresql.org>    
date     : Thu, 5 Feb 2015 15:12:34 +0100    

Click here for diff

  
Issue noted by Coverity, fixed by Michael Paquier <michael@otacoo.com>  
  

Fix some possible low-memory failures in regexp compilation.

  
commit   : a54875602a057f8ee0cf5e880bfe2056b5dd11f0    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Wed, 12 Aug 2015 00:48:11 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Wed, 12 Aug 2015 00:48:11 -0400    

Click here for diff

  
newnfa() failed to set the regex error state when malloc() fails.  
Several places in regcomp.c failed to check for an error after calling  
subre().  Each of these mistakes could lead to null-pointer-dereference  
crashes in memory-starved backends.  
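The pattern being fixed can be sketched outside the regex code; the struct and names below are illustrative stand-ins, not regcomp.c's actual API:

```c
#include <assert.h>
#include <stdlib.h>

/* POSIX-style error codes for regex compilation. */
#define REG_OKAY   0
#define REG_ESPACE 12   /* out of memory */

struct compile_state { int err; };

/* Every allocation must record an error code on failure instead of
 * leaving the compile state untouched -- the step newnfa() was missing. */
static void *cs_alloc(struct compile_state *cs, size_t n)
{
    void *p = malloc(n);
    if (p == NULL && cs->err == REG_OKAY)
        cs->err = REG_ESPACE;
    return p;
}
```

Callers then check the error state after each step rather than dereferencing a possibly-NULL result.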
  
Report and patch by Andreas Seltenreich.  Back-patch to all branches.  
  

Fix privilege dumping from servers too old to have that type of privilege.

  
commit   : 75d02d787804f643af09abcc87ade5c0b990b2b6    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 10 Aug 2015 20:10:16 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 10 Aug 2015 20:10:16 -0400    

Click here for diff

  
pg_dump produced fairly silly GRANT/REVOKE commands when dumping types from  
pre-9.2 servers, and when dumping functions or procedural languages from  
pre-7.3 servers.  Those server versions lack the typacl, proacl, and/or  
lanacl columns respectively, and pg_dump substituted default values that  
were in fact incorrect.  We ended up revoking all the owner's own  
privileges for the object while granting all privileges to PUBLIC.  
Of course the owner would then have those privileges again via PUBLIC, so  
long as she did not try to revoke PUBLIC's privileges; which may explain  
the lack of field reports.  Nonetheless this is pretty silly behavior.  
  
The stakes were raised by my recent patch to make pg_dump dump shell types,  
because 9.2 and up pg_dump would proceed to emit bogus GRANT/REVOKE  
commands for a shell type if dumping from a pre-9.2 server; and the server  
will not accept GRANT/REVOKE commands for a shell type.  (Perhaps it  
should, but that's a topic for another day.)  So the resulting dump script  
wouldn't load without errors.  
  
The right thing to do is to act as though these objects have default  
privileges (null ACL entries), which causes pg_dump to print no  
GRANT/REVOKE commands at all for them.  That fixes the silly results  
and also dodges the problem with shell types.  
  
In passing, modify getProcLangs() to be less creatively different about  
how to handle missing columns when dumping from older server versions.  
Every other data-acquisition function in pg_dump does that by substituting  
appropriate default values in the version-specific SQL commands, and I see  
no reason why this one should march to its own drummer.  Its use of  
"SELECT *" was likewise not conformant with anyplace else, not to mention  
it's not considered good SQL style for production queries.  
  
Back-patch to all supported versions.  Although 9.0 and 9.1 pg_dump don't  
have the issue with typacl, they are more likely than newer versions to be  
used to dump from ancient servers, so we ought to fix the proacl/lanacl  
issues all the way back.  
  

Accept alternate spellings of __sparcv7 and __sparcv8.

  
commit   : 9cd3a0fc5a86770f8cdded8dc3e36214f155bc6f    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 10 Aug 2015 17:34:51 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 10 Aug 2015 17:34:51 -0400    

Click here for diff

  
Apparently some versions of gcc prefer __sparc_v7__ and __sparc_v8__.  
Per report from Waldemar Brodkorb.  
  

  
commit   : f6d7a79f420df27146fea24cb7d6342f1fb4e1dc    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 10 Aug 2015 17:18:17 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 10 Aug 2015 17:18:17 -0400    

Click here for diff

  
Commit 85e5e222b1dd02f135a8c3bf387d0d6d88e669bd turns out not to have taken  
care of all cases of the partially-evaluatable-PlaceHolderVar problem found  
by Andreas Seltenreich's fuzz testing.  I had set it up to check for risky  
PHVs only in the event that we were making a star-schema-based exception to  
the param_source_rels join ordering heuristic.  However, it turns out that  
the problem can occur even in joins that satisfy the param_source_rels  
heuristic, in which case allow_star_schema_join() isn't consulted.  
Refactor so that we check for risky PHVs whenever the proposed join has  
any remaining parameterization.  
  
Back-patch to 9.2, like the previous patch (except for the regression test  
case, which only works back to 9.3 because it uses LATERAL).  
  
Note that this discovery implies that problems of this sort could've  
occurred in 9.2 and up even before the star-schema patch; though I've not  
tried to prove that experimentally.  
  

Fix typo in LDAP example

  
commit   : 8ff0eb8c6b9ea07707c9ffcf08f75b16c243defa    
  
author   : Magnus Hagander <magnus@hagander.net>    
date     : Sun, 9 Aug 2015 14:49:47 +0200    
  
committer: Magnus Hagander <magnus@hagander.net>    
date     : Sun, 9 Aug 2015 14:49:47 +0200    

Click here for diff

  
Reported by William Meitzen  
  

Further adjustments to PlaceHolderVar removal.

  
commit   : 868bfd1f3db3f79d5f86577e3171fdff40ce21fe    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 7 Aug 2015 14:13:39 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 7 Aug 2015 14:13:39 -0400    

Click here for diff

  
A new test case from Andreas Seltenreich showed that we were still a bit  
confused about removing PlaceHolderVars during join removal.  Specifically,  
remove_rel_from_query would remove a PHV that was used only underneath  
the removable join, even if the place where it's used was the join partner  
relation and not the join clause being deleted.  This would lead to a  
"too late to create a new PlaceHolderInfo" error later on.  We can defend  
against that by checking ph_eval_at to see if the PHV could possibly be  
getting used at some partner rel.  
  
Also improve some nearby LATERAL-related logic.  I decided that the check  
on ph_lateral needed to take precedence over the check on ph_needed, in  
case there's a lateral reference underneath the join being considered.  
(That may be impossible, but I'm not convinced of it, and it's easy enough  
to defend against the case.)  Also, I realized that remove_rel_from_query's  
logic for updating LateralJoinInfos is dead code, because we don't build  
those at all until after join removal.  
  
Back-patch to 9.3.  Previous versions didn't have the LATERAL issues, of  
course, and they also didn't attempt to remove PlaceHolderInfos during join  
removal.  (I'm starting to wonder if changing that was really such a great  
idea.)  
  

Fix old oversight in join removal logic.

  
commit   : de5edc660ae09e9a2785e52d2b539f2e0def1e63    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 6 Aug 2015 22:14:07 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 6 Aug 2015 22:14:07 -0400    

Click here for diff

  
Commit 9e7e29c75ad441450f9b8287bd51c13521641e3b introduced an Assert that  
join removal didn't reduce the eval_at set of any PlaceHolderVar to empty.  
At first glance it looks like join_is_removable ensures that's true --- but  
actually, the loop in join_is_removable skips PlaceHolderVars that are not  
referenced above the join due to be removed.  So, if we don't want any  
empty eval_at sets, the right thing to do is to delete any now-unreferenced  
PlaceHolderVars from the data structure entirely.  
  
Per fuzz testing by Andreas Seltenreich.  Back-patch to 9.3 where the  
aforesaid Assert was added.  
  

Fix eclass_useful_for_merging to give valid results for appendrel children.

  
commit   : 0d4913509b0becfd3db3fe9d0266b0ecdf0b3334    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 6 Aug 2015 20:14:37 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 6 Aug 2015 20:14:37 -0400    

Click here for diff

  
Formerly, this function would always return "true" for an appendrel child  
relation, because it would think that the appendrel parent was a potential  
join target for the child.  In principle that should only lead to some  
inefficiency in planning, but fuzz testing by Andreas Seltenreich disclosed  
that it could lead to "could not find pathkey item to sort" planner errors  
in odd corner cases.  Specifically, we would think that all columns of a  
child table's multicolumn index were interesting pathkeys, causing us to  
generate a MergeAppend path that sorts by all the columns.  However, if any  
of those columns weren't actually used above the level of the appendrel,  
they would not get added to that rel's targetlist, which would leave  
createplan.c unable to resolve the MergeAppend's sort keys against its  
targetlist.  
  
Backpatch to 9.3.  In older versions, columns of an appendrel get added  
to its targetlist even if they're not mentioned above the scan level,  
so that the failure doesn't occur.  It might be worth back-patching this  
fix to older versions anyway, but I'll refrain for the moment.  
  

Further fixes for degenerate outer join clauses.

  
commit   : 3e79144a848e9e65d248d252488a37678407d6b3    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 6 Aug 2015 15:35:27 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 6 Aug 2015 15:35:27 -0400    

Click here for diff

  
Further testing revealed that commit f69b4b9495269cc4 was still a few  
bricks shy of a load: minor tweaking of the previous test cases resulted  
in the same wrong-outer-join-order problem coming back.  After study  
I concluded that my previous changes in make_outerjoininfo() were just  
accidentally masking the problem, and should be reverted in favor of  
forcing syntactic join order whenever an upper outer join's predicate  
doesn't mention a lower outer join's LHS.  This still allows the  
chained-outer-joins style that is the normally optimizable case.  
  
I also tightened things up some more in join_is_legal().  It seems to me  
on review that what's really happening in the exception case where we  
ignore a mismatched special join is that we're allowing the proposed join  
to associate into the RHS of the outer join we're comparing it to.  As  
such, we should *always* insist that the proposed join be a left join,  
which eliminates a bunch of rather dubious argumentation.  The case where  
we weren't enforcing that was the one that was already known buggy anyway  
(it had a violatable Assert before the aforesaid commit) so it hardly  
deserves a lot of deference.  
  
Back-patch to all active branches, like the previous patch.  The added  
regression test case failed in all branches back to 9.1, and I think it's  
only an unrelated change in costing calculations that kept 9.0 from  
choosing a broken plan.  
  

Make real sure we don't reassociate joins into or out of SEMI/ANTI joins.

  
commit   : 9bc4d5927c573d5fa5081b37feab06f07b7b5b2c    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Wed, 5 Aug 2015 14:39:07 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Wed, 5 Aug 2015 14:39:07 -0400    

Click here for diff

  
Per the discussion in optimizer/README, it's unsafe to reassociate anything  
into or out of the RHS of a SEMI or ANTI join.  An example from Piotr  
Stefaniak showed that join_is_legal() wasn't sufficiently enforcing this  
rule, so lock it down a little harder.  
  
I couldn't find a reasonably simple example of the optimizer trying to  
do this, so no new regression test.  (Piotr's example involved the random  
search in GEQO accidentally trying an invalid case and triggering a sanity  
check way downstream in clause selectivity estimation, which did not seem  
like a sequence of events that would be useful to memorialize in a  
regression test as-is.)  
  
Back-patch to all active branches.  
  

Docs: add an explicit example about controlling overall greediness of REs.

  
commit   : ea1703eb490b34ec689c7b1d32c06634871ada36    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 4 Aug 2015 21:09:12 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 4 Aug 2015 21:09:12 -0400    

Click here for diff

  
Per discussion of bug #13538.  
  

Fix pg_dump to dump shell types.

  
commit   : 5da713f315fbb0669db60ea44183c791ebe7647f    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 4 Aug 2015 19:34:12 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 4 Aug 2015 19:34:12 -0400    

Click here for diff

  
Per discussion, it really ought to do this.  The original choice to  
exclude shell types was probably made in the dark ages before we made  
it harder to accidentally create shell types; but that was in 7.3.  
  
Also, cause the standard regression tests to leave a shell type behind,  
for convenience in testing the case in pg_dump and pg_upgrade.  
  
Back-patch to all supported branches.  
  

Fix bogus "out of memory" reports in tuplestore.c.

  
commit   : 8bd45a394958c3fd7400654439ef2a113043f8f5    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 4 Aug 2015 18:18:46 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 4 Aug 2015 18:18:46 -0400    

Click here for diff

  
The tuplesort/tuplestore memory management logic assumed that the chunk  
allocation overhead for its memtuples array could not increase when  
increasing the array size.  This is and always was true for tuplesort,  
but we (I, I think) blindly copied that logic into tuplestore.c without  
noticing that the assumption failed to hold for the much smaller array  
elements used by tuplestore.  Given rather small work_mem, this could  
result in an improper complaint about "unexpected out-of-memory situation",  
as reported by Brent DeSpain in bug #13530.  
  
The easiest way to fix this is just to increase tuplestore's initial  
array size so that the assumption holds.  Rather than relying on magic  
constants, though, let's export a #define from aset.c that represents  
the safe allocation threshold, and make tuplestore's calculation depend  
on that.  
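The allocator behavior behind this can be sketched as follows; the threshold value and names are illustrative stand-ins for the #define exported from aset.c, not the real constants:

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative stand-in for the aset.c threshold: requests at or below
 * it are rounded up to a power of two; larger ones are allocated exactly. */
#define ALLOC_SEPARATE_THRESHOLD 8192

/* Memory actually charged for a request of the given size. */
static size_t alloc_charge(size_t request)
{
    if (request > ALLOC_SEPARATE_THRESHOLD)
        return request;             /* big chunks: no rounding overhead */
    size_t chunk = 8;               /* minimum chunk size */
    while (chunk < request)
        chunk <<= 1;                /* round up to next power of two */
    return chunk;
}
```

Keeping the initial memtuples array above the threshold means every later size is charged exactly, so the memory bookkeeping can never be surprised by rounding overhead.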
  
Do the same in tuplesort.c to keep the logic looking parallel, even though  
tuplesort.c isn't actually at risk at present.  This will keep us from  
breaking it if we ever muck with the allocation parameters in aset.c.  
  
Back-patch to all supported versions.  The error message doesn't occur  
pre-9.3, not so much because the problem can't happen as because the  
pre-9.3 tuplestore code neglected to check for it.  (The chance of  
trouble is a great deal larger as of 9.3, though, due to changes in the  
array-size-increasing strategy.)  However, allowing LACKMEM() to become  
true unexpectedly could still result in less-than-desirable behavior,  
so let's patch it all the way back.  
  

  
commit   : 33afbdd0205c86aa37c501306df957058798562d    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 4 Aug 2015 14:55:32 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 4 Aug 2015 14:55:32 -0400    

Click here for diff

  
In commit b514a7460d9127ddda6598307272c701cbb133b7, I changed the planner  
so that it would allow nestloop paths to remain partially parameterized,  
ie the inner relation might need parameters from both the current outer  
relation and some upper-level outer relation.  That's fine so long as we're  
talking about distinct parameters; but the patch also allowed creation of  
nestloop paths for cases where the inner relation's parameter was a  
PlaceHolderVar whose eval_at set included the current outer relation and  
some upper-level one.  That does *not* work.  
  
In principle we could allow such a PlaceHolderVar to be evaluated at the  
lower join node using values passed down from the upper relation along with  
values from the join's own outer relation.  However, nodeNestloop.c only  
supports simple Vars not arbitrary expressions as nestloop parameters.  
createplan.c is also a few bricks shy of being able to handle such cases;  
it misplaces the PlaceHolderVar parameters in the plan tree, which is why  
the visible symptoms of this bug are "plan should not reference subplan's  
variable" and "failed to assign all NestLoopParams to plan nodes" planner  
errors.  
  
Adding the necessary complexity to make this work doesn't seem like it  
would be repaid in significantly better plans, because in cases where such  
a PHV exists, there is probably a corresponding join order constraint that  
would allow a good plan to be found without using the star-schema exception.  
Furthermore, adding complexity to nodeNestloop.c would create a run-time  
penalty even for plans where this whole consideration is irrelevant.  
So let's just reject such paths instead.  
  
Per fuzz testing by Andreas Seltenreich; the added regression test is based  
on his example query.  Back-patch to 9.2, like the previous patch.  
  

Cap wal_buffers to avoid a server crash when it's set very large.

  
commit   : 11ed4bab50ba6d80cc982cc4ae4675df705eda4b    
  
author   : Robert Haas <rhaas@postgresql.org>    
date     : Tue, 4 Aug 2015 12:58:54 -0400    
  
committer: Robert Haas <rhaas@postgresql.org>    
date     : Tue, 4 Aug 2015 12:58:54 -0400    

Click here for diff

  
It must be possible to multiply wal_buffers by XLOG_BLCKSZ without  
overflowing int, or calculations in StartupXLOG will go badly wrong  
and crash the server.  Avoid that by imposing a maximum value on  
wal_buffers.  This will be just under 2GB, assuming the usual value  
for XLOG_BLCKSZ.  
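The guard can be sketched like this, assuming the usual 8 kB XLOG_BLCKSZ; the function name is made up for illustration:

```c
#include <assert.h>
#include <limits.h>

#define XLOG_BLCKSZ 8192            /* usual build-time value */

/* Clamp the number of WAL buffers so nbuffers * XLOG_BLCKSZ cannot
 * overflow a signed int (i.e. total size stays just under 2 GB). */
static int cap_wal_buffers(int nbuffers)
{
    int max_buffers = INT_MAX / XLOG_BLCKSZ;
    return (nbuffers > max_buffers) ? max_buffers : nbuffers;
}
```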
  
Josh Berkus, per an analysis by Andrew Gierth.  
  

contrib/isn now needs a .gitignore file.

  
commit   : bd8f768926f7eed520ba30bc135018ec24e0dd91    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sun, 2 Aug 2015 23:57:32 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sun, 2 Aug 2015 23:57:32 -0400    

Click here for diff

  
Oversight in commit cb3384a0cb4cf900622b77865f60e31259923079.  
Back-patch to 9.1, like that commit.  
  

Fix output of ISBN-13 numbers beginning with 979.

  
commit   : 9d04a9824279cae8a6f2f1d5ced7a8ff75f2cae9    
  
author   : Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Sun, 2 Aug 2015 22:12:33 +0300    
  
committer: Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Sun, 2 Aug 2015 22:12:33 +0300    

Click here for diff

  
EANs beginning with 979 (but not 9790 - those are ISMNs) are accepted  
as ISBN numbers, but they cannot be represented in the old, 10-digit ISBN  
format. They must be output in the new 13-digit ISBN-13 format. We printed  
out an incorrect value for those.  
  
Also add a regression test, to test this and some other basic functionality  
of the module.  
  
Patch by Fabien Coelho. This fixes bug #13442, reported by B.Z. Backpatch  
to 9.1, where we started to recognize ISBN-13 numbers.  
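The conversion rule can be sketched as follows; this is a from-scratch illustration of the 978-vs-979 distinction, not contrib/isn's actual code:

```c
#include <assert.h>
#include <string.h>

/* ISBN-10 check digit: weighted sum i * d_i (i = 1..9) modulo 11;
 * a remainder of 10 prints as 'X'. */
static char isbn10_check_digit(const char *digits9)
{
    int sum = 0;
    for (int i = 0; i < 9; i++)
        sum += (i + 1) * (digits9[i] - '0');
    int r = sum % 11;
    return (r == 10) ? 'X' : (char) ('0' + r);
}

/* Only a 978-prefixed EAN has an equivalent 10-digit ISBN; anything
 * else (including 979) must be emitted in 13-digit ISBN-13 form. */
static void format_isbn(const char *ean13, char *out)
{
    if (strncmp(ean13, "978", 3) == 0)
    {
        memcpy(out, ean13 + 3, 9);      /* drop prefix and check digit */
        out[9] = isbn10_check_digit(out);
        out[10] = '\0';
    }
    else
        strcpy(out, ean13);             /* keep the full ISBN-13 */
}
```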
  

Fix incorrect order of lock file removal and failure to close() sockets.

  
commit   : fad824a88cfc5973eacc8df061879665bc86e3b3    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sun, 2 Aug 2015 14:54:44 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sun, 2 Aug 2015 14:54:44 -0400    

Click here for diff

  
Commit c9b0cbe98bd783e24a8c4d8d8ac472a494b81292 accidentally broke the  
order of operations during postmaster shutdown: it resulted in removing  
the per-socket lockfiles after, not before, postmaster.pid.  This creates  
a race-condition hazard for a new postmaster that's started immediately  
after observing that postmaster.pid has disappeared; if it sees the  
socket lockfile still present, it will quite properly refuse to start.  
This error appears to be the explanation for at least some of the  
intermittent buildfarm failures we've seen in the pg_upgrade test.  
  
Another problem, which has been there all along, is that the postmaster  
has never bothered to close() its listen sockets, but has just allowed them  
to close at process death.  This creates a different race condition for an  
incoming postmaster: it might be unable to bind to the desired listen  
address because the old postmaster is still incumbent.  This might explain  
some odd failures we've seen in the past, too.  (Note: this is not related  
to the fact that individual backends don't close their client communication  
sockets.  That behavior is intentional and is not changed by this patch.)  
  
Fix by adding an on_proc_exit function that closes the postmaster's ports  
explicitly, and (in 9.3 and up) reshuffling the responsibility for where  
to unlink the Unix socket files.  Lock file unlinking can stay where it  
is, but teach it to unlink the lock files in reverse order of creation.  
  

Fix some planner issues with degenerate outer join clauses.

  
commit   : 1044541dccfed0da1f27b2b8929e9524a1f577b4    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sat, 1 Aug 2015 20:57:41 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sat, 1 Aug 2015 20:57:41 -0400    

Click here for diff

  
An outer join clause that didn't actually reference the RHS (perhaps only  
after constant-folding) could confuse the join order enforcement logic,  
leading to wrong query results.  Also, nested occurrences of such things  
could trigger an Assertion that on reflection seems incorrect.  
  
Per fuzz testing by Andreas Seltenreich.  The practical use of such cases  
seems thin enough that it's not too surprising we've not heard field  
reports about it.  
  
This has been broken for a long time, so back-patch to all active branches.  
  

  
commit   : a4df781c9037584d66a6f04b0b010a5bab0b509b    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 31 Jul 2015 19:26:33 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 31 Jul 2015 19:26:33 -0400    

Click here for diff

  
In many cases, we can implement a semijoin as a plain innerjoin by first  
passing the righthand-side relation through a unique-ification step.  
However, one of the cases where this does NOT work is where the RHS has  
a LATERAL reference to the LHS; that makes the RHS dependent on the LHS  
so that unique-ification is meaningless.  joinpath.c understood this,  
and so would not generate any join paths of this kind ... but join_is_legal  
neglected to check for the case, so it would think that we could do it.  
The upshot would be a "could not devise a query plan for the given query"  
failure once we had failed to generate any join paths at all for the bogus  
join pair.  
  
Back-patch to 9.3 where LATERAL was added.  
  

Avoid some zero-divide hazards in the planner.

  
commit   : caae9f764699e44e2e95394b90f48d4429b8ea3f    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 30 Jul 2015 12:11:23 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 30 Jul 2015 12:11:23 -0400    

Click here for diff

  
Although I think on all modern machines floating division by zero  
results in Infinity not SIGFPE, we still don't want infinities  
running around in the planner's costing estimates; too much risk  
of that leading to insane behavior.  
  
grouping_planner() failed to consider the possibility that final_rel  
might be known dummy and hence have zero rowcount.  (I wonder if it  
would be better to set a rows estimate of 1 for dummy relations?  
But at least in the back branches, changing this convention seems  
like a bad idea, so I'll leave that for another day.)  
  
Make certain that get_variable_numdistinct() produces a nonzero result.  
The case that can be shown to be broken is with stadistinct < 0.0 and  
small ntuples; we did not prevent the result from rounding to zero.  
For good luck I applied clamp_row_est() to all the nonconstant return  
values.  
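The clamping idea reads roughly like this; it is a simplified sketch modeled on, but not identical to, the planner's clamp_row_est():

```c
#include <assert.h>

/* Never let a row estimate reach zero (or go negative), so later
 * divisions by it stay finite; keep larger estimates integral. */
static double clamp_row_est(double nrows)
{
    if (nrows <= 1.0)
        return 1.0;
    /* round to nearest integer; the real code uses rint() */
    return (double) (long long) (nrows + 0.5);
}
```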
  
In ExecChooseHashTableSize(), Assert that we compute positive nbuckets  
and nbatch.  I know of no reason to think this isn't the case, but it  
seems like a good safety check.  
  
Per reports from Piotr Stefaniak.  Back-patch to all active branches.  
  

Blacklist xlc 32-bit inlining.

  
commit   : 23e7ee9621be120a642e68f854c08a000310e87e    
  
author   : Noah Misch <noah@leadboat.com>    
date     : Wed, 29 Jul 2015 22:49:48 -0400    
  
committer: Noah Misch <noah@leadboat.com>    
date     : Wed, 29 Jul 2015 22:49:48 -0400    

Click here for diff

  
Per a suggestion from Tom Lane.  Back-patch to 9.0 (all supported  
versions).  While only 9.4 and up have code known to elicit this  
compiler bug, we were disabling inlining by accident until commit  
43d89a23d59c487bc9258fad7a6187864cb8c0c0.  
  

Update our documentation concerning where to create data directories.

  
commit   : 7bdf6d0440697619cea61841cd1432337b151b47    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 28 Jul 2015 18:42:59 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 28 Jul 2015 18:42:59 -0400    

Click here for diff

  
Although initdb has long discouraged use of a filesystem mount-point  
directory as a PG data directory, this point was covered nowhere in the  
user-facing documentation.  Also, with the popularity of pg_upgrade,  
we really need to recommend that the PG user own not only the data  
directory but its parent directory too.  (Without a writable parent  
directory, operations such as "mv data data.old" fail immediately.  
pg_upgrade itself doesn't do that, but wrapper scripts for it often do.)  
  
Hence, adjust the "Creating a Database Cluster" section to address  
these points.  I also took the liberty of wordsmithing the discussion  
of NFS a bit.  
  
These considerations aren't by any means new, so back-patch to all  
supported branches.  
  

Reduce chatter from signaling of autovacuum workers.

  
commit   : 47ee27521ac0a59e14b1f0f86d8cef1c7fa0ddff    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 28 Jul 2015 17:34:00 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 28 Jul 2015 17:34:00 -0400    

Click here for diff

  
Don't print a WARNING if we get ESRCH from a kill() that's attempting  
to cancel an autovacuum worker.  It's possible (and has been seen in the  
buildfarm) that the worker is already gone by the time we are able to  
execute the kill, in which case the failure is harmless.  About the only  
plausible reason for reporting such cases would be to help debug corrupted  
lock table contents, but this is hardly likely to be the most important  
symptom if that happens.  Moreover issuing a WARNING might scare users  
more than is warranted.  
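The tolerant-signal pattern can be sketched like this; the helper name is made up for illustration:

```c
#include <assert.h>
#include <errno.h>
#include <signal.h>
#include <stdbool.h>
#include <sys/types.h>
#include <unistd.h>

/* Signal a worker process, treating "no such process" as success,
 * since the worker may legitimately have exited already. */
static bool signal_worker(pid_t pid, int sig)
{
    if (kill(pid, sig) == 0)
        return true;
    if (errno == ESRCH)
        return true;        /* already gone: harmless, don't warn */
    return false;           /* EPERM etc. are real failures */
}
```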
  
Also, since sending a signal to an autovacuum worker is now entirely a  
routine thing, and the worker will log the query cancel on its end anyway,  
reduce the message saying we're doing that from LOG to DEBUG1 level.  
  
Very minor cosmetic cleanup as well.  
  
Since the main practical reason for doing this is to avoid unnecessary  
buildfarm failures, back-patch to all active branches.  
  

Disable ssl renegotiation by default.

  
commit   : 48d23c72d30a05cae84a8a1b18368014c711f1fe    
  
author   : Andres Freund <andres@anarazel.de>    
date     : Tue, 28 Jul 2015 21:39:40 +0200    
  
committer: Andres Freund <andres@anarazel.de>    
date     : Tue, 28 Jul 2015 21:39:40 +0200    

Click here for diff

  
While postgres' use of SSL renegotiation is a good idea in theory, it  
turned out to not work well in practice. The specification and openssl's  
implementation of it have led to several security issues. Postgres' use  
of renegotiation also had its share of bugs.  
  
Additionally OpenSSL has a bunch of bugs around renegotiation, reported  
and open for years, that regularly lead to connections breaking with  
obscure error messages. We tried increasingly complex workarounds to get  
around these bugs, but we didn't find anything complete.  
  
Since these connection breakages often lead to hard to debug problems,  
e.g. spuriously failing base backups and significant latency spikes when  
synchronous replication is used, we have decided to change the default  
setting for ssl renegotiation to 0 (disabled) in the released  
back branches, and remove it entirely in 9.5 and master.  
  
Author: Michael Paquier, with changes by me  
Discussion: 20150624144148.GQ4797@alap3.anarazel.de  
Backpatch: 9.0-9.4; 9.5 and master get a different patch  
  

Remove an unsafe Assert, and explain join_clause_is_movable_into() better.

  
commit   : 03d7f3ba58590d5a6eed89604cc3e4912f458df4    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 28 Jul 2015 13:20:40 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 28 Jul 2015 13:20:40 -0400    

Click here for diff

  
join_clause_is_movable_into() is approximate, in the sense that it might  
sometimes return "false" when actually it would be valid to push the given  
join clause down to the specified level.  This is okay ... but there was  
an Assert in get_joinrel_parampathinfo() that's only safe if the answers  
are always exact.  Comment out the Assert, and add a bunch of commentary  
to clarify what's going on.  
  
Per fuzz testing by Andreas Seltenreich.  The added regression test is  
a pretty silly query, but it's based on his crasher example.  
  
Back-patch to 9.2 where the faulty logic was introduced.  
  

Don't assume that PageIsEmpty() returns true on an all-zeros page.

  
commit   : 588f50f851cdcb3da0755c3ad17a3427f1e57914    
  
author   : Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Mon, 27 Jul 2015 18:54:09 +0300    
  
committer: Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Mon, 27 Jul 2015 18:54:09 +0300    

Click here for diff

  
It does currently, and I don't see us changing that any time soon, but we  
don't make that assumption anywhere else.  
  
Per Tom Lane's suggestion. Backpatch to 9.2, like the previous patch that  
added this assumption.  
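The distinction can be modeled with a toy two-field header; the real PageHeaderData has more fields, and these helpers are illustrative, not bufpage.h's macros:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

/* Toy page header, just enough to show the two predicates. */
typedef struct
{
    uint16_t pd_lower;      /* offset to start of free space */
    uint16_t pd_upper;      /* offset to end of free space */
} PageHeaderModel;

/* "New" (all-zeros, never initialized): pd_upper is still zero. */
static bool page_is_new(const PageHeaderModel *p)
{
    return p->pd_upper == 0;
}

/* "Empty" (initialized, but holding no line pointers). */
static bool page_is_empty(const PageHeaderModel *p)
{
    return p->pd_lower <= sizeof(PageHeaderModel);
}
```

An all-zeros page happens to satisfy both predicates today, which is exactly the coincidence this commit stops relying on.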
  

Reuse all-zero pages in GIN.

  
commit   : bafe3b00730184e7f9860cbab545f8b21c1dc70b    
  
author   : Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Mon, 27 Jul 2015 12:30:26 +0300    
  
committer: Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Mon, 27 Jul 2015 12:30:26 +0300    

Click here for diff

  
In GIN, all-zeros pages would be leaked forever, and never reused. Just  
add them to the FSM during vacuum, and they will be reinitialized when grabbed  
from the FSM. On master and 9.5, attempting to access the page's opaque  
struct also caused an assertion failure, although that was otherwise  
harmless.  
  
Reported by Jeff Janes. Backpatch to all supported versions.  
  

Fix handling of all-zero pages in SP-GiST vacuum.

  
commit   : 863af3a3797a3a8ca19c9bdd57c5f5b538e04e22    
  
author   : Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Mon, 27 Jul 2015 12:28:21 +0300    
  
committer: Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Mon, 27 Jul 2015 12:28:21 +0300    

Click here for diff

  
SP-GiST initialized an all-zeros page during vacuum, but that was not  
WAL-logged, which is not safe. You might get a torn page write when it gets  
flushed to disk, and end up with a half-initialized index page. To fix,  
leave it in the all-zeros state, and add it to the FSM. It will be  
initialized when reused. Also don't set the page-deleted flag when recycling  
an empty page. That was also not WAL-logged, and a torn write of that would  
cause the page to have an invalid checksum.  
  
Backpatch to 9.2, where SP-GiST indexes were added.  
  

Make entirely-dummy appendrels get marked as such in set_append_rel_size.

  
commit   : 8352b141f7f5c0bcdfbc32555f65a5ba0fd390d1    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sun, 26 Jul 2015 16:19:08 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sun, 26 Jul 2015 16:19:08 -0400    

Click here for diff

  
The planner generally expects that the estimated rowcount of any relation  
is at least one row, *unless* it has been proven empty by constraint  
exclusion or similar mechanisms, which is marked by installing a dummy path  
as the rel's cheapest path (cf. IS_DUMMY_REL).  When I split up  
allpaths.c's processing of base rels into separate set_base_rel_sizes and  
set_base_rel_pathlists steps, the intention was that dummy rels would get  
marked as such during the "set size" step; this is what justifies an Assert  
in indxpath.c's get_loop_count that other relations should either be dummy  
or have positive rowcount.  Unfortunately I didn't get that quite right  
for append relations: if all the child rels have been proven empty then  
set_append_rel_size would come up with a rowcount of zero, which is  
correct, but it didn't then do set_dummy_rel_pathlist.  (We would have  
ended up with the right state after set_append_rel_pathlist, but that's  
too late, if we generate indexpaths for some other rel first.)  
  
In addition to fixing the actual bug, I installed an Assert enforcing this  
convention in set_rel_size; that then allows simplification of a couple  
of now-redundant tests for zero rowcount in set_append_rel_size.  
  
Also, to cover the possibility that third-party FDWs have been careless  
about not returning a zero rowcount estimate, apply clamp_row_est to  
whatever an FDW comes up with as the rows estimate.  
  
Per report from Andreas Seltenreich.  Back-patch to 9.2.  Earlier branches  
did not have the separation between set_base_rel_sizes and  
set_base_rel_pathlists steps, so there was no intermediate state where an  
appendrel would have had inconsistent rowcount and pathlist.  It's possible  
that adding the Assert to set_rel_size would be a good idea in older  
branches too; but since they're not under development any more, it's likely  
not worth the trouble.  
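The clamping described above can be sketched roughly as follows; this is an illustrative Python rendering of the idea behind clamp_row_est, not the actual C implementation:

```python
def clamp_row_est(rows):
    """Rough sketch: round a row estimate and force it to at least one,
    so downstream planner code never sees a zero rowcount from a
    careless FDW (only proven-dummy rels may legitimately be empty)."""
    return max(1.0, float(round(rows)))

assert clamp_row_est(0.0) == 1.0      # zero estimate clamped up
assert clamp_row_est(42.4) == 42.0    # ordinary estimates just rounded
```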
  

Restore use of zlib default compression in pg_dump directory mode.

  
commit   : 84bf6ece1fffaf6d56e7de52c8dbc2d33d433618    
  
author   : Andrew Dunstan <andrew@dunslane.net>    
date     : Sat, 25 Jul 2015 17:14:36 -0400    
  
committer: Andrew Dunstan <andrew@dunslane.net>    
date     : Sat, 25 Jul 2015 17:14:36 -0400    

Click here for diff

  
This was broken by commit 0e7e355f27302b62af3e1add93853ccd45678443 and  
friends, which ignored the fact that gzopen() treats the "-" in a "-1"  
mode argument as an invalid character (which it skips) and the "1" as a  
flag for compression level 1. Now, when this value is encountered, no  
compression level flag is passed to gzopen, leaving it to use the zlib  
default.  
  
Also, enforce the documented allowed range for pg_dump's -Z option,  
namely 0 .. 9, and remove some consequently dead code from  
pg_backup_tar.c.  
  
Problem reported by Marc Mamin.  
  
Backpatch to 9.1, like the patch that introduced the bug.  
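The mode-string pitfall can be sketched with a small hypothetical helper (the function name and range check are illustrative, not pg_dump's actual code): zlib reads an optional single digit 0..9 from the mode string as the compression level, so a "-1" sentinel must result in no digit at all.

```python
def gz_mode(base, level=None):
    """Build a gzopen()-style mode string (hypothetical helper).

    zlib's gzopen() reads an optional digit 0-9 as the compression
    level; in a string like "wb-1" the "-" is an unrecognized character
    that gzopen() skips, and the "1" then selects level 1.  So when the
    caller wants the zlib default, append no digit at all."""
    if level is None:          # use zlib's default level
        return base
    if not 0 <= level <= 9:    # enforce the documented -Z range
        raise ValueError("compression level must be in 0..9")
    return base + str(level)

assert gz_mode("wb", None) == "wb"   # default: no level flag at all
assert gz_mode("wb", 6) == "wb6"     # explicit level appended
```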
  

Fix off-by-one error in calculating subtrans/multixact truncation point.

  
commit   : 6ae9a021893b5cd24e1d7e8a1a07854d33af9990    
  
author   : Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Thu, 23 Jul 2015 01:30:11 +0300    
  
committer: Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Thu, 23 Jul 2015 01:30:11 +0300    

Click here for diff

  
If there were no subtransactions (or multixacts) active, we would calculate  
the oldest XID == next XID. That's correct, but if the next XID happens to be  
on the next pg_subtrans (pg_multixact) page, the page does not exist yet,  
and SimpleLruTruncate will produce an "apparent wraparound" warning. The  
warning is harmless in this case, but looks very alarming to users.  
  
Backpatch to all supported versions. Patch and analysis by Thomas Munro.  
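The off-by-one can be shown with toy page arithmetic (the constant and function names here are illustrative, not the real slru.c values): when oldest XID == next XID and the next XID is the first slot on a not-yet-created page, the truncation target is one page past the newest page on disk.

```python
XIDS_PER_PAGE = 2048              # toy geometry; the real value lives in subtrans.c

def truncate_target_page(oldest_xid):
    return oldest_xid // XIDS_PER_PAGE

next_xid = 3 * XIDS_PER_PAGE      # first xid on a brand-new page
oldest_xid = next_xid             # no subtransactions active
latest_existing_page = (next_xid - 1) // XIDS_PER_PAGE

# The target page is one past the newest page on disk, which
# SimpleLruTruncate misreads as an "apparent wraparound".
assert truncate_target_page(oldest_xid) == latest_existing_page + 1
```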
  

Fix (some of) pltcl memory usage

  
commit   : b2efbb71dfb48108d142e954b9e608253ceee91c    
  
author   : Alvaro Herrera <alvherre@alvh.no-ip.org>    
date     : Mon, 20 Jul 2015 14:18:08 +0200    
  
committer: Alvaro Herrera <alvherre@alvh.no-ip.org>    
date     : Mon, 20 Jul 2015 14:18:08 +0200    

Click here for diff

  
As reported by Bill Parker, PL/Tcl did not validate some malloc() calls  
against NULL return.  Fix by using palloc() in a new long-lived memory  
context instead.  This allows us to simplify error handling too, by  
simply deleting the memory context instead of doing retail frees.  
  
There's still a lot that could be done to improve PL/Tcl's memory  
handling ...  
  
This is pretty ancient, so backpatch all the way back.  
  
Author: Michael Paquier and Álvaro Herrera  
Discussion: https://www.postgresql.org/message-id/CAFrbyQwyLDYXfBOhPfoBGqnvuZO_Y90YgqFM11T2jvnxjLFmqw@mail.gmail.com  
  

Make WaitLatchOrSocket’s timeout detection more robust.

  
commit   : 498a29dc32f34e6936d456e95a0beb44a9c3b637    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sat, 18 Jul 2015 11:47:13 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sat, 18 Jul 2015 11:47:13 -0400    

Click here for diff

  
In the previous coding, timeout would be noticed and reported only when  
poll() or select() returned zero (or the equivalent behavior on Windows).  
Ordinarily that should work well enough, but it seems conceivable that we  
could get into a state where poll() always returns a nonzero value --- for  
example, if it is noticing a condition on one of the file descriptors that  
we do not think is reason to exit the loop.  If that happened, we'd be in a  
busy-wait loop that would fail to terminate even when the timeout expires.  
  
We can make this more robust at essentially no cost, by deciding to exit  
of our own accord if we compute a zero or negative time-remaining-to-wait.  
Previously the code noted this but just clamped the time-remaining to zero,  
expecting that we'd detect timeout on the next loop iteration.  
  
Back-patch to 9.2.  While 9.1 had a version of WaitLatchOrSocket, it was  
primitive compared to later versions, and did not guarantee reliable  
detection of timeouts anyway.  (Essentially, this is a refinement of  
commit 3e7fdcffd6f77187, which was back-patched only as far as 9.2.)  
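The wait-loop discipline described above can be sketched in Python (a rough analogy, not the C implementation; poll_once stands in for the platform poll/select call):

```python
import time

def wait_with_timeout(poll_once, timeout_s):
    """Return True if the awaited event arrived, False on timeout.

    Rather than trusting poll_once() to eventually report a timeout,
    recompute the remaining time each iteration and exit of our own
    accord when it reaches zero -- even if poll_once() keeps returning
    early because of conditions we don't care about."""
    deadline = time.monotonic() + timeout_s
    while True:
        remaining = deadline - time.monotonic()
        if remaining <= 0:
            return False          # timeout decided by our own clock
        if poll_once(remaining):  # may wake early for unrelated events
            return True

# A poller that never reports our event cannot cause a hang:
assert wait_with_timeout(lambda t: False, 0.01) is False
```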
  

AIX: Test the -qlonglong option before use.

  
commit   : 7319c0524d66b00a711db7f560d6a34051739e35    
  
author   : Noah Misch <noah@leadboat.com>    
date     : Fri, 17 Jul 2015 03:01:14 -0400    
  
committer: Noah Misch <noah@leadboat.com>    
date     : Fri, 17 Jul 2015 03:01:14 -0400    

Click here for diff

  
xlc provides "long long" unconditionally at C99-compatible language  
levels, and this option provokes a warning.  The warning interferes with  
"configure" tests that fail in response to any warning.  Notably, before  
commit 85a2a8903f7e9151793308d0638621003aded5ae, it interfered with the  
test for -qnoansialias.  Back-patch to 9.0 (all supported versions).  
  

Fix a low-probability crash in our qsort implementation.

  
commit   : 730089d879751d890ecdfbc4b5cba04440ae4af2    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 16 Jul 2015 22:57:46 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 16 Jul 2015 22:57:46 -0400    

Click here for diff

  
It's standard for quicksort implementations, after having partitioned the  
input into two subgroups, to recurse to process the smaller partition and  
then handle the larger partition by iterating.  This method guarantees  
that no more than log2(N) levels of recursion can be needed.  However,  
Bentley and McIlroy argued that checking to see which partition is smaller  
isn't worth the cycles, and so their code doesn't do that but just always  
recurses on the left partition.  In most cases that's fine; but with  
worst-case input we might need O(N) levels of recursion, and that means  
that qsort could be driven to stack overflow.  Such an overflow seems to  
be the only explanation for today's report from Yiqing Jin of a SIGSEGV  
in med3_tuple while creating an index of a couple billion entries with a  
very large maintenance_work_mem setting.  Therefore, let's spend the few  
additional cycles and lines of code needed to choose the smaller partition  
for recursion.  
  
Also, fix up the qsort code so that it properly uses size_t not int for  
some intermediate values representing numbers of items.  This would only  
be a live risk when sorting more than INT_MAX bytes (in qsort/qsort_arg)  
or tuples (in qsort_tuple), which I believe would never happen with any  
caller in the current core code --- but perhaps it could happen with  
call sites in third-party modules?  In any case, this is trouble waiting  
to happen, and the corrected code is probably if anything shorter and  
faster than before, since it removes sign-extension steps that had to  
happen when converting between int and size_t.  
  
In passing, move a couple of CHECK_FOR_INTERRUPTS() calls so that it's  
not necessary to preserve the value of "r" across them, and prettify  
the output of gen_qsort_tuple.pl a little.  
  
Back-patch to all supported branches.  The odds of hitting this issue  
are probably higher in 9.4 and up than before, due to the new ability  
to allocate sort workspaces exceeding 1GB, but there's no good reason  
to believe that it's impossible to crash older branches this way.  
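The smaller-partition rule is the classic bounded-depth quicksort pattern; here is a minimal Python sketch of the idea (not PostgreSQL's qsort code, which also templates comparators and uses size_t arithmetic):

```python
def quicksort(a, lo=0, hi=None):
    """In-place quicksort recursing only on the smaller partition.

    The larger side is handled by looping, so recursion depth is
    bounded by log2(N) even on adversarial input."""
    if hi is None:
        hi = len(a) - 1
    while lo < hi:
        p = partition(a, lo, hi)
        # Recurse into whichever side is smaller, iterate on the other.
        if p - lo < hi - p:
            quicksort(a, lo, p - 1)
            lo = p + 1
        else:
            quicksort(a, p + 1, hi)
            hi = p - 1

def partition(a, lo, hi):
    """Lomuto partition: pivot on a[hi], return its final index."""
    pivot = a[hi]
    i = lo
    for j in range(lo, hi):
        if a[j] <= pivot:
            a[i], a[j] = a[j], a[i]
            i += 1
    a[i], a[hi] = a[hi], a[i]
    return i

xs = [5, 3, 8, 1, 9, 2]
quicksort(xs)
assert xs == [1, 2, 3, 5, 8, 9]
```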
  

  
commit   : dc5075fed0e1f50ff88adfa6012ab17d5c8f4351    
  
author   : Noah Misch <noah@leadboat.com>    
date     : Wed, 15 Jul 2015 21:00:26 -0400    
  
committer: Noah Misch <noah@leadboat.com>    
date     : Wed, 15 Jul 2015 21:00:26 -0400    

Click here for diff

  
This allows PostgreSQL modules and their dependencies to have undefined  
symbols, resolved at runtime.  Perl module shared objects rely on that  
in Perl 5.8.0 and later.  This fixes the crash when PL/PerlU loads such  
modules, as the hstore_plperl test suite does.  Module authors can link  
using -Wl,-G to permit undefined symbols; by default, linking will fail  
as it has.  Back-patch to 9.0 (all supported versions).  
  

Fix assorted memory leaks.

  
commit   : faf686b540e602f77c7a31442f3344444eb848e9    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sun, 12 Jul 2015 16:25:52 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sun, 12 Jul 2015 16:25:52 -0400    

Click here for diff

  
Per Coverity (not that any of these are so non-obvious that they should not  
have been caught before commit).  The extent of leakage is probably minor  
to unnoticeable, but a leak is a leak.  Back-patch as necessary.  
  
Michael Paquier  
  

Improve documentation about array concat operator vs. underlying functions.

  
commit   : 0a9b0428f03379931aa6c1866526a975ae2d59b2    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 9 Jul 2015 18:50:31 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 9 Jul 2015 18:50:31 -0400    

Click here for diff

  
The documentation implied that there was seldom any reason to use the  
array_append, array_prepend, and array_cat functions directly.  But that's  
not really true, because they can help make it clear which case is meant,  
which the || operator can't do since it's overloaded to represent all three  
cases.  Add some discussion and examples illustrating the potentially  
confusing behavior that can ensue if the parser misinterprets what was  
meant.  
  
Per a complaint from Michael Herold.  Back-patch to 9.2, which is where ||  
started to behave this way.  
  

Fix postmaster’s handling of a startup-process crash.

  
commit   : 9c39d7ae0871adfa377c37c208957889d5d3307c    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 9 Jul 2015 13:22:23 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 9 Jul 2015 13:22:23 -0400    

Click here for diff

  
Ordinarily, a failure (unexpected exit status) of the startup subprocess  
should be considered fatal, so the postmaster should just close up shop  
and quit.  However, if we sent the startup process a SIGQUIT or SIGKILL  
signal, the failure is hardly "unexpected", and we should attempt restart;  
this is necessary for recovery from ordinary backend crashes in hot-standby  
scenarios.  I attempted to implement the latter rule with a two-line patch  
in commit 442231d7f71764b8c628044e7ce2225f9aa43b67, but it now emerges that  
that patch was a few bricks shy of a load: it failed to distinguish the  
case of a signaled startup process from the case where the new startup  
process crashes before reaching database consistency.  That resulted in  
infinitely respawning a new startup process only to have it crash again.  
  
To handle this properly, we really must track whether we have sent the  
*current* startup process a kill signal.  Rather than add yet another  
ad-hoc boolean to the postmaster's state, I chose to unify this with the  
existing RecoveryError flag into an enum tracking the startup process's  
state.  That seems more consistent with the postmaster's general state  
machine design.  
  
Back-patch to 9.0, like the previous patch.  
  

  
commit   : 63277305d8bdb2a90fc88f67b2e3078cf7531289    
  
author   : Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Thu, 9 Jul 2015 16:00:14 +0300    
  
committer: Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Thu, 9 Jul 2015 16:00:14 +0300    

Click here for diff

  
Tom fixed another one of these in commit 7f32dbcd, but there was another  
almost identical one in libpq docs. Per his comment:  
  
HP's web server has apparently become case-sensitive sometime recently.  
Per bug #13479 from Daniel Abraham.  Corrected link identified by Alvaro.  
  

Replace use of “diff -q”.

  
commit   : d4051bfa2c4f2c1192500cf8107bc766d3854646    
  
author   : Noah Misch <noah@leadboat.com>    
date     : Wed, 8 Jul 2015 20:44:21 -0400    
  
committer: Noah Misch <noah@leadboat.com>    
date     : Wed, 8 Jul 2015 20:44:21 -0400    

Click here for diff

  
POSIX does not specify the -q option, and many implementations do not  
offer it.  Don't bother changing the MSVC build system, because having  
non-GNU diff on Windows is vanishingly unlikely.  Back-patch to 9.2,  
where this invocation was introduced.  
  

Fix null pointer dereference in “\c” psql command.

  
commit   : 49008d64507858678becf59f7e7caa75afb04cf2    
  
author   : Noah Misch <noah@leadboat.com>    
date     : Wed, 8 Jul 2015 20:44:21 -0400    
  
committer: Noah Misch <noah@leadboat.com>    
date     : Wed, 8 Jul 2015 20:44:21 -0400    

Click here for diff

  
The psql crash happened when no current connection existed.  (The second  
new check is optional given today's undocumented NULL argument handling  
in PQhost() etc.)  Back-patch to 9.0 (all supported versions).  
  

Fix portability issue in pg_upgrade test script: avoid $PWD.

  
commit   : dca992d8b099ab71b14cd4aa09f31506e3903224    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 7 Jul 2015 12:49:18 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 7 Jul 2015 12:49:18 -0400    

Click here for diff

  
SUSv2-era shells don't set the PWD variable, though anything more modern  
does.  In the buildfarm environment this could lead to test.sh executing  
with PWD pointing to $HOME or another high-level directory, so that there  
were conflicts between concurrent executions of the test in different  
branch subdirectories.  This appears to be the explanation for recent  
intermittent failures on buildfarm members binturong and dingo (and might  
well have something to do with the buildfarm script's failure to capture  
log files from pg_upgrade tests, too).  
  
To fix, just use `pwd` in place of $PWD.  AFAICS test.sh is the only place  
in our source tree that depended on $PWD.  Back-patch to all versions  
containing this script.  
  
Per buildfarm.  Thanks to Oskari Saarenmaa for diagnosing the problem.  
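The shell-level fix is simply to substitute `pwd` for `$PWD`; the same principle, shown here as a rough Python analogy, is to ask the operating system for the working directory rather than trust an inherited environment variable:

```python
import os

# $PWD is merely an environment variable set (or not) by the invoking
# shell; SUSv2-era shells never set it, and an inherited value can be
# stale or point somewhere else entirely.
cwd_from_env = os.environ.get("PWD")   # may be None or wrong

# Asking the OS (the analogue of running `pwd`) is always reliable.
cwd_from_os = os.getcwd()

assert isinstance(cwd_from_os, str) and cwd_from_os != ""
```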
  

Improve handling of out-of-memory in libpq.

  
commit   : fcdac561405e14e1ce9f8c07a53e7d4983afa7a0    
  
author   : Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Tue, 7 Jul 2015 18:37:45 +0300    
  
committer: Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Tue, 7 Jul 2015 18:37:45 +0300    

Click here for diff

  
If an allocation fails in the main message handling loop, pqParseInput3  
or pqParseInput2, it should not be treated as "not enough data available  
yet". Otherwise libpq will wait indefinitely for more data to arrive from  
the server, and get stuck forever.  
  
This isn't a complete fix - getParamDescriptions and getCopyStart still  
have the same issue, but it's a step in the right direction.  
  
Michael Paquier and me. Backpatch to all supported versions.  
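The contract being fixed can be sketched abstractly (return-code names and the 5-byte header check are illustrative, not libpq's actual API): an allocation failure must surface as an error, never as "need more bytes", or the client waits forever.

```python
NEED_MORE, OK, FAILED = range(3)

def parse_input(buf, alloc):
    """Toy message-parsing step distinguishing 'incomplete input'
    from 'out of memory'."""
    if len(buf) < 5:
        return NEED_MORE          # genuinely incomplete message
    try:
        alloc(len(buf))           # stand-in for result allocation
    except MemoryError:
        return FAILED             # report the error; don't wait for data
    return OK

assert parse_input(b"x", lambda n: None) == NEED_MORE

def boom(n):
    raise MemoryError

assert parse_input(b"hello!", boom) == FAILED
```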
  

Turn install.bat into a pure one-line wrapper for the perl script.

  
commit   : 880365a3c56866119e89a530c68b94a5d99d5846    
  
author   : Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Tue, 7 Jul 2015 16:31:52 +0300    
  
committer: Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Tue, 7 Jul 2015 16:31:52 +0300    

Click here for diff

  
Build.bat and vcregress.bat got similar treatment years ago. I'm not sure  
why install.bat wasn't treated at the same time, but it seems like a good  
idea anyway.  
  
The immediate problem with the old install.bat was that it had quoting  
issues, and wouldn't work if the target directory's name contained spaces.  
This fixes that problem.  
  
I committed this to master yesterday; this is a backpatch of the same for  
all supported versions.  
  

Remove incorrect warning from pg_archivecleanup document.

  
commit   : 68f9e67ef20bccfcb1c5fa009381d81aaeb69ec6    
  
author   : Fujii Masao <fujii@postgresql.org>    
date     : Mon, 6 Jul 2015 20:58:58 +0900    
  
committer: Fujii Masao <fujii@postgresql.org>    
date     : Mon, 6 Jul 2015 20:58:58 +0900    

Click here for diff

  
The .backup file name can be passed to pg_archivecleanup even if  
it includes the extension which is specified in -x option.  
However, previously the document incorrectly warned a user  
not to do that.  
  
Back-patch to 9.2 where pg_archivecleanup's -x option and  
the warning were added.  
  

Make numeric form of PG version number readily available in Makefiles.

  
commit   : 544e7581427a2c2d14f32f85043e27c1ad67f14f    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sun, 5 Jul 2015 12:01:01 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sun, 5 Jul 2015 12:01:01 -0400    

Click here for diff

  
Expose PG_VERSION_NUM (e.g., "90600") as a Make variable; but for  
consistency with the other Make variables holding similar info,  
call the variable just VERSION_NUM not PG_VERSION_NUM.  
  
There was some discussion of making this value available as a pg_config  
value as well.  However, that would entail substantially more work than  
this two-line patch.  Given that there was not exactly universal consensus  
that we need this at all, let's just do a minimal amount of work for now.  
  
Back-patch of commit a5d489ccb7e613c7ca3be6141092b8c1d2c13fa7, so that this  
variable is actually useful for its intended purpose sometime before 2020.  
  
Michael Paquier, reviewed by Pavel Stehule  
  

PL/Perl: Add alternative expected file for Perl 5.22

  
commit   : 0e6dae3f19dde22fdc7098a59dc399faac0e9482    
  
author   : Peter Eisentraut <peter_e@gmx.net>    
date     : Sun, 21 Jun 2015 10:37:24 -0400    
  
committer: Peter Eisentraut <peter_e@gmx.net>    
date     : Sun, 21 Jun 2015 10:37:24 -0400    

Click here for diff

  
  

Don’t emit a spurious space at end of line in pg_dump of event triggers.

  
commit   : 52fc303e640c328b9634866b88cf2528f2732d16    
  
author   : Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Thu, 2 Jul 2015 12:50:29 +0300    
  
committer: Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Thu, 2 Jul 2015 12:50:29 +0300    

Click here for diff

  
Backpatch to 9.3 and above, where event triggers were added.  
  

  
commit   : cfd4876f174b70129626346c4991ebf5f61ebc6a    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 30 Jun 2015 18:47:32 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 30 Jun 2015 18:47:32 -0400    

Click here for diff

  
HP's web server has apparently become case-sensitive sometime recently.  
Per bug #13479 from Daniel Abraham.  Corrected link identified by Alvaro.  
  

Test -lrt for sched_yield

  
commit   : c085e072ff247ad66ec9955c20d24556ab2a92f7    
  
author   : Alvaro Herrera <alvherre@alvh.no-ip.org>    
date     : Tue, 30 Jun 2015 14:20:38 -0300    
  
committer: Alvaro Herrera <alvherre@alvh.no-ip.org>    
date     : Tue, 30 Jun 2015 14:20:38 -0300    

Click here for diff

  
Apparently, this is needed in some Solaris versions.  
  
Author: Oskari Saarenmaa  
  

Back-patch some minor bug fixes in GUC code.

  
commit   : 5a56c254588a91de393b8ce99f8deeefc6a44d67    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sun, 28 Jun 2015 18:38:06 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sun, 28 Jun 2015 18:38:06 -0400    

Click here for diff

  
In 9.4, fix a 9.4.1 regression that allowed multiple entries for a  
PGC_POSTMASTER variable to cause bogus complaints in the postmaster log.  
(The issue here was that commit bf007a27acd7b2fb unintentionally reverted  
3e3f65973a3c94a6, which suppressed any duplicate entries within  
ParseConfigFp.  Back-patch the reimplementation just made in HEAD, which  
makes use of an "ignore" field to prevent application of superseded items.)  
  
Add missed failure check in AlterSystemSetConfigFile().  We don't really  
expect ParseConfigFp() to fail, but that's not an excuse for not checking.  
  
In both 9.3 and 9.4, remove mistaken assignment to ConfigFileLineno that  
caused line counting after an include_dir directive to be completely wrong.  
  

Fix comment for GetCurrentIntegerTimestamp().

  
commit   : f9b38ab6536e0c175cf94bea0c8f2a3d9fed175a    
  
author   : Kevin Grittner <kgrittn@postgresql.org>    
date     : Sun, 28 Jun 2015 12:46:03 -0500    
  
committer: Kevin Grittner <kgrittn@postgresql.org>    
date     : Sun, 28 Jun 2015 12:46:03 -0500    

Click here for diff

  
The unit of measure is microseconds, not milliseconds.  
  
Backpatch to 9.3 where the function and its comment were added.  
  

Fix function declaration style to respect the coding standard.

  
commit   : fc7f6e331d7d35f7f24f72a62e4907887e7bcb11    
  
author   : Tatsuo Ishii <ishii@postgresql.org>    
date     : Sun, 28 Jun 2015 18:54:27 +0900    
  
committer: Tatsuo Ishii <ishii@postgresql.org>    
date     : Sun, 28 Jun 2015 18:54:27 +0900    

Click here for diff

  
  

Revoke incorrectly applied patch version

  
commit   : 05d9e17fa99febb3ec024fbd6cdc323985fd0e6e    
  
author   : Simon Riggs <simon@2ndQuadrant.com>    
date     : Sat, 27 Jun 2015 02:21:51 +0100    
  
committer: Simon Riggs <simon@2ndQuadrant.com>    
date     : Sat, 27 Jun 2015 02:21:51 +0100    

Click here for diff

  
  

Avoid hot standby cancels from VAC FREEZE

  
commit   : 892a0e4e4559fc200d4803db24f4babca37fe76d    
  
author   : Simon Riggs <simon@2ndQuadrant.com>    
date     : Sat, 27 Jun 2015 00:46:58 +0100    
  
committer: Simon Riggs <simon@2ndQuadrant.com>    
date     : Sat, 27 Jun 2015 00:46:58 +0100    

Click here for diff

  
VACUUM FREEZE generated false cancelations of standby queries on an  
otherwise idle master, caused by an off-by-one error on cutoff_xid  
that goes back to the original commit.  
  
Backpatch to all versions 9.0+  
  
Analysis and report by Marco Nenciarini  
  
Bug fix by Simon Riggs  
  

Allow background workers to connect to no particular database.

  
commit   : d66b67fe21e11d8f1c7ac7d8445f8468fbc9222f    
  
author   : Robert Haas <rhaas@postgresql.org>    
date     : Thu, 25 Jun 2015 15:52:13 -0400    
  
committer: Robert Haas <rhaas@postgresql.org>    
date     : Thu, 25 Jun 2015 15:52:13 -0400    

Click here for diff

  
The documentation claims that this is supported, but it didn't  
actually work.  Fix that.  
  
Reported by Pavel Stehule; patch by me.  
  

Fix the logic for putting relations into the relcache init file.

  
commit   : 834aa56ea16ff4a7e217a1115797078398d85cc8    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 25 Jun 2015 14:39:05 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 25 Jun 2015 14:39:05 -0400    

Click here for diff

  
Commit f3b5565dd4e59576be4c772da364704863e6a835 was a couple of bricks shy  
of a load; specifically, it missed putting pg_trigger_tgrelid_tgname_index  
into the relcache init file, because that index is not used by any  
syscache.  However, we have historically nailed that index into cache for  
performance reasons.  The upshot was that load_relcache_init_file always  
decided that the init file was busted and silently ignored it, resulting  
in a significant hit to backend startup speed.  
  
To fix, reinstantiate RelationIdIsInInitFile() as a wrapper around  
RelationSupportsSysCache(), which can know about additional relations  
that should be in the init file despite being unknown to syscache.c.  
  
Also install some guards against future mistakes of this type: make  
write_relcache_init_file Assert that all nailed relations get written to  
the init file, and make load_relcache_init_file emit a WARNING if it takes  
the "wrong number of nailed relations" exit path.  Now that we remove the  
init files during postmaster startup, that case should never occur in the  
field, even if we are starting a minor-version update that added or removed  
rels from the nailed set.  So the warning shouldn't ever be seen by end  
users, but it will show up in the regression tests if somebody breaks this  
logic.  
  
Back-patch to all supported branches, like the previous commit.  
  

Docs: fix claim that to_char(‘FM’) removes trailing zeroes.

  
commit   : 7e5859cbc26bc90911ecf3db394cecfcd9da953b    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 25 Jun 2015 10:44:03 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 25 Jun 2015 10:44:03 -0400    

Click here for diff

  
Of course, what it removes is leading zeroes.  Seems to have been a thinko  
in commit ffe92d15d53625d5ae0c23f4e1984ed43614a33d.  Noted by Hubert Depesz  
Lubaczewski.  
  

Improve inheritance_planner()’s performance for large inheritance sets.

  
commit   : 67306858866359397a5058d1b7054f53426cf165    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 22 Jun 2015 18:53:27 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 22 Jun 2015 18:53:27 -0400    

Click here for diff

  
Commit c03ad5602f529787968fa3201b35c119bbc6d782 introduced a planner  
performance regression for UPDATE/DELETE on large inheritance sets.  
It required copying the append_rel_list (which is of size proportional to  
the number of inherited tables) once for each inherited table, thus  
resulting in O(N^2) time and memory consumption.  While it's difficult to  
avoid that in general, the extra work only has to be done for  
append_rel_list entries that actually reference subquery RTEs, which  
inheritance-set entries will not.  So we can buy back essentially all of  
the loss in cases without subqueries in FROM; and even for those, the added  
work is mainly proportional to the number of UNION ALL subqueries.  
  
Back-patch to 9.2, like the previous commit.  
  
Tom Lane and Dean Rasheed, per a complaint from Thomas Munro.  
  

Truncate strings in tarCreateHeader() with strlcpy(), not sprintf().

  
commit   : 45a1d7770743345b712dc6bdda71dd06967846be    
  
author   : Noah Misch <noah@leadboat.com>    
date     : Sun, 21 Jun 2015 20:04:36 -0400    
  
committer: Noah Misch <noah@leadboat.com>    
date     : Sun, 21 Jun 2015 20:04:36 -0400    

Click here for diff

  
This supplements the GNU libc bug #6530 workarounds introduced in commit  
54cd4f04576833abc394e131288bf3dd7dcf4806.  On affected systems, a  
tar-format pg_basebackup failed when some filename beneath the data  
directory was not valid character data in the postmaster/walsender  
locale.  Back-patch to 9.1, where pg_basebackup was introduced.  Extant,  
bug-prone conversion specifications receive only ASCII bytes or involve  
low-importance messages.  
  

Improve multixact emergency autovacuum logic.

  
commit   : 2031931440e382c58fe21944c5e707e4a16da5df    
  
author   : Andres Freund <andres@anarazel.de>    
date     : Sun, 21 Jun 2015 18:57:28 +0200    
  
committer: Andres Freund <andres@anarazel.de>    
date     : Sun, 21 Jun 2015 18:57:28 +0200    

Click here for diff

  
Previously autovacuum was not necessarily triggered if space in the  
members SLRU got tight. The first problem was that the signalling was  
tied to values in the offsets SLRU, but members can advance much  
faster. That's especially a problem if old sessions have been around  
that previously prevented the multixact horizon from increasing.  
Secondly, the skipping logic doesn't work if the database was restarted  
after autovacuum was triggered - that knowledge is not preserved across  
a restart. This is especially a problem because restarting the database  
is a common panic reaction when it becomes slow due to anti-wraparound  
vacuums.  
  
Fix the first problem by separating the logic for members from  
offsets. Trigger autovacuum whenever a multixact crosses a segment  
boundary; the current member offset increases in irregular steps, so  
we can't use a simple modulo test as for offsets.  Add a stopgap for  
the second problem by signalling autovacuum whenever ERRORing out  
because of boundaries.  
  
Discussion: 20150608163707.GD20772@alap3.anarazel.de  
  
Backpatch into 9.3, where it became more likely that multixacts wrap  
around.  
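The segment-boundary trigger can be sketched as follows; the constant is purely illustrative, and the real logic lives in multixact.c:

```python
# Members advance by irregular amounts, so instead of a modulo test,
# signal autovacuum whenever an allocation crosses a segment boundary.
MEMBERS_PER_SEGMENT = 1636 * 32   # illustrative value only

def crosses_segment_boundary(prev_offset, next_offset):
    return (prev_offset // MEMBERS_PER_SEGMENT
            != next_offset // MEMBERS_PER_SEGMENT)

assert not crosses_segment_boundary(10, 500)
assert crosses_segment_boundary(MEMBERS_PER_SEGMENT - 1,
                                MEMBERS_PER_SEGMENT + 5)
```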
  

Fix thinko in comment (launcher -> worker)

  
commit   : 0f65b6cfce33c32edba025ce701f9a5c334371b5    
  
author   : Alvaro Herrera <alvherre@alvh.no-ip.org>    
date     : Sat, 20 Jun 2015 11:45:59 -0300    
  
committer: Alvaro Herrera <alvherre@alvh.no-ip.org>    
date     : Sat, 20 Jun 2015 11:45:59 -0300    

Click here for diff

  
  

Clamp autovacuum launcher sleep time to 5 minutes

  
commit   : 5ac77a276311a7373950020973967d8d62e20d38    
  
author   : Alvaro Herrera <alvherre@alvh.no-ip.org>    
date     : Fri, 19 Jun 2015 12:44:34 -0300    
  
committer: Alvaro Herrera <alvherre@alvh.no-ip.org>    
date     : Fri, 19 Jun 2015 12:44:34 -0300    

Click here for diff

  
This avoids the problem that it might go to sleep for an unreasonable  
amount of time in unusual conditions like the server clock moving  
backwards an unreasonable amount of time.  
  
(Simply moving the server clock forward again doesn't solve the problem  
unless you wake up the autovacuum launcher manually, say by sending it  
SIGHUP.)  
  
Per trouble report from Prakash Itnal in  
https://www.postgresql.org/message-id/CAHC5u79-UqbapAABH2t4Rh2eYdyge0Zid-X=Xz-ZWZCBK42S0Q@mail.gmail.com  
  
Analyzed independently by Haribabu Kommi and Tom Lane.  
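The clamp amounts to bounding the computed sleep on both sides; a minimal sketch, with names assumed for illustration:

```python
MAX_LAUNCHER_SLEEP_S = 300  # five minutes, per the commit

def launcher_sleep(next_wakeup, now):
    """Clamp the computed sleep so a backwards clock jump cannot put
    the launcher to sleep for an unreasonably long time."""
    raw = next_wakeup - now
    return min(max(raw, 0.0), MAX_LAUNCHER_SLEEP_S)

# A clock jump that makes the next wakeup look eons away is capped:
assert launcher_sleep(next_wakeup=1e9, now=100.0) == 300
# A wakeup already in the past means "don't sleep", not "sleep negative":
assert launcher_sleep(next_wakeup=50.0, now=100.0) == 0.0
```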
  

Fix corner case in autovacuum-forcing logic for multixact wraparound.

  
commit   : 6199b1f90c5c3b800fa4d647e333c194ae4ee933    
  
author   : Robert Haas <rhaas@postgresql.org>    
date     : Fri, 19 Jun 2015 11:28:30 -0400    
  
committer: Robert Haas <rhaas@postgresql.org>    
date     : Fri, 19 Jun 2015 11:28:30 -0400    

Click here for diff

  
Since find_multixact_start() relies on SimpleLruDoesPhysicalPageExist(),  
and that function looks only at the on-disk state, it's possible for it  
to fail to find a page that exists in the in-memory SLRU that has not  
been written yet.  If that happens, SetOffsetVacuumLimit() will  
erroneously decide to force emergency autovacuuming immediately.  
  
We should probably fix find_multixact_start() to consider the data  
cached in memory as well as the on-disk state, but that's no excuse  
for SetOffsetVacuumLimit() to be stupid about the case where it can  
no longer read the value after having previously succeeded in doing so.  
  
Report by Andres Freund.  
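  
The "don't be stupid" behavior amounts to remembering the last successfully read value and only forcing an emergency vacuum when no value has ever been obtained. A hypothetical sketch of that pattern (types and names invented for illustration):  
  
```c
#include <stdbool.h>
#include <stdint.h>

/* Tracks the last oldest-offset value we managed to read. */
typedef struct
{
    bool     known;          /* have we ever read a value? */
    uint64_t oldest_offset;  /* last value successfully read */
} OffsetState;

/* Record a fresh read if one succeeded; return true only when an
 * emergency vacuum should be forced, i.e. when the read failed and
 * no earlier read ever succeeded. */
static bool update_offset_limit(OffsetState *state, bool found, uint64_t value)
{
    if (found)
    {
        state->known = true;
        state->oldest_offset = value;
    }
    return !state->known;
}
```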
  

Check for out of memory when allocating sqlca.

  
commit   : 4130b2c1fdbe71838baba00312b8ca599b62f98d    
  
author   : Michael Meskes <meskes@postgresql.org>    
date     : Mon, 15 Jun 2015 14:21:03 +0200    
  
committer: Michael Meskes <meskes@postgresql.org>    
date     : Mon, 15 Jun 2015 14:21:03 +0200    

Click here for diff

  
Patch by Michael Paquier  
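  
The fix follows the standard C pattern of checking malloc's result before use; a self-contained sketch with a simplified stand-in for the real sqlca structure:  
  
```c
#include <stdlib.h>
#include <string.h>

/* Simplified stand-in for ECPG's sqlca structure. */
struct sqlca_t
{
    long sqlcode;
    char sqlstate[6];
};

/* Allocate and zero a sqlca, checking for out of memory instead of
 * dereferencing a possible NULL pointer. */
static struct sqlca_t *alloc_sqlca(void)
{
    struct sqlca_t *sqlca = malloc(sizeof(*sqlca));

    if (sqlca == NULL)
        return NULL;            /* caller must handle OOM */
    memset(sqlca, 0, sizeof(*sqlca));
    return sqlca;
}
```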
  

Fix memory leak in ecpglib’s connect function.

  
commit   : 3e2a17eecc4ceb76ac40978a15b26e120dbd52a6    
  
author   : Michael Meskes <meskes@postgresql.org>    
date     : Mon, 15 Jun 2015 14:20:09 +0200    
  
committer: Michael Meskes <meskes@postgresql.org>    
date     : Mon, 15 Jun 2015 14:20:09 +0200    

Click here for diff

  
Patch by Michael Paquier  
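  
The leak fixed here is the classic error-path variety: memory allocated while parsing a connection target must be released on every exit path, not just the success path. A hypothetical sketch of the pattern (function and parameter names invented):  
  
```c
#include <stdlib.h>
#include <string.h>

/* Copy the host name out of a connection target, releasing the
 * allocation on the error path as well as on success. */
static int parse_target(const char *name, char **host_out)
{
    char *host = malloc(strlen(name) + 1);

    if (host == NULL)
        return -1;
    strcpy(host, name);
    if (host[0] == '\0')        /* invalid target */
    {
        free(host);             /* the fix: free before bailing out */
        return -1;
    }
    *host_out = host;
    return 0;
}
```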
  

Fixed some memory leaks in ECPG.

  
commit   : 31c06d4b66627527fa1cc63a1f1e240848b5c7d8    
  
author   : Michael Meskes <meskes@postgresql.org>    
date     : Fri, 12 Jun 2015 14:52:55 +0200    
  
committer: Michael Meskes <meskes@postgresql.org>    
date     : Fri, 12 Jun 2015 14:52:55 +0200    

Click here for diff

  
Patch by Michael Paquier  
  
Conflicts:  
	src/interfaces/ecpg/preproc/variable.c  
  

Fix intoasc() in Informix compat lib. This function used to be a noop.

  
commit   : d65e5f832ebd784f98c3a68f8572ff39f91446be    
  
author   : Michael Meskes <meskes@postgresql.org>    
date     : Fri, 12 Jun 2015 14:50:47 +0200    
  
committer: Michael Meskes <meskes@postgresql.org>    
date     : Fri, 12 Jun 2015 14:50:47 +0200    

Click here for diff

  
Patch by Michael Paquier  
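  
Before the fix, intoasc() returned without writing anything into the caller's buffer; the fix makes it actually format the interval. A simplified sketch of the repaired behavior, with an invented interval type and output format (the real function delegates to ECPG's interval-formatting code):  
  
```c
#include <stdio.h>

/* Simplified stand-in for the interval type. */
typedef struct
{
    long      months;
    long long time_us;
} interval_t;

/* Format an interval into the output buffer instead of being a
 * no-op; returns 0 on success, -1 on bad arguments. */
static int intoasc_sketch(const interval_t *iv, char *out, size_t outlen)
{
    if (iv == NULL || out == NULL || outlen == 0)
        return -1;
    snprintf(out, outlen, "%ld mons %lld usecs", iv->months, iv->time_us);
    return 0;
}
```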
  

Improve error message and hint for ALTER COLUMN TYPE can’t-cast failure.

  
commit   : 9e86bc29b6e5dba11e687161c3520b90b570e491    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 12 Jun 2015 11:54:03 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 12 Jun 2015 11:54:03 -0400    

Click here for diff

  
We already tried to improve this once, but the "improved" text was rather  
off-target if you had provided a USING clause.  Also, it seems helpful  
to provide the exact text of a suggested USING clause, so users can just  
copy-and-paste it when needed.  Per complaint from Keith Rarick and a  
suggestion from Merlin Moncure.  
  
Back-patch to 9.2 where the current wording was adopted.