PostgreSQL 9.5.4 commit log

Stamp 9.5.4.

commit   : eb4dfa239e6f54fef5c486caf4b58a9805c19572    
  
author   : Tom Lane <[email protected]>    
date     : Mon, 8 Aug 2016 16:27:53 -0400    
  
committer: Tom Lane <[email protected]>    
date     : Mon, 8 Aug 2016 16:27:53 -0400    


M configure
M configure.in
M doc/bug.template
M src/include/pg_config.h.win32
M src/interfaces/libpq/libpq.rc.in
M src/port/win32ver.rc

Last-minute updates for release notes.

commit   : 2183966c6d00a16cc307f8563da469b14ed07b6f    
  
author   : Tom Lane <[email protected]>    
date     : Mon, 8 Aug 2016 11:56:10 -0400    
  
committer: Tom Lane <[email protected]>    
date     : Mon, 8 Aug 2016 11:56:10 -0400    


Security: CVE-2016-5423, CVE-2016-5424  

M doc/src/sgml/release-9.1.sgml
M doc/src/sgml/release-9.2.sgml
M doc/src/sgml/release-9.3.sgml
M doc/src/sgml/release-9.4.sgml
M doc/src/sgml/release-9.5.sgml

Fix several one-byte buffer over-reads in to_number

commit   : 04cee8f835bcf95ff80b734c335927aaf6551d2d    
  
author   : Peter Eisentraut <[email protected]>    
date     : Mon, 8 Aug 2016 11:12:59 -0400    
  
committer: Peter Eisentraut <[email protected]>    
date     : Mon, 8 Aug 2016 11:12:59 -0400    


Several places in NUM_numpart_from_char(), which is called from the SQL  
function to_number(text, text), could accidentally read one byte past  
the end of the input buffer (which comes from the input text datum and  
is not null-terminated).  
  
1. One leading space character would be skipped, but there was no check  
   that the input was at least one byte long.  This does not happen in  
   practice, but for defensiveness, add a check anyway.  
  
2. Commit 4a3a1e2cf apparently accidentally duplicated the code that  
   skips one space character (so that two spaces might be skipped), but  
   there was no overflow check before skipping the second byte.  Fix by  
   removing the duplicate code.  
  
3. A logic error would allow a one-byte over-read when looking for a  
   trailing sign (S) placeholder.  
  
In each case, the extra byte cannot be read out directly, but looking at  
it might cause a crash.  
  
The third item was discovered by Piotr Stefaniak, the first two were  
found and analyzed by Tom Lane and Peter Eisentraut.  
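The defensive rule behind item 1 can be sketched as follows (a minimal illustration with an invented function name, not the formatting.c code):

```c
#include <assert.h>

/* The input buffer comes from a text datum and is not null-terminated,
 * so every byte read must be preceded by a length check.  The unsafe
 * form tested *buf == ' ' before knowing that *len >= 1. */
static const char *skip_one_leading_space(const char *buf, int *len)
{
    if (*len > 0 && *buf == ' ')
    {
        buf++;
        (*len)--;
    }
    return buf;
}
```

With a zero-length input, the `*len > 0` test short-circuits before the buffer is dereferenced, which is exactly the over-read being prevented.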

M src/backend/utils/adt/formatting.c

Translation updates

commit   : 4da812fa8adb22874a937f1b000253fecf526cb0    
  
author   : Peter Eisentraut <[email protected]>    
date     : Mon, 8 Aug 2016 11:02:52 -0400    
  
committer: Peter Eisentraut <[email protected]>    
date     : Mon, 8 Aug 2016 11:02:52 -0400    


Source-Git-URL: git://git.postgresql.org/git/pgtranslation/messages.git  
Source-Git-Hash: f1a1631efd7a51f9b1122f22cf688a3124bf1342  

M src/backend/po/de.po
M src/backend/po/es.po
M src/backend/po/ru.po
M src/bin/initdb/po/de.po
M src/bin/pg_basebackup/po/de.po
M src/bin/pg_config/po/de.po
M src/bin/pg_controldata/po/de.po
M src/bin/pg_ctl/po/de.po
M src/bin/pg_dump/po/de.po
M src/bin/pg_dump/po/ru.po
M src/bin/pg_resetxlog/po/de.po
M src/bin/pg_rewind/po/de.po
M src/bin/pg_rewind/po/ru.po
M src/bin/psql/po/de.po
M src/bin/psql/po/es.po
M src/bin/scripts/po/de.po
M src/interfaces/ecpg/ecpglib/po/de.po
M src/interfaces/ecpg/preproc/po/de.po
M src/interfaces/libpq/po/de.po
M src/pl/plperl/po/de.po
M src/pl/plpgsql/src/po/de.po
M src/pl/plpython/po/de.po
M src/pl/tcl/po/de.po

Fix two errors with nested CASE/WHEN constructs.

commit   : 98b0c6280667ce1efae763340fb2c13c81e4d706    
  
author   : Tom Lane <[email protected]>    
date     : Mon, 8 Aug 2016 10:33:46 -0400    
  
committer: Tom Lane <[email protected]>    
date     : Mon, 8 Aug 2016 10:33:46 -0400    


ExecEvalCase() tried to save a cycle or two by passing  
&econtext->caseValue_isNull as the isNull argument to its sub-evaluation of  
the CASE value expression.  If that subexpression itself contained a CASE,  
then *isNull was an alias for econtext->caseValue_isNull within the  
recursive call of ExecEvalCase(), leading to confusion about whether the  
inner call's caseValue was null or not.  In the worst case this could lead  
to a core dump due to dereferencing a null pointer.  Fix by not assigning  
to the global variable until control comes back from the subexpression.  
Also, avoid using the passed-in isNull pointer transiently for evaluation  
of WHEN expressions.  (Either one of these changes would have been  
sufficient to fix the known misbehavior, but it's clear now that each of  
these choices was in itself dangerous coding practice and best avoided.  
There do not seem to be any similar hazards elsewhere in execQual.c.)  
  
Also, it was possible for inlining of a SQL function that implements the  
equality operator used for a CASE comparison to result in one CASE  
expression's CaseTestExpr node being inserted inside another CASE  
expression.  This would certainly result in wrong answers, since the  
improperly nested CaseTestExpr would return the inner CASE's comparison  
value, not the outer's.  If the CASE values were of different  
data types, a crash might result; moreover such situations could be abused  
to allow disclosure of portions of server memory.  To fix, teach  
inline_function to check for "bare" CaseTestExpr nodes in the arguments of  
a function to be inlined, and avoid inlining if there are any.  
  
Heikki Linnakangas, Michael Paquier, Tom Lane  
  
Report: https://github.com/greenplum-db/gpdb/pull/327  
Report: <[email protected]>  
Security: CVE-2016-5423  
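The first fix amounts to a coding rule: evaluate a subexpression into locals and assign to the shared context only after control returns. A minimal sketch of that rule, with invented names standing in for ExecEvalCase's machinery:

```c
#include <assert.h>
#include <stdbool.h>

/* Toy stand-in for econtext's CASE-value slot. */
typedef struct { long caseValue; bool caseValue_isNull; } CaseCtx;

/* A sub-evaluation that behaves like a nested CASE: it temporarily
 * publishes its own test value into the shared slot, then restores it. */
static long eval_subexpr(CaseCtx *ctx, bool *isNull)
{
    long save_val = ctx->caseValue;
    bool save_null = ctx->caseValue_isNull;

    ctx->caseValue = 99;            /* inner CASE publishes its value */
    ctx->caseValue_isNull = false;
    /* ... inner WHEN clauses would run here ... */
    ctx->caseValue = save_val;      /* inner CASE restores on exit */
    ctx->caseValue_isNull = save_null;

    *isNull = false;                /* the sub-result itself is non-null */
    return 7;
}

/* Fixed pattern: pass a *local* as isNull and publish into the shared
 * slot only after the subexpression returns.  The buggy form passed
 * &ctx->caseValue_isNull directly, so the nested save/restore above and
 * the *isNull write aliased the same byte, confusing inner and outer
 * state. */
static long eval_case(CaseCtx *ctx)
{
    bool isNull = true;
    long v = eval_subexpr(ctx, &isNull);

    ctx->caseValue = v;
    ctx->caseValue_isNull = isNull;
    return v;
}
```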

M src/backend/executor/execQual.c
M src/backend/optimizer/util/clauses.c
M src/test/regress/expected/case.out
M src/test/regress/sql/case.sql

Obstruct shell, SQL, and conninfo injection via database and role names.

commit   : 286c8bc646468fe68c6af484463391dd414ee65d    
  
author   : Noah Misch <[email protected]>    
date     : Mon, 8 Aug 2016 10:07:46 -0400    
  
committer: Noah Misch <[email protected]>    
date     : Mon, 8 Aug 2016 10:07:46 -0400    


Due to simplistic quoting and confusion of database names with conninfo  
strings, roles with the CREATEDB or CREATEROLE option could escalate to  
superuser privileges when a superuser next ran certain maintenance  
commands.  The new coding rule for PQconnectdbParams() calls, documented  
at conninfo_array_parse(), is to pass expand_dbname=true and wrap  
literal database names in a trivial connection string.  Escape  
zero-length values in appendConnStrVal().  Back-patch to 9.1 (all  
supported versions).  
  
Nathan Bossart, Michael Paquier, and Noah Misch.  Reviewed by Peter  
Eisentraut.  Reported by Nathan Bossart.  
  
Security: CVE-2016-5424  
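The new rule for literal database names can be sketched like this (an illustrative function, not the libpq/pg_dump code; conninfo quoting wraps a value in single quotes and backslash-escapes embedded `'` and `\`):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Wrap a literal database name in a trivial connection string, quoting
 * the value so it can never be reparsed as further conninfo keywords.
 * Zero-length values must be quoted too, or "dbname=" would look like a
 * keyword with no value.  Assumes dst is large enough; a real version
 * must bounds-check. */
static void make_trivial_connstr(char *dst, size_t dstlen, const char *dbname)
{
    size_t i = 0;
    const char *p;

    i += (size_t) snprintf(dst + i, dstlen - i, "dbname='");
    for (p = dbname; *p; p++)
    {
        if (*p == '\'' || *p == '\\')
            dst[i++] = '\\';        /* escape quote and backslash */
        dst[i++] = *p;
    }
    dst[i++] = '\'';
    dst[i] = '\0';
}
```

The result is then passed to PQconnectdbParams() with expand_dbname=true, so a hostile name like `foo' host=evil` stays a single quoted value instead of injecting parameters.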

M src/bin/pg_basebackup/streamutil.c
M src/bin/pg_dump/dumputils.c
M src/bin/pg_dump/dumputils.h
M src/bin/pg_dump/pg_backup.h
M src/bin/pg_dump/pg_backup_archiver.c
M src/bin/pg_dump/pg_backup_db.c
M src/bin/pg_dump/pg_dumpall.c
M src/bin/pg_upgrade/check.c
M src/bin/pg_upgrade/dump.c
M src/bin/pg_upgrade/pg_upgrade.c
M src/bin/pg_upgrade/pg_upgrade.h
M src/bin/pg_upgrade/server.c
M src/bin/pg_upgrade/test.sh
M src/bin/pg_upgrade/util.c
M src/bin/pg_upgrade/version.c
M src/bin/psql/command.c
M src/bin/scripts/clusterdb.c
M src/bin/scripts/reindexdb.c
M src/bin/scripts/vacuumdb.c
M src/interfaces/libpq/fe-connect.c
M src/tools/msvc/vcregress.pl

Promote pg_dumpall shell/connstr quoting functions to src/fe_utils.

commit   : 8adff378308b3500aad45a2d5bfa9ca808f37627    
  
author   : Noah Misch <[email protected]>    
date     : Mon, 8 Aug 2016 10:07:46 -0400    
  
committer: Noah Misch <[email protected]>    
date     : Mon, 8 Aug 2016 10:07:46 -0400    


Rename these newly-extern functions with terms more typical of their new  
neighbors.  No functional changes; a subsequent commit will use them in  
more places.  Back-patch to 9.1 (all supported versions).  Back branches  
lack src/fe_utils, so instead rename the functions in place; the  
subsequent commit will copy them into the other programs using them.  
  
Security: CVE-2016-5424  

M src/bin/pg_dump/pg_dumpall.c

Fix Windows shell argument quoting.

commit   : 2e5e90d8d10ca568381adfaaf53e8a9e8e342375    
  
author   : Noah Misch <[email protected]>    
date     : Mon, 8 Aug 2016 10:07:46 -0400    
  
committer: Noah Misch <[email protected]>    
date     : Mon, 8 Aug 2016 10:07:46 -0400    


The incorrect quoting may have permitted arbitrary command execution.  
At a minimum, it gave broader control over the command line to actors  
supposed to have control over a single argument.  Back-patch to 9.1 (all  
supported versions).  
  
Security: CVE-2016-5424  

M src/bin/pg_dump/pg_dumpall.c

Reject, in pg_dumpall, names containing CR or LF.

commit   : ec3aebdbdf0137c8ff27ae089d1431cf61bc15b5    
  
author   : Noah Misch <[email protected]>    
date     : Mon, 8 Aug 2016 10:07:46 -0400    
  
committer: Noah Misch <[email protected]>    
date     : Mon, 8 Aug 2016 10:07:46 -0400    


These characters prematurely terminate Windows shell command processing,  
causing the shell to execute a prefix of the intended command.  The  
chief alternative to rejecting these characters was to bypass the  
Windows shell with CreateProcess(), but the ability to use such names  
has little value.  Back-patch to 9.1 (all supported versions).  
  
This change formally revokes support for these characters in database  
names and role names.  Don't document this; the error message is  
self-explanatory, and too few users would benefit.  A future major  
release may forbid creation of databases and roles so named.  For now,  
check only at known weak points in pg_dumpall.  Future commits will,  
without notice, reject affected names from other frontend programs.  
  
Also extend the restriction to pg_dumpall --dbname=CONNSTR arguments and  
--file arguments.  Unlike the effects on role name arguments and  
database names, this does not reflect a broad policy change.  A  
migration to CreateProcess() could lift these two restrictions.  
  
Reviewed by Peter Eisentraut.  
  
Security: CVE-2016-5424  
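The check itself is simple; a sketch (the function name is invented, not pg_dumpall's):

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/* Reject names containing a carriage return or line feed before they
 * can reach a Windows shell command line, where either character would
 * prematurely terminate command processing. */
static bool name_has_no_crlf(const char *name)
{
    return strpbrk(name, "\r\n") == NULL;
}
```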

M src/bin/pg_dump/pg_dumpall.c

Field conninfo strings throughout src/bin/scripts.

commit   : 640768ceb6caec39fd4bcb3efa070fcb17ce6cd2    
  
author   : Noah Misch <[email protected]>    
date     : Mon, 8 Aug 2016 10:07:46 -0400    
  
committer: Noah Misch <[email protected]>    
date     : Mon, 8 Aug 2016 10:07:46 -0400    


These programs nominally accepted conninfo strings, but they would  
proceed to use the original dbname parameter as though it were an  
unadorned database name.  This caused "reindexdb dbname=foo" to issue an  
SQL command that always failed, and other programs printed a conninfo  
string in error messages that purported to print a database name.  Fix  
both problems by using PQdb() to retrieve actual database names.  
Continue to print the full conninfo string when reporting a connection  
failure.  It is informative there, and if the database name is the sole  
problem, the server-side error message will include the name.  Beyond  
those user-visible fixes, this allows a subsequent commit to synthesize  
and use conninfo strings without that implementation detail leaking into  
messages.  As a side effect, the "vacuuming database" message now  
appears after, not before, the connection attempt.  Back-patch to 9.1  
(all supported versions).  
  
Reviewed by Michael Paquier and Peter Eisentraut.  
  
Security: CVE-2016-5424  

M src/bin/scripts/clusterdb.c
M src/bin/scripts/createlang.c
M src/bin/scripts/droplang.c
M src/bin/scripts/reindexdb.c
M src/bin/scripts/vacuumdb.c

Introduce a psql "\connect -reuse-previous=on|off" option.

commit   : 6655c07574f2c966ad17e8cc21e9d399f07266f7    
  
author   : Noah Misch <[email protected]>    
date     : Mon, 8 Aug 2016 10:07:46 -0400    
  
committer: Noah Misch <[email protected]>    
date     : Mon, 8 Aug 2016 10:07:46 -0400    


The decision to reuse values of parameters from a previous connection  
has been based on whether the new target is a conninfo string.  Add this  
means of overriding that default.  This feature arose as one component  
of a fix for security vulnerabilities in pg_dump, pg_dumpall, and  
pg_upgrade, so back-patch to 9.1 (all supported versions).  In 9.3 and  
later, comment paragraphs that required update had already-incorrect  
claims about behavior when no connection is open; fix those problems.  
  
Security: CVE-2016-5424  

M doc/src/sgml/ref/psql-ref.sgml
M src/bin/psql/command.c
M src/bin/psql/startup.c

Sort out paired double quotes in \connect, \password and \crosstabview.

commit   : db951dd1959fb6032c97a81a33139125c85a59fb    
  
author   : Noah Misch <[email protected]>    
date     : Mon, 8 Aug 2016 10:07:46 -0400    
  
committer: Noah Misch <[email protected]>    
date     : Mon, 8 Aug 2016 10:07:46 -0400    


In arguments, these meta-commands wrongly treated each pair as closing  
the double quoted string.  Make the behavior match the documentation.  
This is a compatibility break, but software with untested reliance on the  
documented behavior seems more likely to exist than software reliant on  
today's behavior.  Back-patch to 9.1 (all supported versions).  
  
Reviewed by Tom Lane and Peter Eisentraut.  
  
Security: CVE-2016-5424  
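The documented behavior (a doubled `""` inside a double-quoted argument denotes one literal quote, rather than closing the string) can be sketched as (an illustrative dequoting helper, not the psqlscan.l code):

```c
#include <assert.h>
#include <string.h>

/* Strip one level of double-quoting from a meta-command argument,
 * treating each paired "" inside the quotes as a single literal quote.
 * Assumes dst is large enough for the result. */
static void dequote_ident(char *dst, const char *src)
{
    if (*src == '"')
    {
        src++;
        while (*src)
        {
            if (*src == '"')
            {
                if (src[1] == '"')      /* paired "": one literal quote */
                {
                    *dst++ = '"';
                    src += 2;
                }
                else
                    break;              /* lone ": closing quote */
            }
            else
                *dst++ = *src++;
        }
    }
    else
        while (*src)
            *dst++ = *src++;            /* unquoted: copy as-is */
    *dst = '\0';
}
```

Under the old (buggy) reading, `"my""db"` would have ended at the second quote; under the documented reading it names the identifier `my"db`.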

M src/bin/psql/psqlscan.l

Release notes for 9.5.4, 9.4.9, 9.3.14, 9.2.18, 9.1.23.

commit   : c9bb00fbd4de20ce959cefefaffe70acfffd89e4    
  
author   : Tom Lane <[email protected]>    
date     : Sun, 7 Aug 2016 21:31:01 -0400    
  
committer: Tom Lane <[email protected]>    
date     : Sun, 7 Aug 2016 21:31:01 -0400    


M doc/src/sgml/release-9.1.sgml
M doc/src/sgml/release-9.2.sgml
M doc/src/sgml/release-9.3.sgml
M doc/src/sgml/release-9.4.sgml
M doc/src/sgml/release-9.5.sgml

Fix misestimation of n_distinct for a nearly-unique column with many nulls.

commit   : cb5c14984ad327e52dfb470fde466a5aca7d50a1    
  
author   : Tom Lane <[email protected]>    
date     : Sun, 7 Aug 2016 18:52:02 -0400    
  
committer: Tom Lane <[email protected]>    
date     : Sun, 7 Aug 2016 18:52:02 -0400    


If ANALYZE found no repeated non-null entries in its sample, it set the  
column's stadistinct value to -1.0, intending to indicate that the entries  
are all distinct.  But what this value actually means is that the number  
of distinct values is 100% of the table's rowcount, and thus it was  
overestimating the number of distinct values by however many nulls there  
are.  This could lead to very poor selectivity estimates, as for example  
in a recent report from Andreas Joseph Krogh.  We should discount the  
stadistinct value by whatever we've estimated the nulls fraction to be.  
(That is what will happen if we choose to use a negative stadistinct for  
a column that does have repeated entries, so this code path was just  
inconsistent.)  
  
In addition to fixing the stadistinct entries stored by several different  
ANALYZE code paths, adjust the logic where get_variable_numdistinct()  
forces an "all distinct" estimate on the basis of finding a relevant unique  
index.  Unique indexes don't reject nulls, so there's no reason to assume  
that the null fraction doesn't apply.  
  
Back-patch to all supported branches.  Back-patching is a bit of a judgment  
call, but this problem seems to affect only a few users (else we'd have  
identified it long ago), and it's bad enough when it does happen that  
destabilizing plan choices in a worse direction seems unlikely.  
  
Patch by me, with documentation wording suggested by Dean Rasheed  
  
Report: <VisenaEmail.26.df42f82acae38a58.156463942b8@tc7-visena>  
Discussion: <[email protected]>  
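The arithmetic of the fix, in miniature (a sketch of the convention, not the analyze.c code):

```c
#include <assert.h>

/* A negative stadistinct means "minus the fraction of rows that are
 * distinct"; the planner scales it by the row count.  With 50% nulls
 * and all non-null values distinct in a 1M-row table, the old code
 * stored -1.0 (=> 1,000,000 distinct values, counting the nulls); the
 * fix stores -(1 - null_frac) = -0.5 (=> 500,000), matching the number
 * of non-null rows. */
static double scaled_ndistinct(double stadistinct, double ntuples)
{
    return (stadistinct < 0.0) ? -stadistinct * ntuples : stadistinct;
}
```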

M doc/src/sgml/catalogs.sgml
M src/backend/commands/analyze.c
M src/backend/tsearch/ts_typanalyze.c
M src/backend/utils/adt/rangetypes_typanalyze.c
M src/backend/utils/adt/selfuncs.c
M src/include/catalog/pg_statistic.h

Don't propagate a null subtransaction snapshot up to parent transaction.

commit   : 71dca408c0030ad76044c6b17367c9fbeac511ec    
  
author   : Tom Lane <[email protected]>    
date     : Sun, 7 Aug 2016 13:15:55 -0400    
  
committer: Tom Lane <[email protected]>    
date     : Sun, 7 Aug 2016 13:15:55 -0400    


This oversight could cause logical decoding to fail to decode an outer  
transaction containing changes, if a subtransaction had an XID but no  
actual changes.  Per bug #14279 from Marko Tiikkaja.  Patch by Marko  
based on analysis by Andrew Gierth.  
  
Discussion: <[email protected]>  

M contrib/test_decoding/expected/xact.out
M contrib/test_decoding/sql/xact.sql
M src/backend/replication/logical/reorderbuffer.c

In B-tree page deletion, clean up properly after page deletion failure.

commit   : ee5d1de04f3354c7a87219c2ed481ae51a1bb3b8    
  
author   : Tom Lane <[email protected]>    
date     : Sat, 6 Aug 2016 14:28:37 -0400    
  
committer: Tom Lane <[email protected]>    
date     : Sat, 6 Aug 2016 14:28:37 -0400    


In _bt_unlink_halfdead_page(), we might fail to find an immediate left  
sibling of the target page, perhaps because of corruption of the page  
sibling links.  The code intends to cope with this by just abandoning  
the deletion attempt; but what actually happens is that it fails outright  
due to releasing the same buffer lock twice.  (And error recovery masks  
a second problem, which is possible leakage of a pin on another page.)  
Seems to have been introduced by careless refactoring in commit efada2b8e.  
Since there are multiple cases to consider, let's make releasing the buffer  
lock in the failure case the responsibility of _bt_unlink_halfdead_page()  
not its caller.  
  
Also, avoid fetching the leaf page's left-link again after we've dropped  
lock on the page.  This is probably harmless, but it's not exactly good  
coding practice.  
  
Per report from Kyotaro Horiguchi.  Back-patch to 9.4 where the faulty code  
was introduced.  
  
Discussion: <[email protected]>  

M src/backend/access/nbtree/nbtpage.c

Teach libpq to decode server version correctly from future servers.

commit   : cae0d4f9ba5f34fe27a5806f23df6fa6e2785e35    
  
author   : Tom Lane <[email protected]>    
date     : Fri, 5 Aug 2016 18:58:12 -0400    
  
committer: Tom Lane <[email protected]>    
date     : Fri, 5 Aug 2016 18:58:12 -0400    


Beginning with the next development cycle, PG servers will report two-part  
not three-part version numbers.  Fix libpq so that it will compute the  
correct numeric representation of such server versions for reporting by  
PQserverVersion().  It's desirable to get this into the field and  
back-patched ASAP, so that older clients are more likely to understand the  
new server version numbering by the time any such servers are in the wild.  
  
(The results with an old client would probably not be catastrophic anyway  
for a released server; for example "10.1" would be interpreted as 100100  
which would be wrong in detail but would not likely cause an old client to  
misbehave badly.  But "10devel" or "10beta1" would result in sversion==0  
which at best would result in disabling all use of modern features.)  
  
Extracted from a patch by Peter Eisentraut; comments added by me  
  
Patch: <[email protected]>  
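The decoding rule can be sketched as follows (an illustration of the scheme described above, not libpq's exact code): three-part versions map to major*10000 + minor*100 + rev, two-part versions to major*10000 + minor, and a bare leading number ("10devel", "10beta1") to major*10000 rather than 0.

```c
#include <assert.h>
#include <stdlib.h>

static int decode_server_version(const char *s)
{
    int parts[3] = {0, 0, 0};
    int n = 0;
    char *end;

    while (n < 3)
    {
        long v = strtol(s, &end, 10);
        if (end == s)
            break;              /* no digits: stop at "devel", "beta1", ... */
        parts[n++] = (int) v;
        if (*end != '.')
            break;
        s = end + 1;
    }

    if (n == 0)
        return 0;               /* unparseable */
    if (n >= 3)                 /* old three-part scheme, e.g. "9.5.4" */
        return parts[0] * 10000 + parts[1] * 100 + parts[2];
    return parts[0] * 10000 + parts[1];   /* "10.1", and "10devel" via parts[1]==0 */
}
```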

M src/interfaces/libpq/fe-exec.c

Add note about unused arguments for pg_replication_origin_xact_reset() in docs.

commit   : b07058c213725e9493052ffe0a219323f74e8ed3    
  
author   : Fujii Masao <[email protected]>    
date     : Sat, 6 Aug 2016 03:23:41 +0900    
  
committer: Fujii Masao <[email protected]>    
date     : Sat, 6 Aug 2016 03:23:41 +0900    


In 9.5, two arguments were added to pg_replication_origin_xact_reset()  
by mistake, even though they are not used at all.  We can no longer fix  
this in 9.5 because it would require a catalog version bump.  Instead,  
add a note about the unused arguments to the documentation.  
  
Reviewed-By: Andres Freund  

M doc/src/sgml/func.sgml

Update time zone data files to tzdata release 2016f.

commit   : 3fddd64846ae01886b48c6f4675a5ba93a612fe0    
  
author   : Tom Lane <[email protected]>    
date     : Fri, 5 Aug 2016 12:58:17 -0400    
  
committer: Tom Lane <[email protected]>    
date     : Fri, 5 Aug 2016 12:58:17 -0400    


DST law changes in Kemerovo and Novosibirsk.  Historical corrections for  
Azerbaijan, Belarus, and Morocco.  Asia/Novokuznetsk and Asia/Novosibirsk  
now use numeric time zone abbreviations instead of invented ones.  Zones  
for Antarctic bases and other locations that have been uninhabited for  
portions of the time span known to the tzdata database now report "-00"  
rather than "zzz" as the zone abbreviation for those time spans.  
  
Also, I decided to remove some of the timezone/data/ files that we don't  
use.  At one time that subdirectory was a complete copy of what IANA  
distributes in the tzdata tarballs, but that hasn't been true for a long  
time.  There seems no good reason to keep shipping those specific files  
but not others; they're just bloating our tarballs.  

M src/timezone/data/africa
M src/timezone/data/antarctica
M src/timezone/data/asia
M src/timezone/data/australasia
M src/timezone/data/backzone
M src/timezone/data/europe
D src/timezone/data/iso3166.tab
D src/timezone/data/leapseconds
M src/timezone/data/northamerica
M src/timezone/data/southamerica
D src/timezone/data/yearistype.sh
D src/timezone/data/zone.tab
D src/timezone/data/zone1970.tab
M src/timezone/known_abbrevs.txt
M src/timezone/tznames/Asia.txt
M src/timezone/tznames/Default

Fix bogus coding in WaitForBackgroundWorkerShutdown().

commit   : c1d6ee879285969a93f244e08a3ff2344d2cd7ff    
  
author   : Tom Lane <[email protected]>    
date     : Thu, 4 Aug 2016 16:06:14 -0400    
  
committer: Tom Lane <[email protected]>    
date     : Thu, 4 Aug 2016 16:06:14 -0400    


Some conditions resulted in "return" directly out of a PG_TRY block,  
which left the exception stack dangling, and to add insult to injury  
failed to restore the state of set_latch_on_sigusr1.  
  
This is a bug only in 9.5; in HEAD it was accidentally fixed by commit  
db0f6cad4, which removed the surrounding PG_TRY block.  However, I (tgl)  
chose to apply the patch to HEAD as well, because the old coding was  
gratuitously different from WaitForBackgroundWorkerStartup(), and there  
would indeed have been no bug if it were done like that to start with.  
  
Dmitry Ivanov  
  
Discussion: <1637882.WfYN5gPf1A@abook>  

M src/backend/postmaster/bgworker.c

doc: Remove documentation of nonexistent information schema columns

commit   : 4c275117cc1bd0d25e3515aabf4a3cb8e2e7e515    
  
author   : Peter Eisentraut <[email protected]>    
date     : Wed, 3 Aug 2016 13:45:55 -0400    
  
committer: Peter Eisentraut <[email protected]>    
date     : Wed, 3 Aug 2016 13:45:55 -0400    


These were probably copied in by accident.  
  
From: Clément Prévost <[email protected]>  

M doc/src/sgml/information_schema.sgml

Remove duplicate InitPostmasterChild() call while starting a bgworker.

commit   : 5816f21d8c7311ff5320c20a9d90c9e98649cec4    
  
author   : Tom Lane <[email protected]>    
date     : Tue, 2 Aug 2016 18:39:14 -0400    
  
committer: Tom Lane <[email protected]>    
date     : Tue, 2 Aug 2016 18:39:14 -0400    


This is apparently harmless on Windows, but on Unix it results in an  
assertion failure.  We'd not noticed because this code doesn't get  
used on Unix unless you build with -DEXEC_BACKEND.  Bug was evidently  
introduced by sloppy refactoring in commit 31c453165.  
  
Thomas Munro  
  
Discussion: <CAEepm=1VOnbVx4wsgQFvj94hu9jVt2nVabCr7QiooUSvPJXkgQ@mail.gmail.com>  

M src/backend/postmaster/postmaster.c

doc: OS collation changes can break indexes

commit   : d02b38799d956ae6e8d600f081212448abe3f30e    
  
author   : Bruce Momjian <[email protected]>    
date     : Tue, 2 Aug 2016 17:13:10 -0400    
  
committer: Bruce Momjian <[email protected]>    
date     : Tue, 2 Aug 2016 17:13:10 -0400    


Discussion: [email protected]  
  
Reviewed-by: Christoph Berg  
  
Backpatch-through: 9.1  

M doc/src/sgml/runtime.sgml

Block interrupts during HandleParallelMessages().

commit   : 75c452a755a0c1e1500362b9bd7976a0be2588d2    
  
author   : Tom Lane <[email protected]>    
date     : Tue, 2 Aug 2016 16:39:16 -0400    
  
committer: Tom Lane <[email protected]>    
date     : Tue, 2 Aug 2016 16:39:16 -0400    


As noted by Alvaro, there are CHECK_FOR_INTERRUPTS() calls in the shm_mq.c  
functions called by HandleParallelMessages().  I believe they're all  
unreachable since we always pass nowait = true, but it doesn't seem like  
a great idea to assume that no such call will ever be reachable from  
HandleParallelMessages().  If that did happen, there would be a risk of a  
recursive call to HandleParallelMessages(), which it does not appear to be  
designed for --- for example, there's nothing that would prevent  
out-of-order processing of received messages.  And certainly such cases  
cannot easily be tested.  So let's prevent it by holding off interrupts for  
the duration of the function.  Back-patch to 9.5 which contains identical  
code.  
  
Discussion: <[email protected]>  

M src/backend/access/transam/parallel.c

Sync 9.5 version of access/transam/parallel.c with HEAD.

commit   : 93ac14efb465d3160a77b5f75dad8e4721cee41a    
  
author   : Tom Lane <[email protected]>    
date     : Tue, 2 Aug 2016 16:09:09 -0400    
  
committer: Tom Lane <[email protected]>    
date     : Tue, 2 Aug 2016 16:09:09 -0400    


This back-patches commit a5fe473ad (notably, marking ParallelMessagePending  
as volatile, which is not particularly optional).  I also back-patched some  
previous cosmetic changes to remove unnecessary diffs between the two  
branches.  I'm unsure how much of this code is actually reachable in 9.5,  
but to the extent that it is reachable, it needs to be maintained, and  
minimizing cross-branch diffs will make that easier.  

M src/backend/access/transam/parallel.c
M src/include/access/parallel.h

Fix pg_dump's handling of public schema with both -c and -C options.

commit   : 89c30d1133be5ba4da6098da2ee12114e527f03b    
  
author   : Tom Lane <[email protected]>    
date     : Tue, 2 Aug 2016 12:48:51 -0400    
  
committer: Tom Lane <[email protected]>    
date     : Tue, 2 Aug 2016 12:48:51 -0400    


Since -c plus -C requests dropping and recreating the target database  
as a whole, not dropping individual objects in it, we should assume that  
the public schema already exists and need not be created.  The previous  
coding considered only the state of the -c option, so it would emit  
"CREATE SCHEMA public" anyway, leading to an unexpected error in restore.  
  
Back-patch to 9.2.  Older versions did not accept -c with -C so the  
issue doesn't arise there.  (The logic being patched here dates to 8.0,  
cf commit 2193121fa, so it's not really wrong that it didn't consider  
the case at the time.)  
  
Note that versions before 9.6 will still attempt to emit REVOKE/GRANT  
on the public schema; but that happens without -c/-C too, and doesn't  
seem to be the focus of this complaint.  I considered extending this  
stanza to also skip the public schema's ACL, but that would be a  
misfeature, as it'd break cases where users intentionally changed that  
ACL.  The real fix for this aspect is Stephen Frost's work to not dump  
built-in ACLs, and that's not going to get back-ported.  
  
Per bugs #13804 and #14271.  Solution found by David Johnston and later  
rediscovered by me.  
  
Report: <[email protected]>  
Report: <[email protected]>  

M src/bin/pg_dump/pg_backup_archiver.c

doc: Whitespace fixes in man pages

commit   : b2148e176f28ad3a7e468742b27eb6fdff348b9a    
  
author   : Peter Eisentraut <[email protected]>    
date     : Tue, 2 Aug 2016 12:35:35 -0400    
  
committer: Peter Eisentraut <[email protected]>    
date     : Tue, 2 Aug 2016 12:35:35 -0400    


M doc/src/sgml/ref/insert.sgml
M doc/src/sgml/ref/pgupgrade.sgml

Don't CHECK_FOR_INTERRUPTS between WaitLatch and ResetLatch.

commit   : 8ef3d9fae496d80fc1d100d49b46891ae9c2c0fc    
  
author   : Tom Lane <[email protected]>    
date     : Mon, 1 Aug 2016 15:13:53 -0400    
  
committer: Tom Lane <[email protected]>    
date     : Mon, 1 Aug 2016 15:13:53 -0400    


This coding pattern creates a race condition, because if an interesting  
interrupt happens after we've checked InterruptPending but before we reset  
our latch, the latch-setting done by the signal handler would get lost,  
and then we might block at WaitLatch in the next iteration without ever  
noticing the interrupt condition.  You can put the CHECK_FOR_INTERRUPTS  
before WaitLatch or after ResetLatch, but not between them.  
  
Aside from fixing the bugs, add some explanatory comments to latch.h  
to perhaps forestall the next person from making the same mistake.  
  
In HEAD, also replace gather_readnext's direct call of  
HandleParallelMessages with CHECK_FOR_INTERRUPTS.  It does not seem clean  
or useful for this one caller to bypass ProcessInterrupts and go straight  
to HandleParallelMessages; not least because that fails to consider the  
InterruptPending flag, resulting in useless work both here  
(if InterruptPending isn't set) and in the next CHECK_FOR_INTERRUPTS call  
(if it is).  
  
This thinko seems to have been introduced in the initial coding of  
storage/ipc/shm_mq.c (commit ec9037df2), and then blindly copied into all  
the subsequent parallel-query support logic.  Back-patch relevant hunks  
to 9.4 to extirpate the error everywhere.  
  
Discussion: <[email protected]>  
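The safe loop shape can be sketched with trivial stand-ins for the latch API (only the ordering is the point; the real primitives live in the latch code):

```c
#include <assert.h>
#include <stdbool.h>

static bool latch_is_set;
static bool InterruptPending;

static void ResetLatch(void) { latch_is_set = false; }
static void WaitLatch(void)  { (void) latch_is_set; /* would sleep until set */ }
static void CHECK_FOR_INTERRUPTS(void)
{
    if (InterruptPending)
        InterruptPending = false;   /* service the interrupt */
}

/* Check for interrupts before WaitLatch or after ResetLatch, never
 * between them: a check in that gap lets a signal arriving just after
 * it set the latch, have the set wiped by ResetLatch, and leave the
 * next WaitLatch sleeping through the interrupt. */
static void worker_loop(int iterations)
{
    while (iterations-- > 0)
    {
        CHECK_FOR_INTERRUPTS();     /* before WaitLatch: safe */
        WaitLatch();
        ResetLatch();
        CHECK_FOR_INTERRUPTS();     /* after ResetLatch: also safe */
        /* ... do one unit of work ... */
    }
}
```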

M src/backend/libpq/pqmq.c
M src/backend/storage/ipc/shm_mq.c
M src/include/storage/latch.h
M src/test/modules/test_shm_mq/setup.c
M src/test/modules/test_shm_mq/test.c

Fixed array checking code for "unsigned long long" datatypes in libecpg.

commit   : dc6b20c6bed779399ddc8da53de3e1749666a098    
  
author   : Michael Meskes <[email protected]>    
date     : Mon, 1 Aug 2016 06:36:27 +0200    
  
committer: Michael Meskes <[email protected]>    
date     : Mon, 1 Aug 2016 06:36:27 +0200    


M src/interfaces/ecpg/ecpglib/data.c

Fix pg_basebackup so that it accepts 0 as a valid compression level.

commit   : 928e92fda30fa61688534802c849797c9986dc4c    
  
author   : Fujii Masao <[email protected]>    
date     : Mon, 1 Aug 2016 17:36:14 +0900    
  
committer: Fujii Masao <[email protected]>    
date     : Mon, 1 Aug 2016 17:36:14 +0900    


The help message for pg_basebackup specifies that the numbers 0 through 9  
are accepted as valid values of the -Z option.  But previously -Z 0 was  
rejected as an invalid compression level.  
  
Per discussion, it's better to make pg_basebackup treat 0 as valid  
compression level meaning no compression, like pg_dump.  
  
Back-patch to all supported versions.  
  
Reported-By: Jeff Janes  
Reviewed-By: Amit Kapila  
Discussion: CAMkU=1x+GwjSayc57v6w87ij6iRGFWt=hVfM0B64b1_bPVKRqg@mail.gmail.com  
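The corrected validation reduces to a range check (a sketch, not the pg_basebackup code):

```c
#include <assert.h>
#include <stdbool.h>

/* 0 is a valid level meaning "no compression", matching the 0..9 range
 * the help text advertises; the buggy form rejected 0. */
static bool valid_compress_level(int level)
{
    return level >= 0 && level <= 9;
}
```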

M doc/src/sgml/ref/pg_basebackup.sgml
M src/bin/pg_basebackup/pg_basebackup.c

Doc: remove claim that hash index creation depends on effective_cache_size.

commit   : c0782a147390d11eab8387fd612b15d5ec6d5240    
  
author   : Tom Lane <[email protected]>    
date     : Sun, 31 Jul 2016 18:32:34 -0400    
  
committer: Tom Lane <[email protected]>    
date     : Sun, 31 Jul 2016 18:32:34 -0400    


This text was added by commit ff213239c, and not long thereafter obsoleted  
by commit 4adc2f72a (which made the test depend on NBuffers instead); but  
nobody noticed the need for an update.  Commit 9563d5b5e adds some further  
dependency on maintenance_work_mem, but the existing verbiage seems to  
cover that with about as much precision as we really want here.  Let's  
just take it all out rather than leaving ourselves open to more errors of  
omission in future.  (That solution makes this change back-patchable, too.)  
  
Noted by Peter Geoghegan.  
  
Discussion: <CAM3SWZRVANbj9GA9j40fAwheQCZQtSwqTN1GBTVwRrRbmSf7cg@mail.gmail.com>  

M doc/src/sgml/ref/create_index.sgml

pgbench docs: fix incorrect "last two" fields text

commit   : b57a0a9d72cb51acffddbf06345134414d79309a    
  
author   : Bruce Momjian <[email protected]>    
date     : Sat, 30 Jul 2016 16:59:34 -0400    
  
committer: Bruce Momjian <[email protected]>    
date     : Sat, 30 Jul 2016 16:59:34 -0400    


Reported-by: Alexander Law  
  
Discussion: [email protected]  
  
Backpatch-through: 9.4  

M doc/src/sgml/ref/pgbench.sgml

doc: apply hyphen fix that was not backpatched

commit   : 0343d663348113ab4293e132c87feba5c36d5bb8    
  
author   : Bruce Momjian <[email protected]>    
date     : Sat, 30 Jul 2016 14:52:17 -0400    
  
committer: Bruce Momjian <[email protected]>    
date     : Sat, 30 Jul 2016 14:52:17 -0400    


Head patch was 42ec6c2da699e8e0b1774988fa97297a2cdf716c.  
  
Reported-by: Alexander Law  
  
Discussion: [email protected]  
  
Backpatch-through: 9.1  

M doc/src/sgml/runtime.sgml

Fix pq_putmessage_noblock() to not block.

commit   : c8966a925e139df2176958667b4a19c068d617aa    
  
author   : Tom Lane <[email protected]>    
date     : Fri, 29 Jul 2016 12:52:57 -0400    
  
committer: Tom Lane <[email protected]>    
date     : Fri, 29 Jul 2016 12:52:57 -0400    


An evident copy-and-pasteo in commit 2bd9e412f broke the non-blocking  
aspect of pq_putmessage_noblock(), causing it to behave identically to  
pq_putmessage().  That function is nowadays used only in walsender.c,  
so that the net effect was to cause walsenders to hang up waiting for  
the receiver in situations where they should not.  
  
Kyotaro Horiguchi  
  
Patch: <[email protected]>  

M src/include/libpq/libpq.h

Guard against empty buffer in gets_fromFile()'s check for a newline.

commit   : 67fb608fe3b086e8218ff6560c11274ab56acf10    
  
author   : Tom Lane <[email protected]>    
date     : Thu, 28 Jul 2016 18:57:24 -0400    
  
committer: Tom Lane <[email protected]>    
date     : Thu, 28 Jul 2016 18:57:24 -0400    


Per the fgets() specification, it cannot return without reading some data  
unless it reports EOF or error.  So the code here assumed that the data  
buffer would necessarily be nonempty when we go to check for a newline  
having been read.  However, Agostino Sarubbo noticed that this could fail  
to be true if the first byte of the data is a NUL (\0).  The fgets() API  
doesn't really work for embedded NULs, which is something I don't feel  
any great need for us to worry about since we generally don't allow NULs  
in SQL strings anyway.  But we should not access off the end of our own  
buffer if the case occurs.  Normally this would just be a harmless read,  
but if you were unlucky the byte before the buffer would contain '\n'  
and we'd overwrite it with '\0', and if you were really unlucky that  
might be valuable data and psql would crash.  
  
Agostino reported this to pgsql-security, but after discussion we concluded  
that it isn't worth treating as a security bug; if you can control the  
input to psql you can do far more interesting things than just maybe-crash  
it.  Nonetheless, it is a bug, so back-patch to all supported versions.  

M src/bin/psql/input.c

Fix assorted fallout from IS [NOT] NULL patch.

commit   : 1e2f96f0a56c2d67a84ffb58383e6354546cf96f    
  
author   : Tom Lane <[email protected]>    
date     : Thu, 28 Jul 2016 16:09:15 -0400    
  
committer: Tom Lane <[email protected]>    
date     : Thu, 28 Jul 2016 16:09:15 -0400    


Commits 4452000f3 et al established semantics for NullTest.argisrow that  
are a bit different from its initial conception: rather than being merely  
a cache of whether we've determined the input to have composite type,  
the flag now has the further meaning that we should apply field-by-field  
testing as per the standard's definition of IS [NOT] NULL.  If argisrow  
is false and yet the input has composite type, the construct instead has  
the semantics of IS [NOT] DISTINCT FROM NULL.  Update the comments in  
primnodes.h to clarify this, and fix ruleutils.c and deparse.c to print  
such cases correctly.  In the case of ruleutils.c, this merely results in  
cosmetic changes in EXPLAIN output, since the case can't currently arise  
in stored rules.  However, it represents a live bug for deparse.c, which  
would formerly have sent a remote query that had semantics different  
from the local behavior.  (From the user's standpoint, this means that  
testing a remote nested-composite column for null-ness could have had  
unexpected recursive behavior much like that fixed in 4452000f3.)  
  
In a related but somewhat independent fix, make plancat.c set argisrow  
to false in all NullTest expressions constructed to represent "attnotnull"  
constructs.  Since attnotnull is actually enforced as a simple null-value  
check, this is a more accurate representation of the semantics; we were  
previously overpromising what it meant for composite columns, which might  
possibly lead to incorrect planner optimizations.  (It seems that what the  
SQL spec expects a NOT NULL constraint to mean is an IS NOT NULL test, so  
arguably we are violating the spec and should fix attnotnull to do the  
other thing.  If we ever do, this part should get reverted.)  
  
Back-patch, same as the previous commit.  
  
Discussion: <[email protected]>  

M contrib/postgres_fdw/deparse.c
M src/backend/optimizer/util/plancat.c
M src/backend/utils/adt/ruleutils.c
M src/include/nodes/primnodes.h
M src/test/regress/expected/rowtypes.out

Improve documentation about CREATE TABLE ... LIKE.

commit   : 884aec4f8505d9b4c767cf0f2456e12b843688cc    
  
author   : Tom Lane <[email protected]>    
date     : Thu, 28 Jul 2016 13:26:59 -0400    
  
committer: Tom Lane <[email protected]>    
date     : Thu, 28 Jul 2016 13:26:59 -0400    


The docs failed to explain that LIKE INCLUDING INDEXES would not preserve  
the names of indexes and associated constraints.  Also, it wasn't mentioned  
that EXCLUDE constraints would be copied by this option.  The latter  
oversight seems enough of a documentation bug to justify back-patching.  
  
In passing, do some minor copy-editing in the same area, and add an entry  
for LIKE under "Compatibility", since it's not exactly a faithful  
implementation of the standard's feature.  
  
Discussion: <[email protected]>  

M doc/src/sgml/ref/create_table.sgml
M src/backend/parser/parse_utilcmd.c

Register atexit hook only once in pg_upgrade.

commit   : 93b99d3b6aec67a5eac30c67c511dbb03dd2f72c    
  
author   : Tom Lane <[email protected]>    
date     : Thu, 28 Jul 2016 11:39:10 -0400    
  
committer: Tom Lane <[email protected]>    
date     : Thu, 28 Jul 2016 11:39:10 -0400    


start_postmaster() registered stop_postmaster_atexit as an atexit(3)  
callback each time through, although the obvious intention was to do  
so only once per program run.  The extra registrations were harmless,  
so long as we didn't exceed ATEXIT_MAX, but still it's a bug.  
  
Artur Zakirov, with bikeshedding by Kyotaro Horiguchi and me  
  
Discussion: <[email protected]>  

M src/bin/pg_upgrade/server.c

Fix incorrect description of udt_privileges view in documentation.

commit   : 6b8a89e646be8a25771e292e09550aa1abe7019d    
  
author   : Fujii Masao <[email protected]>    
date     : Thu, 28 Jul 2016 22:34:42 +0900    
  
committer: Fujii Masao <[email protected]>    
date     : Thu, 28 Jul 2016 22:34:42 +0900    


The description of udt_privileges view contained an incorrect copy-pasted word.  
  
Back-patch to 9.2 where udt_privileges view was added.  
  
Author: Alexander Law  

M doc/src/sgml/information_schema.sgml

Fix constant-folding of ROW(...) IS [NOT] NULL with composite fields.

commit   : d2ef7758d2d2175509b2f49f7049e06ccd81fd57    
  
author   : Tom Lane <[email protected]>    
date     : Tue, 26 Jul 2016 15:25:02 -0400    
  
committer: Tom Lane <[email protected]>    
date     : Tue, 26 Jul 2016 15:25:02 -0400    


The SQL standard appears to specify that IS [NOT] NULL's tests of field  
nullness are non-recursive, ie, we shouldn't consider that a composite  
field with value ROW(NULL,NULL) is null for this purpose.  
ExecEvalNullTest got this right, but eval_const_expressions did not,  
leading to weird inconsistencies depending on whether the expression  
was such that the planner could apply constant folding.  
  
Also, adjust the docs to mention that IS [NOT] DISTINCT FROM NULL can be  
used as a substitute test if a simple null check is wanted for a rowtype  
argument.  That motivated reordering things so that IS [NOT] DISTINCT FROM  
is described before IS [NOT] NULL.  In HEAD, I went a bit further and added  
a table showing all the comparison-related predicates.  
  
Per bug #14235.  Back-patch to all supported branches, since it's certainly  
undesirable that constant-folding should change the semantics.  
  
Report and patch by Andrew Gierth; assorted wordsmithing and revised  
regression test cases by me.  
  
Report: <[email protected]>  

M doc/src/sgml/func.sgml
M src/backend/executor/execQual.c
M src/backend/optimizer/util/clauses.c
M src/test/regress/expected/rowtypes.out
M src/test/regress/sql/rowtypes.sql

Make the AIX case of Makefile.shlib safe for parallel make.

commit   : cf35406f9bce70c5f52b11122bdfb245c680000b    
  
author   : Noah Misch <[email protected]>    
date     : Sat, 23 Jul 2016 20:30:03 -0400    
  
committer: Noah Misch <[email protected]>    
date     : Sat, 23 Jul 2016 20:30:03 -0400    


Use our typical approach, from src/backend/parser.  Back-patch to 9.1  
(all supported versions).  

M src/Makefile.shlib

Fix regression tests to work in Welsh locale.

commit   : 2aa2533f2818687e1ae5a3238dd70b175c3cdfb3    
  
author   : Tom Lane <[email protected]>    
date     : Fri, 22 Jul 2016 15:41:40 -0400    
  
committer: Tom Lane <[email protected]>    
date     : Fri, 22 Jul 2016 15:41:40 -0400    


Welsh (cy_GB) apparently sorts 'dd' after 'f', creating problems  
analogous to the odd sorting of 'aa' in Danish.  Adjust regression  
test case to not use data that provokes that.  
  
Jeff Janes  
  
Patch: <CAMkU=1zx-pqcfSApL2pYDQurPOCfcYU0wJorsmY1OrYPiXRbLw@mail.gmail.com>  

M src/test/regress/expected/rowsecurity.out
M src/test/regress/sql/rowsecurity.sql

Make contrib regression tests safe for Danish locale.

commit   : d365dc3d1bdb5b54c61a8dfaf2d2fd948419c752    
  
author   : Tom Lane <[email protected]>    
date     : Thu, 21 Jul 2016 16:52:36 -0400    
  
committer: Tom Lane <[email protected]>    
date     : Thu, 21 Jul 2016 16:52:36 -0400    


In btree_gin and citext, avoid some not-particularly-interesting  
dependencies on the sorting of 'aa'.  In tsearch2, use COLLATE "C" to  
remove an uninteresting dependency on locale sort order (and thereby  
allow removal of a variant expected-file).  
  
Also, in citext, avoid assuming that lower('I') = 'i'.  This isn't relevant  
to Danish but it does fail in Turkish.  

M contrib/btree_gin/expected/bytea.out
M contrib/btree_gin/expected/text.out
M contrib/btree_gin/expected/varchar.out
M contrib/btree_gin/sql/bytea.sql
M contrib/btree_gin/sql/text.sql
M contrib/btree_gin/sql/varchar.sql
M contrib/citext/expected/citext.out
M contrib/citext/expected/citext_1.out
M contrib/citext/sql/citext.sql
M contrib/tsearch2/expected/tsearch2.out
D contrib/tsearch2/expected/tsearch2_1.out
M contrib/tsearch2/sql/tsearch2.sql

Make pltcl regression tests safe for Danish locale.

commit   : 95e8b44f03e0f870678a507413134e7d86e684db    
  
author   : Tom Lane <[email protected]>    
date     : Thu, 21 Jul 2016 14:24:07 -0400    
  
committer: Tom Lane <[email protected]>    
date     : Thu, 21 Jul 2016 14:24:07 -0400    


Another peculiarity of Danish locale is that it has an unusual idea  
of how to sort upper vs. lower case.  One of the pltcl test cases has  
an issue with that.  Now that COLLATE works in all supported branches,  
we can just change the test to be locale-independent, and get rid of  
the variant expected file that used to support non-C locales.  

M src/pl/tcl/expected/pltcl_queries.out
D src/pl/tcl/expected/pltcl_queries_1.out
M src/pl/tcl/sql/pltcl_queries.sql

Make core regression tests safe for Danish locale.

commit   : fd507d542eeb537159b00c3971203d2f132b0262    
  
author   : Tom Lane <[email protected]>    
date     : Thu, 21 Jul 2016 13:11:00 -0400    
  
committer: Tom Lane <[email protected]>    
date     : Thu, 21 Jul 2016 13:11:00 -0400    


Some tests added in 9.5 depended on 'aa' sorting before 'bb', which  
doesn't hold true in Danish.  Use slightly different test data to  
avoid the problem.  
  
Jeff Janes  
  
Report: <CAMkU=1w-cEDbA+XHdNb=YS_4wvZbs66Ni9KeSJKAJGNJyOsgQw@mail.gmail.com>  

M src/test/regress/expected/brin.out
M src/test/regress/expected/rowsecurity.out
M src/test/regress/sql/brin.sql
M src/test/regress/sql/rowsecurity.sql

Fix typos

commit   : 15c2992e899a6756ab780bae00f2a04925354845    
  
author   : Magnus Hagander <[email protected]>    
date     : Wed, 20 Jul 2016 10:39:18 +0200    
  
committer: Magnus Hagander <[email protected]>    
date     : Wed, 20 Jul 2016 10:39:18 +0200    


Alexander Law  

M doc/src/sgml/ref/pg_receivexlog.sgml
M doc/src/sgml/ref/pg_recvlogical.sgml

Remove very-obsolete estimates of shmem usage from postgresql.conf.sample.

commit   : 350db87203268ba89213527eb24ec01afc098637    
  
author   : Tom Lane <[email protected]>    
date     : Tue, 19 Jul 2016 18:41:30 -0400    
  
committer: Tom Lane <[email protected]>    
date     : Tue, 19 Jul 2016 18:41:30 -0400    


runtime.sgml used to contain a table of estimated shared memory consumption  
rates for max_connections and some other GUCs.  Commit 390bfc643 removed  
that on the well-founded grounds that (a) we weren't maintaining the  
entries well and (b) it no longer mattered so much once we got out from  
under SysV shmem limits.  But it missed that there were even-more-obsolete  
versions of some of those numbers in comments in postgresql.conf.sample.  
Remove those too.  Back-patch to 9.3 where the aforesaid commit went in.  

M src/backend/utils/misc/postgresql.conf.sample

Fix MSVC build for changes in zic.

commit   : 0aabe80c6f7885dd74e451198587f821c6b8ae18    
  
author   : Tom Lane <[email protected]>    
date     : Tue, 19 Jul 2016 17:53:31 -0400    
  
committer: Tom Lane <[email protected]>    
date     : Tue, 19 Jul 2016 17:53:31 -0400    


Ooops, I missed back-patching commit f5f15ea6a along with the other stuff.  

M src/tools/msvc/Mkvcbuild.pm

Sync back-branch copies of the timezone code with IANA release tzcode2016c.

commit   : 19d477aa681b4927f95824d724a4197c696f8c75    
  
author   : Tom Lane <[email protected]>    
date     : Tue, 19 Jul 2016 15:59:36 -0400    
  
committer: Tom Lane <[email protected]>    
date     : Tue, 19 Jul 2016 15:59:36 -0400    


Back-patch commit 1c1a7cbd6a1600c9, along with subsequent portability  
fixes, into all active branches.  Also, back-patch commits 696027727 and  
596857043 (addition of zic -P option) into 9.1 and 9.2, just to reduce  
differences between the branches.  src/timezone/ is now largely identical  
in all active branches, except that in 9.1, pgtz.c retains the  
initial-timezone-selection code that was moved over to initdb in 9.2.  
  
Ordinarily we wouldn't risk this much code churn in back branches, but it  
seems necessary in this case, because among the changes are two feature  
additions in the "zic" zone data file compiler (a larger limit on the  
number of allowed DST transitions, and addition of a "%z" escape in zone  
abbreviations).  IANA have not yet started to use those features in their  
tzdata files, but presumably they will before too long.  If we don't update  
then we'll be unable to adopt new timezone data.  Also, installations built  
with --with-system-tzdata (which includes most distro-supplied builds, I  
believe) might fail even if we don't update our copies of the data files.  
There are assorted bug fixes too, mostly affecting obscure timezones or  
post-2037 dates.  
  
Discussion: <[email protected]>  

M src/bin/initdb/findtimezone.c
M src/timezone/Makefile
M src/timezone/README
D src/timezone/ialloc.c
M src/timezone/localtime.c
M src/timezone/pgtz.c
M src/timezone/pgtz.h
M src/timezone/private.h
D src/timezone/scheck.c
M src/timezone/strftime.c
M src/timezone/tzfile.h
M src/timezone/zic.c

Doc: improve discussion of plpgsql's GET DIAGNOSTICS, other minor fixes.

commit   : bdeed0944fadff3ea394d361d0137997fb4db953    
  
author   : Tom Lane <[email protected]>    
date     : Mon, 18 Jul 2016 16:52:06 -0400    
  
committer: Tom Lane <[email protected]>    
date     : Mon, 18 Jul 2016 16:52:06 -0400    


9.4 added a second description of GET DIAGNOSTICS that was totally  
independent of the existing one, resulting in each description lying to the  
extent that it claimed the set of status items it described was complete.  
Fix that, and do some minor markup improvement.  
  
Also some other small fixes per bug #14258 from Dilian Palauzov.  
  
Discussion: <[email protected]>  

M doc/src/sgml/plpgsql.sgml
M doc/src/sgml/release-9.4.sgml

Use correct symbol for minimum int64 value

commit   : fb279fc7a3e65d15be00f220543956433fdba844    
  
author   : Peter Eisentraut <[email protected]>    
date     : Sun, 17 Jul 2016 09:15:37 -0400    
  
committer: Peter Eisentraut <[email protected]>    
date     : Sun, 17 Jul 2016 09:15:37 -0400    


The old code used SEQ_MINVALUE to get the smallest int64 value.  This  
was done as a convenience to avoid having to deal with INT64_IS_BUSTED,  
but that is obsolete now.  Also, it is incorrect because the smallest  
int64 value is actually SEQ_MINVALUE-1.  Fix by using PG_INT64_MIN.  

M contrib/btree_gin/btree_gin.c

Fix crash in close_ps() for NaN input coordinates.

commit   : 884bae143c235981e53eae4ea56c47060740e3ee    
  
author   : Tom Lane <[email protected]>    
date     : Sat, 16 Jul 2016 14:42:37 -0400    
  
committer: Tom Lane <[email protected]>    
date     : Sat, 16 Jul 2016 14:42:37 -0400    


The Assert() here seems unreasonably optimistic.  Andreas Seltenreich  
found that it could fail with NaNs in the input geometries, and it  
seems likely to me that it might fail in corner cases due to roundoff  
error, even for ordinary input values.  As a band-aid, make the function  
return SQL NULL instead of crashing.  
  
Report: <[email protected]>  

M src/backend/utils/adt/geo_ops.c

Fix torn-page, unlogged xid and further risks from heap_update().

commit   : 1f9534b49ce3ab02aac65c4033218cc3348d17b8    
  
author   : Andres Freund <[email protected]>    
date     : Fri, 15 Jul 2016 17:49:48 -0700    
  
committer: Andres Freund <[email protected]>    
date     : Fri, 15 Jul 2016 17:49:48 -0700    


When heap_update needs to look for a page for the new tuple version,  
because the current one doesn't have sufficient free space, or when  
columns have to be processed by the tuple toaster, it has to release the  
lock on the old page during that. Otherwise there'd be lock ordering and  
lock nesting issues.  
  
To prevent concurrent sessions from trying to update / delete / lock the  
tuple while the page's content lock is released, the tuple's xmax is set  
to the current session's xid.  
  
That unfortunately was done without any WAL logging, thereby violating  
the rule that no XIDs may appear on disk without a corresponding WAL  
record.  If the database were to crash or fail over while the page-level  
lock is released, and some activity led to the page being written out  
to disk, the xid could end up being reused, potentially leading to the  
row becoming invisible.  
  
There might be additional risks from t_ctid not pointing at the tuple  
itself and the appropriate lock infomask fields not being set.  
  
To fix, compute the appropriate xmax/infomask combination for locking  
the tuple, and perform WAL logging using the existing XLOG_HEAP_LOCK  
record. That allows the fix to be backpatched.  
  
This issue has existed for a long time.  There appear to have been  
partial attempts at preventing these dangers, but they were never fully  
implemented, and were removed a long time ago, in  
11919160 (cf. HEAP_XMAX_UNLOGGED).  
  
In master / 9.6, there's an additional issue, namely that the  
visibilitymap's freeze bit isn't reset at that point yet. Since that's a  
new issue, introduced only in a892234f830, that'll be fixed in a  
separate commit.  
  
Author: Masahiko Sawada and Andres Freund  
Reported-By: Different aspects by Thomas Munro, Noah Misch, and others  
Discussion: CAEepm=3fWAbWryVW9swHyLTY4sXVf0xbLvXqOwUoDiNCx9mBjQ@mail.gmail.com  
Backpatch: 9.1/all supported versions  

M src/backend/access/heap/heapam.c

Make HEAP_LOCK/HEAP2_LOCK_UPDATED replay reset HEAP_XMAX_INVALID.

commit   : b33e81cba8ae9c86639d666a24b6221a797ec594    
  
author   : Andres Freund <[email protected]>    
date     : Fri, 15 Jul 2016 14:37:06 -0700    
  
committer: Andres Freund <[email protected]>    
date     : Fri, 15 Jul 2016 14:37:06 -0700    


0ac5ad5 started to compress infomask bits in WAL records. Unfortunately  
the replay routines for XLOG_HEAP_LOCK/XLOG_HEAP2_LOCK_UPDATED forgot to  
reset the HEAP_XMAX_INVALID (and some other) hint bits.  
  
Luckily that's not problematic in the majority of cases, because after a  
crash, or on a standby, row locks aren't meaningful.  Unfortunately that  
does not hold true in the presence of prepared transactions.  This means  
that after a crash, or after promotion, row-level locks held by a  
prepared, but not yet committed, transaction might not be enforced.  
  
Discussion: [email protected]  
Backpatch: 9.3, the oldest branch on which 0ac5ad5 is present.  

M src/backend/access/heap/heapam.c

Avoid serializability errors when locking a tuple with a committed update

commit   : 649dd1b58b90754c67be86e99bd3af1bc3ab2c99    
  
author   : Alvaro Herrera <[email protected]>    
date     : Fri, 15 Jul 2016 14:17:20 -0400    
  
committer: Alvaro Herrera <[email protected]>    
date     : Fri, 15 Jul 2016 14:17:20 -0400    


When key-share locking a tuple that has been not-key-updated, and the  
update is a committed transaction, in some cases we raised  
serializability errors:  
    ERROR:  could not serialize access due to concurrent update  
  
Because the key-share doesn't conflict with the update, the error is  
unnecessary and inconsistent with the case that the update hasn't  
committed yet.  This causes problems for some usage patterns, even if it  
can be claimed that it's sufficient to retry the aborted transaction:  
given a steady stream of updating transactions and a long locking  
transaction, the long transaction can be starved indefinitely despite  
multiple retries.  
  
To fix, we recognize that HeapTupleSatisfiesUpdate can return  
HeapTupleUpdated when an updating transaction has committed, and that we  
need to deal with that case exactly as if it were a non-committed  
update: verify whether the two operations conflict, and if not, carry on  
normally.  If they do conflict, however, there is a difference: in the  
HeapTupleBeingUpdated case we can just sleep until the concurrent  
transaction is gone, while in the HeapTupleUpdated case this is not  
possible and we must raise an error instead.  
  
Per trouble report from Olivier Dony.  
  
In addition to a couple of test cases that verify the changed behavior,  
I added a test case to verify the behavior that remains unchanged,  
namely that errors are raised when an update that modifies the key is  
used.  That must still generate serializability errors.  One  
pre-existing test case changes behavior; per discussion, the new  
behavior is actually the desired one.  
  
Discussion: https://www.postgresql.org/message-id/[email protected]  
  https://www.postgresql.org/message-id/[email protected]  
  
Backpatch to 9.3, where the problem appeared.  

M src/backend/access/heap/heapam.c
A src/test/isolation/expected/lock-committed-keyupdate.out
A src/test/isolation/expected/lock-committed-update.out
M src/test/isolation/expected/lock-update-delete.out
A src/test/isolation/expected/update-locked-tuple.out
M src/test/isolation/isolation_schedule
A src/test/isolation/specs/lock-committed-keyupdate.spec
A src/test/isolation/specs/lock-committed-update.spec
A src/test/isolation/specs/update-locked-tuple.spec

doc: Fix typos

commit   : 3246135c478aac417aefb122d2299433eb1ff8f8    
  
author   : Peter Eisentraut <[email protected]>    
date     : Thu, 14 Jul 2016 22:28:41 -0400    
  
committer: Peter Eisentraut <[email protected]>    
date     : Thu, 14 Jul 2016 22:28:41 -0400    


From: Alexander Law <[email protected]>  

M doc/src/sgml/btree-gist.sgml
M doc/src/sgml/install-windows.sgml
M doc/src/sgml/ref/pg_xlogdump.sgml
M doc/src/sgml/sepgsql.sgml

Fix GiST index build for NaN values in geometric types.

commit   : 50354637694b0040e33f91f354c53227de9df9a6    
  
author   : Tom Lane <[email protected]>    
date     : Thu, 14 Jul 2016 18:46:00 -0400    
  
committer: Tom Lane <[email protected]>    
date     : Thu, 14 Jul 2016 18:46:00 -0400    


GiST index build could go into an infinite loop when presented with boxes  
(or points, circles or polygons) containing NaN component values.  This  
happened essentially because the code assumed that x == x is true for any  
"double" value x; but it's not true for NaNs.  The looping behavior was not  
the only problem though: we also attempted to sort the items using simple  
double comparisons.  Since NaNs violate the trichotomy law, qsort could  
(in principle at least) get arbitrarily confused and mess up the sorting of  
ordinary values as well as NaNs.  And we based splitting choices on box size  
calculations that could produce NaNs, again resulting in undesirable  
behavior.  
  
To fix, replace all comparisons of doubles in this logic with  
float8_cmp_internal, which is NaN-aware and is careful to sort NaNs  
consistently, higher than any non-NaN.  Also rearrange the box size  
calculation to not produce NaNs; instead it should produce an infinity  
for a box with NaN on one side and not-NaN on the other.  
  
I don't by any means claim that this solves all problems with NaNs in  
geometric values, but it should at least make GiST index insertion work  
reliably with such data.  It's likely that the index search side of things  
still needs some work, and probably regular geometric operations too.  
But with this patch we're laying down a convention for how such cases  
ought to behave.  
  
Per bug #14238 from Guang-Dih Lei.  Back-patch to 9.2; the code used before  
commit 7f3bd86843e5aad8 is quite different and doesn't lock up on my simple  
test case, nor on the submitter's dataset.  
  
Report: <[email protected]>  
Discussion: <[email protected]>  

M src/backend/access/gist/gistproc.c
M src/backend/utils/adt/float.c
M src/include/utils/builtins.h

Fix obsolete header-file reference in pg_buffercache docs.

commit   : b8a5780c1d4b86b9adf52a8c426ad592ad1d97a1    
  
author   : Tom Lane <[email protected]>    
date     : Wed, 13 Jul 2016 11:17:15 -0400    
  
committer: Tom Lane <[email protected]>    
date     : Wed, 13 Jul 2016 11:17:15 -0400    


Commit 2d0019049 moved enum ForkNumber from relfilenode.h into relpath.h,  
but missed updating this documentation reference.  
  
Alexander Law  

M doc/src/sgml/pgbuffercache.sgml

Allow IMPORT FOREIGN SCHEMA within pl/pgsql.

commit   : a0943dbbea533e266f2db56d62ad43d9e0ed090a    
  
author   : Tom Lane <[email protected]>    
date     : Tue, 12 Jul 2016 18:06:50 -0400    
  
committer: Tom Lane <[email protected]>    
date     : Tue, 12 Jul 2016 18:06:50 -0400    


Since IMPORT FOREIGN SCHEMA has an INTO clause, pl/pgsql needs to be  
aware of that and avoid capturing the INTO as an INTO-variables clause.  
This isn't hard, though it's annoying to have to make IMPORT a plpgsql  
keyword just for this.  (Fortunately, we have the infrastructure now  
to make it an unreserved keyword, so at least this shouldn't break any  
existing pl/pgsql code.)  
  
Per report from Merlin Moncure.  Back-patch to 9.5 where IMPORT FOREIGN  
SCHEMA was introduced.  
  
Report: <CAHyXU0wpHf2bbtKGL1gtUEFATCY86r=VKxfcACVcTMQ70mCyig@mail.gmail.com>  

M src/pl/plpgsql/src/pl_gram.y
M src/pl/plpgsql/src/pl_scanner.c

doc: Update URL for PL/PHP

commit   : 2d22c3b701ca1283eeb504d0fe75e8c595e5523b    
  
author   : Peter Eisentraut <[email protected]>    
date     : Mon, 11 Jul 2016 12:12:04 -0400    
  
committer: Peter Eisentraut <[email protected]>    
date     : Mon, 11 Jul 2016 12:12:04 -0400    


M doc/src/sgml/external-projects.sgml

Add missing newline in error message

commit   : 8cd927d8325fa909f5cbf74dcb8232b6c39deeff    
  
author   : Magnus Hagander <[email protected]>    
date     : Mon, 11 Jul 2016 13:53:17 +0200    
  
committer: Magnus Hagander <[email protected]>    
date     : Mon, 11 Jul 2016 13:53:17 +0200    


M src/bin/pg_xlogdump/pg_xlogdump.c

Fix TAP tests and MSVC scripts for pathnames with spaces.

commit   : f80395ca1f95dde1d5e0eabcab148b3645bcb6ec    
  
author   : Tom Lane <[email protected]>    
date     : Sat, 9 Jul 2016 16:47:39 -0400    
  
committer: Tom Lane <[email protected]>    
date     : Sat, 9 Jul 2016 16:47:39 -0400    


Change assorted places in our Perl code that did things like  
	system("prog $path/file");  
to do it more like  
	system('prog', "$path/file");  
which is safe against spaces and other special characters in the path  
variable.  The latter was already the prevailing style, but a few bits  
of code hadn't gotten this memo.  Back-patch to 9.4 as relevant.  
  
Michael Paquier, Kyotaro Horiguchi  
  
Discussion: <[email protected]>  

M src/tools/msvc/Install.pm
M src/tools/msvc/vcregress.pl

Docs: improve examples about not repeating table name in UPDATE ... SET.

commit   : fead79407a601845b97ed74b8de9e8c865f5eba0    
  
author   : Tom Lane <[email protected]>    
date     : Fri, 8 Jul 2016 12:46:04 -0400    
  
committer: Tom Lane <[email protected]>    
date     : Fri, 8 Jul 2016 12:46:04 -0400    


Alexander Law  

M doc/src/sgml/ref/insert.sgml
M doc/src/sgml/ref/update.sgml

Fix failure to handle conflicts in non-arbiter exclusion constraints.

commit   : 31ce32ade428dd3ea11ea468f8bdd6492b991ed1    
  
author   : Tom Lane <[email protected]>    
date     : Mon, 4 Jul 2016 16:09:11 -0400    
  
committer: Tom Lane <[email protected]>    
date     : Mon, 4 Jul 2016 16:09:11 -0400    


ExecInsertIndexTuples treated an exclusion constraint as subject to  
noDupErr processing even when it was not listed in arbiterIndexes, and  
would therefore not error out for a conflict in such a constraint, instead  
returning it as an arbiter-index failure.  That led to an infinite loop in  
ExecInsert, since ExecCheckIndexConstraints ignored the index as-intended  
and therefore didn't throw the expected error.  To fix, make the exclusion  
constraint code path use the same condition as the index_insert call does  
to decide whether no-error-for-duplicates behavior is appropriate.  While  
at it, refactor a little bit to avoid unnecessary list_member_oid calls.  
(That surely wouldn't save anything worth noticing, but I find the code  
a bit clearer this way.)  
  
Per bug report from Heikki Rauhala.  Back-patch to 9.5 where ON CONFLICT  
was introduced.  
  
Report: <[email protected]>  

M src/backend/executor/execIndexing.c
M src/test/regress/expected/insert_conflict.out
M src/test/regress/sql/insert_conflict.sql

doc: mention dependency on collation libraries

commit   : e612181686b54a0311a85247c7f1640dee53636f    
  
author   : Bruce Momjian <[email protected]>    
date     : Sat, 2 Jul 2016 11:22:36 -0400    
  
committer: Bruce Momjian <[email protected]>    
date     : Sat, 2 Jul 2016 11:22:36 -0400    

Click here for diff

Document that index storage is dependent on the operating system's  
collation library ordering, and any change in that ordering can create  
invalid indexes.  
  
Discussion: [email protected]  
  
Backpatch-through: 9.1  

M doc/src/sgml/runtime.sgml

Be more paranoid in ruleutils.c's get_variable().

commit   : 40d0bd8d5e0fa86ece5f8ad9489adab0a30dca9a    
  
author   : Tom Lane <[email protected]>    
date     : Fri, 1 Jul 2016 11:40:22 -0400    
  
committer: Tom Lane <[email protected]>    
date     : Fri, 1 Jul 2016 11:40:22 -0400    

Click here for diff

We were merely Assert'ing that the Var matched the RTE it's supposedly  
from.  But if the user passes incorrect information to pg_get_expr(),  
the RTE might in fact not match; this led either to Assert failures  
or core dumps, as reported by Chris Hanks in bug #14220.  To fix, just  
convert the Asserts to test-and-elog.  Adjust an existing test-and-elog  
elsewhere in the same function to be consistent in wording.  
  
(If we really felt these were user-facing errors, we might promote them to  
ereport's; but I can't convince myself that they're worth translating.)  
  
Back-patch to 9.3; the problematic code doesn't exist before that, and  
a quick check says that 9.2 doesn't crash on such cases.  
  
Michael Paquier and Thomas Munro  
  
Report: <[email protected]>  

M src/backend/utils/adt/ruleutils.c

Fix crash bug in RestoreSnapshot.

commit   : 8f4a369c28be28351ce64e12ac895db515dd5916    
  
author   : Robert Haas <[email protected]>    
date     : Fri, 1 Jul 2016 08:51:58 -0400    
  
committer: Robert Haas <[email protected]>    
date     : Fri, 1 Jul 2016 08:51:58 -0400    

Click here for diff

If serialized_snapshot->subxcnt > 0 and serialized_snapshot->xcnt == 0,  
the old coding would do the wrong thing and crash.  This can happen  
on standby servers.  
  
Report by Andreas Seltenreich.  Patch by Thomas Munro, reviewed by  
Amit Kapila and tested by Andreas Seltenreich.  

M src/backend/utils/time/snapmgr.c

Fix typo in ReorderBufferIterTXNInit().

commit   : 8caf9fe62544b351d4f6219bf416f5ce08ef3c21    
  
author   : Tom Lane <[email protected]>    
date     : Thu, 30 Jun 2016 12:37:02 -0400    
  
committer: Tom Lane <[email protected]>    
date     : Thu, 30 Jun 2016 12:37:02 -0400    

Click here for diff

This looks like it would cause changes from subtransactions to be missed  
by the iterator being constructed, if those changes had been spilled to  
disk previously.  This implies that large subtransactions might be lost  
(in whole or in part) by logical replication.  Found and fixed by  
Petru-Florin Mihancea, per bug #14208.  
  
Report: <[email protected]>  

M src/backend/replication/logical/reorderbuffer.c

Fix CREATE MATVIEW/CREATE TABLE AS ... WITH NO DATA to not plan the query.

commit   : 1651b9aa2d97c67ce731173bf78899b1053ecc4a    
  
author   : Tom Lane <[email protected]>    
date     : Mon, 27 Jun 2016 15:57:21 -0400    
  
committer: Tom Lane <[email protected]>    
date     : Mon, 27 Jun 2016 15:57:21 -0400    

Click here for diff

Previously, these commands always planned the given query and went through  
executor startup before deciding not to actually run the query if WITH NO  
DATA is specified.  This behavior is problematic for pg_dump because it  
may cause errors to be raised that we would rather not see before a  
REFRESH MATERIALIZED VIEW command is issued.  See for example bug #13907  
from Marian Krucina.  This change is not sufficient to fix that particular  
bug, because we also need to tweak pg_dump to issue the REFRESH later,  
but it's a necessary step on the way.  
  
A user-visible side effect of doing things this way is that the returned  
command tag for WITH NO DATA cases will now be "CREATE MATERIALIZED VIEW"  
or "CREATE TABLE AS", not "SELECT 0".  We could preserve the old behavior  
but it would take more code, and arguably that was just an implementation  
artifact not intended behavior anyhow.  
  
In 9.5 and HEAD, also get rid of the static variable CreateAsReladdr, which  
was trouble waiting to happen; there is not any prohibition on nested  
CREATE commands.  
  
Back-patch to 9.3 where CREATE MATERIALIZED VIEW was introduced.  
  
Michael Paquier and Tom Lane  
  
Report: <[email protected]>  

M src/backend/commands/createas.c
M src/backend/commands/view.c
M src/backend/nodes/makefuncs.c
M src/include/nodes/makefuncs.h
M src/test/regress/expected/matview.out
M src/test/regress/expected/select_into.out
M src/test/regress/sql/matview.sql
M src/test/regress/sql/select_into.sql

Fix handling of multixacts predating pg_upgrade

commit   : d372cb173e0732b557d2ebd35f4680e8d08c5239    
  
author   : Alvaro Herrera <[email protected]>    
date     : Fri, 24 Jun 2016 18:29:28 -0400    
  
committer: Alvaro Herrera <[email protected]>    
date     : Fri, 24 Jun 2016 18:29:28 -0400    

Click here for diff

After pg_upgrade, it is possible that some tuples' Xmax have multixacts  
corresponding to the old installation; such multixacts cannot have  
running members anymore.  In many code sites we already know not to read  
them and clobber them silently, but at least when VACUUM tries to freeze  
a multixact or determine whether one needs freezing, there's an attempt  
to resolve it to its member transactions by calling GetMultiXactIdMembers,  
and if the multixact value is "in the future" with regards to the  
current valid multixact range, an error like this is raised:  
    ERROR:  MultiXactId 123 has not been created yet -- apparent wraparound  
and vacuuming fails.  Per discussion with Andrew Gierth, it is completely  
bogus to try to resolve multixacts coming from before a pg_upgrade,  
regardless of where they stand with regards to the current valid  
multixact range.  
  
It's possible to get out from under this problem by doing SELECT FOR UPDATE  
of the problem tuples, but if tables are large, this is slow and  
tedious, so a more thorough solution is desirable.  
  
To fix, we realize that multixacts in xmax created in 9.2 and previous  
have a specific bit pattern that is never used in 9.3 and later (we  
already knew this, per comments and infomask tests sprinkled in various  
places, but we weren't leveraging this knowledge appropriately).  
Whenever the infomask of the tuple matches that bit pattern, we just  
ignore the multixact completely as if Xmax wasn't set; or, in the case  
of tuple freezing, we act as if an unwanted value is set and clobber it  
without decoding.  This guarantees that no errors will be raised, and  
that the values will be progressively removed until all tables are  
clean.  Most callers of GetMultiXactIdMembers are patched to recognize  
directly that the value is a removable "empty" multixact and avoid  
calling GetMultiXactIdMembers altogether.  
  
To avoid changing the signature of GetMultiXactIdMembers() in back  
branches, we keep the "allow_old" boolean flag but rename it to  
"from_pgupgrade"; if the flag is true, we always return an empty set  
instead of looking up the multixact.  (I suppose we could remove the  
argument in the master branch, but I chose not to do so in this commit).  
  
This was broken all along, but the user-facing error first appeared  
because of commit 8e9a16ab8f7f and was partially fixed in a25c2b7c4db3.  
This fix, backpatched all the way back to 9.3, goes approximately in the  
same direction as a25c2b7c4db3 but should cover all cases.  
  
Bug analysis by Andrew Gierth and Álvaro Herrera.  
  
A number of public reports match this bug:  
  https://www.postgresql.org/message-id/[email protected]  
  https://www.postgresql.org/message-id/[email protected]  
  https://www.postgresql.org/message-id/[email protected]  
  https://www.postgresql.org/message-id/SG2PR06MB0760098A111C88E31BD4D96FB3540@SG2PR06MB0760.apcprd06.prod.outlook.com  
  https://www.postgresql.org/message-id/[email protected]  

M contrib/pgrowlocks/pgrowlocks.c
M src/backend/access/heap/heapam.c
M src/backend/access/transam/multixact.c
M src/backend/utils/time/tqual.c
M src/include/access/htup_details.h

Fix building of large (bigger than shared_buffers) hash indexes.

commit   : 07f69137b15e594edfaec29f73efa86aa442902c    
  
author   : Tom Lane <[email protected]>    
date     : Fri, 24 Jun 2016 16:57:36 -0400    
  
committer: Tom Lane <[email protected]>    
date     : Fri, 24 Jun 2016 16:57:36 -0400    

Click here for diff

When the index is predicted to need more than NBuffers buckets,  
CREATE INDEX attempts to sort the index entries by hash key before  
insertion, so as to reduce thrashing.  This code path got broken by  
commit 9f03ca915196dfc8, which overlooked that _hash_form_tuple() is not  
just an alias for index_form_tuple().  The index got built anyway, but  
with garbage data, so that searches for pre-existing tuples always  
failed.  Fix by refactoring to separate construction of the indexable  
data from calling index_form_tuple().  
  
Per bug #14210 from Daniel Newman.  Back-patch to 9.5 where the  
bug was introduced.  
  
Report: <[email protected]>  

M src/backend/access/hash/hash.c
M src/backend/access/hash/hashutil.c
M src/include/access/hash.h

Add tab completion for pager_min_lines to psql.

commit   : b4e6123bf604ed316b03629b881fbf67edcb9725    
  
author   : Andrew Dunstan <[email protected]>    
date     : Thu, 23 Jun 2016 16:10:15 -0400    
  
committer: Andrew Dunstan <[email protected]>    
date     : Thu, 23 Jun 2016 16:10:15 -0400    

Click here for diff

This was inadvertently omitted from commit  
7655f4ccea570d57c4d473cd66b755c03c904942. Mea culpa.  
  
Backpatched to 9.5 where pager_min_lines was introduced.  

M src/bin/psql/tab-complete.c

Make "postgres -C guc" print "" not "(null)" for null-valued GUCs.

commit   : f2c28bb1f2fee6fa33d2d6d4316b3f1d499543a4    
  
author   : Tom Lane <[email protected]>    
date     : Wed, 22 Jun 2016 11:55:18 -0400    
  
committer: Tom Lane <[email protected]>    
date     : Wed, 22 Jun 2016 11:55:18 -0400    

Click here for diff

Commit 0b0baf262 et al made this case print "(null)" on the grounds that  
that's what happened on platforms that didn't crash.  But neither behavior  
was actually intentional.  What we should print is just an empty string,  
for compatibility with the behavior of SHOW and other ways of examining  
string GUCs.  Those code paths don't distinguish NULL from empty strings,  
so we should not here either.  Per gripe from Alain Radix.  
  
Like the previous patch, back-patch to 9.2 where -C option was introduced.  
  
Discussion: <CA+YdpwxPUADrmxSD7+Td=uOshMB1KkDN7G7cf+FGmNjjxMhjbw@mail.gmail.com>  

M src/backend/postmaster/postmaster.c

Document that dependency tracking doesn't consider function bodies.

commit   : 7a349889ec1e80123f475fb171640b8273f921fd    
  
author   : Tom Lane <[email protected]>    
date     : Tue, 21 Jun 2016 20:07:58 -0400    
  
committer: Tom Lane <[email protected]>    
date     : Tue, 21 Jun 2016 20:07:58 -0400    

Click here for diff

If there's anyplace in our SGML docs that explains this behavior, I can't  
find it right at the moment.  Add an explanation in "Dependency Tracking"  
which seems like the authoritative place for such a discussion.  Per  
gripe from Michelle Schwan.  
  
While at it, update this section's example of a dependency-related  
error message: they last looked like that in 8.3.  And remove the  
explanation of dependency updates from pre-7.3 installations, which  
is probably no longer worth anybody's brain cells to read.  
  
The bogus error message example seems like an actual documentation bug,  
so back-patch to all supported branches.  
  
Discussion: <[email protected]>  

M doc/src/sgml/ddl.sgml

Add missing check for malloc failure in plpgsql_extra_checks_check_hook().

commit   : 1d07722f0174b563bbd3640f046dbd4de126ffe4    
  
author   : Tom Lane <[email protected]>    
date     : Mon, 20 Jun 2016 15:36:54 -0400    
  
committer: Tom Lane <[email protected]>    
date     : Mon, 20 Jun 2016 15:36:54 -0400    

Click here for diff

Per report from Andreas Seltenreich.  Back-patch to affected versions.  
  
Report: <[email protected]>  

M src/pl/plpgsql/src/pl_handler.c

Add missing documentation of pg_roles.rolbypassrls

commit   : def0eae4f25589bac6c8d3f3734f8d8ba654c853    
  
author   : Magnus Hagander <[email protected]>    
date     : Mon, 20 Jun 2016 10:29:20 +0200    
  
committer: Magnus Hagander <[email protected]>    
date     : Mon, 20 Jun 2016 10:29:20 +0200    

Click here for diff

Noted by Lukas Fittl  

M doc/src/sgml/catalogs.sgml

Docs: improve description of psql's %R prompt escape sequence.

commit   : a3eb19ba4a34dcfed7a79167717d9fed1ef9d26e    
  
author   : Tom Lane <[email protected]>    
date     : Sun, 19 Jun 2016 13:11:40 -0400    
  
committer: Tom Lane <[email protected]>    
date     : Sun, 19 Jun 2016 13:11:40 -0400    

Click here for diff

Dilian Palauzov pointed out in bug #14201 that the docs failed to mention  
the possibility of %R producing '(' due to an unmatched parenthesis.  
  
He proposed just adding that in the same style as the other options were  
listed; but it seemed to me that the sentence was already nearly  
unintelligible, so I rewrote it a bit more extensively.  
  
Report: <[email protected]>  

M doc/src/sgml/ref/psql-ref.sgml

Finish up XLOG_HINT renaming

commit   : 6fce92a7d7090fe02c39837908b4feb55806b3ee    
  
author   : Alvaro Herrera <[email protected]>    
date     : Fri, 17 Jun 2016 18:05:55 -0400    
  
committer: Alvaro Herrera <[email protected]>    
date     : Fri, 17 Jun 2016 18:05:55 -0400    

Click here for diff

Commit b8fd1a09f3 renamed XLOG_HINT to XLOG_FPI, but neglected two  
places.  
  
Backpatch to 9.3, like that commit.  

M src/backend/access/transam/README
M src/backend/storage/buffer/bufmgr.c

Fix validation of overly-long IPv6 addresses.

commit   : a41b14f94a44c1738356719f46b330372228ee4e    
  
author   : Tom Lane <[email protected]>    
date     : Thu, 16 Jun 2016 17:16:32 -0400    
  
committer: Tom Lane <[email protected]>    
date     : Thu, 16 Jun 2016 17:16:32 -0400    

Click here for diff

The inet/cidr types sometimes failed to reject IPv6 inputs with too many  
colon-separated fields, instead translating them to '::/0'.  This is the  
result of a thinko in the original ISC code that seems to be as yet  
unreported elsewhere.  Per bug #14198 from Stefan Kaltenbrunner.  
  
Report: <[email protected]>  

M src/backend/utils/adt/inet_net_pton.c

Avoid crash in "postgres -C guc" for a GUC with a null string value.

commit   : 4f5995dd983db31bce347411c16ecc7319a2d9af    
  
author   : Tom Lane <[email protected]>    
date     : Thu, 16 Jun 2016 12:17:03 -0400    
  
committer: Tom Lane <[email protected]>    
date     : Thu, 16 Jun 2016 12:17:03 -0400    

Click here for diff

Emit "(null)" instead, which was the behavior all along on platforms  
that don't crash, eg OS X.  Per report from Jehan-Guillaume de Rorthais.  
Back-patch to 9.2 where -C option was introduced.  
  
Michael Paquier  
  
Report: <20160615204036.2d35d86a@firost>  

M src/backend/postmaster/postmaster.c

Widen buffer for headers in psql's \watch command.

commit   : 455812da4820b21c938fac840c17e68d4ec856a9    
  
author   : Tom Lane <[email protected]>    
date     : Wed, 15 Jun 2016 19:35:39 -0400    
  
committer: Tom Lane <[email protected]>    
date     : Wed, 15 Jun 2016 19:35:39 -0400    

Click here for diff

This is to make sure there's enough room for translated versions of  
the message.  HEAD's already addressed this issue, but back-patch a  
simple increase in the array size.  
  
Discussion: <[email protected]>  

M src/bin/psql/command.c

Fix multiple minor infelicities in aclchk.c error reports.

commit   : e5bdaa127be617c7c4d49e1bb8960d002918f96d    
  
author   : Tom Lane <[email protected]>    
date     : Mon, 13 Jun 2016 13:53:10 -0400    
  
committer: Tom Lane <[email protected]>    
date     : Mon, 13 Jun 2016 13:53:10 -0400    

Click here for diff

pg_type_aclmask reported the wrong type's OID when complaining that  
it could not find a type's typelem.  It also failed to provide a  
suitable errcode when the initially given OID doesn't exist (which  
is a user-facing error, since that OID can be user-specified).  
pg_foreign_data_wrapper_aclmask and pg_foreign_server_aclmask likewise  
lacked errcode specifications.  Trivial cosmetic adjustments too.  
  
The wrong-type-OID problem was reported by Petru-Florin Mihancea in  
bug #14186; the other issues noted by me while reading the code.  
These errors all seem to be aboriginal in the respective routines, so  
back-patch as necessary.  
  
Report: <[email protected]>  

M src/backend/catalog/aclchk.c

Remove extraneous leading whitespace in Windows build script.

commit   : 1ad83738d0cd9b9af9a8f695eb9acc327a02c18b    
  
author   : Tom Lane <[email protected]>    
date     : Mon, 13 Jun 2016 11:50:27 -0400    
  
committer: Tom Lane <[email protected]>    
date     : Mon, 13 Jun 2016 11:50:27 -0400    

Click here for diff

Apparently, at least some versions of Microsoft's shell fail on variable  
assignments that have leading whitespace.  This instance, introduced in  
commit 680513ab7, managed to escape notice for a while because it's only  
invoked if building with OpenSSL.  Per bug #14185 from Torben Dannhauer.  
  
Report: <[email protected]>  

M src/interfaces/libpq/win32.mak

Clarify documentation of ceil/ceiling/floor functions.

commit   : 719dd9a64af5f2ec07a9cac16fb161a3c5c18010    
  
author   : Tom Lane <[email protected]>    
date     : Thu, 9 Jun 2016 11:58:00 -0400    
  
committer: Tom Lane <[email protected]>    
date     : Thu, 9 Jun 2016 11:58:00 -0400    

Click here for diff

Document these as "nearest integer >= argument" and "nearest integer <=  
argument", which will hopefully be less confusing than the old formulation.  
New wording is from Matlab via Dean Rasheed.  
  
I changed the pg_description entries as well as the SGML docs.  In the  
back branches, this will only affect installations initdb'd in the future,  
but it should be harmless otherwise.  
  
Discussion: <CAEZATCW3yzJo-NMSiQs5jXNFbTsCEftZS-Og8=FvFdiU+kYuSA@mail.gmail.com>  

M doc/src/sgml/func.sgml
M src/include/catalog/pg_proc.h

nls-global.mk: search build dir for source files, too

commit   : 5b3cd1a771a609306c057969094cd31c0fee00d3    
  
author   : Alvaro Herrera <[email protected]>    
date     : Tue, 7 Jun 2016 18:55:18 -0400    
  
committer: Alvaro Herrera <[email protected]>    
date     : Tue, 7 Jun 2016 18:55:18 -0400    

Click here for diff

In VPATH builds, the build directory was not being searched for files in  
GETTEXT_FILES, leading to failure to construct the .pot files.  This has  
bitten me all along, but never hard enough to get it fixed; I suppose not  
a lot of people use VPATH and NLS-enabled builds, and those that do  
don't run "make update-po" often.  
  
This is a longstanding problem, so backpatch all the way back.  

M src/nls-global.mk

Fix thinko in description of table_name parameter

commit   : 00e67c3c6d0ab372d82a21642f8361763b434601    
  
author   : Alvaro Herrera <[email protected]>    
date     : Tue, 7 Jun 2016 18:18:26 -0400    
  
committer: Alvaro Herrera <[email protected]>    
date     : Tue, 7 Jun 2016 18:18:26 -0400    

Click here for diff

Commit 6820094d1 mixed up the type of the parent object (table) with the  
type of the sub-object being commented on.  Noticed while fixing docs for  
COMMENT ON ACCESS METHOD.  
  
Backpatch to 9.5, like that commit.  

M doc/src/sgml/ref/comment.sgml

Don't reset changes_since_analyze after a selective-columns ANALYZE.

commit   : 5acc58c5e5336d16a1e238e8f1d599e8bfa4183b    
  
author   : Tom Lane <[email protected]>    
date     : Mon, 6 Jun 2016 17:44:17 -0400    
  
committer: Tom Lane <[email protected]>    
date     : Mon, 6 Jun 2016 17:44:17 -0400    

Click here for diff

If we ANALYZE only selected columns of a table, we should not postpone  
auto-analyze because of that; other columns may well still need stats  
updates.  As committed, the counter is left alone if a column list is  
given, whether or not it includes all analyzable columns of the table.  
Per complaint from Tomasz Ostrowski.  
  
It's been like this a long time, so back-patch to all supported branches.  
  
Report: <[email protected]>  

M src/backend/commands/analyze.c
M src/backend/postmaster/pgstat.c
M src/include/pgstat.h

Properly initialize SortSupport for ORDER BY rechecks in nodeIndexscan.c.

commit   : a7aa61ffe7ed12cf8d5cbdfc887900549f9ed354    
  
author   : Tom Lane <[email protected]>    
date     : Sun, 5 Jun 2016 11:53:06 -0400    
  
committer: Tom Lane <[email protected]>    
date     : Sun, 5 Jun 2016 11:53:06 -0400    

Click here for diff

Fix still another bug in commit 35fcb1b3d: it failed to fully initialize  
the SortSupport states it introduced to allow the executor to re-check  
ORDER BY expressions containing distance operators.  That led to a null  
pointer dereference if the sortsupport code tried to use ssup_cxt.  The  
problem only manifests in narrow cases, explaining the lack of previous  
field reports.  It requires a GiST-indexable distance operator that lacks  
SortSupport and is on a pass-by-ref data type, which among core+contrib  
seems to be only btree_gist's interval opclass; and it requires the scan  
to be done as an IndexScan not an IndexOnlyScan, which explains how  
btree_gist's regression test didn't catch it.  Per bug #14134 from  
Jihyun Yu.  
  
Peter Geoghegan  
  
Report: <[email protected]>  

M contrib/btree_gist/expected/interval.out
M contrib/btree_gist/sql/interval.sql
M src/backend/executor/nodeIndexscan.c

Fix grammar's AND/OR flattening to work with operator_precedence_warning.

commit   : c82037e372394ee046e278887c8f938591ca7406    
  
author   : Tom Lane <[email protected]>    
date     : Fri, 3 Jun 2016 19:12:30 -0400    
  
committer: Tom Lane <[email protected]>    
date     : Fri, 3 Jun 2016 19:12:30 -0400    

Click here for diff

It'd be good for "(x AND y) AND z" to produce a three-child AND node  
whether or not operator_precedence_warning is on, but that failed to  
happen when it's on because makeAndExpr() didn't look through the added  
AEXPR_PAREN node.  This has no effect on generated plans because prepqual.c  
would flatten the AND nest anyway; but it does affect the number of parens  
printed in ruleutils.c, for example.  I'd already fixed some similar  
hazards in parse_expr.c in commit abb164655, but didn't think to search  
gram.y for problems of this ilk.  Per gripe from Jean-Pierre Pelletier.  
  
Report: <[email protected]>  

M src/backend/parser/gram.y

Mark read/write expanded values as read-only in ValuesNext(), too.

commit   : 8355897ff296a3634f6b9c3da444622a02adda9c    
  
author   : Tom Lane <[email protected]>    
date     : Fri, 3 Jun 2016 18:07:14 -0400    
  
committer: Tom Lane <[email protected]>    
date     : Fri, 3 Jun 2016 18:07:14 -0400    

Click here for diff

Further thought about bug #14174 motivated me to try the case of a  
R/W datum being returned from a VALUES list, and sure enough it was  
broken.  Fix that.  
  
Also add a regression test case exercising the same scenario for  
FunctionScan.  That's not broken right now, because the function's  
result will get shoved into a tuplestore between generation and use;  
but it could easily become broken whenever we get around to optimizing  
FunctionScan better.  
  
There don't seem to be any other places where we put the result of  
expression evaluation into a virtual tuple slot that could then be  
the source for Vars of further expression evaluation, so I think  
this is the end of this bug.  

M src/backend/executor/nodeValuesscan.c
M src/test/regress/expected/plpgsql.out
M src/test/regress/sql/plpgsql.sql

Mark read/write expanded values as read-only in ExecProject().

commit   : a102f98e26bc7d21eb5246fd345f8e3ab31be109    
  
author   : Tom Lane <[email protected]>    
date     : Fri, 3 Jun 2016 15:14:35 -0400    
  
committer: Tom Lane <[email protected]>    
date     : Fri, 3 Jun 2016 15:14:35 -0400    

Click here for diff

If a plan node output expression returns an "expanded" datum, and that  
output column is referenced in more than one place in upper-level plan  
nodes, we need to ensure that what is returned is a read-only reference  
not a read/write reference.  Otherwise one of the referencing sites could  
scribble on or even delete the expanded datum before we have evaluated the  
others.  Commit 1dc5ebc9077ab742, which introduced this feature, supposed  
that it'd be sufficient to make SubqueryScan nodes force their output  
columns to read-only state.  The folly of that was revealed by bug #14174  
from Andrew Gierth, and really should have been immediately obvious  
considering that the planner will happily optimize SubqueryScan nodes  
out of the plan without any regard for this issue.  
  
The safest fix seems to be to make ExecProject() force its results into  
read-only state; that will cover every case where a plan node returns  
expression results.  Actually we can delegate this to ExecTargetList()  
since we can recursively assume that plain Vars will not reference  
read-write datums.  That should keep the extra overhead down to something  
minimal.  We no longer need ExecMakeSlotContentsReadOnly(), which was  
introduced only in support of the idea that just a few plan node types  
would need to do this.  
  
In the future it would be nice to have the planner account for this problem  
and inject force-to-read-only expression evaluation nodes into only the  
places where there's a risk of multiple evaluation.  That's not a suitable  
solution for 9.5 or even 9.6 at this point, though.  
  
Report: <[email protected]>  

M src/backend/executor/execQual.c
M src/backend/executor/execTuples.c
M src/backend/executor/nodeSubqueryscan.c
M src/include/executor/tuptable.h
M src/test/regress/expected/plpgsql.out
M src/test/regress/sql/plpgsql.sql

Suppress -Wunused-result warnings about write(), again.

commit   : ec5622351208989620b6563b7890922e28e65e7b    
  
author   : Tom Lane <[email protected]>    
date     : Fri, 3 Jun 2016 11:29:20 -0400    
  
committer: Tom Lane <[email protected]>    
date     : Fri, 3 Jun 2016 11:29:20 -0400    

Click here for diff

Adopt the same solution as in commit aa90e148ca70a235, but this time  
let's put the ugliness inside the write_stderr() macro, instead of  
expecting each call site to deal with it.  Back-port that decision  
into psql/common.c where I got the macro from in the first place.  
  
Per gripe from Peter Eisentraut.  

M src/bin/pg_dump/parallel.c
M src/bin/psql/common.c

Redesign handling of SIGTERM/control-C in parallel pg_dump/pg_restore.

commit   : 404429038896d914f38e1ee80d6d6905be0ad261    
  
author   : Tom Lane <[email protected]>    
date     : Thu, 2 Jun 2016 13:27:53 -0400    
  
committer: Tom Lane <[email protected]>    
date     : Thu, 2 Jun 2016 13:27:53 -0400    

Click here for diff

Formerly, Unix builds of pg_dump/pg_restore would trap SIGINT and similar  
signals and set a flag that was tested in various data-transfer loops.  
This was prone to errors of omission (cf commit 3c8aa6654); and even if  
the client-side response was prompt, we did nothing that would cause  
long-running SQL commands (e.g. CREATE INDEX) to terminate early.  
Also, the master process would effectively do nothing at all upon receipt  
of SIGINT; the only reason it seemed to work was that in typical scenarios  
the signal would also be delivered to the child processes.  We should  
support termination when a signal is delivered only to the master process,  
though.  
  
Windows builds had no console interrupt handler, so they would just fall  
over immediately at control-C, again leaving long-running SQL commands to  
finish unmolested.  
  
To fix, remove the flag-checking approach altogether.  Instead, allow the  
Unix signal handler to send a cancel request directly and then exit(1).  
In the master process, also have it forward the signal to the children.  
On Windows, add a console interrupt handler that behaves approximately  
the same.  The main difference is that a single execution of the Windows  
handler can send all the cancel requests since all the info is available  
in one process, whereas on Unix each process sends a cancel only for its  
own database connection.  
  
In passing, fix an old problem that DisconnectDatabase tends to send a  
cancel request before exiting a parallel worker, even if nothing went  
wrong.  This is at least a waste of cycles, and could lead to unexpected  
log messages, or maybe even data loss if it happened in pg_restore (though  
in the current code the problem seems to affect only pg_dump).  The cause  
was that after a COPY step, pg_dump was leaving libpq in PGASYNC_BUSY  
state, causing PQtransactionStatus() to report PQTRANS_ACTIVE.  That's  
normally harmless because the next PQexec() will silently clear the  
PGASYNC_BUSY state; but in a parallel worker we might exit without any  
additional SQL commands after a COPY step.  So add an extra PQgetResult()  
call after a COPY to allow libpq to return to PGASYNC_IDLE state.  
  
This is a bug fix, IMO, so back-patch to 9.3 where parallel dump/restore  
were introduced.  
  
Thanks to Kyotaro Horiguchi for Windows testing and code suggestions.  
  
Original-Patch: <[email protected]>  
Discussion: <[email protected]>  

M src/bin/pg_dump/compress_io.c
M src/bin/pg_dump/parallel.c
M src/bin/pg_dump/parallel.h
M src/bin/pg_dump/pg_backup_archiver.c
M src/bin/pg_dump/pg_backup_archiver.h
M src/bin/pg_dump/pg_backup_db.c
M src/bin/pg_dump/pg_backup_directory.c
M src/bin/pg_dump/pg_dump.c

Fix btree mark/restore bug.

commit   : 236d569f92b298c697e0f54891418acfc8310003    
  
author   : Kevin Grittner <[email protected]>    
date     : Thu, 2 Jun 2016 12:23:19 -0500    
  
committer: Kevin Grittner <[email protected]>    
date     : Thu, 2 Jun 2016 12:23:19 -0500    

Click here for diff

Commit 2ed5b87f96d473962ec5230fd820abfeaccb2069 introduced a bug in  
mark/restore, in an attempt to optimize repeated restores to the  
same page.  This caused an assertion failure during a merge join  
which fed directly from an index scan, although the impact would  
not be limited to that case.  Revert the bad chunk of code from  
that commit.  
  
While investigating this bug it was discovered that a particular  
"paranoia" setting of the mark position field would not prevent bad  
behavior; it would just make it harder to diagnose.  Change that  
into an assertion, which will draw attention to any future problem  
in that area more directly.  
  
Backpatch to 9.5, where the bug was introduced.  
  
Bug #14169 reported by Shinta Koyanagi.  
Preliminary analysis by Tom Lane identified which commit caused  
the bug.  

M src/backend/access/nbtree/nbtree.c
M src/backend/access/nbtree/nbtsearch.c

Clean up some minor inefficiencies in parallel dump/restore.

commit   : 43d3fbe369088f089afd55847dde0f34b339b5f2    
  
author   : Tom Lane <[email protected]>    
date     : Wed, 1 Jun 2016 16:14:21 -0400    
  
committer: Tom Lane <[email protected]>    
date     : Wed, 1 Jun 2016 16:14:21 -0400    

Click here for diff

Parallel dump did a totally pointless query to find out the name of each  
table to be dumped, which it already knows.  Parallel restore runs issued  
lots of redundant SET commands because _doSetFixedOutputState() was invoked  
once per TOC item rather than just once at connection start.  While the  
extra queries are insignificant if you're dumping or restoring large  
tables, it still seems worth getting rid of them.  
  
Also, give the responsibility for selecting the right client_encoding for  
a parallel dump worker to setup_connection() where it naturally belongs,  
instead of having ad-hoc code for that in CloneArchive().  And fix some  
minor bugs like use of strdup() where pg_strdup() would be safer.  
  
Back-patch to 9.3, mostly to keep the branches in sync in an area that  
we're still finding bugs in.  
  
Discussion: <[email protected]>  

M src/bin/pg_dump/parallel.c
M src/bin/pg_dump/pg_backup_archiver.c
M src/bin/pg_dump/pg_dump.c

Avoid useless closely-spaced writes of statistics files.

commit   : 47215c16f20631627ce0e1d78f5a592886c97b44    
  
author   : Tom Lane <[email protected]>    
date     : Tue, 31 May 2016 15:54:46 -0400    
  
committer: Tom Lane <[email protected]>    
date     : Tue, 31 May 2016 15:54:46 -0400    

Click here for diff

The original intent in the stats collector was that we should not write out  
stats data oftener than every PGSTAT_STAT_INTERVAL msec.  Backends will not  
make requests at all if they see the existing data is newer than that, and  
the stats collector is supposed to disregard requests having a cutoff_time  
older than its most recently written data, so that close-together requests  
don't result in multiple writes.  But the latter part of that got broken  
in commit 187492b6c2e8cafc, so that if two backends concurrently decide  
the existing stats are too old, the collector would write the data twice.  
(In principle the collector's logic would still merge requests as long as  
the second one arrives before we've actually written data ... but since  
the message collection loop would write data immediately after processing  
a single inquiry message, that never happened in practice, and in any case  
the window in which it might work would be much shorter than  
PGSTAT_STAT_INTERVAL.)  
  
To fix, improve pgstat_recv_inquiry so that it checks whether the cutoff  
time is too old, and doesn't add a request to the queue if so.  This means  
that we do not need DBWriteRequest.request_time, because the decision is  
taken before making a queue entry.  And that means that we don't really  
need the DBWriteRequest data structure at all; an OID list of database  
OIDs will serve and allow removal of some rather verbose and crufty code.  
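The merging logic described above can be sketched as follows. This is an illustrative model only, assuming simplified names (recv_inquiry, write_stats, pending_dbs) that stand in for pgstat_recv_inquiry() and the collector's real state; it is not the actual pgstat.c code:

```c
#include <assert.h>
#include <stdbool.h>

typedef unsigned int Oid;

#define MAX_PENDING 64

/* Simplified model of the collector's state: pending requests are now
 * just a list of database OIDs (no DBWriteRequest struct), plus the
 * time stats were last written, in msec. */
static Oid  pending_dbs[MAX_PENDING];
static int  npending = 0;
static long last_statwrite = 0;

int
pending_count(void)
{
    return npending;
}

/* Model of the new pgstat_recv_inquiry() check: if we already wrote
 * stats at or after the requested cutoff time, drop the request
 * instead of queueing it, so close-together requests merge. */
bool
recv_inquiry(Oid dbid, long cutoff_time)
{
    if (cutoff_time <= last_statwrite)
        return false;               /* file is new enough; no queue entry */

    for (int i = 0; i < npending; i++)
        if (pending_dbs[i] == dbid)
            return true;            /* merged with an earlier request */

    if (npending < MAX_PENDING)
        pending_dbs[npending++] = dbid;
    return true;
}

/* Writing the stats files satisfies all pending requests. */
void
write_stats(long now)
{
    last_statwrite = now;
    npending = 0;
}
```

Because the cutoff check happens before a queue entry is made, no per-request timestamp needs to be stored, which is why DBWriteRequest.request_time becomes unnecessary.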
  
In passing, improve the comments in this area, which have been rather  
neglected.  Also change backend_read_statsfile so that it's not silently  
relying on MyDatabaseId to have some particular value in the autovacuum  
launcher process.  It accidentally worked as desired because MyDatabaseId  
is zero in that process; but that does not seem like a dependency we want,  
especially with no documentation about it.  
  
Although this patch is mine, it turns out I'd rediscovered a known bug,  
for which Tomas Vondra had already submitted a patch that's functionally  
equivalent to the non-cosmetic aspects of this patch.  Thanks to Tomas  
for reviewing this version.  
  
Back-patch to 9.3 where the bug was introduced.  
  
Prior-Discussion: <[email protected]>  
Patch: <[email protected]>  

M src/backend/postmaster/pgstat.c
M src/include/pgstat.h

Fix typo in CREATE DATABASE syntax synopsis.

commit   : ad829c307b5b4f466adf760b1a209840900ec2a5    
  
author   : Tom Lane <[email protected]>    
date     : Tue, 31 May 2016 12:05:22 -0400    
  
committer: Tom Lane <[email protected]>    
date     : Tue, 31 May 2016 12:05:22 -0400    

Click here for diff

Misplaced "]", evidently a thinko in commit 213335c14.  

M doc/src/sgml/ref/create_database.sgml

Fix PageAddItem BRIN bug

commit   : 2973d7d02085ffad5697770ae5cfdc20d5b1aae7    
  
author   : Alvaro Herrera <[email protected]>    
date     : Mon, 30 May 2016 14:47:22 -0400    
  
committer: Alvaro Herrera <[email protected]>    
date     : Mon, 30 May 2016 14:47:22 -0400    

Click here for diff

BRIN was relying on the ability to remove a tuple from an index page,  
then putting another tuple in the same line pointer.  But PageAddItem  
refuses to add a tuple beyond the first free item past the last used  
item, and in particular, it rejects an attempt to add an item to an  
empty page anywhere other than the first line pointer.  PageAddItem  
issues a WARNING and indicates to the caller that it failed, which in  
turn causes the BRIN calling code to issue a PANIC, so the whole  
sequence looks like this:  
	WARNING:  specified item offset is too large  
	PANIC:  failed to add BRIN tuple  
  
To fix, create a new function PageAddItemExtended which is like  
PageAddItem except that the two boolean arguments become a flags bitmap;  
the "overwrite" and "is_heap" boolean flags in PageAddItem become  
PAI_OVERWRITE and PAI_IS_HEAP flags in the new function, and a new flag  
PAI_ALLOW_FAR_OFFSET enables the behavior required by BRIN.  
PageAddItem() retains its original signature, for compatibility with  
third-party modules (other callers in core code are not modified,  
either).  
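The boolean-arguments-to-flags-bitmap conversion can be sketched like this; it is an illustrative stand-in with dummy bodies, not the real bufpage.c code, though the flag names match those described above:

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative flag bits mirroring the commit's description; the real
 * definitions live in src/include/storage/bufpage.h. */
#define PAI_OVERWRITE        (1 << 0)
#define PAI_IS_HEAP          (1 << 1)
#define PAI_ALLOW_FAR_OFFSET (1 << 2)

/* Stand-in for the extended function taking a flags bitmap; real code
 * would place the tuple on the page.  Here we just echo the flags. */
int
page_add_item_extended(int offset, int flags)
{
    return flags;
}

/* The original two-boolean signature is kept as a thin wrapper, so
 * existing third-party callers compile unchanged. */
int
page_add_item(int offset, bool overwrite, bool is_heap)
{
    return page_add_item_extended(offset,
                                  (overwrite ? PAI_OVERWRITE : 0) |
                                  (is_heap ? PAI_IS_HEAP : 0));
}
```

Only the BRIN caller passes PAI_ALLOW_FAR_OFFSET, which is why the compatibility wrapper never needs to set it.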
  
Also, in the belt-and-suspenders spirit, I added a new sanity check in  
brinGetTupleForHeapBlock to raise an error if a TID found in the revmap  
is not marked as live by the page header.  This causes it to react with  
"ERROR: corrupted BRIN index" to the bug at hand, rather than a hard  
crash.  
  
Backpatch to 9.5.  
  
Bug reported by Andreas Seltenreich as detected by his handy sqlsmith  
fuzzer.  
Discussion: https://www.postgresql.org/message-id/[email protected]  

M src/backend/access/brin/brin_pageops.c
M src/backend/access/brin/brin_revmap.c
M src/backend/access/brin/brin_xlog.c
M src/backend/storage/page/bufpage.c
M src/include/storage/bufpage.h

Fix missing abort checks in pg_backup_directory.c.

commit   : 73f5acce3f5f56189ead7666cf932e52b6c42adb    
  
author   : Tom Lane <[email protected]>    
date     : Sun, 29 May 2016 13:18:48 -0400    
  
committer: Tom Lane <[email protected]>    
date     : Sun, 29 May 2016 13:18:48 -0400    

Click here for diff

Parallel restore from directory format failed to respond to control-C  
in a timely manner, because there were no checkAborting() calls in the  
code path that reads data from a file and sends it to the backend.  
If any worker was in the midst of restoring data for a large table,  
you'd just have to wait.  
  
This fix doesn't do anything for the problem of aborting a long-running  
server-side command, but at least it fixes things for data transfers.  
  
Back-patch to 9.3 where parallel restore was introduced.  

M src/bin/pg_dump/pg_backup_directory.c

Remove pg_dump/parallel.c's useless "aborting" flag.

commit   : 937b85805b6e87180bf9e088acfb4ba4e3e3d9bc    
  
author   : Tom Lane <[email protected]>    
date     : Sun, 29 May 2016 13:00:09 -0400    
  
committer: Tom Lane <[email protected]>    
date     : Sun, 29 May 2016 13:00:09 -0400    

Click here for diff

This was effectively dead code, since the places that tested it could not  
be reached after we entered the on-exit-cleanup routine that would set it.  
It seems to have been a leftover from a design in which error abort would  
try to send fresh commands to the workers --- a design which could never  
have worked reliably, of course.  Since the flag is not cross-platform, it  
complicates reasoning about the code's behavior, which we could do without.  
  
Although this is effectively just cosmetic, back-patch anyway, because  
there are some actual bugs in the vicinity of this behavior.  
  
Discussion: <[email protected]>  

M src/bin/pg_dump/parallel.c

Lots of comment-fixing, and minor cosmetic cleanup, in pg_dump/parallel.c.

commit   : bf7b1691eac84b09557f191a6b973bd5faa983b0    
  
author   : Tom Lane <[email protected]>    
date     : Sat, 28 May 2016 14:02:11 -0400    
  
committer: Tom Lane <[email protected]>    
date     : Sat, 28 May 2016 14:02:11 -0400    

Click here for diff

The commentary in this file was in extremely sad shape.  The author(s)  
had clearly never heard of the project convention that a function header  
comment should provide an API spec of some sort for that function.  Much  
of it was flat out wrong, too --- maybe it was accurate when written, but  
if so it had not been updated to track subsequent code revisions.  Rewrite  
and rearrange to try to bring it up to speed, and annotate some of the  
places where more work is needed.  (I've refrained from actually fixing  
anything of substance ... yet.)  
  
Also, rename a couple of functions for more clarity as to what they do,  
do some very minor code rearrangement, remove some pointless Asserts,  
fix an incorrect Assert in readMessageFromPipe, and add a missing socket  
close in one error exit from pgpipe().  The last would be a bug if we  
tried to continue after pgpipe() failure, but since we don't, it's just  
cosmetic at present.  
  
Although this is only cosmetic, back-patch to 9.3 where parallel.c was  
added.  It's sufficiently invasive that it'll pose a hazard for future  
back-patching if we don't.  
  
Discussion: <[email protected]>  

M src/bin/pg_dump/parallel.c
M src/bin/pg_dump/pg_backup_archiver.c

Clean up thread management in parallel pg_dump for Windows.

commit   : 715db0b73926c7e591aaa2552b429a0cd793e200    
  
author   : Tom Lane <[email protected]>    
date     : Fri, 27 May 2016 12:02:09 -0400    
  
committer: Tom Lane <[email protected]>    
date     : Fri, 27 May 2016 12:02:09 -0400    

Click here for diff

Since we start the worker threads with _beginthreadex(), we should use  
_endthreadex() to terminate them.  We got this right in the normal-exit  
code path, but not so much during an error exit from a worker.  
In addition, be sure to apply CloseHandle to the thread handle after  
each thread exits.  
  
It's not clear that these oversights cause any user-visible problems,  
since the pg_dump run is about to terminate anyway.  Still, it's clearly  
better to follow Microsoft's API specifications than ignore them.  
  
Also a few cosmetic cleanups in WaitForTerminatingWorkers(), including  
being a bit less random about where to cast between uintptr_t and HANDLE,  
and being sure to clear the worker identity field for each dead worker  
(not that false matches should be possible later, but let's be careful).  
  
Original observation and patch by Armin Schöffmann, cosmetic improvements  
by Michael Paquier and me.  (Armin's patch also included closing sockets  
in ShutdownWorkersHard(), but that's been dealt with already in commit  
df8d2d8c4.)  Back-patch to 9.3 where parallel pg_dump was introduced.  
  
Discussion: <[email protected]>  

M src/bin/pg_dump/parallel.c
M src/bin/pg_dump/pg_backup_utils.c

Be more predictable about reporting "lock timeout" vs "statement timeout".

commit   : cea17ba07a93c0185aa9cbbf79ce9d3241b9c547    
  
author   : Tom Lane <[email protected]>    
date     : Fri, 27 May 2016 10:40:20 -0400    
  
committer: Tom Lane <[email protected]>    
date     : Fri, 27 May 2016 10:40:20 -0400    

Click here for diff

If both timeout indicators are set when we arrive at ProcessInterrupts,  
we've historically just reported "lock timeout".  However, some buildfarm  
members have been observed to fail isolationtester's timeouts test by  
reporting "lock timeout" when the statement timeout was expected to fire  
first.  The cause seems to be that the process is allowed to sleep longer  
than expected (probably due to heavy machine load) so that the lock  
timeout happens before we reach the point of reporting the error, and  
then this arbitrary tiebreak rule does the wrong thing.  We can improve  
matters by comparing the scheduled timeout times to decide which error  
to report.  
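The tiebreak can be sketched as below. This is an illustrative model, not the actual ProcessInterrupts() code; the function name and long-valued fire times are assumptions, though the error strings are the ones PostgreSQL reports:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* When both timeout indicators are set, compare the scheduled fire
 * times and report whichever timeout was due first, instead of an
 * arbitrary fixed preference for "lock timeout". */
const char *
report_timeout(bool lock_set, bool stmt_set,
               long lock_fire_time, long stmt_fire_time)
{
    if (lock_set && stmt_set)
        return (stmt_fire_time < lock_fire_time)
            ? "canceling statement due to statement timeout"
            : "canceling statement due to lock timeout";
    if (lock_set)
        return "canceling statement due to lock timeout";
    if (stmt_set)
        return "canceling statement due to statement timeout";
    return NULL;
}
```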
  
I had originally proposed greatly reducing the 1-second window between  
the two timeouts in the test cases.  On reflection that is a bad idea,  
at least for the case where the lock timeout is expected to fire first,  
because that would assume that it takes negligible time to get from  
statement start to the beginning of the lock wait.  Thus, this patch  
doesn't completely remove the risk of test failures on slow machines.  
Empirically, however, the case this handles is the one we are seeing  
in the buildfarm.  The explanation may be that the other case requires  
the scheduler to take the CPU away from a busy process, whereas the  
case fixed here only requires the scheduler to not give the CPU back  
right away to a process that has been woken from a multi-second sleep  
(and, perhaps, has been swapped out meanwhile).  
  
Back-patch to 9.3 where the isolationtester timeouts test was added.  
  
Discussion: <[email protected]>  

M src/backend/tcop/postgres.c
M src/backend/utils/misc/timeout.c
M src/include/utils/timeout.h

Make pg_dump error cleanly with -j against hot standby

commit   : 47e59697679a0877e0525c565b1be437487604a7    
  
author   : Magnus Hagander <[email protected]>    
date     : Thu, 26 May 2016 22:14:23 +0200    
  
committer: Magnus Hagander <[email protected]>    
date     : Thu, 26 May 2016 22:14:23 +0200    

Click here for diff

A synchronized snapshot is not supported on a hot standby node, but one  
is taken by default when using -j with multiple sessions. Previously this  
failed with a server error that would also appear in the server log.  
Instead, properly detect this case and give a better error message.  

M src/bin/pg_dump/pg_backup.h
M src/bin/pg_dump/pg_backup_db.c
M src/bin/pg_dump/pg_backup_db.h
M src/bin/pg_dump/pg_dump.c

Fix typo in 9.5 release notes

commit   : aa86edb38da6f822a53553c94d11f95ed4549d76    
  
author   : Alvaro Herrera <[email protected]>    
date     : Thu, 26 May 2016 11:58:22 -0400    
  
committer: Alvaro Herrera <[email protected]>    
date     : Thu, 26 May 2016 11:58:22 -0400    

Click here for diff

Noted by 星合 拓馬 (HOSHIAI Takuma)  

M doc/src/sgml/release-9.5.sgml

Make pg_dump behave more sanely when built without HAVE_LIBZ.

commit   : 64b296976befff7ec0debdc82db96b7dd3c7f45a    
  
author   : Tom Lane <[email protected]>    
date     : Thu, 26 May 2016 11:51:04 -0400    
  
committer: Tom Lane <[email protected]>    
date     : Thu, 26 May 2016 11:51:04 -0400    

Click here for diff

For some reason the code to emit a warning and switch to uncompressed  
output was placed down in the guts of pg_backup_archiver.c.  This is  
definitely too late in the case of parallel operation (and I rather  
wonder if it wasn't too late for other purposes as well).  Put it in  
pg_dump.c's option-processing logic, which seems a much saner place.  
  
Also, the default behavior with custom or directory output format was  
to emit the warning telling you the output would be uncompressed.  This  
seems unhelpful, so silence that case.  
  
Back-patch to 9.3 where parallel dump was introduced.  
  
Kyotaro Horiguchi, adjusted a bit by me  
  
Report: <[email protected]>  

M src/bin/pg_dump/pg_backup_archiver.c
M src/bin/pg_dump/pg_dump.c

In Windows pg_dump, ensure idle workers will shut down during error exit.

commit   : 6479df1378607e5edbe19cc28a3b52c62f11d8fa    
  
author   : Tom Lane <[email protected]>    
date     : Thu, 26 May 2016 10:50:30 -0400    
  
committer: Tom Lane <[email protected]>    
date     : Thu, 26 May 2016 10:50:30 -0400    

Click here for diff

The Windows coding of ShutdownWorkersHard() thought that setting termEvent  
was sufficient to make workers exit after an error.  But that only helps  
if a worker is busy and passes through checkAborting().  An idle worker  
will just sit, resulting in pg_dump failing to exit until the user gives up  
and hits control-C.  We should close the write end of the command pipe  
so that idle workers will see socket EOF and exit, as the Unix coding was  
already doing.  
  
Back-patch to 9.3 where parallel pg_dump was introduced.  
  
Kyotaro Horiguchi  

M src/bin/pg_dump/parallel.c

Ensure that backends see up-to-date statistics for shared catalogs.

commit   : b2355a29c69c90b9987cc3a8884b8ed3396842e9    
  
author   : Tom Lane <[email protected]>    
date     : Wed, 25 May 2016 17:48:15 -0400    
  
committer: Tom Lane <[email protected]>    
date     : Wed, 25 May 2016 17:48:15 -0400    

Click here for diff

Ever since we split the statistics collector's reports into per-database  
files (commit 187492b6c2e8cafc), backends have been seeing stale statistics  
for shared catalogs.  This is because the inquiry message only prompts the  
collector to write the per-database file for the requesting backend's own  
database.  Stats for shared catalogs are in a separate file for "DB 0",  
which didn't get updated.  
  
In normal operation this was partially masked by the fact that the  
autovacuum launcher would send an inquiry message at least once per  
autovacuum_naptime that asked for "DB 0"; so the shared-catalog stats would  
never be more than a minute out of date.  However the problem becomes very  
obvious with autovacuum disabled, as reported by Peter Eisentraut.  
  
To fix, redefine the semantics of inquiry messages so that both the  
specified DB and DB 0 will be dumped.  (This might seem a bit inefficient,  
but we have no good way to know whether a backend's transaction will look  
at shared-catalog stats, so we have to read both groups of stats whenever  
we request stats.  Sending two inquiry messages would definitely not be  
better.)  
  
Back-patch to 9.3 where the bug was introduced.  
  
Report: <[email protected]>  

M src/backend/postmaster/pgstat.c

Fix broken error handling in parallel pg_dump/pg_restore.

commit   : af6555b80c7b5f9827a58c5872a723d5660897ae    
  
author   : Tom Lane <[email protected]>    
date     : Wed, 25 May 2016 12:39:57 -0400    
  
committer: Tom Lane <[email protected]>    
date     : Wed, 25 May 2016 12:39:57 -0400    

Click here for diff

In the original design for parallel dump, worker processes reported errors  
by sending them up to the master process, which would print the messages.  
This is unworkably fragile for a couple of reasons: it risks deadlock if a  
worker sends an error at an unexpected time, and if the master has already  
died for some reason, the user will never get to see the error at all.  
Revert that idea and go back to just always printing messages to stderr.  
This approach means that if all the workers fail for similar reasons (eg,  
bad password or server shutdown), the user will see N copies of that  
message, not only one as before.  While that's slightly annoying, it's  
certainly better than not seeing any message; not to mention that we  
shouldn't assume that only the first failure is interesting.  
  
An additional problem in the same area was that the master failed to  
disable SIGPIPE (at least until much too late), which meant that sending a  
command to an already-dead worker would cause the master to crash silently.  
That was bad enough in itself but was made worse by the total reliance on  
the master to print errors: even if the worker had reported an error, you  
would probably not see it, depending on timing.  Instead disable SIGPIPE  
right after we've forked the workers, before attempting to send them  
anything.  
  
Additionally, the master relies on seeing socket EOF to realize that a  
worker has exited prematurely --- but on Windows, there would be no EOF  
since the socket is attached to the process that includes both the master  
and worker threads, so it remains open.  Make archive_close_connection()  
close the worker end of the sockets so that this acts more like the Unix  
case.  It's not perfect, because if a worker thread exits without going  
through exit_nicely() the closures won't happen; but that's not really  
supposed to happen.  
  
This has been wrong all along, so back-patch to 9.3 where parallel dump  
was introduced.  
  
Report: <[email protected]>  

M src/bin/pg_dump/parallel.c
M src/bin/pg_dump/parallel.h
M src/bin/pg_dump/pg_backup_utils.c

Fetch XIDs atomically during vac_truncate_clog().

commit   : bbbe2c97eba37fd8a9e580a596f48935f3d9ded8    
  
author   : Tom Lane <[email protected]>    
date     : Tue, 24 May 2016 15:47:51 -0400    
  
committer: Tom Lane <[email protected]>    
date     : Tue, 24 May 2016 15:47:51 -0400    

Click here for diff

Because vac_update_datfrozenxid() updates datfrozenxid and datminmxid  
in-place, it's unsafe to assume that successive reads of those values will  
give consistent results.  Fetch each one just once to ensure sane behavior  
in the minimum calculation.  Noted while reviewing Alexander Korotkov's  
patch in the same area.  
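The fetch-each-value-once pattern can be illustrated as follows; this is a simplified sketch, not the real vacuum.c code, and it uses a plain "<" comparison where the real code uses TransactionIdPrecedes():

```c
#include <assert.h>

typedef unsigned int TransactionId;

typedef struct
{
    TransactionId datfrozenxid;
    TransactionId datminmxid;
} DbEntry;

/* Fetch datfrozenxid exactly once per row before comparing.  Because
 * the shared row is updated in place, two separate reads of the same
 * field could yield two different values and corrupt the minimum. */
TransactionId
min_datfrozenxid(const DbEntry *dbs, int n)
{
    TransactionId result = ~(TransactionId) 0;

    for (int i = 0; i < n; i++)
    {
        TransactionId xid = dbs[i].datfrozenxid;    /* single fetch */

        if (xid < result)
            result = xid;
    }
    return result;
}
```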
  
Discussion: <[email protected]>  

M src/backend/commands/vacuum.c

Avoid consuming an XID during vac_truncate_clog().

commit   : a34c3dd50f661126ac51c794e63f7932fe657542    
  
author   : Tom Lane <[email protected]>    
date     : Tue, 24 May 2016 15:20:12 -0400    
  
committer: Tom Lane <[email protected]>    
date     : Tue, 24 May 2016 15:20:12 -0400    

Click here for diff

vac_truncate_clog() uses its own transaction ID as the comparison point in  
a sanity check that no database's datfrozenxid has already wrapped around  
"into the future".  That was probably fine when written, but in a lazy  
vacuum we won't have assigned an XID, so calling GetCurrentTransactionId()  
causes an XID to be assigned when otherwise one would not be.  Most of the  
time that's not a big problem ... but if we are hard up against the  
wraparound limit, consuming XIDs during antiwraparound vacuums is a very  
bad thing.  
  
Instead, use ReadNewTransactionId(), which not only avoids this problem  
but is in itself a better comparison point to test whether wraparound  
has already occurred.  
  
Report and patch by Alexander Korotkov.  Back-patch to all versions.  
  
Report: <CAPpHfdspOkmiQsxh-UZw2chM6dRMwXAJGEmmbmqYR=yvM7-s6A@mail.gmail.com>  

M src/backend/commands/vacuum.c

Support IndexElem in raw_expression_tree_walker().

commit   : e504d915bbf352ecfc4ed335af934e799bf01053    
  
author   : Tom Lane <[email protected]>    
date     : Mon, 23 May 2016 19:23:36 -0400    
  
committer: Tom Lane <[email protected]>    
date     : Mon, 23 May 2016 19:23:36 -0400    

Click here for diff

Needed for cases in which INSERT ... ON CONFLICT appears inside a  
recursive CTE item.  Per bug #14153 from Thomas Alton.  
  
Patch by Peter Geoghegan, slightly adjusted by me  
  
Report: <[email protected]>  

M src/backend/nodes/nodeFuncs.c

Fix latent crash in do_text_output_multiline().

commit   : 9d91cd865bdc52ee2ebe70447be582fabfe88664    
  
author   : Tom Lane <[email protected]>    
date     : Mon, 23 May 2016 14:16:40 -0400    
  
committer: Tom Lane <[email protected]>    
date     : Mon, 23 May 2016 14:16:40 -0400    

Click here for diff

do_text_output_multiline() would fail (typically with a null pointer  
dereference crash) if its input string did not end with a newline.  Such  
cases do not arise in our current sources; but it certainly could happen  
in future, or in extension code's usage of the function, so we should fix  
it.  To fix, replace "eol += len" with "eol = text + len".  
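The corrected scanning behavior can be sketched as a line counter that handles input without a trailing newline; this is an illustrative restructuring, not the actual execTuples.c code:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Count the lines a do_text_output_multiline-style scan would emit.
 * The last segment is consumed exactly once even when the input lacks
 * a trailing newline, the case the buggy "eol += len" mishandled. */
int
count_output_lines(const char *txt)
{
    int    nlines = 0;
    size_t len = strlen(txt);

    while (len > 0)
    {
        const char *eol = memchr(txt, '\n', len);

        if (eol)
        {
            len -= (eol - txt) + 1;
            txt = eol + 1;
        }
        else
        {
            /* no newline: advance to end of buffer, equivalent to the
             * fixed "eol = txt + len", and stop */
            txt += len;
            len = 0;
        }
        nlines++;
    }
    return nlines;
}
```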
  
While at it, make two cosmetic improvements: mark the input string const,  
and rename the argument from "text" to "txt" to dodge pgindent strangeness  
(since "text" is a typedef name).  
  
Even though this problem is only latent at present, it seems like a good  
idea to back-patch the fix, since it's a very simple/safe patch and it's  
not out of the realm of possibility that we might in future back-patch  
something that expects sane behavior from do_text_output_multiline().  
  
Per report from Hao Lee.  
  
Report: <CAGoxFiFPAGyPAJLcFxTB5cGhTW2yOVBDYeqDugYwV4dEd1L_Ag@mail.gmail.com>  

M src/backend/executor/execTuples.c
M src/include/executor/executor.h

Further improve documentation about --quote-all-identifiers switch.

commit   : 7fc5064df41529d57dd63be09a7dee1df8693b5c    
  
author   : Tom Lane <[email protected]>    
date     : Fri, 20 May 2016 15:51:57 -0400    
  
committer: Tom Lane <[email protected]>    
date     : Fri, 20 May 2016 15:51:57 -0400    

Click here for diff

Mention it in the Notes section too, per suggestion from David Johnston.  
  
Discussion: <[email protected]>  

M doc/src/sgml/ref/pg_dump.sgml

Improve documentation about pg_dump's --quote-all-identifiers switch.

commit   : c09f1dcef3ed4f8a50fb6391d1894fa1d5634c03    
  
author   : Tom Lane <[email protected]>    
date     : Fri, 20 May 2016 14:59:48 -0400    
  
committer: Tom Lane <[email protected]>    
date     : Fri, 20 May 2016 14:59:48 -0400    

Click here for diff

Per bug #14152 from Alejandro Martínez.  Back-patch to all supported  
branches.  
  
Discussion: <[email protected]>  

M doc/src/sgml/ref/pg_dump.sgml
M doc/src/sgml/ref/pg_dumpall.sgml

doc: Fix typo

commit   : 9312c04ac0162ce50a175bba8d1ab9ba023b5770    
  
author   : Peter Eisentraut <[email protected]>    
date     : Fri, 13 May 2016 21:24:13 -0400    
  
committer: Peter Eisentraut <[email protected]>    
date     : Fri, 13 May 2016 21:24:13 -0400    

Click here for diff

From: Alexander Law <[email protected]>  

M doc/src/sgml/gin.sgml

Ensure plan stability in contrib/btree_gist regression test.

commit   : ab5f73b3ea3180959bb265811c6ac1d535c6f01c    
  
author   : Tom Lane <[email protected]>    
date     : Thu, 12 May 2016 20:04:12 -0400    
  
committer: Tom Lane <[email protected]>    
date     : Thu, 12 May 2016 20:04:12 -0400    

Click here for diff

Buildfarm member skink failed with symptoms suggesting that an  
auto-analyze had happened and changed the plan displayed for a  
test query.  Although this is evidently of low probability,  
regression tests that sometimes fail are no fun, so add commands  
to force a bitmap scan to be chosen.  

M contrib/btree_gist/expected/not_equal.out
M contrib/btree_gist/sql/not_equal.sql

Fix bogus comments

commit   : 4140619df909ed90f485c0e96a5c4b901e544976    
  
author   : Alvaro Herrera <[email protected]>    
date     : Thu, 12 May 2016 16:02:49 -0300    
  
committer: Alvaro Herrera <[email protected]>    
date     : Thu, 12 May 2016 16:02:49 -0300    

Click here for diff

Some comments mentioned XLogReplayBuffer, but there's no such function:  
that was an interim name for a function that got renamed to  
XLogReadBufferForRedo, before commit 2c03216d831160 was pushed.  

M src/backend/access/heap/heapam.c
M src/backend/access/transam/xlogutils.c

Fix obsolete comment

commit   : 21ef195340bf37cb55383f6fdfe33609f5003a27    
  
author   : Alvaro Herrera <[email protected]>    
date     : Thu, 12 May 2016 15:36:51 -0300    
  
committer: Alvaro Herrera <[email protected]>    
date     : Thu, 12 May 2016 15:36:51 -0300    

Click here for diff

M src/backend/access/heap/heapam.c

Fix infer_arbiter_indexes() to not barf on system columns.

commit   : 428484ce102b3d4e6308c8504744558c2e2d99af    
  
author   : Tom Lane <[email protected]>    
date     : Wed, 11 May 2016 17:06:53 -0400    
  
committer: Tom Lane <[email protected]>    
date     : Wed, 11 May 2016 17:06:53 -0400    

Click here for diff

While it could be argued that rejecting system column mentions in the  
ON CONFLICT list is an unsupported feature, falling over altogether  
just because the table has a unique index on OID is indubitably a bug.  
  
As far as I can tell, fixing infer_arbiter_indexes() is sufficient to  
make ON CONFLICT (oid) actually work, though making a regression test  
for that case is problematic because of the impossibility of setting  
the OID counter to a known value.  
  
Minor cosmetic cleanups along with the bug fix.  

M src/backend/optimizer/util/plancat.c

Fix assorted missing infrastructure for ON CONFLICT.

commit   : 58d802410ad85c44073d4ef494a9d5ac24ecba66    
  
author   : Tom Lane <[email protected]>    
date     : Wed, 11 May 2016 16:20:03 -0400    
  
committer: Tom Lane <[email protected]>    
date     : Wed, 11 May 2016 16:20:03 -0400    

Click here for diff

subquery_planner() failed to apply expression preprocessing to the  
arbiterElems and arbiterWhere fields of an OnConflictExpr.  No doubt the  
theory was that this wasn't necessary because we don't actually try to  
execute those expressions; but that's wrong, because it results in failure  
to match to index expressions or index predicates that are changed at all  
by preprocessing.  Per bug #14132 from Reynold Smith.  
  
Also add pullup_replace_vars processing for onConflictWhere.  Perhaps  
it's impossible to have a subquery reference there, but I'm not exactly  
convinced; and even if true today it's a failure waiting to happen.  
  
Also add some comments to other places where one or another field of  
OnConflictExpr is intentionally ignored, with explanation as to why it's  
okay to do so.  
  
Also, catalog/dependency.c failed to record any dependency on the named  
constraint in ON CONFLICT ON CONSTRAINT, allowing such a constraint to  
be dropped while rules exist that depend on it, and allowing pg_dump to  
dump such a rule before the constraint it refers to.  The normal execution  
path managed to error out reasonably for a dangling constraint reference,  
but ruleutils.c dumped core; so in addition to fixing the omission, add  
a protective check in ruleutils.c, since we can't retroactively add a  
dependency in existing databases.  
  
Back-patch to 9.5 where this code was introduced.  
  
Report: <[email protected]>  

M src/backend/catalog/dependency.c
M src/backend/optimizer/plan/planner.c
M src/backend/optimizer/plan/subselect.c
M src/backend/optimizer/prep/prepjointree.c
M src/backend/optimizer/util/plancat.c
M src/backend/utils/adt/ruleutils.c
M src/test/regress/expected/insert_conflict.out
M src/test/regress/sql/insert_conflict.sql

Fix autovacuum for shared relations

commit   : 7516cdb76adb0710b0453d6b5252f14fc8ca49bc    
  
author   : Alvaro Herrera <[email protected]>    
date     : Tue, 10 May 2016 16:23:54 -0300    
  
committer: Alvaro Herrera <[email protected]>    
date     : Tue, 10 May 2016 16:23:54 -0300    

Click here for diff

The table-skipping logic in autovacuum would fail to consider that  
multiple workers could be processing the same shared catalog in  
different databases.  This normally wouldn't be a problem: firstly  
because autovacuum workers not for wraparound would simply ignore tables  
in which they cannot acquire lock, and secondly because most of the time  
these tables are small enough that even if multiple for-wraparound  
workers are stuck in the same catalog, they would be over pretty  
quickly.  But in cases where the catalogs are severely bloated it could  
become a problem.  
  
Backpatch all the way back, because the problem has been there since the  
beginning.  
  
Reported by Ondřej Světlík  
  
Discussion: https://www.postgresql.org/message-id/572B63B1.3030603%40flexibee.eu  
	https://www.postgresql.org/message-id/572A1072.5080308%40flexibee.eu  

M src/backend/postmaster/autovacuum.c