PostgreSQL 11.4 (upcoming) commit log

Fix ordering of GRANT commands in pg_dump for database creation

commit   : 8357a413f439887ef243f9efd2417b1a7409e694    
  
author   : Michael Paquier <michael@paquier.xyz>    
date     : Wed, 22 May 2019 14:48:14 +0900    
  
committer: Michael Paquier <michael@paquier.xyz>    
date     : Wed, 22 May 2019 14:48:14 +0900    

This uses a method similar to 68a7c24f, which guarantees that GRANT
commands using WITH GRANT OPTION are dumped in an order that respects
cascading dependencies.  As databases have no support for initial
privileges via pg_init_privs, we need to repeat the same ACL reordering
method here.
  
ACLs for databases were moved from pg_dumpall to pg_dump in v11, so
this impacts pg_dump for v11 and above, and pg_dumpall for v9.6 and
v10.
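
For illustration, a minimal sketch of the kind of cascading dependency
involved (role and database names are hypothetical):

    CREATE ROLE granter LOGIN;
    CREATE ROLE grantee LOGIN;
    CREATE DATABASE demo_db;
    -- The second GRANT is issued by "granter", so at restore time pg_dump
    -- must emit the WITH GRANT OPTION grant before it.
    GRANT CONNECT ON DATABASE demo_db TO granter WITH GRANT OPTION;
    SET ROLE granter;
    GRANT CONNECT ON DATABASE demo_db TO grantee;
    RESET ROLE;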
  
Discussion: https://postgr.es/m/15788-4e18847520ebcc75@postgresql.org  
Author: Nathan Bossart  
Reviewed-by: Haribabu Kommi  
Backpatch-through: 9.6  

M src/bin/pg_dump/pg_dump.c

Minimally fix partial aggregation for aggregates that don’t have one argument.

commit   : 9fea0b0e287e39c96f1486b0af23102ac5b752a5    
  
author   : Andres Freund <andres@anarazel.de>    
date     : Sun, 19 May 2019 18:01:06 -0700    
  
committer: Andres Freund <andres@anarazel.de>    
date     : Sun, 19 May 2019 18:01:06 -0700    

For partial aggregation combine steps,  
AggStatePerTrans->numTransInputs was set to the transition function's  
number of inputs, rather than the combine function's number of  
inputs (always 1).  
  
That led partial aggregates with strict combine functions to wrongly
check for NOT NULL input as required by strictness.  When the aggregate
wasn't passed exactly one argument, the strictness check was either
omitted (in the zero-argument case) or too many arguments were checked.
In the latter case we'd read beyond the end of
FunctionCallInfoData->args (only in master).
  
AggStatePerTrans->numTransInputs has actually been wrong since 9.6,
where partial aggregates were added.  But it turns out not to be an
active problem in 9.6 and 10, because numTransInputs wasn't used at all
for combine functions: before c253b722f6 there simply was no NULL check
for the input to strict trans functions, and after that the check was
simply hardcoded for the right offset in fcinfo by code specific to
combine functions.
  
In bf6c614a2f2 (11) the strictness check was generalized, with common
code doing the strictness checks for both plain and combine transition
functions, based on numTransInputs.  For combine functions this led to
not emitting an expression step to check for strict input in the
zero-argument case, and in the more-than-one-argument case we'd check
too many arguments.  Because the relevant fcinfo->isnull[2..] entries
were always zero-initialized (more or less by accident, by being part
of the AggStatePerTrans struct, which is palloc0'ed), there was no
observable damage in the latter case before a9c35cf85ca1f; we just
checked too many array elements.
  
Due to the changes in a9c35cf85ca1f, the more-than-one-argument bug
became visible, because these days fcinfo is (a) dynamically allocated
without being zeroed and (b) exactly the length required for the number
of specified arguments (hardcoded to 2 in this case).
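
As a rough illustration (not taken from the regression tests; the cost
settings used to coax a parallel plan are assumptions), the affected
path is partial aggregation of aggregates with zero or more than one
argument:

    SET parallel_setup_cost = 0;
    SET parallel_tuple_cost = 0;
    SET min_parallel_table_scan_size = 0;
    CREATE TABLE agg_tab (a int, b int);
    INSERT INTO agg_tab SELECT g, g % 10 FROM generate_series(1, 10000) g;
    ANALYZE agg_tab;
    -- count(*) takes zero arguments and corr(a, b) takes two; under a
    -- parallel plan both go through the combine-function path described
    -- above.
    SELECT count(*), corr(a, b) FROM agg_tab;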
  
This commit only contains a fairly minimal fix, setting numTransInputs
to a hardcoded 1 when building a pertrans for a combine function.  It
seems likely that we'll want to clean this up further (e.g. the
arguments to build_pertrans_for_aggref() aren't particularly meaningful
for combine functions).  But the wrap date for 12 beta1 is coming up
fast, so it seems good to have a minimal fix in place.
  
Backpatch to 11. While AggStatePerTrans->numTransInputs was set  
wrongly before that, the value was not used for combine functions.  
  
Reported-By: Rajkumar Raghuwanshi  
Diagnosed-By: Kyotaro Horiguchi, Jeevan Chalke, Andres Freund, David Rowley  
Author: David Rowley, Kyotaro Horiguchi, Andres Freund  
Discussion: https://postgr.es/m/CAKcux6=uZEyWyLw0N7HtR9OBc-sWEFeByEZC7t-KDf15FKxVew@mail.gmail.com  

M src/backend/executor/nodeAgg.c
M src/test/regress/expected/aggregates.out
M src/test/regress/sql/aggregates.sql

Fix some grammar in documentation of spgist and pgbench

commit   : 0950d25acec66ad02d2fc2d6d75a36ec334ed6f8    
  
author   : Michael Paquier <michael@paquier.xyz>    
date     : Mon, 20 May 2019 09:48:27 +0900    
  
committer: Michael Paquier <michael@paquier.xyz>    
date     : Mon, 20 May 2019 09:48:27 +0900    

Discussion: https://postgr.es/m/92961161-9b49-e42f-0a72-d5d47e0ed4de@postgrespro.ru  
Author: Liudmila Mantrova  
Reviewed-by: Jonathan Katz, Tom Lane, Michael Paquier  
Backpatch-through: 9.4  

M doc/src/sgml/ref/pgbench.sgml
M doc/src/sgml/spgist.sgml

Revert “In the pg_upgrade test suite, don’t write to src/test/regress.”

commit   : 9518978e223b758f0efbc28422c5bf164d521f28    
  
author   : Noah Misch <noah@leadboat.com>    
date     : Sun, 19 May 2019 15:24:42 -0700    
  
committer: Noah Misch <noah@leadboat.com>    
date     : Sun, 19 May 2019 15:24:42 -0700    

This reverts commit bd1592e8570282b1650af6b8eede0016496daecd.  It had  
multiple defects.  
  
Discussion: https://postgr.es/m/12717.1558304356@sss.pgh.pa.us  

M src/bin/pg_upgrade/test.sh
M src/test/regress/input/largeobject.source
M src/test/regress/output/largeobject.source
M src/test/regress/output/largeobject_1.source
M src/tools/msvc/vcregress.pl

In the pg_upgrade test suite, don’t write to src/test/regress.

commit   : d08d880ab41afff57280e69b89144076ae068999    
  
author   : Noah Misch <noah@leadboat.com>    
date     : Sun, 19 May 2019 14:36:44 -0700    
  
committer: Noah Misch <noah@leadboat.com>    
date     : Sun, 19 May 2019 14:36:44 -0700    

When this suite runs installcheck, redirect file creations from  
src/test/regress to src/bin/pg_upgrade/tmp_check/regress.  This closes a  
race condition in "make -j check-world".  If the pg_upgrade suite wrote  
to a given src/test/regress/results file in parallel with the regular  
src/test/regress invocation writing it, a test failed spuriously.  Even  
without parallelism, in "make -k check-world", the suite finishing  
second overwrote the other's regression.diffs.  This revealed test  
"largeobject" assuming @abs_builddir@ is getcwd(), so fix that, too.  
  
Buildfarm client REL_10, released forty-five days ago, supports saving  
regression.diffs from its new location.  When an older client reports a  
pg_upgradeCheck failure, it will no longer include regression.diffs.  
Back-patch to 9.5, where pg_upgrade moved to src/bin.  
  
Reviewed by Andrew Dunstan.  
  
Discussion: https://postgr.es/m/20181224034411.GA3224776@rfd.leadboat.com  

M src/bin/pg_upgrade/test.sh
M src/test/regress/input/largeobject.source
M src/test/regress/output/largeobject.source
M src/test/regress/output/largeobject_1.source
M src/tools/msvc/vcregress.pl

Restructure creation of run-time pruning steps.

commit   : 592d5d75be9720e575e76ba35c3ff04659ec0603    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 17 May 2019 19:44:19 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 17 May 2019 19:44:19 -0400    

Previously, gen_partprune_steps() always built executor pruning steps  
using all suitable clauses, including those containing PARAM_EXEC  
Params.  This meant that the pruning steps were only completely safe  
for executor run-time (scan start) pruning.  To prune at executor  
startup, we had to ignore the steps involving exec Params.  But this  
doesn't really work in general, since there may be logic changes  
needed as well --- for example, pruning according to the last operator's  
btree strategy is the wrong thing if we're not applying that operator.  
The rules embodied in gen_partprune_steps() and its minions are  
sufficiently complicated that tracking their incremental effects in  
other logic seems quite impractical.  
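
As a hedged sketch of the distinction (table and statement names are
made up): an external Param can be used for pruning at executor
startup, whereas a PARAM_EXEC Param can only be used at scan start:

    CREATE TABLE pt (id int) PARTITION BY RANGE (id);
    CREATE TABLE pt1 PARTITION OF pt FOR VALUES FROM (0) TO (100);
    CREATE TABLE pt2 PARTITION OF pt FOR VALUES FROM (100) TO (200);
    -- Once a generic plan is in use, $1 stays a Param at plan time, so the
    -- unneeded partition is pruned at executor startup; a value supplied by
    -- e.g. a parameterized nestloop is a PARAM_EXEC Param and can only be
    -- used for pruning when the scan starts.
    PREPARE q(int) AS SELECT * FROM pt WHERE id = $1;
    EXECUTE q(150);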
  
Short of a complete redesign, the only safe fix seems to be to run  
gen_partprune_steps() twice, once to create executor startup pruning  
steps and then again for run-time pruning steps.  We can save a few  
cycles however by noting during the first scan whether we rejected  
any clauses because they involved exec Params --- if not, we don't  
need to do the second scan.  
  
In support of this, refactor the internal APIs in partprune.c to make  
more use of passing information in the GeneratePruningStepsContext  
struct, rather than as separate arguments.  
  
This is, I hope, the last piece of our response to a bug report from  
Alan Jackson.  Back-patch to v11 where this code came in.  
  
Discussion: https://postgr.es/m/FAD28A83-AC73-489E-A058-2681FA31D648@tvsquared.com  

M src/backend/executor/execPartition.c
M src/backend/nodes/copyfuncs.c
M src/backend/nodes/outfuncs.c
M src/backend/nodes/readfuncs.c
M src/backend/partitioning/partprune.c
M src/include/executor/execPartition.h
M src/include/nodes/plannodes.h
M src/include/partitioning/partprune.h
M src/test/regress/expected/partition_prune.out
M src/test/regress/sql/partition_prune.sql

Fix bogus logic for combining range-partitioned columns during pruning.

commit   : 51948c4e1fdef88ba9b953bd7b58d19a348732be    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 16 May 2019 16:25:43 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 16 May 2019 16:25:43 -0400    

gen_prune_steps_from_opexps's notion of how to do this was overly  
complicated and underly correct.  
  
Per discussion of a report from Alan Jackson (though this fixes only one  
aspect of that problem).  Back-patch to v11 where this code came in.  
  
Amit Langote  
  
Discussion: https://postgr.es/m/FAD28A83-AC73-489E-A058-2681FA31D648@tvsquared.com  

M src/backend/partitioning/partprune.c
M src/test/regress/expected/partition_prune.out
M src/test/regress/sql/partition_prune.sql

Fix partition pruning to treat stable comparison operators properly.

commit   : 10c5cc4b4f88d249751e27034a8dd59ea903a698    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 16 May 2019 11:58:22 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 16 May 2019 11:58:22 -0400    

Cross-type comparison operators in a btree or hash opclass might be  
only stable not immutable (this is true of timestamp vs. timestamptz  
for example).  partprune.c ignored this possibility and would perform  
plan-time pruning with them anyway, possibly leading to wrong answers  
if the environment changed between planning and execution.  
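
A hedged example of the problematic case (object names are made up):

    CREATE TABLE events (ts timestamptz NOT NULL) PARTITION BY RANGE (ts);
    CREATE TABLE events_2019 PARTITION OF events
        FOR VALUES FROM ('2019-01-01') TO ('2020-01-01');
    CREATE TABLE events_2020 PARTITION OF events
        FOR VALUES FROM ('2020-01-01') TO ('2021-01-01');
    -- The timestamp-vs-timestamptz comparison operator is only stable,
    -- because the conversion depends on the TimeZone setting, so pruning
    -- on it must be left until execution time.
    EXPLAIN (COSTS OFF)
    SELECT * FROM events WHERE ts = timestamp '2019-06-01 12:00';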
  
To fix, teach gen_partprune_steps() to do things differently when  
creating plan-time pruning steps vs. run-time pruning steps.  
analyze_partkey_exprs() also needs an extra check, which is rather  
annoying but now is not the time to restructure things enough to  
avoid that.  
  
While at it, simplify the logic for the plan-time case a little  
by insisting that the comparison value be a Const and nothing else.  
This relies on the assumption that eval_const_expressions will have  
reduced any immutable expression to a Const; which is not quite  
100% true, but certainly any case that comes up often enough to be  
interesting should have simplification logic there.  
  
Also improve a bunch of inadequate/obsolete/wrong comments.  
  
Per discussion of a report from Alan Jackson (though this fixes only one  
aspect of that problem).  Back-patch to v11 where this code came in.  
  
David Rowley, with some further hacking by me  
  
Discussion: https://postgr.es/m/FAD28A83-AC73-489E-A058-2681FA31D648@tvsquared.com  

M src/backend/partitioning/partprune.c
M src/test/regress/expected/partition_prune.out
M src/test/regress/sql/partition_prune.sql

Add isolation test for INSERT ON CONFLICT speculative insertion failure.

commit   : 05cf41973157577aac9706dcc7998054949b0ed4    
  
author   : Andres Freund <andres@anarazel.de>    
date     : Tue, 14 May 2019 11:45:40 -0700    
  
committer: Andres Freund <andres@anarazel.de>    
date     : Tue, 14 May 2019 11:45:40 -0700    

This path previously was not reliably covered. There was some  
heuristic coverage via insert-conflict-toast.spec, but that test is  
not deterministic, and only tested for a somewhat specific bug.  
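
The path in question is reached when two sessions speculatively insert
the same key at once; a rough sketch of the statement shape involved
(names are hypothetical):

    CREATE TABLE upserttest (key int PRIMARY KEY, data text);
    -- Two concurrent sessions running this for the same key race on
    -- speculative insertion; the loser must back out its speculatively
    -- inserted tuple, which is the failure path the new test covers.
    INSERT INTO upserttest VALUES (1, 'some data')
        ON CONFLICT (key) DO UPDATE SET data = excluded.data;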
  
Backpatch, as this is a complicated and otherwise untested code  
path. Unfortunately 9.5 cannot handle two waiting sessions, and thus  
cannot execute this test.  
  
Triggered by a conversation with Melanie Plageman.  
  
Author: Andres Freund  
Discussion: https://postgr.es/m/CAAKRu_a7hbyrk=wveHYhr4LbcRnRCG=yPUVoQYB9YO1CdUBE9Q@mail.gmail.com  
Backpatch: 9.5-  

A src/test/isolation/expected/insert-conflict-specconflict.out
M src/test/isolation/isolation_schedule
A src/test/isolation/specs/insert-conflict-specconflict.spec

Fix comment on when HOT update is possible.

commit   : 3293330f79af9d66e9df251266c882794edfec4e    
  
author   : Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Tue, 14 May 2019 13:06:33 +0300    
  
committer: Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Tue, 14 May 2019 13:06:33 +0300    

The conditions listed in this comment have changed several times, and at  
some point the thing that the "if so" referred to was negated.  
  
The text was OK up to 9.6. It was differently wrong in v10, v11 and  
master, so fix in all those versions.  

M src/backend/access/heap/heapam.c

Doc: Refer to line pointers as item identifiers.

commit   : 6bbc2f9b66104de67f29881c54e75fd6f5d2f694    
  
author   : Peter Geoghegan <pg@bowt.ie>    
date     : Mon, 13 May 2019 15:39:05 -0700    
  
committer: Peter Geoghegan <pg@bowt.ie>    
date     : Mon, 13 May 2019 15:39:05 -0700    

An upcoming HEAD-only patch will standardize the terminology around  
ItemIdData variables/line pointers, ending the practice of referring to  
them as "item pointers".  Make the "Database Page Layout" docs  
consistent with the new policy.  The term "item identifier" is already  
used in the same section, so stick with that.  
  
Discussion: https://postgr.es/m/CAH2-Wz=c=MZQjUzde3o9+2PLAPuHTpVZPPdYxN=E4ndQ2--8ew@mail.gmail.com  
Backpatch: All supported branches.  

M doc/src/sgml/storage.sgml

Fix logical replication’s ideas about which type OIDs are built-in.

commit   : b6abc2241ac4549623d6894d7855765df6345ad5    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 13 May 2019 17:23:00 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 13 May 2019 17:23:00 -0400    

Only hand-assigned type OIDs should be presumed to match across different  
PG servers; those assigned during genbki.pl or during initdb are likely  
to change due to addition or removal of unrelated objects.  
  
This means that the cutoff should be FirstGenbkiObjectId (in HEAD)  
or FirstBootstrapObjectId (before that), not FirstNormalObjectId.  
Compare postgres_fdw's is_builtin() test.  
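
For reference, a hedged way to inspect the OID range in question
(16384 is FirstNormalObjectId; the lower bound is an assumption about
the relevant boundary constant):

    -- Types in this range are assigned during genbki.pl or initdb and can
    -- differ between installations, unlike hand-assigned OIDs below it.
    SELECT oid, typname
    FROM pg_type
    WHERE oid BETWEEN 10000 AND 16383
    ORDER BY oid
    LIMIT 5;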
  
It's likely that this error has no observable consequence in a  
normally-functioning system, since ATM the only affected type OIDs are  
system catalog rowtypes and information_schema types, which would not  
typically be interesting for logical replication.  But you could  
probably break it if you tried hard, so back-patch.  
  
Discussion: https://postgr.es/m/15150.1557257111@sss.pgh.pa.us  

M src/backend/replication/logical/relation.c
M src/backend/replication/pgoutput/pgoutput.c

Don’t leave behind junk nbtree pages during split.

commit   : bf78f50bae0b3b5ffcbf3e3c5b03fd138be15f9a    
  
author   : Peter Geoghegan <pg@bowt.ie>    
date     : Mon, 13 May 2019 10:27:57 -0700    
  
committer: Peter Geoghegan <pg@bowt.ie>    
date     : Mon, 13 May 2019 10:27:57 -0700    

Commit 8fa30f906be reduced the elevel of a number of "can't happen"  
_bt_split() errors from PANIC to ERROR.  At the same time, the new right  
page buffer for the split could continue to be acquired well before the  
critical section.  This was possible because it was relatively  
straightforward to make sure that _bt_split() could not throw an error,  
with a few specific exceptions.  The exceptional cases were safe because  
they involved specific, well understood errors, making it possible to  
consistently zero the right page before actually raising an error using  
elog().  There was no danger of leaving around a junk page, provided  
_bt_split() stuck to this coding rule.  
  
Commit 8224de4f, which introduced INCLUDE indexes, added code to make  
_bt_split() truncate away non-key attributes.  This happened at a point  
that broke the rule around zeroing the right page in _bt_split().  If  
truncation failed (perhaps due to palloc() failure), that would result  
in an errant right page buffer with junk contents.  This could confuse  
VACUUM when it attempted to delete the page, and should be avoided on  
general principle.  
  
To fix, reorganize _bt_split() so that truncation occurs before the new  
right page buffer is even acquired.  A junk page/buffer will not be left  
behind if _bt_nonkey_truncate()/_bt_truncate() raise an error.  
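
For context, the truncation in question came in with INCLUDE indexes; a
minimal example of an index that exercises it (names are made up):

    CREATE TABLE inc_tab (a int, b text);
    -- The non-key column b is truncated away from pivot tuples during page
    -- splits; that truncation now happens before the new right-page buffer
    -- is acquired.
    CREATE INDEX inc_idx ON inc_tab (a) INCLUDE (b);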
  
Discussion: https://postgr.es/m/CAH2-WzkcWT_-NH7EeL=Az4efg0KCV+wArygW8zKB=+HoP=VWMw@mail.gmail.com  
Backpatch: 11-, where INCLUDE indexes were introduced.  

M src/backend/access/nbtree/nbtinsert.c

Fix misuse of an integer as a bool.

commit   : 6b0e9411ff0f0116d6f9118a870a682a17eea110    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 13 May 2019 10:53:19 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 13 May 2019 10:53:19 -0400    

pgtls_read_pending is declared to return bool, but what the underlying  
SSL_pending function returns is a count of available bytes.  
  
This is actually somewhat harmless if we're using C99 bools, but in  
the back branches it's a live bug: if the available-bytes count happened  
to be a multiple of 256, it would get converted to a zero char value.  
On machines where char is signed, counts of 128 and up could misbehave  
as well.  The net effect is that when using SSL, libpq might block  
waiting for data even though some has already been received.  
  
Broken by careless refactoring in commit 4e86f1b16, so back-patch  
to 9.5 where that came in.  
  
Per bug #15802 from David Binderman.  
  
Discussion: https://postgr.es/m/15802-f0911a97f0346526@postgresql.org  

M src/interfaces/libpq/fe-misc.c
M src/interfaces/libpq/fe-secure-openssl.c

postgres_fdw: Fix typo in comment.

commit   : 6ba0ff47cd9a7e86298dca3ead112eb27ae21265    
  
author   : Etsuro Fujita <efujita@postgresql.org>    
date     : Mon, 13 May 2019 17:30:37 +0900    
  
committer: Etsuro Fujita <efujita@postgresql.org>    
date     : Mon, 13 May 2019 17:30:37 +0900    

M contrib/postgres_fdw/postgres_fdw.c

Fix misoptimization of “{1,1}” quantifiers in regular expressions.

commit   : 72ce7acaf3e60da712f0de1916704a4aec06600d    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sun, 12 May 2019 18:53:12 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sun, 12 May 2019 18:53:12 -0400    

A bounded quantifier with m = n = 1 might be thought a no-op.  But  
according to our documentation (which traces back to Henry Spencer's  
original man page) it still imposes greediness, or non-greediness in the  
case of the non-greedy variant "{1,1}?", on whatever it's attached to.  
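
A hedged illustration (the expected outputs reflect a reading of the
fixed behavior, and are not taken from the regression tests):

    -- The atom contains capturing parentheses, which is what still reaches
    -- the optimized path.  With greediness honored, the non-greedy form
    -- prefers the shorter match.
    SELECT substring('abcabc' from '(a.*c){1,1}');    -- expected: abcabc
    SELECT substring('abcabc' from '(a.*c){1,1}?');   -- expected: abc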
  
This turns out not to work though, because parseqatom() optimizes away  
the m = n = 1 case without regard for whether it's supposed to change  
the greediness of the argument RE.  
  
We can fix this by just not applying the optimization when the greediness  
needs to change; the subsequent general cases handle it fine.  
  
The three cases in which we can still apply the optimization are  
(a) no quantifier, or quantifier does not impose a preference;  
(b) atom has no greediness property, implying it cannot match a  
variable amount of text anyway; or  
(c) quantifier's greediness is same as atom's.  
Note that in most cases where one of these applies, we'd have exited  
earlier in the "not a messy case" fast path.  I think it's now only  
possible to get to the optimization when the atom involves capturing  
parentheses or a non-top-level backref.  
  
Back-patch to all supported branches.  I'd ordinarily be hesitant to  
put a subtle behavioral change into back branches, but in this case  
it's very hard to see a reason why somebody would write "{1,1}?" unless  
they're trying to get the documented change-of-greediness behavior.  
  
Discussion: https://postgr.es/m/5bb27a41-350d-37bf-901e-9d26f5592dd0@charter.net  

M src/backend/regex/regcomp.c
M src/test/regress/expected/regex.out
M src/test/regress/sql/regex.sql

Fail pgwin32_message_to_UTF16() for SQL_ASCII messages.

commit   : 4ec14e5aa1f79d01a2558b694ccbe7756c4d186e    
  
author   : Noah Misch <noah@leadboat.com>    
date     : Sun, 12 May 2019 10:33:05 -0700    
  
committer: Noah Misch <noah@leadboat.com>    
date     : Sun, 12 May 2019 10:33:05 -0700    

The function had been interpreting SQL_ASCII messages as UTF8, throwing  
an error when they were invalid UTF8.  The new behavior is consistent  
with pg_do_encoding_conversion().  This affects LOG_DESTINATION_STDERR  
and LOG_DESTINATION_EVENTLOG, which will send untranslated bytes to  
write() and ReportEventA().  On buildfarm member bowerbird, enabling  
log_connections caused an error whenever the role name was not valid  
UTF8.  Back-patch to 9.4 (all supported versions).  
  
Discussion: https://postgr.es/m/20190512015615.GD1124997@rfd.leadboat.com  

M src/backend/utils/mb/mbutils.c
M src/bin/pg_dump/t/010_dump_connstr.pl
M src/bin/scripts/t/200_connstr.pl

Rearrange pgstat_bestart() to avoid failures within its critical section.

commit   : eb97242c2f78869376277567dcb8102283368489    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sat, 11 May 2019 21:27:13 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sat, 11 May 2019 21:27:13 -0400    

We long ago decided to design the shared PgBackendStatus data structure to  
minimize the cost of writing status updates, which means that writers just  
have to increment the st_changecount field twice.  That isn't hooked into  
any sort of resource management mechanism, which means that if something  
were to throw error between the two increments, the st_changecount field  
would be left odd indefinitely.  That would cause readers to lock up.  
Now, since it's also a bad idea to leave the field odd for longer than  
absolutely necessary (because readers will spin while we have it set),  
the expectation was that we'd treat these segments like spinlock critical  
sections, with only short, more or less straight-line, code in them.  
  
That was fine as originally designed, but commit 9029f4b37 broke it  
by inserting a significant amount of non-straight-line code into  
pgstat_bestart(), code that is very capable of throwing errors, not to  
mention taking a significant amount of time during which readers will spin.  
We have a report from Neeraj Kumar of readers actually locking up, which  
I suspect was due to an encoding conversion error in X509_NAME_to_cstring,  
though conceivably it was just a garden-variety OOM failure.  
  
Subsequent commits have loaded even more dubious code into pgstat_bestart's  
critical section (and commit fc70a4b0d deserves some kind of booby prize  
for managing to miss the critical section entirely, although the negative  
consequences seem minimal given that the PgBackendStatus entry should be  
seen by readers as inactive at that point).  
  
The right way to fix this mess seems to be to compute all these values  
into a local copy of the process' PgBackendStatus struct, and then just  
copy the data back within the critical section proper.  This plan can't  
be implemented completely cleanly because of the struct's heavy reliance  
on out-of-line strings, which we must initialize separately within the  
critical section.  But still, the critical section is far smaller and  
safer than it was before.  
  
In hopes of forestalling future errors of the same ilk, rename the  
macros for st_changecount management to make it more apparent that  
the writer-side macros create a critical section.  And to prevent  
the worst consequences if we nonetheless manage to mess it up anyway,  
adjust those macros so that they really are a critical section, ie  
they now bump CritSectionCount.  That doesn't add much overhead, and  
it guarantees that if we do somehow throw an error while the counter  
is odd, it will lead to PANIC and a database restart to reset shared  
memory.  
  
Back-patch to 9.5 where the problem was introduced.  
  
In HEAD, also fix an oversight in commit b0b39f72b: it failed to teach  
pgstat_read_current_status to copy st_gssstatus data from shared memory to  
local memory.  Hence, subsequent use of that data within the transaction  
would potentially see changing data that it shouldn't see.  
  
Discussion: https://postgr.es/m/CAPR3Wj5Z17=+eeyrn_ZDG3NQGYgMEOY6JV6Y-WRRhGgwc16U3Q@mail.gmail.com  

M src/backend/postmaster/pgstat.c
M src/include/pgstat.h

Honor TEMP_CONFIG in TAP suites.

commit   : 239dcf8f15b70102ed18d1d8a020e4a7bbc2a6f9    
  
author   : Noah Misch <noah@leadboat.com>    
date     : Sat, 11 May 2019 00:22:38 -0700    
  
committer: Noah Misch <noah@leadboat.com>    
date     : Sat, 11 May 2019 00:22:38 -0700    

The buildfarm client uses TEMP_CONFIG to implement its extra_config  
setting.  Except for stats_temp_directory, extra_config now applies to  
TAP suites; extra_config values seen in the past month are compatible  
with this.  Back-patch to 9.6, where PostgresNode was introduced, so the  
buildfarm can rely on it sooner.  
  
Reviewed by Andrew Dunstan and Tom Lane.  
  
Discussion: https://postgr.es/m/20181229021950.GA3302966@rfd.leadboat.com  

M src/bin/pg_ctl/t/001_start_stop.pl
M src/test/perl/PostgresNode.pm

Fix error reporting in reindexdb

commit   : e16ab408f3db5ced50d84748b7a9f367ece93d3f    
  
author   : Michael Paquier <michael@paquier.xyz>    
date     : Sat, 11 May 2019 13:01:07 +0900    
  
committer: Michael Paquier <michael@paquier.xyz>    
date     : Sat, 11 May 2019 13:01:07 +0900    

When failing to reindex a table or an index, reindexdb would generate an  
extra error message related to a database failure, which is misleading.  
  
Backpatch all the way down, as this was introduced by 85e9a5a0.  
  
Discussion: https://postgr.es/m/CAOBaU_Yo61RwNO3cW6WVYWwH7EYMPuexhKqufb2nFGOdunbcHw@mail.gmail.com  
Author: Julien Rouhaud  
Reviewed-by: Daniel Gustafsson, Álvaro Herrera, Tom Lane, Michael Paquier  
Backpatch-through: 9.4  

M src/bin/scripts/reindexdb.c

Cope with EINVAL and EIDRM shmat() failures in PGSharedMemoryAttach.

commit   : 803f90ab795b6bc170ba517cdd0dfddc85a5f961    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 10 May 2019 14:56:41 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 10 May 2019 14:56:41 -0400    

There's a very old race condition in our code to see whether a pre-existing  
shared memory segment is still in use by a conflicting postmaster: it's  
possible for the other postmaster to remove the segment in between our  
shmctl() and shmat() calls.  It's a narrow window, and there's no risk  
unless both postmasters are using the same port number, but that's possible  
during parallelized "make check" tests.  (Note that while the TAP tests  
take some pains to choose a randomized port number, pg_regress doesn't.)  
If it does happen, we treated that as an unexpected case and errored out.  
  
To fix, allow EINVAL to be treated as segment-not-present, and the same  
for EIDRM on Linux.  AFAICS, the considerations here are basically  
identical to the checks for acceptable shmctl() failures, so I documented  
and coded it that way.  
  
While at it, adjust PGSharedMemoryAttach's API to remove its undocumented  
dependency on UsedShmemSegAddr in favor of passing the attach address  
explicitly.  This makes it easier to be sure we're using a null shmaddr  
when probing for segment conflicts (thus avoiding questions about what  
EINVAL means).  I don't think there was a bug there, but it required  
fragile assumptions about the state of UsedShmemSegAddr during  
PGSharedMemoryIsInUse.  
  
Commit c09850992 may have made this failure more probable by applying  
the conflicting-segment tests more often.  Hence, back-patch to all  
supported branches, as that was.  
  
Discussion: https://postgr.es/m/22224.1557340366@sss.pgh.pa.us  

M src/backend/port/sysv_shmem.c

Repair issues with faulty generation of merge-append plans.

commit   : e7eed0baa049ee2a1b06b7af10f7e4580a3a6cdd    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 9 May 2019 16:52:49 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 9 May 2019 16:52:49 -0400    

create_merge_append_plan failed to honor the CP_EXACT_TLIST flag:  
it would generate the expected targetlist but then it felt free to  
add resjunk sort targets to it.  This demonstrably leads to assertion  
failures in v11 and HEAD, and it's probably just accidental that we  
don't see the same in older branches.  I've not looked into whether  
there would be any real-world consequences in non-assert builds.  
In HEAD, create_append_plan has sprouted the same problem, so fix  
that too (although we do not have any test cases that seem able to  
reach that bug).  This is an oversight in commit 3fc6e2d7f which  
invented the CP_EXACT_TLIST flag, so back-patch to 9.6 where that  
came in.  
  
convert_subquery_pathkeys would create pathkeys for subquery output  
values if they match any EquivalenceClass known in the outer query  
and are available in the subquery's syntactic targetlist.  However,  
the second part of that condition is wrong, because such values might  
not appear in the subquery relation's reltarget list, which would  
mean that they couldn't be accessed above the level of the subquery  
scan.  We must check that they appear in the reltarget list, instead.  
This can lead to dropping knowledge about the subquery's sort  
ordering, but I believe it's okay, because any sort key that the  
outer query actually has any interest in would appear in the  
reltarget list.  
  
This second issue is of very long standing, but right now there's no  
evidence that it causes observable problems before 9.6, so I refrained  
from back-patching further than that.  We can revisit that choice if  
somebody finds a way to make it cause problems in older branches.  
(Developing useful test cases for these issues is really problematic;  
fixing convert_subquery_pathkeys removes the only known way to exhibit  
the create_merge_append_plan bug, and neither of the test cases added  
by this patch causes a problem in all branches, even when considering  
the issues separately.)  
  
The second issue explains bug #15795 from Suresh Kumar R ("could not  
find pathkey item to sort" with nested DISTINCT queries).  I stumbled  
across the first issue while investigating that.  
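
For context, a rough sketch of the query shape involved (not the
reporter's query; whether this exact form reaches the bug depends on
the plan chosen):

    CREATE TABLE items (grp int, val int);
    -- A DISTINCT over a subquery that itself applies DISTINCT needs the
    -- subquery's sort ordering to be re-derived in the outer query, which
    -- is where "could not find pathkey item to sort" was raised.
    SELECT DISTINCT grp
    FROM (SELECT DISTINCT ON (grp) grp, val
          FROM items
          ORDER BY grp, val) AS sub;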
  
Discussion: https://postgr.es/m/15795-fadb56c8e44ee73c@postgresql.org  

M src/backend/optimizer/path/pathkeys.c
M src/backend/optimizer/plan/createplan.c
M src/test/regress/expected/union.out
M src/test/regress/sql/union.sql

Fix error status of vacuumdb when multiple jobs are used

commit   : 25f12acd53f603a581d8bc89920037a811f12f82    
  
author   : Michael Paquier <michael@paquier.xyz>    
date     : Thu, 9 May 2019 10:29:29 +0900    
  
committer: Michael Paquier <michael@paquier.xyz>    
date     : Thu, 9 May 2019 10:29:29 +0900    

When running a batch of VACUUM or ANALYZE commands on a given database,
there were cases where vacuumdb could fail to report an error when it
actually should have, leading to an incorrect exit status.
  
Author: Julien Rouhaud  
Reviewed-by: Amit Kapila, Michael Paquier  
Discussion: https://postgr.es/m/CAOBaU_ZuTwz7CtqLYJ1Ouuh272bTQPLN8b1bAPk0bCBm4PDMTQ@mail.gmail.com  
Backpatch-through: 9.5  

M src/bin/scripts/vacuumdb.c

Fix documentation for the privileges required for replication functions.

commit   : a9d5383db2e17a602ab6f9f0b4955623a8d444a6    
  
author   : Fujii Masao <fujii@postgresql.org>    
date     : Thu, 9 May 2019 01:35:13 +0900    
  
committer: Fujii Masao <fujii@postgresql.org>    
date     : Thu, 9 May 2019 01:35:13 +0900    

Previously it was documented that use of replication functions is
restricted to superusers.  This is true for the functions which use
replication origins, but not for pg_logical_emit_message() and the
functions which use replication slots.  For example, not only
superusers but also users with the REPLICATION privilege are allowed
to use the functions for replication slots.  This commit fixes the
documentation for the privileges required for those replication
functions.
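
A hedged example of the distinction (role and slot names are
hypothetical):

    -- A non-superuser role with REPLICATION may use the replication-slot
    -- functions and pg_logical_emit_message(); the replication-origin
    -- functions remain superuser-only.
    CREATE ROLE repl_user LOGIN REPLICATION;
    SET ROLE repl_user;
    SELECT pg_create_physical_replication_slot('demo_slot');
    SELECT pg_drop_replication_slot('demo_slot');
    RESET ROLE;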
  
Back-patch to 9.4 (all supported versions).  
  
Author: Matsumura Ryo  
Discussion: https://postgr.es/m/03040DFF97E6E54E88D3BFEE5F5480F74ABA6E16@G01JPEXMBYT04  

M doc/src/sgml/func.sgml

Probe only 127.0.0.1 when looking for ports on Unix.

commit   : 1f3bcb4972009c8af7b71d1526559475a248f77a    
  
author   : Thomas Munro <tmunro@postgresql.org>    
date     : Mon, 6 May 2019 15:02:41 +1200    
  
committer: Thomas Munro <tmunro@postgresql.org>    
date     : Mon, 6 May 2019 15:02:41 +1200    

Commit c0985099, later adjusted by commit 4ab02e81, probed 0.0.0.0  
in addition to 127.0.0.1, for the benefit of Windows build farm  
animals.  It isn't really useful on Unix systems, and turned out to  
be a bit inconvenient to users of some corporate firewall software.  
Switch back to probing just 127.0.0.1 on non-Windows systems.  
  
Back-patch to 9.6, like the earlier changes.  
  
Discussion: https://postgr.es/m/CA%2BhUKG%2B21EPwfgs4m%2BtqyRtbVqkOUvP8QQ8sWk9%2Bh55Aub1H3A%40mail.gmail.com  

M src/test/perl/PostgresNode.pm

Remove leftover reference to old “flat file” mechanism in a comment.

commit   : 2bc59f890100f9a90289f8ef10b9403294915ff8    
  
author   : Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Wed, 8 May 2019 09:32:34 +0300    
  
committer: Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Wed, 8 May 2019 09:32:34 +0300    

The flat file mechanism was removed in PostgreSQL 9.0.  

M src/backend/access/transam/xact.c

commit   : 64ad372346b358aeaf7fd7c6d913f636dc4af4db    
  
author   : Michael Paquier <michael@paquier.xyz>    
date     : Tue, 7 May 2019 14:19:56 +0900    
  
committer: Michael Paquier <michael@paquier.xyz>    
date     : Tue, 7 May 2019 14:19:56 +0900    

This code was broken as of 582edc3, and is most likely not used anymore.  
Note that pg_dump supports servers down to 8.0, and psql has code to  
support servers down to 7.4.  
  
Author: Julien Rouhaud  
Reviewed-by: Tom Lane  
Discussion: https://postgr.es/m/CAOBaU_Y5y=zo3+2gf+2NJC1pvMYPcbRXoQaPXx=U7+C8Qh4CzQ@mail.gmail.com  

M src/bin/scripts/common.c