Stamp 10.6.
commit : c63d9ebb5940bf3d24a6ecdc300ca9e95e29ddbe
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Mon, 5 Nov 2018 16:45:50 -0500
committer: Tom Lane <tgl@sss.pgh.pa.us>
date : Mon, 5 Nov 2018 16:45:50 -0500
M configure
M configure.in
M doc/bug.template
M src/include/pg_config.h.win32
M src/interfaces/libpq/libpq.rc.in
M src/port/win32ver.rc
Last-minute updates for release notes.
commit : bce0ff4080eba1f7175db832227bfde84db3af31
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Mon, 5 Nov 2018 16:07:06 -0500
committer: Tom Lane <tgl@sss.pgh.pa.us>
date : Mon, 5 Nov 2018 16:07:06 -0500
I removed the item about the pg_stat_statements change from
release-11.sgml, as part of a sweep to delete items already committed
in 11.0; but actually we'd best keep it to ensure that people who've
pg_upgraded their databases will take the requisite action. Also make
said action more visible by making it into its own para. Noted by
Jonathan Katz.
M doc/src/sgml/release-10.sgml
Fix copy-paste error in errhint() introduced in 691d79a07933.
commit : ac36e6aee8086847a4c21ba202f91ba0171da205
author : Andres Freund <andres@anarazel.de>
date : Mon, 5 Nov 2018 12:02:25 -0800
committer: Andres Freund <andres@anarazel.de>
date : Mon, 5 Nov 2018 12:02:25 -0800
Reported-By: Petr Jelinek
Discussion: https://postgr.es/m/c95a620b-34f0-7930-aeb5-f7ab804f26cb@2ndquadrant.com
Backpatch: 9.4-, like the previous commit
M src/backend/replication/slot.c
Last-minute updates for release notes.
commit : 613373b52b08dee01fad2f25162dd92486740c76
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Mon, 5 Nov 2018 10:48:23 -0500
committer: Tom Lane <tgl@sss.pgh.pa.us>
date : Mon, 5 Nov 2018 10:48:23 -0500
Security: CVE-2018-16850
M doc/src/sgml/release-10.sgml
Translation updates
commit : 5d846a2dd7b1d3e61f8bb813e1f7b7e1ad18607b
author : Peter Eisentraut <peter_e@gmx.net>
date : Mon, 5 Nov 2018 14:46:40 +0100
committer: Peter Eisentraut <peter_e@gmx.net>
date : Mon, 5 Nov 2018 14:46:40 +0100
Source-Git-URL: https://git.postgresql.org/git/pgtranslation/messages.git
Source-Git-Hash: 4aac9391521d21fdecc378db4750a59795350b33
M src/backend/po/de.po
M src/backend/po/fr.po
M src/backend/po/it.po
M src/backend/po/ru.po
M src/backend/po/sv.po
M src/backend/po/tr.po
M src/bin/initdb/po/fr.po
M src/bin/initdb/po/ru.po
M src/bin/initdb/po/sv.po
M src/bin/pg_basebackup/po/fr.po
M src/bin/pg_basebackup/po/it.po
M src/bin/pg_basebackup/po/ru.po
M src/bin/pg_basebackup/po/tr.po
M src/bin/pg_controldata/po/fr.po
M src/bin/pg_controldata/po/it.po
M src/bin/pg_controldata/po/ru.po
M src/bin/pg_controldata/po/sv.po
M src/bin/pg_controldata/po/tr.po
M src/bin/pg_ctl/po/it.po
M src/bin/pg_ctl/po/sv.po
M src/bin/pg_dump/po/de.po
M src/bin/pg_dump/po/fr.po
M src/bin/pg_dump/po/it.po
M src/bin/pg_dump/po/ru.po
M src/bin/pg_dump/po/sv.po
M src/bin/pg_resetwal/po/ru.po
M src/bin/pg_resetwal/po/sv.po
M src/bin/pg_rewind/po/fr.po
M src/bin/pg_rewind/po/it.po
M src/bin/pg_rewind/po/ru.po
M src/bin/pg_rewind/po/sv.po
M src/bin/pg_rewind/po/tr.po
M src/bin/pg_upgrade/po/de.po
M src/bin/pg_upgrade/po/fr.po
M src/bin/pg_upgrade/po/ru.po
M src/bin/pg_upgrade/po/sv.po
M src/bin/pg_upgrade/po/tr.po
M src/bin/psql/po/it.po
M src/bin/psql/po/ru.po
M src/bin/psql/po/sv.po
M src/bin/psql/po/tr.po
M src/bin/scripts/po/fr.po
M src/bin/scripts/po/it.po
M src/bin/scripts/po/tr.po
M src/interfaces/ecpg/preproc/po/it.po
M src/interfaces/ecpg/preproc/po/ru.po
M src/interfaces/libpq/po/it.po
M src/interfaces/libpq/po/ru.po
M src/interfaces/libpq/po/tr.po
M src/pl/plpgsql/src/po/ru.po
M src/pl/plpython/po/tr.po
Block creation of partitions with open references to their parent
commit : 8aad248f7c67f1225027414530ce2809c1fcd104
author : Michael Paquier <michael@paquier.xyz>
date : Mon, 5 Nov 2018 11:04:20 +0900
committer: Michael Paquier <michael@paquier.xyz>
date : Mon, 5 Nov 2018 11:04:20 +0900
When a partition is created as part of trigger processing, the newly
created partition can change properties of the table that the executor of
the ongoing command relies on, causing a subsequent crash. This has been
found possible when, for example, a BEFORE INSERT trigger creates a new
partition for the partitioned table being inserted into.
Any such attempt is now blocked when working on a partition, with
regression tests added for both CREATE TABLE PARTITION OF and ALTER
TABLE ATTACH PARTITION.
Reported-by: Dmitry Shalashov
Author: Amit Langote
Reviewed-by: Michael Paquier, Tom Lane
Discussion: https://postgr.es/m/15437-3fe01ee66bd1bae1@postgresql.org
Backpatch-through: 10
M src/backend/commands/tablecmds.c
M src/test/regress/expected/alter_table.out
M src/test/regress/expected/create_table.out
M src/test/regress/sql/alter_table.sql
M src/test/regress/sql/create_table.sql
Ignore partitioned tables when processing ON COMMIT DELETE ROWS
commit : 70c38e7080128e27cb6b9e20237f2c36807b0000
author : Michael Paquier <michael@paquier.xyz>
date : Mon, 5 Nov 2018 09:15:25 +0900
committer: Michael Paquier <michael@paquier.xyz>
date : Mon, 5 Nov 2018 09:15:25 +0900
Those tables have no physical storage, making this option unusable with
partition trees, as an actual truncation was attempted at commit time.
There are still issues with the way ON COMMIT actions are handled when
several action types are mixed; however, this also impacts inheritance
trees, so that issue will be dealt with later.
Reported-by: Rajkumar Raghuwanshi
Author: Amit Langote
Reviewed-by: Michael Paquier, Tom Lane
Discussion: https://postgr.es/m/CAKcux6mhgcjSiB_egqEAEFgX462QZtncU8QCAJ2HZwM-wWGVew@mail.gmail.com
M src/backend/catalog/heap.c
M src/test/regress/expected/temp.out
M src/test/regress/sql/temp.sql
Release notes for 11.1, 10.6, 9.6.11, 9.5.15, 9.4.20, 9.3.25.
commit : d7c3719298e631cfd73e385a031c86efc11ab726
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Sun, 4 Nov 2018 16:57:14 -0500
committer: Tom Lane <tgl@sss.pgh.pa.us>
date : Sun, 4 Nov 2018 16:57:14 -0500
M doc/src/sgml/release-10.sgml
M doc/src/sgml/release-9.3.sgml
M doc/src/sgml/release-9.4.sgml
M doc/src/sgml/release-9.5.sgml
M doc/src/sgml/release-9.6.sgml
Make ts_locale.c's character-type functions cope with UTF-16.
commit : f7ba6e951a12fb1d8cc3cd1346e02856009f9c4c
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Sat, 3 Nov 2018 13:56:10 -0400
committer: Tom Lane <tgl@sss.pgh.pa.us>
date : Sat, 3 Nov 2018 13:56:10 -0400
On Windows, in UTF8 database encoding, what char2wchar() produces is
UTF16 not UTF32, ie, characters above U+FFFF will be represented by
surrogate pairs. t_isdigit() and siblings did not account for this
and failed to provide a large enough result buffer. That in turn
led to bogus "invalid multibyte character for locale" errors, because
contrary to what you might think from char2wchar()'s documentation,
its Windows code path doesn't cope sanely with buffer overflow.
The solution for t_isdigit() and siblings is pretty clear: provide
a 3-wchar_t result buffer not 2.
char2wchar() also needs some work to provide more consistent, and more
accurately documented, buffer overrun behavior. But that's a bigger job
and it doesn't actually have any immediate payoff, so leave it for later.
Per bug #15476 from Kenji Uno, who deserves credit for identifying the
cause of the problem. Back-patch to all active branches.
Discussion: https://postgr.es/m/15476-4314f480acf0f114@postgresql.org
M src/backend/tsearch/ts_locale.c
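A minimal C sketch of the buffer-sizing fix described above. On Windows,
wchar_t is 16 bits, so a character above U+FFFF converts to a UTF-16
surrogate pair; with the terminating NUL that is 3 wchar_t, not 2.
convert_char() is a hypothetical stand-in for ts_locale.c's char2wchar()
call, not the actual function.

    #include <stddef.h>
    #include <wchar.h>
    #include <wctype.h>

    /* Hypothetical stand-in for ts_locale.c's char2wchar() call. */
    extern int convert_char(wchar_t *to, size_t tolen,
                            const char *from, size_t fromlen);

    static int
    classify_char(const char *mbchar, size_t mblen)
    {
        wchar_t wstr[3];    /* surrogate pair + terminating NUL: was [2] */

        if (convert_char(wstr, sizeof(wstr) / sizeof(wstr[0]),
                         mbchar, mblen) <= 0)
            return 0;
        return iswdigit((wint_t) wstr[0]) != 0;
    }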
Remove extra word from create sub docs
commit : cf2245159172836db29d8f0bc7d7164fd33714b1
author : Stephen Frost <sfrost@snowman.net>
date : Sat, 3 Nov 2018 12:22:09 -0400
committer: Stephen Frost <sfrost@snowman.net>
date : Sat, 3 Nov 2018 12:22:09 -0400
Improve the documentation in the CREATE SUBSCRIPTION command a bit by
removing an extraneous word and spelling out 'information'.
M doc/src/sgml/ref/create_subscription.sgml
Yet further rethinking of build changes for macOS Mojave.
commit : 229a5c8ade48db7eced79539ab1b8d8a0e86ae88
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Fri, 2 Nov 2018 18:54:00 -0400
committer: Tom Lane <tgl@sss.pgh.pa.us>
date : Fri, 2 Nov 2018 18:54:00 -0400
The solution arrived at in commit e74dd00f5 presumes that the compiler
has a suitable default -isysroot setting ... but further experience
shows that in many combinations of macOS version, Xcode version, Xcode
command line tools version, and phase of the moon, Apple's compiler
will *not* supply a default -isysroot value.
We could potentially go back to the approach used in commit 68fc227dd,
but I don't have a lot of faith in the reliability or life expectancy of
that either. Let's just revert to the approach already shipped in 11.0,
namely specifying an -isysroot switch globally. As a partial response to
the concerns raised by Jakob Egger, adjust the contents of Makefile.global
to look like
CPPFLAGS = -isysroot $(PG_SYSROOT) ...
PG_SYSROOT = /path/to/sysroot
This allows overriding the sysroot path at build time in a relatively
painless way.
Add documentation to installation.sgml about how to use the PG_SYSROOT
option. I also took the opportunity to document how to work around
macOS's "System Integrity Protection" feature.
As before, back-patch to all supported versions.
Discussion: https://postgr.es/m/20840.1537850987@sss.pgh.pa.us
M configure
M configure.in
M doc/src/sgml/installation.sgml
M src/Makefile.global.in
M src/template/darwin
docs: adjust simpler language for NULL return from ANY/ALL
commit : beceb87259780e50da0e6f596b53b2fb235d7962
author : Bruce Momjian <bruce@momjian.us>
date : Fri, 2 Nov 2018 13:05:30 -0400
committer: Bruce Momjian <bruce@momjian.us>
date : Fri, 2 Nov 2018 13:05:30 -0400
Adjustment to commit 8610c973ddf1cbf0befc1369d2cf0d56c0efcd0a.
Reported-by: Tom Lane
Discussion: https://postgr.es/m/17406.1541168421@sss.pgh.pa.us
Backpatch-through: 9.3
M doc/src/sgml/func.sgml
GUC: adjust effective_cache_size docs and SQL description
commit : 5463deb15f648685bcd406f020af6e434c4ed28c
author : Bruce Momjian <bruce@momjian.us>
date : Fri, 2 Nov 2018 09:11:00 -0400
committer: Bruce Momjian <bruce@momjian.us>
date : Fri, 2 Nov 2018 09:11:00 -0400
Clarify that effective_cache_size covers both kernel buffers and shared
buffers.
Reported-by: nat@makarevitch.org
Discussion: https://postgr.es/m/153685164808.22334.15432535018443165207@wrigleys.postgresql.org
Backpatch-through: 9.3
M doc/src/sgml/config.sgml
M src/backend/utils/misc/guc.c
Fix some spelling errors in the documentation
commit : 919cffd323a0408a31cde7bc58994add5694b7fc
author : Magnus Hagander <magnus@hagander.net>
date : Fri, 2 Nov 2018 13:55:57 +0100
committer: Magnus Hagander <magnus@hagander.net>
date : Fri, 2 Nov 2018 13:55:57 +0100
Author: Daniel Gustafsson <daniel@yesql.se>
M doc/src/sgml/libpq.sgml
M doc/src/sgml/lobj.sgml
doc: use simpler language for NULL return from ANY/ALL
commit : b5acaa8bc8dd7f13f7ae1879aea39cef48d7aa3c
author : Bruce Momjian <bruce@momjian.us>
date : Fri, 2 Nov 2018 08:54:34 -0400
committer: Bruce Momjian <bruce@momjian.us>
date : Fri, 2 Nov 2018 08:54:34 -0400
Previously the combination of "does not return" and "any row" caused
ambiguity.
Reported-by: KES <kes-kes@yandex.ru>
Discussion: https://postgr.es/m/153701242703.22334.1476830122267077397@wrigleys.postgresql.org
Reviewed-by: David G. Johnston
Backpatch-through: 9.3
M doc/src/sgml/func.sgml
Fix error message typo introduced in 691d79a07933.
commit : 877b00561eea4118a15d8125ee2dbb7d98de6904
author : Andres Freund <andres@anarazel.de>
date : Thu, 1 Nov 2018 10:44:29 -0700
committer: Andres Freund <andres@anarazel.de>
date : Thu, 1 Nov 2018 10:44:29 -0700
Reported-By: Michael Paquier
Discussion: https://postgr.es/m/20181101003405.GB1727@paquier.xyz
Backpatch: 9.4-, like the previous commit
M src/backend/replication/slot.c
Disallow starting server with insufficient wal_level for existing slot.
commit : 021e1c329d7c5e9a72cc7c7bd21d4604d087635f
author : Andres Freund <andres@anarazel.de>
date : Wed, 31 Oct 2018 14:47:41 -0700
committer: Andres Freund <andres@anarazel.de>
date : Wed, 31 Oct 2018 14:47:41 -0700
Previously it was possible to create a slot, change wal_level, and
restart, even if the new wal_level was insufficient for the
slot. That's a problem for both logical and physical slots, because
the necessary WAL records are not generated.
This removes a few tests in newer versions that, somewhat
inexplicably, checked whether restarting with a too-low wal_level
worked (a buggy behaviour!).
Reported-By: Joshua D. Drake
Author: Andres Freund
Discussion: https://postgr.es/m/20181029191304.lbsmhshkyymhw22w@alap3.anarazel.de
Backpatch: 9.4-, where replication slots were introduced
M src/backend/replication/logical/logical.c
M src/backend/replication/slot.c
M src/test/recovery/t/006_logical_decoding.pl
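An illustrative C sketch (not the committed function) of the startup-time
check described above: refuse to proceed when an existing slot requires a
higher wal_level than the server was started with. Message wording and the
slot_is_logical parameter are illustrative assumptions.

    #include "postgres.h"
    #include "access/xlog.h"

    static void
    check_slot_requirements(bool slot_is_logical)
    {
        if (slot_is_logical && wal_level < WAL_LEVEL_LOGICAL)
            ereport(FATAL,
                    (errmsg("logical replication slot exists, but wal_level < logical"),
                     errhint("Change wal_level to be logical or higher.")));
        else if (wal_level < WAL_LEVEL_REPLICA)
            ereport(FATAL,
                    (errmsg("physical replication slot exists, but wal_level < replica"),
                     errhint("Change wal_level to be replica or higher.")));
    }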
Fix memory leak in repeated SPGIST index scans.
commit : 92e371f9b143dacd83ec14ef8c7caba9612d9294
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Wed, 31 Oct 2018 17:04:43 -0400
committer: Tom Lane <tgl@sss.pgh.pa.us>
date : Wed, 31 Oct 2018 17:04:43 -0400
spgendscan neglected to pfree all the memory allocated by spgbeginscan.
It's possible to get away with that in most normal queries, since the
memory is allocated in the executor's per-query context which is about
to get deleted anyway; but it causes severe memory leakage during
creation or filling of large exclusion-constraint indexes.
Also, document that amendscan is supposed to free what ambeginscan
allocates. The docs' lack of clarity on that point probably caused this
bug to begin with. (There is discussion of changing that API spec going
forward, but I don't think it'd be appropriate for the back branches.)
Per report from Bruno Wolff. It's been like this since the beginning,
so back-patch to all active branches.
In HEAD, also fix an independent leak caused by commit 2a6368343
(allocating memory during spgrescan instead of spgbeginscan, which
might be all right if it got cleaned up, but it didn't). And do a bit
of code beautification on that commit, too.
Discussion: https://postgr.es/m/20181024012314.GA27428@wolff.to
M doc/src/sgml/indexam.sgml
M src/backend/access/spgist/spgscan.c
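A short C sketch of the index-AM contract this commit documents: whatever
ambeginscan allocates, amendscan must pfree. MyScanOpaque and the two
functions are illustrative, not the actual SP-GiST code.

    #include "postgres.h"
    #include "access/genam.h"
    #include "access/relscan.h"

    typedef struct MyScanOpaque
    {
        char   *workspace;          /* per-scan scratch memory */
    } MyScanOpaque;

    IndexScanDesc
    my_beginscan(Relation index, int nkeys, int norderbys)
    {
        IndexScanDesc scan = RelationGetIndexScan(index, nkeys, norderbys);
        MyScanOpaque *so = (MyScanOpaque *) palloc(sizeof(MyScanOpaque));

        so->workspace = (char *) palloc(1024);
        scan->opaque = so;
        return scan;
    }

    void
    my_endscan(IndexScanDesc scan)
    {
        MyScanOpaque *so = (MyScanOpaque *) scan->opaque;

        pfree(so->workspace);       /* everything beginscan palloc'd ... */
        pfree(so);                  /* ... including the opaque struct itself */
    }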
Sync our copy of the timezone library with IANA release tzcode2018g.
commit : 48c6df11a434765b1154dc07803bf95e764632fb
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Wed, 31 Oct 2018 09:47:53 -0400
committer: Tom Lane <tgl@sss.pgh.pa.us>
date : Wed, 31 Oct 2018 09:47:53 -0400
This patch absorbs an upstream fix to "zic" for a recently-introduced
bug that made it output data that some 32-bit clients couldn't read.
Given the current source data, the bug only manifests in zones with
leap seconds, which we don't generate, so that there's no actual
change in our installed timezone data files from this. Still, in
case somebody uses our copy of "zic" to do something else, it seems
best to apply the fix promptly.
Also, update the README's notes about converting upstream code to
our conventions.
M src/timezone/README
M src/timezone/zic.c
Update time zone data files to tzdata release 2018g.
commit : 671f43d883a47b22d38e1d26513d70b7830e2cdd
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Wed, 31 Oct 2018 08:35:50 -0400
committer: Tom Lane <tgl@sss.pgh.pa.us>
date : Wed, 31 Oct 2018 08:35:50 -0400
DST law changes in Morocco (with, effectively, zero notice).
Historical corrections for Hawaii.
M src/timezone/data/tzdata.zi
Fix missing whitespace in pg_dump ref page
commit : e873886b83e1ef5787dc0e4cd9588a703b4324a0
author : Magnus Hagander <magnus@hagander.net>
date : Mon, 29 Oct 2018 12:34:49 +0100
committer: Magnus Hagander <magnus@hagander.net>
date : Mon, 29 Oct 2018 12:34:49 +0100
Author: Daniel Gustafsson <daniel@yesql.se>
M doc/src/sgml/ref/pg_dump.sgml
pg_restore: Augment documentation for -N option
commit : fef9482dacfef6fadebcc8ce2da57b850a32e8fd
author : Peter Eisentraut <peter_e@gmx.net>
date : Mon, 29 Oct 2018 11:31:43 +0100
committer: Peter Eisentraut <peter_e@gmx.net>
date : Mon, 29 Oct 2018 11:31:43 +0100
This was forgotten when the option was added.
Author: Michael Banck <michael.banck@credativ.de>
M src/bin/pg_dump/pg_restore.c
Fix perl searchpath for modern perl for MSVC tools
commit : a71f55652276805289d73dd88bc8f18e4a4d9ab2
author : Andrew Dunstan <andrew@dunslane.net>
date : Sun, 28 Oct 2018 12:22:32 -0400
committer: Andrew Dunstan <andrew@dunslane.net>
date : Sun, 28 Oct 2018 12:22:32 -0400
Modern versions of perl no longer include the current directory in the
perl searchpath, as it's insecure. Instead of adding the current
directory, we get around the problem by adding the directory where the
script lives.
Problem noted by Victor Wagner.
Solution adapted from buildfarm client code.
Backpatch to all live versions.
M src/tools/msvc/install.pl
M src/tools/msvc/mkvcbuild.pl
M src/tools/msvc/vcregress.pl
List wait events in alphabetical order in documentation
commit : aa9642acb96b253f80c940f7b6bce38c7cca2c0f
author : Michael Paquier <michael@paquier.xyz>
date : Wed, 24 Oct 2018 17:02:51 +0900
committer: Michael Paquier <michael@paquier.xyz>
date : Wed, 24 Oct 2018 17:02:51 +0900
Keeping all those entries in order helps users looking at the
documentation find them.
Author: Michael Paquier, Kuntal Ghosh
Discussion: https://postgr.es/m/20181024002539.GI1658@paquier.xy
M doc/src/sgml/monitoring.sgml
Fix some grammar errors in bloom.sgml
commit : 28ddee2b02127a03a983a6c76a10dedb4d6c48d5
author : Alexander Korotkov <akorotkov@postgresql.org>
date : Mon, 22 Oct 2018 00:23:26 +0300
committer: Alexander Korotkov <akorotkov@postgresql.org>
date : Mon, 22 Oct 2018 00:23:26 +0300
Discussion: https://postgr.es/m/CAEepm%3D3sijpGr8tXdyz-7EJJZfhQHABPKEQ29gpnb7-XSy%2B%3D5A%40mail.gmail.com
Reported-by: Thomas Munro
Backpatch-through: 9.6
M doc/src/sgml/bloom.sgml
Lower privilege level of programs calling regression_main
commit : f4b67efdcbc7b9f72fddd2fc0fddc2f51eebf357
author : Andrew Dunstan <andrew@dunslane.net>
date : Sat, 20 Oct 2018 09:02:36 -0400
committer: Andrew Dunstan <andrew@dunslane.net>
date : Sat, 20 Oct 2018 09:02:36 -0400
On Windows this means that the regression tests can now safely and
successfully run as Administrator, which is useful in situations like
Appveyor. Elsewhere it's a no-op.
Backpatch to 9.5 - this is harder in earlier branches and not worth the
trouble.
Discussion: https://postgr.es/m/650b0c29-9578-8571-b1d2-550d7f89f307@2ndQuadrant.com
M src/test/regress/pg_regress.c
Client-side fixes for delayed NOTIFY receipt.
commit : ecc59e31a81e34f4e6c0d9e73d8992d13f12b051
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Fri, 19 Oct 2018 22:22:57 -0400
committer: Tom Lane <tgl@sss.pgh.pa.us>
date : Fri, 19 Oct 2018 22:22:57 -0400
PQnotifies() is defined to just process already-read data, not try to read
any more from the socket. (This is a debatable decision, perhaps, but I'm
hesitant to change longstanding library behavior.) The documentation has
long recommended calling PQconsumeInput() before PQnotifies() to ensure
that any already-arrived message would get absorbed and processed.
However, psql did not get that memo, which explains why it's not very
reliable about reporting notifications promptly.
Also, most (not quite all) callers called PQconsumeInput() just once before
a PQnotifies() loop. Taking this recommendation seriously implies that we
should do PQconsumeInput() before each call. This is more important now
that we have "payload" strings in notification messages than it was before;
that increases the probability of having more than one packet's worth
of notify messages. Hence, adjust code as well as documentation examples
to do it like that.
Back-patch to 9.5 to match related server fixes. In principle we could
probably go back further with these changes, but given lack of field
complaints I doubt it's worthwhile.
Discussion: https://postgr.es/m/CAOYf6ec-TmRYjKBXLLaGaB-jrd=mjG1Hzn1a1wufUAR39PQYhw@mail.gmail.com
M doc/src/sgml/libpq.sgml
M src/bin/psql/common.c
M src/interfaces/ecpg/ecpglib/execute.c
M src/interfaces/libpq/fe-exec.c
M src/test/examples/testlibpq2.c
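A minimal C sketch of the notification loop recommended above, calling
PQconsumeInput() before each PQnotifies() check so already-arrived messages
are absorbed promptly. Connection setup and full error handling are
omitted; this is not code from the commit.

    #include <stdio.h>
    #include <libpq-fe.h>

    static void
    drain_notifications(PGconn *conn)
    {
        PGnotify *notify;

        if (!PQconsumeInput(conn))          /* read whatever is on the socket */
            fprintf(stderr, "%s", PQerrorMessage(conn));

        while ((notify = PQnotifies(conn)) != NULL)
        {
            printf("NOTIFY \"%s\" payload \"%s\" from pid %d\n",
                   notify->relname, notify->extra, notify->be_pid);
            PQfreemem(notify);
            PQconsumeInput(conn);           /* ... and again before the next check */
        }
    }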
Server-side fix for delayed NOTIFY and SIGTERM processing.
commit : 3bdef6d211314842ad8d2415287013ddc3314024
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Fri, 19 Oct 2018 21:39:21 -0400
committer: Tom Lane <tgl@sss.pgh.pa.us>
date : Fri, 19 Oct 2018 21:39:21 -0400
Commit 4f85fde8e introduced some code that was meant to ensure that we'd
process cancel, die, sinval catchup, and notify interrupts while waiting
for client input. But there was a flaw: it supposed that the process
latch would be set upon arrival at secure_read() if any such interrupt
was pending. In reality, we might well have cleared the process latch
at some earlier point while those flags remained set -- particularly
notifyInterruptPending, which can't be handled as long as we're within
a transaction.
To fix the NOTIFY case, also attempt to process signals (except
ProcDiePending) before trying to read.
Also, if we see that ProcDiePending is set before we read, forcibly set the
process latch to ensure that we will handle that signal promptly if no data
is available. I also made it set the process latch on the way out, in case
there is similar logic elsewhere. (It remains true that we won't service
ProcDiePending here unless we need to wait for input.)
The code for handling ProcDiePending during a write needs those changes,
too.
Also be a little more careful about when to reset whereToSendOutput,
and improve related comments.
Back-patch to 9.5 where this code was added. I'm not entirely convinced
that older branches don't have similar issues, but the complaint at hand
is just about the >= 9.5 code.
Jeff Janes and Tom Lane
Discussion: https://postgr.es/m/CAOYf6ec-TmRYjKBXLLaGaB-jrd=mjG1Hzn1a1wufUAR39PQYhw@mail.gmail.com
M src/backend/libpq/be-secure.c
M src/backend/tcop/postgres.c
Sync our copy of the timezone library with IANA release tzcode2018f.
commit : 11359db354b2f62e1da433da482657ff0a5ac180
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Fri, 19 Oct 2018 19:36:34 -0400
committer: Tom Lane <tgl@sss.pgh.pa.us>
date : Fri, 19 Oct 2018 19:36:34 -0400
About half of this is purely cosmetic changes to reduce the diff between
our code and theirs, like inserting "const" markers where they have them.
The other half is tracking actual code changes in zic.c and localtime.c.
I don't think any of these represent near-term compatibility hazards, but
it seems best to stay up to date.
I also fixed longstanding bugs in our code for producing the
known_abbrevs.txt list, which by chance hadn't been exposed before,
but which resulted in some garbage output after applying the upstream
changes in zic.c. Notably, because upstream removed their old phony
transitions at the Big Bang, it's now necessary to cope with TZif files
containing no DST transition times at all.
M src/timezone/README
M src/timezone/localtime.c
M src/timezone/pgtz.h
M src/timezone/private.h
M src/timezone/strftime.c
M src/timezone/tzfile.h
M src/timezone/zic.c
Update time zone data files to tzdata release 2018f.
commit : 5777c93af49127988b32f2d3b50e9b617bf86d3f
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Fri, 19 Oct 2018 17:01:34 -0400
committer: Tom Lane <tgl@sss.pgh.pa.us>
date : Fri, 19 Oct 2018 17:01:34 -0400
DST law changes in Chile, Fiji, and Russia (Volgograd).
Historical corrections for China, Japan, Macau, and North Korea.
Note: like the previous tzdata update, this involves a depressingly
large amount of semantically-meaningless churn in tzdata.zi. That
is a consequence of upstream's data compression method assigning
unstable abbreviations to DST rulesets. I complained about that
to them last time, and this version now uses an assignment method
that pays some heed to not changing abbreviations unnecessarily.
So hopefully, that'll be better going forward.
M src/timezone/data/tzdata.zi
M src/timezone/known_abbrevs.txt
M src/timezone/tznames/America.txt
M src/timezone/tznames/Asia.txt
M src/timezone/tznames/Default
M src/timezone/tznames/Pacific.txt
Add missing quote_identifier calls for CREATE TRIGGER ... REFERENCING.
commit : 09397f0ed6c4fd3b76658058e4e914b80a509237
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Fri, 19 Oct 2018 00:50:17 -0400
committer: Tom Lane <tgl@sss.pgh.pa.us>
date : Fri, 19 Oct 2018 00:50:17 -0400
Mixed-case names for transition tables weren't dumped correctly.
Oversight in commit 8c48375e5, per bug #15440 from Karl Czajkowski.
In passing, I couldn't resist a bit of code beautification.
Back-patch to v10 where this was introduced.
Discussion: https://postgr.es/m/15440-02d1468e94d63d76@postgresql.org
M src/backend/utils/adt/ruleutils.c
Still further rethinking of build changes for macOS Mojave.
commit : 34f9944c207f9f710eb7f65d22805f42d46e479e
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Thu, 18 Oct 2018 14:55:23 -0400
committer: Tom Lane <tgl@sss.pgh.pa.us>
date : Thu, 18 Oct 2018 14:55:23 -0400
To avoid the sorts of problems complained of by Jakob Egger, it'd be
best if configure didn't emit any references to the sysroot path at all.
In the case of PL/Tcl, we can do that just by keeping our hands off the
TCL_INCLUDE_SPEC string altogether. In the case of PL/Perl, we need to
substitute -iwithsysroot for -I in the compile commands, which is easily
handled if we change to using a configure output variable that includes
the switch, not only the directory name. Since PL/Tcl and PL/Python
already do it like that, this seems like good consistency cleanup anyway.
Hence, this replaces the advice given to Perl-related extensions in commit
5e2217131; instead of writing "-I$(perl_archlibexp)/CORE", they should
just write "$(perl_includespec)". (The old way continues to work, but not
on recent macOS.)
It's still the case that configure needs to be aware of the sysroot
path internally, but that's cleaner than what we had before.
As before, back-patch to all supported versions.
Discussion: https://postgr.es/m/20840.1537850987@sss.pgh.pa.us
M configure
M configure.in
M contrib/hstore_plperl/Makefile
M src/Makefile.global.in
M src/pl/plperl/GNUmakefile
M src/template/darwin
Fix minor bug in isolationtester.
commit : 5d91d78fe95f02ad44d4222411f9b72e58dd84e3
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Wed, 17 Oct 2018 15:06:38 -0400
committer: Tom Lane <tgl@sss.pgh.pa.us>
date : Wed, 17 Oct 2018 15:06:38 -0400
If the lock wait query failed, isolationtester would report the
PQerrorMessage from some other connection, meaning there would be
no message or an unrelated one. This seems like a pretty unlikely
occurrence, but if it did happen, this bug could make it really
difficult/confusing to figure out what happened. That seems to
justify patching all the way back.
In passing, clean up another place where the "wrong" conn was used
for an error report. That one's not actually buggy because it's
a different alias for the same connection, but it's still confusing
to the reader.
M src/test/isolation/isolationtester.c
Improve tzparse's handling of TZDEFRULES ("posixrules") zone data.
commit : 312f632005f36e8735d22e8d989a94ff76329493
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Wed, 17 Oct 2018 12:26:48 -0400
committer: Tom Lane <tgl@sss.pgh.pa.us>
date : Wed, 17 Oct 2018 12:26:48 -0400
In the IANA timezone code, tzparse() always tries to load the zone
file named by TZDEFRULES ("posixrules"). Previously, we'd hacked
that logic to skip the load in the "lastditch" code path, which we use
only to initialize the default "GMT" zone during GUC initialization.
That's critical for a couple of reasons: since we do not support leap
seconds, we *must not* allow "GMT" to have leap seconds, and since this
case runs before the GUC subsystem is fully alive, we'd really rather
not take the risk of pg_open_tzfile throwing any errors.
However, that still left the code reading TZDEFRULES on every other
call, something we'd noticed to the extent of having added code to cache
the result so that it was only done once per process rather than many times.
Andres Freund complained about the static data space used up for the
cache; but as long as the logic was like this, there was no point in
trying to get rid of that space.
We can improve matters by looking a bit more closely at what the IANA
code actually needs the TZDEFRULES data for. One thing it does is
that if "posixrules" is a leap-second-aware zone, the leap-second
behavior will be absorbed into every POSIX-style zone specification.
However, that's a behavior we'd really prefer to do without, since
for our purposes the end effect is to render every POSIX-style zone
name unsupported. Otherwise, the TZDEFRULES data is used only if
the POSIX zone name specifies DST but doesn't include a transition
date rule (e.g., "EST5EDT" rather than "EST5EDT,M3.2.0,M11.1.0").
That is a minority case for our purposes --- in particular, it
never happens when tzload() invokes tzparse() to interpret a
transition date rule string found in a tzdata zone file.
Hence, if we legislate that we're going to ignore leap-second data
from "posixrules", we can postpone the TZDEFRULES load into the path
where we actually need to substitute for a missing date rule string.
That means it will never happen at all in common scenarios, making it
reasonable to dynamically allocate the cache space when it does happen.
Even when the data is already loaded, this saves some cycles in the
common code path since we avoid a memcpy of 23KB or so. And, IMO at
least, this is a less ugly hack on the IANA logic than what we had
before, since it's not messing with the lastditch-vs-regular code paths.
Back-patch to all supported branches, not so much because this is a
critical change as that I want to keep all our copies of the IANA
timezone code in sync.
Discussion: https://postgr.es/m/20181015200754.7y7zfuzsoux2c4ya@alap3.anarazel.de
M src/timezone/README
M src/timezone/localtime.c
Back off using -isysroot on Darwin.
commit : ee6c08b01b9b803715c92ac1433752f962eebfde
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Tue, 16 Oct 2018 16:27:15 -0400
committer: Tom Lane <tgl@sss.pgh.pa.us>
date : Tue, 16 Oct 2018 16:27:15 -0400
Rethink the solution applied in commit 5e2217131 to get PL/Tcl to
build on macOS Mojave. I feared that adding -isysroot globally might
have undesirable consequences, and sure enough Jakob Egger reported
one: it complicates building extensions with a different Xcode version
than was used for the core server. (I find that a risky proposition
in general, but apparently it works most of the time, so we shouldn't
break it if we don't have to.)
We'd already adopted the solution for PL/Perl of inserting the sysroot
path directly into the -I switches used to find Perl's headers, and we
can do the same thing for PL/Tcl by changing the -iwithsysroot switch
that Apple's tclConfig.sh reports. This restricts the risks to PL/Perl
and PL/Tcl themselves and directly-dependent extensions, which is a lot
more pleasing in general than a global -isysroot switch.
Along the way, tighten the test to see if we need to inject the sysroot
path into $perl_includedir, as I'd speculated about upthread but not
gotten round to doing.
As before, back-patch to all supported versions.
Discussion: https://postgr.es/m/20840.1537850987@sss.pgh.pa.us
M configure
M configure.in
M src/template/darwin
Avoid rare race condition in privileges.sql regression test.
commit : 7bee1d520d12e5880a733e4b1c1fb53bf69df03a
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Tue, 16 Oct 2018 13:56:58 -0400
committer: Tom Lane <tgl@sss.pgh.pa.us>
date : Tue, 16 Oct 2018 13:56:58 -0400
We created a temp table, then switched to a new session, leaving
the old session to clean up its temp objects in background. If that
took long enough, the eventual attempt to drop the user that owns
the temp table could fail, as exhibited today by sidewinder.
Fix by dropping the temp table explicitly when we're done with it.
It's been like this for quite some time, so back-patch to all
supported branches.
Report: https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=sidewinder&dt=2018-10-16%2014%3A45%3A00
M src/test/regress/expected/privileges.out
M src/test/regress/sql/privileges.sql
Make PostgresNode.pm's poll_query_until() more chatty about failures.
commit : 0a576cd2a9ab8267720709481cdbbd5e06988821
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Tue, 16 Oct 2018 12:27:14 -0400
committer: Tom Lane <tgl@sss.pgh.pa.us>
date : Tue, 16 Oct 2018 12:27:14 -0400
Reporting only the stderr is unhelpful when the problem is that the
server output we're getting doesn't match what was expected. So we
should report the query output too; and just for good measure, let's
print the query we used and the output we expected.
Back-patch to 9.5 where poll_query_until was introduced.
Discussion: https://postgr.es/m/17913.1539634756@sss.pgh.pa.us
M src/test/perl/PostgresNode.pm
Improve stability of recently-added regression test case.
commit : afb5fb290e90ecd0ef220810402fee5dbb142585
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Tue, 16 Oct 2018 12:01:19 -0400
committer: Tom Lane <tgl@sss.pgh.pa.us>
date : Tue, 16 Oct 2018 12:01:19 -0400
Commit b5febc1d1 added a contrib/btree_gist test case that has been
observed to fail in the buildfarm as a result of background auto-analyze
updating stats and changing the selected plan. Forestall that by
forcibly analyzing in foreground, instead. The new plan choice is
just as good for our purposes, since we really only care that an
index-only plan does not get selected.
Back-patch to 9.5, like the previous patch.
Discussion: https://postgr.es/m/14643.1539629304@sss.pgh.pa.us
M contrib/btree_gist/expected/inet.out
M contrib/btree_gist/sql/inet.sql
Avoid statically allocating gmtsub()'s timezone workspace.
commit : d64a54fb9c81c644879e095ab7de6b4c64a6896b
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Tue, 16 Oct 2018 11:50:18 -0400
committer: Tom Lane <tgl@sss.pgh.pa.us>
date : Tue, 16 Oct 2018 11:50:18 -0400
localtime.c's "struct state" is a rather large object, ~23KB. We were
statically allocating one for gmtsub() to use to represent the GMT
timezone, even though that function is not at all heavily used and is
never reached in most backends. Let's malloc it on-demand, instead.
This does pose the question of how to handle a malloc failure, but
there's already a well-defined error report convention here, ie
set errno and return NULL.
We have but one caller of pg_gmtime in HEAD, and two in back branches,
neither of which were troubling to check for error. Make them do so.
The possible errors are sufficiently unlikely (out-of-range timestamp,
and now malloc failure) that I think elog() is adequate.
Back-patch to all supported branches to keep our copies of the IANA
timezone code in sync. This particular change is in a stanza that
already differs from upstream, so it's a wash for maintenance purposes
--- but only as long as we keep the branches the same.
Discussion: https://postgr.es/m/20181015200754.7y7zfuzsoux2c4ya@alap3.anarazel.de
M src/backend/utils/adt/nabstime.c
M src/backend/utils/adt/timestamp.c
M src/timezone/localtime.c
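A minimal C sketch of the on-demand allocation pattern described above: the
large workspace is only allocated when first needed, and on failure we
follow the stated error convention (set errno and return NULL). Names are
illustrative, not the actual localtime.c symbols.

    #include <errno.h>
    #include <stdlib.h>

    struct state;                       /* ~23KB timezone workspace */

    static struct state *gmt_state = NULL;

    static struct state *
    get_gmt_state(size_t state_size)
    {
        if (gmt_state == NULL)
        {
            gmt_state = malloc(state_size);
            if (gmt_state == NULL)
            {
                errno = ENOMEM;         /* report failure per convention */
                return NULL;
            }
            /* ... initialize *gmt_state for the GMT zone ... */
        }
        return gmt_state;
    }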
Check for stack overrun in standard_ProcessUtility().
commit : 9d4212afa100ca3a03bb5d12e1caf6c205457117
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Mon, 15 Oct 2018 14:01:38 -0400
committer: Tom Lane <tgl@sss.pgh.pa.us>
date : Mon, 15 Oct 2018 14:01:38 -0400
ProcessUtility can recurse, and indeed can be driven to infinite
recursion, so it ought to have a check_stack_depth() call. This
covers the reported bug (portal trying to execute itself) and a bunch
of other cases that could perhaps arise somewhere.
Per bug #15428 from Malthe Borch. Back-patch to all supported branches.
Discussion: https://postgr.es/m/15428-b3c2915ec470b033@postgresql.org
M src/backend/tcop/utility.c
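A brief C sketch of the guard added by this fix: an entry point that can
recurse into itself should verify stack depth first. process_utility_stmt()
is an illustrative stand-in for standard_ProcessUtility(), not the real
function.

    #include "postgres.h"
    #include "miscadmin.h"
    #include "nodes/nodes.h"

    static void
    process_utility_stmt(Node *parsetree)
    {
        check_stack_depth();        /* errors out if recursion goes too deep */

        /* ... dispatch on nodeTag(parsetree), possibly recursing ... */
    }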
contrib/bloom documentation improvement
commit : 872b6f72d419a7fb2bc18236ee70175e8ff2abf0
author : Alexander Korotkov <akorotkov@postgresql.org>
date : Mon, 15 Oct 2018 00:40:17 +0300
committer: Alexander Korotkov <akorotkov@postgresql.org>
date : Mon, 15 Oct 2018 00:40:17 +0300
This commit documents rounding of the "length" parameter and the absence of
support for unique indexes and for searching NULLs. Backpatch to 9.6, where
contrib/bloom was introduced.
Discussion: https://postgr.es/m/CAF4Au4wPQQ7EHVSnzcLjsbY3oLSzVk6UemZLD1Sbmwysy3R61g%40mail.gmail.com
Author: Oleg Bartunov with minor editorialization by me
Backpatch-through: 9.6
M doc/src/sgml/bloom.sgml
Avoid duplicate XIDs at recovery when building initial snapshot
commit : 8384ff42486e4326a5e50dc97c16d6aa0e22cd38
author : Michael Paquier <michael@paquier.xyz>
date : Sun, 14 Oct 2018 22:23:35 +0900
committer: Michael Paquier <michael@paquier.xyz>
date : Sun, 14 Oct 2018 22:23:35 +0900
On a primary, sets of XLOG_RUNNING_XACTS records are generated on a
periodic basis to allow recovery to build the initial state of
transactions for a hot standby. The set of transaction IDs is created
by scanning all the entries in ProcArray. However, that logic never
accounted for the fact that two-phase transactions finishing their prepare
step can put ProcArray in a state where there are two entries with the
same transaction ID: one for the initial transaction, which gets cleared
when the prepare finishes, and a second, dummy entry that tracks the
transaction as still running after the prepare finishes. This ensures a
continuous presence of the transaction, so that callers of, for example,
TransactionIdIsInProgress() are always able to see it as alive.
So, if XLOG_RUNNING_XACTS takes a standby snapshot while a two-phase
transaction finishes its prepare step, the record can end up with duplicated
XIDs, which is a state expected by design. If this record gets applied
on a standby to initialize its recovery state, it would simply fail,
so the odds of facing this failure are very low in practice. It would
be tempting to change the generation of XLOG_RUNNING_XACTS so that
duplicates are removed at the source, but this requires holding
ProcArrayLock for longer, which would impact all workloads, particularly
those using two-phase transactions heavily.
XLOG_RUNNING_XACTS is actually used only to initialize the standby
state at recovery, so the solution taken instead is to discard
duplicates when applying the initial snapshot.
Diagnosed-by: Konstantin Knizhnik
Author: Michael Paquier
Discussion: https://postgr.es/m/0c96b653-4696-d4b4-6b5d-78143175d113@postgrespro.ru
Backpatch-through: 9.3
M src/backend/storage/ipc/procarray.c
Remove abstime, reltime, tinterval tables from old regression databases.
commit : 9320263ae7cac206c2e82e00e42dbcddce4e8dfb
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Fri, 12 Oct 2018 19:33:56 -0400
committer: Tom Lane <tgl@sss.pgh.pa.us>
date : Fri, 12 Oct 2018 19:33:56 -0400
In the back branches, drop these tables after the regression tests are
done with them. This fixes failures of cross-branch pg_upgrade testing
caused by these types having been removed in v12. We do lose the ability
to test dump/restore behavior with these types in the back branches, but
the actual loss of code coverage seems to be nil given that there's nothing
very special about these types.
Discussion: https://postgr.es/m/20181009192237.34wjp3nmw7oynmmr@alap3.anarazel.de
M src/test/regress/expected/horology.out
M src/test/regress/expected/sanity_check.out
M src/test/regress/sql/horology.sql
Fix logical decoding error when system table w/ toast is repeatedly rewritten.
commit : 532e3b5b3dd278f77baa5f0b9483bf2aba226399
author : Andres Freund <andres@anarazel.de>
date : Wed, 10 Oct 2018 13:53:02 -0700
committer: Andres Freund <andres@anarazel.de>
date : Wed, 10 Oct 2018 13:53:02 -0700
Repeatedly rewriting a mapped catalog table with VACUUM FULL or
CLUSTER could cause logical decoding to fail with:
ERROR, "could not map filenode \"%s\" to relation OID"
To trigger the problem the rewritten catalog had to have live tuples
with toasted columns.
The problem was triggered because, during catalog table rewrites, the
heap_insert() check that prevents logical decoding information from being
emitted for system catalogs failed to treat the new heap's toast table
as a system catalog (the new heap is not recognized as a catalog table
via RelationIsLogicallyLogged()). The relmapper, in contrast to the
normal catalog contents, does not contain historical information. After
a single rewrite of a mapped table the new relation is known to the
relmapper, but if the table is rewritten twice before logical decoding
occurs, the relfilenode cannot be mapped to a relation anymore, which
then leads us to error out. This only happens for toast tables, because
the main table contents aren't re-inserted with heap_insert().
The fix is simple: add a new heap_insert() flag that prevents logical
decoding information from being emitted, and accept during decoding
that there might not be tuple data for toast tables.
Unfortunately that does not fix pre-existing logical decoding
errors. Doing so would require not throwing an error when a filenode
cannot be mapped to a relation during decoding, and that seems too
likely to hide bugs. If it's crucial to fix decoding for an existing
slot, temporarily changing the ERROR in ReorderBufferCommit() to a
WARNING appears to be the best fix.
Author: Andres Freund
Discussion: https://postgr.es/m/20180914021046.oi7dm4ra3ot2g2kt@alap3.anarazel.de
Backpatch: 9.4-, where logical decoding was introduced
M contrib/test_decoding/expected/rewrite.out
M contrib/test_decoding/sql/rewrite.sql
M src/backend/access/heap/heapam.c
M src/backend/access/heap/rewriteheap.c
M src/backend/replication/logical/reorderbuffer.c
M src/include/access/heapam.h
Silence compiler warning in Assert()
commit : 6b6b59b38e4407c7cc2ed860bfafc721fe24ec7e
author : Alvaro Herrera <alvherre@alvh.no-ip.org>
date : Mon, 8 Oct 2018 10:37:21 -0300
committer: Alvaro Herrera <alvherre@alvh.no-ip.org>
date : Mon, 8 Oct 2018 10:37:21 -0300
gcc 6.3 does not whine about this mistake I made in 39808e8868c8 but
evidently lots of other compilers do, according to Michael Paquier,
Peter Eisentraut, Arthur Zakirov, Tomas Vondra.
Discussion: too many to list
M src/backend/commands/event_trigger.c
Add regression test for ATTACH PARTITION
commit : afe9b9e68afb93b6831a939a7a18973ee5286d68
author : Michael Paquier <michael@paquier.xyz>
date : Mon, 8 Oct 2018 00:06:54 +0900
committer: Michael Paquier <michael@paquier.xyz>
date : Mon, 8 Oct 2018 00:06:54 +0900
This test case uses a SQL function as a partitioning operator, whose
evaluation results in the table's relcache being rebuilt partway
through the execution of an ATTACH PARTITION command.
It is extracted from 39808e8, which fixed a bug where this test case
failed on master and REL_11_STABLE, but passed on REL_10_STABLE. The
partitioning code has changed a lot during v11 development, so this
makes sure that any patch applied to REL_10_STABLE fixing a
partition-related bug does not break it again.
Author: Amit Langote
Reviewed-by: Michaël Paquier, Álvaro Herrera
Discussion: https://postgr.es/m/CAKcux6=nTz9KSfTr_6Z2mpzLJ_09JN-rK6=dWic6gGyTSWueyQ@mail.gmail.com
M src/test/regress/expected/alter_table.out
M src/test/regress/sql/alter_table.sql
Fix event triggers for partitioned tables
commit : 101b21ead356023c0b86d28dac3d6c08828c77b5
author : Alvaro Herrera <alvherre@alvh.no-ip.org>
date : Sat, 6 Oct 2018 19:17:46 -0300
committer: Alvaro Herrera <alvherre@alvh.no-ip.org>
date : Sat, 6 Oct 2018 19:17:46 -0300
Index DDL cascading on partitioned tables introduced a way for ALTER
TABLE to be called reentrantly. This caused an important deficiency
in event trigger support to be exposed: on exiting the reentrant call,
the alter table state object was clobbered, causing a crash when the
outer alter table tries to finalize its processing. Fix the crash by
creating a stack of event trigger state objects. There are still ways
to cause things to misbehave (and probably other crashers) with more
elaborate tricks, but at least it now doesn't crash in the obvious
scenario.
Backpatch to 9.5, where DDL deparsing of event triggers was introduced.
Reported-by: Marco Slot
Authors: Michaël Paquier, Álvaro Herrera
Discussion: https://postgr.es/m/CANNhMLCpi+HQ7M36uPfGbJZEQLyTy7XvX=5EFkpR-b1bo0uJew@mail.gmail.com
M src/backend/catalog/index.c
M src/backend/commands/event_trigger.c
M src/backend/commands/indexcmds.c
M src/backend/commands/tablecmds.c
M src/backend/commands/view.c
M src/include/catalog/index.h
M src/include/tcop/deparse_utility.h
M src/test/regress/expected/event_trigger.out
M src/test/regress/sql/event_trigger.sql
Propagate xactStartTimestamp and stmtStartTimestamp to parallel workers.
commit : 58454d0bb07c222fae19c4ced38beef36e27015b
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Sat, 6 Oct 2018 12:00:10 -0400
committer: Tom Lane <tgl@sss.pgh.pa.us>
date : Sat, 6 Oct 2018 12:00:10 -0400
Previously, a worker process would establish values for these based on
its own start time. In v10 and up, this can trivially be shown to cause
misbehavior of transaction_timestamp(), timestamp_in(), and related
functions which are (perhaps unwisely?) marked parallel-safe. It seems
likely that other behaviors might diverge from what happens in the parent
as well.
It's not as trivial to demonstrate problems in 9.6 or 9.5, but I'm sure
it's still possible, so back-patch to all branches containing parallel
worker infrastructure.
In HEAD only, mark now() and statement_timestamp() as parallel-safe
(other affected functions already were). While in theory we could
still squeeze that change into v11, it doesn't seem important enough
to force a last-minute catversion bump.
Konstantin Knizhnik, whacked around a bit by me
Discussion: https://postgr.es/m/6406dbd2-5d37-4cb6-6eb2-9c44172c7e7c@postgrespro.ru
M src/backend/access/transam/parallel.c
M src/backend/access/transam/xact.c
M src/include/access/xact.h
Allow btree comparison functions to return INT_MIN.
commit : 142cfd3cd82efd4d661f8a8d88960602a439a744
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Fri, 5 Oct 2018 16:01:29 -0400
committer: Tom Lane <tgl@sss.pgh.pa.us>
date : Fri, 5 Oct 2018 16:01:29 -0400
Historically we forbade datatype-specific comparison functions from
returning INT_MIN, so that it would be safe to invert the sort order
just by negating the comparison result. However, this was never
really safe for comparison functions that directly return the result
of memcmp(), strcmp(), etc, as POSIX doesn't place any such restriction
on those library functions. Buildfarm results show that at least on
recent Linux on s390x, memcmp() actually does return INT_MIN sometimes,
causing sort failures.
The agreed-on answer is to remove this restriction and fix relevant
call sites to not make such an assumption; code such as "res = -res"
should be replaced by "INVERT_COMPARE_RESULT(res)". The same is needed
in a few places that just directly negated the result of memcmp or
strcmp.
To help find places having this problem, I've also added a compile option
to nbtcompare.c that causes some of the commonly used comparators to
return INT_MIN/INT_MAX instead of their usual -1/+1. It'd likely be
a good idea to have at least one buildfarm member running with
"-DSTRESS_SORT_INT_MIN". That's far from a complete test of course,
but it should help to prevent fresh introductions of such bugs.
This is a longstanding portability hazard, so back-patch to all supported
branches.
Discussion: https://postgr.es/m/20180928185215.ffoq2xrq5d3pafna@alap3.anarazel.de
M contrib/ltree/ltree_op.c
M contrib/pgcrypto/imath.c
M src/backend/access/nbtree/nbtcompare.c
M src/backend/access/nbtree/nbtsearch.c
M src/backend/access/nbtree/nbtutils.c
M src/backend/executor/nodeGatherMerge.c
M src/backend/executor/nodeIndexscan.c
M src/backend/executor/nodeMergeAppend.c
M src/bin/pg_rewind/filemap.c
M src/include/access/nbtree.h
M src/include/c.h
M src/include/utils/sortsupport.h
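A small C sketch of the hazard and the fix described above. Negating a
comparator result is undefined when the result is INT_MIN; the macro below
is modeled on the INVERT_COMPARE_RESULT macro this commit adds to c.h and
is reproduced here only for illustration.

    #include <limits.h>
    #include <string.h>

    #define INVERT_COMPARE_RESULT(var) ((var) = ((var) < 0) ? 1 : -(var))

    static int
    compare_desc(const void *a, const void *b, size_t len)
    {
        int res = memcmp(a, b, len);    /* may legitimately return INT_MIN */

        /* res = -res;      -- unsafe: overflows when res == INT_MIN */
        INVERT_COMPARE_RESULT(res);
        return res;
    }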
MAXALIGN the target address where we store flattened value.
commit : 9718c93f532c3cfb6c39679452149dba711d12af
author : Amit Kapila <akapila@postgresql.org>
date : Wed, 3 Oct 2018 09:38:07 +0530
committer: Amit Kapila <akapila@postgresql.org>
date : Wed, 3 Oct 2018 09:38:07 +0530
The API (EOH_flatten_into) that flattens the expanded value representation
expects the target address to be maxaligned. All its usages adhere to that
principle except when serializing datums for parallel query. Fix that
usage.
Diagnosed-by: Tom Lane
Author: Tom Lane and Amit Kapila
Backpatch-through: 9.6
Discussion: https://postgr.es/m/11629.1536550032@sss.pgh.pa.us
M src/backend/utils/adt/datum.c
M src/test/regress/expected/select_parallel.out
M src/test/regress/sql/select_parallel.sql
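A minimal C sketch of the alignment requirement above: when writing an
expanded value into a caller-supplied buffer, round the write position up
to a MAXALIGN boundary first, since the flatten API assumes a maxaligned
target. The helper function and its name are illustrative, not the
committed code.

    #include "postgres.h"   /* for MAXALIGN and Size */

    static char *
    reserve_flatten_target(char *cursor, Size needed)
    {
        char *start = (char *) MAXALIGN(cursor);    /* round up, don't just cast */

        /* ... EOH_flatten_into() would write the flattened value at start ... */
        return start + needed;
    }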
Set snprintf.c's maximum number of NL arguments to be 31.
commit : 6483381a4d0b7aab74e71579d92daf490cc54fb2
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Tue, 2 Oct 2018 12:41:28 -0400
committer: Tom Lane <tgl@sss.pgh.pa.us>
date : Tue, 2 Oct 2018 12:41:28 -0400
Previously, we used the platform's NL_ARGMAX if any, otherwise 16.
The trouble with this is that the platform value is hugely variable,
ranging from the POSIX-minimum 9 to as much as 64K on recent FreeBSD.
Values of more than a dozen or two have no practical use and slow down
the initialization of the argtypes array. Worse, they cause snprintf.c
to consume far more stack space than was the design intention, possibly
resulting in stack-overflow crashes.
Standardize on 31, which is comfortably more than we need (it looks like
no existing translatable message has more than about 10 parameters).
I chose that, not 32, to make the array sizes powers of 2, for some
possible small gain in speed of the memset.
The lack of reported crashes suggests that the set of platforms we
use snprintf.c on (in released branches) may have no overlap with
the set where NL_ARGMAX has unreasonably large values. But that's
not entirely clear, so back-patch to all supported branches.
Per report from Mateusz Guzik (via Thomas Munro).
Discussion: https://postgr.es/m/CAEepm=3VF=PUp2f8gU8fgZB22yPE_KBS0+e1AHAtQ=09schTHg@mail.gmail.com
M src/port/snprintf.c
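An illustration of the numbered ("%n$") format arguments whose per-call
argument table this change caps at 31 entries. This uses POSIX printf
syntax (the feature snprintf.c implements for translated messages); it is
an example, not code from the commit.

    #include <stdio.h>

    int
    main(void)
    {
        /* English message order ... */
        printf("%1$s has %2$d rows\n", "mytable", 42);
        /* ... a translation may reference the same arguments in another order */
        printf("%2$d Zeilen hat %1$s\n", "mytable", 42);
        return 0;
    }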
Fix corner-case failures in has_foo_privilege() family of functions.
commit : 7eed72333731fabd0fc816c0dfbadebe6d7d3063
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Tue, 2 Oct 2018 11:54:12 -0400
committer: Tom Lane <tgl@sss.pgh.pa.us>
date : Tue, 2 Oct 2018 11:54:12 -0400
The variants of these functions that take numeric inputs (OIDs or
column numbers) are supposed to return NULL rather than failing
on bad input; this rule reduces problems with snapshot skew when
queries apply the functions to all rows of a catalog.
has_column_privilege() had careless handling of the case where the
table OID didn't exist. You might get something like this:
select has_column_privilege(9999,'nosuchcol','select');
ERROR: column "nosuchcol" of relation "(null)" does not exist
or you might get a crash, depending on the platform's printf's response
to a null string pointer.
In addition, while applying the column-number variant to a dropped
column returned NULL as desired, applying the column-name variant
did not:
select has_column_privilege('mytable','........pg.dropped.2........','select');
ERROR: column "........pg.dropped.2........" of relation "mytable" does not exist
It seems better to make this case return NULL as well.
Also, the OID-accepting variants of has_foreign_data_wrapper_privilege,
has_server_privilege, and has_tablespace_privilege didn't follow the
principle of returning NULL for nonexistent OIDs. Superusers got TRUE,
everybody else got an error.
Per investigation of Jaime Casanova's report of a new crash in HEAD.
These behaviors have been like this for a long time, so back-patch to
all supported branches.
Patch by me; thanks to Stephen Frost for discussion and review
Discussion: https://postgr.es/m/CAJGNTeP=-6Gyqq5TN9OvYEydi7Fv1oGyYj650LGTnW44oAzYCg@mail.gmail.com
M src/backend/utils/adt/acl.c
M src/test/regress/expected/privileges.out
M src/test/regress/sql/privileges.sql
Fix documentation of pgrowlocks using "lock_type" instead of "modes"
commit : 5dd7f5cecf2c9f4f90d5031671acf31e4292a271
author : Michael Paquier <michael@paquier.xyz>
date : Tue, 2 Oct 2018 16:35:25 +0900
committer: Michael Paquier <michael@paquier.xyz>
date : Tue, 2 Oct 2018 16:35:25 +0900
The example used in the documentation is outdated as well. This is an
oversight from 0ac5ad5, which bumped up pgrowlocks but forgot some bits
of the documentation.
Reported-by: Chris Wilson
Discussion: https://postgr.es/m/153838692816.2950.12001142346234155699@wrigleys.postgresql.org
Backpatch-through: 9.3
M doc/src/sgml/pgrowlocks.sgml
Fix tuple_data_split() to not open a relation without any lock.
commit : 370b28ccd430ed90125e8a2e016c617658f11b9f
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Mon, 1 Oct 2018 11:51:07 -0400
committer: Tom Lane <tgl@sss.pgh.pa.us>
date : Mon, 1 Oct 2018 11:51:07 -0400
contrib/pageinspect's tuple_data_split() function thought it could get
away with opening the referenced relation with NoLock. In practice
there's no guarantee that the current session holds any lock on that
rel (even if we just read a page from it), so that this is unsafe.
Switch to using AccessShareLock. Also, postpone closing the relation,
so that we needn't copy its tupdesc. Also, fix unsafe use of
att_isnull() for attributes past the end of the tuple.
Per testing with a patch that complains if we open a relation without
holding any lock on it. I don't plan to back-patch that patch, but we
should close the holes it identifies in all supported branches.
Discussion: https://postgr.es/m/2038.1538335244@sss.pgh.pa.us
M contrib/pageinspect/heapfuncs.c
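A short C sketch of the safer pattern adopted here: hold AccessShareLock
for as long as the relation's tuple descriptor is in use, instead of
opening with NoLock and copying the tupdesc. Header locations vary by
branch (relation_open lives in access/heapam.h before v12); the function
itself is illustrative.

    #include "postgres.h"
    #include "access/heapam.h"      /* relation_open/relation_close (pre-v12) */
    #include "storage/lockdefs.h"
    #include "utils/rel.h"

    static void
    inspect_relation(Oid relid)
    {
        Relation rel = relation_open(relid, AccessShareLock);

        /* ... use RelationGetDescr(rel) while the relation stays open ... */

        relation_close(rel, AccessShareLock);
    }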
Fix ALTER COLUMN TYPE to not open a relation without any lock.
commit : db01fc97ad80e6e29dd5a2d5736cfd3e484f9a30
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Mon, 1 Oct 2018 11:39:14 -0400
committer: Tom Lane <tgl@sss.pgh.pa.us>
date : Mon, 1 Oct 2018 11:39:14 -0400
If the column being modified is referenced by a foreign key constraint
of another table, ALTER TABLE would open the other table (to re-parse
the constraint's definition) without having first obtained a lock on it.
This was evidently intentional, but that doesn't mean it's really safe.
It's especially not safe in 9.3, which pre-dates use of MVCC scans for
catalog reads, but even in current releases it doesn't seem like a good
idea.
We know we'll need AccessExclusiveLock shortly to drop the obsoleted
constraint, so just get that a little sooner to close the hole.
Per testing with a patch that complains if we open a relation without
holding any lock on it. I don't plan to back-patch that patch, but we
should close the holes it identifies in all supported branches.
Discussion: https://postgr.es/m/2038.1538335244@sss.pgh.pa.us
M src/backend/commands/tablecmds.c
Fix detection of the result type of strerror_r().
commit : 0aa1e0ef167d05a9ec66958b8784d72becf9303d
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Sun, 30 Sep 2018 16:24:56 -0400
committer: Tom Lane <tgl@sss.pgh.pa.us>
date : Sun, 30 Sep 2018 16:24:56 -0400
The method we've traditionally used, of redeclaring strerror_r() to
see if the compiler complains of inconsistent declarations, turns out
not to work reliably because some compilers only report a warning,
not an error. Amazingly, this has gone undetected for years, even
though it certainly breaks our detection of whether strerror_r
succeeded.
Let's instead test whether the compiler will take the result of
strerror_r() as a switch() argument. It's possible this won't
work universally either, but it's the best idea I could come up with
on the spur of the moment.
Back-patch of commit 751f532b9. Buildfarm results indicate that only
icc-on-Linux actually has an issue here; perhaps the lack of field
reports indicates that people don't build PG for production that way.
Discussion: https://postgr.es/m/10877.1537993279@sss.pgh.pa.us
M config/c-library.m4
M configure
M src/include/pg_config.h.in
M src/include/pg_config.h.win32
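A C sketch of the idea behind the new configure probe: the GNU variant of
strerror_r() returns char *, while the XSI variant returns int, and only an
integer result is legal as a switch() argument. So a snippet like this
compiles only where the int-returning variant is in effect. This is an
illustration of the test's principle, not the configure script's code.

    #include <string.h>

    int
    probe_strerror_r(void)
    {
        char buf[256];

        switch (strerror_r(0, buf, sizeof(buf)))    /* fails to compile if char * */
        {
            case 0:
                break;
            default:
                break;
        }
        return 0;
    }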
Fix assertion failure when updating full_page_writes for checkpointer.
commit : 8256d7ae9ee3f8fec4bbe763b042b62d684e623f
author : Amit Kapila <akapila@postgresql.org>
date : Fri, 28 Sep 2018 12:31:48 +0530
committer: Amit Kapila <akapila@postgresql.org>
date : Fri, 28 Sep 2018 12:31:48 +0530
When the checkpointer receives a SIGHUP signal to update its configuration,
it may need to update the shared memory for full_page_writes and write a
WAL record for it. It is quite possible that the XLOG machinery has not
been initialized by that time, which leads to an assertion failure while
doing so. The fix is to allow the initialization of the XLOG machinery
outside the critical section.
This bug was introduced by commit 2c03216d83, which added the XLOG
machinery initialization in the RecoveryInProgress() code path.
Reported-by: Dilip Kumar
Author: Dilip Kumar
Reviewed-by: Michael Paquier and Amit Kapila
Backpatch-through: 9.5
Discussion: https://postgr.es/m/CAFiTN-u4BA8KXcQUWDPNgaKAjDXC=C2whnzBM8TAcv=stckYUw@mail.gmail.com
M src/backend/access/transam/xlog.c
Fix WAL recycling on standbys depending on archive_mode
commit : 05b9c58da141f5be07556b532c58d7ce84d10d72
author : Michael Paquier <michael@paquier.xyz>
date : Fri, 28 Sep 2018 11:55:55 +0900
committer: Michael Paquier <michael@paquier.xyz>
date : Fri, 28 Sep 2018 11:55:55 +0900
A restart point or a checkpoint recycling WAL segments treats segments
marked with neither ".done" (archiving is done) nor ".ready" (segment is
ready to be archived) in archive_status the same way for archive_mode
being "on" or "always". While for a primary this is fine, a standby
running a restart point with archive_mode = on would try to mark such a
segment as ready for archiving, which is something that will never
happen except after the standby is promoted.
Note that this problem applies only to WAL segments coming from the
local pg_wal the first time archive recovery is run. Segments that are
part of a self-contained base backup are the most common case where this
could happen; however, even in this case the .done markers would normally
be part of the backup. Segments recovered from an archive are
marked as .ready or .done by the startup process, and segments finished
streaming are marked as such by the WAL receiver, so they are handled
already.
Reported-by: Haruka Takatsuka
Author: Michael Paquier
Discussion: https://postgr.es/m/15402-a453c90ed4cf88b2@postgresql.org
Backpatch-through: 9.5, where archive_mode = always has been added.
M src/backend/access/transam/xlogarchive.c
Fix assorted bugs in pg_get_partition_constraintdef().
commit : dff3f06dc95e3c8dd6debbec3282556fbcb3f9ee
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Thu, 27 Sep 2018 18:15:06 -0400
committer: Tom Lane <tgl@sss.pgh.pa.us>
date : Thu, 27 Sep 2018 18:15:06 -0400
It failed if passed a nonexistent relation OID, or one that was a non-heap
relation, because of blindly applying heap_open to a user-supplied OID.
This is not OK behavior for a SQL-exposed function; we have a project
policy that we should return NULL in such cases. Moreover, since
pg_get_partition_constraintdef ought now to work on indexes, restricting
it to heaps is flat wrong anyway.
The underlying function generate_partition_qual() wasn't on board with
indexes having partition quals either, nor for that matter with rels
having relispartition set but yet null relpartbound. (One wonders
whether the person who wrote the function comment blocks claiming that
these functions allow a missing relpartbound had ever tested it.)
Fix by testing relispartition before opening the rel, and by using
relation_open not heap_open. (If any other relkinds ever grow the
ability to have relispartition set, the code will work with them
automatically.) Also, don't reject null relpartbound in
generate_partition_qual.
Back-patch to v11, and all but the null-relpartbound change to v10.
(It's not really necessary to change generate_partition_qual at all
in v10, but I thought s/heap_open/relation_open/ would be a good
idea anyway just to keep the code in sync with later branches.)
Per report from Justin Pryzby.
Discussion: https://postgr.es/m/20180927200020.GJ776@telsasoft.com
M src/backend/catalog/partition.c
M src/backend/utils/cache/lsyscache.c
M src/include/utils/lsyscache.h
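For illustration of the changed behavior (a sketch, not part of the commit;
the partition name below is hypothetical):
    -- for a leaf partition, this returns its implicit partition constraint
    SELECT pg_get_partition_constraintdef('measurement_y2018m01'::regclass);
    -- for a nonexistent relation OID, or a relation that is not a partition,
    -- the function now returns NULL instead of raising an error
    SELECT pg_get_partition_constraintdef(0::regclass);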
Recurse to sequences on ownership change for all relkinds
commit : 5f6b0e6d69f1087847c8456b3f69761c950d52c6
author : Peter Eisentraut <peter_e@gmx.net>
date : Thu, 14 Jun 2018 23:22:14 -0400
committer: Peter Eisentraut <peter_e@gmx.net>
date : Thu, 14 Jun 2018 23:22:14 -0400
When a table's ownership is changed, we must apply that also to any owned
sequences. (Otherwise, it would result in a situation that cannot be
restored, because linked sequences must have the same owner as the
table.) But this was previously only applied to regular tables and
materialized views, even though it should also apply to at least foreign
tables. This patch removes the relkind check altogether, because it
doesn't save very much and just introduces the possibility of similar
omissions.
Bug: #15238
Reported-by: Christoph Berg <christoph.berg@credativ.de>
M src/backend/commands/tablecmds.c
Rework activation of commit timestamps during recovery
commit : cb822ffb798019b3cfb779f73725925d0416f6c8
author : Michael Paquier <michael@paquier.xyz>
date : Wed, 26 Sep 2018 10:29:28 +0900
committer: Michael Paquier <michael@paquier.xyz>
date : Wed, 26 Sep 2018 10:29:28 +0900
The activation and deactivation of commit timestamp tracking have not
been handled consistently for a primary or standby during recovery. The
facility can be activated at three different moments of recovery:
- The beginning, where a primary would use the GUC value for the
decision-making, and where a standby relies on the contents of the
control file.
- When replaying a XLOG_PARAMETER_CHANGE record at redo.
- The end, where both primary and standby rely on the GUC value.
Using the GUC value for a primary at the beginning of recovery causes
problems with commit timestamp access when doing crash recovery.
Particularly, when replaying transaction commits, an attempt could be
made to read the commit timestamp of a transaction that committed at a
moment when track_commit_timestamp was disabled.
A test case is added to reproduce the failure. The test works down to
v11 as it takes advantage of transaction commits within procedures.
Reported-by: Hailong Li
Author: Masahiko Sawada, Michael Paquier
Reviewed-by: Kyotaro Horiguchi
Discussion: https://postgr.es/m/11224478-a782-203b-1f17-e4797b39bdf0@qunar.com
Backpatch-through: 9.5, where commit timestamps have been introduced.
M src/backend/access/transam/commit_ts.c
M src/backend/access/transam/xlog.c
Remove obsolete comment
commit : 21c8f9c28981f0dc81fbf0d26b85ec119511f35c
author : Alvaro Herrera <alvherre@alvh.no-ip.org>
date : Tue, 25 Sep 2018 17:55:22 -0300
committer: Alvaro Herrera <alvherre@alvh.no-ip.org>
date : Tue, 25 Sep 2018 17:55:22 -0300
The documented shortcoming was actually fixed in 4c728f3829
so the comment is not true anymore.
M src/backend/executor/execParallel.c
Make some fixes to allow building Postgres on macOS 10.14 ("Mojave").
commit : 736c3a48c400920bb71e044f9e9a7fcc3dd41e21
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Tue, 25 Sep 2018 13:23:29 -0400
committer: Tom Lane <tgl@sss.pgh.pa.us>
date : Tue, 25 Sep 2018 13:23:29 -0400
Apple's latest rearrangements of the system-supplied headers have broken
building of PL/Perl and PL/Tcl. The only practical way to fix PL/Tcl is to
start using the "-isysroot" compiler flag to point to SDK-supplied headers,
as Apple expects. We must also start distinguishing where to find Perl's
headers from where to find its shared library; but that seems like good
cleanup anyway.
Extensions that formerly did something like -I$(perl_archlibexp)/CORE
should now do -I$(perl_includedir)/CORE instead. perl_archlibexp
is still the place to look for libperl.so, though.
If for some reason you don't like the default -isysroot setting, you can
override that by setting PG_SYSROOT in configure's arguments. I don't
currently think people would need to do so, unless maybe for cross-version
build purposes.
In addition, teach configure where to find tclConfig.sh. Our traditional
method of searching $auto_path hasn't worked for the last couple of macOS
releases, and it now seems clear that Apple's not going to change that.
The workaround of manually specifying --with-tclconfig was annoying
already, but Mojave's made it a lot more so because the sysroot path now
has to be included as well. Let's just wire the knowledge into configure
instead. To avoid breaking builds against non-default Tcl installations
(e.g. MacPorts) wherein the $auto_path method probably still works,
arrange to try the additional case only after all else has failed.
Back-patch to all supported versions, since at least the buildfarm
cares about that. The changes are set up to not do anything on macOS
releases that are old enough to not have functional sysroot trees.
M config/tcl.m4
M configure
M configure.in
M contrib/hstore_plperl/Makefile
M src/Makefile.global.in
M src/pl/plperl/GNUmakefile
M src/template/darwin
Ignore publication tables when --no-publications is used
commit : 55a586ba9715b044d7b23a2f60dc003ddcf9239a
author : Michael Paquier <michael@paquier.xyz>
date : Tue, 25 Sep 2018 11:05:29 +0900
committer: Michael Paquier <michael@paquier.xyz>
date : Tue, 25 Sep 2018 11:05:29 +0900
96e1cb4 has added support for --no-publications in pg_dump, pg_dumpall
and pg_restore, but forgot the fact that publication tables also need to
be ignored when this option is used.
Author: Gilles Darold
Reviewed-by: Michael Paquier
Discussion: https://postgr.es/m/3f48e812-b0fa-388e-2043-9a176bdee27e@dalibo.com
Backpatch-through: 10, where publications have been added.
M src/bin/pg_dump/pg_backup_archiver.c
M src/bin/pg_dump/pg_dump.c
Revoke pg_stat_statements_reset() permissions
commit : 90a1f97867feb6b6469c641f0e4a00842c52bafb
author : Michael Paquier <michael@paquier.xyz>
date : Tue, 25 Sep 2018 09:56:57 +0900
committer: Michael Paquier <michael@paquier.xyz>
date : Tue, 25 Sep 2018 09:56:57 +0900
Commit 25fff40 granted execute permission on the function
pg_stat_statements_reset() to the default role "pg_read_all_stats", but this
role is meant to read statistics, not to reset them. The permission on
this function is therefore revoked from "pg_read_all_stats", and the
version of pg_stat_statements is bumped up in consequence.
Author: Haribabu Kommi
Reviewed-by: Michael Paquier, Amit Kapila
Discussion: https://postgr.es/m/CAJrrPGf5fCnKqXObpwGN9nMyD--tzOf-7LFCJiz59Z1wJ5qj9A@mail.gmail.com
M contrib/pg_stat_statements/Makefile
A contrib/pg_stat_statements/pg_stat_statements--1.5--1.6.sql
M contrib/pg_stat_statements/pg_stat_statements.control
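Clusters that already have the extension installed pick up the revised
permissions by updating it; a minimal sketch (assuming the extension is
installed in the database in the usual way):
    ALTER EXTENSION pg_stat_statements UPDATE TO '1.6';
    -- members of pg_read_all_stats can still read the pg_stat_statements
    -- view, but can no longer call pg_stat_statements_reset()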
Fix over-allocation of space for array_out()'s result string.
commit : 103511723ee2c3304a6991a49accf51b674b6907
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Mon, 24 Sep 2018 11:30:51 -0400
committer: Tom Lane <tgl@sss.pgh.pa.us>
date : Mon, 24 Sep 2018 11:30:51 -0400
array_out overestimated the space needed for its output, possibly by
a very substantial amount if the array is multi-dimensional, because
of a wrong order of operations in the loop that counts the number of
curly-brace pairs needed. While the output string is normally
short-lived, this could still cause problems in extreme cases.
An additional minor error was that it counted one more delimiter than
is actually needed.
Repair those errors, add an Assert that the space is now correctly
calculated, and make some minor improvements in the comments.
I also failed to resist the temptation to get rid of an integer
modulus operation per array element; a simple comparison is sufficient.
This bug dates clear back to Berkeley days, so back-patch to all
supported versions.
Keiichi Hirobe, minor additional work by me
Discussion: https://postgr.es/m/CAH=EFxE9W0tRvQkixR2XJRRCToUYUEDkJZk6tnADXugPBRdcdg@mail.gmail.com
M src/backend/utils/adt/arrayfuncs.c
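For context, the brace counting concerns output of this shape (illustrative
only; the bug affected the sizing of the output buffer, not the visible text):
    SELECT ARRAY[[1,2],[3,4]];
    -- {{1,2},{3,4}} : one outer brace pair plus one pair per row is all
    -- that is needed, which the corrected order of operations now computes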
Initialize random() in bootstrap/stand-alone postgres and in initdb.
commit : 4232cff11b84ddb7440bc2fdc3a5bd6ab0baeea4
author : Noah Misch <noah@leadboat.com>
date : Sun, 23 Sep 2018 22:56:39 -0700
committer: Noah Misch <noah@leadboat.com>
date : Sun, 23 Sep 2018 22:56:39 -0700
This removes a difference between the standard IsUnderPostmaster
execution environment and that of --boot and --single. In a stand-alone
backend, "SELECT random()" always started at the same seed.
On a system capable of using posix shared memory, initdb could still
conclude "selecting dynamic shared memory implementation ... sysv".
Crashed --boot or --single postgres processes orphaned shared memory
objects having names that collided with the not-actually-random names
that initdb probed. The sysv fallback appeared after ten crashes of
--boot or --single postgres. Since --boot and --single are rare in
production use, systems used for PostgreSQL development are the
principal candidate to notice this symptom.
Back-patch to 9.3 (all supported versions). PostgreSQL 9.4 introduced
dynamic shared memory, but 9.3 does share the "SELECT random()" problem.
Reviewed by Tom Lane and Kyotaro HORIGUCHI.
Discussion: https://postgr.es/m/20180915221546.GA3159382@rfd.leadboat.com
M src/backend/utils/init/miscinit.c
M src/bin/initdb/initdb.c
Fix failure in WHERE CURRENT OF after rewinding the referenced cursor.
commit : 5ed281e21d363aa1661587c331a022d7e6763d0c
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Sun, 23 Sep 2018 16:05:45 -0400
committer: Tom Lane <tgl@sss.pgh.pa.us>
date : Sun, 23 Sep 2018 16:05:45 -0400
In a case where we have multiple relation-scan nodes in a cursor plan,
such as a scan of an inheritance tree, it's possible to fetch from a
given scan node, then rewind the cursor and fetch some row from an
earlier scan node. In such a case, execCurrent.c mistakenly thought
that the later scan node was still active, because ExecReScan hadn't
done anything to make it look not-active. We'd get some sort of
failure in the case of a SeqScan node, because the node's scan tuple
slot would be pointing at a HeapTuple whose t_self gets reset to
invalid by heapam.c. But it seems possible that for other relation
scan node types we'd actually return a valid tuple TID to the caller,
resulting in updating or deleting a tuple that shouldn't have been
considered current. To fix, forcibly clear the ScanTupleSlot in
ExecScanReScan.
Another issue here, which seems only latent at the moment but could
easily become a live bug in future, is that rewinding a cursor does
not necessarily lead to *immediately* applying ExecReScan to every
scan-level node in the plan tree. Upper-level nodes will think that
they can postpone that call if their child node is already marked
with chgParam flags. I don't see a way for that to happen today in
a plan tree that's simple enough for execCurrent.c's search_plan_tree
to understand, but that's one heck of a fragile assumption. So, add
some logic in search_plan_tree to detect chgParam flags being set on
nodes that it descended to/through, and assume that that means we
should consider lower scan nodes to be logically reset even if their
ReScan call hasn't actually happened yet.
Per bug #15395 from Matvey Arye. This has been broken for a long time,
so back-patch to all supported branches.
Discussion: https://postgr.es/m/153764171023.14986.280404050547008575@wrigleys.postgresql.org
M src/backend/executor/execCurrent.c
M src/backend/executor/execScan.c
M src/test/regress/expected/portals.out
M src/test/regress/sql/portals.sql
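A rough sketch of the failure scenario, loosely modeled on the added
regression test (table, column, and cursor names here are hypothetical):
    BEGIN;
    DECLARE c SCROLL CURSOR FOR SELECT * FROM parent_tbl;  -- inheritance tree
    FETCH ABSOLUTE 12 FROM c;   -- lands on a later child scan node
    FETCH ABSOLUTE 3 FROM c;    -- rewinds back to an earlier scan node
    UPDATE parent_tbl SET payload = payload WHERE CURRENT OF c;
    COMMIT;
    -- previously the later scan node could still look active, so the UPDATE
    -- might target a tuple that should no longer be considered current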
docs: remove use of escape strings and use bytea hex output
commit : 1927e431dd69daece3152f8264df1a7ba1fcea9b
author : Bruce Momjian <bruce@momjian.us>
date : Fri, 21 Sep 2018 19:55:07 -0400
committer: Bruce Momjian <bruce@momjian.us>
date : Fri, 21 Sep 2018 19:55:07 -0400
standard_conforming_strings defaulted to 'on' in PG 9.1.
bytea_output defaulted to 'hex' in PG 9.0.
Reported-by: André Hänsel
Discussion: https://postgr.es/m/12e601d447ac$345994a0$9d0cbde0$@webkr.de
Backpatch-through: 9.3
M doc/src/sgml/array.sgml
M doc/src/sgml/datatype.sgml
M doc/src/sgml/func.sgml
M doc/src/sgml/lobj.sgml
M doc/src/sgml/rowtypes.sgml
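For reference, the defaults the documentation now assumes (a quick sketch,
not taken from the patch itself):
    SHOW standard_conforming_strings;   -- on   (default since 9.1)
    SHOW bytea_output;                  -- hex  (default since 9.0)
    SELECT '\xdeadbeef'::bytea;         -- \xdeadbeef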
Fix bogus tab-completion rule for CREATE PUBLICATION.
commit : e8d118fe859cdbae8b7dafefd126f31105f3aaab
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Fri, 21 Sep 2018 15:58:37 -0400
committer: Tom Lane <tgl@sss.pgh.pa.us>
date : Fri, 21 Sep 2018 15:58:37 -0400
You can't use "FOR TABLE" as a single Matches argument, because readline
will consider that input to be two words not one. It's necessary to make
the pattern contain two arguments.
The case accidentally worked anyway because the words_after_create
test fired ... but only for the first such table name.
Noted by Edmund Horner, though this isn't exactly his proposed fix.
Backpatch to v10 where the faulty code came in.
Discussion: https://postgr.es/m/CAMyN-kDe=gBmHgxWwUUaXuwK+p+7g1vChR7foPHRDLE592nJPQ@mail.gmail.com
M src/bin/psql/tab-complete.c
Use size_t consistently in dsa.{ch}.
commit : 917fe6a48218c3dcbb74129da9ec69b524c45f47
author : Thomas Munro <tmunro@postgresql.org>
date : Sat, 22 Sep 2018 00:40:13 +1200
committer: Thomas Munro <tmunro@postgresql.org>
date : Sat, 22 Sep 2018 00:40:13 +1200
Takeshi Ideriha complained that there is a mixture of Size and size_t
in dsa.c and corresponding header. Let's use size_t. Back-patch to 10
where dsa.c landed, to make future back-patching easy.
Discussion: https://postgr.es/m/4E72940DA2BF16479384A86D54D0988A6F19ABD9%40G01JPEXMBKW04
M src/backend/utils/mmgr/dsa.c
M src/include/utils/dsa.h
Error out for clang on x86-32 without SSE2 support, no -fexcess-precision.
commit : 1b8f09dbd352cb2280ff6e7c8896db57a2d68f06
author : Andres Freund <andres@anarazel.de>
date : Thu, 13 Sep 2018 14:18:43 -0700
committer: Andres Freund <andres@anarazel.de>
date : Thu, 13 Sep 2018 14:18:43 -0700
As clang currently doesn't support -fexcess-precision=standard,
compiling x86-32 code with SSE2 disabled can lead to problems with
floating point overflow checks and the like.
This issue was noticed because clang, on at least some BSDs, defaults
to i386 compatibility, whereas it defaults to pentium4 on Linux. Our
forced usage of __builtin_isinf() led to some overflow checks not
triggering when compiling for i386, e.g. when the result of the
calculation didn't overflow in 80-bit registers, but did so in 64-bit ones.
While we could just fall back to a non-builtin isinf, it seems likely
that the use of 80bit registers leads to other problems (which is why
we force the flag for GCC already). Therefore error out when
detecting clang in that situation.
Reported-By: Victor Wagner
Analyzed-By: Andrew Gierth and Andres Freund
Author: Andres Freund
Discussion: https://postgr.es/m/20180905005130.ewk4xcs5dgyzcy45@alap3.anarazel.de
Backpatch: 9.3-, all supported versions are affected
M configure
M configure.in
Fix segment_bins corruption in dsa.c.
commit : ba20d392584cdecc2808fe936448d127f43f2c07
author : Thomas Munro <tmunro@postgresql.org>
date : Thu, 20 Sep 2018 15:52:39 +1200
committer: Thomas Munro <tmunro@postgresql.org>
date : Thu, 20 Sep 2018 15:52:39 +1200
If a segment has been freed by dsa.c because it is entirely empty, other
backends must make sure to unmap it before following links to new
segments that might happen to have the same index number, or they could
finish up looking at a defunct segment and then corrupt the segment_bins
lists. The correct protocol requires checking freed_segment_counter
after acquiring the area lock and before resolving any index number to a
segment. Add the missing checks and an assertion.
Back-patch to 10, where dsa.c first arrived.
Author: Thomas Munro
Reported-by: Tomas Vondra
Discussion: https://postgr.es/m/CAEepm%3D0thg%2Bja5zGVa7jBy-uqyHrTqTm8HGhEOtMmigGrAqTbw%40mail.gmail.com
M src/backend/utils/mmgr/dsa.c
Defer restoration of libraries in parallel workers.
commit : 98a4e814e473784716f889d8e67d66c38e94c6b0
author : Thomas Munro <tmunro@postgresql.org>
date : Thu, 20 Sep 2018 14:03:05 +1200
committer: Thomas Munro <tmunro@postgresql.org>
date : Thu, 20 Sep 2018 14:03:05 +1200
Several users of extensions complained of crashes in parallel workers
that turned out to be due to syscache access from their _PG_init()
functions. Reorder the initialization of parallel workers so that
libraries are restored after the caches are initialized, and inside a
transaction.
This was reported in bug #15350 and elsewhere. We don't consider it
to be a bug: extensions shouldn't do that, because then they can't be
used in shared_preload_libraries. However, it's a fairly obscure
hazard and these extensions worked in practice before parallel query
came along. So let's make it work. Later commits might add a warning
message and eventually an error.
Back-patch to 9.6, where parallel query landed.
Author: Thomas Munro
Reviewed-by: Amit Kapila
Reported-by: Kieran McCusker, Jimmy
Discussion: https://postgr.es/m/153512195228.1489.8545997741965926448%40wrigleys.postgresql.org
M src/backend/access/transam/parallel.c
Don't ignore locktable-full failures in StandbyAcquireAccessExclusiveLock.
commit : 82b7cfaaadb46f50bfa23759d2e6ce3a343e932e
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Wed, 19 Sep 2018 12:43:51 -0400
committer: Tom Lane <tgl@sss.pgh.pa.us>
date : Wed, 19 Sep 2018 12:43:51 -0400
Commit 37c54863c removed the code in StandbyAcquireAccessExclusiveLock
that checked the return value of LockAcquireExtended. That created a
bug, because it's still passing reportMemoryError = false to
LockAcquireExtended, meaning that LOCKACQUIRE_NOT_AVAIL will be returned
if we're out of shared memory for the lock table.
In such a situation, the startup process would believe it had acquired an
exclusive lock even though it hadn't, with potentially dire consequences.
To fix, just drop the use of reportMemoryError = false, which allows us
to simplify the call into a plain LockAcquire(). It's unclear that the
locktable-full situation arises often enough that it's worth having a
better recovery method than crash-and-restart. (I strongly suspect that
the only reason the code path existed at all was that it was relatively
simple to do in the pre-37c54863c implementation. But now it's not.)
LockAcquireExtended's reportMemoryError parameter is now dead code and
could be removed. I refrained from doing so, however, because there
was some interest in resurrecting the behavior if we do get reports of
locktable-full failures in the field. Also, it seems unwise to remove
the parameter concurrently with shipping commit f868a8143, which added a
parameter; if there are any third-party callers of LockAcquireExtended,
we want them to get a wrong-number-of-parameters compile error rather
than a possibly-silent misinterpretation of its last parameter.
Back-patch to 9.6 where the bug was introduced.
Discussion: https://postgr.es/m/6202.1536359835@sss.pgh.pa.us
M src/backend/storage/ipc/standby.c
M src/backend/storage/lmgr/lock.c
Fix some probably-minor oversights in readfuncs.c.
commit : cdbdf85ec78fac3a9a2daf8541f185c07f2a2eff
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Tue, 18 Sep 2018 13:02:27 -0400
committer: Tom Lane <tgl@sss.pgh.pa.us>
date : Tue, 18 Sep 2018 13:02:27 -0400
The system expects TABLEFUNC RTEs to have coltypes, coltypmods, and
colcollations lists, but outfuncs doesn't dump them and readfuncs doesn't
restore them. This doesn't cause obvious failures, because the only things
that look at those fields are expandRTE() and get_rte_attribute_type(),
which are mostly used during parse analysis, before anything would've
passed the parsetree through outfuncs/readfuncs. But expandRTE() is used
in build_physical_tlist(), which means that that function will return a
wrong answer for a TABLEFUNC RTE that came from a view. Very accidentally,
this doesn't cause serious problems, because what it will return is NIL
which callers will interpret as "couldn't build a physical tlist because
of dropped columns". So you still get a plan that works, though it's
marginally less efficient than it could be. There are also some other
expandRTE() calls associated with transformation of whole-row Vars in
the planner. I have been unable to exhibit misbehavior from that, and
it may be unreachable in any case that anyone would care about ... but
I'm not entirely convinced, so this seems like something we should back-
patch a fix for. Fortunately, we can fix it without forcing a change
of stored rules and a catversion bump, because we can just copy these
lists from the subsidiary TableFunc object.
readfuncs.c was also missing support for NamedTuplestoreScan plan nodes.
This accidentally fails to break parallel query because a query using
a named tuplestore would never be considered parallel-safe anyway.
However, project policy since parallel query came in is that all plan
node types should have outfuncs/readfuncs support, so this is clearly
an oversight that should be repaired.
Noted while fooling around with a patch to test outfuncs/readfuncs more
thoroughly. That exposed some other issues too, but these are the only
ones that seem worth back-patching.
Back-patch to v10 where both of these features came in.
Discussion: https://postgr.es/m/17114.1537138992@sss.pgh.pa.us
M src/backend/nodes/readfuncs.c
M src/include/nodes/parsenodes.h
Allow DSM allocation to be interrupted.
commit : 7167fa876e48b79131dbfc09789c92c85026c8bc
author : Thomas Munro <tmunro@postgresql.org>
date : Tue, 18 Sep 2018 22:56:36 +1200
committer: Thomas Munro <tmunro@postgresql.org>
date : Tue, 18 Sep 2018 22:56:36 +1200
Chris Travers reported that the startup process can repeatedly try to
cancel a backend that is in a posix_fallocate()/EINTR loop and cause it
to loop forever. Teach the retry loop to give up if an interrupt is
pending. Don't actually check for interrupts in that loop though,
because a non-local exit would skip some clean-up code in the caller.
Back-patch to 9.4 where DSM was added (and posix_fallocate() was later
back-patched).
Author: Chris Travers
Reviewed-by: Ildar Musin, Murat Kabilov, Oleksii Kliukin
Tested-by: Oleksii Kliukin
Discussion: https://postgr.es/m/CAN-RpxB-oeZve_J3SM_6%3DHXPmvEG%3DHX%2B9V9pi8g2YR7YW0rBBg%40mail.gmail.com
M src/backend/storage/ipc/dsm_impl.c
Fix parsetree representation of XMLTABLE(XMLNAMESPACES(DEFAULT ...)).
commit : a552e3bc502ee679b0b8ff19e69b7b69890c0613
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Mon, 17 Sep 2018 13:16:32 -0400
committer: Tom Lane <tgl@sss.pgh.pa.us>
date : Mon, 17 Sep 2018 13:16:32 -0400
The original coding for XMLTABLE thought it could represent a default
namespace by a T_String Value node with a null string pointer. That's
not okay, though; in particular outfuncs.c/readfuncs.c are not on board
with such a representation, meaning you'll get a null pointer crash
if you try to store a view or rule containing this construct.
To fix, change the parsetree representation so that we have a NULL
list element, instead of a bogus Value node.
This isn't really a functional limitation since default XML namespaces
aren't yet implemented in the executor; you'd just get "DEFAULT
namespace is not supported" anyway. But crashes are not nice, so
back-patch to v10 where this syntax was added. Ordinarily we'd consider
a parsetree representation change to be un-backpatchable; but since
existing releases would crash on the way to storing such constructs,
there can't be any existing views/rules to be incompatible with.
Per report from Andrey Lepikhov.
Discussion: https://postgr.es/m/3690074f-abd2-56a9-144a-aa5545d7a291@postgrespro.ru
M src/backend/executor/nodeTableFuncscan.c
M src/backend/parser/parse_clause.c
M src/backend/utils/adt/ruleutils.c
M src/include/nodes/execnodes.h
M src/include/nodes/primnodes.h
Fix pgbench lexer's "continuation" rule to cope with Windows newlines.
commit : 3ea7e015f37afd615234d94181840b8e6e44e6ed
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Mon, 17 Sep 2018 12:11:43 -0400
committer: Tom Lane <tgl@sss.pgh.pa.us>
date : Mon, 17 Sep 2018 12:11:43 -0400
Our general practice in frontend code is to accept input with either
Unix-style newlines (\n) or DOS-style (\r\n). pgbench was mostly down
with that, but its rule for line continuations (backslash-newline) was
not. This had been masked on Windows buildfarm machines before commit
0ba06e0bf by use of Windows text mode to read files. We could have fixed
it by forcing text mode again, but it's better to fix the parsing code
so that Windows-style text files on Unix systems don't cause problems.
Back-patch to v10 where pgbench grew line continuations.
Discussion: https://postgr.es/m/17194.1537191697@sss.pgh.pa.us
M src/bin/pgbench/exprscan.l
Add outfuncs.c support for RawStmt nodes.
commit : c37401733aa73949507dbe6abf4d6137fd1f5529
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Sun, 16 Sep 2018 13:02:47 -0400
committer: Tom Lane <tgl@sss.pgh.pa.us>
date : Sun, 16 Sep 2018 13:02:47 -0400
I noticed while poking at a report from Andrey Lepikhov that the
recent addition of RawStmt nodes at the top of raw parse trees
makes it impossible to print any raw parse trees whatsoever,
because outfuncs.c doesn't know RawStmt and hence fails to descend
into it.
While we generally lack outfuncs.c support for utility statements,
there is reasonably complete support for what you can find in a
raw SELECT statement. It was not my intention to make that all
dead code ... so let's add support for RawStmt.
Back-patch to v10 where RawStmt appeared.
M src/backend/nodes/outfuncs.c
Fix failure with initplans used conditionally during EvalPlanQual rechecks.
commit : 99cbbbbd1ddd3eb23dba7d0f4810e5accccce8c8
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Sat, 15 Sep 2018 13:42:34 -0400
committer: Tom Lane <tgl@sss.pgh.pa.us>
date : Sat, 15 Sep 2018 13:42:34 -0400
The EvalPlanQual machinery assumes that any initplans (that is,
uncorrelated sub-selects) used during an EPQ recheck would have already
been evaluated during the main query; this is implicit in the fact that
execPlan pointers are not copied into the EPQ estate's es_param_exec_vals.
But it's possible for that assumption to fail, if the initplan is only
reached conditionally. For example, a sub-select inside a CASE expression
could be reached during a recheck when it had not been previously, if the
CASE test depends on a column that was just updated.
This bug is old, appearing to date back to my rewrite of EvalPlanQual in
commit 9f2ee8f28, but was not detected until Kyle Samson reported a case.
To fix, force all not-yet-evaluated initplans used within the EPQ plan
subtree to be evaluated at the start of the recheck, before entering the
EPQ environment. This could be inefficient, if such an initplan is
expensive and goes unused again during the recheck --- but that's piling
one layer of improbability atop another. It doesn't seem worth adding
more complexity to prevent that, at least not in the back branches.
It was convenient to use the new-in-v11 ExecEvalParamExecParams function
to implement this, but I didn't like either its name or the specifics of
its API, so revise that.
Back-patch all the way. Rather than rewrite the patch to avoid depending
on bms_next_member() in the oldest branches, I chose to back-patch that
function into 9.4 and 9.3. (This isn't the first time back-patches have
needed that, and it exhausted my patience.) I also chose to back-patch
some test cases added by commits 71404af2a and 342a1ffa2 into 9.4 and 9.3,
so that the 9.x versions of eval-plan-qual.spec are all the same.
Andrew Gierth diagnosed the problem and contributed the added test cases,
though the actual code changes are by me.
Discussion: https://postgr.es/m/A033A40A-B234-4324-BE37-272279F7B627@tripadvisor.com
M src/backend/executor/execMain.c
M src/backend/executor/nodeSubplan.c
M src/include/executor/nodeSubplan.h
M src/test/isolation/expected/eval-plan-qual.out
M src/test/isolation/specs/eval-plan-qual.spec
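A hedged sketch of the kind of statement that can reach an unevaluated
initplan during an EPQ recheck (tables and columns are hypothetical, not
taken from the added isolation test):
    -- session 1
    BEGIN;
    UPDATE accounts SET flag = true WHERE id = 1;
    -- session 2, READ COMMITTED, runs concurrently and blocks on the row
    UPDATE accounts
       SET bonus = CASE WHEN flag THEN (SELECT max(amount) FROM payments)
                        ELSE 0 END
     WHERE id = 1;
    -- once session 1 commits, the recheck sees flag = true and reaches the
    -- uncorrelated sub-select that was never evaluated in the main query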
Don't allow LIMIT/OFFSET clause within sub-selects to be pushed to workers.
commit : 1ceb103e7d1e794c0b171b0594fc6936003eb4ab
author : Amit Kapila <akapila@postgresql.org>
date : Fri, 14 Sep 2018 10:05:45 +0530
committer: Amit Kapila <akapila@postgresql.org>
date : Fri, 14 Sep 2018 10:05:45 +0530
Allowing a sub-select containing LIMIT/OFFSET to run in workers can lead
to inconsistent results at the top level, as there is no guarantee that
the row order will be fully deterministic. The fix is to prohibit pushing
LIMIT/OFFSET within sub-selects to workers.
Reported-by: Andrew Fletcher
Bug: 15324
Author: Amit Kapila
Reviewed-by: Dilip Kumar
Backpatch-through: 9.6
Discussion: https://postgr.es/m/153417684333.10284.11356259990921828616@wrigleys.postgresql.org
M src/backend/optimizer/path/allpaths.c
M src/backend/optimizer/plan/planner.c
M src/include/optimizer/planner.h
M src/test/regress/expected/select_parallel.out
M src/test/regress/sql/select_parallel.sql
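An illustrative query of the affected shape (table and column names are
hypothetical):
    SELECT DISTINCT a
      FROM (SELECT a FROM big_tab ORDER BY a LIMIT 100) s;
    -- if "a" is not unique around the LIMIT boundary, each worker could pick
    -- a different set of 100 rows, so such sub-selects are no longer pushed
    -- below a Gather node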
Attach FPI to the first record after full_page_writes is turned on.
commit : ede7d8192ca3d1f731d34fb82fdcfc3308b4355f
author : Amit Kapila <akapila@postgresql.org>
date : Thu, 13 Sep 2018 16:01:57 +0530
committer: Amit Kapila <akapila@postgresql.org>
date : Thu, 13 Sep 2018 16:01:57 +0530
XLogInsert fails to attach a required FPI to the first record after
full_page_writes is turned on by the last checkpoint. This bug was
introduced in 9.5 due to code rearrangement in commits 2c03216d83 and
2076db2aea. Fix it by ensuring that XLogInsertRecord performs a
recomputation when the given record was generated with FPW off but the
flag is found to have been turned on while actually inserting the
record.
Reported-by: Kyotaro Horiguchi
Author: Kyotaro Horiguchi
Reviewed-by: Amit Kapila
Backpatch-through: 9.5 where this problem was introduced
Discussion: https://postgr.es/m/20180420.151043.74298611.horiguchi.kyotaro@lab.ntt.co.jp
M src/backend/access/transam/xlog.c
doc: Update broken links
commit : aed0150eda2cb8b56b00b1a5cfcb452f0fa8235b
author : Peter Eisentraut <peter_e@gmx.net>
date : Tue, 14 Aug 2018 22:54:52 +0200
committer: Peter Eisentraut <peter_e@gmx.net>
date : Tue, 14 Aug 2018 22:54:52 +0200
Discussion: https://www.postgresql.org/message-id/flat/153044458767.13254.16049977382403131287%40wrigleys.postgresql.org
M doc/src/sgml/libpq.sgml
M doc/src/sgml/pgcrypto.sgml
M doc/src/sgml/runtime.sgml
Repair bug in regexp split performance improvements.
commit : ab78c6e36635da96c5ef8c229942a16b5391c0b9
author : Andrew Gierth <rhodiumtoad@postgresql.org>
date : Wed, 12 Sep 2018 19:31:06 +0100
committer: Andrew Gierth <rhodiumtoad@postgresql.org>
date : Wed, 12 Sep 2018 19:31:06 +0100
Commit c8ea87e4b introduced a temporary conversion buffer for
substrings extracted during regexp splits. Unfortunately the code that
sized it was failing to ignore the effects of ignored degenerate
regexp matches, so for regexp_split_* calls it could under-size the
buffer in such cases.
Fix, and add some regression test cases (though those will only catch
the bug if run in a multibyte encoding).
Backpatch to 9.3 as the faulty code was.
Thanks to the PostGIS project, Regina Obe and Paul Ramsey for the
report (via IRC) and assistance in analysis. Patch by me.
M src/backend/utils/adt/regexp.c
M src/test/regress/expected/strings.out
M src/test/regress/sql/strings.sql
ecpg: Change --version output to common style
commit : 6592d89068b694af6894117e629ab45c1e1f49b2
author : Peter Eisentraut <peter_e@gmx.net>
date : Wed, 12 Sep 2018 14:33:15 +0200
committer: Peter Eisentraut <peter_e@gmx.net>
date : Wed, 12 Sep 2018 14:33:15 +0200
When we removed the ecpg-specific versions, we also removed the
"(PostgreSQL)" from the --version output, which we show in other
programs.
Reported-by: Ioseph Kim <pgsql-kr@postgresql.kr>
M src/interfaces/ecpg/preproc/ecpg.c
Repair double-free in SP-GIST rescan (bug #15378)
commit : c02b56869439281d139d47dae784e3f7cf765f2d
author : Andrew Gierth <rhodiumtoad@postgresql.org>
date : Tue, 11 Sep 2018 18:14:19 +0100
committer: Andrew Gierth <rhodiumtoad@postgresql.org>
date : Tue, 11 Sep 2018 18:14:19 +0100
spgrescan would first reset traversalCxt, and then traverse a
potentially non-empty stack containing pointers to traversalValues
which had been allocated in those contexts, freeing them a second
time. This bug originates in commit ccd6eb49a where traversalValue was
introduced.
Repair by traversing the stack before the context reset; this isn't
ideal, since it means doing retail pfree in a context that's about to
be reset, but the freeing of a stack entry is also done in other
places in the code during the scan so it's not worth trying to
refactor it further. Regression test added.
Backpatch to 9.6 where the problem was introduced.
Per bug #15378; analysis and patch by me, originally from a report on
IRC by user velix; see also PostGIS ticket #4174; review by Alexander
Korotkov.
Discussion: https://postgr.es/m/153663176628.23136.11901365223750051490@wrigleys.postgresql.org
M src/backend/access/spgist/spgscan.c
M src/test/regress/expected/spgist.out
M src/test/regress/sql/spgist.sql
Use -Bsymbolic for shared libraries on HP-UX and Solaris.
commit : 355fd62e8c73c023b2adc7b04a9cd83336ef7e8a
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Mon, 10 Sep 2018 22:22:12 -0400
committer: Tom Lane <tgl@sss.pgh.pa.us>
date : Mon, 10 Sep 2018 22:22:12 -0400
These platforms are also subject to the mis-linking problem addressed
in commit e3d77ea6b. It's not clear whether we could solve it with
a solution equivalent to GNU ld's version scripts, but -Bsymbolic
appears to fix it, so let's use that.
Like the previous commit, back-patch as far as v10.
Discussion: https://postgr.es/m/153626613985.23143.4743626885618266803@wrigleys.postgresql.org
M src/Makefile.shlib
Prevent mis-linking of src/port and src/common functions on *BSD.
commit : d6ff5322c23272b15af606d7da12f49eca4d4470
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Sun, 9 Sep 2018 15:16:51 -0400
committer: Tom Lane <tgl@sss.pgh.pa.us>
date : Sun, 9 Sep 2018 15:16:51 -0400
On ELF-based platforms (and maybe others?) it's possible for a shared
library, when dynamically loaded into the backend, to call the backend
versions of src/port and src/common functions rather than the frontend
versions that are actually linked into the shlib. This is the cause
of bug #15367 from Jeremy Evans, and is likely to lead to more problems
in future; it's accidental that we've failed to notice any bad effects
up to now.
The recommended way to fix this on ELF-based platforms is to use a
linker "version script" that makes the shlib's versions of the functions
local. (Apparently, -Bsymbolic would fix it as well, but with other
side effects that we don't want.) Doing so has the additional benefit
that we can make sure the shlib only exposes the symbols that are meant
to be part of its API, and not ones that are just for cross-file
references within the shlib. So we'd already been using a version
script for libpq on popular platforms, but it's now apparent that it's
necessary for correctness on every ELF-based platform.
Hence, add appropriate logic to the openbsd, freebsd, and netbsd stanzas
of Makefile.shlib; this is just a copy-and-paste from the linux stanza.
There may be additional work to do if commit ed0cdf0e0 reveals that the
problem exists elsewhere, but this is all that is known to be needed
right now.
Back-patch to v10 where SCRAM support came in. The problem is ancient,
but analysis suggests that there were no really severe consequences
in older branches. Hence, I won't take the risk of such a large change
in the build process for older branches.
In passing, remove a rather opaque comment about -Bsymbolic; I don't
think it's very on-point about why we don't use that, if indeed that's
what it's talking about at all.
Patch by me; thanks to Andrew Gierth for helping to diagnose the problem,
and for additional testing.
Discussion: https://postgr.es/m/153626613985.23143.4743626885618266803@wrigleys.postgresql.org
M src/Makefile.shlib
Fix past pd_upper write in ginRedoRecompress()
commit : bccfd381707b9d66989259b94dc79e080fa56dec
author : Alexander Korotkov <akorotkov@postgresql.org>
date : Sun, 9 Sep 2018 21:19:29 +0300
committer: Alexander Korotkov <akorotkov@postgresql.org>
date : Sun, 9 Sep 2018 21:19:29 +0300
ginRedoRecompress() replays actions over compressed segments of a posting
list in-place. However, this might lead to a write past pd_upper, because
the intermediate state while replaying the changes can take more space than
both the original and final states. This commit fixes that by refraining
from in-place modification. Instead, the page tail is copied once
modification starts, and then it's used as the source of the original
segments. Backpatch to 9.4 where posting list compression was introduced.
Reported-by: Sivasubramanian Ramasubramanian
Discussion: https://postgr.es/m/1536091151804.6588%40amazon.com
Author: Alexander Korotkov based on patch from and ideas by Sivasubramanian Ramasubramanian
Review: Sivasubramanian Ramasubramanian
Backpatch-through: 9.4
M src/backend/access/gin/ginxlog.c
Fix v10 back-patch of 076a3c2112b127b3b36346dbc64659f9a165f60f.
commit : aa6d0114c4d9b54dad2abd9065b4f063c517b800
author : Noah Misch <noah@leadboat.com>
date : Sat, 8 Sep 2018 17:47:05 -0700
committer: Noah Misch <noah@leadboat.com>
date : Sat, 8 Sep 2018 17:47:05 -0700
M src/test/subscription/t/002_types.pl
Fix logical subscriber wait in test.
commit : f998b177dd329f1eda734a27433fd2aa559ec1f9
author : Noah Misch <noah@leadboat.com>
date : Sat, 8 Sep 2018 16:20:50 -0700
committer: Noah Misch <noah@leadboat.com>
date : Sat, 8 Sep 2018 16:20:50 -0700
Buildfarm members sungazer and tern revealed this deficit. Back-patch
to v10, like commit 4f10e7ea7b2231f453bb18b6e710ac333eaf121b, which
introduced the test.
M src/test/subscription/t/002_types.pl
Minor cleanup/future-proofing for pg_saslprep().
commit : 930b785d40cf53d679c72ffc2c34a63d412bee5b
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Sat, 8 Sep 2018 18:20:36 -0400
committer: Tom Lane <tgl@sss.pgh.pa.us>
date : Sat, 8 Sep 2018 18:20:36 -0400
Ensure that pg_saslprep() initializes its output argument to NULL in
all failure paths, and then remove the redundant initialization that
some (not all) of its callers did. This does not fix any live bug,
but it reduces the odds of future bugs of omission.
Also add a comment about why the existing failure-path coding is
adequate.
Back-patch so as to keep the function's API consistent across branches,
again to forestall future bug introduction.
Patch by me, reviewed by Michael Paquier
Discussion: https://postgr.es/m/16558.1536407783@sss.pgh.pa.us
M src/backend/libpq/auth-scram.c
M src/common/saslprep.c
M src/interfaces/libpq/fe-auth-scram.c
Save/restore SPI's global variables in SPI_connect() and SPI_finish().
commit : 3985b75dca6d1101cc4cb6e78456dc6c5f72fcac
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Fri, 7 Sep 2018 20:09:57 -0400
committer: Tom Lane <tgl@sss.pgh.pa.us>
date : Fri, 7 Sep 2018 20:09:57 -0400
This patch removes two sources of interference between nominally
independent functions when one SPI-using function calls another,
perhaps without knowing that it does so.
Chapman Flack pointed out that xml.c's query_to_xml_internal() expects
SPI_tuptable and SPI_processed to stay valid across datatype output
function calls; but it's possible that such a call could involve
re-entrant use of SPI. It seems likely that there are similar hazards
elsewhere, if not in the core code then in third-party SPI users.
Previously SPI_finish() reset SPI's API globals to zeroes/nulls, which
would typically make for a crash in such a situation. Restoring them
to the values they had at SPI_connect() seems like a considerably more
useful behavior, and it still meets the design goal of not leaving any
dangling pointers to tuple tables of the function being exited.
Also, cause SPI_connect() to reset these variables to zeroes/nulls after
saving them. This prevents interference in the opposite direction: it's
possible that a SPI-using function that's only ever been tested standalone
contains assumptions that these variables start out as zeroes. That was
the case as long as you were the outermost SPI user, but not so much for
an inner user. Now it's consistent.
Report and fix suggestion by Chapman Flack, actual patch by me.
Back-patch to all supported branches.
Discussion: https://postgr.es/m/9fa25bef-2e4f-1c32-22a4-3ad0723c4a17@anastigmatix.net
M src/backend/executor/spi.c
M src/include/executor/spi_priv.h
Limit depth of forced recursion for CLOBBER_CACHE_RECURSIVELY.
commit : adfc156d356a6bd80293464b6a33b93c0184a92f
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Fri, 7 Sep 2018 18:13:29 -0400
committer: Tom Lane <tgl@sss.pgh.pa.us>
date : Fri, 7 Sep 2018 18:13:29 -0400
It's somewhat surprising that we got away with this before. (Actually,
since nobody tests this routinely AFAIK, it might've been broken for
awhile. But it's definitely broken in the wake of commit f868a8143.)
It seems sufficient to limit the forced recursion to a small number
of levels.
Back-patch to all supported branches, like the preceding patch.
Discussion: https://postgr.es/m/12259.1532117714@sss.pgh.pa.us
M src/backend/utils/cache/inval.c
Fix longstanding recursion hazard in sinval message processing.
commit : 9e6f4fbdd0cfcf7235f884d662aed4f2a1c416c3
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Fri, 7 Sep 2018 18:04:38 -0400
committer: Tom Lane <tgl@sss.pgh.pa.us>
date : Fri, 7 Sep 2018 18:04:38 -0400
LockRelationOid and sibling routines supposed that, if our session already
holds the lock they were asked to acquire, they could skip calling
AcceptInvalidationMessages on the grounds that we must have already read
any remote sinval messages issued against the relation being locked.
This is normally true, but there's a critical special case where it's not:
processing inside AcceptInvalidationMessages might attempt to access system
relations, resulting in a recursive call to acquire a relation lock.
Hence, if the outer call had acquired that same system catalog lock, we'd
fall through, despite the possibility that there's an as-yet-unread sinval
message for that system catalog. This could, for example, result in
failure to access a system catalog or index that had just been processed
by VACUUM FULL. This is the explanation for buildfarm failures we've been
seeing intermittently for the past three months. The bug is far older
than that, but commits a54e1f158 et al added a new recursion case within
AcceptInvalidationMessages that is apparently easier to hit than any
previous case.
To fix this, we must not skip calling AcceptInvalidationMessages until
we have *finished* a call to it since acquiring a relation lock, not
merely acquired the lock. (There's already adequate logic inside
AcceptInvalidationMessages to deal with being called recursively.)
Fortunately, we can implement that at trivial cost, by adding a flag
to LOCALLOCK hashtable entries that tracks whether we know we have
completed such a call.
There is an API hazard added by this patch for external callers of
LockAcquire: if anything is testing for LOCKACQUIRE_ALREADY_HELD,
it might be fooled by the new return code LOCKACQUIRE_ALREADY_CLEAR
into thinking the lock wasn't already held. This should be a fail-soft
condition, though, unless something very bizarre is being done in
response to the test.
Also, I added an additional output argument to LockAcquireExtended,
assuming that that probably isn't called by any outside code given
the very limited usefulness of its additional functionality.
Back-patch to all supported branches.
Discussion: https://postgr.es/m/12259.1532117714@sss.pgh.pa.us
M src/backend/storage/ipc/standby.c
M src/backend/storage/lmgr/lmgr.c
M src/backend/storage/lmgr/lock.c
M src/include/storage/lock.h
doc: wording fix
commit : ee2b0a392ff843a7f8539f907498d550bfe51462
author : Bruce Momjian <bruce@momjian.us>
date : Thu, 6 Sep 2018 20:42:24 -0400
committer: Bruce Momjian <bruce@momjian.us>
date : Thu, 6 Sep 2018 20:42:24 -0400
Author: Liudmila Mantrova
Backpatch-through: 9.6 and 10 only
M doc/src/sgml/pgtrgm.sgml
Make contrib/unaccent's unaccent() function work when not in search path.
commit : a54f5b187a4a12a0a694d0911525a451840a1d30
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Thu, 6 Sep 2018 10:49:45 -0400
committer: Tom Lane <tgl@sss.pgh.pa.us>
date : Thu, 6 Sep 2018 10:49:45 -0400
Since the fixes for CVE-2018-1058, we've advised people to schema-qualify
function references in order to fix failures in code that executes under
a minimal search_path setting. However, that's insufficient to make the
single-argument form of unaccent() work, because it looks up the "unaccent"
text search dictionary using the search path.
The most expedient answer seems to be to remove the search_path dependency
by making it look in the same schema that the unaccent() function itself
is declared in. This will definitely work for the normal usage of this
function with the unaccent dictionary provided by the extension.
It's barely possible that there are people who were relying on the
search-path-dependent behavior to select other dictionaries with the same
name; but if there are any such people at all, they can still get that
behavior by writing unaccent('unaccent', ...), or possibly
unaccent('unaccent'::text::regdictionary, ...) if the lookup has to be
postponed to runtime.
Per complaint from Gunnlaugur Thor Briem. Back-patch to all supported
branches.
Discussion: https://postgr.es/m/CAPs+M8LCex6d=DeneofdsoJVijaG59m9V0ggbb3pOH7hZO4+cQ@mail.gmail.com
M contrib/unaccent/unaccent.c
M doc/src/sgml/unaccent.sgml
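For example, under a minimal search_path (assuming the extension lives in
schema public):
    SET search_path = pg_catalog;
    SELECT public.unaccent('Hôtel');   -- works: the "unaccent" dictionary is
                                       -- now looked up in the schema of the
                                       -- function itself
    RESET search_path;
    SELECT unaccent('unaccent', 'Hôtel');  -- explicit dictionary name, still
                                           -- resolved via the search path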
Fix the overrun in hash index metapage for smaller block sizes.
commit : 916afca45d99c8e736eb45aeec9ef256967bc1e2
author : Amit Kapila <akapila@postgresql.org>
date : Thu, 6 Sep 2018 10:19:51 +0530
committer: Amit Kapila <akapila@postgresql.org>
date : Thu, 6 Sep 2018 10:19:51 +0530
Commit 620b49a1 changed the value of HASH_MAX_BITMAPS with the intent of
allowing many non-unique values in hash indexes without worrying about
reaching the limit on the number of overflow pages. At that time, it didn't
occur to us that this can overrun the block for smaller block sizes.
Choose the value of HASH_MAX_BITMAPS based on BLCKSZ such that it gives
the same answer as now for the cases where the overrun doesn't occur, and
some other sufficiently small value for the cases where an overrun currently
does occur. This allows us not to change the behavior in any case that
currently works, so there's really no reason for a HASH_VERSION bump.
Author: Dilip Kumar
Reviewed-by: Amit Kapila
Backpatch-through: 10
Discussion: https://postgr.es/m/CAA4eK1LtF4VmU4mx_+i72ff1MdNZ8XaJMGkt2HV8+uSWcn8t4A@mail.gmail.com
M src/include/access/hash.h
Make argument names of pg_get_object_address consistent, and fix docs.
commit : 0f3637ace36bf42f5385a3304e28f8530c5691d1
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Wed, 5 Sep 2018 13:47:16 -0400
committer: Tom Lane <tgl@sss.pgh.pa.us>
date : Wed, 5 Sep 2018 13:47:16 -0400
pg_get_object_address and pg_identify_object_as_address are supposed
to be inverses, but they disagreed as to the names of the arguments
representing the textual form of an object address. Moreover, the
documented argument names didn't agree with reality at all, either
for these functions or pg_identify_object.
In HEAD and v11, I think we can get away with renaming the input
arguments of pg_get_object_address to match the outputs of
pg_identify_object_as_address. In theory that might break queries
using named-argument notation to call pg_get_object_address, but
it seems really unlikely that anybody is doing that, or that they'd
have much trouble adjusting if they were. In older branches, we'll
just live with the lack of consistency.
Aside from fixing the documentation of these functions to match reality,
I couldn't resist the temptation to do some copy-editing.
Per complaint from Jean-Pierre Pelletier. Back-patch to 9.5 where these
functions were introduced. (Before v11, this is a documentation change
only.)
Discussion: https://postgr.es/m/CANGqjDnWH8wsTY_GzDUxbt4i=y-85SJreZin4Hm8uOqv1vzRQA@mail.gmail.com
M doc/src/sgml/func.sgml
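A round-trip sketch of the two functions (the table name is hypothetical;
positional notation sidesteps the argument-name discrepancy in older
branches):
    SELECT * FROM pg_identify_object_as_address(
                    'pg_class'::regclass, 'public.mytab'::regclass, 0);
    --  type  |  object_names  | object_args
    --  table | {public,mytab} | {}
    SELECT * FROM pg_get_object_address('table', '{public,mytab}', '{}');
    -- returns the original (classid, objid, objsubid)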
docs: improve AT TIME ZONE description
commit : b7156cfe1bc105691baf40578076187e1e6436d5
author : Bruce Momjian <bruce@momjian.us>
date : Tue, 4 Sep 2018 22:34:07 -0400
committer: Bruce Momjian <bruce@momjian.us>
date : Tue, 4 Sep 2018 22:34:07 -0400
The previous description was unclear. Also add a third example, change
use of time zone acronyms to more verbose descriptions, and add a
mention that using 'time' with AT TIME ZONE uses the current time zone
rules.
Backpatch-through: 9.3
M doc/src/sgml/func.sgml
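The two directions the revised description covers, as a quick sketch (the
zone name is chosen arbitrarily):
    -- timestamp without time zone -> timestamptz: interpret the value as
    -- local time in the given zone
    SELECT TIMESTAMP '2018-11-05 12:00' AT TIME ZONE 'America/New_York';
    -- timestamptz -> timestamp without time zone: convert to local time in
    -- the given zone
    SELECT TIMESTAMPTZ '2018-11-05 12:00+00' AT TIME ZONE 'America/New_York';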
Prohibit pushing subqueries containing window function calculation to workers.
commit : bf61873ae3b64e2883c1a9c7af8e5df788a7123c
author : Amit Kapila <akapila@postgresql.org>
date : Tue, 4 Sep 2018 10:49:05 +0530
committer: Amit Kapila <akapila@postgresql.org>
date : Tue, 4 Sep 2018 10:49:05 +0530
Allowing window function calculation in workers leads to inconsistent
results because if the input row ordering is not fully deterministic, the
output of window functions might vary across workers. The fix is to treat
them as parallel-restricted.
In passing, improve the coding pattern in max_parallel_hazard_walker
so that it has a chain of mutually-exclusive if ... else if ... else if
... else if ... IsA tests.
Reported-by: Marko Tiikkaja
Bug: 15324
Author: Amit Kapila
Reviewed-by: Tom Lane
Backpatch-through: 9.6
Discussion: https://postgr.es/m/CAL9smLAnfPJCDUUG4ckX2iznj53V7VSMsYefzZieN93YxTNOcw@mail.gmail.com
M src/backend/optimizer/util/clauses.c
M src/test/regress/expected/select_parallel.out
M src/test/regress/sql/select_parallel.sql
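An illustrative query of the affected shape (table and column are
hypothetical):
    SELECT *
      FROM (SELECT a, row_number() OVER (ORDER BY a) AS rn
              FROM big_tab) s
     WHERE rn <= 10;
    -- if "a" contains duplicates, workers could number them differently, so
    -- subqueries computing window functions are now treated as
    -- parallel-restricted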
During the split, set checksum on an empty hash index page.
commit : 3b7a96a619f78f80b4a470a0dcde4c92b5b7141f
author : Amit Kapila <akapila@postgresql.org>
date : Tue, 4 Sep 2018 08:43:37 +0530
committer: Amit Kapila <akapila@postgresql.org>
date : Tue, 4 Sep 2018 08:43:37 +0530
On a split, we allocate a new splitpoint's worth of bucket pages, wherein
we initialize the last page with zeros, which is fine, but we forgot to set
the checksum for that last page.
We decided to back-patch this fix only down to 10 because we don't have an
easy way to test it in prior versions. Another reason is that the
hash-index code was changed heavily in 10, so it is not advisable to push
the fix without testing it in prior versions.
Author: Amit Kapila
Reviewed-by: Yugo Nagata
Backpatch-through: 10
Discussion: https://postgr.es/m/5d03686d-727c-dbf8-0064-bf8b97ffe850@2ndquadrant.com
M src/backend/access/hash/hashpage.c
Fix initial sync of slot parent directory when restoring status
commit : 504f059f5a198cff558e1c95a55aa63e03e2c3a8
author : Michael Paquier <michael@paquier.xyz>
date : Sun, 2 Sep 2018 12:40:45 -0700
committer: Michael Paquier <michael@paquier.xyz>
date : Sun, 2 Sep 2018 12:40:45 -0700
At the beginning of recovery, information from replication slots is
recovered from disk to memory. In order to ensure the durability of the
information, the status file as well as its parent directory are
synced. It happens that the sync on the parent directory was done
directly using the status file path, which is logically incorrect, and
the current code has been doing a sync on the same object twice in a
row.
Reported-by: Konstantin Knizhnik
Diagnosed-by: Konstantin Knizhnik
Author: Michael Paquier
Discussion: https://postgr.es/m/9eb1a6d5-b66f-2640-598d-c5ea46b8f68a@postgrespro.ru
Backpatch-through: 9.4-
M src/backend/replication/slot.c
Doc: fix oversights in "Client/Server Character Set Conversions" table.
commit : 9b0c58e6d43a75f6739e6b7d096f55dc32d27730
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Sat, 1 Sep 2018 16:02:47 -0400
committer: Tom Lane <tgl@sss.pgh.pa.us>
date : Sat, 1 Sep 2018 16:02:47 -0400
This table claimed that JOHAB could be used as a server encoding, which
was true originally but hasn't been true since 8.3. It also lacked
entries for EUC_JIS_2004 and SHIFT_JIS_2004.
JOHAB problem noted by Lars Kanis, the others by me.
Discussion: https://postgr.es/m/c0f514a1-b7a9-b9ea-1c02-c34aead56c06@greiz-reinsdorf.de
M doc/src/sgml/charset.sgml
Avoid using potentially-under-aligned page buffers.
commit : 10b9af3ebbedfcbe696bf93d4db861db8de0de32
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Sat, 1 Sep 2018 15:27:13 -0400
committer: Tom Lane <tgl@sss.pgh.pa.us>
date : Sat, 1 Sep 2018 15:27:13 -0400
There's a project policy against using plain "char buf[BLCKSZ]" local
or static variables as page buffers; preferred style is to palloc or
malloc each buffer to ensure it is MAXALIGN'd. However, that policy's
been ignored in an increasing number of places. We've apparently got
away with it so far, probably because (a) relatively few people use
platforms on which misalignment causes core dumps and/or (b) the
variables chance to be sufficiently aligned anyway. But this is not
something to rely on. Moreover, even if we don't get a core dump,
we might be paying a lot of cycles for misaligned accesses.
To fix, invent new union types PGAlignedBlock and PGAlignedXLogBlock
that the compiler must allocate with sufficient alignment, and use
those in place of plain char arrays.
I used these types even for variables where there's no risk of a
misaligned access, since ensuring proper alignment should make
kernel data transfers faster. I also changed some places where
we had been palloc'ing short-lived buffers, for coding style
uniformity and to save palloc/pfree overhead.
Since this seems to be a live portability hazard (despite the lack
of field reports), back-patch to all supported versions.
Patch by me; thanks to Michael Paquier for review.
Discussion: https://postgr.es/m/1535618100.1286.3.camel@credativ.de
M contrib/bloom/blinsert.c
M contrib/pg_prewarm/pg_prewarm.c
M src/backend/access/gin/ginentrypage.c
M src/backend/access/gin/ginfast.c
M src/backend/access/hash/hashpage.c
M src/backend/access/heap/heapam.c
M src/backend/access/heap/visibilitymap.c
M src/backend/access/transam/generic_xlog.c
M src/backend/access/transam/xlog.c
M src/backend/access/transam/xloginsert.c
M src/backend/access/transam/xlogreader.c
M src/backend/commands/tablecmds.c
M src/backend/replication/walsender.c
M src/backend/storage/file/buffile.c
M src/backend/storage/freespace/freespace.c
M src/backend/utils/sort/logtape.c
M src/bin/pg_basebackup/walmethods.c
M src/bin/pg_resetwal/pg_resetwal.c
M src/bin/pg_rewind/copy_fetch.c
M src/bin/pg_upgrade/file.c
M src/include/c.h
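The union approach described above is easy to show in standalone C; the
members below mirror the idea (a char payload plus fields that force
alignment), but the names are illustrative rather than copied from c.h.
    #include <stdint.h>
    #include <unistd.h>

    #define BLCKSZ 8192             /* PostgreSQL's default block size */

    /* The compiler must align the union for its most-demanding member,
     * so "data" is always suitably aligned for 8-byte accesses. */
    typedef union ExampleAlignedBlock
    {
        char    data[BLCKSZ];       /* the page image itself */
        double  force_align_d;
        int64_t force_align_i64;
    } ExampleAlignedBlock;

    static ssize_t
    read_one_page(int fd)
    {
        ExampleAlignedBlock buf;    /* instead of "char buf[BLCKSZ]" */

        /* buf.data can be handed to read(), checksum code, or cast to a
         * page header without risking a misaligned access. */
        return read(fd, buf.data, BLCKSZ);
    }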
Ignore server-side delays when enforcing wal_sender_timeout.
commit : 1664c8b300e345158b007eba2c6879e7bde74cc8
author : Noah Misch <noah@leadboat.com>
date : Fri, 31 Aug 2018 22:59:58 -0700
committer: Noah Misch <noah@leadboat.com>
date : Fri, 31 Aug 2018 22:59:58 -0700
Healthy clients of servers having poor I/O performance, such as
buildfarm members hamster and tern, saw unexpected timeouts. That
disagreed with documentation. This fix adds one gettimeofday() call
whenever ProcessRepliesIfAny() finds no client reply messages.
Back-patch to 9.4; the bug's symptom is rare and mild, and the code all
moved between 9.3 and 9.4.
Discussion: https://postgr.es/m/20180826034600.GA1105084@rfd.leadboat.com
M src/backend/replication/walsender.c
Ensure correct minimum consistent point on standbys
commit : 2c8cff5dd60b372654eda4dba72b1cea2e91f0f0
author : Michael Paquier <michael@paquier.xyz>
date : Fri, 31 Aug 2018 11:04:07 -0700
committer: Michael Paquier <michael@paquier.xyz>
date : Fri, 31 Aug 2018 11:04:07 -0700
Commit 8d68ee6 improved the startup process's calculation of the minimum
consistent point, ensuring that all available WAL gets replayed during
crash recovery, but it also introduced an incorrect calculation of the
minimum recovery point for non-startup processes. That can cause
incorrect page references on a standby when, for example, the background
writer has flushed a couple of pages to disk without updating the
control file to let a subsequent crash recovery replay up to where it
should.
The only case where this has been reported to be a problem is when a
standby needs to calculate the latest removed xid while replaying a
btree deletion record, so one would need connections on the standby that
happen just after recovery thinks it has reached a consistent point.
Using a background worker that is started after the consistent point is
reached and then connects to a database would be the easiest way to run
into the problem. Clients that attempt to connect periodically could
also hit it, but the odds of that are much lower.
The fix is fairly simple: give non-startup processes access to the
minimum recovery point written in the control file so they can use it as
their reference, while the startup process keeps initializing its own
reference to the minimum consistent point, so that the original problem
of incorrect page references after a post-promotion crash does not
reappear.
Reported-by: Alexander Kukushkin
Diagnosed-by: Alexander Kukushkin
Author: Michael Paquier
Reviewed-by: Kyotaro Horiguchi, Alexander Kukushkin
Discussion: https://postgr.es/m/153492341830.1368.3936905691758473953@wrigleys.postgresql.org
Backpatch-through: 9.3
M src/backend/access/transam/xlog.c
Enforce cube dimension limit in all cube construction functions
commit : 29e07cd224717fd11be7071166d9f08f8b44f1f2
author : Alexander Korotkov <akorotkov@postgresql.org>
date : Thu, 30 Aug 2018 14:18:53 +0300
committer: Alexander Korotkov <akorotkov@postgresql.org>
date : Thu, 30 Aug 2018 14:18:53 +0300
contrib/cube has a limit of 100 dimensions for the cube datatype.
However, it's not enforced everywhere, and one can actually construct a
cube with more than 100 dimensions and then have trouble with
dump/restore. This commit adds checks for the dimension limit in all
functions responsible for cube construction.
Backpatch to all supported versions.
Reported-by: Andrew Gierth
Discussion: https://postgr.es/m/87va7uybt4.fsf%40news-spur.riddles.org.uk
Author: Andrey Borodin with small additions by me
Review: Tom Lane
Backpatch-through: 9.3
M contrib/cube/cube.c
M contrib/cube/expected/cube.out
M contrib/cube/sql/cube.sql
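The guard described above can be sketched as a standalone check;
CUBE_MAX_DIM really is contrib/cube's limit, but the function and the
error reporting below are illustrative, not the committed code.
    #include <stdbool.h>
    #include <stdio.h>

    #define CUBE_MAX_DIM 100        /* contrib/cube's dimension limit */

    /* Apply the same check on every construction path, so a cube that
     * could not be re-parsed after dump/restore can never be created. */
    static bool
    cube_dim_ok(int dim)
    {
        if (dim > CUBE_MAX_DIM)
        {
            fprintf(stderr,
                    "a cube cannot have more than %d dimensions\n",
                    CUBE_MAX_DIM);
            return false;
        }
        return true;
    }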
Split contrib/cube platform-dependent checks into a separate test
commit : 4ffb7c7b3c0a6bb291aff23b0acd94012cde6a48
author : Alexander Korotkov <akorotkov@postgresql.org>
date : Thu, 30 Aug 2018 14:09:25 +0300
committer: Alexander Korotkov <akorotkov@postgresql.org>
date : Thu, 30 Aug 2018 14:09:25 +0300
We're currently maintaining two expected outputs for the cube regression
test. But that turns out to be unsuitable, because those outputs differ
only in a few checks involving scientific notation. So, split the checks
involving scientific notation into a separate test, making contrib/cube
easier to maintain. Backpatch to all supported versions in order to make
further backpatching easier.
Discussion: https://postgr.es/m/CAPpHfdvJgWjxHsJTtT%2Bo1tz3OR8EFHcLQjhp-d3%2BUcmJLh-fQA%40mail.gmail.com
Author: Alexander Korotkov
Backpatch-through: 9.3
M contrib/cube/Makefile
M contrib/cube/expected/cube.out
D contrib/cube/expected/cube_2.out
A contrib/cube/expected/cube_sci.out
A contrib/cube/expected/cube_sci_1.out
M contrib/cube/sql/cube.sql
A contrib/cube/sql/cube_sci.sql
Make checksum_impl.h safe to compile with -fstrict-aliasing.
commit : c2dfbd18ce25c0141b681c04aa79c2e4e7cb20a7
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Fri, 31 Aug 2018 12:26:20 -0400
committer: Tom Lane <tgl@sss.pgh.pa.us>
date : Fri, 31 Aug 2018 12:26:20 -0400
In general, Postgres requires -fno-strict-aliasing with compilers that
implement C99 strict aliasing rules. There's little hope of getting
rid of that overall. But it seems like it would be a good idea if
storage/checksum_impl.h in particular didn't depend on it, because
that header is explicitly intended to be included by external programs.
We don't have a lot of control over the compiler switches that an
external program might use, as shown by Michael Banck's report of
failure in a privately-modified version of pg_verify_checksums.
Hence, switch to using a union in place of willy-nilly pointer casting
inside this file. I think this makes the code a bit more readable
anyway.
checksum_impl.h hasn't changed since it was introduced in 9.3,
so back-patch all the way.
Discussion: https://postgr.es/m/1535618100.1286.3.camel@credativ.de
M src/include/storage/checksum_impl.h
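The aliasing-safe idiom is simple to show standalone: access the page
through a union member instead of casting a char pointer to uint32 *.
The type and the checksum below are simplified stand-ins for what
checksum_impl.h actually does.
    #include <stddef.h>
    #include <stdint.h>

    #define BLCKSZ 8192

    /* Both views share storage; reading page->words[] never casts a
     * char * to uint32_t *, so -fstrict-aliasing compilers stay happy. */
    typedef union
    {
        char     bytes[BLCKSZ];
        uint32_t words[BLCKSZ / sizeof(uint32_t)];
    } ChecksummablePage;

    static uint32_t
    toy_page_sum(const ChecksummablePage *page)
    {
        uint32_t sum = 0;
        size_t   i;

        /* A toy additive checksum, not Postgres's FNV-based algorithm. */
        for (i = 0; i < BLCKSZ / sizeof(uint32_t); i++)
            sum += page->words[i];
        return sum;
    }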
Mention change of width of values generated by SERIAL sequences
commit : 8af2a680681bcc7d85807bcfcaf336573432e046
author : Alvaro Herrera <alvherre@alvh.no-ip.org>
date : Thu, 30 Aug 2018 05:39:56 -0300
committer: Alvaro Herrera <alvherre@alvh.no-ip.org>
date : Thu, 30 Aug 2018 05:39:56 -0300
This changed during pg10 development, but had not been documented.
Co-authored-by: Jonathan S. Katz <jkatz@postgresql.org>
Discussion: https://postgr.es/m/20180828163408.vl44nwetdybwffyk@alvherre.pgsql
M doc/src/sgml/release-10.sgml
Stop bgworkers during fast shutdown with postmaster in startup phase
commit : 89f562ae10c53c767f3c6c23820022230c641692
author : Michael Paquier <michael@paquier.xyz>
date : Wed, 29 Aug 2018 17:11:19 -0700
committer: Michael Paquier <michael@paquier.xyz>
date : Wed, 29 Aug 2018 17:11:19 -0700
When the postmaster enters the PM_STARTUP phase, it immediately starts
background workers that use BgWorkerStart_PostmasterStart mode, which
caused problems for a fast shutdown: the postmaster forgot to send
SIGTERM to the already-started background workers. Smart and immediate
shutdowns handled this correctly; fast shutdown was the only mode that
missed it.
Author: Alexander Kukushkin
Reviewed-by: Michael Paquier
Discussion: https://postgr.es/m/CAFh8B=mvnD8+DZUfzpi50DoaDfZRDfd7S=gwj5vU9GYn8UvHkA@mail.gmail.com
Backpatch-through: 9.5
M src/backend/postmaster/postmaster.c
Make pg_restore's identify_locking_dependencies() more bulletproof.
commit : e2841d399a89636b8411d9985c468f202cfd0171
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Tue, 28 Aug 2018 19:46:59 -0400
committer: Tom Lane <tgl@sss.pgh.pa.us>
date : Tue, 28 Aug 2018 19:46:59 -0400
This function had a blacklist of dump object types that it believed
needed exclusive lock ... but we hadn't maintained that, so that it
was missing ROW SECURITY, POLICY, and INDEX ATTACH items, all of
which need (or should be treated as needing) exclusive lock.
Since the same oversight seems likely in future, let's reverse the
sense of the test so that the code has a whitelist of safe object
types; better to wrongly assume a command can't be run in parallel
than the opposite. Currently the only POST_DATA object type that's
safe is CREATE INDEX ... and that list hasn't changed in a long time.
Back-patch to 9.5 where RLS came in.
Discussion: https://postgr.es/m/11450.1535483506@sss.pgh.pa.us
M src/bin/pg_dump/pg_backup_archiver.c
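The whitelist shape described above, as a hypothetical standalone helper
(the real logic lives in identify_locking_dependencies(); "INDEX" is the
one POST_DATA type the message calls parallel-safe):
    #include <stdbool.h>
    #include <string.h>

    /* Assume an item needs exclusive lock on its table unless its type is
     * on a short list known to be safe; a forgotten new object type then
     * errs on the side of caution instead of breaking parallel restore. */
    static bool
    item_needs_exclusive_lock(const char *item_type)
    {
        if (strcmp(item_type, "INDEX") == 0)
            return false;   /* CREATE INDEX can run alongside other jobs */
        return true;        /* everything else: assume the worst */
    }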
postgres_fdw: don't push ORDER BY with no vars (bug #15352)
commit : 64eed263ac474433d17f05b23df3018a38e71fca
author : Andrew Gierth <rhodiumtoad@postgresql.org>
date : Tue, 28 Aug 2018 14:43:51 +0100
committer: Andrew Gierth <rhodiumtoad@postgresql.org>
date : Tue, 28 Aug 2018 14:43:51 +0100
Commit aa09cd242 changed a condition in find_em_expr_for_rel from
being a bms_equal comparison of relids to bms_is_subset, in order to
support order by clauses on foreign joins. But this also allows
through the degenerate case of expressions with no Vars at all (and
hence empty relids), including integer constants which will be parsed
unexpectedly on the remote (viz. "ERROR: ORDER BY position 0 is not in
select list" as in the bug report).
Repair by adding an additional !bms_is_empty test.
Backpatch through to 9.6 where the aforementioned change was made.
Per bug #15352 from Maksym Boguk; analysis and patch by me.
Discussion: https://postgr.es/m/153518420278.1478.14875560810251994661@wrigleys.postgresql.org
M contrib/postgres_fdw/postgres_fdw.c
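In backend terms the repair is a one-line strengthening of the relids
test; the fragment below paraphrases it using the usual EquivalenceMember
field names rather than quoting postgres_fdw.c verbatim.
    /* Only consider pathkey expressions that actually reference the
     * foreign rel.  An empty em_relids means a Var-free expression (for
     * example a bare constant), which the remote server would interpret
     * as an ORDER BY column position rather than a value. */
    if (bms_is_subset(em->em_relids, rel->relids) &&
        !bms_is_empty(em->em_relids))
    {
        /* ... expression is usable for a pushed-down ORDER BY ... */
    }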
Avoid quadratic slowdown in regexp match/split functions.
commit : f6f61d937bfddbe2a5f6a37bc26a0587117d7837
author : Andrew Gierth <rhodiumtoad@postgresql.org>
date : Tue, 28 Aug 2018 09:52:25 +0100
committer: Andrew Gierth <rhodiumtoad@postgresql.org>
date : Tue, 28 Aug 2018 09:52:25 +0100
regexp_matches, regexp_split_to_table and regexp_split_to_array all
work by compiling a list of match positions as character offsets (NOT
byte positions) in the source string.
Formerly, they then used text_substr to extract the matched text; but
in a multi-byte encoding, that counts the characters in the string,
and the characters needed to reach the starting byte position, on
every call. Accordingly, the performance degraded as the product of
the input string length and the number of match positions, such that
splitting a string of a few hundred kbytes could take many minutes.
Repair by keeping the wide-character copy of the input string
available (only in the case where encoding_max_length is not 1) after
performing the match operation, and extracting substrings from that
instead. This reduces the complexity to being linear in the number of
result bytes, discounting the actual regexp match itself (which is not
affected by this patch).
In passing, remove cleanup using retail pfree() which was obsoleted by
commit ff428cded (Feb 2008) which made cleanup of SRF multi-call
contexts automatic. Also increase (to ~134 million) the maximum number
of matches and provide an error message when it is reached.
Backpatch all the way because this has been wrong forever.
Analysis and patch by me; review by Kaiting Chen.
Discussion: https://postgr.es/m/87pnyn55qh.fsf@news-spur.riddles.org.uk
see also https://postgr.es/m/87lg996g4r.fsf@news-spur.riddles.org.uk
M src/backend/utils/adt/regexp.c
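The performance idea generalizes beyond regexp.c: convert the multi-byte
string to wide characters once, then slice by character offset in constant
time per match instead of re-counting characters from the start of the
string for every match. A standalone sketch, using the C library's
wchar_t machinery as a stand-in for pg_wchar:
    #include <stdlib.h>
    #include <wchar.h>

    /* Convert once; extracting characters afterwards is just pointer
     * arithmetic, so total work is linear in the output size. */
    static wchar_t *
    to_wide_copy(const char *mb, size_t *nchars)
    {
        size_t   n = mbstowcs(NULL, mb, 0);      /* count characters */
        wchar_t *w;

        if (n == (size_t) -1)
            return NULL;
        w = malloc((n + 1) * sizeof(wchar_t));
        if (w != NULL && mbstowcs(w, mb, n + 1) == n)
        {
            *nchars = n;
            return w;
        }
        free(w);
        return NULL;
    }

    /* Extract characters [start, start+len) from the precomputed copy. */
    static wchar_t *
    wide_substring(const wchar_t *w, size_t start, size_t len)
    {
        wchar_t *out = malloc((len + 1) * sizeof(wchar_t));

        if (out != NULL)
        {
            wmemcpy(out, w + start, len);
            out[len] = L'\0';
        }
        return out;
    }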
Fix missing dependency for pg_dump's ENABLE ROW LEVEL SECURITY items.
commit : 0f3dd76f527deb81ee5ba60048df04c598c93960
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Mon, 27 Aug 2018 15:11:12 -0400
committer: Tom Lane <tgl@sss.pgh.pa.us>
date : Mon, 27 Aug 2018 15:11:12 -0400
The archive should show a dependency on the item's table, but it failed
to include one. This could cause failures in parallel restore due to
emitting ALTER TABLE ... ENABLE ROW LEVEL SECURITY before restoring
the table's data. In practice the odds of a problem seem low, since
you would typically need to have set FORCE ROW LEVEL SECURITY as well,
and you'd also need a very high --jobs count to have any chance of this
happening. That probably explains the lack of field reports.
Still, it's a bug, so back-patch to 9.5 where RLS was introduced.
Discussion: https://postgr.es/m/19784.1535390902@sss.pgh.pa.us
M src/bin/pg_dump/pg_dump.c
Make syslogger more robust against failures in opening CSV log files.
commit : 6fbbe335317797160daeaa0bd5efd998f5baac3b
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Sun, 26 Aug 2018 14:21:55 -0400
committer: Tom Lane <tgl@sss.pgh.pa.us>
date : Sun, 26 Aug 2018 14:21:55 -0400
The previous coding figured it'd be good enough to postpone opening
the first CSV log file until we got a message we needed to write there.
This is unsafe, though, because if the open fails we end up in infinite
recursion trying to report the failure. Instead make the CSV log file
management code look as nearly as possible like the longstanding logic
for the stderr log file. In particular, open it immediately at postmaster
startup (if enabled), or when we get a SIGHUP in which we find that
log_destination has been changed to enable CSV logging.
It seems OK to fail if a postmaster-start-time open attempt fails, as
we've long done for the stderr log file. But we can't die if we fail
to open a CSV log file during SIGHUP, so we're still left with a problem.
In that case, write any output meant for the CSV log file to the stderr
log file. (This will also cover race-condition cases in which backends
send CSV log data before or after we have the CSV log file open.)
This patch also fixes an ancient oversight that, if CSV logging was
turned off during a SIGHUP, we never actually closed the last CSV
log file.
In passing, remember to reset whereToSendOutput = DestNone during syslogger
start, since (unlike all other postmaster children) it's forked before the
postmaster has done that. This made for a platform-dependent difference
in error reporting behavior between the syslogger and other children:
except on Windows, it'd report problems to the original postmaster stderr
as well as the normal error log file(s). It's barely possible that that
was intentional at some point; but it doesn't seem likely to be desirable
in production, and the platform dependency definitely isn't desirable.
Per report from Alexander Kukushkin. It's been like this for a long time,
so back-patch to all supported branches.
Discussion: https://postgr.es/m/CAFh8B==iLUD_gqC-dAENS0V+kVrCeGiKujtKqSQ7++S-caaChw@mail.gmail.com
M src/backend/postmaster/syslogger.c
doc: correct syntax of pgtrgm examples in older releases
commit : 993b5a78adff29fc58e7449cab98bb865b77c663
author : Bruce Momjian <bruce@momjian.us>
date : Sat, 25 Aug 2018 15:03:32 -0400
committer: Bruce Momjian <bruce@momjian.us>
date : Sat, 25 Aug 2018 15:03:32 -0400
Reported-by: Liudmila Mantrova
Discussion: https://postgr.es/m/ded40ecb-557e-8c50-7d58-69f4b5226664@postgrespro.ru
Backpatch-through: 9.6 and 10 only
M doc/src/sgml/pgtrgm.sgml
doc: "Latest checkpoint location" will not match in pg_upgrade
commit : 474d8102466e7d3a4aaa9ec4e68d38371247faac
author : Bruce Momjian <bruce@momjian.us>
date : Sat, 25 Aug 2018 13:35:14 -0400
committer: Bruce Momjian <bruce@momjian.us>
date : Sat, 25 Aug 2018 13:35:14 -0400
Mention that "Latest checkpoint location" will not match in pg_upgrade
if the standby server is still running during the upgrade, which is
possible. "Match" text first appeared in PG 9.5.
Reported-by: Paul Bonaud
Discussion: https://postgr.es/m/c7268794-edb4-1772-3bfd-04c54585c24e@trainline.com
Backpatch-through: 9.5
M doc/src/sgml/ref/pgupgrade.sgml
doc: add doc link for 'applicable_roles'
commit : 943f75d543e10fed4215d21dd985a5e215fae56c
author : Bruce Momjian <bruce@momjian.us>
date : Sat, 25 Aug 2018 13:01:24 -0400
committer: Bruce Momjian <bruce@momjian.us>
date : Sat, 25 Aug 2018 13:01:24 -0400
Reported-by: Ashutosh Sharma
Discussion: https://postgr.es/m/CAE9k0PnhnL6MNDLuvkk8USzOa_DpzDzFQPAM_uaGuXbh9HMKYw@mail.gmail.com
Author: Ashutosh Sharma
Backpatch-through: 9.3
M doc/src/sgml/information_schema.sgml
docs: Clarify pg_ctl initdb option text to match options proto.
commit : dfd840f105506db988566436c1559866bd73ab55
author : Bruce Momjian <bruce@momjian.us>
date : Sat, 25 Aug 2018 12:01:53 -0400
committer: Bruce Momjian <bruce@momjian.us>
date : Sat, 25 Aug 2018 12:01:53 -0400
The options string appeared in PG 10.
Reported-by: pgsql-kr@postgresql.kr
Discussion: https://postgr.es/m/153500377658.1378.6587007319641704057@wrigleys.postgresql.org
Backpatch-through: 10
M doc/src/sgml/ref/pg_ctl-ref.sgml
docs: clarify plpython SD and GD dictionary behavior
commit : 9de7ae32cfea40e16b9e53523aa4f94f17fa2881
author : Bruce Momjian <bruce@momjian.us>
date : Sat, 25 Aug 2018 11:52:29 -0400
committer: Bruce Momjian <bruce@momjian.us>
date : Sat, 25 Aug 2018 11:52:29 -0400
Reported-by: Adam Bielański
Discussion: https://postgr.es/m/153484305538.1370.7605856225879294548@wrigleys.postgresql.org
Backpatch-through: 9.3
M doc/src/sgml/plpython.sgml
Fix lexing of standard multi-character operators in edge cases.
commit : d64fad6669926c95f1a2e49ebec58ed4412be1a0
author : Andrew Gierth <rhodiumtoad@postgresql.org>
date : Thu, 23 Aug 2018 18:29:18 +0100
committer: Andrew Gierth <rhodiumtoad@postgresql.org>
date : Thu, 23 Aug 2018 18:29:18 +0100
Commits c6b3c939b (which fixed the precedence of >=, <=, <> operators)
and 865f14a2d (which added support for the standard => notation for
named arguments) created a class of lexer tokens which look like
multi-character operators but which have their own token IDs distinct
from Op. However, longest-match rules meant that following any of
these tokens with another operator character, as in (1<>-1), would
cause them to be incorrectly returned as Op.
The error here isn't immediately obvious, because the parser would
usually still find the correct operator via the Op token, but there
were more subtle problems:
1. If immediately followed by a comment or +-, >= <= <> would be given
the old precedence of Op rather than the correct new precedence;
2. If followed by a comment, != would be returned as Op rather than as
NOT_EQUAL, causing it not to be found at all;
3. If followed by a comment or +-, the => token for named arguments
would be lexed as Op, causing the argument to be mis-parsed as a
simple expression, usually causing an error.
Fix by explicitly checking for the operators in the {operator} code
block in addition to all the existing special cases there.
Backpatch to 9.5 where the problem was introduced.
Analysis and patch by me; review by Tom Lane.
Discussion: https://postgr.es/m/87va851ppl.fsf@news-spur.riddles.org.uk
M src/backend/parser/scan.l
M src/fe_utils/psqlscan.l
M src/interfaces/ecpg/preproc/pgc.l
M src/test/regress/expected/create_operator.out
M src/test/regress/expected/polymorphism.out
M src/test/regress/sql/create_operator.sql
M src/test/regress/sql/polymorphism.sql
Reduce an unnecessary O(N^3) loop in lexer.
commit : 2dbfbd630bc952b5e6f7e484f5538abad499533f
author : Andrew Gierth <rhodiumtoad@postgresql.org>
date : Thu, 23 Aug 2018 20:00:50 +0100
committer: Andrew Gierth <rhodiumtoad@postgresql.org>
date : Thu, 23 Aug 2018 20:00:50 +0100
The lexer's handling of operators contained an O(N^3) hazard when
dealing with long strings of + or - characters; it seems hard to
prevent this case from being O(N^2), but the additional N multiplier
was not needed.
Backpatch all the way since this has been there since 7.x, and it
presents at least a mild hazard in that trying to do Bind, PREPARE or
EXPLAIN on a hostile query could take excessive time (without
honouring cancels or timeouts) even if the query was never executed.
M src/backend/parser/scan.l
M src/fe_utils/psqlscan.l
M src/interfaces/ecpg/preproc/pgc.l
In libpq, don't look up all the hostnames at once.
commit : 6953daf08e6b1ef333758f6cda54de228212f12e
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Thu, 23 Aug 2018 16:39:20 -0400
committer: Tom Lane <tgl@sss.pgh.pa.us>
date : Thu, 23 Aug 2018 16:39:20 -0400
Historically, we looked up the target hostname in connectDBStart, so that
PQconnectPoll did not need to do DNS name resolution. The patches that
added multiple-target-host support to libpq preserved this division of
labor; but it's really nonsensical now, because it means that if any one
of the target hosts fails to resolve in DNS, the connection fails. That
negates the no-single-point-of-failure goal of the feature. Additionally,
DNS lookups aren't exactly cheap, but the code did them all even if the
first connection attempt succeeds.
Hence, rearrange so that PQconnectPoll does the lookups, and only looks
up a hostname when it's time to try that host. This does mean that
PQconnectPoll could block on a DNS lookup --- but if you wanted to avoid
that, you should be using hostaddr, as the documentation has always
specified. It seems fairly unlikely that any applications would really
care whether the lookup occurs inside PQconnectStart or PQconnectPoll.
In addition to calling out that fact explicitly, do some other minor
wordsmithing in the docs around the multiple-target-host feature.
Since this seems like a bug in the multiple-target-host feature,
backpatch to v10 where that was introduced. In the back branches,
avoid moving any existing fields of struct pg_conn, just in case
any third-party code is looking into that struct.
Tom Lane, reviewed by Fabien Coelho
Discussion: https://postgr.es/m/4913.1533827102@sss.pgh.pa.us
M doc/src/sgml/libpq.sgml
M src/interfaces/libpq/fe-connect.c
M src/interfaces/libpq/libpq-int.h
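From the client side this mainly matters for connection strings that list
several hosts. A minimal libpq program using such a string (the hostnames
are placeholders): after this fix, a DNS failure for db1 no longer
prevents libpq from trying db2.
    #include <stdio.h>
    #include <libpq-fe.h>

    int
    main(void)
    {
        /* Hostnames are illustrative; resolution now happens per attempt. */
        PGconn *conn = PQconnectdb(
            "host=db1.example.com,db2.example.com port=5432,5432 "
            "dbname=postgres connect_timeout=5");

        if (PQstatus(conn) != CONNECTION_OK)
        {
            fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
            PQfinish(conn);
            return 1;
        }
        printf("connected to %s\n", PQhost(conn));
        PQfinish(conn);
        return 0;
    }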
Return type of txid_status is text, not txid_status
commit : 3f722ae26a92d1c46079dffc72b63d8376958f8c
author : Alvaro Herrera <alvherre@alvh.no-ip.org>
date : Thu, 23 Aug 2018 11:40:30 -0300
committer: Alvaro Herrera <alvherre@alvh.no-ip.org>
date : Thu, 23 Aug 2018 11:40:30 -0300
Thinko in commit 857ee8e39.
Discovered-by: Gianni Ciolli
M doc/src/sgml/func.sgml
Do not dump identity sequences with excluded parent table
commit : cb282eab1a684bb409dcb6cadbf7dd868227713d
author : Michael Paquier <michael@paquier.xyz>
date : Wed, 22 Aug 2018 14:23:03 +0900
committer: Michael Paquier <michael@paquier.xyz>
date : Wed, 22 Aug 2018 14:23:03 +0900
This commit prevents a crash in pg_dump caused by excluding a table that
has identity columns: the table would be correctly excluded, but not its
identity sequence. To fix that, identity sequences are now excluded
whenever their parent table is. Knowing about such sequences is
meaningless without their parent table anyway.
Reported-by: Andy Abelisto
Author: David Rowley
Reviewed-by: Peter Eisentraut, Michael Paquier
Discussion: https://postgr.es/m/153479393218.1316.8472285660264976457@wrigleys.postgresql.org
Backpatch-through: 10
M src/bin/pg_dump/pg_dump.c
Fix typo
commit : 6350dc7c89d24abcf38268a8158b8a2152ecd99a
author : Alvaro Herrera <alvherre@alvh.no-ip.org>
date : Tue, 21 Aug 2018 17:16:10 -0300
committer: Alvaro Herrera <alvherre@alvh.no-ip.org>
date : Tue, 21 Aug 2018 17:16:10 -0300
M src/backend/utils/mmgr/dsa.c
fix typo
commit : 358fa997a31109757c9866709cd4ee43ddb292f0
author : Alvaro Herrera <alvherre@alvh.no-ip.org>
date : Tue, 21 Aug 2018 17:03:35 -0300
committer: Alvaro Herrera <alvherre@alvh.no-ip.org>
date : Tue, 21 Aug 2018 17:03:35 -0300
M src/backend/access/hash/README
Fix set of NLS translation issues
commit : ecf56dc5e5b61e76de4cf2ed2a25b05718d45e9c
author : Michael Paquier <michael@paquier.xyz>
date : Tue, 21 Aug 2018 15:17:38 +0900
committer: Michael Paquier <michael@paquier.xyz>
date : Tue, 21 Aug 2018 15:17:38 +0900
While going over the code, a couple of issues related to string
translation showed up:
- Some routines for auto-updatable views return an error string that was
sometimes not marked for translation. A comment regarding string
translation is added to each routine to help with future features.
- GSSAPI authentication missed two translations.
- vacuumdb handled some non-translated strings.
Reported-by: Kyotaro Horiguchi
Author: Kyotaro Horiguchi
Reviewed-by: Michael Paquier, Tom Lane
Discussion: https://postgr.es/m/20180810.152131.31921918.horiguchi.kyotaro@lab.ntt.co.jp
Backpatch-through: 9.3
M src/backend/commands/tablecmds.c
M src/backend/commands/view.c
M src/backend/libpq/auth.c
M src/backend/rewrite/rewriteHandler.c
M src/bin/scripts/vacuumdb.c
Ensure schema qualification in pg_restore DISABLE/ENABLE TRIGGER commands.
commit : 05aeeb5e2803ebe537516774f766792c109eaac5
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Fri, 17 Aug 2018 17:12:21 -0400
committer: Tom Lane <tgl@sss.pgh.pa.us>
date : Fri, 17 Aug 2018 17:12:21 -0400
Previously, this code blindly followed the common coding pattern of
passing PQserverVersion(AH->connection) as the server-version parameter
of fmtQualifiedId. That works as long as we have a connection; but in
pg_restore with text output, we don't. Instead we got a zero from
PQserverVersion, which fmtQualifiedId interpreted as "server is too old to
have schemas", and so the name went unqualified. That still accidentally
managed to work in many cases, which is probably why this ancient bug went
undetected for so long. It only became obvious in the wake of the changes
to force dump/restore to execute with restricted search_path.
In HEAD/v11, let's deal with this by ripping out fmtQualifiedId's server-
version behavioral dependency, and just making it schema-qualify all the
time. We no longer support pg_dump from servers old enough to need the
ability to omit schema name, let alone restoring to them. (Also, the few
callers outside pg_dump already didn't work with pre-schema servers.)
In older branches, that's not an acceptable solution, so instead just
tweak the DISABLE/ENABLE TRIGGER logic to ensure it will schema-qualify
its output regardless of server version.
Per bug #15338 from Oleg somebody. Back-patch to all supported branches.
Discussion: https://postgr.es/m/153452458706.1316.5328079417086507743@wrigleys.postgresql.org
M src/bin/pg_dump/pg_backup_archiver.c
Set scan direction appropriately for SubPlans (bug #15336)
commit : d31ebbff5b2bbe9dfe3fb130448581a4da388031
author : Andrew Gierth <rhodiumtoad@postgresql.org>
date : Fri, 17 Aug 2018 15:04:26 +0100
committer: Andrew Gierth <rhodiumtoad@postgresql.org>
date : Fri, 17 Aug 2018 15:04:26 +0100
When executing a SubPlan in an expression, the EState's direction
field was left alone, resulting in an attempt to execute the subplan
backwards if it was encountered during a backwards scan of a cursor.
Also, though much less likely, it was possible to reach the execution
of an InitPlan while in backwards-scan state.
Repair by saving/restoring estate->es_direction and forcing forward
scan mode in the relevant places.
Backpatch all the way, since this has been broken since 8.3 (prior to
commit c7ff7663e, SubPlans had their own EStates rather than sharing
the parent plan's, so there was no confusion over scan direction).
Per bug #15336 reported by Vladimir Baranoff; analysis and patch by
me, review by Tom Lane.
Discussion: https://postgr.es/m/153449812167.1304.1741624125628126322@wrigleys.postgresql.org
M src/backend/executor/nodeSubplan.c
M src/test/regress/expected/subselect.out
M src/test/regress/sql/subselect.sql
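The save/restore pattern the fix relies on is short enough to sketch as
an executor-style fragment (not the literal nodeSubplan.c code):
    /* SubPlans must always run forward, even while the surrounding query
     * is being fetched backwards through a cursor. */
    ScanDirection   saved_direction = estate->es_direction;

    estate->es_direction = ForwardScanDirection;
    /* ... execute the subplan and collect its result here ... */
    estate->es_direction = saved_direction;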
pg_upgrade: issue helpful error message for use on standbys
commit : bd30f51c0ec97207ac9baf68d03fd4e851f9777f
author : Bruce Momjian <bruce@momjian.us>
date : Fri, 17 Aug 2018 10:25:48 -0400
committer: Bruce Momjian <bruce@momjian.us>
date : Fri, 17 Aug 2018 10:25:48 -0400
Commit 777e6ddf1723306bd2bf8fe6f804863f459b0323 checked for a shut-down
message from a standby and allowed pg_upgrade to continue. This patch
reports a helpful error message in these cases, suggesting the use of
rsync as documented.
Diagnosed-by: Martín Marqués
Discussion: https://postgr.es/m/CAPdiE1xYCow-reLjrhJ9DqrMu-ppNq0ChUUEvVdxhdjGRD5_eA@mail.gmail.com
Backpatch-through: 9.3
M src/bin/pg_upgrade/controldata.c
Mention ownership requirements for REFRESH MATERIALIZED VIEW in docs
commit : 0dfaf8f763ae66eca841366552d2e69a73cd9af2
author : Michael Paquier <michael@paquier.xyz>
date : Fri, 17 Aug 2018 11:29:15 +0900
committer: Michael Paquier <michael@paquier.xyz>
date : Fri, 17 Aug 2018 11:29:15 +0900
Author: Dian Fay
Discussion: https://postgr.es/m/745abbd2-a1a0-ead8-2cb2-768c16747d97@gmail.com
Backpatch-through: 9.3
M doc/src/sgml/ref/refresh_materialized_view.sgml
Proof-reading for documentation.
commit : 07b895aef764fe528af57f04d454336e67d6c6aa
author : Thomas Munro <tmunro@postgresql.org>
date : Fri, 17 Aug 2018 11:38:44 +1200
committer: Thomas Munro <tmunro@postgresql.org>
date : Fri, 17 Aug 2018 11:38:44 +1200
Somebody accidentally omitted a word. Back-patch to 9.6.
Reported-by: Justin Pryzby
Discussion: https://postgr.es/m/20180816195431.GA23707%40telsasoft.com
M doc/src/sgml/parallel.sgml
Close the file descriptor in ApplyLogicalMappingFile
commit : e00f4b68dc878dcee46833a742844346daa1e3c8
author : Tomas Vondra <tomas.vondra@postgresql.org>
date : Thu, 16 Aug 2018 16:49:10 +0200
committer: Tomas Vondra <tomas.vondra@postgresql.org>
date : Thu, 16 Aug 2018 16:49:10 +0200
The function was forgetting to close the file descriptor, resulting
in failures like this:
ERROR: 53000: exceeded maxAllocatedDescs (492) while trying to open
file "pg_logical/mappings/map-4000-4eb-1_60DE1E08-5376b5-537c6b"
LOCATION: OpenTransientFile, fd.c:2161
Simply close the file at the end, and backpatch to 9.4 (where logical
decoding was introduced). While at it, fix a nearby typo.
Discussion: https://www.postgresql.org/message-id/flat/738a590a-2ce5-9394-2bef-7b1caad89b37%402ndquadrant.com
M src/backend/replication/logical/reorderbuffer.c
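The underlying rule is the usual one of releasing a limited resource on
every path. A backend-style fragment: OpenTransientFile and
CloseTransientFile are real fd.c helpers (their exact signatures vary a
little across branches), while the surrounding code is illustrative.
    fd = OpenTransientFile(path, O_RDONLY | PG_BINARY, 0);
    if (fd < 0)
        ereport(ERROR,
                (errcode_for_file_access(),
                 errmsg("could not open file \"%s\": %m", path)));

    /* ... read and apply the mapping file's contents ... */

    /* The missing call: without it, every mapping file replayed leaks one
     * descriptor until the backend exceeds maxAllocatedDescs. */
    CloseTransientFile(fd);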
Make snprintf.c follow the C99 standard for snprintf's result value.
commit : 1811900b933c892a8ee102b8b62028de4c1379ef
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Wed, 15 Aug 2018 17:25:23 -0400
committer: Tom Lane <tgl@sss.pgh.pa.us>
date : Wed, 15 Aug 2018 17:25:23 -0400
C99 says that the result should be the number of bytes that would have
been emitted given a large enough buffer, not the number we actually
were able to put in the buffer. It's time to make our substitute
implementation comply with that. Not doing so results in inefficiency
in buffer-enlargement cases, and also poses a portability hazard for
third-party code that might expect C99-compliant snprintf behavior
within Postgres.
In passing, remove useless tests for str == NULL; neither C99 nor
predecessor standards ever allowed that except when count == 0,
so I see no reason to expend cycles on making that a non-crash case
for this implementation. Also, don't waste a byte in pg_vfprintf's
local I/O buffer; this might have performance benefits by allowing
aligned writes during flushbuffer calls.
Back-patch of commit 805889d7d. There was some concern about this
possibly breaking code that assumes pre-C99 behavior, but there is
much more risk (and reality, in our own code) of code that assumes
C99 behavior and hence fails to detect buffer overrun without this.
Discussion: https://postgr.es/m/17245.1534289329@sss.pgh.pa.us
M src/port/snprintf.c
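For callers, the practical benefit of C99-compliant behavior is that the
return value can be used to size a buffer exactly. A standalone sketch:
    #include <stdio.h>
    #include <stdlib.h>

    /* Returns a freshly allocated "dir/name" string, or NULL on failure.
     * This works only because snprintf reports the full would-be length
     * even when given a zero-sized buffer. */
    static char *
    join_path(const char *dir, const char *name)
    {
        int   needed = snprintf(NULL, 0, "%s/%s", dir, name);
        char *buf;

        if (needed < 0)
            return NULL;
        buf = malloc((size_t) needed + 1);
        if (buf != NULL)
            snprintf(buf, (size_t) needed + 1, "%s/%s", dir, name);
        return buf;
    }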
Update FSM on WAL replay of page all-visible/frozen
commit : 255e2fbe8fa68277c7e2f5339ea2ca05b899ffc4
author : Alvaro Herrera <alvherre@alvh.no-ip.org>
date : Wed, 15 Aug 2018 18:09:29 -0300
committer: Alvaro Herrera <alvherre@alvh.no-ip.org>
date : Wed, 15 Aug 2018 18:09:29 -0300
We aren't very strict about keeping FSM up to date on WAL replay,
because per-page freespace values aren't critical in replicas (can't
write to heap in a replica; and if the replica is promoted, the values
would be updated by VACUUM anyway). However, VACUUM since 9.6 can skip
processing pages marked all-visible or all-frozen, and if such pages are
recorded in FSM with wrong values, those values are blindly propagated
to FSM's upper layers by VACUUM's FreeSpaceMapVacuum. (This rationale
assumes that crashes are not very frequent, because those would cause
outdated FSM to occur in the primary.)
Even when the FSM is outdated on a standby, things are normally not too
bad, because most per-page FSM values will be zero (other than those
propagated with the base backup that created the standby); the per-page
value is only maintained by WAL replay of heap ins/upd/del once the
remaining free space drops below 0.2*BLCKSZ. However, if
wal_log_hints=on causes complete FSM pages to be propagated to a standby
via full-page images, many too-optimistic per-page values can end up
being registered on the standby.
Incorrect per-page values aren't critical in most cases, since an
inserter that is given a page that doesn't actually contain the claimed
free space will update FSM with the correct value, and retry until it
finds a usable page. However, if there are many such updates to do, an
inserter can spend a long time doing them before a usable page is found;
in a heavily trafficked insert-only table with many concurrent inserters
this has been observed to cause stalls of several seconds, with visible
application malfunction.
To fix this problem, it seems sufficient to have heap_xlog_visible
(replay of setting all-visible and all-frozen VM bits for a heap page)
update the FSM value for the page being processed. This fixes the
per-page counters together with making the page skippable to vacuum, so
when vacuum does FreeSpaceMapVacuum, the values propagated to FSM upper
layers are the correct ones, avoiding the problem.
While at it, apply the same fix to heap_xlog_clean (replay of tuple
removal by HOT pruning and vacuum). This makes any space freed by the
cleaning available earlier than the next vacuum in the promoted replica.
Backpatch to 9.6, where this problem was diagnosed on an insert-only
table with all-frozen pages, which were introduced as a concept in that
release. Theoretically it could apply with all-visible pages to older
branches, but there's been no report of that and it doesn't backpatch
cleanly anyway.
Author: Álvaro Herrera <alvherre@alvh.no-ip.org>
Discussion: https://postgr.es/m/20180802172857.5skoexsilnjvgruk@alvherre.pgsql
M src/backend/access/heap/heapam.c
Clean up assorted misuses of snprintf()'s result value.
commit : 6101bc2f459c3c5f4e8386083edb79443075b54e
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Wed, 15 Aug 2018 16:29:32 -0400
committer: Tom Lane <tgl@sss.pgh.pa.us>
date : Wed, 15 Aug 2018 16:29:32 -0400
Fix a small number of places that were testing the result of snprintf()
but doing so incorrectly. The right test for buffer overrun, per C99,
is "result >= bufsize" not "result > bufsize". Some places were also
checking for failure with "result == -1", but the standard only says
that a negative value is delivered on failure.
(Note that this only makes these places correct if snprintf() delivers
C99-compliant results. But at least now these places are consistent
with all the other places where we assume that.)
Also, make psql_start_test() and isolation_start_test() check for
buffer overrun while constructing their shell commands. There seems
like a higher risk of overrun, with more severe consequences, here
than there is for the individual file paths that are made elsewhere
in the same functions, so this seemed like a worthwhile change.
Also fix guc.c's do_serialize() to initialize errno = 0 before
calling vsnprintf. In principle, this should be unnecessary because
vsnprintf should have set errno if it returns a failure indication ...
but the other two places this coding pattern is cribbed from don't
assume that, so let's be consistent.
These errors are all very old, so back-patch as appropriate. I think
that only the shell command overrun cases are even theoretically
reachable in practice, but there's not much point in erroneous error
checks.
Discussion: https://postgr.es/m/17245.1534289329@sss.pgh.pa.us
M src/backend/postmaster/pgstat.c
M src/backend/utils/misc/guc.c
M src/common/ip.c
M src/interfaces/ecpg/pgtypeslib/common.c
M src/port/getaddrinfo.c
M src/test/isolation/isolation_main.c
M src/test/regress/pg_regress.c
M src/test/regress/pg_regress_main.c
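The correct caller-side test is worth spelling out once, since both
mistakes being fixed are easy to make. A standalone sketch; MAXPGPATH
here is just a stand-in for any fixed buffer size, and dst is assumed to
hold MAXPGPATH bytes.
    #include <stdbool.h>
    #include <stdio.h>

    #define MAXPGPATH 1024

    static bool
    build_path(char *dst, const char *dir, const char *file)
    {
        int n = snprintf(dst, MAXPGPATH, "%s/%s", dir, file);

        /* "n >= MAXPGPATH" detects truncation, because n is the length the
         * full result would have had; "n < 0" covers other failures, since
         * the standard only promises "a negative value", not exactly -1. */
        return n >= 0 && n < MAXPGPATH;
    }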
pg_upgrade: fix shutdown check for standby servers
commit : efc4b489786d4e035eebab4b4c3faf3d6963986e
author : Bruce Momjian <bruce@momjian.us>
date : Tue, 14 Aug 2018 17:19:02 -0400
committer: Bruce Momjian <bruce@momjian.us>
date : Tue, 14 Aug 2018 17:19:02 -0400
Commit 244142d32afd02e7408a2ef1f249b00393983822 only tested for the
pg_controldata output of primary servers, but standby servers have
different "Database cluster state" output, so check for that too.
Diagnosed-by: Michael Paquier
Discussion: https://postgr.es/m/20180810164240.GM13638@paquier.xyz
Backpatch-through: 9.3
M src/bin/pg_upgrade/controldata.c
Remove obsolete comment
commit : 6c206de559d40641dc0357fdae2003bfc7b015ad
author : Peter Eisentraut <peter_e@gmx.net>
date : Mon, 13 Aug 2018 21:07:31 +0200
committer: Peter Eisentraut <peter_e@gmx.net>
date : Mon, 13 Aug 2018 21:07:31 +0200
The sequence name is no longer stored in the sequence relation, since
1753b1b027035029c2a2a1649065762fafbf63f3.
M src/backend/commands/tablecmds.c
Fix libpq's implementation of per-host connection timeouts.
commit : e0db288abfa8cfd2e471017441e401e8f92aeda7
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Mon, 13 Aug 2018 13:07:53 -0400
committer: Tom Lane <tgl@sss.pgh.pa.us>
date : Mon, 13 Aug 2018 13:07:53 -0400
Commit 5f374fe7a attempted to turn the connect_timeout from an overall
maximum time limit into a per-host limit, but it didn't do a great job of
that. The timer would only get restarted if we actually detected timeout
within connectDBComplete(), not if we changed our attention to a new host
for some other reason. In that case the old timeout continued to run,
possibly causing a premature timeout failure for the new host.
Fix that, and also tweak the logic so that if we do get a timeout,
we advance to the next available IP address, not to the next host name.
There doesn't seem to be a good reason to assume that all the IP
addresses supplied for a given host name will necessarily fail the
same way as the current one. Moreover, this conforms better to the
admittedly-vague documentation statement that the timeout is "per
connection attempt". I changed that to "per host name or IP address"
to be clearer. (Note that reconnections to the same server, such as for
switching protocol version or SSL status, don't get their own separate
timeout; that was true before and remains so.)
Also clarify documentation about the interpretation of connect_timeout
values less than 2.
This seems like a bug, so back-patch to v10 where this logic came in.
Tom Lane, reviewed by Fabien Coelho
Discussion: https://postgr.es/m/5735.1533828184@sss.pgh.pa.us
M doc/src/sgml/libpq.sgml
M src/interfaces/libpq/fe-connect.c
Adjust comment atop ExecShutdownNode.
commit : 32b16d497a93074167b03038c45c6b299edc392b
author : Amit Kapila <akapila@postgresql.org>
date : Mon, 13 Aug 2018 10:22:32 +0530
committer: Amit Kapila <akapila@postgresql.org>
date : Mon, 13 Aug 2018 10:22:32 +0530
After commits a315b967cc and b805b63ac2, part of the comment atop
ExecShutdownNode is redundant. Adjust it.
Author: Amit Kapila
Backpatch-through: 10 where both the mentioned commits are present.
Discussion: https://postgr.es/m/86137f17-1dfb-42f9-7421-82fd786b04a1@anayrat.info
M src/backend/executor/execProcnode.c
Prohibit shutting down resources if there is a possibility of backing up.
commit : ba10eaef509bf47de743589a232e83f071acf8dc
author : Amit Kapila <akapila@postgresql.org>
date : Mon, 13 Aug 2018 08:43:33 +0530
committer: Amit Kapila <akapila@postgresql.org>
date : Mon, 13 Aug 2018 08:43:33 +0530
Currently, we release the asynchronous resources as soon as it is evident
that no more rows will be needed, e.g. when a Limit is filled. This can be
problematic especially for custom and foreign scans, where we can scan
backward. Fix that by not shutting down the resources in such cases.
Reported-by: Robert Haas
Analysed-by: Robert Haas and Amit Kapila
Author: Amit Kapila
Reviewed-by: Robert Haas
Backpatch-through: 9.6 where this code was introduced
Discussion: https://postgr.es/m/86137f17-1dfb-42f9-7421-82fd786b04a1@anayrat.info
M src/backend/executor/execMain.c
M src/backend/executor/nodeLimit.c
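The guard being added can be paraphrased in a short executor-style
fragment (EXEC_FLAG_BACKWARD and ExecShutdownNode are real executor
names; the call site shown is simplified):
    /* Release parallel/async resources early only if the plan can never
     * be asked to back up; otherwise keep them for a backward fetch. */
    if (!(node->ps.state->es_top_eflags & EXEC_FLAG_BACKWARD))
        (void) ExecShutdownNode(outerPlanState(node));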
Avoid query-lifetime memory leaks in XMLTABLE (bug #15321)
commit : 556140424c4b46a7076cdf13f306bd92db797b61
author : Andrew Gierth <rhodiumtoad@postgresql.org>
date : Mon, 13 Aug 2018 01:45:35 +0100
committer: Andrew Gierth <rhodiumtoad@postgresql.org>
date : Mon, 13 Aug 2018 01:45:35 +0100
Multiple calls to XMLTABLE in a query (e.g. laterally applying it to a
table with an xml column, an important use-case) were leaking large
amounts of memory into the per-query context, blowing up memory usage.
Repair by reorganizing memory context usage in nodeTableFuncscan; use
the usual per-tuple context for row-by-row evaluations instead of
perValueCxt, and use the explicitly created context -- renamed from
perValueCxt to perTableCxt -- for arguments and state for each
individual table-generation operation.
Backpatch to PG10 where this code was introduced.
Original report by IRC user begriffs; analysis and patch by me.
Reviewed by Tom Lane and Pavel Stehule.
Discussion: https://postgr.es/m/153394403528.10284.7530399040974170549@wrigleys.postgresql.org
M src/backend/executor/nodeTableFuncscan.c
M src/include/nodes/execnodes.h
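The underlying discipline is the executor's usual one: anything computed
per row lives in a context that is reset wholesale between rows, while
longer-lived state gets its own explicitly created context. A
backend-style fragment using common executor naming, not the literal
nodeTableFuncscan.c code:
    MemoryContext oldcxt;

    /* Per-row work: allocate freely here; the executor resets this
     * context between tuples, so nothing piles up for the whole query. */
    oldcxt = MemoryContextSwitchTo(econtext->ecxt_per_tuple_memory);
    /* ... evaluate one row's column values ... */
    MemoryContextSwitchTo(oldcxt);

    /* Per-table-function state instead lives in its own context, created
     * once at scan start and deleted when the scan node shuts down. */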
Add missing documentation for argument of amcostestimate()
commit : 26853a86ae4bc4f214005e05f55677e5b228f4f8
author : Alexander Korotkov <akorotkov@postgresql.org>
date : Fri, 10 Aug 2018 14:14:36 +0300
committer: Alexander Korotkov <akorotkov@postgresql.org>
date : Fri, 10 Aug 2018 14:14:36 +0300
Commit 5262f7a4fc44 introduced parallel index scan. In order to estimate
the number of parallel workers, it added an extra argument to the
amcostestimate() index access method API function. However, this extra
argument was missing from the documentation. This commit fixes that.
Discussion: https://postgr.es/m/4128fdb4-8b63-2e05-38f6-3125f8c27263%40lab.ntt.co.jp
Author: Tatsuro Yamada, Alexander Korotkov
Backpatch-through: 10
M doc/src/sgml/indexam.sgml
docs: Only first instance of a PREPARE parameter sets data type
commit : da1a5da1e4216af22cfa461292f0031cb254cab9
author : Bruce Momjian <bruce@momjian.us>
date : Thu, 9 Aug 2018 10:13:15 -0400
committer: Bruce Momjian <bruce@momjian.us>
date : Thu, 9 Aug 2018 10:13:15 -0400
If the first reference to $1 is "($1 = col) or ($1 is null)", the data
type can be determined, but not if it is "($1 is null) or ($1 = col)".
This change documents that.
Reported-by: Morgan Owens
Discussion: https://postgr.es/m/153233728858.1404.15268121695358514937@wrigleys.postgresql.org
Backpatch-through: 9.3
M doc/src/sgml/ref/prepare.sgml
Doc: Correct description of amcheck example query.
commit : f85537a88dda4aa015f9a1b4a43189395d36b328
author : Peter Geoghegan <pg@bowt.ie>
date : Wed, 8 Aug 2018 12:56:28 -0700
committer: Peter Geoghegan <pg@bowt.ie>
date : Wed, 8 Aug 2018 12:56:28 -0700
The amcheck documentation incorrectly claimed that its example query
verifies every catalog index in the database. In fact, the query only
verifies the 10 largest indexes (as determined by pg_class.relpages).
Adjust the description accordingly.
Backpatch: 10-, where contrib/amcheck was introduced.
M doc/src/sgml/amcheck.sgml
Don't run atexit callbacks in quickdie signal handlers.
commit : 2332020d6d1912bff4c03a8c2b37ad1c8b184281
author : Heikki Linnakangas <heikki.linnakangas@iki.fi>
date : Wed, 8 Aug 2018 19:08:10 +0300
committer: Heikki Linnakangas <heikki.linnakangas@iki.fi>
date : Wed, 8 Aug 2018 19:08:10 +0300
exit() is not async-signal safe. Even if the libc implementation is, 3rd
party libraries might have installed unsafe atexit() callbacks. After
receiving SIGQUIT, we really just want to exit as quickly as possible, so
we don't really want to run the atexit() callbacks anyway.
The original report by Jimmy Yih was a self-deadlock in startup_die().
However, this patch doesn't address that scenario; the signal handling
while waiting for the startup packet is more complicated. But at least this
alleviates similar problems in the SIGQUIT handlers, like that reported
by Asim R P later in the same thread.
Backpatch to 9.3 (all supported versions).
Discussion: https://www.postgresql.org/message-id/CAOMx_OAuRUHiAuCg2YgicZLzPVv5d9_H4KrL_OFsFP%3DVPekigA%40mail.gmail.com
M src/backend/postmaster/bgworker.c
M src/backend/postmaster/bgwriter.c
M src/backend/postmaster/checkpointer.c
M src/backend/postmaster/startup.c
M src/backend/postmaster/walwriter.c
M src/backend/replication/walreceiver.c
M src/backend/tcop/postgres.c
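The safety rule involved is general C/POSIX practice rather than anything
PostgreSQL-specific: a handler that wants to terminate immediately should
call _exit(), which is async-signal-safe and skips atexit() callbacks. A
standalone sketch, not the actual quickdie() code:
    #include <signal.h>
    #include <string.h>
    #include <unistd.h>

    static void
    emergency_quit_handler(int signo)
    {
        (void) signo;
        /* exit() may run third-party atexit() callbacks and flush stdio,
         * neither of which is async-signal safe; _exit() skips both. */
        _exit(2);
    }

    static void
    install_emergency_handler(void)
    {
        struct sigaction sa;

        memset(&sa, 0, sizeof(sa));
        sa.sa_handler = emergency_quit_handler;
        sigaction(SIGQUIT, &sa, NULL);
    }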
Don't record FDW user mappings as members of extensions.
commit : 9446d7157740a09613175209c9e4eef02d3d92db
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Tue, 7 Aug 2018 16:32:50 -0400
committer: Tom Lane <tgl@sss.pgh.pa.us>
date : Tue, 7 Aug 2018 16:32:50 -0400
CreateUserMapping has a recordDependencyOnCurrentExtension call that's
been there since extensions were introduced (very possibly my fault).
However, there's no support anywhere else for user mappings as members
of extensions, nor are they listed as a possible member object type in
the documentation. Nor does it really seem like a good idea for user
mappings to belong to extensions when roles don't. Hence, remove the
bogus call.
(As we saw in bug #15310, the lack of any pg_dump support for this case
ensures that any such membership record would silently disappear during
pg_upgrade. So there's probably no need for us to do anything else
about cleaning up after this mistake.)
Discussion: https://postgr.es/m/27952.1533667213@sss.pgh.pa.us
M src/backend/commands/foreigncmds.c
Fix incorrect initialization of BackendActivityBuffer.
commit : 8dd07458149a951a2d40bd4d0061ca33cbf61860
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Tue, 7 Aug 2018 16:00:44 -0400
committer: Tom Lane <tgl@sss.pgh.pa.us>
date : Tue, 7 Aug 2018 16:00:44 -0400
Since commit c8e8b5a6e, this has been zeroed out using the wrong length.
In practice the length would always be too small, leading to not zeroing
the whole buffer rather than clobbering additional memory; and that's
pretty harmless, both because shmem would likely start out as zeroes
and because we'd reinitialize any given entry before use. Still,
it's bogus, so fix it.
Reported by Petru-Florin Mihancea (bug #15312)
Discussion: https://postgr.es/m/153363913073.1303.6518849192351268091@wrigleys.postgresql.org
M src/backend/postmaster/pgstat.c
Fix pg_upgrade to handle event triggers in extensions correctly.
commit : c9dacdb1c9428c2a021ee4d0a444147f8fcf07ef
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Tue, 7 Aug 2018 15:43:49 -0400
committer: Tom Lane <tgl@sss.pgh.pa.us>
date : Tue, 7 Aug 2018 15:43:49 -0400
pg_dump with --binary-upgrade must emit ALTER EXTENSION ADD commands
for all objects that are members of extensions. It forgot to do so for
event triggers, as per bug #15310 from Nick Barnes. Back-patch to 9.3
where event triggers were introduced.
Haribabu Kommi
Discussion: https://postgr.es/m/153360083872.1395.4593932457718151600@wrigleys.postgresql.org
M src/bin/pg_dump/pg_dump.c
Ensure pg_dump_sort.c sorts null vs non-null namespace consistently.
commit : dc391dacf170593dceef07fc7ea287cb02d09d5d
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Tue, 7 Aug 2018 13:13:42 -0400
committer: Tom Lane <tgl@sss.pgh.pa.us>
date : Tue, 7 Aug 2018 13:13:42 -0400
The original coding here (which is, I believe, my fault) supposed that
it didn't need to concern itself with the possibility that one object
of a given type-priority has a namespace while another doesn't. But
that's not reliably true anymore, if it ever was; and if it does happen
then it's possible that DOTypeNameCompare returns self-inconsistent
comparison results. That leads to unspecified behavior in qsort()
and a resultant weird output order from pg_dump.
This should end up being only a cosmetic problem, because any ordering
constraints that actually matter should be enforced by the later
dependency-based sort. Still, it's a bug, so back-patch.
Report and fix by Jacob Champion, though I editorialized on his
patch to the extent of making NULL sort after non-NULL, for consistency
with our usual sorting definitions.
Discussion: https://postgr.es/m/CABAq_6Hw+V-Kj7PNfD5tgOaWT_-qaYkc+SRmJkPLeUjYXLdxwQ@mail.gmail.com
M src/bin/pg_dump/pg_dump_sort.c
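The property being restored is comparator self-consistency: for any a and
b, the sign of compare(a, b) must be the opposite of compare(b, a), even
when only one side has a namespace. A standalone sketch in which NULL
sorts after non-NULL, matching the choice described above:
    #include <string.h>

    /* Consistent three-way comparison of possibly-NULL namespace names. */
    static int
    compare_namespaces(const char *a, const char *b)
    {
        if (a == NULL)
            return (b == NULL) ? 0 : 1;   /* NULL vs non-NULL: a sorts later */
        if (b == NULL)
            return -1;                    /* non-NULL vs NULL: a sorts earlier */
        return strcmp(a, b);
    }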