Stamp 9.4.18.
commit : 364998df87a8fc498302d4b02681909f81635914
author : Tom Lane <[email protected]>
date : Mon, 7 May 2018 16:57:35 -0400
committer: Tom Lane <[email protected]>
date : Mon, 7 May 2018 16:57:35 -0400
M configure
M configure.in
M doc/bug.template
M src/include/pg_config.h.win32
M src/interfaces/libpq/libpq.rc.in
M src/port/win32ver.rc
Last-minute updates for release notes.
commit : d289dcfc393c5329fb67a1df27957a9923365ce6
author : Tom Lane <[email protected]>
date : Mon, 7 May 2018 13:13:27 -0400
committer: Tom Lane <[email protected]>
date : Mon, 7 May 2018 13:13:27 -0400
The set of functions that need parallel-safety adjustments isn't the
same in 9.6 as in 10, so I shouldn't have blindly back-patched that list.
Adjust as needed. Also, provide examples of the commands to issue.
M doc/src/sgml/release-9.3.sgml
M doc/src/sgml/release-9.4.sgml
Translation updates
commit : dc441d5c2da745fe79b2b1ea27d06e7aeae8c941
author : Peter Eisentraut <[email protected]>
date : Mon, 7 May 2018 11:47:28 -0400
committer: Peter Eisentraut <[email protected]>
date : Mon, 7 May 2018 11:47:28 -0400
Source-Git-URL: git://git.postgresql.org/git/pgtranslation/messages.git
Source-Git-Hash: d73a8c239ac494c183e266261c657580526d4cba
M src/backend/po/de.po
M src/backend/po/fr.po
M src/backend/po/ru.po
M src/bin/pg_basebackup/po/de.po
M src/bin/pg_basebackup/po/ru.po
M src/bin/pg_dump/po/de.po
M src/bin/pg_dump/po/ru.po
M src/bin/scripts/po/de.po
M src/bin/scripts/po/ru.po
M src/interfaces/ecpg/preproc/po/ru.po
M src/interfaces/libpq/po/ru.po
M src/pl/plpgsql/src/po/ru.po
M src/pl/plpython/po/ru.po
Release notes for 10.4, 9.6.9, 9.5.13, 9.4.18, 9.3.23.
commit : d1e913f5c97e9e2c2a744e54b0cf8b6e38b3e8ac
author : Tom Lane <[email protected]>
date : Sun, 6 May 2018 15:30:44 -0400
committer: Tom Lane <[email protected]>
date : Sun, 6 May 2018 15:30:44 -0400
M doc/src/sgml/release-9.3.sgml
M doc/src/sgml/release-9.4.sgml
Clear severity 5 perlcritic warnings from vcregress.pl
commit : 1eb24720c655f06fb0ed671472cbdeca9fa8bf3e
author : Andrew Dunstan <[email protected]>
date : Sun, 6 May 2018 07:37:05 -0400
committer: Andrew Dunstan <[email protected]>
date : Sun, 6 May 2018 07:37:05 -0400
My recent update for python3 support used some idioms that are
unapproved. This fixes them. Backpatch to all live branches like the
original.
M src/tools/msvc/vcregress.pl
Tweak tests to support Python 3.7
commit : af9e0d5cdf385d3924e8a8569df6b8314848e242
author : Peter Eisentraut <[email protected]>
date : Tue, 13 Feb 2018 16:13:20 -0500
committer: Peter Eisentraut <[email protected]>
date : Tue, 13 Feb 2018 16:13:20 -0500
Python 3.7 removes the trailing comma in the repr() of
BaseException (see <https://bugs.python.org/issue30399>), leading to
test output differences. Work around that by composing the equivalent
test output in a more manual way.
M src/pl/plpython/expected/plpython_subtransaction.out
M src/pl/plpython/expected/plpython_subtransaction_0.out
M src/pl/plpython/expected/plpython_subtransaction_5.out
M src/pl/plpython/sql/plpython_subtransaction.sql
Remove extra newlines after PQerrorMessage()
commit : 280cf0fe789d8ea2fdc3b8e374bfd82f674264de
author : Peter Eisentraut <[email protected]>
date : Sat, 5 May 2018 10:51:38 -0400
committer: Peter Eisentraut <[email protected]>
date : Sat, 5 May 2018 10:51:38 -0400
M src/bin/pg_basebackup/streamutil.c
M src/bin/pg_dump/pg_dumpall.c
Fix scenario where streaming standby gets stuck at a continuation record.
commit : c06380e97692963dd23c0a26708a431f5783c3d4
author : Heikki Linnakangas <[email protected]>
date : Sat, 5 May 2018 01:34:53 +0300
committer: Heikki Linnakangas <[email protected]>
date : Sat, 5 May 2018 01:34:53 +0300
If a continuation record is split so that its first half has already been
removed from the master, and is only present in pg_wal, and there is a
recycled WAL segment in the standby server that looks like it would
contain the second half, recovery would get stuck. The code in
XLogPageRead() incorrectly started streaming at the beginning of the
WAL record, even if we had already read the first page.
Backpatch to 9.4. In principle, older versions have the same problem, but
without replication slots, there was no straightforward mechanism to
prevent the master from recycling old WAL that was still needed by a standby.
Without such a mechanism, I think it's reasonable to assume that there's
enough slack in how many old segments are kept around to not run into this,
or you have a WAL archive.
Reported by Jonathon Nelson. Analysis and patch by Kyotaro HORIGUCHI, with
some extra comments by me.
Discussion: https://www.postgresql.org/message-id/CACJqAM3xVz0JY1XFDKPP%2BJoJAjoGx%3DGNuOAshEDWCext7BFvCQ%40mail.gmail.com
M src/backend/access/transam/xlog.c
M src/backend/access/transam/xlogreader.c
M src/include/access/xlogreader.h
Provide for testing on python3 modules when under MSVC
commit : 134db37d21358fa5b0030c179bfa97af414d0f69
author : Andrew Dunstan <[email protected]>
date : Fri, 4 May 2018 15:22:48 -0400
committer: Andrew Dunstan <[email protected]>
date : Fri, 4 May 2018 15:22:48 -0400
This should have been done some years ago as promised in commit
c4dcdd0c2. However, better late than never.
Along the way do a little housekeeping, including using a simpler test
for the python version being tested, and removing a redundant subroutine
parameter. These changes only apply back to release 9.5.
Backpatch to all live releases.
M src/tools/msvc/vcregress.pl
Allow MSYS as well as MINGW in Msys uname
commit : ade3b273caeefc9ec51355607679f913606eab1b
author : Andrew Dunstan <[email protected]>
date : Fri, 4 May 2018 14:54:04 -0400
committer: Andrew Dunstan <[email protected]>
date : Fri, 4 May 2018 14:54:04 -0400
Msys2's uname -s outputs a string beginning with MSYS rather than the MINGW
output by Msys. Allow either in pg_upgrade's test.sh.
Backpatch to all live branches.
M contrib/pg_upgrade/test.sh
Sync our copy of the timezone library with IANA release tzcode2018e.
commit : 2d123b31048d4669027bdd1741a935dfc9d8a416
author : Tom Lane <[email protected]>
date : Fri, 4 May 2018 12:26:25 -0400
committer: Tom Lane <[email protected]>
date : Fri, 4 May 2018 12:26:25 -0400
The non-cosmetic changes involve teaching the "zic" tzdata compiler about
negative DST. While I'm not currently intending that we start using
negative-DST data right away, it seems possible that somebody would try
to use our copy of zic with bleeding-edge IANA data. So we'd better be
out in front of this change code-wise, even though it doesn't matter for
the data file we're shipping.
Discussion: https://postgr.es/m/[email protected]
M src/timezone/README
M src/timezone/localtime.c
M src/timezone/strftime.c
M src/timezone/zic.c
Add HOLD_INTERRUPTS section into FinishPreparedTransaction.
commit : 6bd659f19cac6bcf7235ac4f7dd4e36c8d2cc755
author : Teodor Sigaev <[email protected]>
date : Thu, 3 May 2018 20:10:11 +0300
committer: Teodor Sigaev <[email protected]>
date : Thu, 3 May 2018 20:10:11 +0300
If an interrupt arrives in the middle of FinishPreparedTransaction
and any callback decides to call CHECK_FOR_INTERRUPTS (e.g.
RemoveTwoPhaseFile can write a warning with ereport, which checks for
interrupts), then it's possible to leave the current GXact undeleted.
Backpatch to all supported branches.
Stas Kelvich
Discussion: https://www.postgresql.org/message-id/[email protected]
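A minimal sketch of the pattern this fix applies (illustrative only; the real change wraps the cleanup steps inside FinishPreparedTransaction in src/backend/access/transam/twophase.c):

    #include "postgres.h"
    #include "miscadmin.h"          /* HOLD_INTERRUPTS / RESUME_INTERRUPTS */

    static void
    finish_prepared_sketch(void)
    {
        HOLD_INTERRUPTS();          /* a CHECK_FOR_INTERRUPTS() inside a callback
                                     * can no longer throw here and leave the
                                     * current GXact undeleted */

        /* ... remove the two-phase state file, release the GXact entry ... */

        RESUME_INTERRUPTS();
    }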
M src/backend/access/transam/twophase.c
Revert back-branch changes in power()'s behavior for NaN inputs.
commit : 70211459a507b080d15b6a82d8a466323edc356d
author : Tom Lane <[email protected]>
date : Wed, 2 May 2018 17:32:40 -0400
committer: Tom Lane <[email protected]>
date : Wed, 2 May 2018 17:32:40 -0400
Per discussion, the value of fixing these bugs in the back branches
doesn't outweigh the downsides of changing corner-case behavior in
a minor release. Hence, revert commits 217d8f3a1 and 4d864de48 in
the v10 branch and the corresponding commits in 9.3-9.6.
Discussion: https://postgr.es/m/[email protected]
M src/backend/utils/adt/float.c
M src/test/regress/expected/float8-exp-three-digits-win32.out
M src/test/regress/expected/float8-small-is-zero.out
M src/test/regress/expected/float8-small-is-zero_1.out
M src/test/regress/expected/float8.out
M src/test/regress/sql/float8.sql
Fix bogus list-iteration code in pg_regress.c, affecting ecpg tests only.
commit : 8109a3c14486ff204b27a9c6e6df6a79c0735219
author : Tom Lane <[email protected]>
date : Sun, 29 Apr 2018 21:56:28 -0400
committer: Tom Lane <[email protected]>
date : Sun, 29 Apr 2018 21:56:28 -0400
While looking at a recent buildfarm failure in the ecpg tests, I wondered
why the pg_regress output claimed the stderr part of the test failed, when
the regression diffs were clearly for the stdout part. Looking into it,
the reason is that pg_regress.c's logic for iterating over three parallel
lists is wrong, and has been wrong since it was written: it advances the
"tag" pointer at a different place in the loop than the other two pointers.
Fix that.
M src/test/regress/pg_regress.c
Avoid wrong results for power() with NaN input on more platforms.
commit : 59c2df3ae8947c4c06217d1b330cfa529f480e17
author : Tom Lane <[email protected]>
date : Sun, 29 Apr 2018 18:15:16 -0400
committer: Tom Lane <[email protected]>
date : Sun, 29 Apr 2018 18:15:16 -0400
Buildfarm results show that the modern POSIX rule that 1 ^ NaN = 1 was not
honored on *BSD until relatively recently, and really old platforms don't
believe that NaN ^ 0 = 1 either. (This is unsurprising, perhaps, since
SUSv2 doesn't require either behavior.) In hopes of getting to platform
independent behavior, let's deal with all the NaN-input cases explicitly
in dpow().
Note that numeric_power() doesn't know either of these special cases.
But since that behavior is platform-independent, I think it should be
addressed separately, and probably not back-patched.
Discussion: https://postgr.es/m/[email protected]
M src/backend/utils/adt/float.c
M src/test/regress/expected/float8-exp-three-digits-win32.out
M src/test/regress/expected/float8-small-is-zero.out
M src/test/regress/expected/float8-small-is-zero_1.out
M src/test/regress/expected/float8.out
M src/test/regress/sql/float8.sql
Update time zone data files to tzdata release 2018d.
commit : 37c02b2b0a146e623ff6d350f89310a5cfad25e0
author : Tom Lane <[email protected]>
date : Sun, 29 Apr 2018 15:50:08 -0400
committer: Tom Lane <[email protected]>
date : Sun, 29 Apr 2018 15:50:08 -0400
DST law changes in Palestine and Antarctica (Casey Station). Historical
corrections for Portugal and its colonies, as well as Enderbury, Jamaica,
Turks & Caicos Islands, and Uruguay.
M src/timezone/data/tzdata.zi
Avoid wrong results for power() with NaN input on some platforms.
commit : 44ccd11cbbe60940689e35ac83a65ea4e219a983
author : Tom Lane <[email protected]>
date : Sun, 29 Apr 2018 15:21:45 -0400
committer: Tom Lane <[email protected]>
date : Sun, 29 Apr 2018 15:21:45 -0400
Per spec, the result of power() should be NaN if either input is NaN.
It appears that on some versions of Windows, the libc function does
return NaN, but it also sets errno = EDOM, confusing our code that
attempts to work around shortcomings of other platforms. Hence, add
guard tests to avoid substituting a wrong result for the right one.
It's been like this for a long time (and the odd behavior only appears
in older MSVC releases, too) so back-patch to all supported branches.
Dang Minh Huong, reviewed by David Rowley
Discussion: https://postgr.es/m/[email protected]
M src/backend/utils/adt/float.c
M src/test/regress/expected/float8-exp-three-digits-win32.out
M src/test/regress/expected/float8-small-is-zero.out
M src/test/regress/expected/float8-small-is-zero_1.out
M src/test/regress/expected/float8.out
M src/test/regress/sql/float8.sql
docs: remove "III" version text from pgAdmin link
commit : 367e57fbd0180d2f0bc2ab9670154a2af6da3727
author : Bruce Momjian <[email protected]>
date : Thu, 26 Apr 2018 11:10:43 -0400
committer: Bruce Momjian <[email protected]>
date : Thu, 26 Apr 2018 11:10:43 -0400
Reported-by: [email protected]
Discussion: https://postgr.es/m/[email protected]
Backpatch-through: 9.3
M doc/src/sgml/external-projects.sgml
Correct pg_recvlogical server version test.
commit : bb532859f45a16ae622eeebfd6e409c4481c28fa
author : Noah Misch <[email protected]>
date : Wed, 25 Apr 2018 18:50:29 -0700
committer: Noah Misch <[email protected]>
date : Wed, 25 Apr 2018 18:50:29 -0700
The predecessor test boiled down to "PQserverVersion(NULL) >= 100000",
which is always false. No release includes that, so it could not have
reintroduced CVE-2018-1058. Back-patch to 9.4, like the addition of the
predecessor in commit 8d2814f274def85f39fbe997d454b01628cb5667.
Discussion: https://postgr.es/m/[email protected]
M src/bin/pg_basebackup/streamutil.c
Change more places to be less trusting of RestrictInfo.is_pushed_down.
commit : 58fec95268dc925b8ae77dd50589c004b43202ba
author : Tom Lane <[email protected]>
date : Fri, 20 Apr 2018 15:19:17 -0400
committer: Tom Lane <[email protected]>
date : Fri, 20 Apr 2018 15:19:17 -0400
On further reflection, commit e5d83995e didn't go far enough: pretty much
everywhere in the planner that examines a clause's is_pushed_down flag
ought to be changed to use the more complicated behavior where we also
check the clause's required_relids. Otherwise we could make incorrect
decisions about whether, say, a clause is safe to use as a hash clause.
Some (many?) of these places are safe as-is, either because they are
never reached while considering a parameterized path, or because there
are additional checks that would reject a pushed-down clause anyway.
However, it seems smarter to just code them all the same way rather
than rely on easily-broken reasoning of that sort.
In support of that, invent a new macro RINFO_IS_PUSHED_DOWN that should
be used in place of direct tests on the is_pushed_down flag.
Like the previous patch, back-patch to all supported branches.
Discussion: https://postgr.es/m/[email protected]
M src/backend/optimizer/path/costsize.c
M src/backend/optimizer/path/joinpath.c
M src/backend/optimizer/path/joinrels.c
M src/backend/optimizer/plan/analyzejoins.c
M src/backend/optimizer/plan/initsplan.c
M src/backend/optimizer/util/restrictinfo.c
M src/include/nodes/relation.h
Fix incorrect handling of join clauses pushed into parameterized paths.
commit : a347d5210ef0330b911e31c6249e4318d933b27b
author : Tom Lane <[email protected]>
date : Thu, 19 Apr 2018 15:49:12 -0400
committer: Tom Lane <[email protected]>
date : Thu, 19 Apr 2018 15:49:12 -0400
In some cases a clause attached to an outer join can be pushed down into
the outer join's RHS even though the clause is not degenerate --- this
can happen if we choose to make a parameterized path for the RHS. If
the clause ends up attached to a lower outer join, we'd misclassify it
as being a "join filter" not a plain "filter" condition at that node,
leading to wrong query results.
To fix, teach extract_actual_join_clauses to examine each join clause's
required_relids, not just its is_pushed_down flag. (The latter now
seems vestigial, or at least in need of rethinking, but we won't do
anything so invasive as redefining it in a bug-fix patch.)
This has been wrong since we introduced parameterized paths in 9.2,
though it's evidently hard to hit given the lack of previous reports.
The test case used here involves a lateral function call, and I think
that a lateral reference may be required to get the planner to select
a broken plan; though I wouldn't swear to that. In any case, even if
LATERAL is needed to trigger the bug, it still affects all supported
branches, so back-patch to all.
Per report from Andreas Karlsson. Thanks to Andrew Gierth for
preliminary investigation.
Discussion: https://postgr.es/m/[email protected]
M src/backend/optimizer/plan/createplan.c
M src/backend/optimizer/util/restrictinfo.c
M src/include/optimizer/restrictinfo.h
M src/test/regress/expected/join.out
M src/test/regress/sql/join.sql
Enlarge find_other_exec's meager fgets buffer
commit : e668507d36f8d299e5f04b4796cbbc1704f2c763
author : Alvaro Herrera <[email protected]>
date : Thu, 19 Apr 2018 10:45:15 -0300
committer: Alvaro Herrera <[email protected]>
date : Thu, 19 Apr 2018 10:45:15 -0300
The buffer was 100 bytes long, which is barely sufficient when the
version string gets longer (such as by configure --with-extra-version).
Set it to MAXPGPATH.
Author: Nikhil Sontakke
Discussion: https://postgr.es/m/CAMGcDxfLfpYU_Jru++L6ARPCOyxr0W+2O3Q54TDi5XdYeU36ow@mail.gmail.com
M src/common/exec.c
Better fix for deadlock hazard in CREATE INDEX CONCURRENTLY.
commit : 7490ce725edd5b46f14fa8a449a97c9487c1d143
author : Tom Lane <[email protected]>
date : Wed, 18 Apr 2018 12:07:38 -0400
committer: Tom Lane <[email protected]>
date : Wed, 18 Apr 2018 12:07:38 -0400
Commit 54eff5311 did not account for the possibility that we'd have
a transaction snapshot due to default_transaction_isolation being
set high enough to require one. The transaction snapshot is enough
to hold back our advertised xmin and thus risk deadlock anyway.
The only way to get rid of that snap is to start a new transaction,
so let's do that instead. Also throw in an assert checking that we
really have gotten to a state where no xmin is being advertised.
Back-patch to 9.4, like the previous commit.
Discussion: https://postgr.es/m/CAMkU=1ztk3TpQdcUNbxq93pc80FrXUjpDWLGMeVBDx71GHNwZQ@mail.gmail.com
M src/backend/commands/indexcmds.c
Revert "Add temporary debug logging, in 9.4 branch only."
commit : 92b503c48d9d98657c59fe630f44b80efaab87de
author : Tom Lane <[email protected]>
date : Wed, 18 Apr 2018 11:57:37 -0400
committer: Tom Lane <[email protected]>
date : Wed, 18 Apr 2018 11:57:37 -0400
This reverts commit e55380f3b60108d402f64131fe655b0e5ccc1f31.
It's served its purpose.
M src/backend/commands/indexcmds.c
M src/backend/utils/time/snapmgr.c
M src/include/utils/snapmgr.h
Revert "Add more temporary debug logging, in 9.4 branch only."
commit : 248c268d5b9fce1c2bb3d8fc28634f4977b613b5
author : Tom Lane <[email protected]>
date : Wed, 18 Apr 2018 11:56:56 -0400
committer: Tom Lane <[email protected]>
date : Wed, 18 Apr 2018 11:56:56 -0400
This reverts commit eef1a609adfd0c41361aac2e04020bd199fb61fb.
It's served its purpose.
M src/backend/commands/indexcmds.c
M src/backend/utils/time/snapmgr.c
Add more temporary debug logging, in 9.4 branch only.
commit : eef1a609adfd0c41361aac2e04020bd199fb61fb
author : Tom Lane <[email protected]>
date : Tue, 17 Apr 2018 11:26:37 -0400
committer: Tom Lane <[email protected]>
date : Tue, 17 Apr 2018 11:26:37 -0400
Last night's results were inconclusive, but after more staring at the
code I've thought of some more data to gather.
Discussion: https://postgr.es/m/[email protected]
M src/backend/commands/indexcmds.c
M src/backend/utils/time/snapmgr.c
Fix broken collation-aware searches in SP-GiST text opclass.
commit : 608d1f97114d3e449beef543c9a528a1ee2f5ada
author : Tom Lane <[email protected]>
date : Mon, 16 Apr 2018 16:06:47 -0400
committer: Tom Lane <[email protected]>
date : Mon, 16 Apr 2018 16:06:47 -0400
spg_text_leaf_consistent() supposed that it should compare only
Min(querylen, entrylen) bytes of the two strings, and then deal with
any excess bytes in one string or the other by assuming the longer
string is greater if the prefixes are equal. Quite aside from the
fact that that's just wrong in some locales (e.g., 'ch' is not less
than 'd' in cs_CZ), it also risked passing incomplete multibyte
characters to strcoll(), with ensuing bad results.
Instead, just pass the full strings to varstr_cmp, and let it decide
what to do about unequal-length strings.
Fortunately, this error doesn't imply any index corruption, it's just
that searches might return the wrong set of entries.
Per report from Emre Hasegeli, though this is not his patch.
Thanks to Peter Geoghegan for review and discussion.
This code was born broken, so back-patch to all supported branches.
In HEAD, I failed to resist the temptation to do a bit of cosmetic
cleanup/pgindent'ing on 710d90da1, too.
Discussion: https://postgr.es/m/CAE2gYzzb6K51VnTq5i5p52z+j9p2duEa-K1T3RrC_GQEynAKEg@mail.gmail.com
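A sketch of the corrected comparison, assuming the surrounding spg_text_leaf_consistent() machinery (the function and parameter names below are illustrative):

    #include "postgres.h"
    #include "utils/builtins.h"     /* varstr_cmp() */

    static int
    leaf_compare_sketch(char *fullvalue, int fulllen,
                        char *queryvalue, int querylen, Oid collation)
    {
        /* Hand both complete strings to varstr_cmp() and let it resolve
         * unequal lengths, instead of comparing Min(querylen, fulllen) bytes
         * and then breaking ties by length. */
        return varstr_cmp(fullvalue, fulllen, queryvalue, querylen, collation);
    }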
M src/backend/access/spgist/spgtextproc.c
Add temporary debug logging, in 9.4 branch only.
commit : e55380f3b60108d402f64131fe655b0e5ccc1f31
author : Tom Lane <[email protected]>
date : Mon, 16 Apr 2018 13:44:39 -0400
committer: Tom Lane <[email protected]>
date : Mon, 16 Apr 2018 13:44:39 -0400
Commit 5ee940e1c served its purpose by demonstrating that buildfarm
member okapi is seeing some sort of locally-visible state mismanagement,
not a cross-process data visibility problem as I'd first theorized.
Put in some elog(LOG) messages in hopes of gathering more info about
exactly what's happening there. Again, this is temporary code to be
reverted once we have buildfarm results.
Discussion: https://postgr.es/m/[email protected]
M src/backend/commands/indexcmds.c
M src/backend/utils/time/snapmgr.c
M src/include/utils/snapmgr.h
Revert "Add temporary debugging assertion, in 9.4 branch only."
commit : fea5bfde1673fdbcf3ae2ce1ce3d5df2743e5653
author : Tom Lane <[email protected]>
date : Mon, 16 Apr 2018 13:23:35 -0400
committer: Tom Lane <[email protected]>
date : Mon, 16 Apr 2018 13:23:35 -0400
This reverts commit 5ee940e1cdb6af3af52bb01e44aac63f3a73a28d.
Further debugging is needed, but it'll look different than this,
so for simplicity revert this first.
M src/backend/commands/indexcmds.c
Add temporary debugging assertion, in 9.4 branch only.
commit : 5ee940e1cdb6af3af52bb01e44aac63f3a73a28d
author : Tom Lane <[email protected]>
date : Sun, 15 Apr 2018 20:23:59 -0400
committer: Tom Lane <[email protected]>
date : Sun, 15 Apr 2018 20:23:59 -0400
Buildfarm member okapi has been failing the multiple-cic isolation
test for months now, but only in 9.4. To narrow down the possible
causes, add an Assert testing that CREATE INDEX CONCURRENTLY is
advertising zero xmin before waiting for other transactions to end.
I'm not sure that this would hold in general, so this assertion isn't
meant to get released, but it passes all 9.4 regression tests for me.
Will revert once we see how okapi responds.
M src/backend/commands/indexcmds.c
Fix potentially-unportable code in contrib/adminpack.
commit : e6b71727afc148f90967f4d7ca5cb29891ba2c6c
author : Tom Lane <[email protected]>
date : Sun, 15 Apr 2018 13:02:12 -0400
committer: Tom Lane <[email protected]>
date : Sun, 15 Apr 2018 13:02:12 -0400
Spelling access(2)'s second argument as "2" is just horrid.
POSIX makes no promises as to the numeric values of W_OK and related
macros. Even if it accidentally works as intended on every supported
platform, it's still unreadable and inconsistent with adjacent code.
In passing, don't spell "NULL" as "0" either. Yes, that's legal C;
no, it's not project style.
Back-patch, just in case the unportability is real and not theoretical.
(Most likely, even if a platform had different bit assignments for
access()'s modes, there'd not be an observable behavior difference
here; but I'm being paranoid today.)
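A standalone illustration of the point (not the adminpack code itself):

    #include <stdio.h>
    #include <unistd.h>

    int
    main(void)
    {
        /* Readable and portable ... */
        if (access("/tmp", W_OK) != 0)
            perror("access");
        /* ... unlike access("/tmp", 2), which merely happens to mean the same
         * thing on platforms where W_OK is defined as 2. */
        return 0;
    }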
M contrib/adminpack/adminpack.c
In libpq, free any partial query result before collecting a server error.
commit : 3dd36aa4b36c97cf9e00d29d8b138a1f1990fc62
author : Tom Lane <[email protected]>
date : Fri, 13 Apr 2018 12:53:46 -0400
committer: Tom Lane <[email protected]>
date : Fri, 13 Apr 2018 12:53:46 -0400
We'd throw away the partial result anyway after parsing the error message.
Throwing it away beforehand costs nothing and reduces the risk of
out-of-memory failure. Also, at least in systems that behave like
glibc/Linux, if the partial result was very large then the error PGresult
would get allocated at high heap addresses, preventing the heap storage
used by the partial result from being released to the OS until the error
PGresult is freed.
In psql >= 9.6, we hold onto the error PGresult until another error is
received (for \errverbose), so that this behavior causes a seeming
memory leak to persist for a while, as in a recent complaint from
Darafei Praliaskouski. This is a potential performance regression from
older versions, justifying back-patching at least that far. But similar
behavior may occur in other client applications, so it seems worth just
back-patching to all supported branches.
Discussion: https://postgr.es/m/CAC8Q8tJ=7cOkPePyAbJE_Pf691t8nDFhJp0KZxHvnq_uicfyVg@mail.gmail.com
M src/interfaces/libpq/fe-protocol2.c
M src/interfaces/libpq/fe-protocol3.c
Fix bogus affix-merging code.
commit : f71d803c8de8daa7219ea52366ad3f4ff9345b3f
author : Tom Lane <[email protected]>
date : Thu, 12 Apr 2018 18:39:51 -0400
committer: Tom Lane <[email protected]>
date : Thu, 12 Apr 2018 18:39:51 -0400
NISortAffixes() compared successive compound affixes incorrectly,
thus possibly failing to merge identical affixes, or (less likely)
merging ones that shouldn't be merged. The user-visible effects
of this are unclear, to me anyway.
Per bug #15150 from Alexander Lakhin. It's been broken for a long time,
so back-patch to all supported branches.
Arthur Zakirov
Discussion: https://postgr.es/m/[email protected]
M src/backend/tsearch/spell.c
Ignore nextOid when replaying an ONLINE checkpoint.
commit : 6943fb9275a50f3a9d177da1a06ea387bf490ead
author : Tom Lane <[email protected]>
date : Wed, 11 Apr 2018 18:11:30 -0400
committer: Tom Lane <[email protected]>
date : Wed, 11 Apr 2018 18:11:30 -0400
The nextOid value is from the start of the checkpoint and may well be stale
compared to values from more recent XLOG_NEXTOID records. Previously, we
adopted it anyway, allowing the OID counter to go backwards during a crash.
While this should be harmless, it contributed to the severity of the bug
fixed in commit 0408e1ed5, by allowing duplicate TOAST OIDs to be assigned
immediately following a crash. Without this error, that issue would only
have arisen when TOAST objects just younger than a multiple of 2^32 OIDs
were deleted and then not vacuumed in time to avoid a conflict.
Pavan Deolasee
Discussion: https://postgr.es/m/CABOikdOgWT2hHkYG3Wwo2cyZJq2zfs1FH0FgX-=h4OLosXHf9w@mail.gmail.com
M src/backend/access/transam/xlog.c
Do not select new object OIDs that match recently-dead entries.
commit : 5b3ed6b7880b88a355bf809dad83cbe7cbc49316
author : Tom Lane <[email protected]>
date : Wed, 11 Apr 2018 17:41:10 -0400
committer: Tom Lane <[email protected]>
date : Wed, 11 Apr 2018 17:41:10 -0400
When selecting a new OID, we take care to avoid picking one that's already
in use in the target table, so as not to create duplicates after the OID
counter has wrapped around. However, up to now we used SnapshotDirty when
scanning for pre-existing entries. That ignores committed-dead rows, so
that we could select an OID matching a deleted-but-not-yet-vacuumed row.
While that mostly worked, it has two problems:
* If recently deleted, the dead row might still be visible to MVCC
snapshots, creating a risk for duplicate OIDs when examining the catalogs
within our own transaction. Such duplication couldn't be visible outside
the object-creating transaction, though, and we've heard few if any field
reports corresponding to such a symptom.
* When selecting a TOAST OID, deleted toast rows definitely *are* visible
to SnapshotToast, and will remain so until vacuumed away. This leads to
a conflict that will manifest in errors like "unexpected chunk number 0
(expected 1) for toast value nnnnn". We've been seeing reports of such
errors from the field for years, but the cause was unclear before.
The fix is simple: just use SnapshotAny to search for conflicting rows.
This results in a slightly longer window before object OIDs can be
recycled, but that seems unlikely to create any large problems.
Pavan Deolasee
Discussion: https://postgr.es/m/CABOikdOgWT2hHkYG3Wwo2cyZJq2zfs1FH0FgX-=h4OLosXHf9w@mail.gmail.com
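A sketch of the changed scan in GetNewOidWithIndex() (src/backend/catalog/catalog.c); the OID-retry loop and ScanKey setup around it are omitted, and the helper name here is illustrative:

    #include "postgres.h"
    #include "access/genam.h"
    #include "access/htup.h"
    #include "utils/tqual.h"        /* SnapshotAny (its 9.4-era location) */

    static bool
    oid_in_use_sketch(Relation rel, Oid indexId, ScanKeyData *key)
    {
        SysScanDesc scan;
        bool        collides;

        /* SnapshotAny instead of SnapshotDirty: committed-dead but
         * not-yet-vacuumed rows now also block reuse of their OID. */
        scan = systable_beginscan(rel, indexId, true, SnapshotAny, 1, key);
        collides = HeapTupleIsValid(systable_getnext(scan));
        systable_endscan(scan);

        return collides;
    }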
M src/backend/access/heap/tuptoaster.c
M src/backend/catalog/catalog.c
Make local copy of client hostnames in backend status array.
commit : 310d1379dd710322da398f5223051368fc876e23
author : Heikki Linnakangas <[email protected]>
date : Wed, 11 Apr 2018 23:39:48 +0300
committer: Heikki Linnakangas <[email protected]>
date : Wed, 11 Apr 2018 23:39:48 +0300
The other strings, application_name and query string, were snapshotted to
local memory in pgstat_read_current_status(), but we forgot to do that for
client hostnames. As a result, the client hostname would appear to change in
the local copy, if the client disconnected.
Backpatch to all supported versions.
Author: Edmund Horner
Reviewed-by: Michael Paquier
Discussion: https://www.postgresql.org/message-id/CAMyN-kA7aOJzBmrYFdXcc7Z0NmW%2B5jBaf_m%3D_-77uRNyKC9r%3DA%40mail.gmail.com
M src/backend/postmaster/pgstat.c
Fix incorrect close() call in dsm_impl_mmap().
commit : f530af8fc931b4063f1a9f4239908c7110666848
author : Tom Lane <[email protected]>
date : Tue, 10 Apr 2018 18:34:40 -0400
committer: Tom Lane <[email protected]>
date : Tue, 10 Apr 2018 18:34:40 -0400
One improbable error-exit path in this function used close() where
it should have used CloseTransientFile(). This is unlikely to be
hit in the field, and I think the consequences wouldn't be awful
(just an elog(LOG) bleat later). But a bug is a bug, so back-patch
to 9.4 where this code came in.
Pan Bian
Discussion: https://postgr.es/m/[email protected]
M src/backend/storage/ipc/dsm_impl.c
Doc: clarify explanation of pg_dump usage.
commit : 4716c9e70ea50aa414b8f6c08f2a8ad7f001c6bc
author : Tom Lane <[email protected]>
date : Sun, 8 Apr 2018 16:35:43 -0400
committer: Tom Lane <[email protected]>
date : Sun, 8 Apr 2018 16:35:43 -0400
This section confusingly used both "infile" and "outfile" to refer
to the same file, i.e. the textual output of pg_dump. Use "dumpfile"
for both cases, per suggestion from Jonathan Katz.
Discussion: https://postgr.es/m/[email protected]
M doc/src/sgml/backup.sgml
doc: remove mention of the DMOZ catalog in ltree docs
commit : ec45e60362e1d35510d31b5c58915aca3a6aedbc
author : Bruce Momjian <[email protected]>
date : Thu, 5 Apr 2018 15:55:41 -0400
committer: Bruce Momjian <[email protected]>
date : Thu, 5 Apr 2018 15:55:41 -0400
Discussion: https://postgr.es/m/CAF4Au4xYem_W3KOuxcKct7=G4j8Z3uO9j3DUKTFJqUsfp_9pQg@mail.gmail.com
Author: Oleg Bartunov
Backpatch-through: 9.3
M doc/src/sgml/ltree.sgml
docs: update ltree URL for the DMOZ catalog
commit : 9305257baffad0c212489b33616cdfd385d195b0
author : Bruce Momjian <[email protected]>
date : Wed, 4 Apr 2018 15:06:21 -0400
committer: Bruce Momjian <[email protected]>
date : Wed, 4 Apr 2018 15:06:21 -0400
Reported-by: [email protected]
Discussion: https://postgr.es/m/[email protected]
Author: Oleg Bartunov
Backpatch-through: 9.3
M doc/src/sgml/ltree.sgml
doc: document "IS NOT DOCUMENT"
commit : b71741b425787644bbf8b8d469622fb315892cb0
author : Bruce Momjian <[email protected]>
date : Mon, 2 Apr 2018 16:41:46 -0400
committer: Bruce Momjian <[email protected]>
date : Mon, 2 Apr 2018 16:41:46 -0400
Reported-by: [email protected]
Discussion: https://postgr.es/m/[email protected]
Author: Euler Taveira
Backpatch-through: 9.3
M doc/src/sgml/func.sgml
Fix bogus provolatile/proparallel markings on a few built-in functions.
commit : b7537ffb1ab59b3fedf6d01e6074fa2d465756c3
author : Tom Lane <[email protected]>
date : Fri, 30 Mar 2018 18:14:51 -0400
committer: Tom Lane <[email protected]>
date : Fri, 30 Mar 2018 18:14:51 -0400
Richard Yen reported that pg_upgrade failed if the target cluster had
force_parallel_mode = on, because binary_upgrade_create_empty_extension()
is marked parallel restricted, allowing it to be executed in parallel
mode, which complains because it tries to acquire an XID.
In general, no function that might try to modify database data should
be considered parallel safe or restricted, since execution of it might
force XID acquisition. We found several other examples of this mistake.
Furthermore, functions that execute user-supplied SQL queries or query
fragments, or pull data from user-supplied cursors, had better be marked
both volatile and parallel unsafe, because we don't know what the supplied
query or cursor might try to do. There were several tsquery and XML
functions that had the wrong proparallel marking for this, and some of
them were even mislabeled as to volatility.
All these bugs are old, dating back to 9.6 for the proparallel mistakes
and much further for the provolatile mistakes. We can't force a
catversion bump in the back branches, but we can at least ensure that
installations initdb'd in future have the right values.
Thomas Munro and Tom Lane
Discussion: https://postgr.es/m/CAEepm=2sNDScSLTfyMYu32Q=ob98ZGW-vM_2oLxinzSABGQ6VA@mail.gmail.com
M src/include/catalog/pg_proc.h
docs: add parameter with brackets around varbit()
commit : 839e26e31b6bc10a3b49b4b860d6bc21e81d46de
author : Bruce Momjian <[email protected]>
date : Fri, 30 Mar 2018 13:34:12 -0400
committer: Bruce Momjian <[email protected]>
date : Fri, 30 Mar 2018 13:34:12 -0400
Reported-by: [email protected]
Discussion: https://postgr.es/m/[email protected]
Author: Euler Taveira
Backpatch-through: 9.3
M doc/src/sgml/datatype.sgml
Doc: add example of type resolution in nested UNIONs.
commit : 428d2465c2eea7dbccca42f0e15d5f352606776e
author : Tom Lane <[email protected]>
date : Sun, 25 Mar 2018 16:15:16 -0400
committer: Tom Lane <[email protected]>
date : Sun, 25 Mar 2018 16:15:16 -0400
Section 10.5 didn't say explicitly that multiple UNIONs are resolved
pairwise. Since the resolution algorithm is described as taking any
number of inputs, readers might well think that a query like
"select x union select y union select z" would be resolved by
considering x, y, and z in one resolution step. But that's not what
happens (and I think that behavior is per SQL spec). Add an example
clarifying this point.
Per bug #15129 from Philippe Beaudoin.
Discussion: https://postgr.es/m/[email protected]
M doc/src/sgml/typeconv.sgml
Doc: remove extra comma in syntax summary for array_fill().
commit : 8f991f41bff6a97759c0f80f02a3141408f68138
author : Tom Lane <[email protected]>
date : Sun, 25 Mar 2018 12:38:21 -0400
committer: Tom Lane <[email protected]>
date : Sun, 25 Mar 2018 12:38:21 -0400
Noted by Scott Ure. Back-patch to all supported branches.
Discussion: https://postgr.es/m/[email protected]
M doc/src/sgml/func.sgml
Don't qualify type pg_catalog.text in extend-extensions-example.
commit : 60c623678fde2c6bc331c11197e457ba69170b50
author : Noah Misch <[email protected]>
date : Fri, 23 Mar 2018 20:31:03 -0700
committer: Noah Misch <[email protected]>
date : Fri, 23 Mar 2018 20:31:03 -0700
Extension scripts begin execution with pg_catalog at the front of the
search path, so type names reliably refer to pg_catalog. Remove these
superfluous qualifications. Earlier <programlisting> of this <sect1>
already omitted them. Back-patch to 9.3 (all supported versions).
M doc/src/sgml/extend.sgml
Fix make rules that generate multiple output files.
commit : 4c26965166c8180a1f017fac85447f0bc9df98cf
author : Tom Lane <[email protected]>
date : Fri, 23 Mar 2018 13:45:38 -0400
committer: Tom Lane <[email protected]>
date : Fri, 23 Mar 2018 13:45:38 -0400
For years, our makefiles have correctly observed that "there is no correct
way to write a rule that generates two files". However, what we did is to
provide empty rules that "generate" the secondary output files from the
primary one, and that's not right either. Depending on the details of
the creating process, the primary file might end up timestamped later than
one or more secondary files, causing subsequent make runs to consider the
secondary file(s) out of date. That's harmless in a plain build, since
make will just re-execute the empty rule and nothing happens. But it's
fatal in a VPATH build, since make will expect the secondary file to be
rebuilt in the build directory. This would manifest as "file not found"
failures during VPATH builds from tarballs, if we were ever unlucky enough
to ship a tarball with apparently out-of-date secondary files. (It's not
clear whether that has ever actually happened, but it definitely could.)
To ensure that secondary output files have timestamps >= their primary's,
change our makefile convention to be that we provide a "touch $@" action
not an empty rule. Also, make sure that this rule actually gets invoked
during a distprep run, else the hazard remains.
It's been like this a long time, so back-patch to all supported branches.
In HEAD, I skipped the changes in src/backend/catalog/Makefile, because
those rules are due to get replaced soon in the bootstrap data format
patch, and there seems no need to create a merge issue for that patch.
If for some reason we fail to land that patch in v11, we'll need to
back-fill the changes in that one makefile from v10.
Discussion: https://postgr.es/m/[email protected]
M src/Makefile.shlib
M src/backend/Makefile
M src/backend/catalog/Makefile
M src/backend/parser/Makefile
M src/backend/utils/Makefile
M src/bin/psql/Makefile
M src/interfaces/ecpg/preproc/Makefile
M src/pl/plpgsql/src/Makefile
M src/test/isolation/Makefile
Fix tuple counting in SP-GiST index build.
commit : 7f6f8ccd976da21c0117bddfee8bd86226bd2559
author : Tom Lane <[email protected]>
date : Thu, 22 Mar 2018 13:23:48 -0400
committer: Tom Lane <[email protected]>
date : Thu, 22 Mar 2018 13:23:48 -0400
Count the number of tuples in the index honestly, instead of assuming
that it's the same as the number of tuples in the heap. (It might be
different if the index is partial.)
Back-patch to all supported versions.
Tomas Vondra
Discussion: https://postgr.es/m/[email protected]
M src/backend/access/spgist/spginsert.c
Fix mishandling of quoted-list GUC values in pg_dump and ruleutils.c.
commit : 67e02cde7373b09a8f6897b7fea3800ceb91ad9e
author : Tom Lane <[email protected]>
date : Wed, 21 Mar 2018 20:03:28 -0400
committer: Tom Lane <[email protected]>
date : Wed, 21 Mar 2018 20:03:28 -0400
Code that prints out the contents of setconfig or proconfig arrays in
SQL format needs to handle GUC_LIST_QUOTE variables differently from
other ones, because for those variables, flatten_set_variable_args()
already applied a layer of quoting. The value can therefore safely
be printed as-is, and indeed must be, or flatten_set_variable_args()
will muck it up completely on reload. For all other GUC variables,
it's necessary and sufficient to quote the value as a SQL literal.
We'd recognized the need for this long ago, but mis-analyzed the
need slightly, thinking that all GUC_LIST_INPUT variables needed
the special treatment. That's actually wrong, since a valid value
of a LIST variable might include characters that need quoting,
although no existing variables accept such values.
More to the point, we hadn't made any particular effort to keep the
various places that deal with this up-to-date with the set of variables
that actually need special treatment, meaning that we'd do the wrong
thing with, for example, temp_tablespaces values. This affects dumping
of SET clauses attached to functions, as well as ALTER DATABASE/ROLE SET
commands.
In ruleutils.c we can fix it reasonably honestly by exporting a guc.c
function that allows discovering the flags for a given GUC variable.
But pg_dump doesn't have easy access to that, so continue the old method
of having a hard-wired list of affected variable names. At least we can
fix it to have just one list not two, and update the list to match
current reality.
A remaining problem with this is that it only works for built-in
GUC variables. pg_dump's list obviously knows nothing of third-party
extensions, and even the "ask guc.c" method isn't bulletproof since
the relevant extension might not be loaded. There's no obvious
solution to that, so for now, we'll just have to discourage extension
authors from inventing custom GUCs that need GUC_LIST_QUOTE.
This has been busted for a long time, so back-patch to all supported
branches.
Michael Paquier and Tom Lane, reviewed by Kyotaro Horiguchi and
Pavel Stehule
Discussion: https://postgr.es/m/[email protected]
M src/backend/utils/adt/ruleutils.c
M src/backend/utils/misc/guc.c
M src/bin/pg_dump/dumputils.c
M src/bin/pg_dump/dumputils.h
M src/bin/pg_dump/pg_dump.c
M src/bin/pg_dump/pg_dumpall.c
M src/include/utils/guc.h
M src/test/regress/expected/rules.out
M src/test/regress/sql/rules.sql
Fix typo.
commit : 9312fcf03b4889f9f2ba1c9b4a3447951a5bb6fb
author : Tatsuo Ishii <[email protected]>
date : Wed, 21 Mar 2018 23:08:43 +0900
committer: Tatsuo Ishii <[email protected]>
date : Wed, 21 Mar 2018 23:08:43 +0900
Patch by me.
M doc/src/sgml/protocol.sgml
Fix some corner-case issues in REFRESH MATERIALIZED VIEW CONCURRENTLY.
commit : e1f186da949dff96f974f491c0248b44845cc3e8
author : Tom Lane <[email protected]>
date : Mon, 19 Mar 2018 18:49:53 -0400
committer: Tom Lane <[email protected]>
date : Mon, 19 Mar 2018 18:49:53 -0400
refresh_by_match_merge() has some issues in the way it builds a SQL
query to construct the "diff" table:
1. It doesn't require the selected unique index(es) to be indimmediate.
2. It doesn't pay attention to the particular equality semantics enforced
by a given index, but just assumes that they must be those of the column
datatype's default btree opclass.
3. It doesn't check that the indexes are btrees.
4. It's insufficiently careful to ensure that the parser will pick the
intended operator when parsing the query. (This would have been a
security bug before CVE-2018-1058.)
5. It's not careful about indexes on system columns.
The way to fix #4 is to make use of the existing code in ri_triggers.c
for generating an arbitrary binary operator clause. I chose to move
that to ruleutils.c, since that seems a more reasonable place to be
exporting such functionality from than ri_triggers.c.
While #1, #3, and #5 are just latent given existing feature restrictions,
and #2 doesn't arise in the core system for lack of alternate opclasses
with different equality behaviors, #4 seems like an issue worth
back-patching. That's the bulk of the change anyway, so just back-patch
the whole thing to 9.4 where this code was introduced.
Discussion: https://postgr.es/m/[email protected]
M src/backend/commands/matview.c
M src/backend/utils/adt/ri_triggers.c
M src/backend/utils/adt/ruleutils.c
M src/include/utils/builtins.h
Fix performance hazard in REFRESH MATERIALIZED VIEW CONCURRENTLY.
commit : b6ba94ec45bbf49a95a31c4ef740d1d60de7b143
author : Tom Lane <[email protected]>
date : Mon, 19 Mar 2018 17:23:07 -0400
committer: Tom Lane <[email protected]>
date : Mon, 19 Mar 2018 17:23:07 -0400
Jeff Janes discovered that commit 7ca25b7de made one of the queries run by
REFRESH MATERIALIZED VIEW CONCURRENTLY perform badly. The root cause is
bad cardinality estimation for correlated quals, but a principled solution
to that problem is some way off, especially since the planner lacks any
statistics about whole-row variables. Moreover, in non-error cases this
query produces no rows, meaning it must be run to completion; but use of
LIMIT 1 encourages the planner to pick a fast-start, slow-completion plan,
exactly not what we want. Remove the LIMIT clause, and instead rely on
the count parameter we pass to SPI_execute() to prevent excess work if the
query does return some rows.
While we've heard no field reports of planner misbehavior with this query,
it could be that people are having performance issues that haven't reached
the level of pain needed to cause a bug report. In any case, that LIMIT
clause can't possibly do anything helpful with any existing version of the
planner, and it demonstrably can cause bad choices in some cases, so
back-patch to 9.4 where the code was introduced.
Thomas Munro
Discussion: https://postgr.es/m/CAMkU=1z-JoGymHneGHar1cru4F1XDfHqJDzxP_CtK5cL3DOfmg@mail.gmail.com
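A sketch of the approach described above, assuming the refresh_by_match_merge() context in src/backend/commands/matview.c (the error wording and helper name are illustrative):

    #include "postgres.h"
    #include "executor/spi.h"

    static void
    run_diff_query_sketch(const char *sql)
    {
        /* No LIMIT 1 in the SQL; the third SPI_execute() argument caps the
         * result at one row instead, so the planner is free to pick a plan
         * that is fast in the expected case where the query returns no rows. */
        if (SPI_execute(sql, false, 1) != SPI_OK_SELECT)
            elog(ERROR, "SPI_execute failed");
        if (SPI_processed > 0)
            ereport(ERROR,
                    (errmsg("new data contains duplicate rows")));
    }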
M src/backend/commands/matview.c
Doc: note that statement-level view triggers require an INSTEAD OF trigger.
commit : b1e48cc9c79ce9d4bd6f1297ce54089781b9f97d
author : Tom Lane <[email protected]>
date : Sun, 18 Mar 2018 15:10:28 -0400
committer: Tom Lane <[email protected]>
date : Sun, 18 Mar 2018 15:10:28 -0400
If a view lacks an INSTEAD OF trigger, DML on it can only work by rewriting
the command into a command on the underlying base table(s). Then we will
fire triggers attached to those table(s), not those for the view. This
seems appropriate from a consistency standpoint, but nowhere was the
behavior explicitly documented, so let's do that.
There was some discussion of throwing an error or warning if a statement
trigger is created on a view without creating a row INSTEAD OF trigger.
But a simple implementation of that would result in dump/restore ordering
hazards. Given that it's been like this all along, and we hadn't heard
a complaint till now, a documentation improvement seems sufficient.
Per bug #15106 from Pu Qun. Back-patch to all supported branches.
Discussion: https://postgr.es/m/[email protected]
M doc/src/sgml/ref/create_trigger.sgml
M doc/src/sgml/trigger.sgml
Fix pg_recvlogical for pre-10 versions
commit : af5fbb1286cd4319db52835d4847175af9c2ed56
author : Magnus Hagander <[email protected]>
date : Sun, 18 Mar 2018 13:08:25 +0100
committer: Magnus Hagander <[email protected]>
date : Sun, 18 Mar 2018 13:08:25 +0100
In e170b8c8, protection against a modified search_path was added. However,
PostgreSQL versions prior to 10 do not accept SQL commands over a
replication connection, so the protection would generate a syntax error.
Since we cannot run SQL commands on it, we are also not vulnerable to
the issue that e170b8c8 fixes, so we can just skip this command for
older versions.
Author: Michael Paquier <[email protected]>
M src/bin/pg_basebackup/streamutil.c
Fix overflow handling in plpgsql's integer FOR loops.
commit : 092401b14fa19d0ea28d57ba2f73b7aee64f960d
author : Tom Lane <[email protected]>
date : Sat, 17 Mar 2018 15:38:15 -0400
committer: Tom Lane <[email protected]>
date : Sat, 17 Mar 2018 15:38:15 -0400
The test to exit the loop if the integer control value would overflow
an int32 turns out not to work on some ICC versions, as it's dependent
on the assumption that the compiler will execute the code as written
rather than "optimize" it. ICC lacks any equivalent of gcc's -fwrapv
switch, so it was optimizing on the assumption of no integer overflow,
and that breaks this. Rewrite into a form that in fact does not
do any overflowing computations.
Per Tomas Vondra and buildfarm member fulmar. It's been like this
for a long time, although it was not till we added a regression test
case covering the behavior (in commit dd2243f2a) that the problem
became apparent. Back-patch to all supported versions.
Discussion: https://postgr.es/m/[email protected]
M src/pl/plpgsql/src/pl_exec.c
Fix WHERE CURRENT OF when the referenced cursor uses an index-only scan.
commit : 0a0721f84c40ef3a511cd1b233579777b71d8bce
author : Tom Lane <[email protected]>
date : Sat, 17 Mar 2018 14:59:31 -0400
committer: Tom Lane <[email protected]>
date : Sat, 17 Mar 2018 14:59:31 -0400
"UPDATE/DELETE WHERE CURRENT OF cursor_name" failed, with an error message
like "cannot extract system attribute from virtual tuple", if the cursor
was using a index-only scan for the target table. Fix it by digging the
current TID out of the indexscan state.
It seems likely that the same failure could occur for CustomScan plans
and perhaps some FDW plan types, so that leaving this to be treated as an
internal error with an obscure message isn't as good an idea as it first
seemed. Hence, add a bit of heaptuple.c infrastructure to let us deliver
a more on-topic message. I chose to make the message match what you get
for the case where execCurrentOf can't identify the target scan node at
all, "cursor "foo" is not a simply updatable scan of table "bar"".
Perhaps it should be different, but we can always adjust that later.
In the future, it might be nice to provide hooks that would let custom
scan providers and/or FDWs deal with this in other ways; but that's
not a suitable topic for a back-patchable bug fix.
It's been like this all along, so back-patch to all supported branches.
Yugo Nagata and Tom Lane
Discussion: https://postgr.es/m/[email protected]
M src/backend/access/common/heaptuple.c
M src/backend/executor/execCurrent.c
M src/include/executor/tuptable.h
M src/test/regress/expected/portals.out
M src/test/regress/sql/portals.sql
Fix query-lifespan memory leakage in repeatedly executed hash joins.
commit : 2709549ecdd68313201307653f3ddd0f24dd8427
author : Tom Lane <[email protected]>
date : Fri, 16 Mar 2018 16:03:45 -0400
committer: Tom Lane <[email protected]>
date : Fri, 16 Mar 2018 16:03:45 -0400
ExecHashTableCreate allocated some memory that wasn't freed by
ExecHashTableDestroy, specifically the per-hash-key function information.
That's not a huge amount of data, but if one runs a query that repeats
a hash join enough times, it builds up. Fix by arranging for the data
in question to be kept in the hashtable's hashCxt instead of leaving it
"loose" in the query-lifespan executor context. (This ensures that we'll
also clean up anything that the hash functions allocate in fn_mcxt.)
Per report from Amit Khandekar. It's been like this forever, so back-patch
to all supported branches.
Discussion: https://postgr.es/m/CAJ3gD9cFofAWGvcxLOxDHC=B0hjtW8yGmUsF2hdGh97CM38=7g@mail.gmail.com
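A sketch of the fix in ExecHashTableCreate() (src/backend/executor/nodeHash.c): allocate the per-hash-key function info inside the hash table's own context so that ExecHashTableDestroy()'s context deletion reclaims it; the helper name is illustrative.

    #include "postgres.h"
    #include "executor/hashjoin.h"

    static void
    setup_hash_functions_sketch(HashJoinTable hashtable)
    {
        MemoryContext oldcxt = MemoryContextSwitchTo(hashtable->hashCxt);

        /* fmgr_info() lookups for the per-key hash functions happen here, so
         * the FmgrInfo arrays (and anything the functions cache in fn_mcxt)
         * are freed along with hashCxt, instead of accumulating in the
         * query-lifespan executor context across repeated hash joins. */

        MemoryContextSwitchTo(oldcxt);
    }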
M src/backend/executor/nodeHash.c
Doc: explicitly point out that enum values can't be dropped.
commit : 21c90dfcf8305d214d052fa0f36f2d7359bbd698
author : Tom Lane <[email protected]>
date : Fri, 16 Mar 2018 13:44:34 -0400
committer: Tom Lane <[email protected]>
date : Fri, 16 Mar 2018 13:44:34 -0400
This was not stated in so many words anywhere. Document it to make
clear that it's a design limitation and not just an oversight or
documentation omission.
Discussion: https://postgr.es/m/[email protected]
M doc/src/sgml/datatype.sgml
Fix double frees in ecpg.
commit : fcc15bf38100edf26b38c73a809579fc0e9ccc78
author : Michael Meskes <[email protected]>
date : Wed, 14 Mar 2018 00:47:49 +0100
committer: Michael Meskes <[email protected]>
date : Wed, 14 Mar 2018 00:47:49 +0100
Patch by Patrick Krecker <[email protected]>
M src/interfaces/ecpg/preproc/ecpg.c
When updating reltuples after ANALYZE, just extrapolate from our sample.
commit : 25a2ba35edbc3b60121ca9cfbd6cb0137b5e2f32
author : Tom Lane <[email protected]>
date : Tue, 13 Mar 2018 13:24:27 -0400
committer: Tom Lane <[email protected]>
date : Tue, 13 Mar 2018 13:24:27 -0400
The existing logic for updating pg_class.reltuples trusted the sampling
results only for the pages ANALYZE actually visited, preferring to
believe the previous tuple density estimate for all the unvisited pages.
While there's some rationale for doing that for VACUUM (first that
VACUUM is likely to visit a very nonrandom subset of pages, and second
that we know for sure that the unvisited pages did not change), there's
no such rationale for ANALYZE: by assumption, it's looked at an unbiased
random sample of the table's pages. Furthermore, in a very large table
ANALYZE will have examined only a tiny fraction of the table's pages,
meaning it cannot slew the overall density estimate very far at all.
In a table that is physically growing, this causes reltuples to increase
nearly proportionally to the change in relpages, regardless of what is
actually happening in the table. This has been observed to cause reltuples
to become so much larger than reality that it effectively shuts off
autovacuum, whose threshold for doing anything is a fraction of reltuples.
(Getting to the point where that would happen seems to require some
additional, not well understood, conditions. But it's undeniable that if
reltuples is seriously off in a large table, ANALYZE alone will not fix it
in any reasonable number of iterations, especially not if the table is
continuing to grow.)
Hence, restrict the use of vac_estimate_reltuples() to VACUUM alone,
and in ANALYZE, just extrapolate from the sample pages on the assumption
that they provide an accurate model of the whole table. If, by very bad
luck, they don't, at least another ANALYZE will fix it; in the old logic
a single bad estimate could cause problems indefinitely.
In HEAD, let's remove vac_estimate_reltuples' is_analyze argument
altogether; it was never used for anything and now it's totally pointless.
But keep it in the back branches, in case any third-party code is calling
this function.
Per bug #15005. Back-patch to all supported branches.
David Gould, reviewed by Alexander Kuzmenkov, cosmetic changes by me
Discussion: https://postgr.es/m/20180117164916.3fdcf2e9@engels
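A sketch of the extrapolation ANALYZE now uses (the real computation sits in the sampling code of src/backend/commands/analyze.c; the helper name is illustrative):

    #include <math.h>

    static double
    analyze_total_rows_sketch(double live_rows, double sampled_pages,
                              double total_pages)
    {
        /* Scale the live-tuple density seen in the sampled pages up to the
         * whole table, instead of blending with the previous reltuples via
         * vac_estimate_reltuples(). */
        if (sampled_pages <= 0)
            return 0.0;
        return floor((live_rows / sampled_pages) * total_pages + 0.5);
    }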
M src/backend/commands/analyze.c
M src/backend/commands/vacuum.c
Avoid holding AutovacuumScheduleLock while rechecking table statistics.
commit : 95f08d32dec4428192834d233d86d4b0546765f5
author : Tom Lane <[email protected]>
date : Tue, 13 Mar 2018 12:28:16 -0400
committer: Tom Lane <[email protected]>
date : Tue, 13 Mar 2018 12:28:16 -0400
In databases with many tables, re-fetching the statistics takes some time,
so that this behavior seriously decreases the available concurrency for
multiple autovac workers. There's discussion afoot about more complete
fixes, but a simple and back-patchable amelioration is to claim the table
and release the lock before rechecking stats. If we find out there's no
longer a reason to process the table, re-taking the lock to un-claim the
table is cheap enough.
(This patch is quite old, but got lost amongst a discussion of more
aggressive fixes. It's not clear when or if such a fix will be
accepted, but in any case it'd be unlikely to get back-patched.
Let's do this now so we have some improvement for the back branches.)
In passing, make the normal un-claim step take AutovacuumScheduleLock
not AutovacuumLock, since that is what is documented to protect the
wi_tableoid field. This wasn't an actual bug in view of the fact that
readers of that field hold both locks, but it creates some concurrency
penalty against operations that need only AutovacuumLock.
Back-patch to all supported versions.
Jeff Janes
Discussion: https://postgr.es/m/[email protected]
M src/backend/postmaster/autovacuum.c
Set connection back to NULL after freeing it.
commit : bd7eb6fe65b34963135851f06210d8e5fe048ef4
author : Michael Meskes <[email protected]>
date : Mon, 12 Mar 2018 23:52:08 +0100
committer: Michael Meskes <[email protected]>
date : Mon, 12 Mar 2018 23:52:08 +0100
Patch by Jeevan Ladhe <[email protected]>
M src/interfaces/ecpg/preproc/output.c
Fix improper uses of canonicalize_qual().
commit : e556fb1372796c760ea4f33f1b460e611596a4b4
author : Tom Lane <[email protected]>
date : Sun, 11 Mar 2018 18:10:42 -0400
committer: Tom Lane <[email protected]>
date : Sun, 11 Mar 2018 18:10:42 -0400
One of the things canonicalize_qual() does is to remove constant-NULL
subexpressions of top-level AND/OR clauses. It does that on the assumption
that what it's given is a top-level WHERE clause, so that NULL can be
treated like FALSE. Although this is documented down inside a subroutine
of canonicalize_qual(), it wasn't mentioned in the documentation of that
function itself, and some callers hadn't gotten that memo.
Notably, commit d007a9505 caused get_relation_constraints() to apply
canonicalize_qual() to CHECK constraints. That allowed constraint
exclusion to misoptimize situations in which a CHECK constraint had a
provably-NULL subclause, as seen in the regression test case added here,
in which a child table that should be scanned is not. (Although this
thinko is ancient, the test case doesn't fail before 9.2, for reasons
I've not bothered to track down in detail. There may be related cases
that do fail before that.)
More recently, commit f0e44751d added an independent bug by applying
canonicalize_qual() to index expressions, which is even sillier since
those might not even be boolean. If they are, though, I think this
could lead to making incorrect index entries for affected index
expressions in v10. I haven't attempted to prove that though.
To fix, add an "is_check" parameter to canonicalize_qual() to specify
whether it should assume WHERE or CHECK semantics, and make it perform
NULL-elimination accordingly. Adjust the callers to apply the right
semantics, or remove the call entirely in cases where it's not known
that the expression has one or the other semantics. I also removed
the call in some cases involving partition expressions, where it should
be a no-op because such expressions should be canonical already ...
and was a no-op, independently of whether it could in principle have
done something, because it was being handed the qual in implicit-AND
format which isn't what it expects. In HEAD, add an Assert to catch
that type of mistake in future.
This represents an API break for external callers of canonicalize_qual().
While that's intentional in HEAD to make such callers think about which
case applies to them, it seems like something we probably wouldn't be
thanked for in released branches. Hence, in released branches, the
extra parameter is added to a new function canonicalize_qual_ext(),
and canonicalize_qual() is a wrapper that retains its old behavior.
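Roughly, the back-branch arrangement could be expressed as (a sketch based on
the description above, not the verbatim prepqual.c code):
    Expr *
    canonicalize_qual(Expr *qual)
    {
        /* historical entry point: keep assuming WHERE-clause semantics */
        return canonicalize_qual_ext(qual, false);
    }
so existing external callers keep compiling with their old NULL-elimination
behavior, while new code can pass is_check = true explicitly.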
Patch by me with suggestions from Dean Rasheed. Back-patch to all
supported branches.
Discussion: https://postgr.es/m/[email protected]
M src/backend/optimizer/plan/planner.c
M src/backend/optimizer/plan/subselect.c
M src/backend/optimizer/prep/prepqual.c
M src/backend/optimizer/util/plancat.c
M src/backend/utils/cache/relcache.c
M src/include/optimizer/prep.h
M src/test/regress/expected/inherit.out
M src/test/regress/sql/inherit.sql
Refrain from duplicating data in reorderbuffers
commit : 6d30e3a2b6b7ce260b3b8bff5ad44c5b6a924b32
author : Alvaro Herrera <[email protected]>
date : Tue, 6 Mar 2018 15:57:20 -0300
committer: Alvaro Herrera <[email protected]>
date : Tue, 6 Mar 2018 15:57:20 -0300
If a walsender exits leaving data in reorderbuffers, the next walsender
that tries to decode the same transaction would append its decoded data
in the same spill files without truncating them first, effectively
duplicating the data. Avoid that by removing any leftover reorderbuffer
spill files when a walsender starts.
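In outline the cleanup is just best-effort file removal at startup, as in this
generic sketch (directory layout and file-name suffix are illustrative only,
not the actual reorderbuffer naming):
    #include <dirent.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    static void
    cleanup_leftover_spill_files(const char *slotdir)
    {
        DIR        *dir = opendir(slotdir);
        struct dirent *de;
        if (dir == NULL)
            return;
        while ((de = readdir(dir)) != NULL)
        {
            /* hypothetical ".spill" suffix used purely for illustration */
            if (strstr(de->d_name, ".spill") != NULL)
            {
                char    path[1024];
                snprintf(path, sizeof(path), "%s/%s", slotdir, de->d_name);
                unlink(path);   /* ignore errors; this is only housekeeping */
            }
        }
        closedir(dir);
    }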
Backpatch to 9.4; this bug has been there from the very beginning of
logical decoding.
Author: Craig Ringer, revised by me
Reviewed by: Álvaro Herrera, Petr Jelínek, Masahiko Sawada
M src/backend/replication/logical/reorderbuffer.c
Fix assorted issues in convert_to_scalar().
commit : 165fa27fe440137eb724304343035d76d19342c7
author : Tom Lane <[email protected]>
date : Sat, 3 Mar 2018 20:31:35 -0500
committer: Tom Lane <[email protected]>
date : Sat, 3 Mar 2018 20:31:35 -0500
If convert_to_scalar is passed a pair of datatypes it can't cope with,
its former behavior was just to elog(ERROR). While this is OK so far as
the core code is concerned, there's extension code that would like to use
scalarltsel/scalargtsel/etc as selectivity estimators for operators that
work on non-core datatypes, and this behavior is a show-stopper for that
use-case. If we simply allow convert_to_scalar to return FALSE instead of
outright failing, then the main logic of scalarltsel/scalargtsel will work
fine for any operator that behaves like a scalar inequality comparison.
The lack of conversion capability will mean that we can't estimate to
better than histogram-bin-width precision, since the code will effectively
assume that the comparison constant falls at the middle of its bin. But
that's still a lot better than nothing. (Someday we should provide a way
for extension code to supply a custom version of convert_to_scalar, but
today is not that day.)
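On the caller side, the relaxed contract can be used roughly like this
(variable names are illustrative; not the exact selfuncs.c code):
    double      low, high, val, binfrac;
    if (convert_to_scalar(constval, consttype, &val,
                          loval, hival, boundstype,
                          &low, &high) &&
        high > low)
        binfrac = (val - low) / (high - low);   /* position within the bin */
    else
        binfrac = 0.5;                          /* conversion failed: assume mid-bin */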
While poking at this issue, we noted that the existing code for handling
type bytea in convert_to_scalar is several bricks shy of a load.
It assumes without checking that if the comparison value is type bytea,
the bounds values are too; in the worst case this could lead to a crash.
It also fails to detoast the input values, so that the comparison result is
complete garbage if any input is toasted out-of-line, compressed, or even
just short-header. I'm not sure how often such cases actually occur ---
the bounds values, at least, are probably safe since they are elements of
an array and hence can't be toasted. But that doesn't make this code OK.
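The detoasting side of the fix amounts to working on detoast-safe views of the
inputs, roughly (a sketch, not the committed code):
    bytea      *bstr = DatumGetByteaPP(value);      /* detoasts if necessary */
    const char *data = VARDATA_ANY(bstr);
    int         len  = VARSIZE_ANY_EXHDR(bstr);
so compressed, out-of-line, or short-header values compare on their actual
contents.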
Back-patch to all supported branches, partly because author requested that,
but mostly because of the bytea bugs. The change in API for the exposed
routine convert_network_to_scalar() is theoretically a back-patch hazard,
but it seems pretty unlikely that any third-party code is calling that
function directly.
Tomas Vondra, with some adjustments by me
Discussion: https://postgr.es/m/[email protected]
M contrib/btree_gist/btree_inet.c
M src/backend/utils/adt/network.c
M src/backend/utils/adt/selfuncs.c
M src/include/utils/builtins.h
Make gistvacuumcleanup() count the actual number of index tuples.
commit : 947f06c6224e5873e912fa546bbca48ccc4fc229
author : Tom Lane <[email protected]>
date : Fri, 2 Mar 2018 11:22:42 -0500
committer: Tom Lane <[email protected]>
date : Fri, 2 Mar 2018 11:22:42 -0500
Previously, it just returned the heap tuple count, which might be only an
estimate, and would be completely the wrong thing if the index is partial.
Since this function scans every index page anyway to find free pages,
it's practically free to count the surviving index tuples. Let's do that
and return an accurate count.
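The counting can piggyback on the existing full scan, roughly as in this
simplified sketch (not the committed gistvacuum.c code):
    BlockNumber blkno;
    double      numtuples = 0;
    for (blkno = GIST_ROOT_BLKNO; blkno < npages; blkno++)
    {
        Buffer      buffer = ReadBuffer(rel, blkno);
        Page        page;
        LockBuffer(buffer, GIST_SHARE);
        page = BufferGetPage(buffer);
        if (PageIsNew(page) || GistPageIsDeleted(page))
            RecordFreeIndexPage(rel, blkno);              /* as before */
        else if (GistPageIsLeaf(page))
            numtuples += PageGetMaxOffsetNumber(page);    /* new: count live tuples */
        UnlockReleaseBuffer(buffer);
    }
    stats->num_index_tuples = numtuples;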
This is easily visible as a wrong reltuples value for a partial GiST
index following VACUUM, so back-patch to all supported branches.
Andrey Borodin, reviewed by Michail Nikolaev
Discussion: https://postgr.es/m/[email protected]
M src/backend/access/gist/gistvacuum.c
Use ereport not elog for some corrupt-HOT-chain reports.
commit : a4fed310cbb4f61fe89b819b82eff8961c647a33
author : Tom Lane <[email protected]>
date : Thu, 1 Mar 2018 16:23:30 -0500
committer: Tom Lane <[email protected]>
date : Thu, 1 Mar 2018 16:23:30 -0500
These errors have been seen in the field in corrupted-data situations.
It seems worthwhile to report them with ERRCODE_DATA_CORRUPTED, rather
than the generic ERRCODE_INTERNAL_ERROR, for the benefit of log monitoring
and tools like amcheck. However, use errmsg_internal so that the text
strings still aren't translated; it seems unlikely to be worth
translators' time to do so.
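The reporting style in question looks like this (variable names and message
text are illustrative):
    ereport(ERROR,
            (errcode(ERRCODE_DATA_CORRUPTED),
             errmsg_internal("failed to find parent tuple for heap-only tuple at (%u,%u) in table \"%s\"",
                             ItemPointerGetBlockNumber(&htup->t_self),
                             ItemPointerGetOffsetNumber(&htup->t_self),
                             RelationGetRelationName(heapRelation))));
errmsg_internal() keeps the string out of the translation catalogs, while
errcode() still lets monitoring tools match on ERRCODE_DATA_CORRUPTED.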
Back-patch to 9.3, like the predecessor commit d70cf811f that introduced
these elog calls originally (replacing Asserts).
Peter Geoghegan
Discussion: https://postgr.es/m/CAH2-Wzmn4-Pg-UGFwyuyK-wiTih9j32pwg_7T9iwqXpAUZr=Mg@mail.gmail.com
M src/backend/catalog/index.c
Relax overly strict sanity check for upgraded ancient databases
commit : 3ee23834ed48eaada0f4ae761ad7945b922a31ed
author : Alvaro Herrera <[email protected]>
date : Thu, 1 Mar 2018 18:07:46 -0300
committer: Alvaro Herrera <[email protected]>
date : Thu, 1 Mar 2018 18:07:46 -0300
Commit 4800f16a7ad0 added some sanity checks to ensure we don't
accidentally corrupt data, but in one of them we failed to consider the
effects of a database upgraded from 9.2 or earlier, where a tuple
exclusively locked prior to the upgrade has a slightly different bit
pattern. Fix that by using the macro that we fixed in commit
74ebba84aeb6 for similar situations.
Reported-by: Alexandre Garcia
Reviewed-by: Andres Freund
Discussion: https://postgr.es/m/CAPYLKR6yxV4=pfW0Gwij7aPNiiPx+3ib4USVYnbuQdUtmkMaEA@mail.gmail.com
Andres suspects that this bug may have wider-ranging consequences, but I
couldn't find anything.
M src/backend/access/heap/heapam.c
Rename base64 routines to avoid conflict with Solaris built-in functions.
commit : d07f79a9cc7bf119bca2eacc7f90f1aab0fc9972
author : Tom Lane <[email protected]>
date : Wed, 28 Feb 2018 18:33:45 -0500
committer: Tom Lane <[email protected]>
date : Wed, 28 Feb 2018 18:33:45 -0500
Solaris 11.4 has built-in functions named b64_encode and b64_decode.
Rename ours to something else to avoid the conflict (fortunately,
ours are static so the impact is limited).
One could wish for less duplication of code in this area, but that
would be a larger patch and not very suitable for back-patching.
Since this is a portability fix, we want to put it into all supported
branches.
Report and initial patch by Rainer Orth, reviewed and adjusted a bit
by Michael Paquier
Discussion: https://postgr.es/m/[email protected]
M contrib/pgcrypto/pgp-armor.c
M src/backend/utils/adt/encode.c
Remove restriction on SQL block length in isolationtester scanner.
commit : cadb14c271bf1e303a11c6c75c3fd02299be743a
author : Tom Lane <[email protected]>
date : Wed, 28 Feb 2018 16:57:38 -0500
committer: Tom Lane <[email protected]>
date : Wed, 28 Feb 2018 16:57:38 -0500
specscanner.l had a fixed limit of 1024 bytes on the length of
individual SQL stanzas in an isolation test script. People are
starting to run into that, so fix it by making the buffer resizable.
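A generic sketch of the resizable-buffer approach (function and variable names
are illustrative, not the actual specscanner.l code):
    #include <stdio.h>
    #include <stdlib.h>
    static char    *litbuf = NULL;
    static size_t   litbufsize = 0;
    static size_t   litbufpos = 0;
    /* append one character, doubling the buffer whenever it fills up */
    static void
    addlitchar(char c)
    {
        if (litbufpos + 1 >= litbufsize)
        {
            litbufsize = (litbufsize > 0) ? litbufsize * 2 : 1024;
            litbuf = realloc(litbuf, litbufsize);
            if (litbuf == NULL)
            {
                fprintf(stderr, "out of memory\n");
                exit(1);
            }
        }
        litbuf[litbufpos++] = c;
        litbuf[litbufpos] = '\0';
    }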
Once we allow this in HEAD, it seems inevitable that somebody will
try to back-patch a test that exceeds the old limit, so back-patch
this change as a preventive measure.
Daniel Gustafsson
Discussion: https://postgr.es/m/[email protected]
M src/test/isolation/specscanner.l
Fix up ecpg's configuration so it handles "long long int" in MSVC builds.
commit : 49f9014c8c1e6401eabfad9d1bf30a7c0b4ff2fb
author : Tom Lane <[email protected]>
date : Tue, 27 Feb 2018 16:46:52 -0500
committer: Tom Lane <[email protected]>
date : Tue, 27 Feb 2018 16:46:52 -0500
Although configure-based builds correctly define HAVE_LONG_LONG_INT when
appropriate (in both pg_config.h and ecpg_config.h), builds using the MSVC
scripts failed to do so. This currently has no impact on the backend,
since it uses that symbol nowhere; but it does prevent ecpg from
supporting "long long int". Fix that.
Also, adjust Solution.pm so that in the constructed ecpg_config.h file,
the "#if (_MSC_VER > 1200)" covers only the LONG_LONG_INT-related
#defines, not the whole file. AFAICS this was a thinko on somebody's
part: ENABLE_THREAD_SAFETY should always be defined in Windows builds,
and in branches using USE_INTEGER_DATETIMES, the setting of that shouldn't
depend on the compiler version either. If I'm wrong, I imagine the
buildfarm will say so.
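After the fix, the generated ecpg_config.h should contain something along
these lines (illustrative content, not the literal Solution.pm output):
    /* only the long-long symbols stay under the compiler-version guard */
    #if (_MSC_VER > 1200)
    #define HAVE_LONG_LONG_INT 1
    #define HAVE_LONG_LONG_INT_64 1
    #endif
    /* unconditional in Windows builds, regardless of compiler version */
    #define ENABLE_THREAD_SAFETY 1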
Per bug #15080 from Jonathan Allen; issue diagnosed by Michael Meskes
and Andrew Gierth. Back-patch to all supported branches.
Discussion: https://postgr.es/m/[email protected]
M src/include/pg_config.h.win32
M src/tools/msvc/Solution.pm
Remove regression tests' CREATE FUNCTION commands for unused C functions.
commit : 5cedaeca261bfa330981e9e9abc7fb01b5a246cb
author : Tom Lane <[email protected]>
date : Tue, 27 Feb 2018 15:04:21 -0500
committer: Tom Lane <[email protected]>
date : Tue, 27 Feb 2018 15:04:21 -0500
I removed these functions altogether in HEAD, in commit db3af9feb, and
it emerges that that causes trouble for cross-branch upgrade testing.
We could put back stub functions but that seems pretty silly. Instead,
back-patch a minimal subset of db3af9feb, namely just removing the
CREATE FUNCTION commands.
Discussion: https://postgr.es/m/[email protected]
M src/test/regress/input/create_function_1.source
M src/test/regress/input/create_function_2.source
M src/test/regress/output/create_function_1.source
M src/test/regress/output/create_function_2.source
Prevent dangling-pointer access when update trigger returns old tuple.
commit : 5ccb77586955375167917c9910c0fc29cd3cfea5
author : Tom Lane <[email protected]>
date : Tue, 27 Feb 2018 13:27:38 -0500
committer: Tom Lane <[email protected]>
date : Tue, 27 Feb 2018 13:27:38 -0500
A before-update row trigger may choose to return the "new" or "old" tuple
unmodified. ExecBRUpdateTriggers failed to consider the second
possibility, and would proceed to free the "old" tuple even if it was the
one returned, leading to subsequent access to already-deallocated memory.
In debug builds this reliably leads to an "invalid memory alloc request
size" failure; in production builds it might accidentally work, but data
corruption is also possible.
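The essence of the fix is a guard along these lines (variable names
illustrative, not the exact trigger.c hunk):
    /* free the fetched "old" tuple only if the trigger didn't hand it back */
    if (newtuple != trigtuple)
        heap_freetuple(trigtuple);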
This is a very old bug. There are probably a couple of reasons it hasn't
been noticed up to now. It would be more usual to return NULL if one
wanted to suppress the update action; returning "old" is significantly less
efficient since the update will occur anyway. Also, none of the standard
PLs would ever cause this because they all returned freshly-manufactured
tuples even if they were just copying "old". But commit 4b93f5799 changed
that for plpgsql, making it possible to see the bug with a plpgsql trigger.
Still, this is certainly legal behavior for a trigger function, so it's
ExecBRUpdateTriggers's fault not plpgsql's.
It seems worth creating a test case that exercises returning "old" directly
with a C-language trigger; testing this through plpgsql seems unreliable
because its behavior might change again.
Report and fix by Rushabh Lathia; regression test case by me.
Back-patch to all supported branches.
Discussion: https://postgr.es/m/CAGPqQf1P4pjiNPrMof=P_16E-DFjt457j+nH2ex3=nBTew7tXw@mail.gmail.com
M src/backend/commands/trigger.c
M src/test/regress/expected/triggers.out
M src/test/regress/input/create_function_1.source
M src/test/regress/output/create_function_1.source
M src/test/regress/regress.c
M src/test/regress/sql/triggers.sql