Stamp 11.4.
commit : e5f26d79badfae8018ac70f2137158fe36246c2b
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Mon, 17 Jun 2019 17:15:30 -0400
committer: Tom Lane <tgl@sss.pgh.pa.us>
date : Mon, 17 Jun 2019 17:15:30 -0400
M configure
M configure.in
M doc/bug.template
M src/include/pg_config.h.win32
M src/interfaces/libpq/libpq.rc.in
M src/port/win32ver.rc
Last-minute updates for release notes.
commit : a4e4418c3f68986607d7b588389e026108c79d71
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Mon, 17 Jun 2019 10:53:45 -0400
committer: Tom Lane <tgl@sss.pgh.pa.us>
date : Mon, 17 Jun 2019 10:53:45 -0400
Security: CVE-2019-10164
M doc/src/sgml/release-11.sgml
Translation updates
commit : bf94911d437c1c2524feb0bdb07b7263e7399abd
author : Peter Eisentraut <peter@eisentraut.org>
date : Mon, 17 Jun 2019 15:04:41 +0200
committer: Peter Eisentraut <peter@eisentraut.org>
date : Mon, 17 Jun 2019 15:04:41 +0200
Source-Git-URL: https://git.postgresql.org/git/pgtranslation/messages.git
Source-Git-Hash: 599a4bccd28710a88972e1a0ef6961c9bad816fc
M src/backend/po/es.po
M src/backend/po/zh_CN.po
M src/bin/initdb/po/de.po
M src/bin/initdb/po/sv.po
M src/bin/pg_dump/po/de.po
M src/bin/pg_verify_checksums/po/zh_CN.po
M src/bin/pg_waldump/nls.mk
A src/bin/pg_waldump/po/zh_CN.po
M src/bin/psql/po/zh_CN.po
M src/bin/scripts/po/de.po
M src/bin/scripts/po/zh_CN.po
M src/interfaces/ecpg/preproc/po/zh_CN.po
M src/interfaces/libpq/po/zh_CN.po
M src/pl/plperl/po/zh_CN.po
M src/pl/plpgsql/src/po/zh_CN.po
M src/pl/plpython/po/zh_CN.po
M src/pl/tcl/nls.mk
A src/pl/tcl/po/zh_CN.po
Fix buffer overflow when processing SCRAM final message in libpq
commit : 27c464e42a9e3cb3779d1ea63b835a3e191682d6
author : Michael Paquier <michael@paquier.xyz>
date : Mon, 17 Jun 2019 22:14:04 +0900
committer: Michael Paquier <michael@paquier.xyz>
date : Mon, 17 Jun 2019 22:14:04 +0900
When a client connects to a rogue server that sends specifically crafted
messages, this can suffice to execute arbitrary code as the operating
system account used by the client.
While at it, fix error handling when decoding an incorrect salt
included in the first message received from the server.
Author: Michael Paquier
Reviewed-by: Jonathan Katz, Heikki Linnakangas
Security: CVE-2019-10164
Backpatch-through: 10
M src/interfaces/libpq/fe-auth-scram.c
Fix buffer overflow when parsing SCRAM verifiers in backend
commit : 4c779ce324a15ffa0171160c52579130f25fcd3f
author : Michael Paquier <michael@paquier.xyz>
date : Mon, 17 Jun 2019 21:48:25 +0900
committer: Michael Paquier <michael@paquier.xyz>
date : Mon, 17 Jun 2019 21:48:25 +0900
Any authenticated user can overflow a stack-based buffer by changing the
user's own password to a purpose-crafted value. This often suffices to
execute arbitrary code as the PostgreSQL operating system account.
This fix was contributed by multiple folks, based on an initial analysis
from Tom Lane. This issue was introduced by 68e61ee, so it was possible
to exploit it at authentication time. It became easier to trigger after
ccae190, which made the SCRAM parsing stricter when changing a password
in the case where the client passes down a verifier already hashed using
SCRAM. Back-patch to v10, where SCRAM was introduced.
Reported-by: Alexander Lakhin
Author: Jonathan Katz, Heikki Linnakangas, Michael Paquier
Security: CVE-2019-10164
Backpatch-through: 10
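For context, the overflowed buffer belongs to the code that parses stored
SCRAM secrets. A schematic (non-working) example of the statement shape
involved, with a placeholder role name and placeholder base64 fields:
ALTER ROLE some_role PASSWORD
    'SCRAM-SHA-256$4096:<base64-salt>$<base64-stored-key>:<base64-server-key>';
-- CVE-2019-10164: a purpose-crafted string in this position could overflow
-- a stack-based buffer while being parsed; the fix hardens that parsing.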
M src/backend/libpq/auth-scram.c
M src/test/regress/expected/password.out
M src/test/regress/sql/password.sql
Revert "Avoid spurious deadlocks when upgrading a tuple lock"
commit : 28dc2c25c579232dbba98a0ec62476b4091df96e
author : Alvaro Herrera <alvherre@alvh.no-ip.org>
date : Sun, 16 Jun 2019 22:24:21 -0400
committer: Alvaro Herrera <alvherre@alvh.no-ip.org>
date : Sun, 16 Jun 2019 22:24:21 -0400
This reverts commits 3da73d6839dc and de87a084c0a5.
This code has some tricky corner cases that I'm not sure are correct and
not properly tested anyway, so I'm reverting the whole thing for next
week's releases (reintroducing the deadlock bug that we set out to fix).
I'll try again afterwards.
Discussion: https://postgr.es/m/E1hbXKQ-0003g1-0C@gemulon.postgresql.org
M src/backend/access/heap/README.tuplock
M src/backend/access/heap/heapam.c
D src/test/isolation/expected/tuplelock-upgrade-no-deadlock.out
M src/test/isolation/isolation_schedule
D src/test/isolation/specs/tuplelock-upgrade-no-deadlock.spec
Doc: update 11.4 release notes through today.
commit : f5ee6a7acc5db0d77f73aa1db531a684179b5478
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Sun, 16 Jun 2019 14:47:34 -0400
committer: Tom Lane <tgl@sss.pgh.pa.us>
date : Sun, 16 Jun 2019 14:47:34 -0400
Also improve wording of some items (thanks to Noah Misch for suggestions).
M doc/src/sgml/release-11.sgml
Prefer timezone name "UTC" over alternative spellings.
commit : 7f28fc8e929e9c63a489de4a359464d700025930
author : Andrew Gierth <rhodiumtoad@postgresql.org>
date : Sat, 15 Jun 2019 18:15:23 +0100
committer: Andrew Gierth <rhodiumtoad@postgresql.org>
date : Sat, 15 Jun 2019 18:15:23 +0100
tzdb 2019a made "UCT" a link to the "UTC" zone rather than a separate
zone with its own abbreviation. Unfortunately, our code for choosing a
timezone in initdb has an arbitrary preference for names earlier in
the alphabet, and so it would choose the spelling "UCT" over "UTC"
when the system is running on a UTC zone.
Commit 23bd3cec6 was backpatched in order to address this issue, but
that code helps only when /etc/localtime exists as a symlink, and does
nothing to help on systems where /etc/localtime is a copy of a zone
file (as is the standard setup on FreeBSD and probably some other
platforms too) or when /etc/localtime is simply absent (giving UTC as
the default).
Accordingly, add a preference for the spelling "UTC", such that if
multiple zone names have equally good content matches, we prefer that
name before applying the existing arbitrary rules. Also add a slightly
lower preference for "Etc/UTC"; lower because that preserves the
previous behaviour of choosing the shorter name, but letting us still
choose "Etc/UTC" over "Etc/UCT" when both exist but "UTC" does
not (not common, but I've seen it happen).
Backpatch all the way, because the tzdb change that sparked this issue
is in those branches too.
M src/bin/initdb/findtimezone.c
First-draft release notes for 11.4.
commit : 0995cefa74510ee0e38d1bf095b2eef2c1ea37c4
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Fri, 14 Jun 2019 16:56:49 -0400
committer: Tom Lane <tgl@sss.pgh.pa.us>
date : Fri, 14 Jun 2019 16:56:49 -0400
As usual, the release notes for other branches will be made by cutting
these down, but put them up for community review first.
M doc/src/sgml/release-11.sgml
Silence compiler warning
commit : 1f8f144fe3a98928a026af9c2a45e57a962cc90d
author : Alvaro Herrera <alvherre@alvh.no-ip.org>
date : Fri, 14 Jun 2019 11:33:40 -0400
committer: Alvaro Herrera <alvherre@alvh.no-ip.org>
date : Fri, 14 Jun 2019 11:33:40 -0400
Introduced in de87a084c0a5.
M src/backend/access/heap/heapam.c
Attempt to identify system timezone by reading /etc/localtime symlink.
commit : 995b4fe0b14fddb8cbe349809116e2bad260fd31
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Fri, 14 Jun 2019 11:25:13 -0400
committer: Tom Lane <tgl@sss.pgh.pa.us>
date : Fri, 14 Jun 2019 11:25:13 -0400
On many modern platforms, /etc/localtime is a symlink to a file within the
IANA database. Reading the symlink lets us find out the name of the system
timezone directly, without going through the brute-force search embodied in
scan_available_timezones(). This shortens the runtime of initdb by some
tens of ms, which is helpful for the buildfarm, and it also allows us to
reliably select the same zone name the system was actually configured for,
rather than possibly choosing one of IANA's many zone aliases. (For
example, in a system configured for "Asia/Tokyo", the brute-force search
would not choose that name but its alias "Japan", on the grounds of the
latter string being shorter. More surprisingly, "Navajo" is preferred
to either "America/Denver" or "US/Mountain", as seen in an old complaint
from Josh Berkus.)
If /etc/localtime doesn't exist, or isn't a symlink, or we can't make
sense of its contents, or the contents match a zone we know but that
zone doesn't match the observed behavior of localtime(), fall back to
the brute-force search.
Also, tweak initdb so that it prints the zone name it selected.
In passing, replace the last few references to the "Olson" database in
code comments with "IANA", as that's been our preferred term since
commit b2cbced9e.
Back-patch of commit 23bd3cec6. The original intention was to not
back-patch, since this can result in cosmetic behavioral changes ---
for example, on my own workstation initdb now chooses "America/New_York",
where it used to prefer "US/Eastern" which is equivalent and shorter.
However, our hand has been more or less forced by tzdb update 2019a,
which made the "UCT" zone fully equivalent to "UTC". Our old code
now prefers "UCT" on the grounds of it being alphabetically first,
and that's making nobody happy. Choosing the alias indicated by
/etc/localtime is a more defensible behavior. (Users who don't like
the results can always force the decision by setting the TZ environment
variable before running initdb.)
Patch by me, per a suggestion from Robert Haas; review by Michael Paquier
Discussion: https://postgr.es/m/7408.1525812528@sss.pgh.pa.us
Discussion: https://postgr.es/m/20190604085735.GD24018@msg.df7cb.de
M src/bin/initdb/findtimezone.c
M src/bin/initdb/initdb.c
M src/interfaces/ecpg/pgtypeslib/dt_common.c
Avoid spurious deadlocks when upgrading a tuple lock
commit : 85600b7b5da42c5166eb1188c173beb3cc356178
author : Alvaro Herrera <alvherre@alvh.no-ip.org>
date : Thu, 13 Jun 2019 17:28:24 -0400
committer: Alvaro Herrera <alvherre@alvh.no-ip.org>
date : Thu, 13 Jun 2019 17:28:24 -0400
When two (or more) transactions are waiting for transaction T1 to release a
tuple-level lock, and transaction T1 upgrades its lock to a higher level, a
spurious deadlock can be reported among the waiting transactions when T1
finishes. The simplest example case seems to be:
T1: select id from job where name = 'a' for key share;
Y: select id from job where name = 'a' for update; -- starts waiting for T1
Z: select id from job where name = 'a' for key share;
T1: update job set name = 'b' where id = 1;
Z: update job set name = 'c' where id = 1; -- starts waiting for T1
T1: rollback;
At this point, transaction Y is rolled back on account of a deadlock: Y
holds the heavyweight tuple lock and is waiting for the Xmax to be released,
while Z holds part of the multixact and tries to acquire the heavyweight
lock (per protocol) and goes to sleep; once T1 releases its part of the
multixact, Z is awakened only to be put back to sleep on the heavyweight
lock that Y is holding while sleeping. Kaboom.
This can be avoided by having Z skip the heavyweight lock acquisition. As
far as I can see, the biggest downside is that if there are multiple Z
transactions, the order in which they resume after T1 finishes is not
guaranteed.
Backpatch to 9.6. The patch applies cleanly on 9.5, but the new tests don't
work there (because isolationtester is not smart enough), so I'm not going
to risk it.
Author: Oleksii Kliukin
Discussion: https://postgr.es/m/B9C9D7CD-EB94-4635-91B6-E558ACEC0EC3@hintbits.com
M src/backend/access/heap/README.tuplock
M src/backend/access/heap/heapam.c
A src/test/isolation/expected/tuplelock-upgrade-no-deadlock.out
M src/test/isolation/isolation_schedule
A src/test/isolation/specs/tuplelock-upgrade-no-deadlock.spec
Mark ReplicationSlotCtl as PGDLLIMPORT.
commit : 07accce500d81957d059761a1055702d147b29a7
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Thu, 13 Jun 2019 10:53:17 -0400
committer: Tom Lane <tgl@sss.pgh.pa.us>
date : Thu, 13 Jun 2019 10:53:17 -0400
Also MyReplicationSlot, in branches where it wasn't already.
This was discussed in the thread that resulted in c572599c6, but
for some reason nobody pulled the trigger. Now that we have another
request for the same thing, we should just do it.
Craig Ringer
Discussion: https://postgr.es/m/CAMsr+YFTsq-86MnsNng=mPvjjh5EAbzfMK0ptJPvzyvpFARuRg@mail.gmail.com
Discussion: https://postgr.es/m/345138875.20190611151943@cybertec.at
M src/include/replication/slot.h
postgres_fdw: Account for triggers in non-direct remote UPDATE planning.
commit : 2144601821618ddd007b4ce6b7f081a8ac6f65c9
author : Etsuro Fujita <efujita@postgresql.org>
date : Thu, 13 Jun 2019 17:59:11 +0900
committer: Etsuro Fujita <efujita@postgresql.org>
date : Thu, 13 Jun 2019 17:59:11 +0900
Previously, postgresPlanForeignModify planned an UPDATE on a foreign
table so that only the columns that were explicit targets of the UPDATE
were transmitted, to avoid unnecessary data transfer. But if there were
BEFORE ROW UPDATE triggers on the foreign table, those triggers might
change values for non-target columns, in which case we would miss
sending the changed values for those columns. Prevent optimizing away
the transmission of all columns if there are BEFORE ROW UPDATE triggers
on the foreign table.
This is an oversight in commit 7cbe57c34 which added triggers on foreign
tables, so apply the patch all the way back to 9.4 where that came in.
Author: Shohei Mochizuki
Reviewed-by: Amit Langote
Discussion: https://postgr.es/m/201905270152.x4R1q3qi014550@toshiba.co.jp
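A hedged sketch of the scenario (the foreign table ft, its columns, and the
trigger are invented for illustration): a BEFORE ROW UPDATE trigger changes
a column that the UPDATE statement does not target, so its new value must
also be sent to the remote server.
-- Assume ft is a postgres_fdw foreign table with columns (a, b, updated_at).
CREATE FUNCTION set_updated_at() RETURNS trigger LANGUAGE plpgsql AS
$$ BEGIN NEW.updated_at := now(); RETURN NEW; END $$;
CREATE TRIGGER ft_touch BEFORE UPDATE ON ft
    FOR EACH ROW EXECUTE PROCEDURE set_updated_at();
-- Only b is an explicit target, but the trigger also changes updated_at;
-- before this fix the remote UPDATE omitted updated_at's new value.
UPDATE ft SET b = 42;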
M contrib/postgres_fdw/expected/postgres_fdw.out
M contrib/postgres_fdw/postgres_fdw.c
M contrib/postgres_fdw/sql/postgres_fdw.sql
Doc: improve description of allowed spellings for Boolean input.
commit : afaa32daf293163cb9612bdb20a04a5fcb26309d
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Wed, 12 Jun 2019 22:54:46 -0400
committer: Tom Lane <tgl@sss.pgh.pa.us>
date : Wed, 12 Jun 2019 22:54:46 -0400
datatype.sgml failed to explain that boolin() accepts any unique
prefix of the basic input strings. Indeed it was actively misleading
because it called out a few minimal prefixes without mentioning that
there were more valid inputs.
I also felt that it wasn't doing anybody any favors by conflating
SQL key words, valid Boolean input, and string literals containing
valid Boolean input. Rewrite in hopes of reducing the confusion.
Per bug #15836 from Yuming Wang, as diagnosed by David Johnston.
Back-patch to supported branches.
Discussion: https://postgr.es/m/15836-656fab055735f511@postgresql.org
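For illustration, the behavior being documented means any unique prefix of
the basic spellings is accepted:
SELECT 'tru'::boolean, 'fal'::boolean, 'ye'::boolean, 'of'::boolean;
-- all succeed, yielding t, f, t, f
SELECT 'o'::boolean;
-- fails: 'o' is ambiguous between 'on' and 'off'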
M doc/src/sgml/datatype.sgml
Fix incorrect printing of queries with duplicated join names.
commit : f95d8f81062a7afa00fa034022724734a1ff5e60
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Wed, 12 Jun 2019 19:42:38 -0400
committer: Tom Lane <tgl@sss.pgh.pa.us>
date : Wed, 12 Jun 2019 19:42:38 -0400
Given a query in which multiple JOIN nodes used the same alias
(which'd necessarily be in different sub-SELECTs), ruleutils.c
would assign the JOIN nodes distinct aliases for clarity ...
but then it forgot to print the modified aliases when dumping
the JOIN nodes themselves. This results in a dump/reload hazard
for views, because the emitted query is flat-out incorrect:
Vars will be printed with table names that have no referent.
This has been wrong for a long time, so back-patch to all supported
branches.
Philip Dubé
Discussion: https://postgr.es/m/CY4PR2101MB080246F2955FF58A6ED1FEAC98140@CY4PR2101MB0802.namprd21.prod.outlook.com
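A hypothetical reproducer along these lines (table and view names are
invented); before the fix, the view definition emitted by ruleutils.c could
reference a renamed join alias that was never declared:
CREATE TABLE t1 (a int);
CREATE TABLE t2 (a int);
CREATE VIEW vv AS
    SELECT s1.a AS a1, s2.a AS a2
    FROM (SELECT j.a FROM (t1 JOIN t2 USING (a)) AS j) s1,
         (SELECT j.a FROM (t1 JOIN t2 USING (a)) AS j) s2;
SELECT pg_get_viewdef('vv'::regclass);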
M src/backend/utils/adt/ruleutils.c
M src/test/regress/expected/create_view.out
M src/test/regress/sql/create_view.sql
doc: Fix grammatical error in partitioning docs
commit : e23338cec4fb088235f27949c4f298b9738877d9
author : David Rowley <drowley@postgresql.org>
date : Thu, 13 Jun 2019 10:35:27 +1200
committer: David Rowley <drowley@postgresql.org>
date : Thu, 13 Jun 2019 10:35:27 +1200
Reported-by: Amit Langote
Discussion: https://postgr.es/m/CA+HiwqGZFkKi0TkBGYpr2_5qrRAbHZoP47AP1BRLUOUkfQdy_A@mail.gmail.com
Backpatch-through: 10
M doc/src/sgml/ddl.sgml
In walreceiver, don't try to do ereport() in a signal handler.
commit : 9346d396fd4a643653b5f3822dbfbd9968b32679
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Wed, 12 Jun 2019 17:29:48 -0400
committer: Tom Lane <tgl@sss.pgh.pa.us>
date : Wed, 12 Jun 2019 17:29:48 -0400
This is quite unsafe, even for the case of ereport(FATAL) where we won't
return control to the interrupted code, and despite this code's use of
a flag to restrict the areas where we'd try to do it. It's possible
for example that we interrupt malloc or free while that's holding a lock
that's meant to protect against cross-thread interference. Then, any
attempt to do malloc or free within ereport() will result in a deadlock,
preventing the walreceiver process from exiting in response to SIGTERM.
We hypothesize that this explains some hard-to-reproduce failures seen
in the buildfarm.
Hence, get rid of the immediate-exit code in WalRcvShutdownHandler,
as well as the logic associated with WalRcvImmediateInterruptOK.
Instead, we need to take care that potentially-blocking operations
in the walreceiver's data transmission logic (libpqwalreceiver.c)
will respond reasonably promptly to the process's latch becoming
set and then call ProcessWalRcvInterrupts. Much of the needed code
for that was already present in libpqwalreceiver.c. I refactored
things a bit so that all the uses of PQgetResult use latch-aware
waiting, but didn't need to do much more.
These changes should be enough to ensure that libpqwalreceiver.c
will respond promptly to SIGTERM whenever it's waiting to receive
data. In principle, it could block for a long time while waiting
to send data too, and this patch does nothing to guard against that.
I think that that hazard is mostly theoretical though: such blocking
should occur only if we fill the kernel's data transmission buffers,
and we don't generally send enough data to make that happen without
waiting for input. If we find out that the hazard isn't just
theoretical, we could fix it by using PQsetnonblocking, but that
would require more ticklish changes than I care to make now.
Back-patch of commit a1a789eb5. This problem goes all the way back
to the origins of walreceiver; but given the substantial reworking
the module received during the v10 cycle, it seems unsafe to assume
that our testing on HEAD validates this patch for pre-v10 branches.
And we'd need to back-patch some prerequisite patches (at least
597a87ccc and its followups, maybe other things), increasing the risk
of problems. Given the dearth of field reports matching this problem,
it's not worth much risk. Hence back-patch to v10 and v11 only.
Patch by me; thanks to Thomas Munro for review.
Discussion: https://postgr.es/m/20190416070119.GK2673@paquier.xyz
M src/backend/replication/libpqwalreceiver/libpqwalreceiver.c
M src/backend/replication/walreceiver.c
M src/include/replication/walreceiver.h
Fix ALTER COLUMN TYPE failure with a partial exclusion constraint.
commit : 0b6edb9fb3d24c73b21917945e830e1c84135575
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Wed, 12 Jun 2019 12:29:24 -0400
committer: Tom Lane <tgl@sss.pgh.pa.us>
date : Wed, 12 Jun 2019 12:29:24 -0400
ATExecAlterColumnType failed to consider the possibility that an index
that needs to be rebuilt might be a child of a constraint that needs to be
rebuilt. We missed this so far because usually a constraint index doesn't
have a direct dependency on its table, just on the constraint object.
But if there's a WHERE clause, then dependency analysis of the WHERE
clause results in direct dependencies on the column(s) mentioned in WHERE.
This led to trying to drop and rebuild both the constraint and its
underlying index.
In v11/HEAD, we successfully drop both the index and the constraint,
and then try to rebuild both, and of course the second rebuild hits a
duplicate-index-name problem. Before v11, it fails with obscure messages
about a missing relation OID, due to trying to drop the index twice.
This is essentially the same kind of problem noted in commit
20bef2c31: the possible dependency linkages are broader than what
ATExecAlterColumnType was designed for. It was probably OK when
written, but it's certainly been broken since the introduction of
partial exclusion constraints. Fix by adding an explicit check
for whether any of the indexes-to-be-rebuilt belong to any of the
constraints-to-be-rebuilt, and ignoring any that do.
In passing, fix a latent bug introduced by commit 8b08f7d48: in
get_constraint_index() we must "continue" not "break" when rejecting
a relation of a wrong relkind. This is harmless today because we don't
expect that code path to be taken anyway; but if there ever were any
relations to be ignored, the existing coding would have an extremely
undesirable dependency on the order of pg_depend entries.
Also adjust a couple of obsolete comments.
Per bug #15835 from Yaroslav Schekin. Back-patch to all supported
branches.
Discussion: https://postgr.es/m/15835-32d9b7a76c06a7a9@postgresql.org
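A sketch of the failing case (table and column names are invented): a
partial exclusion constraint whose WHERE clause mentions the column being
altered.
CREATE TABLE room_booking (
    booked_by integer,
    during    tsrange,
    EXCLUDE USING gist (during WITH &&) WHERE (booked_by <> 0)
);
-- Before the fix, this failed in v11 with a duplicate-index-name error
-- (and with a missing-relation-OID error in older branches), because both
-- the constraint and its underlying index were queued for rebuild.
ALTER TABLE room_booking ALTER COLUMN booked_by TYPE bigint;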
M src/backend/catalog/pg_depend.c
M src/backend/commands/tablecmds.c
M src/test/regress/expected/alter_table.out
M src/test/regress/sql/alter_table.sql
Fix handling of COMMENT for domain constraints
commit : fa5f3a4bcca7c222b4ab5a6aba27373aea49a2ec
author : Michael Paquier <michael@paquier.xyz>
date : Wed, 12 Jun 2019 11:30:41 +0900
committer: Michael Paquier <michael@paquier.xyz>
date : Wed, 12 Jun 2019 11:30:41 +0900
For a non-superuser, changing a comment on a domain constraint led to a
cache lookup failure because the code tried to perform the ownership
lookup on the constraint OID itself, thinking that it was a type, but
this check needs to happen on the type the domain constraint relies on.
As that type can be looked up directly from the constraint OID, first
fetch the type OID and perform the ownership check on it.
This has been broken since 7eca575, which split the handling of comments
for table constraints and domain constraints, so back-patch down to
9.5.
Reported-by: Clemens Ladisch
Author: Daniel Gustafsson, Michael Paquier
Reviewed-by: Álvaro Herrera
Discussion: https://postgr.es/m/15833-808e11904835d26f@postgresql.org
Backpatch-through: 9.5
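A minimal illustration (domain and constraint names are invented); run as
the non-superuser who owns the domain, this previously drew a cache lookup
failure instead of succeeding:
CREATE DOMAIN posint AS integer
    CONSTRAINT posint_positive CHECK (VALUE > 0);
COMMENT ON CONSTRAINT posint_positive ON DOMAIN posint IS 'must be positive';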
M src/backend/catalog/objectaddress.c
M src/test/regress/input/constraints.source
M src/test/regress/output/constraints.source
doc: Add best practices section to partitioning docs
commit : 936b5e589e041d04c9dd9ad66883d45caaf0665e
author : David Rowley <drowley@postgresql.org>
date : Wed, 12 Jun 2019 08:09:11 +1200
committer: David Rowley <drowley@postgresql.org>
date : Wed, 12 Jun 2019 08:09:11 +1200
A few questionable partitioning designs have been cropping up lately
around the mailing lists. Generally, these cases have used too many
partitions, which has caused performance or OOM problems for the users.
Since we have very little else to guide users toward good design, here we
add a new section to the partitioning documentation with some best
practice guidelines for good design.
Reviewed-by: Justin Pryzby, Amit Langote, Alvaro Herrera
Discussion: https://postgr.es/m/CAKJS1f-2rx+E9mG3xrCVHupefMjAp1+tpczQa9SEOZWyU7fjEA@mail.gmail.com
Backpatch-through: 10
M doc/src/sgml/ddl.sgml
Fix conversion of JSON strings to JSON output columns in json_to_record().
commit : 1c9034579c026562ae6db5a3f2d3a7678653b9ff
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Tue, 11 Jun 2019 13:33:08 -0400
committer: Tom Lane <tgl@sss.pgh.pa.us>
date : Tue, 11 Jun 2019 13:33:08 -0400
json_to_record(), when an output column is declared as type json or jsonb,
should emit the corresponding field of the input JSON object. But it got
this slightly wrong when the field is just a string literal: it failed to
escape the contents of the string. That typically resulted in syntax
errors if the string contained any double quotes or backslashes.
jsonb_to_record() handles such cases correctly, but I added corresponding
test cases for it too, to prevent future backsliding.
Improve the documentation, as it provided only a very hand-wavy
description of the conversion rules used by these functions.
Per bug report from Robert Vollmert. Back-patch to v10 where the
error was introduced (by commit cf35346e8).
Note that PG 9.4 - 9.6 also get this case wrong, but differently so:
they feed the de-escaped contents of the string literal to json[b]_in.
That behavior is less obviously wrong, so possibly it's being depended on
in the field, so I won't risk trying to make the older branches behave
like the newer ones.
Discussion: https://postgr.es/m/D6921B37-BD8E-4664-8D5F-DB3525765DCD@vllmrt.net
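To illustrate the fixed case: the output column x is declared as json and
the matching field is a plain JSON string containing a double quote.
SELECT *
    FROM json_to_record('{"x": "say \"hello\""}') AS t(x json);
-- Before the fix this raised a syntax error because the string was emitted
-- without re-escaping; afterwards it returns the JSON value "say \"hello\"".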
M doc/src/sgml/func.sgml
M src/backend/utils/adt/jsonfuncs.c
M src/test/regress/expected/json.out
M src/test/regress/expected/jsonb.out
M src/test/regress/sql/json.sql
M src/test/regress/sql/jsonb.sql
Don't access catalogs to validate GUCs when not connected to a DB.
commit : c0155601763a153658a72a628bd66ebacdd2670a
author : Andres Freund <andres@anarazel.de>
date : Mon, 10 Jun 2019 23:20:48 -0700
committer: Andres Freund <andres@anarazel.de>
date : Mon, 10 Jun 2019 23:20:48 -0700
Vignesh found this bug in the check hook for default_table_access_method,
but that code was just copied from older GUCs. Investigation by Michael
and me then found the bug in
further places.
When not connected to a database (e.g. in a walsender connection), we
cannot perform (most) GUC checks that need database access. Even when
only shared tables are needed, unless they're
nailed (c.f. RelationCacheInitializePhase2()), they cannot be accessed
without pg_class etc. being present.
Fix by extending the existing IsTransactionState() checks to also
check for MyDatabaseOid.
Reported-By: Vignesh C, Michael Paquier, Andres Freund
Author: Vignesh C, Andres Freund
Discussion: https://postgr.es/m/CALDaNm1KXK9gbZfY-p_peRFm_XrBh1OwQO1Kk6Gig0c0fVZ2uw%40mail.gmail.com
Backpatch: 9.4-
M src/backend/commands/tablespace.c
M src/backend/utils/cache/ts_cache.c
Make pg_dump emit ATTACH PARTITION instead of PARTITION OF (reprise)
commit : 6a781c4f5fecc5cde5444459f4cd187872487bda
author : Alvaro Herrera <alvherre@alvh.no-ip.org>
date : Mon, 10 Jun 2019 18:56:23 -0400
committer: Alvaro Herrera <alvherre@alvh.no-ip.org>
date : Mon, 10 Jun 2019 18:56:23 -0400
Using PARTITION OF can result in column ordering being changed from the
database being dumped, if the partition uses a column layout different
from the parent's. It's not pg_dump's job to editorialize on table
definitions, so this is not acceptable; back-patch all the way back to
pg10, where partitioned tables were introduced.
This change also ensures that partitions end up in the correct
tablespace, if different from the parent's; this is an oversight in
ca4103025dfe (in pg12 only). Partitioned indexes (in pg11) don't have
this problem, because they're already created as independent indexes and
attached to their parents afterwards.
This change also has the advantage that the partition is restorable from
the dump (as a standalone table) even if its parent table isn't
restored.
The original commits (3b23552ad8bb in branch master) failed to cover
subsidiary column elements correctly, such as NOT NULL and CHECK
constraints, as reported by Rushabh Lathia (initially as a failure
to restore serial columns). They were reverted. This recapitulation
commit fixes those problems.
Add some pg_dump tests to verify these things more exhaustively,
including constraints with legacy-inheritance tables, which were not
tested originally. In branches 10 and 11, add a local constraint to the
pg_dump test partition that was added by commit 2d7eeb1b1492 to master.
Author: Álvaro Herrera, David Rowley
Reviewed-by: Álvaro Herrera
Discussion: https://postgr.es/m/CAKJS1f_1c260nOt_vBJ067AZ3JXptXVRohDVMLEBmudX1YEx-A@mail.gmail.com
Discussion: https://postgr.es/m/20190423185007.GA27954@alvherre.pgsql
Discussion: https://postgr.es/m/CAGPqQf0iQV=PPOv2Btog9J9AwOQp6HmuVd6SbGTR_v3Zp2XT1w@mail.gmail.com
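Schematically, the change in dump output looks like this (schema, table,
and bound values are invented):
-- Previously emitted form: the partition's column order was dictated by
-- the parent, and the partition could not be restored without it.
CREATE TABLE public.sales_2019 PARTITION OF public.sales
    FOR VALUES FROM ('2019-01-01') TO ('2020-01-01');
-- Form emitted after this change: a standalone table definition, then an
-- explicit attach.
CREATE TABLE public.sales_2019 (
    sold_at date NOT NULL,
    amount numeric
);
ALTER TABLE ONLY public.sales ATTACH PARTITION public.sales_2019
    FOR VALUES FROM ('2019-01-01') TO ('2020-01-01');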
M src/bin/pg_dump/pg_dump.c
M src/bin/pg_dump/t/002_pg_dump.pl
Fix operator naming in pg_trgm GUC option descriptions
commit : bc93a5ab407051c382fdabc190d1d0ffd92efb60
author : Alexander Korotkov <akorotkov@postgresql.org>
date : Mon, 10 Jun 2019 20:14:19 +0300
committer: Alexander Korotkov <akorotkov@postgresql.org>
date : Mon, 10 Jun 2019 20:14:19 +0300
Descriptions of the pg_trgm GUC options had % replaced with %%, as if they
were printf-style format strings. But that's not needed since they are just
plain strings. This commit fixes that. Backpatch to the oldest supported
version, since this error has been present from the beginning.
Reported-by: Masahiko Sawada
Discussion: https://postgr.es/m/CAD21AoAgPKODUsu9gqUFiNqEOAqedStxJ-a0sapsJXWWAVp%3Dxg%40mail.gmail.com
Backpatch-through: 9.4
M contrib/pg_trgm/trgm_op.c
Add docs of missing GUC to pgtrgm.sgml
commit : 19dc23a5ef7559cf22cdf586b9b4be7ad4497ba0
author : Alexander Korotkov <akorotkov@postgresql.org>
date : Mon, 10 Jun 2019 19:38:13 +0300
committer: Alexander Korotkov <akorotkov@postgresql.org>
date : Mon, 10 Jun 2019 19:38:13 +0300
be8a7a68 introduced the pg_trgm.strict_word_similarity_threshold GUC but
missed documentation for it. This commit fixes that.
Discussion: https://postgr.es/m/fc907f70-448e-fda3-3aa4-209a59597af0%402ndquadrant.com
Author: Ian Barwick
Reviewed-by: Masahiko Sawada, Michael Paquier
Backpatch-through: 9.6
M doc/src/sgml/pgtrgm.sgml
Fix docs indentation in pgtrgm.sgml
commit : 76bccb12dbb8a683bd6659ab815d42bfad9b437c
author : Alexander Korotkov <akorotkov@postgresql.org>
date : Mon, 10 Jun 2019 19:28:47 +0300
committer: Alexander Korotkov <akorotkov@postgresql.org>
date : Mon, 10 Jun 2019 19:28:47 +0300
5871b884 introduced the pg_trgm.word_similarity_threshold GUC, but its
documentation used incorrect indentation. This commit fixes that. Backpatch
to make backpatching other documentation fixes easier.
Discussion: https://postgr.es/m/4c735d30-ab59-fc0e-45d8-f90eb5ed3855%402ndquadrant.com
Author: Ian Barwick
Backpatch-through: 9.6
M doc/src/sgml/pgtrgm.sgml
Fix copy-pasto in freeing memory on error in vacuumlo.
commit : 12a45a20aa25468c56311e71320bb586c2490836
author : Heikki Linnakangas <heikki.linnakangas@iki.fi>
date : Fri, 7 Jun 2019 12:43:55 +0300
committer: Heikki Linnakangas <heikki.linnakangas@iki.fi>
date : Fri, 7 Jun 2019 12:43:55 +0300
It's harmless to call PQfreemem() with a NULL argument, so the only
consequence was that if allocating 'schema' failed, but allocating 'table'
or 'field' succeeded, we would leak a bit of memory. That's highly
unlikely to happen, so this is mostly academic, but let's get it right.
Per bug #15838 from Timur Birsh. Backpatch back to 9.5, where the
PQfreemem() calls were introduced.
Discussion: https://www.postgresql.org/message-id/15838-3221652c72c5e69d@postgresql.org
M contrib/vacuumlo/vacuumlo.c
Fix inconsistency in comments atop ExecParallelEstimate.
commit : 17aa054a79961556da8f6bbc158a7786345ac926
author : Amit Kapila <akapila@postgresql.org>
date : Fri, 7 Jun 2019 05:29:11 +0530
committer: Amit Kapila <akapila@postgresql.org>
date : Fri, 7 Jun 2019 05:29:11 +0530
When this code was initially introduced in commit d1b7c1ff, the structure
used was SharedPlanStateInstrumentation, but when it was later changed to
the Instrumentation structure in commit b287df70, we forgot to update the
comment.
Reported-by: Wu Fei
Author: Wu Fei
Reviewed-by: Amit Kapila
Backpatch-through: 9.6
Discussion: https://postgr.es/m/52E6E0843B9D774C8C73D6CF64402F0562215EB2@G08CNEXMBPEKD02.g08.fujitsu.local
M src/backend/executor/execParallel.c
Docs: concurrent builds of partitioned indexes are not supported
commit : a15e8ce7b647ccc4580bf00374c33e5ab793f3e3
author : David Rowley <drowley@postgresql.org>
date : Thu, 6 Jun 2019 12:37:04 +1200
committer: David Rowley <drowley@postgresql.org>
date : Thu, 6 Jun 2019 12:37:04 +1200
Document that CREATE INDEX CONCURRENTLY is not currently supported for
indexes on partitioned tables.
Discussion: https://postgr.es/m/CAKJS1f_CErd2z9L21Q8OGLD4TgH7yw1z9MAtHTSO13sXVG-yow@mail.gmail.com
Backpatch-through: 11
M doc/src/sgml/ref/create_index.sgml
Document piecemeal construction of partitioned indexes
commit : a99b653ac1e6689d13d77622dd2c31309184f28c
author : Alvaro Herrera <alvherre@alvh.no-ip.org>
date : Tue, 4 Jun 2019 16:43:45 -0400
committer: Alvaro Herrera <alvherre@alvh.no-ip.org>
date : Tue, 4 Jun 2019 16:43:45 -0400
Continuous operation cannot be achieved without applying this technique,
so it needs to be properly described.
Author: Álvaro Herrera
Reported-by: Tom Lane
Discussion: https://postgr.es/m/8756.1556302759@sss.pgh.pa.us
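The technique being documented goes roughly as follows (the measurement
table and index names are illustrative, echoing the ddl.sgml example):
create an invalid parent index with ON ONLY, build each partition's index
concurrently, then attach them; the parent index becomes valid once every
partition's index is attached.
CREATE INDEX measurement_usls_idx ON ONLY measurement (unitsales);
CREATE INDEX CONCURRENTLY measurement_usls_200602_idx
    ON measurement_y2006m02 (unitsales);
ALTER INDEX measurement_usls_idx
    ATTACH PARTITION measurement_usls_200602_idx;
-- ... repeat the last two steps for each remaining partition ...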
M doc/src/sgml/ddl.sgml
Fix contrib/auto_explain to not cause problems in parallel workers.
commit : 57e85fa2cb7e7a99306be4b62c6a547532b7e849
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Mon, 3 Jun 2019 18:06:04 -0400
committer: Tom Lane <tgl@sss.pgh.pa.us>
date : Mon, 3 Jun 2019 18:06:04 -0400
A parallel worker process should not be making any decisions of its
own about whether to auto-explain. If the parent session process
passed down flags asking for instrumentation data, do that, otherwise
not. Trying to enable instrumentation anyway leads to bugs like the
"could not find key N in shm TOC" failure reported in bug #15821
from Christian Hofstaedtler.
We can implement this cheaply by piggybacking on the existing logic
for not doing anything when we've chosen not to sample a statement.
While at it, clean up some tin-eared coding related to the sampling
feature, including an off-by-one error that meant that asking for 1.0
sampling rate didn't actually result in sampling every statement.
Although the specific case reported here only manifested in >= v11,
I believe that related misbehaviors can be demonstrated in any version
that has parallel query; and the off-by-one error is certainly there
back to 9.6 where that feature was added. So back-patch to 9.6.
Discussion: https://postgr.es/m/15821-5eb422e980594075@postgresql.org
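For reference, the sampling behavior in question is controlled by settings
along these lines; before this fix, a rate of exactly 1.0 could still skip
statements because of the off-by-one error.
LOAD 'auto_explain';
SET auto_explain.log_min_duration = 0;  -- log every completed statement
SET auto_explain.sample_rate = 1.0;     -- intended: sample 100% of statements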
M contrib/auto_explain/auto_explain.c
Fix unsafe memory management in CloneRowTriggersToPartition().
commit : 601084eb1aa8f1ae5303d9cf130d5b8cc385b517
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Mon, 3 Jun 2019 16:59:16 -0400
committer: Tom Lane <tgl@sss.pgh.pa.us>
date : Mon, 3 Jun 2019 16:59:16 -0400
It's not really supported to call systable_getnext() in a different
memory context than systable_beginscan() was called in, and it's
*definitely* not safe to do so and then reset that context between
calls. I'm not very clear on how this code survived
CLOBBER_CACHE_ALWAYS testing ... but Alexander Lakhin found a case
that would crash it pretty reliably.
Per bug #15828. Fix, and backpatch to v11 where this code came in.
Discussion: https://postgr.es/m/15828-f6ddd7df4852f473@postgresql.org
M src/backend/commands/tablecmds.c
Fix documentation of check_option in information_schema.views
commit : 3c461d510dc5af7a7c4d4947ad7c2ef703e3646c
author : Michael Paquier <michael@paquier.xyz>
date : Sat, 1 Jun 2019 15:33:58 -0400
committer: Michael Paquier <michael@paquier.xyz>
date : Sat, 1 Jun 2019 15:33:58 -0400
Support for CHECK OPTION on updatable views was added in 9.4, but the
information_schema documentation was never updated to mention it, even
though the information displayed is correct.
Author: Gilles Darold
Discussion: https://postgr.es/m/75d07704-6c74-4f26-656a-10045c01a17e@darold.net
Backpatch-through: 9.4
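For illustration (table and view names are invented), the column looks
like this:
CREATE TABLE tt (a int);
CREATE VIEW positive_a AS
    SELECT a FROM tt WHERE a > 0
    WITH CHECK OPTION;
SELECT table_name, check_option
    FROM information_schema.views
    WHERE table_name = 'positive_a';
-- check_option is CASCADED here; LOCAL and NONE are the other possible values.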
M doc/src/sgml/information_schema.sgml
Fix C++ incompatibilities in plpgsql's header files.
commit : 312017fcc46b56c7c2230dbe0908ba78b384735e
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Fri, 31 May 2019 12:34:54 -0400
committer: Tom Lane <tgl@sss.pgh.pa.us>
date : Fri, 31 May 2019 12:34:54 -0400
Rename some exposed parameters so that they don't conflict with
C++ reserved words.
Back-patch to all supported versions.
George Tarasov
Discussion: https://postgr.es/m/b517ec3918d645eb950505eac8dd434e@gaz-is.ru
M src/pl/plpgsql/src/pl_comp.c
M src/pl/plpgsql/src/pl_exec.c
M src/pl/plpgsql/src/plpgsql.h
Make error logging in extended statistics more consistent
commit : 9c9a74cd3257324257ec016e800ce0a6d5af88c7
author : Tomas Vondra <tomas.vondra@postgresql.org>
date : Thu, 30 May 2019 16:16:12 +0200
committer: Tomas Vondra <tomas.vondra@postgresql.org>
date : Thu, 30 May 2019 16:16:12 +0200
Most errors reported in extended statistics are internal issues, and so
should use elog(). The MCV list code was already following this rule, but
the functional dependencies and ndistinct coefficients were using a mix
of elog() and ereport(). Fix this by changing most places to elog(), with
the exception of input functions.
This is a mostly cosmetic change; it makes life a little easier for
translators, as elog() messages are not translated. So backpatch to
PostgreSQL 10, where extended statistics were introduced.
Author: Tomas Vondra
Backpatch-through: 10 where extended statistics were added
Discussion: https://postgr.es/m/20190503154404.GA7478@alvherre.pgsql
M src/backend/statistics/dependencies.c
M src/backend/statistics/mvdistinct.c
In the pg_upgrade test suite, don't write to src/test/regress.
commit : 88a0e3daf862def3503a69f89bc9eeecb7d73736
author : Noah Misch <noah@leadboat.com>
date : Tue, 28 May 2019 12:59:00 -0700
committer: Noah Misch <noah@leadboat.com>
date : Tue, 28 May 2019 12:59:00 -0700
When this suite runs installcheck, redirect file creations from
src/test/regress to src/bin/pg_upgrade/tmp_check/regress. This closes a
race condition in "make -j check-world". If the pg_upgrade suite wrote
to a given src/test/regress/results file in parallel with the regular
src/test/regress invocation writing it, a test failed spuriously. Even
without parallelism, in "make -k check-world", the suite finishing
second overwrote the other's regression.diffs. This revealed test
"largeobject" assuming @abs_builddir@ is getcwd(), so fix that, too.
Buildfarm client REL_10, released fifty-four days ago, supports saving
regression.diffs from its new location. When an older client reports a
pg_upgradeCheck failure, it will no longer include regression.diffs.
Back-patch to 9.5, where pg_upgrade moved to src/bin.
Reviewed (in earlier versions) by Andrew Dunstan.
Discussion: https://postgr.es/m/20181224034411.GA3224776@rfd.leadboat.com
M src/bin/pg_upgrade/test.sh
M src/test/regress/input/largeobject.source
M src/test/regress/output/largeobject.source
M src/test/regress/output/largeobject_1.source
M src/tools/msvc/vcregress.pl
In the pg_upgrade test suite, remove and recreate "tmp_check".
commit : 20103a26094beeadb2019f5b86a57f6eee684d8e
author : Noah Misch <noah@leadboat.com>
date : Tue, 28 May 2019 12:58:30 -0700
committer: Noah Misch <noah@leadboat.com>
date : Tue, 28 May 2019 12:58:30 -0700
This allows "vcregress upgradecheck" to pass twice in immediate
succession, and it's more like how $(prove_check) works. Back-patch to
9.5, where pg_upgrade moved to src/bin.
Discussion: https://postgr.es/m/20190520012436.GA1480421@rfd.leadboat.com
M src/bin/pg_upgrade/test.sh
M src/tools/msvc/vcregress.pl
Doc: fix typo in pgbench random_zipfian() documentation.
commit : 329575db94cda78d467c52f71967acf26fcc5ce2
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Fri, 24 May 2019 11:16:06 -0400
committer: Tom Lane <tgl@sss.pgh.pa.us>
date : Fri, 24 May 2019 11:16:06 -0400
Per bug #15819 from Koizumi Satoru.
Discussion: https://postgr.es/m/15819-e6191bef1f7334c0@postgresql.org
M doc/src/sgml/ref/pgbench.sgml
pg_upgrade: Make test.sh's installcheck use to-be-upgraded version's bindir.
commit : 5d91a9e8ac9cc6dc9d3695fcaa41132f2c4f0a89
author : Andres Freund <andres@anarazel.de>
date : Thu, 23 May 2019 14:46:57 -0700
committer: Andres Freund <andres@anarazel.de>
date : Thu, 23 May 2019 14:46:57 -0700
On master (after 700538), the old version's installed psql was used, even
though the old version might not actually be installed, or might be
installed into a temporary directory. That is commonly the case when just
executing make check for pg_upgrade, since $oldbindir is then just the
current version's $bindir.
In the back branches, with --install specified, psql from the new
version's temporary installation was used; without --install (e.g. for
NO_TEMP_INSTALL, cf. 47b3c26642), the new version's installed psql was
used (which might or might not exist).
Author: Andres Freund
Discussion: https://postgr.es/m/20190522175150.c26f4jkqytahajdg@alap3.anarazel.de
M src/bin/pg_upgrade/test.sh
Fix array size allocation for HashAggregate hash keys.
commit : f7da492dca2a929045414aaf17f2e8cbf778df3d
author : Andrew Gierth <rhodiumtoad@postgresql.org>
date : Thu, 23 May 2019 15:26:01 +0100
committer: Andrew Gierth <rhodiumtoad@postgresql.org>
date : Thu, 23 May 2019 15:26:01 +0100
When there were duplicate columns in the hash key list, the array
sizes could be miscomputed, resulting in access off the end of the
array. Adjust the computation to ensure the array is always large
enough.
(I considered whether the duplicates could be removed in planning, but
I can't rule out the possibility that duplicate columns might have
different hash functions assigned. Simpler to just make sure it works
at execution time regardless.)
Bug apparently introduced in fc4b3dea2 as part of narrowing down the
tuples stored in the hashtable. Reported by Colm McHugh of Salesforce,
though I didn't use their patch. Backpatch back to version 10 where
the bug was introduced.
Discussion: https://postgr.es/m/CAFeeJoKKu0u+A_A9R9316djW-YW3-+Gtgvy3ju655qRHR3jtdA@mail.gmail.com
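A hedged sketch of the kind of query that reaches this path (table and
column names are invented, and whether the planner picks a HashAggregate
depends on costing): the IN subquery is unique-ified by a HashAggregate
whose hash key lists the same column twice.
SELECT 1
    FROM orders o
    WHERE (o.cust_id, o.region_id) IN
          (SELECT c.id, c.id FROM customers c);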
M src/backend/executor/nodeAgg.c
M src/test/regress/expected/aggregates.out
M src/test/regress/sql/aggregates.sql
Fix ordering of GRANT commands in pg_dumpall for tablespaces
commit : a7b2fca15b2552e004c9c1af2bbf07d0d8ef7a8b
author : Michael Paquier <michael@paquier.xyz>
date : Thu, 23 May 2019 10:48:24 +0900
committer: Michael Paquier <michael@paquier.xyz>
date : Thu, 23 May 2019 10:48:24 +0900
This uses a method similar to 68a7c24f and now b8c6014 (applied for
database creation), which guarantees that GRANT commands using WITH
GRANT OPTION are dumped in such a way that cascading dependencies are
respected. Note that tablespaces do not have support for initial
privileges via pg_init_privs, so the same method needs to be applied
again. It would be nice to merge all the logic generating ACL queries
in dumps under the same banner, but this requires extending the support
of pg_init_privs to objects that cannot use it yet, so this is left as
future work.
Discussion: https://postgr.es/m/20190522071555.GB1278@paquier.xyz
Author: Michael Paquier
Reviewed-by: Nathan Bossart
Backpatch-through: 9.6
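The cascading dependency is easiest to see with a schematic example (role
and tablespace names are invented): the grant made by alice can only be
restored after alice has received the privilege WITH GRANT OPTION, so the
dump must emit the statements in this order.
GRANT CREATE ON TABLESPACE ts1 TO alice WITH GRANT OPTION;
SET SESSION AUTHORIZATION alice;
GRANT CREATE ON TABLESPACE ts1 TO bob;
RESET SESSION AUTHORIZATION;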
M src/bin/pg_dump/pg_dumpall.c
Fix ordering of GRANT commands in pg_dump for database creation
commit : 8357a413f439887ef243f9efd2417b1a7409e694
author : Michael Paquier <michael@paquier.xyz>
date : Wed, 22 May 2019 14:48:14 +0900
committer: Michael Paquier <michael@paquier.xyz>
date : Wed, 22 May 2019 14:48:14 +0900
This uses a method similar to 68a7c24f, which guarantees that GRANT
commands using WITH GRANT OPTION are dumped in such a way that cascading
dependencies are respected. As databases do not have support for
initial privileges via pg_init_privs, we need to repeat the same
ACL reordering method.
ACLs for databases were moved from pg_dumpall to pg_dump in v11, so
this impacts pg_dump for v11 and above, and pg_dumpall for v9.6 and
v10.
Discussion: https://postgr.es/m/15788-4e18847520ebcc75@postgresql.org
Author: Nathan Bossart
Reviewed-by: Haribabu Kommi
Backpatch-through: 9.6
M src/bin/pg_dump/pg_dump.c
Minimally fix partial aggregation for aggregates that don't have one argument.
commit : 9fea0b0e287e39c96f1486b0af23102ac5b752a5
author : Andres Freund <andres@anarazel.de>
date : Sun, 19 May 2019 18:01:06 -0700
committer: Andres Freund <andres@anarazel.de>
date : Sun, 19 May 2019 18:01:06 -0700
For partial aggregation combine steps,
AggStatePerTrans->numTransInputs was set to the transition function's
number of inputs, rather than the combine function's number of
inputs (always 1).
That led partial aggregates with strict combine functions to wrongly
check for NOT NULL input, as required by strictness. When the
aggregate wasn't exactly passed one argument, the strictness check was
either omitted (in the 0 args case) or too many arguments were
checked. In the latter case we'd read beyond the end of
FunctionCallInfoData->args (only in master).
AggStatePerTrans->numTransInputs actually has been wrong since 9.6,
where partial aggregates were added. But it turns out to not be
an active problem in 9.6 and 10, because numTransInputs wasn't used at
all for combine functions: Before c253b722f6 there simply was no NULL
check for the input to strict trans functions, and after that the
check was simply hardcoded for the right offset in fcinfo, as it's
done by code specific to combine functions.
In bf6c614a2f2 (11) the strictness check was generalized, with common
code doing the strictness checks for both plain and combine transition
functions, based on numTransInputs. For combine functions this led to
not emitting an expression step to check for strict input in the 0
arguments case, and in the > 1 arguments case, we'd check too many
arguments. Due to the fact that the relevant fcinfo->isnull[2..] was
always zero-initialized (more or less by accident, by being part of
the AggStatePerTrans struct, which is palloc0'ed), there was no
observable damage in the latter case before a9c35cf85ca1f; we just
checked too many array elements.
Due to the changes in a9c35cf85ca1f, the > 1 argument bug became visible,
because these days fcinfo is a) dynamically allocated without being
zeroed b) exactly the length required for the number of specified
arguments (hardcoded to 2 in this case).
This commit only contains a fairly minimal fix, setting numTransInputs
to a hardcoded 1 when building a pertrans for a combine function. It
seems likely that we'll want to clean this up further (e.g. the
arguments to build_pertrans_for_aggref() aren't particularly meaningful
for combine functions). But the wrap date for 12 beta1 is coming up
fast, so it seems good to have a minimal fix in place.
Backpatch to 11. While AggStatePerTrans->numTransInputs was set
wrongly before that, the value was not used for combine functions.
Reported-By: Rajkumar Raghuwanshi
Diagnosed-By: Kyotaro Horiguchi, Jeevan Chalke, Andres Freund, David Rowley
Author: David Rowley, Kyotaro Horiguchi, Andres Freund
Discussion: https://postgr.es/m/CAKcux6=uZEyWyLw0N7HtR9OBc-sWEFeByEZC7t-KDf15FKxVew@mail.gmail.com
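As background, the combine step at issue only appears in parallel plans of
roughly the following shape; a two-argument aggregate such as corr() is one
way to reach the not-exactly-one-argument case (the table name is invented,
and whether a parallel plan is chosen depends on costing).
EXPLAIN (COSTS OFF)
SELECT corr(x, y) FROM measurements;
-- Finalize Aggregate
--   ->  Gather
--         ->  Partial Aggregate
--               ->  Parallel Seq Scan on measurements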
M src/backend/executor/nodeAgg.c
M src/test/regress/expected/aggregates.out
M src/test/regress/sql/aggregates.sql
Fix some grammar in documentation of spgist and pgbench
commit : 0950d25acec66ad02d2fc2d6d75a36ec334ed6f8
author : Michael Paquier <michael@paquier.xyz>
date : Mon, 20 May 2019 09:48:27 +0900
committer: Michael Paquier <michael@paquier.xyz>
date : Mon, 20 May 2019 09:48:27 +0900
Discussion: https://postgr.es/m/92961161-9b49-e42f-0a72-d5d47e0ed4de@postgrespro.ru
Author: Liudmila Mantrova
Reviewed-by: Jonathan Katz, Tom Lane, Michael Paquier
Backpatch-through: 9.4
M doc/src/sgml/ref/pgbench.sgml
M doc/src/sgml/spgist.sgml
Revert "In the pg_upgrade test suite, don't write to src/test/regress."
commit : 9518978e223b758f0efbc28422c5bf164d521f28
author : Noah Misch <noah@leadboat.com>
date : Sun, 19 May 2019 15:24:42 -0700
committer: Noah Misch <noah@leadboat.com>
date : Sun, 19 May 2019 15:24:42 -0700
This reverts commit bd1592e8570282b1650af6b8eede0016496daecd. It had
multiple defects.
Discussion: https://postgr.es/m/12717.1558304356@sss.pgh.pa.us
M src/bin/pg_upgrade/test.sh
M src/test/regress/input/largeobject.source
M src/test/regress/output/largeobject.source
M src/test/regress/output/largeobject_1.source
M src/tools/msvc/vcregress.pl
In the pg_upgrade test suite, don't write to src/test/regress.
commit : d08d880ab41afff57280e69b89144076ae068999
author : Noah Misch <noah@leadboat.com>
date : Sun, 19 May 2019 14:36:44 -0700
committer: Noah Misch <noah@leadboat.com>
date : Sun, 19 May 2019 14:36:44 -0700
When this suite runs installcheck, redirect file creations from
src/test/regress to src/bin/pg_upgrade/tmp_check/regress. This closes a
race condition in "make -j check-world". If the pg_upgrade suite wrote
to a given src/test/regress/results file in parallel with the regular
src/test/regress invocation writing it, a test failed spuriously. Even
without parallelism, in "make -k check-world", the suite finishing
second overwrote the other's regression.diffs. This revealed test
"largeobject" assuming @abs_builddir@ is getcwd(), so fix that, too.
Buildfarm client REL_10, released forty-five days ago, supports saving
regression.diffs from its new location. When an older client reports a
pg_upgradeCheck failure, it will no longer include regression.diffs.
Back-patch to 9.5, where pg_upgrade moved to src/bin.
Reviewed by Andrew Dunstan.
Discussion: https://postgr.es/m/20181224034411.GA3224776@rfd.leadboat.com
M src/bin/pg_upgrade/test.sh
M src/test/regress/input/largeobject.source
M src/test/regress/output/largeobject.source
M src/test/regress/output/largeobject_1.source
M src/tools/msvc/vcregress.pl
Restructure creation of run-time pruning steps.
commit : 592d5d75be9720e575e76ba35c3ff04659ec0603
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Fri, 17 May 2019 19:44:19 -0400
committer: Tom Lane <tgl@sss.pgh.pa.us>
date : Fri, 17 May 2019 19:44:19 -0400
Previously, gen_partprune_steps() always built executor pruning steps
using all suitable clauses, including those containing PARAM_EXEC
Params. This meant that the pruning steps were only completely safe
for executor run-time (scan start) pruning. To prune at executor
startup, we had to ignore the steps involving exec Params. But this
doesn't really work in general, since there may be logic changes
needed as well --- for example, pruning according to the last operator's
btree strategy is the wrong thing if we're not applying that operator.
The rules embodied in gen_partprune_steps() and its minions are
sufficiently complicated that tracking their incremental effects in
other logic seems quite impractical.
Short of a complete redesign, the only safe fix seems to be to run
gen_partprune_steps() twice, once to create executor startup pruning
steps and then again for run-time pruning steps. We can save a few
cycles however by noting during the first scan whether we rejected
any clauses because they involved exec Params --- if not, we don't
need to do the second scan.
In support of this, refactor the internal APIs in partprune.c to make
more use of passing information in the GeneratePruningStepsContext
struct, rather than as separate arguments.
This is, I hope, the last piece of our response to a bug report from
Alan Jackson. Back-patch to v11 where this code came in.
Discussion: https://postgr.es/m/FAD28A83-AC73-489E-A058-2681FA31D648@tvsquared.com
M src/backend/executor/execPartition.c
M src/backend/nodes/copyfuncs.c
M src/backend/nodes/outfuncs.c
M src/backend/nodes/readfuncs.c
M src/backend/partitioning/partprune.c
M src/include/executor/execPartition.h
M src/include/nodes/plannodes.h
M src/include/partitioning/partprune.h
M src/test/regress/expected/partition_prune.out
M src/test/regress/sql/partition_prune.sql
Fix bogus logic for combining range-partitioned columns during pruning.
commit : 51948c4e1fdef88ba9b953bd7b58d19a348732be
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Thu, 16 May 2019 16:25:43 -0400
committer: Tom Lane <tgl@sss.pgh.pa.us>
date : Thu, 16 May 2019 16:25:43 -0400
gen_prune_steps_from_opexps's notion of how to do this was overly
complicated and underly correct.
Per discussion of a report from Alan Jackson (though this fixes only one
aspect of that problem). Back-patch to v11 where this code came in.
Amit Langote
Discussion: https://postgr.es/m/FAD28A83-AC73-489E-A058-2681FA31D648@tvsquared.com
M src/backend/partitioning/partprune.c
M src/test/regress/expected/partition_prune.out
M src/test/regress/sql/partition_prune.sql
Fix partition pruning to treat stable comparison operators properly.
commit : 10c5cc4b4f88d249751e27034a8dd59ea903a698
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Thu, 16 May 2019 11:58:22 -0400
committer: Tom Lane <tgl@sss.pgh.pa.us>
date : Thu, 16 May 2019 11:58:22 -0400
Cross-type comparison operators in a btree or hash opclass might be
only stable not immutable (this is true of timestamp vs. timestamptz
for example). partprune.c ignored this possibility and would perform
plan-time pruning with them anyway, possibly leading to wrong answers
if the environment changed between planning and execution.
To fix, teach gen_partprune_steps() to do things differently when
creating plan-time pruning steps vs. run-time pruning steps.
analyze_partkey_exprs() also needs an extra check, which is rather
annoying but now is not the time to restructure things enough to
avoid that.
While at it, simplify the logic for the plan-time case a little
by insisting that the comparison value be a Const and nothing else.
This relies on the assumption that eval_const_expressions will have
reduced any immutable expression to a Const; which is not quite
100% true, but certainly any case that comes up often enough to be
interesting should have simplification logic there.
Also improve a bunch of inadequate/obsolete/wrong comments.
Per discussion of a report from Alan Jackson (though this fixes only one
aspect of that problem). Back-patch to v11 where this code came in.
David Rowley, with some further hacking by me
Discussion: https://postgr.es/m/FAD28A83-AC73-489E-A058-2681FA31D648@tvsquared.com
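A sketch of the hazard (table, partition, and bound values are invented):
comparing a timestamptz partition key against a timestamp value goes
through a cross-type operator that is only stable, since its result
depends on the TimeZone setting.
CREATE TABLE events (ts timestamptz) PARTITION BY RANGE (ts);
CREATE TABLE events_2019 PARTITION OF events
    FOR VALUES FROM ('2019-01-01') TO ('2020-01-01');
-- timestamptz = timestamp is stable, not immutable; before the fix it could
-- be used for plan-time pruning, potentially giving wrong answers if
-- TimeZone changed between planning and execution.
EXPLAIN SELECT * FROM events WHERE ts = timestamp '2019-06-01 12:00';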
M src/backend/partitioning/partprune.c
M src/test/regress/expected/partition_prune.out
M src/test/regress/sql/partition_prune.sql
Add isolation test for INSERT ON CONFLICT speculative insertion failure.
commit : 05cf41973157577aac9706dcc7998054949b0ed4
author : Andres Freund <andres@anarazel.de>
date : Tue, 14 May 2019 11:45:40 -0700
committer: Andres Freund <andres@anarazel.de>
date : Tue, 14 May 2019 11:45:40 -0700
This path previously was not reliably covered. There was some
heuristic coverage via insert-conflict-toast.spec, but that test is
not deterministic, and only tested for a somewhat specific bug.
Backpatch, as this is a complicated and otherwise untested code
path. Unfortunately 9.5 cannot handle two waiting sessions, and thus
cannot execute this test.
Triggered by a conversation with Melanie Plageman.
Author: Andres Freund
Discussion: https://postgr.es/m/CAAKRu_a7hbyrk=wveHYhr4LbcRnRCG=yPUVoQYB9YO1CdUBE9Q@mail.gmail.com
Backpatch: 9.5-
A src/test/isolation/expected/insert-conflict-specconflict.out
M src/test/isolation/isolation_schedule
A src/test/isolation/specs/insert-conflict-specconflict.spec
Fix comment on when HOT update is possible.
commit : 3293330f79af9d66e9df251266c882794edfec4e
author : Heikki Linnakangas <heikki.linnakangas@iki.fi>
date : Tue, 14 May 2019 13:06:33 +0300
committer: Heikki Linnakangas <heikki.linnakangas@iki.fi>
date : Tue, 14 May 2019 13:06:33 +0300
The conditions listed in this comment have changed several times, and at
some point the thing that the "if so" referred to was negated.
The text was OK up to 9.6. It was differently wrong in v10, v11 and
master, so fix in all those versions.
M src/backend/access/heap/heapam.c
Doc: Refer to line pointers as item identifiers.
commit : 6bbc2f9b66104de67f29881c54e75fd6f5d2f694
author : Peter Geoghegan <pg@bowt.ie>
date : Mon, 13 May 2019 15:39:05 -0700
committer: Peter Geoghegan <pg@bowt.ie>
date : Mon, 13 May 2019 15:39:05 -0700
An upcoming HEAD-only patch will standardize the terminology around
ItemIdData variables/line pointers, ending the practice of referring to
them as "item pointers". Make the "Database Page Layout" docs
consistent with the new policy. The term "item identifier" is already
used in the same section, so stick with that.
Discussion: https://postgr.es/m/CAH2-Wz=c=MZQjUzde3o9+2PLAPuHTpVZPPdYxN=E4ndQ2--8ew@mail.gmail.com
Backpatch: All supported branches.
M doc/src/sgml/storage.sgml
Fix logical replication's ideas about which type OIDs are built-in.
commit : b6abc2241ac4549623d6894d7855765df6345ad5
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Mon, 13 May 2019 17:23:00 -0400
committer: Tom Lane <tgl@sss.pgh.pa.us>
date : Mon, 13 May 2019 17:23:00 -0400
Only hand-assigned type OIDs should be presumed to match across different
PG servers; those assigned during genbki.pl or during initdb are likely
to change due to addition or removal of unrelated objects.
This means that the cutoff should be FirstGenbkiObjectId (in HEAD)
or FirstBootstrapObjectId (before that), not FirstNormalObjectId.
Compare postgres_fdw's is_builtin() test.
It's likely that this error has no observable consequence in a
normally-functioning system, since ATM the only affected type OIDs are
system catalog rowtypes and information_schema types, which would not
typically be interesting for logical replication. But you could
probably break it if you tried hard, so back-patch.
Discussion: https://postgr.es/m/15150.1557257111@sss.pgh.pa.us
M src/backend/replication/logical/relation.c
M src/backend/replication/pgoutput/pgoutput.c
Don't leave behind junk nbtree pages during split.
commit : bf78f50bae0b3b5ffcbf3e3c5b03fd138be15f9a
author : Peter Geoghegan <pg@bowt.ie>
date : Mon, 13 May 2019 10:27:57 -0700
committer: Peter Geoghegan <pg@bowt.ie>
date : Mon, 13 May 2019 10:27:57 -0700
Commit 8fa30f906be reduced the elevel of a number of "can't happen"
_bt_split() errors from PANIC to ERROR. At the same time, the new right
page buffer for the split could continue to be acquired well before the
critical section. This was possible because it was relatively
straightforward to make sure that _bt_split() could not throw an error,
with a few specific exceptions. The exceptional cases were safe because
they involved specific, well understood errors, making it possible to
consistently zero the right page before actually raising an error using
elog(). There was no danger of leaving around a junk page, provided
_bt_split() stuck to this coding rule.
Commit 8224de4f, which introduced INCLUDE indexes, added code to make
_bt_split() truncate away non-key attributes. This happened at a point
that broke the rule around zeroing the right page in _bt_split(). If
truncation failed (perhaps due to palloc() failure), that would result
in an errant right page buffer with junk contents. This could confuse
VACUUM when it attempted to delete the page, and should be avoided on
general principle.
To fix, reorganize _bt_split() so that truncation occurs before the new
right page buffer is even acquired. A junk page/buffer will not be left
behind if _bt_nonkey_truncate()/_bt_truncate() raise an error.
Discussion: https://postgr.es/m/CAH2-WzkcWT_-NH7EeL=Az4efg0KCV+wArygW8zKB=+HoP=VWMw@mail.gmail.com
Backpatch: 11-, where INCLUDE indexes were introduced.
M src/backend/access/nbtree/nbtinsert.c
Fix misuse of an integer as a bool.
commit : 6b0e9411ff0f0116d6f9118a870a682a17eea110
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Mon, 13 May 2019 10:53:19 -0400
committer: Tom Lane <tgl@sss.pgh.pa.us>
date : Mon, 13 May 2019 10:53:19 -0400
pgtls_read_pending is declared to return bool, but what the underlying
SSL_pending function returns is a count of available bytes.
This is actually somewhat harmless if we're using C99 bools, but in
the back branches it's a live bug: if the available-bytes count happened
to be a multiple of 256, it would get converted to a zero char value.
On machines where char is signed, counts of 128 and up could misbehave
as well. The net effect is that when using SSL, libpq might block
waiting for data even though some has already been received.
Broken by careless refactoring in commit 4e86f1b16, so back-patch
to 9.5 where that came in.
Per bug #15802 from David Binderman.
Discussion: https://postgr.es/m/15802-f0911a97f0346526@postgresql.org
M src/interfaces/libpq/fe-misc.c
M src/interfaces/libpq/fe-secure-openssl.c
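The failure mode above is ordinary integer truncation; the minimal
standalone sketch below illustrates it, using a one-byte typedef to
stand in for the pre-C99 bool (none of this is the actual libpq code):

    #include <stdio.h>

    /* Pre-C99 branches typedef'd bool to char, so it truncates like char. */
    typedef char pseudo_bool;

    /* Stand-in for SSL_pending(): pretend 256 bytes are buffered. */
    static int
    fake_ssl_pending(void)
    {
        return 256;
    }

    /* Buggy shape: returning the raw count through a one-byte bool. */
    static pseudo_bool
    read_pending_buggy(void)
    {
        return fake_ssl_pending();      /* 256 truncates to 0: "nothing pending" */
    }

    /* Fixed shape: collapse the count to a genuine boolean first. */
    static pseudo_bool
    read_pending_fixed(void)
    {
        return fake_ssl_pending() > 0;  /* any positive count becomes 1 */
    }

    int
    main(void)
    {
        printf("buggy: %d, fixed: %d\n",
               (int) read_pending_buggy(), (int) read_pending_fixed());
        return 0;
    }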
postgres_fdw: Fix typo in comment.
commit : 6ba0ff47cd9a7e86298dca3ead112eb27ae21265
author : Etsuro Fujita <efujita@postgresql.org>
date : Mon, 13 May 2019 17:30:37 +0900
committer: Etsuro Fujita <efujita@postgresql.org>
date : Mon, 13 May 2019 17:30:37 +0900
M contrib/postgres_fdw/postgres_fdw.c
Fix misoptimization of "{1,1}" quantifiers in regular expressions.
commit : 72ce7acaf3e60da712f0de1916704a4aec06600d
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Sun, 12 May 2019 18:53:12 -0400
committer: Tom Lane <tgl@sss.pgh.pa.us>
date : Sun, 12 May 2019 18:53:12 -0400
A bounded quantifier with m = n = 1 might be thought a no-op. But
according to our documentation (which traces back to Henry Spencer's
original man page) it still imposes greediness, or non-greediness in the
case of the non-greedy variant "{1,1}?", on whatever it's attached to.
This turns out not to work though, because parseqatom() optimizes away
the m = n = 1 case without regard for whether it's supposed to change
the greediness of the argument RE.
We can fix this by just not applying the optimization when the greediness
needs to change; the subsequent general cases handle it fine.
The three cases in which we can still apply the optimization are
(a) no quantifier, or quantifier does not impose a preference;
(b) atom has no greediness property, implying it cannot match a
variable amount of text anyway; or
(c) quantifier's greediness is same as atom's.
Note that in most cases where one of these applies, we'd have exited
earlier in the "not a messy case" fast path. I think it's now only
possible to get to the optimization when the atom involves capturing
parentheses or a non-top-level backref.
Back-patch to all supported branches. I'd ordinarily be hesitant to
put a subtle behavioral change into back branches, but in this case
it's very hard to see a reason why somebody would write "{1,1}?" unless
they're trying to get the documented change-of-greediness behavior.
Discussion: https://postgr.es/m/5bb27a41-350d-37bf-901e-9d26f5592dd0@charter.net
M src/backend/regex/regcomp.c
M src/test/regress/expected/regex.out
M src/test/regress/sql/regex.sql
Fail pgwin32_message_to_UTF16() for SQL_ASCII messages.
commit : 4ec14e5aa1f79d01a2558b694ccbe7756c4d186e
author : Noah Misch <noah@leadboat.com>
date : Sun, 12 May 2019 10:33:05 -0700
committer: Noah Misch <noah@leadboat.com>
date : Sun, 12 May 2019 10:33:05 -0700
The function had been interpreting SQL_ASCII messages as UTF8, throwing
an error when they were invalid UTF8. The new behavior is consistent
with pg_do_encoding_conversion(). This affects LOG_DESTINATION_STDERR
and LOG_DESTINATION_EVENTLOG, which will send untranslated bytes to
write() and ReportEventA(). On buildfarm member bowerbird, enabling
log_connections caused an error whenever the role name was not valid
UTF8. Back-patch to 9.4 (all supported versions).
Discussion: https://postgr.es/m/20190512015615.GD1124997@rfd.leadboat.com
M src/backend/utils/mb/mbutils.c
M src/bin/pg_dump/t/010_dump_connstr.pl
M src/bin/scripts/t/200_connstr.pl
Rearrange pgstat_bestart() to avoid failures within its critical section.
commit : eb97242c2f78869376277567dcb8102283368489
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Sat, 11 May 2019 21:27:13 -0400
committer: Tom Lane <tgl@sss.pgh.pa.us>
date : Sat, 11 May 2019 21:27:13 -0400
We long ago decided to design the shared PgBackendStatus data structure to
minimize the cost of writing status updates, which means that writers just
have to increment the st_changecount field twice. That isn't hooked into
any sort of resource management mechanism, which means that if something
were to throw error between the two increments, the st_changecount field
would be left odd indefinitely. That would cause readers to lock up.
Now, since it's also a bad idea to leave the field odd for longer than
absolutely necessary (because readers will spin while we have it set),
the expectation was that we'd treat these segments like spinlock critical
sections, with only short, more or less straight-line, code in them.
That was fine as originally designed, but commit 9029f4b37 broke it
by inserting a significant amount of non-straight-line code into
pgstat_bestart(), code that is very capable of throwing errors, not to
mention taking a significant amount of time during which readers will spin.
We have a report from Neeraj Kumar of readers actually locking up, which
I suspect was due to an encoding conversion error in X509_NAME_to_cstring,
though conceivably it was just a garden-variety OOM failure.
Subsequent commits have loaded even more dubious code into pgstat_bestart's
critical section (and commit fc70a4b0d deserves some kind of booby prize
for managing to miss the critical section entirely, although the negative
consequences seem minimal given that the PgBackendStatus entry should be
seen by readers as inactive at that point).
The right way to fix this mess seems to be to compute all these values
into a local copy of the process' PgBackendStatus struct, and then just
copy the data back within the critical section proper. This plan can't
be implemented completely cleanly because of the struct's heavy reliance
on out-of-line strings, which we must initialize separately within the
critical section. But still, the critical section is far smaller and
safer than it was before.
In hopes of forestalling future errors of the same ilk, rename the
macros for st_changecount management to make it more apparent that
the writer-side macros create a critical section. And to prevent
the worst consequences if we nonetheless manage to mess it up anyway,
adjust those macros so that they really are a critical section, ie
they now bump CritSectionCount. That doesn't add much overhead, and
it guarantees that if we do somehow throw an error while the counter
is odd, it will lead to PANIC and a database restart to reset shared
memory.
Back-patch to 9.5 where the problem was introduced.
In HEAD, also fix an oversight in commit b0b39f72b: it failed to teach
pgstat_read_current_status to copy st_gssstatus data from shared memory to
local memory. Hence, subsequent use of that data within the transaction
would potentially see changing data that it shouldn't see.
Discussion: https://postgr.es/m/CAPR3Wj5Z17=+eeyrn_ZDG3NQGYgMEOY6JV6Y-WRRhGgwc16U3Q@mail.gmail.com
M src/backend/postmaster/pgstat.c
M src/include/pgstat.h
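The st_changecount protocol described above is essentially a seqlock:
the writer bumps the counter before and after updating the slot, and
readers retry until they see the same even value on both sides of their
copy. The sketch below (hypothetical names, single process, no memory
barriers; not the actual pgstat code) shows the shape of the fix, with
all fallible work done on a local copy before the short write section:

    #include <stdio.h>
    #include <string.h>

    typedef struct BackendStatusSketch
    {
        volatile int changecount;       /* odd while a write is in progress */
        char         activity[64];
    } BackendStatusSketch;

    static BackendStatusSketch shared;  /* stands in for the shared-memory slot */

    static void
    report_activity(const char *act)
    {
        BackendStatusSketch local;

        /*
         * All fallible work (formatting, lookups, conversions) happens here,
         * on a local copy, where an error leaves the counter untouched.
         */
        memset(&local, 0, sizeof(local));
        strncpy(local.activity, act, sizeof(local.activity) - 1);

        shared.changecount++;           /* now odd: readers will retry */
        memcpy(shared.activity, local.activity, sizeof(shared.activity));
        shared.changecount++;           /* even again: contents are consistent */
    }

    static void
    read_activity(char *dst, size_t len)
    {
        int before, after;

        do
        {
            before = shared.changecount;
            memcpy(dst, shared.activity, len);
            after = shared.changecount;
        } while (before != after || (before & 1));  /* spins forever if left odd */
    }

    int
    main(void)
    {
        char buf[64];

        report_activity("idle");
        read_activity(buf, sizeof(buf));
        printf("%s\n", buf);
        return 0;
    }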
Honor TEMP_CONFIG in TAP suites.
commit : 239dcf8f15b70102ed18d1d8a020e4a7bbc2a6f9
author : Noah Misch <noah@leadboat.com>
date : Sat, 11 May 2019 00:22:38 -0700
committer: Noah Misch <noah@leadboat.com>
date : Sat, 11 May 2019 00:22:38 -0700
The buildfarm client uses TEMP_CONFIG to implement its extra_config
setting. Except for stats_temp_directory, extra_config now applies to
TAP suites; extra_config values seen in the past month are compatible
with this. Back-patch to 9.6, where PostgresNode was introduced, so the
buildfarm can rely on it sooner.
Reviewed by Andrew Dunstan and Tom Lane.
Discussion: https://postgr.es/m/20181229021950.GA3302966@rfd.leadboat.com
M src/bin/pg_ctl/t/001_start_stop.pl
M src/test/perl/PostgresNode.pm
Fix error reporting in reindexdb
commit : e16ab408f3db5ced50d84748b7a9f367ece93d3f
author : Michael Paquier <michael@paquier.xyz>
date : Sat, 11 May 2019 13:01:07 +0900
committer: Michael Paquier <michael@paquier.xyz>
date : Sat, 11 May 2019 13:01:07 +0900
When failing to reindex a table or an index, reindexdb would generate an
extra error message related to a database failure, which is misleading.
Backpatch all the way down, as this was introduced by 85e9a5a0.
Discussion: https://postgr.es/m/CAOBaU_Yo61RwNO3cW6WVYWwH7EYMPuexhKqufb2nFGOdunbcHw@mail.gmail.com
Author: Julien Rouhaud
Reviewed-by: Daniel Gustafsson, Álvaro Herrera, Tom Lane, Michael
Paquier
Backpatch-through: 9.4
M src/bin/scripts/reindexdb.c
Cope with EINVAL and EIDRM shmat() failures in PGSharedMemoryAttach.
commit : 803f90ab795b6bc170ba517cdd0dfddc85a5f961
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Fri, 10 May 2019 14:56:41 -0400
committer: Tom Lane <tgl@sss.pgh.pa.us>
date : Fri, 10 May 2019 14:56:41 -0400
There's a very old race condition in our code to see whether a pre-existing
shared memory segment is still in use by a conflicting postmaster: it's
possible for the other postmaster to remove the segment in between our
shmctl() and shmat() calls. It's a narrow window, and there's no risk
unless both postmasters are using the same port number, but that's possible
during parallelized "make check" tests. (Note that while the TAP tests
take some pains to choose a randomized port number, pg_regress doesn't.)
If it does happen, we treated that as an unexpected case and errored out.
To fix, allow EINVAL to be treated as segment-not-present, and the same
for EIDRM on Linux. AFAICS, the considerations here are basically
identical to the checks for acceptable shmctl() failures, so I documented
and coded it that way.
While at it, adjust PGSharedMemoryAttach's API to remove its undocumented
dependency on UsedShmemSegAddr in favor of passing the attach address
explicitly. This makes it easier to be sure we're using a null shmaddr
when probing for segment conflicts (thus avoiding questions about what
EINVAL means). I don't think there was a bug there, but it required
fragile assumptions about the state of UsedShmemSegAddr during
PGSharedMemoryIsInUse.
Commit c09850992 may have made this failure more probable by applying
the conflicting-segment tests more often. Hence, back-patch to all
supported branches, as that was.
Discussion: https://postgr.es/m/22224.1557340366@sss.pgh.pa.us
M src/backend/port/sysv_shmem.c
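A standalone sketch of the attach-probe behavior described above
(hypothetical helper, not the actual sysv_shmem.c logic): attach with a
null address, and fold EINVAL, plus EIDRM where defined, into the
"segment is gone" outcome instead of treating them as hard failures:

    #include <errno.h>
    #include <stdbool.h>
    #include <stdio.h>
    #include <sys/ipc.h>
    #include <sys/shm.h>

    /*
     * Return true if the segment identified by shmid still exists and can
     * be attached; false if it has vanished, possibly removed by its owner
     * between an earlier shmctl() check and this shmat() call.
     */
    static bool
    segment_still_exists(int shmid)
    {
        void *addr = shmat(shmid, NULL, 0);

        if (addr != (void *) -1)
        {
            shmdt(addr);            /* it exists; detach our probe mapping */
            return true;
        }

        if (errno == EINVAL)        /* gone already, or never visible to us */
            return false;
    #ifdef EIDRM
        if (errno == EIDRM)         /* Linux reports removal this way */
            return false;
    #endif

        /* Any other failure is unexpected; conservatively report "in use". */
        return true;
    }

    int
    main(void)
    {
        int shmid = shmget(IPC_PRIVATE, 4096, IPC_CREAT | 0600);

        if (shmid < 0)
        {
            perror("shmget");
            return 1;
        }
        printf("before removal: %d\n", segment_still_exists(shmid));
        shmctl(shmid, IPC_RMID, NULL);
        printf("after removal:  %d\n", segment_still_exists(shmid));
        return 0;
    }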
Repair issues with faulty generation of merge-append plans.
commit : e7eed0baa049ee2a1b06b7af10f7e4580a3a6cdd
author : Tom Lane <tgl@sss.pgh.pa.us>
date : Thu, 9 May 2019 16:52:49 -0400
committer: Tom Lane <tgl@sss.pgh.pa.us>
date : Thu, 9 May 2019 16:52:49 -0400
create_merge_append_plan failed to honor the CP_EXACT_TLIST flag:
it would generate the expected targetlist but then it felt free to
add resjunk sort targets to it. This demonstrably leads to assertion
failures in v11 and HEAD, and it's probably just accidental that we
don't see the same in older branches. I've not looked into whether
there would be any real-world consequences in non-assert builds.
In HEAD, create_append_plan has sprouted the same problem, so fix
that too (although we do not have any test cases that seem able to
reach that bug). This is an oversight in commit 3fc6e2d7f which
invented the CP_EXACT_TLIST flag, so back-patch to 9.6 where that
came in.
convert_subquery_pathkeys would create pathkeys for subquery output
values if they match any EquivalenceClass known in the outer query
and are available in the subquery's syntactic targetlist. However,
the second part of that condition is wrong, because such values might
not appear in the subquery relation's reltarget list, which would
mean that they couldn't be accessed above the level of the subquery
scan. We must check that they appear in the reltarget list, instead.
This can lead to dropping knowledge about the subquery's sort
ordering, but I believe it's okay, because any sort key that the
outer query actually has any interest in would appear in the
reltarget list.
This second issue is of very long standing, but right now there's no
evidence that it causes observable problems before 9.6, so I refrained
from back-patching further than that. We can revisit that choice if
somebody finds a way to make it cause problems in older branches.
(Developing useful test cases for these issues is really problematic;
fixing convert_subquery_pathkeys removes the only known way to exhibit
the create_merge_append_plan bug, and neither of the test cases added
by this patch causes a problem in all branches, even when considering
the issues separately.)
The second issue explains bug #15795 from Suresh Kumar R ("could not
find pathkey item to sort" with nested DISTINCT queries). I stumbled
across the first issue while investigating that.
Discussion: https://postgr.es/m/15795-fadb56c8e44ee73c@postgresql.org
M src/backend/optimizer/path/pathkeys.c
M src/backend/optimizer/plan/createplan.c
M src/test/regress/expected/union.out
M src/test/regress/sql/union.sql
Fix error status of vacuumdb when multiple jobs are used
commit : 25f12acd53f603a581d8bc89920037a811f12f82
author : Michael Paquier <michael@paquier.xyz>
date : Thu, 9 May 2019 10:29:29 +0900
committer: Michael Paquier <michael@paquier.xyz>
date : Thu, 9 May 2019 10:29:29 +0900
When running a batch of VACUUM or ANALYZE commands on a given database,
there were cases where vacuumdb failed to report an error when it
actually should have, leading to incorrect status results.
Author: Julien Rouhaud
Reviewed-by: Amit Kapila, Michael Paquier
Discussion: https://postgr.es/m/CAOBaU_ZuTwz7CtqLYJ1Ouuh272bTQPLN8b1bAPk0bCBm4PDMTQ@mail.gmail.com
Backpatch-through: 9.5
M src/bin/scripts/vacuumdb.c
Fix documentation for the privileges required for replication functions.
commit : a9d5383db2e17a602ab6f9f0b4955623a8d444a6
author : Fujii Masao <fujii@postgresql.org>
date : Thu, 9 May 2019 01:35:13 +0900
committer: Fujii Masao <fujii@postgresql.org>
date : Thu, 9 May 2019 01:35:13 +0900
Previously the documentation stated that use of replication functions
is restricted to superusers. This is true for the functions which use
a replication origin, but not for pg_logical_emit_message() and the
functions which use replication slots. For example, not only
superusers but also users with the REPLICATION privilege are allowed
to use the replication-slot functions. This commit fixes the
documentation for the privileges required for those replication
functions.
Back-patch to 9.4 (all supported versions).
Author: Matsumura Ryo
Discussion: https://postgr.es/m/03040DFF97E6E54E88D3BFEE5F5480F74ABA6E16@G01JPEXMBYT04
M doc/src/sgml/func.sgml
Probe only 127.0.0.1 when looking for ports on Unix.
commit : 1f3bcb4972009c8af7b71d1526559475a248f77a
author : Thomas Munro <tmunro@postgresql.org>
date : Mon, 6 May 2019 15:02:41 +1200
committer: Thomas Munro <tmunro@postgresql.org>
date : Mon, 6 May 2019 15:02:41 +1200
Commit c0985099, later adjusted by commit 4ab02e81, probed 0.0.0.0
in addition to 127.0.0.1, for the benefit of Windows build farm
animals. It isn't really useful on Unix systems, and turned out to
be a bit inconvenient to users of some corporate firewall software.
Switch back to probing just 127.0.0.1 on non-Windows systems.
Back-patch to 9.6, like the earlier changes.
Discussion: https://postgr.es/m/CA%2BhUKG%2B21EPwfgs4m%2BtqyRtbVqkOUvP8QQ8sWk9%2Bh55Aub1H3A%40mail.gmail.com
M src/test/perl/PostgresNode.pm
Remove leftover reference to old "flat file" mechanism in a comment.
commit : 2bc59f890100f9a90289f8ef10b9403294915ff8
author : Heikki Linnakangas <heikki.linnakangas@iki.fi>
date : Wed, 8 May 2019 09:32:34 +0300
committer: Heikki Linnakangas <heikki.linnakangas@iki.fi>
date : Wed, 8 May 2019 09:32:34 +0300
The flat file mechanism was removed in PostgreSQL 9.0.
M src/backend/access/transam/xact.c
Remove some code related to 7.3 and older servers from tools of src/bin/
commit : 64ad372346b358aeaf7fd7c6d913f636dc4af4db
author : Michael Paquier <michael@paquier.xyz>
date : Tue, 7 May 2019 14:19:56 +0900
committer: Michael Paquier <michael@paquier.xyz>
date : Tue, 7 May 2019 14:19:56 +0900
This code was broken as of 582edc3, and is most likely not used anymore.
Note that pg_dump supports servers down to 8.0, and psql has code to
support servers down to 7.4.
Author: Julien Rouhaud
Reviewed-by: Tom Lane
Discussion: https://postgr.es/m/CAOBaU_Y5y=zo3+2gf+2NJC1pvMYPcbRXoQaPXx=U7+C8Qh4CzQ@mail.gmail.com
M src/bin/scripts/common.c