Stamp 9.3.11.
commit : de07063c05b8ffa86e804c6cc8117a8e8e5cff9b
author : Tom Lane <[email protected]>
date : Mon, 8 Feb 2016 16:17:25 -0500
committer: Tom Lane <[email protected]>
date : Mon, 8 Feb 2016 16:17:25 -0500
M configure
M configure.in
M doc/bug.template
M src/include/pg_config.h.win32
M src/interfaces/libpq/libpq.rc.in
M src/port/win32ver.rc
Translation updates
commit : 454994a9ed73713ea38635ab2bfbf5ace48bcf0a
author : Peter Eisentraut <[email protected]>
date : Mon, 8 Feb 2016 14:41:41 -0500
committer: Peter Eisentraut <[email protected]>
date : Mon, 8 Feb 2016 14:41:41 -0500
Source-Git-URL: git://git.postgresql.org/git/pgtranslation/messages.git
Source-Git-Hash: 85e9ea36e147944d4852fe2647c95a26e909bb19
M src/backend/po/de.po
M src/backend/po/pl.po
M src/backend/po/ru.po
M src/bin/pg_controldata/po/ru.po
M src/bin/pg_ctl/po/de.po
M src/bin/pg_ctl/po/ru.po
M src/bin/pg_dump/po/de.po
M src/bin/pg_dump/po/ru.po
M src/bin/pg_resetxlog/po/ru.po
M src/bin/psql/po/de.po
M src/bin/psql/po/ru.po
M src/interfaces/ecpg/preproc/po/pt_BR.po
M src/pl/plperl/po/ru.po
M src/pl/plpython/po/de.po
M src/pl/plpython/po/ru.po
Last-minute updates for release notes.
commit : c846576a6d942f53a0881e0ae83bf5a31e69aa20
author : Tom Lane <[email protected]>
date : Mon, 8 Feb 2016 10:49:38 -0500
committer: Tom Lane <[email protected]>
date : Mon, 8 Feb 2016 10:49:38 -0500
Security: CVE-2016-0773
M doc/src/sgml/release-9.1.sgml
M doc/src/sgml/release-9.2.sgml
M doc/src/sgml/release-9.3.sgml
Fix some regex issues with out-of-range characters and large char ranges.
commit : 6403a6b745d434d3b4275d74f11e1d3cd422119e
author : Tom Lane <[email protected]>
date : Mon, 8 Feb 2016 10:25:40 -0500
committer: Tom Lane <[email protected]>
date : Mon, 8 Feb 2016 10:25:40 -0500
Previously, our regex code defined CHR_MAX as 0xfffffffe, which is a
bad choice because it is outside the range of type "celt" (int32).
Characters approaching that limit could lead to infinite loops in logic
such as "for (c = a; c <= b; c++)" where c is of type celt but the
range bounds are chr. Such loops will work safely only if CHR_MAX+1
is representable in celt, since c must advance to beyond b before the
loop will exit.
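To make the failure mode concrete, here is a minimal standalone sketch (not
the actual regex code; the type name is only a stand-in for celt):

#include <stdint.h>

typedef int32_t celt;           /* stand-in for the regex code's celt */

/* The loop terminates only when c advances beyond b.  If b can sit at the
 * type's maximum value -- which is possible whenever CHR_MAX + 1 is not
 * representable in celt -- then c can never exceed b: incrementing past
 * INT32_MAX is signed overflow, and in practice c simply wraps around, so
 * the loop never exits. */
static long
count_chars_in_range(celt a, celt b)
{
    long n = 0;
    celt c;

    for (c = a; c <= b; c++)
        n++;
    return n;
}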
Fortunately, there seems no reason not to restrict CHR_MAX to 0x7ffffffe.
It's highly unlikely that Unicode will ever assign codes that high, and
none of our other backend encodings need characters beyond that either.
In addition to modifying the macro, we have to explicitly enforce character
range restrictions on the values of \u, \U, and \x escape sequences, else
the limit is trivially bypassed.
Also, the code for expanding case-independent character ranges in bracket
expressions had a potential integer overflow in its calculation of the
number of characters it could generate, which could lead to allocating too
small a character vector and then overwriting memory. An attacker with the
ability to supply arbitrary regex patterns could easily cause transient DOS
via server crashes, and the possibility for privilege escalation has not
been ruled out.
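A minimal sketch of the allocation-sizing hazard (the names and the cap are
illustrative, drawn from this description rather than from the real code):

#include <stdlib.h>

/* Computing the output size as an unchecked product lets a crafted range
 * make the result wrap, so the vector allocated from it is too small and
 * later writes overrun it.  Bounding the count first (the fix allows at
 * most 100000 individual characters) prevents both the overflow and the
 * excessive memory use. */
static int *
alloc_case_variants(long nranges, long per_range)
{
    long total;

    if (nranges <= 0 || per_range <= 0 || nranges > 100000 / per_range)
        return NULL;            /* reject rather than under-allocate */
    total = nranges * per_range;    /* now known to be <= 100000 */
    return malloc(total * sizeof(int));
}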
Quite aside from the integer-overflow problem, the range expansion code was
unnecessarily inefficient in that it always produced a result consisting of
individual characters, abandoning the knowledge that we had a range to
start with. If the input range is large, this requires excessive memory.
Change it so that the original range is reported as-is, and then we add on
any case-equivalent characters that are outside that range. With this
approach, we can bound the number of individual characters allowed without
sacrificing much. This patch allows at most 100000 individual characters,
which I believe to be more than the number of case pairs existing in
Unicode, so that the restriction will never be hit in practice.
It's still possible for range() to take a while given a large character code
range, so also add statement-cancel detection to its loop. The downstream
function dovec() also lacked cancel detection, and could take a long time
given a large output from range().
Per fuzz testing by Greg Stark. Back-patch to all supported branches.
Security: CVE-2016-0773
M src/backend/regex/regc_lex.c
M src/backend/regex/regc_locale.c
M src/backend/regex/regcomp.c
M src/include/regex/regcustom.h
M src/test/regress/expected/regex.out
M src/test/regress/sql/regex.sql
Improve documentation about PRIMARY KEY constraints.
commit : abcb32d55ed8da8c158936d5a48a0c3b84bfe3ff
author : Tom Lane <[email protected]>
date : Sun, 7 Feb 2016 16:02:44 -0500
committer: Tom Lane <[email protected]>
date : Sun, 7 Feb 2016 16:02:44 -0500
Get rid of the false implication that PRIMARY KEY is exactly equivalent to
UNIQUE + NOT NULL. That was more-or-less true at one time in our
implementation, but the standard doesn't say that, and we've grown various
features (many of them required by spec) that treat a pkey differently from
less-formal constraints. Per recent discussion on pgsql-general.
I failed to resist the temptation to do some other wordsmithing in the
same area.
M doc/src/sgml/ddl.sgml
M doc/src/sgml/ref/create_table.sgml
Release notes for 9.5.1, 9.4.6, 9.3.11, 9.2.15, 9.1.20.
commit : dd48a39388497e91e537cd34190be2dcc6179296
author : Tom Lane <[email protected]>
date : Sun, 7 Feb 2016 14:16:32 -0500
committer: Tom Lane <[email protected]>
date : Sun, 7 Feb 2016 14:16:32 -0500
M doc/src/sgml/release-9.1.sgml
M doc/src/sgml/release-9.2.sgml
M doc/src/sgml/release-9.3.sgml
Force certain "pljava" custom GUCs to be PGC_SUSET.
commit : 34e91736bba04b02a937b4213a66630b248bdadd
author : Noah Misch <[email protected]>
date : Fri, 5 Feb 2016 20:22:51 -0500
committer: Noah Misch <[email protected]>
date : Fri, 5 Feb 2016 20:22:51 -0500
Future PL/Java versions will close CVE-2016-0766 by making these GUCs
PGC_SUSET. This PostgreSQL change independently mitigates that PL/Java
vulnerability, helping sites that update PostgreSQL more frequently than
PL/Java. Back-patch to 9.1 (all supported versions).
M src/backend/utils/misc/guc.c
Update time zone data files to tzdata release 2016a.
commit : 9a3475b84ea9606ce9c3159384caa064eed24cf2
author : Tom Lane <[email protected]>
date : Fri, 5 Feb 2016 10:59:09 -0500
committer: Tom Lane <[email protected]>
date : Fri, 5 Feb 2016 10:59:09 -0500
DST law changes in Cayman Islands, Metlakatla, Trans-Baikal Territory
(Zabaykalsky Krai). Historical corrections for Pakistan.
M src/timezone/data/asia
M src/timezone/data/backward
M src/timezone/data/backzone
M src/timezone/data/europe
M src/timezone/data/northamerica
M src/timezone/data/zone.tab
M src/timezone/data/zone1970.tab
In pg_dump, ensure that view triggers are processed after view rules.
commit : aefbc208bbd113328a8158488c3cd29637004455
author : Tom Lane <[email protected]>
date : Thu, 4 Feb 2016 00:26:10 -0500
committer: Tom Lane <[email protected]>
date : Thu, 4 Feb 2016 00:26:10 -0500
If a view is split into CREATE TABLE + CREATE RULE to break a circular
dependency, then any triggers on the view must be dumped/reloaded after
the CREATE RULE; else the backend may reject the CREATE TRIGGER because
it's the wrong type of trigger for a plain table. This works all right
in plain dump/restore because of pg_dump's sorting heuristic that places
triggers after rules. However, when using parallel restore, the ordering
must be enforced by a dependency --- and we didn't have one.
Fixing this is a mere matter of adding an addObjectDependency() call,
except that we need to be able to find all the triggers belonging to the
view relation, and there was no easy way to do that. Add fields to
pg_dump's TableInfo struct to remember where the associated TriggerInfo
struct(s) are.
Per bug report from Dennis Kögel. The failure can be exhibited at least
as far back as 9.1, so back-patch to all supported branches.
M src/bin/pg_dump/pg_dump.c
M src/bin/pg_dump/pg_dump.h
M src/bin/pg_dump/pg_dump_sort.c
pgbench: Install guard against overflow when dividing by -1.
commit : 014796aa3f07f77fa50128044599c96808e929ee
author : Robert Haas <[email protected]>
date : Wed, 3 Feb 2016 09:15:29 -0500
committer: Robert Haas <[email protected]>
date : Wed, 3 Feb 2016 09:15:29 -0500
Commit 64f5edca2401f6c2f23564da9dd52e92d08b3a20 fixed the same hazard
on master; this is a backport, but the modulo operator does not exist
in older releases.
Michael Paquier
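A minimal sketch of the hazard and the guard (illustrative only, not
pgbench's actual code):

#include <stdint.h>

/* INT64_MIN / -1 overflows -- the true result, 2^63, is not representable
 * -- and on many platforms the hardware division raises SIGFPE and kills
 * the process, just as division by zero would.  Special-casing a divisor
 * of -1 avoids executing the division at all. */
static int64_t
checked_div(int64_t lhs, int64_t rhs)
{
    if (rhs == -1)
    {
        if (lhs == INT64_MIN)
            return INT64_MIN;   /* or report an error; anything beats a crash */
        return -lhs;
    }
    return lhs / rhs;
}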
M contrib/pgbench/pgbench.c
Fix IsValidJsonNumber() to notice trailing non-alphanumeric garbage.
commit : 1f2b195ebf1c1646b20111f1c339fbaebb04da56
author : Tom Lane <[email protected]>
date : Wed, 3 Feb 2016 01:39:08 -0500
committer: Tom Lane <[email protected]>
date : Wed, 3 Feb 2016 01:39:08 -0500
Commit e09996ff8dee3f70 was one brick shy of a load: it didn't insist
that the detected JSON number be the whole of the supplied string.
This allowed inputs such as "2016-01-01" to be misdetected as valid JSON
numbers. Per bug #13906 from Dmitry Ryabov.
In passing, be more wary of zero-length input (I'm not sure this can
happen given current callers, but better safe than sorry), and do some
minor cosmetic cleanup.
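A simplified standalone sketch of the check being described (not the real
IsValidJsonNumber; fraction and exponent handling are omitted):

#include <ctype.h>
#include <stdbool.h>
#include <stddef.h>

static bool
looks_like_json_number(const char *str, size_t len)
{
    size_t i = 0;

    if (len == 0)               /* be wary of zero-length input */
        return false;
    if (str[i] == '-')
        i++;
    if (i == len || !isdigit((unsigned char) str[i]))
        return false;
    while (i < len && isdigit((unsigned char) str[i]))
        i++;
    /* insist that the detected number be the whole of the supplied string,
     * so that inputs such as "2016-01-01" are rejected */
    return i == len;
}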
M contrib/hstore/expected/hstore.out
M contrib/hstore/sql/hstore.sql
M src/backend/utils/adt/json.c
Make sure ecpg header files do not have a comment spanning several lines, one of which looks like a preprocessor directive, as that leads ecpg to incorrectly parse the comment as nested.
commit : 0b55fef393e6fae6086bce7704c3b2a369600a32
author : Michael Meskes <[email protected]>
date : Mon, 1 Feb 2016 13:10:40 +0100
committer: Michael Meskes <[email protected]>
date : Mon, 1 Feb 2016 13:10:40 +0100
M src/interfaces/ecpg/include/datetime.h
M src/interfaces/ecpg/include/decimal.h
Fix error in documented use of mingw-w64 compilers
commit : ca5f5c45f4542bd699baf7e694e07d8d6de3a1a2
author : Andrew Dunstan <[email protected]>
date : Sat, 30 Jan 2016 19:28:44 -0500
committer: Andrew Dunstan <[email protected]>
date : Sat, 30 Jan 2016 19:28:44 -0500
Error reported by Igal Sapir.
M doc/src/sgml/installation.sgml
Fix incorrect pattern-match processing in psql's \det command.
commit : db678ca161a9dc26701585c3c1e8f0a6d0655aa8
author : Tom Lane <[email protected]>
date : Fri, 29 Jan 2016 10:28:03 +0100
committer: Tom Lane <[email protected]>
date : Fri, 29 Jan 2016 10:28:03 +0100
listForeignTables' invocation of processSQLNamePattern did not match up
with the other ones that handle potentially-schema-qualified names; it
failed to make use of pg_table_is_visible() and also passed the name
arguments in the wrong order. Bug seems to have been aboriginal in commit
0d692a0dc9f0e532. It accidentally sort of worked as long as you didn't
inquire too closely into the behavior, although the silliness was later
exposed by inconsistencies in the test queries added by 59efda3e50ca4de6
(which I probably should have questioned at the time, but didn't).
Per bug #13899 from Reece Hart. Patch by Reece Hart and Tom Lane.
Back-patch to all affected branches.
M src/bin/psql/describe.c
Fix startup so that log prefix %h works for the log_connections message.
commit : 9bbfca8fde6109ce84543b9d71bcbf1c3b144b29
author : Tom Lane <[email protected]>
date : Tue, 26 Jan 2016 15:38:33 -0500
committer: Tom Lane <[email protected]>
date : Tue, 26 Jan 2016 15:38:33 -0500
We entirely randomly chose to initialize port->remote_host just after
printing the log_connections message, when we could perfectly well do it
just before, allowing %h and %r to work for that message. Per gripe from
Artem Tomyuk.
M src/backend/postmaster/postmaster.c
Properly install dynloader.h on MSVC builds
commit : 7a47262ce6b3370564b005b05279b0e07fa15bba
author : Bruce Momjian <[email protected]>
date : Tue, 19 Jan 2016 23:30:28 -0500
committer: Bruce Momjian <[email protected]>
date : Tue, 19 Jan 2016 23:30:28 -0500
This will enable PL/Java to be cleanly compiled, as dynloader.h is a
requirement.
Report by Chapman Flack
Patch by Michael Paquier
Backpatch through 9.1
M src/backend/utils/fmgr/dfmgr.c
M src/tools/msvc/Install.pm
M src/tools/msvc/Solution.pm
M src/tools/msvc/clean.bat
Fix spelling mistake.
commit : f704f434ebbeb71078c592fe1feaddfeb1e23f32
author : Robert Haas <[email protected]>
date : Thu, 14 Jan 2016 23:12:05 -0500
committer: Robert Haas <[email protected]>
date : Thu, 14 Jan 2016 23:12:05 -0500
Same patch submitted independently by David Rowley and Peter Geoghegan.
M contrib/pg_upgrade/controldata.c
Properly close token in sspi authentication
commit : 77d8edcf5443a7047b2dc792300255ec129228cf
author : Magnus Hagander <[email protected]>
date : Thu, 14 Jan 2016 13:06:03 +0100
committer: Magnus Hagander <[email protected]>
date : Thu, 14 Jan 2016 13:06:03 +0100
We can never leak more than one token, but we shouldn't do that. We
don't bother closing it in the error paths since the process will
exit shortly anyway.
Christian Ullrich
M src/backend/libpq/auth.c
Handle extension members when first setting object dump flags in pg_dump.
commit : b87403f7037c001f750d486cf00c4325ce2eb014
author : Tom Lane <[email protected]>
date : Wed, 13 Jan 2016 18:55:27 -0500
committer: Tom Lane <[email protected]>
date : Wed, 13 Jan 2016 18:55:27 -0500
pg_dump's original approach to handling extension member objects was to
run around and clear (or set) their dump flags rather late in its data
collection process. Unfortunately, quite a lot of code expects those flags
to be valid before that; which was an entirely reasonable expectation
before we added extensions. In particular, this explains Karsten Hilbert's
recent report of pg_upgrade failing on a database in which an extension
has been installed into the pg_catalog schema. Its objects are initially
marked as not-to-be-dumped on the strength of their schema, and later we
change them to must-dump because we're doing a binary upgrade of their
extension; but we've already skipped essential tasks like making associated
DO_SHELL_TYPE objects.
To fix, collect extension membership data first, and incorporate it in the
initial setting of the dump flags, so that those are once again correct
from the get-go. This has the undesirable side effect of slightly
lengthening the time taken before pg_dump acquires table locks, but testing
suggests that the increase in that window is not very much.
Along the way, get rid of ugly special-case logic for deciding whether
to dump procedural languages, FDWs, and foreign servers; dump decisions
for those are now correct up-front, too.
In 9.3 and up, this also fixes erroneous logic about when to dump event
triggers (basically, they were *always* dumped before). In 9.5 and up,
transform objects had that problem too.
Since this problem came in with extensions, back-patch to all supported
versions.
M src/bin/pg_dump/common.c
M src/bin/pg_dump/pg_dump.c
M src/bin/pg_dump/pg_dump.h
Avoid dump/reload problems when using both plpython2 and plpython3.
commit : 0ddeaba7ed88151fd90b5d13d50beca2aff0ab39
author : Tom Lane <[email protected]>
date : Mon, 11 Jan 2016 19:55:40 -0500
committer: Tom Lane <[email protected]>
date : Mon, 11 Jan 2016 19:55:40 -0500
Commit 803716013dc1350f installed a safeguard against loading plpython2
and plpython3 at the same time, but asserted that both could still be
used in the same database, just not in the same session. However, that's
not actually all that practical because dumping and reloading will fail
(since both libraries necessarily get loaded into the restoring session).
pg_upgrade is even worse, because it checks for missing libraries by
loading every .so library mentioned in the entire installation into one
session, so that you can have only one across the whole cluster.
We can improve matters by not throwing the error immediately in _PG_init,
but only when and if we're asked to do something that requires calling
into libpython. This ameliorates both of the above situations, since
while execution of CREATE LANGUAGE, CREATE FUNCTION, etc will result in
loading plpython, it isn't asked to do anything interesting (at least
not if check_function_bodies is off, as it will be during a restore).
It's possible that this opens some corner-case holes in which a crash
could be provoked with sufficient effort. However, since plpython
only exists as an untrusted language, any such crash would require
superuser privileges, making it "don't do that" not a security issue.
To reduce the hazards in this area, the error is still FATAL when it
does get thrown.
Per a report from Paul Jones. Back-patch to 9.2, which is as far back
as the patch applies without work. (It could be made to work in 9.1,
but given the lack of previous complaints, I'm disinclined to expend
effort so far back. We've been pretty desultory about support for
Python 3 in 9.1 anyway.)
M src/pl/plpython/plpy_main.c
Clean up some lack-of-STRICT issues in the core code, too.
commit : 8b5cc3ec4b7c9d87bbbbeb2fb6b5a801a701298d
author : Tom Lane <[email protected]>
date : Sat, 9 Jan 2016 16:58:32 -0500
committer: Tom Lane <[email protected]>
date : Sat, 9 Jan 2016 16:58:32 -0500
A scan for missed proisstrict markings in the core code turned up
these functions:
brin_summarize_new_values
pg_stat_reset_single_table_counters
pg_stat_reset_single_function_counters
pg_create_logical_replication_slot
pg_create_physical_replication_slot
pg_drop_replication_slot
The first three of these take OID, so a null argument will normally look
like a zero to them, resulting in "ERROR: could not open relation with OID
0" for brin_summarize_new_values, and no action for the pg_stat_reset_XXX
functions. The other three will dump core on a null argument, though this
is mitigated by the fact that they won't do so until after checking that
the caller is superuser or has rolreplication privilege.
In addition, the pg_logical_slot_get/peek[_binary]_changes family was
intentionally marked nonstrict, but failed to make nullness checks on all
the arguments; so again a null-pointer-dereference crash is possible but
only for superusers and rolreplication users.
Add the missing ARGISNULL checks to the latter functions, and mark the
former functions as strict in pg_proc. Make that change in the back
branches too, even though we can't force initdb there, just so that
installations initdb'd in future won't have the issue. Since none of these
bugs rise to the level of security issues (and indeed the pg_stat_reset_XXX
functions hardly misbehave at all), it seems sufficient to do this.
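As a hedged sketch, this is the shape of the check added to the
intentionally-nonstrict functions (the function name and message are
invented for illustration):

#include "postgres.h"
#include "fmgr.h"

PG_FUNCTION_INFO_V1(example_nonstrict_fn);

Datum
example_nonstrict_fn(PG_FUNCTION_ARGS)
{
    text   *slot_name;

    /* A nonstrict function must test each nullable argument before
     * fetching it; otherwise a NULL input turns into a null-pointer
     * dereference. */
    if (PG_ARGISNULL(0))
        ereport(ERROR,
                (errcode(ERRCODE_NULL_VALUE_NOT_ALLOWED),
                 errmsg("slot name must not be null")));
    slot_name = PG_GETARG_TEXT_PP(0);

    PG_RETURN_TEXT_P(slot_name);
}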
In addition, fix some order-of-operations oddities in the slot_get_changes
family, mostly cosmetic, but not the part that moves the function's last
few operations into the PG_TRY block. As it stood, there was significant
risk for an error to exit without clearing historical information from
the system caches.
The slot_get_changes bugs go back to 9.4 where that code was introduced.
Back-patch appropriate subsets of the pg_proc changes into all active
branches, as well.
M src/include/catalog/pg_proc.h
Clean up code for widget_in() and widget_out().
commit : 23382c4324c5f3c2cb5d6af5eaed1b0ff0230138
author : Tom Lane <[email protected]>
date : Sat, 9 Jan 2016 13:44:27 -0500
committer: Tom Lane <[email protected]>
date : Sat, 9 Jan 2016 13:44:27 -0500
Given syntactically wrong input, widget_in() could call atof() with an
indeterminate pointer argument, typically leading to a crash; or if it
didn't do that, it might return a NULL pointer, which again would lead
to a crash since old-style C functions aren't supposed to do things
that way. Fix that by correcting the off-by-one syntax test and
throwing a proper error rather than just returning NULL.
Also, since widget_in and widget_out have been marked STRICT for a
long time, their tests for null inputs are just dead code; remove 'em.
In the oldest branches, also improve widget_out to use snprintf not
sprintf, just to be sure.
In passing, get rid of a long-since-useless sprintf into a local buffer
that nothing further is done with, and make some other minor coding
style cleanups.
In the intended regression-testing usage of these functions, none of
this is very significant; but if the regression test database were
left around in a production installation, these bugs could amount
to a minor security hazard.
Piotr Stefaniak, Michael Paquier, and Tom Lane
M src/test/regress/regress.c
Add STRICT to some C functions created by the regression tests.
commit : f2c6804e4e139a1fa7c22a95c046e7833542434a
author : Tom Lane <[email protected]>
date : Sat, 9 Jan 2016 13:02:54 -0500
committer: Tom Lane <[email protected]>
date : Sat, 9 Jan 2016 13:02:54 -0500
These functions readily crash when passed a NULL input value. The tests
themselves do not pass NULL values to them; but when the regression
database is used as a basis for fuzz testing, they cause a lot of noise.
Also, if someone were to leave a regression database lying about in a
production installation, these would create a minor security hazard.
Andreas Seltenreich
M src/test/regress/input/create_function_2.source
M src/test/regress/output/create_function_2.source
Fix unobvious interaction between -X switch and subdirectory creation.
commit : 452064f262fd1c10879311e62181cf96c97dbb09
author : Tom Lane <[email protected]>
date : Thu, 7 Jan 2016 18:20:57 -0500
committer: Tom Lane <[email protected]>
date : Thu, 7 Jan 2016 18:20:57 -0500
Turns out the only reason initdb -X worked is that pg_mkdir_p won't
whine if you point it at something that's a symlink to a directory.
Otherwise, the attempt to create pg_xlog/ just like all the other
subdirectories would have failed. Let's be a little more explicit
about what's happening. Oversight in my patch for bug #13853
(mea culpa for not testing -X ...)
M src/bin/initdb/initdb.c
Use plain mkdir() not pg_mkdir_p() to create subdirectories of PGDATA.
commit : 9486d0202778074e3af4371e6a4580ab4ce18336
author : Tom Lane <[email protected]>
date : Thu, 7 Jan 2016 15:22:01 -0500
committer: Tom Lane <[email protected]>
date : Thu, 7 Jan 2016 15:22:01 -0500
When we're creating subdirectories of PGDATA during initdb, we know darn
well that the parent directory exists (or should exist) and that the new
subdirectory doesn't (or shouldn't). There is therefore no need to use
anything more complicated than mkdir(). Using pg_mkdir_p() just opens us
up to unexpected failure modes, such as the one exhibited in bug #13853
from Nuri Boardman. It's not very clear why pg_mkdir_p() went wrong there,
but it is clear that we didn't need to be trying to create parent
directories in the first place. We're not even saving any code, as proven
by the fact that this patch nets out at minus five lines.
Since this is a response to a field bug report, back-patch to all branches.
M src/bin/initdb/initdb.c
Windows: Make pg_ctl reliably detect service status
commit : 74d4009b8ceff043dcd74af67437934d9eeebca7
author : Alvaro Herrera <[email protected]>
date : Thu, 7 Jan 2016 11:59:08 -0300
committer: Alvaro Herrera <[email protected]>
date : Thu, 7 Jan 2016 11:59:08 -0300
pg_ctl is using isatty() to verify whether the process is running in a
terminal, and if not it sends its output to Windows' Event Log ... which
does the wrong thing when the output has been redirected to a pipe, as
reported in bug #13592.
To fix, make pg_ctl use the code we already have to detect service-ness:
in the master branch, move src/backend/port/win32/security.c to src/port
(with suitable tweaks so that it runs properly in backend and frontend
environments); pg_ctl already has access to pgport so it Just Works. In
older branches, that's likely to cause trouble, so instead duplicate the
required code in pg_ctl.c.
Author: Michael Paquier
Bug report and diagnosis: Egon Kocjan
Backpatch: all supported branches
M src/bin/pg_ctl/pg_ctl.c
Sort $(wildcard) output where needed for reproducible build output.
commit : 6d899f098c195f624b61929e6d43091f836da215
author : Tom Lane <[email protected]>
date : Tue, 5 Jan 2016 15:47:05 -0500
committer: Tom Lane <[email protected]>
date : Tue, 5 Jan 2016 15:47:05 -0500
The order of inclusion of .o files makes a difference in linker output;
not a functional difference, but still a bitwise difference, which annoys
some packagers who would like reproducible builds.
Report and patch by Christoph Berg
M contrib/pg_xlogdump/Makefile
Fix treatment of *lpNumberOfBytesRecvd == 0: that's a completion condition.
commit : 0f527f73bd24710b9eeadc60d930bb8ab9adb213
author : Tom Lane <[email protected]>
date : Mon, 4 Jan 2016 17:41:33 -0500
committer: Tom Lane <[email protected]>
date : Mon, 4 Jan 2016 17:41:33 -0500
pgwin32_recv() has treated a non-error return of zero bytes from WSARecv()
as being a reason to block ever since the current implementation was
introduced in commit a4c40f140d23cefb. However, so far as one can tell
from Microsoft's documentation, that is just wrong: what it means is
graceful connection closure (in stream protocols) or receipt of a
zero-length message (in message protocols), and neither case should result
in blocking here. The only reason the code worked at all was that control
then fell into the retry loop, which did *not* treat zero bytes specially,
so we'd get out after only wasting some cycles. But as of 9.5 we do not
normally reach the retry loop and so the bug is exposed, as reported by
Shay Rojansky and diagnosed by Andres Freund.
Remove the unnecessary test on the byte count, and rearrange the code
in the retry loop so that it looks identical to the initial sequence.
Back-patch of commit 90e61df8130dc7051a108ada1219fb0680cb3eb6. The
original plan was to apply this only to 9.5 and up, but after discussion
and buildfarm testing, it seems better to back-patch. The noblock code
path has been at risk of this problem since it was introduced (in 9.0);
if it did happen in pre-9.5 branches, the symptom would be that a walsender
would wait indefinitely rather than noticing a loss of connection. While
we lack proof that the case has been seen in the field, it seems possible
that it's happened without being reported.
M src/backend/port/win32/socket.c
Teach pg_dump to quote reloption values safely.
commit : 6a0d63d351a73e96fb0f618c92972080a8d024a5
author : Tom Lane <[email protected]>
date : Sat, 2 Jan 2016 19:04:45 -0500
committer: Tom Lane <[email protected]>
date : Sat, 2 Jan 2016 19:04:45 -0500
Commit c7e27becd2e6eb93 fixed this on the backend side, but we neglected
the fact that several code paths in pg_dump were printing reloptions
values that had not gotten massaged by ruleutils. Apply essentially the
same quoting logic in those places, too.
M src/bin/pg_dump/pg_dump.c
M src/bin/pg_dump/pg_dump.h
Fix overly-strict assertions in spgtextproc.c.
commit : 2917155d5805f16584e323b8764fd01b4023361a
author : Tom Lane <[email protected]>
date : Sat, 2 Jan 2016 16:24:50 -0500
committer: Tom Lane <[email protected]>
date : Sat, 2 Jan 2016 16:24:50 -0500
spg_text_inner_consistent is capable of reconstructing an empty string
to pass down to the next index level; this happens if we have an empty
string coming in, no prefix, and a dummy node label. (In practice, what
is needed to trigger that is insertion of a whole bunch of empty-string
values.) Then, we will arrive at the next level with in->level == 0
and a non-NULL (but zero length) in->reconstructedValue, which is valid
but the Assert tests weren't expecting it.
Per report from Andreas Seltenreich. This has no impact in non-Assert
builds, so should not be a problem in production, but back-patch to
all affected branches anyway.
In passing, remove a couple of useless variable initializations and
shorten the code by not duplicating DatumGetPointer() calls.
M src/backend/access/spgist/spgtextproc.c
Adjust back-branch release note description of commits a2a718b22 et al.
commit : 8f56ec243d6b30c772867bd0f26dcf753f522d83
author : Tom Lane <[email protected]>
date : Sat, 2 Jan 2016 15:29:03 -0500
committer: Tom Lane <[email protected]>
date : Sat, 2 Jan 2016 15:29:03 -0500
As pointed out by Michael Paquier, recovery_min_apply_delay didn't exist
in 9.0-9.3, making the release note text not very useful. Instead make it
talk about recovery_target_xid, which did exist then.
9.0 is already out of support, but we can fix the text in the newer
branches' copies of its release notes.
M doc/src/sgml/release-9.0.sgml
M doc/src/sgml/release-9.1.sgml
M doc/src/sgml/release-9.2.sgml
M doc/src/sgml/release-9.3.sgml
Update copyright for 2016
commit : 6d90d6a5ce0ca6fcf17b9a6e15ced5315941aa0e
author : Bruce Momjian <[email protected]>
date : Sat, 2 Jan 2016 13:33:39 -0500
committer: Bruce Momjian <[email protected]>
date : Sat, 2 Jan 2016 13:33:39 -0500
Backpatch certain files through 9.1
M COPYRIGHT
M doc/src/sgml/legal.sgml
Teach flatten_reloptions() to quote option values safely.
commit : babf38e8819152f68ae463ac9c77ca3f7b074991
author : Tom Lane <[email protected]>
date : Fri, 1 Jan 2016 15:27:53 -0500
committer: Tom Lane <[email protected]>
date : Fri, 1 Jan 2016 15:27:53 -0500
flatten_reloptions() supposed that it didn't really need to do anything
beyond inserting commas between reloption array elements. However, in
principle the value of a reloption could be nearly anything, since the
grammar allows a quoted string there. Any restrictions on it would come
from validity checking appropriate to the particular option, if any.
A reloption value that isn't a simple identifier or number could thus lead
to dump/reload failures due to syntax errors in CREATE statements issued
by pg_dump. We've gotten away with not worrying about this so far with
the core-supported reloptions, but extensions might allow reloption values
that cause trouble, as in bug #13840 from Kouhei Sutou.
To fix, split the reloption array elements explicitly, and then convert
any value that doesn't look like a safe identifier to a string literal.
(The details of the quoting rule could be debated, but this way is safe
and requires little code.) While we're at it, also quote reloption names
if they're not safe identifiers; that may not be a likely problem in the
field, but we might as well try to be bulletproof here.
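A simplified standalone sketch of that quoting rule (the safe-identifier
test here is illustrative, not the exact one used in ruleutils.c):

#include <ctype.h>
#include <stdbool.h>
#include <stdio.h>

static bool
reloption_value_is_safe(const char *value)
{
    if (*value == '\0')
        return false;
    for (const char *p = value; *p; p++)
    {
        if (!isalnum((unsigned char) *p) && *p != '_' && *p != '.')
            return false;
    }
    return true;
}

static void
append_reloption(FILE *out, const char *name, const char *value)
{
    fprintf(out, "%s=", name);
    if (reloption_value_is_safe(value))
        fputs(value, out);
    else
    {
        /* emit a string literal, doubling any embedded single quotes */
        fputc('\'', out);
        for (const char *p = value; *p; p++)
        {
            if (*p == '\'')
                fputc('\'', out);
            fputc(*p, out);
        }
        fputc('\'', out);
    }
}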
It's been like this for a long time, so back-patch to all supported
branches.
Kouhei Sutou, adjusted some by me
M src/backend/utils/adt/ruleutils.c
Add some more defenses against silly estimates to gincostestimate().
commit : 94114469f3b3802f7bfd082531d46398477e60d3
author : Tom Lane <[email protected]>
date : Fri, 1 Jan 2016 13:42:21 -0500
committer: Tom Lane <[email protected]>
date : Fri, 1 Jan 2016 13:42:21 -0500
A report from Andy Colson showed that gincostestimate() was not being
nearly paranoid enough about whether to believe the statistics it finds in
the index metapage. The problem is that the metapage stats (other than the
pending-pages count) are only updated by VACUUM, and in the worst case
could still reflect the index's original empty state even when it has grown
to many entries. We attempted to deal with that by scaling up the stats to
match the current index size, but if nEntries is zero then scaling it up
still gives zero. Moreover, the proportion of pages that are entry pages
vs. data pages vs. pending pages is unlikely to be estimated very well by
scaling if the index is now orders of magnitude larger than before.
We can improve matters by expanding the use of the rule-of-thumb estimates
I introduced in commit 7fb008c5ee59b040: if the index has grown by more
than a cutoff amount (here set at 4X growth) since VACUUM, then use the
rule-of-thumb numbers instead of scaling. This might not be exactly right
but it seems much less likely to produce insane estimates.
I also improved both the scaling estimate and the rule-of-thumb estimate
to account for numPendingPages, since it's reasonable to expect that that
is accurate in any case, and certainly pages that are in the pending list
are not either entry or data pages.
As a somewhat separate issue, adjust the estimation equations that are
concerned with extra fetches for partial-match searches. These equations
suppose that a fraction partialEntries / numEntries of the entry and data
pages will be visited as a consequence of a partial-match search. Now,
it's physically impossible for that fraction to exceed one, but our
estimate of partialEntries is mostly bunk, and our estimate of numEntries
isn't exactly gospel either, so we could arrive at a silly value. In the
example presented by Andy we were coming out with a value of 100, leading
to insane cost estimates. Clamp the fraction to one to avoid that.
Like the previous patch, back-patch to all supported branches; this
problem can be demonstrated in one form or another in all of them.
M src/backend/utils/adt/selfuncs.c
Document the exponentiation operator as associating left to right.
commit : 3ccc4e9ce3a37db70d049801d09f953d476ac8dc
author : Tom Lane <[email protected]>
date : Mon, 28 Dec 2015 12:09:00 -0500
committer: Tom Lane <[email protected]>
date : Mon, 28 Dec 2015 12:09:00 -0500
Common mathematical convention is that exponentiation associates right to
left. We aren't going to change the parser for this, but we could note
it in the operator's description. (It's already noted in the operator
precedence/associativity table, but users might not look there.)
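For example, the parser's left-to-right reading makes 2 ^ 3 ^ 2 evaluate as
(2 ^ 3) ^ 2 = 64, whereas the conventional right-to-left mathematical
reading 2 ^ (3 ^ 2) would give 512.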
Per bug #13829 from Henrik Pauli.
M doc/src/sgml/func.sgml
Update documentation about pseudo-types.
commit : e12c74401bf3ad1fbeaa41a435c16227b602fb86
author : Tom Lane <[email protected]>
date : Mon, 28 Dec 2015 11:04:42 -0500
committer: Tom Lane <[email protected]>
date : Mon, 28 Dec 2015 11:04:42 -0500
Tone down an overly strong statement about which pseudo-types PLs are
likely to allow. Add "event_trigger" to the list, as well as
"pg_ddl_command" in 9.5/HEAD. Back-patch to 9.3 where event_trigger
was added.
M doc/src/sgml/datatype.sgml
Fix translation domain in pg_basebackup
commit : 7533d5d35efb08a3af906f38fc267ecf66bea1e0
author : Alvaro Herrera <[email protected]>
date : Mon, 28 Dec 2015 10:50:35 -0300
committer: Alvaro Herrera <[email protected]>
date : Mon, 28 Dec 2015 10:50:35 -0300
For some reason, we've been overlooking the fact that pg_receivexlog
and pg_recvlogical have been using the wrong translation domains all along,
so their output hasn't ever been translated. The right domain is
pg_basebackup, not their own executable names.
Noticed by Ioseph Kim, who's been working on the Korean translation.
Backpatch pg_receivexlog to 9.2 and pg_recvlogical to 9.4.
M src/bin/pg_basebackup/pg_receivexlog.c
Add forgotten CHECK_FOR_INTERRUPT calls in pgcrypto's crypt()
commit : 0244677cf1031d2f0e6af09b3ea9bb8f4157e340
author : Alvaro Herrera <[email protected]>
date : Sun, 27 Dec 2015 13:03:19 -0300
committer: Alvaro Herrera <[email protected]>
date : Sun, 27 Dec 2015 13:03:19 -0300
Both Blowfish and DES implementations of crypt() can take arbitrarily
long time, depending on the number of rounds specified by the caller;
make sure they can be interrupted.
Author: Andreas Karlsson
Reviewer: Jeff Janes
Backpatch to 9.1.
M contrib/pgcrypto/crypt-blowfish.c
M contrib/pgcrypto/crypt-des.c
Fix factual and grammatical errors in comments for struct _tableInfo.
commit : c03e44245009846d4d8c162630014d35edd0537f
author : Tom Lane <[email protected]>
date : Thu, 24 Dec 2015 10:42:59 -0500
committer: Tom Lane <[email protected]>
date : Thu, 24 Dec 2015 10:42:59 -0500
Amit Langote, further adjusted by me
M src/bin/pg_dump/pg_dump.h
In pg_dump, remember connection passwords no matter how we got them.
commit : 534a4159c290bd1d49bad702f7f600f92cfdcd67
author : Tom Lane <[email protected]>
date : Wed, 23 Dec 2015 14:25:31 -0500
committer: Tom Lane <[email protected]>
date : Wed, 23 Dec 2015 14:25:31 -0500
When pg_dump prompts the user for a password, it remembers the password
for possible re-use by parallel worker processes. However, libpq might
have extracted the password from a connection string originally passed
as "dbname". Since we don't record the original form of dbname but
break it down to host/port/etc, the password gets lost. Fix that by
retrieving the actual password from the PGconn.
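A minimal sketch of that retrieval step (illustrative; pg_dump's surrounding
bookkeeping is omitted):

#include <string.h>
#include <libpq-fe.h>

/* Ask libpq for the password it actually used for the connection; it may
 * have come from a connection string passed as "dbname" rather than from
 * our own password prompt. */
static char *
saved_password(PGconn *conn)
{
    const char *pw = PQpass(conn);

    return (pw && pw[0] != '\0') ? strdup(pw) : NULL;
}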
(It strikes me that this whole approach is rather broken, as it will also
lose other information such as options that might have been present in
the connection string. But we'll leave that problem for another day.)
In passing, get rid of rather silly use of malloc() for small fixed-size
arrays.
Back-patch to 9.3 where parallel pg_dump was introduced.
Report and fix by Zeus Kronion, adjusted a bit by Michael Paquier and me
M src/bin/pg_dump/pg_backup_db.c
Rework internals of changing a type's ownership
commit : bc72c3b3fa109760c66635450b386edd7a8c29c7
author : Alvaro Herrera <[email protected]>
date : Mon, 21 Dec 2015 19:49:15 -0300
committer: Alvaro Herrera <[email protected]>
date : Mon, 21 Dec 2015 19:49:15 -0300
This is necessary so that REASSIGN OWNED does the right thing with
composite types, to wit, that it also alters ownership of the type's
pg_class entry -- previously, the pg_class entry remained owned by the
original user, which caused later other failures such as the new owner's
inability to use ALTER TYPE to rename an attribute of the affected
composite. Also, if the original owner is later dropped, the pg_class
entry becomes owned by a non-existent user, which is bogus.
To fix, create a new routine AlterTypeOwner_oid which knows whether to
pass the request to ATExecChangeOwner or deal with it directly, and use
that in shdepReassignOwner rather than calling AlterTypeOwnerInternal
directly. AlterTypeOwnerInternal is now simpler in that it only
modifies the pg_type entry and recurses to handle a possible array type;
higher-level tasks are handled by either AlterTypeOwner directly or
AlterTypeOwner_oid.
I took the opportunity to add a few more objects to the test rig for
REASSIGN OWNED, so that more cases are exercised. Additional ones could
be added for superuser-only-ownable objects (such as FDWs and event
triggers) but I didn't want to push my luck by adding a new superuser to
the tests on a backpatchable bug fix.
Per bug #13666 reported by Chris Pacejo.
This is a backpatch of commit 756e7b4c9db1 to branches 9.1 -- 9.4.
M src/backend/catalog/pg_shdepend.c
M src/backend/commands/tablecmds.c
M src/backend/commands/typecmds.c
M src/include/commands/typecmds.h
M src/test/regress/expected/dependency.out
M src/test/regress/sql/dependency.sql
adjust ACL owners for REASSIGN and ALTER OWNER TO
commit : 62e6eba8d26618e1a20cf29a94c1ce6d68d40f4e
author : Alvaro Herrera <[email protected]>
date : Mon, 21 Dec 2015 19:16:15 -0300
committer: Alvaro Herrera <[email protected]>
date : Mon, 21 Dec 2015 19:16:15 -0300
When REASSIGN and ALTER OWNER TO are used, both the object owner and ACL
list should be changed from the old owner to the new owner. This patch
fixes types, foreign data wrappers, and foreign servers to change their
ACL list properly; they already changed owners properly.
Report by Alexey Bashtanov
This is a backpatch of commit 59367fdf97c (for bug #9923) by Bruce
Momjian to branches 9.1 - 9.4; it wasn't backpatched originally out of
concerns that it would create a backwards compatibility problem, but per
discussion related to bug #13666 that turns out to have been misguided.
(Therefore, the entry in the 9.5 release notes should be removed.)
Note that 9.1 didn't have privileges on types (which were introduced by
commit 729205571e81), so this commit only changes foreign-data related
objects in that branch.
Discussion: http://www.postgresql.org/message-id/[email protected]
http://www.postgresql.org/message-id/[email protected]
M src/backend/commands/foreigncmds.c
M src/backend/commands/typecmds.c
M src/test/regress/expected/foreign_data.out
Make viewquery a copy in rewriteTargetView()
commit : 4271ed3860e3885d3fbed1ea9ee4f058e8763104
author : Stephen Frost <[email protected]>
date : Mon, 21 Dec 2015 10:34:28 -0500
committer: Stephen Frost <[email protected]>
date : Mon, 21 Dec 2015 10:34:28 -0500
Rather than expect the Query returned by get_view_query() to be
read-only and then copy bits and pieces of it out, simply copy the
entire structure when we get it. This addresses an issue where
AcquireRewriteLocks, which is called by acquireLocksOnSubLinks(),
scribbles on the parsetree passed in, which was actually an entry
in relcache, leading to segfaults with certain view definitions.
This also future-proofs us a bit for anyone adding more code to this
path.
The acquireLocksOnSubLinks() was added in commit c3e0ddd40.
Back-patch to 9.3 as that commit was.
M src/backend/rewrite/rewriteHandler.c
M src/test/regress/expected/updatable_views.out
M src/test/regress/sql/updatable_views.sql
Remove silly completion for "DELETE FROM tabname ...".
commit : 06d4fabfff6944eac905502cb3bf07953e9cfc50
author : Tom Lane <[email protected]>
date : Sun, 20 Dec 2015 18:29:51 -0500
committer: Tom Lane <[email protected]>
date : Sun, 20 Dec 2015 18:29:51 -0500
psql offered USING, WHERE, and SET in this context, but SET is not a valid
possibility here. Seems to have been a thinko in commit f5ab0a14ea83eb6c
which added DELETE's USING option.
M src/bin/psql/tab-complete.c
Fix improper initialization order for readline.
commit : 09b7abc27858b1973afc4201a69f1b34a557bba8
author : Tom Lane <[email protected]>
date : Thu, 17 Dec 2015 16:55:23 -0500
committer: Tom Lane <[email protected]>
date : Thu, 17 Dec 2015 16:55:23 -0500
Turns out we must set rl_basic_word_break_characters *before* we call
rl_initialize() the first time, because it will quietly copy that value
elsewhere --- but only on the first call. (Love these undocumented
dependencies.) I broke this yesterday in commit 2ec477dc8108339d;
like that commit, back-patch to all active branches. Per report from
Pavel Stehule.
M src/bin/psql/input.c
Cope with Readline's failure to track SIGWINCH events outside of input.
commit : 9afe392dc8f277db4b6c490519a88d7097393d38
author : Tom Lane <[email protected]>
date : Wed, 16 Dec 2015 16:58:56 -0500
committer: Tom Lane <[email protected]>
date : Wed, 16 Dec 2015 16:58:56 -0500
It emerges that libreadline doesn't notice terminal window size change
events unless they occur while collecting input. This is easy to stumble
over if you resize the window while using a pager to look at query output,
but it can be demonstrated without any pager involvement. The symptom is
that queries exceeding one line are misdisplayed during subsequent input
cycles, because libreadline has the wrong idea of the screen dimensions.
The safest, simplest way to fix this is to call rl_reset_screen_size()
just before calling readline(). That causes an extra ioctl(TIOCGWINSZ)
for every command; but since it only happens when reading from a tty, the
performance impact should be negligible. A more valid objection is that
this still leaves a tiny window during entry to readline() wherein delivery
of SIGWINCH will be missed; but the practical consequences of that are
probably negligible. In any case, there doesn't seem to be any good way to
avoid the race, since readline exposes no functions that seem safe to call
from a generic signal handler --- rl_reset_screen_size() certainly isn't.
It turns out that we also need an explicit rl_initialize() call, else
rl_reset_screen_size() dumps core when called before the first readline()
call.
rl_reset_screen_size() is not present in old versions of libreadline,
so we need a configure test for that. (rl_initialize() is present at
least back to readline 4.0, so we won't bother with a test for it.)
We would need a configure test anyway since libedit's emulation of
libreadline doesn't currently include such a function. Fortunately,
libedit seems not to have any corresponding bug.
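A hedged sketch of the resulting call sequence, also reflecting the
initialization-order fix above (HAVE_RL_RESET_SCREEN_SIZE stands in for the
configure-detected symbol, and the word-break string is a placeholder):

#include <stdio.h>
#include <stdbool.h>
#include <readline/readline.h>

static char *
prompt_for_line(const char *prompt)
{
    static bool initialized = false;

    if (!initialized)
    {
        /* must be set before the first rl_initialize(), which copies it */
        rl_basic_word_break_characters = "\t\n ";   /* placeholder value */
        rl_initialize();
        initialized = true;
    }

#ifdef HAVE_RL_RESET_SCREEN_SIZE
    /* pick up any window-size change that happened while we were not
     * collecting input */
    rl_reset_screen_size();
#endif

    return readline(prompt);
}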
Merlin Moncure, adjusted a bit by me
M configure
M configure.in
M src/bin/psql/input.c
M src/include/pg_config.h.in
Add missing CHECK_FOR_INTERRUPTS in lseg_inside_poly
commit : 871e28062cf55d2bfa967a5cec25d5e03818bd14
author : Alvaro Herrera <[email protected]>
date : Mon, 14 Dec 2015 16:44:40 -0300
committer: Alvaro Herrera <[email protected]>
date : Mon, 14 Dec 2015 16:44:40 -0300
Apparently, there are bugs in this code that cause it to loop endlessly.
That bug still needs more research, but in the meantime it's clear that
the loop is missing a check for interrupts that would allow it to be
cancelled in a timely manner.
Backpatch to 9.1 -- this has been missing since 49475aab8d0d.
M src/backend/utils/adt/geo_ops.c
Fix out-of-memory error handling in ParameterDescription message processing.
commit : dee1ed54f80345250cde37632673b139742024b6
author : Heikki Linnakangas <[email protected]>
date : Mon, 14 Dec 2015 18:19:10 +0200
committer: Heikki Linnakangas <[email protected]>
date : Mon, 14 Dec 2015 18:19:10 +0200
If libpq ran out of memory while constructing the result set, it would hang,
waiting for more data from the server, which might never arrive. To fix,
distinguish between out-of-memory error and not-enough-data cases, and give
a proper error message back to the client on OOM.
There are still similar issues in handling COPY start messages, but let's
handle that as a separate patch.
Michael Paquier, Amit Kapila and me. Backpatch to all supported versions.
M src/interfaces/libpq/fe-protocol3.c
Correct statement to actually be the intended assert statement.
commit : c755b7aaf776235413701813adc6942efedda9fd
author : Andres Freund <[email protected]>
date : Mon, 14 Dec 2015 11:24:59 +0100
committer: Andres Freund <[email protected]>
date : Mon, 14 Dec 2015 11:24:59 +0100
e3f4cfc7 introduced a LWLockHeldByMe() call, without the corresponding
Assert() surrounding it.
Spotted by Coverity.
Backpatch: 9.1+, like the previous commit
M src/backend/storage/buffer/bufmgr.c
Docs: document that psql's "\i -" means read from stdin.
commit : 818a680a6f0bc565cc12aaf9fa67752988b40f21
author : Tom Lane <[email protected]>
date : Sun, 13 Dec 2015 23:42:54 -0500
committer: Tom Lane <[email protected]>
date : Sun, 13 Dec 2015 23:42:54 -0500
This has worked that way for a long time, maybe always, but you would
not have known it from the documentation. Also back-patch the notes
I added to HEAD earlier today about behavior of the "-f -" switch,
which likewise have been valid for many releases.
M doc/src/sgml/ref/psql-ref.sgml
Properly initialize write, flush and replay locations in walsender slots
commit : a1fb84990dc172f21f12a8fb79ab58876e81fe6b
author : Magnus Hagander <[email protected]>
date : Sun, 13 Dec 2015 16:40:37 +0100
committer: Magnus Hagander <[email protected]>
date : Sun, 13 Dec 2015 16:40:37 +0100
These would leak random xlog positions if a walsender used for backup reused
a walsender slot previously used by a replication walsender.
In passing also fix a couple of cases where the xlog pointer is directly
compared to zero instead of using XLogRecPtrIsInvalid, noted by
Michael Paquier.
M src/backend/replication/walsender.c
Doc: update external URLs for PostGIS project.
commit : 5f1de605640578667e97894d846d119a5dc5a31f
author : Tom Lane <[email protected]>
date : Sat, 12 Dec 2015 20:02:09 -0500
committer: Tom Lane <[email protected]>
date : Sat, 12 Dec 2015 20:02:09 -0500
Paul Ramsey
M doc/src/sgml/earthdistance.sgml
M doc/src/sgml/external-projects.sgml
M doc/src/sgml/release-8.4.sgml
Fix ALTER TABLE ... SET TABLESPACE for unlogged relations.
commit : 9037bdc88f8a71aaf2fcc5f6fe1cbb98fd1ad470
author : Andres Freund <[email protected]>
date : Sat, 12 Dec 2015 14:19:25 +0100
committer: Andres Freund <[email protected]>
date : Sat, 12 Dec 2015 14:19:25 +0100
Changing the tablespace of an unlogged relation did not WAL log the
creation and content of the init fork. Thus, after a standby is
promoted, unlogged relations cannot be accessed anymore, with errors
like:
ERROR: 58P01: could not open file "pg_tblspc/...": No such file or directory
Additionally, the init fork was not synced to disk, independent of the
configured wal_level, a relatively small durability risk.
Investigation of that problem also brought to light that, even for
permanent relations, the creation of !main forks was not WAL logged,
i.e. no XLOG_SMGR_CREATE records were emitted. That mostly turns out not
to be a problem, because these files are created when the actual
relation data is copied; nonexistent files are not treated as an error
condition during replay. But that doesn't work for empty files, and
generally feels a bit haphazard. Luckily, outside init and main forks,
empty forks don't occur often or are not a problem.
Add the required WAL logging and syncing to disk.
Reported-By: Michael Paquier
Author: Michael Paquier and Andres Freund
Discussion: [email protected]
Backpatch: 9.1, where unlogged relations were introduced
M src/backend/commands/tablecmds.c
Add an expected-file to match behavior of latest libxml2.
commit : fee48581527f7a46c9ca75df6d1620c829658526
author : Tom Lane <[email protected]>
date : Fri, 11 Dec 2015 19:08:40 -0500
committer: Tom Lane <[email protected]>
date : Fri, 11 Dec 2015 19:08:40 -0500
Recent releases of libxml2 do not provide error context reports for errors
detected at the very end of the input string. This appears to be a bug, or
at least an infelicity, introduced by the fix for libxml2's CVE-2015-7499.
We can hope that this behavioral change will get undone before too long;
but the security patch is likely to spread a lot faster/further than any
follow-on cleanup, which means this behavior is likely to be present in the
wild for some time to come. As a stopgap, add a variant regression test
expected-file that matches what you get with a libxml2 that acts this way.
A src/test/regress/expected/xml_2.out
Fix REASSIGN OWNED for foreign user mappings
commit : 4626245bc6d2a484a8dc3169192eadc14dde3a5d
author : Alvaro Herrera <[email protected]>
date : Fri, 11 Dec 2015 18:39:09 -0300
committer: Alvaro Herrera <[email protected]>
date : Fri, 11 Dec 2015 18:39:09 -0300
As reported in bug #13809 by Alexander Ashurkov, the code for REASSIGN
OWNED hadn't gotten word about user mappings. Deal with them in the
same way default ACLs do, which is to ignore them altogether; they are
handled just fine by DROP OWNED. The other foreign object cases are
already handled correctly by both commands.
Also add a REASSIGN OWNED statement to foreign_data test to exercise the
foreign data objects. (The changes are just before the "cleanup" phase,
so it shouldn't remove any existing live test.)
Reported by Alexander Ashurkov, then independently by Jaime Casanova.
M src/backend/catalog/pg_shdepend.c
M src/test/regress/expected/foreign_data.out
M src/test/regress/sql/foreign_data.sql
Install our "missing" script where PGXS builds can find it.
commit : 1ebe75a2cc5ef8db20cd04098bb9b8d1593209f7
author : Tom Lane <[email protected]>
date : Fri, 11 Dec 2015 16:14:27 -0500
committer: Tom Lane <[email protected]>
date : Fri, 11 Dec 2015 16:14:27 -0500
This allows sane behavior in a PGXS build done on a machine where build
tools such as bison are missing.
Jim Nasby
M config/Makefile
Still more fixes for planner's handling of LATERAL references.
commit : 260590e6b2fb958de79edc07d9222a39ba5913e5
author : Tom Lane <[email protected]>
date : Fri, 11 Dec 2015 14:22:20 -0500
committer: Tom Lane <[email protected]>
date : Fri, 11 Dec 2015 14:22:20 -0500
More fuzz testing by Andreas Seltenreich exposed that the planner did not
cope well with chains of lateral references. If relation X references Y
laterally, and Y references Z laterally, then we will have to scan X on the
inside of a nestloop with Z, so for all intents and purposes X is laterally
dependent on Z too. The planner did not understand this and would generate
intermediate joins that could not be used. While that was usually harmless
except for wasting some planning cycles, under the right circumstances it
would lead to "failed to build any N-way joins" or "could not devise a
query plan" planner failures.
To fix that, convert the existing per-relation lateral_relids and
lateral_referencers relid sets into their transitive closures; that is,
they now show all relations on which a rel is directly or indirectly
laterally dependent. This not only fixes the chained-reference problem
but allows some of the relevant tests to be made substantially simpler
and faster, since they can be reduced to simple bitmap manipulations
instead of searches of the LateralJoinInfo list.
Also, when a PlaceHolderVar that is due to be evaluated at a join contains
lateral references, we should treat those references as indirect lateral
dependencies of each of the join's base relations. This prevents us from
trying to join any individual base relations to the lateral reference
source before the join is formed, which again cannot work.
Andreas' testing also exposed another oversight in the "dangerous
PlaceHolderVar" test added in commit 85e5e222b1dd02f1. Simply rejecting
unsafe join paths in joinpath.c is insufficient, because in some cases
we will end up rejecting *all* possible paths for a particular join, again
leading to "could not devise a query plan" failures. The restriction has
to be known also to join_is_legal and its cohort functions, so that they
will not select a join for which that will happen. I chose to move the
supporting logic into joinrels.c where the latter functions are.
Back-patch to 9.3 where LATERAL support was introduced.
M src/backend/optimizer/path/joinpath.c
M src/backend/optimizer/path/joinrels.c
M src/backend/optimizer/plan/initsplan.c
M src/backend/optimizer/util/relnode.c
M src/include/nodes/relation.h
M src/include/optimizer/pathnode.h
M src/include/optimizer/paths.h
M src/test/regress/expected/join.out
M src/test/regress/sql/join.sql
Fix bug leading to restoring unlogged relations from empty files.
commit : b19405a44b1e5d5c59f00763d2f2d8da0ea32368
author : Andres Freund <[email protected]>
date : Thu, 10 Dec 2015 16:25:12 +0100
committer: Andres Freund <[email protected]>
date : Thu, 10 Dec 2015 16:25:12 +0100
At the end of crash recovery, unlogged relations are reset to the empty
state, using their init fork as the template. The init fork is copied to
the main fork without going through shared buffers. Unfortunately WAL
replay so far has not necessarily flushed writes from shared buffers to
disk at that point. In normal crash recovery, and before the
introduction of 'fast promotions' in fd4ced523 / 9.3, the
END_OF_RECOVERY checkpoint flushes the buffers out in time. But with
fast promotions that's not the case anymore.
To fix, force WAL writes targeting the init fork to be flushed
immediately (using the new FlushOneBuffer() function). In 9.5+ that
flush can centrally be triggered from the code dealing with restoring
full page writes (XLogReadBufferForRedoExtended), in earlier releases
that responsibility is in the hands of XLOG_HEAP_NEWPAGE's replay
function.
Backpatch to 9.1, even though this is currently only known to trigger in
9.3+. Flushing earlier is more robust, and it is advantageous to keep
the branches similar.
Typical symptoms of this bug are errors like
'ERROR: index "..." contains unexpected zero page at block 0'
shortly after promoting a node.
Reported-By: Thom Brown
Author: Andres Freund and Michael Paquier
Discussion: [email protected]
Backpatch: 9.1-
M src/backend/access/heap/heapam.c
M src/backend/storage/buffer/bufmgr.c
M src/include/storage/bufmgr.h
Accept flex > 2.5.x on Windows, too.
commit : b3e377a004511df45dcd6bc58353ce1f44932433
author : Tom Lane <[email protected]>
date : Thu, 10 Dec 2015 10:19:13 -0500
committer: Tom Lane <[email protected]>
date : Thu, 10 Dec 2015 10:19:13 -0500
Commit 32f15d05c fixed this in configure, but missed the similar check
in the MSVC scripts.
Michael Paquier, per report from Victor Wagner
M src/tools/msvc/pgflex.pl
Simplify LATERAL-related calculations within add_paths_to_joinrel().
commit : 54497807b3941b5e30bfe77ca6136a6163d4f199
author : Tom Lane <[email protected]>
date : Wed, 9 Dec 2015 18:54:25 -0500
committer: Tom Lane <[email protected]>
date : Wed, 9 Dec 2015 18:54:25 -0500
While convincing myself that commit 7e19db0c09719d79 would solve both of
the problems recently reported by Andreas Seltenreich, I realized that
add_paths_to_joinrel's handling of LATERAL restrictions could be made
noticeably simpler and faster if we were to retain the minimum possible
parameterization for each joinrel (that is, the set of relids supplying
unsatisfied lateral references in it). We already retain that for
baserels, in RelOptInfo.lateral_relids, so we can use that field for
joinrels too.
This is a back-port of commit edca44b1525b3d591263d032dc4fe500ea771e0e.
I originally intended not to back-patch that, but additional hacking
in this area turns out to be needed, making it necessary not optional
to compute lateral_relids for joinrels. In preparation for those fixes,
sync the relevant code with HEAD as much as practical. (I did not risk
rearranging fields of RelOptInfo in released branches, however.)
M src/backend/optimizer/path/joinpath.c
M src/backend/optimizer/util/relnode.c
M src/include/nodes/relation.h
Fix another oversight in checking if a join with LATERAL refs is legal.
commit : 0a34ff7e9a3a232d675fe03dba00d010c689f33c
author : Tom Lane <[email protected]>
date : Mon, 7 Dec 2015 17:41:45 -0500
committer: Tom Lane <[email protected]>
date : Mon, 7 Dec 2015 17:41:45 -0500
It was possible for the planner to decide to join a LATERAL subquery to
the outer side of an outer join before the outer join itself is completed.
Normally that's fine because of the associativity rules, but it doesn't
work if the subquery contains a lateral reference to the inner side of the
outer join. In such a situation the outer join *must* be done first.
join_is_legal() missed this consideration and would allow the join to be
attempted, but the actual path-building code correctly decided that no
valid join path could be made, sometimes leading to planner errors such as
"failed to build any N-way joins".
Per report from Andreas Seltenreich. Back-patch to 9.3 where LATERAL
support was added.
M src/backend/optimizer/path/joinrels.c
M src/backend/optimizer/util/relnode.c
M src/include/optimizer/pathnode.h
M src/test/regress/expected/join.out
M src/test/regress/sql/join.sql
Further improve documentation of the role-dropping process.
commit : fd52958856233bc3248db879c4241bdf08075c42
author : Tom Lane <[email protected]>
date : Fri, 4 Dec 2015 14:44:13 -0500
committer: Tom Lane <[email protected]>
date : Fri, 4 Dec 2015 14:44:13 -0500
In commit 1ea0c73c2 I added a section to user-manag.sgml about how to drop
roles that own objects; but as pointed out by Stephen Frost, I neglected
that shared objects (databases or tablespaces) may need special treatment.
Fix that. Back-patch to supported versions, like the previous patch.
M doc/src/sgml/user-manag.sgml
Make gincostestimate() cope with hypothetical GIN indexes.
commit : 52774e52dd5832ac3cbee29cb23d24b30fb7eed5
author : Tom Lane <[email protected]>
date : Tue, 1 Dec 2015 16:24:34 -0500
committer: Tom Lane <[email protected]>
date : Tue, 1 Dec 2015 16:24:34 -0500
We tried to fetch statistics data from the index metapage, which does not
work if the index isn't actually present. If the index is hypothetical,
instead extrapolate some plausible internal statistics based on the index
page count provided by the index-advisor plugin.
There was already some code in gincostestimate() to invent internal stats
in this way, but since it was only meant as a stopgap for pre-9.1 GIN
indexes that hadn't been vacuumed since upgrading, it was pretty crude.
If we want it to support index advisors, we should try a little harder.
A small amount of testing says that it's better to estimate the entry pages
as 90% of the index, not 100%. Also, estimating the number of entries
(keys) as equal to the heap tuple count could be wildly wrong in either
direction. Instead, let's estimate 100 entries per entry page.
Perhaps someday somebody will want the index advisor to be able to provide
these numbers more directly, but for the moment this should serve.
Problem report and initial patch by Julien Rouhaud; modified by me to
invent less-bogus internal statistics. Back-patch to all supported
branches, since we've supported index advisors since 9.0.
M src/backend/utils/adt/selfuncs.c
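The replacement heuristics are simple arithmetic; a minimal standalone sketch of the numbers quoted above (the function and variable names are illustrative, not the backend's):
    #include <stdio.h>

    /* Invent plausible GIN internal statistics from nothing but the index
     * page count supplied by an index-advisor plugin. */
    static void
    invent_gin_stats(double index_pages, double *entry_pages, double *num_entries)
    {
        *entry_pages = index_pages * 0.90;      /* assume 90% of the index is entry pages */
        *num_entries = *entry_pages * 100.0;    /* assume 100 entries (keys) per entry page */
    }

    int
    main(void)
    {
        double entry_pages, num_entries;

        invent_gin_stats(1000.0, &entry_pages, &num_entries);
        printf("entry pages: %.0f, entries: %.0f\n", entry_pages, num_entries);
        return 0;
    }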
Use "g" not "f" format in ecpg's PGTYPESnumeric_from_double().
commit : 3e6e98c5a8551e3b90bbc82d5bdad2f574a6f508
author : Tom Lane <[email protected]>
date : Tue, 1 Dec 2015 11:42:25 -0500
committer: Tom Lane <[email protected]>
date : Tue, 1 Dec 2015 11:42:25 -0500
The previous coding could overrun the provided buffer size for a very large
input, or lose precision for a very small input. Adopt the methodology
that's been in use in the equivalent backend code for a long time.
Per private report from Bas van Schaik. Back-patch to all supported
branches.
M src/interfaces/ecpg/pgtypeslib/numeric.c
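A standalone illustration of why "f" format is risky for doubles while "g" stays bounded; this is not the ecpg code, just the formatting behavior it relies on:
    #include <stdio.h>
    #include <float.h>

    int
    main(void)
    {
        char buf[64];
        int  need;

        /* "%f" needs over 300 characters for 1e300 -- an overrun risk when the
         * caller provides a fixed-size buffer. */
        need = snprintf(buf, sizeof(buf), "%f", 1e300);
        printf("chars needed by %%f for 1e300: %d\n", need);

        /* "%f" also drops all significant digits of a very small input... */
        snprintf(buf, sizeof(buf), "%f", 1e-300);
        printf("%%f for 1e-300: %s\n", buf);

        /* ...whereas "%g" keeps both extremes short and precise. */
        snprintf(buf, sizeof(buf), "%.*g", DBL_DIG, 1e300);
        printf("%%g for 1e300:  %s\n", buf);
        snprintf(buf, sizeof(buf), "%.*g", DBL_DIG, 1e-300);
        printf("%%g for 1e-300: %s\n", buf);
        return 0;
    }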
Fix failure to consider failure cases in GetComboCommandId().
commit : 6823bc282943b44a57cdde3921621f7d83598d83
author : Tom Lane <[email protected]>
date : Thu, 26 Nov 2015 13:23:02 -0500
committer: Tom Lane <[email protected]>
date : Thu, 26 Nov 2015 13:23:02 -0500
Failure to initially palloc the comboCids array, or to realloc it bigger
when needed, left combocid's data structures in an inconsistent state that
would cause trouble if the top transaction continues to execute. Noted
while examining a user complaint about the amount of memory used for this.
(There's not much we can do about that, but it does point up that repalloc
failure has a non-negligible chance of occurring here.)
In HEAD/9.5, also avoid possible invocation of memcpy() with a null pointer
in SerializeComboCIDState; cf commit 13bba0227.
M src/backend/utils/time/combocid.c
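PostgreSQL's palloc/repalloc report failure by raising an error rather than returning NULL, but the structural point is the same: don't update the bookkeeping until the allocation has succeeded. A standalone sketch of that pattern using plain realloc() (types and names are illustrative):
    #include <stdio.h>
    #include <stdlib.h>

    struct id_array
    {
        int    *ids;
        size_t  used;
        size_t  allocated;
    };

    static int
    append_id(struct id_array *a, int id)
    {
        if (a->used >= a->allocated)
        {
            size_t  newsize = a->allocated ? a->allocated * 2 : 8;
            int    *tmp = realloc(a->ids, newsize * sizeof(int));

            if (tmp == NULL)
                return -1;          /* a->ids and a->allocated are still consistent */
            a->ids = tmp;
            a->allocated = newsize; /* bookkeeping updated only after success */
        }
        a->ids[a->used++] = id;
        return 0;
    }

    int
    main(void)
    {
        struct id_array a = {NULL, 0, 0};

        for (int i = 0; i < 100; i++)
            if (append_id(&a, i) != 0)
                return 1;
        printf("stored %zu ids\n", a.used);
        free(a.ids);
        return 0;
    }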
Be more paranoid about null return values from libpq status functions.
commit : 64b7079e50cc75fc0e8730793b4a66bfb7388de2
author : Tom Lane <[email protected]>
date : Wed, 25 Nov 2015 17:31:53 -0500
committer: Tom Lane <[email protected]>
date : Wed, 25 Nov 2015 17:31:53 -0500
PQhost() can return NULL in non-error situations, namely when a Unix-socket
connection has been selected by default. That behavior is a tad debatable
perhaps, but for the moment we should make sure that psql copes with it.
Unfortunately, do_connect() failed to: it could pass a NULL pointer to
strcmp(), resulting in crashes on most platforms. This was reported as a
security issue by ChenQin of Topsec Security Team, but the consensus of
the security list is that it's just a garden-variety bug with no security
implications.
For paranoia's sake, I made the keep_password test not trust PQuser or
PQport either, even though I believe those will never return NULL given
a valid PGconn.
Back-patch to all supported branches.
M src/bin/psql/command.c
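A standalone sketch of the defensive comparison: treat NULL like a missing value instead of handing it to strcmp(), which crashes on most platforms. The helper name is illustrative, not psql's:
    #include <stdio.h>
    #include <string.h>

    static int
    null_safe_streq(const char *a, const char *b)
    {
        if (a == NULL || b == NULL)
            return a == b;          /* equal only if both are NULL */
        return strcmp(a, b) == 0;
    }

    int
    main(void)
    {
        const char *old_host = NULL;            /* e.g. a default Unix-socket connection */
        const char *new_host = "localhost";

        printf("same host: %s\n", null_safe_streq(old_host, new_host) ? "yes" : "no");
        printf("both null: %s\n", null_safe_streq(NULL, NULL) ? "yes" : "no");
        return 0;
    }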
pg_upgrade: fix CopyFile() on Windows to fail on file existence
commit : 6638c9aaf881b156fafc05371e8ed21f0cdda66e
author : Bruce Momjian <[email protected]>
date : Tue, 24 Nov 2015 17:18:27 -0500
committer: Bruce Momjian <[email protected]>
date : Tue, 24 Nov 2015 17:18:27 -0500
Also fix getErrorText() to return the right error string on failure.
This behavior now matches that of other operating systems.
Report by Noah Misch
Backpatch through 9.1
M contrib/pg_upgrade/file.c
M contrib/pg_upgrade/util.c
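CopyFile() returns zero on failure, so treating a zero return as success silently ignores copy errors. A Windows-only sketch of the corrected check (the error reporting and file names are illustrative):
    #ifdef _WIN32
    #include <windows.h>
    #include <stdio.h>

    static int
    copy_file_checked(const char *src, const char *dst)
    {
        /* Third argument TRUE: fail rather than overwrite an existing file. */
        if (CopyFileA(src, dst, TRUE) == 0)
        {
            fprintf(stderr, "could not copy \"%s\" to \"%s\": error code %lu\n",
                    src, dst, GetLastError());
            return -1;
        }
        return 0;
    }

    int
    main(void)
    {
        return copy_file_checked("src.dat", "dst.dat") == 0 ? 0 : 1;
    }
    #else
    int main(void) { return 0; }    /* nothing to demonstrate on non-Windows hosts */
    #endif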
Adopt the GNU convention for handling tar-archive members exceeding 8GB.
commit : 0e6185283c62fc1394ee6242a496fc727fe1136d
author : Tom Lane <[email protected]>
date : Sat, 21 Nov 2015 20:21:32 -0500
committer: Tom Lane <[email protected]>
date : Sat, 21 Nov 2015 20:21:32 -0500
The POSIX standard for tar headers requires archive member sizes to be
printed in octal with at most 11 digits, limiting the representable file
size to 8GB. However, GNU tar and apparently most other modern tars
support a convention in which oversized values can be stored in base-256,
allowing any practical file to be a tar member. Adopt this convention
to remove two limitations:
* pg_dump with -Ft output format failed if the contents of any one table
exceeded 8GB.
* pg_basebackup failed if the data directory contained any file exceeding
8GB. (This would be a fatal problem for installations configured with a
table segment size of 8GB or more, and it has also been seen to fail when
large core dump files exist in the data directory.)
File sizes under 8GB are still printed in octal, so that no compatibility
issues are created except in cases that would have failed entirely before.
In addition, this patch fixes several bugs in the same area:
* In 9.3 and later, we'd defined tarCreateHeader's file-size argument as
size_t, which meant that on 32-bit machines it would write a corrupt tar
header for file sizes between 4GB and 8GB, even though no error was raised.
This broke both "pg_dump -Ft" and pg_basebackup for such cases.
* pg_restore from a tar archive would fail on tables of size between 4GB
and 8GB, on machines where either "size_t" or "unsigned long" is 32 bits.
This happened even with an archive file not affected by the previous bug.
* pg_basebackup would fail if there were files of size between 4GB and 8GB,
even on 64-bit machines.
* In 9.3 and later, "pg_basebackup -Ft" failed entirely, for any file size,
on 64-bit big-endian machines.
In view of these potential data-loss bugs, back-patch to all supported
branches, even though removal of the documented 8GB limit might otherwise
be considered a new feature rather than a bug fix.
M doc/src/sgml/ref/pg_dump.sgml
M src/backend/replication/basebackup.c
M src/bin/pg_basebackup/pg_basebackup.c
M src/bin/pg_dump/pg_backup_tar.c
M src/include/pgtar.h
M src/port/tar.c
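A standalone sketch of the size-field convention described above: values below 8GB keep the 11-digit octal form, larger ones set the high bit of the first byte and store the value as big-endian base-256. The field width matches the ustar header; the helper name is illustrative:
    #include <stdio.h>
    #include <string.h>
    #include <stdint.h>

    #define TAR_SIZE_FIELD_LEN 12   /* the ustar "size" field is 12 bytes wide */

    static void
    fill_tar_size(char *field, uint64_t size)
    {
        if (size < ((uint64_t) 1 << 33))        /* fits in 11 octal digits (< 8GB) */
        {
            snprintf(field, TAR_SIZE_FIELD_LEN, "%011llo", (unsigned long long) size);
        }
        else
        {
            /* GNU convention: flag byte 0x80, remaining bytes big-endian binary. */
            memset(field, 0, TAR_SIZE_FIELD_LEN);
            field[0] = (char) 0x80;
            for (int i = TAR_SIZE_FIELD_LEN - 1; i > 0; i--)
            {
                field[i] = (char) (size & 0xFF);
                size >>= 8;
            }
        }
    }

    int
    main(void)
    {
        char field[TAR_SIZE_FIELD_LEN];

        fill_tar_size(field, (uint64_t) 5 * 1024 * 1024 * 1024);    /* 5GB: octal */
        printf("5GB  -> %.11s\n", field);
        fill_tar_size(field, (uint64_t) 20 * 1024 * 1024 * 1024);   /* 20GB: base-256 */
        printf("20GB -> first byte 0x%02x\n", (unsigned char) field[0]);
        return 0;
    }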
Fix handling of inherited check constraints in ALTER COLUMN TYPE (again).
commit : 64349f1d29dc6257551c7663526fc6633b45e90b
author : Tom Lane <[email protected]>
date : Fri, 20 Nov 2015 14:55:29 -0500
committer: Tom Lane <[email protected]>
date : Fri, 20 Nov 2015 14:55:29 -0500
The previous way of reconstructing check constraints was to do a separate
"ALTER TABLE ONLY tab ADD CONSTRAINT" for each table in an inheritance
hierarchy. However, that way has no hope of reconstructing the check
constraints' own inheritance properties correctly, as pointed out in
bug #13779 from Jan Dirk Zijlstra. What we should do instead is to do
a regular "ALTER TABLE", allowing recursion, at the topmost table that
has a particular constraint, and then suppress the work queue entries
for inherited instances of the constraint.
Annoyingly, we'd tried to fix this behavior before, in commit 5ed6546cf,
but we failed to notice that it wasn't reconstructing the pg_constraint
field values correctly.
As long as I'm touching pg_get_constraintdef_worker anyway, tweak it to
always schema-qualify the target table name; this seems like useful backup
to the protections installed by commit 5f173040.
In HEAD/9.5, get rid of get_constraint_relation_oids, which is now unused.
(I could alternatively have modified it to also return conislocal, but that
seemed like a pretty single-purpose API, so let's not pretend it has some
other use.) It's unused in the back branches as well, but I left it in
place just in case some third-party code has decided to use it.
In HEAD/9.5, also rename pg_get_constraintdef_string to
pg_get_constraintdef_command, as the previous name did nothing to explain
what that entry point did differently from others (and its comment was
equally useless). Again, that change doesn't seem like material for
back-patching.
I did a bit of re-pgindenting in tablecmds.c in HEAD/9.5, as well.
Otherwise, back-patch to all supported branches.
M src/backend/commands/tablecmds.c
M src/backend/utils/adt/ruleutils.c
M src/test/regress/expected/alter_table.out
M src/test/regress/sql/alter_table.sql
Accept flex > 2.5.x in configure.
commit : ae81d4fb19baa6409e1a86b3e5c22009ee8d0052
author : Tom Lane <[email protected]>
date : Wed, 18 Nov 2015 17:45:05 -0500
committer: Tom Lane <[email protected]>
date : Wed, 18 Nov 2015 17:45:05 -0500
Per buildfarm member anchovy, 2.6.0 exists in the wild now.
Hopefully it works with Postgres; if not, we'll have to do something
about that, but in any case claiming it's "too old" is pretty silly.
M config/programs.m4
M configure
Fix possible internal overflow in numeric division.
commit : 7df6dc4056acc477d792bf3bb1a58eb3c1047269
author : Tom Lane <[email protected]>
date : Tue, 17 Nov 2015 15:46:47 -0500
committer: Tom Lane <[email protected]>
date : Tue, 17 Nov 2015 15:46:47 -0500
div_var_fast() postpones propagating carries in the same way as mul_var(),
so it has the same corner-case overflow risk we fixed in 246693e5ae8a36f0,
namely that the size of the carries has to be accounted for when setting
the threshold for executing a carry propagation step. We've not devised
a test case illustrating the brokenness, but the required fix seems clear
enough. Like the previous fix, back-patch to all active branches.
Dean Rasheed
M src/backend/utils/adt/numeric.c
Speed up ruleutils' name de-duplication code, and fix overlength-name case.
commit : faf18a90506ecf0a4ab1349acfcb899167385649
author : Tom Lane <[email protected]>
date : Mon, 16 Nov 2015 13:45:17 -0500
committer: Tom Lane <[email protected]>
date : Mon, 16 Nov 2015 13:45:17 -0500
Since commit 11e131854f8231a21613f834c40fe9d046926387, ruleutils.c has
attempted to ensure that each RTE in a query or plan tree has a unique
alias name. However, the code that was added for this could be quite slow,
even as bad as O(N^3) if N identical RTE names must be replaced, as noted
by Jeff Janes. Improve matters by building a transient hash table within
set_rtable_names. The hash table in itself reduces the cost of detecting a
duplicate from O(N) to O(1), and we can save another factor of N by storing
the number of de-duplicated names already created for each entry, so that
we don't have to re-try names already created. This way is probably a bit
slower overall for small range tables, but almost by definition, such cases
should not be a performance problem.
In principle the same problem applies to the column-name-de-duplication
code; but in practice that seems to be less of a problem, first because
N is limited since we don't support extremely wide tables, and second
because duplicate column names within an RTE are fairly rare, so that in
practice the cost is more like O(N^2) not O(N^3). It would be very much
messier to fix the column-name code, so for now I've left that alone.
An independent problem in the same area was that the de-duplication code
paid no attention to the identifier length limit, and would happily produce
identifiers that were longer than NAMEDATALEN and wouldn't be unique after
truncation to NAMEDATALEN. This could result in dump/reload failures, or
perhaps even views that silently behaved differently than before. We can
fix that by shortening the base name as needed. Fix it for both the
relation and column name cases.
In passing, check for interrupts in set_rtable_names, just in case it's
still slow enough to be an issue.
Back-patch to 9.3 where this code was introduced.
M src/backend/utils/adt/ruleutils.c
M src/test/regress/expected/create_view.out
M src/test/regress/sql/create_view.sql
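A standalone sketch of the two ideas above: remember how many de-duplicated names have already been minted for each base name (so later requests resume numbering instead of re-trying old candidates), and shorten the base so the "_N" suffix still fits under the identifier length limit. A small linear table stands in for the backend's hash table; all names are illustrative:
    #include <stdio.h>
    #include <string.h>

    #define NAMEDATALEN 64          /* PostgreSQL identifier limit (63 bytes + NUL) */
    #define MAX_ENTRIES 128

    struct seen_name
    {
        char name[NAMEDATALEN];
        int  counter;               /* de-duplicated names already created for this base */
    };

    static struct seen_name seen[MAX_ENTRIES];
    static int nseen = 0;           /* bounds checks omitted for brevity */

    static struct seen_name *
    lookup(const char *name)
    {
        for (int i = 0; i < nseen; i++)
            if (strcmp(seen[i].name, name) == 0)
                return &seen[i];
        return NULL;
    }

    static const char *
    unique_alias(const char *base)
    {
        static char result[NAMEDATALEN];
        struct seen_name *hit = lookup(base);

        if (hit == NULL)
        {
            /* First use of this base name: take it as-is (truncated if needed). */
            snprintf(result, sizeof(result), "%s", base);
            hit = &seen[nseen++];
            snprintf(hit->name, sizeof(hit->name), "%s", result);
            hit->counter = 0;
            return result;
        }

        /* Collision: append "_N", shortening the base so the suffix fits, and
         * resume numbering at the stored counter rather than starting over. */
        do
        {
            char suffix[16];

            snprintf(suffix, sizeof(suffix), "_%d", ++hit->counter);
            snprintf(result, sizeof(result), "%.*s%s",
                     (int) (sizeof(result) - 1 - strlen(suffix)), base, suffix);
        } while (lookup(result) != NULL);

        /* Remember the generated name so later requests see it as taken, too. */
        snprintf(seen[nseen].name, sizeof(seen[nseen].name), "%s", result);
        seen[nseen].counter = 0;
        nseen++;
        return result;
    }

    int
    main(void)
    {
        printf("%s\n", unique_alias("t"));      /* t     */
        printf("%s\n", unique_alias("t"));      /* t_1   */
        printf("%s\n", unique_alias("t"));      /* t_2   */
        printf("%s\n", unique_alias("t_1"));    /* t_1_1 */
        return 0;
    }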
Fix ruleutils.c's dumping of whole-row Vars in ROW() and VALUES() contexts.
commit : 7d0e8720865321fcf124eaac6ef22ef22d13b957
author : Tom Lane <[email protected]>
date : Sun, 15 Nov 2015 14:41:09 -0500
committer: Tom Lane <[email protected]>
date : Sun, 15 Nov 2015 14:41:09 -0500
Normally ruleutils prints a whole-row Var as "foo.*". We already knew that
that doesn't work at top level of a SELECT list, because the parser would
treat the "*" as a directive to expand the reference into separate columns,
not a whole-row Var. However, Joshua Yanovski points out in bug #13776
that the same thing happens at top level of a ROW() construct; and some
nosing around in the parser shows that the same is true in VALUES().
Hence, apply the same workaround already devised for the SELECT-list case,
namely to add a forced cast to the appropriate rowtype in these cases.
(The alternative of just printing "foo" was rejected because it is
difficult to avoid ambiguity against plain columns named "foo".)
Back-patch to all supported branches.
M src/backend/utils/adt/ruleutils.c
M src/test/regress/expected/create_view.out
M src/test/regress/sql/create_view.sql
PL/Python: Make tests pass with Python 3.5
commit : a37ab812c7a2a99917e01f07ac4958712ea02637
author : Peter Eisentraut <[email protected]>
date : Wed, 3 Jun 2015 19:52:08 -0400
committer: Peter Eisentraut <[email protected]>
date : Wed, 3 Jun 2015 19:52:08 -0400
The error message wording for AttributeError has changed in Python 3.5.
For the plpython_error test, add a new expected file. In the
plpython_subtransaction test, we didn't really care what the exception
is, only that it is something coming from Python. So use a generic
exception instead, which has a message that doesn't vary across
versions.
M src/pl/plpython/expected/README
A src/pl/plpython/expected/plpython_error_5.out
M src/pl/plpython/expected/plpython_subtransaction.out
M src/pl/plpython/expected/plpython_subtransaction_0.out
M src/pl/plpython/expected/plpython_subtransaction_5.out
M src/pl/plpython/sql/plpython_subtransaction.sql
pg_upgrade: properly detect file copy failure on Windows
commit : a75efb483649a194d4578404ca294735829ff127
author : Bruce Momjian <[email protected]>
date : Sat, 14 Nov 2015 11:47:11 -0500
committer: Bruce Momjian <[email protected]>
date : Sat, 14 Nov 2015 11:47:11 -0500
Previously, file copy failures were ignored on Windows due to an
incorrect return value check.
Report by Manu Joye
Backpatch through 9.1
M contrib/pg_upgrade/file.c
M contrib/pg_upgrade/pg_upgrade.h
Fix unwanted flushing of libpq's input buffer when socket EOF is seen.
commit : db6e8e1624a8f0357373450136c850f2b6e7fc8a
author : Tom Lane <[email protected]>
date : Thu, 12 Nov 2015 13:03:53 -0500
committer: Tom Lane <[email protected]>
date : Thu, 12 Nov 2015 13:03:53 -0500
In commit 210eb9b743c0645d I centralized libpq's logic for closing down
the backend communication socket, and made the new pqDropConnection
routine always reset the I/O buffers to empty. Many of the call sites
previously had not had such code, and while that amounted to an oversight
in some cases, there was one place where it was intentional and necessary
*not* to flush the input buffer: pqReadData should never cause that to
happen, since we probably still want to process whatever data we read.
This is the true cause of the problem Robert was attempting to fix in
c3e7c24a1d60dc6a, namely that libpq no longer reported the backend's final
ERROR message before reporting "server closed the connection unexpectedly".
But that only accidentally fixed it, by invoking parseInput before the
input buffer got flushed; and very likely there are timing scenarios
where we'd still lose the message before processing it.
To fix, pass a flag to pqDropConnection to tell it whether to flush the
input buffer or not. On review I think flushing is actually correct for
every other call site.
Back-patch to 9.3 where the problem was introduced. In HEAD, also improve
the comments added by c3e7c24a1d60dc6a.
M src/interfaces/libpq/fe-connect.c
M src/interfaces/libpq/fe-misc.c
M src/interfaces/libpq/fe-protocol3.c
M src/interfaces/libpq/libpq-int.h
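A generic sketch of the design choice: a single teardown helper takes a flag saying whether buffered-but-unparsed input should be kept, so the read path can preserve a final message it already received. The types and names are illustrative, not libpq's:
    #include <stdio.h>
    #include <stdbool.h>

    struct conn
    {
        int    sock;                /* -1 once closed */
        char   inbuf[8192];
        size_t inlen;               /* bytes received but not yet parsed */
    };

    static void
    drop_connection(struct conn *c, bool flush_input)
    {
        c->sock = -1;               /* stand-in for closing the real socket */
        if (flush_input)
            c->inlen = 0;           /* discard any pending input */
        /* else: keep inbuf/inlen so the caller can still parse the backend's
         * final message, e.g. an ERROR sent just before the connection dropped */
    }

    int
    main(void)
    {
        struct conn c = {3, "FATAL: terminating connection", 29};

        drop_connection(&c, false);             /* read path: keep pending data */
        printf("still buffered: %.*s\n", (int) c.inlen, c.inbuf);

        drop_connection(&c, true);              /* every other call site: flush */
        printf("after flush: %zu bytes\n", c.inlen);
        return 0;
    }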
Improve our workaround for 'TeX capacity exceeded' in building PDF files.
commit : baa42287a04d4cd2ffa529a07ed7e23421e742ae
author : Tom Lane <[email protected]>
date : Tue, 10 Nov 2015 15:59:59 -0500
committer: Tom Lane <[email protected]>
date : Tue, 10 Nov 2015 15:59:59 -0500
In commit a5ec86a7c787832d28d5e50400ec96a5190f2555 I wrote a quick hack
that reduced the number of TeX string pool entries created while converting
our documentation to PDF form. That held the fort for awhile, but as of
HEAD we're back up against the same limitation. It turns out that the
original coding of \FlowObjectSetup actually results in *three* string pool
entries being generated for every "flow object" (that is, potential
cross-reference target) in the documentation, and my previous hack only got
rid of one of them. With a little more care, we can reduce the string
count to one per flow object plus one per actually-cross-referenced flow
object (about 115000 + 5000 as of current HEAD); that should work until
the documentation volume roughly doubles from where it is today.
As a not-incidental side benefit, this change also causes pdfjadetex to
stop emitting unreferenced hyperlink anchors (bookmarks) into the PDF file.
It had been making one willy-nilly for every flow object; now it's just one
per actually-cross-referenced object. This results in close to a 2X
savings in PDF file size. We will still want to run the output through
"jpdftweak" to get it to be compressed; but we no longer need removal of
unreferenced bookmarks, so we might be able to find a quicker tool for
that step.
Although the failure only affects HEAD and US-format output at the moment,
9.5 cannot be more than a few pages short of failing likewise, so it
will inevitably fail after a few rounds of minor-version release notes.
I don't have a lot of faith that we'll never hit the limit in the older
branches; and anyway it would be nice to get rid of jpdftweak across the
board. Therefore, back-patch to all supported branches.
M doc/src/sgml/jadetex.cfg
Don't connect() to a wildcard address in test_postmaster_connection().
commit : 34725292d92d1ba643f46957803b45c3554b5d42
author : Noah Misch <[email protected]>
date : Sun, 8 Nov 2015 17:28:53 -0500
committer: Noah Misch <[email protected]>
date : Sun, 8 Nov 2015 17:28:53 -0500
At least OpenBSD, NetBSD, and Windows don't support it. This repairs
pg_ctl for listen_addresses='0.0.0.0' and listen_addresses='::'. Since
pg_ctl prefers to test a Unix-domain socket, Windows users are most
likely to need this change. Back-patch to 9.1 (all supported versions).
This could change pg_ctl interaction with loopback-interface firewall
rules. Therefore, in 9.4 and earlier (released branches), activate the
change only on known-affected platforms.
Reported (bug #13611) and designed by Kondo Yuta.
M src/bin/pg_ctl/pg_ctl.c
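A standalone sketch (not pg_ctl's code) of the substitution: if the configured listen address is a wildcard, probe a loopback address instead, since connect() to "0.0.0.0" or "::" is not portable:
    #include <stdio.h>
    #include <string.h>

    static const char *
    address_to_probe(const char *listen_addr)
    {
        if (strcmp(listen_addr, "0.0.0.0") == 0 || strcmp(listen_addr, "*") == 0)
            return "127.0.0.1";
        if (strcmp(listen_addr, "::") == 0)
            return "::1";
        return listen_addr;
    }

    int
    main(void)
    {
        const char *configured[] = {"0.0.0.0", "::", "192.0.2.10", "*"};

        for (size_t i = 0; i < sizeof(configured) / sizeof(configured[0]); i++)
            printf("listen_addresses=%-10s -> probe %s\n",
                   configured[i], address_to_probe(configured[i]));
        return 0;
    }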
Fix enforcement of restrictions inside regexp lookaround constraints.
commit : 8db652359c1739d4fc216d7f780c149671ed83c5
author : Tom Lane <[email protected]>
date : Sat, 7 Nov 2015 12:43:24 -0500
committer: Tom Lane <[email protected]>
date : Sat, 7 Nov 2015 12:43:24 -0500
Lookahead and lookbehind constraints aren't allowed to contain backrefs,
and parentheses within them are always considered non-capturing. Or so
says the manual. But the regexp parser forgot about these rules once
inside a parenthesized subexpression, so that constructs like (\w)(?=(\1))
were accepted (but then not correctly executed --- a case like this acted
like (\w)(?=\w), without any enforcement that the two \w's match the same
text). And in (?=((foo))) the innermost parentheses would be counted as
capturing parentheses, though no text would ever be captured for them.
To fix, properly pass down the "type" argument to the recursive invocation
of parse().
Back-patch to all supported branches; it was agreed that silent
misexecution of such patterns is worse than throwing an error, even though
new errors in minor releases are generally not desirable.
M src/backend/regex/regcomp.c
M src/test/regress/expected/regex.out
M src/test/regress/sql/regex.sql
Add RMV to list of commands taking AE lock.
commit : 909a83df561246538e97c582521243025b78c161
author : Kevin Grittner <[email protected]>
date : Mon, 2 Nov 2015 06:26:49 -0600
committer: Kevin Grittner <[email protected]>
date : Mon, 2 Nov 2015 06:26:49 -0600
Backpatch to 9.3, where it was initially omitted.
Craig Ringer, with minor adjustment by Kevin Grittner
M doc/src/sgml/mvcc.sgml
Fix serialization anomalies due to race conditions on INSERT.
commit : 18479293cbc531209dbdf68f94cd7f86b5290a80
author : Kevin Grittner <[email protected]>
date : Sat, 31 Oct 2015 14:36:25 -0500
committer: Kevin Grittner <[email protected]>
date : Sat, 31 Oct 2015 14:36:25 -0500
On insert the CheckForSerializableConflictIn() test was performed
before the page(s) which were going to be modified had been locked
(with an exclusive buffer content lock). If another process
acquired a relation SIReadLock on the heap and scanned to a page on
which an insert was going to occur before the page was so locked,
a rw-conflict would be missed, which could allow a serialization
anomaly to be missed. The window between the check and the page
lock was small, so the bug was generally not noticed unless there
was high concurrency with multiple processes inserting into the
same table.
This was reported by Peter Bailis as bug #11732, by Sean Chittenden
as bug #13667, and by others.
The race condition was eliminated in heap_insert() by moving the
check down below the acquisition of the buffer lock, which had been
the very next statement. Because of the loop locking and unlocking
multiple buffers in heap_multi_insert() a check was added after all
inserts were completed. The check before the start of the inserts
was left because it might avoid a large amount of work to detect a
serialization anomaly before performing all of the inserts and
the related WAL logging.
While investigating this bug, other SSI bugs which were even harder
to hit in practice were noticed and fixed, an unnecessary check
(covered by another check, so redundant) was removed from
heap_update(), and comments were improved.
Back-patch to all supported branches.
Kevin Grittner and Thomas Munro
M src/backend/access/heap/heapam.c
M src/backend/storage/lmgr/predicate.c
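The shape of the fix is a classic check-after-lock reordering. A standalone sketch using a pthread mutex as a stand-in for the exclusive buffer content lock (this is not the SSI code itself, just the ordering it restores):
    #include <pthread.h>
    #include <stdio.h>

    static pthread_mutex_t page_lock = PTHREAD_MUTEX_INITIALIZER;
    static int page_rows = 0;

    /* Stand-in for CheckForSerializableConflictIn(): it must run while the
     * target page is locked, or a concurrent reader can slip in between the
     * check and the insert and the rw-conflict is missed. */
    static void
    check_for_conflict_in(void)
    {
        printf("checking for rw-conflicts with %d rows on the page\n", page_rows);
    }

    static void
    insert_row(void)
    {
        pthread_mutex_lock(&page_lock);     /* exclusive-lock the target page first */
        check_for_conflict_in();            /* ...then check, with the window closed */
        page_rows++;                        /* ...then modify the page */
        pthread_mutex_unlock(&page_lock);
    }

    int
    main(void)
    {
        insert_row();
        insert_row();
        return 0;
    }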
Fix incorrect message in ATWrongRelkindError.
commit : 71b1a0ab5f6b2f0fe87b4cb0f610508786731b6b
author : Robert Haas <[email protected]>
date : Wed, 28 Oct 2015 11:44:47 +0100
committer: Robert Haas <[email protected]>
date : Wed, 28 Oct 2015 11:44:47 +0100
Mistake introduced by commit 3bf3ab8c563699138be02f9dc305b7b77a724307.
Etsuro Fujita
M src/backend/commands/tablecmds.c
Fix back-patch of commit 8e3b4d9d40244c037bbc6e182ea3fabb9347d482.
commit : 7cecabca8fdc0ced666369a37b45df3182886230
author : Noah Misch <[email protected]>
date : Tue, 20 Oct 2015 00:57:25 -0400
committer: Noah Misch <[email protected]>
date : Tue, 20 Oct 2015 00:57:25 -0400
master emits an extra context message compared to 9.5 and earlier.
M src/test/regress/expected/plpgsql.out
Eschew "RESET statement_timeout" in tests.
commit : 0c7390d6240693eae1e357dc119b3a26c2836bb0
author : Noah Misch <[email protected]>
date : Tue, 20 Oct 2015 00:37:22 -0400
committer: Noah Misch <[email protected]>
date : Tue, 20 Oct 2015 00:37:22 -0400
Instead, use transaction abort. Given an unlucky bout of latency, the
timeout would cancel the RESET itself. Buildfarm members gharial,
lapwing, mereswine, shearwater, and sungazer witness that. Back-patch
to 9.1 (all supported versions). The query_canceled test still could
time out before entering its subtransaction; for whatever reason, that
has yet to happen on the buildfarm.
M src/test/regress/expected/plpgsql.out
M src/test/regress/expected/prepared_xacts.out
M src/test/regress/expected/prepared_xacts_1.out
M src/test/regress/sql/plpgsql.sql
M src/test/regress/sql/prepared_xacts.sql
Fix incorrect handling of lookahead constraints in pg_regprefix().
commit : e69d4756ef377f388c88ed8d4faad736751197cc
author : Tom Lane <[email protected]>
date : Mon, 19 Oct 2015 13:54:54 -0700
committer: Tom Lane <[email protected]>
date : Mon, 19 Oct 2015 13:54:54 -0700
pg_regprefix was doing nothing with lookahead constraints, which would
be fine if it were the right kind of nothing, but it isn't: we have to
terminate our search for a fixed prefix, not just pretend the LACON arc
isn't there. Otherwise, if the current state has both a LACON outarc and a
single plain-color outarc, we'd falsely conclude that the color represents
an addition to the fixed prefix, and generate an extracted index condition
that restricts the indexscan too much. (See added regression test case.)
Terminating the search is conservative: we could traverse the LACON arc
(thus assuming that the constraint can be satisfied at runtime) and then
examine the outarcs of the linked-to state. But that would be a lot more
work than it seems worth, because writing a LACON followed by a single
plain character is a pretty silly thing to do.
This makes a difference only in rather contrived cases, but it's a bug,
so back-patch to all supported branches.
M src/backend/regex/regprefix.c
M src/test/regress/expected/regex.out
M src/test/regress/sql/regex.sql
Fix order of arguments in ecpg generated typedef command.
commit : defd2ecf4fbeb07d738630793bb1ba1df286c0d9
author : Michael Meskes <[email protected]>
date : Fri, 16 Oct 2015 17:29:05 +0200
committer: Michael Meskes <[email protected]>
date : Fri, 16 Oct 2015 17:29:05 +0200
M src/interfaces/ecpg/preproc/ecpg.trailer
Miscellaneous cleanup of regular-expression compiler.
commit : 4528f9d69dd2030c8a02218f7e483264627e9ef4
author : Tom Lane <[email protected]>
date : Fri, 16 Oct 2015 15:52:12 -0400
committer: Tom Lane <[email protected]>
date : Fri, 16 Oct 2015 15:52:12 -0400
Revert our previous addition of "all" flags to copyins() and copyouts();
they're no longer needed, and were never anything but an unsightly hack.
Improve a couple of infelicities in the REG_DEBUG code for dumping
the NFA data structure, including adding code to count the total
number of states and arcs.
Add a couple of missed error checks.
Add some more documentation in the README file, and some regression tests
illustrating cases that exceeded the state-count limit and/or took
unreasonable amounts of time before this set of patches.
Back-patch to all supported branches.
M src/backend/regex/README
M src/backend/regex/regc_nfa.c
M src/backend/regex/regcomp.c
M src/test/regress/expected/regex.out
M src/test/regress/sql/regex.sql
Improve memory-usage accounting in regular-expression compiler.
commit : ad5e5a62aff7e8fa9bc97ec83cfd015606026fcb
author : Tom Lane <[email protected]>
date : Fri, 16 Oct 2015 15:36:17 -0400
committer: Tom Lane <[email protected]>
date : Fri, 16 Oct 2015 15:36:17 -0400
This code previously counted the number of NFA states it created, and
complained if a limit was exceeded, so as to prevent bizarre regex patterns
from consuming unreasonable time or memory. That's fine as far as it went,
but the code paid no attention to how many arcs linked those states. Since
regexes can be contrived that have O(N) states but will need O(N^2) arcs
after fixempties() processing, it was still possible to blow out memory,
and take a long time doing it too. To fix, modify the bookkeeping to count
space used by both states and arcs.
I did not bother with including the "color map" in the accounting; it
can only grow to a few megabytes, which is not a lot in comparison to
what we're allowing for states+arcs (about 150MB on 64-bit machines
or half that on 32-bit machines).
Looking at some of the larger real-world regexes captured in the Tcl
regression test suite suggests that the most that is likely to be needed
for regexes found in the wild is under 10MB, so I believe that the current
limit has enough headroom to make it okay to keep it as a hard-wired limit.
In connection with this, redefine REG_ETOOBIG as meaning "regular
expression is too complex"; the previous wording of "nfa has too many
states" was already somewhat inapropos because of the error code's use
for stack depth overrun, and it was not very user-friendly either.
Back-patch to all supported branches.
M src/backend/regex/regc_nfa.c
M src/backend/regex/regcomp.c
M src/include/regex/regerrs.h
M src/include/regex/regex.h
M src/include/regex/regguts.h
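A standalone sketch of the bookkeeping idea: charge every new state and every new arc against one shared byte budget and refuse to grow once it is exhausted. The struct sizes and the budget type are illustrative; the 150MB figure is the one quoted above:
    #include <stdio.h>

    #define COMPILE_LIMIT_BYTES (150u * 1024 * 1024)    /* ~150MB on 64-bit builds */

    struct nfa_budget
    {
        size_t bytes_used;
    };

    struct state { int id; };
    struct arc   { int from, to, color; };

    /* Charge an allocation against the budget; a non-zero result means the
     * caller should fail compilation ("regular expression is too complex"). */
    static int
    charge(struct nfa_budget *b, size_t nbytes)
    {
        if (b->bytes_used + nbytes > COMPILE_LIMIT_BYTES)
            return -1;
        b->bytes_used += nbytes;
        return 0;
    }

    int
    main(void)
    {
        struct nfa_budget budget = {0};

        /* every new state or arc is charged before it is allocated */
        if (charge(&budget, sizeof(struct state)) == 0)
            printf("state allocated, %zu bytes used\n", budget.bytes_used);
        if (charge(&budget, sizeof(struct arc)) == 0)
            printf("arc allocated,   %zu bytes used\n", budget.bytes_used);
        return 0;
    }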
Improve performance of pullback/pushfwd in regular-expression compiler.
commit : 2a8d6e4d06d6ea2e1323aeab79bbb867d0f816c5
author : Tom Lane <[email protected]>
date : Fri, 16 Oct 2015 15:11:49 -0400
committer: Tom Lane <[email protected]>
date : Fri, 16 Oct 2015 15:11:49 -0400
The previous coding would create a new intermediate state every time it
wanted to interchange the ordering of two constraint arcs. Certain regex
features such as \Y can generate large numbers of parallel constraint arcs,
and if we needed to reorder the results of that, we created unreasonable
numbers of intermediate states. To improve matters, keep a list of
already-created intermediate states associated with the state currently
being considered by the outer loop; we can re-use such states to place all
the new arcs leading to the same destination or source.
I also took the trouble to redefine push() and pull() to have a less risky
API: they no longer delete any state or arc that the caller might possibly
have a pointer to, except for the specifically-passed constraint arc.
This reduces the risk of re-introducing the same type of error seen in
the failed patch for CVE-2007-4772.
Back-patch to all supported branches.
M src/backend/regex/regc_nfa.c
M src/backend/regex/regcomp.c
Improve performance of fixempties() pass in regular-expression compiler.
commit : 677e64cb8042ff6002069f1b587d933f4cdee925
author : Tom Lane <[email protected]>
date : Fri, 16 Oct 2015 14:58:11 -0400
committer: Tom Lane <[email protected]>
date : Fri, 16 Oct 2015 14:58:11 -0400
The previous coding took something like O(N^4) time to fully process a
chain of N EMPTY arcs. We can't really do much better than O(N^2) because
we have to insert about that many arcs, but we can do lots better than
what's there now. The win comes partly from using mergeins() to amortize
de-duplication of arcs across multiple source states, and partly from
exploiting knowledge of the ordering of arcs for each state to avoid
looking at arcs we don't need to consider during the scan. We do have
to be a bit careful of the possible reordering of arcs introduced by
the sort-merge coding of the previous commit, but that's not hard to
deal with.
Back-patch to all supported branches.
M src/backend/regex/regc_nfa.c
M src/backend/regex/regcomp.c
Fix O(N^2) performance problems in regular-expression compiler.
commit : 296241635bc678f6766040e0c983e95d19b7ddf4
author : Tom Lane <[email protected]>
date : Fri, 16 Oct 2015 14:43:18 -0400
committer: Tom Lane <[email protected]>
date : Fri, 16 Oct 2015 14:43:18 -0400
Change the singly-linked in-arc and out-arc lists to be doubly-linked,
so that arc deletion is constant time rather than having worst-case time
proportional to the number of other arcs on the connected states.
Modify the bulk arc transfer operations copyins(), copyouts(), moveins(),
moveouts() so that they use a sort-and-merge algorithm whenever there's
more than a small number of arcs to be copied or moved. The previous
method is O(N^2) in the number of arcs involved, because it performs
duplicate checking independently for each copied arc. The new method may
change the ordering of existing arcs for the destination state, but nothing
really cares about that.
Provide another bulk arc copying method mergeins(), which is unused as
of this commit but is needed for the next one. It basically is like
copyins(), but the source arcs might not all come from the same state.
Replace the O(N^2) bubble-sort algorithm used in carcsort() with a qsort()
call.
These changes greatly improve the performance of regex compilation for
large or complex regexes, at the cost of extra space for arc storage during
compilation. The original tradeoff was probably fine when it was made, but
now we care more about speed and less about memory consumption.
Back-patch to all supported branches.
M src/backend/regex/regc_nfa.c
M src/backend/regex/regcomp.c
M src/include/regex/regguts.h
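A standalone sketch of why the doubly-linked representation makes arc deletion constant time: the arc carries its predecessor, so unlinking never scans the source state's out-arc list. Names are illustrative, not the regex engine's:
    #include <stdio.h>
    #include <stdlib.h>

    struct arc
    {
        int         to;
        struct arc *prev;
        struct arc *next;
    };

    struct state
    {
        struct arc *outs;           /* head of the doubly-linked out-arc list */
    };

    static struct arc *
    add_arc(struct state *s, int to)
    {
        struct arc *a = calloc(1, sizeof(*a));

        if (a == NULL)
            exit(1);
        a->to = to;
        a->next = s->outs;
        if (s->outs)
            s->outs->prev = a;
        s->outs = a;
        return a;
    }

    static void
    free_arc(struct state *s, struct arc *a)
    {
        /* O(1): no traversal needed to find the predecessor */
        if (a->prev)
            a->prev->next = a->next;
        else
            s->outs = a->next;
        if (a->next)
            a->next->prev = a->prev;
        free(a);
    }

    int
    main(void)
    {
        struct state s = {NULL};
        struct arc *a1 = add_arc(&s, 1);
        struct arc *a2 = add_arc(&s, 2);

        free_arc(&s, a1);           /* constant-time deletion, wherever the arc sits */
        printf("remaining arc leads to state %d\n", s.outs->to);
        free_arc(&s, a2);
        return 0;
    }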
Fix regular-expression compiler to handle loops of constraint arcs.
commit : 6e4dda79633c978e73b2bad7ba25db0661d9204e
author : Tom Lane <[email protected]>
date : Fri, 16 Oct 2015 14:14:41 -0400
committer: Tom Lane <[email protected]>
date : Fri, 16 Oct 2015 14:14:41 -0400
It's possible to construct regular expressions that contain loops of
constraint arcs (that is, ^ $ AHEAD BEHIND or LACON arcs). There's no use
in fully traversing such a loop at execution, since you'd just end up in
the same NFA state without having consumed any input. Worse, such a loop
leads to infinite looping in the pullback/pushfwd stage of compilation,
because we keep pushing or pulling the same constraints around the loop
in a vain attempt to move them to the pre or post state. Such looping was
previously recognized in CVE-2007-4772; but the fix only handled the case
of trivial single-state loops (that is, a constraint arc leading back to
its source state) ... and not only that, it was incorrect even for that
case, because it broke the admittedly-not-very-clearly-stated API contract
of the pull() and push() subroutines. The first two regression test cases
added by this commit exhibit patterns that result in assertion failures
because of that (though there seem to be no ill effects in non-assert
builds). The other new test cases exhibit multi-state constraint loops;
in an unpatched build they will run until the NFA state-count limit is
exceeded.
To fix, remove the code added for CVE-2007-4772, and instead create a
general-purpose constraint-loop-breaking phase of regex compilation that
executes before we do pullback/pushfwd. Since we never need to traverse
a constraint loop fully, we can just break the loop at any chosen spot,
if we add clone states that can replicate any sequence of arc transitions
that would've traversed just part of the loop.
Also add some commentary clarifying why we have to have all these
machinations in the first place.
This class of problems has been known for some time --- we had a report
from Marc Mamin about two years ago, for example, and there are related
complaints in the Tcl bug tracker. I had discussed a fix of this kind
off-list with Henry Spencer, but didn't get around to doing something
about it until the issue was rediscovered by Greg Stark recently.
Back-patch to all supported branches.
M src/backend/regex/regc_nfa.c
M src/backend/regex/regcomp.c
M src/test/regress/expected/regex.out
M src/test/regress/sql/regex.sql
On Windows, ensure shared memory handle gets closed if not being used.
commit : bc6b03bb810635ce42d474af0dd2c3a9ac2fe078
author : Tom Lane <[email protected]>
date : Tue, 13 Oct 2015 11:21:33 -0400
committer: Tom Lane <[email protected]>
date : Tue, 13 Oct 2015 11:21:33 -0400
Postmaster child processes that aren't supposed to be attached to shared
memory were not bothering to close the shared memory mapping handle they
inherit from the postmaster process. That's mostly harmless, since the
handle vanishes anyway when the child process exits -- but the syslogger
process, if used, doesn't get killed and restarted during recovery from a
backend crash. That meant that Windows doesn't see the shared memory
mapping as becoming free, so it doesn't delete it and the postmaster is
unable to create a new one, resulting in failure to recover from crashes
whenever logging_collector is turned on.
Per report from Dmitry Vasilyev. It's a bit astonishing that we'd not
figured this out long ago, since it's been broken from the very beginnings
of our native Windows support; probably some previously-unexplained trouble
reports trace to this.
A secondary problem is that on Cygwin (perhaps only in older versions?),
exec() may not detach from the shared memory segment after all, in which
case these child processes did remain attached to shared memory, posing
the risk of an unexpected shared memory clobber if they went off the rails
somehow. That may be a long-gone bug, but we can deal with it now if it's
still live, by detaching within the infrastructure introduced here to deal
with closing the handle.
Back-patch to all supported branches.
Tom Lane and Amit Kapila
M src/backend/port/sysv_shmem.c
M src/backend/port/win32_shmem.c
M src/backend/postmaster/postmaster.c
M src/include/storage/pg_shmem.h
Fix "pg_ctl start -w" to test child process status directly.
commit : dfe572de00a7b4c783daa6313cd1875a3e6c4c9b
author : Tom Lane <[email protected]>
date : Mon, 12 Oct 2015 18:30:37 -0400
committer: Tom Lane <[email protected]>
date : Mon, 12 Oct 2015 18:30:37 -0400
pg_ctl start with -w previously relied on a heuristic that the postmaster
would surely always manage to create postmaster.pid within five seconds.
Unfortunately, that fails much more often than we would like on some of the
slower, more heavily loaded buildfarm members.
We have known for quite some time that we could remove the need for that
heuristic on Unix by using fork/exec instead of system() to launch the
postmaster. This allows us to know the exact PID of the postmaster, which
allows near-certain verification that the postmaster.pid file is the one
we want and not a leftover, and it also lets us use waitpid() to detect
reliably whether the child postmaster has exited or not.
What was blocking this change was not wanting to rewrite the Windows
version of start_postmaster() to avoid use of CMD.EXE. That's doable
in theory but would require fooling about with stdout/stderr redirection,
and getting the handling of quote-containing postmaster switches to
stay the same might be rather ticklish. However, we realized that
we don't have to do that to fix the problem, because we can test
whether the shell process has exited as a proxy for whether the
postmaster is still alive. That doesn't allow an exact check of the
PID in postmaster.pid, but we're no worse off than before in that
respect; and we do get to get rid of the heuristic about how long the
postmaster might take to create postmaster.pid.
On Unix, this change means that a second "pg_ctl start -w" immediately
after another such command will now reliably fail, whereas previously
it would succeed if done within two seconds of the earlier command.
Since that's a saner behavior anyway, it's fine. On Windows, the case can
still succeed within the same time window, since pg_ctl can't tell that the
earlier postmaster's postmaster.pid isn't the pidfile it is looking for.
To ensure stable test results on Windows, we can insert a short sleep into
the test script for pg_ctl, ensuring that the existing pidfile looks stale.
This hack can be removed if we ever do rewrite start_postmaster(), but that
no longer seems like a high-priority thing to do.
Back-patch to all supported versions, both because the current behavior
is buggy and because we must do that if we want the buildfarm failures
to go away.
Tom Lane and Michael Paquier
M src/bin/pg_ctl/pg_ctl.c
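The Unix half of the technique boils down to polling the child with waitpid(..., WNOHANG) while waiting for its side effect (for pg_ctl, the postmaster.pid file), so an early death is reported immediately instead of after a timeout. A standalone sketch; the child command and the fixed wait loop are illustrative:
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int
    main(void)
    {
        pid_t child = fork();

        if (child < 0)
            return 1;
        if (child == 0)
        {
            execlp("sleep", "sleep", "2", (char *) NULL);   /* stand-in for the postmaster */
            _exit(127);
        }

        for (int i = 0; i < 10; i++)
        {
            int   status;
            pid_t done = waitpid(child, &status, WNOHANG);

            if (done == child)
            {
                printf("child exited early with status %d\n", WEXITSTATUS(status));
                return 1;       /* report startup failure right away */
            }
            printf("child still running, waiting for it to become ready...\n");
            sleep(1);
        }
        printf("gave up waiting; child is still running\n");
        return 0;
    }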
Factor out encoding specific tests for json
commit : f844f0ef18707f370f0060b7925c6371298e9242
author : Andrew Dunstan <[email protected]>
date : Wed, 7 Oct 2015 17:41:45 -0400
committer: Andrew Dunstan <[email protected]>
date : Wed, 7 Oct 2015 17:41:45 -0400
This lets us remove the large alternative results files for the main
json and jsonb tests, which makes modifying those tests simpler for
committers and patch submitters.
Backpatch to 9.4 for jsonb and 9.3 for json.
M src/test/regress/expected/json.out
D src/test/regress/expected/json_1.out
A src/test/regress/expected/json_encoding.out
A src/test/regress/expected/json_encoding_1.out
M src/test/regress/parallel_schedule
M src/test/regress/serial_schedule
M src/test/regress/sql/json.sql
A src/test/regress/sql/json_encoding.sql
Improve documentation of the role-dropping process.
commit : 32e53593f95870dc5dc455caa5ca8ad3f36ef271
author : Tom Lane <[email protected]>
date : Wed, 7 Oct 2015 16:12:06 -0400
committer: Tom Lane <[email protected]>
date : Wed, 7 Oct 2015 16:12:06 -0400
In general one may have to run both REASSIGN OWNED and DROP OWNED to get
rid of all the dependencies of a role to be dropped. This was alluded to
in the REASSIGN OWNED man page, but not really spelled out in full; and in
any case the procedure ought to be documented in a more prominent place
than that. Add a section to the "Database Roles" chapter explaining this,
and do a bit of wordsmithing in the relevant commands' man pages.
M doc/src/sgml/ref/drop_owned.sgml
M doc/src/sgml/ref/drop_role.sgml
M doc/src/sgml/ref/drop_user.sgml
M doc/src/sgml/ref/reassign_owned.sgml
M doc/src/sgml/user-manag.sgml
Perform an immediate shutdown if the postmaster.pid file is removed.
commit : 31bc563b9be306623c5e9a52816b432945fa6df9
author : Tom Lane <[email protected]>
date : Tue, 6 Oct 2015 17:15:27 -0400
committer: Tom Lane <[email protected]>
date : Tue, 6 Oct 2015 17:15:27 -0400
The postmaster now checks every minute or so (worst case, at most two
minutes) that postmaster.pid is still there and still contains its own PID.
If not, it performs an immediate shutdown, as though it had received
SIGQUIT.
The original goal behind this change was to ensure that failed buildfarm
runs would get fully cleaned up, even if the test scripts had left a
postmaster running, which is not an infrequent occurrence. When the
buildfarm script removes a test postmaster's $PGDATA directory, its next
check on postmaster.pid will fail and cause it to exit. Previously, manual
intervention was often needed to get rid of such orphaned postmasters,
since they'd block new test postmasters from obtaining the expected socket
address.
However, by checking postmaster.pid and not something else, we can provide
additional robustness: manual removal of postmaster.pid is a frequent DBA
mistake, and now we can at least limit the damage that will ensue if a new
postmaster is started while the old one is still alive.
Back-patch to all supported branches, since we won't get the desired
improvement in buildfarm reliability otherwise.
M src/backend/postmaster/postmaster.c
M src/backend/utils/init/miscinit.c
M src/include/miscadmin.h
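A standalone sketch of the periodic self-check: read the pidfile's first line, compare it with our own PID, and treat a missing or foreign pidfile as a signal to shut down at once (the postmaster reacts as if it had received SIGQUIT). The file name and the reaction here are illustrative:
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    static int
    pidfile_still_ours(const char *path)
    {
        char  line[32];
        FILE *f = fopen(path, "r");

        if (f == NULL)
            return 0;               /* file removed out from under us */
        if (fgets(line, sizeof(line), f) == NULL)
        {
            fclose(f);
            return 0;
        }
        fclose(f);
        return (pid_t) strtol(line, NULL, 10) == getpid();
    }

    int
    main(void)
    {
        const char *pidfile = "postmaster.pid";     /* illustrative path */

        for (;;)
        {
            if (!pidfile_still_ours(pidfile))
            {
                fprintf(stderr, "pidfile missing or not ours, shutting down\n");
                exit(1);            /* stand-in for an immediate, SIGQUIT-style shutdown */
            }
            sleep(60);              /* the real check runs roughly once a minute */
        }
    }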