commit : 4f0bf3359faee12317808634024627a824d658f5 author : Tom Lane <email@example.com> date : Mon, 5 Nov 2018 16:51:23 -0500 committer: Tom Lane <firstname.lastname@example.org> date : Mon, 5 Nov 2018 16:51:23 -0500
Fix copy-paste error in errhint() introduced in 691d79a07933.
commit : b7301e3a7b6362d550727deffb0ebd06363efdba author : Andres Freund <email@example.com> date : Mon, 5 Nov 2018 12:02:25 -0800 committer: Andres Freund <firstname.lastname@example.org> date : Mon, 5 Nov 2018 12:02:25 -0800
Reported-By: Petr Jelinek Discussion: https://email@example.com Backpatch: 9.4-, like the previous commit
commit : 92154ef47730fad0528c48305df8726db1653059 author : Peter Eisentraut <firstname.lastname@example.org> date : Mon, 5 Nov 2018 15:12:15 +0100 committer: Peter Eisentraut <email@example.com> date : Mon, 5 Nov 2018 15:12:15 +0100
Source-Git-URL: https://git.postgresql.org/git/pgtranslation/messages.git Source-Git-Hash: 23063751d2d17da76d34ddfdead3f633041a6cbe
Release notes for 11.1, 10.6, 9.6.11, 9.5.15, 9.4.20, 9.3.25.
commit : 8bc709b37411ba7ad0fd0f1f79c354714424af3d author : Tom Lane <firstname.lastname@example.org> date : Sun, 4 Nov 2018 16:57:15 -0500 committer: Tom Lane <email@example.com> date : Sun, 4 Nov 2018 16:57:15 -0500
Make ts_locale.c's character-type functions cope with UTF-16.
commit : 0ae902e39ed8e20abce8b6db2daec7f2abbadb5b author : Tom Lane <firstname.lastname@example.org> date : Sat, 3 Nov 2018 13:56:10 -0400 committer: Tom Lane <email@example.com> date : Sat, 3 Nov 2018 13:56:10 -0400
On Windows, in UTF8 database encoding, what char2wchar() produces is UTF-16, not UTF-32; that is, characters above U+FFFF are represented by surrogate pairs. t_isdigit() and siblings did not account for this and failed to provide a large enough result buffer. That in turn led to bogus "invalid multibyte character for locale" errors, because contrary to what you might think from char2wchar()'s documentation, its Windows code path doesn't cope sanely with buffer overflow.

The solution for t_isdigit() and siblings is clear: provide a 3-wchar_t result buffer, not 2. char2wchar() also needs some work to provide more consistent, and more accurately documented, buffer-overrun behavior, but that's a bigger job with no immediate payoff, so leave it for later.

Per bug #15476 from Kenji Uno, who deserves credit for identifying the cause of the problem. Back-patch to all active branches.

Discussion: https://firstname.lastname@example.org
Yet further rethinking of build changes for macOS Mojave.
commit : 1b5e8b408eb9014d0779515705a601debd22ba56 author : Tom Lane <email@example.com> date : Fri, 2 Nov 2018 18:54:00 -0400 committer: Tom Lane <firstname.lastname@example.org> date : Fri, 2 Nov 2018 18:54:00 -0400
The solution arrived at in commit e74dd00f5 presumes that the compiler has a suitable default -isysroot setting; but further experience shows that in many combinations of macOS version, Xcode version, Xcode command line tools version, and phase of the moon, Apple's compiler will *not* supply a default -isysroot value. We could potentially go back to the approach used in commit 68fc227dd, but I don't have a lot of faith in the reliability or life expectancy of that either. Let's just revert to the approach already shipped in 11.0, namely specifying an -isysroot switch globally.

As a partial response to the concerns raised by Jakob Egger, adjust the contents of Makefile.global to look like

    CPPFLAGS = -isysroot $(PG_SYSROOT) ...
    PG_SYSROOT = /path/to/sysroot

This allows overriding the sysroot path at build time in a relatively painless way.

Add documentation to installation.sgml about how to use the PG_SYSROOT option. I also took the opportunity to document how to work around macOS's "System Integrity Protection" feature.

As before, back-patch to all supported versions.

Discussion: https://email@example.com
docs: adjust simpler language for NULL return from ANY/ALL
commit : 72b1af078311d6f56ef8232e9fe96ee57dfb2d9a author : Bruce Momjian <firstname.lastname@example.org> date : Fri, 2 Nov 2018 13:05:30 -0400 committer: Bruce Momjian <email@example.com> date : Fri, 2 Nov 2018 13:05:30 -0400
Adjustment to commit 8610c973ddf1cbf0befc1369d2cf0d56c0efcd0a. Reported-by: Tom Lane Discussion: https://firstname.lastname@example.org Backpatch-through: 9.3
GUC: adjust effective_cache_size docs and SQL description
commit : 060aff97b4979f02229125c00d725efe6a8ec1aa author : Bruce Momjian <email@example.com> date : Fri, 2 Nov 2018 09:10:59 -0400 committer: Bruce Momjian <firstname.lastname@example.org> date : Fri, 2 Nov 2018 09:10:59 -0400
Clarify that effective_cache_size is both kernel buffers and shared buffers. Reported-by: email@example.com Discussion: https://firstname.lastname@example.org Backpatch-through: 9.3
Fix some spelling errors in the documentation
commit : 1ac334b165bb55cfa9441218713ead7b8421f9ca author : Magnus Hagander <email@example.com> date : Fri, 2 Nov 2018 13:55:57 +0100 committer: Magnus Hagander <firstname.lastname@example.org> date : Fri, 2 Nov 2018 13:55:57 +0100
Author: Daniel Gustafsson <email@example.com>
doc: use simpler language for NULL return from ANY/ALL
commit : 273166af8f73670a3c7efdfb9d45893d5e011dba author : Bruce Momjian <firstname.lastname@example.org> date : Fri, 2 Nov 2018 08:54:33 -0400 committer: Bruce Momjian <email@example.com> date : Fri, 2 Nov 2018 08:54:33 -0400
Previously the combination of "does not return" and "any row" caused ambiguity. Reported-by: KES <firstname.lastname@example.org> Discussion: https://email@example.com Reviewed-by: David G. Johnston Backpatch-through: 9.3
Fix error message typo introduced in 691d79a07933.
commit : b0fa768c61b329b72e882b7923eccb249c26d74c author : Andres Freund <firstname.lastname@example.org> date : Thu, 1 Nov 2018 10:43:54 -0700 committer: Andres Freund <email@example.com> date : Thu, 1 Nov 2018 10:43:54 -0700
Reported-By: Michael Paquier Discussion: https://postgr.es/m/20181101003405.GB1727@paquier.xyz Backpatch: 9.4-, like the previous commit
Disallow starting server with insufficient wal_level for existing slot.
commit : cf358a2c066ccb8fc9e82dc59358130aa61075ab author : Andres Freund <firstname.lastname@example.org> date : Wed, 31 Oct 2018 14:47:41 -0700 committer: Andres Freund <email@example.com> date : Wed, 31 Oct 2018 14:47:41 -0700
Previously it was possible to create a slot, change wal_level, and restart, even if the new wal_level was insufficient for the slot. That's a problem for both logical and physical slots, because the necessary WAL records are not generated.

This also removes a few tests in newer versions that, somewhat inexplicably, checked whether restarting with a too-low wal_level worked (a buggy behaviour!).

Reported-By: Joshua D. Drake
Author: Andres Freund
Discussion: https://firstname.lastname@example.org
Backpatch: 9.4-, where replication slots were introduced
Fix memory leak in repeated SPGIST index scans.
commit : 95015b1f8e8eef895ddbfe38697c6da5754ba1d4 author : Tom Lane <email@example.com> date : Wed, 31 Oct 2018 17:04:43 -0400 committer: Tom Lane <firstname.lastname@example.org> date : Wed, 31 Oct 2018 17:04:43 -0400
spgendscan neglected to pfree all the memory allocated by spgbeginscan. It's possible to get away with that in most normal queries, since the memory is allocated in the executor's per-query context, which is about to get deleted anyway; but it causes severe memory leakage during creation or filling of large exclusion-constraint indexes.

Also, document that amendscan is supposed to free what ambeginscan allocates. The docs' lack of clarity on that point probably caused this bug to begin with. (There is discussion of changing that API spec going forward, but I don't think it'd be appropriate for the back branches.)

Per report from Bruno Wolff. It's been like this since the beginning, so back-patch to all active branches.

In HEAD, also fix an independent leak caused by commit 2a6368343 (allocating memory during spgrescan instead of spgbeginscan, which might be all right if it got cleaned up, but it didn't). And do a bit of code beautification on that commit, too.

Discussion: https://postgr.es/m/20181024012314.GA27428@wolff.to
Sync our copy of the timezone library with IANA release tzcode2018g.
commit : 4311cdd8e21c9626a7c39b4b145f85a558a872f5 author : Tom Lane <email@example.com> date : Wed, 31 Oct 2018 09:47:53 -0400 committer: Tom Lane <firstname.lastname@example.org> date : Wed, 31 Oct 2018 09:47:53 -0400
This patch absorbs an upstream fix to "zic" for a recently-introduced bug that made it output data that some 32-bit clients couldn't read. Given the current source data, the bug only manifests in zones with leap seconds, which we don't generate, so that there's no actual change in our installed timezone data files from this. Still, in case somebody uses our copy of "zic" to do something else, it seems best to apply the fix promptly. Also, update the README's notes about converting upstream code to our conventions.
Update time zone data files to tzdata release 2018g.
commit : d651e9e7c585a4e10566fa1e0d96497e0f630115 author : Tom Lane <email@example.com> date : Wed, 31 Oct 2018 08:35:50 -0400 committer: Tom Lane <firstname.lastname@example.org> date : Wed, 31 Oct 2018 08:35:50 -0400
DST law changes in Morocco (with, effectively, zero notice). Historical corrections for Hawaii.
Fix missing whitespace in pg_dump ref page
commit : fe42d4be7b09267f77bccc0968933e45ee5beaed author : Magnus Hagander <email@example.com> date : Mon, 29 Oct 2018 12:34:49 +0100 committer: Magnus Hagander <firstname.lastname@example.org> date : Mon, 29 Oct 2018 12:34:49 +0100
Author: Daniel Gustafsson <email@example.com>
Fix perl searchpath for modern perl for MSVC tools
commit : 698255147391a16493ba6cefd014b272b757b4b8 author : Andrew Dunstan <firstname.lastname@example.org> date : Sun, 28 Oct 2018 12:22:32 -0400 committer: Andrew Dunstan <email@example.com> date : Sun, 28 Oct 2018 12:22:32 -0400
Modern versions of perl no longer include the current directory in the perl searchpath, as it's insecure. Instead of adding the current directory, we get around the problem by adding the directory where the script lives. Problem noted by Victor Wagner. Solution adapted from buildfarm client code. Backpatch to all live versions.
Sync our copy of the timezone library with IANA release tzcode2018f.
commit : 0fead87601e5a9a4d22da738381ca558a4506114 author : Tom Lane <firstname.lastname@example.org> date : Fri, 19 Oct 2018 19:36:34 -0400 committer: Tom Lane <email@example.com> date : Fri, 19 Oct 2018 19:36:34 -0400
About half of this is purely cosmetic changes to reduce the diff between our code and theirs, like inserting "const" markers where they have them. The other half is tracking actual code changes in zic.c and localtime.c. I don't think any of these represent near-term compatibility hazards, but it seems best to stay up to date. I also fixed longstanding bugs in our code for producing the known_abbrevs.txt list, which by chance hadn't been exposed before, but which resulted in some garbage output after applying the upstream changes in zic.c. Notably, because upstream removed their old phony transitions at the Big Bang, it's now necessary to cope with TZif files containing no DST transition times at all.
Update time zone data files to tzdata release 2018f.
commit : 9abbfc35ca3e9fe2dc42875face30cc1b60014ea author : Tom Lane <firstname.lastname@example.org> date : Fri, 19 Oct 2018 17:01:34 -0400 committer: Tom Lane <email@example.com> date : Fri, 19 Oct 2018 17:01:34 -0400
DST law changes in Chile, Fiji, and Russia (Volgograd). Historical corrections for China, Japan, Macau, and North Korea. Note: like the previous tzdata update, this involves a depressingly large amount of semantically-meaningless churn in tzdata.zi. That is a consequence of upstream's data compression method assigning unstable abbreviations to DST rulesets. I complained about that to them last time, and this version now uses an assignment method that pays some heed to not changing abbreviations unnecessarily. So hopefully, that'll be better going forward.
Still further rethinking of build changes for macOS Mojave.
commit : 0749acca50d3962c4a6a721c41347f15fd59db31 author : Tom Lane <firstname.lastname@example.org> date : Thu, 18 Oct 2018 14:55:23 -0400 committer: Tom Lane <email@example.com> date : Thu, 18 Oct 2018 14:55:23 -0400
To avoid the sorts of problems complained of by Jakob Egger, it'd be best if configure didn't emit any references to the sysroot path at all. In the case of PL/Tcl, we can do that just by keeping our hands off the TCL_INCLUDE_SPEC string altogether. In the case of PL/Perl, we need to substitute -iwithsysroot for -I in the compile commands, which is easily handled if we change to using a configure output variable that includes the switch, not only the directory name. Since PL/Tcl and PL/Python already do it like that, this seems like good consistency cleanup anyway.

Hence, this replaces the advice given to Perl-related extensions in commit 5e2217131; instead of writing "-I$(perl_archlibexp)/CORE", they should just write "$(perl_includespec)". (The old way continues to work, but not on recent macOS.)

It's still the case that configure needs to be aware of the sysroot path internally, but that's cleaner than what we had before.

As before, back-patch to all supported versions.

Discussion: https://firstname.lastname@example.org
Fix minor bug in isolationtester.
commit : 176f6590270c8e59e5d68b339bace9d8696d4047 author : Tom Lane <email@example.com> date : Wed, 17 Oct 2018 15:06:38 -0400 committer: Tom Lane <firstname.lastname@example.org> date : Wed, 17 Oct 2018 15:06:38 -0400
If the lock wait query failed, isolationtester would report the PQerrorMessage from some other connection, meaning there would be no message or an unrelated one. This seems like a pretty unlikely occurrence, but if it did happen, this bug could make it really difficult/confusing to figure out what happened. That seems to justify patching all the way back. In passing, clean up another place where the "wrong" conn was used for an error report. That one's not actually buggy because it's a different alias for the same connection, but it's still confusing to the reader.
Improve tzparse's handling of TZDEFRULES ("posixrules") zone data.
commit : ec5fe7f799a4c18c1d2a33ca4eb723d29493e93f author : Tom Lane <email@example.com> date : Wed, 17 Oct 2018 12:26:48 -0400 committer: Tom Lane <firstname.lastname@example.org> date : Wed, 17 Oct 2018 12:26:48 -0400
In the IANA timezone code, tzparse() always tries to load the zone file named by TZDEFRULES ("posixrules"). Previously, we'd hacked that logic to skip the load in the "lastditch" code path, which we use only to initialize the default "GMT" zone during GUC initialization. That's critical for a couple of reasons: since we do not support leap seconds, we *must not* allow "GMT" to have leap seconds, and since this case runs before the GUC subsystem is fully alive, we'd really rather not take the risk of pg_open_tzfile throwing any errors.

However, that still left the code reading TZDEFRULES on every other call, something we'd noticed to the extent of having added code to cache the result so it was only done once per process, not many times. Andres Freund complained about the static data space used up for the cache; but as long as the logic was like this, there was no point in trying to get rid of that space.

We can improve matters by looking a bit more closely at what the IANA code actually needs the TZDEFRULES data for. One thing it does is that if "posixrules" is a leap-second-aware zone, the leap-second behavior will be absorbed into every POSIX-style zone specification. However, that's a behavior we'd really prefer to do without, since for our purposes the end effect is to render every POSIX-style zone name unsupported. Otherwise, the TZDEFRULES data is used only if the POSIX zone name specifies DST but doesn't include a transition date rule (e.g., "EST5EDT" rather than "EST5EDT,M3.2.0,M11.1.0"). That is a minority case for our purposes; in particular, it never happens when tzload() invokes tzparse() to interpret a transition date rule string found in a tzdata zone file.

Hence, if we legislate that we're going to ignore leap-second data from "posixrules", we can postpone the TZDEFRULES load into the path where we actually need to substitute for a missing date rule string. That means it will never happen at all in common scenarios, making it reasonable to dynamically allocate the cache space when it does happen. Even when the data is already loaded, this saves some cycles in the common code path, since we avoid a memcpy of 23KB or so. And, IMO at least, this is a less ugly hack on the IANA logic than what we had before, since it's not messing with the lastditch-vs-regular code paths.

Back-patch to all supported branches, not so much because this is a critical change as that I want to keep all our copies of the IANA timezone code in sync.

Discussion: https://email@example.com
Back off using -isysroot on Darwin.
commit : 486e6f8d9c78a5e19d74b4060ea9d7a86162add6 author : Tom Lane <firstname.lastname@example.org> date : Tue, 16 Oct 2018 16:27:15 -0400 committer: Tom Lane <email@example.com> date : Tue, 16 Oct 2018 16:27:15 -0400
Rethink the solution applied in commit 5e2217131 to get PL/Tcl to build on macOS Mojave. I feared that adding -isysroot globally might have undesirable consequences, and sure enough Jakob Egger reported one: it complicates building extensions with a different Xcode version than was used for the core server. (I find that a risky proposition in general, but apparently it works most of the time, so we shouldn't break it if we don't have to.)

We'd already adopted the solution for PL/Perl of inserting the sysroot path directly into the -I switches used to find Perl's headers, and we can do the same thing for PL/Tcl by changing the -iwithsysroot switch that Apple's tclConfig.sh reports. This restricts the risks to PL/Perl and PL/Tcl themselves and directly-dependent extensions, which is a lot more pleasing in general than a global -isysroot switch.

Along the way, tighten the test to see if we need to inject the sysroot path into $perl_includedir, as I'd speculated about upthread but not gotten round to doing.

As before, back-patch to all supported versions.

Discussion: https://firstname.lastname@example.org
Avoid rare race condition in privileges.sql regression test.
commit : 4166fb3a754ef03a8d0dd7b709f9599cb7a054d9 author : Tom Lane <email@example.com> date : Tue, 16 Oct 2018 13:56:58 -0400 committer: Tom Lane <firstname.lastname@example.org> date : Tue, 16 Oct 2018 13:56:58 -0400
We created a temp table, then switched to a new session, leaving the old session to clean up its temp objects in background. If that took long enough, the eventual attempt to drop the user that owns the temp table could fail, as exhibited today by sidewinder. Fix by dropping the temp table explicitly when we're done with it. It's been like this for quite some time, so back-patch to all supported branches. Report: https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=sidewinder&dt=2018-10-16%2014%3A45%3A00
Avoid statically allocating gmtsub()'s timezone workspace.
commit : 27ba589b745f864165005f08f1616a249383955e author : Tom Lane <email@example.com> date : Tue, 16 Oct 2018 11:50:19 -0400 committer: Tom Lane <firstname.lastname@example.org> date : Tue, 16 Oct 2018 11:50:19 -0400
localtime.c's "struct state" is a rather large object, ~23KB. We were statically allocating one for gmtsub() to use to represent the GMT timezone, even though that function is not at all heavily used and is never reached in most backends. Let's malloc it on-demand, instead.

This does pose the question of how to handle a malloc failure, but there's already a well-defined error report convention here, ie set errno and return NULL. We have but one caller of pg_gmtime in HEAD, and two in back branches, none of which bothered to check for error. Make them do so. The possible errors are sufficiently unlikely (out-of-range timestamp, and now malloc failure) that I think elog() is adequate.

Back-patch to all supported branches to keep our copies of the IANA timezone code in sync. This particular change is in a stanza that already differs from upstream, so it's a wash for maintenance purposes; but only as long as we keep the branches the same.

Discussion: https://email@example.com
Check for stack overrun in standard_ProcessUtility().
commit : eb01ea2a364404e69a0dbfe1545ba919c9943c63 author : Tom Lane <firstname.lastname@example.org> date : Mon, 15 Oct 2018 14:01:38 -0400 committer: Tom Lane <email@example.com> date : Mon, 15 Oct 2018 14:01:38 -0400
ProcessUtility can recurse, and indeed can be driven to infinite recursion, so it ought to have a check_stack_depth() call. This covers the reported bug (portal trying to execute itself) and a bunch of other cases that could perhaps arise somewhere. Per bug #15428 from Malthe Borch. Back-patch to all supported branches. Discussion: https://firstname.lastname@example.org
Avoid duplicate XIDs at recovery when building initial snapshot
commit : 7c525519d802766988f438eae9c30e3d67e1b71a author : Michael Paquier <email@example.com> date : Sun, 14 Oct 2018 22:23:54 +0900 committer: Michael Paquier <firstname.lastname@example.org> date : Sun, 14 Oct 2018 22:23:54 +0900
On a primary, sets of XLOG_RUNNING_XACTS records are generated on a periodic basis to allow recovery to build the initial state of transactions for a hot standby. The set of transaction IDs is created by scanning all the entries in ProcArray. However, that logic never counted on the fact that two-phase transactions finishing their prepare phase can put ProcArray in a state where there are two entries with the same transaction ID: one for the initial transaction, which gets cleared when prepare finishes, and a second, dummy entry to track that the transaction is still running after prepare finishes. This ensures a continuous presence of the transaction, so that callers of, for example, TransactionIdIsInProgress() are always able to see it as alive.

So, if an XLOG_RUNNING_XACTS record takes a standby snapshot while a two-phase transaction finishes preparing, the record can end up with duplicated XIDs, which is a state expected by design. If this record gets applied on a standby to initialize its recovery state, it would simply fail, so the odds of facing this failure are very low in practice.

It would be tempting to change the generation of XLOG_RUNNING_XACTS so that duplicates are removed on the source, but this requires holding ProcArrayLock for longer, which would impact all workloads, particularly those using two-phase transactions heavily. XLOG_RUNNING_XACTS is actually used only to initialize the standby state at recovery, so instead the solution taken is to discard duplicates when applying the initial snapshot.

Diagnosed-by: Konstantin Knizhnik
Author: Michael Paquier
Discussion: https://email@example.com
Backpatch-through: 9.3
Remove abstime, reltime, tinterval tables from old regression databases.
commit : 7b88c1ddd049de7ebddd39ad70ebdc7286be299d author : Tom Lane <firstname.lastname@example.org> date : Fri, 12 Oct 2018 19:33:57 -0400 committer: Tom Lane <email@example.com> date : Fri, 12 Oct 2018 19:33:57 -0400
In the back branches, drop these tables after the regression tests are done with them. This fixes failures of cross-branch pg_upgrade testing caused by these types having been removed in v12. We do lose the ability to test dump/restore behavior with these types in the back branches, but the actual loss of code coverage seems to be nil given that there's nothing very special about these types. Discussion: https://firstname.lastname@example.org
Back-patch addition of the ALLOCSET_FOO_SIZES macros.
commit : ec185747a46e704fd45a8eaa7f420cc200dcb6e7 author : Tom Lane <email@example.com> date : Fri, 12 Oct 2018 14:49:33 -0400 committer: Tom Lane <firstname.lastname@example.org> date : Fri, 12 Oct 2018 14:49:33 -0400
These macros were originally added in commit ea268cdc9, and back-patched into 9.6 before 9.6.0. However, some extensions would like to use them in older branches, and there seems no harm in providing them. So add them to all supported branches. Per suggestions from Christoph Berg and Andres Freund. Discussion: https://email@example.com
Fix logical decoding error when system table w/ toast is repeatedly rewritten.
commit : c7b96ba291595c13272911ed65cb7c2c4c8cc2e7 author : Andres Freund <firstname.lastname@example.org> date : Wed, 10 Oct 2018 13:53:03 -0700 committer: Andres Freund <email@example.com> date : Wed, 10 Oct 2018 13:53:03 -0700
Repeatedly rewriting a mapped catalog table with VACUUM FULL or CLUSTER could cause logical decoding to fail with:

    ERROR, "could not map filenode \"%s\" to relation OID"

To trigger the problem the rewritten catalog had to have live tuples with toasted columns.

The problem was that, during catalog table rewrites, the heap_insert() check that prevents logical decoding information from being emitted for system catalogs failed to treat the new heap's toast table as a system catalog (because the new heap is not recognized as a catalog table via RelationIsLogicallyLogged()). The relmapper, in contrast to the normal catalog contents, does not contain historical information. After a single rewrite of a mapped table the new relation is known to the relmapper, but if the table is rewritten twice before logical decoding occurs, the relfilenode cannot be mapped to a relation anymore, which then leads us to error out. This only happens for toast tables, because the main table contents aren't re-inserted with heap_insert().

The fix is simple: add a new heap_insert() flag that prevents logical decoding information from being emitted, and accept during decoding that there might not be tuple data for toast tables.

Unfortunately that does not fix pre-existing logical decoding errors. Doing so would require not throwing an error when a filenode cannot be mapped to a relation during decoding, and that seems too likely to hide bugs. If it's crucial to fix decoding for an existing slot, temporarily changing the ERROR in ReorderBufferCommit() to a WARNING appears to be the best fix.

Author: Andres Freund
Discussion: https://firstname.lastname@example.org
Backpatch: 9.4-, where logical decoding was introduced
Allow btree comparison functions to return INT_MIN.
commit : 26cc27541d92572dbd2a7898bef903aea66ca18f author : Tom Lane <email@example.com> date : Fri, 5 Oct 2018 16:01:30 -0400 committer: Tom Lane <firstname.lastname@example.org> date : Fri, 5 Oct 2018 16:01:30 -0400
Historically we forbade datatype-specific comparison functions from returning INT_MIN, so that it would be safe to invert the sort order just by negating the comparison result. However, this was never really safe for comparison functions that directly return the result of memcmp(), strcmp(), etc, as POSIX doesn't place any such restriction on those library functions. Buildfarm results show that at least on recent Linux on s390x, memcmp() actually does return INT_MIN sometimes, causing sort failures.

The agreed-on answer is to remove this restriction and fix relevant call sites to not make such an assumption; code such as "res = -res" should be replaced by "INVERT_COMPARE_RESULT(res)". The same is needed in a few places that just directly negated the result of memcmp or strcmp.

To help find places having this problem, I've also added a compile option to nbtcompare.c that causes some of the commonly used comparators to return INT_MIN/INT_MAX instead of their usual -1/+1. It'd likely be a good idea to have at least one buildfarm member running with "-DSTRESS_SORT_INT_MIN". That's far from a complete test of course, but it should help to prevent fresh introductions of such bugs.

This is a longstanding portability hazard, so back-patch to all supported branches.

Discussion: https://email@example.com
Set snprintf.c's maximum number of NL arguments to be 31.
commit : a5b46fc66dfb2396bfa0995410a1e8487f383d3f author : Tom Lane <firstname.lastname@example.org> date : Tue, 2 Oct 2018 12:41:28 -0400 committer: Tom Lane <email@example.com> date : Tue, 2 Oct 2018 12:41:28 -0400
Previously, we used the platform's NL_ARGMAX if any, otherwise 16. The trouble with this is that the platform value is hugely variable, ranging from the POSIX-minimum 9 to as much as 64K on recent FreeBSD. Values of more than a dozen or two have no practical use and slow down the initialization of the argtypes array. Worse, they cause snprintf.c to consume far more stack space than was the design intention, possibly resulting in stack-overflow crashes.

Standardize on 31, which is comfortably more than we need (it looks like no existing translatable message has more than about 10 parameters). I chose that, not 32, to make the array sizes powers of 2, for some possible small gain in speed of the memset.

The lack of reported crashes suggests that the set of platforms we use snprintf.c on (in released branches) may have no overlap with the set where NL_ARGMAX has unreasonably large values. But that's not entirely clear, so back-patch to all supported branches.

Per report from Mateusz Guzik (via Thomas Munro).

Discussion: https://postgr.es/m/CAEepm=3VF=PUp2f8gU8fgZB22yPE_KBS0+e1AHAtQ=09schTHg@mail.gmail.com
Fix corner-case failures in has_foo_privilege() family of functions.
commit : fd81fae67fa0a0b213bfc1bf6d058771c8ada8f2 author : Tom Lane <firstname.lastname@example.org> date : Tue, 2 Oct 2018 11:54:13 -0400 committer: Tom Lane <email@example.com> date : Tue, 2 Oct 2018 11:54:13 -0400
The variants of these functions that take numeric inputs (OIDs or column numbers) are supposed to return NULL rather than failing on bad input; this rule reduces problems with snapshot skew when queries apply the functions to all rows of a catalog.

has_column_privilege() had careless handling of the case where the table OID didn't exist. You might get something like this:

    select has_column_privilege(9999,'nosuchcol','select');
    ERROR: column "nosuchcol" of relation "(null)" does not exist

or you might get a crash, depending on the platform's printf's response to a null string pointer.

In addition, while applying the column-number variant to a dropped column returned NULL as desired, applying the column-name variant did not:

    select has_column_privilege('mytable','........pg.dropped.2........','select');
    ERROR: column "........pg.dropped.2........" of relation "mytable" does not exist

It seems better to make this case return NULL as well.

Also, the OID-accepting variants of has_foreign_data_wrapper_privilege, has_server_privilege, and has_tablespace_privilege didn't follow the principle of returning NULL for nonexistent OIDs. Superusers got TRUE, everybody else got an error.

Per investigation of Jaime Casanova's report of a new crash in HEAD. These behaviors have been like this for a long time, so back-patch to all supported branches.

Patch by me; thanks to Stephen Frost for discussion and review.

Discussion: https://postgr.es/m/CAJGNTeP=-6Gyqq5TN9OvYEydi7Fv1oGyYj650LGTnW44oAzYCg@mail.gmail.com
Fix documentation of pgrowlocks using "lock_type" instead of "modes"
commit : 2bedb0ec7953737c361c7d50849f1177d55c1a42 author : Michael Paquier <firstname.lastname@example.org> date : Tue, 2 Oct 2018 16:36:55 +0900 committer: Michael Paquier <email@example.com> date : Tue, 2 Oct 2018 16:36:55 +0900
The example used in the documentation is outdated as well. This is an oversight from 0ac5ad5, which bumped up pgrowlocks but forgot some bits of the documentation. Reported-by: Chris Wilson Discussion: https://firstname.lastname@example.org Backpatch-through: 9.3
Fix ALTER COLUMN TYPE to not open a relation without any lock.
commit : 26318c4b858a1ba59ffa1d6b186c6302a9153c60 author : Tom Lane <email@example.com> date : Mon, 1 Oct 2018 11:39:14 -0400 committer: Tom Lane <firstname.lastname@example.org> date : Mon, 1 Oct 2018 11:39:14 -0400
If the column being modified is referenced by a foreign key constraint of another table, ALTER TABLE would open the other table (to re-parse the constraint's definition) without having first obtained a lock on it. This was evidently intentional, but that doesn't mean it's really safe. It's especially not safe in 9.3, which pre-dates use of MVCC scans for catalog reads, but even in current releases it doesn't seem like a good idea. We know we'll need AccessExclusiveLock shortly to drop the obsoleted constraint, so just get that a little sooner to close the hole. Per testing with a patch that complains if we open a relation without holding any lock on it. I don't plan to back-patch that patch, but we should close the holes it identifies in all supported branches. Discussion: https://email@example.com
Fix detection of the result type of strerror_r().
commit : e5baf8c27e6cc2c83cc81b620e75ad3d571d51c4 author : Tom Lane <firstname.lastname@example.org> date : Sun, 30 Sep 2018 16:24:56 -0400 committer: Tom Lane <email@example.com> date : Sun, 30 Sep 2018 16:24:56 -0400
The method we've traditionally used, of redeclaring strerror_r() to see if the compiler complains of inconsistent declarations, turns out not to work reliably because some compilers only report a warning, not an error. Amazingly, this has gone undetected for years, even though it certainly breaks our detection of whether strerror_r succeeded. Let's instead test whether the compiler will take the result of strerror_r() as a switch() argument. It's possible this won't work universally either, but it's the best idea I could come up with on the spur of the moment. Back-patch of commit 751f532b9. Buildfarm results indicate that only icc-on-Linux actually has an issue here; perhaps the lack of field reports indicates that people don't build PG for production that way. Discussion: https://firstname.lastname@example.org
Recurse to sequences on ownership change for all relkinds
commit : 26b877d280eeb23d57b507a1ebb14f240d317eb8 author : Peter Eisentraut <email@example.com> date : Thu, 14 Jun 2018 23:22:14 -0400 committer: Peter Eisentraut <firstname.lastname@example.org> date : Thu, 14 Jun 2018 23:22:14 -0400
When table ownership is changed, we must also apply the change to any owned sequences. (Otherwise, we end up with a situation that cannot be restored, because linked sequences must have the same owner as the table.) Previously this was only done for regular tables and materialized views, but it should also apply to at least foreign tables. This patch removes the relkind check altogether, because it doesn't save very much and just introduces the possibility of similar omissions. Bug: #15238 Reported-by: Christoph Berg <email@example.com>
Make some fixes to allow building Postgres on macOS 10.14 ("Mojave").
commit : a5361b5933f7b0e0349c34ce11cc7a9438a04c23 author : Tom Lane <firstname.lastname@example.org> date : Tue, 25 Sep 2018 13:23:29 -0400 committer: Tom Lane <email@example.com> date : Tue, 25 Sep 2018 13:23:29 -0400
Apple's latest rearrangements of the system-supplied headers have broken building of PL/Perl and PL/Tcl. The only practical way to fix PL/Tcl is to start using the "-isysroot" compiler flag to point to SDK-supplied headers, as Apple expects. We must also start distinguishing where to find Perl's headers from where to find its shared library; but that seems like good cleanup anyway. Extensions that formerly did something like -I$(perl_archlibexp)/CORE should now do -I$(perl_includedir)/CORE instead. perl_archlibexp is still the place to look for libperl.so, though. If for some reason you don't like the default -isysroot setting, you can override that by setting PG_SYSROOT in configure's arguments. I don't currently think people would need to do so, unless maybe for cross-version build purposes. In addition, teach configure where to find tclConfig.sh. Our traditional method of searching $auto_path hasn't worked for the last couple of macOS releases, and it now seems clear that Apple's not going to change that. The workaround of manually specifying --with-tclconfig was annoying already, but Mojave's made it a lot more so because the sysroot path now has to be included as well. Let's just wire the knowledge into configure instead. To avoid breaking builds against non-default Tcl installations (e.g. MacPorts) wherein the $auto_path method probably still works, arrange to try the additional case only after all else has failed. Back-patch to all supported versions, since at least the buildfarm cares about that. The changes are set up to not do anything on macOS releases that are old enough to not have functional sysroot trees.
Fix over-allocation of space for array_out()'s result string.
commit : 028fc0bac974a9c0c12acc42a5d58bd0dd9693fc author : Tom Lane <firstname.lastname@example.org> date : Mon, 24 Sep 2018 11:30:51 -0400 committer: Tom Lane <email@example.com> date : Mon, 24 Sep 2018 11:30:51 -0400
array_out overestimated the space needed for its output, possibly by a very substantial amount if the array is multi-dimensional, because of wrong order of operations in the loop that counts the number of curly-brace pairs needed. While the output string is normally short-lived, this could still cause problems in extreme cases. An additional minor error was that it counted one more delimiter than is actually needed. Repair those errors, add an Assert that the space is now correctly calculated, and make some minor improvements in the comments. I also failed to resist the temptation to get rid of an integer modulus operation per array element; a simple comparison is sufficient. This bug dates clear back to Berkeley days, so back-patch to all supported versions. Keiichi Hirobe, minor additional work by me Discussion: https://postgr.es/m/CAH=EFxE9W0tRvQkixR2XJRRCToUYUEDkJZk6tnADXugPBRdcdg@mail.gmail.com
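The corrected arithmetic can be sketched like this (a Python model with a made-up helper name; the real code is C inside array_out). One brace pair encloses the whole array, then each sub-array at each level adds a pair, and the delimiter count telescopes down to the total element count minus one:

```python
from math import prod

def braces_and_delims(dims):
    """Structural space for array_out-style output of an array with
    dimensions dims = [d1, ..., dn]:
      brace pairs = 1 + d1 + d1*d2 + ... + d1*...*d(n-1)
      delimiters  = d1*...*dn - 1   (not one more, per the fix)
    """
    pairs = sum(prod(dims[:k]) for k in range(len(dims)))  # k=0 term is 1
    delims = prod(dims) - 1 if dims else 0
    return pairs, delims

# '{{a,b,c},{d,e,f}}' has 3 brace pairs and 5 commas.
```

Getting the order of multiplication wrong in that running product is what made the old estimate balloon for multi-dimensional inputs.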
Initialize random() in bootstrap/stand-alone postgres and in initdb.
commit : 401228183a63254a8edbded3693124ad466b185b author : Noah Misch <firstname.lastname@example.org> date : Sun, 23 Sep 2018 22:56:39 -0700 committer: Noah Misch <email@example.com> date : Sun, 23 Sep 2018 22:56:39 -0700
This removes a difference between the standard IsUnderPostmaster execution environment and that of --boot and --single. In a stand-alone backend, "SELECT random()" always started at the same seed. On a system capable of using posix shared memory, initdb could still conclude "selecting dynamic shared memory implementation ... sysv". Crashed --boot or --single postgres processes orphaned shared memory objects having names that collided with the not-actually-random names that initdb probed. The sysv fallback appeared after ten crashes of --boot or --single postgres. Since --boot and --single are rare in production use, systems used for PostgreSQL development are the principal candidate to notice this symptom. Back-patch to 9.3 (all supported versions). PostgreSQL 9.4 introduced dynamic shared memory, but 9.3 does share the "SELECT random()" problem. Reviewed by Tom Lane and Kyotaro HORIGUCHI. Discussion: https://postgr.es/m/20180915221546.GA3159382@rfd.leadboat.com
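The failure mode is easy to demonstrate in miniature (a Python sketch; the probe-name format and the seeding expression here are assumptions for illustration, not the committed code):

```python
import os, random, time

def probe_name(rng):
    # Stand-in for initdb probing a candidate shared-memory segment name.
    return "PostgreSQL.%u" % rng.randrange(1 << 31)

# The bug, in miniature: generators that are never seeded (modeled here by
# a fixed seed) all walk the same sequence, so a crashed --boot/--single
# run orphans a segment whose "random" name the next initdb probes again.
assert probe_name(random.Random(1)) == probe_name(random.Random(1))

# The fix, sketched: seed once at process start from varying sources.
rng = random.Random(int(time.time()) ^ os.getpid())
```

With per-process seeding, collisions between orphaned segments and freshly probed names become as unlikely as they were always assumed to be.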
Fix failure in WHERE CURRENT OF after rewinding the referenced cursor.
commit : 38cb010843636ed27713d54c70b455eb470d06b8 author : Tom Lane <firstname.lastname@example.org> date : Sun, 23 Sep 2018 16:05:45 -0400 committer: Tom Lane <email@example.com> date : Sun, 23 Sep 2018 16:05:45 -0400
In a case where we have multiple relation-scan nodes in a cursor plan, such as a scan of an inheritance tree, it's possible to fetch from a given scan node, then rewind the cursor and fetch some row from an earlier scan node. In such a case, execCurrent.c mistakenly thought that the later scan node was still active, because ExecReScan hadn't done anything to make it look not-active. We'd get some sort of failure in the case of a SeqScan node, because the node's scan tuple slot would be pointing at a HeapTuple whose t_self gets reset to invalid by heapam.c. But it seems possible that for other relation scan node types we'd actually return a valid tuple TID to the caller, resulting in updating or deleting a tuple that shouldn't have been considered current. To fix, forcibly clear the ScanTupleSlot in ExecScanReScan. Another issue here, which seems only latent at the moment but could easily become a live bug in future, is that rewinding a cursor does not necessarily lead to *immediately* applying ExecReScan to every scan-level node in the plan tree. Upper-level nodes will think that they can postpone that call if their child node is already marked with chgParam flags. I don't see a way for that to happen today in a plan tree that's simple enough for execCurrent.c's search_plan_tree to understand, but that's one heck of a fragile assumption. So, add some logic in search_plan_tree to detect chgParam flags being set on nodes that it descended to/through, and assume that that means we should consider lower scan nodes to be logically reset even if their ReScan call hasn't actually happened yet. Per bug #15395 from Matvey Arye. This has been broken for a long time, so back-patch to all supported branches. Discussion: https://firstname.lastname@example.org
docs: remove use of escape strings and use bytea hex output
commit : 9b3477c9ba1497fe6fafeab5c2dbf6d59c92ec6d author : Bruce Momjian <email@example.com> date : Fri, 21 Sep 2018 19:55:06 -0400 committer: Bruce Momjian <firstname.lastname@example.org> date : Fri, 21 Sep 2018 19:55:06 -0400
standard_conforming_strings defaulted to 'on' in PG 9.1. bytea_output defaulted to 'hex' in PG 9.0. Reported-by: André Hänsel Discussion: https://email@example.com Backpatch-through: 9.3
Error out for clang on x86-32 without SSE2 support, no -fexcess-precision.
commit : 29196e13cd704930c174a3827e3aba7fedf06db9 author : Andres Freund <firstname.lastname@example.org> date : Thu, 20 Sep 2018 18:11:10 -0700 committer: Andres Freund <email@example.com> date : Thu, 20 Sep 2018 18:11:10 -0700
As clang currently doesn't support -fexcess-precision=standard, compiling x86-32 code with SSE2 disabled can lead to problems with floating point overflow checks and the like. This issue was noticed because clang, on at least some BSDs, defaults to i386 compatibility, whereas it defaults to pentium4 on Linux. Our forced usage of __builtin_isinf() led to some overflow checks not triggering when compiling for i386, e.g. when the result of the calculation didn't overflow in 80-bit registers but did so in 64-bit. While we could just fall back to a non-builtin isinf, it seems likely that the use of 80-bit registers leads to other problems (which is why we already force the flag for GCC). Therefore, error out when detecting clang in that situation. Reported-By: Victor Wagner Analyzed-By: Andrew Gierth and Andres Freund Author: Andres Freund Discussion: https://firstname.lastname@example.org Backpatch: 9.3-, all supported versions are affected
Allow DSM allocation to be interrupted.
commit : c0c5668c6a073f22a1d39175693b1367ae7f09fa author : Thomas Munro <email@example.com> date : Tue, 18 Sep 2018 22:56:36 +1200 committer: Thomas Munro <firstname.lastname@example.org> date : Tue, 18 Sep 2018 22:56:36 +1200
Chris Travers reported that the startup process can repeatedly try to cancel a backend that is in a posix_fallocate()/EINTR loop and cause it to loop forever. Teach the retry loop to give up if an interrupt is pending. Don't actually check for interrupts in that loop though, because a non-local exit would skip some clean-up code in the caller. Back-patch to 9.4 where DSM was added (and posix_fallocate() was later back-patched). Author: Chris Travers Reviewed-by: Ildar Musin, Murat Kabilov, Oleksii Kliukin Tested-by: Oleksii Kliukin Discussion: https://postgr.es/m/CAN-RpxB-oeZve_J3SM_6%3DHXPmvEG%3DHX%2B9V9pi8g2YR7YW0rBBg%40mail.gmail.com
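The shape of the fixed retry loop, sketched in Python with hypothetical names (the real loop is C around posix_fallocate()):

```python
import errno

def retry_fallocate(do_fallocate, interrupt_pending):
    """Keep retrying on EINTR, but give up once an interrupt is pending.
    We return the error instead of servicing the interrupt here, because
    a non-local exit would skip cleanup code in the caller."""
    while True:
        rc = do_fallocate()
        if rc != errno.EINTR:
            return rc                 # success (0) or a hard error
        if interrupt_pending():
            return rc                 # pending cancel/die: stop looping

# Simulation: the call is interrupted on every attempt, as when the startup
# process keeps signalling the backend; the loop now terminates.
calls = {"n": 0}
def fake_fallocate():
    calls["n"] += 1
    return errno.EINTR
```

The key design point from the message: the loop only *checks* for a pending interrupt; the actual CHECK_FOR_INTERRUPTS-style processing is left to the caller, after clean-up.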
Fix failure with initplans used conditionally during EvalPlanQual rechecks.
commit : 8494755109e97ad22c9817b9dbe550b535961ed4 author : Tom Lane <email@example.com> date : Sat, 15 Sep 2018 13:42:34 -0400 committer: Tom Lane <firstname.lastname@example.org> date : Sat, 15 Sep 2018 13:42:34 -0400
The EvalPlanQual machinery assumes that any initplans (that is, uncorrelated sub-selects) used during an EPQ recheck would have already been evaluated during the main query; this is implicit in the fact that execPlan pointers are not copied into the EPQ estate's es_param_exec_vals. But it's possible for that assumption to fail, if the initplan is only reached conditionally. For example, a sub-select inside a CASE expression could be reached during a recheck when it had not been previously, if the CASE test depends on a column that was just updated. This bug is old, appearing to date back to my rewrite of EvalPlanQual in commit 9f2ee8f28, but was not detected until Kyle Samson reported a case. To fix, force all not-yet-evaluated initplans used within the EPQ plan subtree to be evaluated at the start of the recheck, before entering the EPQ environment. This could be inefficient, if such an initplan is expensive and goes unused again during the recheck --- but that's piling one layer of improbability atop another. It doesn't seem worth adding more complexity to prevent that, at least not in the back branches. It was convenient to use the new-in-v11 ExecEvalParamExecParams function to implement this, but I didn't like either its name or the specifics of its API, so revise that. Back-patch all the way. Rather than rewrite the patch to avoid depending on bms_next_member() in the oldest branches, I chose to back-patch that function into 9.4 and 9.3. (This isn't the first time back-patches have needed that, and it exhausted my patience.) I also chose to back-patch some test cases added by commits 71404af2a and 342a1ffa2 into 9.4 and 9.3, so that the 9.x versions of eval-plan-qual.spec are all the same. Andrew Gierth diagnosed the problem and contributed the added test cases, though the actual code changes are by me. Discussion: https://postgr.es/m/A033A40A-B234-4324-BE37-272279F7B627@tripadvisor.com
doc: Update broken links
commit : 3cac7c2e4b837ab4e67aa903e667b0479a128e34 author : Peter Eisentraut <email@example.com> date : Tue, 14 Aug 2018 22:54:52 +0200 committer: Peter Eisentraut <firstname.lastname@example.org> date : Tue, 14 Aug 2018 22:54:52 +0200
Repair bug in regexp split performance improvements.
commit : a389ddc759ae26169f3b124f2b72eea44eaf292a author : Andrew Gierth <email@example.com> date : Wed, 12 Sep 2018 19:31:06 +0100 committer: Andrew Gierth <firstname.lastname@example.org> date : Wed, 12 Sep 2018 19:31:06 +0100
Commit c8ea87e4b introduced a temporary conversion buffer for substrings extracted during regexp splits. Unfortunately the code that sized it was failing to ignore the effects of ignored degenerate regexp matches, so for regexp_split_* calls it could under-size the buffer in such cases. Fix, and add some regression test cases (though those will only catch the bug if run in a multibyte encoding). Backpatch to 9.3 as the faulty code was. Thanks to the PostGIS project, Regina Obe and Paul Ramsey for the report (via IRC) and assistance in analysis. Patch by me.
On all Windows platforms, not just Cygwin, use _timezone and _tzname.
commit : 86e247583310dc4a47cb7ba82201ff0b3c4d79d0 author : Tom Lane <email@example.com> date : Tue, 1 May 2018 12:02:41 -0400 committer: Andrew Dunstan <firstname.lastname@example.org> date : Tue, 1 May 2018 12:02:41 -0400
Back-patch commit 868628e4f into the 9.5 branch, so that we can support building that branch with Visual Studio 2015. This patch itself could go further back, but other VS2015 patches such as 0fb54de9a and c8e81afc6 were only back-patched to 9.5, so there seems little point in handling this one differently. Discussion: https://postgr.es/m/CAD=LzWFg+Z-KUS3Wm8-1J2vOuYErJXbjuE6b7quzswQEBXJWMQ@mail.gmail.com Now that we have backported VS2015 support to 9.4 and 9.3, backport this also.
Support building with Visual Studio 2017
commit : 19acfd6528bcbf55ad996397177a1f2a16001c25 author : Andrew Dunstan <email@example.com> date : Mon, 25 Sep 2017 08:03:05 -0400 committer: Andrew Dunstan <firstname.lastname@example.org> date : Mon, 25 Sep 2017 08:03:05 -0400
Haribabu Kommi, reviewed by Takeshi Ideriha and Christian Ullrich. Now backpatched to 9.4 and 9.3.
Support building with Visual Studio 2015
commit : 9ca32a6ebc12d91a4df314e47bb1faf933e5bbb4 author : Andrew Dunstan <email@example.com> date : Fri, 29 Apr 2016 07:59:47 -0400 committer: Andrew Dunstan <firstname.lastname@example.org> date : Fri, 29 Apr 2016 07:59:47 -0400
Adjust the way we detect the locale. As a result the minimum Windows version supported by VS2015 and later is Windows Vista. Add some tweaks to remove new compiler warnings. Remove documentation references to the now obsolete msysGit. Michael Paquier, somewhat edited by me, reviewed by Christian Ullrich. Rather belated backpatch to 9.4 and 9.3
Fix past pd_upper write in ginRedoRecompress()
commit : 35ea98f79af299fd946d160eabf0a79e033d8115 author : Alexander Korotkov <email@example.com> date : Sun, 9 Sep 2018 21:19:29 +0300 committer: Alexander Korotkov <firstname.lastname@example.org> date : Sun, 9 Sep 2018 21:19:29 +0300
ginRedoRecompress() replays actions over compressed segments of a posting list in-place. However, this might lead to a write past pd_upper, because the intermediate state while replaying the changes can take more space than both the original state and the final state. This commit fixes that by refusing in-place modification. Instead, the page tail is copied once modification starts, and it is then used as the source of the original segments. Backpatch to 9.4 where posting list compression was introduced. Reported-by: Sivasubramanian Ramasubramanian Discussion: https://postgr.es/m/1536091151804.6588%40amazon.com Author: Alexander Korotkov based on patch from and ideas by Sivasubramanian Ramasubramanian Review: Sivasubramanian Ramasubramanian Backpatch-through: 9.4
Save/restore SPI's global variables in SPI_connect() and SPI_finish().
commit : d2003339c3969d93750104fc41b5ab27393a952b author : Tom Lane <email@example.com> date : Fri, 7 Sep 2018 20:09:57 -0400 committer: Tom Lane <firstname.lastname@example.org> date : Fri, 7 Sep 2018 20:09:57 -0400
This patch removes two sources of interference between nominally independent functions when one SPI-using function calls another, perhaps without knowing that it does so. Chapman Flack pointed out that xml.c's query_to_xml_internal() expects SPI_tuptable and SPI_processed to stay valid across datatype output function calls; but it's possible that such a call could involve re-entrant use of SPI. It seems likely that there are similar hazards elsewhere, if not in the core code then in third-party SPI users. Previously SPI_finish() reset SPI's API globals to zeroes/nulls, which would typically make for a crash in such a situation. Restoring them to the values they had at SPI_connect() seems like a considerably more useful behavior, and it still meets the design goal of not leaving any dangling pointers to tuple tables of the function being exited. Also, cause SPI_connect() to reset these variables to zeroes/nulls after saving them. This prevents interference in the opposite direction: it's possible that a SPI-using function that's only ever been tested standalone contains assumptions that these variables start out as zeroes. That was the case as long as you were the outermost SPI user, but not so much for an inner user. Now it's consistent. Report and fix suggestion by Chapman Flack, actual patch by me. Back-patch to all supported branches. Discussion: https://email@example.com
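The save/restore discipline described above can be sketched as a context manager (a Python model of the C globals; none of these names are the real SPI API):

```python
from contextlib import contextmanager

# Stand-in for SPI's API globals (SPI_tuptable, SPI_processed).
spi = {"tuptable": None, "processed": 0}

@contextmanager
def spi_connect():
    saved = dict(spi)                       # SPI_connect: save caller's state,
    spi.update(tuptable=None, processed=0)  # then present fresh zeroes/nulls
    try:
        yield spi
    finally:
        spi.clear()
        spi.update(saved)                   # SPI_finish: restore, don't zero

def inner():
    with spi_connect() as s:
        s["tuptable"], s["processed"] = "inner result", 1

def outer():
    with spi_connect() as s:
        s["tuptable"], s["processed"] = "outer result", 7
        inner()                             # re-entrant SPI use no longer
        return s["tuptable"], s["processed"]  # clobbers the outer state
```

This mirrors both halves of the fix: the inner user's SPI_finish hands back the outer user's tuple table, and the inner user itself starts from zeroes as if it were standalone.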
Limit depth of forced recursion for CLOBBER_CACHE_RECURSIVELY.
commit : 35e39610a3f83e0464cb2ff2b1e4c282d323b602 author : Tom Lane <firstname.lastname@example.org> date : Fri, 7 Sep 2018 18:13:29 -0400 committer: Tom Lane <email@example.com> date : Fri, 7 Sep 2018 18:13:29 -0400
It's somewhat surprising that we got away with this before. (Actually, since nobody tests this routinely AFAIK, it might've been broken for awhile. But it's definitely broken in the wake of commit f868a8143.) It seems sufficient to limit the forced recursion to a small number of levels. Back-patch to all supported branches, like the preceding patch. Discussion: https://firstname.lastname@example.org
Fix longstanding recursion hazard in sinval message processing.
commit : bf919387ecc66e998e0b6c516ed6cd284ba6a11a author : Tom Lane <email@example.com> date : Fri, 7 Sep 2018 18:04:38 -0400 committer: Tom Lane <firstname.lastname@example.org> date : Fri, 7 Sep 2018 18:04:38 -0400
LockRelationOid and sibling routines supposed that, if our session already holds the lock they were asked to acquire, they could skip calling AcceptInvalidationMessages on the grounds that we must have already read any remote sinval messages issued against the relation being locked. This is normally true, but there's a critical special case where it's not: processing inside AcceptInvalidationMessages might attempt to access system relations, resulting in a recursive call to acquire a relation lock. Hence, if the outer call had acquired that same system catalog lock, we'd fall through, despite the possibility that there's an as-yet-unread sinval message for that system catalog. This could, for example, result in failure to access a system catalog or index that had just been processed by VACUUM FULL. This is the explanation for buildfarm failures we've been seeing intermittently for the past three months. The bug is far older than that, but commits a54e1f158 et al added a new recursion case within AcceptInvalidationMessages that is apparently easier to hit than any previous case. To fix this, we must not skip calling AcceptInvalidationMessages until we have *finished* a call to it since acquiring a relation lock, not merely acquired the lock. (There's already adequate logic inside AcceptInvalidationMessages to deal with being called recursively.) Fortunately, we can implement that at trivial cost, by adding a flag to LOCALLOCK hashtable entries that tracks whether we know we have completed such a call. There is an API hazard added by this patch for external callers of LockAcquire: if anything is testing for LOCKACQUIRE_ALREADY_HELD, it might be fooled by the new return code LOCKACQUIRE_ALREADY_CLEAR into thinking the lock wasn't already held. This should be a fail-soft condition, though, unless something very bizarre is being done in response to the test. 
Also, I added an additional output argument to LockAcquireExtended, assuming that that probably isn't called by any outside code given the very limited usefulness of its additional functionality. Back-patch to all supported branches. Discussion: https://email@example.com
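The flag-based skip logic can be modeled compactly (a Python sketch; field and function names are illustrative, not the LOCALLOCK code):

```python
# Per-lock state: held, plus "a full AcceptInvalidationMessages call has
# completed since this lock was acquired".
locks = {}
aim_depth = 0       # recursion depth inside AcceptInvalidationMessages

def accept_invalidation_messages(process):
    global aim_depth
    aim_depth += 1
    try:
        process()   # may recursively lock catalogs and re-enter here
    finally:
        aim_depth -= 1

def lock_relation(oid, process=lambda: None):
    entry = locks.setdefault(oid, {"held": False, "clear": False})
    if entry["held"] and entry["clear"]:
        return "ALREADY_CLEAR"   # safe to skip: a full call has completed
    entry["held"] = True
    accept_invalidation_messages(process)
    # Only a call that *finished* at top level proves we are caught up;
    # a lock taken recursively inside the call must not be marked clear.
    if aim_depth == 0:
        entry["clear"] = True
    return "OK"
```

In the model, a lock acquired from inside the invalidation processing stays not-clear, so the next acquisition runs the full call again, which is exactly the hole the patch closes.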
Make contrib/unaccent's unaccent() function work when not in search path.
commit : d4ab3962613f39ec81c2077e84b9cfe158433ecd author : Tom Lane <firstname.lastname@example.org> date : Thu, 6 Sep 2018 10:49:45 -0400 committer: Tom Lane <email@example.com> date : Thu, 6 Sep 2018 10:49:45 -0400
Since the fixes for CVE-2018-1058, we've advised people to schema-qualify function references in order to fix failures in code that executes under a minimal search_path setting. However, that's insufficient to make the single-argument form of unaccent() work, because it looks up the "unaccent" text search dictionary using the search path. The most expedient answer seems to be to remove the search_path dependency by making it look in the same schema that the unaccent() function itself is declared in. This will definitely work for the normal usage of this function with the unaccent dictionary provided by the extension. It's barely possible that there are people who were relying on the search-path-dependent behavior to select other dictionaries with the same name; but if there are any such people at all, they can still get that behavior by writing unaccent('unaccent', ...), or possibly unaccent('unaccent'::text::regdictionary, ...) if the lookup has to be postponed to runtime. Per complaint from Gunnlaugur Thor Briem. Back-patch to all supported branches. Discussion: https://postgr.es/m/CAPs+M8LCex6d=DeneofdsoJVijaG59m9V0ggbb3pOH7hZO4+cQ@mail.gmail.com
docs: improve AT TIME ZONE description
commit : 20ce8739902dfef487664f0c310376bec34cd62d author : Bruce Momjian <firstname.lastname@example.org> date : Tue, 4 Sep 2018 22:34:07 -0400 committer: Bruce Momjian <email@example.com> date : Tue, 4 Sep 2018 22:34:07 -0400
The previous description was unclear. Also add a third example, change use of time zone acronyms to more verbose descriptions, and add a mention that using 'time' with AT TIME ZONE uses the current time zone rules. Backpatch-through: 9.3
Fix initial sync of slot parent directory when restoring status
commit : 113020627235adbc934a5898cb74394c3ae44627 author : Michael Paquier <firstname.lastname@example.org> date : Sun, 2 Sep 2018 12:41:06 -0700 committer: Michael Paquier <email@example.com> date : Sun, 2 Sep 2018 12:41:06 -0700
At the beginning of recovery, information from replication slots is recovered from disk to memory. In order to ensure the durability of the information, the status file as well as its parent directory are synced. However, the sync on the parent directory was done using the status file's path, which is logically incorrect; the code was in fact syncing the same file twice in a row. Reported-by: Konstantin Knizhnik Diagnosed-by: Konstantin Knizhnik Author: Michael Paquier Discussion: https://firstname.lastname@example.org Backpatch-through: 9.4
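The corrected pattern, sketched in Python (helper name assumed; the real code uses PostgreSQL's fsync wrappers):

```python
import os, tempfile

def fsync_file_and_parent(path):
    """Sync the file itself, then open and sync its *parent directory*
    so the directory entry is durable too -- rather than fsyncing the
    file's own path twice."""
    fd = os.open(path, os.O_RDONLY)
    try:
        os.fsync(fd)
    finally:
        os.close(fd)
    dfd = os.open(os.path.dirname(path) or ".", os.O_RDONLY)
    try:
        os.fsync(dfd)               # the directory this time, not the file
    finally:
        os.close(dfd)

# Demo: make a "slot state file" durable along with its directory entry.
d = tempfile.mkdtemp()
state = os.path.join(d, "state")
with open(state, "w") as f:
    f.write("slot data")
fsync_file_and_parent(state)
```

Syncing the directory is what guarantees the file's *name* survives a crash; syncing the file twice guarantees nothing extra.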
Doc: fix oversights in "Client/Server Character Set Conversions" table.
commit : 0fef581d61fcd8a13553b71bbe1dd60ef97358d2 author : Tom Lane <email@example.com> date : Sat, 1 Sep 2018 16:02:47 -0400 committer: Tom Lane <firstname.lastname@example.org> date : Sat, 1 Sep 2018 16:02:47 -0400
This table claimed that JOHAB could be used as a server encoding, which was true originally but hasn't been true since 8.3. It also lacked entries for EUC_JIS_2004 and SHIFT_JIS_2004. JOHAB problem noted by Lars Kanis, the others by me. Discussion: https://email@example.com
Avoid using potentially-under-aligned page buffers.
commit : 083d9ced14e4af4b48e7607bd6cc7567a68ec899 author : Tom Lane <firstname.lastname@example.org> date : Sat, 1 Sep 2018 15:27:13 -0400 committer: Tom Lane <email@example.com> date : Sat, 1 Sep 2018 15:27:13 -0400
There's a project policy against using plain "char buf[BLCKSZ]" local or static variables as page buffers; preferred style is to palloc or malloc each buffer to ensure it is MAXALIGN'd. However, that policy's been ignored in an increasing number of places. We've apparently got away with it so far, probably because (a) relatively few people use platforms on which misalignment causes core dumps and/or (b) the variables chance to be sufficiently aligned anyway. But this is not something to rely on. Moreover, even if we don't get a core dump, we might be paying a lot of cycles for misaligned accesses. To fix, invent new union types PGAlignedBlock and PGAlignedXLogBlock that the compiler must allocate with sufficient alignment, and use those in place of plain char arrays. I used these types even for variables where there's no risk of a misaligned access, since ensuring proper alignment should make kernel data transfers faster. I also changed some places where we had been palloc'ing short-lived buffers, for coding style uniformity and to save palloc/pfree overhead. Since this seems to be a live portability hazard (despite the lack of field reports), back-patch to all supported versions. Patch by me; thanks to Michael Paquier for review. Discussion: https://firstname.lastname@example.org
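The union trick can be demonstrated with a ctypes model (the real PGAlignedBlock is a C union in the PostgreSQL headers; this only illustrates the alignment effect, not the actual declaration):

```python
import ctypes

BLCKSZ = 8192

class PGAlignedBlock(ctypes.Union):
    """Union of the raw page buffer with members that force the
    alignment the platform wants for them."""
    _fields_ = [
        ("data", ctypes.c_char * BLCKSZ),    # the page buffer itself
        ("force_align_d", ctypes.c_double),  # forces double alignment
        ("force_align_i64", ctypes.c_int64), # forces 8-byte int alignment
    ]

buf = PGAlignedBlock()
# A union's alignment is the max of its members', so buf.data is aligned
# at least as strictly as a double, unlike a plain char[BLCKSZ].
assert ctypes.alignment(PGAlignedBlock) >= ctypes.alignment(ctypes.c_double)
```

A plain `char buf[BLCKSZ]` only promises character alignment; wrapping it in the union upgrades that promise without changing the buffer's size.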
Ignore server-side delays when enforcing wal_sender_timeout.
commit : 20cd88857b3a60d40cf019872cf8a5d40888e3ae author : Noah Misch <email@example.com> date : Fri, 31 Aug 2018 22:59:58 -0700 committer: Noah Misch <firstname.lastname@example.org> date : Fri, 31 Aug 2018 22:59:58 -0700
Healthy clients of servers having poor I/O performance, such as buildfarm members hamster and tern, saw unexpected timeouts. That disagreed with documentation. This fix adds one gettimeofday() call whenever ProcessRepliesIfAny() finds no client reply messages. Back-patch to 9.4; the bug's symptom is rare and mild, and the code all moved between 9.3 and 9.4. Discussion: https://postgr.es/m/20180826034600.GA1105084@rfd.leadboat.com
Ensure correct minimum consistent point on standbys
commit : d9638a326f722af7e5c7e92995ed0e22ace670a3 author : Michael Paquier <email@example.com> date : Fri, 31 Aug 2018 11:05:59 -0700 committer: Michael Paquier <firstname.lastname@example.org> date : Fri, 31 Aug 2018 11:05:59 -0700
The startup process improved its calculation of the minimum consistent point in 8d68ee6, which ensures that all available WAL gets replayed when doing crash recovery. However, that change introduced an incorrect calculation of the minimum recovery point for non-startup processes, which can cause incorrect page references on a standby when, for example, the background writer flushed a couple of pages to disk but had not yet updated the control file to let a subsequent crash recovery replay to where it should. The only case where this has been reported to be a problem is when a standby needs to calculate the latest removed xid while replaying a btree deletion record, so one would need connections on a standby that happen just after recovery has thought it reached a consistent point. Using a background worker which is started after the consistent point is reached would be the easiest way to run into problems, if it connects to a database. Clients which attempt to connect periodically could also be a problem, but the odds of seeing this are much lower. The fix is pretty simple: give non-startup processes access to the minimum recovery point written in the control file so they use it as a reference, while the startup process still initializes its own reference to the minimum consistent point, so that the original problem of incorrect page references showing up post-promotion after a crash does not reappear. Reported-by: Alexander Kukushkin Diagnosed-by: Alexander Kukushkin Author: Michael Paquier Reviewed-by: Kyotaro Horiguchi, Alexander Kukushkin Discussion: https://email@example.com Backpatch-through: 9.3
Enforce cube dimension limit in all cube construction functions
commit : 7cea5e6ebfb3814d977c62bbe91775e0858acb50 author : Alexander Korotkov <firstname.lastname@example.org> date : Thu, 30 Aug 2018 14:18:53 +0300 committer: Alexander Korotkov <email@example.com> date : Thu, 30 Aug 2018 14:18:53 +0300
contrib/cube has a limit of 100 dimensions for the cube datatype. However, it's not enforced everywhere, and one can actually construct a cube with more than 100 dimensions, then having trouble with dump/restore. This commit adds checks for the dimension limit in all functions responsible for cube construction. Backpatch to all supported versions. Reported-by: Andrew Gierth Discussion: https://postgr.es/m/87va7uybt4.fsf%40news-spur.riddles.org.uk Author: Andrey Borodin with small additions by me Review: Tom Lane Backpatch-through: 9.3
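The centralized check can be sketched as follows (a Python stand-in; the function name and error text are illustrative, not the contrib/cube C code):

```python
CUBE_MAX_DIM = 100   # the documented contrib/cube limit

def cube_check_dims(lower, upper):
    """Validate a candidate cube's corner arrays before construction,
    as the commit does in every construction path."""
    if len(lower) > CUBE_MAX_DIM or len(upper) > CUBE_MAX_DIM:
        raise ValueError(
            "a cube cannot have more than %d dimensions" % CUBE_MAX_DIM)
    if len(lower) != len(upper):
        raise ValueError("corner arrays must be of the same length")
    return list(zip(lower, upper))
```

Putting the check in every construction function, instead of only some, is what closes the dump/restore hole: an over-limit cube can no longer be created in the first place.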
Split contrib/cube platform-depended checks into separate test
commit : 41180978b4039aee5128f0e339d1a139ad2eb68e author : Alexander Korotkov <firstname.lastname@example.org> date : Thu, 30 Aug 2018 14:09:25 +0300 committer: Alexander Korotkov <email@example.com> date : Thu, 30 Aug 2018 14:09:25 +0300
We're currently maintaining two outputs for the cube regression test. But that appears to be unsuitable, because these outputs differ only in a few checks involving scientific notation. So, split the checks involving scientific notation into a separate test, making contrib/cube easier to maintain. Backpatch to all supported versions in order to make further backpatching easier. Discussion: https://postgr.es/m/CAPpHfdvJgWjxHsJTtT%2Bo1tz3OR8EFHcLQjhp-d3%2BUcmJLh-fQA%40mail.gmail.com Author: Alexander Korotkov Backpatch-through: 9.3
Make checksum_impl.h safe to compile with -fstrict-aliasing.
commit : 20f9cd55dd53cd05bbf53cad38d6ad6058bbd732 author : Tom Lane <firstname.lastname@example.org> date : Fri, 31 Aug 2018 12:26:20 -0400 committer: Tom Lane <email@example.com> date : Fri, 31 Aug 2018 12:26:20 -0400
In general, Postgres requires -fno-strict-aliasing with compilers that implement C99 strict aliasing rules. There's little hope of getting rid of that overall. But it seems like it would be a good idea if storage/checksum_impl.h in particular didn't depend on it, because that header is explicitly intended to be included by external programs. We don't have a lot of control over the compiler switches that an external program might use, as shown by Michael Banck's report of failure in a privately-modified version of pg_verify_checksums. Hence, switch to using a union in place of willy-nilly pointer casting inside this file. I think this makes the code a bit more readable anyway. checksum_impl.h hasn't changed since it was introduced in 9.3, so back-patch all the way. Discussion: https://firstname.lastname@example.org
Avoid quadratic slowdown in regexp match/split functions.
commit : 2ba7c4e6c4624fdb6230b060625a494bc93e0b9b author : Andrew Gierth <email@example.com> date : Tue, 28 Aug 2018 09:52:25 +0100 committer: Andrew Gierth <firstname.lastname@example.org> date : Tue, 28 Aug 2018 09:52:25 +0100
regexp_matches, regexp_split_to_table and regexp_split_to_array all work by compiling a list of match positions as character offsets (NOT byte positions) in the source string. Formerly, they then used text_substr to extract the matched text; but in a multi-byte encoding, that counts the characters in the string, and the characters needed to reach the starting byte position, on every call. Accordingly, the performance degraded as the product of the input string length and the number of match positions, such that splitting a string of a few hundred kbytes could take many minutes. Repair by keeping the wide-character copy of the input string available (only in the case where encoding_max_length is not 1) after performing the match operation, and extracting substrings from that instead. This reduces the complexity to being linear in the number of result bytes, discounting the actual regexp match itself (which is not affected by this patch). In passing, remove cleanup using retail pfree() which was obsoleted by commit ff428cded (Feb 2008) which made cleanup of SRF multi-call contexts automatic. Also increase (to ~134 million) the maximum number of matches and provide an error message when it is reached. Backpatch all the way because this has been wrong forever. Analysis and patch by me; review by Kaiting Chen. Discussion: https://email@example.com see also https://firstname.lastname@example.org
Make syslogger more robust against failures in opening CSV log files.
commit : 48bc1a5252c57381249ca57bfdfd4ae4778e9f7d author : Tom Lane <email@example.com> date : Sun, 26 Aug 2018 14:21:55 -0400 committer: Tom Lane <firstname.lastname@example.org> date : Sun, 26 Aug 2018 14:21:55 -0400
The previous coding figured it'd be good enough to postpone opening the first CSV log file until we got a message we needed to write there. This is unsafe, though, because if the open fails we end up in infinite recursion trying to report the failure. Instead make the CSV log file management code look as nearly as possible like the longstanding logic for the stderr log file. In particular, open it immediately at postmaster startup (if enabled), or when we get a SIGHUP in which we find that log_destination has been changed to enable CSV logging. It seems OK to fail if a postmaster-start-time open attempt fails, as we've long done for the stderr log file. But we can't die if we fail to open a CSV log file during SIGHUP, so we're still left with a problem. In that case, write any output meant for the CSV log file to the stderr log file. (This will also cover race-condition cases in which backends send CSV log data before or after we have the CSV log file open.) This patch also fixes an ancient oversight that, if CSV logging was turned off during a SIGHUP, we never actually closed the last CSV log file. In passing, remember to reset whereToSendOutput = DestNone during syslogger start, since (unlike all other postmaster children) it's forked before the postmaster has done that. This made for a platform-dependent difference in error reporting behavior between the syslogger and other children: except on Windows, it'd report problems to the original postmaster stderr as well as the normal error log file(s). It's barely possible that that was intentional at some point; but it doesn't seem likely to be desirable in production, and the platform dependency definitely isn't desirable. Per report from Alexander Kukushkin. It's been like this for a long time, so back-patch to all supported branches. Discussion: https://postgr.es/m/CAFh8B==iLUD_gqC-dAENS0V+kVrCeGiKujtKqSQ7++S-caaChw@mail.gmail.com
doc: add doc link for 'applicable_roles'
commit : fe6131b4a81231526e5841f4ba187ec1d725de7b author : Bruce Momjian <email@example.com> date : Sat, 25 Aug 2018 13:01:24 -0400 committer: Bruce Momjian <firstname.lastname@example.org> date : Sat, 25 Aug 2018 13:01:24 -0400
Reported-by: Ashutosh Sharma Discussion: https://postgr.es/m/CAE9k0PnhnL6MNDLuvkk8USzOa_DpzDzFQPAM_uaGuXbh9HMKYw@mail.gmail.com Author: Ashutosh Sharma Backpatch-through: 9.3
docs: clarify plpython SD and GD dictionary behavior
commit : 3c74f7af075b6efe80416d019bfb3784d4785bf2 author : Bruce Momjian <email@example.com> date : Sat, 25 Aug 2018 11:52:29 -0400 committer: Bruce Momjian <firstname.lastname@example.org> date : Sat, 25 Aug 2018 11:52:29 -0400
Reported-by: Adam Bielański Discussion: https://email@example.com Backpatch-through: 9.3
Reduce an unnecessary O(N^3) loop in lexer.
commit : 6c5ed6836340a801d522f620dceca1469b5bfbbc author : Andrew Gierth <firstname.lastname@example.org> date : Thu, 23 Aug 2018 19:59:38 +0100 committer: Andrew Gierth <email@example.com> date : Thu, 23 Aug 2018 19:59:38 +0100
The lexer's handling of operators contained an O(N^3) hazard when dealing with long strings of + or - characters; it seems hard to prevent this case from being O(N^2), but the additional N multiplier was not needed. Backpatch all the way since this has been there since 7.x, and it presents at least a mild hazard in that trying to do Bind, PREPARE or EXPLAIN on a hostile query could take excessive time (without honouring cancels or timeouts) even if the query was never executed.
Fix set of NLS translation issues
commit : 788ae09f4a9560c344a0c8457ffb66bba42be0a7 author : Michael Paquier <firstname.lastname@example.org> date : Tue, 21 Aug 2018 15:18:24 +0900 committer: Michael Paquier <email@example.com> date : Tue, 21 Aug 2018 15:18:24 +0900
While going over the code, a couple of issues related to string translation have shown up: - Some routines for auto-updatable views return an error string which in some cases was not marked for translation. A comment regarding string translation is added for each routine to help with future features. - GSSAPI authentication missed two translations. Reported-by: Kyotaro Horiguchi Author: Kyotaro Horiguchi Reviewed-by: Michael Paquier, Tom Lane Discussion: https://firstname.lastname@example.org Backpatch-through: 9.3
Ensure schema qualification in pg_restore DISABLE/ENABLE TRIGGER commands.
commit : a4fdcceaba1fa5c70dde920b5a22b3896c55efd1 author : Tom Lane <email@example.com> date : Fri, 17 Aug 2018 17:12:21 -0400 committer: Tom Lane <firstname.lastname@example.org> date : Fri, 17 Aug 2018 17:12:21 -0400
Previously, this code blindly followed the common coding pattern of passing PQserverVersion(AH->connection) as the server-version parameter of fmtQualifiedId. That works as long as we have a connection; but in pg_restore with text output, we don't. Instead we got a zero from PQserverVersion, which fmtQualifiedId interpreted as "server is too old to have schemas", and so the name went unqualified. That still accidentally managed to work in many cases, which is probably why this ancient bug went undetected for so long. It only became obvious in the wake of the changes to force dump/restore to execute with restricted search_path. In HEAD/v11, let's deal with this by ripping out fmtQualifiedId's server- version behavioral dependency, and just making it schema-qualify all the time. We no longer support pg_dump from servers old enough to need the ability to omit schema name, let alone restoring to them. (Also, the few callers outside pg_dump already didn't work with pre-schema servers.) In older branches, that's not an acceptable solution, so instead just tweak the DISABLE/ENABLE TRIGGER logic to ensure it will schema-qualify its output regardless of server version. Per bug #15338 from Oleg somebody. Back-patch to all supported branches. Discussion: https://email@example.com
Set scan direction appropriately for SubPlans (bug #15336)
commit : 3cf3a65cb7da973dd761c6da579cdf7bceb38947 author : Andrew Gierth <firstname.lastname@example.org> date : Fri, 17 Aug 2018 15:04:26 +0100 committer: Andrew Gierth <email@example.com> date : Fri, 17 Aug 2018 15:04:26 +0100
When executing a SubPlan in an expression, the EState's direction field was left alone, resulting in an attempt to execute the subplan backwards if it was encountered during a backwards scan of a cursor. Also, though much less likely, it was possible to reach the execution of an InitPlan while in backwards-scan state. Repair by saving/restoring estate->es_direction and forcing forward scan mode in the relevant places. Backpatch all the way, since this has been broken since 8.3 (prior to commit c7ff7663e, SubPlans had their own EStates rather than sharing the parent plan's, so there was no confusion over scan direction). Per bug #15336 reported by Vladimir Baranoff; analysis and patch by me, review by Tom Lane. Discussion: https://firstname.lastname@example.org
pg_upgrade: issue helpful error message for use on standbys
commit : e626b6b9d318b147f2fd7a97e2117fcb90f6ee77 author : Bruce Momjian <email@example.com> date : Fri, 17 Aug 2018 10:25:48 -0400 committer: Bruce Momjian <firstname.lastname@example.org> date : Fri, 17 Aug 2018 10:25:48 -0400
Commit 777e6ddf1723306bd2bf8fe6f804863f459b0323 checked for a shut down message from a standby and allowed it to continue. This patch reports a helpful error message in these cases, suggesting to use rsync as documented. Diagnosed-by: Martín Marqués Discussion: https://postgr.es/m/CAPdiE1xYCow-reLjrhJ9DqrMu-ppNq0ChUUEvVdxhdjGRD5_eA@mail.gmail.com Backpatch-through: 9.3
Mention ownership requirements for REFRESH MATERIALIZED VIEW in docs
commit : 777192f0dd444d68a95ed9df0fe8c77f50447f53 author : Michael Paquier <email@example.com> date : Fri, 17 Aug 2018 11:29:15 +0900 committer: Michael Paquier <firstname.lastname@example.org> date : Fri, 17 Aug 2018 11:29:15 +0900
Author: Dian Fay Discussion: https://email@example.com Backpatch-through: 9.3
Close the file descriptor in ApplyLogicalMappingFile
commit : ef1ac5b2ad08cf002d8cb7af9719513a0190fd54 author : Tomas Vondra <firstname.lastname@example.org> date : Thu, 16 Aug 2018 16:49:10 +0200 committer: Tomas Vondra <email@example.com> date : Thu, 16 Aug 2018 16:49:10 +0200
The function was forgetting to close the file descriptor, resulting in failures like this: ERROR: 53000: exceeded maxAllocatedDescs (492) while trying to open file "pg_logical/mappings/map-4000-4eb-1_60DE1E08-5376b5-537c6b" LOCATION: OpenTransientFile, fd.c:2161 Simply close the file at the end, and backpatch to 9.4 (where logical decoding was introduced). While at it, fix a nearby typo. Discussion: https://www.postgresql.org/message-id/flat/738a590a-2ce5-9394-2bef-7b1caad89b37%402ndquadrant.com
Make snprintf.c follow the C99 standard for snprintf's result value.
commit : 27c4b0899c0e44259b0ab27ced56490c669e329c author : Tom Lane <firstname.lastname@example.org> date : Wed, 15 Aug 2018 17:25:24 -0400 committer: Tom Lane <email@example.com> date : Wed, 15 Aug 2018 17:25:24 -0400
C99 says that the result should be the number of bytes that would have been emitted given a large enough buffer, not the number we actually were able to put in the buffer. It's time to make our substitute implementation comply with that. Not doing so results in inefficiency in buffer-enlargement cases, and also poses a portability hazard for third-party code that might expect C99-compliant snprintf behavior within Postgres. In passing, remove useless tests for str == NULL; neither C99 nor predecessor standards ever allowed that except when count == 0, so I see no reason to expend cycles on making that a non-crash case for this implementation. Also, don't waste a byte in pg_vfprintf's local I/O buffer; this might have performance benefits by allowing aligned writes during flushbuffer calls. Back-patch of commit 805889d7d. There was some concern about this possibly breaking code that assumes pre-C99 behavior, but there is much more risk (and reality, in our own code) of code that assumes C99 behavior and hence fails to detect buffer overrun without this. Discussion: https://firstname.lastname@example.org
Clean up assorted misuses of snprintf()'s result value.
commit : d371efb39c33f79ad5f6741d76bfae54df21eb55 author : Tom Lane <email@example.com> date : Wed, 15 Aug 2018 16:29:32 -0400 committer: Tom Lane <firstname.lastname@example.org> date : Wed, 15 Aug 2018 16:29:32 -0400
Fix a small number of places that were testing the result of snprintf() but doing so incorrectly. The right test for buffer overrun, per C99, is "result >= bufsize" not "result > bufsize". Some places were also checking for failure with "result == -1", but the standard only says that a negative value is delivered on failure. (Note that this only makes these places correct if snprintf() delivers C99-compliant results. But at least now these places are consistent with all the other places where we assume that.) Also, make psql_start_test() and isolation_start_test() check for buffer overrun while constructing their shell commands. There seems like a higher risk of overrun, with more severe consequences, here than there is for the individual file paths that are made elsewhere in the same functions, so this seemed like a worthwhile change. Also fix guc.c's do_serialize() to initialize errno = 0 before calling vsnprintf. In principle, this should be unnecessary because vsnprintf should have set errno if it returns a failure indication ... but the other two places this coding pattern is cribbed from don't assume that, so let's be consistent. These errors are all very old, so back-patch as appropriate. I think that only the shell command overrun cases are even theoretically reachable in practice, but there's not much point in erroneous error checks. Discussion: https://email@example.com
Fix pg_replication_slot example output
commit : ae1011870a039f72efee6bacb02b7408af4714fc author : Alvaro Herrera <firstname.lastname@example.org> date : Fri, 3 Aug 2018 16:25:25 -0400 committer: Alvaro Herrera <email@example.com> date : Fri, 3 Aug 2018 16:25:25 -0400
The example output of pg_replication_slot is wrong. Correct it and make the output stable by explicitly listing columns to output. Author: Kyotaro Horiguchi <firstname.lastname@example.org> Reviewed-by: Michael Paquier <email@example.com> Discussion: https://firstname.lastname@example.org
pg_upgrade: fix shutdown check for standby servers
commit : a034c6737412f4c276ee707ee1b7e60c8591909e author : Bruce Momjian <email@example.com> date : Tue, 14 Aug 2018 17:19:02 -0400 committer: Bruce Momjian <firstname.lastname@example.org> date : Tue, 14 Aug 2018 17:19:02 -0400
Commit 244142d32afd02e7408a2ef1f249b00393983822 only tested for the pg_controldata output for primary servers, but standby servers have different "Database cluster state" output, so check for that too. Diagnosed-by: Michael Paquier Discussion: https://postgr.es/m/20180810164240.GM13638@paquier.xyz Backpatch-through: 9.3
docs: Only first instance of a PREPARE parameter sets data type
commit : 7bfd381c21c841b9766227f5b65cd62fa8408780 author : Bruce Momjian <email@example.com> date : Thu, 9 Aug 2018 10:13:15 -0400 committer: Bruce Momjian <firstname.lastname@example.org> date : Thu, 9 Aug 2018 10:13:15 -0400
If the first reference to $1 is "($1 = col) or ($1 is null)", the data type can be determined, but not if it is "($1 is null) or ($1 = col)". This change documents that behavior. Reported-by: Morgan Owens Discussion: https://email@example.com Backpatch-through: 9.3
Don't run atexit callbacks in quickdie signal handlers.
commit : d5a9b706ea93a95d9359066488c33aee33a695bc author : Heikki Linnakangas <firstname.lastname@example.org> date : Wed, 8 Aug 2018 19:08:10 +0300 committer: Heikki Linnakangas <email@example.com> date : Wed, 8 Aug 2018 19:08:10 +0300
exit() is not async-signal safe. Even if the libc implementation is, 3rd party libraries might have installed unsafe atexit() callbacks. After receiving SIGQUIT, we really just want to exit as quickly as possible, so we don't really want to run the atexit() callbacks anyway. The original report by Jimmy Yih was a self-deadlock in startup_die(). However, this patch doesn't address that scenario; the signal handling while waiting for the startup packet is more complicated. But at least this alleviates similar problems in the SIGQUIT handlers, like that reported by Asim R P later in the same thread. Backpatch to 9.3 (all supported versions). Discussion: https://www.postgresql.org/message-id/CAOMx_OAuRUHiAuCg2YgicZLzPVv5d9_H4KrL_OFsFP%3DVPekigA%40mail.gmail.com
Don't record FDW user mappings as members of extensions.
commit : 33c5d3bf85d7ae01ee66bb3a4d77abde85c0f8bf author : Tom Lane <firstname.lastname@example.org> date : Tue, 7 Aug 2018 16:32:50 -0400 committer: Tom Lane <email@example.com> date : Tue, 7 Aug 2018 16:32:50 -0400
CreateUserMapping has a recordDependencyOnCurrentExtension call that's been there since extensions were introduced (very possibly my fault). However, there's no support anywhere else for user mappings as members of extensions, nor are they listed as a possible member object type in the documentation. Nor does it really seem like a good idea for user mappings to belong to extensions when roles don't. Hence, remove the bogus call. (As we saw in bug #15310, the lack of any pg_dump support for this case ensures that any such membership record would silently disappear during pg_upgrade. So there's probably no need for us to do anything else about cleaning up after this mistake.) Discussion: https://firstname.lastname@example.org
Fix incorrect initialization of BackendActivityBuffer.
commit : 753051cc721e9123e817b929b257e53a9b97a502 author : Tom Lane <email@example.com> date : Tue, 7 Aug 2018 16:00:44 -0400 committer: Tom Lane <firstname.lastname@example.org> date : Tue, 7 Aug 2018 16:00:44 -0400
Since commit c8e8b5a6e, this has been zeroed out using the wrong length. In practice the length would always be too small, leading to not zeroing the whole buffer rather than clobbering additional memory; and that's pretty harmless, both because shmem would likely start out as zeroes and because we'd reinitialize any given entry before use. Still, it's bogus, so fix it. Reported by Petru-Florin Mihancea (bug #15312) Discussion: https://email@example.com
Fix pg_upgrade to handle event triggers in extensions correctly.
commit : fb4e0e8960e0a58124f6cee1110a9d78fd43674b author : Tom Lane <firstname.lastname@example.org> date : Tue, 7 Aug 2018 15:43:49 -0400 committer: Tom Lane <email@example.com> date : Tue, 7 Aug 2018 15:43:49 -0400
pg_dump with --binary-upgrade must emit ALTER EXTENSION ADD commands for all objects that are members of extensions. It forgot to do so for event triggers, as per bug #15310 from Nick Barnes. Back-patch to 9.3 where event triggers were introduced. Haribabu Kommi Discussion: https://firstname.lastname@example.org
Ensure pg_dump_sort.c sorts null vs non-null namespace consistently.
commit : abd04e0dd8a50d531d6d8e428763a6fc7b16558e author : Tom Lane <email@example.com> date : Tue, 7 Aug 2018 13:13:42 -0400 committer: Tom Lane <firstname.lastname@example.org> date : Tue, 7 Aug 2018 13:13:42 -0400
The original coding here (which is, I believe, my fault) supposed that it didn't need to concern itself with the possibility that one object of a given type-priority has a namespace while another doesn't. But that's not reliably true anymore, if it ever was; and if it does happen then it's possible that DOTypeNameCompare returns self-inconsistent comparison results. That leads to unspecified behavior in qsort() and a resultant weird output order from pg_dump. This should end up being only a cosmetic problem, because any ordering constraints that actually matter should be enforced by the later dependency-based sort. Still, it's a bug, so back-patch. Report and fix by Jacob Champion, though I editorialized on his patch to the extent of making NULL sort after non-NULL, for consistency with our usual sorting definitions. Discussion: https://postgr.es/m/CABAq_6Hw+V-Kj7PNfD5tgOaWT_-qaYkc+SRmJkPLeUjYXLdxwQ@mail.gmail.com