commit : 34af9129e6b0f163e03fac55b7ffa71aa925d4c7 author : Tom Lane <firstname.lastname@example.org> date : Mon, 8 May 2017 17:19:04 -0400 committer: Tom Lane <email@example.com> date : Mon, 8 May 2017 17:19:04 -0400
Further patch rangetypes_selfuncs.c's statistics slot management.
commit : f793effdc763381f61d592c9ec8ee8657167b7b9 author : Tom Lane <firstname.lastname@example.org> date : Mon, 8 May 2017 15:02:58 -0400 committer: Tom Lane <email@example.com> date : Mon, 8 May 2017 15:02:58 -0400
Values in a STATISTIC_KIND_RANGE_LENGTH_HISTOGRAM slot are float8, not of the type of the column the statistics are for. This bug is at least partly the fault of sloppy specification comments for get_attstatsslot()/free_attstatsslot(): the type OID they want is that of the stavalues entries, not of the underlying column. (I double-checked other callers and they seem to get this right.) Adjust the comments to be more correct. Per buildfarm. Security: CVE-2017-7484
Last-minute updates for release notes.
commit : abba57b9af951242054bd9cc5ca84764c18649e6 author : Tom Lane <firstname.lastname@example.org> date : Mon, 8 May 2017 12:57:27 -0400 committer: Tom Lane <email@example.com> date : Mon, 8 May 2017 12:57:27 -0400
Security: CVE-2017-7484, CVE-2017-7485, CVE-2017-7486
Fix possibly-uninitialized variable.
commit : d3f3f95680701fb5f5bd8df603ec57d66b5b3d1b author : Tom Lane <firstname.lastname@example.org> date : Mon, 8 May 2017 11:18:40 -0400 committer: Tom Lane <email@example.com> date : Mon, 8 May 2017 11:18:40 -0400
Oversight in e2d4ef8de et al (my fault not Peter's). Per buildfarm. Security: CVE-2017-7484
Match pg_user_mappings limits to information_schema.user_mapping_options.
commit : b2423f0fa21b38e9a33782dccad028dca903ea3d author : Noah Misch <firstname.lastname@example.org> date : Mon, 8 May 2017 07:24:24 -0700 committer: Noah Misch <email@example.com> date : Mon, 8 May 2017 07:24:24 -0700
Both views replace the umoptions field with NULL when the user does not meet qualifications to see it. They used different qualifications, and pg_user_mappings' documented qualifications did not match its implemented qualifications. Make its documentation and implementation match those of user_mapping_options. One might argue for stronger qualifications, but these have long, documented tenure. pg_user_mappings has always exhibited this problem, so back-patch to 9.2 (all supported versions). Michael Paquier and Feike Steenbergen. Reviewed by Jeff Janes. Reported by Andrew Wheelwright. Security: CVE-2017-7486
Restore PGREQUIRESSL recognition in libpq.
commit : ed36c1fe172aec866d92d6e5071150a0ec901f8b author : Noah Misch <firstname.lastname@example.org> date : Mon, 8 May 2017 07:24:24 -0700 committer: Noah Misch <email@example.com> date : Mon, 8 May 2017 07:24:24 -0700
Commit 65c3bf19fd3e1f6a591618e92eb4c54d0b217564 moved handling of the already-then-deprecated requiressl parameter into conninfo_storeval(). The fallback to the PGREQUIRESSL environment variable was, however, lost in the change, resulting in a potentially silent acceptance of a non-SSL connection even when the variable was set; only its documentation remained. Restore its implementation. Also amend the documentation to mark PGREQUIRESSL as deprecated for those not following the link to requiressl. Back-patch to 9.3, where commit 65c3bf1 first appeared.

Behavior has been more complex when the user provides both deprecated and non-deprecated settings. Before commit 65c3bf1, libpq operated according to the first of these found:

  requiressl=1
  PGREQUIRESSL=1
  sslmode=*
  PGSSLMODE=*

(Note requiressl=0 didn't override sslmode=*; it would only suppress PGREQUIRESSL=1 or a previous requiressl=1. PGREQUIRESSL=0 had no effect whatsoever.) Starting with commit 65c3bf1, libpq ignored PGREQUIRESSL, and the order of precedence changed to:

  last of requiressl=* or sslmode=*
  PGSSLMODE=*

Starting now, adopt the following order of precedence:

  last of requiressl=* or sslmode=*
  PGSSLMODE=*
  PGREQUIRESSL=1

This retains the 65c3bf1 behavior for connection strings that contain both requiressl=* and sslmode=*. It retains the 65c3bf1 change that either connection string option overrides both environment variables. For the first time, PGSSLMODE has precedence over PGREQUIRESSL; this avoids reducing the security of "PGREQUIRESSL=1 PGSSLMODE=verify-full" configurations originating under v9.3 and later. Daniel Gustafsson Security: CVE-2017-7485
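The post-fix precedence rules above can be condensed into a small model. This is an illustrative sketch only — the function name, the (key, value) conninfo encoding, and the mapping of requiressl=0 to libpq's default "prefer" mode are assumptions, not libpq's actual code:

```python
def effective_sslmode(conninfo, env):
    """Model the new precedence: last of requiressl=*/sslmode=* in the
    connection string, then PGSSLMODE, then PGREQUIRESSL=1.

    conninfo: ordered list of (key, value) pairs from the connection string.
    env: dict of environment variables.
    """
    # 1. The last of requiressl=* or sslmode=* in the connection string wins.
    mode = None
    for key, value in conninfo:
        if key == "sslmode":
            mode = value
        elif key == "requiressl":
            # requiressl=1 acts like sslmode=require; the =0 mapping to the
            # default "prefer" is a simplifying assumption of this sketch.
            mode = "require" if value == "1" else "prefer"
    if mode is not None:
        return mode
    # 2. Either connection-string option would have overridden both
    #    environment variables; among them, PGSSLMODE wins.
    if "PGSSLMODE" in env:
        return env["PGSSLMODE"]
    # 3. PGREQUIRESSL=1 is consulted last, so it cannot weaken a stronger
    #    PGSSLMODE such as verify-full.
    if env.get("PGREQUIRESSL") == "1":
        return "require"
    return "prefer"  # libpq's usual default
```

The "PGREQUIRESSL=1 PGSSLMODE=verify-full" case shows why PGSSLMODE must take precedence: the stronger verify-full setting survives.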
Translation updates
commit : 3cc52ed02ff31890b1dfc3281a4e01f7e008acd1 author : Peter Eisentraut <firstname.lastname@example.org> date : Mon, 8 May 2017 10:15:23 -0400 committer: Peter Eisentraut <email@example.com> date : Mon, 8 May 2017 10:15:23 -0400
Source-Git-URL: git://git.postgresql.org/git/pgtranslation/messages.git Source-Git-Hash: f7b5a456ece6a8ce7003bb339b5e1fcc265523b5
Add security checks to selectivity estimation functions
commit : 3e5ea1f9b21c94acce01d7a5bf5bfcf36e670b0a author : Peter Eisentraut <firstname.lastname@example.org> date : Fri, 5 May 2017 12:18:48 -0400 committer: Peter Eisentraut <email@example.com> date : Fri, 5 May 2017 12:18:48 -0400
Some selectivity estimation functions run user-supplied operators over data obtained from pg_statistic without security checks, which allows those operators to leak pg_statistic data without having privileges on the underlying tables. Fix by checking that one of the following is satisfied: (1) the user has table or column privileges on the table underlying the pg_statistic data, or (2) the function implementing the user-supplied operator is leak-proof. If neither is satisfied, planning will proceed as if there are no statistics available. At least one of these is satisfied in most cases in practice. The only situations that are negatively impacted are user-defined or not-leak-proof operators on a security-barrier view. Reported-by: Robert Haas <firstname.lastname@example.org> Author: Peter Eisentraut <email@example.com> Author: Tom Lane <firstname.lastname@example.org> Security: CVE-2017-7484
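The check described above reduces to a compact decision rule. This is a hypothetical sketch (names invented; the real checks live in the C selectivity estimators and examine ACLs and pg_proc.proleakproof):

```python
def statistics_available(has_table_priv, has_column_priv, operator_is_leakproof):
    """Consult pg_statistic only if the user could read the underlying
    column anyway, or the operator provably cannot leak the values it is
    given. When this returns False, planning proceeds as if no
    statistics exist for the column."""
    return has_table_priv or has_column_priv or operator_is_leakproof
```

The only losing case is the last one: a non-leakproof (e.g. user-defined) operator applied by a user without privileges on the underlying table, as with a security-barrier view.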
Release notes for 9.6.3, 9.5.7, 9.4.12, 9.3.17, 9.2.21.
commit : a6b6bb64094cdf40488bc1ac64bc2c294e7383b5 author : Tom Lane <email@example.com> date : Sun, 7 May 2017 16:56:03 -0400 committer: Tom Lane <firstname.lastname@example.org> date : Sun, 7 May 2017 16:56:03 -0400
Guard against null t->tm_zone in strftime.c.
commit : e829385f5633c38d8a7baa3de31df537da1fe93a author : Tom Lane <email@example.com> date : Sun, 7 May 2017 12:33:12 -0400 committer: Tom Lane <firstname.lastname@example.org> date : Sun, 7 May 2017 12:33:12 -0400
The upstream IANA code does not guard against null TM_ZONE pointers in this function, but in our code there is such a check in the other pre-existing use of t->tm_zone. We do have some places that set pg_tm.tm_zone to NULL. I'm not entirely sure it's possible to reach strftime with such a value, but I'm not sure it isn't either, so be safe. Per Coverity complaint.
Install the "posixrules" timezone link in MSVC builds.
commit : 62a2883129ab8e474ae342b7f554cf8a28eea198 author : Tom Lane <email@example.com> date : Sun, 7 May 2017 11:57:41 -0400 committer: Tom Lane <firstname.lastname@example.org> date : Sun, 7 May 2017 11:57:41 -0400
Somehow, we'd missed ever doing this. The consequences aren't too severe: basically, the timezone library would fall back on its hardwired notion of the DST transition dates to use for a POSIX-style zone name, rather than obeying US/Eastern which is the intended behavior. The net effect would only be to obey current US DST law further back than it ought to apply; so it's not real surprising that nobody noticed. David Rowley, per report from Amit Kapila Discussion: https://postgr.es/m/CAA4eK1LC7CaNhRAQ__C3ht1JVrPzaAXXhEJRnR5L6bfYHiLmWw@mail.gmail.com
Restore fullname contents before falling through in pg_open_tzfile().
commit : 6eedc6c18a90ef9412a7722695edb15a3a6a09bf author : Tom Lane <email@example.com> date : Sun, 7 May 2017 11:34:31 -0400 committer: Tom Lane <firstname.lastname@example.org> date : Sun, 7 May 2017 11:34:31 -0400
Fix oversight in commit af2c5aa88: if the shortcut open() doesn't work, we need to reset fullname to be just the name of the toplevel tzdata directory before we fall through into the pre-existing code. This failed to be exposed in my (tgl's) testing because the fall-through path is actually never taken under normal circumstances. David Rowley, per report from Amit Kapila Discussion: https://postgr.es/m/CAA4eK1LC7CaNhRAQ__C3ht1JVrPzaAXXhEJRnR5L6bfYHiLmWw@mail.gmail.com
Allow queries submitted by postgres_fdw to be canceled.
commit : f14bf0a8fdd5a9fb2c5be692e0c9003185b88fa3 author : Robert Haas <email@example.com> date : Sat, 6 May 2017 22:19:56 -0400 committer: Robert Haas <firstname.lastname@example.org> date : Sat, 6 May 2017 22:19:56 -0400
Back-patch of commits f039eaac7131ef2a4cf63a10cf98486f8bcd09d2 and 1b812afb0eafe125b820cc3b95e7ca03821aa675, which arranged (in 9.6+) to make remote queries interruptible. It was known at the time that the same problem existed in the back-branches, but I did not back-patch for lack of a user complaint. Michael Paquier and Etsuro Fujita, adjusted for older branches by me. Per gripe from Suraj Kharage. This doesn't directly address Suraj's gripe, but since the patch that will do so builds on top of this work, it seems best to back-patch this part first. Discussion: http://postgr.es/m/CAF1DzPU8Kx+fMXEbFoP289xtm3bz3t+ZfxhmKavr98Bh-C0TqQ@mail.gmail.com
commit : 8c681454dca0cf6a4dc8b48ca900851c046c4592 author : Tom Lane <email@example.com> date : Sat, 6 May 2017 14:19:47 -0400 committer: Tom Lane <firstname.lastname@example.org> date : Sat, 6 May 2017 14:19:47 -0400
This system function has been there a very long time, but somehow escaped being listed in func.sgml. Fabien Coelho and Tom Lane Discussion: https://postgr.es/m/alpine.DEB.2.20.1705061027580.3896@lancre
Allow MSVC to build with Tcl 8.6.
commit : 41ba2ca080fe98ac4dc599f1e88328b030af8ace author : Alvaro Herrera <email@example.com> date : Fri, 5 May 2017 12:05:34 -0300 committer: Alvaro Herrera <firstname.lastname@example.org> date : Fri, 5 May 2017 12:05:34 -0300
Commit eaba54c20c5 added support for Tcl 8.6 for configure-supported platforms after verifying that pltcl works without further changes, but the MSVC tooling wasn't updated accordingly. Update MSVC to match, restructuring the code to avoid duplicating the logic for every Tcl version supported. Backpatch to all live branches, like eaba54c20c5. In 9.4 and previous, change the patch to use backslashes rather than forward slashes, as in the rest of the file. Reported by Paresh More, who also tested the patch I provided. Discussion: https://postgr.es/m/CAAgiCNGVw3ssBtSi3ZNstrz5k00ax=UV+_ZEHUeW_LMSGL2sew@mail.gmail.com
Give nicer error message when connecting to a v10 server requiring SCRAM.
commit : 96d0f988b150aa0b52b44b8c1adbc7ef59262a1a author : Heikki Linnakangas <email@example.com> date : Fri, 5 May 2017 11:24:02 +0300 committer: Heikki Linnakangas <firstname.lastname@example.org> date : Fri, 5 May 2017 11:24:02 +0300
This is just to give the user a hint that they need to upgrade, if they try to connect to a v10 server that uses SCRAM authentication, with an older client. Commit to all stable branches, but not master. Discussion: https://email@example.com
Fix cursor_to_xml in tableforest false mode
commit : 12dd58d64657ef596c3e2e3181ff2220f771efbc author : Peter Eisentraut <firstname.lastname@example.org> date : Wed, 3 May 2017 21:25:01 -0400 committer: Peter Eisentraut <email@example.com> date : Wed, 3 May 2017 21:25:01 -0400
It only produced <row> elements but no wrapping <table> element. By contrast, cursor_to_xmlschema produced a schema that is now correct but did not previously match the XML data produced by cursor_to_xml. In passing, also fix a minor misunderstanding about moving cursors in the tests related to this. Reported-by: firstname.lastname@example.org Based-on-patch-by: Thomas Munro <email@example.com>
Remove useless and rather expensive stanza in matview regression test.
commit : fcdccb78e56a1828851c6506050560b753173a1d author : Tom Lane <firstname.lastname@example.org> date : Wed, 3 May 2017 19:37:01 -0400 committer: Tom Lane <email@example.com> date : Wed, 3 May 2017 19:37:01 -0400
This removes a test case added by commit b69ec7cc9, which was intended to exercise a corner case involving the rule used at that time that materialized views were unpopulated iff they had physical size zero. We got rid of that rule very shortly later, in commit 1d6c72a55, but kept the test case. However, because the case now asks what VACUUM will do to a zero-sized physical file, it would be pretty surprising if the answer were ever anything but "nothing" ... and if things were indeed that broken, surely we'd find it out from other tests. Since the test involves a table that's fairly large by regression-test standards (100K rows), it's quite slow to run. Dropping it should save some buildfarm cycles, so let's do that. Discussion: https://firstname.lastname@example.org
Improve performance of timezone loading, especially pg_timezone_names view.
commit : 5557b6af5f5b6ffe3a132604f58c2ed2eb3fbc73 author : Tom Lane <email@example.com> date : Tue, 2 May 2017 21:50:35 -0400 committer: Tom Lane <firstname.lastname@example.org> date : Tue, 2 May 2017 21:50:35 -0400
tzparse() would attempt to load the "posixrules" timezone database file on each call. That might seem like it would only be an issue when selecting a POSIX-style zone name rather than a zone defined in the timezone database, but it turns out that each zone definition file contains a POSIX-style zone string and tzload() will call tzparse() to parse that. Thus, when scanning the whole timezone file tree as we do in the pg_timezone_names view, "posixrules" was read repetitively for each zone definition file. Fix that by caching the file on first use within any given process. (We cache other zone definitions for the life of the process, so there seems little reason not to cache this one as well.) This probably won't help much in processes that never run pg_timezone_names, but even one additional SET of the timezone GUC would come out ahead. An even worse problem for pg_timezone_names is that pg_open_tzfile() has an inefficient way of identifying the canonical case of a zone name: it basically re-descends the directory tree to the zone file. That's not awful for an individual "SET timezone" operation, but it's pretty horrid when we're inspecting every zone in the database. And it's pointless too because we already know the canonical spelling, having just read it from the filesystem. Fix by teaching pg_open_tzfile() to avoid the directory search if it's not asked for the canonical name, and backfilling the proper result in pg_tzenumerate_next(). In combination these changes seem to make the pg_timezone_names view about 3x faster to read, for me. Since a scan of pg_timezone_names has up to now been one of the slowest queries in the regression tests, this should help some little bit for buildfarm cycle times. Back-patch to all supported branches, not so much because it's likely that users will care much about the view's performance as because tracking changes in the upstream IANA timezone code is really painful if we don't keep all the branches in sync. Discussion: https://email@example.com
Ensure commands in extension scripts see the results of preceding DDL.
commit : c6b3d07061b8becc139b7ce854bda3a675e0ee2a author : Tom Lane <firstname.lastname@example.org> date : Tue, 2 May 2017 18:05:54 -0400 committer: Tom Lane <email@example.com> date : Tue, 2 May 2017 18:05:54 -0400
Due to a missing CommandCounterIncrement() call, parsing of a non-utility command in an extension script would not see the effects of the immediately preceding DDL command, unless that command's execution ends with CommandCounterIncrement() internally ... which some do but many don't. Report by Philippe Beaudoin, diagnosis by Julien Rouhaud. Rather remarkably, this bug has evaded detection since extensions were invented, so back-patch to all supported branches. Discussion: https://firstname.lastname@example.org
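A toy model shows why the missing increment matters: within a transaction, a row created by command N is visible only to commands whose command ID is greater than N. All names here are illustrative, not PostgreSQL internals:

```python
class Transaction:
    """Minimal model of command-ID (CID) visibility inside one transaction."""

    def __init__(self):
        self.command_id = 0
        self.rows = []  # (creating_cid, description)

    def execute_ddl(self, description):
        # The DDL's effects are stamped with the current command ID.
        self.rows.append((self.command_id, description))

    def command_counter_increment(self):
        # Without this call, the next command's snapshot is unchanged.
        self.command_id += 1

    def visible_rows(self):
        # The current command sees only rows created by earlier commands.
        return [d for cid, d in self.rows if cid < self.command_id]
```

Before the increment, the preceding DDL is invisible to the next command's parser — the bug's symptom; after it, the effects appear.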
Fix perl thinko in commit fed6df486dca
commit : b2ed1c8a467c2f8f91323a13696a900f31a3239d author : Andrew Dunstan <email@example.com> date : Tue, 2 May 2017 08:20:11 -0400 committer: Andrew Dunstan <firstname.lastname@example.org> date : Tue, 2 May 2017 08:20:11 -0400
Report and fix from Vaishnavi Prabakaran Backpatch to 9.4 like original.
Update time zone data files to tzdata release 2017b.
commit : 1c88623465d849c90fa529dc661b0b16730cbe91 author : Tom Lane <email@example.com> date : Mon, 1 May 2017 11:52:59 -0400 committer: Tom Lane <firstname.lastname@example.org> date : Mon, 1 May 2017 11:52:59 -0400
DST law changes in Chile, Haiti, and Mongolia. Historical corrections for Ecuador, Kazakhstan, Liberia, and Spain. The IANA crew continue their campaign to replace invented time zone abbreviations with numeric GMT offsets. This update changes numerous zones in South America, the Pacific and Indian oceans, and some Asian and Middle Eastern zones. I kept these abbreviations in the tznames/ data files, however, so that we will still accept them for input. (We may want to start trimming those files someday, but I think we should wait for the upstream dust to settle before deciding what to do.) In passing, add MESZ (Mitteleuropaeische Sommerzeit) to the tznames lists; since we accept MEZ (Mitteleuropaeische Zeit) it seems rather strange not to take the other one. And fix some incorrect, or at least obsolete, comments that certain abbreviations are not traceable to the IANA data.
Allow vcregress.pl to run an arbitrary TAP test set
commit : 3c1e14af8692d48548adac9b8a0cbabaa8a92b78 author : Andrew Dunstan <email@example.com> date : Mon, 1 May 2017 10:12:02 -0400 committer: Andrew Dunstan <firstname.lastname@example.org> date : Mon, 1 May 2017 10:12:02 -0400
Currently, provision is made only for running the bin checks in a single step. Now these tests can be run individually, as well as tests in other locations (e.g. src/test/recovery). Also provide for suppressing unnecessary temp installs by setting the NO_TEMP_INSTALL environment variable, just as the Makefiles do. Backpatch to 9.4.
Sync our copy of the timezone library with IANA release tzcode2017b.
commit : 96cad6f24e48adcb9eb0ba0eea4c6bc9b9ed84ff author : Tom Lane <email@example.com> date : Sun, 30 Apr 2017 15:13:51 -0400 committer: Tom Lane <firstname.lastname@example.org> date : Sun, 30 Apr 2017 15:13:51 -0400
zic no longer mishandles some transitions in January 2038 when it attempts to work around Qt bug 53071. This fixes a bug affecting Pacific/Tongatapu that was introduced in zic 2016e. localtime.c now contains a workaround, useful when loading a file generated by a buggy zic. There are assorted cosmetic changes as well, notably relocation of a bunch of #defines.
Fix VALIDATE CONSTRAINT to consider NO INHERIT attribute.
commit : 93a07a68eed76d36519ff17eb9bedc376f38e8c5 author : Robert Haas <email@example.com> date : Fri, 28 Apr 2017 14:48:38 -0400 committer: Robert Haas <firstname.lastname@example.org> date : Fri, 28 Apr 2017 14:48:38 -0400
Currently, trying to validate a NO INHERIT constraint on the parent will search for the constraint in child tables (where it is not supposed to exist), wrongly causing a "constraint does not exist" error. Amit Langote, per a report from Hans Buschmann. Discussion: http://email@example.com
Don't use on-disk snapshots for exported logical decoding snapshot.
commit : b6ecf26ccc2b3e3895979c8fa08d21e8175b81cc author : Andres Freund <firstname.lastname@example.org> date : Thu, 27 Apr 2017 15:28:24 -0700 committer: Andres Freund <email@example.com> date : Thu, 27 Apr 2017 15:28:24 -0700
Logical decoding stores historical snapshots on disk, so that logical decoding can restart without having to reconstruct a snapshot from scratch (for which the resources are not guaranteed to be present anymore). These serialized snapshots were also used when creating a new slot via the walsender interface, which can export a "full" snapshot (i.e. one that can read all tables, not just catalog ones). The problem is that the serialized snapshots are only useful for catalogs and not for normal user tables. Thus the use of such a serialized snapshot could result in an inconsistent snapshot being exported, which could lead to queries returning wrong data. This would only happen if logical slots are created while another logical slot already exists. Author: Petr Jelinek Reviewed-By: Andres Freund Discussion: https://firstname.lastname@example.org Backport: 9.4, where logical decoding was introduced.
Preserve required !catalog tuples while computing initial decoding snapshot.
commit : 5da64613875d387f6d3f60a383ec20189b84ff54 author : Andres Freund <email@example.com> date : Sun, 23 Apr 2017 20:41:29 -0700 committer: Andres Freund <firstname.lastname@example.org> date : Sun, 23 Apr 2017 20:41:29 -0700
The logical decoding machinery already preserved all the required catalog tuples, which is sufficient in the course of normal logical decoding, but did not guarantee that non-catalog tuples were preserved during computation of the initial snapshot when creating a slot over the replication protocol. This could cause a corrupted initial snapshot to be exported. The time window for issues is usually not terribly large, but on a busy server it's perfectly possible to hit it. Ongoing decoding is not affected by this bug. To avoid increased overhead for the SQL API, only retain additional tuples when a logical slot is being created over the replication protocol. To do so this commit changes the signature of CreateInitDecodingContext(), but it seems unlikely that it's being used in an extension, so that's probably ok. In a drive-by fix, fix handling of ReplicationSlotsComputeRequiredXmin's already_locked argument, which should only apply to ProcArrayLock, not ReplicationSlotControlLock. Reported-By: Erik Rijkers Analyzed-By: Petr Jelinek Author: Petr Jelinek, heavily editorialized by Andres Freund Reviewed-By: Andres Freund Discussion: https://email@example.com Backport: 9.4, where logical decoding was introduced.
Fix postmaster's handling of fork failure for a bgworker process.
commit : 436b560b86ca688ebbebf28f0709ab45eadfa3ce author : Tom Lane <firstname.lastname@example.org> date : Mon, 24 Apr 2017 12:16:58 -0400 committer: Tom Lane <email@example.com> date : Mon, 24 Apr 2017 12:16:58 -0400
This corner case didn't behave nicely at all: the postmaster would (partially) update its state as though the process had started successfully, and be quite confused thereafter. Fix it to act like the worker had crashed, instead. In passing, refactor so that do_start_bgworker contains all the state-change logic for bgworker launch, rather than just some of it. Back-patch as far as 9.4. 9.3 contains similar logic, but it's just enough different that I don't feel comfortable applying the patch without more study; and the use of bgworkers in 9.3 was so small that it doesn't seem worth the extra work. transam/parallel.c is still entirely unprepared for the possibility of bgworker startup failure, but that seems like material for a separate patch. Discussion: https://firstname.lastname@example.org
Fix order of arguments to SubTransSetParent().
commit : 2e14541c4992e5a2de6b5be2f7ed3f3062ed4ec1 author : Tom Lane <email@example.com> date : Sun, 23 Apr 2017 13:10:57 -0400 committer: Tom Lane <firstname.lastname@example.org> date : Sun, 23 Apr 2017 13:10:57 -0400
ProcessTwoPhaseBuffer (formerly StandbyRecoverPreparedTransactions) mixed up the parent and child XIDs when calling SubTransSetParent to record the transactions' relationship in pg_subtrans. Remarkably, analysis by Simon Riggs suggests that this doesn't lead to visible problems (at least, not in non-Assert builds). That might explain why we'd not noticed it before. Nonetheless, it's surely wrong. This code was born broken, so back-patch to all supported branches. Discussion: https://email@example.com
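A toy model of pg_subtrans makes the argument-order hazard concrete. The dict-based sketch below is illustrative, not the real SLRU code:

```python
def subtrans_set_parent(subtrans, child_xid, parent_xid):
    """Record that child_xid's parent is parent_xid.
    Calling this with the arguments swapped records the relationship
    backwards -- exactly the bug being fixed."""
    subtrans[child_xid] = parent_xid

def topmost_parent(subtrans, xid):
    """Follow parent links up to the top-level transaction."""
    while xid in subtrans:
        xid = subtrans[xid]
    return xid
```

With the correct order, looking up the child finds the parent; with the arguments swapped, the chain points the wrong way and the child appears parentless.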
doc: Update link
commit : 55cdda91c65b5d2c82672b327857cac3e0e6e07d author : Peter Eisentraut <firstname.lastname@example.org> date : Fri, 21 Apr 2017 19:42:01 -0400 committer: Peter Eisentraut <email@example.com> date : Fri, 21 Apr 2017 19:42:01 -0400
The reference "That is the topic of the next section." has been incorrect since the materialized views documentation got inserted between the sections "rules-views" and "rules-update". Author: Zertrin <firstname.lastname@example.org>
Avoid depending on non-POSIX behavior of fcntl(2).
commit : ce521e50eb569847a353b5b4af0d1778c9f748ef author : Tom Lane <email@example.com> date : Fri, 21 Apr 2017 15:55:56 -0400 committer: Tom Lane <firstname.lastname@example.org> date : Fri, 21 Apr 2017 15:55:56 -0400
The POSIX standard does not say that the success return value for fcntl(F_SETFD) and fcntl(F_SETFL) is zero; it says only that it's not -1. We had several calls that were making the stronger assumption. Adjust them to test specifically for -1 for strict spec compliance. The standard further leaves open the possibility that the O_NONBLOCK flag bit is not the only active one in F_SETFL's argument. Formally, therefore, one ought to get the current flags with F_GETFL and store them back with only the O_NONBLOCK bit changed when trying to change the nonblock state. In port/noblock.c, we were doing the full pushup in pg_set_block but not in pg_set_noblock, which is just weird. Make both of them do it properly, since they have little business making any assumptions about the socket they're handed. The other places where we're issuing F_SETFL are working with FDs we just got from pipe(2), so it's reasonable to assume the FDs' properties are all default, so I didn't bother adding F_GETFL steps there. Also, while pg_set_block deserves some points for trying to do things right, somebody had decided that it'd be even better to cast fcntl's third argument to "long". Which is completely loony, because POSIX clearly says the third argument for an F_SETFL call is "int". Given the lack of field complaints, these missteps apparently are not of significance on any common platforms. But they're still wrong, so back-patch to all supported branches. Discussion: https://email@example.com
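The F_GETFL-then-F_SETFL pattern the commit prescribes looks like this in Python, whose fcntl module wraps the same calls (Python raises OSError on failure rather than returning -1, so the success-value caveat doesn't arise here):

```python
import fcntl
import os

def set_nonblocking(fd, nonblock=True):
    """Change only the O_NONBLOCK bit, preserving the descriptor's other
    status flags: fetch the current flags with F_GETFL and store back a
    modified copy with F_SETFL, rather than assuming O_NONBLOCK is the
    only active bit."""
    flags = fcntl.fcntl(fd, fcntl.F_GETFL)
    if nonblock:
        flags |= os.O_NONBLOCK
    else:
        flags &= ~os.O_NONBLOCK
    fcntl.fcntl(fd, fcntl.F_SETFL, flags)
```

This is the "full pushup" the commit makes both pg_set_block and pg_set_noblock perform; blindly storing a bare O_NONBLOCK (or 0) could clobber other status flags the caller set.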
Support OpenSSL 1.1.0 in 9.4 branch.
commit : bb132cddf870885a6e3af102fe2accd04e5da38a author : Tom Lane <firstname.lastname@example.org> date : Sat, 15 Apr 2017 20:16:03 -0400 committer: Tom Lane <email@example.com> date : Sat, 15 Apr 2017 20:16:03 -0400
This commit back-patches the equivalent of the 9.5-branch commits e2838c580 and 48e5ba61e, so that we can work with OpenSSL 1.1.0 in 9.4. (Going further back would be a good thing but will take more work; meanwhile let's see what the buildfarm makes of this.) Original patches by Andreas Karlsson and Heikki Linnakangas, back-patching work by Andreas Karlsson. Patch: https://firstname.lastname@example.org Discussion: https://email@example.com
Provide a way to control SysV shmem attach address in EXEC_BACKEND builds.
commit : 07a990c6e7d151244199f443753f7e15df32e010 author : Tom Lane <firstname.lastname@example.org> date : Sat, 15 Apr 2017 17:27:38 -0400 committer: Tom Lane <email@example.com> date : Sat, 15 Apr 2017 17:27:38 -0400
In standard non-Windows builds, there's no particular reason to care what address the kernel chooses to map the shared memory segment at. However, when building with EXEC_BACKEND, there's a risk that the chosen address won't be available in all child processes. Linux with ASLR enabled (which it is by default) seems particularly at risk because it puts shmem segments into the same area where it maps shared libraries. We can work around that by specifying a mapping address that's outside the range where shared libraries could get mapped. On x86_64 Linux, 0x7e0000000000 seems to work well. This is only meant for testing/debugging purposes, so it doesn't seem necessary to go as far as providing a GUC (or any user-visible documentation, though we might change that later). Instead, it's just controlled by setting an environment variable PG_SHMEM_ADDR to the desired attach address. Back-patch to all supported branches, since the point here is to remove intermittent buildfarm failures on EXEC_BACKEND animals. Owners of affected animals will need to add a suitable setting of PG_SHMEM_ADDR to their build_env configuration. Discussion: https://firstname.lastname@example.org
Further fix pg_trgm's extraction of trigrams from regular expressions.
commit : e0eda580d23da747e89ec8355d6094eb4557abfe author : Tom Lane <email@example.com> date : Fri, 14 Apr 2017 14:52:03 -0400 committer: Tom Lane <firstname.lastname@example.org> date : Fri, 14 Apr 2017 14:52:03 -0400
Commit 9e43e8714 turns out to have been insufficient: not only is it necessary to track tentative parent links while considering a set of arc removals, but it's necessary to track tentative flag additions as well. This is because we always merge arc target states into arc source states; therefore, when considering a merge of the final state with some other, it is the other state that will acquire a new TSTATE_FIN bit. If there's another arc for the same color trigram that would cause merging of that state with the initial state, we failed to recognize the problem. The test cases for the prior commit evidently only exercised situations where a tentative merge with the initial state occurs before one with the final state. If it goes the other way around, we'll happily merge the initial and final states, either producing a broken final graph that would never match anything, or triggering the Assert added by the prior commit. It's tempting to consider switching the merge direction when the merge involves the final state, but I lack the time to analyze that idea in detail. Instead just keep track of the flag changes that would result from proposed merges, in the same way that the prior commit tracked proposed parent links. Along the way, add some more debugging support, because I'm not entirely confident that this is the last bug here. And tweak matters so that the transformed.dot file uses small integers rather than pointer values to identify states; that makes it more readable if you're just eyeballing it rather than fooling with Graphviz. And rename a couple of identically named struct fields to reduce confusion. Per report from Corey Csuhta. Add a test case based on his example. (Note: this case does not trigger the bug under 9.3, apparently because its different measurement of costs causes it to stop merging states before it hits the failure. I spent some time trying to find a variant that would fail in 9.3, without success; but I'm sure such cases exist.) Like the previous patch, back-patch to 9.3 where this code was added. Report: https://postgr.es/m/E2B01A4B-4530-406B-8D17-2F67CF9A16BA@csuhta.com
Fix regexport.c to behave sanely with lookaround constraints.
commit : b179684c77bb16640cdea6fd3d7a1e333829334e author : Tom Lane <email@example.com> date : Thu, 13 Apr 2017 17:18:35 -0400 committer: Tom Lane <firstname.lastname@example.org> date : Thu, 13 Apr 2017 17:18:35 -0400
regexport.c thought it could just ignore LACON arcs, but the correct behavior is to treat them as satisfiable while consuming zero input (rather reminiscently of commit 9f1e642d5). Otherwise, the emitted simplified-NFA representation may contain no paths leading from initial to final state, which unsurprisingly confuses pg_trgm, as seen in bug #14623 from Jeff Janes. Since regexport's output representation has no concept of an arc that consumes zero input, recurse internally to find the next normal arc(s) after any LACON transitions. We'd be forced into changing that representation if a LACON could be the last arc reaching the final state, but fortunately the regex library never builds NFAs with such a configuration, so there always is a next normal arc. Back-patch to 9.3 where this logic was introduced. Discussion: https://email@example.com
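The "recurse past zero-input arcs" idea can be sketched on a toy graph. The encoding below (a dict of (kind, symbol, target) arc lists) is invented for illustration, not regexport.c's actual representation:

```python
def outgoing_arcs(nfa, state):
    """Yield the (symbol, target) arcs leaving state, expanding any
    constraint ("LACON") arc -- satisfiable while consuming no input --
    into the normal arcs reachable beyond it. The regex library never
    makes a LACON the last arc reaching the final state, so there is
    always a next normal arc and the recursion terminates."""
    for kind, symbol, target in nfa[state]:
        if kind == "lacon":
            # Treat like an epsilon transition: skip ahead to the normal
            # arcs on the far side, instead of dropping the path entirely.
            yield from outgoing_arcs(nfa, target)
        else:
            yield (symbol, target)
```

Ignoring the LACON arc outright (the old behavior) would lose the path through state 1 below, which is how the simplified NFA ended up with no initial-to-final path at all.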
Improve castNode notation by introducing list-extraction-specific variants.
commit : 89a41a1b6b7184700394c7ea13471084d05f3cc1 author : Tom Lane <firstname.lastname@example.org> date : Mon, 10 Apr 2017 13:51:29 -0400 committer: Tom Lane <email@example.com> date : Mon, 10 Apr 2017 13:51:29 -0400
This extends the castNode() notation introduced by commit 5bcab1114 to provide, in one step, extraction of a list cell's pointer and coercion to a concrete node type. For example, "lfirst_node(Foo, lc)" is the same as "castNode(Foo, lfirst(lc))". Almost half of the uses of castNode that have appeared so far include a list extraction call, so this is pretty widely useful, and it saves a few more keystrokes compared to the old way. As with the previous patch, back-patch the addition of these macros to pg_list.h, so that the notation will be available when back-patching. Patch by me, after an idea of Andrew Gierth's. Discussion: https://firstname.lastname@example.org
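The new macro is just sugar over the existing pieces. A self-contained sketch, using simplified stand-ins for the Node and List machinery (the real definitions live in nodes/nodes.h and nodes/pg_list.h, and castNode uses a helper so its argument is evaluated only once):

```c
#include <assert.h>

/* Simplified stand-ins; not the actual PostgreSQL definitions. */
typedef enum NodeTag { T_Invalid, T_Foo } NodeTag;
typedef struct Foo { NodeTag type; int value; } Foo;
typedef struct ListCell { void *ptr_value; } ListCell;

#define lfirst(lc)  ((lc)->ptr_value)

/* castNode: runtime-check the node tag, then cast. */
#define castNode(_type_, nodeptr) \
    (assert(((_type_ *) (nodeptr))->type == T_##_type_), (_type_ *) (nodeptr))

/* The addition: extract the list cell's pointer and cast in one step. */
#define lfirst_node(_type_, lc)  castNode(_type_, lfirst(lc))
```

With this, `lfirst_node(Foo, lc)` replaces `castNode(Foo, lfirst(lc))` at call sites.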
Silence compiler warning in sepgsql
commit : 7e71081426a4e14c50fa5d8ad046f915ddf22c42 author : Joe Conway <email@example.com> date : Thu, 6 Apr 2017 14:21:47 -0700 committer: Joe Conway <firstname.lastname@example.org> date : Thu, 6 Apr 2017 14:21:47 -0700
<selinux/label.h> includes <stdbool.h>, which creates an incompatible definition of bool. We don't care if <stdbool.h> redefines "true"/"false"; those are close enough. Complaint and initial patch by Mike Palmiotto. Final approach per Tom Lane's suggestion, as discussed on hackers. Backpatching to all supported branches. Discussion: https://postgr.es/m/flat/623bcaae-112e-ced0-8c22-a84f75ae0c53%40joeconway.com
Remove dead code and fix comments in fast-path function handling.
commit : 88101abe70cb81c773cd3102eee6a79ac8f77bfb author : Heikki Linnakangas <email@example.com> date : Thu, 6 Apr 2017 09:09:39 +0300 committer: Heikki Linnakangas <firstname.lastname@example.org> date : Thu, 6 Apr 2017 09:09:39 +0300
HandleFunctionRequest() is no longer responsible for reading the protocol message from the client, since commit 2b3a8b20c2. Fix the outdated comments. HandleFunctionRequest() now always returns 0, because the code that used to return EOF was moved in 2b3a8b20c2. Therefore, the caller no longer needs to check the return value. Reported by Andres Freund. Backpatch to all supported versions, even though this doesn't have any user-visible effect, to make backporting future patches in this area easier. Discussion: https://email@example.com
Fix integer-overflow problems in interval comparison.
commit : 8851bcf8813baa0ea393ef9d2894d15b3f13f957 author : Tom Lane <firstname.lastname@example.org> date : Wed, 5 Apr 2017 23:51:28 -0400 committer: Tom Lane <email@example.com> date : Wed, 5 Apr 2017 23:51:28 -0400
When using integer timestamps, the interval-comparison functions tried to compute the overall magnitude of an interval as an int64 number of microseconds. As reported by Frazer McLean, this overflows for intervals exceeding about 296000 years, which is bad since we nominally allow intervals many times larger than that. That results in wrong comparison results, and possibly in corrupted btree indexes for columns containing such large interval values. To fix, compute the magnitude as int128 instead. Although some compilers have native support for int128 calculations, many don't, so create our own support functions that can do 128-bit addition and multiplication if the compiler support isn't there. These support functions are designed with an eye to allowing the int128 code paths in numeric.c to be rewritten for use on all platforms, although this patch doesn't do that, or even provide all the int128 primitives that will be needed for it. Back-patch as far as 9.4. Earlier releases did not guard against overflow of interval values at all (commit 146604ec4 fixed that), so it seems not very exciting to worry about overly-large intervals for them. Before 9.6, we did not assume that unreferenced "static inline" functions would not draw compiler warnings, so omit functions not directly referenced by timestamp.c, the only present consumer of int128.h. (We could have omitted these functions in HEAD too, but since they were written and debugged on the way to the present patch, and they look likely to be needed by numeric.c, let's keep them in HEAD.) I did not bother to try to prevent such warnings in a --disable-integer-datetimes build, though. Before 9.5, configure will never define HAVE_INT128, so the part of int128.h that exploits a native int128 implementation is dead code in the 9.4 branch. I didn't bother to remove it, thinking that keeping the file looking similar in different branches is more useful. In HEAD only, add a simple test harness for int128.h in src/tools/. 
In back branches, this does not change the float-timestamps code path. That's not subject to the same kind of overflow risk, since it computes the interval magnitude as float8. (No doubt, when this code was originally written, overflow was disregarded for exactly that reason.) There is a precision hazard instead :-(, but we'll avert our eyes from that question, since no complaints have been reported and that code's deprecated anyway. Kyotaro Horiguchi and Tom Lane Discussion: https://postgr.es/m/1490104629.422698.918452336.26FA96B7@webmail.messagingengine.com
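The emulated code path can be sketched as follows (a simplified stand-in for the non-native branch of int128.h; the real header provides more operations and uses PostgreSQL's int64/uint64 typedefs):

```c
#include <stdint.h>

/* 128-bit accumulator: signed high half, unsigned low half. */
typedef struct
{
    int64_t  hi;
    uint64_t lo;
} INT128;

/*
 * Add a signed 64-bit value into x.  The 128-bit sign extension of y
 * contributes -1 to the high half when y is negative; an unsigned
 * wraparound in the low half contributes a carry of +1.
 */
static void
int128_add_int64(INT128 *x, int64_t y)
{
    uint64_t oldlo = x->lo;

    x->lo = oldlo + (uint64_t) y;
    x->hi += (y < 0 ? -1 : 0) + (x->lo < oldlo ? 1 : 0);
}

/* Comparison: high halves first, then low halves. */
static int
int128_compare(INT128 a, INT128 b)
{
    if (a.hi != b.hi)
        return (a.hi < b.hi) ? -1 : 1;
    if (a.lo != b.lo)
        return (a.lo < b.lo) ? -1 : 1;
    return 0;
}
```

An interval magnitude of many times 296000 years overflows int64 but fits comfortably in this representation, so comparisons stay correct.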
Back-patch checkpoint clarification docs and pg_basebackup updates
commit : bd34e7f19b404d025d5b36bac42bcfecf2c09bc5 author : Magnus Hagander <firstname.lastname@example.org> date : Sat, 1 Apr 2017 17:20:05 +0200 committer: Magnus Hagander <email@example.com> date : Sat, 1 Apr 2017 17:20:05 +0200
This backpatches 51e26c9 and 7220c7b, including both documentation updates clarifying the checkpoints at the beginning of base backups and the messages in verbose and progress mode of pg_basebackup. Author: Michael Banck Discussion: https://postgr.es/m/21444.1488142764%40sss.pgh.pa.us
Simplify the example of VACUUM in documentation.
commit : cb366b507995cf8fd17c538a9b4dd35204e80de6 author : Fujii Masao <firstname.lastname@example.org> date : Fri, 31 Mar 2017 01:31:15 +0900 committer: Fujii Masao <email@example.com> date : Fri, 31 Mar 2017 01:31:15 +0900
Previously a detailed activity report by VACUUM VERBOSE ANALYZE was described as an example of VACUUM in the docs. But it had been obsolete for a long time. For example, commit feb4f44d296b88b7f0723f4a4f3945a371276e0b updated the content of that activity report in 2003, but we had forgotten to update the example. So basically we need to update the example. But since no one cared about the details of VACUUM output or complained about that mistake for such a long time, per discussion on hackers, we decided to get rid of the detailed activity report from the example and simplify it. Back-patch to all supported versions. Reported by Masahiko Sawada, patch by me. Discussion: https://postgr.es/m/CAD21AoAGA2pB3p-CWmTkxBsbkZS1bcDGBLcYVcvcDxspG_XAfA@mail.gmail.com
Fix unportable disregard of alignment requirements in RADIUS code.
commit : 55c642c92b974362ece5ddf100fa2fa90b3442c9 author : Tom Lane <firstname.lastname@example.org> date : Sun, 26 Mar 2017 17:35:35 -0400 committer: Tom Lane <email@example.com> date : Sun, 26 Mar 2017 17:35:35 -0400
The compiler is entitled to store a char local variable with no particular alignment requirement. Our RADIUS code cavalierly took such a local variable and cast its address to a struct type that does have alignment requirements. On an alignment-picky machine this would lead to bus errors. To fix, declare the local variable honestly, and then cast its address to char * for use in the I/O calls. Given the lack of field complaints, there must be very few if any people affected; but nonetheless this is a clear portability issue, so back-patch to all supported branches. Noted while looking at a Coverity complaint in the same code.
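The shape of the fix, sketched with a hypothetical struct (the real one is the RADIUS packet header in the authentication code):

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical packet header with alignment requirements. */
typedef struct radius_packet
{
    uint8_t  code;
    uint8_t  id;
    uint16_t length;            /* needs 2-byte alignment */
} radius_packet;

/*
 * BROKEN on alignment-picky machines:
 *
 *     char    buf[64];
 *     radius_packet *pkt = (radius_packet *) buf;   // buf may be misaligned
 *
 * Correct: declare the variable with its honest type so the compiler
 * aligns it, and cast to char * only at the I/O boundary -- that direction
 * of cast is always safe.
 */
static size_t
build_packet(char *out, size_t outlen)
{
    radius_packet pkt;          /* properly aligned by the compiler */

    pkt.code = 1;
    pkt.id = 7;
    pkt.length = (uint16_t) sizeof(pkt);
    if (outlen < sizeof(pkt))
        return 0;
    memcpy(out, (const char *) &pkt, sizeof(pkt));  /* stand-in for the I/O call */
    return sizeof(pkt);
}
```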
Revert Windows service check refactoring, and replace with a different fix.
commit : 6423ed7d4eaa5f84e508ad6f03afb8039895f562 author : Heikki Linnakangas <firstname.lastname@example.org> date : Fri, 24 Mar 2017 12:39:01 +0200 committer: Heikki Linnakangas <email@example.com> date : Fri, 24 Mar 2017 12:39:01 +0200
This reverts commit 38bdba54a64bacec78e3266f0848b0b4a824132a, "Fix and simplify check for whether we're running as Windows service". It turns out that older versions of MinGW - like that on buildfarm member narwhal - do not support the CheckTokenMembership() function. This replaces the refactoring with a much smaller fix, to add a check for SE_GROUP_ENABLED to pgwin32_is_service(). Only apply to back-branches, and keep the refactoring in HEAD. It's unlikely that anyone is still really using such an old version of MinGW - aside from narwhal - but let's not change the minimum requirements in minor releases. Discussion: https://firstname.lastname@example.org Patch: https://www.postgresql.org/message-id/CAB7nPqSvfu%3DKpJ%3DNX%2BYAHmgAmQdzA7N5h31BjzXeMgczhGCC%2BQ%40mail.gmail.com
doc: Fix a few typos and awkward links
commit : 03ca8c249953e0051193183a5f987d64fcef4aa5 author : Peter Eisentraut <email@example.com> date : Sat, 18 Mar 2017 23:44:30 -0400 committer: Peter Eisentraut <firstname.lastname@example.org> date : Sat, 18 Mar 2017 23:44:30 -0400
Remove dead link.
commit : 7871d2a9cacf67ab1fd6704eb6be5f6bbdc39372 author : Robert Haas <email@example.com> date : Fri, 17 Mar 2017 09:32:34 -0400 committer: Robert Haas <firstname.lastname@example.org> date : Fri, 17 Mar 2017 09:32:34 -0400
David Christensen Discussion: http://postgr.es/m/82299377-1480-4439-9ABA-5828D71AA22E@endpoint.com
Fix and simplify check for whether we’re running as Windows service.
commit : 6b584c36a40ca68bd7f1943eb34ce1507b16bb2c author : Heikki Linnakangas <email@example.com> date : Fri, 17 Mar 2017 11:14:01 +0200 committer: Heikki Linnakangas <firstname.lastname@example.org> date : Fri, 17 Mar 2017 11:14:01 +0200
If the process token contains SECURITY_SERVICE_RID, but it has been disabled by the SE_GROUP_USE_FOR_DENY_ONLY attribute, win32_is_service() would incorrectly report that we're running as a service. That situation arises, e.g. if postmaster is launched with a restricted security token, with the "Log in as Service" privilege explicitly removed. Replace the broken code with CheckTokenMembership(), which does this correctly. Also replace similar code in win32_is_admin(), even though it got this right, for simplicity and consistency. Per bug #13755, reported by Breen Hagan. Back-patch to all supported versions. Patch by Takayuki Tsunakawa, reviewed by Michael Paquier. Discussion: https://www.postgresql.org/message-id/20151104062315.2745.67143%40wrigleys.postgresql.org
Avoid having vacuum set reltuples to 0 on non-empty relations in the presence of page pins, which leads to serious estimation errors in the planner. This particularly affects small heavily-accessed tables, especially where locking (e.g. from FK constraints) forces frequent vacuums for mxid cleanup.
commit : 269efd052922489cf91fd0bc5a80c2008f553b49 author : Andrew Gierth <email@example.com> date : Thu, 16 Mar 2017 22:32:56 +0000 committer: Andrew Gierth <firstname.lastname@example.org> date : Thu, 16 Mar 2017 22:32:56 +0000
Fix by keeping separate track of pages whose live tuples were actually counted vs. pages that were only scanned for freezing purposes. Thus, reltuples can only be set to 0 if all pages of the relation were actually counted. Backpatch to all supported versions. Per bug #14057 from Nicolas Baccelli, analyzed by me. Discussion: https://email@example.com
commit : e0c8bd704c63baccc03e22112d411f7edb5f80c7 author : Peter Eisentraut <firstname.lastname@example.org> date : Tue, 14 Mar 2017 12:57:10 -0400 committer: Peter Eisentraut <email@example.com> date : Tue, 14 Mar 2017 12:57:10 -0400
From: Josh Soref <firstname.lastname@example.org>
Fix failure to mark init buffers as BM_PERMANENT.
commit : bbd5e600ff5cd23bdbb63687748d34fa02690600 author : Robert Haas <email@example.com> date : Tue, 14 Mar 2017 11:51:11 -0400 committer: Robert Haas <firstname.lastname@example.org> date : Tue, 14 Mar 2017 11:51:11 -0400
This could result in corruption of the init fork of an unlogged index if the ambuildempty routine for that index used shared buffers to create the init fork, which was true for gin, gist, and hash indexes. Patch by me, based on an earlier patch by Michael Paquier, who also reviewed this one. This also incorporates an idea from Artur Zakirov. Discussion: http://postgr.es/m/CACYUyc8yccE4xfxhqxfh_Mh38j7dRFuxfaK1p6dSNAEUakxUyQ@mail.gmail.com
Remove unnecessary dependency on statement_timeout in prepared_xacts test.
commit : 123f377a6602fce63c6d327b5f2304b78c8a1a94 author : Tom Lane <email@example.com> date : Mon, 13 Mar 2017 16:46:32 -0400 committer: Tom Lane <firstname.lastname@example.org> date : Mon, 13 Mar 2017 16:46:32 -0400
Rather than waiting around for statement_timeout to expire, we can just try to take the table's lock in nowait mode. This saves some fraction under 4 seconds when running this test with prepared xacts available, and it guards against timeout-expired-anyway failures on very slow machines when prepared xacts are not available, as seen in a recent failure on axolotl for instance. This approach could fail if autovacuum were to take an exclusive lock on the test table concurrently, but there's no reason for it to do so. Since the main point here is to improve stability in the buildfarm, back-patch to all supported branches.
Ecpg should support COMMIT PREPARED and ROLLBACK PREPARED.
commit : e060baaad1ed6c2e1cc9758d8ba3c770c8f27595 author : Michael Meskes <email@example.com> date : Mon, 13 Mar 2017 20:44:13 +0100 committer: Michael Meskes <firstname.lastname@example.org> date : Mon, 13 Mar 2017 20:44:13 +0100
The problem was that "begin transaction" was issued automatically before executing COMMIT PREPARED or ROLLBACK PREPARED when not in auto-commit mode. Fix by Masahiko Sawada.
Fix pg_file_write() error handling.
commit : 4b2669ada6cb03c08eba21013321c8b2b412eaa1 author : Noah Misch <email@example.com> date : Sun, 12 Mar 2017 19:35:31 -0400 committer: Noah Misch <firstname.lastname@example.org> date : Sun, 12 Mar 2017 19:35:31 -0400
Detect fclose() failures; given "ln -s /dev/full $PGDATA/devfull", "pg_file_write('devfull', 'x', true)" now fails as it should. Don't leak a stream when fwrite() fails. Remove a born-ineffective test that aimed to skip zero-length writes. Back-patch to 9.2 (all supported versions).
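The corrected control flow can be sketched like this (error reporting simplified; the real pg_file_write reports via ereport, and the function name here is illustrative):

```c
#include <stdio.h>

/*
 * Both fwrite() and fclose() are checked -- fclose() can fail at flush
 * time, e.g. when writing to /dev/full -- and the stream is never leaked
 * on the fwrite failure path.
 */
static long
write_file(const char *path, const char *data, size_t len)
{
    FILE   *f = fopen(path, "wb");
    size_t  written;

    if (f == NULL)
        return -1;
    written = fwrite(data, 1, len, f);
    if (written != len)
    {
        fclose(f);              /* don't leak the stream */
        return -1;
    }
    if (fclose(f) != 0)         /* deferred write errors surface here */
        return -1;
    return (long) written;
}
```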
Fix ancient connection leak in dblink
commit : 166dfb3a903eee25637b846a87a4a63c67d5e796 author : Joe Conway <email@example.com> date : Sat, 11 Mar 2017 13:33:14 -0800 committer: Joe Conway <firstname.lastname@example.org> date : Sat, 11 Mar 2017 13:33:14 -0800
When using unnamed connections with dblink, every time a new connection is made, the old one is leaked. Fix that. This has been an issue probably since dblink was first committed. Someone complained almost ten years ago, but apparently I decided not to pursue it at the time, and neither did anyone else, so it slipped between the cracks. Now that someone else has complained, fix in all supported branches. Discussion: (orig) https://postgr.es/m/flat/F680AB59-6D6F-4026-9599-1BE28880273D%40decibel.org#F680AB59-6D6F-4026-9599-1BE28880273D@decibel.org Discussion: (new) https://postgr.es/m/flat/0A3221C70F24FB45833433255569204D1F6ADF8C@G01JPEXMBYT05 Reported by: Jim Nasby and Takayuki Tsunakawa
Sanitize newlines in object names in “pg_restore -l” output.
commit : 64d132c2997ec5972827bd0ae186ca967492ce51 author : Tom Lane <email@example.com> date : Fri, 10 Mar 2017 14:15:09 -0500 committer: Tom Lane <firstname.lastname@example.org> date : Fri, 10 Mar 2017 14:15:09 -0500
Commits 89e0bac86 et al replaced newlines with spaces in object names printed in SQL comments, but we neglected to consider that the same names are also printed by "pg_restore -l", and a newline would render the output unparseable by "pg_restore -L". Apply the same replacement in "-l" output. Since "pg_restore -L" doesn't actually examine any object names, only the dump ID field that starts each line, this is enough to fix things for its purposes. The previous fix was treated as a security issue, and we might have done that here as well, except that the issue was reported publicly to start with. Anyway it's hard to see how this could be exploited for SQL injection; "pg_restore -L" doesn't do much with the file except parse it for leading integers. Per bug #14587 from Milos Urbanek. Back-patch to all supported versions. Discussion: https://email@example.com
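The sanitization itself is simple; a sketch (the function name is illustrative, and pg_dump applies this in its archive TOC output code):

```c
/*
 * Replace newline characters in an object name with spaces, so each TOC
 * line emitted by "pg_restore -l" stays parseable as a single line.
 */
static void
sanitize_name(char *s)
{
    for (; *s; s++)
    {
        if (*s == '\n' || *s == '\r')
            *s = ' ';
    }
}
```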
Fix a potential double-free in ecpg.
commit : f6b9065993e0d5e64dec28aff3a090674934ae52 author : Michael Meskes <firstname.lastname@example.org> date : Fri, 10 Mar 2017 10:32:41 +0100 committer: Michael Meskes <email@example.com> date : Fri, 10 Mar 2017 10:32:41 +0100
Fix timestamptz regression test to still work with latest IANA zone data.
commit : e573bc3f9a0e3d455ed774b5527896a39a2932cf author : Tom Lane <firstname.lastname@example.org> date : Thu, 9 Mar 2017 17:20:11 -0500 committer: Tom Lane <email@example.com> date : Thu, 9 Mar 2017 17:20:11 -0500
The IANA timezone crew continues to chip away at their project of removing timezone abbreviations that have no real-world currency from their database. The tzdata2017a update removes all such abbreviations for South American zones, as well as much of the Pacific. This breaks some test cases in timestamptz.sql that were expecting America/Santiago and America/Caracas to have non-numeric abbreviations. The test cases involving America/Santiago seem to have selected that zone more or less at random, so just replace it with America/New_York, which is of similar longitude. The cases involving America/Caracas are harder since they were chosen to test a time-varying zone abbreviation around a point where it changed meaning in the backwards direction. Fortunately, Europe/Moscow has a similar case in 2014, and the MSK/MSD abbreviations are well enough attested that IANA seems unlikely to decide to remove them from the database in future. With these changes, this regression test should pass when using any IANA zone database from 2015 or later. One could wish that there were a few years more daylight on how out-of-date your zone database can be ... but really the --with-system-tzdata option is only meant for use on platforms where the zone database is kept up-to-date pretty faithfully, so I do not think this is a big objection. Discussion: https://firstname.lastname@example.org
Use doubly-linked block lists in aset.c to reduce large-chunk overhead.
commit : 8dd5c4171fb2c3b78e713c49f531ad8ff3b245ea author : Tom Lane <email@example.com> date : Wed, 8 Mar 2017 12:21:12 -0500 committer: Tom Lane <firstname.lastname@example.org> date : Wed, 8 Mar 2017 12:21:12 -0500
Large chunks (those too large for any palloc freelist) are managed as separate blocks. Formerly, realloc'ing or pfree'ing such a chunk required O(N) time in a context with N blocks, since we had to traipse down the singly-linked block list to locate the block's predecessor before we could fix the list links. This can result in O(N^2) runtime in situations where large numbers of such chunks are manipulated within one context. Cases like that were not foreseen in the original design of aset.c, and indeed didn't arise until fairly recently. But such problems can now occur in reorderbuffer.c and in hash joining, both of which make repeated large requests without scaling up their request size as they do so, and which will free their requests in not-necessarily-LIFO order. To fix, change the block list from singly-linked to doubly-linked. This adds another 4 or 8 bytes to ALLOC_BLOCKHDRSZ, but that doesn't seem like unacceptable overhead, since aset.c's blocks are normally 8K or more, and never less than 1K in current practice. In passing, get rid of some redundant AllocChunkGetPointer() calls in AllocSetRealloc (the compiler might be smart enough to optimize these away anyway, but no need to assume that) and improve AllocSetCheck's checking of block header fields. Back-patch to 9.4 where reorderbuffer.c appeared. We could take this further back, but currently there's no evidence that it would be useful. Discussion: https://postgr.es/m/CAMkU=1x1hvue1XYrZoWk_omG0Ja5nBvTdvgrOeVkkeqs71CV8g@mail.gmail.com
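A minimal sketch of why the extra back-link pays off (field and type names are illustrative, not the actual AllocBlockData layout). With only a next pointer, unlinking required walking from the head to find the predecessor, O(N); with prev links it is O(1):

```c
#include <stddef.h>

typedef struct Block
{
    struct Block *prev;         /* the pointer this patch adds */
    struct Block *next;
} Block;

typedef struct Context
{
    Block *head;
} Context;

static void
unlink_block(Context *ctx, Block *b)
{
    /* fix the neighbors directly; no list traversal needed */
    if (b->prev)
        b->prev->next = b->next;
    else
        ctx->head = b->next;    /* b was the first block */
    if (b->next)
        b->next->prev = b->prev;
}
```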
pg_xlogdump: Remove extra newline in error message
commit : 9420ea88f95de86114f70460e7c86638330e7155 author : Peter Eisentraut <email@example.com> date : Wed, 8 Mar 2017 09:57:17 -0500 committer: Peter Eisentraut <firstname.lastname@example.org> date : Wed, 8 Mar 2017 09:57:17 -0500
fatal_error() already prints out a trailing newline.
Repair incorrect pg_dump labeling for some comments and security labels.
commit : db9b4b716a4f10eb04066861975d89363c8b89b4 author : Tom Lane <email@example.com> date : Mon, 6 Mar 2017 19:33:59 -0500 committer: Tom Lane <firstname.lastname@example.org> date : Mon, 6 Mar 2017 19:33:59 -0500
We attached no schema label to comments for procedural languages, casts, transforms, operator classes, operator families, or text search objects. The first three categories of objects don't really have schemas, but pg_dump treats them as if they do, and it seems like the TocEntry fields for their comments had better match the TocEntry fields for the parent objects. (As an example of a possible hazard, the type names in a CAST will be formatted with the assumption of a particular search_path, so failing to ensure that this same path is active for the COMMENT ON command could lead to an error or to attaching the comment to the wrong cast.) In the last six cases, this was a flat-out error --- possibly mine to begin with, but it was a long time ago. The security label for a procedural language was likewise not correctly labeled as to schema, and both the comment and security label for a procedural language were not correctly labeled as to owner. In simple cases the restore would accidentally work correctly anyway, since these comments and security labels would normally get emitted right after the owning object, and so the search path and active user would be correct anyhow. But it could fail in corner cases; for example a schema-selective restore would omit comments it should include. Giuseppe Broccolo noted the oversight, and proposed the correct fix, for text search dictionary objects; I found the rest by cross-checking other dumpComment() calls. These oversights are ancient, so back-patch all the way. Discussion: https://postgr.es/m/CAFzmHiWwwzLjzwM4x5ki5s_PDMR6NrkipZkjNnO3B0xEpBgJaA@mail.gmail.com
pg_upgrade: Fix large object COMMENTS, SECURITY LABELS
commit : 93598898c8d203ef296f4f5870c5f948e9fda416 author : Stephen Frost <email@example.com> date : Mon, 6 Mar 2017 17:04:22 -0500 committer: Stephen Frost <firstname.lastname@example.org> date : Mon, 6 Mar 2017 17:04:22 -0500
When performing a pg_upgrade, we copy the files behind pg_largeobject and pg_largeobject_metadata, allowing us to avoid having to dump out and reload the actual data for large objects and their ACLs. Unfortunately, that isn't all of the information which can be associated with large objects. Currently, we also support COMMENTs and SECURITY LABELs with large objects and these were being silently dropped during a pg_upgrade as pg_dump would skip everything having to do with a large object and pg_upgrade only copied the tables mentioned to the new cluster. As the file copies happen after the catalog dump and reload, we can't simply include the COMMENTs and SECURITY LABELs in pg_dump's binary-mode output but we also have to include the actual large object definition as well. With the definition, comments, and security labels in the pg_dump output and the file copies performed by pg_upgrade, all of the data and metadata associated with large objects is able to be successfully pulled forward across a pg_upgrade. In 9.6 and master, we can simply adjust the dump bitmask to indicate which components we don't want. In 9.5 and earlier, we have to put explicit checks in dumpBlob() and dumpBlobs() to not include the ACL or the data when in binary-upgrade mode. Adjustments were made to the privileges regression test to allow another test (large_object.sql) to be added, which explicitly leaves a large object with a comment in place to provide coverage of that case with pg_upgrade. Back-patch to all supported branches. Discussion: https://postgr.es/m/20170221162655.GE9812@tamriel.snowman.net
Fix incorrect variable datatype
commit : 71b33680883a359bb75fdb40d64b11d284b73d5c author : Magnus Hagander <email@example.com> date : Tue, 28 Feb 2017 12:16:42 +0100 committer: Magnus Hagander <firstname.lastname@example.org> date : Tue, 28 Feb 2017 12:16:42 +0100
Both datatypes map to the same underlying one, which is why it still worked, but we should use the correct type. Author: Kyotaro HORIGUCHI
Add /config.cache to .gitignore in back branches
commit : 1f0d78863eb88ca8ee9c3b5215d532900e722e57 author : Bruce Momjian <email@example.com> date : Sat, 25 Feb 2017 13:04:22 -0500 committer: Bruce Momjian <firstname.lastname@example.org> date : Sat, 25 Feb 2017 13:04:22 -0500
For some reason config.cache was not being git-ignored in these back branches. Backpatch-through: 9.2 to 9.4
pg_upgrade docs: clarify instructions on standby extensions
commit : ddcfeba95e2cdb90b1f37ac5cae97124373fa076 author : Bruce Momjian <email@example.com> date : Sat, 25 Feb 2017 12:59:23 -0500 committer: Bruce Momjian <firstname.lastname@example.org> date : Sat, 25 Feb 2017 12:59:23 -0500
Previously the pg_upgrade standby upgrade instructions said not to execute pgcrypto.sql, but it should have referenced the extension command "CREATE EXTENSION pgcrypto". This patch makes that doc change. Reported-by: a private bug report Backpatch-through: 9.4, where standby instructions were added
Fix contrib/pg_trgm’s extraction of trigrams from regular expressions.
commit : 98755681a2bde0d4970836b517c7f2ca8c996cfc author : Tom Lane <email@example.com> date : Wed, 22 Feb 2017 15:04:07 -0500 committer: Tom Lane <firstname.lastname@example.org> date : Wed, 22 Feb 2017 15:04:07 -0500
The logic for removing excess trigrams from the result was faulty. It intends to avoid merging the initial and final states of the NFA, which is necessary, but in testing whether removal of a specific trigram would cause that, it failed to consider the combined effects of all the state merges that that trigram's removal would cause. This could result in a broken final graph that would never match anything, leading to GIN or GiST indexscans not finding anything. To fix, add a "tentParent" field that is used only within this loop, and set it to show state merges that we are tentatively going to do. While examining a particular arc, we must chase up through tentParent links as well as regular parent links (the former can only appear atop the latter), and we must account for state init/fin flag merges that haven't actually been done yet. To simplify the latter, combine the separate init and fin bool fields into a bitmap flags field. I also chose to get rid of the "children" state list, which seems entirely inessential. Per bug #14563 from Alexey Isayko, which the added test cases are based on. Back-patch to 9.3 where this code was added. Report: https://email@example.com Discussion: https://firstname.lastname@example.org
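The canonical-state lookup can be sketched like this (struct layout simplified from trgm_regexp.c): committed merges are recorded in parent, merges we are only tentatively planning in tentParent, and since tentParent links can only appear atop parent links, we chase all parents first, then all tentParents:

```c
#include <stddef.h>

typedef struct TrgmState
{
    struct TrgmState *parent;       /* committed merge target, or NULL */
    struct TrgmState *tentParent;   /* tentative merge target, or NULL */
} TrgmState;

static TrgmState *
resolve_state(TrgmState *s)
{
    while (s->parent)
        s = s->parent;
    while (s->tentParent)
        s = s->tentParent;
    return s;
}
```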
Make walsender always initialize the buffers.
commit : a3eb715a36bdd28d809d3dd859f169489352882b author : Fujii Masao <email@example.com> date : Wed, 22 Feb 2017 03:11:58 +0900 committer: Fujii Masao <firstname.lastname@example.org> date : Wed, 22 Feb 2017 03:11:58 +0900
Walsender uses local buffers for each outgoing and incoming message. Previously, when creating a replication slot, walsender forgot to initialize one of them, which could cause a segmentation fault. To fix this issue, this commit changes walsender so that it always initializes them before executing the requested replication command. Back-patch to 9.4 where replication slots were introduced. Problem report and initial patch by Stas Kelvich, modified by me. Report: https://www.postgresql.org/message-id/A1E9CB90-1FAC-4CAD-8DBA-9AA62A6E97C5@postgrespro.ru
Fix sloppy handling of corner-case errors in fd.c.
commit : d9959e6ebb0d42dd385bf06d131468bb221e8b9f author : Tom Lane <email@example.com> date : Tue, 21 Feb 2017 17:51:28 -0500 committer: Tom Lane <firstname.lastname@example.org> date : Tue, 21 Feb 2017 17:51:28 -0500
Several places in fd.c had badly-thought-through handling of error returns from lseek() and close(). The fact that those would seldom fail on valid FDs is probably the reason we've not noticed this up to now; but if they did fail, we'd get quite confused. LruDelete and LruInsert actually just Assert'd that lseek never fails, which is pretty awful on its face. In LruDelete, we indeed can't throw an error, because that's likely to get called during error abort and so throwing an error would probably just lead to an infinite loop. But by the same token, throwing an error from the close() right after that was ill-advised, not to mention that it would've left the LRU state corrupted since we'd already unlinked the VFD from the list. I also noticed that really, most of the time, we should know the current seek position and it shouldn't be necessary to do an lseek here at all. As patched, if we don't have a seek position and an lseek attempt doesn't give us one, we'll close the file but then subsequent re-open attempts will fail (except in the somewhat-unlikely case that a FileSeek(SEEK_SET) call comes between and allows us to re-establish a known target seek position). This isn't great but it won't result in any state corruption. Meanwhile, having an Assert instead of an honest test in LruInsert is really dangerous: if that lseek failed, a subsequent read or write would read or write from the start of the file, not where the caller expected, leading to data corruption. In both LruDelete and FileClose, if close() fails, just LOG that and mark the VFD closed anyway. Possibly leaking an FD is preferable to getting into an infinite loop or corrupting the VFD list. Besides, as far as I can tell from the POSIX spec, it's unspecified whether or not the file has been closed, so treating it as still open could be the wrong thing anyhow. I also fixed a number of other places that were being sloppy about behaving correctly when the seekPos is unknown. 
Also, I changed FileSeek to return -1 with EINVAL for the cases where it detects a bad offset, rather than throwing a hard elog(ERROR). It seemed pretty inconsistent that some bad-offset cases would get a failure return while others got elog(ERROR). It was missing an offset validity check for the SEEK_CUR case on a closed file, too. Back-patch to all supported branches, since all this code is fundamentally identical in all of them. Discussion: https://email@example.com
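The changed error convention can be sketched as follows (a simplified stand-in; the real FileSeek also manages the VFD cache and performs kernel seeks):

```c
#include <errno.h>

#define FileUnknownPos (-1L)

/*
 * Compute the new seek position, returning -1 with errno = EINVAL for bad
 * offsets instead of throwing a hard error.  SEEK_CUR against an unknown
 * current position is likewise rejected.
 */
static long
file_seek_checked(long curPos, int whence, long offset)
{
    long newPos;

    switch (whence)
    {
        case 0:                 /* SEEK_SET */
            newPos = offset;
            break;
        case 1:                 /* SEEK_CUR */
            if (curPos == FileUnknownPos)
            {
                errno = EINVAL;
                return -1;
            }
            newPos = curPos + offset;
            break;
        default:
            errno = EINVAL;
            return -1;
    }
    if (newPos < 0)
    {
        errno = EINVAL;
        return -1;
    }
    return newPos;
}
```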
doc: Update URL for plr
commit : cabfec988191bcaf49c9aefc777ac8b0ba35e6ec author : Peter Eisentraut <firstname.lastname@example.org> date : Tue, 21 Feb 2017 12:35:57 -0500 committer: Peter Eisentraut <email@example.com> date : Tue, 21 Feb 2017 12:35:57 -0500
Fix documentation of to_char/to_timestamp TZ, tz, OF formatting patterns.
commit : 8f2799c993316a3dc436e231630b670a1ebf1fbb author : Tom Lane <firstname.lastname@example.org> date : Mon, 20 Feb 2017 10:05:01 -0500 committer: Tom Lane <email@example.com> date : Mon, 20 Feb 2017 10:05:01 -0500
These are only supported in to_char, not in the other direction, but the documentation failed to mention that. Also, describe TZ/tz as printing the time zone "abbreviation", not "name", because what they print is elsewhere referred to that way. Per bug #14558.
Make src/interfaces/libpq/test clean up after itself.
commit : 045462960f7c30608ebd7085697f1402d1e1500c author : Tom Lane <firstname.lastname@example.org> date : Sun, 19 Feb 2017 17:18:10 -0500 committer: Tom Lane <email@example.com> date : Sun, 19 Feb 2017 17:18:10 -0500
It failed to remove a .o file during "make clean", and it lacked a .gitignore file entirely.
Adjust PL/Tcl regression test to dodge a possible bug or zone dependency.
commit : dd2d437e8ee760040163fd27722a57a818f32d72 author : Tom Lane <firstname.lastname@example.org> date : Sun, 19 Feb 2017 16:14:52 -0500 committer: Tom Lane <email@example.com> date : Sun, 19 Feb 2017 16:14:52 -0500
One case in the PL/Tcl tests is observed to fail on RHEL5 with a Turkish time zone setting. It's not clear if this is an old Tcl bug or something odd about the zone data, but in any case that test is meant to see if the Tcl [clock] command works at all, not what its corner-case behaviors are. Therefore we have no need to test exactly which week a Sunday midnight is considered to fall into. Probe the following Tuesday instead. Discussion: https://firstname.lastname@example.org
Fix help message for pg_basebackup -R
commit : 595c412760449d2a72b6a65a9711a1537b3b7277 author : Magnus Hagander <email@example.com> date : Sat, 18 Feb 2017 13:47:06 +0100 committer: Magnus Hagander <firstname.lastname@example.org> date : Sat, 18 Feb 2017 13:47:06 +0100
The recovery.conf file that's generated is specifically for replication, and not needed (or wanted) for regular backup restore, so indicate that in the message.
Document usage of COPT environment variable for adjusting configure flags.
commit : 156fbdfcef5d5dfae63bddeeaef2f1aac15f2151 author : Tom Lane <email@example.com> date : Fri, 17 Feb 2017 16:11:02 -0500 committer: Tom Lane <firstname.lastname@example.org> date : Fri, 17 Feb 2017 16:11:02 -0500
Also add to the existing rather half-baked description of PROFILE, which does exactly the same thing, but I think people use it differently. Discussion: https://email@example.com
Doc: remove duplicate index entry.
commit : 4bd0f83172561beb4ced58231ff96314e2a7a742 author : Tom Lane <firstname.lastname@example.org> date : Thu, 16 Feb 2017 11:30:07 -0500 committer: Tom Lane <email@example.com> date : Thu, 16 Feb 2017 11:30:07 -0500
This causes a warning with the old html-docs toolchain, though not with the new. I had originally supposed that we needed both <indexterm> entries to get both a primary index entry and a see-also link; but evidently not, as pointed out by Fabien Coelho. Discussion: https://postgr.es/m/alpine.DEB.2.20.1702161616060.5445@lancre
Formatting and docs corrections for logical decoding output plugins.
commit : c3b5cfe33cda6e28bbdc75534706c1a00a9e6c4b author : Tom Lane <firstname.lastname@example.org> date : Wed, 15 Feb 2017 18:15:47 -0500 committer: Tom Lane <email@example.com> date : Wed, 15 Feb 2017 18:15:47 -0500
Make the typedefs for output plugins consistent with project style; they were previously not even consistent with each other as to layout or inclusion of parameter names. Make the documentation look the same, and fix errors therein (missing and misdescribed parameters). Back-patch because of the documentation bugs.
Doc: fix typo in logicaldecoding.sgml.
commit : 19c324d9fb4d59db1d0c3e8865356fb6867dc35e author : Tom Lane <firstname.lastname@example.org> date : Wed, 15 Feb 2017 17:31:02 -0500 committer: Tom Lane <email@example.com> date : Wed, 15 Feb 2017 17:31:02 -0500
There's no such field as OutputPluginOptions.output_mode; it's actually output_type. Noted by T. Katsumata. Discussion: https://firstname.lastname@example.org
Make sure that hash join’s bulk-tuple-transfer loops are interruptible.
commit : d0e9c0e3199cefb36dc787293f577cd57b984bfe author : Tom Lane <email@example.com> date : Wed, 15 Feb 2017 16:40:06 -0500 committer: Tom Lane <firstname.lastname@example.org> date : Wed, 15 Feb 2017 16:40:06 -0500
The loops in ExecHashJoinNewBatch(), ExecHashIncreaseNumBatches(), and ExecHashRemoveNextSkewBucket() are all capable of iterating over many tuples without ever doing a CHECK_FOR_INTERRUPTS, so that the backend might fail to respond to SIGINT or SIGTERM for an unreasonably long time. Fix that.
In the case of ExecHashJoinNewBatch(), it seems useful to put the added CHECK_FOR_INTERRUPTS into ExecHashJoinGetSavedTuple() rather than directly in the loop, because that will also ensure that both principal code paths through ExecHashJoinOuterGetTuple() will do a CHECK_FOR_INTERRUPTS, which seems like a good idea to avoid surprises.
Back-patch to all supported branches. Tom Lane and Thomas Munro. Discussion: https://email@example.com
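The shape of the fix can be sketched in a self-contained way; InterruptPending and CHECK_FOR_INTERRUPTS() below are deliberately simplified stand-ins for PostgreSQL's real ones (the real macro calls ProcessInterrupts(), which may abort the query), and get_saved_tuple() is a hypothetical analogue of ExecHashJoinGetSavedTuple().

```c
#include <stdbool.h>
#include <stddef.h>

/*
 * Simplified stand-ins for PostgreSQL's InterruptPending flag and
 * CHECK_FOR_INTERRUPTS() macro.  Here we just count how often the check
 * fires instead of processing a real signal.
 */
static volatile bool InterruptPending = false;
static int interrupts_served = 0;

#define CHECK_FOR_INTERRUPTS() \
    do { \
        if (InterruptPending) \
        { \
            interrupts_served++; \
            InterruptPending = false; \
        } \
    } while (0)

/*
 * Putting the check in the per-tuple fetch routine (as the patch does with
 * ExecHashJoinGetSavedTuple) covers every loop that consumes saved tuples,
 * so the backend stays responsive to SIGINT/SIGTERM even when a single
 * batch transfer touches millions of tuples.
 */
static int
get_saved_tuple(const int *tuples, size_t i)
{
    CHECK_FOR_INTERRUPTS();
    return tuples[i];
}
```

Placing the check inside the shared fetch routine, rather than in each loop body, is the design choice the commit message calls out: every caller gets the check for free.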
Ignore tablespace ACLs when ignoring schema ACLs.
commit : 804aad8ff46cbef9f520507bb8b4522a011cd1b2 author : Noah Misch <firstname.lastname@example.org> date : Sun, 12 Feb 2017 16:03:41 -0500 committer: Noah Misch <email@example.com> date : Sun, 12 Feb 2017 16:03:41 -0500
The ALTER TABLE ALTER TYPE implementation can issue DROP INDEX and CREATE INDEX to refit existing indexes for the new column type. Since this CREATE INDEX is an implementation detail of an index alteration, the ensuing DefineIndex() should skip ACL checks specific to index creation. It already skips the namespace ACL check. Make it skip the tablespace ACL check, too. Back-patch to 9.2 (all supported versions). Reviewed by Tom Lane.
Blind try to fix portability issue in commit 8f93bd851 et al.
commit : 86ef376bbe1b9568fa71e76ecfd3091d522368bb author : Tom Lane <firstname.lastname@example.org> date : Thu, 9 Feb 2017 15:49:58 -0500 committer: Tom Lane <email@example.com> date : Thu, 9 Feb 2017 15:49:58 -0500
The S/390 members of the buildfarm are showing failures indicating that they're having trouble with the rint() calls I added yesterday. There's no good reason for that, and I wonder if it is a compiler bug similar to the one we worked around in d9476b838.
Try to fix it using the same method as before, namely to store the result of rint() back into a "double" variable rather than immediately converting to int64. (This isn't entirely waving a dead chicken, since on machines with wider-than-double float registers, the extra store forces a width conversion. I don't know if S/390 is like that, but it seems worth trying.)
In passing, merge duplicate ereport() calls in float8_timestamptz(). Per buildfarm.
Fix roundoff problems in float8_timestamptz() and make_interval().
commit : 1888fad440036195c7e7a933fc17410fad8dcc3d author : Tom Lane <firstname.lastname@example.org> date : Wed, 8 Feb 2017 18:04:59 -0500 committer: Tom Lane <email@example.com> date : Wed, 8 Feb 2017 18:04:59 -0500
When converting a float value to integer microseconds, we should be careful to round the value to the nearest integer, typically with rint(); simply assigning to an int64 variable will truncate, causing apparently off-by-one values in cases that should work. Most places in the datetime code got this right, but not these two.
float8_timestamptz() is new as of commit e511d878f (9.6). Previous versions effectively depended on interval_mul() to do roundoff correctly, which it does, so this fixes an accuracy regression in 9.6. The problem in make_interval() dates to its introduction in 9.4.
Aside from being careful to round rather than truncate, let's incorporate the hours and minutes inputs into the result with exact integer arithmetic, rather than risk introducing roundoff error where there need not have been any.
float8_timestamptz() problem reported by Erik Nordström, though this is not his proposed patch. make_interval() problem found by me. Discussion: https://postgr.es/m/CAHuQZDS76jTYk3LydPbKpNfw9KbACmD=49dC4BrzHcfPv6yA1A@mail.gmail.com
Correct thinko in last-minute release note item.
commit : cd898769cb6635a7440ebbabe15f7ecbf5157d68 author : Tom Lane <firstname.lastname@example.org> date : Tue, 7 Feb 2017 10:24:25 -0500 committer: Tom Lane <email@example.com> date : Tue, 7 Feb 2017 10:24:25 -0500
The CREATE INDEX CONCURRENTLY bug can only be triggered by row updates, not inserts, since the problem would arise from an update incorrectly being made HOT. Noted by Alvaro.