commit : aa3bcba08d466bc6fd2558f8f0bf0e6d6c89b58b author : Tom Lane <firstname.lastname@example.org> date : Mon, 8 May 2017 17:17:18 -0400 committer: Tom Lane <email@example.com> date : Mon, 8 May 2017 17:17:18 -0400
Further patch rangetypes_selfuncs.c's statistics slot management.
commit : 4509b4eb188beeea5c74a52f238127d323093113 author : Tom Lane <firstname.lastname@example.org> date : Mon, 8 May 2017 15:02:58 -0400 committer: Tom Lane <email@example.com> date : Mon, 8 May 2017 15:02:58 -0400
Values in a STATISTIC_KIND_RANGE_LENGTH_HISTOGRAM slot are float8, not of the type of the column the statistics are for. This bug is at least partly the fault of sloppy specification comments for get_attstatsslot()/free_attstatsslot(): the type OID they want is that of the stavalues entries, not of the underlying column. (I double-checked other callers and they seem to get this right.) Adjust the comments to be more correct.

Per buildfarm.

Security: CVE-2017-7484
Last-minute updates for release notes.
commit : 7603952e751a3b27adae16192b59ab09f0d0ba72 author : Tom Lane <firstname.lastname@example.org> date : Mon, 8 May 2017 12:57:27 -0400 committer: Tom Lane <email@example.com> date : Mon, 8 May 2017 12:57:27 -0400
Security: CVE-2017-7484, CVE-2017-7485, CVE-2017-7486
Fix possibly-uninitialized variable.
commit : a199582ef6d56786cd21aab55bf8011a478ed2d4 author : Tom Lane <firstname.lastname@example.org> date : Mon, 8 May 2017 11:18:40 -0400 committer: Tom Lane <email@example.com> date : Mon, 8 May 2017 11:18:40 -0400
Oversight in e2d4ef8de et al (my fault not Peter's). Per buildfarm.

Security: CVE-2017-7484
Match pg_user_mappings limits to information_schema.user_mapping_options.
commit : db2158108674812abe883f7e0bd14eb2024ea8f3 author : Noah Misch <firstname.lastname@example.org> date : Mon, 8 May 2017 07:24:24 -0700 committer: Noah Misch <email@example.com> date : Mon, 8 May 2017 07:24:24 -0700
Both views replace the umoptions field with NULL when the user does not meet qualifications to see it. They used different qualifications, and pg_user_mappings documented qualifications did not match its implemented qualifications. Make its documentation and implementation match those of user_mapping_options.

One might argue for stronger qualifications, but these have long, documented tenure. pg_user_mappings has always exhibited this problem, so back-patch to 9.2 (all supported versions).

Michael Paquier and Feike Steenbergen. Reviewed by Jeff Janes. Reported by Andrew Wheelwright.

Security: CVE-2017-7486
Restore PGREQUIRESSL recognition in libpq.
commit : 96d7454920e28447a1127497bba624cdf0f315c1 author : Noah Misch <firstname.lastname@example.org> date : Mon, 8 May 2017 07:24:24 -0700 committer: Noah Misch <email@example.com> date : Mon, 8 May 2017 07:24:24 -0700
Commit 65c3bf19fd3e1f6a591618e92eb4c54d0b217564 moved handling of the, already then, deprecated requiressl parameter into conninfo_storeval(). The default PGREQUIRESSL environment variable was however lost in the change resulting in a potentially silent accept of a non-SSL connection even when set. Its documentation remained. Restore its implementation. Also amend the documentation to mark PGREQUIRESSL as deprecated for those not following the link to requiressl. Back-patch to 9.3, where commit 65c3bf1 first appeared.

Behavior has been more complex when the user provides both deprecated and non-deprecated settings. Before commit 65c3bf1, libpq operated according to the first of these found:

  requiressl=1
  PGREQUIRESSL=1
  sslmode=*
  PGSSLMODE=*

(Note requiressl=0 didn't override sslmode=*; it would only suppress PGREQUIRESSL=1 or a previous requiressl=1. PGREQUIRESSL=0 had no effect whatsoever.) Starting with commit 65c3bf1, libpq ignored PGREQUIRESSL, and order of precedence changed to this:

  last of requiressl=* or sslmode=*
  PGSSLMODE=*

Starting now, adopt the following order of precedence:

  last of requiressl=* or sslmode=*
  PGSSLMODE=*
  PGREQUIRESSL=1

This retains the 65c3bf1 behavior for connection strings that contain both requiressl=* and sslmode=*. It retains the 65c3bf1 change that either connection string option overrides both environment variables. For the first time, PGSSLMODE has precedence over PGREQUIRESSL; this avoids reducing security of "PGREQUIRESSL=1 PGSSLMODE=verify-full" configurations originating under v9.3 and later.

Daniel Gustafsson

Security: CVE-2017-7485
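The adopted precedence order can be sketched as a small resolver. This is an illustrative model only, not libpq's C implementation; the function name and data shapes are invented for the sketch, and requiressl values other than "1" are simply treated as the default "prefer".

```python
def effective_sslmode(conninfo_options, environ):
    """Resolve the effective sslmode under the new precedence:
    last of requiressl=*/sslmode=* in the connection string,
    then PGSSLMODE, then PGREQUIRESSL=1.

    conninfo_options: ordered (key, value) pairs as written in the
    connection string; later entries override earlier ones.
    """
    sslmode = None
    # Last of requiressl=* or sslmode=* in the connection string wins.
    for key, value in conninfo_options:
        if key == "sslmode":
            sslmode = value
        elif key == "requiressl":
            sslmode = "require" if value == "1" else "prefer"
    if sslmode is not None:
        return sslmode
    # Either connection-string option overrides both environment variables;
    # PGSSLMODE now has precedence over PGREQUIRESSL.
    if "PGSSLMODE" in environ:
        return environ["PGSSLMODE"]
    if environ.get("PGREQUIRESSL") == "1":
        return "require"
    return "prefer"  # libpq's default sslmode
```

Note how "PGREQUIRESSL=1 PGSSLMODE=verify-full" resolves to verify-full, preserving the stronger setting.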
Translation updates
commit : 769294f36ca86bbdcdace8f82c6eff9c618a45df author : Peter Eisentraut <firstname.lastname@example.org> date : Mon, 8 May 2017 10:13:00 -0400 committer: Peter Eisentraut <email@example.com> date : Mon, 8 May 2017 10:13:00 -0400
Source-Git-URL: git://git.postgresql.org/git/pgtranslation/messages.git Source-Git-Hash: 14c4b5cb0f9330a9397159979c48e7076fa856d8
Add security checks to selectivity estimation functions
commit : d45cd7c0edb5364c525f2128c837850c93138b27 author : Peter Eisentraut <firstname.lastname@example.org> date : Fri, 5 May 2017 12:18:48 -0400 committer: Peter Eisentraut <email@example.com> date : Fri, 5 May 2017 12:18:48 -0400
Some selectivity estimation functions run user-supplied operators over data obtained from pg_statistic without security checks, which allows those operators to leak pg_statistic data without having privileges on the underlying tables.

Fix by checking that one of the following is satisfied: (1) the user has table or column privileges on the table underlying the pg_statistic data, or (2) the function implementing the user-supplied operator is leak-proof. If neither is satisfied, planning will proceed as if there are no statistics available.

At least one of these is satisfied in most cases in practice. The only situations that are negatively impacted are user-defined or not-leak-proof operators on a security-barrier view.

Reported-by: Robert Haas <firstname.lastname@example.org>
Author: Peter Eisentraut <email@example.com>
Author: Tom Lane <firstname.lastname@example.org>
Security: CVE-2017-7484
Release notes for 9.6.3, 9.5.7, 9.4.12, 9.3.17, 9.2.21.
commit : 04b4183666409c538cf900a486028ac3fb883b8b author : Tom Lane <email@example.com> date : Sun, 7 May 2017 16:56:03 -0400 committer: Tom Lane <firstname.lastname@example.org> date : Sun, 7 May 2017 16:56:03 -0400
Guard against null t->tm_zone in strftime.c.
commit : 74e747fbdd978d01d2a5bdc1af27408b7c667c9c author : Tom Lane <email@example.com> date : Sun, 7 May 2017 12:33:12 -0400 committer: Tom Lane <firstname.lastname@example.org> date : Sun, 7 May 2017 12:33:12 -0400
The upstream IANA code does not guard against null TM_ZONE pointers in this function, but in our code there is such a check in the other pre-existing use of t->tm_zone. We do have some places that set pg_tm.tm_zone to NULL. I'm not entirely sure it's possible to reach strftime with such a value, but I'm not sure it isn't either, so be safe. Per Coverity complaint.
Install the "posixrules" timezone link in MSVC builds.
commit : 2f66002df9c1c98203526d2b86332e7aea3777dd author : Tom Lane <email@example.com> date : Sun, 7 May 2017 11:57:41 -0400 committer: Tom Lane <firstname.lastname@example.org> date : Sun, 7 May 2017 11:57:41 -0400
Somehow, we'd missed ever doing this. The consequences aren't too severe: basically, the timezone library would fall back on its hardwired notion of the DST transition dates to use for a POSIX-style zone name, rather than obeying US/Eastern which is the intended behavior. The net effect would only be to obey current US DST law further back than it ought to apply; so it's not real surprising that nobody noticed.

David Rowley, per report from Amit Kapila

Discussion: https://postgr.es/m/CAA4eK1LC7CaNhRAQ__C3ht1JVrPzaAXXhEJRnR5L6bfYHiLmWw@mail.gmail.com
Restore fullname contents before falling through in pg_open_tzfile().
commit : 38ed45c9156ac65ad35647e5c46acf2c5b03bdca author : Tom Lane <email@example.com> date : Sun, 7 May 2017 11:34:31 -0400 committer: Tom Lane <firstname.lastname@example.org> date : Sun, 7 May 2017 11:34:31 -0400
Fix oversight in commit af2c5aa88: if the shortcut open() doesn't work, we need to reset fullname to be just the name of the toplevel tzdata directory before we fall through into the pre-existing code. This failed to be exposed in my (tgl's) testing because the fall-through path is actually never taken under normal circumstances.

David Rowley, per report from Amit Kapila

Discussion: https://postgr.es/m/CAA4eK1LC7CaNhRAQ__C3ht1JVrPzaAXXhEJRnR5L6bfYHiLmWw@mail.gmail.com
Allow queries submitted by postgres_fdw to be canceled.
commit : cdf5a004bb7cc78e70ffe5213049dc853b93e699 author : Robert Haas <email@example.com> date : Sat, 6 May 2017 22:21:37 -0400 committer: Robert Haas <firstname.lastname@example.org> date : Sat, 6 May 2017 22:21:37 -0400
Back-patch of commits f039eaac7131ef2a4cf63a10cf98486f8bcd09d2 and 1b812afb0eafe125b820cc3b95e7ca03821aa675, which arranged (in 9.6+) to make remote queries interruptible. It was known at the time that the same problem existed in the back-branches, but I did not back-patch for lack of a user complaint.

Michael Paquier and Etsuro Fujita, adjusted for older branches by me. Per gripe from Suraj Kharage. This doesn't directly address Suraj's gripe, but since the patch that will do so builds on top of this work, it seems best to back-patch this part first.

Discussion: http://postgr.es/m/CAF1DzPU8Kx+fMXEbFoP289xtm3bz3t+ZfxhmKavr98Bh-C0TqQ@mail.gmail.com
RLS: Fix ALL vs. SELECT+UPDATE policy usage
commit : d617c7629c0806a245555c0fe74331935c726569 author : Stephen Frost <email@example.com> date : Sat, 6 May 2017 21:46:56 -0400 committer: Stephen Frost <firstname.lastname@example.org> date : Sat, 6 May 2017 21:46:56 -0400
When we add the SELECT-privilege based policies to the RLS with check options (such as for an UPDATE statement, or when we have INSERT ... RETURNING), we need to be sure to use the 'USING' case if the policy is actually an 'ALL' policy (which could have both a USING clause and an independent WITH CHECK clause).

This could result in policies acting differently when built using ALL (when the ALL had both USING and WITH CHECK clauses) and when building the policies independently as SELECT and UPDATE policies.

Fix this by adding an explicit boolean to add_with_check_options() to indicate when the USING policy should be used, even if the policy has both USING and WITH CHECK policies on it.

Reported by: Rod Taylor

Back-patch to 9.5 where RLS was introduced.
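The effect of the new boolean can be sketched with a toy model. This is an illustration of the selection rule only, not the backend's C code; the function name and policy representation are invented for the sketch.

```python
def with_check_quals(policy, force_using=False):
    """Pick which quals of a policy to enforce as WITH CHECK options.

    policy: dict with 'using' and 'with_check' qual expressions
    (either may be None, as for an ALL policy with only a USING clause).
    force_using: when the policy is being added for its SELECT-privilege
    (USING) side, take the USING clause even if the policy also carries
    an independent WITH CHECK clause -- the fix described above.
    """
    if force_using:
        return policy["using"]
    # Otherwise prefer the WITH CHECK clause, falling back on USING when
    # the policy has no separate WITH CHECK clause.
    if policy["with_check"] is not None:
        return policy["with_check"]
    return policy["using"]
```

Without the boolean, an ALL policy with both clauses would contribute its WITH CHECK clause even where its USING side was wanted, diverging from equivalent standalone SELECT and UPDATE policies.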
commit : a5faf1708e24ba1e307d9f6a313bd6935a0afd97 author : Tom Lane <email@example.com> date : Sat, 6 May 2017 14:19:47 -0400 committer: Tom Lane <firstname.lastname@example.org> date : Sat, 6 May 2017 14:19:47 -0400
This system function has been there a very long time, but somehow escaped being listed in func.sgml.

Fabien Coelho and Tom Lane

Discussion: https://postgr.es/m/alpine.DEB.2.20.1705061027580.3896@lancre
Allow MSVC to build with Tcl 8.6.
commit : adfad4222dbd6e086800629ba02eac4a2efc9393 author : Alvaro Herrera <email@example.com> date : Fri, 5 May 2017 12:05:34 -0300 committer: Alvaro Herrera <firstname.lastname@example.org> date : Fri, 5 May 2017 12:05:34 -0300
Commit eaba54c20c5 added support for Tcl 8.6 for configure-supported platforms after verifying that pltcl works without further changes, but the MSVC tooling wasn't updated accordingly. Update MSVC to match, restructuring the code to avoid duplicating the logic for every Tcl version supported.

Backpatch to all live branches, like eaba54c20c5. In 9.4 and previous, change the patch to use backslashes rather than forward slashes, as in the rest of the file.

Reported by Paresh More, who also tested the patch I provided.

Discussion: https://postgr.es/m/CAAgiCNGVw3ssBtSi3ZNstrz5k00ax=UV+_ZEHUeW_LMSGL2sew@mail.gmail.com
Give nicer error message when connecting to a v10 server requiring SCRAM.
commit : f050c847d9698e89e145c20cddb7f41535d9d121 author : Heikki Linnakangas <email@example.com> date : Fri, 5 May 2017 11:24:02 +0300 committer: Heikki Linnakangas <firstname.lastname@example.org> date : Fri, 5 May 2017 11:24:02 +0300
This is just to give the user a hint that they need to upgrade, if they try to connect to a v10 server that uses SCRAM authentication, with an older client. Commit to all stable branches, but not master.

Discussion: https://email@example.com
Fix cursor_to_xml in tableforest false mode
commit : 9750a9583aacefbde15d9ed4ebfed84f1fb3aae8 author : Peter Eisentraut <firstname.lastname@example.org> date : Wed, 3 May 2017 21:25:01 -0400 committer: Peter Eisentraut <email@example.com> date : Wed, 3 May 2017 21:25:01 -0400
It only produced <row> elements but no wrapping <table> element. By contrast, cursor_to_xmlschema produced a schema that is now correct but did not previously match the XML data produced by cursor_to_xml. In passing, also fix a minor misunderstanding about moving cursors in the tests related to this.

Reported-by: firstname.lastname@example.org
Based-on-patch-by: Thomas Munro <email@example.com>
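The fixed output shape can be illustrated with a minimal model of the two modes. This is a simplified sketch, not the backend's XML-generation code; the function name and row representation are invented for the example.

```python
import xml.etree.ElementTree as ET

def rows_to_xml(rows, tableforest=False):
    """Render rows the way the fix requires: tableforest=false must wrap
    the <row> elements in a single <table> element, matching the schema
    that cursor_to_xmlschema describes."""
    row_elems = []
    for row in rows:
        row_elem = ET.Element("row")
        for col, val in row.items():
            child = ET.SubElement(row_elem, col)
            child.text = str(val)
        row_elems.append(row_elem)
    if tableforest:
        # Forest mode: a sequence of free-standing <row> documents.
        return "".join(ET.tostring(r, encoding="unicode") for r in row_elems)
    # Non-forest mode: one document with a wrapping <table> element.
    table = ET.Element("table")
    table.extend(row_elems)
    return ET.tostring(table, encoding="unicode")
```

The pre-fix behavior corresponded to emitting the forest form even when tableforest was false.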
Fix pfree-of-already-freed-tuple when rescanning a GiST index-only scan.
commit : 6cfb428b05a8742ac0ec17145c90c4672d44177d author : Tom Lane <firstname.lastname@example.org> date : Thu, 4 May 2017 13:59:13 -0400 committer: Tom Lane <email@example.com> date : Thu, 4 May 2017 13:59:13 -0400
GiST's getNextNearest() function attempts to pfree the previously-returned tuple if any (that is, scan->xs_hitup in HEAD, or scan->xs_itup in older branches). However, if we are rescanning a plan node after ending a previous scan early, those tuple pointers could be pointing to garbage, because they would be pointing into the scan's pageDataCxt or queueCxt which has been reset. In a debug build this reliably results in a crash, although I think it might sometimes accidentally fail to fail in production builds.

To fix, clear the pointer field anyplace we reset a context it might be pointing into. This may be overkill --- I think probably only the queueCxt case is involved in this bug, so that resetting in gistrescan() would be sufficient --- but dangling pointers are generally bad news, so let's avoid them.

Another plausible answer might be to just not bother with the pfree in getNextNearest(). The reconstructed tuples would go away anyway in the context resets, and I'm far from convinced that freeing them a bit earlier really saves anything meaningful. I'll stick with the original logic in this patch, but if we find more problems in the same area we should consider that approach.

Per bug #14641 from Denis Smirnov. Back-patch to 9.5 where this logic was introduced.

Discussion: https://firstname.lastname@example.org
Remove useless and rather expensive stanza in matview regression test.
commit : 2ffe80c06ab353f29a51df25f6db13e7ddefb16f author : Tom Lane <email@example.com> date : Wed, 3 May 2017 19:37:01 -0400 committer: Tom Lane <firstname.lastname@example.org> date : Wed, 3 May 2017 19:37:01 -0400
This removes a test case added by commit b69ec7cc9, which was intended to exercise a corner case involving the rule used at that time that materialized views were unpopulated iff they had physical size zero. We got rid of that rule very shortly later, in commit 1d6c72a55, but kept the test case. However, because the case now asks what VACUUM will do to a zero-sized physical file, it would be pretty surprising if the answer were ever anything but "nothing" ... and if things were indeed that broken, surely we'd find it out from other tests. Since the test involves a table that's fairly large by regression-test standards (100K rows), it's quite slow to run. Dropping it should save some buildfarm cycles, so let's do that.

Discussion: https://email@example.com
Improve performance of timezone loading, especially pg_timezone_names view.
commit : 724cd4f063a84af65dd02b18c9a019391d2c3494 author : Tom Lane <firstname.lastname@example.org> date : Tue, 2 May 2017 21:50:35 -0400 committer: Tom Lane <email@example.com> date : Tue, 2 May 2017 21:50:35 -0400
tzparse() would attempt to load the "posixrules" timezone database file on each call. That might seem like it would only be an issue when selecting a POSIX-style zone name rather than a zone defined in the timezone database, but it turns out that each zone definition file contains a POSIX-style zone string and tzload() will call tzparse() to parse that. Thus, when scanning the whole timezone file tree as we do in the pg_timezone_names view, "posixrules" was read repetitively for each zone definition file. Fix that by caching the file on first use within any given process. (We cache other zone definitions for the life of the process, so there seems little reason not to cache this one as well.) This probably won't help much in processes that never run pg_timezone_names, but even one additional SET of the timezone GUC would come out ahead.

An even worse problem for pg_timezone_names is that pg_open_tzfile() has an inefficient way of identifying the canonical case of a zone name: it basically re-descends the directory tree to the zone file. That's not awful for an individual "SET timezone" operation, but it's pretty horrid when we're inspecting every zone in the database. And it's pointless too because we already know the canonical spelling, having just read it from the filesystem. Fix by teaching pg_open_tzfile() to avoid the directory search if it's not asked for the canonical name, and backfilling the proper result in pg_tzenumerate_next().

In combination these changes seem to make the pg_timezone_names view about 3x faster to read, for me. Since a scan of pg_timezone_names has up to now been one of the slowest queries in the regression tests, this should help some little bit for buildfarm cycle times.

Back-patch to all supported branches, not so much because it's likely that users will care much about the view's performance as because tracking changes in the upstream IANA timezone code is really painful if we don't keep all the branches in sync.
Discussion: https://firstname.lastname@example.org
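The cache-on-first-use pattern behind the "posixrules" fix can be sketched generically. This is an illustrative memoization sketch, not PostgreSQL's C implementation; the function names are invented for the example.

```python
def make_cached_loader(load_file):
    """Wrap a filesystem loader so each zone definition is read at most
    once per process, as the fix above does for "posixrules"."""
    cache = {}
    def load(name):
        if name not in cache:
            cache[name] = load_file(name)  # hit the filesystem only once
        return cache[name]
    return load
```

With this wrapper, a tree scan that asks for "posixrules" once per zone file pays the I/O cost exactly once.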
Ensure commands in extension scripts see the results of preceding DDL.
commit : d0d3a57bfa18a9188378fe2f9bd94eb939c2ee90 author : Tom Lane <email@example.com> date : Tue, 2 May 2017 18:05:54 -0400 committer: Tom Lane <firstname.lastname@example.org> date : Tue, 2 May 2017 18:05:54 -0400
Due to a missing CommandCounterIncrement() call, parsing of a non-utility command in an extension script would not see the effects of the immediately preceding DDL command, unless that command's execution ends with CommandCounterIncrement() internally ... which some do but many don't.

Report by Philippe Beaudoin, diagnosis by Julien Rouhaud. Rather remarkably, this bug has evaded detection since extensions were invented, so back-patch to all supported branches.

Discussion: https://email@example.com
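The visibility rule behind the missing CommandCounterIncrement() call can be shown with a toy model: a row written at command id N is visible only to commands whose snapshot command id is greater than N. This class is an illustration of the semantics only, not PostgreSQL code.

```python
class MiniCommandCounter:
    """Toy model of per-transaction command-id visibility."""
    def __init__(self):
        self.current_cid = 0
        self.rows = []          # (cid, payload) pairs
    def write(self, payload):
        # A write is stamped with the command id it happened under.
        self.rows.append((self.current_cid, payload))
    def increment(self):
        # Analogous to CommandCounterIncrement(): later commands in the
        # same transaction can now see earlier commands' effects.
        self.current_cid += 1
    def visible(self):
        return [p for cid, p in self.rows if cid < self.current_cid]
```

Skipping increment() between a DDL command and the next statement in a script reproduces the bug: the second statement's parse-time lookups cannot see the first statement's catalog changes.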
Fix perl thinko in commit fed6df486dca
commit : df53413ba524a01cced9c12131606d84d52a0fc9 author : Andrew Dunstan <firstname.lastname@example.org> date : Tue, 2 May 2017 08:20:11 -0400 committer: Andrew Dunstan <email@example.com> date : Tue, 2 May 2017 08:20:11 -0400
Report and fix from Vaishnavi Prabakaran. Backpatch to 9.4 like original.
Update time zone data files to tzdata release 2017b.
commit : 9a8cc157c6e4724770168c589844539ccff2bd05 author : Tom Lane <firstname.lastname@example.org> date : Mon, 1 May 2017 11:52:59 -0400 committer: Tom Lane <email@example.com> date : Mon, 1 May 2017 11:52:59 -0400
DST law changes in Chile, Haiti, and Mongolia. Historical corrections for Ecuador, Kazakhstan, Liberia, and Spain.

The IANA crew continue their campaign to replace invented time zone abbreviations with numeric GMT offsets. This update changes numerous zones in South America, the Pacific and Indian oceans, and some Asian and Middle Eastern zones. I kept these abbreviations in the tznames/ data files, however, so that we will still accept them for input. (We may want to start trimming those files someday, but I think we should wait for the upstream dust to settle before deciding what to do.)

In passing, add MESZ (Mitteleuropaeische Sommerzeit) to the tznames lists; since we accept MEZ (Mitteleuropaeische Zeit) it seems rather strange not to take the other one. And fix some incorrect, or at least obsolete, comments that certain abbreviations are not traceable to the IANA data.
Allow vcregress.pl to run an arbitrary TAP test set
commit : 263e33d979fc19d889f914977978f9326cc8a3e4 author : Andrew Dunstan <firstname.lastname@example.org> date : Mon, 1 May 2017 10:12:02 -0400 committer: Andrew Dunstan <email@example.com> date : Mon, 1 May 2017 10:12:02 -0400
Previously, provision was made only for running the bin checks in a single step. Now these tests can be run individually, as well as tests in other locations (e.g. src/test/recovery). Also provide for suppressing unnecessary temp installs by setting the NO_TEMP_INSTALL environment variable just as the Makefiles do.

Backpatch to 9.4.
Sync our copy of the timezone library with IANA release tzcode2017b.
commit : 4d4d8fa77eb81c949dc52ffcb401a476fffddb2c author : Tom Lane <firstname.lastname@example.org> date : Sun, 30 Apr 2017 15:13:51 -0400 committer: Tom Lane <email@example.com> date : Sun, 30 Apr 2017 15:13:51 -0400
zic no longer mishandles some transitions in January 2038 when it attempts to work around Qt bug 53071. This fixes a bug affecting Pacific/Tongatapu that was introduced in zic 2016e. localtime.c now contains a workaround, useful when loading a file generated by a buggy zic.

There are assorted cosmetic changes as well, notably relocation of a bunch of #defines.
Fix VALIDATE CONSTRAINT to consider NO INHERIT attribute.
commit : a0291c33070d7095bc8aa4f21a1f6ccf714b262f author : Robert Haas <firstname.lastname@example.org> date : Fri, 28 Apr 2017 14:48:38 -0400 committer: Robert Haas <email@example.com> date : Fri, 28 Apr 2017 14:48:38 -0400
Currently, trying to validate a NO INHERIT constraint on the parent will search for the constraint in child tables (where it is not supposed to exist), wrongly causing a "constraint does not exist" error.

Amit Langote, per a report from Hans Buschmann.

Discussion: http://firstname.lastname@example.org
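The fixed recursion can be sketched with a toy catalog model: descend into inheritance children only when the constraint is inheritable. This is a hypothetical model for illustration, not the backend's C code.

```python
def validate_constraint(constraints, children, table, name):
    """Return the list of tables visited by validation.

    constraints: table -> {constraint_name: is_no_inherit}
    children:    table -> list of inheritance children
    """
    if name not in constraints[table]:
        raise LookupError(
            f'constraint "{name}" of relation "{table}" does not exist')
    validated = [table]
    if constraints[table][name]:
        # NO INHERIT: the constraint is not supposed to exist on the
        # children, so do not descend.  The pre-fix code descended anyway
        # and hit the LookupError above on the first child.
        return validated
    for child in children.get(table, []):
        validated += validate_constraint(constraints, children, child, name)
    return validated
```

Running the pre-fix behavior against a NO INHERIT constraint would attempt the child lookup and fail with the spurious "does not exist" error.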
Don't use on-disk snapshots for exported logical decoding snapshot.
commit : 54270d7ebcdc68a3dd995435e12ffd6596976b1e author : Andres Freund <email@example.com> date : Thu, 27 Apr 2017 15:28:24 -0700 committer: Andres Freund <firstname.lastname@example.org> date : Thu, 27 Apr 2017 15:28:24 -0700
Logical decoding stores historical snapshots on disk, so that logical decoding can restart without having to reconstruct a snapshot from scratch (for which the resources are not guaranteed to be present anymore). These serialized snapshots were also used when creating a new slot via the walsender interface, which can export a "full" snapshot (i.e. one that can read all tables, not just catalog ones).

The problem is that the serialized snapshots are only useful for catalogs and not for normal user tables. Thus the use of such a serialized snapshot could result in an inconsistent snapshot being exported, which could lead to queries returning wrong data. This would only happen if logical slots are created while another logical slot already exists.

Author: Petr Jelinek
Reviewed-By: Andres Freund
Discussion: https://email@example.com
Backport: 9.4, where logical decoding was introduced.
Preserve required !catalog tuples while computing initial decoding snapshot.
commit : 47f896b5c2ec255d457830945d5801bfc1c67b2f author : Andres Freund <firstname.lastname@example.org> date : Sun, 23 Apr 2017 20:41:29 -0700 committer: Andres Freund <email@example.com> date : Sun, 23 Apr 2017 20:41:29 -0700
The logical decoding machinery already preserved all the required catalog tuples, which is sufficient in the course of normal logical decoding, but did not guarantee that non-catalog tuples were preserved during computation of the initial snapshot when creating a slot over the replication protocol. This could cause a corrupted initial snapshot being exported. The time window for issues is usually not terribly large, but on a busy server it's perfectly possible to hit it. Ongoing decoding is not affected by this bug.

To avoid increased overhead for the SQL API, only retain additional tuples when a logical slot is being created over the replication protocol. To do so this commit changes the signature of CreateInitDecodingContext(), but it seems unlikely that it's being used in an extension, so that's probably ok.

In a drive-by fix, fix handling of ReplicationSlotsComputeRequiredXmin's already_locked argument, which should only apply to ProcArrayLock, not ReplicationSlotControlLock.

Reported-By: Erik Rijkers
Analyzed-By: Petr Jelinek
Author: Petr Jelinek, heavily editorialized by Andres Freund
Reviewed-By: Andres Freund
Discussion: https://firstname.lastname@example.org
Backport: 9.4, where logical decoding was introduced.
Fix postmaster's handling of fork failure for a bgworker process.
commit : dba1f310a564e8dc3496cd5ddec14ab6645ffd15 author : Tom Lane <email@example.com> date : Mon, 24 Apr 2017 12:16:58 -0400 committer: Tom Lane <firstname.lastname@example.org> date : Mon, 24 Apr 2017 12:16:58 -0400
This corner case didn't behave nicely at all: the postmaster would (partially) update its state as though the process had started successfully, and be quite confused thereafter. Fix it to act like the worker had crashed, instead.

In passing, refactor so that do_start_bgworker contains all the state-change logic for bgworker launch, rather than just some of it.

Back-patch as far as 9.4. 9.3 contains similar logic, but it's just enough different that I don't feel comfortable applying the patch without more study; and the use of bgworkers in 9.3 was so small that it doesn't seem worth the extra work. transam/parallel.c is still entirely unprepared for the possibility of bgworker startup failure, but that seems like material for a separate patch.

Discussion: https://email@example.com
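The fixed control flow can be sketched abstractly: postmaster-side state flips to "running" only after fork succeeds, and a fork failure is reported as a crash rather than leaving state half-updated. This is a hypothetical model, not postmaster.c; the names and dict-based state are invented for the sketch.

```python
def start_bgworker(worker, fork_fn):
    """Attempt to launch a worker; update its state atomically with the
    fork outcome.  fork_fn raises OSError on fork failure."""
    try:
        pid = fork_fn()
    except OSError:
        # Act as if the worker died at startup, instead of partially
        # recording a successful launch (the pre-fix confusion).
        worker["state"] = "crashed"
        return False
    worker["state"] = "running"
    worker["pid"] = pid
    return True
```

The point of the sketch is that no state change happens before the fork outcome is known, so a failure leaves a single, consistent "crashed" state for the existing crash-handling paths to clean up.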
Repair crash with unsortable data in grouping sets.
commit : 7be3678a8cfb55dcfca90fa586485f835ab912a5 author : Andrew Gierth <firstname.lastname@example.org> date : Mon, 24 Apr 2017 07:53:05 +0100 committer: Andrew Gierth <email@example.com> date : Mon, 24 Apr 2017 07:53:05 +0100
Previously the code would generate incorrect results, assertion failures, or crashes if given unsortable (but hashable) columns in grouping sets. Handle by throwing an error instead.

Report and patch by Pavan Deolasee (though I changed the error wording slightly); regression test by me.

(This affects 9.5 only since the planner was refactored in 9.6.)
Zero padding in replication origin's checkpointed on disk-state.
commit : 81ff04deda21e016e0e1d4ea1755ccb14c47c871 author : Andres Freund <firstname.lastname@example.org> date : Sun, 23 Apr 2017 15:48:31 -0700 committer: Andres Freund <email@example.com> date : Sun, 23 Apr 2017 15:48:31 -0700
This seems to be largely cosmetic, avoiding valgrind bleats and the like. The uninitialized padding influences the CRC of the on-disk entry, but because it's also used when verifying the CRC, that doesn't cause spurious failures. Backpatch nonetheless.

It's a bit unfortunate that contrib/test_decoding/sql/replorigin.sql doesn't exercise the checkpoint path, but checkpoints are fairly expensive on weaker machines, and we'd have to stop/start for that to be meaningful.

Author: Andres Freund
Discussion: https://firstname.lastname@example.org
Backpatch: 9.5, where replication origins were introduced
Fix order of arguments to SubTransSetParent().
commit : a66e01bbcc8808674ade41d38cfba711a36b8b15 author : Tom Lane <email@example.com> date : Sun, 23 Apr 2017 13:10:57 -0400 committer: Tom Lane <firstname.lastname@example.org> date : Sun, 23 Apr 2017 13:10:57 -0400
ProcessTwoPhaseBuffer (formerly StandbyRecoverPreparedTransactions) mixed up the parent and child XIDs when calling SubTransSetParent to record the transactions' relationship in pg_subtrans. Remarkably, analysis by Simon Riggs suggests that this doesn't lead to visible problems (at least, not in non-Assert builds). That might explain why we'd not noticed it before. Nonetheless, it's surely wrong.

This code was born broken, so back-patch to all supported branches.

Discussion: https://email@example.com
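Why argument order matters here can be shown with a toy dict standing in for pg_subtrans, which maps each subtransaction XID to its parent. This is an illustration only; the real store is an SLRU in C, and these helper names are invented.

```python
def record_subtrans(pg_subtrans, child_xid, parent_xid):
    """Record that child_xid's parent is parent_xid.  Swapping the two
    arguments stores the relationship backwards."""
    pg_subtrans[child_xid] = parent_xid

def toplevel_of(pg_subtrans, xid):
    # Follow parent links until we reach a top-level XID (no entry).
    while xid in pg_subtrans:
        xid = pg_subtrans[xid]
    return xid
```

With the arguments swapped, the child XID never gets a parent link at all, so later lookups cannot trace a subtransaction back to its top-level transaction.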
doc: Update link
commit : f91160c56e71df112209b15212deb8adf46e0680 author : Peter Eisentraut <firstname.lastname@example.org> date : Fri, 21 Apr 2017 19:42:01 -0400 committer: Peter Eisentraut <email@example.com> date : Fri, 21 Apr 2017 19:42:01 -0400
The reference "That is the topic of the next section." has been incorrect since the materialized views documentation got inserted between the section "rules-views" and "rules-update".

Author: Zertrin <firstname.lastname@example.org>
Avoid depending on non-POSIX behavior of fcntl(2).
commit : 9f8be754215694a681a5ccb622aad3c4426652a5 author : Tom Lane <email@example.com> date : Fri, 21 Apr 2017 15:55:56 -0400 committer: Tom Lane <firstname.lastname@example.org> date : Fri, 21 Apr 2017 15:55:56 -0400
The POSIX standard does not say that the success return value for fcntl(F_SETFD) and fcntl(F_SETFL) is zero; it says only that it's not -1. We had several calls that were making the stronger assumption. Adjust them to test specifically for -1 for strict spec compliance.

The standard further leaves open the possibility that the O_NONBLOCK flag bit is not the only active one in F_SETFL's argument. Formally, therefore, one ought to get the current flags with F_GETFL and store them back with only the O_NONBLOCK bit changed when trying to change the nonblock state. In port/noblock.c, we were doing the full pushup in pg_set_block but not in pg_set_noblock, which is just weird. Make both of them do it properly, since they have little business making any assumptions about the socket they're handed. The other places where we're issuing F_SETFL are working with FDs we just got from pipe(2), so it's reasonable to assume the FDs' properties are all default, so I didn't bother adding F_GETFL steps there.

Also, while pg_set_block deserves some points for trying to do things right, somebody had decided that it'd be even better to cast fcntl's third argument to "long". Which is completely loony, because POSIX clearly says the third argument for an F_SETFL call is "int".

Given the lack of field complaints, these missteps apparently are not of significance on any common platforms. But they're still wrong, so back-patch to all supported branches.

Discussion: https://email@example.com
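The "full pushup" described above, get the current flags with F_GETFL and write them back with only the O_NONBLOCK bit changed, looks like this in Python's fcntl module (a sketch of the pattern, not PostgreSQL's noblock.c; note Python signals the -1 failure case by raising OSError rather than returning it):

```python
import fcntl
import os

def set_nonblock(fd, nonblock=True):
    """Toggle O_NONBLOCK on fd without disturbing any other file status
    flags that may happen to be set."""
    flags = fcntl.fcntl(fd, fcntl.F_GETFL)   # read current flags first
    if nonblock:
        flags |= os.O_NONBLOCK
    else:
        flags &= ~os.O_NONBLOCK
    # Store back all flags with only the O_NONBLOCK bit changed; a
    # failed fcntl() raises OSError here (the C-level -1 return).
    fcntl.fcntl(fd, fcntl.F_SETFL, flags)
```

Example use on a pipe read end: after `set_nonblock(r)` a read with no data raises BlockingIOError instead of blocking, and `set_nonblock(r, nonblock=False)` restores blocking behavior.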
Always build a custom plan node's targetlist from the path's pathtarget.
commit : 6f0f98bb0bced134c09b7acf8fe35ff3a6f1bbd2 author : Tom Lane <firstname.lastname@example.org> date : Mon, 17 Apr 2017 15:29:00 -0400 committer: Tom Lane <email@example.com> date : Mon, 17 Apr 2017 15:29:00 -0400
We were applying the use_physical_tlist optimization to all relation scan plans, even those implemented by custom scan providers. However, that's a bad idea for a couple of reasons. The custom provider might be unable to provide columns that it hadn't expected to be asked for (for example, the custom scan might depend on an index-only scan). Even more to the point, there's no good reason to suppose that this "optimization" is a win for a custom scan; whatever the custom provider is doing is likely not based on simply returning physical heap tuples. (As a counterexample, if the custom scan is an interface to a column store, demanding all columns would be a huge loss.) If it is a win, the custom provider could make that decision for itself and insert a suitable pathtarget into the path, anyway.

Per discussion with Dmitry Ivanov. Back-patch to 9.5 where custom scan support was introduced. The argument that the custom provider can adjust the behavior by changing the pathtarget only applies to 9.6+, but on balance it seems more likely that use_physical_tlist will hurt custom scans than help them.

Discussion: https://firstname.lastname@example.org
Fix compiler warning
commit : b6e6ae1dc6a647ae6c846c79a8515802a29ebaba author : Peter Eisentraut <email@example.com> date : Sun, 16 Apr 2017 20:50:31 -0400 committer: Peter Eisentraut <firstname.lastname@example.org> date : Sun, 16 Apr 2017 20:50:31 -0400
Introduced by 087e696f066d31cb2f1269a1296a13dfe0bf7a11, happens with gcc 4.7.2.
Provide a way to control SysV shmem attach address in EXEC_BACKEND builds.
commit : bbd4a1b60b6ef311347786e8e849c2110bf0e61f author : Tom Lane <email@example.com> date : Sat, 15 Apr 2017 17:27:38 -0400 committer: Tom Lane <firstname.lastname@example.org> date : Sat, 15 Apr 2017 17:27:38 -0400
In standard non-Windows builds, there's no particular reason to care what address the kernel chooses to map the shared memory segment at. However, when building with EXEC_BACKEND, there's a risk that the chosen address won't be available in all child processes. Linux with ASLR enabled (which it is by default) seems particularly at risk because it puts shmem segments into the same area where it maps shared libraries. We can work around that by specifying a mapping address that's outside the range where shared libraries could get mapped. On x86_64 Linux, 0x7e0000000000 seems to work well. This is only meant for testing/debugging purposes, so it doesn't seem necessary to go as far as providing a GUC (or any user-visible documentation, though we might change that later). Instead, it's just controlled by setting an environment variable PG_SHMEM_ADDR to the desired attach address. Back-patch to all supported branches, since the point here is to remove intermittent buildfarm failures on EXEC_BACKEND animals. Owners of affected animals will need to add a suitable setting of PG_SHMEM_ADDR to their build_env configuration. Discussion: https://email@example.com
Further fix pg_trgm's extraction of trigrams from regular expressions.
commit : 9b48ce37734527dc948afdeecf17983d6c5cefe6 author : Tom Lane <firstname.lastname@example.org> date : Fri, 14 Apr 2017 14:52:03 -0400 committer: Tom Lane <email@example.com> date : Fri, 14 Apr 2017 14:52:03 -0400
Commit 9e43e8714 turns out to have been insufficient: not only is it necessary to track tentative parent links while considering a set of arc removals, but it's necessary to track tentative flag additions as well. This is because we always merge arc target states into arc source states; therefore, when considering a merge of the final state with some other, it is the other state that will acquire a new TSTATE_FIN bit. If there's another arc for the same color trigram that would cause merging of that state with the initial state, we failed to recognize the problem. The test cases for the prior commit evidently only exercised situations where a tentative merge with the initial state occurs before one with the final state. If it goes the other way around, we'll happily merge the initial and final states, either producing a broken final graph that would never match anything, or triggering the Assert added by the prior commit. It's tempting to consider switching the merge direction when the merge involves the final state, but I lack the time to analyze that idea in detail. Instead just keep track of the flag changes that would result from proposed merges, in the same way that the prior commit tracked proposed parent links. Along the way, add some more debugging support, because I'm not entirely confident that this is the last bug here. And tweak matters so that the transformed.dot file uses small integers rather than pointer values to identify states; that makes it more readable if you're just eyeballing it rather than fooling with Graphviz. And rename a couple of identically named struct fields to reduce confusion. Per report from Corey Csuhta. Add a test case based on his example. (Note: this case does not trigger the bug under 9.3, apparently because its different measurement of costs causes it to stop merging states before it hits the failure. I spent some time trying to find a variant that would fail in 9.3, without success; but I'm sure such cases exist.) 
Like the previous patch, back-patch to 9.3 where this code was added. Report: https://postgr.es/m/E2B01A4B-4530-406B-8D17-2F67CF9A16BA@csuhta.com
Fix regexport.c to behave sanely with lookaround constraints.
commit : 67665a71c003d0bc4d261f134cbc0989effc6927 author : Tom Lane <firstname.lastname@example.org> date : Thu, 13 Apr 2017 17:18:35 -0400 committer: Tom Lane <email@example.com> date : Thu, 13 Apr 2017 17:18:35 -0400
regexport.c thought it could just ignore LACON arcs, but the correct behavior is to treat them as satisfiable while consuming zero input (rather reminiscently of commit 9f1e642d5). Otherwise, the emitted simplified-NFA representation may contain no paths leading from initial to final state, which unsurprisingly confuses pg_trgm, as seen in bug #14623 from Jeff Janes. Since regexport's output representation has no concept of an arc that consumes zero input, recurse internally to find the next normal arc(s) after any LACON transitions. We'd be forced into changing that representation if a LACON could be the last arc reaching the final state, but fortunately the regex library never builds NFAs with such a configuration, so there always is a next normal arc. Back-patch to 9.3 where this logic was introduced. Discussion: https://firstname.lastname@example.org
Improve castNode notation by introducing list-extraction-specific variants.
commit : bcb1a27dc039f175dc64a231a742e74a8728fb26 author : Tom Lane <email@example.com> date : Mon, 10 Apr 2017 13:51:29 -0400 committer: Tom Lane <firstname.lastname@example.org> date : Mon, 10 Apr 2017 13:51:29 -0400
This extends the castNode() notation introduced by commit 5bcab1114 to provide, in one step, extraction of a list cell's pointer and coercion to a concrete node type. For example, "lfirst_node(Foo, lc)" is the same as "castNode(Foo, lfirst(lc))". Almost half of the uses of castNode that have appeared so far include a list extraction call, so this is pretty widely useful, and it saves a few more keystrokes compared to the old way. As with the previous patch, back-patch the addition of these macros to pg_list.h, so that the notation will be available when back-patching. Patch by me, after an idea of Andrew Gierth's. Discussion: https://email@example.com
Silence compiler warning in sepgsql
commit : 5fcf1f4e0a0dd075b16c628ba0a05521b4b4b179 author : Joe Conway <firstname.lastname@example.org> date : Thu, 6 Apr 2017 14:21:38 -0700 committer: Joe Conway <email@example.com> date : Thu, 6 Apr 2017 14:21:38 -0700
<selinux/label.h> includes <stdbool.h>, which creates an incompatible definition of bool. We don't care if <stdbool.h> redefines "true"/"false"; those are close enough. Complaint and initial patch by Mike Palmiotto. Final approach per Tom Lane's suggestion, as discussed on hackers. Backpatching to all supported branches. Discussion: https://postgr.es/m/flat/623bcaae-112e-ced0-8c22-a84f75ae0c53%40joeconway.com

Remove dead code and fix comments in fast-path function handling.
commit : 39cedd8d95930a7689a7e06dc7730727915a6052 author : Heikki Linnakangas <firstname.lastname@example.org> date : Thu, 6 Apr 2017 09:09:39 +0300 committer: Heikki Linnakangas <email@example.com> date : Thu, 6 Apr 2017 09:09:39 +0300
HandleFunctionRequest() is no longer responsible for reading the protocol message from the client, since commit 2b3a8b20c2. Fix the outdated comments. HandleFunctionRequest() now always returns 0, because the code that used to return EOF was moved in 2b3a8b20c2. Therefore, the caller no longer needs to check the return value. Reported by Andres Freund. Backpatch to all supported versions, even though this doesn't have any user-visible effect, to make backporting future patches in this area easier. Discussion: https://firstname.lastname@example.org
Fix integer-overflow problems in interval comparison.
commit : d68a2b20ae2c3a55ba934a0d9a669592a212f351 author : Tom Lane <email@example.com> date : Wed, 5 Apr 2017 23:51:28 -0400 committer: Tom Lane <firstname.lastname@example.org> date : Wed, 5 Apr 2017 23:51:28 -0400
When using integer timestamps, the interval-comparison functions tried to compute the overall magnitude of an interval as an int64 number of microseconds. As reported by Frazer McLean, this overflows for intervals exceeding about 296000 years, which is bad since we nominally allow intervals many times larger than that. That results in wrong comparison results, and possibly in corrupted btree indexes for columns containing such large interval values. To fix, compute the magnitude as int128 instead. Although some compilers have native support for int128 calculations, many don't, so create our own support functions that can do 128-bit addition and multiplication if the compiler support isn't there. These support functions are designed with an eye to allowing the int128 code paths in numeric.c to be rewritten for use on all platforms, although this patch doesn't do that, or even provide all the int128 primitives that will be needed for it. Back-patch as far as 9.4. Earlier releases did not guard against overflow of interval values at all (commit 146604ec4 fixed that), so it seems not very exciting to worry about overly-large intervals for them. Before 9.6, we did not assume that unreferenced "static inline" functions would not draw compiler warnings, so omit functions not directly referenced by timestamp.c, the only present consumer of int128.h. (We could have omitted these functions in HEAD too, but since they were written and debugged on the way to the present patch, and they look likely to be needed by numeric.c, let's keep them in HEAD.) I did not bother to try to prevent such warnings in a --disable-integer-datetimes build, though. Before 9.5, configure will never define HAVE_INT128, so the part of int128.h that exploits a native int128 implementation is dead code in the 9.4 branch. I didn't bother to remove it, thinking that keeping the file looking similar in different branches is more useful. In HEAD only, add a simple test harness for int128.h in src/tools/. 
In back branches, this does not change the float-timestamps code path. That's not subject to the same kind of overflow risk, since it computes the interval magnitude as float8. (No doubt, when this code was originally written, overflow was disregarded for exactly that reason.) There is a precision hazard instead :-(, but we'll avert our eyes from that question, since no complaints have been reported and that code's deprecated anyway. Kyotaro Horiguchi and Tom Lane Discussion: https://postgr.es/m/1490104629.422698.918452336.26FA96B7@webmail.messagingengine.com
Back-patch checkpoint clarification docs and pg_basebackup updates
commit : 2843d5d657be9e32d65b00b930d81293614c6979 author : Magnus Hagander <email@example.com> date : Sat, 1 Apr 2017 17:20:05 +0200 committer: Magnus Hagander <firstname.lastname@example.org> date : Sat, 1 Apr 2017 17:20:05 +0200
This backpatches 51e26c9 and 7220c7b, including both the documentation updates clarifying the checkpoints at the beginning of base backups and the messages in verbose and progress mode of pg_basebackup. Author: Michael Banck Discussion: https://postgr.es/m/21444.1488142764%40sss.pgh.pa.us
Don't use bgw_main even to specify in-core bgworker entrypoints.
commit : 0ef26bb394abedb2745bd838c26ecb3131682bda author : Robert Haas <email@example.com> date : Fri, 31 Mar 2017 20:35:51 -0400 committer: Robert Haas <firstname.lastname@example.org> date : Fri, 31 Mar 2017 20:35:51 -0400
On EXEC_BACKEND builds, this can fail if ASLR is in use. Backpatch to 9.5. On master, remove the bgw_main field completely, since there is no situation in which it is safe for an EXEC_BACKEND build. On 9.6 and 9.5, leave the field intact to avoid breaking things for third-party code that doesn't care about working under EXEC_BACKEND. Prior to 9.5, there are no in-core bgworker entrypoints. Petr Jelinek, reviewed by me. Discussion: http://email@example.com
Simplify the example of VACUUM in documentation.
commit : 86f0e538955ebacf6b79655807b635ca23ed6d28 author : Fujii Masao <firstname.lastname@example.org> date : Fri, 31 Mar 2017 01:31:15 +0900 committer: Fujii Masao <email@example.com> date : Fri, 31 Mar 2017 01:31:15 +0900
Previously a detailed activity report by VACUUM VERBOSE ANALYZE was described as an example of VACUUM in the docs. But it had been obsolete for a long time. For example, commit feb4f44d296b88b7f0723f4a4f3945a371276e0b updated the content of that activity report in 2003, but we had forgotten to update the example. So basically we need to update the example. But since no one cared about the details of VACUUM output and complained about that mistake for such a long time, per discussion on hackers, we decided to get rid of the detailed activity report from the example and simplify it. Back-patch to all supported versions. Reported by Masahiko Sawada, patch by me. Discussion: https://postgr.es/m/CAD21AoAGA2pB3p-CWmTkxBsbkZS1bcDGBLcYVcvcDxspG_XAfA@mail.gmail.com
Suppress implicit-conversion warnings seen with newer clang versions.
commit : 16e815279135ea70616692c8bade43d9f1e1d05e author : Tom Lane <firstname.lastname@example.org> date : Tue, 28 Mar 2017 13:16:19 -0400 committer: Tom Lane <email@example.com> date : Tue, 28 Mar 2017 13:16:19 -0400
We were assigning values near 255 through "char *" pointers. On machines where char is signed, that's not entirely kosher, and it's reasonable for compilers to warn about it. A better solution would be to change the pointer type to "unsigned char *", but that would be vastly more invasive. For the moment, let's just apply this simple backpatchable solution. Aleksander Alekseev Discussion: https://postgr.es/m/20170220141239.GD12278@e733.localdomain Discussion: https://firstname.lastname@example.org
Fix unportable disregard of alignment requirements in RADIUS code.
commit : 24fc43d40aad5f888f22af9a543113a168a08307 author : Tom Lane <email@example.com> date : Sun, 26 Mar 2017 17:35:35 -0400 committer: Tom Lane <firstname.lastname@example.org> date : Sun, 26 Mar 2017 17:35:35 -0400
The compiler is entitled to store a char local variable with no particular alignment requirement. Our RADIUS code cavalierly took such a local variable and cast its address to a struct type that does have alignment requirements. On an alignment-picky machine this would lead to bus errors. To fix, declare the local variable honestly, and then cast its address to char * for use in the I/O calls. Given the lack of field complaints, there must be very few if any people affected; but nonetheless this is a clear portability issue, so back-patch to all supported branches. Noted while looking at a Coverity complaint in the same code.
Revert Windows service check refactoring, and replace with a different fix.
commit : 42a60aa7f2074d1e1cd48f278a00c7d1423f2fb6 author : Heikki Linnakangas <email@example.com> date : Fri, 24 Mar 2017 12:39:01 +0200 committer: Heikki Linnakangas <firstname.lastname@example.org> date : Fri, 24 Mar 2017 12:39:01 +0200
This reverts commit 38bdba54a64bacec78e3266f0848b0b4a824132a, "Fix and simplify check for whether we're running as Windows service". It turns out that older versions of MinGW - like that on buildfarm member narwhal - do not support the CheckTokenMembership() function. This replaces the refactoring with a much smaller fix, to add a check for SE_GROUP_ENABLED to pgwin32_is_service(). Only apply to back-branches, and keep the refactoring in HEAD. It's unlikely that anyone is still really using such an old version of MinGW - aside from narwhal - but let's not change the minimum requirements in minor releases. Discussion: https://email@example.com Patch: https://www.postgresql.org/message-id/CAB7nPqSvfu%3DKpJ%3DNX%2BYAHmgAmQdzA7N5h31BjzXeMgczhGCC%2BQ%40mail.gmail.com
doc: Fix a few typos and awkward links
commit : 3e04d005c455153a2242c172ed61db2f0b362bc6 author : Peter Eisentraut <firstname.lastname@example.org> date : Sat, 18 Mar 2017 23:44:30 -0400 committer: Peter Eisentraut <email@example.com> date : Sat, 18 Mar 2017 23:44:30 -0400
Remove dead link.
commit : a84661215c0275942b3ecf3a023a8447c71338eb author : Robert Haas <firstname.lastname@example.org> date : Fri, 17 Mar 2017 09:32:34 -0400 committer: Robert Haas <email@example.com> date : Fri, 17 Mar 2017 09:32:34 -0400
David Christensen Discussion: http://postgr.es/m/82299377-1480-4439-9ABA-5828D71AA22E@endpoint.com
Fix and simplify check for whether we're running as Windows service.
commit : 96fd76dd287593b3b444ebddc1f817bd08bc812a author : Heikki Linnakangas <firstname.lastname@example.org> date : Fri, 17 Mar 2017 11:14:01 +0200 committer: Heikki Linnakangas <email@example.com> date : Fri, 17 Mar 2017 11:14:01 +0200
If the process token contains SECURITY_SERVICE_RID, but it has been disabled by the SE_GROUP_USE_FOR_DENY_ONLY attribute, win32_is_service() would incorrectly report that we're running as a service. That situation arises, e.g. if postmaster is launched with a restricted security token, with the "Log in as Service" privilege explicitly removed. Replace the broken code with CheckTokenMembership(), which does this correctly. Also replace similar code in win32_is_admin(), even though it got this right, for simplicity and consistency. Per bug #13755, reported by Breen Hagan. Back-patch to all supported versions. Patch by Takayuki Tsunakawa, reviewed by Michael Paquier. Discussion: https://www.postgresql.org/message-id/20151104062315.2745.67143%40wrigleys.postgresql.org
Avoid having vacuum set reltuples to 0 on non-empty relations in the presence of page pins, which leads to serious estimation errors in the planner. This particularly affects small heavily-accessed tables, especially where locking (e.g. from FK constraints) forces frequent vacuums for mxid cleanup.
commit : ee78ad5bc0d2b905fdfcee997c76e98292f65fbb author : Andrew Gierth <firstname.lastname@example.org> date : Thu, 16 Mar 2017 22:32:31 +0000 committer: Andrew Gierth <email@example.com> date : Thu, 16 Mar 2017 22:32:31 +0000
Fix by keeping separate track of pages whose live tuples were actually counted vs. pages that were only scanned for freezing purposes. Thus, reltuples can only be set to 0 if all pages of the relation were actually counted. Backpatch to all supported versions. Per bug #14057 from Nicolas Baccelli, analyzed by me. Discussion: https://firstname.lastname@example.org
Fix ancient get_object_address_opf_member bug
commit : 087e696f066d31cb2f1269a1296a13dfe0bf7a11 author : Alvaro Herrera <email@example.com> date : Thu, 16 Mar 2017 12:51:08 -0300 committer: Alvaro Herrera <firstname.lastname@example.org> date : Thu, 16 Mar 2017 12:51:08 -0300
The original coding was trying to use a TypeName as a string Value, which doesn't work; an oversight in my commit a61fd533. Repair. Also, make sure we cover the broken case in the relevant test script. Backpatch to 9.5. Discussion: https://email@example.com
commit : 5adec6b54dc3a813a316a3b323cfecdce00cffcb author : Peter Eisentraut <firstname.lastname@example.org> date : Tue, 14 Mar 2017 12:57:10 -0400 committer: Peter Eisentraut <email@example.com> date : Tue, 14 Mar 2017 12:57:10 -0400
From: Josh Soref <firstname.lastname@example.org>
Fix failure to mark init buffers as BM_PERMANENT.
commit : c17a3f57ebc00615ca34a48bb17eca1ed14f8ceb author : Robert Haas <email@example.com> date : Tue, 14 Mar 2017 11:51:11 -0400 committer: Robert Haas <firstname.lastname@example.org> date : Tue, 14 Mar 2017 11:51:11 -0400
This could result in corruption of the init fork of an unlogged index if the ambuildempty routine for that index used shared buffers to create the init fork, which was true for brin, gin, gist, and hash indexes. Patch by me, based on an earlier patch by Michael Paquier, who also reviewed this one. This also incorporates an idea from Artur Zakirov. Discussion: http://postgr.es/m/CACYUyc8yccE4xfxhqxfh_Mh38j7dRFuxfaK1p6dSNAEUakxUyQ@mail.gmail.com
Remove unnecessary dependency on statement_timeout in prepared_xacts test.
commit : d999b896d801130da461d55560188e7c4d36819a author : Tom Lane <email@example.com> date : Mon, 13 Mar 2017 16:46:32 -0400 committer: Tom Lane <firstname.lastname@example.org> date : Mon, 13 Mar 2017 16:46:32 -0400
Rather than waiting around for statement_timeout to expire, we can just try to take the table's lock in nowait mode. This saves some fraction under 4 seconds when running this test with prepared xacts available, and it guards against timeout-expired-anyway failures on very slow machines when prepared xacts are not available, as seen in a recent failure on axolotl for instance. This approach could fail if autovacuum were to take an exclusive lock on the test table concurrently, but there's no reason for it to do so. Since the main point here is to improve stability in the buildfarm, back-patch to all supported branches.
Ecpg should support COMMIT PREPARED and ROLLBACK PREPARED.
commit : a8b3262ab9790bea1eec8119fb7b00087f0a5a6b author : Michael Meskes <email@example.com> date : Mon, 13 Mar 2017 20:44:13 +0100 committer: Michael Meskes <firstname.lastname@example.org> date : Mon, 13 Mar 2017 20:44:13 +0100
The problem was that "begin transaction" was issued automatically before executing COMMIT/ROLLBACK PREPARED when not in autocommit mode. Fix by Masahiko Sawada.
Fix pg_file_write() error handling.
commit : d0e5ac736df074d8334fa5c1e471cbdadfd631a8 author : Noah Misch <email@example.com> date : Sun, 12 Mar 2017 19:35:31 -0400 committer: Noah Misch <firstname.lastname@example.org> date : Sun, 12 Mar 2017 19:35:31 -0400
Detect fclose() failures; given "ln -s /dev/full $PGDATA/devfull", "pg_file_write('devfull', 'x', true)" now fails as it should. Don't leak a stream when fwrite() fails. Remove a born-ineffective test that aimed to skip zero-length writes. Back-patch to 9.2 (all supported versions).
Fix ancient connection leak in dblink
commit : 82f3792a4f576a34495c6b65ba103941f0b69f49 author : Joe Conway <email@example.com> date : Sat, 11 Mar 2017 13:32:40 -0800 committer: Joe Conway <firstname.lastname@example.org> date : Sat, 11 Mar 2017 13:32:40 -0800
When using unnamed connections with dblink, every time a new connection is made, the old one is leaked. Fix that. This has been an issue probably since dblink was first committed. Someone complained almost ten years ago, but apparently I decided not to pursue it at the time, and neither did anyone else, so it slipped between the cracks. Now that someone else has complained, fix in all supported branches. Discussion: (orig) https://postgr.es/m/flat/F680AB59-6D6F-4026-9599-1BE28880273D%40decibel.org#F680AB59-6D6F-4026-9599-1BE28880273D@decibel.org Discussion: (new) https://postgr.es/m/flat/0A3221C70F24FB45833433255569204D1F6ADF8C@G01JPEXMBYT05 Reported by: Jim Nasby and Takayuki Tsunakawa
Sanitize newlines in object names in "pg_restore -l" output.
commit : 88f3743cb18db477aed3fc7eed7da1c43aea4146 author : Tom Lane <email@example.com> date : Fri, 10 Mar 2017 14:15:09 -0500 committer: Tom Lane <firstname.lastname@example.org> date : Fri, 10 Mar 2017 14:15:09 -0500
Commits 89e0bac86 et al replaced newlines with spaces in object names printed in SQL comments, but we neglected to consider that the same names are also printed by "pg_restore -l", and a newline would render the output unparseable by "pg_restore -L". Apply the same replacement in "-l" output. Since "pg_restore -L" doesn't actually examine any object names, only the dump ID field that starts each line, this is enough to fix things for its purposes. The previous fix was treated as a security issue, and we might have done that here as well, except that the issue was reported publicly to start with. Anyway it's hard to see how this could be exploited for SQL injection; "pg_restore -L" doesn't do much with the file except parse it for leading integers. Per bug #14587 from Milos Urbanek. Back-patch to all supported versions. Discussion: https://email@example.com
Fix a potential double-free in ecpg.
commit : 466ee7a5326b0071b91b46fd3b657c35b52fd8c6 author : Michael Meskes <firstname.lastname@example.org> date : Fri, 10 Mar 2017 10:32:41 +0100 committer: Michael Meskes <email@example.com> date : Fri, 10 Mar 2017 10:32:41 +0100
Fix timestamptz regression test to still work with latest IANA zone data.
commit : 92f6e6829a4461e686648e68183908b60ed4ae71 author : Tom Lane <firstname.lastname@example.org> date : Thu, 9 Mar 2017 17:20:11 -0500 committer: Tom Lane <email@example.com> date : Thu, 9 Mar 2017 17:20:11 -0500
The IANA timezone crew continues to chip away at their project of removing timezone abbreviations that have no real-world currency from their database. The tzdata2017a update removes all such abbreviations for South American zones, as well as much of the Pacific. This breaks some test cases in timestamptz.sql that were expecting America/Santiago and America/Caracas to have non-numeric abbreviations. The test cases involving America/Santiago seem to have selected that zone more or less at random, so just replace it with America/New_York, which is of similar longitude. The cases involving America/Caracas are harder since they were chosen to test a time-varying zone abbreviation around a point where it changed meaning in the backwards direction. Fortunately, Europe/Moscow has a similar case in 2014, and the MSK/MSD abbreviations are well enough attested that IANA seems unlikely to decide to remove them from the database in future. With these changes, this regression test should pass when using any IANA zone database from 2015 or later. One could wish that there were a few years more daylight on how out-of-date your zone database can be ... but really the --with-system-tzdata option is only meant for use on platforms where the zone database is kept up-to-date pretty faithfully, so I do not think this is a big objection. Discussion: https://firstname.lastname@example.org
Use doubly-linked block lists in aset.c to reduce large-chunk overhead.
commit : 50a9d714ad0dc39d7724073e1df7e381aaab84b6 author : Tom Lane <email@example.com> date : Wed, 8 Mar 2017 12:21:12 -0500 committer: Tom Lane <firstname.lastname@example.org> date : Wed, 8 Mar 2017 12:21:12 -0500
Large chunks (those too large for any palloc freelist) are managed as separate blocks. Formerly, realloc'ing or pfree'ing such a chunk required O(N) time in a context with N blocks, since we had to traipse down the singly-linked block list to locate the block's predecessor before we could fix the list links. This can result in O(N^2) runtime in situations where large numbers of such chunks are manipulated within one context. Cases like that were not foreseen in the original design of aset.c, and indeed didn't arise until fairly recently. But such problems can now occur in reorderbuffer.c and in hash joining, both of which make repeated large requests without scaling up their request size as they do so, and which will free their requests in not-necessarily-LIFO order. To fix, change the block list from singly-linked to doubly-linked. This adds another 4 or 8 bytes to ALLOC_BLOCKHDRSZ, but that doesn't seem like unacceptable overhead, since aset.c's blocks are normally 8K or more, and never less than 1K in current practice. In passing, get rid of some redundant AllocChunkGetPointer() calls in AllocSetRealloc (the compiler might be smart enough to optimize these away anyway, but no need to assume that) and improve AllocSetCheck's checking of block header fields. Back-patch to 9.4 where reorderbuffer.c appeared. We could take this further back, but currently there's no evidence that it would be useful. Discussion: https://postgr.es/m/CAMkU=1x1hvue1XYrZoWk_omG0Ja5nBvTdvgrOeVkkeqs71CV8g@mail.gmail.com
pg_xlogdump: Remove extra newline in error message
commit : 197a4c41eef8086e983a442a509bcb510685a895 author : Peter Eisentraut <email@example.com> date : Wed, 8 Mar 2017 09:57:17 -0500 committer: Peter Eisentraut <firstname.lastname@example.org> date : Wed, 8 Mar 2017 09:57:17 -0500
fatal_error() already prints out a trailing newline.
commit : 68392a2af7f0b590859aad6205d1734aae3a2690 author : Magnus Hagander <email@example.com> date : Tue, 7 Mar 2017 22:45:45 -0500 committer: Magnus Hagander <firstname.lastname@example.org> date : Tue, 7 Mar 2017 22:45:45 -0500
Reported by Jeremy Finzel
Repair incorrect pg_dump labeling for some comments and security labels.
commit : b6882e9ecf9405a6f056883fe32148acd6761071 author : Tom Lane <email@example.com> date : Mon, 6 Mar 2017 19:33:59 -0500 committer: Tom Lane <firstname.lastname@example.org> date : Mon, 6 Mar 2017 19:33:59 -0500
We attached no schema label to comments for procedural languages, casts, transforms, operator classes, operator families, or text search objects. The first three categories of objects don't really have schemas, but pg_dump treats them as if they do, and it seems like the TocEntry fields for their comments had better match the TocEntry fields for the parent objects. (As an example of a possible hazard, the type names in a CAST will be formatted with the assumption of a particular search_path, so failing to ensure that this same path is active for the COMMENT ON command could lead to an error or to attaching the comment to the wrong cast.) In the last six cases, this was a flat-out error --- possibly mine to begin with, but it was a long time ago. The security label for a procedural language was likewise not correctly labeled as to schema, and both the comment and security label for a procedural language were not correctly labeled as to owner. In simple cases the restore would accidentally work correctly anyway, since these comments and security labels would normally get emitted right after the owning object, and so the search path and active user would be correct anyhow. But it could fail in corner cases; for example a schema-selective restore would omit comments it should include. Giuseppe Broccolo noted the oversight, and proposed the correct fix, for text search dictionary objects; I found the rest by cross-checking other dumpComment() calls. These oversights are ancient, so back-patch all the way. Discussion: https://postgr.es/m/CAFzmHiWwwzLjzwM4x5ki5s_PDMR6NrkipZkjNnO3B0xEpBgJaA@mail.gmail.com
pg_upgrade: Fix large object COMMENTS, SECURITY LABELS
commit : 6be8647f7862dbbefe4d49d842566738cd753963 author : Stephen Frost <email@example.com> date : Mon, 6 Mar 2017 17:04:13 -0500 committer: Stephen Frost <firstname.lastname@example.org> date : Mon, 6 Mar 2017 17:04:13 -0500
When performing a pg_upgrade, we copy the files behind pg_largeobject and pg_largeobject_metadata, allowing us to avoid having to dump out and reload the actual data for large objects and their ACLs. Unfortunately, that isn't all of the information which can be associated with large objects. Currently, we also support COMMENTs and SECURITY LABELs with large objects and these were being silently dropped during a pg_upgrade as pg_dump would skip everything having to do with a large object and pg_upgrade only copied the tables mentioned to the new cluster. As the file copies happen after the catalog dump and reload, we can't simply include the COMMENTs and SECURITY LABELs in pg_dump's binary-mode output but we also have to include the actual large object definition as well. With the definition, comments, and security labels in the pg_dump output and the file copies performed by pg_upgrade, all of the data and metadata associated with large objects is able to be successfully pulled forward across a pg_upgrade. In 9.6 and master, we can simply adjust the dump bitmask to indicate which components we don't want. In 9.5 and earlier, we have to put explicit checks in dumpBlob() and dumpBlobs() to not include the ACL or the data when in binary-upgrade mode. Adjustments made to the privileges regression test to allow another test (large_object.sql) to be added which explicitly leaves a large object with a comment in place to provide coverage of that case with pg_upgrade. Back-patch to all supported branches. Discussion: https://postgr.es/m/20170221162655.GE9812@tamriel.snowman.net
Avoid dangling pointer to relation name in RLS code path in DoCopy().
commit : 420d9ec0aebb0cd8d3198d25d6acaf07b0a576b9 author : Tom Lane <email@example.com> date : Mon, 6 Mar 2017 16:50:47 -0500 committer: Tom Lane <firstname.lastname@example.org> date : Mon, 6 Mar 2017 16:50:47 -0500
With RLS active, "COPY tab TO ..." failed under -DRELCACHE_FORCE_RELEASE, and would sometimes fail without that, because it used the relation name directly from the relcache as part of the parsetree it's building. That becomes a potentially-dangling pointer as soon as the relcache entry is closed, a bit further down. Typical symptom if the relcache entry chanced to get cleared would be a "relation does not exist" error with a garbage relation name, or possibly a core dump; but if you were really truly unlucky, the COPY might copy from the wrong table. Per report from Andrew Dunstan that regression tests fail with -DRELCACHE_FORCE_RELEASE. The core tests now pass for me (but I have not tried "make check-world" yet). Discussion: https://postgr.es/m/7b52f900-0579-cda9-ae2e-de5da17090e6@2ndQuadrant.com
In rebuild_relation(), don't access an already-closed relcache entry.
commit : 807df31d19e7014df1d3621292589d3653f614f2 author : Tom Lane <email@example.com> date : Sat, 4 Mar 2017 16:09:33 -0500 committer: Tom Lane <firstname.lastname@example.org> date : Sat, 4 Mar 2017 16:09:33 -0500
This reliably fails with -DRELCACHE_FORCE_RELEASE, as reported by Andrew Dunstan, and could sometimes fail in normal operation, resulting in a wrong persistence value being used for the transient table. It's not immediately clear to me what effects that might have beyond the risk of a crash while accessing OldHeap->rd_rel->relpersistence, but it's probably not good. Bug introduced by commit f41872d0c, and made substantially worse by commit 85b506bbf, which added a second such access significantly later than the heap_close. I doubt the first reference could fail in a production scenario, but the second one definitely could. Discussion: https://postgr.es/m/7b52f900-0579-cda9-ae2e-de5da17090e6@2ndQuadrant.com
Fix incorrect variable datatype
commit : 2cc16380a46d4c2103aa6caa6aaf0170863f8509 author : Magnus Hagander <email@example.com> date : Tue, 28 Feb 2017 12:16:42 +0100 committer: Magnus Hagander <firstname.lastname@example.org> date : Tue, 28 Feb 2017 12:16:42 +0100
Both datatypes map to the same underlying one, which is why it still worked, but we should use the correct type. Author: Kyotaro HORIGUCHI
pg_upgrade docs: clarify instructions on standby extensions
commit : 3c6f766f68114785195cf50700c71021942e60a9 author : Bruce Momjian <email@example.com> date : Sat, 25 Feb 2017 12:59:23 -0500 committer: Bruce Momjian <firstname.lastname@example.org> date : Sat, 25 Feb 2017 12:59:23 -0500
Previously the pg_upgrade standby upgrade instructions said not to execute pgcrypto.sql, but it should have referenced the extension command "CREATE EXTENSION pgcrypto". This patch makes that doc change. Reported-by: a private bug report Backpatch-through: 9.4, where standby instructions were added
Fix contrib/pg_trgm's extraction of trigrams from regular expressions.
commit : 513c9f9de2a95f89150e8191a9a0eddc40403bf0 author : Tom Lane <email@example.com> date : Wed, 22 Feb 2017 15:04:07 -0500 committer: Tom Lane <firstname.lastname@example.org> date : Wed, 22 Feb 2017 15:04:07 -0500
The logic for removing excess trigrams from the result was faulty. It intends to avoid merging the initial and final states of the NFA, which is necessary, but in testing whether removal of a specific trigram would cause that, it failed to consider the combined effects of all the state merges that that trigram's removal would cause. This could result in a broken final graph that would never match anything, leading to GIN or GiST indexscans not finding anything. To fix, add a "tentParent" field that is used only within this loop, and set it to show state merges that we are tentatively going to do. While examining a particular arc, we must chase up through tentParent links as well as regular parent links (the former can only appear atop the latter), and we must account for state init/fin flag merges that haven't actually been done yet. To simplify the latter, combine the separate init and fin bool fields into a bitmap flags field. I also chose to get rid of the "children" state list, which seems entirely inessential. Per bug #14563 from Alexey Isayko, which the added test cases are based on. Back-patch to 9.3 where this code was added. Report: https://email@example.com Discussion: https://firstname.lastname@example.org
Make walsender always initialize the buffers.
commit : feb659cced9755966190bbabfcead54fcfcddf0e author : Fujii Masao <email@example.com> date : Wed, 22 Feb 2017 03:11:58 +0900 committer: Fujii Masao <firstname.lastname@example.org> date : Wed, 22 Feb 2017 03:11:58 +0900
Walsender uses local buffers for each outgoing and incoming message. Previously, when creating a replication slot, walsender forgot to initialize one of them, which could cause a segmentation fault. To fix this issue, this commit changes walsender so that it always initializes them before executing the requested replication command. Back-patch to 9.4, where replication slots were introduced. Problem report and initial patch by Stas Kelvich, modified by me. Report: https://www.postgresql.org/message-id/A1E9CB90-1FAC-4CAD-8DBA-9AA62A6E97C5@postgrespro.ru
Fix sloppy handling of corner-case errors in fd.c.
commit : ff1b032a9c9f407bce711400a8edaa57218a2e81 author : Tom Lane <email@example.com> date : Tue, 21 Feb 2017 17:51:28 -0500 committer: Tom Lane <firstname.lastname@example.org> date : Tue, 21 Feb 2017 17:51:28 -0500
Several places in fd.c had badly-thought-through handling of error returns from lseek() and close(). The fact that those would seldom fail on valid FDs is probably the reason we've not noticed this up to now; but if they did fail, we'd get quite confused. LruDelete and LruInsert actually just Assert'd that lseek never fails, which is pretty awful on its face. In LruDelete, we indeed can't throw an error, because that's likely to get called during error abort and so throwing an error would probably just lead to an infinite loop. But by the same token, throwing an error from the close() right after that was ill-advised, not to mention that it would've left the LRU state corrupted since we'd already unlinked the VFD from the list. I also noticed that really, most of the time, we should know the current seek position and it shouldn't be necessary to do an lseek here at all. As patched, if we don't have a seek position and an lseek attempt doesn't give us one, we'll close the file but then subsequent re-open attempts will fail (except in the somewhat-unlikely case that a FileSeek(SEEK_SET) call comes between and allows us to re-establish a known target seek position). This isn't great but it won't result in any state corruption. Meanwhile, having an Assert instead of an honest test in LruInsert is really dangerous: if that lseek failed, a subsequent read or write would read or write from the start of the file, not where the caller expected, leading to data corruption. In both LruDelete and FileClose, if close() fails, just LOG that and mark the VFD closed anyway. Possibly leaking an FD is preferable to getting into an infinite loop or corrupting the VFD list. Besides, as far as I can tell from the POSIX spec, it's unspecified whether or not the file has been closed, so treating it as still open could be the wrong thing anyhow. I also fixed a number of other places that were being sloppy about behaving correctly when the seekPos is unknown. 
Also, I changed FileSeek to return -1 with EINVAL for the cases where it detects a bad offset, rather than throwing a hard elog(ERROR). It seemed pretty inconsistent that some bad-offset cases would get a failure return while others got elog(ERROR). It was missing an offset validity check for the SEEK_CUR case on a closed file, too. Back-patch to all supported branches, since all this code is fundamentally identical in all of them. Discussion: https://email@example.com
doc: Update URL for plr
commit : 931182fe3a2c7725238d4351f42c5318b4143bee author : Peter Eisentraut <firstname.lastname@example.org> date : Tue, 21 Feb 2017 12:35:57 -0500 committer: Peter Eisentraut <email@example.com> date : Tue, 21 Feb 2017 12:35:57 -0500
Fix documentation of to_char/to_timestamp TZ, tz, OF formatting patterns.
commit : 09e9aab4419e585d07360be3799a23a1d57c8a9c author : Tom Lane <firstname.lastname@example.org> date : Mon, 20 Feb 2017 10:05:00 -0500 committer: Tom Lane <email@example.com> date : Mon, 20 Feb 2017 10:05:00 -0500
These are only supported in to_char, not in the other direction, but the documentation failed to mention that. Also, describe TZ/tz as printing the time zone "abbreviation", not "name", because what they print is elsewhere referred to that way. Per bug #14558.
Make src/interfaces/libpq/test clean up after itself.
commit : cfb022dc9edea701f3e4d1ca31105135e196ad99 author : Tom Lane <firstname.lastname@example.org> date : Sun, 19 Feb 2017 17:18:10 -0500 committer: Tom Lane <email@example.com> date : Sun, 19 Feb 2017 17:18:10 -0500
It failed to remove a .o file during "make clean", and it lacked a .gitignore file entirely.
Adjust PL/Tcl regression test to dodge a possible bug or zone dependency.
commit : a8aeb871d9149e55839ffd0b4c62d24da5760f57 author : Tom Lane <firstname.lastname@example.org> date : Sun, 19 Feb 2017 16:14:52 -0500 committer: Tom Lane <email@example.com> date : Sun, 19 Feb 2017 16:14:52 -0500
One case in the PL/Tcl tests is observed to fail on RHEL5 with a Turkish time zone setting. It's not clear if this is an old Tcl bug or something odd about the zone data, but in any case that test is meant to see if the Tcl [clock] command works at all, not what its corner-case behaviors are. Therefore we have no need to test exactly which week a Sunday midnight is considered to fall into. Probe the following Tuesday instead. Discussion: https://firstname.lastname@example.org
Fix help message for pg_basebackup -R
commit : a83e7405a169d3b9ddef08e273ed1b1769fe3491 author : Magnus Hagander <email@example.com> date : Sat, 18 Feb 2017 13:47:06 +0100 committer: Magnus Hagander <firstname.lastname@example.org> date : Sat, 18 Feb 2017 13:47:06 +0100
The recovery.conf file that's generated is specifically for replication, and not needed (or wanted) for regular backup restore, so indicate that in the message.
Document usage of COPT environment variable for adjusting configure flags.
commit : 447591c70c4aaef950f8cee6db188e4c35bc551f author : Tom Lane <email@example.com> date : Fri, 17 Feb 2017 16:11:02 -0500 committer: Tom Lane <firstname.lastname@example.org> date : Fri, 17 Feb 2017 16:11:02 -0500
Also add to the existing rather half-baked description of PROFILE, which does exactly the same thing, but I think people use it differently. Discussion: https://email@example.com
Doc: remove duplicate index entry.
commit : d993171a7aa212db479ec53b47881b4e677d0b5c author : Tom Lane <firstname.lastname@example.org> date : Thu, 16 Feb 2017 11:30:07 -0500 committer: Tom Lane <email@example.com> date : Thu, 16 Feb 2017 11:30:07 -0500
This causes a warning with the old html-docs toolchain, though not with the new. I had originally supposed that we needed both <indexterm> entries to get both a primary index entry and a see-also link; but evidently not, as pointed out by Fabien Coelho. Discussion: https://postgr.es/m/alpine.DEB.2.20.1702161616060.5445@lancre
Formatting and docs corrections for logical decoding output plugins.
commit : 7d1b6bf407ad19247f0a65aaf742b769e0938b2c author : Tom Lane <firstname.lastname@example.org> date : Wed, 15 Feb 2017 18:15:47 -0500 committer: Tom Lane <email@example.com> date : Wed, 15 Feb 2017 18:15:47 -0500
Make the typedefs for output plugins consistent with project style; they were previously not even consistent with each other as to layout or inclusion of parameter names. Make the documentation look the same, and fix errors therein (missing and misdescribed parameters). Back-patch because of the documentation bugs.
Doc: fix typo in logicaldecoding.sgml.
commit : e0084c32f634213d5e0f371b8b4ac6355ded910e author : Tom Lane <firstname.lastname@example.org> date : Wed, 15 Feb 2017 17:31:02 -0500 committer: Tom Lane <email@example.com> date : Wed, 15 Feb 2017 17:31:02 -0500
There's no such field as OutputPluginOptions.output_mode; it's actually output_type. Noted by T. Katsumata. Discussion: https://firstname.lastname@example.org
Make sure that hash join's bulk-tuple-transfer loops are interruptible.
commit : 96ba17640055286bf19aa750d50808430ff37f1d author : Tom Lane <email@example.com> date : Wed, 15 Feb 2017 16:40:06 -0500 committer: Tom Lane <firstname.lastname@example.org> date : Wed, 15 Feb 2017 16:40:06 -0500
The loops in ExecHashJoinNewBatch(), ExecHashIncreaseNumBatches(), and ExecHashRemoveNextSkewBucket() are all capable of iterating over many tuples without ever doing a CHECK_FOR_INTERRUPTS, so that the backend might fail to respond to SIGINT or SIGTERM for an unreasonably long time. Fix that. In the case of ExecHashJoinNewBatch(), it seems useful to put the added CHECK_FOR_INTERRUPTS into ExecHashJoinGetSavedTuple() rather than directly in the loop, because that will also ensure that both principal code paths through ExecHashJoinOuterGetTuple() will do a CHECK_FOR_INTERRUPTS, which seems like a good idea to avoid surprises. Back-patch to all supported branches. Tom Lane and Thomas Munro Discussion: https://email@example.com
Fix YA unwanted behavioral difference with operator_precedence_warning.
commit : 2b47e29f2081f7b2bbe99d240bdd08f63438357e author : Tom Lane <firstname.lastname@example.org> date : Wed, 15 Feb 2017 14:44:00 -0500 committer: Tom Lane <email@example.com> date : Wed, 15 Feb 2017 14:44:00 -0500
Jeff Janes noted that the error cursor position shown for some errors would vary when operator_precedence_warning is turned on. We'd prefer that option to have no undocumented effects, so this isn't desirable. To fix, make sure that an AEXPR_PAREN node has the same exprLocation as its child node. (Note: it would be a little cheaper to use @2 here instead of an exprLocation call, but there are cases where that wouldn't produce the identical answer, so don't do it like that.) Back-patch to 9.5 where this feature was introduced. Discussion: https://postgr.es/m/CAMkU=1ykK+VhhcQ4Ky8KBo9FoaUJH3f3rDQB8TkTXi-ZsBRUkQ@mail.gmail.com
Ignore tablespace ACLs when ignoring schema ACLs.
commit : 660e457f5b1c4647ad2d41496840f793c87d9208 author : Noah Misch <firstname.lastname@example.org> date : Sun, 12 Feb 2017 16:03:41 -0500 committer: Noah Misch <email@example.com> date : Sun, 12 Feb 2017 16:03:41 -0500
The ALTER TABLE ALTER TYPE implementation can issue DROP INDEX and CREATE INDEX to refit existing indexes for the new column type. Since this CREATE INDEX is an implementation detail of an index alteration, the ensuing DefineIndex() should skip ACL checks specific to index creation. It already skips the namespace ACL check. Make it skip the tablespace ACL check, too. Back-patch to 9.2 (all supported versions). Reviewed by Tom Lane.
Blind try to fix portability issue in commit 8f93bd851 et al.
commit : cf73c6bfc7148077cda6ff68dae551c0a2674182 author : Tom Lane <firstname.lastname@example.org> date : Thu, 9 Feb 2017 15:49:57 -0500 committer: Tom Lane <email@example.com> date : Thu, 9 Feb 2017 15:49:57 -0500
The S/390 members of the buildfarm are showing failures indicating that they're having trouble with the rint() calls I added yesterday. There's no good reason for that, and I wonder if it is a compiler bug similar to the one we worked around in d9476b838. Try to fix it using the same method as before, namely to store the result of rint() back into a "double" variable rather than immediately converting to int64. (This isn't entirely waving a dead chicken, since on machines with wider-than-double float registers, the extra store forces a width conversion. I don't know if S/390 is like that, but it seems worth trying.) In passing, merge duplicate ereport() calls in float8_timestamptz(). Per buildfarm.
Fix roundoff problems in float8_timestamptz() and make_interval().
commit : 7786b984825ea720aed3a11ee465dc3d6cfc8d96 author : Tom Lane <firstname.lastname@example.org> date : Wed, 8 Feb 2017 18:04:59 -0500 committer: Tom Lane <email@example.com> date : Wed, 8 Feb 2017 18:04:59 -0500
When converting a float value to integer microseconds, we should be careful to round the value to the nearest integer, typically with rint(); simply assigning to an int64 variable will truncate, causing apparently off-by-one values in cases that should work. Most places in the datetime code got this right, but not these two. float8_timestamptz() is new as of commit e511d878f (9.6). Previous versions effectively depended on interval_mul() to do roundoff correctly, which it does, so this fixes an accuracy regression in 9.6. The problem in make_interval() dates to its introduction in 9.4. Aside from being careful to round, not truncate, let's incorporate the hours and minutes inputs into the result with exact integer arithmetic, rather than risk introducing roundoff error where there need not have been any. float8_timestamptz() problem reported by Erik Nordström, though this is not his proposed patch. make_interval() problem found by me. Discussion: https://postgr.es/m/CAHuQZDS76jTYk3LydPbKpNfw9KbACmD=49dC4BrzHcfPv6yA1A@mail.gmail.com
Correct thinko in last-minute release note item.
commit : 13b30ada998988f14031f57f61396605ae4e9c33 author : Tom Lane <firstname.lastname@example.org> date : Tue, 7 Feb 2017 10:24:25 -0500 committer: Tom Lane <email@example.com> date : Tue, 7 Feb 2017 10:24:25 -0500
The CREATE INDEX CONCURRENTLY bug can only be triggered by row updates, not inserts, since the problem would arise from an update incorrectly being made HOT. Noted by Alvaro.