PostgreSQL 9.2.24 commit log

Stamp 9.2.24.

  
commit   : 8786f783ab2398468a8c4d8eac937fc6533d16e3    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 6 Nov 2017 17:17:39 -0500    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 6 Nov 2017 17:17:39 -0500    


  
  

Last-minute updates for release notes.

  
commit   : 203b965f275061894621a5a359213ac77558d33f    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 6 Nov 2017 12:02:30 -0500    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 6 Nov 2017 12:02:30 -0500    


  
Security: CVE-2017-12172, CVE-2017-15098, CVE-2017-15099  
  

start-scripts: switch to $PGUSER before opening $PGLOG.

  
commit   : eda780281c9c09599d12e783c51905078674b2e8    
  
author   : Noah Misch <noah@leadboat.com>    
date     : Mon, 6 Nov 2017 07:11:10 -0800    
  
committer: Noah Misch <noah@leadboat.com>    
date     : Mon, 6 Nov 2017 07:11:10 -0800    


  
By default, $PGUSER has permission to unlink $PGLOG.  If $PGUSER  
replaces $PGLOG with a symbolic link, the server will corrupt the  
link-targeted file by appending log messages.  Since these scripts open  
$PGLOG as root, the attack works regardless of target file ownership.  
  
"make install" does not install these scripts anywhere.  Users having  
manually installed them in the past should repeat that process to  
acquire this fix.  Most script users have $PGLOG writable to root only,  
located in $PGDATA.  Just before updating one of these scripts, such  
users should rename $PGLOG to $PGLOG.old.  The script will then recreate  
$PGLOG with proper ownership.  
  
Reviewed by Peter Eisentraut.  Reported by Antoine Scemama.  
  
Security: CVE-2017-12172  
  

Translation updates

  
commit   : f6a926757e3ebd5efb25e80d47834b57c0919c32    
  
author   : Peter Eisentraut <peter_e@gmx.net>    
date     : Sun, 5 Nov 2017 17:07:04 -0500    
  
committer: Peter Eisentraut <peter_e@gmx.net>    
date     : Sun, 5 Nov 2017 17:07:04 -0500    


  
Source-Git-URL: git://git.postgresql.org/git/pgtranslation/messages.git  
Source-Git-Hash: d2182acc2b014c9348a6ab60bfd1ce2576506338  
  

Release notes for 10.1, 9.6.6, 9.5.10, 9.4.15, 9.3.20, 9.2.24.

  
commit   : 50714dd8606015d4307fc8edc931a56c142ce8e4    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sun, 5 Nov 2017 13:47:57 -0500    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sun, 5 Nov 2017 13:47:57 -0500    


  
In the v10 branch, also back-patch the effects of 1ff01b390 and c29c57890  
on these files, to reduce future maintenance issues.  (I'd do it further  
back, except that the 9.X branches differ anyway due to xlog-to-wal  
link tag renaming.)  
  

Doc: update URL for check_postgres.

  
commit   : 13406fe23d3c6f3889014df6c001eb8652114a64    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Wed, 1 Nov 2017 22:07:14 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Wed, 1 Nov 2017 22:07:14 -0400    


  
Reported by Dan Vianello.  
  
Discussion: https://postgr.es/m/e6e12f18f70e46848c058084d42fb651@KSTLMEXGP001.CORP.CHARTERCOM.com  
  

Dept of second thoughts: keep aliasp_item in sync with tlistitem.

  
commit   : a4c11c103164be667eef362157b48469ed2d0c10    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 27 Oct 2017 18:16:25 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 27 Oct 2017 18:16:25 -0400    


  
Commit d5b760ecb wasn't quite right, on second thought: if the  
caller didn't ask for column names then it would happily emit  
more Vars than if the caller did ask for column names.  This  
is surely not a good idea.  Advance the aliasp_item whether or  
not we're preparing a colnames list.  
  

Fix crash when columns have been added to the end of a view.

  
commit   : 80e79718d0ead9d4291ab320bea89ec8f24050ad    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 27 Oct 2017 17:10:21 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 27 Oct 2017 17:10:21 -0400    


  
expandRTE() supposed that an RTE_SUBQUERY subquery must have exactly  
as many non-junk tlist items as the RTE has column aliases for it.  
This was true at the time the code was written, and is still true so  
far as parse analysis is concerned --- but when the function is used  
during planning, the subquery might have appeared through insertion  
of a view that now has more columns than it did when the outer query  
was parsed.  This results in a core dump if, for instance, we have  
to expand a whole-row Var that references the subquery.  
  
To avoid crashing, we can either stop expanding the RTE when we run  
out of aliases, or invent new aliases for the added columns.  While  
the latter might be more useful, the former is consistent with what  
expandRTE() does for composite-returning functions in the RTE_FUNCTION  
case, so it seems like we'd better do it that way.  
  
Per bug #14876 from Samuel Horwitz.  This has been busted since commit  
ff1ea2173 allowed views to acquire more columns, so back-patch to all  
supported branches.  
  
Discussion: https://postgr.es/m/20171026184035.1471.82810@wrigleys.postgresql.org  
  

Rethink the dependencies recorded for FieldSelect/FieldStore nodes.

  
commit   : adcfa7bd16f4b256b0523bd2b4c35a1bc6e84199    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 27 Oct 2017 12:18:57 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 27 Oct 2017 12:18:57 -0400    


  
On closer investigation, commits f3ea3e3e8 et al were a few bricks  
shy of a load.  What we need is not so much to lock down the result  
type of a FieldSelect, as to lock down the existence of the column  
it's trying to extract.  Otherwise, we can break it by dropping that  
column.  The dependency on the result type is then held indirectly  
through the column, and doesn't need to be recorded explicitly.  
  
Out of paranoia, I left in the code to record a dependency on the  
result type, but it's used only if we can't identify the pg_class OID  
for the column.  That shouldn't ever happen right now, AFAICS, but  
it seems possible that in future the input node could be marked as  
being of type RECORD rather than some specific composite type.  
  
Likewise for FieldStore.  
  
Like the previous patch, back-patch to all supported branches.  
  
Discussion: https://postgr.es/m/22571.1509064146@sss.pgh.pa.us  
  

Doc: mention that you can't PREPARE TRANSACTION after NOTIFY.

  
commit   : f39fc2b27d2b61827d65562e222d5177a1b4dfe0    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 27 Oct 2017 10:46:07 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 27 Oct 2017 10:46:07 -0400    


  
The NOTIFY page said this already, but the PREPARE TRANSACTION page  
missed it.  
  
Discussion: https://postgr.es/m/20171024010602.1488.80066@wrigleys.postgresql.org  
  

Improve gendef.pl diagnostic on failure to open sym file

  
commit   : d5fb450aab740b52601b4bfb354c4be2662a940b    
  
author   : Andrew Dunstan <andrew@dunslane.net>    
date     : Thu, 26 Oct 2017 10:10:37 -0400    
  
committer: Andrew Dunstan <andrew@dunslane.net>    
date     : Thu, 26 Oct 2017 10:10:37 -0400    


  
There have been numerous buildfarm failures but the diagnostic is  
currently silent about the reason for failure to open the file. Let's  
see if we can get to the bottom of it.  
  
Backpatch to all live branches.  
  

Fix libpq to not require user's home directory to exist.

  
commit   : caeae886e216b6ec2996061f0e08e3cb02f2751b    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Wed, 25 Oct 2017 19:32:25 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Wed, 25 Oct 2017 19:32:25 -0400    


  
Some people like to run libpq-using applications in environments where  
there's no home directory.  We've broken that scenario before (cf commits  
5b4067798 and bd58d9d88), and commit ba005f193 broke it again, by making  
it a hard error if we fail to get the home directory name while looking  
for ~/.pgpass.  The previous precedent is that if we can't get the home  
directory name, we should just silently act as though the file we hoped  
to find there doesn't exist.  Rearrange the new code to honor that.  
  
Looking around, the service-file code added by commit 41a4e4595 had the  
same disease.  Apparently, that escaped notice because it only runs when  
a service name has been specified, which I guess the people who use this  
scenario don't do.  Nonetheless, it's wrong too, so fix that case as well.  
  
Add a comment about this policy to pqGetHomeDirectory, in the probably  
vain hope of forestalling the same error in future.  And upgrade the  
rather miserable commenting in parseServiceInfo, too.  
  
In passing, also back off parseServiceInfo's assumption that only ENOENT  
is an ignorable error from stat() when checking a service file.  We would  
need to ignore at least ENOTDIR as well (cf 5b4067798), and seeing that  
the far-better-tested code for ~/.pgpass treats all stat() failures alike,  
I think this code ought to as well.  
  
Per bug #14872 from Dan Watson.  Back-patch the .pgpass change to v10  
where ba005f193 came in.  The service-file bugs are far older, so  
back-patch the other changes to all supported branches.  
  
Discussion: https://postgr.es/m/20171025200457.1471.34504@wrigleys.postgresql.org  
  

Update time zone data files to tzdata release 2017c.

  
commit   : 7e8d84c366e3a117d4e8e373b5b076a76968a89b    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 23 Oct 2017 18:15:36 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 23 Oct 2017 18:15:36 -0400    


  
DST law changes in Fiji, Namibia, Northern Cyprus, Sudan, Tonga,  
and Turks & Caicos Islands.  Historical corrections for Alaska, Apia,  
Burma, Calcutta, Detroit, Ireland, Namibia, and Pago Pago.  
  

Sync our copy of the timezone library with IANA release tzcode2017c.

  
commit   : 1317d130132743544cd078f0b3d1927e56858053    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 23 Oct 2017 17:54:09 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 23 Oct 2017 17:54:09 -0400    


  
This is a trivial update containing only cosmetic changes.  The point  
is just to get back to being synced with an official release of tzcode,  
rather than some ad-hoc point in their commit history, which is where  
commit 47f849a3c left it.  
  

Fix some oversights in expression dependency recording.

  
commit   : 900a9fd64a0c64b2cfd39dd745950c4ce2dcb114    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 23 Oct 2017 13:57:46 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 23 Oct 2017 13:57:46 -0400    


  
find_expr_references() neglected to record a dependency on the result type  
of a FieldSelect node, allowing a DROP TYPE to break a view or rule that  
contains such an expression.  I think we'd omitted this case intentionally,  
reasoning that there would always be a related dependency ensuring that the  
DROP would cascade to the view.  But at least with nested field selection  
expressions, that's not true, as shown in bug #14867 from Mansur Galiev.  
Add the dependency, and for good measure a dependency on the node's exposed  
collation.  
  
Likewise add a dependency on the result type of a FieldStore.  I think here  
the reasoning was that it'd only appear within an assignment to a field,  
and the dependency on the field's column would be enough ... but having  
seen this example, I think that's wrong for nested-composites cases.  
  
Looking at nearby code, I notice we're not recording a dependency on the  
exposed collation of CoerceViaIO, which seems inconsistent with our choices  
for related node types.  Maybe that's OK but I'm feeling suspicious of this  
code today, so let's add that; it certainly can't hurt.  
  
This patch does not do anything to protect already-existing views, only  
views created after it's installed.  But seeing that the issue has been  
there a very long time and nobody noticed till now, that's probably good  
enough.  
  
Back-patch to all supported branches.  
  
Discussion: https://postgr.es/m/20171023150118.1477.19174@wrigleys.postgresql.org  
  

Fix typcache's failure to treat ranges as container types.

  
commit   : 0270ad1f7474249f993a825d8e15f09666a900eb    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 20 Oct 2017 17:12:28 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 20 Oct 2017 17:12:28 -0400    


  
Like the similar logic for arrays and records, it's necessary to examine  
the range's subtype to decide whether the range type can support hashing.  
We can omit checking the subtype for btree-defined operations, though,  
since range subtypes are required to have those operations.  (Possibly  
that simplification for btree cases led us to overlook that it does  
not apply for hash cases.)  
  
This is only an issue if the subtype lacks hash support, which is not  
true of any built-in range type, but it's easy to demonstrate a problem  
with a range type over, eg, money: you can get a "could not identify  
a hash function" failure when the planner is misled into thinking that  
hash join or aggregation would work.  
  
This was born broken, so back-patch to all supported branches.  
  

Doc: fix missing explanation of default object privileges.

  
commit   : bc639d4885158677ca682360b22f83709435a8ec    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Wed, 11 Oct 2017 16:56:23 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Wed, 11 Oct 2017 16:56:23 -0400    


  
The GRANT reference page, which lists the default privileges for new  
objects, failed to mention that USAGE is granted by default for data  
types and domains.  As a lesser sin, it also did not specify anything  
about the initial privileges for sequences, FDWs, foreign servers,  
or large objects.  Fix that, and add a comment to acldefault() in the  
probably vain hope of getting people to maintain this list in future.  
  
Noted by Laurenz Albe, though I editorialized on the wording a bit.  
Back-patch to all supported branches, since they all have this behavior.  
  
Discussion: https://postgr.es/m/1507620895.4152.1.camel@cybertec.at  
  

Fix low-probability loss of NOTIFY messages due to XID wraparound.

  
commit   : 525b09adae78adb7d0714f980e54ee788ee8dcec    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Wed, 11 Oct 2017 14:28:34 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Wed, 11 Oct 2017 14:28:34 -0400    


  
Up to now async.c has used TransactionIdIsInProgress() to detect whether  
a notify message's source transaction is still running.  However, that  
function has a quick-exit path that reports that XIDs before RecentXmin  
are no longer running.  If a listening backend is doing nothing but  
listening, and not running any queries, there is nothing that will advance  
its value of RecentXmin.  Once 2 billion transactions elapse, the  
RecentXmin check causes active transactions to be reported as not running.  
If they aren't committed yet according to CLOG, async.c decides they  
aborted and discards their messages.  The timing for that is a bit tight  
but it can happen when multiple backends are sending notifies concurrently.  
The net symptom therefore is that a sufficiently-long-surviving  
listen-only backend starts to miss some fraction of NOTIFY traffic,  
but only under heavy load.  
  
The only function that updates RecentXmin is GetSnapshotData().  
A brute-force fix would therefore be to take a snapshot before  
processing incoming notify messages.  But that would add cycles,  
as well as contention for the ProcArrayLock.  We can be smarter:  
having taken the snapshot, let's use that to check for running  
XIDs, and not call TransactionIdIsInProgress() at all.  In this  
way we reduce the number of ProcArrayLock acquisitions from one  
per message to one per notify interrupt; that's the same under  
light load but should be a benefit under heavy load.  Light testing  
says that this change is a wash performance-wise for normal loads.  
  
I looked around for other callers of TransactionIdIsInProgress()  
that might be at similar risk, and didn't find any; all of them  
are inside transactions that presumably have already taken a  
snapshot.  
  
Problem report and diagnosis by Marko Tiikkaja, patch by me.  
Back-patch to all supported branches, since it's been like this  
since 9.0.  
  
Discussion: https://postgr.es/m/20170926182935.14128.65278@wrigleys.postgresql.org  
  

Fix access-off-end-of-array in clog.c.

  
commit   : 6d2ef1cb99ebaa334c9f0406be142635d02c53d0    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 6 Oct 2017 12:20:13 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 6 Oct 2017 12:20:13 -0400    


  
Sloppy loop coding in set_status_by_pages() resulted in fetching one array  
element more than it should from the subxids[] array.  The odds of this  
resulting in SIGSEGV are pretty small, but we've certainly seen that happen  
with similar mistakes elsewhere.  While at it, we can get rid of an extra  
TransactionIdToPage() calculation per loop.  
  
Per report from David Binderman.  Back-patch to all supported branches,  
since this code is quite old.  
  
Discussion: https://postgr.es/m/HE1PR0802MB2331CBA919CBFFF0C465EB429C710@HE1PR0802MB2331.eurprd08.prod.outlook.com  
  

Fix coding rules violations in walreceiver.c

  
commit   : c9c37e335e3a357a854fb825e4de7dcdc4c24680    
  
author   : Alvaro Herrera <alvherre@alvh.no-ip.org>    
date     : Tue, 3 Oct 2017 14:58:25 +0200    
  
committer: Alvaro Herrera <alvherre@alvh.no-ip.org>    
date     : Tue, 3 Oct 2017 14:58:25 +0200    


  
1. Since commit b1a9bad9e744 we had pstrdup() inside a  
spinlock-protected critical section; reported by Andreas Seltenreich.  
Turn those into strlcpy() to stack-allocated variables instead.  
Backpatch to 9.6.  
  
2. Since commit 9ed551e0a4fd we had a pfree() uselessly inside a  
spinlock-protected critical section.  Tom Lane noticed in code review.  
Move down.  Backpatch to 9.6.  
  
3. Since commit 64233902d22b we had GetCurrentTimestamp() (a kernel  
call) inside a spinlock-protected critical section.  Tom Lane noticed in  
code review.  Move it up.  Backpatch to 9.2.  
  
4. Since commit 1bb2558046cc we did elog(PANIC) while holding spinlock.  
Tom Lane noticed in code review.  Release spinlock before dying.  
Backpatch to 9.2.  
  
Discussion: https://postgr.es/m/87h8vhtgj2.fsf@ansel.ydns.eu  
  

Fix behavior when converting a float infinity to numeric.

  
commit   : 72d4fd08ea63e2855c0b54317a09047455ff8369    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Wed, 27 Sep 2017 17:05:54 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Wed, 27 Sep 2017 17:05:54 -0400    


  
float8_numeric() and float4_numeric() failed to consider the possibility  
that the input is an IEEE infinity.  The results depended on the  
platform-specific behavior of sprintf(): on most platforms you'd get  
something like  
  
ERROR:  invalid input syntax for type numeric: "inf"  
  
but at least on Windows it's possible for the conversion to succeed and  
deliver a finite value (typically 1), due to a nonstandard output format  
from sprintf and lack of syntax error checking in these functions.  
  
Since our numeric type lacks the concept of infinity, a suitable conversion  
is impossible; the best thing to do is throw an explicit error before  
letting sprintf do its thing.  
  
While at it, let's use snprintf not sprintf.  Overrunning the buffer  
should be impossible if sprintf does what it's supposed to, but this  
is cheap insurance against a stack smash if it doesn't.  
  
Problem reported by Taiki Kondo.  Patch by me based on fix suggestion  
from KaiGai Kohei.  Back-patch to all supported branches.  
  
Discussion: https://postgr.es/m/12A9442FBAE80D4E8953883E0B84E088C8C7A2@BPXM01GP.gisp.nec.co.jp  
  

Don't recommend "DROP SCHEMA information_schema CASCADE".

  
commit   : 6525a3a70968007666b5fce440f57b8bf7e6303f    
  
author   : Noah Misch <noah@leadboat.com>    
date     : Tue, 26 Sep 2017 22:39:44 -0700    
  
committer: Noah Misch <noah@leadboat.com>    
date     : Tue, 26 Sep 2017 22:39:44 -0700    


  
It drops objects outside information_schema that depend on objects  
inside information_schema.  For example, it will drop a user-defined  
view if the view query refers to information_schema.  
  
Discussion: https://postgr.es/m/20170831025345.GE3963697@rfd.leadboat.com  
  

Improve wording of error message added in commit 714805010.

  
commit   : 9d0d0fe57cccc81430930957ab12fa270e5351d2    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 26 Sep 2017 15:25:57 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 26 Sep 2017 15:25:57 -0400    


  
Per suggestions from Peter Eisentraut and David Johnston.  
Back-patch, like the previous commit.  
  
Discussion: https://postgr.es/m/E1dv9jI-0006oT-Fn@gemulon.postgresql.org  
  

Fix saving and restoring umask

  
commit   : 2eb84e54a25098618724ee633fbad78d5e417489    
  
author   : Peter Eisentraut <peter_e@gmx.net>    
date     : Fri, 22 Sep 2017 16:50:59 -0400    
  
committer: Peter Eisentraut <peter_e@gmx.net>    
date     : Fri, 22 Sep 2017 16:50:59 -0400    


  
In two cases, we set a different umask for some piece of code and  
restore it afterwards.  But if the contained code errors out, the umask  
is not restored.  So add TRY/CATCH blocks to fix that.  
  

Sync our copy of the timezone library with IANA tzcode master.

  
commit   : a07105afacba5895df2e0aa38adb65e242c75b22    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 22 Sep 2017 00:04:21 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 22 Sep 2017 00:04:21 -0400    


  
This patch absorbs a few unreleased fixes in the IANA code.  
It corresponds to commit 2d8b944c1cec0808ac4f7a9ee1a463c28f9cd00a  
in https://github.com/eggert/tz.  Non-cosmetic changes include:  
  
TZDEFRULESTRING is updated to match current US DST practice,  
rather than what it was over ten years ago.  This only matters  
for interpretation of POSIX-style zone names (e.g., "EST5EDT"),  
and only if the timezone database doesn't include either an exact  
match for the zone name or a "posixrules" entry.  The latter  
should not be true in any current Postgres installation, but  
this could possibly matter when using --with-system-tzdata.  
  
Get rid of a nonportable use of "++var" on a bool var.  
This is part of a larger fix that eliminates some vestigial  
support for consecutive leap seconds, and adds checks to  
the "zic" compiler that the data files do not specify that.  
  
Remove a couple of ancient compatibility hacks.  The IANA  
crew think these are obsolete, and I tend to agree.  But  
perhaps our buildfarm will think different.  
  
Back-patch to all supported branches, in line with our policy  
that all branches should be using current IANA code.  Before v10,  
this includes application of current pgindent rules, to avoid  
whitespace problems in future back-patches.  
  
Discussion: https://postgr.es/m/E1dsWhf-0000pT-F9@gemulon.postgresql.org  
  

Give a better error for duplicate entries in VACUUM/ANALYZE column list.

  
commit   : e56facd8b3005aee30fe3d10ef82504d99a2f236    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 21 Sep 2017 18:13:11 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 21 Sep 2017 18:13:11 -0400    


  
Previously, the code didn't think about this case and would just try to  
analyze such a column twice.  That would fail at the point of inserting  
the second version of the pg_statistic row, with obscure error messages  
like "duplicate key value violates unique constraint" or "tuple already  
updated by self", depending on context and PG version.  We could allow  
the case by ignoring duplicate column specifications, but it seems better  
to reject it explicitly.  
  
The bogus error messages seem like arguably a bug, so back-patch to  
all supported versions.  
  
Nathan Bossart, per a report from Michael Paquier, and whacked  
around a bit by me.  
  
Discussion: https://postgr.es/m/E061A8E3-5E3D-494D-94F0-E8A9B312BBFC@amazon.com  
  

Fix possible dangling pointer dereference in trigger.c.

  
commit   : 4cd6cd21d311f09e87190e6810f6c3f04ff6776b    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sun, 17 Sep 2017 14:50:01 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sun, 17 Sep 2017 14:50:01 -0400    


  
AfterTriggerEndQuery correctly notes that the query_stack could get  
repalloc'd during a trigger firing, but it nonetheless passes the address  
of a query_stack entry to afterTriggerInvokeEvents, so that if such a  
repalloc occurs, afterTriggerInvokeEvents is already working with an  
obsolete dangling pointer while it scans the rest of the events.  Oops.  
The only code at risk is its "delete_ok" cleanup code, so we can  
prevent unsafe behavior by passing delete_ok = false instead of true.  
  
However, that could have a significant performance penalty, because the  
point of passing delete_ok = true is to not have to re-scan possibly  
a large number of dead trigger events on the next time through the loop.  
There's more than one way to skin that cat, though.  What we can do is  
delete all the "chunks" in the event list except the last one, since  
we know all events in them must be dead.  Deleting the chunks is work  
we'd have had to do later in AfterTriggerEndQuery anyway, and it ends  
up saving rescanning of just about the same events we'd have gotten  
rid of with delete_ok = true.  
  
In v10 and HEAD, we also have to be careful to mop up any per-table  
after_trig_events pointers that would become dangling.  This is slightly  
annoying, but I don't think that normal use-cases will traverse this code  
path often enough for it to be a performance problem.  
  
It's pretty hard to hit this in practice because of the unlikelihood  
of the query_stack getting resized at just the wrong time.  Nonetheless,  
it's definitely a live bug of ancient standing, so back-patch to all  
supported branches.  
  
Discussion: https://postgr.es/m/2891.1505419542@sss.pgh.pa.us  
  

Fix macro-redefinition warning on MSVC.

  
commit   : a64a0bfaae0c003f09b7338766fd0b5e21b247b0    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sun, 3 Sep 2017 11:01:08 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sun, 3 Sep 2017 11:01:08 -0400    


  
In commit 9d6b160d7, I tweaked pg_config.h.win32 to use  
"#define HAVE_LONG_LONG_INT_64 1" rather than defining it as empty,  
for consistency with what happens in an autoconf'd build.  
But Solution.pm injects another definition of that macro into  
ecpg_config.h, leading to justifiable (though harmless) compiler whining.  
Make that one consistent too.  Back-patch, like the previous patch.  
  
Discussion: https://postgr.es/m/CAEepm=1dWsXROuSbRg8PbKLh0S=8Ou-V8sr05DxmJOF5chBxqQ@mail.gmail.com  
  

doc: Fix typos and other minor issues

  
commit   : c109785e385b670302513b1aea062f89f4252d38    
  
author   : Peter Eisentraut <peter_e@gmx.net>    
date     : Fri, 1 Sep 2017 22:59:27 -0400    
  
committer: Peter Eisentraut <peter_e@gmx.net>    
date     : Fri, 1 Sep 2017 22:59:27 -0400    


  
Author: Alexander Lakhin <exclusion@gmail.com>  
  

Make [U]INT64CONST safe for use in #if conditions.

  
commit   : f60a236bab805c2aaf6818940f1b2ce04f4a7081    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 1 Sep 2017 15:14:18 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 1 Sep 2017 15:14:18 -0400    


  
Instead of using a cast to force the constant to be the right width,  
assume we can plaster on an L, UL, LL, or ULL suffix as appropriate.  
The old approach to this is very hoary, dating from before we were  
willing to require compilers to have working int64 types.  
  
This fix makes the PG_INT64_MIN, PG_INT64_MAX, and PG_UINT64_MAX  
constants safe to use in preprocessor conditions, where a cast  
doesn't work.  Other symbolic constants that might be defined using  
[U]INT64CONST are likewise safer than before.  
  
Also fix the SIZE_MAX macro to be similarly safe, if we are forced  
to provide a definition for that.  The test added in commit 2e70d6b5e  
happens to do what we want even with the hack "(size_t) -1" definition,  
but we could easily get burnt on other tests in future.  
  
Back-patch to all supported branches, like the previous commits.  
  
Discussion: https://postgr.es/m/15883.1504278595@sss.pgh.pa.us  
  

Ensure SIZE_MAX can be used throughout our code.

  
commit   : 0bfcda990405b9774dbe9b7d4ee71fa28a2599b3    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 1 Sep 2017 13:52:54 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 1 Sep 2017 13:52:54 -0400    


  
Pre-C99 platforms may lack <stdint.h> and thereby SIZE_MAX.  We have  
a couple of places using the hack "(size_t) -1" as a fallback, but  
it wasn't universally available; which means the code added in commit  
2e70d6b5e fails to compile everywhere.  Move that hack to c.h so that  
we can rely on having SIZE_MAX everywhere.  
  
Per discussion, it'd be a good idea to make the macro's value safe  
for use in #if-tests, but that will take a bit more work.  This is  
just a quick expedient to get the buildfarm green again.  
  
Back-patch to all supported branches, like the previous commit.  
  
Discussion: https://postgr.es/m/15883.1504278595@sss.pgh.pa.us  
  

Doc: document libpq's restriction to INT_MAX rows in a PGresult.

  
commit   : 2a82170a56cb5b7c8e8444d2089b15ba6492b102    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 29 Aug 2017 15:38:05 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 29 Aug 2017 15:38:05 -0400    


  
As long as PQntuples, PQgetvalue, etc, use "int" for row numbers, we're  
pretty much stuck with this limitation.  The documentation formerly stated  
that the result of PQntuples "might overflow on 32-bit operating systems",  
which is just nonsense: that's not where the overflow would happen, and  
if you did reach an overflow it would not be on a 32-bit machine, because  
you'd have OOM'd long since.  
  
Discussion: https://postgr.es/m/CA+FnnTxyLWyjY1goewmJNxC==HQCCF4fKkoCTa9qR36oRAHDPw@mail.gmail.com  
  

Teach libpq to detect integer overflow in the row count of a PGresult.

  
commit   : a07058a6d43b954c4b6b5dd41299febf9102bdfd    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 29 Aug 2017 15:18:01 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 29 Aug 2017 15:18:01 -0400    


  
Adding more than 1 billion rows to a PGresult would overflow its ntups and  
tupArrSize fields, leading to client crashes.  It'd be desirable to use  
wider fields on 64-bit machines, but because all of libpq's external APIs  
use plain "int" for row counters, that's going to be hard to accomplish  
without an ABI break.  Given the lack of complaints so far, and the general  
pain that would be involved in using such huge PGresults, let's settle for  
just preventing the overflow and reporting a useful error message if it  
does happen.  Also, for a couple more lines of code we can increase the  
threshold of trouble from INT_MAX/2 to INT_MAX rows.  
  
To do that, refactor pqAddTuple() to allow returning an error message that  
replaces the default assumption that it failed because of out-of-memory.  
  
Along the way, fix PQsetvalue() so that it reports all failures via  
pqInternalNotice().  It already did so in the case of bad field number,  
but neglected to report anything for other error causes.  
  
Because of the potential for crashes, this seems like a back-patchable  
bug fix, despite the lack of field reports.  
  
Michael Paquier, per a complaint from Igor Korot.  
  
Discussion: https://postgr.es/m/CA+FnnTxyLWyjY1goewmJNxC==HQCCF4fKkoCTa9qR36oRAHDPw@mail.gmail.com  
  

Improve docs about numeric formatting patterns (to_char/to_number).

  
commit   : d4ab7808bc583180d857821ce0b15219b25698de    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 29 Aug 2017 09:34:21 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 29 Aug 2017 09:34:21 -0400    


  
The explanation about "0" versus "9" format characters was confusing  
and arguably wrong; the discussion of sign handling wasn't very good  
either.  Notably, while it's accurate to say that "FM" strips leading  
zeroes in date/time values, what it really does with numeric values  
is to strip *trailing* zeroes, and then only if you wrote "9" rather  
than "0".  Per gripes from Erwin Brandstetter.  
  
Discussion: https://postgr.es/m/CAGHENJ7jgRbTn6nf48xNZ=FHgL2WQ4X8mYsUAU57f-vq8PubEw@mail.gmail.com  
Discussion: https://postgr.es/m/CAGHENJ45ymd=GOCu1vwV9u7GmCR80_5tW0fP9C_gJKbruGMHvQ@mail.gmail.com