PostgreSQL 9.3.20 commit log

Stamp 9.3.20.

  
commit   : f3eff7b5c053735868c3967b7426d9f28d86873f    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 6 Nov 2017 17:15:48 -0500    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 6 Nov 2017 17:15:48 -0500    


  
  

Last-minute updates for release notes.

  
commit   : fb3930ab1fdb53ad842307a47ddaa1fed4e85d5c    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 6 Nov 2017 12:02:30 -0500    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 6 Nov 2017 12:02:30 -0500    


  
Security: CVE-2017-12172, CVE-2017-15098, CVE-2017-15099  
  

Make json{b}_populate_recordset() use the right tuple descriptor.

  
commit   : c0c8807ded2f59c25b375998ef24ff09994563a1    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 6 Nov 2017 10:29:17 -0500    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 6 Nov 2017 10:29:17 -0500    


  
json{b}_populate_recordset() used the tuple descriptor created from the  
query-level AS clause without worrying about whether it matched the actual  
input record type.  If it didn't, that would usually result in a crash,  
though disclosure of server memory contents seems possible as well, for a  
skilled attacker capable of issuing crafted SQL commands.  Instead, use  
the query-supplied descriptor only when there is no input tuple to look at,  
and otherwise get a tuple descriptor based on the input tuple's own type  
marking.  The core code will detect any type mismatch in the latter case.  
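The hazard can be illustrated with a Python sketch (not the server's C code; the column names and record shape here are hypothetical): trusting the query-declared column list over the record's own shape reads fields that may not exist, whereas building the layout from the record itself stays consistent.

```python
# Hypothetical illustration of the descriptor mismatch fixed above.
declared = ["a", "b", "c"]   # query-level AS (a int, b int, c int)
record = {"a": 1}            # the actual input record's own type

# Wrong: index the record by the caller-declared descriptor.
try:
    row = [record[col] for col in declared]
except KeyError as err:
    print("mismatch detected:", err)

# Right: derive the descriptor from the input record's own marking.
row = [record[col] for col in record]
print(row)
```

In the C implementation the mismatch manifested as a crash or memory disclosure rather than a clean error, which is what made it a security issue.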
  
Michael Paquier and Tom Lane, per a report from David Rowley.  
Back-patch to 9.3 where this functionality was introduced.  
  
Security: CVE-2017-15098  
  

start-scripts: switch to $PGUSER before opening $PGLOG.

  
commit   : b5002976804cfd42ada725b30cff324ebd3e9638    
  
author   : Noah Misch <noah@leadboat.com>    
date     : Mon, 6 Nov 2017 07:11:10 -0800    
  
committer: Noah Misch <noah@leadboat.com>    
date     : Mon, 6 Nov 2017 07:11:10 -0800    


  
By default, $PGUSER has permission to unlink $PGLOG.  If $PGUSER  
replaces $PGLOG with a symbolic link, the server will corrupt the  
link-targeted file by appending log messages.  Since these scripts open  
$PGLOG as root, the attack works regardless of target file ownership.  
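A minimal Python sketch of the attack (throwaway temp paths stand in for $PGLOG and the target; this is an illustration, not the init scripts themselves): a privileged process that opens an attacker-replaceable path for append follows the symlink and corrupts the target.

```python
import os
import tempfile

# Set up a sandbox standing in for the log directory.
d = tempfile.mkdtemp()
victim = os.path.join(d, "victim")          # a file $PGUSER cannot write directly
with open(victim, "w") as f:
    f.write("precious contents\n")

pglog = os.path.join(d, "serverlog")        # the $PGLOG path
os.symlink(victim, pglog)                   # the attacker's move: replace log with a link

with open(pglog, "a") as f:                 # root opening $PGLOG for append
    f.write("LOG: database system is ready\n")

print(open(victim).read())                  # the victim file now contains the log line
```

Switching to $PGUSER before opening $PGLOG means the open happens with the unprivileged user's permissions, so the link target's ownership protects it again.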
  
"make install" does not install these scripts anywhere.  Users having  
manually installed them in the past should repeat that process to  
acquire this fix.  Most script users have $PGLOG writable to root only,  
located in $PGDATA.  Just before updating one of these scripts, such  
users should rename $PGLOG to $PGLOG.old.  The script will then recreate  
$PGLOG with proper ownership.  
  
Reviewed by Peter Eisentraut.  Reported by Antoine Scemama.  
  
Security: CVE-2017-12172  
  

Translation updates

  
commit   : 1ea3f6ae0d1f2805100ce1b2e1d7e86f63b1f17b    
  
author   : Peter Eisentraut <peter_e@gmx.net>    
date     : Sun, 5 Nov 2017 17:05:18 -0500    
  
committer: Peter Eisentraut <peter_e@gmx.net>    
date     : Sun, 5 Nov 2017 17:05:18 -0500    


  
Source-Git-URL: git://git.postgresql.org/git/pgtranslation/messages.git  
Source-Git-Hash: 8cd11c648f4c98378ff0a2b5e1e92ab54f69a4a5  
  

Release notes for 10.1, 9.6.6, 9.5.10, 9.4.15, 9.3.20, 9.2.24.

  
commit   : 0f894849b7ea3a6d6eb97ef135ed8ca89b1d4480    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sun, 5 Nov 2017 13:47:57 -0500    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sun, 5 Nov 2017 13:47:57 -0500    


  
In the v10 branch, also back-patch the effects of 1ff01b390 and c29c57890  
on these files, to reduce future maintenance issues.  (I'd do it further  
back, except that the 9.X branches differ anyway due to xlog-to-wal  
link tag renaming.)  
  

Improve error message for incorrect number inputs in libecpg.

  
commit   : deb429b51ed37e5c069f5d1fd659244a29d3a769    
  
author   : Michael Meskes <meskes@postgresql.org>    
date     : Fri, 3 Nov 2017 11:14:30 +0100    
  
committer: Michael Meskes <meskes@postgresql.org>    
date     : Fri, 3 Nov 2017 11:14:30 +0100    


  
  

Fix float parsing in ecpg INFORMIX mode.

  
commit   : 7a35507acceb07c4ed1a7a0c82db50eee3101df3    
  
author   : Michael Meskes <meskes@postgresql.org>    
date     : Thu, 2 Nov 2017 20:46:34 +0100    
  
committer: Michael Meskes <meskes@postgresql.org>    
date     : Thu, 2 Nov 2017 20:46:34 +0100    


  
  

Revert bogus fixes of HOT-freezing bug

  
commit   : f05ae2fa94b4e8c8fae4ccbc8d79cfbaa6a0e7b2    
  
author   : Alvaro Herrera <alvherre@alvh.no-ip.org>    
date     : Thu, 2 Nov 2017 15:51:05 +0100    
  
committer: Alvaro Herrera <alvherre@alvh.no-ip.org>    
date     : Thu, 2 Nov 2017 15:51:05 +0100    


  
It turns out we misdiagnosed what the real problem was.  Revert the  
previous changes, because they may have worse consequences going  
forward.  A better fix is forthcoming.  
  
The simplistic test case is kept, though disabled.  
  
Discussion: https://postgr.es/m/20171102112019.33wb7g5wp4zpjelu@alap3.anarazel.de  
  

Doc: update URL for check_postgres.

  
commit   : e89867c033291b01614cb1a0e6fe76ce4c478ea6    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Wed, 1 Nov 2017 22:07:14 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Wed, 1 Nov 2017 22:07:14 -0400    


  
Reported by Dan Vianello.  
  
Discussion: https://postgr.es/m/e6e12f18f70e46848c058084d42fb651@KSTLMEXGP001.CORP.CHARTERCOM.com  
  

Make sure ecpglib accepts digits after the decimal point even for integers in Informix mode.

  
commit   : d64a4d3683d760ad7ba4e346b07f7f6022fa8930    
  
author   : Michael Meskes <meskes@postgresql.org>    
date     : Wed, 1 Nov 2017 13:32:18 +0100    
  
committer: Michael Meskes <meskes@postgresql.org>    
date     : Wed, 1 Nov 2017 13:32:18 +0100    


  
Spotted and fixed by 高增琦 <pgf00a@gmail.com>  
  

Dept of second thoughts: keep aliasp_item in sync with tlistitem.

  
commit   : e06b9e9dc8377598d53791f340b8a973c6513b98    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 27 Oct 2017 18:16:25 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 27 Oct 2017 18:16:25 -0400    


  
Commit d5b760ecb wasn't quite right, on second thought: if the  
caller didn't ask for column names then it would happily emit  
more Vars than if the caller did ask for column names.  This  
is surely not a good idea.  Advance the aliasp_item whether or  
not we're preparing a colnames list.  
  

Fix crash when columns have been added to the end of a view.

  
commit   : 9d15b8b36a9185f9990309770f480900310c75d4    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 27 Oct 2017 17:10:21 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 27 Oct 2017 17:10:21 -0400    


  
expandRTE() supposed that an RTE_SUBQUERY subquery must have exactly  
as many non-junk tlist items as the RTE has column aliases for it.  
This was true at the time the code was written, and is still true so  
far as parse analysis is concerned --- but when the function is used  
during planning, the subquery might have appeared through insertion  
of a view that now has more columns than it did when the outer query  
was parsed.  This results in a core dump if, for instance, we have  
to expand a whole-row Var that references the subquery.  
  
To avoid crashing, we can either stop expanding the RTE when we run  
out of aliases, or invent new aliases for the added columns.  While  
the latter might be more useful, the former is consistent with what  
expandRTE() does for composite-returning functions in the RTE_FUNCTION  
case, so it seems like we'd better do it that way.  
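The chosen behavior can be sketched in Python (hypothetical column names; the real code walks C lists): pairing stops when the alias list runs out, so columns the view acquired after parse time are simply not expanded.

```python
# The subquery's target list after the view gained a column post-parse,
# versus the aliases recorded when the outer query was parsed.
tlist = ["a", "b", "c_new"]     # view now has three output columns
aliases = ["a", "b"]            # outer query only recorded two

# zip stops at the shorter sequence: expand only as many columns as
# we have aliases for, instead of walking off the alias list.
expanded = list(zip(aliases, tlist))
print(expanded)                 # [('a', 'a'), ('b', 'b')]
```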
  
Per bug #14876 from Samuel Horwitz.  This has been busted since commit  
ff1ea2173 allowed views to acquire more columns, so back-patch to all  
supported branches.  
  
Discussion: https://postgr.es/m/20171026184035.1471.82810@wrigleys.postgresql.org  
  

Rethink the dependencies recorded for FieldSelect/FieldStore nodes.

  
commit   : be203c36a6e8c6b94230e884f6390a57b7e2387a    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 27 Oct 2017 12:18:57 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 27 Oct 2017 12:18:57 -0400    


  
On closer investigation, commits f3ea3e3e8 et al were a few bricks  
shy of a load.  What we need is not so much to lock down the result  
type of a FieldSelect, as to lock down the existence of the column  
it's trying to extract.  Otherwise, we can break it by dropping that  
column.  The dependency on the result type is then held indirectly  
through the column, and doesn't need to be recorded explicitly.  
  
Out of paranoia, I left in the code to record a dependency on the  
result type, but it's used only if we can't identify the pg_class OID  
for the column.  That shouldn't ever happen right now, AFAICS, but  
it seems possible that in future the input node could be marked as  
being of type RECORD rather than some specific composite type.  
  
Likewise for FieldStore.  
  
Like the previous patch, back-patch to all supported branches.  
  
Discussion: https://postgr.es/m/22571.1509064146@sss.pgh.pa.us  
  

Doc: mention that you can't PREPARE TRANSACTION after NOTIFY.

  
commit   : 7102efd9d71f1398bdd9d695c2d1e730deaeb4d2    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 27 Oct 2017 10:46:07 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 27 Oct 2017 10:46:07 -0400    


  
The NOTIFY page said this already, but the PREPARE TRANSACTION page  
missed it.  
  
Discussion: https://postgr.es/m/20171024010602.1488.80066@wrigleys.postgresql.org  
  

Improve gendef.pl diagnostic on failure to open sym file

  
commit   : 0cf721244a819cbd67b2bc0a8c8a97adcb53944e    
  
author   : Andrew Dunstan <andrew@dunslane.net>    
date     : Thu, 26 Oct 2017 10:10:37 -0400    
  
committer: Andrew Dunstan <andrew@dunslane.net>    
date     : Thu, 26 Oct 2017 10:10:37 -0400    


  
There have been numerous buildfarm failures but the diagnostic is  
currently silent about the reason for failure to open the file. Let's  
see if we can get to the bottom of it.  
  
Backpatch to all live branches.  
  

Fix libpq to not require user's home directory to exist.

  
commit   : 6dd7a12075c208e1af6d3b38e65c0003a0921509    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Wed, 25 Oct 2017 19:32:24 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Wed, 25 Oct 2017 19:32:24 -0400    


  
Some people like to run libpq-using applications in environments where  
there's no home directory.  We've broken that scenario before (cf commits  
5b4067798 and bd58d9d88), and commit ba005f193 broke it again, by making  
it a hard error if we fail to get the home directory name while looking  
for ~/.pgpass.  The previous precedent is that if we can't get the home  
directory name, we should just silently act as though the file we hoped  
to find there doesn't exist.  Rearrange the new code to honor that.  
  
Looking around, the service-file code added by commit 41a4e4595 had the  
same disease.  Apparently, that escaped notice because it only runs when  
a service name has been specified, which I guess the people who use this  
scenario don't do.  Nonetheless, it's wrong too, so fix that case as well.  
  
Add a comment about this policy to pqGetHomeDirectory, in the probably  
vain hope of forestalling the same error in future.  And upgrade the  
rather miserable commenting in parseServiceInfo, too.  
  
In passing, also back off parseServiceInfo's assumption that only ENOENT  
is an ignorable error from stat() when checking a service file.  We would  
need to ignore at least ENOTDIR as well (cf 5b4067798), and seeing that  
the far-better-tested code for ~/.pgpass treats all stat() failures alike,  
I think this code ought to as well.  
  
Per bug #14872 from Dan Watson.  Back-patch the .pgpass change to v10  
where ba005f193 came in.  The service-file bugs are far older, so  
back-patch the other changes to all supported branches.  
  
Discussion: https://postgr.es/m/20171025200457.1471.34504@wrigleys.postgresql.org  
  

Update time zone data files to tzdata release 2017c.

  
commit   : da82bb1d8fab221f319ae9b1b9739cdd2125df09    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 23 Oct 2017 18:15:36 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 23 Oct 2017 18:15:36 -0400    


  
DST law changes in Fiji, Namibia, Northern Cyprus, Sudan, Tonga,  
and Turks & Caicos Islands.  Historical corrections for Alaska, Apia,  
Burma, Calcutta, Detroit, Ireland, Namibia, and Pago Pago.  
  

Sync our copy of the timezone library with IANA release tzcode2017c.

  
commit   : 9c74dd2d5bc2013dec97f3bcd92a0487bb35a061    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 23 Oct 2017 17:54:09 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 23 Oct 2017 17:54:09 -0400    


  
This is a trivial update containing only cosmetic changes.  The point  
is just to get back to being synced with an official release of tzcode,  
rather than some ad-hoc point in their commit history, which is where  
commit 47f849a3c left it.  
  

Fix some oversights in expression dependency recording.

  
commit   : dde99de1169e186fe01491959fefcaea5a13f7f0    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 23 Oct 2017 13:57:46 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 23 Oct 2017 13:57:46 -0400    


  
find_expr_references() neglected to record a dependency on the result type  
of a FieldSelect node, allowing a DROP TYPE to break a view or rule that  
contains such an expression.  I think we'd omitted this case intentionally,  
reasoning that there would always be a related dependency ensuring that the  
DROP would cascade to the view.  But at least with nested field selection  
expressions, that's not true, as shown in bug #14867 from Mansur Galiev.  
Add the dependency, and for good measure a dependency on the node's exposed  
collation.  
  
Likewise add a dependency on the result type of a FieldStore.  I think here  
the reasoning was that it'd only appear within an assignment to a field,  
and the dependency on the field's column would be enough ... but having  
seen this example, I think that's wrong for nested-composites cases.  
  
Looking at nearby code, I notice we're not recording a dependency on the  
exposed collation of CoerceViaIO, which seems inconsistent with our choices  
for related node types.  Maybe that's OK but I'm feeling suspicious of this  
code today, so let's add that; it certainly can't hurt.  
  
This patch does not do anything to protect already-existing views, only  
views created after it's installed.  But seeing that the issue has been  
there a very long time and nobody noticed till now, that's probably good  
enough.  
  
Back-patch to all supported branches.  
  
Discussion: https://postgr.es/m/20171023150118.1477.19174@wrigleys.postgresql.org  
  

Fix typcache's failure to treat ranges as container types.

  
commit   : 7c70a129ef030220748060e9fc7c5c916a9c70da    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 20 Oct 2017 17:12:27 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 20 Oct 2017 17:12:27 -0400    


  
Like the similar logic for arrays and records, it's necessary to examine  
the range's subtype to decide whether the range type can support hashing.  
We can omit checking the subtype for btree-defined operations, though,  
since range subtypes are required to have those operations.  (Possibly  
that simplification for btree cases led us to overlook that it does  
not apply for hash cases.)  
  
This is only an issue if the subtype lacks hash support, which is not  
true of any built-in range type, but it's easy to demonstrate a problem  
with a range type over, eg, money: you can get a "could not identify  
a hash function" failure when the planner is misled into thinking that  
hash join or aggregation would work.  
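In Python pseudocode (hypothetical type names, not the typcache's actual catalog lookup), the rule being fixed is that a range inherits hash support from its subtype, just as arrays and records inherit it from their elements:

```python
# Illustrative stand-in for "does the subtype have a hash opclass?".
hashable_subtypes = {"int4", "int8", "text"}

def range_supports_hash(subtype):
    # A range type can support hashing only if its subtype can.
    return subtype in hashable_subtypes

print(range_supports_hash("int4"))    # True
print(range_supports_hash("money"))   # False: planner must not pick hash join
```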
  
This was born broken, so back-patch to all supported branches.  
  

Fix misparsing of non-newline-terminated pg_hba.conf files.

  
commit   : 06b2a73edaf880dcf36b6e1108a9bb7cf007dc57    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 17 Oct 2017 12:15:08 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 17 Oct 2017 12:15:08 -0400    


  
This back-patches the v10-cycle commit 1e5a5d03d into 9.3 - 9.6.  
I had noticed at the time that that was fixing a bug, namely that  
next_token() might advance *lineptr past the line-terminating '\0',  
but given the lack of field complaints I too easily convinced myself  
that the problem was only latent.  It's not, because tokenize_file()  
decides whether there's more on the line using "strlen(lineptr)".  
  
The bug is indeed latent on a newline-terminated line, because then  
the newline-stripping bit in tokenize_file() means we'll have two  
or more consecutive '\0's in the buffer, masking the fact that we  
accidentally advanced over the first one.  But the last line in  
the file might not be null-terminated, allowing the loop to see  
and process garbage, as reported by Mark Jones in bug #14859.  
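A Python analogue of the scanning bug (the real code is C stepping a char pointer through a buffer): once the scan advances past the terminating NUL, a caller that checks "is there more on the line?" with strlen() sees whatever stale bytes follow.

```python
# A line buffer whose string ends at the first NUL, with stale bytes after it.
buf = b"abc\x00garbage\x00"

i = buf.index(b"\x00")      # scan to the terminator, as next_token() should stop
i += 1                      # the bug: step over the terminator

# strlen() of what the pointer now sees: nonzero, so the caller keeps tokenizing.
rest = buf[i:buf.index(b"\x00", i)]
print(len(rest), rest)      # 7 b'garbage'
```

With a newline-terminated line the stripped newline leaves a second NUL right behind the first, masking the overrun, which is why only a final unterminated line exposed it.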
  
The bug doesn't exist in <= 9.2; there next_token() is reading directly  
from a file, and termination of the outer loop relies on an feof() test  
not a buffer pointer check.  Probably commit 7f49a67f9 can be blamed  
for this bug, but I didn't track it down exactly.  
  
Commit 1e5a5d03d does a bit more than the minimum needed to fix the  
bug, but I felt the rest of it was good cleanup, so applying it all.  
  
Discussion: https://postgr.es/m/20171017141814.8203.27280@wrigleys.postgresql.org  
  

Doc: fix missing explanation of default object privileges.

  
commit   : f7126b4d6524b1118cebd2a4cfdbe02f80eaa965    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Wed, 11 Oct 2017 16:56:23 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Wed, 11 Oct 2017 16:56:23 -0400    


  
The GRANT reference page, which lists the default privileges for new  
objects, failed to mention that USAGE is granted by default for data  
types and domains.  As a lesser sin, it also did not specify anything  
about the initial privileges for sequences, FDWs, foreign servers,  
or large objects.  Fix that, and add a comment to acldefault() in the  
probably vain hope of getting people to maintain this list in future.  
  
Noted by Laurenz Albe, though I editorialized on the wording a bit.  
Back-patch to all supported branches, since they all have this behavior.  
  
Discussion: https://postgr.es/m/1507620895.4152.1.camel@cybertec.at  
  

Fix low-probability loss of NOTIFY messages due to XID wraparound.

  
commit   : 7573d122f157b136b7d2af2f3f112e702c8d7437    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Wed, 11 Oct 2017 14:28:33 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Wed, 11 Oct 2017 14:28:33 -0400    


  
Up to now async.c has used TransactionIdIsInProgress() to detect whether  
a notify message's source transaction is still running.  However, that  
function has a quick-exit path that reports that XIDs before RecentXmin  
are no longer running.  If a listening backend is doing nothing but  
listening, and not running any queries, there is nothing that will advance  
its value of RecentXmin.  Once 2 billion transactions elapse, the  
RecentXmin check causes active transactions to be reported as not running.  
If they aren't committed yet according to CLOG, async.c decides they  
aborted and discards their messages.  The timing for that is a bit tight  
but it can happen when multiple backends are sending notifies concurrently.  
The net symptom therefore is that a sufficiently-long-surviving  
listen-only backend starts to miss some fraction of NOTIFY traffic,  
but only under heavy load.  
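The mechanism can be sketched with the circular mod-2^32 XID comparison PostgreSQL uses (a Python rendering of the C idiom `(int32)(a - b) < 0`; variable names are illustrative): if RecentXmin never advances, then after about 2 billion XIDs a genuinely running transaction wraps into the "precedes RecentXmin" half of the circle.

```python
def xid_precedes(a, b):
    # True when a is less than 2^31 steps behind b, modulo 2^32.
    return ((a - b) & 0xFFFFFFFF) >= 0x80000000

recent_xmin = 100                                   # frozen: backend only listens
running = (recent_xmin + 0x80000001) & 0xFFFFFFFF   # ~2 billion transactions later

# The quick-exit test now classifies an active transaction as long gone.
print(xid_precedes(running, recent_xmin))           # True
```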
  
The only function that updates RecentXmin is GetSnapshotData().  
A brute-force fix would therefore be to take a snapshot before  
processing incoming notify messages.  But that would add cycles,  
as well as contention for the ProcArrayLock.  We can be smarter:  
having taken the snapshot, let's use that to check for running  
XIDs, and not call TransactionIdIsInProgress() at all.  In this  
way we reduce the number of ProcArrayLock acquisitions from one  
per message to one per notify interrupt; that's the same under  
light load but should be a benefit under heavy load.  Light testing  
says that this change is a wash performance-wise for normal loads.  
  
I looked around for other callers of TransactionIdIsInProgress()  
that might be at similar risk, and didn't find any; all of them  
are inside transactions that presumably have already taken a  
snapshot.  
  
Problem report and diagnosis by Marko Tiikkaja, patch by me.  
Back-patch to all supported branches, since it's been like this  
since 9.0.  
  
Discussion: https://postgr.es/m/20170926182935.14128.65278@wrigleys.postgresql.org  
  

Fix access-off-end-of-array in clog.c.

  
commit   : d45fb6e9d8ca02c4e047d43cb493066a372256d3    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 6 Oct 2017 12:20:13 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 6 Oct 2017 12:20:13 -0400    


  
Sloppy loop coding in set_status_by_pages() resulted in fetching one array  
element more than it should from the subxids[] array.  The odds of this  
resulting in SIGSEGV are pretty small, but we've certainly seen that happen  
with similar mistakes elsewhere.  While at it, we can get rid of an extra  
TransactionIdToPage() calculation per loop.  
  
Per report from David Binderman.  Back-patch to all supported branches,  
since this code is quite old.  
  
Discussion: https://postgr.es/m/HE1PR0802MB2331CBA919CBFFF0C465EB429C710@HE1PR0802MB2331.eurprd08.prod.outlook.com  
  

Fix traversal of half-frozen update chains

  
commit   : b052d524ca712450b0853bb0d5dbd2d2ce139c05    
  
author   : Alvaro Herrera <alvherre@alvh.no-ip.org>    
date     : Fri, 6 Oct 2017 17:14:42 +0200    
  
committer: Alvaro Herrera <alvherre@alvh.no-ip.org>    
date     : Fri, 6 Oct 2017 17:14:42 +0200    


  
When some tuple versions in an update chain are frozen due to them being  
older than freeze_min_age, the xmax/xmin trail can become broken.  This  
breaks HOT (and probably other things).  A subsequent VACUUM can break  
things in more serious ways, such as leaving orphan heap-only tuples  
whose root HOT redirect items were removed.  This can be seen because  
index creation (or REINDEX) complains with an error like  
  ERROR:  XX000: failed to find parent tuple for heap-only tuple at (0,7) in table "t"  
  
Because of relfrozenxid constraints, we cannot avoid the freezing of the  
early tuples, so we must cope with the results: whenever we see an Xmin  
of FrozenTransactionId, consider it a match for whatever the previous  
Xmax value was.  
  
This problem seems to have appeared in 9.3 with multixact changes,  
though strictly speaking it seems unrelated.  
  
Since 9.4 we have commit 37484ad2a "Change the way we mark tuples as  
frozen", so the fix is simple: just compare the raw Xmin (still stored  
in the tuple header, since freezing merely set an infomask bit) to the  
Xmax.  But in 9.3 we rewrite the Xmin value to FrozenTransactionId, so  
the original value is lost and we have nothing to compare the Xmax with.  
To cope with that case we need to compare the Xmin with FrozenXid,  
assume it's a match, and hope for the best.  Sadly, since you can  
pg_upgrade a 9.3 instance containing half-frozen pages to newer  
releases, we need to keep the old check in newer versions too, which  
seems a bit brittle; I hope we can somehow get rid of that.  
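The 9.3-branch matching rule can be sketched in Python (FrozenTransactionId is 2 in PostgreSQL's transam.h; the function name here is illustrative, not the actual routine):

```python
FROZEN_XID = 2   # FrozenTransactionId

def chain_xmin_matches(xmin, prior_xmax):
    # An update-chain member matches the prior tuple's Xmax if its Xmin
    # equals it, or if the Xmin was overwritten with FrozenXid (9.3-style
    # freezing), in which case we assume a match and hope for the best.
    return xmin == prior_xmax or xmin == FROZEN_XID

print(chain_xmin_matches(FROZEN_XID, 12345))  # True: original Xmin is lost
print(chain_xmin_matches(500, 12345))         # False: genuinely broken chain
```

From 9.4 on, the raw Xmin survives freezing (only an infomask bit is set), so the comparison can be exact there.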
  
I didn't optimize the new function for performance.  The new coding is  
probably a bit slower than before, since there is a function call rather  
than a straight comparison, but I'd rather have it work correctly than  
be fast but wrong.  
  
This is a followup after 20b655224249 fixed a few related problems.  
Apparently, in 9.6 and up there are more ways to get into trouble, but  
in 9.3 - 9.5 I cannot reproduce a problem anymore with this patch, so  
there must be a separate bug.  
  
Reported-by: Peter Geoghegan  
Diagnosed-by: Peter Geoghegan, Michael Paquier, Daniel Wood,  
	Yi Wen Wong, Álvaro  
Discussion: https://postgr.es/m/CAH2-Wznm4rCrhFAiwKPWTpEw2bXDtgROZK7jWWGucXeH3D1fmA@mail.gmail.com  
  

Fix coding rules violations in walreceiver.c

  
commit   : b24f15f86de48a5f1dd499c8af4d16c696b2c656    
  
author   : Alvaro Herrera <alvherre@alvh.no-ip.org>    
date     : Tue, 3 Oct 2017 14:58:25 +0200    
  
committer: Alvaro Herrera <alvherre@alvh.no-ip.org>    
date     : Tue, 3 Oct 2017 14:58:25 +0200    


  
1. Since commit b1a9bad9e744 we had pstrdup() inside a  
spinlock-protected critical section; reported by Andreas Seltenreich.  
Turn those into strlcpy() to stack-allocated variables instead.  
Backpatch to 9.6.  
  
2. Since commit 9ed551e0a4fd we had a pfree() uselessly inside a  
spinlock-protected critical section.  Tom Lane noticed in code review.  
Move down.  Backpatch to 9.6.  
  
3. Since commit 64233902d22b we had GetCurrentTimestamp() (a kernel  
call) inside a spinlock-protected critical section.  Tom Lane noticed in  
code review.  Move it up.  Backpatch to 9.2.  
  
4. Since commit 1bb2558046cc we did elog(PANIC) while holding spinlock.  
Tom Lane noticed in code review.  Release spinlock before dying.  
Backpatch to 9.2.  
  
Discussion: https://postgr.es/m/87h8vhtgj2.fsf@ansel.ydns.eu  
  

Fix freezing of a dead HOT-updated tuple

  
commit   : d149aa762c05ba904e77f8cf27da7ad821f5ecd0    
  
author   : Alvaro Herrera <alvherre@alvh.no-ip.org>    
date     : Thu, 28 Sep 2017 16:44:01 +0200    
  
committer: Alvaro Herrera <alvherre@alvh.no-ip.org>    
date     : Thu, 28 Sep 2017 16:44:01 +0200    


  
Vacuum calls page-level HOT prune to remove dead HOT tuples before doing  
liveness checks (HeapTupleSatisfiesVacuum) on the remaining tuples.  But  
concurrent transaction commit/abort may turn DEAD some of the HOT tuples  
that survived the prune, before HeapTupleSatisfiesVacuum tests them.  
This happens to activate the code that decides to freeze the tuple ...  
which resuscitates it, duplicating data.  
  
(This is especially bad if there are any unique constraints, because those  
are now internally violated due to the duplicate entries, though you  
won't know until you try to REINDEX or dump/restore the table.)  
  
One possible fix would be to simply skip doing anything to the tuple,  
and hope that the next HOT prune would remove it.  But there is a  
problem: if the tuple is older than freeze horizon, this would leave an  
unfrozen XID behind, and if no HOT prune happens to clean it up before  
the containing pg_clog segment is truncated away, it'd later cause an  
error when the XID is looked up.  
  
Fix the problem by having the tuple freezing routines cope with the  
situation: don't freeze the tuple (and keep it dead).  In the cases that  
the XID is older than the freeze age, set the HEAP_XMAX_COMMITTED flag  
so that there is no need to look up the XID in pg_clog later on.  
  
An isolation test is included, authored by Michael Paquier, loosely  
based on Daniel Wood's original reproducer.  It only tests one  
particular scenario, though, not all the possible ways for this problem  
to surface; it'd be good to have a more reliable way to test this more  
fully, but it'd require more work.  
In message https://postgr.es/m/20170911140103.5akxptyrwgpc25bw@alvherre.pgsql  
I outlined another test case (more closely matching Dan Wood's) that  
exposed a few more ways for the problem to occur.  
  
Backpatch all the way back to 9.3, where this problem was introduced by  
multixact juggling.  In branches 9.3 and 9.4, this includes a backpatch  
of commit e5ff9fefcd50 (of 9.5 era), since the original is not  
correctable without matching the coding pattern in 9.5 up.  
  
Reported-by: Daniel Wood  
Diagnosed-by: Daniel Wood  
Reviewed-by: Yi Wen Wong, Michaël Paquier  
Discussion: https://postgr.es/m/E5711E62-8FDF-4DCA-A888-C200BF6B5742@amazon.com  
  

Fix behavior when converting a float infinity to numeric.

  
commit   : 2e82fba0e6a9f5d3253b89e5f2de2b1bf042d71c    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Wed, 27 Sep 2017 17:05:54 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Wed, 27 Sep 2017 17:05:54 -0400    


  
float8_numeric() and float4_numeric() failed to consider the possibility  
that the input is an IEEE infinity.  The results depended on the  
platform-specific behavior of sprintf(): on most platforms you'd get  
something like  
  
ERROR:  invalid input syntax for type numeric: "inf"  
  
but at least on Windows it's possible for the conversion to succeed and  
deliver a finite value (typically 1), due to a nonstandard output format  
from sprintf and lack of syntax error checking in these functions.  
  
Since our numeric type lacks the concept of infinity, a suitable conversion  
is impossible; the best thing to do is throw an explicit error before  
letting sprintf do its thing.  
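A quick Python illustration of the failure path (Python's %-formatting renders infinity much as C's sprintf does on common platforms): the formatted string carries no digits for a decimal parser to accept.

```python
# Format an IEEE infinity the way the conversion functions did via sprintf.
s = "%f" % float("inf")
print(s)   # 'inf' — no decimal digits, so numeric input parsing must fail
```

The fix raises an explicit "cannot convert infinity to numeric" style error before any string formatting happens, rather than depending on the parser rejecting whatever sprintf produced.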
  
While at it, let's use snprintf not sprintf.  Overrunning the buffer  
should be impossible if sprintf does what it's supposed to, but this  
is cheap insurance against a stack smash if it doesn't.  
  
Problem reported by Taiki Kondo.  Patch by me based on fix suggestion  
from KaiGai Kohei.  Back-patch to all supported branches.  
  
Discussion: https://postgr.es/m/12A9442FBAE80D4E8953883E0B84E088C8C7A2@BPXM01GP.gisp.nec.co.jp  
  

Don't recommend "DROP SCHEMA information_schema CASCADE".

  
commit   : 43661926deb7d412ae3a22a43dadb9def7b7e46c    
  
author   : Noah Misch <noah@leadboat.com>    
date     : Tue, 26 Sep 2017 22:39:44 -0700    
  
committer: Noah Misch <noah@leadboat.com>    
date     : Tue, 26 Sep 2017 22:39:44 -0700    


  
It drops objects outside information_schema that depend on objects  
inside information_schema.  For example, it will drop a user-defined  
view if the view query refers to information_schema.  
  
Discussion: https://postgr.es/m/20170831025345.GE3963697@rfd.leadboat.com  
  

Improve wording of error message added in commit 714805010.

  
commit   : 6c77b47b2e8edbe6446a42adc6099d1543e69214    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 26 Sep 2017 15:25:57 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 26 Sep 2017 15:25:57 -0400    


  
Per suggestions from Peter Eisentraut and David Johnston.  
Back-patch, like the previous commit.  
  
Discussion: https://postgr.es/m/E1dv9jI-0006oT-Fn@gemulon.postgresql.org  
  

Fix saving and restoring umask

  
commit   : e0f5710c5e8b9502ac8bcd821d3418053ed38f7a    
  
author   : Peter Eisentraut <peter_e@gmx.net>    
date     : Fri, 22 Sep 2017 16:50:59 -0400    
  
committer: Peter Eisentraut <peter_e@gmx.net>    
date     : Fri, 22 Sep 2017 16:50:59 -0400    


  
In two cases, we set a different umask for some piece of code and  
restore it afterwards.  But if the contained code errors out, the umask  
is not restored.  So add TRY/CATCH blocks to fix that.  
  

Sync our copy of the timezone library with IANA tzcode master.

  
commit   : 2020f90bf6753dea790caa7dd9983b6edd5b17c5    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 22 Sep 2017 00:04:21 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 22 Sep 2017 00:04:21 -0400    

Click here for diff

  
This patch absorbs a few unreleased fixes in the IANA code.  
It corresponds to commit 2d8b944c1cec0808ac4f7a9ee1a463c28f9cd00a  
in https://github.com/eggert/tz.  Non-cosmetic changes include:  
  
TZDEFRULESTRING is updated to match current US DST practice,  
rather than what it was over ten years ago.  This only matters  
for interpretation of POSIX-style zone names (e.g., "EST5EDT"),  
and only if the timezone database doesn't include either an exact  
match for the zone name or a "posixrules" entry.  The latter  
should not be true in any current Postgres installation, but  
this could possibly matter when using --with-system-tzdata.  
  
Get rid of a nonportable use of "++var" on a bool var.  
This is part of a larger fix that eliminates some vestigial  
support for consecutive leap seconds, and adds checks to  
the "zic" compiler that the data files do not specify that.  
  
Remove a couple of ancient compatibility hacks.  The IANA  
crew think these are obsolete, and I tend to agree.  But  
perhaps our buildfarm will think differently.  
  
Back-patch to all supported branches, in line with our policy  
that all branches should be using current IANA code.  Before v10,  
this includes application of current pgindent rules, to avoid  
whitespace problems in future back-patches.  
  
Discussion: https://postgr.es/m/E1dsWhf-0000pT-F9@gemulon.postgresql.org  
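The "++var on a bool" portability fix is small enough to show in miniature (illustrative sketch only): incrementing a bool relies on semantics that C++17 removed outright and that are murky when `bool` is a typedef, so the portable spelling is plain assignment.

```c
#include <stdbool.h>

/* Illustrative only: set a flag by assignment, not increment.
 * "++seen" compiles under some compilers but is rejected or
 * deprecated by others (C++17 removed ++ on bool entirely). */
static bool
mark_seen(bool seen)
{
    seen = true;                /* instead of the old "++seen" */
    return seen;
}
```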
  

Give a better error for duplicate entries in VACUUM/ANALYZE column list.

  
commit   : a09d8be7ddaf3d5bccbd1cc1138895fde379d15e    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 21 Sep 2017 18:13:11 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 21 Sep 2017 18:13:11 -0400    

Click here for diff

  
Previously, the code didn't think about this case and would just try to  
analyze such a column twice.  That would fail at the point of inserting  
the second version of the pg_statistic row, with obscure error messages  
like "duplicate key value violates unique constraint" or "tuple already  
updated by self", depending on context and PG version.  We could allow  
the case by ignoring duplicate column specifications, but it seems better  
to reject it explicitly.  
  
The bogus error messages seem like arguably a bug, so back-patch to  
all supported versions.  
  
Nathan Bossart, per a report from Michael Paquier, and whacked  
around a bit by me.  
  
Discussion: https://postgr.es/m/E061A8E3-5E3D-494D-94F0-E8A9B312BBFC@amazon.com  
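The shape of the new check is a pairwise scan of the column list, done up front so the user gets a clear error instead of the obscure downstream failures. A hypothetical standalone sketch (not the actual backend code, which reports the duplicate via ereport):

```c
#include <stddef.h>
#include <string.h>

/* Return the first column name that appears twice in the list,
 * or NULL if all names are distinct.  Rejecting duplicates here,
 * before any analysis work starts, yields a clear error message. */
static const char *
find_duplicate(const char *cols[], int ncols)
{
    for (int i = 0; i < ncols; i++)
        for (int j = i + 1; j < ncols; j++)
            if (strcmp(cols[i], cols[j]) == 0)
                return cols[i];
    return NULL;
}
```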
  

Fix ECPG to correctly handle out-of-scope cursor declarations with pointers or array variables.

  
commit   : 149cfdb3a2e9969cbadc1d6b5bfee88f974086f4    
  
author   : Michael Meskes <meskes@postgresql.org>    
date     : Mon, 11 Sep 2017 21:10:36 +0200    
  
committer: Michael Meskes <meskes@postgresql.org>    
date     : Mon, 11 Sep 2017 21:10:36 +0200    

Click here for diff

  
  

Fix possible dangling pointer dereference in trigger.c.

  
commit   : b1be3359368cf22a9566713c4b1cec22f61bb428    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sun, 17 Sep 2017 14:50:01 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sun, 17 Sep 2017 14:50:01 -0400    

Click here for diff

  
AfterTriggerEndQuery correctly notes that the query_stack could get  
repalloc'd during a trigger firing, but it nonetheless passes the address  
of a query_stack entry to afterTriggerInvokeEvents, so that if such a  
repalloc occurs, afterTriggerInvokeEvents is already working with an  
obsolete dangling pointer while it scans the rest of the events.  Oops.  
The only code at risk is its "delete_ok" cleanup code, so we can  
prevent unsafe behavior by passing delete_ok = false instead of true.  
  
However, that could have a significant performance penalty, because the  
point of passing delete_ok = true is to not have to re-scan possibly  
a large number of dead trigger events on the next time through the loop.  
There's more than one way to skin that cat, though.  What we can do is  
delete all the "chunks" in the event list except the last one, since  
we know all events in them must be dead.  Deleting the chunks is work  
we'd have had to do later in AfterTriggerEndQuery anyway, and it ends  
up saving rescanning of just about the same events we'd have gotten  
rid of with delete_ok = true.  
  
In v10 and HEAD, we also have to be careful to mop up any per-table  
after_trig_events pointers that would become dangling.  This is slightly  
annoying, but I don't think that normal use-cases will traverse this code  
path often enough for it to be a performance problem.  
  
It's pretty hard to hit this in practice because of the unlikelihood  
of the query_stack getting resized at just the wrong time.  Nonetheless,  
it's definitely a live bug of ancient standing, so back-patch to all  
supported branches.  
  
Discussion: https://postgr.es/m/2891.1505419542@sss.pgh.pa.us  
  

Fix macro-redefinition warning on MSVC.

  
commit   : a42f8d979299a0ba804a7499e14dcff0e43af163    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sun, 3 Sep 2017 11:01:08 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sun, 3 Sep 2017 11:01:08 -0400    

Click here for diff

  
In commit 9d6b160d7, I tweaked pg_config.h.win32 to use  
"#define HAVE_LONG_LONG_INT_64 1" rather than defining it as empty,  
for consistency with what happens in an autoconf'd build.  
But Solution.pm injects another definition of that macro into  
ecpg_config.h, leading to justifiable (though harmless) compiler whining.  
Make that one consistent too.  Back-patch, like the previous patch.  
  
Discussion: https://postgr.es/m/CAEepm=1dWsXROuSbRg8PbKLh0S=8Ou-V8sr05DxmJOF5chBxqQ@mail.gmail.com  
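The warning class is easy to reproduce: defining one macro twice with *different* token sequences (say, empty in one header and `1` in another) draws MSVC's C4005 "macro redefinition". Making both definitions identical, as the commit does, is benign; an `#ifndef` guard makes the intent explicit. An illustrative fragment:

```c
/* First definition, as pg_config.h.win32 now spells it: */
#define HAVE_LONG_LONG_INT_64 1

/* An identical second definition is benign redefinition under the C
 * standard; a *different* one (e.g. empty) triggers the warning.
 * Guarding with #ifndef sidesteps the question entirely: */
#ifndef HAVE_LONG_LONG_INT_64
#define HAVE_LONG_LONG_INT_64 1
#endif
```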
  

doc: Fix typos and other minor issues

  
commit   : 727add80d6c2c9b5362516718e1b0c86e800beba    
  
author   : Peter Eisentraut <peter_e@gmx.net>    
date     : Fri, 1 Sep 2017 22:59:27 -0400    
  
committer: Peter Eisentraut <peter_e@gmx.net>    
date     : Fri, 1 Sep 2017 22:59:27 -0400    

Click here for diff

  
Author: Alexander Lakhin <exclusion@gmail.com>  
  

Make [U]INT64CONST safe for use in #if conditions.

  
commit   : dd344de6718ba144e6c6def5b095bf4e220733a0    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 1 Sep 2017 15:14:18 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 1 Sep 2017 15:14:18 -0400    

Click here for diff

  
Instead of using a cast to force the constant to be the right width,  
assume we can plaster on an L, UL, LL, or ULL suffix as appropriate.  
The old approach to this is very hoary, dating from before we were  
willing to require compilers to have working int64 types.  
  
This fix makes the PG_INT64_MIN, PG_INT64_MAX, and PG_UINT64_MAX  
constants safe to use in preprocessor conditions, where a cast  
doesn't work.  Other symbolic constants that might be defined using  
[U]INT64CONST are likewise safer than before.  
  
Also fix the SIZE_MAX macro to be similarly safe, if we are forced  
to provide a definition for that.  The test added in commit 2e70d6b5e  
happens to do what we want even with the hack "(size_t) -1" definition,  
but we could easily get burnt on other tests in future.  
  
Back-patch to all supported branches, like the previous commits.  
  
Discussion: https://postgr.es/m/15883.1504278595@sss.pgh.pa.us  
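The difference is visible in a few lines (macro names below are illustrative, not PostgreSQL's): a suffix-built constant survives preprocessor evaluation, whereas a cast-built one like `((long long) 123)` is a syntax error in an `#if` condition, because casts do not exist in the preprocessor.

```c
/* Suffix approach: valid both in normal code and in #if conditions. */
#define MY_INT64CONST(x) (x##LL)
#define MY_INT64_MAX MY_INT64CONST(9223372036854775807)

/* The old cast approach would make the following #if a preprocessor
 * syntax error; with suffixes it evaluates cleanly. */
#if MY_INT64_MAX > 0
#define INT64_IS_POSITIVE 1
#else
#define INT64_IS_POSITIVE 0
#endif
```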
  

Ensure SIZE_MAX can be used throughout our code.

  
commit   : 074985b26a434f0d3b5c4724834f716b3a480e17    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 1 Sep 2017 13:52:54 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 1 Sep 2017 13:52:54 -0400    

Click here for diff

  
Pre-C99 platforms may lack <stdint.h> and thereby SIZE_MAX.  We have  
a couple of places using the hack "(size_t) -1" as a fallback, but  
it wasn't universally available; which means the code added in commit  
2e70d6b5e fails to compile everywhere.  Move that hack to c.h so that  
we can rely on having SIZE_MAX everywhere.  
  
Per discussion, it'd be a good idea to make the macro's value safe  
for use in #if-tests, but that will take a bit more work.  This is  
just a quick expedient to get the buildfarm green again.  
  
Back-patch to all supported branches, like the previous commit.  
  
Discussion: https://postgr.es/m/15883.1504278595@sss.pgh.pa.us  
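The fallback centralized here is essentially the classic idiom (sketched below, guard included): converting -1 to an unsigned type yields that type's maximum value, so `(size_t) -1` equals SIZE_MAX wherever both exist.

```c
#include <stddef.h>
#include <stdint.h>             /* provides SIZE_MAX on C99 platforms */

/* Pre-C99 fallback: (size_t) -1 is the maximum value of size_t by the
 * rules of unsigned conversion, so it matches SIZE_MAX where both are
 * defined.  (As the follow-up commit notes, the cast form still can't
 * be used in #if tests.) */
#ifndef SIZE_MAX
#define SIZE_MAX ((size_t) -1)
#endif
```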
  

Doc: document libpq's restriction to INT_MAX rows in a PGresult.

  
commit   : 1e6c3626063571443a828e5fd2e59d9d8b9f0e91    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 29 Aug 2017 15:38:05 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 29 Aug 2017 15:38:05 -0400    

Click here for diff

  
As long as PQntuples, PQgetvalue, etc, use "int" for row numbers, we're  
pretty much stuck with this limitation.  The documentation formerly stated  
that the result of PQntuples "might overflow on 32-bit operating systems",  
which is just nonsense: that's not where the overflow would happen, and  
if you did reach an overflow it would not be on a 32-bit machine, because  
you'd have OOM'd long since.  
  
Discussion: https://postgr.es/m/CA+FnnTxyLWyjY1goewmJNxC==HQCCF4fKkoCTa9qR36oRAHDPw@mail.gmail.com  
  

Teach libpq to detect integer overflow in the row count of a PGresult.

  
commit   : d391fb6c3802954661f84ac434c2b557e7437670    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 29 Aug 2017 15:18:01 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 29 Aug 2017 15:18:01 -0400    

Click here for diff

  
Adding more than 1 billion rows to a PGresult would overflow its ntups and  
tupArrSize fields, leading to client crashes.  It'd be desirable to use  
wider fields on 64-bit machines, but because all of libpq's external APIs  
use plain "int" for row counters, that's going to be hard to accomplish  
without an ABI break.  Given the lack of complaints so far, and the general  
pain that would be involved in using such huge PGresults, let's settle for  
just preventing the overflow and reporting a useful error message if it  
does happen.  Also, for a couple more lines of code we can increase the  
threshold of trouble from INT_MAX/2 to INT_MAX rows.  
  
To do that, refactor pqAddTuple() to allow returning an error message that  
replaces the default assumption that it failed because of out-of-memory.  
  
Along the way, fix PQsetvalue() so that it reports all failures via  
pqInternalNotice().  It already did so in the case of bad field number,  
but neglected to report anything for other error causes.  
  
Because of the potential for crashes, this seems like a back-patchable  
bug fix, despite the lack of field reports.  
  
Michael Paquier, per a complaint from Igor Korot.  
  
Discussion: https://postgr.es/m/CA+FnnTxyLWyjY1goewmJNxC==HQCCF4fKkoCTa9qR36oRAHDPw@mail.gmail.com  
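A sketch of the refactored allocation guard (function name and message text illustrative, not libpq's exact code): array growth is clamped at INT_MAX entries and the hard limit produces a specific error message, instead of letting the doubling arithmetic wrap negative or falling back to a misleading out-of-memory report.

```c
#include <limits.h>
#include <stddef.h>

/* Grow an int-counted tuple-array size toward INT_MAX without letting
 * the doubling arithmetic overflow.  Returns the new size, or -1 with
 * *errmsg set when the hard row-count limit is reached. */
static int
grow_tuple_slots(int cur_size, const char **errmsg)
{
    if (cur_size >= INT_MAX)
    {
        *errmsg = "row count limit reached (INT_MAX tuples)";
        return -1;
    }
    if (cur_size >= INT_MAX / 2)
        return INT_MAX;         /* clamp instead of wrapping negative */
    return cur_size > 0 ? cur_size * 2 : 128;
}
```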
  

Improve docs about numeric formatting patterns (to_char/to_number).

  
commit   : 669bef911c00808b6a7672044ea6dcf1f124cd13    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 29 Aug 2017 09:34:21 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 29 Aug 2017 09:34:21 -0400    

Click here for diff

  
The explanation about "0" versus "9" format characters was confusing  
and arguably wrong; the discussion of sign handling wasn't very good  
either.  Notably, while it's accurate to say that "FM" strips leading  
zeroes in date/time values, what it really does with numeric values  
is to strip *trailing* zeroes, and then only if you wrote "9" rather  
than "0".  Per gripes from Erwin Brandstetter.  
  
Discussion: https://postgr.es/m/CAGHENJ7jgRbTn6nf48xNZ=FHgL2WQ4X8mYsUAU57f-vq8PubEw@mail.gmail.com  
Discussion: https://postgr.es/m/CAGHENJ45ymd=GOCu1vwV9u7GmCR80_5tW0fP9C_gJKbruGMHvQ@mail.gmail.com