PostgreSQL 9.1.16 commit log

Last-minute updates for release notes.

commit   : 5b461f239eae2ad67f268e31ada7d79331b89652    
  
author   : Tom Lane <[email protected]>    
date     : Tue, 19 May 2015 18:33:58 -0400    
  
committer: Tom Lane <[email protected]>    
date     : Tue, 19 May 2015 18:33:58 -0400    

Revise description of CVE-2015-3166, in line with scaled-back patch.  
Change release date.  
  
Security: CVE-2015-3166  

M doc/src/sgml/release-9.0.sgml
M doc/src/sgml/release-9.1.sgml

Revert error-throwing wrappers for the printf family of functions.

commit   : 0510cff6e8018882a578512b52120989ea164681    
  
author   : Tom Lane <[email protected]>    
date     : Tue, 19 May 2015 18:18:16 -0400    
  
committer: Tom Lane <[email protected]>    
date     : Tue, 19 May 2015 18:18:16 -0400    

This reverts commit 16304a013432931e61e623c8d85e9fe24709d9ba, except  
for its changes in src/port/snprintf.c; as well as commit  
cac18a76bb6b08f1ecc2a85e46c9d2ab82dd9d23 which is no longer needed.  
  
Fujii Masao reported that the previous commit caused failures in psql on  
OS X, since if one exits the pager program early while viewing a query  
result, psql sees an EPIPE error from fprintf --- and the wrapper function  
thought that was reason to panic.  (It's a bit surprising that the same  
does not happen on Linux.)  Further discussion among the security list  
concluded that the risk of other such failures was far too great, and  
that the one-size-fits-all approach to error handling embodied in the  
previous patch is unlikely to be workable.  
  
This leaves us again exposed to the possibility of the type of failure  
envisioned in CVE-2015-3166.  However, that failure mode is strictly  
hypothetical at this point: there is no concrete reason to believe that  
an attacker could trigger information disclosure through the supposed  
mechanism.  In the first place, the attack surface is fairly limited,  
since so much of what the backend does with format strings goes through  
stringinfo.c or psprintf(), and those already had adequate defenses.  
In the second place, even granting that an unprivileged attacker could  
control the occurrence of ENOMEM with some precision, it's a stretch to  
believe that he could induce it just where the target buffer contains some  
valuable information.  So we concluded that the risk of non-hypothetical  
problems induced by the patch greatly outweighs the security risks.  
We will therefore revert, and instead undertake closer analysis to  
identify specific calls that may need hardening, rather than attempt a  
universal solution.  
  
We have kept the portion of the previous patch that improved snprintf.c's  
handling of errors when it calls the platform's sprintf().  That seems to  
be an unalloyed improvement.  
  
Security: CVE-2015-3166  

M src/include/port.h
M src/interfaces/ecpg/compatlib/Makefile
M src/interfaces/ecpg/ecpglib/.gitignore
M src/interfaces/ecpg/ecpglib/Makefile
M src/interfaces/ecpg/pgtypeslib/.gitignore
M src/interfaces/ecpg/pgtypeslib/Makefile
M src/interfaces/libpq/.gitignore
M src/interfaces/libpq/Makefile
M src/interfaces/libpq/bcc32.mak
M src/interfaces/libpq/win32.mak
M src/pl/plperl/plperl.h
M src/port/Makefile
M src/port/snprintf.c
D src/port/syswrap.c
M src/tools/msvc/Mkvcbuild.pm
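
As an aside to the failure mode described above, here is a minimal, hypothetical sketch in plain C (not the reverted wrapper itself) of why a blanket panic on any fprintf() failure is too aggressive: when a pager exits early, the write fails with EPIPE, which the caller normally just wants to ignore.

    #include <errno.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /*
     * Hypothetical output helper.  Assumes SIGPIPE is being ignored (as psql
     * does while a pager is running), so a closed pager pipe surfaces as an
     * EPIPE write error rather than a signal.
     */
    static void
    print_line(FILE *out, const char *line)
    {
        errno = 0;
        if (fprintf(out, "%s\n", line) < 0)
        {
            if (errno == EPIPE)
                return;                 /* pager exited early: not worth dying for */
            fprintf(stderr, "write failed: %s\n", strerror(errno));
            exit(EXIT_FAILURE);         /* other failures may still be fatal */
        }
    }

    int
    main(void)
    {
        print_line(stdout, "hello");
        return 0;
    }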

Fix off-by-one error in Assertion.

commit   : 2c2c7bc4592d561ccde1663dc7d28abc004e47dd    
  
author   : Heikki Linnakangas <[email protected]>    
date     : Tue, 19 May 2015 19:21:46 +0300    
  
committer: Heikki Linnakangas <[email protected]>    
date     : Tue, 19 May 2015 19:21:46 +0300    

The point of the assertion is to ensure that the arrays allocated in stack  
are large enough, but the check was one item short.  
  
This won't matter in practice because MaxIndexTuplesPerPage is an  
overestimate, so you can't have that many items on a page in reality.  
But let's be tidy.  
  
Spotted by Anastasia Lubennikova. Backpatch to all supported versions, like  
the patch that added the assertion.  

M src/backend/storage/page/bufpage.c
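
The shape of the fix, sketched with the standard assert() macro rather than the backend's Assert(); the array name and bound are illustrative stand-ins, not the actual code in bufpage.c. An array of N elements can legitimately receive exactly N items, so the upper bound should be checked with <=, not <.

    #include <assert.h>

    #define MAX_ITEMS 16                /* stand-in for MaxIndexTuplesPerPage */

    static void
    delete_items(const int *targets, int nitems)
    {
        int     itemnos[MAX_ITEMS];

        /* The old limit fell one item short of the array's real capacity;
         * an N-element array can hold exactly N items, hence <= not <. */
        assert(nitems <= MAX_ITEMS);

        for (int i = 0; i < nitems; i++)
            itemnos[i] = targets[i];
        (void) itemnos;
    }

    int
    main(void)
    {
        int     targets[MAX_ITEMS] = {0};

        delete_items(targets, MAX_ITEMS);   /* legal with the corrected bound */
        return 0;
    }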

Don't MultiXactIdIsRunning when in recovery

commit   : 2360eea3be67fb9650067817a4e50fc2f1b8cff7    
  
author   : Alvaro Herrera <[email protected]>    
date     : Mon, 18 May 2015 17:44:21 -0300    
  
committer: Alvaro Herrera <[email protected]>    
date     : Mon, 18 May 2015 17:44:21 -0300    

In 9.1 and earlier, it is possible for index_getnext() to try to examine  
a heap buffer for possible HOT-prune when in recovery; this causes a  
problem when a multixact is found in a tuple's Xmax, because  
GetMultiXactIdMembers refuses to run when in recovery, raising an error:  
	ERROR:  cannot GetMultiXactIdMembers() during recovery  
  
This can be solved easily by having MultiXactIdIsRunning always return  
false when in recovery, which is reasonable because a HOT standby cannot  
acquire further tuple locks nor update/delete tuples.  
  
(Note: it doesn't look like this specific code path has a problem in  
9.2, because instead of doing HeapTupleSatisfiesUpdate directly,  
heap_hot_search_buffer uses HeapTupleIsSurelyDead instead.  Still, there  
may be other paths affected by the same bug, for instance in pgrowlocks,  
and the multixact code hasn't changed; so apply the same fix  
throughout.)  
  
Apply this fix to 9.0 through 9.2.  In 9.3 the multixact code has been  
changed completely and is no longer subject to this problem.  
  
Per report from Marko Tiikkaja,  
https://www.postgresql.org/message-id/[email protected]  
Analysis by Andres Freund  

M src/backend/access/transam/multixact.c
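
A standalone sketch of the guard described above, with the backend calls replaced by toy stand-ins (all names here are illustrative); the real one-line change is in multixact.c.

    #include <stdbool.h>
    #include <stdio.h>

    /* Toy stand-in for backend recovery state. */
    static bool in_recovery = true;

    static bool
    recovery_in_progress(void)
    {
        return in_recovery;
    }

    static int
    get_multixact_members(unsigned int multi)
    {
        /* The real GetMultiXactIdMembers() raises an error during recovery. */
        (void) multi;
        return 0;
    }

    static bool
    multixact_is_running(unsigned int multi)
    {
        /*
         * The fix: a hot standby cannot acquire further tuple locks or
         * update/delete tuples, so no multixact can be running; return false
         * before ever reaching the members lookup.
         */
        if (recovery_in_progress())
            return false;

        return get_multixact_members(multi) > 0;
    }

    int
    main(void)
    {
        printf("running: %d\n", (int) multixact_is_running(42));
        return 0;
    }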

Stamp 9.1.16.

commit   : 46c877ee466ba213ae31ddee0b61b03fc12d90f7    
  
author   : Tom Lane <[email protected]>    
date     : Mon, 18 May 2015 14:36:42 -0400    
  
committer: Tom Lane <[email protected]>    
date     : Mon, 18 May 2015 14:36:42 -0400    

M configure
M configure.in
M doc/bug.template
M src/include/pg_config.h.win32
M src/interfaces/libpq/libpq.rc.in
M src/port/win32ver.rc

Fix error message in pre_sync_fname.

commit   : b965c808ab8189be5716f9572c6d00124767ed80    
  
author   : Robert Haas <[email protected]>    
date     : Mon, 18 May 2015 12:53:09 -0400    
  
committer: Robert Haas <[email protected]>    
date     : Mon, 18 May 2015 12:53:09 -0400    

The old one didn't include %m anywhere, and required extra  
translation.  
  
Report by Peter Eisentraut. Fix by me. Review by Tom Lane.  

M src/backend/storage/file/fd.c
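
A plain-C sketch of the reporting style the new message follows: include the file name and the system error text, which is what the backend's %m placeholder expands to inside ereport(). The path below is just an example.

    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    static void
    report_open_failure(const char *fname)
    {
        /* The backend's message uses ereport() with %m; strerror(errno)
         * carries the same information in plain C. */
        fprintf(stderr, "could not open file \"%s\": %s\n",
                fname, strerror(errno));
    }

    int
    main(void)
    {
        const char *fname = "/nonexistent/example";     /* illustrative path */
        int         fd = open(fname, O_RDONLY);

        if (fd < 0)
            report_open_failure(fname);
        else
            close(fd);
        return 0;
    }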

Last-minute updates for release notes.

commit   : a3ddf2f292060e88edeb3f3fdfa7b81c2541aa8f    
  
author   : Tom Lane <[email protected]>    
date     : Mon, 18 May 2015 12:09:03 -0400    
  
committer: Tom Lane <[email protected]>    
date     : Mon, 18 May 2015 12:09:03 -0400    

Add entries for security issues.  
  
Security: CVE-2015-3165 through CVE-2015-3167  

M doc/src/sgml/release-9.0.sgml
M doc/src/sgml/release-9.1.sgml

pgcrypto: Report errant decryption as "Wrong key or corrupt data".

commit   : e5981aebde61521d8dcace6f45b52d1dc9035a90    
  
author   : Noah Misch <[email protected]>    
date     : Mon, 18 May 2015 10:02:31 -0400    
  
committer: Noah Misch <[email protected]>    
date     : Mon, 18 May 2015 10:02:31 -0400    

This has been the predominant outcome.  When the output of decrypting  
with a wrong key coincidentally resembled an OpenPGP packet header,  
pgcrypto could instead report "Corrupt data", "Not text data" or  
"Unsupported compression algorithm".  The distinct "Corrupt data"  
message added no value.  The latter two error messages misled when the  
decrypted payload also exhibited fundamental integrity problems.  Worse,  
error message variance in other systems has enabled cryptologic attacks;  
see RFC 4880 section "14. Security Considerations".  Whether these  
pgcrypto behaviors are likewise exploitable is unknown.  
  
In passing, document that pgcrypto does not resist side-channel attacks.  
Back-patch to 9.0 (all supported versions).  
  
Security: CVE-2015-3167  

M contrib/pgcrypto/expected/pgp-decrypt.out
M contrib/pgcrypto/expected/pgp-pubkey-decrypt.out
M contrib/pgcrypto/mbuf.c
M contrib/pgcrypto/pgp-decrypt.c
M contrib/pgcrypto/pgp.h
M contrib/pgcrypto/px.c
M contrib/pgcrypto/px.h
M contrib/pgcrypto/sql/pgp-decrypt.sql
M doc/src/sgml/pgcrypto.sgml
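
A toy illustration of the principle, not pgcrypto's code: distinct low-level decryption failures collapse into one user-facing message, so the error text reveals nothing about how the decryption went wrong.

    #include <stdio.h>

    /* Hypothetical low-level result codes from a decryption routine. */
    enum decrypt_result
    {
        DECRYPT_OK,
        DECRYPT_BAD_PACKET_HEADER,
        DECRYPT_NOT_TEXT,
        DECRYPT_BAD_COMPRESSION
    };

    /* Every failure maps to the same message, whatever actually happened. */
    static const char *
    decrypt_error_message(enum decrypt_result rc)
    {
        if (rc == DECRYPT_OK)
            return NULL;
        return "Wrong key or corrupt data";
    }

    int
    main(void)
    {
        printf("%s\n", decrypt_error_message(DECRYPT_NOT_TEXT));
        return 0;
    }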

Check return values of sensitive system library calls.

commit   : 2cb9f2cabe3071f1cbd25ab8fe3cbc0b8d1a83d3    
  
author   : Noah Misch <[email protected]>    
date     : Mon, 18 May 2015 10:02:31 -0400    
  
committer: Noah Misch <[email protected]>    
date     : Mon, 18 May 2015 10:02:31 -0400    

PostgreSQL already checked the vast majority of these, missing this  
handful that nearly cannot fail.  If putenv() failed with ENOMEM in  
pg_GSS_recvauth(), authentication would proceed with the wrong keytab  
file.  If strftime() returned zero in cache_locale_time(), using the  
unspecified buffer contents could lead to information exposure or a  
crash.  Back-patch to 9.0 (all supported versions).  
  
Other unchecked calls to these functions, especially those in frontend  
code, pose negligible security concern.  This patch does not address  
them.  Nonetheless, it is always better to check return values whose  
specification provides for indicating an error.  
  
In passing, fix an off-by-one error in strftime_win32()'s invocation of  
WideCharToMultiByte().  Upon retrieving a value of exactly MAX_L10N_DATA  
bytes, strftime_win32() would overrun the caller's buffer by one byte.  
MAX_L10N_DATA is chosen to exceed the length of every possible value, so  
the vulnerable scenario probably does not arise.  
  
Security: CVE-2015-3166  

M src/backend/libpq/auth.c
M src/backend/utils/adt/pg_locale.c
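
A standalone sketch of the two checks being added, with illustrative values: putenv() signals failure with a nonzero result, and strftime() signals failure by returning zero and leaving the buffer unspecified.

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    int
    main(void)
    {
        /* putenv() keeps a pointer to its argument, so use static storage;
         * an unchecked failure (e.g. ENOMEM) would silently leave the old
         * value in place, analogous to proceeding with the wrong keytab. */
        static char envline[] = "KRB5_KTNAME=/example/keytab";
        char        buf[128];
        time_t      now = time(NULL);
        struct tm  *tm = localtime(&now);

        if (putenv(envline) != 0)
        {
            fprintf(stderr, "putenv failed\n");
            return 1;
        }

        /* A zero return means buf holds unspecified bytes; using it anyway
         * is the information-exposure/crash hazard described above. */
        if (tm == NULL || strftime(buf, sizeof(buf), "%A, %d %B %Y", tm) == 0)
        {
            fprintf(stderr, "strftime failed\n");
            return 1;
        }
        printf("%s\n", buf);
        return 0;
    }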

Add error-throwing wrappers for the printf family of functions.

commit   : e58f042d9a2cfcf47e2b3734eb9fd0e6d9a6bfb0    
  
author   : Noah Misch <[email protected]>    
date     : Mon, 18 May 2015 10:02:31 -0400    
  
committer: Noah Misch <[email protected]>    
date     : Mon, 18 May 2015 10:02:31 -0400    

All known standard library implementations of these functions can fail  
with ENOMEM.  A caller neglecting to check for failure would experience  
missing output, information exposure, or a crash.  Check return values  
within wrappers and code, currently just snprintf.c, that bypasses the  
wrappers.  The wrappers do not return after an error, so their callers  
need not check.  Back-patch to 9.0 (all supported versions).  
  
Popular free software standard library implementations do take pains to  
bypass malloc() in simple cases, but they risk ENOMEM for floating point  
numbers, positional arguments, large field widths, and large precisions.  
No specification demands such caution, so this commit regards every call  
to a printf family function as a potential threat.  
  
Injecting the wrappers implicitly is a compromise between patch scope  
and design goals.  I would prefer to edit each call site to name a  
wrapper explicitly.  libpq and the ECPG libraries would, ideally, convey  
errors to the caller rather than abort().  All that would be painfully  
invasive for a back-patched security fix, hence this compromise.  
  
Security: CVE-2015-3166  

M src/include/port.h
M src/interfaces/ecpg/compatlib/Makefile
M src/interfaces/ecpg/ecpglib/.gitignore
M src/interfaces/ecpg/ecpglib/Makefile
M src/interfaces/ecpg/pgtypeslib/.gitignore
M src/interfaces/ecpg/pgtypeslib/Makefile
M src/interfaces/libpq/.gitignore
M src/interfaces/libpq/Makefile
M src/interfaces/libpq/bcc32.mak
M src/interfaces/libpq/win32.mak
M src/pl/plperl/plperl.h
M src/port/Makefile
M src/port/snprintf.c
A src/port/syswrap.c
M src/tools/msvc/Mkvcbuild.pm
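
A standalone sketch of the wrapper idea (check inside the wrapper and abort on failure, so callers need not check); this is only an illustration of the approach, not the syswrap.c code.

    #include <stdarg.h>
    #include <stdio.h>
    #include <stdlib.h>

    /*
     * Hypothetical wrapper: like snprintf(), but treats any library failure
     * (which may include ENOMEM in some implementations) as fatal, so that
     * callers never have to check the return value themselves.
     */
    static int
    snprintf_or_die(char *buf, size_t size, const char *fmt, ...)
    {
        va_list ap;
        int     rc;

        va_start(ap, fmt);
        rc = vsnprintf(buf, size, fmt, ap);
        va_end(ap);

        if (rc < 0)
        {
            fprintf(stderr, "vsnprintf failed\n");
            abort();
        }
        return rc;
    }

    int
    main(void)
    {
        char    buf[64];

        snprintf_or_die(buf, sizeof(buf), "pi is about %.5f", 3.14159);
        puts(buf);
        return 0;
    }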

Permit use of vsprintf() in PostgreSQL code.

commit   : b544dcdad219cbd14837b149f63e3703952c992f    
  
author   : Noah Misch <[email protected]>    
date     : Mon, 18 May 2015 10:02:31 -0400    
  
committer: Noah Misch <[email protected]>    
date     : Mon, 18 May 2015 10:02:31 -0400    

The next commit needs it.  Back-patch to 9.0 (all supported versions).  

M src/include/port.h
M src/port/snprintf.c

Prevent a double free by not reentering be_tls_close().

commit   : 6675ab595ade396c43ff6c0ee7c99ccb5f0bc6f4    
  
author   : Noah Misch <[email protected]>    
date     : Mon, 18 May 2015 10:02:31 -0400    
  
committer: Noah Misch <[email protected]>    
date     : Mon, 18 May 2015 10:02:31 -0400    

Reentering this function with the right timing caused a double free,  
typically crashing the backend.  By synchronizing a disconnection with  
the authentication timeout, an unauthenticated attacker could achieve  
this somewhat consistently.  Call be_tls_close() solely from within  
proc_exit_prepare().  Back-patch to 9.0 (all supported versions).  
  
Benkocs Norbert Attila  
  
Security: CVE-2015-3165  

M src/backend/libpq/be-secure.c
M src/backend/libpq/pqcomm.c
M src/backend/postmaster/postmaster.c
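
A generic, standalone illustration of the double-free hazard and a common defence (a re-entry guard plus clearing the pointer after free); the actual fix takes the simpler route of calling be_tls_close() from proc_exit_prepare() only.

    #include <stdbool.h>
    #include <stdlib.h>

    static char *tls_state = NULL;
    static bool  closing = false;

    static void
    close_connection_once(void)
    {
        /* Re-entering (say, from a second exit callback) must be a no-op;
         * otherwise tls_state would be freed twice. */
        if (closing)
            return;
        closing = true;

        free(tls_state);
        tls_state = NULL;
    }

    int
    main(void)
    {
        tls_state = malloc(16);
        close_connection_once();
        close_connection_once();        /* second call is now harmless */
        return 0;
    }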

Translation updates

commit   : b584e45c9d9b70fba06ade7279763acf49e8af14    
  
author   : Peter Eisentraut <[email protected]>    
date     : Mon, 18 May 2015 08:51:06 -0400    
  
committer: Peter Eisentraut <[email protected]>    
date     : Mon, 18 May 2015 08:51:06 -0400    

Source-Git-URL: git://git.postgresql.org/git/pgtranslation/messages.git  
Source-Git-Hash: 3fd92c72461f8fa03989609f4f2513fe1d865582  

M src/backend/po/de.po
M src/backend/po/fr.po
M src/backend/po/pt_BR.po
M src/backend/po/ru.po
M src/bin/initdb/po/fr.po
M src/bin/pg_config/po/fr.po
M src/bin/pg_controldata/po/fr.po
M src/bin/pg_ctl/po/fr.po
M src/bin/pg_dump/po/fr.po
M src/bin/pg_resetxlog/po/de.po
M src/bin/pg_resetxlog/po/fr.po
M src/bin/pg_resetxlog/po/pt_BR.po
M src/bin/pg_resetxlog/po/ru.po
M src/bin/psql/po/de.po
M src/bin/psql/po/fr.po
M src/bin/psql/po/pt_BR.po
M src/bin/psql/po/ru.po
M src/bin/scripts/po/fr.po
M src/interfaces/ecpg/ecpglib/po/fr.po
M src/interfaces/ecpg/preproc/po/fr.po
M src/interfaces/libpq/po/fr.po
M src/pl/plperl/po/fr.po
M src/pl/plpgsql/src/po/fr.po
M src/pl/plpython/po/fr.po
M src/pl/plpython/po/pt_BR.po
M src/pl/tcl/po/fr.po

Fix typos

commit   : c410881f84a7224ebd2a70af4e104db08d1fdbce    
  
author   : Peter Eisentraut <[email protected]>    
date     : Sun, 17 May 2015 22:21:36 -0400    
  
committer: Peter Eisentraut <[email protected]>    
date     : Sun, 17 May 2015 22:21:36 -0400    

M contrib/pg_upgrade/pg_upgrade.c
M src/bin/pg_resetxlog/pg_resetxlog.c

Release notes for 9.4.2, 9.3.7, 9.2.11, 9.1.16, 9.0.20.

commit   : b4348017e6625a02a95fcb4aa1a243ab7cac54d3    
  
author   : Tom Lane <[email protected]>    
date     : Sun, 17 May 2015 15:54:20 -0400    
  
committer: Tom Lane <[email protected]>    
date     : Sun, 17 May 2015 15:54:20 -0400    

M doc/src/sgml/release-9.0.sgml
M doc/src/sgml/release-9.1.sgml

Fix docs typo

commit   : 67e7a497d6492852205622a030e8e182c20c56e1    
  
author   : Tom Lane <[email protected]>    
date     : Sat, 16 May 2015 13:28:27 -0400    
  
committer: Tom Lane <[email protected]>    
date     : Sat, 16 May 2015 13:28:27 -0400    

I don't think "respectfully" is what was meant here ...  

M doc/src/sgml/client-auth.sgml

pg_upgrade: force timeline 1 in the new cluster

commit   : acd75b2643579a26adfbfaa4918e60121ae2c26f    
  
author   : Bruce Momjian <[email protected]>    
date     : Sat, 16 May 2015 00:40:18 -0400    
  
committer: Bruce Momjian <[email protected]>    
date     : Sat, 16 May 2015 00:40:18 -0400    

Previously, this prevented promoted standby servers from being upgraded  
because of a missing WAL history file.  (Timeline 1 doesn't need a  
history file, and we don't copy WAL files anyway.)  
  
Report by Christian Echerer(?), Alexey Klyukin  
  
Backpatch through 9.0  

M contrib/pg_upgrade/pg_upgrade.c

pg_upgrade: only allow template0 to be non-connectable

commit   : 321db71239cb45ed2f2d3113ff5745757a64581a    
  
author   : Bruce Momjian <[email protected]>    
date     : Sat, 16 May 2015 00:10:03 -0400    
  
committer: Bruce Momjian <[email protected]>    
date     : Sat, 16 May 2015 00:10:03 -0400    

This patch causes pg_upgrade to error out during its check phase if:  
  
(1) template0 is marked connectable  
or  
(2) any other database is marked non-connectable  
  
This is done because, in the first case, pg_upgrade would fail because  
the pg_dumpall --globals restore would fail, and in the second case, the  
database would not be restored, leading to data loss.  
  
Report by Matt Landry (1), Stephen Frost (2)  
  
Backpatch through 9.0  

M contrib/pg_upgrade/check.c

Update time zone data files to tzdata release 2015d.

commit   : 436f3560925620b623d4ea6cdd8f7b38a117b643    
  
author   : Tom Lane <[email protected]>    
date     : Fri, 15 May 2015 19:35:29 -0400    
  
committer: Tom Lane <[email protected]>    
date     : Fri, 15 May 2015 19:35:29 -0400    

DST law changes in Egypt, Mongolia, Palestine.  
Historical corrections for Canada and Chile.  
Revised zone abbreviation for America/Adak (HST/HDT not HAST/HADT).  

M src/timezone/data/africa
M src/timezone/data/antarctica
M src/timezone/data/asia
M src/timezone/data/australasia
M src/timezone/data/backward
M src/timezone/data/backzone
M src/timezone/data/europe
M src/timezone/data/northamerica
M src/timezone/data/southamerica
M src/timezone/known_abbrevs.txt
M src/timezone/tznames/America.txt
M src/timezone/tznames/Asia.txt
M src/timezone/tznames/Default
M src/timezone/tznames/Pacific.txt

Docs: fix erroneous claim about max byte length of GB18030.

commit   : 66184871735051450a0b551e7af505f276736bff    
  
author   : Tom Lane <[email protected]>    
date     : Thu, 14 May 2015 14:59:00 -0400    
  
committer: Tom Lane <[email protected]>    
date     : Thu, 14 May 2015 14:59:00 -0400    

This encoding has characters up to 4 bytes long, not 2.  

M doc/src/sgml/charset.sgml

Fix RBM_ZERO_AND_LOCK mode to not acquire lock on local buffers.

commit   : f6c4a8690c9374be828c7fca4c2b6199f39b27fa    
  
author   : Heikki Linnakangas <[email protected]>    
date     : Wed, 13 May 2015 09:44:43 +0300    
  
committer: Heikki Linnakangas <[email protected]>    
date     : Wed, 13 May 2015 09:44:43 +0300    

Commit 81c45081 introduced a new RBM_ZERO_AND_LOCK mode to ReadBuffer, which  
takes a lock on the buffer before zeroing it. However, you cannot take a  
lock on a local buffer, and you got a segfault instead. The version of that  
patch committed to master included a check for !isLocalBuf, and therefore  
didn't crash, but oddly I missed that in the back-patched versions. This  
patch adds that check to the back-branches too.  
  
RBM_ZERO_AND_LOCK mode is only used during WAL replay, and in hash indexes.  
WAL replay only deals with shared buffers, so the only way to trigger the  
bug is with a temporary hash index.  
  
Reported by Artem Ignatyev, analysis by Tom Lane.  

M src/backend/storage/buffer/bufmgr.c

Fix incorrect checking of deferred exclusion constraint after a HOT update.

commit   : dd75518d523a1e3650b5d5ad20c755a000425739    
  
author   : Tom Lane <[email protected]>    
date     : Mon, 11 May 2015 12:25:28 -0400    
  
committer: Tom Lane <[email protected]>    
date     : Mon, 11 May 2015 12:25:28 -0400    

If a row that potentially violates a deferred exclusion constraint is  
HOT-updated later in the same transaction, the exclusion constraint would  
be reported as violated when the check finally occurs, even if the row(s)  
the new row originally conflicted with have since been removed.  This  
happened because the wrong TID was passed to check_exclusion_constraint(),  
causing the live HOT-updated row to be seen as a conflicting row rather  
than recognized as the row-under-test.  
  
Per bug #13148 from Evan Martin.  It's been broken since exclusion  
constraints were invented, so back-patch to all supported branches.  

M src/backend/commands/constraint.c
M src/test/regress/input/constraints.source
M src/test/regress/output/constraints.source

Recommend include_realm=1 in docs

commit   : edfef090a555b3cc820821437389d05c041279e0    
  
author   : Stephen Frost <[email protected]>    
date     : Fri, 8 May 2015 19:39:52 -0400    
  
committer: Stephen Frost <[email protected]>    
date     : Fri, 8 May 2015 19:39:52 -0400    

As discussed, the default setting of include_realm=0 can be dangerous in  
multi-realm environments because it is then impossible to differentiate  
users with the same username but who are from two different realms.  
  
Recommend include_realm=1 and note that the default setting may change  
in a future version of PostgreSQL and therefore users may wish to  
explicitly set include_realm to avoid issues while upgrading.  

M doc/src/sgml/client-auth.sgml

Properly send SCM status updates when shutting down service on Windows

commit   : b9ded152904bba708a4332cf535098be46bb20b2    
  
author   : Magnus Hagander <[email protected]>    
date     : Thu, 7 May 2015 15:04:13 +0200    
  
committer: Magnus Hagander <[email protected]>    
date     : Thu, 7 May 2015 15:04:13 +0200    

The Service Control Manager should be notified regularly during a shutdown  
that takes a long time. Previously we would increase the counter, but forgot
to actually send the notification to the system. The loop counter was also
incorrectly initialized in the event that the startup of the system took long
enough for it to increase, which could cause the shutdown process not to wait  
as long as expected.  
  
Krystian Bigaj, reviewed by Michael Paquier  

M src/bin/pg_ctl/pg_ctl.c
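
A Windows-only sketch of the pattern the fix restores: while a slow shutdown is in progress, report SERVICE_STOP_PENDING with an incrementing checkpoint on each iteration rather than only bumping a local counter. The status handle is assumed to come from RegisterServiceCtrlHandler() elsewhere, and the field values are illustrative.

    #include <windows.h>

    /* Assumed to have been obtained from RegisterServiceCtrlHandler(). */
    extern SERVICE_STATUS_HANDLE hStatus;

    void
    report_stop_pending(DWORD checkpoint)
    {
        SERVICE_STATUS st;

        ZeroMemory(&st, sizeof(st));
        st.dwServiceType = SERVICE_WIN32_OWN_PROCESS;
        st.dwCurrentState = SERVICE_STOP_PENDING;
        st.dwCheckPoint = checkpoint;   /* must keep increasing between calls */
        st.dwWaitHint = 10000;          /* ms until the next expected update */

        /* The bug was incrementing the counter without making this call. */
        SetServiceStatus(hStatus, &st);
    }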

citext's regexp_matches() functions weren't documented, either.

commit   : 272f99f8ad88c0253926113e3e19ee8da1513ef1    
  
author   : Tom Lane <[email protected]>    
date     : Tue, 5 May 2015 16:11:01 -0400    
  
committer: Tom Lane <[email protected]>    
date     : Tue, 5 May 2015 16:11:01 -0400    

M doc/src/sgml/citext.sgml

Fix incorrect declaration of citext's regexp_matches() functions.

commit   : 801e250a8a0a73aef5afdbaac7a12a9af91e589b    
  
author   : Tom Lane <[email protected]>    
date     : Tue, 5 May 2015 15:50:53 -0400    
  
committer: Tom Lane <[email protected]>    
date     : Tue, 5 May 2015 15:50:53 -0400    

These functions should return SETOF TEXT[], like the core functions they  
are wrappers for; but they were incorrectly declared as returning just  
TEXT[].  This mistake had two results: first, if there was no match you got  
a scalar null result, whereas what you should get is an empty set (zero  
rows).  Second, the 'g' flag was effectively ignored, since you would get  
only one result array even if there were multiple matches, as reported by  
Jeff Certain.  
  
While ignoring 'g' is a clear bug, the behavior for no matches might well  
have been thought to be the intended behavior by people who hadn't compared  
it carefully to the core regexp_matches() functions.  So we should tread  
carefully about introducing this change in the back branches.  Still, it  
clearly is a bug and so providing some fix is desirable.  
  
After discussion, the conclusion was to introduce the change in a 1.1  
version of the citext extension (as we would need to do anyway); 1.0 still  
contains the incorrect behavior.  1.1 is the default and only available  
version in HEAD, but it is optional in the back branches, where 1.0 remains  
the default version.  People wishing to adopt the fix in back branches will  
need to explicitly do ALTER EXTENSION citext UPDATE TO '1.1'.  (I also  
provided a downgrade script in the back branches, so people could go back  
to 1.0 if necessary.)  
  
This should be called out as an incompatible change in the 9.5 release  
notes, although we'll also document it in the next set of back-branch  
release notes.  The notes should mention that any views or rules that use  
citext's regexp_matches() functions will need to be dropped before  
upgrading to 1.1, and then recreated again afterwards.  
  
Back-patch to 9.1.  The bug goes all the way back to citext's introduction  
in 8.4, but pre-9.1 there is no extension mechanism with which to manage  
the change.  Given the lack of previous complaints it seems unnecessary to  
change this behavior in 9.0, anyway.  

M contrib/citext/Makefile
A contrib/citext/citext--1.0--1.1.sql
A contrib/citext/citext--1.1--1.0.sql
A contrib/citext/citext--1.1.sql

Fix some problems with patch to fsync the data directory.

commit   : 6ee1a7738ad04a1e6e481c81149770b2e565c0c1    
  
author   : Robert Haas <[email protected]>    
date     : Tue, 5 May 2015 08:30:28 -0400    
  
committer: Robert Haas <[email protected]>    
date     : Tue, 5 May 2015 08:30:28 -0400    

pg_win32_is_junction() was a typo for pgwin32_is_junction().  open()  
was used not only in a two-argument form, which breaks on Windows,  
but also where BasicOpenFile() should have been used.  
  
Per reports from Andrew Dunstan and David Rowley.  

M src/backend/storage/file/fd.c

Recursively fsync() the data directory after a crash.

commit   : 4b71d28d586fa9af55713a0652614e64027789a7    
  
author   : Robert Haas <[email protected]>    
date     : Mon, 4 May 2015 12:06:53 -0400    
  
committer: Robert Haas <[email protected]>    
date     : Mon, 4 May 2015 12:06:53 -0400    

Otherwise, if there's another crash, some writes from after the first  
crash might make it to disk while writes from before the crash fail  
to make it to disk.  This could lead to data corruption.  
  
Back-patch to all supported versions.  
  
Abhijit Menon-Sen, reviewed by Andres Freund and slightly revised  
by me.  

M src/backend/access/transam/xlog.c
M src/backend/storage/file/fd.c
M src/include/storage/fd.h
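
A much-simplified, hedged sketch of the approach (recursively open and fsync every regular file and directory under a path); the backend's version in fd.c additionally handles symlinks, tablespace links, error levels, and platforms where directories cannot be opened and fsync'd this way.

    #include <dirent.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/stat.h>
    #include <unistd.h>

    /* Recursively fsync path; failures are only warned about. */
    static void
    fsync_recursively(const char *path)
    {
        struct stat st;
        int         fd;

        if (lstat(path, &st) < 0)
            return;

        if (S_ISDIR(st.st_mode))
        {
            DIR            *dir = opendir(path);
            struct dirent  *de;
            char            child[4096];

            if (dir != NULL)
            {
                while ((de = readdir(dir)) != NULL)
                {
                    if (strcmp(de->d_name, ".") == 0 ||
                        strcmp(de->d_name, "..") == 0)
                        continue;
                    snprintf(child, sizeof(child), "%s/%s", path, de->d_name);
                    fsync_recursively(child);   /* children first, then the dir */
                }
                closedir(dir);
            }
        }
        else if (!S_ISREG(st.st_mode))
            return;                             /* skip symlinks, sockets, ... */

        fd = open(path, O_RDONLY);
        if (fd >= 0)
        {
            if (fsync(fd) < 0)
                fprintf(stderr, "could not fsync \"%s\"\n", path);
            close(fd);
        }
    }

    int
    main(int argc, char **argv)
    {
        fsync_recursively(argc > 1 ? argv[1] : ".");
        return 0;
    }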

Build libecpg with -DFRONTEND in all supported versions.

commit   : 1dadd36dbb54fda1c43f1163ca39bbe35741952b    
  
author   : Noah Misch <[email protected]>    
date     : Sun, 26 Apr 2015 17:20:10 -0400    
  
committer: Noah Misch <[email protected]>    
date     : Sun, 26 Apr 2015 17:20:10 -0400    

Fix an oversight in commit 151e74719b0cc5c040bd3191b51b95f925773dd1 by  
back-patching commit 44c5d387eafb4ba1a032f8d7b13d85c553d69181 to 9.0.  

M src/interfaces/ecpg/ecpglib/Makefile

Prevent improper reordering of antijoins vs. outer joins.

commit   : 2e38198f605f39e96544c81a43f1984147e0890c    
  
author   : Tom Lane <[email protected]>    
date     : Sat, 25 Apr 2015 16:44:27 -0400    
  
committer: Tom Lane <[email protected]>    
date     : Sat, 25 Apr 2015 16:44:27 -0400    

An outer join appearing within the RHS of an antijoin can't commute with  
the antijoin, but somehow I missed teaching make_outerjoininfo() about  
that.  In Teodor Sigaev's recent trouble report, this manifests as a  
"could not find RelOptInfo for given relids" error within eqjoinsel();  
but I think silently wrong query results are possible too, if the planner  
misorders the joins and doesn't happen to trigger any internal consistency  
checks.  It's broken as far back as we had antijoins, so back-patch to all  
supported branches.  

M src/backend/optimizer/plan/initsplan.c
M src/test/regress/expected/join.out
M src/test/regress/sql/join.sql

Build every ECPG library with -DFRONTEND.

commit   : f221c44cda52c324ced27e3b1fbec3458a2683d4    
  
author   : Noah Misch <[email protected]>    
date     : Fri, 24 Apr 2015 19:29:02 -0400    
  
committer: Noah Misch <[email protected]>    
date     : Fri, 24 Apr 2015 19:29:02 -0400    

Each of the libraries incorporates src/port files, which often check  
FRONTEND.  Build systems disagreed on whether to build libpgtypes this  
way.  Only libecpg incorporates files that rely on it today.  Back-patch  
to 9.0 (all supported versions) to forestall surprises.  

M src/interfaces/ecpg/compatlib/Makefile
M src/interfaces/ecpg/pgtypeslib/Makefile
M src/tools/msvc/Mkvcbuild.pm

Fix deadlock at startup, if max_prepared_transactions is too small.

commit   : e8528a8f5d411f0d9a1a9f927eae332992949aa1    
  
author   : Heikki Linnakangas <[email protected]>    
date     : Thu, 23 Apr 2015 21:25:44 +0300    
  
committer: Heikki Linnakangas <[email protected]>    
date     : Thu, 23 Apr 2015 21:25:44 +0300    

When the startup process recovers transactions by scanning pg_twophase  
directory, it should clear MyLockedGxact after it's done processing each  
transaction, like we do during normal operation at PREPARE TRANSACTION.
Otherwise, if the startup process exits due to an error, it will try to  
clear the locking_backend field of the last recovered transaction. That's  
usually harmless, but if the error happens in MarkAsPreparing, while  
holding TwoPhaseStateLock, the shmem-exit hook will try to acquire  
TwoPhaseStateLock again, and deadlock with itself.  
  
This fixes bug #13128 reported by Grant McAlister. The bug was introduced  
by commit bb38fb0d, so backpatch to all supported versions like that  
commit.  

M src/backend/access/transam/twophase.c

Fix typo in comment

commit   : 42f522714e5eea90feb1c2d5d2fe2d6018b1e628    
  
author   : Alvaro Herrera <[email protected]>    
date     : Tue, 14 Apr 2015 12:12:18 -0300    
  
committer: Alvaro Herrera <[email protected]>    
date     : Tue, 14 Apr 2015 12:12:18 -0300    

SLRU_SEGMENTS_PER_PAGE -> SLRU_PAGES_PER_SEGMENT  
  
I introduced this ancient typo in subtrans.c and later propagated it to  
multixact.c.  I fixed the latter in f741300c, but only back to 9.3;  
backpatch to all supported branches for consistency.  

M src/backend/access/transam/multixact.c
M src/backend/access/transam/subtrans.c

Don't archive bogus recycled or preallocated files after timeline switch.

commit   : ad2925e2032336a7a78cafe3839efa96cec86dcd    
  
author   : Heikki Linnakangas <[email protected]>    
date     : Mon, 13 Apr 2015 16:53:49 +0300    
  
committer: Heikki Linnakangas <[email protected]>    
date     : Mon, 13 Apr 2015 16:53:49 +0300    

After a timeline switch, we would leave behind recycled WAL segments that  
are in the future, but on the old timeline. After promotion, and after they  
become old enough to be recycled again, we would notice that they don't have  
a .ready or .done file, create a .ready file for them, and archive them.  
That's bogus, because the files contain garbage, recycled from an older  
timeline (or prealloced as zeros). We shouldn't archive such files.  
  
This could happen when we're following a timeline switch during replay, or  
when we switch to new timeline at end-of-recovery.  
  
To fix, whenever we switch to a new timeline, scan the data directory for  
WAL segments on the old timeline, but with a higher segment number, and  
remove them. Those don't belong to our timeline history, and are most  
likely bogus recycled or preallocated files. They could also be valid files  
that we streamed from the primary ahead of time, but in any case, they're  
not needed to recover to the new timeline.  

M src/backend/access/transam/xlog.c

Fix incorrect punctuation

commit   : d75e0949dfd4f6983c0170149d1b85c1fb1efd49    
  
author   : Magnus Hagander <[email protected]>    
date     : Thu, 9 Apr 2015 13:35:30 +0200    
  
committer: Magnus Hagander <[email protected]>    
date     : Thu, 9 Apr 2015 13:35:30 +0200    

Amit Langote  

M doc/src/sgml/mvcc.sgml

Fix autovacuum launcher shutdown sequence

commit   : cf5d3f27484177cc4a44ec2bcdd786c9b8e59746    
  
author   : Alvaro Herrera <[email protected]>    
date     : Wed, 8 Apr 2015 13:19:49 -0300    
  
committer: Alvaro Herrera <[email protected]>    
date     : Wed, 8 Apr 2015 13:19:49 -0300    

It was previously possible to have the launcher re-execute its main loop  
before shutting down if some other signal was received or an error  
occurred after getting SIGTERM, as reported by Qingqing Zhou.  
  
While investigating, Tom Lane further noticed that if autovacuum had  
been disabled in the config file, it would misbehave by trying to start  
a new worker instead of bailing out immediately -- it would consider  
itself as invoked in emergency mode.  
  
Fix both problems by checking the shutdown flag in a few more places.  
These problems have existed since autovacuum was introduced, so  
backpatch all the way back.  

M src/backend/postmaster/autovacuum.c

Fix assorted inconsistent function declarations.

commit   : c68b06356dd75230df18a763b87736f717e13b5c    
  
author   : Tom Lane <[email protected]>    
date     : Tue, 7 Apr 2015 16:56:21 -0400    
  
committer: Tom Lane <[email protected]>    
date     : Tue, 7 Apr 2015 16:56:21 -0400    

While gcc doesn't complain if you declare a function "static" and then  
define it not-static, other compilers do; and in any case the code is  
highly misleading this way.  Add the missing "static" keywords to a  
couple of recent patches.  Per buildfarm member pademelon.  

M contrib/pg_upgrade/pg_upgrade.c
M src/bin/pg_resetxlog/pg_resetxlog.c
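
A trivial, hypothetical illustration of the inconsistency being removed (the function name is invented). In C the identifier keeps internal linkage either way, and gcc accepts the mismatch silently, but other compilers complain; repeating the keyword on the definition keeps everyone happy.

    /* Declared static ... */
    static int get_control_value(void);

    /* ... but defined without repeating the keyword.  The identifier still has
     * internal linkage, yet the tidy (and portable) fix is to write
     * "static int get_control_value(void)" here as well. */
    int
    get_control_value(void)
    {
        return 42;
    }

    int
    main(void)
    {
        return get_control_value() == 42 ? 0 : 1;
    }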

Fix typo in libpq.sgml.

commit   : 53e97a69e0d1329d6ec08d0f39d87a29364e882f    
  
author   : Fujii Masao <[email protected]>    
date     : Mon, 6 Apr 2015 12:15:20 +0900    
  
committer: Fujii Masao <[email protected]>    
date     : Mon, 6 Apr 2015 12:15:20 +0900    

Back-patch to all supported versions.  
  
Michael Paquier  

M doc/src/sgml/libpq.sgml

Fix incorrect matching of subexpressions in outer-join plan nodes.

commit   : 3b5d67102789e1202d35fbbcb4b3f1a1c181cc02    
  
author   : Tom Lane <[email protected]>    
date     : Sat, 4 Apr 2015 19:55:15 -0400    
  
committer: Tom Lane <[email protected]>    
date     : Sat, 4 Apr 2015 19:55:15 -0400    

Previously we would re-use input subexpressions in all expression trees  
attached to a Join plan node.  However, if it's an outer join and the  
subexpression appears in the nullable-side input, this is potentially  
incorrect for apparently-matching subexpressions that came from above  
the outer join (ie, targetlist and qpqual expressions), because the  
executor will treat the subexpression value as NULL when maybe it should  
not be.  
  
The case is fairly hard to hit because (a) you need a non-strict  
subexpression (else NULL is correct), and (b) we don't usually compute  
expressions in the outputs of non-toplevel plan nodes.  But we might do  
so if the expressions are sort keys for a mergejoin, for example.  
  
Probably in the long run we should make a more explicit distinction between  
Vars appearing above and below an outer join, but that will be a major  
planner redesign and not at all back-patchable.  For the moment, just hack  
set_join_references so that it will not match any non-Var expressions  
coming from nullable inputs to expressions that came from above the join.  
(This is somewhat overkill, in that a strict expression could still be  
matched, but it doesn't seem worth the effort to check that.)  
  
Per report from Qingqing Zhou.  The added regression test case is based  
on his example.  
  
This has been broken for a very long time, so back-patch to all active  
branches.  

M src/backend/optimizer/plan/setrefs.c
M src/test/regress/expected/join.out
M src/test/regress/sql/join.sql

Remove unnecessary variables in _hash_splitbucket().

commit   : 3b828379d110dc37613ed490537ab65f0e12eadf    
  
author   : Tom Lane <[email protected]>    
date     : Fri, 3 Apr 2015 16:49:12 -0400    
  
committer: Tom Lane <[email protected]>    
date     : Fri, 3 Apr 2015 16:49:12 -0400    

Commit ed9cc2b5df59fdbc50cce37399e26b03ab2c1686 made it unnecessary to pass  
start_nblkno to _hash_splitbucket(), and for that matter unnecessary to  
have the internal nblkno variable either.  My compiler didn't complain  
about that, but some did.  I also rearranged the use of oblkno a bit to  
make that case more parallel.  
  
Report and initial patch by Petr Jelinek, rearranged a bit by me.  
Back-patch to all branches, like the previous patch.  

M src/backend/access/hash/hashpage.c

psql: fix \connect with URIs and conninfo strings

commit   : 276591bc4f2680485cd01626d87443b9c978e189    
  
author   : Alvaro Herrera <[email protected]>    
date     : Wed, 1 Apr 2015 20:00:07 -0300    
  
committer: Alvaro Herrera <[email protected]>    
date     : Wed, 1 Apr 2015 20:00:07 -0300    

psql was already accepting conninfo strings as the first parameter in  
\connect, but the way it worked wasn't sane; some of the other  
parameters would get the previous connection's values, causing it to  
connect to a completely unexpected server or, more likely, not find
any server at all because of completely wrong combinations of  
parameters.  
  
Fix by explicitly checking for a conninfo-looking parameter in the
dbname position; if one is found, use its complete specification rather  
than mix with the other arguments.  Also, change tab-completion to not  
try to complete conninfo/URI-looking "dbnames" and document that  
conninfos are accepted as first argument.  
  
There was a weak consensus to backpatch this, because while the behavior  
of using the dbname as a conninfo is nowhere documented for \connect, it  
is reasonable to expect that it works because it does work in many other  
contexts.  Therefore this is backpatched all the way back to 9.0.  
  
To implement this, routines previously private to libpq have been  
duplicated so that psql can decide what looks like a conninfo/URI  
string.  In back branches, just duplicate the same code all the way back  
to 9.2, where URIs were introduced; 9.0 and 9.1 have a simpler version.
In master, the routines are moved to src/common and renamed.  
  
Author: David Fetter, Andrew Dunstan.  Some editorialization by me  
(probably earning a Gierth's "Sloppy" badge in the process.)  
Reviewers: Andrew Gierth, Erik Rijkers, Pavel Stěhule, Stephen Frost,  
Robert Haas, Andrew Dunstan.  

M doc/src/sgml/ref/psql-ref.sgml
M src/bin/psql/command.c
M src/bin/psql/common.c
M src/bin/psql/common.h
M src/bin/psql/help.c
M src/bin/psql/tab-complete.c

Fix incorrect markup in documentation of window frame clauses.

commit   : 41d2cb823bd0a826ff1adf419a22ee0f83ad1f30    
  
author   : Tom Lane <[email protected]>    
date     : Tue, 31 Mar 2015 20:02:40 -0400    
  
committer: Tom Lane <[email protected]>    
date     : Tue, 31 Mar 2015 20:02:40 -0400    

You're required to write either RANGE or ROWS to start a frame clause,  
but the documentation incorrectly implied this is optional.  Noted by  
David Johnston.  

M doc/src/sgml/ref/select.sgml
M doc/src/sgml/syntax.sgml

Remove spurious semicolons.

commit   : d6a892e1e60be36975d6768e39e50cd970840ecd    
  
author   : Heikki Linnakangas <[email protected]>    
date     : Tue, 31 Mar 2015 15:12:27 +0300    
  
committer: Heikki Linnakangas <[email protected]>    
date     : Tue, 31 Mar 2015 15:12:27 +0300    

Petr Jelinek  

M src/backend/parser/gram.y
M src/backend/utils/adt/oracle_compat.c

Run pg_upgrade and pg_resetxlog with restricted token on Windows

commit   : 22b3f5b26e4eea5e598c7898a77e9368ffadbebb    
  
author   : Andrew Dunstan <[email protected]>    
date     : Mon, 30 Mar 2015 17:17:54 -0400    
  
committer: Andrew Dunstan <[email protected]>    
date     : Mon, 30 Mar 2015 17:17:54 -0400    

As with initdb these programs need to run with a restricted token, and  
if they don't, pg_upgrade will fail when run as a user with Administrator
privileges.  
  
Backpatch to all live branches. On the development branch the code is  
reorganized so that the restricted token code is now in a single  
location. On the stable branches a less invasive change is made by
simply copying the relevant code to pg_upgrade.c and pg_resetxlog.c.  
  
Patches and bug report from Muhammad Asif Naeem, reviewed by Michael  
Paquier, slightly edited by me.  

M contrib/pg_upgrade/pg_upgrade.c
M src/bin/pg_resetxlog/pg_resetxlog.c

Fix bogus concurrent use of _hash_getnewbuf() in bucket split code.

commit   : 46bfe44e863f659faca18ed3b7c4d956db368039    
  
author   : Tom Lane <[email protected]>    
date     : Mon, 30 Mar 2015 16:40:05 -0400    
  
committer: Tom Lane <[email protected]>    
date     : Mon, 30 Mar 2015 16:40:05 -0400    

_hash_splitbucket() obtained the base page of the new bucket by calling  
_hash_getnewbuf(), but it held no exclusive lock that would prevent some  
other process from calling _hash_getnewbuf() at the same time.  This is  
contrary to _hash_getnewbuf()'s API spec and could in fact cause failures.  
In practice, we must only call that function while holding write lock on  
the hash index's metapage.  
  
An additional problem was that we'd already modified the metapage's bucket  
mapping data, meaning that failure to extend the index would leave us with  
a corrupt index.  
  
Fix both issues by moving the _hash_getnewbuf() call to just before we  
modify the metapage in _hash_expandtable().  
  
Unfortunately there's still a large problem here, which is that we could  
also incur ENOSPC while trying to get an overflow page for the new bucket.  
That would leave the index corrupt in a more subtle way, namely that some  
index tuples that should be in the new bucket might still be in the old  
one.  Fixing that seems substantially more difficult; even preallocating as  
many pages as we could possibly need wouldn't entirely guarantee that the  
bucket split would complete successfully.  So for today let's just deal  
with the base case.  
  
Per report from Antonin Houska.  Back-patch to all active branches.  

M src/backend/access/hash/hashpage.c

Add vacuum_delay_point call in compute_index_stats's per-sample-row loop.

commit   : ab02d35e08274f2c1084e00e5106e72863a6c85b    
  
author   : Tom Lane <[email protected]>    
date     : Sun, 29 Mar 2015 15:04:09 -0400    
  
committer: Tom Lane <[email protected]>    
date     : Sun, 29 Mar 2015 15:04:09 -0400    

Slow functions in index expressions might cause this loop to take long  
enough to make it worth being cancellable.  Probably it would be enough  
to call CHECK_FOR_INTERRUPTS here, but for consistency with other  
per-sample-row loops in this file, let's use vacuum_delay_point.  
  
Report and patch by Jeff Janes.  Back-patch to all supported branches.  

M src/backend/commands/analyze.c
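
The shape of the change, with the backend entry points replaced by stubs so the sketch stands alone; the real loop walks the sample rows in compute_index_stats(), and the stub below only marks where vacuum_delay_point() is now called.

    #include <stdio.h>

    /* Stand-in for the backend's vacuum_delay_point(), which honours cancel
     * interrupts and cost-based vacuum delay. */
    static void
    vacuum_delay_point_stub(void)
    {
    }

    /* Pretend this may invoke an arbitrarily slow index expression. */
    static double
    evaluate_index_expression(int row)
    {
        return row * 0.5;
    }

    static void
    compute_index_stats_sketch(int numrows)
    {
        double  total = 0.0;

        for (int row = 0; row < numrows; row++)
        {
            /* The fix: give cancellation a chance once per sample row. */
            vacuum_delay_point_stub();
            total += evaluate_index_expression(row);
        }
        printf("total = %g\n", total);
    }

    int
    main(void)
    {
        compute_index_stats_sketch(1000);
        return 0;
    }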

Fix ExecOpenScanRelation to take a lock on a ROW_MARK_COPY relation.

commit   : 054723bcc5b03e40d6341b26b2dce5222a66ce4c    
  
author   : Tom Lane <[email protected]>    
date     : Tue, 24 Mar 2015 15:53:06 -0400    
  
committer: Tom Lane <[email protected]>    
date     : Tue, 24 Mar 2015 15:53:06 -0400    

ExecOpenScanRelation assumed that any relation listed in the ExecRowMark  
list has been locked by InitPlan; but this is not true if the rel's  
markType is ROW_MARK_COPY, which is possible if it's a foreign table.  
  
In most (possibly all) cases, failure to acquire a lock here isn't really  
problematic because the parser, planner, or plancache would have taken the  
appropriate lock already.  In principle though it might leave us vulnerable  
to working with a relation that we hold no lock on, and in any case if the  
executor isn't depending on previously-taken locks otherwise then it should  
not do so for ROW_MARK_COPY relations.  
  
Noted by Etsuro Fujita.  Back-patch to all active versions, since the  
inconsistency has been there a long time.  (It's almost certainly  
irrelevant in 9.0, since that predates foreign tables, but the code's  
still wrong on its own terms.)  

M src/backend/executor/execMain.c
M src/backend/executor/execUtils.c

Replace insertion sort in contrib/intarray with qsort().

commit   : 9288645b596efde3a58e267896b38f8f089a2920    
  
author   : Tom Lane <[email protected]>    
date     : Sun, 15 Mar 2015 23:22:03 -0400    
  
committer: Tom Lane <[email protected]>    
date     : Sun, 15 Mar 2015 23:22:03 -0400    

It's all very well to claim that a simplistic sort is fast in easy  
cases, but O(N^2) in the worst case is not good ... especially if the  
worst case is as easy to hit as "descending order input".  Replace that  
bit with our standard qsort.  
  
Per bug #12866 from Maksym Boguk.  Back-patch to all active branches.  

M contrib/intarray/_int_tool.c
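
A minimal standalone example of the replacement technique (standard qsort() with a comparison callback), not the contrib/intarray code itself; descending input, the insertion sort's worst case, poses no problem here.

    #include <stdio.h>
    #include <stdlib.h>

    static int
    compare_int32(const void *a, const void *b)
    {
        int av = *(const int *) a;
        int bv = *(const int *) b;

        return (av > bv) - (av < bv);   /* avoids overflow of av - bv */
    }

    int
    main(void)
    {
        int vals[] = {9, 7, 5, 3, 1};   /* descending: worst case for O(N^2) */
        int n = (int) (sizeof(vals) / sizeof(vals[0]));

        qsort(vals, n, sizeof(vals[0]), compare_int32);

        for (int i = 0; i < n; i++)
            printf("%d ", vals[i]);
        printf("\n");
        return 0;
    }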

Remove workaround for ancient incompatibility between readline and libedit.

commit   : 043fe5c5a62a71455162b322c5ac819716702b02    
  
author   : Tom Lane <[email protected]>    
date     : Sat, 14 Mar 2015 13:43:00 -0400    
  
committer: Tom Lane <[email protected]>    
date     : Sat, 14 Mar 2015 13:43:00 -0400    

GNU readline defines the return value of write_history() as "zero if OK,  
else an errno code".  libedit's version of that function used to have a  
different definition (to wit, "-1 if error, else the number of lines  
written to the file").  We tried to work around that by checking whether  
errno had become nonzero, but this method has never been kosher according  
to the published API of either library.  It's reportedly completely broken  
in recent Ubuntu releases: psql bleats about "No such file or directory"  
when saving ~/.psql_history, even though the write worked fine.  
  
However, libedit has been following the readline definition since somewhere  
around 2006, so it seems all right to finally break compatibility with  
ancient libedit releases and trust that the return value is what readline  
specifies.  (I'm not sure when the various Linux distributions incorporated  
this fix, but I did find that OS X has been shipping fixed versions since  
10.5/Leopard.)  
  
If anyone is still using such an ancient libedit, they will find that psql  
complains it can't write ~/.psql_history at exit, even when the file was  
written correctly.  This is no worse than the behavior we're fixing for  
current releases.  
  
Back-patch to all supported branches.  

M src/bin/psql/input.c
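
How a caller reads write_history() under the readline definition now being trusted: zero means success, anything else is an errno-style code. This small example assumes GNU readline (or a post-2006 libedit), linking with -lreadline, and an illustrative file path.

    #include <stdio.h>
    #include <string.h>
    #include <readline/history.h>

    int
    main(void)
    {
        int     rc;

        using_history();
        add_history("select 1;");

        /* Per GNU readline: 0 on success, otherwise an errno-style code.
         * Checking errno afterwards (the old workaround) is not reliable. */
        rc = write_history("/tmp/history_demo");
        if (rc != 0)
            fprintf(stderr, "could not write history file: %s\n", strerror(rc));
        return rc == 0 ? 0 : 1;
    }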

Ensure tableoid reads correctly in EvalPlanQual-manufactured tuples.

commit   : 4a4fd2b0ceedb339118b0cf10ca78e472ce20a90    
  
author   : Tom Lane <[email protected]>    
date     : Thu, 12 Mar 2015 13:38:49 -0400    
  
committer: Tom Lane <[email protected]>    
date     : Thu, 12 Mar 2015 13:38:49 -0400    

The ROW_MARK_COPY path in EvalPlanQualFetchRowMarks() was just setting  
tableoid to InvalidOid, I think on the assumption that the referenced  
RTE must be a subquery or other case without a meaningful OID.  However,  
foreign tables also use this code path, and they do have meaningful  
table OIDs; so failure to set the tuple field can lead to user-visible  
misbehavior.  Fix that by fetching the appropriate OID from the range  
table.  
  
There's still an issue about whether CTID can ever have a meaningful  
value in this case; at least with postgres_fdw foreign tables, it does.  
But that is a different problem that seems to require a significantly  
different patch --- it's debatable whether postgres_fdw really wants to  
use this code path at all.  
  
Simplified version of a patch by Etsuro Fujita, who also noted the  
problem to begin with.  The issue can be demonstrated in all versions  
having FDWs, so back-patch to 9.1.  

M src/backend/executor/execMain.c

Fix documentation for libpq's PQfn().

commit   : 92bb008d8b62fe38c0272a43628b2ae22682c512    
  
author   : Tom Lane <[email protected]>    
date     : Sun, 8 Mar 2015 13:35:28 -0400    
  
committer: Tom Lane <[email protected]>    
date     : Sun, 8 Mar 2015 13:35:28 -0400    

The SGML docs claimed that 1-byte integers could be sent or received with  
the "isint" options, but no such behavior has ever been implemented in  
pqGetInt() or pqPutInt().  The in-code documentation header for PQfn() was  
even less in tune with reality, and the code itself used parameter names  
matching neither the SGML docs nor its libpq-fe.h declaration.  Do a bit  
of additional wordsmithing on the SGML docs while at it.  
  
Since the business about 1-byte integers is a clear documentation bug,  
back-patch to all supported branches.  

M doc/src/sgml/libpq.sgml
M src/interfaces/libpq/fe-exec.c

Fix contrib/file_fdw's expected file

commit   : 1352ee9bde859719994a31f1f9bbc04dfdbf065a    
  
author   : Alvaro Herrera <[email protected]>    
date     : Fri, 6 Mar 2015 11:47:09 -0300    
  
committer: Alvaro Herrera <[email protected]>    
date     : Fri, 6 Mar 2015 11:47:09 -0300    

I forgot to update it on yesterday's cf34e373fcf.  

M contrib/file_fdw/output/file_fdw.source

Fix user mapping object description

commit   : 8167ef8e2cd4e3c4c875ed5d29f40cf9cc4c88e5    
  
author   : Alvaro Herrera <[email protected]>    
date     : Thu, 5 Mar 2015 18:03:16 -0300    
  
committer: Alvaro Herrera <[email protected]>    
date     : Thu, 5 Mar 2015 18:03:16 -0300    

We were using "user mapping for user XYZ" as description for user mappings, but  
that's ambiguous because users can have mappings on multiple foreign  
servers; therefore change it to "for user XYZ on server UVW" instead.  
Object identities for user mappings are also updated in the same way, in  
branches 9.3 and above.  
  
The incomplete description string was introduced together with the whole  
SQL/MED infrastructure by commit cae565e503 of 8.4 era, so backpatch all  
the way back.  

M src/backend/catalog/dependency.c
M src/test/regress/expected/foreign_data.out

Fix pg_dump handling of extension config tables

commit   : dcb467b8e246262c13d29ca54bacae28ea613188    
  
author   : Stephen Frost <[email protected]>    
date     : Mon, 2 Mar 2015 14:12:43 -0500    
  
committer: Stephen Frost <[email protected]>    
date     : Mon, 2 Mar 2015 14:12:43 -0500    

Since 9.1, we've provided extensions with a way to denote  
"configuration" tables- tables created by an extension which the user  
may modify.  By marking these as "configuration" tables, the extension  
is asking for the data in these tables to be pg_dump'd (tables which  
are not marked in this way are assumed to be entirely handled during  
CREATE EXTENSION and are not included at all in a pg_dump).  
  
Unfortunately, pg_dump neglected to consider foreign key relationships  
between extension configuration tables and therefore could end up  
trying to reload the data in an order which would cause FK violations.  
  
This patch teaches pg_dump about these dependencies, so that the data  
dumped out is done so in the best order possible.  Note that there's no  
way to handle circular dependencies, but those have yet to be seen in  
the wild.  
  
The release notes for this should include a caution to users that  
existing pg_dump-based backups may be invalid due to this issue.  The  
data is all there, but restoring from it will require extracting the  
data for the configuration tables and then loading them in the correct  
order by hand.  
  
Discussed initially back in bug #6738, more recently brought up by  
Gilles Darold, who provided an initial patch which was further reworked  
by Michael Paquier.  Further modifications and documentation updates  
by me.  
  
Back-patch to 9.1 where we added the concept of extension configuration  
tables.  

M doc/src/sgml/extend.sgml
M src/bin/pg_dump/pg_dump.c

Unlink static libraries before rebuilding them.

commit   : 1c966854b241f0df77bec24612142e7ad7fc9e38
  
author   : Noah Misch <[email protected]>    
date     : Sun, 1 Mar 2015 13:05:23 -0500    
  
committer: Noah Misch <[email protected]>    
date     : Sun, 1 Mar 2015 13:05:23 -0500    

When the library already exists in the build directory, "ar" preserves  
members not named on its command line.  This mattered when, for example,  
a "configure" rerun dropped a file from $(LIBOBJS).  libpgport carried  
the obsolete member until "make clean".  Back-patch to 9.0 (all  
supported versions).  

M src/Makefile.shlib
M src/port/Makefile

Reconsider when to wait for WAL flushes/syncrep during commit.

commit   : 5c8dabecdb2e8324436e9c6a5a98fc3162743cb1    
  
author   : Andres Freund <[email protected]>    
date     : Thu, 26 Feb 2015 12:50:08 +0100    
  
committer: Andres Freund <[email protected]>    
date     : Thu, 26 Feb 2015 12:50:08 +0100    

Up to now RecordTransactionCommit() waited for WAL to be flushed (if  
synchronous_commit != off) and to be synchronously replicated (if  
enabled), even if a transaction did not have a xid assigned. The primary  
reason for that is that a sequence's nextval() did not assign a xid, but
its effects are worthwhile to wait for on commit.
  
This can be problematic because sometimes read only transactions do  
write WAL, e.g. HOT page prune records. That then could lead to read only  
transactions having to wait during commit. Not something people expect  
in a read only transaction.  
  
This led to such strange symptoms as backends being seemingly stuck
during connection establishment when all synchronous replicas are  
down. Especially annoying when said stuck connection is the standby  
trying to reconnect to allow syncrep again...  
  
This behavior also is involved in a rather complicated <= 9.4 bug where  
the transaction started by catchup interrupt processing waited for  
syncrep using latches, but didn't get the wakeup because it was already  
running inside the same overloaded signal handler. The fix here doesn't
properly solve that issue; it merely papers over the problems. In
9.5 catchup interrupts aren't processed out of signal handlers anymore.  
  
To fix all this, make nextval() acquire a top level xid, and only wait for  
transaction commit if a transaction both acquired a xid and emitted WAL  
records.  If only a xid has been assigned we don't uselessly want to  
wait just because of writes to temporary/unlogged tables; if only WAL  
has been written we don't want to wait just because of HOT prunes.  
  
The xid assignment in nextval() is unlikely to cause overhead in  
real-world workloads. For one, it only happens once every SEQ_LOG_VALS
(i.e. 32) values anyway; for another, only usage of nextval() without using the result in
an insert or similar is affected.  
  
Discussion: [email protected],  
    369698E947874884A77849D8FE3680C2@maumau,  
    5CF4ABBA67674088B3941894E22A0D25@maumau  
  
Per complaint from maumau and Thom Brown  
  
Backpatch all the way back; 9.0 doesn't have syncrep, but it seems  
better to be consistent behavior across all maintained branches.  

M src/backend/access/transam/xact.c
M src/backend/commands/sequence.c

Free SQLSTATE and SQLERRM no earlier than other PL/pgSQL variables.

commit   : 034d05dbdf6340247b58c80640b2cdfc772f4ee1    
  
author   : Noah Misch <[email protected]>    
date     : Wed, 25 Feb 2015 23:48:28 -0500    
  
committer: Noah Misch <[email protected]>    
date     : Wed, 25 Feb 2015 23:48:28 -0500    

"RETURN SQLERRM" prompted plpgsql_exec_function() to read from freed  
memory.  Back-patch to 9.0 (all supported versions).  Little code ran  
between the premature free and the read, so non-assert builds are  
unlikely to witness user-visible consequences.  

M src/pl/plpgsql/src/pl_exec.c
M src/test/regress/expected/plpgsql.out
M src/test/regress/sql/plpgsql.sql

Fix dumping of views that are just VALUES(...) but have column aliases.

commit   : f7b41902a6737c682e6311455976f1205014cb8b    
  
author   : Tom Lane <[email protected]>    
date     : Wed, 25 Feb 2015 12:01:12 -0500    
  
committer: Tom Lane <[email protected]>    
date     : Wed, 25 Feb 2015 12:01:12 -0500    

The "simple" path for printing VALUES clauses doesn't work if we need  
to attach nondefault column aliases, because there's noplace to do that  
in the minimal VALUES() syntax.  So modify get_simple_values_rte() to  
detect nondefault aliases and treat that as a non-simple case.  This  
further exposes that the "non-simple" path never actually worked;  
it didn't produce valid syntax.  Fix that too.  Per bug #12789 from  
Curtis McEnroe, and analysis by Andrew Gierth.  
  
Back-patch to all supported branches.  Before 9.3, this also requires  
back-patching the part of commit 092d7ded29f36b0539046b23b81b9f0bf2d637f1  
that created get_simple_values_rte() to begin with; inserting the extra  
test into the old factorization of that logic would've been too messy.  

M src/backend/utils/adt/ruleutils.c
M src/test/regress/expected/rules.out
M src/test/regress/sql/rules.sql

Guard against spurious signals in LockBufferForCleanup.

commit   : 25576bee25dd6f48669a523ec54a54192faa734b    
  
author   : Andres Freund <[email protected]>    
date     : Mon, 23 Feb 2015 16:11:11 +0100    
  
committer: Andres Freund <[email protected]>    
date     : Mon, 23 Feb 2015 16:11:11 +0100    

Click here for diff

When LockBufferForCleanup() has to wait to get a cleanup lock on a  
buffer, it does so by setting a flag in the buffer header and then waiting  
for other backends to signal it using ProcWaitForSignal().  
Unfortunately LockBufferForCleanup() missed that ProcWaitForSignal() can  
return for reasons other than the signal it is hoping for. If such a  
spurious signal arrives, the wait flag on the buffer header will still  
be set. That then triggers "ERROR: multiple backends attempting to wait  
for pincount 1".  
  
The fix is simple: unset the flag if it is still set when retrying. That  
implies an additional spinlock acquisition/release, but that's unlikely  
to matter given the cost of waiting for a cleanup lock.  Alternatively  
it would have been possible to move responsibility for maintaining the  
relevant flag to the waiter altogether, but that might have had  
negative consequences due to possible floods of signals, besides being  
more invasive.  
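  
The shape of the fix, as a toy model rather than the actual bufmgr.c code;  
every name below (ToyBuffer, toy_lock, toy_wait_for_signal,  
TOY_PIN_COUNT_WAITER) is hypothetical and merely stands in for the buffer  
header, its spinlock, the waiter flag, and the ProcWaitForSignal()-style  
primitive:  
  
    typedef struct ToyBuffer
    {
        int     refcount;               /* number of pins held */
        int     flags;
    } ToyBuffer;

    #define TOY_PIN_COUNT_WAITER 0x01   /* "someone is waiting for pincount 1" */

    extern void toy_lock(ToyBuffer *buf);       /* spinlock acquire */
    extern void toy_unlock(ToyBuffer *buf);     /* spinlock release */
    extern void toy_wait_for_signal(void);      /* may return spuriously */

    void
    toy_lock_buffer_for_cleanup(ToyBuffer *buf)
    {
        for (;;)
        {
            toy_lock(buf);
            if (buf->refcount == 1)
            {
                toy_unlock(buf);
                return;                          /* we hold the only pin */
            }
            buf->flags |= TOY_PIN_COUNT_WAITER;  /* announce that we're waiting */
            toy_unlock(buf);

            toy_wait_for_signal();               /* wakeup may be unrelated to us */

            /*
             * The fix: clear the waiter flag before retrying.  Without this, a
             * spurious wakeup leaves the stale flag behind, and a later waiter
             * hits "multiple backends attempting to wait for pincount 1".
             */
            toy_lock(buf);
            buf->flags &= ~TOY_PIN_COUNT_WAITER;
            toy_unlock(buf);
        }
    }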
  
This looks to be a very longstanding bug. The relevant code in  
LockBufferForCleanup() hasn't changed materially since its introduction,  
and ProcWaitForSignal() has been documented to return for unrelated reasons  
since 8.2.  The master-only patch series removing ImmediateInterruptOK  
made it much easier to hit, though, as ProcSendSignal/ProcWaitForSignal  
now use a latch shared with other tasks.  
  
Per discussion with Kevin Grittner, Tom Lane and me.  
  
Backpatch to all supported branches.  
  
Discussion: [email protected]  

M src/backend/storage/buffer/bufmgr.c

Fix potential deadlock with libpq non-blocking mode.

commit   : 7052abbb6ce28f273fca9cda7afd3ab4a5391807    
  
author   : Heikki Linnakangas <[email protected]>    
date     : Mon, 23 Feb 2015 13:32:34 +0200    
  
committer: Heikki Linnakangas <[email protected]>    
date     : Mon, 23 Feb 2015 13:32:34 +0200    

Click here for diff

If the libpq output buffer is full, the pqSendSome() function tries to drain  
any incoming data. This avoids a deadlock if the server e.g. sends a lot of  
NOTICE messages and blocks until we read them. However, pqSendSome() only  
did that in blocking mode. In non-blocking mode, the deadlock could still  
happen.  
  
To fix, take a two-pronged approach:  
  
1. Change the documentation to instruct that when PQflush() returns 1, you  
should wait for both read- and write-ready, and call PQconsumeInput() if it  
becomes read-ready (a sketch of this pattern follows the list). That fixes  
the deadlock, but applications are not going to change overnight.  
  
2. In pqSendSome(), drain the input buffer before returning 1. This  
alleviates the problem for applications that only wait for write-ready. In  
particular, a slow but steady stream of NOTICE messages during COPY FROM  
STDIN will no longer cause a deadlock. The risk remains that the server  
attempts to send a large burst of data and fills its output buffer, and at  
the same time the client also sends enough data to fill its output buffer.  
The application will deadlock if it goes to sleep, waiting for the socket  
to become write-ready, before the server's data arrives. In practice,  
NOTICE messages and such that the server might be sending are usually  
short, so it's highly unlikely that the server would fill its output buffer  
so quickly.  
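  
A minimal sketch of the client-side pattern from point 1, assuming select(2)  
is used for readiness waits; flush_nonblocking() is a hypothetical helper,  
while PQflush(), PQsocket(), and PQconsumeInput() are the documented libpq  
calls:  
  
    #include <sys/select.h>
    #include <libpq-fe.h>

    /* Returns 0 once the output buffer is flushed, -1 on error. */
    static int
    flush_nonblocking(PGconn *conn)
    {
        int         r;

        while ((r = PQflush(conn)) == 1)    /* 1 = could not send everything yet */
        {
            fd_set      rfds;
            fd_set      wfds;
            int         sock = PQsocket(conn);

            FD_ZERO(&rfds);
            FD_ZERO(&wfds);
            FD_SET(sock, &rfds);
            FD_SET(sock, &wfds);

            if (select(sock + 1, &rfds, &wfds, NULL, NULL) < 0)
                return -1;

            /* Drain server data first, so neither side stays blocked on writes. */
            if (FD_ISSET(sock, &rfds) && !PQconsumeInput(conn))
                return -1;
        }
        return r;                           /* 0 on success, -1 on failure */
    }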
  
Backpatch to all supported versions.  

M doc/src/sgml/libpq.sgml
M src/interfaces/libpq/fe-misc.c

Fix failure to honor -Z compression level option in pg_dump -Fd.

commit   : b0d53b2e3025a25499d7e81772a04560e749e876    
  
author   : Tom Lane <[email protected]>    
date     : Wed, 18 Feb 2015 11:43:00 -0500    
  
committer: Tom Lane <[email protected]>    
date     : Wed, 18 Feb 2015 11:43:00 -0500    

Click here for diff

cfopen() and cfopen_write() failed to pass the compression level through  
to zlib, so that you always got the default compression level if you got  
any at all.  
  
In passing, also fix these and related functions so that the correct errno  
is reliably returned on failure; the original coding supposes that free()  
cannot change errno, which is untrue on at least some platforms.  
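  
The errno point, as a minimal illustration (a hypothetical helper, not the  
compress_io.c code): stash errno before cleanup calls such as free(), then  
restore it so the caller sees the original failure:  
  
    #include <errno.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Open a file; on failure, release 'state' without losing fopen()'s errno. */
    static FILE *
    open_or_cleanup(const char *path, void *state)
    {
        FILE       *fp = fopen(path, "rb");

        if (fp == NULL)
        {
            int         save_errno = errno;

            free(state);            /* free() may clobber errno on some platforms */
            errno = save_errno;     /* report the fopen() failure, not free()'s */
        }
        return fp;
    }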
  
Per bug #12779 from Christoph Berg.  Back-patch to 9.1 where the faulty  
code was introduced.  
  
Michael Paquier  

M src/bin/pg_dump/compress_io.c

Minor cleanup of column-level priv fix

commit   : cfc14b2bf348528bc8f0f04fe1ff80e7abdf0529    
  
author   : Stephen Frost <[email protected]>    
date     : Tue, 17 Feb 2015 15:36:24 -0500    
  
committer: Stephen Frost <[email protected]>    
date     : Tue, 17 Feb 2015 15:36:24 -0500    

Click here for diff

Commit 9406884af19e2620a14059e64d4eb6ab430ab328 cleaned up  
column-privilege related leaks in various error-message paths, but ended  
up including a few more things than it should have in the back branches.  
  
Specifically, there's no need for the GetModifiedColumns macro in  
execMain.c as 9.1 and older didn't include the row in check constraint  
violations.  Further, the regression tests added to check those cases  
aren't necessary.  
  
This patch removes the GetModifiedColumns macro from execMain.c, removes  
the comment which was added to trigger.c related to the duplicate macro  
definition, and removes the check-constraint-related regression tests.  
  
Pointed out by Robert.  
  
Back-patched to 9.1 and 9.0.  

M src/backend/commands/trigger.c
M src/backend/executor/execMain.c
M src/test/regress/expected/privileges.out
M src/test/regress/sql/privileges.sql

Remove code to match IPv4 pg_hba.conf entries to IPv4-in-IPv6 addresses.

commit   : 64e0458383c1fad7660499c312ab3e419513690f    
  
author   : Tom Lane <[email protected]>    
date     : Tue, 17 Feb 2015 12:49:18 -0500    
  
committer: Tom Lane <[email protected]>    
date     : Tue, 17 Feb 2015 12:49:18 -0500    

Click here for diff

In investigating yesterday's crash report from Hugo Osvaldo Barrera, I only  
looked back as far as commit f3aec2c7f51904e7 where the breakage occurred  
(which is why I thought the IPv4-in-IPv6 business was undocumented).  But  
actually the logic dates back to commit 3c9bb8886df7d56a and was simply  
broken by erroneous refactoring in the later commit.  A bit of archives  
excavation shows that we added the whole business in response to a report  
that some 2003-era Linux kernels would report IPv4 connections as having  
IPv4-in-IPv6 addresses.  The fact that we've had no complaints since 9.0  
seems to be sufficient confirmation that no modern kernels do that, so  
let's just rip it all out rather than trying to fix it.  
  
Do this in the back branches too, thus essentially deciding that our  
effective behavior since 9.0 is correct.  If there are any platforms on  
which the kernel reports IPv4-in-IPv6 addresses as such, yesterday's fix  
would have made for a subtle and potentially security-sensitive change in  
the effective meaning of IPv4 pg_hba.conf entries, which does not seem like  
a good thing to do in minor releases.  So let's let the post-9.0 behavior  
stand, and change the documentation to match it.  
  
In passing, I failed to resist the temptation to wordsmith the description  
of pg_hba.conf IPv4 and IPv6 address entries a bit.  A lot of this text  
hasn't been touched since we were IPv4-only.  

M doc/src/sgml/client-auth.sgml
M src/backend/libpq/hba.c
M src/backend/libpq/ip.c
M src/include/libpq/ip.h

Improve pg_check_dir's handling of closedir() failures.

commit   : d7d294f5935e157f239b32c6d1f3d4e923a4eed5    
  
author   : Robert Haas <[email protected]>    
date     : Tue, 17 Feb 2015 10:19:30 -0500    
  
committer: Robert Haas <[email protected]>    
date     : Tue, 17 Feb 2015 10:19:30 -0500    

Click here for diff

Avoid losing errno if readdir() fails and closedir() works.  This also  
avoids leaking the directory handle when readdir() fails.  Commit  
6f03927fce038096f53ca67eeab9adb24938f8a6 introduced logic to better  
handle readdir() and closedir() failures, but it missed these cases.  
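  
An illustrative version of the pattern (not the pgcheckdir.c code): remember  
readdir()'s errno so a successful closedir() cannot wipe it out, and close  
the handle on every path so it isn't leaked:  
  
    #include <dirent.h>
    #include <errno.h>

    /* Returns 0 on success, -1 on failure with errno describing the first error. */
    static int
    scan_directory(const char *path)
    {
        DIR        *dir = opendir(path);
        struct dirent *de;
        int         readdir_errno;

        if (dir == NULL)
            return -1;

        errno = 0;
        while ((de = readdir(dir)) != NULL)
        {
            (void) de;                  /* ... examine de->d_name here ... */
            errno = 0;
        }
        readdir_errno = errno;          /* nonzero only if readdir() failed */

        if (closedir(dir) != 0)
            return -1;                  /* closedir()'s own errno applies */

        if (readdir_errno != 0)
        {
            errno = readdir_errno;      /* a successful closedir() may have cleared it */
            return -1;
        }
        return 0;
    }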
  
Extracted from a larger patch by Marco Nenciarini.  

M src/port/pgcheckdir.c

Fix misuse of memcpy() in check_ip().

commit   : 2df854f842bee71cb59f7307b5bad9c3235be2ec    
  
author   : Tom Lane <[email protected]>    
date     : Mon, 16 Feb 2015 16:17:48 -0500    
  
committer: Tom Lane <[email protected]>    
date     : Mon, 16 Feb 2015 16:17:48 -0500    

Click here for diff

The previous coding copied garbage into a local variable, pretty much  
ensuring that the intended test of an IPv6 connection address against a  
promoted IPv4 address from pg_hba.conf would never match.  The lack of  
field complaints likely indicates that nobody realized this was supposed  
to work, which is unsurprising considering that no user-facing docs suggest  
it should work.  
  
In principle this could have led to a SIGSEGV due to reading off the end of  
memory, but since the source address would have pointed to somewhere in the  
function's stack frame, that's quite unlikely.  What led to discovery of  
the bug is Hugo Osvaldo Barrera's report of a crash after an OS upgrade,  
which is probably because he is now running a system in which memcpy()  
calls abort() upon detecting overlapping source and destination areas.  (You'd  
have to additionally suppose some things about the stack frame layout to  
arrive at this conclusion, but it seems plausible.)  
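  
For illustration only, and unrelated to the actual fix (which simply copies  
the correct bytes): memcpy() on overlapping regions is undefined, and some C  
libraries abort when they detect the overlap, which is the crash mechanism  
hypothesized above; memmove() is the defined way to copy overlapping regions:  
  
    #include <string.h>

    /* Shift a buffer's contents left by one byte. */
    static void
    shift_left_one(char *buf, size_t len)
    {
        if (len < 2)
            return;
        /* memcpy(buf, buf + 1, len - 1) would be undefined: the regions overlap */
        memmove(buf, buf + 1, len - 1);
    }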
  
This has been broken since the code was added, in commit f3aec2c7f51904e7,  
so back-patch to all supported branches.  

M src/backend/libpq/hba.c

pg_regress: Write processed input/*.source into output dir

commit   : 94e7b84c32b47841bb565d611f1b273e6842a646    
  
author   : Peter Eisentraut <[email protected]>    
date     : Sat, 14 Feb 2015 21:33:41 -0500    
  
committer: Peter Eisentraut <[email protected]>    
date     : Sat, 14 Feb 2015 21:33:41 -0500    

Click here for diff

Before, it was writing the processed files into the input directory,  
which is incorrect in a vpath build.  

M src/test/regress/pg_regress.c

Fix broken #ifdef for __sparcv8

commit   : ebdc2e1e20da61b57518a59eb4fc1786f5d1a403    
  
author   : Heikki Linnakangas <[email protected]>    
date     : Fri, 13 Feb 2015 23:51:23 +0200    
  
committer: Heikki Linnakangas <[email protected]>    
date     : Fri, 13 Feb 2015 23:51:23 +0200    

Click here for diff

Rob Rowan. Backpatch to all supported versions, like the patch that added  
the broken #ifdef.  

M src/include/storage/s_lock.h

pg_upgrade: quote directory names in delete_old_cluster script

commit   : 08aaae40e18e89065b191dbcaae54ab87ce63979    
  
author   : Bruce Momjian <[email protected]>    
date     : Wed, 11 Feb 2015 22:06:04 -0500    
  
committer: Bruce Momjian <[email protected]>    
date     : Wed, 11 Feb 2015 22:06:04 -0500    

Click here for diff

This allows the delete script to function properly when special  
characters, e.g. spaces, appear in directory paths.  
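  
A minimal sketch of the idea (the helper below is hypothetical; the real  
change is in the script-writing code in check.c): quote the path when  
emitting the shell command so whitespace doesn't split it into multiple  
words:  
  
    #include <stdio.h>

    /* Write one deletion command into the generated script, with the path quoted. */
    static void
    emit_delete_command(FILE *script, const char *old_datadir)
    {
        fprintf(script, "rm -rf \"%s\"\n", old_datadir);
    }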
  
Backpatch through 9.0  

M contrib/pg_upgrade/check.c

pg_upgrade: preserve freeze info for postgres/template1 dbs

commit   : 55179b03ea05af8d05adfe53657e1f1b742d7ceb    
  
author   : Bruce Momjian <[email protected]>    
date     : Wed, 11 Feb 2015 21:02:07 -0500    
  
committer: Bruce Momjian <[email protected]>    
date     : Wed, 11 Feb 2015 21:02:07 -0500    

Click here for diff

pg_database.datfrozenxid and pg_database.datminmxid were not preserved  
for the 'postgres' and 'template1' databases.  This could cause missing  
clog file errors on access to user tables and indexes after upgrades in  
these databases.  
  
Backpatch through 9.0  

M src/bin/pg_dump/pg_dumpall.c

Fixed array handling in ecpg.

commit   : 32e6331958390ef8d09c5d696ec0afb1f34cd1e9    
  
author   : Michael Meskes <[email protected]>    
date     : Wed, 11 Feb 2015 11:13:11 +0100    
  
committer: Michael Meskes <[email protected]>    
date     : Wed, 11 Feb 2015 11:13:11 +0100    

Click here for diff

When ecpg was rewritten for the new protocol version, not all variable types  
were corrected. This patch rewrites the code for these types to fix that. It  
also fixes the documentation to correctly describe the status of array handling.  

M doc/src/sgml/ecpg.sgml
M src/interfaces/ecpg/ecpglib/data.c
M src/interfaces/ecpg/ecpglib/execute.c

Fix pg_dump's heuristic for deciding which casts to dump.

commit   : 14794f9b8ef9e555b93fd1d8f55594b821410f8e    
  
author   : Tom Lane <[email protected]>    
date     : Tue, 10 Feb 2015 22:38:26 -0500    
  
committer: Tom Lane <[email protected]>    
date     : Tue, 10 Feb 2015 22:38:26 -0500    

Click here for diff

Back in 2003 we had a discussion about how to decide which casts to dump.  
At the time pg_dump really only considered an object's containing schema  
to decide what to dump (ie, dump whatever's not in pg_catalog), and so  
we chose a complicated idea involving whether the underlying types were to  
be dumped (cf commit a6790ce85752b67ad994f55fdf1a450262ccc32e).  But users  
are allowed to create casts between built-in types, and we failed to dump  
such casts.  Let's get rid of that heuristic, which has accreted even more  
ugliness since then, in favor of just looking at the cast's OID to decide  
if it's a built-in cast or not.  
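  
Roughly, the new rule reduces to the following (a simplified sketch, not the  
pg_dump code; FirstNormalObjectId is the real constant from access/transam.h,  
below which all OIDs are assigned during initdb):  
  
    /* OIDs below this value are assigned during initdb (see access/transam.h). */
    #define FirstNormalObjectId  16384

    /* A cast created by initdb is built in and is not dumped; user casts are. */
    static int
    cast_should_be_dumped(unsigned int cast_oid)
    {
        return cast_oid >= FirstNormalObjectId;
    }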
  
In passing, also fix some really ancient code that supposed that it had to  
manufacture a dependency for the cast on its cast function; that's only  
true when dumping from a pre-7.3 server.  This just resulted in some wasted  
cycles and duplicate dependency-list entries with newer servers, but we  
might as well improve it.  
  
Per gripes from a number of people, most recently Greg Sabino Mullane.  
Back-patch to all supported branches.  

M src/bin/pg_dump/pg_dump.c

Fix GEQO to not assume its join order heuristic always works.

commit   : 52579d507e624d3e24a062aba741b985b60c40b7    
  
author   : Tom Lane <[email protected]>    
date     : Tue, 10 Feb 2015 20:37:29 -0500    
  
committer: Tom Lane <[email protected]>    
date     : Tue, 10 Feb 2015 20:37:29 -0500    

Click here for diff

Back in commit 400e2c934457bef4bc3cc9a3e49b6289bd761bc0 I rewrote GEQO's  
gimme_tree function to improve its heuristic for modifying the given tour  
into a legal join order.  In what can only be called a fit of hubris,  
I supposed that this new heuristic would *always* find a legal join order,  
and ripped out the old logic that allowed gimme_tree to sometimes fail.  
  
The folly of this is exposed by bug #12760, in which the "greedy" clumping  
behavior of merge_clump() can lead it into a dead end which could only be  
recovered from by un-clumping.  We have no code for that and wouldn't know  
exactly what to do with it if we did.  Rather than try to improve the  
heuristic rules still further, let's just recognize that it *is* a  
heuristic and probably must always have failure cases.  So, put back the  
code removed in the previous commit to allow for failure (but comment it  
a bit better this time).  
  
It's possible that this code was actually fully correct at the time and  
has only been broken by the introduction of LATERAL.  But having seen this  
example I no longer have much faith in that proposition, so back-patch to  
all supported branches.  

M src/backend/optimizer/geqo/geqo_eval.c
M src/backend/optimizer/geqo/geqo_main.c
M src/backend/optimizer/geqo/geqo_pool.c

Report WAL flush, not insert, position in replication IDENTIFY_SYSTEM

commit   : 0d36d9f2b9a2178bb19ffbe663e39818f51a0f82    
  
author   : Heikki Linnakangas <[email protected]>    
date     : Fri, 6 Feb 2015 11:18:14 +0200    
  
committer: Heikki Linnakangas <[email protected]>    
date     : Fri, 6 Feb 2015 11:18:14 +0200    

Click here for diff

When beginning streaming replication, the client usually issues the  
IDENTIFY_SYSTEM command, which used to return the current WAL insert  
position. That's not suitable for the intended purpose of that field,  
however. pg_receivexlog uses it to start replication from the reported  
point, but if it hasn't been flushed to disk yet, it will fail. Change  
IDENTIFY_SYSTEM to report the flush position instead.  
  
Backpatch to 9.1 and above. 9.0 doesn't report any WAL position.  

M doc/src/sgml/protocol.sgml
M src/backend/replication/walsender.c

Add missing float.h include to snprintf.c.

commit   : 490a91894f14df1000273973b54f2b254ab42ed9    
  
author   : Andres Freund <[email protected]>    
date     : Wed, 4 Feb 2015 13:27:31 +0100    
  
committer: Andres Freund <[email protected]>    
date     : Wed, 4 Feb 2015 13:27:31 +0100    

Click here for diff

On Windows, _isnan() (which isnan() is redirected to in port/win32.h)  
is declared in float.h, not math.h.  
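  
The gist, sketched (the actual patch just adds the include to snprintf.c;  
whether it is guarded by #ifdef WIN32 there is an assumption here):  
  
    #include <math.h>
    #ifdef WIN32
    #include <float.h>              /* declares _isnan(), which isnan() maps to */
    #endif

    static int
    value_is_nan(double v)
    {
        return isnan(v);
    }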
  
Per buildfarm animal currawong.  
  
Backpatch to all supported branches.  

M src/port/snprintf.c