PostgreSQL 9.3.6 commit log

Stamp 9.3.6.

  
commit   : b5ea07b06d58519c54aa3f15067f9a44d84f6d8e    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 2 Feb 2015 15:43:50 -0500    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 2 Feb 2015 15:43:50 -0500    


  
  

Last-minute updates for release notes.

  
commit   : 0a819b6f6239188ac9d6c9d7f463ff9c6ca9e4ec    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 2 Feb 2015 11:24:05 -0500    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 2 Feb 2015 11:24:05 -0500    


  
Add entries for security issues.  
  
Security: CVE-2015-0241 through CVE-2015-0244  
  

Be more careful to not lose sync in the FE/BE protocol.

  
commit   : cd19848bd555223eb613c699a5f6360b4133f7fa    
  
author   : Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Mon, 2 Feb 2015 17:09:04 +0200    
  
committer: Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Mon, 2 Feb 2015 17:09:04 +0200    


  
If any error occurred while we were in the middle of reading a protocol  
message from the client, we could lose sync, and incorrectly try to  
interpret a part of another message as a new protocol message. That will  
usually lead to an "invalid frontend message" error that terminates the  
connection. However, this is a security issue because an attacker might  
be able to deliberately cause an error, inject a Query message in what's  
supposed to be just user data, and have the server execute it.  
  
We were quite careful to not have CHECK_FOR_INTERRUPTS() calls or other  
operations that could ereport(ERROR) in the middle of processing a message,  
but a query cancel interrupt or statement timeout could nevertheless cause  
it to happen. Also, the V2 fastpath and COPY handling were not so careful.  
It's very difficult to recover in the V2 COPY protocol, so we will just  
terminate the connection on error. In practice, that's what happened  
previously anyway, as we lost protocol sync.  
  
To fix, add a new variable in pqcomm.c, PqCommReadingMsg, that is set  
whenever we're in the middle of reading a message. When it's set, we cannot  
safely ERROR out and continue running, because we might've read only part  
of a message. PqCommReadingMsg acts somewhat similarly to critical sections  
in that if an error occurs while it's set, the error handler will force the  
connection to be terminated, as if the error was FATAL. It's not  
implemented by promoting ERROR to FATAL in elog.c, like ERROR is promoted  
to PANIC in critical sections, because we want to be able to use  
PG_TRY/CATCH to recover and regain protocol sync. pq_getmessage() takes  
advantage of that to prevent an OOM error from terminating the connection.  
  
To prevent unnecessary connection terminations, add a holdoff mechanism  
similar to HOLD/RESUME_INTERRUPTS() that can be used to hold off query cancel  
interrupts, but still allow die interrupts. The rules on which interrupts  
are processed when are now a bit more complicated, so refactor  
ProcessInterrupts() and the calls to it in signal handlers so that the  
signal handlers always call it if ImmediateInterruptOK is set, and  
ProcessInterrupts() can decide to not do anything if the other conditions  
are not met.  
  
Reported by Emil Lenngren. Patch reviewed by Noah Misch and Andres Freund.  
Backpatch to all supported versions.  
  
Security: CVE-2015-0244  
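
The guard described above can be sketched in miniature: a flag marks the span during which a partial message may have been consumed, and the error handler checks it to decide between recovery and termination. All names below are illustrative stand-ins (PG_TRY/PG_CATCH are built on setjmp/longjmp), not the actual pqcomm.c code.

```c
#include <assert.h>
#include <setjmp.h>
#include <stdbool.h>

/* Illustrative stand-ins, not the real pqcomm.c symbols. */
static bool reading_msg = false;   /* plays the role of PqCommReadingMsg */
static jmp_buf error_ctx;

enum outcome { RECOVERED, TERMINATED };

/* Stand-in for ereport(ERROR). */
static void raise_error(void) { longjmp(error_ctx, 1); }

/* Run one simulated error and report how the handler must react. */
static enum outcome simulate_error(bool during_message_read)
{
    if (setjmp(error_ctx) != 0)
    {
        /* Error handler: if the flag is set we may have consumed only
         * part of a message and cannot resync, so the connection must
         * be terminated as if the error were FATAL. */
        bool was_reading = reading_msg;
        reading_msg = false;
        return was_reading ? TERMINATED : RECOVERED;
    }

    reading_msg = during_message_read;  /* set while reading a message */
    raise_error();                      /* e.g. statement timeout fires */
    return RECOVERED;                   /* not reached */
}
```

The actual fix layers PG_TRY/CATCH on top of this so that recoverable cases (such as an OOM in pq_getmessage()) can regain sync instead of dropping the connection.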
  

Cherry-pick security-relevant fixes from upstream imath library.

  
commit   : a558ad3a7e014d42bc7e22df53c4a019e639df14    
  
author   : Noah Misch <noah@leadboat.com>    
date     : Mon, 2 Feb 2015 10:00:45 -0500    
  
committer: Noah Misch <noah@leadboat.com>    
date     : Mon, 2 Feb 2015 10:00:45 -0500    


  
This covers alterations to buffer sizing and zeroing made between imath  
1.3 and imath 1.20.  Valgrind Memcheck identified the buffer overruns  
and reliance on uninitialized data; their exploit potential is unknown.  
Builds specifying --with-openssl are unaffected, because they use the  
OpenSSL BIGNUM facility instead of imath.  Back-patch to 9.0 (all  
supported versions).  
  
Security: CVE-2015-0243  
  

Fix buffer overrun after incomplete read in pullf_read_max().

  
commit   : 6994f07907b90ff03f661ca00e0341a9078fa843    
  
author   : Noah Misch <noah@leadboat.com>    
date     : Mon, 2 Feb 2015 10:00:45 -0500    
  
committer: Noah Misch <noah@leadboat.com>    
date     : Mon, 2 Feb 2015 10:00:45 -0500    


  
Most callers pass a stack buffer.  The ensuing stack smash can crash the  
server, and we have not ruled out the viability of attacks that lead to  
privilege escalation.  Back-patch to 9.0 (all supported versions).  
  
Marko Tiikkaja  
  
Security: CVE-2015-0243  
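
The bug class is easy to state in code: a reader may return fewer bytes than it was asked for, and everything downstream must be bounded by the count actually read, not the count requested. This is a hypothetical miniature of that contract, not the pgcrypto pullf_read_max() code.

```c
#include <assert.h>
#include <string.h>

/* Hypothetical miniature: a source may yield fewer bytes than
 * requested (an incomplete read). */
static size_t read_upto(char *dst, size_t want, const char *src, size_t avail)
{
    size_t got = (avail < want) ? avail : want;
    memcpy(dst, src, got);      /* bounded by got, never by want */
    return got;
}

/* A careful caller zero-fills the unread tail of its (often stack-
 * allocated) buffer instead of treating the full requested length
 * as valid data. */
static size_t read_zero_tail(char *dst, size_t want,
                             const char *src, size_t avail)
{
    size_t got = read_upto(dst, want, src, avail);
    memset(dst + got, 0, want - got);   /* no garbage past the data */
    return got;
}

static int demo(void)
{
    char buf[8];
    size_t got = read_zero_tail(buf, sizeof(buf), "abc", 3);
    return got == 3 && memcmp(buf, "abc", 3) == 0
        && buf[3] == 0 && buf[7] == 0;
}
```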
  

port/snprintf(): fix overflow and do padding

  
commit   : bc4d5f2e57379feb7a577d25da2681a77e7ca854    
  
author   : Bruce Momjian <bruce@momjian.us>    
date     : Mon, 2 Feb 2015 10:00:45 -0500    
  
committer: Bruce Momjian <bruce@momjian.us>    
date     : Mon, 2 Feb 2015 10:00:45 -0500    


  
Prevent port/snprintf() from overflowing its local fixed-size  
buffer and pad to the desired number of digits with zeros, even  
if the precision is beyond the ability of the native sprintf().  
port/snprintf() is only used on systems that lack a native  
snprintf().  
  
Reported by Bruce Momjian.  Patch by Tom Lane.  Backpatch to all  
supported versions.  
  
Security: CVE-2015-0242  
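
A hedged sketch of the two guarantees of the fix (a hypothetical helper, not the actual port/snprintf.c code): zero-pad out to the requested precision, while every write stays inside the caller's buffer and the result is always terminated.

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical helper: pad with zeros out to `precision` digits and
 * never write past the end of dst. */
static void fmt_uint_padded(char *dst, size_t dstsize,
                            unsigned val, size_t precision)
{
    char digits[32];
    size_t ndig = (size_t) snprintf(digits, sizeof(digits), "%u", val);
    size_t pos = 0;

    for (size_t i = ndig; i < precision && pos + 1 < dstsize; i++)
        dst[pos++] = '0';               /* leading zeros first */
    for (size_t i = 0; i < ndig && pos + 1 < dstsize; i++)
        dst[pos++] = digits[i];
    if (dstsize > 0)
        dst[pos] = '\0';                /* always NUL-terminate */
}

static int demo(void)
{
    char buf[8];
    fmt_uint_padded(buf, sizeof(buf), 42, 5);
    return strcmp(buf, "00042") == 0;
}
```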
  

to_char(): prevent writing beyond the allocated buffer

  
commit   : fe2526990b821efb9452fa8601ee216a487202ff    
  
author   : Bruce Momjian <bruce@momjian.us>    
date     : Mon, 2 Feb 2015 10:00:45 -0500    
  
committer: Bruce Momjian <bruce@momjian.us>    
date     : Mon, 2 Feb 2015 10:00:45 -0500    


  
Previously very long localized month and weekday strings could  
overflow the allocated buffers, causing a server crash.  
  
Reported and patch reviewed by Noah Misch.  Backpatch to all  
supported versions.  
  
Security: CVE-2015-0241  
  

to_char(): prevent accesses beyond the allocated buffer

  
commit   : b8b5801478e9cdd1c74bd392017b944dcc0891dc    
  
author   : Bruce Momjian <bruce@momjian.us>    
date     : Mon, 2 Feb 2015 10:00:44 -0500    
  
committer: Bruce Momjian <bruce@momjian.us>    
date     : Mon, 2 Feb 2015 10:00:44 -0500    


  
Previously very long field masks for floats could access memory  
beyond the existing buffer allocated to hold the result.  
  
Reported by Andres Freund and Peter Geoghegan.  Backpatch to all  
supported versions.  
  
Security: CVE-2015-0241  
  

Doc: fix syntax description for psql's \setenv.

  
commit   : fa06ce595a7a955c3f1f6dbaada71a204f6c7724    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 2 Feb 2015 00:18:54 -0500    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 2 Feb 2015 00:18:54 -0500    


  
The variable name isn't optional --- looks like a copy-and-paste-o from  
the \set command, where it is.  
  
Dilip Kumar  
  

Translation updates

  
commit   : 52472bdcf03e3768c8e972535d9aaf7b3ba255dd    
  
author   : Peter Eisentraut <peter_e@gmx.net>    
date     : Sun, 1 Feb 2015 23:08:39 -0500    
  
committer: Peter Eisentraut <peter_e@gmx.net>    
date     : Sun, 1 Feb 2015 23:08:39 -0500    


  
Source-Git-URL: git://git.postgresql.org/git/pgtranslation/messages.git  
Source-Git-Hash: 2ba4cf334b8ed1d46593e3127ecc673eb96bc7a8  
  

doc: Improve claim about location of pg_service.conf

  
commit   : 6b9b705c98c53576d4e1208e6d291fb09c42ea0e    
  
author   : Peter Eisentraut <peter_e@gmx.net>    
date     : Sun, 1 Feb 2015 22:36:44 -0500    
  
committer: Peter Eisentraut <peter_e@gmx.net>    
date     : Sun, 1 Feb 2015 22:36:44 -0500    


  
The previous wording claimed that the file was always in /etc, but of  
course this varies with the installation layout.  Write instead that it  
can be found via `pg_config --sysconfdir`.  Even though this is still  
somewhat incorrect because it doesn't account for moved installations, it  
at least conveys that the location depends on the installation.  
  

Release notes for 9.4.1, 9.3.6, 9.2.10, 9.1.15, 9.0.19.

  
commit   : 9f8ba18278b4e4cf3b857d16d6df8a24f8fcabcb    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sun, 1 Feb 2015 16:53:20 -0500    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sun, 1 Feb 2015 16:53:20 -0500    


  
  

Fix documentation of psql's ECHO all mode.

  
commit   : c0b5127c118000a94bf2af8ea425c89b0f2ef53d    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sat, 31 Jan 2015 18:35:21 -0500    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sat, 31 Jan 2015 18:35:21 -0500    


  
"ECHO all" is ignored for interactive input, and has been for a very long  
time, though possibly not for as long as the documentation has claimed the  
opposite.  Fix that, and also note that empty lines aren't echoed, which  
while dubious is another longstanding behavior (it's embedded in our  
regression test files for one thing).  Per bug #12721 from Hans Ginzel.  
  
In HEAD, also improve the code comments in this area, and suppress an  
unnecessary fflush(stdout) when we're not echoing.  That would likely  
be safe to back-patch, but I'll not risk it mere hours before a release  
wrap.  
  

Update time zone data files to tzdata release 2015a.

  
commit   : 8470cd4735470b4acd5b7c4620d28ad6907fd7f6    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 30 Jan 2015 22:45:44 -0500    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 30 Jan 2015 22:45:44 -0500    


  
DST law changes in Chile and Mexico (state of Quintana Roo).  
Historical changes for Iceland.  
  

Fix Coverity warning about contrib/pgcrypto's mdc_finish().

  
commit   : f08cf8ad9098bd26a57fabc5ecbfd7d38e6c2cee    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 30 Jan 2015 13:05:01 -0500    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 30 Jan 2015 13:05:01 -0500    


  
Coverity points out that mdc_finish returns a pointer to a local buffer  
(which of course is gone as soon as the function returns), leaving open  
a risk of misbehaviors possibly as bad as a stack overwrite.  
  
In reality, the only possible call site is in process_data_packets()  
which does not examine the returned pointer at all.  So there's no  
live bug, but nonetheless the code is confusing and risky.  Refactor  
to avoid the issue by letting process_data_packets() call mdc_finish()  
directly instead of going through the pullf_read() API.  
  
Although this is only cosmetic, it seems good to back-patch so that  
the logic in pgp-decrypt.c stays in sync across all branches.  
  
Marko Kreen  
  

Fix assorted oversights in range selectivity estimation.

  
commit   : 527ff8baf259263cc861b2058385d92829d1ba28    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 30 Jan 2015 12:30:43 -0500    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 30 Jan 2015 12:30:43 -0500    


  
calc_rangesel() failed outright when comparing range variables to empty  
constant ranges with < or >=, as a result of missing cases in a switch.  
It also produced a bogus estimate for > comparison to an empty range.  
  
On top of that, the >= and > cases were mislabeled throughout.  For  
nonempty constant ranges, they managed to produce the right answers  
anyway as a result of counterbalancing typos.  
  
Also, default_range_selectivity() omitted cases for elem <@ range,  
range &< range, and range &> range, so that rather dubious defaults  
were applied for these operators.  
  
In passing, rearrange the code in rangesel() so that the elem <@ range  
case is handled in a less opaque fashion.  
  
Report and patch by Emre Hasegeli, some additional work by me  
  

Allow pg_dump to use jobs and serializable transactions together.

  
commit   : cc609c46fbf0d0e8a7995207c07286c6248eb732    
  
author   : Kevin Grittner <kgrittn@postgresql.org>    
date     : Fri, 30 Jan 2015 09:01:36 -0600    
  
committer: Kevin Grittner <kgrittn@postgresql.org>    
date     : Fri, 30 Jan 2015 09:01:36 -0600    


  
Since 9.3, when the --jobs option was introduced, using it together  
with the --serializable-deferrable option generated multiple  
errors.  We can get correct behavior by allowing the connection  
which acquires the snapshot to use SERIALIZABLE, READ ONLY,  
DEFERRABLE and pass that to the workers running the other  
connections using REPEATABLE READ, READ ONLY.  This is a bit of a  
kluge since the SERIALIZABLE behavior is achieved by running some  
of the participating connections at a different isolation level,  
but it is a simple and safe change, suitable for back-patching.  
  
This will be followed by a proposal for a more invasive fix with  
some slight behavioral changes on just the master branch, based on  
suggestions from Andres Freund, but the kluge will be applied to  
master until something is agreed along those lines.  
  
Back-patched to 9.3, where the --jobs option was added.  
  
Based on report from Alexander Korotkov  
  

Fix BuildIndexValueDescription for expressions

  
commit   : 39c46c5f28b2e46f2b8ce7fe5fac3fa66f1f0abf    
  
author   : Stephen Frost <sfrost@snowman.net>    
date     : Thu, 29 Jan 2015 21:59:51 -0500    
  
committer: Stephen Frost <sfrost@snowman.net>    
date     : Thu, 29 Jan 2015 21:59:51 -0500    


  
In 804b6b6db4dcfc590a468e7be390738f9f7755fb we modified  
BuildIndexValueDescription to pay attention to which columns are visible  
to the user, but unfortunately that commit neglected to consider indexes  
which are built on expressions.  
  
Handle error-reporting of violations of constraint indexes based on  
expressions by not returning any detail when the user does not have  
table-level SELECT rights.  
  
Backpatch to 9.0, as the prior commit was.  
  
Pointed out by Tom.  
  

Handle unexpected query results, especially NULLs, safely in connectby().

  
commit   : 53ae2469233a7b017407ac11851aa27c3c294f0f    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 29 Jan 2015 20:18:40 -0500    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 29 Jan 2015 20:18:40 -0500    


  
connectby() didn't adequately check that the constructed SQL query returns  
what it's expected to; in fact, since commit 08c33c426bfebb32 it wasn't  
checking that at all.  This could result in a null-pointer-dereference  
crash if the constructed query returns only one column instead of the  
expected two.  Less excitingly, it could also result in surprising data  
conversion failures if the constructed query returned values that were  
not I/O-conversion-compatible with the types specified by the query  
calling connectby().  
  
In all branches, insist that the query return at least two columns;  
this seems like a minimal sanity check that can't break any reasonable  
use-cases.  
  
In HEAD, insist that the constructed query return the types specified by  
the outer query, including checking for typmod incompatibility, which the  
code never did even before it got broken.  This is to hide the fact that  
the implementation does a conversion to text and back; someday we might  
want to improve that.  
  
In back branches, leave that alone, since adding a type check in a minor  
release is more likely to break things than make people happy.  Type  
inconsistencies will continue to work so long as the actual type and  
declared type are I/O representation compatible, and otherwise will fail  
the same way they used to.  
  
Also, in all branches, be on guard for NULL results from the constructed  
query, which formerly would cause null-pointer dereference crashes.  
We now print the row with the NULL but don't recurse down from it.  
  
In passing, get rid of the rather pointless idea that  
build_tuplestore_recursively() should return the same tuplestore that's  
passed to it.  
  
Michael Paquier, adjusted somewhat by me  
  

Properly terminate the array returned by GetLockConflicts().

  
commit   : 9be655632bf23fa08bf48ffe672a6be125af8780    
  
author   : Andres Freund <andres@anarazel.de>    
date     : Thu, 29 Jan 2015 17:49:03 +0100    
  
committer: Andres Freund <andres@anarazel.de>    
date     : Thu, 29 Jan 2015 17:49:03 +0100    


  
GetLockConflicts() has for a long time not properly terminated the  
returned array. During normal processing the returned array is zero  
initialized which, while not pretty, is sufficient to be recognized as  
an invalid virtual transaction id. But the HotStandby case is more than  
aesthetically broken: The allocated (and reused) array is neither  
zeroed upon allocation, nor reinitialized, nor terminated.  
  
Not having a terminating element means that the end of the array will  
not be recognized and that recovery conflict handling will thus read  
ahead into adjacent memory, only terminating when it hits memory  
content that looks like an invalid virtual transaction id.  Luckily  
this seems so far not to have caused significant problems, besides making  
recovery conflict handling more expensive.  
  
Discussion: 20150127142713.GD29457@awork2.anarazel.de  
  
Backpatch into all supported branches.  
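
In miniature, the contract looks like this (illustrative types, not the real VirtualTransactionId): the consumer walks the array until it sees an invalid element, so the producer must always write that sentinel.

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative stand-in for VirtualTransactionId; backend_id 0 plays
 * the role of "invalid". */
typedef struct { int backend_id; int local_xid; } Vxid;

static int vxid_is_valid(Vxid v) { return v.backend_id != 0; }

/* Consumer: scans until the terminating invalid entry.  Without that
 * sentinel it keeps reading into whatever memory follows the array,
 * which is the recovery-conflict bug described above. */
static size_t count_conflicts(const Vxid *arr)
{
    size_t n = 0;
    while (vxid_is_valid(arr[n]))
        n++;
    return n;
}

static int demo(void)
{
    /* Producer must terminate the array explicitly. */
    Vxid conflicts[] = { {1, 10}, {2, 20}, {0, 0} };
    return count_conflicts(conflicts) == 2;
}
```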
  

Fix bug where GIN scan keys were not initialized with gin_fuzzy_search_limit.

  
commit   : 1c2774f3771353cac002ffcb4325331a20613305    
  
author   : Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Thu, 29 Jan 2015 19:35:55 +0200    
  
committer: Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Thu, 29 Jan 2015 19:35:55 +0200    


  
When gin_fuzzy_search_limit was used, we could jump out of startScan()  
without calling startScanKey(). That was harmless in 9.3 and below, because  
startScanKey() didn't do anything interesting, but in 9.4 it initializes  
information needed for skipping entries (aka GIN fast scans), and you  
readily get a segfault if it's not done. Nevertheless, it was clearly wrong  
all along, so backpatch all the way to 9.1 where the early return was  
introduced.  
  
(AFAICS startScanKey() did nothing useful in 9.3 and below, because the  
fields it initialized were already initialized in ginFillScanKey(), but I  
don't dare to change that in a minor release. ginFillScanKey() is always  
called in gingetbitmap() even though there's a check there to see if the  
scan keys have already been initialized, because they never are; ginrescan()  
frees them.)  
  
In passing, remove an unnecessary if-check from the second inner loop in  
startScan(). We already check in the first loop that the condition is true  
for all entries.  
  
Reported by Olaf Gawenda, bug #12694.  Backpatch to 9.1 and above, although  
AFAICS it causes a live bug only in 9.4.  
  

Clean up range-table building in copy.c

  
commit   : d5d46e626ab3107908bce887b55f75d3bc13db94    
  
author   : Stephen Frost <sfrost@snowman.net>    
date     : Wed, 28 Jan 2015 17:43:02 -0500    
  
committer: Stephen Frost <sfrost@snowman.net>    
date     : Wed, 28 Jan 2015 17:43:02 -0500    


  
Commit 804b6b6db4dcfc590a468e7be390738f9f7755fb added the build of a  
range table in copy.c to initialize the EState es_range_table since it  
can be needed in error paths.  Unfortunately, that commit didn't  
appreciate that some code paths might end up not initializing the rte  
which is used to build the range table.  
  
Fix that and clean up a couple of other things along the way: build it  
only once and don't explicitly set it on the !is_from path as it  
doesn't make any sense there (cstate is palloc0'd, so this isn't an  
issue from an initializing standpoint either).  
  
The prior commit went back to 9.0, but this only goes back to 9.1 as  
prior to that the range table build happens immediately after building  
the RTE and therefore doesn't suffer from this issue.  
  
Pointed out by Robert.  
  

Fix column-privilege leak in error-message paths

  
commit   : 4b987421612131eb15e9cdb0c0363b2a9fb95325    
  
author   : Stephen Frost <sfrost@snowman.net>    
date     : Mon, 12 Jan 2015 17:04:11 -0500    
  
committer: Stephen Frost <sfrost@snowman.net>    
date     : Mon, 12 Jan 2015 17:04:11 -0500    


  
While building error messages to return to the user,  
BuildIndexValueDescription, ExecBuildSlotValueDescription and  
ri_ReportViolation would happily include the entire key or entire row in  
the result returned to the user, even if the user didn't have access to  
view all of the columns being included.  
  
Instead, include only those columns which the user is providing or which  
the user has select rights on.  If the user does not have any rights  
to view the table or any of the columns involved then no detail is  
provided and a NULL value is returned from BuildIndexValueDescription  
and ExecBuildSlotValueDescription.  Note that, for key cases, the user  
must have access to all of the columns for the key to be shown; a  
partial key will not be returned.  
  
Back-patch all the way, as column-level privileges are now in all  
supported versions.  
  
This has been assigned CVE-2014-8161, but since the issue and the patch  
have already been publicized on pgsql-hackers, there's no point in trying  
to hide this commit.  
  

Fix NUMERIC field access macros to treat NaNs consistently.

  
commit   : 78f79b66990d5fc469dcf370d8fa4cc91f87cc85    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 27 Jan 2015 12:06:38 -0500    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 27 Jan 2015 12:06:38 -0500    


  
Commit 145343534c153d1e6c3cff1fa1855787684d9a38 arranged to store numeric  
NaN values as short-header numerics, but the field access macros did not  
get the memo: they thought only "SHORT" numerics have short headers.  
  
Most of the time this makes no difference because we don't access the  
weight or dscale of a NaN; but numeric_send does that.  As pointed out  
by Andrew Gierth, this led to fetching uninitialized bytes.  
  
AFAICS this could not have any worse consequences than that; in particular,  
an unaligned stored numeric would have been detoasted by PG_GETARG_NUMERIC,  
so that there's no risk of a fetch off the end of memory.  Still, the code  
is wrong on its own terms, and it's not hard to foresee future changes that  
might expose us to real risks.  So back-patch to all affected branches.  

Fix volatile-safety issue in dblink's materializeQueryResult().

commit   : e50e2e08232426f235a7dd7be078cd617be9202a    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 26 Jan 2015 15:17:39 -0500    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 26 Jan 2015 15:17:39 -0500    


  
Some fields of the sinfo struct are modified within PG_TRY and then  
referenced within PG_CATCH, so as with recent patch to async.c, "volatile"  
is necessary for strict POSIX compliance; and that propagates to a couple  
of subroutines as well as materializeQueryResult() itself.  I think the  
risk of actual issues here is probably higher than in async.c, because  
storeQueryResult() is likely to get inlined into materializeQueryResult(),  
leaving the compiler free to conclude that its stores into sinfo fields are  
dead code.  
  

Fix volatile-safety issue in pltcl_SPI_execute_plan().

  
commit   : 22967ed5b457e9a8646c5960856fca79b7d1de36    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 26 Jan 2015 12:18:25 -0500    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 26 Jan 2015 12:18:25 -0500    


  
The "callargs" variable is modified within PG_TRY and then referenced  
within PG_CATCH, which is exactly the coding pattern we've now found  
to be unsafe.  Marking "callargs" volatile would be problematic because  
it is passed by reference to some Tcl functions, so fix the problem  
by not modifying it within PG_TRY.  We can just postpone the free()  
till we exit the PG_TRY construct, as is already done elsewhere in this  
same file.  
  
Also, fix failure to free(callargs) when exiting on too-many-arguments  
error.  This is only a minor memory leak, but a leak nonetheless.  
  
In passing, remove some unnecessary "volatile" markings in the same  
function.  Those doubtless are there because gcc 2.95.3 whinged about  
them, but we now know that its algorithm for complaining is many bricks  
shy of a load.  
  
This is certainly a live bug with compilers that optimize similarly  
to current gcc, so back-patch to all active branches.  
  

Fix volatile-safety issue in asyncQueueReadAllNotifications().

  
commit   : 6ae1513a797208340c1137654c24a38dc90b05a0    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 26 Jan 2015 11:57:40 -0500    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 26 Jan 2015 11:57:40 -0500    


  
The "pos" variable is modified within PG_TRY and then referenced  
within PG_CATCH, so for strict POSIX conformance it must be marked  
volatile.  Superficially the code looked safe because pos's address  
was taken, which was sufficient to force it into memory ... but it's  
not sufficient to ensure that the compiler applies updates exactly  
where the program text says to.  The volatility marking has to extend  
into a couple of subroutines too, but I think that's probably a good  
thing because the risk of out-of-order updates is mostly in those  
subroutines not asyncQueueReadAllNotifications() itself.  In principle  
the compiler could have re-ordered operations such that an error could  
be thrown while "pos" had an incorrect value.  
  
It's unclear how real the risk is here, but for safety back-patch  
to all active branches.  
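
The underlying C rule can be shown directly with setjmp/longjmp, which is what PG_TRY/PG_CATCH are built on: an automatic variable modified after setjmp() and read after the longjmp() has an indeterminate value unless it is declared volatile. A minimal sketch, not the async.c code:

```c
#include <assert.h>
#include <setjmp.h>

static jmp_buf env;

static int demo(void)
{
    /* Modified between setjmp() and longjmp(), then read after the
     * jump: C requires `volatile` here, otherwise the value is
     * indeterminate (the compiler may keep it only in a register
     * that the jump clobbers). */
    volatile int pos = 0;

    if (setjmp(env) != 0)
        return pos;     /* the "PG_CATCH" path reads pos */

    pos = 42;           /* the "PG_TRY" body updates pos */
    longjmp(env, 1);    /* the "error" */
    return 0;           /* not reached */
}
```

Taking the variable's address (as async.c did with "pos") forces it into memory but does not constrain when the compiler performs the stores, which is why the marking is still required.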
  

Replace a bunch more uses of strncpy() with safer coding.

  
commit   : 7240f9200c96cb58fdf96d39b46dfe56b4965a58    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sat, 24 Jan 2015 13:05:49 -0500    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sat, 24 Jan 2015 13:05:49 -0500    


  
strncpy() has a well-deserved reputation for being unsafe, so make an  
effort to get rid of nearly all occurrences in HEAD.  
  
A large fraction of the remaining uses were passing length less than or  
equal to the known strlen() of the source, in which case no null-padding  
can occur and the behavior is equivalent to memcpy(), though doubtless  
slower and certainly harder to reason about.  So just use memcpy() in  
these cases.  
  
In other cases, use either StrNCpy() or strlcpy() as appropriate (depending  
on whether padding to the full length of the destination buffer seems  
useful).  
  
I left a few strncpy() calls alone in the src/timezone/ code, to keep it  
in sync with upstream (the IANA tzcode distribution).  There are also a  
few such calls in ecpg that could possibly do with more analysis.  
  
AFAICT, none of these changes are more than cosmetic, except for the four  
occurrences in fe-secure-openssl.c, which are in fact buggy: an overlength  
source leads to a non-null-terminated destination buffer and ensuing  
misbehavior.  These don't seem like security issues, first because no stack  
clobber is possible and second because if your values of sslcert etc are  
coming from untrusted sources then you've got problems way worse than this.  
Still, it's undesirable to have unpredictable behavior for overlength  
inputs, so back-patch those four changes to all active branches.  
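
The fe-secure-openssl.c bug is the classic strncpy() trap: when the source fills the destination, no terminating NUL is written. A strlcpy-style replacement (sketched here with a hypothetical name, since strlcpy is not part of ISO C) always terminates and lets the caller detect truncation:

```c
#include <assert.h>
#include <string.h>

/* strlcpy-style helper: always NUL-terminates (unlike strncpy) and
 * returns strlen(src) so the caller can detect truncation. */
static size_t copy_trunc(char *dst, const char *src, size_t dstsize)
{
    size_t srclen = strlen(src);

    if (dstsize > 0)
    {
        size_t n = (srclen >= dstsize) ? dstsize - 1 : srclen;
        memcpy(dst, src, n);
        dst[n] = '\0';
    }
    return srclen;
}

static int demo(void)
{
    char small[4];
    char big[16];

    size_t r1 = copy_trunc(small, "overlong", sizeof(small));
    size_t r2 = copy_trunc(big, "ok", sizeof(big));

    return r1 == 8 && strcmp(small, "ove") == 0   /* truncated, terminated */
        && r2 == 2 && strcmp(big, "ok") == 0;     /* fits unchanged */
}
```

When the copy length is already known to be at most strlen(src), no padding or termination issue arises and plain memcpy(), as the commit uses, is the simplest correct choice.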
  

Improve documentation of random() function.

  
commit   : b8e5f669910cb5e26fdf3d255e981e875f87a7d7    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 20 Jan 2015 21:21:34 -0500    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 20 Jan 2015 21:21:34 -0500    


  
Move random() and setseed() to a separate table, to have them grouped  
together. Also add a notice that random() is not cryptographically secure.  
  
Back-patch of commit 75fdcec14543b60cc0c67483d8cc47d5c7adf1a8 into  
all supported versions, per discussion of the need to document that  
random() is just a wrapper around random(3).  
  

In pg_regress, remove the temporary installation upon successful exit.

  
commit   : 1681e2f740a7b7e6b59f68cd72d9b1f61a8dcb4e    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 19 Jan 2015 23:44:24 -0500    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 19 Jan 2015 23:44:24 -0500    


  
This results in a very substantial reduction in disk space usage during  
"make check-world", since that sequence involves creation of numerous  
temporary installations.  It should also help a bit in the buildfarm, even  
though the buildfarm script doesn't create as many temp installations,  
because the current script misses deleting some of them; and anyway it  
seems better to do this once in one place rather than expecting that  
script to get it right every time.  
  
In 9.4 and HEAD, also undo the unwise choice in commit b1aebbb6a86e96d7  
to report strerror(errno) after a rmtree() failure.  rmtree has already  
reported that, possibly for multiple failures with distinct errnos; and  
what's more, by the time it returns there is no good reason to assume  
that errno still reflects the last reportable error.  So reporting errno  
here is at best redundant and at worst badly misleading.  
  
Back-patch to all supported branches, so that future revisions of the  
buildfarm script can rely on this behavior.  
  

Adjust "pgstat wait timeout" message to be a translatable LOG message.

  
commit   : 19794e9976746e97ef33ea4a459b00ea1b12c53e    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 19 Jan 2015 23:01:39 -0500    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 19 Jan 2015 23:01:39 -0500    


  
Per discussion, change the log level of this message to be LOG not WARNING.  
The main point of this change is to avoid causing buildfarm run failures  
when the stats collector is exceptionally slow to respond, which it not  
infrequently is on some of the smaller/slower buildfarm members.  
  
This change does lose notice to an interactive user when his stats query  
is looking at out-of-date stats, but the majority opinion (not necessarily  
that of yours truly) is that WARNING messages would probably not get  
noticed anyway on heavily loaded production systems.  A LOG message at  
least ensures that the problem is recorded somewhere where bulk auditing  
for the issue is possible.  
  
Also, instead of an untranslated "pgstat wait timeout" message, provide  
a translatable and hopefully more understandable message "using stale  
statistics instead of current ones because stats collector is not  
responding".  The original text was written hastily under the assumption  
that it would never really happen in practice, which we now know to be  
unduly optimistic.  
  
Back-patch to all active branches, since we've seen the buildfarm issue  
in all branches.  
  

Fix use of already freed memory when dumping a database's security label.

  
commit   : 509da5929ae0f5a5a51e5f86f4c9ae659c189b1a    
  
author   : Andres Freund <andres@anarazel.de>    
date     : Sun, 18 Jan 2015 15:57:55 +0100    
  
committer: Andres Freund <andres@anarazel.de>    
date     : Sun, 18 Jan 2015 15:57:55 +0100    


  
pg_dump.c:dumDatabase() called ArchiveEntry() with the results of a a  
query that was PQclear()ed a couple lines earlier.  
  
Backpatch to 9.2, where security labels for shared objects were  
introduced.  
  

Fix namespace handling in xpath function

  
commit   : e32cb8d0e02f8f3fa2c66d80b5cf0ad762c79c9d    
  
author   : Peter Eisentraut <peter_e@gmx.net>    
date     : Tue, 6 Jan 2015 23:06:13 -0500    
  
committer: Peter Eisentraut <peter_e@gmx.net>    
date     : Tue, 6 Jan 2015 23:06:13 -0500    

Click here for diff

  
Previously, the xml value resulting from an xpath query would not have  
namespace declarations if the namespace declarations were attached to  
an ancestor element in the input xml value.  That means the output value  
was not correct XML.  Fix that by running the result value through  
xmlCopyNode(), which produces the correct namespace declarations.  
  
Author: Ali Akbar <the.apaan@gmail.com>  
  

Another attempt at fixing Windows Norwegian locale.

  
commit   : 1619442a1f592a1ea3988f3f74ffd14880962e0e    
  
author   : Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Fri, 16 Jan 2015 12:12:49 +0200    
  
committer: Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Fri, 16 Jan 2015 12:12:49 +0200    

Click here for diff

  
Previous fix mapped "Norwegian (Bokmål)" locale, which contains a non-ASCII  
character, to the pure ASCII alias "norwegian-bokmal". However, it turns  
out that more recent versions of the CRT library, in particular MSVCR110  
(Visual Studio 2012), changed the behaviour of setlocale() so that if  
you pass "norwegian-bokmal" to setlocale, it returns "Norwegian_Norway".  
  
That meant trouble, when setlocale(..., NULL) first returned  
"Norwegian (Bokmål)_Norway", which we mapped to "norwegian-bokmal_Norway",  
but another call to setlocale(..., "norwegian-bokmal_Norway") returned  
"Norwegian_Norway". That caused PostgreSQL to think that they are different  
locales, and therefore not compatible. That caused initdb to fail at  
CREATE DATABASE.  
  
Older CRT versions seem to accept "Norwegian_Norway" too, so change the  
mapping to return "Norwegian_Norway" instead of "norwegian-bokmal".  
  
Backpatch to 9.2 like the previous attempt. We haven't made a release that  
includes the previous fix yet, so we don't need to worry about changing the  
locale of existing clusters from "norwegian-bokmal" to "Norwegian_Norway".  
(Doing any mapping like this at all requires changing the locale of  
existing databases; the release notes need to include instructions for  
that).  
  

Update “pg_regress --no-locale” for Darwin and Windows.

  
commit   : 03f80d9e9afcfa485f2c583ba6c486cd93a5bda5    
  
author   : Noah Misch <noah@leadboat.com>    
date     : Fri, 16 Jan 2015 01:27:31 -0500    
  
committer: Noah Misch <noah@leadboat.com>    
date     : Fri, 16 Jan 2015 01:27:31 -0500    

Click here for diff

  
Commit 894459e59ffa5c7fee297b246c17e1f72564db1d revealed this option to  
be broken for NLS builds on Darwin, but "make -C contrib/unaccent check"  
and the buildfarm client rely on it.  Fix that configuration by  
redefining the option to imply LANG=C on Darwin.  In passing, use LANG=C  
instead of LANG=en on Windows; since only postmaster startup uses that  
value, testers are unlikely to notice the change.  Back-patch to 9.0,  
like the predecessor commit.  
  

Fix use-of-already-freed-memory problem in EvalPlanQual processing.

  
commit   : 34668c8eca065d745bf1166a92c9efc588e7aee2    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 15 Jan 2015 18:52:28 -0500    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 15 Jan 2015 18:52:28 -0500    

Click here for diff

  
Up to now, the "child" executor state trees generated for EvalPlanQual  
rechecks have simply shared the ResultRelInfo arrays used for the original  
execution tree.  However, this leads to dangling-pointer problems, because  
ExecInitModifyTable() is all too willing to scribble on some fields of the  
ResultRelInfo(s) even when it's being run in one of those child trees.  
This trashes those fields from the perspective of the parent tree, because  
even if the generated subtree is logically identical to what was in use in  
the parent, it's in a memory context that will go away when we're done  
with the child state tree.  
  
We do however want to share information in the direction from the parent  
down to the children; in particular, fields such as es_instrument *must*  
be shared or we'll lose the stats arising from execution of the children.  
So the simplest fix is to make a copy of the parent's ResultRelInfo array,  
but not copy any fields back at end of child execution.  
  
Per report from Manuel Kniep.  The added isolation test is based on his  
example.  In an unpatched memory-clobber-enabled build it will reliably  
fail with "ctid is NULL" errors in all branches back to 9.1, as a  
consequence of junkfilter->jf_junkAttNo being overwritten with $7f7f.  
This test cannot be run as-is before that for lack of WITH syntax; but  
I have no doubt that some variant of this problem can arise in older  
branches, so apply the code change all the way back.  
  

Improve performance of EXPLAIN with large range tables.

  
commit   : 939f0fb6765ef16874a4bd268efeb27cbc965e43    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 15 Jan 2015 13:18:19 -0500    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 15 Jan 2015 13:18:19 -0500    

Click here for diff

  
As of 9.3, ruleutils.c goes to some lengths to ensure that table and column  
aliases used in its output are unique.  Of course this takes more time than  
was required before, which in itself isn't fatal.  However, EXPLAIN was set  
up so that recalculation of the unique aliases was repeated for each  
subexpression printed in a plan.  That results in O(N^2) time and memory  
consumption for large plan trees, which did not happen in older branches.  
  
Fortunately, the expensive work is the same across a whole plan tree,  
so there is no need to repeat it; we can do most of the initialization  
just once per query and re-use it for each subexpression.  This buys  
back most (not all) of the performance loss since 9.2.  
  
We need an extra ExplainState field to hold the precalculated deparse  
context.  That's no problem in HEAD, but in the back branches, expanding  
sizeof(ExplainState) seems risky because third-party extensions might  
have local variables of that struct type.  So, in 9.4 and 9.3, introduce  
an auxiliary struct to keep sizeof(ExplainState) the same.  We should  
refactor the APIs to avoid such local variables in future, but that's  
material for a separate HEAD-only commit.  
  
Per gripe from Alexey Bashtanov.  Back-patch to 9.3 where the issue  
was introduced.  
  

pg_standby: Avoid writing one byte beyond the end of the buffer.

  
commit   : ebbef4f3959501f65041739759ea6c5b34437091    
  
author   : Robert Haas <rhaas@postgresql.org>    
date     : Thu, 15 Jan 2015 09:26:03 -0500    
  
committer: Robert Haas <rhaas@postgresql.org>    
date     : Thu, 15 Jan 2015 09:26:03 -0500    

Click here for diff

  
Previously, read() might have returned a length equal to the buffer  
length, and then the subsequent store to buf[len] would write a  
zero-byte one byte past the end.  This doesn't seem likely to be  
a security issue, but there's some chance it could result in  
pg_standby misbehaving.  
  
Spotted by Coverity; patch by Michael Paquier, reviewed by me.  
  

Make logging_collector=on work with non-windows EXEC_BACKEND again.

  
commit   : cc7a3a45a8d861caa0807af7280277d38f9bf85a    
  
author   : Andres Freund <andres@anarazel.de>    
date     : Tue, 13 Jan 2015 21:02:47 +0100    
  
committer: Andres Freund <andres@anarazel.de>    
date     : Tue, 13 Jan 2015 21:02:47 +0100    

Click here for diff

  
Commit b94ce6e80 reordered postmaster's startup sequence so that the  
tempfile directory is only cleaned up after all the necessary state  
for pg_ctl is collected.  Unfortunately the chosen location is after  
the syslogger has been started; which normally is fine, except for  
!WIN32 EXEC_BACKEND builds, which pass information to children via  
files in the temp directory.  
  
Move the call to RemovePgTempFiles() to just before the syslogger is  
started.  That's the first child we fork.  
  
Luckily EXEC_BACKEND is pretty much only used by end users on Windows,  
which has a separate method to pass information to children.  That  
means the real-world impact of this bug is very small.  
  
Discussion: 20150113182344.GF12272@alap3.anarazel.de  
  
Backpatch to 9.1, just as the previous commit was.  
  

Fix some functions that were declared static then defined not-static.

  
commit   : 71942a8b3ad516251cc1ba87362917a6614ee435    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 12 Jan 2015 16:08:49 -0500    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 12 Jan 2015 16:08:49 -0500    

Click here for diff

  
Per testing with a compiler that whines about this.  
  

Avoid unexpected slowdown in vacuum regression test.

  
commit   : c704977e72358bdfb179c7734a25e9515b285ba9    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 12 Jan 2015 15:13:34 -0500    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 12 Jan 2015 15:13:34 -0500    

Click here for diff

  
I noticed the "vacuum" regression test taking really significantly longer  
than it used to on a slow machine.  Investigation pointed the finger at  
commit e415b469b33ba328765e39fd62edcd28f30d9c3c, which added creation of  
an index using an extremely expensive index function.  That function was  
evidently meant to be applied only twice ... but the test re-used an  
existing test table, which up till a couple lines before that had had over  
two thousand rows.  Depending on timing of the concurrent regression tests,  
the intervening VACUUMs might have been unable to remove those  
recently-dead rows, and then the index build would need to create index  
entries for them too, leading to the wrap_do_analyze() function being  
executed 2000+ times not twice.  Avoid this by using a different table  
that is guaranteed to have only the intended two rows in it.  
  
Back-patch to 9.0, like the commit that created the problem.  
  

Use correct text domain for errcontext() appearing within ereport().

  
commit   : 19f326619696bfb1be9a3b80090e5f39e805b359    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 12 Jan 2015 12:40:16 -0500    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 12 Jan 2015 12:40:16 -0500    

Click here for diff

  
The mechanism added in commit dbdf9679d7d61b03a3bf73af9b095831b7010eb5  
for associating the correct translation domain with errcontext strings  
potentially fails in cases where errcontext() is used within an ereport()  
macro.  Such usage was not originally envisioned for errcontext(), but we  
do have a few places that do it.  In this situation, the intended comma  
expression becomes just a couple of arguments to errfinish(), which the  
compiler might choose to evaluate right-to-left.  
  
Fortunately, in such cases the textdomain for the errcontext string must  
be the same as for the surrounding ereport.  So we can fix this by letting  
errstart initialize context_domain along with domain; then it will have  
the correct value no matter which order the calls occur in.  (Note that  
error stack callback functions are not invoked until errfinish, so normal  
usage of errcontext won't affect what happens for errcontext calls within  
the ereport macro.)  
  
In passing, make sure that errcontext calls within the main backend set  
context_domain to something non-NULL.  This isn't a live bug because  
NULL would select the current textdomain() setting which should be the  
right thing anyway --- but it seems better to handle this completely  
consistently with the regular domain field.  
  
Per report from Dmitry Voronin.  Backpatch to 9.3; before that, there  
wasn't any attempt to ensure that errcontext strings were translated  
in an appropriate domain.  
  

Skip dead backends in MinimumActiveBackends

  
commit   : e71111972d7e914b84bf79ea346b438d7d815108    
  
author   : Stephen Frost <sfrost@snowman.net>    
date     : Mon, 12 Jan 2015 10:13:18 -0500    
  
committer: Stephen Frost <sfrost@snowman.net>    
date     : Mon, 12 Jan 2015 10:13:18 -0500    

Click here for diff

  
Back in ed0b409, PGPROC was split and moved to static variables in  
procarray.c, with procs in ProcArrayStruct replaced by an array of  
integers representing process numbers (pgprocnos), with -1 indicating a  
dead process which has yet to be removed.  Access to procArray is  
generally done under ProcArrayLock and therefore most code does not have  
to concern itself with -1 entries.  
  
However, MinimumActiveBackends intentionally does not take  
ProcArrayLock, which means it has to be extra careful when accessing  
procArray.  Prior to ed0b409, this was handled by checking for a NULL  
in the pointer array, but that check was no longer valid after the  
split.  Coverity pointed out that the check could never happen and so  
it was removed in 5592eba.  That didn't make anything worse, but it  
didn't fix the issue either.  
  
The correct fix is to check for pgprocno == -1 and skip over that entry  
if it is encountered.  
  
Back-patch to 9.2, since there can be attempts to access the arrays  
prior to their start otherwise.  Note that the changes prior to 9.4 will  
look a bit different due to the change in 5592eba.  
  
Note that MinimumActiveBackends only returns a bool for heuristic  
purposes, and any pre-array accesses are strictly read-only, so there  
is no security implication; the lack of field complaints indicates  
it's very unlikely to run into issues due to this.  
  
Pointed out by Noah.  
  

xlogreader.c: Fix report_invalid_record translatability flag

  
commit   : a1aa00657e7af9475f47dc2723fd1a375a55b09e    
  
author   : Alvaro Herrera <alvherre@alvh.no-ip.org>    
date     : Fri, 9 Jan 2015 12:34:24 -0300    
  
committer: Alvaro Herrera <alvherre@alvh.no-ip.org>    
date     : Fri, 9 Jan 2015 12:34:24 -0300    

Click here for diff

  
For some reason I overlooked in GETTEXT_TRIGGERS that the right argument  
should be read by gettext in 7fcbf6a405ffc12a4546a25b98592ee6733783fc.  
This will drop the translation percentages for the backend all the way  
back to 9.3 ...  
  
Problem reported by Heikki.  
  

On Darwin, detect and report a multithreaded postmaster.

  
commit   : 1a366d51effd0e9f3301d68c2d3dce09c8c43d1f    
  
author   : Noah Misch <noah@leadboat.com>    
date     : Wed, 7 Jan 2015 22:35:44 -0500    
  
committer: Noah Misch <noah@leadboat.com>    
date     : Wed, 7 Jan 2015 22:35:44 -0500    

Click here for diff

  
Darwin --enable-nls builds use a substitute setlocale() that may start a  
thread.  Buildfarm member orangutan experienced BackendList corruption  
on account of different postmaster threads executing signal handlers  
simultaneously.  Furthermore, a multithreaded postmaster risks undefined  
behavior from sigprocmask() and fork().  Emit LOG messages about the  
problem and its workaround.  Back-patch to 9.0 (all supported versions).  
  

Always set the six locale category environment variables in main().

  
commit   : 230865e308f65d24d32aba1e6c1e4502b9047347    
  
author   : Noah Misch <noah@leadboat.com>    
date     : Wed, 7 Jan 2015 22:34:57 -0500    
  
committer: Noah Misch <noah@leadboat.com>    
date     : Wed, 7 Jan 2015 22:34:57 -0500    

Click here for diff

  
Typical server invocations already achieved that.  Invalid locale  
settings in the initial postmaster environment interfered, as could  
malloc() failure.  Setting "LC_MESSAGES=pt_BR.utf8 LC_ALL=invalid" in  
the postmaster environment will now choose C-locale messages, not  
Brazilian Portuguese messages.  Most localized programs, including all  
PostgreSQL frontend executables, do likewise.  Users are unlikely to  
observe changes involving locale categories other than LC_MESSAGES.  
CheckMyDatabase() ensures that we successfully set LC_COLLATE and  
LC_CTYPE; main() sets the remaining three categories to locale "C",  
which almost cannot fail.  Back-patch to 9.0 (all supported versions).  
  

Reject ANALYZE commands during VACUUM FULL or another ANALYZE.

  
commit   : 4dd51b366eb4f66ea2f1f30b5fda5960c633893d    
  
author   : Noah Misch <noah@leadboat.com>    
date     : Wed, 7 Jan 2015 22:33:58 -0500    
  
committer: Noah Misch <noah@leadboat.com>    
date     : Wed, 7 Jan 2015 22:33:58 -0500    

Click here for diff

  
vacuum()'s static variable handling makes it non-reentrant; an ensuing  
null pointer dereference crashed the backend.  Back-patch to 9.0 (all  
supported versions).  
  

Improve relcache invalidation handling of currently invisible relations.

  
commit   : 6fdcf886d5270dd22f7c89523cf8df491291c77c    
  
author   : Andres Freund <andres@anarazel.de>    
date     : Wed, 7 Jan 2015 00:10:18 +0100    
  
committer: Andres Freund <andres@anarazel.de>    
date     : Wed, 7 Jan 2015 00:10:18 +0100    

Click here for diff

  
The corner case where a relcache invalidation tried to rebuild the  
entry for a referenced relation but couldn't find it in the catalog  
wasn't correct.  
  
The code tried to RelationCacheDelete/RelationDestroyRelation the  
entry. That didn't work when assertions are enabled because the latter  
contains an assertion ensuring the refcount is zero. It's also more  
generally a bad idea, because by virtue of being referenced somebody  
might actually look at the entry, which is possible if the error is  
trapped and handled via a subtransaction abort.  
  
Instead just error out, without deleting the entry. As the entry is  
marked invalid, the worst that can happen is that the invalid (and at  
some point unused) entry lingers in the relcache.  
  
Discussion: 22459.1418656530@sss.pgh.pa.us  
  
There should be no way to hit this case before 9.4, where logical  
decoding introduced a bug that can reach it.  But since the code for  
handling the corner case is there, it should do something halfway sane,  
so backpatch all the way back.  The logical decoding bug will be  
handled in a separate commit.  
  

Fix thinko in plpython error message

  
commit   : 319ecff1ebbcdfbff88e9ebb726fb17fbd9f0d9a    
  
author   : Alvaro Herrera <alvherre@alvh.no-ip.org>    
date     : Tue, 6 Jan 2015 15:16:29 -0300    
  
committer: Alvaro Herrera <alvherre@alvh.no-ip.org>    
date     : Tue, 6 Jan 2015 15:16:29 -0300    

Click here for diff

  
  

  
commit   : 2153002be3facf5329db2d4f3349bcae3554eac3    
  
author   : Bruce Momjian <bruce@momjian.us>    
date     : Tue, 6 Jan 2015 11:43:46 -0500    
  
committer: Bruce Momjian <bruce@momjian.us>    
date     : Tue, 6 Jan 2015 11:43:46 -0500    

Click here for diff

  
Backpatch certain files through 9.0  
  

Fix broken pg_dump code for dumping comments on event triggers.

  
commit   : bb1e2426bf90ddd700d59678dfc9703162a34783    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 5 Jan 2015 19:27:09 -0500    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 5 Jan 2015 19:27:09 -0500    

Click here for diff

  
This never worked, I think.  Per report from Marc Munro.  
  
In passing, fix funny spacing in the COMMENT ON command as a result of  
excess space in the "label" string.  
  

Fix thinko in lock mode enum

  
commit   : 54a8abc2b7d150ae7e2738f4b0e687fd9cd9011a    
  
author   : Alvaro Herrera <alvherre@alvh.no-ip.org>    
date     : Sun, 4 Jan 2015 15:48:29 -0300    
  
committer: Alvaro Herrera <alvherre@alvh.no-ip.org>    
date     : Sun, 4 Jan 2015 15:48:29 -0300    

Click here for diff

  
Commit 0e5680f4737a9c6aa94aa9e77543e5de60411322 contained a thinko  
mixing LOCKMODE with LockTupleMode.  This caused misbehavior in the case  
where a tuple is marked with a multixact with at most a FOR SHARE lock,  
and another transaction tries to acquire a FOR NO KEY EXCLUSIVE lock;  
this case should block but doesn't.  
  
Include a new isolation tester spec file to explicitly try all the  
tuple lock combinations; without the fix it shows the problem:  
  
    starting permutation: s1_begin s1_lcksvpt s1_tuplock2 s2_tuplock3 s1_commit  
    step s1_begin: BEGIN;  
    step s1_lcksvpt: SELECT * FROM multixact_conflict FOR KEY SHARE; SAVEPOINT foo;  
    a  
  
    1  
    step s1_tuplock2: SELECT * FROM multixact_conflict FOR SHARE;  
    a  
  
    1  
    step s2_tuplock3: SELECT * FROM multixact_conflict FOR NO KEY UPDATE;  
    a  
  
    1  
    step s1_commit: COMMIT;  
  
With the fixed code, step s2_tuplock3 blocks until session 1 commits,  
which is the correct behavior.  
  
All other cases behave correctly.  
  
Backpatch to 9.3, like the commit that introduced the problem.  
  

Correctly handle test durations of more than 2147s in pg_test_timing.

  
commit   : a68b8aec71c8ab0aefe9888041172d1482c7d276    
  
author   : Andres Freund <andres@anarazel.de>    
date     : Sun, 4 Jan 2015 15:44:49 +0100    
  
committer: Andres Freund <andres@anarazel.de>    
date     : Sun, 4 Jan 2015 15:44:49 +0100    

Click here for diff

  
Previously the computation of the total test duration, measured in  
microseconds, could overflow because it accidentally used signed 32-bit  
arithmetic.  As the only consequence is that pg_test_timing invocations  
with such overly large durations never finished, the practical  
consequences of this bug are minor.  
  
Pointed out by Coverity.  
  
Backpatch to 9.2 where pg_test_timing was added.  
  

Fix off-by-one in pg_xlogdump’s fuzzy_open_file().

  
commit   : f0e2770956a8a6975dd70dd0bc3fdec073b50493    
  
author   : Andres Freund <andres@anarazel.de>    
date     : Sun, 4 Jan 2015 15:35:47 +0100    
  
committer: Andres Freund <andres@anarazel.de>    
date     : Sun, 4 Jan 2015 15:35:47 +0100    

Click here for diff

  
In the unlikely case of stdin (fd 0) being closed, the off-by-one  
would lead to pg_xlogdump failing to open files.  
  
Spotted by Coverity.  
  
Backpatch to 9.3 where pg_xlogdump was introduced.  
  

Add missing va_end() call to an early exit in dmetaphone.c’s StringAt().

  
commit   : d33f36f16e6a80b81afe55401917f8d23e924f83    
  
author   : Andres Freund <andres@anarazel.de>    
date     : Sun, 4 Jan 2015 15:35:47 +0100    
  
committer: Andres Freund <andres@anarazel.de>    
date     : Sun, 4 Jan 2015 15:35:47 +0100    

Click here for diff

  
Pointed out by Coverity.  
  
Backpatch to all supported branches, the code has been that way for a  
long while.  
  

Fix inconsequential fd leak in the new mark_file_as_archived() function.

  
commit   : ec14f16014ecc04f41d9c285cb97922550097653    
  
author   : Andres Freund <andres@anarazel.de>    
date     : Sun, 4 Jan 2015 14:36:22 +0100    
  
committer: Andres Freund <andres@anarazel.de>    
date     : Sun, 4 Jan 2015 14:36:22 +0100    

Click here for diff

  
As every error in mark_file_as_archived() will lead to a failure of  
pg_basebackup the FD leak couldn't ever lead to a real problem.  It  
seems better to fix the leak anyway though, rather than silence  
Coverity, as the usage of the function might get extended or copied at  
some point in the future.  
  
Pointed out by Coverity.  
  
Backpatch to 9.2, like the relevant part of the previous patch.  
  

Prevent WAL files created by pg_basebackup -x/X from being archived again.

  
commit   : f6cea45029dfc0ad09ef24f73cac936c676f83ed    
  
author   : Andres Freund <andres@anarazel.de>    
date     : Sat, 3 Jan 2015 20:51:52 +0100    
  
committer: Andres Freund <andres@anarazel.de>    
date     : Sat, 3 Jan 2015 20:51:52 +0100    

Click here for diff

  
WAL (and timeline history) files created by pg_basebackup did not  
maintain the new base backup's archive status.  That's currently not a  
problem if the new node is used as a standby - but if that node is  
promoted, all still-existing files can get archived again.  With a high  
wal_keep_segments setting that can happen a significant time later -  
which is quite confusing.  
  
Change both the backend (for the -x/-X fetch case) and pg_basebackup  
(for -X stream) itself to always mark WAL/timeline files included in  
the base backup as .done. That's in line with walreceiver.c doing so.  
  
The verbosity of the pg_basebackup changes shows pretty clearly that it  
needs some refactoring, but that would result in non-backpatchable  
changes.  
  
Backpatch to 9.1 where pg_basebackup was introduced.  
  
Discussion: 20141205002854.GE21964@awork2.anarazel.de  
  

Add pg_string_endswith as the start of a string helper library in src/common.

  
commit   : bb2e2ce6e2ea4835ed99593508d0909af0e402d6    
  
author   : Andres Freund <andres@anarazel.de>    
date     : Sat, 3 Jan 2015 20:51:52 +0100    
  
committer: Andres Freund <andres@anarazel.de>    
date     : Sat, 3 Jan 2015 20:51:52 +0100    

Click here for diff

  
Backpatch to 9.3, where src/common was introduced, because a bugfix  
that needs to be backpatched requires the function.  Earlier branches  
will have to duplicate the code.  
  

Make path to pg_service.conf absolute in documentation

  
commit   : 15331540cde241e63a8d29a6a594d2b637d9102c    
  
author   : Magnus Hagander <magnus@hagander.net>    
date     : Sat, 3 Jan 2015 13:18:54 +0100    
  
committer: Magnus Hagander <magnus@hagander.net>    
date     : Sat, 3 Jan 2015 13:18:54 +0100    

Click here for diff

  
The system file is always in the absolute path /etc/, not relative.  
  
David Fetter  
  

Docs: improve descriptions of ISO week-numbering date features.

  
commit   : 453151a0615f4782618c412fe89a7898b2c6277a    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Wed, 31 Dec 2014 16:42:48 -0500    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Wed, 31 Dec 2014 16:42:48 -0500    

Click here for diff

  
Use the phraseology "ISO 8601 week-numbering year" in place of just  
"ISO year", and make related adjustments to other terminology.  
  
The point of this change is that it seems some people see "ISO year"  
and think "standard year", whereupon they're surprised when constructs  
like to_char(..., "IYYY-MM-DD") produce nonsensical results.  Perhaps  
hanging a few more adjectives on it will discourage them from jumping  
to false conclusions.  I put in an explicit warning against that  
specific usage, too, though the main point is to discourage people  
who haven't read this far down the page.  
  
In passing fix some nearby markup and terminology inconsistencies.  
  

Improve consistency of parsing of psql’s magic variables.

  
commit   : 7582cce56616c991e62e1122873ce8c694e6f8a0    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Wed, 31 Dec 2014 12:17:00 -0500    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Wed, 31 Dec 2014 12:17:00 -0500    

Click here for diff

  
For simple boolean variables such as ON_ERROR_STOP, psql has for a long  
time recognized variant spellings of "on" and "off" (such as "1"/"0"),  
and it also made a point of warning you if you'd misspelled the setting.  
But these conveniences did not exist for other keyword-valued variables.  
In particular, though ECHO_HIDDEN and ON_ERROR_ROLLBACK include "on" and  
"off" as possible values, none of the alternative spellings for those were  
recognized; and to make matters worse the code would just silently assume  
"on" was meant for any unrecognized spelling.  Several people have reported  
getting bitten by this, so let's fix it.  In detail, this patch:  
  
* Allows all spellings recognized by ParseVariableBool() for ECHO_HIDDEN  
and ON_ERROR_ROLLBACK.  
  
* Reports a warning for unrecognized values for COMP_KEYWORD_CASE, ECHO,  
ECHO_HIDDEN, HISTCONTROL, ON_ERROR_ROLLBACK, and VERBOSITY.  
  
* Recognizes all values for all these variables case-insensitively;  
previously there was a mishmash of case-sensitive and case-insensitive  
behaviors.  
  
Back-patch to all supported branches.  There is a small risk of breaking  
existing scripts that were accidentally failing to malfunction; but the  
consensus is that the chance of detecting real problems and preventing  
future mistakes outweighs this.  
  

Fix resource leak pointed out by Coverity.

  
commit   : ed0e03283579eb6c79be7c98f050cf6c447a9986    
  
author   : Tatsuo Ishii <ishii@postgresql.org>    
date     : Tue, 30 Dec 2014 20:19:50 +0900    
  
committer: Tatsuo Ishii <ishii@postgresql.org>    
date     : Tue, 30 Dec 2014 20:19:50 +0900    

Click here for diff

  
  

Backpatch variable renaming in formatting.c

  
commit   : b1cbb26f116814fa3642178bb2c62a3654870f75    
  
author   : Bruce Momjian <bruce@momjian.us>    
date     : Mon, 29 Dec 2014 21:25:23 -0500    
  
committer: Bruce Momjian <bruce@momjian.us>    
date     : Mon, 29 Dec 2014 21:25:23 -0500    

Click here for diff

  
Backpatch a9c22d1480aa8e6d97a000292d05ef2b31bbde4e to make future  
backpatching easier.  
  
Backpatch through 9.0  
  

Assorted minor fixes for psql metacommand docs.

  
commit   : b02ee82c9cd1a01620c12525dc076fb536545d86    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 29 Dec 2014 14:21:00 -0500    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 29 Dec 2014 14:21:00 -0500    

Click here for diff

  
Document the long forms of \H \i \ir \o \p \r \w ... apparently, we have  
a long and dishonorable history of leaving out the unabbreviated names of  
psql backslash commands.  
  
Avoid saying "Unix shell"; we can just say "shell" with equal clarity,  
and not leave Windows users wondering whether the feature works for them.  
  
Improve consistency of documentation of \g \o \w metacommands.  There's  
no reason to use slightly different wording or markup for each one.  
  

Grab heavyweight tuple lock only before sleeping

  
commit   : 048912386da00d7325e6563875864cf711cc97a5    
  
author   : Alvaro Herrera <alvherre@alvh.no-ip.org>    
date     : Fri, 26 Dec 2014 13:52:27 -0300    
  
committer: Alvaro Herrera <alvherre@alvh.no-ip.org>    
date     : Fri, 26 Dec 2014 13:52:27 -0300    

Click here for diff

  
We were trying to acquire the lock even when we were not subsequently  
going to sleep waiting for some other transaction, which opens us up  
unnecessarily to deadlocks.  In particular, this is troublesome if an  
update tries to  
lock an updated version of a tuple and finds itself doing EvalPlanQual  
update chain walking; more than two sessions doing this concurrently  
will find themselves sleeping on each other because the HW tuple lock  
acquisition in heap_lock_tuple called from EvalPlanQualFetch races with  
the same tuple lock being acquired in heap_update -- one of these  
sessions sleeps on the other one to finish while holding the tuple lock,  
and the other one sleeps on the tuple lock.  
  
Per trouble report from Andrew Sackville-West in  
http://www.postgresql.org/message-id/20140731233051.GN17765@andrew-ThinkPad-X230  
  
His scenario can be simplified down to a relatively simple  
isolationtester spec file which I don't include in this commit; the  
reason is that the current isolationtester is not able to deal with more  
than one blocked session concurrently and it blocks instead of raising  
the expected deadlock.  In the future, if we improve isolationtester, it  
would be good to include the spec file in the isolation schedule.  I  
posted it in  
http://www.postgresql.org/message-id/20141212205254.GC1768@alvh.no-ip.org  
  
Hat tip to Mark Kirkwood, who helped diagnose the trouble.  
  

Have config_sspi_auth() permit IPv6 localhost connections.

  
commit   : 5bd5968001c76a9b2d2885ad3b92c27bea024844    
  
author   : Noah Misch <noah@leadboat.com>    
date     : Thu, 25 Dec 2014 13:52:03 -0500    
  
committer: Noah Misch <noah@leadboat.com>    
date     : Thu, 25 Dec 2014 13:52:03 -0500    

Click here for diff

  
Windows versions later than Windows Server 2003 map "localhost" to ::1.  
Account for that in the generated pg_hba.conf, fixing another oversight  
in commit f6dc6dd5ba54d52c0733aaafc50da2fbaeabb8b0.  Back-patch to 9.0,  
like that commit.  
  
David Rowley and Noah Misch  
  

Add CST (China Standard Time) to our lists of timezone abbreviations.

  
commit   : 0190f0a76cf2628a3bc281d9998c57942664fcdb    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Wed, 24 Dec 2014 16:35:23 -0500    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Wed, 24 Dec 2014 16:35:23 -0500    

Click here for diff

  
For some reason this seems to have been missed when the lists in  
src/timezone/tznames/ were first constructed.  We can't put it in Default  
because of the conflict with US CST, but we should certainly list it among  
the alternative entries in Asia.txt.  (I checked for other oversights, but  
all the other abbreviations that are in current use according to the IANA  
files seem to be accounted for.)  Noted while responding to bug #12326.  
  

Further tidy up on json aggregate documentation

  
commit   : 11863bf4a022b86238341d88c380a561d6b981c5    
  
author   : Andrew Dunstan <andrew@dunslane.net>    
date     : Mon, 22 Dec 2014 18:31:38 -0500    
  
committer: Andrew Dunstan <andrew@dunslane.net>    
date     : Mon, 22 Dec 2014 18:31:38 -0500    

Click here for diff

  
  

Fix documentation of argument type of json_agg and jsonb_agg

  
commit   : cc82141d9a4d2d83aea6691fad2d8fbbd1a277c2    
  
author   : Andrew Dunstan <andrew@dunslane.net>    
date     : Mon, 22 Dec 2014 14:20:19 -0500    
  
committer: Andrew Dunstan <andrew@dunslane.net>    
date     : Mon, 22 Dec 2014 14:20:19 -0500    

Click here for diff

  
json_agg was originally designed to aggregate records. However, it soon  
became clear that it is useful for aggregating all kinds of values and  
that's what we have on 9.3 and 9.4, and in head for it and jsonb_agg.  
The documentation suggested otherwise, so this fixes it.  
  

Docs: clarify treatment of variadic functions with zero variadic arguments.

  
commit   : acbcb3262984d9a78021cfc8518a6b30c880312e    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sun, 21 Dec 2014 15:30:39 -0500    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sun, 21 Dec 2014 15:30:39 -0500    

Click here for diff

  
Explain that you have to use "VARIADIC ARRAY[]" to pass an empty array  
to a variadic parameter position.  This was already implicit in the text  
but it seems better to spell it out.  
  
Per a suggestion from David Johnston, though I didn't use his proposed  
wording.  Back-patch to all supported branches.  
  

Fix timestamp in end-of-recovery WAL records.

  
commit   : f8c51fe6bbb095133e4218170ac70f7b86f928f5    
  
author   : Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Fri, 19 Dec 2014 17:00:21 +0200    
  
committer: Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Fri, 19 Dec 2014 17:00:21 +0200    

Click here for diff

  
We used time(null) to set a TimestampTz field, which gave bogus results.  
Noticed while looking at pg_xlogdump output.  
  
Backpatch to 9.3 and above, where the fast promotion was introduced.  
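The bogus values came from storing time(NULL)'s seconds-since-1970 directly into a field counted in microseconds since 2000-01-01. A Python sketch of the required conversion (epoch constants as in PostgreSQL's headers; the function name is illustrative):

```python
# Seconds between the Unix epoch (1970-01-01) and PostgreSQL's timestamp
# epoch (2000-01-01), and microseconds per second, per PostgreSQL's headers.
UNIX_TO_PG_EPOCH_SECS = 946_684_800
USECS_PER_SEC = 1_000_000

def unix_time_to_timestamptz(unix_secs):
    """Convert time(NULL)-style seconds into TimestampTz microseconds.
    Storing unix_secs directly -- the bug -- is off by thirty years and
    a factor of a million."""
    return (unix_secs - UNIX_TO_PG_EPOCH_SECS) * USECS_PER_SEC
```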
  

Prevent potentially hazardous compiler/cpu reordering during lwlock release.

  
commit   : 0e68570e8b4419b0484a0f96ee30ab34561c3a91    
  
author   : Andres Freund <andres@anarazel.de>    
date     : Fri, 19 Dec 2014 14:29:52 +0100    
  
committer: Andres Freund <andres@anarazel.de>    
date     : Fri, 19 Dec 2014 14:29:52 +0100    

Click here for diff

  
In LWLockRelease() (and in 9.4+ LWLockUpdateVar()) we release enqueued  
waiters using PGSemaphoreUnlock(). As there are other sources of such  
unlocks backends only wake up if MyProc->lwWaiting is set to false;  
which is only done in the aforementioned functions.  
  
Before this commit there were dangers because the store to lwWaiting  
could become visible before the store to lwWaitLink. This could happen  
both due to compiler reordering (on most compilers) and, on some  
platforms, due to the CPU reordering stores.  
  
The possible consequence of this is that a backend stops waiting  
before lwWaitLink is set to NULL. If that backend then tries to  
acquire another lock and has to wait there the list could become  
corrupted once the lwWaitLink store is finally performed.  
  
Add a write memory barrier to prevent that issue.  
  
Unfortunately, barrier support was only added in 9.2. Given  
that the issue has not knowingly been observed in practice, it seems  
sufficient to prohibit compiler reordering using volatile for 9.0 and  
9.1. Actual problems due to compiler reordering are more likely  
anyway.  
  
Discussion: 20140210134625.GA15246@awork2.anarazel.de  
  

Improve documentation about CASE and constant subexpressions.

  
commit   : ef8472bc7adb2740010ad4fefdae63151bc66f39    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 18 Dec 2014 16:38:58 -0500    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 18 Dec 2014 16:38:58 -0500    

Click here for diff

  
The possibility that constant subexpressions of a CASE might be evaluated  
at planning time was touched on in 9.17.1 (CASE expressions), but it really  
ought to be explained in 4.2.14 (Expression Evaluation Rules) which is the  
primary discussion of such topics.  Add text and an example there, and  
revise the <note> under CASE to link there.  
  
Back-patch to all supported branches, since it's acted like this for a  
long time (though 9.2+ is probably worse because of its more aggressive  
use of constant-folding via replanning of nominally-prepared statements).  
Pre-9.4, also back-patch text added in commit 0ce627d4 about CASE versus  
aggregate functions.  
  
Tom Lane and David Johnston, per discussion of bug #12273.  
  

Recognize Makefile line continuations in fetchRegressOpts().

  
commit   : a47f38e526f70b397c1941f331e729e5fab95d27    
  
author   : Noah Misch <noah@leadboat.com>    
date     : Thu, 18 Dec 2014 03:55:17 -0500    
  
committer: Noah Misch <noah@leadboat.com>    
date     : Thu, 18 Dec 2014 03:55:17 -0500    

Click here for diff

  
Back-patch to 9.0 (all supported versions).  This is mere  
future-proofing in the context of the master branch, but commit  
f6dc6dd5ba54d52c0733aaafc50da2fbaeabb8b0 requires it of older branches.  
  

Fix (re-)starting from a basebackup taken off a standby after a failure.

  
commit   : 2ab19ce5e3799cd2bb031212af76f463eee0bc1c    
  
author   : Andres Freund <andres@anarazel.de>    
date     : Thu, 18 Dec 2014 08:35:27 +0100    
  
committer: Andres Freund <andres@anarazel.de>    
date     : Thu, 18 Dec 2014 08:35:27 +0100    

Click here for diff

  
When starting up from a basebackup taken off a standby, extra logic has  
to be applied to compute the point where the data directory is  
consistent. Normal base backups use a WAL record for that purpose, but  
that isn't possible on a standby.  
  
That logic had an error check ensuring that the cluster's control file  
indicates being in recovery. Unfortunately that check was too strict,  
disregarding the fact that the control file could also indicate that  
the cluster was shut down while in recovery.  
  
That's possible when a cluster starting from a basebackup is shut  
down before the backup label has been removed. When everything goes  
well that's a short window, but when either restore_command or  
primary_conninfo isn't configured correctly the window can get much  
wider. That's because in between reading and unlinking the label we  
restore the last checkpoint from WAL, which can require additional WAL.  
  
To fix, simply also allow starting when the control file indicates  
"shutdown in recovery". There are nicer fixes imaginable, but they'd be  
more invasive.  
  
Backpatch to 9.2 where support for taking basebackups from standbys  
was added.  
  

Lock down regression testing temporary clusters on Windows.

  
commit   : 442dc2c358236351cfc7914f632bba3302430a07    
  
author   : Noah Misch <noah@leadboat.com>    
date     : Wed, 17 Dec 2014 22:48:40 -0500    
  
committer: Noah Misch <noah@leadboat.com>    
date     : Wed, 17 Dec 2014 22:48:40 -0500    

Click here for diff

  
Use SSPI authentication to allow connections exclusively from the OS  
user that launched the test suite.  This closes on Windows the  
vulnerability that commit be76a6d39e2832d4b88c0e1cc381aa44a7f86881  
closed on other platforms.  Users of "make installcheck" or custom test  
harnesses can run "pg_regress --config-auth=DATADIR" to activate the  
same authentication configuration that "make check" would use.  
Back-patch to 9.0 (all supported versions).  
  
Security: CVE-2014-0067  
  

Fix another poorly worded error message.

  
commit   : 83fffec6000d7123a63e5f5a7d9e263e11033ab5    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Wed, 17 Dec 2014 13:22:07 -0500    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Wed, 17 Dec 2014 13:22:07 -0500    

Click here for diff

  
Spotted by Álvaro Herrera.  
  

Update .gitignore for pg_upgrade

  
commit   : 741cf193b7885986e297a63d1bdef4b79d6c7d0f    
  
author   : Magnus Hagander <magnus@hagander.net>    
date     : Wed, 17 Dec 2014 11:55:22 +0100    
  
committer: Magnus Hagander <magnus@hagander.net>    
date     : Wed, 17 Dec 2014 11:55:22 +0100    

Click here for diff

  
Add Windows versions of generated scripts, and make sure we only  
ignore the scripts in the root directory.  
  
Michael Paquier  
  

Fix off-by-one loop count in MapArrayTypeName, and get rid of static array.

  
commit   : 53960e7eb34618c96f4d17216e6a3f92ac98c749    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 16 Dec 2014 15:35:40 -0500    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 16 Dec 2014 15:35:40 -0500    

Click here for diff

  
MapArrayTypeName would copy up to NAMEDATALEN-1 bytes of the base type  
name, which of course is wrong: after prepending '_' there is only room for  
NAMEDATALEN-2 bytes.  Aside from being the wrong result, this case would  
lead to overrunning the statically allocated work buffer.  This would be a  
security bug if the function were ever used outside bootstrap mode, but it  
isn't, at least not in any currently supported branches.  
  
Aside from fixing the off-by-one loop logic, this patch gets rid of the  
static work buffer by having MapArrayTypeName pstrdup its result; the sole  
caller was already doing that, so this just requires moving the pstrdup  
call.  This saves a few bytes but mainly it makes the API a lot cleaner.  
  
Back-patch on the off chance that there is some third-party code using  
MapArrayTypeName with less-secure input.  Pushing pstrdup into the function  
should not cause any serious problems for such hypothetical code; at worst  
there might be a short term memory leak.  
  
Per Coverity scanning.  
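The corrected bound can be sketched like this (a Python stand-in for the C logic; the truncation rule mirrors the description above):

```python
NAMEDATALEN = 64   # PostgreSQL's identifier limit: 63 bytes plus a NUL

def map_array_type_name(base_name):
    """After prepending '_' there is room for only NAMEDATALEN-2 bytes of
    the base type name; the off-by-one bug copied NAMEDATALEN-1 of them,
    overrunning the (formerly static) buffer."""
    return "_" + base_name[: NAMEDATALEN - 2]
```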
  

Fix file descriptor leak after failure of a \setshell command in pgbench.

  
commit   : 3b750ec155be3b8d658eadd8effe4d3c31955852    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 16 Dec 2014 13:31:42 -0500    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 16 Dec 2014 13:31:42 -0500    

Click here for diff

  
If the called command fails to return data, runShellCommand forgot to  
pclose() the pipe before returning.  This is fairly harmless in the current  
code, because pgbench would then abandon further processing of that client  
thread; so no more than nclients descriptors could be leaked this way.  But  
it's not hard to imagine future improvements whereby that wouldn't be true.  
In any case, it's sloppy coding, so patch all branches.  Found by Coverity.  
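The fix pattern, sketched in Python with subprocess standing in for C's popen/pclose (the function name is illustrative):

```python
import subprocess

def run_shell_command(cmd):
    """Read one line of output from cmd, as pgbench's runShellCommand does.
    Returns the line without its newline, or None if the command produced
    no data.  The bug: the no-data path returned without closing the pipe."""
    proc = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE, text=True)
    line = proc.stdout.readline()
    proc.stdout.close()   # the fix: close the pipe on the failure path too
    proc.wait()
    return line.strip() if line else None
```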
  

Misc comment typo fixes.

  
commit   : ea78d1381e01d96bb7d6426f25b8033fd11b9c14    
  
author   : Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Tue, 16 Dec 2014 16:34:56 +0200    
  
committer: Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Tue, 16 Dec 2014 16:34:56 +0200    

Click here for diff

  
Backpatch the applicable parts, just to make backpatching future patches  
easier.  
  

Revert misguided change to postgres_fdw FOR UPDATE/SHARE code.

  
commit   : cfc878a45c4834ce8a09523d0d72b42ece8bab7a    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 12 Dec 2014 12:41:55 -0500    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 12 Dec 2014 12:41:55 -0500    

Click here for diff

  
In commit 462bd95705a0c23ba0b0ba60a78d32566a0384c1, I changed postgres_fdw  
to rely on get_plan_rowmark() instead of get_parse_rowmark().  I still  
think that's a good idea in the long run, but as Etsuro Fujita pointed out,  
it doesn't work today because planner.c forces PlanRowMarks to have  
markType = ROW_MARK_COPY for all foreign tables.  There's no urgent reason  
to change this in the back branches, so let's just revert that part of  
yesterday's commit rather than trying to design a better solution under  
time pressure.  
  
Also, add a regression test case showing what postgres_fdw does with FOR  
UPDATE/SHARE.  I'd blithely assumed there was one already, else I'd have  
realized yesterday that this code didn't work.  
  

Fix planning of SELECT FOR UPDATE on child table with partial index.

  
commit   : 2ae8a01ca1af074c166e3f882869dfe307b91b2c    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 11 Dec 2014 21:02:31 -0500    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 11 Dec 2014 21:02:31 -0500    

Click here for diff

  
Ordinarily we can omit checking of a WHERE condition that matches a partial  
index's condition, when we are using an indexscan on that partial index.  
However, in SELECT FOR UPDATE we must include the "redundant" filter  
condition in the plan so that it gets checked properly in an EvalPlanQual  
recheck.  The planner got this mostly right, but improperly omitted the  
filter condition if the index in question was on an inheritance child  
table.  In READ COMMITTED mode, this could result in incorrectly returning  
just-updated rows that no longer satisfy the filter condition.  
  
The cause of the error is using get_parse_rowmark() when get_plan_rowmark()  
is what should be used during planning.  In 9.3 and up, also fix the same  
mistake in contrib/postgres_fdw.  It's currently harmless there (for lack  
of inheritance support) but wrong is wrong, and the incorrect code might  
get copied to someplace where it's more significant.  
  
Report and fix by Kyotaro Horiguchi.  Back-patch to all supported branches.  
  

Fix corner case where SELECT FOR UPDATE could return a row twice.

  
commit   : f14196c3597364f3562e65bf869f7942fc9188b4    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 11 Dec 2014 19:37:07 -0500    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 11 Dec 2014 19:37:07 -0500    

Click here for diff

  
In READ COMMITTED mode, if a SELECT FOR UPDATE discovers it has to redo  
WHERE-clause checking on rows that have been updated since the SELECT's  
snapshot, it invokes EvalPlanQual processing to do that.  If this first  
occurs within a non-first child table of an inheritance tree, the previous  
coding could accidentally re-return a matching row from an earlier,  
already-scanned child table.  (And, to add insult to injury, I think this  
could make it miss returning a row that should have been returned, if the  
updated row that this happens on should still have passed the WHERE qual.)  
Per report from Kyotaro Horiguchi; the added isolation test is based on his  
test case.  
  
This has been broken for quite awhile, so back-patch to all supported  
branches.  
  

Fix assorted confusion between Oid and int32.

  
commit   : 2c96b0ba8dc90b5a5f9f787b1acbc0ff27b1778f    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 11 Dec 2014 15:41:23 -0500    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 11 Dec 2014 15:41:23 -0500    

Click here for diff

  
In passing, also make some debugging elog's in pgstat.c a bit more  
consistently worded.  
  
Back-patch as far as applicable (9.3 or 9.4; none of these mistakes are  
really old).  
  
Mark Dilger identified and patched the type violations; the message  
rewordings are mine.  
  

Give a proper error message if initdb password file is empty.

  
commit   : 2df66f01abe8b18e7e194989132dd263b5881b04    
  
author   : Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Fri, 5 Dec 2014 14:27:56 +0200    
  
committer: Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Fri, 5 Dec 2014 14:27:56 +0200    

Click here for diff

  
Used to say just 'could not read password from file "...": Success', which  
isn't very informative.  
  
Mats Erik Andersson. Backpatch to all supported versions.  
  

Fix JSON aggregates to work properly when final function is re-executed.

  
commit   : 8571ecb24f57a3aefc412eaf775423f9e456e47f    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 2 Dec 2014 15:02:43 -0500    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 2 Dec 2014 15:02:43 -0500    

Click here for diff

  
Davide S. reported that json_agg() sometimes produced multiple trailing  
right brackets.  This turns out to be because json_agg_finalfn() attaches  
the final right bracket, and was doing so by modifying the aggregate state  
in-place.  That's verboten, though unfortunately it seems there's no way  
for nodeAgg.c to check for such mistakes.  
  
Fix that back to 9.3 where the broken code was introduced.  In 9.4 and  
HEAD, likewise fix json_object_agg(), which had copied the erroneous logic.  
Make some cosmetic cleanups as well.  
  

Guard against bad "dscale" values in numeric_recv().

  
commit   : 10b81fbdc56aecb07c7cfaa1a70da8c1ad70b37d    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 1 Dec 2014 15:25:08 -0500    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 1 Dec 2014 15:25:08 -0500    

Click here for diff

  
We were not checking to see if the supplied dscale was valid for the given  
digit array when receiving binary-format numeric values.  While dscale can  
validly be more than the number of nonzero fractional digits, it shouldn't  
be less; that case causes fractional digits to be hidden on display even  
though they're there and participate in arithmetic.  
  
Bug #12053 from Tommaso Sala indicates that there's at least one broken  
client library out there that sometimes supplies an incorrect dscale value,  
leading to strange behavior.  This suggests that simply throwing an error  
might not be the best response; it would lead to failures in applications  
that might seem to be working fine today.  What seems the least risky fix  
is to truncate away any digits that would be hidden by dscale.  This  
preserves the existing behavior in terms of what will be printed for the  
transmitted value, while preventing subsequent arithmetic from producing  
results inconsistent with that.  
  
In passing, throw a specific error for the case of dscale being outside  
the range that will fit into a numeric's header.  Before you got "value  
overflows numeric format", which is a bit misleading.  
  
Back-patch to all supported branches.  
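The chosen truncation behavior can be modeled with Python's decimal module (a sketch, not PostgreSQL's digit-array code):

```python
from decimal import Decimal, ROUND_DOWN

def receive_numeric(value, dscale):
    """Model of the least-risky fix: truncate (not round) away any
    fractional digits that the supplied dscale would hide, so later
    arithmetic agrees with what will be printed."""
    quantum = Decimal(1).scaleb(-dscale)       # e.g. dscale=2 -> 0.01
    return value.quantize(quantum, rounding=ROUND_DOWN)
```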
  

Fix backpatching error in commit 55c88079

  
commit   : 9e05f3b97e63c34411669aee139c4a6236c68030    
  
author   : Andrew Dunstan <andrew@dunslane.net>    
date     : Mon, 1 Dec 2014 12:48:35 -0500    
  
committer: Andrew Dunstan <andrew@dunslane.net>    
date     : Mon, 1 Dec 2014 12:48:35 -0500    

Click here for diff

  
  

Fix hstore_to_json_loose's detection of valid JSON number values.

  
commit   : 55c8807978e86f615623eb7922c114b841f14557    
  
author   : Andrew Dunstan <andrew@dunslane.net>    
date     : Mon, 1 Dec 2014 11:28:45 -0500    
  
committer: Andrew Dunstan <andrew@dunslane.net>    
date     : Mon, 1 Dec 2014 11:28:45 -0500    

Click here for diff

  
We expose a function IsValidJsonNumber that internally calls the lexer  
for json numbers. That allows us to use the same test everywhere,  
instead of inventing a broken test for hstore conversions. The new  
function is also used in datum_to_json, replacing the code that is now  
moved to the new function.  
  
Backpatch to 9.3 where hstore_to_json_loose was introduced.  
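A rough Python analogue of the acceptance rule IsValidJsonNumber enforces (PostgreSQL uses its own JSON lexer; this merely illustrates the idea):

```python
import json

def is_valid_json_number(s):
    """Accept exactly what a strict JSON lexer accepts as a bare number:
    no NaN/Infinity (which Python's json would otherwise allow), no
    leading zeros, no leading '+'."""
    def reject_constant(name):
        raise ValueError(name)
    try:
        value = json.loads(s, parse_constant=reject_constant)
    except ValueError:
        return False
    return isinstance(value, (int, float)) and not isinstance(value, bool)
```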
  

Fix missing space in documentation

  
commit   : 5c9a4a866efcc15ec041134538e01f25c2e2bd88    
  
author   : Magnus Hagander <magnus@hagander.net>    
date     : Mon, 1 Dec 2014 12:12:07 +0100    
  
committer: Magnus Hagander <magnus@hagander.net>    
date     : Mon, 1 Dec 2014 12:12:07 +0100    

Click here for diff

  
Ian Barwick  
  

Fix minor bugs in commit 30bf4689a96cd283af33edcdd6b7210df3f20cd8 et al.

  
commit   : 179a9afdd77e04c089013fd3d5d5abe2d96516ed    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sun, 30 Nov 2014 12:20:51 -0500    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sun, 30 Nov 2014 12:20:51 -0500    

Click here for diff

  
Coverity complained that the "else" added to fillPGconn() was unreachable,  
which it was.  Remove the dead code.  In passing, rearrange the tests so as  
not to bother trying to fetch values for options that can't be assigned.  
  
Pre-9.3 did not have that issue, but it did have a "return" that should be  
"goto oom_error" to ensure that a suitable error message gets filled in.  
  

Update transaction README for persistent multixacts

  
commit   : 0dafa9584e64243cb4d305be903623d6e709c584    
  
author   : Alvaro Herrera <alvherre@alvh.no-ip.org>    
date     : Fri, 28 Nov 2014 18:06:18 -0300    
  
committer: Alvaro Herrera <alvherre@alvh.no-ip.org>    
date     : Fri, 28 Nov 2014 18:06:18 -0300    

Click here for diff

  
Multixacts are now maintained during recovery, but the README didn't get  
the memo.  Backpatch to 9.3, where the divergence was introduced.  
  

Make \watch respect the user's \pset null setting.

  
commit   : 4b1953079fb020da7089e58e3b0543058d867803    
  
author   : Fujii Masao <fujii@postgresql.org>    
date     : Fri, 28 Nov 2014 02:42:43 +0900    
  
committer: Fujii Masao <fujii@postgresql.org>    
date     : Fri, 28 Nov 2014 02:42:43 +0900    

Click here for diff

  
Previously \watch always ignored the user's \pset null setting.  
Ignoring that setting is reasonable for \d and similar queries,  
where the code can have an opinion about what the presentation  
should be like, since it knows what SQL query it's issuing. That  
argument surely doesn't apply to \watch, so this commit makes  
\watch use the user's \pset null setting.  
  
Back-patch to 9.3 where \watch was added.  
  

Mark response messages for translation in pg_isready.

  
commit   : 75bac647f08beb2e64215b0118e9c505ca9feb9d    
  
author   : Fujii Masao <fujii@postgresql.org>    
date     : Fri, 28 Nov 2014 02:12:45 +0900    
  
committer: Fujii Masao <fujii@postgresql.org>    
date     : Fri, 28 Nov 2014 02:12:45 +0900    

Click here for diff

  
Back-patch to 9.3 where pg_isready was added.  
  
Mats Erik Andersson  
  

Free libxml2/libxslt resources in a safer order.

  
commit   : c393847a1f8b35252f880853a2fd5eabd9a6d7b0    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 27 Nov 2014 11:12:51 -0500    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 27 Nov 2014 11:12:51 -0500    

Click here for diff

  
Mark Simonetti reported that libxslt sometimes crashes for him, and that  
swapping xslt_process's object-freeing calls around to do them in reverse  
order of creation seemed to fix it.  I've not reproduced the crash, but  
valgrind clearly shows a reference to already-freed memory, which is  
consistent with the idea that shutdown of the xsltTransformContext is  
trying to reference the already-freed stylesheet or input document.  
With this patch, valgrind is no longer unhappy.  
  
I have an inquiry in to see if this is a libxslt bug or if we're just  
abusing the library; but even if it's a library bug, we'd want to adjust  
our code so it doesn't fail with unpatched libraries.  
  
Back-patch to all supported branches, because we've been doing this in  
the wrong(?) order for a long time.  
  

Allow "dbname" from connection string to be overridden in PQconnectDBParams

  
commit   : 08cd4d9a64b2313b87625a0abbca05096345deab    
  
author   : Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Tue, 25 Nov 2014 17:12:07 +0200    
  
committer: Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Tue, 25 Nov 2014 17:12:07 +0200    

Click here for diff

  
If the "dbname" attribute in PQconnectDBParams contained a connection string  
or URI (and expand_dbname = TRUE), the database name from the connection  
string could not be overridden by a subsequent "dbname" keyword in the  
array. That was not intentional; all other options can be overridden.  
Furthermore, any subsequent "dbname" caused the connection string from the  
first dbname value to be processed again, overriding any values for the same  
options that were given between the connection string and the second dbname  
option.  
  
In passing, clarify in the docs that only the first dbname option in the  
array is parsed as a connection string.  
  
Alex Shulgin. Backpatch to all supported versions.  
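The corrected merging rule, sketched in Python (a toy conninfo parser; libpq's real parser also handles quoting, URIs, and more):

```python
def conninfo_array_parse(keywords, values, expand_dbname=True):
    """Toy model of PQconnectdbParams option merging: only the FIRST dbname
    is expanded as a connection string; any later keyword, including a
    later dbname, simply overrides earlier values.  The conninfo parser
    here is a bare-bones stand-in (space-separated key=value pairs only)."""
    opts = {}
    dbname_expanded = False
    for key, val in zip(keywords, values):
        if key == "dbname" and expand_dbname and not dbname_expanded:
            dbname_expanded = True
            for piece in val.split():          # expand the connection string
                k, v = piece.split("=", 1)
                opts[k] = v
        else:
            opts[key] = val                    # plain override
    return opts
```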
  

Check return value of strdup() in libpq connection option parsing.

  
commit   : d3b162a3dd6f40b75bf4eed2efac0d4be5c22e15    
  
author   : Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Tue, 25 Nov 2014 12:55:00 +0200    
  
committer: Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Tue, 25 Nov 2014 12:55:00 +0200    

Click here for diff

  
An out-of-memory failure in most of these would lead to strange behavior,  
connecting to a different database than intended, but some would lead to  
an outright segfault.  
  
Alex Shulgin and me. Backpatch to all supported versions.  
  

Fix mishandling of system columns in FDW queries.

  
commit   : c57cdc9c1af213ceebc75ce72bd08eb229ba9bda    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sat, 22 Nov 2014 16:01:12 -0500    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sat, 22 Nov 2014 16:01:12 -0500    

Click here for diff

  
postgres_fdw would send query conditions involving system columns to the  
remote server, even though it makes no effort to ensure that system  
columns other than CTID match what the remote side thinks.  tableoid,  
in particular, probably won't match, yet might well be used in queries.  
Hence, prevent sending conditions that include non-CTID system columns.  
  
Also, create_foreignscan_plan neglected to check local restriction  
conditions while determining whether to set fsSystemCol for a foreign  
scan plan node.  This again would bollix the results for queries that  
test a foreign table's tableoid.  
  
Back-patch the first fix to 9.3 where postgres_fdw was introduced.  
Back-patch the second to 9.2.  The code is probably broken in 9.1 as  
well, but the patch doesn't apply cleanly there; given the weak state  
of support for FDWs in 9.1, it doesn't seem worth fixing.  
  
Etsuro Fujita, reviewed by Ashutosh Bapat, and somewhat modified by me  
  

Improve documentation's description of JOIN clauses.

  
commit   : be2dfe4081b0ec59e1fe6bf77fca1406943a0d74    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Wed, 19 Nov 2014 16:00:30 -0500    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Wed, 19 Nov 2014 16:00:30 -0500    

Click here for diff

  
In bug #12000, Andreas Kunert complained that the documentation was  
misleading in saying "FROM T1 CROSS JOIN T2 is equivalent to FROM T1, T2".  
That's correct as far as it goes, but the equivalence doesn't hold when  
you consider three or more tables, since JOIN binds more tightly than  
comma.  I added a <note> to explain this, and ended up rearranging some  
of the existing text so that the note would make sense in context.  
  
In passing, rewrite the description of JOIN USING, which was unnecessarily  
vague, and hadn't been helped any by somebody's reliance on markup as a  
substitute for clear writing.  (Mostly this involved reintroducing a  
concrete example that was unaccountably removed by commit 032f3b7e166cfa28.)  
  
Back-patch to all supported branches.  
  

Avoid file descriptor leak in pg_test_fsync.

  
commit   : 8cf825974cd10323013f3be0ed1cd35176fbe364    
  
author   : Robert Haas <rhaas@postgresql.org>    
date     : Wed, 19 Nov 2014 11:57:54 -0500    
  
committer: Robert Haas <rhaas@postgresql.org>    
date     : Wed, 19 Nov 2014 11:57:54 -0500    

Click here for diff

  
This can cause problems on Windows, where files that are still open  
can't be unlinked.  
  
Jeff Janes  
  

Don't require bleeding-edge timezone data in timestamptz regression test.

  
commit   : f6fb7fb17d89e333c606ab46fe2ff07e3e3d8289    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 18 Nov 2014 21:36:46 -0500    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 18 Nov 2014 21:36:46 -0500    

Click here for diff

  
The regression test cases added in commits b2cbced9e et al depended in part  
on the Russian timezone offset changes of Oct 2014.  While this is of no  
particular concern for a default Postgres build, it was possible for a  
build using --with-system-tzdata to fail the tests if the system tzdata  
database wasn't au courant.  Bjorn Munch and Christoph Berg both complained  
about this while packaging 9.4rc1, so we probably shouldn't insist on the  
system tzdata being up-to-date.  Instead, make an equivalent test using a  
zone change that occurred in Venezuela in 2007.  With this patch, the  
regression tests should pass using any tzdata set from 2012 or later.  
(I can't muster much sympathy for somebody using --with-system-tzdata  
on a machine whose system tzdata is more than three years out-of-date.)  
  

Fix some bogus direct uses of realloc().

  
commit   : 8824bae87b78036c41ba794d96a0e68859df5990    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 18 Nov 2014 13:28:13 -0500    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 18 Nov 2014 13:28:13 -0500    

Click here for diff

  
pg_dump/parallel.c was using realloc() directly with no error check.  
While the odds of an actual failure here seem pretty low, Coverity  
complains about it, so fix by using pg_realloc() instead.  
  
While looking for other instances, I noticed a couple of places in  
psql that hadn't gotten the memo about the availability of pg_realloc.  
These aren't bugs, since they did have error checks, but verbosely  
inconsistent code is not a good thing.  
  
Back-patch as far as 9.3.  9.2 did not have pg_dump/parallel.c, nor  
did it have pg_realloc available in all frontend code.  
  

Update time zone data files to tzdata release 2014j.

  
commit   : ab45d907b2e0ee2e27c5a2307d77dc235abec00d    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 17 Nov 2014 12:08:02 -0500    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 17 Nov 2014 12:08:02 -0500    

Click here for diff

  
DST law changes in the Turks & Caicos Islands (America/Grand_Turk) and  
in Fiji.  New zone Pacific/Bougainville for portions of Papua New Guinea.  
Historical changes for Korea and Vietnam.  
  

Fix initdb --sync-only to also sync tablespaces.

  
commit   : 26a4e0ed7166f7b87b5ad3bb33dbde2ebc6ff82b    
  
author   : Andres Freund <andres@anarazel.de>    
date     : Fri, 14 Nov 2014 18:22:12 +0100    
  
committer: Andres Freund <andres@anarazel.de>    
date     : Fri, 14 Nov 2014 18:22:12 +0100    

Click here for diff

  
630cd14426dc added initdb --sync-only, for use by pg_upgrade, by just  
exposing the existing fsync code. That's wrong, because initdb so far  
had absolutely no reason to deal with tablespaces.  
  
Fix --sync-only by additionally explicitly syncing each of the  
tablespaces.  
  
Backpatch to 9.3 where --sync-only was introduced.  
  
Abhijit Menon-Sen and Andres Freund  
  

Sync unlogged relations to disk after they have been reset.

  
commit   : 2c3ebfd1ad3523987ec6f268df020a5e1431572e    
  
author   : Andres Freund <andres@anarazel.de>    
date     : Fri, 14 Nov 2014 18:21:30 +0100    
  
committer: Andres Freund <andres@anarazel.de>    
date     : Fri, 14 Nov 2014 18:21:30 +0100    

Click here for diff

  
Unlogged relations are only reset when performing an unclean  
restart. That means they have to be synced to disk during clean  
shutdowns. During normal processing that's achieved by registering a  
buffer's file to be fsynced at the next checkpoint when flushed. But  
ResetUnloggedRelations() doesn't go through the buffer manager, so  
nothing will force the reset relations to disk before the next shutdown  
checkpoint.  
  
So just make ResetUnloggedRelations() fsync the newly created main  
forks to disk.  
  
Discussion: 20140912112246.GA4984@alap3.anarazel.de  
  
Backpatch to 9.1 where unlogged tables were introduced.  
  
Abhijit Menon-Sen and Andres Freund  
  

Ensure unlogged tables are reset even if crash recovery errors out.

  
commit   : 672b43e68b41d405143e5a474acf882ed7c154f3    
  
author   : Andres Freund <andres@anarazel.de>    
date     : Fri, 14 Nov 2014 18:20:59 +0100    
  
committer: Andres Freund <andres@anarazel.de>    
date     : Fri, 14 Nov 2014 18:20:59 +0100    

Click here for diff

  
Unlogged relations are reset at the end of crash recovery as they're  
only synced to disk during a proper shutdown. Unfortunately that and  
later steps can fail, e.g. due to running out of space. This reset was,  
up to now, performed after marking the database as having finished crash  
recovery successfully. Since out-of-space errors trigger a crash restart,  
that could lead to a situation where not all unlogged relations are  
reset.  
  
Once that happened, usage of unlogged relations could yield errors like  
"could not open file "...": No such file or directory". Luckily,  
clusters that show the problem can be fixed by performing an immediate  
shutdown, and starting the database again.  
  
To fix, just call ResetUnloggedRelations(UNLOGGED_RELATION_INIT)  
earlier, before marking the database as having successfully recovered.  
  
Discussion: 20140912112246.GA4984@alap3.anarazel.de  
  
Backpatch to 9.1 where unlogged tables were introduced.  
  
Abhijit Menon-Sen and Andres Freund  
  

Backport “Expose fsync_fname as a public API”.

  
commit   : c7299d32f64d09cfc1f586bdb7902b02259ff202    
  
author   : Andres Freund <andres@anarazel.de>    
date     : Sat, 15 Nov 2014 01:09:05 +0100    
  
committer: Andres Freund <andres@anarazel.de>    
date     : Sat, 15 Nov 2014 01:09:05 +0100    

Click here for diff

  
Backport commit cc52d5b33ff5df29de57dcae9322214cfe9c8464 back to 9.1  
to allow backpatching some unlogged table fixes that use fsync_fname.  
  

Allow interrupting GetMultiXactIdMembers

  
commit   : d45e8dc527f9e66c56dd91ae3e79eac1b094b96e    
  
author   : Alvaro Herrera <alvherre@alvh.no-ip.org>    
date     : Fri, 14 Nov 2014 15:14:02 -0300    
  
committer: Alvaro Herrera <alvherre@alvh.no-ip.org>    
date     : Fri, 14 Nov 2014 15:14:02 -0300    

Click here for diff

  
This function has a loop which can lead to uninterruptible process  
"stalls" (actually infinite loops) when some bugs are triggered.  Avoid  
that unpleasant situation by adding a check for interrupts in a place  
that shouldn't degrade performance in the normal case.  
  
Backpatch to 9.3.  Older branches have an identical loop here, but the  
aforementioned bugs are only a problem starting in 9.3 so there doesn't  
seem to be any point in backpatching any further.  
  

Fix pg_dumpall to restore its ability to dump from ancient servers.

  
commit   : 9fc8871212d3000c2964698dfc844b94ba049599    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 13 Nov 2014 18:19:32 -0500    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 13 Nov 2014 18:19:32 -0500    

Click here for diff

  
Fix breakage induced by commits d8d3d2a4f37f6df5d0118b7f5211978cca22091a  
and 463f2625a5fb183b6a8925ccde98bb3889f921d9: pg_dumpall has crashed when  
attempting to dump from pre-8.1 servers since then, due to faulty  
construction of the query used for dumping roles from older servers.  
The query was erroneous as of the earlier commit, but it wasn't exposed  
unless you tried to use --binary-upgrade, which you presumably wouldn't  
with a pre-8.1 server.  However commit 463f2625a made it fail always.  
  
In HEAD, also fix additional breakage induced in the same query by  
commit 491c029dbc4206779cf659aa0ff986af7831d2ff, which evidently wasn't  
tested against pre-8.1 servers either.  
  
The bug is only latent in 9.1 because 463f2625a hadn't landed yet, but  
it seems best to back-patch all branches containing the faulty query.  
  
Gilles Darold  
  

Fix race condition between hot standby and restoring a full-page image.

  
commit   : 861d3aa43d2f38f19773cf4636937837bb0c2167    
  
author   : Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Thu, 13 Nov 2014 19:47:44 +0200    
  
committer: Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Thu, 13 Nov 2014 19:47:44 +0200    

Click here for diff

  
There was a window in RestoreBackupBlock where a page would be zeroed out,  
but not yet locked. If a backend pinned and locked the page in that window,  
it saw the zeroed page instead of the old page or new page contents, which  
could lead to missing rows in a result set, or errors.  
  
To fix, replace RBM_ZERO with RBM_ZERO_AND_LOCK, which atomically pins,  
zeroes, and locks the page, if it's not in the buffer cache already.  
  
In stable branches, the old RBM_ZERO constant is renamed to RBM_DO_NOT_USE,  
to avoid breaking any 3rd party extensions that might use RBM_ZERO. More  
importantly, this avoids renumbering the other enum values, which would  
cause even bigger confusion in extensions that use ReadBufferExtended, but  
haven't been recompiled.  
  
Backpatch to all supported versions; this has been racy since hot standby  
was introduced.  
  

Explicitly support the case that a plancache’s raw_parse_tree is NULL.

  
commit   : c2b4ed19f6b64fa56fee5b6241463b499144f23e    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Wed, 12 Nov 2014 15:58:44 -0500    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Wed, 12 Nov 2014 15:58:44 -0500    

Click here for diff

  
This only happens if a client issues a Parse message with an empty query  
string, which is a bit odd; but since it is explicitly called out as legal  
by our FE/BE protocol spec, we'd probably better continue to allow it.  
  
Fix by adding tests everywhere that the raw_parse_tree field is passed to  
functions that don't or shouldn't accept NULL.  Also make it clear in the  
relevant comments that NULL is an expected case.  
  
This reverts commits a73c9dbab0165b3395dfe8a44a7dfd16166963c4 and  
2e9650cbcff8c8fb0d9ef807c73a44f241822eee, which fixed specific crash  
symptoms by hacking things at what now seems to be the wrong end, ie the  
callee functions.  Making the callees allow NULL is superficially more  
robust, but it's not always true that there is a defensible thing for the  
callee to do in such cases.  The caller has more context and is better  
able to decide what the empty-query case ought to do.  
  
Per followup discussion of bug #11335.  Back-patch to 9.2.  The code  
before that is sufficiently different that it would require development  
of a separate patch, which doesn't seem worthwhile for what is believed  
to be an essentially cosmetic change.  
  

Loop when necessary in contrib/pgcrypto’s pktreader_pull().

  
commit   : 419de696a76f5884e26ecd0905a084b0f57afc93    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 11 Nov 2014 17:22:15 -0500    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 11 Nov 2014 17:22:15 -0500    

Click here for diff

  
This fixes a scenario in which pgp_sym_decrypt() failed with "Wrong key  
or corrupt data" on messages whose length is 6 less than a power of 2.  
  
Per bug #11905 from Connor Penhale.  Fix by Marko Tiikkaja, regression  
test case from Jeff Janes.  
  

Fix dependency searching for case where column is visited before table.

  
commit   : 2a83e0349c222047af87cf54d4ca87764806c291    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 11 Nov 2014 17:00:21 -0500    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 11 Nov 2014 17:00:21 -0500    

Click here for diff

  
When the recursive search in dependency.c visits a column and then later  
visits the whole table containing the column, it needs to propagate the  
drop-context flags for the table to the existing target-object entry for  
the column.  Otherwise we might refuse the DROP (if not CASCADE) on the  
incorrect grounds that there was no automatic drop pathway to the column.  
Remarkably, this has not been reported before, though it's possible at  
least when an extension creates both a datatype and a table using that  
datatype.  
  
Rather than just marking the column as allowed to be dropped, it might  
seem good to skip the DROP COLUMN step altogether, since the later DROP  
of the table will surely get the job done.  The problem with that is that  
the datatype would then be dropped before the table (since the whole  
situation occurred because we visited the datatype, and then recursed to  
the dependent column, before visiting the table).  That seems pretty risky,  
and the case is rare enough that it doesn't seem worth expending a lot of  
effort or risk to make the drops happen in a safe order.  So we just play  
dumb and delete the column separately according to the existing drop  
ordering rules.  
  
Per report from Petr Jelinek, though this is different from his proposed  
patch.  
  
Back-patch to 9.1, where extensions were introduced.  There's currently  
no evidence that such cases can arise before 9.1, and in any case we would  
also need to back-patch cb5c2ba2d82688d29b5902d86b993a54355cad4d to 9.0  
if we wanted to back-patch this.  
  

Ensure that whole-row Vars produce nonempty column names.

  
commit   : 07ab4ec4cf5447d79828bb536dcc69b16e0cd4f7    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 10 Nov 2014 15:21:20 -0500    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 10 Nov 2014 15:21:20 -0500    

Click here for diff

  
At one time it wasn't terribly important what column names were associated  
with the fields of a composite Datum, but since the introduction of  
operations like row_to_json(), it's important that looking up the rowtype  
ID embedded in the Datum returns the column names that users would expect.  
However, that doesn't work terribly well: you could get the column names  
of the underlying table, or column aliases from any level of the query,  
depending on minor details of the plan tree.  You could even get totally  
empty field names, which is disastrous for cases like row_to_json().  
  
It seems unwise to change this behavior too much in stable branches,  
however, since users might not have noticed that they weren't getting  
the least-unintuitive choice of field names.  Therefore, in the back  
branches, only change the results when the child plan has returned an  
actually-empty field name.  (We assume that can't happen with a named  
rowtype, so this also dodges the issue of possibly producing RECORD-typed  
output from a Var with a named composite result type.)  As in the sister  
patch for HEAD, we can get a better name to use from the Var's  
corresponding RTE.  There is no need to touch the RowExpr code since it  
was already using a copy of the RTE's alias list for RECORD cases.  
  
Back-patch as far as 9.2.  Before that we did not have row_to_json()  
so there were no core functions potentially affected by bogus field  
names.  While 9.1 and earlier do have contrib's hstore(record) which  
is also affected, those versions don't seem to produce empty field names  
(at least not in the known problem cases), so we'll leave them alone.  
  

Cope with more than 64K phrases in a thesaurus dictionary.

  
commit   : 5a74ff373c6af1fe0747555596d62b6156eeef3e    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 6 Nov 2014 20:52:40 -0500    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 6 Nov 2014 20:52:40 -0500    

Click here for diff

  
dict_thesaurus stored phrase IDs in uint16 fields, so it would get confused  
and even crash if there were more than 64K entries in the configuration  
file.  It turns out to be basically free to widen the phrase IDs to uint32,  
so let's just do so.  
  
This was complained of some time ago by David Boutin (in bug #7793);  
he later submitted an informal patch but it was never acted on.  
We now have another complaint (bug #11901 from Luc Ouellette) so it's  
time to make something happen.  
  
This is basically Boutin's patch, but for future-proofing I also added a  
defense against too many words per phrase.  Note that we don't need any  
explicit defense against overflow of the uint32 counters, since before that  
happens we'd hit array allocation sizes that repalloc rejects.  
  
Back-patch to all supported branches because of the crash risk.  
  

Prevent the unnecessary creation of .ready file for the timeline history file.

  
commit   : 3a3b7e316f85566771d3a28ea874e030fb12126b    
  
author   : Fujii Masao <fujii@postgresql.org>    
date     : Thu, 6 Nov 2014 21:24:40 +0900    
  
committer: Fujii Masao <fujii@postgresql.org>    
date     : Thu, 6 Nov 2014 21:24:40 +0900    

Click here for diff

  
Previously a .ready file was created for the timeline history file at the end  
of an archive recovery even when WAL archiving was not enabled.  
This creation is unnecessary and causes the .ready file to remain forever.  
  
This commit changes archive recovery so that it creates the .ready file for  
the timeline history file only when WAL archiving is enabled.  
  
Backpatch to all supported versions.  
  

Fix volatility markings of some contrib I/O functions.

  
commit   : 0247935c7b44526a2051184963b76cdd62ea0ab8    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Wed, 5 Nov 2014 11:34:19 -0500    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Wed, 5 Nov 2014 11:34:19 -0500    

Click here for diff

  
In general, datatype I/O functions are supposed to be immutable or at  
worst stable.  Some contrib I/O functions were, through oversight, not  
marked with any volatility property at all, which made them VOLATILE.  
Since (most of) these functions actually behave immutably, the erroneous  
marking isn't terribly harmful; but it can be user-visible in certain  
circumstances, as per a recent bug report from Joe Van Dyk in which a  
cast to text was disallowed in an expression index definition.  
  
To fix, just adjust the declarations in the extension SQL scripts.  If we  
were being very fussy about this, we'd bump the extension version numbers,  
but that seems like more trouble (for both developers and users) than the  
problem is worth.  
  
A fly in the ointment is that chkpass_in actually is volatile, because  
of its use of random() to generate a fresh salt when presented with a  
not-yet-encrypted password.  This is bad because of the general assumption  
that I/O functions aren't volatile: the consequence is that records or  
arrays containing chkpass elements may have input behavior a bit different  
from a bare chkpass column.  But there seems no way to fix this without  
breaking existing usage patterns for chkpass, and the consequences of the  
inconsistency don't seem bad enough to justify that.  So for the moment,  
just document it in a comment.  
  
Since we're not bumping version numbers, there seems no harm in  
back-patching these fixes; at least future installations will get the  
functions marked correctly.  
  

Avoid integer overflow and buffer overrun in hstore_to_json().

  
commit   : f44290b7b3763f339ed66f883c0e85bb3c3c4e88    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 4 Nov 2014 16:54:59 -0500    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 4 Nov 2014 16:54:59 -0500    

Click here for diff

  
This back-patches commit 0c5783ff301ae3e470000c918bfc2395129de4c5 into the  
9.3 branch.  At the time, Heikki just thought he was fixing an unlikely  
integer-overflow scenario, but in point of fact the original coding was  
hopelessly broken: it supposed that escape_json never enlarges the data  
more than 2X, which is wrong on its face.  The revised code avoids making  
any a-priori assumptions about the output length.  
  
Per report from Saul Costa.  The bogus code doesn't exist before 9.3,  
so no other branches need fixing.  
  

Drop no-longer-needed buffers during ALTER DATABASE SET TABLESPACE.

  
commit   : f88300168b1c5786c4b167de17e1a0bbb252337e    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 4 Nov 2014 13:24:14 -0500    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 4 Nov 2014 13:24:14 -0500    

Click here for diff

  
The previous coding assumed that we could just let buffers for the  
database's old tablespace age out of the buffer arena naturally.  
The folly of that is exposed by bug #11867 from Marc Munro: the user could  
later move the database back to its original tablespace, after which any  
still-surviving buffers would match lookups again and appear to contain  
valid data.  But they'd be missing any changes applied while the database  
was in the new tablespace.  
  
This has been broken since ALTER SET TABLESPACE was introduced, so  
back-patch to all supported branches.  
  

Add missing #include

  
commit   : d233f0a52f1a0a0b3b8654fd8b93b00a9e77563e    
  
author   : Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Mon, 3 Nov 2014 19:29:33 +0200    
  
committer: Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Mon, 3 Nov 2014 19:29:33 +0200    

Click here for diff

  
Fixes compiler warning I introduced while fixing bug #11431.  
  
Report and fix by Michael Paquier  
  

Docs: fix incorrect spelling of contrib/pgcrypto option.

  
commit   : 65b0de44f54653021ebec9ad0bb7d60fcb50ff75    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 3 Nov 2014 11:11:34 -0500    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 3 Nov 2014 11:11:34 -0500    

Click here for diff

  
pgp_sym_encrypt's option is spelled "sess-key", not "enable-session-key".  
Spotted by Jeff Janes.  
  
In passing, improve a comment in pgp-pgsql.c to make it clearer that  
the debugging options are intentionally undocumented.  
  

  
commit   : 9decab8b7834b99ca6375fdf533ad5f22f91a204    
  
author   : Noah Misch <noah@leadboat.com>    
date     : Sun, 2 Nov 2014 21:43:20 -0500    
  
committer: Noah Misch <noah@leadboat.com>    
date     : Sun, 2 Nov 2014 21:43:20 -0500    

Click here for diff

  
Back-patch to 9.2, like commit db29620d4d16e08241f965ccd70d0f65883ff0de.  
  

PL/Python: Fix example

  
commit   : 42a78568dc34584a1b7aad6209f6e2f1d28f0054    
  
author   : Peter Eisentraut <peter_e@gmx.net>    
date     : Sat, 1 Nov 2014 11:31:35 -0400    
  
committer: Peter Eisentraut <peter_e@gmx.net>    
date     : Sat, 1 Nov 2014 11:31:35 -0400    

Click here for diff

  
Revert "6f6b46c9c0ca3d96acbebc5499c32ee6369e1eec", which was broken.  
  
Reported-by: Jonathan Rogers <jrogers@socialserve.com>  
  

Test IsInTransactionChain, not IsTransactionBlock, in vac_update_relstats.

  
commit   : e65b550b37ec25102813d8e39eeb70f0c7944433    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 30 Oct 2014 13:03:28 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 30 Oct 2014 13:03:28 -0400    

Click here for diff

  
As noted by Noah Misch, my initial cut at fixing bug #11638 didn't cover  
all cases where ANALYZE might be invoked in an unsafe context.  We need to  
test the result of IsInTransactionChain not IsTransactionBlock; which is  
notationally a pain because IsInTransactionChain requires an isTopLevel  
flag, which would have to be passed down through several levels of callers.  
I chose to pass in_outer_xact (ie, the result of IsInTransactionChain)  
rather than isTopLevel per se, as that seemed marginally more apropos  
for the intermediate functions to know about.  
  

Avoid corrupting tables when ANALYZE inside a transaction is rolled back.

  
commit   : 81f0a5e38a05811f762b1dec9d344be923025f40    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Wed, 29 Oct 2014 18:12:08 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Wed, 29 Oct 2014 18:12:08 -0400    

Click here for diff

  
VACUUM and ANALYZE update the target table's pg_class row in-place, that is  
nontransactionally.  This is OK, more or less, for the statistical columns,  
which are mostly nontransactional anyhow.  It's not so OK for the DDL hint  
flags (relhasindex etc), which might get changed in response to  
transactional changes that could still be rolled back.  This isn't a  
problem for VACUUM, since it can't be run inside a transaction block nor  
in parallel with DDL on the table.  However, we allow ANALYZE inside a  
transaction block, so if the transaction had earlier removed the last  
index, rule, or trigger from the table, and then we roll back the  
transaction after ANALYZE, the table would be left in a corrupted state  
with the hint flags not set though they should be.  
  
To fix, suppress the hint-flag updates if we are InTransactionBlock().  
This is safe enough because it's always OK to postpone hint maintenance  
some more; the worst-case consequence is a few extra searches of pg_index  
et al.  There was discussion of instead using a transactional update,  
but that would change the behavior in ways that are not all desirable:  
in most scenarios we're better off keeping ANALYZE's statistical values  
even if the ANALYZE itself rolls back.  In any case we probably don't want  
to change this behavior in back branches.  
  
Per bug #11638 from Casey Shobe.  This has been broken for a good long  
time, so back-patch to all supported branches.  
  
Tom Lane and Michael Paquier, initial diagnosis by Andres Freund  
  

Reset error message at PQreset()

  
commit   : 1325b239b9e862cc31de2af363f1b73f0d15d920    
  
author   : Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Wed, 29 Oct 2014 14:32:01 +0200    
  
committer: Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Wed, 29 Oct 2014 14:32:01 +0200    

Click here for diff

  
If you call PQreset() repeatedly, and the connection cannot be  
re-established, the error messages from the failed connection attempts  
kept accumulating in the error string.  
  
Fixes bug #11455 reported by Caleb Epstein. Backpatch to all supported  
versions.  
  

Fix two bugs in tsquery @> operator.

  
commit   : 1aa526f3f18feb2434870b1f1c78cfc0aa87d56f    
  
author   : Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Mon, 27 Oct 2014 10:50:41 +0200    
  
committer: Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Mon, 27 Oct 2014 10:50:41 +0200    

Click here for diff

  
1. The comparison for matching terms used only the CRC to decide if there's  
a match. Two different terms with the same CRC gave a match.  
  
2. It assumed that if the second operand has more terms than the first, it's  
never a match. That assumption is bogus, because there can be duplicate  
terms in either operand.  
  
Rewrite the implementation in a way that doesn't have those bugs.  
  
Backpatch to all supported versions.  
  

Improve planning of btree index scans using ScalarArrayOpExpr quals.

  
commit   : f6abf8f08ac36825e3cbcaf6ef923822e65877f4    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sun, 26 Oct 2014 16:12:29 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sun, 26 Oct 2014 16:12:29 -0400    

Click here for diff

  
Since we taught btree to handle ScalarArrayOpExpr quals natively (commit  
9e8da0f75731aaa7605cf4656c21ea09e84d2eb1), the planner has always included  
ScalarArrayOpExpr quals in index conditions if possible.  However, if the  
qual is for a non-first index column, this could result in an inferior plan  
because we can no longer take advantage of index ordering (cf. commit  
807a40c551dd30c8dd5a0b3bd82f5bbb1e7fd285).  It can be better to omit the  
ScalarArrayOpExpr qual from the index condition and let it be done as a  
filter, so that the output doesn't need to get sorted.  Indeed, this is  
true for the query introduced as a test case by the latter commit.  
  
To fix, restructure get_index_paths and build_index_paths so that we  
consider paths both with and without ScalarArrayOpExpr quals in non-first  
index columns.  Redesign the API of build_index_paths so that it reports  
what it found, saving useless second or third calls.  
  
Report and patch by Andrew Gierth (though rather heavily modified by me).  
Back-patch to 9.2 where this code was introduced, since the issue can  
result in significant performance regressions compared to plans produced  
by 9.1 and earlier.  
  

Oops, I fumbled the backpatch of pg_upgrade changes.

  
commit   : 9945f4e0f546f2e53b430fba32831ae6c73abb78    
  
author   : Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Sat, 25 Oct 2014 20:59:22 +0300    
  
committer: Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Sat, 25 Oct 2014 20:59:22 +0300    

Click here for diff

  
Somehow I got 9.2 and 9.4 correct, but fumbled 9.3.  
  

Work around Windows locale name with non-ASCII character.

  
commit   : 8f80dcf3c646679ae1270eb55f29fed9b0009c80    
  
author   : Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Fri, 24 Oct 2014 19:56:03 +0300    
  
committer: Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Fri, 24 Oct 2014 19:56:03 +0300    

Click here for diff

  
Windows has one locale whose name contains a non-ASCII character:  
"Norwegian (Bokmål)" (that's an 'a' with a ring on top). That causes  
trouble: when passing it to setlocale(), it's not clear what encoding the  
argument should be in. Another problem is that the locale name is stored in  
pg_database catalog table, and the encoding used there depends on what  
server encoding happens to be in use when the database is created. For  
example, if you issue the CREATE DATABASE when connected to a UTF-8  
database, the locale name is stored in pg_database in UTF-8. As long as all  
locale names are pure ASCII, that's not a problem.  
  
To work around that, map the troublesome locale name to a pure-ASCII alias  
of the same locale, "norwegian-bokmal".  
  
Now, this doesn't change the existing values that are already in  
pg_database and in postgresql.conf. Old clusters will need to be fixed  
manually. Instructions for that need to be put in the release notes.  
  
This fixes bug #11431 reported by Alon Siman-Tov. Backpatch to 9.2;  
backpatching further would require more work than seems worth it.  
  

Make the locale comparison in pg_upgrade more lenient

  
commit   : 2a1b34959a25d5b395ba52101eccf28a8e189c10    
  
author   : Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Fri, 24 Oct 2014 19:26:44 +0300    
  
committer: Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Fri, 24 Oct 2014 19:26:44 +0300    

Click here for diff

  
If the locale names are not equal, try to canonicalize both of them by  
passing them to setlocale(). Before, we only canonicalized the old cluster's  
locale if upgrading from a 8.4-9.2 server, but we also need to canonicalize  
when upgrading from a pre-8.4 server. That was an oversight in the code. But  
we should also canonicalize on newer server versions, so that we cope if the  
canonical form changes from one release to another. I'm about to do just  
that to fix bug #11431, by mapping a locale name that contains non-ASCII  
characters to a pure-ASCII alias of the same locale.  
  
This is partial backpatch of commit 33755e8edf149dabfc0ed9b697a84f70b0cca0de  
in master. Apply to 9.2, 9.3 and 9.4. The canonicalization code didn't exist  
before 9.2. In 9.2 and 9.3, this effectively also back-patches the changes  
from commit 58274728fb8e087049df67c0eee903d9743fdeda, to be more lax about  
the spelling of the encoding in the locale names.  
  

Improve ispell dictionary’s defenses against bad affix files.

  
commit   : 385f0d98a4357f144e7ed9b18db7ead9079b9318    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 23 Oct 2014 13:11:34 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 23 Oct 2014 13:11:34 -0400    

Click here for diff

  
Don't crash if an ispell dictionary definition contains flags but not  
any compound affixes.  (This isn't a security issue since only superusers  
can install affix files, but still it's a bad thing.)  
  
Also, be more careful about detecting whether an affix-file FLAG command  
is old-format (ispell) or new-format (myspell/hunspell).  And change the  
error message about mixed old-format and new-format commands into something  
intelligible.  
  
Per bug #11770 from Emre Hasegeli.  Back-patch to all supported branches.  
  

Prevent the already-archived WAL file from being archived again.

  
commit   : d45cd9e19ffdab8758f088a36be0cd7f3588533c    
  
author   : Fujii Masao <fujii@postgresql.org>    
date     : Thu, 23 Oct 2014 16:21:27 +0900    
  
committer: Fujii Masao <fujii@postgresql.org>    
date     : Thu, 23 Oct 2014 16:21:27 +0900    

Click here for diff

  
Previously, archive recovery always created a .ready file for the last  
WAL file of the old timeline at the end of recovery, even when that file  
was restored from the archive and already had a .done file. That is, the  
WAL file could end up with both .ready and .done files, which caused the  
already-archived WAL file to be archived again.  
  
This commit prevents archive recovery from creating a .ready file for  
the last WAL file if it has a .done file, so that the file is not  
archived again.  
  
This bug was added when cascading replication feature was introduced,  
i.e., the commit 5286105800c7d5902f98f32e11b209c471c0c69c.  
So, back-patch to 9.2, where cascading replication was added.  
  
Reviewed by Michael Paquier  
  

Ensure libpq reports a suitable error message on unexpected socket EOF.

  
commit   : 52ef33f7257436aa24ee79cf2006a2d7e12f9021    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Wed, 22 Oct 2014 18:41:51 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Wed, 22 Oct 2014 18:41:51 -0400    

Click here for diff

  
The EOF-detection logic in pqReadData was a bit confused about who should  
set up the error message in case the kernel gives us read-ready-but-no-data  
rather than ECONNRESET or some other explicit error condition.  Since the  
whole point of this situation is that the lower-level functions don't know  
there's anything wrong, pqReadData itself must set up the message.  But  
keep the assumption that if an errno was reported, a message was set up at  
lower levels.  
  
Per bug #11712 from Marko Tiikkaja.  It's been like this for a very long  
time, so back-patch to all supported branches.  
  

Flush unlogged table’s buffers when copying or moving databases.

  
commit   : d7624e5621aefe76c31798f79db47e952240225d    
  
author   : Andres Freund <andres@anarazel.de>    
date     : Mon, 20 Oct 2014 23:43:46 +0200    
  
committer: Andres Freund <andres@anarazel.de>    
date     : Mon, 20 Oct 2014 23:43:46 +0200    

Click here for diff

  
CREATE DATABASE and ALTER DATABASE .. SET TABLESPACE copy the source  
database directory on the filesystem level. To ensure the on disk  
state is consistent they block out users of the affected database and  
force a checkpoint to flush out all data to disk. Unfortunately, up to  
now, that checkpoint didn't flush out dirty buffers from unlogged  
relations.  
  
That bug meant there could be leftover dirty buffers in either the  
template database or the database in its old location, leading to  
problems when accessing relations in an inconsistent state, and to  
possible problems during shutdown in the SET TABLESPACE case because  
buffers belonging to files that no longer exist are flushed.  
  
This was reported in bug #10675 by Maxim Boguk.  
  
Fix by Pavan Deolasee, modified somewhat by me. Reviewed by MauMau and  
Fujii Masao.  
  
Backpatch to 9.1 where unlogged tables were introduced.  
  

Fix mishandling of FieldSelect-on-whole-row-Var in nested lateral queries.

  
commit   : 4e54685d0bcc74b840fdeff54c79c343941d3681    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 20 Oct 2014 12:23:48 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 20 Oct 2014 12:23:48 -0400    

Click here for diff

  
If an inline-able SQL function taking a composite argument is used in a  
LATERAL subselect, and the composite argument is a lateral reference,  
the planner could fail with "variable not found in subplan target list",  
as seen in bug #11703 from Karl Bartel.  (The outer function call used in  
the bug report and in the committed regression test is not really necessary  
to provoke the bug --- you can get it if you manually expand the outer  
function into "LATERAL (SELECT inner_function(outer_relation))", too.)  
  
The cause of this is that we generate the reltargetlist for the referenced  
relation before doing eval_const_expressions() on the lateral sub-select's  
expressions (cf find_lateral_references()), so what's scheduled to be  
emitted by the referenced relation is a whole-row Var, not the simplified  
single-column Var produced by optimizing the function's FieldSelect on the  
whole-row Var.  Then setrefs.c fails to match up that lateral reference to  
what's available from the outer scan.  
  
Preserving the FieldSelect optimization in such cases would require either  
major planner restructuring (to recursively do expression simplification  
on sub-selects much earlier) or some amazingly ugly kluge to change the  
reltargetlist of a possibly-already-planned relation.  It seems better  
just to skip the optimization when the Var is from an upper query level;  
the case is not so common that it's likely anyone will notice a few  
wasted cycles.  
  
AFAICT this problem only occurs for uplevel LATERAL references, so  
back-patch to 9.3 where LATERAL was added.  
  

Declare mkdtemp() only if we're providing it.

  
commit   : 7804f350eef6b2603b7fced00d603352a1c5ff4b    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 17 Oct 2014 22:55:27 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 17 Oct 2014 22:55:27 -0400    

Click here for diff

  
Follow our usual style of providing an "extern" for a standard library  
function only when we're also providing the implementation.  This avoids  
issues when the system headers declare the function slightly differently  
than we do, as noted by Caleb Welton.  
  
We might have to go to the extent of probing to see if the system headers  
declare the function, but let's not do that until it's demonstrated to be  
necessary.  
  
Oversight in commit 9e6b1bf258170e62dac555fc82ff0536dfe01d29.  Back-patch  
to all supported branches, as that was.  
  

Avoid core dump in _outPathInfo() for Path without a parent RelOptInfo.

  
commit   : d4f5cf5ce5106504316e5d890782091567a02d57    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 17 Oct 2014 22:33:07 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 17 Oct 2014 22:33:07 -0400    

Click here for diff

  
Nearly all Paths have parents, but a ResultPath representing an empty FROM  
clause does not.  Avoid a core dump in such cases.  I believe this is only  
a hazard for debugging usage, not for production, else we'd have heard  
about it before.  Nonetheless, back-patch to 9.1 where the troublesome code  
was introduced.  Noted while poking at bug #11703.  
  

Fix core dump in pg_dump --binary-upgrade on zero-column composite type.

  
commit   : 9a540c1ef6dfa8b32b79bed7a811ad24c380075c    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 17 Oct 2014 12:49:06 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 17 Oct 2014 12:49:06 -0400    

Click here for diff

  
This reverts nearly all of commit 28f6cab61ab8958b1a7dfb019724687d92722538  
in favor of just using the typrelid we already have in pg_dump's TypeInfo  
struct for the composite type.  As coded, it'd crash if the composite type  
had no attributes, since then the query would return no rows.  
  
Back-patch to all supported versions.  It seems to not really be a problem  
in 9.0 because that version rejects the syntax "create type t as ()", but  
we might as well keep the logic similar in all affected branches.  
  
Report and fix by Rushabh Lathia.  
  

Support timezone abbreviations that sometimes change.

  
commit   : 137e7c16449f8f103b8f72aff02628058076bc38    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 16 Oct 2014 15:22:17 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 16 Oct 2014 15:22:17 -0400    

Click here for diff

  
Up to now, PG has assumed that any given timezone abbreviation (such as  
"EDT") represents a constant GMT offset in the usage of any particular  
region; we had a way to configure what that offset was, but not for it  
to be changeable over time.  But, as with most things horological, this  
view of the world is too simplistic: there are numerous regions that have  
at one time or another switched to a different GMT offset but kept using  
the same timezone abbreviation.  Almost the entire Russian Federation did  
that a few years ago, and later this month they're going to do it again.  
And there are similar examples all over the world.  
  
To cope with this, invent the notion of a "dynamic timezone abbreviation",  
which is one that is referenced to a particular underlying timezone  
(as defined in the IANA timezone database) and means whatever it currently  
means in that zone.  For zones that use or have used daylight-savings time,  
the standard and DST abbreviations continue to have the property that you  
can specify standard or DST time and get that time offset whether or not  
DST was theoretically in effect at the time.  However, the abbreviations  
mean what they meant at the time in question (or most recently before that  
time) rather than being absolutely fixed.  
  
The standard abbreviation-list files have been changed to use this behavior  
for abbreviations that have actually varied in meaning since 1970.  The  
old simple-numeric definitions are kept for abbreviations that have not  
changed, since they are a bit faster to resolve.  
  
While this is clearly a new feature, it seems necessary to back-patch it  
into all active branches, because otherwise use of Russian zone  
abbreviations is going to become even more problematic than it already was.  
This change supersedes the changes in commit 513d06ded et al to modify the  
fixed meanings of the Russian abbreviations; since we've not shipped that  
yet, this will avoid an undesirably incompatible (not to mention incorrect)  
change in behavior for timestamps between 2011 and 2014.  
  
This patch makes some cosmetic changes in ecpglib to keep its usage of  
datetime lookup tables as similar as possible to the backend code, but  
doesn't do anything about the increasingly obsolete set of timezone  
abbreviation definitions that are hard-wired into ecpglib.  Whatever we  
do about that will likely not be appropriate material for back-patching.  
Also, a potential free() of a garbage pointer after an out-of-memory  
failure in ecpglib has been fixed.  
  
This patch also fixes pre-existing bugs in DetermineTimeZoneOffset() that  
caused it to produce unexpected results near a timezone transition, if  
both the "before" and "after" states are marked as standard time.  We'd  
only ever thought about or tested transitions between standard and DST  
time, but that's not what's happening when a zone simply redefines their  
base GMT offset.  
  
In passing, update the SGML documentation to refer to the Olson/zoneinfo/  
zic timezone database as the "IANA" database, since it's now being  
maintained under the auspices of IANA.  
  

Suppress dead, unportable src/port/crypt.c code.

  
commit   : 69e29b17eca1954ece32638a5e7740a1de25a244    
  
author   : Noah Misch <noah@leadboat.com>    
date     : Sun, 12 Oct 2014 23:27:06 -0400    
  
committer: Noah Misch <noah@leadboat.com>    
date     : Sun, 12 Oct 2014 23:27:06 -0400    

Click here for diff

  
This file used __int64, which is specific to native Windows, rather than  
int64.  Suppress the long-unused union field of this type.  Noticed on  
Cygwin x86_64 with -lcrypt not installed.  Back-patch to 9.0 (all  
supported versions).  
  

Fix broken example in PL/pgSQL document.

  
commit   : 090ad74b00e023ba8af10cc3376b3e5a415de2e7    
  
author   : Fujii Masao <fujii@postgresql.org>    
date     : Fri, 10 Oct 2014 03:18:01 +0900    
  
committer: Fujii Masao <fujii@postgresql.org>    
date     : Fri, 10 Oct 2014 03:18:01 +0900    

Click here for diff

  
Back-patch to all supported branches.  
  
Marti Raudsepp, per a report from Marko Tiikkaja  
  

Fix array overrun in ecpg's version of ParseDateTime().

  
commit   : d3cfe20c6dc498f9294d07c7803a8cc776f8db31    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 6 Oct 2014 21:23:20 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 6 Oct 2014 21:23:20 -0400    

Click here for diff

  
The code wrote a value into the caller's field[] array before checking  
to see if there was room, which of course is backwards.  Per report from  
Michael Paquier.  
  
I fixed the equivalent bug in the backend's version of this code way back  
in 630684d3a130bb93, but failed to think about ecpg's copy.  Fortunately  
this doesn't look like it would be exploitable for anything worse than a  
core dump: an external attacker would have no control over the single word  
that gets written.  
  

Cannot rely on %z printf length modifier.

  
commit   : 3cd085ee251ba1499a3dd94cbea809fe4bdc55a5    
  
author   : Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Sun, 5 Oct 2014 09:21:45 +0300    
  
committer: Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Sun, 5 Oct 2014 09:21:45 +0300    

Click here for diff

  
Before version 9.4, we didn't require sprintf to support the %z length  
modifier. Use %lu instead.  
  
Reported by Peter Eisentraut. Apply to 9.3 and earlier.  
  

Update time zone data files to tzdata release 2014h.

  
commit   : c66199151c78c546f92744de5023b79b84f954cf    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sat, 4 Oct 2014 14:18:33 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sat, 4 Oct 2014 14:18:33 -0400    

Click here for diff

  
Most zones in the Russian Federation are subtracting one or two hours  
as of 2014-10-26.  Update the meanings of the abbreviations IRKT, KRAT,  
MAGT, MSK, NOVT, OMST, SAKT, VLAT, YAKT, YEKT to match.  
  
The IANA timezone database has adopted abbreviations of the form AxST/AxDT  
for all Australian time zones, reflecting what they believe to be current  
majority practice Down Under.  These names do not conflict with usage  
elsewhere (other than ACST for Acre Summer Time, which has been in disuse  
since 1994).  Accordingly, adopt these names into our "Default" timezone  
abbreviation set.  The "Australia" abbreviation set now contains only  
CST,EAST,EST,SAST,SAT,WST, all of which are thought to be mostly historical  
usage.  Note that SAST has also been changed to be South Africa Standard  
Time in the "Default" abbreviation set.  
  
Add zone abbreviations SRET (Asia/Srednekolymsk) and XJT (Asia/Urumqi),  
and use WSST/WSDT for western Samoa.  
  
Also a DST law change in the Turks & Caicos Islands (America/Grand_Turk),  
and numerous corrections for historical time zone data.  
  

Update time zone abbreviations lists.

  
commit   : 9701f238bcb710b524c577cd2ea882b42e24dddf    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 3 Oct 2014 17:44:38 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 3 Oct 2014 17:44:38 -0400    

Click here for diff

  
This updates known_abbrevs.txt to be what it should have been already,  
were my -P patch not broken; and updates some tznames/ entries that  
missed getting any love in previous timezone data updates because zic  
failed to flag the change of abbreviation.  
  
The non-cosmetic updates:  
  
* Remove references to "ADT" as "Arabia Daylight Time", an abbreviation  
that's been out of use since 2007; therefore, claiming there is a conflict  
with "Atlantic Daylight Time" doesn't seem especially helpful.  (We have  
left obsolete entries in the files when they didn't conflict with anything,  
but that seems like a different situation.)  
  
* Fix entirely incorrect GMT offsets for CKT (Cook Islands), FJT, FJST  
(Fiji); we didn't even have them on the proper side of the date line.  
(Seems to have been aboriginal errors in our tznames data; there's no  
evidence anything actually changed recently.)  
  
* FKST (Falkland Islands Summer Time) is now used all year round, so  
don't mark it as a DST abbreviation.  
  
* Update SAKT (Sakhalin) to mean GMT+11 not GMT+10.  
  
In cosmetic changes, I fixed a bunch of wrong (or at least obsolete)  
claims about abbreviations not being present in the zic files, and  
tried to be consistent about how obsolete abbreviations are labeled.  
  
Note the underlying timezone/data files are still at release 2014e;  
this is just trying to get us in sync with what those files actually  
say before we go to the next update.  
  

Fix bogus logic for zic -P option.

  
commit   : 909e48dddc50182e12c76734a6669a4fdbe080b3    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 3 Oct 2014 14:48:11 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 3 Oct 2014 14:48:11 -0400    

Click here for diff

  
The quick hack I added to zic to dump out currently-in-use timezone  
abbreviations turns out to have a nasty bug: within each zone, it was  
printing the last "struct ttinfo" to be *defined*, not necessarily the  
last one in use.  This was mainly a problem in zones that had changed the  
meaning of their zone abbreviation (to another GMT offset value) and later  
changed it back.  
  
As a result of this error, we'd missed out updating the tznames/ files  
for some jurisdictions that have changed their zone abbreviations since  
the tznames/ files were originally created.  I'll address the missing data  
updates in a separate commit.  
  

Don't balance vacuum cost delay when per-table settings are in effect

  
commit   : 67ed9d53136dc98191c5945043a17a12e10b480b    
  
author   : Alvaro Herrera <alvherre@alvh.no-ip.org>    
date     : Fri, 3 Oct 2014 13:01:27 -0300    
  
committer: Alvaro Herrera <alvherre@alvh.no-ip.org>    
date     : Fri, 3 Oct 2014 13:01:27 -0300    

Click here for diff

  
When there are cost-delay-related storage options set for a table,  
trying to make that table participate in the autovacuum cost-limit  
balancing algorithm produces undesirable results: instead of using the  
configured values, the global values are always used,  
as illustrated by Mark Kirkwood in  
http://www.postgresql.org/message-id/52FACF15.8020507@catalyst.net.nz  
  
Since the mechanism is already complicated, just disable it for those  
cases rather than trying to make it cope.  There are undesirable  
side-effects from this too, namely that the total I/O impact on the  
system will be higher whenever such tables are vacuumed.  However, this  
is seen as less harmful than slowing down vacuum, because that would  
cause bloat to accumulate.  Anyway, in the new system it is possible to  
tweak options to get the precise behavior one wants, whereas with the  
previous system one was simply hosed.  
  
This has been broken forever, so backpatch to all supported branches.  
This might affect systems where cost_limit and cost_delay have been set  
for individual tables.  
  

Check for GiST index tuples that don't fit on a page.

  
commit   : ef8ac584e0a7062101dc244566bfe0ca7a13496d    
  
author   : Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Fri, 3 Oct 2014 12:07:10 +0300    
  
committer: Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Fri, 3 Oct 2014 12:07:10 +0300    

Click here for diff

  
The page splitting code would go into infinite recursion if you try to  
insert an index tuple that doesn't fit even on an empty page.  
  
Per analysis and suggested fix by Andrew Gierth. Fixes bug #11555, reported  
by Bryan Seitz (analysis happened over IRC). Backpatch to all supported  
versions.  
  

Fix typo in error message.

  
commit   : 49b1d9806199bac31f87c793b6f7ff1acc64e4e0    
  
author   : Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Thu, 2 Oct 2014 15:51:31 +0300    
  
committer: Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Thu, 2 Oct 2014 15:51:31 +0300    

Click here for diff

  
  

Fix some more problems with nested append relations.

  
commit   : b2b95de61e2e1c4647fa902c3b946109c55451c4    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Wed, 1 Oct 2014 19:30:30 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Wed, 1 Oct 2014 19:30:30 -0400    

Click here for diff

  
As of commit a87c72915 (which later got backpatched as far as 9.1),  
we're explicitly supporting the notion that append relations can be  
nested; this can occur when UNION ALL constructs are nested, or when  
a UNION ALL contains a table with inheritance children.  
  
Bug #11457 from Nelson Page, as well as an earlier report from Elvis  
Pranskevichus, showed that there were still nasty bugs associated with such  
cases: in particular the EquivalenceClass mechanism could try to generate  
"join" clauses connecting an appendrel child to some grandparent appendrel,  
which would result in assertion failures or bogus plans.  
  
Upon investigation I concluded that all current callers of  
find_childrel_appendrelinfo() need to be fixed to explicitly consider  
multiple levels of parent appendrels.  The most complex fix was in  
processing of "broken" EquivalenceClasses, which are ECs for which we have  
been unable to generate all the derived equality clauses we would like to  
because of missing cross-type equality operators in the underlying btree  
operator family.  That code path is more or less entirely untested by  
the regression tests to date, because no standard opfamilies have such  
holes in them.  So I wrote a new regression test script to try to exercise  
it a bit, which turned out to be quite a worthwhile activity as it exposed  
existing bugs in all supported branches.  
  
The present patch is essentially the same as far back as 9.2, which is  
where parameterized paths were introduced.  In 9.0 and 9.1, we only need  
to back-patch a small fragment of commit 5b7b5518d, which fixes failure to  
propagate out the original WHERE clauses when a broken EC contains constant  
members.  (The regression test case results show that these older branches  
are noticeably stupider than 9.2+ in terms of the quality of the plans  
generated; but we don't really care about plan quality in such cases,  
only that the plan not be outright wrong.  A more invasive fix in the  
older branches would not be a good idea anyway from a plan-stability  
standpoint.)  
  

Block signals while computing the sleep time in postmaster's main loop.

  
commit   : cbd9619aca44c2a1991076325612302118ee6975    
  
author   : Andres Freund <andres@anarazel.de>    
date     : Wed, 1 Oct 2014 14:23:43 +0200    
  
committer: Andres Freund <andres@anarazel.de>    
date     : Wed, 1 Oct 2014 14:23:43 +0200    

Click here for diff

  
DetermineSleepTime() was previously called without blocked  
signals. That's not good, because it allows signal handlers to  
interrupt its workings.  
  
DetermineSleepTime() was added in 9.3 with the addition of background  
workers (da07a1e856511), where it only read from  
BackgroundWorkerList.  
  
Since 9.4, where dynamic background workers were added (7f7485a0cde),  
the list is also manipulated in DetermineSleepTime(). That's bad  
because the list now can be persistently corrupted if modified by both  
a signal handler and DetermineSleepTime().  
  
This was discovered during the investigation of hangs on buildfarm  
member anole. It's unclear whether this bug is the source of these  
hangs or not, but it's worth fixing either way. I have confirmed that  
it can cause crashes.  
  
Luckily, it looks like this can only cause problems when bgworkers  
are actively used.  
  
Discussion: 20140929193733.GB14400@awork2.anarazel.de  
  
Backpatch to 9.3 where background workers were introduced.  
  

Correct stdin/stdout usage in COPY .. PROGRAM

  
commit   : 9adda98c7738f5a8d4fdaa5a7a5bfb3f11c94899    
  
author   : Stephen Frost <sfrost@snowman.net>    
date     : Tue, 30 Sep 2014 15:55:28 -0400    
  
committer: Stephen Frost <sfrost@snowman.net>    
date     : Tue, 30 Sep 2014 15:55:28 -0400    

Click here for diff

  
The COPY documentation incorrectly stated, for the PROGRAM case,  
that we read from stdin and wrote to stdout.  Fix that, and improve  
consistency by referring to the 'PostgreSQL' user instead of the  
'postgres' user, as is done in the rest of the COPY documentation.  
  
Pointed out by Peter van Dijk.  
  
Back-patch to 9.3 where COPY .. PROGRAM was introduced.  
  

Fix identify_locking_dependencies for schema-only dumps.

  
commit   : d72ecc91c38b0d117f8d2137ab920f6cbc6e913d    
  
author   : Robert Haas <rhaas@postgresql.org>    
date     : Fri, 26 Sep 2014 11:21:35 -0400    
  
committer: Robert Haas <rhaas@postgresql.org>    
date     : Fri, 26 Sep 2014 11:21:35 -0400    

Click here for diff

  
Without this fix, parallel restore of a schema-only dump can deadlock,  
because when the dump is schema-only, the dependency will still be  
pointing at the TABLE item rather than the TABLE DATA item.  
  
Robert Haas and Tom Lane  
  

Fix VPATH builds of the replication parser from git for some !gcc compilers.

  
commit   : a8acf4d001549e7ff65def6ef1c1c61d77566e7a    
  
author   : Andres Freund <andres@anarazel.de>    
date     : Thu, 25 Sep 2014 15:22:26 +0200    
  
committer: Andres Freund <andres@anarazel.de>    
date     : Thu, 25 Sep 2014 15:22:26 +0200    

Click here for diff

  
Some compilers don't automatically search the current directory for  
included files. 9cc2c182fc2 fixed that for builds from tarballs by  
adding an include to the source directory. But that doesn't work when  
the scanner is generated in the VPATH directory. Use the same search  
path as the other parsers in the tree.  
  
One compiler that definitely was affected is Solaris's Sun cc.  
  
Backpatch to 9.1 which introduced using an actual parser for  
replication commands.  
  

Fix incorrect search for "x?" style matches in creviterdissect().

  
commit   : bbfdf5d75cd88522a3258ea7ec8d25cd51c35180    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 23 Sep 2014 20:25:36 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 23 Sep 2014 20:25:36 -0400    

Click here for diff

  
When the number of allowed iterations is limited (either a "?" quantifier  
or a bound expression), the last sub-match has to reach to the end of the  
target string.  The previous coding here first tried the shortest possible  
match (one character, usually) and then gave up and back-tracked if that  
didn't work, typically leading to failure to match overall, as shown in  
bug #11478 from Christoph Berg.  The minimum change to fix that would be to  
not decrement k before "goto backtrack"; but that would be a pretty stupid  
solution, because we'd laboriously try each possible sub-match length  
before finally discovering that only ending at the end can work.  Instead,  
force the sub-match endpoint limit up to the end for even the first  
shortest() call if we cannot have any more sub-matches after this one.  
  
Bug introduced in my rewrite that added the iterdissect logic, commit  
173e29aa5deefd9e71c183583ba37805c8102a72.  The shortest-first search code  
was too closely modeled on the longest-first code, which hasn't got this  
issue since it tries a match reaching to the end to start with anyway.  
Back-patch to all affected branches.  
  

Fix mishandling of CreateEventTrigStmt's eventname field.

  
commit   : e35db342aa6a67181d54b09cb80d088805a5f408    
  
author   : Robert Haas <rhaas@postgresql.org>    
date     : Mon, 22 Sep 2014 16:05:51 -0400    
  
committer: Robert Haas <rhaas@postgresql.org>    
date     : Mon, 22 Sep 2014 16:05:51 -0400    

Click here for diff

  
It's a string, not a scalar.  
  
Petr Jelinek  
  

Fix failure of contrib/auto_explain to print per-node timing information.

  
commit   : 9474c9d8107ba21ed9b8b9c1efde433f3d20e45f    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 19 Sep 2014 13:19:02 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 19 Sep 2014 13:19:02 -0400    

Click here for diff

  
This has been broken since commit af7914c6627bcf0b0ca614e9ce95d3f8056602bf,  
which added the EXPLAIN (TIMING) option.  Although that commit included  
updates to auto_explain, they evidently weren't tested very carefully,  
because the code failed to print node timings even when it should, due to  
failure to set es.timing in the ExplainState struct.  Reported off-list by  
Neelakanth Nadgir of Salesforce.  
  
In passing, clean up the documentation for auto_explain's options a  
little bit, including re-ordering them into what seems to me a more  
logical order.  
  

Mark x86's memory barrier inline assembly as clobbering the cpu flags.

  
commit   : 855cabb6f59c06f3ffcddade59bfbee60dab61aa    
  
author   : Andres Freund <andres@anarazel.de>    
date     : Fri, 19 Sep 2014 17:04:00 +0200    
  
committer: Andres Freund <andres@anarazel.de>    
date     : Fri, 19 Sep 2014 17:04:00 +0200    

Click here for diff

  
x86's memory barrier assembly was marked as clobbering "memory" but  
not "cc" even though 'addl' sets various flags. As it turns out gcc on  
x86 implicitly assumes "cc" on every inline assembler statement, so  
it's not a bug. But as that's poorly documented and might get copied  
to architectures or compilers where that's not the case, it seems  
better to be precise.  
  
Discussion: 20140919100016.GH4277@alap3.anarazel.de  
  
To keep the code common, backpatch to 9.2 where explicit memory  
barriers were introduced.  
  

doc: Fix documentation of local_preload_libraries

  
commit   : 710524eb949832b60c457bd7116f3f2a071d9a2e    
  
author   : Peter Eisentraut <peter_e@gmx.net>    
date     : Sun, 14 Sep 2014 10:50:04 -0400    
  
committer: Peter Eisentraut <peter_e@gmx.net>    
date     : Sun, 14 Sep 2014 10:50:04 -0400    

Click here for diff

  
The documentation used to suggest setting this parameter with ALTER ROLE  
SET, but that never worked, so replace it with a working suggestion.  
  
Reported-by: Kyotaro Horiguchi <horiguchi.kyotaro@lab.ntt.co.jp>  
  

Handle border = 3 in expanded mode

  
commit   : 7ec3990944b9db39c2a1f44bf4e7929ee74c6491    
  
author   : Stephen Frost <sfrost@snowman.net>    
date     : Fri, 12 Sep 2014 11:24:09 -0400    
  
committer: Stephen Frost <sfrost@snowman.net>    
date     : Fri, 12 Sep 2014 11:24:09 -0400    

Click here for diff

  
In psql, expanded mode was not being displayed correctly when using  
the normal ascii or unicode linestyles and border set to '3'.  Now,  
per the documentation, border '3' is really only sensible for HTML  
and LaTeX formats, however, that's no excuse for ascii/unicode to  
break in that case, and provisions had been made for psql to cleanly  
handle this case (and it did, in non-expanded mode).  
  
This was broken when ascii/unicode was initially added a good five  
years ago because print_aligned_vertical_line wasn't passed in the  
border setting being used by print_aligned_vertical but instead was  
given the whole printTableContent.  There really isn't a good reason  
for vertical_line to have the entire printTableContent structure, so  
just pass in the printTextFormat and border setting (similar to how  
this is handled in horizontal_line).  
  
Pointed out by Pavel Stehule, fix by me.  
  
Back-patch to all currently-supported versions.  
  

Fix power_var_int() for large integer exponents.

  
commit   : 25bf13fe1b5e9385ec230e457d488684e5933a32    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 11 Sep 2014 23:30:57 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 11 Sep 2014 23:30:57 -0400    

Click here for diff

  
The code for raising a NUMERIC value to an integer power wasn't very  
careful about large powers.  It got an outright wrong answer for an  
exponent of INT_MIN, due to failure to consider overflow of the Abs(exp)  
operation; which is fixable by using an unsigned rather than signed  
exponent value after that point.  Also, even though the number of  
iterations of the power-computation loop is pretty limited, it's easy for  
the repeated squarings to result in ridiculously enormous intermediate  
values, which can take unreasonable amounts of time/memory to process,  
or even overflow the internal "weight" field and so produce a wrong answer.  
We can forestall misbehaviors of that sort by bailing out as soon as the  
weight value exceeds what will fit in int16, since then the final answer  
must overflow (if exp > 0) or underflow (if exp < 0) the packed numeric  
format.  
  
Per off-list report from Pavel Stehule.  Back-patch to all supported  
branches.  
  

pg_upgrade: preserve the timestamp epoch

  
commit   : 5724f491d2e02d6017e3101c9b7421437f2129e5    
  
author   : Bruce Momjian <bruce@momjian.us>    
date     : Thu, 11 Sep 2014 18:39:46 -0400    
  
committer: Bruce Momjian <bruce@momjian.us>    
date     : Thu, 11 Sep 2014 18:39:46 -0400    

Click here for diff

  
This is useful for replication tools like Slony and Skytools.  This is a  
backpatch of a74a4aa23bb95b590ff01ee564219d2eacea3706.  
  
Report by Sergey Konoplev  
  
Backpatch through 9.3  
  

Fix typo in solaris spinlock fix.

  
commit   : bca91236a7531fdd6562280156823db2c4f891b8    
  
author   : Andres Freund <andres@anarazel.de>    
date     : Tue, 9 Sep 2014 13:57:38 +0200    
  
committer: Andres Freund <andres@anarazel.de>    
date     : Tue, 9 Sep 2014 13:57:38 +0200    

Click here for diff

  
07968dbfaad03 missed part of the S_UNLOCK define when building for  
sparcv8+.  
  

Fix spinlock implementation for some !solaris sparc platforms.

  
commit   : 27ef6b65308c28836b08a5f5b16e03d37a905ad9    
  
author   : Andres Freund <andres@anarazel.de>    
date     : Tue, 9 Sep 2014 00:47:32 +0200    
  
committer: Andres Freund <andres@anarazel.de>    
date     : Tue, 9 Sep 2014 00:47:32 +0200    

Click here for diff

  
Some Sparc CPUs can be run in various coherence models, ranging from  
RMO (relaxed) over PSO (partial) to TSO (total). Solaris has always  
run CPUs in TSO mode while in userland, but linux didn't use to and  
the various *BSDs still don't. Unfortunately the sparc TAS/S_UNLOCK  
were only correct under TSO. Fix that by adding the necessary memory  
barrier instructions. On sparcv8+, which should be all relevant CPUs,  
these are treated as NOPs if the current consistency model doesn't  
require the barriers.  
  
Discussion: 20140630222854.GW26930@awork2.anarazel.de  
  
Will be backpatched to all released branches once a few buildfarm  
cycles haven't shown up problems. As I've no access to sparc, this is  
blindly written.  
  

Fix psql \s to work with recent libedit, and add pager support.

  
commit   : b0fd5c552ea994d4c09e9c820a5bab9ed4ff68d9    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 8 Sep 2014 16:09:52 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 8 Sep 2014 16:09:52 -0400    

Click here for diff

  
psql's \s (print command history) doesn't work at all with recent libedit  
versions when printing to the terminal, because libedit tries to do an  
fchmod() on the target file which will fail if the target is /dev/tty.  
(We'd already noted this in the context of the target being /dev/null.)  
Even before that, it didn't work pleasantly, because libedit likes to  
encode the command history file (to ensure successful reloading), which  
renders it nigh unreadable, not to mention significantly different-looking  
depending on exactly which libedit version you have.  So let's forget using  
write_history() for this purpose, and instead print the data ourselves,  
using logic similar to that used to iterate over the history for newline  
encoding/decoding purposes.  
  
While we're at it, insert the ability to use the pager when \s is printing  
to the terminal.  This has been an acknowledged shortcoming of \s for many  
years, so while you could argue it's not exactly a back-patchable bug fix  
it still seems like a good improvement.  Anyone who's seriously annoyed  
at this can use "\s /dev/tty" or local equivalent to get the old behavior.  
  
Experimentation with this showed that the history iteration logic was  
actually rather broken when used with libedit.  It turns out that with  
libedit you have to use previous_history() not next_history() to advance  
to more recent history entries.  The easiest and most robust fix for this  
seems to be to make a run-time test to verify which function to call.  
We had not noticed this because libedit doesn't really need the newline  
encoding logic: its own encoding ensures that command entries containing  
newlines are reloaded correctly (unlike libreadline).  So the effective  
behavior with recent libedits was that only the oldest history entry got  
newline-encoded or newline-decoded.  However, because of yet other bugs in  
history_set_pos(), some old versions of libedit allowed the existing loop  
logic to reach entries besides the oldest, which means there may be libedit  
~/.psql_history files out there containing encoded newlines in more than  
just the oldest entry.  To ensure we can reload such files, it seems  
appropriate to back-patch this fix, even though that will result in some  
incompatibility with older psql versions (ie, multiline history entries  
written by a psql with this fix will look corrupted to a psql without it,  
if its libedit is reasonably up to date).  
  
Stepan Rutz and Tom Lane  
  

Documentation fix: sum(float4) returns float4, not float8.

  
commit   : b640d231230c486ed1b2b94167478fc8ced5dcff    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sun, 7 Sep 2014 22:40:41 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sun, 7 Sep 2014 22:40:41 -0400    

Click here for diff

  
The old claim is from my commit d06ebdb8d3425185d7e641d15e45908658a0177d of  
2000-07-17, but it seems to have been a plain old thinko; sum(float4) has  
been distinct from sum(float8) since Berkeley days.  Noted by KaiGai Kohei.  
  
While at it, mention the existence of sum(money), which is also of  
embarrassingly ancient vintage.  
  

Fix segmentation fault that an empty prepared statement could cause.

  
commit   : 52eed3d4267faf671dae0450d99982cb9ba1ac52    
  
author   : Fujii Masao <fujii@postgresql.org>    
date     : Fri, 5 Sep 2014 02:17:57 +0900    
  
committer: Fujii Masao <fujii@postgresql.org>    
date     : Fri, 5 Sep 2014 02:17:57 +0900    

Click here for diff

  
Back-patch to all supported branches.  
  
Per bug #11335 from Haruka Takatsuka  
  

doc: Various typo/grammar fixes

  
commit   : 3eb02dc045cc098563dc5fcd1ee073da254a7fbf    
  
author   : Kevin Grittner <kgrittn@postgresql.org>    
date     : Sat, 30 Aug 2014 11:03:23 -0500    
  
committer: Kevin Grittner <kgrittn@postgresql.org>    
date     : Sat, 30 Aug 2014 11:03:23 -0500    

Click here for diff

  
Errors detected using Topy (https://github.com/intgr/topy), all  
changes verified by hand and some manual tweaks added.  
  
Marti Raudsepp  
  
Individual changes backpatched, where applicable, as far as 9.0.  
  

Fix citext upgrade script for disallowance of oidvector element assignment.

  
commit   : 0ad403c984e683a643d2f698c80b15df9deb0ff3    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 28 Aug 2014 18:21:14 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 28 Aug 2014 18:21:14 -0400    

Click here for diff

  
In commit 45e02e3232ac7cc5ffe36f7986159b5e0b1f6fdc, we intentionally  
disallowed updates on individual elements of oidvector columns.  While that  
still seems like a sane idea in the abstract, we (I) forgot that citext's  
"upgrade from unpackaged" script did in fact perform exactly such updates,  
in order to fix the problem that citext indexes should have a collation  
but would not in databases dumped or upgraded from pre-9.1 installations.  
  
Even if we wanted to add casts to allow such updates, there's no practical  
way to do so in the back branches, so the only real alternative is to make  
citext's kluge even klugier.  In this patch, I cast the oidvector to text,  
fix its contents with regexp_replace, and cast back to oidvector.  (Ugh!)  
  
Since the aforementioned commit went into all active branches, we have to  
fix this in all branches that contain the now-broken update script.  
  
Per report from Eric Malm.  
  

Fix typos in some error messages thrown by extension scripts when fed to psql.

  
commit   : 10cb39da28afcaf3bedee2a273ea6f7c3336869c    
  
author   : Andres Freund <andres@anarazel.de>    
date     : Mon, 25 Aug 2014 18:30:46 +0200    
  
committer: Andres Freund <andres@anarazel.de>    
date     : Mon, 25 Aug 2014 18:30:46 +0200    

Click here for diff

  
Some of the many error messages introduced in 458857cc missed 'FROM  
unpackaged'. Also e016b724 and 45ffeb7e forgot to quote extension  
version numbers.  
  
Backpatch to 9.1, just like 458857cc which introduced the messages. Do  
so because the error messages thrown when the wrong command is copy &  
pasted aren't easy to understand.  
  

Backpatch: Fix typo in update scripts for some contrib modules.

  
commit   : 7b6407d93d78948e44e81a0328b3b26608db2a00    
  
author   : Fujii Masao <fujii@postgresql.org>    
date     : Mon, 25 Aug 2014 18:30:46 +0200    
  
committer: Andres Freund <andres@anarazel.de>    
date     : Mon, 25 Aug 2014 18:30:46 +0200    

Click here for diff

  
Backpatch as discussed in 20140702192641.GD22738@awork2.anarazel.de  
ff. as the error messages are user facing and possibly confusing.  
  
Original commit: 6f9e39bc9993c18686f0950f9b9657c7c97c7450  
  

Fix outdated comment

  
commit   : 13b037f93886b7cea393edf691e2dca14cf7b3e9    
  
author   : Alvaro Herrera <alvherre@alvh.no-ip.org>    
date     : Fri, 22 Aug 2014 13:55:34 -0400    
  
committer: Alvaro Herrera <alvherre@alvh.no-ip.org>    
date     : Fri, 22 Aug 2014 13:55:34 -0400    

Click here for diff

  
  

Install libpq DLL with $(INSTALL_SHLIB).

  
commit   : 318fe2321ed0207c8771dc0e24dfa84560dc6b51    
  
author   : Noah Misch <noah@leadboat.com>    
date     : Mon, 18 Aug 2014 23:00:38 -0400    
  
committer: Noah Misch <noah@leadboat.com>    
date     : Mon, 18 Aug 2014 23:00:38 -0400    

Click here for diff

  
Programs need execute permission on a DLL file to load it.  MSYS  
"install" ignores the mode argument, and our Cygwin build statically  
links libpq into programs.  That explains the lack of buildfarm trouble.  
Back-patch to 9.0 (all supported versions).  

  

Fix obsolete mention of non-int64 support in CREATE SEQUENCE documentation.

  
commit   : 2730d7254c5f30c14509859e4353f3f7120151d9    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 18 Aug 2014 01:17:49 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 18 Aug 2014 01:17:49 -0400    

Click here for diff

  
The old text explained what happened if we didn't have working int64  
arithmetic.  Since that case has been explicitly rejected by configure  
since 8.4.3, documenting it in the 9.x branches can only produce confusion.  
  

Fix bogus return macros in range_overright_internal().

  
commit   : f2d4f45c0ec7073dba75ffec8402b6d61de3bcfc    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sat, 16 Aug 2014 13:48:46 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sat, 16 Aug 2014 13:48:46 -0400    

Click here for diff

  
PG_RETURN_BOOL() should only be used in functions following the V1 SQL  
function API.  This coding accidentally fails to fail since letting the  
compiler coerce the Datum representation of bool back to plain bool  
does give the right answer; but that doesn't make it a good idea.  
  
Back-patch to older branches just to avoid unnecessary code divergence.  
  

Update SysV parameter configuration documentation for FreeBSD.

  
commit   : bdd62aabb4c3695dbb57155905b8bed7de56fd46    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 14 Aug 2014 16:05:52 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 14 Aug 2014 16:05:52 -0400    

Click here for diff

  
FreeBSD hasn't made any use of kern.ipc.semmap since 1.1, and newer  
releases reject attempts to set it altogether; so stop recommending  
that it be adjusted.  Per bug #11161.  
  
Back-patch to all supported branches.  Before 9.3, also incorporate  
commit 7a42dff47, which touches the same text and for some reason  
was not back-patched at the time.  
  

Fix help message in pg_ctl.

  
commit   : 4f292b5d52e51bf9ce7aeaa5293df5e8041194b8    
  
author   : Fujii Masao <fujii@postgresql.org>    
date     : Thu, 14 Aug 2014 13:57:52 +0900    
  
committer: Fujii Masao <fujii@postgresql.org>    
date     : Thu, 14 Aug 2014 13:57:52 +0900    

Click here for diff

  
Previously the help message described that -m is an option for  
"stop", "restart" and "promote" commands in pg_ctl. But actually  
that's not an option for "promote". So this commit fixes that  
incorrect description in the help message.  
  
Back-patch to 9.3 where the incorrect description was added.  
  

Fix failure to follow the directions when "init" fork was added.

  
commit   : a91fcd93ca038e60034efdf681c9323e4bce4385    
  
author   : Fujii Masao <fujii@postgresql.org>    
date     : Mon, 11 Aug 2014 23:19:23 +0900    
  
committer: Fujii Masao <fujii@postgresql.org>    
date     : Mon, 11 Aug 2014 23:19:23 +0900    

Click here for diff

  
Specifically this commit updates forkname_to_number() so that the HINT  
message includes "init" fork, and also adds the description of "init" fork  
into pg_relation_size() document.  
  
This is a part of the commit 2d00190495b22e0d0ba351b2cda9c95fb2e3d083  
which has fixed the same oversight in master and 9.4. Back-patch to  
9.1 where "init" fork was added.  
  

Fix documentation oversights about pageinspect and initialization fork.

  
commit   : 79b0bc1e9530c05dbeb5c91232faf3ee184439f3    
  
author   : Fujii Masao <fujii@postgresql.org>    
date     : Mon, 11 Aug 2014 22:52:16 +0900    
  
committer: Fujii Masao <fujii@postgresql.org>    
date     : Mon, 11 Aug 2014 22:52:16 +0900    

Click here for diff

  
The initialization fork was added in 9.1, but has not been taken into  
consideration in documents of get_raw_page function in pageinspect and  
storage layout. This commit fixes those oversights.  
  
get_raw_page can read not only a table but also an index, etc. So it  
should be documented that the function can read any relation. This commit  
also fixes the document of pageinspect that way.  
  
Back-patch to 9.1 where those oversights existed.  
  
Vik Fearing, review by MauMau  
  

Clarify type resolution behavior for domain types.

  
commit   : d4b13fab4e009c52567283b4b42b190c4696c355    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sun, 10 Aug 2014 16:13:19 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sun, 10 Aug 2014 16:13:19 -0400    

Click here for diff

  
The user documentation was vague and not entirely accurate about how  
we treat domain inputs for ambiguous operators/functions.  Clarify  
that, and add an example and some commentary.  Per a recent question  
from Adam Mackler.  
  
It's acted like this ever since we added domains, so back-patch  
to all supported branches.  
  

Fix conversion of domains to JSON in 9.3 and 9.2.

  
commit   : 8b65e0a33fdb6846d17a9200709dc3d64cf4d23a    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sat, 9 Aug 2014 18:40:34 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sat, 9 Aug 2014 18:40:34 -0400    

Click here for diff

  
In commit 0ca6bda8e7501947c05f30c127f6d12ff90b5a64, I rewrote the json.c  
code that decided how to convert SQL data types into JSON values, so that  
it no longer relied on typcategory which is a pretty untrustworthy guide  
to the output format of user-defined datatypes.  However, I overlooked the  
fact that CREATE DOMAIN inherits typcategory from the base type, so that  
the old coding did have the desirable property of treating domains like  
their base types --- but only in some cases, because not all its decisions  
turned on typcategory.  The version of the patch that went into 9.4 and  
up did a getBaseType() call to ensure that domains were always treated  
like their base types, but I omitted that from the older branches, because  
it would result in a behavioral change for domains over json or hstore;  
a change that's arguably a bug fix, but nonetheless a change that users  
had not asked for.  What I overlooked was that this meant that domains  
over numerics and boolean were no longer treated like their base types,  
and that we *did* get a complaint about, ie bug #11103 from David Grelaud.  
So let's do the getBaseType() call in the older branches as well, to  
restore their previous behavior in these cases.  That means 9.2 and 9.3  
will now make these decisions just like 9.4.  We could probably kluge  
things to still ignore the domain's base type if it's json etc, but that  
seems a bit silly.  
  

Reject duplicate column names in foreign key referenced-columns lists.

  
commit   : 7a9c8cefb8292c8cdf50ee96ade2b073813964e4    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sat, 9 Aug 2014 13:46:42 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sat, 9 Aug 2014 13:46:42 -0400    

Click here for diff

  
Such cases are disallowed by the SQL spec, and even if we wanted to allow  
them, the semantics seem ambiguous: how should the FK columns be matched up  
with the columns of a unique index?  (The matching could be significant in  
the presence of opclasses with different notions of equality, so this issue  
isn't just academic.)  However, our code did not previously reject such  
cases, but instead would either fail to match to any unique index, or  
generate a bizarre opclass-lookup error because of sloppy thinking in the  
index-matching code.  
  
David Rowley  
  

pg_upgrade: prevent oid conflicts with new-cluster TOAST tables

  
commit   : fca9f349ba6815ccf4f6ad0747a86549ccf8685e    
  
author   : Bruce Momjian <bruce@momjian.us>    
date     : Thu, 7 Aug 2014 14:56:13 -0400    
  
committer: Bruce Momjian <bruce@momjian.us>    
date     : Thu, 7 Aug 2014 14:56:13 -0400    

Click here for diff

  
Previously, TOAST tables only required in the new cluster could cause  
oid conflicts if they were auto-numbered and a later conflicting oid had  
to be assigned.  
  
Backpatch through 9.3  
  

pg_upgrade: remove reference to autovacuum_multixact_freeze_max_age

  
commit   : 24ae44914d5bd38598e2822eda51785de6eca8ab    
  
author   : Bruce Momjian <bruce@momjian.us>    
date     : Mon, 4 Aug 2014 11:45:45 -0400    
  
committer: Bruce Momjian <bruce@momjian.us>    
date     : Mon, 4 Aug 2014 11:45:45 -0400    

Click here for diff

  
autovacuum_multixact_freeze_max_age was added as a pg_ctl start  
parameter in 9.3.X to prevent autovacuum from running.  However, only  
some 9.3.X releases have autovacuum_multixact_freeze_max_age as it was  
added in a minor PG 9.3 release.  It also isn't needed because -b turns  
off autovacuum in 9.1+.  
  
Without this fix, trying to upgrade from an early 9.3 release to 9.4  
would fail.  
  
Report by EDB  
  
Backpatch through 9.3  
  

Add missing PQclear() calls into pg_receivexlog.

  
commit   : 9747a9898518080d37fa40e86ea1aa6602061abf    
  
author   : Fujii Masao <fujii@postgresql.org>    
date     : Sat, 2 Aug 2014 15:18:09 +0900    
  
committer: Fujii Masao <fujii@postgresql.org>    
date     : Sat, 2 Aug 2014 15:18:09 +0900    

Click here for diff

  
Back-patch to 9.3.  
  

Fix bug in pg_receivexlog --verbose.

  
commit   : 39217ce414190c11df02db4f500acbc52bacfb0a    
  
author   : Fujii Masao <fujii@postgresql.org>    
date     : Sat, 2 Aug 2014 14:57:21 +0900    
  
committer: Fujii Masao <fujii@postgresql.org>    
date     : Sat, 2 Aug 2014 14:57:21 +0900    

Click here for diff

  
In 9.2, pg_receivexlog with verbose option has emitted the messages  
at the end of each WAL file. But the commit 0b63291 suppressed such  
messages by mistake. This commit fixes the bug so that pg_receivexlog  
--verbose outputs such messages again.  
  
Back-patch to 9.3 where the bug was added.  
  

Fix typo in user manual

  
commit   : 8c4fdfbc9f84a2dd84d7ff4f5826d0ebf82573ca    
  
author   : Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Fri, 1 Aug 2014 21:13:17 +0300    
  
committer: Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Fri, 1 Aug 2014 21:13:17 +0300    

Click here for diff

  
  

Avoid wholesale autovacuuming when autovacuum is nominally off.

  
commit   : 4cbecdaaac0617a0a16d40f0d2cc87d1c278e197    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Wed, 30 Jul 2014 14:41:35 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Wed, 30 Jul 2014 14:41:35 -0400    

Click here for diff

  
When autovacuum is nominally off, we will still launch autovac workers  
to vacuum tables that are at risk of XID wraparound.  But after we'd done  
that, an autovac worker would proceed to autovacuum every table in the  
targeted database, if they meet the usual thresholds for autovacuuming.  
This is at best pretty unexpected; at worst it delays response to the  
wraparound threat.  Fix it so that if autovacuum is nominally off, we  
*only* do forced vacuums and not any other work.  
  
Per gripe from Andrey Zhidenkov.  This has been like this all along,  
so back-patch to all supported branches.  
  

Fix mishandling of background worker PGPROCs in EXEC_BACKEND builds.

  
commit   : 05c0059b3573a0423370d98789b5f292fb296041    
  
author   : Robert Haas <rhaas@postgresql.org>    
date     : Wed, 30 Jul 2014 11:25:58 -0400    
  
committer: Robert Haas <rhaas@postgresql.org>    
date     : Wed, 30 Jul 2014 11:25:58 -0400    

Click here for diff

  
InitProcess() relies on IsBackgroundWorker to decide whether the PGPROC  
for a new backend should be taken from ProcGlobal's freeProcs or from  
bgworkerFreeProcs.  In EXEC_BACKEND builds, InitProcess() is called  
sooner than in non-EXEC_BACKEND builds, and IsBackgroundWorker wasn't  
getting initialized soon enough.  
  
Report by Noah Misch.  Diagnosis and fix by me.  
  

Treat 2PC commit/abort the same as regular xacts in recovery.

  
commit   : a2a718b2231a84bc120c4c81d12c9350ad78e3d4    
  
author   : Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Tue, 29 Jul 2014 10:33:15 +0300    
  
committer: Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Tue, 29 Jul 2014 10:33:15 +0300    

Click here for diff

  
There were several oversights in recovery code where COMMIT/ABORT PREPARED  
records were ignored:  
  
* pg_last_xact_replay_timestamp() (wasn't updated for 2PC commits)  
* recovery_min_apply_delay (2PC commits were applied immediately)  
* recovery_target_xid (recovery would not stop if the XID used 2PC)  
  
The first of those was reported by Sergiy Zuban in bug #11032, analyzed by  
Tom Lane and Andres Freund. The bug was always there, but was masked before  
commit d19bd29f07aef9e508ff047d128a4046cc8bc1e2, because COMMIT PREPARED  
always created an extra regular transaction that was WAL-logged.  
  
Backpatch to all supported versions (older versions didn't have all the  
features and therefore didn't have all of the above bugs).  
  

Fix a performance problem in pg_dump's dump order selection logic.

  
commit   : 51fc6133488a80a1310972b8a0ad20aca13f5b02    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 25 Jul 2014 19:48:48 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 25 Jul 2014 19:48:48 -0400    

Click here for diff

  
findDependencyLoops() was not bright about cases where there are multiple  
dependency paths between the same two dumpable objects.  In most scenarios  
this did not hurt us too badly; but since the introduction of section  
boundary pseudo-objects in commit a1ef01fe163b304760088e3e30eb22036910a495,  
it was possible for this code to take unreasonable amounts of time (tens  
of seconds on a database with a couple thousand objects), as reported in  
bug #11033 from Joe Van Dyk.  Joe's particular problem scenario involved  
"pg_dump -a" mode with long chains of foreign key constraints, but I think  
that similar problems could arise with other situations as long as there  
were enough objects.  To fix, add a flag array that lets us notice when we  
arrive at the same object again while searching from a given start object.  
This simple change seems to be enough to eliminate the performance problem.  
  
Back-patch to 9.1, like the patch that introduced section boundary objects.  
  

Avoid access to already-released lock in LockRefindAndRelease.

  
commit   : b9fecd5330b6313f3c2fb5bba584a9dfdd1524c2    
  
author   : Robert Haas <rhaas@postgresql.org>    
date     : Thu, 24 Jul 2014 08:19:19 -0400    
  
committer: Robert Haas <rhaas@postgresql.org>    
date     : Thu, 24 Jul 2014 08:19:19 -0400    

Click here for diff

  
Spotted by Tom Lane.  
  

Rearrange documentation paragraph describing pg_relation_size().

  
commit   : c7ec796a2613a304e65392a788cc6dd2c4dda8de    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Wed, 23 Jul 2014 15:20:37 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Wed, 23 Jul 2014 15:20:37 -0400    

Click here for diff

  
Break the list of available options into an <itemizedlist> instead of  
inline sentences.  This is mostly motivated by wanting to ensure that the  
cross-references to the FSM and VM docs don't cross page boundaries in PDF  
format; but it seems to me to read more easily this way anyway.  I took the  
liberty of editorializing a bit further while at it.  
  
Per complaint from Magnus about 9.0.18 docs not building in A4 format.  
Patch all active branches so we don't get blind-sided by this particular  
issue again in future.  
  

Report success when Windows kill() emulation signals an exiting process.

  
commit   : 9f519faf8048d6c17644f5a398733960380aaf6e    
  
author   : Noah Misch <noah@leadboat.com>    
date     : Wed, 23 Jul 2014 00:35:13 -0400    
  
committer: Noah Misch <noah@leadboat.com>    
date     : Wed, 23 Jul 2014 00:35:13 -0400    

Click here for diff

  
This is consistent with the POSIX verdict that kill() shall not report  
ESRCH for a zombie process.  Back-patch to 9.0 (all supported versions).  
Test code from commit d7cdf6ee36adeac9233678fb8f2a112e6678a770 depends  
on it, and log messages about kill() reporting "Invalid argument" will  
cease to appear for this not-unexpected condition.  
  

MSVC: Substitute $(top_builddir) in REGRESS_OPTS.

  
commit   : f12976ab8393463b2b0815e7412dba26702dd0a3    
  
author   : Noah Misch <noah@leadboat.com>    
date     : Wed, 23 Jul 2014 00:35:07 -0400    
  
committer: Noah Misch <noah@leadboat.com>    
date     : Wed, 23 Jul 2014 00:35:07 -0400    

Click here for diff

  
Commit d7cdf6ee36adeac9233678fb8f2a112e6678a770 introduced a usage  
thereof.  Back-patch to 9.0, like that commit.  
  

Re-enable error for "SELECT ... OFFSET -1".

  
commit   : 6306d07122d8b6678f47c273165540de02a0d242    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 22 Jul 2014 13:30:01 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 22 Jul 2014 13:30:01 -0400    

Click here for diff

  
The executor has thrown errors for negative OFFSET values since 8.4 (see  
commit bfce56eea45b1369b7bb2150a150d1ac109f5073), but in a moment of brain  
fade I taught the planner that OFFSET with a constant negative value was a  
no-op (commit 1a1832eb085e5bca198735e5d0e766a3cb61b8fc).  Reinstate the  
former behavior by only discarding OFFSET with a value of exactly 0.  In  
passing, adjust a planner comment that referenced the ancient behavior.  
  
Back-patch to 9.3 where the mistake was introduced.  
  

Check block number against the correct fork in get_raw_page().

  
commit   : f59c8eff7dd3dea4116a34e1890ee219f6d7f8d5    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 22 Jul 2014 11:45:53 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 22 Jul 2014 11:45:53 -0400    

Click here for diff

  
get_raw_page tried to validate the supplied block number against  
RelationGetNumberOfBlocks(), which of course is only right when  
accessing the main fork.  In most cases, the main fork is longer  
than the others, so that the check was too weak (allowing a  
lower-level error to be reported, but no real harm to be done).  
However, very small tables could have an FSM larger than their heap,  
in which case the mistake prevented access to some FSM pages.  
Per report from Torsten Foertsch.  
  
In passing, make the bad-block-number error into an ereport not elog  
(since it's certainly not an internal error); and fix sloppily  
maintained comment for RelationGetNumberOfBlocksInFork.  
  
This has been wrong since we invented relation forks, so back-patch  
to all supported branches.  
  

Diagnose incompatible OpenLDAP versions during build and test.

  
commit   : 07115248fdaf06ba9b4f1a5f557cbf0282d835dd    
  
author   : Noah Misch <noah@leadboat.com>    
date     : Tue, 22 Jul 2014 11:01:03 -0400    
  
committer: Noah Misch <noah@leadboat.com>    
date     : Tue, 22 Jul 2014 11:01:03 -0400    

Click here for diff

  
With OpenLDAP versions 2.4.24 through 2.4.31, inclusive, PostgreSQL  
backends can crash at exit.  Raise a warning during "configure" based on  
the compile-time OpenLDAP version number, and test the crash scenario in  
the dblink test suite.  Back-patch to 9.0 (all supported versions).  
  

Reject out-of-range numeric timezone specifications.

  
commit   : 7672bbca0e7b8f2cbdf8a984e13a891d919fde7b    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 21 Jul 2014 22:41:27 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 21 Jul 2014 22:41:27 -0400    

Click here for diff

  
In commit 631dc390f49909a5c8ebd6002cfb2bcee5415a9d, we started to handle  
simple numeric timezone offsets via the zic library instead of the old  
CTimeZone/HasCTZSet kluge.  However, we overlooked the fact that the zic  
code will reject UTC offsets exceeding a week (which seems a bit arbitrary,  
but not because it's too tight ...).  This led to possibly setting  
session_timezone to NULL, which results in crashes in most timezone-related  
operations as of 9.4, and crashes in a small number of places even before  
that.  So check for NULL return from pg_tzset_offset() and report an  
appropriate error message.  Per bug #11014 from Duncan Gillis.  
  
Back-patch to all supported branches, like the previous patch.  
(Unfortunately, as of today that no longer includes 8.4.)