PostgreSQL 9.2.13 commit log

Stamp 9.2.13.

commit   : 582eff507eb3e3acae8c7d2d562ac2beb00b344f    
  
author   : Tom Lane <[email protected]>    
date     : Tue, 9 Jun 2015 15:33:16 -0400    
  
committer: Tom Lane <[email protected]>    
date     : Tue, 9 Jun 2015 15:33:16 -0400    

M configure
M configure.in
M doc/bug.template
M src/include/pg_config.h.win32
M src/interfaces/libpq/libpq.rc.in
M src/port/win32ver.rc

Release notes for 9.4.4, 9.3.9, 9.2.13, 9.1.18, 9.0.22.

commit   : 2d3f41a3707bf88092636c1017267bdec70e09aa    
  
author   : Tom Lane <[email protected]>    
date     : Tue, 9 Jun 2015 14:33:43 -0400    
  
committer: Tom Lane <[email protected]>    
date     : Tue, 9 Jun 2015 14:33:43 -0400    

M doc/src/sgml/release-9.0.sgml
M doc/src/sgml/release-9.1.sgml
M doc/src/sgml/release-9.2.sgml

Report more information if pg_perm_setlocale() fails at startup.

commit   : 7a4211ebd2a57f6b78ae05c6d93efc5fd1d94735    
  
author   : Tom Lane <[email protected]>    
date     : Tue, 9 Jun 2015 13:37:08 -0400    
  
committer: Tom Lane <[email protected]>    
date     : Tue, 9 Jun 2015 13:37:08 -0400    

We don't know why a few Windows users have seen this fail, but the  
taciturnity of the error message certainly isn't helping debug it.  
Let's at least find out which LC category isn't working.  

M src/backend/main/main.c
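
A minimal standalone sketch of the pattern the fix adopts: set each LC category through a small wrapper that can name the failing category, instead of dying silently. Here init_locale and the error wording are illustrative stand-ins, not the exact helper added to main.c:

    #include <locale.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Hypothetical helper: try one LC category and report its name if
     * setting it fails, so a startup failure says which category broke. */
    static void
    init_locale(const char *categoryname, int category, const char *locale)
    {
        if (setlocale(category, locale) == NULL)
        {
            fprintf(stderr, "FATAL: could not set locale category %s to \"%s\"\n",
                    categoryname, locale);
            exit(1);
        }
    }

    int
    main(void)
    {
        /* Absorb environment values one category at a time. */
        init_locale("LC_COLLATE", LC_COLLATE, "");
        init_locale("LC_CTYPE", LC_CTYPE, "");
        init_locale("LC_NUMERIC", LC_NUMERIC, "");
        return 0;
    }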

Allow HotStandbyActiveInReplay() to be called in single user mode.

commit   : 18935145e7f31ca975e0763f73c8c3f12aa62672    
  
author   : Andres Freund <[email protected]>    
date     : Mon, 8 Jun 2015 00:30:26 +0200    
  
committer: Andres Freund <[email protected]>    
date     : Mon, 8 Jun 2015 00:30:26 +0200    

HotStandbyActiveInReplay, introduced in 061b079f, asserted that WAL  
replay happens only in the startup process, missing the single-user case.  
  
This buglet is fairly harmless, as it only causes problems when  
single-user mode in an assertion-enabled build is used to replay a btree  
vacuum record.  
  
Backpatch to 9.2. 061b079f was backpatched further, but the assertion  
was not.  

M src/backend/access/transam/xlog.c
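
A compilable model of the relaxed check, assuming the fix amounts to also accepting a standalone backend; the boolean globals stand in for the real AmStartupProcess()/IsUnderPostmaster/LocalHotStandbyActive state:

    #include <assert.h>
    #include <stdbool.h>
    #include <stdio.h>

    /* Modeled backend state: false/false below corresponds to a
     * single-user backend (no postmaster, not the startup process). */
    static bool am_startup_process = false;
    static bool is_under_postmaster = false;
    static bool local_hot_standby_active = false;

    static bool
    HotStandbyActiveInReplay(void)
    {
        /* Before the fix the check effectively required the startup
         * process; the fix also admits !IsUnderPostmaster, i.e. a
         * standalone backend replaying WAL in single-user mode. */
        assert(am_startup_process || !is_under_postmaster);
        return local_hot_standby_active;
    }

    int
    main(void)
    {
        printf("hot standby active: %d\n", (int) HotStandbyActiveInReplay());
        return 0;
    }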

Use a safer method for determining whether relcache init file is stale.

commit   : 3e69a73b98abbbca7aefd77451e9197a4eea2b6e    
  
author   : Tom Lane <[email protected]>    
date     : Sun, 7 Jun 2015 15:32:09 -0400    
  
committer: Tom Lane <[email protected]>    
date     : Sun, 7 Jun 2015 15:32:09 -0400    

When we invalidate the relcache entry for a system catalog or index, we  
must also delete the relcache "init file" if the init file contains a copy  
of that rel's entry.  The old way of doing this relied on a specially  
maintained list of the OIDs of relations present in the init file: we made  
the list either when reading the file in, or when writing the file out.  
The problem is that when writing the file out, we included only rels  
present in our local relcache, which might have already suffered some  
deletions due to relcache inval events.  In such cases we correctly decided  
not to overwrite the real init file with incomplete data --- but we still  
used the incomplete initFileRelationIds list for the rest of the current  
session.  This could result in wrong decisions about whether the session's  
own actions require deletion of the init file, potentially allowing an init  
file created by some other concurrent session to be left around even though  
it's been made stale.  
  
Since we don't support changing the schema of a system catalog at runtime,  
the only likely scenario in which this would cause a problem in the field  
involves a "vacuum full" on a catalog concurrently with other activity, and  
even then it's far from easy to provoke.  Remarkably, this has been broken  
since 2002 (in commit 786340441706ac1957a031f11ad1c2e5b6e18314), but we had  
never seen a reproducible test case until recently.  If it did happen in  
the field, the symptoms would probably involve unexpected "cache lookup  
failed" errors to begin with, then "could not open file" failures after the  
next checkpoint, as all accesses to the affected catalog stopped working.  
Recovery would require manually removing the stale "pg_internal.init" file.  
  
To fix, get rid of the initFileRelationIds list, and instead consult  
syscache.c's list of relations used in catalog caches to decide whether a  
relation is included in the init file.  This should be a tad more efficient  
anyway, since we're replacing linear search of a list with ~100 entries  
with a binary search.  It's a bit ugly that the init file contents are now  
so directly tied to the catalog caches, but in practice that won't make  
much difference.  
  
Back-patch to all supported branches.  

M src/backend/utils/cache/inval.c
M src/backend/utils/cache/relcache.c
M src/backend/utils/cache/syscache.c
M src/include/utils/relcache.h
M src/include/utils/syscache.h
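
A standalone sketch of the replacement membership test: a binary search of a sorted OID array instead of a linear scan of the old initFileRelationIds list. The array contents and the exact function shape are illustrative stand-ins for the list syscache.c maintains:

    #include <stdbool.h>
    #include <stdio.h>
    #include <stdlib.h>

    typedef unsigned int Oid;

    /* Illustrative sorted list of OIDs of relations backing catalog
     * caches (pg_type, pg_attribute, pg_proc, pg_class, pg_namespace,
     * pg_statistic); the real list has on the order of 100 entries. */
    static const Oid SupportingRelOids[] = { 1247, 1249, 1255, 1259, 2615, 2619 };

    static int
    oid_compare(const void *a, const void *b)
    {
        Oid oa = *(const Oid *) a;
        Oid ob = *(const Oid *) b;
        return (oa > ob) - (oa < ob);
    }

    /* Does this relation's relcache entry belong in the init file? */
    static bool
    RelationSupportsSysCache(Oid relid)
    {
        return bsearch(&relid, SupportingRelOids,
                       sizeof(SupportingRelOids) / sizeof(SupportingRelOids[0]),
                       sizeof(Oid), oid_compare) != NULL;
    }

    int
    main(void)
    {
        printf("pg_class cached: %d\n", (int) RelationSupportsSysCache(1259));
        printf("user rel cached: %d\n", (int) RelationSupportsSysCache(16384));
        return 0;
    }

Because the array is built from the catalog-cache definitions rather than from whatever happens to be in the local relcache, the answer cannot be falsified by earlier invalidation events.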

Fix incorrect order of database-locking operations in InitPostgres().

commit   : 04358dab214c592b57d300e30e46a0b28178bd1a    
  
author   : Tom Lane <[email protected]>    
date     : Fri, 5 Jun 2015 13:22:27 -0400    
  
committer: Tom Lane <[email protected]>    
date     : Fri, 5 Jun 2015 13:22:27 -0400    

We should set MyProc->databaseId after acquiring the per-database lock,  
not beforehand.  The old way risked deadlock against processes trying to  
copy or delete the target database, since they would first acquire the lock  
and then wait for processes with matching databaseId to exit; that left a  
window wherein an incoming process could set its databaseId and then block  
on the lock, while the other process had the lock and waited in vain for  
the incoming process to exit.  
  
CountOtherDBBackends() would time out and fail after 5 seconds, so this  
just resulted in an unexpected failure, not a permanent lockup, but it's  
still annoying when it happens.  A real-world example: short-duration  
connections to a template database should not cause a concurrent CREATE  
DATABASE to fail.  
  
Doing it in the other order should be fine since the contract has always  
been that processes searching the ProcArray for a database ID must hold the  
relevant per-database lock while searching.  Thus, this actually removes  
the former race condition that required an assumption that storing to  
MyProc->databaseId is atomic.  
  
It's been like this for a long time, so back-patch to all active branches.  

M src/backend/utils/init/postinit.c
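
A toy model of the corrected ordering; struct PGPROC and LockSharedObject() are heavily simplified stand-ins for the real backend definitions (the real LockSharedObject takes a subobject ID and a lock mode as well):

    #include <stdio.h>

    typedef unsigned int Oid;

    /* Modeled slice of shared state: each backend advertises its
     * database in the ProcArray via MyProc->databaseId. */
    struct PGPROC { Oid databaseId; };
    static struct PGPROC MyProcData, *MyProc = &MyProcData;

    static void
    LockSharedObject(Oid classid, Oid objid)
    {
        printf("acquired per-database lock on OID %u\n", objid);
    }

    static void
    connect_to_database(Oid dboid)
    {
        /*
         * Order matters: take the per-database lock BEFORE advertising
         * MyProc->databaseId.  With the old order, a concurrent
         * CREATE/DROP DATABASE holding the lock could see this backend's
         * databaseId and wait for it to exit, while this backend blocked
         * on the same lock; CountOtherDBBackends() then timed out.
         */
        LockSharedObject(1262 /* pg_database */, dboid);
        MyProc->databaseId = dboid;
    }

    int
    main(void)
    {
        connect_to_database(16384);  /* illustrative database OID */
        return 0;
    }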