PostgreSQL 9.3.9 commit log

Stamp 9.3.9.

  
commit   : 553e576e05b50f9faffbd3dd721e44fc3746898d    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 9 Jun 2015 15:31:32 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 9 Jun 2015 15:31:32 -0400    


  
  

Release notes for 9.4.4, 9.3.9, 9.2.13, 9.1.18, 9.0.22.

  
commit   : d7705f759830dd4c48a7bf869f81d48e220a8658    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 9 Jun 2015 14:33:43 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 9 Jun 2015 14:33:43 -0400    


  
  

Report more information if pg_perm_setlocale() fails at startup.

  
commit   : e7da27ce025af681c1ab9d9f9a29e9ffb31472c3    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 9 Jun 2015 13:37:08 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 9 Jun 2015 13:37:08 -0400    


  
We don't know why a few Windows users have seen this fail, but the  
taciturnity of the error message certainly isn't helping debug it.  
Let's at least find out which LC category isn't working.  
  

Allow HotStandbyActiveInReplay() to be called in single user mode.

  
commit   : 82f81ba0852a3d732b39aae131a9fae419fee4a6    
  
author   : Andres Freund <andres@anarazel.de>    
date     : Mon, 8 Jun 2015 00:30:26 +0200    
  
committer: Andres Freund <andres@anarazel.de>    
date     : Mon, 8 Jun 2015 00:30:26 +0200    


  
HotStandbyActiveInReplay, introduced in 061b079f, only allowed WAL  
replay to happen in the startup process, missing the single user case.  
  
This buglet is fairly harmless, as it only causes problems when single-user  
mode in an assertion-enabled build is used to replay a btree vacuum  
record.  
  
Backpatch to 9.2. 061b079f was backpatched further, but the assertion  
was not.  
  

Use a safer method for determining whether relcache init file is stale.

  
commit   : 4f2458dd78d6da4a716a3d976644b3b2e627bc75    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sun, 7 Jun 2015 15:32:09 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sun, 7 Jun 2015 15:32:09 -0400    


  
When we invalidate the relcache entry for a system catalog or index, we  
must also delete the relcache "init file" if the init file contains a copy  
of that rel's entry.  The old way of doing this relied on a specially  
maintained list of the OIDs of relations present in the init file: we made  
the list either when reading the file in, or when writing the file out.  
The problem is that when writing the file out, we included only rels  
present in our local relcache, which might have already suffered some  
deletions due to relcache inval events.  In such cases we correctly decided  
not to overwrite the real init file with incomplete data --- but we still  
used the incomplete initFileRelationIds list for the rest of the current  
session.  This could result in wrong decisions about whether the session's  
own actions require deletion of the init file, potentially allowing an init  
file created by some other concurrent session to be left around even though  
it's been made stale.  
  
Since we don't support changing the schema of a system catalog at runtime,  
the only likely scenario in which this would cause a problem in the field  
involves a "vacuum full" on a catalog concurrently with other activity, and  
even then it's far from easy to provoke.  Remarkably, this has been broken  
since 2002 (in commit 786340441706ac1957a031f11ad1c2e5b6e18314), but we had  
never seen a reproducible test case until recently.  If it did happen in  
the field, the symptoms would probably involve unexpected "cache lookup  
failed" errors to begin with, then "could not open file" failures after the  
next checkpoint, as all accesses to the affected catalog stopped working.  
Recovery would require manually removing the stale "pg_internal.init" file.  
  
To fix, get rid of the initFileRelationIds list, and instead consult  
syscache.c's list of relations used in catalog caches to decide whether a  
relation is included in the init file.  This should be a tad more efficient  
anyway, since we're replacing linear search of a list with ~100 entries  
with a binary search.  It's a bit ugly that the init file contents are now  
so directly tied to the catalog caches, but in practice that won't make  
much difference.  
  
Back-patch to all supported branches.  
  

Fix incorrect order of database-locking operations in InitPostgres().

  
commit   : ac86eda633c680e2af4dd7276638fee2575b507d    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 5 Jun 2015 13:22:27 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 5 Jun 2015 13:22:27 -0400    


  
We should set MyProc->databaseId after acquiring the per-database lock,  
not beforehand.  The old way risked deadlock against processes trying to  
copy or delete the target database, since they would first acquire the lock  
and then wait for processes with matching databaseId to exit; that left a  
window wherein an incoming process could set its databaseId and then block  
on the lock, while the other process had the lock and waited in vain for  
the incoming process to exit.  
  
CountOtherDBBackends() would time out and fail after 5 seconds, so this  
just resulted in an unexpected failure, not a permanent lockup, but it's  
still annoying when it happens.  A real-world example of a use case is that  
short-duration connections to a template database should not cause CREATE  
DATABASE to fail.  
  
Doing it in the other order should be fine since the contract has always  
been that processes searching the ProcArray for a database ID must hold the  
relevant per-database lock while searching.  Thus, this actually removes  
the former race condition that required an assumption that storing to  
MyProc->databaseId is atomic.  
  
It's been like this for a long time, so back-patch to all active branches.  
  

Cope with possible failure of the oldest MultiXact to exist.

  
commit   : 2a9b01928f193f529b885ac577051c4fd00bd427    
  
author   : Robert Haas <rhaas@postgresql.org>    
date     : Fri, 5 Jun 2015 08:34:52 -0400    
  
committer: Robert Haas <rhaas@postgresql.org>    
date     : Fri, 5 Jun 2015 08:34:52 -0400    


  
Recent commits, mainly b69bf30b9bfacafc733a9ba77c9587cf54d06c0c and  
53bb309d2d5a9432d2602c93ed18e58bd2924e15, introduced mechanisms to  
protect against wraparound of the MultiXact member space: the number  
of multixacts that can exist at one time is limited to 2^32, but the  
total number of members in those multixacts is also limited to 2^32,  
and older code did not take care to enforce the second limit,  
potentially allowing old data to be overwritten while it was still  
needed.  
  
Unfortunately, these new mechanisms failed to account for the fact  
that the code paths in which they run might be executed during  
recovery or while the cluster was in an inconsistent state.  Also,  
they failed to account for the fact that users who used pg_upgrade  
to upgrade from a PostgreSQL version between 9.3.0 and 9.3.4 might have  
oldestMultiXid = 1 in the control file despite the true value being  
larger.  
  
To fix these problems, first, avoid unnecessarily examining the  
members of MultiXacts when the cluster is not known to be consistent.  
TruncateMultiXact has done this for a long time, and this patch does  
not fix that.  But the new calls used to prevent member wraparound  
are not needed until we reach normal running, so avoid calling them  
earlier.  (SetMultiXactIdLimit is actually called before InRecovery  
is set, so we can't rely on that; we invent our own multixact-specific  
flag instead.)  
  
Second, make failure to look up the members of a MultiXact a non-fatal  
error.  Instead, if we're unable to determine the member offset at  
which wraparound would occur, postpone arming the member wraparound  
defenses until we are able to do so.  If we're unable to determine the  
member offset that should force autovacuum, force it continuously  
until we are able to do so.  If we're unable to determine the member  
offset at which we should truncate the members SLRU, log a message and  
skip truncation.  
  
An important consequence of these changes is that anyone who does have  
a bogus oldestMultiXid = 1 value in pg_control will experience  
immediate emergency autovacuuming when upgrading to a release that  
contains this fix.  The release notes should highlight this fact.  If  
a user has no pg_multixact/offsets/0000 file, but has oldestMultiXid = 1  
in the control file, they may wish to vacuum any tables with  
relminmxid = 1 prior to upgrading in order to avoid an immediate  
emergency autovacuum after the upgrade.  This must be done with a  
PostgreSQL version 9.3.5 or newer and with vacuum_multixact_freeze_min_age  
and vacuum_multixact_freeze_table_age set to 0.  
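
The pre-upgrade vacuum described above might be sketched as follows (a
sketch only: the table name is illustrative, the catalog query assumes
ordinary tables, and the commands must be run on 9.3.5 or newer before
upgrading):

```sql
-- Find tables whose relminmxid is the possibly-bogus value 1:
SELECT relname FROM pg_class WHERE relminmxid = 1 AND relkind = 'r';

-- With the multixact freeze settings forced to zero, vacuum each such
-- table so its relminmxid is advanced past the bogus value:
SET vacuum_multixact_freeze_min_age = 0;
SET vacuum_multixact_freeze_table_age = 0;
VACUUM some_table;  -- "some_table" is illustrative; repeat per table found
```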
  
This patch also adds an additional log message at each database server  
startup, indicating either that protections against member wraparound  
have been engaged, or that they have not.  In the latter case, once  
autovacuum has advanced oldestMultiXid to a sane value, the message  
indicating that the guards have been engaged will appear at the next  
checkpoint.  A few additional messages have also been added at the DEBUG1  
level so that the correct operation of this code can be properly audited.  
  
Along the way, this patch fixes another, related bug in TruncateMultiXact  
that has existed since PostgreSQL 9.3.0: when no MultiXacts exist at  
all, the truncation code looks up NextMultiXactId, which doesn't exist  
yet.  This can lead to TruncateMultiXact removing every file in  
pg_multixact/offsets instead of keeping one around, as it should.  
This in turn will cause the database server to refuse to start  
afterwards.  
  
Patch by me.  Review by Álvaro Herrera, Andres Freund, Noah Misch, and  
Thomas Munro.  
  

pgindent run on access/transam/multixact.c

  
commit   : 746092a779b7d6fa88c64476839ffe6c7857f8a5    
  
author   : Alvaro Herrera <alvherre@alvh.no-ip.org>    
date     : Thu, 4 Jun 2015 15:20:28 -0300    
  
committer: Alvaro Herrera <alvherre@alvh.no-ip.org>    
date     : Thu, 4 Jun 2015 15:20:28 -0300    


  
This file has been patched over and over, and the differences to master  
caused by pgindent are annoying enough that it seems saner to make the  
older branches look the same.  
  
Backpatch to 9.3, which is as far back as backpatching of bugfixes is  
necessary.  
  

Fix some issues in pg_class.relminmxid and pg_database.datminmxid documentation.

  
commit   : f051c163c7d18f7d48e868e287efb31fe335f6dc    
  
author   : Fujii Masao <fujii@postgresql.org>    
date     : Thu, 4 Jun 2015 13:22:49 +0900    
  
committer: Fujii Masao <fujii@postgresql.org>    
date     : Thu, 4 Jun 2015 13:22:49 +0900    


  
- Correct the name of the directory that those catalog columns allow to be shrunk.  
- Correct the name of the symbol that is used as the value of pg_class.relminmxid  
  when the relation is not a table.  
- Fix "ID ID" typo.  
  
Backpatch to 9.3, where those catalog columns were introduced.  
  

Fix planner's cost estimation for SEMI/ANTI joins with inner indexscans.

  
commit   : d3fdec6aeeb64aab45f065e05e70abdc535ba4af    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Wed, 3 Jun 2015 11:58:47 -0400    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Wed, 3 Jun 2015 11:58:47 -0400    


  
When the inner side of a nestloop SEMI or ANTI join is an indexscan that  
uses all the join clauses as indexquals, it can be presumed that both  
matched and unmatched outer rows will be processed very quickly: for  
matched rows, we'll stop after fetching one row from the indexscan, while  
for unmatched rows we'll have an indexscan that finds no matching index  
entries, which should also be quick.  The planner already knew about this,  
but it was nonetheless charging for at least one full run of the inner  
indexscan, as a consequence of concerns about the behavior of materialized  
inner scans --- but those concerns don't apply in the fast case.  If the  
inner side has low cardinality (many matching rows) this could make an  
indexscan plan look far more expensive than it actually is.  To fix,  
rearrange the work in initial_cost_nestloop/final_cost_nestloop so that we  
don't add the inner scan cost until we've inspected the indexquals, and  
then we can add either the full-run cost or just the first tuple's cost as  
appropriate.  
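
For illustration, the fast case described above arises for query shapes
like these (table and column names are hypothetical; correlated EXISTS and
NOT EXISTS subqueries of this form are planned as SEMI and ANTI joins,
and an index on inner_tab(id) lets all join clauses be used as indexquals):

```sql
-- SEMI join: EXISTS stops after the first match, so the inner indexscan
-- need only fetch one row for each matched outer row.
SELECT o.* FROM outer_tab o
WHERE EXISTS (SELECT 1 FROM inner_tab i WHERE i.id = o.id);

-- ANTI join: for unmatched outer rows the indexscan finds no entries at
-- all, which is also quick.
SELECT o.* FROM outer_tab o
WHERE NOT EXISTS (SELECT 1 FROM inner_tab i WHERE i.id = o.id);
```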
  
Experimentation with this fix uncovered another problem: add_path and  
friends were coded to disregard cheap startup cost when considering  
parameterized paths.  That's usually okay (and desirable, because it thins  
the path herd faster); but in this fast case for SEMI/ANTI joins, it could  
result in throwing away the desired plain indexscan path in favor of a  
bitmap scan path before we ever get to the join costing logic.  In the  
many-matching-rows cases of interest here, a bitmap scan will do a lot more  
work than required, so this is a problem.  To fix, add a per-relation flag  
consider_param_startup that works like the existing consider_startup flag,  
but applies to parameterized paths, and set it for relations that are the  
inside of a SEMI or ANTI join.  
  
To make this patch reasonably safe to back-patch, care has been taken to  
avoid changing the planner's behavior except in the very narrow case of  
SEMI/ANTI joins with inner indexscans.  There are places in  
compare_path_costs_fuzzily and add_path_precheck that are not terribly  
consistent with the new approach, but changing them will affect planner  
decisions at the margins in other cases, so we'll leave that for a  
HEAD-only fix.  
  
Back-patch to 9.3; before that, the consider_startup flag didn't exist,  
meaning that the second aspect of the patch would be too invasive.  
  
Per a complaint from Peter Holzer and analysis by Tomas Vondra.