PostgreSQL 9.0.7 commit log

Stamp 9.0.7.

  
commit   : f054f631a087fed80e7d570e89bed395859f2dc3    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 23 Feb 2012 17:56:26 -0500    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 23 Feb 2012 17:56:26 -0500    


  
  

Last-minute release note updates.

  
commit   : 09189cb6059f28aea59cca4f419c535123f499b8    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 23 Feb 2012 17:48:05 -0500    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 23 Feb 2012 17:48:05 -0500    


  
Security: CVE-2012-0866, CVE-2012-0867, CVE-2012-0868  
  

Convert newlines to spaces in names written in pg_dump comments.

  
commit   : 02f013ee0228337626071d71abaf2dcb143614a4    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 23 Feb 2012 15:53:24 -0500    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 23 Feb 2012 15:53:24 -0500    


  
pg_dump was incautious about sanitizing object names that are emitted  
within SQL comments in its output script.  A name containing a newline  
would at least render the script syntactically incorrect.  Maliciously  
crafted object names could present a SQL injection risk when the script  
is reloaded.  
  
Reported by Heikki Linnakangas, patch by Robert Haas  
  
Security: CVE-2012-0868  
  

Remove arbitrary limitation on length of common name in SSL certificates.

  
commit   : 850d341ff72b2be53ecea7e05a0bdf9a88ade154    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 23 Feb 2012 15:48:14 -0500    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 23 Feb 2012 15:48:14 -0500    


  
Both libpq and the backend would truncate a common name extracted from a  
certificate at 32 bytes.  Replace that fixed-size buffer with dynamically  
allocated string so that there is no hard limit.  While at it, remove the  
code for extracting peer_dn, which we weren't using for anything; and  
don't bother to store peer_cn longer than we need it in libpq.  
  
This limit was not so terribly unreasonable when the code was written,  
because we weren't using the result for anything critical, just logging it.  
But now that there are options for checking the common name against the  
server host name (in libpq) or using it as the user's name (in the server),  
this could result in undesirable failures.  In the worst case it even seems  
possible to spoof a server name or user name, if the correct name is  
exactly 32 bytes and the attacker can persuade a trusted CA to issue a  
certificate in which that string is a prefix of the certificate's common  
name.  (To exploit this for a server name, he'd also have to send the  
connection astray via phony DNS data or some such.)  The case that this is  
a realistic security threat is a bit thin, but nonetheless we'll treat it  
as one.  
  
Back-patch to 8.4.  Older releases contain the faulty code, but it's not  
a security problem because the common name wasn't used for anything  
interesting.  
  
Reported and patched by Heikki Linnakangas  
  
Security: CVE-2012-0867  
  

Require execute permission on the trigger function for CREATE TRIGGER.

  
commit   : de323d534c8989bc713c1ac5313024cb6d7a4277    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 23 Feb 2012 15:39:07 -0500    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 23 Feb 2012 15:39:07 -0500    


  
This check was overlooked when we added function execute permissions to the  
system years ago.  For an ordinary trigger function it's not a big deal,  
since trigger functions execute with the permissions of the table owner,  
so they couldn't do anything the user issuing the CREATE TRIGGER couldn't  
have done anyway.  However, if a trigger function is SECURITY DEFINER,  
that is not the case.  The lack of checking would allow another user to  
install it on his own table and then invoke it with, essentially, forged  
input data; which the trigger function is unlikely to realize, so it might  
do something undesirable, for instance insert false entries in an audit log  
table.  
  
Reported by Dinesh Kumar, patch by Robert Haas  
  
Security: CVE-2012-0866  
  

Translation updates

  
commit   : 144fcf754fc2615d1a4643adfce41b89ccf6ba68    
  
author   : Peter Eisentraut <peter_e@gmx.net>    
date     : Thu, 23 Feb 2012 20:36:36 +0200    
  
committer: Peter Eisentraut <peter_e@gmx.net>    
date     : Thu, 23 Feb 2012 20:36:36 +0200    


  
  

Remove inappropriate quotes

  
commit   : 31f26140b3be4b9c59d6095012602dbf42b3a505    
  
author   : Peter Eisentraut <peter_e@gmx.net>    
date     : Thu, 23 Feb 2012 12:51:33 +0200    
  
committer: Peter Eisentraut <peter_e@gmx.net>    
date     : Thu, 23 Feb 2012 12:51:33 +0200    


  
And adjust wording for consistency.  
  

Draft release notes for 9.1.3, 9.0.7, 8.4.11, 8.3.18.

  
commit   : c2d11d2d3ed287f8281e6d735bf5dfd202142d41    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Wed, 22 Feb 2012 18:11:56 -0500    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Wed, 22 Feb 2012 18:11:56 -0500    


  
  

REASSIGN OWNED: Support foreign data wrappers and servers

  
commit   : 140766dff659cb9429fe9a5b8341b757d6c4c169    
  
author   : Alvaro Herrera <alvherre@alvh.no-ip.org>    
date     : Tue, 21 Feb 2012 17:58:02 -0300    
  
committer: Alvaro Herrera <alvherre@alvh.no-ip.org>    
date     : Tue, 21 Feb 2012 17:58:02 -0300    


  
This was overlooked when implementing those kinds of objects, in commit  
cae565e503c42a0942ca1771665243b4453c5770.  
  
Per report from Pawel Casperek.  
  

Correctly initialise shared recoveryLastRecPtr in recovery.

  
commit   : 315cb2f9672e93f6b34726ce81c7e75a374aedd2    
  
author   : Simon Riggs <simon@2ndQuadrant.com>    
date     : Wed, 22 Feb 2012 13:55:04 +0000    
  
committer: Simon Riggs <simon@2ndQuadrant.com>    
date     : Wed, 22 Feb 2012 13:55:04 +0000    

  
Previously we used ReadRecPtr rather than EndRecPtr, which was not a serious error but caused pg_stat_replication to report incorrect replay_location until at least one WAL record is replayed.  
  
Fujii Masao  
  

Don't clear btpo_cycleid during _bt_vacuum_one_page.

  
commit   : e0eb63238a8ba66e685ac2308d1f597e15cb4216    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 21 Feb 2012 15:03:50 -0500    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 21 Feb 2012 15:03:50 -0500    


  
When "vacuuming" a single btree page by removing LP_DEAD tuples, we are not  
actually within a vacuum operation, but rather in an ordinary insertion  
process that could well be running concurrently with a vacuum.  So clearing  
the cycleid is incorrect, and could cause the concurrent vacuum to miss  
removing tuples that it needs to remove.  This is a longstanding bug  
introduced by commit e6284649b9e30372b3990107a082bc7520325676 of  
2006-07-25.  I believe it explains Maxim Boguk's recent report of index  
corruption, and probably some other previously unexplained reports.  
  
In 9.0 and up this is a one-line fix; before that we need to introduce a  
flag to tell _bt_delitems what to do.  
  

Avoid double close of file handle in syslogger on win32

  
commit   : d1ed3363f6077a0f616782ccb26c9571e7f3cae1    
  
author   : Magnus Hagander <magnus@hagander.net>    
date     : Tue, 21 Feb 2012 17:12:25 +0100    
  
committer: Magnus Hagander <magnus@hagander.net>    
date     : Tue, 21 Feb 2012 17:12:25 +0100    


  
This causes an exception when running under a debugger or in particular  
when running on a debug version of Windows.  
  
Patch from MauMau  
  

Don't reject threaded Python on FreeBSD.

  
commit   : 29f65a844bfcfd1eb22bb21ccdb5b8ace092a97c    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 20 Feb 2012 16:21:41 -0500    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 20 Feb 2012 16:21:41 -0500    


  
According to Chris Rees, this has worked for awhile, and the current  
FreeBSD port is removing the test anyway.  
  

Fix regex back-references that are directly quantified with *.

  
commit   : 1fee1bf042d47de0a8c4f52c9196a833be468f9e    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 20 Feb 2012 00:52:49 -0500    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 20 Feb 2012 00:52:49 -0500    


  
The syntax "\n*", that is a backref with a * quantifier directly applied  
to it, has never worked correctly in Spencer's library.  This has been an  
open bug in the Tcl bug tracker since 2005:  
https://sourceforge.net/tracker/index.php?func=detail&aid=1115587&group_id=10894&atid=110894  
  
The core of the problem is in parseqatom(), which first changes "\n*" to  
"\n+|" and then applies repeat() to the NFA representing the backref atom.  
repeat() thinks that any arc leading into its "rp" argument is part of the  
sub-NFA to be repeated.  Unfortunately, since parseqatom() already created  
the arc that was intended to represent the empty bypass around "\n+", this  
arc gets moved too, so that it now leads into the state loop created by  
repeat().  Thus, what was supposed to be an "empty" bypass gets turned into  
something that represents zero or more repetitions of the NFA representing  
the backref atom.  In the original example, in place of  
	^([bc])\1*$  
we now have something that acts like  
	^([bc])(\1+|[bc]*)$  
At runtime, the branch involving the actual backref fails, as it's supposed  
to, but then the other branch succeeds anyway.  
  
We could no doubt fix this by some rearrangement of the operations in  
parseqatom(), but that code is plenty ugly already, and what's more the  
whole business of converting "x*" to "x+|" probably needs to go away to fix  
another problem I'll mention in a moment.  Instead, this patch suppresses  
the *-conversion when the target is a simple backref atom, leaving the case  
of m == 0 to be handled at runtime.  This makes the patch in regcomp.c a  
one-liner, at the cost of having to tweak cbrdissect() a little.  In the  
event I went a bit further than that and rewrote cbrdissect() to check all  
the string-length-related conditions before it starts comparing characters.  
It seems a bit stupid to possibly iterate through many copies of an  
n-character backreference, only to fail at the end because the target  
string's length isn't a multiple of n --- we could have found that out  
before starting.  The existing coding could only be a win if integer  
division is hugely expensive compared to character comparison, but I don't  
know of any modern machine where that might be true.  
  
This does not fix all the problems with quantified back-references.  In  
particular, the code is still broken for back-references that appear within  
a larger expression that is quantified (so that direct insertion of the  
quantification limits into the BACKREF node doesn't apply).  I think fixing  
that will take some major surgery on the NFA code, specifically introducing  
an explicit iteration node type instead of trying to transform iteration  
into concatenation of modified regexps.  
  
Back-patch to all supported branches.  In HEAD, also add a regression test  
case for this.  (It may seem a bit silly to create a regression test file  
for just one test case; but I'm expecting that we will soon import a whole  
bunch of regex regression tests from Tcl, so might as well create the  
infrastructure now.)  
  

Fix longstanding error in contrib/intarray's int[] & int[] operator.

  
commit   : f559846a68df902715d05579c800f836b8f7226b    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 16 Feb 2012 20:00:23 -0500    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 16 Feb 2012 20:00:23 -0500    


  
The array intersection code would give wrong results if the first entry of  
the correct output array would be "1".  (I think only this value could be  
at risk, since the previous word would always be a lower-bound entry with  
that fixed value.)  
  
Problem spotted by Julien Rouhaud, initial patch by Guillaume Lelarge,  
cosmetic improvements by me.  
  

Do not use the variable name when defining a varchar structure in ecpg.

  
commit   : ebc37d6924df785f3601f135c9c900fd7ec465c7    
  
author   : Michael Meskes <meskes@postgresql.org>    
date     : Mon, 13 Feb 2012 13:19:57 +0100    
  
committer: Michael Meskes <meskes@postgresql.org>    
date     : Mon, 13 Feb 2012 13:19:57 +0100    


  
With a unique counter being added anyway, there is no need anymore to have the variable name listed, too.  
  

Fix auto-explain JSON output to be valid JSON.

  
commit   : 0f3fcbbb613467fd78744da17d5228c813c5958b    
  
author   : Andrew Dunstan <andrew@dunslane.net>    
date     : Mon, 13 Feb 2012 08:23:13 -0500    
  
committer: Andrew Dunstan <andrew@dunslane.net>    
date     : Mon, 13 Feb 2012 08:23:13 -0500    


  
Problem reported by Peter Eisentraut.  
  
Backpatched to release 9.0.  

Fix I/O-conversion-related memory leaks in plpgsql.
  

  
commit   : 03c66ca5dfd4e0e8fec506315f581b21817f47f4    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sat, 11 Feb 2012 18:06:35 -0500    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sat, 11 Feb 2012 18:06:35 -0500    


  
Datatype I/O functions are allowed to leak memory in CurrentMemoryContext,  
since they are generally called in short-lived contexts.  However, plpgsql  
calls such functions for purposes of type conversion, and was calling them  
in its procedure context.  Therefore, any leaked memory would not be  
recovered until the end of the plpgsql function.  If such a conversion  
was done within a loop, quite a bit of memory could get consumed.  Fix by  
calling such functions in the transient "eval_econtext", and adjust other  
logic to match.  Back-patch to all supported versions.  
  
Andres Freund, Jan UrbaƄski, Tom Lane  
  

Fix brain fade in previous pg_dump patch.

  
commit   : 54660723715df39ba2c1b71ee0c44012b3d63acc    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 10 Feb 2012 14:09:31 -0500    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 10 Feb 2012 14:09:31 -0500    


  
In pre-7.3 databases, pg_attribute.attislocal doesn't exist.  The easiest  
way to make sure the new inheritance logic behaves sanely is to assume it's  
TRUE, not FALSE.  This will result in printing child columns even when  
they're not really needed.  We could work harder at trying to reconstruct a  
value for attislocal, but there is little evidence that anyone still cares  
about dumping from such old versions, so just do the minimum necessary to  
have a valid dump.  
  
I had this correct in the original draft of the patch, but for some  
unaccountable reason decided it wasn't necessary to change the value.  
Testing against an old server shows otherwise...  
  

Fix pg_dump for better handling of inherited columns.

  
commit   : ab060c74ba0f0767237e7a3bc6022a84e177abab    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 10 Feb 2012 13:28:17 -0500    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 10 Feb 2012 13:28:17 -0500    


  
Revise pg_dump's handling of inherited columns, which was last looked at  
seriously in 2001, to eliminate several misbehaviors associated with  
inherited default expressions and NOT NULL flags.  In particular make sure  
that a column is printed in a child table's CREATE TABLE command if and  
only if it has attislocal = true; the former behavior would sometimes cause  
a column to become marked attislocal when it was not so marked in the  
source database.  Also, stop relying on textual comparison of default  
expressions to decide if they're inherited; instead, don't use  
default-expression inheritance at all, but just install the default  
explicitly at each level of the hierarchy.  This fixes the  
search-path-related misbehavior recently exhibited by Chester Young, and  
also removes some dubious assumptions about the order in which ALTER TABLE  
SET DEFAULT commands would be executed.  
  
Back-patch to all supported branches.  
  

Fix postmaster to attempt restart after a hot-standby crash.

  
commit   : cc0f6fff1a0dd44d5481836154ac3639be0c056f    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 6 Feb 2012 15:29:26 -0500    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 6 Feb 2012 15:29:26 -0500    


  
The postmaster was coded to treat any unexpected exit of the startup  
process (i.e., the WAL replay process) as a catastrophic crash, and not try  
to restart it. This was OK so long as the startup process could not have  
any sibling postmaster children.  However, if a hot-standby backend  
crashes, we SIGQUIT the startup process along with everything else, and the  
resulting exit is hardly "unexpected".  Treating it as such meant we failed  
to restart a standby server after any child crash at all, not only a crash  
of the WAL replay process as intended.  Adjust that.  Back-patch to 9.0  
where hot standby was introduced.  
  

Avoid throwing ERROR during WAL replay of DROP TABLESPACE.

  
commit   : 07e36415e5205b67dca37e56857bd3e68b1553f7    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 6 Feb 2012 14:44:10 -0500    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 6 Feb 2012 14:44:10 -0500    


  
Although we will not even issue an XLOG_TBLSPC_DROP WAL record unless  
removal of the tablespace's directories succeeds, that does not guarantee  
that the same operation will succeed during WAL replay.  Foreseeable  
reasons for it to fail include temp files created in the tablespace by Hot  
Standby backends, wrong directory permissions on a standby server, etc etc.  
The original coding threw ERROR if replay failed to remove the directories,  
but that is a serious overreaction.  Throwing an error aborts recovery,  
and worse means that manual intervention will be needed to get the database  
to start again, since otherwise the same error will recur on subsequent  
attempts to replay the same WAL record.  And the consequence of failing to  
remove the directories is only that some probably-small amount of disk  
space is wasted, so it hardly seems justified to throw an error.  
Accordingly, arrange to report such failures as LOG messages and keep going  
when a failure occurs during replay.  
  
Back-patch to 9.0 where Hot Standby was introduced.  In principle such  
problems can occur in earlier releases, but Hot Standby increases the odds  
of trouble significantly.  Given the lack of field reports of such issues,  
I'm satisfied with patching back as far as the patch applies easily.  
  

Avoid problems with OID wraparound during WAL replay.

  
commit   : 852a5cded3726b7d33c2e65b87ae775772d03bc5    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 6 Feb 2012 13:14:52 -0500    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 6 Feb 2012 13:14:52 -0500    


  
Fix a longstanding thinko in replay of NEXTOID and checkpoint records: we  
tried to advance nextOid only if it was behind the value in the WAL record,  
but the comparison would draw the wrong conclusion if OID wraparound had  
occurred since the previous value.  Better to just unconditionally assign  
the new value, since OID assignment shouldn't be happening during replay  
anyway.  
  
The consequences of a failure to update nextOid would be pretty minimal,  
since we have long had the code set up to obtain another OID and try again  
if the generated value is already in use.  But in the worst case there  
could be significant performance glitches while such loops iterate through  
many already-used OIDs before finding a free one.  
  
The odds of a wraparound happening during WAL replay would be small in a  
crash-recovery scenario, and the length of any ensuing OID-assignment stall  
quite limited anyway.  But neither of these statements hold true for a  
replication slave that follows a WAL stream for a long period; its behavior  
upon going live could be almost unboundedly bad.  Hence it seems worth  
back-patching this fix into all supported branches.  
  
Already fixed in HEAD in commit c6d76d7c82ebebb7210029f7382c0ebe2c558bca.  
  

fe-misc.c depends on pg_config_paths.h

  
commit   : 94c5aa639e7ee75eb271c320f4e592ef5b10b3b1    
  
author   : Alvaro Herrera <alvherre@alvh.no-ip.org>    
date     : Mon, 6 Feb 2012 11:53:22 -0300    
  
committer: Alvaro Herrera <alvherre@alvh.no-ip.org>    
date     : Mon, 6 Feb 2012 11:53:22 -0300    


  
Declare this in Makefile to avoid failures in parallel compiles.  
  
Author: Lionel Elie Mamane  
  

Fix transient clobbering of shared buffers during WAL replay.

  
commit   : 2b196f01efd8ef4f5178d52d1e27cd79747e9568    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sun, 5 Feb 2012 15:49:17 -0500    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sun, 5 Feb 2012 15:49:17 -0500    


  
RestoreBkpBlocks was in the habit of zeroing and refilling the target  
buffer; which was perfectly safe when the code was written, but is unsafe  
during Hot Standby operation.  The reason is that we have coding rules  
that allow backends to continue accessing a tuple in a heap relation while  
holding only a pin on its buffer.  Such a backend could see transiently  
zeroed data, if WAL replay had occasion to change other data on the page.  
This has been shown to be the cause of bug #6425 from Duncan Rance (who  
deserves kudos for developing a sufficiently-reproducible test case) as  
well as Bridget Frey's re-report of bug #6200.  It most likely explains the  
original report as well, though we don't yet have confirmation of that.  
  
To fix, change the code so that only bytes that are supposed to change will  
change, even transiently.  This actually saves cycles in RestoreBkpBlocks,  
since it's not writing the same bytes twice.  
  
Also fix seq_redo, which has the same disease, though it has to work a bit  
harder to meet the requirement.  
  
So far as I can tell, no other WAL replay routines have this type of bug.  
In particular, the index-related replay routines, which would certainly be  
broken if they had to meet the same standard, are not at risk because we  
do not have coding rules that allow access to an index page when not  
holding a buffer lock on it.  
  
Back-patch to 9.0 where Hot Standby was added.  
  

Resolve timing issue with logging locks for Hot Standby.

  
commit   : a286b6f6c7f6b327fad2a7081d7df88a4c83ce11    
  
author   : Simon Riggs <simon@2ndQuadrant.com>    
date     : Wed, 1 Feb 2012 09:33:16 +0000    
  
committer: Simon Riggs <simon@2ndQuadrant.com>    
date     : Wed, 1 Feb 2012 09:33:16 +0000    

  
We log AccessExclusiveLocks for replay onto standby nodes, but because of timing issues on ProcArray it is possible to log a lock that is still held by a just committed transaction that is very soon to be removed. To avoid any timing issue we avoid applying locks made by transactions with InvalidXid.  
  
Simon Riggs, bug report Tom Lane, diagnosis Pavan Deolasee  
  

Accept a non-existent value in "ALTER USER/DATABASE SET ..." command.

  
commit   : 8bff6407ba142423e2498d638e7a49ed32e89553    
  
author   : Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Mon, 30 Jan 2012 10:32:46 +0200    
  
committer: Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Mon, 30 Jan 2012 10:32:46 +0200    


  
When default_text_search_config, default_tablespace, or temp_tablespaces  
setting is set per-user or per-database, with an "ALTER USER/DATABASE SET  
..." statement, don't throw an error if the text search configuration or  
tablespace does not exist. In case of text search configuration, even if  
it doesn't exist in the current database, it might exist in another  
database, where the setting is intended to have its effect. This behavior  
is now the same as search_path's.  
  
Tablespaces are cluster-wide, so the same argument doesn't hold for  
tablespaces, but there's a problem with pg_dumpall: it dumps "ALTER USER  
SET ..." statements before the "CREATE TABLESPACE" statements. Arguably  
that's pg_dumpall's fault - it should dump the statements in such an order  
that the tablespace is created first and then the "ALTER USER SET  
default_tablespace ..." statements after that - but it seems better to be  
consistent with search_path and default_text_search_config anyway. Besides,  
you could still create a dump that throws an error, by creating the  
tablespace, running "ALTER USER SET default_tablespace", then dropping the  
tablespace and running pg_dumpall on that.  
  
Backpatch to all supported versions.  
  

Fix error detection in contrib/pgcrypto's encrypt_iv() and decrypt_iv().

  
commit   : a752952d2600c65d06b470f7de43dd068a11f673    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 27 Jan 2012 23:09:16 -0500    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 27 Jan 2012 23:09:16 -0500    


  
Due to oversights, the encrypt_iv() and decrypt_iv() functions failed to  
report certain types of invalid-input errors, and would instead return  
random garbage values.  
  
Marko Kreen, per report from Stefan Kaltenbrunner  
  

Fix wording, per Peter Geoghegan

  
commit   : 2f66c1a2ff8918086099cdfb3b9d9759c8658382    
  
author   : Magnus Hagander <magnus@hagander.net>    
date     : Fri, 27 Jan 2012 10:36:27 +0100    
  
committer: Magnus Hagander <magnus@hagander.net>    
date     : Fri, 27 Jan 2012 10:36:27 +0100    


  
  

Fix CLUSTER/VACUUM FULL for toast values owned by recently-updated rows.

  
commit   : e5f97c5f81874695f9436fe980f7aa51b637bd54    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 12 Jan 2012 16:40:24 -0500    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Thu, 12 Jan 2012 16:40:24 -0500    


  
In commit 7b0d0e9356963d5c3e4d329a917f5fbb82a2ef05, I made CLUSTER and  
VACUUM FULL try to preserve toast value OIDs from the original toast table  
to the new one.  However, if we have to copy both live and recently-dead  
versions of a row that has a toasted column, those versions may well  
reference the same toast value with the same OID.  The patch then led to  
duplicate-key failures as we tried to insert the toast value twice with the  
same OID.  (The previous behavior was not very desirable either, since it  
would have silently inserted the same value twice with different OIDs.  
That wastes space, but what's worse is that the toast values inserted for  
already-dead heap rows would not be reclaimed by subsequent ordinary  
VACUUMs, since they go into the new toast table marked live not deleted.)  
  
To fix, check if the copied OID already exists in the new toast table, and  
if so, assume that it stores the desired value.  This is reasonably safe  
since the only case where we will copy an OID from a previous toast pointer  
is when toast_insert_or_update was given that toast pointer and so we just  
pulled the data from the old table; if we got two different values that way  
then we have big problems anyway.  We do have to assume that no other  
backend is inserting items into the new toast table concurrently, but  
that's surely safe for CLUSTER and VACUUM FULL.  
  
Per bug #6393 from Maxim Boguk.  Back-patch to 9.0, same as the previous  
patch.  
  

Fix one-byte buffer overrun in contrib/test_parser.

  
commit   : e3fce282b5d507b8105c26543b079bc279da4000    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 9 Jan 2012 19:56:27 -0500    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Mon, 9 Jan 2012 19:56:27 -0500    


  
The original coding examined the next character before verifying that  
there *is* a next character.  In the worst case with the input buffer  
right up against the end of memory, this would result in a segfault.  
  
Problem spotted by Paul Guyot; this commit extends his patch to fix an  
additional case.  In addition, make the code a tad more readable by not  
overloading the usage of *tlen.  
  

Use __sync_lock_test_and_set() for spinlocks on ARM, if available.

  
commit   : bb65cb8cdf864e61bc939d3c4b28bbd43d926700    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sat, 7 Jan 2012 15:39:05 -0500    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Sat, 7 Jan 2012 15:39:05 -0500    


  
Historically we've used the SWPB instruction for TAS() on ARM, but this  
is deprecated and not available on ARMv6 and later.  Instead, make use  
of a GCC builtin if available.  We'll still fall back to SWPB if not,  
so as not to break existing ports using older GCC versions.  
  
Eventually we might want to try using __sync_lock_test_and_set() on some  
other architectures too, but for now that seems to present only risk and  
not reward.  
  
Back-patch to all supported versions, since people might want to use any  
of them on more recent ARM chips.  
  
Martin Pitt  
  

Fix pg_restore's direct-to-database mode for INSERT-style table data.

  
commit   : 1f996adab3ec30c12b5ffaa418045e4b2c93d818    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 6 Jan 2012 13:04:24 -0500    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 6 Jan 2012 13:04:24 -0500    


  
In commit 6545a901aaf84cb05212bb6a7674059908f527c3, I removed the mini SQL  
lexer that was in pg_backup_db.c, thinking that it had no real purpose  
beyond separating COPY data from SQL commands, which purpose had been  
obsoleted by long-ago fixes in pg_dump's archive file format.  
Unfortunately this was in error: that code was also used to identify  
command boundaries in INSERT-style table data, which is run together as a  
single string in the archive file for better compressibility.  As a result,  
direct-to-database restores from archive files made with --inserts or  
--column-inserts fail in our latest releases, as reported by Dick Visser.  
  
To fix, restore the mini SQL lexer, but simplify it by adjusting the  
calling logic so that it's only required to cope with INSERT-style table  
data, not arbitrary SQL commands.  This allows us to not have to deal with  
SQL comments, E'' strings, or dollar-quoted strings, none of which have  
ever been emitted by dumpTableData_insert.  
  
Also, fix the lexer to cope with standard-conforming strings, which was the  
actual bug that the previous patch was meant to solve.  
  
Back-patch to all supported branches.  The previous patch went back to 8.2,  
which unfortunately means that the EOL release of 8.2 contains this bug,  
but I don't think we're doing another 8.2 release just because of that.  
  

Make executor's SELECT INTO code save and restore original tuple receiver.

  
commit   : c024a3b3be1c86459f9b47f81f61cb8a67ee2712    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Wed, 4 Jan 2012 18:31:08 -0500    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Wed, 4 Jan 2012 18:31:08 -0500    


  
As previously coded, the QueryDesc's dest pointer was left dangling  
(pointing at an already-freed receiver object) after ExecutorEnd.  It's a  
bit astonishing that it took us this long to notice, and I'm not sure that  
the known problem case with SQL functions is the only one.  Fix it by  
saving and restoring the original receiver pointer, which seems the most  
bulletproof way of ensuring any related bugs are also covered.  
  
Per bug #6379 from Paul Ramsey.  Back-patch to 8.4 where the current  
handling of SELECT INTO was introduced.  
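The save-and-restore pattern the fix adopts can be sketched like this; the struct and function names below are illustrative stand-ins, not the executor's real definitions:

```c
#include <stdlib.h>

/* Hypothetical stand-ins for the executor's QueryDesc and DestReceiver */
typedef struct DestReceiver { int kind; } DestReceiver;
typedef struct QueryDesc { DestReceiver *dest; } QueryDesc;

/* Sketch of the fixed pattern: swap in a private receiver for the
 * duration of execution, then restore the caller's original pointer
 * before the private receiver is freed, so the caller never sees a
 * dangling qd->dest. */
static void run_with_private_receiver(QueryDesc *qd)
{
    DestReceiver *saved = qd->dest;              /* save original */
    DestReceiver *mine = malloc(sizeof *mine);

    mine->kind = 1;
    qd->dest = mine;                             /* use private receiver */
    /* ... tuples would be routed to qd->dest here ... */
    qd->dest = saved;                            /* restore before freeing */
    free(mine);                                  /* qd->dest does not dangle */
}
```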
  

Update per-column ACLs, not only per-table ACL, when changing table owner.

  
commit   : 7443ab2b348d190e8784a2684a5b6ae91f7dcd4b    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Wed, 21 Dec 2011 18:23:24 -0500    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Wed, 21 Dec 2011 18:23:24 -0500    

Click here for diff

  
We forgot to modify column ACLs, so privileges were still shown as having  
been granted by the old owner.  This meant that neither the new owner nor  
a superuser could revoke the now-untraceable-to-table-owner permissions.  
Per bug #6350 from Marc Balmer.  
  
This has been wrong since column ACLs were added, so back-patch to 8.4.  
  

Avoid crashing when we have problems unlinking files post-commit.

  
commit   : 61dd2ffaff1be5768151e72aca030d7755255b26    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 20 Dec 2011 15:00:47 -0500    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Tue, 20 Dec 2011 15:00:47 -0500    

Click here for diff

  
smgrdounlink takes care to not throw an ERROR if it fails to unlink  
something, but that caution was rendered useless by commit  
3396000684b41e7e9467d1abc67152b39e697035, which put an smgrexists call in  
front of it; smgrexists *does* throw error if anything looks funny, such  
as getting a permissions error from trying to open the file.  If that  
happens post-commit, you get a PANIC, and what's worse the same logic  
appears in the WAL replay code, so the database even fails to restart.  
  
Restore the intended behavior by removing the smgrexists call --- it isn't  
accomplishing anything that we can't do better by adjusting mdunlink's  
ideas of whether it ought to warn about ENOENT or not.  
  
Per report from Joseph Shraibman of unrecoverable crash after trying to  
drop a table whose FSM fork had somehow gotten chmod'd to 000 permissions.  
Backpatch to 8.4, where the bogus coding was introduced.  
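The intended post-commit behavior can be sketched as below (the helper name is assumed, not the real smgr/md code): failures to remove a file are reported as warnings rather than raised as errors, since throwing in this phase would escalate to a PANIC.

```c
#include <errno.h>
#include <stdio.h>
#include <unistd.h>

/* Hypothetical sketch: remove a relation file without ever throwing.
 * Post-commit (or during WAL replay) there is no way to back out, so
 * any failure -- even a permissions problem -- is only warned about. */
static int unlink_relation_file(const char *path)
{
    if (unlink(path) < 0)
    {
        fprintf(stderr, "warning: could not remove \"%s\"\n", path);
        return -1;
    }
    return 0;
}
```

Adjusting whether ENOENT deserves a warning at all (e.g. staying silent when the main fork is already gone) can then be done inside this one function, instead of probing existence up front with a call that can itself throw.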
  

In ecpg, removed an old leftover check for the given connection name.

  
commit   : 458a83a526967dcc1ddbfc5edd5d48ae7db7a2a3    
  
author   : Michael Meskes <meskes@postgresql.org>    
date     : Sun, 18 Dec 2011 15:34:33 +0100    
  
committer: Michael Meskes <meskes@postgresql.org>    
date     : Sun, 18 Dec 2011 15:34:33 +0100    

Click here for diff

  
Ever since we introduced real prepared statements, this has worked across  
different connections. The old implementation, which merely emulated  
prepared statements, could not handle this.  
  
Closes: #6309  
  

Fix reference to “verify-ca” and “verify-full” in a note in the docs.

  
commit   : faa695580b07fd27a8407b9385df31f4ec01a582    
  
author   : Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Fri, 16 Dec 2011 15:03:36 +0200    
  
committer: Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Fri, 16 Dec 2011 15:03:36 +0200    

Click here for diff

  
  

Disable excessive FP optimization by recent versions of gcc.

  
commit   : 517462faf0af96fccc3acfc30ca8242a13859ca5    
  
author   : Andrew Dunstan <andrew@dunslane.net>    
date     : Wed, 14 Dec 2011 17:13:01 -0500    
  
committer: Andrew Dunstan <andrew@dunslane.net>    
date     : Wed, 14 Dec 2011 17:13:01 -0500    

Click here for diff

  
Suggested solution from Tom Lane. Problem discovered, probably not  
for the first time, while testing the mingw-w64 32 bit compiler.  
  
Backpatched to all live branches.  
  

Revert the behavior of inet/cidr functions to not unpack the arguments.

  
commit   : 6c0a375adf9b27fbb8cab8d5cae5dc6b58ea6b24    
  
author   : Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Mon, 12 Dec 2011 09:49:47 +0200    
  
committer: Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Mon, 12 Dec 2011 09:49:47 +0200    

Click here for diff

  
I forgot to change the functions to use the PG_GETARG_INET_PP() macro,  
when I changed DatumGetInetP() to unpack the datum, like Datum*P macros  
usually do. Also, I screwed up the definition of the PG_GETARG_INET_PP()  
macro, and didn't notice because it wasn't used.  
  
This fixes the memory leak when sorting inet values, as reported  
by Jochen Erwied and debugged by Andres Freund. Backpatch to 8.3, like  
the previous patch that broke it.  
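A toy model of why the macro choice matters; none of these names are PostgreSQL's real definitions. P-style access allocates an unpacked copy on every call, while PP-style access reads the packed datum in place; inside a sort comparator invoked O(n log n) times, the former accumulates allocations until the sort's memory context is reset, which is the leak fixed here.

```c
#include <stdlib.h>
#include <string.h>

/* Toy model (not PostgreSQL's varlena code) of P vs PP argument access. */

static long allocations = 0;

typedef struct { int bits; unsigned char addr[16]; } inet_val;

/* P-style: make a freshly allocated, unpacked copy of the value */
static inet_val *getarg_inet_p(const inet_val *datum)
{
    inet_val *copy = malloc(sizeof *copy);

    memcpy(copy, datum, sizeof *copy);
    allocations++;
    return copy;
}

/* PP-style: use the (possibly packed) datum in place, no allocation */
static const inet_val *getarg_inet_pp(const inet_val *datum)
{
    return datum;
}

/* A comparator using PP-style access allocates nothing per call */
static int inet_cmp(const inet_val *a, const inet_val *b)
{
    const inet_val *x = getarg_inet_pp(a);
    const inet_val *y = getarg_inet_pp(b);

    return memcmp(x->addr, y->addr, sizeof x->addr);
}
```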
  

Don’t set reachedMinRecoveryPoint during crash recovery. In crash recovery, we don’t reach consistency before replaying all of the WAL. Rename the variable to reachedConsistency, to make its intention clearer.

  
commit   : 94b18c60c7a52452356bb49a04c1495083ea67f6    
  
author   : Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Fri, 9 Dec 2011 14:32:42 +0200    
  
committer: Heikki Linnakangas <heikki.linnakangas@iki.fi>    
date     : Fri, 9 Dec 2011 14:32:42 +0200    

Click here for diff

  
In master, that was an active bug because of the recent patch to  
immediately PANIC if a reference to a missing page is found in WAL after  
reaching consistency, as Tom Lane's test case demonstrated. In 9.1 and 9.0,  
the only consequence was a misleading "consistent recovery state reached at  
%X/%X" message in the log at the beginning of crash recovery (the database  
is not consistent at that point yet). In 8.4, the log message was not  
printed in crash recovery, even though there was a similar  
reachedMinRecoveryPoint local variable that was also set early. So,  
backpatch to 9.1 and 9.0.  
  

In pg_upgrade, allow tables using regclass to be upgraded because we preserve pg_class oids since PG 9.0.

  
commit   : ec218056feb4533932bce4af820523829d831f92    
  
author   : Bruce Momjian <bruce@momjian.us>    
date     : Mon, 5 Dec 2011 16:45:01 -0500    
  
committer: Bruce Momjian <bruce@momjian.us>    
date     : Mon, 5 Dec 2011 16:45:01 -0500    

Click here for diff

  
  

Applied another patch by Zoltan to fix memory alignment issues in ecpg’s sqlda code.

  
commit   : 621fd4d4c02d7c390c554f356ce161f8518fe865    
  
author   : Michael Meskes <meskes@postgresql.org>    
date     : Sat, 3 Dec 2011 21:03:57 +0100    
  
committer: Michael Meskes <meskes@postgresql.org>    
date     : Sat, 3 Dec 2011 21:03:57 +0100    

Click here for diff

  
  

Treat ENOTDIR as ENOENT when looking for client certificate file

  
commit   : f3bbd7d814c152a3582365734dc7b95c4fb3a863    
  
author   : Magnus Hagander <magnus@hagander.net>    
date     : Sat, 3 Dec 2011 15:02:53 +0100    
  
committer: Magnus Hagander <magnus@hagander.net>    
date     : Sat, 3 Dec 2011 15:02:53 +0100    

Click here for diff

  
This makes it possible to use a libpq app with home directory set  
to /dev/null, for example - treating it the same as if the file  
doesn't exist (which it doesn't).  
  
Per bug #6302, reported by Diego Elio Petteno  
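The idea can be sketched with a hypothetical helper (not libpq's actual code): when probing for an optional file such as a client certificate, ENOTDIR is treated exactly like ENOENT, so a path component that turns out to be a non-directory (e.g. `$HOME` set to `/dev/null`) just means "no file here".

```c
#include <errno.h>
#include <stdio.h>

/* Hypothetical sketch: probe for an optional file.  Returns 1 if opened
 * (caller must fclose), 0 if the file is simply absent, -1 on a real
 * error.  ENOTDIR means a path component is not a directory, which for
 * an optional file is the same situation as ENOENT. */
static int open_optional_file(const char *path, FILE **out)
{
    *out = fopen(path, "r");
    if (*out == NULL)
    {
        if (errno == ENOENT || errno == ENOTDIR)
            return 0;           /* absent: not an error */
        return -1;              /* genuine failure */
    }
    return 1;                   /* opened */
}
```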
  

Add some weasel wording about threaded usage of PGresults.

  
commit   : 8af71fc56d0103e7bc0ebb12af152ed3b6ab0250    
  
author   : Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 2 Dec 2011 11:33:53 -0500    
  
committer: Tom Lane <tgl@sss.pgh.pa.us>    
date     : Fri, 2 Dec 2011 11:33:53 -0500    

Click here for diff

  
PGresults used to be read-only from the application's viewpoint, but now  
that we've exposed various functions that allow modification of a PGresult,  
that sweeping statement is no longer accurate.  Noted by Dmitriy Igrishin.