From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: from cuda.sgi.com (cuda1.sgi.com [192.48.157.11]) by oss.sgi.com (8.14.3/8.14.3/SuSE Linux 0.8) with ESMTP id q2TJ82Ak156978 for ; Thu, 29 Mar 2012 14:08:03 -0500
Received: from bombadil.infradead.org (173-166-109-252-newengland.hfc.comcastbusiness.net [173.166.109.252]) by cuda.sgi.com with ESMTP id YoflaiFDn3uTC9Bt (version=TLSv1 cipher=AES256-SHA bits=256 verify=NO) for ; Thu, 29 Mar 2012 12:08:01 -0700 (PDT)
Date: Thu, 29 Mar 2012 15:07:59 -0400
From: Christoph Hellwig
Subject: Re: [PATCH] xfs: fix buffer lookup race on allocation failure
Message-ID: <20120329190759.GA8622@infradead.org>
References: <1333022846-12697-1-git-send-email-david@fromorbit.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1333022846-12697-1-git-send-email-david@fromorbit.com>
List-Id: XFS Filesystem from SGI
List-Unsubscribe: ,
List-Archive:
List-Post:
List-Help:
List-Subscribe: ,
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xfs-bounces@oss.sgi.com
Errors-To: xfs-bounces@oss.sgi.com
To: Dave Chinner
Cc: xfs@oss.sgi.com

I don't like this solution.  What speaks against building up the page
array after the first buffer lookup failed, but before linking the
buffer into the rbtree?  That's what the inode cache and all the VFS
caches do.

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs
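[Editor's note: a minimal, hypothetical C sketch of the pattern Christoph is suggesting. The names (`buf`, `get_buf`) and the flat-list "cache" are illustrative stand-ins, not XFS code; the real code uses an rbtree under a per-AG lock. The point is the ordering: on a lookup miss, allocate the buffer *and* its page array fully, re-check the cache, and only then make the buffer visible, so an allocation failure never leaves a half-built buffer in the tree.]

```c
#include <stdlib.h>

/* Illustrative stand-ins; real XFS keeps buffers in an rbtree. */
struct buf {
	long blkno;
	void *pages;        /* backing page array */
	struct buf *next;
};

static struct buf *cache;   /* hypothetical flat-list "cache" */

static struct buf *lookup(long blkno)
{
	for (struct buf *b = cache; b; b = b->next)
		if (b->blkno == blkno)
			return b;
	return NULL;
}

struct buf *get_buf(long blkno)
{
	struct buf *b = lookup(blkno);
	if (b)
		return b;

	/* Miss: build the page array *before* insertion, so a buffer
	 * without backing pages is never visible to other lookups. */
	struct buf *new = malloc(sizeof(*new));
	if (!new)
		return NULL;
	new->blkno = blkno;
	new->pages = malloc(4096);
	if (!new->pages) {
		free(new);
		return NULL;    /* failure: nothing was published */
	}

	/* Re-check: a concurrent lookup may have inserted meanwhile.
	 * (Single-threaded here; in the kernel this re-check runs
	 * under the tree lock.) */
	b = lookup(blkno);
	if (b) {
		free(new->pages);
		free(new);
		return b;
	}

	new->next = cache;
	cache = new;
	return new;
}
```

This mirrors how the inode cache handles the same race: fully initialize outside the lock, then insert-or-discard under it.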