From mboxrd@z Thu Jan 1 00:00:00 1970
From: Nick Piggin
Subject: Re: Inode Lock Scalability V7 (was V6)
Date: Fri, 22 Oct 2010 13:41:52 +1100
Message-ID: <20101022024152.GA6618@amd>
References: <1287622186-1935-1-git-send-email-david@fromorbit.com> <20101021050422.GP32255@dastard> <20101021132034.GB13620@amd> <20101021235227.GI12506@dastard> <20101022004540.GA5920@amd> <20101022022010.GG19804@ZenIV.linux.org.uk> <20101022023444.GA6573@amd>
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Cc: Al Viro , Dave Chinner , linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org
To: Nick Piggin
Return-path:
Content-Disposition: inline
In-Reply-To: <20101022023444.GA6573@amd>
Sender: linux-kernel-owner@vger.kernel.org
List-Id: linux-fsdevel.vger.kernel.org

On Fri, Oct 22, 2010 at 01:34:44PM +1100, Nick Piggin wrote:
> On Fri, Oct 22, 2010 at 03:20:10AM +0100, Al Viro wrote:
> > On Fri, Oct 22, 2010 at 11:45:40AM +1100, Nick Piggin wrote:
> > majority already checks for I_FREEING/I_WILL_FREE, refusing to pick such
> > inodes.  It's not an accidental subtle property of the code, it's bloody
> > fundamental.
>
> I didn't miss that, and I agree that at the point of my initial lock
> break up, the locking is "wrong". Whether you correct it by changing
> the lock ordering or by using RCU to do lookups is something I want to
> debate further.
>
> I think it is natural to be able to lock the inode and have it lock the
> icache state.

Importantly, to be able to manipulate the icache state in any number of
steps, under a consistent lock. Exactly like we have with inode_lock
today.

Stepping away from that, adding code to handle new concurrencies before
inode_lock is able to be lifted is just wrong. The locking in my lock
break patch is ugly and wrong, yes. But it is always an intermediate
step.
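The rule Al states above — lookups already check I_FREEING/I_WILL_FREE and refuse to pick such inodes — can be sketched in userspace. This is not the kernel code under discussion: the hash chain, my_ilookup(), and flag values are invented for illustration, and pthread mutexes stand in for the kernel's spinlocks.

```c
#include <stddef.h>
#include <pthread.h>

#define I_FREEING   0x01
#define I_WILL_FREE 0x02

struct inode {
	unsigned long	i_ino;
	unsigned int	i_state;	/* protected by i_lock */
	int		i_count;	/* protected by i_lock */
	pthread_mutex_t	i_lock;
	struct inode	*i_next;	/* hash-chain link, protected by inode_lock */
};

static struct inode *hash_head;		/* a single hash chain for the sketch */
static pthread_mutex_t inode_lock = PTHREAD_MUTEX_INITIALIZER;

/* Look up an inode by number; refuse anything being torn down. */
static struct inode *my_ilookup(unsigned long ino)
{
	struct inode *inode;

	pthread_mutex_lock(&inode_lock);
	for (inode = hash_head; inode; inode = inode->i_next) {
		if (inode->i_ino != ino)
			continue;
		pthread_mutex_lock(&inode->i_lock);
		if (inode->i_state & (I_FREEING | I_WILL_FREE)) {
			/* Teardown in progress: pretend it is not there. */
			pthread_mutex_unlock(&inode->i_lock);
			continue;
		}
		inode->i_count++;	/* take a reference while locked */
		pthread_mutex_unlock(&inode->i_lock);
		break;
	}
	pthread_mutex_unlock(&inode_lock);
	return inode;
}
```

The point of the sketch is only the shape: the chain walk happens under the big lock, while i_state is examined and i_count bumped under the per-inode lock, so a freeing inode is invisible to lookups.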
I want to argue that with the RCU inode work coming *anyway*, there is
not much point in weakening the i_lock property: the locking can be
cleaned up nicely while still keeping i_lock ~= inode_lock (for a single
inode).
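The "i_lock ~= inode_lock for a single inode" shape under an RCU-style lookup might look like the following userspace model. It is a sketch under stated assumptions, not the patch series' code: rcu_ilookup() and the no-op RCU stubs are invented here, the chain walk stands in for a real RCU-protected traversal, and the re-check under i_lock hints at why lockless walkers must revalidate what they found.

```c
#include <stddef.h>
#include <pthread.h>

#define I_FREEING   0x01
#define I_WILL_FREE 0x02

struct inode {
	unsigned long	i_ino;
	unsigned int	i_state;	/* protected by i_lock alone */
	int		i_count;	/* protected by i_lock alone */
	pthread_mutex_t	i_lock;
	struct inode	*i_next;
};

static struct inode *chain;		/* hash chain, walked locklessly */

/* Stand-ins for rcu_read_lock()/rcu_read_unlock(); no-ops in this model. */
static void rcu_read_lock_model(void) { }
static void rcu_read_unlock_model(void) { }

static struct inode *rcu_ilookup(unsigned long ino)
{
	struct inode *inode;

	rcu_read_lock_model();
	for (inode = chain; inode; inode = inode->i_next) {
		if (inode->i_ino != ino)
			continue;
		pthread_mutex_lock(&inode->i_lock);
		/*
		 * Re-check under i_lock: between the lockless walk and
		 * taking the lock, the inode may have been marked for
		 * freeing (or, with real RCU, reused entirely).
		 */
		if (inode->i_ino != ino ||
		    (inode->i_state & (I_FREEING | I_WILL_FREE))) {
			pthread_mutex_unlock(&inode->i_lock);
			continue;
		}
		inode->i_count++;
		pthread_mutex_unlock(&inode->i_lock);
		break;
	}
	rcu_read_unlock_model();
	return inode;
}
```

Note there is no global lock anywhere: every icache decision about one inode happens in one critical section under its i_lock, which is the per-inode inode_lock equivalence being argued for.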