From mboxrd@z Thu Jan  1 00:00:00 1970
From: David Howells
Subject: Re: [RFC] get_write_access()/deny_write_access() without inode->i_lock
Date: Mon, 20 Jun 2011 13:47:17 +0100
Message-ID: <21839.1308574037@redhat.com>
References: <20110619235147.GQ11521@ZenIV.linux.org.uk>
In-Reply-To: <20110619235147.GQ11521@ZenIV.linux.org.uk>
To: Al Viro
Cc: dhowells@redhat.com, linux-arch@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, Linus Torvalds

Al Viro wrote:

> +	for (v = atomic_read(&inode->i_writecount); v >= 0; v = v1) {
> +		v1 = atomic_cmpxchg(&inode->i_writecount, v, v + 1);
> +		if (likely(v1 == v))
> +			return 0;
> +	}

You don't need to reissue the atomic_read().  atomic_cmpxchg() returns the
current value of the memory location.  Just set v to v1 before going round
the loop again.

David
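
A minimal userspace sketch of the retry loop David describes, written with
C11 <stdatomic.h> rather than the kernel's atomic_t API; the writecount
variable and the try_get_write_access() name are illustrative, not part of
the original patch.  atomic_compare_exchange_weak() writes the current value
back into its "expected" argument on failure, so, just as with the kernel's
atomic_cmpxchg() return value, there is no need to re-read the counter before
retrying.

	#include <stdatomic.h>
	#include <stdbool.h>

	/*
	 * Illustrative stand-in for inode->i_writecount: >= 0 means writers
	 * are allowed (the value counts them), < 0 means writes are denied.
	 */
	static atomic_int writecount;

	/* Try to take write access; true on success, false if denied. */
	static bool try_get_write_access(void)
	{
		int v = atomic_load(&writecount);	/* read once */

		while (v >= 0) {
			/*
			 * On failure the CAS updates v with the value it
			 * found, so the loop retries without re-reading
			 * the counter -- the point made above.
			 */
			if (atomic_compare_exchange_weak(&writecount, &v, v + 1))
				return true;	/* took write access */
		}
		return false;	/* writes currently denied */
	}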