From mboxrd@z Thu Jan 1 00:00:00 1970
From: Matthew Wilcox
Subject: [PATCH v8 36/63] mm: Convert truncate to XArray
Date: Tue, 6 Mar 2018 11:23:46 -0800
Message-ID: <20180306192413.5499-37-willy@infradead.org>
References: <20180306192413.5499-1-willy@infradead.org>
In-Reply-To: <20180306192413.5499-1-willy@infradead.org>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
To: Andrew Morton
Cc: Matthew Wilcox, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
 linux-fsdevel@vger.kernel.org, Ryusuke Konishi, linux-nilfs@vger.kernel.org,
 linux-btrfs@vger.kernel.org

From: Matthew Wilcox

This is essentially xa_cmpxchg() with the locking handled above us,
and it doesn't have to handle replacing a NULL entry.

Signed-off-by: Matthew Wilcox
---
 mm/truncate.c | 15 ++++++---------
 1 file changed, 6 insertions(+), 9 deletions(-)

diff --git a/mm/truncate.c b/mm/truncate.c
index ed778555c9f3..45d68e90b703 100644
--- a/mm/truncate.c
+++ b/mm/truncate.c
@@ -33,15 +33,12 @@
 static inline void __clear_shadow_entry(struct address_space *mapping,
 				pgoff_t index, void *entry)
 {
-	struct radix_tree_node *node;
-	void **slot;
+	XA_STATE(xas, &mapping->i_pages, index);
 
-	if (!__radix_tree_lookup(&mapping->i_pages, index, &node, &slot))
+	xas_set_update(&xas, workingset_update_node);
+	if (xas_load(&xas) != entry)
 		return;
-	if (*slot != entry)
-		return;
-	__radix_tree_replace(&mapping->i_pages, node, slot, NULL,
-			     workingset_update_node);
+	xas_store(&xas, NULL);
 	mapping->nrexceptional--;
 }
 
@@ -738,10 +735,10 @@ int invalidate_inode_pages2_range(struct address_space *mapping,
 		index++;
 	}
 	/*
-	 * For DAX we invalidate page tables after invalidating radix tree. We
+	 * For DAX we invalidate page tables after invalidating page cache. We
 	 * could invalidate page tables while invalidating each entry however
 	 * that would be expensive. And doing range unmapping before doesn't
-	 * work as we have no cheap way to find whether radix tree entry didn't
+	 * work as we have no cheap way to find whether page cache entry didn't
 	 * get remapped later.
 	 */
 	if (dax_mapping(mapping)) {
-- 
2.16.1
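
[Editor's illustration, not part of the patch.] The "essentially
xa_cmpxchg()" remark in the changelog can be made concrete. A minimal
sketch, assuming the XArray API introduced earlier in this series; the
function name clear_shadow_entry_sketch is hypothetical, and this
deliberately ignores that the real caller already holds the xa_lock and
needs the workingset_update_node callback, which is why the patch
open-codes the operation with XA_STATE instead:

	#include <linux/xarray.h>
	#include <linux/fs.h>

	/* Sketch only: clear a shadow entry via compare-and-exchange. */
	static void clear_shadow_entry_sketch(struct address_space *mapping,
					      pgoff_t index, void *entry)
	{
		/*
		 * Replace @entry with NULL only if it is still present.
		 * xa_cmpxchg() returns the previous entry, so comparing
		 * against @entry detects success; storing NULL never
		 * allocates, so the GFP flags are moot here.
		 */
		if (xa_cmpxchg(&mapping->i_pages, index, entry, NULL,
				GFP_KERNEL) == entry)
			mapping->nrexceptional--;
	}

Unlike the XA_STATE version in the patch, xa_cmpxchg() takes the
xa_lock itself and offers no way to run a node update callback, which
is exactly the "locking handled above us" difference the changelog
names.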