From: Matthew Wilcox
Subject: [PATCH v5 33/78] mm: Convert truncate to XArray
Date: Fri, 15 Dec 2017 14:04:05 -0800
Message-ID: <20171215220450.7899-34-willy@infradead.org>
In-Reply-To: <20171215220450.7899-1-willy@infradead.org>
References: <20171215220450.7899-1-willy@infradead.org>
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
To: linux-kernel@vger.kernel.org
Cc: Jens Axboe, linux-xfs@vger.kernel.org, linux-nilfs@vger.kernel.org, linux-raid@vger.kernel.org, Matthew Wilcox, Marc Zyngier, linux-usb@vger.kernel.org, linux-f2fs-devel@lists.sourceforge.net, David Howells, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, Ross Zwisler, Rehas Sachdeva, Shaohua Li, linux-btrfs@vger.kernel.org

This is essentially xa_cmpxchg() with the locking handled above us, and it doesn't have to handle replacing a NULL entry.
Signed-off-by: Matthew Wilcox
---
 mm/truncate.c | 15 ++++++---------
 1 file changed, 6 insertions(+), 9 deletions(-)

diff --git a/mm/truncate.c b/mm/truncate.c
index 69bb743dd7e5..70323c347298 100644
--- a/mm/truncate.c
+++ b/mm/truncate.c
@@ -33,15 +33,12 @@ static inline void __clear_shadow_entry(struct address_space *mapping,
 				pgoff_t index, void *entry)
 {
-	struct radix_tree_node *node;
-	void **slot;
+	XA_STATE(xas, &mapping->pages, index);
 
-	if (!__radix_tree_lookup(&mapping->pages, index, &node, &slot))
+	xas_set_update(&xas, workingset_update_node);
+	if (xas_load(&xas) != entry)
 		return;
-	if (*slot != entry)
-		return;
-	__radix_tree_replace(&mapping->pages, node, slot, NULL,
-			workingset_update_node);
+	xas_store(&xas, NULL);
 	mapping->nrexceptional--;
 }
 
@@ -746,10 +743,10 @@ int invalidate_inode_pages2_range(struct address_space *mapping,
 		index++;
 	}
 	/*
-	 * For DAX we invalidate page tables after invalidating radix tree. We
+	 * For DAX we invalidate page tables after invalidating page cache. We
 	 * could invalidate page tables while invalidating each entry however
 	 * that would be expensive. And doing range unmapping before doesn't
-	 * work as we have no cheap way to find whether radix tree entry didn't
+	 * work as we have no cheap way to find whether page cache entry didn't
 	 * get remapped later.
 	 */
 	if (dax_mapping(mapping)) {
-- 
2.15.1