From: Matthew Wilcox
Subject: [PATCH v8 49/63] shmem: Convert shmem_alloc_hugepage to XArray
Date: Tue, 6 Mar 2018 11:23:59 -0800
Message-ID: <20180306192413.5499-50-willy@infradead.org>
References: <20180306192413.5499-1-willy@infradead.org>
In-Reply-To: <20180306192413.5499-1-willy@infradead.org>
To: Andrew Morton
Cc: Matthew Wilcox, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
 linux-fsdevel@vger.kernel.org, Ryusuke Konishi, linux-nilfs@vger.kernel.org,
 linux-btrfs@vger.kernel.org

From: Matthew Wilcox

xa_find() is a slightly easier API to use than
radix_tree_gang_lookup_slot() because it contains its own RCU locking.

Signed-off-by: Matthew Wilcox
---
 mm/shmem.c | 14 ++++----------
 1 file changed, 4 insertions(+), 10 deletions(-)

diff --git a/mm/shmem.c b/mm/shmem.c
index fb06fb3e644a..a0a354a87f3b 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1416,23 +1416,17 @@ static struct page *shmem_alloc_hugepage(gfp_t gfp,
 		struct shmem_inode_info *info, pgoff_t index)
 {
 	struct vm_area_struct pvma;
-	struct inode *inode = &info->vfs_inode;
-	struct address_space *mapping = inode->i_mapping;
-	pgoff_t idx, hindex;
-	void __rcu **results;
+	struct address_space *mapping = info->vfs_inode.i_mapping;
+	pgoff_t hindex;
 	struct page *page;
 
 	if (!IS_ENABLED(CONFIG_TRANSPARENT_HUGE_PAGECACHE))
 		return NULL;
 
 	hindex = round_down(index, HPAGE_PMD_NR);
-	rcu_read_lock();
-	if (radix_tree_gang_lookup_slot(&mapping->i_pages, &results, &idx,
-				hindex, 1) && idx < hindex + HPAGE_PMD_NR) {
-		rcu_read_unlock();
+	if (xa_find(&mapping->i_pages, &hindex, hindex + HPAGE_PMD_NR - 1,
+			XA_PRESENT))
 		return NULL;
-	}
-	rcu_read_unlock();
 
 	shmem_pseudo_vma_init(&pvma, info, hindex);
 	page = alloc_pages_vma(gfp | __GFP_COMP | __GFP_NORETRY | __GFP_NOWARN,
-- 
2.16.1
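
As a side note on the changelog's point, here is a minimal sketch (not part
of the patch) of the lookup pattern on its own.  The helper name
shmem_range_has_entry() is hypothetical; it only relies on the xa_find()
call as used in the hunk above, i.e. xa_find(xa, &index, max, XA_PRESENT),
which takes the RCU read lock internally and returns the first present
entry with index in [*index, max], updating *index to that entry's slot.

	/* Hypothetical helper, for illustration only. */
	static bool shmem_range_has_entry(struct address_space *mapping,
					  pgoff_t hindex)
	{
		pgoff_t index = hindex;

		/*
		 * xa_find() does its own rcu_read_lock()/rcu_read_unlock(),
		 * so no explicit RCU critical section is needed here, unlike
		 * the radix_tree_gang_lookup_slot() version being removed.
		 */
		return xa_find(&mapping->i_pages, &index,
				hindex + HPAGE_PMD_NR - 1, XA_PRESENT) != NULL;
	}

The radix tree version in the deleted lines needs the explicit RCU lock
and unlock calls plus the extra 'results' and 'idx' locals, which is what
makes xa_find() the slightly easier API to use here.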