From mboxrd@z Thu Jan  1 00:00:00 1970
From: Matthew Wilcox
Subject: [PATCH v11 30/63] mm: Convert page migration to XArray
Date: Sat, 14 Apr 2018 07:12:43 -0700
Message-ID: <20180414141316.7167-31-willy@infradead.org>
In-Reply-To: <20180414141316.7167-1-willy@infradead.org>
References: <20180414141316.7167-1-willy@infradead.org>
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
To: linux-mm@kvack.org, linux-fsdevel@vger.kernel.org
Cc: linux-nilfs@vger.kernel.org, Jan Kara, Jeff Layton, Matthew Wilcox,
 James Simmons, Jaegeuk Kim, Andreas Dilger, Nicholas Piggin,
 linux-f2fs-devel@lists.sourceforge.net, Oleg Drokin, Ryusuke Konishi,
 Lukas Czerner, Ross Zwisler, Christoph Hellwig, Goldwyn Rodrigues,
 Mike Kravetz

From: Matthew Wilcox

Signed-off-by: Matthew Wilcox
---
 mm/migrate.c | 41 ++++++++++++++++-------------------------
 1 file changed, 16 insertions(+), 25 deletions(-)

diff --git a/mm/migrate.c b/mm/migrate.c
index f65dd69e1fd1..de1a602d0056 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -323,7 +323,7 @@ void __migration_entry_wait(struct mm_struct *mm, pte_t *ptep,
 	page = migration_entry_to_page(entry);
 
 	/*
-	 * Once radix-tree replacement of page migration started, page_count
+	 * Once page cache replacement of page migration started, page_count
 	 * *must* be zero. And, we don't want to call wait_on_page_locked()
 	 * against a page without get_page().
 	 * So, we use get_page_unless_zero(), here. Even failed, page fault
@@ -438,10 +438,10 @@ int migrate_page_move_mapping(struct address_space *mapping,
 		struct buffer_head *head, enum migrate_mode mode,
 		int extra_count)
 {
+	XA_STATE(xas, &mapping->i_pages, page_index(page));
 	struct zone *oldzone, *newzone;
 	int dirty;
 	int expected_count = 1 + extra_count;
-	void **pslot;
 
 	/*
 	 * Device public or private pages have an extra refcount as they are
@@ -467,21 +467,16 @@ int migrate_page_move_mapping(struct address_space *mapping,
 	oldzone = page_zone(page);
 	newzone = page_zone(newpage);
 
-	xa_lock_irq(&mapping->i_pages);
-
-	pslot = radix_tree_lookup_slot(&mapping->i_pages,
-					page_index(page));
+	xas_lock_irq(&xas);
 
 	expected_count += 1 + page_has_private(page);
-	if (page_count(page) != expected_count ||
-		radix_tree_deref_slot_protected(pslot,
-					&mapping->i_pages.xa_lock) != page) {
-		xa_unlock_irq(&mapping->i_pages);
+	if (page_count(page) != expected_count || xas_load(&xas) != page) {
+		xas_unlock_irq(&xas);
 		return -EAGAIN;
 	}
 
 	if (!page_ref_freeze(page, expected_count)) {
-		xa_unlock_irq(&mapping->i_pages);
+		xas_unlock_irq(&xas);
 		return -EAGAIN;
 	}
 
@@ -495,7 +490,7 @@ int migrate_page_move_mapping(struct address_space *mapping,
 	if (mode == MIGRATE_ASYNC && head &&
 			!buffer_migrate_lock_buffers(head, mode)) {
 		page_ref_unfreeze(page, expected_count);
-		xa_unlock_irq(&mapping->i_pages);
+		xas_unlock_irq(&xas);
 		return -EAGAIN;
 	}
 
@@ -523,7 +518,7 @@ int migrate_page_move_mapping(struct address_space *mapping,
 		SetPageDirty(newpage);
 	}
 
-	radix_tree_replace_slot(&mapping->i_pages, pslot, newpage);
+	xas_store(&xas, newpage);
 
 	/*
 	 * Drop cache reference from old page by unfreezing
@@ -532,7 +527,7 @@ int migrate_page_move_mapping(struct address_space *mapping,
 	 */
 	page_ref_unfreeze(page, expected_count - 1);
 
-	xa_unlock(&mapping->i_pages);
+	xas_unlock(&xas);
 	/* Leave irq disabled to prevent preemption while updating stats */
 
 	/*
@@ -572,22 +567,18 @@ EXPORT_SYMBOL(migrate_page_move_mapping);
 int migrate_huge_page_move_mapping(struct address_space *mapping,
 				   struct page *newpage, struct page *page)
 {
+	XA_STATE(xas, &mapping->i_pages, page_index(page));
 	int expected_count;
-	void **pslot;
-
-	xa_lock_irq(&mapping->i_pages);
-
-	pslot = radix_tree_lookup_slot(&mapping->i_pages, page_index(page));
+	xas_lock_irq(&xas);
 
 	expected_count = 2 + page_has_private(page);
-	if (page_count(page) != expected_count ||
-		radix_tree_deref_slot_protected(pslot, &mapping->i_pages.xa_lock) != page) {
-		xa_unlock_irq(&mapping->i_pages);
+	if (page_count(page) != expected_count || xas_load(&xas) != page) {
+		xas_unlock_irq(&xas);
 		return -EAGAIN;
 	}
 
 	if (!page_ref_freeze(page, expected_count)) {
-		xa_unlock_irq(&mapping->i_pages);
+		xas_unlock_irq(&xas);
 		return -EAGAIN;
 	}
 
@@ -596,11 +587,11 @@ int migrate_huge_page_move_mapping(struct address_space *mapping,
 
 	get_page(newpage);
 
-	radix_tree_replace_slot(&mapping->i_pages, pslot, newpage);
+	xas_store(&xas, newpage);
 
 	page_ref_unfreeze(page, expected_count - 1);
 
-	xa_unlock_irq(&mapping->i_pages);
+	xas_unlock_irq(&xas);
 
 	return MIGRATEPAGE_SUCCESS;
 }
-- 
2.17.0
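
For readers new to the XArray API, here is a minimal sketch (not part of the
patch; the helper name replace_page_in_mapping() is hypothetical) of the
locked lookup-and-replace pattern the conversion adopts: an XA_STATE records
the array and index once, xas_load() checks under the xa_lock that the slot
still holds the expected page, and xas_store() replaces the entry in place.

	/*
	 * Sketch only: illustrates the xas_*() pattern used above.
	 * The function name is made up; it is not part of mm/migrate.c.
	 */
	#include <linux/xarray.h>
	#include <linux/pagemap.h>

	static int replace_page_in_mapping(struct address_space *mapping,
					   struct page *oldpage,
					   struct page *newpage)
	{
		/* The xa_state carries both the array and the index. */
		XA_STATE(xas, &mapping->i_pages, page_index(oldpage));

		xas_lock_irq(&xas);		/* takes the mapping's xa_lock */
		if (xas_load(&xas) != oldpage) {
			/* Slot no longer holds the page we expected. */
			xas_unlock_irq(&xas);
			return -EAGAIN;
		}
		xas_store(&xas, newpage);	/* replace the entry in place */
		xas_unlock_irq(&xas);
		return 0;
	}

Compared with the radix tree code it replaces, the xa_state removes the need
for a separate slot pointer (pslot) and a second protected dereference, which
is why the patch deletes more lines than it adds.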