From: Matthew Wilcox
Subject: [PATCH v11 43/63] memfd: Convert shmem_tag_pins to XArray
Date: Sat, 14 Apr 2018 07:12:56 -0700
Message-ID: <20180414141316.7167-44-willy@infradead.org>
In-Reply-To: <20180414141316.7167-1-willy@infradead.org>
References: <20180414141316.7167-1-willy@infradead.org>
To: linux-mm@kvack.org, linux-fsdevel@vger.kernel.org
Cc: linux-nilfs@vger.kernel.org, Jan Kara, Jeff Layton, Matthew Wilcox,
	James Simmons, Jaegeuk Kim, Andreas Dilger, Nicholas Piggin,
	linux-f2fs-devel@lists.sourceforge.net, Oleg Drokin, Ryusuke Konishi,
	Lukas Czerner, Ross Zwisler, Christoph Hellwig, Goldwyn Rodrigues,
	Mike Kravetz

Switch shmem_tag_pins() to the same batch-processing model as
shmem_wait_for_pins() and have it operate on the xa_state previously
set up by shmem_wait_for_pins().

Signed-off-by: Matthew Wilcox
Reviewed-by: Mike Kravetz
---
 mm/shmem.c | 44 ++++++++++++++++++--------------------------
 1 file changed, 18 insertions(+), 26 deletions(-)

diff --git a/mm/shmem.c b/mm/shmem.c
index 017340fe933d..2283872a84a1 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -2577,43 +2577,35 @@ static loff_t shmem_file_llseek(struct file *file, loff_t offset, int whence)
 }
 
 /*
- * We need a tag: a new tag would expand every radix_tree_node by 8 bytes,
+ * We need a tag: a new tag would expand every xa_node by 8 bytes,
  * so reuse a tag which we firmly believe is never set or cleared on shmem.
  */
 #define SHMEM_TAG_PINNED	PACHECACHE_TAG_TOWRITE
 #define LAST_SCAN		4	/* about 150ms max */
 
-static void shmem_tag_pins(struct address_space *mapping)
+static void shmem_tag_pins(struct xa_state *xas)
 {
-	struct radix_tree_iter iter;
-	void **slot;
-	pgoff_t start;
 	struct page *page;
+	unsigned int tagged = 0;
 
 	lru_add_drain();
-	start = 0;
-	rcu_read_lock();
 
-	radix_tree_for_each_slot(slot, &mapping->i_pages, &iter, start) {
-		page = radix_tree_deref_slot(slot);
-		if (!page || radix_tree_exception(page)) {
-			if (radix_tree_deref_retry(page)) {
-				slot = radix_tree_iter_retry(&iter);
-				continue;
-			}
-		} else if (page_count(page) - page_mapcount(page) > 1) {
-			xa_lock_irq(&mapping->i_pages);
-			radix_tree_tag_set(&mapping->i_pages, iter.index,
-					   SHMEM_TAG_PINNED);
-			xa_unlock_irq(&mapping->i_pages);
-		}
+	xas_lock_irq(xas);
+	xas_for_each(xas, page, ULONG_MAX) {
+		if (xa_is_value(page))
+			continue;
+		if (page_count(page) - page_mapcount(page) > 1)
+			xas_set_tag(xas, SHMEM_TAG_PINNED);
 
-		if (need_resched()) {
-			slot = radix_tree_iter_resume(slot, &iter);
-			cond_resched_rcu();
-		}
+		if (++tagged % XA_CHECK_SCHED)
+			continue;
+
+		xas_pause(xas);
+		xas_unlock_irq(xas);
+		cond_resched();
+		xas_lock_irq(xas);
 	}
-	rcu_read_unlock();
+	xas_unlock_irq(xas);
 }
 
 /*
@@ -2631,7 +2623,7 @@ static int shmem_wait_for_pins(struct address_space *mapping)
 	struct page *page;
 	int error, scan;
 
-	shmem_tag_pins(mapping);
+	shmem_tag_pins(&xas);
 
 	error = 0;
 	for (scan = 0; scan <= LAST_SCAN; scan++) {
-- 
2.17.0
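
P.S. For readers following the conversion, here is a minimal sketch (not
part of the patch) of how the converted helper is expected to be driven
from shmem_wait_for_pins() in mm/shmem.c: the caller owns the xa_state
over the mapping's i_pages and passes it in. The XA_STATE declaration was
added by the earlier shmem_wait_for_pins() conversion and is not visible
in the hunks above, so its exact form here is an assumption.

	/*
	 * Sketch only -- not part of this patch.  The XA_STATE line is
	 * assumed from the earlier shmem_wait_for_pins() conversion.
	 */
	static int shmem_wait_for_pins(struct address_space *mapping)
	{
		XA_STATE(xas, &mapping->i_pages, 0);	/* walk i_pages from index 0 */
		struct page *page;
		int error, scan;

		/* One locked pass over i_pages, tagging pages with extra pins. */
		shmem_tag_pins(&xas);

		error = 0;
		for (scan = 0; scan <= LAST_SCAN; scan++) {
			/* ... unchanged wait/retry loop over SHMEM_TAG_PINNED ... */
		}
		return error;
	}

Compared with the old radix tree walk, the tagging pass now runs under
xas_lock_irq() for its whole duration and only drops the lock every
XA_CHECK_SCHED entries via xas_pause()/cond_resched(), which is the
batching the loop in the hunk above implements.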