From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)", Andrew Morton, Hugh Dickins,
	William Kucharski, Johannes Weiner, Jan Kara, Yang Shi,
	Dave Chinner, linux-kernel@vger.kernel.org
Subject: [PATCH v2 07/12] mm: Add an 'end' parameter to pagevec_lookup_entries
Date: Mon, 14 Sep 2020 14:00:37 +0100
Message-Id: <20200914130042.11442-8-willy@infradead.org>
X-Mailer: git-send-email 2.21.3
In-Reply-To: <20200914130042.11442-1-willy@infradead.org>
References: <20200914130042.11442-1-willy@infradead.org>
MIME-Version: 1.0
Simplifies the callers and uses the existing functionality in
find_get_entries().  We can also drop the final argument of
truncate_exceptional_pvec_entries() and simplify the logic in that
function.

Signed-off-by: Matthew Wilcox (Oracle)
---
 include/linux/pagevec.h |  5 ++---
 mm/swap.c               |  8 ++++----
 mm/truncate.c           | 41 ++++++++++-------------------------
 3 files changed, 16 insertions(+), 38 deletions(-)

diff --git a/include/linux/pagevec.h b/include/linux/pagevec.h
index 081d934eda64..4b245592262c 100644
--- a/include/linux/pagevec.h
+++ b/include/linux/pagevec.h
@@ -26,9 +26,8 @@ struct pagevec {
 void __pagevec_release(struct pagevec *pvec);
 void __pagevec_lru_add(struct pagevec *pvec);
 unsigned pagevec_lookup_entries(struct pagevec *pvec,
-		struct address_space *mapping,
-		pgoff_t start, unsigned nr_entries,
-		pgoff_t *indices);
+		struct address_space *mapping, pgoff_t start, pgoff_t end,
+		unsigned nr_entries, pgoff_t *indices);
 void pagevec_remove_exceptionals(struct pagevec *pvec);
 unsigned pagevec_lookup_range(struct pagevec *pvec,
 			      struct address_space *mapping,
diff --git a/mm/swap.c b/mm/swap.c
index fcf6ccb94b09..b6e56a84b466 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -1036,6 +1036,7 @@ void __pagevec_lru_add(struct pagevec *pvec)
  * @pvec:	Where the resulting entries are placed
  * @mapping:	The address_space to search
  * @start:	The starting entry index
+ * @end:	The highest index to return (inclusive).
  * @nr_entries:	The maximum number of pages
  * @indices:	The cache indices corresponding to the entries in @pvec
  *
@@ -1056,11 +1057,10 @@ void __pagevec_lru_add(struct pagevec *pvec)
  * found.
  */
 unsigned pagevec_lookup_entries(struct pagevec *pvec,
-		struct address_space *mapping,
-		pgoff_t start, unsigned nr_entries,
-		pgoff_t *indices)
+		struct address_space *mapping, pgoff_t start, pgoff_t end,
+		unsigned nr_entries, pgoff_t *indices)
 {
-	pvec->nr = find_get_entries(mapping, start, ULONG_MAX, nr_entries,
+	pvec->nr = find_get_entries(mapping, start, end, nr_entries,
 				    pvec->pages, indices);
 	return pagevec_count(pvec);
 }
diff --git a/mm/truncate.c b/mm/truncate.c
index 5dbe0c77b5ac..69ea72e7fc1c 100644
--- a/mm/truncate.c
+++ b/mm/truncate.c
@@ -57,11 +57,10 @@ static void clear_shadow_entry(struct address_space *mapping, pgoff_t index,
  * exceptional entries similar to what pagevec_remove_exceptionals does.
  */
 static void truncate_exceptional_pvec_entries(struct address_space *mapping,
-		struct pagevec *pvec, pgoff_t *indices,
-		pgoff_t end)
+		struct pagevec *pvec, pgoff_t *indices)
 {
 	int i, j;
-	bool dax, lock;
+	bool dax;
 
 	/* Handled by shmem itself */
 	if (shmem_mapping(mapping))
@@ -75,8 +74,7 @@ static void truncate_exceptional_pvec_entries(struct address_space *mapping,
 		return;
 
 	dax = dax_mapping(mapping);
-	lock = !dax && indices[j] < end;
-	if (lock)
+	if (!dax)
 		xa_lock_irq(&mapping->i_pages);
 
 	for (i = j; i < pagevec_count(pvec); i++) {
@@ -88,9 +86,6 @@ static void truncate_exceptional_pvec_entries(struct address_space *mapping,
 			continue;
 		}
 
-		if (index >= end)
-			continue;
-
 		if (unlikely(dax)) {
 			dax_delete_mapping_entry(mapping, index);
 			continue;
@@ -99,7 +94,7 @@ static void truncate_exceptional_pvec_entries(struct address_space *mapping,
 		__clear_shadow_entry(mapping, index, page);
 	}
 
-	if (lock)
+	if (!dax)
 		xa_unlock_irq(&mapping->i_pages);
 	pvec->nr = j;
 }
@@ -329,7 +324,7 @@ void truncate_inode_pages_range(struct address_space *mapping,
 	while (index < end && find_lock_entries(mapping, index, end - 1,
 			&pvec, indices)) {
 		index = indices[pagevec_count(&pvec) - 1] + 1;
-		truncate_exceptional_pvec_entries(mapping, &pvec, indices, end);
+		truncate_exceptional_pvec_entries(mapping, &pvec, indices);
 		for (i = 0; i < pagevec_count(&pvec); i++)
 			truncate_cleanup_page(mapping, pvec.pages[i]);
 		delete_from_page_cache_batch(mapping, &pvec);
@@ -381,8 +376,8 @@ void truncate_inode_pages_range(struct address_space *mapping,
 	index = start;
 	for ( ; ; ) {
 		cond_resched();
-		if (!pagevec_lookup_entries(&pvec, mapping, index,
-			min(end - index, (pgoff_t)PAGEVEC_SIZE), indices)) {
+		if (!pagevec_lookup_entries(&pvec, mapping, index, end - 1,
+				PAGEVEC_SIZE, indices)) {
 			/* If all gone from start onwards, we're done */
 			if (index == start)
 				break;
@@ -390,23 +385,12 @@ void truncate_inode_pages_range(struct address_space *mapping,
 			index = start;
 			continue;
 		}
-		if (index == start && indices[0] >= end) {
-			/* All gone out of hole to be punched, we're done */
-			pagevec_remove_exceptionals(&pvec);
-			pagevec_release(&pvec);
-			break;
-		}
 
 		for (i = 0; i < pagevec_count(&pvec); i++) {
 			struct page *page = pvec.pages[i];
 
 			/* We rely upon deletion not changing page->index */
 			index = indices[i];
-			if (index >= end) {
-				/* Restart punch to make sure all gone */
-				index = start - 1;
-				break;
-			}
 
 			if (xa_is_value(page))
 				continue;
@@ -417,7 +401,7 @@ void truncate_inode_pages_range(struct address_space *mapping,
 			truncate_inode_page(mapping, page);
 			unlock_page(page);
 		}
-		truncate_exceptional_pvec_entries(mapping, &pvec, indices, end);
+		truncate_exceptional_pvec_entries(mapping, &pvec, indices);
 		pagevec_release(&pvec);
 		index++;
 	}
@@ -528,8 +512,6 @@ unsigned long invalidate_mapping_pages(struct address_space *mapping,
 
 		/* We rely upon deletion not changing page->index */
 		index = indices[i];
-		if (index > end)
-			break;
 
 		if (xa_is_value(page)) {
 			invalidate_exceptional_entry(mapping, index,
@@ -629,16 +611,13 @@ int invalidate_inode_pages2_range(struct address_space *mapping,
 
 	pagevec_init(&pvec);
 	index = start;
-	while (index <= end && pagevec_lookup_entries(&pvec, mapping, index,
-			min(end - index, (pgoff_t)PAGEVEC_SIZE - 1) + 1,
-			indices)) {
+	while (pagevec_lookup_entries(&pvec, mapping, index, end,
+			PAGEVEC_SIZE, indices)) {
 		for (i = 0; i < pagevec_count(&pvec); i++) {
 			struct page *page = pvec.pages[i];
 
 			/* We rely upon deletion not changing page->index */
 			index = indices[i];
-			if (index > end)
-				break;
 
 			if (xa_is_value(page)) {
 				if (!invalidate_exceptional_entry2(mapping,
-- 
2.28.0
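[Editor's illustration, not part of the patch.] The core of the change above is pushing an inclusive `end` bound into the lookup helper so callers no longer clamp `nr_entries` with `min()` or re-check `index > end` after the fact. The toy userspace sketch below models that semantics with made-up names (`toy_pagevec`, `toy_lookup_entries`, a sorted index table standing in for the page cache); it is not kernel code and makes no attempt to mirror find_get_entries() internals.

```c
/*
 * Toy model of the API change: the lookup helper takes an inclusive
 * 'end' index, so the caller loop needs no range filtering of its own.
 * All names here are hypothetical, for illustration only.
 */
#include <stddef.h>

#define TOY_PAGEVEC_SIZE 15

struct toy_pagevec {
	unsigned nr;
	unsigned long pages[TOY_PAGEVEC_SIZE];
};

/*
 * Stand-in for the patched pagevec_lookup_entries(): walk a sorted
 * table of cached indices, skip entries below 'start', and stop once
 * an entry exceeds the inclusive 'end' or nr_entries is reached.
 */
static unsigned toy_lookup_entries(struct toy_pagevec *pvec,
		const unsigned long *cached, size_t ncached,
		unsigned long start, unsigned long end,
		unsigned nr_entries, unsigned long *indices)
{
	unsigned out = 0;
	size_t i;

	for (i = 0; i < ncached && out < nr_entries; i++) {
		if (cached[i] < start)
			continue;
		if (cached[i] > end)	/* 'end' is inclusive, as in the patch */
			break;
		indices[out] = cached[i];
		pvec->pages[out] = cached[i];
		out++;
	}
	pvec->nr = out;
	return out;
}
```

With this shape, a caller like invalidate_inode_pages2_range() collapses to `while (toy_lookup_entries(..., index, end, TOY_PAGEVEC_SIZE, indices))`, mirroring how the patch drops the `min(end - index, ...)` arithmetic and the in-loop `if (index > end) break;` checks.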