From: Matthew Wilcox <willy@infradead.org>
To: linux-fsdevel@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v5 27/39] mm: Remove assumptions of THP size
Date: Thu, 28 May 2020 19:58:12 -0700
Message-ID: <20200529025824.32296-28-willy@infradead.org>
In-Reply-To: <20200529025824.32296-1-willy@infradead.org>

From: "Matthew Wilcox (Oracle)" <willy@infradead.org>

Remove direct uses of HPAGE_PMD_NR in paths that aren't necessarily
PMD-sized, switching them to hpage_nr_pages() (cached in a local
variable in __split_huge_page()) so they handle THPs of any order.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/huge_memory.c | 15 ++++++++-------
 mm/rmap.c        | 10 +++++-----
 2 files changed, 13 insertions(+), 12 deletions(-)
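
Every hunk below leans on hpage_nr_pages() instead of the compile-time
HPAGE_PMD_NR. As a reading aid, here is a minimal sketch of that helper
under the arbitrary-order hpages introduced in patch 03 of this series;
this is a sketch of the idea, not the series' verbatim definition:

	/* include/linux/huge_mm.h (sketch) */
	static inline unsigned long hpage_nr_pages(struct page *page)
	{
		/*
		 * Derive the size from this page's compound order:
		 * compound_nr() is 1 for an order-0 page and
		 * 1UL << compound_order(page) for a THP head, whereas
		 * HPAGE_PMD_NR is only correct for PMD-sized THPs.
		 */
		return compound_nr(page);
	}

The effect of the conversion is that each caller scales with the actual
size of the page it was handed, whether that is a single page, a
PMD-sized THP, or (with this series) an intermediate order.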

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 15a86b06befc..4c4f92349829 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2518,7 +2518,7 @@ static void remap_page(struct page *page)
 	if (PageTransHuge(page)) {
 		remove_migration_ptes(page, page, true);
 	} else {
-		for (i = 0; i < HPAGE_PMD_NR; i++)
+		for (i = 0; i < hpage_nr_pages(page); i++)
 			remove_migration_ptes(page + i, page + i, true);
 	}
 }
@@ -2593,6 +2593,7 @@ static void __split_huge_page(struct page *page, struct list_head *list,
 	struct lruvec *lruvec;
 	struct address_space *swap_cache = NULL;
 	unsigned long offset = 0;
+	unsigned int nr = hpage_nr_pages(head);
 	int i;
 
 	lruvec = mem_cgroup_page_lruvec(head, pgdat);
@@ -2608,7 +2609,7 @@ static void __split_huge_page(struct page *page, struct list_head *list,
 		xa_lock(&swap_cache->i_pages);
 	}
 
-	for (i = HPAGE_PMD_NR - 1; i >= 1; i--) {
+	for (i = nr - 1; i >= 1; i--) {
 		__split_huge_page_tail(head, i, lruvec, list);
 		/* Some pages can be beyond i_size: drop them from page cache */
 		if (head[i].index >= end) {
@@ -2649,7 +2650,7 @@ static void __split_huge_page(struct page *page, struct list_head *list,
 
 	remap_page(head);
 
-	for (i = 0; i < HPAGE_PMD_NR; i++) {
+	for (i = 0; i < nr; i++) {
 		struct page *subpage = head + i;
 		if (subpage == page)
 			continue;
@@ -2731,14 +2732,14 @@ int page_trans_huge_mapcount(struct page *page, int *total_mapcount)
 	page = compound_head(page);
 
 	_total_mapcount = ret = 0;
-	for (i = 0; i < HPAGE_PMD_NR; i++) {
+	for (i = 0; i < hpage_nr_pages(page); i++) {
 		mapcount = atomic_read(&page[i]._mapcount) + 1;
 		ret = max(ret, mapcount);
 		_total_mapcount += mapcount;
 	}
 	if (PageDoubleMap(page)) {
 		ret -= 1;
-		_total_mapcount -= HPAGE_PMD_NR;
+		_total_mapcount -= hpage_nr_pages(page);
 	}
 	mapcount = compound_mapcount(page);
 	ret += mapcount;
@@ -2755,9 +2756,9 @@ bool can_split_huge_page(struct page *page, int *pextra_pins)
 
 	/* Additional pins from page cache */
 	if (PageAnon(page))
-		extra_pins = PageSwapCache(page) ? HPAGE_PMD_NR : 0;
+		extra_pins = PageSwapCache(page) ? hpage_nr_pages(page) : 0;
 	else
-		extra_pins = HPAGE_PMD_NR;
+		extra_pins = hpage_nr_pages(page);
 	if (pextra_pins)
 		*pextra_pins = extra_pins;
 	return total_mapcount(page) == page_count(page) - extra_pins - 1;
diff --git a/mm/rmap.c b/mm/rmap.c
index f79a206b271a..9da4b5121baa 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1199,7 +1199,7 @@ void page_add_file_rmap(struct page *page, bool compound)
 	VM_BUG_ON_PAGE(compound && !PageTransHuge(page), page);
 	lock_page_memcg(page);
 	if (compound && PageTransHuge(page)) {
-		for (i = 0, nr = 0; i < HPAGE_PMD_NR; i++) {
+		for (i = 0, nr = 0; i < hpage_nr_pages(page); i++) {
 			if (atomic_inc_and_test(&page[i]._mapcount))
 				nr++;
 		}
@@ -1241,7 +1241,7 @@ static void page_remove_file_rmap(struct page *page, bool compound)
 
 	/* page still mapped by someone else? */
 	if (compound && PageTransHuge(page)) {
-		for (i = 0, nr = 0; i < HPAGE_PMD_NR; i++) {
+		for (i = 0, nr = 0; i < hpage_nr_pages(page); i++) {
 			if (atomic_add_negative(-1, &page[i]._mapcount))
 				nr++;
 		}
@@ -1290,7 +1290,7 @@ static void page_remove_anon_compound_rmap(struct page *page)
 		 * Subpages can be mapped with PTEs too. Check how many of
 		 * them are still mapped.
 		 */
-		for (i = 0, nr = 0; i < HPAGE_PMD_NR; i++) {
+		for (i = 0, nr = 0; i < hpage_nr_pages(page); i++) {
 			if (atomic_add_negative(-1, &page[i]._mapcount))
 				nr++;
 		}
@@ -1300,10 +1300,10 @@ static void page_remove_anon_compound_rmap(struct page *page)
 		 * page of the compound page is unmapped, but at least one
 		 * small page is still mapped.
 		 */
-		if (nr && nr < HPAGE_PMD_NR)
+		if (nr && nr < hpage_nr_pages(page))
 			deferred_split_huge_page(page);
 	} else {
-		nr = HPAGE_PMD_NR;
+		nr = hpage_nr_pages(page);
 	}
 
 	if (unlikely(PageMlocked(page)))
-- 
2.26.2


Thread overview: 40+ messages
2020-05-29  2:57 [PATCH v5 00/39] Large pages in the page cache Matthew Wilcox
2020-05-29  2:57 ` [PATCH v5 01/39] mm: Move PageDoubleMap bit Matthew Wilcox
2020-05-29  2:57 ` [PATCH v5 02/39] mm: Simplify PageDoubleMap with PF_SECOND policy Matthew Wilcox
2020-05-29  2:57 ` [PATCH v5 03/39] mm: Allow hpages to be arbitrary order Matthew Wilcox
2020-05-29  2:57 ` [PATCH v5 04/39] mm: Introduce thp_size Matthew Wilcox
2020-05-29  2:57 ` [PATCH v5 05/39] mm: Introduce thp_order Matthew Wilcox
2020-05-29  2:57 ` [PATCH v5 06/39] mm: Introduce offset_in_thp Matthew Wilcox
2020-05-29  2:57 ` [PATCH v5 07/39] fs: Add a filesystem flag for large pages Matthew Wilcox
2020-05-29  2:57 ` [PATCH v5 08/39] fs: Do not update nr_thps for large page mappings Matthew Wilcox
2020-05-29  2:57 ` [PATCH v5 09/39] fs: Introduce i_blocks_per_page Matthew Wilcox
2020-05-29  2:57 ` [PATCH v5 10/39] fs: Make page_mkwrite_check_truncate thp-aware Matthew Wilcox
2020-05-29  2:57 ` [PATCH v5 11/39] fs: Support THPs in zero_user_segments Matthew Wilcox
2020-05-29  2:57 ` [PATCH v5 12/39] bio: Add bio_for_each_thp_segment_all Matthew Wilcox
2020-05-29  2:57 ` [PATCH v5 13/39] iomap: Support arbitrarily many blocks per page Matthew Wilcox
2020-05-29  2:57 ` [PATCH v5 14/39] iomap: Support large pages in iomap_adjust_read_range Matthew Wilcox
2020-05-29  2:58 ` [PATCH v5 15/39] iomap: Support large pages in invalidatepage Matthew Wilcox
2020-05-29  2:58 ` [PATCH v5 16/39] iomap: Support large pages in read paths Matthew Wilcox
2020-05-29  2:58 ` [PATCH v5 17/39] iomap: Support large pages in write paths Matthew Wilcox
2020-05-29  2:58 ` [PATCH v5 18/39] iomap: Inline data shouldn't see large pages Matthew Wilcox
2020-05-29  2:58 ` [PATCH v5 19/39] iomap: Handle tail pages in iomap_page_mkwrite Matthew Wilcox
2020-05-29  2:58 ` [PATCH v5 20/39] xfs: Support large pages Matthew Wilcox
2020-05-29  2:58 ` [PATCH v5 21/39] mm: Make prep_transhuge_page return its argument Matthew Wilcox
2020-05-29  2:58 ` [PATCH v5 22/39] mm: Add __page_cache_alloc_order Matthew Wilcox
2020-05-29  2:58 ` [PATCH v5 23/39] mm: Allow large pages to be added to the page cache Matthew Wilcox
2020-05-29  2:58 ` [PATCH v5 24/39] mm: Allow large pages to be removed from " Matthew Wilcox
2020-05-29  2:58 ` [PATCH v5 25/39] mm: Remove page fault assumption of compound page size Matthew Wilcox
2020-05-29  2:58 ` [PATCH v5 26/39] mm: Fix total_mapcount assumption of " Matthew Wilcox
2020-05-29  2:58 ` [PATCH v5 27/39] mm: Remove assumptions of THP size Matthew Wilcox [this message]
2020-05-29  2:58 ` [PATCH v5 28/39] mm: Avoid splitting large pages Matthew Wilcox
2020-05-29  2:58 ` [PATCH v5 29/39] mm: Fix truncation for pages of arbitrary size Matthew Wilcox
2020-05-29  2:58 ` [PATCH v5 30/39] mm: Handle truncates that split large pages Matthew Wilcox
2020-05-29  2:58 ` [PATCH v5 31/39] mm: Support storing shadow entries for " Matthew Wilcox
2020-05-29  2:58 ` [PATCH v5 32/39] mm: Support retrieving tail pages from the page cache Matthew Wilcox
2020-05-29  2:58 ` [PATCH v5 33/39] mm: Support tail pages in wait_for_stable_page Matthew Wilcox
2020-05-29  2:58 ` [PATCH v5 34/39] mm: Add DEFINE_READAHEAD Matthew Wilcox
2020-05-29  2:58 ` [PATCH v5 35/39] mm: Make page_cache_readahead_unbounded take a readahead_control Matthew Wilcox
2020-05-29  2:58 ` [PATCH v5 36/39] mm: Make __do_page_cache_readahead " Matthew Wilcox
2020-05-29  2:58 ` [PATCH v5 37/39] mm: Allow PageReadahead to be set on head pages Matthew Wilcox
2020-05-29  2:58 ` [PATCH v5 38/39] mm: Add large page readahead Matthew Wilcox
2020-05-29  2:58 ` [PATCH v5 39/39] mm: Align THP mappings for non-DAX Matthew Wilcox
