From: David Hildenbrand <david@redhat.com>
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, cgroups@vger.kernel.org, x86@kernel.org,
linux-fsdevel@vger.kernel.org,
"David Hildenbrand" <david@redhat.com>,
"Andrew Morton" <akpm@linux-foundation.org>,
"Matthew Wilcox (Oracle)" <willy@infradead.org>,
"Tejun Heo" <tj@kernel.org>, "Zefan Li" <lizefan.x@bytedance.com>,
"Johannes Weiner" <hannes@cmpxchg.org>,
"Michal Koutný" <mkoutny@suse.com>,
"Jonathan Corbet" <corbet@lwn.net>,
"Andy Lutomirski" <luto@kernel.org>,
"Thomas Gleixner" <tglx@linutronix.de>,
"Ingo Molnar" <mingo@redhat.com>,
"Borislav Petkov" <bp@alien8.de>,
"Dave Hansen" <dave.hansen@linux.intel.com>
Subject: [PATCH v1 03/17] mm/rmap: use folio_large_nr_pages() in add/remove functions
Date: Thu, 29 Aug 2024 18:56:06 +0200
Message-ID: <20240829165627.2256514-4-david@redhat.com>
In-Reply-To: <20240829165627.2256514-1-david@redhat.com>

Let's just use the "large" variant in code where we know that we have a
large folio in our hands: this way we avoid performing any unnecessary
"is this folio large?" checks.

While at it, convert the VM_BUG_ON_VMA to a VM_WARN_ON_ONCE.

Signed-off-by: David Hildenbrand <david@redhat.com>
---
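For context, the difference boils down to roughly the following
(simplified sketch only; see patch 02 of this series for the real
definitions, which also handle details like the !CONFIG_64BIT encoding
of the stored page count):

	/*
	 * Rough sketch: folio_large_nr_pages() reads the page count stored
	 * in the (large) folio directly, so it must only be called when the
	 * folio is known to be large.
	 */
	static inline long folio_large_nr_pages(const struct folio *folio)
	{
		return folio->_folio_nr_pages;
	}

	/*
	 * The generic helper has to check folio_test_large() first and
	 * falls back to 1 for small folios -- that is the check we can
	 * skip in the paths touched below, where the folio is known to
	 * be large.
	 */
	static inline long folio_nr_pages(const struct folio *folio)
	{
		if (!folio_test_large(folio))
			return 1;
		return folio_large_nr_pages(folio);
	}

Similarly, computing "nr" first (1 for small folios, the large page
count otherwise) lets a single VM_WARN_ON_ONCE() placed after the
if/else chain cover both cases; unlike VM_BUG_ON_VMA(), it warns once
instead of crashing the kernel when CONFIG_DEBUG_VM is enabled.
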
 mm/rmap.c | 14 ++++++++------
 1 file changed, 8 insertions(+), 6 deletions(-)

diff --git a/mm/rmap.c b/mm/rmap.c
index 78529cf0fd668..6594c122a5895 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1184,7 +1184,7 @@ static __always_inline unsigned int __folio_add_rmap(struct folio *folio,
 		if (first) {
 			nr = atomic_add_return_relaxed(ENTIRELY_MAPPED, mapped);
 			if (likely(nr < ENTIRELY_MAPPED + ENTIRELY_MAPPED)) {
-				*nr_pmdmapped = folio_nr_pages(folio);
+				*nr_pmdmapped = folio_large_nr_pages(folio);
 				nr = *nr_pmdmapped - (nr & FOLIO_PAGES_MAPPED);
 				/* Raced ahead of a remove and another add? */
 				if (unlikely(nr < 0))
@@ -1418,14 +1418,11 @@ void folio_add_anon_rmap_pmd(struct folio *folio, struct page *page,
 void folio_add_new_anon_rmap(struct folio *folio, struct vm_area_struct *vma,
 		unsigned long address, rmap_t flags)
 {
-	const int nr = folio_nr_pages(folio);
 	const bool exclusive = flags & RMAP_EXCLUSIVE;
-	int nr_pmdmapped = 0;
+	int nr = 1, nr_pmdmapped = 0;
 
 	VM_WARN_ON_FOLIO(folio_test_hugetlb(folio), folio);
 	VM_WARN_ON_FOLIO(!exclusive && !folio_test_locked(folio), folio);
-	VM_BUG_ON_VMA(address < vma->vm_start ||
-			address + (nr << PAGE_SHIFT) > vma->vm_end, vma);
 
 	/*
 	 * VM_DROPPABLE mappings don't swap; instead they're just dropped when
@@ -1443,6 +1440,7 @@ void folio_add_new_anon_rmap(struct folio *folio, struct vm_area_struct *vma,
 	} else if (!folio_test_pmd_mappable(folio)) {
 		int i;
 
+		nr = folio_large_nr_pages(folio);
 		for (i = 0; i < nr; i++) {
 			struct page *page = folio_page(folio, i);
 
@@ -1456,6 +1454,7 @@ void folio_add_new_anon_rmap(struct folio *folio, struct vm_area_struct *vma,
 		atomic_set(&folio->_large_mapcount, nr - 1);
 		atomic_set(&folio->_nr_pages_mapped, nr);
 	} else {
+		nr = folio_large_nr_pages(folio);
 		/* increment count (starts at -1) */
 		atomic_set(&folio->_entire_mapcount, 0);
 		/* increment count (starts at -1) */
@@ -1466,6 +1465,9 @@ void folio_add_new_anon_rmap(struct folio *folio, struct vm_area_struct *vma,
 		nr_pmdmapped = nr;
 	}
 
+	VM_WARN_ON_ONCE(address < vma->vm_start ||
+			address + (nr << PAGE_SHIFT) > vma->vm_end);
+
 	__folio_mod_stat(folio, nr, nr_pmdmapped);
 	mod_mthp_stat(folio_order(folio), MTHP_STAT_NR_ANON, 1);
 }
@@ -1557,7 +1559,7 @@ static __always_inline void __folio_remove_rmap(struct folio *folio,
 		if (last) {
 			nr = atomic_sub_return_relaxed(ENTIRELY_MAPPED, mapped);
 			if (likely(nr < ENTIRELY_MAPPED)) {
-				nr_pmdmapped = folio_nr_pages(folio);
+				nr_pmdmapped = folio_large_nr_pages(folio);
 				nr = nr_pmdmapped - (nr & FOLIO_PAGES_MAPPED);
 				/* Raced ahead of another remove and an add? */
 				if (unlikely(nr < 0))
--
2.46.0