From: Vlastimil Babka <vbabka@suse.cz>
To: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>,
Andrew Morton <akpm@linux-foundation.org>
Cc: Suren Baghdasaryan <surenb@google.com>,
"Liam R . Howlett" <Liam.Howlett@oracle.com>,
Matthew Wilcox <willy@infradead.org>,
"Paul E . McKenney" <paulmck@kernel.org>,
Jann Horn <jannh@google.com>,
David Hildenbrand <david@redhat.com>,
linux-mm@kvack.org, linux-kernel@vger.kernel.org,
Muchun Song <muchun.song@linux.dev>,
Richard Henderson <richard.henderson@linaro.org>,
Ivan Kokshaysky <ink@jurassic.park.msu.ru>,
Matt Turner <mattst88@gmail.com>,
Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
"James E . J . Bottomley" <James.Bottomley@HansenPartnership.com>,
Helge Deller <deller@gmx.de>, Chris Zankel <chris@zankel.net>,
Max Filippov <jcmvbkbc@gmail.com>, Arnd Bergmann <arnd@arndb.de>,
linux-alpha@vger.kernel.org, linux-mips@vger.kernel.org,
linux-parisc@vger.kernel.org, linux-arch@vger.kernel.org,
Shuah Khan <shuah@kernel.org>,
Christian Brauner <brauner@kernel.org>,
linux-kselftest@vger.kernel.org,
Sidhartha Kumar <sidhartha.kumar@oracle.com>,
Jeff Xu <jeffxu@chromium.org>,
Christoph Hellwig <hch@infradead.org>,
linux-api@vger.kernel.org, John Hubbard <jhubbard@nvidia.com>
Subject: Re: [PATCH v2 1/5] mm: pagewalk: add the ability to install PTEs
Date: Mon, 21 Oct 2024 15:27:55 +0200
Message-ID: <fdd2be0a-cae9-4508-ba20-eb04c9a1e7f9@suse.cz>
In-Reply-To: <cf91e3936c2dee42aa8ac15af3e76c90c098d570.1729440856.git.lorenzo.stoakes@oracle.com>
On 10/20/24 18:20, Lorenzo Stoakes wrote:
> The existing generic pagewalk logic permits the walking of page tables,
> invoking callbacks at individual page table levels via user-provided
> mm_walk_ops callbacks.
>
> This is useful for traversing existing page table entries, but precludes
> the ability to establish new ones.
>
> Existing mechanisms for performing a walk which also install page table
> entries if necessary are heavily duplicated throughout the kernel, each
> with semantic differences from one another and largely unavailable for use
> elsewhere.
>
> Rather than add yet another implementation, we extend the generic pagewalk
> logic to enable the installation of page table entries by adding a new
> install_pte() callback in mm_walk_ops. If this is specified, then upon
> encountering a missing page table entry, we allocate and install a new one
> and continue the traversal.
>
> If a THP huge page is encountered, we make use of existing logic to split
> it. Then once we reach the PTE level, we invoke the install_pte() callback
> which provides a PTE entry to install. We do not support hugetlb at this
> stage.
>
> If this function returns an error, or an allocation fails during the
> operation, we abort the operation altogether. It is up to the caller to
> deal appropriately with partially populated page table ranges.
>
> If install_pte() is defined, the semantics of pte_entry() change - this
> callback is then only invoked if the entry already exists. This is a useful
> property, as it allows a caller to handle existing PTEs while installing
> new ones where necessary in the specified range.
>
> If install_pte() is not defined, then there is no functional difference to
> this patch, so all existing logic will work precisely as it did before.
>
> As we only permit the installation of PTEs where a mapping does not already
> exist there is no need for TLB management, however we do invoke
> update_mmu_cache() for architectures which require manual maintenance of
> mappings for other CPUs.
>
> We explicitly do not allow the existing page walk API to expose this
> feature as it is dangerous and intended for internal mm use only. Therefore
> we provide a new walk_page_range_mm() function exposed only to
> mm/internal.h.
>
> Reviewed-by: Jann Horn <jannh@google.com>
> Signed-off-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
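To make the new semantics concrete, the dispatch described above can be
sketched as a userspace C simulation. The `pte_t`, `walk_ops` and `walk_ptes`
names here are stand-ins for the kernel types, not the real API; the point is
only that with install_pte set, pte_entry sees present entries while
install_pte fills the holes:

```c
#include <assert.h>
#include <stddef.h>

/* Simulated PTE: 0 plays the role of pte_none(), non-zero means present. */
typedef unsigned long pte_t;

struct walk_ops {
	/* Invoked for entries that already exist. */
	int (*pte_entry)(pte_t *pte, size_t idx, void *priv);
	/* If non-NULL, invoked to provide an entry for each hole. */
	int (*install_pte)(size_t idx, pte_t *new_pte, void *priv);
};

/* Mirrors the patched walk_pte_range_inner() dispatch: when install_pte
 * is set, pte_entry is only called for already-present entries. */
static int walk_ptes(pte_t *ptes, size_t n,
		     const struct walk_ops *ops, void *priv)
{
	for (size_t i = 0; i < n; i++) {
		int err;

		if (ops->install_pte && ptes[i] == 0) {
			pte_t new_pte;

			err = ops->install_pte(i, &new_pte, priv);
			if (err)
				return err; /* abort, range stays partial */
			ptes[i] = new_pte;  /* set_pte_at() analogue */
		} else {
			err = ops->pte_entry(&ptes[i], i, priv);
			if (err)
				return err;
		}
	}
	return 0;
}
```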
<snip>
> /*
> * We want to know the real level where a entry is located ignoring any
> * folding of levels which may be happening. For example if p4d is folded then
> @@ -29,9 +34,23 @@ static int walk_pte_range_inner(pte_t *pte, unsigned long addr,
> int err = 0;
>
> for (;;) {
> - err = ops->pte_entry(pte, addr, addr + PAGE_SIZE, walk);
> - if (err)
> - break;
> + if (ops->install_pte && pte_none(ptep_get(pte))) {
> + pte_t new_pte;
> +
> + err = ops->install_pte(addr, addr + PAGE_SIZE, &new_pte,
> + walk);
> + if (err)
> + break;
> +
> + set_pte_at(walk->mm, addr, pte, new_pte);
While the guard pages install ptes unconditionally, maybe some install_pte
handler implementation would sometimes want to skip an entry. Should we
define an error code that means it's skipped, and just continue instead of
calling set_pte_at()? Or leave it until such a handler appears.
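One hypothetical shape for such a convention, sketched as a userspace
simulation (the INSTALL_PTE_SKIP name is invented here, not an existing
kernel error code): the handler returns a sentinel and the walker continues
past the hole without the set_pte_at() analogue.

```c
#include <assert.h>
#include <stddef.h>

/* Simulated PTE: 0 plays the role of pte_none(), non-zero means present. */
typedef unsigned long pte_t;

/* Hypothetical sentinel: positive, so it cannot collide with -errno. */
#define INSTALL_PTE_SKIP 1

static int walk_with_skip(pte_t *ptes, size_t n,
			  int (*install_pte)(size_t idx, pte_t *new_pte))
{
	for (size_t i = 0; i < n; i++) {
		pte_t new_pte;
		int err;

		if (ptes[i] != 0)
			continue; /* only holes are offered to the handler */

		err = install_pte(i, &new_pte);
		if (err == INSTALL_PTE_SKIP)
			continue;	/* leave the hole as-is, keep walking */
		if (err)
			return err;	/* real error: abort the walk */
		ptes[i] = new_pte;	/* set_pte_at() analogue */
	}
	return 0;
}
```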
> + /* Non-present before, so for arches that need it. */
> + if (!WARN_ON_ONCE(walk->no_vma))
> + update_mmu_cache(walk->vma, addr, pte);
> + } else {
> + err = ops->pte_entry(pte, addr, addr + PAGE_SIZE, walk);
> + if (err)
> + break;
> + }
> if (addr >= end - PAGE_SIZE)
> break;
> addr += PAGE_SIZE;
> @@ -89,11 +108,14 @@ static int walk_pmd_range(pud_t *pud, unsigned long addr, unsigned long end,
> again:
> next = pmd_addr_end(addr, end);
> if (pmd_none(*pmd)) {
> - if (ops->pte_hole)
> + if (ops->install_pte)
> + err = __pte_alloc(walk->mm, pmd);
> + else if (ops->pte_hole)
> err = ops->pte_hole(addr, next, depth, walk);
> if (err)
> break;
> - continue;
> + if (!ops->install_pte)
> + continue;
> }
>
> walk->action = ACTION_SUBTREE;
> @@ -116,7 +138,7 @@ static int walk_pmd_range(pud_t *pud, unsigned long addr, unsigned long end,
> */
> if ((!walk->vma && (pmd_leaf(*pmd) || !pmd_present(*pmd))) ||
> walk->action == ACTION_CONTINUE ||
> - !(ops->pte_entry))
> + !(ops->pte_entry || ops->install_pte))
> continue;
BTW, I find it hard to read this condition even before your patch, oh well.
But if I read it correctly, does it mean we're going to split a pmd-mapped
THP if we have an install_pte handler? But is that really necessary if the
pmd split results in all ptes being populated, so the install_pte handler
can't do anything with any pte anyway?
> if (walk->vma)
> @@ -148,11 +170,14 @@ static int walk_pud_range(p4d_t *p4d, unsigned long addr, unsigned long end,
> again:
> next = pud_addr_end(addr, end);
> if (pud_none(*pud)) {
> - if (ops->pte_hole)
> + if (ops->install_pte)
> + err = __pmd_alloc(walk->mm, pud, addr);
> + else if (ops->pte_hole)
> err = ops->pte_hole(addr, next, depth, walk);
> if (err)
> break;
> - continue;
> + if (!ops->install_pte)
> + continue;
> }
>
> walk->action = ACTION_SUBTREE;
> @@ -167,7 +192,7 @@ static int walk_pud_range(p4d_t *p4d, unsigned long addr, unsigned long end,
>
> if ((!walk->vma && (pud_leaf(*pud) || !pud_present(*pud))) ||
> walk->action == ACTION_CONTINUE ||
> - !(ops->pmd_entry || ops->pte_entry))
> + !(ops->pmd_entry || ops->pte_entry || ops->install_pte))
> continue;
Ditto?