From: David Hildenbrand <david@redhat.com>
To: Ryan Roberts <ryan.roberts@arm.com>, linux-kernel@vger.kernel.org
Cc: Catalin Marinas <catalin.marinas@arm.com>,
linux-mm@kvack.org, sparclinux@vger.kernel.org,
Alexander Gordeev <agordeev@linux.ibm.com>,
Will Deacon <will@kernel.org>,
linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org,
Russell King <linux@armlinux.org.uk>,
Matthew Wilcox <willy@infradead.org>,
"Aneesh Kumar K.V" <aneesh.kumar@kernel.org>,
"Naveen N. Rao" <naveen.n.rao@linux.ibm.com>,
Gerald Schaefer <gerald.schaefer@linux.ibm.com>,
Christian Borntraeger <borntraeger@linux.ibm.com>,
Albert Ou <aou@eecs.berkeley.edu>,
Vasily Gorbik <gor@linux.ibm.com>,
Heiko Carstens <hca@linux.ibm.com>,
Nicholas Piggin <npiggin@gmail.com>,
Paul Walmsley <paul.walmsley@sifive.com>,
linux-arm-kernel@lists.infradead.org,
Dinh Nguyen <dinguyen@kernel.org>,
Palmer Dabbelt <palmer@dabbelt.com>,
Sven Schnelle <svens@linux.ibm.com>,
Andrew Morton <akpm@linux-foundation.org>,
linuxppc-dev@lists.ozlabs.org,
"David S. Miller" <davem@davemloft.net>
Subject: Re: [PATCH v1 10/11] mm/memory: ignore dirty/accessed/soft-dirty bits in folio_pte_batch()
Date: Tue, 23 Jan 2024 15:13:35 +0100
Message-ID: <8eb5db8e-33cc-4cbf-a1bf-0da7af230fab@redhat.com>
In-Reply-To: <c92c2460-c66a-46c7-b84f-0732965dcf73@redhat.com>
>> Although now I'm wondering if there is a race here... What happens if a page in
>> the parent becomes dirty after you have checked it but before you write protect
>> it? Isn't that already a problem with the current non-batched version? Why do we
>> even need to preserve dirty in the child for private mappings?
>
> I suspect it's because the parent could zap the anon folio. If the folio is
> clean but the PTE is dirty, I suspect that we could lose the child's data
> if we were to evict that clean folio (swapout).
>
> So I assume we simply copy the dirty PTE bit, so the system knows that
> the folio is actually dirty, because one PTE is dirty.
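To spell out the "system knows" part: when reclaim unmaps a folio, the PTE
dirty bit is folded back into the folio. Roughly the following pattern -- a
simplified, renamed sketch of what try_to_unmap_one() in mm/rmap.c does, not
the literal code:

/*
 * Sketch only: on unmap for reclaim, transfer the PTE dirty bit to the
 * folio, so a folio whose swap copy looks up to date is not discarded
 * while one of its PTEs still holds unwritten data.
 */
static void unmap_for_reclaim_sketch(struct vm_area_struct *vma,
		unsigned long addr, pte_t *ptep, struct folio *folio)
{
	pte_t pteval = ptep_clear_flush(vma, addr, ptep);

	/* Set the dirty flag on the folio now that the PTE is gone. */
	if (pte_dirty(pteval))
		folio_mark_dirty(folio);
}

If fork dropped the dirty bit from the child's PTE while the folio itself is
clean, nothing would stop reclaim from treating the folio as fully written
back.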
Oh, and regarding your race concern: it's undefined which page state we
would see if some write races with fork, so it also doesn't matter whether
we copy the PTE dirty bit or not if it gets set in a racy fashion.
I'll now experiment with:
From 14e83ff2a422a96ce5701f9c8454a49f9ed947e3 Mon Sep 17 00:00:00 2001
From: David Hildenbrand <david@redhat.com>
Date: Sat, 30 Dec 2023 12:54:35 +0100
Subject: [PATCH] mm/memory: ignore dirty/accessed/soft-dirty bits in
folio_pte_batch()
Let's always ignore the accessed/young bit: we'll always mark the PTE
as old in our child process during fork, and upcoming users will
similarly not care.
Ignore the dirty bit only if we don't want to duplicate the dirty bit
into the child process during fork. Maybe, we could just set all PTEs
in the child dirty if any PTE is dirty. For now, let's keep the behavior
unchanged.
Ignore the soft-dirty bit only if the bit doesn't have any meaning in
the src vma.
Signed-off-by: David Hildenbrand <david@redhat.com>
---
mm/memory.c | 34 ++++++++++++++++++++++++++++++----
1 file changed, 30 insertions(+), 4 deletions(-)
diff --git a/mm/memory.c b/mm/memory.c
index 7690994929d26..9aba1b0e871ca 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -953,24 +953,44 @@ static __always_inline void __copy_present_ptes(struct vm_area_struct *dst_vma,
set_ptes(dst_vma->vm_mm, addr, dst_pte, pte, nr);
}
+/* Flags for folio_pte_batch(). */
+typedef int __bitwise fpb_t;
+
+/* Compare PTEs after pte_mkclean(), ignoring the dirty bit. */
+#define FPB_IGNORE_DIRTY ((__force fpb_t)BIT(0))
+
+/* Compare PTEs after pte_clear_soft_dirty(), ignoring the soft-dirty bit. */
+#define FPB_IGNORE_SOFT_DIRTY ((__force fpb_t)BIT(1))
+
+static inline pte_t __pte_batch_clear_ignored(pte_t pte, fpb_t flags)
+{
+ if (flags & FPB_IGNORE_DIRTY)
+ pte = pte_mkclean(pte);
+ if (likely(flags & FPB_IGNORE_SOFT_DIRTY))
+ pte = pte_clear_soft_dirty(pte);
+ return pte_mkold(pte);
+}
+
/*
* Detect a PTE batch: consecutive (present) PTEs that map consecutive
* pages of the same folio.
*
- * All PTEs inside a PTE batch have the same PTE bits set, excluding the PFN.
+ * All PTEs inside a PTE batch have the same PTE bits set, excluding the PFN,
+ * the accessed bit, dirty bit (with FPB_IGNORE_DIRTY) and soft-dirty bit
+ * (with FPB_IGNORE_SOFT_DIRTY).
*/
static inline int folio_pte_batch(struct folio *folio, unsigned long addr,
- pte_t *start_ptep, pte_t pte, int max_nr)
+ pte_t *start_ptep, pte_t pte, int max_nr, fpb_t flags)
{
unsigned long folio_end_pfn = folio_pfn(folio) + folio_nr_pages(folio);
const pte_t *end_ptep = start_ptep + max_nr;
- pte_t expected_pte = pte_next_pfn(pte);
+ pte_t expected_pte = __pte_batch_clear_ignored(pte_next_pfn(pte), flags);
pte_t *ptep = start_ptep + 1;
VM_WARN_ON_FOLIO(!pte_present(pte), folio);
while (ptep != end_ptep) {
- pte = ptep_get(ptep);
+ pte = __pte_batch_clear_ignored(ptep_get(ptep), flags);
if (!pte_same(pte, expected_pte))
break;
@@ -1004,6 +1024,7 @@ copy_present_ptes(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma
{
struct page *page;
struct folio *folio;
+ fpb_t flags = 0;
int err, nr;
page = vm_normal_page(src_vma, addr, pte);
@@ -1018,7 +1039,12 @@ copy_present_ptes(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma
* by keeping the batching logic separate.
*/
if (unlikely(!*prealloc && folio_test_large(folio) && max_nr != 1)) {
- nr = folio_pte_batch(folio, addr, src_pte, pte, max_nr);
+ if (src_vma->vm_flags & VM_SHARED)
+ flags |= FPB_IGNORE_DIRTY;
+ if (!vma_soft_dirty_enabled(src_vma))
+ flags |= FPB_IGNORE_SOFT_DIRTY;
+
+ nr = folio_pte_batch(folio, addr, src_pte, pte, max_nr, flags);
folio_ref_add(folio, nr);
if (folio_test_anon(folio)) {
if (unlikely(folio_try_dup_anon_rmap_ptes(folio, page,
--
2.43.0
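In case the intent of __pte_batch_clear_ignored() isn't obvious from the
diff: the trick is simply to normalize the ignored bits on both the expected
and the actual PTE before the pte_same() comparison. A tiny user-space model
of that idea (made-up bit layout, nothing from the kernel headers; compiles
with any C compiler):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Made-up PTE layout, for illustration only. */
#define MODEL_PFN_SHIFT		12
#define MODEL_DIRTY		(1ull << 1)
#define MODEL_SOFT_DIRTY	(1ull << 2)
#define MODEL_YOUNG		(1ull << 3)

/* Model equivalents of the FPB_* flags. */
#define MODEL_IGNORE_DIRTY	(1u << 0)
#define MODEL_IGNORE_SOFT_DIRTY	(1u << 1)

/* Models __pte_batch_clear_ignored(): clear the bits we agreed to ignore. */
static uint64_t clear_ignored(uint64_t pte, unsigned int flags)
{
	if (flags & MODEL_IGNORE_DIRTY)
		pte &= ~MODEL_DIRTY;
	if (flags & MODEL_IGNORE_SOFT_DIRTY)
		pte &= ~MODEL_SOFT_DIRTY;
	return pte & ~MODEL_YOUNG;	/* the accessed bit is always ignored */
}

static bool batch_continues(uint64_t expected, uint64_t actual,
			    unsigned int flags)
{
	return clear_ignored(expected, flags) == clear_ignored(actual, flags);
}

int main(void)
{
	uint64_t pte0 = (100ull << MODEL_PFN_SHIFT) | MODEL_DIRTY;
	uint64_t pte1 = (101ull << MODEL_PFN_SHIFT);	/* next PFN, but clean */
	/* What pte_next_pfn() would hand us as the expected next PTE. */
	uint64_t expected = pte0 + (1ull << MODEL_PFN_SHIFT);

	printf("strict compare:   %d\n", batch_continues(expected, pte1, 0));
	printf("ignoring dirty:   %d\n",
	       batch_continues(expected, pte1, MODEL_IGNORE_DIRTY));
	return 0;
}

With the strict compare the batch stops at the first PTE whose dirty bit
differs (prints 0); with the flag set, the whole folio still batches
(prints 1), which is the point of the patch.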
--
Cheers,
David / dhildenb