From: Nicholas Piggin <npiggin@gmail.com>
To: linux-mm@kvack.org
Cc: Nicholas Piggin <npiggin@gmail.com>,
linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org,
linuxppc-dev@lists.ozlabs.org,
Andrew Morton <akpm@linux-foundation.org>,
Linus Torvalds <torvalds@linux-foundation.org>
Subject: [PATCH 2/3] mm/cow: optimise pte dirty/accessed bits handling in fork
Date: Tue, 28 Aug 2018 21:20:33 +1000
Message-ID: <20180828112034.30875-3-npiggin@gmail.com>
In-Reply-To: <20180828112034.30875-1-npiggin@gmail.com>
fork clears dirty/accessed bits from new ptes in the child. This logic
has existed since mapped page reclaim was done by scanning ptes, when
it may have been quite important. Today, with pte scanning driven from
the physical page side, there is less reason to clear these bits: dirty
bits are all tested and cleared together, so one dirty bit is equivalent
to many dirty bits. One young bit is treated similarly to many young
bits, though not quite the same; a comment has been added where there is
a difference.
This eliminates a major source of faults that powerpc/radix requires
to set dirty/accessed bits in ptes, speeding up a fork/exit
microbenchmark by about 5% on POWER9 (16600 -> 17500 fork/execs per
second).
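For reference, the fork/exit microbenchmark is essentially a tight
fork-then-reap loop of the following shape (a minimal sketch, not the
exact test source):

/* Minimal sketch of a fork/exit microbenchmark; not the exact test used. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
	struct timespec start, end;
	long i, iters = 100000;

	clock_gettime(CLOCK_MONOTONIC, &start);
	for (i = 0; i < iters; i++) {
		pid_t pid = fork();

		if (pid < 0) {
			perror("fork");
			exit(1);
		}
		if (pid == 0)
			_exit(0);		/* child exits immediately */
		waitpid(pid, NULL, 0);		/* parent reaps the child */
	}
	clock_gettime(CLOCK_MONOTONIC, &end);

	printf("%.0f forks/sec\n", iters /
	       ((end.tv_sec - start.tv_sec) +
		(end.tv_nsec - start.tv_nsec) / 1e9));
	return 0;
}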
Skylake appears to have a micro-fault overhead too. A test allocates
4GB of anonymous memory, reads each page, then forks, and times the
child reading a byte from each page. The child's first pass over the
pages takes about 1000 cycles per page; the second pass takes about 27
cycles (TLB miss). With no additional minor faults measured for either
child pass, and the page array well exceeding TLB capacity, the large
cost must come from micro-faults caused by setting the accessed bit.
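The test described above is roughly the following (again only a sketch
of the described setup; the real measurement reported cycles per page
rather than wall-clock time):

/* Sketch of the 4GB fork/child-read test described above; not the exact
 * program used for the Skylake numbers. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <time.h>
#include <unistd.h>

#define SIZE (4UL << 30)	/* 4GB of anonymous memory */

int main(void)
{
	long page = sysconf(_SC_PAGESIZE);
	volatile char sum = 0;
	size_t off;
	pid_t pid;
	char *mem;

	mem = mmap(NULL, SIZE, PROT_READ | PROT_WRITE,
		   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (mem == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	/* Parent reads a byte from each page so ptes are populated
	 * before the fork. */
	for (off = 0; off < SIZE; off += page)
		sum += mem[off];

	pid = fork();
	if (pid == 0) {
		struct timespec t0, t1;
		int pass;

		/* Child: time two read passes over the inherited pages. */
		for (pass = 0; pass < 2; pass++) {
			clock_gettime(CLOCK_MONOTONIC, &t0);
			for (off = 0; off < SIZE; off += page)
				sum += mem[off];
			clock_gettime(CLOCK_MONOTONIC, &t1);
			printf("pass %d: %.0f ns/page\n", pass,
			       ((t1.tv_sec - t0.tv_sec) * 1e9 +
				(t1.tv_nsec - t0.tv_nsec)) / (SIZE / page));
		}
		fflush(stdout);
		_exit(0);
	}
	waitpid(pid, NULL, 0);
	return 0;
}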
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
mm/huge_memory.c | 2 --
mm/memory.c | 10 +++++-----
mm/vmscan.c | 8 ++++++++
3 files changed, 13 insertions(+), 7 deletions(-)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index d9bae12978ef..5fb1a43e12e0 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -977,7 +977,6 @@ int copy_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
pmdp_set_wrprotect(src_mm, addr, src_pmd);
pmd = pmd_wrprotect(pmd);
}
- pmd = pmd_mkold(pmd);
set_pmd_at(dst_mm, addr, dst_pmd, pmd);
ret = 0;
@@ -1071,7 +1070,6 @@ int copy_huge_pud(struct mm_struct *dst_mm, struct mm_struct *src_mm,
pudp_set_wrprotect(src_mm, addr, src_pud);
pud = pud_wrprotect(pud);
}
- pud = pud_mkold(pud);
set_pud_at(dst_mm, addr, dst_pud, pud);
ret = 0;
diff --git a/mm/memory.c b/mm/memory.c
index b616a69ad770..3d8bf8220bd0 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1038,12 +1038,12 @@ copy_one_pte(struct mm_struct *dst_mm, struct mm_struct *src_mm,
}
/*
- * If it's a shared mapping, mark it clean in
- * the child
+ * Child inherits dirty and young bits from parent. There is no
+ * point clearing them because any cleaning or aging has to walk
+ * all ptes anyway, and it will notice the bits set in the parent.
+ * Leaving them set avoids stalls and even page faults on CPUs that
+ * handle these bits in software.
*/
- if (vm_flags & VM_SHARED)
- pte = pte_mkclean(pte);
- pte = pte_mkold(pte);
page = vm_normal_page(vma, addr, pte);
if (page) {
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 7e7d25504651..52fe64af3d80 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1021,6 +1021,14 @@ static enum page_references page_check_references(struct page *page,
* to look twice if a mapped file page is used more
* than once.
*
+ * fork() will set referenced bits in child ptes despite
+ * them not having been accessed, to avoid micro-faults from
+ * setting accessed bits. This heuristic is not perfectly
+ * accurate in other ways either -- multiple map/unmap in the
+ * same time window would be treated as multiple references
+ * despite the same number of actual memory accesses made
+ * by the program.
+ *
* Mark it and spare it for another trip around the
* inactive list. Another page table reference will
* lead to its activation.
--
2.18.0
Thread overview: 14+ messages
2018-08-28 11:20 [PATCH 0/3] mm: dirty/accessed pte optimisations Nicholas Piggin
2018-08-28 11:20 ` [PATCH 1/3] mm/cow: don't bother write protecting already write-protected huge pages Nicholas Piggin
2018-08-28 11:20 ` Nicholas Piggin [this message]
2018-08-29 15:42 ` [PATCH 2/3] mm/cow: optimise pte dirty/accessed bits handling in fork Linus Torvalds
2018-08-29 23:12 ` Nicholas Piggin
2018-08-29 23:15 ` Linus Torvalds
2018-08-29 23:57 ` Nicholas Piggin
2018-08-28 11:20 ` [PATCH 3/3] mm: optimise pte dirty/accessed bit setting by demand based pte insertion Nicholas Piggin
2018-09-05 14:29 ` Guenter Roeck
2018-09-05 22:18 ` Nicholas Piggin
2018-09-06 0:36 ` Guenter Roeck
2018-09-17 17:53 ` Nicholas Piggin
2018-09-21 8:42 ` Ley Foon Tan
2018-09-23 9:23 ` Nicholas Piggin