* [PATCH v10 00/12] LUF(Lazy Unmap Flush) reducing tlb numbers over 90%
@ 2024-05-10  6:51 Byungchul Park
  2024-05-10  6:51 ` [PATCH v10 01/12] x86/tlb: add APIs manipulating tlb batch's arch data Byungchul Park
                   ` (15 more replies)
  0 siblings, 16 replies; 49+ messages in thread
From: Byungchul Park @ 2024-05-10  6:51 UTC (permalink / raw)
  To: linux-kernel, linux-mm
  Cc: kernel_team, akpm, ying.huang, vernhao, mgorman, hughd, willy,
	david, peterz, luto, tglx, mingo, bp, dave.hansen, rjgolo

Hi everyone,

While working with a tiered memory system, e.g. CXL memory, I have been
facing migration overhead, especially tlb shootdown, on promotion or
demotion between different tiers.  Most tlb shootdowns on migration
through hinting faults can already be avoided thanks to Huang Ying's
work, commit 4d4b6d66db ("mm,unmap: avoid flushing tlb in batch if PTE
is inaccessible").  See the following link for more information:

https://lore.kernel.org/lkml/20231115025755.GA29979@system.software.com/

However, that only covers migration through hinting faults.  It would
be much better to have a general mechanism that reduces the number of
tlb flushes and that can be applied to any unmap code, where we normally
assume a tlb flush must follow.

I'm suggesting a new mechanism, LUF (Lazy Unmap Flush), that defers the
tlb flush until folios that have been unmapped and freed eventually get
allocated again.  It's safe for folios that had been mapped read-only
and were then unmapped, since the contents of the folios don't change
while they stay in pcp or buddy, so we can still read the data through
the stale tlb entries.

The tlb flush can be deferred when folios get unmapped, as long as the
required flush is guaranteed to be performed before the folios actually
become used again, and of course only if none of the corresponding ptes
have write permission.  Otherwise, the system will get messed up.

To achieve that (see the sketch after this list):

   1. For the folios that map only to non-writable tlb entries, prevent
      the tlb flush during unmapping and perform it just before the
      folios actually become used, out of buddy or pcp.

   2. When any non-writable pte changes to writable, e.g. through the
      fault handler, give up the luf mechanism and perform the required
      tlb flush right away.

   3. When a writable mapping is created, e.g. through mmap(), give up
      the luf mechanism and perform the required tlb flush right away.
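
A rough sketch of the idea, in terms of the APIs introduced later in
this series, follows.  The snippet is illustrative glue only, not an
actual call site; 'folio' and 'ugen' stand for the folio being freed
and its unmap generation number:

   /*
    * 1. Unmap path: all the ptes mapping the folio are non-writable.
    *    Pend the tlb flush and hand the unmap generation number (ugen)
    *    over to pcp or buddy together with the freed folio.
    */
   folio_put_ugen(folio, ugen);		/* introduced in patch 06 */

   /*
    * 1. Allocation path: the page leaves pcp or buddy.  Perform the
    *    pending flush, if any, before the page actually gets used,
    *    e.g. from prep_new_page().
    */
   check_flush_task_ugen();		/* introduced in patch 06 */

   /*
    * 2. and 3. A pte or a new mapping becomes writable: give up luf
    *    and perform the required tlb flush right away, which the
    *    fault and mmap() paths take care of in later patches.
    */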

No matter what type of workload is used for performance evaluation, the
result should be positive thanks to the unconditional reduction of tlb
flushes, tlb misses and interrupts.  For the test, I picked one of the
most popular and heavy workloads, llama.cpp, an LLM (Large Language
Model) inference engine.

The result depends on memory latency and how often reclaim runs, which
determine the tlb miss overhead and how many times unmapping happens.
On my system, the result shows:

   1. tlb flushes are reduced by about 95%.
   2. tlb misses (itlb) are reduced by about 80%.
   3. tlb misses (dtlb store) are reduced by about 57%.
   4. tlb misses (dtlb load) are reduced by about 24%.
   5. tlb shootdown interrupts are reduced by about 95%.
   6. The test program runtime is reduced by about 5%.

The test environment and the results are as follows:

   Machine: bare metal, x86_64, Intel(R) Xeon(R) Gold 6430
   CPU: 1 socket, 64 cores, hyper-threading on
   NUMA: 2 nodes (node 0: 64 CPUs, 42GB DRAM; node 1: no CPUs, 98GB CXL expander)
   Config: swap off, numa balancing tiering on, demotion enabled

   The test set:

      llama.cpp/main -m $(70G_model1) -p "who are you?" -s 1 -t 15 -n 20 &
      llama.cpp/main -m $(70G_model2) -p "who are you?" -s 1 -t 15 -n 20 &
      llama.cpp/main -m $(70G_model3) -p "who are you?" -s 1 -t 15 -n 20 &
      wait

      where -t: nr of threads, -s: seed used to make the runtime stable,
      -n: nr of tokens that determines the runtime, -p: prompt to ask,
      -m: LLM model to use.

   Run the test set 10 times successively, dropping caches before every
   run via 'echo 3 > /proc/sys/vm/drop_caches'.  Each inference prints
   its runtime at the end.

   1. Runtime from the output of llama.cpp:

   BEFORE
   ------
   llama_print_timings:       total time = 1002461.95 ms /    24 tokens
   llama_print_timings:       total time = 1044978.38 ms /    24 tokens
   llama_print_timings:       total time = 1000653.09 ms /    24 tokens
   llama_print_timings:       total time = 1047104.80 ms /    24 tokens
   llama_print_timings:       total time = 1069430.36 ms /    24 tokens
   llama_print_timings:       total time = 1068201.16 ms /    24 tokens
   llama_print_timings:       total time = 1078092.59 ms /    24 tokens
   llama_print_timings:       total time = 1073200.45 ms /    24 tokens
   llama_print_timings:       total time = 1067136.00 ms /    24 tokens
   llama_print_timings:       total time = 1076442.56 ms /    24 tokens
   llama_print_timings:       total time = 1004142.64 ms /    24 tokens
   llama_print_timings:       total time = 1042942.65 ms /    24 tokens
   llama_print_timings:       total time =  999933.76 ms /    24 tokens
   llama_print_timings:       total time = 1046548.83 ms /    24 tokens
   llama_print_timings:       total time = 1068671.48 ms /    24 tokens
   llama_print_timings:       total time = 1068285.76 ms /    24 tokens
   llama_print_timings:       total time = 1077789.63 ms /    24 tokens
   llama_print_timings:       total time = 1071558.93 ms /    24 tokens
   llama_print_timings:       total time = 1066181.55 ms /    24 tokens
   llama_print_timings:       total time = 1076767.53 ms /    24 tokens
   llama_print_timings:       total time = 1004065.63 ms /    24 tokens
   llama_print_timings:       total time = 1044522.13 ms /    24 tokens
   llama_print_timings:       total time =  999725.33 ms /    24 tokens
   llama_print_timings:       total time = 1047510.77 ms /    24 tokens
   llama_print_timings:       total time = 1068010.27 ms /    24 tokens
   llama_print_timings:       total time = 1068999.31 ms /    24 tokens
   llama_print_timings:       total time = 1077648.05 ms /    24 tokens
   llama_print_timings:       total time = 1071378.96 ms /    24 tokens
   llama_print_timings:       total time = 1066326.32 ms /    24 tokens
   llama_print_timings:       total time = 1077088.92 ms /    24 tokens

   AFTER
   -----
   llama_print_timings:       total time =  988522.03 ms /    24 tokens
   llama_print_timings:       total time =  997204.52 ms /    24 tokens
   llama_print_timings:       total time =  996605.86 ms /    24 tokens
   llama_print_timings:       total time =  991985.50 ms /    24 tokens
   llama_print_timings:       total time = 1035143.31 ms /    24 tokens
   llama_print_timings:       total time =  993660.18 ms /    24 tokens
   llama_print_timings:       total time =  983082.14 ms /    24 tokens
   llama_print_timings:       total time =  990431.36 ms /    24 tokens
   llama_print_timings:       total time =  992707.09 ms /    24 tokens
   llama_print_timings:       total time =  992673.27 ms /    24 tokens
   llama_print_timings:       total time =  989285.43 ms /    24 tokens
   llama_print_timings:       total time =  996710.06 ms /    24 tokens
   llama_print_timings:       total time =  996534.64 ms /    24 tokens
   llama_print_timings:       total time =  991344.17 ms /    24 tokens
   llama_print_timings:       total time = 1035210.84 ms /    24 tokens
   llama_print_timings:       total time =  994714.13 ms /    24 tokens
   llama_print_timings:       total time =  984184.15 ms /    24 tokens
   llama_print_timings:       total time =  990909.45 ms /    24 tokens
   llama_print_timings:       total time =  991881.48 ms /    24 tokens
   llama_print_timings:       total time =  993918.03 ms /    24 tokens
   llama_print_timings:       total time =  990061.34 ms /    24 tokens
   llama_print_timings:       total time =  998076.69 ms /    24 tokens
   llama_print_timings:       total time =  997082.59 ms /    24 tokens
   llama_print_timings:       total time =  990677.58 ms /    24 tokens
   llama_print_timings:       total time = 1036054.94 ms /    24 tokens
   llama_print_timings:       total time =  994125.93 ms /    24 tokens
   llama_print_timings:       total time =  982467.01 ms /    24 tokens
   llama_print_timings:       total time =  990191.60 ms /    24 tokens
   llama_print_timings:       total time =  993319.24 ms /    24 tokens
   llama_print_timings:       total time =  992540.57 ms /    24 tokens

   2. tlb shootdowns from 'cat /proc/interrupts':

   BEFORE
   ------
   TLB:
   125553646  141418810  161932620  176853972  186655697  190399283
   192143823  196414038  192872439  193313658  193395617  192521416
   190788161  195067598  198016061  193607347  194293972  190786732
   191545637  194856822  191801931  189634535  190399803  196365922
   195268398  190115840  188050050  193194908  195317617  190820190
   190164820  185556071  226797214  229592631  216112464  209909495
   205575979  205950252  204948111  197999795  198892232  205287952
   199344631  195015158  195869844  198858745  195692876  200961904
   203463252  205921722  199850838  206145986  199613202  199961345
   200129577  203020521  207873649  203697671  197093386  204243803
   205993323  200934664  204193128  194435376  TLB shootdowns

   AFTER
   -----
   TLB:
     5648092    6610142    7032849    7882308    8088518    8352310
     8656536    8705136    8647426    8905583    8985408    8704522
     8884344    9026261    8929974    8869066    8877575    8810096
     8770984    8754503    8801694    8865925    8787524    8656432
     8755912    8682034    8773935    8832925    8797997    8515777
     8481240    8891258   10595243   10285973    9756935    9573681
     9398968    9069244    9242984    8899009    9310690    9029095
     9069758    9105825    9092703    9270202    9460287    9258546
     9180415    9232723    9270611    9175020    9490420    9360316
     9420818    9057663    9525631    9310152    9152242    8654483
     9181804    9050847    8919916    8883856  TLB shootdowns

   3. tlb numbers from 'perf stat' per test set:

   BEFORE
   ------
   3163679332	dTLB-load-misses
   2017751856	dTLB-store-misses
   327092903	iTLB-load-misses
   1357543886	tlb:tlb_flush

   AFTER
   -----
   2394694609	dTLB-load-misses
   861144167	dTLB-store-misses
   64055579	iTLB-load-misses
   69175002	tlb:tlb_flush

---

Changes from v9:

	1. Expand the candidates to which this mechanism applies:

	   BEFORE - The source folios at any type of migration.
	   AFTER  - Any folios that have been unmapped and freed.

	2. Change the workload used for the test:

	   BEFORE - XSBench
	   AFTER  - llama.cpp (one of the most popular real workloads)

	3. Change the test environment:

	   BEFORE - qemu machine, too-small DRAM (1GB), large remote mem
	   AFTER  - bare metal, real CXL memory, practical memory size

	4. Rename the mechanism from MIGRC (Migration Read Copy) to
	   LUF (Lazy Unmap Flush) to reflect that the current version of
	   the mechanism can be applied not only to unmapping during
	   migration but to any unmap code, e.g. unmapping in
	   shrink_folio_list().

	5. Fix a build error for riscv. (reported by kernel test robot)

	6. Supplement the commit messages to describe what this mechanism
	   is for, especially in the patches touching arch code.
	   (feedback from Thomas Gleixner)

	7. Clean up some trivial things.

Changes from v8:

	1. Rebase on akpm/mm.git mm-unstable as of April 18, 2024.
	2. Supplement comments and commit message.
	3. Change the candidates the migrc mechanism applies to:

	   BEFORE - The source folios at demotion and promotion.
	   AFTER  - The source folios at any type of migration.

	4. Change how the migrc mechanism works:

	   BEFORE - Reduce tlb flushes by deferring folio_free() for
	            source folios during demotion and promotion.
	   AFTER  - Reduce tlb flushes by deferring the tlb flush until
	            the folios actually become used, out of pcp or buddy.
	            The current version of migrc does *not* defer calling
	            folio_free() but lets it proceed as in the vanilla
	            kernel, with the folios marked as 'needs tlb flush'.
	            The flush is then handled when the page exits from
	            pcp or buddy, so that vm stats, e.g. free pages, are
	            not affected.

Changes from v7:

	1. Rewrite the cover letter to explain what the 'migrc' mechanism
	   is. (feedback from Andrew Morton)
	2. Supplement the commit message of the patch 'mm: Add APIs to
	   free a folio directly to the buddy bypassing pcp'.
	   (feedback from Andrew Morton)

Changes from v6:

	1. Fix build errors when CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH
	   is disabled, by moving migrc_flush_{start,end}() calls from
	   arch code to try_to_unmap_flush() in mm/rmap.c.

Changes from v5:

	1. Fix build errors when CONFIG_MIGRATION is disabled or
	   CONFIG_HWPOISON_INJECT is built as a module. (reported by
	   kernel test robot and Raymond Jay Golo)
	2. Organize the migrc code with two kconfigs, CONFIG_MIGRATION
	   and CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH.

Changes from v4:

	1. Rebase on v6.7.
	2. Fix build errors on arm64, which does nothing for the tlb
	   batch flush but has CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH.
	   (reported by kernel test robot)
	3. Don't use any page flag.  The system gives up the migrc
	   mechanism more often as a result, but that's okay; the final
	   improvement is still good enough.
	4. Instead, optimize the full tlb flush (arch_tlbbatch_flush())
	   by skipping CPUs for which the flush would be redundant.

Changes from v3:

	1. Drop the kconfig, CONFIG_MIGRC, and the sysctl knob,
	   migrc_enable. (feedback from Nadav)
	2. Remove the optimization that skips CPUs which have already
	   performed the needed tlb flushes for any reason when migrc
	   performs its tlb flushes, because I couldn't measure any
	   performance difference with and without it.
	   (feedback from Nadav)
	3. Minimize arch-specific code.  While at it, move all the migrc
	   declarations and inline functions from include/linux/mm.h to
	   mm/internal.h. (feedback from Dave Hansen, Nadav)
	4. Separate the part that pauses migrc when the system is under
	   high memory pressure into another patch. (feedback from Nadav)
	5. Rename:
	      a. arch_tlbbatch_clean() to arch_tlbbatch_clear(),
	      b. tlb_ubc_nowr to tlb_ubc_ro,
	      c. migrc_try_flush_free_folios() to migrc_flush_free_folios(),
	      d. migrc_stop to migrc_pause.
	   (feedback from Nadav)
	6. Use the ->lru list_head instead of introducing a new
	   llist_head. (feedback from Nadav)
	7. Use non-atomic page-flag operations where it's safe.
	   (feedback from Nadav)
	8. Use the stack instead of keeping a pointer to 'struct
	   migrc_req' in struct task_struct, which is only needed for
	   manipulating it locally. (feedback from Nadav)
	9. Replace a lot of simple functions with inline functions placed
	   in a header, mm/internal.h. (feedback from Nadav)
	10. Add additional sufficient comments. (feedback from Nadav)
	11. Remove a lot of wrapper functions. (feedback from Nadav)

Changes from RFC v2:

	1. Remove the additional field in struct page.  To do that, I
	   unioned migrc's list with the lru field and added a page flag.
	   I know a page flag is something we don't like to add, but
	   there was no choice because migrc has to distinguish folios
	   under its control from others.  To mitigate the impact, migrc
	   is only used on 64-bit systems.
	2. Remove the internal object allocator that I had introduced to
	   minimize the impact on the system; a ton of tests showed it
	   made no difference.
	3. Stop migrc from working when the system is under high memory
	   pressure, e.g. about to perform direct reclaim.  When the swap
	   mechanism was heavily used, I found the system suffered a
	   regression without this control.
	4. Exclude folios with pte_dirty() == true from migrc's interest
	   so that migrc can work more simply.
	5. Combine several tightly coupled patches into one.
	6. Add sufficient comments for better review.
	7. Manage migrc's requests per node (previously globally).
	8. Add tlb miss improvement numbers to the commit message.
	9. Test with more CPUs (4 -> 16) to see a bigger improvement.

Changes from RFC:

	1. Fix a bug triggered when a destination folio of a previous
	   migration becomes a source folio of the next migration before
	   the folio has been handled properly, so that the folio can
	   take part in another migration.  There was an inconsistency in
	   the folio's state.
	2. Split the patch set into more pieces so that folks can review
	   it better. (feedback from Nadav Amit)
	3. Fix a wrong usage of a barrier, e.g. smp_mb__after_atomic().
	   (feedback from Nadav Amit)
	4. Add sufficient comments to explain the patch set better.
	   (feedback from Nadav Amit)

Byungchul Park (12):
  x86/tlb: add APIs manipulating tlb batch's arch data
  arm64: tlbflush: add APIs manipulating tlb batch's arch data
  riscv, tlb: add APIs manipulating tlb batch's arch data
  x86/tlb, riscv/tlb, mm/rmap: separate arch_tlbbatch_clear() out of
    arch_tlbbatch_flush()
  mm: buddy: make room for a new variable, ugen, in struct page
  mm: add folio_put_ugen() to deliver unmap generation number to pcp or
    buddy
  mm: add a parameter, unmap generation number, to free_unref_folios()
  mm/rmap: recognize read-only tlb entries during batched tlb flush
  mm: implement LUF(Lazy Unmap Flush) defering tlb flush when folios get
    unmapped
  mm: separate move/undo parts from migrate_pages_batch()
  mm, migrate: apply luf mechanism to unmapping during migration
  mm, vmscan: apply luf mechanism to unmapping during folio reclaim

 arch/arm64/include/asm/tlbflush.h |  18 ++
 arch/riscv/include/asm/tlbflush.h |  21 ++
 arch/riscv/mm/tlbflush.c          |   1 -
 arch/x86/include/asm/tlbflush.h   |  18 ++
 arch/x86/mm/tlb.c                 |   2 -
 include/linux/mm.h                |  22 ++
 include/linux/mm_types.h          |  40 +++-
 include/linux/rmap.h              |   7 +-
 include/linux/sched.h             |  11 +
 mm/compaction.c                   |  10 +
 mm/internal.h                     | 115 +++++++++-
 mm/memory.c                       |   8 +
 mm/migrate.c                      | 184 ++++++++++------
 mm/mmap.c                         |   8 +
 mm/page_alloc.c                   | 157 +++++++++++---
 mm/page_isolation.c               |   6 +
 mm/page_reporting.c               |  10 +
 mm/rmap.c                         | 345 +++++++++++++++++++++++++++++-
 mm/swap.c                         |  18 +-
 mm/vmscan.c                       |  29 ++-
 20 files changed, 904 insertions(+), 126 deletions(-)


base-commit: f52bcd4a9f6058704a6f6b6b50418f579defd4fe
-- 
2.17.1




* [PATCH v10 01/12] x86/tlb: add APIs manipulating tlb batch's arch data
  2024-05-10  6:51 [PATCH v10 00/12] LUF(Lazy Unmap Flush) reducing tlb numbers over 90% Byungchul Park
@ 2024-05-10  6:51 ` Byungchul Park
  2024-05-10  6:51 ` [PATCH v10 02/12] arm64: tlbflush: " Byungchul Park
                   ` (14 subsequent siblings)
  15 siblings, 0 replies; 49+ messages in thread
From: Byungchul Park @ 2024-05-10  6:51 UTC (permalink / raw)
  To: linux-kernel, linux-mm
  Cc: kernel_team, akpm, ying.huang, vernhao, mgorman, hughd, willy,
	david, peterz, luto, tglx, mingo, bp, dave.hansen, rjgolo

A new mechanism, LUF (Lazy Unmap Flush), defers the tlb flush until
folios that have been unmapped and freed eventually get allocated again.
It's safe for folios that had been mapped read-only and were then
unmapped, since the contents of the folios don't change while they stay
in pcp or buddy, so we can still read the data through the stale tlb
entries.

This is preparation for the mechanism, which needs to recognize
read-only tlb entries by separating the tlb batch's arch data into two,
one for read-only entries and the other for writable ones, and by
merging the two when needed.

It also optimizes tlb shootdown by skipping CPUs that have already
performed the needed tlb flush since then.  To support this, add APIs
manipulating the arch data for x86.
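
A sketch of how generic code might use these helpers is below.  The
'deferred' and 'flushed' batches are illustrative; the real call sites
are added by later patches in this series:

	/* fold the pending unmap batch into a deferred one */
	arch_tlbbatch_fold(&deferred->arch, &tlb_ubc->arch);

	/* later, right before the freed pages get used again */
	if (!arch_tlbbatch_done(&deferred->arch, &flushed->arch))
		arch_tlbbatch_flush(&deferred->arch);
	arch_tlbbatch_clear(&deferred->arch);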

Signed-off-by: Byungchul Park <byungchul@sk.com>
---
 arch/x86/include/asm/tlbflush.h | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)

diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
index 25726893c6f4..a14f77c5cdde 100644
--- a/arch/x86/include/asm/tlbflush.h
+++ b/arch/x86/include/asm/tlbflush.h
@@ -5,6 +5,7 @@
 #include <linux/mm_types.h>
 #include <linux/mmu_notifier.h>
 #include <linux/sched.h>
+#include <linux/cpumask.h>
 
 #include <asm/processor.h>
 #include <asm/cpufeature.h>
@@ -293,6 +294,23 @@ static inline void arch_flush_tlb_batched_pending(struct mm_struct *mm)
 
 extern void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch);
 
+static inline void arch_tlbbatch_clear(struct arch_tlbflush_unmap_batch *batch)
+{
+	cpumask_clear(&batch->cpumask);
+}
+
+static inline void arch_tlbbatch_fold(struct arch_tlbflush_unmap_batch *bdst,
+		struct arch_tlbflush_unmap_batch *bsrc)
+{
+	cpumask_or(&bdst->cpumask, &bdst->cpumask, &bsrc->cpumask);
+}
+
+static inline bool arch_tlbbatch_done(struct arch_tlbflush_unmap_batch *bdst,
+		struct arch_tlbflush_unmap_batch *bsrc)
+{
+	return !cpumask_andnot(&bdst->cpumask, &bdst->cpumask, &bsrc->cpumask);
+}
+
 static inline bool pte_flags_need_flush(unsigned long oldflags,
 					unsigned long newflags,
 					bool ignore_access)
-- 
2.17.1




* [PATCH v10 02/12] arm64: tlbflush: add APIs manipulating tlb batch's arch data
  2024-05-10  6:51 [PATCH v10 00/12] LUF(Lazy Unmap Flush) reducing tlb numbers over 90% Byungchul Park
  2024-05-10  6:51 ` [PATCH v10 01/12] x86/tlb: add APIs manipulating tlb batch's arch data Byungchul Park
@ 2024-05-10  6:51 ` Byungchul Park
  2024-05-10  6:51 ` [PATCH v10 03/12] riscv, tlb: " Byungchul Park
                   ` (13 subsequent siblings)
  15 siblings, 0 replies; 49+ messages in thread
From: Byungchul Park @ 2024-05-10  6:51 UTC (permalink / raw)
  To: linux-kernel, linux-mm
  Cc: kernel_team, akpm, ying.huang, vernhao, mgorman, hughd, willy,
	david, peterz, luto, tglx, mingo, bp, dave.hansen, rjgolo

A new mechanism, LUF (Lazy Unmap Flush), defers the tlb flush until
folios that have been unmapped and freed eventually get allocated again.
It's safe for folios that had been mapped read-only and were then
unmapped, since the contents of the folios don't change while they stay
in pcp or buddy, so we can still read the data through the stale tlb
entries.

This is preparation for the mechanism, which requires manipulating the
tlb batch's arch data.  Even though arm64 does nothing with this data,
an arch with CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH should still
provide the APIs.

Signed-off-by: Byungchul Park <byungchul@sk.com>
---
 arch/arm64/include/asm/tlbflush.h | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)

diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
index a75de2665d84..b8c7fbc1c68e 100644
--- a/arch/arm64/include/asm/tlbflush.h
+++ b/arch/arm64/include/asm/tlbflush.h
@@ -347,6 +347,24 @@ static inline void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch)
 	dsb(ish);
 }
 
+static inline void arch_tlbbatch_clear(struct arch_tlbflush_unmap_batch *batch)
+{
+	/* nothing to do */
+}
+
+static inline void arch_tlbbatch_fold(struct arch_tlbflush_unmap_batch *bdst,
+			       struct arch_tlbflush_unmap_batch *bsrc)
+{
+	/* nothing to do */
+}
+
+static inline bool arch_tlbbatch_done(struct arch_tlbflush_unmap_batch *bdst,
+			       struct arch_tlbflush_unmap_batch *bsrc)
+{
+	/* The kernel can consider the tlb batch as always done. */
+	return true;
+}
+
 /*
  * This is meant to avoid soft lock-ups on large TLB flushing ranges and not
  * necessarily a performance improvement.
-- 
2.17.1




* [PATCH v10 03/12] riscv, tlb: add APIs manipulating tlb batch's arch data
  2024-05-10  6:51 [PATCH v10 00/12] LUF(Lazy Unmap Flush) reducing tlb numbers over 90% Byungchul Park
  2024-05-10  6:51 ` [PATCH v10 01/12] x86/tlb: add APIs manipulating tlb batch's arch data Byungchul Park
  2024-05-10  6:51 ` [PATCH v10 02/12] arm64: tlbflush: " Byungchul Park
@ 2024-05-10  6:51 ` Byungchul Park
  2024-05-10  6:51 ` [PATCH v10 04/12] x86/tlb, riscv/tlb, mm/rmap: separate arch_tlbbatch_clear() out of arch_tlbbatch_flush() Byungchul Park
                   ` (12 subsequent siblings)
  15 siblings, 0 replies; 49+ messages in thread
From: Byungchul Park @ 2024-05-10  6:51 UTC (permalink / raw)
  To: linux-kernel, linux-mm
  Cc: kernel_team, akpm, ying.huang, vernhao, mgorman, hughd, willy,
	david, peterz, luto, tglx, mingo, bp, dave.hansen, rjgolo

A new mechanism, LUF (Lazy Unmap Flush), defers the tlb flush until
folios that have been unmapped and freed eventually get allocated again.
It's safe for folios that had been mapped read-only and were then
unmapped, since the contents of the folios don't change while they stay
in pcp or buddy, so we can still read the data through the stale tlb
entries.

This is preparation for the mechanism, which needs to recognize
read-only tlb entries by separating the tlb batch's arch data into two,
one for read-only entries and the other for writable ones, and by
merging the two when needed.

It also optimizes tlb shootdown by skipping CPUs that have already
performed the needed tlb flush since then.  To support this, add APIs
manipulating the arch data for riscv.

Signed-off-by: Byungchul Park <byungchul@sk.com>
---
 arch/riscv/include/asm/tlbflush.h | 21 +++++++++++++++++++++
 1 file changed, 21 insertions(+)

diff --git a/arch/riscv/include/asm/tlbflush.h b/arch/riscv/include/asm/tlbflush.h
index 4112cc8d1d69..480c082ccde3 100644
--- a/arch/riscv/include/asm/tlbflush.h
+++ b/arch/riscv/include/asm/tlbflush.h
@@ -8,6 +8,7 @@
 #define _ASM_RISCV_TLBFLUSH_H
 
 #include <linux/mm_types.h>
+#include <linux/cpumask.h>
 #include <asm/smp.h>
 #include <asm/errata_list.h>
 
@@ -55,6 +56,26 @@ void arch_tlbbatch_add_pending(struct arch_tlbflush_unmap_batch *batch,
 void arch_flush_tlb_batched_pending(struct mm_struct *mm);
 void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch);
 
+static inline void arch_tlbbatch_clear(struct arch_tlbflush_unmap_batch *batch)
+{
+	cpumask_clear(&batch->cpumask);
+
+}
+
+static inline void arch_tlbbatch_fold(struct arch_tlbflush_unmap_batch *bdst,
+		struct arch_tlbflush_unmap_batch *bsrc)
+{
+	cpumask_or(&bdst->cpumask, &bdst->cpumask, &bsrc->cpumask);
+
+}
+
+static inline bool arch_tlbbatch_done(struct arch_tlbflush_unmap_batch *bdst,
+		struct arch_tlbflush_unmap_batch *bsrc)
+{
+	return !cpumask_andnot(&bdst->cpumask, &bdst->cpumask, &bsrc->cpumask);
+
+}
+
 #else /* CONFIG_SMP && CONFIG_MMU */
 
 #define flush_tlb_all() local_flush_tlb_all()
-- 
2.17.1




* [PATCH v10 04/12] x86/tlb, riscv/tlb, mm/rmap: separate arch_tlbbatch_clear() out of arch_tlbbatch_flush()
  2024-05-10  6:51 [PATCH v10 00/12] LUF(Lazy Unmap Flush) reducing tlb numbers over 90% Byungchul Park
                   ` (2 preceding siblings ...)
  2024-05-10  6:51 ` [PATCH v10 03/12] riscv, tlb: " Byungchul Park
@ 2024-05-10  6:51 ` Byungchul Park
  2024-05-10  6:51 ` [PATCH v10 05/12] mm: buddy: make room for a new variable, ugen, in struct page Byungchul Park
                   ` (11 subsequent siblings)
  15 siblings, 0 replies; 49+ messages in thread
From: Byungchul Park @ 2024-05-10  6:51 UTC (permalink / raw)
  To: linux-kernel, linux-mm
  Cc: kernel_team, akpm, ying.huang, vernhao, mgorman, hughd, willy,
	david, peterz, luto, tglx, mingo, bp, dave.hansen, rjgolo

A new mechanism, LUF (Lazy Unmap Flush), defers the tlb flush until
folios that have been unmapped and freed eventually get allocated again.
It's safe for folios that had been mapped read-only and were then
unmapped, since the contents of the folios don't change while they stay
in pcp or buddy, so we can still read the data through the stale tlb
entries.

This is preparation for the mechanism, which needs to avoid redundant
tlb flushes by manipulating the tlb batch's arch data.  To achieve that,
separate the part clearing the tlb batch's arch data out of
arch_tlbbatch_flush().

Signed-off-by: Byungchul Park <byungchul@sk.com>
---
 arch/riscv/mm/tlbflush.c | 1 -
 arch/x86/mm/tlb.c        | 2 --
 mm/rmap.c                | 1 +
 3 files changed, 1 insertion(+), 3 deletions(-)

diff --git a/arch/riscv/mm/tlbflush.c b/arch/riscv/mm/tlbflush.c
index 07d743f87b3f..9cbd27148357 100644
--- a/arch/riscv/mm/tlbflush.c
+++ b/arch/riscv/mm/tlbflush.c
@@ -234,5 +234,4 @@ void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch)
 {
 	__flush_tlb_range(&batch->cpumask, FLUSH_TLB_NO_ASID, 0,
 			  FLUSH_TLB_MAX_SIZE, PAGE_SIZE);
-	cpumask_clear(&batch->cpumask);
 }
diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index 44ac64f3a047..24bce69222cd 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -1265,8 +1265,6 @@ void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch)
 		local_irq_enable();
 	}
 
-	cpumask_clear(&batch->cpumask);
-
 	put_flush_tlb_info();
 	put_cpu();
 }
diff --git a/mm/rmap.c b/mm/rmap.c
index 2608c40dffad..cf8a99a49aef 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -649,6 +649,7 @@ void try_to_unmap_flush(void)
 		return;
 
 	arch_tlbbatch_flush(&tlb_ubc->arch);
+	arch_tlbbatch_clear(&tlb_ubc->arch);
 	tlb_ubc->flush_required = false;
 	tlb_ubc->writable = false;
 }
-- 
2.17.1




* [PATCH v10 05/12] mm: buddy: make room for a new variable, ugen, in struct page
  2024-05-10  6:51 [PATCH v10 00/12] LUF(Lazy Unmap Flush) reducing tlb numbers over 90% Byungchul Park
                   ` (3 preceding siblings ...)
  2024-05-10  6:51 ` [PATCH v10 04/12] x86/tlb, riscv/tlb, mm/rmap: separate arch_tlbbatch_clear() out of arch_tlbbatch_flush() Byungchul Park
@ 2024-05-10  6:51 ` Byungchul Park
  2024-05-10  6:52 ` [PATCH v10 06/12] mm: add folio_put_ugen() to deliver unmap generation number to pcp or buddy Byungchul Park
                   ` (10 subsequent siblings)
  15 siblings, 0 replies; 49+ messages in thread
From: Byungchul Park @ 2024-05-10  6:51 UTC (permalink / raw)
  To: linux-kernel, linux-mm
  Cc: kernel_team, akpm, ying.huang, vernhao, mgorman, hughd, willy,
	david, peterz, luto, tglx, mingo, bp, dave.hansen, rjgolo

Functionally, no change.  This is preparation for the luf mechanism,
which tracks the need for a tlb flush for each page residing in buddy,
using a generation number in struct page.

Fortunately, the private field in struct page is used in buddy only to
store the page order, ranging from 0 to MAX_PAGE_ORDER, which fits in an
unsigned short int.  So split it into two smaller fields, order and
ugen, so that both can be used in buddy at the same time.

Signed-off-by: Byungchul Park <byungchul@sk.com>
---
 include/linux/mm_types.h | 40 +++++++++++++++++++++++++++++++++-------
 mm/internal.h            |  4 ++--
 mm/page_alloc.c          | 13 ++++++++-----
 3 files changed, 43 insertions(+), 14 deletions(-)

diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index db0adf5721cc..cd4ec0d10ffb 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -108,13 +108,25 @@ struct page {
 				pgoff_t index;		/* Our offset within mapping. */
 				unsigned long share;	/* share count for fsdax */
 			};
-			/**
-			 * @private: Mapping-private opaque data.
-			 * Usually used for buffer_heads if PagePrivate.
-			 * Used for swp_entry_t if PageSwapCache.
-			 * Indicates order in the buddy system if PageBuddy.
-			 */
-			unsigned long private;
+			union {
+				/**
+				 * @private: Mapping-private opaque data.
+				 * Usually used for buffer_heads if PagePrivate.
+				 * Used for swp_entry_t if PageSwapCache.
+				 */
+				unsigned long private;
+				struct {
+					/*
+					 * Indicates order in the buddy system if PageBuddy.
+					 */
+					unsigned short int order;
+					/*
+					 * Tracks need of tlb flush used by luf,
+					 * which stands for lazy unmap flush.
+					 */
+					unsigned short int ugen;
+				};
+			};
 		};
 		struct {	/* page_pool used by netstack */
 			/**
@@ -521,6 +533,20 @@ static inline void set_page_private(struct page *page, unsigned long private)
 	page->private = private;
 }
 
+#define page_buddy_order(page)		((page)->order)
+
+static inline void set_page_buddy_order(struct page *page, unsigned int order)
+{
+	page->order = (unsigned short int)order;
+}
+
+#define page_buddy_ugen(page)		((page)->ugen)
+
+static inline void set_page_buddy_ugen(struct page *page, unsigned short int ugen)
+{
+	page->ugen = ugen;
+}
+
 static inline void *folio_get_private(struct folio *folio)
 {
 	return folio->private;
diff --git a/mm/internal.h b/mm/internal.h
index c6483f73ec13..eb9c7d8650fc 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -453,7 +453,7 @@ struct alloc_context {
 static inline unsigned int buddy_order(struct page *page)
 {
 	/* PageBuddy() must be checked by the caller */
-	return page_private(page);
+	return page_buddy_order(page);
 }
 
 /*
@@ -467,7 +467,7 @@ static inline unsigned int buddy_order(struct page *page)
  * times, potentially observing different values in the tests and the actual
  * use of the result.
  */
-#define buddy_order_unsafe(page)	READ_ONCE(page_private(page))
+#define buddy_order_unsafe(page)	READ_ONCE(page_buddy_order(page))
 
 /*
  * This function checks whether a page is free && is the buddy
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 33d4a1be927b..917b22b429d1 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -565,9 +565,12 @@ void prep_compound_page(struct page *page, unsigned int order)
 	prep_compound_head(page, order);
 }
 
-static inline void set_buddy_order(struct page *page, unsigned int order)
+static inline void set_buddy_order_ugen(struct page *page,
+					unsigned int order,
+					unsigned short int ugen)
 {
-	set_page_private(page, order);
+	set_page_buddy_order(page, order);
+	set_page_buddy_ugen(page, ugen);
 	__SetPageBuddy(page);
 }
 
@@ -834,7 +837,7 @@ static inline void __free_one_page(struct page *page,
 	}
 
 done_merging:
-	set_buddy_order(page, order);
+	set_buddy_order_ugen(page, order, 0);
 
 	if (fpi_flags & FPI_TO_TAIL)
 		to_tail = true;
@@ -1344,7 +1347,7 @@ static inline void expand(struct zone *zone, struct page *page,
 			continue;
 
 		__add_to_free_list(&page[size], zone, high, migratetype, false);
-		set_buddy_order(&page[size], high);
+		set_buddy_order_ugen(&page[size], high, 0);
 		nr_added += size;
 	}
 	account_freepages(zone, nr_added, migratetype);
@@ -6802,7 +6805,7 @@ static void break_down_buddy_pages(struct zone *zone, struct page *page,
 			continue;
 
 		add_to_free_list(current_buddy, zone, high, migratetype, false);
-		set_buddy_order(current_buddy, high);
+		set_buddy_order_ugen(current_buddy, high, 0);
 	}
 }
 
-- 
2.17.1




* [PATCH v10 06/12] mm: add folio_put_ugen() to deliver unmap generation number to pcp or buddy
  2024-05-10  6:51 [PATCH v10 00/12] LUF(Lazy Unmap Flush) reducing tlb numbers over 90% Byungchul Park
                   ` (4 preceding siblings ...)
  2024-05-10  6:51 ` [PATCH v10 05/12] mm: buddy: make room for a new variable, ugen, in struct page Byungchul Park
@ 2024-05-10  6:52 ` Byungchul Park
  2024-05-10  6:52 ` [PATCH v10 07/12] mm: add a parameter, unmap generation number, to free_unref_folios() Byungchul Park
                   ` (9 subsequent siblings)
  15 siblings, 0 replies; 49+ messages in thread
From: Byungchul Park @ 2024-05-10  6:52 UTC (permalink / raw)
  To: linux-kernel, linux-mm
  Cc: kernel_team, akpm, ying.huang, vernhao, mgorman, hughd, willy,
	david, peterz, luto, tglx, mingo, bp, dave.hansen, rjgolo

Introduce a new API, folio_put_ugen(), to deliver the unmap generation
number to pcp or buddy; it will be used by the luf mechanism to track
the need for a tlb flush for each page residing in pcp or buddy.

For now, the delivery works for the following call path, which releases
source folios during migration:

	folio_put_ugen()
	   __folio_put_ugen()
	      free_unref_page()
	         free_unref_page_commit()
	         free_one_page()
	            __free_one_page()

The generation number should be handed over properly when pages travel
between pcp and buddy, and the necessary handling must be done when
pages exit from pcp or buddy.  This patch doesn't include the actual
body of the tlb flush on exit, which will be filled in by the main patch
of the luf mechanism.

Signed-off-by: Byungchul Park <byungchul@sk.com>
---
 include/linux/mm.h    |  22 +++++++
 include/linux/sched.h |   1 +
 mm/compaction.c       |  10 +++
 mm/internal.h         |  70 +++++++++++++++++++-
 mm/page_alloc.c       | 144 ++++++++++++++++++++++++++++++++++--------
 mm/page_isolation.c   |   6 ++
 mm/page_reporting.c   |  10 +++
 mm/swap.c             |  12 +++-
 8 files changed, 247 insertions(+), 28 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index dc33f8269fb5..2369ebedb8bd 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1312,6 +1312,7 @@ static inline struct folio *virt_to_folio(const void *x)
 }
 
 void __folio_put(struct folio *folio);
+void __folio_put_ugen(struct folio *folio, unsigned short int ugen);
 
 void put_pages_list(struct list_head *pages);
 
@@ -1509,6 +1510,27 @@ static inline void folio_put(struct folio *folio)
 		__folio_put(folio);
 }
 
+/**
+ * folio_put_ugen - Decrement the last reference count on a folio.
+ * @folio: The folio.
+ * @ugen: The unmap generation # of TLB flush that the folio requires.
+ *
+ * The folio's reference count should be one since the only user, the
+ * folio migration code, calls folio_put_ugen() only when the folio has
+ * no other references.  The memory will be released back to the page
+ * allocator and may be used by another allocation immediately.  Do not
+ * access the memory or the struct folio after calling folio_put_ugen().
+ *
+ * Context: May be called in process or interrupt context, but not in NMI
+ * context.  May be called while holding a spinlock.
+ */
+static inline void folio_put_ugen(struct folio *folio, unsigned short int ugen)
+{
+	if (WARN_ON(!folio_put_testzero(folio)))
+		return;
+	__folio_put_ugen(folio, ugen);
+}
+
 /**
  * folio_put_refs - Reduce the reference count on a folio.
  * @folio: The folio.
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 4118b3f959c3..2aa48adad226 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1339,6 +1339,7 @@ struct task_struct {
 #endif
 
 	struct tlbflush_unmap_batch	tlb_ubc;
+	unsigned short int		ugen;
 
 	/* Cache last used pipe for splice(): */
 	struct pipe_inode_info		*splice_pipe;
diff --git a/mm/compaction.c b/mm/compaction.c
index e731d45befc7..13799fbb2a9a 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -701,6 +701,11 @@ static unsigned long isolate_freepages_block(struct compact_control *cc,
 	if (locked)
 		spin_unlock_irqrestore(&cc->zone->lock, flags);
 
+	/*
+	 * Check and flush before using the isolated pages.
+	 */
+	check_flush_task_ugen();
+
 	/*
 	 * Be careful to not go outside of the pageblock.
 	 */
@@ -1673,6 +1678,11 @@ static void fast_isolate_freepages(struct compact_control *cc)
 
 		spin_unlock_irqrestore(&cc->zone->lock, flags);
 
+		/*
+		 * Check and flush before using the isolated pages.
+		 */
+		check_flush_task_ugen();
+
 		/* Skip fast search if enough freepages isolated */
 		if (cc->nr_freepages >= cc->nr_migratepages)
 			break;
diff --git a/mm/internal.h b/mm/internal.h
index eb9c7d8650fc..332662047c17 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -638,7 +638,7 @@ extern bool free_pages_prepare(struct page *page, unsigned int order);
 
 extern int user_min_free_kbytes;
 
-void free_unref_page(struct page *page, unsigned int order);
+void free_unref_page(struct page *page, unsigned int order, unsigned short int ugen);
 void free_unref_folios(struct folio_batch *fbatch);
 
 extern void zone_pcp_reset(struct zone *zone);
@@ -1512,4 +1512,72 @@ static inline void shrinker_debugfs_remove(struct dentry *debugfs_entry,
 void workingset_update_node(struct xa_node *node);
 extern struct list_lru shadow_nodes;
 
+#if defined(CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH)
+static inline unsigned short int ugen_latest(unsigned short int a, unsigned short int b)
+{
+	if (!a || !b)
+		return a + b;
+
+	/*
+	 * The ugen is wrapped around so let's use this trick.
+	 */
+	if ((short int)(a - b) < 0)
+		return b;
+	else
+		return a;
+}
+
+static inline void update_task_ugen(unsigned short int ugen)
+{
+	current->ugen = ugen_latest(current->ugen, ugen);
+}
+
+static inline unsigned short int hand_over_task_ugen(void)
+{
+	unsigned short int ret = current->ugen;
+
+	current->ugen = 0;
+	return ret;
+}
+
+static inline void check_flush_task_ugen(void)
+{
+	/*
+	 * XXX: luf mechanism will handle this. For now, do nothing but
+	 * reset current's ugen to finalize this turn.
+	 */
+	current->ugen = 0;
+}
+
+/*
+ * Check the constraints on what luf currently supports.
+ */
+static inline bool can_luf_folio(struct folio *f)
+{
+	bool can_luf = true;
+
+	/*
+	 * XXX: Remove the constraint once luf handles zone device folio.
+	 */
+	can_luf = can_luf && likely(!folio_is_zone_device(f));
+
+	/*
+	 * XXX: Remove the constraint once luf handles hugetlb folio.
+	 */
+	can_luf = can_luf && likely(!folio_test_hugetlb(f));
+
+	/*
+	 * XXX: Remove the constraint once luf handles large folio.
+	 */
+	can_luf = can_luf && likely(!folio_test_large(f));
+
+	return can_luf;
+}
+#else /* CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH */
+static inline unsigned short int ugen_latest(unsigned short int a, unsigned short int b) { return 0; }
+static inline void update_task_ugen(unsigned short int ugen) {}
+static inline unsigned short int hand_over_task_ugen(void) { return 0; }
+static inline void check_flush_task_ugen(void) {}
+static inline bool can_luf_folio(struct folio *f) { return false; }
+#endif
 #endif	/* __MM_INTERNAL_H */
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 917b22b429d1..2cd278c207d1 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -696,6 +696,7 @@ static inline void __del_page_from_free_list(struct page *page, struct zone *zon
 	if (page_reported(page))
 		__ClearPageReported(page);
 
+	update_task_ugen(page_buddy_ugen(page));
 	list_del(&page->buddy_list);
 	__ClearPageBuddy(page);
 	set_page_private(page, 0);
@@ -768,7 +769,7 @@ buddy_merge_likely(unsigned long pfn, unsigned long buddy_pfn,
 static inline void __free_one_page(struct page *page,
 		unsigned long pfn,
 		struct zone *zone, unsigned int order,
-		int migratetype, fpi_t fpi_flags)
+		int migratetype, fpi_t fpi_flags, unsigned short int ugen)
 {
 	struct capture_control *capc = task_capc(zone);
 	unsigned long buddy_pfn = 0;
@@ -783,12 +784,22 @@ static inline void __free_one_page(struct page *page,
 	VM_BUG_ON_PAGE(pfn & ((1 << order) - 1), page);
 	VM_BUG_ON_PAGE(bad_range(zone, page), page);
 
+	/*
+	 * Ensure private is zero before using it inside buddy.
+	 */
+	set_page_private(page, 0);
+
 	account_freepages(zone, 1 << order, migratetype);
 
 	while (order < MAX_PAGE_ORDER) {
 		int buddy_mt = migratetype;
 
 		if (compaction_capture(capc, page, order, migratetype)) {
+			/*
+			 * Capturer will check_flush_task_ugen() through
+			 * prep_new_page().
+			 */
+			update_task_ugen(ugen);
 			account_freepages(zone, -(1 << order), migratetype);
 			return;
 		}
@@ -819,6 +830,11 @@ static inline void __free_one_page(struct page *page,
 		if (page_is_guard(buddy))
 			clear_page_guard(zone, buddy, order);
 		else
+			/*
+			 * __del_page_from_free_list() updates current's
+			 * ugen that pairs with hand_over_task_ugen() below
+			 * in this function.
+			 */
 			__del_page_from_free_list(buddy, zone, order, buddy_mt);
 
 		if (unlikely(buddy_mt != migratetype)) {
@@ -837,7 +853,8 @@ static inline void __free_one_page(struct page *page,
 	}
 
 done_merging:
-	set_buddy_order_ugen(page, order, 0);
+	ugen = ugen_latest(ugen, hand_over_task_ugen());
+	set_buddy_order_ugen(page, order, ugen);
 
 	if (fpi_flags & FPI_TO_TAIL)
 		to_tail = true;
@@ -1048,6 +1065,11 @@ __always_inline bool free_pages_prepare(struct page *page,
 
 	VM_BUG_ON_PAGE(PageTail(page), page);
 
+	/*
+	 * Ensure private is zero before using it inside pcp.
+	 */
+	set_page_private(page, 0);
+
 	trace_mm_page_free(page, order);
 	kmsan_free_page(page, order);
 
@@ -1179,17 +1201,23 @@ static void free_pcppages_bulk(struct zone *zone, int count,
 		do {
 			unsigned long pfn;
 			int mt;
+			unsigned short int ugen;
 
 			page = list_last_entry(list, struct page, pcp_list);
 			pfn = page_to_pfn(page);
 			mt = get_pfnblock_migratetype(page, pfn);
 
+			/*
+			 * pcp uses private to store ugen.
+			 */
+			ugen = page_private(page);
+
 			/* must delete to avoid corrupting pcp list */
 			list_del(&page->pcp_list);
 			count -= nr_pages;
 			pcp->count -= nr_pages;
 
-			__free_one_page(page, pfn, zone, order, mt, FPI_NONE);
+			__free_one_page(page, pfn, zone, order, mt, FPI_NONE, ugen);
 			trace_mm_page_pcpu_drain(page, order, mt);
 		} while (count > 0 && !list_empty(list));
 	}
@@ -1199,14 +1227,14 @@ static void free_pcppages_bulk(struct zone *zone, int count,
 
 static void free_one_page(struct zone *zone, struct page *page,
 			  unsigned long pfn, unsigned int order,
-			  fpi_t fpi_flags)
+			  fpi_t fpi_flags, unsigned short int ugen)
 {
 	unsigned long flags;
 	int migratetype;
 
 	spin_lock_irqsave(&zone->lock, flags);
 	migratetype = get_pfnblock_migratetype(page, pfn);
-	__free_one_page(page, pfn, zone, order, migratetype, fpi_flags);
+	__free_one_page(page, pfn, zone, order, migratetype, fpi_flags, ugen);
 	spin_unlock_irqrestore(&zone->lock, flags);
 }
 
@@ -1219,7 +1247,7 @@ static void __free_pages_ok(struct page *page, unsigned int order,
 	if (!free_pages_prepare(page, order))
 		return;
 
-	free_one_page(zone, page, pfn, order, fpi_flags);
+	free_one_page(zone, page, pfn, order, fpi_flags, 0);
 
 	__count_vm_events(PGFREE, 1 << order);
 }
@@ -1484,6 +1512,10 @@ inline void post_alloc_hook(struct page *page, unsigned int order,
 static void prep_new_page(struct page *page, unsigned int order, gfp_t gfp_flags,
 							unsigned int alloc_flags)
 {
+	/*
+	 * Check and flush before using the pages.
+	 */
+	check_flush_task_ugen();
 	post_alloc_hook(page, order, gfp_flags);
 
 	if (order && (gfp_flags & __GFP_COMP))
@@ -1519,6 +1551,10 @@ struct page *__rmqueue_smallest(struct zone *zone, unsigned int order,
 		page = get_page_from_free_area(area, migratetype);
 		if (!page)
 			continue;
+		/*
+		 * del_page_from_free_list() updates current's ugen that
+		 * pairs with check_flush_task_ugen() in prep_new_page().
+		 */
 		del_page_from_free_list(page, zone, current_order, migratetype);
 		expand(zone, page, order, current_order, migratetype);
 		trace_mm_page_alloc_zone_locked(page, order, migratetype,
@@ -1681,7 +1717,8 @@ static unsigned long find_large_buddy(unsigned long start_pfn)
 
 /* Split a multi-block free page into its individual pageblocks */
 static void split_large_buddy(struct zone *zone, struct page *page,
-			      unsigned long pfn, int order)
+			      unsigned long pfn, int order,
+			      unsigned short int ugen)
 {
 	unsigned long end_pfn = pfn + (1 << order);
 
@@ -1694,7 +1731,7 @@ static void split_large_buddy(struct zone *zone, struct page *page,
 	while (pfn != end_pfn) {
 		int mt = get_pfnblock_migratetype(page, pfn);
 
-		__free_one_page(page, pfn, zone, pageblock_order, mt, FPI_NONE);
+		__free_one_page(page, pfn, zone, pageblock_order, mt, FPI_NONE, ugen);
 		pfn += pageblock_nr_pages;
 		page = pfn_to_page(pfn);
 	}
@@ -1736,22 +1773,34 @@ bool move_freepages_block_isolate(struct zone *zone, struct page *page,
 	if (pfn != start_pfn) {
 		struct page *buddy = pfn_to_page(pfn);
 		int order = buddy_order(buddy);
+		unsigned short int ugen;
 
+		/*
+		 * del_page_from_free_list() updates current's ugen that
+		 * pairs with the following hand_over_task_ugen().
+		 */
 		del_page_from_free_list(buddy, zone, order,
 					get_pfnblock_migratetype(buddy, pfn));
+		ugen = hand_over_task_ugen();
 		set_pageblock_migratetype(page, migratetype);
-		split_large_buddy(zone, buddy, pfn, order);
+		split_large_buddy(zone, buddy, pfn, order, ugen);
 		return true;
 	}
 
 	/* We're the starting block of a larger buddy */
 	if (PageBuddy(page) && buddy_order(page) > pageblock_order) {
 		int order = buddy_order(page);
+		unsigned short int ugen;
 
+		/*
+		 * del_page_from_free_list() updates current's ugen that
+		 * pairs with the following hand_over_task_ugen().
+		 */
 		del_page_from_free_list(page, zone, order,
 					get_pfnblock_migratetype(page, pfn));
+		ugen = hand_over_task_ugen();
 		set_pageblock_migratetype(page, migratetype);
-		split_large_buddy(zone, page, pfn, order);
+		split_large_buddy(zone, page, pfn, order, ugen);
 		return true;
 	}
 move:
@@ -1871,6 +1920,10 @@ steal_suitable_fallback(struct zone *zone, struct page *page,
 
 	/* Take ownership for orders >= pageblock_order */
 	if (current_order >= pageblock_order) {
+		/*
+		 * del_page_from_free_list() updates current's ugen that
+		 * pairs with check_flush_task_ugen() in prep_new_page().
+		 */
 		del_page_from_free_list(page, zone, current_order, block_type);
 		change_pageblock_range(page, current_order, start_type);
 		expand(zone, page, order, current_order, start_type);
@@ -1926,6 +1979,10 @@ steal_suitable_fallback(struct zone *zone, struct page *page,
 	}
 
 single_page:
+	/*
+	 * del_page_from_free_list() updates current's ugen that pairs
+	 * with check_flush_task_ugen() in prep_new_page().
+	 */
 	del_page_from_free_list(page, zone, current_order, block_type);
 	expand(zone, page, order, current_order, block_type);
 	return page;
@@ -2547,7 +2604,7 @@ static int nr_pcp_high(struct per_cpu_pages *pcp, struct zone *zone,
 
 static void free_unref_page_commit(struct zone *zone, struct per_cpu_pages *pcp,
 				   struct page *page, int migratetype,
-				   unsigned int order)
+				   unsigned int order, unsigned short int ugen)
 {
 	int high, batch;
 	int pindex;
@@ -2561,6 +2618,11 @@ static void free_unref_page_commit(struct zone *zone, struct per_cpu_pages *pcp,
 	pcp->alloc_factor >>= 1;
 	__count_vm_events(PGFREE, 1 << order);
 	pindex = order_to_pindex(migratetype, order);
+
+	/*
+	 * pcp uses private to store ugen.
+	 */
+	set_page_private(page, ugen);
 	list_add(&page->pcp_list, &pcp->lists[pindex]);
 	pcp->count += 1 << order;
 
@@ -2596,7 +2658,8 @@ static void free_unref_page_commit(struct zone *zone, struct per_cpu_pages *pcp,
 /*
  * Free a pcp page
  */
-void free_unref_page(struct page *page, unsigned int order)
+void free_unref_page(struct page *page, unsigned int order,
+		     unsigned short int ugen)
 {
 	unsigned long __maybe_unused UP_flags;
 	struct per_cpu_pages *pcp;
@@ -2622,7 +2685,7 @@ void free_unref_page(struct page *page, unsigned int order)
 	migratetype = get_pfnblock_migratetype(page, pfn);
 	if (unlikely(migratetype >= MIGRATE_PCPTYPES)) {
 		if (unlikely(is_migrate_isolate(migratetype))) {
-			free_one_page(page_zone(page), page, pfn, order, FPI_NONE);
+			free_one_page(page_zone(page), page, pfn, order, FPI_NONE, ugen);
 			return;
 		}
 		migratetype = MIGRATE_MOVABLE;
@@ -2632,10 +2695,10 @@ void free_unref_page(struct page *page, unsigned int order)
 	pcp_trylock_prepare(UP_flags);
 	pcp = pcp_spin_trylock(zone->per_cpu_pageset);
 	if (pcp) {
-		free_unref_page_commit(zone, pcp, page, migratetype, order);
+		free_unref_page_commit(zone, pcp, page, migratetype, order, ugen);
 		pcp_spin_unlock(pcp);
 	} else {
-		free_one_page(zone, page, pfn, order, FPI_NONE);
+		free_one_page(zone, page, pfn, order, FPI_NONE, ugen);
 	}
 	pcp_trylock_finish(UP_flags);
 }
@@ -2666,7 +2729,7 @@ void free_unref_folios(struct folio_batch *folios)
 		 */
 		if (!pcp_allowed_order(order)) {
 			free_one_page(folio_zone(folio), &folio->page,
-				      pfn, order, FPI_NONE);
+				      pfn, order, FPI_NONE, 0);
 			continue;
 		}
 		folio->private = (void *)(unsigned long)order;
@@ -2702,7 +2765,7 @@ void free_unref_folios(struct folio_batch *folios)
 			 */
 			if (is_migrate_isolate(migratetype)) {
 				free_one_page(zone, &folio->page, pfn,
-					      order, FPI_NONE);
+					      order, FPI_NONE, 0);
 				continue;
 			}
 
@@ -2715,7 +2778,7 @@ void free_unref_folios(struct folio_batch *folios)
 			if (unlikely(!pcp)) {
 				pcp_trylock_finish(UP_flags);
 				free_one_page(zone, &folio->page, pfn,
-					      order, FPI_NONE);
+					      order, FPI_NONE, 0);
 				continue;
 			}
 			locked_zone = zone;
@@ -2730,7 +2793,7 @@ void free_unref_folios(struct folio_batch *folios)
 
 		trace_mm_page_free_batched(&folio->page);
 		free_unref_page_commit(zone, pcp, &folio->page, migratetype,
-				order);
+				order, 0);
 	}
 
 	if (pcp) {
@@ -2781,6 +2844,11 @@ int __isolate_free_page(struct page *page, unsigned int order)
 			return 0;
 	}
 
+	/*
+	 * del_page_from_free_list() updates current's ugen. The user of
+	 * the isolated page should check_flush_task_ugen() before using
+	 * it.
+	 */
 	del_page_from_free_list(page, zone, order, mt);
 
 	/*
@@ -2822,7 +2890,7 @@ void __putback_isolated_page(struct page *page, unsigned int order, int mt)
 
 	/* Return isolated page to tail of freelist. */
 	__free_one_page(page, page_to_pfn(page), zone, order, mt,
-			FPI_SKIP_REPORT_NOTIFY | FPI_TO_TAIL);
+			FPI_SKIP_REPORT_NOTIFY | FPI_TO_TAIL, 0);
 }
 
 /*
@@ -2965,6 +3033,11 @@ struct page *__rmqueue_pcplist(struct zone *zone, unsigned int order,
 		}
 
 		page = list_first_entry(list, struct page, pcp_list);
+
+		/*
+		 * Pairs with check_flush_task_ugen() in prep_new_page().
+		 */
+		update_task_ugen(page_private(page));
 		list_del(&page->pcp_list);
 		pcp->count -= 1 << order;
 	} while (check_new_pages(page, order));
@@ -4791,11 +4864,11 @@ void __free_pages(struct page *page, unsigned int order)
 	struct alloc_tag *tag = pgalloc_tag_get(page);
 
 	if (put_page_testzero(page))
-		free_unref_page(page, order);
+		free_unref_page(page, order, 0);
 	else if (!head) {
 		pgalloc_tag_sub_pages(tag, (1 << order) - 1);
 		while (order-- > 0)
-			free_unref_page(page + (1 << order), order);
+			free_unref_page(page + (1 << order), order, 0);
 	}
 }
 EXPORT_SYMBOL(__free_pages);
@@ -4857,7 +4930,7 @@ void __page_frag_cache_drain(struct page *page, unsigned int count)
 	VM_BUG_ON_PAGE(page_ref_count(page) == 0, page);
 
 	if (page_ref_sub_and_test(page, count))
-		free_unref_page(page, compound_order(page));
+		free_unref_page(page, compound_order(page), 0);
 }
 EXPORT_SYMBOL(__page_frag_cache_drain);
 
@@ -4898,7 +4971,7 @@ void *__page_frag_alloc_align(struct page_frag_cache *nc,
 			goto refill;
 
 		if (unlikely(nc->pfmemalloc)) {
-			free_unref_page(page, compound_order(page));
+			free_unref_page(page, compound_order(page), 0);
 			goto refill;
 		}
 
@@ -4942,7 +5015,7 @@ void page_frag_free(void *addr)
 	struct page *page = virt_to_head_page(addr);
 
 	if (unlikely(put_page_testzero(page)))
-		free_unref_page(page, compound_order(page));
+		free_unref_page(page, compound_order(page), 0);
 }
 EXPORT_SYMBOL(page_frag_free);
 
@@ -6751,10 +6824,19 @@ void __offline_isolated_pages(unsigned long start_pfn, unsigned long end_pfn)
 		BUG_ON(!PageBuddy(page));
 		VM_WARN_ON(get_pageblock_migratetype(page) != MIGRATE_ISOLATE);
 		order = buddy_order(page);
+		/*
+		 * del_page_from_free_list() updates current's ugen that
+		 * pairs with check_flush_task_ugen() below in this function.
+		 */
 		del_page_from_free_list(page, zone, order, MIGRATE_ISOLATE);
 		pfn += (1 << order);
 	}
 	spin_unlock_irqrestore(&zone->lock, flags);
+
+	/*
+	 * Check and flush before using it.
+	 */
+	check_flush_task_ugen();
 }
 #endif
 
@@ -6830,6 +6912,11 @@ bool take_page_off_buddy(struct page *page)
 			int migratetype = get_pfnblock_migratetype(page_head,
 								   pfn_head);
 
+			/*
+			 * del_page_from_free_list() updates current's
+			 * ugen that pairs with check_flush_task_ugen() below
+			 * in this function.
+			 */
 			del_page_from_free_list(page_head, zone, page_order,
 						migratetype);
 			break_down_buddy_pages(zone, page_head, page, 0,
@@ -6842,6 +6929,11 @@ bool take_page_off_buddy(struct page *page)
 			break;
 	}
 	spin_unlock_irqrestore(&zone->lock, flags);
+
+	/*
+	 * Check and flush before using it.
+	 */
+	check_flush_task_ugen();
 	return ret;
 }
 
@@ -6860,7 +6952,7 @@ bool put_page_back_buddy(struct page *page)
 		int migratetype = get_pfnblock_migratetype(page, pfn);
 
 		ClearPageHWPoisonTakenOff(page);
-		__free_one_page(page, pfn, zone, 0, migratetype, FPI_NONE);
+		__free_one_page(page, pfn, zone, 0, migratetype, FPI_NONE, 0);
 		if (TestClearPageHWPoison(page)) {
 			ret = true;
 		}
diff --git a/mm/page_isolation.c b/mm/page_isolation.c
index 042937d5abe4..5823da60a621 100644
--- a/mm/page_isolation.c
+++ b/mm/page_isolation.c
@@ -260,6 +260,12 @@ static void unset_migratetype_isolate(struct page *page, int migratetype)
 	zone->nr_isolate_pageblock--;
 out:
 	spin_unlock_irqrestore(&zone->lock, flags);
+
+	/*
+	 * Check and flush for the pages that have been isolated.
+	 */
+	if (isolated_page)
+		check_flush_task_ugen();
 }
 
 static inline struct page *
diff --git a/mm/page_reporting.c b/mm/page_reporting.c
index e4c428e61d8c..4f94a3ea1b22 100644
--- a/mm/page_reporting.c
+++ b/mm/page_reporting.c
@@ -221,6 +221,11 @@ page_reporting_cycle(struct page_reporting_dev_info *prdev, struct zone *zone,
 		/* release lock before waiting on report processing */
 		spin_unlock_irq(&zone->lock);
 
+		/*
+		 * Check and flush before using the isolated pages.
+		 */
+		check_flush_task_ugen();
+
 		/* begin processing pages in local list */
 		err = prdev->report(prdev, sgl, PAGE_REPORTING_CAPACITY);
 
@@ -253,6 +258,11 @@ page_reporting_cycle(struct page_reporting_dev_info *prdev, struct zone *zone,
 
 	spin_unlock_irq(&zone->lock);
 
+	/*
+	 * Check and flush before using the isolated pages.
+	 */
+	check_flush_task_ugen();
+
 	return err;
 }
 
diff --git a/mm/swap.c b/mm/swap.c
index f0d478eee292..0fc5a5e8457f 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -126,10 +126,20 @@ void __folio_put(struct folio *folio)
 	if (folio_test_large(folio) && folio_test_large_rmappable(folio))
 		folio_undo_large_rmappable(folio);
 	mem_cgroup_uncharge(folio);
-	free_unref_page(&folio->page, folio_order(folio));
+	free_unref_page(&folio->page, folio_order(folio), 0);
 }
 EXPORT_SYMBOL(__folio_put);
 
+void __folio_put_ugen(struct folio *folio, unsigned short int ugen)
+{
+	if (WARN_ON(!can_luf_folio(folio)))
+		return;
+
+	page_cache_release(folio);
+	mem_cgroup_uncharge(folio);
+	free_unref_page(&folio->page, 0, ugen);
+}
+
 /**
  * put_pages_list() - release a list of pages
  * @pages: list of pages threaded on page->lru
-- 
2.17.1



^ permalink raw reply related	[flat|nested] 49+ messages in thread

* [PATCH v10 07/12] mm: add a parameter, unmap generation number, to free_unref_folios()
  2024-05-10  6:51 [PATCH v10 00/12] LUF(Lazy Unmap Flush) reducing tlb numbers over 90% Byungchul Park
                   ` (5 preceding siblings ...)
  2024-05-10  6:52 ` [PATCH v10 06/12] mm: add folio_put_ugen() to deliver unmap generation number to pcp or buddy Byungchul Park
@ 2024-05-10  6:52 ` Byungchul Park
  2024-05-10  6:52 ` [PATCH v10 08/12] mm/rmap: recognize read-only tlb entries during batched tlb flush Byungchul Park
                   ` (8 subsequent siblings)
  15 siblings, 0 replies; 49+ messages in thread
From: Byungchul Park @ 2024-05-10  6:52 UTC (permalink / raw)
  To: linux-kernel, linux-mm
  Cc: kernel_team, akpm, ying.huang, vernhao, mgorman, hughd, willy,
	david, peterz, luto, tglx, mingo, bp, dave.hansen, rjgolo

An unmap generation number is used by the luf mechanism to track the
need for a tlb flush for each page residing in pcp or buddy.

The number should be delivered to pcp or buddy via free_unref_folios(),
which releases folios that have been unmapped during reclaim in
shrink_folio_list().
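
Just for illustration, here is a tiny userspace sketch (not kernel
code; the toy_* names are made up) of the idea being plumbed here: the
free path records the unmap generation number on the freed object, with
0 meaning no deferred flush is pending, and the allocation path checks
that record before handing the object out again.

   #include <stdio.h>

   /* Toy stand-ins for pcp/buddy bookkeeping. */
   struct toy_page {
       unsigned short ugen;    /* unmap generation recorded while free */
       int in_free_pool;
   };

   static unsigned short flush_done;   /* last generation already flushed */

   /* Free path: remember which deferred flush the page depends on. */
   static void toy_free(struct toy_page *p, unsigned short ugen)
   {
       p->ugen = ugen;         /* 0 == no deferred flush pending */
       p->in_free_pool = 1;
   }

   /* Allocation path: make sure the recorded flush happened before reuse. */
   static void toy_alloc(struct toy_page *p)
   {
       if (p->ugen && (short)(flush_done - p->ugen) < 0) {
           printf("flush tlb up to ugen %u before reuse\n", p->ugen);
           flush_done = p->ugen;
       }
       p->in_free_pool = 0;
       p->ugen = 0;
   }

   int main(void)
   {
       struct toy_page page = { 0 };

       toy_free(&page, 0);     /* ordinary free: nothing to do at alloc */
       toy_alloc(&page);
       toy_free(&page, 7);     /* freed via a luf-deferred unmap */
       toy_alloc(&page);       /* prints the flush message */
       return 0;
   }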

Signed-off-by: Byungchul Park <byungchul@sk.com>
---
 mm/internal.h   |  2 +-
 mm/page_alloc.c | 10 +++++-----
 mm/swap.c       |  6 +++---
 mm/vmscan.c     |  8 ++++----
 4 files changed, 13 insertions(+), 13 deletions(-)

diff --git a/mm/internal.h b/mm/internal.h
index 332662047c17..0d4c74e76de6 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -639,7 +639,7 @@ extern bool free_pages_prepare(struct page *page, unsigned int order);
 extern int user_min_free_kbytes;
 
 void free_unref_page(struct page *page, unsigned int order, unsigned short int ugen);
-void free_unref_folios(struct folio_batch *fbatch);
+void free_unref_folios(struct folio_batch *fbatch, unsigned short int ugen);
 
 extern void zone_pcp_reset(struct zone *zone);
 extern void zone_pcp_disable(struct zone *zone);
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 2cd278c207d1..63f14305f4de 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2706,7 +2706,7 @@ void free_unref_page(struct page *page, unsigned int order,
 /*
  * Free a batch of folios
  */
-void free_unref_folios(struct folio_batch *folios)
+void free_unref_folios(struct folio_batch *folios, unsigned short int ugen)
 {
 	unsigned long __maybe_unused UP_flags;
 	struct per_cpu_pages *pcp = NULL;
@@ -2729,7 +2729,7 @@ void free_unref_folios(struct folio_batch *folios)
 		 */
 		if (!pcp_allowed_order(order)) {
 			free_one_page(folio_zone(folio), &folio->page,
-				      pfn, order, FPI_NONE, 0);
+				      pfn, order, FPI_NONE, ugen);
 			continue;
 		}
 		folio->private = (void *)(unsigned long)order;
@@ -2765,7 +2765,7 @@ void free_unref_folios(struct folio_batch *folios)
 			 */
 			if (is_migrate_isolate(migratetype)) {
 				free_one_page(zone, &folio->page, pfn,
-					      order, FPI_NONE, 0);
+					      order, FPI_NONE, ugen);
 				continue;
 			}
 
@@ -2778,7 +2778,7 @@ void free_unref_folios(struct folio_batch *folios)
 			if (unlikely(!pcp)) {
 				pcp_trylock_finish(UP_flags);
 				free_one_page(zone, &folio->page, pfn,
-					      order, FPI_NONE, 0);
+					      order, FPI_NONE, ugen);
 				continue;
 			}
 			locked_zone = zone;
@@ -2793,7 +2793,7 @@ void free_unref_folios(struct folio_batch *folios)
 
 		trace_mm_page_free_batched(&folio->page);
 		free_unref_page_commit(zone, pcp, &folio->page, migratetype,
-				order, 0);
+				order, ugen);
 	}
 
 	if (pcp) {
diff --git a/mm/swap.c b/mm/swap.c
index 0fc5a5e8457f..1937ac937b8f 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -163,11 +163,11 @@ void put_pages_list(struct list_head *pages)
 		/* LRU flag must be clear because it's passed using the lru */
 		if (folio_batch_add(&fbatch, folio) > 0)
 			continue;
-		free_unref_folios(&fbatch);
+		free_unref_folios(&fbatch, 0);
 	}
 
 	if (fbatch.nr)
-		free_unref_folios(&fbatch);
+		free_unref_folios(&fbatch, 0);
 	INIT_LIST_HEAD(pages);
 }
 EXPORT_SYMBOL(put_pages_list);
@@ -1029,7 +1029,7 @@ void folios_put_refs(struct folio_batch *folios, unsigned int *refs)
 
 	folios->nr = j;
 	mem_cgroup_uncharge_folios(folios);
-	free_unref_folios(folios);
+	free_unref_folios(folios, 0);
 }
 EXPORT_SYMBOL(folios_put_refs);
 
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 49bd94423961..bb0ff11f9ec9 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1460,7 +1460,7 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
 		if (folio_batch_add(&free_folios, folio) == 0) {
 			mem_cgroup_uncharge_folios(&free_folios);
 			try_to_unmap_flush();
-			free_unref_folios(&free_folios);
+			free_unref_folios(&free_folios, 0);
 		}
 		continue;
 
@@ -1527,7 +1527,7 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
 
 	mem_cgroup_uncharge_folios(&free_folios);
 	try_to_unmap_flush();
-	free_unref_folios(&free_folios);
+	free_unref_folios(&free_folios, 0);
 
 	list_splice(&ret_folios, folio_list);
 	count_vm_events(PGACTIVATE, pgactivate);
@@ -1869,7 +1869,7 @@ static unsigned int move_folios_to_lru(struct lruvec *lruvec,
 			if (folio_batch_add(&free_folios, folio) == 0) {
 				spin_unlock_irq(&lruvec->lru_lock);
 				mem_cgroup_uncharge_folios(&free_folios);
-				free_unref_folios(&free_folios);
+				free_unref_folios(&free_folios, 0);
 				spin_lock_irq(&lruvec->lru_lock);
 			}
 
@@ -1891,7 +1891,7 @@ static unsigned int move_folios_to_lru(struct lruvec *lruvec,
 	if (free_folios.nr) {
 		spin_unlock_irq(&lruvec->lru_lock);
 		mem_cgroup_uncharge_folios(&free_folios);
-		free_unref_folios(&free_folios);
+		free_unref_folios(&free_folios, 0);
 		spin_lock_irq(&lruvec->lru_lock);
 	}
 
-- 
2.17.1



^ permalink raw reply related	[flat|nested] 49+ messages in thread

* [PATCH v10 08/12] mm/rmap: recognize read-only tlb entries during batched tlb flush
  2024-05-10  6:51 [PATCH v10 00/12] LUF(Lazy Unmap Flush) reducing tlb numbers over 90% Byungchul Park
                   ` (6 preceding siblings ...)
  2024-05-10  6:52 ` [PATCH v10 07/12] mm: add a parameter, unmap generation number, to free_unref_folios() Byungchul Park
@ 2024-05-10  6:52 ` Byungchul Park
  2024-05-10  6:52 ` [PATCH v10 09/12] mm: implement LUF(Lazy Unmap Flush) deferring tlb flush when folios get unmapped Byungchul Park
                   ` (7 subsequent siblings)
  15 siblings, 0 replies; 49+ messages in thread
From: Byungchul Park @ 2024-05-10  6:52 UTC (permalink / raw)
  To: linux-kernel, linux-mm
  Cc: kernel_team, akpm, ying.huang, vernhao, mgorman, hughd, willy,
	david, peterz, luto, tglx, mingo, bp, dave.hansen, rjgolo

Functionally, no change.  This is a preparation for the luf mechanism,
which requires recognizing read-only tlb entries and handling them in a
different way.  The newly introduced API in this patch, fold_ubc(), will
be used by the luf mechanism.
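
As a rough, userspace-only sketch of what fold_ubc() is meant to do
(the real struct tlbflush_unmap_batch and the arch helpers are not
reproduced here; the fields below are simplified stand-ins): fold the
source batch into the destination, then reset the source so it can be
reused for the next round of batching.

   #include <stdbool.h>
   #include <stdint.h>
   #include <stdio.h>

   /* Simplified stand-in for struct tlbflush_unmap_batch. */
   struct toy_ubc {
       uint64_t cpumask;       /* stands in for the arch batch state */
       bool flush_required;
       bool writable;
   };

   /* Same shape as fold_ubc() in the diff: fold src into dst, reset src. */
   static void toy_fold_ubc(struct toy_ubc *dst, struct toy_ubc *src)
   {
       if (!src->flush_required)
           return;

       dst->cpumask |= src->cpumask;
       dst->writable = dst->writable || src->writable;
       dst->flush_required = true;

       src->cpumask = 0;
       src->flush_required = false;
       src->writable = false;
   }

   int main(void)
   {
       struct toy_ubc ubc = { 0 }, ubc_ro = { 0 };

       /* A read-only pte was unmapped: batched separately, like tlb_ubc_ro. */
       ubc_ro.cpumask |= 1ULL << 3;
       ubc_ro.flush_required = true;

       /* Flush time: fold the read-only batch back in before flushing. */
       toy_fold_ubc(&ubc, &ubc_ro);
       printf("flush_required=%d cpumask=%#llx\n",
              ubc.flush_required, (unsigned long long)ubc.cpumask);
       return 0;
   }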

Signed-off-by: Byungchul Park <byungchul@sk.com>
---
 include/linux/sched.h |  1 +
 mm/internal.h         |  4 ++++
 mm/rmap.c             | 34 ++++++++++++++++++++++++++++++++--
 3 files changed, 37 insertions(+), 2 deletions(-)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index 2aa48adad226..0915390b1b5e 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1339,6 +1339,7 @@ struct task_struct {
 #endif
 
 	struct tlbflush_unmap_batch	tlb_ubc;
+	struct tlbflush_unmap_batch	tlb_ubc_ro;
 	unsigned short int		ugen;
 
 	/* Cache last used pipe for splice(): */
diff --git a/mm/internal.h b/mm/internal.h
index 0d4c74e76de6..805f0e6ecab4 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -1100,6 +1100,7 @@ extern struct workqueue_struct *mm_percpu_wq;
 void try_to_unmap_flush(void);
 void try_to_unmap_flush_dirty(void);
 void flush_tlb_batched_pending(struct mm_struct *mm);
+void fold_ubc(struct tlbflush_unmap_batch *dst, struct tlbflush_unmap_batch *src);
 #else
 static inline void try_to_unmap_flush(void)
 {
@@ -1110,6 +1111,9 @@ static inline void try_to_unmap_flush_dirty(void)
 static inline void flush_tlb_batched_pending(struct mm_struct *mm)
 {
 }
+static inline void fold_ubc(struct tlbflush_unmap_batch *dst, struct tlbflush_unmap_batch *src)
+{
+}
 #endif /* CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH */
 
 extern const struct trace_print_flags pageflag_names[];
diff --git a/mm/rmap.c b/mm/rmap.c
index cf8a99a49aef..328b5e2217e6 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -635,6 +635,28 @@ struct anon_vma *folio_lock_anon_vma_read(struct folio *folio,
 }
 
 #ifdef CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH
+
+void fold_ubc(struct tlbflush_unmap_batch *dst,
+	      struct tlbflush_unmap_batch *src)
+{
+	if (!src->flush_required)
+		return;
+
+	/*
+	 * Fold src to dst.
+	 */
+	arch_tlbbatch_fold(&dst->arch, &src->arch);
+	dst->writable = dst->writable || src->writable;
+	dst->flush_required = true;
+
+	/*
+	 * Reset src.
+	 */
+	arch_tlbbatch_clear(&src->arch);
+	src->flush_required = false;
+	src->writable = false;
+}
+
 /*
  * Flush TLB entries for recently unmapped pages from remote CPUs. It is
  * important if a PTE was dirty when it was unmapped that it's flushed
@@ -644,7 +666,9 @@ struct anon_vma *folio_lock_anon_vma_read(struct folio *folio,
 void try_to_unmap_flush(void)
 {
 	struct tlbflush_unmap_batch *tlb_ubc = &current->tlb_ubc;
+	struct tlbflush_unmap_batch *tlb_ubc_ro = &current->tlb_ubc_ro;
 
+	fold_ubc(tlb_ubc, tlb_ubc_ro);
 	if (!tlb_ubc->flush_required)
 		return;
 
@@ -658,8 +682,9 @@ void try_to_unmap_flush(void)
 void try_to_unmap_flush_dirty(void)
 {
 	struct tlbflush_unmap_batch *tlb_ubc = &current->tlb_ubc;
+	struct tlbflush_unmap_batch *tlb_ubc_ro = &current->tlb_ubc_ro;
 
-	if (tlb_ubc->writable)
+	if (tlb_ubc->writable || tlb_ubc_ro->writable)
 		try_to_unmap_flush();
 }
 
@@ -676,13 +701,18 @@ void try_to_unmap_flush_dirty(void)
 static void set_tlb_ubc_flush_pending(struct mm_struct *mm, pte_t pteval,
 				      unsigned long uaddr)
 {
-	struct tlbflush_unmap_batch *tlb_ubc = &current->tlb_ubc;
+	struct tlbflush_unmap_batch *tlb_ubc;
 	int batch;
 	bool writable = pte_dirty(pteval);
 
 	if (!pte_accessible(mm, pteval))
 		return;
 
+	if (pte_write(pteval))
+		tlb_ubc = &current->tlb_ubc;
+	else
+		tlb_ubc = &current->tlb_ubc_ro;
+
 	arch_tlbbatch_add_pending(&tlb_ubc->arch, mm, uaddr);
 	tlb_ubc->flush_required = true;
 
-- 
2.17.1



^ permalink raw reply related	[flat|nested] 49+ messages in thread

* [PATCH v10 09/12] mm: implement LUF(Lazy Unmap Flush) deferring tlb flush when folios get unmapped
  2024-05-10  6:51 [PATCH v10 00/12] LUF(Lazy Unmap Flush) reducing tlb numbers over 90% Byungchul Park
                   ` (7 preceding siblings ...)
  2024-05-10  6:52 ` [PATCH v10 08/12] mm/rmap: recognize read-only tlb entries during batched tlb flush Byungchul Park
@ 2024-05-10  6:52 ` Byungchul Park
  2024-05-10  6:52 ` [PATCH v10 10/12] mm: separate move/undo parts from migrate_pages_batch() Byungchul Park
                   ` (6 subsequent siblings)
  15 siblings, 0 replies; 49+ messages in thread
From: Byungchul Park @ 2024-05-10  6:52 UTC (permalink / raw)
  To: linux-kernel, linux-mm
  Cc: kernel_team, akpm, ying.huang, vernhao, mgorman, hughd, willy,
	david, peterz, luto, tglx, mingo, bp, dave.hansen, rjgolo

A new mechanism, LUF(Lazy Unmap Flush), defers tlb flush until folios
that have been unmapped and freed, eventually get allocated again.  It's
safe for folios that had been mapped read-only and were unmapped, since
the contents of the folios don't change while staying in pcp or buddy
so we can still read the data through the stale tlb entries.

tlb flush can be defered when folios get unmapped as long as it
guarantees to perform tlb flush needed, before the folios actually
become used, of course, only if all the corresponding ptes don't have
write permission.  Otherwise, the system will get messed up.

To achieve that:

   1. For the folios that map only to non-writable tlb entries, prevent
      tlb flush during unmapping but perform it just before the folios
      actually become used, out of buddy or pcp.

   2. When any non-writable ptes change to writable e.g. through fault
      handler, give up luf mechanism and perform tlb flush required
      right away.

   3. When a writable mapping is created e.g. through mmap(), give up
      luf mechanism and perform tlb flush required right away.

No matter what type of workload is used for performance evaluation, the
result would be positive thanks to the unconditional reduction of tlb
flushes, tlb misses and interrupts.  For the test, I picked up one of
the most popular and heavy workload, llama.cpp that is a
LLM(Large Language Model) inference engine.

The result would depend on memory latency and how often reclaim runs,
which implies tlb miss overhead and how many times unmapping happens.
In my system, the result shows:

   1. tlb flushes are reduced about 95%.
   2. tlb misses(itlb) are reduced about 80%.
   3. tlb misses(dtlb store) are reduced about 57%.
   4. tlb misses(dtlb load) are reduced about 24%.
   5. tlb shootdown interrupts are reduced about 95%.
   6. The test program runtime is reduced about 5%.

The test environment and the result is like:

   Machine: bare metal, x86_64, Intel(R) Xeon(R) Gold 6430
   CPU: 1 socket 64 core with hyper thread on
   Numa: 2 nodes (64 CPUs DRAM 42GB, no CPUs CXL expander 98GB)
   Config: swap off, numa balancing tiering on, demotion enabled

   The test set:

      llama.cpp/main -m $(70G_model1) -p "who are you?" -s 1 -t 15 -n 20 &
      llama.cpp/main -m $(70G_model2) -p "who are you?" -s 1 -t 15 -n 20 &
      llama.cpp/main -m $(70G_model3) -p "who are you?" -s 1 -t 15 -n 20 &
      wait

      where -t: nr of threads, -s: seed used to make the runtime stable,
      -n: nr of tokens that determines the runtime, -p: prompt to ask,
      -m: LLM model to use.

   Run the test set 10 times successively with caches dropped every run
   via 'echo 3 > /proc/sys/vm/drop_caches'.  Each inference prints its
   runtime at the end of each.

   1. Runtime from the output of llama.cpp:

   BEFORE
   ------
   llama_print_timings:       total time = 1002461.95 ms /    24 tokens
   llama_print_timings:       total time = 1044978.38 ms /    24 tokens
   llama_print_timings:       total time = 1000653.09 ms /    24 tokens
   llama_print_timings:       total time = 1047104.80 ms /    24 tokens
   llama_print_timings:       total time = 1069430.36 ms /    24 tokens
   llama_print_timings:       total time = 1068201.16 ms /    24 tokens
   llama_print_timings:       total time = 1078092.59 ms /    24 tokens
   llama_print_timings:       total time = 1073200.45 ms /    24 tokens
   llama_print_timings:       total time = 1067136.00 ms /    24 tokens
   llama_print_timings:       total time = 1076442.56 ms /    24 tokens
   llama_print_timings:       total time = 1004142.64 ms /    24 tokens
   llama_print_timings:       total time = 1042942.65 ms /    24 tokens
   llama_print_timings:       total time =  999933.76 ms /    24 tokens
   llama_print_timings:       total time = 1046548.83 ms /    24 tokens
   llama_print_timings:       total time = 1068671.48 ms /    24 tokens
   llama_print_timings:       total time = 1068285.76 ms /    24 tokens
   llama_print_timings:       total time = 1077789.63 ms /    24 tokens
   llama_print_timings:       total time = 1071558.93 ms /    24 tokens
   llama_print_timings:       total time = 1066181.55 ms /    24 tokens
   llama_print_timings:       total time = 1076767.53 ms /    24 tokens
   llama_print_timings:       total time = 1004065.63 ms /    24 tokens
   llama_print_timings:       total time = 1044522.13 ms /    24 tokens
   llama_print_timings:       total time =  999725.33 ms /    24 tokens
   llama_print_timings:       total time = 1047510.77 ms /    24 tokens
   llama_print_timings:       total time = 1068010.27 ms /    24 tokens
   llama_print_timings:       total time = 1068999.31 ms /    24 tokens
   llama_print_timings:       total time = 1077648.05 ms /    24 tokens
   llama_print_timings:       total time = 1071378.96 ms /    24 tokens
   llama_print_timings:       total time = 1066326.32 ms /    24 tokens
   llama_print_timings:       total time = 1077088.92 ms /    24 tokens

   AFTER
   -----
   llama_print_timings:       total time =  988522.03 ms /    24 tokens
   llama_print_timings:       total time =  997204.52 ms /    24 tokens
   llama_print_timings:       total time =  996605.86 ms /    24 tokens
   llama_print_timings:       total time =  991985.50 ms /    24 tokens
   llama_print_timings:       total time = 1035143.31 ms /    24 tokens
   llama_print_timings:       total time =  993660.18 ms /    24 tokens
   llama_print_timings:       total time =  983082.14 ms /    24 tokens
   llama_print_timings:       total time =  990431.36 ms /    24 tokens
   llama_print_timings:       total time =  992707.09 ms /    24 tokens
   llama_print_timings:       total time =  992673.27 ms /    24 tokens
   llama_print_timings:       total time =  989285.43 ms /    24 tokens
   llama_print_timings:       total time =  996710.06 ms /    24 tokens
   llama_print_timings:       total time =  996534.64 ms /    24 tokens
   llama_print_timings:       total time =  991344.17 ms /    24 tokens
   llama_print_timings:       total time = 1035210.84 ms /    24 tokens
   llama_print_timings:       total time =  994714.13 ms /    24 tokens
   llama_print_timings:       total time =  984184.15 ms /    24 tokens
   llama_print_timings:       total time =  990909.45 ms /    24 tokens
   llama_print_timings:       total time =  991881.48 ms /    24 tokens
   llama_print_timings:       total time =  993918.03 ms /    24 tokens
   llama_print_timings:       total time =  990061.34 ms /    24 tokens
   llama_print_timings:       total time =  998076.69 ms /    24 tokens
   llama_print_timings:       total time =  997082.59 ms /    24 tokens
   llama_print_timings:       total time =  990677.58 ms /    24 tokens
   llama_print_timings:       total time = 1036054.94 ms /    24 tokens
   llama_print_timings:       total time =  994125.93 ms /    24 tokens
   llama_print_timings:       total time =  982467.01 ms /    24 tokens
   llama_print_timings:       total time =  990191.60 ms /    24 tokens
   llama_print_timings:       total time =  993319.24 ms /    24 tokens
   llama_print_timings:       total time =  992540.57 ms /    24 tokens

   2. tlb shootdowns from 'cat /proc/interrupts':

   BEFORE
   ------
   TLB:
   125553646  141418810  161932620  176853972  186655697  190399283
   192143823  196414038  192872439  193313658  193395617  192521416
   190788161  195067598  198016061  193607347  194293972  190786732
   191545637  194856822  191801931  189634535  190399803  196365922
   195268398  190115840  188050050  193194908  195317617  190820190
   190164820  185556071  226797214  229592631  216112464  209909495
   205575979  205950252  204948111  197999795  198892232  205287952
   199344631  195015158  195869844  198858745  195692876  200961904
   203463252  205921722  199850838  206145986  199613202  199961345
   200129577  203020521  207873649  203697671  197093386  204243803
   205993323  200934664  204193128  194435376  TLB shootdowns

   AFTER
   -----
   TLB:
     5648092    6610142    7032849    7882308    8088518    8352310
     8656536    8705136    8647426    8905583    8985408    8704522
     8884344    9026261    8929974    8869066    8877575    8810096
     8770984    8754503    8801694    8865925    8787524    8656432
     8755912    8682034    8773935    8832925    8797997    8515777
     8481240    8891258   10595243   10285973    9756935    9573681
     9398968    9069244    9242984    8899009    9310690    9029095
     9069758    9105825    9092703    9270202    9460287    9258546
     9180415    9232723    9270611    9175020    9490420    9360316
     9420818    9057663    9525631    9310152    9152242    8654483
     9181804    9050847    8919916    8883856  TLB shootdowns

   3. tlb numbers from 'perf stat' per test set:

   BEFORE
   ------
   3163679332	dTLB-load-misses
   2017751856	dTLB-store-misses
   327092903	iTLB-load-misses
   1357543886	tlb:tlb_flush

   AFTER
   -----
   2394694609	dTLB-load-misses
   861144167	dTLB-store-misses
   64055579	iTLB-load-misses
   69175002	tlb:tlb_flush
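
For reference, here is a small standalone sketch of the generation
number arithmetic this patch relies on (a userspace re-statement of the
intended semantics of ugen_next()/ugen_before() in the diff below, not
a copy of the kernel code): 0 is reserved as "invalid", and comparisons
use signed wrap-around so the 16-bit counter can safely roll over.

   #include <stdio.h>

   /* Never return 0, which is reserved to mean "no/invalid generation". */
   static unsigned short ugen_next(unsigned short a)
   {
       unsigned short n = (unsigned short)(a + 1);

       return n ? n : (unsigned short)(a + 2);
   }

   /* True if a is behind b, evaluated with wrap-around in mind. */
   static int ugen_before(unsigned short a, unsigned short b)
   {
       return (short)(a - b) < 0;
   }

   int main(void)
   {
       unsigned short a = 0xfffe;
       unsigned short b = ugen_next(a);    /* 0xffff */
       unsigned short c = ugen_next(b);    /* skips 0, gives 1 */

       printf("next(%#x)=%#x, next(%#x)=%#x\n", a, b, b, c);
       printf("before(%#x, %#x)=%d\n", b, c, ugen_before(b, c)); /* 1 */
       printf("before(%#x, %#x)=%d\n", c, b, ugen_before(c, b)); /* 0 */
       return 0;
   }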

Signed-off-by: Byungchul Park <byungchul@sk.com>
---
 include/linux/sched.h |   9 ++
 mm/internal.h         |  43 +++++-
 mm/memory.c           |   8 ++
 mm/mmap.c             |   8 ++
 mm/rmap.c             | 308 +++++++++++++++++++++++++++++++++++++++++-
 5 files changed, 366 insertions(+), 10 deletions(-)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index 0915390b1b5e..6f83703ec284 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1340,8 +1340,17 @@ struct task_struct {
 
 	struct tlbflush_unmap_batch	tlb_ubc;
 	struct tlbflush_unmap_batch	tlb_ubc_ro;
+	struct tlbflush_unmap_batch	tlb_ubc_luf;
 	unsigned short int		ugen;
 
+#if defined(CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH)
+	/*
+	 * whether all the mappings of a folio during unmap are read-only
+	 * so that luf can work on the folio
+	 */
+	bool				can_luf;
+#endif
+
 	/* Cache last used pipe for splice(): */
 	struct pipe_inode_info		*splice_pipe;
 
diff --git a/mm/internal.h b/mm/internal.h
index 805f0e6ecab4..2a44194f5d39 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -1517,6 +1517,38 @@ void workingset_update_node(struct xa_node *node);
 extern struct list_lru shadow_nodes;
 
 #if defined(CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH)
+unsigned short int try_to_unmap_luf(void);
+void check_luf_flush(unsigned short int ugen);
+void luf_flush(void);
+
+/*
+ * Reset, at the beginning of every rmap traverse for unmap, the
+ * indicator that tracks whether a writable mapping has been found.
+ * luf can work only when all the mappings are read-only.
+ */
+static inline void can_luf_init(void)
+{
+	current->can_luf = true;
+}
+
+/*
+ * Mark that the folio is not applicable to luf once a writable or
+ * dirty pte is found during the rmap traverse for unmap.
+ */
+static inline void can_luf_fail(void)
+{
+	current->can_luf = false;
+}
+
+/*
+ * Check if all the mappings are read-only and at least one read-only
+ * mapping exists.
+ */
+static inline bool can_luf_test(void)
+{
+	return current->can_luf && current->tlb_ubc_ro.flush_required;
+}
+
 static inline unsigned short int ugen_latest(unsigned short int a, unsigned short int b)
 {
 	if (!a || !b)
@@ -1546,10 +1578,7 @@ static inline unsigned short int hand_over_task_ugen(void)
 
 static inline void check_flush_task_ugen(void)
 {
-	/*
-	 * XXX: luf mechanism will handle this. For now, do nothing but
-	 * reset current's ugen to finalize this turn.
-	 */
+	check_luf_flush(current->ugen);
 	current->ugen = 0;
 }
 
@@ -1578,6 +1607,12 @@ static inline bool can_luf_folio(struct folio *f)
 	return can_luf;
 }
 #else /* CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH */
+static inline unsigned short int try_to_unmap_luf(void) { return 0; }
+static inline void check_luf_flush(unsigned short int ugen) {}
+static inline void luf_flush(void) {}
+static inline void can_luf_init(void) {}
+static inline void can_luf_fail(void) {}
+static inline bool can_luf_test(void) { return false; }
 static inline unsigned short int ugen_latest(unsigned short int a, unsigned short int b) { return 0; }
 static inline void update_task_ugen(unsigned short int ugen) {}
 static inline unsigned short int hand_over_task_ugen(void) { return 0; }
diff --git a/mm/memory.c b/mm/memory.c
index 33d87b64d15d..f218c275d307 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3617,6 +3617,14 @@ static vm_fault_t do_wp_page(struct vm_fault *vmf)
 	if (vmf->page)
 		folio = page_folio(vmf->page);
 
+	/*
+	 * The folio may or may not be one that is under luf's control
+	 * and might be about to change its permission to writable.
+	 * Conservatively give up deferring tlb flush just in case.
+	 */
+	if (folio)
+		luf_flush();
+
 	/*
 	 * Shared mapping: we are guaranteed to have VM_WRITE and
 	 * FAULT_FLAG_WRITE set at this point.
diff --git a/mm/mmap.c b/mm/mmap.c
index 47363e7f7ea2..3b3bece4b079 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -1271,6 +1271,14 @@ unsigned long do_mmap(struct file *file, unsigned long addr,
 			pkey = 0;
 	}
 
+	/*
+	 * This mmap may or may not map pages that are under luf's
+	 * control.  However, conservatively give up deferring tlb
+	 * flush just in case.
+	 */
+	if (prot & PROT_WRITE)
+		luf_flush();
+
 	/* Do simple checking here so the lower-level routines won't have
 	 * to. we assume access permissions have been handled by the open
 	 * of the memory object, so we don't do any here.
diff --git a/mm/rmap.c b/mm/rmap.c
index 328b5e2217e6..e42783c02114 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -635,6 +635,270 @@ struct anon_vma *folio_lock_anon_vma_read(struct folio *folio,
 }
 
 #ifdef CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH
+static struct tlbflush_unmap_batch luf_ubc;
+static DEFINE_SPINLOCK(luf_lock);
+
+/*
+ * Don't be zero to distinguish from invalid ugen, 0.
+ */
+static unsigned short int ugen_next(unsigned short int a)
+{
+	return a + 1 ?: a + 2;
+}
+
+static bool ugen_before(unsigned short int a, unsigned short int b)
+{
+	return (short int)(a - b) < 0;
+}
+
+/*
+ * Need to synchronize between tlb flush and managing pending CPUs in
+ * luf_ubc.  Take a look at the following scenario, where CPU0 is in
+ * try_to_unmap_flush() and CPU1 is in migrate_pages_batch():
+ *
+ *	CPU0			CPU1
+ *	----			----
+ *	tlb flush
+ *				unmap folios (needing tlb flush)
+ *				add pending CPUs to luf_ubc
+ *				<-- not performed tlb flush needed by
+ *				    the unmap above yet but the request
+ *				    will be cleared by CPU0 shortly. bug!
+ *	clear the CPUs from luf_ubc
+ *
+ * The pending CPUs added in CPU1 should not be cleared from luf_ubc
+ * in CPU0 because the tlb flush for luf_ubc added in CPU1 has not
+ * been performed this turn.  To avoid this, using 'on_flushing'
+ * variable, prevent adding pending CPUs to luf_ubc and give up luf
+ * mechanism if someone is in the middle of tlb flush, like:
+ *
+ *	CPU0			CPU1
+ *	----			----
+ *	on_flushing++
+ *	tlb flush
+ *				unmap folios (needing tlb flush)
+ *				if on_flushing == 0:
+ *				   add pending CPUs to luf_ubc
+ *				else: <-- hit
+ *				   give up luf mechanism
+ *	clear the CPUs from luf_ubc
+ *	on_flushing--
+ *
+ * Only the following case would be allowed for luf mechanism to work:
+ *
+ *	CPU0			CPU1
+ *	----			----
+ *				unmap folios (needing tlb flush)
+ *				if on_flushing == 0: <-- hit
+ *				   add pending CPUs to luf_ubc
+ *				else:
+ *				   give up luf mechanism
+ *	on_flushing++
+ *	tlb flush
+ *	clear the CPUs from luf_ubc
+ *	on_flushing--
+ */
+static int on_flushing;
+
+/*
+ * When more than one thread enters check_luf_flush() at the same
+ * time, each should wait for the request in progress to be done to
+ * avoid the following scenario, where both CPUs are in
+ * check_luf_flush():
+ *
+ *	CPU0			CPU1
+ *	----			----
+ *	if !luf_ubc.flush_required:
+ *	   return
+ *	luf_ubc.flush_required = false
+ *				if !luf_ubc.flush_required: <-- hit
+ *				   return <-- not performed tlb flush
+ *				              needed yet but return. bug!
+ *				luf_ubc.flush_required = false
+ *				try_to_unmap_flush()
+ *				finalize
+ *	try_to_unmap_flush() <-- performs tlb flush needed
+ *	finalize
+ *
+ * So it should be handled:
+ *
+ *	CPU0			CPU1
+ *	----			----
+ *	atomically execute {
+ *	   if luf_on_flushing:
+ *	      wait for the completion
+ *	      return
+ *	   if !luf_ubc.flush_required:
+ *	      return
+ *	   luf_ubc.flush_required = false
+ *	   luf_on_flushing = true
+ *	}
+ *				atomically execute {
+ *				   if luf_on_flushing: <-- hit
+ *				      wait for the completion
+ *				      return <-- tlb flush needed is done
+ *				   if !luf_ubc.flush_required:
+ *				      return
+ *				   luf_ubc.flush_required = false
+ *				   luf_on_flushing = true
+ *				}
+ *
+ *				try_to_unmap_flush()
+ *				luf_on_flushing = false
+ *				finalize
+ *	try_to_unmap_flush() <-- performs tlb flush needed
+ *	luf_on_flushing = false
+ *	finalize
+ */
+static bool luf_on_flushing;
+
+/*
+ * Generation number for the current request of deferred tlb flush.
+ */
+static unsigned short int luf_gen;
+
+/*
+ * Generation number for the next request.
+ */
+static unsigned short int luf_gen_next = 1;
+
+/*
+ * Generation number for the latest request handled.
+ */
+static unsigned short int luf_gen_done;
+
+unsigned short int try_to_unmap_luf(void)
+{
+	struct tlbflush_unmap_batch *tlb_ubc = &current->tlb_ubc;
+	struct tlbflush_unmap_batch *tlb_ubc_luf = &current->tlb_ubc_luf;
+	unsigned long flags;
+	unsigned short int ugen;
+
+	if (!spin_trylock_irqsave(&luf_lock, flags)) {
+		/*
+		 * Give up the luf mechanism and let the tlb flush needed
+		 * be handled by try_to_unmap_flush() at the caller side.
+		 */
+		fold_ubc(tlb_ubc, tlb_ubc_luf);
+		return 0;
+	}
+
+	if (on_flushing || luf_on_flushing) {
+		spin_unlock_irqrestore(&luf_lock, flags);
+
+		/*
+		 * Give up the luf mechanism and let the tlb flush needed
+		 * be handled by try_to_unmap_flush() at the caller side.
+		 */
+		fold_ubc(tlb_ubc, tlb_ubc_luf);
+		return 0;
+	}
+
+	fold_ubc(&luf_ubc, tlb_ubc_luf);
+	ugen = luf_gen = luf_gen_next;
+	spin_unlock_irqrestore(&luf_lock, flags);
+
+	return ugen;
+}
+
+static void rmap_flush_start(void)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&luf_lock, flags);
+	on_flushing++;
+	spin_unlock_irqrestore(&luf_lock, flags);
+}
+
+static void rmap_flush_end(struct tlbflush_unmap_batch *batch)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&luf_lock, flags);
+	if (arch_tlbbatch_done(&luf_ubc.arch, &batch->arch)) {
+		luf_ubc.flush_required = false;
+		luf_ubc.writable = false;
+	}
+	on_flushing--;
+	spin_unlock_irqrestore(&luf_lock, flags);
+}
+
+/*
+ * On return, the tlb flush requested must have been completed.
+ */
+void check_luf_flush(unsigned short int ugen)
+{
+	struct tlbflush_unmap_batch *tlb_ubc = &current->tlb_ubc;
+	unsigned long flags;
+
+	/*
+	 * Nothing has been requested.  We are done.
+	 */
+	if (!ugen)
+		return;
+retry:
+	/*
+	 * If luf_gen_done has already caught up with ugen, the tlb
+	 * flush we need has been done.
+	 */
+	if (!ugen_before(READ_ONCE(luf_gen_done), ugen))
+		return;
+
+	spin_lock_irqsave(&luf_lock, flags);
+
+	/*
+	 * With luf_lock held, we might observe an updated luf_gen_done.
+	 */
+	if (ugen_next(luf_gen_done) != ugen) {
+		spin_unlock_irqrestore(&luf_lock, flags);
+		return;
+	}
+
+	/*
+	 * Others are already working for us.
+	 */
+	if (luf_on_flushing) {
+		spin_unlock_irqrestore(&luf_lock, flags);
+		goto retry;
+	}
+
+	if (!luf_ubc.flush_required) {
+		spin_unlock_irqrestore(&luf_lock, flags);
+		return;
+	}
+
+	fold_ubc(tlb_ubc, &luf_ubc);
+	luf_gen_next = ugen_next(luf_gen);
+	luf_on_flushing = true;
+	spin_unlock_irqrestore(&luf_lock, flags);
+
+	try_to_unmap_flush();
+
+	spin_lock_irqsave(&luf_lock, flags);
+	luf_on_flushing = false;
+
+	/*
+	 * luf_gen_done can be read by others without holding luf_lock,
+	 * so use WRITE_ONCE() to prevent tearing.
+	 */
+	WRITE_ONCE(luf_gen_done, ugen);
+	spin_unlock_irqrestore(&luf_lock, flags);
+}
+
+void luf_flush(void)
+{
+	unsigned long flags;
+	unsigned short int ugen;
+
+	/*
+	 * Obtain the latest ugen number.
+	 */
+	spin_lock_irqsave(&luf_lock, flags);
+	ugen = luf_gen;
+	spin_unlock_irqrestore(&luf_lock, flags);
+
+	check_luf_flush(ugen);
+}
 
 void fold_ubc(struct tlbflush_unmap_batch *dst,
 	      struct tlbflush_unmap_batch *src)
@@ -666,13 +930,15 @@ void fold_ubc(struct tlbflush_unmap_batch *dst,
 void try_to_unmap_flush(void)
 {
 	struct tlbflush_unmap_batch *tlb_ubc = &current->tlb_ubc;
-	struct tlbflush_unmap_batch *tlb_ubc_ro = &current->tlb_ubc_ro;
+	struct tlbflush_unmap_batch *tlb_ubc_luf = &current->tlb_ubc_luf;
 
-	fold_ubc(tlb_ubc, tlb_ubc_ro);
+	fold_ubc(tlb_ubc, tlb_ubc_luf);
 	if (!tlb_ubc->flush_required)
 		return;
 
+	rmap_flush_start();
 	arch_tlbbatch_flush(&tlb_ubc->arch);
+	rmap_flush_end(tlb_ubc);
 	arch_tlbbatch_clear(&tlb_ubc->arch);
 	tlb_ubc->flush_required = false;
 	tlb_ubc->writable = false;
@@ -682,9 +948,9 @@ void try_to_unmap_flush(void)
 void try_to_unmap_flush_dirty(void)
 {
 	struct tlbflush_unmap_batch *tlb_ubc = &current->tlb_ubc;
-	struct tlbflush_unmap_batch *tlb_ubc_ro = &current->tlb_ubc_ro;
+	struct tlbflush_unmap_batch *tlb_ubc_luf = &current->tlb_ubc_luf;
 
-	if (tlb_ubc->writable || tlb_ubc_ro->writable)
+	if (tlb_ubc->writable || tlb_ubc_luf->writable)
 		try_to_unmap_flush();
 }
 
@@ -708,9 +974,15 @@ static void set_tlb_ubc_flush_pending(struct mm_struct *mm, pte_t pteval,
 	if (!pte_accessible(mm, pteval))
 		return;
 
-	if (pte_write(pteval))
+	if (pte_write(pteval)) {
 		tlb_ubc = &current->tlb_ubc;
-	else
+
+		/*
+		 * luf cannot work with the folio once a writable or
+		 * dirty mapping is found on it.
+		 */
+		can_luf_fail();
+	} else
 		tlb_ubc = &current->tlb_ubc_ro;
 
 	arch_tlbbatch_add_pending(&tlb_ubc->arch, mm, uaddr);
@@ -1976,11 +2248,23 @@ void try_to_unmap(struct folio *folio, enum ttu_flags flags)
 		.done = folio_not_mapped,
 		.anon_lock = folio_lock_anon_vma_read,
 	};
+	struct tlbflush_unmap_batch *tlb_ubc = &current->tlb_ubc;
+	struct tlbflush_unmap_batch *tlb_ubc_ro = &current->tlb_ubc_ro;
+	struct tlbflush_unmap_batch *tlb_ubc_luf = &current->tlb_ubc_luf;
+	bool can_luf;
+
+	can_luf_init();
 
 	if (flags & TTU_RMAP_LOCKED)
 		rmap_walk_locked(folio, &rwc);
 	else
 		rmap_walk(folio, &rwc);
+
+	can_luf = can_luf_folio(folio) && can_luf_test();
+	if (can_luf)
+		fold_ubc(tlb_ubc_luf, tlb_ubc_ro);
+	else
+		fold_ubc(tlb_ubc, tlb_ubc_ro);
 }
 
 /*
@@ -2325,6 +2609,10 @@ void try_to_migrate(struct folio *folio, enum ttu_flags flags)
 		.done = folio_not_mapped,
 		.anon_lock = folio_lock_anon_vma_read,
 	};
+	struct tlbflush_unmap_batch *tlb_ubc = &current->tlb_ubc;
+	struct tlbflush_unmap_batch *tlb_ubc_ro = &current->tlb_ubc_ro;
+	struct tlbflush_unmap_batch *tlb_ubc_luf = &current->tlb_ubc_luf;
+	bool can_luf;
 
 	/*
 	 * Migration always ignores mlock and only supports TTU_RMAP_LOCKED and
@@ -2349,10 +2637,18 @@ void try_to_migrate(struct folio *folio, enum ttu_flags flags)
 	if (!folio_test_ksm(folio) && folio_test_anon(folio))
 		rwc.invalid_vma = invalid_migration_vma;
 
+	can_luf_init();
+
 	if (flags & TTU_RMAP_LOCKED)
 		rmap_walk_locked(folio, &rwc);
 	else
 		rmap_walk(folio, &rwc);
+
+	can_luf = can_luf_folio(folio) && can_luf_test();
+	if (can_luf)
+		fold_ubc(tlb_ubc_luf, tlb_ubc_ro);
+	else
+		fold_ubc(tlb_ubc, tlb_ubc_ro);
 }
 
 #ifdef CONFIG_DEVICE_PRIVATE
-- 
2.17.1



^ permalink raw reply related	[flat|nested] 49+ messages in thread

* [PATCH v10 10/12] mm: separate move/undo parts from migrate_pages_batch()
  2024-05-10  6:51 [PATCH v10 00/12] LUF(Lazy Unmap Flush) reducing tlb numbers over 90% Byungchul Park
                   ` (8 preceding siblings ...)
  2024-05-10  6:52 ` [PATCH v10 09/12] mm: implement LUF(Lazy Unmap Flush) deferring tlb flush when folios get unmapped Byungchul Park
@ 2024-05-10  6:52 ` Byungchul Park
  2024-05-10  6:52 ` [PATCH v10 11/12] mm, migrate: apply luf mechanism to unmapping during migration Byungchul Park
                   ` (5 subsequent siblings)
  15 siblings, 0 replies; 49+ messages in thread
From: Byungchul Park @ 2024-05-10  6:52 UTC (permalink / raw)
  To: linux-kernel, linux-mm
  Cc: kernel_team, akpm, ying.huang, vernhao, mgorman, hughd, willy,
	david, peterz, luto, tglx, mingo, bp, dave.hansen, rjgolo

Functionally, no change.  This is a preparation for the luf mechanism,
which requires separate folio lists for its own handling during
migration.  Refactored migrate_pages_batch() so as to split out the
move/undo parts into their own helpers.

Signed-off-by: Byungchul Park <byungchul@sk.com>
---
 mm/migrate.c | 134 +++++++++++++++++++++++++++++++--------------------
 1 file changed, 83 insertions(+), 51 deletions(-)

diff --git a/mm/migrate.c b/mm/migrate.c
index c7692f303fa7..f9ed7a2b8720 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1609,6 +1609,81 @@ static int migrate_hugetlbs(struct list_head *from, new_folio_t get_new_folio,
 	return nr_failed;
 }
 
+static void migrate_folios_move(struct list_head *src_folios,
+		struct list_head *dst_folios,
+		free_folio_t put_new_folio, unsigned long private,
+		enum migrate_mode mode, int reason,
+		struct list_head *ret_folios,
+		struct migrate_pages_stats *stats,
+		int *retry, int *thp_retry, int *nr_failed,
+		int *nr_retry_pages)
+{
+	struct folio *folio, *folio2, *dst, *dst2;
+	bool is_thp;
+	int nr_pages;
+	int rc;
+
+	dst = list_first_entry(dst_folios, struct folio, lru);
+	dst2 = list_next_entry(dst, lru);
+	list_for_each_entry_safe(folio, folio2, src_folios, lru) {
+		is_thp = folio_test_large(folio) && folio_test_pmd_mappable(folio);
+		nr_pages = folio_nr_pages(folio);
+
+		cond_resched();
+
+		rc = migrate_folio_move(put_new_folio, private,
+				folio, dst, mode,
+				reason, ret_folios);
+		/*
+		 * The rules are:
+		 *	Success: folio will be freed
+		 *	-EAGAIN: stay on the unmap_folios list
+		 *	Other errno: put on ret_folios list
+		 */
+		switch(rc) {
+		case -EAGAIN:
+			*retry += 1;
+			*thp_retry += is_thp;
+			*nr_retry_pages += nr_pages;
+			break;
+		case MIGRATEPAGE_SUCCESS:
+			stats->nr_succeeded += nr_pages;
+			stats->nr_thp_succeeded += is_thp;
+			break;
+		default:
+			*nr_failed += 1;
+			stats->nr_thp_failed += is_thp;
+			stats->nr_failed_pages += nr_pages;
+			break;
+		}
+		dst = dst2;
+		dst2 = list_next_entry(dst, lru);
+	}
+}
+
+static void migrate_folios_undo(struct list_head *src_folios,
+		struct list_head *dst_folios,
+		free_folio_t put_new_folio, unsigned long private,
+		struct list_head *ret_folios)
+{
+	struct folio *folio, *folio2, *dst, *dst2;
+
+	dst = list_first_entry(dst_folios, struct folio, lru);
+	dst2 = list_next_entry(dst, lru);
+	list_for_each_entry_safe(folio, folio2, src_folios, lru) {
+		int old_page_state = 0;
+		struct anon_vma *anon_vma = NULL;
+
+		__migrate_folio_extract(dst, &old_page_state, &anon_vma);
+		migrate_folio_undo_src(folio, old_page_state & PAGE_WAS_MAPPED,
+				anon_vma, true, ret_folios);
+		list_del(&dst->lru);
+		migrate_folio_undo_dst(dst, true, put_new_folio, private);
+		dst = dst2;
+		dst2 = list_next_entry(dst, lru);
+	}
+}
+
 /*
  * migrate_pages_batch() first unmaps folios in the from list as many as
  * possible, then move the unmapped folios.
@@ -1631,7 +1706,7 @@ static int migrate_pages_batch(struct list_head *from,
 	int pass = 0;
 	bool is_thp = false;
 	bool is_large = false;
-	struct folio *folio, *folio2, *dst = NULL, *dst2;
+	struct folio *folio, *folio2, *dst = NULL;
 	int rc, rc_saved = 0, nr_pages;
 	LIST_HEAD(unmap_folios);
 	LIST_HEAD(dst_folios);
@@ -1790,42 +1865,11 @@ static int migrate_pages_batch(struct list_head *from,
 		thp_retry = 0;
 		nr_retry_pages = 0;
 
-		dst = list_first_entry(&dst_folios, struct folio, lru);
-		dst2 = list_next_entry(dst, lru);
-		list_for_each_entry_safe(folio, folio2, &unmap_folios, lru) {
-			is_thp = folio_test_large(folio) && folio_test_pmd_mappable(folio);
-			nr_pages = folio_nr_pages(folio);
-
-			cond_resched();
-
-			rc = migrate_folio_move(put_new_folio, private,
-						folio, dst, mode,
-						reason, ret_folios);
-			/*
-			 * The rules are:
-			 *	Success: folio will be freed
-			 *	-EAGAIN: stay on the unmap_folios list
-			 *	Other errno: put on ret_folios list
-			 */
-			switch(rc) {
-			case -EAGAIN:
-				retry++;
-				thp_retry += is_thp;
-				nr_retry_pages += nr_pages;
-				break;
-			case MIGRATEPAGE_SUCCESS:
-				stats->nr_succeeded += nr_pages;
-				stats->nr_thp_succeeded += is_thp;
-				break;
-			default:
-				nr_failed++;
-				stats->nr_thp_failed += is_thp;
-				stats->nr_failed_pages += nr_pages;
-				break;
-			}
-			dst = dst2;
-			dst2 = list_next_entry(dst, lru);
-		}
+		/* Move the unmapped folios */
+		migrate_folios_move(&unmap_folios, &dst_folios,
+				put_new_folio, private, mode, reason,
+				ret_folios, stats, &retry, &thp_retry,
+				&nr_failed, &nr_retry_pages);
 	}
 	nr_failed += retry;
 	stats->nr_thp_failed += thp_retry;
@@ -1834,20 +1878,8 @@ static int migrate_pages_batch(struct list_head *from,
 	rc = rc_saved ? : nr_failed;
 out:
 	/* Cleanup remaining folios */
-	dst = list_first_entry(&dst_folios, struct folio, lru);
-	dst2 = list_next_entry(dst, lru);
-	list_for_each_entry_safe(folio, folio2, &unmap_folios, lru) {
-		int old_page_state = 0;
-		struct anon_vma *anon_vma = NULL;
-
-		__migrate_folio_extract(dst, &old_page_state, &anon_vma);
-		migrate_folio_undo_src(folio, old_page_state & PAGE_WAS_MAPPED,
-				       anon_vma, true, ret_folios);
-		list_del(&dst->lru);
-		migrate_folio_undo_dst(dst, true, put_new_folio, private);
-		dst = dst2;
-		dst2 = list_next_entry(dst, lru);
-	}
+	migrate_folios_undo(&unmap_folios, &dst_folios,
+			put_new_folio, private, ret_folios);
 
 	return rc;
 }
-- 
2.17.1



^ permalink raw reply related	[flat|nested] 49+ messages in thread

* [PATCH v10 11/12] mm, migrate: apply luf mechanism to unmapping during migration
  2024-05-10  6:51 [PATCH v10 00/12] LUF(Lazy Unmap Flush) reducing tlb numbers over 90% Byungchul Park
                   ` (9 preceding siblings ...)
  2024-05-10  6:52 ` [PATCH v10 10/12] mm: separate move/undo parts from migrate_pages_batch() Byungchul Park
@ 2024-05-10  6:52 ` Byungchul Park
  2024-05-10  6:52 ` [PATCH v10 12/12] mm, vmscan: apply luf mechanism to unmapping during folio reclaim Byungchul Park
                   ` (4 subsequent siblings)
  15 siblings, 0 replies; 49+ messages in thread
From: Byungchul Park @ 2024-05-10  6:52 UTC (permalink / raw)
  To: linux-kernel, linux-mm
  Cc: kernel_team, akpm, ying.huang, vernhao, mgorman, hughd, willy,
	david, peterz, luto, tglx, mingo, bp, dave.hansen, rjgolo

A new mechanism, LUF(Lazy Unmap Flush), defers tlb flush until folios
that have been unmapped and freed, eventually get allocated again.  It's
safe for folios that had been mapped read only and were unmapped, since
the contents of the folios don't change while staying in pcp or buddy
so we can still read the data through the stale tlb entries.

Applied the mechanism to unmapping during migration.
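
Purely as an illustration of the control flow added here (a userspace
toy, not the kernel code; the toy_* helpers are made-up stand-ins for
try_to_migrate(), try_to_unmap_luf(), try_to_unmap_flush() and the
folio release path): folios whose mappings were all read-only are kept
on a separate list, one ugen is taken for the whole batch before the
flush, and only the luf list is released with that ugen.

   #include <stdio.h>

   struct toy_folio {
       int id;
       int all_ro;     /* what the rmap walk reported for this folio */
   };

   static unsigned short toy_unmap_luf(void)  { return 42; /* batch ugen */ }
   static void toy_unmap_flush(void)          { printf("immediate tlb flush\n"); }

   static void toy_release(struct toy_folio *f, unsigned short ugen)
   {
       if (ugen)
           printf("folio %d freed, flush deferred until ugen %u\n",
                  f->id, ugen);
       else
           printf("folio %d freed normally\n", f->id);
   }

   int main(void)
   {
       struct toy_folio batch[] = {
           { .id = 0, .all_ro = 1 },
           { .id = 1, .all_ro = 0 },
           { .id = 2, .all_ro = 1 },
       };
       struct toy_folio *luf_list[3], *plain_list[3];
       int nr_luf = 0, nr_plain = 0, i;
       unsigned short ugen;

       /* Unmap phase: route each folio by what the rmap walk reported. */
       for (i = 0; i < 3; i++) {
           if (batch[i].all_ro)
               luf_list[nr_luf++] = &batch[i];
           else
               plain_list[nr_plain++] = &batch[i];
       }

       /* Move phase: take the batch ugen first, then flush for the rest. */
       ugen = toy_unmap_luf();
       toy_unmap_flush();

       for (i = 0; i < nr_plain; i++)
           toy_release(plain_list[i], 0);
       for (i = 0; i < nr_luf; i++)
           toy_release(luf_list[i], ugen);
       return 0;
   }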

Signed-off-by: Byungchul Park <byungchul@sk.com>
---
 include/linux/rmap.h |  2 +-
 mm/migrate.c         | 56 ++++++++++++++++++++++++++++++++------------
 mm/rmap.c            |  9 ++++---
 3 files changed, 48 insertions(+), 19 deletions(-)

diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index 0f906dc6d280..1898a2c1c087 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -657,7 +657,7 @@ static inline int folio_try_share_anon_rmap_pmd(struct folio *folio,
 int folio_referenced(struct folio *, int is_locked,
 			struct mem_cgroup *memcg, unsigned long *vm_flags);
 
-void try_to_migrate(struct folio *folio, enum ttu_flags flags);
+bool try_to_migrate(struct folio *folio, enum ttu_flags flags);
 void try_to_unmap(struct folio *, enum ttu_flags flags);
 
 int make_device_exclusive_range(struct mm_struct *mm, unsigned long start,
diff --git a/mm/migrate.c b/mm/migrate.c
index f9ed7a2b8720..c8b0e5203e9a 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1090,7 +1090,8 @@ static void migrate_folio_undo_dst(struct folio *dst, bool locked,
 
 /* Cleanup src folio upon migration success */
 static void migrate_folio_done(struct folio *src,
-			       enum migrate_reason reason)
+			       enum migrate_reason reason,
+			       unsigned short int ugen)
 {
 	/*
 	 * Compaction can migrate also non-LRU pages which are
@@ -1101,8 +1102,12 @@ static void migrate_folio_done(struct folio *src,
 		mod_node_page_state(folio_pgdat(src), NR_ISOLATED_ANON +
 				    folio_is_file_lru(src), -folio_nr_pages(src));
 
-	if (reason != MR_MEMORY_FAILURE)
-		/* We release the page in page_handle_poison. */
+	/* We release the page in page_handle_poison. */
+	if (reason == MR_MEMORY_FAILURE)
+		check_luf_flush(ugen);
+	else if (ugen)
+		folio_put_ugen(src, ugen);
+	else
 		folio_put(src);
 }
 
@@ -1110,7 +1115,8 @@ static void migrate_folio_done(struct folio *src,
 static int migrate_folio_unmap(new_folio_t get_new_folio,
 		free_folio_t put_new_folio, unsigned long private,
 		struct folio *src, struct folio **dstp, enum migrate_mode mode,
-		enum migrate_reason reason, struct list_head *ret)
+		enum migrate_reason reason, struct list_head *ret,
+		bool *can_luf)
 {
 	struct folio *dst;
 	int rc = -EAGAIN;
@@ -1126,7 +1132,7 @@ static int migrate_folio_unmap(new_folio_t get_new_folio,
 		folio_clear_unevictable(src);
 		/* free_pages_prepare() will clear PG_isolated. */
 		list_del(&src->lru);
-		migrate_folio_done(src, reason);
+		migrate_folio_done(src, reason, 0);
 		return MIGRATEPAGE_SUCCESS;
 	}
 
@@ -1244,7 +1250,7 @@ static int migrate_folio_unmap(new_folio_t get_new_folio,
 		/* Establish migration ptes */
 		VM_BUG_ON_FOLIO(folio_test_anon(src) &&
 			       !folio_test_ksm(src) && !anon_vma, src);
-		try_to_migrate(src, mode == MIGRATE_ASYNC ? TTU_BATCH_FLUSH : 0);
+		*can_luf = try_to_migrate(src, mode == MIGRATE_ASYNC ? TTU_BATCH_FLUSH : 0);
 		old_page_state |= PAGE_WAS_MAPPED;
 	}
 
@@ -1272,7 +1278,7 @@ static int migrate_folio_unmap(new_folio_t get_new_folio,
 static int migrate_folio_move(free_folio_t put_new_folio, unsigned long private,
 			      struct folio *src, struct folio *dst,
 			      enum migrate_mode mode, enum migrate_reason reason,
-			      struct list_head *ret)
+			      struct list_head *ret, unsigned short int ugen)
 {
 	int rc;
 	int old_page_state = 0;
@@ -1326,7 +1332,7 @@ static int migrate_folio_move(free_folio_t put_new_folio, unsigned long private,
 	if (anon_vma)
 		put_anon_vma(anon_vma);
 	folio_unlock(src);
-	migrate_folio_done(src, reason);
+	migrate_folio_done(src, reason, ugen);
 
 	return rc;
 out:
@@ -1616,7 +1622,7 @@ static void migrate_folios_move(struct list_head *src_folios,
 		struct list_head *ret_folios,
 		struct migrate_pages_stats *stats,
 		int *retry, int *thp_retry, int *nr_failed,
-		int *nr_retry_pages)
+		int *nr_retry_pages, unsigned short int ugen)
 {
 	struct folio *folio, *folio2, *dst, *dst2;
 	bool is_thp;
@@ -1633,7 +1639,7 @@ static void migrate_folios_move(struct list_head *src_folios,
 
 		rc = migrate_folio_move(put_new_folio, private,
 				folio, dst, mode,
-				reason, ret_folios);
+				reason, ret_folios, ugen);
 		/*
 		 * The rules are:
 		 *	Success: folio will be freed
@@ -1710,7 +1716,11 @@ static int migrate_pages_batch(struct list_head *from,
 	int rc, rc_saved = 0, nr_pages;
 	LIST_HEAD(unmap_folios);
 	LIST_HEAD(dst_folios);
+	LIST_HEAD(unmap_folios_luf);
+	LIST_HEAD(dst_folios_luf);
 	bool nosplit = (reason == MR_NUMA_MISPLACED);
+	unsigned short int ugen;
+	bool can_luf;
 
 	VM_WARN_ON_ONCE(mode != MIGRATE_ASYNC &&
 			!list_empty(from) && !list_is_singular(from));
@@ -1773,9 +1783,11 @@ static int migrate_pages_batch(struct list_head *from,
 				continue;
 			}
 
+			can_luf = false;
 			rc = migrate_folio_unmap(get_new_folio, put_new_folio,
 					private, folio, &dst, mode, reason,
-					ret_folios);
+					ret_folios, &can_luf);
+
 			/*
 			 * The rules are:
 			 *	Success: folio will be freed
@@ -1821,7 +1833,8 @@ static int migrate_pages_batch(struct list_head *from,
 				/* nr_failed isn't updated for not used */
 				stats->nr_thp_failed += thp_retry;
 				rc_saved = rc;
-				if (list_empty(&unmap_folios))
+				if (list_empty(&unmap_folios) &&
+				    list_empty(&unmap_folios_luf))
 					goto out;
 				else
 					goto move;
@@ -1835,8 +1848,13 @@ static int migrate_pages_batch(struct list_head *from,
 				stats->nr_thp_succeeded += is_thp;
 				break;
 			case MIGRATEPAGE_UNMAP:
-				list_move_tail(&folio->lru, &unmap_folios);
-				list_add_tail(&dst->lru, &dst_folios);
+				if (can_luf) {
+					list_move_tail(&folio->lru, &unmap_folios_luf);
+					list_add_tail(&dst->lru, &dst_folios_luf);
+				} else {
+					list_move_tail(&folio->lru, &unmap_folios);
+					list_add_tail(&dst->lru, &dst_folios);
+				}
 				break;
 			default:
 				/*
@@ -1856,6 +1874,8 @@ static int migrate_pages_batch(struct list_head *from,
 	stats->nr_thp_failed += thp_retry;
 	stats->nr_failed_pages += nr_retry_pages;
 move:
+	/* Should be before try_to_unmap_flush() */
+	ugen = try_to_unmap_luf();
 	/* Flush TLBs for all unmapped folios */
 	try_to_unmap_flush();
 
@@ -1869,7 +1889,11 @@ static int migrate_pages_batch(struct list_head *from,
 		migrate_folios_move(&unmap_folios, &dst_folios,
 				put_new_folio, private, mode, reason,
 				ret_folios, stats, &retry, &thp_retry,
-				&nr_failed, &nr_retry_pages);
+				&nr_failed, &nr_retry_pages, 0);
+		migrate_folios_move(&unmap_folios_luf, &dst_folios_luf,
+				put_new_folio, private, mode, reason,
+				ret_folios, stats, &retry, &thp_retry,
+				&nr_failed, &nr_retry_pages, ugen);
 	}
 	nr_failed += retry;
 	stats->nr_thp_failed += thp_retry;
@@ -1880,6 +1904,8 @@ static int migrate_pages_batch(struct list_head *from,
 	/* Cleanup remaining folios */
 	migrate_folios_undo(&unmap_folios, &dst_folios,
 			put_new_folio, private, ret_folios);
+	migrate_folios_undo(&unmap_folios_luf, &dst_folios_luf,
+			put_new_folio, private, ret_folios);
 
 	return rc;
 }
diff --git a/mm/rmap.c b/mm/rmap.c
index e42783c02114..d25ae20a47b5 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -2600,8 +2600,9 @@ static bool try_to_migrate_one(struct folio *folio, struct vm_area_struct *vma,
  *
  * Tries to remove all the page table entries which are mapping this folio and
  * replace them with special swap entries. Caller must hold the folio lock.
+ * Return true if all the mappings are read-only, otherwise false.
  */
-void try_to_migrate(struct folio *folio, enum ttu_flags flags)
+bool try_to_migrate(struct folio *folio, enum ttu_flags flags)
 {
 	struct rmap_walk_control rwc = {
 		.rmap_one = try_to_migrate_one,
@@ -2620,11 +2621,11 @@ void try_to_migrate(struct folio *folio, enum ttu_flags flags)
 	 */
 	if (WARN_ON_ONCE(flags & ~(TTU_RMAP_LOCKED | TTU_SPLIT_HUGE_PMD |
 					TTU_SYNC | TTU_BATCH_FLUSH)))
-		return;
+		return false;
 
 	if (folio_is_zone_device(folio) &&
 	    (!folio_is_device_private(folio) && !folio_is_device_coherent(folio)))
-		return;
+		return false;
 
 	/*
 	 * During exec, a temporary VMA is setup and later moved.
@@ -2649,6 +2650,8 @@ void try_to_migrate(struct folio *folio, enum ttu_flags flags)
 		fold_ubc(tlb_ubc_luf, tlb_ubc_ro);
 	else
 		fold_ubc(tlb_ubc, tlb_ubc_ro);
+
+	return can_luf;
 }
 
 #ifdef CONFIG_DEVICE_PRIVATE
-- 
2.17.1



^ permalink raw reply related	[flat|nested] 49+ messages in thread

* [PATCH v10 12/12] mm, vmscan: apply luf mechanism to unmapping during folio reclaim
  2024-05-10  6:51 [PATCH v10 00/12] LUF(Lazy Unmap Flush) reducing tlb numbers over 90% Byungchul Park
                   ` (10 preceding siblings ...)
  2024-05-10  6:52 ` [PATCH v10 11/12] mm, migrate: apply luf mechanism to unmapping during migration Byungchul Park
@ 2024-05-10  6:52 ` Byungchul Park
  2024-05-11  6:54 ` [PATCH v10 00/12] LUF(Lazy Unmap Flush) reducing tlb numbers over 90% Huang, Ying
                   ` (3 subsequent siblings)
  15 siblings, 0 replies; 49+ messages in thread
From: Byungchul Park @ 2024-05-10  6:52 UTC (permalink / raw)
  To: linux-kernel, linux-mm
  Cc: kernel_team, akpm, ying.huang, vernhao, mgorman, hughd, willy,
	david, peterz, luto, tglx, mingo, bp, dave.hansen, rjgolo

A new mechanism, LUF(Lazy Unmap Flush), defers tlb flush until folios
that have been unmapped and freed, eventually get allocated again.  It's
safe for folios that had been mapped read only and were unmapped, since
the contents of the folios don't change while staying in pcp or buddy
so we can still read the data through the stale tlb entries.

Applied the mechanism to unmapping during folio reclaim.
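
A minimal sketch of the fallback this patch relies on in the reclaim
path (a userspace toy with made-up toy_* names, not the kernel code):
try_to_unmap_luf() may return 0 when it has to give up deferring, in
which case the caller performs an immediate flush before freeing the
batch, mirroring the if (!ugen) branch added to shrink_folio_list().

   #include <stdio.h>

   /* Pretend the global luf state is busy, so deferring is not possible. */
   static int luf_busy = 1;

   static unsigned short toy_try_to_unmap_luf(void)
   {
       return luf_busy ? 0 : 42;   /* 0 means "could not defer" */
   }

   static void toy_try_to_unmap_flush(void)
   {
       printf("flushing tlb now\n");
   }

   static void toy_free_batch(unsigned short ugen)
   {
       printf("freeing batch with ugen %u\n", ugen);
   }

   int main(void)
   {
       unsigned short ugen = toy_try_to_unmap_luf();

       if (!ugen)
           toy_try_to_unmap_flush();
       toy_free_batch(ugen);
       return 0;
   }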

Signed-off-by: Byungchul Park <byungchul@sk.com>
---
 include/linux/rmap.h |  5 +++--
 mm/rmap.c            |  5 ++++-
 mm/vmscan.c          | 21 ++++++++++++++++++++-
 3 files changed, 27 insertions(+), 4 deletions(-)

diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index 1898a2c1c087..9ca752f8de97 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -658,7 +658,7 @@ int folio_referenced(struct folio *, int is_locked,
 			struct mem_cgroup *memcg, unsigned long *vm_flags);
 
 bool try_to_migrate(struct folio *folio, enum ttu_flags flags);
-void try_to_unmap(struct folio *, enum ttu_flags flags);
+bool try_to_unmap(struct folio *, enum ttu_flags flags);
 
 int make_device_exclusive_range(struct mm_struct *mm, unsigned long start,
 				unsigned long end, struct page **pages,
@@ -777,8 +777,9 @@ static inline int folio_referenced(struct folio *folio, int is_locked,
 	return 0;
 }
 
-static inline void try_to_unmap(struct folio *folio, enum ttu_flags flags)
+static inline bool try_to_unmap(struct folio *folio, enum ttu_flags flags)
 {
+	return false;
 }
 
 static inline int folio_mkclean(struct folio *folio)
diff --git a/mm/rmap.c b/mm/rmap.c
index d25ae20a47b5..571e337af448 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -2237,10 +2237,11 @@ static int folio_not_mapped(struct folio *folio)
  * Tries to remove all the page table entries which are mapping this
  * folio.  It is the caller's responsibility to check if the folio is
  * still mapped if needed (use TTU_SYNC to prevent accounting races).
+ * Return true if all the mappings are read-only, otherwise false.
  *
  * Context: Caller must hold the folio lock.
  */
-void try_to_unmap(struct folio *folio, enum ttu_flags flags)
+bool try_to_unmap(struct folio *folio, enum ttu_flags flags)
 {
 	struct rmap_walk_control rwc = {
 		.rmap_one = try_to_unmap_one,
@@ -2265,6 +2266,8 @@ void try_to_unmap(struct folio *folio, enum ttu_flags flags)
 		fold_ubc(tlb_ubc_luf, tlb_ubc_ro);
 	else
 		fold_ubc(tlb_ubc, tlb_ubc_ro);
+
+	return can_luf;
 }
 
 /*
diff --git a/mm/vmscan.c b/mm/vmscan.c
index bb0ff11f9ec9..4e2e9d07cd96 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1031,14 +1031,17 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
 		struct reclaim_stat *stat, bool ignore_references)
 {
 	struct folio_batch free_folios;
+	struct folio_batch free_folios_luf;
 	LIST_HEAD(ret_folios);
 	LIST_HEAD(demote_folios);
 	unsigned int nr_reclaimed = 0;
 	unsigned int pgactivate = 0;
 	bool do_demote_pass;
 	struct swap_iocb *plug = NULL;
+	unsigned short int ugen;
 
 	folio_batch_init(&free_folios);
+	folio_batch_init(&free_folios_luf);
 	memset(stat, 0, sizeof(*stat));
 	cond_resched();
 	do_demote_pass = can_demote(pgdat->node_id, sc);
@@ -1050,6 +1053,7 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
 		enum folio_references references = FOLIOREF_RECLAIM;
 		bool dirty, writeback;
 		unsigned int nr_pages;
+		bool can_luf = false;
 
 		cond_resched();
 
@@ -1292,7 +1296,7 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
 			if (folio_test_large(folio) && list_empty(&folio->_deferred_list))
 				flags |= TTU_SYNC;
 
-			try_to_unmap(folio, flags);
+			can_luf = try_to_unmap(folio, flags);
 			if (folio_mapped(folio)) {
 				stat->nr_unmap_fail += nr_pages;
 				if (!was_swapbacked &&
@@ -1457,6 +1461,18 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
 		if (folio_test_large(folio) &&
 		    folio_test_large_rmappable(folio))
 			folio_undo_large_rmappable(folio);
+
+		if (can_luf) {
+			if (folio_batch_add(&free_folios_luf, folio) == 0) {
+				mem_cgroup_uncharge_folios(&free_folios_luf);
+				ugen = try_to_unmap_luf();
+				if (!ugen)
+					try_to_unmap_flush();
+				free_unref_folios(&free_folios_luf, ugen);
+			}
+			continue;
+		}
+
 		if (folio_batch_add(&free_folios, folio) == 0) {
 			mem_cgroup_uncharge_folios(&free_folios);
 			try_to_unmap_flush();
@@ -1526,8 +1542,11 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
 	pgactivate = stat->nr_activate[0] + stat->nr_activate[1];
 
 	mem_cgroup_uncharge_folios(&free_folios);
+	mem_cgroup_uncharge_folios(&free_folios_luf);
+	ugen = try_to_unmap_luf();
 	try_to_unmap_flush();
 	free_unref_folios(&free_folios, 0);
+	free_unref_folios(&free_folios_luf, ugen);
 
 	list_splice(&ret_folios, folio_list);
 	count_vm_events(PGACTIVATE, pgactivate);
-- 
2.17.1
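
To see the control flow of the shrink_folio_list() hunk above at a
glance, here is a minimal userspace C sketch of the same batching logic.
It is not kernel code: batch_add(), free_batch(), tlb_flush_model() and
try_to_unmap_luf_model() are simplified stand-ins for folio_batch_add(),
free_unref_folios(), try_to_unmap_flush() and try_to_unmap_luf(), and the
only semantics assumed from the diff is that a zero unmap generation
number (ugen) means the deferral failed and an immediate flush is needed.

/* Simplified model of the LUF batching added to shrink_folio_list().
 * Folios whose mappings were all read-only (can_luf) go to a separate
 * batch whose freeing is tagged with an unmap generation number (ugen);
 * everything else takes the existing flush-then-free path.
 */
#include <stdio.h>
#include <stdbool.h>

#define BATCH_MAX 15	/* stand-in for the folio_batch capacity */

struct folio { int id; bool can_luf; };
struct batch { struct folio *f[BATCH_MAX]; int n; };

static unsigned short next_ugen = 1;

/* Stand-in for try_to_unmap_luf(): returns a nonzero generation when the
 * pending read-only unmaps can be deferred, 0 when the caller must flush. */
static unsigned short try_to_unmap_luf_model(void)
{
	return next_ugen++;	/* assume deferral always succeeds here */
}

static void tlb_flush_model(void) { puts("  immediate TLB flush"); }

static void free_batch(struct batch *b, unsigned short ugen)
{
	printf("  free %d folios, ugen=%u%s\n", b->n, (unsigned)ugen,
	       ugen ? " (flush deferred until reallocation)" : "");
	b->n = 0;
}

/* Returns true when the batch filled up and must be drained. */
static bool batch_add(struct batch *b, struct folio *f)
{
	b->f[b->n++] = f;
	return b->n == BATCH_MAX;
}

int main(void)
{
	struct batch free_folios = { .n = 0 }, free_folios_luf = { .n = 0 };
	struct folio folios[40];
	unsigned short ugen;

	for (int i = 0; i < 40; i++)
		folios[i] = (struct folio){ .id = i, .can_luf = (i % 3 != 0) };

	for (int i = 0; i < 40; i++) {
		struct folio *f = &folios[i];

		if (f->can_luf) {	/* all mappings were read-only */
			if (batch_add(&free_folios_luf, f)) {
				ugen = try_to_unmap_luf_model();
				if (!ugen)
					tlb_flush_model();
				free_batch(&free_folios_luf, ugen);
			}
			continue;
		}
		if (batch_add(&free_folios, f)) {
			tlb_flush_model();	/* writable mappings: flush now */
			free_batch(&free_folios, 0);
		}
	}

	/* Drain what is left, mirroring the tail of shrink_folio_list(). */
	ugen = try_to_unmap_luf_model();
	tlb_flush_model();
	free_batch(&free_folios, 0);
	free_batch(&free_folios_luf, ugen);
	return 0;
}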



^ permalink raw reply related	[flat|nested] 49+ messages in thread

* Re: [PATCH v10 00/12] LUF(Lazy Unmap Flush) reducing tlb numbers over 90%
  2024-05-10  6:51 [PATCH v10 00/12] LUF(Lazy Unmap Flush) reducing tlb numbers over 90% Byungchul Park
                   ` (11 preceding siblings ...)
  2024-05-10  6:52 ` [PATCH v10 12/12] mm, vmscan: apply luf mechanism to unmapping during folio reclaim Byungchul Park
@ 2024-05-11  6:54 ` Huang, Ying
  2024-05-13  1:41   ` Byungchul Park
  2024-05-11  7:15 ` Huang, Ying
                   ` (2 subsequent siblings)
  15 siblings, 1 reply; 49+ messages in thread
From: Huang, Ying @ 2024-05-11  6:54 UTC (permalink / raw)
  To: Byungchul Park
  Cc: linux-kernel, linux-mm, kernel_team, akpm, vernhao, mgorman,
	hughd, willy, david, peterz, luto, tglx, mingo, bp, dave.hansen,
	rjgolo

Byungchul Park <byungchul@sk.com> writes:

> Hi everyone,
>
> While I'm working with a tiered memory system e.g. CXL memory, I have
> been facing migration overhead esp. tlb shootdown on promotion or
> demotion between different tiers.  Yeah..  most tlb shootdowns on
> migration through hinting fault can be avoided thanks to Huang Ying's
> work, commit 4d4b6d66db ("mm,unmap: avoid flushing tlb in batch if PTE
> is inaccessible").  See the following link for more information:
>
> https://lore.kernel.org/lkml/20231115025755.GA29979@system.software.com/
>
> However, it's only for migration through hinting fault.  I thought it'd
> be much better if we have a general mechanism to reduce all the tlb
> numbers that we can apply to any unmap code, that we normally believe
> tlb flush should be followed.
>
> I'm suggesting a new mechanism, LUF(Lazy Unmap Flush), defers tlb flush
> until folios that have been unmapped and freed, eventually get allocated
> again.  It's safe for folios that had been mapped read-only and were
> unmapped, since the contents of the folios don't change while staying in
> pcp or buddy so we can still read the data through the stale tlb entries.
>
> tlb flush can be defered when folios get unmapped as long as it
> guarantees to perform tlb flush needed, before the folios actually
> become used, of course, only if all the corresponding ptes don't have
> write permission.  Otherwise, the system will get messed up.
>
> To achieve that:
>
>    1. For the folios that map only to non-writable tlb entries, prevent
>       tlb flush during unmapping but perform it just before the folios
>       actually become used, out of buddy or pcp.
>
>    2. When any non-writable ptes change to writable e.g. through fault
>       handler, give up luf mechanism and perform tlb flush required
>       right away.
>
>    3. When a writable mapping is created e.g. through mmap(), give up
>       luf mechanism and perform tlb flush required right away.
>
> No matter what type of workload is used for performance evaluation, the
> result would be positive thanks to the unconditional reduction of tlb
> flushes, tlb misses and interrupts.

Are there any downsides of the optimization?  Will it cause regression
for workloads with almost no read-only mappings?  Will it cause
regression for page allocation?

> For the test, I picked up one of
> the most popular and heavy workload, llama.cpp that is a
> LLM(Large Language Model) inference engine.

IIUC, llama.cpp is a workload with huge read-only mapping.

> The result would depend on memory latency and how often reclaim runs,
> which implies tlb miss overhead and how many times unmapping happens.
> In my system, the result shows:
>
>    1. tlb flushes are reduced about 95%.
>    2. tlb misses(itlb) are reduced about 80%.
>    3. tlb misses(dtlb store) are reduced about 57%.
>    4. tlb misses(dtlb load) are reduced about 24%.
>    5. tlb shootdown interrupts are reduced about 95%.
>    6. The test program runtime is reduced about 5%.
>
> The test environment and the result is like:
>
>    Machine: bare metal, x86_64, Intel(R) Xeon(R) Gold 6430
>    CPU: 1 socket 64 core with hyper thread on
>    Numa: 2 nodes (64 CPUs DRAM 42GB, no CPUs CXL expander 98GB)
>    Config: swap off, numa balancing tiering on, demotion enabled
>
>    The test set:
>
>       llama.cpp/main -m $(70G_model1) -p "who are you?" -s 1 -t 15 -n 20 &
>       llama.cpp/main -m $(70G_model2) -p "who are you?" -s 1 -t 15 -n 20 &
>       llama.cpp/main -m $(70G_model3) -p "who are you?" -s 1 -t 15 -n 20 &
>       wait
>
>       where -t: nr of threads, -s: seed used to make the runtime stable,
>       -n: nr of tokens that determines the runtime, -p: prompt to ask,
>       -m: LLM model to use.
>
>    Run the test set 10 times successively with caches dropped every run
>    via 'echo 3 > /proc/sys/vm/drop_caches'.  Each inference prints its
>    runtime at the end of each.
>
>    1. Runtime from the output of llama.cpp:
>
>    BEFORE
>    ------
>    llama_print_timings:       total time = 1002461.95 ms /    24 tokens
>    llama_print_timings:       total time = 1044978.38 ms /    24 tokens
>    llama_print_timings:       total time = 1000653.09 ms /    24 tokens
>    llama_print_timings:       total time = 1047104.80 ms /    24 tokens
>    llama_print_timings:       total time = 1069430.36 ms /    24 tokens
>    llama_print_timings:       total time = 1068201.16 ms /    24 tokens
>    llama_print_timings:       total time = 1078092.59 ms /    24 tokens
>    llama_print_timings:       total time = 1073200.45 ms /    24 tokens
>    llama_print_timings:       total time = 1067136.00 ms /    24 tokens
>    llama_print_timings:       total time = 1076442.56 ms /    24 tokens
>    llama_print_timings:       total time = 1004142.64 ms /    24 tokens
>    llama_print_timings:       total time = 1042942.65 ms /    24 tokens
>    llama_print_timings:       total time =  999933.76 ms /    24 tokens
>    llama_print_timings:       total time = 1046548.83 ms /    24 tokens
>    llama_print_timings:       total time = 1068671.48 ms /    24 tokens
>    llama_print_timings:       total time = 1068285.76 ms /    24 tokens
>    llama_print_timings:       total time = 1077789.63 ms /    24 tokens
>    llama_print_timings:       total time = 1071558.93 ms /    24 tokens
>    llama_print_timings:       total time = 1066181.55 ms /    24 tokens
>    llama_print_timings:       total time = 1076767.53 ms /    24 tokens
>    llama_print_timings:       total time = 1004065.63 ms /    24 tokens
>    llama_print_timings:       total time = 1044522.13 ms /    24 tokens
>    llama_print_timings:       total time =  999725.33 ms /    24 tokens
>    llama_print_timings:       total time = 1047510.77 ms /    24 tokens
>    llama_print_timings:       total time = 1068010.27 ms /    24 tokens
>    llama_print_timings:       total time = 1068999.31 ms /    24 tokens
>    llama_print_timings:       total time = 1077648.05 ms /    24 tokens
>    llama_print_timings:       total time = 1071378.96 ms /    24 tokens
>    llama_print_timings:       total time = 1066326.32 ms /    24 tokens
>    llama_print_timings:       total time = 1077088.92 ms /    24 tokens
>
>    AFTER
>    -----
>    llama_print_timings:       total time =  988522.03 ms /    24 tokens
>    llama_print_timings:       total time =  997204.52 ms /    24 tokens
>    llama_print_timings:       total time =  996605.86 ms /    24 tokens
>    llama_print_timings:       total time =  991985.50 ms /    24 tokens
>    llama_print_timings:       total time = 1035143.31 ms /    24 tokens
>    llama_print_timings:       total time =  993660.18 ms /    24 tokens
>    llama_print_timings:       total time =  983082.14 ms /    24 tokens
>    llama_print_timings:       total time =  990431.36 ms /    24 tokens
>    llama_print_timings:       total time =  992707.09 ms /    24 tokens
>    llama_print_timings:       total time =  992673.27 ms /    24 tokens
>    llama_print_timings:       total time =  989285.43 ms /    24 tokens
>    llama_print_timings:       total time =  996710.06 ms /    24 tokens
>    llama_print_timings:       total time =  996534.64 ms /    24 tokens
>    llama_print_timings:       total time =  991344.17 ms /    24 tokens
>    llama_print_timings:       total time = 1035210.84 ms /    24 tokens
>    llama_print_timings:       total time =  994714.13 ms /    24 tokens
>    llama_print_timings:       total time =  984184.15 ms /    24 tokens
>    llama_print_timings:       total time =  990909.45 ms /    24 tokens
>    llama_print_timings:       total time =  991881.48 ms /    24 tokens
>    llama_print_timings:       total time =  993918.03 ms /    24 tokens
>    llama_print_timings:       total time =  990061.34 ms /    24 tokens
>    llama_print_timings:       total time =  998076.69 ms /    24 tokens
>    llama_print_timings:       total time =  997082.59 ms /    24 tokens
>    llama_print_timings:       total time =  990677.58 ms /    24 tokens
>    llama_print_timings:       total time = 1036054.94 ms /    24 tokens
>    llama_print_timings:       total time =  994125.93 ms /    24 tokens
>    llama_print_timings:       total time =  982467.01 ms /    24 tokens
>    llama_print_timings:       total time =  990191.60 ms /    24 tokens
>    llama_print_timings:       total time =  993319.24 ms /    24 tokens
>    llama_print_timings:       total time =  992540.57 ms /    24 tokens
>
>    2. tlb shootdowns from 'cat /proc/interrupts':
>
>    BEFORE
>    ------
>    TLB:
>    125553646  141418810  161932620  176853972  186655697  190399283
>    192143823  196414038  192872439  193313658  193395617  192521416
>    190788161  195067598  198016061  193607347  194293972  190786732
>    191545637  194856822  191801931  189634535  190399803  196365922
>    195268398  190115840  188050050  193194908  195317617  190820190
>    190164820  185556071  226797214  229592631  216112464  209909495
>    205575979  205950252  204948111  197999795  198892232  205287952
>    199344631  195015158  195869844  198858745  195692876  200961904
>    203463252  205921722  199850838  206145986  199613202  199961345
>    200129577  203020521  207873649  203697671  197093386  204243803
>    205993323  200934664  204193128  194435376  TLB shootdowns
>
>    AFTER
>    -----
>    TLB:
>      5648092    6610142    7032849    7882308    8088518    8352310
>      8656536    8705136    8647426    8905583    8985408    8704522
>      8884344    9026261    8929974    8869066    8877575    8810096
>      8770984    8754503    8801694    8865925    8787524    8656432
>      8755912    8682034    8773935    8832925    8797997    8515777
>      8481240    8891258   10595243   10285973    9756935    9573681
>      9398968    9069244    9242984    8899009    9310690    9029095
>      9069758    9105825    9092703    9270202    9460287    9258546
>      9180415    9232723    9270611    9175020    9490420    9360316
>      9420818    9057663    9525631    9310152    9152242    8654483
>      9181804    9050847    8919916    8883856  TLB shootdowns
>
>    3. tlb numbers from 'perf stat' per test set:
>
>    BEFORE
>    ------
>    3163679332	dTLB-load-misses
>    2017751856	dTLB-store-misses
>    327092903	iTLB-load-misses
>    1357543886	tlb:tlb_flush
>
>    AFTER
>    -----
>    2394694609	dTLB-load-misses
>    861144167	dTLB-store-misses
>    64055579	iTLB-load-misses
>    69175002	tlb:tlb_flush
>
> ---
>
> Changes from v9:
>
> 	1. Expand the candidate to apply this mechanism:
>
> 	   BEFORE - The souce folios at any type of migration.
> 	   AFTER  - Any folios that have been unmapped and freed.
>
> 	2. Change the workload for test:
>
> 	   BEFORE - XSBench
> 	   AFTER  - llama.cpp (one of the most popluar real workload)
>
> 	3. Change the test environment:
>
> 	   BEFORE - qemu machine, too small DRAM(1GB), large remote mem
> 	   AFTER  - bare metal, real CXL memory, practical memory size
>
> 	4. Rename the mechanism from MIGRC(Migration Read Copy) to
> 	   LUF(Lazy Unmap Flush) to reflect the current version of the
> 	   mechanism can be applied not only to unmap during migration
> 	   but any unmap code e.g. unmap in shrink_folio_list().
>
> 	5. Fix build error for riscv. (feedbacked by kernel test bot)
>
> 	6. Supplement commit messages to describe what this mechanism is
> 	   for, especially in the patches for arch code. (feedbacked by
> 	   Thomas Gleixner)
>
> 	7. Clean up some trivial things.
>
> Changes from v8:
>
> 	1. Rebase on akpm/mm.git mm-unstable as of April 18, 2024.
> 	2. Supplement comments and commit message.
> 	3. Change the candidate to apply migrc mechanism:
>
> 	   BEFORE - The source folios at demotion and promotion.
> 	   AFTER  - The souce folios at any type of migration.
>
> 	4. Change how migrc mechanism works:
>
> 	   BEFORE - Reduce tlb flushes by deferring folio_free() for
> 	            source folios during demotion and promotion.
> 	   AFTER  - Reduce tlb flushes by deferring tlb flush until they
> 	            actually become used, out of pcp or buddy. The
> 		    current version of migrc does *not* defer calling
> 	            folio_free() but let it go as it is as the same as
> 		    vanilla kernel, with the folios marked kind of 'need
> 		    to tlb flush'. And then handle the flush when the
> 		    page exits from pcp or buddy so as to prevent
> 		    changing vm stats e.g. free pages.
>
> Changes from v7:
>
> 	1. Rewrite cover letter to explain what 'migrc' mechasism is.
> 	   (feedbacked by Andrew Morton)
> 	2. Supplement the commit message of a patch 'mm: Add APIs to
> 	   free a folio directly to the buddy bypassing pcp'.
> 	   (feedbacked by Andrew Morton)
>
> Changes from v6:
>
> 	1. Fix build errors in case of
> 	   CONFIG_ARCH_WANT_BATCHED_UNMAP_tlb_FLUSH disabled by moving
> 	   migrc_flush_{start,end}() calls from arch code to
> 	   try_to_unmap_flush() in mm/rmap.c.
>
> Changes from v5:
>
> 	1. Fix build errors in case of CONFIG_MIGRATION disabled or
> 	   CONFIG_HWPOISON_INJECT moduled. (feedbacked by kernel test
> 	   bot and Raymond Jay Golo)
> 	2. Organize migrc code with two kconfigs, CONFIG_MIGRATION and
> 	   CONFIG_ARCH_WANT_BATCHED_UNMAP_tlb_FLUSH.
>
> Changes from v4:
>
> 	1. Rebase on v6.7.
> 	2. Fix build errors in arm64 that is doing nothing for tlb flush
> 	   but has CONFIG_ARCH_WANT_BATCHED_UNMAP_tlb_FLUSH. (reported
> 	   by kernel test robot)
> 	3. Don't use any page flag. So the system would give up migrc
> 	   mechanism more often but it's okay. The final improvement is
> 	   good enough.
> 	4. Instead, optimize full tlb flush(arch_tlbbatch_flush()) by
> 	   avoiding redundant CPUs from tlb flush.
>
> Changes from v3:
>
> 	1. Don't use the kconfig, CONFIG_MIGRC, and remove sysctl knob,
> 	   migrc_enable. (feedbacked by Nadav)
> 	2. Remove the optimization skipping CPUs that have already
> 	   performed tlb flushes needed by any reason when performing
> 	   tlb flushes by migrc because I can't tell the performance
> 	   difference between w/ the optimization and w/o that.
> 	   (feedbacked by Nadav)
> 	3. Minimize arch-specific code. While at it, move all the migrc
>            declarations and inline functions from include/linux/mm.h to
>            mm/internal.h (feedbacked by Dave Hansen, Nadav)
> 	4. Separate a part making migrc paused when the system is in
> 	   high memory pressure to another patch. (feedbacked by Nadav)
> 	5. Rename:
> 	      a. arch_tlbbatch_clean() to arch_tlbbatch_clear(),
> 	      b. tlb_ubc_nowr to tlb_ubc_ro,
> 	      c. migrc_try_flush_free_folios() to migrc_flush_free_folios(),
> 	      d. migrc_stop to migrc_pause.
> 	   (feedbacked by Nadav)
> 	6. Use ->lru list_head instead of introducing a new llist_head.
> 	   (feedbacked by Nadav)
> 	7. Use non-atomic operations of page-flag when it's safe.
> 	   (feedbacked by Nadav)
> 	8. Use stack instead of keeping a pointer of 'struct migrc_req'
> 	   in struct task, which is for manipulating it locally.
> 	   (feedbacked by Nadav)
> 	9. Replace a lot of simple functions to inline functions placed
> 	   in a header, mm/internal.h. (feedbacked by Nadav)
> 	10. Add additional sufficient comments. (feedbacked by Nadav)
> 	11. Remove a lot of wrapper functions. (feedbacked by Nadav)
>
> Changes from RFC v2:
>
> 	1. Remove additional occupation in struct page. To do that,
> 	   unioned with lru field for migrc's list and added a page
> 	   flag. I know page flag is a thing that we don't like to add
> 	   but no choice because migrc should distinguish folios under
> 	   migrc's control from others. Instead, I force migrc to be
> 	   used only on 64 bit system to mitigate you guys from getting
> 	   angry.
> 	2. Remove meaningless internal object allocator that I
> 	   introduced to minimize impact onto the system. However, a ton
> 	   of tests showed there was no difference.
> 	3. Stop migrc from working when the system is in high memory
> 	   pressure like about to perform direct reclaim. At the
> 	   condition where the swap mechanism is heavily used, I found
> 	   the system suffered from regression without this control.
> 	4. Exclude folios that pte_dirty() == true from migrc's interest
> 	   so that migrc can work simpler.
> 	5. Combine several patches that work tightly coupled to one.
> 	6. Add sufficient comments for better review.
> 	7. Manage migrc's request in per-node manner (from globally).
> 	8. Add tlb miss improvement in commit message.
> 	9. Test with more CPUs(4 -> 16) to see bigger improvement.
>
> Changes from RFC:
>
> 	1. Fix a bug triggered when a destination folio at the previous
> 	   migration becomes a source folio at the next migration,
> 	   before the folio gets handled properly so that the folio can
> 	   play with another migration. There was inconsistency in the
> 	   folio's state. Fixed it.
> 	2. Split the patch set into more pieces so that the folks can
> 	   review better. (Feedbacked by Nadav Amit)
> 	3. Fix a wrong usage of barrier e.g. smp_mb__after_atomic().
> 	   (Feedbacked by Nadav Amit)
> 	4. Tried to add sufficient comments to explain the patch set
> 	   better. (Feedbacked by Nadav Amit)
>
> Byungchul Park (12):
>   x86/tlb: add APIs manipulating tlb batch's arch data
>   arm64: tlbflush: add APIs manipulating tlb batch's arch data
>   riscv, tlb: add APIs manipulating tlb batch's arch data
>   x86/tlb, riscv/tlb, mm/rmap: separate arch_tlbbatch_clear() out of
>     arch_tlbbatch_flush()
>   mm: buddy: make room for a new variable, ugen, in struct page
>   mm: add folio_put_ugen() to deliver unmap generation number to pcp or
>     buddy
>   mm: add a parameter, unmap generation number, to free_unref_folios()
>   mm/rmap: recognize read-only tlb entries during batched tlb flush
>   mm: implement LUF(Lazy Unmap Flush) defering tlb flush when folios get
>     unmapped
>   mm: separate move/undo parts from migrate_pages_batch()
>   mm, migrate: apply luf mechanism to unmapping during migration
>   mm, vmscan: apply luf mechanism to unmapping during folio reclaim
>
>  arch/arm64/include/asm/tlbflush.h |  18 ++
>  arch/riscv/include/asm/tlbflush.h |  21 ++
>  arch/riscv/mm/tlbflush.c          |   1 -
>  arch/x86/include/asm/tlbflush.h   |  18 ++
>  arch/x86/mm/tlb.c                 |   2 -
>  include/linux/mm.h                |  22 ++
>  include/linux/mm_types.h          |  40 +++-
>  include/linux/rmap.h              |   7 +-
>  include/linux/sched.h             |  11 +
>  mm/compaction.c                   |  10 +
>  mm/internal.h                     | 115 +++++++++-
>  mm/memory.c                       |   8 +
>  mm/migrate.c                      | 184 ++++++++++------
>  mm/mmap.c                         |   8 +
>  mm/page_alloc.c                   | 157 +++++++++++---
>  mm/page_isolation.c               |   6 +
>  mm/page_reporting.c               |  10 +
>  mm/rmap.c                         | 345 +++++++++++++++++++++++++++++-
>  mm/swap.c                         |  18 +-
>  mm/vmscan.c                       |  29 ++-
>  20 files changed, 904 insertions(+), 126 deletions(-)
>
>
> base-commit: f52bcd4a9f6058704a6f6b6b50418f579defd4fe

--
Best Regards,
Huang, Ying


^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [PATCH v10 00/12] LUF(Lazy Unmap Flush) reducing tlb numbers over 90%
  2024-05-10  6:51 [PATCH v10 00/12] LUF(Lazy Unmap Flush) reducing tlb numbers over 90% Byungchul Park
                   ` (12 preceding siblings ...)
  2024-05-11  6:54 ` [PATCH v10 00/12] LUF(Lazy Unmap Flush) reducing tlb numbers over 90% Huang, Ying
@ 2024-05-11  7:15 ` Huang, Ying
  2024-05-13  1:44   ` Byungchul Park
  2024-05-24 17:16 ` Dave Hansen
  2024-05-28  8:41 ` David Hildenbrand
  15 siblings, 1 reply; 49+ messages in thread
From: Huang, Ying @ 2024-05-11  7:15 UTC (permalink / raw)
  To: Byungchul Park
  Cc: linux-kernel, linux-mm, kernel_team, akpm, vernhao, mgorman,
	hughd, willy, david, peterz, luto, tglx, mingo, bp, dave.hansen,
	rjgolo

Byungchul Park <byungchul@sk.com> writes:

> Hi everyone,
>
> While I'm working with a tiered memory system e.g. CXL memory, I have
> been facing migration overhead esp. tlb shootdown on promotion or
> demotion between different tiers.  Yeah..  most tlb shootdowns on
> migration through hinting fault can be avoided thanks to Huang Ying's
> work, commit 4d4b6d66db ("mm,unmap: avoid flushing tlb in batch if PTE
> is inaccessible").  See the following link for more information:
>
> https://lore.kernel.org/lkml/20231115025755.GA29979@system.software.com/

And I'm still interested in the performance impact of commit
7e12beb8ca2a ("migrate_pages: batch flushing TLB").  In the email above,
you said that v6.5-rc5 with 7e12beb8ca2a reverted performs better than
v6.5-rc5.  Can you provide more details?  For example, the number of TLB
flushing IPIs for the two kernels?

I should have followed up on the above email.  Sorry about that.  Anyway,
we should try to fix the issue with that commit too.
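
The IPI counts quoted later in this thread come from the TLB row of
/proc/interrupts.  A minimal userspace helper along the following lines,
assuming only that the row is labelled "TLB:" and holds one per-CPU
count, totals that row so before/after runs can be compared:

/* Sum the per-CPU TLB shootdown counts from /proc/interrupts. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
	FILE *f = fopen("/proc/interrupts", "r");
	char line[8192];
	unsigned long long total = 0;

	if (!f) {
		perror("/proc/interrupts");
		return 1;
	}
	while (fgets(line, sizeof(line), f)) {
		char *p = line;

		while (*p == ' ' || *p == '\t')
			p++;
		if (strncmp(p, "TLB:", 4))
			continue;
		p += 4;
		for (;;) {
			char *end;
			unsigned long long v = strtoull(p, &end, 10);

			if (end == p)
				break;	/* reached the "TLB shootdowns" text */
			total += v;
			p = end;
		}
		break;
	}
	fclose(f);
	printf("TLB shootdown IPIs (sum over CPUs): %llu\n", total);
	return 0;
}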

--
Best Regards,
Huang, Ying

[snip]


^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [PATCH v10 00/12] LUF(Lazy Unmap Flush) reducing tlb numbers over 90%
  2024-05-11  6:54 ` [PATCH v10 00/12] LUF(Lazy Unmap Flush) reducing tlb numbers over 90% Huang, Ying
@ 2024-05-13  1:41   ` Byungchul Park
  0 siblings, 0 replies; 49+ messages in thread
From: Byungchul Park @ 2024-05-13  1:41 UTC (permalink / raw)
  To: Huang, Ying
  Cc: linux-kernel, linux-mm, kernel_team, akpm, vernhao, mgorman,
	hughd, willy, david, peterz, luto, tglx, mingo, bp, dave.hansen,
	rjgolo

On Sat, May 11, 2024 at 02:54:51PM +0800, Huang, Ying wrote:
> Byungchul Park <byungchul@sk.com> writes:
> 
> > Hi everyone,
> >
> > While I'm working with a tiered memory system e.g. CXL memory, I have
> > been facing migration overhead esp. tlb shootdown on promotion or
> > demotion between different tiers.  Yeah..  most tlb shootdowns on
> > migration through hinting fault can be avoided thanks to Huang Ying's
> > work, commit 4d4b6d66db ("mm,unmap: avoid flushing tlb in batch if PTE
> > is inaccessible").  See the following link for more information:
> >
> > https://lore.kernel.org/lkml/20231115025755.GA29979@system.software.com/
> >
> > However, it's only for migration through hinting fault.  I thought it'd
> > be much better if we have a general mechanism to reduce all the tlb
> > numbers that we can apply to any unmap code, that we normally believe
> > tlb flush should be followed.
> >
> > I'm suggesting a new mechanism, LUF(Lazy Unmap Flush), defers tlb flush
> > until folios that have been unmapped and freed, eventually get allocated
> > again.  It's safe for folios that had been mapped read-only and were
> > unmapped, since the contents of the folios don't change while staying in
> > pcp or buddy so we can still read the data through the stale tlb entries.
> >
> > tlb flush can be defered when folios get unmapped as long as it
> > guarantees to perform tlb flush needed, before the folios actually
> > become used, of course, only if all the corresponding ptes don't have
> > write permission.  Otherwise, the system will get messed up.
> >
> > To achieve that:
> >
> >    1. For the folios that map only to non-writable tlb entries, prevent
> >       tlb flush during unmapping but perform it just before the folios
> >       actually become used, out of buddy or pcp.
> >
> >    2. When any non-writable ptes change to writable e.g. through fault
> >       handler, give up luf mechanism and perform tlb flush required
> >       right away.
> >
> >    3. When a writable mapping is created e.g. through mmap(), give up
> >       luf mechanism and perform tlb flush required right away.
> >
> > No matter what type of workload is used for performance evaluation, the
> > result would be positive thanks to the unconditional reduction of tlb
> > flushes, tlb misses and interrupts.
> 
> Are there any downsides of the optimization?  Will it cause regression
> for workloads with almost no read-only mappings?  Will it cause

IMHO, no.  LUF does almost nothing for writably mapped folios.

> regression for page allocation?

A TLB flush might be added in prep_new_page() if one is pending; however,
that flush would have had to be performed anyway, so it's not additional
overhead.

The TLB flush can even be skipped when the batched flush work has already
covered it.
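
A tiny userspace C model of that argument, under the assumption (mine,
the thread doesn't spell it out) that each freed folio carries the unmap
generation number (ugen) of its deferred flush and that the allocator
remembers the newest generation a batched flush has already covered:

/* Hypothetical model of the allocation-time check: a per-folio ugen is
 * compared against the newest generation already covered by a batched
 * flush, so the prep_new_page() path only flushes when still necessary. */
#include <stdio.h>
#include <stdbool.h>

static unsigned short luf_flushed_upto;	/* newest generation already flushed */

static bool ugen_before(unsigned short a, unsigned short b)
{
	return (short)(a - b) <= 0;	/* wrap-safe "a is not newer than b" */
}

/* Called when a folio with a pending deferred flush leaves pcp/buddy. */
static void prep_new_page_model(unsigned short folio_ugen)
{
	if (!folio_ugen || ugen_before(folio_ugen, luf_flushed_upto)) {
		puts("no flush needed: already covered by a batched flush");
		return;
	}
	printf("flush TLB for generation %u\n", (unsigned)folio_ugen);
	luf_flushed_upto = folio_ugen;
}

int main(void)
{
	luf_flushed_upto = 5;	/* a batched flush already covered gen <= 5 */
	prep_new_page_model(3);	/* skipped */
	prep_new_page_model(7);	/* flushed, but it was owed anyway */
	prep_new_page_model(7);	/* skipped on the next allocation */
	return 0;
}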

> > For the test, I picked up one of
> > the most popular and heavy workload, llama.cpp that is a
> > LLM(Large Language Model) inference engine.
> 
> IIUC, llama.cpp is a workload with huge read-only mapping.

Right.  LUF works on read-only mappings, so the more read-only mappings
are used, the better LUF works.  Fortunately, workloads with huge
read-only mappings, which are typical for lightweight inference engines,
are quite popular these days in the era of LLMs.

	Byungchul

> > The result would depend on memory latency and how often reclaim runs,
> > which implies tlb miss overhead and how many times unmapping happens.
> > In my system, the result shows:
> >
> >    1. tlb flushes are reduced about 95%.
> >    2. tlb misses(itlb) are reduced about 80%.
> >    3. tlb misses(dtlb store) are reduced about 57%.
> >    4. tlb misses(dtlb load) are reduced about 24%.
> >    5. tlb shootdown interrupts are reduced about 95%.
> >    6. The test program runtime is reduced about 5%.
> >
> > The test environment and the result is like:
> >
> >    Machine: bare metal, x86_64, Intel(R) Xeon(R) Gold 6430
> >    CPU: 1 socket 64 core with hyper thread on
> >    Numa: 2 nodes (64 CPUs DRAM 42GB, no CPUs CXL expander 98GB)
> >    Config: swap off, numa balancing tiering on, demotion enabled
> >
> >    The test set:
> >
> >       llama.cpp/main -m $(70G_model1) -p "who are you?" -s 1 -t 15 -n 20 &
> >       llama.cpp/main -m $(70G_model2) -p "who are you?" -s 1 -t 15 -n 20 &
> >       llama.cpp/main -m $(70G_model3) -p "who are you?" -s 1 -t 15 -n 20 &
> >       wait
> >
> >       where -t: nr of threads, -s: seed used to make the runtime stable,
> >       -n: nr of tokens that determines the runtime, -p: prompt to ask,
> >       -m: LLM model to use.
> >
> >    Run the test set 10 times successively with caches dropped every run
> >    via 'echo 3 > /proc/sys/vm/drop_caches'.  Each inference prints its
> >    runtime at the end of each.
> >
> >    1. Runtime from the output of llama.cpp:
> >
> >    BEFORE
> >    ------
> >    llama_print_timings:       total time = 1002461.95 ms /    24 tokens
> >    llama_print_timings:       total time = 1044978.38 ms /    24 tokens
> >    llama_print_timings:       total time = 1000653.09 ms /    24 tokens
> >    llama_print_timings:       total time = 1047104.80 ms /    24 tokens
> >    llama_print_timings:       total time = 1069430.36 ms /    24 tokens
> >    llama_print_timings:       total time = 1068201.16 ms /    24 tokens
> >    llama_print_timings:       total time = 1078092.59 ms /    24 tokens
> >    llama_print_timings:       total time = 1073200.45 ms /    24 tokens
> >    llama_print_timings:       total time = 1067136.00 ms /    24 tokens
> >    llama_print_timings:       total time = 1076442.56 ms /    24 tokens
> >    llama_print_timings:       total time = 1004142.64 ms /    24 tokens
> >    llama_print_timings:       total time = 1042942.65 ms /    24 tokens
> >    llama_print_timings:       total time =  999933.76 ms /    24 tokens
> >    llama_print_timings:       total time = 1046548.83 ms /    24 tokens
> >    llama_print_timings:       total time = 1068671.48 ms /    24 tokens
> >    llama_print_timings:       total time = 1068285.76 ms /    24 tokens
> >    llama_print_timings:       total time = 1077789.63 ms /    24 tokens
> >    llama_print_timings:       total time = 1071558.93 ms /    24 tokens
> >    llama_print_timings:       total time = 1066181.55 ms /    24 tokens
> >    llama_print_timings:       total time = 1076767.53 ms /    24 tokens
> >    llama_print_timings:       total time = 1004065.63 ms /    24 tokens
> >    llama_print_timings:       total time = 1044522.13 ms /    24 tokens
> >    llama_print_timings:       total time =  999725.33 ms /    24 tokens
> >    llama_print_timings:       total time = 1047510.77 ms /    24 tokens
> >    llama_print_timings:       total time = 1068010.27 ms /    24 tokens
> >    llama_print_timings:       total time = 1068999.31 ms /    24 tokens
> >    llama_print_timings:       total time = 1077648.05 ms /    24 tokens
> >    llama_print_timings:       total time = 1071378.96 ms /    24 tokens
> >    llama_print_timings:       total time = 1066326.32 ms /    24 tokens
> >    llama_print_timings:       total time = 1077088.92 ms /    24 tokens
> >
> >    AFTER
> >    -----
> >    llama_print_timings:       total time =  988522.03 ms /    24 tokens
> >    llama_print_timings:       total time =  997204.52 ms /    24 tokens
> >    llama_print_timings:       total time =  996605.86 ms /    24 tokens
> >    llama_print_timings:       total time =  991985.50 ms /    24 tokens
> >    llama_print_timings:       total time = 1035143.31 ms /    24 tokens
> >    llama_print_timings:       total time =  993660.18 ms /    24 tokens
> >    llama_print_timings:       total time =  983082.14 ms /    24 tokens
> >    llama_print_timings:       total time =  990431.36 ms /    24 tokens
> >    llama_print_timings:       total time =  992707.09 ms /    24 tokens
> >    llama_print_timings:       total time =  992673.27 ms /    24 tokens
> >    llama_print_timings:       total time =  989285.43 ms /    24 tokens
> >    llama_print_timings:       total time =  996710.06 ms /    24 tokens
> >    llama_print_timings:       total time =  996534.64 ms /    24 tokens
> >    llama_print_timings:       total time =  991344.17 ms /    24 tokens
> >    llama_print_timings:       total time = 1035210.84 ms /    24 tokens
> >    llama_print_timings:       total time =  994714.13 ms /    24 tokens
> >    llama_print_timings:       total time =  984184.15 ms /    24 tokens
> >    llama_print_timings:       total time =  990909.45 ms /    24 tokens
> >    llama_print_timings:       total time =  991881.48 ms /    24 tokens
> >    llama_print_timings:       total time =  993918.03 ms /    24 tokens
> >    llama_print_timings:       total time =  990061.34 ms /    24 tokens
> >    llama_print_timings:       total time =  998076.69 ms /    24 tokens
> >    llama_print_timings:       total time =  997082.59 ms /    24 tokens
> >    llama_print_timings:       total time =  990677.58 ms /    24 tokens
> >    llama_print_timings:       total time = 1036054.94 ms /    24 tokens
> >    llama_print_timings:       total time =  994125.93 ms /    24 tokens
> >    llama_print_timings:       total time =  982467.01 ms /    24 tokens
> >    llama_print_timings:       total time =  990191.60 ms /    24 tokens
> >    llama_print_timings:       total time =  993319.24 ms /    24 tokens
> >    llama_print_timings:       total time =  992540.57 ms /    24 tokens
> >
> >    2. tlb shootdowns from 'cat /proc/interrupts':
> >
> >    BEFORE
> >    ------
> >    TLB:
> >    125553646  141418810  161932620  176853972  186655697  190399283
> >    192143823  196414038  192872439  193313658  193395617  192521416
> >    190788161  195067598  198016061  193607347  194293972  190786732
> >    191545637  194856822  191801931  189634535  190399803  196365922
> >    195268398  190115840  188050050  193194908  195317617  190820190
> >    190164820  185556071  226797214  229592631  216112464  209909495
> >    205575979  205950252  204948111  197999795  198892232  205287952
> >    199344631  195015158  195869844  198858745  195692876  200961904
> >    203463252  205921722  199850838  206145986  199613202  199961345
> >    200129577  203020521  207873649  203697671  197093386  204243803
> >    205993323  200934664  204193128  194435376  TLB shootdowns
> >
> >    AFTER
> >    -----
> >    TLB:
> >      5648092    6610142    7032849    7882308    8088518    8352310
> >      8656536    8705136    8647426    8905583    8985408    8704522
> >      8884344    9026261    8929974    8869066    8877575    8810096
> >      8770984    8754503    8801694    8865925    8787524    8656432
> >      8755912    8682034    8773935    8832925    8797997    8515777
> >      8481240    8891258   10595243   10285973    9756935    9573681
> >      9398968    9069244    9242984    8899009    9310690    9029095
> >      9069758    9105825    9092703    9270202    9460287    9258546
> >      9180415    9232723    9270611    9175020    9490420    9360316
> >      9420818    9057663    9525631    9310152    9152242    8654483
> >      9181804    9050847    8919916    8883856  TLB shootdowns
> >
> >    3. tlb numbers from 'perf stat' per test set:
> >
> >    BEFORE
> >    ------
> >    3163679332	dTLB-load-misses
> >    2017751856	dTLB-store-misses
> >    327092903	iTLB-load-misses
> >    1357543886	tlb:tlb_flush
> >
> >    AFTER
> >    -----
> >    2394694609	dTLB-load-misses
> >    861144167	dTLB-store-misses
> >    64055579	iTLB-load-misses
> >    69175002	tlb:tlb_flush
> >
> > ---
> >
> > Changes from v9:
> >
> > 	1. Expand the candidate to apply this mechanism:
> >
> > 	   BEFORE - The souce folios at any type of migration.
> > 	   AFTER  - Any folios that have been unmapped and freed.
> >
> > 	2. Change the workload for test:
> >
> > 	   BEFORE - XSBench
> > 	   AFTER  - llama.cpp (one of the most popluar real workload)
> >
> > 	3. Change the test environment:
> >
> > 	   BEFORE - qemu machine, too small DRAM(1GB), large remote mem
> > 	   AFTER  - bare metal, real CXL memory, practical memory size
> >
> > 	4. Rename the mechanism from MIGRC(Migration Read Copy) to
> > 	   LUF(Lazy Unmap Flush) to reflect the current version of the
> > 	   mechanism can be applied not only to unmap during migration
> > 	   but any unmap code e.g. unmap in shrink_folio_list().
> >
> > 	5. Fix build error for riscv. (feedbacked by kernel test bot)
> >
> > 	6. Supplement commit messages to describe what this mechanism is
> > 	   for, especially in the patches for arch code. (feedbacked by
> > 	   Thomas Gleixner)
> >
> > 	7. Clean up some trivial things.
> >
> > Changes from v8:
> >
> > 	1. Rebase on akpm/mm.git mm-unstable as of April 18, 2024.
> > 	2. Supplement comments and commit message.
> > 	3. Change the candidate to apply migrc mechanism:
> >
> > 	   BEFORE - The source folios at demotion and promotion.
> > 	   AFTER  - The souce folios at any type of migration.
> >
> > 	4. Change how migrc mechanism works:
> >
> > 	   BEFORE - Reduce tlb flushes by deferring folio_free() for
> > 	            source folios during demotion and promotion.
> > 	   AFTER  - Reduce tlb flushes by deferring tlb flush until they
> > 	            actually become used, out of pcp or buddy. The
> > 		    current version of migrc does *not* defer calling
> > 	            folio_free() but let it go as it is as the same as
> > 		    vanilla kernel, with the folios marked kind of 'need
> > 		    to tlb flush'. And then handle the flush when the
> > 		    page exits from pcp or buddy so as to prevent
> > 		    changing vm stats e.g. free pages.
> >
> > Changes from v7:
> >
> > 	1. Rewrite cover letter to explain what 'migrc' mechasism is.
> > 	   (feedbacked by Andrew Morton)
> > 	2. Supplement the commit message of a patch 'mm: Add APIs to
> > 	   free a folio directly to the buddy bypassing pcp'.
> > 	   (feedbacked by Andrew Morton)
> >
> > Changes from v6:
> >
> > 	1. Fix build errors in case of
> > 	   CONFIG_ARCH_WANT_BATCHED_UNMAP_tlb_FLUSH disabled by moving
> > 	   migrc_flush_{start,end}() calls from arch code to
> > 	   try_to_unmap_flush() in mm/rmap.c.
> >
> > Changes from v5:
> >
> > 	1. Fix build errors in case of CONFIG_MIGRATION disabled or
> > 	   CONFIG_HWPOISON_INJECT moduled. (feedbacked by kernel test
> > 	   bot and Raymond Jay Golo)
> > 	2. Organize migrc code with two kconfigs, CONFIG_MIGRATION and
> > 	   CONFIG_ARCH_WANT_BATCHED_UNMAP_tlb_FLUSH.
> >
> > Changes from v4:
> >
> > 	1. Rebase on v6.7.
> > 	2. Fix build errors in arm64 that is doing nothing for tlb flush
> > 	   but has CONFIG_ARCH_WANT_BATCHED_UNMAP_tlb_FLUSH. (reported
> > 	   by kernel test robot)
> > 	3. Don't use any page flag. So the system would give up migrc
> > 	   mechanism more often but it's okay. The final improvement is
> > 	   good enough.
> > 	4. Instead, optimize full tlb flush(arch_tlbbatch_flush()) by
> > 	   avoiding redundant CPUs from tlb flush.
> >
> > Changes from v3:
> >
> > 	1. Don't use the kconfig, CONFIG_MIGRC, and remove sysctl knob,
> > 	   migrc_enable. (feedbacked by Nadav)
> > 	2. Remove the optimization skipping CPUs that have already
> > 	   performed tlb flushes needed by any reason when performing
> > 	   tlb flushes by migrc because I can't tell the performance
> > 	   difference between w/ the optimization and w/o that.
> > 	   (feedbacked by Nadav)
> > 	3. Minimize arch-specific code. While at it, move all the migrc
> >            declarations and inline functions from include/linux/mm.h to
> >            mm/internal.h (feedbacked by Dave Hansen, Nadav)
> > 	4. Separate a part making migrc paused when the system is in
> > 	   high memory pressure to another patch. (feedbacked by Nadav)
> > 	5. Rename:
> > 	      a. arch_tlbbatch_clean() to arch_tlbbatch_clear(),
> > 	      b. tlb_ubc_nowr to tlb_ubc_ro,
> > 	      c. migrc_try_flush_free_folios() to migrc_flush_free_folios(),
> > 	      d. migrc_stop to migrc_pause.
> > 	   (feedbacked by Nadav)
> > 	6. Use ->lru list_head instead of introducing a new llist_head.
> > 	   (feedbacked by Nadav)
> > 	7. Use non-atomic operations of page-flag when it's safe.
> > 	   (feedbacked by Nadav)
> > 	8. Use stack instead of keeping a pointer of 'struct migrc_req'
> > 	   in struct task, which is for manipulating it locally.
> > 	   (feedbacked by Nadav)
> > 	9. Replace a lot of simple functions to inline functions placed
> > 	   in a header, mm/internal.h. (feedbacked by Nadav)
> > 	10. Add additional sufficient comments. (feedbacked by Nadav)
> > 	11. Remove a lot of wrapper functions. (feedbacked by Nadav)
> >
> > Changes from RFC v2:
> >
> > 	1. Remove additional occupation in struct page. To do that,
> > 	   unioned with lru field for migrc's list and added a page
> > 	   flag. I know page flag is a thing that we don't like to add
> > 	   but no choice because migrc should distinguish folios under
> > 	   migrc's control from others. Instead, I force migrc to be
> > 	   used only on 64 bit system to mitigate you guys from getting
> > 	   angry.
> > 	2. Remove meaningless internal object allocator that I
> > 	   introduced to minimize impact onto the system. However, a ton
> > 	   of tests showed there was no difference.
> > 	3. Stop migrc from working when the system is in high memory
> > 	   pressure like about to perform direct reclaim. At the
> > 	   condition where the swap mechanism is heavily used, I found
> > 	   the system suffered from regression without this control.
> > 	4. Exclude folios that pte_dirty() == true from migrc's interest
> > 	   so that migrc can work simpler.
> > 	5. Combine several patches that work tightly coupled to one.
> > 	6. Add sufficient comments for better review.
> > 	7. Manage migrc's request in per-node manner (from globally).
> > 	8. Add tlb miss improvement in commit message.
> > 	9. Test with more CPUs(4 -> 16) to see bigger improvement.
> >
> > Changes from RFC:
> >
> > 	1. Fix a bug triggered when a destination folio at the previous
> > 	   migration becomes a source folio at the next migration,
> > 	   before the folio gets handled properly so that the folio can
> > 	   play with another migration. There was inconsistency in the
> > 	   folio's state. Fixed it.
> > 	2. Split the patch set into more pieces so that the folks can
> > 	   review better. (Feedbacked by Nadav Amit)
> > 	3. Fix a wrong usage of barrier e.g. smp_mb__after_atomic().
> > 	   (Feedbacked by Nadav Amit)
> > 	4. Tried to add sufficient comments to explain the patch set
> > 	   better. (Feedbacked by Nadav Amit)
> >
> > Byungchul Park (12):
> >   x86/tlb: add APIs manipulating tlb batch's arch data
> >   arm64: tlbflush: add APIs manipulating tlb batch's arch data
> >   riscv, tlb: add APIs manipulating tlb batch's arch data
> >   x86/tlb, riscv/tlb, mm/rmap: separate arch_tlbbatch_clear() out of
> >     arch_tlbbatch_flush()
> >   mm: buddy: make room for a new variable, ugen, in struct page
> >   mm: add folio_put_ugen() to deliver unmap generation number to pcp or
> >     buddy
> >   mm: add a parameter, unmap generation number, to free_unref_folios()
> >   mm/rmap: recognize read-only tlb entries during batched tlb flush
> >   mm: implement LUF(Lazy Unmap Flush) defering tlb flush when folios get
> >     unmapped
> >   mm: separate move/undo parts from migrate_pages_batch()
> >   mm, migrate: apply luf mechanism to unmapping during migration
> >   mm, vmscan: apply luf mechanism to unmapping during folio reclaim
> >
> >  arch/arm64/include/asm/tlbflush.h |  18 ++
> >  arch/riscv/include/asm/tlbflush.h |  21 ++
> >  arch/riscv/mm/tlbflush.c          |   1 -
> >  arch/x86/include/asm/tlbflush.h   |  18 ++
> >  arch/x86/mm/tlb.c                 |   2 -
> >  include/linux/mm.h                |  22 ++
> >  include/linux/mm_types.h          |  40 +++-
> >  include/linux/rmap.h              |   7 +-
> >  include/linux/sched.h             |  11 +
> >  mm/compaction.c                   |  10 +
> >  mm/internal.h                     | 115 +++++++++-
> >  mm/memory.c                       |   8 +
> >  mm/migrate.c                      | 184 ++++++++++------
> >  mm/mmap.c                         |   8 +
> >  mm/page_alloc.c                   | 157 +++++++++++---
> >  mm/page_isolation.c               |   6 +
> >  mm/page_reporting.c               |  10 +
> >  mm/rmap.c                         | 345 +++++++++++++++++++++++++++++-
> >  mm/swap.c                         |  18 +-
> >  mm/vmscan.c                       |  29 ++-
> >  20 files changed, 904 insertions(+), 126 deletions(-)
> >
> >
> > base-commit: f52bcd4a9f6058704a6f6b6b50418f579defd4fe
> 
> --
> Best Regards,
> Huang, Ying


^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [PATCH v10 00/12] LUF(Lazy Unmap Flush) reducing tlb numbers over 90%
  2024-05-11  7:15 ` Huang, Ying
@ 2024-05-13  1:44   ` Byungchul Park
  2024-05-22  2:16     ` Byungchul Park
  0 siblings, 1 reply; 49+ messages in thread
From: Byungchul Park @ 2024-05-13  1:44 UTC (permalink / raw)
  To: Huang, Ying
  Cc: linux-kernel, linux-mm, kernel_team, akpm, vernhao, mgorman,
	hughd, willy, david, peterz, luto, tglx, mingo, bp, dave.hansen,
	rjgolo

On Sat, May 11, 2024 at 03:15:01PM +0800, Huang, Ying wrote:
> Byungchul Park <byungchul@sk.com> writes:
> 
> > Hi everyone,
> >
> > While I'm working with a tiered memory system e.g. CXL memory, I have
> > been facing migration overhead esp. tlb shootdown on promotion or
> > demotion between different tiers.  Yeah..  most tlb shootdowns on
> > migration through hinting fault can be avoided thanks to Huang Ying's
> > work, commit 4d4b6d66db ("mm,unmap: avoid flushing tlb in batch if PTE
> > is inaccessible").  See the following link for more information:
> >
> > https://lore.kernel.org/lkml/20231115025755.GA29979@system.software.com/
> 
> And, I still have interest of the performance impact of commit
> 7e12beb8ca2a ("migrate_pages: batch flushing TLB").  In the email above,
> you said that the performance of v6.5-rc5 + 7e12beb8ca2a reverted has
> better performance than v6.5-rc5.  Can you provide more details?  For
> example, the number of TLB flushing IPI for two kernels?

Okay.  I will run the test and share the results, including what you
asked for, once I'm available to do the test.

	Byungchul

> I should have followed up the above email.  Sorry about that.  Anyway,
> we should try to fix issue of that commit too.
> 
> --
> Best Regards,
> Huang, Ying
> 
> [snip]


^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [PATCH v10 00/12] LUF(Lazy Unmap Flush) reducing tlb numbers over 90%
  2024-05-13  1:44   ` Byungchul Park
@ 2024-05-22  2:16     ` Byungchul Park
  2024-05-22  7:38       ` Huang, Ying
  0 siblings, 1 reply; 49+ messages in thread
From: Byungchul Park @ 2024-05-22  2:16 UTC (permalink / raw)
  To: Huang, Ying
  Cc: linux-kernel, linux-mm, kernel_team, akpm, vernhao, mgorman,
	hughd, willy, david, peterz, luto, tglx, mingo, bp, dave.hansen,
	rjgolo

On Mon, May 13, 2024 at 10:44:29AM +0900, Byungchul Park wrote:
> On Sat, May 11, 2024 at 03:15:01PM +0800, Huang, Ying wrote:
> > Byungchul Park <byungchul@sk.com> writes:
> > 
> > > Hi everyone,
> > >
> > > While I'm working with a tiered memory system e.g. CXL memory, I have
> > > been facing migration overhead esp. tlb shootdown on promotion or
> > > demotion between different tiers.  Yeah..  most tlb shootdowns on
> > > migration through hinting fault can be avoided thanks to Huang Ying's
> > > work, commit 4d4b6d66db ("mm,unmap: avoid flushing tlb in batch if PTE
> > > is inaccessible").  See the following link for more information:
> > >
> > > https://lore.kernel.org/lkml/20231115025755.GA29979@system.software.com/
> > 
> > And, I still have interest of the performance impact of commit
> > 7e12beb8ca2a ("migrate_pages: batch flushing TLB").  In the email above,
> > you said that the performance of v6.5-rc5 + 7e12beb8ca2a reverted has
> > better performance than v6.5-rc5.  Can you provide more details?  For
> > example, the number of TLB flushing IPI for two kernels?
> 
> Okay.  I will test and share the result with what you asked me now once
> I get available for the test.

I should admit that the test using qemu is quite unstable.  While using
qemu for the test, the kernel with 7e12beb8ca2a applied sometimes gave
better results and sometimes worse ones.  I should've used a bare metal
machine from the beginning.  Sorry for confusing you with the unstable
results.

Since I thought you were asking for a test in the same environment as in
the link above, I used qemu to reproduce a similar result, but changed
the number of threads for the test from 16 to 14 to reduce noise that
might be introduced by anything other than the intended test.

As expected, the stats are better with your work:

   ------------------------------------------
   v6.6-rc5 with 7e12beb8ca2a commit reverted
   ------------------------------------------

   1) from output of XSBench

   Threads:     14              
   Runtime:     1127.043 seconds
   Lookups:     1,700,000,000   
   Lookups/s:   1,508,371       

   2) from /proc/vmstat

   numa_hit 15580171                      
   numa_miss 1034233                      
   numa_foreign 1034233                   
   numa_interleave 773                    
   numa_local 7927442                     
   numa_other 8686962                     
   numa_pte_updates 24068923              
   numa_hint_faults 24061125              
   numa_hint_faults_local 0               
   numa_pages_migrated 7426480            
   pgmigrate_success 15407375             
   pgmigrate_fail 1849                    
   compact_migrate_scanned 4445414        
   compact_daemon_migrate_scanned 4445414 
   pgdemote_kswapd 7651061                
   pgdemote_direct 0                      
   nr_tlb_remote_flush 8080092            
   nr_tlb_remote_flush_received 109915713 
   nr_tlb_local_flush_all 53800           
   nr_tlb_local_flush_one 770466                                                   
   
   3) from /proc/interrupts

   TLB: 8022927    7840769     123588    7837008    7835967    7839837
   	7838332    7839886    7837610    7837221    7834524     407260
   	7430090    7835696    7839081    7712568    TLB shootdowns  
   
   4) from 'perf stat -a'

   222371217		itlb.itlb_flush      
   919832520		tlb_flush.dtlb_thread
   372223809		tlb_flush.stlb_any   
   120210808042		dTLB-load-misses     
   979352769		dTLB-store-misses    
   3650767665		iTLB-load-misses     

   -----------------------------------------
   v6.6-rc5 with 7e12beb8ca2a commit applied
   -----------------------------------------

   1) from output of XSBench

   Threads:     14
   Runtime:     1105.521 seconds
   Lookups:     1,700,000,000
   Lookups/s:   1,537,737

   2) from /proc/vmstat

   numa_hit 24148399
   numa_miss 797483
   numa_foreign 797483
   numa_interleave 772
   numa_local 12214575
   numa_other 12731307
   numa_pte_updates 24250278
   numa_hint_faults 24199756
   numa_hint_faults_local 0
   numa_pages_migrated 11476195
   pgmigrate_success 23634639
   pgmigrate_fail 1391
   compact_migrate_scanned 3760803
   compact_daemon_migrate_scanned 3760803
   pgdemote_kswapd 11932217
   pgdemote_direct 0
   nr_tlb_remote_flush 2151945
   nr_tlb_remote_flush_received 29672808
   nr_tlb_local_flush_all 124006
   nr_tlb_local_flush_one 741165
   
   3) from /proc/interrupts

   TLB: 2130784    2120142    2117571     844962    2071766     114675
   	2117258    2119596    2116816    1205446    2119176    2119209
   	2116792    2118763    2118773    2117762    TLB shootdowns

   4) from 'perf stat -a'

   60851902		itlb.itlb_flush
   334068491		tlb_flush.dtlb_thread
   223732916		tlb_flush.stlb_any
   120207083382		dTLB-load-misses
   446823059		dTLB-store-misses
   1926669373		iTLB-load-misses

---

	Byungchul


^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [PATCH v10 00/12] LUF(Lazy Unmap Flush) reducing tlb numbers over 90%
  2024-05-22  2:16     ` Byungchul Park
@ 2024-05-22  7:38       ` Huang, Ying
  2024-05-22 10:27         ` Byungchul Park
  0 siblings, 1 reply; 49+ messages in thread
From: Huang, Ying @ 2024-05-22  7:38 UTC (permalink / raw)
  To: Byungchul Park
  Cc: linux-kernel, linux-mm, kernel_team, akpm, vernhao, mgorman,
	hughd, willy, david, peterz, luto, tglx, mingo, bp, dave.hansen,
	rjgolo

Hi, Byungchul,

Byungchul Park <byungchul@sk.com> writes:

> On Mon, May 13, 2024 at 10:44:29AM +0900, Byungchul Park wrote:
>> On Sat, May 11, 2024 at 03:15:01PM +0800, Huang, Ying wrote:
>> > Byungchul Park <byungchul@sk.com> writes:
>> > 
>> > > Hi everyone,
>> > >
>> > > While I'm working with a tiered memory system e.g. CXL memory, I have
>> > > been facing migration overhead esp. tlb shootdown on promotion or
>> > > demotion between different tiers.  Yeah..  most tlb shootdowns on
>> > > migration through hinting fault can be avoided thanks to Huang Ying's
>> > > work, commit 4d4b6d66db ("mm,unmap: avoid flushing tlb in batch if PTE
>> > > is inaccessible").  See the following link for more information:
>> > >
>> > > https://lore.kernel.org/lkml/20231115025755.GA29979@system.software.com/
>> > 
>> > And, I still have interest of the performance impact of commit
>> > 7e12beb8ca2a ("migrate_pages: batch flushing TLB").  In the email above,
>> > you said that the performance of v6.5-rc5 + 7e12beb8ca2a reverted has
>> > better performance than v6.5-rc5.  Can you provide more details?  For
>> > example, the number of TLB flushing IPI for two kernels?
>> 
>> Okay.  I will test and share the result with what you asked me now once
>> I get available for the test.
>
> I should admit that the test using qemu is so unstable.  While using
> qemu for the test, kernel with 7e12beb8ca2a applied gave better results
> sometimes and worse ones sometimes.  I should've used a bare metal from
> the beginning.  Sorry for making you confused with the unstable result.
>
> Since I thought you asked me for the test with the same environment in
> the link above, I used qemu to reproduce the similar result but changed
> the number of threads for the test from 16 to 14 to get rid of noise
> that might be introduced by other than the intended test just in case.
>
> As expected, the stats are better with your work:
>
>    ------------------------------------------
>    v6.6-rc5 with 7e12beb8ca2a commit reverted
>    ------------------------------------------
>
>    1) from output of XSBench
>
>    Threads:     14              
>    Runtime:     1127.043 seconds
>    Lookups:     1,700,000,000   
>    Lookups/s:   1,508,371       
>
>    2) from /proc/vmstat
>
>    numa_hit 15580171                      
>    numa_miss 1034233                      
>    numa_foreign 1034233                   
>    numa_interleave 773                    
>    numa_local 7927442                     
>    numa_other 8686962                     
>    numa_pte_updates 24068923              
>    numa_hint_faults 24061125              
>    numa_hint_faults_local 0               
>    numa_pages_migrated 7426480            
>    pgmigrate_success 15407375             
>    pgmigrate_fail 1849                    
>    compact_migrate_scanned 4445414        
>    compact_daemon_migrate_scanned 4445414 
>    pgdemote_kswapd 7651061                
>    pgdemote_direct 0                      
>    nr_tlb_remote_flush 8080092            
>    nr_tlb_remote_flush_received 109915713 
>    nr_tlb_local_flush_all 53800           
>    nr_tlb_local_flush_one 770466                                                   
>    
>    3) from /proc/interrupts
>
>    TLB: 8022927    7840769     123588    7837008    7835967    7839837
>    	7838332    7839886    7837610    7837221    7834524     407260
>    	7430090    7835696    7839081    7712568    TLB shootdowns  
>    
>    4) from 'perf stat -a'
>
>    222371217		itlb.itlb_flush      
>    919832520		tlb_flush.dtlb_thread
>    372223809		tlb_flush.stlb_any   
>    120210808042		dTLB-load-misses     
>    979352769		dTLB-store-misses    
>    3650767665		iTLB-load-misses     
>
>    -----------------------------------------
>    v6.6-rc5 with 7e12beb8ca2a commit applied
>    -----------------------------------------
>
>    1) from output of XSBench
>
>    Threads:     14
>    Runtime:     1105.521 seconds
>    Lookups:     1,700,000,000
>    Lookups/s:   1,537,737
>
>    2) from /proc/vmstat
>
>    numa_hit 24148399
>    numa_miss 797483
>    numa_foreign 797483
>    numa_interleave 772
>    numa_local 12214575
>    numa_other 12731307
>    numa_pte_updates 24250278
>    numa_hint_faults 24199756
>    numa_hint_faults_local 0
>    numa_pages_migrated 11476195
>    pgmigrate_success 23634639
>    pgmigrate_fail 1391
>    compact_migrate_scanned 3760803
>    compact_daemon_migrate_scanned 3760803
>    pgdemote_kswapd 11932217
>    pgdemote_direct 0
>    nr_tlb_remote_flush 2151945
>    nr_tlb_remote_flush_received 29672808
>    nr_tlb_local_flush_all 124006
>    nr_tlb_local_flush_one 741165
>    
>    3) from /proc/interrupts
>
>    TLB: 2130784    2120142    2117571     844962    2071766     114675
>    	2117258    2119596    2116816    1205446    2119176    2119209
>    	2116792    2118763    2118773    2117762    TLB shootdowns
>
>    4) from 'perf stat -a'
>
>    60851902		itlb.itlb_flush
>    334068491		tlb_flush.dtlb_thread
>    223732916		tlb_flush.stlb_any
>    120207083382		dTLB-load-misses
>    446823059		dTLB-store-misses
>    1926669373		iTLB-load-misses
>

Thanks a lot for the test results!

From your test results, the TLB shootdown IPIs can be reduced effectively
with commit 7e12beb8ca2a, so the benchmark score improved a little.

And your changes will reduce the TLB shootdown IPIs further, right?  Do
you have the numbers?

--
Best Regards,
Huang, Ying


^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [PATCH v10 00/12] LUF(Lazy Unmap Flush) reducing tlb numbers over 90%
  2024-05-22  7:38       ` Huang, Ying
@ 2024-05-22 10:27         ` Byungchul Park
  2024-05-22 14:15           ` Byungchul Park
  0 siblings, 1 reply; 49+ messages in thread
From: Byungchul Park @ 2024-05-22 10:27 UTC (permalink / raw)
  To: Huang, Ying
  Cc: linux-kernel, linux-mm, kernel_team, akpm, vernhao, mgorman,
	hughd, willy, david, peterz, luto, tglx, mingo, bp, dave.hansen,
	rjgolo

On Wed, May 22, 2024 at 03:38:04PM +0800, Huang, Ying wrote:
> Hi, Byungchul,
> 
> Byungchul Park <byungchul@sk.com> writes:
> 
> > On Mon, May 13, 2024 at 10:44:29AM +0900, Byungchul Park wrote:
> >> On Sat, May 11, 2024 at 03:15:01PM +0800, Huang, Ying wrote:
> >> > Byungchul Park <byungchul@sk.com> writes:
> >> > 
> >> > > Hi everyone,
> >> > >
> >> > > While I'm working with a tiered memory system e.g. CXL memory, I have
> >> > > been facing migration overhead esp. tlb shootdown on promotion or
> >> > > demotion between different tiers.  Yeah..  most tlb shootdowns on
> >> > > migration through hinting fault can be avoided thanks to Huang Ying's
> >> > > work, commit 4d4b6d66db ("mm,unmap: avoid flushing tlb in batch if PTE
> >> > > is inaccessible").  See the following link for more information:
> >> > >
> >> > > https://lore.kernel.org/lkml/20231115025755.GA29979@system.software.com/
> >> > 
> >> > And, I still have interest of the performance impact of commit
> >> > 7e12beb8ca2a ("migrate_pages: batch flushing TLB").  In the email above,
> >> > you said that the performance of v6.5-rc5 + 7e12beb8ca2a reverted has
> >> > better performance than v6.5-rc5.  Can you provide more details?  For
> >> > example, the number of TLB flushing IPI for two kernels?
> >> 
> >> Okay.  I will test and share the result with what you asked me now once
> >> I get available for the test.
> >
> > I should admit that the test using qemu is so unstable.  While using
> > qemu for the test, kernel with 7e12beb8ca2a applied gave better results
> > sometimes and worse ones sometimes.  I should've used a bare metal from
> > the beginning.  Sorry for making you confused with the unstable result.
> >
> > Since I thought you asked me for the test with the same environment in
> > the link above, I used qemu to reproduce the similar result but changed
> > the number of threads for the test from 16 to 14 to get rid of noise
> > that might be introduced by other than the intended test just in case.
> >
> > As expected, the stats are better with your work:
> >
> >    ------------------------------------------
> >    v6.6-rc5 with 7e12beb8ca2a commit reverted
> >    ------------------------------------------
> >
> >    1) from output of XSBench
> >
> >    Threads:     14              
> >    Runtime:     1127.043 seconds
> >    Lookups:     1,700,000,000   
> >    Lookups/s:   1,508,371       
> >
> >    2) from /proc/vmstat
> >
> >    numa_hit 15580171                      
> >    numa_miss 1034233                      
> >    numa_foreign 1034233                   
> >    numa_interleave 773                    
> >    numa_local 7927442                     
> >    numa_other 8686962                     
> >    numa_pte_updates 24068923              
> >    numa_hint_faults 24061125              
> >    numa_hint_faults_local 0               
> >    numa_pages_migrated 7426480            
> >    pgmigrate_success 15407375             
> >    pgmigrate_fail 1849                    
> >    compact_migrate_scanned 4445414        
> >    compact_daemon_migrate_scanned 4445414 
> >    pgdemote_kswapd 7651061                
> >    pgdemote_direct 0                      
> >    nr_tlb_remote_flush 8080092            
> >    nr_tlb_remote_flush_received 109915713 
> >    nr_tlb_local_flush_all 53800           
> >    nr_tlb_local_flush_one 770466                                                   
> >    
> >    3) from /proc/interrupts
> >
> >    TLB: 8022927    7840769     123588    7837008    7835967    7839837
> >    	7838332    7839886    7837610    7837221    7834524     407260
> >    	7430090    7835696    7839081    7712568    TLB shootdowns  
> >    
> >    4) from 'perf stat -a'
> >
> >    222371217		itlb.itlb_flush      
> >    919832520		tlb_flush.dtlb_thread
> >    372223809		tlb_flush.stlb_any   
> >    120210808042		dTLB-load-misses     
> >    979352769		dTLB-store-misses    
> >    3650767665		iTLB-load-misses     
> >
> >    -----------------------------------------
> >    v6.6-rc5 with 7e12beb8ca2a commit applied
> >    -----------------------------------------
> >
> >    1) from output of XSBench
> >
> >    Threads:     14
> >    Runtime:     1105.521 seconds
> >    Lookups:     1,700,000,000
> >    Lookups/s:   1,537,737
> >
> >    2) from /proc/vmstat
> >
> >    numa_hit 24148399
> >    numa_miss 797483
> >    numa_foreign 797483
> >    numa_interleave 772
> >    numa_local 12214575
> >    numa_other 12731307
> >    numa_pte_updates 24250278
> >    numa_hint_faults 24199756
> >    numa_hint_faults_local 0
> >    numa_pages_migrated 11476195
> >    pgmigrate_success 23634639
> >    pgmigrate_fail 1391
> >    compact_migrate_scanned 3760803
> >    compact_daemon_migrate_scanned 3760803
> >    pgdemote_kswapd 11932217
> >    pgdemote_direct 0
> >    nr_tlb_remote_flush 2151945
> >    nr_tlb_remote_flush_received 29672808
> >    nr_tlb_local_flush_all 124006
> >    nr_tlb_local_flush_one 741165
> >    
> >    3) from /proc/interrupts
> >
> >    TLB: 2130784    2120142    2117571     844962    2071766     114675
> >    	2117258    2119596    2116816    1205446    2119176    2119209
> >    	2116792    2118763    2118773    2117762    TLB shootdowns
> >
> >    4) from 'perf stat -a'
> >
> >    60851902		itlb.itlb_flush
> >    334068491		tlb_flush.dtlb_thread
> >    223732916		tlb_flush.stlb_any
> >    120207083382		dTLB-load-misses
> >    446823059		dTLB-store-misses
> >    1926669373		iTLB-load-misses
> >
> 
> Thanks a lot for test results!
> 
> From your test results, the TLB shootdown IPI can be reduced effectively
> with commit 7e12beb8ca2a.  So that the benchmark score improved a
> little.
> 
> And, your changes will reduce the TLB shootdown IPI further, right?  Do

Yes, right. LUF(Lazy Unmap Flush) reduces TLB shootdown IPI further.

> you have the number?

You can find the numbers obtained with llama.cpp in this cover letter:

   https://lore.kernel.org/lkml/20240520021734.21527-1-byungchul@sk.com/

If you meant the numbers from the same test above, XSBench + qemu, I will
re-test against the mm-unstable branch of the mm tree and share the result
shortly.

	Byungchul


^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [PATCH v10 00/12] LUF(Lazy Unmap Flush) reducing tlb numbers over 90%
  2024-05-22 10:27         ` Byungchul Park
@ 2024-05-22 14:15           ` Byungchul Park
  0 siblings, 0 replies; 49+ messages in thread
From: Byungchul Park @ 2024-05-22 14:15 UTC (permalink / raw)
  To: Huang, Ying
  Cc: linux-kernel, linux-mm, kernel_team, akpm, vernhao, mgorman,
	hughd, willy, david, peterz, luto, tglx, mingo, bp, dave.hansen,
	rjgolo

On Wed, May 22, 2024 at 07:27:44PM +0900, Byungchul Park wrote:
> On Wed, May 22, 2024 at 03:38:04PM +0800, Huang, Ying wrote:
> > Hi, Byungchul,
> > 
> > Byungchul Park <byungchul@sk.com> writes:
> > 
> > > On Mon, May 13, 2024 at 10:44:29AM +0900, Byungchul Park wrote:
> > >> On Sat, May 11, 2024 at 03:15:01PM +0800, Huang, Ying wrote:
> > >> > Byungchul Park <byungchul@sk.com> writes:
> > >> > 
> > >> > > Hi everyone,
> > >> > >
> > >> > > While I'm working with a tiered memory system e.g. CXL memory, I have
> > >> > > been facing migration overhead esp. tlb shootdown on promotion or
> > >> > > demotion between different tiers.  Yeah..  most tlb shootdowns on
> > >> > > migration through hinting fault can be avoided thanks to Huang Ying's
> > >> > > work, commit 4d4b6d66db ("mm,unmap: avoid flushing tlb in batch if PTE
> > >> > > is inaccessible").  See the following link for more information:
> > >> > >
> > >> > > https://lore.kernel.org/lkml/20231115025755.GA29979@system.software.com/
> > >> > 
> > >> > And, I still have interest of the performance impact of commit
> > >> > 7e12beb8ca2a ("migrate_pages: batch flushing TLB").  In the email above,
> > >> > you said that the performance of v6.5-rc5 + 7e12beb8ca2a reverted has
> > >> > better performance than v6.5-rc5.  Can you provide more details?  For
> > >> > example, the number of TLB flushing IPI for two kernels?
> > >> 
> > >> Okay.  I will test and share the result with what you asked me now once
> > >> I get available for the test.
> > >
> > > I should admit that the test using qemu is so unstable.  While using
> > > qemu for the test, kernel with 7e12beb8ca2a applied gave better results
> > > sometimes and worse ones sometimes.  I should've used a bare metal from
> > > the beginning.  Sorry for making you confused with the unstable result.
> > >
> > > Since I thought you asked me for the test with the same environment in
> > > the link above, I used qemu to reproduce the similar result but changed
> > > the number of threads for the test from 16 to 14 to get rid of noise
> > > that might be introduced by other than the intended test just in case.
> > >
> > > As expected, the stats are better with your work:
> > >
> > >    ------------------------------------------
> > >    v6.6-rc5 with 7e12beb8ca2a commit reverted
> > >    ------------------------------------------
> > >
> > >    1) from output of XSBench
> > >
> > >    Threads:     14              
> > >    Runtime:     1127.043 seconds
> > >    Lookups:     1,700,000,000   
> > >    Lookups/s:   1,508,371       
> > >
> > >    2) from /proc/vmstat
> > >
> > >    numa_hit 15580171                      
> > >    numa_miss 1034233                      
> > >    numa_foreign 1034233                   
> > >    numa_interleave 773                    
> > >    numa_local 7927442                     
> > >    numa_other 8686962                     
> > >    numa_pte_updates 24068923              
> > >    numa_hint_faults 24061125              
> > >    numa_hint_faults_local 0               
> > >    numa_pages_migrated 7426480            
> > >    pgmigrate_success 15407375             
> > >    pgmigrate_fail 1849                    
> > >    compact_migrate_scanned 4445414        
> > >    compact_daemon_migrate_scanned 4445414 
> > >    pgdemote_kswapd 7651061                
> > >    pgdemote_direct 0                      
> > >    nr_tlb_remote_flush 8080092            
> > >    nr_tlb_remote_flush_received 109915713 
> > >    nr_tlb_local_flush_all 53800           
> > >    nr_tlb_local_flush_one 770466                                                   
> > >    
> > >    3) from /proc/interrupts
> > >
> > >    TLB: 8022927    7840769     123588    7837008    7835967    7839837
> > >    	7838332    7839886    7837610    7837221    7834524     407260
> > >    	7430090    7835696    7839081    7712568    TLB shootdowns  
> > >    
> > >    4) from 'perf stat -a'
> > >
> > >    222371217		itlb.itlb_flush      
> > >    919832520		tlb_flush.dtlb_thread
> > >    372223809		tlb_flush.stlb_any   
> > >    120210808042		dTLB-load-misses     
> > >    979352769		dTLB-store-misses    
> > >    3650767665		iTLB-load-misses     
> > >
> > >    -----------------------------------------
> > >    v6.6-rc5 with 7e12beb8ca2a commit applied
> > >    -----------------------------------------
> > >
> > >    1) from output of XSBench
> > >
> > >    Threads:     14
> > >    Runtime:     1105.521 seconds
> > >    Lookups:     1,700,000,000
> > >    Lookups/s:   1,537,737
> > >
> > >    2) from /proc/vmstat
> > >
> > >    numa_hit 24148399
> > >    numa_miss 797483
> > >    numa_foreign 797483
> > >    numa_interleave 772
> > >    numa_local 12214575
> > >    numa_other 12731307
> > >    numa_pte_updates 24250278
> > >    numa_hint_faults 24199756
> > >    numa_hint_faults_local 0
> > >    numa_pages_migrated 11476195
> > >    pgmigrate_success 23634639
> > >    pgmigrate_fail 1391
> > >    compact_migrate_scanned 3760803
> > >    compact_daemon_migrate_scanned 3760803
> > >    pgdemote_kswapd 11932217
> > >    pgdemote_direct 0
> > >    nr_tlb_remote_flush 2151945
> > >    nr_tlb_remote_flush_received 29672808
> > >    nr_tlb_local_flush_all 124006
> > >    nr_tlb_local_flush_one 741165
> > >    
> > >    3) from /proc/interrupts
> > >
> > >    TLB: 2130784    2120142    2117571     844962    2071766     114675
> > >    	2117258    2119596    2116816    1205446    2119176    2119209
> > >    	2116792    2118763    2118773    2117762    TLB shootdowns
> > >
> > >    4) from 'perf stat -a'
> > >
> > >    60851902		itlb.itlb_flush
> > >    334068491		tlb_flush.dtlb_thread
> > >    223732916		tlb_flush.stlb_any
> > >    120207083382		dTLB-load-misses
> > >    446823059		dTLB-store-misses
> > >    1926669373		iTLB-load-misses
> > >
> > 
> > Thanks a lot for test results!
> > 
> > From your test results, the TLB shootdown IPI can be reduced effectively
> > with commit 7e12beb8ca2a.  So that the benchmark score improved a
> > little.
> > 
> > And, your changes will reduce the TLB shootdown IPI further, right?  Do
> 
> Yes, right. LUF(Lazy Unmap Flush) reduces TLB shootdown IPI further.
> 
> > you have the number?
> 
> You can find the number obtained from llama.cpp in this cover letter:
> 
>    https://lore.kernel.org/lkml/20240520021734.21527-1-byungchul@sk.com/
> 
> If you meant the number from the same test above, XSBench + qemu, I will
> re-test with mm-unstable branch of mm tree and share the result shortly.

I reran the same test, but based on a recent mm-unstable branch of the mm
tree instead of v6.6-rc5, so the baseline numbers below differ from the
previous ones accordingly.

   ----------------------------------------------------
   mm-unstable branch with 7e12beb8ca2a commit reverted
   ----------------------------------------------------

   1) from output of XSBench

   Threads:     14
   Runtime:     1067.771 seconds
   Lookups:     1,700,000,000
   Lookups/s:   1,592,101

   2) from /proc/vmstat

   numa_hit 11502876
   numa_miss 1130877
   numa_foreign 1130877
   numa_interleave 115
   numa_local 5879006
   numa_other 6754747
   numa_pte_updates 19390661
   numa_hint_faults 19319467
   numa_hint_faults_local 0
   numa_pages_migrated 5472749
   pgmigrate_success 11593079
   pgmigrate_fail 549666
   compact_migrate_scanned 5408404
   compact_daemon_migrate_scanned 5408404
   pgdemote_kswapd 5610705
   pgdemote_direct 0
   nr_tlb_remote_flush 6200106
   nr_tlb_remote_flush_received 84362539
   nr_tlb_local_flush_all 39202
   nr_tlb_local_flush_one 760046

   3) from /proc/interrupts

   TLB: 3812782    3840646    4806989    5235846    5127512    5048603
	6012100    6022642    5088907    5212207    4076329    6014857
	6017060    6014964    6009362    6018368    TLB shootdowns

   4) from 'perf stat -a'

   180449546		itlb.itlb_flush
   768913454		tlb_flush.dtlb_thread
   304745973		tlb_flush.stlb_any
   119589742349		dTLB-load-misses
   826525376		dTLB-store-misses
   2950724801		iTLB-load-misses

   ---------------------------------------------------
   mm-unstable branch with 7e12beb8ca2a commit applied
   ---------------------------------------------------

   1) from output of XSBench

   Threads:     14
   Runtime:     1043.972 seconds
   Lookups:     1,700,000,000
   Lookups/s:   1,628,395

   2) from /proc/vmstat

   numa_hit 16865880
   numa_miss 1129958
   numa_foreign 1129958
   numa_interleave 115
   numa_local 8565072
   numa_other 9430766
   numa_pte_updates 19240583
   numa_hint_faults 19239948
   numa_hint_faults_local 0
   numa_pages_migrated 8159078
   pgmigrate_success 17000781
   pgmigrate_fail 1410437
   compact_migrate_scanned 5075605
   compact_daemon_migrate_scanned 5075605
   pgdemote_kswapd 8297460
   pgdemote_direct 0
   nr_tlb_remote_flush 1516807
   nr_tlb_remote_flush_received 20938785
   nr_tlb_local_flush_all 95801
   nr_tlb_local_flush_one 740597

   3) from /proc/interrupts

   TLB:  927080     567584     840684    1484285    1495859    1408641
	1496227    1493909    1359465    1227623    1265431    1496361
	1392337    1489451    1495799    1494700    TLB shootdowns

   4) from 'perf stat -a'

   43564429		itlb.itlb_flush
   272921880		tlb_flush.dtlb_thread
   175495467		tlb_flush.stlb_any
   119602211976		dTLB-load-misses
   355190881		dTLB-store-misses
   1539926469		iTLB-load-misses

   ---------------------------------------------------------
   mm-unstable branch with 7e12beb8ca2a commit applied + LUF
   ---------------------------------------------------------

   1) from output of XSBench

   Threads:     14
   Runtime:     1033.973 seconds
   Lookups:     1,700,000,000
   Lookups/s:   1,644,144

   2) from /proc/vmstat

   numa_hit 18617127
   numa_miss 1075467
   numa_foreign 1075467
   numa_interleave 115
   numa_local 9440134
   numa_other 10252460
   numa_pte_updates 19473883
   numa_hint_faults 19470143
   numa_hint_faults_local 0
   numa_pages_migrated 8978959
   pgmigrate_success 18675500
   pgmigrate_fail 1577460
   compact_migrate_scanned 5465414
   compact_daemon_migrate_scanned 5465414
   pgdemote_kswapd 9172431
   pgdemote_direct 0
   nr_tlb_remote_flush 85818
   nr_tlb_remote_flush_received 1036316
   nr_tlb_local_flush_all 34674
   nr_tlb_local_flush_one 740870

   3) from /proc/interrupts

   TLB: 55328      31254      44449      72887      73407      73775
	73353      73658      35802      68184      70998      73504
	74072      64700      73718      73862      TLB shootdowns

   4) from 'perf stat -a'

   2054390		itlb.itlb_flush
   150073902		tlb_flush.dtlb_thread
   135630767		tlb_flush.stlb_any
   117880065362		dTLB-load-misses
   217521760		dTLB-store-misses
   908338035		iTLB-load-misses

---

The result looks incredible.  You should be able to see a similar result
with any workload that triggers reclaim or migration once LUF is applied.

	Byungchul


^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [PATCH v10 00/12] LUF(Lazy Unmap Flush) reducing tlb numbers over 90%
  2024-05-10  6:51 [PATCH v10 00/12] LUF(Lazy Unmap Flush) reducing tlb numbers over 90% Byungchul Park
                   ` (13 preceding siblings ...)
  2024-05-11  7:15 ` Huang, Ying
@ 2024-05-24 17:16 ` Dave Hansen
  2024-05-27  1:57   ` Byungchul Park
  2024-05-28  8:41 ` David Hildenbrand
  15 siblings, 1 reply; 49+ messages in thread
From: Dave Hansen @ 2024-05-24 17:16 UTC (permalink / raw)
  To: Byungchul Park, linux-kernel, linux-mm
  Cc: kernel_team, akpm, ying.huang, vernhao, mgorman, hughd, willy,
	david, peterz, luto, tglx, mingo, bp, dave.hansen, rjgolo

On 5/9/24 23:51, Byungchul Park wrote:
> To achieve that:
> 
>    1. For the folios that map only to non-writable tlb entries, prevent
>       tlb flush during unmapping but perform it just before the folios
>       actually become used, out of buddy or pcp.

Is this just _pure_ unmapping (like MADV_DONTNEED), or does it apply to
changing the memory map, like munmap() itself?

>    2. When any non-writable ptes change to writable e.g. through fault
>       handler, give up luf mechanism and perform tlb flush required
>       right away.
> 
>    3. When a writable mapping is created e.g. through mmap(), give up
>       luf mechanism and perform tlb flush required right away.

Let's say you do this:

	fd = open("/some/file", O_RDONLY);
	ptr1 = mmap(-1, size, PROT_READ, ..., fd, ...);
	foo1 = *ptr1;

You now have a read-only PTE pointing to the first page of /some/file.
Let's say try_to_unmap() comes along and decides it can_luf_folio().
The page gets pulled out of the page cache and freed, the PTE is zeroed.
 But the TLB is never flushed.

Now, someone does:

	fd2 = open("/some/other/file", O_RDONLY);
	ptr2 = mmap(ptr1, size, PROT_READ, MAP_FIXED, fd, ...);
	foo2 = *ptr2;

and they overwrite the old VMA.  Does foo2 have the contents of the new
"/some/other/file" or the old "/some/file"?  How does the new mmap()
know that there was something to flush?

BTW, the same thing could happen without a new mmap().  Someone could
modify the file in the middle, maybe even from another process.

	fd = open("/some/file", O_RDONLY);
	ptr1 = mmap(-1, size, PROT_READ, ..., fd, ...);
	foo1 = *ptr1;
	// LUF happens here
	// "/some/file" changes
	foo2 = *ptr1; // Does this see the change?




^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [PATCH v10 00/12] LUF(Lazy Unmap Flush) reducing tlb numbers over 90%
  2024-05-24 17:16 ` Dave Hansen
@ 2024-05-27  1:57   ` Byungchul Park
  2024-05-27  2:43     ` Dave Hansen
  2024-05-27  3:10     ` Huang, Ying
  0 siblings, 2 replies; 49+ messages in thread
From: Byungchul Park @ 2024-05-27  1:57 UTC (permalink / raw)
  To: Dave Hansen
  Cc: linux-kernel, linux-mm, kernel_team, akpm, ying.huang, vernhao,
	mgorman, hughd, willy, david, peterz, luto, tglx, mingo, bp,
	dave.hansen, rjgolo

On Fri, May 24, 2024 at 10:16:39AM -0700, Dave Hansen wrote:
> On 5/9/24 23:51, Byungchul Park wrote:
> > To achieve that:
> > 
> >    1. For the folios that map only to non-writable tlb entries, prevent
> >       tlb flush during unmapping but perform it just before the folios
> >       actually become used, out of buddy or pcp.
> 
> Is this just _pure_ unmapping (like MADV_DONTNEED), or does it apply to
> changing the memory map, like munmap() itself?

I think it can be applied to any unmapping of read-only folios, but for
now LUF works only with the unmapping done during folio migration and
reclaim.

> >    2. When any non-writable ptes change to writable e.g. through fault
> >       handler, give up luf mechanism and perform tlb flush required
> >       right away.
> > 
> >    3. When a writable mapping is created e.g. through mmap(), give up
> >       luf mechanism and perform tlb flush required right away.
> 
> Let's say you do this:
> 
> 	fd = open("/some/file", O_RDONLY);
> 	ptr1 = mmap(-1, size, PROT_READ, ..., fd, ...);
> 	foo1 = *ptr1;
> 
> You now have a read-only PTE pointing to the first page of /some/file.
> Let's say try_to_unmap() comes along and decides it can_luf_folio().
> The page gets pulled out of the page cache and freed, the PTE is zeroed.
>  But the TLB is never flushed.
> 
> Now, someone does:
> 
> 	fd2 = open("/some/other/file", O_RDONLY);
> 	ptr2 = mmap(ptr1, size, PROT_READ, MAP_FIXED, fd, ...);
> 	foo2 = *ptr2;
> 
> and they overwrite the old VMA.  Does foo2 have the contents of the new
> "/some/other/file" or the old "/some/file"?  How does the new mmap()

Good point.  LUF should've given up at the 2nd mmap() in this case.
I will fix it by introducing a new flag in task_struct indicating whether
LUF has left stale mappings for the task, so that LUF can give up and
flush right away in mmap().

> know that there was something to flush?
> 
> BTW, the same thing could happen without a new mmap().  Someone could
> modify the file in the middle, maybe even from another process.

Thank you for pointing that out.  I will fix it too by introducing a new
flag in the inode or something similar, to make LUF aware that an update
of the file has been attempted, so that LUF can give up and flush right
away in that case.

Plus, I will add another give-up at the code changing the permission of a
vma to writable.

Thank you very much.

	Byungchul

> 	fd = open("/some/file", O_RDONLY);
> 	ptr1 = mmap(-1, size, PROT_READ, ..., fd, ...);
> 	foo1 = *ptr1;
> 	// LUF happens here
> 	// "/some/file" changes
> 	foo2 = *ptr1; // Does this see the change?


^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [PATCH v10 00/12] LUF(Lazy Unmap Flush) reducing tlb numbers over 90%
  2024-05-27  1:57   ` Byungchul Park
@ 2024-05-27  2:43     ` Dave Hansen
  2024-05-27  3:46       ` Byungchul Park
  2024-05-27 22:58       ` Byungchul Park
  2024-05-27  3:10     ` Huang, Ying
  1 sibling, 2 replies; 49+ messages in thread
From: Dave Hansen @ 2024-05-27  2:43 UTC (permalink / raw)
  To: Byungchul Park
  Cc: linux-kernel, linux-mm, kernel_team, akpm, ying.huang, vernhao,
	mgorman, hughd, willy, david, peterz, luto, tglx, mingo, bp,
	dave.hansen, rjgolo

On 5/26/24 18:57, Byungchul Park wrote:
...
> Plus, I will add another give-up at code changing the permission of vma
> to writable.

I suspect you have a much more general problem on your hands. Just
tweaking the VFS or mmap() code likely isn't going to cut it.

I guess we'll see what you come up with next, but this email was really
just the result of Vlastimil and I chatting on IRC for five minutes
about this set.

It has absolutely not been tested nor reviewed enough.  <fud>I hope the
performance gains stick around once more of the bugs are gone.</fud>


^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [PATCH v10 00/12] LUF(Lazy Unmap Flush) reducing tlb numbers over 90%
  2024-05-27  1:57   ` Byungchul Park
  2024-05-27  2:43     ` Dave Hansen
@ 2024-05-27  3:10     ` Huang, Ying
  2024-05-27  3:56       ` Byungchul Park
  2024-05-28 15:14       ` Dave Hansen
  1 sibling, 2 replies; 49+ messages in thread
From: Huang, Ying @ 2024-05-27  3:10 UTC (permalink / raw)
  To: Byungchul Park
  Cc: Dave Hansen, linux-kernel, linux-mm, kernel_team, akpm, vernhao,
	mgorman, hughd, willy, david, peterz, luto, tglx, mingo, bp,
	dave.hansen, rjgolo

Byungchul Park <byungchul@sk.com> writes:

> On Fri, May 24, 2024 at 10:16:39AM -0700, Dave Hansen wrote:
>> On 5/9/24 23:51, Byungchul Park wrote:
>> > To achieve that:
>> > 
>> >    1. For the folios that map only to non-writable tlb entries, prevent
>> >       tlb flush during unmapping but perform it just before the folios
>> >       actually become used, out of buddy or pcp.
>> 
>> Is this just _pure_ unmapping (like MADV_DONTNEED), or does it apply to
>> changing the memory map, like munmap() itself?
>
> I think it can be applied to any unmapping of ro ones but LUF for now is
> working only with unmapping during folio migrion and reclaim.
>
>> >    2. When any non-writable ptes change to writable e.g. through fault
>> >       handler, give up luf mechanism and perform tlb flush required
>> >       right away.
>> > 
>> >    3. When a writable mapping is created e.g. through mmap(), give up
>> >       luf mechanism and perform tlb flush required right away.
>> 
>> Let's say you do this:
>> 
>> 	fd = open("/some/file", O_RDONLY);
>> 	ptr1 = mmap(-1, size, PROT_READ, ..., fd, ...);
>> 	foo1 = *ptr1;
>> 
>> You now have a read-only PTE pointing to the first page of /some/file.
>> Let's say try_to_unmap() comes along and decides it can_luf_folio().
>> The page gets pulled out of the page cache and freed, the PTE is zeroed.
>>  But the TLB is never flushed.
>> 
>> Now, someone does:
>> 
>> 	fd2 = open("/some/other/file", O_RDONLY);
>> 	ptr2 = mmap(ptr1, size, PROT_READ, MAP_FIXED, fd, ...);
>> 	foo2 = *ptr2;
>> 
>> and they overwrite the old VMA.  Does foo2 have the contents of the new
>> "/some/other/file" or the old "/some/file"?  How does the new mmap()
>
> Good point.  It should've give up LUF at the 2nd mmap() in this case.
> I will fix it by introducing a new flag in task_struct indicating if LUF
> has left stale maps for the task so that LUF can give up and flush right
> away in mmap().
>
>> know that there was something to flush?
>> 
>> BTW, the same thing could happen without a new mmap().  Someone could
>> modify the file in the middle, maybe even from another process.
>
> Thank you for the pointing out.  I will fix it too by introducing a new
> flag in inode or something to make LUF aware if updating the file has
> been tried so that LUF can give up and flush right away in the case.
>
> Plus, I will add another give-up at code changing the permission of vma
> to writable.

I guess that you need a framework similar to
"flush_tlb_batched_pending()" to deal with the interaction with other
TLB-related operations.
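
For illustration only, a LUF analogue of that framework might look
roughly like the sketch below.  The field and helper names are made up
for this example and are not part of the series, and the real
flush_tlb_batched_pending() is more careful about races than this:

	/*
	 * Hypothetical sketch: a per-mm "LUF skipped a flush" flag that
	 * other TLB-related operations check before they rely on the
	 * TLB being coherent (e.g. mmap(), mprotect(), wp faults).
	 */
	static inline void luf_flush_pending(struct mm_struct *mm)
	{
		/* clear the flag and flush if a LUF flush was deferred */
		if (atomic_xchg(&mm->luf_pending, 0))
			flush_tlb_mm(mm);
	}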

--
Best Regards,
Huang, Ying

> Thank you very much.
>
> 	Byungchul
>
>> 	fd = open("/some/file", O_RDONLY);
>> 	ptr1 = mmap(-1, size, PROT_READ, ..., fd, ...);
>> 	foo1 = *ptr1;
>> 	// LUF happens here
>> 	// "/some/file" changes
>> 	foo2 = *ptr1; // Does this see the change?


^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [PATCH v10 00/12] LUF(Lazy Unmap Flush) reducing tlb numbers over 90%
  2024-05-27  2:43     ` Dave Hansen
@ 2024-05-27  3:46       ` Byungchul Park
  2024-05-27  4:19         ` Byungchul Park
  2024-05-27 22:58       ` Byungchul Park
  1 sibling, 1 reply; 49+ messages in thread
From: Byungchul Park @ 2024-05-27  3:46 UTC (permalink / raw)
  To: Dave Hansen
  Cc: linux-kernel, linux-mm, kernel_team, akpm, ying.huang, vernhao,
	mgorman, hughd, willy, david, peterz, luto, tglx, mingo, bp,
	dave.hansen, rjgolo

On Sun, May 26, 2024 at 07:43:10PM -0700, Dave Hansen wrote:
> On 5/26/24 18:57, Byungchul Park wrote:
> ...
> > Plus, I will add another give-up at code changing the permission of vma
> > to writable.
> 
> I suspect you have a much more general problem on your hands. Just
> tweaking the VFS or mmap() code likely isn't going to cut it.

For now, LUF is only interested in the limited set of folios that are
migratable or reclaimable on the LRU.  So, IMHO, fixing a few things is
going to cut it.

> I guess we'll see what you come up with next, but this email was really
> just the result of Vlastimil and I chatting on IRC for five minutes
> about this set.
> 
> It has absolutely not been tested nor reviewed enough.  <fud>I hope the
> performance gains stick around once more of the bugs are gone.</fud>

Sure. It should be.

	Byungchul


^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [PATCH v10 00/12] LUF(Lazy Unmap Flush) reducing tlb numbers over 90%
  2024-05-27  3:10     ` Huang, Ying
@ 2024-05-27  3:56       ` Byungchul Park
  2024-05-28 15:14       ` Dave Hansen
  1 sibling, 0 replies; 49+ messages in thread
From: Byungchul Park @ 2024-05-27  3:56 UTC (permalink / raw)
  To: Huang, Ying
  Cc: Dave Hansen, linux-kernel, linux-mm, kernel_team, akpm, vernhao,
	mgorman, hughd, willy, david, peterz, luto, tglx, mingo, bp,
	dave.hansen, rjgolo

On Mon, May 27, 2024 at 11:10:15AM +0800, Huang, Ying wrote:
> Byungchul Park <byungchul@sk.com> writes:
> 
> > On Fri, May 24, 2024 at 10:16:39AM -0700, Dave Hansen wrote:
> >> On 5/9/24 23:51, Byungchul Park wrote:
> >> > To achieve that:
> >> > 
> >> >    1. For the folios that map only to non-writable tlb entries, prevent
> >> >       tlb flush during unmapping but perform it just before the folios
> >> >       actually become used, out of buddy or pcp.
> >> 
> >> Is this just _pure_ unmapping (like MADV_DONTNEED), or does it apply to
> >> changing the memory map, like munmap() itself?
> >
> > I think it can be applied to any unmapping of ro ones but LUF for now is
> > working only with unmapping during folio migrion and reclaim.
> >
> >> >    2. When any non-writable ptes change to writable e.g. through fault
> >> >       handler, give up luf mechanism and perform tlb flush required
> >> >       right away.
> >> > 
> >> >    3. When a writable mapping is created e.g. through mmap(), give up
> >> >       luf mechanism and perform tlb flush required right away.
> >> 
> >> Let's say you do this:
> >> 
> >> 	fd = open("/some/file", O_RDONLY);
> >> 	ptr1 = mmap(-1, size, PROT_READ, ..., fd, ...);
> >> 	foo1 = *ptr1;
> >> 
> >> You now have a read-only PTE pointing to the first page of /some/file.
> >> Let's say try_to_unmap() comes along and decides it can_luf_folio().
> >> The page gets pulled out of the page cache and freed, the PTE is zeroed.
> >>  But the TLB is never flushed.
> >> 
> >> Now, someone does:
> >> 
> >> 	fd2 = open("/some/other/file", O_RDONLY);
> >> 	ptr2 = mmap(ptr1, size, PROT_READ, MAP_FIXED, fd, ...);
> >> 	foo2 = *ptr2;
> >> 
> >> and they overwrite the old VMA.  Does foo2 have the contents of the new
> >> "/some/other/file" or the old "/some/file"?  How does the new mmap()
> >
> > Good point.  It should've give up LUF at the 2nd mmap() in this case.
> > I will fix it by introducing a new flag in task_struct indicating if LUF
> > has left stale maps for the task so that LUF can give up and flush right
> > away in mmap().
> >
> >> know that there was something to flush?
> >> 
> >> BTW, the same thing could happen without a new mmap().  Someone could
> >> modify the file in the middle, maybe even from another process.
> >
> > Thank you for the pointing out.  I will fix it too by introducing a new
> > flag in inode or something to make LUF aware if updating the file has
> > been tried so that LUF can give up and flush right away in the case.
> >
> > Plus, I will add another give-up at code changing the permission of vma
> > to writable.
> 
> I guess that you need a framework similar as
> "flush_tlb_batched_pending()" to deal with interaction with other TLB
> related operations.

Thank you.  I will check it.

	Byungchul

> --
> Best Regards,
> Huang, Ying
> 
> > Thank you very much.
> >
> > 	Byungchul
> >
> >> 	fd = open("/some/file", O_RDONLY);
> >> 	ptr1 = mmap(-1, size, PROT_READ, ..., fd, ...);
> >> 	foo1 = *ptr1;
> >> 	// LUF happens here
> >> 	// "/some/file" changes
> >> 	foo2 = *ptr1; // Does this see the change?


^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [PATCH v10 00/12] LUF(Lazy Unmap Flush) reducing tlb numbers over 90%
  2024-05-27  3:46       ` Byungchul Park
@ 2024-05-27  4:19         ` Byungchul Park
  2024-05-27  4:25           ` Byungchul Park
  0 siblings, 1 reply; 49+ messages in thread
From: Byungchul Park @ 2024-05-27  4:19 UTC (permalink / raw)
  To: Dave Hansen
  Cc: linux-kernel, linux-mm, kernel_team, akpm, ying.huang, vernhao,
	mgorman, hughd, willy, david, peterz, luto, tglx, mingo, bp,
	dave.hansen, rjgolo

On Mon, May 27, 2024 at 12:46:14PM +0900, Byungchul Park wrote:
> On Sun, May 26, 2024 at 07:43:10PM -0700, Dave Hansen wrote:
> > On 5/26/24 18:57, Byungchul Park wrote:
> > ...
> > > Plus, I will add another give-up at code changing the permission of vma
> > > to writable.
> > 
> > I suspect you have a much more general problem on your hands. Just
> > tweaking the VFS or mmap() code likely isn't going to cut it.

What a stupid idiot I am.

I already discussed these exact cases with Nadav Amit at the very
beginning, around v1.  I just didn't remember it when I was answering
you.

mmap() or a permission change requested by the user already performs the
TLB flush needed within that code, which LUF never touches.

Worth noting that currently LUF touches only the unmapping during
migration or reclaim.  Any other mapping update would perform the TLB
flush it needs, as is.  I guess updating the page cache also already
performs the TLB flush needed.  I need to check it.  Probably it already
does.

	Byungchul

> LUF is interested in limited folios that are migratable or reclaimable
> in lru for now.  So, IMHO, fixing a few things is going to cut it.
> 
> > I guess we'll see what you come up with next, but this email was really
> > just the result of Vlastimil and I chatting on IRC for five minutes
> > about this set.
> > 
> > It has absolutely not been tested nor reviewed enough.  <fud>I hope the
> > performance gains stick around once more of the bugs are gone.</fud>
> 
> Sure. It should be.
> 
> 	Byungchul


^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [PATCH v10 00/12] LUF(Lazy Unmap Flush) reducing tlb numbers over 90%
  2024-05-27  4:19         ` Byungchul Park
@ 2024-05-27  4:25           ` Byungchul Park
  0 siblings, 0 replies; 49+ messages in thread
From: Byungchul Park @ 2024-05-27  4:25 UTC (permalink / raw)
  To: Dave Hansen
  Cc: linux-kernel, linux-mm, kernel_team, akpm, ying.huang, vernhao,
	mgorman, hughd, willy, david, peterz, luto, tglx, mingo, bp,
	dave.hansen, rjgolo

On Mon, May 27, 2024 at 01:19:46PM +0900, Byungchul Park wrote:
> On Mon, May 27, 2024 at 12:46:14PM +0900, Byungchul Park wrote:
> > On Sun, May 26, 2024 at 07:43:10PM -0700, Dave Hansen wrote:
> > > On 5/26/24 18:57, Byungchul Park wrote:
> > > ...
> > > > Plus, I will add another give-up at code changing the permission of vma
> > > > to writable.
> > > 
> > > I suspect you have a much more general problem on your hands. Just
> > > tweaking the VFS or mmap() code likely isn't going to cut it.
> 
> What a stupid idiot I am.
> 
> I already discuss the exact cases with Nadav Amit at the very beginning
> around v1.  I didn't remember it when I was answering to you.
> 
> mmap() or changing the permission by user already performs TLB flush
> needed within that code, which LUF never touch.
> 
> Worth noting currently LUF touchs only unmapping during migration or
> reclaim.  Other updating mapping would perform TLB flush it needs, as is.
> I guess updating page cache is also already perform TLB flush needed.

This may not be the case, though.  I might need to work on the page
cache side.

	Byungchul

> I need to check it.  Probably, it would already do.
> 
> 	Byungchul
> 
> > LUF is interested in limited folios that are migratable or reclaimable
> > in lru for now.  So, IMHO, fixing a few things is going to cut it.
> > 
> > > I guess we'll see what you come up with next, but this email was really
> > > just the result of Vlastimil and I chatting on IRC for five minutes
> > > about this set.
> > > 
> > > It has absolutely not been tested nor reviewed enough.  <fud>I hope the
> > > performance gains stick around once more of the bugs are gone.</fud>
> > 
> > Sure. It should be.
> > 
> > 	Byungchul


^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [PATCH v10 00/12] LUF(Lazy Unmap Flush) reducing tlb numbers over 90%
  2024-05-27  2:43     ` Dave Hansen
  2024-05-27  3:46       ` Byungchul Park
@ 2024-05-27 22:58       ` Byungchul Park
  2024-05-29  2:16         ` Huang, Ying
  1 sibling, 1 reply; 49+ messages in thread
From: Byungchul Park @ 2024-05-27 22:58 UTC (permalink / raw)
  To: Dave Hansen
  Cc: linux-kernel, linux-mm, kernel_team, akpm, ying.huang, vernhao,
	mgorman, hughd, willy, david, peterz, luto, tglx, mingo, bp,
	dave.hansen, rjgolo

On Sun, May 26, 2024 at 07:43:10PM -0700, Dave Hansen wrote:
> It has absolutely not been tested nor reviewed enough.  <fud>I hope the

It has been tested enough on my side, and it certainly should get enough
review.  I will respin shortly, after rebasing onto the current
mm-unstable and working on the vfs side.

	Byungchul

> performance gains stick around once more of the bugs are gone.</fud>


^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [PATCH v10 00/12] LUF(Lazy Unmap Flush) reducing tlb numbers over 90%
  2024-05-10  6:51 [PATCH v10 00/12] LUF(Lazy Unmap Flush) reducing tlb numbers over 90% Byungchul Park
                   ` (14 preceding siblings ...)
  2024-05-24 17:16 ` Dave Hansen
@ 2024-05-28  8:41 ` David Hildenbrand
  2024-05-29  4:39   ` Byungchul Park
  15 siblings, 1 reply; 49+ messages in thread
From: David Hildenbrand @ 2024-05-28  8:41 UTC (permalink / raw)
  To: Byungchul Park, linux-kernel, linux-mm
  Cc: kernel_team, akpm, ying.huang, vernhao, mgorman, hughd, willy,
	peterz, luto, tglx, mingo, bp, dave.hansen, rjgolo

Am 10.05.24 um 08:51 schrieb Byungchul Park:
> Hi everyone,
> 
> While I'm working with a tiered memory system e.g. CXL memory, I have
> been facing migration overhead esp. tlb shootdown on promotion or
> demotion between different tiers.  Yeah..  most tlb shootdowns on
> migration through hinting fault can be avoided thanks to Huang Ying's
> work, commit 4d4b6d66db ("mm,unmap: avoid flushing tlb in batch if PTE
> is inaccessible").  See the following link for more information:
> 
> https://lore.kernel.org/lkml/20231115025755.GA29979@system.software.com/
> 
> However, it's only for migration through hinting fault.  I thought it'd
> be much better if we have a general mechanism to reduce all the tlb
> numbers that we can apply to any unmap code, that we normally believe
> tlb flush should be followed.
> 
> I'm suggesting a new mechanism, LUF(Lazy Unmap Flush), defers tlb flush
> until folios that have been unmapped and freed, eventually get allocated
> again.  It's safe for folios that had been mapped read-only and were
> unmapped, since the contents of the folios don't change while staying in
> pcp or buddy so we can still read the data through the stale tlb entries.
> 
> tlb flush can be defered when folios get unmapped as long as it
> guarantees to perform tlb flush needed, before the folios actually
> become used, of course, only if all the corresponding ptes don't have
> write permission.  Otherwise, the system will get messed up.
> 
> To achieve that:
> 
>     1. For the folios that map only to non-writable tlb entries, prevent
>        tlb flush during unmapping but perform it just before the folios
>        actually become used, out of buddy or pcp.

Trying to understand the impact: Effectively, a CPU could still read data from a 
page that has already been freed, until that page gets reallocated again.

The important part I can see is

1) PCP/buddy must not change page content (e.g., poison, init_on_free), 
otherwise an app might read wrong content.

2) If we mess up the flush-before-realloc, an app might observe data written by 
whoever allocated the page.

3) We must reliably detect+handle any read-only PTEs for which we didn't flush 
the TLB yet, otherwise an app could see its memory writes getting lost. I recall 
that at least uffd-wp might defer TLB flushes (see comment in do_wp_page()). Not 
sure about other pte_wrprotect() callers that flush the TLB after processing 
multiple page tables, whereby rmap code might succeed in unmapping a page before 
the TLB flush happened.

Any other possible issues you stumbled over that are worth mentioning?

-- 
Thanks,

David / dhildenb



^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [PATCH v10 00/12] LUF(Lazy Unmap Flush) reducing tlb numbers over 90%
  2024-05-27  3:10     ` Huang, Ying
  2024-05-27  3:56       ` Byungchul Park
@ 2024-05-28 15:14       ` Dave Hansen
  2024-05-29  5:00         ` Byungchul Park
  1 sibling, 1 reply; 49+ messages in thread
From: Dave Hansen @ 2024-05-28 15:14 UTC (permalink / raw)
  To: Huang, Ying, Byungchul Park
  Cc: linux-kernel, linux-mm, kernel_team, akpm, vernhao, mgorman,
	hughd, willy, david, peterz, luto, tglx, mingo, bp, dave.hansen,
	rjgolo

On 5/26/24 20:10, Huang, Ying wrote:
>> Thank you for the pointing out.  I will fix it too by introducing a new
>> flag in inode or something to make LUF aware if updating the file has
>> been tried so that LUF can give up and flush right away in the case.
>>
>> Plus, I will add another give-up at code changing the permission of vma
>> to writable.
> I guess that you need a framework similar as
> "flush_tlb_batched_pending()" to deal with interaction with other TLB
> related operations.

Where "other TLB related operations" includes both things that
traditionally invalidate TLBs (like going Present 1=>0) and things like
fault-in that go Present 0=>1 that can result in TLB population.

It's actually a really crummy problem to solve.  We don't have _any_
machinery to say, "Hey, you know that PTE you wanted to install?  There
was something there before you and we haven't flushed it yet.  Can you
be a doll and do a flush before _populating_ that PTE?"

To solve it generically, I suspect you'll need some kind of special
non-present PTE to say:

	There _was_ a PTE here that wasn't flushed.

Sure, you can add gunk to the VMA to track when this happens.  But
that'll penalize anyone populating a PTE anywhere in the VMA at least
once.  If there were other threads faulting in pages to the same VMA,
they'll just end up doing the flush that LUF tried to avoid in the first
place.
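
To make that concrete, the fault path would need a check along these
lines.  This is entirely hypothetical; no such PTE marker exists today
and pte_luf_stale() is a made-up helper:

	/*
	 * Hypothetical: a software PTE marker left at unmap time meaning
	 * "a stale, unflushed TLB entry may still exist for this address".
	 */
	if (pte_luf_stale(vmf->orig_pte))
		/* flush before populating the new PTE */
		flush_tlb_page(vmf->vma, vmf->address);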


^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [PATCH v10 00/12] LUF(Lazy Unmap Flush) reducing tlb numbers over 90%
  2024-05-27 22:58       ` Byungchul Park
@ 2024-05-29  2:16         ` Huang, Ying
  2024-05-30  1:02           ` Byungchul Park
  0 siblings, 1 reply; 49+ messages in thread
From: Huang, Ying @ 2024-05-29  2:16 UTC (permalink / raw)
  To: Byungchul Park
  Cc: Dave Hansen, linux-kernel, linux-mm, kernel_team, akpm, vernhao,
	mgorman, hughd, willy, david, peterz, luto, tglx, mingo, bp,
	dave.hansen, rjgolo

Byungchul Park <byungchul@sk.com> writes:

> On Sun, May 26, 2024 at 07:43:10PM -0700, Dave Hansen wrote:
>> It has absolutely not been tested nor reviewed enough.  <fud>I hope the
>
> It has been tested enough on my side, and it should be reviewed enough
> for sure.

I believe that you have tested and reviewed the patchset by yourself.
But there are some other cases that you hadn't thought about enough
before, as Dave pointed out.

So, I suggest you try to find more possible weaknesses in your patchset.
Begin with what Dave and David pointed out.

> I will respin after rebasing the current mm-unstble and
> working on vfs shortly.
>
> 	Byungchul
>
>> performance gains stick around once more of the bugs are gone.</fud>

--
Best Regards,
Huang, Ying


^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [PATCH v10 00/12] LUF(Lazy Unmap Flush) reducing tlb numbers over 90%
  2024-05-28  8:41 ` David Hildenbrand
@ 2024-05-29  4:39   ` Byungchul Park
  0 siblings, 0 replies; 49+ messages in thread
From: Byungchul Park @ 2024-05-29  4:39 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: linux-kernel, linux-mm, kernel_team, akpm, ying.huang, vernhao,
	mgorman, hughd, willy, peterz, luto, tglx, mingo, bp, dave.hansen,
	rjgolo

On Tue, May 28, 2024 at 10:41:54AM +0200, David Hildenbrand wrote:
> Am 10.05.24 um 08:51 schrieb Byungchul Park:
> > Hi everyone,
> > 
> > While I'm working with a tiered memory system e.g. CXL memory, I have
> > been facing migration overhead esp. tlb shootdown on promotion or
> > demotion between different tiers.  Yeah..  most tlb shootdowns on
> > migration through hinting fault can be avoided thanks to Huang Ying's
> > work, commit 4d4b6d66db ("mm,unmap: avoid flushing tlb in batch if PTE
> > is inaccessible").  See the following link for more information:
> > 
> > https://lore.kernel.org/lkml/20231115025755.GA29979@system.software.com/
> > 
> > However, it's only for migration through hinting fault.  I thought it'd
> > be much better if we have a general mechanism to reduce all the tlb
> > numbers that we can apply to any unmap code, that we normally believe
> > tlb flush should be followed.
> > 
> > I'm suggesting a new mechanism, LUF(Lazy Unmap Flush), defers tlb flush
> > until folios that have been unmapped and freed, eventually get allocated
> > again.  It's safe for folios that had been mapped read-only and were
> > unmapped, since the contents of the folios don't change while staying in
> > pcp or buddy so we can still read the data through the stale tlb entries.
> > 
> > tlb flush can be defered when folios get unmapped as long as it
> > guarantees to perform tlb flush needed, before the folios actually
> > become used, of course, only if all the corresponding ptes don't have
> > write permission.  Otherwise, the system will get messed up.
> > 
> > To achieve that:
> > 
> >     1. For the folios that map only to non-writable tlb entries, prevent
> >        tlb flush during unmapping but perform it just before the folios
> >        actually become used, out of buddy or pcp.
> 
> Trying to understand the impact: Effectively, a CPU could still read data
> from a page that has already been freed, until that page gets reallocated
> again.
> 
> The important part I can see is
> 
> 1) PCP/buddy must not change page content (e.g., poison, init_on_free),
> otherwise an app might read wrong content.

Exactly.  I will take them into account.  Thank you.

> 2) If we mess up the flush-before-realloc, an app might observe data written
> by whoever allocated the page.

Yes.  However, the appropriate TLB flush is performed in prep_new_page().
Basically you are right.  I need to pay enough attention to it.
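
To be concrete about where that sits, the flush-before-reuse point
described in the cover letter ("just before the folios actually become
used, out of buddy or pcp") would conceptually look like the sketch
below.  The helper names are placeholders for illustration, not the
series' actual interfaces:

	/*
	 * Hypothetical: called while preparing a freshly allocated page,
	 * e.g. from prep_new_page().  If a TLB flush was deferred when
	 * the page's old read-only mappings were torn down, perform it
	 * now, before the new owner can read or write through the page.
	 * Both helpers below are placeholders for illustration.
	 */
	static inline void luf_flush_before_reuse(struct page *page)
	{
		if (luf_flush_pending_on(page))
			luf_flush(page);
	}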

> 3) We must reliably detect+handle any read-only PTEs for which we didn't
> flush the TLB yet, otherwise an app could see its memory writes getting
> lost. I recall that at least uffd-wp might defer TLB flushes (see comment in
> do_wp_page()). Not sure about other pte_wrprotect() callers that flush the
> TLB after processing multiple page tables, whereby rmap code might succeed
> in unmapping a page before the TLB flush happened.
> 
> Any other possible issues you stumbled over that are worth mentioning?

You covered everything that I'm concerned about, and in a clearer way.

	Byungchul

> 
> -- 
> Thanks,
> 
> David / dhildenb


^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [PATCH v10 00/12] LUF(Lazy Unmap Flush) reducing tlb numbers over 90%
  2024-05-28 15:14       ` Dave Hansen
@ 2024-05-29  5:00         ` Byungchul Park
  2024-05-29 16:41           ` Dave Hansen
  0 siblings, 1 reply; 49+ messages in thread
From: Byungchul Park @ 2024-05-29  5:00 UTC (permalink / raw)
  To: Dave Hansen
  Cc: Huang, Ying, linux-kernel, linux-mm, kernel_team, akpm, vernhao,
	mgorman, hughd, willy, david, peterz, luto, tglx, mingo, bp,
	dave.hansen, rjgolo

On Tue, May 28, 2024 at 08:14:43AM -0700, Dave Hansen wrote:
> On 5/26/24 20:10, Huang, Ying wrote:
> >> Thank you for the pointing out.  I will fix it too by introducing a new
> >> flag in inode or something to make LUF aware if updating the file has
> >> been tried so that LUF can give up and flush right away in the case.
> >>
> >> Plus, I will add another give-up at code changing the permission of vma
> >> to writable.
> > I guess that you need a framework similar as
> > "flush_tlb_batched_pending()" to deal with interaction with other TLB
> > related operations.
> 
> Where "other TLB related operations" includes both things that
> traditionally invalidate TLBs (like going Present 1=>0) and things like
> fault-in that go Present 0=>1 that can result in TLB population.
> 
> It's actually a really crummy problem to solve.  We don't have _any_
> machinery to say, "Hey, you know that PTE you wanted to install?  There
> was something there before you and we haven't flushed it yet.  Can you
> be a doll and do a flush before _populating_ that PTE?"

All the code updating ptes already performs the TLB flush needed in a
safe way when it's unavoidable, e.g. munmap.  LUF, which controls when to
flush at a higher level than the arch code, just leaves stale read-only
TLB entries that are currently supposed to be in use.  Could you give a
scenario that you are concerned about?

	Byungchul

> To solve it generically, I suspect you'll need some kind of special
> non-present PTE to say:
> 
> 	There _was_ a PTE here that wasn't flushed.
> 
> Sure, you can add gunk to the VMA to track when this happens.  But
> that'll penalize anyone populating a PTE anywhere in the VMA at least
> once.  If there were other threads faulting in pages to the same VMA,
> they'll just end up doing the flush that LUF tried to avoid in the first
> place.


^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [PATCH v10 00/12] LUF(Lazy Unmap Flush) reducing tlb numbers over 90%
  2024-05-29  5:00         ` Byungchul Park
@ 2024-05-29 16:41           ` Dave Hansen
  2024-05-30  0:50             ` Byungchul Park
  0 siblings, 1 reply; 49+ messages in thread
From: Dave Hansen @ 2024-05-29 16:41 UTC (permalink / raw)
  To: Byungchul Park
  Cc: Huang, Ying, linux-kernel, linux-mm, kernel_team, akpm, vernhao,
	mgorman, hughd, willy, david, peterz, luto, tglx, mingo, bp,
	dave.hansen, rjgolo

On 5/28/24 22:00, Byungchul Park wrote:
> All the code updating ptes already performs TLB flush needed in a safe
> way if it's inevitable e.g. munmap.  LUF which controls when to flush in
> a higer level than arch code, just leaves stale ro tlb entries that are
> currently supposed to be in use.  Could you give a scenario that you are
> concering?

Let's go back this scenario:

 	fd = open("/some/file", O_RDONLY);
 	ptr1 = mmap(-1, size, PROT_READ, ..., fd, ...);
 	foo1 = *ptr1;

There's a read-only PTE at 'ptr1'.  Right?  The page being pointed to is
eligible for LUF via the try_to_unmap() paths.  In other words, the page
might be reclaimed at any time.  If it is reclaimed, the PTE will be
cleared.

Then, the user might do:

	munmap(ptr1, PAGE_SIZE);

Which will _eventually_ wind up in the zap_pte_range() loop.  But that
loop will only see pte_none().  It doesn't do _anything_ to the 'struct
mmu_gather'.

The munmap() then lands in tlb_flush_mmu_tlbonly() where it looks at the
'struct mmu_gather':

        if (!(tlb->freed_tables || tlb->cleared_ptes ||
	      tlb->cleared_pmds || tlb->cleared_puds ||
	      tlb->cleared_p4ds))
                return;

But since there were no cleared PTEs (or anything else) during the
unmap, this just returns and doesn't flush the TLB.

We now have an address space with a stale TLB entry at 'ptr1' and not
even a VMA there.  There's nothing to stop a new VMA from going in,
installing a *new* PTE, but getting data from the stale TLB entry that
still hasn't been flushed.
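
Putting the whole sequence together as a user-space sketch (the reclaim
plus LUF step in the middle is driven by the kernel, so this only
illustrates the ordering; it is not a reliable reproducer):

	#include <fcntl.h>
	#include <sys/mman.h>
	#include <unistd.h>

	int main(void)
	{
		long psz = sysconf(_SC_PAGESIZE);
		int fd1 = open("/some/file", O_RDONLY);
		char *ptr1 = mmap(NULL, psz, PROT_READ, MAP_PRIVATE, fd1, 0);
		char foo1 = *ptr1;   /* read-only PTE and TLB entry created */

		/* ... reclaim unmaps the page, LUF defers the TLB flush ... */

		munmap(ptr1, psz);   /* zap sees pte_none(), flushes nothing */

		int fd2 = open("/some/other/file", O_RDONLY);
		char *ptr2 = mmap(ptr1, psz, PROT_READ,
				  MAP_PRIVATE | MAP_FIXED, fd2, 0);
		char foo2 = *ptr2;   /* may be served by the stale TLB entry,
					that is, old "/some/file" contents */

		(void)foo1; (void)foo2;
		return 0;
	}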


^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [PATCH v10 00/12] LUF(Lazy Unmap Flush) reducing tlb numbers over 90%
  2024-05-29 16:41           ` Dave Hansen
@ 2024-05-30  0:50             ` Byungchul Park
  2024-05-30  0:59               ` Byungchul Park
  2024-05-30  1:11               ` Huang, Ying
  0 siblings, 2 replies; 49+ messages in thread
From: Byungchul Park @ 2024-05-30  0:50 UTC (permalink / raw)
  To: Dave Hansen
  Cc: Huang, Ying, linux-kernel, linux-mm, kernel_team, akpm, vernhao,
	mgorman, hughd, willy, david, peterz, luto, tglx, mingo, bp,
	dave.hansen, rjgolo

On Wed, May 29, 2024 at 09:41:22AM -0700, Dave Hansen wrote:
> On 5/28/24 22:00, Byungchul Park wrote:
> > All the code updating ptes already performs TLB flush needed in a safe
> > way if it's inevitable e.g. munmap.  LUF which controls when to flush in
> > a higer level than arch code, just leaves stale ro tlb entries that are
> > currently supposed to be in use.  Could you give a scenario that you are
> > concering?
> 
> Let's go back this scenario:
> 
>  	fd = open("/some/file", O_RDONLY);
>  	ptr1 = mmap(-1, size, PROT_READ, ..., fd, ...);
>  	foo1 = *ptr1;
> 
> There's a read-only PTE at 'ptr1'.  Right?  The page being pointed to is
> eligible for LUF via the try_to_unmap() paths.  In other words, the page
> might be reclaimed at any time.  If it is reclaimed, the PTE will be
> cleared.
> 
> Then, the user might do:
> 
> 	munmap(ptr1, PAGE_SIZE);
> 
> Which will _eventually_ wind up in the zap_pte_range() loop.  But that
> loop will only see pte_none().  It doesn't do _anything_ to the 'struct
> mmu_gather'.
> 
> The munmap() then lands in tlb_flush_mmu_tlbonly() where it looks at the
> 'struct mmu_gather':
> 
>         if (!(tlb->freed_tables || tlb->cleared_ptes ||
> 	      tlb->cleared_pmds || tlb->cleared_puds ||
> 	      tlb->cleared_p4ds))
>                 return;
> 
> But since there were no cleared PTEs (or anything else) during the
> unmap, this just returns and doesn't flush the TLB.
> 
> We now have an address space with a stale TLB entry at 'ptr1' and not
> even a VMA there.  There's nothing to stop a new VMA from going in,
> installing a *new* PTE, but getting data from the stale TLB entry that
> still hasn't been flushed.

Thank you for the explanation.  I got you.  I think I could handle the
case through a new flag in the vma, or something similar, indicating that
LUF has deferred the necessary TLB flush for it during unmapping, so that
the mmu_gather mechanism can be aware of it.  Of course, the performance
change should be checked again.  Thoughts?
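
Roughly what I have in mind, with a made-up flag name (VM_LUF_PENDING does
not exist) only to illustrate how the zap path could be told not to skip
the flush:

	/* hypothetical sketch: VM_LUF_PENDING is a made-up vma flag that LUF
	 * would set when it defers a TLB flush for mappings in this vma */
	if (vma->vm_flags & VM_LUF_PENDING)
		/* make sure tlb_flush_mmu_tlbonly() doesn't take the early
		 * return quoted above, even though no PTEs were cleared here */
		tlb_flush_pte_range(tlb, vma->vm_start,
				    vma->vm_end - vma->vm_start);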

Thanks again.

	Byungchul


^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [PATCH v10 00/12] LUF(Lazy Unmap Flush) reducing tlb numbers over 90%
  2024-05-30  0:50             ` Byungchul Park
@ 2024-05-30  0:59               ` Byungchul Park
  2024-05-30  1:11               ` Huang, Ying
  1 sibling, 0 replies; 49+ messages in thread
From: Byungchul Park @ 2024-05-30  0:59 UTC (permalink / raw)
  To: Dave Hansen
  Cc: Huang, Ying, linux-kernel, linux-mm, kernel_team, akpm, vernhao,
	mgorman, hughd, willy, david, peterz, luto, tglx, mingo, bp,
	dave.hansen, rjgolo

On Thu, May 30, 2024 at 09:50:26AM +0900, Byungchul Park wrote:
> On Wed, May 29, 2024 at 09:41:22AM -0700, Dave Hansen wrote:
> > On 5/28/24 22:00, Byungchul Park wrote:
> > > All the code updating ptes already performs TLB flush needed in a safe
> > > way if it's inevitable e.g. munmap.  LUF which controls when to flush in
> > > a higer level than arch code, just leaves stale ro tlb entries that are
> > > currently supposed to be in use.  Could you give a scenario that you are
> > > concering?
> > 
> > Let's go back this scenario:
> > 
> >  	fd = open("/some/file", O_RDONLY);
> >  	ptr1 = mmap(-1, size, PROT_READ, ..., fd, ...);
> >  	foo1 = *ptr1;
> > 
> > There's a read-only PTE at 'ptr1'.  Right?  The page being pointed to is
> > eligible for LUF via the try_to_unmap() paths.  In other words, the page
> > might be reclaimed at any time.  If it is reclaimed, the PTE will be
> > cleared.
> > 
> > Then, the user might do:
> > 
> > 	munmap(ptr1, PAGE_SIZE);
> > 
> > Which will _eventually_ wind up in the zap_pte_range() loop.  But that
> > loop will only see pte_none().  It doesn't do _anything_ to the 'struct
> > mmu_gather'.
> > 
> > The munmap() then lands in tlb_flush_mmu_tlbonly() where it looks at the
> > 'struct mmu_gather':
> > 
> >         if (!(tlb->freed_tables || tlb->cleared_ptes ||
> > 	      tlb->cleared_pmds || tlb->cleared_puds ||
> > 	      tlb->cleared_p4ds))
> >                 return;
> > 
> > But since there were no cleared PTEs (or anything else) during the
> > unmap, this just returns and doesn't flush the TLB.
> > 
> > We now have an address space with a stale TLB entry at 'ptr1' and not
> > even a VMA there.  There's nothing to stop a new VMA from going in,
> > installing a *new* PTE, but getting data from the stale TLB entry that
> > still hasn't been flushed.
> 
> Thank you for the explanation.  I got you.  I think I could handle the
> case through a new flag in vma or something indicating LUF has deferred
> necessary TLB flush for it during unmapping so that mmu_gather mechanism
> can be aware of it.  Of course, the performance change should be checked
> again.  Thoughts?

I will look more into the existing TLB flush optimizations at the arch
level and suggest a better way.

	Byungchul

> Thanks again.
> 
> 	Byungchul


^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [PATCH v10 00/12] LUF(Lazy Unmap Flush) reducing tlb numbers over 90%
  2024-05-29  2:16         ` Huang, Ying
@ 2024-05-30  1:02           ` Byungchul Park
  0 siblings, 0 replies; 49+ messages in thread
From: Byungchul Park @ 2024-05-30  1:02 UTC (permalink / raw)
  To: Huang, Ying
  Cc: Dave Hansen, linux-kernel, linux-mm, kernel_team, akpm, vernhao,
	mgorman, hughd, willy, david, peterz, luto, tglx, mingo, bp,
	dave.hansen, rjgolo

On Wed, May 29, 2024 at 10:16:26AM +0800, Huang, Ying wrote:
> Byungchul Park <byungchul@sk.com> writes:
> 
> > On Sun, May 26, 2024 at 07:43:10PM -0700, Dave Hansen wrote:
> >> It has absolutely not been tested nor reviewed enough.  <fud>I hope the
> >
> > It has been tested enough on my side, and it should be reviewed enough
> > for sure.
> 
> I believe that you have tested and reviewed the patchset by yourself.
> But there are some other cases that you haven't thought about enough
> before, as Dave pointed out.
> 
> So, I suggest you to try to find out more possible weakness of your
> patchset.  Begin with what Dave and David pointed out.

I will.

	Byungchul

> > I will respin after rebasing the current mm-unstble and
> > working on vfs shortly.
> >
> > 	Byungchul
> >
> >> performance gains stick around once more of the bugs are gone.</fud>
> 
> --
> Best Regards,
> Huang, Ying


^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [PATCH v10 00/12] LUF(Lazy Unmap Flush) reducing tlb numbers over 90%
  2024-05-30  0:50             ` Byungchul Park
  2024-05-30  0:59               ` Byungchul Park
@ 2024-05-30  1:11               ` Huang, Ying
  2024-05-30  1:33                 ` Byungchul Park
  2024-05-30  7:18                 ` Byungchul Park
  1 sibling, 2 replies; 49+ messages in thread
From: Huang, Ying @ 2024-05-30  1:11 UTC (permalink / raw)
  To: Byungchul Park
  Cc: Dave Hansen, linux-kernel, linux-mm, kernel_team, akpm, vernhao,
	mgorman, hughd, willy, david, peterz, luto, tglx, mingo, bp,
	dave.hansen, rjgolo

Byungchul Park <byungchul@sk.com> writes:

> On Wed, May 29, 2024 at 09:41:22AM -0700, Dave Hansen wrote:
>> On 5/28/24 22:00, Byungchul Park wrote:
>> > All the code updating ptes already performs TLB flush needed in a safe
>> > way if it's inevitable e.g. munmap.  LUF which controls when to flush in
>> > a higer level than arch code, just leaves stale ro tlb entries that are
>> > currently supposed to be in use.  Could you give a scenario that you are
>> > concering?
>> 
>> Let's go back this scenario:
>> 
>>  	fd = open("/some/file", O_RDONLY);
>>  	ptr1 = mmap(-1, size, PROT_READ, ..., fd, ...);
>>  	foo1 = *ptr1;
>> 
>> There's a read-only PTE at 'ptr1'.  Right?  The page being pointed to is
>> eligible for LUF via the try_to_unmap() paths.  In other words, the page
>> might be reclaimed at any time.  If it is reclaimed, the PTE will be
>> cleared.
>> 
>> Then, the user might do:
>> 
>> 	munmap(ptr1, PAGE_SIZE);
>> 
>> Which will _eventually_ wind up in the zap_pte_range() loop.  But that
>> loop will only see pte_none().  It doesn't do _anything_ to the 'struct
>> mmu_gather'.
>> 
>> The munmap() then lands in tlb_flush_mmu_tlbonly() where it looks at the
>> 'struct mmu_gather':
>> 
>>         if (!(tlb->freed_tables || tlb->cleared_ptes ||
>> 	      tlb->cleared_pmds || tlb->cleared_puds ||
>> 	      tlb->cleared_p4ds))
>>                 return;
>> 
>> But since there were no cleared PTEs (or anything else) during the
>> unmap, this just returns and doesn't flush the TLB.
>> 
>> We now have an address space with a stale TLB entry at 'ptr1' and not
>> even a VMA there.  There's nothing to stop a new VMA from going in,
>> installing a *new* PTE, but getting data from the stale TLB entry that
>> still hasn't been flushed.
>
> Thank you for the explanation.  I got you.  I think I could handle the
> case through a new flag in vma or something indicating LUF has deferred
> necessary TLB flush for it during unmapping so that mmu_gather mechanism
> can be aware of it.  Of course, the performance change should be checked
> again.  Thoughts?

I suggest you start with the simple case, that is, only support page
reclaiming and migration.  A TLB flush can be enforced during unmap with
something similar to flush_tlb_batched_pending().
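
Something along these lines, roughly -- luf_flush_pending() and
mm->luf_pending are made-up names to illustrate the shape of the check,
not existing code:

	/* Hypothetical sketch modeled on flush_tlb_batched_pending(); the
	 * luf_* names are made up purely for illustration. */
	static inline void luf_flush_pending(struct mm_struct *mm)
	{
		if (atomic_read(&mm->luf_pending)) {	/* set when LUF defers a flush */
			flush_tlb_mm(mm);		/* pay the deferred flush now */
			atomic_set(&mm->luf_pending, 0);
		}
	}

	/* called from the unmap paths (e.g. zap_pte_range()) before the PTEs
	 * are inspected, the same place flush_tlb_batched_pending() is called */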

--
Best Regards,
Huang, Ying


^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [PATCH v10 00/12] LUF(Lazy Unmap Flush) reducing tlb numbers over 90%
  2024-05-30  1:11               ` Huang, Ying
@ 2024-05-30  1:33                 ` Byungchul Park
  2024-05-30  7:18                 ` Byungchul Park
  1 sibling, 0 replies; 49+ messages in thread
From: Byungchul Park @ 2024-05-30  1:33 UTC (permalink / raw)
  To: Huang, Ying
  Cc: Dave Hansen, linux-kernel, linux-mm, kernel_team, akpm, vernhao,
	mgorman, hughd, willy, david, peterz, luto, tglx, mingo, bp,
	dave.hansen, rjgolo

On Thu, May 30, 2024 at 09:11:45AM +0800, Huang, Ying wrote:
> Byungchul Park <byungchul@sk.com> writes:
> 
> > On Wed, May 29, 2024 at 09:41:22AM -0700, Dave Hansen wrote:
> >> On 5/28/24 22:00, Byungchul Park wrote:
> >> > All the code updating ptes already performs TLB flush needed in a safe
> >> > way if it's inevitable e.g. munmap.  LUF which controls when to flush in
> >> > a higer level than arch code, just leaves stale ro tlb entries that are
> >> > currently supposed to be in use.  Could you give a scenario that you are
> >> > concering?
> >> 
> >> Let's go back this scenario:
> >> 
> >>  	fd = open("/some/file", O_RDONLY);
> >>  	ptr1 = mmap(-1, size, PROT_READ, ..., fd, ...);
> >>  	foo1 = *ptr1;
> >> 
> >> There's a read-only PTE at 'ptr1'.  Right?  The page being pointed to is
> >> eligible for LUF via the try_to_unmap() paths.  In other words, the page
> >> might be reclaimed at any time.  If it is reclaimed, the PTE will be
> >> cleared.
> >> 
> >> Then, the user might do:
> >> 
> >> 	munmap(ptr1, PAGE_SIZE);
> >> 
> >> Which will _eventually_ wind up in the zap_pte_range() loop.  But that
> >> loop will only see pte_none().  It doesn't do _anything_ to the 'struct
> >> mmu_gather'.
> >> 
> >> The munmap() then lands in tlb_flush_mmu_tlbonly() where it looks at the
> >> 'struct mmu_gather':
> >> 
> >>         if (!(tlb->freed_tables || tlb->cleared_ptes ||
> >> 	      tlb->cleared_pmds || tlb->cleared_puds ||
> >> 	      tlb->cleared_p4ds))
> >>                 return;
> >> 
> >> But since there were no cleared PTEs (or anything else) during the
> >> unmap, this just returns and doesn't flush the TLB.
> >> 
> >> We now have an address space with a stale TLB entry at 'ptr1' and not
> >> even a VMA there.  There's nothing to stop a new VMA from going in,
> >> installing a *new* PTE, but getting data from the stale TLB entry that
> >> still hasn't been flushed.
> >
> > Thank you for the explanation.  I got you.  I think I could handle the
> > case through a new flag in vma or something indicating LUF has deferred
> > necessary TLB flush for it during unmapping so that mmu_gather mechanism
> > can be aware of it.  Of course, the performance change should be checked
> > again.  Thoughts?
> 
> I suggest you to start with the simple case.  That is, only support page
> reclaiming and migration.  A TLB flushing can be enforced during unmap
> with something similar as flush_tlb_batched_pending().

Right.  I'm thinking of adding the related code to flush_tlb_batched_pending().

	Byungchul

> --
> Best Regards,
> Huang, Ying


^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [PATCH v10 00/12] LUF(Lazy Unmap Flush) reducing tlb numbers over 90%
  2024-05-30  1:11               ` Huang, Ying
  2024-05-30  1:33                 ` Byungchul Park
@ 2024-05-30  7:18                 ` Byungchul Park
  2024-05-30  8:24                   ` Huang, Ying
  1 sibling, 1 reply; 49+ messages in thread
From: Byungchul Park @ 2024-05-30  7:18 UTC (permalink / raw)
  To: Huang, Ying
  Cc: Dave Hansen, linux-kernel, linux-mm, kernel_team, akpm, vernhao,
	mgorman, hughd, willy, david, peterz, luto, tglx, mingo, bp,
	dave.hansen, rjgolo

On Thu, May 30, 2024 at 09:11:45AM +0800, Huang, Ying wrote:
> Byungchul Park <byungchul@sk.com> writes:
> 
> > On Wed, May 29, 2024 at 09:41:22AM -0700, Dave Hansen wrote:
> >> On 5/28/24 22:00, Byungchul Park wrote:
> >> > All the code updating ptes already performs TLB flush needed in a safe
> >> > way if it's inevitable e.g. munmap.  LUF which controls when to flush in
> >> > a higer level than arch code, just leaves stale ro tlb entries that are
> >> > currently supposed to be in use.  Could you give a scenario that you are
> >> > concering?
> >> 
> >> Let's go back this scenario:
> >> 
> >>  	fd = open("/some/file", O_RDONLY);
> >>  	ptr1 = mmap(-1, size, PROT_READ, ..., fd, ...);
> >>  	foo1 = *ptr1;
> >> 
> >> There's a read-only PTE at 'ptr1'.  Right?  The page being pointed to is
> >> eligible for LUF via the try_to_unmap() paths.  In other words, the page
> >> might be reclaimed at any time.  If it is reclaimed, the PTE will be
> >> cleared.
> >> 
> >> Then, the user might do:
> >> 
> >> 	munmap(ptr1, PAGE_SIZE);
> >> 
> >> Which will _eventually_ wind up in the zap_pte_range() loop.  But that
> >> loop will only see pte_none().  It doesn't do _anything_ to the 'struct
> >> mmu_gather'.
> >> 
> >> The munmap() then lands in tlb_flush_mmu_tlbonly() where it looks at the
> >> 'struct mmu_gather':
> >> 
> >>         if (!(tlb->freed_tables || tlb->cleared_ptes ||
> >> 	      tlb->cleared_pmds || tlb->cleared_puds ||
> >> 	      tlb->cleared_p4ds))
> >>                 return;
> >> 
> >> But since there were no cleared PTEs (or anything else) during the
> >> unmap, this just returns and doesn't flush the TLB.
> >> 
> >> We now have an address space with a stale TLB entry at 'ptr1' and not
> >> even a VMA there.  There's nothing to stop a new VMA from going in,
> >> installing a *new* PTE, but getting data from the stale TLB entry that
> >> still hasn't been flushed.
> >
> > Thank you for the explanation.  I got you.  I think I could handle the
> > case through a new flag in vma or something indicating LUF has deferred
> > necessary TLB flush for it during unmapping so that mmu_gather mechanism
> > can be aware of it.  Of course, the performance change should be checked
> > again.  Thoughts?
> 
> I suggest you to start with the simple case.  That is, only support page
> reclaiming and migration.  A TLB flushing can be enforced during unmap
> with something similar as flush_tlb_batched_pending().

While reading flush_tlb_batched_pending(mm), I found that it already
performs a TLB flush for the target mm if set_tlb_ubc_flush_pending(mm)
has been hit at least once since the last flush_tlb_batched_pending(mm).

Since LUF also relies on set_tlb_ubc_flush_pending(mm), the required TLB
flush will be performed in flush_tlb_batched_pending(mm) during munmap().
So munmap() already looks safe to me.
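
For reference, the call flow I'm relying on here, roughly (simplified;
both hooks already exist in the reclaim and zap paths):

	/* reclaim / migration unmap path (mm/rmap.c), simplified: */
	try_to_unmap_one()
	    -> set_tlb_ubc_flush_pending(mm)	/* marks mm->tlb_flush_batched */

	/* later, munmap() path (mm/memory.c), simplified: */
	unmap_page_range() -> ... -> zap_pte_range()
	    -> flush_tlb_batched_pending(mm)	/* flushes if a batch is pending */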

Is there something that I'm missing?

JFYI, regarding mmap(), I have reworked the fault handler to give up
luf when needed in a better way.

	Byungchul

> --
> Best Regards,
> Huang, Ying


^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [PATCH v10 00/12] LUF(Lazy Unmap Flush) reducing tlb numbers over 90%
  2024-05-30  7:18                 ` Byungchul Park
@ 2024-05-30  8:24                   ` Huang, Ying
  2024-05-30  8:41                     ` Byungchul Park
  2024-05-30  9:33                     ` Byungchul Park
  0 siblings, 2 replies; 49+ messages in thread
From: Huang, Ying @ 2024-05-30  8:24 UTC (permalink / raw)
  To: Byungchul Park
  Cc: Dave Hansen, linux-kernel, linux-mm, kernel_team, akpm, vernhao,
	mgorman, hughd, willy, david, peterz, luto, tglx, mingo, bp,
	dave.hansen, rjgolo

Byungchul Park <byungchul@sk.com> writes:

> On Thu, May 30, 2024 at 09:11:45AM +0800, Huang, Ying wrote:
>> Byungchul Park <byungchul@sk.com> writes:
>> 
>> > On Wed, May 29, 2024 at 09:41:22AM -0700, Dave Hansen wrote:
>> >> On 5/28/24 22:00, Byungchul Park wrote:
>> >> > All the code updating ptes already performs TLB flush needed in a safe
>> >> > way if it's inevitable e.g. munmap.  LUF which controls when to flush in
>> >> > a higer level than arch code, just leaves stale ro tlb entries that are
>> >> > currently supposed to be in use.  Could you give a scenario that you are
>> >> > concering?
>> >> 
>> >> Let's go back this scenario:
>> >> 
>> >>  	fd = open("/some/file", O_RDONLY);
>> >>  	ptr1 = mmap(-1, size, PROT_READ, ..., fd, ...);
>> >>  	foo1 = *ptr1;
>> >> 
>> >> There's a read-only PTE at 'ptr1'.  Right?  The page being pointed to is
>> >> eligible for LUF via the try_to_unmap() paths.  In other words, the page
>> >> might be reclaimed at any time.  If it is reclaimed, the PTE will be
>> >> cleared.
>> >> 
>> >> Then, the user might do:
>> >> 
>> >> 	munmap(ptr1, PAGE_SIZE);
>> >> 
>> >> Which will _eventually_ wind up in the zap_pte_range() loop.  But that
>> >> loop will only see pte_none().  It doesn't do _anything_ to the 'struct
>> >> mmu_gather'.
>> >> 
>> >> The munmap() then lands in tlb_flush_mmu_tlbonly() where it looks at the
>> >> 'struct mmu_gather':
>> >> 
>> >>         if (!(tlb->freed_tables || tlb->cleared_ptes ||
>> >> 	      tlb->cleared_pmds || tlb->cleared_puds ||
>> >> 	      tlb->cleared_p4ds))
>> >>                 return;
>> >> 
>> >> But since there were no cleared PTEs (or anything else) during the
>> >> unmap, this just returns and doesn't flush the TLB.
>> >> 
>> >> We now have an address space with a stale TLB entry at 'ptr1' and not
>> >> even a VMA there.  There's nothing to stop a new VMA from going in,
>> >> installing a *new* PTE, but getting data from the stale TLB entry that
>> >> still hasn't been flushed.
>> >
>> > Thank you for the explanation.  I got you.  I think I could handle the
>> > case through a new flag in vma or something indicating LUF has deferred
>> > necessary TLB flush for it during unmapping so that mmu_gather mechanism
>> > can be aware of it.  Of course, the performance change should be checked
>> > again.  Thoughts?
>> 
>> I suggest you to start with the simple case.  That is, only support page
>> reclaiming and migration.  A TLB flushing can be enforced during unmap
>> with something similar as flush_tlb_batched_pending().
>
> While reading flush_tlb_batched_pending(mm), I found it already performs
> TLB flush for the target mm, if set_tlb_ubc_flush_pending(mm) has been
> hit at least once since the last flush_tlb_batched_pending(mm).
>
> Since LUF also relies on set_tlb_ubc_flush_pending(mm), it's going to
> perform TLB flush required, in flush_tlb_batched_pending(mm) during
> munmap().  So it looks safe to me with regard to munmap() already.
>
> Is there something that I'm missing?
>
> JFYI, regarding to mmap(), I have reworked on fault handler to give up
> luf when needed in a better way.

If TLB flush is always enforced during munmap(), then your solution can
only avoid TLB flushing for page reclaiming and migration, not unmap.
Or do I miss something?

--
Best Regards,
Huang, Ying


^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [PATCH v10 00/12] LUF(Lazy Unmap Flush) reducing tlb numbers over 90%
  2024-05-30  8:24                   ` Huang, Ying
@ 2024-05-30  8:41                     ` Byungchul Park
  2024-05-30 13:50                       ` Dave Hansen
  2024-05-30  9:33                     ` Byungchul Park
  1 sibling, 1 reply; 49+ messages in thread
From: Byungchul Park @ 2024-05-30  8:41 UTC (permalink / raw)
  To: Huang, Ying
  Cc: Dave Hansen, linux-kernel, linux-mm, kernel_team, akpm, vernhao,
	mgorman, hughd, willy, david, peterz, luto, tglx, mingo, bp,
	dave.hansen, rjgolo

On Thu, May 30, 2024 at 04:24:12PM +0800, Huang, Ying wrote:
> Byungchul Park <byungchul@sk.com> writes:
> 
> > On Thu, May 30, 2024 at 09:11:45AM +0800, Huang, Ying wrote:
> >> Byungchul Park <byungchul@sk.com> writes:
> >> 
> >> > On Wed, May 29, 2024 at 09:41:22AM -0700, Dave Hansen wrote:
> >> >> On 5/28/24 22:00, Byungchul Park wrote:
> >> >> > All the code updating ptes already performs TLB flush needed in a safe
> >> >> > way if it's inevitable e.g. munmap.  LUF which controls when to flush in
> >> >> > a higer level than arch code, just leaves stale ro tlb entries that are
> >> >> > currently supposed to be in use.  Could you give a scenario that you are
> >> >> > concering?
> >> >> 
> >> >> Let's go back this scenario:
> >> >> 
> >> >>  	fd = open("/some/file", O_RDONLY);
> >> >>  	ptr1 = mmap(-1, size, PROT_READ, ..., fd, ...);
> >> >>  	foo1 = *ptr1;
> >> >> 
> >> >> There's a read-only PTE at 'ptr1'.  Right?  The page being pointed to is
> >> >> eligible for LUF via the try_to_unmap() paths.  In other words, the page
> >> >> might be reclaimed at any time.  If it is reclaimed, the PTE will be
> >> >> cleared.
> >> >> 
> >> >> Then, the user might do:
> >> >> 
> >> >> 	munmap(ptr1, PAGE_SIZE);
> >> >> 
> >> >> Which will _eventually_ wind up in the zap_pte_range() loop.  But that
> >> >> loop will only see pte_none().  It doesn't do _anything_ to the 'struct
> >> >> mmu_gather'.
> >> >> 
> >> >> The munmap() then lands in tlb_flush_mmu_tlbonly() where it looks at the
> >> >> 'struct mmu_gather':
> >> >> 
> >> >>         if (!(tlb->freed_tables || tlb->cleared_ptes ||
> >> >> 	      tlb->cleared_pmds || tlb->cleared_puds ||
> >> >> 	      tlb->cleared_p4ds))
> >> >>                 return;
> >> >> 
> >> >> But since there were no cleared PTEs (or anything else) during the
> >> >> unmap, this just returns and doesn't flush the TLB.
> >> >> 
> >> >> We now have an address space with a stale TLB entry at 'ptr1' and not
> >> >> even a VMA there.  There's nothing to stop a new VMA from going in,
> >> >> installing a *new* PTE, but getting data from the stale TLB entry that
> >> >> still hasn't been flushed.
> >> >
> >> > Thank you for the explanation.  I got you.  I think I could handle the
> >> > case through a new flag in vma or something indicating LUF has deferred
> >> > necessary TLB flush for it during unmapping so that mmu_gather mechanism
> >> > can be aware of it.  Of course, the performance change should be checked
> >> > again.  Thoughts?
> >> 
> >> I suggest you to start with the simple case.  That is, only support page
> >> reclaiming and migration.  A TLB flushing can be enforced during unmap
> >> with something similar as flush_tlb_batched_pending().
> >
> > While reading flush_tlb_batched_pending(mm), I found it already performs
> > TLB flush for the target mm, if set_tlb_ubc_flush_pending(mm) has been
> > hit at least once since the last flush_tlb_batched_pending(mm).
> >
> > Since LUF also relies on set_tlb_ubc_flush_pending(mm), it's going to
> > perform TLB flush required, in flush_tlb_batched_pending(mm) during
> > munmap().  So it looks safe to me with regard to munmap() already.
> >
> > Is there something that I'm missing?
> >
> > JFYI, regarding to mmap(), I have reworked on fault handler to give up
> > luf when needed in a better way.
> 
> If TLB flush is always enforced during munmap(), then your solution can
> only avoid TLB flushing for page reclaiming and migration, not unmap.
								 ^
								 munmap()?

Do you mean munmap()?  IIUC, yes.  LUF only works for page reclaiming
and migration, but not for munmap().  When munmap()ing, LUF rather needs
to give up and perform the pending TLB flush.

LUF should not optimize tlb flushes for mappings that users explicitly
change e.g. through mmap() and munmap().

	Byungchul

> Or do I miss something?
> 
> --
> Best Regards,
> Huang, Ying


^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [PATCH v10 00/12] LUF(Lazy Unmap Flush) reducing tlb numbers over 90%
  2024-05-30  8:24                   ` Huang, Ying
  2024-05-30  8:41                     ` Byungchul Park
@ 2024-05-30  9:33                     ` Byungchul Park
  2024-05-31  1:45                       ` Huang, Ying
  1 sibling, 1 reply; 49+ messages in thread
From: Byungchul Park @ 2024-05-30  9:33 UTC (permalink / raw)
  To: Huang, Ying
  Cc: Dave Hansen, linux-kernel, linux-mm, kernel_team, akpm, vernhao,
	mgorman, hughd, willy, david, peterz, luto, tglx, mingo, bp,
	dave.hansen, rjgolo

On Thu, May 30, 2024 at 04:24:12PM +0800, Huang, Ying wrote:
> Byungchul Park <byungchul@sk.com> writes:
> 
> > On Thu, May 30, 2024 at 09:11:45AM +0800, Huang, Ying wrote:
> >> Byungchul Park <byungchul@sk.com> writes:
> >> 
> >> > On Wed, May 29, 2024 at 09:41:22AM -0700, Dave Hansen wrote:
> >> >> On 5/28/24 22:00, Byungchul Park wrote:
> >> >> > All the code updating ptes already performs TLB flush needed in a safe
> >> >> > way if it's inevitable e.g. munmap.  LUF which controls when to flush in
> >> >> > a higer level than arch code, just leaves stale ro tlb entries that are
> >> >> > currently supposed to be in use.  Could you give a scenario that you are
> >> >> > concering?
> >> >> 
> >> >> Let's go back this scenario:
> >> >> 
> >> >>  	fd = open("/some/file", O_RDONLY);
> >> >>  	ptr1 = mmap(-1, size, PROT_READ, ..., fd, ...);
> >> >>  	foo1 = *ptr1;
> >> >> 
> >> >> There's a read-only PTE at 'ptr1'.  Right?  The page being pointed to is
> >> >> eligible for LUF via the try_to_unmap() paths.  In other words, the page
> >> >> might be reclaimed at any time.  If it is reclaimed, the PTE will be
> >> >> cleared.
> >> >> 
> >> >> Then, the user might do:
> >> >> 
> >> >> 	munmap(ptr1, PAGE_SIZE);
> >> >> 
> >> >> Which will _eventually_ wind up in the zap_pte_range() loop.  But that
> >> >> loop will only see pte_none().  It doesn't do _anything_ to the 'struct
> >> >> mmu_gather'.
> >> >> 
> >> >> The munmap() then lands in tlb_flush_mmu_tlbonly() where it looks at the
> >> >> 'struct mmu_gather':
> >> >> 
> >> >>         if (!(tlb->freed_tables || tlb->cleared_ptes ||
> >> >> 	      tlb->cleared_pmds || tlb->cleared_puds ||
> >> >> 	      tlb->cleared_p4ds))
> >> >>                 return;
> >> >> 
> >> >> But since there were no cleared PTEs (or anything else) during the
> >> >> unmap, this just returns and doesn't flush the TLB.
> >> >> 
> >> >> We now have an address space with a stale TLB entry at 'ptr1' and not
> >> >> even a VMA there.  There's nothing to stop a new VMA from going in,
> >> >> installing a *new* PTE, but getting data from the stale TLB entry that
> >> >> still hasn't been flushed.
> >> >
> >> > Thank you for the explanation.  I got you.  I think I could handle the
> >> > case through a new flag in vma or something indicating LUF has deferred
> >> > necessary TLB flush for it during unmapping so that mmu_gather mechanism
> >> > can be aware of it.  Of course, the performance change should be checked
> >> > again.  Thoughts?
> >> 
> >> I suggest you to start with the simple case.  That is, only support page
> >> reclaiming and migration.  A TLB flushing can be enforced during unmap
> >> with something similar as flush_tlb_batched_pending().
> >
> > While reading flush_tlb_batched_pending(mm), I found it already performs
> > TLB flush for the target mm, if set_tlb_ubc_flush_pending(mm) has been
> > hit at least once since the last flush_tlb_batched_pending(mm).
> >
> > Since LUF also relies on set_tlb_ubc_flush_pending(mm), it's going to
> > perform TLB flush required, in flush_tlb_batched_pending(mm) during
> > munmap().  So it looks safe to me with regard to munmap() already.
> >
> > Is there something that I'm missing?
> >
> > JFYI, regarding to mmap(), I have reworked on fault handler to give up
> > luf when needed in a better way.
> 
> If TLB flush is always enforced during munmap(), then your solution can
> only avoid TLB flushing for page reclaiming and migration, not unmap.

I'm not sure I understand what you meant.  Could you explain it in
more detail?

LUF works only for the *unmapping* that happens during page reclaiming
and migration.  Unmappings other than those for page reclaiming and
migration are not what LUF works for.  That's why I thought
flush_tlb_batched_pending() could handle the pending TLB flushes in
that case.

It'd be appreciated if you could elaborate on what you meant.

	Byungchul

> Or do I miss something?
> 
> --
> Best Regards,
> Huang, Ying


^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [PATCH v10 00/12] LUF(Lazy Unmap Flush) reducing tlb numbers over 90%
  2024-05-30  8:41                     ` Byungchul Park
@ 2024-05-30 13:50                       ` Dave Hansen
  2024-05-31  2:06                         ` Byungchul Park
  0 siblings, 1 reply; 49+ messages in thread
From: Dave Hansen @ 2024-05-30 13:50 UTC (permalink / raw)
  To: Byungchul Park, Huang, Ying
  Cc: linux-kernel, linux-mm, kernel_team, akpm, vernhao, mgorman,
	hughd, willy, david, peterz, luto, tglx, mingo, bp, dave.hansen,
	rjgolo

On 5/30/24 01:41, Byungchul Park wrote:
> LUF should not optimize tlb flushes for mappings that users explicitly
> change e.g. through mmap() and munmap().

We are thoroughly going around in circles at this point.

I'm not quite sure what to do.  Ying and I see a problem that we've
tried to explain a couple of times.  We've tried to show the connection
between a LUF-elided TLB flush and how that could affect a later
munmap() or mmap(MAP_FIXED).

But these responses seem to keep going back to the fact that LUF doesn't
directly affect munmap(), which is true, but quite irrelevant to the
problem being described.

So we're at an impasse.

Byungchul, perhaps you should spin another series and maybe Ying and I
have to write up a test case to show the bug that we see.  Or perhaps
someone else can jump into the thread and bridge the communication gap.



^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [PATCH v10 00/12] LUF(Lazy Unmap Flush) reducing tlb numbers over 90%
  2024-05-30  9:33                     ` Byungchul Park
@ 2024-05-31  1:45                       ` Huang, Ying
  2024-05-31  2:20                         ` Byungchul Park
  0 siblings, 1 reply; 49+ messages in thread
From: Huang, Ying @ 2024-05-31  1:45 UTC (permalink / raw)
  To: Byungchul Park
  Cc: Dave Hansen, linux-kernel, linux-mm, kernel_team, akpm, vernhao,
	mgorman, hughd, willy, david, peterz, luto, tglx, mingo, bp,
	dave.hansen, rjgolo

Byungchul Park <byungchul@sk.com> writes:

> On Thu, May 30, 2024 at 04:24:12PM +0800, Huang, Ying wrote:
>> Byungchul Park <byungchul@sk.com> writes:
>> 
>> > On Thu, May 30, 2024 at 09:11:45AM +0800, Huang, Ying wrote:
>> >> Byungchul Park <byungchul@sk.com> writes:
>> >> 
>> >> > On Wed, May 29, 2024 at 09:41:22AM -0700, Dave Hansen wrote:
>> >> >> On 5/28/24 22:00, Byungchul Park wrote:
>> >> >> > All the code updating ptes already performs TLB flush needed in a safe
>> >> >> > way if it's inevitable e.g. munmap.  LUF which controls when to flush in
>> >> >> > a higer level than arch code, just leaves stale ro tlb entries that are
>> >> >> > currently supposed to be in use.  Could you give a scenario that you are
>> >> >> > concering?
>> >> >> 
>> >> >> Let's go back this scenario:
>> >> >> 
>> >> >>  	fd = open("/some/file", O_RDONLY);
>> >> >>  	ptr1 = mmap(-1, size, PROT_READ, ..., fd, ...);
>> >> >>  	foo1 = *ptr1;
>> >> >> 
>> >> >> There's a read-only PTE at 'ptr1'.  Right?  The page being pointed to is
>> >> >> eligible for LUF via the try_to_unmap() paths.  In other words, the page
>> >> >> might be reclaimed at any time.  If it is reclaimed, the PTE will be
>> >> >> cleared.
>> >> >> 
>> >> >> Then, the user might do:
>> >> >> 
>> >> >> 	munmap(ptr1, PAGE_SIZE);
>> >> >> 
>> >> >> Which will _eventually_ wind up in the zap_pte_range() loop.  But that
>> >> >> loop will only see pte_none().  It doesn't do _anything_ to the 'struct
>> >> >> mmu_gather'.
>> >> >> 
>> >> >> The munmap() then lands in tlb_flush_mmu_tlbonly() where it looks at the
>> >> >> 'struct mmu_gather':
>> >> >> 
>> >> >>         if (!(tlb->freed_tables || tlb->cleared_ptes ||
>> >> >> 	      tlb->cleared_pmds || tlb->cleared_puds ||
>> >> >> 	      tlb->cleared_p4ds))
>> >> >>                 return;
>> >> >> 
>> >> >> But since there were no cleared PTEs (or anything else) during the
>> >> >> unmap, this just returns and doesn't flush the TLB.
>> >> >> 
>> >> >> We now have an address space with a stale TLB entry at 'ptr1' and not
>> >> >> even a VMA there.  There's nothing to stop a new VMA from going in,
>> >> >> installing a *new* PTE, but getting data from the stale TLB entry that
>> >> >> still hasn't been flushed.
>> >> >
>> >> > Thank you for the explanation.  I got you.  I think I could handle the
>> >> > case through a new flag in vma or something indicating LUF has deferred
>> >> > necessary TLB flush for it during unmapping so that mmu_gather mechanism
>> >> > can be aware of it.  Of course, the performance change should be checked
>> >> > again.  Thoughts?
>> >> 
>> >> I suggest you to start with the simple case.  That is, only support page
>> >> reclaiming and migration.  A TLB flushing can be enforced during unmap
>> >> with something similar as flush_tlb_batched_pending().
>> >
>> > While reading flush_tlb_batched_pending(mm), I found it already performs
>> > TLB flush for the target mm, if set_tlb_ubc_flush_pending(mm) has been
>> > hit at least once since the last flush_tlb_batched_pending(mm).
>> >
>> > Since LUF also relies on set_tlb_ubc_flush_pending(mm), it's going to
>> > perform TLB flush required, in flush_tlb_batched_pending(mm) during
>> > munmap().  So it looks safe to me with regard to munmap() already.
>> >
>> > Is there something that I'm missing?
>> >
>> > JFYI, regarding to mmap(), I have reworked on fault handler to give up
>> > luf when needed in a better way.
>> 
>> If TLB flush is always enforced during munmap(), then your solution can
>> only avoid TLB flushing for page reclaiming and migration, not unmap.
>
> I'm not sure if I understand what you meant.  Could you explain it in
> more detail?
>
> LUF works for only *unmapping* that happens during page reclaiming and
> migration.  Other unmappings than page reclaiming and migration are not
> what LUF works for.  That's why I thought flush_tlb_batched_pending()
> could handle the pending tlb flushes in the case.
>
> It'd be appreciated if you explain what you meant more.
>

In the following email, you have claimed that LUF can avoid TLB flushing
for munmap()/mmap().

https://lore.kernel.org/linux-mm/20240527015732.GA61604@system.software.com/

Now, you said it can only avoid TLB flushing for page reclaiming and
migration.

So, to avoid confusion, I suggest you send out a new series and make it
explicit that it can only optimize page reclaiming and migration, not
munmap().  It would also be good to add some words about how it
interacts with other TLB flushing mechanisms.

--
Best Regards,
Huang, Ying


^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [PATCH v10 00/12] LUF(Lazy Unmap Flush) reducing tlb numbers over 90%
  2024-05-30 13:50                       ` Dave Hansen
@ 2024-05-31  2:06                         ` Byungchul Park
  0 siblings, 0 replies; 49+ messages in thread
From: Byungchul Park @ 2024-05-31  2:06 UTC (permalink / raw)
  To: Dave Hansen
  Cc: Huang, Ying, linux-kernel, linux-mm, kernel_team, akpm, vernhao,
	mgorman, hughd, willy, david, peterz, luto, tglx, mingo, bp,
	dave.hansen, rjgolo

On Thu, May 30, 2024 at 06:50:48AM -0700, Dave Hansen wrote:
> On 5/30/24 01:41, Byungchul Park wrote:
> > LUF should not optimize tlb flushes for mappings that users explicitly
> > change e.g. through mmap() and munmap().
> 
> We are thoroughly going around in circles at this point.
> 
> I'm not quite sure what to do.  Ying and I see a problem that we've
> tried to explain a couple of times.  We've tried to show the connection
> between a LUF-elided TLB flush and how that could affect a later
> munmap() or mmap(MAP_FIXED).
> 
> But these responses seem to keep going back to the fact that LUF doesn't

I just wanted to understand exactly what Ying meant.  My answer might
have come out wrong if I misunderstood him.

> directly affect munmap(), which is true, but quite irrelevant to the
> problem being described.
> 
> So we're at an impasse.
> 
> Byungchul, perhaps you should spin another series and maybe Ying and I

I don't think the current implementation is perfect.  I just wanted to
know what I'm missing, but.. yes.  It would be much better to
communicate based on a real bug, if one exists.

I will respin the next version shortly.

	Byungchul

> have to write up a test case to show the bug that we see.  Or perhaps
> someone else can jump into the thread and bridge the communication gap.


^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [PATCH v10 00/12] LUF(Lazy Unmap Flush) reducing tlb numbers over 90%
  2024-05-31  1:45                       ` Huang, Ying
@ 2024-05-31  2:20                         ` Byungchul Park
  0 siblings, 0 replies; 49+ messages in thread
From: Byungchul Park @ 2024-05-31  2:20 UTC (permalink / raw)
  To: Huang, Ying
  Cc: Dave Hansen, linux-kernel, linux-mm, kernel_team, akpm, vernhao,
	mgorman, hughd, willy, david, peterz, luto, tglx, mingo, bp,
	dave.hansen, rjgolo

On Fri, May 31, 2024 at 09:45:33AM +0800, Huang, Ying wrote:
> Byungchul Park <byungchul@sk.com> writes:
> 
> > On Thu, May 30, 2024 at 04:24:12PM +0800, Huang, Ying wrote:
> >> Byungchul Park <byungchul@sk.com> writes:
> >> 
> >> > On Thu, May 30, 2024 at 09:11:45AM +0800, Huang, Ying wrote:
> >> >> Byungchul Park <byungchul@sk.com> writes:
> >> >> 
> >> >> > On Wed, May 29, 2024 at 09:41:22AM -0700, Dave Hansen wrote:
> >> >> >> On 5/28/24 22:00, Byungchul Park wrote:
> >> >> >> > All the code updating ptes already performs TLB flush needed in a safe
> >> >> >> > way if it's inevitable e.g. munmap.  LUF which controls when to flush in
> >> >> >> > a higer level than arch code, just leaves stale ro tlb entries that are
> >> >> >> > currently supposed to be in use.  Could you give a scenario that you are
> >> >> >> > concering?
> >> >> >> 
> >> >> >> Let's go back this scenario:
> >> >> >> 
> >> >> >>  	fd = open("/some/file", O_RDONLY);
> >> >> >>  	ptr1 = mmap(-1, size, PROT_READ, ..., fd, ...);
> >> >> >>  	foo1 = *ptr1;
> >> >> >> 
> >> >> >> There's a read-only PTE at 'ptr1'.  Right?  The page being pointed to is
> >> >> >> eligible for LUF via the try_to_unmap() paths.  In other words, the page
> >> >> >> might be reclaimed at any time.  If it is reclaimed, the PTE will be
> >> >> >> cleared.
> >> >> >> 
> >> >> >> Then, the user might do:
> >> >> >> 
> >> >> >> 	munmap(ptr1, PAGE_SIZE);
> >> >> >> 
> >> >> >> Which will _eventually_ wind up in the zap_pte_range() loop.  But that
> >> >> >> loop will only see pte_none().  It doesn't do _anything_ to the 'struct
> >> >> >> mmu_gather'.
> >> >> >> 
> >> >> >> The munmap() then lands in tlb_flush_mmu_tlbonly() where it looks at the
> >> >> >> 'struct mmu_gather':
> >> >> >> 
> >> >> >>         if (!(tlb->freed_tables || tlb->cleared_ptes ||
> >> >> >> 	      tlb->cleared_pmds || tlb->cleared_puds ||
> >> >> >> 	      tlb->cleared_p4ds))
> >> >> >>                 return;
> >> >> >> 
> >> >> >> But since there were no cleared PTEs (or anything else) during the
> >> >> >> unmap, this just returns and doesn't flush the TLB.
> >> >> >> 
> >> >> >> We now have an address space with a stale TLB entry at 'ptr1' and not
> >> >> >> even a VMA there.  There's nothing to stop a new VMA from going in,
> >> >> >> installing a *new* PTE, but getting data from the stale TLB entry that
> >> >> >> still hasn't been flushed.
> >> >> >
> >> >> > Thank you for the explanation.  I got you.  I think I could handle the
> >> >> > case through a new flag in vma or something indicating LUF has deferred
> >> >> > necessary TLB flush for it during unmapping so that mmu_gather mechanism
> >> >> > can be aware of it.  Of course, the performance change should be checked
> >> >> > again.  Thoughts?
> >> >> 
> >> >> I suggest you to start with the simple case.  That is, only support page
> >> >> reclaiming and migration.  A TLB flushing can be enforced during unmap
> >> >> with something similar as flush_tlb_batched_pending().
> >> >
> >> > While reading flush_tlb_batched_pending(mm), I found it already performs
> >> > TLB flush for the target mm, if set_tlb_ubc_flush_pending(mm) has been
> >> > hit at least once since the last flush_tlb_batched_pending(mm).
> >> >
> >> > Since LUF also relies on set_tlb_ubc_flush_pending(mm), it's going to
> >> > perform TLB flush required, in flush_tlb_batched_pending(mm) during
> >> > munmap().  So it looks safe to me with regard to munmap() already.
> >> >
> >> > Is there something that I'm missing?
> >> >
> >> > JFYI, regarding to mmap(), I have reworked on fault handler to give up
> >> > luf when needed in a better way.
> >> 
> >> If TLB flush is always enforced during munmap(), then your solution can
> >> only avoid TLB flushing for page reclaiming and migration, not unmap.
> >
> > I'm not sure if I understand what you meant.  Could you explain it in
> > more detail?
> >
> > LUF works for only *unmapping* that happens during page reclaiming and
> > migration.  Other unmappings than page reclaiming and migration are not
> > what LUF works for.  That's why I thought flush_tlb_batched_pending()
> > could handle the pending tlb flushes in the case.
> >
> > It'd be appreciated if you explain what you meant more.
> >
> 
> In the following email, you have claimed that LUF can avoid TLB flushing
> for munmap()/mmap().

My bad.  Sorry for that confusing expression.

"give up LUF at mmap()" doesn't mean giving up applying LUF to mmap().

"give up LUF at mmap()" means giving up the pending that has been
induced by LUF, in other words, giving up the benefit by LUF because we
are going through mmap() / munmap().

I will be more careful in expressing these things.

> https://lore.kernel.org/linux-mm/20240527015732.GA61604@system.software.com/
> 
> Now, you said it can only avoid TLB flushing for page reclaiming and
> migration.

This is true.

	Byungchul

> So, to avoid confusion, I suggest you to send out a new series and make
> it explicit that it can only optimize page reclaiming and migration, but
> not munmap().  And it would be good too to add some words about how it
> interact with other TLB flushing mechanisms.
> 
> --
> Best Regards,
> Huang, Ying


^ permalink raw reply	[flat|nested] 49+ messages in thread

end of thread, other threads:[~2024-05-31  2:21 UTC | newest]

Thread overview: 49+ messages
2024-05-10  6:51 [PATCH v10 00/12] LUF(Lazy Unmap Flush) reducing tlb numbers over 90% Byungchul Park
2024-05-10  6:51 ` [PATCH v10 01/12] x86/tlb: add APIs manipulating tlb batch's arch data Byungchul Park
2024-05-10  6:51 ` [PATCH v10 02/12] arm64: tlbflush: " Byungchul Park
2024-05-10  6:51 ` [PATCH v10 03/12] riscv, tlb: " Byungchul Park
2024-05-10  6:51 ` [PATCH v10 04/12] x86/tlb, riscv/tlb, mm/rmap: separate arch_tlbbatch_clear() out of arch_tlbbatch_flush() Byungchul Park
2024-05-10  6:51 ` [PATCH v10 05/12] mm: buddy: make room for a new variable, ugen, in struct page Byungchul Park
2024-05-10  6:52 ` [PATCH v10 06/12] mm: add folio_put_ugen() to deliver unmap generation number to pcp or buddy Byungchul Park
2024-05-10  6:52 ` [PATCH v10 07/12] mm: add a parameter, unmap generation number, to free_unref_folios() Byungchul Park
2024-05-10  6:52 ` [PATCH v10 08/12] mm/rmap: recognize read-only tlb entries during batched tlb flush Byungchul Park
2024-05-10  6:52 ` [PATCH v10 09/12] mm: implement LUF(Lazy Unmap Flush) defering tlb flush when folios get unmapped Byungchul Park
2024-05-10  6:52 ` [PATCH v10 10/12] mm: separate move/undo parts from migrate_pages_batch() Byungchul Park
2024-05-10  6:52 ` [PATCH v10 11/12] mm, migrate: apply luf mechanism to unmapping during migration Byungchul Park
2024-05-10  6:52 ` [PATCH v10 12/12] mm, vmscan: apply luf mechanism to unmapping during folio reclaim Byungchul Park
2024-05-11  6:54 ` [PATCH v10 00/12] LUF(Lazy Unmap Flush) reducing tlb numbers over 90% Huang, Ying
2024-05-13  1:41   ` Byungchul Park
2024-05-11  7:15 ` Huang, Ying
2024-05-13  1:44   ` Byungchul Park
2024-05-22  2:16     ` Byungchul Park
2024-05-22  7:38       ` Huang, Ying
2024-05-22 10:27         ` Byungchul Park
2024-05-22 14:15           ` Byungchul Park
2024-05-24 17:16 ` Dave Hansen
2024-05-27  1:57   ` Byungchul Park
2024-05-27  2:43     ` Dave Hansen
2024-05-27  3:46       ` Byungchul Park
2024-05-27  4:19         ` Byungchul Park
2024-05-27  4:25           ` Byungchul Park
2024-05-27 22:58       ` Byungchul Park
2024-05-29  2:16         ` Huang, Ying
2024-05-30  1:02           ` Byungchul Park
2024-05-27  3:10     ` Huang, Ying
2024-05-27  3:56       ` Byungchul Park
2024-05-28 15:14       ` Dave Hansen
2024-05-29  5:00         ` Byungchul Park
2024-05-29 16:41           ` Dave Hansen
2024-05-30  0:50             ` Byungchul Park
2024-05-30  0:59               ` Byungchul Park
2024-05-30  1:11               ` Huang, Ying
2024-05-30  1:33                 ` Byungchul Park
2024-05-30  7:18                 ` Byungchul Park
2024-05-30  8:24                   ` Huang, Ying
2024-05-30  8:41                     ` Byungchul Park
2024-05-30 13:50                       ` Dave Hansen
2024-05-31  2:06                         ` Byungchul Park
2024-05-30  9:33                     ` Byungchul Park
2024-05-31  1:45                       ` Huang, Ying
2024-05-31  2:20                         ` Byungchul Park
2024-05-28  8:41 ` David Hildenbrand
2024-05-29  4:39   ` Byungchul Park
