From: Byungchul Park <byungchul@sk.com>
To: "Huang, Ying" <ying.huang@intel.com>
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	kernel_team@skhynix.com, akpm@linux-foundation.org,
	namit@vmware.com, xhao@linux.alibaba.com,
	mgorman@techsingularity.net, hughd@google.com,
	willy@infradead.org, david@redhat.com, peterz@infradead.org,
	luto@kernel.org, tglx@linutronix.de, mingo@redhat.com,
	bp@alien8.de, dave.hansen@linux.intel.com
Subject: Re: [v4 0/3] Reduce TLB flushes under some specific conditions
Date: Wed, 15 Nov 2023 11:57:55 +0900
Message-ID: <20231115025755.GA29979@system.software.com>
In-Reply-To: <87il6bijtu.fsf@yhuang6-desk2.ccr.corp.intel.com>

On Thu, Nov 09, 2023 at 01:20:29PM +0800, Huang, Ying wrote:
> Byungchul Park <byungchul@sk.com> writes:
> 
> > Hi everyone,
> >
> > While working with CXL memory, I have been facing migration
> > overhead, especially TLB shootdowns on promotion or demotion between
> > different tiers. Most TLB shootdowns on migration through hinting
> > faults can already be avoided thanks to Huang Ying's work, commit
> > 4d4b6d66db ("mm,unmap: avoid flushing TLB in batch if PTE is
> > inaccessible").
> >
> > However, that only covers migrations triggered by hinting faults. It
> > would be much better to have a general mechanism that reduces the
> > number of TLB flushes and TLB misses and that can be applied to any
> > type of migration, though for now I have tried it only for tiering
> > migration.
> >
> > I'm suggesting a mechanism that reduces TLB flushes by keeping both
> > the source and destination folios participating in a migration alive
> > until all the required TLB flushes have been done, but only if none
> > of those folios are mapped by any PTE with write permission. The work
> > is based on v6.6-rc5.
> >
> > The results surprised me: the number of full TLB flushes dropped by
> > about 80% and iTLB misses by about 50%, and the elapsed time showed a
> > stable improvement of at least 1% with the workload I tested,
> > XSBench. I believe it would help even more with other or real-world
> > workloads. I'd appreciate it if you could let me know if I'm missing
> > something.
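
(Restating the safety argument for clarity: once a folio is mapped
read-only everywhere, no CPU can dirty the source page through a stale
TLB entry; a CPU still holding a stale entry can at worst read from the
source page, which is kept alive until the deferred flush is done. A
stale writable entry could lose a store, which is why write-mapped
folios are excluded.)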
> 
> Could you help test the effect of commit 7e12beb8ca2a ("migrate_pages:
> batch flushing TLB") on your test case?  To test it, you can revert it
> and compare the performance before and after the revert.
> 
> Also, how do you trigger migration when testing XSBench?  Do you use a
> tiered memory system and migrate pages between DRAM and CXL memory
> back and forth?  If so, how many pages will you migrate for each migration

It was not actual CXL memory but a CPU-less remote NUMA node's DRAM
recognized by the kernel as a slow tier (node_is_toptier() == false).
That has been fine for my purposes because I've been focusing on the
number of TLB flushes and migrations while working on the NUMA tiering
mechanism; I expect the time-wise performance to follow, to a greater
or lesser degree depending on the system configuration.

So it migrates pages back and forth between the two DRAMs - promotion
through hinting faults and demotion through page reclaim. I ran the
test you asked for on another, slower system to make the TLB miss
overhead stand out.
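
Not my exact command line, but a minimal sketch of such a setup, using
standard QEMU options and the upstream tiering knobs that exist as of
v6.6 (the guest sees node 1 as a CPU-less slow tier):

   # QEMU: one node with CPUs, one CPU-less memory-only node.
   $ qemu-system-x86_64 -enable-kvm -cpu host -smp 16 -m 9G \
	   -object memory-backend-ram,id=m0,size=1G \
	   -object memory-backend-ram,id=m1,size=8G \
	   -numa node,nodeid=0,cpus=0-15,memdev=m0 \
	   -numa node,nodeid=1,memdev=m1 \
	   ...

   # Guest: promote via NUMA-balancing memory tiering, demote via reclaim.
   $ echo 2 > /proc/sys/kernel/numa_balancing   # NUMA_BALANCING_MEMORY_TIERING
   $ echo 1 > /sys/kernel/mm/numa/demotion_enabled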

Unfortunately, vanilla v6.6-rc5 gave an even worse result than
v6.6-rc5 with 7e12beb8ca2a reverted, while 'v6.6-rc5 + migrc'
clearly shows a far better result than either.

Thoughts?

	Byungchul

---

   Architecture - x86_64
   QEMU - KVM enabled, host CPU
   NUMA - 2 nodes (node 0: 16 CPUs, 1 GB; node 1: no CPUs, 8 GB)
   Kernel - v6.6-rc5, NUMA_BALANCING_MEMORY_TIERING, demotion enabled
   Benchmark - XSBench -p 50000000 (the -p option makes the runtime longer)

   CASE1 - mainline v6.6-rc5 + 7e12beb8ca2a reverted
   -------------------------------------------------
   $ perf stat -a \
	   -e itlb.itlb_flush \
	   -e tlb_flush.dtlb_thread \
	   -e tlb_flush.stlb_any \
	   -e dTLB-load-misses \
	   -e dTLB-store-misses \
	   -e iTLB-load-misses \
	   ./XSBench -p 50000000

   Performance counter stats for 'system wide':
   
      190247118     itlb.itlb_flush      
      716182438     tlb_flush.dtlb_thread
      327051673     tlb_flush.stlb_any   
      119542331968  dTLB-load-misses     
      724072795     dTLB-store-misses    
      3054343419    iTLB-load-misses     
   
   1172.580552728 seconds time elapsed      
   
   $ cat /proc/vmstat
   
   ...
   numa_pages_migrated 5968431                  
   pgmigrate_success 12484773                   
   nr_tlb_remote_flush 6614459                  
   nr_tlb_remote_flush_received 96022799        
   nr_tlb_local_flush_all 50869                 
   nr_tlb_local_flush_one 785597          
   ...

   CASE2 - mainline v6.6-rc5 (vanilla)
   -------------------------------------------------
   $ perf stat -a \
	   -e itlb.itlb_flush \
	   -e tlb_flush.dtlb_thread \
	   -e tlb_flush.stlb_any \
	   -e dTLB-load-misses \
	   -e dTLB-store-misses \
	   -e iTLB-load-misses \
	   ./XSBench -p 50000000
   
   Performance counter stats for 'system wide':
   
      55139061      itlb.itlb_flush      
      286725687     tlb_flush.dtlb_thread
      199687660     tlb_flush.stlb_any   
      119497951269  dTLB-load-misses     
      358434759     dTLB-store-misses    
      1867135967    iTLB-load-misses     
   
   1181.311084373 seconds time elapsed      
   
   $ cat /proc/vmstat
   
   ...
   numa_pages_migrated 8190027                  
   pgmigrate_success 17098994                   
   nr_tlb_remote_flush 1955114                  
   nr_tlb_remote_flush_received 29028093        
   nr_tlb_local_flush_all 140921                
   nr_tlb_local_flush_one 740767                
   ...

   CASE3 - mainline v6.6-rc5 + migrc
   -------------------------------------------------
   $ perf stat -a \
	   -e itlb.itlb_flush \
	   -e tlb_flush.dtlb_thread \
	   -e tlb_flush.stlb_any \
	   -e dTLB-load-misses \
	   -e dTLB-store-misses \
	   -e iTLB-load-misses \
	   ./XSBench -p 50000000

   Performance counter stats for 'system wide':

      6337091       itlb.itlb_flush      
      157229778     tlb_flush.dtlb_thread
      148240163     tlb_flush.stlb_any   
      117701381319  dTLB-load-misses     
      231212468     dTLB-store-misses    
      973083466     iTLB-load-misses     

   1105.756705157 seconds time elapsed      
   
   $ cat /proc/vmstat
   
   ...
   numa_pages_migrated 8791934                  
   pgmigrate_success 18276174                   
   nr_tlb_remote_flush 311146                   
   nr_tlb_remote_flush_received 4387708         
   nr_tlb_local_flush_all 143883                
   nr_tlb_local_flush_one 740953    
   ...
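
   In short, relative to vanilla (CASE2), migrc (CASE3) reduces
   nr_tlb_remote_flush from 1955114 to 311146 (~84% fewer),
   itlb.itlb_flush from 55139061 to 6337091 (~89% fewer), and
   iTLB-load-misses from 1867135967 to 973083466 (~48% fewer), while
   the elapsed time drops from 1181.3s to 1105.8s (~6.4% faster), even
   though slightly more pages were migrated.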

