Subject: [RFC 0/2] Enable ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH on POWER
From: Anshuman Khandual
Date: 2017-11-01 10:17 UTC
To: linuxppc-dev
Cc: mpe, aneesh.kumar, npiggin

From: Anshuman Khandual <Khandual@linux.vnet.ibm.com>

Batched TLB flushing in the reclaim path has been around for a couple of
years now and is enabled on the x86 platform. The idea is to batch up
multiple page TLB invalidation requests and then flush all the CPUs which
might be caching a TLB entry for any of the unmapped pages, instead of
sending multiple IPIs and flushing individual pages each time reclaim
unmaps one page. This has the potential to improve performance for certain
types of workloads under memory pressure, provided some conditions on the
relative costs of individual page TLB invalidation, CPU wide TLB
invalidation, system wide TLB invalidation, TLB reload, IPIs etc are met.
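
To illustrate the batching idea, here is a minimal sketch modelled on the
existing x86 implementation (arch/x86/include/asm/tlbflush.h); the powerpc
variant in patch 2 may differ in detail, and x86 additionally bumps a TLB
generation counter here, elided for brevity:

struct arch_tlbflush_unmap_batch {
	/* Union of mm_cpumask() of every mm unmapped so far */
	struct cpumask cpumask;
};

static inline void arch_tlbbatch_add_mm(struct arch_tlbflush_unmap_batch *batch,
					struct mm_struct *mm)
{
	/* Accumulate every CPU that may be caching stale entries */
	cpumask_or(&batch->cpumask, &batch->cpumask, mm_cpumask(mm));
}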

Please refer to commit 72b252aed5 ("mm: send one IPI per CPU to TLB flush
all entries after unmapping pages") from Mel Gorman for more details on
how this can impact performance for various workloads. This enablement
improves performance for the original test case 'case-lru-file-mmap-read'
from the vm-scalability suite, but only in terms of system time.

time ./run case-lru-file-mmap-read

Without the patch:

real    4m20.364s
user    102m52.492s
sys     433m26.190s

With the patch:

real    4m15.942s	(-  1.69%)
user    111m16.662s	(+  7.55%)
sys     382m35.202s	(- 11.73%)

Parallel kernel compilation shows neither a performance improvement nor a
degradation with this patch; the difference remains within the margin of
error.

Without the patch:

real    1m13.850s
user    39m21.803s
sys     2m43.362s

With the patch:

real    1m14.481s	(+ 0.85%)
user    39m27.409s	(+ 0.23%)
sys     2m44.656s	(+ 0.79%)

The series batches up multiple struct mm's during reclaim, accumulating
the union of their CPU masks, i.e. the superset of all CPUs which might be
caching a TLB entry that needs to be invalidated. A local, mm wide TLB
invalidation is then performed on that accumulated CPU mask for all of the
batched mm's. Please review and let me know if there is a better way to do
this. Thank you.
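
As a rough sketch of the flush side (flush_all_local_tlb() is a
placeholder for whatever local-only flush primitive the architecture
provides, e.g. a tlbiel loop on radix; it is not the actual primitive used
in patch 2):

static void do_local_tlb_flush(void *info)
{
	/* Placeholder: local-only full TLB flush on this CPU */
	flush_all_local_tlb();
}

void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch)
{
	/*
	 * One IPI per CPU in the accumulated mask, rather than one
	 * IPI per unmapped page.
	 */
	on_each_cpu_mask(&batch->cpumask, do_local_tlb_flush, NULL, 1);
	cpumask_clear(&batch->cpumask);
}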

Anshuman Khandual (2):
  mm/tlbbatch: Introduce arch_tlbbatch_should_defer()
  powerpc/mm: Enable deferred flushing of TLB during reclaim
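
For reference, arch_tlbbatch_should_defer() introduced in patch 1 is the
arch hook that decides whether deferring the flush is worthwhile at all; a
trivial illustrative body (an assumption for illustration, not the patch's
actual logic) might be:

static inline bool arch_tlbbatch_should_defer(struct mm_struct *mm)
{
	/* Deferring only pays off when remote CPUs would need IPIs */
	return num_online_cpus() > 1;
}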

 arch/powerpc/Kconfig                |  1 +
 arch/powerpc/include/asm/tlbbatch.h | 30 +++++++++++++++++++++++
 arch/powerpc/include/asm/tlbflush.h |  3 +++
 arch/powerpc/mm/tlb-radix.c         | 49 +++++++++++++++++++++++++++++++++++++
 arch/x86/include/asm/tlbflush.h     | 12 +++++++++
 mm/rmap.c                           |  9 +------
 6 files changed, 96 insertions(+), 8 deletions(-)
 create mode 100644 arch/powerpc/include/asm/tlbbatch.h

-- 
1.8.3.1
