public inbox for linux-kernel@vger.kernel.org
From: Alex Shi <alex.shi@intel.com>
To: Alex Shi <alex.shi@intel.com>
Cc: tglx@linutronix.de, mingo@redhat.com, hpa@zytor.com,
	arnd@arndb.de, rostedt@goodmis.org, fweisbec@gmail.com,
	jeremy@goop.org, riel@redhat.com, luto@mit.edu, avi@redhat.com,
	len.brown@intel.com, dhowells@redhat.com, fenghua.yu@intel.com,
	borislav.petkov@amd.com, yinghai@kernel.org, ak@linux.intel.com,
	cpw@sgi.com, steiner@sgi.com, akpm@linux-foundation.org,
	penberg@kernel.org, a.p.zijlstra@chello.nl, hughd@google.com,
	kamezawa.hiroyu@jp.fujitsu.com, viro@zeniv.linux.org.uk,
	linux-kernel@vger.kernel.org, yongjie.ren@intel.com
Subject: Re: [PATCH v6 0/7] tlb flush optimization on x86
Date: Thu, 17 May 2012 16:49:48 +0800	[thread overview]
Message-ID: <4FB4BBAC.1050106@intel.com> (raw)
In-Reply-To: <4FB4B964.6050501@intel.com>

Adding a little more info:
the machine is a 2-socket * 4-core * HT Nehalem-EP (NHM EP) with 12GB of memory, and THP set to 'always'.
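The two tlb_flushall_shift values compared below can presumably be switched at runtime via the debugfs knob added in patch 7/7 of this series. A rough sketch, assuming the conventional debugfs mount point and the knob path introduced by that patch (verify both on your kernel):

```shell
# Assumes debugfs is mounted at the usual location; the knob path comes
# from patch 7/7 ("x86/tlb: add tlb_flushall_shift knob into debugfs")
# and may differ or be absent on other kernels.
mount -t debugfs none /sys/kernel/debug 2>/dev/null

# Baseline: shift = -1 disables range flushing, always flush the whole TLB
echo -1 > /sys/kernel/debug/x86/tlb_flushall_shift

# Range-flush regions up to 2^5 pages before falling back to flush-all
echo 5 > /sys/kernel/debug/x86/tlb_flushall_shift
```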

> Here is the macro benchmark measuring the munmap change:
> 
> tlb_flushall_shift = -1
> [alexs@lkp-ne04 tlb]$ 
> [alexs@lkp-ne04 tlb]$ for t in `echo 4 8 16  `; do echo "=============== t = $t ===================="; for i in `echo  8 16 32  `; do sudo  ./munmap -t $t -n $i; done done
> =============== t = 4 ====================
> munmap use 164ms 5032ns/time, memory access uses 81605 times/thread/ms, cost 12ns/time
> munmap use 86ms 5251ns/time, memory access uses 83378 times/thread/ms, cost 11ns/time
> munmap use 46ms 5642ns/time, memory access uses 87212 times/thread/ms, cost 11ns/time
> =============== t = 8 ====================
> munmap use 197ms 6036ns/time, memory access uses 69295 times/thread/ms, cost 14ns/time
> munmap use 96ms 5896ns/time, memory access uses 71895 times/thread/ms, cost 13ns/time
> munmap use 62ms 7608ns/time, memory access uses 83895 times/thread/ms, cost 11ns/time
> =============== t = 16 ====================
> munmap use 274ms 8367ns/time, memory access uses 37860 times/thread/ms, cost 26ns/time
> munmap use 139ms 8543ns/time, memory access uses 38137 times/thread/ms, cost 26ns/time
> munmap use 74ms 9033ns/time, memory access uses 38349 times/thread/ms, cost 26ns/time
> [alexs@lkp-ne04 tlb]$ 
> [alexs@lkp-ne04 tlb]$ 
> tlb_flushall_shift = 5
> [alexs@lkp-ne04 tlb]$ for t in `echo 4 8 16  `; do echo "=============== t = $t ===================="; for i in `echo  8 16 32  `; do sudo  ./munmap -t $t -n $i; done done
> =============== t = 4 ====================
> munmap use 212ms 6485ns/time, memory access uses 114003 times/thread/ms, cost 8ns/time
> munmap use 130ms 7972ns/time, memory access uses 110725 times/thread/ms, cost 9ns/time
> munmap use 45ms 5581ns/time, memory access uses 87866 times/thread/ms, cost 11ns/time
> =============== t = 8 ====================
> munmap use 253ms 7734ns/time, memory access uses 94578 times/thread/ms, cost 10ns/time
> munmap use 147ms 9012ns/time, memory access uses 83851 times/thread/ms, cost 11ns/time
> munmap use 63ms 7713ns/time, memory access uses 87473 times/thread/ms, cost 11ns/time
> =============== t = 16 ====================
> munmap use 369ms 11284ns/time, memory access uses 38854 times/thread/ms, cost 25ns/time
> munmap use 264ms 16131ns/time, memory access uses 37870 times/thread/ms, cost 26ns/time
> munmap use 73ms 8981ns/time, memory access uses 38309 times/thread/ms, cost 26ns/time



Thread overview: 11+ messages
2012-05-17  5:42 [PATCH v6 0/7] tlb flush optimization on x86 Alex Shi
2012-05-17  5:42 ` [PATCH v6 1/7] x86/tlb: unify TLB_FLUSH_ALL definition Alex Shi
2012-05-17  5:42 ` [PATCH v6 2/7] x86/tlb_info: get last level TLB entry number of CPU Alex Shi
2012-05-17  5:42 ` [PATCH v6 3/7] x86/flush_tlb: try flush_tlb_single one by one in flush_tlb_range Alex Shi
2012-05-17  5:42 ` [PATCH v6 4/7] x86/tlb: fall back to flush all when meet a THP large page Alex Shi
2012-05-17  5:42 ` [PATCH v6 5/7] x86/tlb: add tlb_flushall_shift for specific CPU Alex Shi
2012-05-17  5:42 ` [PATCH v6 6/7] x86/tlb: enable tlb flush range support for generic mmu on x86 Alex Shi
2012-05-17  5:42 ` [PATCH v6 7/7] x86/tlb: add tlb_flushall_shift knob into debugfs Alex Shi
2012-05-17  8:40 ` [PATCH v6 0/7] tlb flush optimization on x86 Alex Shi
2012-05-17  8:49   ` Alex Shi [this message]
2012-05-18  0:16 ` Alex Shi
