From: Arjan van de Ven <arjan@linux.intel.com>
To: Alex Shi <alex.shi@intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>,
Andrew Lutomirski <luto@mit.edu>, Jan Beulich <JBeulich@suse.com>,
borislav.petkov@amd.com, arnd@arndb.de, akinobu.mita@gmail.com,
eric.dumazet@gmail.com, fweisbec@gmail.com, rostedt@goodmis.org,
hughd@google.com, jeremy@goop.org, len.brown@intel.com,
tony.luck@intel.com, yongjie.ren@intel.com,
kamezawa.hiroyu@jp.fujitsu.com, seto.hidetoshi@jp.fujitsu.com,
penberg@kernel.org, yinghai@kernel.org, tglx@linutronix.de,
akpm@linux-foundation.org, ak@linux.intel.com, avi@redhat.com,
dhowells@redhat.com, mingo@redhat.com, riel@redhat.com,
cpw@sgi.com, steiner@sgi.com, linux-kernel@vger.kernel.org,
viro@zeniv.linux.org.uk, hpa@zytor.com
Subject: Re: [PATCH v7 8/8] x86/tlb: just do tlb flush on one of siblings of SMT
Date: Thu, 24 May 2012 07:18:29 -0700
Message-ID: <4FBE4335.6020602@linux.intel.com>
In-Reply-To: <4FBE3D95.8030501@intel.com>
On 5/24/2012 6:54 AM, Alex Shi wrote:
> On 05/24/2012 09:39 PM, Arjan van de Ven wrote:
>
>> On 5/24/2012 6:23 AM, Peter Zijlstra wrote:
>>> On Thu, 2012-05-24 at 06:19 -0700, Andrew Lutomirski wrote:
>>>>
>>>> A decent heuristic might be to prefer idle SMT siblings for TLB
>>>> invalidation. I don't know what effect that would have on power
>>>> consumption (it would be rather bad if idling one SMT thread while the
>>>> other one is busy saves much power).
>>
>> we really, really shouldn't flush TLBs on only one half of an SMT pair.
>> SMT siblings have their own TLB pool, at least on some of Intel's chips.
>
>
> That is also the biggest question I'd like answered. Some documents
> (e.g. Intel's vol6iss1_hyper_threading_technology.pdf and various
> wikis) say an SMT sibling has only its own architectural registers and
> interrupt logic, and no TLB or L1 cache of its own. And the patch runs
> well on NHM EP/WSM EP/NHM EX/SNB EP CPUs.
>
> But it is hard to find clear per-CPU documentation of SMT/HT
> resources, so: exactly which Intel chips have a per-sibling 'TLB pool'?
all of them.
the TLB pool is shared as a physical resource (dynamically or statically
partitioned, depending on the part), but each TLB entry is tagged with
which of the two HT siblings it belongs to, so at the logical level the
two are completely separate (as they should be).
Thread overview: 48+ messages
2012-05-23 14:15 [PATCH v7 0/8] x86 tlb optimisations Alex Shi
2012-05-23 14:15 ` [PATCH v7 1/8] x86/tlb_info: get last level TLB entry number of CPU Alex Shi
2012-05-23 14:15 ` [PATCH v7 2/8] x86/flush_tlb: try flush_tlb_single one by one in flush_tlb_range Alex Shi
2012-05-23 14:51 ` Jan Beulich
2012-05-24 6:41 ` Alex Shi
2012-05-24 8:12 ` Jan Beulich
2012-05-24 8:55 ` Alex Shi
2012-05-24 9:44 ` Jan Beulich
2012-05-24 14:36 ` Alex Shi
2012-05-25 2:43 ` Alex Shi
2012-05-23 14:15 ` [PATCH v7 3/8] x86/tlb: fall back to flush all when meet a THP large page Alex Shi
2012-05-23 14:15 ` [PATCH v7 4/8] x86/tlb: add tlb_flushall_shift for specific CPU Alex Shi
2012-05-23 14:15 ` [PATCH v7 5/8] x86/tlb: enable tlb flush range support for generic mmu and x86 Alex Shi
2012-05-23 14:15 ` [PATCH v7 6/8] x86/tlb: add tlb_flushall_shift knob into debugfs Alex Shi
2012-05-23 14:15 ` [PATCH v7 7/8] x86/tlb: replace INVALIDATE_TLB_VECTOR by CALL_FUNCTION_VECTOR Alex Shi
2012-05-23 14:15 ` [PATCH v7 8/8] x86/tlb: just do tlb flush on one of siblings of SMT Alex Shi
2012-05-23 15:05 ` Jan Beulich
2012-05-23 17:09 ` Peter Zijlstra
2012-05-23 17:15 ` Peter Zijlstra
2012-05-24 1:46 ` Andrew Lutomirski
2012-05-24 5:12 ` Alex Shi
2012-05-24 6:04 ` Borislav Petkov
2012-05-24 7:40 ` Peter Zijlstra
2012-05-24 13:19 ` Andrew Lutomirski
2012-05-24 13:23 ` Peter Zijlstra
2012-05-24 13:39 ` Arjan van de Ven
2012-05-24 13:54 ` Alex Shi
2012-05-24 14:18 ` Arjan van de Ven [this message]
2012-05-24 14:32 ` Alex Shi
2012-05-24 15:03 ` H. Peter Anvin
2012-05-25 0:24 ` Alex Shi
2012-05-24 16:08 ` Arjan van de Ven
2012-05-25 0:28 ` Alex Shi
2012-05-25 0:46 ` Arjan van de Ven
2012-05-24 8:32 ` Alex Shi
2012-05-24 8:42 ` Peter Zijlstra
2012-05-24 8:48 ` Alex Shi
2012-05-24 11:35 ` Rusty Russell
2012-05-24 14:03 ` Alex Shi
2012-05-24 9:27 ` Alex Shi
2012-05-24 9:42 ` Peter Zijlstra
2012-05-24 9:46 ` Jan Beulich
2012-05-24 14:06 ` Alex Shi
2012-05-24 8:43 ` Peter Zijlstra
2012-05-24 8:48 ` Jan Beulich
2012-05-24 9:02 ` Alex Shi
2012-05-24 9:45 ` Jan Beulich
2012-05-24 15:04 ` H. Peter Anvin