From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Cc: xen-devel <xen-devel@lists.xenproject.org>, Keir Fraser <keir@xen.org>
Subject: Re: [PATCH 1/3] x86: avoid flush IPI when possible
Date: Wed, 10 Feb 2016 17:51:25 +0000
Message-ID: <56BB789D.9080701@citrix.com>
In-Reply-To: <56BB673A02000078000D0A7D@prv-mh.provo.novell.com>

On 10/02/16 15:37, Jan Beulich wrote:
>>>> On 10.02.16 at 16:00, <andrew.cooper3@citrix.com> wrote:
>> On 10/02/16 12:56, Jan Beulich wrote:
>>> Since CLFLUSH, other than WBINVD, is a coherency domain wide flush,
>> I can't parse this sentence.
> Should have been "..., is a cache coherency domain wide flush, ..." -
> does it read any better then?

I believe, given the code in the patch, your intent is "if we WBINVD, we
don't need to IPI other cores for cache flushing reasons".
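
To make sure we are talking about the same thing, the logic I am
reading into the patch is roughly the following.  This is a sketch
only, not the code from the patch; send_flush_ipi() and the exact
flag handling are made up for illustration:

    static void flush_mask_sketch(const cpumask_t *mask, unsigned int flags)
    {
        if ( flags & FLUSH_CACHE )
        {
            /* Flush the caches locally and drop that reason from the
             * set used to decide whether an IPI is still required. */
            wbinvd();
            flags &= ~FLUSH_CACHE;
        }

        /* IPI only for whatever reasons remain (e.g. TLB flushes). */
        if ( flags )
            send_flush_ipi(mask, flags);
    }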

However, given your comment below...

>
>> CLFLUSH states "Invalidates from every level of the cache hierarchy in
>> the cache coherence domain"
>>
>> WBINVD however states "The instruction then issues a special-function
>> bus cycle that directs external caches to also write back modified data
>> and another bus cycle to indicate that the external caches should be
>> invalidated."
>>
>> I think we need input from Intel and AMD here as to the behaviour and
>> terminology here, and in particular, where the coherency domain
>> boundaries are.  All CPUs, even across multiple sockets, see coherent
>> caching, but it is unclear whether this qualifies them to be in the same
>> cache coherency domain per the instruction spec.
> Linux is already doing what this patch switches us to, so I'm not sure
> we need much extra input.
>
>> In particular, given the architecture of 8-socket systems and 45MB of
>> RAM in L3 caches, does wbinvd seriously drain all caches everywhere? 
> Not everywhere, just on the local socket (assuming there's no external
> cache).

If this is true, then it is clearly not safe to omit the IPIs.

>
>> Causing 45MB of data to move to remote memory controllers all at once
>> would cause a massive system stall.
> That's why it takes (as we know) so long. See the figure in SDM Vol 3
> section "Invalidating Caches and TLBs".

I presume you mean Figure 2-10, "WBINVD Invalidation of Shared and
Non-Shared Cache Hierarchy"?

This quite clearly shows that WBINVD will not invalidate or write back
the L1 caches for other cores in the same processor.
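
If that is the case, then as far as I can see the only safe behaviour
is what we do today: IPI every target CPU and have each one execute
WBINVD itself.  Something along these lines (a sketch only, reusing
Xen's existing on_selected_cpus() cross-call; not code from the patch):

    static void wbinvd_one(void *unused)
    {
        wbinvd();
    }

    static void flush_cache_on(const cpumask_t *mask)
    {
        /* Have every CPU in the mask run WBINVD locally, and wait
         * until they have all finished before returning. */
        on_selected_cpus(mask, wbinvd_one, NULL, 1);
    }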

Have I misunderstood the logic for choosing when to omit the IPIs?

~Andrew

Thread overview: 12+ messages
2016-02-10 12:40 [PATCH 0/3] x86: clflush related adjustments Jan Beulich
2016-02-10 12:56 ` [PATCH 1/3] x86: avoid flush IPI when possible Jan Beulich
2016-02-10 15:00   ` Andrew Cooper
2016-02-10 15:37     ` Jan Beulich
2016-02-10 17:51       ` Andrew Cooper [this message]
2016-02-11 10:48         ` Jan Beulich
2016-02-10 12:57 ` [PATCH 2/3] x86: use CLFLUSHOPT when available Jan Beulich
2016-02-10 15:03   ` Andrew Cooper
2016-02-10 15:39     ` Jan Beulich
2016-02-10 16:27       ` Andrew Cooper
2016-02-10 12:57 ` [PATCH 3/3] x86: rename X86_FEATURE_{CLFLSH -> CLFLUSH} Jan Beulich
2016-02-10 15:04   ` Andrew Cooper
