From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Cc: Xen-devel <xen-devel@lists.xen.org>
Subject: Re: [PATCH 3/4] xen/x86: Replace incorrect mandatory barriers with SMP barriers
Date: Mon, 5 Dec 2016 14:24:26 +0000 [thread overview]
Message-ID: <b97725c8-73d7-8d2d-6c13-3ef0c5a97364@citrix.com> (raw)
In-Reply-To: <584581A402000078001251CA@prv-mh.provo.novell.com>
On 05/12/16 14:03, Jan Beulich wrote:
>>>> On 05.12.16 at 14:29, <andrew.cooper3@citrix.com> wrote:
>> On 05/12/16 11:47, Jan Beulich wrote:
>>>>>> On 05.12.16 at 11:05, <andrew.cooper3@citrix.com> wrote:
>>>> --- a/xen/arch/x86/acpi/cpu_idle.c
>>>> +++ b/xen/arch/x86/acpi/cpu_idle.c
>>>> @@ -391,9 +391,9 @@ void mwait_idle_with_hints(unsigned int eax, unsigned int ecx)
>>>>
>>>> if ( boot_cpu_has(X86_FEATURE_CLFLUSH_MONITOR) )
>>>> {
>>>> - mb();
>>>> + smp_mb();
>>>> clflush((void *)&mwait_wakeup(cpu));
>>>> - mb();
>>>> + smp_mb();
>>>> }
>>> Both need to stay as they are afaict: In order for the clflush() to do
>>> what we want we have to order it wrt earlier as well as later writes,
>>> regardless of SMP-ness. Or wait - the SDM has changed in that
>>> respect (and a footnote describes the earlier specified behavior now).
>>> AMD, otoh, continues to require MFENCE for ordering purposes.
>> mb() == smp_mb(). They are both mfence instructions.
> Of course. But still smp_mb() would be less correct from an
> abstract perspective
? That is entirely the purpose and intended meaning of the abstraction.
smp_mb() orders operations such that, as observed by other CPUs in the
system, all earlier writes have completed before any subsequent reads begin.
> , as here we care only about the local CPU.
> That said, ...
>
>> However, if AMD specifically requires mfence, we should explicitly use
>> that rather than relying on the implementation details of smp_mb().
> ... I'd be fine with this.
An earlier version of the series introduced explicit {m,s,l}fence()
defines. I will reintroduce these.
>
>>>> --- a/xen/arch/x86/cpu/mcheck/amd_nonfatal.c
>>>> +++ b/xen/arch/x86/cpu/mcheck/amd_nonfatal.c
>>>> @@ -175,7 +175,7 @@ static void mce_amd_work_fn(void *data)
>>>> /* Counter enable */
>>>> value |= (1ULL << 51);
>>>> mca_wrmsr(MSR_IA32_MCx_MISC(4), value);
>>>> - wmb();
>>>> + smp_wmb();
>>>> }
>>>> }
>>>>
>>>> @@ -240,7 +240,7 @@ void amd_nonfatal_mcheck_init(struct cpuinfo_x86 *c)
>>>> value |= (1ULL << 51);
>>>> wrmsrl(MSR_IA32_MCx_MISC(4), value);
>>>> /* serialize */
>>>> - wmb();
>>>> + smp_wmb();
>>>> printk(XENLOG_INFO "MCA: Use hw thresholding to adjust polling frequency\n");
>>>> }
>>>> }
>>> These will need confirming by AMD engineers.
>> I was uncertain whether these were necessary at all, but as identified
>> in the commit message, this is no functional change as Xen currently has
>> rmb/wmb as plain barriers, not fence instructions.
> And may hence be subtly broken, if this code was lifted from Linux?
It doesn't resemble anything in Linux these days. I don't know if that
means we have lagged, or it was developed independently.
Looking at the Linux code, there are a few mandatory barriers which
should all be SMP barriers instead (guarding updates of shared memory),
but no barriers at all around MSR reads or writes.
>
>>>> @@ -433,7 +433,7 @@ mctelem_cookie_t mctelem_consume_oldest_begin(mctelem_class_t which)
>>>> }
>>>>
>>>> mctelem_processing_hold(tep);
>>>> - wmb();
>>>> + smp_wmb();
>>>> spin_unlock(&processing_lock);
>>> Don't we imply unlocks to be write barriers?
>> They are, as an unlock is necessarily a write, combined with x86's
>> ordering guarantees.
>>
>> Then again, I am not sure how this would interact with TSX, so am not
>> sure if we should assume or rely on such behaviour.
> Isn't architectural state at the end of a transactional region
> indistinguishable as to whether TSX was actually used or the
> abort path taken (assuming the two code paths don't differ
> in their actions)?
I'd hope so, but I haven't had occasion to dig into TSX in detail yet.
>>>> @@ -124,7 +124,7 @@ static void synchronize_tsc_master(unsigned int slave)
>>>> for ( i = 1; i <= 5; i++ )
>>>> {
>>>> tsc_value = rdtsc_ordered();
>>>> - wmb();
>>>> + smp_wmb();
>>>> atomic_inc(&tsc_count);
>>> Same question as above wrt the following LOCKed instruction.
>> I'm not sure the locked instruction is relevant here. C's
>> ordering-model is sufficient to make this correct.
> I don't follow - the two involved variables are distinct, so I don't
> see how C's ordering model helps here at all. We need (at the
> machine level) the write to tsc_value to precede the increment
> of tsc_count, and I don't think C alone guarantees any such
> ordering.
Sorry - I looked at that code and thought they were both using tsc_value.
Yes, we do need at least a compiler barrier here.
~Andrew
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel