From: David Woodhouse <dwmw2@infradead.org>
To: Jan Beulich <JBeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	"H. Peter Anvin" <h.peter.anvin@intel.com>,
	xen-devel@lists.xen.org
Subject: Re: Xen Security Advisory 154 (CVE-2016-2270) - x86: inconsistent cachability flags on guest mappings
Date: Wed, 01 Feb 2017 20:23:59 +0000	[thread overview]
Message-ID: <1485980639.30423.160.camel@infradead.org> (raw)
In-Reply-To: <5888C2810200007800133CDC@prv-mh.provo.novell.com>

On Wed, 2017-01-25 at 07:21 -0700, Jan Beulich wrote:
> 
> Well, in the context of this XSA we've asked both of them, and iirc
> we've got a vague reply from Intel and none from AMD. In fact we
> did defer the XSA for quite a bit waiting for any useful feedback.
> To AMD's advantage I'd like to add though that iirc they're a little
> clearer in their PM about the specific question of UC and WC that
> you raise: they group the various cacheabilities into two groups
> (cacheable and uncacheable) and require only that there be no
> mixture across groups. Iirc Intel's somewhat vague reply allowed
> us to conclude we're likely safe that way on their side too.

It would be good to get a definitive answer from Intel, to match AMD's.
That's basically why I added hpa to CC, in fact.

Peter, is there any possibility of a clarification here, please?

Thanks.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Thread overview: 21+ messages
2016-02-17 12:28 Xen Security Advisory 154 (CVE-2016-2270) - x86: inconsistent cachability flags on guest mappings Xen.org security team
2017-01-25 14:08 ` David Woodhouse
2017-01-25 14:21   ` Jan Beulich
2017-01-25 14:34     ` David Woodhouse
2017-01-25 16:08     ` David Woodhouse
2017-01-26  8:57     ` [PATCH] x86: Allow write-combining on MMIO mappings again David Woodhouse
2017-01-26 10:45       ` Jan Beulich
2017-01-26 10:55         ` David Woodhouse
2017-01-26 11:32           ` Jan Beulich
2017-01-26 12:39         ` [PATCH v2] x86/ept: Allow write-combining on !mfn_valid() " David Woodhouse
2017-01-26 14:35           ` Jan Beulich
2017-01-26 14:42             ` David Woodhouse
2017-01-26 14:50       ` [PATCH v3] " David Woodhouse
2017-01-26 15:48         ` Jan Beulich
2017-01-27 15:36           ` Konrad Rzeszutek Wilk
2017-02-06 11:33             ` David Woodhouse
2017-02-07  5:08               ` Tian, Kevin
2017-04-14  7:51               ` Tian, Kevin
2017-02-07  5:05           ` Tian, Kevin
2017-02-08 16:04         ` David Woodhouse
2017-02-01 20:23     ` David Woodhouse [this message]
