From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Cc: huaitong.han@intel.com, Luwei Kang <luwei.kang@intel.com>,
yong.y.wang@intel.com, xen-devel@lists.xen.org
Subject: Re: [PATCH] x86/cpuid: fix dom0 crash on skylake machine
Date: Wed, 1 Jun 2016 14:03:36 +0100 [thread overview]
Message-ID: <574EDD28.5020105@citrix.com> (raw)
In-Reply-To: <574EEAA202000078000F064B@prv-mh.provo.novell.com>
On 01/06/16 13:01, Jan Beulich wrote:
>>
>>>> I want to adjust the representation of cpuid information in struct
>>>> domain. The current loop in domain_cpuid() causes an O(N) overhead for
>>>> every query, which is very poor for actions which really should be a
>>>> single bit test at a fixed offset.
>>>>
>>>> This needs to be combined with properly splitting the per-domain and
>>>> per-vcpu information, which requires knowing the expected vcpu topology
>>>> during domain creation.
>>>>
>>>> On top of that, there needs to be verification logic to check the
>>>> correctness of information passed from the toolstack.
>>>>
>>>> All of these areas are covered in the "known issues" section of the
>>>> feature doc, and I do plan to fix them all. However, it isn't a couple
>>>> of hours worth of work.
>>> All understood, yet not to the point: the original remark was that
>>> the XSTATE handling itself could be done better with far less of a
>>> change, at least afaict (without having tried).
>> In which case I don't know what you were suggesting.
> Make {hvm,pv}_cpuid() invoke themselves recursively to
> determine what bits to mask off from CPUID[0xd].EAX.
So that would work. However, to do this you need to query leaves 1,
0x80000001 and 7, all of which will hit the O(N) loop in domain_cpuid().
Luckily, none of those specific paths recurses further into
{hvm,pv}_cpuid().
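
For concreteness, a rough sketch of that approach (plain C, not actual
Xen code: guest_cpuid() is a hypothetical stand-in for the real
{hvm,pv}_cpuid() recursion, and the XCR0-bit/feature-bit mapping
follows the SDM):

#include <stdint.h>

/* Hypothetical stand-in for the {hvm,pv}_cpuid() recursion; in Xen,
 * each of these queries would go through domain_cpuid(). */
static void guest_cpuid(uint32_t leaf, uint32_t subleaf,
                        uint32_t *eax, uint32_t *ebx,
                        uint32_t *ecx, uint32_t *edx)
{
    (void)leaf; (void)subleaf;
    *eax = *ebx = *ecx = *edx = 0;       /* placeholder */
}

static uint64_t guest_xcr0_mask(void)
{
    uint32_t eax, ebx, ecx, edx;
    uint64_t mask = 1;                   /* Bit 0 (x87) is always valid. */

    guest_cpuid(1, 0, &eax, &ebx, &ecx, &edx);
    if ( edx & (1u << 25) )              /* SSE */
        mask |= 1ull << 1;               /* XCR0.SSE */
    if ( ecx & (1u << 28) )              /* AVX */
        mask |= 1ull << 2;               /* XCR0.YMM */

    guest_cpuid(7, 0, &eax, &ebx, &ecx, &edx);
    if ( ebx & (1u << 14) )              /* MPX */
        mask |= 3ull << 3;               /* XCR0.{BNDREGS,BNDCSR} */
    if ( ebx & (1u << 16) )              /* AVX512F */
        mask |= 7ull << 5;               /* XCR0 opmask/ZMM_Hi256/Hi16_ZMM */
    if ( ecx & (1u << 3) )               /* PKU - the bit behind this bug */
        mask |= 1ull << 9;               /* XCR0.PKRU */

    guest_cpuid(0x80000001, 0, &eax, &ebx, &ecx, &edx);
    if ( ecx & (1u << 15) )              /* LWP (AMD) */
        mask |= 1ull << 62;              /* XCR0.LWP */

    return mask;
}

/* The CPUID[0xd] handler would then clamp:
 *   eax &= (uint32_t)guest_xcr0_mask();  edx &= guest_xcr0_mask() >> 32;
 */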
I am unsure which approach to go with. My gut feeling is that this would
be quite a performance hit, but I have no evidence either way. OTOH, it
would give the correct answer, rather than an approximation.
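
To illustrate the trade-off I am worried about (made-up structures, not
Xen code): the current representation forces a linear scan per query,
whereas a flat per-domain featureset would make the same check a single
bit test at a fixed offset:

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define MAX_CPUID_INPUT 64

struct cpuid_leaf {
    uint32_t leaf, subleaf;
    uint32_t eax, ebx, ecx, edx;
};

/* Today: domain_cpuid() walks the stored leaves - O(N) per query. */
static const struct cpuid_leaf *lookup_leaf(const struct cpuid_leaf *l,
                                            size_t n,
                                            uint32_t leaf, uint32_t subleaf)
{
    for ( size_t i = 0; i < n; i++ )
        if ( l[i].leaf == leaf && l[i].subleaf == subleaf )
            return &l[i];
    return NULL;
}

/* Planned: a per-domain featureset, so a query is one bit test at a
 * fixed offset - O(1). */
static bool test_feature(const uint32_t *featureset, unsigned int bit)
{
    return featureset[bit / 32] & (1u << (bit % 32));
}

Each recursive query into leaves 1, 7 and 0x80000001 pays the
lookup_leaf() cost today; with the featureset representation the same
information would collapse to three test_feature() calls.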
~Andrew
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
Thread overview: 24+ messages
2016-06-01 4:58 [PATCH] x86/cpuid: fix dom0 crash on skylake machine Luwei Kang
2016-06-01 5:54 ` Han, Huaitong
2016-06-01 8:49 ` Jan Beulich
2016-06-01 9:00 ` Han, Huaitong
2016-06-01 9:03 ` Andrew Cooper
2016-06-01 9:17 ` Jan Beulich
2016-06-01 9:34 ` Andrew Cooper
2016-06-01 9:43 ` Jan Beulich
2016-06-01 11:27 ` Andrew Cooper
2016-06-01 11:38 ` Jan Beulich
2016-06-01 11:45 ` Andrew Cooper
2016-06-01 12:01 ` Jan Beulich
2016-06-01 13:03 ` Andrew Cooper [this message]
2016-06-01 13:28 ` Jan Beulich
2016-06-02 11:12 ` Andrew Cooper
2016-06-02 11:34 ` Jan Beulich
2016-06-02 11:44 ` Andrew Cooper
2016-06-02 12:15 ` Jan Beulich
2016-06-01 9:21 ` Han, Huaitong
2016-06-01 9:30 ` Wei Liu
2016-06-01 9:45 ` Andrew Cooper
2016-06-01 10:54 ` Kang, Luwei
2016-06-01 10:57 ` Andrew Cooper
2016-06-01 9:04 ` Jan Beulich