From: Sergey Dyasli <sergey.dyasli@citrix.com>
To: "Tian, Kevin" <kevin.tian@intel.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	"sergey.dyasli@citrix.com >> Sergey Dyasli"
	<sergey.dyasli@citrix.com>
Cc: Andrew Cooper <Andrew.Cooper3@citrix.com>,
	Wei Liu <wei.liu2@citrix.com>,
	"Nakajima, Jun" <jun.nakajima@intel.com>,
	Jan Beulich <jbeulich@suse.com>
Subject: Re: [PATCH v1 5/6] x86/vvmx: correctly report vvmcs size
Date: Thu, 1 Nov 2018 09:22:44 +0000	[thread overview]
Message-ID: <abc0130c-3f89-aead-42cd-f762b1af5002@citrix.com> (raw)
In-Reply-To: <AADFC41AFE54684AB9EE6CBC0274A5D19BE33FA7@SHSMSX101.ccr.corp.intel.com>

On 01/11/2018 02:29, Tian, Kevin wrote:
>> From: Sergey Dyasli [mailto:sergey.dyasli@citrix.com]
>> Sent: Tuesday, October 30, 2018 8:36 PM
>>
>> On 30/10/2018 08:06, Tian, Kevin wrote:
>>>> From: Sergey Dyasli [mailto:sergey.dyasli@citrix.com]
>>>> Sent: Friday, October 12, 2018 11:28 PM
>>>>
>>>> The size of Xen's virtual vmcs region is 4096 bytes. Correctly report
>>>> it to the guest when VMCS shadowing is not available, instead of
>>>> providing the H/W value (which is usually smaller).
>>>
>>> what is the problem with reporting a smaller size even when the actual
>>> size is 4096? is L1 expected to access the portion beyond the h/w
>>> reported size?
>>>
>>
>> Here's the code snippet from kvm-unit-tests:
>>
>> 	vmcs[0]->hdr.revision_id = basic.revision;
>> 	assert(!vmcs_clear(vmcs[0]));
>> 	assert(!make_vmcs_current(vmcs[0]));
>> 	set_all_vmcs_fields(0x86);
>>
>> 	assert(!vmcs_clear(vmcs[0]));
>> 	memcpy(vmcs[1], vmcs[0], basic.size);
>> 	assert(!make_vmcs_current(vmcs[1]));
>> 	report("test vmclear flush (current VMCS)",
>> 	       check_all_vmcs_fields(0x86));
>>
>> set_all_vmcs_fields() vmwrites almost 4k worth of fields, but memcpy()
>> copies only basic.size (1024) bytes, so the subsequent vmreads return
>> incorrect values.
>>
> 
> I didn't understand why set_all_vmcs_fields blindly touches 4k instead of
> following the reported size. Also I didn't get the reason for this patch -
> whatever size is reported, xen just needs to emulate hw behavior according
> to the spec, i.e. do proper emulation if offset < size, otherwise just
> vmfail. Guest is not aware of shadow vmcs. Why do we want to report a
> different vmcs size based on the presence of shadow vmcs?

Here's the detailed explanation (for when vmcs shadowing is not
available in H/W):

1. Guest reads the vmcs region size as 1024 (from H/W), allocates a
region of that size and does vmptrld.

2. Xen maps the provided guest memory and uses it as the virtual vmcs
(vmcs12).

3. Guest uses vmwrite to set up the vmcs.

4. During emulation (set_vvmcs_virtual()), Xen writes values into the
virtual vmcs, but the resulting byte offset (the 9-bit field offset
multiplied by sizeof(u64)) can be larger than 1024; see the sketch
below.
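
To put rough numbers on it, here is a minimal, self-contained sketch
(illustration only, not Xen or kvm-unit-tests code; the macro name and
example values are assumptions) of where the mismatch comes from:

#include <stdint.h>
#include <stdio.h>

/*
 * Illustration: the guest sizes its vmcs buffer from MSR_IA32_VMX_BASIC
 * bits 44:32, while Xen's virtual vmcs uses 9-bit u64[] offsets, i.e.
 * a full 4k page.
 */
#define VMX_BASIC_VMCS_SIZE(msr)   (((msr) >> 32) & 0x1fffULL)

int main(void)
{
    uint64_t vmx_basic = 1024ULL << 32;  /* example H/W value: 1024-byte region */

    unsigned int guest_alloc = VMX_BASIC_VMCS_SIZE(vmx_basic);  /* 1024 */

    /* Largest 9-bit virtual vmcs offset, converted to bytes. */
    unsigned int max_byte_offset = 0x1ff * sizeof(uint64_t);    /* 4088 */

    printf("guest allocates %u bytes, Xen may write at byte offset %u\n",
           guest_alloc, max_byte_offset);
    return 0;
}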

There is even a comment in include/asm-x86/hvm/vmx/vvmx.h:

/*
 * Virtual VMCS layout
 *
 * Since the physical VMCS layout is unknown, a custom layout is used
 * for the virtual VMCS seen by the guest. It occupies a 4k page, and
 * each field is addressed by a 9-bit offset into u64[]. The offset is
 * laid out as follows, which means every <width, type> pair has a
 * maximum of 32 fields available:
 *
 *             9       7      5               0
 *             --------------------------------
 *     offset: | width | type |     index     |
 *             --------------------------------
 *
 * Also, since the lower range <width=0, type={0,1}> has only one
 * field (VPID), it is moved to a higher offset (63), leaving the
 * lower range to non-indexed fields like the VMCS revision.
 *
 */
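
Going purely by that comment (a sketch of the addressing scheme it
describes, not the exact helper in Xen's code), the field-to-byte-offset
calculation works out roughly like this:

#include <stddef.h>
#include <stdint.h>

/*
 * Sketch of the virtual vmcs field addressing described above:
 * offset = width << 7 | type << 5 | index, used as an index into u64[].
 */
static inline unsigned int vvmcs_offset(unsigned int width,
                                        unsigned int type,
                                        unsigned int index)
{
    unsigned int offset = (width << 7) | (type << 5) | (index & 0x1f);

    /* The lone <width=0, type=0> field (VPID) is relocated to offset 63,
     * keeping the low range for non-indexed fields such as the VMCS
     * revision. */
    if ( offset == 0 )
        offset = 0x3f;

    return offset;
}

/* Byte offset into the guest-provided page for a given field. */
static inline size_t vvmcs_byte_offset(unsigned int width,
                                       unsigned int type,
                                       unsigned int index)
{
    return vvmcs_offset(width, type, index) * sizeof(uint64_t);
}

With this scheme a <width=3, type=2, index=4> field lands at u64 index
(3 << 7) | (2 << 5) | 4 = 0x1c4, i.e. byte offset 0xe20 (3616): well past
a 1024-byte region, but still within the 4096-byte page.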

--
Thanks,
Sergey
