public inbox for kvm@vger.kernel.org
From: David Hildenbrand <david@redhat.com>
To: Ladi Prosek <lprosek@redhat.com>
Cc: KVM list <kvm@vger.kernel.org>,
	kai.huang@linux.intel.com, Wanpeng Li <wanpeng.li@hotmail.com>
Subject: Re: [PATCH] KVM: nVMX: initialize PML fields in vmcs02
Date: Tue, 4 Apr 2017 15:09:36 +0200	[thread overview]
Message-ID: <903edb56-6e81-b528-cc20-a710e91aba3b@redhat.com> (raw)
In-Reply-To: <CABdb7342nVi7yMY0r2my2vwwFxLs-Oqfn3eXssR=qKAkVi-Xyw@mail.gmail.com>


>>> +     if (enable_pml) {
>>> +             /*
>>> +              * Conceptually we want to copy the PML address and index from
>>> +              * vmcs01 here, and then back to vmcs01 on nested vmexit. But,
>>> +              * since we always flush the log on each vmexit, this happens
>>
>> we == KVM running in g2?
>>
>> If so, other hypervisors might handle this differently.
> 
> No, we as KVM in L0. Hypervisors running in L1 do not see PML at all,
> this is L0-only code.

Okay, I was just confused about why we enable PML for our nested guest (L2)
even though it is not supported/exposed to guest hypervisors (L1). I would
have guessed that it would be kept disabled completely for nested guests
(!SECONDARY_EXEC_ENABLE_PML).

But I assume that this is a mysterious detail of the MMU code I still have
to look into in detail.

> 
> I hope the comment is not confusing. The desired behavior is that PML
> maintains the same state, regardless of whether we are in guest mode
> or not. But the implementation allows for this shortcut where we just
> reset the fields to their initial values on each nested entry.

If we really treat PML here just like an ordinary L1 run, then it makes
perfect sense and the comment is not confusing. vmcs01 says it all. Just
me being curious :)
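For anyone else reading along, the equivalence can be sketched with a tiny
userspace model (illustrative only, not kernel code: PML_ENTITY_NUM and the
count-down index semantics follow the SDM, but the struct and function names
below are made up for the example):

```c
#include <assert.h>
#include <stdint.h>

/* Toy model of the VMX Page-Modification Log: the buffer holds 512
 * guest-physical-address entries and GUEST_PML_INDEX counts down
 * from 511. PML_ENTITY_NUM matches the kernel constant; everything
 * else here is invented for illustration. */
#define PML_ENTITY_NUM 512

struct pml_model {
	uint64_t buf[PML_ENTITY_NUM];
	int index;	/* next free slot, counts down */
};

/* Hardware logs a dirty GPA at buf[index] and decrements. */
static void pml_log(struct pml_model *p, uint64_t gpa)
{
	assert(p->index >= 0);	/* a full log causes a vmexit instead */
	p->buf[p->index--] = gpa;
}

/* The host drains entries index+1 .. 511 on every vmexit and
 * rewinds the index, so after each flush the log looks freshly
 * reset. Returns how many entries were drained. */
static int pml_flush(struct pml_model *p)
{
	int drained = PML_ENTITY_NUM - 1 - p->index;

	p->index = PML_ENTITY_NUM - 1;
	return drained;
}
```

Because the flush runs on every vmexit, the index value vmcs01 would hand
over on nested entry is always PML_ENTITY_NUM - 1 anyway, which is exactly
what the reset in the patch writes.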

> 
>>> +              * to be equivalent to simply resetting the fields in vmcs02.
>>> +              */
>>> +             ASSERT(vmx->pml_pg);

Looking at the code (especially the check in vmx_vcpu_setup()), I think
this ASSERT can be removed.

>>> +             vmcs_write64(PML_ADDRESS, page_to_phys(vmx->pml_pg));
>>> +             vmcs_write16(GUEST_PML_INDEX, PML_ENTITY_NUM - 1);

So this really just mimics the vmx_vcpu_setup() PML handling here.

>>> +     }
>>> +
>>>       if (nested_cpu_has_ept(vmcs12)) {
>>>               kvm_mmu_unload(vcpu);
>>>               nested_ept_init_mmu_context(vcpu);
>>>
>>


-- 

Thanks,

David


Thread overview: 11+ messages
2017-04-04 12:18 [PATCH] KVM: nVMX: initialize PML fields in vmcs02 Ladi Prosek
2017-04-04 12:44 ` David Hildenbrand
2017-04-04 12:55   ` Ladi Prosek
2017-04-04 13:09     ` David Hildenbrand [this message]
2017-04-04 13:19       ` Ladi Prosek
2017-04-04 13:34         ` David Hildenbrand
2017-04-04 13:25       ` David Hildenbrand
2017-04-04 13:37         ` Ladi Prosek
2017-04-04 13:55         ` Paolo Bonzini
2017-04-04 14:22           ` David Hildenbrand
2017-04-05 14:49 ` Radim Krčmář
