public inbox for kvm@vger.kernel.org
From: Sean Christopherson <seanjc@google.com>
To: David Woodhouse <dwmw2@infradead.org>
Cc: "borntraeger@linux.ibm.com" <borntraeger@linux.ibm.com>,
	"pbonzini@redhat.com" <pbonzini@redhat.com>,
	"frankja@linux.ibm.com" <frankja@linux.ibm.com>,
	"kvm@vger.kernel.org" <kvm@vger.kernel.org>,
	"imbrenda@linux.ibm.com" <imbrenda@linux.ibm.com>,
	"david@redhat.com" <david@redhat.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>
Subject: Re: [EXTERNAL] [PATCH v2] KVM: Don't actually set a request when evicting vCPUs for GFN cache invd
Date: Fri, 25 Feb 2022 17:27:12 +0000	[thread overview]
Message-ID: <YhkRcK64Jya6YpA9@google.com> (raw)
In-Reply-To: <915ddc7327585bbe8587b91b8cd208520d684db1.camel@infradead.org>

On Fri, Feb 25, 2022, David Woodhouse wrote:
> On Fri, 2022-02-25 at 16:13 +0000, Sean Christopherson wrote:
> > On Fri, Feb 25, 2022, Woodhouse, David wrote:
> > > Since we need an active vCPU context to do dirty logging (thanks, dirty
> > > ring)... and since any time vcpu_run exits to userspace for any reason
> > > might be the last time we ever get an active vCPU context... I think
> > > that kind of fundamentally means that we must flush dirty state to the
> > > log on *every* return to userspace, doesn't it?
> > 
I would rather add a variant of mark_page_dirty_in_slot() that takes a vCPU, which
we would have in all cases.  I see no reason to require use of kvm_get_running_vcpu().
> 
> We already have kvm_vcpu_mark_page_dirty(), but it can't use just 'some
> vcpu' because the dirty ring is lockless. So if you're ever going to
> use anything other than kvm_get_running_vcpu() we need to add locks.

Heh, actually, scratch my previous comment.  I was going to respond that
kvm_get_running_vcpu() is mutually exclusive with all other ioctls() on the same
vCPU by virtue of vcpu->mutex, but I had forgotten that kvm_get_running_vcpu()
really should be "kvm_get_loaded_vcpu()".  I.e. as long as KVM is in a vCPU-ioctl
path, kvm_get_running_vcpu() will be non-null.

> And while we *could* do that, I don't think it would negate the
> fundamental observation that *any* time we return from vcpu_run to
> userspace, that could be the last time. Userspace might read the dirty
> log for the *last* time, and any internally-cached "oh, at some point
> we need to mark <this> page dirty" is lost because by the time the vCPU
> is finally destroyed, it's too late.

Hmm, isn't that an existing bug?  I think the correct fix would be to flush all
dirty vmcs12 pages to the memslot in vmx_get_nested_state().  Userspace _must_
invoke that if it wants to migrate a nested vCPU.

> I think I'm going to rip out the 'dirty' flag from the gfn_to_pfn_cache
> completely and add a function (to be called with an active vCPU
> context) which marks the page dirty *now*.

Hrm, something like?

  1. Drop @dirty from kvm_gfn_to_pfn_cache_init()
  2. Rename @dirty => @old_dirty in kvm_gfn_to_pfn_cache_refresh()
  3. Add an API to mark the associated slot dirty without unmapping

I think that makes sense.

> KVM_GUEST_USES_PFN users like nested VMX will be expected to do this
> before returning from vcpu_run anytime it's in L2 guest mode. 

As above, I think the correct thing to do is enlighten the flows that retrieve
the state being cached.


Thread overview: 12+ messages
2022-02-23 16:53 [PATCH v2] KVM: Don't actually set a request when evicting vCPUs for GFN cache invd Sean Christopherson
2022-02-23 17:01 ` David Woodhouse
2022-02-23 17:08   ` Sean Christopherson
2022-02-24 10:14 ` [EXTERNAL] " David Woodhouse
     [not found] ` <2547e9675d855449bc5cc7efb97251d6286a377c.camel@amazon.co.uk>
2022-02-25 16:13   ` Sean Christopherson
2022-02-25 16:59     ` [EXTERNAL] " David Woodhouse
2022-02-25 17:27       ` Sean Christopherson [this message]
2022-02-25 18:41         ` David Woodhouse
2022-02-25 18:52           ` Sean Christopherson
2022-02-25 20:15             ` David Woodhouse
2022-02-27 15:11             ` [PATCH] KVM: Remove dirty handling from gfn_to_pfn_cache completely David Woodhouse
2022-03-08 15:41 ` [PATCH v2] KVM: Don't actually set a request when evicting vCPUs for GFN cache invd Paolo Bonzini
