From: Paolo Bonzini <pbonzini@redhat.com>
To: Peter Xu <peterx@redhat.com>
Cc: QEMU Devel Mailing List <qemu-devel@nongnu.org>,
	"Dr. David Alan Gilbert" <dgilbert@redhat.com>
Subject: Re: Question on dirty sync before kvm memslot removal
Date: Wed, 1 Apr 2020 01:12:04 +0200
Message-ID: <2eebbb76-0a12-87f4-812c-27d3e3f16a7c@redhat.com>
In-Reply-To: <20200331165133.GI522868@xz-x1>

On 31/03/20 18:51, Peter Xu wrote:
> On Tue, Mar 31, 2020 at 05:34:43PM +0200, Paolo Bonzini wrote:
>> On 31/03/20 17:23, Peter Xu wrote:
>>>> Or KVM_MEM_READONLY.
>>> Yeah, I used a new flag because I thought READONLY was a bit tricky
>>> to use directly here.  The thing is, IIUC, if the guest writes to a
>>> READONLY slot then KVM should either ignore the write or trigger an
>>> error (I didn't check which), whereas here what we want is to let
>>> the write fall back to userspace, so that it is neither dropped (we
>>> still want the written data to land gracefully in RAM) nor treated
>>> as an error (because the slot is actually writable).
>>
>> No, writes fall back to userspace with KVM_MEM_READONLY.
> 
> I read that __kvm_write_guest_page() will return -EFAULT when writing
> to a read-only memslot, and e.g. kvm_write_guest_virt_helper() will
> return X86EMUL_IO_NEEDED, which is translated into EMULATION_OK in
> x86_emulate_insn().  Then x86_emulate_instruction() seems to return
> "1" (note that I think it sets neither vcpu->arch.pio.count nor
> vcpu->mmio_needed).  Does that mean it will retry the write forever
> instead of exiting to userspace?  I may have misread somewhere,
> though...

We are definitely relying on KVM_MEM_READONLY to exit to userspace, in
order to emulate flash memory.
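
As an illustrative sketch of that usage (not QEMU's actual flash code;
the GPA, size and slot id below are made-up values): a slot registered
with KVM_MEM_READONLY serves guest reads from its host mapping, while a
guest write to it returns to userspace as KVM_EXIT_MMIO, where it can
be emulated or applied by hand.

#include <linux/kvm.h>
#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>

#define FLASH_GPA  0x40000000ULL   /* illustrative guest-physical base */
#define FLASH_SIZE 0x10000ULL

static void *flash_backing;   /* host mmap backing the slot, set up elsewhere */

/* Register the slot read-only (slot id 1 is an arbitrary choice). */
static int add_readonly_slot(int vm_fd)
{
    struct kvm_userspace_memory_region region = {
        .slot            = 1,
        .flags           = KVM_MEM_READONLY,
        .guest_phys_addr = FLASH_GPA,
        .memory_size     = FLASH_SIZE,
        .userspace_addr  = (uint64_t)(uintptr_t)flash_backing,
    };

    return ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION, &region);
}

/* In the vcpu run loop: a write to the read-only slot shows up here. */
static void handle_exit(struct kvm_run *run)
{
    if (run->exit_reason == KVM_EXIT_MMIO && run->mmio.is_write &&
        run->mmio.phys_addr >= FLASH_GPA &&
        run->mmio.phys_addr <  FLASH_GPA + FLASH_SIZE) {
        /* The data still lands on RAM, just via userspace emulation. */
        memcpy((char *)flash_backing + (run->mmio.phys_addr - FLASH_GPA),
               run->mmio.data, run->mmio.len);
    }
}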

> However... I think I might find another race with this:
> 
>           main thread                       vcpu thread
>           -----------                       -----------
>                                             dirty GFN1, cached in PML
>                                             ...
>           remove memslot1 of GFN1
>             set slot READONLY (whatever, or INVALID)
>             sync log (NOTE: no GFN1 yet)
>                                             vmexit, flush PML with RCU
>                                             (will flush to old bitmap) <------- [1]
>             delete memslot1 (old bitmap freed)                         <------- [2]
>           add memslot2 of GFN1 (memslot2 could be smaller)
>             add memslot2
> 
> I'm not 100% sure, but I think GFN1's dirty bit will be lost: it is
> correctly applied at [1], but the bitmap holding it is freed right
> away at [2].
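
For reference, the "sync log" step above boils down to one
KVM_GET_DIRTY_LOG snapshot, roughly as below.  This is an illustrative
sketch, not the actual QEMU code path: any bit a vcpu flushes into the
kernel's bitmap after the ioctl returns, but before the slot is deleted
at [2], is freed along with the slot and never reaches userspace.

#include <linux/kvm.h>
#include <stdlib.h>
#include <sys/ioctl.h>

/* Copy out one snapshot of a slot's dirty bitmap (one bit per page). */
static int sync_dirty_log(int vm_fd, int slot, size_t npages,
                          unsigned long **bitmap_out)
{
    size_t bytes = ((npages + 63) / 64) * 8;
    unsigned long *bitmap = calloc(1, bytes);
    struct kvm_dirty_log log = {
        .slot = slot,
        .dirty_bitmap = bitmap,
    };
    int ret = ioctl(vm_fd, KVM_GET_DIRTY_LOG, &log);

    /* GFN1's bit may still be sitting in a vcpu's PML buffer here. */
    *bitmap_out = bitmap;   /* caller scans and frees it */
    return ret;
}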

Yes, we probably need to do a mass vCPU kick when a slot is made
READONLY, before KVM_SET_USER_MEMORY_REGION returns (and after releasing
slots_lock).  It makes sense to guarantee that you can't get any more
dirtying after KVM_SET_USER_MEMORY_REGION returns.
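
A rough kernel-side sketch of that idea follows.  KVM_REQ_PML_FLUSH is
a hypothetical request name, not an existing one; the mechanism,
kvm_make_all_cpus_request(), is real, and the point is only that it
kicks every vcpu out of guest mode, so each one writes its buffered PML
entries back to the old dirty bitmap before KVM_SET_USER_MEMORY_REGION
returns.

#include <linux/kvm_host.h>

/* Hypothetical: kick all vcpus so their PML buffers are flushed before
 * the memslot update is considered complete. */
static void kvm_kick_all_for_pml_flush(struct kvm *kvm)
{
    kvm_make_all_cpus_request(kvm, KVM_REQ_PML_FLUSH);
}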

Paolo

>>> I think the whole kick operation is indeed too heavy for this when
>>> done with the run_on_cpu() trick, because the thing we want (PML
>>> flushing) is actually per-vcpu and needs no BQL interaction.  Do we
>>> have, or need, a lightweight way to kick one vcpu synchronously?  I
>>> was wondering about something like responding to a "sync kick"
>>> request in the vcpu thread right after KVM_RUN ends (when we don't
>>> hold the BQL yet).  Would that make sense?
>>
>> Not synchronously, because anything synchronous is very susceptible to
>> deadlocks.
> 
> Yeah, it's easy to deadlock (I suffer from it...), but besides the
> above case (which I really think is special) I still think we
> unluckily need a synchronous way.  For example, the VGA code needs
> the latest dirty bit information to decide whether to update the
> screen (otherwise it could stall), and the migration code needs to
> calculate downtime from the current dirty bit information, etc.
> 
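
For concreteness, the synchronous shape under discussion looks roughly
like this on the QEMU side.  run_on_cpu() is QEMU's real synchronous
dispatch primitive; kvm_flush_pml_work() is a hypothetical hook, since
nothing in KVM exposes a per-vcpu PML flush to userspace here.  The
blocking behavior of run_on_cpu() is exactly the deadlock hazard
discussed above.

#include "qemu/osdep.h"
#include "hw/core/cpu.h"

/* Hypothetical per-vcpu work item: would force this vcpu to write back
 * its PML buffer so the dirty bitmap is up to date. */
static void kvm_flush_pml_work(CPUState *cpu, run_on_cpu_data data)
{
}

static void kvm_sync_pml_all_vcpus(void)
{
    CPUState *cpu;

    CPU_FOREACH(cpu) {
        /* Blocks until the target vcpu has run the function; deadlocks
         * if the caller holds a lock the vcpu thread needs (e.g. BQL). */
        run_on_cpu(cpu, kvm_flush_pml_work, RUN_ON_CPU_NULL);
    }
}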



