qemu-devel.nongnu.org archive mirror
From: Laszlo Ersek <lersek@redhat.com>
To: Vitaly Kuznetsov <vkuznets@redhat.com>, Peter Xu <peterx@redhat.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>,
	qemu-devel@nongnu.org,
	"Dr. David Alan Gilbert" <dgilbert@redhat.com>,
	Eduardo Habkost <ehabkost@redhat.com>
Subject: Re: [PATCH RFC] memory: pause all vCPUs for the duration of memory transactions
Date: Wed, 4 Nov 2020 18:58:08 +0100	[thread overview]
Message-ID: <2e6e47d4-2a77-b8fb-723c-f38ec944057c@redhat.com> (raw)
In-Reply-To: <87v9emy4g2.fsf@vitty.brq.redhat.com>

On 11/03/20 14:07, Vitaly Kuznetsov wrote:
> Peter Xu <peterx@redhat.com> writes:
> 
>> Vitaly,
>>
>> On Mon, Oct 26, 2020 at 09:49:16AM +0100, Vitaly Kuznetsov wrote:
>>> Currently, KVM doesn't provide an API to make atomic updates to memmap when
>>> the change touches more than one memory slot, e.g. in case we'd like to
>>> punch a hole in an existing slot.
>>>
>>> Reports are that multi-CPU Q35 VMs booted with OVMF sometimes print something
>>> like
>>>
>>> !!!! X64 Exception Type - 0E(#PF - Page-Fault)  CPU Apic ID - 00000003 !!!!
>>> ExceptionData - 0000000000000010  I:1 R:0 U:0 W:0 P:0 PK:0 SS:0 SGX:0
>>> RIP  - 000000007E35FAB6, CS  - 0000000000000038, RFLAGS - 0000000000010006
>>> RAX  - 0000000000000000, RCX - 000000007E3598F2, RDX - 00000000078BFBFF
>>> ...
>>>
>>> The problem seems to be that TSEG manipulations on one vCPU are not atomic
>>> from the other vCPUs' point of view. In particular, here's the strace:
>>>
>>> Initial creation of the 'problematic' slot:
>>>
>>> 10085 ioctl(13, KVM_SET_USER_MEMORY_REGION, {slot=6, flags=0, guest_phys_addr=0x100000,
>>>    memory_size=2146435072, userspace_addr=0x7fb89bf00000}) = 0
>>>
>>> ... and then the update (caused by e.g. mch_update_smram()) later:
>>>
>>> 10090 ioctl(13, KVM_SET_USER_MEMORY_REGION, {slot=6, flags=0, guest_phys_addr=0x100000,
>>>    memory_size=0, userspace_addr=0x7fb89bf00000}) = 0
>>> 10090 ioctl(13, KVM_SET_USER_MEMORY_REGION, {slot=6, flags=0, guest_phys_addr=0x100000,
>>>    memory_size=2129657856, userspace_addr=0x7fb89bf00000}) = 0
>>>
>>> If KVM has to handle any event on a different vCPU in between these
>>> two calls, the #PF is triggered.
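
The arithmetic in the strace above can be checked directly. The following is an illustrative model (not QEMU code); the slot values are taken verbatim from the trace, and the "window" is the non-atomicity being described:

```python
# Model of the two-step memslot update from the strace above.
OLD_SIZE = 2146435072        # memory_size of slot 6 before the update
NEW_SIZE = 2129657856        # memory_size after the update
GPA_BASE = 0x100000          # guest_phys_addr in both calls

# Step 1: slot 6 is deleted (memory_size=0). From this point until step 2,
# the entire range [GPA_BASE, GPA_BASE + OLD_SIZE) has no backing memslot.
# Step 2: slot 6 is recreated smaller, punching a hole at the top.
hole = OLD_SIZE - NEW_SIZE
assert hole == 16 * 1024 * 1024   # 16 MiB, consistent with a TSEG carve-out

print(f"unmapped window during update: {OLD_SIZE} bytes")
print(f"hole punched at top of slot: {hole >> 20} MiB at GPA {GPA_BASE + NEW_SIZE:#x}")
```

So the update shrinks the slot by exactly 16 MiB (ending at GPA 0x7f000000), but transiently unmaps all ~2 GiB of it, which is the window in which another vCPU can fault.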
>>
>> A pure question: Why a #PF?  Is it injected into the guest?
>>
> 
> Yes, we see a #PF injected in the guest during OVMF boot.
> 
>> My understanding (which could be wrong) is that everything should start with a
>> vcpu page fault onto the removed range; then, when kvm finds that the memory
>> accessed is not within a valid memslot (since we're adding it back but haven't
>> yet), it'll become a user exit back to QEMU, assuming it's an MMIO access.  Or
>> am I wrong somewhere?
> 
> For a normal access from the guest, yes. But AFAIR here the guest's
> CR3 is pointing to non-existent memory, and when KVM detects that, it
> injects the #PF by itself, without a round trip through userspace.
> 

Indeed that's how I seem to remember it too; the guest page tables
cannot be walked (by the processor implicitly, or by KVM explicitly -- I
can't tell which one of those applies).
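
The distinction being agreed on above could be sketched roughly as follows. This is a toy model, not KVM's actual fault-handling code; the function names and the `during_page_table_walk` flag are invented for illustration:

```python
def gpa_backed(memslots, gpa):
    """Check whether a guest-physical address falls inside any memslot."""
    return any(base <= gpa < base + size for base, size in memslots)

def handle_access(memslots, gpa, during_page_table_walk):
    """Toy model of how an unbacked-GPA access is resolved."""
    if gpa_backed(memslots, gpa):
        return "handled in kernel"
    if during_page_table_walk:
        # The guest's paging structures (e.g. what CR3 points at) are
        # unreachable: KVM resolves this itself and injects a #PF into
        # the guest, with no userspace round trip.
        return "inject #PF into guest"
    # A plain data access to an unbacked GPA is assumed to be MMIO and
    # exits to QEMU instead.
    return "MMIO exit to userspace"

slots = []  # the window between the two KVM_SET_USER_MEMORY_REGION calls
print(handle_access(slots, 0x7e35f000, during_page_table_walk=True))
# prints: inject #PF into guest
```

This matches the symptom: the fault lands during a page-table walk inside the transient window, so the guest sees a #PF rather than QEMU seeing an MMIO exit.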

Thanks
Laszlo




Thread overview: 19+ messages
2020-10-26  8:49 [PATCH RFC] memory: pause all vCPUs for the duration of memory transactions Vitaly Kuznetsov
2020-10-26 10:43 ` David Hildenbrand
2020-10-26 11:17   ` David Hildenbrand
2020-10-27 12:36     ` Vitaly Kuznetsov
2020-10-27 12:42       ` David Hildenbrand
2020-10-27 13:02         ` Vitaly Kuznetsov
2020-10-27 13:08           ` David Hildenbrand
2020-10-27 13:19             ` Vitaly Kuznetsov
2020-10-27 13:35               ` David Hildenbrand
2020-10-27 13:47                 ` Vitaly Kuznetsov
2020-10-27 14:20                   ` Igor Mammedov
2020-11-02 19:57 ` Peter Xu
2020-11-03 13:07   ` Vitaly Kuznetsov
2020-11-03 16:37     ` Peter Xu
2020-11-04 18:09       ` Laszlo Ersek
2020-11-04 19:23         ` Peter Xu
2020-11-05 15:36           ` Vitaly Kuznetsov
2020-11-05 16:35             ` Peter Xu
2020-11-04 17:58     ` Laszlo Ersek [this message]
