From: David Hildenbrand <david@redhat.com>
To: "Maciej S. Szmigiero" <mail@maciej.szmigiero.name>
Cc: "Michael S . Tsirkin" <mst@redhat.com>,
"Marcel Apfelbaum" <marcel.apfelbaum@gmail.com>,
"Alex Bennée" <alex.bennee@linaro.org>,
"Thomas Huth" <thuth@redhat.com>,
"Marc-André Lureau" <marcandre.lureau@redhat.com>,
"Daniel P. Berrangé" <berrange@redhat.com>,
"Philippe Mathieu-Daudé" <philmd@linaro.org>,
"Eric Blake" <eblake@redhat.com>,
"Markus Armbruster" <armbru@redhat.com>,
qemu-devel@nongnu.org, "Paolo Bonzini" <pbonzini@redhat.com>,
"Richard Henderson" <richard.henderson@linaro.org>,
"Eduardo Habkost" <eduardo@habkost.net>
Subject: Re: [PATCH][RESEND v3 1/3] hapvdimm: add a virtual DIMM device for memory hot-add protocols
Date: Tue, 28 Feb 2023 16:02:47 +0100
Message-ID: <f81827ce-2553-7b50-adba-a32e82f87e1f@redhat.com>
In-Reply-To: <a230f8bc-ef59-d2ad-1316-554f1a293da9@maciej.szmigiero.name>
>
> That was more or less the approach that v1 of this driver took:
> The QEMU manager inserted virtual DIMMs (Hyper-V DM memory devices,
> whatever one calls them) explicitly via the machine hotplug handler
> (using the device_add command).
>
> At that time you said [1] that:
>> 1) I dislike that an external entity has to do vDIMM adaptations /
>> ballooning adaptations when rebooting or when wanting to resize a guest.
>
> because:
>> Once you have the current approach upstream (vDIMMs, ballooning),
>> there is no easy way to change that later (requires deprecating, etc.).
>
> That's why this version hides these vDIMMs.
Note that I don't really have strong feelings about letting the user
hotplug devices. My comment was in general about user interactions when
adding/removing memory or when rebooting the VM. As soon as you use
individual memory blocks and/or devices, we end up with a similar user
experience to what we already have with DIMMs+virtio-balloon (bad, IMHO).
Hiding the devices internally might make it a little bit easier to use,
but it's still the same underlying concept: to add more memory, you have
to figure out whether to deflate the balloon or whether to add a new
memory backend. Which memory backends will remain when we reboot? When
can we remove memory backends?
But that's just about the user interaction in general. My comment here
was about the hidden devices: they have to go through the plug handlers
to get resources assigned instead of self-assigning resources in their
realize function.
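To illustrate the distinction with a toy sketch (not QEMU's actual API;
the Machine/MemDev types and all names below are made up for the
example): realize() only validates the device's own properties, while
the machine's plug handler is what hands out an address from the
device-memory region:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Toy stand-ins for the objects under discussion. */
typedef struct {
    uint64_t size;
    uint64_t addr;      /* assigned by the plug handler, not by realize */
    int realized;
} MemDev;

typedef struct {
    uint64_t dev_mem_next; /* next free address in the device-memory region */
    uint64_t dev_mem_end;
} Machine;

/* realize(): validate the device's own properties only. */
static int memdev_realize(MemDev *d)
{
    if (d->size == 0) {
        return -1;
    }
    d->realized = 1;
    return 0;
}

/* Machine hotplug handler: the machine hands out the address. */
static int machine_plug(Machine *m, MemDev *d)
{
    if (m->dev_mem_next + d->size > m->dev_mem_end) {
        return -1; /* no space left in the device-memory region */
    }
    d->addr = m->dev_mem_next;
    m->dev_mem_next += d->size;
    return 0;
}

int main(void)
{
    Machine m = { 0x100000000ULL, 0x200000000ULL };
    MemDev d = { .size = 0x10000000 };

    if (memdev_realize(&d) || machine_plug(&m, &d)) {
        return EXIT_FAILURE;
    }
    printf("device mapped at 0x%" PRIx64 "\n", d.addr);
    return 0;
}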
Note that virtio-mem uses a single sparse memory backend to make
resizing easier (well, and to make migration and some other things
easier to handle). But it comes with other things that require
optimization. Using multiple memslots to expose memory to the VM is one
optimization I'm working on; resizable memory backends are another.
I think you could implement the memory-adding part similarly to
virtio-mem: simply have a large sparse memory backend from which you
expose new memory to the VM as you please. You could even use multiple
memslots for that. But that's your design decision, and I won't argue
with it; I'm just pointing it out.
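For illustration, here is a self-contained toy of that "own placement"
idea (nothing below is QEMU code; the container size and the first-fit
policy are invented for the example): the device gets one large
container reserved up front and then places each backend itself:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define MAX_MAPPINGS 16
#define CONTAINER_SIZE 0x100000000ULL /* 4 GiB reserved up front */

/* One mapped memory backend inside the device's container region. */
typedef struct { uint64_t off, size; int used; } Mapping;
static Mapping maps[MAX_MAPPINGS];

/* First-fit placement inside the container; returns the chosen offset
 * or UINT64_MAX when no gap is large enough. The device does its own
 * placement; the machine only reserved the container once at plug time. */
static uint64_t place(uint64_t size)
{
    for (uint64_t off = 0; off + size <= CONTAINER_SIZE; ) {
        int clash = 0;
        for (int i = 0; i < MAX_MAPPINGS; i++) {
            if (maps[i].used && off < maps[i].off + maps[i].size &&
                maps[i].off < off + size) {
                off = maps[i].off + maps[i].size; /* skip past the clash */
                clash = 1;
                break;
            }
        }
        if (!clash) {
            return off;
        }
    }
    return UINT64_MAX;
}

static int map_backend(uint64_t size)
{
    uint64_t off = place(size);
    if (off == UINT64_MAX) {
        return -1;
    }
    for (int i = 0; i < MAX_MAPPINGS; i++) {
        if (!maps[i].used) {
            maps[i] = (Mapping){ off, size, 1 };
            printf("backend mapped at container offset 0x%" PRIx64 "\n", off);
            return 0;
        }
    }
    return -1;
}

int main(void)
{
    map_backend(0x40000000); /* 1 GiB hot-add   */
    map_backend(0x20000000); /* 512 MiB hot-add */
    return 0;
}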
> Instead, the QEMU manager (user) directly provides the raw memory
> backend device (for example, memory-backend-ram) to the driver via a QMP
> command.
Yes, that's what I understood.
>
> Since now the user is not expected to touch these vDIMMs directly in any
> way, these become an implementation detail that can be changed or even
> removed if needed at some point, without affecting the existing users.
>
>> But before we dive into the details of that, I wonder if you could just avoid having a memory device for each block of memory you want to add.
>>
>>
>> An alternative might be the following:
>>
>> Have a hv-balloon device be a memory device with a configured maximum size and a memory device region container. Let the machine hotplug handler assign a contiguous region in the device memory region and map the memory device region container (while plugging that hv-balloon device), just like we do it for virtio-mem and virtio-pmem.
>>
>> In essence, you reserve a region in physical address space that way and can decide what to (un)map into that memory device region container, you do your own placement.
>>
>> So when instructed to add a new memory backend, you simply assign an address in the assigned region yourself, and map the memory backend memory region into the device memory region container.
>>
>> The only catch is that that memory device (hv-balloon) will then consume multiple memslots (one for each memory backend), right now we only support 1 memslot (e.g., asking if one more slot is free when plugging the device).
>>
>>
> Technically in this case a "main" hv-balloon device is still needed -
> in contrast with virtio-mem (which allows multiple instances) there can
> be only one Dynamic Memory protocol provider on the VMBus.
Yes, just like virtio-balloon. There cannot be multiple instances.
>
> That means these "container" sub-devices would need to register with that
> main hv-balloon device.
>
My question is if they really have to be devices. Why wouldn't it be
sufficient to map the memory backends directly into the container?
> However, I'm not sure what is exactly gained by this approach.
>
> These sub-devices still need to implement the TYPE_MEMORY_DEVICE interface
No, they wouldn't, unless I am missing something. Only the hv-balloon
device would be a TYPE_MEMORY_DEVICE.
> so they are accounted for properly (the alternative would be to patch
> the relevant QEMU code all over the place - that's probably why
> virtio-mem also implements this interface instead).
Please elaborate, I don't understand what you are trying to say here.
Memory devices provide hooks, and the hooks exist for a reason --
because memory devices are no longer simple DIMMs/NVDIMMs. And
virtio-mem + virtio-pmem were responsible for adding some of these hooks.
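As a rough sketch of what I mean by hooks (loosely modeled on QEMU's
MemoryDeviceClass, but with approximate names and simplified types --
don't treat this as the real header): generic code only talks to a
memory device through such an ops table, so non-DIMM devices can be
accounted for without special-casing:

#include <stdint.h>
#include <stdio.h>

typedef struct MemoryDeviceOps {
    uint64_t (*get_addr)(const void *dev);
    void     (*set_addr)(void *dev, uint64_t addr); /* plug handler assigns */
    uint64_t (*get_plugged_size)(const void *dev);  /* may be < region size */
} MemoryDeviceOps;

/* A device that only exposes part of its region as plugged: */
typedef struct { uint64_t addr, plugged; } ToyDev;

static uint64_t toy_get_addr(const void *d) { return ((const ToyDev *)d)->addr; }
static void toy_set_addr(void *d, uint64_t a) { ((ToyDev *)d)->addr = a; }
static uint64_t toy_get_plugged(const void *d) { return ((const ToyDev *)d)->plugged; }

static const MemoryDeviceOps toy_ops = {
    toy_get_addr, toy_set_addr, toy_get_plugged,
};

int main(void)
{
    ToyDev dev = { 0, 0x20000000 }; /* 512 MiB currently plugged */
    toy_ops.set_addr(&dev, 0x100000000ULL);
    printf("plugged: 0x%llx bytes at 0x%llx\n",
           (unsigned long long)toy_ops.get_plugged_size(&dev),
           (unsigned long long)toy_ops.get_addr(&dev));
    return 0;
}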
>
> One still needs some QMP command to add a raw memory backend to
> the chosen "container" hv-balloon sub-device.
If you go with multiple memory backends, yes.
>
> Since now the QEMU manager (user) is aware of the presence of these
> "container" sub-devices, and has to manage them, changing the QEMU
> interface in the future is more complex (as you said in [1]).
Can you elaborate? Yes, when you design the feature around "multiple
memory backends", you'll have to have an interface to add them. Well,
and to query them during migration. And maybe also to detect when to
remove some (migration)?
>
> I understand that virtio-mem uses a similar approach, however that's
> because the virtio-mem protocol itself works that way.
>
>> I'm adding support for that right now to implement a virtio-mem
>> extension -- the memory device says how many memslots it requires,
>> and these will get reserved for that memory device; the memory device
>> can then consume them later without further checks dynamically. That
>> approach could be extended to increase/decrease the memslot
>> requirement (the device would ask to increase/decrease its limit),
>> if ever required.
>
> In terms of future virtio-mem things I'm also eagerly waiting for an
> ability to set a removed virtio-mem block read-only (or not covered by
> any memslot) - this most probably could be reused later for implementing
> the same functionality in this driver.
In contrast to setting them read-only, memslots that no longer contain
any plugged blocks will be removed completely. The goal is to not
consume any metadata overhead in KVM (well, and also to take one step
in the direction of protecting unplugged memory from getting reallocated).
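Just to illustrate the bookkeeping (toy model with invented sizes, not
the actual virtio-mem code): a memslot gets dropped entirely once the
last plugged block it covers goes away:

#include <stdio.h>

#define BLOCKS_PER_SLOT 8
#define NSLOTS 4

static int plugged[NSLOTS]; /* plugged-block count per memslot */
static int slot_present[NSLOTS];

static void plug_block(int block)
{
    int s = block / BLOCKS_PER_SLOT;
    if (!slot_present[s]) {
        slot_present[s] = 1;
        printf("memslot %d created\n", s);
    }
    plugged[s]++;
}

static void unplug_block(int block)
{
    int s = block / BLOCKS_PER_SLOT;
    if (--plugged[s] == 0) {
        slot_present[s] = 0; /* last block gone: remove the whole memslot */
        printf("memslot %d removed\n", s);
    }
}

int main(void)
{
    plug_block(0);
    plug_block(1);
    unplug_block(0);
    unplug_block(1); /* prints "memslot 0 removed" */
    return 0;
}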
--
Thanks,
David / dhildenb