From: David Hildenbrand <david@redhat.com>
To: "Maciej S. Szmigiero" <mail@maciej.szmigiero.name>
Cc: "Michael S . Tsirkin" <mst@redhat.com>,
	"Marcel Apfelbaum" <marcel.apfelbaum@gmail.com>,
	"Alex Bennée" <alex.bennee@linaro.org>,
	"Thomas Huth" <thuth@redhat.com>,
	"Marc-André Lureau" <marcandre.lureau@redhat.com>,
	"Daniel P. Berrangé" <berrange@redhat.com>,
	"Philippe Mathieu-Daudé" <philmd@linaro.org>,
	"Eric Blake" <eblake@redhat.com>,
	"Markus Armbruster" <armbru@redhat.com>,
	qemu-devel@nongnu.org, "Paolo Bonzini" <pbonzini@redhat.com>,
	"Richard Henderson" <richard.henderson@linaro.org>,
	"Eduardo Habkost" <eduardo@habkost.net>
Subject: Re: [PATCH][RESEND v3 1/3] hapvdimm: add a virtual DIMM device for memory hot-add protocols
Date: Wed, 1 Mar 2023 18:24:28 +0100
Message-ID: <678fb11d-4ac8-238f-9ead-d68d59d0a8ba@redhat.com>
In-Reply-To: <9f581e62-0cb3-7f0f-8feb-ddfda5bba621@maciej.szmigiero.name>

> 
> The idea would seem reasonable, but: (there's always some "but")
> 1) Once we implement NUMA support we'd probably need multiple
> TYPE_MEMORY_DEVICEs anyway, since it seems one memdev can sit on only
> one NUMA node,
> 

Not necessarily. You could extend the hv-balloon device to have one 
memslot for each NUMA node. Of course, once again, you have to plan 
ahead how to distribute memory across NUMA nodes (same with virtio-mem).
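
To illustrate the virtio-mem variant (just a sketch; IDs, sizes and the
two-node split are made up, and the machine is assumed to already be
configured with two vNUMA nodes):

  qemu-system-x86_64 ... \
    -m 4G,maxmem=36G \
    -object memory-backend-ram,id=mem0,size=16G \
    -device virtio-mem-pci,id=vmem0,memdev=mem0,node=0,requested-size=0 \
    -object memory-backend-ram,id=mem1,size=16G \
    -device virtio-mem-pci,id=vmem1,memdev=mem1,node=1,requested-size=0

An hv-balloon-style device with multiple memslots would have to carve a
similar split out of a single device, which is where the planning-ahead
part comes in.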

Having said that, last time I checked, HV dynamic memory was
force-disabled when enabling vNUMA under HV, simply because balloon
inflation is not NUMA-aware.

> With virtio-mem one can simply have per-node virtio-mem devices.
> 
> 2) I'm not sure about the overhead of having, let's say, a 1 TiB backing
> memory device mostly discarded via madvise(MADV_DONTNEED).
> Like, how much memory + swap this setup would actually consume - that's
> something I would need to measure.

There are some WIP items to improve that: QEMU metadata (e.g., bitmaps),
KVM metadata (e.g., per-memslot), Linux metadata (e.g., page tables).
Memory overcommit handling also has to be tackled.

So it would be a "shared" problem with virtio-mem and will be sorted out 
eventually :)
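
If you want a quick feel for the discard side of that measurement, a
minimal userspace experiment (just a sketch, nothing QEMU-specific)
would be:

  #include <stdio.h>
  #include <string.h>
  #include <sys/mman.h>

  int main(void)
  {
      const size_t size = 1UL << 30; /* 1 GiB; scale up for the real test */
      char *p = mmap(NULL, size, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

      if (p == MAP_FAILED) {
          perror("mmap");
          return 1;
      }
      memset(p, 0xaa, size);           /* populate: VmRSS grows by ~1 GiB */
      madvise(p, size, MADV_DONTNEED); /* discard: VmRSS drops, VmSize stays */
      getchar();                       /* now inspect /proc/self/status */
      return 0;
  }

The anonymous memory itself is freed on MADV_DONTNEED; what's left over
is the metadata mentioned above (page tables etc.), which is exactly the
part that still needs improving.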

> 
> 3) In a public cloud environment malicious guests are a possibility.
> Currently (without things like resizable memslots) the best idea I tried
> was to place the whole QEMU process into a memory-limited cgroup
> (limited to the guest target size).

Yes. Protection of unplugged memory is on my TODO list for virtio-mem as 
well, to avoid having to rely on cgroups.

> 
> There are still some issues with it: one needs to reserve swap space up
> to the guest maximum size so the QEMU process doesn't get OOM-killed if
> the guest touches that memory, and the cgroup memory controller for some
> reason seems to start swapping even before reaching its limit (why is
> still under investigation).

Yes, putting a memory cap on Linux was always tricky.
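
For reference, the cgroup v2 side is only a couple of knobs (a sketch;
paths and sizes are made up):

  mkdir /sys/fs/cgroup/vm1
  echo 16G > /sys/fs/cgroup/vm1/memory.max    # hard cap = guest target size
  echo max > /sys/fs/cgroup/vm1/memory.high   # avoid early throttling/reclaim
  echo $QEMU_PID > /sys/fs/cgroup/vm1/cgroup.procs

One thing worth checking regarding the early swapping you describe: if
memory.high is set below memory.max, reclaim already kicks in when usage
crosses memory.high, long before the hard limit is reached.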

> 
>> Reboot? Logically unplug all memory, then re-add it once the guest has booted up.
>>
>> The only thing we can't do is the following: when going below 4G, we cannot resize boot memory.
>>
>>
>> But I recall that that's *exactly* how the HV version I played with ~2 years ago worked: always start up with some initial memory ("startup memory"). After the VM has been up for a few seconds, we either add more memory (requested > startup) or request the VM to inflate its balloon (requested < startup).
> 
> Hyper-V actually "cleans up" the guest memory map on reboot - if the
> guest was effectively resized up then on reboot the guest boot memory is
> resized up to match that last size.
> Similarly, if the guest was ballooned out - that amount of memory is
> removed from the boot memory on reboot.

Yes, it cleans up, but as I said, last time I checked there was this
concept of startup vs. minimum vs. maximum, at least for dynamic memory:

https://www.fastvue.co/tmgreporter/blog/understanding-hyper-v-dynamic-memory-dynamic-ram/

Startup RAM would be whatever you specify via "-m xG". If the requested
size is below startup, you remove memory via balloon inflation once the
guest is up.
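
For reference, that's the knob set Hyper-V itself exposes (standard
PowerShell; the VM name and sizes are made up):

  Set-VMMemory "vm1" -DynamicMemoryEnabled $true `
      -StartupBytes 4GB -MinimumBytes 2GB -MaximumBytes 16GB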

> 
> So it's not exactly doing a hot-add after the guest boots.

I recall BUG reports in Linux where we got hv-balloon hot-add requests
~1 minute after Linux booted up, precisely because of that startup-memory
behavior [in these BUG reports, memory onlining was disabled and the VM
would run out of memory because we hotplugged too much memory]. That's
why I remember that this approach was once used.

Maybe there are multiple implementations nowadays. At least in QEMU you
could choose whatever makes the most sense for QEMU.


> This approach (of resizing the boot memory) also avoids problems if the
> guest loses hot-add / ballooning capability after a reboot - for example,
> rebooting into a Linux guest from Windows with hv-balloon.

TBH, I wouldn't be too concerned about that scenario ("hotplugged memory
to a guest, guest reboots into a weird OS, weird OS isn't able to use the
hotplugged memory"). For virtio-mem, the important part was that you
always "know" how much memory the VM is aware of. If you always start
with "startup memory" and hot-add later (only once you have detected
guest support after bootup), you can handle that scenario.

> 
> But unfortunately such resizing of the guest boot memory seems
> non-trivial to implement in QEMU.

Yes, avoiding changes to the memory layout, so that memory migration
stays feasible, was another thing I considered when designing virtio-mem.


Anyhow, I'm just throwing out ideas here on how to eventually handle it 
differently.

-- 
Thanks,

David / dhildenb


