From: David Hildenbrand <david@redhat.com>
To: Peter Xu <peterx@redhat.com>
Cc: "Laurent Vivier" <lvivier@redhat.com>,
"Thomas Huth" <thuth@redhat.com>,
"Eduardo Habkost" <ehabkost@redhat.com>,
"Michael S. Tsirkin" <mst@redhat.com>,
qemu-devel@nongnu.org,
"Dr . David Alan Gilbert" <dgilbert@redhat.com>,
"Alex Williamson" <alex.williamson@redhat.com>,
"Claudio Fontana" <cfontana@suse.de>,
"Paolo Bonzini" <pbonzini@redhat.com>,
"Marc-André Lureau" <marcandre.lureau@redhat.com>,
"Alex Bennée" <alex.bennee@linaro.org>,
"Igor Mammedov" <imammedo@redhat.com>,
"Stefan Berger" <stefanb@linux.ibm.com>
Subject: Re: [PATCH resend v2 5/5] softmmu/memory_mapping: optimize for RamDiscardManager sections
Date: Mon, 26 Jul 2021 09:51:25 +0200
Message-ID: <162c3460-a8a3-4f7c-c85b-4d423bbbb40a@redhat.com>
In-Reply-To: <YPtDsQxJcy4Am2wG@t490s>
On 24.07.21 00:33, Peter Xu wrote:
> On Fri, Jul 23, 2021 at 08:56:54PM +0200, David Hildenbrand wrote:
>>>
>>> As I've asked previously elsewhere, this question is more or less also
>>> related to the design decision of having virtio-mem plug sparsely at such
>>> a small granularity, rather than keeping the plugged area contiguous
>>> within the GPA range (so we move pages on unplug).
>>
>> Yes, in an ideal world that would be the optimal solution. Unfortunately,
>> we're not living in an ideal world :)
>>
>> virtio-mem in Linux guests will by default try unplugging from highest to
>> lowest addresses, and I have an item on my TODO list to shrink the usable
>> region (-> later, shrinking the actual RAMBlock) once possible.
>>
>> So virtio-mem is prepared for that, but it will only apply in some cases.
>>
>>>
>>> There are definitely reasons there, and I believe you're the expert on
>>> that (as you mentioned once: some guest GUPed pages cannot be migrated,
>>> so those ranges cannot be offlined otherwise), but so far I'm still not
>>> sure whether that's a kernel issue to solve in GUP, although I agree it's
>>> a complicated one anyway!
>>
>> To do something like that reliably, you have to manage hotplugged memory in
>> a special way, for example, in a movable zone.
>>
>> We have at least 4 cases:
>>
>> a) The guest OS supports the movable zone and uses it for all hotplugged
>> memory
>> b) The guest OS supports the movable zone and uses it for some
>> hotplugged memory
>> c) The guest OS supports the movable zone and uses it for no hotplugged
>> memory
>> d) The guest OS does not support the concept of movable zones
>>
>>
>> a) is the dream, but only applies in some cases if Linux is properly
>> configured (e.g., never hotplugging more than 3 times the boot memory)
>> b) will be possible under Linux soon (e.g., when hotplugging more than 3
>> times the boot memory)
>> c) is the default for most Linux distributions
>> d) is Windows
>>
>> In addition, we can still have random unplug errors when using the movable
>> zone, for example, if someone references a page just a little too long.
>>
>> Maybe that helps.
>
> Yes, thanks.
>
>>
>>>
>>> Maybe it's a trade-off you made in the end; I don't have enough knowledge
>>> to tell.
>>
>> That's the precise description of what virtio-mem is. It's a trade-off
>> between which OSs we want to support, what the guest OS can actually do, how
>> we can manage memory in the hypervisor efficiently, ...
>>
>>>
>>> The patch itself looks okay to me; there's just a slight worry about how
>>> long the list could get in the end if it's chopped into small 1M/2M chunks.
>>
>> I don't think that's really an issue: take a look at
>> qemu_get_guest_memory_mapping(), which will create as many entries as
>> necessary to express the guest physical mapping of the guest virtual (!)
>> address space with such chunks. That can be a lot :)
>
> I'm indeed a bit surprised by the "paging" parameter... I gave it a try; the
> list grows into the tens of thousands.
Yes, and the bigger the VM, the more entries you should get ... like
with virtio-mem.
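
For reference, each entry in that list is a MemoryMapping; roughly (a
simplified sketch of include/sysemu/memory_mapping.h, details may vary
between QEMU versions):

typedef struct MemoryMapping {
    hwaddr phys_addr;                 /* guest physical start */
    target_ulong virt_addr;           /* guest virtual start */
    ram_addr_t length;
    QTAILQ_ENTRY(MemoryMapping) next; /* two tail-queue pointers */
} MemoryMapping;

Three 8-byte fields plus the two list pointers, plus allocator overhead,
is roughly where the ~48 bytes per element I mention below come from.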
>
> One last question: will virtio-mem still make a best effort to move the
> pages, so as to leave as few holes as possible?
That depends on the guest OS.
Linux guests will unplug from highest to lowest addresses. They will try
migrating pages away (alloc_contig_range()) to minimize fragmentation.
Further, when (un)plugging, they will prefer a) unplugging within already
fragmented Linux memory blocks (e.g., 128 MiB) and b) plugging within
already fragmented Linux memory blocks first, because the goal is to use
as few Linux memory blocks as possible to reduce metadata (memmap)
overhead.
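
To illustrate the idea, a minimal sketch (hypothetical code, not the
actual Linux virtio-mem driver; pick_block_for_plug and friends are
made-up names):

/* Hypothetical sketch of the plug-side preference described above:
 * fill partially plugged Linux memory blocks before touching fully
 * unplugged ones, so fewer blocks (and less memmap metadata) are used. */
#include <stddef.h>

#define SUBBLOCKS_PER_MB 64 /* e.g., 128 MiB block / 2 MiB subblocks */

struct mb_state {
    int plugged; /* number of plugged subblocks, 0..SUBBLOCKS_PER_MB */
};

static int pick_block_for_plug(const struct mb_state *mb, size_t nr)
{
    size_t i;

    /* Pass 1: prefer already fragmented (partially plugged) blocks. */
    for (i = 0; i < nr; i++) {
        if (mb[i].plugged > 0 && mb[i].plugged < SUBBLOCKS_PER_MB) {
            return (int)i;
        }
    }
    /* Pass 2: fall back to completely unplugged blocks. */
    for (i = 0; i < nr; i++) {
        if (mb[i].plugged == 0) {
            return (int)i;
        }
    }
    return -1; /* nothing to plug into */
}

Unplug simply inverts the preference: drain already fragmented blocks
before breaking up fully plugged ones.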
I recall that the Windows prototype also tries to unplug from highest to
lowest using the Windows range allocator; however, I have no idea what
that range allocator actually does (whether it only grabs free pages or
whether it can actually move around busy pages).
For Linux guests, there is a work item to continue defragmenting the
layout to free up complete Linux memory blocks over time.
With a 1 TiB virtio-mem device and a 2 MiB block size (default), in the
worst case we would get 262144 individual blocks (every second one
plugged). While this is far from realistic, I assume we can get
something comparable when dumping a huge VM in paging mode.
With 262144 entries at ~48 bytes (6*8 bytes) per element, we'd consume
12 MiB for the whole list. Not perfect, but not too bad.
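
A quick sanity check of that arithmetic (assuming, as above, ~48 bytes
per entry):

#include <stdio.h>

int main(void)
{
    unsigned long long dev_size = 1ULL << 40;             /* 1 TiB device */
    unsigned long long blk_size = 2ULL << 20;             /* 2 MiB blocks */
    unsigned long long entries = dev_size / blk_size / 2; /* every 2nd plugged */
    unsigned long long bytes = entries * 48;              /* ~6 * 8 bytes each */

    printf("%llu entries, ~%llu MiB\n", entries, bytes >> 20);
    return 0;
}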
--
Thanks,
David / dhildenb