From: David Hildenbrand <david@redhat.com>
To: Yangming <yangming73@huawei.com>,
"mst@redhat.com" <mst@redhat.com>,
"qemu-devel@nongnu.org" <qemu-devel@nongnu.org>
Cc: "wangzhigang (O)" <wangzhigang17@huawei.com>,
"zhangliang (AG)" <zhangliang5@huawei.com>,
xiqi <xiqi2@huawei.com>
Subject: Re: [PATCH v2] virtio-balloon: optimize the virtio-balloon on the ARM platform
Date: Wed, 1 Mar 2023 09:16:57 +0100 [thread overview]
Message-ID: <cf2cf2c8-108c-2e21-2695-161b13cea31b@redhat.com> (raw)
In-Reply-To: <afd620a5e7c14a0794812e72ba1af545@huawei.com>
On 01.03.23 07:38, Yangming wrote:
> Optimize the virtio-balloon feature on the ARM platform by adding
> a variable that tracks the current hot-plugged pc-dimm size,
> instead of traversing the virtual machine's memory modules to count
> the current RAM size during balloon inflation or deflation.
> This variable needs to be updated only when a device is plugged or
> unplugged, which improves the efficiency of the balloon process on
> the ARM platform by approximately 60%.
>
> We tested the total amount of time required for the balloon inflation process on ARM:
> inflate the balloon to 64GB of a 128GB guest under stress.
> Before: 102 seconds
> After: 42 seconds
>
> Signed-off-by: Qi Xi <xiqi2@huawei.com>
> Signed-off-by: Ming Yang <yangming73@huawei.com>
> ---
> Refactor the code by adding comments and removing unnecessary code.
>
> hw/mem/pc-dimm.c | 7 +++++++
> hw/virtio/virtio-balloon.c | 33 +++++----------------------------
> include/hw/boards.h | 2 ++
> 3 files changed, 14 insertions(+), 28 deletions(-)
>
> diff --git a/hw/mem/pc-dimm.c b/hw/mem/pc-dimm.c
> index 50ef83215c..3f2734a267 100644
> --- a/hw/mem/pc-dimm.c
> +++ b/hw/mem/pc-dimm.c
> @@ -81,6 +81,10 @@ void pc_dimm_plug(PCDIMMDevice *dimm, MachineState *machine)
>
> memory_device_plug(MEMORY_DEVICE(dimm), machine);
> vmstate_register_ram(vmstate_mr, DEVICE(dimm));
> + /* count only "real" DIMMs, not NVDIMMs */
> + if (!object_dynamic_cast(OBJECT(dimm), TYPE_NVDIMM)) {
> + machine->device_memory->dimm_size += vmstate_mr->size;
> + }
> }
>
> void pc_dimm_unplug(PCDIMMDevice *dimm, MachineState *machine)
> @@ -90,6 +94,9 @@ void pc_dimm_unplug(PCDIMMDevice *dimm, MachineState *machine)
>
> memory_device_unplug(MEMORY_DEVICE(dimm), machine);
> vmstate_unregister_ram(vmstate_mr, DEVICE(dimm));
> + if (!object_dynamic_cast(OBJECT(dimm), TYPE_NVDIMM)) {
> + machine->device_memory->dimm_size -= vmstate_mr->size;
> + }
> }
>
> static int pc_dimm_slot2bitmap(Object *obj, void *opaque)
> diff --git a/hw/virtio/virtio-balloon.c b/hw/virtio/virtio-balloon.c
> index 746f07c4d2..2814a47cb1 100644
> --- a/hw/virtio/virtio-balloon.c
> +++ b/hw/virtio/virtio-balloon.c
> @@ -729,37 +729,14 @@ static void virtio_balloon_get_config(VirtIODevice *vdev, uint8_t *config_data)
> memcpy(config_data, &config, virtio_balloon_config_size(dev));
> }
>
> -static int build_dimm_list(Object *obj, void *opaque)
> -{
> - GSList **list = opaque;
> -
> - if (object_dynamic_cast(obj, TYPE_PC_DIMM)) {
> - DeviceState *dev = DEVICE(obj);
> - if (dev->realized) { /* only realized DIMMs matter */
> - *list = g_slist_prepend(*list, dev);
> - }
> - }
> -
> - object_child_foreach(obj, build_dimm_list, opaque);
> - return 0;
> -}
> -
> static ram_addr_t get_current_ram_size(void)
> {
> - GSList *list = NULL, *item;
> - ram_addr_t size = current_machine->ram_size;
> -
> - build_dimm_list(qdev_get_machine(), &list);
> - for (item = list; item; item = g_slist_next(item)) {
> - Object *obj = OBJECT(item->data);
> - if (!strcmp(object_get_typename(obj), TYPE_PC_DIMM)) {
> - size += object_property_get_int(obj, PC_DIMM_SIZE_PROP,
> - &error_abort);
> - }
> + MachineState *machine = MACHINE(qdev_get_machine());
> + if (machine->device_memory) {
> + return machine->ram_size + machine->device_memory->dimm_size;
> + } else {
> + return machine->ram_size;
> }
> - g_slist_free(list);
> -
> - return size;
> }
>
> static bool virtio_balloon_page_poison_support(void *opaque)
> diff --git a/include/hw/boards.h b/include/hw/boards.h
> index 6fbbfd56c8..397ec10468 100644
> --- a/include/hw/boards.h
> +++ b/include/hw/boards.h
> @@ -292,10 +292,12 @@ struct MachineClass {
> * @base: address in guest physical address space where the memory
> * address space for memory devices starts
> * @mr: address space container for memory devices
> + * @dimm_size: the sum of plugged DIMMs' sizes
> */
> typedef struct DeviceMemoryState {
> hwaddr base;
> MemoryRegion mr;
> + ram_addr_t dimm_size;
> } DeviceMemoryState;
>
> /**
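The win here is algorithmic: get_current_ram_size() previously walked the whole QOM tree on every balloon inflate/deflate step, while the cached counter makes it a constant-time read that is only maintained on the (rare) plug/unplug path. A minimal standalone sketch of that idea, using hypothetical stand-in types rather than the real QEMU structures:

```c
#include <stdint.h>

/* Hypothetical stand-ins for MachineState/DeviceMemoryState; not the QEMU API. */
typedef struct {
    uint64_t ram_size;   /* boot RAM */
    uint64_t dimm_size;  /* cached sum of plugged (non-NVDIMM) DIMM sizes */
} Machine;

/* Maintained only on the plug/unplug path, so the hot balloon path
 * never has to traverse the device tree. */
static void dimm_plug(Machine *m, uint64_t size)   { m->dimm_size += size; }
static void dimm_unplug(Machine *m, uint64_t size) { m->dimm_size -= size; }

/* O(1) replacement for the old per-call DIMM-list walk. */
static uint64_t current_ram_size(const Machine *m)
{
    return m->ram_size + m->dimm_size;
}
```

This mirrors the patch's structure: the counter is updated in the same two places that already register/unregister the DIMM's RAM, so it cannot drift from the device state.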
Acked-by: David Hildenbrand <david@redhat.com>
--
Thanks,
David / dhildenb
Thread overview:
[not found] <20230301062642.1058-1-xiqi2@huawei.com>
2023-03-01 6:38 ` [PATCH v2] virtio-balloon: optimize the virtio-balloon on the ARM platform Yangming
2023-03-01 8:16 ` David Hildenbrand [this message]
2023-03-08 0:42 ` Michael S. Tsirkin
2023-03-08 10:34 ` David Hildenbrand