From: David Hildenbrand <david@redhat.com>
To: qemu-devel@nongnu.org
Cc: "David Hildenbrand" <david@redhat.com>,
"Michael S. Tsirkin" <mst@redhat.com>,
"Stefan Hajnoczi" <stefanha@redhat.com>,
"Igor Mammedov" <imammedo@redhat.com>,
"Paolo Bonzini" <pbonzini@redhat.com>,
"Peter Xu" <peterx@redhat.com>,
"Philippe Mathieu-Daudé" <philmd@linaro.org>
Subject: [PATCH v3 0/3] vhost: memslot handling improvements
Date: Wed, 3 May 2023 19:21:18 +0200
Message-ID: <20230503172121.733642-1-david@redhat.com>
Following up on my previous work to let virtio-mem consume multiple
memslots dynamically [1], which requires precise accounting of used vs.
reserved memslots, I realized that vhost makes this harder: in the
vhost-user case it filters out some memory region sections (so they
don't consume a memslot), which throws off the whole memslot
accounting.
This series fixes what I found to be broken and prepares for more work on
[1]. Further, it cleans up merge checks that I consider unnecessary.
[1] https://lkml.kernel.org/r/20211027124531.57561-8-david@redhat.com
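To illustrate the idea, here is a minimal, self-contained C sketch. The
names (vhost_dev_sketch, section_sketch, account_section, has_fd) are
made up for illustration and are not the data structures or helpers used
in this series; the point is only that the backend-specific filtering is
kept, but filtered sections are counted separately so the used-memslot
count stays precise:

#include <stdbool.h>
#include <stdint.h>

/* Hypothetical, simplified stand-in for a vhost device. */
struct vhost_dev_sketch {
    bool vhost_user;          /* backend type, simplified */
    unsigned used_memslots;   /* sections that really consume a memslot */
    unsigned filtered;        /* sections skipped by the backend filter */
};

/* Hypothetical stand-in for a memory region section. */
struct section_sketch {
    bool has_fd;              /* vhost-user can only map fd-backed RAM */
    uint64_t size;
};

static bool section_is_filtered(const struct vhost_dev_sketch *dev,
                                const struct section_sketch *s)
{
    /* Only vhost-user filters, and only sections it cannot map. */
    return dev->vhost_user && !s->has_fd;
}

static void account_section(struct vhost_dev_sketch *dev,
                            const struct section_sketch *s)
{
    if (section_is_filtered(dev, s)) {
        dev->filtered++;      /* tracked, but does not use a memslot */
    } else {
        dev->used_memslots++; /* consumes one of the backend's memslots */
    }
}

With the two counters kept apart, code that reserves memslots (e.g. for
virtio-mem) can rely on used_memslots without being skewed by sections
the backend silently skipped.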
Cc: "Michael S. Tsirkin" <mst@redhat.com>
Cc: Stefan Hajnoczi <stefanha@redhat.com>
Cc: Igor Mammedov <imammedo@redhat.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Peter Xu <peterx@redhat.com>
Cc: "Philippe Mathieu-Daudé" <philmd@linaro.org>
v2 -> v3:
- Add ACKs
- "softmmu/physmem: Fixup qemu_ram_block_from_host() documentation"
-- Fix typo in description
v1 -> v2:
- "vhost: Rework memslot filtering and fix "used_memslot" tracking"
-- New approach: keep filtering, but make filtering less generic and
track separately. This should keep any existing setups working.
- "softmmu/physmem: Fixup qemu_ram_block_from_host() documentation"
-- As requested by Igor
David Hildenbrand (3):
vhost: Rework memslot filtering and fix "used_memslot" tracking
vhost: Remove vhost_backend_can_merge() callback
softmmu/physmem: Fixup qemu_ram_block_from_host() documentation
hw/virtio/vhost-user.c | 21 ++---------
hw/virtio/vhost-vdpa.c | 1 -
hw/virtio/vhost.c | 62 ++++++++++++++++++++++++-------
include/exec/cpu-common.h | 15 ++++++++
include/hw/virtio/vhost-backend.h | 9 +----
softmmu/physmem.c | 17 ---------
6 files changed, 68 insertions(+), 57 deletions(-)
--
2.40.0
Thread overview: 9+ messages
2023-05-03 17:21 David Hildenbrand [this message]
2023-05-03 17:21 ` [PATCH v3 1/3] vhost: Rework memslot filtering and fix "used_memslot" tracking David Hildenbrand
2023-05-23 15:34 ` Peter Xu
2023-05-23 15:42 ` David Hildenbrand
2023-05-03 17:21 ` [PATCH v3 2/3] vhost: Remove vhost_backend_can_merge() callback David Hildenbrand
2023-05-23 15:40 ` Peter Xu
2023-05-03 17:21 ` [PATCH v3 3/3] softmmu/physmem: Fixup qemu_ram_block_from_host() documentation David Hildenbrand
2023-05-23 15:42 ` Peter Xu
2023-05-23 14:25 ` [PATCH v3 0/3] vhost: memslot handling improvements David Hildenbrand