From: "Michael S. Tsirkin" <mst@redhat.com>
To: David Hildenbrand <david@redhat.com>
Cc: Pankaj Gupta <pankaj.gupta.linux@gmail.com>,
	Wei Yang <richard.weiyang@linux.alibaba.com>,
	Pankaj Gupta <pankaj.gupta@cloud.ionos.com>,
	qemu-devel@nongnu.org, Peter Xu <peterx@redhat.com>,
	"Dr. David Alan Gilbert" <dgilbert@redhat.com>,
	Auger Eric <eric.auger@redhat.com>,
	Alex Williamson <alex.williamson@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	teawater <teawaterz@linux.alibaba.com>,
	Jonathan Cameron <Jonathan.Cameron@huawei.com>,
	Igor Mammedov <imammedo@redhat.com>,
	Marek Kedzierski <mkedzier@redhat.com>
Subject: Re: [PATCH v5 00/11] virtio-mem: vfio support
Date: Wed, 27 Jan 2021 07:45:03 -0500	[thread overview]
Message-ID: <20210127074407-mutt-send-email-mst@kernel.org> (raw)
In-Reply-To: <20210121110540.33704-1-david@redhat.com>

On Thu, Jan 21, 2021 at 12:05:29PM +0100, David Hildenbrand wrote:
> A virtio-mem device manages a memory region in guest physical address
> space, represented as a single (currently large) memory region in QEMU,
> mapped into system memory address space. Before the guest is allowed to use
> memory blocks, it must coordinate with the hypervisor (plug blocks). After
> a reboot, all memory is usually unplugged - when the guest comes up, it
> detects the virtio-mem device and selects memory blocks to plug (based on
> resize requests from the hypervisor).
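> 
> For illustration, a minimal sketch of how such a device could be
> instantiated on the QEMU command line (device ids and sizes here are made
> up, not taken from this series):
> 
>     qemu-system-x86_64 ... \
>         -m 4G,maxmem=20G \
>         -object memory-backend-ram,id=mem0,size=16G \
>         -device virtio-mem-pci,id=vmem0,memdev=mem0,requested-size=4G,block-size=128M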
> 
> Memory hot(un)plug consists of (un)plugging memory blocks via a virtio-mem
> device (triggered by the guest). When unplugging blocks, we discard the
> memory - similar to memory balloon inflation. In contrast to memory
> ballooning, we always know which memory blocks a guest may actually use -
> especially during a reboot, after a crash, or after kexec (and during
> hibernation as well). Guests agree not to access unplugged memory again,
> especially not via DMA.
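> 
> Conceptually, discarding the memory of an unplugged block boils down to
> something like the following sketch for anonymous memory (QEMU's actual
> helper, ram_block_discard_range(), also handles file-backed and huge-page
> memory):
> 
>     #include <sys/mman.h>
> 
>     /* Free the backing pages of an unplugged block; later reads return
>      * zero-filled pages. */
>     static int discard_block(void *addr, size_t size)
>     {
>         return madvise(addr, size, MADV_DONTNEED);
>     }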
> 
> The issue with vfio is that it cannot deal with random discards - for this
> reason, virtio-mem and vfio are currently mutually exclusive. In
> particular, vfio would currently map the whole memory region (with possibly
> only few or no plugged blocks), resulting in all pages getting pinned and
> therefore in a higher memory consumption than expected (rendering
> virtio-mem basically useless in these environments).
> 
> To make vfio work nicely with virtio-mem, we have to map only the plugged
> blocks, and map/unmap properly when plugging/unplugging blocks (including
> discarding of RAM when unplugging). We achieve that by using a new notifier
> mechanism that communicates changes.
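> 
> Roughly, a listener such as vfio registers with the RamDiscardMgr of a
> memory region and gets notified when ranges get populated (plugged) or
> discarded (unplugged); a sketch of the idea (names and signatures
> simplified, see patch #1 for the actual interface):
> 
>     typedef struct RamDiscardListener RamDiscardListener;
>     struct RamDiscardListener {
>         /* Range was plugged (populated) by the guest: e.g., vfio maps it. */
>         int (*notify_populate)(RamDiscardListener *rdl,
>                                const MemoryRegion *mr, ram_addr_t offset,
>                                ram_addr_t size);
>         /* Range was unplugged (discarded): e.g., vfio unmaps it. */
>         void (*notify_discard)(RamDiscardListener *rdl,
>                                const MemoryRegion *mr, ram_addr_t offset,
>                                ram_addr_t size);
>     };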

For the series:

Acked-by: Michael S. Tsirkin <mst@redhat.com>

For the virtio bits:

Reviewed-by: Michael S. Tsirkin <mst@redhat.com>

This needs to go through the vfio tree, I assume.


> It's important to map memory at the granularity at which we could see
> unmaps again (-> the virtio-mem block size) - so when plugging, e.g.,
> 100 MB of consecutive memory with a block size of 2 MB, we need 50
> mappings. When unmapping, we can use a single vfio_unmap call for the
> applicable range. We expect that, in the future, the block size of
> virtio-mem devices used with vfio will be configured fairly large by the
> user (e.g., 128 MB, 1 GB, ...), to not run out of mappings and to improve
> hot(un)plug performance, but it will depend on the setup.
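> 
> For reference, with the vfio type1 UAPI the mapping part boils down to
> something like this sketch (simplified, not the actual hw/vfio/common.c
> code; error handling trimmed):
> 
>     #include <errno.h>
>     #include <stdint.h>
>     #include <sys/ioctl.h>
>     #include <linux/vfio.h>
> 
>     /* Map a plugged range in virtio-mem block-size granularity, so that
>      * individual blocks can be unmapped again later. */
>     static int map_plugged_range(int container_fd, uint64_t iova,
>                                  void *vaddr, uint64_t size,
>                                  uint64_t block_size)
>     {
>         for (uint64_t off = 0; off < size; off += block_size) {
>             struct vfio_iommu_type1_dma_map map = {
>                 .argsz = sizeof(map),
>                 .flags = VFIO_DMA_MAP_FLAG_READ | VFIO_DMA_MAP_FLAG_WRITE,
>                 .vaddr = (uintptr_t)vaddr + off,
>                 .iova = iova + off,
>                 .size = block_size,
>             };
> 
>             if (ioctl(container_fd, VFIO_IOMMU_MAP_DMA, &map)) {
>                 return -errno;
>             }
>         }
>         return 0;
>     }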
> 
> More info regarding virtio-mem can be found at:
>     https://virtio-mem.gitlab.io/
> 
> v5 is located at:
>   git@github.com:davidhildenbrand/qemu.git virtio-mem-vfio-v5
> 
> v4 -> v5:
> - "vfio: Support for RamDiscardMgr in the !vIOMMU case"
> -- Added more assertions for granularity vs. iommu supported pagesize
> - "vfio: Sanity check maximum number of DMA mappings with RamDiscardMgr"
> -- Fix accounting of mappings
> - "vfio: Disable only uncoordinated discards for VFIO_TYPE1 iommus"
> -- Fence off SPAPR and add some comments regarding future support.
> -- Tweak patch description
> - Rebase and retest
> 
> v3 -> v4:
> - "vfio: Query and store the maximum number of DMA mappings
> -- Limit the patch to querying and storing only
> -- Renamed to "vfio: Query and store the maximum number of possible DMA
>    mappings"
> - "vfio: Support for RamDiscardMgr in the !vIOMMU case"
> -- Remove sanity checks / warning the user
> - "vfio: Sanity check maximum number of DMA mappings with RamDiscardMgr"
> -- Perform sanity checks by looking at the number of memslots and all
>    registered RamDiscardMgr sections
> - Rebase and retest
> - Reshuffled the patches slightly
> 
> v2 -> v3:
> - Rebased + retested
> - Fixed some typos
> - Added RB's
> 
> v1 -> v2:
> - "memory: Introduce RamDiscardMgr for RAM memory regions"
> -- Fix some errors in the documentation
> -- Make register_listener() notify about populated parts and
>    unregister_listener() notify about discarding populated parts, to
>    simplify future locking inside virtio-mem, when handling requests via a
>    separate thread.
> - "vfio: Query and store the maximum number of DMA mappings"
> -- Query number of mappings and track mappings (except for vIOMMU)
> - "vfio: Support for RamDiscardMgr in the !vIOMMU case"
> -- Adapt to RamDiscardMgr changes and warn via generic DMA reservation
> - "vfio: Support for RamDiscardMgr in the vIOMMU case"
> -- Use vmstate priority to handle migration dependencies
> 
> RFC -> v1:
> - VFIO migration code. Due to missing kernel support, I cannot really test
>   if that part works.
> - Understand/test/document vIOMMU implications, also regarding migration
> - Nicer ram_block_discard_disable/require handling.
> - s/SparseRAMHandler/RamDiscardMgr/, refactorings, cleanups, documentation,
>   testing, ...
> 
> David Hildenbrand (11):
>   memory: Introduce RamDiscardMgr for RAM memory regions
>   virtio-mem: Factor out traversing unplugged ranges
>   virtio-mem: Implement RamDiscardMgr interface
>   vfio: Support for RamDiscardMgr in the !vIOMMU case
>   vfio: Query and store the maximum number of possible DMA mappings
>   vfio: Sanity check maximum number of DMA mappings with RamDiscardMgr
>   vfio: Support for RamDiscardMgr in the vIOMMU case
>   softmmu/physmem: Don't use atomic operations in
>     ram_block_discard_(disable|require)
>   softmmu/physmem: Extend ram_block_discard_(require|disable) by two
>     discard types
>   virtio-mem: Require only coordinated discards
>   vfio: Disable only uncoordinated discards for VFIO_TYPE1 iommus
> 
>  hw/vfio/common.c               | 348 +++++++++++++++++++++++++++++++--
>  hw/virtio/virtio-mem.c         | 347 ++++++++++++++++++++++++++++----
>  include/exec/memory.h          | 249 ++++++++++++++++++++++-
>  include/hw/vfio/vfio-common.h  |  13 ++
>  include/hw/virtio/virtio-mem.h |   3 +
>  include/migration/vmstate.h    |   1 +
>  softmmu/memory.c               |  22 +++
>  softmmu/physmem.c              | 108 +++++++---
>  8 files changed, 1007 insertions(+), 84 deletions(-)
> 
> -- 
> 2.29.2



Thread overview: 25+ messages
2021-01-21 11:05 [PATCH v5 00/11] virtio-mem: vfio support David Hildenbrand
2021-01-21 11:05 ` [PATCH v5 01/11] memory: Introduce RamDiscardMgr for RAM memory regions David Hildenbrand
2021-02-16 18:50   ` David Hildenbrand
2021-01-21 11:05 ` [PATCH v5 02/11] virtio-mem: Factor out traversing unplugged ranges David Hildenbrand
2021-01-21 11:05 ` [PATCH v5 03/11] virtio-mem: Implement RamDiscardMgr interface David Hildenbrand
2021-01-27 20:14   ` Dr. David Alan Gilbert
2021-01-27 20:20     ` David Hildenbrand
2021-02-22 11:29     ` David Hildenbrand
2021-01-21 11:05 ` [PATCH v5 04/11] vfio: Support for RamDiscardMgr in the !vIOMMU case David Hildenbrand
2021-01-21 11:05 ` [PATCH v5 05/11] vfio: Query and store the maximum number of possible DMA mappings David Hildenbrand
2021-01-21 11:05 ` [PATCH v5 06/11] vfio: Sanity check maximum number of DMA mappings with RamDiscardMgr David Hildenbrand
2021-02-16 18:34   ` Alex Williamson
2021-01-21 11:05 ` [PATCH v5 07/11] vfio: Support for RamDiscardMgr in the vIOMMU case David Hildenbrand
2021-02-16 18:34   ` Alex Williamson
2021-01-21 11:05 ` [PATCH v5 08/11] softmmu/physmem: Don't use atomic operations in ram_block_discard_(disable|require) David Hildenbrand
2021-01-21 11:05 ` [PATCH v5 09/11] softmmu/physmem: Extend ram_block_discard_(require|disable) by two discard types David Hildenbrand
2021-01-21 11:05 ` [PATCH v5 10/11] virtio-mem: Require only coordinated discards David Hildenbrand
2021-01-21 11:05 ` [PATCH v5 11/11] vfio: Disable only uncoordinated discards for VFIO_TYPE1 iommus David Hildenbrand
2021-02-16 19:03   ` Alex Williamson
2021-01-27 12:45 ` Michael S. Tsirkin [this message]
2021-02-08  8:28   ` [PATCH v5 00/11] virtio-mem: vfio support David Hildenbrand
2021-02-15 14:03     ` David Hildenbrand
2021-02-16 18:33       ` Alex Williamson
2021-02-16 18:49         ` David Hildenbrand
2021-02-16 19:04           ` Alex Williamson
