From: Cornelia Huck <cohuck@redhat.com>
To: Peter Xu <peterx@redhat.com>
Cc: Alex Williamson <alex.williamson@redhat.com>,
	qemu-devel@nongnu.org, kvm@vger.kernel.org
Subject: Re: [Qemu-devel] [RFC PATCH 0/3] Balloon inhibit enhancements
Date: Wed, 18 Jul 2018 11:36:40 +0200
Message-ID: <20180718113640.7b3b905d.cohuck@redhat.com>
In-Reply-To: <20180718064803.GA6197@xz-mi>

On Wed, 18 Jul 2018 14:48:03 +0800
Peter Xu <peterx@redhat.com> wrote:

> On Tue, Jul 17, 2018 at 04:47:31PM -0600, Alex Williamson wrote:
> > Directly assigned vfio devices have never been compatible with
> > ballooning.  Zapping MADV_DONTNEED pages happens completely
> > independently of vfio page pinning and IOMMU mapping, leaving us with
> > inconsistent GPA to HPA mapping between vCPUs and assigned devices
> > when the balloon deflates.  Mediated devices can theoretically do
> > better, if we make the assumption that the mdev vendor driver is fully
> > synchronized to the actual working set of the guest driver.  In that
> > case the guest balloon driver should never be able to allocate an mdev
> > pinned page for balloon inflation.  Unfortunately, QEMU can't know the
> > workings of the vendor driver pinning, and doesn't actually know the
> > difference between mdev devices and directly assigned devices.  Until
> > we can sort out how the vfio IOMMU backend can tell us if ballooning
> > is safe, the best approach is to disable ballooning any time a vfio
> > device is attached.
> > 
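To make that failure mode a bit more concrete: here is a rough
userspace illustration (not QEMU code; reading PFNs from
/proc/self/pagemap needs root, otherwise the PFN field reads back as
zero).  The backing host page will typically change across
MADV_DONTNEED plus a re-fault, which is exactly the change a pinned
IOMMU mapping never sees:

#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>

/* Read the PFN backing a virtual address from /proc/self/pagemap. */
static uint64_t pfn_of(void *addr)
{
    uint64_t ent = 0;
    int fd = open("/proc/self/pagemap", O_RDONLY);

    pread(fd, &ent, sizeof(ent), ((uintptr_t)addr / 4096) * sizeof(ent));
    close(fd);
    return ent & ((1ULL << 55) - 1);    /* bits 0-54: PFN if present */
}

int main(void)
{
    char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

    memset(p, 1, 4096);                  /* fault the page in */
    printf("before: pfn %#llx\n", (unsigned long long)pfn_of(p));

    madvise(p, 4096, MADV_DONTNEED);     /* "inflate": drop the page */
    memset(p, 2, 4096);                  /* "deflate": fault a new one */
    printf("after:  pfn %#llx\n", (unsigned long long)pfn_of(p));
    return 0;
}
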
> > To do that, simply make the balloon inhibitor a counter rather than a
> > boolean, fix up a case where KVM can then simply use the inhibit
> > interface, and inhibit ballooning any time a vfio device is attached.
> > I'm expecting we'll expose some sort of flag similar to
> > KVM_CAP_SYNC_MMU from the vfio IOMMU for cases where we can resolve
> > this.  An addition we could consider here would be yet another device
> > option for vfio, such as x-disable-balloon-inhibit, in case there are
> > mdev devices that behave in a manner compatible with ballooning.
> > 
> > Please let me know if this looks like a good idea.  Thanks,  
> 
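The counter part seems straightforward; just for illustration, the
idea boils down to something like this (a sketch with plain C11
atomics and made-up names, not the actual patch, which presumably
just adapts the existing inhibit interface):

#include <stdatomic.h>
#include <stdbool.h>

static atomic_int balloon_inhibit_count;

/* Each user that needs ballooning disabled bumps the counter on
 * attach and drops it again on detach. */
void balloon_inhibit(bool state)
{
    atomic_fetch_add(&balloon_inhibit_count, state ? 1 : -1);
}

/* Ballooning stays inhibited as long as anyone still objects. */
bool balloon_is_inhibited(void)
{
    return atomic_load(&balloon_inhibit_count) > 0;
}

A vfio device would then call balloon_inhibit(true) when it is
attached and balloon_inhibit(false) when it goes away, and the same
interface covers the KVM no-synchronous-mmu case from patch 2.
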
> IMHO patches 1-2 are good cleanup as standalone patches...
> 
> I honestly have no idea whether people would want to use vfio-pci
> and the balloon device at the same time.  After all, vfio-pci is
> mostly used by people who care about performance, so I would guess
> that they don't really care about thin provisioning of memory at all,
> and the usage scenario might barely exist.  Is that the major reason
> we'd just like to disable it (which makes sense to me)?

Don't people use vfio-pci as well if they want some special
capabilities from the passed-through device? (At least that's the main
use case for vfio-ccw, not any performance considerations.)

> 
> I'm wondering what we'd do if we wanted to support that some day...
> Would it work if we just let vfio-pci devices register some guest
> memory invalidation hook (just like the IOMMU notifiers, but for the
> guest memory address space instead), and then map/unmap the IOMMU
> pages there for the vfio-pci device, to make sure the inflated balloon
> pages are not mapped and that new pages are remapped with the correct
> HPA after deflation?  This is a pure question out of curiosity, and
> for sure it makes little sense if the answer to the first question
> above is positive.
> 
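Just to make sure I understand the idea: the hook you have in mind
would presumably look roughly like this?  (Purely hypothetical sketch;
no such interface exists today, and all names are made up.)

#include <stdint.h>

typedef struct GuestRamDiscardHook GuestRamDiscardHook;

struct GuestRamDiscardHook {
    /* Inflation: the GPA range was discarded, so unmap it from the
     * device's IOMMU before the stale HPA can be used for DMA. */
    void (*discard)(GuestRamDiscardHook *hook,
                    uint64_t gpa, uint64_t size);
    /* Deflation: the range is populated again, so re-pin and re-map
     * it with the new HPA. */
    void (*populate)(GuestRamDiscardHook *hook,
                     uint64_t gpa, uint64_t size);
};

/* Hypothetical registration against the guest memory address space. */
void guest_ram_register_discard_hook(GuestRamDiscardHook *hook);
void guest_ram_unregister_discard_hook(GuestRamDiscardHook *hook);

Whether that would be worth the complexity is another question, of
course.
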
> Thanks,
> 

Thread overview: 29+ messages
2018-07-17 22:47 [Qemu-devel] [RFC PATCH 0/3] Balloon inhibit enhancements Alex Williamson
2018-07-17 22:47 ` [Qemu-devel] [RFC PATCH 1/3] balloon: Allow nested inhibits Alex Williamson
2018-07-18  6:40   ` Peter Xu
2018-07-18 16:37     ` Alex Williamson
2018-07-19  4:45       ` Peter Xu
2018-07-17 22:47 ` [Qemu-devel] [RFC PATCH 2/3] kvm: Use inhibit to prevent ballooning without synchronous mmu Alex Williamson
2018-07-17 22:47 ` [Qemu-devel] [RFC PATCH 3/3] vfio: Inhibit ballooning Alex Williamson
2018-07-18  6:48 ` [Qemu-devel] [RFC PATCH 0/3] Balloon inhibit enhancements Peter Xu
2018-07-18  9:36   ` Cornelia Huck [this message]
2018-07-19  4:49     ` Peter Xu
2018-07-19  8:42       ` Cornelia Huck
2018-07-19  9:30         ` Peter Xu
2018-07-19 15:31       ` Alex Williamson
2018-07-18 16:31   ` Alex Williamson
2018-07-19  5:40     ` Peter Xu
2018-07-19 15:01       ` Alex Williamson
2018-07-20  2:56         ` Peter Xu
2018-07-30 13:34 ` Michael S. Tsirkin
2018-07-30 13:54   ` David Hildenbrand
2018-07-30 13:59     ` Michael S. Tsirkin
2018-07-30 14:46       ` David Hildenbrand
2018-07-30 14:58         ` Michael S. Tsirkin
2018-07-30 15:05           ` David Hildenbrand
2018-07-30 14:39   ` Alex Williamson
2018-07-30 14:51     ` Michael S. Tsirkin
2018-07-30 15:01       ` Alex Williamson
2018-07-30 15:49         ` Michael S. Tsirkin
2018-07-30 16:42           ` Alex Williamson
2018-07-30 17:35             ` Michael S. Tsirkin
