From: Peter Xu <peterx@redhat.com>
To: Alex Williamson <alex.williamson@redhat.com>
Cc: qemu-devel@nongnu.org, kvm@vger.kernel.org
Subject: Re: [Qemu-devel] [RFC PATCH 5/5] vfio/quirks: Enable ioeventfd quirks to be handled by vfio directly
Date: Sun, 11 Feb 2018 10:38:12 +0800
Message-ID: <20180211023812.GG2783@xz-mi>
In-Reply-To: <20180209150933.658881ed@w520.home>
On Fri, Feb 09, 2018 at 03:09:33PM -0700, Alex Williamson wrote:
> On Fri, 9 Feb 2018 15:11:45 +0800
> Peter Xu <peterx@redhat.com> wrote:
>
> > On Tue, Feb 06, 2018 at 05:26:46PM -0700, Alex Williamson wrote:
> > > With vfio ioeventfd support, we can program vfio-pci to perform a
> > > specified BAR write when an eventfd is triggered. This allows the
> > > KVM ioeventfd to be wired directly to vfio-pci, entirely avoiding
> > > userspace handling for these events. On the same micro-benchmark
> > > where the ioeventfd got us to almost 90% of performance versus
> > > disabling the GeForce quirks, this gets us to within 95%.
> > >
> > > Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
> > > ---
> > > hw/vfio/pci-quirks.c | 42 ++++++++++++++++++++++++++++++++++++------
> > > 1 file changed, 36 insertions(+), 6 deletions(-)
> > >
> > > diff --git a/hw/vfio/pci-quirks.c b/hw/vfio/pci-quirks.c
> > > index e739efe601b1..35a4d5197e2d 100644
> > > --- a/hw/vfio/pci-quirks.c
> > > +++ b/hw/vfio/pci-quirks.c
> > > @@ -16,6 +16,7 @@
> > > #include "qemu/range.h"
> > > #include "qapi/error.h"
> > > #include "qapi/visitor.h"
> > > +#include <sys/ioctl.h>
> > > #include "hw/nvram/fw_cfg.h"
> > > #include "pci.h"
> > > #include "trace.h"
> > > @@ -287,13 +288,27 @@ static VFIOQuirk *vfio_quirk_alloc(int nr_mem)
> > > return quirk;
> > > }
> > >
> > > -static void vfio_ioeventfd_exit(VFIOIOEventFD *ioeventfd)
> > > +static void vfio_ioeventfd_exit(VFIOPCIDevice *vdev, VFIOIOEventFD *ioeventfd)
> > > {
> > > + struct vfio_device_ioeventfd vfio_ioeventfd;
> > > +
> > > QLIST_REMOVE(ioeventfd, next);
> > > +
> > > memory_region_del_eventfd(ioeventfd->mr, ioeventfd->addr, ioeventfd->size,
> > > ioeventfd->match_data, ioeventfd->data,
> > > &ioeventfd->e);
> > > +
> > > qemu_set_fd_handler(event_notifier_get_fd(&ioeventfd->e), NULL, NULL, NULL);
> > > +
> > > + vfio_ioeventfd.argsz = sizeof(vfio_ioeventfd);
> > > + vfio_ioeventfd.flags = ioeventfd->size;
> > > + vfio_ioeventfd.data = ioeventfd->data;
> > > + vfio_ioeventfd.offset = ioeventfd->region->fd_offset +
> > > + ioeventfd->region_addr;
> > > + vfio_ioeventfd.fd = -1;
> > > +
> > > + ioctl(vdev->vbasedev.fd, VFIO_DEVICE_IOEVENTFD, &vfio_ioeventfd);
> > > +
> > > event_notifier_cleanup(&ioeventfd->e);
> > > g_free(ioeventfd);
> > > }
> > > @@ -315,6 +330,8 @@ static VFIOIOEventFD *vfio_ioeventfd_init(VFIOPCIDevice *vdev,
> > > hwaddr region_addr)
> > > {
> > > VFIOIOEventFD *ioeventfd = g_malloc0(sizeof(*ioeventfd));
> > > + struct vfio_device_ioeventfd vfio_ioeventfd;
> > > + char vfio_enabled = '+';
> > >
> > > if (event_notifier_init(&ioeventfd->e, 0)) {
> > > g_free(ioeventfd);
> > > @@ -329,15 +346,28 @@ static VFIOIOEventFD *vfio_ioeventfd_init(VFIOPCIDevice *vdev,
> > > ioeventfd->region = region;
> > > ioeventfd->region_addr = region_addr;
> > >
> > > - qemu_set_fd_handler(event_notifier_get_fd(&ioeventfd->e),
> > > - vfio_ioeventfd_handler, NULL, ioeventfd);
> > > + vfio_ioeventfd.argsz = sizeof(vfio_ioeventfd);
> > > + vfio_ioeventfd.flags = ioeventfd->size;
> > > + vfio_ioeventfd.data = ioeventfd->data;
> > > + vfio_ioeventfd.offset = ioeventfd->region->fd_offset +
> > > + ioeventfd->region_addr;
> > > + vfio_ioeventfd.fd = event_notifier_get_fd(&ioeventfd->e);
> > > +
> > > + if (ioctl(vdev->vbasedev.fd,
> > > + VFIO_DEVICE_IOEVENTFD, &vfio_ioeventfd) != 0) {
> > > + qemu_set_fd_handler(event_notifier_get_fd(&ioeventfd->e),
> > > + vfio_ioeventfd_handler, NULL, ioeventfd);
> > > + vfio_enabled = '-';
> >
> > Would the performance be even slower if a new QEMU runs on an old
> > kernel, due to these ioeventfds (MMIO -> eventfd -> same MMIO again)?
> > If so, shall we enable this ioeventfd enhancement only if we detect
> > that the kernel supports this new feature (assuming this feature bit
> > won't change after the VM starts)?
>
> No, it's actually still a significant improvement to enable the KVM
> ioeventfd even if we can't enable the vfio side.  My testing shows
> that the KVM ioeventfd alone accounts for slightly more than half of
> the total improvement, so I don't see any reason to make this depend
> on both ends being available.  Thanks,
The numbers (83% -> 90% -> 95%) were mentioned in different patches, and
I didn't catch all of them.  Sorry.
And obviously the userspace code path is different too, which I had also
missed.  It makes sense that the ioeventfd path should always be faster.
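For my own notes, here is how I read the fast path, written up as a
standalone sketch rather than the actual QEMU helpers (the fields are
the ones used in the hunks above plus my reading of the uapi update in
patch 4/5, which isn't quoted here, and the helper name is made up):

    #include <stdint.h>
    #include <sys/ioctl.h>
    #include <linux/vfio.h>    /* VFIO_DEVICE_IOEVENTFD, needs new enough headers */

    /*
     * Ask the vfio kernel driver to write 'data' ('size' bytes) at
     * 'offset' into the device region whenever the eventfd 'fd' fires.
     * Passing fd == -1 tears the binding down again, which is what
     * vfio_ioeventfd_exit() does in the hunk above.
     */
    static int vfio_bind_ioeventfd(int device_fd, uint64_t offset,
                                   uint64_t data, unsigned int size, int fd)
    {
        struct vfio_device_ioeventfd ioeventfd = {
            .argsz  = sizeof(ioeventfd),
            .flags  = size,       /* access size, as the hunks above set it */
            .offset = offset,     /* region fd_offset + offset within the BAR */
            .data   = data,
            .fd     = fd,
        };

        return ioctl(device_fd, VFIO_DEVICE_IOEVENTFD, &ioeventfd);
    }

So on the fast path KVM signals the eventfd on the guest write and the
kernel side of vfio performs the BAR write, with no exit to QEMU at all.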
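And for the fallback I was asking about: if that ioctl fails on an old
kernel, the KVM ioeventfd is still installed, so the guest write only
kicks an eventfd and QEMU replays the access through the vfio region fd,
roughly like this (again my own sketch, not the handler from patch 2/5;
little-endian host assumed for brevity):

    #include <stdint.h>
    #include <unistd.h>        /* read(), pwrite() */

    /*
     * Consume one eventfd kick and replay the deferred BAR write by
     * writing through the vfio device fd at the region's file offset.
     */
    static void replay_ioeventfd_write(int device_fd, int efd,
                                       uint64_t fd_offset, uint64_t region_addr,
                                       uint64_t data, unsigned int size)
    {
        uint64_t count;

        if (read(efd, &count, sizeof(count)) == sizeof(count)) {
            /* Low 'size' bytes of 'data' hold the value on an LE host. */
            pwrite(device_fd, &data, size, fd_offset + region_addr);
        }
    }

The vCPU never takes the heavyweight MMIO exit in that case, the write
is just replayed asynchronously from the main loop, which explains why
the KVM ioeventfd alone already buys more than half of the gain.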
Thanks,
--
Peter Xu