From: Benjamin Herrenschmidt <benh@kernel.crashing.org>
To: aafabbri <aafabbri@cisco.com>
Cc: Alexey Kardashevskiy <aik@au1.ibm.com>,
kvm@vger.kernel.org, Paul Mackerras <pmac@au1.ibm.com>,
"linux-pci@vger.kernel.org" <linux-pci@vger.kernel.org>,
qemu-devel <qemu-devel@nongnu.org>,
iommu <iommu@lists.linux-foundation.org>,
chrisw <chrisw@sous-sol.org>,
Alex Williamson <alex.williamson@redhat.com>,
Avi Kivity <avi@redhat.com>,
linuxppc-dev <linuxppc-dev@lists.ozlabs.org>,
benve@cisco.com
Subject: Re: [Qemu-devel] kvm PCI assignment & VFIO ramblings
Date: Tue, 23 Aug 2011 07:49:45 +1000
Message-ID: <1314049785.7662.44.camel@pasglop>
In-Reply-To: <CA781A61.FB15%aafabbri@cisco.com>

> > I wouldn't use uiommu for that.
>
> Any particular reason besides saving a file descriptor?
>
> We use it today, and it seems like a cleaner API than what you propose
> changing it to.
Well, for one, we'd be back to square one with respect to the grouping
constraints.
.../...
> If we in singleton-group land were building our own "groups" which were sets
> of devices sharing the IOMMU domains we wanted, I suppose we could do away
> with uiommu fds, but it sounds like the current proposal would create 20
> singleton groups (x86 iommu w/o PCI bridges => all devices are partitionable
> endpoints). Asking me to ioctl(inherit) them together into a blob sounds
> worse than the current explicit uiommu API.
I'd rather have an API to create super-groups (groups of groups)
statically, which you could then use just like normal groups through the
same interface. That creation/management process could be done via a
simple command-line utility or via sysfs banging, whatever...
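To make that concrete, here's a minimal sketch of what such a utility
might boil down to. The ioctl name/number and the /dev/vfio/* paths are
all invented for illustration, none of this is a proposed binding:

  #include <fcntl.h>
  #include <stdio.h>
  #include <sys/ioctl.h>
  #include <linux/ioctl.h>

  /* Invented for this sketch: fold the group passed as argument into
   * the caller's iommu domain, forming a super-group. */
  #define VFIO_GROUP_MERGE	_IOW(';', 100, int)

  int main(void)
  {
  	int g1 = open("/dev/vfio/group0", O_RDWR);
  	int g2 = open("/dev/vfio/group1", O_RDWR);

  	if (g1 < 0 || g2 < 0) {
  		perror("open group");
  		return 1;
  	}

  	/* After this, g1 behaves as a normal group that happens to
  	 * span both sets of devices. */
  	if (ioctl(g1, VFIO_GROUP_MERGE, g2) < 0) {
  		perror("VFIO_GROUP_MERGE");
  		return 1;
  	}
  	return 0;
  }

The point being that once the super-group exists, userspace never needs
a separate uiommu handle at all.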
Cheers,
Ben.
> Thanks,
> Aaron
>
> >
> > Another option is to make that static configuration APIs via special
> > ioctls (or even netlink if you really like it), to change the grouping
> > on architectures that allow it.
> >
> > Cheers.
> > Ben.
> >
> >>
> >> -Aaron
> >>
> >>> As necessary in the future, we can
> >>> define a more high performance dma mapping interface for streaming dma
> >>> via the group fd. I expect we'll also include architecture specific
> >>> group ioctls to describe features and capabilities of the iommu. The
> >>> group fd will need to prevent concurrent open()s to maintain a 1:1 group
> >>> to userspace process ownership model.
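(To illustrate "streaming dma via the group fd": a rough sketch of what
such a map call might look like. Every name below is invented, this is
not a proposed interface:)

  #include <linux/ioctl.h>
  #include <linux/types.h>

  /* Invented sketch of a map/unmap pair driven through the group fd. */
  struct vfio_dma_map {
  	__u64 vaddr;	/* process virtual address to map from */
  	__u64 iova;	/* bus address the device will use */
  	__u64 size;	/* length of the mapping in bytes */
  	__u32 flags;	/* e.g. read/write permission bits */
  };

  #define VFIO_GROUP_MAP_DMA	_IOW(';', 101, struct vfio_dma_map)
  #define VFIO_GROUP_UNMAP_DMA	_IOW(';', 102, struct vfio_dma_map)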
> >>>
> >>> Also on the table is supporting non-PCI devices with vfio. To do this,
> >>> we need to generalize the read/write/mmap and irq eventfd interfaces.
> >>> We could keep the same model of segmenting the device fd address space,
> >>> perhaps adding ioctls to define the segment offset bit position or we
> >>> could split each region into its own fd (VFIO_GET_PCI_BAR_FD(0),
> >>> VFIO_GET_PCI_CONFIG_FD(), VFIO_GET_MMIO_FD(3)), though we're already
> >>> suffering some degree of fd bloat (group fd, device fd(s), interrupt
> >>> event fd(s), per resource fd, etc). For interrupts we can overload
> >>> VFIO_SET_IRQ_EVENTFD to be either PCI INTx or non-PCI irq (do non-PCI
> >>> devices support MSI?).
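(Purely illustrative again: reusing the hypothetical
VFIO_GET_PCI_BAR_FD/VFIO_GET_PCI_CONFIG_FD names from the quoted text,
the per-region-fd variant might look something like this from userspace;
the ioctl numbers are made up:)

  #include <stdint.h>
  #include <unistd.h>
  #include <sys/ioctl.h>
  #include <linux/ioctl.h>

  #define VFIO_GET_PCI_BAR_FD	_IOW(';', 110, int)	/* arg: BAR index */
  #define VFIO_GET_PCI_CONFIG_FD	_IO(';', 111)

  void poke(int device_fd)
  {
  	uint32_t val;
  	int bar0 = ioctl(device_fd, VFIO_GET_PCI_BAR_FD, 0);
  	int cfg  = ioctl(device_fd, VFIO_GET_PCI_CONFIG_FD);

  	/* Each region becomes plain read/write/mmap on its own fd,
  	 * no segmented device-fd address space needed. */
  	pread(cfg, &val, sizeof(val), 0);	/* vendor/device ID */
  	pread(bar0, &val, sizeof(val), 0);	/* first dword of BAR 0 */
  }

Which does make the fd-bloat concern above rather visible, of course.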
> >>>
> >>> For qemu, these changes imply we'd only support a model where we have a
> >>> 1:1 group to iommu domain. The current vfio driver could probably
> >>> become vfio-pci as we might end up with more target specific vfio
> >>> drivers for non-pci. PCI should be able to maintain a simple -device
> >>> vfio-pci,host=bb:dd.f to enable hotplug of individual devices. We'll
> >>> need to come up with extra options when we need to expose groups to the
> >>> guest for pvdma.
> >>>
> >>> Hope that captures it, feel free to jump in with corrections and
> >>> suggestions. Thanks,
> >>>
> >>> Alex
> >>>
> >
> >