From: Alex Williamson <alex.williamson@redhat.com>
To: chrisw@sous-sol.org, aik@au1.ibm.com, pmac@au1.ibm.com,
dwg@au1.ibm.com, joerg.roedel@amd.com, agraf@suse.de,
benve@cisco.com, aafabbri@cisco.com, B08248@freescale.com,
B07421@freescale.com, avi@redhat.com, kvm@vger.kernel.org,
qemu-devel@nongnu.org, iommu@lists.linux-foundation.org,
linux-pci@vger.kernel.org
Cc: alex.williamson@redhat.com
Subject: [Qemu-devel] [RFC PATCH 2/5] intel-iommu: Implement iommu_device_group
Date: Thu, 01 Sep 2011 13:50:37 -0600
Message-ID: <20110901195037.2391.62303.stgit@s20.home>
In-Reply-To: <20110901194915.2391.97400.stgit@s20.home>
We generally have BDF granularity for devices, so we just need
to make sure devices aren't hidden behind PCIe-to-PCI bridges.
We can then make up a group number that's simply the concatenated
seg|bus|dev|fn so we don't have to track them (not that users
should depend on that).
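(Illustration only, not part of the patch: on little-endian x86 the
union used below packs the ID as group = segment << 16 | bus << 8 |
devfn. A standalone sketch of that packing, using a hypothetical
pack_group() helper:

    #include <stdint.h>
    #include <stdio.h>

    /* Mirrors the union in intel_iommu_device_group() on little-endian:
     * devfn in bits 0-7, bus in bits 8-15, segment in bits 16-31. */
    static uint32_t pack_group(uint16_t segment, uint8_t bus, uint8_t devfn)
    {
            return ((uint32_t)segment << 16) | ((uint32_t)bus << 8) | devfn;
    }

    int main(void)
    {
            /* e.g. device 0000:01:00.0 -> group 0x00000100 */
            printf("0x%08x\n", pack_group(0x0000, 0x01, 0x00));
            return 0;
    }

The layout is endian-dependent, but the kernel only needs the value to
be unique, not portable, which is also why users shouldn't depend on
its encoding.)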
Also add an option to group multi-function (non-SR-IOV) devices
together. It's disturbingly common for functions on the same device
to have dependencies on one another, and systems may wish to enforce
that such functions are grouped together.
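(Illustration, not part of the patch: given the option parsing added
to intel_iommu_setup() below, which handles the existing intel_iommu=
early parameter, this behavior would be requested on the kernel
command line with something like:

    intel_iommu=on,no_mf_groups

The exact spelling follows the "no_mf_groups" string matched by the
parser.)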
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
---
drivers/pci/intel-iommu.c | 52 +++++++++++++++++++++++++++++++++++++++++++++
1 files changed, 52 insertions(+), 0 deletions(-)
diff --git a/drivers/pci/intel-iommu.c b/drivers/pci/intel-iommu.c
index f02c34d..a4d9a1a 100644
--- a/drivers/pci/intel-iommu.c
+++ b/drivers/pci/intel-iommu.c
@@ -404,6 +404,7 @@ static int dmar_map_gfx = 1;
static int dmar_forcedac;
static int intel_iommu_strict;
static int intel_iommu_superpage = 1;
+static int intel_iommu_no_mf_groups;
#define DUMMY_DEVICE_DOMAIN_INFO ((struct device_domain_info *)(-1))
static DEFINE_SPINLOCK(device_domain_lock);
@@ -438,6 +439,10 @@ static int __init intel_iommu_setup(char *str)
printk(KERN_INFO
"Intel-IOMMU: disable supported super page\n");
intel_iommu_superpage = 0;
+ } else if (!strncmp(str, "no_mf_groups", 12)) {
+ printk(KERN_INFO
+ "Intel-IOMMU: disable separate groups for multifunction devices\n");
+ intel_iommu_no_mf_groups = 1;
}
str += strcspn(str, ",");
@@ -3902,6 +3907,52 @@ static int intel_iommu_domain_has_cap(struct iommu_domain *domain,
return 0;
}
+/* Group numbers are arbitrary. Devices with the same group number
+ * indicate the iommu cannot differentiate between them. To avoid
+ * tracking used groups we just use the seg|bus|devfn of the lowest
+ * level at which we're able to differentiate devices */
+static int intel_iommu_device_group(struct device *dev, unsigned int *groupid)
+{
+ struct pci_dev *pdev = to_pci_dev(dev);
+ struct pci_dev *bridge;
+ union {
+ struct {
+ u8 devfn;
+ u8 bus;
+ u16 segment;
+ } pci;
+ u32 group;
+ } id;
+
+ if (iommu_no_mapping(dev))
+ return -ENODEV;
+
+ id.pci.segment = pci_domain_nr(pdev->bus);
+ id.pci.bus = pdev->bus->number;
+ id.pci.devfn = pdev->devfn;
+
+ if (!device_to_iommu(id.pci.segment, id.pci.bus, id.pci.devfn))
+ return -ENODEV;
+
+ bridge = pci_find_upstream_pcie_bridge(pdev);
+ if (bridge) {
+ if (pci_is_pcie(bridge)) {
+ id.pci.bus = bridge->subordinate->number;
+ id.pci.devfn = 0;
+ } else {
+ id.pci.bus = bridge->bus->number;
+ id.pci.devfn = bridge->devfn;
+ }
+ }
+
+ /* Virtual functions always get their own group */
+ if (!pdev->is_virtfn && intel_iommu_no_mf_groups)
+ id.pci.devfn = PCI_DEVFN(PCI_SLOT(id.pci.devfn), 0);
+
+ *groupid = id.group;
+ return 0;
+}
+
static struct iommu_ops intel_iommu_ops = {
.domain_init = intel_iommu_domain_init,
.domain_destroy = intel_iommu_domain_destroy,
@@ -3911,6 +3962,7 @@ static struct iommu_ops intel_iommu_ops = {
.unmap = intel_iommu_unmap,
.iova_to_phys = intel_iommu_iova_to_phys,
.domain_has_cap = intel_iommu_domain_has_cap,
+ .device_group = intel_iommu_device_group,
};
static void __devinit quirk_iommu_rwbf(struct pci_dev *dev)
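(Illustration, not part of the patch: with the per-device iommu_group
sysfs attribute added in patch 1/5 of this series, userspace could read
back the group ID computed here. A minimal sketch, assuming the
attribute lives directly under the PCI device's sysfs directory and
prints a single unsigned integer; the device address is just an
example:

    #include <stdio.h>

    int main(void)
    {
            const char *path =
                    "/sys/bus/pci/devices/0000:01:00.0/iommu_group";
            FILE *f = fopen(path, "r");
            unsigned int group;

            if (!f) {
                    perror(path);
                    return 1;
            }
            if (fscanf(f, "%u", &group) != 1) {
                    fclose(f);
                    fprintf(stderr, "failed to parse %s\n", path);
                    return 1;
            }
            fclose(f);
            printf("iommu group 0x%08x\n", group);
            return 0;
    }

Devices reporting the same value cannot be isolated from one another by
the IOMMU and would need to be assigned to a guest as a unit.)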