Linux IOMMU Development
From: Zenghui Yu <yuzenghui@huawei.com>
To: Lu Baolu <baolu.lu@linux.intel.com>
Cc: Joerg Roedel <joro@8bytes.org>, Kevin Tian <kevin.tian@intel.com>,
	Jason Gunthorpe <jgg@nvidia.com>, <linux-kernel@vger.kernel.org>,
	<iommu@lists.linux.dev>
Subject: Re: [RESEND PATCH v8 04/11] bus: platform, amba, fsl-mc, PCI: Add device DMA ownership management
Date: Mon, 26 Jun 2023 21:02:40 +0800	[thread overview]
Message-ID: <6472f254-c3c4-8610-4a37-8d9dfdd54ce8@huawei.com> (raw)
In-Reply-To: <20220418005000.897664-5-baolu.lu@linux.intel.com>

On 2022/4/18 8:49, Lu Baolu wrote:
> The devices on platform/amba/fsl-mc/PCI buses could be bound to drivers
> with the device DMA managed by kernel drivers or by user-space
> applications. Unfortunately, multiple devices may be placed in the same
> IOMMU group because they cannot be isolated from each other. The DMA on
> these devices must be entirely under kernel control or entirely under
> userspace control, never a mixture; otherwise driver integrity is not
> guaranteed, because the devices could reach each other through
> peer-to-peer accesses which bypass the IOMMU protection.
> 
> This patch checks and sets the default DMA mode during driver binding,
> and cleans up during driver unbinding. In the default mode, the device
> DMA is managed by the device driver, which handles DMA operations
> through the kernel DMA APIs (see Documentation/core-api/dma-api.rst).
> 
> For cases where a device is assigned to userspace through a userspace
> driver framework (e.g. VFIO), the drivers (for example vfio_pci,
> vfio_platform, etc.) may set a new flag (driver_managed_dma) to skip
> this default setting, on the assumption that such drivers know what
> they are doing with the device DMA.
> 
> Calling iommu_device_use_default_domain() before {of,acpi}_dma_configure
> is currently a problem. As things stand, the IOMMU driver ignores the
> initial iommu_probe_device() call when the device is added, since at
> that point it has no fwspec yet. In this situation,
> {of,acpi}_iommu_configure() retrigger iommu_probe_device() after the
> IOMMU driver has seen the firmware data via .of_xlate and learned that
> it is actually responsible for the given device. As a result, until
> that gets fixed, iommu_device_use_default_domain() goes at the end, and
> arch_teardown_dma_ops() is called if it fails.
> 
> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
> Cc: Bjorn Helgaas <bhelgaas@google.com>
> Cc: Stuart Yoder <stuyoder@gmail.com>
> Cc: Laurentiu Tudor <laurentiu.tudor@nxp.com>
> Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
> Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
> Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
> Reviewed-by: Robin Murphy <robin.murphy@arm.com>
> Tested-by: Eric Auger <eric.auger@redhat.com>

[...]

> diff --git a/include/linux/pci.h b/include/linux/pci.h
> index 60adf42460ab..b933d2b08d4d 100644
> --- a/include/linux/pci.h
> +++ b/include/linux/pci.h
> @@ -895,6 +895,13 @@ struct module;
>   *              created once it is bound to the driver.
>   * @driver:	Driver model structure.
>   * @dynids:	List of dynamically added device IDs.
> + * @driver_managed_dma: Device driver doesn't use kernel DMA API for DMA.
> + *		For most device drivers, no need to care about this flag
> + *		as long as all DMAs are handled through the kernel DMA API.
> + *		For some special ones, for example VFIO drivers, they know
> + *		how to manage the DMA themselves and set this flag so that
> + *		the IOMMU layer will allow them to setup and manage their
> + *		own I/O address space.
>   */
>  struct pci_driver {
>  	struct list_head	node;
> @@ -913,6 +920,7 @@ struct pci_driver {
>  	const struct attribute_group **dev_groups;
>  	struct device_driver	driver;
>  	struct pci_dynids	dynids;
> +	bool driver_managed_dma;
>  };
>  
>  static inline struct pci_driver *to_pci_driver(struct device_driver *drv)

[...]

> diff --git a/drivers/pci/pci-driver.c b/drivers/pci/pci-driver.c
> index 4ceeb75fc899..f83f7fbac68f 100644
> --- a/drivers/pci/pci-driver.c
> +++ b/drivers/pci/pci-driver.c
> @@ -20,6 +20,7 @@
>  #include <linux/of_device.h>
>  #include <linux/acpi.h>
>  #include <linux/dma-map-ops.h>
> +#include <linux/iommu.h>
>  #include "pci.h"
>  #include "pcie/portdrv.h"
>  
> @@ -1601,6 +1602,7 @@ static int pci_bus_num_vf(struct device *dev)
>   */
>  static int pci_dma_configure(struct device *dev)
>  {
> +	struct pci_driver *driver = to_pci_driver(dev->driver);
>  	struct device *bridge;
>  	int ret = 0;
>  
> @@ -1616,9 +1618,24 @@ static int pci_dma_configure(struct device *dev)
>  	}
>  
>  	pci_put_host_bridge_device(bridge);
> +
> +	if (!ret && !driver->driver_managed_dma) {
> +		ret = iommu_device_use_default_domain(dev);
> +		if (ret)
> +			arch_teardown_dma_ops(dev);
> +	}
> +
>  	return ret;
>  }
>  
> +static void pci_dma_cleanup(struct device *dev)
> +{
> +	struct pci_driver *driver = to_pci_driver(dev->driver);
> +
> +	if (!driver->driver_managed_dma)
> +		iommu_device_unuse_default_domain(dev);
> +}
> +
>  struct bus_type pci_bus_type = {
>  	.name		= "pci",
>  	.match		= pci_bus_match,
> @@ -1632,6 +1649,7 @@ struct bus_type pci_bus_type = {
>  	.pm		= PCI_PM_OPS_PTR,
>  	.num_vf		= pci_bus_num_vf,
>  	.dma_configure	= pci_dma_configure,
> +	.dma_cleanup	= pci_dma_cleanup,
>  };
>  EXPORT_SYMBOL(pci_bus_type);

I (having somehow forgotten to delete DEBUG_TEST_DRIVER_REMOVE from my
.config) failed to start a guest with an assigned PCI device, with
something like:

| qemu-system-aarch64: -device vfio-pci,host=0000:03:00.1,id=hostdev0,bus=pci.8,addr=0x0: vfio 0000:03:00.1: group 45 is not viable
| Please ensure all devices within the iommu_group are bound to their vfio bus driver.

It looks like, on device probe with DEBUG_TEST_DRIVER_REMOVE,
.dma_configure() is executed *twice* via the really_probe()/re_probe
path, while *no* .dma_cleanup() is executed. The resulting
dev::iommu_group::owner_cnt is 2, which confuses the later
iommu_group_dma_owner_claimed() call from VFIO on guest startup.

I can work around the problem locally by deleting the DEBUG option or by
performing a .dma_cleanup() on the test_remove path, but it'd be great
if you could take a look.

Thanks,
Zenghui
