From: Jason Gunthorpe <jgg@nvidia.com>
To: Jean-Philippe Brucker <jean-philippe@linaro.org>
Cc: iommu@lists.linux.dev, Joerg Roedel <joro@8bytes.org>,
	Robin Murphy <robin.murphy@arm.com>,
	virtualization@lists.linux.dev, Will Deacon <will@kernel.org>,
	Eric Auger <eric.auger@redhat.com>,
	patches@lists.linux.dev
Subject: Re: [PATCH 1/5] iommu/virtio: Break out bypass identity support into a global static
Date: Mon, 24 Feb 2025 15:37:16 -0400	[thread overview]
Message-ID: <20250224193716.GA524338@nvidia.com> (raw)
In-Reply-To: <20250221113527.GA719702@myrica>

On Fri, Feb 21, 2025 at 11:35:27AM +0000, Jean-Philippe Brucker wrote:
> > +static int viommu_attach_identity_domain(struct iommu_domain *domain,
> > +					 struct device *dev)
> > +{
> > +	int ret = 0;
> > +	struct virtio_iommu_req_attach req;
> > +	struct viommu_endpoint *vdev = dev_iommu_priv_get(dev);
> > +	struct viommu_domain *vdomain = to_viommu_domain(domain);
> > +
> > +	req = (struct virtio_iommu_req_attach) {
> > +		.head.type	= VIRTIO_IOMMU_T_ATTACH,
> > +		.domain		= cpu_to_le32(vdev->viommu->identity_domain_id),
> > +		.flags          = cpu_to_le32(VIRTIO_IOMMU_ATTACH_F_BYPASS),
> > +	};
> > +
> > +	ret = viommu_send_attach_req(vdev->viommu, dev, &req);
> > +	if (ret)
> > +		return ret;
> > +
> > +	if (vdev->vdomain) {
> > +		vdev->vdomain->nr_endpoints--;
> > +		vdomain->nr_endpoints++;
> > +		vdev->vdomain = vdomain;
> 
> These two need to be unconditional

Whoops, yes
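To spell out the accounting being discussed: only the decrement of the old domain's endpoint count should be guarded by whether the endpoint was previously attached; the new domain's increment and the `vdev->vdomain` update must happen on every attach. A minimal stand-alone model of that invariant (hypothetical `model_*` names, not the kernel structures):

```c
#include <assert.h>
#include <stddef.h>

/* Simplified stand-ins for viommu_domain / viommu_endpoint. */
struct model_domain {
	int nr_endpoints;
};

struct model_endpoint {
	struct model_domain *vdomain; /* currently attached domain, or NULL */
};

/*
 * Attach @ep to @new: only the old-domain decrement is conditional;
 * the increment and the pointer update are unconditional.
 */
static void model_attach(struct model_endpoint *ep, struct model_domain *new)
{
	if (ep->vdomain)
		ep->vdomain->nr_endpoints--;
	new->nr_endpoints++;
	ep->vdomain = new;
}
```

With the original conditional placement, a first attach (no prior domain) would leave the new domain's count at zero and `vdev->vdomain` unset.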

> > +static struct viommu_domain viommu_identity_domain = {
> > +	.domain = { .type = IOMMU_DOMAIN_IDENTITY,
> > +		    .ops =
> > +			    &(const struct iommu_domain_ops){
> > +				    .attach_dev = viommu_attach_identity_domain,
> > +			    } }
> > +};
> 
> nit: how about
> 
> 	static struct viommu_domain viommu_identity_domain = {
> 		.domain = {
> 			.type = IOMMU_DOMAIN_IDENTITY,
> 			.ops = &(const struct iommu_domain_ops) {
> 				.attach_dev = viommu_attach_identity_domain,
> 			},
> 		},
> 	};

Done

> > +	/* Reserve an ID to use as the bypass domain */
> > +	if (virtio_has_feature(viommu->vdev, VIRTIO_IOMMU_F_BYPASS_CONFIG)) {
> > +		viommu->identity_domain_id = viommu->first_domain;
> > +		viommu->first_domain++;
> > +	} else {
> > +		/*
> > +		 * Assume the VMM is sensible and it either supports bypass on
> > +		 * all instances or no instances.
> > +		 */
> 
> Maybe also a WARN_ON(!viommu_ops.identity_domain) above?

That check only starts working in the following patch, because the core
will call viommu_domain_alloc_identity(), which can make a correct
per-device/per-viommu determination of bypass support.

Let me fold this into the next patch to make that clearer:

@@ -998,7 +998,7 @@ static void viommu_get_resv_regions(struct device *dev, struct list_head *head)
        iommu_dma_get_resv_regions(dev, head);
 }
 
-static struct iommu_ops viommu_ops;
+static const struct iommu_ops viommu_ops;
 static struct virtio_driver virtio_iommu_drv;
 
 static int viommu_match_node(struct device *dev, const void *data)
@@ -1086,8 +1086,7 @@ static bool viommu_capable(struct device *dev, enum iommu_cap cap)
        }
 }
 
-static struct iommu_ops viommu_ops = {
-       .identity_domain        = &viommu_identity_domain.domain,
+static const struct iommu_ops viommu_ops = {
        .capable                = viommu_capable,
        .domain_alloc_identity  = viommu_domain_alloc_identity,
        .domain_alloc_paging    = viommu_domain_alloc_paging,
@@ -1216,12 +1215,6 @@ static int viommu_probe(struct virtio_device *vdev)
        if (virtio_has_feature(viommu->vdev, VIRTIO_IOMMU_F_BYPASS_CONFIG)) {
                viommu->identity_domain_id = viommu->first_domain;
                viommu->first_domain++;
-       } else {
-               /*
-                * Assume the VMM is sensible and it either supports bypass on
-                * all instances or no instances.
-                */
-               viommu_ops.identity_domain = NULL;
        }

Jason


Thread overview: 37+ messages
2025-02-07 14:46 [PATCH 0/5] Convert virtio-iommu to domain_alloc_paging() Jason Gunthorpe
2025-02-07 14:46 ` [PATCH 1/5] iommu/virtio: Break out bypass identity support into a global static Jason Gunthorpe
2025-02-12  0:43   ` Jacob Pan
2025-02-18 18:21     ` Jason Gunthorpe
2025-02-21 11:35   ` Jean-Philippe Brucker
2025-02-24 19:37     ` Jason Gunthorpe [this message]
2025-02-07 14:46 ` [PATCH 2/5] iommu: Add domain_alloc_identity() Jason Gunthorpe
2025-02-12 13:56   ` Robin Murphy
2025-02-12 14:03     ` Jason Gunthorpe
2025-02-12 14:16       ` Robin Murphy
2025-02-12 14:45         ` Jason Gunthorpe
2025-02-07 14:46 ` [PATCH 3/5] iommu/virtio: Move to domain_alloc_paging() Jason Gunthorpe
2025-02-12 19:22   ` Jacob Pan
2025-02-12 23:30     ` Jason Gunthorpe
2025-02-13  5:47       ` Jacob Pan
2025-02-18 20:01         ` Jason Gunthorpe
     [not found]       ` <67ad876d.170a0220.3c21dc.85ceSMTPIN_ADDED_BROKEN@mx.google.com>
2025-02-13  9:46         ` Jean-Philippe Brucker
2025-02-13 17:03           ` Yu Zhang
2025-02-13 18:09             ` Jean-Philippe Brucker
2025-02-19  9:39               ` Yu Zhang
2025-02-19 10:35                 ` Jean-Philippe Brucker
2025-02-19 11:11                   ` Yu Zhang
2025-02-19 11:57                     ` Jean-Philippe Brucker
2025-02-19 13:10                   ` Yi Liu
2025-02-20  2:58                   ` Baolu Lu
2025-02-20  3:44                     ` Yu Zhang
2025-02-07 14:46 ` [PATCH 4/5] iommu: Do not call domain_alloc() in iommu_sva_domain_alloc() Jason Gunthorpe
2025-02-07 14:46 ` [PATCH 5/5] iommu: Hide ops.domain_alloc behind CONFIG_FSL_PAMU Jason Gunthorpe
2025-02-12  0:41 ` [PATCH 0/5] Convert virtio-iommu to domain_alloc_paging() Jacob Pan
2025-02-12 12:50   ` Jason Gunthorpe
2025-02-12 18:50     ` Jacob Pan
2025-02-12 20:10       ` Robin Murphy
2025-02-21 11:42     ` Jean-Philippe Brucker
2025-02-24 19:39       ` Jason Gunthorpe
     [not found] ` <67abee53.170a0220.154671.ae28SMTPIN_ADDED_BROKEN@mx.google.com>
2025-02-12 11:58   ` Jean-Philippe Brucker
2025-02-12 17:05     ` Jacob Pan
     [not found]     ` <67acd4e2.630a0220.365aab.e098SMTPIN_ADDED_BROKEN@mx.google.com>
2025-02-12 19:16       ` Jean-Philippe Brucker
