From: Alex Mastro <amastro@fb.com>
To: David Matlack <dmatlack@google.com>
Cc: Alex Williamson <alex@shazbot.org>, Shuah Khan <shuah@kernel.org>,
	<kvm@vger.kernel.org>, <linux-kselftest@vger.kernel.org>,
	<linux-kernel@vger.kernel.org>, Jason Gunthorpe <jgg@ziepe.ca>
Subject: Re: [PATCH 1/4] vfio: selftests: add iova range query helpers
Date: Mon, 10 Nov 2025 14:32:21 -0800
Message-ID: <aRJn9Z8nho3GNOU/@devgpu015.cco6.facebook.com>
In-Reply-To: <aRJhSkj6S48G_pHI@google.com>

On Mon, Nov 10, 2025 at 10:03:54PM +0000, David Matlack wrote:
> On 2025-11-10 01:10 PM, Alex Mastro wrote:
> > +/*
> > + * Return iova ranges for the device's container. Normalize vfio_iommu_type1 to
> > + * report iommufd's iommu_iova_range. Free with free().
> > + */
> > +static struct iommu_iova_range *vfio_iommu_iova_ranges(struct vfio_pci_device *device,
> > +						       size_t *nranges)
> > +{
> > +	struct vfio_iommu_type1_info_cap_iova_range *cap_range;
> > +	struct vfio_iommu_type1_info *buf;
> 
> nit: Maybe name this variable `info` here and in vfio_iommu_info_buf()
> and vfio_iommu_info_cap_hdr()? It is not an opaque buffer.
> 
> > +	struct vfio_info_cap_header *hdr;
> > +	struct iommu_iova_range *ranges = NULL;
> > +
> > +	buf = vfio_iommu_info_buf(device);
> 
> nit: How about naming this vfio_iommu_get_info() since it actually
> fetches the info from VFIO? (It doesn't just allocate a buffer.)
> 
> > +	VFIO_ASSERT_NOT_NULL(buf);
> 
> This assert is unnecessary.
> 
> > +
> > +	hdr = vfio_iommu_info_cap_hdr(buf, VFIO_IOMMU_TYPE1_INFO_CAP_IOVA_RANGE);
> > +	if (!hdr)
> > +		goto free_buf;
> 
> Is this to account for running on old versions of VFIO? Or are there
> some scenarios when VFIO can't report the list of IOVA ranges?

I wanted to avoid being overly assertive in this low-level helper, mostly
because I'm not sure in which system states this capability might go
unreported.

> > +
> > +	cap_range = container_of(hdr, struct vfio_iommu_type1_info_cap_iova_range, header);
> > +	if (!cap_range->nr_iovas)
> > +		goto free_buf;
> > +
> > +	ranges = malloc(cap_range->nr_iovas * sizeof(*ranges));
> > +	VFIO_ASSERT_NOT_NULL(ranges);
> > +
> > +	for (u32 i = 0; i < cap_range->nr_iovas; i++) {
> > +		ranges[i] = (struct iommu_iova_range){
> > +			.start = cap_range->iova_ranges[i].start,
> > +			.last = cap_range->iova_ranges[i].end,
> > +		};
> > +	}
> > +
> > +	*nranges = cap_range->nr_iovas;
> > +
> > +free_buf:
> > +	free(buf);
> > +	return ranges;
> > +}
> > +
> > +/* Return iova ranges of the device's IOAS. Free with free() */
> > +struct iommu_iova_range *iommufd_iova_ranges(struct vfio_pci_device *device,
> > +					     size_t *nranges)
> > +{
> > +	struct iommu_iova_range *ranges;
> > +	int ret;
> > +
> > +	struct iommu_ioas_iova_ranges query = {
> > +		.size = sizeof(query),
> > +		.ioas_id = device->ioas_id,
> > +	};
> > +
> > +	ret = ioctl(device->iommufd, IOMMU_IOAS_IOVA_RANGES, &query);
> > +	VFIO_ASSERT_EQ(ret, -1);
> > +	VFIO_ASSERT_EQ(errno, EMSGSIZE);
> > +	VFIO_ASSERT_GT(query.num_iovas, 0);
> > +
> > +	ranges = malloc(query.num_iovas * sizeof(*ranges));
> > +	VFIO_ASSERT_NOT_NULL(ranges);
> > +
> > +	query.allowed_iovas = (uintptr_t)ranges;
> > +
> > +	ioctl_assert(device->iommufd, IOMMU_IOAS_IOVA_RANGES, &query);
> > +	*nranges = query.num_iovas;
> > +
> > +	return ranges;
> > +}
> > +
> > +struct iommu_iova_range *vfio_pci_iova_ranges(struct vfio_pci_device *device,
> > +					      size_t *nranges)
> 
> nit: Both iommufd and VFIO represent the number of IOVA ranges as a u32.
> Perhaps we should do the same in VFIO selftests?

Thanks David. All suggestions SGTM -- will roll into v2.
