From: Nicolin Chen <nicolinc@nvidia.com>
To: Jason Gunthorpe <jgg@nvidia.com>
Cc: <kevin.tian@intel.com>, <corbet@lwn.net>, <will@kernel.org>,
<bagasdotme@gmail.com>, <robin.murphy@arm.com>, <joro@8bytes.org>,
<thierry.reding@gmail.com>, <vdumpa@nvidia.com>,
<jonathanh@nvidia.com>, <shuah@kernel.org>, <jsnitsel@redhat.com>,
<nathan@kernel.org>, <peterz@infradead.org>, <yi.l.liu@intel.com>,
<mshavit@google.com>, <praan@google.com>,
<zhangzekun11@huawei.com>, <iommu@lists.linux.dev>,
<linux-doc@vger.kernel.org>, <linux-kernel@vger.kernel.org>,
<linux-arm-kernel@lists.infradead.org>,
<linux-tegra@vger.kernel.org>, <linux-kselftest@vger.kernel.org>,
<patches@lists.linux.dev>, <mochs@nvidia.com>,
<alok.a.tiwari@oracle.com>, <vasant.hegde@amd.com>,
<dwmw2@infradead.org>, <baolu.lu@linux.intel.com>
Subject: Re: [PATCH v8 14/29] iommufd/viommu: Add IOMMUFD_CMD_HW_QUEUE_ALLOC ioctl
Date: Wed, 9 Jul 2025 22:34:23 -0700 [thread overview]
Message-ID: <aG9Q39PKjY/TXiMe@Asurada-Nvidia> (raw)
In-Reply-To: <aG67GDY97U8T6kuD@Asurada-Nvidia>
On Wed, Jul 09, 2025 at 11:55:20AM -0700, Nicolin Chen wrote:
> On Wed, Jul 09, 2025 at 02:09:04PM -0300, Jason Gunthorpe wrote:
> > On Fri, Jul 04, 2025 at 06:13:30PM -0700, Nicolin Chen wrote:
> > > +static struct iommufd_access *
> > > +iommufd_hw_queue_alloc_phys(struct iommu_hw_queue_alloc *cmd,
> > > + struct iommufd_viommu *viommu, phys_addr_t *base_pa)
> > > +{
> > > + struct iommufd_access *access;
> > > + struct page **pages;
> > > + size_t max_npages;
> > > + size_t length;
> > > + u64 offset;
> > > + size_t i;
> > > + int rc;
> > > +
> > > + offset =
> > > + cmd->nesting_parent_iova - PAGE_ALIGN(cmd->nesting_parent_iova);
> >
> > PAGE_ALIGN is ALIGN UP, that is the wrong direction?
> >
> > It is just:
> >
> > offset = cmd->nesting_parent_iova % PAGE_SIZE;
> >
> > And this is missing:
> >
> > *base_pa = (page_to_pfn(pages[0]) << PAGE_SHIFT) + offset;
> >
> > ??
>
> Yes, my bad. And I realized that all my testings are page aligned.
> Maybe we could add an IOMMU_TEST_OP_HW_QUEUE_CHECK_ADDR..
I found that this needed a bit more care: the pin_pages() API only
accepts page-aligned iova and length inputs.

So, doing this instead (and adding selftest coverage):
diff --git a/drivers/iommu/iommufd/viommu.c b/drivers/iommu/iommufd/viommu.c
index 00641204efb2..91339f799916 100644
--- a/drivers/iommu/iommufd/viommu.c
+++ b/drivers/iommu/iommufd/viommu.c
@@ -206,7 +206,11 @@ static void iommufd_hw_queue_destroy_access(struct iommufd_ctx *ictx,
struct iommufd_access *access,
u64 base_iova, size_t length)
{
- iommufd_access_unpin_pages(access, base_iova, length);
+ u64 aligned_iova = PAGE_ALIGN_DOWN(base_iova);
+ u64 offset = base_iova - aligned_iova;
+
+ iommufd_access_unpin_pages(access, aligned_iova,
+ PAGE_ALIGN(length + offset));
iommufd_access_detach_internal(access);
iommufd_access_destroy_internal(ictx, access);
}
@@ -239,22 +243,23 @@ static struct iommufd_access *
iommufd_hw_queue_alloc_phys(struct iommu_hw_queue_alloc *cmd,
struct iommufd_viommu *viommu, phys_addr_t *base_pa)
{
+ u64 aligned_iova = PAGE_ALIGN_DOWN(cmd->nesting_parent_iova);
+ u64 offset = cmd->nesting_parent_iova - aligned_iova;
struct iommufd_access *access;
struct page **pages;
size_t max_npages;
size_t length;
- u64 offset;
size_t i;
int rc;
- offset =
- cmd->nesting_parent_iova - PAGE_ALIGN(cmd->nesting_parent_iova);
- /* DIV_ROUND_UP(offset + cmd->length, PAGE_SIZE) */
+ /* max_npages = DIV_ROUND_UP(offset + cmd->length, PAGE_SIZE) */
if (check_add_overflow(offset, cmd->length, &length))
return ERR_PTR(-ERANGE);
if (check_add_overflow(length, PAGE_SIZE - 1, &length))
return ERR_PTR(-ERANGE);
max_npages = length / PAGE_SIZE;
+ /* length needs to be page aligned too */
+ length = max_npages * PAGE_SIZE;
/*
* Use kvcalloc() to avoid memory fragmentation for a large page array.
@@ -274,8 +279,7 @@ iommufd_hw_queue_alloc_phys(struct iommu_hw_queue_alloc *cmd,
if (rc)
goto out_destroy;
- rc = iommufd_access_pin_pages(access, cmd->nesting_parent_iova,
- cmd->length, pages, 0);
+ rc = iommufd_access_pin_pages(access, aligned_iova, length, pages, 0);
if (rc)
goto out_detach;
@@ -287,13 +291,12 @@ iommufd_hw_queue_alloc_phys(struct iommu_hw_queue_alloc *cmd,
goto out_unpin;
}
- *base_pa = page_to_pfn(pages[0]) << PAGE_SHIFT;
+ *base_pa = (page_to_pfn(pages[0]) << PAGE_SHIFT) + offset;
kfree(pages);
return access;
out_unpin:
- iommufd_access_unpin_pages(access, cmd->nesting_parent_iova,
- cmd->length);
+ iommufd_access_unpin_pages(access, aligned_iova, length);
out_detach:
iommufd_access_detach_internal(access);
out_destroy:
diff --git a/tools/testing/selftests/iommu/iommufd.c b/tools/testing/selftests/iommu/iommufd.c
index 9d5b852d5e19..694b178f8ca4 100644
--- a/tools/testing/selftests/iommu/iommufd.c
+++ b/tools/testing/selftests/iommu/iommufd.c
@@ -3104,7 +3104,7 @@ TEST_F(iommufd_viommu, hw_queue)
/* Allocate index=0, declare ownership of the iova */
test_cmd_hw_queue_alloc(viommu_id, IOMMU_HW_QUEUE_TYPE_SELFTEST, 0,
iova, PAGE_SIZE, &hw_queue_id[0]);
- /* Fail duplicate */
+ /* Fail duplicated index */
test_err_hw_queue_alloc(EEXIST, viommu_id, IOMMU_HW_QUEUE_TYPE_SELFTEST,
0, iova, PAGE_SIZE, &hw_queue_id[0]);
/* Fail unmap, due to iova ownership */
@@ -3112,9 +3112,10 @@ TEST_F(iommufd_viommu, hw_queue)
/* The 2nd page is not pinned, so it can be unmmap */
test_ioctl_ioas_unmap(iova + PAGE_SIZE, PAGE_SIZE);
- /* Allocate index=1 */
+ /* Allocate index=1, with an unaligned case */
test_cmd_hw_queue_alloc(viommu_id, IOMMU_HW_QUEUE_TYPE_SELFTEST, 1,
- iova, PAGE_SIZE, &hw_queue_id[1]);
+ iova + PAGE_SIZE / 2, PAGE_SIZE / 2,
+ &hw_queue_id[1]);
/* Fail to destroy, due to dependency */
EXPECT_ERRNO(EBUSY, _test_ioctl_destroy(self->fd, hw_queue_id[0]));