From: Nicolin Chen <nicolinc@nvidia.com>
To: Vasant Hegde <vasant.hegde@amd.com>
Cc: <jgg@nvidia.com>, <kevin.tian@intel.com>, <corbet@lwn.net>,
<will@kernel.org>, <bagasdotme@gmail.com>, <robin.murphy@arm.com>,
<joro@8bytes.org>, <thierry.reding@gmail.com>,
<vdumpa@nvidia.com>, <jonathanh@nvidia.com>, <shuah@kernel.org>,
<jsnitsel@redhat.com>, <nathan@kernel.org>,
<peterz@infradead.org>, <yi.l.liu@intel.com>,
<mshavit@google.com>, <praan@google.com>,
<zhangzekun11@huawei.com>, <iommu@lists.linux.dev>,
<linux-doc@vger.kernel.org>, <linux-kernel@vger.kernel.org>,
<linux-arm-kernel@lists.infradead.org>,
<linux-tegra@vger.kernel.org>, <linux-kselftest@vger.kernel.org>,
<patches@lists.linux.dev>, <mochs@nvidia.com>,
<alok.a.tiwari@oracle.com>, <dwmw2@infradead.org>,
<baolu.lu@linux.intel.com>
Subject: Re: [PATCH v8 14/29] iommufd/viommu: Add IOMMUFD_CMD_HW_QUEUE_ALLOC ioctl
Date: Wed, 9 Jul 2025 12:02:03 -0700 [thread overview]
Message-ID: <aG68q0Uv+AuIgbvX@Asurada-Nvidia> (raw)
In-Reply-To: <1f18d7a3-b515-4096-aff5-1aea31ce4f7e@amd.com>
On Mon, Jul 07, 2025 at 01:11:00PM +0530, Vasant Hegde wrote:
> Hi,
>
>
> On 7/5/2025 6:43 AM, Nicolin Chen wrote:
> > Introduce a new IOMMUFD_CMD_HW_QUEUE_ALLOC ioctl for user space to allocate
> > a HW QUEUE object for a vIOMMU specific HW-accelerated queue, e.g.:
> > - NVIDIA's Virtual Command Queue
> > - AMD vIOMMU's Command Buffer, Event Log Buffers, and PPR Log Buffers
> >
> > Since this is introduced with NVIDIA's VCMDQs that access the guest memory
> > in the physical address space, add an iommufd_hw_queue_alloc_phys() helper
> > that creates an access object to the queue memory in the IOAS, to prevent
> > the mappings of the guest memory from being unmapped during the life cycle
> > of the HW queue object.
> >
> > AMD's HW will need a hw_queue_init op that is mutually exclusive with the
> > hw_queue_init_phys op, and their case will bypass the access part, i.e. no
> > iommufd_hw_queue_alloc_phys() call.
>
> Thanks. We will implement hw_queue_init[_iova] to support the AMD driver and
> fix up iommufd_hw_queue_alloc_ioctl(). Is that the correct understanding?
Yes. I think just a simple "hw_queue_init" will be enough, as the
object structure already stores the guest address ("iova"):
	struct iommufd_hw_queue {
		...
		u64 base_addr;	/* in guest physical address space */
		...
	};
Thanks
Nicolin