From: Nicolin Chen <nicolinc@nvidia.com>
To: "Tian, Kevin" <kevin.tian@intel.com>
Cc: "jgg@nvidia.com" <jgg@nvidia.com>,
"corbet@lwn.net" <corbet@lwn.net>,
"will@kernel.org" <will@kernel.org>,
"robin.murphy@arm.com" <robin.murphy@arm.com>,
"joro@8bytes.org" <joro@8bytes.org>,
"thierry.reding@gmail.com" <thierry.reding@gmail.com>,
"vdumpa@nvidia.com" <vdumpa@nvidia.com>,
"jonathanh@nvidia.com" <jonathanh@nvidia.com>,
"shuah@kernel.org" <shuah@kernel.org>,
"praan@google.com" <praan@google.com>,
"nathan@kernel.org" <nathan@kernel.org>,
"peterz@infradead.org" <peterz@infradead.org>,
"Liu, Yi L" <yi.l.liu@intel.com>,
"jsnitsel@redhat.com" <jsnitsel@redhat.com>,
"mshavit@google.com" <mshavit@google.com>,
"zhangzekun11@huawei.com" <zhangzekun11@huawei.com>,
"iommu@lists.linux.dev" <iommu@lists.linux.dev>,
"linux-doc@vger.kernel.org" <linux-doc@vger.kernel.org>,
"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
"linux-arm-kernel@lists.infradead.org"
<linux-arm-kernel@lists.infradead.org>,
"linux-tegra@vger.kernel.org" <linux-tegra@vger.kernel.org>,
"linux-kselftest@vger.kernel.org"
<linux-kselftest@vger.kernel.org>,
"patches@lists.linux.dev" <patches@lists.linux.dev>
Subject: Re: [PATCH v1 15/16] iommu/tegra241-cmdqv: Add user-space use support
Date: Mon, 21 Apr 2025 12:14:01 -0700 [thread overview]
Message-ID: <aAaY+f2/jw9NaIWF@Asurada-Nvidia> (raw)
In-Reply-To: <BN9PR11MB52768197516FB895146A12078CB82@BN9PR11MB5276.namprd11.prod.outlook.com>
On Mon, Apr 21, 2025 at 08:37:40AM +0000, Tian, Kevin wrote:
> > From: Nicolin Chen <nicolinc@nvidia.com>
> > Sent: Friday, April 11, 2025 2:38 PM
> >
> > Add support via the vIOMMU infrastructure for the virtualization use case.
> >
> > This basically allows the VMM to allocate VINTFs (as vIOMMU objects)
> > and assign VCMDQs (vCMDQ objects) to them. A VINTF's MMIO page0 can
> > be mmap'd to user space, so the VM can access it directly without a
> > VMEXIT and the corresponding hypercall.
>
> it'd be helpful to add a bit more context, e.g. what page0 contains, sid
> slots, vcmdq_base (mapped in s2), etc., so it's easier for one to understand
> it from the start instead of by reading the code.
Will do. Page0 basically holds all the control/status bits for
direct vCMDQ control.
> > As an initial version, the number of VCMDQs per VINTF is fixed to two.
>
> so a user could map both VCMDQs of a VINTF even when only one
> VCMDQ is created, given the entire 64K page0 is eligible for mmap once
> the VINTF is associated with a viommu?
Oh, that's a good point!
If a guest OS ignores the total number of VCMDQs emulated by the
VMM and tries to enable a VCMDQ via the "reserved" MMIO region
in the mmap'd VINTF page0, the host system would be spammed with
vCMDQ TIMEOUTs that aren't supposed to happen, nor be forwarded
back to the guest.
It looks like we need some dynamic VCMDQ mapping to a VINTF vs.
static allocation, though we can still cap the number at 2.
> no security issue given the VINTF is not shared, but conceptually, if
> feasible (e.g. if the two CMDQs' MMIO ranges sat in different 4K pages of
> VINTF page0), does it make sense to do per-VCMDQ mmap control
> and return mmap info at VCMDQ alloc?
The page size can be 64K on ARM, and each additional logical VCMDQ
(in a VINTF page0) sits at an offset of only 0x80. So a VCMDQ
cannot be mmap'd individually.
> > + if (vintf->lvcmdqs[arg.vcmdq_id]) {
> > + vcmdq = vintf->lvcmdqs[arg.vcmdq_id];
> > +
> > + /* deinit the previous setting as a reset, before re-init */
> > + tegra241_vcmdq_hw_deinit(vcmdq);
> > +
> > + vcmdq->cmdq.q.q_base = q_base & VCMDQ_ADDR;
> > + vcmdq->cmdq.q.q_base |= arg.vcmdq_log2size;
> > + tegra241_vcmdq_hw_init_user(vcmdq);
> > +
> > + return &vcmdq->core;
> > + }
>
> why not returning -EBUSY here?
Hmm, this seems to be a WAR (workaround) that I forgot to drop! Will
check and remove it.
> > +
> > + vcmdq = iommufd_vcmdq_alloc(viommu, struct tegra241_vcmdq, core);
> > + if (!vcmdq)
> > + return ERR_PTR(-ENOMEM);
> > +
> > + ret = tegra241_vintf_init_lvcmdq(vintf, arg.vcmdq_id, vcmdq);
> > + if (ret)
> > + goto free_vcmdq;
> > + dev_dbg(cmdqv->dev, "%sallocated\n",
> > + lvcmdq_error_header(vcmdq, header, 64));
> > +
> > + vcmdq->cmdq.q.q_base = q_base & VCMDQ_ADDR;
> > + vcmdq->cmdq.q.q_base |= arg.vcmdq_log2size;
>
> could the queue size be multiple pages? there is no guarantee
> that the HPA of the guest queue would be contiguous :/
It certainly can. The VMM must make sure the guest PAs are contiguous,
by using huge pages to back the guest RAM space. The kernel has no
control over this and has to trust the VMM.
I'm adding a note here:
/* User space ensures that the queue memory is physically contiguous */
And likely something similar in the uAPI header too.
Thanks!
Nicolin