From: Nicolin Chen <nicolinc@nvidia.com>
To: "Tian, Kevin" <kevin.tian@intel.com>
Cc: "jgg@nvidia.com" <jgg@nvidia.com>,
	"corbet@lwn.net" <corbet@lwn.net>,
	"will@kernel.org" <will@kernel.org>,
	"bagasdotme@gmail.com" <bagasdotme@gmail.com>,
	"robin.murphy@arm.com" <robin.murphy@arm.com>,
	"joro@8bytes.org" <joro@8bytes.org>,
	"thierry.reding@gmail.com" <thierry.reding@gmail.com>,
	"vdumpa@nvidia.com" <vdumpa@nvidia.com>,
	"jonathanh@nvidia.com" <jonathanh@nvidia.com>,
	"shuah@kernel.org" <shuah@kernel.org>,
	"jsnitsel@redhat.com" <jsnitsel@redhat.com>,
	"nathan@kernel.org" <nathan@kernel.org>,
	"peterz@infradead.org" <peterz@infradead.org>,
	"Liu, Yi L" <yi.l.liu@intel.com>,
	"mshavit@google.com" <mshavit@google.com>,
	"praan@google.com" <praan@google.com>,
	"zhangzekun11@huawei.com" <zhangzekun11@huawei.com>,
	"iommu@lists.linux.dev" <iommu@lists.linux.dev>,
	"linux-doc@vger.kernel.org" <linux-doc@vger.kernel.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"linux-arm-kernel@lists.infradead.org"
	<linux-arm-kernel@lists.infradead.org>,
	"linux-tegra@vger.kernel.org" <linux-tegra@vger.kernel.org>,
	"linux-kselftest@vger.kernel.org"
	<linux-kselftest@vger.kernel.org>,
	"patches@lists.linux.dev" <patches@lists.linux.dev>,
	"mochs@nvidia.com" <mochs@nvidia.com>,
	"alok.a.tiwari@oracle.com" <alok.a.tiwari@oracle.com>,
	"vasant.hegde@amd.com" <vasant.hegde@amd.com>
Subject: Re: [PATCH v4 11/23] iommufd/viommu: Add IOMMUFD_CMD_HW_QUEUE_ALLOC ioctl
Date: Thu, 15 May 2025 11:44:40 -0700
Message-ID: <aCY2GKfMh6f+vXHj@Asurada-Nvidia>
In-Reply-To: <BN9PR11MB52761EF8360BEF5C19C2382F8C90A@BN9PR11MB5276.namprd11.prod.outlook.com>

On Thu, May 15, 2025 at 06:30:27AM +0000, Tian, Kevin wrote:
> > From: Nicolin Chen <nicolinc@nvidia.com>
> > Sent: Friday, May 9, 2025 11:03 AM
> > 
> > +
> > +/**
> > + * struct iommu_hw_queue_alloc - ioctl(IOMMU_HW_QUEUE_ALLOC)
> > + * @size: sizeof(struct iommu_hw_queue_alloc)
> > + * @flags: Must be 0
> > + * @viommu_id: Virtual IOMMU ID to associate the HW queue with
> > + * @type: One of enum iommu_hw_queue_type
> > + * @index: The logical index to the HW queue per virtual IOMMU for a
> > + *         multi-queue model
> 
> I'm thinking of an alternative way w/o having the user assign the index
> or the driver poke at the object dependency (next patch).
>
> Let's say the index is internally assigned by the driver, so this cmd is
> just for allocating a hw queue and it's up to the driver to decide the
> allocation policy, e.g. in ascending order.
> 
> Introduce a new flag in viommu_ops to indicate to the core that the
> new hw queue should hold a reference to the previous hw queue.
> 
> The core maintains a last_queue field in the viommu. Upon a successful
> return from @hw_queue_alloc(), the core increments the users refcnt of
> last_queue, records the dependency in the iommufd_hw_queue struct, and
> updates viommu->last_queue.
> 
> Then the destroy order is naturally guaranteed.

I have thought about that too. It's nice that the core can easily
maintain the dependency for the driver. 
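
IIUC, the core-side bookkeeping would look roughly like this (the ops
flag and the prev/last_queue fields below are made up, just to
illustrate the idea):

	hw_queue = viommu->ops->hw_queue_alloc(viommu, ...);
	if (IS_ERR(hw_queue)) {
		rc = PTR_ERR(hw_queue);
		goto out_unpin;
	}
	/* IOMMUFD_VIOMMU_QUEUE_DEPENDS_ON_PREV is a made-up flag name */
	if (viommu->ops->flags & IOMMUFD_VIOMMU_QUEUE_DEPENDS_ON_PREV &&
	    viommu->last_queue) {
		/* Hold the previous queue until this one is destroyed */
		refcount_inc(&viommu->last_queue->obj.users);
		hw_queue->prev = viommu->last_queue;
	}
	viommu->last_queue = hw_queue;

	/*
	 * ... and the hw_queue destroy() would then drop the reference
	 * on hw_queue->prev, so the destroy order is guaranteed.
	 */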

But there would still need to be an out_index to mark each dynamically
allocated queue, so the VMM would know where it should map the queue.

For example, if the VMM wants to allocate a queue at its own index=1
without allocating index=0 first, the kernel cannot fail that, as the
VMM doesn't provide the index. The only option left for the kernel
would be to output the allocated queue with index=0 and then hope that
the VMM validates it, which doesn't sound safe..
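
With a user-provided index, OTOH, the kernel could enforce the order
up front, something like this (num_queues is a hypothetical per-vIOMMU
counter, just for illustration):

	/* Only allow allocating the next index, in ascending order */
	if (cmd->index != viommu->num_queues)	/* num_queues: hypothetical */
		return -EINVAL;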

> > + * @out_hw_queue_id: The ID of the new HW queue
> > + * @base_addr: Base address of the queue memory in guest physical
> > + *             address space
> > + * @length: Length of the queue memory in the guest physical address
> > + *          space
> 
> length is agnostic to address space.

Ack.

* @length: Length of the queue memory

> > +int iommufd_hw_queue_alloc_ioctl(struct iommufd_ucmd *ucmd)
> > +{
> > +	struct iommu_hw_queue_alloc *cmd = ucmd->cmd;
> > +	struct iommufd_hw_queue *hw_queue;
> > +	struct iommufd_hwpt_paging *hwpt;
> > +	struct iommufd_viommu *viommu;
> > +	struct page **pages;
> > +	int max_npages, i;
> > +	u64 end;
> > +	int rc;
> > +
> > +	if (cmd->flags || cmd->type == IOMMU_HW_QUEUE_TYPE_DEFAULT)
> > +		return -EOPNOTSUPP;
> > +	if (!cmd->base_addr || !cmd->length)
> > +		return -EINVAL;
> > +	if (check_add_overflow(cmd->base_addr, cmd->length - 1, &end))
> > +		return -EOVERFLOW;
> > +
> > +	max_npages = DIV_ROUND_UP(cmd->length, PAGE_SIZE);
> 
> what about [base_addr, base_addr+length) spanning two pages while
> 'length' is smaller than the size of one page?

Ah, right! It probably should be something like:

	offset = offset_in_page(cmd->base_addr);
	max_npages = DIV_ROUND_UP(offset + cmd->length, PAGE_SIZE);
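
E.g. with 4K pages, base_addr=0x1ff0 and length=0x20 span two pages,
even though the length is smaller than PAGE_SIZE:

	offset     = offset_in_page(0x1ff0)             /* = 0xff0 */
	max_npages = DIV_ROUND_UP(0xff0 + 0x20, 0x1000) /* = 2     */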

> > +	pages = kcalloc(max_npages, sizeof(*pages), GFP_KERNEL);
> > +	if (!pages)
> > +		return -ENOMEM;
> 
> this could be moved to right before iopt_pin_pages().

Ack.
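
I.e., roughly:

	viommu = iommufd_get_viommu(ucmd, cmd->viommu_id);
	if (IS_ERR(viommu))
		return PTR_ERR(viommu);
	...
	pages = kcalloc(max_npages, sizeof(*pages), GFP_KERNEL);
	if (!pages) {
		rc = -ENOMEM;
		goto out_put_viommu;
	}
	/* followed by iopt_pin_pages() on the queue memory */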

> > +
> > +	viommu = iommufd_get_viommu(ucmd, cmd->viommu_id);
> > +	if (IS_ERR(viommu)) {
> > +		rc = PTR_ERR(viommu);
> > +		goto out_free;
> > +	}
> > +	hwpt = viommu->hwpt;
> > +
> > +	if (!viommu->ops || !viommu->ops->hw_queue_alloc) {
> > +		rc = -EOPNOTSUPP;
> > +		goto out_put_viommu;
> > +	}
> > +
> > +	/* Quick test on the base address */
> > +	if (!iommu_iova_to_phys(hwpt->common.domain, cmd->base_addr)) {
> > +		rc = -ENXIO;
> > +		goto out_put_viommu;
> > +	}
> 
> this check is redundant. Actually it's not future proof, as it assumes
> that the S2 must be pinned before the user attempts to call this cmd.
> But if one day iommufd supports an unpinned S2 (if a device is 100%
> PRI faultable), then this path will be broken.
 
OK. Let's drop it.

> > +	hw_queue = viommu->ops->hw_queue_alloc(viommu, cmd->type, cmd->index,
> > +					       cmd->base_addr, cmd->length);
> > +	if (IS_ERR(hw_queue)) {
> > +		rc = PTR_ERR(hw_queue);
> > +		goto out_unpin;
> > +	}
> > +
> > +	hw_queue->viommu = viommu;
> > +	refcount_inc(&viommu->obj.users);
> > +	hw_queue->ictx = ucmd->ictx;
> 
> viommu/ictx are already set by iommufd_hw_queue_alloc().

OK. We'd need to be careful if someday there is a core-allocated
hw_queue that doesn't call iommufd_hw_queue_alloc(). Maybe I can
put a note here.
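
Something like:

	/*
	 * The hw_queue->viommu and hw_queue->ictx fields are already set
	 * by the iommufd_hw_queue_alloc() helper. If a core-allocated
	 * hw_queue that bypasses the helper is ever added, they would
	 * have to be set here instead.
	 */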

Thanks
Nicolin
