From: Yi Liu <yi.l.liu@intel.com>
To: Jason Gunthorpe <jgg@nvidia.com>
Cc: <joro@8bytes.org>, <alex.williamson@redhat.com>,
<kevin.tian@intel.com>, <robin.murphy@arm.com>,
<baolu.lu@linux.intel.com>, <cohuck@redhat.com>,
<eric.auger@redhat.com>, <nicolinc@nvidia.com>,
<kvm@vger.kernel.org>, <mjrosato@linux.ibm.com>,
<chao.p.peng@linux.intel.com>, <yi.y.sun@linux.intel.com>,
<peterx@redhat.com>, <jasowang@redhat.com>,
<shameerali.kolothum.thodi@huawei.com>, <lulu@redhat.com>,
<suravee.suthikulpanit@amd.com>, <iommu@lists.linux.dev>,
<linux-kernel@vger.kernel.org>, <linux-kselftest@vger.kernel.org>,
<zhenzhong.duan@intel.com>, <joao.m.martins@oracle.com>
Subject: Re: [PATCH v6 07/10] iommufd: Add a nested HW pagetable object
Date: Wed, 25 Oct 2023 18:19:46 +0800 [thread overview]
Message-ID: <5938157a-dfd4-43de-9e63-7669fc727a7f@intel.com> (raw)
In-Reply-To: <20231024173009.GQ3952@nvidia.com>
On 2023/10/25 01:30, Jason Gunthorpe wrote:
> On Tue, Oct 24, 2023 at 02:18:10PM -0300, Jason Gunthorpe wrote:
>> On Tue, Oct 24, 2023 at 08:06:06AM -0700, Yi Liu wrote:
>>> @@ -195,6 +279,10 @@ int iommufd_hwpt_alloc(struct iommufd_ucmd *ucmd)
>>> if (pt_obj->type == IOMMUFD_OBJ_IOAS) {
>>> struct iommufd_hwpt_paging *hwpt_paging;
>>>
>>> + if (cmd->data_type != IOMMU_HWPT_DATA_NONE) {
>>> + rc = -EINVAL;
>>> + goto out_put_pt;
>>> + }
>>> ioas = container_of(pt_obj, struct iommufd_ioas, obj);
>>> mutex_lock(&ioas->mutex);
>>> hwpt_paging = iommufd_hwpt_paging_alloc(ucmd->ictx, ioas, idev,
>>
>> ?? What is this?
>>
>> Ah something went wrong earlier in "iommu: Pass in parent domain with
>> user_data to domain_alloc_user op"
>
> Bah, I got confused because that had half the uapi, so it ended up in this patch
>
>> Once we added the user_data we should flow it through to the op
>> always.
>
> Like this:
ack.
> diff --git a/drivers/iommu/iommufd/device.c b/drivers/iommu/iommufd/device.c
> index 92859b907bb93c..a3382811af8a81 100644
> --- a/drivers/iommu/iommufd/device.c
> +++ b/drivers/iommu/iommufd/device.c
> @@ -586,8 +586,8 @@ iommufd_device_auto_get_domain(struct iommufd_device *idev,
> goto out_unlock;
> }
>
> - hwpt_paging = iommufd_hwpt_paging_alloc(idev->ictx, ioas, idev,
> - 0, immediate_attach);
> + hwpt_paging = iommufd_hwpt_paging_alloc(idev->ictx, ioas, idev, 0,
> + immediate_attach, NULL);
> if (IS_ERR(hwpt_paging)) {
> destroy_hwpt = ERR_CAST(hwpt_paging);
> goto out_unlock;
> diff --git a/drivers/iommu/iommufd/hw_pagetable.c b/drivers/iommu/iommufd/hw_pagetable.c
> index cfd85df693d7b2..324a6d73f032ee 100644
> --- a/drivers/iommu/iommufd/hw_pagetable.c
> +++ b/drivers/iommu/iommufd/hw_pagetable.c
> @@ -96,7 +96,8 @@ iommufd_hwpt_paging_enforce_cc(struct iommufd_hwpt_paging *hwpt_paging)
> struct iommufd_hwpt_paging *
> iommufd_hwpt_paging_alloc(struct iommufd_ctx *ictx, struct iommufd_ioas *ioas,
> struct iommufd_device *idev, u32 flags,
> - bool immediate_attach)
> + bool immediate_attach,
> + const struct iommu_user_data *user_data)
> {
> const u32 valid_flags = IOMMU_HWPT_ALLOC_NEST_PARENT |
> IOMMU_HWPT_ALLOC_DIRTY_TRACKING;
> @@ -107,7 +108,7 @@ iommufd_hwpt_paging_alloc(struct iommufd_ctx *ictx, struct iommufd_ioas *ioas,
>
> lockdep_assert_held(&ioas->mutex);
>
> - if (flags && !ops->domain_alloc_user)
> + if ((flags || user_data) && !ops->domain_alloc_user)
> return ERR_PTR(-EOPNOTSUPP);
> if (flags & ~valid_flags)
> return ERR_PTR(-EOPNOTSUPP);
> @@ -127,7 +128,7 @@ iommufd_hwpt_paging_alloc(struct iommufd_ctx *ictx, struct iommufd_ioas *ioas,
>
> if (ops->domain_alloc_user) {
> hwpt->domain = ops->domain_alloc_user(idev->dev, flags,
> - NULL, NULL);
> + NULL, user_data);
> if (IS_ERR(hwpt->domain)) {
> rc = PTR_ERR(hwpt->domain);
> hwpt->domain = NULL;
> @@ -210,8 +211,7 @@ iommufd_hwpt_nested_alloc(struct iommufd_ctx *ictx,
> struct iommufd_hw_pagetable *hwpt;
> int rc;
>
> - if (flags || user_data->type == IOMMU_HWPT_DATA_NONE ||
> - !ops->domain_alloc_user)
> + if (flags || !user_data->len || !ops->domain_alloc_user)
> return ERR_PTR(-EOPNOTSUPP);
> if (parent->auto_domain || !parent->nest_parent)
> return ERR_PTR(-EINVAL);
> @@ -249,6 +249,11 @@ iommufd_hwpt_nested_alloc(struct iommufd_ctx *ictx,
> int iommufd_hwpt_alloc(struct iommufd_ucmd *ucmd)
> {
> struct iommu_hwpt_alloc *cmd = ucmd->cmd;
> + const struct iommu_user_data user_data = {
> + .type = cmd->data_type,
> + .uptr = u64_to_user_ptr(cmd->data_uptr),
> + .len = cmd->data_len,
> + };
> struct iommufd_hw_pagetable *hwpt;
> struct iommufd_ioas *ioas = NULL;
> struct iommufd_object *pt_obj;
> @@ -273,25 +278,17 @@ int iommufd_hwpt_alloc(struct iommufd_ucmd *ucmd)
> if (pt_obj->type == IOMMUFD_OBJ_IOAS) {
> struct iommufd_hwpt_paging *hwpt_paging;
>
> - if (cmd->data_type != IOMMU_HWPT_DATA_NONE) {
> - rc = -EINVAL;
> - goto out_put_pt;
> - }
This check can be covered by the iommu driver when it validates the user data
pointer, hence the above if block is removed.
> ioas = container_of(pt_obj, struct iommufd_ioas, obj);
> mutex_lock(&ioas->mutex);
> - hwpt_paging = iommufd_hwpt_paging_alloc(ucmd->ictx, ioas, idev,
> - cmd->flags, false);
> + hwpt_paging = iommufd_hwpt_paging_alloc(
> + ucmd->ictx, ioas, idev, cmd->flags, false,
> + user_data.len ? &user_data : NULL);
> if (IS_ERR(hwpt_paging)) {
> rc = PTR_ERR(hwpt_paging);
> goto out_unlock;
> }
> hwpt = &hwpt_paging->common;
> } else if (pt_obj->type == IOMMUFD_OBJ_HWPT_PAGING) {
> - const struct iommu_user_data user_data = {
> - .type = cmd->data_type,
> - .uptr = u64_to_user_ptr(cmd->data_uptr),
> - .len = cmd->data_len,
> - };
> struct iommufd_hwpt_nested *hwpt_nested;
> struct iommufd_hwpt_paging *parent;
>
> diff --git a/drivers/iommu/iommufd/iommufd_private.h b/drivers/iommu/iommufd/iommufd_private.h
> index 6fdad56893af4d..24e5a36fc875e0 100644
> --- a/drivers/iommu/iommufd/iommufd_private.h
> +++ b/drivers/iommu/iommufd/iommufd_private.h
> @@ -290,7 +290,8 @@ int iommufd_hwpt_get_dirty_bitmap(struct iommufd_ucmd *ucmd);
> struct iommufd_hwpt_paging *
> iommufd_hwpt_paging_alloc(struct iommufd_ctx *ictx, struct iommufd_ioas *ioas,
> struct iommufd_device *idev, u32 flags,
> - bool immediate_attach);
> + bool immediate_attach,
> + const struct iommu_user_data *user_data);
> int iommufd_hw_pagetable_attach(struct iommufd_hw_pagetable *hwpt,
> struct iommufd_device *idev);
> struct iommufd_hw_pagetable *
--
Regards,
Yi Liu