Linux IOMMU Development
From: Baolu Lu <baolu.lu@linux.intel.com>
To: Jason Gunthorpe <jgg@nvidia.com>
Cc: baolu.lu@linux.intel.com, iommu@lists.linux.dev,
	linux-kselftest@vger.kernel.org,
	Kevin Tian <kevin.tian@intel.com>,
	kvm@vger.kernel.org, Lixiao Yang <lixiao.yang@intel.com>,
	Matthew Rosato <mjrosato@linux.ibm.com>,
	Nicolin Chen <nicolinc@nvidia.com>, Yi Liu <yi.l.liu@intel.com>
Subject: Re: [PATCH v7 03/19] iommufd: Replace the hwpt->devices list with iommufd_group
Date: Wed, 17 May 2023 12:15:01 +0800
Message-ID: <852e85b3-9fd2-bfc2-6080-82cea7ab6abd@linux.intel.com>
In-Reply-To: <ZGN2yvhpIvrvu74r@nvidia.com>

On 5/16/23 8:27 PM, Jason Gunthorpe wrote:
> On Tue, May 16, 2023 at 11:00:16AM +0800, Baolu Lu wrote:
>> On 5/15/23 10:00 PM, Jason Gunthorpe wrote:
>>> The devices list was used as a simple way to avoid having per-group
>>> information. Now that this seems to be unavoidable, just commit to
>>> per-group information fully and remove the devices list from the HWPT.
>>>
>>> The iommufd_group stores the currently assigned HWPT for the entire group
>>> and we can manage the per-device attach/detach with a list in the
>>> iommufd_group.
>>
>> I am preparing the patches to route I/O page faults to user space
>> through iommufd. The iommufd page fault handler knows the hwpt and the
>> device pointer, but it needs to convert the device pointer into its
>> iommufd object id and pass the id to user space.
>>
>> It's fine to remove hwpt->devices here, but perhaps I will need to
>> add a context pointer to the ioas later:
>>
>> struct iommufd_ioas {
>>          struct io_pagetable iopt;
>>          struct mutex mutex;
>>          struct list_head hwpt_list;
>> +       struct iommufd_ctx *ictx;
>>   };
>>
>> and use the helper below to look up the device id:
>>
>> +u32 iommufd_get_device_id(struct iommufd_ctx *ictx, struct device *dev)
>> +{
>> +       struct iommu_group *group = iommu_group_get(dev);
>> +       u32 dev_id = IOMMUFD_INVALID_OBJ_ID;
>> +       struct iommufd_group *igroup;
>> +       struct iommufd_device *cur;
>> +       unsigned int id;
>> +
>> +       if (!group)
>> +               return IOMMUFD_INVALID_OBJ_ID;
>> +
>> +       id = iommu_group_id(group);
>> +       xa_lock(&ictx->groups);
>> +       igroup = xa_load(&ictx->groups, id);
>> +       if (!iommufd_group_try_get(igroup, group)) {
>> +               xa_unlock(&ictx->groups);
>> +               iommu_group_put(group);
>> +               return IOMMUFD_INVALID_OBJ_ID;
>> +       }
>> +       xa_unlock(&ictx->groups);
>> +
>> +       mutex_lock(&igroup->lock);
>> +       list_for_each_entry(cur, &igroup->device_list, group_item) {
>> +               if (cur->dev == dev) {
>> +                       dev_id = cur->obj.id;
>> +                       break;
>> +               }
>> +       }
> 
> I dislike how slow this is on something resembling a fastish path :\

Yes, agreed.
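
(For completeness, the quoted sketch above elides its cleanup tail;
it would end with something like the following, where
iommufd_put_group() is assumed to be the counterpart of
iommufd_group_try_get():)

+       mutex_unlock(&igroup->lock);
+       iommufd_put_group(igroup);
+       iommu_group_put(group);
+
+       return dev_id;
+}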

> Maybe we should stash something in the dev_iommu instead?
> 
> Or can the PRI stuff provide a cookie per-device?

We already have a per-device fault cookie:

/**
 * struct iommu_fault_param - per-device IOMMU fault data
 * @handler: Callback function to handle IOMMU faults at device level
 * @data: handler private data
 * @faults: holds the pending faults which need a response
 * @lock: protects the pending faults list
 */
struct iommu_fault_param {
	iommu_dev_fault_handler_t handler;
	void *data;
	struct list_head faults;
	struct mutex lock;
};

Perhaps we can add a @dev_id member here?
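
Something like the below (a rough sketch only; the @dev_id member and
the helper are hypothetical, and assume iommufd fills the field when
the device is attached):

+static u32 iommufd_fault_dev_id(struct device *dev)
+{
+       struct iommu_fault_param *fparam;
+
+       /* Fast path: read the id stashed in the per-device fault data. */
+       if (!dev->iommu || !dev->iommu->fault_param)
+               return IOMMUFD_INVALID_OBJ_ID;
+
+       fparam = dev->iommu->fault_param;
+       return fparam->dev_id;
+}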

> 
> But it will work like this
> 
>>         dev_id = iommufd_get_device_id(hwpt->ioas->ictx, dev);
> 
> Where did the hwpt come from?

It is installed when setting up the iopf handler for the hwpt:

+	iommu_domain_set_iopf_handler(hwpt->domain,
+                                     iommufd_hw_pagetable_iopf_handler,
+                                     hwpt);
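
The handler can then recover the iommufd context from that cookie. A
rough sketch (the handler signature is hypothetical; this is still
work in progress):

+static int iommufd_hw_pagetable_iopf_handler(struct iommu_fault *fault,
+                                             struct device *dev, void *data)
+{
+       /* @data is the hwpt cookie installed above. */
+       struct iommufd_hw_pagetable *hwpt = data;
+       u32 dev_id = iommufd_get_device_id(hwpt->ioas->ictx, dev);
+
+       if (dev_id == IOMMUFD_INVALID_OBJ_ID)
+               return -ENODEV;
+
+       /* ... package @fault and @dev_id and deliver them to user space ... */
+       return 0;
+}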

Best regards,
baolu

Thread overview: 37+ messages
2023-05-15 14:00 [PATCH v7 00/19] Add iommufd physical device operations for replace and alloc hwpt Jason Gunthorpe
2023-05-15 14:00 ` [PATCH v7 01/19] iommufd: Move isolated msi enforcement to iommufd_device_bind() Jason Gunthorpe
2023-05-16  4:07   ` Baolu Lu
2023-05-15 14:00 ` [PATCH v7 02/19] iommufd: Add iommufd_group Jason Gunthorpe
2023-05-16  2:43   ` Baolu Lu
2023-05-16 12:54     ` Jason Gunthorpe
2023-05-17  4:18       ` Baolu Lu
2023-05-15 14:00 ` [PATCH v7 03/19] iommufd: Replace the hwpt->devices list with iommufd_group Jason Gunthorpe
2023-05-16  3:00   ` Baolu Lu
2023-05-16 12:27     ` Jason Gunthorpe
2023-05-17  4:15       ` Baolu Lu [this message]
2023-05-17  6:33         ` Tian, Kevin
2023-05-17 12:43           ` Jason Gunthorpe
2023-05-18  7:05             ` Baolu Lu
2023-05-18 12:02               ` Jason Gunthorpe
2023-05-19  2:03                 ` Baolu Lu
2023-05-19  7:51                   ` Tian, Kevin
2023-05-19 11:42                   ` Jason Gunthorpe
2023-05-15 14:00 ` [PATCH v7 04/19] iommu: Export iommu_get_resv_regions() Jason Gunthorpe
2023-05-15 14:00 ` [PATCH v7 05/19] iommufd: Keep track of each device's reserved regions instead of groups Jason Gunthorpe
2023-05-15 14:00 ` [PATCH v7 06/19] iommufd: Use the iommufd_group to avoid duplicate MSI setup Jason Gunthorpe
2023-05-15 14:00 ` [PATCH v7 07/19] iommufd: Make sw_msi_start a group global Jason Gunthorpe
2023-05-15 14:00 ` [PATCH v7 08/19] iommufd: Move putting a hwpt to a helper function Jason Gunthorpe
2023-05-15 14:00 ` [PATCH v7 09/19] iommufd: Add enforced_cache_coherency to iommufd_hw_pagetable_alloc() Jason Gunthorpe
2023-05-15 14:00 ` [PATCH v7 10/19] iommufd: Allow a hwpt to be aborted after allocation Jason Gunthorpe
2023-05-15 14:00 ` [PATCH v7 11/19] iommufd: Fix locking around hwpt allocation Jason Gunthorpe
2023-05-15 14:00 ` [PATCH v7 12/19] iommufd: Reorganize iommufd_device_attach into iommufd_device_change_pt Jason Gunthorpe
2023-05-15 14:00 ` [PATCH v7 13/19] iommu: Introduce a new iommu_group_replace_domain() API Jason Gunthorpe
2023-05-15 14:00 ` [PATCH v7 14/19] iommufd: Add iommufd_device_replace() Jason Gunthorpe
2023-07-07  8:00   ` Liu, Yi L
2023-07-10 16:46     ` Jason Gunthorpe
2023-05-15 14:00 ` [PATCH v7 15/19] iommufd: Make destroy_rwsem use a lock class per object type Jason Gunthorpe
2023-05-15 14:00 ` [PATCH v7 16/19] iommufd/selftest: Test iommufd_device_replace() Jason Gunthorpe
2023-05-15 14:00 ` [PATCH v7 17/19] iommufd: Add IOMMU_HWPT_ALLOC Jason Gunthorpe
2023-05-15 14:00 ` [PATCH v7 18/19] iommufd/selftest: Return the real idev id from selftest mock_domain Jason Gunthorpe
2023-05-15 14:00 ` [PATCH v7 19/19] iommufd/selftest: Add a selftest for IOMMU_HWPT_ALLOC Jason Gunthorpe
2023-05-17 23:57 ` [PATCH v7 00/19] Add iommufd physical device operations for replace and alloc hwpt Nicolin Chen
